HUMAN IN THE LOOP

There’s no substitute for human judgment

We embed human review at critical moments in the data preparation process to reduce bias and improve data quality.

Curated teams

Integrating human judgment into the data preparation process is key to building AI that serves people better.

Project managers

We select your dedicated project manager based on language and region, as well as their subject matter expertise, domain knowledge, and the type of data involved in your project. They lead the annotator team and provide continuous quality feedback throughout the annotation process.

Annotation workforce

Our vetted workforce of over 25,000 expert annotators covers a remarkable breadth of subject matter expertise and more than 500 languages and dialects. We never crowdsource; instead, we rely on long-term relationships and intensive training to maximize annotation quality.

25k+

Vetted and trained annotators and linguists

100+

Nationalities represented across five continents

500+

Languages and dialects spoken by our annotators

Guideline definition

Our project managers draw on their experience working with annotator teams to define labeling guidelines that are clearly explained and easy for annotators to follow. These guidelines provide a solid basis for high-quality annotation. Need deeper support creating guidelines that suit your model? See our strategy offering.

Continuous feedback and quality assessment

Your project manager continuously monitors annotation quality against the guidelines, gives annotators feedback, and refines the guidelines and procedures while the project is running. This makes it possible to identify snags earlier and make improvements faster.

Model evaluation

As a final human-in-the-loop check, we review the model's output and evaluate whether the training data might be contributing to errors in the results. Depending on the findings, we can adapt the guidelines or even revise parts of the annotation.

Let’s work together to build smarter AI

Whether you need help sourcing and annotating training data at scale, or a full-fledged annotation strategy to serve your AI training needs, we can help. Get in touch for more information or to set up a proof-of-concept.
