The Hidden Humans in the Loop Behind Quality GenAI

Generative AI (GenAI) is unlocking a new frontier of possibilities and transforming how we build AI. But even with large language models (LLMs) capable of labeling datasets at remarkable speed, experienced human annotators and effective annotation guidelines remain crucial for ensuring data quality.

However, the role of humans in GenAI goes beyond data annotation: it also involves curating, validating, and generating new data.

Humans bring a range of strengths to the data annotation process, including contextual understanding, cultural awareness, creativity, and ethical judgment that machines can’t replicate.

What are the challenges involved in generating high-quality training data for LLMs and GenAI? This article goes behind the scenes of cutting-edge AI projects and delves into the unique skill set of project managers leading specialized data annotation teams at Sigma AI.

Let’s dive in!

Humans in the loop: the key to data quality

Traditional AI projects start with data annotation. This involves labeling large amounts of data (text, audio, video, or images) with relevant tags to help computers learn and make accurate predictions. The quality of this data has a decisive impact on the model’s performance and on its real-world applications.

Getting high-quality annotated data is one of the most difficult and time-consuming aspects of the AI process. But generative AI brings new challenges: instead of simply tagging and categorizing existing information, data professionals may be involved in generating new content, crafting prompts for generative models, or curating training datasets with specific biases or styles in mind.

While AI-based systems can assist and accelerate the process, human input is essential to prepare, label, and validate the data. Relying on crowdsourced annotators may look cost-effective to some companies, but it can’t match the accuracy, consistency, and quality control you can achieve by working with a team of highly trained, in-house experts.

Each project requires finding the appropriate mix of human expertise, technology, and processes. 

Project managers at Sigma are responsible for translating clients’ ideas into clearly defined processes. This often includes creating specific data guidelines for each project, identifying annotators with the right skills and knowledge, and training them to perform advanced tasks. 

“As project managers, we need to develop a series of skills that are not typically found in a single profession. Most of us come from the fields of translation and linguistics, and our work requires us to continually gain more technical skills to work with complex software, define objectives, and optimize processes”, explains Kassiani Tsakalidou, Program Manager at Sigma AI.

Creating data annotation guidelines for GenAI projects

To ensure the highest-quality data possible, project managers need to establish quality parameters and define clear, consistent data guidelines. Far from being static, these guidelines must be assessed and refined as the project moves forward to improve labeling performance.

The process of curating and annotating data relies heavily on collaboration and communication, involving people from diverse professional backgrounds and cultures. Training annotators to ensure consistency and keep everyone on the same page is an essential step.

“Even before having access to the client’s data, we design a test that resembles the project to evaluate annotators. The idea is not to replicate the same environment, but to understand what kind of skills are necessary for annotators and ensure they can be prepared and know the tasks beforehand”, says Kassiani.

GenAI poses an extra challenge for project managers: turning something as subjective and complex as writing into a well-defined system.

“GenAI is a big challenge because it forces us to look for more specialized profiles and because writing, in the end, is highly subjective. We need to assign a value to it, create workflows, and establish quality standards to evaluate it”, explains Clara Abou Jaoude, a project manager at Sigma.

Let’s review a few examples of how project managers at Sigma leverage creativity and out-of-the-box thinking to find the best approach to data annotation challenges — and staff teams with the right annotators for each task.

Evaluating machine translation of foreign languages

Everyone’s familiar with automatic translators, but far fewer people know about the humans in charge of improving the quality of those translations, which are still far from 100% reliable.

Over the years, Sigma AI translators have worked on countless machine translation post-editing tasks. “The role of translators here is to compare translations automatically generated by AI, detect errors, and ensure the automatic translator applies unified criteria”, Clara explains. However, “translation is a subjective task: sometimes there is more than one possible way of saying the same thing, so this is a challenge. Sometimes it can be difficult to detect which is the best translation among several options”, she points out.

For an automatic translation project that involved Farsi and different African languages, project managers needed to find the right annotators for the team. 

But how do you evaluate the quality of human translations without knowing the target language? To solve the problem, they worked with the Research and Development (R&D) department to design a tool that compares multiple translations of the same phrase based on a similarity ratio. The translators whose work was most similar to their peers’ were selected for the project.
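The internal tool isn’t described in detail, but the underlying idea, scoring each candidate by how closely their translation agrees with the rest of the pool, can be sketched in a few lines. Everything below (the function names, the sample data, and the use of Python’s standard difflib) is an illustrative assumption rather than Sigma’s actual implementation.

```python
# Minimal sketch: rank candidate translators by how similar their translation
# of the same source phrase is to everyone else's (an illustrative assumption,
# not Sigma's internal tool).
from difflib import SequenceMatcher
from itertools import combinations
from statistics import mean


def similarity(a: str, b: str) -> float:
    """Character-level similarity ratio between two translations (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def consensus_scores(translations: dict[str, str]) -> dict[str, float]:
    """Average similarity of each translator's output against all the others."""
    scores: dict[str, list[float]] = {name: [] for name in translations}
    for (name_a, text_a), (name_b, text_b) in combinations(translations.items(), 2):
        s = similarity(text_a, text_b)
        scores[name_a].append(s)
        scores[name_b].append(s)
    return {name: mean(values) for name, values in scores.items()}


# Hypothetical example: three candidates translating the same source phrase.
candidates = {
    "translator_a": "The package will arrive within three business days.",
    "translator_b": "The package will arrive in three business days.",
    "translator_c": "Delivery happens at some point next week, probably.",
}

for name, score in sorted(consensus_scores(candidates).items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```

In this toy run the first two candidates score highest because their translations agree closely, which mirrors the selection criterion described above.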

Measuring creativity for content writing annotation projects 

For an opinion-mining project, a team of linguists needed to create comments on several products within different domains, like Electronics and Computers, Food, Fashion, and Home. These opinions were later used for sentiment analysis and rating. 

For this task, Sigma’s project managers developed extensive data guidelines, providing details on the tone, length, and variety required for the opinions.

But first, they needed to find the best-equipped candidates for the team. 

Project managers often know their teams very well, so they can quickly identify people with the required skills — even if they weren’t initially hired for that. “For example, if someone has published a book or writes a blog, they might be a valuable candidate to participate in a project that requires writing”, says Kassiani.

In this case, “the project required creativity, language knowledge, rewriting skills, and the ability to read something, understand it, and communicate it without errors”, explains Clara. To assess each of these skills, they worked with the R&D team to develop a four-step test.

The trickiest part was coming up with objective metrics to evaluate something as subjective as creativity. “We considered that a creative person should have a broader vocabulary than someone who only describes what they see”, says Clara.

With that hypothesis in mind, they searched for the 5,000 most common words in Spanish and asked candidates to write a paragraph. The most successful candidates were those who used less common words to express their ideas. “The further you move away from those 5,000 words, the more creativity you have, because you have a richer vocabulary and linguistic knowledge”, Clara concludes.
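That rule of thumb translates directly into a simple metric: the share of a candidate’s words that fall outside the common-word list. The sketch below is a hedged illustration of that idea; the file name, tokenization, and scoring function are assumptions, not the actual test Sigma used.

```python
# Illustrative sketch of the vocabulary-based creativity proxy: score a
# candidate's paragraph by the fraction of words that are NOT among the
# most common words in the language (e.g. the top 5,000 in Spanish).
import re


def load_common_words(path: str) -> set[str]:
    """Load a frequency list with one word per line (hypothetical file format)."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}


def rarity_score(paragraph: str, common_words: set[str]) -> float:
    """Fraction of tokens not found in the common-word list (higher = richer vocabulary)."""
    tokens = re.findall(r"\w+", paragraph.lower())
    if not tokens:
        return 0.0
    rare = [token for token in tokens if token not in common_words]
    return len(rare) / len(tokens)


# Hypothetical usage, assuming a file "es_top_5000.txt" containing the frequency list:
# common = load_common_words("es_top_5000.txt")
# print(f"rarity: {rarity_score(candidate_paragraph, common):.1%}")
```

A score like this is only a proxy, which is presumably why the evaluation also included the story-writing exercise described next.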

The test also involved creating a short story (with a beginning, middle, and end) containing three randomly generated words, as a way to assess the candidate’s imagination. 

Finding annotators with specialized domain knowledge

Rewriting and finding connections between texts are common tasks for annotation teams. However, topics that require specialized domain knowledge can pose an extra challenge. 

For a project in the STEM field, project managers needed to recruit and train a team of biology experts, who were tasked with assessing the connection between a given search query and a brief excerpt from a scientific paper.

Finding the experts was fairly simple, as Sigma AI has an extensive database with all the people who have worked for the company and their skill profiles. 

The most complex part was establishing quality standards for the project when the project managers themselves were not well-versed in the subject. “As with any project that requires specific knowledge, the challenge is always to identify someone on the team who stands out and ask them to be your main support during the quality assurance process”, says Kassiani.

This final stage of the process involves working closely with the reviewer and, in this case, requesting detailed feedback on each annotator’s work to improve quality.

Partnering with Sigma AI for your data annotation challenges

The role of human experts in generating high-quality data is vital — and it will gain even more significance in the next few years, as LLMs and GenAI projects add more complexity to the annotation process. 

“The type of paralinguistic information that systems will be able to understand or generate, such as tone, emotion, and style, will change what we ask annotators to label to help machines understand and enrich the experience”, says Sigma’s Executive Senior Advisor, Dr. Jean-Claude Junqua.

Partnering with a company that puts human expertise at the center and supports better AI through effective annotation guidelines and carefully designed processes is the best way to stay ahead of the challenges to come.

With a growing workforce of more than 25,000 annotators covering over 500 languages and dialects, Sigma AI has been solving complex data annotation projects for the world’s leading tech companies for the past 15 years.

Contact us to find out how we can help you tackle even the most ambitious AI projects! 
