The Future of AI is Human

Why Sigma.AI founder and CEO Daniel Tapias pins his business strategy on ethics and purpose

For Dr. Daniel Tapias, founder and CEO of Sigma.AI, nothing is more exciting than a good challenge. We sat down with him to find out what fascinates him most about AI, and what drove his decision to shift to entrepreneurship after 19 years of academic research in the field.

Mechanics are simple — interpretation is the challenge

The jump from scientist to founder was not the first major change for Tapias. Early in his academic career, he shifted his specialization to digital signal processing, supplementing his original domain of electrical engineering with phonetics and linguistics to focus his research on speech and natural language.

“I started my studies in electrical engineering because I wanted to understand how electronic devices, how these complex systems worked,” says Tapias. “But once I understood how they worked, it wasn’t what I expected. I discovered that, for me, actually understanding and interpreting the signals is the big challenge. And I like challenges.”

This magnetic pull towards complex challenges drew Tapias to the field of digital signal processing. “Basically it’s about taking analog signals, like images or audio signals, and digitizing them to do things such as detecting, measuring, storing, or transmitting them,” says Tapias. “As humans, we’re perceiving information all of the time through our senses, and that’s how we interpret the world around us, based on all of these inputs. At some point, I realized technology could also go one step further. I discovered that we could not only detect and process signals digitally, but we could interpret them using AI. This is when we started to have really promising results.”
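As a rough sketch of the pipeline Tapias describes (the 440 Hz tone and 8 kHz sample rate below are illustrative choices, not details from the interview), digitizing a signal turns it into numbers, and a first step of interpretation is measuring something meaningful in those numbers:

```python
import numpy as np

# Digitize: sample one second of an "analog" 440 Hz tone at 8 kHz.
sample_rate = 8000                       # samples per second
t = np.arange(0, 1.0, 1 / sample_rate)   # discrete time points
signal = np.sin(2 * np.pi * 440 * t)     # the waveform, now just numbers

# Detect/measure: estimate the dominant frequency from the digital samples.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)
print(f"Dominant frequency: {freqs[np.argmax(spectrum)]:.0f} Hz")  # ~440
```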

Breaking ground in real-life AI applications

In the 1990s, as this realization and the beginnings of true machine learning applications intersected, Tapias started to see his research take flight in real-world projects. At the Research and Development Center of Telefónica, he led the speech recognition department while also working on his Ph.D. at the Universidad Politécnica de Madrid.

Delivering automated telephone service solutions to places like movie theaters and information hotlines, Tapias and his team went from teaching machines to understand numbers and simple commands (think, “say ‘one’ to book a ticket”) to understanding over 2000 words and approaching real conversations.

But in the 90s, this was no easy feat. “Computational power was much lower than it is today — same as with memory capacity,” says Tapias. “It was a challenge from the engineering point of view to make it work in real-time. Going from research to product was an important experience for me because it gave me an idea of how difficult it is to make things out in the world, outside of the laboratory.”

Digital signal fascination turns to user fascination

Apart from technical limitations, Tapias noticed new and unexpected hurdles from users, especially when their expectations of the technology diverged from what it could deliver. “We’ve all seen science fiction movies. We think that computers are going to be able to talk to us as if they were people. So the expectations are much higher than the reality. That’s why many of these types of systems fail.”

One user-based snag: people tended to speak more loudly when the automated assistant misunderstood them, rather than more clearly. “Loud speech has different properties and causes the system to work worse,” says Tapias. “It was an infinite loop where the results got worse and worse and worse. We had to analyze the behavior of users and see how to break the loop to be able to either start from scratch or provide other alternatives to overcome the problem. It was a really nice experience to learn how people use this technology and what they expect when they talk to an automatic system.”

Quality in, quality out

Tapias quickly realized that for systems to think like humans and interpret inputs the way a human does, he and his team needed to teach the systems better, which meant redefining quality in the data used to train AI.

In the ’90s and early 2000s, the industry’s understanding of data quality was focused on annotation accuracy — for example, whether the color white is labeled as “white” or as a different color. But Tapias saw early on that this didn’t reflect the complexity of the real world. “If we looked at one color and I asked you if it was dark or light, I’m pretty sure we’d disagree,” says Tapias. “For the really dark colors and for the really light colors, we’ll agree, but for the ones in the middle, we’ll probably disagree. Because we all have a slightly different understanding of what is dark and light. Each person has a different way of interpreting these colors and making a decision.”
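To make the dark-versus-light example concrete, here is a minimal sketch in Python; the brightness values and both annotators’ thresholds are invented for illustration. The two annotators agree at the extremes but split on a mid-range value, because each draws the line between dark and light in a slightly different place.

```python
# Hypothetical brightness values on a 0.0 (black) to 1.0 (white) scale.
samples = [0.05, 0.20, 0.45, 0.50, 0.55, 0.80, 0.95]

# Each annotator carries a slightly different internal threshold for "light".
def annotator_a(v):
    return "light" if v >= 0.48 else "dark"

def annotator_b(v):
    return "light" if v >= 0.53 else "dark"

for v in samples:
    a, b = annotator_a(v), annotator_b(v)
    marker = "" if a == b else "  <-- disagreement"
    print(f"brightness {v:.2f}: A={a:<5} B={b:<5}{marker}")
# The extremes match; only the mid-range value (0.50 here) diverges.
```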

Complex annotation requires detailed annotation guidelines that cover all possible cases and create a consistent basis for annotation, independent of variations in human perception. “If we want the data annotations to be consistent, we have to make sure all the annotations are carried out following the exact same annotation guidelines,” says Tapias. “But we have to be very careful when we design these guidelines since the way in which they’re explained can lead to biased annotations.”

Tapias also realized that there are two more components of data quality: data coverage and data balance. “We cannot expect an AI system to learn all colors if we do not provide the system with examples of all colors. This is what’s called data coverage. In the same way, we cannot expect an AI system to learn all colors equally if, for some colors, there are just a few examples. We need to provide a sufficiently large number of examples for all the colors. This is what’s called data balance.”
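As a hedged sketch of what checking these two properties might look like in practice (the labels and counts below are made up), coverage asks whether every expected class appears at all, while balance asks whether examples are spread evenly across classes:

```python
from collections import Counter

expected_labels = {"red", "green", "blue", "white", "black"}

# Hypothetical annotated dataset: one color label per example.
dataset = ["red"] * 500 + ["green"] * 480 + ["blue"] * 510 + ["white"] * 12

counts = Counter(dataset)

# Coverage: does every expected label appear at least once?
missing = expected_labels - set(counts)
print("Labels with no examples (coverage gap):", missing or "none")  # {'black'}

# Balance: is any label drastically under-represented?
smallest, largest = min(counts.values()), max(counts.values())
print(f"Largest/smallest class ratio: {largest / smallest:.1f}x")  # 42.5x
```

In this invented example, “black” fails coverage outright, and “white” is so under-represented that a system trained on the data would likely handle it poorly.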

Following his intuition that improving data quality would unlock new possibilities for AI applications, Tapias decided to focus his efforts on data preparation and annotation. “I had a view on quality that was slightly different from the current trend,” says Tapias. “I felt strongly enough about it that I thought it was worth giving it a shot.” He decided to found Sigma.AI to provide data services that trained AI with human nuance.

Sigma.AI sets out with a human-centric approach

In 2009, Tapias and his business partner Nuria Gómez took the leap and founded Sigma.AI with the purpose of providing higher quality, more human-centric AI training data, financing the company’s growth solely through profits. Their philosophy of human centricity proved itself again and again as the cornerstone of the company, from providing much-needed jobs in Spain after the financial crisis of 2008, to the close partnerships they built with their now long-term customers from Silicon Valley.

Even the name “Sigma” encapsulates the idea of people working better together. “Sigma is a Greek letter used in mathematics that means summation,” says Tapias. “For us, it’s the summation of wills, of many people aligned towards a goal. A team can create more impact together, when they’re coordinated and motivated, than the same group of people would in isolation.”

A new definition of data quality

This human centricity was also central to Sigma.AI’s new definition of quality. In addition to accuracy, quality now spanned three new vectors: consistency, coverage, and balance.

Consistency, says Tapias, is achieved by defining clear and objective criteria that annotators can follow while they label data — and making sure that the annotators themselves have the necessary training, mindset and working environment to follow them to a T even after the thousandth data point.
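The interview doesn’t name a metric, but one common way teams quantify this kind of consistency is an inter-annotator agreement statistic such as Cohen’s kappa. A minimal sketch, with invented labels for two annotators judging the same ten items:

```python
from collections import Counter

# Hypothetical labels from two annotators on the same ten items.
ann_a = ["dark", "dark", "light", "light", "dark",
         "light", "dark", "light", "dark", "light"]
ann_b = ["dark", "dark", "light", "dark", "dark",
         "light", "dark", "light", "light", "light"]

n = len(ann_a)
observed = sum(a == b for a, b in zip(ann_a, ann_b)) / n  # raw agreement

# Chance agreement: probability both pick the same label independently.
freq_a, freq_b = Counter(ann_a), Counter(ann_b)
expected = sum((freq_a[l] / n) * (freq_b[l] / n)
               for l in freq_a.keys() | freq_b.keys())

# Kappa rescales raw agreement by how much of it chance alone would produce.
kappa = (observed - expected) / (1 - expected)
print(f"Raw agreement: {observed:.2f}, Cohen's kappa: {kappa:.2f}")  # 0.80, 0.60
```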

Understanding how to set up a human-centric work environment came with experience and empathy. “When you work with people, you have to take care and understand how we are as humans, and try to organize things in a way that adapts to the way we work, not the other way around,” says Tapias. “For example, we know that annotators have to rest for some minutes every hour, because annotating takes incredible focus, and after an hour you start to get tired. So by taking breaks every hour, we can decrease the error rate — but just as importantly, people will be happier.” Other measures included keeping annotators on board over the long term, and staffing each project with specialist annotators who had the specific skills and knowledge needed for the job.

Coverage and balance proved equally crucial and gave Sigma another lever to fight against bias creeping into AI applications. For example, when you train a machine to recognize speech, coverage means including all dialects, accents, intonations, and pitches possible for a given language. Balance means representing each of these facets equally, as they’d appear in a population. The bottom line: diversity and inclusion are a prerequisite for training data quality, not only in the data itself but also in the annotators who apply their judgment and lived experience to labeling that data and giving it context. “AI doesn’t discriminate, but the way you train it can make it discriminatory,” says Tapias. “So we have to be extremely careful in how we train the system and how we assess its performance.”

Ethical AI and data quality go hand-in-hand

The quality measures that Sigma.AI decided to adhere to weren’t just a business tactic — they were also about taking responsibility as the conscientious creators of new technology. “Ethical AI is quality AI,” says Tapias. “Because quality AI is AI that works for everyone. If we want to build technology that works for and with people, we have to make sure it includes all races, genders, ages and abilities that you have in the population. Every person, every kind of person needs to be appropriately represented in the data.” From data collection to annotation to quality testing, Tapias believes it’s essential to check for bias every step of the way.

According to Tapias, that responsibility extends from the data with which AI is trained, to the purpose that AI serves. “It’s difficult to define exactly what makes an AI ethical, but for me, an ethical AI is an AI that’s built and used for good,” says Tapias. “Protecting people’s rights, protecting them from harm or illness, improving their quality of life. These are AIs built for a human purpose.”

The future of AI has a human purpose

Keeping with his philosophy, Tapias is most excited about the potential for future AI applications that improve human life. Noting a recent experiment Sigma undertook to detect signs of breast cancer before early-stage onset, he is especially enthusiastic about the possibilities for medical advancement. “I’m convinced that there are so many things we can do here,” says Tapias. “We may even be able to predict diseases before they happen.”

Because AI can perceive more kinds of signals, and with more nuance, than humans, the possibilities go beyond what our senses provide. “In a digital image, each pixel is a set of numbers,” says Tapias. “Each number represents color intensity. But at a certain level of nuance, our human eyes can’t differentiate between an intensity of 7 and an 8. Machines can. And we can only process a finite amount of data. Machines don’t get tired.” Machines also have the capability to interpret data our senses aren’t evolved to perceive at all, like the infrared spectrum.
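A tiny sketch of the pixel point, with values chosen purely for illustration: two grayscale patches that differ by a single intensity step are indistinguishable to the eye, but a program finds the difference immediately.

```python
import numpy as np

# Two 2x2 grayscale patches on the usual 0-255 intensity scale.
patch_a = np.array([[7, 7], [7, 7]], dtype=np.uint8)
patch_b = np.array([[7, 8], [7, 7]], dtype=np.uint8)

# Side by side, a human eye can't tell a 7 from an 8 out of 256 levels,
# but an exact comparison pinpoints the differing pixel.
diff = patch_a != patch_b
print("Differing pixel coordinates:", np.argwhere(diff))  # [[0 1]]
print("Any difference at all?", bool(diff.any()))         # True
```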

Predictive maintenance is another kind of beneficial AI that Tapias sees as having huge potential. “If we’re able to predict when a machine is going to fail before it fails, we can avoid plane crashes, car accidents, or stoppages in production lines,” says Tapias. “We’ll be able to see that a component is going to fail in, say, two weeks, and be able to replace the part before it happens, instead of being reactive.”
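As a loose illustration of the idea (the readings, the threshold, and the straight-line wear assumption below are all invented), one simple form of predictive maintenance fits a trend to a sensor measurement and extrapolates to when it will cross a failure limit:

```python
import numpy as np

# Hypothetical daily vibration readings from one machine component (mm/s).
days = np.arange(10)
vibration = np.array([2.0, 2.1, 2.3, 2.2, 2.5, 2.6, 2.8, 2.9, 3.1, 3.2])
failure_threshold = 4.5  # assumed level at which the part must be replaced

# Fit a straight-line trend and extrapolate to the threshold crossing.
slope, intercept = np.polyfit(days, vibration, deg=1)
crossing_day = (failure_threshold - intercept) / slope

print(f"Wear trend: +{slope:.2f} mm/s per day")
print(f"Estimated days of margin left: {crossing_day - days[-1]:.0f}")  # ~10
```

With roughly ten days of estimated margin, the part can be scheduled for replacement before it fails, rather than after.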

Even after more than 30 years in the field of AI, it seems Tapias will never lose his original spark of scientific curiosity, or his drive to create systems where AI and people can collaborate towards something bigger and better. “What we’ll be able to do is much more than we can even imagine.”

Want to learn more? Contact us ->

Sigma offers tailor-made solutions for data teams annotating large volumes of training data.