The Challenges of Ethical AI

Business leaders are often hyper-focused on the exciting, action-packed numbers: clicks, conversions, sales, revenue, and all of the other KPIs and metrics that tangibly relate to income. So, it makes sense that developers and engineers are often pushed towards baking these KPIs into digitally transformative AI solutions. But there’s a problem. AI isn’t human. It’s data-driven, and it often operates without the wealth of context humans can ingest and analyze. Most AI goes through a “learning phase,” where it intakes data and digests patterns. But that learning period can’t cover every possible situation, and it can lead to “artificial stupidity” — something that has the very real potential to create regulatory headaches (see: Goldman Sachs Apple Card scandal), destroy brand value (see: IBM’s Weather Channel app), cause significant employee attrition (see: Google’s AI ethics backlash) and generate boycotts and social movements (see: Cambridge Analytica scandal) that degrade profits and impede growth.

Your business needs to grapple with the ethical decisions surrounding AI. And it’s not easy. How do you create balance? Do you build AI to protect customers, make ethical decisions, and prevent social frictions? Or do you create AI to create tangible value that delivers to shareholders? Better yet, do you have to choose? You can have your cake. But you’re going to need to put in some real effort if you want to eat it.

What Does it Mean to Have Ethical AI?

Ethical AI can be defined as AI that incorporates ethical guidelines, such as promoting individual rights, preventing discrimination and manipulation, reducing bias, and protecting privacy, into its core design (and ongoing maintenance). In other words, ethical AI is AI that’s built and maintained (via policies, processes, and teams) to adhere to the human and brand values your company espouses. It’s important to separate the term “ethical AI” from AI built merely to satisfy laws and regulatory guidelines. The goal of ethical AI is to deliver value far beyond regulatory compliance. In fact, ethical AI is built around human values — not regulatory ones. So, it’s much denser, a little trickier to deliver, and filled with ambiguity that can be challenging to tackle.

As an example, internal Facebook research was recently leaked to the Wall Street Journal, and it detailed how surveys conducted internally showed that Facebook’s AI was toxic to the mental health of teenagers on the platform. Legally, Facebook’s AI is compliant with regulatory guidelines and international laws. But damaging the mental health of teenagers certainly can’t be considered “ethical.” So, there is a gap between legal and ethical AI, though it can be challenging to identify at times.

There are plenty of examples (see above) of unethical AI degrading company value and causing financial and social havoc. But what about the carrot? As Bart Willemsen — VP of research at Gartner — puts it: “Even where regulations do not yet exist, customers are actively choosing to engage with organizations that respect their privacy.” Ethical AI is capable of:

  • Attracting top talent: As Google recently discovered, top talent is willing to bolt if they feel technology is being used in destructive ways. A recent BCG survey also suggested that one in six AI workers have quit their jobs to avoid developing potentially harmful products. In a world where companies are struggling to attract and retain tech talent, developing ethical AI can be a strong barrier against attrition.
  • Improving your bottom line: People want to interact, engage, and purchase from companies that are honest, trustworthy, and engage in ethical business practices. And these companies tend to outperform their competitors across financial metrics.
  • Reducing your risk vectors: Ethical AI reduces legal and social risk vectors that can cause harm to your brand and bottom line.

The Many Ethical Dilemmas of AI

Nearly every industry on the planet is digging around in the AI toolbox, looking to revolutionize the way they do business. And most of them will. According to McKinsey, AI will generate $9.5 trillion to $15.4 trillion annually across industries. But, as companies chase their so-called “white whale” of AI business value, it’s important to take a step back. AI does pose ethical dilemmas. The sooner you grapple with these dilemmas, the sooner you can take full advantage of the chest of AI gold.

Let’s quickly look at a few of the most common ethical dilemmas surrounding AI. But it’s important to note that there are many more — some of which only exist in specific industries.

  • Racial bias: What happens when a bank lending AI model indiscriminately rejects people based on their race? Or, what if an HR AI system creates a stale and stereotypical work environment due to inherent bias? Even worse, what if an AI provides discriminatory medical care based on race or ethnicity? These things have all happened.
  • Social bias: This AI ethics issue is actually relatively common. What if an AI takes your social media connections into account during a medical, lending, or application process? Suddenly, your AI is discriminating against people based on their social standing.
  • The trolley problem: A classic ethical conundrum that asks whether the moral value of a decision is determined by its outcome. In the problem, a runaway trolley is heading towards five people tied to a railroad track. You have a choice: divert the trolley to a second track, killing the single person tied there, or take no action and let the five people die. It’s a challenging question, and it has produced plenty of academic papers and arguments over the decades. So… how does AI solve that problem? Let’s take this to the real world. What choice should a self-driving car make if a pedestrian steps in front of your car? Should it swerve to avoid the pedestrian but cause a potentially harmful accident between you and the car next to you? Or should it hit the pedestrian? The answer isn’t simple. And to progress in the area of ethical AI, it may be necessary for science and engineering students to receive training in ethics to better grasp the social implications of the technologies they will very likely be developing.
  • Privacy: We’ve all seen this one in play. AI needs to use, consume, and store data with customer privacy top of mind.
  • Manipulation: AI impacts millions, if not billions, of lives. If AI can be manipulated, it has the very real potential to cause physical, political, and societal harm.
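
Unlike the trolley problem, the bias dilemmas above can at least be measured. One widely used screen is the “four-fifths rule” for disparate impact, which flags cases where a protected group’s favorable-outcome rate falls below 80% of the reference group’s. Here is a minimal sketch in Python; the function name and toy lending data are illustrative, not taken from any specific library or case:

```python
def disparate_impact_ratio(decisions, groups, protected, reference):
    """Ratio of favorable-outcome rates: protected group vs. reference group.

    A value below ~0.8 (the "four-fifths rule" used in US employment
    guidance) is a common red flag for disparate impact.
    """
    def approval_rate(group):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(outcomes) / len(outcomes)

    return approval_rate(protected) / approval_rate(reference)

# Toy lending example: 1 = approved, 0 = rejected
decisions = [1, 1, 0, 1, 0, 0, 1, 1]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

ratio = disparate_impact_ratio(decisions, groups, protected="b", reference="a")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50 / 0.75 -> 0.67, a red flag
```

A real audit would use far larger samples, significance testing, and multiple fairness metrics, but even a crude check like this can surface problems before a model ships.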

These dilemmas matter. And it’s important that businesses consider them as equals to bottom-line KPIs during the development process. But how does that work exactly?

How to Measure Outcomes Associated with Ethics

Let’s cut to the chase: how do you actually measure “ethics”? At the end of the day, businesses will use AI. If they don’t, they will get left behind and swallowed by their competitors. So, we need to figure out a way to measure the outcomes associated with ethics and pair those measures with the AI development process. For the purposes of this post, we want to list processes to bake into the AI development cycle. These are ethics-centric processes that help develop more ethical AI solutions. However, there are several existing “frameworks” (see: the United States Intelligence Community and the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems) you can utilize if you require more concrete, certification-centric guidelines.

When developing AI, you should consider implementing the following outcome-associated controls:

  • Use enterprise-wide definitions of values to guide AI creation.
  • Build feedback loops into the development and post-development lifecycle to help uncover ethical dilemmas and guide ethical decision-making.
  • Create data quality requirements that put users front-and-center — not just CCPA or GDPR.
  • Go on the offensive; build AI around ethics instead of waiting for ethical problems to present themselves.
  • Describe your “ethical nightmares” and create metrics to circumvent those worst outcomes.
  • Identify ongoing ethical risk vectors to continuously improve your AI.
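
One concrete way to wire the “ethical nightmare” metrics above into a feedback loop is a release gate: compare each measured metric against a hard threshold and block deployment when any is breached. The sketch below assumes this approach; every metric name, value, and threshold is invented for illustration:

```python
# Hypothetical per-release metrics gathered during model evaluation.
release_metrics = {
    "disparate_impact_ratio": 0.85,   # protected vs. reference approval rates
    "false_positive_gap": 0.03,       # absolute gap between group error rates
    "pii_leak_rate": 0.0,             # fraction of outputs exposing user data
}

# Limits derived from the team's documented "ethical nightmares".
ethical_thresholds = {
    "disparate_impact_ratio": ("min", 0.80),  # four-fifths rule
    "false_positive_gap": ("max", 0.05),
    "pii_leak_rate": ("max", 0.0),
}

def release_gate(metrics, thresholds):
    """Return a list of violations; an empty list means the release may ship."""
    violations = []
    for name, (kind, limit) in thresholds.items():
        value = metrics[name]
        if (kind == "min" and value < limit) or (kind == "max" and value > limit):
            violations.append(f"{name}={value} breaches {kind} limit {limit}")
    return violations

problems = release_gate(release_metrics, ethical_thresholds)
if problems:
    raise SystemExit("release blocked: " + "; ".join(problems))
print("release gate passed")
```

Running a check like this in the deployment pipeline turns abstract values into an enforceable go/no-go decision, and the violation log doubles as an input to the ongoing risk-vector review.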

Large-scale enterprises can afford to build teams around AI ethics. But some mid-sized or small businesses can’t afford these large-scale investments. In these cases, it’s important to work with a team that does understand these responsibilities. For companies that outsource some of their development processes, partnering with a development company that understands the intricacies of ethical AI development should be the first step in the ethical AI journey.

Building a More Holistic, Responsible, and Ethical AI

AI is a profoundly powerful technology that unlocks trillions of dollars in value for companies in industries across the globe. But that power brings tangible threats to your business. As you scale your technology, you scale the threats to your business. AI can create significant ethical issues for your company. To prevent these issues, it’s important to be proactive during your AI development and post-development process.

At Sigma AI, we specialize in building turn-key and customized AI solutions for enterprises looking to generate significant value without creating ethical headaches. Our team is dedicated to bridging the gaps between ethics and profitability. Are you ready to build a world-class AI ecosystem that’s honest, trustworthy, and engaging? Contact us to learn more about our AI solutions. From healthcare to call centers, we’re ready to help you tackle your biggest problems — in the most ethical way possible.

