Ethical AI vs. Responsible AI

As artificial intelligence (AI) becomes more prevalent in our lives, the question of ethics comes to the forefront. What is ethical AI? And how does it differ from responsible AI? Many people use the two terms interchangeably, but there is a meaningful difference between them.

Ethical AI is about doing the right thing: it concerns values and the broader social and economic impact of the technology. Responsible AI is more tactical: it relates to the way we develop and use technology and tools, addressing concrete issues such as diversity and bias.

AI has incredible potential to benefit humans and society, but it must be developed thoughtfully. Discussions and policies around ethical and responsible AI aim to help data scientists build socially beneficial, safe, and accountable technology.

Enterprises have the opportunity to differentiate themselves with an accountable approach to AI. But progress is lagging. The IBM Institute for Business Value found that while over 50% of organizations have publicly endorsed common principles of AI ethics, less than 25% have operationalized AI ethics.

AI should be in service to humans. This article explores responsible vs. ethical AI, including challenges and solutions to consider when navigating the development of human-centered AI applications.

What are the Ethical Issues Surrounding Artificial Intelligence and Machine Learning?

Even as demand grows for automation and AI adoption across industries and specialties, significant challenges and potential ethical concerns surround AI and machine learning.

Some of the issues raised by ethical and responsible AI/ML include:

Addressing Bias and Discrimination

When it comes to AI, bias can exist in the data that’s used to train machine learning models. This is often due to historical patterns of discrimination that have been captured and perpetuated in data sets. For example, if a data set only includes male employment history, a machine learning algorithm trained on this data would be more likely to recommend male candidates for job openings. To avoid perpetuating bias, it’s important to use data sets that are diverse and inclusive.
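
As a hedged illustration of how such skew can be surfaced, the Python sketch below compares selection rates across groups in a small, invented hiring dataset using the common "four-fifths rule" screening heuristic; the column names and data are hypothetical, not drawn from any real system.

```python
import pandas as pd

# Invented hiring data -- the columns and values are illustrative only.
candidates = pd.DataFrame({
    "gender":      ["male", "male", "female", "female", "male", "female", "male", "female"],
    "recommended": [1,      1,      0,        1,        1,      0,        1,      0],
})

# Selection rate per group: the fraction of candidates the model recommends.
rates = candidates.groupby("gender")["recommended"].mean()
print(rates)

# Four-fifths rule heuristic: flag the model if any group's selection rate
# falls below 80% of the highest group's rate.
ratio = rates.min() / rates.max()
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Possible adverse impact -- review the training data and features.")
```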

The topic of AI-based bias and discrimination gained attention after Apple’s credit card was flagged for apparent gender-based discrimination. A married couple found themselves treated very differently when applying for credit: although Ms. Hansson had a higher credit score than her husband, David Heinemeier Hansson, he was granted a far higher credit limit, while her request for an increase was denied.

While the initial complaint centered on Danish developer David Heinemeier Hansson and his wife, it soon prompted a larger discussion of AI and algorithmic ethics, spearheaded by Linda Lacewell, Superintendent of the New York State Department of Financial Services, who reaffirmed the need for greater responsibility and accountability across the tech sector.

Privacy Concerns

Many people are uncomfortable with their personal data being used to train machine learning models. There are also concerns about how these systems will be used once they are deployed. For example, a facial recognition system could be used to track people’s movements or target them for advertising. Individuals should be given a choice to consent to their data being used in AI, should know when and where their data is being used, and whether AI is being used to make decisions about them.
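
One concrete, if minimal, way to honor consent is to record it alongside each record and filter training data down to users who opted in. The sketch below assumes a hypothetical `consent_ai_training` flag; all field names are invented for illustration.

```python
import pandas as pd

# Hypothetical user records -- the consent flag and fields are illustrative only.
users = pd.DataFrame({
    "user_id":             [101, 102, 103, 104],
    "consent_ai_training": [True, False, True, False],
    "feature_x":           [0.4, 0.9, 0.1, 0.7],
})

# Only records from users who explicitly opted in ever reach the training set,
# and the exclusion count is logged so the decision is auditable later.
training_set = users[users["consent_ai_training"]].copy()
excluded = len(users) - len(training_set)
print(f"Training on {len(training_set)} consented records; excluded {excluded}.")
```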

AI’s Impact on Jobs Through Automation

The impact of artificial intelligence (AI) on jobs is well documented. Studies project that AI will lead to increased automation, which will in turn reduce the demand for certain kinds of human labor.

This trend is already underway, as businesses increasingly look to reap the efficiency and cost-savings benefits of AI-driven automation. It’s impossible to predict exactly how the employment landscape will change as a result of AI. However, what we can do is ensure that we’re prepared for the change. This means investing in education and training so that workers have the skills they need to adapt.

It’s also important to remember that AI is not just about automation. It can also be used to augment human labor, making workers more productive and efficient. For example, hospitals are using AI-powered chatbots to help answer patient questions, freeing up nurses to focus on more critical tasks.

Guarding Against Mistakes, Protecting Humans From Unexpected Consequences

AI and machine learning systems are “taught” to detect certain patterns and inputs during the training phase. For these systems to be effective, they need to be exposed to as many different examples as possible. However, it is impossible to cover every single potential real-world scenario during the training phase. As a result, these systems can sometimes be fooled by unexpected inputs. For example, a chatbot might say something offensive or a self-driving car might get into an accident. It’s essential to have systems to guard against these mistakes and protect people from any unexpected consequences.
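
One common safeguard is to defer low-confidence predictions to a human reviewer instead of acting on them automatically. The sketch below is a minimal illustration assuming a scikit-learn-style classifier; the threshold is a placeholder that would be tuned on validation data in practice.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.90  # illustrative cutoff; tune on validation data

def guarded_predict(model, features):
    """Return the model's prediction only when it is confident;
    otherwise defer the case to a human reviewer."""
    probabilities = model.predict_proba([features])[0]  # assumes a scikit-learn-style classifier
    best = int(np.argmax(probabilities))
    if probabilities[best] >= CONFIDENCE_THRESHOLD:
        return {"decision": best, "source": "model"}
    # Unexpected or ambiguous inputs fall through to a person instead of
    # producing a potentially harmful automated action.
    return {"decision": None, "source": "human_review"}
```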

Each of these examples highlights an ethical decision. Ethics help us distinguish right from wrong; responsible implementation concerns how AI is developed and used, and it is up to humans. Ultimately, these issues come down to choices about what is responsible and ethical for society.

What is “Ethical AI”?

The intent of community conversation around ethical AI is to do no harm: to continuously assess the impact of AI on society, along with the inherent risks of abuse, overuse, and harm to consumers. This form of self-regulation continues to shape and evolve the industry, and actively supports the responsible use and advancement of AI technology.

The term “Ethical AI” is typically used to describe two different but related concepts: ethical concerns surrounding the development and use of AI technology, and ethical decision-making by AI systems.

Ethical concerns surrounding the development and use of AI technology are mainly related to the potential impact of AI on society. For example, there are concerns about AI’s impact on jobs, privacy, and democracy. There is also a worry that as AI gets more sophisticated it could become uncontrollable and even dangerous.

Ethical decision-making by AI systems – This refers to the ethical choices made by AI systems when they are deployed in the real world. For example, an AI system might be used to decide who gets a loan or whether someone is released on bail. These decisions can have a big impact on people’s lives, so they must be made ethically.

Ethical AI emphasizes assessing the impact that the use of AI may have on individuals and society. It seeks to do no harm, and it deals with moral concerns.

With this in mind, elements of ethical AI construction are:

  • Protection of individual rights: Ethical AI models incorporate protective measures to uphold individual rights, such as the right to privacy and an equitable structure for all users.
  • Non-discrimination in solution construction: There have been several examples of bias in AI-based tools and solutions. Organizations and experts must take it upon themselves to be the change and facilitate more humanized solutions to these challenges and risks. Creating a continuous auditing system and an intent-based solution (emphasizing causal and intentional approaches over the more common correlation-based algorithm refinement practices) will be essential as more regulation comes to the field.
  • Awareness, responsiveness, and an ongoing commitment to change: Simply constructing the solution isn’t enough. Data science teams should build in space for ongoing awareness and risk management, along with a commitment to retraining the solution regularly to mitigate the risk of manipulation, bias, and similar ethical lapses (see the monitoring sketch after this list). While this may seem an obvious need, it should not be understated: it is essential to the core concept of ethical AI construction.
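
As a minimal sketch of what continuous auditing might look like in code, the example below compares the distribution of a live feature stream against its training distribution using a two-sample Kolmogorov–Smirnov test and flags drift as a signal to audit or retrain. The significance level and synthetic data are assumptions for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # illustrative significance level for flagging drift

def check_feature_drift(training_values, live_values):
    """Flag when a feature's live distribution diverges from training,
    one signal that an audit or retraining cycle is due."""
    result = ks_2samp(training_values, live_values)
    return {
        "ks_statistic": result.statistic,
        "p_value": result.pvalue,
        "drift": result.pvalue < DRIFT_P_VALUE,
    }

# Example with synthetic data: live traffic has shifted away from training.
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)
live = rng.normal(loc=0.5, scale=1.0, size=1_000)
print(check_feature_drift(train, live))  # expect drift=True for this shift
```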

What is Responsible AI?

When we talk about AI, it’s essential to consider the ethical implications of the technology and the responsibility that comes with building it. Responsible AI is about putting principles into practice and ensuring that the technology we create is safe, fair, and trustworthy. It translates the questions and considerations of ethical AI into actual, tangible AI experiences, technology, and products.

There are three key principles of responsible AI:

  • Fairness – This principle is about ensuring that everyone has a fair chance to benefit from AI technologies.
  • Accountability – This principle relates to the ability to explain and justify decisions made by AI systems. Humans are accountable for AI design, development, decision processes, and outcomes. This includes thinking through the impact of choices made in the creation of a model.
  • Transparency – This principle is about making sure that the decisions made by AI systems are understandable and explainable. For example, if an AI system makes a decision that seems unfair or unaccountable, it’s important to be able to understand why the system made that decision.
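
To make accountability and transparency concrete, one lightweight practice is to log every automated decision with enough context to explain and justify it later. The sketch below is a minimal illustration with assumed field names, not a production audit system or a standard schema.

```python
import json
import time
import uuid

def log_decision(model_version, inputs, output, explanation, log_file="decisions.jsonl"):
    """Append an auditable record of a single automated decision."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,   # which model made the call
        "inputs": inputs,                 # what it saw
        "output": output,                 # what it decided
        "explanation": explanation,       # e.g., top features driving the decision
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["decision_id"]

# Usage: every loan decision leaves a trail a reviewer can inspect and justify.
log_decision("credit-model-1.3", {"income": 52000, "score": 710},
             "approved", {"top_feature": "score"})
```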

The Challenges of Building Ethical AI

What does ethical and responsible AI mean for model development? At the most basic level, AI systems start with data. Establishing trust in the underlying data and data processes is the first step in enabling the ethical and responsible use of AI and ML.

The more technical aspects of model development highlight the importance of humans-in-the-loop. Human reasoning and context have a significant impact on the development of responsible AI. 

Because of the inherent human involvement and context applied to the AI development process, there are ethical considerations at every stage of the model development process to preserve the integrity and utility of AI technology and end products. These include:

Explainability

Problems often occur during the earlier stages of conceptualizing, researching, and designing AI models. Understanding the rationale behind an AI algorithm’s results (explainability) and the algorithm’s accuracy and consistency in producing those results (robustness) contributes to building responsible models.

Discrepancies can occur at any point in your training data, model, or production process. When you know how and why your models are doing something, you have the power to improve them. Plus, updating and addressing discrepancies efficiently can contribute to a more ethical, responsible, and streamlined process for future builds and products.
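
As one hedged example of explainability tooling, the sketch below uses scikit-learn’s permutation importance to estimate how much each feature drives a trained model’s predictions. The dataset and model are stand-ins; any fitted estimator could be inspected the same way.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in dataset and model for illustration purposes.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in sorted(zip(X.columns, result.importances_mean),
                               key=lambda pair: -pair[1])[:5]:
    print(f"{name}: {importance:.3f}")
```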

If AI technology companies remain committed to explainability and robustness as concepts, they can continue to bolster the general public’s confidence in AI as a relevant and safe technology.

Bias

Annotating or labeling data is an integral part of machine learning.

Machines learn from what we teach them. But how we define data is subjective. This is where a range of biases (historical, measurement, and modeling bias) can creep in.

Historical bias: This type of bias is inherited from the existing data pools used to refine a tool, and it most often affects disadvantaged or marginalized groups, categorizations, or data sets.

Measurement or modeling bias: Measurement bias has two main components: errors woven into the machine learning process itself, which compound past errors, and human errors introduced during data capture.

Teams need to be aware of accurate, equitable, and quality data representation — focusing on whether data in the sample is over or under-representing a specific group within the data set. Data also needs to be consistently represented in the same way throughout the dataset to preserve both the integrity and accuracy of the data.
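
A minimal sanity check for representation, sketched below with invented column names and reference proportions, is to compare each group’s share of the dataset against a reference share and flag large gaps.

```python
import pandas as pd

# Hypothetical labeled dataset; the "group" column is illustrative.
data = pd.DataFrame({"group": ["a"] * 700 + ["b"] * 250 + ["c"] * 50})

# Reference proportions, e.g., from a census or the deployed population.
reference = {"a": 0.50, "b": 0.35, "c": 0.15}

observed = data["group"].value_counts(normalize=True)
for group, expected in reference.items():
    actual = observed.get(group, 0.0)
    status = "OK" if abs(actual - expected) < 0.05 else "CHECK: over/under-represented"
    print(f"group {group}: expected {expected:.0%}, observed {actual:.0%} -> {status}")
```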

Accuracy

The goal of machine learning training and testing is to assess and improve your model’s accuracy. A well-documented development and testing process can help mitigate and correct unintended risks from inaccurate predictions. If errors occur, having highly accurate training data helps you assess how likely they are.

Machine learning enables machines to learn independently, but a machine can’t distinguish between good and bad data. Machines must learn from a training dataset or input data, and a model’s success is directly tied to the quality of that data.

So, if a machine learning model delivers inaccurate or irrelevant results, the first step in troubleshooting is evaluating data quality.
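
What evaluating data quality looks like in practice varies, but a first pass often covers missing values, duplicates, and label validity. The sketch below assumes a pandas DataFrame with a `label` column; the function and field names are illustrative.

```python
import pandas as pd

def data_quality_report(df: pd.DataFrame, label_col: str = "label",
                        valid_labels=(0, 1)) -> dict:
    """A first-pass quality check: missingness, duplication, label validity."""
    return {
        "rows": len(df),
        "missing_cells": int(df.isna().sum().sum()),
        "duplicate_rows": int(df.duplicated().sum()),
        "invalid_labels": int((~df[label_col].isin(valid_labels)).sum()),
    }

# Example: one missing feature, one duplicate row, one out-of-range label.
df = pd.DataFrame({"feature": [1.0, None, 3.0, 3.0], "label": [0, 2, 1, 1]})
print(data_quality_report(df))
```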

Subpar data quality is a critical problem that engineers can’t overlook. Poor data quality can create data cascades: errors that work their way down the development process and jeopardize the quality of decision-making around refinement and design.

This is why data quality is vital to ML performance and the value it provides to the operation where it’s deployed.

Quantity of data

Insufficient data sets may not accurately reflect all of the qualities that need to be accounted for in a model to produce a consistent outcome. Data must be as complete as possible before moving to the next part of the production process. Ideally, datasets should represent all aspects of operating conditions in a balanced way.
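
One way to check that a dataset covers operating conditions in a balanced way, sketched below with invented condition columns and an illustrative threshold, is to cross-tabulate the conditions and look for sparse combinations.

```python
import pandas as pd

# Hypothetical collection metadata; the condition columns are illustrative.
samples = pd.DataFrame({
    "lighting": ["day", "day", "night", "day", "night", "day"],
    "weather":  ["clear", "rain", "clear", "clear", "rain", "clear"],
})

MIN_SAMPLES = 2  # illustrative floor per condition combination

coverage = samples.groupby(["lighting", "weather"]).size()
for combo, count in coverage.items():
    flag = "" if count >= MIN_SAMPLES else "  <- under-sampled, collect more data"
    print(f"{combo}: {count}{flag}")

# Combinations that never appear at all won't show up in `coverage`,
# so also compare against the full cross-product of expected conditions.
```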

Resources for Ethical and Responsible AI

Global technology companies such as Microsoft, IBM, Google, and Salesforce publish high-level ethical and responsible AI principles. These resources are a great starting point for governance and ethical guidelines. However, the nuances of operationalizing these principles are more challenging, especially as the field continues to innovate and evolve.

Gaining a variety of perspectives from different industries, regulations, experiences, and use cases supports putting ethical theory and responsibility of design into practice.

Below are some resources that can provide a variety of ideas, frameworks, and tools to support developing an approach for your organization.

  • AI Now Institute: The AI Now Institute at NYU researches four core themes: rights and liberties, labor and automation, bias and inclusion, and safety and critical infrastructure.
  • Involve: Involve is on a mission to put people at the heart of decision-making. They often explore the impact of AI-driven decisions.
  • Partnership on AI: Partnership on AI is dedicated to addressing the most important and difficult questions concerning the future of AI.
  • Center for Human-Compatible AI: The center’s goal is to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems.
  • AlgorithmWatch: AlgorithmWatch is a non-profit focused on evaluating and shedding light on algorithmic decision-making processes that have social relevance, meaning they are used either to predict or prescribe human action or to make decisions automatically.

Building Responsible and Ethical AI

Even though AI and machine learning have been discussed for years, the industry, emerging technologies, and use cases for AI are changing quickly. The speed of these changes and the ethical challenges they raise warrant consideration. The development of responsible AI is an emerging field for many companies, AI developers, and data scientists.

The team at Sigma.AI has over 30 years of experience creating ethical and responsible machine learning technologies. We work across various industries and seek to bring a human perspective to sourcing, annotating, and scaling data for responsible and ethical AI.

Are you ready to experience the difference? Connect with us today to discuss how Sigma can help you develop quality training data to support ethical and responsible AI.  
