Ethics is a set of moral principles that help us discern between right and wrong. AI ethics is a multidisciplinary field that studies how to optimize AI's beneficial impact while reducing risks and adverse outcomes.
Examples of AI ethics issues include data responsibility and privacy, fairness, explainability, robustness, transparency, environmental sustainability, inclusion, moral agency, value alignment, accountability, trust, and technology misuse. This article aims to provide a comprehensive market view of AI ethics in the industry today. To learn more about IBM’s point of view, see our AI ethics page here.
With the emergence of big data, companies have increased their focus on automation and data-driven decision-making across their organizations. While the intention is usually, if not always, to improve business outcomes, companies are experiencing unforeseen consequences in some of their AI applications, particularly due to poor upfront research design and biased datasets.
As instances of unfair outcomes have come to light, new guidelines have emerged, primarily from the research and data science communities, to address concerns around the ethics of AI. Leading companies in the field of AI have also taken a vested interest in shaping these guidelines, as they themselves have started to experience some of the consequences of failing to uphold ethical standards within their products. Lack of diligence in this area can result in reputational, regulatory and legal exposure, resulting in costly penalties. As with all technological advances, innovation tends to outpace government regulation in new, emerging fields. As the appropriate expertise develops within government, we can expect more AI protocols for companies to follow, enabling them to avoid infringements on human rights and civil liberties.
While rules and protocols develop to manage the use of AI, the academic community has leveraged the Belmont Report as a means to guide ethics within experimental research and algorithmic development. Three main principles came out of the Belmont Report that serve as a guide for experiment and algorithm design:
Respect for persons: recognizing the autonomy of individuals, protecting those with diminished autonomy, and obtaining informed consent.
Beneficence: minimizing possible harms while maximizing benefits.
Justice: distributing the benefits and burdens of research fairly and equitably.
There are a number of issues that are at the forefront of ethical conversations surrounding AI technologies in the real world. Some of these include:
The release of ChatGPT in 2022 marked a true inflection point for artificial intelligence. The abilities of OpenAI’s chatbot, from writing legal briefs to debugging code, opened a new constellation of possibilities for what AI can do and how it can be applied across almost all industries. ChatGPT and similar tools are built on foundation models, AI models that can be adapted to a wide range of downstream tasks. Foundation models are typically large-scale generative models, comprising billions of parameters, that are trained on unlabeled data using self-supervision. This allows foundation models to quickly apply what they’ve learned in one context to another, making them highly adaptable and able to perform a wide variety of different tasks. Yet there are many potential issues and ethical concerns around foundation models that are commonly recognized in the tech industry, such as bias, generation of false content, lack of explainability, misuse and societal impact. Many of these issues are relevant to AI in general but take on new urgency in light of the power and availability of foundation models.
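To make the idea of adapting one model to many downstream tasks concrete, here is a minimal sketch, assuming the open-source Hugging Face transformers library and a small publicly available model (GPT-2): the same pretrained, self-supervised weights are reused for different tasks simply by changing the prompt, with no task-specific retraining. A model this small will produce rough output; the point is the pattern, not the quality.

```python
# Minimal sketch: one pretrained, self-supervised model handling multiple downstream tasks.
# Assumes the Hugging Face `transformers` library; the model and prompts are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small foundation-style model

prompts = [
    "Summarize: Our Q3 revenue grew 12% while costs fell 3%. Summary:",
    "Translate English to French: Where is the train station? Translation:",
]

for prompt in prompts:
    # The same weights are adapted to each task purely through prompting.
    output = generator(prompt, max_new_tokens=30, num_return_sequences=1)
    print(output[0]["generated_text"])
```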
While this topic garners a lot of public attention, many researchers are not concerned with the idea of AI surpassing human intelligence in the near or immediate future. This is also referred to as superintelligence, which Nick Bostrom defines as “any intellect that vastly outperforms the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.” Although strong AI and superintelligence are not imminent in society, the idea raises some interesting questions as we consider the use of autonomous systems, such as self-driving cars. It’s unrealistic to think that a driverless car would never get into an accident, but who is responsible and liable under those circumstances? Should we still pursue autonomous vehicles, or do we limit the integration of this technology to create only semi-autonomous vehicles that promote safety among drivers? The jury is still out on this, but these are the types of ethical debates that are occurring as new, innovative AI technology develops.
While a lot of public perception around artificial intelligence centers on job loss, this concern should probably be reframed. With every disruptive new technology, we see that the market demand for specific job roles shifts. For example, when we look at the automotive industry, many manufacturers, such as GM, are shifting their focus to electric vehicle production to align with green initiatives. The energy industry isn’t going away, but the source of energy is shifting from fossil fuels to electricity. Artificial intelligence should be viewed in a similar manner: it will shift the demand for jobs to other areas. There will need to be individuals to help manage these systems as data grows and changes every day. There will also still need to be people to address more complex problems within the industries most likely to be affected by job demand shifts, such as customer service. The important aspect of artificial intelligence and its effect on the job market will be helping individuals transition to these new areas of market demand.
Privacy tends to be discussed in the context of data privacy, data protection and data security, and these concerns have prompted policymakers to make more strides in this area in recent years. For example, in 2016, GDPR legislation was created to protect the personal data of people in the European Union and European Economic Area, giving individuals more control over their data. In the United States, individual states are developing policies, such as the California Consumer Privacy Act (CCPA), which require businesses to inform consumers about the collection of their data. This recent legislation has forced companies to rethink how they store and use personally identifiable information (PII). As a result, investments in security have become an increasing priority for businesses as they seek to eliminate any vulnerabilities and opportunities for surveillance, hacking and cyberattacks.
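As one small, hedged illustration of what rethinking how PII is stored can look like in practice, the sketch below pseudonymizes direct identifiers with a keyed hash before a record is persisted. The field names and key handling are assumptions for the example, and hashing alone is not a compliance strategy under GDPR or CCPA; it simply shows one common technical building block.

```python
# Illustrative only: pseudonymize direct identifiers before storage.
# Field names and the secret key are hypothetical; real GDPR/CCPA compliance
# involves far more than hashing (consent, retention, access requests, etc.).
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-securely-managed-key"

def pseudonymize(value: str) -> str:
    """Return a keyed, one-way hash of a PII value."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"name": "Jane Doe", "email": "jane@example.com", "purchase_total": 42.50}

stored_record = {
    "name": pseudonymize(record["name"]),
    "email": pseudonymize(record["email"]),
    "purchase_total": record["purchase_total"],  # non-identifying fields kept as-is
}
print(stored_record)
```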
Instances of bias and discrimination across a number of intelligent systems have raised many ethical questions regarding the use of artificial intelligence. How can we safeguard against bias and discrimination when the training data itself can be biased? While companies typically have well-meaning intentions around their automation efforts, there can be unforeseen consequences of incorporating AI into hiring practices. In its effort to automate and simplify a process, Amazon built a recruiting tool that was unintentionally biased against job candidates by gender for open technical roles, and the company ultimately had to scrap the project. As events like these surface, Harvard Business Review has raised other pointed questions around the use of AI within hiring practices, such as what data you should be able to use when evaluating a candidate for a role.
Bias and discrimination aren’t limited to the human resources function either; they can be found in a number of applications, from facial recognition software to social media algorithms.
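One concrete starting point for safeguarding against this kind of bias is to audit model outcomes across demographic groups before deployment. The sketch below is a simplified illustration with made-up data: it computes per-group selection rates for a hiring-style screening model and flags a large gap using the commonly cited four-fifths threshold, which is an assumption of the example rather than a complete fairness methodology.

```python
# Illustrative fairness audit: compare selection rates across groups.
# The data, group labels and 0.8 threshold are assumptions for this sketch.
from collections import defaultdict

# (group, model_decision) pairs; 1 = candidate advanced, 0 = rejected
predictions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, decision in predictions:
    totals[group] += 1
    selected[group] += decision

rates = {group: selected[group] / totals[group] for group in totals}
print("Selection rates:", rates)

# Disparate-impact ratio: lowest group rate divided by highest group rate.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact: investigate features and training data.")
```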
As businesses have become more aware of the risks of AI, they’ve also become more active in the discussion around AI ethics and values. For example, in 2020, IBM CEO Arvind Krishna shared that IBM had sunset its general-purpose IBM facial recognition and analysis products, emphasizing that “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency.”
There is no universal, overarching legislation that regulates AI practices, but many countries and states are working to develop and implement such regulations locally. Some pieces of AI regulation are in place today, with many more forthcoming. To fill the gap, ethical frameworks have emerged as part of a collaboration between ethicists and researchers to govern the construction and distribution of AI models within society. However, at the moment, these frameworks only serve to guide, and research shows that the combination of distributed responsibility and a lack of foresight into potential consequences isn’t necessarily conducive to preventing harm to society.
Artificial intelligence performs according to how it is designed, developed, trained, tuned and used, and AI ethics is all about establishing an ecosystem of ethical standards and guardrails throughout all phases of an AI system’s lifecycle.
Organizations, governments and researchers alike have started to assemble frameworks to address current AI ethical concerns and shape the future of work within the field. While more structure is injected into these guidelines every day, there is some consensus around incorporating the following:
Governance is an organization’s act of overseeing the AI lifecycle through internal policies and processes, staff and systems. Governance helps to ensure that AI systems are operating as an organization’s principles and values intend, as stakeholders expect, and as required by relevant regulation. A successful governance program will:
define the roles and responsibilities of people working with AI.
educate all people involved in the AI lifecycle about building AI in a responsible way.
establish processes for building, managing, monitoring and communicating about AI and AI risks.
leverage tools to improve AI’s performance and trustworthiness throughout the AI lifecycle.
An AI Ethics Board is a particularly effective governance mechanism. At IBM, the AI Ethics Board comprises diverse leaders from across the business. It provides a centralized governance, review and decision-making process for IBM ethics policies and practices. Learn more about IBM’s AI Ethics Board.
An organization’s approach to AI ethics can be guided by principles that can be applied to products, policies, processes and practices throughout the organization to help enable trustworthy AI. These principles should be structured around focus areas, such as explainability or fairness, for which standards can be developed and practices aligned.
When AI is built with ethics at the core, it has tremendous potential to impact society for good. We’ve started to see this in its integration into areas of healthcare, such as radiology. The conversation around AI ethics is also important to appropriately assess and mitigate possible risks related to AI’s uses, beginning in the design phase.
Since ethical standards are not the primary concern of data engineers and data scientists in the private sector, a number of organizations have emerged to promote ethical conduct in the field of artificial intelligence. For those seeking more information, the following organizations and projects provide resources for enacting AI ethics:
IBM has also established its own point of view on AI ethics, creating Principles of Trust and Transparency to help clients understand where its values lie within the conversation around AI. IBM has three core principles that dictate its approach to data and AI development, which are:
IBM has also developed five pillars to guide the responsible adoption of AI technologies. These include:
These principles and focus areas form the foundation of our approach to AI ethics. To learn more about IBM’s views around ethics and artificial intelligence, read more here.