
Why Your Organization Needs a Set of Ethical Principles for AI

The ethical principles related to artificial intelligence (AI) and machine learning (ML) are fast becoming a critical topic of discussion. These technologies can (and do) confer enormous benefits — helping us use the earth’s resources more judiciously, detecting fraud, preventing identity theft, and more. However, biased data sets, careless misuse, and bad actors can easily turn AI into a weapon with dire consequences.

Fortunately, the Information Technology (IT) industry, non-profit organizations, governments, and academia are increasingly advocating for guidelines to encourage the most ethical use of AI possible. For example, the European Commission’s High-Level Expert Group on Artificial Intelligence has proposed a set of Ethics Guidelines for Trustworthy AI. The Association for the Advancement of Artificial Intelligence published its Code of Professional Ethics and Conduct — an adaptation of the code of the same name developed by the Association for Computing Machinery. As you think about helping your organization develop its own set of principles and policies to enforce an ethical and lawful use of AI, these examples are a suitable place to start.

Why AI ethics matter

There are many good reasons for your organization to invest the time and energy required to develop a robust code of ethics to guide and regulate the responsible development and use of AI technologies. Here are the four I see as the most critical:

  1. To minimize bias. In a well-known book on this topic, “The Alignment Problem: Machine Learning and Human Values,” Brian Christian explains the bias intrinsic to many datasets. He gives the example of an early machine-learning study in which a model ingested a vast database of language from published books and the Internet. The goal was for the model to simulate an understanding of language, using a mathematical representation of words that supported translation and expressed linguistic relationships as arithmetic. Despite the massive data source, the researchers found some troubling results. Entering “King-Man+Woman” returned “Queen,” which was the desired response. However, entering “Doctor-Man+Woman” returned “Nurse,” and “Shopkeeper-Man+Woman” produced “Housewife.” Despite the sophistication of the algorithm, the model was deeply flawed by the inherent bias in its dataset.

    I recall a case in which an ML practitioner from another company mentioned that they were using facial recognition to let users log in to corporate systems. They trained the AI model with photos of their employees’ faces. However, inadvertently (but perhaps not unexpectedly), these faces lacked the ethnic diversity necessary to reliably recognize the facial features of people from underrepresented minorities. As a result, these employees were unable to log in and do their work. For more information on the effects of bias and countermeasures, I recommend reading “Biases in AI Systems” by R. Srinivasan and A. Chand.

  2. To build trust. If your organization establishes a clear and well-defined framework to build and use trustworthy AI (see Building Trust in Human-Centric AI), it will be easier for your employees, customers, and partners to trust that your organization will use their data in a lawful and ethical manner. Rest assured that this will become increasingly relevant over time for every business around the world.

  3. To put people and earth at the center. Used for the common good, the development of AI may become the ultimate testament to humankind’s scientific evolution. Every company — of any size, vertical, or geography — has an opportunity to use AI as an agent of change. These technologies can help us all create cleaner operations, augment people’s intellectual capacity, free people up to do their most creative, meaningful work, and have a positive impact on their communities.

  4. To gain a key competitive advantage. While the goals of AI ethics are not explicitly profit-driven, I believe that an organization that is committed to ethical AI — with appropriate frameworks to produce and use trustworthy AI — is in a better position to earn trust in the marketplace, better prepared to comply with emerging regulations, and likely to see better returns from products and services that rely on AI.
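The word-vector arithmetic described in point 1 can be reproduced in miniature. The sketch below uses tiny, hand-invented 2-D embeddings (real systems such as word2vec learn hundreds of dimensions from large corpora; the vocabulary and vectors here are purely illustrative) to show how analogy queries like “King-Man+Woman” are answered by nearest-neighbor search:

```python
import numpy as np

# Toy 2-D "embeddings" invented purely for illustration -- real models
# learn hundreds of dimensions from massive text corpora.
embeddings = {
    "king":  np.array([0.9, 0.8]),
    "man":   np.array([0.9, 0.1]),
    "woman": np.array([0.1, 0.1]),
    "queen": np.array([0.1, 0.8]),
}

def closest(vector, vocab, exclude=()):
    """Return the word whose embedding has the highest cosine similarity."""
    def cosine(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    candidates = {w: v for w, v in vocab.items() if w not in exclude}
    return max(candidates, key=lambda w: cosine(vector, candidates[w]))

# The analogy "king - man + woman" lands on "queen" in this toy space.
result = closest(
    embeddings["king"] - embeddings["man"] + embeddings["woman"],
    embeddings,
    exclude={"king", "man", "woman"},
)
print(result)  # queen
```

The same arithmetic, applied to embeddings learned from biased text, is exactly what produced the “Doctor-Man+Woman = Nurse” result: the model faithfully encodes whatever associations its training data contains.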

Make no mistake: the exercise of crafting a code of ethical principles that resonates with your organization’s culture may be fraught with obstacles. You may face internal resistance. You may instead be directed to invest your time in developing data-science practitioners’ guides based on existing, well-accepted ethical principles. Reasonable as that may sound, there are important reasons why every organization should judiciously choose the ethical principles for AI that match its industry, purpose, and identity.

Codes of AI ethics are not one-size-fits-all

As you begin to sketch out your own set of ethical principles for AI, it is essential to consider a couple of fundamental matters. First off, it is much easier to get buy-in when the embodied principles align with your established corporate values and policies. Second, it is necessary to have a well-defined decision-making process that helps organization members make correct and lawful decisions (including decisions about how to develop and use AI technologies). These two factors alone highlight the importance of creating your own policies, as opposed to arbitrarily adopting someone else’s code of ethics for AI. Conscious reflection on the principles your company can adhere to will result in the most appropriate set of guidelines.

For example, VMware’s culture is based on a set of shared values expressed through the acronym EPIC2: execution, passion, integrity, customers, and community. Employees are continuously encouraged to live up to the EPIC2 values to make VMware a force for good. Any ethical principles for AI we decide to embrace must be in line with both our EPIC2 and our environmental, social, and governance (ESG) goals.

VMware approaches ethical decision-making in a methodical fashion. Rather than rationalizing or relying on gut instinct, we have implemented an ethical decision-making framework (called DECIDE) to evaluate our options. Before making complex decisions, we assess the potential impact based on our values, the rules (policies and the law), and community. Check out our Business Conduct Guidelines to learn more about VMware’s values and decision-making frameworks.

Seven ethical principles for AI to keep in mind

As an exercise, if I were to devise a code of ethical principles for AI, I would want to ensure that it aligned with our EPIC2 values and our Business Conduct Guidelines. Following that methodology, I might end up with a set of ethical principles for VMware’s AI that looked like the following:

  • Be inclusive. Diversity and inclusiveness in society result in teams that generate better outcomes — including in the practice of AI. Therefore, AI practitioners should adhere to the fundamental principles of diversity, equity, and inclusion when developing AI models that process people’s data.
  • Strive for fairness. As I explained earlier, we must acknowledge that any data source may carry an intrinsic degree of bias. It is critical to make a rigorous effort to identify that bias so that AI systems do not behave unfairly or improperly. All people, regardless of race, gender, disability, income, or any other dimension of diversity, should be treated fairly by AI systems, and all data should be collected and labeled with bias detection and mitigation as top concerns.
  • Deliver explainability and transparency. An AI system’s decision-making process and outcomes should be well documented and accessible for audit (where it makes sense and/or is required). Models should also be transparent: the decisions they make and the actions they take should be explainable in straightforward language.
  • Make it reliable. It is critical to take steps to ensure that AI systems function according to their design purpose, which requires rigorous testing to verify reliability and to understand the expected margin of error. Where appropriate, AI-powered systems should have control mechanisms that allow human operators to deactivate the AI component without affecting business continuity. In addition, continuous monitoring and validation of the AI components are necessary to deliver robust and reliable systems.
  • Enforce privacy and security. AI systems should adhere to the organization’s policies regarding privacy and data security. This should include adequate data labeling and governance mechanisms defined by established data privacy, information security, and data-retention policies. Also consider that the AI models may themselves constitute sensitive data and may also be subject to government regulations.
  • Remain accountable. Individuals in your organization should be accountable for the ideation, design, implementation, and deployment of each AI-powered system they create and/or use — including the outcomes, results, and consequences of its use.
  • Care about sustainability. AI systems should be assessed regarding their impact on the environment. The development and consumption of AI technologies should align with and support the company’s ESG goals.
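To make the reliability principle concrete, here is a minimal sketch of the human-override idea: an operator-controlled switch routes requests away from the ML component to a deterministic fallback, so the business process keeps running even with the AI deactivated. The class and the stand-in scoring functions are hypothetical illustrations, not part of any VMware product:

```python
class ScoringService:
    """Routes requests to an ML model, with an operator-controlled kill switch.

    When the switch is engaged, requests fall back to a simple rule-based
    scorer so the business process continues without the AI component.
    """

    def __init__(self, model_score, fallback_score):
        self._model_score = model_score        # e.g. a trained model's predict()
        self._fallback_score = fallback_score  # deterministic rule-based backup
        self._ai_enabled = True

    def disable_ai(self):
        """Human operators call this to deactivate the AI component."""
        self._ai_enabled = False

    def enable_ai(self):
        self._ai_enabled = True

    def score(self, request):
        if self._ai_enabled:
            try:
                return self._model_score(request)
            except Exception:
                # A failing model should degrade gracefully, not halt the business.
                return self._fallback_score(request)
        return self._fallback_score(request)


# Usage with hypothetical stand-in scoring functions:
service = ScoringService(
    model_score=lambda r: 0.87,    # placeholder for a real model
    fallback_score=lambda r: 0.5,  # conservative rule-based default
)
print(service.score({}))  # 0.87 -- AI path
service.disable_ai()
print(service.score({}))  # 0.5  -- operator has switched to the fallback
```

The key design choice is that the fallback path exists and is exercised from day one, rather than being bolted on after an incident.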

How ethical principles for AI can help us all

While VMware is just at the beginning of this exercise, I imagine that we will categorize our goals for the use of AI into three pillars. Conceivably, we would build our frameworks to apply ethical principles for AI to each of them.

  • Smarter customers and partners. Focus on ensuring that customer ML workloads run well on the platform components they buy from us. That means that the way we implement our ethical principles should not interfere with customer ML workloads and that we should deliver features (over time) that facilitate the implementation of ethical principles that the industry is recognizing as important.
  • A smarter organization. Teams throughout the company may be developing ML models for many business purposes, including for efficient product development. The ethical principles (as a complement to existing policies) should define how the company uses people’s and businesses’ data and the types of models we may build.
  • Smarter products and services. Create ML models that are incorporated into the company’s own products and services to enhance automation, scale, and efficiency. Then apply ethical principles during the development of such ML models and building product features to empower customers using these models.

In future articles, we will double-click on each principle to examine more deeply why it matters and how data-science practitioners can realize it, so stay tuned!

Working on ethical principles for AI requires a team skilled in a variety of disciplines and with diverse backgrounds. I’d like to thank the people who are helping VMware develop a framework for the practice of ethics for AI: Sharon Feng, Luyi Wang, Bhavani Kumar, Justin Sampson, Philip Jang, and Josh Simons. Also, many thanks to Brianna Blacet for helping us organize, simplify, and polish the original article’s text to make it clearer and much easier to read.
