The Whiteboard

Machine Learning: Addressing Inherent Bias

Machine learning (ML) and artificial intelligence (AI) have become so ingrained in our lives that we take them for granted. These technologies are part and parcel of self-driving cars, search-engine results, recommendations on every streaming TV and music service…the list goes on and on (if you don’t believe me, just ask Siri!). Their benefits are boundless, and the underlying technology becomes more sophisticated with every passing day.

ML and AI rely on data sets and algorithms to “learn” and make decisions. For example, the more blues tracks you listen to on Apple Music, the more blues songs and artists the streaming service will recommend to you. But when data sets and ML models are biased, the results can have serious consequences.
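
To make the mechanism concrete, here is a deliberately minimal sketch in Python of that kind of frequency-based recommendation logic. The genres, tracks, and catalog below are made up, and real streaming services use far more sophisticated models; the point is simply that the system feeds back more of what you have already consumed.

```python
from collections import Counter

# Hypothetical listening history and catalog; all names are made up.
listening_history = ["blues", "blues", "jazz", "blues", "rock"]
catalog = {
    "blues": ["Track A", "Track B"],
    "jazz":  ["Track C"],
    "rock":  ["Track D"],
}

def recommend(history, catalog, k=2):
    """Recommend tracks from the listener's most-played genres first."""
    recs = []
    for genre, _count in Counter(history).most_common():
        recs.extend(catalog.get(genre, []))
    return recs[:k]

print(recommend(listening_history, catalog))  # ['Track A', 'Track B']
```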

For example, hiring tools (think ZipRecruiter, Glassdoor, Indeed, LinkedIn, etc.) routinely use ML algorithms to suggest jobs to candidates and candidates to employers. The tech industry (among others) has notoriously favored white men in hiring. As such, data sets based solely on past hires are inevitably biased, creating a feedback loop that perpetuates the employment discrimination endemic to these industries.
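
A toy simulation makes that feedback loop visible. The groups, counts, and scoring rule below are entirely hypothetical (this is not real hiring data): a “model” that simply prefers candidates who resemble past hires drives the hired pool further from a balanced applicant pool with every round.

```python
# Hypothetical simulation of a biased-hiring feedback loop.
past_hires = ["group_a"] * 80 + ["group_b"] * 20  # historically skewed data

def score(candidate, hires):
    # Naive stand-in for a trained model: a candidate's score is simply
    # the share of past hires who belong to the same group.
    return hires.count(candidate) / len(hires)

for round_num in range(1, 4):
    applicants = ["group_a"] * 50 + ["group_b"] * 50  # balanced applicants
    ranked = sorted(applicants, key=lambda c: score(c, past_hires), reverse=True)
    past_hires.extend(ranked[:10])  # hire the 10 top-scored candidates
    share_b = past_hires.count("group_b") / len(past_hires)
    print(f"Round {round_num}: group_b share of all hires = {share_b:.0%}")
```

Run it and group_b’s share of hires falls from 20% toward 15% in just three rounds, even though every applicant pool was perfectly balanced.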

In a Harvard Business Review article titled “All the Ways Hiring Algorithms Can Introduce Bias,” author Miranda Bogen poses the question, “Do hiring algorithms prevent bias, or amplify it?” She describes research her company (Upturn, a nonprofit research and advocacy group that promotes equity and justice in the design, governance, and use of digital technology) conducted in concert with Northeastern University and the University of Southern California. Among their findings was evidence that broadly targeted Facebook ads for supermarket cashier positions were shown to an audience that was 85% women, while ads for taxi-company jobs reached an audience that was approximately 75% Black. It’s easy to see how damaging this can be.
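
One common way practitioners quantify this kind of skew is the “four-fifths rule” impact-ratio check from U.S. employment-selection analysis. The sketch below applies it to illustrative numbers (not the actual figures from the Upturn study), treating ad-exposure rates as the selection rates being compared for the sake of the example.

```python
# Illustrative only: the rates below are made up, not taken from the study.
# The four-fifths rule flags adverse impact when one group's rate falls
# below 80% of another group's rate.

def impact_ratio(rate_low_group: float, rate_high_group: float) -> float:
    """Ratio of the lower group's rate to the higher group's rate."""
    return rate_low_group / rate_high_group

# Suppose a cashier ad reaches 15% of eligible men and 85% of eligible women.
ratio = impact_ratio(0.15, 0.85)
print(f"Impact ratio: {ratio:.2f}")  # ~0.18, far below the 0.8 threshold
print("Flag for review" if ratio < 0.8 else "Within threshold")
```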

An article published on becominghuman.ai entitled “7 Types of Data Bias in Machine Learning” (by Lionbridge AI, now Telus International) parses ML bias into seven categories: sample bias, exclusion bias, measurement bias, recall bias, observer bias, racial bias, and association bias. There isn’t enough space here to examine each of these in detail, so I recommend reading the article (among others; many authors and sources have tackled this topic and categorized biases in various ways). But regardless of how they are labeled, the fact is that bias is difficult, if not impossible, to avoid in data sets and ML models. The only way to eliminate it completely would be to abandon ML and AI altogether, which is unrealistic at best. That horse has already left the barn.
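
To give a flavor of what catching even one of these looks like in practice, here is a minimal sketch of a check for sample bias, the first category on the list: compare a training sample’s group distribution against a known reference population. The groups and proportions below are illustrative, not real data.

```python
from collections import Counter

# Illustrative reference distribution and a badly skewed training sample.
reference_population = {"group_a": 0.50, "group_b": 0.50}
training_sample = ["group_a"] * 900 + ["group_b"] * 100

counts = Counter(training_sample)
total = len(training_sample)

for group, expected in reference_population.items():
    observed = counts[group] / total
    flag = "  <-- misrepresented" if abs(observed - expected) > 0.10 else ""
    print(f"{group}: observed {observed:.0%} vs. expected {expected:.0%}{flag}")
```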

The route we must take involves developing and enforcing rigorous codes of ethics that include human oversight and accountability. We can’t allow unsupervised algorithms to govern human life. While these concepts are not new, implementation (and enforcement) is still in its infancy. For example, although we’ve used ML for decades, it wasn’t until mid-2021 that the United States Government Accountability Office published its first guidance, in the form of a framework for AI accountability. It addresses governance, data, performance, and monitoring, along with key practices for selecting and implementing AI systems. Each practice includes a set of questions for entities, auditors, and third-party assessors to consider, as well as procedures for auditors and third-party assessors.
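
As one concrete illustration of what “human oversight” can mean in code, here is a minimal sketch of a human-in-the-loop gate: the model’s decision is escalated to a reviewer whenever confidence is low or the decision is high-impact. The predict() stub, labels, and thresholds are all hypothetical, not any particular framework’s API.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    confidence: float

def predict(application) -> Decision:
    # Stand-in for a real model; imagine it returns a label plus confidence.
    return Decision(label="deny", confidence=0.62)

CONFIDENCE_FLOOR = 0.90          # hypothetical threshold
HIGH_IMPACT_LABELS = {"deny"}    # decisions that always get human review

def decide(application) -> str:
    d = predict(application)
    if d.confidence < CONFIDENCE_FLOOR or d.label in HIGH_IMPACT_LABELS:
        return "escalated to human reviewer"
    return d.label

print(decide({"applicant_id": 123}))  # "escalated to human reviewer"
```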

To learn more about the importance of ethics in AI and how we might develop these principles, check out Enrique Corro’s new blog post, “Why Your Organization Needs a Set of Ethical Principles for AI.”

Let me know your thoughts in the comments. As always, I look forward to hearing from you.

Comments

  1. Incredible article, Kit.

    I think this is why explainability techniques will be so important moving forward. Whenever models are trained on human-driven data (e.g., hired vs. not-hired), as opposed to absolute data (e.g., temperature over time), I think a LOT of thought should go into not only understanding preexisting human biases and shortcomings, but also identifying how we will audit the model(s) in the real world to expose other potential biases.

    To flip the problem around: given the right approach, AI and explainability may allow us to identify previously unknown biases.

    Thank you so much for bringing this issue to the forefront; it’s absolutely pivotal that the leaders at VMware (and other corporations) are not only aware of this, but actively pursuing solutions.

    Cheers,
    Darien Schettler

    1. Darien – thanks for the thoughts! Yes, totally agree on your point about human-biased data vs. absolute data. I think this is an important distinction that often gets lost. And yes, a strong focus on identifying biases or blind spots is critical. Definitely something we’re still working through, and we will be sure to keep it top of mind. Kit
