
Blog Abstract: Tackling Bias in Artificial Intelligence

Jun 2, 2019 | Artificial Intelligence, Human Interest

From time to time, team members will share their views stimulated by content from an industry thought leader. Here, our CEO, Lisa Maier, discusses the recent McKinsey & Company article, “Tackling bias in artificial intelligence (and in humans)” by Jake Silberg and James Manyika.

In the last year or so, I saw a documentary about Google and the inherent bias in its search algorithm, which showed how the algorithm baked in the biases of its programmers, probably without any intention of doing so. Then, just this last week, I saw an AllSides blog post, “Audit Finds Evidence of Google’s Bias Toward These Media Outlets,” that starts with this paragraph:

“A new audit shows Google is biased toward a small number of major media outlets — and most of them have a Left-leaning political bias. One-sided information isn’t healthy for democracy, yet the world’s most popular source of news and information displays a major bias.

Back in October, AllSides conducted a 39-page Google bias report that found Google News is biased 65% toward Left and Left-leaning news sources. Now, a new audit by the Computational Journalism Lab at Northwestern University bolsters those claims…”

The question McKinsey raises in their article is this: “Will AI’s decisions be less biased than human ones? Or will AI make these problems worse?”

This article discusses ways that AI can reduce human bias by surfacing factors that appear to reflect a conscious or unconscious bias. I really like this idea because we humans are so blind to ourselves, as the well-known and well-documented catalog of cognitive biases attests. AI can also help us make better decisions over time: machine learning can identify which factors actually improve predictive accuracy, and it does so at a speed that greatly outpaces our own learning.
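To make that idea concrete, here is a minimal sketch of how a fitted model can surface the factors driving its predictions so a human reviewer can question them. Everything in it is hypothetical: the file name, the column names, and the hiring scenario are my own illustration, not from the McKinsey article.

```python
# A minimal sketch: train a simple model and inspect which factors drive
# its predictions. The dataset and column names are hypothetical; the point
# is that a fitted model makes its influential factors explicit, so a human
# reviewer can ask whether any of them (e.g., a zip-code-derived feature)
# is acting as a proxy for a protected attribute.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("hiring_history.csv")  # hypothetical historical data
features = ["years_experience", "test_score", "zip_code_income_rank"]
X = StandardScaler().fit_transform(df[features])
y = df["was_hired"]

model = LogisticRegression().fit(X, y)

# Because the inputs are standardized, coefficients are roughly comparable;
# an outsized weight on a factor with no plausible causal link to job
# performance is a flag worth discussing with domain experts.
for name, coef in sorted(zip(features, model.coef_[0]),
                         key=lambda pair: abs(pair[1]), reverse=True):
    print(f"{name:>25s}: {coef:+.3f}")
```

This is exactly the kind of review a human hiring committee cannot do for its own intuitions: the model’s “reasons” sit in plain view, ranked and quantified.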

The deeper and darker side of AI is well phrased in this sentence: “At the same time, extensive evidence suggests that AI models can embed human and societal biases and deploy them at scale.” I think the Google algorithm ‘glitches’ probably fall into that category, and that is troubling indeed. The article suggests that the underlying training data are the likely source of those biases, and I am quite sure that is true. I would also suggest that some of the biases in our own thinking likely cannot be separated from the artifacts we create, including search engine algorithms. The jury is out on this one, and I am not aware of active research in this arena, but it would be useful to flesh out and fully understand.

The good news is that there are tactics to help offset bias that is accidentally baked into AI models. The article discusses a few, such as ‘pre-processing’ the training data, applying ‘post-processing’ techniques to a model’s outputs, or using innovative AI training techniques that may reduce bias and error (a rough sketch of the first two appears after the list below). The article then discusses six practices that AI practitioners and business and policy leaders may want to consider when implementing AI models:

  1. Be aware: where can AI correct for bias, and where is it at high risk of exacerbating it?
  2. Establish processes: to test and mitigate bias (without creating a different kind of bias).
  3. Talk about it: know about and discuss where bias can affect human decisions.
  4. Explore: how can humans and machines best help each other?
  5. Invest in bias research: including making data available for evaluation.
  6. Diversify: spread out AI development participation so it actually reflects societal diversity.
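As promised above, here is a minimal sketch of the ‘pre-processing’ and ‘post-processing’ tactics the article names. Everything in it is illustrative rather than taken from the article: `group` stands for a binary protected attribute, `y` for a binary outcome label, and the function names are my own, not from any particular fairness library. The pre-processing approach shown is the common technique known as reweighing; the post-processing approach is per-group threshold adjustment.

```python
# Illustrative sketches of two bias-mitigation tactics. Both assume binary
# labels and a binary protected attribute; all names here are hypothetical.
import numpy as np

def reweigh(y, group):
    """Pre-processing (reweighing): weight each (group, label) cell so that
    label rates look independent of group membership before training."""
    weights = np.ones(len(y), dtype=float)
    for g in np.unique(group):
        for label in np.unique(y):
            mask = (group == g) & (y == label)
            expected = (group == g).mean() * (y == label).mean()
            observed = mask.mean()
            if observed > 0:
                # >1 for under-represented (group, label) combinations
                weights[mask] = expected / observed
    return weights

def equalize_positive_rates(scores, group, target_rate):
    """Post-processing: pick a per-group score threshold so that each group
    is predicted positive at (roughly) the same target rate."""
    preds = np.zeros(len(scores), dtype=int)
    for g in np.unique(group):
        mask = group == g
        cutoff = np.quantile(scores[mask], 1.0 - target_rate)
        preds[mask] = (scores[mask] >= cutoff).astype(int)
    return preds
```

The weights from `reweigh` can be passed as the `sample_weight` argument that most scikit-learn estimators accept in `fit`. The post-processing route leaves the model untouched but trades some calibration for parity, which is exactly the kind of trade-off that point 2 above says teams need an explicit process to evaluate.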

Hopefully this article raises your awareness of the bias that can be reflected in AI models, and of what we might do to minimize its incidence and mitigate its impact.
