How We Won Our First Tech Debate

Sam.AI has been on my radar since my time as an editor, and since I’ve been working from WeWork Labs on West 57th Street, I see founder Raz Choudhery and the Sam.AI team every day. So when Raz mentioned he was participating in a Tech Duels debate on Artificial Intelligence and Machine Learning, I was intrigued.

(My memory of how I got involved)

Me: What’s the question being debated?

Raz: It’s “Is it possible to entirely remove human-induced bias from ML/AI models?”

Me: Very cool. I’ve read a bunch of articles and a few books on this topic and have thoughts I could share.

Raz: Nice. I’d like to hear those thoughts.

So, I’m sure I prattled on about this company’s missteps and that government agency’s overreliance on flawed software, until Raz finally said, “Why don’t you join me in this debate?”

Many hours of research and much writing and rewriting of our opening statement ensued. A quick note before I share what I presented to the 100+ people at Galvanize: in addition to my desire to craft a compelling argument with supporting examples, I was keenly aware of the four minutes I had to fill but not exceed. Limitations are wonderfully inspiring in the creative process! So here’s what I presented.

It is a truth universally acknowledged that bias exists, and will always exist, in Artificial Intelligence.

What is bias? It’s a structural orientation toward a selected outcome. All algorithms are biased in that they are all optimized for a desired outcome. And since that optimization is created by humans, who are fallible, there is a likelihood that the biases, or let’s say preferences, of those humans will unconsciously creep into that set of rules.

It is unconscious, unintended bias that we all have to guard against.

The problem, and the urgent need to find solutions to it, has prompted the New York City Economic Development Corporation to announce the creation of the first-ever NYC Center for Responsible AI. And just yesterday, Stephen Schwarzman, billionaire founder of the investment firm Blackstone, announced he is contributing $188 million to Oxford University to fund an academic center that will include an institute for the study of the ethical implications of artificial intelligence. He has previously given $350 million to MIT for AI studies, including the ethical and policy implications.

So what are real-life examples of bias in AI?

First: The Florida Department of Children and Families (DCF) had the goal of identifying families where a child is at risk of abuse or death. They used data compiled by a private analytics company called SAS, including criminal records, drug and alcohol treatment data, and behavioral health data.

But no data from privately insured patients was used (their data is protected), and this led to bias against families that rely on government programs. This selection bias skewed the results (garbage in, garbage out).
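To make the mechanism concrete, here’s a minimal sketch of selection bias using entirely hypothetical numbers (not the actual DCF or SAS data): even when two groups have identical risk, a dataset that only contains one group will teach a model to associate risk with membership in that group.

```python
# Toy illustration of selection bias (hypothetical numbers, not real DCF data):
# records exist only for families on public programs, so a model trained on
# them can never learn anything about privately insured families.

# Full (unobservable) population: (uses_public_programs, flagged_incident)
population = (
    [(True, True)] * 30 + [(True, False)] * 470      # 500 public-program families
    + [(False, True)] * 30 + [(False, False)] * 470  # 500 privately insured families
)

# The dataset actually available for training: public-program records only.
observed = [rec for rec in population if rec[0]]

def incident_rate(records):
    """Fraction of records flagged for an incident."""
    return sum(1 for _, flagged in records if flagged) / len(records)

print(incident_rate(population))  # 0.06 -- both groups are equally at risk
print(incident_rate(observed))    # 0.06 -- same rate within the visible group
# ...but 100% of the flagged cases the model ever sees come from
# public-program families, so it will associate risk with program
# participation alone.
print(all(uses_public for uses_public, _ in observed))  # True
```

The model isn’t malicious; it simply can’t see the families whose data is protected, so its output inherits the skew of its inputs.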

Next: Amazon’s hiring algorithm was found to be biased toward hiring men, based on optimizing for the properties that correlated with prior successes. If they had mostly hired men previously, it’s no surprise that the algorithm optimized for, or was biased toward, hiring men.

Finally: In March this year, Facebook was charged with violating the Fair Housing Act of 1968 by allowing advertisers to restrict access to housing ads based on religion, race or national origin.

Bias – i.e., choosing, discriminating, optimizing for – will always, of necessity, exist in algorithms. That’s not the problem. The problem is unintended, unwanted bias that is hidden in a black box and can neither be audited nor held accountable.

Many businesses that claim their secret sauce is AI will not let third parties examine their algorithms for bias, claiming they are proprietary intellectual property.

How can we live with biased AI?

  • With government regulation to set minimum standards – with the lightest touch possible
  • With transparency on goals, appropriate rule sets, and relevant data
  • By making an ongoing place at the table for the relevant stakeholders

These can ensure that AI’s biases are those we can all live with.

Most companies are not run by mustache-twirling guys and gals. The benefits of NLP and Machine Learning are substantial, but the implementation of AI has to be transparent, iteratively improved, and accountable to users.

We have to embrace the reality of the risk of bias, and then have processes for transparency, ongoing stakeholder input, and ongoing correction.

Human bias cannot be eliminated, so let’s manage it right.
