
Bad Things Can Come from Non-Neutral Technology

Andrew Pery

February 27, 2020


Artificial Intelligence (AI) is becoming embedded in almost every facet of our lives. According to a Deloitte study, shipments of devices with embedded AI are poised to increase from 79 million in 2018 to 1.2 billion by 2022. The potential benefits and social impact of AI are substantial. However, when the technology is not applied responsibly, it can have unintended negative consequences.

Below are three examples of bad, biased, or unethical AI that this article will explore:

  • Facial Recognition. While facial recognition technologies can yield great benefits for ID verification and security, applying them to other purposes exposes vulnerabilities such as recognition bias and unequal error rates.
  • Criminal Justice. AI is also commonly used for criminal risk assessment in the U.S. court system. This application has sometimes generated disproportionately and mistakenly high recidivism predictions for minority offenders.
  • Benefits Entitlement. Increasingly, AI technologies are being applied to systems that manage benefits such as unemployment insurance or disability payments. However, some current systems contain deep flaws that have resulted in thousands of recipients being falsely accused of fraud and temporarily denied benefits.

These challenges may be caused by a number of factors, including poorly designed algorithms and biased data sets. Regardless of the cause, these bad, biased, or unethical applications of AI have the potential to impact people's lives in very real and consequential ways.

The full article, “Bad Things Can Come from Non-neutral Technology,” can be read on the Association for Intelligent Information Management (AIIM) website. The article is part one of a three-part series, “Ethical Use of Data for Training Machine Learning Technology,” by Andrew Pery, digital transformation expert and consultant for ABBYY. Andrew will be presenting “The Ethics of Deep Learning: How to Train Your Machines Without Bias or Bad Habits” at The AIIM Conference, taking place March 3–5 in Dallas. Additional information can be found here.


Andrew Pery

Digital transformation expert and AI Ethics Evangelist for ABBYY

Andrew Pery is an AI Ethics Evangelist at intelligent automation company ABBYY. His expertise is in artificial intelligence (AI) technologies, application software, data privacy, and AI ethics. He has written and presented several papers on the ethical use of AI and is currently co-authoring a book for the American Bar Association. He holds a Master of Laws degree with Distinction from Northwestern University Pritzker School of Law and is a Certified Information Privacy Professional (CIPP/C, CIPP/E) and a Certified Information Professional (CIP/AIIM).

Connect with Andrew on LinkedIn.