Bad Things Can Come from Non-Neutral Technology
Andrew Pery
February 27, 2020
Artificial Intelligence (AI) is becoming embedded in almost every facet of our lives. According to a Deloitte study, shipments of devices with embedded AI are poised to grow from 79 million in 2018 to 1.2 billion by 2022. The potential benefits and social impact of AI are substantial. When the technology is applied improperly, however, it can have unintended negative consequences.
Below are three examples of bad, biased, or unethical AI that this article will explore:
- Facial Recognition. While facial recognition technologies can deliver great benefits for ID verification and security, their use for other purposes raises potential vulnerabilities, such as recognition bias and unequal error rates.
- Criminal Justice. AI is also commonly used for criminal risk assessment in the U.S. court system. These tools can generate disproportionately, and mistakenly, high recidivism predictions for minority offenders.
- Benefits Entitlement. Increasingly, AI technologies are being applied to systems that manage benefits such as unemployment insurance and disability payments. However, some current systems contain deep flaws that have resulted in thousands of recipients being falsely accused of fraud and temporarily denied benefits.
These challenges may stem from a number of factors, including poorly designed algorithms and biased data sets. Regardless of the cause, these bad, biased, or unethical applications of AI have the potential to affect people’s lives in very real and consequential ways.
The full article, “Bad Things Can Come from Non-Neutral Technology,” can be read on the Association for Intelligent Information Management (AIIM) website. It is part one of a three-part series, “Ethical Use of Data for Training Machine Learning Technology,” by Andrew Pery, digital transformation expert and consultant for ABBYY. Andrew will be presenting “The Ethics of Deep Learning: How to Train Your Machines Without Bias or Bad Habits” at The AIIM Conference, taking place March 3–5 in Dallas. Additional information can be found here.