Artificial intelligence has been transformative, allowing organizations to unlock greater efficiency, lower costs, and strengthen their businesses in many ways. But it is not flawless.
AI’s three biggest limitations are (1) the quality of the data it is trained on, (2) algorithmic bias, and (3) its “black box” nature.
- Inaccuracy in Data Analysis:
AI programs can only learn from the information we provide them. If that data is incomplete or unreliable, the results will be inaccurate or skewed: an AI system can only be as smart or effective as the quality of the data behind it.
For example, Amazon began using an AI program to review job applicants in 2014. It was trained on the resumes submitted over the previous ten years, most of which came from men. The system mistakenly concluded that being male was a preferred quality in new hires and began filtering out female candidates.
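A toy sketch can make this concrete. The data and scoring rule below are entirely hypothetical (this is not Amazon’s actual system); the point is only that a model fitted to skewed historical decisions reproduces the skew rather than measuring real job fitness:

```python
from collections import Counter

# Hypothetical historical hiring data, skewed toward male applicants
# (mirroring the imbalance described above).
history = (
    [("male", "hired")] * 80 + [("male", "rejected")] * 10 +
    [("female", "hired")] * 5 + [("female", "rejected")] * 5
)

counts = Counter(history)

def hire_rate(gender):
    """A naive 'model': score each group by its historical hire rate."""
    hired = counts[(gender, "hired")]
    total = hired + counts[(gender, "rejected")]
    return hired / total

# The model has learned the bias in the data, not actual candidate quality.
print(f"male: {hire_rate('male'):.2f}")      # ~0.89
print(f"female: {hire_rate('female'):.2f}")  # 0.50
```

Nothing in the scoring rule mentions gender preference explicitly; the bias comes entirely from the training data, which is exactly the failure mode the Amazon example illustrates.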
- Algorithmic Bias:
An algorithm is a set of instructions, whether written by a human programmer or generated automatically, that a machine follows to complete a task. If the algorithm itself is faulty or biased, it will produce unfair results that cannot be relied on. Bias often emerges from the way programmers design the algorithm, favoring certain desired or self-serving criteria. Algorithmic bias is common across large platforms such as social media sites and search engines.
For example, Facebook deployed an algorithm to remove hate speech in 2017. It was later reported that the algorithm removed hate speech targeting white men but allowed hate speech against black children. This happened because the algorithm was designed to recognize only broad categories such as ‘whites’, ‘blacks’, ‘Muslims’, ‘terrorists’ and ‘Nazis’, not specific subsets of those categories, so attacks aimed at a subset slipped through.
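The flaw in that design can be sketched in a few lines. This is a hypothetical simplification, not Facebook’s actual moderation rules: a filter that matches only a fixed list of broad categories never fires on a more specific target:

```python
# Hypothetical sketch: a moderation rule that protects only broad
# categories. (Not Facebook's real rule set.)
PROTECTED_CATEGORIES = {"whites", "blacks", "muslims"}

def is_protected(target):
    # Exact match against broad categories only; a more specific target
    # such as "black children" falls through unprotected.
    return target in PROTECTED_CATEGORIES

print(is_protected("whites"))          # True  -> post removed
print(is_protected("black children"))  # False -> post allowed
```

The bias here is not in any single line of code but in the choice of what the rule checks for, which is why such flaws are easy to miss until the system is already in use.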
- AI’s “Black Box” Nature:
AI is renowned for its ability to learn from large volumes of data, discover underlying patterns, and make data-driven decisions. But even when the system produces accurate results, there is a major drawback: it cannot express or explain how it arrived at its conclusion. This raises an obvious question: how can we trust the system in sensitive matters such as national security, governance, or high-stakes business decisions?
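A minimal sketch of the problem, using made-up weights and inputs: the model produces a decision, but its “reasoning” is buried in numeric parameters that mean nothing to a human reviewer:

```python
# Hypothetical learned weights -- opaque numbers, not human-readable rules.
weights = [0.42, -1.3, 0.07, 2.1]

def predict(features):
    """Return a decision with no accompanying explanation."""
    score = sum(w * x for w, x in zip(weights, features))
    return "approve" if score > 0 else "deny"

decision = predict([1.0, 0.5, 3.0, 0.2])
print(decision)  # an answer, but no account of *why*
```

Real systems have millions or billions of such parameters rather than four, which is precisely why asking “why did it decide that?” has no straightforward answer.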
Because of the high risks posed by these limitations, governments, innovators, business leaders and regulators should take an ethical approach to AI technology.