The Dark Side of AI

Josh Henry

The rise of AI is as inevitable as the next data breach. 

The 21st century has seen a massive resurgence in advanced AI. We can now process large amounts of data in very little time, and machine learning lets systems improve from their own mistakes. AI has reached nearly every industry, from the surgery demonstrated by a robot at the Children’s National Medical Center to the rise of self-driving cars.

However, we have also begun to study the dark side of algorithms, and of artificial intelligence in general. People do not often consider the biases of AI. The federal court system, for instance, uses voice recognition, facial recognition, and sentencing-guideline software. We would all like to believe that AI algorithms increase neutrality and make decisions without emotion, but that has not proved to be the case.

AI Issues

Algorithms, by design, have inherent flaws. After all, humans design the algorithms, so algorithms are subject to human error. Unfortunately, some of their attributes and features carry biases tied to age, gender, race, color, national origin, religion, marital status, education level, and even income sources. The problem worsens when you consider all the ways AI is used to evaluate people. The federal court system is just one prominent example: reports state that at least 15 states use AI or some type of machine learning program to estimate the risk that prisoners will re-offend.
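
To see the mechanism concretely, consider the following minimal sketch in Python. The data is synthetic and the feature names are hypothetical; this is not any real sentencing tool, only an illustration of how a protected attribute included as a feature gets weighted like any other, so a model trained on biased historical decisions reproduces that bias.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical features: prior_offenses is plausibly predictive;
# group is a protected attribute (e.g., race or gender, encoded 0/1).
prior_offenses = rng.poisson(2, n)
group = rng.integers(0, 2, n)

# Biased historical labels: past decisions penalized group 1, so the
# training data itself encodes the bias the model will learn.
logit = 0.8 * prior_offenses + 1.5 * group - 3.0
reoffended = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([prior_offenses, group])
model = LogisticRegression().fit(X, reoffended)

# The learned coefficient on `group` is large and positive: the model
# has absorbed the historical bias, not just the legitimate signal.
print(dict(zip(["prior_offenses", "group"], model.coef_[0])))
```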

Downloading Decisions

When someone’s life is in the hands of AI software with biased tendencies, the ensuing problems are easy to predict. Other industries that have a major impact on people’s lives also use these algorithms to make very important decisions; banking, finance, and insurance are just a few examples. Consider someone who is declined for a loan to start a small business. What if they were declined simply for being a woman? These are life-changing decisions in the hands of extremely fallible programs. Even government-funded aid programs use machine learning techniques to determine who receives federal aid.
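
One practical safeguard is to audit a model’s decisions for disparate impact before relying on them. Below is a hedged sketch of such a check on a hypothetical loan-decision log; the 80% cutoff is the “four-fifths rule,” a heuristic borrowed from U.S. employment guidelines, not a hard legal line.

```python
# Hypothetical decision log: (applicant group, approved?)
approvals = [
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", True), ("women", False), ("women", False), ("women", False),
]

def approval_rate(records, group):
    decisions = [ok for g, ok in records if g == group]
    return sum(decisions) / len(decisions)

rate_m = approval_rate(approvals, "men")
rate_w = approval_rate(approvals, "women")

# Four-fifths rule: a selection rate below 80% of the highest group's
# rate is commonly treated as a red flag for disparate impact.
ratio = rate_w / rate_m
print(f"men: {rate_m:.0%}, women: {rate_w:.0%}, ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Possible disparate impact -- audit the model and its inputs.")
```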

Bad Examples

A few events from 2017 illustrate this. Gender biases were found in Google Translate when handling Turkish–English translations: Turkish uses a gender-neutral pronoun, yet translations assigned stereotyped genders, rendering sentences as “he is a doctor” but “she is a nurse.” Sadly, that was not all that went wrong for Google that year. An AI feature the company had launched for quick smart replies yielded a racist emoji suggestion: when a user received a gun emoji, one of the three recommended responses was a man wearing a turban. Google responded quickly by changing the algorithm.

We Need Responsibility

Those are just a few examples of how AI and algorithms have proven to be biased in astonishing ways. These algorithms affect more and more people as their popularity increases. Humans need to set firmer boundaries for AI, and they need to be more vigilant about the programs they write. Related failures, such as Elsagate, have exposed children to obscene material; as members of the public, we need to be more aware of what our kids are doing while browsing online. AI and ML flaws are being exposed and exploited, producing unfair and unwarranted results that are completely avoidable.

 
