Not Neutral: A Look at Bias in Artificial Intelligence Algorithms That Are Assumed to Be Objective
DC Field | Value | Language |
dc.contributor.advisor | Carter, Teresa | |
dc.contributor.author | Warfield, Bradley Wayne Akins | |
dc.date.accessioned | 2020-12-08T14:51:13Z | |
dc.date.available | 2020-12-08T14:51:13Z | |
dc.date.issued | 2020-11-18 | |
dc.description.abstract | The world is becoming more digital, and data is ubiquitous. Algorithms are needed to make sense of all of that data, but neither the data, nor the algorithms, nor the use of their outputs is neutral. Each often reflects the biases of the humans involved in collecting, analyzing, and interpreting the data, and biased algorithms sometimes amplify human bias. Biased risk-assessment algorithms have caused people to spend more time in jail, and algorithms and data intended to help racial minorities have sometimes hurt them instead by limiting their voices. Data and algorithms hold great potential for protecting civil rights, but they can also facilitate enormous harm to those rights. Biased algorithms shape what society considers fact. The Internet cannot be navigated without a search engine, yet search engines are built around advertisers and profits, not knowledge or high-quality information. Biased search engines, especially Google Search, have radicalized domestic terrorists and perpetuated harmful stereotypes. Because every part of the process (choosing data, collecting data, writing an algorithm, and using the outputs) can contain harmful bias, care and transparency are needed throughout the entire process. Data scientists and system designers may not be able to fully remove bias from algorithms, but algorithms can be improved. Through education and the creation of a public, non-profit search engine, society will be better able to take advantage of huge amounts of data while increasing equality and reducing harm. | en_US |
dc.identifier.uri | http://hdl.handle.net/11558/5419 | |
dc.language.iso | en_US | en_US |
dc.subject | Artificial intelligence | en_US |
dc.subject | Machine learning | en_US |
dc.subject | Systemic racism | en_US |
dc.subject | Algorithms | en_US |
dc.subject | Algorithmic justice | en_US |
dc.subject | Big Data | en_US |
dc.subject | Equity | en_US |
dc.title | Not Neutral: A Look at Bias in Artificial Intelligence Algorithms That Are Assumed to Be Objective | en_US |
dc.type | Thesis | en_US |