Show simple item record

dc.contributor.advisor   Carter, Teresa
dc.contributor.author    Warfield, Bradley Wayne Akins
dc.date.accessioned      2020-12-08T14:51:13Z
dc.date.available        2020-12-08T14:51:13Z
dc.date.issued           2020-11-18
dc.identifier.uri        http://hdl.handle.net/11558/5419
dc.description.abstract  The world is becoming more digital, and data is ubiquitous. Algorithms are needed to help make sense of all of that data, but neither the data, nor the algorithms, nor the use of their outputs is neutral. They often reflect the biases of the humans involved in collecting, analyzing, and interpreting the data, and biased algorithms can amplify human bias. Biased algorithms have caused people to spend more time in jail, and algorithms and data intended to help racial minorities have sometimes instead hurt them by limiting their voices. Data and algorithms have great potential for protecting civil rights, but they can also facilitate enormous harm to those same rights. Biased algorithms also shape what is considered fact in society. The Internet cannot practically be navigated without a search engine, yet search engines are built around advertisers and profits, not knowledge or high-quality information. Biased search engines, especially Google Search, have radicalized domestic terrorists and perpetuated harmful stereotypes. Since every step of the process, from choosing and collecting data to writing an algorithm and using its outputs, can introduce harmful bias, care and transparency are needed throughout. Data scientists and system designers may not be able to fully remove bias from algorithms, but algorithms can be improved. Through education and the creation of a public non-profit search engine, society can better take advantage of huge amounts of data while increasing equality and reducing harm.  en_US
dc.language.iso          en_US  en_US
dc.subject               Artificial intelligence  en_US
dc.subject               Machine learning  en_US
dc.subject               Systemic racism  en_US
dc.subject               Algorithms  en_US
dc.subject               Algorithmic justice  en_US
dc.subject               Big Data  en_US
dc.subject               Equity  en_US
dc.title                 Not Neutral: A Look at Bias in Artificial Intelligence Algorithms that Are Assumed to Be Objective  en_US
dc.type                  Thesis  en_US



