Artificial Intelligence & Ethics in GovCon
As shared by Hannah Altman
Artificial intelligence (AI) is changing the way we interact with the world; the applications seem limitless. But as with any nascent technology, its growth is outpacing regulation. This is especially concerning when world leaders – the United States as well as its near-peer adversaries – are actively harnessing the power of AI for military applications. How we treat and regulate AI as an offensive tool will shape the defense landscape for decades to come, as other countries and even non-state actors follow suit in adopting AI into their security strategies. As the use of AI as a tool for lethality increases, the United States must lead by creating guidelines that prevent bias and promote the responsible use of AI.
Very basically, artificial intelligence refers to a computer or computing system trained by humans and designed to execute tasks. The more complex the task, the more hours people must spend training the machine. No machine is designed to be unethical or biased – the people programming it bring their own personal biases and worldviews, intentionally or by accident. A prominent example of bias in artificial intelligence is the use of facial recognition in law enforcement settings. People of color – women of color especially – are more likely to be incorrectly identified by facial recognition software than their white counterparts. The cameras themselves don’t house inherent bias. But a homogeneous group of coders and decision-makers, with similar backgrounds and demographics, will produce bias by omission. If the group of coders is not inclusive, if it does not contain a variety of perspectives and experiences, we cannot expect the AI it builds to reflect them either.
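To make that point concrete, here is a minimal sketch of a per-group error audit, the kind of check that surfaces the disparate misidentification rates reported for facial recognition. Everything in it is hypothetical: the group labels, the records, and the resulting numbers stand in for a real labeled evaluation set.

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic_group, actually_a_match, model_said_match).
# These group names and outcomes are illustrative placeholders, not real benchmark data.
records = [
    ("group_a", False, False), ("group_a", False, True), ("group_a", True, True),
    ("group_b", False, True), ("group_b", False, True), ("group_b", True, True),
    ("group_b", False, False), ("group_a", False, False),
]

# Count non-matches and false matches per group to estimate a false-match rate.
non_matches = defaultdict(int)
false_matches = defaultdict(int)
for group, is_match, predicted_match in records:
    if not is_match:
        non_matches[group] += 1
        if predicted_match:
            false_matches[group] += 1

for group in sorted(non_matches):
    fmr = false_matches[group] / non_matches[group]
    print(f"{group}: false-match rate = {fmr:.0%}")
```

A system can look accurate in the aggregate while failing one group badly; only breaking the error rate out by group, as above, makes that visible.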
So where does U.S. artificial intelligence strategy go from here? In 2020, the Department of Defense published its Ethical Principles for Artificial Intelligence, which state that AI must be responsible, equitable, traceable, reliable, and governable. Accomplishing this will require people, and a lot of them. At govmates, we like to talk about the human element – goals are best accomplished when technology complements the work done by people and vice versa. Creating ethical AI means focusing on the people doing the coding and the people working alongside the technology. For the data sets to be complete, they need to be inclusive. We need voices and perspectives that are currently being left out of the conversation. We need transparency in algorithms. The data is only as good as we make it, and right now there is room to make it a lot better.
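One way to turn "complete, inclusive data sets" into something checkable is a representation audit run before training ever starts. The sketch below assumes a training manifest tagged by demographic group; the group names, counts, and the 10% floor are all hypothetical choices for illustration.

```python
from collections import Counter

# Hypothetical training manifest: one group tag per labeled example.
# Names and counts are illustrative placeholders only.
training_manifest = ["group_a"] * 900 + ["group_b"] * 80 + ["group_c"] * 20

counts = Counter(training_manifest)
total = sum(counts.values())

# Flag any group falling below a chosen representation floor (here 10%),
# a simple completeness check on the data before any model is trained.
FLOOR = 0.10
for group, n in sorted(counts.items()):
    share = n / total
    flag = "  <-- under-represented" if share < FLOOR else ""
    print(f"{group}: {share:.1%}{flag}")
```

The check itself is trivial; the point is that publishing it, along with the thresholds chosen, is exactly the kind of algorithmic transparency this piece argues for.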