Ctrl: Don’t Be Evil

Caio Brighenti, Maroon-News Staff

As a double major, I’m always looking for issues that bridge both of my majors. Unfortunately, there’s not a whole lot out there that falls into the realm of Computer Science and Peace and Conflict Studies simultaneously. One issue, however, nearly perfectly splits across the two fields: cybersecurity. 

The last few weeks have been especially interesting in the cyberspace realm, particularly on its private business side. A letter formally protesting Google’s contracting work with the U.S. government on a Pentagon program called “Project Maven” obtained over 3,000 signatures from Google employees. It very clearly stated the opinion that “Google should not be in the business of war,” citing the company’s famous motto, “Don’t Be Evil.” Google’s board of directors reacted by claiming the technology it provides would never help “operate or fly drones,” nor would it “be used to launch weapons.” 

Of course, no one outside of Google and the Pentagon knows the extent of Google’s involvement with Project Maven, but we can get an idea of what the project entails through an article on the Department of Defense website. The article is surprisingly transparent, stating that Project Maven “focuses on computer vision” in order to identify objects in images or videos, specifically citing applications in the conflict against ISIS in Iraq and Syria. Additionally, the article highlights the importance of private businesses such as Google, directly stating that the project is only feasible with the help of and partnership with commercial partners. Thus, it is easy to see why such an outrage was sparked within Google, even if its technologies aren’t (technically) firing the shot. 

Were employees overreacting, or is there legitimate cause for concern? Obviously there are valid arguments on both sides, but I’m certainly on the “really concerned” side of things. I have a problem with letting computer algorithms make or inform decisions of such importance, at least in the current state of artificial intelligence. It’s not that artificial intelligence (AI) isn’t useful – in fact, it’d probably be more effective than humans most of the time – but the real problem emerges when something goes wrong. 

There are no perfect algorithms; mistakes do happen, and they will happen with systems like this one. When that inevitably comes to pass, whose fault will it be? You can’t hold a computer accountable. Do you blame the programmer? Well, there are thousands of them involved in a project of that complexity. It would be nearly impossible to even understand what part of the algorithm failed in the first place. Deep learning, the subfield of artificial intelligence employed in systems like this, is predicated on running calculations at a scale and complexity that humans could never dream of comprehending. In essence, you know when it works and when it doesn’t, but you’ll never really know why.

There are other problems, too. In some ways, artificial intelligence systems are just highly complex pattern recognition systems. They look at massive datasets and figure out what patterns are present. The system never understands the problem; it just really understands the data. So what happens if there’s a problem in your data? One example that isn’t talked about enough is social and cognitive bias in data. If we’re teaching these systems how things work based on the data we generate, then the system will learn the biases present in our society. 

Medium writer Laura Douglas showed this in a great article in which she trained a word prediction system on a Google News dataset of 100 billion words. For instance, Douglas had her algorithm fill in the blank in the sentence, “Man is to computer programmer as woman is to blank.” The computer’s best guess, based on the data it was given, was “homemaker.” It’s easy to see how dangerous this could be in the context of military targeting. In a society where implicit racial bias is clearly a factor in policing, what would happen to an algorithm trained on a dataset of the country’s prison records? What if news articles, which are more likely to label the perpetrator of a crime as a terrorist if he is Muslim, were used to help a system like Project Maven identify potential terrorists?
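To make the mechanism concrete, here is a toy sketch of how that kind of analogy completion works: each word is represented as a list of numbers, and the system answers “man is to programmer as woman is to blank” by doing arithmetic on those lists and picking the closest word. The vectors below are invented for illustration only — real systems learn vectors with hundreds of dimensions from billions of words — and the bias is deliberately baked in to mirror the bias a real system absorbs from its data.

```python
import math

# Toy "word vectors" invented for this illustration -- NOT from any real
# dataset. The second coordinate loosely encodes a gender association,
# which is exactly the kind of pattern a real model picks up from text.
vectors = {
    "man":        [1.0, 0.2, 0.1],
    "woman":      [1.0, 0.8, 0.1],
    "programmer": [0.2, 0.2, 0.9],
    "homemaker":  [0.2, 0.8, 0.9],
    "engineer":   [0.3, 0.1, 0.8],
}

def cosine(a, b):
    """Cosine similarity: how closely two vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' using the vector offset b - a + c."""
    target = [vb - va + vc
              for va, vb, vc in zip(vectors[a], vectors[b], vectors[c])]
    # Pick the remaining word whose vector is most similar to the target.
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("man", "programmer", "woman"))  # -> homemaker (with these toy vectors)
```

The point is that nothing in the code “decides” to be sexist: the answer falls straight out of patterns in the numbers, which fall straight out of patterns in the data.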

It’s not hard to imagine a dystopian scenario emerging here, and I’m not the only one who sees that. Recently, a group of 34 high-profile tech companies signed what some are calling a “Digital Geneva Convention,” promising not to provide their services to governments in the cyberwar realm. Hopefully, more companies will follow suit. Notably, Google has not signed the agreement. Perhaps “Don’t Be Evil” is too much to ask for in today’s world. 

I mean, come on – even Facebook signed it.

Contact Caio Brighenti at [email protected]