The recipe for a good Super Bowl ad is fairly simple: a handful of celebrities and a punchline or two. With such a large audience watching, it is almost impossible not to persuade someone to buy your product. Among the variety of ads that followed this basic script, one stood out from the rest: The Dawn Project’s boycott Tesla ad.
The Dawn Project spent $552,000 on an ad during this past Super Bowl as part of its campaign against Tesla and what it calls the company’s fatal self-driving feature. Unlike the other ads, The Dawn Project’s featured no celebrities, only videos of a Tesla blowing past school buses with their stop signs extended and running over child-sized test dummies. The spine-chilling clip ended with the line, “Boycott Tesla to keep your kids safe.” The ad called on consumers to use the power of the purse to hold Tesla accountable.
The Dawn Project’s mission statement echoes the same doomsday rhetoric seen in its Super Bowl ad. The organization aims to make computers safer for humanity and has a lot to say about the current state of technology in the world.
“Ordinary commercial software is riddled with bugs and security defects. These can easily be purchased from hackers, enabling any dictator or billionaire to destroy our critical infrastructure, like the electric power grid, leaving us shivering and starving in the dark.” The statement goes on to compare hackers to wild animals that “steal our money, privacy and personal information who can now kill us all with a click of a mouse.” The Dawn Project’s warnings evoke a sense of fear among consumers that technology can be fatal if not used safely.
In a contentious interview with Fox News’s Liz Claman, Dan O’Dowd, The Dawn Project’s founder, spoke out against Tesla and its dangers. Claman seemed unconvinced by O’Dowd’s fear-mongering tactics and raised some interesting questions. In particular, O’Dowd disputed Tesla’s claim that its Full Self-Driving cars are safer than driving yourself. In response, Claman cited human error.
“If you look at the accidents of people who don’t drive Teslas, there are hundreds and hundreds of them every single year. Human error is a huge problem and a distracted driver is something that you can blame much more easily on people texting while driving [than on] a Tesla autopilot,” Claman said.
Claman makes a reasonable point against O’Dowd, and it renders the issue much less black and white. With or without Tesla, human error behind the wheel will continue to cause fatalities every year. Should we halt human progress and innovation in the face of these complications?
O’Dowd thinks so, and takes it a step further by aiming his anger at Elon Musk himself, as seen on The Dawn Project’s website.
“Anyone who buys a Tesla from Elon Musk is an enabler for his reckless behavior, including his self-driving experiments that have resulted in over 1,000 crashes and at least 33 tragic deaths,” the website reads.
O’Dowd’s previous research into software designed for government use demonstrates his commitment to reliable technology. He prides himself on creating “unhackable” software systems. For him, there is no room for mistakes in innovation, especially when so much of our critical infrastructure depends on it. He thus positions himself as the antithesis of Musk’s less-than-perfect self-driving cars. Having dedicated his life’s work to building reliable technology, O’Dowd unsurprisingly sees Tesla as an out-of-control, deadly tech company rather than the industry-pioneering enterprise many people believe it to be. Whereas Tesla serves as a model of reckless innovation, O’Dowd hopes to bring modern technology into a world where “systems do not fail,” according to The Dawn Project’s website.
Together, O’Dowd and Musk occupy opposite ends of the modern technology spectrum: one is committed to playing it safe, and the other is determined to innovate no matter the cost.
It’s hard to say where I fall on that spectrum. It’s amazing to see what modern technology is capable of, but its progress can come at a fatal cost to humans. Ultimately, this issue raises a broader question of accountability, a line that modern technology is increasingly blurring. When technology can have fatal consequences, who is responsible?
The fatal consequences of technology can also be seen in modern warfare. The United States has launched drone strikes that have killed thousands of civilians, but when a weapon of mass destruction is operated by a computer thousands of miles away, how can anyone be held accountable? Technology has detached us from the responsibility of protecting human life and placed it in the hands of computers. Innovation will come at a cost, but it’s up to us as a society to decide how much is too much.