
Google’s upcoming AI rules will ‘preclude’ weapons use as Sergey Brin, DeepMind stances revealed

For the past several months, Google has fiercely debated the military applications of artificial intelligence, with many employees opposed to their work being used in weaponry and war settings. Taking that stance would essentially mean ceding a huge market to Amazon and Microsoft, whose employees do not share similar qualms.

A new report today provides some insight on principles that will guide future work, while several positions from within the company have also been highlighted.

Since work on the Department of Defense’s Project Maven was revealed in March, the company promised that it was creating “policies and safeguards around the development and use of our machine learning technologies.”

In a comment to the New York Times, Google revealed that the new principles “precluded the use of A.I. in weaponry,” but did not elaborate further. Employees expect the rules to come in the “next few weeks,” with Sundar Pichai providing an update at last week’s TGIF meeting, noting that the goal is to create rules that “stood the test of time.”

It’s unclear whether these guidelines will be explicit enough for the over 4,000 signatories of an internal petition that asked the Google CEO to cancel Maven and commit to not building “warfare technology.” Some have criticized that these rules should have been in place prior to the DoD contract.

Today’s NYT piece also delves into the viewpoints of Alphabet and Google’s upper echelon. At last week’s company-wide meeting, co-founder Sergey Brin was asked about the topic. Brin has taken moral stands in the past, advocating Google’s exit from China rather than continuing to censor Search. That decision, of course, left Google without a presence in a significant market, with the company only slowly returning in recent years.

As Alphabet’s president, Brin noted extensive discussions with Larry Page and Pichai. However, he made the more nuanced argument that peace at large is better served by governments and militaries working together with companies like Google.

Within Google, Cloud chief scientist Dr. Fei-Fei Li foresaw and cautioned against the media backlash to Maven, while her boss, Cloud head Diane Greene, has defended the company’s work with the military. Government contracts in the coming years will be worth billions, and Amazon and Microsoft employees are less hesitant about such controversial involvement.

Approximately a dozen employees have quit over Google’s work, while the academic backgrounds of many top AI researchers suggest opposition to military contracts. Google’s recently appointed head of AI signed a petition this month opposing autonomous weapons that leave no human to make the final call.

Meanwhile, Alphabet’s DeepMind subsidiary is opposed to the deal. The 2014 acquisition included conditions that prevent the AI lab’s work from being used for military or surveillance purposes. Its co-founders are involved in the current discussions over the new guidelines, according to the NYT.




Abner Li

Editor-in-chief. Interested in the minutiae of Google and Alphabet. Tips/talk: