
Google’s AI principles focus on social benefit & accountability, ban weapons & surveillance

Following the revelation that Google was working with the US military on analyzing drone footage, the company promised that it would develop guidelines to govern its use of AI. Today, Sundar Pichai announced those principles and clarified what kind of research and work the company will and won’t undertake in the future.

Sundar Pichai and company have been working on these principles since the employee backlash first arose, with founders Larry Page and Sergey Brin, along with other top brass, discussing and crystallizing the guidelines. The Google CEO today reiterated his hope that the principles will “stand the test of time.”

There are seven objectives, beginning with how AI and its uses should “be socially beneficial,” with projects only moving forward when the “overall likely benefits substantially exceed the foreseeable risks and downsides.”

Google wants to “avoid creating or reinforcing unfair bias,” given that there are already examples of algorithms reflecting unfair biases due to the training datasets used. Safety will be another key consideration, with strict monitoring throughout development and after deployment, while users will get opportunities for feedback, relevant explanations, and appeal. Privacy will be incorporated from the start, with notice, consent, transparency, and user control built in.
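Dataset-driven bias of this kind is typically surfaced by computing fairness metrics over a model’s outputs per group. As a hypothetical sketch only (the metric choice, function name, and toy data below are illustrative assumptions, not anything Google has published), here is one common measure in Python: the demographic parity gap, i.e. the difference in positive-prediction rates between groups.

```python
# Minimal sketch (illustrative, not Google's tooling): compute the
# demographic parity gap, the spread in positive-prediction rates
# across groups in a classifier's outputs.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return (gap, per-group rates) for binary predictions.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions
    """
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy data: a model that approves group "a" far more often than "b".
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)  # {'a': 0.8, 'b': 0.2}
print(gap)    # 0.6 -> a large gap flags a potential unfair bias to audit
```

A large gap like this doesn’t prove unfairness on its own, but it is the kind of signal that would prompt a closer audit of the training data.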

The last two principles deal with “upholding high standards of scientific excellence” and making AI available only “for uses that accord with these principles.” Google will work to limit “potentially harmful or abusive applications,” while Pichai explicitly notes what AI applications Google will not pursue.

This includes technologies that cause or are likely to cause overall harm, as well as weapons whose primary purpose is to injure people. Google is also ruling out surveillance technologies that violate internationally accepted norms, along with those that contravene international law and human rights.

The CEO notes that while Google will not develop AI for use in weapons, it does not rule out working with governments and militaries on cybersecurity, training, military recruitment, and more:

These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.

Google AI today also published technical practices on how to implement these principles. The full list of seven objectives is below:

1. Be socially beneficial. 

The expanded reach of new technologies increasingly touches society as a whole. Advances in AI will have transformative impacts in a wide range of fields, including healthcare, security, energy, transportation, manufacturing, and entertainment. As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.

AI also enhances our ability to understand the meaning of content at scale. We will strive to make high-quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where we operate. And we will continue to thoughtfully evaluate when to make our technologies available on a non-commercial basis.

2. Avoid creating or reinforcing unfair bias.

AI algorithms and datasets can reflect, reinforce, or reduce unfair biases.  We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.

3. Be built and tested for safety.

We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.  We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research. In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.

4. Be accountable to people.

We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.

5. Incorporate privacy design principles.

We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.

6. Uphold high standards of scientific excellence.

Technological innovation is rooted in the scientific method and a commitment to open inquiry, intellectual rigor, integrity, and collaboration. AI tools have the potential to unlock new realms of scientific research and knowledge in critical domains like biology, chemistry, medicine, and environmental sciences. We aspire to high standards of scientific excellence as we work to progress AI development.

We will work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.

7. Be made available for uses that accord with these principles.  

Many technologies have multiple uses. We will work to limit potentially harmful or abusive applications. As we develop and deploy AI technologies, we will evaluate likely uses in light of the following factors:

  • Primary purpose and use: the primary purpose and likely use of a technology and application, including how closely the solution is related to or adaptable to a harmful use
  • Nature and uniqueness: whether we are making available technology that is unique or more generally available
  • Scale: whether the use of this technology will have significant impact
  • Nature of Google’s involvement: whether we are providing general-purpose tools, integrating tools for customers, or developing custom solutions

