In a blog post this morning, Google CEO Sundar Pichai outlined the principles that will govern the company’s military work going forward. The new guidelines come after weeks of internal turmoil, during which employees threatened to resign, and in some cases did resign, in opposition to agreements Google had made with the federal government to leverage its AI capabilities for the U.S. military. The company told employees last week that it would not renew its involvement in the initiative, known as Project Maven.
Pichai’s memo lists seven principles. AI projects need to “be socially beneficial” and “avoid creating or reinforcing unfair bias.” They should be “built and tested for safety,” be built with privacy in mind, and “uphold high standards of scientific excellence.” And they should only be made available for purposes that fall in line with the above.
Sure. And here is what Google says it will not pursue:
1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
3. Technologies that gather or use information for surveillance violating internationally accepted norms.
4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.
That last bullet point, about upholding human rights, is probably the most important one, because it makes working with the U.S. government pretty difficult! America does not have a great track record of adhering to “widely accepted principles of international law and human rights,” or of keeping its word. Sure, Google might not be the company developing such systems firsthand, but it would still be helping prop them up by working on defense contracts.