Google and Alphabet CEO Sundar Pichai has called for new regulations in the world of AI, highlighting the dangers posed by technologies like facial recognition and deepfakes, while stressing that any legislation must balance “potential harms ... with social opportunities.”

“[T]here is no question in my mind that artificial intelligence needs to be regulated. It is too important not to,” writes Pichai in an editorial for The Financial Times. “The only question is how to approach it.”

Although Pichai says new regulation is needed, he advocates a cautious approach that might not see many significant controls placed on AI. He notes that for some products like self-driving cars, “appropriate new rules” should be introduced. But in other areas, like healthcare, existing frameworks can be extended to cover AI-assisted products.

“Companies such as ours cannot just build promising new technology and let market forces decide how it will be used,” writes Pichai. “It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone.”

The Alphabet CEO, who heads perhaps the most prominent AI company in the world, also stresses that “international alignment will be critical to making global standards work,” highlighting a potential area of difficulty for tech companies when it comes to AI regulation.

Currently, US and EU plans for AI regulation seem to be diverging. While the White House is advocating for light-touch regulation that avoids “overreach” in order to encourage innovation, the EU is considering more direct intervention, such as a five-year ban on facial recognition. As with regulations on data privacy, any divergence between the US and EU will create additional costs and technical challenges for international firms like Google.

But Pichai’s editorial also foregrounds unresolved questions in Google’s own approach to AI regulation. For example, the CEO notes that the company’s internal principles ban certain uses of the technology, “such as to support mass surveillance or violate human rights.” It’s because of concerns like this that Google doesn’t sell facial recognition technology.

At the same time, Pichai doesn’t call for a halt to sales of facial recognition by rivals like Amazon and others who do offer the technology. If Google believes such technologies are a danger to the public, why does the company not call for direct regulation on this specific issue?

Ultimately, Google — like government regulators — must balance the promise and threat of AI technologies. But as Pichai notes, “principles that remain on paper are meaningless.” Sooner or later, talk about the need for regulation is going to have to turn into action.