Alongside new export controls on artificial intelligence software introduced last week, the White House urged lawmakers, businesses, and European allies to avoid overregulating the technology. It also maintained its refusal to participate in a project proposed by the Group of Seven leading economies that seeks to establish shared principles and regulations on artificial intelligence, even as the U.S. prepares to take over the group's rotating presidency this year.
The U.S. has rejected working with other G-7 nations on the project, known as the Global Partnership on Artificial Intelligence, maintaining that the plan would be overly restrictive.
Kay Mathiesen, an associate professor at Northeastern who focuses on information and computer ethics and justice, contends that the U.S.'s refusal to cooperate with other nations on a united plan could come back to hurt its residents.
Advocates of the plan say it would help government leaders remain apprised of the development of the technology. The project, they say, could also help build consensus among the international community on limiting certain uses of artificial intelligence, especially in cases where it's found to be controlling citizens or violating their privacy and autonomy.
U.S. leaders, including deputy chief technology officer Lynne Parker, counter that the proposal appears overly bureaucratic and could hinder the development of artificial intelligence at U.S. tech companies.
But Mathiesen says that many companies are already ahead of the curve in considering or implementing oversight mechanisms to guide the ethical development of their products. She says it's important to rein in the potentially harmful effects of artificial intelligence to ensure that the benefits of the technology are not outweighed by its costs.
"The idea that we should just not regulate at all or not even think about this, because maybe then we might limit ourselves, I think that's a pretty simplistic view," says Mathiesen, a professor of philosophy who studies political philosophy and ethics. "It's not like the G-7 is going to have the power to all of a sudden impose regulations on U.S. industry. So that argument that merely by joining this [group] and beginning to think these things through, and do research on this, and develop [policy] recommendations—that that by itself is going to put us behind on artificial intelligence doesn't hold a lot of water."
Mathiesen suggests that failing to work with other countries in addressing privacy issues stemming from the unchecked spread of artificial intelligence products—such as facial recognition—could result in consumer backlash, and thereby slow down the development of artificial intelligence in the U.S.
"The technology is advancing incredibly rapidly and we want to make sure that we're thinking ahead, and we're building at the beginning protections for consumers before these things come out and it's too late and we have to try to fix problems that we could've prevented," she says.
The Global Partnership on Artificial Intelligence, first proposed in December 2018, aims to ensure that artificial intelligence projects are designed responsibly and transparently, in a way that prioritizes human values such as privacy. The initiative received a major boost from Canada, which held the G-7's rotating presidency at the time, and was kept alive by France the following year.
In addition to Canada and France, the other G-7 countries on board with the project are Germany, Italy, Japan, and the U.K. The European Union, India, and New Zealand have also expressed interest. Mathiesen says that while she understands the concerns of some U.S. government officials about being out-competed, it's important for the U.S. to be a participating member in this effort, especially while the technology is still in its nascent stages.
"In a way, it's better that the U.S. has buy-in at the beginning and is at the table to make these arguments about how do we balance concerns about things like privacy, security, and possible harm that could be produced by artificial intelligence? How do we balance that with also wanting to enable companies and inventors to create new things with artificial intelligence that can be economically and socially beneficial?" she says.
Mathiesen suggests that failing to engage in these conversations with the wider international community could leave the U.S. trailing behind.
"I think that the American citizens are going to suffer for that, just like they do now with the lack of data privacy," she says.
Last year, in conjunction with the global professional services company Accenture, researchers at Northeastern's Ethics Institute produced a report that provides organizations with a framework for creating ethics committees to help guide the development of smart machines.