How to regulate AI? Tech firms and their lobbyists are pushing their vision for tech regulation as the EU readies several proposals aimed at "balancing potential harms with opportunities." Google, too, is promoting what it calls sensible ideas for regulating artificial intelligence.
Silicon Valley executives and lobbyists are likely to say they embrace regulation. To that end, they are launching a frenzy of lobbying over what they want that regulation to be, and Europe is set to be one of the first battlegrounds.
Sundar Pichai, the new chief executive of Google parent Alphabet Inc., gave a policy speech Monday in Brussels, where the European Commission is poised to release a raft of new regulatory proposals for the tech business, including a white paper, due next month, on possible rules for artificial intelligence.
Pichai's message to policymakers: sensible regulation must take a proportionate approach to artificial intelligence, balancing potential harms against social opportunities.
"There is no question in my mind that AI needs to be regulated," he said. "The question is how best to approach this."
Silicon Valley companies have faced a growing backlash against their vast market power and their perceived abuses of it. Critics complain that the companies have created an ecosystem designed to vacuum up and weaponize consumers' personal information.
The response from U.S. and European tech companies has generally been to call for some regulation. The open question is what rules for AI they would be willing to accept in practice. The European Commission's proposals will likely provide an early testing ground for how these debates play out.
AI could be among the thorniest topics. Companies like Google see opportunities to profit from deep learning and neural networks, which can spot complex patterns far more quickly than humans. Adopting AI could allow businesses to save time and resources, with applications ranging from lowering factories' power consumption to automatically scanning résumés for recruiting departments.
Researchers and activists warn, however, that such AI systems pose dangers. They risk codifying human biases, such as racism or sexism, and by automating processes like facial recognition, they could enable mass surveillance.
Google and other data-driven companies have adopted their own principles for addressing these threats as they deploy AI, but the EU is pushing ahead with potential regulations regardless.