The European Commission today unveiled a sweeping set of proposals that it hopes will establish the region as a leader in artificial intelligence by focusing on trust and transparency.
The proposals would lead to changes in the way data is collected and shared in an effort to level the playing field between European companies and competitors from the U.S. and China. The EC wants to prevent potential abuses while also building confidence among citizens in order to reap the benefits promised by the technology.
In a series of announcements, EC leaders expressed optimism that AI could help tackle challenges such as climate change, mobility, and health care, along with a determination to keep private tech companies from influencing regulation and dominating the data needed to develop these algorithms.
“We want citizens to trust the new technology,” said Ursula von der Leyen, president of the European Commission. “Technology is always neutral. It depends on what we make with it. And therefore we want the application of these new technologies to deserve the trust of our citizens. This is why we are promoting a responsible human-centric approach to artificial intelligence.”
She added that today’s proposals include “key enablers to make the most of technology, and an ecosystem of trust which mitigates the risk that artificial intelligence may pose to our fundamental values.”
In recent years, Europe has been trying to carve out a distinct identity when it comes to technology development and regulation. The continent faces a tricky balancing act as it tries to crack down on abuses by large tech companies while encouraging local innovation and startups.
Although transformative technologies like AI have been labeled critical to economic survival, Europe is perceived as slipping behind the U.S., where development is being led by tech giants with deep pockets, and China, where the central government is leading the push.
With its latest digital strategy, the EC wants to encourage more cooperation between public and private sectors. The plans call for finally creating a digital single market across the continent, a goal the EC has been pursuing for years with only limited results.
The belief is that pooling data from governments and businesses would create a critical asset Europe could leverage for AI development. The EC is concerned that the bulk of data, particularly data about individuals, is currently gathered and controlled by a handful of corporations.
To boost competition, the EC wants to gather this data in a way that makes it widely available to entrepreneurs interested in using it as the foundation for new services and startups. To make citizens comfortable with this centralization of data, the proposals call for strict guidelines and oversight of AI development.
“We want a fair and competitive economy,” said Margrethe Vestager, vice president of the European Commission for a Europe Fit for the Digital Age. “[We want] full single markets, where companies of all sizes can compete on equal terms, where the road from garage to startup is as short as possible. But it also means an economy where the market power held by a few incumbents cannot be used to block competition. It also means an economy where consumers can be sure that their rights are being respected and where profits are being taxed where they’re made.”
The proposals also call for rules to prevent algorithmic bias that could lead to discrimination. And the commission wants to create a certification process for “high risk” uses of AI, such as in vehicles. For less risky uses, the EC is considering a voluntary labeling system to disclose information around the algorithms and data used.
“We have to make sure that the data used is free of any bias,” von der Leyen said. “If you feed an algorithm with data that is being mainly produced in a field where you have men in the medical sector, then an AI-driven therapeutic or diagnostic instrument will not be fit for purpose if you have a population where [women are] treated with these applications.”
The EC will also continue to study the questions surrounding facial recognition. While uses like unlocking a smartphone are seen as relatively safe, the commission warns that using facial recognition to remotely identify people poses human rights risks. Currently, such use of facial recognition is allowed in Europe only under narrow exceptions, when a substantial public interest is deemed to exist. The commission is proposing a “broad debate on which circumstances might justify exceptions in the future, if any.”
“Without trust, we cannot make all the positive things happen,” Vestager said. “I think you will see that artificial intelligence is not good or bad in itself. It all depends on why and how it’s being used. It can be used to make a chatbot that is much faster and to give us a better consumer experience. But it can also be used to create fake news … None of the positive things will be achieved if we distrust the technology.”
Despite the commission’s lofty ambitions, most of today’s announcements are simply statements of principles and goals. For instance, in a whitepaper released today, the EC proposes the creation of a framework for what it calls “trustworthy artificial intelligence.” But the paper merely summarizes discussions that have happened to date while outlining future steps toward defining and implementing that framework.
The EC did disclose plans to spend almost $21 billion on AI and data research programs, as well as on the platforms that may eventually enable the pooling of data the commission envisions.
While still vague, the policy process has drawn nervous visits from Silicon Valley leaders eager to offer input. In rece