The EU must adopt policies for the development, deployment and use of artificial intelligence (AI) in Europe that ensure it works for, rather than against, society and social well-being, the European Economic and Social Committee (EESC) said in an own-initiative opinion on the societal impact of AI, identifying 11 areas that need to be addressed.
“We need a human-in-command approach to AI, where machines remain machines and people retain control over these machines at all times,” said rapporteur Catelijne Muller (NL – Workers’ Group).
She did not refer to technical control alone. “Humans can and should also be in command of if, when and how AI is used in our daily lives – what tasks we transfer to AI, how transparent it is, if it is to be an ethical player. After all, it is up to us to decide if we want certain jobs to be performed, care to be given or medical decisions to be made by AI, and if we want to accept AI that may jeopardise our safety, privacy or autonomy,” Muller argued.
AI has experienced exponential growth in recent years. The AI market currently amounts to around USD 664 million and is expected to grow to USD 38.8 billion by 2025. It is virtually undisputed that AI can bring significant benefits to society: applications can make farming more sustainable and production processes more environmentally friendly, improve the safety of transport, work and the financial system, provide better medical treatment and serve countless other purposes. AI could even potentially help eradicate disease and poverty.
But the benefits associated with AI can only be achieved if the challenges surrounding it are also addressed. The EESC has identified 11 areas where AI raises societal concerns, ranging from ethics, safety, transparency, privacy and standards to labour, education, access, laws and regulations, governance and democracy, as well as warfare and superintelligence.
These challenges cannot be left to the business community alone: governments, the social partners, scientists and businesses should all be involved. The EESC believes it is time for the EU to set standards and take pole position globally in this area.
“We need pan-European norms and standards for AI, much as we now have for food and household appliances. We need a pan-European ethical code to ensure that AI systems remain compatible with the principles of human dignity, integrity, freedom and cultural and gender diversity, as well as with fundamental human rights,” stressed Catelijne Muller, “and we need labour strategies to retain or create jobs and ensure that workers keep autonomy and pleasure in their work”.
Indeed, the question of AI’s impact on work is central to the debate around AI in Europe, where unemployment rates remain high following the crisis. Although forecasts of the scale of job losses resulting from the deployment of AI over the next 20 years range from a conservative 5% to a catastrophic 100%, implying a jobless society, the rapporteur believes it more likely that, as a recent McKinsey report anticipates, elements or parts of jobs, rather than entire jobs, will be swept away by AI. This is where education, lifelong learning and re-training must come into play, to ensure that workers are supported through the transformation rather than becoming its victims.
The EESC opinion also calls for a European AI infrastructure with open-source learning environments that respect privacy, real-life test environments and high-quality data sets for developing and training AI systems. AI has so far mainly been developed by the “Big 5” (Amazon, Facebook, Apple, Google and Microsoft). Although these companies support the open development of AI, and some of them make their AI development platforms available open-source, this does not guarantee full accessibility. An EU AI infrastructure, possibly accompanied by a European AI certification or label, could not only promote the development of responsible, sustainable AI but also give the EU a competitive advantage.