Artificial intelligence applications increasingly shape societies and the opportunities of their citizens. While it is argued that these applications can greatly benefit humanity, there is also evidence of risk. Recent cases of distorted and discriminatory outcomes from artificial intelligence technologies illustrate how ethical problems and negative social impacts can be built into the criteria and design of AI applications and algorithmic decision-making systems. Algorithmic biases have real-life consequences, and as society moves ever further towards algorithmic forms of decision-making, the need to understand and confront these biases is becoming more urgent.

Ursula von der Leyen, in presenting the new course of the European Commission she leads, expressed her commitment to new legislation for a coordinated European approach to the human and ethical implications of artificial intelligence. The Innovation Ministers participating in the G7 multi-stakeholder conference on Artificial Intelligence stated that

Artificial Intelligence should focus on enabling environments that promote social trust and responsible AI adoption, building on a common human-centered vision.

The announced European legislation could pioneer a coordinated effort towards international regulation of the rapid developments in AI. This legislative effort faces many challenges, at both the local and the international level.

How can a region balance technological development and regulation without losing ground in global competition? How can one legislate on technological developments that do not yet exist? What technical solutions should be adopted to regulate this evolution? And should these solutions be shared and standardized nationally, or globally?

More importantly, when political, economic, cultural and social differences seem to lead to very different interpretations of the values that artificial intelligence applications should embody, a more fundamental question emerges about the global social consequences that such diversity would entail.

Many events concerning Artificial Intelligence are scheduled between now and the end of the year.

One of these is the round table "Artificial Intelligence: Ethics and Algorithmic Biases", which will take place on:

 Monday 21 October, from 10.30 to 12.30, at John Cabot University - Piazza Giuseppe Gioachino Belli 11, Rome.

Participating in the round table are:

  • Francesco Lapenta, Director of John Cabot University Institute of Future and Innovation Studies
  • Kai Härmand, Undersecretary Ministry of Justice - Estonia
  • Irene Sardellitti, European Commission
  • Alexey Malanov, Antivirus expert, Kaspersky
  • Corrado Giustozzi, cybersecurity expert at the Agency for Digital Italy for the development of CERT-PA
  • Massimo Buscema, Semeion Institute
  • Fabio Filocamo, Managing Director of Dnamis, Author of "2081 - Technologies, Humans, Future"
  • Andrea Gilli, Senior Researcher, NATO Defense College
  • Ann-Sophie Leonard, Mercator Fellow, NATO Defense College
  • Luca Baraldi, Cultural Diplomacy Advisor at Energy Way and promoter of the Manifesto of Sensitive Rationality
  • Philip Larrey, Chair of Logic and Epistemology, Author of "Connected World" & "Artificial Humanity"
  • Amedeo Cesta, Research Director at CNR-ISTC
  • Sébastien Bratières, Director of Artificial Intelligence at Translated
  • Ansgar Koene, Senior Research Fellow: ReEnTrust, UnBias & Horizon Policy Impact, Horizon Digital Economy Research Institute, University of Nottingham, Chair for IEEE Standard on Algorithm Bias Considerations
  • Alina Sorgner, Professor of Applied Data Analytics, John Cabot University