Four ethical dilemmas for today's AI
    29/09/23 - #ethics #ai #futurology #artificialintelligence

Stimulating discussion about the ethics of artificial intelligence is a way to raise awareness about the use of AI-powered tools. Debating the dilemmas of some hypothetical future AI, however, is little more than an exercise in style: the effort should be concentrated on concrete problems, close to everyday life and relevant to the evolution of human society, avoiding philosophical abstractions and Hollywood dystopias.

Ethical Dilemma 1: the socio-economic impact

The structure of contemporary society rests on the capitalist link between work productivity and individual fulfillment (mainly economic). One of the first effects of the intensive use of AI in these processes is a drastic shift in the balance of work, loosening the correlation between productivity and human effort.

The fundamental questions are:

  • How can the benefits of a post-work society be distributed with an adequate level of equity, without undermining the incentives for investment and ingenuity?
  • How can we replace the indispensable socio-cultural exchanges that today are so closely tied to work and production?

Ethical Dilemma 2: democracy

Developing and managing large AI models requires highly specialized skills and the capital to acquire and maintain hardware with very high computational performance. AI does not belong to everyone: it is controlled by those who own these two resources, regardless of the announcements (so far more theoretical than factual) about democratizing AI through open source.

The fundamental questions are:

  • Who controls AI?
  • How can we avoid concentrating power in the hands of the few private companies with enough computational and human resources to develop and maintain artificial intelligence systems?
  • How can we ensure socially uniform access to AI-powered technologies while minimizing the risk to user privacy?

Ethical Dilemma 3: rights

Statistics is not politically correct. Statistically based AI reflects the biases and inequalities present in society, and therefore in the datasets used for training; the sketch after the questions below illustrates the mechanism.

The fundamental questions are:

  • How can privacy and copyright be protected without hindering the development of large datasets for training artificial neural networks?
  • How can the right to be forgotten be guaranteed when data are "remembered", or rather "modeled", inside the AI's structure during training?
  • How can the social fairness of decisions made by autonomous systems be verified and monitored?
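
To make the mechanism concrete, here is a minimal sketch in Python. All the numbers and feature names (a "skill" score, a sensitive group attribute, historical hiring decisions that penalized one group) are invented for illustration; the point is only to show how a plain statistical model trained on biased decisions reproduces that bias for candidates who are otherwise identical.

```python
# Minimal sketch with entirely invented data: a model trained on biased
# historical decisions reproduces the bias. Not a real dataset or benchmark.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# A sensitive attribute (group 0 vs group 1) and the skill we would like to select on.
group = rng.integers(0, 2, size=n)
skill = rng.normal(0.0, 1.0, size=n)

# Historical decisions that favored group 0 regardless of skill: this is the bias.
hired = (skill + 1.0 * (group == 0) + rng.normal(0.0, 0.5, size=n)) > 0.5

# Train a plain statistical model on those historical decisions.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill but different group membership.
candidates = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(candidates)[:, 1])
# The group-1 candidate gets a noticeably lower probability: the model has
# faithfully "learned" the inequity baked into its training data.
```

Nothing in the model is malicious; it simply optimizes against the historical record, which is exactly why the fairness of automated decisions has to be verified and monitored rather than assumed.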

Ethical Dilemma 4: responsibilities

AIs can make mistakes, and those mistakes can have consequences that are predictable or that we are not yet able to foresee. Establishing the responsibilities of those who develop AI and those who use it is a fundamental step toward integrating it properly into society.

The fundamental questions are:

  • Who is responsible for AI-powered decisions?
  • How should a level of authority be assigned to AIs capable of autonomous decision-making?
  • How should AI-powered processes that directly impact individuals or the community be regulated and supervised?

Finding pragmatic solutions to the ethical problems of AI means asking questions about the practical aspects of integrating automated systems into life and work.

We tend to be greedy for immediate solutions, ignoring the most important part of the discussion: our techno-psycho-biological evolution. The answers that follow the right questions will not be universal, and they will not capture the full complexity of every context, but they offer the possibility of guiding the present and the future in the best possible way.