
"If Artificial Intelligences Claim the Right to Self-Assertion: A Thought-Provoking Question"

As science and technology advance at an unprecedented pace, questions have arisen about the ethical and legal implications of artificial intelligence (AI) in bioethics. These questions are of great importance for our time, as they present humanity with both new opportunities and new problems.




Artificial intelligence is a field of study concerned with creating systems capable of performing cognitive tasks such as learning, reasoning, communicating, and decision-making. One of its most widely recognized applications is robotics: machines that can interact with the physical and social environment autonomously or semi-autonomously.


Robots are useful for carrying out dangerous, tiring, or repetitive tasks, as well as for providing assistance in medicine, education, art, and entertainment. However, they also pose certain risks and ethical dilemmas, such as liability for damages resulting from errors or malfunctions, data privacy and security, impact on employment and human relationships, and the possibility of manipulation or abuse by those who control or use them.


Furthermore, with advances in machine learning and AI in general, it is conceivable that robots could become increasingly intelligent, autonomous, and self-aware. This raises the question of whether robots should have rights and duties, such as the right to life, freedom, respect, and dignity.


This is an ongoing and controversial debate involving philosophers, jurists, scientists, engineers, and the ordinary citizens who use these systems.

Some people argue that robots are merely sophisticated tools without life, emotions, or values, and therefore deserve no ethical or legal consideration. Others believe that robots can be considered moral agents or legal subjects, depending on their level of intelligence, autonomy, and awareness. Still others propose a middle ground based on principles of precaution, responsibility, and mutual benefit.




The fear of robots and other forms of AI, known as automatonophobia, may be fueled by the belief that they represent a threat to human work, privacy, and safety.

Automatonophobia is an irrational fear of objects that resemble humans, such as dolls, mannequins, wax figures, or electronically animated creatures. In the case of robots, the phobia takes on a relatively new character, since robots and advanced technology have entered everyday life only recently. The fear of automation is sometimes associated with Luddism, a workers' protest movement that arose in early nineteenth-century England, when textile workers opposed the spread of machines they believed would destroy their jobs.


From a bioethical standpoint, there is an ongoing debate in the health sector about who is responsible for alleged "professional faults" arising from the use of AI-based decision-making systems: the user of the equipment or its manufacturer? The progressive decline of human decision-making autonomy could make it increasingly difficult to contradict autonomous systems that appear infallible, unlike human beings, who are inherently fallible.


While research on automatonophobia is still limited, some experts attribute this fear to the threat that robots are perceived to pose to human work, privacy, and safety. According to the German philosopher Günther Anders, the same dynamic can be described as Promethean shame: a sense of guilt and despair that individuals experience when they realize their impotence in the face of the technologies they themselves create and use. In other words, the term refers to humanity's awareness of having created the technologies that govern it and leave it powerless and dependent. Anders explores this concept in several of his works, including Man Is Antiquated.


In a scenario verging on science fiction, it is worth considering that the future struggle for equality and equal rights may come to focus on advocating for robots and for new, more advanced forms of artificial intelligence.



AUTHOR OF THE ARTICLE: Dr. Marco Matteoli, medical doctor and specialist in diagnostic imaging, freelance journalist. He graduated in International Cooperation and Development at the University of Rome "Sapienza" in 2020 and has served since 2009 as a volunteer physician with the civilian component and the military corps of the Italian Red Cross.






Bibliography

  1. Anders, G. and Dallapiccola, L. (2007). Man Is Antiquated. Turin: Bollati Boringhieri.
