
Which tasks shouldn’t we delegate to artificial intelligence?

The facts described and analysed here show that since we currently don’t know how to make computers wise, we should not delegate to them tasks that require wisdom. Rather than trying to teach algorithms to “behave ethically”, the real issue is: Who is responsible here?

Since the early years of artificial intelligence (AI), several examples have shown the risks of using it inappropriately.

First, there is ELIZA, the first conversational program, developed by Professor Joseph Weizenbaum at MIT in the mid-1960s. This artificial intelligence program simulated a session with a psychiatrist. Weizenbaum showed the program to psychiatrists and psychoanalysts in order to demonstrate that a machine could not really imitate a human being. He was surprised to find many of them delighted to see ELIZA working as if it were a real psychiatrist, and even promoting its use as a way to deliver psychiatry and psychoanalysis on a large scale and at low cost. Weizenbaum’s reaction was to challenge them: how can you imagine, even for a moment, delegating something as intimate as a session with one of you to a machine?

A second example is the Soviet false nuclear alarm of September 1983, when the Soviet computerized missile warning system reported four nuclear missile launches from the USA. Because the number of missiles detected was so small (a real nuclear attack would normally involve dozens or even hundreds of missiles), the Soviet officer on duty disobeyed procedure and told his superiors that he believed it was a false alarm. Fortunately, his advice was followed, preventing a Soviet retaliation that could have been the start of a nuclear war between the Communist countries and the free world. It was later established that the false alarm had been caused by a misinterpretation of the data by the Soviet artificial intelligence software.

Finally, we can refer to the case of Eric Loomis, a repeat offender with a criminal record who was sentenced to six years in prison in the State of Wisconsin (USA). The sentence was based at least in part on the recommendation of an AI-based software program called COMPAS, which is marketed and sold to the courts and which embodies a new trend in artificial intelligence: software that aims to help judges make “better” decisions. Loomis argued that his right to a fair trial had been violated because neither he nor his lawyers had been able to examine or challenge the algorithm behind the recommendation; the Wisconsin Supreme Court nonetheless upheld the sentence in 2016.

These examples (and many others) have given rise, since at least the 1970s, to important political and ethical debates about which tasks we should delegate to AI and which we should not, even when delegation is technologically possible. Already significant at that time, these questions have returned even more forcefully with the new wave of AI based on neural networks and deep learning, which has produced astonishing results, the latest being ChatGPT and other generative-AI products. There are essentially two main approaches: on the one hand, the ethics that should be “injected” into AI programs themselves; on the other, the ethics of the use of artificial intelligence, i.e. which tasks may be delegated to it.

As for the first approach, several examples show that AI systems can produce biased results because the data on which they work are biased. The assumption is that it would be enough to correct these biases for AI to work properly. But the problem is much more complex than that, because data does not capture everything about most real problems. Data is a proxy for a reality that is usually far more complex; in particular, data cannot capture the current and future context. The difficulty of teaching an algorithm to understand right and wrong should warn us against overconfidence in our ability to train algorithms to “behave” ethically. We can go even further and say that machines, precisely because they are machines, will never behave ethically: they cannot imagine what a “good life” would be or what it would take to live it, and they will never be able to behave morally per se because they cannot distinguish between good and evil.

In a seminal book, Computer Power and Human Reason, Joseph Weizenbaum poses an essential question: are there ideas that a machine will never be able to understand because they are tied to purposes that are inappropriate for machines? The question is essential because it goes to the core of whether there is a fundamental difference between human beings and machines. Weizenbaum argues that human and machine comprehension are of different natures. Human comprehension rests on having not only a brain but also a body and a nervous system, and on our being social animals, something a machine will never be (even if social robotics is developing rapidly today, something Weizenbaum anticipated nearly 50 years ago). The basis on which humans make decisions is therefore totally different from that of AI. The key point is not whether computers will be able to make judicial decisions, or high-level political and military decisions, because they probably will. The point is that computers should not be entrusted with these tasks, because the decisions would necessarily be made on a basis that no human being could accept, i.e. on calculation alone. These issues therefore cannot be settled by questions that begin with “can we?” The limits we must place on the use of computers can only be stated in terms of “should we?”

The fundamental ethical issue of AI thus seems to us to be the transfer of responsibility from the human being to the machine (“I didn’t kill her, it was the autonomous car!”, “I didn’t press the nuclear button, it was the artificial intelligence!”). Even though, in the European Union, the GDPR (General Data Protection Regulation) gives people the right not to be subject to decisions based solely on automated processing, we know how things are in justice administrations and HR departments: people are always overwhelmed, and they will not take the time to question the advice given by AI (“Nothing personal, Bob; we just asked the AI and it said that you should be fired. But we made the decision!”). The facts described and analysed here show that since we currently don’t know how to make computers wise, we should not delegate to them tasks that require wisdom. Rather than trying to teach algorithms to “behave ethically”, the real issue is: Who is responsible here?
