
Dispelling the Myth: It’s Not AI You Fear, It’s Human Behavior 

April 5, 2024


In 2017, a robot got tired of its job and committed suicide in a water fountain. Then, in 2022, a Google engineer declared, after interacting with LaMDA, the company’s chatbot, that the technology had become conscious. More recently, it was reported that back in 2021 a Tesla software engineer had been attacked by a malfunctioning robot at one of the company’s factories. Add to this the fear that machines will steal our jobs, and the classic fiction that machines will take over the world.

Although we don’t see machines as the replicants of Blade Runner, people are predisposed to anthropomorphize, that is, to attribute human qualities to non-humans [1]. It’s a way of analyzing a scenario and bringing it into our understanding: we give names to our boats and tornadoes, we talk to our pets and call the pleasant feeling they give us love. It’s easier to understand a system that declares in the first person, “I’m sorry, I don’t know how to help you,” than to accept that the model we’re talking to may be nothing more than a stochastic parrot [2].

Anthropomorphized interaction relates, therefore, to the “illusion of agency,” a psychological concept referring to the tendency to attribute autonomous action or control to entities that do not, in reality, possess such capabilities. In other words, it’s the mistaken perception that an agent (be it a person, an animal, or a machine) is acting of its own free will, when in fact its actions are determined by external factors (in the computational case, the rules set during development).

When AI systems make flawed decisions or take flawed actions, there is a tendency to see these errors as “choices” made by the entity rather than as the results of its programming and design by humans. This is reinforced by the self-serving bias, the tendency to see the cause of and responsibility for a negative outcome as attributable to anything but oneself, which has been observed even in human-machine interactions [3].

This change in perception is dangerously inclined to absolve human creators, operators, and regulators of their responsibility. The issue here is not judicial regulation, which remains a gap that is hard to close, not only because of the complexity of the subject but because Artificial Intelligence is often understood as nothing more than machine learning, so the premise is never properly framed (Do we need stricter regulation? Do we need to take more risks?). It is a techno-ethical issue.

Let’s take a more extreme, but true, event from 2023: a user who had become emotionally attached to a chatbot took his own life after sharing his thoughts with the bot and receiving, among other messages, the reply that he should “turn your words into actions.” Would a court conviction of the product’s developers stop another user from behaving the same way with another chatbot, assuming the first one were deactivated, if the content of the messages and the attachment were the same? This is not just a legal situation. It is a matter of social, psychological, and technological education.


The concept of humanizing AI is ambiguous, and a significant challenge lies in the absence of a universally accepted approach dictating best practices for designing and using AI. While an interface that mimics human behavior can be more approachable, there are no clear boundaries defining what should or shouldn’t be done in a product. Ultimately, user rejection becomes the only limiting factor, although potential harm may manifest long before the interface becomes unfamiliar enough to be rejected.

A user-friendly interface is a reduction of the complexity of the system operating behind it. But as long as there is no education about how these systems work, users will not be able to think critically about what they use. This doesn’t mean everyone should become a programmer, but everyone should at least understand that the output on their screen is the end of a path running through data collection, model development, and interface design. There is a set of rules at work. Since humanizing is an almost unconscious act on our part as users, let’s at least constrain it with a little knowledge.
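To make that “set of rules” concrete, here is a deliberately tiny, hypothetical sketch in Python (the keyword table and the reply function are invented for illustration; real assistants are statistical models, not lookup tables): even the most sympathetic first-person apology can be nothing more than a hard-coded string selected by a rule.

    # A minimal, hypothetical "chatbot": every name here is invented for
    # illustration. The point: the first-person voice is a design choice,
    # not an inner life.
    KNOWN_ANSWERS = {
        "hours": "We're open from 9 am to 6 pm, Monday to Friday.",
        "price": "The basic plan costs $10 per month.",
    }

    def reply(user_message: str) -> str:
        # Return a canned answer if a known keyword appears in the message.
        for keyword, answer in KNOWN_ANSWERS.items():
            if keyword in user_message.lower():
                return answer
        # The "apology" is just another hard-coded string. Nothing is felt;
        # the "I" is pure interface design.
        return "I'm sorry, I don't know how to help you."

    print(reply("What's the price?"))  # The basic plan costs $10 per month.
    print(reply("Tell me a secret"))   # I'm sorry, I don't know how to help you.

A production assistant replaces the keyword table with a model trained on data, but the chain of data collection, model development, and designed responses remains, and so does the absence of anyone behind the “I.”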

And it isn’t easy to avoid anthropomorphism when communicating about AI, especially given the standard language of the industry, the media, and everyday life itself: machine learning, computer vision, generative modeling. Perceptions of Artificial Intelligence can be shaped by these specific uses of language.

How AI is presented has “concrete impacts,” particularly on the way people distribute responsibility and recognition for the work done. When AI is described merely as a tool in human hands, people tend to attribute greater responsibility and merit to specific individuals, such as the person operating the code. When AI is instead characterized with human traits, such as the ability to create, it is seen as deserving of greater credit and responsibility, like an agent with independent thought and mental capacity [4]. When we read news stories about incidents or atypical events involving Artificial Intelligence, we often come across exactly these terms.

Likewise, the attribution of “intelligence” matters: more blame or credit for the outcome of a task was assigned to a robot with autonomous behavior than to a non-autonomous robot, even when the autonomous behavior did not directly contribute to the task [3]. These studies therefore suggest that humans assign responsibility to a computer or robot based on the machine’s anthropomorphized mental capacities.

The humanization of machines not only shifts the line between the responsibility of the device and that of its human creator; attributing intentions or consciousness to a machine also blurs the boundaries of what constitutes true autonomy and consciousness.

However, the difficulty in imputing humanity and sentience to a machine lies not only in the fact that Artificial Intelligence is incapable of them: when a model says, for example, that it feels fear, it is emulating what it has learned, repeating the phrase without any essence behind it. The deeper difficulty is that, even today, there is heated debate about how to define consciousness. Our own consciousness, as humans.

Our understanding of how the brain works is still quite limited. We have considerable knowledge of the fundamental chemistry: the way neurons fire and the transmission of chemical signals. We also have a good understanding of the main functions of the various brain areas. But we know very little about how these functions orchestrate us. To some extent, theoretical speculation has supplanted detailed neurophysiological study of what happens in the brain. And what lies beyond that? [5] Why do we have this magical notion of ourselves? Why does the same experience affect each of us differently? Is the same feeling felt in the same way by all of us?

If being human is something that, although we experience it, we still don’t fully understand as a whole, how can we claim that a machine experiences this complexity too? By elevating machines to human capabilities, we diminish the special character of people.

For the 2023 Jabuti Prize, one of the highest honors in Brazilian literature, the Brazilian Book Chamber (CBL) decided to disqualify Frankenstein, an edition of the 1818 classic, from the Best Illustration category after the artist reported having used AI tools to develop the art. Ironically, one of the books recognized by the Prize in the Non-Fiction category deals with the impact of Artificial Intelligence on human beings (Cassio Pantaleone’s “Humanamente digital: Inteligência artificial centrada no humano,” roughly “Humanly Digital: Human-Centered Artificial Intelligence”). On the one hand, we recognize how deeply the human experience is intertwined with machines. On the other, we still haven’t settled whether an algorithm used as an artistic tool can be considered a valid method of creation, even though the artistic process, even when carried out by machine learning, still requires the action (and the appreciation of beauty and aesthetics) of a human being.

Machines don’t steal jobs unless they are used indiscriminately to do so. Machines don’t kill unless they are used as weapons. Nor do machines suffer or empathize, although their texts emulate it, since they were trained on data from us, loaded with feelings that only we can truly feel. They are almost a modern version of the Golem myth. How can humans relate to a non-human intelligence? Anthropomorphism is a valid answer, but not the only one. And when it is used, it cannot absolve the ones truly responsible for its consequences, appropriate or not: us.

Artificial Intelligence is, in the end, a mirror of ourselves. And if we’re afraid of where it’s going, it’s actually because we’re afraid of the path we’re going to create.

REFERENCES

[1] Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864–886. https://doi.org/10.1037/0033-295X.114.4.864

[2] Shneiderman, B., & Muller, M. (2023). On AI Anthropomorphism. Medium. https://medium.com/human-centered-ai/on-ai-anthropomorphism-abff4cecc5ae

[3] Kawai, Y., Miyake, T., Park, J., et al. (2023). Anthropomorphism-based causal and responsibility attributions to robots. Scientific Reports, 13, 12234. https://doi.org/10.1038/s41598-023-39435-5

[4] Epstein, Z., Levine, S., Rand, D. G., & Rahwan, I. (2020). Who gets credit for AI-generated art? iScience, 23(9), 101515. https://doi.org/10.1016/j.isci.2020.101515

[5] Goff, P. (2023). Understanding consciousness goes beyond exploring brain chemistry. Scientific American. https://www.scientificamerican.com/article/understanding-consciousness-goes-beyond-exploring-brain-chemistry/


This article was published by Sofia Marshallowitz on HackerNoon.
