Chosun Column
I oppose 'human-like artificial intelligence'.
Dean Jang Dae-ik
May 14, 2024
When AlphaGo made its debut, an experiment was conducted.
How do humans respond when their identity is threatened?
If AI beats us at one or two things, we can simply find something humans still do better.
But with the survival of big tech riding on an investment race, the era of 'general AI', not task-specific AI, is fast approaching.
When it arrives, where will humans find their self-esteem?
Suppose that in baseball matches between South Korea and Japan, Korea hasn’t won a single game in 30 years. According to social identity theory, our national reaction would fall into one of three categories: trying again (the challenge strategy); concluding that we cannot beat Japan at baseball and switching to another sport such as soccer (the alternative compensation strategy); or declaring that we will root for Japan next time (the withdrawal strategy).
When a group feels its identity threatened by another group, its members tend either to compete again, to seek an alternative identity, or to withdraw from the group entirely. Here, the ‘other group’ has always meant another human group: men, Caucasians, progressives, heterosexuals, and so on. In every case, these were subgroups within the species Homo sapiens. In March 2016, however, a new kind of group emerged.
Many will remember the victory of AlphaGo (an artificial intelligence developed by Google DeepMind) over Lee Sedol in the matches held in Seoul from March 9 to 15, 2016. With that victory, AlphaGo became the new champion of artificial intelligence, succeeding Deep Blue, which defeated the world chess champion, and IBM's Watson, which won a quiz show. Yet that was only the beginning of a new phase in AI's evolution.
At the time, my research lab studied how the general public reacted psychologically to the result of that match. Social psychology identifies ten major attributes of human nature: traits that distinguish humans from animals, such as morality, maturity, cultivation, depth, and sophistication, and traits that distinguish humans from machines, such as warmth, emotional response, flexibility, agency, and rationality. Which of these traits did AlphaGo's overwhelming victory threaten? And did we, as social identity theory predicts, adopt one of the three strategies?
The results were striking. The traits that artificial intelligence threatened in this historic Go match were rationality and sophistication. In response to the shock, we adopted neither the challenge strategy nor the withdrawal strategy but the alternative compensation strategy. In other words, since we could no longer beat artificial intelligence in rationality and sophistication, we looked for hope elsewhere. Indeed, participants judged that, among the ten traits, morality, emotional response, and agency mattered far more to human identity, and asserted that humans were still far superior to AI in those areas. They compensated for the loss in rationality and sophistication by inflating other domains, just as pressing hard on one side of a balloon makes the other side bulge.
What happens when another artificial intelligence emerges that threatens human morality, emotional response, and agency? And if an AI someday poses a serious threat to all ten facets of human nature, what alternative domain will be left for us to compensate our damaged psyche?
The uncomfortable truth is that little time remains before we must answer these questions. The AI arms race triggered by OpenAI's ChatGPT has this year reignited the debate over when artificial general intelligence (AGI) will be realized. AGI refers to machine intelligence that can operate at or above the average human level across a broad range of tasks, rather than being confined to specific ones. An AI that collects and analyzes radiology data, for instance, is a narrow AI; an intelligence that implements general human reasoning, learning, memory, and perception, and can serve as a companion in daily life, is general AI.
Earlier this year, Meta's Mark Zuckerberg declared that “to create the products we want, we need to develop artificial general intelligence,” and one of the questions most frequently put to OpenAI’s Sam Altman this year has been exactly when artificial general intelligence will arrive. SoftBank's Masayoshi Son said, “A general-purpose AI that surpasses human intelligence will be realized within ten years, so focus on that,” and Nvidia's Jensen Huang predicted, “It should happen within five years.” In South Korea, companies such as Samsung and SK have entered a fierce competition to produce the AI chips that future general-purpose artificial intelligence will run on.
If general artificial intelligence capable of surpassing every human ability emerges in the near future, humans will face a profound identity crisis. Human self-esteem will inevitably plummet before a general artificial intelligence that may even acquire autonomy, and we will scramble to find yet another domain in which we believe we can still outperform machines. To end up protesting, ‘Well, unlike machines, at least humans make mistakes!’ would be pathetic indeed.
In this light, the global arms race among big tech companies to develop general artificial intelligence must not be accepted as a foregone conclusion or a necessary strategy. As Daron Acemoglu pointed out in Power and Progress, “We should not obsess over how similar artificial intelligence is to human intelligence, but rather consider how instrumentally useful it is to us.”