Artificial Intelligence: Does it pose an existential threat to humans?
Information and science | 23 August 2024 - 1:20 AM

With the spread of artificial intelligence programs and applications, many questions have been raised about their dangers and whether they could pose an "existential threat" to humans. Do they constitute a real threat, or is the fear exaggerated?
In an attempt to determine what this technology's emergence could entail, and contrary to what some fear, a recent study concluded that artificial intelligence applications and advanced language models such as ChatGPT may not pose a real "existential threat" to humanity.
Researchers from the University of Bath in the UK and the Technical University of Darmstadt in Germany conducted experiments to test the ability of these models to accomplish tasks they had never encountered before, in what are called "emergent capabilities."
The researchers explain that the models can answer questions about social situations without being explicitly trained or programmed to do so, as a result of "in-context learning."
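The "in-context learning" described above can be illustrated with a minimal sketch: instead of retraining the model, a few worked examples are placed directly in the prompt, and the model is expected to infer the pattern of the task from that context alone. The function and example texts below are hypothetical illustrations, not part of the study.

```python
# Minimal sketch of in-context (few-shot) learning: the task is conveyed
# entirely through examples embedded in the prompt, with no retraining.

def build_few_shot_prompt(examples, query):
    """Assemble a prompt from (situation, response) example pairs plus a new query."""
    lines = []
    for situation, response in examples:
        lines.append(f"Situation: {situation}\nAppropriate response: {response}\n")
    # The final entry leaves the response blank for the model to complete.
    lines.append(f"Situation: {query}\nAppropriate response:")
    return "\n".join(lines)

examples = [
    ("A colleague greets you in the morning.", "Greet them back politely."),
    ("A friend thanks you for a favour.", "Say you're welcome."),
]
prompt = build_few_shot_prompt(examples, "Someone apologises for being late.")
print(prompt)
```

The model never sees explicit instructions for the task; the prompt's structure alone signals what kind of completion is expected, which is what the researchers mean by answering questions about social situations "without being explicitly trained."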
“The fear was that as models got larger and larger, they would be able to solve new problems that we cannot currently predict, posing a threat that these larger models could acquire dangerous capabilities, including reasoning and planning,” says study co-author Dr Harish Tayar Madabhushi, a researcher at the University of Bath.
However, through thousands of experiments, the research team concluded that a combination of memory, efficiency and the ability to follow instructions can explain both the capabilities and the limitations of these models, according to the University of Bath website: they have only a “superficial ability to follow instructions and excel at language proficiency.”
The study confirmed that these models lack the ability to master or acquire new skills independently "without explicit instructions," which means they remain subject to human control and their behaviour remains predictable.
The research team concluded that such AI models, trained on ever larger data sets, can continue to be used without these safety concerns.
These models are likely to generate more complex language and become better at following clear, detailed instructions.
Although the technology is “unlikely to acquire the complex thinking skills” of humans, the researchers warn that it could still be misused to generate fake news or commit fraud.
Source: German