
Artificial Intelligence: Does It Pose an Existential Threat to Humans?

Information and Science | 23 August 2024 - 1:20 AM


With the spread of artificial intelligence programs and applications, many questions have been raised about the danger they might pose as an "existential threat" to humans. Do they constitute a real threat, or is this nothing more than an exaggeration?

In an attempt to determine the implications of this technology, and contrary to what some fear, a recent study concluded that artificial intelligence applications and advanced language models such as ChatGPT may not pose a real "existential threat" to humanity.

Researchers from the University of Bath in the UK and the Technical University of Darmstadt in Germany conducted experiments to test the ability of these models to accomplish tasks they had never encountered before, known as "emergent capabilities."

The researchers explain that these models can answer questions about social situations without being explicitly trained or programmed to do so, as a result of "in-context learning."

“The fear was that as models got larger and larger, they would be able to solve new problems that we cannot currently predict, raising the threat that these larger models could acquire dangerous capabilities, including reasoning and planning,” says study co-author Dr Harish Tayar Madabhushi, a researcher at the University of Bath.

However, through thousands of experiments, the research team concluded that a combination of memory, efficiency, and the ability to follow instructions can explain both the capabilities and the limitations of these models, according to the University of Bath website: they have only a “superficial ability to follow instructions and excel at language proficiency.”

The study confirmed that these models lack the ability to master or acquire new skills independently "without explicit instructions," which means they remain subject to human control and their behaviour remains predictable.

The research team concluded that large language models, trained on ever larger data sets, can continue to be deployed without these safety concerns.

These models are likely to generate more complex language and become better at following clear, detailed instructions.

Although the technology is “unlikely to acquire the complex reasoning skills” of humans, the researchers warn that it could still be misused to generate fake news or commit fraud.

Source: German


All rights reserved to YemenShabab 2024