ChatGPT and other large language models (LLMs) cannot learn independently or acquire new skills, meaning they pose no existential threat to humanity, according to new research from the University of Bath in the UK and the Technical University of Darmstadt in Germany.
The study, published in the proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, reveals that LLMs have a superficial ability to follow instructions and excel at language proficiency, but they have no capacity to master new skills without explicit instruction. This means they remain inherently controllable, predictable and safe.
The research team concluded that LLMs – which are being trained on ever larger datasets – can continue to be deployed without safety concerns, though the technology can still be misused.
With growth, these models are likely to generate more sophisticated language and become better at following explicit and detailed prompts, but they are highly unlikely to gain complex reasoning skills.
“The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies and also diverts attention from the genuine issues that require our focus,”…