Changing how we think and talk about large language models like ChatGPT can help us cope with the strange, new sort of intelligence they have. This is according to a new paper, published in Nature, from an Imperial College London researcher.
Such chatbots, which are underpinned by neural network-based large language models (LLMs), can induce a compelling sense that we are speaking with fellow humans rather than artificial intelligence.
"Our social brains are always looking for connection, so there is a vulnerability here that we should protect." Professor Murray Shanahan Department of ComputingHardwired for sociability, human brains are built to connect and empathise with entities that are human-like. However, this can present problems for humans who interact with chatbots and other AI-based entities. Were these LLMs to be used by bad faith actors, for example scammers or propagandists, people could be vulnerable to handing over their bank To better understand what the researchers refer to as "exotic mind-like artefacts" requires a shift in the way we think about them, argue the researchers. This can be achieved by using two basic metaphors. First, taking a simple and intuitive view, we can see AI chatbots as actors role-playing a single character. Second, taking a more nuanced and technical view, we can see AI chatbots as maintaining a simulation of many possible roles, hedging their bets within a multiverse of possible characters.
Professor Shanahan said: "Both viewpoints have their advantages, which suggests the most effective strategy for thinking about such agents is not to cling to a single metaphor, but to shift freely between multiple metaphors."