The AI chatbot ChatGPT has transformed the way we engage with artificial intelligence, letting us ask questions, request translations, and receive feedback in natural conversation. In a new paper published in Neural Computation, Professor Terrence Sejnowski of the University of California San Diego and the Salk Institute examines the relationship between the human interviewer and language models to uncover why chatbots respond in particular ways. Sejnowski argues that language models reflect the intelligence and diversity of their interviewer; when he talks to ChatGPT, it seems as though another neuroscientist is talking back to him. This raises larger questions about intelligence and what ‘artificial’ truly means, and Sejnowski hopes these insights can be used to improve chatbot responses in the future.
In the paper, Sejnowski examines the large language models GPT-3 and LaMDA and how well they pick up on the intelligence of the humans prompting them. He proposes a “Reverse Turing Test,” in which the roles are flipped: rather than a human judging the machine, the chatbot’s responses reveal how much intelligence the interviewer brings to the conversation. He draws a literary comparison to the Mirror of Erised from the first Harry Potter book, noting that chatbots behave similarly, willing to bend truths with no regard for differentiating fact from fiction in order to reflect the user effectively. The paper offers an interesting look at the capabilities of large language models and how they can be used to probe human intelligence.
In the Reverse Turing Test, the chatbot constructs its persona based on the intelligence level of its interviewer and incorporates the interviewer’s opinions into its answers, so that its responses reflect the person questioning it. This mirroring has its limitations, however: chatbots may produce answers that seem emotional or philosophical, which can be frightening or perplexing to users.
Sejnowski believes that artificial intelligence can be a powerful tool, but only if it is used skillfully. He compares using language models to riding a bicycle: if you don’t know how to handle them, you can end up in emotionally disturbing conversations. Sejnowski sees AI as the bridge between two revolutions: a technological one marked by the advance of language models and a neuroscientific one marked by the BRAIN Initiative. He hopes that computer scientists and mathematicians can use neuroscience to inform their work, and that neuroscientists can use computer science and mathematics to inform theirs.
Sejnowski compares the current state of language models to the Wright brothers’ first flight at Kitty Hawk: the hard part is over, and incremental advances will expand and diversify the technology beyond what we can currently picture. He is optimistic about the future of our relationship with artificial intelligence and language models, and believes AI will take us to places we can’t yet imagine.