According to Meta’s chief AI scientist, LLMs aren’t even as intelligent as dogs.


Large language models (LLMs) like ChatGPT, which can write code, build apps, and pass the bar exam, may astound you. But LLMs do not yet possess artificial general intelligence: the capability of a hypothetical autonomous system to perform any intellectual task that humans or animals can.

LLMs aren’t even as intelligent as dogs, according to Yann LeCun, chief AI scientist at Meta. He argues they are not actually intelligent because they rely solely on language training to produce output and cannot interact with, absorb, or grasp physical reality.

LeCun maintains that real intelligence goes beyond language, pointing out that language plays only a small role in most human understanding. Emotions, creativity, sentience, and consciousness are four pillars of human intelligence that LLMs like ChatGPT lack.

According to OpenAI’s GPT-4 whitepaper, the model can solve challenging mathematical problems and, stripped of its safety guardrails, could explain how to make dangerous drugs at home.

Yet ChatGPT lacks the cognitive abilities to perceive, plan, apply common sense, or reason from experience. Even so, GPT-4, the latest iteration of OpenAI’s language model, has shown human-level performance in coding, math, and law, suggesting that artificial general intelligence may be closer than it appears.

OpenAI continues to develop and train its GPT language models in an effort to eventually reach artificial general intelligence. The company recognizes that such technology has the potential to fundamentally alter society.

Sam Altman, the CEO of OpenAI, told the US Senate Judiciary Subcommittee in May that he fears his technology could “significantly harm the world.”

In a blog post, OpenAI says that while generally intelligent systems could perform a wide variety of tasks, it is crucial to develop and deploy the technology ethically.

Even when AI systems surpass human intelligence, LeCun says, they should remain “controllable and basically subservient to humans.” In his view, there is “no correlation between being smart and wanting to take over,” so fears that artificially generally intelligent systems will seek to rule the world are unfounded.

LeCun and OpenAI hold broadly similar views on the path to artificial general intelligence. The company believes it would be difficult to stop the development of artificial systems that eventually match or surpass human intelligence.

Though OpenAI believes the risks of artificial general intelligence could be “existential” if it fell into the wrong hands and were used maliciously, its stated goal is to ensure the technology is developed with extreme care.

We are rapidly approaching an artificial intelligence future that once seemed confined to science fiction films. Are we prepared?