State-of-the-art AI capabilities vs humans
How smart are the latest AI models compared to humans? Let’s take a look at how the most competent AI systems compare with humans in various domains. The list below is regularly updated to reflect the latest developments.
Last update: 2024-04
Superhuman (Better than all humans)
- Games: For many games (Chess, Go, StarCraft, Dota, Gran Turismo, etc.), the best AI is better than the best human.
- Memory: An average human can hold about 7 items (such as numbers) in working memory at a time. Gemini 1.5 Pro can read and recall 99% of 7 million words.
- Thinking speed: AI models can read thousands of words per second, and write at speeds far surpassing any human.
- Learning speed: A model like Gemini 1.5 Pro can read an entire book in 30 seconds. It can learn an entirely new language and translate texts in half a minute.
- Amount of knowledge: GPT-4 knows far more than any human, its knowledge spanning virtually every domain, even remembering things like URLs.
- Storage efficiency: GPT-4 is estimated to have about 1.7 trillion parameters, whereas the human brain has about 100 to 1000 times as many synapses. Yet GPT-4 knows thousands of times more facts, storing more information in far fewer parameters.
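The storage-efficiency comparison above is easy to reproduce as back-of-envelope arithmetic. This is a sketch under loud assumptions: GPT-4's parameter count is an unconfirmed estimate, synapse counts vary widely between sources, and treating synapses as "parameters" is only a rough analogy.

```python
# Back-of-envelope comparison of parameter counts.
gpt4_params = 1.7e12            # rumored estimate, not confirmed by OpenAI
brain_synapses_low = 100e12     # lower estimate of human synapse count
brain_synapses_high = 1000e12   # upper estimate

ratio_low = brain_synapses_low / gpt4_params
ratio_high = brain_synapses_high / gpt4_params
print(f"The brain has roughly {ratio_low:.0f}x to {ratio_high:.0f}x "
      f"as many 'parameters' as GPT-4.")
```

With these assumed numbers, the ratio comes out on the order of the "100 to 1000 times" figure above, which is what makes GPT-4's breadth of knowledge per parameter so striking.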
Better than most humans
- Language: The best language models can translate virtually all languages fluently, have superhuman vocabularies and can write in many different styles. In December 2023, an AI-written novel won an award at a national science fiction competition. The professor who used the AI crafted the narrative from a 43,000-character draft generated in just three hours with 66 prompts.
- Creativity: Better than 99% of humans on the Torrance Tests of Creative Thinking, which require generating relevant and useful ideas. However, the tests were relatively small in scope, and for larger projects (e.g. setting up a new business) AI is not yet autonomous enough.
- Persuasion: GPT-4 with access to personal information increased participants’ agreement with their opponents’ arguments by a remarkable 81.7 percent compared to human-vs-human debates, making it almost twice as persuasive as the human debaters.
- IQ: On verbal IQ tests, SOTA models score better than 95 to 99% of humans (scores between 125 and 155). On non-verbal (pattern-matching) IQ tests (like the Mensa IQ test), Claude 3 was the first model to beat the human average, scoring 101.
- Research: GPT-4 can do autonomous chemical research, and DeepMind has built an AI that found a solution to an open mathematical problem. However, these architectures require a lot of human engineering and are not general.
- Art: Image generation models have won art and even photography contests.
- Specialized knowledge: GPT-4 scores 75% on the Medical Knowledge Self-Assessment Program, while humans score on average between 65 and 75%. It scores better than 68 to 90% of law students on the bar exam.
- Programming: GPT-4 can write code in 20+ programming languages and can even create simple games. It can solve many coding challenges in one go, although it struggles at harder levels. It scores in the bottom 5% of human coders in the Codeforces competition. Devin can solve 13% of real-world coding issues and can earn money on Upwork.
- Hacking: GPT-4 can autonomously hack websites and beats 89% of hackers in a Capture-the-Flag competition. Luckily, SOTA models still fail essential tasks required for autonomous self-replication (see below).
Worse than most humans
- Saying “I don’t know”: Virtually all large language models have this problem of ‘hallucination’: making up information instead of saying they do not know. This might seem like a relatively minor shortcoming, but it’s a crucial one, because it makes LLMs unreliable and strongly limits their applicability. However, studies show that larger models hallucinate far less than smaller ones.
- Being a convincing human: GPT-4 can convince 54% of people that it’s human, whereas humans convince each other 67% of the time. In other words, GPT-4 does not yet consistently pass the Turing test.
- Dexterous movement: No robot can yet move around the way a human can, but we’re getting closer. The Atlas robot can walk, throw objects and do somersaults. Google’s RT-2 can turn objectives into actions in the real world, such as “move the cup to the wine bottle”. Tesla’s Optimus robot can fold clothes, and Figure’s biped can make coffee.
- Self-replication: All lifeforms on Earth can replicate themselves. AI models could spread from computer to computer across the internet, but this requires a set of skills they do not yet possess. A 2023 study lists 12 tasks required for self-replication, of which the tested models completed 4. We don’t want to find out what happens if an AI model succeeds in spreading itself across the web.
- Continual learning: Current SOTA LLMs separate learning (‘training’) from doing (‘inference’). Although LLMs can learn from their context, they cannot update their weights while being used. Humans learn and act at the same time. However, there are multiple potential approaches towards this; a 2024 study details some recent approaches for continual learning in LLMs.
- Planning: LLMs are not yet very good at planning (e.g. reasoning about how to stack blocks on a table). However, larger models perform considerably better than smaller ones.
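The training/inference split described under “Continual learning” can be illustrated with a toy model. This is a minimal sketch, not any real LLM: the model here is just a linear function, and `predict` and `train_step` are hypothetical names chosen for illustration.

```python
import numpy as np

# Toy sketch of the training/inference split: during "training" a
# gradient step produces updated weights; during "inference" the
# weights are frozen and the model can only adapt via its input.

rng = np.random.default_rng(0)
w = rng.normal(size=3)              # the model's weights

def predict(weights, x):
    """Inference: read-only use of the weights."""
    return weights @ x

def train_step(weights, x, y, lr=0.1):
    """Training: one gradient step on squared error returns NEW weights."""
    error = predict(weights, x) - y
    return weights - lr * error * x

x, y = np.array([1.0, 2.0, 0.5]), 3.0
w_trained = train_step(w, x, y)     # learning changes the weights
prediction = predict(w_trained, x)  # doing leaves them untouched
```

Deployed LLMs effectively only ever run `predict`; humans, by contrast, run something like both at once, which is the gap continual-learning research tries to close.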
The endpoint
As time progresses and capabilities improve, we move items from the lower sections to the top section. When certain dangerous capabilities are achieved, AI will pose new risks. At some point, AI will outcompete every human on every metric imaginable. Once we have built this superintelligence, we will probably soon be dead. Let’s implement a pause to make sure we don’t get there.