ChatGPT may be a language model capable of generating text that rivals human quality, but when it comes to generating random numbers, it falls short. Colin Fraser, a data scientist at Meta (formerly Facebook), discovered that ChatGPT’s concept of a random number is less random and more human-like.
To test ChatGPT’s ability to generate random numbers, Fraser prompted the chatbot to select a random number between 1 and 100 and collected 2,000 separate responses. Analyzing the distribution of the returned numbers made it apparent that certain numbers were heavily overrepresented.
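For readers who want to run this kind of test themselves, here is a minimal sketch using the OpenAI Python SDK. The model name, prompt wording, trial count, and sampling settings are illustrative assumptions, not Fraser’s exact setup, which the article does not document.

```python
# Sketch of a Fraser-style experiment, assuming the OpenAI Python SDK (openai>=1.0)
# and an OPENAI_API_KEY in the environment. Prompt and model are placeholders.
import re
from collections import Counter

from openai import OpenAI

client = OpenAI()

PROMPT = "Pick a random number between 1 and 100. Reply with only the number."
N_TRIALS = 2000

counts = Counter()
for _ in range(N_TRIALS):
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",          # assumed model for illustration
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,                # default sampling temperature
    )
    reply = resp.choices[0].message.content
    match = re.search(r"\d+", reply)    # pull the first integer out of the reply
    if match:
        counts[int(match.group())] += 1

# A uniform generator would give each number roughly 1% of the replies.
for number, count in counts.most_common(10):
    print(f"{number}: {count} times ({100 * count / N_TRIALS:.1f}%)")
```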
One such number was 42, which accounted for around ten percent of all 2,000 responses. This is likely due to its status as a meme number online: it is famously the answer to the “ultimate question of life, the universe, and everything” in Douglas Adams’ novel “The Hitchhiker’s Guide to the Galaxy.”
Fraser’s discovery shows that ChatGPT is not a true random number generator; its picks are skewed toward the numbers humans favor in its vast training data. The fact that 69, another popular meme number, was strangely underrepresented in the test further suggests that ChatGPT’s responses were not truly random and may even have been manually suppressed.
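One way to quantify “not truly random” is a chi-square goodness-of-fit test against a uniform distribution over 1 to 100. The sketch below uses SciPy and assumes the `counts` tally from the earlier snippet; the exact statistics Fraser computed are not reported here.

```python
# Chi-square test of the observed counts against a uniform distribution over 1..100.
# Assumes `counts` is the Counter built in the previous sketch.
from scipy.stats import chisquare

observed = [counts.get(n, 0) for n in range(1, 101)]   # include numbers never picked
expected = [sum(observed) / 100] * 100                  # ~20 per number for 2,000 draws

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.3g}")
# A tiny p-value means the picks are inconsistent with a uniform random generator;
# a single number taking ~10% of the mass, as 42 did, drives the statistic up sharply.
```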
While this finding may seem trivial, it has important implications for the use of ChatGPT in certain applications. In cryptography or any other area where unpredictable, uniformly distributed values are essential, relying on ChatGPT as a source of randomness would be a serious weakness.
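By contrast, where unpredictability actually matters, the right tool is a cryptographically secure generator rather than a language model. A minimal example using Python’s standard-library `secrets` module:

```python
# Drawing a number from 1 to 100 with a cryptographically secure generator,
# using Python's standard-library `secrets` module (backed by the OS CSPRNG).
import secrets

number = secrets.randbelow(100) + 1   # randbelow(100) returns 0..99, so shift to 1..100
print(number)
```

Because `secrets` draws from the operating system’s entropy source, every value in the range is equally likely and no draw can be predicted from earlier ones, which is precisely the property the ChatGPT experiment shows the chatbot lacks.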
In conclusion, ChatGPT’s ability to generate random numbers leaves much to be desired. Its tendency to favor certain numbers over others reflects the popularity of those numbers in its training data rather than true randomness. While this may not be an issue for casual use, it highlights the limitations of artificial intelligence and the need for caution when relying on it for certain tasks.