Stability.ai CEO Emad Mostaque recently suggested that machine hallucination, the phenomenon of Large Language Models (LLMs) generating false or fabricated output, does not exist. Instead, he argues that LLMs are merely windows into alternate realities in the latent space, a term used in deep learning for the compressed internal representation a model learns between an input and an output, such as an image.
“LLMs don’t hallucinate,” Mostaque wrote in a Monday night tweet. “They’re just windows into alternate realities in the latent space.”
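For readers unfamiliar with the term, here is a minimal sketch of what a latent space looks like in practice, using a toy autoencoder. The model, layer sizes, and variable names are purely illustrative assumptions, not anything drawn from Stability.ai's systems:

```python
# Toy illustration of a "latent space": an autoencoder compresses its input
# into a small vector (the latent code) and reconstructs the input from it.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        # Encoder: maps the input down to a 16-dimensional latent vector.
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        # Decoder: maps the latent vector back up to the input dimension.
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, input_dim))

    def forward(self, x):
        z = self.encoder(x)           # z lives in the "latent space"
        return self.decoder(z), z

model = TinyAutoencoder()
x = torch.rand(1, 784)                # a flattened 28x28 "image"
reconstruction, latent = model(x)
print(latent.shape)                   # torch.Size([1, 16])
```

Every point in that compressed space decodes into some output, which is the sense in which Mostaque can call a model's outputs "windows" into it; the question is whether those outputs correspond to anything true.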
While Mostaque’s poetic framing may be a helpful way to think about latent space, it does not change the fact that such output is neither real nor reliable. If a model generates a bio full of inaccuracies and embellishments, that is not a glimpse of an alternate reality; it is simply the machine producing false information.
Moreover, treating machine hallucinations as doorways to alternate realities risks feeding new-age conspiracy theories in which flawed technology is deified and mystified as an all-knowing seer, even though it is frequently wrong. This is especially concerning given the potential consequences of machine-generated output in fields such as finance, healthcare, and national security.
Therefore, it is crucial to acknowledge the limitations of LLMs and other AI technologies and to scrutinize their output before relying on it. While AI has the potential to revolutionize many aspects of our lives, it is not infallible and should be treated with a healthy dose of skepticism. Only by doing so can we avoid being led down a path of misinformation and unintended consequences.