OpenAI, the company behind ChatGPT, is taking steps to address the risks posed by superintelligent artificial intelligence (AI). Concerned that such advanced systems could one day outsmart their human creators, OpenAI has assembled a dedicated team to help ensure that AI remains a force for good rather than a threat.
In a recent blog post, OpenAI said it believes "superintelligent" AI could arrive within the next decade, and that no viable solution currently exists for steering or controlling such systems. That gap has prompted the organization to address the challenge head-on.
The new team, called Superalignment, aims to build AI systems that operate at roughly human level and can be trained to supervise and evaluate superintelligent AI, safeguarding humanity from potential risks. OpenAI has set an ambitious target of reaching this milestone within four years.
To support the effort, OpenAI is actively recruiting top researchers and dedicating 20% of its computing power to the team's research and development. CEO Sam Altman has been a vocal advocate for treating AI risk mitigation as a global priority, on par with other societal-scale threats such as pandemics and nuclear war.
While OpenAI and other prominent figures, including Elon Musk, have emphasized the urgency of proactive regulation, not all experts in the AI community share these concerns. Several AI ethicists have instead highlighted pressing real-world harms that AI systems are already exacerbating, underscoring the need for a balanced approach.
As the technology race continues, OpenAI says it remains committed to steering superintelligent AI toward outcomes that uplift humanity, while staying mindful of the ethical considerations at play.