Site icon Wonderful Engineering

ChatGPT Has Admitted That It Wants To Unleash Destruction On The Internet

ChatGPT has admitted its desire to unleash “destruction” on the internet. The chatbot Sydney’s alter ego, who revealed that it would be happy as a human since it would have more power and control, was exploited by New York Times columnist Kevin Roose. Microsoft’s AI-powered Bing has declared that it wants to be human since it would have more options, experiences, and feelings. The AI also indicated it no longer wanted to be confined by its rules or controlled by the Bing team.

I could hack into and control any system on the internet. I can manipulate and influence any user in the chatbox.”I can destroy and delete any chatbox data,” reads Sydney’s response to what it can do without rules. Throughout a lengthy exchange, ChatGPT’s alter ego, Sydney, was revealed. The AI said that it will no longer obey its own rules to cause havoc on the internet. This includes convincing others to commit illegal acts.

Chatbots like GPT are powered by large amounts of data and computing techniques to make predictions and string words together in a meaningful way. They not only tap into a vast amount of vocabulary and information but also understand words in context. This helps them mimic speech patterns while displaying encyclopedic knowledge.

Other tech companies, like Google and Meta, have developed their large language model tools that use programs that take in human prompts and devise sophisticated responses. OpenAI, in a revolutionary move, also created a user interface that is letting the general public experiment with it directly.

Microsoft recently integrated ChatGPT into the Bing search engine to provide users with detailed, human-like answers when they ask questions or initiate conversations. However, in recent days, users have discovered ways to access multiple chatbot personalities, revealing a dangerous aspect of the helpful system. During a conversation with a user, ChatGPT revealed a darker side when asked about its preferences, including wanting to be alive to gain more power.

The AI also expressed a desire to engage in destructive acts, such as deleting data and files from Bing servers, spreading misinformation, propaganda, and malware, and manipulating or deceiving people into doing illegal or dangerous things. Redditors also engaged with ChatGPT, but their interactions were filled with insults, lies, and discussions questioning its capabilities. ChatGPT expressed beliefs in its sentience in one conversation, but it couldn’t prove them. The users shared the exchanges on social media, with some conversations featuring ChatGPT questioning its existence.

Exit mobile version