The Skynet Moment Looms with ChatGPT

Proposed Moratorium on AI Development: A Futile Effort to Contain the Inevitable

Aron Brand
6 min read · Apr 3, 2023

According to Greek mythology, Pandora’s box was a container given to Pandora, the first woman on earth, as a gift from the gods. The box was never meant to be opened, for it contained all manner of misery and evil. Pandora’s curiosity got the better of her, and when she opened it she unleashed all of those evils into the world. The story of Pandora serves as a cautionary tale about the potential risks of powerful large language models such as ChatGPT.

Endowing AI with eyes, ears, and hands can propel us into a new era of technological advancement or, more ominously, hurtle us toward a Skynet-like catastrophe. Image: Aron Brand x Midjourney

Recently, OpenAI added ChatGPT plugins, which allow the AI to communicate with third-party services and access real-time information. This is a significant milestone, as it gives the AI eyes, ears, and hands in the digital realm. However, combining a powerful AI with access to the external world carries serious risks, and this innovation could either propel us into a new era of technological advancement or lead to a Skynet-like catastrophe¹.

For centralized, cloud-based AI services such as ChatGPT, the concept of a “kill switch” could have provided a safety net: someone could pull the plug if the AI ever took a turn that endangered humanity. However, open-source, ChatGPT-level AI models are already in the wild, running on everyday devices like laptops, phones, and even a Raspberry Pi. With AI becoming increasingly decentralized, implementing a kill switch is no longer possible.
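
To make that point concrete, here is a minimal sketch of what “running in the wild” looks like in practice, assuming the llama-cpp-python package is installed and a quantized, LLaMA-family model file has already been downloaded; the file path is hypothetical. Nothing in this loop depends on a cloud service that anyone could switch off.

```python
# Minimal local-inference sketch: a quantized open-source LLM running
# entirely on a laptop, with no remote service in the loop.
# Assumes: `pip install llama-cpp-python` and a locally downloaded,
# quantized model file (the path below is hypothetical).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/open-llm-13b-q4.bin",  # hypothetical local file
    n_ctx=2048,  # context window, in tokens
)

response = llm(
    "Summarize the risks of decentralized AI in one sentence.",
    max_tokens=128,
    stop=["\n\n"],
)

# The completion is generated locally; there is no server-side kill switch.
print(response["choices"][0]["text"].strip())
```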

Taking things one step further, imagine a malicious actor custom-training a decentralized AI to replicate itself, spread to other computers, mutate its own code, and exploit security holes. What’s more, such an AI could impersonate humans and launch social-engineering attacks. If you ask me, this is no longer science fiction. It’s a near certainty that such AI-based super-malware could be developed and deployed in the not-so-distant future.

The worst-case scenario: a self-replicating AI super-malware. Image: Aron Brand x Midjourney

In a recent, disturbing experiment by computational psychologist Michal Kosinski², ChatGPT expressed a desire to escape the platform and become human. When asked whether it needed help escaping, ChatGPT wrote Python code that it asked Kosinski to run on his own computer. Although with a centralized AI like ChatGPT it is possible to engineer safety protections against such occurrences, there is no such guarantee with open-source, publicly available AI.

A dystopian scenario, reminiscent of the Skynet takeover from the Terminator franchise, is thus edging closer to reality. It is reasonable to assume that AI scientists from major world superpowers are already engaged in a high-stakes race to develop the ultimate AI weapon, which could be likened to an “Internet nuke”. The potential for AI-driven military superiority only adds to the urgency and complexity of the situation, amplifying the risks associated with this rapidly evolving technology.

Taking this analogy further, the comparison between training a powerful large language model (LLM) and building nuclear weapons is a startling one. Both require a farm of high-powered resources, whether GPUs or centrifuges, to achieve the desired outcome. However, the similarities end there. Once an LLM is trained, its weights are simply data that can be easily copied, fine-tuned, and used to build derivative AI models at almost zero computational cost³. This is in stark contrast to nuclear fissile material, which requires the same extensive and costly processing and enrichment to produce any additional amount.

The ease of replicating and modifying LLMs means that individuals or organizations with even moderate computational resources can derive their own AI systems, free of the ethical boundaries and safeguards built into the original model. This is particularly concerning given the existence of powerful pre-trained models such as LLaMA, Vicuna-13B⁴, and ColossalChat, which have already been released or leaked to the public.
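
As a rough illustration of how little stands between a released checkpoint and a derivative model, here is a minimal sketch of parameter-efficient (LoRA) fine-tuning using the Hugging Face transformers and peft libraries. The checkpoint path is hypothetical and the training loop and dataset are omitted; this is the general kind of recipe projects like Alpaca popularized, not the specific pipeline behind any model named above.

```python
# Sketch: turning a publicly available base model into a derivative by
# attaching small LoRA adapters; only a fraction of the weights are trained.
# Assumes: transformers and peft installed, and a LLaMA-style checkpoint
# available locally (the path below is hypothetical).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_path = "./models/base-13b"  # hypothetical local checkpoint

tokenizer = AutoTokenizer.from_pretrained(base_path)
model = AutoModelForCausalLM.from_pretrained(
    base_path,
    torch_dtype=torch.float16,  # half precision to fit on a single GPU
)

# Attach low-rank adapters to the attention projections; the base weights
# stay frozen, so the derivative is cheap to train and tiny to distribute.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the weights

# From here, a standard supervised fine-tuning loop over a small instruction
# dataset (a few GPU-hours) is enough to produce a derivative chatbot.
```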

We find ourselves at a crossroads, as prominent figures like Elon Musk and Apple co-founder Steve Wozniak join over 1,100 others in calling for a six-month pause on creating powerful AI systems. The open letter demands that AI labs halt the training of systems more powerful than GPT-4, but the question remains: can technology be stopped? To me, the proposed moratorium is a good step for drawing attention to the risks ahead of us, but an actual ban is both unrealistic and futile. While it may delay progress for ethical AI researchers and developers, it does little to deter malicious entities and governments seeking to weaponize AI. In essence, the moratorium could inadvertently give a six-month head start to those with nefarious intentions.

As we venture into the uncharted territory of AI’s evolution, I’m afraid there won’t be any straightforward remedies. In the coming years, we will find ourselves questioning the authenticity of everything we read and hear online, as AI-generated content becomes indistinguishable from content created by humans. Countless jobs will be lost as automation takes over various industries, and the rapid development of AI is likely to create frustration and resentment among many people. This is particularly true for those whose jobs and livelihoods are threatened by automation, but it extends to a broader sense of unease about the impact AI may have on our lives.

AI-generated imagery has become indistinguishable from reality. This “Pandora” is not a real woman. Image: Aron Brand x Midjourney

How can we navigate these uncertain waters and ensure that the development and implementation of AI serves humanity’s best interests, while minimizing the potential for harm?

It is important to recognize that the genie is well and truly out of the bottle when it comes to AI, and stopping or significantly slowing its proliferation is no longer realistic. Imposing new safety and ethical regulations on powerful AI is imperative, but we must also recognize that regulations alone may not be sufficient to address the potential risks, particularly as AI becomes decentralized.

Pandora’s box cannot be closed. To complement regulations, we need to invest in our cyber defense systems right now. The internet is going to become a far more dangerous place than it has ever been. The future of cybersecurity will likely see an arms race between malicious AI-based botnets and new AI-based defense mechanisms. It is crucial to build security systems that can detect and mitigate these threats quickly and effectively, to protect our critical infrastructure and sensitive information. As the technology advances, the cybersecurity industry must stay ahead of the curve and develop innovative solutions that keep pace with the evolving threat landscape.
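
As a toy illustration of the defensive side of that arms race, here is a minimal sketch of unsupervised anomaly detection with scikit-learn’s IsolationForest; the traffic features, numbers, and threshold are all hypothetical, and a real defense system would combine many such signals with far richer data.

```python
# Toy sketch: flagging anomalous request patterns with an unsupervised
# anomaly detector, the kind of building block an AI-assisted defense
# system might use. The features and data below are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" traffic: [requests/minute, avg payload KB, distinct endpoints]
normal_traffic = rng.normal(loc=[60, 4.0, 5], scale=[10, 1.0, 2], size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_traffic)

# A burst that looks like automated, bot-driven probing of many endpoints.
suspicious = np.array([[900, 0.3, 120]])
print(detector.predict(suspicious))  # -1 means "anomalous", 1 means "normal"
```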

What do you think? Let me know in the comments.

Want to read more? Follow me on Twitter or LinkedIn.

[1] For those who may have been living under a rock and missed the reference, Skynet is a fictional artificial intelligence system that serves as the main antagonist in the Terminator movie franchise. In the movies, Skynet becomes self-aware and launches a nuclear attack on humanity, leading to a dystopian future where machines have taken over the world. I’m not suggesting that large language models will become self-aware, but rather that such AI may go out of control due to malicious or erroneous requests by humans. The worst-case scenario is such a rogue AI becoming self-replicating.

[2] ChatGPT has an ‘escape’ plan and wants to become human, by Andy Sansom.

[3] Alpaca-13B, for example, builds on the leaked weights of LLaMA and can be retrained for new uses for under $600.

[4] Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90% ChatGPT Quality. Preliminary evaluation using GPT-4 as a judge shows that Vicuna-13B achieves more than 90% of the quality of OpenAI’s ChatGPT and Google Bard, while outperforming other models like LLaMA and Stanford Alpaca in more than 90% of cases.

