The new GPT-4 language model developed by OpenAI may not seem all that harmful. Nonetheless, the worst risks are those that are never expected.
OpenAI recently released GPT-4, its latest artificial intelligence (AI) language model. The model’s arrival had been anticipated with great fanfare by AI enthusiasts.
For months beforehand, rumors about its features and capabilities spread wildly:
“I heard it has 100 trillion parameters.” “I heard it got a 1,600 on the SAT.” “My friend works for OpenAI, and he says it’s as smart as a college graduate.”
These rumors may not be true, but they suggest how impressive the technology’s abilities can feel. One early GPT-4 tester said that trying the language model gave them an “existential crisis,” because it revealed how creative and powerful the AI was compared with the tester’s own frail brain.
The new technology will not give every user an existential crisis. But it may exacerbate the dizzying feeling many people get when they think about the potential power of artificial intelligence as we head deeper into the AI era.
So much is changing so quickly in the AI space that users may experience ‘future shocks’ like this for the rest of their lives.
GPT-4 is currently available through ChatGPT Plus, the $20-a-month subscription version of OpenAI’s ChatGPT chatbot.
Related: The Ultimate ChatGPT Guide for 2023
GPT-4 Early Adopters
GPT-4 seems to have been hiding in plain sight. Microsoft confirmed that Bing Chat, its chatbot tech co-created with OpenAI, is powered by GPT-4.
On the positive side, GPT-4 is a powerful engine for creativity; nobody knows what new cultural, scientific, and educational work it might support. We already know that artificial intelligence can help scientists develop new drugs, increase programmers’ productivity, and detect some types of cancer.
Other adopters include Stripe, which is using the technology to scan business websites and deliver summaries to customer support staff, and Duolingo, which built GPT-4 into a new language-learning subscription tier.
Morgan Stanley is developing a GPT-4-powered system that will retrieve information from company documents and serve it to financial analysts. Khan Academy is leveraging GPT-4 to build an automated tutor. And Be My Eyes is using the technology to help blind and visually impaired people navigate the world.
Since developers can integrate GPT-4 into their own apps, much of the software we use may soon become smarter and more capable. That is the optimistic case. But there are also reasons to fear GPT-4.
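To make the integration point concrete, here is a minimal sketch of what wiring GPT-4 into an app can look like. It assumes the openai Python package as it existed at GPT-4’s launch (the v0.x ChatCompletion interface) and a hypothetical support-ticket summarizer loosely inspired by Stripe’s use case; the function name and prompt are illustrative, not any company’s actual code.

```python
# Minimal sketch: calling GPT-4 through the openai Python package (v0.x API).
# The summarizer use case, function name, and prompt are hypothetical.
import openai

openai.api_key = "YOUR_API_KEY"  # replace with your own OpenAI API key

def summarize_ticket(ticket_text: str) -> str:
    """Ask GPT-4 to condense a customer support ticket into one sentence."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You summarize customer support tickets in one sentence."},
            {"role": "user", "content": ticket_text},
        ],
        temperature=0.2,  # low temperature keeps summaries focused and consistent
    )
    return response.choices[0].message.content

print(summarize_ticket("My card was charged twice and nobody has replied to my emails."))
```

A few lines like these are enough to give an existing product a general-purpose language engine, which is why the list of early adopters is so varied.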
We do not yet know everything that this technology can do!
Development
OpenAI spent six months “iteratively aligning” GPT-4, guided by lessons from an internal adversarial testing program and from ChatGPT. The company says this strategy produced its best-ever results on factuality, steerability, and refusing to go outside of its guardrails.
Like the previous GPT models, GPT-4 was trained on publicly available data, including public web pages, as well as data licensed by OpenAI. Microsoft partnered with OpenAI to build a ‘supercomputer’ from scratch in the Azure cloud, which was used to train GPT-4.
While announcing GPT-4, OpenAI wrote in a blog post:
“In a casual conversation, the distinction between GPT-3.5 and GPT-4 can be subtle. The difference comes out when the complexity of the task reaches a sufficient threshold — GPT-4 is more reliable, creative, and able to handle much more nuanced instructions than GPT-3.5.”
Undoubtedly, one of GPT-4’s most interesting aspects is its ability to understand images as well as text. GPT-4 can caption, and even interpret, relatively complex images. For example, it can identify a Lightning Cable adapter from a picture of a plugged-in iPhone.
OpenAI has yet to release this image-understanding feature to the public, out of concern over how it might be misused. But Greg Brockman, OpenAI’s president, showed off its potential in a live-streamed demo. He held up a photo of a drawing he had made in a notebook: a crude pencil sketch of a website.
He fed the photo to GPT-4 and asked it to build a real, working version of the website using HTML and JavaScript. Within a few seconds, GPT-4 scanned the image, turned its contents into text instructions, converted those instructions into working computer code, and built the website. Impressively, the buttons even worked.
Should you be scared or impressed by GPT-4? Maybe the right answer is both.
Related: We Are Beginning The Age Of AI
According to the company, GPT-4 is more capable and accurate than the original ChatGPT. It performs impressively on a variety of tasks and tests, including the Uniform Bar Exam (where it scored higher than 90 percent of human test-takers) and the Biology Olympiad (where it scored higher than 99 percent of humans).
GPT-4 also aces several Advanced Placement exams, including A.P. Biology and A.P. Art History, and scores 1,410 on the SAT – not a perfect result, but one that most human high schoolers would covet.
GPT-4 also feels smarter in use: it responds more fluidly than previous versions and seems comfortable with a wider range of tasks. It has more guardrails in place than ChatGPT, and it appears far less unhinged than the original Bing, which we now know was powered by a version of GPT-4 but which seems to have been far less carefully fine-tuned.
AI Emergent Behaviors
A strange phenomenon of current AI language models is that they often act in ways their developers did not anticipate, and pick up skills they were not specifically trained to have. These are known as ‘emergent behaviors,’ and there are several examples.
An algorithm designed to predict the next word in a sentence may spontaneously learn to code. A chatbot trained to be helpful and pleasant might turn creepy and manipulative. An AI language model might even learn to replicate itself, creating copies in case the original were ever disabled or destroyed.
For now, GPT-4 does not seem dangerous, because OpenAI has spent many months studying the model and mitigating its risks. But what happens if that testing missed a risky emergent behavior? Or if GPT-4’s introduction inspires another, less careful AI lab to rush a language model to market with fewer guardrails?
Some chilling examples of what GPT-4 can do appear in a document OpenAI released, titled “GPT-4 System Card.” It describes ways the company’s testers tried to get the model to do dangerous and dubious things, often successfully.
Related: Dark Web Criminals Plan Chat GPT Takeover - NordVPN
In one test that linked GPT-4 to several other systems, the model managed to hire a human TaskRabbit worker to complete a simple online task for it – solving a CAPTCHA – without alerting the person that it was a robot. The AI even lied to the worker about why it needed the CAPTCHA solved, inventing a story about having a vision impairment.
Many of these dangers play on old, Hollywood-inspired narratives about what a rogue AI might do to people. But they are no longer science fiction; they are things some of today’s AI systems are already capable of doing.
The best kinds of AI risks are those that can be tested for, planned for, and neutralized ahead of time. The worst risks are the ones that cannot be anticipated or planned for.
For now, nobody knows half of what is coming with the introduction of AI systems like GPT-4.