ChatGPT is exploding in popularity. Twitter is inundated with screenshots of the app, coding sites like Stack Overflow are banning answers generated with it, and at least 1 million people have already tried it. It has become a sensation.
AI researchers already agree that ChatGPT delivers on its intended purpose. It is fine-tuned specifically to function as a chatbot, but under the hood it relies on the same GPT-3 technology introduced more than two years ago.
What ChatGPT demonstrates, more than impressive technology, is the critical role access plays in making breakthroughs genuinely usable. By packaging GPT-3 in a form ordinary people can use, OpenAI has made the world sit up and notice the massive power of today's AI.
This strategy is nothing new; we remember Thomas Edison as the inventor of the light bulb not because he invented it first, but because he successfully introduced it to the market and turned it into something ordinary people could understand.
That pattern is likely to define the artificial intelligence (AI) sector going forward: the firms that make AI easiest to use will be the ones that thrive.
How ChatGPT Works
ChatGPT was trained using machine learning combined with human intervention, a method known as reinforcement learning from human feedback (RLHF). In the first stage of training, human trainers played both roles in a conversation: the user and the AI assistant. This exercise demonstrated the kinds of responses humans prefer and built a large dataset for fine-tuning the model.
Next, a reward model was needed for reinforcement learning. To build it, the human AI trainers stepped in again, this time ranking multiple model answers by quality, which taught ChatGPT to select the best response.
This means real people helped train ChatGPT to ensure its answers are not just factually accurate but written in a natural, human-like manner. From AI writing tools to AI voice generators, it is clear that the future will be shaped by powerful machine learning tools.
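The reward-model stage described above is typically trained with a pairwise ranking objective: the model should score the human-preferred answer higher than the rejected one. Here is a minimal sketch of that loss; the function name and toy scores are illustrative, not OpenAI's actual code.

```python
import math

def reward_ranking_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Pairwise ranking loss for reward-model training:
    -log(sigmoid(r_chosen - r_rejected)).
    The loss is small when the chosen (human-preferred) answer
    already scores higher, and large when the ranking is inverted."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Reward model already prefers the chosen answer: small loss.
low = reward_ranking_loss(2.0, -1.0)
# Reward model prefers the rejected answer: large loss.
high = reward_ranking_loss(-1.0, 2.0)
assert low < high
```

Training the reward model amounts to minimizing this loss over many human-ranked answer pairs; the tuned reward model then scores candidate responses during the reinforcement learning stage.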
It is also important to note that ChatGPT does not always offer factually correct answers, and it can produce biased content. The technology is still under development and is not yet foolproof.
Related: Artificial Intelligence Projects Need Equity, Diversity, And Inclusion
The Benefit Of Use Cases
Most of today's impressive artificial intelligence systems are powered by huge language models, which are trained on enormous swaths of the text humanity has produced over thousands of years of recorded history.
GPT-3 reportedly consumed some 8 billion pages of text, including all of Wikipedia and a large share of the books ever published. The result is an AI system that exhibits behaviors and properties of general intelligence and can do everything from solving coding problems to writing sea shanties.
None of that is new. Some analysts started testing GPT-3 in 2020, and the fundamentals of the system have been around for a longer time than that.
Notably, tools like GPT-3 have been built into all kinds of apps without anybody noticing. Most of the AI writing assistants that writers on various platforms either rail against or fawn over are just fancy wrappers around GPT-3.
Furthermore, much of the utilitarian text on the Internet – think summaries of a restaurant's menu or short blurbs about what to expect in a new city – is already written by AI systems like GPT-3. You have probably already read text written by GPT-3 without knowing it.
The Power Of Access
Why is ChatGPT such a sensation if the underlying technology has been out for a while?
The reason is that ChatGPT makes the technology readily accessible. The chatbot is free to use, and an ordinary person can sign up and interact with it as if they were texting family or friends. ChatGPT does not do anything entirely new; it simply does it in a way the average person can access, and hence be blown away by.
Supporting that kind of accessibility is not easy. OpenAI's Sam Altman said on Twitter that opening ChatGPT to the public came with massive computing costs.
Every chat sent to the system reportedly costs single-digit cents to process. With over 1 million people using the platform, OpenAI may be sinking hundreds of thousands of dollars per day into keeping ChatGPT running, without any immediate business case.
Many of the researchers who made the breakthroughs behind technologies like GPT-3 simply could not afford that. Without the resources to make such technology accessible, it can never make it out into the real world.
Related: The Best Machine Learning Blogs to Follow in 2023
A Look Backwards
This dynamic is nothing new. The history of science is sprinkled with cases where a researcher developed a breakthrough idea, only to be overshadowed by the entrepreneur or visionary who made that idea easily accessible to the public.
Many believe that Thomas Edison invented the light bulb. But inventors such as Thomas Wright, Vasilij Petrov, and Joseph Swan designed and built the first light bulbs. Edison's genius lay in building power plants and wiring, electrifying public buildings, and making the technology visible and accessible to ordinary people.
Edison likely lost a great deal of money pulling off dramatic stunts, such as setting up an entire power plant just to light the home of wealthy financier J.P. Morgan, several newspapers' headquarters, and the New York Stock Exchange.
However, once people saw the benefits of electric light, they wanted it in their own homes. By making the technology accessible and readily visible – even at a huge personal cost – Edison unlocked a market that proved highly profitable.
Something similar appears to be brewing in the artificial intelligence space. ChatGPT shows that developing breakthrough technology will not lead to cultural change if the technology stays confined to the lab, or even to the high-powered servers of B2B clients.
For a technology as groundbreaking as AI to make it out into the wider world, it needs to capture the imaginations of ordinary people. They must be able to play with it directly – and see its revolutionary power – before it can change the world.
As the field continues to evolve, the firms that make the technology widely accessible – even at massive cost – will be the ones that eventually succeed.
OpenAI had better get used to these eye-watering compute bills. It is building a revolution, and revolutions are usually expensive.
How Does ChatGPT Impact Web3 And Online Security?
In recent weeks, ChatGPT, a dialogue-based AI chatbot capable of understanding natural human language, took the world by storm. It gained more than 1 million registered users in five days, making it the fastest-growing tech platform ever.
ChatGPT produces remarkably detailed, human-like text and thoughtful prose from a simple text prompt. It can also write code. The Web3 community seems intrigued, curious, and more than a little shocked by the power and capabilities of the chatbot.
Related: Artificial Intelligence Chatbots and the Future of Marketing
ChatGPT's ability to write code is a notable addition to Web3, and it can cut two ways:
- Near-instant security audits of smart contract code, hunting for exploits and vulnerabilities both in deployed contracts and before implementation.
- Conversely, criminals and hackers can direct the AI to discover vulnerabilities in smart contract code and exploit them, potentially leaving thousands of existing smart contracts exposed.
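As a sketch of the first use case, here is how a team might wrap a GPT-style model into a contract auditor. The payload shape follows OpenAI's public chat-completions API, but the model name, prompt wording, and the helper `build_audit_request` are our own assumptions for illustration, not an established tool.

```python
import json

def build_audit_request(contract_source: str, model: str = "gpt-3.5-turbo") -> dict:
    """Build a chat-completion payload asking an LLM to flag common
    smart-contract vulnerabilities (reentrancy, overflow, unchecked
    external calls). Actually POSTing it to the API is left out here."""
    return {
        "model": model,
        "messages": [
            {
                "role": "system",
                "content": ("You are a Solidity security auditor. "
                            "List vulnerabilities with severity and line references."),
            },
            {
                "role": "user",
                "content": f"Audit this contract:\n{contract_source}",
            },
        ],
        "temperature": 0,  # deterministic output is preferable for audits
    }

# Toy contract with an obvious unchecked external call.
contract = 'contract Vault { function withdraw() public { msg.sender.call{value: 1 ether}(""); } }'
payload = build_audit_request(contract)
print(json.dumps(payload)[:60])  # this JSON body would be POSTed to the chat API
```

In practice the model's reply would still need a human auditor's review; as noted above, the same prompt pattern works just as well for an attacker hunting for exploits.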
The Naoris Protocol
In the long term, AI is expected to be a net positive for the future of Web3 security. In the near term, it will expose a range of vulnerabilities that must be addressed promptly before hacks and breaches spike. AI will help illuminate where humans need to improve.
For Web3 Developers And Development (Pre-Deployment)
Web3 developers and auditors will be in less demand. The future will look like this:
“Devs will instruct, write and generate code using AI. Devs will read and critique the AI's output, learning patterns and looking for weak spots. Auditors will need to understand errors, mistakes, and code patterns. Auditors will need to learn the limitations of AI. AI will work in tandem with dev teams to strengthen future code and systems. AI will be part of the production development pipeline.”
With all this in mind, it will be survival of the fittest for auditors and developers: only those who can work with, instruct, and evaluate artificial intelligence will survive. With an AI working alongside the team, the number of developers needed will shrink considerably.
For Web3 Security (Post-Deployment)
Swarm AI will be used to scan the status of smart contracts in near real time, monitoring the underlying code for code injections, anomalies, and hacks. Attackers, meanwhile, will shift to hunting for bugs and errors in the AI tools themselves rather than in the code.
This strategy should significantly improve Web3 smart contract security; more than $3 billion has been lost to hacks in 2022 to date. Notably, it will also sharpen CISOs' and IT teams' ability to monitor in real time.
Eventually, security budgets will shrink, cybersecurity teams will get smaller, and only those who can work with and interpret AI will be in demand.
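The post-deployment monitoring described above can be approximated, in very simplified form, as an integrity loop: record a fingerprint of each contract's deployed bytecode and alert on any drift. This is a toy sketch, not Naoris Protocol's actual swarm AI; the addresses, bytecode strings, and `fetch_bytecode` hook are invented for illustration (in practice the hook would wrap an RPC call such as `eth_getCode`).

```python
import hashlib

def bytecode_fingerprint(bytecode: str) -> str:
    """Hash the deployed bytecode so any on-chain mutation is detectable."""
    return hashlib.sha256(bytecode.encode()).hexdigest()

def monitor(baselines: dict, fetch_bytecode, alert) -> None:
    """One monitoring pass: compare each contract's current bytecode
    fingerprint against its recorded baseline and alert on any drift.
    `fetch_bytecode` is injected so the loop stays testable offline."""
    for address, baseline in baselines.items():
        current = bytecode_fingerprint(fetch_bytecode(address))
        if current != baseline:
            alert(address)

# Simulated chain state: one contract unchanged, one tampered with.
baselines = {
    "0xaaa": bytecode_fingerprint("0x6001600101"),
    "0xbbb": bytecode_fingerprint("0x6002600202"),
}
chain = {"0xaaa": "0x6001600101", "0xbbb": "0xdeadbeef"}  # 0xbbb was altered
alerts = []
monitor(baselines, lambda addr: chain[addr], alerts.append)
# → alerts == ["0xbbb"]
```

A real system would of course look for far subtler signals than a changed hash (anomalous call patterns, injected proxy logic), but the shape of the loop – baseline, scan, alert – is the same.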
Related: How Does Web3 Resolve Fundamental Issues In Web2?
Artificial intelligence is not a human being, so it will miss background knowledge, preconceptions, and subtleties that only people can see. At the same time, it will catch vulnerabilities that people code in error, improving the overall quality of smart contract code.
The Takeaway
An AI that writes and hacks code could spell serious trouble for systems, enterprises, and networks. Current cybersecurity measures are already failing against exponential increases in hacks across all sectors, with 2022 reportedly seeing 50% more attacks than 2021.
ChatGPT is only getting started. Implemented positively within an enterprise's security and development workflow, it can raise defensive capabilities above existing security standards. But criminals can also expand the attack vector, working far faster and smarter by instructing artificial intelligence to seek exploits in well-established code and networks.
Heavily regulated enterprises, such as those in the financial services industry (FSI), would struggle to react or recover in time because of the way existing cybersecurity practices and regulation are configured.
The average breach detection time, as measured in IBM's 2020 data security report, is 280 days. Using artificial intelligence could reduce detection time to under one second, which would change the entire digital landscape.
The arrival of AI platforms like ChatGPT will require enterprises to invest more in their security measures. They will need to implement AI services within their security QA workflows before launching any new code or programs.
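One way to wire such AI services into a QA workflow is a release gate that blocks deployment whenever the AI audit reports findings above an allowed severity. This is a hypothetical sketch; the `qa_gate` helper and its severity scale are our own invention, not a standard API.

```python
def qa_gate(findings, max_severity="medium"):
    """Decide whether a release may proceed.
    `findings` is a list of (issue, severity) pairs produced by the
    AI audit; any finding stricter than `max_severity` blocks release.
    Returns (ok, blocking_issues)."""
    order = {"low": 0, "medium": 1, "high": 2, "critical": 3}
    blocking = [issue for issue, sev in findings if order[sev] > order[max_severity]]
    return (len(blocking) == 0, blocking)

# A high-severity finding blocks the release; a low one does not.
ok, blockers = qa_gate([("unchecked call", "high"), ("magic number", "low")])
# → ok is False, blockers == ["unchecked call"]
```

In a CI pipeline this check would run after the AI audit step and before deployment, failing the build whenever `ok` is false.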