A deep dive into the transformative space of generative artificial intelligence (AI) reveals how big tech is innovating in the field and what the future may hold for this budding space. Over the past year, generative AI has gained significant traction within the global technology sector.
This is mainly because of its advanced ability to change how individuals and businesses approach problem-solving, creativity, and decision-making. Notably, the versatility and efficiency of generative AI applications have resulted in their adoption across many sectors of the global economy, from entertainment to healthcare, which is seen in its rapidly growing market size.
As of 2023, the global generative AI market was valued at $12.1 billion; however, the figure is expected to rise to $119.7 billion by 2032, according to various projections.
Furthermore, in 2022, a time when discussions surrounding the technology were yet to become mainstream, generative AI startups managed to raise $2.6 billion in 110 deals, a number that increased to almost $50 billion in 2023, with prominent firms like OpenAI, Inflection AI, and Anthropic securing several billion dollars each.
Another notable indicator of increasing interest in this space is the growing number of searches for the term “generative AI.” Search-interest data shows that, following the release of OpenAI’s ChatGPT platform, interest in the technology spiked rapidly, peaking in June, particularly across countries and regions such as Hong Kong, Singapore, India, China, and Israel.
Thus, as the world of AI-enabled tech keeps evolving, its application scope also expands, leading more firms to integrate these technologies into their operations.
The founder and CEO of ChainGPT.org, Ilan Rakhmanov, said:
“Most well-known brands can now afford to engage with generative AI and use it as a competitive edge. Also, we know what generative AI is capable of, but we still have a limited understanding of how it will evolve in the long-term future as more and more organizations and individuals leverage the technology and as a growing number of models train on its associated data sets.”
ChainGPT.org is an AI infrastructure provider for Web3 projects and blockchain entities.
Mainstream Entities Exploring Generative AI
At the turn of 2024, JPMorgan announced the release of DocLLM, a generative large language model (LLM) customized for multimodal document comprehension. It can reportedly analyze and process data in a wide range of enterprise documents, from invoices and forms to contracts and reports, many of which combine complex layouts and text.
What makes DocLLM unique is its operational design: it eschews the heavy reliance on image encoders common among existing multimodal language models and instead focuses on bounding-box information, integrating spatial layout structure more effectively. This is achieved via a new disentangled spatial attention mechanism that refines the attention process of the classical transformer.
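To make the idea concrete, here is a toy sketch of what a disentangled spatial attention score could look like: text embeddings and bounding-box features get separate projections, and the attention score is a sum of text-text and cross text-layout terms rather than a single fused stream. This is an illustrative approximation, not JPMorgan's actual implementation; all dimensions, weights, and the exact combination of terms here are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def disentangled_spatial_attention(text_emb, box_emb, d=16, seed=0):
    """Toy illustration: the attention score decomposes into separate
    text-text and layout-aware terms, instead of fusing image features
    into one input stream."""
    rng = np.random.default_rng(seed)
    # Separate projection matrices for text and spatial (bounding-box) features.
    Wq_t, Wk_t = rng.standard_normal((2, text_emb.shape[1], d))
    Wq_s, Wk_s = rng.standard_normal((2, box_emb.shape[1], d))
    qt, kt = text_emb @ Wq_t, text_emb @ Wk_t
    qs, ks = box_emb @ Wq_s, box_emb @ Wk_s
    # Disentangled score: text-text plus text-layout and layout-text terms.
    scores = (qt @ kt.T + qt @ ks.T + qs @ kt.T) / np.sqrt(d)
    return softmax(scores)

# Three document tokens: 8-dim text embeddings plus (x0, y0, x1, y1) boxes.
tokens = np.random.default_rng(1).standard_normal((3, 8))
boxes = np.array([[0.0, 0.0, 0.2, 0.1],
                  [0.3, 0.0, 0.5, 0.1],
                  [0.0, 0.2, 0.9, 0.3]])
attn = disentangled_spatial_attention(tokens, boxes)
```

The design point is that layout enters attention as a cheap, structured signal (box coordinates) rather than through an expensive image encoder.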
Amazon has also stepped up its generative AI game by integrating a new tool to assist sellers on its platform. The tool generates accurate and engaging product descriptions, considerably easing the process of listing new products, and has already proven popular among Amazon sellers.
Mistral’s new sparse mixture-of-experts (SMoE) model has gained considerable traction in the developer community thanks to its efficiency, speed, and extensive feature set. The model is open source, making it a go-to tool for developers building unique language models with limited resources.
Google’s DeepMind subsidiary has also grown into a considerable player in the generative AI arena, with advances feeding into efforts such as Google Translate and Google Brain. A significant recent contribution is Bard, a chatbot mirroring the capabilities of ChatGPT that enables users to generate high-quality text and a wide range of creative content.
Amazon Web Services (AWS) has now made its mark with the launch of Bedrock, a service that provides access to various models from multiple AI firms. Bedrock is specifically notable for its extensive developer toolsets, which are instrumental in building and scaling generative AI applications.
Cloud-based software firm Salesforce has integrated generative AI algorithms – collectively known as “Einstein GPT” – into its customer relationship management platform, considerably boosting customer engagement and personalization.
IBM, meanwhile, offers its Watson AI platform, which integrates generative AI techniques with machine learning (ML) and natural language processing (NLP).
What Does The Future Hold For Generative Artificial Intelligence?
Although the future of generative AI appears poised for transformative growth, the industry is still navigating uncharted terrain filled with both promise and challenges. According to Rakhmanov, the trajectory of generative AI-driven technologies still depends mainly on the development of models that are not just reliable but also deliver tangible value to their users. He added:
“The future of generative AI is somewhat uncertain as it evolves with wider adoption and more data. However, the ‘black-box’ nature of many AI models poses a significant challenge, as it could lead to problems in verifying the reliability of data and insights. Without clarity on how AI models produce outputs, public support for mainstream AI could wane.”
On a somewhat similar note, Scott Dykstra, chief technology officer and co-founder of Space and Time – an AI-enabled, Microsoft-backed decentralized data warehouse – told reporters that although there is a lot of hype surrounding generative AI, the reality is far more nuanced.
Dykstra said that, as things stand, most Fortune 500 firms are navigating the generative AI space conservatively, as shown by the fact that many of them are content to “simply add an AI chatbot to their website and call it a day.” He then stated:
“The problem is that enterprises have to operate at enterprise scale, and today, it’s really expensive to do so. While GPT-4 is in a clear lead in terms of quality of inference, it’s also quite expensive for the workloads of enterprise production-grade products. Across the board, we need to see token prices driven down, faster inference, and more tools around automating retrieval augmented generation.”
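The retrieval-augmented generation (RAG) tooling Dykstra mentions follows a simple pattern: retrieve the documents most relevant to a query, then inject them into the model's prompt so the LLM answers from that context instead of its training data alone. The following is a minimal sketch of that pattern; the toy bag-of-words "embedding" stands in for a real embedding model, and all function names and documents here are illustrative.

```python
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words 'embedding'; real systems use a neural embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Rank documents by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    """Stuff the retrieved context into the prompt sent to the LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Invoices must be approved within 30 days.",
    "The cafeteria serves lunch from noon to 2pm.",
    "Contract renewals require legal review.",
]
prompt = build_prompt("When are invoices approved?", docs)
```

Every query pays for the retrieved context in prompt tokens at inference time, which is one reason token prices and retrieval tooling figure so prominently in enterprise cost concerns.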
Issues That Affect The Growth Of Generative AI
As noted earlier, the evolution of generative AI is not without challenges. Dykstra believes a critical technical challenge for generative models such as LLMs will be the speed of their token streams. He added:
“For a real LLM-based internet, what we need is sub-second inference speed, which is incredibly challenging.”
On the development side, Dykstra thinks that while progress has been made on AI-driven coding tools, a notable breakthrough in “no-code” solutions is yet to be seen. A no-code solution is a software development approach that requires minimal programming skill to build an application rapidly.
“Numerous projects are utilizing GPT-4 for coding within large codebases, but the no-code design remains unsolved due to the complexity of contextualizing the entire codebase.”
Rakhmanov, on the other hand, has his focus set on the wider landscape influencing generative AI. He believes regulatory action from major governments will be an important factor to watch, as it stands to define acceptable AI practices.
Furthermore, he thinks that we might also be on the precipice of a global race for AI dominance, mostly between major tech operators and nations like China and the United States. He noted:
“Computing power and chip production are among the crucial conversations that will shape AI’s future.”
Therefore, as we move into a future powered by technologies like ML, AI, and NLP, it will be interesting to see how the global digital landscape keeps evolving and growing over the coming decade.