From instant translations and idea generation to composing emails and essays from scratch, ChatGPT is beginning to filter into our everyday lives. According to a UBS study, the chatbot reached 100 million monthly active users in January, just two months after launch, making it the fastest-growing consumer application in history.
There are, however, drawbacks and limitations keeping ChatGPT, and AI in general, from reaching full potential. This is where event-driven architecture (EDA) comes in: it facilitates the flow of information between the systems that “publish” events and the systems that indicate interest in that kind of information by “subscribing” to topics.
Building applications with EDA is an effective way to tie internal features together and make them more responsive. When ChatGPT is invoked, EDA can absorb requests and service them as capacity allows, improving response times, cutting unnecessary energy consumption, and even opening new e-commerce opportunities for B2B and B2C businesses. Here’s how.
Also see: Top Generative AI Apps and Tools
5 Ways EDA Unlocks the Potential of ChatGPT
1) No Questions Asked! Enable Automatic Answers by Streamlining the Request and Response Cycle
Today ChatGPT operates in what we techies call a “request/reply” fashion. Ask and ye shall receive, you might say. So now imagine if ChatGPT could proactively send you something it knows you’d be interested in!
For example, say you use ChatGPT to summarize and note action items from a Zoom meeting with a dozen participants. Instead of each participant raising a query, EDA would allow ChatGPT to send the notes to all attendees at the same time, including those who missed the meeting.
Everyone would be automatically and instantly up to date on meeting outcomes, and ChatGPT would carry significantly less load, since it proactively sends one message to a dozen recipients instead of satisfying a dozen separate request/reply interactions over time, thereby improving service levels for users.
Any group activity that needs the same ChatGPT-facilitated suggestions can benefit from this capability, such as teams working jointly on a codebase. Rather than ChatGPT suggesting changes and improvements to every developer individually in their IDE, each developer’s IDE would “subscribe” to suggestions, and the underlying EDA technology would push them out to all subscribed developers when they open the codebase.
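To make the fan-out pattern concrete, here is a minimal publish/subscribe sketch in Python. Everything in it (the Broker class, the topic name, the summary text) is a hypothetical stand-in for illustration; in a real deployment an event broker product would play the broker’s role, and the summary would come from an actual ChatGPT call.

```python
# A minimal in-memory publish/subscribe sketch: one generated summary is
# published once and fanned out to every subscriber, instead of each
# attendee issuing a separate request/reply call.
from collections import defaultdict
from typing import Callable

class Broker:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[str], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: str) -> None:
        for handler in self._subscribers[topic]:
            handler(message)

broker = Broker()

# Each attendee subscribes once to the meeting's notes topic.
for attendee in ["alice", "bob", "carol"]:
    broker.subscribe("meetings/weekly-sync/notes",
                     lambda notes, who=attendee: print(f"{who} received: {notes}"))

# One publish replaces a dozen request/reply round trips. The summary text
# stands in for the output of a hypothetical ChatGPT summarization call.
summary = "Action items: 1) ship v2 docs, 2) review Q3 roadmap."
broker.publish("meetings/weekly-sync/notes", summary)
```

The publisher does the work once; delivery scales with subscriptions rather than with repeated requests.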
On a related topic: What is Generative AI?
2) Reduce ChatGPT’s Energy Consumption with Intelligent Resource Utilization
ChatGPT is very resource-intensive, and therefore expensive, from a processing perspective, requiring specialized chips called graphics processing units (GPUs), and a lot of them. The extensive GPU workload (a fleet now estimated at upwards of 28,936 GPUs) required to train the ChatGPT model and process user queries incurs significant costs, estimated at between $0.11 and $0.36 per query.
And let’s not overlook the environmental costs of the model. The high power consumption of GPUs contributes to energy waste, with data scientists estimating ChatGPT’s daily carbon footprint at 23.04 kgCO2e, comparable to that of other large language models such as BLOOM.
However, the report explains “the estimate of ChatGPT’s daily carbon footprint could be too high if OpenAI’s engineers have found some smart ways to handle all the requests more efficiently.” So, there is clearly room for improvement on that carbon output.
By implementing EDA, ChatGPT can make better use of its resources by processing requests only when they are received, instead of running continuously.
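As a rough illustration of that idea, here is a small standard-library sketch, assuming a hypothetical handle_request() stand-in for the actual model call: the worker thread blocks on a queue, so it consumes cycles only when a request actually arrives rather than polling in a loop.

```python
# Event-driven resource use: the worker blocks until a request arrives,
# so no CPU is spent between events.
import queue
import threading

request_queue: queue.Queue[str] = queue.Queue()

def handle_request(prompt: str) -> None:
    print(f"processing: {prompt}")  # hypothetical stand-in for model inference

def worker() -> None:
    while True:
        prompt = request_queue.get()  # blocks; no busy-waiting between events
        handle_request(prompt)
        request_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

request_queue.put("Summarize this article in three bullets.")
request_queue.join()  # returns once the queued request has been processed
```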
Also see: 100+ Top AI Companies 2023
3) Eliminate ChatGPT Unavailability When at Capacity
ChatGPT needs to handle a high volume of incoming requests from users. Its popularity, rapid growth, and unpredictability mean it is frequently overwhelmed as it struggles to keep up with demand that can be extremely volatile, or what we call “bursty.”
Today this leads to “sorry, can’t help you” error messages for both premium and free ChatGPT users. These recent ChatGPT outages show how saturated the system is becoming as it struggles to rapidly scale up to meet ever-increasing traffic and compete with new rivals such as Google Bard.
So where does EDA come in?
When ChatGPT is overloaded, EDA can buffer requests and service them asynchronously across multiple event-driven microservices as the ChatGPT service becomes available. And because the services are decoupled, the failure of one does not cause the others to fail.
The event broker, a key component of event-driven architecture, is a stateful intermediary that acts as a buffer, storing events and delivering them when the service comes back online. This also means service instances can be added quickly to scale the system without downtime for the whole, improving both availability and scalability.
With EDA assistance, users of ChatGPT services across the globe can ask for what they need at any time, and ChatGPT can send them the results as soon as they are ready. This will ensure that users don’t have to re-enter their query to get a generative response, improving overall scalability and reducing response time.
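Here is a deliberately simplified sketch of that buffering behavior. The in-memory deque and the service_available() capacity check are invented for illustration; a real event broker persists events and tracks delivery acknowledgements instead.

```python
# If the downstream service is at capacity, events stay buffered and are
# retried later instead of surfacing a "try again" error to the user.
import time
from collections import deque

buffer: deque[str] = deque()

def service_available() -> bool:
    # Hypothetical capacity check; alternates roughly every second here.
    return time.time() % 2 < 1

def try_dispatch() -> None:
    while buffer:
        if not service_available():
            break                       # leave events buffered; retry later
        prompt = buffer.popleft()
        print(f"dispatched: {prompt}")  # stand-in for the actual model call

buffer.append("Draft a product description for a trail shoe.")
for _ in range(5):                      # a simple periodic retry loop
    try_dispatch()
    if not buffer:
        break
    time.sleep(1)
```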
Also see: ChatGPT vs. GitHub Copilot
4) Integrate ChatGPT into Business Operations to Disrupt the AI E-Commerce Marketplace
AI plays a critical role in the e-commerce marketplace – in fact, it is projected that the e-commerce AI market will reach $45.72 billion by 2032. So, it’s no surprise that leading e-commerce players are trying to figure out how to integrate ChatGPT into their business operations. Shopify, for instance, has developed a shopping assistant with ChatGPT that is capable of recommending products to users by analyzing their search engine queries.
EDA has the potential to enhance the shopping experience even further and help B2C and B2B businesses learn more about their customers. It does this by tracking key events at high volume from e-commerce platforms to help businesses understand patterns in customer behavior, such as what items are the most profitable in certain regions and what factors influence purchasing decisions.
This information can then be sent to a datastore, where the ChatGPT machine learning model can use it to predict customer behavior and make personalized product recommendations. This is only the beginning of the development of these sorts of models based on ChatGPT.
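As a toy example of that pipeline, the sketch below aggregates hypothetical purchase events into an in-memory datastore keyed by region and product. In practice the events would arrive through broker topics and land in a real database for the model to read.

```python
# Aggregate high-volume purchase events into a datastore a recommendation
# model could later train on or query. All event data here is invented.
from collections import defaultdict

datastore: dict[tuple[str, str], int] = defaultdict(int)

def on_purchase(event: dict) -> None:
    # Tally units sold per (region, product) pair.
    datastore[(event["region"], event["product"])] += event["quantity"]

events = [
    {"region": "emea", "product": "trail-shoe", "quantity": 2},
    {"region": "apac", "product": "trail-shoe", "quantity": 1},
    {"region": "emea", "product": "rain-jacket", "quantity": 3},
]
for event in events:  # in a real EDA these would arrive via a broker topic
    on_purchase(event)

print(dict(datastore))
```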
5) Improve Responsiveness for Your Global User Base
Since ChatGPT and applications built on it have a global user base, you will want to distribute the data from your GPT queries efficiently. An event mesh is an ideal architecture for this.
An event mesh is an architecture layer composed of a network of event brokers that allows events from one application to be routed to and received by any other application, regardless of where they are deployed. With it, you can dynamically route data on demand to interested subscribers rather than sending your ChatGPT results to every application and relying on application logic to filter them out. The result is a better user experience and savings on compute and network resources.
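The sketch below illustrates the routing idea with a toy topic matcher. The wildcard syntax is a simplification of what real event brokers support, and the application names are invented for illustration.

```python
# Subscribers declare interest with wildcard subscriptions; an event is
# delivered only where the subscription matches, with no client-side filtering.
def matches(subscription: str, topic: str) -> bool:
    sub_parts, topic_parts = subscription.split("/"), topic.split("/")
    if len(sub_parts) != len(topic_parts):
        return False
    return all(s == "*" or s == t for s, t in zip(sub_parts, topic_parts))

subscriptions = {
    "eu-app":  "chatgpt/results/eu/*",   # only European results
    "us-app":  "chatgpt/results/us/*",
    "auditor": "chatgpt/results/*/*",    # everything, for compliance
}

topic = "chatgpt/results/eu/translation"
for app, sub in subscriptions.items():
    if matches(sub, topic):
        print(f"routed to {app}")        # only eu-app and auditor receive this
```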
Also see: ChatGPT vs. Google Bard: Generative AI Comparison
Unleash the Full Potential of ChatGPT with EDA
ChatGPT may still be in its infancy, but with its rapid user adoption and regular new feature announcements, it seems the story is far from over. Whether it is used to address service outages and excessive energy consumption; enable greater scalability, resilience, and flexibility; or bring new business use cases to B2B and B2C organizations, EDA has the capacity to help this new generative AI tool build on its newfound success.
About the Author:
Thomas Kunnumpurath is Vice President of Systems Engineering for Americas at Solace.