“Since ChatGPT’s launch just nine months ago, we’ve seen teams adopt it in over 80% of Fortune 500 companies”
OpenAI blog
OpenAI announced that ChatGPT Enterprise is now available, offering better data protection and privacy, more security, improved speed and performance, and longer context windows. The Information reported that OpenAI is on pace to generate over $1 billion in revenue over the next 12 months, far ahead of its projections.
According to The Information, as of March of this year OpenAI had between 1 million and 2 million ChatGPT subscribers paying $20 per month, so conservatively speaking, most of the company’s revenue is coming from its enterprise clients. Despite the fact that ChatGPT isn’t yet a year old, OpenAI counts amongst its enterprise clients Fortune 500 companies across industries, including Stripe, Duolingo, Databricks, Volvo, Coca-Cola, Morgan Stanley, Zoom, Canva, PwC, Shopify, Square, Zendesk and others. An impressive list given the short time the product has been in the market.
For many companies, the base use case is to deploy an internal chatbot, powered by GPT-4, that lets employees search and engage with the company’s internal information (with the right level of access control and data management) without model hallucinations. The consumer product available until now fell short on a number of features that OpenAI addressed with this launch. But will it be enough for enterprises to adopt ChatGPT into their organisations?
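To make that base use case concrete, here is a minimal sketch (in Python, with invented documents and role labels) of the pattern behind such an internal chatbot: enforce access control first, retrieve only permitted documents, and only then pass the retrieved text to the model as grounding context. A real deployment would use embeddings and the OpenAI API rather than keyword matching.

```python
# Toy sketch of retrieval with access control (hypothetical documents/roles).
# A real internal chatbot would embed documents and send the retrieved
# context to GPT-4; here we only show the filter-then-retrieve pattern.

DOCS = [
    {"text": "Q3 revenue grew 12% year over year.",
     "allowed_roles": {"finance", "exec"}},
    {"text": "Expense policy: meals are capped at $50.",
     "allowed_roles": {"finance", "exec", "employee"}},
    {"text": "M&A pipeline: target list is confidential.",
     "allowed_roles": {"exec"}},
]

def retrieve(query: str, role: str, docs=DOCS):
    """Return documents the role may see that share a keyword with the query."""
    terms = set(query.lower().split())
    hits = []
    for doc in docs:
        if role not in doc["allowed_roles"]:
            continue  # access control happens before retrieval, not after
        if terms & set(doc["text"].lower().split()):
            hits.append(doc["text"])
    return hits

# An 'employee' asking about revenue sees nothing confidential:
print(retrieve("what was revenue growth", "employee"))  # []
print(retrieve("what was revenue growth", "finance"))   # ['Q3 revenue grew 12% year over year.']
```

Keeping the access check ahead of retrieval is what prevents the model from ever seeing, and therefore leaking, documents the employee is not entitled to.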
Unpacking the ChatGPT Enterprise offering
To gain enterprise adoption, products face a higher bar than consumer ones. Security, SLAs and access control are just a few of the minimum requirements to enter. The new ChatGPT Enterprise runs faster and offers more security and privacy features. The enterprise features followed fine-tuning, a capability that lets developers using the OpenAI API customise the GPT-3.5 Turbo model on their own data, showing strong performance for narrow tasks. This enables them to adjust the tone of the model so the output fits the company’s brand and language, improve model steerability (i.e. the model’s ability to follow instructions), and better control the format of the model’s responses (critical for use with third-party APIs). The Enterprise version takes all these features one step further.
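For reference, fine-tuning GPT-3.5 Turbo starts with preparing training data in the chat-style JSONL format the OpenAI fine-tuning endpoint expects: one JSON object per line, each holding a short conversation. The brand-voice system prompt and Q&A pairs below are invented for illustration:

```python
import json

# Sketch of preparing fine-tuning data in OpenAI's chat JSONL format.
# The system prompt and example pairs are hypothetical.
BRAND_VOICE = "You are Acme's support assistant. Be concise and friendly."

examples = [
    ("How do I reset my password?",
     "Head to Settings > Security and tap 'Reset password'."),
    ("Do you ship internationally?",
     "Yes! We ship to 40+ countries; see our shipping page."),
]

lines = []
for question, answer in examples:
    record = {"messages": [
        {"role": "system", "content": BRAND_VOICE},
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
    ]}
    lines.append(json.dumps(record))

# One training example per line, as the API expects.
with open("train.jsonl", "w") as f:
    f.write("\n".join(lines))

print(len(lines))  # 2
```

Once the file is uploaded, starting the job was a single API call in the 2023-era SDK (`openai.FineTuningJob.create` with `model="gpt-3.5-turbo"`).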
ChatGPT Enterprise key features:
- Enhanced privacy and SOC 2 compliance
- No usage caps
- Performs up to two times faster
- 32k token context window in Enterprise, allowing users to process 4x longer inputs or files
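To put the context-window numbers in perspective, a rough back-of-the-envelope calculation, using the common ~0.75 English-words-per-token heuristic (an approximation, not an exact tokenizer count):

```python
# Back-of-the-envelope context-window arithmetic. The ~0.75 words-per-token
# ratio is a rough heuristic for English text, not an exact tokenizer count.
WORDS_PER_TOKEN = 0.75

def approx_words(context_tokens: int) -> int:
    return int(context_tokens * WORDS_PER_TOKEN)

standard = approx_words(8_000)     # 8k-token standard GPT-4 context
enterprise = approx_words(32_000)  # 32k-token Enterprise context

print(standard, enterprise)   # 6000 24000
print(enterprise // standard) # 4  -- the "4x longer inputs" claim
```

So the 32k window fits roughly 24,000 words of input, versus about 6,000 for the standard 8k GPT-4 context, hence the 4x claim.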
Samsung Electronics, Google and others have restricted employees from using generative AI bots for fear that confidential information will be leaked. OpenAI’s new enterprise features address their data privacy concerns, to some extent, as prompts won’t be used for training. Security has been another major concern in enterprise adoption, according to a survey by McKinsey & Company, and the SOC 2 compliance might help IT departments tick the box on security and privacy compliance. Another major feature is performance: the Enterprise version runs up to twice as fast.
Who lives, who dies, who tells your story?
Much excitement for AI remains in the tech industry. Microsoft, Google and other established companies are investing heavily and rolling out new AI products, and business is booming for Nvidia, whose chips are used to train AI models.
WSJ
On the coattails of OpenAI’s announcement, Google Cloud today shared a number of new generative AI features coming to Google’s Cloud services: Model Garden, a collection of 100 different models including Meta’s Llama 2 and Anthropic’s Claude 2; automations for Gmail and Google Docs; improved performance on code generation; Google Meet automations; new copy-creation tools; and more.
One of the main beneficiaries of all of this is of course Nvidia, which currently provides the best-performing GPUs for generative AI. Nvidia’s CEO Jensen Huang stunned Wall Street with a record $13.5 billion quarterly revenue, driven by surging demand for its AI chips. That represents an 88% increase quarter over quarter. Many suspected this was the peak for Nvidia, but the company yesterday announced a partnership with Google Cloud, which took its market cap to $1.2 trillion, closing in on the most valuable companies in tech.
But it’s not just large tech companies fighting for their share of cloud revenue – startups are entering the space quickly. As I mentioned before, I believe we will see generative AI tools supporting every role in companies and adding automation, especially for manual, repetitive tasks.
As an example, take Y Combinator. Generative AI startups accounted for 22% of YC’s winter 2023 batch, across several categories:
- Model Training & Deployment
- HR & Enterprise Support
- Chat Assistant & Copilot
- Sales, marketing and customer success
- Software Infrastructure and coding automation
- Middleware and application builders
- Data Insights and Optimisation
- Observability, Monitoring and Evaluation
- Creative: image and design
- Data Environment
- Project/Product Management
- Video
- Search
- Voice / transcription
Investment into the generative AI space is growing too: generative AI startups have raised $15.2 billion so far in 2023. In Q2 2023 alone, investment in generative AI startups—those focused on systems that produce humanlike text, images, and computer code—increased 65% to $3.3 billion. Goldman Sachs forecasts that AI investment will approach $200 billion globally by 2025.
But not all is rosy. Venture investors are realising that generative artificial intelligence might not be enough to stem the years-long startup downturn. As Index Ventures partner Mark Goldberg put it in a WSJ article: “there is a shallow trough of disillusionment”.
Startups that enjoyed the buzz are now realising that they also need to become good businesses, not just cool technology, to survive.
Barriers remain for Enterprise adoption of Generative AI tools
While the current OpenAI release opens the door for enterprises to start using a super-version of ChatGPT, wider enterprise adoption will also depend on:
1) Price – OpenAI didn’t publish its price for enterprise clients, but it’s safe to assume it ain’t cheap.
2) Accuracy (unclear if OpenAI can completely stop hallucinations)
3) Copyright – should enterprises risk adopting Generative AI tools that are based on copyright infringed training data?
4) Pending regulation
5) GPU shortage and availability
Are we ready for the impact of Generative AI enterprise adoption?
“AI will not replace you, but the person using AI will”
We’re starting to see announcements from CEOs like IBM’s Arvind Krishna, who said the company will replace ~8k jobs with AI, largely in non-customer-facing roles across departments like HR, finance and accounting. Or PwC, which said it will invest $1 billion in generative AI over the next three years.
There are many studies and statistics on the number of jobs that are likely to be lost due to AI and automation. But we don’t yet have much information about the jobs that could be created as a result of these new tools.
One thing is certain, for startups, it’s an incredible time to be building tools and solutions in this space, even at the application layer of generative AI. Clever founding teams can do much more with limited resources and automate workflows for both consumers and enterprises across verticals. For investors, deploying capital in this space remains attractive, but it’s not always straightforward. The many startups using OpenAI’s API to bring ChatGPT to the enterprise are finding themselves nearly redundant today, and with the pace the generative AI space is moving, it’s nearly impossible to predict what isn’t going to get commoditised soon.
I am interested to see whether Meta’s strategy of releasing capable open-source models, disrupting the moats the large generative AI companies are building around their businesses, grabs some of that enterprise market. In fact, I am betting on it. Given that I can build an 80-CPU, 160 GB RAM 1U server to train and run inference on models for £1,300, with a combination of mixture-of-experts techniques and multiple task-specific fine-tuned models, the costs will be significantly lower than any GPT service provider. It also easily supports air-gapped infrastructure and addresses most of the top concerns in the McKinsey chart above.