Earlier this week, OpenAI made waves by announcing the launch of ChatGPT Enterprise, a paid version of the popular ChatGPT chatbot designed for modern organizations.
ChatGPT Enterprise uses the GPT-4 large language model (LLM), offering performance up to two times faster than standard ChatGPT and no usage caps. It also comes with a 32,000-token context window, which enables users to enter inputs four times longer than GPT-4's standard 8,000-token limit.
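ChatGPT Enterprise itself is a hosted web product, but a long-context GPT-4 variant is also exposed through OpenAI's API, which makes it easier to see what a 32,000-token window means in practice. Below is a minimal sketch using the openai Python package; the gpt-4-32k model name and access to it are assumptions here, as availability has been limited.

```python
# Minimal sketch: requesting a long-context GPT-4 variant through
# OpenAI's Python SDK (openai < 1.0). "gpt-4-32k" is assumed to be
# the 32,000-token variant; access depends on the account.
import openai

openai.api_key = "sk-..."  # placeholder; use your own key

with open("report.txt") as f:  # any long document
    long_document = f.read()

response = openai.ChatCompletion.create(
    model="gpt-4-32k",  # 32k context vs. 8k for base GPT-4 (4x)
    messages=[
        {"role": "system", "content": "Summarize the document you are given."},
        {"role": "user", "content": long_document},
    ],
)

print(response["choices"][0]["message"]["content"])
```

Very roughly, 32,000 tokens works out to something on the order of 24,000 words, enough to fit a lengthy report into a single prompt.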
OpenAI said in the announcement blog post:
“We’re launching ChatGPT Enterprise, which offers enterprise-grade security and privacy, unlimited higher-speed GPT-4 access, longer context windows for processing longer inputs, advanced data analysis capabilities, customization options, and much more.”
OpenAI’s enterprise-grade security offerings include data encryption at rest (AES-256) and in transit (TLS 1.2+).
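These are standard, widely deployed primitives rather than anything bespoke. As an illustration of what the "in transit" half means in practice (a client-side sketch, not OpenAI's own code), Python's standard library can enforce the same TLS 1.2+ floor when connecting to the API endpoint:

```python
# Client-side sketch: refuse any connection below TLS 1.2 and print
# the protocol version actually negotiated with the server.
import socket
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject anything older

with socket.create_connection(("api.openai.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="api.openai.com") as tls:
        print(tls.version())  # negotiated protocol, e.g. "TLSv1.3"
```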
The launch comes just months after Microsoft committed to investing $10 billion into OpenAI, and after employees at more than 80% of Fortune 500 companies have experimented with the popular chatbot since its release in November 2022.
Why Is ChatGPT Enterprise Important?
At a high level, the release of ChatGPT Enterprise is significant because, unlike ChatGPT, it doesn’t use the data entered into prompts to train OpenAI’s proprietary models. This means that an organization’s employees can enter prompts without worrying that sensitive data will end up in the training set or be exposed to unauthorized third parties.
Gartner analyst Jim Hare told Techopedia:
“OpenAI is throwing its hat into the ring by now launching a product that is suitable for enterprise use cases rather than offering a tool that is more geared for consumers/individuals.”
These privacy assurances are much needed, as enterprise adoption of generative AI has been an uphill struggle, with companies like Apple, Wells Fargo, JPMorgan, and Verizon restricting the technology over concerns about privacy and data leakage.
Hare added:
“Enterprises put policies in place that prohibited the use of ChatGPT because [of] concerns that their enterprise data would become part of the model training set and that the data wasn’t secure. ChatGPT Enterprise addresses these two major concerns.”
Misinformation and Hallucinations
While ChatGPT Enterprise is a step in the right direction, Hare notes that it leaves some significant issues unaddressed, particularly around output accuracy and hallucinations.
Across the board, one of the biggest problems with LLMs is their tendency to hallucinate and make up facts. An LLM is designed to process a user’s prompt and then predict which words will best answer the question; it can’t think autonomously or fact-check the way a person can.
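A toy sketch illustrates the mechanism. In the hypothetical example below, a model has assigned scores to candidate next tokens, and the decoder simply samples a plausible one; nothing in the process checks whether the resulting sentence is true:

```python
# Toy illustration (not a real LLM): hypothetical scores a model might
# assign to candidate next tokens for "The capital of France is".
import math
import random

vocab = ["Paris", "London", "Berlin", "pizza"]
logits = [9.1, 6.3, 5.8, 0.2]

# Softmax converts raw scores into a probability distribution
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# The decoder samples a plausible token; no step here verifies
# that the completed sentence is factually correct.
token = random.choices(vocab, weights=probs, k=1)[0]
print({w: round(p, 3) for w, p in zip(vocab, probs)}, "->", token)
```

Fluent-but-wrong output falls out of this design: a wrong token with a high score is just as easy to emit as a right one.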
In other words, chatbots are prone to spreading misinformation, which is a major barrier to enterprise adoption, because sharing false or misleading information with customers or the public at large can have serious legal consequences.
Just recently, a radio host sued OpenAI for defamation after ChatGPT generated a false legal complaint accusing him of embezzling money. In another case, in Belgium, a widow alleged that an AI chatbot encouraged her husband to take his own life, behavior GPT-3 has also been accused of in the past.
Together, these cases indicate that enterprises open themselves up to significant legal and reputational risk if they use generative AI to produce content or power products and services.
Transparency and Intellectual Property
Another significant issue that ChatGPT Enterprise fails to address is the lack of transparency over what data OpenAI’s models are trained on. This not only makes it difficult to spot potential bias in the training data but also leaves organizations unable to see if the model has been trained on copyrighted material or other intellectual property.
Currently, OpenAI is at the center of a wave of controversy for allegedly scraping copyrighted material from the internet and other sources.
For instance, in June, a group of anonymous individuals sued OpenAI for allegedly violating privacy laws, arguing that the organization scraped information from books, articles, websites, and posts, as well as personal information, without consent.
The lawsuit estimates potential damages of up to $3 billion on behalf of millions of allegedly harmed individuals.
In another case in July, comedian Sarah Silverman and authors Christopher Golden and Richard Kadrey announced they were suing OpenAI for alleged copyright infringement, claiming the company used books illegally acquired from “shadow library” websites to train its AI models.
Together, these cases highlight that organizations could open the door to intellectual property and copyright infringement lawsuits if they build enterprise-grade products and services on OpenAI’s models and those models are judged to have been trained on copyrighted material without consent.
Complicating matters further, OpenAI’s black-box approach to model training leaves enterprises unable to assess and mitigate potential risks themselves.
To use these models, they’d have to take a leap of faith that OpenAI has collected its training data legally and ethically, which presents an unnecessary level of risk.
A Small Step in the Right Direction
The release of ChatGPT Enterprise is a small step in the right direction, but it is still a long way from providing enterprises with an LLM that carries an acceptable level of legal and compliance risk.
On some level, this is to be expected: LLMs are a relatively recent innovation, and there are neither comprehensive regulations governing how they should be developed nor an established risk management framework.
Ultimately, organizations that were nervous about ChatGPT and generative AI before are unlikely to have their fears erased by the new release. But the launch at least shows that OpenAI is interested in extending support to the enterprise market.