The limits of ChatGPT in the corporate context: a critical look

It all started with hype

Do you still think of ChatGPT first when you think of artificial intelligence? It’s understandable – why not use a well-known technology that seems easy to implement? ChatGPT has seen unprecedented growth in its user base: within just two months of its release in November 2022, it reached 100 million users, making it the fastest-growing consumer app in history and highlighting the huge interest and rapid uptake.

A major factor in this hype was its free availability during the trial period, which allowed a wide audience to test the model’s capabilities. The ease of use and the ability to generate human-like responses also contributed to the enthusiasm. The media played a major role in spreading the hype: numerous reports and discussions on news portals and social media increased awareness of and interest in ChatGPT, and widely shared screenshots of interactions with the chatbot further fueled curiosity and engagement.

The introduction of ChatGPT led to a shift in awareness in society and business. Companies and individuals recognized the potential applications and benefits of AI technologies such as large language models, triggering a wave of investment and development in the field of artificial intelligence.

Limits of ChatGPT in the corporate context

Despite the hype, there has been increasing criticism and discussion about the actual capabilities and limitations of ChatGPT. We and other experts emphasize that while the technology is practical, it does not achieve the desired effect in the corporate context. These perspectives have led to a more nuanced view of the hype, as large language models face inherent limitations in the corporate environment.

Short disclaimer

ChatGPT is the best-known large language model (LLM) based on generative AI. However, the following challenges affect all LLMs, including other well-known models such as Google’s BERT, LaMDA, and PaLM, or Meta’s LLaMA. LLMs are AI models trained to generate human-like text by analyzing large amounts of data and recognizing patterns. In the following, we therefore often refer to LLMs in general in order to discuss the shared challenges rather than limiting ourselves to ChatGPT.

Hallucinations

One of the biggest hurdles is so-called hallucinations, where the model generates false or inaccurate information. We’ve all been there: you ask for biographical facts about yourself and suddenly ChatGPT claims that you have won a prestigious prize or studied at an elite university – even though this is not true. While you can laugh about that yourself, the phenomenon poses a major problem in critical corporate contexts. Imagine a company in the financial sector relying on ChatGPT for market analysis and suddenly receiving fictitious data about market trends. Such misinformation can have serious consequences if business decisions are based on it.
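One pragmatic line of defense – not a fix for the underlying model – is to refuse to act on an answer whose concrete figures cannot be matched against data you already trust. The following toy sketch illustrates the idea; the fact table, the regular expression, and the acceptance logic are illustrative assumptions, not a production guardrail:

```python
import re

# Hypothetical table of verified figures your company actually holds.
TRUSTED_FACTS = {
    "EUR/USD Q1 growth": "2.3%",
    "DAX Q1 close": "18,492",
}

def extract_figures(answer: str) -> list[str]:
    """Pull number and percentage tokens out of the model's answer."""
    return [m.rstrip(".,") for m in re.findall(r"\b\d[\d,.]*%?", answer)]

def is_grounded(answer: str) -> bool:
    """Accept the answer only if every figure matches a trusted value.

    Note: an answer containing no figures passes trivially - acceptable
    for a toy example, not for production."""
    trusted_values = set(TRUSTED_FACTS.values())
    return all(fig in trusted_values for fig in extract_figures(answer))

llm_answer = "The DAX closed the first quarter at 19,100."  # hallucinated
print(is_grounded(llm_answer))  # False -> route to a human instead
```

Even such a crude gate makes the failure mode visible: the hallucinated figure simply is not in your books.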

Yet artificial intelligence can and should be able to provide reliable information and offer serious support. We don’t want to settle for this status quo in the long term, do we?

Lack of transparency

Another problem is the technology’s lack of transparency. The underlying Transformer architecture often makes it difficult to trace where information comes from. You may have received a precise answer from ChatGPT and asked yourself: “Where does this information actually come from?” Without clear traceability of sources, it is difficult to verify and validate responses. LLMs also raise ethical and social concerns: they can reproduce and reinforce biases and discrimination present in their training data, which can lead to unexpected and potentially harmful responses. Bias and interpretation errors are common problems, and it is often unclear who is responsible for the content LLMs generate, especially if it causes harm. This lack of transparency, together with the legal risks, can have serious consequences for companies.
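Traceability does not come out of the model itself; it has to be engineered around it. A common pattern is to attach source metadata to every passage you feed into the prompt and to instruct the model to cite those passages. A minimal sketch, assuming a placeholder `call_llm` client and illustrative field names:

```python
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    source: str   # e.g. file name, URL, or document ID
    section: str  # where in the document the passage appears

def call_llm(prompt: str) -> str:
    # Stand-in for your actual model client, so the sketch runs offline.
    return "According to [1], ..."

def answer_with_citations(question: str, passages: list[Passage]) -> str:
    """Force traceability by numbering passages and demanding citations."""
    context = "\n".join(
        f"[{i}] ({p.source}, {p.section}) {p.text}"
        for i, p in enumerate(passages, start=1)
    )
    prompt = (
        "Answer using ONLY the numbered passages below and cite them "
        f"like [1].\n\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

docs = [Passage("Q1 revenue grew by 4%.", "q1_report.pdf", "p. 3")]
print(answer_with_citations("How did Q1 go?", docs))
```

The design point is that every answer can then be checked against a named document, instead of against the model’s opaque training data.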

Legal uncertainties

Legal uncertainties also exist in connection with content generated by LLMs. Copyright infringements can occur when the models produce text that reuses existing copyrighted material without proper attribution. Imagine ChatGPT generates a text that contains passages from a copyrighted book without identifying them as such. This could cause legal problems for your company, especially if the content is used in official documents or publications.

Limited access to proprietary data

Access to proprietary data is a double-edged issue. LLMs such as ChatGPT rely mainly on publicly available sources and therefore cannot access specific or confidential company information – unless you upload a document to be analyzed, giving the AI access to it so that it can provide contextualized answers. From a pragmatic point of view this is a smart move, but not with a public, non-transparent LLM, because the security and confidentiality of the data cannot be guaranteed. Proprietary knowledge, however, is crucial for the effective use of AI in companies, as it allows specific requirements and contexts to be taken into account.
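One way to square this circle is to keep proprietary documents in-house and hand the model only the few snippets relevant to a question. The sketch below uses a naive keyword score as a stand-in for a real vector search; document names and contents are invented for illustration:

```python
# Proprietary documents stay in your own infrastructure.
internal_docs = {
    "pricing_policy.txt": "Enterprise discounts require approval by ...",
    "q1_report.txt": "Revenue in Q1 grew strongly in the DACH region ...",
}

def score(doc_text: str, query: str) -> int:
    """Count how many query words occur in the document text."""
    return sum(word in doc_text.lower() for word in query.lower().split())

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the names of the k best-matching internal documents."""
    ranked = sorted(
        internal_docs,
        key=lambda name: score(internal_docs[name], query),
        reverse=True,
    )
    return ranked[:k]

# Only the retrieved snippet would ever be sent to the model.
print(retrieve("How did Q1 revenue develop?"))  # ['q1_report.txt']
```

The crucial property is control: you decide which snippet leaves your perimeter, rather than uploading whole documents into an opaque service.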

Data protection issues

But let’s take the problem a step further: data breaches can occur when LLMs unintentionally disclose this confidential information. Remember the news when Italy temporarily blocked ChatGPT because of privacy concerns? Or when several companies banned the use of ChatGPT for confidential data? These incidents show that while access to proprietary data is important for customized responses, the use of such data carries the risk of inadvertently exposing sensitive information.

So how can you reap such benefits without taking on the potential risks?

Lack of long-term memory

Assuming the risk of misuse is resolved, the next obstacle for an LLM is its lack of long-term memory. LLMs can only take context into account within a limited window of text, which limits their usefulness in longer or more complex interactions. In practice, this means entering all context-specific documents, details, and instructions again and again. You may have tried to hold a longer discussion with ChatGPT and found that it doesn’t remember earlier parts of the conversation. LLMs therefore struggle to keep track of long conversations or complex topics. Imagine working on a long-term project and having to explain the same context over and over again – it’s not only inefficient, it’s frustrating.

At first glance, building your own GPTs offers a way out of this deficit, but they too need regular manual maintenance and, of course, still carry the other problems mentioned above, which are far more critical. The conclusion is that these models are currently inadequate, unreliable, and unsafe for corporate use.
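To make the memory problem concrete: the standard workaround inside today’s tools is to keep only the most recent turns verbatim and compress everything older into a summary – which is lossy by construction. A minimal sketch, with illustrative helper names and limits:

```python
MAX_RECENT_TURNS = 6  # assumed budget: how many turns fit verbatim

def summarize(turns: list[str]) -> str:
    # Placeholder: in practice you would ask the model itself to summarize,
    # which is exactly where detail gets lost.
    return "Earlier conversation (compressed): " + " / ".join(
        t[:40] for t in turns
    )

def build_context(history: list[str]) -> str:
    """Fit an arbitrarily long history into a bounded prompt."""
    if len(history) <= MAX_RECENT_TURNS:
        return "\n".join(history)
    older = history[:-MAX_RECENT_TURNS]
    recent = history[-MAX_RECENT_TURNS:]
    return summarize(older) + "\n" + "\n".join(recent)

history = [f"Turn {i}: project detail number {i}" for i in range(1, 11)]
print(build_context(history))  # turns 1-4 survive only as a lossy summary
```

The sketch shows why the workaround is no substitute for memory: whatever falls out of the recent window survives only in compressed, detail-poor form.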

Industry customization and specific solutions

In the corporate context, the first priority is to avoid hallucinations. Answers MUST be reliable, verifiable, and correct if they are to support critical decision-making. This requires an AI with the following (a minimal sketch of how the pieces interlock follows the list):

  1. Industry-specific knowledge
  2. Contextual understanding
  3. Data integrity and compliance
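How these three requirements could interlock in a single request pipeline is sketched below; every function is an illustrative placeholder, not our actual implementation:

```python
def retrieve_industry_knowledge(question: str) -> list[str]:
    """1. Industry-specific knowledge: search a curated domain corpus."""
    return ["Curated passage relevant to the question ..."]

def build_prompt(question: str, passages: list[str], user_context: str) -> str:
    """2. Contextual understanding: ground the model in the user's situation."""
    return f"Context: {user_context}\nSources: {passages}\nQuestion: {question}"

def passes_compliance(text: str, blocked_terms: set[str]) -> bool:
    """3. Data integrity and compliance: block restricted content."""
    return not any(term in text.lower() for term in blocked_terms)

def answer(question: str, user_context: str) -> str:
    passages = retrieve_industry_knowledge(question)
    draft = build_prompt(question, passages, user_context)  # stands in for a model call
    if passes_compliance(draft, {"client list"}):
        return draft
    return "Blocked: escalated to a human reviewer."

print(answer("Which regulation applies here?", "insurance, Germany"))
```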

The future of AI in the corporate environment

There are already promising prospects. As we described in our article “The future of AI: What awaits us?”, Gartner predicts:

“By 2027, more than 50% of GenAI models used by companies will be specific to either an industry or a business function – up from around 1% in 2023.”

This is exactly what we are already working on. Our industry-specific systems will be free from the complications of generic LLMs. See for yourself:

[Table: Comparison of the features and capabilities of ChatGPT and our specialized solution in an enterprise context, showing the advantages of our solution in areas such as contextual understanding, industry-specific knowledge, avoidance of hallucinations, and adherence to compliance requirements.]

Conclusion

ChatGPT is not a reliable option for companies due to its hallucinations and lack of transparency. The situation is exacerbated by the requirements of the GDPR and the upcoming EU AI Act. These regulations require a high level of transparency and reliability that ChatGPT cannot offer.

Contact us to find out how you can position yourself correctly now and be at least three steps ahead of the competition.