A Case Study on Samsung’s ChatGPT Incident


Industry: Consumer Electronics and Technology

About the Company

Samsung, founded in 1938, is a South Korean multinational company with expertise in telecommunications, appliances, and electronics. It is among the top brands in the technology sector worldwide and is renowned for its goods and services. To stay ahead of its rivals and hold onto its market position, Samsung makes significant investments in research and development.

For technological businesses like Samsung, trade secrets are an essential part. Plans for research and development, product designs, client information, and other details are among these secret bits of information. They provide businesses with a competitive edge in the market and are critical to their long-term success. Trade secrets must thus be protected and kept out of the hands of unauthorised people.

Why is it necessary to read a manual?

Samsung recently experienced a significant loss of trade secrets owing to employee carelessness, and its staff learned this the hard way after accidentally leaking top-secret company data. The same story is playing out right now in companies all over the world, whether they know it or not. Samsung allowed its staff to use ChatGPT to help them write better code, and in doing so, employees put proprietary code into ChatGPT; that code is now in the hands of OpenAI.

The event emphasises the value of protecting trade secrets in the technology sector and the potential repercussions of failing to do so.


The technicians at Samsung’s semiconductor division were allowed to use ChatGPT to assist with resolving issues in their source code. However, they accidentally included confidential information such as source code for new software, details of review meetings, and hardware information.

ChatGPT is an AI tool developed by OpenAI, an artificial intelligence research and deployment company. At the time, ChatGPT could retain user inputs and use them to train its models by default. When the Samsung engineers pasted their code into ChatGPT to optimize it, the confidential details ended up stored on OpenAI’s servers.

Although the employees involved in this incident were not identified, they were reportedly senior officials within Samsung. It is assumed that the leak of confidential data was not intentional, but rather a result of human error.

However, the disclosure could have serious repercussions for Samsung. If the leaked information is exploited by rivals or used for illegal activities, Samsung risks losing its competitive edge, reputational damage, and legal penalties.


  • Samsung Electronics employees in the semiconductor business unit entered sensitive corporate data into ChatGPT in three separate instances.
  • The first instance involved an employee who entered a faulty source code related to the Samsung Electronics facility measurement database download program to find a solution.
  • The second instance involved an employee who entered a program code for identifying defective equipment to get code optimization.
  • The third instance involved an employee who converted a smartphone recording of a company meeting to a document file using an AI tool, NAVER CLOVA, and entered the transcript into ChatGPT to generate meeting minutes.

This incident emphasises how crucial employee training and awareness are when it comes to information security and trade secret protection. In this instance, Samsung’s employees accidentally shared confidential details by using a publicly accessible AI chatbot without any obvious security controls in place. Adequate data security training and rules, along with educating staff about the risks of using publicly accessible tools, might have averted this.

Data breaches are frequently attributed to inadequate employee training, so it is vital that companies invest in periodic training and awareness initiatives to ensure employees have the knowledge and skills required to safeguard sensitive data.

An internal inquiry and disciplinary measures against the involved employees were part of Samsung’s response to the event. While these procedures are critical in reflecting the gravity of the problem, it is also critical for the organisation to take preventive measures to avoid such events in the future. This entails strengthening its data security procedures and giving staff members continual training and instruction.

How much sensitive data goes to ChatGPT?

How many people use ChatGPT or other AI tools at the workplace?

Fishbowl conducted a poll of 11,793 employees from different firms, and of those –

  • 43% of professionals report using ChatGPT or other AI tools to assist with work tasks.
  • Meanwhile, 57% of individuals reported not using AI tools at work.
  • Several businesses have banned the use of AI tools, yet according to the poll, 68% of those who use them do so without ever disclosing it.
  • Only 32% report using generative AI with their manager’s knowledge.

How Companies are Addressing the Risks of ChatGPT to Corporate Data?

Companies are taking steps to prevent the potential misuse of the generative AI tool by employees. The measures are aimed at protecting sensitive data and preventing any possible cyber threats.

  • Walmart Global Tech issued a memo warning employees against entering confidential information into ChatGPT due to prior blocked activity that presented risks to the company.
  • In another instance, a lawyer at Amazon warned employees against sharing confidential information with ChatGPT despite its potential productivity benefits, as OpenAI, the maker of ChatGPT, has been criticized for a lack of transparency in how it handles sensitive information.
  • According to WSJ, JPMorgan Chase & Co. and Verizon have implemented measures to restrict employee access to ChatGPT.
  • 3.1% of workers have pasted their company’s confidential data into ChatGPT, according to a recent report published by Cyberhaven.
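One lightweight control of the kind described above is a client-side screen that scans text for obvious markers of sensitive content before it leaves the company for an external AI service. A minimal sketch in Python, where the patterns and the `screen_prompt` helper are hypothetical illustrations rather than any vendor’s actual tooling:

```python
import re

# Hypothetical markers of content that should never leave the company:
# "CONFIDENTIAL" markings, internal hostnames, and API-key-like tokens.
SENSITIVE_PATTERNS = [
    re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
    re.compile(r"\b[\w-]+\.internal\.example\.com\b"),  # internal hosts
    re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),     # key-like tokens
]

def screen_prompt(text: str) -> list[str]:
    """Return the patterns that matched; an empty list means the text
    passed this (deliberately simple) screen."""
    return [p.pattern for p in SENSITIVE_PATTERNS if p.search(text)]

prompt = "Optimize this CONFIDENTIAL code for db.internal.example.com"
hits = screen_prompt(prompt)
if hits:
    print(f"Blocked: prompt matched {len(hits)} sensitive pattern(s)")
```

A screen like this is only a safety net: it catches careless pastes, not determined exfiltration, so network-level restrictions and clear policy of the sort the firms above adopted remain the primary controls.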

OpenAI updated its terms of service, announcing that its models will no longer use user inputs as the default option for training, in response to concerns regarding privacy risks.

What other countries are doing?

  • Italy has banned ChatGPT over data privacy concerns, becoming the first Western country to take such action against the popular AI chatbot.
  • China prohibits foreign websites and applications, and it is highly unlikely that ChatGPT will ever be allowed to operate in the country due to concerns about misinformation and altering global narratives.
  • Russia fears the misuse of ChatGPT-like platforms; given its ongoing conflict with Western countries, it cannot afford to let an AI platform shape narratives.
  • North Korea, known for its strict internet censorship, banned ChatGPT, given the state’s monitoring of every activity on the internet.
  • Iran has strict regulations on censorship, and due to political tensions with the United States, the use of ChatGPT has been restricted in the country.
  • Cuba has strict regulations around the use of the internet and does not allow ChatGPT’s usage in the country.
  • Syria, already struggling with misinformation, does not want to risk more exposure by allowing an AI platform built by a US-based company.


It is crucial for the CISO office to inform employees about the risks of feeding company data into ChatGPT, and an urgent “Information Security Policy Update” is needed to spell this out clearly. This step is essential to protecting sensitive company information and preventing potential data breaches.

This incident is a live testament to the critical importance of security awareness and training. With AI technology advancing every day, a single careless prompt can lead to major negative consequences. The Samsung incident underscores a broader implication for intellectual property rights in the technology sector. It is high time companies started prioritizing data security and educating their employees on how to safeguard it. Knowledge can be enlightening, but it comes with responsibilities.

Companies must embrace a culture of continuous learning and education, fostering an environment where every employee understands the critical role they play in maintaining the integrity and security of organizational data.

HumanFirewall believes that security awareness and training alone are not the solution: you need to alter the psychology in order to transform your humans into a robust first line of defense. It is a world-first security awareness and training platform that also works when a real attack strikes. With HumanFirewall, organizations can harness specialized training programs that not only elevate employees’ awareness but also instill a proactive, security-conscious mindset. Knowledge, when coupled with responsibility, becomes the cornerstone of a resilient defense against cyber threats.