When you use ChatGPT you probably don’t notice, but it’s normal to enter all kinds of data into it to solve your problem. Some of that data can be sensitive, and while this is certainly delicate for end users, it is even more so for business users, as the South Korean multinational Samsung has discovered.
What happened. As reported by CNBC, Samsung is halting the use of generative AI models such as ChatGPT among its employees. The company discovered that its staff were misusing the tool, and its managers confirmed that they were “temporarily restricting” the use of these systems on company computers. It had already taken the same step in its chip factories a few weeks earlier.
Careful with ChatGPT. The company informed its employees in an internal memo at the end of April, indicating that it had detected inappropriate use of the tool. Bloomberg reported that in early April Samsung engineers accidentally leaked internal Samsung code by uploading it to ChatGPT. No details are known about the code or the impact of the leak.
Confidential data that ends up where it shouldn’t. In that internal memo, Samsung’s managers noted that “interest in generative AI platforms, such as ChatGPT, has been growing both internally and externally. While this interest centers on the usefulness and efficiency of these platforms, there is also growing concern about the security risks that generative AI presents”.
If you use it, you could end up fired. Samsung’s managers also made clear in their message the risk involved in using this tool from now on. In addition to asking employees to follow the company’s security guidelines, they warned that failure to do so could result in “disciplinary action that could result in dismissal”.
Samsung is not the first (and will not be the last). Other companies have taken similar measures given the risk of confidential data ending up in ChatGPT. Financial firms such as JPMorgan Chase & Co, Bank of America and Citigroup – among others – prohibited or restricted its use back in February. The reality is that the model created by OpenAI does not know how to keep secrets, and attack vectors exist to exploit these vulnerabilities. Companies are beginning to understand the risks of using this and other models such as Bing with ChatGPT or Google Bard. JPMorgan, in fact, announced its own AI model a few days ago to analyze statements from the US Federal Reserve.
ChatGPT in “incognito” mode. OpenAI, the creator of ChatGPT, knows this problem well and is taking steps to avoid, or at least minimize, the risk. Last week it announced the arrival of what some have described as an “incognito mode” for ChatGPT: when activated, ChatGPT neither stores the conversation history nor uses it to improve its models.
And more measures are being prepared. The company is also preparing a “ChatGPT Business” subscription with additional security controls to prevent sensitive information from leaking, so that companies can take advantage of the potential of this solution without risk. Other measures will arrive partly in response to ChatGPT’s situation in Europe: Italy decided to ban it at the national level – although it has since lifted the ban – and other European countries, Spain included, are considering doing exactly the same.
Microsoft is preparing its move. Meanwhile, Microsoft was careful to keep this issue in mind when it presented Microsoft 365 Copilot, its evolution of Office that takes advantage of the power of ChatGPT but does so in a particular way: restricting the scope of the AI model so that the confidential data handled in those applications does not leave the company.
In Xataka | GPT-4 is a brutal leap over ChatGPT: new examples to put it to the test yourself