The growth in both the number and usefulness of the Artificial Intelligence platforms now appearing looks set to continue. But not everything is as rosy as it seems: the risks to security and privacy are still there, as we will see.
Despite their enormous growth over the last few months, it must be kept in mind that most AI platforms work online. This means that, one way or another, we expose and hand over certain personal data in order to use these services, regardless of how we use them. In other words, the risks to our privacy, as with other Internet platforms, are still present.
This is something that, for example, the well-known security company Check Point wanted to make clear with a discovery it recently made public. Bear in mind that millions of users all over the world rely on different Artificial Intelligence offerings for multiple purposes, both professional and personal. On paper, a good number of these platforms, such as the aforementioned ChatGPT, Bard, or Bing, claim not to keep in their cache the private information that we send them.
Despite all this, the security company has detected several incidents in this regard. For example, it discovered that the popular ChatGPT has exposed sensitive data due to certain vulnerabilities found in the current development of these systems. The security firm, a provider specialized in this field, has published a series of reports about a new vulnerability in the Large Language Models (LLMs) behind some of these services.
Various AI platforms expose their users' data
In fact, this vulnerability has been detected in well-known services such as ChatGPT, Google Bard, and Microsoft Bing Chat. To these we can add other, less popular AI platforms that also rely on LLMs. At the same time, we must keep in mind that more and more developers are using these intelligent services to generate code for their projects.
The usefulness of Artificial Intelligence tools is beyond doubt in most cases. But we must also keep in mind that these tools can become a significant source of data leaks. Precisely for this reason, both software developers and users themselves should take certain precautions. As the different AI platforms grow, it is increasingly common for us to share certain private and sensitive data with them.
Basically, this means that the risk of leaks through these online services will grow as the years go by and their use expands further. But, as with other vulnerable online platforms, companies such as the aforementioned Check Point will offer different security solutions to keep protecting their millions of clients.