Chatbots tend to go off the rails from time to time. Sometimes users provoke it, but there are plenty of other reasons too. It's one of the big problems with ChatGPT and its competitors, but NVIDIA believes it has the solution.
What happened. NVIDIA has announced the launch of NeMo Guardrails, new software that helps developers of AI-based solutions prevent their models from giving incorrect answers, turning toxic, or exposing security holes.
Guardrails to avoid problems. What this software does is add a kind of virtual barrier that keeps the chatbot from talking about things it shouldn't. With NeMo Guardrails you can constrain a chatbot to a given topic while filtering out toxic content, and it is also designed to prevent LLM systems from executing harmful commands on our computers.
Watch what you say, chatbot. NeMo Guardrails is a software layer that sits between the user and the conversational AI model, or any other AI application. Its goal is to filter out wrong or toxic answers before the model can offer them to the user. In one example NVIDIA proposed, a customer-service chatbot belonged to a company that "does not want to answer questions about competitors".
No confidential data. This type of tool also covers another scenario: someone trying to extract confidential or sensitive information from the data the chatbot was trained on. We already know that ChatGPT and its competitors are bad at keeping secrets, and NVIDIA's solution aims to be the answer to this problem.
AIs that talk to each other. This NVIDIA software can make an LLM detect errors and "hallucinations" by asking a second LLM to verify that the first one's answers are correct. If the "verifier" LLM cannot confirm the answer, the first LLM responds to the user with something like "I don't know".
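The verifier pattern described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not NeMo Guardrails' actual implementation: both model functions are stubs standing in for real LLM calls.

```python
# Minimal sketch of the "verifier LLM" pattern: one model drafts an answer,
# a second model checks it, and unverified answers become "I don't know".
# Both model functions below are hypothetical stubs, not a real API.

def answer_with_verification(question, answerer, verifier):
    """Ask one model, then have a second model confirm its answer."""
    draft = answerer(question)
    # If the verifier cannot confirm the draft, refuse instead of guessing.
    if verifier(question, draft):
        return draft
    return "I don't know"

# Toy stand-ins for two LLM endpoints, for illustration only.
def toy_answerer(question):
    return "Paris" if "France" in question else "42"

def toy_verifier(question, draft):
    return draft == "Paris"

print(answer_with_verification("What is the capital of France?", toy_answerer, toy_verifier))
print(answer_with_verification("What is the meaning of life?", toy_answerer, toy_verifier))
```

In a real deployment, `answerer` and `verifier` would be calls to two LLM endpoints; the pattern itself is just "generate, then independently check before responding".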
Open source. NeMo Guardrails has another striking feature: it is open source. It can be used through NVIDIA's services and also in commercial applications. To use it, developers write rules in Colang, a modeling language for creating custom rules that are then applied to an AI model.
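A Colang rule for the call-center scenario mentioned earlier might look something like this (a simplified sketch; the exact phrasings and rule names are illustrative):

```
define user ask about competitors
  "what do you think of competitor X?"
  "is your rival's product better than yours?"

define bot refuse to discuss competitors
  "I'm sorry, I can't discuss other companies' products."

define flow
  user ask about competitors
  bot refuse to discuss competitors
```

The first block gives example utterances that define a user intent, the second defines a canned bot response, and the flow ties them together so the guardrail fires whenever the intent is detected.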
Image: Javier Pastor with Bing Image Creator
In Xataka | “Within two years it will be impossible to know what is reality and what is not”: from the network of regular deepfakes