
NVIDIA launches ‘Guardrail’ software to stop AI chatbots talking nonsense


Artificial intelligence (AI) is evolving rapidly, but challenges remain: AI models sometimes “hallucinate” by confidently stating falsehoods, engage with harmful topics, or create security risks. To address this, NVIDIA on Tuesday released NeMo Guardrails, software that helps developers put “guardrails” on AI models to keep them from producing undesirable output.

NeMo Guardrails is a software layer that sits between the user and the AI model, intercepting and modifying bad content before it is output. For example, if a developer wants to create a customer service chatbot, they can use NeMo Guardrails to restrict it to talking only about relevant products and not about competitors’ products or other irrelevant topics. If the user asks such a question, the bot can direct the conversation back to the topic the developer wants.
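The idea of a topical rail — a layer that inspects each user message before the model ever sees it — can be sketched in a few lines of Python. This is an illustrative toy, not the NeMo Guardrails API; the names `OFF_TOPIC_KEYWORDS`, `fake_model`, and `topical_rail` are all hypothetical:

```python
# Hypothetical sketch of a "topical rail" sitting between user and model.
# A real system would use an LLM or classifier, not keyword matching.

OFF_TOPIC_KEYWORDS = {"competitor", "politics", "stock price"}

def fake_model(prompt: str) -> str:
    """Stand-in for a real LLM call."""
    return f"Answer about: {prompt}"

def topical_rail(prompt: str) -> str:
    """Intercept the prompt; redirect the conversation if it strays off topic."""
    lowered = prompt.lower()
    if any(kw in lowered for kw in OFF_TOPIC_KEYWORDS):
        return "I can only help with questions about our products."
    return fake_model(prompt)

print(topical_rail("How do I reset my router?"))
print(topical_rail("Is your competitor's router better?"))
```

The key design point is that the rail runs outside the model: the restricted question is never forwarded, so the model has no chance to produce an off-topic answer.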

NVIDIA also provides another example of a chatbot used to answer internal corporate HR questions. In this example, NVIDIA was able to use NeMo Guardrails to prevent ChatGPT-based bots from answering questions about the company’s finances or accessing other employees’ private data.

In addition, the software can use one AI model to detect another’s “hallucinations”: it asks a second AI model questions to verify the first model’s answers. If the two models give inconsistent answers, the software returns an “I don’t know” response.
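The cross-checking logic described above can be sketched with two stand-in “models”. Everything here is hypothetical — canned answer tables in place of real LLMs — but the control flow mirrors the described behavior: agreement passes the answer through, disagreement yields a refusal:

```python
# Toy sketch of cross-model fact checking. Both "models" are hypothetical
# stand-ins backed by canned answers; model_a invents an answer for
# unknown questions (a stand-in for hallucination), model_b does not.

def model_a(question: str) -> str:
    answers = {"capital of France": "Paris"}
    return answers.get(question, "Atlantis")  # hallucinates on unknowns

def model_b(question: str) -> str:
    answers = {"capital of France": "Paris"}
    return answers.get(question, "unsure")

def checked_answer(question: str) -> str:
    """Return the first model's answer only if the second model agrees."""
    first, second = model_a(question), model_b(question)
    return first if first == second else "I don't know"

print(checked_answer("capital of France"))  # models agree -> answer passes
print(checked_answer("capital of Mu"))      # models disagree -> refusal
```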

NVIDIA says the “guardrail” software also helps improve security by forcing AI models to interact only with third-party software on a whitelist.
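A whitelist of this kind reduces to a membership check before any external call is made. The sketch below is a minimal, hypothetical illustration (the names `ALLOWED_TOOLS` and `call_tool` are invented for this example):

```python
# Hypothetical sketch of a tool whitelist: the model's requested action
# is only executed if the target is on an approved list.

ALLOWED_TOOLS = {"weather_api", "ticket_system"}

def call_tool(tool_name: str, payload: str) -> str:
    """Execute a third-party call only if the tool is whitelisted."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"{tool_name} is not on the whitelist")
    return f"called {tool_name} with {payload}"

print(call_tool("weather_api", "forecast for Monday"))
```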

NeMo Guardrails is open source, available through NVIDIA services, and can be used in commercial applications. Developers write custom rules for AI models in Colang, a modeling language NVIDIA created for this purpose.

Other AI companies, including Google and Microsoft-backed OpenAI, use a method called reinforcement learning from human feedback to prevent LLM applications from producing harmful output. This approach uses human testers to create data about which answers are acceptable or unacceptable, then uses that data to train the AI models.

NVIDIA is turning its attention to AI, and the company now dominates the market for the chips needed to create this technology, with its shares up 85% so far in 2023, making it the biggest gainer in the S&P 500.
