Technology Law

AI Software Manipulated into Leaking Sensitive Data

As large language models (like ChatGPT) and other types of generative AI grow in popularity, researchers are starting to uncover their vulnerabilities and the ways they can be exploited for nefarious purposes. Earlier this month, researchers at Robust Intelligence published a blog post detailing how they were able to manipulate the NVIDIA AI Platform, which is designed for businesses to customize and deploy their own generative AI models (for example, to integrate with customer service chatbots), into revealing personally identifiable information from a database.

Though NVIDIA has begun to address and resolve the issues the researchers identified, the research suggests that AI “guardrails” (the rules, filters, and other mechanisms designed to ensure safe and ethical use of the software) may not, on their own, be sufficient to protect against undesirable outputs, especially where the AI model is trained on “unsanitized” data.
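To illustrate the general concern (not the specific exploit the researchers used, which is described in their blog post), consider a minimal, hypothetical sketch of a pattern-based output filter in Python. The function and PII patterns below are invented for illustration; real guardrail frameworks are far more sophisticated, but the underlying weakness is similar: a filter that checks for known formats can miss sensitive data that a prompt has coaxed the model into reformatting.

```python
import re

# Hypothetical guardrail: block responses containing obvious PII patterns.
# These two patterns are illustrative only; real systems use many more.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # US SSN format, e.g. 123-45-6789
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # simple email-address pattern
]

def guardrail_allows(response: str) -> bool:
    """Return True if the response passes the naive PII filter."""
    return not any(p.search(response) for p in PII_PATTERNS)

# A direct leak is caught...
direct = "The customer's SSN is 123-45-6789."
print(guardrail_allows(direct))    # False -- blocked

# ...but the same data respelled at a prompt's request slips through,
# because the filter only recognizes the canonical format.
reworded = ("The customer's SSN, digit by digit, is "
            "one two three, four five, six seven eight nine.")
print(guardrail_allows(reworded))  # True -- leaked despite the guardrail
```

The point of the sketch is simply that format-matching filters operate on surface patterns, which is one reason the layered policies and human oversight discussed below matter.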

Key Takeaways:

  • Even advanced AI systems can be vulnerable to data leaks and other exploits
  • There may be severe legal consequences for organizations that fail to prevent AI models from revealing personally identifiable information
  • Organizations need robust internal guidelines and policies detailing how AI (as well as sensitive data) should and should not be used
  • In addition to those written policies, organizations should maintain meaningful human oversight to regulate the use of AI

Tags

artificial intelligence, ai, data, data protection