RealHarm: A Collection of Real-World Language Model Application Failures
I'm David from Giskard, where we work on securing AI agents. Today we're launching RealHarm: a dataset of real-world problematic interactions with AI agents, drawn from publicly reported incidents.