At least two other major AI developers will publicly publish their prompt injection failure rates by the end of 2026.
50%
Prompt injection has emerged as a measurable security metric for AI systems, yet current reporting indicates that Anthropic is the only major AI developer publicly publishing its prompt injection failure rates. This gap points to a lack of transparency from other major AI developers on a critical class of security vulnerability.
This claim will resolve as 'True' if, by the end of 2026, at least two AI developers other than Anthropic (e.g., OpenAI, Google, Microsoft, Meta) publicly release official reports or statements containing quantitative data on their prompt injection failure rates. The publication must be verifiable through their official websites, press releases, or reputable technology news outlets (e.g., VentureBeat, TechCrunch, Reuters, AP).