At least two other major AI developers will publicly report their prompt injection failure rates by the end of 2026.
Prompt injection susceptibility has emerged as a measurable security property of AI systems, yet current reports indicate that Anthropic is the only major AI developer publicly reporting its prompt injection failure rates. Other major developers have not disclosed comparable figures, leaving a transparency gap around this critical vulnerability.