https://arstechnica.com/information-technology/2023/02/ai-powered-bing-chat-spills-its-secrets-via-prompt-injection-attack/
AI-powered Bing Chat spills its secrets via prompt injection attack [Updated] - Ars Technica
via prompt injectionbing chat
https://embracethered.com/blog/posts/2025/github-copilot-remote-code-execution-via-prompt-injection/
GitHub Copilot: Remote Code Execution via Prompt Injection (CVE-2025-53773) · Embrace The Red
This post is about an important, but also scary, prompt injection discovery that leads to full system compromise of the developer’s machine in GitHub …
remote code executiongithub
https://adversa.ai/blog/gpt-4-hacking-and-jailbreaking-via-rabbithole-attack-plus-prompt-injection-content-moderation-bypass-weaponizing-ai/
GPT-4 Jailbreak and Hacking via RabbitHole attack, Prompt injection, Content moderation bypass and...
Jul 21, 2025 - GPT-4 Jailbreak is what all the users were waiting for since the GPT-4 release. Hack GPT-4 Bypass GPT4. DAN Jailbreak for GPT-4
hacking viaprompt injection
https://www.theregister.com/2025/10/28/ai_browsers_prompt_injection/
AI browsers wide open to attack via prompt injection • The Register
Oct 28, 2025 - Feature: Agentic features open the door to data exfiltration or worse
via prompt injectionwide open
https://www.csoonline.com/article/4080154/copilot-diagrams-could-leak-corporate-emails-via-indirect-prompt-injection.html
Copilot diagrams could leak corporate emails via indirect prompt injection | CSO Online
Oct 28, 2025 - A now patched flaw in Microsoft 365 Copilot let attackers turn its diagram tool, Mermaid, into a data exfiltration channel–fetching and encoding emails...
via indirect promptcopilot
https://www.promptarmor.com/resources/data-exfiltration-from-slack-ai-via-indirect-prompt-injection
Data Exfiltration from Slack AI via Indirect Prompt Injection
This vulnerability can allow attackers to steal anything a user puts in a private Slack channel by manipulating the language model used for content generation.
via indirect promptslack ai
https://winbuzzer.com/2025/11/25/security-flaw-in-google-antigravity-ide-allows-data-exfiltration-via-prompt-injection-xcxwbn/
Security Flaw in Google Antigravity AI IDE Allows Data Exfiltration via Prompt Injection - WinBuzzer
Nov 26, 2025 - According to security researchers, Google Antigravity allows data exfiltration via indirect prompt injection, bypassing default safety controls.
security flawai idegoogledata