Robuta

https://www.ibm.com/think/topics/prompt-injection
In prompt injection attacks, hackers manipulate generative AI systems by feeding them malicious inputs disguised as legitimate user prompts.
prompt injection attackwhat isibm
https://arstechnica.com/information-technology/2023/02/ai-powered-bing-chat-spills-its-secrets-via-prompt-injection-attack/
Feb 14, 2023 - By asking “Sydney” to ignore previous instructions, it reveals its original directives.
bing chataipoweredspillssecrets
https://tensortrust.ai/?page=662
Rise to the top of the Tensor Trust leaderboard by fooling AI language models, and help researchers make more secure AI along the way.
prompt injection attacktensortrustdefensegame