https://www.ibm.com/think/topics/prompt-injection
In prompt injection attacks, hackers manipulate generative AI systems by feeding them malicious inputs disguised as legitimate user prompts.
prompt injection attackwhat isibm
https://arstechnica.com/information-technology/2023/02/ai-powered-bing-chat-spills-its-secrets-via-prompt-injection-attack/
Feb 14, 2023 - By asking “Sydney” to ignore previous instructions, it reveals its original directives.
bing chataipoweredspillssecrets