Robuta

https://www.giskard.ai/
Secure AI agents with Giskard’s continuous AI red teaming. Detect vulnerabilities, improve LLM security, and safeguard your AI systems.
ai red teaming, llm security, platform, giskard
https://cranium.ai/resources/press-release/cranium-launches-arena-the-industrys-first-ai-red-teaming-platform-that-extends-to-the-ai-supply-chain/
May 19, 2025 - Cranium unveils Arena V1, a platform purpose-built for AI red teaming for enterprise security. Simulate real-world attacks, test LLMs, and automate AI risk...
ai red teaming, enterprise security, cranium, arena, launches
https://openreview.net/forum?id=D2xnV75eIV&referrer=%5Bthe%20profile%20of%20ZHUO%20ZHANG%5D(%2Fprofile%3Fid%3D~ZHUO_ZHANG1)
We present ASTRA, an automated agent system designed to systematically uncover safety flaws in AI-driven code generation and security guidance systems. ASTRA...
red teaming, ai software, astra, autonomous, spatial
https://hiddenlayer.com/autortai/
Aug 29, 2025 - Identify threats early and validate defenses continuously to safeguard agentic and generative AI applications at scale.
red teaming, ai, hiddenlayer
https://foreignpolicy.com/2024/04/03/def-con-31-ai-safety-red-teaming-hack-chatbot-safety/
Apr 4, 2024 - The White House backed an AI red-teaming exercise last year. The results are in.
ai red teaming, def con, exercise, results
https://arxiv.org/abs/2501.07238
Abstract page for arXiv paper 2501.07238: Lessons From Red Teaming 100 Generative AI Products
red teaming, generative ai, lessons, products
https://www.findarticles.com/top-ai-red-teaming-providers-who-makes-the-list-in-2026/
AI Red Teaming Providers are at the forefront of securing artificial intelligence systems by simulating real-world adversarial attacks to uncover hidden...
ai red teaming, top, providers, makes, list
https://huggingface.co/papers/2410.02828
PyRIT: A Framework for Security Risk Identification and Red Teaming in Generative AI Systems
security risk, paper, pyrit, framework, identification
https://pingu.audn.ai/
Audn.ai secures AI agents across voice, chat, and multimodal interfaces. Our Pingu Unchained LLM stress-tests and protects any AI model through adversarial...
ai security, red teaming, platform
https://imerit.net/resources/blog/the-role-of-ango-hub-in-scaling-red-teaming-for-generative-ai/
Nov 12, 2025 - Enterprises face rising risks with generative AI. Ango Hub automates red-teaming workflows for safer, more reliable models.
ango hub, red teaming, role, scaling
https://adversa.ai/blog/llm-red-teaming-gpts-prompt-leaking-api-leaking-documents-leaking/
Jul 21, 2025 - What is AI Prompt Leaking, AI API Leaking, and AI Documents Leaking in LLM Red Teaming? Testing OpenAI GPTs for real examples.
prompt leaking, api, documents, llm
https://www.paloaltonetworks.com/resources/datasheets/prisma-airs-ai-red-teaming
Organizations are increasingly adopting AI to increase the speed and scale of the positive impact they want to deliver on their stakeholders.
ai red teaming, palo alto networks, prisma airs
https://www.zscaler.com/products-and-solutions/continuous-automated-red-teaming
Test and secure AI systems with Zscaler. Run automated AI red teaming to identify vulnerabilities, simulate attacks, and ensure enterprise AI safety and...
automated red teaming, enterprise ai, secure, zscaler
https://www.databricks.com/blog/announcing-blackice-containerized-red-teaming-toolkit-ai-security-testing
In this post, we introduce BlackIce, an open-source, containerized toolkit that bundles 14 widely used AI security tools into a single, reproducible...
red teaming, ai security, announcing, blackice, containerized
https://grcsolutions.io/ai-red-teaming-ml-llm-testing/
Discover how AI Red Teaming & ML/LLM Testing can help organizations prevent misuse and failure in AI systems under real-world conditions.
ai red teaming, llm testing, grc solutions, ml, organizations
https://www.modelred.ai/
Bulletproof your AI models with adaptive red teaming. ModelRed hunts down vulnerabilities in LLMs with 10,000+ evolving attack vectors. Deploy AI systems that...
ai security, red teaming, platform
https://www.paloaltonetworks.com/engage/prisma-airs-webinar/41duztd
Prisma AIRS Deploy Bravely Series Ep 4
red teaming, ai systems, attackers
https://ndcmanchester.com/workshops/ai-red-teaming-in-practice/86e2d9e8a5e4
Red Teaming AI systems is no longer optional. What began with prompt injection attacks on simple chatbots has exploded into a complex threat surface spanning...
ai red teaming, attacks, llms, agents, multimodal
https://royalsociety.org/science-events-and-lectures/2023/10/ai-safety-science-redteam/
This event will bring together graduate students to test the efficacy of guardrails for AI-generated disinformation about climate change and COVID-19.
ai red teaming, science, safety, llms
https://channellife.news/story/security-methods-safety-goals-rethinking-ai-red-teaming
AI red teaming blends security tactics with safety goals to prevent exploits in chatbots, defending users from harm beyond classic cyber threats.
ai red teaming, security, methods, safety, goals
https://www.hcltech.com/ai-red-teaming
Break it before it breaks you. HCLTech AI Red Teaming uncovers hidden AI risks, from hallucinations to compliance gaps, and keeps your AI secure and resilient...
ai red teaming, secure, test, fortify, hcltech