https://owasp.org/www-project-llm-verification-standard/
OWASP LLM Security Verification Standard | OWASP Foundation
The standard provides a basis for designing, building, and testing robust LLM-backed applications
https://www.giskard.ai/
AI Red Teaming & LLM Security Platform | Giskard
Secure AI agents with Giskard’s continuous AI red teaming. Detect vulnerabilities, improve LLM security, and safeguard your AI systems.
https://www.tigera.io/learn/guides/llm-security/
LLM Security: Top 10 Risks and 5 Best Practices
Jan 23, 2026 - Large language models (LLMs) are AI systems trained on text datasets, capable of emulating human-like text, code, and interactions.
https://www.haproxy.com/content-library/webinars/beyond-basic-routing-building-an-ai-aware-gateway-for-llm-security
Beyond basic routing: building an AI-aware gateway for LLM security | On-Demand Webinars
Apr 24, 2025 - In this hands-on webinar, we'll demonstrate how to transform HAProxy into a sophisticated AI-aware gateway using Stream Processing Offload Engine (SPOE).
https://www.telco.com/ai-guard/
AIGuard | LLM Security & Prompt Protection | BATM Networks
Mar 24, 2026 - Secure enterprise LLM usage with controls against data leakage, prompt injection, and unsafe outputs. AI ChatGuard by BATM Networks.
https://7asecurity.com/ai-pentest
AI & LLM Security Testing | 7ASecurity
Secure your AI-powered applications against adversarial threats, prompt injection, and agentic misbehavior with comprehensive adversarial testing aligned with...
https://lwn.net/Articles/1068928/
Kernel code removals driven by LLM-created security reports [LWN.net]
There are a number of ongoing efforts to remove kernel code, mostly from the networking subsystem [...]
https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/when-tokenizers-drift-hidden-costs-and-security-risks-in-llm-deployments
When Tokenizers Drift: Hidden Costs and Security Risks in LLM Deployments | Trend Micro (US)
A tokenizer lies at the core of every large language model. When it drifts, whether from unseen flaws or adversarial interference, costs rise and performance...
https://github.com/splx-ai/agentic-radar
GitHub - splx-ai/agentic-radar: A security scanner for your LLM agentic workflows
A security scanner for your LLM agentic workflows. Contribute to splx-ai/agentic-radar development by creating an account on GitHub.
https://www.fortinet.com/products/fortiai-secure
FortiAI – Security for AI Models, LLM Data and AI Workloads | Fortinet
FortiAI Secures AI models and systems, prevents data leakage from LLM, and secures AI workloads with layered defense across network, application and data...
https://www.ndss-symposium.org/ndss-program/last-x-2026/
Workshop on LLM Assisted Security and Trust Exploration (LAST-X) 2026 Program - NDSS Symposium
https://www.ardanlabs.com/events/security-in-go-llm-based-applications/
Security in Go LLM-based applications
Secure Go-LLM apps: learn to block prompt injection, secure tool calls via least-privilege, and stop RAG data poisoning with robust patterns.
https://phoenix.security/whitepapers-resources/ebook-llm-threat-centric/
Phoenix Security - LLM Threat Centric Approach on Vulnerability
Feb 10, 2026 - Download the latest whitepaper on LLMs and their application to vulnerability management and application security. Understand how ransomware attacks leverage...
https://www.haproxy.com/blog/lessons-learned-in-llm-prompt-security-securing-ai-with-ai
Lessons learned in LLM prompt security: securing AI with AI
Jun 13, 2025 - Experimenting with AI for prompt security in AI Gateways. Discover key lessons, performance issues, and how to optimize for practical use.
https://nationalsecurity.law.georgetown.edu/jd-llm/
JD and LLM Programs - Georgetown Law - Center on National Security
Mar 6, 2026 - We are training the next generation of national security lawyers and leaders.
https://promptbrake.com/
LLM API Security Testing for Prompt Injection and Data Leaks | PromptBrake
Security test LLM-powered API endpoints for prompt injection, jailbreaks, data leaks, tool abuse, and unsafe behavior. Get evidence-backed findings in minutes.
https://www.mimecast.com/content/llm-data-leakage-prevention/
LLM Data Leakage: Preventing AI Security Exposure | Mimecast
LLM data leakage occurs through prompts, training data, and AI-generated outputs. Learn how Mimecast helps reduce AI security exposure.