https://www.elastic.co/observability/llm-monitoring
LLM Observability - Monitor AI Safety & Performance | Elastic
https://openlaunch.ai/projects/shannon-ai-frontier-red-team-lab-for-llm-safety
Shannon AI - Frontier Red Team Lab for LLM Safety | Open Launch
Uncensored AI models for red-team research. We build AI that shows you what happens when guardrails are removed. Simple as that.
https://talktothe.city/safety
LLM Safety - Talk to the City
Understanding AI risks and safety measures in Talk to the City reports
https://www.leidos.com/insights/leidos-securing-agentic-ai-future-llm-trust-and-safety
Leidos Is Securing the Agentic AI Future with LLM Trust and Safety | Leidos
Large language model refusal training is essential to ensure AI agents avoid unsafe information sources and tools when accomplishing tasks autonomously.
https://huggingface.co/blog/ServiceNow-AI/aprielguard
AprielGuard: A Guardrail for Safety and Adversarial Robustness in Modern LLM Systems
A Blog post by ServiceNow-AI on Hugging Face
https://www.theregister.com/2025/01/17/nvidia_cisco_ai_guardrails_security/
Cisco, Nvidia offer tools to boost LLM safety, security • The Register
Jan 17, 2025 - Some of you have apparently already botched chatbots or allowed ‘shadow AI’ to creep in