https://thenewstack.io/redis-launches-vector-sets-and-a-new-tool-for-semantic-caching-of-llm-responses/
Redis Launches Vector Sets and a New Tool for Semantic Caching of LLM Responses - The New Stack
Apr 8, 2025 - Redis today announced two new products: LangCache, a tool for caching LLM responses, and vector sets, a new data type in Redis for storing and querying embeddings.
https://redis.io/blog/semantic-caching-and-routing-two-powerful-patterns-for-vector-classification/
Semantic caching & routing: two powerful patterns for vector classification | Redis
Mar 13, 2026 - Developers love Redis. Unlock the full potential of the Redis database with Redis Enterprise and start building blazing fast apps.
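The "semantic routing" pattern in the Redis post above classifies an incoming query by comparing its embedding against reference embeddings for each route. A minimal sketch of the idea in plain Python, assuming embeddings arrive as ordinary float lists (the route names, reference vectors, and 0.7 threshold here are illustrative, not from the article):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class SemanticRouter:
    """Send a query to the route whose reference embedding is most similar."""

    def __init__(self, routes, threshold=0.7):
        # routes: dict mapping route name -> reference embedding
        self.routes = routes
        self.threshold = threshold

    def route(self, embedding):
        best_name, best_sim = None, 0.0
        for name, ref in self.routes.items():
            sim = cosine(embedding, ref)
            if sim > best_sim:
                best_name, best_sim = name, sim
        # Fall through to None (e.g. a default LLM path) below threshold.
        return best_name if best_sim >= self.threshold else None
```

In production the reference embeddings would come from an embedding model and live in a vector store such as Redis; the threshold trades misroutes against unclassified queries.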
https://www.solo.io/resources/lab/gloo-ai-gateway-rag-semantic-caching
Gloo AI Gateway Hands-On Lab: RAG and Semantic Caching - Lab | Solo.io
Sign up for the free, hands-on technical labs.
https://www.scylladb.com/2025/11/24/cut-llm-costs-and-latency-with-scylladb-semantic-caching/
Cut LLM Costs and Latency with ScyllaDB Semantic Caching - ScyllaDB
Nov 24, 2025 - How semantic caching can help with costs and latency as you scale up your AI workload
https://www.marktechpost.com/2025/11/11/how-to-reduce-cost-and-latency-of-your-rag-application-using-semantic-llm-caching/
How to Reduce Cost and Latency of Your RAG Application Using Semantic LLM Caching - MarkTechPost
Nov 11, 2025 - Learn how to reduce cost and latency of your RAG application using semantic LLM caching to optimize performance efficiently.
https://redis.io/resources/videos/meet-redis-langcache-semantic-caching-for-ai/
Meet Redis LangCache: Semantic caching for AI | Redis
Feb 18, 2026
https://thenewstack.io/what-is-semantic-caching/
What Is Semantic Caching? - The New Stack
May 4, 2025 - Semantic caching is poised to eliminate redundant LLM queries and improve AI agent performance.
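The core mechanic behind the articles above is simple: instead of keying a cache on exact query text, key it on the query's embedding and serve a stored answer when a new query is similar enough. A minimal sketch under that assumption, with hand-rolled vectors standing in for a real embedding model and an illustrative 0.85 threshold:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Cache LLM answers keyed by query embedding, not exact text."""

    def __init__(self, threshold=0.85):
        self.threshold = threshold
        self.entries = []  # list of (embedding, answer) pairs

    def get(self, embedding):
        # Return the most similar cached answer, if it clears the threshold.
        best, best_sim = None, 0.0
        for emb, answer in self.entries:
            sim = cosine(embedding, emb)
            if sim > best_sim:
                best, best_sim = answer, sim
        return best if best_sim >= self.threshold else None

    def put(self, embedding, answer):
        self.entries.append((embedding, answer))
```

On a miss, the application calls the LLM and `put`s the result; paraphrased questions that embed near a stored query then hit the cache without a second model call. Real systems (Redis LangCache among them) replace the linear scan with an indexed vector search.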
https://redis.io/tutorials/semantic-caching-with-redis-langcache/
Build semantic caching with Redis LangCache to reuse LLM answers for similar questions
Mar 25, 2026 - Cache OpenAI responses with Redis LangCache using semantic similarity. Reuse answers for paraphrased questions, reduce token spend, and track hit rate…
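The LangCache tutorial above mentions tracking hit rate alongside reusing answers. A hedged sketch of that bookkeeping as a thin wrapper over any cache exposing `get`/`put` (the class and method names here are invented for illustration; this is not the LangCache API):

```python
class TrackedCache:
    """Wrap a get/put semantic cache with hit-rate accounting and an LLM fallback."""

    def __init__(self, cache, llm_call):
        self.cache = cache          # any object with get(embedding) / put(embedding, answer)
        self.llm_call = llm_call    # invoked only on a cache miss
        self.hits = 0
        self.misses = 0

    def answer(self, embedding, query):
        cached = self.cache.get(embedding)
        if cached is not None:
            self.hits += 1
            return cached
        self.misses += 1
        fresh = self.llm_call(query)      # the expensive call the cache avoids
        self.cache.put(embedding, fresh)  # store for future similar queries
        return fresh

    @property
    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

Watching `hit_rate` over time is how the token-spend savings claimed in these articles are actually measured: every hit is one LLM call not made.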
https://www.infoq.com/articles/reducing-false-positives-retrieval-augmented-generation/
Reducing False Positives in Retrieval-Augmented Generation (RAG) Semantic Caching: a Banking Case...
In this article, author Elakkiya Daivam discusses why Retrieval Augmented Generation (RAG) and semantic caching techniques are powerful levers for reducing...