Reducing Hallucinations In LLM Agents With A Verified Semantic Cache
This repository contains sample code demonstrating how to implement a verified semantic cache using Amazon Bedrock Knowledge Bases to prevent hallucinations in Large Language Model (LLM) responses while improving latency and reducing costs.
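The core idea can be sketched as follows. This is a minimal, self-contained illustration of a verified semantic cache, not the repository's actual implementation: the real sample uses Amazon Bedrock Knowledge Bases for retrieval and a proper embedding model, whereas here a toy bag-of-words embedding and all class/function names (`VerifiedSemanticCache`, `embed`, `lookup`) are assumptions for demonstration only.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" standing in for a real embedding
    # model (the actual sample would use Bedrock-hosted embeddings).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VerifiedSemanticCache:
    """Cache of curated question -> verified answer pairs.

    A lookup compares the query semantically against cached questions.
    A match above the threshold returns the verified answer directly,
    skipping the LLM entirely (lower latency, lower cost, no chance to
    hallucinate); a miss signals the caller to fall back to the LLM.
    """
    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.entries = []  # list of (question_embedding, verified_answer)

    def add(self, question, verified_answer):
        self.entries.append((embed(question), verified_answer))

    def lookup(self, query):
        q = embed(query)
        best_score, best_answer = 0.0, None
        for emb, answer in self.entries:
            score = cosine(q, emb)
            if score > best_score:
                best_score, best_answer = score, answer
        return best_answer if best_score >= self.threshold else None

# Hypothetical usage: a curated, human-verified Q&A pair is cached.
cache = VerifiedSemanticCache(threshold=0.8)
cache.add("what is the refund policy", "Refunds are issued within 30 days.")

hit = cache.lookup("what is the refund policy")    # served from cache
miss = cache.lookup("how do I reset my password")  # None -> call the LLM
```

In the actual sample, the cache of verified answers is held in a Bedrock Knowledge Base and queried via its retrieval API instead of the in-memory list above; the hit/miss-with-threshold control flow is the part this sketch is meant to convey.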