Code Release: CRDB Tuning Report Generator with AI Agents
A companion to “Reimagining My Application as an Agentic Workflow (Part 4)”
In a previous post, I walked through how I transformed a manual reporting workflow into an agentic one by giving an AI agent the same tools a human analyst would use to create CockroachDB performance reports.
Today, I’m releasing the full working code so you can explore, learn from, or adapt it for your own projects.
What You’ll Find
This is a complete, working proof-of-concept demonstrating:
FastAgent for agentic workflows
MCP (Model Context Protocol) integration with FastMCP 2.0, generating the MCP server directly from the FastAPI app
A simple React frontend with AI-powered and manual report creation
CockroachDB integration for cluster metrics
Note: This is a proof-of-concept built to demonstrate a simple agentic architecture pattern. There are areas for improvement (like configurable time ranges for metrics collection), but the core concepts are in place and the code is clear enough to learn from and build upon.
Get the Code
Repository: github.com/kikiya/agentic-tuning-reporter
git clone https://github.com/kikiya/agentic-tuning-reporter.git
cd agentic-tuning-reporter
Quick Start
The repo includes a QUICKSTART.md that gets you running in ~5 minutes:
Setup database – One script creates everything
Start backend services – Main API (8001) + Agent Service (8002)
Launch frontend – React UI on port 5173
Then click “New Report” → “Quick Analysis” and watch the AI agent:
Gather cluster metrics
Analyze performance
Create structured findings
Generate actionable recommendations
Save everything to the database
All automatically. All editable afterward.
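The gather → analyze → save flow above can be sketched as a plain pipeline. The function names and sample data below are hypothetical stand-ins for the agent's tool calls, not the repo's actual API:

```python
# Hypothetical sketch of the Quick Analysis flow; names and data are
# illustrative, not the repo's actual API.

def gather_metrics() -> dict:
    # In the real app, the agent calls the Main API's metrics endpoints.
    return {"cpu_pct": 87.0, "slow_queries": ["SELECT * FROM orders"]}

def analyze(metrics: dict) -> list[dict]:
    # Turn raw metrics into structured findings with recommendations.
    findings = []
    if metrics["cpu_pct"] > 80:
        findings.append({
            "finding": "High CPU utilization",
            "recommendation": "Review top statements and add indexes",
        })
    return findings

def save_report(findings: list[dict]) -> dict:
    # In the real app, this persists to CockroachDB for later editing in the UI.
    return {"status": "saved", "finding_count": len(findings)}

report = save_report(analyze(gather_metrics()))
```

The agent performs the same steps, but decides for itself which tools to call and in what order.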
Architecture Highlights
Three Services Working Together
Main API – FastAPI service with:
Reports CRUD operations
Cluster metrics endpoints (topology, schema, CPU, slow queries)
Direct CockroachDB connection
Agent Service – Separate FastAPI service with:
AI-powered report generation
FastAgent + LLM integration
MCP server that exposes Main API as tools
Frontend – React + TypeScript with:
“Quick Analysis” (AI-generated reports)
“Custom Report” (manual creation)
Both types are fully editable
Model Context Protocol for tools
The key insight from the blog post was giving the agent the same API access as a human. Here’s how:
```python
# backend/mcp/reporter_mcp.py
from main import app
from fastmcp import FastMCP

# Convert the entire FastAPI app to MCP tools
mcp = FastMCP.from_fastapi(app=app)
```

Every endpoint becomes a tool the agent can call. No manual wrapping needed.
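Conceptually, the conversion builds a registry mapping endpoints to callables that the model can discover and invoke. This stdlib-only sketch illustrates the idea; the functions are hypothetical, and FastMCP's real implementation also derives JSON schemas from the endpoint signatures:

```python
# Stdlib-only illustration of the "endpoints become tools" idea.
# Tool names and return values here are hypothetical.
import inspect
from typing import Callable

TOOLS: dict[str, Callable] = {}

def tool(fn: Callable) -> Callable:
    """Register a function as an agent-callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_cluster_topology() -> dict:
    return {"nodes": 3, "regions": ["us-east1"]}

@tool
def get_slow_queries(limit: int = 5) -> list[str]:
    return ["SELECT * FROM orders"][:limit]

# The agent can enumerate tools and their signatures, then call them:
for name, fn in TOOLS.items():
    print(name, inspect.signature(fn))
```

The value of `FastMCP.from_fastapi` is that this registry comes for free from routes you already maintain, so the agent's toolset and the human-facing API never drift apart.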
What You Can Learn
This repo demonstrates:
Agentic Architecture – An introduction to structuring apps where AI agents act autonomously
MCP Integration – Practical example of Model Context Protocol
Tool Design – What makes good tools for AI agents
Human-in-the-Loop – AI generates, humans refine via the UI
Service Separation – Why the agent service is separate from the main API
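On the tool-design point: agents choose tools based on their names, docstrings, and typed parameters, so a well-described function outperforms a generic one. A minimal sketch, with a hypothetical endpoint and sample data:

```python
# Hypothetical tool; the docstring and typed, defaulted parameters are
# what the model "sees" when deciding whether and how to call it.
def get_slow_queries(min_latency_ms: float = 100.0, limit: int = 10) -> list[dict]:
    """Return up to `limit` statements slower than `min_latency_ms`,
    sorted from slowest to fastest."""
    sample = [
        {"query": "SELECT * FROM orders", "latency_ms": 950.0},
        {"query": "SELECT 1", "latency_ms": 2.0},
    ]
    slow = [q for q in sample if q["latency_ms"] >= min_latency_ms]
    return sorted(slow, key=lambda q: q["latency_ms"], reverse=True)[:limit]
```

A tool named `get_slow_queries` with explicit thresholds needs no extra prompting; a tool named `query_db(sql: str)` forces the model to guess.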
Use Cases
This pattern works for any workflow where you want AI to:
Gather data from multiple sources
Analyze and synthesize information
Create structured outputs
Save results for human review
Examples:
Security audit reports
Code review summaries
Customer support analysis
Financial report generation
Research synthesis
Documentation
The repo includes:
README.md – Comprehensive setup and architecture guide
QUICKSTART.md – Get running in 5 minutes
backend/AGENT_SERVICE.md – Deep dive on agent architecture
backend/mcp/README.md – MCP server explanation
What You’ll Need
CockroachDB running locally (or remote cluster)
Python 3.12 (required for FastAgent)
Node.js 16+
Anthropic API key (or another provider's; FastAgent supports most major models)
30 minutes to explore
Key Takeaways
As I wrote in the original post:
“What I love most is that the original pattern of direct value coupling still holds. Every model improvement or new MCP tool instantly enriches the agent’s output without code rewrites.”
This architecture is:
Extensible – Add new tools by adding API endpoints, or new MCP servers
Maintainable – Clean separation of concerns
Flexible – Swap models, change prompts, no rewrites
Human-centric – AI assists, humans decide
Known Limitations & Future Improvements
This is a proof-of-concept. Some areas for enhancement:
Time Range Selection – Currently uses default time windows for metrics. Could add UI controls for “last hour”, “last day”, “custom range”
Report Scheduling – No automated report generation on a schedule
Multi-Cluster Support – Currently targets one cluster at a time
Advanced Filtering – Could add more granular filtering for slow queries and metrics
Report Templates – Could support different report types (weekly summary, incident analysis, etc.)
The core pattern is solid and these are just natural extensions as you adapt it to your needs.
Contributing & Feedback
This is a learning resource. If you:
Find bugs or improvements
Have questions about the architecture
Want to share how you adapted it
Open an issue or PR! I’m building this in public to help others navigate the shift from “AI as a feature” to “AI as a collaborator.”
What’s Next
I’ll continue exploring agentic architectures—how agents, tools, and humans can work together as adaptive systems. Follow along at kikia.io or watch the repo for updates.
Built to demonstrate agentic architecture patterns. Not production-ready, but production-inspired.