Note: This AI Agent is currently an experimental side project.
An AI-powered agent that answers queries about your Kubernetes cluster using OpenAI. It accepts natural language questions about your Kubernetes resources and returns concise, accurate answers about the current state of your cluster.
The agent uses natural language processing to:
- Fetch real-time information about pods, services, secrets, and configmaps
- Interpret cluster state and configurations
- Monitor cluster health through Prometheus metrics
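Under the hood, answering a question means combining an OpenAI call with live lookups against the Kubernetes API. As a rough illustration (a minimal sketch, not the project's actual code), such lookups can be made with the official `kubernetes` Python client:

```python
# Illustrative sketch only; the real agent's implementation may differ.
from kubernetes import client, config

# Outside the cluster, use your local kubeconfig; inside the cluster the agent
# would call config.load_incluster_config() instead.
config.load_kube_config()

core = client.CoreV1Api()
apps = client.AppsV1Api()

pods = core.list_namespaced_pod(namespace="default")
services = core.list_namespaced_service(namespace="default")
deployments = apps.list_namespaced_deployment(namespace="default")

print("pods:", [p.metadata.name for p in pods.items])
print("services:", [s.metadata.name for s in services.items])
print("deployments:", [d.metadata.name for d in deployments.items])
```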
## Table of Contents

- Prerequisites
- Installation
- Local Development Setup
- Kubernetes Deployment Setup
- Usage
- Example Queries
- Cluster Management
- Troubleshooting
## Prerequisites

- Python 3.10
- Minikube
- kubectl
- Docker
- OpenAI API key
## Installation

- Clone the repository:

```bash
git clone https://github.com/johnwroge/K8_AI_Query_Agent.git
cd K8_AI_Query_Agent
```
- Create and activate a virtual environment:

```bash
# Create the environment
python3.10 -m venv venv

# Activate on macOS/Linux:
source venv/bin/activate

# Activate on Windows:
.\venv\Scripts\activate
```
- Install dependencies:

```bash
pip install -r requirements.txt
```
## Local Development Setup

- Create a `.env` file with your OpenAI API key:

```
OPENAI_API_KEY=your-api-key-here
```
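For reference, the key is read from the environment at startup. A minimal sketch of that pattern (assuming the `python-dotenv` and `openai` packages; the project's exact code may differ):

```python
# Sketch: load the API key from .env and build an OpenAI client.
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # reads OPENAI_API_KEY from .env into the process environment
openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
```

When the agent runs inside the cluster, the same variable is expected to come from the `openai-api-key` Secret created in the deployment steps below rather than from `.env`.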
- Start the Flask server:

```bash
python main.py
```
- Monitor logs:

```bash
tail -f agent.log
```
- Test the application:

```bash
python -m unittest test_main.py
```
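If you want to add tests of your own, a hypothetical test in the same style might look like the sketch below (it assumes `main.py` exposes a Flask `app`; real tests would likely mock the OpenAI and Kubernetes clients rather than hit them live):

```python
# Hypothetical test sketch; adjust imports to match the actual module layout.
import unittest

from main import app  # assumption: main.py defines a Flask `app`


class QueryEndpointTest(unittest.TestCase):
    def setUp(self):
        self.client = app.test_client()

    def test_query_returns_answer(self):
        resp = self.client.post(
            "/query", json={"query": "How many nodes are in the cluster?"}
        )
        self.assertEqual(resp.status_code, 200)
        self.assertIn("answer", resp.get_json())


if __name__ == "__main__":
    unittest.main()
```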
## Kubernetes Deployment Setup

- Start Minikube:

```bash
minikube start
minikube status
kubectl get nodes
```
- Create the OpenAI secret:

```bash
# Generate the base64-encoded API key
echo -n "your-actual-openai-key" | base64
```

Create `openai-secret.yaml` with the encoded key (DO NOT commit this file):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: openai-api-key
type: Opaque
data:
  api-key: <your-base64-encoded-api-key>
```

Apply the secret:

```bash
kubectl apply -f openai-secret.yaml
```
- Build and deploy the application:

```bash
# Configure Docker to use Minikube's daemon
eval $(minikube docker-env)

# Build the image
docker build -t k8s-agent:latest .

# Apply the deployment configuration
kubectl apply -f deployment.yaml
```
- Deploy sample applications:

```bash
# Deploy NGINX
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80

# Deploy MongoDB
kubectl create deployment mongodb --image=mongo
kubectl expose deployment mongodb --port=27017

# Deploy Prometheus (used by the agent for metrics; see the sketch after these steps)
kubectl apply -f prometheus.yaml
```
- Verify deployments:

```bash
kubectl get pods,deployments,services
```
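The agent's Prometheus-backed health checks go through Prometheus's standard HTTP API. A minimal sketch of such a query (the service URL is a placeholder, not the project's actual configuration):

```python
# Illustrative only: query Prometheus's HTTP API for a metric.
import requests

PROM_URL = "http://prometheus:9090"  # placeholder; use your Prometheus service address

resp = requests.get(
    f"{PROM_URL}/api/v1/query",
    params={"query": "up"},  # 'up' reports which scrape targets are healthy
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["data"]["result"])
```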
## Usage

- Set up port forwarding:

```bash
kubectl port-forward service/k8s-agent-service 8000:80
```

The API will be available at `http://localhost:8000`.
## Example Queries

Check nodes:

```bash
curl -X POST http://localhost:8000/query \
  -H "Content-Type: application/json" \
  -d '{"query": "How many nodes are in the cluster?"}'
```

Response: `{"answer":"1","query":"How many nodes are in the cluster?"}`

Check pods:

```bash
curl -X POST http://localhost:8000/query \
  -H "Content-Type: application/json" \
  -d '{"query": "How many pods are in the default namespace?"}'
```

Response: `{"answer":"4","query":"How many pods are in the default namespace?"}`

Check deployments:

```bash
curl -X POST http://localhost:8000/query \
  -H "Content-Type: application/json" \
  -d '{"query": "What deployments are running?"}'
```

Response: `{"answer":"k8s-agent,mongodb,nginx,prometheus","query":"What deployments are running?"}`
The `test_api.sh` script can be edited to send whichever queries you want to run against the API:

```bash
chmod +x test_api.sh
./test_api.sh
```
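Equivalently, the same queries can be scripted in Python (this is an illustration, not part of the repository; it assumes the port-forward above is active):

```python
# Send a batch of questions to the agent and print each JSON response.
import requests

QUERIES = [
    "How many nodes are in the cluster?",
    "How many pods are in the default namespace?",
    "What deployments are running?",
]

for q in QUERIES:
    resp = requests.post("http://localhost:8000/query", json={"query": q}, timeout=30)
    resp.raise_for_status()
    print(resp.json())
```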
This project uses OpenAI's gpt-3.5-turbo by default for cost-effective development and testing. You can switch to GPT-4 for higher-quality answers by changing the model passed to the chat completion call:
```python
response = self.openai_client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": query},
    ],
    temperature=0,
)
```
## Cluster Management

Stop Minikube:

```bash
minikube stop
```

Delete the cluster:

```bash
minikube delete
```
## Troubleshooting

Common issues and solutions:

- OpenAI API issues:
  - Verify the API key is correct in the `.env` file
  - Check that the API key is properly base64-encoded in the Kubernetes secret
- Kubernetes issues:
  - Check Minikube status: `minikube status`
  - Verify pods are running: `kubectl get pods`
  - Check pod logs: `kubectl logs -f <pod-name>`
- Docker issues:
  - Ensure you are using Minikube's Docker daemon: `eval $(minikube docker-env)`
  - Rebuild the image if needed: `docker build -t k8s-agent:latest .`
## License

This project is licensed under the MIT License - see the LICENSE file for details.