PR5: Write a Kubernetes deployment YAML (Deployment, Service) for your model's UI (Streamlit, Gradio).
danilyef committed Nov 25, 2024
1 parent 2aa5b4d commit 9e366a7
Showing 8 changed files with 206 additions and 3 deletions.
3 changes: 0 additions & 3 deletions homework_9/pr4/requirements.txt
@@ -3,9 +3,6 @@ numpy==1.26.4
pandas==2.2.1
torch==2.2.1
-f https://download.pytorch.org/whl/cpu
streamlit==1.39.0
pytest==8.3.0
gradio==4.44.1
fastapi[standard]==0.114.0
pydantic==2.8.2
uvicorn==0.30.5
18 changes: 18 additions & 0 deletions homework_9/pr5/Dockerfile
@@ -0,0 +1,18 @@
# Use official Python image
FROM python:3.11

# Set working directory
WORKDIR /app

# Copy files
COPY . .

# Install dependencies
RUN pip install -r requirements.txt

# Expose the port Streamlit runs on
EXPOSE 8501

# Command to run the application
#CMD ["streamlit", "run", "main.py", "--server.address=0.0.0.0"]
CMD ["streamlit", "run", "main.py"]
90 changes: 90 additions & 0 deletions homework_9/pr5/README.md
@@ -0,0 +1,90 @@
## go to the correct directory

```bash
cd homework_9/pr5
```

## start minikube

```bash
minikube start
eval $(minikube -p minikube docker-env)
```

The `eval` line points your shell's Docker client at Minikube's internal Docker daemon, so the image built in the next step is visible to the cluster. This is required because the Deployment uses `imagePullPolicy: Never`.


## build docker image

```bash
docker build -t streamlit-app:latest .
```

## deploy to minikube

```bash
kubectl apply -f k8s_deployment.yaml
```

## get url

```bash
minikube service streamlit-service --url
```


## test predict

Open the URL printed by the previous command in your browser (the IP and port will differ per machine), e.g. `http://192.168.99.100:30000/`.




In Kubernetes, **`type: NodePort`** is used in a Service when you want to access your application from outside the Kubernetes cluster (like your laptop or local browser).

Here’s why you might use it in simple terms:

---

### **1. Kubernetes Runs on Its Own Network**
- Kubernetes creates an internal network for all the Pods.
- By default, this network isn’t accessible from the outside (e.g., your computer).

---

### **2. Services Expose Pods**
- A **Service** connects your app (running in Pods) to the outside world.
- **`type: NodePort`** exposes your app on a specific port on every node in your cluster.

---

### **3. Why Use `NodePort`?**
- When you set `type: NodePort`, Kubernetes assigns a port from the `30000-32767` range (by default) on the node's IP address.
- You can now access your app by visiting:
```
http://<node-ip>:<node-port>
```
For example:
```
http://192.168.99.100:30000
```
Here, `192.168.99.100` is the Minikube node's IP, and `30000` is the NodePort.
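
You can read the assigned port straight from the Service with standard kubectl (the service name matches the manifest later in this commit):

```bash
kubectl get service streamlit-service -o jsonpath='{.spec.ports[0].nodePort}'
```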

---

### **4. Why Not Use ClusterIP?**
- By default, Services use **`type: ClusterIP`**, which only allows access *within* the Kubernetes cluster.
- This is useful for internal communication between apps but not for external access.

---

### **5. Why NodePort is Good for Minikube**
- In Minikube, you're running Kubernetes on your local machine.
- Using `NodePort` is a quick and simple way to test and access your app from your browser or other devices on the same network.

---

### **In Summary**
- **`type: NodePort`** makes your app accessible outside Kubernetes on a specific port.
- This is great for testing or development, especially in Minikube.
- Later, in production, you might use a `LoadBalancer` Service or an `Ingress` resource (a separate resource, not a Service type) for more advanced routing.
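
If you want a stable port instead of a randomly assigned one, you can pin it in the Service spec (a sketch; `30000` is an arbitrary choice within the allowed range):

```yaml
ports:
  - protocol: TCP
    port: 8501
    targetPort: 8501
    nodePort: 30000  # must fall inside the 30000-32767 range
type: NodePort
```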
Empty file added homework_9/pr5/__init__.py
Empty file.
35 changes: 35 additions & 0 deletions homework_9/pr5/k8s_deployment.yaml
@@ -0,0 +1,35 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: streamlit-app
  labels:
    app: streamlit-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: streamlit-app
  template:
    metadata:
      labels:
        app: streamlit-app
    spec:
      containers:
        - name: streamlit-app
          image: streamlit-app:latest
          imagePullPolicy: Never  # use the image built inside Minikube's Docker daemon
          ports:
            - containerPort: 8501  # Streamlit's default port
---
apiVersion: v1
kind: Service
metadata:
  name: streamlit-service
spec:
  selector:
    app: streamlit-app
  ports:
    - protocol: TCP
      port: 8501        # port the Service listens on inside the cluster
      targetPort: 8501  # containerPort on the Pods
  type: NodePort        # expose on a high port (30000-32767) on the node
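
After `kubectl apply`, a quick way to confirm that both replicas came up and a node port was assigned (standard kubectl; output will vary):

```bash
kubectl rollout status deployment/streamlit-app
kubectl get service streamlit-service
```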
28 changes: 28 additions & 0 deletions homework_9/pr5/main.py
@@ -0,0 +1,28 @@
import streamlit as st
from utils import Model

model = Model(model_name='distilbert-base-uncased-finetuned-sst-2-english')

# Create the Streamlit app title
st.title('Sentiment Analysis with DistilBERT')

# Create a text input for user's sentence
user_input = st.text_area("Enter text to analyze:", "I love this app!")

# Create analyze button
analyze_button = st.button("Analyze Sentiment")

# Make prediction when button is clicked
if analyze_button and user_input:
    # Get the prediction
    label = model.predict(user_input)
    score = model.predict_proba(user_input)

    # Display results
    st.write("### Results:")
    label_color = "green" if label == "POSITIVE" else "red"
    st.markdown(f"Label: **:{label_color}[{label}]**")
    st.write(f"Confidence: **{score:.4f}** ({score*100:.2f}%)")

    # Create a progress bar for the confidence score
    st.progress(score)
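
For a quick local check outside the container, the same entrypoint the Dockerfile uses applies:

```bash
streamlit run main.py
```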
9 changes: 9 additions & 0 deletions homework_9/pr5/requirements.txt
@@ -0,0 +1,9 @@
transformers==4.42.3
numpy==1.26.4
pandas==2.2.1
torch==2.2.1
-f https://download.pytorch.org/whl/cpu
streamlit==1.39.0
pytest==8.3.0
pydantic==2.8.2
requests==2.32.2
26 changes: 26 additions & 0 deletions homework_9/pr5/utils.py
@@ -0,0 +1,26 @@
from transformers import DistilBertTokenizer, DistilBertForSequenceClassification
import torch

class Model:
    def __init__(self, model_name="distilbert-base-uncased-finetuned-sst-2-english"):
        self.tokenizer = DistilBertTokenizer.from_pretrained(model_name)
        self.model = DistilBertForSequenceClassification.from_pretrained(model_name)
        self.model.eval()  # inference mode: disables dropout

    def predict(self, text):
        """Return the predicted label (e.g. POSITIVE/NEGATIVE) for the input text."""
        inputs = self.tokenizer(
            text, return_tensors="pt", truncation=True, padding=True
        )
        with torch.no_grad():
            outputs = self.model(**inputs)
        predicted_class_id = torch.argmax(outputs.logits, dim=1).item()
        return self.model.config.id2label[predicted_class_id]

    def predict_proba(self, text):
        """Return the probability of the top (predicted) class."""
        inputs = self.tokenizer(
            text, return_tensors="pt", truncation=True, padding=True
        )
        with torch.no_grad():
            outputs = self.model(**inputs)
        probabilities = torch.softmax(outputs.logits, dim=1)
        return probabilities.squeeze().max().item()
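
A minimal interactive check of the wrapper (assumes the model weights download from the Hugging Face Hub on first use):

```python
from utils import Model

model = Model()  # defaults to distilbert-base-uncased-finetuned-sst-2-english
print(model.predict("I love this app!"))        # expected: "POSITIVE"
print(model.predict_proba("I love this app!"))  # top-class probability, e.g. ~0.99
```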
