
Deployment & Capstone Project

Deploy an AI app to Streamlit Cloud and complete the capstone project

🎯 Lesson Objectives

The final lesson: deploy an AI app to the cloud and build a capstone project that brings together everything you have learned.

After this lesson, you will:

✅ Deploy a Streamlit app to the cloud
✅ Build the capstone: AI-Powered Data Analyst Assistant
✅ Apply best practices: safety, cost, memory

🛠️ Deploy to Streamlit Cloud

1.1 Project Structure

📁 my-ai-app/
  🐍 app.py — Main app
  📂 pages/
    💬 1_💬_Chat.py
    📄 2_📄_Analyzer.py
    ⚙️ 3_⚙️_Settings.py
  📂 utils/
    🤖 ai_client.py — AI wrapper
    🧠 memory.py — Memory management
    🛡️ moderation.py — Safety
  📂 .streamlit/
    ⚙️ config.toml
    🔒 secrets.toml (gitignored!)
  📋 requirements.txt
  🚫 .gitignore
  📖 README.md
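Since secrets.toml must never reach GitHub, a minimal .gitignore for this layout might look like the following (the exact entries are a suggestion, not a fixed requirement):

```txt
# .gitignore
.streamlit/secrets.toml
.env
__pycache__/
*.pyc
.venv/
```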

1.2 Requirements File

txt
# requirements.txt
streamlit>=1.30.0
openai>=1.12.0
anthropic>=0.18.0
tiktoken>=0.6.0
python-dotenv>=1.0.0
pandas>=2.1.0
plotly>=5.18.0

1.3 Streamlit Config

toml
# .streamlit/config.toml
[theme]
primaryColor = "#FF6B6B"
backgroundColor = "#FFFFFF"
secondaryBackgroundColor = "#F0F2F6"
textColor = "#262730"
font = "sans serif"

[server]
maxUploadSize = 10

1.4 Deploy Steps

Bash
# 1. Push code to GitHub
git init
git add .
git commit -m "Initial commit"
git remote add origin https://github.com/you/my-ai-app.git
git push -u origin main

# 2. Go to share.streamlit.io
# 3. Connect GitHub repo
# 4. Select branch: main, file: app.py
# 5. Add secrets (OPENAI_API_KEY) in dashboard
# 6. Deploy! 🚀

1.5 Secrets on Streamlit Cloud

Example
Streamlit Cloud Dashboard → App Settings → Secrets

OPENAI_API_KEY = "sk-xxxxx"
ANTHROPIC_API_KEY = "sk-ant-xxxxx"

Important: NEVER commit secrets.toml to GitHub!
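On Streamlit Cloud the keys above are exposed through st.secrets, while local development typically relies on environment variables (e.g. a .env file loaded with python-dotenv). A minimal sketch of that fallback, using a hypothetical helper named get_api_key:

```python
import os

def get_api_key(name, secrets=None):
    """Return a key from platform secrets if present, else from the environment.

    On Streamlit Cloud you would pass st.secrets as `secrets`; locally,
    os.getenv covers variables loaded from a .env file.
    """
    if secrets is not None and name in secrets:
        return secrets[name]
    return os.getenv(name)

# Example: falls back to the environment when no platform secrets exist
os.environ["OPENAI_API_KEY"] = "sk-local-dev"
print(get_api_key("OPENAI_API_KEY"))  # → sk-local-dev
```

The same call then works unchanged in both environments; only the `secrets` argument differs.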

Checkpoint

Have you successfully deployed your app to Streamlit Cloud?

📝 Alternative Deployments

2.1 Docker

dockerfile
# Dockerfile
FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8501
# Note: curl is not included in python:3.11-slim, so install it for the healthcheck
RUN apt-get update && apt-get install -y --no-install-recommends curl && rm -rf /var/lib/apt/lists/*
HEALTHCHECK CMD curl --fail http://localhost:8501/_stcore/health
ENTRYPOINT ["streamlit", "run", "app.py", "--server.port=8501", "--server.address=0.0.0.0"]

Bash
docker build -t my-ai-app .
docker run -p 8501:8501 -e OPENAI_API_KEY=sk-xxx my-ai-app

2.2 Deploy Options Comparison

| Platform | Free Tier | Custom Domain | Scaling |
|---|---|---|---|
| Streamlit Cloud | ✅ 1 app | | |
| Hugging Face Spaces | ✅ unlimited | | |
| Railway | $5 credit | | |
| Render | ✅ (spins down when idle) | | |
| Azure App Service | ✅ F1 tier | ✅ | ✅ |

Checkpoint

Do you know the alternative deployment options such as Docker, Railway, and Hugging Face?

⚡ Production Best Practices

3.1 Environment Variables

python
import os

# Local: .env file
# Production: platform secrets
OPENAI_KEY = os.getenv("OPENAI_API_KEY")
ENVIRONMENT = os.getenv("ENVIRONMENT", "development")

if ENVIRONMENT == "production":
    # Production settings
    DEFAULT_MODEL = "gpt-4o-mini"  # Cost-effective
    MAX_TOKENS = 500
    RATE_LIMIT = 10  # requests per minute
else:
    # Development settings
    DEFAULT_MODEL = "gpt-4o"
    MAX_TOKENS = 2000
    RATE_LIMIT = 100

3.2 Error Handling in Production

python
import logging
import streamlit as st
from openai import OpenAI

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def safe_ai_call(prompt):
    try:
        response = client.chat.completions.create(
            model=DEFAULT_MODEL,
            messages=[{"role": "user", "content": prompt}],
            timeout=30
        )
        return response.choices[0].message.content
    except Exception as e:
        logger.error(f"AI call failed: {e}")
        st.error("🔧 The system hit a problem. Please try again later.")
        return None
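Many API failures (timeouts, rate limits) are transient and succeed on a second attempt. A minimal exponential-backoff wrapper you could combine with safe_ai_call — this helper is an assumption beyond the lesson's code, not part of any SDK:

```python
import time
import logging

logger = logging.getLogger(__name__)

def call_with_retries(fn, attempts=3, base_delay=1.0):
    """Retry fn with exponential backoff: base_delay, 2x, 4x, ..."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as e:
            if attempt == attempts - 1:
                raise  # out of attempts; let the caller handle it
            delay = base_delay * (2 ** attempt)
            logger.warning(f"Attempt {attempt + 1} failed ({e}); retrying in {delay}s")
            time.sleep(delay)

# Example: a call that fails twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient")
    return "ok"

print(call_with_retries(flaky, attempts=3, base_delay=0.01))  # → ok
```

In the app you would wrap the API call itself, e.g. `call_with_retries(lambda: safe_ai_call(prompt))`.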

3.3 Rate Limiting

python
from datetime import datetime, timedelta
import streamlit as st

class RateLimiter:
    def __init__(self, max_requests=10, window_seconds=60):
        self.max_requests = max_requests
        self.window = timedelta(seconds=window_seconds)
        self.requests = []

    def allow(self):
        now = datetime.now()
        # Drop requests that have fallen out of the sliding window
        self.requests = [r for r in self.requests if now - r < self.window]

        if len(self.requests) >= self.max_requests:
            return False

        self.requests.append(now)
        return True

# Streamlit reruns the script on every interaction, so keep the limiter
# in session_state — a module-level instance would reset on each rerun.
if "limiter" not in st.session_state:
    st.session_state.limiter = RateLimiter(max_requests=10, window_seconds=60)

if not st.session_state.limiter.allow():
    st.warning("⏳ You have sent too many requests. Please wait a minute.")

Checkpoint

Have you understood the best practices for error handling, rate limiting, and environment variables?

💻 Capstone Project

4.1 Project Brief: AI Data Analyst Assistant

Build a complete AI-powered app that helps users analyze data.

Features required:

| # | Feature | Skills Used |
|---|---|---|
| 1 | Upload CSV/Excel | Streamlit file upload |
| 2 | Auto-analyze data | Function calling + prompts |
| 3 | Natural language Q&A | Conversation management |
| 4 | Generate charts | Tool use + Plotly |
| 5 | Export report | Structured output |
| 6 | Cost tracking | Cost optimization |
| 7 | Safe mode | Content moderation |

4.2 Phase 1: Core App (60 min)

python
# capstone/app.py
import streamlit as st
import pandas as pd
from openai import OpenAI

st.set_page_config(page_title="AI Data Analyst", page_icon="📊", layout="wide")
st.title("📊 AI Data Analyst Assistant")

client = OpenAI(api_key=st.secrets["OPENAI_API_KEY"])

# Upload data
uploaded = st.file_uploader("Upload CSV", type=["csv"])

if uploaded:
    df = pd.read_csv(uploaded)

    # Show preview
    st.subheader("📋 Data Preview")
    st.dataframe(df.head())

    col1, col2, col3 = st.columns(3)
    col1.metric("Rows", len(df))
    col2.metric("Columns", len(df.columns))
    col3.metric("Missing", df.isnull().sum().sum())

    # Natural language query
    question = st.chat_input("Ask a question about the data...")

    if question:
        # Summarize the dataset as context for the model
        data_info = f"""
        Columns: {list(df.columns)}
        Dtypes: {df.dtypes.to_dict()}
        Shape: {df.shape}
        Sample: {df.head(3).to_dict()}
        Stats: {df.describe().to_dict()}
        """

        with st.chat_message("assistant"):
            response = client.chat.completions.create(
                model="gpt-4o",
                messages=[
                    {"role": "system", "content": f"""
                    You are an expert Data Analyst.
                    Dataset info: {data_info}
                    Reply in Vietnamese.
                    When a chart is needed, output Python code (pandas + plotly).
                    """},
                    {"role": "user", "content": question}
                ],
                stream=True
            )
            st.write_stream(response)

4.3 Phase 2: Add Tools & Memory (60 min)

Add function calling for:

  • analyze_column(column_name) — Deep dive into one column
  • create_chart(chart_type, x, y) — Create a chart
  • filter_data(condition) — Filter the data
  • calculate_stat(column, stat_type) — Compute statistics
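The first of those tools, expressed as an OpenAI-style function schema (the field layout follows the Chat Completions tools format; the description text is illustrative):

```python
# OpenAI-style tool definition for analyze_column
analyze_column_tool = {
    "type": "function",
    "function": {
        "name": "analyze_column",
        "description": "Deep-dive statistics and value counts for one column",
        "parameters": {
            "type": "object",
            "properties": {
                "column_name": {
                    "type": "string",
                    "description": "Name of the dataframe column to analyze",
                },
            },
            "required": ["column_name"],
        },
    },
}

print(analyze_column_tool["function"]["name"])  # → analyze_column
```

Passing a list of such definitions via the `tools` parameter lets the model request a call with structured arguments instead of free text.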

Add conversation memory (token-based truncation).
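A minimal sketch of token-based truncation; it uses a rough 4-characters-per-token estimate as a stand-in for tiktoken, which the real app (already in requirements.txt) would use:

```python
def truncate_history(messages, max_tokens=1000):
    """Drop the oldest messages until the estimated token count fits the budget."""
    def estimate_tokens(msg):
        return max(1, len(msg["content"]) // 4)  # rough stand-in for tiktoken

    kept, total = [], 0
    for msg in reversed(messages):  # newest first, so recent turns survive
        total += estimate_tokens(msg)
        if total > max_tokens:
            break
        kept.append(msg)
    return list(reversed(kept))  # restore chronological order

history = [{"role": "user", "content": "x" * 4000},
           {"role": "assistant", "content": "y" * 400},
           {"role": "user", "content": "z" * 400}]
print(len(truncate_history(history, max_tokens=300)))  # → 2
```

In practice you would also keep the system message pinned outside the truncated window.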

4.4 Phase 3: Safety & Cost (30 min)

  • Input moderation
  • Cost tracking (display in sidebar)
  • Rate limiting
  • Error handling
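For the cost-tracking item, a tiny estimator whose result you could display in the sidebar. The per-1M-token prices below are placeholders; always check the provider's current pricing page:

```python
# Per-1M-token prices in USD — placeholder numbers, verify against
# the provider's pricing page before relying on them.
PRICES = {
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
    "gpt-4o": {"input": 2.50, "output": 10.00},
}

def estimate_cost(model, input_tokens, output_tokens):
    """Estimated USD cost of one call, given token counts from the API usage field."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

cost = estimate_cost("gpt-4o-mini", input_tokens=2000, output_tokens=500)
print(f"${cost:.6f}")  # → $0.000600
```

Accumulate the per-call estimates in st.session_state and render the running total with st.sidebar.metric.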

4.5 Phase 4: Deploy (30 min)

  • Push to GitHub
  • Deploy to Streamlit Cloud
  • Test with real data
  • Share URL

4.6 Rubric

| Criteria | Points |
|---|---|
| Core functionality (upload, query, response) | 25 |
| Tool use (function calling works) | 20 |
| Memory management (handles long conversations) | 15 |
| Safety (moderation, error handling) | 15 |
| UX (clean UI, loading states, helpful messages) | 10 |
| Deployment (live on Streamlit Cloud) | 10 |
| Code quality (organized, documented) | 5 |
| Total | 100 |

Checkpoint

Have you completed the AI Data Analyst Assistant capstone project?

💻 Extension Challenges

Finished the capstone? Try adding:

Challenge 1: Multi-Model Support

Let the user choose GPT-4, Claude, or Gemini, and compare their responses.

Challenge 2: RAG from Documents

Upload PDF → Extract text → RAG-based Q&A.

Challenge 3: Auto-Generated Dashboard

AI analyzes the data → auto-generates a dashboard layout → renders the charts.

Challenge 4: Export to PowerPoint

AI generates the analysis → exports it as slides (python-pptx).

Checkpoint

Have you tried the extension challenges such as Multi-Model Support and RAG?

🎯 Wrap-up

📝 Course Summary

What you learned across 12 lessons:

| Module | Lesson | Topic |
|---|---|---|
| Fundamentals | 01 | GenAI Overview |
| | 02 | How LLMs Work |
| | 03 | Prompt Engineering Basics |
| | 04 | Advanced Prompting |
| APIs & Tools | 05 | LLM APIs (OpenAI, Claude, Gemini) |
| | 06 | Prompt Optimization & Testing |
| Building Apps | 07 | Streamlit for AI |
| | 08 | Conversation Management |
| | 09 | Function Calling & Tool Use |
| Production | 10 | Cost Optimization |
| | 11 | Safety & Ethics |
| | 12 | Deployment & Capstone |

Skills Acquired

Example
✅ Understand LLM architecture & limitations
✅ Write effective prompts (CRAFT, CoT, few-shot)
✅ Use OpenAI, Anthropic, Google APIs
✅ Build chat interfaces with Streamlit
✅ Manage conversations & memory
✅ Implement function calling & tool use
✅ Optimize costs (model routing, caching)
✅ Ensure safety (moderation, bias, hallucination)
✅ Deploy to production

🎯 What's Next?

Example
📚 Recommended paths:
1. RAG & Vector Databases → Build knowledge-base chatbots
2. LangChain / LlamaIndex → Frameworks for complex AI apps
3. Fine-tuning → Customize models for specific domains
4. AI Agents → Autonomous agents (AutoGPT, CrewAI)
5. MLOps → Production ML pipelines

Congratulations on completing the GenAI Intro course! 🎉

Self-check questions

  1. What are the main steps to deploy an AI application to Streamlit Cloud?
  2. Why should environment variables be kept separate between development and production?
  3. What factors should you weigh when choosing a deployment platform for an AI app (Streamlit Cloud vs Hugging Face vs Railway)?
  4. After finishing the GenAI Intro course, what kinds of AI applications can you build?

🎉 Excellent! You have completed the entire Generative AI Intro course!

You have covered everything from basics to advanced topics: LLM architecture, prompt engineering, API integration, Streamlit, conversation management, function calling, cost optimization, safety, and deployment. Keep exploring RAG, LangChain, fine-tuning, and AI agents to level up your skills! 🚀