🎯 Lesson Objectives
Final lesson: deploy an AI app to the cloud and build a capstone project that brings together everything you have learned.
After this lesson, you will be able to:
✅ Deploy a Streamlit app to the cloud
✅ Build the capstone: AI-Powered Data Analyst Assistant
✅ Apply best practices: safety, cost, memory
🛠️ Deploy to Streamlit Cloud
1.1 Project Structure
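The lesson does not spell the files out, so here is a plausible layout consistent with the requirements file and deploy steps that follow (every name other than app.py and requirements.txt is illustrative):

```text
my-ai-app/
├── app.py                  # Streamlit entry point
├── requirements.txt        # Python dependencies
├── .streamlit/
│   ├── config.toml         # Theme & server settings
│   └── secrets.toml        # Local secrets (git-ignored!)
├── .env                    # Local env vars (git-ignored!)
└── .gitignore
```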
1.2 Requirements File
```text
# requirements.txt
streamlit>=1.30.0
openai>=1.12.0
anthropic>=0.18.0
tiktoken>=0.6.0
python-dotenv>=1.0.0
pandas>=2.1.0
plotly>=5.18.0
```
1.3 Streamlit Config
```toml
# .streamlit/config.toml
[theme]
primaryColor = "#FF6B6B"
backgroundColor = "#FFFFFF"
secondaryBackgroundColor = "#F0F2F6"
textColor = "#262730"
font = "sans serif"

[server]
maxUploadSize = 10
```
1.4 Deploy Steps
```bash
# 1. Push code to GitHub
git init
git add .
git commit -m "Initial commit"
git remote add origin https://github.com/you/my-ai-app.git
git push -u origin main

# 2. Go to share.streamlit.io
# 3. Connect GitHub repo
# 4. Select branch: main, file: app.py
# 5. Add secrets (OPENAI_API_KEY) in dashboard
# 6. Deploy! 🚀
```
1.5 Secrets on Streamlit Cloud
```text
Streamlit Cloud Dashboard → App Settings → Secrets

OPENAI_API_KEY = "sk-xxxxx"
ANTHROPIC_API_KEY = "sk-ant-xxxxx"
```
Important: do NOT commit secrets.toml to GitHub!
Checkpoint
Have you successfully deployed your app to Streamlit Cloud?
📝 Alternative Deployments
2.1 Docker
```dockerfile
# Dockerfile
FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# curl is not included in python:3.11-slim, but the HEALTHCHECK below needs it
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

COPY . .

EXPOSE 8501
HEALTHCHECK CMD curl --fail http://localhost:8501/_stcore/health
ENTRYPOINT ["streamlit", "run", "app.py", "--server.port=8501", "--server.address=0.0.0.0"]
```

```bash
docker build -t my-ai-app .
docker run -p 8501:8501 -e OPENAI_API_KEY=sk-xxx my-ai-app
```
2.2 Deploy Options Comparison
| Platform | Free Tier | Custom Domain | Scaling |
|---|---|---|---|
| Streamlit Cloud | ✅ 1 app | ❌ | ❌ |
| Hugging Face Spaces | ✅ unlimited | ❌ | ✅ |
| Railway | $5 credit | ✅ | ✅ |
| Render | ✅ spin-down | ✅ | ✅ |
| Azure App Service | ✅ F1 tier | ✅ | ✅✅ |
Checkpoint
Do you know the alternative deployment options such as Docker, Railway, and Hugging Face?
⚡ Production Best Practices
3.1 Environment Variables
```python
import os

# Local: .env file
# Production: Platform secrets
OPENAI_KEY = os.getenv("OPENAI_API_KEY")
ENVIRONMENT = os.getenv("ENVIRONMENT", "development")

if ENVIRONMENT == "production":
    # Production settings
    DEFAULT_MODEL = "gpt-4o-mini"  # Cost-effective
    MAX_TOKENS = 500
    RATE_LIMIT = 10  # requests per minute
else:
    # Development settings
    DEFAULT_MODEL = "gpt-4o"
    MAX_TOKENS = 2000
    RATE_LIMIT = 100
```
3.2 Error Handling in Production
```python
import logging
import streamlit as st
from openai import OpenAI

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def safe_ai_call(prompt):
    try:
        response = client.chat.completions.create(
            model=DEFAULT_MODEL,
            messages=[{"role": "user", "content": prompt}],
            timeout=30
        )
        return response.choices[0].message.content

    except Exception as e:
        logger.error(f"AI call failed: {e}")
        st.error("🔧 The system is having trouble. Please try again later.")
        return None
```
3.3 Rate Limiting
```python
from datetime import datetime, timedelta
import streamlit as st

class RateLimiter:
    def __init__(self, max_requests=10, window_seconds=60):
        self.max_requests = max_requests
        self.window = timedelta(seconds=window_seconds)
        self.requests = []

    def allow(self):
        now = datetime.now()
        # Drop requests that have fallen out of the sliding window
        self.requests = [r for r in self.requests if now - r < self.window]

        if len(self.requests) >= self.max_requests:
            return False

        self.requests.append(now)
        return True

# Keep the limiter in session state so it survives Streamlit reruns
if "limiter" not in st.session_state:
    st.session_state.limiter = RateLimiter(max_requests=10, window_seconds=60)

if not st.session_state.limiter.allow():
    st.warning("⏳ You have sent too many requests. Please wait one minute.")
```
Checkpoint
Have you mastered the best practices for error handling, rate limiting, and environment variables?
💻 Capstone Project
4.1 Project Brief: AI Data Analyst Assistant
Build a complete AI-powered app that helps users analyze data.
Features required:
| # | Feature | Skills Used |
|---|---|---|
| 1 | Upload CSV/Excel | Streamlit file upload |
| 2 | Auto-analyze data | Function calling + prompts |
| 3 | Natural language Q&A | Conversation management |
| 4 | Generate charts | Tool use + Plotly |
| 5 | Export report | Structured output |
| 6 | Cost tracking | Cost optimization |
| 7 | Safe mode | Content moderation |
4.2 Phase 1: Core App (60 minutes)
```python
# capstone/app.py
import streamlit as st
import pandas as pd
from openai import OpenAI

st.set_page_config(page_title="AI Data Analyst", page_icon="📊", layout="wide")
st.title("📊 AI Data Analyst Assistant")

client = OpenAI(api_key=st.secrets["OPENAI_API_KEY"])

# Upload data
uploaded = st.file_uploader("Upload CSV", type=["csv"])

if uploaded:
    df = pd.read_csv(uploaded)

    # Show preview
    st.subheader("📋 Data Preview")
    st.dataframe(df.head())

    col1, col2, col3 = st.columns(3)
    col1.metric("Rows", len(df))
    col2.metric("Columns", len(df.columns))
    col3.metric("Missing", df.isnull().sum().sum())

    # Natural language query
    question = st.chat_input("Ask about the data...")

    if question:
        # Convert data info to context
        data_info = f"""
        Columns: {list(df.columns)}
        Dtypes: {df.dtypes.to_dict()}
        Shape: {df.shape}
        Sample: {df.head(3).to_dict()}
        Stats: {df.describe().to_dict()}
        """

        with st.chat_message("assistant"):
            response = client.chat.completions.create(
                model="gpt-4o",
                messages=[
                    {"role": "system", "content": f"""
                    You are an expert Data Analyst.
                    Dataset info: {data_info}
                    Answer in Vietnamese.
                    When a chart is needed, output Python code (pandas + plotly).
                    """},
                    {"role": "user", "content": question}
                ],
                stream=True
            )
            st.write_stream(response)
```
4.3 Phase 2: Add Tools & Memory (60 minutes)
Add function calling for:
- `analyze_column(column_name)`: deep dive into a single column
- `create_chart(chart_type, x, y)`: create a chart
- `filter_data(condition)`: filter the data
- `calculate_stat(column, stat_type)`: compute a statistic
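The tools above can be declared as function-calling schemas for the Chat Completions `tools` parameter. A minimal sketch showing two of the four, plus a dispatch table; the handler body is a placeholder standing in for real pandas logic:

```python
import json

# JSON Schema declarations passed to client.chat.completions.create(tools=...)
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "analyze_column",
            "description": "Deep dive into a single column of the dataset",
            "parameters": {
                "type": "object",
                "properties": {"column_name": {"type": "string"}},
                "required": ["column_name"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "calculate_stat",
            "description": "Compute a statistic for a column",
            "parameters": {
                "type": "object",
                "properties": {
                    "column": {"type": "string"},
                    "stat_type": {"type": "string", "enum": ["mean", "min", "max"]},
                },
                "required": ["column", "stat_type"],
            },
        },
    },
]

def calculate_stat(df, column, stat_type):
    # df is a pandas DataFrame in the real app; here anything exposing
    # the column as a sequence of numbers works.
    values = list(df[column])
    if stat_type == "mean":
        return sum(values) / len(values)
    return min(values) if stat_type == "min" else max(values)

HANDLERS = {"calculate_stat": calculate_stat}

def dispatch(df, tool_name, arguments_json):
    """Route a model tool call to the matching Python handler."""
    args = json.loads(arguments_json)
    return HANDLERS[tool_name](df, **args)
```

In the app loop, you check `response.choices[0].message.tool_calls` and feed each call's name and arguments into `dispatch`.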
Add conversation memory (token-based truncation).
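Token-based truncation can be sketched as below. To keep the example dependency-free, tokens are estimated at roughly 4 characters each; in the real app use tiktoken (already in requirements.txt) for exact counts:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # Swap in len(tiktoken.encoding_for_model(model).encode(text)) for production.
    return max(1, len(text) // 4)

def truncate_history(messages: list[dict], max_tokens: int = 3000) -> list[dict]:
    """Keep the system message plus the most recent messages that fit the budget."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    budget = max_tokens - sum(estimate_tokens(m["content"]) for m in system)
    kept: list[dict] = []
    for msg in reversed(rest):  # walk newest-first
        cost = estimate_tokens(msg["content"])
        if cost > budget:
            break
        kept.append(msg)
        budget -= cost
    return system + list(reversed(kept))
```

Call `truncate_history(st.session_state.messages)` right before each API request so long conversations never blow the context window.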
4.4 Phase 3: Safety & Cost (30 minutes)
- Input moderation
- Cost tracking (display in sidebar)
- Rate limiting
- Error handling
4.5 Phase 4: Deploy (30 minutes)
- Push to GitHub
- Deploy to Streamlit Cloud
- Test with real data
- Share URL
4.6 Rubric
| Criteria | Points |
|---|---|
| Core functionality (upload, query, response) | 25 |
| Tool use (function calling works) | 20 |
| Memory management (handles long conversations) | 15 |
| Safety (moderation, error handling) | 15 |
| UX (clean UI, loading states, helpful messages) | 10 |
| Deployment (live on Streamlit Cloud) | 10 |
| Code quality (organized, documented) | 5 |
| Total | 100 |
Checkpoint
Have you completed the AI Data Analyst Assistant capstone project?
💻 Extension Challenges
Finished the capstone? Try adding:
Challenge 1: Multi-Model Support
Let the user choose GPT-4, Claude, or Gemini, and compare the responses.
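One way to structure this is a provider registry keyed by the user's sidebar choice. A sketch only: the model names are illustrative and the `call_*` functions are stubs standing in for the real OpenAI/Anthropic/Gemini SDK calls:

```python
# Each call_* function would wrap the matching SDK
# (openai, anthropic, google-generativeai); stubbed out here.
def call_openai(model, prompt):     # placeholder for client.chat.completions.create(...)
    return f"[openai:{model}] {prompt}"

def call_anthropic(model, prompt):  # placeholder for client.messages.create(...)
    return f"[anthropic:{model}] {prompt}"

PROVIDERS = {
    "GPT-4o": (call_openai, "gpt-4o"),
    "Claude": (call_anthropic, "claude-3-5-sonnet-latest"),
}

def ask(choice: str, prompt: str) -> str:
    """Route a prompt to the provider the user selected."""
    fn, model = PROVIDERS[choice]
    return fn(model, prompt)

# In the app: choice = st.sidebar.selectbox("Model", list(PROVIDERS))
```

To compare responses side by side, call `ask` once per provider and render each answer in its own `st.column`.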
Challenge 2: RAG from Documents
Upload PDF → Extract text → RAG-based Q&A.
Challenge 3: Auto-Generated Dashboard
AI analyzes the data → auto-generates a dashboard layout → renders the charts.
Challenge 4: Export to PowerPoint
AI generates the analysis → exports slides (python-pptx).
Checkpoint
Have you tried the extension challenges, such as Multi-Model Support and RAG?
🎯 Wrap-Up
📝 Course Summary
Covered across 12 lessons:
| Module | Bài | Topic |
|---|---|---|
| Fundamentals | 01 | GenAI Overview |
| | 02 | How LLMs Work |
| | 03 | Prompt Engineering Basics |
| | 04 | Advanced Prompting |
| APIs & Tools | 05 | LLM APIs (OpenAI, Claude, Gemini) |
| | 06 | Prompt Optimization & Testing |
| Building Apps | 07 | Streamlit for AI |
| | 08 | Conversation Management |
| | 09 | Function Calling & Tool Use |
| Production | 10 | Cost Optimization |
| | 11 | Safety & Ethics |
| | 12 | Deployment & Capstone |
Skills Acquired
✅ Understand LLM architecture & limitations
✅ Write effective prompts (CRAFT, CoT, few-shot)
✅ Use OpenAI, Anthropic, Google APIs
✅ Build chat interfaces with Streamlit
✅ Manage conversations & memory
✅ Implement function calling & tool use
✅ Optimize costs (model routing, caching)
✅ Ensure safety (moderation, bias, hallucination)
✅ Deploy to production
🎯 What's Next?
📚 Recommended paths:
1. RAG & Vector Databases → build knowledge-base chatbots
2. LangChain / LlamaIndex → frameworks for complex AI apps
3. Fine-tuning → customize models for specific domains
4. AI Agents → autonomous agents (AutoGPT, CrewAI)
5. MLOps → production ML pipelines

Congratulations on completing the GenAI Intro course! 🎉
Self-Check Questions
- What are the main steps to deploy an AI application to Streamlit Cloud?
- Why should environment variables be kept separate between development and production?
- What factors should you weigh when choosing a deployment platform for an AI app (Streamlit Cloud vs Hugging Face vs Railway)?
- After finishing the GenAI Intro course, what kinds of AI applications can you build?
🎉 Excellent! You have completed the entire Generative AI Intro course!
You now have a solid grasp of everything from fundamentals to advanced topics: LLM architecture, prompt engineering, API integration, Streamlit, conversation management, function calling, cost optimization, safety, and deployment. Keep exploring RAG, LangChain, fine-tuning, and AI agents to level up your skills! 🚀
