Theory
45 minutes
Lesson 5/6

Advanced Prompt Techniques

Advanced prompting techniques: Self-consistency, Tree of Thought, Prompt Chaining, and ReAct


Once you have mastered the basics, it is time to explore advanced techniques that unlock the full power of LLMs.

🎯 Objectives

  • Master Self-consistency and Tree of Thought
  • Implement Prompt Chaining for complex tasks
  • Understand the ReAct pattern for AI agents
  • Apply meta-prompting

1. Self-Consistency

Concept

Self-consistency = run the same CoT prompt multiple times and pick the answer that appears most often.

Text
Run 1: "Let's think step by step..." → Answer: 42
Run 2: "Let's think step by step..." → Answer: 42
Run 3: "Let's think step by step..." → Answer: 45
Run 4: "Let's think step by step..." → Answer: 42
Run 5: "Let's think step by step..." → Answer: 42

Final answer: 42 (appeared 4/5 times)

Implementation

Python
from openai import OpenAI
from collections import Counter

client = OpenAI()

def self_consistent_answer(question, n_samples=5, temperature=0.7):
    answers = []

    prompt = f"""
    {question}

    Let's solve this step by step:
    1. First, I'll analyze the problem
    2. Then, work through the logic
    3. Finally, give the answer

    Answer (number only):
    """

    for _ in range(n_samples):
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature
        )

        # Extract answer
        answer = response.choices[0].message.content.strip()
        answers.append(answer)

    # Majority voting
    counter = Counter(answers)
    most_common = counter.most_common(1)[0]

    return {
        "final_answer": most_common[0],
        "confidence": most_common[1] / n_samples,
        "all_answers": answers
    }

# Usage
result = self_consistent_answer("If a train travels 120km in 2 hours, what's its speed?")
print(f"Answer: {result['final_answer']} (confidence: {result['confidence']:.0%})")

When to use

✅ Math problems, logic puzzles, code debugging
❌ Creative tasks, subjective questions


2. Tree of Thought (ToT)

Concept

Tree of Thought = explore multiple reasoning paths, evaluate them, and choose the best one.

Text
          [Problem]
         /    |    \
    Path A  Path B  Path C
     /  \     |     /  \
    A1  A2   B1    C1  C2
                    ✓ (best)

Prompt Template

Text
You are an expert problem solver. Solve the following problem using the
Tree of Thought method.

Problem: [problem]

Step 1: Generate 3 possible approaches
Step 2: Evaluate each approach (score 1-10)
Step 3: Pursue the best approach
Step 4: If stuck, backtrack and try another

Format:
## Approach 1: [name]
- Description: ...
- Evaluation: X/10
- Reason: ...

## Approach 2: [name]
...

## Best Approach: [chosen]
## Solution:
[detailed solution following chosen approach]

Example: Business Strategy

Text
Problem: An AI startup needs to triple its revenue within 12 months.
Resources: a team of 10 people, $500K budget.

Generate 3 strategies, evaluate them, and develop the best one.

Output:

Text
## Approach 1: B2B Enterprise Sales
- Focus: Target large enterprises with custom AI solutions
- Evaluation: 8/10
- Pros: High deal value, recurring revenue
- Cons: Long sales cycle, need senior salespeople

## Approach 2: Product-Led Growth
- Focus: Free tier + viral features
- Evaluation: 6/10
- Pros: Scalable, low CAC
- Cons: Slow growth, may not hit 3x in 12 months

## Approach 3: Partnership Strategy
- Focus: Integrate with major platforms
- Evaluation: 9/10
- Pros: Fast distribution, credibility boost
- Cons: Revenue share, dependency

## Best Approach: Partnership Strategy
## Detailed Plan:
[expanded strategy with milestones, KPIs, action items]
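The selection step can also be automated on the application side. Below is a minimal sketch, assuming the model's output follows the `## Approach N: [name]` / `Evaluation: X/10` format shown above; the `pick_best_approach` helper is hypothetical:

```python
import re

def pick_best_approach(tot_output: str):
    """Parse approach names and their X/10 scores from a ToT response
    and return the highest-scoring one as (name, score)."""
    pattern = r"## Approach \d+: (.+?)\n.*?Evaluation: (\d+)/10"
    matches = re.findall(pattern, tot_output, flags=re.DOTALL)
    if not matches:
        raise ValueError("No approaches found in model output")
    name, score = max(matches, key=lambda m: int(m[1]))
    return name.strip(), int(score)
```

The winning approach can then be fed into a follow-up prompt that asks the model to develop it in detail.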

3. Prompt Chaining

Concept

Prompt Chaining = break a complex task into a chain of simpler prompts.

Text
Task: Write a research paper

Chain: [Outline] → [Draft Intro] → [Draft Body] → [Draft Conclusion] → [Edit] → [Format]

Implementation

Python
# Assumes call_llm(prompt) -> str and extract_sections(outline) -> list[str]
# are defined elsewhere.
def research_paper_chain(topic):
    # Step 1: Generate outline
    outline = call_llm(f"""
    Create detailed outline for research paper on: {topic}
    Include: Introduction, 3-4 main sections, Conclusion
    Format as hierarchical bullet points.
    """)

    # Step 2: Write introduction
    intro = call_llm(f"""
    Topic: {topic}
    Outline: {outline}

    Write engaging introduction (200 words).
    Include: Hook, context, thesis statement.
    """)

    # Step 3: Write body sections
    body_sections = []
    for section in extract_sections(outline):
        section_content = call_llm(f"""
        Topic: {topic}
        Section: {section}
        Previous content: {intro}

        Write this section (300 words).
        Include examples and evidence.
        """)
        body_sections.append(section_content)

    # Step 4: Write conclusion
    conclusion = call_llm(f"""
    Topic: {topic}
    Paper content: {intro} {' '.join(body_sections)}

    Write conclusion (150 words).
    Summarize key points, implications, future directions.
    """)

    # Step 5: Edit and polish
    body = "\n\n".join(body_sections)
    full_paper = f"{intro}\n\n{body}\n\n{conclusion}"

    final = call_llm(f"""
    Edit this paper for:
    - Grammar and clarity
    - Smooth transitions
    - Consistent tone

    Paper: {full_paper}
    """)

    return final

Benefits

  • ✅ Better quality through focused tasks
  • ✅ Easier debugging (identify which step failed)
  • ✅ More control over each component
  • ✅ Can optimize individual steps
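The pattern above can be generalized: a chain is just a list of steps, each turning the previous output into the next prompt. A minimal sketch with a hypothetical `run_chain` helper; the `llm` parameter stands in for any completion call:

```python
def run_chain(steps, llm):
    """Run a list of prompt-building steps sequentially.

    Each step is a function mapping the previous output to the next
    prompt; `llm` maps a prompt string to a response string.
    """
    result = ""
    for build_prompt in steps:
        result = llm(build_prompt(result))
    return result

# Example chain: outline first, then a draft based on the outline
steps = [
    lambda prev: "Write an outline for an essay on prompt chaining.",
    lambda prev: f"Expand this outline into a draft:\n{prev}",
]
```

Because every intermediate result passes through `run_chain`, it is easy to log each step and see exactly where a chain goes wrong.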

4. ReAct Pattern (Reason + Act)

Concept

ReAct = the LLM reasons about the task, decides on an action, observes the result, and repeats.

Text
Thought: I need to find the weather in Hanoi
Action: search_weather("Hanoi")
Observation: 28°C, sunny, humidity 75%
Thought: Now I have the weather data, I can respond
Action: respond("The weather in Hanoi is 28°C and sunny")

Prompt Template

Text
You are an AI assistant that follows the ReAct pattern.

Available tools:
- search(query): Search the web
- calculate(expression): Do math
- lookup(term): Look up definition

For each step, output:
Thought: [your reasoning]
Action: [tool_name(params)] or respond(message)
Observation: [result - will be filled by system]

Task: {user_query}

Begin:

Implementation Sketch

Python
import re

# web_search, dictionary_lookup, call_llm, REACT_PROMPT, and the extract_*
# helpers are assumed defined elsewhere.
# Note: eval() on model-generated input is unsafe outside a demo.
TOOLS = {
    "search": lambda q: web_search(q),
    "calculate": lambda expr: eval(expr),
    "lookup": lambda term: dictionary_lookup(term)
}

def react_agent(query, max_steps=5):
    context = f"Task: {query}\n\nBegin:\n"

    for step in range(max_steps):
        # Get LLM response
        response = call_llm(REACT_PROMPT + context)

        # Parse thought and action
        thought = extract_thought(response)
        action = extract_action(response)

        context += f"Thought: {thought}\nAction: {action}\n"

        # Check if done
        if action.startswith("respond"):
            final_message = extract_response_message(action)
            return final_message

        # Execute tool
        tool_name, params = parse_action(action)
        if tool_name in TOOLS:
            observation = TOOLS[tool_name](params)
            context += f"Observation: {observation}\n"
        else:
            context += "Observation: Unknown tool\n"

    return "Max steps reached"
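The sketch leaves `parse_action` undefined. One way to implement it, assuming actions look like `tool_name("argument")` with a single string argument:

```python
import re

def parse_action(action: str):
    """Split an action string like search("weather Hanoi") into
    (tool_name, argument). Returns (None, None) if it doesn't parse."""
    match = re.match(r'(\w+)\((.*)\)\s*$', action.strip())
    if not match:
        return None, None
    tool_name, raw_arg = match.groups()
    return tool_name, raw_arg.strip().strip('"\'')
```

A production agent would want stricter parsing (e.g., JSON-formatted tool calls), but a regex is enough for a single-argument demo.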

5. Meta-Prompting

Concept

Meta-prompting = using AI to generate or improve prompts.

Prompt Generator

Text
You are a prompt engineering expert.

Task: {user_task}

Generate an optimal prompt that:
1. Uses appropriate technique (zero-shot, few-shot, CoT)
2. Includes clear role/context
3. Specifies output format
4. Handles edge cases

Output the prompt in a code block, then explain why this structure works.
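Wiring this template into code is a single format call. A minimal sketch; `build_meta_prompt` is a hypothetical helper, and the resulting string is sent to the model like any other prompt:

```python
META_PROMPT_TEMPLATE = """You are a prompt engineering expert.

Task: {task}

Generate an optimal prompt that:
1. Uses appropriate technique (zero-shot, few-shot, CoT)
2. Includes clear role/context
3. Specifies output format
4. Handles edge cases

Output the prompt in a code block, then explain why this structure works."""

def build_meta_prompt(task: str) -> str:
    """Fill the generator template with a concrete task description."""
    return META_PROMPT_TEMPLATE.format(task=task)
```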

Prompt Optimizer

Text
Current prompt: {original_prompt}
Issue: {what's wrong - e.g., "responses too vague"}

Suggest 3 improved versions with explanations.
For each:
1. Show the improved prompt
2. Explain the changes
3. Predict how output will improve

Prompt Debugger

Text
Prompt used: {prompt}
Expected output: {expected}
Actual output: {actual}

Analyze:
1. Why did the prompt produce this output?
2. What's missing or unclear?
3. How to fix it?

Provide corrected prompt.

6. Constrained Generation

Controlling Output

Text
Generate Python function with these CONSTRAINTS:
- Must use type hints
- Max 20 lines
- Must include docstring
- Must have error handling
- No external dependencies
- Follow PEP 8

Function purpose: Parse CSV file and return dictionary

Output Validation Prompt

Text
Generate JSON for user profile.

SCHEMA (must match exactly):
{
  "name": string (2-50 chars),
  "age": integer (18-120),
  "email": string (valid email format),
  "interests": array of strings (1-5 items)
}

After generating, validate against schema and fix any issues.

User info: John, 25 years old, john@email.com, likes coding and music
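Asking the model to self-validate helps, but the application should verify the schema too. Below is a minimal stdlib-only sketch of the checks implied by the schema above; the `validate_profile` helper is hypothetical:

```python
import re

def validate_profile(profile: dict) -> list:
    """Return a list of schema violations; an empty list means valid."""
    errors = []
    name = profile.get("name")
    if not isinstance(name, str) or not 2 <= len(name) <= 50:
        errors.append("name must be a string of 2-50 chars")
    age = profile.get("age")
    if not isinstance(age, int) or not 18 <= age <= 120:
        errors.append("age must be an integer between 18 and 120")
    email = profile.get("email")
    if not isinstance(email, str) or not re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", email):
        errors.append("email must look like a valid email address")
    interests = profile.get("interests")
    if (not isinstance(interests, list) or not 1 <= len(interests) <= 5
            or not all(isinstance(i, str) for i in interests)):
        errors.append("interests must be an array of 1-5 strings")
    return errors
```

If the list is non-empty, the errors can be fed back to the model in a follow-up prompt asking it to regenerate a conforming profile.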

7. Multi-Turn Strategy

Conversation Design

Python
conversation = [
    {
        "role": "system",
        "content": """You are a code reviewer.
        Follow this process:
        1. First ask clarifying questions
        2. Then provide initial feedback
        3. Wait for response before detailed review
        4. Give final assessment with score"""
    },
    {
        "role": "user",
        "content": "Review my code: [code]"
    },
    # AI asks questions...
    {
        "role": "user",
        "content": "It's for a web app, performance is priority"
    },
    # AI gives focused review...
]

State Management

Python
class ConversationManager:
    def __init__(self, system_prompt):
        self.messages = [{"role": "system", "content": system_prompt}]
        self.state = {}  # Track conversation state

    def add_user_message(self, content):
        self.messages.append({"role": "user", "content": content})

    def get_response(self):
        response = call_llm(self.messages)  # call_llm assumed defined elsewhere
        self.messages.append({"role": "assistant", "content": response})
        self.update_state(response)
        return response

    def update_state(self, response):
        # Track what info has been collected
        if "email" in response.lower():
            self.state["asked_email"] = True

8. Hands-on Lab

Lab 1: Implement Self-Consistency

Python
# Complete this function
def solve_with_self_consistency(problem, n_trials=5):
    """
    Solve a math/logic problem using self-consistency.
    Return the most common answer.
    """
    # Your code here
    pass

# Test
problem = "A farmer has 17 sheep. All but 9 die. How many are left?"
answer = solve_with_self_consistency(problem)
print(f"Answer: {answer}")  # Should be 9

Lab 2: Build a Prompt Chain

Design a chain for: "Analyze a product review and generate response"

Text
Chain steps:
1. [?]
2. [?]
3. [?]

Lab 3: Create a ReAct Agent

Build an agent that can:

  • Search for information
  • Do calculations
  • Answer user questions

📝 Quiz

  1. How does self-consistency work?

    • Run once and verify
    • Run multiple times, majority voting
    • Use several different models
    • Cache and reuse responses
  2. How does Tree of Thought differ from CoT?

    • It is faster
    • It explores multiple paths, with backtracking
    • It uses fewer tokens
    • It needs no reasoning
  3. When is Prompt Chaining a good fit?

    • Simple tasks
    • When a fast response is needed
    • Complex tasks that need many steps
    • When the token limit is low
  4. What does the ReAct pattern combine?

    • Reading and Acting
    • Reasoning and Acting
    • Retrieval and Acting
    • Rewriting and Acting

🎯 Key Takeaways

  1. Self-consistency - majority voting improves accuracy
  2. Tree of Thought - explore multiple paths, backtrack when needed
  3. Prompt Chaining - break complex tasks into smaller steps
  4. ReAct - the foundation for AI agents
  5. Meta-prompting - use AI to optimize prompts

🚀 Next lesson

Building Your First AI Application - hands-on practice building an AI chatbot with Streamlit and the OpenAI API!