# Advanced Prompting

Advanced techniques for complex tasks, better reasoning, and expert-level results.

## Chain-of-Thought (CoT) Prompting

The Idea: Make the LLM show its reasoning step-by-step, improving accuracy for complex problems.

### Basic CoT

Add "Let's think step by step" to your prompt.

Without CoT:

An $87.50 meal is split evenly among 3 people, with a 15% tip. How much does each person pay?

Result: The model often skips steps and gets the arithmetic wrong.

With CoT:

An $87.50 meal is split evenly among 3 people, with a 15% tip. How much does each person pay?
Let's think step by step.

Result:

  1. 15% of $87.50 = $13.13
  2. Total = $100.63
  3. Per person = $33.54

When to use: Math, logic, multi-step reasoning.

### Zero-Shot CoT

Just add the magic phrase at the end:

  • "Let's think step by step"
  • "Let's approach this systematically"
  • "Let's break this down"

### Few-Shot CoT

Show examples with reasoning:

Question: If a train travels 60 mph for 2.5 hours, how far does it go?
Reasoning: Distance = Speed × Time = 60 × 2.5 = 150 miles
Answer: 150 miles

Question: A store has a 20% off sale. How much do you pay for a $80 item?
Reasoning: Discount = 80 × 0.20 = $16. Final price = 80 - 16 = $64
Answer: $64

Question: [Your actual question]
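
The pattern above can be assembled programmatically. A minimal sketch in Python (the `ask_llm` call is deliberately omitted; this only shows how the worked examples are stitched onto the real question):

```python
# Few-shot CoT: prepend worked examples (question + reasoning + answer)
# so the model imitates the reasoning format for the new question.
EXAMPLES = [
    {
        "question": "If a train travels 60 mph for 2.5 hours, how far does it go?",
        "reasoning": "Distance = Speed × Time = 60 × 2.5 = 150 miles",
        "answer": "150 miles",
    },
    {
        "question": "A store has a 20% off sale. How much do you pay for a $80 item?",
        "reasoning": "Discount = 80 × 0.20 = $16. Final price = 80 - 16 = $64",
        "answer": "$64",
    },
]

def build_few_shot_cot_prompt(question: str) -> str:
    """Assemble a few-shot CoT prompt from worked examples."""
    parts = []
    for ex in EXAMPLES:
        parts.append(
            f"Question: {ex['question']}\n"
            f"Reasoning: {ex['reasoning']}\n"
            f"Answer: {ex['answer']}\n"
        )
    # End with an open "Reasoning:" so the model continues in that format.
    parts.append(f"Question: {question}\nReasoning:")
    return "\n".join(parts)
```

The trailing `Reasoning:` nudges the model to produce its steps before the answer.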

## Tree of Thought (ToT)

The Idea: Explore multiple reasoning paths, like a decision tree.

I need to [goal]. Let's explore 3 different approaches:

Approach 1: [method 1]
Pros: ...
Cons: ...
Likely outcome: ...

Approach 2: [method 2]
Pros: ...
Cons: ...
Likely outcome: ...

Approach 3: [method 3]
Pros: ...
Cons: ...
Likely outcome: ...

Now, which approach would you recommend and why?

When to use: Strategic decisions, comparing options, when one path isn't obvious.

## Self-Consistency

The Idea: Generate multiple responses and pick the most consistent answer.

For critical tasks:

  1. Ask the same question 3-5 times (new conversation each time)
  2. Compare responses
  3. The most common answer or approach is likely best

When to use: High-stakes decisions, complex reasoning, when accuracy matters more than speed.
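
The "pick the most common answer" step can be automated once you have collected the samples. A sketch (the sampling itself would come from your LLM client; only the voting logic is shown):

```python
from collections import Counter

def self_consistent_answer(answers: list[str]) -> str:
    """Return the most frequent answer among independent samples.

    Answers are normalized (whitespace and case) before voting, so
    trivially different phrasings of the same answer count together.
    """
    normalized = [a.strip().lower() for a in answers]
    winner, _count = Counter(normalized).most_common(1)[0]
    return winner
```

For free-form answers you would need a looser notion of "same answer" (e.g., extracting just the final number), but for short factual outputs exact-match voting is often enough.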

## Prompt Chaining

The Idea: Break complex tasks into a sequence of simpler prompts, passing output from one to the next.

### Example: Writing a Blog Post

Prompt 1 (Research):

List 10 key points about [topic] that would interest [audience].

Prompt 2 (Outline):

Based on these key points: [output from Prompt 1]
Create a blog post outline with introduction, 3 main sections, and conclusion.

Prompt 3 (Writing):

Using this outline: [output from Prompt 2]
Write the introduction section. Tone: [tone]. Length: 200 words.

Prompts 4-6: Write each main section (one prompt per section).

Prompt 7 (Conclusion):

Given this content: [all previous sections]
Write a compelling conclusion with a call to action.

Benefits:

  • Better control at each stage
  • Can review and adjust between steps
  • Easier to identify and fix issues
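
The chain above maps directly onto code: each stage's output is interpolated into the next prompt. A sketch, where `call_llm` is a placeholder to replace with your actual API call:

```python
def call_llm(prompt: str) -> str:
    # Placeholder: substitute your real API call (OpenAI, Anthropic, etc.).
    return f"<response to: {prompt[:40]}...>"

def blog_post_chain(topic: str, audience: str, tone: str) -> dict:
    """Run the research -> outline -> intro chain, keeping every stage's output."""
    stages = {}
    stages["research"] = call_llm(
        f"List 10 key points about {topic} that would interest {audience}."
    )
    stages["outline"] = call_llm(
        f"Based on these key points: {stages['research']}\n"
        "Create a blog post outline with introduction, 3 main sections, and conclusion."
    )
    stages["introduction"] = call_llm(
        f"Using this outline: {stages['outline']}\n"
        f"Write the introduction section. Tone: {tone}. Length: 200 words."
    )
    return stages
```

Keeping intermediate outputs in a dict is what makes the "review and adjust between steps" benefit practical: you can inspect or edit any stage before it feeds the next.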

## ReAct (Reasoning + Acting)

The Idea: Combine reasoning with actions (tool use, searches, etc.).

Task: Find the current market cap of the top 3 tech companies and compare them.

Thought: I need to identify the top 3 tech companies first.
Action: List the top 3 tech companies by market cap.
Observation: [Result]

Thought: Now I need current market caps. My knowledge is outdated.
Action: [Search for current data or note the limitation]
Observation: [Result]

Thought: Now I can compare them.
Action: Create comparison.
Result: [Comparison]

When to use: Multi-step tasks requiring both thinking and information gathering.

## Constrained Generation

The Idea: Force specific output formats using strict constraints.

### JSON Output

Extract the following from this text and return ONLY valid JSON:
{
  "name": "",
  "date": "",
  "amount": 0,
  "category": ""
}

Text: "Bought lunch at Chipotle on March 15th for $12.50"
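
Even with a strict prompt, the model can return malformed or incomplete JSON, so validate the response before using it. A minimal sketch for this expense schema (retry the prompt when it raises):

```python
import json

EXPECTED_KEYS = {"name", "date", "amount", "category"}

def parse_expense(raw: str) -> dict:
    """Validate that the model returned the JSON shape we asked for."""
    data = json.loads(raw)  # raises a ValueError subclass on invalid JSON
    missing = EXPECTED_KEYS - data.keys()
    if missing:
        raise ValueError(f"Model response missing keys: {sorted(missing)}")
    return data
```

Validation plus retry is usually cheaper and more reliable than trying to write a prompt that never fails.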

### Regex-Like Patterns

Generate 5 email addresses following this pattern:
firstname.lastname@company.com

All lowercase, real names, Fortune 500 companies.

### Forced Choices

Answer ONLY with: "YES", "NO", or "UNCLEAR"

Question: Based on this contract clause, can we terminate early?
[Contract text]
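
Forced-choice outputs pair naturally with a strict parser: anything outside the allowed set is rejected rather than guessed at. A sketch:

```python
ALLOWED = {"YES", "NO", "UNCLEAR"}

def parse_forced_choice(response: str) -> str:
    """Normalize and validate a forced-choice response; reject anything else."""
    answer = response.strip().upper().rstrip(".")
    if answer not in ALLOWED:
        raise ValueError(f"Unexpected answer: {response!r}")
    return answer
```

Normalizing case and stray punctuation first makes the check tolerant of trivial deviations while still failing loudly on real ones.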

## Role-Based Prompting (Advanced)

Go beyond simple roles. Create detailed personas.

You are Dr. Sarah Chen, a venture capitalist with:
- 15 years in enterprise SaaS investing
- Engineering background (MIT CS)
- Portfolio: $500M AUM
- Known for rigorous technical due diligence
- Direct communication style
- Focus: B2B SaaS, infrastructure

I'm pitching you my [product]. What questions would you ask?

Why it works: Detailed roles constrain the LLM to think and respond in specific ways.

## Meta-Prompting

The Idea: Ask the LLM to help you write better prompts.

I want to [goal]. Help me write a highly effective prompt for this task.
Consider:
- What context you need
- What format would be best
- What constraints matter
- What examples would help

My initial attempt: [your basic prompt]

Or:

I want to achieve [goal]. Before answering, ask me 5 questions 
to better understand what I need, then provide your response.

Benefits: Ensures you've thought through requirements before getting output.

## Negative Prompting

The Idea: Tell the LLM what NOT to do.

Write a professional email to a client about a project delay.

DO NOT:
- Blame anyone
- Make excuses
- Over-promise on new timeline
- Use corporate jargon

DO:
- Take responsibility
- Provide specific new date
- Offer compensation/solution
- Maintain confident tone

When to use: When you've had bad outputs before and know what to avoid.

## Perspective-Taking

The Idea: Ask the LLM to consider multiple viewpoints.

Analyze this business decision from three perspectives:

1. Financial perspective: [focus on numbers, ROI]
2. Customer perspective: [focus on user experience]
3. Team perspective: [focus on execution feasibility]

Then synthesize these into a recommendation.

When to use: Complex decisions, avoiding blind spots, whole-picture analysis.

## Socratic Method

The Idea: Have the LLM ask YOU questions to clarify thinking.

I'm considering [decision]. Don't give me advice yet. 
Instead, ask me 5 probing questions to help me think through this clearly.

After I answer, ask 5 more based on my responses.

Benefits: Clarifies your own thinking, uncovers assumptions, leads to better decisions.

## Iterative Refinement Pattern

The Idea: Explicitly structure iteration into your prompt.

Task: [What you want]

Your approach:
1. First, provide a draft
2. Then, critique your own draft (what's weak, what's missing)
3. Then, provide an improved version
4. Finally, explain why the second version is better

Benefits: One prompt, multiple iterations, higher quality output.

## Constitutional AI Approach

The Idea: Give the LLM principles to follow.

Constitution for this response:
1. Accuracy over creativity
2. Cite limitations when uncertain
3. Provide balanced views
4. Avoid speculation
5. Use simple language

Now, answer: [Your question]

When to use: When you need reliable, balanced, well-bounded responses.

## Format-First Prompting

The Idea: Start with the exact format you want, then fill it in.

Create a product comparison:

| Feature | Product A | Product B | Winner |
|---------|-----------|-----------|--------|
| [Fill]  | [Fill]    | [Fill]    | [Fill] |
| [Fill]  | [Fill]    | [Fill]    | [Fill] |
| [Fill]  | [Fill]    | [Fill]    | [Fill] |

Products to compare: [Your products]
Features to evaluate: [Your features]

Benefits: Gets exactly the format you want, no reformatting needed.

## Prompt Templates

Create reusable templates for common tasks.

### Code Review Template

Review this [language] code for:
- Bugs and errors
- Performance issues
- Security vulnerabilities
- Best practice violations
- Readability concerns

Code:
```[language]
[Your code]
```

For each issue found:

  1. Severity: [Critical/High/Medium/Low]
  2. Location: [Line number or function]
  3. Issue: [What's wrong]
  4. Fix: [How to fix it]
  5. Why: [Explanation]

### Analysis Template

Analyze [subject] using this framework:

  1. SITUATION: What's the current state?

  2. PROBLEMS: What are the key issues?

  3. ROOT CAUSES: Why do these problems exist?

  4. OPTIONS: What are 3-5 possible solutions?

  5. RECOMMENDATION: What's the best path forward, and why?

  6. NEXT STEPS: What are the immediate actions?

Subject: [Your topic]
Context: [Background info]


### Learning Template

Teach me [concept] at [level] using this structure:

  1. Simple Definition (one sentence)
  2. Why It Matters (real-world relevance)
  3. Key Components (break it down)
  4. Example 1: [Simple example]
  5. Example 2: [More complex example]
  6. Common Mistakes (what people get wrong)
  7. Quick Quiz (3 questions to test understanding)
  8. Next Steps (what to learn next)

Concept: [What you want to learn]
Level: [Beginner/Intermediate/Advanced]


## Handling Hallucinations

**The Problem**: LLMs confidently generate false information.

### Mitigation Strategies

**1. Request Citations**

Explain [topic]. For each claim, cite your source or note if it's general knowledge vs. uncertain.


**2. Ask for Confidence Levels**

Answer this question and rate your confidence (1-10) for each part of your answer.


**3. Request Verification Steps**

Provide your answer, then list what claims should be verified and how.


**4. Use Constraints**

Answer only based on: [specific documents you provide]
Do not use any other knowledge.


**5. Multiple Models**

Check answers across ChatGPT, Claude, and Gemini. Disagreement between models is a signal of potential hallucination.


## Working with Long Context

Modern LLMs have huge context windows (100K-2M tokens). Use them effectively.

### Structure Long Prompts

CONTEXT

[Background information - can be pages long]

TASK

[What you want done]

SPECIFIC INSTRUCTIONS

[Detailed requirements]

OUTPUT FORMAT

[How to structure the response]

CONSTRAINTS

[Boundaries and limitations]


### Reference Management

I'll provide several documents. Refer to them as DOC1, DOC2, etc.

DOC1: [Content]
DOC2: [Content]
DOC3: [Content]

Now, compare DOC1 and DOC2 on [criteria], using DOC3 as reference.


### Chunking Strategy

For extremely long documents:

I'm going to provide a long document in 5 parts. After each part, just acknowledge with "Received part X". After part 5, I'll ask my questions.

Part 1: [Content]
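
Splitting the document can be scripted. A sketch that cuts the text into roughly equal parts and labels each one to match the prompt above (character-based splitting is an assumption; for precise budgeting you would count tokens instead):

```python
def chunk_document(text: str, parts: int = 5) -> list[str]:
    """Split a document into roughly equal consecutive parts."""
    size = -(-len(text) // parts)  # ceiling division
    return [text[i : i + size] for i in range(0, len(text), size)]

def chunk_prompts(text: str, parts: int = 5) -> list[str]:
    """Label each chunk so the model can acknowledge 'Received part X'."""
    chunks = chunk_document(text, parts)
    return [f"Part {i}/{len(chunks)}: {chunk}" for i, chunk in enumerate(chunks, 1)]
```

Splitting on paragraph or section boundaries instead of raw character offsets usually gives the model more coherent chunks.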


## Multimodal Prompting

### Images

[Upload image]

Analyze this image for:

  1. Main subject
  2. Style/aesthetic
  3. Technical quality
  4. Potential improvements
  5. Suitable use cases

### Code + Explanation

[Upload code file]

Review this code and:

  1. Explain what it does (high-level)
  2. Identify bugs or issues
  3. Suggest optimizations
  4. Rate code quality (1-10)

### Data Analysis

[Upload CSV/data]

Analyze this dataset:

  1. Summary statistics
  2. Data quality issues
  3. Interesting patterns
  4. Visualization suggestions
  5. Analysis recommendations

## Prompt Engineering Patterns

### Pattern: Expert Panel

Assemble a panel of 3 experts to evaluate [topic]:

  • Expert 1: [Role/specialty]
  • Expert 2: [Role/specialty]
  • Expert 3: [Role/specialty]

Have each expert provide their perspective, then synthesize into a final recommendation.


### Pattern: Red Team / Blue Team

Analyze this decision:

BLUE TEAM (Advocates for it): Argue why this is a good idea. Steel-man the position.

RED TEAM (Challenges it): Argue why this is problematic. Find the weaknesses.

VERDICT: Balanced assessment considering both perspectives.


### Pattern: Time Travel

Project this decision forward:

6 MONTHS FROM NOW: What likely happened?

1 YEAR FROM NOW: What are the outcomes?

5 YEARS FROM NOW: What's the long-term impact?

VERDICT: Should I do it?


### Pattern: Analogical Reasoning

Explain [complex concept] by:

  1. Finding 3 analogies from different domains
  2. Explaining how each analogy helps understanding
  3. Noting where each analogy breaks down

Then synthesize into a clear explanation.


## Advanced Control Techniques

### Temperature Tuning
- **0.0-0.2**: Facts, code, analysis (deterministic)
- **0.3-0.5**: General writing, instructions (mostly consistent)
- **0.6-0.8**: Creative writing, brainstorming (balanced)
- **0.9-1.2**: Very creative, unconventional ideas (unpredictable)
- **1.3-2.0**: Experimental, chaotic (rarely useful)

### Top-P Tuning
- **0.1**: Only most likely tokens (very focused)
- **0.5**: Moderately focused
- **0.9**: Standard (good balance)
- **0.95**: More variety
- **0.99**: Maximum variety

**Pro Tip**: Start with defaults (temp 0.7, top-p 0.9). Only adjust if you have specific needs.
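
These rules of thumb can be encoded as presets so every call site uses consistent settings. A sketch (parameter names follow the common OpenAI-style `temperature`/`top_p` convention; the preset names and exact values are illustrative choices from the ranges above, not standards):

```python
# Task-type presets drawn from the temperature/top-p guidance above.
SAMPLING_PRESETS = {
    "factual":  {"temperature": 0.1, "top_p": 0.9},   # facts, code, analysis
    "writing":  {"temperature": 0.4, "top_p": 0.9},   # general writing
    "creative": {"temperature": 0.7, "top_p": 0.95},  # brainstorming
    "wild":     {"temperature": 1.1, "top_p": 0.99},  # unconventional ideas
}

def sampling_params(task_type: str) -> dict:
    """Look up sampling parameters, falling back to the balanced default."""
    return SAMPLING_PRESETS.get(task_type, {"temperature": 0.7, "top_p": 0.9})
```

The fallback implements the Pro Tip: unknown task types get the default temp 0.7 / top-p 0.9.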

## System Prompts (API Users)

If using APIs, you can set a system prompt that persists across all messages.

```python
system_prompt = """
You are a technical writing assistant specializing in API documentation.

Guidelines:
- Use clear, concise language
- Always include code examples
- Format code as markdown with language tags
- Note common pitfalls
- Follow our style guide: [link]

Never:
- Use marketing language
- Make assumptions about user skill level
- Provide incomplete examples
"""
```

Benefits: Consistent behavior without repeating instructions in every prompt.
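
In the common messages-list format, the system prompt occupies the first message and the actual API call is made once per user turn. A sketch (OpenAI-style shown; the model name and the commented-out call are illustrative):

```python
system_prompt = (
    "You are a technical writing assistant specializing in API documentation."
)

# The system message comes first and applies to the whole conversation.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Document the /users endpoint."},
]

# response = client.chat.completions.create(model="gpt-4o", messages=messages)
```

Other providers use slightly different shapes (e.g., a separate `system` parameter), but the principle is the same: set the instructions once, outside the per-message prompts.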

## Debugging Prompts

If you're not getting good results:

**1. Check Clarity**

  • Is there any ambiguity?
  • Have you defined all terms?
  • Are instructions clear?

**2. Add Context**

  • What background info is missing?
  • What assumptions are you making?
  • What does the LLM need to know?

**3. Review Format**

  • Have you specified output format?
  • Is the format realistic?
  • Have you shown an example?

**4. Test Systematically**

Original prompt: [X]
Problem: [What went wrong]
Hypothesis: [Why]
Modified prompt: [X with change]
Result: [Better/worse]

**5. Simplify**

  • Try removing parts of prompt
  • Which parts are necessary?
  • Are you asking too much at once?

## Prompt Libraries

Build your own library of tested, effective prompts.

### Organization

prompts/
├── analysis/
│   ├── data-analysis.txt
│   ├── competitive-analysis.txt
│   └── swot-analysis.txt
├── writing/
│   ├── email-templates.txt
│   ├── blog-posts.txt
│   └── social-media.txt
├── code/
│   ├── code-review.txt
│   ├── debugging.txt
│   └── optimization.txt
└── learning/
    ├── concept-explanation.txt
    ├── tutorial-creation.txt
    └── quiz-generation.txt

### Template Format

PROMPT NAME: [Descriptive name]
CATEGORY: [Category]
USE CASE: [When to use this]
TESTED WITH: [Which models]
SUCCESS RATE: [Your experience]

TEMPLATE:
[The actual prompt with [PLACEHOLDERS]]

EXAMPLE:
[Filled-in example]

NOTES:
[Tips, variations, things to watch for]
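
Templates with `[PLACEHOLDERS]` can be filled mechanically, with a check that nothing was left blank. A sketch (the placeholder convention here matches the `[UPPERCASE]` style used in the template format above):

```python
import re

def fill_template(template: str, **values: str) -> str:
    """Fill [PLACEHOLDERS] in a library prompt; fail loudly on leftovers."""
    for key, value in values.items():
        template = template.replace(f"[{key.upper()}]", value)
    leftover = re.findall(r"\[[A-Z_]+\]", template)
    if leftover:
        raise ValueError(f"Unfilled placeholders: {leftover}")
    return template
```

Failing on leftover placeholders catches the most common library mistake: reusing a template while forgetting one of its slots.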

## A/B Testing Prompts

For important use cases, test variations:

Variation A: [Prompt with approach 1]
Variation B: [Prompt with approach 2]

Test each 5 times:
- Which gives better results?
- Which is more consistent?
- Which is faster/cheaper?

Winner: [A or B] because [reason]

## Summary

Key Advanced Techniques:

  1. Chain-of-Thought: "Let's think step by step"
  2. Tree of Thought: Explore multiple approaches
  3. Self-Consistency: Multiple attempts, pick best
  4. Prompt Chaining: Break complex tasks into steps
  5. ReAct: Combine reasoning with actions
  6. Constrained Generation: Force specific formats
  7. Meta-Prompting: Ask LLM to improve your prompt
  8. Negative Prompting: Specify what to avoid
  9. Constitutional AI: Give principles to follow
  10. Format-First: Start with desired output structure

Best Practices:

  • Use templates for repeatability
  • Build a prompt library
  • Test and iterate systematically
  • Understand when to use advanced techniques vs. simple prompts
  • Balance complexity with maintainability

Next Steps:

  • Apply one advanced technique to a real task today
  • Create your first prompt template
  • Start a prompt library
  • Move to Chapter 04 to learn about tools and platforms

## Further Reading