Advanced Prompting Techniques
Unlock the full potential of AI for web development with advanced prompting methods that produce cleaner code and more accurate results
Once you’ve mastered the basics, it’s time to unlock the full potential of prompting with more advanced strategies.
This guide covers:
- Zero-shot vs. few-shot prompting
- Techniques to reduce AI hallucinations
- Step-by-step reasoning (Chain of Thought)
- Grounding the AI with real data
- Instructing the model to be honest about uncertainty
Zero-Shot vs. Few-Shot Prompting
Zero-Shot Prompting
This is the default method: you give the model a single instruction, with no examples, and rely on what it learned during training.
Example:
Translate this sentence to French: “The app is running smoothly.”
Use this when:
- The task is common or clearly worded
- You don’t need a specific format or style
- You want quick results with minimal input
Few-Shot Prompting
In this technique, you provide examples inside the prompt to show the AI what you expect.
Example:
Correct grammar:
Input: "the code not working good" → Output: "The code is not working well."
Input: "API give error in login" → Output: "The API gives an error during login."
Now Input: "user not found in database" → Output:
Use this when:
- You want consistent formatting or tone
- The task is uncommon or ambiguous
- You want more control over the output
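If you build few-shot prompts often, it helps to assemble them programmatically from example pairs. Here is a minimal sketch; the `Example` shape and the arrow formatting are assumptions that mirror the grammar-correction example above, not a standard API.

```typescript
// Sketch of a helper that assembles a few-shot prompt from example pairs.
interface Example {
  input: string;
  output: string;
}

function buildFewShotPrompt(task: string, examples: Example[], query: string): string {
  // Render each example on its own line in the same Input/Output style.
  const shots = examples
    .map((ex) => `Input: "${ex.input}" → Output: "${ex.output}"`)
    .join("\n");
  return `${task}\n${shots}\nNow Input: "${query}" → Output:`;
}

const prompt = buildFewShotPrompt(
  "Correct grammar:",
  [
    { input: "the code not working good", output: "The code is not working well." },
    { input: "API give error in login", output: "The API gives an error during login." },
  ],
  "user not found in database"
);
console.log(prompt);
```

Keeping examples as data also makes it easy to reuse the same set across prompts or swap examples in and out.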
Reducing Hallucinations
An AI “hallucination” is when the model confidently invents something: code, facts, or functions that don’t exist.
Here’s how to reduce them:
1. Provide Grounding Data
Add context directly in your prompt:
Here’s the API response format:
{ "user": { "id": 123, "email": "[email protected]" } }
Now write a function to extract the user email.
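For the grounded prompt above, one plausible answer from the model might look like the sketch below. The `ApiResponse` type is inferred from the sample JSON in the prompt; it is an assumption, not a published schema.

```typescript
// Sketch: extract the user's email from the documented response shape.
interface ApiResponse {
  user: { id: number; email: string };
}

function extractUserEmail(response: ApiResponse): string {
  return response.user.email;
}

const sample: ApiResponse = { user: { id: 123, email: "[email protected]" } };
console.log(extractUserEmail(sample)); // "[email protected]"
```

Because the response shape was in the prompt, the model does not have to guess field names, which is exactly where hallucinations tend to creep in.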
Or use your app’s Knowledge Base as persistent context.
2. Add In-Prompt Constraints
Be clear about what not to do:
Do not use any external libraries. Only use native JavaScript.
3. Ask for Step-by-Step Reasoning
Chain-of-thought prompting helps the AI slow down and reason through the task.
Example:
Before writing the code, explain your approach step by step.
This is especially useful when debugging or solving logic-heavy tasks.
4. Instruct Honesty
You can explicitly tell the model not to guess:
If you're unsure about any part, say so instead of making it up.
This can lead to more transparent and trustworthy outputs.
Verification Prompting
After receiving output, ask the AI to verify it:
Double-check your code. Does it meet all the requirements and constraints I provided?
List any possible issues.
You’ll often catch mistakes before running the code.
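This verification pass can also be scripted: feed the model's first answer back alongside the original requirements. A minimal sketch, where the requirement and draft strings are placeholders for whatever your own workflow produces:

```typescript
// Sketch: compose a verification prompt from the original requirements
// and the model's first answer, ready to send as a follow-up message.
function buildVerificationPrompt(requirements: string, draft: string): string {
  return [
    "Here are the original requirements:",
    requirements,
    "Here is the code you produced:",
    draft,
    "Double-check your code. Does it meet all the requirements and constraints above? List any possible issues.",
  ].join("\n\n");
}

const check = buildVerificationPrompt(
  "Only use native JavaScript. No external libraries.",
  "function add(a, b) { return a + b; }"
);
console.log(check);
```

Keeping the requirements verbatim in the follow-up matters: the model verifies against what you actually asked for, not its memory of it.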
Formatting & Output Control
For better control over the AI's response, specify:
- Output format (JSON, Markdown, code block, plain text)
- Language or comment style
- File names or structure
Example:
Write the component in TypeScript and wrap the code in a Markdown code block.
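When you request a specific format, it also pays to parse the reply defensively on your side. As a sketch, assuming the model wraps JSON in a Markdown code fence (common, but not guaranteed, hence the fallback):

```typescript
// Sketch: strip an optional Markdown code fence before parsing JSON.
// The fence string is built indirectly so this sample stays readable.
const FENCE = "`".repeat(3);

function extractJsonBlock(reply: string): unknown {
  const fencePattern = new RegExp(FENCE + "(?:json)?\\s*([\\s\\S]*?)" + FENCE);
  const match = reply.match(fencePattern);
  // Fall back to parsing the raw reply if no fence is found.
  const body = match ? match[1] : reply;
  return JSON.parse(body.trim());
}

const reply = FENCE + 'json\n{ "name": "Button", "language": "TypeScript" }\n' + FENCE;
console.log(extractJsonBlock(reply));
```

Asking for a strict format and then parsing it leniently gives you the best of both: structured output when the model complies, and a graceful fallback when it doesn't.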
Use the Right Mode & Model
Use Chat Only Mode for complex reasoning or exploration.
Use Default Mode for direct edits and output.
If your builder supports multiple models (e.g., GPT-3.5 vs GPT-4), choose accordingly:
- GPT-4: better for long prompts and complex logic
- GPT-3.5: faster, cheaper, great for quick iterations
When to Apply These Techniques
Use advanced techniques when:
- The output needs to be precise, structured, or non-trivial
- You’re working with external APIs, auth flows, or sensitive logic
- The model previously hallucinated, or results were unclear
Up Next
You’ve now got the full toolset for expert prompting.
Let’s put it all into practice with real-world examples and reusable prompt templates.