This section walks you through three additional high-impact patterns:
- Chain-of-thought prompts
- Few-shot learning prompts
- Meta-cognitive prompts
The idea behind these techniques is to unlock deeper reasoning, precision, and adaptability from any LLM. Let's walk through them:
Chain-of-thought (CoT) prompts
This pattern explicitly asks the model to "think out loud", breaking complex reasoning into logical, step-by-step segments.
Note that you can do this manually, or you can use the "thinking" models in various LLM products to get much the same effect.
Why it works:
- Mirrors how humans decompose problems.
- Encourages the model to surface intermediate steps, reducing leaps and hallucinations.
- Boosts accuracy on multi-step tasks (math, code logic, strategy).
How to use it:
"Let's think through this step by step:
- First, identify…
- Next, determine…
- Then, calculate…
- Finally, conclude…"
Example:
Prompt: "You're a data analyst. I have a dataset showing monthly sales. Let's think step-by-step to identify the three biggest seasonal trends and explain why they occur."
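If you assemble prompts in code, the same scaffold can be applied to any task with a small helper. The sketch below is plain Python with no particular LLM SDK assumed; `build_cot_prompt` and its arguments are illustrative names, and the resulting string would be sent to whichever model API you already use.

```python
def build_cot_prompt(role: str, task: str, steps: list[str]) -> str:
    """Wrap a task in an explicit step-by-step reasoning scaffold."""
    lines = [f"You're a {role}. {task}", "Let's think through this step by step:"]
    lines += [f"- {step}" for step in steps]
    lines.append("Finally, state your conclusion clearly.")
    return "\n".join(lines)

# Illustrative usage mirroring the data-analyst example above.
prompt = build_cot_prompt(
    role="data analyst",
    task="I have a dataset showing monthly sales.",
    steps=[
        "First, identify the three biggest seasonal trends.",
        "Next, determine which months drive each trend.",
        "Then, explain why each trend is likely to occur.",
    ],
)
print(prompt)  # pass this string to whichever LLM API you use
```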
Few-shot learning prompts
This prompting technique provides a handful of input–output examples within the prompt so the model infers the desired pattern.
Why it works:
- Anchors the model in your taskâs format and style.
- Requires no fine-tuning and offers instant customisation.
- Particularly effective for translation, classification, and format transformation.
How to use it:
"Convert these product descriptions into Twitter-style blurbs:
Example 1:
Input: "A 12oz stainless steel travel mug…"
Output: "Stay caffeinated on the go ☕✨ Leak-proof steel mug that fits cup holders. #TravelEssentials"
Example 2:
Input: "Noise-cancelling wireless earbuds…"
Output: "Silence the world, hear the beat 🎧🎶 30-hour battery, crystal sound. #MusicLovers"
Now convert: "Ergonomic office chair with lumbar support…""
Tips:
- Use 2â5 high-quality, diverse examples.
- Match examples to your actual data in tone, length, and structure.
- If performance lags, swap or augment examples.
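In code, a few-shot prompt is simply the instruction, the example pairs, and the new input stitched together. The sketch below is a minimal plain-Python illustration; `build_few_shot_prompt` is a hypothetical helper name, and the example pairs echo the ones above.

```python
def build_few_shot_prompt(instruction: str,
                          examples: list[tuple[str, str]],
                          new_input: str) -> str:
    """Stitch the instruction, labelled example pairs, and the new input into one prompt."""
    parts = [instruction]
    for i, (inp, out) in enumerate(examples, start=1):
        parts.append(f"Example {i}:\nInput: {inp}\nOutput: {out}")
    parts.append(f"Now convert:\n{new_input}")
    return "\n\n".join(parts)

# Illustrative usage with the product-blurb examples from above.
prompt = build_few_shot_prompt(
    instruction="Convert these product descriptions into Twitter-style blurbs:",
    examples=[
        ("A 12oz stainless steel travel mug...",
         "Stay caffeinated on the go. Leak-proof steel mug that fits cup holders. #TravelEssentials"),
        ("Noise-cancelling wireless earbuds...",
         "Silence the world, hear the beat. 30-hour battery, crystal sound. #MusicLovers"),
    ],
    new_input="Ergonomic office chair with lumbar support...",
)
print(prompt)
```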
Meta-cognitive prompts
This technique creates prompts that ask the model to reflect on how it will solve a problem before providing the answer; in other words, "thinking about its own thinking."
Why it works:
- Checks the modelâs grasp of the task before execution.
- Surfaces hidden assumptions and clarifies ambiguous requests.
- Can improve both correctness and creativity.
How to use it:
"Before answering, please outline:
1. The key information you need.
2. Potential pitfalls or ambiguous areas.
3. The steps you'll take to ensure accuracy.
Then, provide the final answer."
Example:
Prompt: "You're an AI tutor. First, list what you need to teach someone the Pythagorean theorem clearly (e.g., prerequisites, examples). Then deliver the lesson in three paragraphs."
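The reflection checklist is easy to reuse once it is factored out of the task itself. The sketch below is a minimal plain-Python illustration; `build_metacognitive_prompt` and the default checklist are assumptions you would tailor to your own tasks.

```python
# Illustrative default checklist; swap in items that suit your domain.
REFLECTION_CHECKLIST = [
    "The key information you need.",
    "Potential pitfalls or ambiguous areas.",
    "The steps you'll take to ensure accuracy.",
]

def build_metacognitive_prompt(task: str, checklist: list[str] = REFLECTION_CHECKLIST) -> str:
    """Prepend a 'think about your own thinking' checklist to any task."""
    numbered = "\n".join(f"{i}. {item}" for i, item in enumerate(checklist, start=1))
    return (
        "Before answering, please outline:\n"
        f"{numbered}\n"
        "Then, provide the final answer.\n\n"
        f"Task: {task}"
    )

print(build_metacognitive_prompt(
    "You're an AI tutor. Teach someone the Pythagorean theorem in three paragraphs."
))
```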
Combining patterns is even more powerful
For maximum effect, layer them:
"You're a security analyst. Let's think step by step about how to secure a web app (CoT). Here are two sample security checklists (Few-Shot). Next, reflect on any assumptions you're making and list them (Meta-Cognitive). Finally, generate a consolidated, prioritized security plan."
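In code, layering the patterns amounts to concatenating the three layers into one prompt. The sketch below is a self-contained plain-Python illustration of that idea; the checklist snippets are placeholders, not real security guidance.

```python
# Layer CoT, few-shot, and meta-cognitive instructions into a single prompt.
cot_layer = (
    "You're a security analyst. Let's think step by step about how to secure a web app:\n"
    "- First, map the attack surface.\n"
    "- Next, rank the risks.\n"
    "- Then, propose mitigations for each risk."
)

few_shot_layer = (
    "Match the format of these sample security checklists:\n"
    "Example 1: 1) Enforce HTTPS  2) Validate all inputs  3) Rotate secrets\n"
    "Example 2: 1) Least-privilege access  2) Scan dependencies  3) Enable audit logging"
)

meta_layer = (
    "Before the final answer, reflect on any assumptions you're making and list them.\n"
    "Finally, generate a consolidated, prioritized security plan."
)

combined_prompt = "\n\n".join([cot_layer, few_shot_layer, meta_layer])
print(combined_prompt)  # send to your LLM of choice
```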
Conclusion
- Chain-of-thought unlocks deep reasoning.
- Few-shot enforces format and tone.
- Meta-cognitive pre-validates the approach and reduces errors.
Master these patterns and you'll transform prompts from simple queries into powerful, reliable AI workflows.