From Prompt Engineering to Context Engineering: Main Design Patterns
Earlier, we relied on clever prompt wording; now, structured and complete context matters more than magic phrasing. The coming year will be the year of context engineering, which expands beyond prompt engineering. The two complement each other: prompt engineering shapes how we ask, while context engineering shapes what the model knows, sees, and can do.
To keep things clear, here are the main techniques and design patterns in both areas, with useful resources for further exploration; a short code sketch for each pattern follows the list:
1. Zero-shot prompting – giving a single instruction without examples. Relies entirely on pretrained knowledge.
2. Few-shot prompting – adding input–output examples to encourage the model to produce the desired behavior. ▶ https://arxiv.org/abs/2005.14165
3. Role prompting – assigning a persona or role (e.g. "You are a senior researcher," "Say it as a specialist in healthcare") to shape style and reasoning. ▶ https://arxiv.org/abs/2403.02756
4. Instruction-based prompting – explicit constraints or guidance, like "think step by step," "use bullet points," "answer in 10 words."
5. Chain-of-Thought (CoT) – encouraging intermediate reasoning traces to improve multi-step reasoning. It can be explicit ("let's think step by step") or implicit (demonstrated via examples). ▶ https://arxiv.org/abs/2201.11903
6. Tree-of-Thought (ToT) – the model explores multiple reasoning paths in parallel, like branches of a tree, instead of following a single chain of thought. ▶ https://arxiv.org/abs/2305.10601
7. Reasoning–action prompting (ReAct-style) – prompting the model to interleave reasoning steps with explicit actions and observations. It defines action slots and lets the model generate a sequence of "Thought → Action → Observation" steps. ▶ https://arxiv.org/abs/2210.03629
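The sketches below illustrate each pattern with plain Python strings and, where needed, hypothetical stub functions standing in for a model client; adapt them to whatever API you use. For zero-shot vs. few-shot (items 1 and 2), the only difference is whether the prompt carries demonstration pairs:

```python
# Zero-shot: a bare instruction, relying entirely on pretrained knowledge.
zero_shot = (
    "Classify the sentiment of this review as positive or negative:\n"
    "'The battery died after two days.'"
)

# Few-shot: prepend input-output examples that demonstrate the desired behavior.
few_shot = """Classify the sentiment of each review as positive or negative.

Review: 'Arrived quickly and works perfectly.'
Sentiment: positive

Review: 'The screen cracked on day one.'
Sentiment: negative

Review: 'The battery died after two days.'
Sentiment:"""
```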
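For role prompting (item 3), a persona in the system message shapes tone and reasoning. The dict layout below follows the common chat-messages convention and may differ in your client:

```python
# Role prompting: assign a persona via the system message.
messages = [
    {"role": "system",
     "content": "You are a senior researcher specializing in healthcare. "
                "Answer precisely and note the limits of the evidence."},
    {"role": "user",
     "content": "Should adults with mild hypertension start medication immediately?"},
]
```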
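For instruction-based prompting (item 4), constraints on format and length go straight into the prompt (the sample report text is made up for illustration):

```python
# Instruction-based prompting: explicit constraints on structure and length.
report_text = "Server CPU spiked to 98% at 02:14; autoscaling lagged by 6 minutes."
prompt = (
    "Summarize the following incident report.\n"
    "Constraints:\n"
    "- Use bullet points.\n"
    "- At most 10 words per bullet.\n"
    "- End with one recommended action.\n\n"
    f"Report: {report_text}"
)
```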
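For Chain-of-Thought (item 5), the explicit variant just adds a reasoning cue; the implicit variant would instead show worked examples, few-shot style:

```python
# Explicit CoT: a cue that elicits intermediate reasoning before the answer.
question = "A train leaves at 9:40 and the trip takes 2h 35m. When does it arrive?"
cot_prompt = (
    f"{question}\n"
    "Let's think step by step, then give the final answer on its own line."
)
```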
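For Tree-of-Thought (item 6), the pattern boils down to a small search loop: propose several candidate next steps per branch, score them, and keep only the most promising ones. The `propose` and `evaluate` helpers below are placeholder stubs standing in for model calls:

```python
# Tree-of-Thought sketch: breadth-limited search over reasoning branches.

def propose(state: str, k: int = 3) -> list[str]:
    """Ask the model for k candidate next reasoning steps (stubbed here)."""
    return [f"{state} | step option {i}" for i in range(k)]

def evaluate(state: str) -> float:
    """Ask the model to score how promising a partial solution is (stubbed)."""
    return float(len(state) % 7)  # placeholder heuristic, not a real scorer

def tree_of_thought(problem: str, depth: int = 2, beam: int = 2) -> str:
    frontier = [problem]
    for _ in range(depth):
        # Expand every branch, then keep only the `beam` best-scoring ones.
        candidates = [c for s in frontier for c in propose(s)]
        frontier = sorted(candidates, key=evaluate, reverse=True)[:beam]
    return frontier[0]  # best branch found within the depth budget
```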
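For ReAct-style prompting (item 7), the prompt defines the Thought / Action / Observation format and a controller loop executes the actions. Both `llm` and the single `search` tool below are hypothetical stubs, not a real API:

```python
import re

def llm(prompt: str) -> str:
    raise NotImplementedError  # plug in any completion client

def search(query: str) -> str:
    raise NotImplementedError  # plug in a real tool

def react(question: str, max_steps: int = 5) -> str:
    transcript = (
        "Answer the question. Use exactly this format:\n"
        "Thought: your reasoning\n"
        "Action: search[<query>] or finish[<answer>]\n\n"
        f"Question: {question}\n"
    )
    for _ in range(max_steps):
        step = llm(transcript)  # model emits a Thought + Action pair
        transcript += step + "\n"
        if m := re.search(r"finish\[(.*?)\]", step):
            return m.group(1)  # model decided it is done
        if m := re.search(r"search\[(.*?)\]", step):
            # Execute the tool and feed the result back as an Observation.
            transcript += f"Observation: {search(m.group(1))}\n"
    return "no answer within the step budget"
```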