Product Updates

CoRecruit Team
Last updated: March 2026 · Read time: 10 mins
You’ve learned the basics (Part 1) and practiced one‑shot and few‑shot prompting (Part 2). Now it’s time for the part that turns good prompts into repeatable systems.
This post introduces TCR‑EI, a compact, practical framework you can use inside CoRecruit to write prompts that are reliable, auditable, and easy to share across your team. Think of it as a short checklist that fits inside every prompt and dramatically improves output quality.
TCR‑EI stands for Task, Context, Reference, Evaluate, Iterate.
This sequence forces you to be explicit at every step: specify the job, set boundaries, give the AI the right data, judge its work, then tweak. It moves you away from vague prompts and toward predictable, high‑quality outputs.
Next, we’ll walk through each step of the TCR‑EI framework with an example prompt. The structure for TCR-EI looks something like this:
Task: Write a client update summary after a recent executive search interview.
Context: This is for a CFO search; the client cares about strategic finance leadership, M&A experience, and stakeholder management. Keep tone professional and concise.
Reference: Use the participant’s answers in the call transcript and the candidate’s ATS profile as reference.
Evaluate: Ensure the summary is 120–180 words, includes 2–3 key impact bullets, and ends with clear next steps.
Iterate: If the summary exceeds 180 words, shorten it and convert any extra detail into a single, optional bullet labeled 'Additional notes.'
Produce the client update now.
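If your team shares prompts programmatically rather than copy‑pasting them, the five sections above can be assembled by a small helper. This is a hypothetical sketch for illustration only; the function name and fields are not a CoRecruit API:

```python
# Hypothetical helper that assembles a TCR-EI prompt from its five parts.
# The section labels mirror the framework; nothing here is CoRecruit-specific.

def build_tcrei_prompt(task, context, reference, evaluate, iterate):
    sections = [
        ("Task", task),
        ("Context", context),
        ("Reference", reference),
        ("Evaluate", evaluate),
        ("Iterate", iterate),
    ]
    body = "\n".join(f"{label}: {text}" for label, text in sections)
    # Close with an explicit instruction so the model produces the deliverable.
    return body + "\nProduce the client update now."

prompt = build_tcrei_prompt(
    task="Write a client update summary after a recent executive search interview.",
    context="CFO search; the client cares about strategic finance leadership.",
    reference="Use the call transcript and the candidate's ATS profile.",
    evaluate="120-180 words, 2-3 impact bullets, clear next steps.",
    iterate="If over 180 words, shorten and move extras to an 'Additional notes' bullet.",
)
```

Keeping the five sections in a fixed order makes prompts easy to diff and review when a teammate proposes a change to the shared template.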
Task: Vague prompts produce vague notes. By naming the deliverable (client update, submittal, ATS field entry), you change the AI’s entire output shape.
Context: This is where you encode priorities: the role level (junior vs. executive), what the client cares about, and whether the write-up is internal or external. Tone and content differ wildly depending on context. CoRecruit follows what you tell it.
Reference: CoRecruit has multiple data sources: the transcript, your meeting notes, the candidate’s ATS record, or your firm’s template. Explicitly point CoRecruit to the source to avoid hallucinations and missing items.
Evaluate: Don’t leave quality control to chance. Add clear pass/fail rules (word counts, required headings, bullet counts, tone). Evaluations let CoRecruit self‑audit and make it easier for you to scan results quickly.
Iterate: The first result is rarely final. Provide a deterministic instruction for how to change the output (shorten, re‑tone, or shift emphasis). This is faster than creating a new prompt from scratch.
Before you push notes to an ATS or send a client update, run these quick checks (you can make CoRecruit run them for you):
1. Required fields: Does the output fill the ATS fields you need (title, company, years of experience, notice period)?
2. Tone & format: Is the style client‑facing or internal? Are there 2–3 impact bullets at the top? Does it respect the word limits you set?
3. Accuracy: Cross‑reference any factual claims (companies, dates, numbers) with the participant’s ATS profile.
4. Brevity: If the summary or note is over length, ask CoRecruit to compress and keep the most relevant points.
5. Actionability: Does it finish with clear next steps (interview scheduling, references requested, or declined)?
If any check fails, call the Iterate step with a deterministic instruction: shorten to X words, move items into bullets, or re‑tone to be more conversational or more formal.
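Because these checks are deterministic, they can also be scripted before anything is pushed to an ATS. A minimal sketch of the brevity and actionability checks, using the example limits from this post (the function and thresholds are illustrative, not CoRecruit defaults):

```python
def check_summary(text, min_words=120, max_words=180):
    """Run two of the quick checks above: brevity and actionability.

    Thresholds are the example limits from this post, not product defaults.
    Returns a dict of pass/fail results so a failed check can trigger
    an Iterate instruction (e.g. "shorten to 180 words").
    """
    word_count = len(text.split())
    return {
        "length_ok": min_words <= word_count <= max_words,
        "has_next_steps": "next steps" in text.lower(),
    }

# Example: a 134-word draft that ends with next steps passes both checks.
draft = "word " * 130 + "Next steps: schedule the second interview."
results = check_summary(draft)
```

A failed `length_ok` maps directly onto the deterministic Iterate instruction above: shorten to X words and move extra detail into a single bullet.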
TCR‑EI is a small change that compounds. Be explicit about the task, give the AI enough context, and you turn prompts from one‑off experiments into reproducible results across calls, roles, and recruiters.