
Product Updates

Mastering AI prompts in CoRecruit - the TCR‑EI recipe for recruiters [Part 3]

CoRecruit Team

Last updated: March 2026

Read time: 10 mins


You’ve learned the basics (Part 1) and practiced one‑shot and few‑shot prompting (Part 2). Now it’s time for the part that turns good prompts into repeatable systems.

This post introduces the TCR‑EI framework: a compact, practical recipe you can use inside CoRecruit to write prompts that are reliable, auditable, and easy to share across your team.

Think of it as a short checklist that fits into every prompt and dramatically improves output quality.

What is the TCR‑EI framework?

TCR‑EI stands for:

  • Task: Tell the AI exactly what you want it to do.
  • Context: Add the background that matters for the result.
  • Reference: Point to the data source CoRecruit should use.
  • Evaluate: Check the output against specific criteria.
  • Iterate: Refine the prompt or output until it’s right.

This sequence forces you to be explicit at every step: specify the job, set boundaries, give the AI the right data, judge its work, then tweak. 

Plus, it moves you away from vague prompts and into predictable, high‑quality outputs.

TCR‑EI explained, step by step (with recruiter examples)

Next, we’ll walk through each step of the TCR‑EI framework with an example prompt. The structure for TCR-EI looks something like this: 

Step | What it means | Example phrase
Task | Tell the AI what you want it to do | "Your task is to create a post‑interview summary."
Context | Add background info that affects tone and priority | "This summary is for an executive search for a CFO role; highlight strategic leadership and M&A experience."
Reference | Point to the data source CoRecruit should use | "Use the participant’s information from the call transcript and the candidate profile in the ATS."
Evaluate | State the criteria the output must meet | "Does it align with the tone and structure of our submittals? Is it under 200 words?"
Iterate | Tell CoRecruit how to adjust if it misses the mark | "If it’s too long, make it 25% shorter and move key impact points to bullet form."

Example prompt 

Your task: write a client update summary after a recent executive search interview.

Context: This is for a CFO search; the client cares about strategic finance leadership, M&A experience, and stakeholder management. Keep tone professional and concise.

Reference: Use the participant’s answers in the call transcript and the candidate’s ATS profile as reference.

Evaluate: Ensure the summary is 120–180 words, includes 2–3 key impact bullets, and ends with clear next steps.

Iterate: If the summary exceeds 180 words, shorten it and convert any extra detail into a single, optional bullet labeled 'Additional notes.'

Produce the client update now.
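If you build prompts like this often, it can help to see the five parts as separate pieces joined in a fixed order. The sketch below is plain string handling in Python, not a CoRecruit API; the function name and section labels are illustrative, mirroring the example prompt above.

```python
def build_tcr_ei_prompt(task, context, reference, evaluate, iterate):
    """Join the five TCR-EI parts into one labeled prompt string."""
    sections = [
        ("Your task", task),
        ("Context", context),
        ("Reference", reference),
        ("Evaluate", evaluate),
        ("Iterate", iterate),
    ]
    body = "\n\n".join(f"{label}: {text}" for label, text in sections)
    # Close with the actual instruction, as in the example prompt above.
    return body + "\n\nProduce the client update now."

prompt = build_tcr_ei_prompt(
    task="write a client update summary after a recent executive search interview.",
    context="This is for a CFO search; keep tone professional and concise.",
    reference="Use the call transcript and the candidate's ATS profile.",
    evaluate="Ensure the summary is 120-180 words with 2-3 impact bullets.",
    iterate="If over 180 words, shorten and move extra detail into one bullet.",
)
print(prompt)
```

Keeping the parts separate like this makes it easy to swap out a single section (say, Context for a different role) without touching the rest of the prompt.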

How each TCR‑EI step helps you

Task: Vague prompts produce vague notes. By naming the deliverable (client update, submittal, ATS field entry), you change the AI’s entire output shape.

Context: This is where you encode priorities: the role level (junior vs executive), what the client cares about, or whether this write‑up is internal or external. Tone and content differ wildly depending on context. CoRecruit follows what you tell it.

Reference: CoRecruit has multiple data sources: the transcript, your meeting notes, the candidate’s ATS record, or your firm’s template. Explicitly point CoRecruit to the source to avoid hallucinations and missing items.

Evaluate: Don’t leave quality control to chance. Add clear pass/fail rules (word counts, required headings, bullet counts, tone). Evaluations let CoRecruit self‑audit and make it easier for you to scan results quickly.

Iterate: The first result is rarely final. Provide a deterministic instruction for how to change the output (shorten, re‑tone, or shift emphasis). This is faster than creating a new prompt from scratch.

More tips & variations (a bit advanced!)

  • Template variables: Turn the framework into variables inside templates. Example: {{ROLE}}, {{PRIORITY}}, {{SOURCE}} — then the template becomes a reusable asset for all similar calls.
  • Use evaluation checklists: Instead of a single evaluate sentence, supply a short checklist the AI should confirm after generating. Example: Confirm: 1) Includes 2 impact bullets, 2) ATS tags populated, 3) No personal opinions.
  • Mix one‑shot + few‑shot inside the framework: For new or tricky workflows, provide one high‑quality example output under the Reference or Context sections. This teaches structure and style while still keeping the framework intact.
  • Guardrails for compliance: If some clients require exclusions (e.g., no salary, no health info, or GDPR notes), add those as part of Context or Evaluate so CoRecruit never includes them.
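The template‑variables tip above can be sketched with a few lines of plain Python. The `{{ROLE}}`‑style placeholders follow the example in the bullet; the template text and variable names are illustrative, not a built‑in CoRecruit feature.

```python
# A reusable TCR-EI template with {{NAME}}-style placeholders,
# as in the "Template variables" tip. Contents are illustrative.
TEMPLATE = (
    "Your task: write a client update summary for a {{ROLE}} search.\n"
    "Context: The client cares most about {{PRIORITY}}.\n"
    "Reference: Use {{SOURCE}} as the data source.\n"
    "Evaluate: Keep it under 180 words with 2-3 impact bullets.\n"
    "Iterate: If too long, shorten by 25% and keep the bullets."
)

def fill_template(template, variables):
    """Replace each {{NAME}} placeholder with its value."""
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", value)
    return template

prompt = fill_template(TEMPLATE, {
    "ROLE": "CFO",
    "PRIORITY": "strategic finance leadership and M&A experience",
    "SOURCE": "the call transcript and the candidate's ATS profile",
})
print(prompt)
```

One template like this can then serve every similar call: only the three variables change between a CFO search and, say, a VP of Sales search.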

Practical checks to evaluate & iterate for better results 

Before you push notes to an ATS or send a client update, run these quick checks (you can make CoRecruit run them for you):

1. Required fields: Does the output fill the ATS fields you need (title, company, years of experience, notice period)?

2. Tone & format: Is the style client‑facing or internal? Are there 2–3 impact bullets at the top? Does it use the word limits you set?

3. Accuracy: Cross‑reference any factual claims (companies, dates, numbers) with the participant’s ATS profile.

4. Brevity: If the summary or note is over length, ask CoRecruit to compress and keep the most relevant points.

5. Actionability: Does it finish with clear next steps (interview scheduling, references requested, or declined)?

If any check fails, call the Iterate step with a deterministic instruction: shorten to X words, move items into bullets, or re‑tone to be more conversational or more formal.
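The quick checks above can also run as simple pass/fail code before anything reaches the ATS. This is a hedged sketch: the field names, word limit, and check logic are assumptions for illustration, so adapt them to your own ATS schema and house rules.

```python
# Illustrative pass/fail checks for a generated summary.
# Field names and the word limit are assumptions, not a CoRecruit schema.
REQUIRED_FIELDS = {"title", "company", "years_of_experience", "notice_period"}

def run_checks(output_text, ats_fields, max_words=180):
    """Return a dict of check name -> True/False for a generated summary."""
    words = output_text.split()
    return {
        # 1. Required fields: are all expected ATS fields present?
        "required_fields": REQUIRED_FIELDS.issubset(ats_fields),
        # 4. Brevity: is the summary within the word limit you set?
        "brevity": len(words) <= max_words,
        # 5. Actionability: does it mention clear next steps?
        "actionability": "next steps" in output_text.lower(),
    }

results = run_checks(
    "Summary of the interview... Next steps: schedule a second round.",
    ats_fields={"title", "company", "years_of_experience", "notice_period"},
)
failed = [name for name, ok in results.items() if not ok]
print(failed)  # any failing checks feed back into the Iterate step
```

Checks 2 (tone) and 3 (accuracy) are judgment calls that are harder to automate; those are the ones worth asking CoRecruit to self‑audit against, or reviewing by eye.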

Final notes 

TCR‑EI is a small change that compounds. Being explicit about the task and giving the AI enough context turns prompts from one‑off experiments into reproducible results across calls, roles, and recruiters.


Frequently asked questions

Why should I use TCR‑EI for my recruiting prompts?

Using TCR‑EI makes your prompts explicit, reduces vague outputs, and produces reliable, ATS-ready results that reflect your recruiting style and priorities.

Can I reuse TCR‑EI prompts across different roles and clients?

Yes! Once a prompt is structured using TCR‑EI, you can adjust the Context and Reference sections to suit different roles or client needs without rewriting the whole prompt.

How do I evaluate if a CoRecruit prompt is working well?

Check that it fills the required ATS fields, follows the correct tone and format, is accurate, concise, and ends with actionable next steps. If it doesn’t, use the Iterate step to refine it.