AI-Enabled Scenarios

Updated by Shayna Pittman

🎉 NEW: AI-Enabled Scenarios Available Today (Beta) 🎉

We've heard your feedback that effective use of AI is an important part of modern work.

So we've calibrated the first set of scenarios designed with LLM/AI use in mind.

Design Principles

  • Bring your own LLM: Candidates are encouraged to use the LLM of their choice. Different models benefit from different prompting and interaction strategies, so this gives the most realistic evaluation of real-world skills.
  • Human judgement required: Roughly 25% of candidates are unable to use even basic prompting techniques to get value from LLMs, but these scenarios are calibrated so that candidates must go beyond basic prompting to do well. Technical skills are still necessary!
  • Mix of human-only scenarios recommended: We're seeing the best results with assessments that mix 1-2 AI-Enabled scenarios with 1-2 human-only scenarios. Unless your post-Woven interview loop is 100% AI-Enabled, going 100% AI-Enabled is not yet recommended.
  • Beta (September 2025): We have customers getting real value from these scenarios today, but scoring/calibration and other details might shift.

Compared to scenarios where AI usage is not allowed, AI-Enabled scenarios are typically some combination of:

  1. More complex
  2. More tightly timeboxed
  3. Larger in scope

This mimics the real-world benefits of AI.

Current AI-Enabled scenarios include:

  • Real-World Programming: Business Data Structures & Algorithms - Prorating Subscriptions
  • Frontend UI Frameworks - Change Password Alert (Angular, Java, Vue)
  • Web Architecture: Full Stack - Debugging Social Media 
  • Code Review - Backend
  • Code Review - Full Stack

Frequently Asked Questions (FAQ)

Q: How are AI-Enabled Scenarios Scored?

Our goal is to capture the value-add an engineer provides on top of the LLM output, so our scoring system uses the one-shot output of a state-of-the-art AI model as the baseline. For example, if an LLM gets you 12 of 20 points, our scoring focuses on the 8 remaining points that require human judgement.
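
As a rough illustration, here's a minimal sketch of the baseline-relative idea with hypothetical point values; this is a simplification for intuition, not our actual scoring rubric:

    # Illustrative sketch only: simplified baseline-relative scoring with hypothetical numbers.
    def human_value_add(candidate_points, llm_baseline_points, total_points):
        """Score the candidate on the points remaining above the one-shot LLM baseline."""
        remaining = total_points - llm_baseline_points              # e.g. 20 - 12 = 8 points
        earned_above_baseline = max(candidate_points - llm_baseline_points, 0)
        return earned_above_baseline / remaining if remaining else 0.0

    # Example: the LLM one-shot earns 12/20; a candidate scoring 18/20
    # demonstrates 6 of the remaining 8 "human judgement" points (0.75).
    print(human_value_add(18, 12, 20))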

Q: Can I see the prompts that a candidate used to solve the problem?

No. This first batch of AI-Enabled Scenarios is Bring Your Own LLM. If a candidate is great with the quirks of Gemini 2.5 Pro, we don't want to force them to use a less-familiar LLM.

We've calibrated these scenarios so that you can judge both the final output and the incremental progress of a candidate's work to evaluate their problem solving.

In the future, we expect to provide scenario types where candidate prompting interactions are captured as part of the evaluation. Contact us if you're interested in agentic programming scenarios (e.g. involving Claude Code), where we expect prompting strategies to be especially high-signal.

Q: What happens when new models are released that make scenarios easier?

We will adapt both the scoring system and the scenario content to support good candidate signal.

But this is an entirely new area! We expect to learn lessons as we work through the beta period for these scenarios.
