Scoring Errors
Updated by Shayna Pittman
What can cause a scoring error?
Scoring errors generally fall into three types:
- Rubric errors: The scoring team correctly followed the rubric, but the rubric itself is wrong, incomplete, or vague. These errors decrease as we get more repetitions through a scenario and more feedback from customers about cases we miss. We've had ~100 candidates through this scenario; we typically see stabilization around 500.
- Scoring errors: These are rarer and happen when every human scorer errs in the same direction on a given rubric item. In our audits we've seen per-item error rates of 2.5-3.5%, so on a 24-item rubric like this one we'd expect 0-3 errors.
- Communication errors: The rubric and scoring are correct, but the way we communicate about them in the analysis is confusing, incomplete, or simply wrong. These behave like rubric errors: there are more of them when a scenario is new, and they are gradually reduced with feedback over time.
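As a sanity check on the scoring-error numbers above: treating each rubric item as an independent chance of error (an assumption for illustration, not a description of Woven's audit methodology), a 2.5-3.5% per-item error rate over a 24-item rubric gives an expected count of under one error, and a high probability of landing in the 0-3 range:

```python
from math import comb

def expected_errors(rate: float, items: int = 24) -> float:
    """Mean number of erroneous rubric items under a binomial model."""
    return rate * items

def prob_at_most(k: int, rate: float, items: int = 24) -> float:
    """P(errors <= k) for a Binomial(items, rate) distribution."""
    return sum(
        comb(items, i) * rate**i * (1 - rate) ** (items - i)
        for i in range(k + 1)
    )

# Audit range from the article: 2.5% and 3.5% per-item error rates.
for rate in (0.025, 0.035):
    print(
        f"rate {rate:.1%}: expected {expected_errors(rate):.2f} errors, "
        f"P(0-3 errors) = {prob_at_most(3, rate):.3f}"
    )
```

Under this simple model, 0-3 errors covers roughly 99% of outcomes at both ends of the audited rate range, which is consistent with the expectation stated above.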