SageScreen vs. Pymetrics

The Interview vs. The Personality Test

Pymetrics deserved better than what happened to it.

That’s not the opening you’d expect from a competitor comparison. But Pymetrics wasn’t just another hiring tool — it was a genuinely ambitious attempt to fix something broken. Founded by a Harvard and MIT neuroscientist who saw how arbitrary and biased traditional hiring really is, Pymetrics asked a question worth asking: what if we could measure what actually matters about a person, not just what’s on their resume?

They built neuroscience-based games. They open-sourced their bias auditing tools. They submitted to independent academic audits before anyone required it. They raised $60 million, landed BCG and JPMorgan as clients, and made the Forbes AI 50 list. The science was real. The ambition was sincere.

Then they got acquired. Then the acquirer got acquired. And now Pymetrics exists as a feature inside a platform inside a private equity portfolio — and the neuroscience pioneer who built it holds a title that doesn’t include the word “CEO.”

This article isn’t a hit piece. It’s a story about what was built, what happened to it, and what SageScreen learned from watching. Because we’re building in the same space, with similar ambitions, and we have no intention of ending up the same way.

What Pymetrics Built

Pymetrics’ core product was a suite of 12 neuroscience-based mini-games that measured 91 cognitive, emotional, and behavioral traits. Candidates played games for about 25 minutes — pumping virtual balloons to measure risk tolerance, memorizing number sequences for working memory, identifying facial expressions for emotional intelligence, exchanging virtual money for trust and fairness. No resumes. No interview questions. No text at all.

The system tracked micro-behaviors: how long you hesitated before a risky decision, how quickly you adapted to rule changes, whether you prioritized fairness over personal gain. It compiled these signals into a behavioral profile and compared it against the trait profiles of a company’s top performers. If your cognitive fingerprint matched theirs, you moved forward. If it didn’t, the system could redirect you to other roles — or other companies — where your profile was a better statistical fit.
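Pymetrics never published the internals of its matching model, so the sketch below is only a schematic of the general pattern described above: reduce each candidate to a vector of trait scores and compare it against a centroid built from the employer’s top performers. The trait dimensions, the numbers, and the choice of cosine similarity are all hypothetical.

```python
import numpy as np

# Hypothetical trait vectors, one row per top performer. Each dimension is
# one measured trait (say, risk tolerance, working memory, fairness),
# scaled to [0, 1]. None of these numbers come from Pymetrics.
top_performers = np.array([
    [0.72, 0.65, 0.81],
    [0.68, 0.70, 0.77],
    [0.75, 0.60, 0.84],
])

def match_score(candidate: np.ndarray) -> float:
    """Cosine similarity between a candidate's trait profile and the
    centroid of the company's top performers."""
    centroid = top_performers.mean(axis=0)
    return float(candidate @ centroid /
                 (np.linalg.norm(candidate) * np.linalg.norm(centroid)))

candidate = np.array([0.70, 0.66, 0.80])
print(f"match score: {match_score(candidate):.3f}")  # near 1.0 = strong match
```

Notice what the output is: a single similarity number. Much of what follows in this article flows from that design choice.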

The Pymetrics Innovation Stack

The Science

  • Founded by Frida Polli, PhD — Harvard Medical School, MIT postdoc
  • Co-founded with Julie Yoo in 2013
  • Neuroscience-based behavioral measurement
  • 91 traits across 9 cognitive/emotional categories
  • Nonverbal design — works across 30 languages

The Credibility

  • $60M raised from Khosla, General Atlantic, Salesforce, Workday
  • Forbes AI 50 — America’s Most Promising AI Companies
  • Inc. 5000 fastest growing (#338)
  • World Economic Forum Technology Pioneer
  • Open-sourced Audit-AI bias auditing tool

This was legitimately pioneering work. The open-source bias-auditing tool alone put Pymetrics ahead of companies ten times their size on transparency. When NYC’s Local Law 144 made bias audits mandatory for automated hiring tools in 2023, Pymetrics had already been conducting them voluntarily for years, and once the audits became mandatory, clients like Paramount published their BABL AI audit results. The company also passed an independent academic audit, conducted by researchers from Northeastern University and published at the ACM Conference on Fairness, Accountability, and Transparency. That matters.

What Happened to Pymetrics

In August 2022, Pymetrics was acquired by Harver. To understand what that means, you need to understand what Harver is — and what it used to be.

2013

Pymetrics founded

Frida Polli and Julie Yoo launch Pymetrics in New York. The idea: use neuroscience games and ethical AI to match candidates to jobs based on cognitive and emotional traits, not resumes. Over the next nine years, the company raises $60M, signs BCG, JPMorgan, Blackstone, Unilever, Accenture, and Tesla, and processes millions of assessments in 100+ countries.

2017–2020

Meanwhile: Outmatch assembles itself

Outmatch, a Dallas-based hiring tech company backed by private equity firms Rubicon Technology Partners and Camden Partners, completes six acquisitions in three years — absorbing assessment providers, video interview platforms, and automation tools into a single volume-hiring suite.

May 2021

Outmatch acquires Harver

Outmatch buys Harver, an Amsterdam-based volume-hiring platform. Six months later, Outmatch decides the Harver name has better market recognition and rebrands the entire company as “Harver.” The combined entity has 250 employees across six offices, with US headquarters in Dallas.

August 2022

Harver acquires Pymetrics

Pymetrics — the neuroscience pioneer, Forbes AI 50 company, WEF Technology Pioneer — becomes a product line inside a PE-backed roll-up that has changed its own name twice. Frida Polli’s title changes from CEO and Co-Founder to Chief Data Science Officer at Harver. Terms not disclosed.

Today

Pymetrics is a feature

Pymetrics.com redirects to a login page with the note: “Pymetrics has been acquired by Harver.” The gamified assessments are now one module inside Harver’s broader suite — alongside video interviews, scheduling, reference checking, and cognitive assessments. The neuroscience-based behavioral games that defined the company now share shelf space with tools that have nothing to do with neuroscience.

None of this is inherently wrong. Acquisitions happen. Roll-ups happen. PE happens. But the pattern matters because it keeps happening in HR tech, and each time, the original innovation gets a little harder to find inside the platform that absorbed it.

The Fundamental Difference: Traits vs. Competency

Here’s where the comparison gets substantive. Pymetrics and SageScreen both believe AI can make hiring more fair and more accurate. They both believe the resume is an inadequate signal. They both believe structured assessment is better than gut instinct. But they measure fundamentally different things.

Pymetrics Measures

Who You Are

Cognitive traits, emotional tendencies, and behavioral patterns — how your brain processes risk, attention, fairness, learning, and emotion. These are relatively fixed attributes. Pymetrics captures them through games that observe your instinctive reactions and micro-decisions.

The question it answers:

“Does this person’s cognitive profile match our top performers?”

SageScreen Measures

What You Can Do

Behavioral competency, communication skills, situational judgment, and role-specific knowledge — demonstrated through an adaptive conversation with an AI interviewer that asks follow-up questions, probes for specifics, and evaluates answers against a structured rubric.

The question it answers:

“Can this person do this job? Here’s the evidence.”

This isn’t a subtle distinction. It’s the difference between a personality test and a job interview. Both have a place in hiring. But they produce fundamentally different kinds of information, and they serve different audiences in different ways.

What Each Tool Produces

The most practical way to understand the gap is to look at what a hiring manager actually receives after a candidate has been assessed by each system.

What Pymetrics Delivers

A Trait Profile and Match Score

  • 91 behavioral traits measured across 9 categories
  • Match / No Match: a binary recommendation based on top-performer comparison
  • Traits to Probe: suggested interview questions for traits that diverge from the benchmark

The hiring team still has to conduct an actual interview to evaluate whether the candidate can do the job. Pymetrics tells you who to talk to — not what to think about them once you do.

What SageScreen Delivers

A Structured Evaluation with Evidence

  • Full Transcript: the complete adaptive behavioral interview, which the team can read or audit
  • Rubric Scores: competency ratings mapped to company-specific criteria, with cited evidence
  • Actionable Report: a structured summary the hiring manager can act on, or challenge

The hiring team receives a complete evaluation. They can read the interview, verify every score against the candidate’s actual words, and make an informed decision without conducting another interview first.

Pymetrics was always pre-interview technology. It told you which candidates to spend time on. SageScreen is the interview itself — the part that actually evaluates whether a candidate can do the job, conducted by an AI that follows up, asks for specifics, and maps everything to the competencies your team defined.

The Top-Performer Problem

Pymetrics’ matching model works by profiling a company’s existing top performers and then finding candidates with similar cognitive fingerprints. It’s an elegant approach. It also has a structural limitation that critics — including neuroscientists and I/O psychologists — have identified since the platform launched.

The cycle looks like this: you profile your best people, build a model from their traits, hire people who match those traits, and those new hires become the next generation of top performers your model trains on. The model reinforces itself. If your current top performers happen to share a narrow set of cognitive tendencies — which is common in organizations that have been hiring the same “type” for years — the algorithm codifies that narrowness and calls it excellence.
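To see why the loop narrows, here is a toy one-trait simulation (not Pymetrics’ model; the distributions and hiring rule are invented). Each generation hires the applicants closest to the current benchmark mean and folds them back into the benchmark, and the benchmark’s spread shrinks.

```python
import random

random.seed(0)
# One trait, 20 "top performers" drawn from a broad distribution.
benchmark = [random.gauss(0.5, 0.2) for _ in range(20)]

for generation in range(5):
    mean = sum(benchmark) / len(benchmark)
    applicants = [random.gauss(0.5, 0.2) for _ in range(100)]
    # Hire the 10 applicants most similar to the current benchmark...
    hires = sorted(applicants, key=lambda t: abs(t - mean))[:10]
    # ...and they become the top performers the next model trains on.
    benchmark = benchmark[10:] + hires
    spread = max(benchmark) - min(benchmark)
    print(f"generation {generation}: trait spread = {spread:.3f}")
```

Run it and the printed spread collapses within a couple of generations: the model has codified similarity and labeled it excellence.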

Pymetrics addressed this by testing for adverse impact against protected demographics (and they did this more rigorously than most competitors). But demographic diversity and cognitive diversity aren’t the same thing. A team can be demographically diverse while still being cognitively homogeneous — same risk profiles, same attention patterns, same decision-making tendencies — if the algorithm selects for trait similarity.

SageScreen doesn’t use a top-performer matching model. Each Sage evaluates candidates against a rubric — a set of competency criteria defined by the hiring team for the specific role. The question isn’t “does this person think like your best people?” The question is “did this person demonstrate these specific skills in this conversation?” Two candidates can have completely different cognitive profiles, communication styles, and approaches to problem-solving and both score well — because competency doesn’t require conformity.
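SageScreen’s internal scoring format isn’t public either, so the following is a minimal sketch of the structural point, with invented field names and an invented passing threshold: a rubric judges each candidate against fixed, role-specific criteria, and no cross-candidate similarity ever enters the decision.

```python
from dataclasses import dataclass

@dataclass
class CriterionScore:
    competency: str  # rubric criterion defined by the hiring team
    score: int       # 1-5 rating
    evidence: str    # excerpt from the transcript supporting the score

def passes(scores: list[CriterionScore], threshold: int = 3) -> bool:
    """Judge a candidate against the rubric on its own terms; no other
    candidate's profile appears anywhere in the decision."""
    return all(s.score >= threshold for s in scores)

# Two candidates with very different styles can both clear the bar:
analytical = [CriterionScore("conflict resolution", 4,
                             "I mapped each stakeholder's incentives...")]
storyteller = [CriterionScore("conflict resolution", 5,
                              "I sat both teams down and asked...")]
print(passes(analytical), passes(storyteller))  # True True
```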

The Candidate Experience Question

Pymetrics deserves credit here: they designed something genuinely engaging. A 98% completion rate is remarkable for a hiring assessment. The games are colorful, quick, and nonverbal. Candidates receive a traits report after playing, which most assessment tools don’t offer. The experience is objectively better than filling out a 200-question personality inventory.

But there’s a disconnect that shows up consistently in candidate feedback: the games feel unrelated to the job. When you’re applying for a financial analyst role and the assessment asks you to pump virtual balloons and memorize number sequences, it’s natural to wonder what one has to do with the other. The science says there’s a correlation between these game behaviors and job performance. The candidate’s lived experience says, “I just played a carnival game to apply for a serious job.”

Pymetrics Candidate Experience

12 mini-games, ~25 minutes. Nonverbal — balloon pumping, number memorization, facial expression matching, tower puzzles. No conversation. No questions about the job. Candidate receives a traits report showing where they fall on 9 behavioral dimensions.

What the candidate knows afterward: their behavioral trait profile. They do not know why they were or weren’t moved forward, what the company was looking for, or how their traits were weighed.

SageScreen Candidate Experience

Adaptive behavioral interview conducted by an AI Sage. The conversation covers role-relevant scenarios, asks follow-up questions, and adapts based on responses. Candidates know from the first message they’re speaking with AI. The interview feels like an interview — because it is one.

What the candidate knows afterward: they had a structured interview about the role they applied for. The experience maps to what they expected when they applied — a conversation about the job.

An entire cottage industry of prep sites — JobTestPrep, PrepLounge, CaseBasix — now coaches candidates on how to optimize their Pymetrics game strategies. BCG has had to publicly clarify that Pymetrics results aren’t used as a “filter” and are just one input alongside other evaluation methods. When candidates feel compelled to game a game-based assessment, the assessment has a perception problem even if the science is sound.

The Transparency Gap

Pymetrics was more transparent than most of its competitors. The open-sourced Audit-AI tool, the voluntary academic audits, the proactive compliance with NYC Local Law 144 — all genuine strengths. But the transparency stopped at the model itself. Candidates couldn’t see how their traits were weighted. Companies couldn’t fully explain why a candidate matched or didn’t. The system collected, by Pymetrics’ own description, “millions of data points” per candidate. Explaining what those data points meant in plain language was, by the system’s own design, nearly impossible.

SageScreen’s transparency model is structurally different because interviews are inherently more explainable than behavioral pattern-matching. A hiring manager can read the transcript. They can see the question, the candidate’s response, and the score the AI assigned — along with the rubric criteria that informed that score. If they disagree with the evaluation, they can point to the specific sentence where they think the AI got it wrong. The entire chain of reasoning is auditable in natural language.
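To make “auditable in natural language” concrete, here is one hypothetical shape such a score record could take. This article doesn’t describe SageScreen’s actual report schema, so every field name below is an assumption; the point is that each rating carries a pointer back to the candidate’s own words.

```python
# Hypothetical auditable score record. Every rating points back to the
# exact place in the transcript that justified it, so a reviewer can
# challenge the score by citing what the candidate actually said.
record = {
    "question": "Tell me about a time you disagreed with a teammate.",
    "answer_excerpt": "I scheduled a one-on-one and laid out the tradeoffs...",
    "competency": "conflict resolution",
    "rubric_criterion": "de-escalates and seeks shared context",
    "score": 4,            # 1-5 rating assigned against the rubric
    "transcript_turn": 7,  # where in the interview this exchange occurred
}

# A hiring manager who disagrees can cite turn 7 and override the score:
print(f"{record['competency']}: {record['score']}/5 "
      f"(see transcript turn {record['transcript_turn']})")
```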

Explainability: What Can You Point To?

Why was this candidate recommended?
  • Pymetrics: Their trait profile statistically matches top performers.
  • SageScreen: They scored 4/5 on leadership, citing this specific answer.

Why was this candidate screened out?
  • Pymetrics: Their risk tolerance or attention pattern diverged from the model.
  • SageScreen: They scored 2/5 on conflict resolution — here’s their answer.

Can the hiring manager disagree?
  • Pymetrics: Not meaningfully — the model is the model.
  • SageScreen: Yes — read the transcript, override any score.

What would a regulator or auditor examine?
  • Pymetrics: Statistical disparity in outcomes across demographics.
  • SageScreen: The same — plus the full interview transcript and scoring rationale.

Could a candidate challenge the result?
  • Pymetrics: Difficult — results are derived from behavioral micro-signals.
  • SageScreen: Yes — “I said X, why was that scored as Y?”

This isn’t about one system being “better” at transparency. It’s about the inherent explainability of the underlying method. Game-based behavioral profiling produces rich data but opaque decisions. Conversation-based evaluation produces decisions you can read, question, and override.

Architecture: Games vs. Interviews

The technical architectures reflect the philosophical difference.

Pymetrics Architecture

Input: Behavioral micro-signals from 12 neuroscience games — reaction times, hesitation patterns, strategy shifts, risk/reward choices

Processing: Machine learning model compares candidate’s behavioral profile against top-performer benchmark trained on existing employees

Output: Match/no-match recommendation plus trait profile and suggested probe questions for human interviewers

Designed for: High-volume pre-screening — narrowing the funnel before interviews begin

SageScreen Architecture

Input: Candidate’s own words in a structured, adaptive behavioral interview conducted by a custom AI Sage

Processing: 10 specialized agents across 3 isolated pipelines — conversation management, rubric-based evaluation, and report generation run independently to prevent cross-contamination

Output: Full interview transcript, competency scores with cited evidence, and structured evaluation report

Designed for: Replacing the first-round interview — the evaluation itself, not just the filter before it

Pymetrics was built to answer: should we interview this person? SageScreen was built to answer: what did the interview reveal? One sits before the interview. The other is the interview.
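The isolation claim is easiest to see as data flow. The sketch below is not SageScreen’s actual code (the function names and shapes are invented); it only shows the property the three-pipeline design is after: each stage is a function of the previous stage’s output alone, so evaluation can’t leak hints back into the live conversation and report generation can’t alter scores.

```python
def conduct_interview(candidate_id: str) -> list[dict]:
    """Conversation pipeline: produces a transcript and nothing else."""
    return [{"turn": 1, "q": "Describe a recent project.", "a": "..."}]

def evaluate(transcript: list[dict], rubric: dict[str, str]) -> dict[str, int]:
    """Evaluation pipeline: sees only the finished transcript and rubric."""
    return {criterion: 4 for criterion in rubric}  # stubbed scoring

def report(scores: dict[str, int]) -> str:
    """Reporting pipeline: sees only scores, never the live conversation."""
    return "\n".join(f"{c}: {s}/5" for c, s in scores.items())

rubric = {"communication": "clear, specific, structured answers"}
print(report(evaluate(conduct_interview("c-123"), rubric)))
```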

Who Should Use Which

Honest assessment time. Neither tool is right for every situation, and the use cases don’t overlap as much as you might think.

Consider Pymetrics (via Harver) If:

✓ You’re hiring thousands of people per quarter for hourly or entry-level roles

✓ Your candidates don’t have much work experience to evaluate behaviorally

✓ You want a pre-screening gate before human interviews

✓ You need something that works across 30 languages without translation

✓ You’re already on the Harver platform for other hiring functions

✓ You value a gamified experience with high completion rates

Consider SageScreen If:

✓ You want AI to conduct the interview, not just decide who gets one

✓ Your roles require demonstrated competency, not just trait alignment

✓ Your hiring team needs to see why a candidate scored the way they did

✓ You’re screening for roles where communication and judgment matter

✓ Regulatory transparency is a priority — you need auditable evidence

✓ You want transparent pricing without a sales-gated enterprise contract

The Pricing Question

Pymetrics, now inside Harver, follows the enterprise pricing model: contact sales, custom quotes, annual contracts. Harver has 1,300+ customers and positions itself as a volume-hiring platform, so pricing is typically negotiated based on assessment volume and product bundle. There are no publicly available pricing pages.

SageScreen operates on a transparent credit-based model. Credits are listed on the website. You can calculate your costs before your first conversation with us. There’s no bundle requirement — you pay for the screening you use, not a platform subscription that includes twelve tools you didn’t ask for.

Pymetrics / Harver

Contact Sales

Enterprise pricing. Custom quotes. Annual contracts. Part of the broader Harver suite — assessments, video interviews, scheduling, reference checking. Pricing depends on volume and product bundle.

SageScreen

Published Credits

Credit-based pricing, published on the website. Pay per screening. No platform bundle, no annual lock-in. See the pricing page and calculate your cost before you talk to anyone.

What We Learned from Watching

Pymetrics’ story isn’t unique. It follows the same trajectory as Modern Hire (acquired by HireVue) and as dozens of other innovative hiring tools that got rolled up into broader platforms, where the original innovation slowly becomes a checkbox in a feature-comparison spreadsheet.

The pattern works like this: a founder with real domain expertise builds something genuinely novel. It gets traction. It raises venture capital. The VC needs an exit. The exit is almost always a PE-backed platform that’s assembling pieces to create a “comprehensive suite.” The founder stays for a transition period. The product becomes a feature. The feature gets maintained but no longer drives the roadmap. The platform sells the suite.

We’re not anti-PE. We’re not anti-acquisition. SageScreen will need investment to scale, and we’re clear-eyed about that. But we built the product to survive it. The architecture — reusable Sages, council-based management, credit-based pricing — is designed so the core innovation doesn’t dissolve if the company structure around it changes. The Sage an HR team builds today should work the same way in five years regardless of who’s on our cap table.

That’s the lesson from Pymetrics. Build something worth surviving. Then make sure it can.

The Bottom Line

Pymetrics pioneered the idea that AI could make hiring decisions more fair and more accurate than human intuition alone. That idea was right. The execution — neuroscience-based behavioral games, open-source bias tools, proactive regulatory compliance — was impressive. Frida Polli built something that deserved to be a company, not a feature.

But it became a feature. And the question Pymetrics was best at answering — does this person’s cognitive profile match our best people? — was always a prelude to the question that actually mattered: can this person do this job?

SageScreen starts where Pymetrics left off. Not with traits, but with competency. Not with games, but with conversations. Not with a match score, but with evidence. And not with the hope that someone will eventually ask the right follow-up questions — but with an AI that already did.