
Our highest performer sat in my office, trembling. Her annual review had just been delivered. It was overwhelmingly positive — 4.5 out of 5.
But she was fixated on the 0.5 she'd "lost."
"What did I do wrong? Where did I fall short? Why wasn't I a 5?"
I had no good answer. The difference between 4.5 and 5.0 was arbitrary: the output of some calibration meeting where managers horse-traded ratings to fit a curve. It had nothing to do with her actual performance.
She spent the next month demoralized, second-guessing every decision. Our best engineer, questioning her worth because of a number on a spreadsheet.
For what?
So we could generate data for an HR system that nobody used?
So I took a hard look at our review process:
- 40+ hours of manager time per review cycle
- Zero correlation between review scores and retention
- Guaranteed 2-week productivity dip after every cycle
- Widespread anxiety that started weeks before reviews were due
We stopped doing formal performance reviews entirely. Deleted the process. Told HR we were done.
Retention went up. Productivity went up. Manager sanity went up.
Here's what we do instead, and why the review ritual, not feedback itself, is the problem.
Section 1: The Broken Promise of Performance Reviews
Performance reviews are sold as a development tool. "We give feedback so employees can grow!"
But look at how reviews actually function in most organizations.
The Legal CYA Function:
Reviews exist primarily to create a paper trail. If you ever need to fire someone, you need documentation. The review is that documentation.
This means reviews are written defensively. Managers hedge their language. They avoid saying anything that could be used against the company in a lawsuit.
The result: Reviews are vague, bureaucratic, and useless for actual development.
The Research Says They Don't Work:
Multiple studies have found that annual performance reviews have near-zero correlation with employee improvement or performance gains.
A 2019 Gallup study found that only 14% of employees strongly agree that performance reviews inspire them to improve. The majority find them demotivating or irrelevant.
If a practice doesn't work, why do we keep doing it?
The Recency Bias Problem:
Annual reviews are supposed to evaluate 12 months of work. In practice, managers remember the last 2-3 months.
Did you have a great Q1 but a rough Q4? Your review reflects Q4.
Did you save the company millions in March but make a mistake in November? Your review is about November.
This is not a fair assessment of performance. It's a snapshot of recent memory.
The Central Tendency Trap:
When reviews are tied to compensation or ranking, managers game the system.
Nobody wants to give a 1 (creates HR problems). Nobody wants to give a 5 (hard to justify, sets expectations).
So everyone gets a 3.5. The distribution clusters around "meets expectations."
This tells employees nothing. If everyone is a 3.5, the system is meaningless.
Section 2: The Hidden Costs Nobody Measures
Organizations track the "completion rate" of reviews. They don't track the costs.
Manager Time:
For each direct report, a manager spends approximately:
- 2 hours gathering feedback
- 2 hours writing the review document
- 1 hour in calibration meetings
- 1 hour delivering the review
That's 6 hours per direct report. A manager with 8 reports spends 48 hours on reviews. Per cycle.
If you do two review cycles per year, that's 96 hours — more than two full work weeks — spent on a process that doesn't improve performance.
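To make the arithmetic concrete, here is the same estimate as a tiny Python sketch. The per-activity hours are the ones listed above; the report count and cycle count are illustrative.

```python
# Back-of-the-envelope cost of performance reviews, using the estimates above.
HOURS_PER_REPORT = {
    "gathering feedback": 2,
    "writing the review": 2,
    "calibration meetings": 1,
    "delivering the review": 1,
}

def review_hours_per_year(direct_reports: int, cycles_per_year: int) -> int:
    """Total manager hours spent on review mechanics in a year."""
    per_report = sum(HOURS_PER_REPORT.values())  # 6 hours per direct report
    return per_report * direct_reports * cycles_per_year

# A manager with 8 reports running 2 cycles a year:
print(review_hours_per_year(direct_reports=8, cycles_per_year=2))  # 96
```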
Employee Anxiety:
Reviews create a 1-2 week "anxiety bubble" before and after delivery.
Before: Employees worry about what their rating will be. They write self-assessments. They stress.
After: Employees process the feedback, often negatively. The 4.5/5 employee who spirals. The 3.0 employee who disengages.
Multiply this distraction across your entire org. The productivity cost is massive.
Political Gaming:
When reviews drive compensation, employees optimize for review optics rather than actual impact.
- Take visible projects, avoid invisible-but-important work
- Time major accomplishments to land near review deadlines
- Build relationships with "influential" reviewers
- Document everything defensively
This is rational behavior given the incentives. But it's terrible for the organization.
The Stack Ranking Poison:
Some companies rank employees against each other. Forced curve: 10% must be "underperformers."
This destroys collaboration. Why help a colleague if their success comes at your expense?
High-performing teams become competitive, then toxic. The review system did this.
Section 3: What We Replaced Reviews With
After eliminating formal reviews, we needed to answer: How do we give feedback? How do we develop people? How do we make compensation decisions?
Continuous 1:1s:
Instead of one annual review, managers have weekly 30-minute 1:1s with each report.
These are not performance evaluations. They are problem-solving sessions.
- "What are you blocked on?"
- "What do you need from me?"
- "What's going well? What's not?"
Feedback is immediate and contextual. It happens when the work happens, not 6 months later.
Project Retrospectives:
At the end of every significant project, we do a team retrospective.
- What went well?
- What could we improve?
- What did each person contribute?
This ties feedback to specific, concrete work. It's not abstract "performance." It's "Here's what you did on this project, and here's how it went."
Peer Recognition:
We created a Slack channel for public kudos. When someone does great work, anyone can recognize them.
The key: Recognition is narrative, not numeric. "Alex stayed late to help debug the production issue" — not "Alex gets 4.2 stars."
This creates a culture of appreciation without the toxicity of ratings.
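For teams that want to nudge this along with tooling, a kudos post is a one-call script against Slack's Web API. This is a minimal sketch, assuming the official slack_sdk Python package and a bot token; the #kudos channel name and the post_kudos helper are hypothetical, not our production setup.

```python
# Minimal sketch: post a narrative (not numeric) kudos message to a shared channel.
# Assumes: pip install slack-sdk, and a bot token exported as SLACK_BOT_TOKEN.
import os

from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError

client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

def post_kudos(sender: str, story: str, channel: str = "#kudos") -> None:
    """Share the story of what someone did. No stars, no scores."""
    try:
        client.chat_postMessage(
            channel=channel,
            text=f":tada: Kudos from {sender}: {story}",
        )
    except SlackApiError as err:
        print(f"Failed to post kudos: {err.response['error']}")

post_kudos("Priya", "Alex stayed late to help debug the production issue.")
```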
Compensation Decoupled from Ratings:
Pay is determined by:
- Market rate for the role
- Tenure (time at company)
- Scope of responsibility
Not by a review score. Not by stack ranking.
This removes the incentive to game reviews. People focus on the work, not the rating.
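As an illustration, a rating-free pay model is just a function of those three inputs. The sketch below is hypothetical; the rates, tenure step, and scope multipliers are placeholder numbers, not our actual bands.

```python
# Hypothetical rating-free compensation model: pay depends only on market
# rate, tenure, and scope. No review score appears anywhere in the formula.

MARKET_RATE = {"engineer": 140_000, "senior engineer": 175_000}  # placeholders
SCOPE_MULTIPLIER = {"individual": 1.00, "team": 1.10, "org": 1.25}  # placeholders

def annual_pay(role: str, years_at_company: int, scope: str) -> float:
    """Market base, a small capped tenure step, scaled by scope of responsibility."""
    tenure_step = 1 + 0.02 * min(years_at_company, 5)  # +2% per year, capped at 5
    return MARKET_RATE[role] * tenure_step * SCOPE_MULTIPLIER[scope]

print(annual_pay("senior engineer", years_at_company=3, scope="team"))  # 204050.0
```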
Section 4: The Results (After 2 Years Review-Free)
We've been running without formal performance reviews for over two years. Here's what we've measured.
Retention Improved 25%:
High performers were no longer demoralized by arbitrary scores. They weren't leaving because they got a 4.5 instead of a 5.0.
Interestingly, low performers also self-selected out faster. Without the review as a "moment of truth," continuous feedback made poor fit obvious earlier. Exits happened naturally, not in post-review trauma.
Manager Time Reclaimed:
We calculated: Eliminating reviews saved 200+ hours per year across our management team.
That time now goes to actual management — coaching, unblocking, planning. The work that actually improves performance.
Employee Satisfaction with Feedback Rose 40 Points:
We survey employees on feedback quality every quarter.
With reviews: 45% said they received "useful, actionable feedback."
Without reviews: 85% said the same.
Why? Because feedback is now frequent, specific, and low-stakes. It's a conversation, not a judgment.
No Negative Impact on Performance:
The fear: "Without reviews, people will slack off."
The reality: Performance metrics stayed stable. Shipping velocity, quality, customer satisfaction — all unchanged or improved.
It turns out people are motivated by the work itself, not by the threat of a bad review score.
Conclusion
Performance reviews are a ritual, not a tool. They exist because we've always done them, not because they work.
The research says they don't improve performance. The lived experience says they create anxiety and politics. The math says they consume enormous resources.
Kill the ritual. Keep the relationship.
Give feedback continuously. Tie it to real work. Make it low-stakes.
Your team will be happier, your managers will be saner, and your organization will be stronger.