The Seven Minute Wake-Up Call
The traits that matter and how to find them in an AI world.
This weekend a portfolio company founder sent me the take-home exercise he uses to assess engineering candidates. The README said it should take 2-3 hours, with AI tools permitted.
I’m not an engineer. I fed it into Claude Code. Seven minutes later, I sent him my completed exercise.
I aced it.
What Actually Matters Now
It’s not just engineering. Case studies, writing samples, financial models: AI can (or soon will) produce competent versions of all of them in minutes. The way we’ve traditionally assessed excellence is broken.
If traditional assessments are broken, what should people test for?
Taste. In a world where anyone can build anything quickly, the scarce resource is knowing what’s worth building. Can they distinguish between good and great? Do they focus on driving outcomes that matter?
High agency. Do they take initiative and drive outcomes without being managed? AI is a multiplier on agency. The gap between a high-agency person with AI and someone who waits for direction has never been wider.
Strategic thinking. Can they see problems before they arrive? Do they think long-term? This is still hard to outsource to AI.
AI-forward mindset. Are they constantly seeking out the latest tools to make themselves more impactful? Someone still reluctant to use AI is choosing to operate at a fraction of their potential.
Work ethic. AI amplifies effort. Someone willing to put in the hours now generates dramatically more output than ever before.
Culture fit. Will they work well with your team? Do their values align with how you operate? Skills can be amplified by AI. So can the damage from culture misalignment.
Superpower alignment. Be specific about what you actually need from this role. An architect who prevents long-term AI slop is very different from a builder who ships fast. Know which you need, then find someone whose superpower matches. AI or teammates can fill in their gaps.
How to Actually Assess This
Throw out the take-home. Here’s what works better:
Work together on a real project. Have your top candidates spend a few nights or a weekend collaborating with your team on something you’re actually building. Pair programming, team discussions, real problems. You’ll learn more in a few sessions than any interview process could reveal. If they’re between jobs, even better: bring them on as a contractor for a few weeks first.
Give them ambiguity. Don’t hand them a well-scoped problem. Give them a fuzzy one and see how they handle it. High-agency people clarify, scope, and move. Others wait for direction.
Ask about their AI stack. What tools do they use daily? How has their workflow changed in the last month? People who are genuinely AI-forward have specific answers and strong opinions. People who aren’t will give you vague generalities.
Watch them work in real-time. Sit with them while they solve a problem using whatever tools they want. Do they know how to iterate and guide AI to desired outcomes? Do they evaluate outputs critically or accept slop?
Reference checks matter more than ever. Agency, judgment, and taste are best evaluated by people who’ve worked with the candidate. Ask specifically: what are their superpowers? Then make sure those match what you need.
The founders who adapt fastest will build teams that dramatically outperform those who don’t. The gap between a well-constructed AI-native team and a traditional one isn’t 20%. It’s multiples.
I am not an engineer. I completed a senior engineering assessment in seven minutes.
The old interviewing playbook is dead. Rebuild yours now.

Anthropic's performance optimization team lead wrote up a longer report on how they dealt with this internally, worth a read: https://www.anthropic.com/engineering/AI-resistant-technical-evaluations