
Screening for Comfort, Missing Capability

  • Writer: David Frank
  • Jan 20
  • 6 min read

I spend a lot of time reviewing resumes from candidates with identical credentials who produce wildly different results. Eventually you start noticing that academic performance predicts some things well and other things not at all. This subject hits particularly hard because I've been on both sides: the candidate overlooked for reasons I couldn't see, and the screener who overlooked people for reasons I couldn't articulate. The point of examining this isn't to assign blame but to recognize that when our screening mechanisms systematically filter out invisible talent, we all lose. Understanding the gap between what we measure and what matters is the first step toward closing it.


Two transcripts can look identical in print yet carry different weight in motion.


Consider: both candidates graduated from well-regarded universities with solid GPAs. Both present polished resumes listing internships, extracurriculars, and technical proficiencies. One has a 3.8, the other a 2.8. On paper, the choice seems obvious.


Six months later, the 2.8 is reframing problems the team didn't know they had. The 3.8 is competent, reliable, and slightly forgettable.


This isn't an anomaly. It happens often enough to become a pattern.


And the pattern raises a question about what we're actually measuring.


What Academic Success Measures


Academic performance, particularly GPA, correlates modestly with job performance, typically in the range of .20 to .30 after corrections. That's real but not overwhelming. It's roughly equivalent to the validity of an unstructured interview, which suggests we might be building hiring systems on foundations less solid than we'd prefer to admit.
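A back-of-the-envelope simulation makes that .20 to .30 figure concrete. The sketch below assumes the midpoint (r = .25), treats both GPA and job performance as standard normal variables, and asks how many genuine top-20% performers survive a top-20% GPA screen. The variable names and the 20% cutoffs are illustrative choices, not anything drawn from the cited studies.

```python
import random
import math

random.seed(42)
r = 0.25       # assumed validity: midpoint of the .20-.30 range
n = 100_000    # simulated applicant pool

# Generate correlated pairs (gpa, perf) from a bivariate standard normal
pairs = []
for _ in range(n):
    z1 = random.gauss(0, 1)
    z2 = random.gauss(0, 1)
    gpa = z1
    perf = r * z1 + math.sqrt(1 - r * r) * z2
    pairs.append((gpa, perf))

# Screen: admit only the top 20% by GPA
cutoff_gpa = sorted(p[0] for p in pairs)[int(0.8 * n)]

# Ground truth: who are the actual top-20% performers?
cutoff_perf = sorted(p[1] for p in pairs)[int(0.8 * n)]
top_performers = [p for p in pairs if p[1] >= cutoff_perf]

# How many of the true top performers clear the GPA screen?
hits = sum(1 for p in top_performers if p[0] >= cutoff_gpa)
print(f"variance explained (r^2): {r * r:.2f}")
print(f"top performers passing the GPA screen: {hits / len(top_performers):.0%}")
```

With r = .25, GPA explains only about 6% of the variance in performance, and in runs of this sketch most of the strongest performers fail to clear the screen: the false-negative problem, in miniature.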


Part of the explanation lies in what grades actually measure. The Cattell-Horn-Carroll (CHC) model of intelligence distinguishes between crystallized intelligence (accumulated knowledge and skills) and fluid intelligence (the capacity for novel problem-solving and abstract reasoning). Traditional academic structures reward crystallized intelligence heavily. One studies material, retains it, reproduces it under timed conditions, and moves forward. Repetition, compliance, and structured recall dominate.


Fluid intelligence operates differently.


It's the capacity that helps someone see patterns no one else noticed or navigate ambiguity without a rubric. It matters far less when the syllabus is fixed and the answers are known. That's not a failure of education; it simply reflects what can be taught and what can be tested. The mismatch becomes visible when employers assume that someone who excelled at structured, supervised learning will automatically excel at ambiguous, unsupervised problem-solving.


Sometimes they do. Often they don't.


The theory of successful intelligence offers a useful lens here: intelligence isn't a single capacity but a constellation of analytical, creative, and practical abilities. Academic environments privilege analytical intelligence. Professional environments require all three, often simultaneously, with the weight shifting depending on the role. The person who aced organic chemistry because they memorized reaction mechanisms may struggle when the question becomes "how should we be thinking about this?" Yet that same person might excel in contexts where precision and systematic thinking matter most.


Neither candidate is better in any absolute sense. They simply inhabit different regions of capability. But the resume rarely makes that distinction visible.


What Documentation Cannot Show


Resumes reflect what's easy to document. Quantifiable achievements show up. Ambiguous contributions disappear.


Someone who reorganized a failing workflow doesn't list "noticed that everyone was solving the wrong problem." Someone who spent six months doing intellectual labor that prevented a crisis doesn't write "made something not happen." There is no bullet point for preventing a project from imploding quietly. The resume says "coordinated cross-functional team" and leaves the invisible work invisible.


This creates a sorting problem. Some people produce visible output naturally: reports, dashboards, presentations, deliverables that photograph well. Others produce cognitive infrastructure. They reframe questions, spot faulty assumptions, synthesize disparate information into usable heuristics. Both are valuable. Only one translates neatly into bullet points.


Consider the capabilities that rarely appear on resumes: reframing questions others take as given; spotting what's missing from a conversation, not just what's present; translating across domains without formal training in either; generating hypotheses faster than analyzing them; maintaining conceptual coherence across fragmented information.


These are markers of strategic cognition. They're almost impossible to document in standardized formats, which means hiring processes optimized for what's documentable systematically filter out people whose value operates in a different register. Not less valuable. Just less visible.


The pattern repeats across organizations, across industries, across decades of hiring data. We screen for what we can see, which means we screen for people whose work happens to leave artifacts.


The Filtering Effect


Personnel selection methods reveal a consistent pattern: organizations optimize to avoid false positives (bad hires) at the expense of false negatives (missed talent). This makes sense from a risk perspective. A bad hire is visible, disruptive, and requires corrective action. A great candidate one never interviewed is invisible, costs nothing, and generates no complaint.


The cumulative effect compounds over time.


Screening for high GPAs, name-brand schools, and tidy career progressions filters out people whose intelligence manifests in practical or creative domains rather than analytical ones. Late bloomers who took time to find their direction. Those who prioritized deep work over résumé optimization. Anyone whose cognition doesn't translate neatly into credentials.


Cultural matching compounds this. Employers in elite firms often prioritize candidates who share their leisure pursuits, communication styles, and social signaling over those with the strongest technical skills. The filtering effect isn't just about credentials. It's about recognizability. One hires people who remind one of oneself, which works beautifully when staffing teams where everyone needs to do the same thing. It works less well when the problem requires someone who thinks differently.


Yet something shifts when this pattern becomes visible.


The hiring manager who begins tracking who gets filtered out and why starts to notice that comfort and capability don't always correlate. What was invisible becomes visible. What was filtered becomes questionable. The question stops being "did we hire the best person?" and becomes "are we measuring the right things?"


What Strategic Cognition Looks Like


Strategic thinking doesn't announce itself. It operates in the space between question and answer, in the gap where most people are still defining terms. One sees it when someone quietly redirects a conversation that was headed toward a dead end. Or when they synthesize three unrelated inputs into a structure that suddenly makes the problem solvable.


It shows up as second-order thinking, conceptual flexibility, and the capacity to work in the realm of hypotheticals before anyone's ready to commit. These are not skills one measures with a test. They're dispositions, orientations, habits of mind. And they become visible only in contexts designed to surface them.


The candidate who asks clarifying questions about the question. The one who spots the assumption everyone else accepted. The one who says "here's another way to think about this" and actually offers a useful reframe. These moments signal capability that resumes can't capture but structured observation can reveal.


Meanwhile, the markers we do screen for measure something real: the ability to succeed within established systems. That's valuable. The question is whether it's sufficient. Whether the person who navigates existing structures effectively will also thrive when the structure doesn't exist yet. The resume doesn't answer this. And so we default to what's measurable, which means we default to comfort.


When Measurement Misses What Matters


Two transcripts can look identical in print yet carry different weight in motion. One tells a clear, linear story. The other suggests a mind that operates three moves ahead of the question not yet asked.


The first gets hired more often. The second gets filtered out before anyone realizes they were there.


The gap between what we screen for and what we need isn't permanent. It's a function of awareness, of what becomes visible when we stop assuming that the easiest hire to defend is the strongest hire to make. We've become very good at identifying people who are good at being identified.


Whether we can become equally good at identifying people who are good at the work depends on whether we're willing to examine what our screening mechanisms actually measure. Not credentials versus no credentials. Not rigor versus laxity. But visibility versus capability. Documentation versus cognition. The story that's easy to tell versus the contribution that's hard to see.


The question isn't whether organizations are hiring for the work or hiring for the narrative. It's whether the distinction between the two has become visible enough to matter.


For those who need some light reading:


¹ Roth, P. L., BeVier, C. A., Switzer, F. S., & Schippmann, J. S. (1996). Meta-analyzing the relationship between grades and job performance. Journal of Applied Psychology, 81(5), 548-556.


² Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262-274.


³ McGrew, K. S. (2009). CHC theory and the human cognitive abilities project: Standing on the shoulders of the giants of psychometric intelligence research. Intelligence, 37(1), 1-10.


⁴ Sternberg, R. J. (2005). The theory of successful intelligence. Interamerican Journal of Psychology, 39(2), 189-202.


⁵ Rivera, L. A. (2012). Hiring as cultural matching: The case of elite professional service firms. American Sociological Review, 77(6), 999-1022.


⁶ Dineen, B. R., Duffy, M. K., Henle, C. A., & Lee, K. (2023). Recruitment in personnel psychology and beyond: Where we've been working, and where we might work next. Personnel Psychology, 76(3), 695-740.


⁷ Vedapradha, R., Hariharan, R., & Shivakami, R. (2023). Reimagining recruitment: Traditional methods meet AI interventions - A 20-year assessment (2003–2023). Cogent Business & Management, 12(1), Article 2454319.

 
 
 
