
AI Executive Assessment Analysis: Can It Match Human Insight?

Can AI Produce a Client-Ready Aggregate Analysis of Executive Assessment Data?

Over the past few months, we conducted a hands-on test to explore a growing question in leadership consulting: Can AI executive assessment analysis deliver insights with the depth and accuracy needed for high-stakes decision-making? The rise of generative AI has sparked interest in its potential to process and synthesize executive feedback at scale – but we wanted to see for ourselves how it stacks up against the judgment of experienced consultants.

The promise is compelling. AI can scan large volumes of text in seconds. It can categorize, summarize, and surface themes faster than any human. That kind of efficiency could transform how we deliver insight to leadership teams.

But theory doesn’t always hold up in practice – especially when subtle judgment calls are involved.


The Test: Comparing AI and Human Judgment

We used nine real executive assessments, each developed through stakeholder interviews and hands-on analysis. These were typical of what we deliver for clients – comprehensive and customized. Each report included:

  • A one-page narrative executive summary
  • A bullet-point list of strengths and development areas
  • Several pages of anonymized, polished stakeholder quotes categorized by theme

With those reports in hand, we ran two separate analyses:

  1. Manual (fully human) analysis: We reviewed all nine reports and synthesized aggregate themes across strengths and development areas. We also conducted a narrative-level thematic analysis – exactly what we’d do in preparation for a leadership team offsite.
  2. AI-generated analysis: We used generative AI to analyze the same reports, asking it to extract and summarize common strengths, development themes, and patterns across the executives (a sketch of this kind of prompt-driven pass follows this list).
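
To make the second step concrete, here is a minimal Python sketch of the kind of prompt-driven pass we mean – assuming the reports have already been exported as anonymized plain-text files. The directory name, model choice, and prompt wording are illustrative assumptions, not a disclosure of our actual tooling; the shape of the workflow is what matters: load the report text, ask a generative model for cross-report themes, and capture the output for comparison.

    import os

    from openai import OpenAI

    # Assumed layout: the nine reports exported as anonymized text files.
    REPORT_DIR = "anonymized_reports"  # hypothetical path, not our real file set

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Concatenate each report with a header so the model can tell them apart.
    reports = []
    for name in sorted(os.listdir(REPORT_DIR)):
        with open(os.path.join(REPORT_DIR, name), encoding="utf-8") as f:
            reports.append(f"--- Report: {name} ---\n{f.read()}")

    prompt = (
        "You are analyzing nine executive assessment reports. Identify the "
        "strengths and development themes that recur across executives, and "
        "note how many reports support each theme.\n\n" + "\n\n".join(reports)
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the test above does not name one
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)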

Then, we compared the results head-to-head.


[Infographic: AI executive assessment analysis compared side by side with fully human analysis.]

First Impressions: AI Looks the Part

The AI output was, at first glance, surprisingly good. The structure was clean. The language was professional. It offered clearly labeled categories, cohesive groupings, and summaries that looked like the kind of work we’d deliver to a client team.

If you had scanned the output without deep knowledge of the source material, you might have assumed it was a credible synthesis.

But once we compared it to our own analysis, the flaws began to surface.


Where the AI Fell Short

Despite the professional presentation, the AI executive assessment analysis contained significant gaps and inaccuracies:

  • Shallow pattern recognition: It lumped together concepts that looked similar on the surface but had different meanings beneath.
  • False positives: Some themes appeared in only one or two executives’ reports but were generalized across the entire group – a misread that would have produced misleading recommendations in a group setting.
  • Poor contextual interpretation: The AI often failed to correctly interpret subtle differences in tone or intent. In a few cases, it reversed the meaning of a quote – misrepresenting praise as criticism.
  • Lack of prioritization: While the AI could list many themes, it struggled to discern which ones were material and which were simply noise.

Ultimately, the analysis felt like something you’d expect from a capable but inexperienced junior consultant – technically sound in format, but off the mark in judgment.


What the AI Got Right

To be fair, the AI did a few things well:

  • Speed and formatting: The analysis was fast. It produced a structured, presentable document in minutes – what would have taken a human hours.
  • Basic grouping: It was competent at creating general categories (e.g., “collaboration,” “execution,” “communication”) and sorting quotes accordingly.
  • Clean language: The output was polished and readable – something that could easily be mistaken for client-ready work.

But none of those strengths offset the risks introduced by flawed insight.


Why This Matters: The Stakes Are High

Executive assessments aren’t casual documents. They’re used in decision-making that affects leadership pipelines, team performance, and organizational strategy.

Typically, clients use these reports to inform:

  • Succession planning
  • Team interventions
  • Selection
  • Individual development plans

If a consultant misreads the data – or presents themes that aren’t actually present – the downstream effects can be serious: coaching that misses the mark, talent decisions that backfire, or team dynamics that worsen rather than improve.

This is why AI executive assessment analysis, despite its speed and surface-level polish, can’t yet be trusted without rigorous human review.


Implications for CEOs and CHROs

If you’re relying on executive assessments to inform leadership decisions, this experiment offers a cautionary insight: AI can support your process – but it can’t replace judgment.

Here are three takeaways to consider:

1. Insight still requires human discernment.
The quality of your leadership decisions depends on how well subtle patterns, contradictions, and context are interpreted. AI can summarize content, but it doesn’t yet understand tone, nuance, or organizational dynamics in the way a seasoned advisor can.

2. Presentation isn’t the same as precision.
AI outputs may look polished and professional, but that doesn’t mean the conclusions are sound. As we saw, errors in interpretation or prioritization can lead to coaching misfires or risky succession choices.

3. Use AI as a tool – not a shortcut.
AI can speed up logistics (like sorting quotes or formatting themes), but it shouldn’t drive the final analysis. If you’re working with consultants or in-house teams using AI, make sure human judgment is front and center.

In high-stakes leadership decisions, style can’t substitute for substance. And when your next successor or team intervention hinges on what the data is really saying, you’ll want the analysis to be more than just fast – you’ll want it to be right.

What’s Next: AI as a Tool, Not a Strategist

AI is evolving quickly. Future versions will likely do a better job at understanding tone, drawing inferences, and integrating cross-report themes.

But today, it’s clear: AI isn’t ready to replace the consultant in executive assessment work. And it may never fully replace the level of trust, judgment, and discretion required in high-stakes leadership conversations.

Instead, the opportunity is to augment consulting work – using AI for the mechanical parts of data handling, quote collation, and first-draft synthesis – while keeping the human mind in charge of what matters most: judgment, clarity, and strategic relevance.
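
As one small illustration of that mechanical work, here is a minimal sketch of a first-pass quote collation step. The theme keywords and sample quotes are invented for the example; in practice a consultant defines the themes and reviews every bucket, with the code doing nothing more than the initial sort.

    from collections import defaultdict

    # Illustrative theme keywords; a real engagement would tune these by hand.
    THEMES = {
        "collaboration": ["collaborat", "partner", "team"],
        "execution": ["execut", "deliver", "results"],
        "communication": ["communicat", "listen", "message"],
    }

    def collate_quotes(quotes):
        """Sort stakeholder quotes into rough theme buckets for human review.

        Quotes matching no keyword land in 'unsorted' so a consultant can
        place them; the machine only does the first, mechanical pass.
        """
        buckets = defaultdict(list)
        for quote in quotes:
            lowered = quote.lower()
            matched = [theme for theme, keywords in THEMES.items()
                       if any(kw in lowered for kw in keywords)]
            for theme in matched or ["unsorted"]:
                buckets[theme].append(quote)
        return buckets

    sample_quotes = [
        "She partners well across functions.",
        "He consistently delivers results under pressure.",
        "Stakeholders feel heard when she listens first.",
    ]
    for theme, items in collate_quotes(sample_quotes).items():
        print(theme, "->", items)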

We’d love to discuss this further with you.
