06 May 2026
After the strategy slides, the taxonomy work, the manager training, and the learning-in-the-flow-of-work pilots, every L&D leader eventually runs into the same uncomfortable question — and it almost always comes from a CFO or a CEO: is any of this actually working? In 2026, the organisations getting skills-based L&D right are the ones that have stopped trying to answer that question with the old metrics and have instead rebuilt the way they measure capability from the ground up.
Because here’s the catch: skills-based L&D and traditional L&D measurement aren’t just misaligned — they’re built on completely different assumptions about what success looks like. You can’t run a skills-based strategy and report on it with course-completion dashboards, any more than you can run a modern engineering team on lines of code as a productivity metric. The numbers will look fine. The story they tell will be wrong.
The Trouble With Traditional L&D Metrics
Traditional L&D reporting was designed for a world where the deliverable was the course. Completion rates, hours of training delivered, post-course NPS, mandatory compliance percentages — these are operational metrics. They tell you whether the L&D function is functioning. They don’t tell you whether anyone in the business is more capable than they were six months ago.
That gap stayed invisible for a long time because nobody was asking. As long as the dashboards turned green and the audits passed, the question of “are people actually getting better at their jobs?” sat in the too-hard pile. In 2026, with skills-based strategies in active rollout and finance teams asking pointed questions about return on capability investment, the gap is no longer ignorable. The metrics that made L&D defensible in the past are now actively misleading the people trying to steer it.

What “Capability Measurement” Actually Looks Like
Measuring capability instead of activity means tracking three different kinds of signal, all anchored to the same skills taxonomy:
- Demonstrated proficiency. Has the person shown — through assessment, project artefact, peer review, or applied evidence — that they can perform a skill at a defined level? This is the closest thing to a “score,” but it’s grounded in observable evidence rather than self-rating.
- Applied behaviour. Is the skill actually being used in real work? This signal usually comes from manager check-ins, performance conversations, and lightweight prompts embedded in the flow of work — not from annual self-report surveys.
- Business outcome. Are the metrics the skill is supposed to influence — quality, cycle time, customer scores, sales, safety incidents — moving in the expected direction for the people building the skill?
No single one of these tells you the whole story on its own. A passed assessment isn’t capability. A manager’s view that someone is “doing better” isn’t capability either. What matters is whether the three signals start to line up over time. When proficiency rises, the right behaviours show up in the work, and the related business outcomes improve for the same cohort, you have something close to genuine evidence that capability is moving.
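To make that concrete, here is a minimal sketch of what those three signals might look like as records anchored to the same taxonomy ID. Every name in it (SkillSignal, the SignalKind values, the field names) is hypothetical, not any particular vendor’s schema:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class SignalKind(Enum):
    PROFICIENCY = "proficiency"  # assessment, project artefact, peer review
    BEHAVIOUR = "behaviour"      # manager check-in, in-flow-of-work prompt
    OUTCOME = "outcome"          # movement in a linked business metric

@dataclass
class SkillSignal:
    skill_id: str      # stable ID from the shared skills taxonomy
    person_id: str
    kind: SignalKind
    level: int         # proficiency level the evidence supports
    observed_on: date
    source: str        # e.g. "assessment", "manager_checkin", "kpi:cycle_time"

def evidence_is_triangulated(signals: list[SkillSignal]) -> bool:
    """Capability evidence counts when all three signal kinds
    line up for the same skill and cohort."""
    return {s.kind for s in signals} == set(SignalKind)
```

The `evidence_is_triangulated` check is the argument above in data form: a stack of passed assessments alone returns False; only the full triangle of proficiency, behaviour, and outcome does.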
Why the Old Reporting Stack Can’t Carry the New Question
Most organisations discover the limits of their reporting stack about three months into a serious skills-based effort. The LMS knows about courses and completions. The performance system knows about ratings and goals. The HRIS knows about roles and titles. None of them know about skills as a shared object — which means none of them can answer “how is capability X moving across the organisation?” without a quarterly heroic effort by someone in HR analytics with a spreadsheet and a long evening.
This is exactly why the taxonomy work matters as a foundation. Without a single, agreed list of skills with stable IDs, the analytics layer has nothing to attach to. With it, every learning event, every manager check-in, every assessment result, and every internal move can be tagged to the same underlying capabilities — and a real picture starts to emerge. The analytics question and the taxonomy question are, in practice, the same question approached from opposite directions.
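As an illustration (the skill IDs, field names, and monthly-average roll-up are invented for this sketch, not a real schema), this is the kind of query that becomes trivial once every event carries a stable skill ID:

```python
from collections import defaultdict
from datetime import date

# Hypothetical taxonomy: one stable ID per skill, agreed across systems.
SKILLS = {
    "SKL-0042": "Stakeholder communication",
    "SKL-0117": "SQL data analysis",
}

def capability_trend(events: list[dict], skill_id: str) -> dict[str, float]:
    """Answer 'how is capability X moving?' by averaging the evidenced
    level per month, once every event carries the same skill_id."""
    if skill_id not in SKILLS:
        raise KeyError(f"{skill_id} is not in the agreed taxonomy")
    by_month: dict[str, list[int]] = defaultdict(list)
    for e in events:
        if e["skill_id"] == skill_id:
            by_month[e["observed_on"].strftime("%Y-%m")].append(e["level"])
    return {m: sum(v) / len(v) for m, v in sorted(by_month.items())}

events = [
    {"skill_id": "SKL-0042", "level": 2, "observed_on": date(2026, 3, 10)},
    {"skill_id": "SKL-0042", "level": 3, "observed_on": date(2026, 5, 2)},
]
print(capability_trend(events, "SKL-0042"))  # {'2026-03': 2.0, '2026-05': 3.0}
```

Without the stable `skill_id`, that join is exactly the spreadsheet-and-long-evening exercise described above.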

How Modern Skills Analytics Closes the Loop
The systems carrying skills-based L&D in 2026 don’t treat analytics as a reporting afterthought. They treat it as part of the operating model. Every time a learner finishes a piece of content, completes a goal, demonstrates a skill in a project, or receives feedback from a manager, the system writes a signal back against a specific skill at a specific level. Over weeks and months, those signals form a living picture of capability — at the individual, team, and organisation level — that doesn’t need a quarterly stitch-together to make sense.
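A minimal sketch of that write-back pattern, assuming a simple append-only event log (the function and field names are illustrative, not any product’s real API):

```python
from datetime import date

def record_signal(store: list, person_id: str, skill_id: str,
                  level: int, source: str) -> None:
    """Write one capability signal the moment an event happens,
    instead of reconciling three systems at quarter end."""
    store.append({
        "person_id": person_id,
        "skill_id": skill_id,   # stable taxonomy ID
        "level": level,         # proficiency level the event evidences
        "source": source,       # e.g. "content_completed", "manager_feedback"
        "observed_on": date.today(),
    })

# Every touchpoint writes through the same function, so the data is
# consistent by design rather than reconciled after the fact.
signals: list = []
record_signal(signals, "emp-219", "SKL-0042", 2, "content_completed")
record_signal(signals, "emp-219", "SKL-0042", 3, "manager_feedback")
```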
This is where KnowHow’s Training Goals and skills analytics earn their keep. Because every goal is tied to a defined skill at a defined level, and every learning interaction feeds the same model, the data is consistent by design rather than reconciled after the fact. Managers see exactly which skills their team is building, where gaps are widening, and which interventions are actually shifting the dial. L&D leaders see capability movement across the business, not just course traffic. And executives finally get a view of the workforce’s skills that means the same thing in May as it did in March.
Measure What You Want to Become
There’s an old line that you become what you measure. For two decades, L&D has been measuring courses delivered and tickets closed — and it has, broadly, become a function that delivers courses and closes tickets. The shift to skills-based L&D is, at its core, a decision to start measuring capability instead. That choice quietly forces every other part of the system — content, manager conversations, learning experiences, taxonomy, technology — to reorganise around the thing that actually matters. In 2026, the L&D functions winning the budget conversation are the ones that have stopped reporting on activity and started reporting on capability.