This is the second part of a two-part post. Part 1 covers the problem - how procedural training has set junior engineers up for cognitive surrender in an age of AI agents, and why the protective factor is reasoning ability, not tool knowledge. This part is about what to do about it.
Mentor Like You Prompt #
Think about how you treat an AI agent's output. You probe the reasoning - not just ‘is this right?’ but ‘why did you do it this way?’ - because fluency and correctness are not the same thing, and the answer is usually where you find the difference.
Now think about how most of us mentor junior engineers.
We hand them a ticket. We check back in a week. We review the output at the end, flag the problems, and wonder why the same mistakes keep recurring.
The most effective mentoring inverts that cadence. Small, bounded tasks with clear expected outputs. Active review of each piece before the next one starts. Targeted questions that require the junior to explain their reasoning, not just defend their result. “Walk me through why you chose this join type” is a much better question than “does this look right?”
This isn’t micromanagement. It’s calibration.
When you ask a junior to explain their output and they can’t - when the explanation reveals they followed a pattern without understanding it - you’ve caught a cognitive surrender event before it reaches production. And when they can explain it, with appropriate uncertainty and awareness of tradeoffs, you know the mental model exists. Give them a bigger problem.
The progression is: understand, explain, extend. Not: do, review, repeat.
What Happens When You Flip the Script #
I know this approach works because I’ve seen it work. Not theoretically. In a room, in Oslo, with a group of students who did not enjoy the first hour of what we were asking them to do.
Many years ago, when we were working together at Atea, Heini Ilmarinen and I were running a training and had decided to try something different. No step-by-step instructions. No worked example to follow. Instead, we handed students a business scenario - a real customer problem, messy and underspecified the way real customer problems always are - and told them: this is what the customer needs. Design a solution.
Produce architectural drawings. Write up a design document. And then - this is the part that made people uncomfortable - explain not just what you built, but why. How you arrived at this architecture and not some other one. What issues you identified along the way. What you considered and discarded. What tradeoffs you made and why you made them.
The room got quiet in a way that is very different from the quiet of people reading instructions. It was the quiet of people realising they didn’t have a script to follow.
Some groups went in circles for a while. Some made confident early decisions they had to walk back when someone asked an inconvenient question. One group built something technically coherent but completely misread the business requirement - and when they presented it, that became the most valuable 20 minutes of the training, because we could discuss how they had misread it and what assumptions had led them there.
The debrief wasn’t “here’s the right answer.” It was “walk us through your reasoning.” And in that walkthrough - in the justifications, the hesitations, the places where two people had disagreed and one position had won - you could see exactly what each student actually understood and what they were still pattern-matching without comprehension.
Correct execution is compatible with zero understanding. That was true before AI. It’s catastrophically more relevant now.
What This Looks Like as a Curriculum #
If I were redesigning onboarding for junior data engineers today, here’s how the weight distribution would shift.
Procedural skills - SQL, pipeline syntax, tool configuration, deployment steps - would be taught with AI assistance from day one. Explicitly framed: “The agent handles the mechanics. You handle the reasoning, but learn the basic syntax as you go.” That’s honest about the job they’re walking into.
Conceptual foundations - data modeling theory, semantics of a fact versus a dimension, how aggregation interacts with grain - would be taught without AI assistance, through Socratic dialogue, broken examples, and case studies. Not as a purity exercise. Because these concepts need to exist in the engineer’s head independently of any tool that can produce an instance of them.
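The grain problem is easy to state and easy to get wrong in practice. As a minimal sketch (hypothetical toy data, not from the post): a measure that lives at order grain gets silently inflated when you join it to order lines and then sum, because the join fans out the rows.

```python
# Minimal illustration of the "grain" trap: summing a measure after a
# join that fans out rows inflates the total. Data is hypothetical.

orders = [  # fact rows at order grain: one row per order
    {"order_id": 1, "total": 100.0},
    {"order_id": 2, "total": 50.0},
]
items = [  # order lines: several rows per order
    {"order_id": 1, "sku": "A"},
    {"order_id": 1, "sku": "B"},
    {"order_id": 2, "sku": "C"},
]

# Naive join-then-sum: each order row is repeated once per line item,
# so the order-level measure is counted once per line.
joined = [o for i in items for o in orders if o["order_id"] == i["order_id"]]
naive_total = sum(row["total"] for row in joined)    # 250.0, not 150.0

# Correct: aggregate the measure at the grain it lives at.
correct_total = sum(o["total"] for o in orders)      # 150.0
```

An engineer who holds the concept can spot this in a query an agent wrote; one who only pattern-matches will ship the 250.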
And the highest-investment training would be case-based diagnosis. Here is a real business problem. Here is what the data currently shows. Here is what the business claims it should show. Find the gap, explain the cause, propose the fix.
No agent can own that end-to-end. Working through it builds exactly the reasoning pattern that makes someone resistant to cognitive surrender - in AI output review, in stakeholder conversations, in architecture decisions, in everything that actually matters in this job.
We Are Eating Our Seed Corn #
Here’s the argument I keep hearing: AI can do the junior’s job, so why hire juniors?
I think that’s wrong. But even if it were partially right, the conclusion being drawn from it is shortsighted in a way that will be painful in a few years.
The numbers are striking. Layoffs.fyi [2] tracked 245,953 tech workers laid off in 2025 alone - and 2026 is running at nearly 1,000 per day. Companies posting record profits are simultaneously citing “AI efficiency” as justification for cutting headcount. That’s the headline story. But the more important story is buried in the Burning Glass Institute’s “No Country for Young Grads” report [3]: between 2018 and 2024, the share of software development roles requiring three years of experience or less dropped from 43% to 28%. In data analysis, from 35% to 22%. Total job postings in these fields stayed flat or increased. Senior hiring held steady. Companies aren’t hiring fewer people. They’re skipping the junior tier entirely.
Right now, large organisations are cutting junior roles at a pace that feels rational in a spreadsheet and reckless in practice. The seniors who remain were once juniors who had time, mentorship, and the space to build conceptual understanding from the ground up. They developed judgment by making mistakes in low-stakes situations. They learned to diagnose by building things badly first. That pipeline is being shut off.
You cannot skip a generation of practitioners and expect institutional knowledge to survive. The seniors retire. The experts move on. And the people who were supposed to replace them were never hired.
This is what farmers call eating your seed corn. It solves a short-term problem by destroying your capacity to grow anything next season.
The market is already showing signs of what happens next. Forrester [4] found that 55% of employers regret their AI-driven layoffs. Klarna replaced 700 employees with AI, quality declined, customers revolted, and the company had to rehire humans. These aren’t edge cases - they’re early signals.
Here’s what I actually believe: there has never been a better time to hire and train junior data engineers. Not despite AI. Because of it. The organisations cutting juniors today are making their competitors’ talent strategy for them. Engineers who go through rigorous, concept-first training right now - who learn to reason about data with agents rather than instead of agents - will be extraordinarily scarce and extraordinarily valuable in two or three years.
Supply is collapsing. Demand will not.
The question is whether your organisation will have any of those engineers, or whether you’ll be trying to hire them from someone else at a significant premium.
The Uncomfortable Conclusion #
We have spent years optimizing junior training for a world where doing the thing quickly was the primary value. That world automated itself. The thing agents cannot do - and will not do anytime soon - is develop a well-calibrated suspicion about their own output. They don’t know when they’re wrong. They’re confident either way.
That’s our job. That’s the irreplaceable human contribution in a data engineering team that uses AI well.
We either train for it deliberately, or we find out the hard way that we didn’t.
We can do better than that.
Join the Conversation #
Have you changed the way you train juniors? Have you, as a junior, changed how you learn? I’m curious what your learning process looks like. Find me on LinkedIn or BlueSky.
References #
- [1] Shaw, S. D., & Nave, G. (2026). Thinking-Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender. SSRN.
- [2] Layoffs.fyi. (2026). Tech Layoffs Tracker.
- [3] Levanon, G., Sigelman, M., et al. (2025). No Country for Young Grads: The AI Disruption of Entry-Level Jobs. Burning Glass Institute.
- [4] Beilfuss, L. (2025). The AI Layoff Trap: Why Half Will Be Quietly Rehired. HR Executive / Forrester Research.