March 23, 2026

What Engineering Leaders Really Think About AI in 2026


Amnic recently hosted a closed-door roundtable with 25 engineering leaders from fintech, healthcare, automotive, B2C, and other industries, over an evening of cocktails, to talk candidly about what's actually happening on the ground with AI and tooling adoption. The idea was to have an honest conversation about what's working, what isn't, and what keeps engineering leaders up at night.

The agenda was straightforward: where is AI actually showing up in engineering workflows (sometimes without teams even realising it), what does it mean when AI stops being a tool and starts influencing decisions, and who's accountable when things go wrong? We also wanted to dig into the cost of fragmented adoption: what happens when every engineer on the team is just picking up whatever AI tool they want with zero coordination.

What we didn't expect was quite how much the mood in the room would oscillate between excitement and genuine anxiety. Almost in the same breath.

Here's what came out of it.

The State of Play

Engineering leaders at the Futurescape Conclave converged on a pivotal moment: AI code generation has crossed a quality threshold in the last 90 days that is fundamentally reshaping how software teams operate. As AI moves from a passive tool to an active participant in the development lifecycle, early adopters report 80-90% of their code being AI-generated. Yet the conversation quickly moved past the "whether" to the harder questions: how do you govern it, measure it, hire for it, and build systems thinking around it when the tools are evolving faster than the processes?

Underlying the optimism are real concerns: technical debt accumulation, the erosion of low-level engineering expertise, and a near-total absence of frameworks for measuring true ROI. This report captures the key themes, verbatim voices, and actionable takeaways from that conversation.

Key Themes in the Room

1. The 90-Day Inflection Point 

If there was one thing the room agreed on instantly, it was this: something changed recently. Not gradually, suddenly. AI code generation has taken a "step function" leap in quality over the last three months, and the effect on teams has been almost disorienting. What previously took years can now be achieved in months. Some engineers are reporting they haven't written a single line of code manually in a week.

One leader summed up the collective feeling perfectly: "I am very happy and also very scared."

The conversation is no longer about adoption. It's about readiness, and whether teams, processes, and people are equipped to handle what's already here.

2. From Coding to Architecting: The New Engineering Moat

This theme kept coming back in different forms throughout the evening. The competitive differentiator for engineers is no longer writing code; it's knowing what to build, how to structure it, and guiding AI with architectural intent. As one leader put it:

"With AI coding, a good engineer and a very good engineer, the difference will not be the coding part; it will be the taste for architecture. So what architecture are you prompting or guiding the AI with?"

  • The 10x analyst: Leaders noted that the industry now needs "10x analysts" and "10x designers", not just "10x engineers." Horizontal AI models are handling the bulk of standard coding. The humans need to be operating a level above that.

  • Systems thinking: As automation takes over rote tasks, the skills that survive are the ones machines can't replicate, yet. "Systems thinking is the only thing that will survive. Everything else is getting automated."

And the economic reality was put bluntly by one leader: "Code generation is $5. Code review is $25. That is how it is." The room nodded. Nobody argued.

  • AI-first methodologies: One leader shared a workflow that's already showing results: test-first AI development. "Use your AI to write the basic test cases and requirements first...only when your test code is completed, then you can generate your code." Simple in theory, but it requires a discipline that most teams haven't built yet.
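As a concrete illustration of that test-first loop, here is a minimal sketch in Python. The requirement and the function name (`parse_amount`) are entirely hypothetical; the point is the ordering, where the tests exist and encode the requirements before any implementation is generated.

```python
# Step 1: write (or have the AI draft) the tests before any implementation
# exists. These encode the requirements; generation only starts once they do.

def test_parse_amount_basic():
    assert parse_amount("$1,250.50") == 1250.50

def test_parse_amount_rejects_garbage():
    try:
        parse_amount("not money")
    except ValueError:
        pass  # expected: non-monetary input must raise
    else:
        raise AssertionError("expected ValueError")

# Step 2: only now generate the implementation, iterating until tests pass.
def parse_amount(text: str) -> float:
    """Parse a dollar string like '$1,250.50' into a float."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    try:
        return float(cleaned)
    except ValueError:
        raise ValueError(f"not a monetary amount: {text!r}")
```

The value of the discipline is in step 1: the tests pin down behaviour (including the failure mode) before any generated code exists to bias them.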

3. The Quality & Experience Gap

This was probably the most uncomfortable part of the conversation, and the most honest. The "hiring trap" created by powerful AI tools in the hands of inexperienced developers came up repeatedly, and nobody had a clean answer for it.

  • The expertise premium: Better developers get dramatically more productive with AI. Less experienced developers just produce bad code faster. What separates the two is a foundational understanding of edge cases, architecture, and system design. AI doesn't give you that. You either have it, or you don't.

One leader drew a sharp parallel from their own experience with automation: "The people who are writing good code for automation are the people who do it manually first. Because they know what's happening on the ground, what are the issues, and what is the smart way of automating things." The same logic applies here.

  • Loss of "The Grilling": There's a generational concern brewing. The next wave of engineers may never get the reps: debugging low-level OS interrupts, wrestling with race conditions, the slow, painful work that builds real instinct. If you've never done it manually, how do you know when the AI gets it wrong?

  • Accountability: Regardless of how the code was generated, the human is still on the hook. "No one is going to ask AI why you wrote the code...you'll be answerable." That's not changing anytime soon.

4. Governance & Compliance Are the Real Bottlenecks

For the leaders in regulated industries at the table, the frustration was palpable. The constraint isn't AI capability, it's process. The tools have lapped the governance structures meant to oversee them.

Industry    | Adoption Level    | Primary Driver/Hurdle
Fintech/B2C | High/Rapid        | Speed-to-market, trend prediction
Healthcare  | Moderate/Cautious | Regulatory & compliance overhead (40%+ of process)
Automotive  | Accelerating      | Shift from 5-year cycles to yearly OTA updates

A principal architect at a major healthcare organisation painted a vivid picture: compliance and regulatory overhead account for over 40% of their release cycle. GitHub Copilot just got approved. Amazon Kiro recently joined the list. Want to try something new? That's a 2-3 month security review. Meanwhile, the AI landscape is releasing major updates every few weeks.

The pace of AI evolution is outrunning enterprise governance, and for heavily regulated industries, that gap isn't closing anytime soon.

5. ROI Remains Unmeasured and That's a Problem

This one hit differently. One head of engineering at the table is spending close to half a million dollars a year on AI tooling. When asked how they measure the return on that investment, the answer was: "It's subjective."

And they weren't alone. Nobody in the room had a clean answer. Lines of code pushed to production, meaningless. Commit frequency, meaningless. Cloud metrics, a proxy at best. The honest truth is that the industry is spending significant money on AI tooling with no agreed-upon way to know if it's working.

That's not a small problem. And it's only going to get harder to ignore as budgets grow.

  • The ROI challenge: Measuring return on investment for AI tools is currently described as "subjective," with no industry-standard way to track productivity beyond cloud metrics or lines of code pushed to production.

  • Fragmented tooling: Most organisations are standardising on a limited set of approved tools due to the lengthy 2-3 month security approval cycles for new AI software.

At a Glance: 5 Things the Room Agreed On

  1. The speed jump is real and sudden. AI code quality has hit a step function, not a gradual curve 

This wasn't a room full of early adopter evangelists. These are pragmatic engineering leaders who've seen hype cycles come and go. So when multiple people independently said something fundamental shifted in the last three months, not incrementally, but overnight, that's worth paying attention to. The tools got good enough, fast enough, that teams didn't have time to prepare.

  2. Great engineers get dramatically better with AI; mediocre ones just produce bad code faster 

AI is an amplifier, not an equaliser. The leaders in the room were clear: the gap between a strong engineer and a weak one isn't closing; it's widening. A senior engineer with deep architectural instinct uses AI to move at a pace that was previously impossible. A junior engineer without that foundation uses it to generate more code, faster, that nobody fully understands. The tool doesn't know the difference.

  3. The new premium skill is architectural thinking, not coding 

Writing code is increasingly the commodity. The value now lives in knowing what to build, how to structure it, and being able to guide AI toward the right solution. One leader put it simply: the difference between a good engineer and a great one will no longer show up in the code they write. It'll show up in the prompts they give and the architecture they design around it.

  4. Regulated industries are stuck: governance and compliance overhead is the real bottleneck, not the technology 

The irony for healthcare, fintech, and enterprise teams is that the AI is ready before they are. The tools exist. The capability is there. But when a new tool takes 2-3 months to clear security review, and your release cycle is already quarterly, the pace of AI innovation feels almost taunting. The bottleneck isn't ambition, it's process. And process moves slowly by design.

  5. Nobody has figured out how to measure ROI yet, even while spending half a million a year on tools 

This might be the most important unsolved problem in the room. Organisations are making significant financial commitments to AI tooling with no reliable way to know if it's paying off. The old proxies (lines of code, commit frequency, story points) don't map cleanly onto AI-assisted workflows. Until the industry agrees on what productivity actually looks like in this new world, spending decisions are essentially running on gut feel.

Snapshots from the Futurescape Conclave

The Next Move

  1. Establish a test-first AI policy: Mandate that AI-generated code cannot be pushed to production without AI-generated test cases written first. Several leaders are already moving this way and seeing measurable quality improvements.

  2. Redefine engineering hiring around systems thinking: Shift interview rubrics away from coding challenges toward architecture, debugging instinct, and prompt design. The engineers who will thrive are those who can guide AI, not just use it.

  3. Build a cross-team AI governance framework: Fragmented tool adoption is creating hidden costs and compliance risk. Engineering orgs need a centralised but lightweight process for evaluating, approving, and measuring AI tools, before the sprawl becomes unmanageable.

  4. Invest in foundational engineering education, before the knowledge gap becomes permanent: The concern about the next generation missing "the grilling" isn't just a cultural worry, it's a business risk. If your team can't identify when AI gets it wrong, you have a problem. Engineering orgs need to be intentional about building foundational knowledge alongside AI adoption, not assuming the tools will teach people what they need to know.

  5. Define what ROI actually means for your team, and measure it consistently: Half a million a year on AI tooling with no framework to evaluate it is not a sustainable position. Before the spend grows further, engineering leaders need to agree internally on what productivity looks like in an AI-assisted world, whether that's deployment frequency, time to production, incident rates, or something else entirely. The metric matters less than having one.
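That last point can start small: compute a handful of agreed metrics from data most teams already have. Below is a minimal sketch in Python; the deployment records and field layout are hypothetical, not drawn from any particular tool, and the three functions roughly mirror the measures the leaders named (deployment frequency, time to production, incident rates).

```python
from datetime import date

# Hypothetical deployment records:
# (deploy date, days from first commit to production, caused an incident?)
deploys = [
    (date(2026, 3, 2), 4, False),
    (date(2026, 3, 9), 2, False),
    (date(2026, 3, 16), 6, True),
    (date(2026, 3, 23), 3, False),
]

def weekly_deploy_frequency(deploys, weeks):
    """Deployments per week over the measurement window."""
    return len(deploys) / weeks

def avg_lead_time_days(deploys):
    """Mean days from first commit to production."""
    return sum(lead for _, lead, _ in deploys) / len(deploys)

def change_failure_rate(deploys):
    """Share of deploys that caused an incident."""
    return sum(1 for _, _, failed in deploys if failed) / len(deploys)

print(weekly_deploy_frequency(deploys, weeks=4))  # 1.0
print(avg_lead_time_days(deploys))                # 3.75
print(change_failure_rate(deploys))               # 0.25
```

Which metrics you pick matters less than computing them the same way every cycle, so a trend is visible before the next budget conversation.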

The Futurescape Conclave made one thing clear: the question is no longer whether AI belongs in engineering workflows, it already does. The leaders who will define the next era are those who move fast enough to capture the gains while building the governance, taste, and human judgment to make sure what ships is actually good.

If you're thinking about where your team stands on any of this, including managing cloud costs as AI usage scales, we're here to help. Take a look at what we're building at Amnic.
