FIELD ASSESSMENT

The Activation Gap

Access to AI tools is not the same as using them. And not using them is becoming a structural career risk.

George Andronchik • Data AI Architect, OrbitaLab • fellows.tech

March 2026

FRAMING
This is not a report about AI adoption in the abstract. It is a practitioner’s assessment of a specific, observable pattern: organizations that have deployed AI tools widely are not seeing those tools used meaningfully, and the gap between deployment and activation is starting to produce workforce consequences.
The data cited here comes from Deloitte, BCG, McKinsey, PwC, Gallup, and Microsoft — organizations that survey large enterprise populations. The field observations come from OrbitaLab’s active engagements with enterprise clients. Both lines of evidence point in the same direction.
The intended reader is a senior practitioner who already understands the technology. The question this paper addresses is not “should you use AI” but “what does the current pattern of non-use actually mean for people in technical roles, and what should they do about it.”

What the Enterprise Data Actually Shows

The headline numbers on AI adoption are misleading if taken at face value. McKinsey’s 2025 State of AI survey reports that 88% of organizations use AI in at least one business function — up from 78% the year prior. [4] That figure is real, but it describes the breadth of deployment, not the depth of use.

When you look at what’s actually happening inside those organizations, the picture changes:

  • Nearly two-thirds of those 88% are still in the pilot or experimentation phase. Only 7% have fully scaled AI across their enterprise. [4]
  • BCG’s study of 1,250 firms found that 60% report no material return from AI despite significant investment. Only 5% qualify as genuinely generating value at scale. [3]
  • PwC’s 2025 Global Workforce Survey found that while 54% of workers have used AI in their role in the past 12 months, only 14% use it daily. [1]
  • Gallup’s tracking data shows 81% of frontline workers have never used an AI tool at work. [5]

Deloitte’s 2026 State of AI in the Enterprise report captures the pattern most precisely: worker access to AI tools grew 50% in one year — from under 40% to around 60% of employees. Among those with access, fewer than 60% use it in their daily workflow. [8]

Access expanded. Habits did not.

14% Share of global workers using generative AI daily

up only 2 points from 12% in 2024, despite a 50% expansion in access. PwC, Nov 2025. [1]

The standard interpretation of these numbers is that organizations need better change management. That is true but incomplete. A more precise interpretation is that the gap between access and activation is now producing a measurable divergence in outcomes between the workers on either side of it.

The Outcome Divergence

The research on what happens to workers who actually use AI daily — versus those who have access but don’t — is consistent across sources.

PwC found that compared to infrequent users, daily AI users are:

  • 92% more likely to report productivity improvements
  • 58% more likely to report job security
  • 52% more likely to have received a salary increase
  • Significantly more optimistic about their role over the next 12 months

BCG’s data adds a structural dimension. The 5% of companies generating real AI value at scale achieve 1.7x revenue growth, 3.6x three-year total shareholder return, and 2.7x greater ROI compared to the 60% generating no material return.

The individual-level and firm-level patterns are reinforcing each other. Workers who activate AI are more productive, better compensated, and more secure.

5% Share of companies globally achieving AI value at scale, with significantly higher growth and shareholder returns. BCG, September 2025.

A Field Observation: The Same Job Title, Two Outcomes

The aggregate data is useful for establishing the pattern. What follows is a specific case drawn from OrbitaLab’s current enterprise work.

One of our clients — a large organization with a mature data infrastructure function — is currently automating a significant portion of their data infrastructure support workflows.

The automation is being built by data engineers on the same team who are proficient in AI tooling: prompt engineering, agentic workflow design, LLM-assisted code generation, and automated pipeline monitoring.

The roles being made redundant are held by data engineers in the same job title who are not doing those things.

The question our clients are now asking is not how to introduce AI to their teams. It is how to redeploy the people who adapted and manage the transition for those who did not. Those are operationally very different problems.
— George Andronchik, OrbitaLab

This pattern is worth examining carefully because it challenges a common framing. The narrative that “AI replaces jobs” obscures what is actually happening in practice. The job title is not being eliminated. The function is being automated by someone in that same job title who learned to use the relevant tools. The displacement is lateral, not vertical.

This has a specific implication for technical practitioners: the risk is not that your role disappears from the org chart. It is that a colleague or contractor with the same title — but a different practice — becomes the person your organization builds around.

Why the Gap Persists: What the Data Says vs. What Practitioners Know

The research literature offers three standard explanations for why AI activation lags deployment: insufficient training, unclear organizational vision, and employee anxiety. All three are real. But they do not fully capture what practitioners observe on the ground.

The training explanation

BCG found that only 36% of employees believe their AI training is sufficient, and 18% of regular AI users received no training at all. Employees who received more than five hours of training show 79% regular AI usage, versus 67% for those with less. [3] The correlation is real but the effect size is modest. Training explains some of the gap, not most of it.

What the training data does not capture is that many practitioners who are not using AI have received training and still do not use it. The barrier is not primarily informational.

The organizational vision explanation

Microsoft’s 2025 Work Trend Index found that 82% of leaders say this is a pivotal year to rethink core strategy and operations — yet only 24% have deployed AI organization-wide, while 12% remain in pilot mode. [7] The gap between urgency and execution is measurable. BCG and Columbia Business School found a 45-point gap between executives who believe employees are enthusiastic about AI (76%) and employees who actually are (31%). [6]

These gaps are real and they matter. But senior practitioners are rarely waiting for a corporate AI strategy before deciding whether to use tools in their own practice. The vision gap explains frontline hesitation better than it explains hesitation among experienced engineers and technical leads.

What practitioners know that the data does not capture

In OrbitaLab’s experience, the more precise explanation for non-adoption among senior technical practitioners is a combination of three things:

  • Workflow inertia. Established practitioners have optimized their existing workflows over years. Integrating AI tools requires rebuilding those workflows, which involves a period of reduced output before increased output. That cost is real and immediate; the benefit is deferred.
  • Identity resistance. Senior engineers have professional identities built around technical judgment and expertise. Tools that appear to shortcut that judgment can feel like a threat to professional identity, not just a productivity aid. Microsoft found that 52% of knowledge workers still treat AI as a command-based tool — issuing direct instructions rather than engaging it as a thinking partner. [7] Among senior practitioners, that pattern often reflects something deeper than habit: using AI as a sophisticated search engine preserves the feeling of authorship over the output. Treating it as a collaborator requires ceding some of that control.
  • Unclear ROI at the task level. Enterprise AI rollouts tend to produce capability at the platform level without clear integration at the task level. A practitioner who does not see how a given tool improves their specific daily workflow has a rational reason not to use it.

None of these are insurmountable. But they require different interventions than standard training programs.

The Block Case: A Signal, Not a Proof

On February 26, 2026, Block announced it would reduce its workforce from over 10,000 to just under 6,000. Jack Dorsey attributed the decision to AI-enabled efficiency gains, and predicted that most companies would reach the same conclusion within a year. [2]

It is worth being precise about what this case does and does not show.

What it shows

Block is financially healthy. Gross profit grew 24% year-over-year. This is not a distressed company cutting costs out of necessity. It is a company restructuring its workforce because it believes a smaller, AI-augmented team can deliver the same or greater output. That is a specific and consequential claim, and the market responded positively — shares rose over 16% following the announcement. [2]

Dorsey’s prediction that most companies will reach the same conclusion within a year may prove overstated. But the underlying logic — that AI tooling changes the headcount required to operate at a given output level — is not controversial among operators who have actually deployed these systems.

What it does not show

Several analysts noted, correctly, that Block’s workforce had expanded from 3,800 employees in 2019 to over 12,500 in 2022, and that pandemic-era overhiring likely contributed to the scale of the cuts. Dorsey acknowledged this. Bloomberg reported that the layoffs sit “at the center of a complex debate” between genuine AI-driven displacement and companies using the AI narrative to dress up cost-cutting. [2]

That debate is legitimate. The practitioner’s takeaway, however, is not about Block specifically. It is about the selection criterion being applied when organizations make difficult workforce decisions. The engineers building Block’s AI tooling were not among the roughly 4,000 who were cut. The distinction between the people who build the automation and the people whose work it automates is the pattern worth attending to.

Whether or not AI is the primary driver of any given round of layoffs, it is increasingly the primary variable in who gets retained when organizations restructure. That is a different and more durable claim.
— George Andronchik, OrbitaLab

Practical Implications for Technical Practitioners

The following is not a prescriptive framework. It is a set of observations about what practitioners who are navigating this well appear to be doing differently from those who are not.

On individual practice

  • The practitioners managing this transition well are not those who learned about AI — they are those who rebuilt specific workflows around it. The unit of adoption that matters is not the tool but the task. Identify two or three high-frequency tasks in your current role and systematically replace the existing workflow with an AI-assisted one. Evaluate the result honestly.
  • The workflow rebuild period involves a real productivity dip. Budget for it. Practitioners who abandon AI tools during the integration period — before the new workflow is optimized — consistently report that the tools do not work. Practitioners who work through that period report the opposite.
  • Proficiency compounds. BCG’s data shows that daily AI users widen their advantage over infrequent users over time. [3] The practitioners who started rebuilding workflows 18 months ago are not at the same point as those starting today. The gap is real and it is growing.

On team and organizational dynamics

  • The perception gap documented by BCG and Columbia — executives at 76%, individual contributors at 31% enthusiasm — is not primarily a communication problem. [6] It is a different-reality problem. Leaders who have seen AI work in controlled contexts are genuinely more optimistic than practitioners who have tried to integrate poorly scoped enterprise deployments into real workflows. Closing the gap requires acknowledging that both perceptions are partially accurate.
  • Organizations that are generating real AI value are not primarily distinguished by the tools they deployed. They are distinguished by workflow redesign. BCG found that companies undertaking end-to-end workflow redesign show 23–26 percentage point higher AI adoption among employees compared to those doing tool rollout only. [3] Deployment without redesign produces the activation gap.
  • Governance is lagging dangerously behind deployment. Deloitte found that 74% of companies plan to deploy agentic AI within two years, but only 21% have a mature governance model for autonomous agents. [8] For practitioners in engineering and architecture roles, this is an immediate professional responsibility, not a future concern.

Where This Is Heading

The activation gap is not a temporary state that will resolve itself as AI tools improve or as organizations run more training programs. It is a structural divergence between practitioners who are rebuilding their practice around AI capabilities and those who are not. The divergence is compounding.

The Block case is one data point in a larger pattern. Klarna, Amazon, Salesforce, and others are all making similar structural arguments, with varying degrees of credibility. What is consistent across those cases is not the scale of the cuts but the selection logic: the practitioners building automation are not the ones being cut.

For the fellows.tech community — engineers, architects, technical leads, and investors evaluating technical organizations — the relevant question is not whether to use AI. That debate is settled. The question is whether your current practice, and the practices of the teams you work with, reflect the level of AI integration that will be table stakes in 18 months. Based on the current data, most do not.

The window for building that capability while it is still a differentiator, rather than a baseline expectation, is closing. It has not closed yet.

REFERENCES

[1] PwC. 2025 Global Workforce Hopes & Fears Survey. PwC, November 12, 2025. https://www.pwc.com/gx/en/news-room/press-releases/2025/pwc-2025-global-workforce-survey.html

[2] CNBC / Bloomberg / Fortune. Block laying off about 4,000 employees, nearly half of its workforce. CNBC, February 26, 2026. https://www.cnbc.com/2026/02/26/block-laying-off-about-4000-employees-nearly-half-of-its-workforce.html

[3] BCG. The Widening AI Value Gap: Build for the Future 2025. Boston Consulting Group, September 30, 2025. https://www.bcg.com/publications/2025/are-you-generating-value-from-ai-the-widening-gap

[4] McKinsey & Company. The State of AI in 2025: Agents, Innovation, and Transformation. McKinsey Global Survey, November 2025. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

[5] Gallup. AI in the Workplace: Answering 3 Big Questions. Gallup Workplace, 2025. https://www.gallup.com/workplace/651203/workplace-answering-big-questions.aspx

[6] BCG & Columbia Business School (Lovich, D. & Meier, S.). Leaders Assume Employees Are Excited About AI. They’re Wrong. Harvard Business Review, November 26, 2025. https://hbr.org/2025/11/leaders-assume-employees-are-excited-about-ai-theyre-wrong

[7] Microsoft. Work Trend Index 2025: The Year the Frontier Firm Is Born. Microsoft WorkLab, 2025. https://www.microsoft.com/en-us/worklab/work-trend-index/2025-the-year-the-frontier-firm-is-born

[8] Deloitte AI Institute. State of AI in the Enterprise 2026. Deloitte, January 2026. https://www.deloitte.com/global/en/issues/generative-ai/state-of-ai-in-enterprise.html