Over the past year I’ve had the privilege of training teams in generative AI. These sessions range from quick‑hit workshops to multi‑week agent‑building bootcamps, and the feedback has been overwhelmingly positive. What has struck me most is how quickly teams move from curiosity to impact when the company and its leaders give people the tools and the time to actually grow. They start with simple prompt experiments, then discover patterns that free up hours of routine work.
Wharton’s 2025 Gen AI adoption report echoes many of the patterns I’ve seen myself, at least among the Finnish companies that approach AI adoption seriously. Based on surveys of 800 business leaders across sectors, the report shows that 46% of enterprise leaders already use generative AI daily and more than 80% do so at least weekly. In this article I’ll share a few of those themes, combining stories from my workshops with insights from the Wharton/GBK study.
AI ROI isn’t a dream anymore – it’s measurable
When I work with clients on their first AI pilot, I always hear the same question: “How do we know this is worth the investment?” In early 2023 most companies were experimenting without clear metrics. By late 2024, finance teams started asking for return on investment (ROI) calculations. Today, nearly three quarters of the enterprises surveyed by Wharton say they formally track Gen AI ROI. This is a dramatic shift from earlier years when ROI conversations were abstract.
The data is encouraging: 74% of organisations that measure AI ROI report positive returns. Smaller tier‑2 and tier‑3 firms (under $2 billion in revenue) realise benefits faster than the largest enterprises. Banking and finance, professional services and the tech sector see the strongest returns.
Insight
The “measure or bust” phase is here. Without clear ROI metrics, AI initiatives remain science projects. The companies I’ve trained that start with a defined baseline (e.g. how long a report takes today) are the ones that succeed in proving value quickly. Finance and HR functions lead the way; they treat Gen AI like any other investment by tracking productivity gains, throughput improvements and incremental profit. This mindset also helps secure more budget: 88% of enterprises anticipate increasing their Gen AI spending in the next year.
Framework / Takeaway
- Pick two measurable tasks. Start with simple but high‑volume processes such as invoice matching, document summarisation or routine analytics. In the banking sector, for example, AI is already used for invoice matching, fraud flagging and reporting.
- Establish before‑and‑after baselines. Measure the time spent, error rates or throughput without AI, then track the same metrics with AI assistance.
- Build a lightweight ROI dashboard. Even a shared spreadsheet can visualise hours saved and cost avoided; a minimal calculation sketch follows this list. Share the results broadly (transparency builds momentum).
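To make the before‑and‑after idea concrete, here is a minimal sketch in Python of the arithmetic behind such a dashboard. The task names, volumes, times and rates are illustrative assumptions, not figures from the report.

```python
from dataclasses import dataclass

@dataclass
class TaskBaseline:
    """Before/after measurements for one task (illustrative numbers only)."""
    name: str
    runs_per_month: int
    minutes_before: float  # average time per run without AI
    minutes_after: float   # average time per run with AI assistance
    hourly_cost: float     # loaded hourly cost of the person doing the task

    def hours_saved_per_month(self) -> float:
        return self.runs_per_month * (self.minutes_before - self.minutes_after) / 60

    def cost_avoided_per_month(self) -> float:
        return self.hours_saved_per_month() * self.hourly_cost

# Two hypothetical high-volume tasks, as suggested in the list above.
tasks = [
    TaskBaseline("invoice matching", runs_per_month=400,
                 minutes_before=12, minutes_after=4, hourly_cost=55),
    TaskBaseline("report summarisation", runs_per_month=80,
                 minutes_before=45, minutes_after=15, hourly_cost=65),
]

for t in tasks:
    print(f"{t.name}: {t.hours_saved_per_month():.0f} h saved, "
          f"~${t.cost_avoided_per_month():,.0f} avoided per month")
```

Swap in your own measurements; once the arithmetic lives in one place, extending it into a shared dashboard is straightforward.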
From pilots to performance: why 2026 will be the year of accountable acceleration
In my talks this year I’ve repeatedly predicted that 2026 will mark the shift from pilot projects to scalable performance. The Wharton report reinforces this view: organisations expect the next phase to be about “performance at scale”, where ROI metrics and proven playbooks enable enterprises to rewire workflows and deploy agentic systems. The research also notes that budgets are moving away from one‑off proofs of concept towards larger, performance‑justified deployments.
Insight
We’re on the cusp of what Wharton calls “accountable acceleration.” Pilot projects have shown value, but scaling them requires more than optimism. Leaders need to standardise processes, invest in training and build guardrails. According to the report, 82% of leaders use Gen AI at least weekly, yet 43% worry about declining skills as usage increases. This tension between high enthusiasm and concern over capability gaps underscores why structured frameworks will matter in 2026.
Framework / Takeaway
- Create a repeatable playbook. Document each step: identifying tasks, choosing models, measuring outcomes, addressing risks. A good playbook accelerates new projects and ensures consistency; a minimal template follows this list.
- Invest in change management. Upskilling employees and adjusting workflows isn’t optional. Budget for training as part of every AI project. I’ve seen teams stumble when tools are adopted without proper onboarding.
- Scale what works, sunset what doesn’t. Use the ROI dashboard to decide which pilots graduate to production and which should be shelved.
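As a starting point, here is one minimal way to encode such a playbook as a gated checklist in Python. The stage names and gate criteria are my own illustrative assumptions, not the report’s prescription.

```python
# A minimal, illustrative pilot playbook: each stage has an exit gate that
# must be satisfied before the project moves on.
PLAYBOOK = [
    {"stage": "identify task",    "gate": "task is high-volume and measurable"},
    {"stage": "choose model",     "gate": "model meets accuracy and data-handling requirements"},
    {"stage": "measure outcomes", "gate": "before/after baseline captured in the ROI dashboard"},
    {"stage": "address risks",    "gate": "compliance, privacy and skill-atrophy review signed off"},
    {"stage": "scale or sunset",  "gate": "ROI threshold met, or pilot formally shelved"},
]

def next_gate(completed_stages: set[str]) -> str | None:
    """Return the first unmet gate, or None when the pilot can graduate."""
    for step in PLAYBOOK:
        if step["stage"] not in completed_stages:
            return f'{step["stage"]}: {step["gate"]}'
    return None

print(next_gate({"identify task", "choose model"}))
# -> measure outcomes: before/after baseline captured in the ROI dashboard
```

In practice the same gates can live in a project tracker; the point is that every pilot passes the same checks in the same order.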
Finance and HR show the way in AI ROI tracking
When clients from financial services or human resources ask about AI adoption, they often share detailed ROI spreadsheets. That’s not a coincidence: the study notes that HR (84%) and finance (80%) functions report the highest rates of formal ROI measurement. Why? These departments are accustomed to metrics and governance. They also face regulatory pressure to demonstrate compliance and efficiency, which encourages disciplined experimentation.
In some of the conversations I’ve had earlier this year, a CFO in the financial sector mentioned wanting to use AI agents to automate identity verification (KYC) and the daily manual work behind back‑office risk analytics, automations that would free up time and improve decision‑making. The report offers similar examples of finance operations using AI for invoice matching, fraud flagging and reporting.
Insight
Financial and HR leaders aren’t waiting for perfection. They treat Gen AI as a tool that needs the same rigor as any other technology investment. They pilot in controlled environments, measure outcomes, and roll out widely only after clear results are achieved. Other sectors can learn from their disciplined approach.
Framework / Takeaway
- Adopt finance’s mindset for measurement. Even if you’re in marketing or product, borrow the finance playbook: define clear metrics, track variance and review quarterly.
- Use AI for compliance and reporting. In regulated industries, leverage AI to summarise policy changes, generate audit trails or flag anomalies. Start with small, low‑risk tasks and expand.
- Educate stakeholders. Finance and HR teams often succeed because they invest time in explaining AI’s impact to executives and staff. Don’t let the technology become a black box.
Small companies, faster ROI – why agility beats budget
There’s a myth that only big companies can harness AI. My experience training startups and mid‑sized firms suggests the opposite: smaller teams often move faster because they’re less constrained by bureaucracy. The study reinforces this: midsized tier‑2 ($250M–$2B revenue) and tier‑3 (<$250M revenue) enterprises report quicker ROI than tier‑1 giants. These nimble firms adopt new tools quickly and focus on practical use cases instead of moonshots.
Insight
Agility, not size, determines AI success. Smaller companies can iterate, learn and integrate AI into workflows without lengthy approval cycles. My motto has long been “think big, start small.” It’s not about limiting ambition; it’s about building momentum through quick wins. Large enterprises can emulate this by creating internal start‑up teams or “innovation pods” that operate with startup speed.
Framework / Takeaway
- Set a six‑week horizon. Define a use case you can test end‑to‑end in roughly six weeks. For example, automate customer support triage or generate project status updates.
- Empower cross‑functional teams. Give a small group from different departments the authority to experiment. Keep stakeholders informed but avoid heavy governance at this stage.
- Iterate and scale. If the pilot shows positive ROI, expand to other teams; if not, pivot quickly. Speed is your advantage.
Skill atrophy – the hidden risk of AI adoption
During workshops I often witness a dual reaction: excitement over time saved and fear of losing expertise. The report captures this duality: 89% of decision‑makers see Gen AI as a supplement to human capital, yet 43% agree that it could lead to declines in proficiency, especially among entry‑level employees. As Gen AI handles more routine tasks, some worry that critical skills will atrophy.
Insight
Automation doesn’t have to equal skill decay, but it can if left unmanaged. I’ve long argued that leaders must treat continuous learning as a core responsibility. The data supports this: while AI enhances skills for many, the decline in training budgets and the prevalence of skill drift highlight a growing gap.
Framework / Takeaway
- Build AI into learning plans. Pair AI tools with training modules. For example, use AI to generate draft code or reports, then require employees to review and refine the output.
- Implement “AI Fridays.” Reserve an hour each week for teams to share experiments and lessons. Peer‑driven learning counters skill atrophy and keeps curiosity alive.
- Track skill development metrics. Beyond ROI, measure employee skill growth through assessments or peer reviews. Treat these as leading indicators of long‑term success.
AI agents are quietly becoming the new coworkers
In recent workshops I’ve asked participants to imagine AI not as a tool, but as a teammate. How would you onboard it? What tasks would you assign? This analogy resonates because the next wave of AI isn’t just about chatbots; it’s about autonomous agents that perform tasks end‑to‑end. According to the Wharton report, 58% of enterprise decision‑makers say their organisations are already using AI agents in some way. These agents automate processes, triage support tickets, manage workflows and perform analytics.
Insight
The shift to agentic systems is subtle but transformative. Unlike single‑purpose bots, AI agents coordinate across departments and make decisions with minimal human oversight. Early adopters use them for process automation, internal workflow management and frontline customer support. The reported benefits include improved customer service, increased productivity and more cohesive operations.
Framework / Takeaway
- Onboard your AI agent like a new hire. Define its role, provide documentation and create feedback loops. In my trainings we treat the agent as a new colleague; it needs onboarding, guidance and continuous learning.
- Start with repetitive tasks. Target areas like support ticket triage, DevOps monitoring or document analysis where the agent can learn patterns and build confidence.
- Maintain human supervision. Even as agents become more capable, human oversight ensures alignment with business goals and ethical standards; a minimal sketch of this pattern follows.
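To make the “new hire” analogy concrete, here is a minimal sketch of a ticket‑triage agent with a human‑in‑the‑loop escape hatch. The `classify_with_llm` function is a hypothetical placeholder for whatever model client your stack provides; the categories and confidence threshold are assumptions for illustration.

```python
from dataclasses import dataclass

CATEGORIES = ["billing", "bug report", "feature request", "other"]
CONFIDENCE_FLOOR = 0.8  # below this, a human makes the call

@dataclass
class Triage:
    category: str
    confidence: float

def classify_with_llm(ticket_text: str) -> Triage:
    """Hypothetical model call; replace with your own provider's client."""
    # In a real system this would prompt a model with the agent's role
    # description (its onboarding document) plus the ticket text.
    raise NotImplementedError("wire up your own LLM client here")

def triage_ticket(ticket_text: str) -> str:
    try:
        result = classify_with_llm(ticket_text)
    except NotImplementedError:
        return "queue:human-review"  # fail safe: no model, no autonomy
    if result.category in CATEGORIES and result.confidence >= CONFIDENCE_FLOOR:
        return f"queue:{result.category}"
    return "queue:human-review"  # low confidence escalates to a person

print(triage_ticket("I was charged twice for my March invoice"))
# -> queue:human-review (until a real model client is wired in)
```

The design choice worth copying is the fail‑safe default: when the model is missing or unsure, the ticket routes to a person rather than proceeding autonomously.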
Final thoughts
The evidence from both my workshops and the latest research points to a simple conclusion:
generative AI is moving from hype to habit.
We’re seeing real ROI, disciplined measurement and expanding use cases across industries. Yet success isn’t guaranteed. Leaders must balance ambition with accountability, invest in people, and treat AI as a teammate rather than a magic wand.
The next year will be pivotal. If you’re still dabbling with pilot projects, now is the time to build your playbook. Measure what matters, train your teams and don’t be afraid to start small. The journey to accountable acceleration begins with the first step.
References
Wharton School & GBK Collective (2025). Enterprise Gen AI adoption survey of 800 business leaders, including a metrics table detailing how organisations measure AI ROI.


