AI Insights

May 1, 2026

What’s Next for AI Investment and Adoption: A Future Outlook with Raj Bakhru

While investment in AI has surged in recent years, the more compelling narrative lies in what the capital is being deployed to unlock.

As foundational AI models move beyond conversational interfaces, the focus is shifting from building better tools to fundamentally rethinking how work gets done.

Raj Bakhru, General Manager and Co-Founder of Blueflame AI, joined us for a Q&A to explore what’s driving the acceleration behind AI investment, why AI adoption remains uneven despite rapid capability gains, and where he sees the most immediate value emerging — particularly in private equity and investment banks.

Let’s dive in.

Q. Why is so much capital being invested in foundational models, and what are they really optimizing for?

Raj Bakhru: “We’ve seen over $1.2 trillion invested in these models. To get a good return on that investment, you need to capture a pretty sizable revenue stake, which means AI needs to be doing a lot more than serving as an ad hoc chat tool. It needs to actually take a piece of the labor market.

That doesn’t necessarily mean replacing people; it means supplementing them and providing similar value to what they’re already providing. We need to see revenues in the hundreds of billions of dollars for that to become a reality. We’re only about 10% of the way there today, but everyone sees a path to getting there — and that’s why people are putting so much money behind it.”

Q. Is AI really targeting SaaS markets or something much larger?

Raj Bakhru: “Definitely much larger. AI is broadly targeting labor — working toward a point where it can supplement what human workers are doing. Every company has a long to-do list of high-ROI projects, and if AI can take a piece of those and help us move further down that list, it should create a lot of value for everyone.  

People should be willing to pay labor-equivalent value for that — the same as what they would pay for human workers, if they could find them. So, this is definitely going well beyond just SaaS.”

Q. Why is adoption still so low despite rapid advances in capability?

Raj Bakhru: “We’re seeing strong pockets of adoption, but there’s still a wide gap. Anthropic has charted where people are adopting versus where they could be, and the delta is significant. There are a few reasons for this.

First, there’s a gap between perception and reality — the market isn’t yet seeing the full capability that exists. Second, there’s a natural training and adoption lag. It’s standard change management: people must change processes, learn how to use the tooling, and get that tooling connected into their existing systems.

None of that can happen overnight. People have to wrap their heads around it and figure out new ways of working.

That said, it’s moving faster every day. People are becoming more AI literate, and the 10x folks are becoming 50x folks, and that spreads quickly. People see it, they ask, ‘What are you doing?’, and it catches on. We’re definitely seeing an exponential pickup.”

Q. How much opportunity and value could AI create in financial workflows?

Raj Bakhru: “I find AI in finance particularly compelling for a few reasons. First, there’s an enormous amount of unstructured data, which creates a strong use case for LLMs that can effectively process and interpret it.

Second, reasoning models have advanced to the point where they can work through the complexity of financial transactions in a meaningful way. That wasn’t true historically. A big part of why finance professionals are so highly compensated is because these transactions are extremely complex and high-stakes — often involving hundreds of millions or even billions of dollars.

You wouldn’t trust that level of responsibility to something that isn’t close to perfect. And while humans aren’t perfect either, AI is now approaching a level of reliability where it can start to add real value in these environments.

If it can operate at that level, the impact is significant. Transactions can move faster, deal risk can be reduced, more companies can come to market, and firms can evaluate and manage a larger number of deals. Ultimately, that translates into hundreds of millions or even billions of dollars in value creation across the financial ecosystem.”

Q. Are AI models converging in performance across providers?

Raj Bakhru: “Interestingly, not really. We’re still seeing models leapfrog each other quite regularly, with specialist models often outperforming in more nuanced areas.

Take coding, for example — new models like Mercury Coder are much faster and, in certain use cases, significantly more effective. Then you have models like xAI’s Grok, which are particularly strong with real-time data. Gemini, on the other hand, has become very good at multimodal tasks and content generation, including things like infographics and technical architecture diagrams.

Claude has tended to perform strongly in code-related tasks, while ChatGPT has led in reasoning and broader ‘pro’ capabilities, though it’s quickly catching up across other areas as well.

The key point is that this shifts constantly — what’s best-in-class today may not be the same a month from now. That’s exactly why the focus is on taking advantage of best-in-breed models wherever possible, rather than relying on a single system.”

Q. Are we entering a new phase of AI model architecture beyond transformers?

Raj Bakhru: “We’re starting to see new models gain traction. Mercury Coder, for example, is a diffusion-based model. Historically, diffusion models have been used more for content generation, things like infographics and media, so applying them to code is relatively new and proving quite interesting.

We’re also seeing progress with state-space models (SSMs), and that space will continue to evolve. More broadly, there’s a significant amount of research now focused on what comes after transformers as the dominant architecture. At this point, though, there isn’t a clear answer yet.”

Q. Are we seeing a return to “pre-training-first” strategies in frontier labs?

Raj Bakhru: “We’re seeing a renewed focus on pre-training. That emphasis had faded for a while as attention shifted toward post-training techniques like reinforcement learning and verifiable rewards.

The reason pre-training is coming back into focus is largely because of new infrastructure capabilities from NVIDIA. With next-generation chips enabling much larger and more efficient training runs, the dynamics have changed again.

For example, models like Spud and Mythos were among the first trained on NVIDIA’s Blackwell architecture, which represents a step change in compute capability, and by extension, a step change in model performance. As NVIDIA continues to release new chip generations, we’re likely to see that pattern repeat.

So, between new pre-training techniques and the underlying hardware improvements enabling them, it’s understandable why so much attention is shifting back in that direction.”

Q. How is Blueflame AI evaluating its own product capabilities across use cases?

Raj Bakhru: “We consider 25 to 50 key use cases that matter most to dealmakers, and we actively grade ourselves on how well we support each one. While the platform can go well beyond that set, the belief is that if we get those core use cases right and make them really strong, we’ll start to see the emergence of the ‘prompt-first’ dealmaker, which hasn’t fully materialized yet.

Today, dealmakers still make a conscious choice: do they go to Excel, PowerPoint, or their files to complete a task, or do they use an AI tool instead?

‘Prompt-first’ means that the default instinct becomes to go to AI first. In many use cases, we’re already there — but not across the board. The expectation is that once we reach that level of coverage across nearly all core workflows, we’ll see a true shift to prompt-first dealmaking.”

Q. How are templates, threads, and app-like workflows shaping the next generation of AI tools?

Raj Bakhru: “We’re seeing ‘vibe coding’ take off right now, and we want to support that — giving people the creativity and flexibility it enables, while still maintaining the enterprise requirements around security, compliance, and privacy.

That’s especially important given how sensitive the data is that our clients work with. This space is evolving extremely quickly. The tooling around vibe coding today is dramatically better than it was even four months ago, and it’s only recently become viable for non-technical users at scale.

We’re also seeing strong progress in template generation, where PowerPoint, Word, and Excel models can now be created directly from code, and the outputs are becoming quite high quality.

That said, there’s still a gap in the ‘last mile’: taking outputs from roughly 80% complete to truly production-ready at 100%. A lot of focus is going into solving that final step and closing that gap.”

Q. Are we still early in AI adoption or near a tipping point?

Raj Bakhru: “It’s still very early — we’re only about three years into this journey, which is hard to believe. The models are evolving faster than ever, and in some cases, they’re even helping build the next generation of models. Each iteration improves the last, creating a kind of positive feedback loop where better models enable even better ones.

Because of that, progress is compounding quickly. In five years, we’ll likely look back and think the technology we’re using today was incredibly nascent and relatively simple. And in many ways, we already feel that way about the models from just two years ago.”

Q. What does the next 12–24 months of AI competition look like?

Raj Bakhru: “I think we’re going to see AI become dominant in dealmaking. ‘Prompt-first’ will likely become the default way people work over the next 12 to 24 months; it feels inevitable.

That shift will materially accelerate workflows and become a real competitive advantage, both at an individual level, in terms of how people shape their careers, and at an investment firm level, depending on how effectively organizations adopt AI.

What’s interesting right now is that in many cases, individuals have already been empowered and significantly sped up by AI, but that hasn’t fully translated to the enterprise. The knowledge and workflows being built by early adopters often don’t reach the broader organization.

A big reason for that is that much of today’s tooling isn’t designed for enterprise-wide sharing and distribution.

That’s where a lot of our focus is — making AI usable not just for power users, but for the entire firm. The goal is to allow advanced users to build routines, skills, processes, templates, and applications, while enabling everyone else in the organization to benefit from that work without needing to be a power user themselves.”

Looking ahead: Private markets enter a new phase of AI adoption

What stands out from Raj's perspective is not just the pace of change, but its direction: AI is steadily moving from a standalone tool to an embedded orchestration layer in how work gets done.

Over the next 12–24 months, that shift is expected to accelerate, particularly in private equity and investment banks.

The deal teams that see the greatest advantage won’t necessarily be the ones with the single “best” model, but those that can turn AI capabilities into consistent, repeatable workflows across the organization.