A recent study by Lenovo positions Indian enterprises at the forefront of the Asia-Pacific technology landscape, finding that they are poised for the region's highest growth in Artificial Intelligence investment.

While the broader region anticipates a 15% average increase in AI spending, Indian organisations are significantly outpacing their peers with a projected 19% boost in their budgets.

This surge reflects a strategic shift from mere experimentation to the active execution of AI-driven business models.

The primary catalyst behind this aggressive expansion is India’s robust talent pool, which is increasingly focused on the development of the AI application layer.

Industry leaders note that the country’s unique position as a hub for global technology providers and Global Systems Integrators (GSIs) allows for the creation of specialised AI agents tailored to specific functions such as marketing, finance, and legal services. This transition marks a move from the initial training phases of large models to real-world inferencing and practical use cases.

A defining characteristic of the Indian approach is a distinct emphasis on immediate returns. Chief Information Officers (CIOs) in India are known for a frugal yet result-oriented mindset, demanding faster ROI and quicker payback periods compared to global averages.

Despite the enthusiasm, these leaders face significant hurdles, particularly legacy infrastructure, which may take over a year to modernise. Nevertheless, the pressure to deliver tangible business outcomes remains a driving force for investment.

Architectural preferences among Indian firms are also becoming increasingly clear, with 90% of organisations favouring hybrid AI models. These frameworks balance on-premises and edge environments, ensuring that performance and security are maintained while meeting strict regulatory requirements.

This hybridity allows firms to keep sensitive data close to the source while leveraging the power of distributed computing to manage the high costs associated with AI operations.

The study further highlights that the financial burden of AI is shifting: inferencing costs (the cost of a trained model making predictions) can run up to 15 times higher than those of the initial training phase. Consequently, 75% of AI compute is expected to be dedicated to inferencing in the near future.

To manage this, 80% of enterprises are projected to rely on distributed edge infrastructure by 2030, ensuring that processing happens closer to the consumer and the device, where the most immediate impact is felt.

IDN (With Agency Inputs)