
Why we invested in Turiyam AI

December 12, 2025

We’re at an exciting time in the semiconductor industry, with startups around the world at the vanguard of innovation and India in particular emerging as a semiconductor hub of the future. This shift is powered by the rise of new technologies, such as chiplets, as well as rapidly evolving computational trends: the rise of AI, computationally intensive tasks moving to the edge, and the digitization of entire value chains with intelligence built in at every layer.

Most notably, we’re in the middle of an AI boom, where new and highly capable models are redefining knowledge work as we know it and creating a large number of use cases across both personal and enterprise applications. This boom is being powered by LLMs such as GPT, Claude and Gemini, whose capabilities have skyrocketed in the past few years.

This has created a large opportunity for semiconductors both to train these models and to run inference on them. While GPUs have cornered the market for the hardware and software used to train AI models, model inference is still an open space. Inference is the process of using a trained AI model in real-time applications, as when you use ChatGPT as a knowledge assistant.

GPUs are an ideal platform on which to train LLMs because of their parallel architecture optimized around high-bandwidth memory (HBM). However, the same does not hold for inference. Instead of pooling hundreds or even thousands of GPUs to train a single large model, inference may use smaller or quantized versions of large models, must serve thousands to millions of users concurrently, and is optimized for a different set of metrics: throughput and efficiency (often expressed as tokens per dollar per watt), latency and uptime.
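To make the efficiency metric concrete, here is a minimal sketch of how tokens per dollar per watt can be computed for two deployments. All figures (throughput, hourly cost, power draw) are hypothetical and purely illustrative, not measurements of any real system:

```python
def tokens_per_dollar_per_watt(tokens_per_second: float,
                               cost_per_hour_usd: float,
                               power_watts: float) -> float:
    """Tokens served per dollar of spend, per watt of power draw."""
    tokens_per_hour = tokens_per_second * 3600
    return tokens_per_hour / cost_per_hour_usd / power_watts

# Hypothetical GPU server: 1,000 tok/s at $4/hr, drawing 700 W
gpu = tokens_per_dollar_per_watt(1_000, 4.0, 700)

# Hypothetical inference-specific accelerator: 3,000 tok/s at $3/hr, 400 W
accel = tokens_per_dollar_per_watt(3_000, 3.0, 400)

print(f"GPU server:  {gpu:,.0f} tokens / $ / W")
print(f"Accelerator: {accel:,.0f} tokens / $ / W")
```

The point of the metric is that it folds serving throughput, operating cost and power consumption into a single number, which is what data centre operators ultimately optimize.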

LLM inference currently runs on GPUs in large data centres, but at high cost and power usage. This is prompting AI companies to look to the next generation of semiconductor solutions to power inference. While GPU alternatives do exist, cases in point being Cerebras and Groq, their solutions have not been built from the ground up for inference.

At the same time, there is a large Indian market of enterprise and end-consumer demand for AI inference, a significant fraction of which will be served by Indian data centre operators such as E2E Networks and Yotta Cloud. India’s AI adoption, and its buildout of local AI infrastructure and data centres, is poised to grow at 24% year-on-year, with data centre capacity set to cross 4,000 MW by 2030. Enterprises, too, are looking for semiconductor solutions beyond the hyperscalers to set up and grow their in-house AI capabilities. As customer demand moves past GPUs, Indian data centres will look for India-designed, inference-specific solutions, which dovetails with India positioning itself as a global semiconductor and AI hub.

Given the technical and market gaps, there is a massive opportunity for a solution built specifically for model inference. The market for such a best-in-class solution, with tightly integrated hardware and software, is conservatively valued at $100B and potentially up to $300B by 2030.

This is the problem statement and market gap that Turiyam is solving for: a software-first inference solution for data centres with a hybrid memory architecture (using both SRAM and HBM) to hit a sweet spot for efficiency, combined with a tightly integrated compiler stack that uses reinforcement learning to continuously optimize workload-to-hardware mapping. This co-design between software and silicon establishes a strong, defensible moat and enables world-class token-per-dollar-per-watt performance. As one of India’s first data centre inference startups, Turiyam is in the right place at the right time.
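To see why a hybrid SRAM-plus-HBM memory architecture can matter, consider that LLM decoding is typically memory-bandwidth bound: per-token latency is dominated by the time taken to stream model weights out of memory. The toy model below, with entirely illustrative bandwidth and model-size figures (not Turiyam's actual design parameters), shows how placing a hot fraction of weights in faster on-chip SRAM reduces per-token memory time versus serving everything from HBM:

```python
def per_token_memory_time(weight_bytes: float,
                          sram_fraction: float,
                          sram_bw: float,
                          hbm_bw: float) -> float:
    """Seconds per token spent streaming weights, in a simplified
    sequential-read model: the hot fraction of weights resides in
    SRAM, the remainder in HBM. Bandwidths are in bytes/second."""
    sram_time = weight_bytes * sram_fraction / sram_bw
    hbm_time = weight_bytes * (1 - sram_fraction) / hbm_bw
    return sram_time + hbm_time

WEIGHTS = 14e9    # hypothetical model footprint: 14 GB of weights
SRAM_BW = 10e12   # hypothetical on-chip SRAM bandwidth: 10 TB/s
HBM_BW = 3e12     # hypothetical HBM bandwidth: 3 TB/s

hbm_only = per_token_memory_time(WEIGHTS, 0.0, SRAM_BW, HBM_BW)
hybrid = per_token_memory_time(WEIGHTS, 0.5, SRAM_BW, HBM_BW)

print(f"HBM only: {hbm_only * 1e3:.2f} ms/token")
print(f"Hybrid:   {hybrid * 1e3:.2f} ms/token")
```

SRAM alone is too small (and too expensive per byte) to hold full models, while HBM alone caps bandwidth; a compiler that decides which weights live where is what turns the hybrid layout into a real efficiency win, which is where Turiyam's workload-to-hardware mapping comes in.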

Turiyam is founded by a trio of experienced semiconductor, AI and product leaders who are passionate about building for the India AI opportunity. Each of the co-founders brings 15+ years of experience in the semiconductor domain. Sanchayan has designed chips at large design houses and, more recently, worked at the cutting edge of AI inference at Groq. Parag brings deep expertise and leadership in AI and data science, as well as past entrepreneurial success. Praveen most recently led India operations for XCOM Labs, a leading telecommunications tech company. Collectively, the team has built more than 30 chips and brings decades of combined semiconductor and AI experience.

We are excited to make our first semiconductor investment and to back a strong and experienced team. We believe that Turiyam has a number of advantages over both the incumbent semiconductor companies building out these solutions and the first generation of AI inference startups: a nuanced understanding of the market, the ability to attract talent and build stellar engineering and product capability, and a first-mover advantage in the India market. The convergence of these factors means that Turiyam is poised for explosive growth to rapidly become a leading data centre inference semiconductor company, and we’re excited to partner with them as they embark on this journey.