TechSprouts is a platform to engage with the deep science ecosystem in India

Computation and AI: Augmenting Deep Tech Innovation

September 29, 2023

TL;DR

  • Advances in computation have constantly introduced new paradigms of problem solving over the last 70 years
  • The scientific method is currently advancing rapidly past the computational paradigm and into the data-driven paradigm
  • This paradigm shift is resulting in deep tech innovation being augmented and supercharged by the introduction of computational tools
  • Opportunities in the space span both hardware and software innovation. On the hardware front, there is a big opportunity for fabless players to create AI-first processors and accelerators
  • On the software side, we see deep tech innovation consisting of entities constructing general-purpose AI models as well as domain-specific models, combining computational smarts with domain knowledge
  • The intersection of AI and deep tech continues to be a dynamic space where we see a number of opportunities arising to solve some of our most urgent problems, ranging across a multitude of fields


Introduction

There is no facet of our everyday lives untouched by a computer. From ordering your groceries and hailing a cab to relying on weather forecasts crunched by supercomputers, the effect that computational platforms have had on our lives is monumental. Much of this would have been scarcely believable 75 years ago, when the first transistor was invented, and even less so in the early 1800s, the time of the Jacquard loom and Babbage’s Difference Engine. The earliest rise and adoption of computing happened during the industrial revolution, albeit not at the same breakneck speed. As in the industrial revolution, the motivation behind designing computers, besides just showcasing human ingenuity, was to mechanize and automate basic information processing tasks, thereby introducing the concept of “computing” quantities rather than calculating them.

This was, and continues to be, a marked shift in our approach to technical and technological problems. The application of computational techniques served as a way to keep making progress and new discoveries when previous approaches became ineffective. As the scientific method evolved, especially in the 20th century, it quickly leapfrogged from its empirical and theoretical paradigms to the computational and data-driven paradigms. As the systems we study become more complex, and as we probe deeper into fundamental aspects of the natural world, we need the assistance of numerical methods supercharged by the latest advances in computational science.

AI, as the vanguard of progress in computation, fills that gap. By combining domain-specific expertise with AI smarts, deeptech innovation is being augmented by AI and the entire ecosystem being built around it. We’re now seeing AI play a role in fundamental scientific breakthroughs, ranging from improved materials for solar panels to nitrogen-fixing inoculant bacteria for agriculture. This has opened up a huge space for inventors and entrepreneurs to operate in, and at Ankur Capital, we couldn’t be more excited to embrace the computational revolution in deeptech innovation.

A brief history

A computing solution consists broadly of three parts: the base hardware (e.g. transistors), the architecture, and the algorithms that run on it. Computing hardware was limited in potential when it consisted of punch cards and vacuum tubes, but things changed when the transistor was invented in 1947. Computing hardware was then put onto a treadmill powered by Moore’s law, which posits that the number of transistors per computing “chip” doubles roughly every two years. This is an absolutely incredible rate of progress: sustained over six decades, it amounts to about 30 doublings, a factor of roughly a billion. The result is that we are now able to manufacture transistors around 2000 times smaller than the first one built in 1947.

The architecture of a computing solution describes the rules, layout and hardware design that define the functionality of the entire system. Architectures, too, have seen major upgrades from the 1950s onwards. Initially they were designed on paper; by the 80s and 90s they were being built and tested inside computer architecture simulators. In other words, computer architectures went from being designed on paper to being designed inside computers themselves, unlocking a virtuous cycle of rapid improvement.

Of course, the other piece of the puzzle, and at this point the elephant in the room, is algorithms. Algorithms, technically just lists of instructions to be followed, provide a language with which to express problems, or “program computers”. This yielded another paradigm shift: tools with fixed hardware could be made to perform any task, as long as that task could be converted into a series of steps to be executed one after the other. Primitive computers like the Antikythera mechanism were made for a specific use, but the laptop on which we wrote this article can play a movie, render a web page, or even train a small neural network, because of the universal nature of computer programs. This feature has initiated numerous virtuous cycles in both the development and deployment of computing solutions, as we shall see a number of times in this article.

Positive feedback loops mean that computers have played a big role in their own improvement, resulting in exponential growth in their power as well as seemingly limitless scope for their applications. Problems once considered infeasible become accessible as new paradigms of computing emerge, a process that continues to repeat to this day. As computing power becomes more abundant, it unlocks more and more in terms of quantities that can be computed, resulting in novel, emergent computing solutions that could not have been predicted from the sheer increase in compute power alone.

Of course, this sort of positive feedback cycle is very apparent in modern artificial intelligence (AI), where the small toy models of the mid-20th century have snowballed into the massive, powerful and, most importantly, useful AI models of today.

Artificial Intelligence

What is AI

AI refers to man-made systems that can perform cognitive functions we normally associate with human minds. Computers, being easily programmable and scalable, are the ideal platform on which to construct AIs. As a theoretical science, AI has been around since the 1950s, but it has only recently begun to be deployed in meaningful real-world scenarios. The reason for the current boom in AI deployment is the convergence of advances in computing power, computer architecture and state-of-the-art algorithms, along with the availability of the large quantities of digital data used to train models.

AI is built upon and uses machine learning techniques, in particular to train its models. The creation of an AI model is a two-step process. First, the model’s architecture is defined and the right algorithms are identified for the use case in question. Second, the data required to train the model is identified and prepared, and the actual training is performed. Various approaches to building AI models have been tried and tested, and the winning approach today is based on artificial neurons. These neurons are simple units that mimic the action potential and step-function response of a biological neuron, and they can be arranged in layers to create sophisticated models that perform cognitive tasks.
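To make this concrete, here is a minimal sketch of a single artificial neuron in Python. The weights and bias are hand-picked purely for illustration; a real model would learn them from data during training.

    def step(x):
        # Step-function activation, loosely mimicking a neuron "firing"
        return 1.0 if x >= 0.0 else 0.0

    def neuron(inputs, weights, bias):
        # A weighted sum of the inputs plus a bias, passed through the activation
        total = sum(i * w for i, w in zip(inputs, weights))
        return step(total + bias)

    # With these hand-picked weights, the neuron behaves like an AND gate:
    print(neuron([1, 1], weights=[0.5, 0.5], bias=-0.75))  # prints 1.0
    print(neuron([1, 0], weights=[0.5, 0.5], bias=-0.75))  # prints 0.0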

Neurons are arranged in layers to create neural networks, which are designed to mimic the nonlinear and nontrivial operations of a human brain. Neural networks have gone through iterations of design that have progressively made them more powerful, versatile and widely applicable. Deep learning refers to using neural networks with multiple hidden layers between input and output, and such deep neural networks have proven versatile and powerful across use cases. Further architectures can be built on top of deep neural networks, such as generative adversarial networks (GANs) and transformer models (which power LLMs such as GPT-3). As the demands on the models become more complex, more layers of design are added to build larger and more sophisticated AI models.
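As an illustration of that layered structure, here is a sketch of a forward pass through a small deep network. The layer sizes are arbitrary and the weights random, standing in for what training would actually learn.

    import numpy as np

    rng = np.random.default_rng(0)

    def layer(x, n_out):
        # One fully connected layer: weighted sums plus a nonlinearity (ReLU)
        w = rng.normal(size=(x.shape[0], n_out))
        b = rng.normal(size=n_out)
        return np.maximum(0.0, x @ w + b)

    x = rng.normal(size=4)   # a 4-feature input
    h1 = layer(x, 8)         # first hidden layer
    h2 = layer(h1, 8)        # second hidden layer: the "depth" in deep learning
    y = layer(h2, 1)         # output layer
    print(y)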

Historical evolution of neural network technologies, from the first artificial neuron to contemporary neural networks. Source: Zuo et al., Light: Science & Applications, 2022

In the past decade, we’ve seen the state of the art in AI change roughly every three years. It began with long short-term memory (LSTM) networks, continued with deep convolutional neural networks (2011) and generative adversarial networks (2014), and has now resulted in the transformer models that power large language models (LLMs) like GPT and Claude. These LLMs have ushered in the generative AI revolution, in which neural networks draw on patterns in their training data to generate original content. Generative models like GPT have attracted mainstream attention, to the extent that even folks like us use ChatGPT to battle writer’s block. McKinsey estimates that generative AI could add over $2.5 trillion of value annually.

AI in the real world

AI tools are now everywhere and actively changing the way we operate, whether it’s in tools like ChatGPT for end-users, or algorithms for drug discovery and weather prediction that have enterprise applications. Some of these tools offer incremental benefits by speeding up existing processes, but others are game-changers, opening up new avenues for addressing open problems across industries and converting infeasible problems into feasible ones. Rather than becoming an industry unto itself, AI is becoming a tool deployed by industries, service providers and digital platforms across the board. Some companies are becoming AI-centred ones, building models and AI-powered tools, while many others are incorporating AI models into their legacy businesses, whether by building them in-house or by using models developed by the aforementioned AI-centred companies.

Fields being disrupted in a major way by AI tend to have a few things in common. When solving a problem involves crunching a large amount of data and finding patterns in it, AI is a natural fit. AI also makes a lot of sense for high-throughput applications, or when the search space is very large; in such cases, it can narrow the funnel of candidate answers and make processes more efficient. Routine and predictable tasks can easily be offloaded to AI tools, and AI is also very good at processing images and converting that data into a more useful format. Given the generality of this prescription, it’s no surprise that AI has great potential across multiple sectors, among them climate change mitigation, materials science, industrial optimization and automation, and health and biotechnology.

Types of AI companies

An AI solution requires purpose-built hardware as well as models and software to run on it. Along these lines, AI companies are broadly categorized into hardware and software companies, and deep tech opportunities exist on both fronts. Developing chips and computing solutions for specific AI applications relies heavily on IP-led differentiation. On the software side, we’re at a stage where almost every tech company is in some sense an AI company; we see the landscape as a core of deep tech companies developing cutting-edge models, surrounded by a ring of companies building businesses by incorporating those models.

Hardware AI companies

AI hardware is a wide-open space because training and inferencing from state-of-the-art models require entirely new classes of computing chips. The past fifteen years or so have seen the explosion of GPUs, whose parallel architecture makes them uniquely suited to training large neural network models. As AI models evolve and grow in size, the hardware space has opened up further for more specialized chips. Creating a new chip is a capital-intensive process, often requiring upwards of $50M, making it almost exclusively the domain of large established players such as Nvidia. However, startups are also attacking this huge problem, with Cerebras and Graphcore being two of the highlights. Both companies make chips specifically for training neural network models, and each has raised around $750M to date. They have faced significant challenges in getting their products to market, though, highlighting the difficulty of creating new classes of chips even with large amounts of capital available.

AI inferencing, on the other hand, is a different domain: it involves the development of smaller chips that can run inference on AI models at the edge. Given the rapid digitization of industry, the rise of self-driving cars and the growth of IoT solutions, such edge processors are in high demand and represent another sunrise sector. Mobileye, for example, creates chips with AI acceleration for self-driving cars and has raised over $500M to date.

Software AI companies

On the software front, there are three types of AI companies: (a) companies developing their own AI models from scratch, (b) companies making tools to deploy, maintain and distribute AI, and (c) companies using existing large models in their products. Across these categories, deep tech innovation arises primarily in the first bucket of novel model creation. This itself takes two forms: companies creating general-purpose models like GPT, and companies creating domain-specific models for particular applications.

Types of software companies that are in the AI space. The deep tech innovation among these primarily arises in the “model builders” space, both in companies that develop general-purpose models as well as companies incorporating domain expertise into verticalized algorithms.

Domain-specific models are being built in fields like healthcare, agriculture and materials science. They can be used to optimize existing processes, such as cancer diagnosis, and, for further value-add, to create entirely new products, like new drugs or functional materials.

Companies creating their own general-purpose models, like OpenAI with GPT, operate at the very cutting edge of AI. They blaze a new trail on two fronts: using the most sophisticated model architectures and building the largest models of their time. While this ensures a deep moat for these companies, along with an accompanying flock of companies using their models, it also comes with a sizeable capital requirement for compute resources and power. Even though we are still in the early days of generative AI, there are already 13 unicorns in the space. A large part of their value creation comes from building cutting-edge models and from their ability to absorb large quantities of capital to grow.

Companies building domain-specific AIs are also attacking large markets and addressing large problems. AI-powered drug discovery, for example, is projected to create a $50 billion market over the next decade, and startups in the space have raised $9.9 billion since 2019. The need for technological climate solutions, say new battery materials or electrolyzers for hydrogen production, is urgent, and has translated into AI-accelerated discovery processes that are underway as we speak. The battery materials market alone is set to cross $100 billion by 2030, which goes to show the outsized commercial role AI will have in these technological fields.

Companies building domain-specific AI have a different journey, though, both in their technical approach to models and in their financial and scale-up trajectories. Think of a company like India’s very own Cropin, which has built AI-powered algorithms to make agricultural advisory more intelligent and contextual. Domain-specific models are much smaller, and hence cheaper, than general-purpose models. In many cases, AI algorithms replace existing numerical methods (say, in AI-powered materials discovery), again with the aim of creating a cheaper-to-deploy version of the numerical method in question, or of enabling “hybrid” systems in which first-principles and AI models work together. What we’ve seen is that dual expertise, in AI as well as in the domain in question, is required for ventures to create a significant amount of value. This is a manifestation of the dual-PhD problem that many deeptech startups across the world face today.

Typically, deep science and deeptech startups create moats and barriers to entry via intellectual property. With data- and algorithm-driven solutions, however, different approaches have to be sought. An algorithm by itself cannot be patented, but the individual steps that comprise it can be. Moreover, at the current moment, AI deeptech solutions enjoy barriers to entry that come from the technical know-how required to combine domain-specific skills with AI. The data used to train the algorithms also matters, as the quality of training data can make or break a model’s performance, offering startups another avenue for maintaining a moat or entry barrier. Given how actively AI-based solutions are evolving, we expect this aspect of maintaining one’s advantage and differentiation to take on new and subtle forms.

No discussion of AI would be complete without talking about tech companies like Amazon, Alphabet and Microsoft. They are using AI to enhance their core businesses, to power novel systems such as autonomous driving, and to develop models for other players to use commercially. These tech giants are becoming AI companies altogether, dominating both commercial deployment and AI research.

Apart from companies that center AI tools in their solutions, legacy players across industries are also incorporating AI into their processes: ArcelorMittal optimizing its steel production schedule, AccuWeather creating a plain-language weather bot, or Unilever using AI to develop performance-boosting enzymes for its homecare products. This drives home the point that no field is untouched by AI, and there is a huge opportunity for innovation as these tools are developed and deployed. It’s important to remember that AI is just a tool, or a platform, and is thus deployed in materially different ways across sectors, some of which we’ll dive into now.

AI + Deep Tech across sectors

Some of the fields in which we see deep tech AI solutions being incorporated are climate and cleantech, materials science, agriculture, and biotech or synthetic biology. As we remarked earlier in the piece, AI’s role in these sectors is to complement existing science-based approaches and improve upon numerical methods, especially now that the scientific method has entered its third and fourth paradigms.

In climate and cleantech, the intersection of AI and deeptech plays a role in multiple mitigation solutions, such as optimizing grids built on renewable energy, developing materials for novel battery chemistries and solar panels, and automating the measurement of carbon intensity or carbon reduction to power carbon markets.

Materials science is another field ready to be augmented by AI. For some decades now, materials science has been dominated by numerical methods such as density functional theory (DFT) simulations and molecular dynamics. These tools, while effective, are computationally expensive, and high-fidelity AI-based substitutes are replacing them as we speak. In another instance of the recurrent theme of advanced computing power unlocking new domains, this advance turns materials discovery on its head: it enables high-throughput screening and the rapid development of novel materials with specific physical properties in mind.
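To illustrate that workflow in the simplest terms, here is a sketch of the surrogate-model idea. The “simulation”, material descriptors and data below are synthetic stand-ins, not a real DFT pipeline.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(42)

    def expensive_simulation(x):
        # Stand-in for a costly physics calculation of a material property
        return x @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal()

    # Step 1: run the expensive calculation on a small training set
    train_X = rng.uniform(size=(200, 3))   # descriptors for 200 materials
    train_y = np.array([expensive_simulation(x) for x in train_X])

    # Step 2: fit a fast surrogate model to those results
    surrogate = RandomForestRegressor(n_estimators=100).fit(train_X, train_y)

    # Step 3: screen a much larger candidate pool at negligible cost,
    # shortlisting the most promising candidates for full simulation
    candidates = rng.uniform(size=(100_000, 3))
    scores = surrogate.predict(candidates)
    shortlist = candidates[np.argsort(scores)[-10:]]

The pattern is always the same: pay the full simulation cost on a small sample, then let the learned model triage the rest.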

The seamless running of our modern economy relies heavily on control systems, and specialized algorithms are used to optimize the processes in an industrial plant or to organize and design logistics systems. Again, as the systems we create become more complicated, first-principles numerical methods become uncompetitive, leaving room for their AI-powered replacements to come through.

The use cases we’ve detailed only scratch the surface, though; in the grand scheme of things, we are still in the early days of AI deployment. We expect that as AI systems become more advanced and competent, they will upend and improve many other fields. We also haven’t covered biotechnology and synthetic biology applications here; those are so vast that we’ll cover them in the next article of this series.

Future of AI and other Approaches

In the recent past, neural network-based AI has overtaken almost every other AI approach, and has come to represent the state of the art in computing itself. The large resources allocated to AI R&D have produced new AI paradigms in quick succession. This effort is by no means complete, and we are excited about the forms the AI models of tomorrow will take.

There are also some general open problems in the field of AI that need addressing. First, the data-driven approach of neural networks makes them a “black box”: there is no clear causal link between inputs and outputs. As models become more complex and neural networks acquire massive numbers of parameters (GPT-3 has 175 billion), it becomes difficult to diagnose errors and biases. This issue is more apparent in general-purpose models than in those designed for specific purposes.

Another aspect to bear in mind is that as AI models grow larger to become more powerful, they also require increasing amounts of power to train and deploy. In certain cases, like domain-specific models in materials discovery, AI makes the entire process more efficient. Large general-purpose models, however, have a large carbon footprint both during training and in deployment.

Quantum computing is in its nascent stages, with the world’s largest quantum computers beginning to touch the 1,000-qubit mark. While many technological and scale-up challenges remain, quantum computers hold immense promise to revolutionize finance, machine learning and materials science. Moreover, by virtue of the immense speedup they promise for linear algebra tasks, they could go a long way toward reducing the carbon footprint of modern AI.

Conclusions

For many years now, we have studied and backed a number of deep science innovations, and have enjoyed the fruitful journeys that ensued. The incorporation of advanced computation, and in particular AI-based methods, into these deep science and deeptech fields promises to take these innovations to the next level. It will also entail a different kind of competitive landscape, as well as accelerated timelines for product development and improvement. A rapid pace is being set, making the deeptech space a far more interesting one!

While there are concerns about AI and AI-based automation threatening the current role of knowledge workers, this is not the case in deep science and deeptech. AI will augment, not disrupt, these fields, and we are very excited to see this play out.
