Four Fascinating Questions on the Future of AI
Reflections on AI Funding, Geopolitics, Talent Flow, Cloud, and Chips
One of my favorite AI visionaries, Elad Gil, published a thought-provoking piece this morning on what he doesn’t know about AI. (For those not familiar with Elad, this man knows a ton about AI; but as in every field of knowledge, the more you know, the more you recognize what you don’t know.)

Elad raised many fascinating questions in his article, which folks interested in AI should read. Below I’ll highlight three, plus a fourth question I have that he didn’t touch upon, along with some takeaways and my own thoughts:
How will capital from big cloud service providers (CSPs), Nvidia, and governments shape and/or distort the AI market?
Are we close to an explosion of LLM application startups from a human capital wave standpoint?
How will the AI cloud landscape evolve once the GPU shortage is over?
What does the future hold for AI hardware?
I. How will capital from big CSPs, Nvidia, and governments shape and/or distort the AI market?
As Elad mentioned, investments in foundational model companies have primarily come from strategics (i.e., Big Tech / CSPs — Microsoft, Google, Amazon — plus Nvidia) rather than VCs. This is no surprise, given the tremendous cloud / hardware revenue synergies from alliances with frontier model providers. (Elad cited a few jaw-dropping numbers on Microsoft’s investment in OpenAI relative to Azure’s revenue growth from AI, for example.)
Does this dynamic distort the market for foundational models to some extent? Elad thinks so, and I agree. Just as in M&A it is difficult for a financial investor (e.g., private equity) to outbid a strategic buyer with real synergy expectations, strong strategic interest in (and the resulting valuation premium on) stakes in OpenAI and the like could make the risk/reward less compelling for VCs and other institutional investors without a strategic agenda.
(To be clear, the word “distort” isn’t necessarily negative here, at least not to me; it just means these market forces exert such a strong gravitational pull that they meaningfully move the market.)
The other question Elad raised was whether governments will invest heavily to back regional AI champions (like the UAE with Falcon). Here, I can offer my personal speculation on China. (Would love to hear viewpoints from others - leave a comment!)
My current prediction: in the short to medium term, we’ll see regional governments in China back homegrown AI players to get ahead in domestic competition for talent and growth, but we won’t see central-government backing of general-purpose Chinese genAI players. For these players, funding is not the bottleneck: even with many North American / European GPs exiting mainland China, to my knowledge there is more than enough undeployed USD venture capital, inflows from the Middle East and Singapore, plus domestic funding ready to keep backing the limited set of “real contenders”. The true bottlenecks - hardware (i.e., export controls) and, to a lesser extent, high-quality training data and global talent - cannot be helped by central-government investment. On hardware, such investment could actually hurt.
In the longer term, when countries leading in AI capabilities start to export AI products and services at scale, the story might be different. Could China subsidize homegrown AI applications, or even open-source models, the same way it subsidizes EVs, because this is a strategically crucial industry not just domestically, but for exports too? Maybe, if players are primarily competing on costs, which is not a given. And the AI and EV industries are quite different in many ways. For starters, AI seems too important (and sensitive) for any major economy to let in foreign players without heavy guardrails or protectionism — especially foreign players backed by governments that are not considered allies. So — I don’t know, this will be interesting to watch.
II. Are we close to an explosion of LLM application startups from a human capital wave standpoint?
One’s reaction to this question might be “Wait, haven’t we already seen an explosion of LLM apps? Wasn’t every other YC Demo Day pitch in the past year about an LLM wrapper?”
Wait till you look at Elad’s chart below, which I found illuminating. So far, despite many attempts, we have yet to see droves of breakout GenAI application companies. Granted, a meaningful gap needs to be bridged between frontier AI capabilities today and most commercial applications; but I (and probably many others) never thought about this in the context of human capital waves.
Most AI killer apps today were started by AI researchers (i.e., Wave 1 below) — e.g., Character, Perplexity, Inflection (not to mention ChatGPT itself). Even among the more prominent vertical applications, many were still founded by AI researchers — even Harvey and Hebbia, which focus on highly specialized industries like law and financial services. I’m by no means diminishing these two — both have hired tons of top talent from their target industries to internalize domain expertise. But just think about it — with all the nuances in building vertical enterprise applications and consumer products, we’re only starting to see PMs and product engineers become AI founders, not to mention enterprise folks (many remain skeptics, especially after 1H23). Call me an optimist — I’m excited about what the next 2-3 years hold for LLM application startups.
III. How will the AI cloud landscape evolve once the GPU shortage is over?
Startups in AI cloud services and tooling (e.g., Anyscale, Baseten, Modal, Replicate, Together, etc.) are seeing fast adoption and revenue growth from AI developers. Elad raised a few great questions on the future of the AI Cloud Stack. The most interesting one to me is: “How much of AI cloud adoption is due to constrained GPU / GPU arb?” And by extension, “how does [the eventual ending of the GPU bottleneck] impact new AI cloud providers?”
My overall takeaway: the GPU shortage is an invaluable tailwind helping new AI cloud providers break in, but relying on it is dangerous. Differentiated services are the future.
Right now, Nvidia is incentivized to preferentially allocate some GPUs to these new players, in order to weaken the bargaining power of mainstream CSPs by seeding their competitors. This is a golden window, but we don’t know how long it will last.
Meanwhile, demand for AI developer tools is both varied and massive. Here’s a good recap of common pain points for LLM builders, i.e. unmet demand for AI infrastructure services. In other words, opportunities abound for new AI cloud providers to accumulate staying power.
Elad also suggests that we might see M&A consolidation among existing AI clouds to combine market share and service offerings. I agree. While antitrust concerns have been front and center in recent tech M&A, most of these players seem too small to raise flags with the FTC, especially if the argument is that such mergers would challenge the incumbents and therefore make cloud services overall more competitive.
Taking it one step further - will we begin to see vertical consolidation as well? E.g., could Scale AI acquire an AI cloud player, given some existing overlap in tooling and what seems a natural extension of Scale’s offerings? Or is it too risky - potentially alienating the mainstream CSPs across which Scale operates today?
IV. What does the future hold for AI hardware?
Elad’s piece didn’t touch on this, but I’ve been wondering lately. On one hand, NVIDIA’s market cap surpassed $1 trillion last year; on the other hand, we know many others are thinking about challenging NVIDIA’s dominance:
Sam Altman is reportedly raising billions for an AI chip venture.
Masayoshi Son is apparently contemplating a similar but bigger move ($100 billion).
Microsoft announced late last year that it has built its first custom AI chip to train models.
Shortly after, Amazon released a new version of its Trainium chip.
Google is using DeepMind AI to design AI processors like its TPUs.
AMD, Qualcomm, and Intel are playing catch-up with chips designed for “Edge AI” - models that run locally on devices such as laptops, phones, and tablets.
What has Apple been cooking up (in its typical low-key fashion until a tangible product is ready for launch)?
And finally, what will we see from mainland China?
The AI chip war is just getting started.
Like this post? Why not share it on social media or forward it to a friend (and if you’re new to Entrepreneurship of Life, subscribe for free to get future updates)? Besides writing about AI, I also interview distinguished US immigrant tech founders, VCs, and builders. See you again soon :)