For the entirety of human civilisation, intelligence was scarce. We're slowly entering a world where that is not the case.
For a brief moment at the end of last year, when OpenAI released o1-pro, it seemed like accessing the highest intelligence required $200 a month. For many, that was a no-brainer: they subscribed, and those who did were awed by what a language model could do.
For months, people threw thousands of lines of code at it, some even using specialized software to send entire repositories to the model, asking for precise edits and fixes, and it could do it all.
The model not only did it, but behaved in a peculiar way: it outputted exactly what you asked it to output. If you're used to an LLM spitting out code completions, you're also used to it explaining what edits it made and where, but o1-pro never did that. It assumed you knew what you were asking for, and because you didn't ask it to explain the changes, it never did.
The implication was that people without access were not only missing out on experiencing the highest intelligence on the planet, they were also not learning how to use it in the first place.
In an age of exponential intelligence, that's the perfect way to get left behind, and so there could well have been a timeline where only those who paid $200 a month rapidly outcompeted those who couldn't.
Fortunately, that wasn't the case.
II.
DeepSeek R1 was the first "viral" reasoning model. Unlike OpenAI, which hides its reasoning chains-of-thought, obscuring what the model actually thinks, DeepSeek went the opposite way and exposed everything. Watching the model's internal monologue as it debates how to respond to a "hey, wyd?" is simply delightful.
The response from consumers was clear: DeepSeek gained over 100M users in a week.
A big reason for this was that DeepSeek was free to use. People didn't have to worry about spending a dollar to experience it.
It was also open source, and three months after its release, it is still the most powerful open-source model.
DeepSeek was so important that it forced other labs to adjust their strategy: OpenAI and Anthropic became more lenient about exposing their models' reasoning, and OpenAI released o3-mini to free users. Even so, DeepSeek is still ahead of all major labs on important developer features like prompt caching, where they cache prompts for hours while all US AI labs only cache for 5-10 minutes.
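To see why longer cache lifetimes matter, here is a rough mental model of prefix-based prompt caching. This is an illustrative sketch, not any provider's actual implementation: the provider reuses the computation for the longest prompt prefix it has already seen, so requests that share a long, stable prefix (a system prompt, or a whole repository pasted into context) only pay full price for the new suffix.

```python
# Toy model of prefix-based prompt caching (illustrative only, not any
# provider's real code). Requests that share a long prefix with an earlier
# request only "pay" for the characters after that shared prefix.

def _longest_common_prefix(a: str, b: str) -> int:
    """Length of the shared leading prefix of two strings."""
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    return i

class PrefixCache:
    """Remembers past prompts and reports how much of a new prompt
    overlaps with the best previous match."""
    def __init__(self) -> None:
        self.seen: list[str] = []

    def process(self, prompt: str) -> dict:
        hit = max((_longest_common_prefix(prompt, p) for p in self.seen),
                  default=0)
        self.seen.append(prompt)
        return {"cached_chars": hit, "new_chars": len(prompt) - hit}

cache = PrefixCache()
system = "You are a code-review assistant. " * 50   # large, stable prefix
first = cache.process(system + "Review file A.")    # pays for everything
second = cache.process(system + "Review file B.")   # reuses the shared prefix
```

If the provider evicts the cache after a few minutes, a developer who returns to a long session an hour later is back in the `first` case; a cache that lives for hours keeps them in the `second` case, which is where the savings are.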
Then came Grok 3, the first xAI reasoning model that was on par with o1-pro. Grok 3 with thinking is heavily underrated, and I think that's primarily because its biggest fans were also blind fans of Elon Musk with no prior experience using any other AI. Just as they praise every action of Elon on his X feed to maximise their own engagement, they unknowingly cost Grok 3 its credibility.
But Grok 3 is still really good! If you've not played around with it, now is as good a time as any. It's consistently at the top of all leaderboards, it's smart, you can see all its thoughts just like DeepSeek's, and it's available again, for free. Just $8 on X gives you enough access to Grok 3 that it's a steal at this point, and the only mistake the xAI team has made is not releasing its API more publicly or open-sourcing the model yet.
If DeepSeek R1 and Grok 3 proved that the $200 gate to great models was coming to an end, Google's experimental release of Gemini 2.5 Pro this month showed that the best models are only going to get cheaper with time.
Gemini 2.5 Pro is better than o1-pro. It's better than Grok 3. It's better than every model right now, and it's available to anyone for free. The API costs less than Sonnet 3.7, which means even for programming, using Gemini over Sonnet is a no-brainer right now.
III.
The best search engine, social network, and video platform have all claimed outsized shares of human attention, leaving scant crumbs for the rest. With AI, this is likely also going to be true.
In the long term, only one big AI lab may remain, the one with the best model, while others compete for niche use cases where the best model is either too expensive or too inconvenient to use.
The best AI model won't merely outperform the others marginally; it will capture exponentially more attention, data, and resources. From programming to agents, if the best model is both cheap and convenient, it will be the go-to for millions of developers, and almost all AI apps will be built on it.
We should hope that this monopoly period doesn't arrive anytime soon, but within the next 10 years, we'll know for sure.
As this race continues, I think Google, xAI, and OpenAI will be the primary competitors till the end.
Anthropic will die, or be acquired by Google or Amazon.
China will begin to wake up and invest more resources, perhaps even centralising the efforts of all its AI firms to create a god model with exponentially more compute than is available in the world right now, but it seems unlikely that it will ever hold the #1 position. We should still be grateful for Chinese efforts and talent: they've open-sourced their work, and they're actively making AI more accessible by making it very cheap. But chip controls and the lack of early state support will make it impossible for them to win.
Among Google, xAI & OpenAI, it's too tough to say which firm's model will win, but you can be certain they'll continue to reduce prices or release models for free to capture attention and data.
IV.
Very soon, OpenAI will release o3, o4-mini and o3-pro. Again, o3-pro may end up being the best model but hidden behind a $200 paywall.
You should commend them for releasing it at all rather than keeping it internal, and for setting a pricing tier at which serving that model makes economic sense.
But every day that a model from Google, xAI, or even DeepSeek remains freely available and comparable to, if not better than, the best model behind a $200 subscription is a day that new users choose the free option over the paid one.
And every day that some marginally better state-of-the-art model remains too expensive is a day that a cheaper but almost-equally-capable model takes up more of the market.
Either way, for consumers, this is all good news, at least temporarily.
Till there's a winner in the AI race, models will continue to get cheaper. Models will continue to get smarter. Jobs will continue to be displaced, and new jobs will continue to be created.
Science fiction will continue to be eclipsed by our new reality as infinite intelligence remains accessible to everyone.
The question is - what will you do with it?