“We’re on AI time.” That’s what I tell people now when they try to comprehend the rapid pace of AI and all of its connected technologies and advancements.

It’s been just two and a half years since OpenAI unleashed ChatGPT on the world, and I’ve known intuitively for months that in the world of technology, we are no longer operating on Moore’s Law, in which the number of transistors on a chip doubles every two years. We’re now on AI Model Law, in which generative model capabilities double roughly every three months.
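To put those two doubling rates side by side, here’s a quick back-of-the-envelope sketch. This is my own arithmetic for illustration, not a figure from Meeker’s report:

```python
# Back-of-the-envelope comparison (my own illustration, not from Meeker's
# report): how much something grows in one year if it doubles every
# `doubling_period_months` months.

def annual_growth_factor(doubling_period_months: float) -> float:
    """Return the multiplier accumulated over 12 months of steady doubling."""
    return 2 ** (12 / doubling_period_months)

print(f"Moore's Law (double every 24 months): {annual_growth_factor(24):.2f}x per year")
print(f"'AI Model Law' (double every 3 months): {annual_growth_factor(3):.2f}x per year")
# Prints roughly 1.41x per year vs. 16.00x per year
```

In other words, doubling every three months compounds to roughly 16x in a single year, versus about 1.4x under Moore’s two-year doubling.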

Even if you don’t believe that Large Language Models (LLMs) are developing at that pace, there’s no denying the unprecedented speed of adoption.

A new report (or rather a 340-page presentation) from Mary Meeker, a general partner at BOND investments, paints the clearest picture yet of the transformative nature of AI and how it’s unlike any other previous tech epoch.

“The pace and scope of change related to the artificial intelligence technology evolution is indeed unprecedented, as supported by the data,” wrote Meeker and her co-authors.

Google, what?

Bond Capital AI Report (Image credit: Bond Capital)

One stat in particular stood out to me: It took Google nine years to reach 365 billion annual searches. ChatGPT reached the same milestone in two years.

Meeker’s presentation illustrates something I’ve been trying to articulate for some time. There’s never been a time quite like this.

I’ve lived through some big tech changes: the rise of personal computing, the switch from analog to digital publishing tools, and the online revolution. Most of that change was gradual, though, granted, it did feel rapid at the time.

I first saw digital publishing tools in the mid-1970s, and it wasn’t until the middle-to-late 1980s that many of us made the switch, which is also around the time personal computers started to arrive, though they wouldn’t become ubiquitous for at least another decade.

The public internet arrived in 1993, but it would be years before most people were on broadband. Knowledge work didn’t transform overnight. Instead, there was a slow and steady shift in the workforce.

I’d say we had a decade of solid adjustment before the Internet and its associated systems and platforms became an inextricable part of our lives.

I still remember just how confused the average person was by the Internet. On The Today Show in 1994, the hosts literally asked aloud, “What is the Internet?” AI and platforms like ChatGPT, Copilot, Claude AI, and others haven’t been met with the same level of confusion.

Sign us up

Bond Capital AI Report (Image credit: Bond Capital)

Meeker’s report notes that ChatGPT users skyrocketed from zero at its late-2022 launch to 400M in late 2024 and 800M in 2025. A shocking 20M people are paying subscribers. It took decades to convince people to pay for any content on the Internet, but for AI, people are already lining up with their wallets open.

I suppose the rise of the internet and ubiquitous and mobile computing might have prepared us for the AI Era. It’s not as if artificial intelligence appeared out of the blue. Then again, it sort of did.

Almost three decades ago, we were marveling at IBM’s Deep Blue, the first computer to beat a reigning world chess champion, Garry Kasparov. That was followed in 2005 by an autonomous car completing the DARPA Grand Challenge. A decade after that, we saw DeepMind’s AlphaGo beat the world’s best Go player.

Some of these developments were startling, but they were arriving at a relatively digestible pace. Even so, things started to pick up in 2016, and various groups began sounding the warning bells about AI. No one was publicly using the terms “LLM” or “generative.” Still, the concern was such that IBM, Amazon, Facebook, Microsoft, and Google’s DeepMind formed the nonprofit Partnership on AI, which was intended to “address opportunities and challenges with AI technologies to benefit people and society.”

That group still exists, though I’m not sure anyone is paying attention to its recommendations. AI Time leaves little time, I think, for self-reflection.

A 2016 Stanford University Study on AI in 2030 (no longer available online) noted that “Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind.”

Meeker’s presentation, though, presents an accelerated picture that, I think, does raise some cause for concern, with one caveat: the predictions come from ChatGPT (which is an even greater cause for concern).

By 2030, for instance, it predicts AI will be able to create full-length movies and games. I’d say Google’s Veo 3 is proof we’re well on our way.

It predicts AI will be able to operate human-like robots. I would add that AI time has accelerated humanoid robot development in a way I have never seen in my 25 years of covering robotics.

It says AI will build and run autonomous businesses.

ChatGPT believes that within 10 years, AI will be able to simulate human-like minds.

If we remember that ChatGPT, like most LLMs, bases most of its knowledge on the known universe, I think we can assume that these predictions are, if anything, underambitious. Even AI doesn’t know what we don’t know.

Nvidia CEO Jensen Huang (Image credit: Nvidia)

There was some argument in the office that I had the equation wrong. There is no AI Model Law; there’s just Huang’s Law (for Jensen Huang, founder and CEO of Nvidia), which predicts a doubling of GPU performance at least every two years. Without the power of those processors, AI stalls. Maybe, but I think that the power of these models has yet to catch up with the processing power provided by Nvidia’s GPUs.

Huang is simply building for a future in which every person and business wants GPU-based generative power. That means we need more processors, more data, and development leaps to prepare for the models to come. Right now, though, model development is not being hindered by GPU development; those generative updates are happening far faster than silicon advancements.

If you accept that there is such a thing as AI Time and that the AI Model Law (heck, let’s call it “Ulanoff’s Law”) is a real thing, then it’s easy to accept ChatGPT’s view of our impending reality.

You might not be ready for it, but it’s coming all the same. I wonder what ChatGPT thinks about that.
