By Michael-Patrick Moroney
Let's begin with numbers. It took over 2,500 days for the Ford Model T to reach one million users. The iPhone reached it in 74. ChatGPT? A paltry five.
That reality opens the May 2025 BOND report with the kind of bravado usually reserved for moon landings. Only here the moon is on Earth, and learning to think. What BOND, led by Mary Meeker (the former Wall Street securities analyst known for her work on the Internet and new technologies) and her team, provides is not just a snapshot of where artificial intelligence is. It's a dispatch from a present already becoming the future.
The report is a panoramic account of acceleration. It documents a tipping point in the ascent of AI: from theoretical novelty to everyday tool. Not merely an application on your phone, but the gateway to your job, your physician, your educator, and perhaps your therapist.
Mira Murati, former chief technology officer of OpenAI
OpenAI’s ChatGPT is the avatar of this moment, and the report returns to it often. It wasn’t the first large language model, but it was the first that didn’t require a manual. With a simple prompt, the average person could engage something that felt smart. By early 2025, over 800 million people used it weekly. That’s not just a software success. It’s a paradigm shift.
What accelerated this ascent so rapidly? Meeker and her team chart a collision of conditions: 5.5 billion internet users, cloud computing on an international scale, and user-friendly interfaces. What once demanded a computer science degree now requires only curiosity and Wi-Fi.
What's worthwhile in the BOND report isn't just its figures; it's its granularity. We don't just learn how AI is doing, but where it's doing it. We read about how Yum! Brands, the parent company of Taco Bell and KFC, is optimizing fast food logistics through generative models. How Kaiser Permanente has placed AI tools in the hands of over 10,000 doctors, automating the most administrative parts of their day.
The report’s richest vein lies in its middle chapters, where it examines AI’s capacity not to replace humans, but to reshape what human work looks like. The rise of AI hasn’t meant widespread pink slips. Instead, it’s meant a rebalancing. Yes, some jobs fade. But new roles emerge: prompt engineer, model auditor, AI ethicist.
The statistics back it up: AI job postings grew 448% between 2018 and 2025, while tech jobs overall dropped. What's happening, Meeker suggests, is a shift in the nature of work. It's not the end of work. It's the end of a certain kind of repetition.
The Jackrel lab used AI to find proteins that could help fight disease.
Medicine, too, is transforming. Over 220 FDA-approved medical devices already utilize AI. Insilico Medicine and similar companies are shrinking drug discovery timelines from years to months. Meta and DeepMind are interpreting proteins, predicting structures vital to treating disease. The efficiency is astounding, but the report stops just shy of mindless jubilation. Precision, BOND warns us, must be tempered by ethics.
That brings us to the struggle at the heart of the report. As AI grows stronger, the questions become existential. In 2025, a test found that 73% of humans mistook AI for a person in conversation. AI voices are now indistinguishable from human ones. Images can be generated with eerie realism. So how do we trust what we hear, see, or read?
BOND is not alarmist. But it is clear-eyed. It catalogs the risks: misinformation, deepfakes, surveillance. It echoes Stephen Hawking’s famous warning that AI could be either civilization’s apex or its end.
The last chapters of the report are forward-looking. Not in a speculative way, but in a structural way. What does 2030 look like? AI as co-worker, as co-pilot, and as concierge. And 2035? A genuine creative collaborator in science and the arts. And after that, 2040 and the possible advent of Artificial General Intelligence (AGI): a watershed that might call into question the very nature of what it means to be human.
Ai-Da, the world’s first robot artist.
Meeker does not predict apocalypse. She prescribes responsibility. AGI, she argues, will not be an accident. It will be a choice, a chain of them. Cultural, technical, and moral.
The BOND report ends with a quotation that frames the stakes: "Statistically speaking, the world doesn't end that often." That sentence does not minimize risk. It maximizes agency. The future is not something we inherit. It's something we decide to build.
What is so striking about this review of the BOND report is its balance. It celebrates innovation without deifying it. It quotes data but writes with moral clarity. Like all good reporting on technology, it keeps returning, again and again, to the question beneath the code:
Not "Can AI think?"
But "Can we?"