dot AI bubble....
“Keeping perspective” is Michael Pascoe’s mantra. He writes that AI mania has not just warped perspective, it risks blowing up reality. Over more decades than I care to count of market watching and reporting, I haven’t seen a time when there’s been more widespread conviction that we’re experiencing a dangerous bubble that’s sure to pop, yet the money keeps pouring in to inflate it. The lead-up to the “Crash of ’87”, now viewed in the rearview mirror as a minor hiccup, was relatively muted. Not even the “dot bomb” bubble at the turn of the century, when all a company had to do to get a share price boost was to add “e” or “dotcom” to its name, was as widely perceived as over-cooked as the present outlook.

“This time it’s different”

This time round there is vastly more money chasing itself, with a circular investment boom at its core built on promises of revolution and, inevitably, the claim that “this time it’s different”. This time round there are trillions of dollars being splurged on a heady mix of momentum riding, FOMO (fear of missing out) and massive bets that the first movers will win all. There are the market telltales of sky-high valuations for inexperienced startups merely intending to build data centres or AI-somethings, but they are sideshows, mere flotsam that will be blown away when the reckoning occurs, just as the shells and frauds were when the dot bomb burst.

As a general rule, things tend to turn out to be not as bad as you fear or as good as you hope. This time round, the hope is about the level of bad. The serious game is the impact both the reckoning and the AI promises will have on the real economy, on employment and wealth. That’s where the potential is for a GFC-scale event, not a piddling ’87 or dot bomb. And this time round, still carrying the cost of all their COVID efforts, governments and central banks will be less able to ameliorate the pain.
We’re already reaping the result of a global easing cycle, with global government deficit spending fuelling asset inflation and its subsequent wealth effect. After what was, with the benefit of hindsight, overcompensation during COVID, the ammo isn’t there to fight another shock. Two quite different articles this month have highlighted the impossibility of the nirvana being promised by the AI investment promoters. Former colleague Alan Kohler, writing for the ABC, was the bleakest, seeing a future where a GFC-size bust is the least-worst option.

Returns not there

The other is a note to clients by independent economist Gerard Minack showing the AI spend simply can’t generate the returns for investors to justify it. I’ll come back to that. Kohler first, adding the AI and crypto bubbles together for frightening numbers: somewhere between $3 trillion and $6 trillion has been invested in building AI infrastructure and software, and that has been responsible for almost all US economic growth over the past year. The top 10 American AI companies have provided most of the US stock market’s gains over the past two years and are now valued at $35 trillion, almost half the total market. Meanwhile there are 20,000 cryptocurrencies worth $5.8 trillion, of which Bitcoin represents more than half. The total cash in the AI and crypto bets is more than a quarter of global GDP; it’s probably the greatest technology investment boom/bubble in history. No probably about it, in my opinion. It would not take a total crash to send shocks through the financial and real economies.

Jobs armageddon the quid pro quo

But Kohler’s doomsday outcome is that it would be worse if the investment ends up being justified by profits, because those profits would come from creating massive long-term unemployment for the many while the very few at the top continue on their present path of becoming even more unimaginably rich.
Gerard Minack’s note has a smaller focus and thus a more concrete outcome: there’s a lot of capital being burned. Minack limits his numbers to the “AI8” listed entities: Alphabet, Oracle, Microsoft, Amazon, Meta, Broadcom, Nvidia and Palantir. In this bubble phase, investors keep rewarding companies that increase planned AI-related investment spending. But: “The larger the investment spend, the greater the revenue that will need to be generated to ensure an adequate return on that investment. In my view that revenue hurdle is already implausibly high. Whatever the technical wonders that AI will generate, the investment returns will disappoint. That will inevitably lead to significant market losses.” Greatly simplifying Minack’s analysis, the AI8 will conservatively have an investment stock of more than US$1 trillion by the end of next year. That’s allowing for 20 per cent depreciation on an investment spend heading above US$2 trillion in 2027. (And a reminder that this ignores the investment spending of unlisted companies such as OpenAI.)

$1T for a 10% return

So how much revenue will their AI businesses need to generate to get a reasonable return on this stock of invested capital? I’ll skip the details of Minack’s figuring, but AI would need revenue of some US$925 billion a year to achieve a modest 10 per cent return on invested capital. And that return compares with the hyper-scalers’ current 25 per cent ROIC. By comparison, Minack quotes Praetorian Capital’s Harry Kupperman’s observation that the incredibly successful Microsoft Office 365 subscription services had revenue of US$94 billion last year. “In other words, to achieve an average ROIC, the AI industry will need to support 7-10 firms with businesses as widely deployed, and widely subscribed to, as Office 365.” And then it gets harder. If the hyper-scalers’ current 50 per cent gross margin falls closer to the S&P 500 average, they would require additional revenue of US$1.2-$1.6 trillion.
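The figures quoted above can be sanity-checked with some back-of-envelope arithmetic. This is a rough sketch using only the numbers cited in the text, not Minack’s actual model; the implied operating margin is my own inference from his revenue hurdle.

```python
# Back-of-envelope check on the ROIC arithmetic quoted above.
# All dollar figures are those cited in the text; the implied margin
# is inferred, not taken from Minack's note.

invested_capital = 1.0e12   # AI8 investment stock by end of next year (USD)
required_roic = 0.10        # the "modest" 10 per cent hurdle

required_profit = invested_capital * required_roic
print(f"required operating profit: ${required_profit / 1e9:.0f}bn")  # $100bn

# Minack's ~$925bn revenue hurdle therefore implies an operating margin of:
revenue_hurdle = 925e9
print(f"implied operating margin: {required_profit / revenue_hurdle:.1%}")  # ~10.8%

# Kupperman's comparison: how many Office-365-sized businesses
# (US$94bn revenue last year) that hurdle represents.
office365_revenue = 94e9
print(f"Office 365 equivalents: {revenue_hurdle / office365_revenue:.1f}")  # ~9.8
```

The last figure lands squarely in Kupperman’s “7-10 firms” range, which is what makes the hurdle look so implausible.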
As Minack concludes, “good luck with that”. A further complication is that AI will cannibalise much of the hyper-scalers’ existing businesses. I would further speculate about what competition between that many players would do to margins. There’s also the well-reported phenomenon of much of the AI splurge being circular – the major players are hanging out and taking in a lot of each other’s washing.

Show me the money

It’s all good fun until investors reach the imminent “show me the money” stage and the music stops. That’s when the big boys take a hit and the fringe players, the bubble startups, lose their shirts. As for the concurrent crypto bubble, RBA Governor Bullock last week pointed to the challenge to the financial system’s security if quantum computing ever effectively works. “If you believe what they say on the tin of quantum computing, what takes 200 years to decrypt now, to break, will take a matter of minutes. So it is a big threat,” she said. Decrypted crypto is no crypto at all.

And the Toddler King

Then there is the little matter of the world’s biggest economy being run by a febrile toddler king and a mob of self-enriching accomplices. It all adds up to the biggest mystery: how markets are so willingly charging higher to yet more records regardless of risk, with the biggest gains at the riskiest end. On one hand there’s the view that the reckoning is always a little further off. On the other, the second law of “old bond dog” Anthony Peters comes to mind: “Nobody gets fired for being long a falling market, but woe betide anyone short a rising market.” Good luck with that, too.

https://michaelwest.com.au/dangerous-bubble-sure-to-pop-wall-street-has-ai-crash-in-the-wings/
YOURDEMOCRACY.NET RECORDS HISTORY AS IT SHOULD BE — NOT AS THE WESTERN MEDIA WRONGLY REPORTS IT — SINCE 2005.
Gus Leonisky POLITICAL CARTOONIST SINCE 1951.
PICTURE AT TOP BY GUS LEONISKY.
academic AI....
How AI exposes the moral hypocrisy of academic publishing
Nicholas Agar
Knowledge production in the humanities is undergoing a step change, a sudden transformation driven, in part, by AI technologies.
Many things in the humanities won’t change, simply because there are constants in the ways humans agree or disagree, fall in love or into hate. So long as there are humans in 2075 there will be human philosophers pondering humanity’s problems. The insights of today’s philosophical geniuses will presumably be as interesting to the philosophers of that time as are the insights of Ludwig Wittgenstein to us today.
But step changes in knowledge production place into starker relief some of the bad practices that we have fallen into. Just as Warren Buffett observed about economic downturns, “only when the tide goes out do you discover who’s been swimming naked”, so too abrupt changes in technology and student expectations expose the moral compromises of academic humanists. We have long been swimming naked, clinging to outdated practices that no longer serve students, society or truth.
The AI cheating problem

Prompting ChatGPT to “critically discuss Plato’s theory of forms” isn’t a way to do philosophy. But that’s precisely the path many students now take, motivated by high tuition fees and high-stakes assessments.
There is a race between AI writing tools and AI detection tools, one that the detectors are destined to lose. Companies like OpenAI, which created ChatGPT, don’t reveal their secrets to firms like Turnitin LLC, a business in plagiarism detection. The result? The detection tech will always be playing catch-up.
The numbers tell the story about how far behind they will lag. Turnitin was acquired by Advance Publications for US$1.75 billion in 2019. OpenAI now has a US$500 billion valuation. The first-mover wins, and OpenAI has more money to spend training AIs to produce human-like speech than Turnitin can spend on detecting it.
Terminators or Cylons?

Does the difficulty of detecting AI cheating mean that professors should give up? Perhaps it casts the defenders of human writing in the role of Sarah Connor in the Terminator franchise. The odds are clearly against her. But Hollywood produces many movies in which she heroically beats the odds and the machines.
The problem is that our movie analogy is ill chosen. We don’t face machines like the hulking T-800 cyborg. A better representation is the Cylon from the television series Battlestar Galactica. In the 2004 version, machines that perfectly pass for humans engineer our downfall by infiltrating us. What befalls the humans in that story is also happening to the humanities in real life: even as they proclaim that they are fighting AI, humanities scholars are abetting its infiltration.
The vulnerability of the humanities is more ideological than technological. It comes in the form of the teaching-research nexus, a prized feature of the Humboldtian university invented in Prussia in the early nineteenth century. Ernest Boyer, former president of the Carnegie Foundation for the Advancement of Teaching, expressed it well when he wrote:
The most inspired teaching will generally take place when faculty are pursuing their own intellectual work, and students, rather than being passive observers, are partners in the scholarly enterprise.
Our students become our apprentices. In the fullness of time, they replace us. The glitch in this plan, which continues to work for the sciences, becomes apparent in the overproduction of humanities PhDs for whom there are no jobs.
Paradoxically, much of the money governments spend to sustain the humanities amplifies its vulnerability. The money has attracted academic publishing businesses. Profits passed on to shareholders become debits for governments and taxpayers.
The hypocrisy of punishing AI cheats

Consider the contract I recently signed with humanities publisher Taylor & Francis. It granted them the right to distribute my work “in printed, electronic or other medium now known or later invented, and in turn to authorise others … to do the same”. We can speculate about what this might mean.
Informa PLC is the parent company of Taylor & Francis. Its 2025 financial report offers rare transparency about a quiet transformation underway in academic publishing. Informa is more open to its investors than it is to humanities scholars. The report reveals that Taylor & Francis generated over US$75 million in 2024 from data access licensing, explicitly naming AI companies among customers gaining legal entry to vast troves of scholarly content. With nearly 9,000 new titles added annually and a vast back catalogue of specialist works, Informa is positioning this licensing as a “repeatable income stream” and a key part of its growth strategy.
What this means for humanists is stark. The very articles and books we painstakingly produce are being fed, legally and lucratively, into AI systems that will soon replicate, and perhaps replace, our intellectual labour.
Signing up to be my apprentice by inviting me to supervise your PhD in philosophy is a bit like apprenticing with a master weaver when a factory with power looms has just opened in your town. Yet most authors remain unaware that their work is fuelling the next generation of AI tools, often without any additional consent or compensation.
This is speculation about possible motivations of Informa PLC. It would not suffice for a class action lawsuit mounted by sacked humanities academics. If pressed, Big Oil’s lawyers can vehemently assert their passion for the environment. That’s certainly how Big Academic Publishing’s lawyers would advise them to reply to questions about how they might be contributing to the failure of humanities faculties.
One hint about Informa’s intentions can be found in a linguistic pivot from the 2024 to the 2025 report. In 2024 there was talk of “flexible Pay-to-Publish Open Research platforms”. That language is absent from the 2025 report. Now that governments are less interested in paying for humanities academics to publish, it is a reasonable inference that Informa is looking to replace lost revenue with money from training AIs. Scholars fret about the sloppy academic referencing of AI text. An AI with full access to the Taylor & Francis back catalogue can almost certainly improve on the referencing of distracted humanists anxious about their jobs.
Herein lies the hypocrisy. We punish students for using AI, even as we gift our own research to a business that directly feeds it into the very models that we caution students against using — all of this without compensation, consent or even awareness. If anyone’s cheating, it’s not the students. The challenge for the humanities isn’t to either abet or beat AI detection tools. It’s to reimagine a scholarly ecosystem with AI where truth-seeking is collaborative, transparent and fair. That starts with confronting the uncomfortable truths not just about our students, but about ourselves.
Nicholas Agar is Professor of Ethics at the University of Waikato in Aotearoa New Zealand. He is the author of How to be Human in the Digital Economy and Dialogues on Human Enhancement, and co-author (with Stuart Whatley and Dan Weijers) of How to Think about Progress: A Skeptic’s Guide to Technology.
https://www.abc.net.au/religion/how-ai-exposes-the-moral-hypocrisy-of-academic-publishing/105937278
AI pop....
Will the AI bubble burst as investors grow wary of returns?
Nik Martin
Billions have poured into AI, helping stock valuations soar. But the cracks are starting to show. Slowing adoption, surging costs and elusive profits are fueling warnings that the boom may be headed for a hard reset.
The artificial intelligence (AI) party is still in full swing, with tens of billions globally pouring into infrastructure, startups and attracting the best talent.
Among the headline announcements this year: ChatGPT parent company OpenAI, SoftBank and Oracle pledged to invest $500 billion (€433 billion) in AI supercomputers; OpenAI and chip giant Nvidia announced a $100 billion fund to maintain the United States' dominance in advanced chips; and Chinese tech giants Alibaba and Tencent hiked investments to help speed up China's ambition to lead AI by 2030.
Since ChatGPT’s debut in November 2022, AI-related stocks have added an estimated $17.5 trillion in market value, according to Bloomberg Intelligence, driving around 75% of the S&P 500’s gains and propelling companies like Nvidia and Microsoft to record-breaking valuations.
Corporations are hesitant over AI adoption

But signs of a hangover are getting harder to ignore. AI usage by corporations is slipping, spending is tightening and the machine learning hype has massively outpaced the profits.
Many economists think these usage concerns, barely three years into AI going mainstream, undercut the prevailing narrative that AI would revolutionize how businesses operate by streamlining repetitive tasks and improving forecasting.
"The vast bet on AI infrastructure assumes surging usage, yet multiple US surveys show adoption has actually declined since the summer," Carl-Benedikt Frey, professor of AI & work at the UK's University of Oxford, told DW. "Unless new, durable use cases emerge quickly, something will give — and the bubble could burst."
The US Census Bureau, which surveys 1.2 million US companies every fortnight, found that AI-tool usage at firms with more than 250 employees dropped from nearly 14% in June to under 12% in August.
AI’s biggest challenge remains its tendency to hallucinate — generating plausible but false information. Other weaknesses are inconsistent reliability and the poor performance of autonomous agents, which complete tasks successfully only about a third of the time.
"Unlike an intern who learns on the job, today’s pretrained [AI] systems don’t improve through experience. We need continual learning and models that adapt to changing circumstances," said Frey.
Unsustainable capital burn

As the gap widens between sky-high expectations and commercial reality, investor enthusiasm for AI is starting to fade.
In the third quarter of the year, venture-capital deals with private AI firms dropped by 22% quarter on quarter to 1,295, although funding levels remained above $45 billion for the fourth consecutive quarter, market intelligence firm CB Insights wrote last month.
"What perturbs me is the scale of the money being invested compared to the amount of revenue flowing from AI," economist Stuart Mills, a senior fellow at the London School of Economics, told DW.
Market leader OpenAI, which is backed by Microsoft, generated $3.7 billion in revenue last year, versus total operating expenses of $8-9 billion. The company says it is on course to make $13 billion this year but is still expected to burn through $129 billion before 2029, news site The Information calculated in September.
Mills thinks the companies behind generative AI products like Elon Musk's Grok and OpenAI's ChatGPT are "charging far less than they need to make a profit" and should raise subscription prices.
Few have quantified the AI bubble more starkly than Julien Garran, partner at UK-based research firm MacroStrategy Partnership. He argues that the sheer volume of capital flowing into AI — despite little evidence of sustainable returns — dwarfs previous speculative frenzies.
"We estimate a misallocation of capital equivalent to 65% of US GDP — four times bigger than the housing buildup before the 2008/9 financial crisis and 17 times bigger than the dot-com bust," Garran told DW.
Investors increasingly cautious

Recent earnings from Big Tech have sparked cautious optimism, but also fresh doubts about AI’s staying power. Data analytics and intelligence platform Palantir's Q3 revenue surged 63% year-over-year, but its stock price fell by up to 7% on the news. AMD and Meta also saw their strong AI-related earnings overshadowed by market concerns about sustainability.
That disconnect between soaring valuations and shaky fundamentals is exactly what worries Mills, who sees a widening gap between what AI promises and what it actually delivers to businesses.
"The data suggests that AI is not penetrating high enough up the value chain. Loads of people are using it, but it's not being used for tasks that directly contribute to value production," he told DW.
Nvidia's upcoming earnings on November 19 may prove a key test of whether the AI boom still has legs. In the second quarter, Nvidia's data center sales alone made up 88% of total revenue, which hit a record $46.7 billion. For Q3, the company has guided $54 billion, projecting 54% year-on-year growth, which would equate to a full-year total of more than $200 billion.
When will the bubble pop?

"With the exception of Nvidia, which is selling shovels in a gold rush, most generative AI companies are both wildly overvalued and wildly overhyped," Gary Marcus, Emeritus Professor of Psychology and Neural Science at New York University, told DW. "My guess is that it will all fall apart, possibly soon. The fundamentals, technical and economic, make no sense."
Garran, meanwhile, believes the era of rapid progress in large language models (LLMs) is drawing to a close, not because of technical limits, but because the economics no longer stack up.
"They [AI platforms] have already hit the wall," Garran said, adding that the cost of training new models is "skyrocketing, and the improvements aren’t much better."
Striking a more positive tone, Sarah Hoffman, director of AI Thought Leadership at the New York-based market intelligence firm AlphaSense, predicted a "market correction" in AI, rather than a "cataclysmic 'bubble bursting.'"
After an extended period of extraordinary hype, enterprise investment in AI will become far more discerning, Hoffman told DW in an emailed statement, with the focus "shifting from big promises to clear proof of impact."
"More companies will begin formally tracking AI ROI [return on investment] to ensure projects deliver measurable returns," she added.
Edited by: Uwe Hessler
https://www.dw.com/en/will-the-ai-bubble-burst-as-investors-grow-wary-of-returns/a-74636881
top 10.....
How Does the End Begin?
EDITOR. ACTIVISM.
SCOTT GALLOWAY
The top 10 stocks in the S&P 500 account for 40% of the index’s market cap. Since ChatGPT launched in November 2022, AI-related stocks have registered 75% of S&P 500 returns, 80% of earnings growth, and 90% of capital spending growth. Meanwhile, AI investments accounted for nearly 92% of U.S. GDP growth this year. Without those AI investments, Harvard economist Jason Furman noted, growth would be flat. As Ruchir Sharma concluded in the Financial Times, “America is now one big bet on AI,” adding, “AI better deliver for the U.S., or its economy and markets will lose the one leg they are now standing on.” This concentration creates fragility, and it suggests how the end might begin.
ICE & Wiping Grandma’s Ass

I’m especially proud of the above header, and there is a connection. The S&P, Nasdaq, and DJIA are some of the most damaging metrics in modern history, as they create the illusion of prosperity even as depravity rages on. The cloud cover for a masked secret police terrorizing communities is these indices. As long as your 401(k) is going up, then everything must make sense and be okay, no? No. Trump could not send troops into U.S. cities if the S&P were down, vs. up, 13%. AI stocks, and the sugar high they have inspired across the entire market, numb Americans to the nagging tooth pain that we are descending into fascism. Yeah, those evil people who wipe your grandma’s ass, pick our crops, and build our homes can be treated inhumanely as long as Nvidia remains worth more than the entire German stock market.
Trillion-Dollar Question

Valuations for the Mag 10 — the original group of seven leading tech stocks, plus AMD, Broadcom, and Palantir — are high, but not yet at historic peaks. The 24-month forward P/E ratio of the Mag 10 is 35x. In 2000, at the height of the dot-com bubble, the top 10 stocks traded at 52x forward earnings. Implicit in these valuations, however, is an assumption that AI will help these companies cut costs or grow revenues by $1 trillion in the next two years. I believe we’re either going to see a massive destruction in valuations, infecting all U.S. stocks and global markets, or a massive destruction in employment across industries with the highest concentrations of white-collar workers. Both scenarios are ugly.
If Mag 10 valuations are cut in half, the S&P and global markets would decline by 20% and 10%, respectively. In the U.S., the immediate impact would be felt by the wealthiest 10%, who own 87% of the stocks. Those households won’t struggle to pay their bills, but they may be the tail of the whip on the economy, as wealthy households have the luxury of decreasing their spending dramatically, vs. middle-class households, who spend the majority of their income on basics. If the top 10%, who account for half the consumer spending in the U.S., hit the brakes, the nation gets whiplash. I estimate that if the wealthy see their portfolios drop by 20%, we could see a 2-3% decline in GDP. For context: From peak to trough, the Great Recession registered a 4.3% drop in GDP.
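The index arithmetic in that scenario is simple weighting; a minimal sketch using the figures cited above (the GDP pass-through at the end is the author’s estimate, reproduced as given, not a calculation):

```python
# If the top 10 stocks are ~40% of the S&P 500's market cap (per the text)
# and their valuations are cut in half, the index falls 40% x 50% = 20%,
# holding everything else constant.

top10_weight = 0.40
valuation_cut = 0.50

index_decline = top10_weight * valuation_cut
print(f"S&P 500 decline: {index_decline:.0%}")  # 20%

# The text's pass-through estimate, stated rather than derived: a 20%
# portfolio drop for the top 10% of households (half of US consumer
# spending) could mean a 2-3% decline in GDP.
gdp_hit_range = (0.02, 0.03)
print(f"estimated GDP hit: {gdp_hit_range[0]:.0%}-{gdp_hit_range[1]:.0%}")
```

The 20% figure for the S&P assumes no contagion to the other 490 stocks, which is why the global-markets estimate (10%) is lower still.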
If the Mag 10 justify their valuations by delivering $1 trillion in cost-cutting (Latin for “layoffs”), the impact will hit white-collar workers first, but the contagion could spread. Assuming an average white-collar wage of $100,000 per year, that’s 10 million jobs lost and a 6% increase in unemployment. That estimate is conservative compared to the “white-collar bloodbath” predicted by Anthropic CEO Dario Amodei, who told Axios, “AI could wipe out half of all entry-level white-collar jobs — and spike unemployment 10%–20% in the next one to five years.” The IMF warns that 60% of jobs are already exposed to AI in advanced economies and 40% in emerging markets. According to Okun’s Law, for every 1 percentage point increase in the unemployment rate, real GDP falls by approximately 2 percentage points. But AI could be a different story, with some experts predicting jobless growth. According to a J.P. Morgan report, AI may do to white-collar work what automation did to middle-skill jobs like sales, manufacturing, and construction in the 1980s. The canary in the coal mine may be recent college graduates. Stanford economists found that early-career workers (ages 22 to 25) in the most AI-exposed jobs have experienced a 13% relative decline in employment. If this trend accelerates, today’s challenges around wealth inequality and political volatility will seem quaint.
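The layoff arithmetic above can be sketched as follows. The labor-force figure is my assumption (roughly the current US labor force, which is what makes 10 million jobs equate to about 6 percentage points of unemployment); everything else comes from the text.

```python
# $1 trillion of "cost-cutting" at an average white-collar wage of
# $100,000 a year equals 10 million jobs lost.

cost_cut = 1.0e12        # Mag 10 cost-cutting needed to justify valuations
avg_wage = 100_000       # average white-collar wage, per the text
labor_force = 168e6      # assumed US labor force; not from the text

jobs_lost = cost_cut / avg_wage
unemployment_pp = jobs_lost / labor_force
print(f"jobs lost: {jobs_lost / 1e6:.0f} million")        # 10 million
print(f"unemployment increase: {unemployment_pp:.1%}")    # ~6.0%

# Mechanical implication of Okun's Law as cited (each 1pp of
# unemployment costs ~2pp of real GDP) -- though the text notes AI
# could instead produce "jobless growth".
print(f"implied GDP drag: ~{unemployment_pp * 2:.0%}")
```

That implied GDP drag dwarfs the Great Recession’s 4.3% peak-to-trough drop, which is the point of the paragraph: Okun’s Law may simply not hold in an AI-driven displacement.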
Past Bubbles

A 2018 study that examined 51 innovations between 1825 and 2000 found 37 were accompanied by bubbles. The destruction that followed in each bubble’s wake, however, varied greatly, depending on several factors. Bubbles inflated by political policies are more destructive than those inflated by new technologies, according to economic historians William Quinn and John Turner. The size of the capital investment is also important. British railway investment in the 1840s was 15%–20% of GDP. When that bubble burst, unemployment doubled. In the U.S., railroad capex averaged 2.4% of GDP in the 1870s. That bubble drove the financial panic of 1873. In both cases, however, those investments paid (delayed) dividends in the form of rail capacity that helped distribute the promise of the Industrial Revolution. In contrast, as The Economist noted, the capex by electronics firms in the 1980s fueled Japan’s asset price bubble, but the spending “ultimately served no useful function.” Finally, the severity of a crash depends on who takes the losses. A second British rail bubble in the 1860s hit banks hard. The recent NFT bubble was a case study in the greater fool theory, but the contagion didn’t reach the broader economy.
Bubble Trouble?

You can’t predict when/if a bubble will burst, but Azeem Azhar, founder of Exponential View, and researcher Nathan Warren have created a framework that compares historic bubbles with AI today. In their estimation, AI is a boom, but “booms can sour quickly and there are several pressure points worth watching.” If AI capex exceeds 2% of GDP, that’s cause for concern; it’s currently estimated at around 1.3%. A sustained fall in enterprise or consumer spending levels is another pressure point. A flawed, though perhaps directionally correct, MIT study rattled the AI ecosystem by claiming that 95% of firms have yet to see measurable ROI from their AI pilot programs. We’re approaching a valuation redline if/when P/E ratios reach the 50x to 60x range. Finally, if internal cash covers less than 25% of capex, Azhar and Warren believe investments in data centers will come under pressure.
Dot-Com Vibes

The AI infrastructure build-out has accelerated recently, with an estimated $1 trillion in new commitments. Some firms are making deals with money and assets that don’t yet exist. See: OpenAI promising Oracle $300 billion — money it doesn’t have — for infrastructure Oracle hasn’t built. In other cases, revenue comes from “circular financing,” where dollars rotate between firms, obscuring true market demand. See: Nvidia’s $100 billion investment in OpenAI, which OpenAI will use to buy … Nvidia chips. Circular financing deals were common toward the end of the dot-com bubble, when similar deals contributed to a crash that destroyed 77% of Nasdaq market value. If we are on the precipice of a bubble popping, Nvidia and OpenAI will likely be ground zero. But the fallout would be widespread, as an ecosystem that resembles an ouroboros lives and dies by a shared narrative.
Fragile

In his book Irrational Exuberance, Robert Shiller wrote, “The word ‘bubble’ creates a mental picture of an expanding soap bubble, which is destined to pop suddenly and irrevocably. But speculative bubbles are not so easily ended; indeed, they may deflate somewhat, as the story changes, and then reflate.” The operative word is “story.” Entrepreneurs, aka storytellers, deploy narratives to capture imaginations and capital in order to pull the future forward. Valuations aren’t a function of balance sheets, but of the stories that give those balance sheets meaning and direction. In the case of AI, a key storyline is shifting. A Prof G analysis of ChatGPT data found that work-related prompts fell from 47% in 2022 to 27% in 2025; ChatGPT has 76% market share. As my Markets cohost Ed Elson said, “the bull case for AI is that it’s going to transform work, but what we’re learning is it’s mostly just affecting your personal life.”
The trouble isn’t the shifting narrative, but the fragility of America’s bet on AI and wealthy consumers driving growth. If McDonald’s goes out of business, the fast-food industry will continue to meet demand for cheap calories; the industry is robust. J.P. Morgan, now worth more than the 10 biggest banks in the EU combined, is too big to fail. America’s bet on AI, by contrast, is a bet without a hedge. If companies that aren’t the Mag 10 (i.e., the S&P 490) report they’re scaling back AI investments as the adoption layer fails to launch, the connective tissue between AI, trillions in market cap, and the broader economy severs. The experts have already deemed the grid and electrons the gating factor in our AI future. However, they’re ignoring a more glaring possibility: AI may be more like VR than GPS and simply not offer the ROI built into these valuations. Also, citizens burned by tech executives writing books on gender balance while launching products that result in teen girls cutting themselves may decide character AI and porn are disastrous for our sons.
If China’s AI program produces another “Sputnik moment” similar to DeepSeek earlier this year, valuations for U.S. AI firms could tumble. And if reports including Apple’s “The Illusion of Thinking” extinguish the hope that artificial general intelligence is near, AI, and by extension the American economy, may experience a significant correction. We’re the biggest economy in the world and the most powerful nation in history. However, concentrating wealth in so few hands, betting on so few companies, makes us fragile.
When asked what one piece of advice I’d give young people, I offer “Nothing is as good or as bad as it seems.” Our economy rests on the belief that AI is even better than it seems. Careful…
Life is so rich,
P.S. For Prof G Conversations I spoke with Kai Ryssdal, host and senior editor of Marketplace. We discussed the risk of stagflation, the growing divide between the top 10% and everyone else, and why America’s economic strength still depends on the health of its democratic institutions. Listen here, and watch here.
https://www.activistpost.com/how-does-the-end-begin/