Confluence for 4.5.26
The coming wave of AI constraint. Simon Willison’s AI State of the Union. The Solow Paradox returns. Two recent podcasts worth a listen.
Welcome to Confluence. Before we get to what has our attention this week, we want to make readers aware of an immersive development experience our firm runs for leaders of communication teams: the Strategic Communication Academy at the Sundance Resort in Utah. We’ve been running Sundance for nearly 15 years, and it remains one of the most intensive and rewarding experiences available to those leading communication teams and functions.
Over four days, we cover the full range of strategic and leadership challenges communication leaders face, from communicating strategy and leading through change, to symbolic leadership, executive presence and visibility, and building and leading a high-performing team. And, of course, we weave discussion about AI implications and application throughout. The cohort is small, the conversations deep, and the relationships long-lasting. It’s also a spectacular setting, which doesn’t hurt. The program runs June 7-11, and spots are limited. If this sounds right for you, you can learn more here.
With that said, here’s what has our attention this week at the intersection of generative AI, leadership, and corporate communication:
The Coming Wave of AI Constraint
Simon Willison’s AI State of the Union
The Solow Paradox Returns
Two Recent Podcasts Worth a Listen
The Coming Wave of AI Constraint
What does it mean when there’s a disparity of available intelligence?
Since the release of ChatGPT, we’ve been living in a time of relative AI abundance. Thanks to free tiers of service from OpenAI, Google, and Anthropic, anyone on the planet with an internet connection has had nearly unlimited, free access to some of the most powerful and intelligent information technology created by humanity.
The large labs have been able to provide this access by heavily subsidizing their model development and serving costs; the average user simply has not been paying the full freight of their usage. This is, of course, a loss-leader strategy, and it has been tenable thanks to the labs’ rapid growth and heavy capitalization. And while the models have been wildly popular, for much of the past three years they have not been an item of daily reliance for most people on the planet. They certainly haven’t been part of the daily work of most people in most organizations. So while the labs have been generally losing money on model usage, the expectations and the reality of their growth have made the loss acceptable.
But now several things are changing this calculus.
First, until very recently most non-technical people have struggled to bring these models to bear on their daily work: either the models required specialized skill sets, or the tools that would give them high utility didn’t exist. But now interfaces like Claude Cowork have made it easier and safer for people to use a model like Claude in their desktop environment, and all manner of data connectors and tools have dramatically increased the power of these models to do real work. As a result, people are using these models much more frequently, and with much more sophisticated use cases, than they did when the model was a simple chatbot.
Second, models are starting to go mainstream inside organizations. A year ago, the vast majority of our clients, if they had any form of large language model at all inside their organizations, were using Microsoft Copilot in its default form. Today we are seeing clients deploy Claude Enterprise subscriptions, ChatGPT subscriptions, and Gemini subscriptions at scale, making much more powerful models and tool sets available to many more enterprise users.
Together, these factors increase the load on the model providers, stressing both the technical capacity to serve the demand and the economics of the subsidization. The labs have responded with price discrimination, offering different tiers of service at different prices based on model and usage.
This makes all kinds of sense, and price discrimination works: people pay for the service they want or can afford. The challenge is the expectation that the ubiquity and abundance of AI have created. These different tiers come with usage and model capability caps. When the work you’ve been doing hasn’t been sophisticated enough to push those limits, they don’t matter. But when it is, and you run into those caps, it absolutely matters. And as people use more powerful models, with more tools, across more sophisticated use cases, hitting the caps is becoming a more frequent event. Folks in our own firm often run into their session or weekly usage limits using Claude. The options are either to stop using Claude for several hours (or days), or for the firm to spend more so they can continue using the model above their base subscription level. Neither is optimal.
And both cause frustration. What had been an expectation of nearly unlimited availability is increasingly being challenged as people bump into their subscription limits. That was one thing when those burning through their usage limits were a team of developers somewhere in your IT department. It’s another when it’s 3,700 leaders using AI at scale, wondering why they can’t get done what they want to get done.
And it’s about to get worse.
For several weeks rumors have been floating around about a leading lab having developed a new, very powerful model. Those rumors gained weight when leaks of internal documents from Anthropic described a model called Mythos, which an Anthropic spokesperson subsequently confirmed and described as “a step change” in AI performance and “the most capable we’ve built to date.” It is rumored to be very large, very powerful, and very expensive to serve to users. For some time ChatGPT has had a Pro model, currently ChatGPT 5.4 Pro, which is also very large and very expensive to serve. OpenAI charges accordingly for it: $200 a month simply to have access to that model (and everything else in the ChatGPT universe). One can imagine that Mythos will require similar or even more expensive pricing.
At some point the expense of serving these models is no longer sustainable under the current subsidization schema. Econ 101 reminds us that with rising demand against fixed supply, the only answer is to increase price. The current price discrimination scheme does this, but with step changes in model ability and serving costs, will we see step changes in pricing? Will Mythos be $200 per user per month? Or $500? Or $1,000? What about the models after that? If a model can do the job of a $200,000 employee, is it worth $150,000 a year to use? Would you be willing to pay $1,000 a month to have intelligence stronger than your own at your beck and call? And importantly, how much is too much for most people and most organizations?
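To make the stakes concrete, here is a minimal sketch of the arithmetic. The tier prices are the hypothetical figures from the questions above, not announced pricing, and the $200,000 salary is simply the benchmark used in this discussion:

```python
# Compare hypothetical per-seat AI subscription tiers against the
# "$200,000 employee" benchmark. All prices are illustrative
# assumptions, not confirmed pricing from any lab.
ANNUAL_SALARY = 200_000  # benchmark salary from the discussion above

for monthly_price in (200, 500, 1_000):
    annual_cost = monthly_price * 12
    share_of_salary = annual_cost / ANNUAL_SALARY
    print(f"${monthly_price}/mo -> ${annual_cost:,}/yr "
          f"({share_of_salary:.1%} of a $200k salary)")
```

Even the most aggressive of these hypothetical tiers, $1,000 per month, works out to $12,000 a year, or 6% of that benchmark salary, which suggests per-seat pricing could climb well past today’s levels before the economics stop making sense for employers, even as it moves out of reach for many individuals.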
There’s no way to know until we get there, and there may be economies of scale here that we don’t yet appreciate. But at least for us, we’re anticipating a near future in which we move from powerful models that many people and organizations can afford to extremely powerful models that many people and organizations can’t. This is true of all products and services, but the challenge with these models is that they bring ability with them. More powerful models create a performance disparity between those who have them and those who don’t. What does it mean when wealthy firms have access to much more powerful intelligence than other firms? What does it mean when wealthy societies have access to much more powerful intelligence than other societies? People have been talking about disparities in economics and wealth distribution for many centuries. It may be that we’ll be adding disparities of available intelligence to that list. The consequences are probably significant and perhaps profound. We’re starting to think about what it means for us, and you probably should be doing the same. In the meantime, we think you should prepare for a time when people can’t get all the intelligence they want, anytime they want it.
Simon Willison’s AI State of the Union
How far things have come and what it means for those using AI.
Simon Willison is one of the more important, influential voices in the AI space. His blog is excellent and he’s what we’d consider a “must follow” on X. When he appeared on Lenny’s Podcast this week to offer “An AI State of the Union,” we took note. Willison’s a software engineer but his observations hold relevance across professions. The whole conversation is worth a listen, but a few takeaways especially echo our own experience and recent conversations we’ve had with clients.
Humans are increasingly the bottleneck. LLMs can simply produce more content — whether that’s lines of computer code, research reports, internal communication materials, or something else — faster than we can review and absorb. It’s exhausting to keep up. We’ve had multiple conversations with clients this week expressing how they hoped generative AI would lighten their load, and it has in some ways, but they’re feeling more stretched because of how much they need to process. They’re analyzing, distilling, and critiquing more than ever before, and it can feel overwhelming. The more we and our teams use generative AI, the more our work shifts and the more we exhaust ourselves in different ways. To Willison’s point, “There’s a personal skill we have to learn in finding our new limits—what’s a responsible way for us not to burn out.”
We’re still figuring out the skills of the future. These models and the tools that use them are improving so quickly, the goalposts are continually moving. In Willison’s view, “The only universal skill is being able to roll with the changes.” This is a bit of an overstatement, especially when you consider professions outside of software engineering, but there’s an element of truth to it. If your job falls under the umbrella of knowledge work, working with generative AI is going to be a given. Rolling with the changes, adapting to them quickly, and continuing to grow and develop in ways that matter will separate top talent from the rest.
You should embrace the weirdness of it all. LLMs are bizarre. They can write emails updating participants on the firm’s NCAA March Madness pool in the voice of Dick Vitale. They can create an interactive map that shows all the restaurants and sites on your list for an upcoming trip to Tokyo. They can serve as a nutritional coach, helping you make better choices about what to eat (and what not to eat). They can mock up websites. And, to Willison’s delight, they can mock up images of pelicans riding bicycles. We can learn a ton about how to work with these tools, which isn’t always easy, and what they’re capable of doing just by having fun with them.
Willison is a software engineer, but the challenges he’s describing aren’t technical ones. They’re leadership challenges. When humans become the bottleneck, someone needs to make better decisions about what deserves attention and what doesn’t. When the skills of the future keep shifting, someone needs to create the conditions for people to adapt without burning out. And when the best way to learn a tool is to play with it, someone needs to make that feel acceptable in a professional setting. None of that happens on its own. It takes leaders willing to step into the uncertainty themselves, figure out what their teams should and shouldn’t be spending time on, and create room for people to experiment. The technology is going to keep evolving. Whether teams thrive or burn out along the way has less to do with the tools themselves and more to do with how we help people navigate what’s ahead.
The Solow Paradox Returns
A Federal Reserve working paper finds AI adoption is widespread and productivity gains are real – but much smaller than companies think.
We’ve written about the AI productivity paradox a few times now. Last August, it was McKinsey data showing that nearly 80% of companies were using generative AI while just as many reported no bottom-line impact. In our March 1 edition, it was ActivTrak’s workplace data, which showed that AI intensified activity across nearly every work category without reducing workloads. Those were enterprise- and individual-level findings, respectively. This week, similar patterns show up in macroeconomic data, in a new working paper from the Federal Reserve Banks of Atlanta and Richmond in partnership with Duke University.
The paper, published March 25, surveyed nearly 750 corporate executives on AI’s effects on productivity and the workforce. The headline adoption numbers are substantial. More than half of firms had invested in AI by the end of 2025, with 78% of large firms doing so. Executives reported productivity gains averaging 1.8%. But when researchers checked those self-reported gains against actual revenue and employment data, the implied gains were substantially smaller across every major industry. Companies perceive meaningful improvement in workflows and task efficiency. Those improvements have not yet translated into measurable revenue growth.
The sector-level findings are where the paper gets most useful. Gains are concentrated in high-skill services like finance, where implied annual labor productivity growth runs at approximately 0.8%. Manufacturing, construction, and low-skill services trail behind but remain positive. Critically, the gains are not being driven by capital spending on hardware or software. Rather, they reflect increases in total factor productivity tied to innovation and demand: new products, better customer engagement, process redesign. This is consistent with findings from Goldman Sachs in early March that no meaningful relationship exists between AI adoption and productivity at the economy-wide level, even as dramatic gains appear in the narrow use cases where firms actually measured impact. European comparison data cited in related research reinforces the point: each additional percentage point spent on workforce training added 5.9 percentage points to productivity gains. The organizations seeing returns are those that invest in people and process alongside technology.
This is the organizational overhang showing up in macroeconomic data. Organizations cannot absorb transformative technology faster than they can restructure around it, and the gap between adoption and absorption is where productivity goes to hide. Solow’s original paradox, which observed that computers were everywhere except the productivity statistics, resolved itself over the following decade as firms redesigned workflows, retrained workers, and rebuilt management practices around information technology. The early evidence suggests AI is on a similar trajectory: real gains, arriving unevenly, requiring the kind of deliberate organizational investment that most firms have not yet made. The measure of AI adoption should not be how many employees have access to tools. It should be whether specific workflows have been redesigned, and whether those redesigns are producing measurable improvements in revenue.
Two Recent Podcasts Worth a Listen
The Chief Information Officer of Goldman Sachs and a product lead for Claude Cowork weigh in on where things stand and where they’re heading.
There are more insightful podcasts about generative AI in a given week than we can keep up with, but the past few weeks have been particularly rich with good episodes (including the interview with Simon Willison we discuss above). Below we’re sharing two more that we’ve listened to lately and found particularly worthwhile — one from the perspective of a major enterprise putting AI to work at scale, and one from inside the lab building the tools.
“Goldman CIO Marco Argenti on the Warp-Speed Improvements in AI” (Bloomberg’s Odd Lots) – This is a wide-ranging conversation about what AI deployment actually looks like inside one of the world’s largest financial institutions. The conversation is rooted in technology and technology strategy, but the takeaways (on how roles are shifting, how to measure AI’s impact, and how to think about the pace of change) are relevant to leaders in any domain. The discussion is specific and practical throughout, as Argenti discusses what’s working, what’s changing, and how Goldman is measuring it.
“Why Anthropic Thinks AI Should Have Its Own Computer” (Latent Space) – This interview with Felix Rieseberg (a product lead for Claude Cowork) is a behind-the-scenes look at how the team behind Claude Cowork is building the product and how they see this new paradigm of AI-powered knowledge work continuing to unfold. Rieseberg walks through how Cowork was assembled in about 10 days from existing prototypes, how Anthropic’s product development process has changed (“don’t even write a memo, just build all the candidates and pick the best one”), and why the company is betting heavily on your local computer rather than moving everything to the cloud. It’s technical but should be accessible for most listeners, and it’s a useful window into how a frontier AI company is thinking about where these tools are headed next. (Thanks to Confluence reader and fellow AI writer Khe Hy for this recommendation.)
We’ll leave you with something cool: An AI engineer paid €7.80 for a Guinness in Dublin, got annoyed, and built an AI voice agent to call 3,000 Irish pubs over St. Patrick’s Day weekend and ask the price of a pint. Pubs are now lowering prices to stay competitive. Check out The Guinndex here.
AI Disclosure: We used generative AI in creating imagery for this post. We also used it selectively as a creator and summarizer of content and as an editor and proofreader.
