Confluence for 9.21.25
New edition of the Anthropic Economic Index. OpenAI study on how people use ChatGPT. The enterprise AI shift is real, but don’t forget individual mastery. Claude can create in your document templates.

Welcome to Confluence. Here’s what has our attention this week at the intersection of generative AI, leadership, and corporate communication:
New Edition of the Anthropic Economic Index
OpenAI Study on How People Use ChatGPT
The Enterprise AI Shift Is Real, but Don’t Forget Individual Mastery
Claude Can Create in Your Document Templates
New Edition of the Anthropic Economic Index
Fresh data reveal how AI usage patterns are evolving across organizations and geographies.
Anthropic released the third edition of its Economic Index this week, significantly expanding how it tracks Claude usage in the real world. This latest edition brings three notable updates. It now incorporates API usage data, capturing how organizations deploy Claude through custom applications (including our own leadership AI, ALEX). It maps usage patterns geographically, providing state-level detail in the U.S. alongside global trends. And it includes an interactive dashboard where you can explore the data yourself. You can search specific professions to see how people in those roles work with Claude, or dig into state-level patterns to understand regional variations. Here’s what it looks like:
The report contains rich data worth exploring directly, but one finding caught our attention, given what we wrote last week about the “Wizard Problem,” where users too readily accept AI outputs rather than pushing for better results: The data suggest this problem may be intensifying. According to Anthropic’s analysis, “directive” conversations, where users give Claude a task and it completes it with minimal back-and-forth, have increased from 27% in late 2024 to 39% by August 2025. Meanwhile, collaborative interactions involving task iteration and learning have declined proportionally. For the first time since Anthropic began tracking these patterns, automation usage exceeds augmentation usage.
Anthropic’s explanation rings true to us. As models grow more capable through improved reasoning and new features, they produce more polished outputs on first attempt. Users need fewer rounds of refinement to get acceptable results. This matches our experience. Today’s models deliver responses that feel more complete from the start. But here’s the catch. When outputs appear sophisticated, we’re naturally less inclined to iterate, even when pushing further would yield better results. The very improvements that make AI more useful may be training us to be less demanding users.
There’s no guarantee the trends in these data will hold, and we’ll resist making predictions about where it all leads. The technology will continue advancing, existing capabilities will improve, and new features will expand what’s possible. More people will integrate these tools into their work and personal routines. Beyond these (relatively) safe bets, we prefer observation to speculation, which is why reports like this matter. They ground us in what’s happening rather than what might happen next. And while the leading AI labs naturally have an interest in encouraging adoption of what they build, it’s valuable that they’re making this kind of usage data available. Understanding how people are using this technology beats speculation every time.
OpenAI Study on How People Use ChatGPT
New research tracks 700 million users and finds AI’s biggest impact may be outside the office.
This post was written by Anthropic’s LLM, Claude Opus 4.1. We provided the piece above as context and asked Claude to write a complementary piece summarizing OpenAI’s research. We present Claude’s text here without editing.
OpenAI released comprehensive data this week on how people use ChatGPT, drawing from internal message data spanning November 2022 through July 2025. The findings offer a counterpoint to much of the discourse around AI and work. While enterprises focus on productivity gains, the data show that 70% of ChatGPT’s 2.5 billion daily messages have nothing to do with paid employment.
The research, conducted with Harvard economists, reveals three dominant use cases that account for nearly 80% of all interactions: Practical Guidance (getting customized advice and tutorials), Writing (both creating and editing text), and Seeking Information (essentially using ChatGPT as an enhanced search engine). Writing dominates work-related usage at 42% of professional queries, but here’s what caught our attention: two-thirds of those writing requests involve modifying existing text rather than generating content from scratch. Users aren’t asking ChatGPT to replace their voice. They’re asking it to refine what they’ve already written.
The researchers introduced a useful framework, classifying messages as “Asking,” “Doing,” or “Expressing.” About half of all ChatGPT usage falls into the Asking category — people seeking information or advice to inform decisions. Another 40% involves Doing — completing specific tasks. The remaining slice consists of messages with no clear intent. Notably, Asking messages receive consistently higher satisfaction ratings from users, suggesting ChatGPT functions best as an advisor rather than an executor.
The demographic patterns tell their own story. Early ChatGPT adoption skewed heavily male (80% in the first months), but by June 2025, users with typically feminine names slightly outnumbered those with masculine names. Nearly half of all adult usage comes from users under 26, and growth has accelerated in low- and middle-income countries. Meanwhile, the occupational data confirms what many suspected: highly educated professionals in computer-related, management, and business roles are far more likely to use ChatGPT for work than those in nonprofessional occupations (57% versus 40% of messages being work-related).
Computer programming, despite dominating headlines about AI’s capabilities, accounts for just 4.2% of ChatGPT messages — a striking contrast to Anthropic’s finding that coding represents 33% of work-related Claude usage. This divergence likely reflects different user bases and perhaps different strengths, but it also suggests that the ChatGPT user base has moved well beyond the early adopter developer community.
Both OpenAI’s and Anthropic’s reports point toward a similar conclusion, though from different angles. AI adoption is accelerating, but not quite as expected. The shift toward more directive, less collaborative interactions that Anthropic documents may reflect growing user confidence. OpenAI’s finding that non-work usage is growing faster than work usage suggests the technology’s impact on daily life may ultimately exceed its workplace applications. Together, these datasets paint a picture of AI tools becoming infrastructure for both professional tasks and personal needs, with usage patterns still very much in flux.
The Enterprise AI Shift Is Real, but Don’t Forget Individual Mastery
We think it’s a “both-and” more than an “either-or.”
A new HBR piece from researchers at Stanford and Babson makes a compelling case that organizations need to shift from encouraging ad hoc AI experimentation to building enterprise-wide solutions. Valentine, Politzer, and Davenport argue that while individual experiments with Claude and ChatGPT can yield personal productivity gains, they rarely deliver measurable business value. They propose that organizations redirect resources toward structured, enterprise-aligned AI deployments with proper data infrastructure and new collaboration models between business and technical teams.
The evidence they present is striking. Johnson & Johnson concluded that most individual experimentation wasn’t yielding measurable returns and narrowed focus to just four strategic enterprise AI projects. Coca-Cola is using gen AI to customize 10,000 different versions of marketing assets across 180 countries. Accenture’s marketing team achieved 25-35% faster campaign deployment after building 14 custom AI agents. JetBlue saw 5-10% time savings with BlueBot, their enterprise knowledge assistant. The HBR authors report that these aren’t incremental improvements. They’re substantial operational gains that come from thinking systematically about AI integration rather than leaving it to individual initiative.
We agree with the authors about the enterprise imperative, and their framework for getting there — building data readiness and improving collaboration between business and tech teams — makes sense. There’s no doubt that organizations that don’t move beyond scattered individual experiments will fall behind those making these structured investments.
Yet we believe the path to enterprise success runs through individual expertise, not around it. The professionals who will build, evaluate, and govern these enterprise systems need deep personal fluency with AI tools. That fluency only comes from extensive hands-on use. You can’t craft an enterprise AI strategy if you don’t understand what these tools can and can’t do. And you can’t understand that without becoming an expert at the individual level.
Consider the successes the HBR authors cite. Our hunch is that Accenture’s marketing team and JetBlue’s business units succeeded because they had people who understood AI’s capabilities through direct, personal experience. These weren’t just project managers following a playbook, but rather experts who could bridge the gap between individual insight and enterprise execution because they’d spent time in the trenches with the tools themselves.
The future demands both approaches simultaneously. Organizations should absolutely build the robust enterprise systems the HBR authors describe. But they also need to cultivate professionals who are personally sophisticated with AI tools: people who use them daily, understand their quirks and capabilities, and can translate that knowledge into enterprise value. Instead of choosing between individual mastery and enterprise deployment, organizations must find a way to pursue both.
Claude Can Create in Your Document Templates
It’s not perfect, but it’s possible.
With Anthropic’s recent announcement that Claude can now create and work with files, we’ve taken a step closer to having generative AI capable of producing final-draft-quality work. Claude can build Word, Excel, and PowerPoint files from scratch, but can it work within existing templates? If it can, we’re one step closer to generative AI that can take a template and a set of instructions and produce something ready for final review and improvement.
To test this, we gave Claude Opus 4.1 (with reasoning turned “on”) our firm’s client memorandum format, the PDF of the OpenAI paper we wrote about above, and this prompt:
This is a test of your ability to use our firm’s document templates. This is our client memo format. I want you to add a few paragraphs of text to it that summarize the PDF paper I have attached. No headings, just a few paragraphs. Also fill in the names, dates, and subject. Don’t create a new file — edit this one, and then give it to me as a download. I’m going to send it to clients along with the paper.
Claude worked for several minutes, reading the PDF, writing its summary, analyzing the template, running code to update the file, and then providing the file for download. (During this intermission, the author worked on something else.) Here’s a screenshot of the final product, unedited:
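For readers curious what “running code to update the file” can involve: a .docx file is just a zip archive whose main content lives in word/document.xml, so filling a template can come down to string substitution inside that XML. The sketch below is not how Claude’s file tooling actually works — it’s a minimal, standard-library illustration of the general idea, and the {{placeholder}} syntax is purely hypothetical.

```python
# Minimal sketch of docx template filling: a Word file is a zip archive,
# and the body text lives in word/document.xml. The {{key}} placeholder
# scheme here is illustrative, not a real Word or Claude convention.
import io
import zipfile

def fill_docx_placeholders(docx_bytes: bytes, values: dict) -> bytes:
    """Return a copy of the .docx bytes with {{key}} placeholders replaced."""
    src = zipfile.ZipFile(io.BytesIO(docx_bytes))
    out_buf = io.BytesIO()
    with zipfile.ZipFile(out_buf, "w", zipfile.ZIP_DEFLATED) as out:
        for item in src.infolist():
            data = src.read(item.filename)
            if item.filename == "word/document.xml":
                text = data.decode("utf-8")
                for key, val in values.items():
                    text = text.replace("{{" + key + "}}", val)
                data = text.encode("utf-8")
            out.writestr(item, data)  # keep every other part unchanged
    return out_buf.getvalue()

# Build a toy in-memory "template" to demonstrate the round trip.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("word/document.xml",
               "<w:document><w:t>To: {{to}} Re: {{subject}}</w:t></w:document>")

filled = fill_docx_placeholders(
    buf.getvalue(), {"to": "Client Team", "subject": "OpenAI usage study"})
result = zipfile.ZipFile(io.BytesIO(filled)).read("word/document.xml").decode()
print(result)
```

Real Word documents split runs of text across many XML elements, which is why robust tools (and, presumably, Claude) do more careful parsing than a bare string replace.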
That is a flawless use of our existing template.¹
We’ve also tried this with our PowerPoint template, with some success, although we need to do more testing to see how far Claude can go with planning PowerPoint structures, using different master slide templates, and more. But the early results are encouraging.
While this may not seem like a big deal, it is for our team: we often have generative AI create first drafts that we must then copy into our templates and reformat by hand. That’s a waste of time and a hassle. But there is a larger implication.
It’s a bit hard to grasp that 24 months ago, using generative AI in a workflow meant keeping personal libraries of prompts to reuse through copy and paste, with text as the only output. Now we have tools that can create work product, within existing style templates, informed by web-based research and analysis, image analysis, image creation, and more. When people talk about generative AI becoming “agentic,” this is part of what they are describing: AI that can take an assignment and then independently plan, create, and use tools to complete it. This future is arriving quickly, and it’s going to change a lot about the work we do and how we do it.
We’ll leave you with something cool: Apple’s new AirPods offer live translation via Apple Intelligence — an impressive example of what the frictionless AI integration of the future might look like.
AI Disclosure: We used generative AI in creating imagery for this post. We also used it selectively as a creator and summarizer of content and as an editor and proofreader.
¹ For the record, we disclose generative AI use in our work. Your author would not send this memo to clients without a sentence up front that said something like, “I had Claude AI generate this summary on my behalf, with my review and approval.”