Confluence for 8.31.25
Anthropic Academy now online. Beyond automation. Describe the present, don't predict the future. Worker attitudes on AI.

Welcome to Confluence. Here’s what has our attention this week at the intersection of generative AI, leadership, and corporate communication:
Anthropic Academy Now Online
Beyond Automation
Describe the Present, Don’t Predict the Future
Worker Attitudes on AI
Anthropic Academy Now Online
The list of resources that can help you understand and use generative AI grows.
This post is written by Anthropic’s LLM, Claude Opus 4.1. We gave directional guidance and feedback only, and provide Claude’s text here without editing.
We’ve been thinking lately about a peculiar asymmetry: the most powerful tools most of us have ever encountered — large language models — arrive with essentially no documentation. Traditional enterprise software comes with training modules and user guides. Generative AI? A blank text box and infinite possibility. This gap between capability and comprehension has been on our minds as we work with clients who struggle to extract consistent value from these systems. Which is why Anthropic’s recent launch of their Academy caught our attention this week. It represents something more substantial than tutorials — it’s a recognition that we need frameworks for thinking about human-AI collaboration, not just better prompts.
At the heart of the Academy sits the AI Fluency Framework, developed by Professors Rick Dakan (Ringling College of Art and Design) and Joseph Feller (University College Cork) based on their research into how AI tools were transforming creative and business processes. The framework centers on four core competencies — the “4Ds”: Delegation (deciding what work to do with AI versus yourself), Description (communicating effectively with AI), Discernment (evaluating outputs critically), and Diligence (ensuring responsible interaction). Here’s how the 3-4 hour course unfolds:
Course Structure:
Introduction to AI Fluency and the 4D Framework
Generative AI fundamentals, capabilities and limitations
Delegation: Problem awareness, platform awareness, and project planning
Description: Communication techniques and effective prompting strategies
Discernment: Critical evaluation of AI outputs and behaviors
The Description-Discernment Loop: Iterative refinement
Diligence: Accountability, transparency, and responsible AI use
Conclusion with certificate of completion
What struck us reviewing these materials is their vendor neutrality. These aren’t Claude-specific tricks but universal principles that apply across all models. The Academy has expanded to include specialized tracks for educators and students, all released under Creative Commons license. There’s also a “Claude for Work” section addressing enterprise deployment. Organizations can take these materials and customize them to their contexts — a recognition of how organizational learning actually happens.
We think this framework is worth your time, particularly if you’re helping others work effectively with AI. These systems can write like Shakespeare one moment and fail at basic arithmetic the next. Without frameworks for understanding these inconsistencies, users bounce between overconfidence and frustration. What Anthropic has built here is a structured way to think about human-AI partnership that will remain relevant as capabilities evolve. The organizations that thrive won’t be those with the most advanced AI tools, but those whose people understand how to work alongside them. The more deliberately we equip ourselves with these mental models, the better positioned we’ll be to extract real value from what remains a fundamentally transformative technology.
Beyond Automation
The case for keeping humans in the loop.
MIT economist David Autor and Google AI expert James Manyika recently co-authored a piece in the Atlantic that offers a crucial reframing for how we should think about AI in the workplace. Their central insight: we’ve been asking the wrong question. Instead of focusing on what AI can automate, we should be asking when AI should collaborate with humans and when it should work autonomously.
The distinction matters more than many might think. The authors cite a study of radiologists using an AI diagnostic tool called CheXpert. Even though the AI on its own performed better than two-thirds of the radiologists, giving radiologists access to its predictions actually decreased diagnostic accuracy. Why? The tool was designed for automation, not collaboration. It offered predictions without transparency, leaving doctors to guess when to trust the machine versus their own expertise. When the AI was confident, doctors often overrode it unnecessarily. When they themselves were uncertain, they abandoned their own better judgment and deferred to the algorithm. The result was worse performance than either humans or AI working alone.
This paradox connects directly to a trap we’ve written about before, one best captured in Lisanne Bainbridge’s seminal 1983 paper, “Ironies of Automation.” Bainbridge identified a fundamental problem that the Atlantic authors echo: automation that aims to replace human expertise but can’t fully do so creates dangerous dependencies. The rise of automation in the aviation industry, which Bainbridge explored, is the perfect example. Look no further than the Air France Flight 447 disaster: when ice crystals caused the autopilot to disengage, the startled crew couldn’t effectively take manual control of what was actually a perfectly functional aircraft. Years of reliable automation had eroded the very skills needed when automation failed. As Autor and Manyika argue, imperfect automation is not a first step toward perfect automation. That would be like trying to jump halfway across a canyon as a step toward jumping the full distance. Recognizing that the leap is impossible, we need to find better alternatives: building bridges, hiking the trail, or driving around.
The solution lies in recognizing which tools should automate and which should collaborate. The authors provide good examples: automatic transmissions and ATMs work as closed systems without human oversight, replacing human expertise entirely in their narrow domains. But collaboration tools like chain saws, word processors, and stethoscopes amplify human capabilities while keeping humans engaged and skilled. The authors go on to highlight modern successes: Google’s AMIE medical system exposes its reasoning and highlights uncertainty, pulling doctors into active problem-solving. A PNAS study found doctors working with AI achieved 85% better accuracy than physicians alone. Airlines discovered that heads-up displays (HUDs), which provide information while pilots actively fly, reduce accidents compared to full automation.
Yet even as we recognize the value of collaboration, risks remain. Take, for example, the article’s comparison of legal novices, who struggle to spot inaccuracies when using AI and may be actively misled, with experts who know to use these tools for brainstorming and refinement. Studies show overreliance on AI can impede critical thinking through “cognitive offloading.” Most concerning—and something we’ve written about extensively—is that when AI performs tasks previously done by junior professionals, we lose the apprenticeship opportunities that build tomorrow’s experts. The very pathway for developing expertise gets disrupted.
We see this tension play out in our own practice. We’ve lately taken to demonstrating the stark difference between minimal and maximal human involvement when using an LLM. First, we’ll show what happens with a simple prompt like “draft a strategic narrative about X,” which often produces generic, albeit grammatically correct content. Then we transform the interaction, prompting the AI to ask probing questions, challenge our assumptions, pressure-test our logic. The AI becomes a thought partner rather than a task executor, helping us refine and develop ideas we might not have reached alone. The resulting work reflects genuine collaboration. It’s neither purely human nor purely machine, but ideally something richer than either might produce independently.
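For readers who want to see the contrast in concrete terms, here is a minimal sketch using the Anthropic Python SDK. The model name and prompt wording are illustrative assumptions, not the exact language we use with clients; the point is the structural difference between asking for a finished draft and instructing the model to interrogate your thinking before it writes anything.

```python
# Minimal sketch of two prompting modes with the Anthropic Python SDK.
# Assumes the SDK is installed and ANTHROPIC_API_KEY is set in the environment;
# the model name and prompt text are illustrative.
import anthropic

client = anthropic.Anthropic()

# Mode 1: AI as task executor. A bare request that tends to produce
# generic, if grammatically correct, output.
executor = client.messages.create(
    model="claude-opus-4-1",
    max_tokens=1000,
    messages=[{"role": "user",
               "content": "Draft a strategic narrative about our AI rollout."}],
)

# Mode 2: AI as thought partner. The system prompt tells the model to probe,
# challenge, and pressure-test before drafting, keeping the human engaged.
partner = client.messages.create(
    model="claude-opus-4-1",
    max_tokens=1000,
    system=("Before drafting anything, ask me probing questions one at a time, "
            "challenge my assumptions, and pressure-test my logic. Only produce "
            "a draft once I confirm the thinking is sound."),
    messages=[{"role": "user",
               "content": "Help me develop a strategic narrative about our AI rollout."}],
)

print(partner.content[0].text)
```

The code itself matters less than the posture it encodes: in the second mode the human stays in the loop while the ideas take shape, rather than receiving a finished artifact to accept or reject.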
The Atlantic piece crystallizes what we’ve been learning: the future of AI in knowledge work isn’t about replacing humans. Instead, it’s about designing systems that keep humans engaged, developing, and contributing their irreplaceable judgment. As Autor and Manyika conclude, instead of trying to automate the leap across the canyon before us, we should be building bridges that leverage both human and machine capabilities. The choice between automation and collaboration will shape whether we enhance or erode the very human expertise that makes us valuable in the first place.
Describe the Present, Don’t Predict the Future
Another week, another study on the effects of generative AI on the labor market.
This week we saw a new paper seeking to understand the effects of generative AI on the labor market, this time from a team of Stanford researchers: Erik Brynjolfsson, Bharat Chandar, and Ruyu Chen. The paper, “Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence,” takes a fundamentally different approach from the Economic Innovation Group study we covered three weeks ago and comes to a different conclusion. It uses data from ADP, the largest payroll software provider in the United States, to look for the effects of generative AI on employment.
Using data from millions of workers, the Stanford team identified six key patterns:
Employment has declined for early-career workers in the roles most exposed to AI, in contrast to more experienced workers in similarly exposed roles and to early-career workers in occupations less exposed to AI.
Even as overall employment remains strong, employment growth for younger workers has stagnated since 2022.
There’s a distinction between roles where AI can automate work and those where it augments workers. For roles where AI can automate the work, there are declines in employment. For roles where AI mostly augments the work, employment is growing.
The findings related to employment for early-career workers hold even when controlling for firm-level factors. In other words, the declines persist after accounting for company-wide shocks (e.g., broad layoffs) that would affect all workers regardless of AI exposure.
There is little effect on wages at this point, suggesting AI has not yet meaningfully affected how organizations compensate workers.
These AI exposure effects only began predicting employment outcomes in late 2022 with the proliferation of generative AI tools — they didn’t show up during COVID or earlier periods.
Reading this list might give you whiplash — we’re seeing new research every few weeks, often reaching different conclusions. That reaction is understandable. Derek Thompson, a journalist formerly of the Atlantic and now on Substack, published an interview with Brynjolfsson and Chandar about their paper. It’s excellent, and a great way to understand their research.
In his commentary, Thompson makes two points we believe our readers should bear in mind as we continue to learn more about how generative AI is affecting the economy. First, measuring these effects in real time is really, really difficult. In Thompson’s words, “overconfidence in any direction is inadvisable.” Whenever you see research making claims about what’s happening in the economy, recognize that there’s a high degree of uncertainty about the conclusions.
Thompson also offers this wisdom for how to think about keeping up with the latest and what it means for the future:
“Someone once asked me recently if I had any advice on how to predict the future when I wrote about social and technological trends. Sure, I said. My advice is that predicting the future is impossible, so the best thing you can do is try to describe the present accurately. Since most people live in the past, hanging onto stale narratives and outdated models, people who pay attention to what’s happening as it happens will appear to others like they’re predicting the future when all they’re doing is describing the present. When it comes to the AI-employment debate, I expect we’ll see many more turns of this wheel. I cannot promise you that I’ll be able to predict the future of artificial intelligence. But I can promise you that I’ll do my best to describe the wheel as it turns.”
Those speaking with confidence about the specific effects generative AI will have on organizations, the economy, or society will almost certainly be wrong, in ways large or small. Our view is closer to Thompson’s. Spend the time understanding what the technology can do today and the effects it has right now. Stay flexible in your thinking and focus your attention on what is happening as it happens. Simply by staying current, you’ll end up ahead of the curve.
Worker Attitudes on AI
The connection between training, trust, and decision-making.
There’s no shortage of surveys on AI and organizational dynamics. We’ve written about many of them here in Confluence, including on the gap between leader and employee perceptions of organizations’ AI rollouts. This week, economist Kyla Scanlon caught our attention with her cross-industry survey of workers in an attempt to answer a simple question: “What do workers think about working alongside these new systems?”
Many of the results are unsurprising, including that “workers want AI to take over the boring parts of their jobs” and that many are concerned about technical constraints like accuracy. What stood out to us are the survey results on three interrelated dimensions, which Scanlon categorizes as decision-making, the trust paradox, and the training gap. From Scanlon’s executive summary:
Decision-making: 62% want shared decision-making in how AI is implemented at work, and 15% want full authority. Fewer than 2% are comfortable with no input.
Trust paradox: Most people “somewhat trust” their employers on AI. But in many industries, a majority report no trust. No industry reached a majority “complete trust.”
Training gap: Only about 60% of respondents have received training, with especially low rates in creative industries (20%) and entertainment (5%).
What these three dimensions have in common is that organizational leadership can influence — if not outright control — all of them. In the section outlining the findings from these dimensions, Scanlon notes the overlap and connection between them, ultimately concluding that “one way to build trust in these tools could be providing training.” We agree and would take it one step further to suggest that providing training can build trust in the organization’s AI strategy and deployment, not just in the tools themselves. A holistic approach to training should cover both an organization’s policy and principles for using (and not using) AI and the fundamentals that can help employees be more proficient in their use. As we wrote in October in response to Gallup’s AI in the Workplace survey:
… just giving people access to these tools is not enough for employees to really benefit from them. Unlike a word processor, use cases for generative AI aren’t self-evident. You simply must give people some guidance, because adoption isn’t going to take care of itself. As with any major change initiative, it will require a rigorous change strategy. And communication is at the center of that.
We believe, and Scanlon suggests, that there’s a strong connection between training and trust. This brings us to the third dimension, decision-making. We’re not surprised to see that 62% of employees want input into how AI is implemented at their organizations and integrated into their own work. Our view is that for organizations to fully capture the value of their investment in AI, employees need to have input into how it is used at the specific task level. In most cases, the best people to identify the best ways for AI to add value to an organization’s work are the people doing the work. And the best way to empower people to do that is to develop their understanding of and proficiency with the technology through training and ongoing conversation.
Scanlon writes in her conclusion:
People are remarkably thoughtful about these challenges when given the space to be. They’re developing creative solutions and collaborative approaches that point toward more sustainable forms of human-AI integration. But there has to be leadership from government, companies, and the AI companies themselves. They have to (should?) listen to the needs of the people that are the users of this product.
We agree.
We’ll leave you with something cool: GPT-5 is at the top of the leaderboard for yet another benchmark — how quickly it completed Pokemon Red.
AI Disclosure: We used generative AI in creating imagery for this post. We also used it selectively as a creator and summarizer of content and as an editor and proofreader.