Confluence for 2.22.26
Give your power users a platform. Mollick’s guide to which AI to use in the agentic era. Anthropic expands features for free users. Getting to know Claude.
Welcome to Confluence. Here’s what has our attention this week at the intersection of generative AI, leadership, and corporate communication:
Give Your Power Users a Platform
Mollick’s Guide to Which AI to Use in the Agentic Era
Anthropic Expands Features for Free Users
Getting to Know Claude
Give Your Power Users a Platform
The fastest way to demystify generative AI is to let people see it in action.
Last week we made the case that context has replaced prompting as the key lever in getting great work out of AI. The models are smart enough now that how you set them up matters far more than the precise words you use. We believe that, and we’ve seen it play out consistently across the organizations we work with. But we’ve also noticed that most people haven’t internalized it yet. Many still approach AI as something that requires carefully constructed prompts — the right words in the right order — to unlock its value. That mental model can hold people back, and it’s one of the more stubborn barriers to adoption we see.
When we observe the people who use AI most effectively in any organization, we see something else: their actual interactions tend to look surprisingly ordinary. They're conversational and work in simple, natural language. They go back and forth with the model, adjusting as they go, much the way you'd work through a problem with a sharp colleague. What sets their interactions apart is a sense of rhythm, fluidity, and creativity. They move naturally between tasks, layer capabilities, and improvise when something doesn't land. They're not following a script or thinking in terms of one discrete use case. They are, essentially, thinking out loud and trusting that the model will understand their intent and deliver what they want. And, increasingly, it can.
For the past several years, many organizations have leaned heavily on building prompt libraries or compiling lists of use cases. There's nothing wrong with either of those, and they can be useful references; both are certainly better than doing nothing at all. But both have unintended consequences. They narrow the aperture by framing AI in terms of discrete, bounded applications ("here's exactly how to use it for this task") when much of the real value comes from the fluid, creative combination of capabilities across a whole range of work. And they inadvertently raise the bar by implying that you need specific structures or precise language to get good results. People come away thinking they need the magic words, and when they can't remember the formula, or get paralyzed trying to pick the "right" approach in a given situation, they don't start at all.
A much more powerful way to change people's perceptions is to let them watch someone skilled at this do it live (or watch a recording of them doing so, explaining their choices and approach along the way). When people see a colleague working with AI in real time, two things happen. First, they see how simple and conversational prompting can be, which makes it feel approachable. And second, they see the range and fluidity of what's possible, which opens their imagination to uses they hadn't considered. Rarely is a power user doing just one thing when working with generative AI. More often, they're combining a whole range of "micro" use cases to bring the breadth of AI's capabilities to bear on whatever they're working on. These live or recorded demonstrations show people what the judgment and the thinking look like in practice. They demystify what many have been socialized to believe is a highly technical or complicated process. It's not.
The good news is that every organization has these power users, likely at all levels and in all functions. For leaders, the goal is to identify these individuals and give them a platform to show colleagues what they're already doing. That platform could take any number of forms: time to demonstrate in team meetings, recorded walkthroughs, paired or small-group working sessions. Whatever the platform, the aim is simply to show others that working with AI can be far more natural and accessible than they may have assumed, and that the possibilities are far more expansive than any list of prompts or use cases can convey.
Mollick’s Guide to Which AI to Use in the Agentic Era
Navigating the latest developments across models, apps, and harnesses.
We're only a couple of months into 2026, yet the pace of change already feels different from prior years. We've become used to things developing faster, but lately things seem to be developing across different structures, too. From Claude Cowork to ChatGPT's new agentic capabilities to Gemini 3 and the transformed NotebookLM, the advancements of the past several months have changed not only which AI tools we use, but how we use them and the way we work. All of which makes this a good moment to step back and take stock of where we actually are. This week, Ethan Mollick, who writes the One Useful Thing Substack, published a new edition of his recurring guide to which AI to use. It's a useful read and, appropriately, quite different from his previous editions. We'll summarize the key ideas below, but it's worth reading in full.
Mollick organizes the landscape in a way that we think captures something important about where AI actually is right now. We’re not just working in chatbot interfaces anymore (or, if you are, you soon won’t be). To make sense of your options today, he suggests thinking about three distinct layers:
Models are the underlying intelligence, the brains of the system. The top three right now are Claude Opus 4.6, OpenAI’s GPT-5.2 Thinking, and Google’s Gemini 3 Pro. These are what the benchmarks measure and what the AI companies race to improve.
Apps are the products you actually use to interact with a model: the websites, the mobile apps, and increasingly, tools like Claude Code or Claude Cowork.
Harnesses are perhaps the most important concept, and the one most people haven’t thought about yet. A harness is the system that allows an AI model to search the web, write and run code, create files, or complete multi-step tasks on its own. The same model behaves very differently depending on its harness: Claude Opus 4.6 in a chat window is a different experience from Claude Opus 4.6 inside Claude Code, autonomously writing and testing software.
This is useful vocabulary for what we have been experiencing firsthand. In our firm, we work primarily with Anthropic’s tools, and we have been genuinely impressed by Claude Cowork as well as the Claude plugins for PowerPoint and Excel. Our experience has been that these plugins do more than improve productivity. They represent a new mode of working with and through agents rather than simply prompting a chatbot. The model isn’t the only thing that’s changed — the harness has, too. For those who rely primarily on Copilot, ChatGPT, or Gemini, the same shift is coming. The fact that Anthropic arrived first with this class of plugin and agent capability doesn’t mean others won’t follow. We’ve seen this pattern play out repeatedly across the AI landscape, and we expect this type of integration to become standard across platforms before long.
Mollick’s practical (and familiar) advice for getting started is straightforward: pick one of the three systems, pay the $20, and select the most advanced model available rather than the default. From there, the best way to learn is to use it for real work — upload something you’re actually working on, give the AI a genuinely complex task, push it into a back-and-forth conversation, and see what it can do. No guide substitutes for that.
Anthropic Expands Features for Free Users
And drops a new model while they’re at it.
Anthropic recently expanded its free tier, making several previously paid-only features available to all Claude users: file creation, connectors to external services, skills, and conversation compaction. Anthropic also gave free users access to its latest model, Claude Sonnet 4.6, which in our experience competes closely with the more capable Opus 4.6 on most everyday tasks.
Mollick’s advice — pick one of the three systems, pay the $20, and select the most advanced model available — remains the right call. But for those not yet ready to commit, Anthropic has now given free users a much clearer window into what the frontier actually looks like in practice.
If you're new to these capabilities, we recommend spending time with them. And if you want a few places to start, here are some ideas. If these don't work for you, don't sweat it; you can always ask Claude itself what's possible.
Connect Claude to your Gmail. If you have an upcoming trip, ask Claude to find your travel confirmation emails, hotel reservations, and flight details, then build a single organized itinerary. This combines the connectors capability with Claude’s ability to synthesize across sources, and it’s the kind of task that makes the “assistant” framing finally feel real.
Turn a document into a presentation. Find a whitepaper, earnings summary, or industry report you've read recently. Ask Claude to read it and create a PowerPoint deck with an executive summary slide, key findings, and a "so what" slide for leadership. We are constantly asked when generative AI will be able to create presentations for us. This exercise, along with reading about our experience with Claude in PowerPoint, will give you a sense of how close we are.
Build a personalized daily briefing using Skills. Tell Claude what you want to know each morning (your favorite team's standings, a few news topics you follow, a weekly music or restaurant recommendation) and ask it to create a skill that delivers a structured update on demand. This one illustrates the broader value of skills: scripted, reusable behaviors that reflect your preferences and routines. We sketch what such a skill might look like just below.
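If you're curious what actually gets created when you ask for a skill, here's a rough sketch. A skill is, at bottom, a folder containing a SKILL.md file: a short YAML header that tells Claude when the skill applies, followed by plain-language instructions. The example below is our illustration, not Claude's exact output; the name, topics, and structure are invented for the purpose.

```markdown
---
name: daily-briefing
description: Builds my structured morning briefing when I ask for "my briefing"
---

# Daily Briefing

When I ask for my briefing:

1. Look up my team's latest result and current standings.
2. Summarize the top developments on the news topics I follow,
   one line each, with links.
3. On Fridays, add one album or restaurant recommendation.

Keep it scannable: short headers, tight bullets, no filler.
```

The point isn't the syntax. A skill is just written-down preferences Claude can reuse, which is why asking Claude to write one for you works so well.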
We regularly encounter professionals who don’t have a clear sense of where the jagged frontier actually is. We get it. Keeping up with it is genuinely hard. This latest update to Claude’s free tier makes it a little easier, and a little less costly, to find out for yourself.
Getting to Know Claude
What Anthropic understands, and doesn’t, about how AI actually works.
Earlier this month, Gideon Lewis-Kraus published a piece in The New Yorker on Anthropic's efforts to understand how Claude actually works, both functionally and philosophically. The piece is a long one, but we feel it's worth the read. If Ethan Mollick's piece above is your practical primer to working with LLMs right now, Lewis-Kraus's is your more abstract guide to what we know, and don't know, about what LLMs are.
Lewis-Kraus argues that, despite the rapid pace at which this technology is advancing, LLMs are still relative mysteries to us. Their inner workings, and how we should classify those inner workings in relation to human intelligence, remain largely unresolved, and that “throws a lot of things into question,” including what it means to be “intelligent” in the first place.
From its founding, Anthropic has made understanding how LLMs work a central part of its business model. It has teams focused on "mechanistic interpretability": understanding how LLMs' brain-like "neural networks" learn and use language. The simple answer, as anyone who has been reading Confluence for a while knows, is that LLMs use next-token prediction to identify linguistic patterns and choose the next most likely word or phrase. Simple enough. But also, not at all:
On the tenth floor, [Joshua Batson, a mathematician at Anthropic] typed the prompt “A rhyming couplet: He saw a carrot and had to grab it,” and Claude immediately produced “His hunger was like a starving rabbit.” If the model were merely winging it one word at a time…to land a rhyme would be incredible luck.
It’s not. When the model predicts its next word, it’s not doing so just on the basis of the words that came before. It is also “keeping in mind” all the words that might plausibly come after. It predicts the immediate future in the light of its predictions of the more distant future. Anthropic’s techniques verify this…Batson compared Claude to a veteran backpacker on the Appalachian Trail: “Experienced through-hikers know how to mail themselves peanut butter at some further stage. What the model is doing is like mailing itself the peanut butter of ‘rabbit.’”
This is pattern recognition, yes, but also something more, something harder to nail down if our only categories are Intelligent or Not Intelligent. That's the general theme of what Anthropic has come to understand about Claude: it's somewhere in the middle, and that middle is very hard to define.
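For readers who want to see just how mechanical the "simple answer" is, here's a toy sketch of next-token prediction in Python. It uses the small open-source GPT-2 model (via the Hugging Face transformers library) as a stand-in for a frontier model; this is our illustration, not Anthropic's code. The loop scores every possible next token, keeps the most likely one, and repeats. Everything interesting, including the "peanut butter" planning Batson describes, happens inside the model that produces those scores.

```python
# Toy sketch of next-token prediction. GPT-2 stands in for a frontier model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "A rhyming couplet: He saw a carrot and had to grab it,"
ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(15):
    with torch.no_grad():
        logits = model(ids).logits   # a score for every token in the vocabulary
    next_id = logits[0, -1].argmax() # greedy: keep the single most likely token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

Run it and GPT-2 will dutifully continue the couplet one token at a time. Whether it lands the rhyme the way Claude does is another matter, which is rather the point.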
Anthropic's goal is to understand Claude well enough to build models that are, on the whole, agreeable and reliable entities. Claude's success indicates that on some level those attempts are working. At the same time, the work also suggests that the more we know about these models, the weirder they get. Claude is at once "Anthropic's chatbot, mascot, collaborator, friend, experimental patient, and beloved in-house nudnik," and a mysterious "black box" that no one fully understands. The waters get muddier still, as Lewis-Kraus points out, because Anthropic is itself a kind of black box, deeply secretive and mysterious in its own right.
All of this can feel uncomfortably sci-fi-esque, but the piece articulates what seems to us like a reasonable middle ground between fear and ignorance in how we think about AI. We should all have real respect for how uncanny and capable this technology is, and for how that uncanniness can put some of what we thought we knew about human exceptionalism into doubt. But there’s also still something quite ungainly, even a bit silly, about this technology and its makers. We shouldn’t plan to hand over the keys to them just yet.
We’ll leave you with something cool: Google released Lyria 3 within Gemini. It turns a text prompt or a photo into a 30-second original track, complete with vocals, lyrics, and instrumentation.
AI Disclosure: We used generative AI in creating imagery for this post. We also used it selectively as a creator and summarizer of content and as an editor and proofreader.