Confluence for 8.24.2025
A new approach to AI memory. Prompt engineering for strategic thinking. More ways to learn with generative AI. Meta-brainstorming with LLMs.
This week we marked two years of writing Confluence. Whether you’ve been with us for two years or two weeks, we’re glad to be able to share what we learn with you all. In case you missed it, we’re offering two weeks of access to our proprietary AI, ALEX, to celebrate. To claim your two weeks of access, send an email to support@admiredleadership.com, and tell them the Confluence team sent you. With that said, here’s what has our attention this week at the intersection of generative AI, leadership, and corporate communication:
A New Approach to AI Memory
Prompt Engineering for Strategic Thinking
More Ways to Learn with Generative AI
Meta-Brainstorming With LLMs
A New Approach to AI Memory
Claude’s approach to memory is subtly powerful.
In the introduction to last week’s edition, we noted that Anthropic introduced its equivalent of memory to Claude, following the lead of ChatGPT and Gemini. We’ve written before that reliable, persistent long-term memory in LLMs will likely have a substantial effect on the user experience, for better and perhaps in some ways for worse. We’ve experimented with Claude’s memory capabilities over the past week, and we can now comfortably say that we prefer Claude’s approach to memory to ChatGPT’s. It’s a subtle change but has already provided us with a significant increase in utility.
So how does Claude’s memory work, and what differentiates it from ChatGPT’s? In short, Claude has the ability to remember all of a user’s conversations and call on them at the user’s request. ChatGPT uses its “judgment” across conversations to select facts to add to its memory of the user, as well as to call on those facts in new conversations. Importantly, Claude’s process is transparent (it explicitly tells the user when and how it is referring to past conversations), while ChatGPT’s process generally takes place “behind the scenes.” Here’s Claude’s summary of the key differences in approach:
ChatGPT and Claude take fundamentally different approaches to memory. ChatGPT uses a persistent, always-on memory system that automatically builds a comprehensive profile across all your conversations—it remembers your preferences, past discussions, and personal details without being asked, creating seamless continuity between chats. In contrast, Claude’s memory is project-scoped and manually triggered—you must explicitly ask it to “reference previous discussions,” and memories stay isolated within specific projects or workspaces. This means Claude keeps client work, personal projects, and different contexts completely separate, preventing unwanted information bleeding between conversations. The trade-off is clear: ChatGPT offers effortless personalization that feels magical but can sometimes introduce irrelevant context, while Claude provides precise control and privacy-focused compartmentalization that’s ideal for professionals managing multiple clients, though it requires manual activation each time you need historical context.
Note especially the point about ChatGPT sometimes introducing irrelevant context. While we’ve found ChatGPT’s memory useful at times, we’ve experienced plenty of instances of ChatGPT referring to knowledge of the user that is completely irrelevant to a given conversation. For example, ChatGPT knows that one of our Confluence writers is a father of three and enjoys cooking, and it will often refer to those facts in purely work-focused conversations. We appreciate the effort, but the “judgment” on both what to remember about a user and when and how to refer to it in conversation is not quite there.
With Claude, the user has complete control over when and how to reference the context of previous conversations. It’s a subtle but powerful difference. When engaging with Claude in support of client work, for example, the user can now simply say, “Remember when we talked about the meeting with X about topic Y? I want to give you an update on how that went and pick up where we left off.” Or, “Remind me where we left off when I was talking to you about Z.” Claude’s ability to remember all past conversations on demand dramatically expands the accessible context in a given conversation, and the user’s control over how it does so keeps conversations focused and reduces the likelihood of Claude introducing irrelevant context. Together, these factors make a powerful difference for efficiency.
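Claude’s implementation isn’t public, but the pattern the quote above describes (project-scoped storage, recall only on explicit request) is simple enough to sketch. Everything below is our illustration, not Anthropic’s code:

```python
from dataclasses import dataclass, field

@dataclass
class ProjectMemory:
    """On-demand, project-scoped memory: nothing is injected into a
    conversation unless the user explicitly asks for a recall."""
    # project name -> list of stored conversation summaries
    conversations: dict[str, list[str]] = field(default_factory=dict)

    def store(self, project: str, summary: str) -> None:
        self.conversations.setdefault(project, []).append(summary)

    def recall(self, project: str, query: str) -> list[str]:
        # Called only when the user explicitly asks to "reference
        # previous discussions"; naive keyword matching stands in
        # for whatever search Claude actually uses.
        terms = query.lower().split()
        return [
            s for s in self.conversations.get(project, [])
            if any(t in s.lower() for t in terms)
        ]

memory = ProjectMemory()
memory.store("client-x", "Met with X about the town hall agenda.")
memory.store("personal", "Meal-planning ideas for the week.")

# A recall scoped to "client-x" never surfaces the personal project.
print(memory.recall("client-x", "town hall"))
```

The key design choice is that recall is a deliberate act rather than a background process, which is exactly what keeps client work, personal projects, and other contexts from bleeding into one another.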
By no means is LLM memory “solved” with this feature. In an ideal state, the user would not necessarily have to tell Claude to recall a specific conversation. Rather, Claude (like any human with good judgment) would refer to past conversations appropriately and completely on its own. But this approach to memory, to us, does represent a practical, high-utility step forward. It gives us our first real taste of what effective AI memory can deliver and provides more than a glimpse of a not-too-distant future where we spend less time explaining to AI and more time working together grounded in shared understanding.
Prompt Engineering for Strategic Thinking
The best prompting guides do more than structure our prompts.
A few weeks ago, at a “Code with Claude” event in San Francisco, Anthropic gave a presentation they called “Prompting 101” about how to build the most effective prompts. The video is worth a watch, but the image below from the presentation has been making the rounds and captures most of what the video discusses:
This proposed prompt structure is interesting to us for an unusual reason: it’s both completely correct and increasingly unnecessary. Here, Anthropic is essentially teaching us to prompt Claude the way we used to have to prompt GPT-4 two years ago: with elaborate scaffolding, explicit instructions, and careful structuring. But their own model improvements are making much of this guidance obsolete for everyday use. As we’ve noted before, the frontier models are getting remarkably good at understanding intent from simple, conversational prompts. And yet here’s Anthropic, teaching a ten-step framework that most users will never need for 90% of their work. But that remaining 10% is where things get interesting.
Consider a task that will sound familiar to communication professionals, such as creating channel-specific messaging for a major announcement. A simple, serviceable prompt might read, “Create different versions of this announcement for different channels” with an attached version of the announcement. Now watch what happens when we apply Anthropic’s framework:
I need your help as a strategic communications advisor. We just achieved B Corp certification after a two-year process. We’re a 500-person software company and only 6,000 companies worldwide have this certification. Our CEO wants to position this as industry leadership, not just corporate responsibility.
I need to adapt our announcement for three different channels, each with distinct audiences and constraints. For context, here’s the kind of strategic guidance I’m looking for: for LinkedIn, I might focus on the industry leadership angle, mention our specific score, and include data about how few tech companies have B Corp status.
Can you provide recommendations for:
A LinkedIn post (150 words, professional audience, emphasizing competitive advantage)
Our internal newsletter (200 words, celebrating team effort, mentioning specific departments who contributed)
Twitter/X (under 280 characters, needs trending hashtags, punchy and energetic)
Think through each channel step by step: first consider what that audience cares about, then the platform’s conventions, then how to maintain message integrity while adapting tone. Use placeholders where you don’t have the information you need.
For each channel, I need: the key message angle, must-include elements, what to emphasize versus de-emphasize, tone adjustments, and one specific hook or opening line suggestion.
There’s no doubt that these two prompts would produce very different outputs, but the most compelling difference is in the strategic thinking the structured prompt enables. We’re not flippantly outsourcing the drafting to Claude; rather, we’re using it as a partner to make sure we’re considering the angles that will make for compelling content.
Prompting is becoming less about tricking the model into understanding you and more about organizing your own thinking. The structure Anthropic is teaching (context, tone, background, detailed requirements) isn’t just for the LLM. It’s also for you: it helps you articulate what you actually want before you ask for it. And that’s the value. LLMs are most valuable when they help us structure and improve our own strategic thinking.
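To make that concrete, here’s a minimal sketch of the framework as a fill-in template. The field names are ours, loosely mirroring Anthropic’s sections, not an official schema:

```python
from dataclasses import dataclass

@dataclass
class StructuredPrompt:
    """A fill-in checklist for high-stakes prompts. Completing the
    fields forces the strategic thinking before the model sees anything."""
    role: str                # who the model should act as
    context: str             # background facts and constraints
    tone: str                # voice and register of the output
    task: str                # the deliverable you actually want
    requirements: list[str]  # explicit, per-deliverable constraints
    process: str             # how the model should think it through

    def render(self) -> str:
        reqs = "\n".join(f"- {r}" for r in self.requirements)
        return (
            f"I need your help as {self.role}.\n\n"
            f"Context: {self.context}\n"
            f"Tone: {self.tone}\n"
            f"Task: {self.task}\n\n"
            f"Requirements:\n{reqs}\n\n"
            f"Process: {self.process}"
        )

prompt = StructuredPrompt(
    role="a strategic communications advisor",
    context="500-person software company; just earned B Corp certification",
    tone="confident; industry leadership, not just corporate responsibility",
    task="adapt our announcement for LinkedIn, internal newsletter, and X",
    requirements=["LinkedIn: 150 words", "Newsletter: 200 words",
                  "X: under 280 characters"],
    process="For each channel, consider the audience first, then the "
            "platform's conventions, then message integrity.",
)
print(prompt.render())
```

The rendered output matters less than the discipline of filling in the fields: if you can’t complete one, you’ve found a gap in your own thinking.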
For routine tasks, we still recommend keeping it simple. The models are now smart enough to figure out what you need. But when you’re doing something that really matters, this type of structured approach pays dividends. Not because the LLM needs it to be valuable, but because the process of building that structure clarifies your own strategic intent. Anthropic may have set out to teach prompt engineering, but what they’ve really done is provide a template for organizing our own thoughts. And that’s far more valuable than any collection of prompting tricks.
More Ways to Learn with Generative AI
And a way to think about extending conversations with LLMs.
Perhaps overlooked amid the announcements of GPT-5 and Claude Opus 4.1: over the past several weeks, Anthropic, OpenAI, and Google all introduced new ways of working with their leading models, each focused specifically on learning. While each lab calls it something different (Guided Learning in Gemini, Study mode in ChatGPT, and Learning mode in Claude), at their core these modes all seek to do the same thing. Rather than just give the user the answer, they guide the user through a conversation designed to deepen their learning.
Here’s a quick example from Gemini. In both of the exchanges below, we prompted the model with “Explain media richness” with no additional context or instruction.
Gemini 2.5 Pro
Media richness, a concept stemming from media richness theory, refers to the capacity of a communication medium to convey information and facilitate understanding. In simpler terms, it's about how “rich” or effective a communication channel is in getting a message across without ambiguity. This theory was developed by Richard L. Daft and Robert H. Lengel in the 1980s to help explain how organizations could choose the most appropriate communication channels for different types of messages.
The richness of a medium is determined by its ability to handle multiple information cues simultaneously, provide rapid feedback, establish a personal focus, and use natural language.
The Spectrum of Media Richness
Communication channels can be placed on a continuum from lean to rich:
Lean Media: These are best for simple, straightforward messages where there is little room for misinterpretation. Examples include formal written documents like memos, reports, and mass emails. These channels are characterized by a lack of immediate feedback and fewer non-verbal cues.
Rich Media: These are more suitable for complex, ambiguous, or sensitive messages that require a greater depth of understanding. The richest form of communication is face-to-face conversation, as it allows for immediate feedback through verbal and non-verbal cues like facial expressions, body language, and tone of voice.
Key Factors Determining Media Richness
Four main factors contribute to the richness of a communication medium:
Feedback Immediacy: How quickly can the receiver respond to the sender? Face-to-face conversations and phone calls offer immediate feedback, while emails and letters have a delay.
Multiple Cues: Does the medium support various cues, such as vocal tone, facial expressions, and body language, in addition to the words themselves? Video conferencing, for example, is richer than a phone call because it includes visual cues.
Language Variety: Can the medium convey a range of meanings through natural, conversational language? A personal conversation allows for more nuance and emotional expression than a formal report.
Personal Focus: To what extent can the message be personalized for the recipient? A one-on-one conversation is highly personal, whereas a mass email is impersonal.
Practical Applications
Understanding media richness can significantly improve communication effectiveness in various contexts:
Business: Managers can choose the best medium for different tasks. For instance, a complex project update is better delivered in a team meeting (rich medium) than through a lengthy email (lean medium). A simple scheduling confirmation, however, is perfectly suited for an email.
Education: Educators can select appropriate tools for teaching. A complex scientific concept might be best explained through an in-person lab demonstration (rich), while a list of vocabulary words can be effectively shared in a document (lean).
Personal Relationships: Recognizing the richness of different media can help maintain strong connections. A serious conversation is best had in person, while a quick “hello” can be sent via text.
In today’s digital age, with a vast array of communication tools at our disposal, the principles of media richness theory remain highly relevant for ensuring clear and effective communication.
Gemini with Guided Learning
Let’s unpack the concept of media richness theory. It’s a key idea in communication studies that helps us understand why we choose certain communication methods, like a face-to-face chat versus an email, for different situations.
Basically, the theory suggests that communication media can be ranked based on their ability to convey a message effectively. The “richness” of a medium is its capacity to transmit cues beyond just words.
To get started, what do you think makes a face-to-face conversation a “richer” form of communication than, say, a text message? What kinds of cues are present in one but not the other?
The contrast is telling. When prompting the base model (in this case, Gemini 2.5 Pro), you get a robust, detailed answer to your question. When using Guided Learning, Gemini deliberately aims to extend the conversation to guide your learning. If you’re interested in learning about a particular topic, these modes offer real value. And if you have a student in your life, encourage them to use these modes rather than asking for a quick answer.
But what is most telling about these learning modes is what they reveal about the default behavior of these models. They’re programmed to be helpful in the most immediate way possible. Ask a question, get an answer. Request a document, receive a document.
Consider what happens when an executive asks an LLM to draft a reorganization announcement. The model will confidently produce something that looks right, reads well, and follows all the conventions of corporate communication. What it won’t do is ask whether the reorganization addresses the actual problems the organization faces. Or whether employees have context for why this change is happening now. The model gives you what you asked for, not necessarily what you need.
While it’s helpful to have built-in modes that guide extended conversations, they won’t necessarily be right for every use case. When a longer, more nuanced interaction might be helpful, a powerful approach is to explicitly ask the model to interview you before it begins work: “Write a memo on our reorganization, but first, interview me to get the context you need. Ask me one question at a time.” This simple addition completely alters the interaction. The LLM might ask about the problems the reorganization addresses, who the stakeholders are, and what success looks like. Each question may surface assumptions you didn’t know you were making.
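For those working with the models programmatically, the same pattern is easy to wire up. Here’s a minimal sketch using the anthropic Python SDK; the system prompt is ours and the model name is illustrative:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SYSTEM = (
    "Before drafting anything, interview the user to gather the context "
    "you need. Ask exactly one question per turn. Only produce the "
    "deliverable once the user says 'ready'."
)

messages = [{
    "role": "user",
    "content": "Write a memo on our reorganization, but first, interview "
               "me to get the context you need. Ask me one question at a time.",
}]

while True:
    # Each turn: Claude asks one question, we answer, context accumulates.
    reply = client.messages.create(
        model="claude-opus-4-1",  # illustrative; use your preferred model
        max_tokens=1024,
        system=SYSTEM,
        messages=messages,
    )
    text = reply.content[0].text
    print(f"\nClaude: {text}\n")
    messages.append({"role": "assistant", "content": text})
    answer = input("You (type 'ready' for the memo): ")
    messages.append({"role": "user", "content": answer})
    if answer.strip().lower() == "ready":
        # One final call produces the memo, grounded in the interview.
        final = client.messages.create(
            model="claude-opus-4-1",
            max_tokens=2048,
            system=SYSTEM,
            messages=messages,
        )
        print(final.content[0].text)
        break
```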
The new learning modes are just one example of the benefits of slowing down the interaction with LLMs. Extended conversations deepen your own thinking about the problem at hand. As you answer the model’s questions, you clarify your own assumptions, identify gaps in your reasoning, and often arrive at insights you wouldn’t have reached on your own. The quality of the output improves because the model has more information and because you’ve done better thinking. Getting the most from these tools requires recognizing when you need a quick answer and when you need to take the time for a more deliberate interaction. The models are capable of both. But only if we ask.
Meta-Brainstorming With LLMs
They can probably help you in more ways than you think.
When determining how to work with an LLM, many users fall into one of two traps: taking a narrow approach and returning to the same familiar use cases (drafting an email, summarizing a document, etc.) or staring at the blank prompt box, overwhelmed by the possibilities and unsure where to begin. We know these tools are powerful, but we don't always know how to unlock that power in a given situation. Often, the best way to discover where AI can be most helpful on a given challenge is to use the AI itself to map out the possibilities — essentially, to brainstorm with the AI about how the AI can help. We’ll call this “meta-brainstorming”: brainstorming with the AI about all of the ways it can help, rather than about specific solutions.
We experienced the power of this approach recently in a discussion with a group of graduate students. We presented the students with a fictional communication scenario and then challenged all of them to consider different ways generative AI could serve as a thought partner for this challenge. Here’s the scenario:
You’re the VP of Communications at a regional hospital system. The CEO needs to announce that the organization is closing its downtown urgent care facility due to financial losses, despite it serving a lower-income community. The facility will close in 60 days.
Additional context:
The facility loses $2M annually but serves 15,000 patients/year
Many patients lack transportation to suburban locations
45 employees will be affected (offered transfers but with longer commutes)
Local media has been covering healthcare access issues in the area
City council members have been vocal about protecting healthcare access
The decision is final but the announcement approach is not yet determined
The students came up with a range of interesting approaches, all of which were feasible. While they were working, we gave Claude the same context and prompted it with a simple question: “What are 50 different ways you can help me with this scenario?” As requested, it gave us 50 ideas, nearly all of which were potentially relevant and high-quality. We include the full list below (with a sketch of the prompt as an API call after it) and will leave you with a simple reminder: when in doubt about how the AI can help you with a given problem or opportunity, ask.
Strategic Planning & Messaging
Develop a comprehensive stakeholder mapping exercise identifying all affected groups and their specific concerns
Create a decision rationale document that clearly articulates the financial imperative while acknowledging community impact
Draft multiple versions of the announcement tailored to different audiences (employees, patients, community, media, city officials)
Build a messaging framework that balances fiscal responsibility with community care commitment
Design a phased communication timeline working backward from the 60-day closure date
Craft talking points for the CEO that demonstrate empathy while maintaining organizational credibility
Develop Q&A documents anticipating the hardest questions from each stakeholder group
Create a narrative that positions this as a strategic redeployment of resources rather than abandonment
Build bridge messaging connecting the closure to enhanced services elsewhere in the system
Design "proof points" showing continued commitment to underserved populations
Employee Communication
Script manager cascade materials ensuring consistent delivery across the organization
Create an employee transition packet detailing transfer options, commute analyses, and support resources
Develop retention strategies for key talent being offered transfers
Design a recognition program honoring the facility's legacy and employee contributions
Build psychological safety protocols for employees to express concerns and grief
Create peer support mechanisms for affected staff
Draft internal FAQ addressing compensation, benefits, and career path questions
Patient & Community Engagement
Map patient transportation challenges and develop mitigation strategies
Create patient notification letters with clear next-step instructions
Design a patient navigator program to help with care transitions
Build partnerships with local transportation services or rideshare programs
Develop multilingual materials for diverse patient populations
Create visual aids showing alternative care locations and how to access them
Design a community forum structure for constructive dialogue
Build coalition strategies with community organizations to support affected patients
Media & Public Relations
Craft a media statement that acknowledges the difficulty while explaining the necessity
Develop exclusive briefing materials for key healthcare reporters
Create data visualizations showing the financial reality without appearing callous
Design a rapid response protocol for breaking news scenarios
Build social media content that humanizes the decision-makers while respecting patient concerns
Develop op-ed pieces from leadership demonstrating long-term commitment to community health
Political & Regulatory Navigation
Create briefing materials for one-on-one meetings with city council members
Develop testimony preparation for potential public hearings
Build coalition strategies with other healthcare providers to demonstrate system-wide challenges
Design materials showing compliance with all regulatory notification requirements
Create economic impact data showing how resources will be redeployed
Crisis Management & Risk Mitigation
Develop scenario plans for protests, negative media coverage, or political intervention
Create escalation protocols for emerging issues during the transition period
Build monitoring systems for community sentiment and media coverage
Design response strategies for potential legal challenges
Develop contingency communications if timeline needs to accelerate or change
Implementation & Measurement
Create communication effectiveness metrics and feedback loops
Design pulse surveys to monitor employee and community sentiment
Build a command center structure for coordinating multi-channel communications
Develop daily briefing templates for leadership during the transition
Create a lessons-learned framework for future difficult announcements
Long-term Relationship Repair
Design a post-closure community investment strategy to rebuild trust
Create storytelling opportunities highlighting successful patient transitions
Develop thought leadership positioning on healthcare access challenges
Build a longitudinal communication plan showing continued community partnership beyond closure
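For completeness, the meta-brainstorming prompt itself is a single API call. A sketch, again with the anthropic SDK and an illustrative model name:

```python
import anthropic

client = anthropic.Anthropic()

# Scenario text abbreviated here; pass the full context from above.
scenario = (
    "You're the VP of Communications at a regional hospital system. "
    "The CEO needs to announce the closure of the downtown urgent care "
    "facility in 60 days... (full additional context as listed above)"
)

response = client.messages.create(
    model="claude-opus-4-1",  # illustrative
    max_tokens=4096,
    messages=[{
        "role": "user",
        # The meta-prompt: ask about ways to help, not for a solution.
        "content": f"{scenario}\n\nWhat are 50 different ways you can "
                   f"help me with this scenario?",
    }],
)
print(response.content[0].text)
```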
We’ll leave you with something cool: Google DeepMind’s Perch 2.0 turns hours of nature audio into quick maps of which rare or endangered animals are calling where, so conservationists can focus their action on hotspots, whether for birds, mammals, or even underwater reefs.
AI Disclosure: We used generative AI in creating imagery for this post. We also used it selectively as a creator and summarizer of content and as an editor and proofreader.