Confluence for 5.11.25
AI, communication, and judgment. Claude’s expanding access to your world. Why generative AI shouldn’t speak for the CEO. Bring AI to everything you do.

Welcome to Confluence. Here’s what has our attention this week at the intersection of generative AI, leadership, and corporate communication:
AI, Communication, and Judgment
Claude’s Expanding Access to Your World
Why Generative AI Shouldn’t Speak for the CEO
Bring AI to Everything You Do
AI, Communication, and Judgment
Just because we can use AI in our communication doesn’t mean that we should.
In a recent Substack post titled “You Sent the Message, But Did You Write It?” David Duncan notes that “We’re surrounded by AI-shaped communication — but we’re still talking about it like everything is normal.” Generative AI tools like ChatGPT make it easier than ever to create content, so it’s no surprise to see AI-generated content flooding the information landscape as adoption increases. Certain patterns are emerging, and Duncan’s post provides a tongue-in-cheek glossary to “name, diagnose, and spark reflection on the strange new ways we communicate in the age of AI.” It’s a light and entertaining read, but the patterns Duncan highlights are worth taking seriously.
Three of these patterns reflect a conscious or unconscious decision to prioritize convenience over thoughtfulness, meaning, and connection:
Praste (Derived from ‘prompt’ + ‘paste’)
Definition:
To copy and paste AI output verbatim, without editing, thinking, or reading it through.
Hallmarks:
- Reads like it came straight from ChatGPT’s first draft
- Off-tone, over-polished, slightly generic
- Often longer than it needs to be, with no trace of the sender’s actual voice
- Unrestrained usage of emojis as bullet point markers
…
GPTune
Definition:
Like Auto-Tune for writing. GPTune takes someone’s normal idea and smooths it into something that feels more articulate, structured, erudite - but less authentic.
Hallmarks:
- Every sentence flows perfectly
- Feels smart but strangely impersonal; makes everything sound a little too much like a corporate keynote
- Leaves you wondering what was actually meant
…
Syntherity (Derived from ‘synthetic’ + ‘sincerity’)
Definition:
AI-generated or AI-polished emotional language - apologies, gratitude, vulnerability - that leaves you unmoved.
Hallmarks:
- Feels over-produced: too balanced, polished, too "growth journey"
- Filled with emotional buzzwords or cliches
- Extreme diplomacy; feels like it’s trying to offend no one
By now you have very likely (whether you know it or not) been on the receiving end of at least one of these. AI can help us write and express ourselves more clearly. It can help us refine and articulate our ideas. It can help us adopt the perspective of others and anticipate reactions. The three patterns Duncan identifies take this to a problematic extreme, though, and show the downsides of outsourcing too much of the real work of human communication to the convenience of AI tools.
The pull of convenience is strong and will only get stronger as the tools improve. What will separate good communication from great in an increasingly AI-mediated world will be communicative judgment — the ability to make communicative choices that send the message we want to send, including about who we are and the relationship we have (or want to have) with those on the other end of our communication.
In April 2023, we wrote:
As people find ways to use generative AI, they will also hone their ability to distinguish between AI- and human-generated content. Early research suggests that humans are unable to reliably detect the difference between output from earlier GPT models (GPT-2 and GPT-3) and human writing but can develop the ability to tell the difference with training and experience. It will be important to monitor how the widespread adoption of these technologies changes the average person’s ability to discern GPT from human writing. Regardless, communication professionals [and leaders, we’ll add now] should think about the use of AI-generated text through the lens of authenticity and be intentional about its use given that audiences may have heightened suspicion that content could be machine-generated.
Research has repeatedly demonstrated that most people cannot identify AI-generated text. But other recent research confirms that people “who frequently use LLMs for writing tasks excel at detecting AI-generated text, even without any specialized training or feedback.” If this trend holds, as people who frequently use LLMs for writing make up an increasing share of the world’s population, it won’t be just people like Duncan who can spot these patterns — it will be nearly everyone. When we succumb to “prasting,” “GPTuning,” or “syntherity,” it will be obvious, and it will send a message about who we are and our relationship with those on the other end of our communication. The questions will be: What message do we want to send? And what choices do we need to make about our use of these tools to reinforce rather than undermine that message?
Claude’s Expanding Access to Your World
Anthropic’s latest updates show how AI assistants will connect with more of our data.
Several weeks ago, we covered Claude’s Research function and Gmail integration — early steps in Anthropic’s push to make Claude more useful by connecting it to your information. Just a few weeks later, Anthropic announced its next steps in connecting Claude to your world.
First, Anthropic introduced additional integrations, connecting Claude to workplace tools like Atlassian’s Jira, Zapier, Asana, and several others. While we haven’t tested these new integrations ourselves (beyond the existing Gmail and Google Calendar connections), they reflect a clear direction: Anthropic wants to position AI as an intermediary between you and your data. The company makes the case for this approach:
When you connect your tools to Claude, it gains deep context about your work—understanding project histories, task statuses, and organizational knowledge—and can take actions across every surface. Claude becomes a more informed collaborator, helping you execute complex projects in one place with expert assistance at every step.
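For the technically curious: these integrations build on Anthropic’s Model Context Protocol (MCP), an open standard that lets Claude discover and call external tools. As a minimal sketch, using the official MCP Python SDK with a hypothetical “project_tracker” tool (not any real integration), a custom connection can be as small as this:

```python
# Minimal sketch of a custom MCP server using the official Python SDK
# ("pip install mcp"). The server name and the tool's data here are
# hypothetical placeholders, not a real integration.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("project_tracker")

@mcp.tool()
def get_task_status(task_id: str) -> str:
    """Return the current status of a task in our hypothetical tracker."""
    statuses = {"T-1": "in review", "T-2": "blocked"}
    return statuses.get(task_id, "unknown task")

if __name__ == "__main__":
    # Runs locally over stdio; the integrations Anthropic announced run
    # as remote MCP servers, but the idea is the same.
    mcp.run()
```

Once connected, Claude can read the tool’s name and description and decide when to call it — which is what makes the “deep context about your work” quoted above possible.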
Alongside these integrations, Anthropic has also expanded Claude’s Advanced Research capabilities: Claude can now use both the newly connected tools and web search to conduct deeper investigations. We’ve tested this updated Research function, which promises more thorough investigations across thousands of sources, with mixed results. In some cases the system simply stopped after searching for nearly an hour without delivering the comprehensive results we expected; in others, it produced impressive output. Those failures seem far less frequent in the days since launch, though, and since Anthropic labels the feature a “beta,” we’d expect it to only improve.
What stands out from these updates isn’t just what they deliver today, but what they signal. Whether we consider Claude or o3, the leading AI labs increasingly position their systems as interfaces between users and complex information. Whether you need corporate documentation or financial data, or you’re shopping for new luggage, you’ll interact less with raw data and more with AI-mediated versions of that information. The convenience seems magical, but it cuts both ways: these systems make finding information almost effortless while introducing the risk that the models get facts wrong, invent information, or otherwise hallucinate what they present.
These integrations are a bellwether of a new reality: one that offers new forms of convenience and accessibility, saving hours of manual searching and synthesizing, while also demanding a new kind of literacy from us. We are all going to need to learn when to trust, when to verify, and how to evaluate AI-generated answers and insights against the facts and our own judgment.
Why Generative AI Shouldn’t Speak for the CEO
New data about trust, perception, and generative AI.
Harvard Business Review’s latest issue features an article titled “Why CEOs Should Think Twice Before Using AI to Write Messages.” The piece explores what happens when generative AI is used to mimic a CEO’s voice, and whether employees can tell the difference. At the center of the article is a study conducted at Zapier, where researchers trained a chatbot on CEO Wade Foster’s communications. The bot, dubbed “The Wade Bot,” responded to real employee questions alongside the CEO himself. Employees then tried to distinguish between human- and AI-generated answers. While the results themselves were telling (people correctly identified the AI only 59 percent of the time), the real insight came from employee reactions. Messages that Zapier employees believed to be from a human were consistently rated as more helpful, even when the AI had written them. And the inverse was true as well.
A follow-up study confirmed this bias in a broader context. Here, participants read answers they thought came from CEOs during earnings calls — some real, some AI-generated. Again, perceived authorship shaped perceived value. To borrow a term from our piece above, if employees think you’re “prasting” those Q&A responses, they’re going to find them less valuable.
These studies and their implications are a timely reminder of something we’ve long known to be true: message quality alone doesn’t determine credibility or effect. Credibility and trust in combination with message quality make the real difference, and in the age of AI, the assumption of a human voice still matters, especially when the sender is someone in a leadership role. With 50 percent of U.S. CEOs already automating some content and 75 percent experimenting with generative tools (according to Deloitte data cited in the article), the implications are significant.
We know that generative AI isn’t going anywhere, and that it’s going to be increasingly part of how we work and communicate. So, what should leaders do? The article offers three worthwhile guidelines:
Be transparent. Don’t leave employees guessing which messages came from a bot. Let them know how and when you’re using AI, and be clear about the rules that determine its use. We will note here that we have been saying this for almost two years. If the revelation that AI created content would in any way leave you feeling awkward, it’s something you should disclose.
Use AI for impersonal messaging. It’s better suited for drafting routine updates or answering high-level, repeatable questions — areas where your individual voice matters less and efficiency matters more.
Triple-check your work. When you do opt to use generative AI, never send an AI-generated message without reviewing it for tone, accuracy, and nuance, ideally with a human editor.
These guidelines aren’t groundbreaking, but they’re a good starting point as you establish and refine your own best practices when it comes to generative AI use in your organization.
The HBR article also includes a companion interview with Zapier’s CEO Wade Foster that’s a worthwhile read. Foster shares more about what it feels like to see a bot imitate your writing style, where it falls short, and how AI has changed his own creative process. The interview is a useful reminder that while tools are evolving quickly, the responsibility for what gets said — and how it lands — still rests squarely with the human in charge.
Bring AI to Everything You Do
A simple guide to becoming far more proficient with generative AI.
Ethan Mollick has coined several phrases that have gotten traction among those who read and write about generative AI. One is that it takes about 10 hours of use to start really getting a feel for what these things can do. Another is to always invite AI to the table. We can vouch for both. But all that said, we still see a lot of folks on the wrong side of the capability overhang: the gap between what generative AI can do well and what they actually use it for. So here is our brief three-step guide to becoming significantly more proficient with generative AI in just one week. Try it yourself, and share it with your friends and family.
Step 1: Pony up the little bit of cash it takes to use the paid version of Claude, ChatGPT, or Gemini. All are different in some ways, but it probably doesn’t matter much which you pick. Get the subscription direct from one of those three companies. App stores are (unfortunately) FILLED with “generative AI” and “GPT” chatbot applications that are little more than wrappers on top of free or dated versions of these tools. They do not offer the full intelligence, nor the full set of capabilities, of those models. We don’t use them, and you should not either. ChatGPT, Claude, and Gemini are the only generative AI apps we use with any regularity, and the ones we would recommend to you. Here’s some color commentary on each (and don’t let the wonky naming throw you; it’s just how these things are labeled):
Claude 3.7 Sonnet by Anthropic. Just call this “Claude.” It’s $20 a month. A great writer, able to do deep research (sometimes reading many hundreds of sources), able to think things through at a deep level, and able to read images and documents you upload, but it does not create images or have a voice mode that talks to you. Claude is our preferred model for work, for what that’s worth.
ChatGPT by OpenAI, both the 4o model (for mundane tasks) and the o3 model (which has limits on how much you can use it but is very powerful at doing real work). It’s $20 a month. It can do deep research, create images, read photos and documents, and talk to you via voice if you like.
Gemini 2.5 Pro by Google. This is also $20 a month, and comes as part of a “Google AI” subscription plan that includes Gemini, image generation, video creation, the ability to read photos and documents, and a bunch more. Again, don’t worry about the details. If you are part of Google’s ecosystem for work or school, this one makes sense.
All are $20 a month. You can’t get a martini in most cities for $20. And that $20 can make you significantly smarter and more capable, so consider it an investment. You’re worth it. You can use all three via your web browser or via an app you download. But use a leading model that you pay to use. Otherwise you’re working with one hand tied behind your back.
Step 2: Install the app for the model you’ve picked on your phone. You will find many more opportunities to use the AI if you’re mobile with it. Be sure to get the official app for the model you’ve signed up for. Don’t be fooled by imposters in the app stores. Anthropic, OpenAI, and Google have easy-to-find links to their apps on their websites.
Step 3: For one week, use it for everything you can think of. And if you’re wondering what “everything you can think of” means, here’s a partial list of uses from one of your authors over the past week or so:
Create a meta prompt to design audience personas for message testing.
Give me a morning briefing on what’s new over the past 24 hours on topics X, Y, and Z. Include the weather and local sports.
Craft the first draft of a client proposal from a collection of similar, prior documents.
[After uploading a photo from a phone] Find the PDF of this user manual online for me.
Diagnose the possible sources of a water heater leak.
Research what new CPUs could be a drop-in upgrade for a PC, and how to do it.
Conduct a linguistic analysis of a body of text to identify underlying linguistic frames of reference.
Get advice on what process to follow to deal with a gap in insurance coverage.
Make a snarky cartoon in the New Yorker style about the Maple Leafs / Panthers matchup.
Review a long list of items and identify all duplicates.
Research any new news about person X.
Create a set of common-sense explanations of the “Procure to Pay” process for different audiences.
Research a friend from 25 years ago to see if we can track them down to re-establish contact.
Preview the evening’s Phillies v. Rays baseball game.
Review and improve an LLM “steel manning” prompt.
Identify what Royal Navy ship may be in port in Tampa, FL.
Brainstorm possible meanings of a cryptic vanity license plate (from a photo).
Review email drafts for clarity and persuasiveness.
Calculate the equivalent of $20 in 1986 in 2025 dollars (a quick sketch of the underlying math appears after this list).
From a created persona, anticipate audience reactions to a set of corporate messaging.
Track delay causes and status for Southwest flight 3194.
Research possible landscaping plants and strategies for a locale and climate.
Read and explain a legal creditor letter.
Read an academic paper and relate it to recent failures of large language models.
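A quick note on that inflation item: the math the model does behind the scenes is a simple CPI ratio. A back-of-the-envelope sketch, with the 2025 index value necessarily an estimate since the year isn’t complete:

```python
# Back-of-the-envelope CPI conversion for the "$20 in 1986" item above.
# The 1986 CPI-U annual average is the published ~109.6; the 2025
# value (~320) is an estimate, since 2025 data isn't complete.
CPI_1986 = 109.6
CPI_2025_EST = 320.0

value_2025 = 20 * CPI_2025_EST / CPI_1986
print(f"$20 in 1986 is roughly ${value_2025:.0f} in 2025 dollars")  # ~$58
```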
That is an example of “bringing AI to the table” for everything you do. And if you’re about to do something and are wondering if AI can help, just ask the AI. It will tell you.
So that’s it, a simple three-step approach. Follow it for the next week and not only will you be far more conversant and facile with generative AI, you’ll also start to develop your intuition for what these models do well, and where they fail. They are strange in that they’re superhuman at some things, and dumb as rocks at others. The only way to anticipate those strengths and weaknesses is through use. So get to it. It’s worth the $20.
We’ll leave you with something cool: a collection of New Yorker-inspired cartoons by ChatGPT.
AI Disclosure: We used generative AI in creating imagery for this post. We also used it selectively as a creator and summarizer of content and as an editor and proofreader.