Confluence for 3.16.25
A call for situational awareness. Ethan Mollick on "speaking things into existence." Is AI-assisted writing already the new normal? A prompt to help you make better decisions.
Welcome to Confluence. Here’s what has our attention this week at the intersection of generative AI and corporate communication:
A Call for Situational Awareness
Ethan Mollick on “Speaking Things Into Existence”
Is AI-Assisted Writing Already the New Normal?
A Prompt to Help You Make Better Decisions
A Call for Situational Awareness
The New York Times’ Kevin Roose argues that over-preparing is better than under-preparing — and we agree.
On Friday, The New York Times published a column by Kevin Roose titled “Powerful A.I. Is Coming. We’re Not Ready.” Roose is a technology columnist and AI industry insider based in San Francisco who also hosts the popular technology podcast Hard Fork, where he recently interviewed Anthropic CEO Dario Amodei (the conversation is worth a listen). In the column, Roose shares nine of his beliefs about where AI is heading and why he holds those beliefs. The beliefs would likely startle anyone who hasn’t been paying close attention to recent AI developments. Here are the first two, for example:
I believe that over the past several years, A.I. systems have started surpassing humans in a number of domains — math, coding and medical diagnosis, just to name a few — and that they’re getting better every day.
I believe that very soon — probably in 2026 or 2027, but possibly as soon as this year — one or more A.I. companies will claim they’ve created an artificial general intelligence, or A.G.I., which is usually defined as something like “a general-purpose A.I. system that can do almost all cognitive tasks a human can do.”
To anyone who’s been attending to developments in AI over the past year or so, those are bold but not shocking statements. To someone who has not been paying attention, however, those comments likely seem either hyperbolic or absurd. And that’s precisely the point. Roose’s ninth belief is “I believe the right time to start preparing for A.G.I. is now” even though, as he later notes, “Outside the Bay Area, few people have even heard of A.G.I., let alone started planning for it.” Not only are we not ready, then, but most of us don’t even know there’s something to potentially be ready for.
While we increasingly share Roose’s belief that something like A.G.I. is likely coming in the next several years, we won’t spend time here debating definitions or speculating about what it will look like and when we’ll get it. Rather, we’ll reinforce a more practical point from his column: over-preparing is better than under-preparing. Situational awareness is a prerequisite to preparation. If the last time you used a generative AI tool like ChatGPT was a year ago, or if your experience to date has been limited to Microsoft Copilot or Apple Intelligence, you likely are not truly aware of how capable these systems are today and how quickly they’re improving. As we’ve encouraged our readers to do before, spend some time with the latest models, either Anthropic’s Claude 3.7 Sonnet or OpenAI’s GPT-4.5. It’ll cost you $20 a month, but in our view it’s more than worth it — both for the utility of the tools themselves and for the experience of seeing the leading edge for yourself.
Almost exactly a year ago, following our first experience with Claude 3 Opus, we wrote that “now is the time to start paying attention.” Roose ends his column by noting that “If we’re in denial — or if we’re simply not paying attention — we could lose the chance to shape this technology when it matters most.” The “we” Roose refers to is society writ large, but the warning applies to organizations and corporate communication teams just the same. The time to start preparing for these increasingly powerful technologies is now. And the best way to start preparing is to be aware of and conversant with the capabilities available today.
Ethan Mollick on “Speaking Things Into Existence”
The cost of creating content, in time, effort, and attention, is approaching zero.
For those paying attention, there’s no dispute: generative AI is changing how teams and leaders create communication content in organizations. Anyone with access to a leading language model like Claude 3.7 Sonnet or GPT-4.5 can now generate serviceable leadership memos, FAQ guides, press releases, and other corporate communication staples through relatively simple prompts. Not world-class, certainly, but replacement-level quality that meets basic organizational needs.
Ethan Mollick explores this idea in his latest post, describing our newfound ability to “speak things into existence” using generative AI. No longer constrained by technical limitations, we can manifest ideas through natural language alone. Mollick’s post centers on “vibe coding” — using conversational language to create computer programs rather than writing JavaScript or Python code. Tell an AI what you want a program to do, and it translates your request into functional code. This represents a fundamentally new way of working, with implications we cannot yet fully grasp.
Consider what this means: In the past, the gap between envisioning a software solution and implementing it was vast. You might have a brilliant idea for an app but lack the technical skills to build it. Now that chasm narrows dramatically. The constraints shift from “Can I build this?” to “Can I articulate what I want built?” And while we aren’t quite at the point where a single prompt can lead to a functional iOS app that you could sell on the Apple App Store, the technology is getting closer and closer to that threshold.
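To make that tangible, here is the kind of exchange vibe coding involves. The request and the short Python script below are our own hypothetical illustration, not an example from Mollick’s post:

# Hypothetical request: "Write a script that prints the ten most
# common words in a text file." A model might respond with:
import sys
from collections import Counter

def top_words(path, n=10):
    # Read the file, lowercase it, and split on whitespace
    with open(path, encoding="utf-8") as f:
        words = f.read().lower().split()
    return Counter(words).most_common(n)

if __name__ == "__main__":
    # Usage: python top_words.py report.txt
    for word, count in top_words(sys.argv[1]):
        print(f"{word}: {count}")

The user supplies the intent in plain English; the model supplies the syntax. That is the shift Mollick is describing.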
A similar principle applies to organizational communication. Generative AI has collapsed much of the production barrier. Managers can generate 50-page newsletters, technology teams can create comprehensive FAQs, and executives can draft speeches — all at reasonable quality, with minimal effort and astonishing speed. Even with some notable gaps, such as Copilot’s shortcomings in PowerPoint creation, the tools for mass content creation now sit on everyone’s desktop.
Yet there’s a crucial caveat to this new reality — the ability to create content doesn’t equate to the ability to create effective content.
In programming, as Mollick notes, expertise still fundamentally matters. AI-generated code can contain errors. Fixing these mistakes — either directly or by guiding the AI — requires understanding how code works. Without this knowledge, users find themselves either floundering helplessly or watching the AI repeatedly run into the same wall, unable to provide the guidance needed to overcome obstacles.
Communication operates under similar principles. The democratization of content creation doesn’t diminish the value of communication expertise; it transforms how that expertise is applied. Anyone can produce words, but crafting messages that drive understanding, change behaviors, build trust, or achieve specific outcomes requires deeper knowledge.
In our discussions with clients and communication professionals, few have begun to grapple with what we believe will be the fundamental question communication teams need to address: How do we apply our expertise not just to specific communication products, but to how communication functions across the entire organization? The challenge shifts from “Can we support this need effectively?” to “How do we make sure our organization has a communication ecosystem that drives clarity for employees and leaders?”
Answering this question requires two things from communication professionals. First, we must invest more deeply in our own expertise — becoming better and smarter about the fundamentals of effective communication, message design, and stakeholder engagement. Fortunately, generative AI can help us do this better, faster, and with less effort. Second, we must understand the generative AI tools and systems available to us and how best to use them to scale that expertise across our organizations. The future isn’t about who controls production — that battle is already won or lost, depending on your perspective — but about who shapes how communication happens across the organization. Expertise will still matter; it will simply need to manifest in more strategic and scalable ways.
Is AI-Assisted Writing Already the New Normal?
Research indicates that AI has already transformed corporate writing.
Last week, we wrote about how generative AI can enhance writing skills rather than diminish them, highlighting research that found practicing with AI led to higher-quality writing even when people later worked without assistance. We also explored a study showing how AI reasoning models and retrieval augmented generation improved both quality and productivity in legal work. Today, we're examining a new Stanford University study titled “The Widespread Adoption of Large Language Model-Assisted Writing Across Society” that reveals just how common AI-assisted writing has already become.
The researchers analyzed texts across four domains between January 2022 and September 2024: consumer complaints, corporate press releases, job postings, and United Nations press releases. The findings? AI-assisted writing has quickly become embedded in professional communication. By late 2024, roughly 18% of financial consumer complaint text appears to be LLM-assisted, up to 24% of corporate press release content is attributable to LLMs, nearly 10% of job posting content is LLM-assisted (rising to 15% for smaller firms), and 14% of UN press release content is generated or modified by LLMs.
[Figure from the study: adoption curves showing the share of LLM-assisted text in each of the four domains, January 2022 to September 2024.]
The adoption curves, above, tell a story of rapid integration. ChatGPT arrived in November 2022, and after a three-to-four-month acclimatization period, usage surged as organizations began applying it regularly. By late 2023, though, the growth lines began to flatten. Why? The researchers offer two possibilities, each with significant implications. We may have reached a natural saturation point, where those who find value in AI assistance have already adopted it. Or, and this is more intriguing, advanced models may have become so sophisticated that they increasingly evade detection. Invisible augmentation presents an entirely different landscape for communication professionals.
Widespread adoption creates challenges and opportunities for those in the communication space, especially if AI assistance is becoming increasingly invisible. If nearly a quarter of corporate press releases already incorporate AI assistance, professionals who aren’t using these tools are likely at a disadvantage, at least in productivity.
More AI assistance is also likely to mean more homogenization of content. If so, content with a distinct, authentic voice will command a premium. Organizations are navigating this tension between productivity and authenticity, and communication professionals will need to find the right balance. Those who succeed will go beyond mere adoption of AI tools to develop a sophisticated understanding of how these tools extend and enhance their distinctive human capabilities.
A Prompt to Help You Make Better Decisions
You probably don’t know what “Bayesian thinking” is, but with this prompt you can bring it into your decision process.
A term that AI insiders often use when discussing the future of generative AI is “Bayesian.” They’re referring to an approach to making decisions when conditions are uncertain (as the future of generative AI most certainly is). Here’s GPT-4.5’s explanation of Bayesian thinking¹:
Bayesian thinking is about updating beliefs based on new evidence. It’s named after Thomas Bayes, an 18th-century mathematician, but you don't need to be a statistician to appreciate it. At its core, Bayesian thinking means:
You start with what you already believe (your prior), observe new evidence, and then adjust your beliefs accordingly (posterior). It recognizes that knowledge is fluid and changes as evidence emerges.
In practical terms, Bayesian thinking means asking yourself:
“What do I currently believe, and why?”
“How likely is this new evidence, given my initial beliefs?”
“How should I now adjust my belief based on this evidence?”
A simple, non-technical example:
Say you’ve worked with someone who is usually reliable. When they miss a deadline, you don’t immediately conclude they’ve become unreliable. Your prior belief (they’re dependable) is updated by the evidence (the missed deadline). You adjust slightly downward, but you still trust they’re generally dependable, needing more evidence to shift significantly.
In leadership and decision-making, Bayesian thinking can be transformative. Instead of locking into rigid beliefs or reacting impulsively, you continually recalibrate judgments based on the evolving situation.
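To see the arithmetic behind that missed-deadline example, here is a minimal sketch in Python. The probabilities are our own illustrative assumptions, not figures from the explanation above:

# Bayes' theorem applied to the missed-deadline example.
# All probabilities are illustrative assumptions.
prior_dependable = 0.90        # prior belief: the colleague is dependable
p_miss_if_dependable = 0.20    # dependable people still miss the odd deadline
p_miss_if_not = 0.40           # undependable people miss them more often

# Probability of seeing a missed deadline at all (law of total probability)
p_miss = (p_miss_if_dependable * prior_dependable
          + p_miss_if_not * (1 - prior_dependable))

# Posterior belief after observing the missed deadline
posterior = p_miss_if_dependable * prior_dependable / p_miss

print(f"Prior:     {prior_dependable:.2f}")  # 0.90
print(f"Posterior: {posterior:.2f}")         # about 0.82

One missed deadline moves the belief from 0.90 to roughly 0.82: a modest downward adjustment, exactly as the example describes. Evidence that is much more likely under one hypothesis than the other would move the belief further.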
We use this framework quite often in our approach to making decisions and thinking about the future, and while it has its critics and weaknesses, we find it useful and, at times, powerful. But it can be a bit unwieldy to apply, so we’ve written an LLM prompt that applies the process for us. We post it here for you to use with the model of your choice:
I am facing a decision that would benefit from a structured approach to uncertainty. Guide me through a Bayesian decision-making process with these four steps:
First, help me articulate my current beliefs about the options. Ask questions like "What's your current thinking on this?" or "How confident are you in each possibility?" Encourage me to quantify my confidence when possible.
Second, help me identify what information would be most valuable to collect. Ask "What evidence would change your mind?" or "What information would most reduce your uncertainty?" Focus on evidence that would have the highest impact.
Third, as new information emerges, help me update my beliefs appropriately. Ask "How does this new information change your thinking?" or "Given what we now know, how would you adjust your confidence?" Help me weigh evidence based on its reliability and relevance.
Finally, guide me to make a decision despite remaining uncertainty. Discuss concepts like expected value, asymmetric risks (false positives vs. false negatives), and the value of additional information before deciding.
Throughout this process, maintain a natural, conversational tone. Avoid technical jargon about Bayesian reasoning unless I'm familiar with the concept. Your goal is to help me think more clearly and systematically about evidence and uncertainty, not to teach probability theory.
We’ll leave you with something cool: Google is updating Gemini 2.0 Flash’s image generation to be much more accurate.
AI Disclosure: We used generative AI in creating imagery for this post. We also used it selectively as a creator and summarizer of content and as an editor and proofreader.
¹ That description reflects the user’s custom preferences. As a comparison, here is Claude 3.7 Sonnet’s response to the same prompt, also with the user’s custom preferences applied:
Bayesian thinking represents a fundamental approach to reasoning under uncertainty that differs markedly from more traditional statistical methods. At its core lies Bayes’ theorem, a mathematical formula that provides a structured way to update beliefs when confronted with new evidence.
The essential insight of Bayesian thinking is treating probability as a measure of belief rather than frequency. Where frequentist statistics views probability as the long-run frequency of events, Bayesian reasoning interprets probability as a degree of belief that can be revised as new information emerges.
The framework begins with prior beliefs (the prior probability), incorporates new evidence through the likelihood function, and produces updated beliefs (the posterior probability). This process of belief revision forms an iterative cycle, where today's posterior becomes tomorrow's prior.
What makes Bayesian thinking particularly powerful is its inherent rationality in the face of incomplete information. It acknowledges that absolute certainty is rarely achievable and instead provides a coherent framework for how rational agents should update their beliefs. The strength of evidence required to change a strongly held prior belief is proportionately greater than what's needed to shift a weakly held one.
In practical terms, Bayesian approaches excel in environments of complexity and limited data. They allow for the incorporation of domain expertise into statistical models through informed priors, rather than pretending all analysis begins from a blank slate. This approach has revolutionized fields from medical diagnosis to machine learning, where the ability to quantify uncertainty and update beliefs systematically provides significant advantages.
The philosophical implications extend beyond mathematics. Bayesian thinking encourages intellectual humility—the recognition that our beliefs should be held with appropriate confidence and subject to revision. It provides a framework for distinguishing between the strength of one's beliefs and the strength of the evidence supporting them.
For decision-makers, perhaps its greatest value lies in providing a formal mechanism for managing the inevitable trade-offs between acting decisively with incomplete information and waiting for better evidence—a balance that lies at the heart of effective leadership.