Confluence for 6.1.25
Is the mainstream narrative shifting? What we can learn from system prompts. More research on generative AI and labor markets. Live speech translation arrives.
Welcome to Confluence. Here’s what has our attention this week at the intersection of generative AI, leadership, and corporate communication:
Is the Mainstream Narrative Shifting?
What We Can Learn from System Prompts
More Research on Generative AI and Labor Markets
Live Speech Translation Arrives
Is the Mainstream Narrative Shifting?
Axios and The New York Times address AI’s workforce impact head-on.
On Wednesday, May 28, Axios published a column based on an exclusive interview with Anthropic CEO Dario Amodei. Generative AI has been a steady presence in the headlines for the past two years, so the fact that Axios would interview Amodei and publish a column about it was unsurprising, expected even. But the column’s title — “Behind the Curtain: A white-collar bloodbath” — immediately made it clear that this one would be different. And it was: in tone, focus, and level of urgency.
The column begins with a warning from Amodei, that “AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years.” It then notes that “Amodei said AI companies and government need to stop ‘sugar-coating’ what’s coming: the possible mass elimination of jobs across technology, finance, law, consulting and other white-collar professions, especially entry-level gigs.” These are claims, of course, and no one can predict what will happen. Economists are divided on the matter, with many predicting the effects will take much longer to play out, in terms of both economic growth and workforce impact. But the fact that Amodei seems to have made it a priority to begin spreading this message, and that Axios was willing to give it so much space (the column is lengthy by Axios standards and was co-authored by CEO Jim VandeHei), is noteworthy.
Two days later, The New York Times published a column by Kevin Roose titled “For Some Recent Graduates, the A.I. Job Apocalypse May Already Be Here.” (Roose, some Confluence readers may remember, published a column in March titled “Powerful A.I. Is Coming. We’re Not Ready.” We wrote about it here.) While the Axios column is speculative, Roose’s is grounded in what’s happening now. He begins by cataloguing several data points and anecdotes suggesting a slowdown in entry-level hiring before turning to two trends he finds particularly concerning. The first is that companies will adopt AI too broadly, too quickly, before the technology is ready. We agree this is a risk (albeit a shorter-term one), but Roose’s second concern echoes one that we’ve been thinking about for years:
…even if entry-level jobs don’t disappear right away, the expectation that those jobs are short-lived may lead companies to underinvest in job training, mentorship and other programs aimed at entry-level workers. That could leave those workers unprepared for more senior roles later on.
This risk is particularly complicated due to its long tail. Organizations won’t feel the dearth of senior talent until they do, and that will likely take years. Rushing to gain efficiencies by leaning on AI for work that was previously the province of entry-level employees may well benefit the bottom line in the short term. But some of this may come at the expense of developing the talent organizations need in the long term. It’s a tradeoff, and a complicated one. The challenge will be how to get in front of it. We’re working hard on this in our own shop, based on the premise that if we use generative AI to replace junior consultants, in 10 years we won’t have any senior consultants. If the technology advances according to prediction, this will be true for us, and for any white-collar job with significant AI overlap (which is most of them).
We’ll save further thoughts on that topic for another day. Again, this is a challenge we’ve been anticipating within our own firm and with our clients for well over two years now. The bigger story to us is that it’s beginning to get such mainstream attention. The Axios article was the first media piece on this topic that many of our clients began raising, almost immediately, in everyday conversation, and it seems the mainstream narrative is catching up with the one that’s been circulating in AI circles for a long time. We do not necessarily believe the workforce impact will unfold as swiftly as Amodei predicts, but neither he nor we (nor anyone else) knows. We do agree with the broader forecast that knowledge work as we know it will be disrupted in a way we haven’t seen in decades, or perhaps ever. If the Axios and Times columns from this past week are any indication, the urgency to confront these challenges is starting to move from the discourse’s periphery to its center.
What We Can Learn from System Prompts
Knowing how the labs prompt their models can help users write better prompts.
Every large language model (LLM) has a system prompt. Think of a system prompt as the model’s standing instructions to itself, informing how the model should behave. It includes explicit guidance and instruction (for instance, how the model should respond if the user asks certain questions about the model) as well as language that shapes the model’s personality. Earlier this week, Simon Willison published a thoughtful analysis of the Claude Sonnet 4 and Opus 4 system prompts released last week. He examines different parts of them and offers insights into Anthropic’s design choices. There’s plenty to learn from it. Beyond deepening our understanding of how these models work (which we encourage all our readers to do), Willison makes an observation that is relevant not only to our understanding of the models, but to how each of us works with them on our own:
Reading these system prompts reminds me of the thing where any warning sign in the real world hints at somebody having done something extremely stupid in the past. A system prompt can often be interpreted as a detailed list of all of the things the model used to do before it was told not to do them.
LLMs produce unpredictable output. They are probabilistic, not deterministic, and often exhibit quirks and tendencies in how they respond. The way to counter these behaviors is through how we prompt the model. If, when working with an LLM, you identify a pattern or tendency that is unhelpful to you, go back and modify your prompt to account for it. Willison points out a great example of how Anthropic does this within its system prompt. LLMs love to make lists. It’s a tendency we’ve seen since the earliest days of ChatGPT, and it’s one where we had to do quite a bit of our own prompt design with our Leadership AI, ALEX, to encourage it to respond in prose and produce lists only when asked.
Anthropic seems to feel similarly. Willison points out that Anthropic devotes an entire paragraph in the Sonnet 4 and Opus 4 system prompts to discouraging lists and putting guardrails around how Claude uses and creates them:
If Claude provides bullet points in its response, it should use markdown, and each bullet point should be at least 1-2 sentences long unless the human requests otherwise. Claude should not use bullet points or numbered lists for reports, documents, explanations, or unless the user explicitly asks for a list or ranking. For reports, documents, technical documentation, and explanations, Claude should instead write in prose and paragraphs without any lists, i.e. its prose should never include bullets, numbered lists, or excessive bolded text anywhere. Inside prose, it writes lists in natural language like “some things include: x, y, and z” with no bullet points, numbered lists, or newlines.
When you’re working with an LLM and you begin to notice patterns that are unhelpful or even annoying, don’t accept them. Spend time modifying and updating your prompts to address these specific tendencies and steer the model. It can take a bit of work, but in our experience the latest models are smart and capable enough that, given clear direction, they’ll follow it reasonably well. While there is always a risk of accepting a model’s output too quickly and falling asleep at the wheel, we can also miss opportunities in the other direction. When a model’s output disappoints, diagnose the problem and revise your prompt accordingly.
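For readers who work with these models through an API rather than a chat interface, the same principle applies one level up: you can bake your corrections into the system prompt you send with every request. Below is a minimal sketch of what that might look like using Anthropic’s Python SDK. The wording of the instruction, the model name, and the example request are our own illustrative choices, not anything Anthropic prescribes.

```python
import anthropic

# A reusable "house style" instruction that steers the model away from its
# default tendency to answer everything as a bulleted list.
HOUSE_STYLE = (
    "Write in flowing prose organized into paragraphs. Do not use bullet "
    "points, numbered lists, or headers unless the user explicitly asks for "
    "a list or an outline. When enumerating items within prose, write them "
    "in natural language, e.g. 'options include x, y, and z'."
)

# The client reads ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative; use whichever model you have access to
    max_tokens=1024,
    system=HOUSE_STYLE,  # your standing instructions, sent with every request
    messages=[
        {
            "role": "user",
            "content": "Summarize the key themes from our latest employee survey.",
        }
    ],
)

print(response.content[0].text)
```

If you work in the chat products instead, the same idea lives in ChatGPT’s custom instructions or a Claude project’s instructions: make the correction once, as a standing instruction, rather than re-fighting the list habit in every conversation.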
More Research on Generative AI and Labor Markets
Use of generative AI is growing fast, and the labor implications are soon to follow.
As we noted last week, we have added Anthropic’s Claude to the Confluence writers’ room. Here’s Claude’s contribution for the week, unedited, and the prompt is in the footnotes.1
New research from Stanford, the World Bank, and other institutions reveals what many of us have been sensing but couldn’t quite quantify: generative AI adoption in the workplace is accelerating at a remarkable pace. According to survey data from over 4,000 U.S. workers, the percentage using generative AI tools at work jumped from 30.1% in December 2024 to 43.2% by April 2025—a nearly 50% increase in just four months. This isn’t gradual adoption anymore; it’s a surge that suggests we’ve crossed a threshold where AI use is becoming standard practice rather than experimentation. For communication professionals who’ve been wondering when the “AI moment” would truly arrive in the workplace, these numbers suggest it’s not coming—it’s here.
The research reveals fascinating patterns about who’s embracing these tools and how they’re using them. As we might expect, adoption skews toward younger, more educated, and higher-income workers, with nearly 50% of those with graduate degrees using generative AI compared to just 20% of high school graduates. Industries like information services and management lead the charge with adoption rates exceeding 60%, while more traditional sectors lag behind. But here’s what caught our attention: among those using generative AI, about one-third use it every workday, though selectively—typically for specific tasks rather than as an all-day companion. ChatGPT remains the dominant tool, closely followed by Google’s Gemini, suggesting that despite the proliferation of AI options, most users are sticking with the established players.
The productivity implications are striking and somewhat unsettling. Workers report that tasks completed with generative AI take about 30 minutes on average, compared to 90 minutes without it—effectively tripling productivity for AI-augmented tasks. However, the research also reveals a more nuanced picture: generative AI primarily helps workers complete tasks more quickly rather than replacing entire roles, with only 16% of respondents saying AI completed tasks entirely on its own. Perhaps most intriguingly, over 50% of unemployed workers in the survey used generative AI for job searching—crafting cover letters, refining résumés, and researching opportunities—suggesting these tools are already reshaping how people navigate career transitions. The data hints at a U-shaped impact on income groups, with both lower and higher earners seeing significant efficiency gains, while middle-income workers experience more modest benefits.
For communication leaders, these findings demand immediate attention. We’re witnessing not just another technology adoption curve but a fundamental shift in how knowledge work gets done. The 43% adoption rate means that in many organizations, nearly half the workforce is already augmenting their capabilities with AI—whether leadership knows it or not. This creates both opportunities and challenges: How do we harness this productivity surge while ensuring quality and consistency? How do we support the 57% who haven’t yet adopted these tools? And critically, how do we prepare for a workplace where the gap between AI users and non-users may become as significant as the digital divide of previous decades? The research suggests that those who aren’t engaging with generative AI risk being left behind not gradually, but rapidly. The question for communication professionals isn’t whether to develop an AI strategy anymore—it’s how quickly we can implement one that serves both our organizations and our people.
(We also asked Claude Opus to “create a game we can play based on the research.” It did so in about 60 seconds. You may play that game here.)
Live Speech Translation Arrives
Language becomes less of a barrier for leadership communication.
Way back in October 2023, we forecast:
With generative AI, we expect communication teams will have the ability to produce town halls, videos, and podcasts in which leaders communicate with employees in native languages, in their own voices. And eventually, we presume the technology will allow this translation to occur in real time.
It seems that day has come with Google Meet’s Speech Translation feature. From Google:
Use Speech Translation to communicate with anyone in your meeting, even if you speak a different language. As you talk, Speech Translation automatically translates your speech in near real-time, in a voice like yours.
The feature is available only to Google AI Pro subscribers (for now) and only translates between English and Spanish (for now). But we’re certain that will soon change. You may read more about the feature here, but it’s best understood via demonstration. The Wall Street Journal has one here (which we can’t embed, but it’s worth watching as it’s a real-world test of the technology), and Google has this:
It took about 18 months to go from our “one day, this will happen” forecast to live speech translation being real in meetings. And like most of these technologies, we expect it to diffuse quickly to Zoom, Teams, podcasts, video distribution technology, and other platforms. But the real value will be for leaders, who should soon be able to communicate with global workforces, in employees’ native tongues, with the leaders’ actual voices and inflections, in videos, town halls, teleconferences, and more. A lot is lost in translation, and while some of that is lexical, much of it is tonal. With this technology, leaders will be able to be heard in their own voice, if not their own language.
We’ll leave you with something helpful, if not necessarily “cool”: Researcher Damien Charlotin created a database of instances where lawyers included hallucinated cases in their briefs. Yet another reminder to verify outputs when accuracy matters.
AI Disclosure: We used generative AI in creating imagery for this post. We also used it selectively as a creator and summarizer of content and as an editor and proofreader.
Good morning, Claude. We’re going to include a summary of a recent paper in our weekly newsletter on generative AI, Confluence. You can get a flavor for Confluence at http://craai.substack.com. Read a few issues to get the writing style. Once you have, write a four paragraph summary of the recent research paper I've attached.
Make the point that generative AI use is increasing rapidly. Also make whatever points about the labor market you feel are appropriate given Confluence and the paper. We will list you as the author for this Confluence item, so you get proper credit. Ok ... go, and give it your all!