Confluence for 12.29.24
Asleep at the wheel. 321 corporate use cases for generative AI. Anticipating the workforce implications of AI agents. What gets lost in AI translation.

Welcome to Confluence. Here’s what has our attention this week at the intersection of generative AI and corporate communication:
Asleep at the Wheel
321 Corporate Use Cases for Generative AI
Anticipating the Workforce Implications of AI Agents
What Gets Lost in AI Translation
Asleep at the Wheel
We fell into a trap we should know all too well — and one you must be wary of, too.
Last week’s Confluence contained a revealing error that made it past our otherwise rigorous quality control process. The word “Communication” appeared as “Communicaiton” not once, but three times. For a publication focused on professional communication, the error carried particular weight. More telling still: the author has built a career helping leaders and organizations craft precise, effective communication. The irony cuts deep.
The error emerged from a classic case of over-reliance on automation. The author misspelled the word while editing a section heading, and copy and paste then carried the mistake to two other locations, multiplying the error. The web editor’s spell-check function (which is inconsistent in applying its little red underlines) failed to flag the typo. A habitual trust in those red squiggles had bred such complacency that the author never ran the text through a separate spell-checker. Aside from offering a public mea culpa, we raise the matter because it represents a textbook example of what automation researchers call being “asleep at the wheel.”
The phenomenon has been well-documented since 1983, when cognitive psychologist Lisanne Bainbridge published “Ironies of Automation” in the journal Automatica. Originally focused on process control in manufacturing, her paper quickly became one of the most influential works in human factors engineering, cited by more than 1,800 scholarly publications. Bainbridge identified a fundamental paradox that still haunts system designers today: as automation becomes more reliable, operators grow increasingly dependent on it, becoming less engaged and less capable of taking control when automation fails. The insight proved so profound that Google Scholar tracked 10 new citations of her work in just a two-week period in 2016 — 33 years after its publication. While Bainbridge developed her theories studying industrial processes, her observations echo across every domain where humans interact with automated systems.
Consider modern aircraft pilots. They face a peculiar cognitive challenge: maintaining vigilance while monitoring highly reliable automated systems that rarely require intervention. Studies show that even highly trained pilots miss critical signals after periods of routine automation monitoring. In one analysis by the FAA, a significant percentage of serious aviation incidents involved pilots failing to notice automation disconnects or mode changes. It’s not just about skills deteriorating — it’s about the human brain’s fundamental inability to maintain sustained attention when watching systems that almost never fail. The same pattern of vigilance breakdown appears in manufacturing, where operators monitoring automated production lines can miss subtle signs of impending failures, and in process control, where plant operators frequently fail to detect gradual system degradation until it reaches critical levels. We now see this same phenomenon in knowledge work, where professionals monitoring automated systems — from trading algorithms to content management systems to spell-checkers — struggle to maintain the sustained attention needed to catch subtle but significant errors.
The parallel to our current moment with generative AI proves striking. As these tools become more sophisticated and reliable, we risk falling into the same trap that Bainbridge identified forty years ago. The allure of automation is powerful: it promises to handle routine tasks more efficiently than humans ever could. But this efficiency comes with hidden costs.
Modern professionals have grown deeply reliant on an expanding arsenal of automated tools, and with generative AI, the breadth of tasks where the machine is doing some or all of the lifting is only going to increase — far beyond writing, but to analysis, idea generation, critique, decision evaluation, and more. When did you last manually look up the spelling of a word? Or scrutinize grammar suggestions rather than accepting them automatically? The creeping complacency manifests in numerous ways: Teams copy-paste boilerplate text without reviewing it for context-specific errors. Executives send emails trusting Outlook’s auto-complete feature for distribution lists, occasionally broadcasting sensitive messages to unintended recipients. Leaders use generative AI to draw conclusions from qualitative data without reading the data themselves. We’ve seen cases where AI-generated content included plausible-sounding but fabricated statistics, which made it into final materials because no one questioned their authenticity. While these technologies impress, they can lull users into a dangerous sense of security that manifests in increasingly consequential ways — from embarrassing typos and formatting inconsistencies to material misstatements that erode credibility and trust.
So, what to do?
Establish clear checkpoints where you deliberately disengage from automation. For communication professionals, this might mean printing documents for review, reading text aloud, or having colleagues provide fresh eyes. For leaders using generative AI, it means maintaining healthy skepticism about AI-generated content and developing robust review processes that don’t simply layer automation upon automation. And while you’re at it, continue to apply intentional practice to the skills you’re automating. Just as pilots maintain their manual flying skills through regular practice and simulator training, writers need to keep drafting important messages from scratch, and leaders need to keep thinking through and critiquing important decisions themselves, rather than defaulting to AI for every task. The goal isn’t to avoid automation but to preserve fundamental capabilities alongside it.
The future of knowledge work will increasingly involve human-AI collaboration, but successful professionals must remain mindful of automation’s ironies. The technology that makes our work easier can also make us more vulnerable to errors when it fails. Last week’s spelling error serves as a timely reminder: staying awake at the wheel requires conscious effort, especially when the road ahead appears smooth.
321 Corporate Use Cases for Generative AI
In some organizations, things are further along than you might think.
Google just published the most comprehensive collection of enterprise generative AI use cases we’ve seen to date — more than 300 real-world implementations. You can see the list here. Some examples:
Formula E now summarizes two-hour race commentary into two-minute podcasts in any language, incorporating driver data and ongoing seasonal storylines.
PODS created what they called the “World’s Smartest Billboard” — a campaign on their trucks that adapted to each neighborhood in New York City, changing in real-time based on local data.
The World Bank is developing a tool to extract key information from research literature on the causal impact of development interventions, with the goal of helping decision-makers more effectively allocate $220 billion in annual aid and trillions in annual impact investing.
Thomson Reuters added Gemini Pro to its suite of tools approved for employee use. Its 2-million-token context window makes some tasks up to 10 times faster to process and allows entire documents to be handled in context.
There are many, many more use cases worth reading about on the list. But what makes the list interesting for us isn’t just its breadth, but what it reveals about how organizations are putting AI to work.
The use cases cluster around six primary functions: customer service, employee empowerment, creative work, data analysis, code development, and security. But the real story isn’t the categories — it’s the approach. The list demonstrates a marked shift from isolated AI projects toward more deeply integrated capabilities that change entire workflows. Some organizations have clearly moved beyond the “let’s try AI” phase into something more transformative (while many others, we know from experience, aren’t even in the “let’s pilot AI” stage).
For those in corporate communication, this shift carries a clear implication. When AI is brought to multiple functions in an organization, the challenge isn’t just explaining a new technology — it’s helping people understand and adapt to a fundamentally different way of working. We’re seeing communication teams increasingly tasked with threading this needle: maintaining enthusiasm about AI’s potential while setting realistic expectations about its limitations, and celebrating efficiency gains while addressing legitimate concerns about change. For leaders, the challenge is equally complex: they must drive adoption while ensuring their teams don’t get ahead of their capabilities, invest in new AI initiatives while maintaining operational excellence, and model the behavior they want to see by becoming skilled AI users themselves.
Regardless, what’s clear from this list is how quickly some organizations are moving from experimentation to implementation. This wave will spread, and it will bring sweeping change. The companies in Google’s compilation aren’t just automating tasks — they’re redefining roles, rebuilding processes, and rethinking what’s possible. That’s a coming reality communication teams will need to help their organizations navigate, even as they navigate it themselves.
Anticipating the Workforce Implications of AI Agents
Will generative AI continue to be a skill leveler, or will agents alter the equation?
In September, we wrote that it was time to start paying attention to AI agents, “AI that has agency: goals, the ability to plan, and the ability to make decisions and take actions in service of those goals and plans.” While AI agents have not yet been deployed at scale, we’re seeing more steps in that direction. Earlier this month, Will Douglas Heaven wrote in the MIT Technology Review about the progress on Google’s Project Astra, the in-development “everything app” which “uses Gemini 2.0’s built-in agent framework to answer questions and carry out tasks via text, speech, image, and video, calling up existing Google apps like Search, Maps, and Lens when it needs to.” Heaven notes that Google is “betting its future” on this technology, which is consistent with recent remarks by Google CEO Sundar Pichai and Google DeepMind CEO Demis Hassabis.
Given the intense competition between the major labs, we can be confident that Google is not the only player investing heavily in agentic AI. As for whether these investments will pay off in the form of powerful products and real traction with individuals and organizations, time will tell. But the potential power of AI agents — and their implications for how work gets done in organizations — is such that we need to start taking their possibilities seriously and thinking about the potential implications now.
Enrique Ide and Eduard Talamàs of the Department of Economics at Barcelona’s IESE Business School are doing just that. In a working paper titled “Artificial Intelligence in the Knowledge Economy,” Ide and Talamàs use an economic modeling approach to anticipate the impact of agentic AI within firms. The modeling is complex, but the conclusion they reach is both simple and consistent with our intuition: “While autonomous AI primarily benefits the most knowledgeable individuals, non-autonomous AI disproportionately benefits the least knowledgeable.” Non-autonomous AI is what we have today: tools like ChatGPT and Claude that can augment humans in the completion of tasks, but cannot complete the tasks themselves. Autonomous AI is agentic AI: tools that can actually do the work and complete tasks on their own. In other words, Ide and Talamàs suggest that the generative AI models we have today disproportionately benefit the least knowledgeable individuals, while the AI we may be getting soon will primarily benefit the most knowledgeable individuals.
That today’s generative AI models can disproportionately benefit the least knowledgeable workers is consistent with a growing body of research and with our own experience. The “Jagged Frontier” study, which we’ve written about at length in Confluence, found that generative AI acted as a skill leveler, benefiting lower performers more than higher performers. The AI tool used in that study was GPT-4, a decidedly non-autonomous (i.e., non-agentic) AI model.
With the advent of agentic AI, however, Ide and Talamàs argue that more experienced and knowledgeable individuals will stand to benefit the most. This makes plenty of intuitive sense. If an AI tool can do a relatively simple task at the same level of quality as a junior employee, at 10 or even 100 times the speed, why delegate it to the junior employee at all? With agentic AI, experienced employees will have access to “infinite interns” to do this work, and the temptation for many of them will be to increasingly delegate to AI rather than to junior employees.
While the short-term payoff of such an approach may be high, the second- and third-order consequences for skill development across the organization should capture leaders’ attention. Matt Beane has written about this looming dynamic at length in his book, The Skill Code, and on his Substack, Wild World of Work. A throughline of much of Beane’s writing and research is the power of the expert-apprentice bond for skill development. Agentic AI, if not harnessed properly, may be a significant threat to this bond and to the development of future generations of talent within organizations. As agentic AI matures and eventually begins to make its way into the tools we use every day, leaders should begin preparing to address this challenge with renewed focus.
What Gets Lost in AI Translation
Machines master words, but cultural nuance needs humans.
We write often here at Confluence about how generative AI aids communication professionals with tasks like writing talking points, populating FAQs, and developing messaging. For many communication departments with international stakeholders, though, these tasks require an additional step: translation. A recent article in The Economist examines advances in machine translation — advances worth examining for their impact on communication work.
Recent testing by translation technology company Unbabel shows AI models can handily translate everything from casual text messages to dense legal contracts. The progress is quantifiable: between 2017 and 2022, the time required for human translators to review machine translations (“Time to Edit,” or TTE) dropped from three seconds per word to two. Unbabel’s CEO estimates human involvement in translation will drop from its current 95% to near zero within three years.
Yet machine translation tools face the same challenges as other tools powered by large language models: limitations in planning, memory, and factual accuracy. While translating individual sentences is “pretty close to solved” for languages with extensive training data, according to Google Translate researcher Isaac Caswell, this applies mainly to widely used languages. Languages with less available training data remain difficult to translate, and even for well-resourced languages, interpreting meanings rather than just words remains challenging.
The Economist article highlights a tension in translation between “fidelity” and “transparency.” A faithful translation might preserve a Polish idiom like “not my circus, not my monkeys” in English, while a transparent one would adapt such phrases to resonate with the target culture. For corporate communication, where nuance and cultural sensitivity matter as much as accuracy, understanding this distinction is crucial.
The evolution of translation technology reveals a truth about generative AI: while science can advance the mechanics of language conversion, the art of cultural interpretation remains distinctly human. Communication teams that thrive will be those that blend approaches, knowing when to leverage AI’s speed and efficiency and when to call upon human expertise for cultural nuance and creative interpretation.
We’ll leave you with something cool: The New York Times has created a quiz to see just how good you are at recognizing AI-generated elements in photography.
AI Disclosure: We used generative AI in creating imagery for this post. We also used it selectively as a creator and summarizer of content and as an editor and proofreader.