Confluence for 11.19.23
The New Yorker on AI and craft. Thoughts on "unbundling" AI. Microsoft Copilot(s) updates. A crash course in practical AI. The week in GPTs.

First, we know the big news in AI this week is the ongoing corporate intrigue with Sam Altman and OpenAI. We won’t comment on that here, other than to say we, like almost everyone, have no idea what’s happening or what it may mean. Second, we’re happy to say our seminar on generative AI for corporate communication professionals has sold out. We’re offering a second run of the program December 19-20. The program is virtual and takes place over two half-days, and you may learn more about it via a one-pager available for download here. With that said, here’s what has our attention at the intersection of generative AI and corporate communication:
The New Yorker on AI and Craft
Thoughts on “Unbundling” AI
Microsoft Copilot(s) Updates
A Crash Course in Practical AI
The Week in GPTs
The New Yorker AI Issue: Grappling With the Effects of AI on Craft
Three articles in the magazine’s AI issue stand out.
This week’s edition of The New Yorker was dedicated primarily to AI, with several features exploring different dimensions of the technology’s impact on society and culture. Three stood out to us for their relevance and applicability to corporate communication:
James Somers’ “A Coder Considers the Waning Days of His Craft”
Anna Wiener’s “Holly Herndon’s Infinite Art”
An interview with cover artist Christoph Niemann
What each of these has in common is an examination of the impact of AI on craft. It’s a reckoning that will be necessary across all crafts, most notably those involving knowledge and the manipulation of information, and that of course includes corporate communication. All three pieces are worth reading for this reason, but we’ll highlight a few items from Somers’ piece that we think are particularly worth reinforcing for our readers.
First is the idea that a human augmented with AI is more powerful than either a human or an AI alone. The “Ben” referenced below is Somers’ friend, a less-experienced coder:
In chess, which for decades now has been dominated by A.I., a player’s only hope is pairing up with a bot. Such half-human, half-A.I. teams, known as centaurs, might still be able to beat the best humans and the best A.I. engines working alone. Programming has not yet gone the way of chess. But the centaurs have arrived. GPT-4 on its own is, for the moment, a worse programmer than I am. Ben is much worse. But Ben plus GPT-4 is a dangerous thing.
Second is the importance of the concept of “mechanical sympathy” to effectively using AI tools. As we’ve written before, knowing what these tools are good at—and not good at—is a prerequisite to using them well (and is an area many people get wrong):
The more one digs, the more one develops what the race-car driver Jackie Stewart called “mechanical sympathy,” a sense for the machine’s strengths and limits, of what one could make it do.
Last is the idea that, as AIs become more and more capable in the technical skills of a given domain, the meta skills and “softer” skills will become ever more important (the emphasis below is ours):
Computing is not yet overcome. GPT-4 is impressive, but a layperson can’t wield it the way a programmer can. I still feel secure in my profession. In fact, I feel somewhat more secure than before. As software gets easier to make, it’ll proliferate; programmers will be tasked with its design, its configuration, and its maintenance. And though I’ve always found the fiddly parts of programming the most calming, and the most essential, I’m not especially good at them. I’ve failed many classic coding interview tests of the kind you find at Big Tech companies. The thing I’m relatively good at is knowing what’s worth building, what users like, how to communicate both technically and humanely. A friend of mine has called this A.I. moment “the revenge of the so-so programmer.” As coding per se begins to matter less, maybe softer skills will shine.
Thoughts on “Unbundling” AI
Ben Evans highlights two paths forward that are worth our attention.
Most current discussions about generative AI refer to general-purpose tools: applications like ChatGPT, Copilot, and Midjourney, which organizations have created without specific use cases in mind. These multipurpose tools can execute tasks beyond even their creators’ expectations. They are not without flaws, though, and part of using them is developing a keen understanding of how to benefit from their strengths without falling victim to their weaknesses.
In a recent essay, Benedict Evans describes the consequences of these limitations, arguing that the more a user must intervene to benefit from these tools, the more the tools resemble existing technologies and the less they resemble future ways of interfacing with technology:
I don’t think the solution is to buy the metaphorical ‘ChatGPT for Dummies’ book. That will tell you that it helps if you type ‘imagine you’re an ambitious VP at an ad agency!’ before you ask for ideas for advertising copy, and that you can get the model to step through the reply, or use plug-ins, or a ‘Code Interpreter’. But if you need a manual, it’s not ‘natural language’ anymore. Once you start talking about ‘prompt engineering’, you’re describing command lines - you’re describing what came before GUIs, not what comes after them. — Ben Evans
Evans’ essay is worth reading in full. One key idea is how companies develop specialized interfaces, or “wrappers,” around large language models (LLMs) for specific functions, thereby streamlining our interactions with them. These wrappers could range from “thin,” like a straightforward graphical interface (think mouse-and-click) that makes it easier to use an LLM for a specific purpose, to “thick,” harnessing the capabilities of particular models for more specialized tasks that require distinct modes of interaction. The key question he poses, though, is “where does this all go for the user?” Do we end up with one (or a few) massively powerful tools that can do anything? Or does generative AI unbundle over time into specialized tools that we use for specific cases?
We don’t know the answer, but it’s worth thinking about now: the direction this question takes will have real implications for how people in your organization use generative AI in the coming months and years, and it will shape training, governance, and much else besides. This evolution will influence not only the tools we use to get work done, but the very nature of how we work.
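To make the “thin wrapper” idea concrete, here is a minimal sketch of what one might look like in code. It assumes the OpenAI Python SDK, and the model name, prompt, and function are our own illustrations, not a description of any actual product:

```python
# A minimal sketch of a "thin" wrapper: a single-purpose interface that
# hides the prompting from the user. All names here are illustrative.
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "wrapper" is a fixed instruction plus one narrow input: the user
# never writes a prompt, they only paste a draft.
SYSTEM_PROMPT = (
    "You are an editor for corporate communications. "
    "Rewrite the user's draft to be clear, concise, and on message. "
    "Return only the revised draft."
)

def polish_draft(draft: str) -> str:
    """Run a draft through the fixed editing prompt and return the result."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content
```

The point is not the code itself, but where the complexity lives: the prompting moves out of the user’s hands and into the product.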
Microsoft Copilot(s) Updates
There’s not just one Copilot anymore.
We have two items to note this week regarding Microsoft, one on the enterprise front and one on the consumer front.
First, on the enterprise front, it’s been about three weeks since Microsoft made Copilot for Microsoft 365 (formerly known as Microsoft 365 Copilot — confusing, we know) available to enterprise customers willing to purchase 300+ licenses. Prior to its general availability, the product was available to pilot customers for eight months, and this week Microsoft released a report with some initial findings. It’s lengthy (and we should consider the source), but the top-line findings are that Copilot users perceived both productivity gains and quality gains in their work, and that “77% of users said once they used Copilot, they didn’t want to give it up.”
From a perhaps more objective perspective, Ethan Mollick has begun reporting on his own use of the tool, and he sees immediate power in use cases for Outlook, Word, and PowerPoint. On Outlook and email, he notes, “Once more automation is added it is going to impact the process of work more than any of the other tools — automated emails from everyone, to everyone. We aren’t ready for that.” We agree and continue to advise our clients to anticipate the second- and third-order consequences like this as Copilot for 365 and similar enterprise-wide tools become more common.
On the individual consumer front, Microsoft has rebranded Bing Chat to Copilot. This matters for a few reasons. First, from a completely practical perspective, it makes the tool easier to access, as you can now do so from any browser by going to copilot.microsoft.com (previously users could only access it through Microsoft’s Edge browser). Second, this marks a shift in strategy and positioning of the product, with Microsoft now aiming to compete with ChatGPT rather than with Google. This could lead to a better user experience and better performance by the tool, though in our experiments since the rebranding, we still find it vastly inferior to ChatGPT. That said, for organizations with Microsoft licenses, Microsoft Copilot can offer enterprise-grade data protection and security (as we’ve written about here).
A Crash Course in Practical AI
Wharton has a series of five videos online that, while designed for students and teachers, are worth watching for those just getting started with generative AI.
The Wharton School, via professors Ethan and Lilach Mollick, has published an “interactive crash course” in practical artificial intelligence on YouTube. While it is directed at students and teachers, we’ve found the videos very helpful for anyone looking to build a stronger fundamental understanding of these tools. You may view them here, and the playlist includes:
An introduction to AI
Large Language Models
Prompting AI
AI for Teachers
AI for Students
Note that the Prompting AI video is still quite helpful even after the introduction of custom GPTs, as it helps you think through what strong instructions look like (regardless of how you create them). And while the teacher and student videos focus on roles in education, they are helpful for thinking through both the uses of generative AI and some of their consequences.
The Week in GPTs
The practice of building and using custom chatbots continues to evolve.
As we’ve written in the past two weeks, ChatGPT now allows users to easily create custom GPTs for specific uses — a feature we’ve already put into practice at our firm. Plenty of examples of how individuals use GPTs are cropping up, and there’s one in particular we want to highlight.
The hosts of the New York Times podcast Hard Fork created a “Hard Fork Bot” by uploading the transcripts from every episode of their show. While recording an episode, they decided to test their GPT, and the exchange is worth your time. They asked their bot what Sundar Pichai had said on the podcast months prior. The Hard Fork Bot not only answered correctly, it shared enough detail to effectively preempt the second question they wanted to ask.
This example falls squarely in the “GPTs as communication vehicles” vein we discussed last week. It’s a small leap to go from GPTs capable of referencing podcast transcripts to ones that can pull from information specific to organizational policies or strategies — a wholly new form of communication channel in the corporate environment.
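OpenAI hasn’t published exactly how GPTs use uploaded files, but the general pattern behind bots like this (retrieve the relevant passages, then answer only from them) is simple enough to sketch. Below is a deliberately naive illustration, again assuming the OpenAI Python SDK; the keyword-overlap retrieval is a crude stand-in for the embedding-based search real systems use, and all names are hypothetical:

```python
# A naive sketch of the pattern behind document-grounded bots like the
# Hard Fork example: find the relevant excerpts, then answer from them.
# Custom GPTs handle retrieval for you; this shows the idea by hand.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def top_passages(question: str, passages: list[str], k: int = 3) -> list[str]:
    """Rank passages by crude keyword overlap; real systems use embeddings."""
    q_words = set(question.lower().split())
    return sorted(
        passages,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )[:k]

def answer_from_documents(question: str, passages: list[str]) -> str:
    """Answer a question using only the most relevant excerpts."""
    context = "\n---\n".join(top_passages(question, passages))
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[
            {
                "role": "system",
                "content": "Answer using only the excerpts below. If the "
                           "answer isn't in them, say so.\n\n" + context,
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```

Swap podcast transcripts for policy documents, and this same loop becomes the new communication channel described above.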
Now we’ll leave you with something cool: On-demand David Attenborough narration.
AI Disclosure: We used generative AI in creating imagery for this post. We also used it selectively as a creator and summarizer of content and as an editor and proofreader.