Confluence for 12.3.23
What we learned from our generative AI seminar. Amazon announces Q. The New Yorker on Microsoft, OpenAI, and Nvidia. AI and the economy. Sports Illustrated's AI authorship.

It was an eventful week for our work at the intersection of generative AI and corporate communication. We were fortunate to lead a two-day seminar on generative AI for communication professionals (more on that below). As expected, we learned a great deal from the conversations and from the engaged, curious group we were able to convene. But that wasn’t all. Here’s what has our attention this week:
What We Learned From Our Generative AI Seminar
Amazon Announces Q
The New Yorker on Microsoft, OpenAI, and Nvidia
AI and the Economy: A Perspective from Finance and Development
Sports Illustrated’s AI Authorship
What We Learned From Our Generative AI Seminar
Nothing makes generative AI real quite like direct exposure.
Last Friday, we concluded our most recent seminar on generative AI for communication professionals. Over two days of virtual sessions, we engaged with 20 participants, covering a range of topics from an orientation to large language models, to editorial, content-generation, and summarization use cases, to considerations of human factors. We are fortunate to engage with these topics daily, but for many participants, it was their first real exposure to these tools and their possible uses. It seems each participant was impressed — not by us, but by the uses and ramifications of these technologies for themselves and their teams.
Upon reflection, these two days affirmed much of what we’ve discussed in this space over the past several months: the significant disparity across organizations in the availability of, and permission to use, generative AI; the limited experience most people have with these tools; and the legitimate concerns around trust, authenticity, and the general lack of governance. We were most struck, though, by the rapid growth in the participants’ appreciation for what these tools can accomplish. “Enlightenment” might be too strong a term, but it was almost as if we could see the lightbulbs turning on. That underlines one of the simplest, but perhaps most important, pieces of advice we’ve been giving on generative AI: start using it. The only way to appreciate its strengths and weaknesses, and to unlock the seemingly infinite ways it can augment you and your work, is to start using it.
Over approximately six hours, we took these 20 professionals inside our own practice with these models, including how we use them, where we’re wary of them, and how we’re thinking about their larger implications for us, our team, and our work. While thoughtfully designed, the sessions were somewhat messy, hands-on, show-and-tell, you’ve-seen-us-now-you-do-it work, and they really served to open eyes to the possibilities (and the risks) of generative AI. If this group is reflective of the larger professional community, most communication professionals have only cursory exposure to large language models, and those who are using them for work are doing so in very introductory ways. There are many layers of untapped utility that exposure and learning can reveal. The only way to get to those layers is to start digging in.
Amazon Announces Q
Another major player introduces its enterprise AI chatbot.
This week's announcement of Amazon's AI assistant, Q, marks a significant addition to the enterprise AI market. Its value proposition largely mirrors those of Microsoft's 365 Copilot, Google's Duet AI, OpenAI's ChatGPT Enterprise, and others, though it has the added benefit of integration with and deep understanding of Amazon Web Services (AWS). Q is yet another signal that technology companies recognize significant opportunities in enterprise chatbots, suggesting more entries in this field from major players are on the horizon.
Amazon's announcement reinforced two of our existing beliefs. First, data security concerns, a major barrier to using AI tools, are likely to diminish as organizations find more ways to integrate large language models securely into their IT environments. Second, the reach of these AI tools is set to expand substantially. The decision to bring AI tools in-house gets easier when organizations can work with trusted technology partners whose products already make up significant parts of their existing IT infrastructure. As more organizations adopt these technologies for competitive edge or parity with peers, a wider range of professionals will have access to these tools for their day-to-day work.
Unless your organization has already chosen a specific AI tool, it's wise not to dwell on the differences between offerings. Instead, focus on understanding these tools as a class of technology. Consider the implications of your organization having widespread access to an AI chatbot that can tap into company data and files. The challenges and opportunities these tools present will be similar regardless of the chosen AI assistant, so we believe it prudent to prepare for their integration into your work environment sooner rather than later.
The New Yorker on Microsoft, OpenAI, and Nvidia
The magazine continues to be a leader in comprehensive coverage of developments in AI.
Two weeks ago, we pointed our readers to three pieces in The New Yorker’s AI issue that examined the intersection of AI and craft. Since then, the magazine has published two more pieces that we strongly recommend, both comprehensive profiles of some of the most important individuals and companies shaping the industry today. We continue to advise communication leaders to stay aware of the broader dynamics of the industry, as those dynamics will eventually impact nearly every organization in one way or another. Regardless of how familiar you are with the state of the industry, you’ll be smarter on it after reading these articles.
First is Charles Duhigg’s “The Inside Story of Microsoft’s Partnership with OpenAI,” which in our view is valuable not so much for its details on the drama that unfolded at OpenAI a few weeks ago (though there is that), but rather for its comprehensive account of Microsoft’s AI strategy, the origins and current state of its partnership with OpenAI, and what all of that might mean for the billion-plus people who use Microsoft’s products every day. If you feel behind on what’s going on in the industry, reading this article would be a great way to begin catching up.
Second is Stephen Witt’s “How Jensen Huang’s Nvidia Is Powering the A.I. Revolution.” You can think of the developments unfolding in AI as happening on three levels: the application level (the tools with which users directly interact, like ChatGPT), the model level (the AI “engines” powering those tools, like GPT-4), and the hardware level (the chips and supercomputing infrastructure that make it all possible). Of those, the hardware level is probably the least-discussed in the mainstream conversation (not surprisingly, as it’s the one most removed from everyday users), but hardware innovations have played an integral role in how we got to this moment — and will play an integral role in where things go from here. Nvidia is the most important company to know in that space, and this profile of its CEO provides a valuable overview of how the technology works and why it matters, as well as what we might expect hardware innovations to make possible in the years to come.
AI and the Economy: A Perspective from Finance and Development
Looking ahead to possible forks in the road.
We often stress the importance of looking beyond the immediate capabilities of AI models to their broader implications, as we believe a broad perspective is crucial to effectively projecting and preparing for the future. It’s one reason we read, and recommend you read, this article in Finance and Development by Erik Brynjolfsson and Gabriel Unger. It offers a compelling take on AI's potential macroeconomic trajectories, and addresses AI’s influence on productivity, income inequality, and industrial concentration through three potential “forks” in the road.
The first fork, productivity growth, involves two paths: one in which AI leads to modest improvements, and another in which it becomes a transformative force for productivity. The second fork, income inequality, involves AI either exacerbating disparities or helping to bridge them by enhancing the skills of less-skilled workers. The third fork, industrial concentration, contrasts a future dominated by large AI-intensive firms with a more balanced landscape made possible by open-source AI models.
These scenarios are not purely speculative — they are grounded in current trends and have significant implications for our economic future. It is in the best interest of communication professionals and leaders to have as informed a perspective as possible on the macro environment of these general-purpose technologies — it will aid in forecasting the future and in knowing which questions about that future are worth asking and which can be set aside. In this spirit, Brynjolfsson and Unger's article is worth the read.
Sports Illustrated’s AI Authorship
The revelation of AI contributors at SI shines a spotlight on the importance of disclosure.
We’ve been saying for some time that generative AI is going to heighten sensitivities around the authenticity of content, and the recent discovery that Sports Illustrated has been publishing articles attributed to fake, AI-generated authors illustrates the point. According to reports, SI listed authors like “Drew Ortiz” and “Sora Tanaka,” whose personas and headshots were crafted through AI, as contributors. These revelations bring to light the blurred boundaries between AI-generated and human-produced content, and the episode highlights the evolving challenges in content creation and the need for clear ethical guidelines and disclosure practices.
It’s our view that organizations should adopt well-thought-out disclosure policies when generative AI is involved in content production. The case of Sports Illustrated underscores the necessity of transparency if one is to maintain trust with audiences. Our guiding principle is straightforward: if the revelation that generative AI was used in creating content would make us uncomfortable, we disclose. Such transparency aligns with our values, and we hope it also fosters a relationship of trust with our clients.
Now we’ll leave you with something cool. Take a look at what Pika is doing at the leading edge of idea-to-video generative AI.
AI Disclosure: We used generative AI in creating imagery for this post. We also used it selectively as a creator and summarizer of content and as an editor and proofreader.