Confluence for 10.5.2024
OpenAI launches "Canvas." Microsoft announces new Copilot features. Ethan Mollick's "Crowd" and "Lab." A first impression of Apple Intelligence.

Welcome to Confluence. Here’s what has our attention this week at the intersection of generative AI and corporate communication:
OpenAI Launches “Canvas”
Microsoft Announces New Copilot Features
Ethan Mollick’s “Crowd” and “Lab”
A First Impression of Apple Intelligence
OpenAI Launches “Canvas”
Another step in generative AI as a colleague working at your direction.
OpenAI surprised folks with a new feature for ChatGPT this week: “Canvas.” Read their blog post on it here. It’s a new way of chatting with ChatGPT, one that allows you to direct ongoing revisions to code and, importantly for folks in communication, to text. Think of it as an evolution of the word processor, one in which the reasoning and general knowledge of ChatGPT is embedded. From the OpenAI post:
People use ChatGPT every day for help with writing and code. Although the chat interface is easy to use and works well for many tasks, it’s limited when you want to work on projects that require editing and revisions. Canvas offers a new interface for this kind of work.
With canvas, ChatGPT can better understand the context of what you’re trying to accomplish. You can highlight specific sections to indicate exactly what you want ChatGPT to focus on. Like a copy editor or code reviewer, it can give inline feedback and suggestions with the entire project in mind.
You control the project in canvas. You can directly edit text or code. There’s a menu of shortcuts for you to ask ChatGPT to adjust writing length, debug your code, and quickly perform other useful actions. You can also restore previous versions of your work by using the back button in canvas.
Canvas opens automatically when ChatGPT detects a scenario in which it could be helpful. You can also include “use canvas” in your prompt to open canvas and use it to work on an existing project.
Writing shortcuts include:
Suggest edits: ChatGPT offers inline suggestions and feedback.
Adjust the length: Edits the document length to be shorter or longer.
Change reading level: Adjusts the reading level, from Kindergarten to Graduate School.
Add final polish: Checks for grammar, clarity, and consistency.
Add emojis: Adds relevant emojis for emphasis and color.
We won’t show it to you in action, as there are already plenty of folks on X and YouTube doing that … this video is a representative and detailed hands-on:
In our initial testing we think Canvas is, well, cool. It’s cool to have ChatGPT make a document longer with a click. Or to adjust the reading level the same way. Or to highlight a paragraph and tell ChatGPT how you’d like it to revise that text.
But the real thing to take from Canvas isn’t its functionality, it’s what Canvas says about the evolution of generative AI. It’s our view that we will increasingly see AI working underneath or alongside another layer of your work and work applications. With GPT-4o you can already “chat with” spreadsheets in real time, uploading a table and, in chat, asking it to create new columns, calculate new variables, make graphs, and more. With Claude Artifacts you can work with the AI in real time to improve and revise code, create outlines, make simple applications, and more. With Canvas, AI becomes a layer for the creation of text. You become the editor, and ChatGPT becomes the staff writer. This is the future for lots of things, we think.
Microsoft Announces New Copilot Features
A sign of how generative AI will show up in everyone’s workspace.
Microsoft has just announced the latest wave of features coming to Copilot, their AI assistant that’s rapidly becoming a fixture in organizations. While these features haven’t yet reached our team, their potential impact warrants our attention. For many, Copilot is becoming their introduction to and primary interface with AI in their daily work lives, so when we learn of significant updates, we pay attention.
Among the promised features are voice capabilities (akin to OpenAI’s Advanced Voice Mode) and a daily briefing. But one feature stands out from the rest: how Copilot will introduce vision. We’re familiar with image analysis in AI — the leading models can all examine an uploaded image and offer insights. But Copilot Vision promises something meaningfully different that holds the potential to greatly affect how we work.
Copilot Vision isn’t just able to analyze static images; it’s designed to actively “watch” your screen in real time. The demo below provides a glimpse into how this works.
Now, this is a simple demo in a highly controlled, and ultimately edited, context. But even so, it’s helpful to start picturing what this means for the future. Imagine getting real-time feedback on that memo you’re drafting, or having an AI coach you through a tricky Excel formula as you type. The possibilities, especially with the voice integration shown, are extensive.
Of course, we should temper our interest with a dose of reality. We haven’t tested these features ourselves, and even the most impressive demos can gloss over real-world limitations. But whether this specific iteration delivers on its promise or not, it’s a clear signal of how the companies delivering these applications see the potential of generative AI.
Are we ready to truly adapt how we work to get the most out of these features on day one? Almost certainly not. But that simply heightens the need to experiment now. The companies developing frontier models and those integrating their capabilities into applications do not have the expertise or experience to tell us exactly how we should use these technologies. It’s incumbent upon us to create a context in our teams and organizations where individuals have the foundational knowledge and the permission to experiment, to learn more, and to elevate specific use cases and helpful practices.
Ethan Mollick’s “Crowd” and “Lab”
A new way of thinking about successfully integrating generative AI tools into organizations.
Ethan Mollick recently shared his observations on the state of generative AI in organizations. In this latest blog post, Mollick points out a noticeable gap in how organizations are using AI: while individual workers report significant generative AI use and productivity gains, many organizational leaders see little evidence of this at the company level. Recent studies back this up. In Denmark, for instance, 65% of marketers and 64% of journalists reported using AI at work. In the U.S., a third of workers used generative AI on the job in the past week.
These individual gains aren’t always translating into organizational benefits. Mollick argues that to bridge this gap, companies need to invest in their own AI R&D process. This isn’t a task that can be outsourced to consultants or software vendors — or, at least, not yet. The landscape is evolving too rapidly, and the most effective implementations will be tailored to each organization’s unique context and needs.
Mollick suggests two key approaches for organizations: leveraging the “Crowd” and creating an internal “Lab.” The Crowd refers to the employees already experimenting with generative AI in their daily work. Mollick refers to these workers as “Secret Cyborgs” — using AI tools but not disclosing it due to concerns about potential repercussions or loss of perceived value. To tap into the “Crowd,” Mollick advises companies to:
Reduce fear by providing clear guidelines on acceptable generative AI use
Align reward systems to encourage sharing of AI innovations
Model positive generative AI use at the leadership level
Create opportunities for employees to showcase their AI applications
While the “Crowd” is a valuable resource on its own, it’s most productive when combined with the “Lab” — a more focused, centralized effort. The “Lab” is an internal team, composed of subject matter experts and a mix of technical and non-technical staff, that should focus on:
Building organization-specific generative AI benchmarks
Developing and testing AI-powered tools and processes
Exploring future applications for generative AI, even if they’re not immediately viable
Creating demonstrations that illustrate the technology’s potential impact
Mollick’s insights are particularly relevant given the Copilot updates discussed above. As generative AI tools become more sophisticated and integrated into workplace systems, organizations that have already laid the groundwork for effective adoption will be better positioned to leverage these advancements.
The message is clear: now is the time to actively engage with generative AI technologies and to establish the infrastructure to support this engagement. By fostering a culture of experimentation and openly addressing concerns, organizations can maximize the benefits of generative AI while ensuring its responsible and effective use. The challenge — and opportunity — lies in reconfiguring our organizational structures and processes to fully harness the unique capabilities of generative AI. This shift requires us to think beyond traditional paradigms and imagine new ways of working that blend human and artificial intelligence.
You might, for example, rethink your workflow design by identifying tasks that generative AI can augment or automate, then redesigning processes to integrate these tools seamlessly. This could mean using AI to produce first drafts or generate content ideas, freeing up team members to focus on strategy and the interpersonal aspects of projects. Or you could consider creating AI-human collaborative teams that pair generative AI expertise with domain-specific knowledge. These cross-functional teams can develop AI-powered solutions tailored to specific needs, such as AI-assisted drafting of internal newsletters or AI-enhanced analysis of employee engagement survey responses.
The key is to start small, learn from early experiences, and continuously adapt as both the technology and our understanding of its potential evolve. It’s about building the foundation today for the ever-closer future, where generative AI is not just a tool experimented with by some, but an integral part of how work gets done by most.
A First Impression of Apple Intelligence
The Times’ lead technology writer shares highlights while tempering expectations.
In June, we shared Apple’s plans to bring AI directly to your devices with Apple Intelligence. This month, that vision begins to materialize, as the company begins rolling out its suite of AI-powered tools to iPhones, iPads, and Macs. New York Times tech writer Brian X. Chen recently tested an early version, offering insights into what users can expect. His experience reveals both promising features and areas for improvement.
Chen found several features particularly useful, two of which are of interest to us. The audio transcription tool accurately captured conversations and organized the text by speaker, a boon for professionals who record meetings or interviews. The integrated writing assistance tools for quick responses and proofreading also sound promising by Chen’s account. Not all features met expectations, though. In addition to an iffy photo-editing tool, the text summarization feature sometimes generated inaccurate information — a reminder that AI-generated content still requires human verification. Notably absent from this initial release are some of Apple’s most anticipated features, including ChatGPT integration and Siri’s ability to synthesize information across multiple apps. Apple plans to introduce these gradually over the coming year.
Our June post pointed to Apple Intelligence as a significant development in consumer AI. Now, as it rolls out to millions of devices, we're keen to see how it performs in real-world use. Will it streamline daily tasks as promised? How will it shape users' expectations of AI? And crucially, how might it influence the already-growing demand for similar capabilities in professional tools? We’re looking forward to finding out.
We’ll leave you with something cool: Meta has announced their video generation model and … it’s impressive.
AI Disclosure: We used generative AI in creating imagery for this post. We also used it selectively as a creator and summarizer of content and as an editor and proofreader.