Confluence Update: What You Should Know About the Sam Altman OpenAI Keynote
Things keep moving fast.
Yesterday marked the launch of OpenAI’s inaugural developer conference, headlined by a 45-minute keynote from CEO Sam Altman. The tech press has already covered the talk extensively, and we expect more discussion in the coming days. Since you’re likely to hear more about it, we want to digest Altman’s announcements and explore some of their potential implications for leaders and corporate communication professionals in light of the technology’s trajectory.
With that in mind, here's a summary of the keynote's key announcements:
GPT-4 Turbo: Altman discussed the latest iteration of the model behind OpenAI’s generative AI language technology, GPT-4 Turbo, which marks a significant advance over the current GPT-4 model. In addition to being faster and more responsive, GPT-4 Turbo features an expanded context window, capable of processing input equivalent to 300 standard pages of text, and it is trained on data through April 2023, a significant update from the previous cutoff of September 2021. The update acknowledges the growing expectation for AI to be not only intelligent and reliable, but also current (as Altman noted, “We’re just as annoyed as all of you, probably more, that GPT’s knowledge about the world ended in 2021”).
Multimodal AI Capabilities: With the integration of auditory and visual understanding, AI’s new multimodal functionalities (the ability to read text, hear sounds, see images, run code, talk, and create images) are setting the stage for a future where technology interacts with the world in a manner akin to human perception. One benefit Altman forecast is that ChatGPT Plus users will no longer have to “switch models” among the default, “search the web,” Advanced Data Analysis, and DALL-E modes, and can instead use all of these abilities in a single chat. This functionality should arrive any day now, though we have yet to see it in action. When it does, it will be a huge time-saver and will open up more powerful ways to use GPT-4.
GPT Customization and Creation: Perhaps the biggest news was Altman’s announcement of the ability to create customized “agents,” which OpenAI is calling “GPTs,” for specific uses. The short story is that anyone can create, and if they wish, share, bespoke GPTs designed to meet specific needs. This capability marks a departure from the one-size-fits-all generative AI that many of us have been working with over the past year.
Apps: Altman also introduced plans for a platform akin to an app store, exclusively focused on GPT technology. Altman emphasized that this ecosystem would allow businesses and developers to craft bespoke GPT-powered applications tailored to specific tasks or industry needs. This app store model aims to simplify the integration of AI into organizational operations, and will likely lead to a surge in AI application development, democratizing access to advanced AI and making it easier to distribute AI-based solutions.
Microsoft: Microsoft CEO Satya Nadella was present in person for the keynote. He noted a strong alignment with OpenAI’s vision (Microsoft is a significant investor in OpenAI, and GPT is at the core of its Bing and Copilot AI technologies) and a commitment to democratizing AI, emphasizing accessibility and utility in daily operations. Given Microsoft’s leadership in workplace software, readers should expect generative AI technology to arrive on their desktops sooner rather than later.
AI Safety and Ethics: A cornerstone of the keynote was the emphasis on AI safety and ethics. Altman voiced a commitment to safety and ethical guidelines, a commitment that echoes the need for organizations to establish robust governance frameworks around this technology. As AI becomes more ubiquitous, ethical standards and transparent use policies will be essential to preventing misuse and an erosion of trust in content.
As we consider the implications of this keynote, we’re struck that several of the advances in AI it describes are not merely technological milestones — they are harbingers of a new modus operandi. We see several implications for where things may be headed:
Implications for Daily Operations: For business leaders and communication professionals, the practical implications of these advancements will be significant, as the integration of large language models (LLMs) into applications will provide a layer of intelligence that simplifies many otherwise complex tasks. You should anticipate a future where instructing (and, likely, constructing) a digital assistant using conversational language becomes the norm for executing a broad range of tasks, from data analysis to customer service.
Personalization and Customization: The ability to create tailored GPT models points to a shift toward personalization in these tools. Users can now craft AI solutions that align closely with their specific needs, a game-changer for workflows and business processes of all kinds, internal and customer-facing.
Governance and Ethical Use: Altman’s emphasis on safety and ethics rings especially true for corporate governance. As AI becomes more deeply embedded in business processes, the need for clear ethical guidelines and governance structures will continue to grow. Business leaders and communication professionals must work in tandem to establish principles that govern AI use, ensuring alignment with the company’s values and the broader societal good.
The Rise of Bespoke GPTs: Altman’s announcement of the ability to create specific GPTs is significant. Imagine being able to create multiple digital assistants fine-tuned to the nuances of your work, capable of performing tasks ranging from copyediting to crafting creative content to providing challenging feedback, and that’s just in the realm of communication. For leaders and communication professionals, this means harnessing the power of AI in wholly new, and highly directed, ways. The ability to create specialized GPTs is not just an incremental upgrade; it opens a whole new set of ways to unlock efficiency and augment work.
Natural Language as the New Interface: Drawing parallels to the revolutionary shift from command-line interfaces to the intuitive point-and-click approach the computer mouse introduced, LLMs are poised to become the next transformative interface. Expect them to increasingly serve as the layer through which you use technology, as developers incorporate LLMs into a wide variety of applications. Combined with voice technology, this means we will increasingly engage with computers by typing or speaking natural-language instructions, and indeed having back-and-forth conversations, rather than navigating through menus and clicking on icons.
If you really want some insight into how this all might work, consider watching the keynote. It’s not too technical, and the examples are illustrative. Overall, we consider yesterday’s announcements signposts for what’s ahead. The better you can anticipate this direction, the more adept you’ll be at putting this technology to work for you and for those you serve or lead, because things are evolving fast. We do take note of one thing Altman said, almost as an aside, at the end of his keynote:
“We hope that you’ll come back next year. What we launched today is going to look very quaint relative to what we’re busy creating for you now.” — Sam Altman
We can only imagine what that might be. Thanks for reading, and if you find Confluence of value, consider sharing it with others.
AI Disclosure: We used generative AI in creating imagery for this post. We also used it selectively as a creator and summarizer of content and as an editor and proofreader.