AI Summit 2023 Key Takeaways: No Opting Out

December 13, 2023

In the eighth edition of the event, leaders in AI agreed on a few common themes—starting with the inevitability of adoption plans for all organizations. “We will never have the perfect data,” said a panelist. “Take something and start somewhere.”

Words by Emily Wengert and Jon Hackett

Images generated by DALL·E 3

The annual AI Summit New York reported its highest-ever attendance at the Javits Center this December, as the rapidly advancing technology from the likes of OpenAI, Google, Microsoft, and Meta continues to dominate headlines. As expected, this year’s event attracted broader interest not just from entrepreneurs, policymakers and data scientists, but also from chief information security officers, product developers and marketing leaders alike.

Though artificial intelligence has a mixed track record in the corporate world, the need for safe, practical applications for enterprise organizations is clear and present. In the following recap, we highlight the best, boldest ideas discussed during the two-day conference.

Our Take

We’ve passed peak hype — the conversation now among business leaders of nearly every stripe is all about creating pragmatic, responsible solutions to the biggest transformation the business world has seen since the advent of the internet. Opting out of the AI wave is simply no longer an option.

5 Key Takeaways

1. You Can’t Opt Out

“It’s not an option anymore” was clearly the message across sessions. During his panel, Nicolas de Bellefonds, managing director and senior partner at consulting firm BCG, raised concerns that AI will only accelerate advantages that already exist. “The leaders are widening the gap with the laggards.” He also noted that 10% of BCG’s clients experimenting with Gen AI are approaching it with the goal of end-to-end transformation, with technology, media, entertainment and telecommunications industries leading the way.

Many panelists exhorted brands to get started now. “We will never have the perfect data. Take something and start somewhere,” said Sowmya Gottipati, head of global supply chain technology at Estée Lauder, during a panel titled: Do we really know what problems we are trying to solve with AI?

As you’re contemplating AI adoption, de Bellefonds of BCG advised planning on the 10/20/70 rule, breaking down the effort it takes to create great AI as 10% algorithm, 20% technology and 70% people, recognizing the enormous import of bringing people along for the organizational transformation that AI may require.

As panelist Brian Dummann, chief data officer at AstraZeneca, put it, “It’s a team sport.” This technology requires adoption from every function in an enterprise.

2. One Size Fits All Won’t Work

This sentiment echoed again and again. While productivity and efficiency benefits were commonly touted, speakers also highlighted the need to experiment beyond them for real differentiation. In a particularly moving session, Chris Aidan, VP of innovation and emerging technologies at Estée Lauder, demonstrated a computer vision tool that visually checks how well someone’s foundation, eye and lip makeup has been applied, specifically addressing people with visual impairments — thereby empowering a whole new cohort of consumers.

Because the use cases are so many and the need for flexibility so high, Salman Taherian, Generative AI leader at Amazon Web Services (AWS), made clear on his panel that strategic partnerships are key, citing Anthropic, Stability and Meta’s foundation models as examples.

3. Data Maturity Is A Competitive Advantage

Taking advantage of the benefits of AI can be impeded by low or no data maturity within a company. Many companies are behind in collecting and organizing their data in a way that’s AI-ready. Pouring data into warehouses or ‘data lakes’ that feed an LLM is no small effort, but organizations with strong capabilities or partners here can move much faster than those starting from scratch.

Tarun Chopra, VP of product management at IBM Data and AI, reminded the audience how tough it can be for enterprises to scale this technology. “How to go from experimentation to productization? There’s a big chasm there.” One major gap is still data maturity. “If you don’t have a strong data layer, there is no point talking about AI.” Those with mature data sets, however, can reap the benefits. He shared an example from Citibank, which used an AI virtual agent to reduce the number of questions to its help desk by 60%, freeing up its auditors for higher-value work.

Across the two-day conference, the use cases ran the gamut from behind the scenes operational efficiency to customer- or consumer-facing examples. Rahul Tyagi, global VP of data science at Kenvue, outlined a framework for companies to take progressive steps when adopting Gen AI across the enterprise to maximize benefits in a consistent way.

At the core of his approach is tailored, agile optimization for each use case, and for each function of the business. At this point, a one-size-fits-all approach is not going to get optimal results; the needs of finance will be very different from those of sales, and each domain might need its own mini LLM tuned to its specific needs.

4. Governance Lacks Clear Enforcement

When it comes to safety, the things to consider — according to IBM’s Chopra — can be categorized in three buckets. The first is compliance with safety and transparency regulations and policies worldwide, something he described as the “nutrition label” for AI. Second is risk, which requires organizations to proactively monitor for fairness, bias, drift and other concerns created by generative models. And the third is lifecycle governance, which requires developers to monitor and manage a model through to the end of its life.

In a different session, Sy Choudhury, director of AI partnerships at Meta, called out their newly launched Purple Llama project, so named because it combines both red teaming (those attempting to break a model) and blue teaming (those attempting to build up a model) into one. As Meta’s press release this week mentions, “We believe that to truly mitigate the challenges that generative AI presents we need to take both attack (red team) and defensive (blue team) postures.”

Speaking on a panel presenting the AI Toolkit for Marketers, Emily Wengert, who leads experience innovation at Huge, championed the importance of a diverse ethics board that “has teeth” — including the ability to stop something from launching that isn’t safe. Affirming this point, quite a few vendors in the exhibition hall were also touting tools that provide a middle layer of governance and safety checks between a large language model (LLM) and the end user.
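To make the “middle layer” idea concrete, here is a minimal sketch in Python of a governance wrapper that checks both the user’s prompt and the model’s response before anything reaches the end user. All names here (`BLOCKED_TOPICS`, `guarded_query`) are illustrative assumptions, not any vendor’s actual API; a real guardrail product would use far richer classifiers than keyword matching.

```python
# Hypothetical sketch of a governance layer sitting between an LLM and the user.
# Keyword matching stands in for the policy classifiers a real product would use.

BLOCKED_TOPICS = {"medical advice", "legal advice"}

def moderate(text: str) -> bool:
    """Return True if the text passes a naive policy check."""
    lowered = text.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def guarded_query(prompt: str, llm) -> str:
    """Run policy checks on both the incoming prompt and the outgoing response."""
    if not moderate(prompt):
        return "Sorry, this request is outside our usage policy."
    response = llm(prompt)
    if not moderate(response):
        return "The generated response was withheld by the safety layer."
    return response

# A stand-in "model" for demonstration; a real deployment would call an LLM API.
echo_llm = lambda p: f"Here is an answer about {p}."

print(guarded_query("our return policy", echo_llm))
print(guarded_query("give me medical advice", echo_llm))
```

The key design point is that checks run on both sides of the model call, so an unsafe output is caught even when the prompt itself looked benign.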

5. The New Frontier Is Full of Questions

Even the AI experts on stage sharing groundbreaking case studies acknowledged the learning curve happening in real time. The emergence of generative AI into the public consciousness, along with increased corporate interest, is requiring traditional AI approaches to evolve — from data strategies to governance and oversight.

During a panel with representatives from the biggest AI players such as Google, OpenAI, Meta and Amazon, Meta’s Choudhury acknowledged that it’s hard to keep up, citing as an example the relatively new capability of retrieval-augmented generation (RAG), which has only taken hold in the last year.
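For readers new to the term, the core of RAG can be sketched in a few lines: retrieve the documents most relevant to a question, then place them in the prompt so the model answers from grounded context rather than memory alone. The documents and function names below are invented for illustration, and simple keyword overlap stands in for the vector-embedding search a production system would use.

```python
# Minimal illustrative sketch of retrieval-augmented generation (RAG):
# retrieve relevant documents, then stuff them into the prompt as context.

DOCS = [
    "Our help desk hours are 9am to 5pm Eastern, Monday through Friday.",
    "Refunds are processed within 10 business days of approval.",
    "The loyalty program awards one point per dollar spent.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank docs by how many words they share with the query; return the top k."""
    words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt that an LLM would then be asked to answer."""
    context = "\n".join(retrieve(query, DOCS))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("When are refunds processed?"))
```

The payoff for enterprises is the one Chopra’s data-maturity point implies: the quality of a RAG answer is bounded by the quality and organization of the documents being retrieved.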

It was fascinating to hear where these big AI players aligned and where they disagreed. One point of total consensus? When asked by NBC News senior tech, science and climate editor Jason Abbruzzese what percentage of companies will incorporate Gen AI into their business in 10 years, they unhesitatingly answered “100%.” For his part, OpenAI’s Adam Goldberg, a member of their GTM team, thinks it will take only five years to hit that mark. Time will tell.
