CES 2024: How AI Will Change the User Experience

January 18, 2024

The UX of emerging AI is unresolved. But the groundwork for transformation is already being laid. Here’s how we see this playing out.

Words by Natalie Comins, Jon Hackett and Emily Wengert

Photos Courtesy: Midjourney, Rabbit

We found what we’d expected to find at the Consumer Electronics Show in Las Vegas last week: aisle after aisle in football-stadium-sized exhibition halls buzzing with vendors hawking newly enhanced, game-changing, AI-powered products.

But we were looking for something else.

Something seismic: signs of disruption caused by the more emergent corners of generative AI, and how that might shake up customer experience (CX) in the future. And indeed, we came away with greater certainty that AI is sparking a significant disruption to the way we interact with the world, and will soon power a sea change in design fundamentals.

“We’re starting to see a shift in user experience paradigms,” said Natalie Comins, Huge Group Creative Director, during the panel The Digital Brand Experience: AI + 3D + XR + AR + Metaverse. “In the last year, there have been product releases that are meant to completely replace our handheld smart devices. So imagine now, if our phones go away and are replaced by these new devices, we’ll need to define, understand, create, and scale experiences for brands in an entirely new way.”

Hints of Transformation

Of course, there were a few glimmers of what’s to come, beyond interfaces as we know them. Volkswagen announced a ChatGPT integration for their cars. Microsoft touted a new “AI companion” key on their keyboards to access Copilot, which promises to “unlock productivity.” And Samsung demonstrated a clever use case: converting standard-definition content to 8K resolution with AI image upscaling that fills in the missing data.

Google’s retro-futuristic Android exhibition space offered hands-on, functional examples of generative AI capabilities added in the last year: generated wallpapers, Duet AI in Gmail, and Google answering a phone call with a natural-sounding voice to hold a conversation on your behalf, almost as if you had a real assistant screening calls.

Nearby, Walmart showed off their new AI-enhanced search capability that lets people think about shopping in more practical, need-based terms. Shoppers can now search by broad goals, like “supplies for a unicorn-themed birthday party,” instead of enduring the tedium of hunting for specific products one by one. (Every e-commerce company should take note: Amazon has previously hinted it’s working on something similar.)
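
To make that concrete, here’s a minimal sketch of what goal-based search might look like in principle: a broad goal gets decomposed into individual product queries before anything hits the catalog. Everything below, from the function names to the hard-coded shopping list, is invented for illustration; Walmart hasn’t described its actual implementation.

```typescript
// Hypothetical sketch only: decompose a broad shopping goal into individual
// product queries. The hard-coded mapping stands in for a language-model call;
// none of this reflects Walmart's real system.

function decomposeGoal(goal: string): string[] {
  const knownGoals: Record<string, string[]> = {
    "unicorn-themed birthday party": [
      "unicorn balloons",
      "unicorn cake topper",
      "pastel paper plates",
      "party favor bags",
    ],
  };
  return knownGoals[goal] ?? [goal]; // fall back to a plain keyword search
}

// Each decomposed item then runs through ordinary product search.
function searchCatalog(query: string): string {
  return `results for "${query}"`;
}

decomposeGoal("unicorn-themed birthday party").forEach((q) =>
  console.log(searchCatalog(q)),
);
```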

Rabbit’s r1 device. Courtesy of Rabbit.

Rabbit Races Ahead

While all these innovations excited us, one standout AI startup, headquartered in Santa Monica, CA, painted a holistic vision for a new paradigm: Rabbit. And they weren’t even on the showroom floor. Select guests and members of the press could see the device in a tiny meeting room at the back of the Wynn hotel, where reportedly only one demo unit was working.

So it’s not surprising if you’ve never heard of Rabbit. They weren’t on anyone’s radar. But during CES, Rabbit released a video in which CEO Jesse Lyu, a self-described serial entrepreneur, announced new AI assistant software packaged into a compelling orange hardware casing designed by Teenage Engineering: the Rabbit R1.

While press coverage largely centered on the “will this device replace my phone?” question, it entirely missed the real innovation. In the launch video, Lyu showed how the software didn’t just answer questions but accomplished complex tasks, like calling an Uber or planning and booking a trip. He called it a Large Action Model (LAM). As a result, the user never has to engage directly with the interface of the brand providing that service again. Woah.

As Lyu put it, they want to “break away from app-based operating systems.” Simply by talking to it, Rabbit removes the “digital friction” from our interactions so we can stay immersed in our lives. And once an AI agent, rather than the user, is performing tasks inside apps, imagine how tried-and-true design paradigms will need to shift when core interface elements, such as ad placements, are simply skipped over by machines.
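
Here’s a hedged, purely hypothetical sketch of that shift, assuming an agent that turns a spoken request into a direct service call. Every type and function name below is invented; Rabbit hasn’t published its architecture. The point is simply that the service’s own interface, ads and all, never renders for the user.

```typescript
// Purely hypothetical sketch: every name here is invented for illustration and
// does not reflect Rabbit's (unpublished) architecture.

// A structured intent the agent has already extracted from a spoken request.
interface Intent {
  action: "book_ride" | "order_food";
  params: Record<string, string>;
}

// An adapter that calls a service's backend directly, bypassing the service's
// own app and therefore its interface elements (ads, upsells, and so on).
interface ServiceAdapter {
  canHandle(action: Intent["action"]): boolean;
  execute(intent: Intent): Promise<string>;
}

// Invented adapter for a generic ride-hailing service.
const rideAdapter: ServiceAdapter = {
  canHandle: (action) => action === "book_ride",
  execute: async (intent) =>
    // A real agent would make an authenticated API call here.
    `Ride requested to ${intent.params.destination}`,
};

// The agent routes intents to adapters; the user only ever talks to the agent.
async function runAgent(intent: Intent, adapters: ServiceAdapter[]): Promise<string> {
  const adapter = adapters.find((a) => a.canHandle(intent.action));
  return adapter ? adapter.execute(intent) : "Sorry, I can't do that yet.";
}

// "Get me a car to the Wynn."
runAgent({ action: "book_ride", params: { destination: "Wynn Las Vegas" } }, [rideAdapter])
  .then(console.log);
```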

The second major achievement Rabbit claims is the ability to train the software simply by demonstrating a task yourself, once. In the video, Lyu showed the AI how to use Midjourney to create a cartoon of a wild dog; Rabbit then applied the same workflow to making an image of a bunny. No code required. Imagine you have a repetitive task: instead of doing it many times, you do it once in training mode and the AI repeats it as often as needed thereafter. If it works, category disruption will soon follow.
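
In principle, “show it once, repeat forever” maps to a record-and-replay pattern. The sketch below is an assumption-laden illustration of that idea, with invented step and parameter names; Rabbit hasn’t detailed how its teach mode actually works.

```typescript
// Hypothetical record-and-replay sketch; the step structure and templating are
// invented, since Rabbit hasn't published how its teach mode works.

// One recorded step of a demonstrated workflow, with placeholders for the parts
// that should change between runs (for example, the image subject).
interface RecordedStep {
  description: string;                          // e.g. "enter the image prompt"
  template: string;                             // e.g. "cartoon of a {subject}"
  perform: (resolved: string) => Promise<void>;
}

// Replay the demonstrated workflow with new parameters substituted in.
async function replay(steps: RecordedStep[], params: Record<string, string>): Promise<void> {
  for (const step of steps) {
    const resolved = step.template.replace(
      /\{(\w+)\}/g,
      (_, key) => params[key] ?? `{${key}}`,
    );
    await step.perform(resolved);
  }
}

// Demonstrated once with "wild dog"; replayed with "bunny". No code from the user.
const demonstratedWorkflow: RecordedStep[] = [
  {
    description: "enter the image prompt",
    template: "cartoon of a {subject}",
    perform: async (text) => console.log(`[agent] typing: ${text}`),
  },
  {
    description: "submit and wait for the result",
    template: "",
    perform: async () => console.log("[agent] submitting and waiting"),
  },
];

replay(demonstratedWorkflow, { subject: "bunny" });
```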

That said, it’s worth tempering expectations around Rabbit. Who knows the quality of its responses, or how it was trained and fine-tuned to avoid bias or hallucinations? The company hasn’t offered much transparency to date about how the service will store a logged-in state with third-party services, or about what it does with all the data it learns about you. Does it get to know your preferences, or would you have to constantly repeat that you’re allergic to peanuts and soy when asking it to order food?

And that is the final achievement we’re still waiting for: consistent, relevant knowledge of the user. Imagine the switching costs when you have a personal AI that you’ve trained over months or years to both assist you with tasks and understand your unique preferences.

An AI that creates a safe, streamlined model of its user will win.

Rabbit is a first-mover in this regard, but it’s easy to imagine existing big-tech hardware players like Apple and Google incorporating similar capabilities into their operating systems down the road.

The Future of UX, by Design

Huge is focused on helping brands transition into these disruptive futures with a winning plan based on data, centered on humans, and built for the future. Interested in knowing more about how AI will change UX? Come join Huge at SXSW, where our CTO Brian Fletcher takes this further in his panel, Real-time UIs: The Future of Human-Computer Interactions.

Can’t wait for March? Check out our Huge Moves for 2024, including “Forget Bots: The AI Agents are Coming,” which looks at how task-achieving AI will transform commerce.

And, of course, you can always reach out. We still have humans writing back.

Natalie Comins, Jon Hackett and Emily Wengert all specialize in emerging experiences at Huge, covering strategy, creative and tech.
