From Builders to Gardeners: The Road to Real-Time UIs

Generative AI has far more to offer than generic text, image, and code generation. It has the potential to completely reinvent digital experiences. For it to deliver on that promise, however, designers and developers will have to undertake a fundamental mindset shift: experimenting and breaking down our current mental models in order to invent the future.

Words by Brian Fletcher, Global Chief Technology Officer

Main image by Natalie Comins with Midjourney. Photo of Brian Fletcher by Alex Wilson.

The early days of the Generative AI revolution have provoked anxiety for most of us who work in technology and adjacent fields. As we work to make sense of the impact GenAI will have on our world, we can all agree on one thing: in three years, none of us will be doing our jobs the way we do today. And there’s nothing wrong with that.

Quite the opposite, in fact. 

How disruptive will GenAI be? A recent study of how this technology will affect work tasks and labor organization found that 44% of all working hours across industries could be affected by its current applications, and that more than half of the almost 20,000 tasks analyzed could use GenAI “as an input to unleash creativity and enable novel solutions.” It is this last observation that most interests me.

In other words, when it comes to jobs that involve creativity—and as a Chief Technology Officer, I’m obviously including coding among them—GenAI is an opportunity that goes way beyond creating text, code, images, or sound. When it comes to the design and innovation that we do at Huge, what I’m most interested in is how GenAI can be used to push digital experiences to new heights. How do we reinvent experience, and how will expectations change for tomorrow’s users?

And this is where real-time UIs come in—a topic I recently spoke about at SXSW.

Real-time UIs (and how we get there).

What do I mean when I say “real-time user interface”? A user interface (UI) is the surface area of any digital experience—made up of content, images, buttons, links—where a human interacts with a computer, website or application. A real-time UI is dynamically generated and adapted to the needs of the user, in the moment. I’m talking about way more than A/B tests, content personalization or rules engines. Imagine an infinite canvas of possibilities, assembled on the fly and unique to each user. It’s the ultimate in personalization. 

For personalized real-time UIs to become a reality, we need GenAI. But not through a text-based chatbot experience. We need LLMs to generate or assemble UI code unsupervised—code that can adapt to the user’s circumstances dynamically. 

Now, this is a big step up in LLM capability and, frankly, the capability is not quite there. While LLMs have been trained on billions of lines of code, the primary use case for code right now is the coding assistant. We cannot yet trust an LLM to create code unsupervised. But with proper controls and guardrails, we are starting to see this capability emerge.
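One way to picture those controls and guardrails: rather than letting a model emit arbitrary executable code, ask it for a declarative UI description and validate that description against an allowlist of components before anything is rendered. The sketch below illustrates the idea in Python; the component names, props, and `validate_ui` helper are all hypothetical, not any real product’s API.

```python
# Minimal sketch of guardrailed, model-generated UI (hypothetical schema):
# the model produces a JSON-like component tree, and we refuse to render
# anything outside a small allowlist of component types and props.

ALLOWED_COMPONENTS = {
    "card": {"title", "body", "image_url"},
    "button": {"label", "action"},
    "list": {"items"},
    "text": {"content"},
}

def validate_ui(node: dict) -> list:
    """Return a list of problems; an empty list means the tree is safe to render."""
    problems = []
    kind = node.get("type")
    if kind not in ALLOWED_COMPONENTS:
        # Unknown component: reject the whole subtree rather than guess.
        problems.append("unknown component: %r" % kind)
        return problems
    for prop in node.get("props", {}):
        if prop not in ALLOWED_COMPONENTS[kind]:
            problems.append("%s: unexpected prop %r" % (kind, prop))
    for child in node.get("children", []):
        problems.extend(validate_ui(child))
    return problems

# A (hypothetical) model response for a birthday-party planner.
generated = {
    "type": "card",
    "props": {"title": "Party plan"},
    "children": [
        {"type": "text", "props": {"content": "Saturday, 3pm"}},
        {"type": "button", "props": {"label": "Confirm", "action": "confirm_plan"}},
    ],
}

assert validate_ui(generated) == []  # passes the guardrail, safe to render
assert validate_ui({"type": "script", "props": {"src": "evil.js"}}) != []  # rejected
```

The design choice is the point: the model “speaks” in a constrained vocabulary of components, and a deterministic validator sits between generation and the user, so a bad generation degrades to a rejected tree rather than broken or unsafe UI.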

If you want to know what this looks like, watch this demo of Google’s Gemini building an experience to plan a birthday party in a chatbot application. This demo is mindblowing to me—an impressive piece of engineering. Gemini is able to respond not just in text, but in an interactive experience that’s also composed of images, text and layout. 

The Gemini demo represents a fundamental shift in how we need to think of digital experiences. If we are going to start trusting an LLM to generate code for our users, we will have to approach the design and development of digital experiences in a new way.

From builders to gardeners.

I have worked in this field for almost a quarter of a century, starting as a developer for a startup in 2000. Over the years, I have developed software and led teams to build hundreds of digital applications: translating visual mockups into working code, building features and functionality, testing, iterating, launching, scaling, and monitoring them.

This idea of “building” digital experiences is coming to an end. Working with LLMs means we cannot predict what exactly will happen in our applications. To incorporate generative AI into our experiences, we need to start thinking of our role as gardeners instead of builders. 

Instead of painstakingly crafting a user experience, we need to provide the right conditions for great experiences to happen. That will require loosening our need to control experiences down to the pixel. It will also mean spending far more time putting in guardrails that keep these experiences from getting out of control.

And as dramatic as this change may be, I’m excited for it. It’s a change that will open up new possibilities and usher in the next era of experience.

It will fundamentally change everything we know about digital experience.

Large Experience Models.

For us to get to this next era of experience, we may need more than what Large Language Models can offer us. Let me explain.

We’re all familiar with Large Language Models. OpenAI, for example, trained its models on Common Crawl, a huge dataset of public web pages, among other sources of text-based content. This approach makes sense given that the goal was language understanding and semantic meaning. But another approach may help enable real-time UIs.

Do you remember Rabbit, the company that caused a big splash at CES this year with their Rabbit r1 personal AI assistant device? Rabbit decided that a large language model was not going to help them achieve their goal of enabling task completion through the r1, so they spent two years training their own model. It was trained on real users performing tasks in real digital applications, such as booking hotel reservations, ordering Uber Eats, or playing music on Spotify, and it used this training data to learn for itself how to accomplish the same tasks. Rabbit calls it their Large Action Model, and it’s the magic glue that lets you ask the r1 to “book me a hotel next weekend in Las Vegas.”

I wonder if we need to take a similar approach to pave the way for real-time UIs. What if we trained a model on experience itself? We could develop an interaction-specific model grounded in the fundamentals of human-computer interaction and great user experience, allowing the AI to truly understand what “experience” means. We would then have an AI that can “speak” fluent experience: talking to users through design, animation, and interaction, and then understanding the user’s reaction.

A Large Experience Model may be the missing ingredient to fully realize the potential of Real-time UIs for digital experience.

It’s time to experiment. 

This is a momentous time in our careers and for humanity. We as builders and idea people should seize this moment to reinvent the future. We have a massive opportunity to leverage this technological advancement to improve people’s lives.

We need to channel our anxiety into action and find ways to leverage Generative AI to push experiences to new heights. We’ve been playing and experimenting with the web for more than three decades, and this moment is no different. We have an enormously powerful new tool at our disposal, but fully delivering on its promise will require designers, programmers, product managers, strategists, data scientists, and everyone involved in digital products to come together and push each other in order to invent the future.

We need to play. We need to fail. We need to learn from those failures. We need to educate each other. We need to be adventurous again. We need to be revolutionaries. 

Let’s invent the future.
