Designing AI interfaces that align with how people actually work

UI of AI

"If I were to ding the field right now on one thing, it is that there has been a massive lack of creativity on how people interface with these increasingly smart LLMs and agents."

When David Luan, vice president of Autonomy and head of Amazon's AGI Lab, made his observation on the Unsupervised Learning podcast, he tapped into an issue that influences both the adoption and effectiveness of artificial intelligence tools: design.

The rapid expansion of artificial intelligence capabilities, particularly agentic AI, demands a thoughtful and comprehensive approach to design.

"We're talking about design of a fairly classical nature, but it's all changed by the fact that we're working on AI systems," said James Powderly, principal product designer, AGI Lab. "Sometimes that's look and feel, e.g. what's the color of this button, but mostly it's about how does the interaction facilitate solving customers' problems? And then there's the other chunk – and that's designing the model itself."

Design has a significant role to play in the evolution of AI models to something more useful, contextual, and comprehensive. That transition, as Luan has phrased it, from perpendicular to parallel experiences is essential. It is also a transition the experts at the AGI Lab have been carefully considering for some time.

Danielle Perszyk, cognitive scientist and head of the human-computer interaction team at the lab, is one of those experts. She said she and her colleagues are driven by a question: "What would it mean to build an AI model that is revolutionary in its usefulness for humans?" The answer, she noted, requires a shift toward AI models that are designed with a nuanced understanding of the manifold ways in which humans learn.

Every transformative tool reorganized how humans think

The study of human intelligence is, of course, well-trod ground. "Over many decades there have been scientists across many disciplines studying intelligence, studying the cognitive systems, studying our interactions and how that augments our intelligence," Perszyk said. "Even historians fit into the picture here and can bring to bear insights on how we might design these systems."

Included in those historical insights, Perszyk noted, are useful lessons from humanity's experiences with other transformative systems and tools.

"If you think of writing systems and inventions like math, all the way up to calculators and computers, these all have their own interfaces," she explained. "Those interfaces fundamentally reorganized our brains and shifted how we attended to things collectively. The devil was in the details of how we designed them."

Studying how those tools evolved as people began to interact with them can yield important lessons for the design of evolving AI models. Understanding how other scientific disciplines contend with uncertainty is also important, noted Dave August, member of technical staff, AGI Lab.

"There is an inherent ambiguity in trying to understand how people work and why they work, those cores of intelligence," he observed. "We see this same phenomenon with social scientists, economists, legal scholars – these areas were all developed to deal with the fact that you have this non-static problem with all of these constantly ambiguous parts."

That ambiguity, he noted, is instructive for designing AI models whose look and feel might change radically in the next decade.

"In those fields, they never reach the point where they say: 'We've now got our problem set, and now we can get on with building our perfect machine to deal with it.' All of these systems that we've built for people now need to be navigated and pulled into computer science."

Why human data needs to be treated as a renewable resource

Utilizing these lessons begins with both better training data and a different approach to collecting that data. "Human data needs to continually evolve for the models to continually improve," Perszyk noted. "If we think of human data as a renewable resource in the same way that our interactions are going to continually evolve, it completely changes the way that you think about the nature of the data and how you collect it."

AI design must account for that constant evolution, August agreed. "When you're interacting with another person, that relationship changes over time. As you get to know each other, as you learn from each other, you further evolve and transform that relationship. In order to have these successful interactions with AI, you need those same dynamics."

Shifting AI design to reflect those dynamics will also require a reexamination of reasoning. "The way that some researchers think about these constructs right now is very coarse grained," Perszyk said. "They're using the word reasoning to mean a very narrow thing, where there is a right or a wrong answer and you can optimize for being right more often than wrong: verifiable things like math and code."

However, the nature of human interactions makes such clear-cut outcomes less common, and optimizing for them less helpful.

"When we're interacting, sometimes there's not a right or wrong answer," Perszyk explained. "We're negotiating meaning in real time, we're reasoning given different values, and we're making trade-offs. All of these things go into not just human data, but how we think we're going to need to constrain the agents to be able to be useful for us."

Designing AI that reduces mental burden and earns trust

The weight of our current cognitive load presents another design challenge. The question isn't as simple as reducing clicks; it requires a rethinking of how information is presented.

"That has to be something that the folks on the modeling side and the folks on the interface side are thinking about in tandem," Perszyk said. "Having a model of a mind is such an important anchor to even know what to generate. This is not just personalization but deeply understanding that there's another mind that's trying to learn."

That might even entail designs that nudge users away from text-based interfaces, or that have models determine which medium is best suited to the task.

"Do we make an interface that doesn't require people to talk or that realigns their mental model away from chatbots?" Powderly asked. "What's the horizontal prompt so that the model understands when it should listen, when it should navigate the web, how it should convert what the user says into a format, be it code or a simplified human readable instruction set, et cetera?"

Of course, that reduction in cognitive load isn't possible without increasing trust in the decisions models make.

"There's lots of agreement around dealing with the cognitive load problem," August said. "But in order to do that, you need to be able to hand off some levels of decision making, and in order to be able to do that, you need extremely high levels of trust."

Achieving that trust requires improving reliability, but not just in the traditional sense.

"Most people think about reliable as clicking in the right place, scrolling when you need to scroll, and that's part of it," Perszyk said. "You trust the model if it does the thing that you predict it should do, but different people will have different expectations. You need the agent to understand your mind, your preferences, your goals."

AI agents that unlock human potential

Understanding human intelligence and context is a powerful tool for improving model reliability, but that metric falls well short of what the AGI Lab team envisions.

"It's not just a continuation of what we have been building," Perszyk said. "I want agents to be our collective subconscious. We're exposed to all of this information in our waking lives, and then, behind the scenes, in the subconscious we've got all this activity and it's connecting, combining, recombining ideas, and sometimes we have these moments where the information has been assimilated in some new way and we experience it as the aha, the eureka – that is the engine for personal and collective discovery."

Beyond countering information loss, this design approach emphasizes tapping into latent potential. "What if we could actually capture all of the insights that we have and resurface them at the relevant moments while also taking care of digital drudgery, all the stuff that is not worthy of human mental energy and attention?" Perszyk asked. "Our brains are plastic enough that if we build things in a way that they are aligned with our cognition, we can actually have superpowers."

Research areas
  • Machine learning