Intelligent interfaces of the future

Making things that think.

Christopher Reardon
UX Collective

--

A young woman uses a futuristic augmented reality interface to navigate a large city.
Created with Midjourney and Photoshop. Copyright C-Squared AI 2023

I was inspired by this post on LinkedIn.

AI makes me superhuman — or so the saying goes.

As a designer and sci-fi movie lover, I’ve always been captivated by the utopian vision of AI as an assistant or guardian taking care of my every whim, whether it’s the omnipresent voice of Scarlett Johansson in Her, a caring robot in Big Hero 6, or J.A.R.V.I.S. in Iron Man. Who wouldn’t love not worrying about thousands of unread emails, having a personal health coach making decisions for you so you can stay in shape, or delegating the running of a multi-billion dollar business so you can play superhero? AI-enabled intelligent systems will decide the best UI paradigm for any given user scenario more accurately than any human could. What should designers do to prepare for that eventuality?

AI is a transformative technology we’ve only just begun to experiment with. However, the story doesn’t end with multimodal UIs that appear proactively. For people to rely on intelligent systems, the systems must demonstrate human values, ethical and transparent decision-making, privacy controls, data security, traceability, fairness and inclusion, contextuality, and a whole host of other considerations.

“When an AI is trained on the entire internet’s worth of human experience, those holding companies should be obliged to give back in ways that create real-world value for all.”

The corporation biases the UI.

Bringing an AI-enabled multimodal ‘concierge’ (as mentioned in the link above) into a reality that protects people and society will require reimagining how the enterprise builds, manages, and maintains such systems. Here’s why — a corporation’s culture and operations heavily influence decisions that impact how an AI-enabled service will ultimately function.

An AI-powered product or service reflects the values, people, processes, and metrics the organization promotes and supports. For coherence in purpose, corporations must adopt business models and governance systems that make people’s and society’s well-being core to business success and tangibly reflect that commitment in employee incentive programs. Having a well-crafted mission statement or posters of your principles and values on every office wall isn’t enough to ensure alignment or impact — researchers and developers need explicit top-down direction, reflected externally, to maintain accountability. Governments are forming clearer opinions on how AI should work and setting progressively stricter policies to protect people and society. Employees, stakeholders, and shareholders should be on the same page about balancing making money with preserving and prioritizing people’s safety and agency over their data.

CEOs must adapt and retool organizations with unique skill sets, processes, and methods to ensure adequate protections that deliver ethically defensible outcomes and value. While AI will bring efficiencies to the enterprise by freeing up resources, those resources must focus on a new set of responsibilities to safeguard society; the Venn diagram of resources shifts from product management toward governance and oversight. Businesses should experiment with monetization and compensation strategies to incentivize employees to do the right thing, even if it means losing revenue. Founders will have to manage investor expectations when societal-level decisions hang in the balance over launching systems that are untested or unverified at scale.

A corporate org chart that resembles the structure of the human brain.
Org charts heavily influence the construction of AI intelligence. Created with Midjourney.

Who decides what’s best for people?

As the fidelity of AI ‘comprehension’ increases, large AI systems will understand more of the nuances of human communication. For efficiency and efficacy reasons, AI will learn more from less data, improving the accuracy of its synthesis of human inputs and the quality of the recommendations it generates. Simply put…

“A virtual concierge that provides what’s needed based on understanding who you are, getting better the more you interact with it.”
Rachel Kobetz

In this world, AI will exponentially improve at understanding your verbal and non-verbal communication signals, habits, and data exhaust to compile a higher-fidelity profile of you than any human could. It will attain a level of intimacy with the individual that will seem prescient, offering the correct answers, suggestions, and proactive actions in the blink of an eye.

Red teams help you prepare for unintended outcomes.

Product designers focus on designing solutions that address users’ unmet needs, assuming an empathetic mindset to help them achieve their goals. They conduct research to uncover insights, create hypotheses, and build and test prototypes, gradually moving towards shippable solutions that deliver features that enhance the user experience. It’s an effective, streamlined process but an incomplete methodology.

Red Teams work in an entirely different way. Originally, Red Teams were ‘hackers’ hired by CTOs to conduct penetration tests: they would find ways to breach security systems so that gaps in defensive measures could be identified, errors mitigated, and weaknesses bolstered.

When designing AI-enabled products from scratch, designers can take a leaf from the Red Team’s playbook: think about ways the AI might deliver adverse outcomes through a lack of transparency or fairness, consider how malicious actors might manipulate the AI to degrade the model’s performance, or apply the AI against different use cases to cause harm. They can also brainstorm how the AI might drift over time or develop emergent capabilities that don’t align with its original intended purpose.

By conducting Red Team brainstorms, product teams can evaluate whether an AI is fit for purpose. Teams run typical “what if” brainstorming sessions where the goal is to create a list of ‘how might this go wrong’ scenarios. Participants from privacy, compliance, ethics, philosophy, civic & human rights, engineering, and cybersecurity are invited to don black-hat roles to poke and prod at the shiny new product ideas. CTOs can also employ this process on existing products to help insulate them from undesirable outcomes. Ultimately, this approach is meant to help assess risks, identify novel uses and opportunities, and deepen the team’s appreciation for a rigorous approach to designing and developing robust AI solutions that protect people, society, and the company.
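To make the brainstorm output actionable, a team needs some way to capture and triage the ‘how might this go wrong’ scenarios it generates. The sketch below is purely illustrative — the categories, the 1–5 scoring scale, and the severity-times-likelihood ranking are assumptions for the example, not an established red-teaming standard.

```python
from dataclasses import dataclass

# Illustrative category set; a real session would tailor these
# to the product and regulatory context.
CATEGORIES = {"privacy", "fairness", "security", "misuse", "drift"}

@dataclass
class RiskScenario:
    category: str     # one of CATEGORIES
    description: str  # the "how might this go wrong" statement
    severity: int     # 1 (minor inconvenience) .. 5 (societal harm)
    likelihood: int   # 1 (rare) .. 5 (near-certain)

    def score(self) -> int:
        # Simple severity x likelihood product, used only for triage.
        return self.severity * self.likelihood

def triage(scenarios: list[RiskScenario]) -> list[RiskScenario]:
    """Return scenarios ordered highest-risk first."""
    return sorted(scenarios, key=lambda s: s.score(), reverse=True)

# Hypothetical output of one red-team session for an AI concierge.
session = [
    RiskScenario("privacy", "Assistant retains audio of bystanders", 5, 4),
    RiskScenario("drift", "Recommendations skew after a model update", 3, 3),
    RiskScenario("misuse", "Profile data resold to advertisers", 4, 2),
]
ranked = triage(session)
```

The value isn’t in the arithmetic — it’s in forcing the group to state each scenario concretely enough that it can be scored, owned, and revisited in later sessions.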

A person stands on a crowded city street, metadata information is imposed over the image in a futuristic user interface.
Ever-present UI could provide any information in an instant. Created with Midjourney.

Here’s an example scenario illustrating the potential downsides of an ever-present, highly intimate AI co-pilot.

“Imagine an ever-present invisible AI that can see and remember everything you do, having intimate knowledge of your every word and action so that it can readily provide contextual information and services that are convenient, personalized, and unintrusive. On the surface, this system would be the ultimate assistant, transforming your life and freeing up your time so that you can focus on the essential things in life.

With that level of data, models could predict your entire day, influencing your every decision without your awareness. Those with the money to afford an AI-enabled assistant would be given a golden ticket in life, relentlessly widening the divide between rich and poor. People would live in a world of hyper-personalized experiences that, on the surface, would feel magical. People who share more data get more AI ‘superpowers,’ furthering the cycle. People without access to the AI would still be recorded, with no means to opt out of its data gathering. For AI adopters, however, their worldview and expectations of people in general would be altered by the thousands of overt and indirect recommendations they receive daily, separating them from those without, degrading society’s shared beliefs and common ground.”

Blue teams (teams focused on positive solutions) could then unpack the scenario, categorize the risks described, and look for ways to mitigate the adverse outcomes, asking “How might we” questions to ensure that ever-present UIs (AIs) can’t harm people or influence them in ways that aren’t beneficial to their well-being or the stability of society.
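The blue-team pass can be thought of as pairing each red-team risk with a “How might we” (HMW) prompt and tracking which risks still lack an accepted mitigation. The risks and prompts below are hypothetical examples drawn from the scenario above, not a prescribed framework.

```python
# Hypothetical red-team risks mapped to blue-team "How might we" prompts.
risks_to_hmw = {
    "always-on recording of non-users":
        "How might we let bystanders opt out of data capture?",
    "hyper-personalization narrowing worldview":
        "How might we surface diverse viewpoints by default?",
    "access gated by wealth":
        "How might we deliver core benefits on low-cost devices?",
}

def unmitigated(risks: dict[str, str], addressed: set[str]) -> list[str]:
    """List risks that still lack an accepted mitigation."""
    return [r for r in risks if r not in addressed]

# Suppose the team has so far only resolved the access question.
open_items = unmitigated(risks_to_hmw, {"access gated by wealth"})
```

Keeping the open list visible gives product teams a concrete backlog of ethical work, rather than leaving mitigation as a one-off workshop outcome.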

What’s the future of interface design?

Product design will be less about sweating the pixels on a UI, worrying about components in the pattern library, or the content decisions that went into your navigation system — although all of those things will still be necessary for some time to come. AI startups are already starting to augment, and sometimes replace, the designer’s role in many typical product design tasks, and these tools will only accelerate and improve with time.

Designers must evolve, leveraging their empathy for users and parlaying it into a new role: driving strategic design decision-making that influences every function involved in developing, managing, and governing intelligent systems today. To protect society from a dystopian future, designers must facilitate collaborations across organizations and build alliances with ecosystems of experts and stakeholders they haven’t worked with before (like ethicists, civil & human rights advocates, compliance experts, philosophers, psychologists, behavioral economists, and anthropologists) so that organizations can create informed, coherent strategies and operating models that equitably benefit all communities. Designers must learn to focus on systems design rather than design systems, because AI is built and managed across many competing agendas, systems, processes, and people. Indeed, AIs will ultimately oversee design systems in the future.

Making things that think.

Designers are used to navigating and adapting to changing design challenges because they hold a human-centered approach in all that they do. This anchoring focus can see designers through the disruption of intelligent copilot systems. The future of high-impact design will be artfully designing the intelligence (AI mind) of the service, which happens long before any pixel hits the glass. There are no rule books, so design leaders must focus on bringing together the right stakeholders to make informed decisions on how the AI should behave, what use cases it should be developed against, how it is trained, how to handle things when they go wrong, what agency its users will have over their data and the recommendations the AI provides, and whether its value outweighs its inherent drawbacks. Designers will have to consider whether full automation of some things outweighs having people do them the old-school way, because sometimes people should be left to struggle and fail in order to grow (take education as an example). AIs will enable product teams to manage fleets of personalized apps across different communities, ensuring cultural norms, security, and human agency.

What should product owners consider during the gold rush of AI?

  1. Question fundamental assumptions about what problems the world needs to solve and whether AI is the right fit for those solutions. If it’s a universal problem, and the solution can be readily available and work for all, then you might be on to a winner.
  2. Question whether your team’s mission aligns with society’s and the planet’s well-being. Do your revenue goals bias how you might leverage technologies that can influence society?
  3. Expand the concept of the designer’s role beyond someone who solely focuses on the UI of a product. Companies need to bring to bear a liberal arts approach to AI strategic planning, ensuring that engineering and data scientists are well informed about the ethical, legal, and socio-psychological impacts the services they develop can have. Design thinking can empower discussions, workshops, and tactical implementations that synthesize and integrate the requirements from numerous perspectives to ensure equitable outcomes.
  4. Adapt and learn new skill sets to ensure AI is correctly and ethically tuned to benefit users, stakeholders, society, and the planet. Question — should fewer people be working on ‘product’ so more people can work on the ethical decision-making frameworks needed to manage intelligent systems?
  5. Consider how design thinking might ensure solutions are equitable and available to all. If everyone needs a $700 phone and broadband internet, you will likely perpetuate inequities that have marginalized communities for centuries.
  6. Consider implementing a top-down strategy that aligns ethical, civic, and legal considerations with business goals so that product teams have a clear line of sight to success. Think about the exponential learning curve the AI of today is on, project what new benefits and harms might come, and work backward from those that inform today’s decisions.
  7. When designing AI-enabled products, be mindful of conflating user convenience with doing good in the world. Life is about learning how to deal with obstacles.
  8. Develop robust risk-assessment methods and decision-making frameworks to ensure teams consider the benefits and challenges of managing AI-enabled products. Create, optimize, and sequence red-team processes and evaluation methods so that product teams maintain momentum while doing their utmost to protect people and society.
  9. Focus on current AI responsibility and privacy concerns, become familiar with regulatory requirements (especially from the EU), and put off visions of the future until the organization's operational maturity is ready to manage the complexities of “thinking machines.”

--

Director - AI Envisioning Studio, Google. Ex-Meta - RAI Head of Design. www.c-squared