Maximizing AI potential with UX principles

After Microsoft’s GPT-4-powered Bing chatbot became a hit, Google hurried to release its own AI chatbot, Bard. Soon enough, it turned out that Bard had its flaws.

A perplexed Twitter user, seeking a basic calculation, asked: “If I’m going 8 miles/hour, how many hours will it take for me to go 8 miles?” The correct answer is, of course, 1 hour (8 miles ÷ 8 miles/hour), yet Bard confidently proclaimed, “12.5 miles.”
This case is one among many showing that, despite the widespread adoption of AI across industries, even mainstream offerings still have a long way to go.

In recent years, AgileEngine’s Data Studio has empowered companies in areas like fintech, media, logistics, and more to build market-ready solutions leveraging machine learning. Interestingly, some of the best lessons learned have emerged from an unexpected source: our own Design Studio’s user research and UX design guidelines.

Here are two sets of best and worst practices that have become integral to our approach when engaging with AI and ML projects. These practices play a pivotal role in ensuring a more accurate market fit and an enhanced end-user experience.

Good practice #1: focusing on user feedback

An iterative, feedback-driven process is central to agile and commonly used in AI projects. Yet far too many agile teams fail to extend this approach to end-user feedback. In our experience, incorporating UX techniques for user research alongside stakeholder feedback can be a game-changer, especially when engaging broad, diverse user groups.

But how do you collect user feedback without a 100-million-user base?

Consider how OpenAI approaches its user testing needs. As of the latest available data, ChatGPT boasts over 100 million users, with the website receiving 1.6 billion visits in June 2023. Despite these numbers, ChatGPT remains in beta, gathering feedback from this substantial user base.

Conduct qualitative user interviews to solicit feedback on the AI model’s performance in terms of user experience.

Some of the must-have questions include:
1) Is the model helpful?
2) Are the results accurate?
3) Given the current experience, would you prefer switching to interacting with a human, or would you keep using the AI to narrow down your requests?

Collect quantitative feedback from users once they interact with the AI. Here are some questions you could ask, followed by a sketch of how the answers might be captured:

1) On a scale of 1 to 10, how would you rate the accuracy of the results?
2) On a scale of 1 to 10, how confident are you in the AI’s ability to assist you?
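As an illustration, here is a minimal sketch of how such ratings could be captured and sent to an analytics backend. The QuantitativeFeedback shape and the /api/feedback endpoint are hypothetical placeholders, not a reference to any specific product:

```typescript
// A hypothetical sketch of capturing post-interaction ratings.
interface QuantitativeFeedback {
  sessionId: string;   // ties the rating to a specific AI interaction
  accuracy: number;    // "How would you rate the accuracy?" (1–10)
  confidence: number;  // "How confident are you in the AI?" (1–10)
  submittedAt: string; // ISO timestamp, useful for tracking trends over time
}

// Clamp out-of-range input so a UI bug can't skew the metrics.
function clampRating(value: number): number {
  return Math.min(10, Math.max(1, Math.round(value)));
}

async function submitFeedback(feedback: QuantitativeFeedback): Promise<void> {
  await fetch("/api/feedback", { // placeholder endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      ...feedback,
      accuracy: clampRating(feedback.accuracy),
      confidence: clampRating(feedback.confidence),
    }),
  });
}
```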

Run recorded sessions to observe users as they interact with the AI, or perform usability testing to identify UX-related issues.

Good practice #2: having user controls in place

Giving users meaningful control over AI output is another practice we consider essential. Take machine translation: fine-tuning tone, voice, and formality, instead of the one-size-fits-all approach used by Google Translate, could create translations matching the user intent more precisely.


Inclusive language is essential in many contexts. Users could have the option to select gender-neutral translations when referring to individuals. This feature helps users communicate inclusively and avoid potential issues that come with gender-specific language.
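As an illustration, here is a minimal sketch of what such user controls could look like in code. The translate() function, the /api/translate endpoint, and the option names are all hypothetical placeholders rather than a real translation API:

```typescript
// Hypothetical user-facing translation controls; not a real API.
type Formality = "formal" | "neutral" | "informal";

interface TranslationOptions {
  targetLanguage: string; // e.g. "de", "es"
  formality: Formality;   // user-selected tone instead of one-size-fits-all
  genderNeutral: boolean; // prefer gender-neutral phrasing for people
}

async function translate(
  text: string,
  options: TranslationOptions
): Promise<string> {
  // The endpoint stands in for whatever translation backend the product uses.
  const response = await fetch("/api/translate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text, ...options }),
  });
  const { translation } = await response.json();
  return translation;
}

// Usage: a business email translated formally, with gender-neutral wording.
translate("Dear colleague, please review the attached draft.", {
  targetLanguage: "de",
  formality: "formal",
  genderNeutral: true,
}).then(console.log);
```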


Bad practice #1: disregarding negative feedback

Issues like model drift, where a model’s performance degrades as real-world data shifts away from the data it was trained on, are inherent to AI systems. Addressing them starts with collecting negative user feedback through UI/UX channels.

Companies like Grammarly, Spotify, and YouTube use various UX strategies to monitor their products continuously. These strategies range from enabling users to easily report incorrect output to allowing them to rate the relevance of suggested recommendations.

The data collected through these channels provides valuable insights to AI engineers. These insights help engineers make necessary adjustments “under the hood” to maintain AI effectiveness and detect biases.
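As a rough sketch, an in-product reporting channel of this kind could look like the following. The event shapes and the /api/ml-feedback endpoint are assumptions for illustration, not a reference to how Grammarly, Spotify, or YouTube actually implement it:

```typescript
// Hypothetical negative-feedback signals flowing to an ML monitoring pipeline.
type NegativeSignal =
  | { kind: "incorrect-output"; outputId: string; details?: string }
  | { kind: "irrelevant-recommendation"; itemId: string; rating: 1 | 2 | 3 | 4 | 5 };

async function reportSignal(signal: NegativeSignal): Promise<void> {
  // Spikes in these signals can indicate model drift or bias "under the hood".
  await fetch("/api/ml-feedback", { // placeholder endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ ...signal, reportedAt: new Date().toISOString() }),
  });
}

// Example: a user flags a suggestion that corrected an already-correct sentence.
reportSignal({
  kind: "incorrect-output",
  outputId: "suggestion-42",
  details: "Flagged a correct sentence as a grammar error",
});
```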

Bad practice #2: sacrificing privacy for personalization

While personalization can enhance user experiences, sacrificing user privacy for customization is a detrimental pattern. Ignoring UX principles of respecting user boundaries and informed consent can lead to a breach of trust, or worse, regulatory compliance issues.

AI systems that collect extensive user data without clear communication and opt-in mechanisms can alienate users and tarnish brand reputation.

What to do about it?

Clearly communicate data collection and usage policies to users
When a user signs up for an AI-powered app, provide a clear and concise pop-up that explains what data will be collected, how it will be used, and who it might be shared with.
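For illustration, a minimal sketch of a consent record that captures exactly what the user saw and agreed to; all field names here are hypothetical:

```typescript
// A hypothetical consent record: store what the user actually saw and agreed to.
interface ConsentRecord {
  userId: string;
  policyVersion: string;   // which wording of the pop-up the user accepted
  dataCollected: string[]; // categories disclosed, e.g. prompts, usage metrics
  sharedWith: string[];    // third parties named in the pop-up
  acceptedAt: string;      // ISO timestamp of the opt-in
}

function recordConsent(userId: string): ConsentRecord {
  return {
    userId,
    policyVersion: "2024-01", // illustrative version label
    dataCollected: ["prompts", "usage metrics"],
    sharedWith: ["analytics provider"],
    acceptedAt: new Date().toISOString(),
  };
}
```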

Offer granular controls over what data users are comfortable sharing
Controls of this type can take the form of a series of checkboxes that allow users to choose what information they want to share.
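A minimal sketch of such checkbox-backed preferences, assuming illustrative category names and opt-out defaults:

```typescript
// Hypothetical checkbox-backed sharing preferences with opt-out defaults.
interface SharingPreferences {
  usageAnalytics: boolean;      // anonymous usage metrics
  conversationHistory: boolean; // store prompts to improve the model
  personalization: boolean;     // use profile data for tailored results
}

// Nothing is shared until the user explicitly opts in.
const defaultPreferences: SharingPreferences = {
  usageAnalytics: false,
  conversationHistory: false,
  personalization: false,
};

// Each checkbox toggles exactly one category, never a bundle.
function updatePreference(
  prefs: SharingPreferences,
  key: keyof SharingPreferences,
  checked: boolean
): SharingPreferences {
  const next = { ...prefs };
  next[key] = checked;
  return next;
}
```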

Implement robust data protection measures and comply with relevant regulations
For instance, if your app targets European users, implement processes that allow users to request their data to be deleted in compliance with GDPR.
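As a sketch of such a process, assuming a simple queue-based backend (an illustrative design, not a prescribed one):

```typescript
// A hypothetical GDPR "right to erasure" flow using a simple request queue.
interface DeletionRequest {
  userId: string;
  requestedAt: string;
  status: "pending" | "completed";
}

const deletionQueue: DeletionRequest[] = [];

function requestDeletion(userId: string): DeletionRequest {
  // Queueing lets downstream stores (backups, analytics, logs) be purged
  // as well, since erasure must cover every copy of the user's data.
  const request: DeletionRequest = {
    userId,
    requestedAt: new Date().toISOString(),
    status: "pending",
  };
  deletionQueue.push(request);
  return request;
}
```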

Wrapping this up

By strategically integrating UX principles, companies can navigate the intricacies of AI deployment more adeptly. This synergy enhances the user experience, elevates product quality, and increases revenue.
AgileEngine’s Design Studio stands as a beacon of expertise in this endeavor. Our team of 5-star-rated UI/UX experts is poised to assist you in overcoming any challenges at competitive nearshore rates.

Established in 2010, AgileEngine is a privately held company based in the Washington DC area. We rank among the fastest-growing US companies on the Inc 5000 list and are recognized as a top-3 software developer in DC on Clutch.

