AI UX—Recommendations and Takeaways

The final episode of this mini-series looks at key takeaways and recommendations to consider while designing your AI service, application, or connected device.

AI UX YouTube* Playlist

Subscribe to the YouTube* Channel for Intel® Software

Additional Resources:

Loi, D. 2018. Intelligent, Affective Systems: People's Perspectives & Implications. Yogyakarta, Jakarta, and Malang, Indonesia.

Loi, D., Raffa, G., & Esme, A. A. 2017. Design for Affective Intelligence. 7th Affective Computing and Intelligent Interaction Conference, San Antonio, TX.

Bostrom, N., & Yudkowsky, E. 2014. The Ethics of Artificial Intelligence. In The Cambridge Handbook of Artificial Intelligence. Cambridge University Press.

Chen, S. AI Research Is in Desperate Need of an Ethical Watchdog. Retrieved October 14, 2017.

PwC. 2017. Global Artificial Intelligence Study: Sizing the Prize.

Gershgorn, D. 2017. The Age of AI Surveillance Is Here. Quartz.

This is the last episode of AI UX, a mini-series focused on 10 guidelines created to assist everyone involved in the design and development of AI-based systems. I'm Daria Loi, and today I'm giving you a summary of the guidelines, along with some recommendations and takeaways from this series.

This series was based on a study that aimed to identify design guidelines for AI systems. The guidelines are informed by the perspectives of people from a wide range of backgrounds, ethnicities, and ages. These diverse individuals shared with me their attitudes, thresholds, and expectations toward AI systems.

In the study, I used qualitative and quantitative tools to derive the key insights behind the 10 guidelines.

Among many findings, the study showed how people's knowledge of AI systems impacts their understanding of and willingness to embrace such systems. 

I also learned about what I call the domino effect of smart things, which occurs when a person quickly expands the number of AI devices they own after a successful first encounter with one AI system.

While people have concerns, they're also prepared to flex their comfort zones if there is a high return on investment. 

People also want to control AI systems, and they show a preference for efficiency-oriented uses.

Finally, while people are open to smart things, they are less enthusiastic about intelligent, independent ones.

These guidelines are not set in stone; rather, consider them practical, people-centric recommendations designed to spark a healthy debate on how we create AI systems and the agency we should have in that process.

Given that, here are a few questions worth reflecting on. 

What level of autonomy and agency should AI systems have? What level of transparency should be provided, and how should it be delivered? How should these systems relate to, converse with, and engage users?

What design attributes may enable effective, accurate, yet unobtrusive, respectful, intuitive, and transparent intelligent systems? What social and behavioral constructs should underpin people's interactions with AI? What ethical considerations should we prioritize?

We all have a moral and ethical responsibility to engage with how the futures of intelligent systems are being, and will be, shaped. A future enriched and enabled by intelligent yet trustworthy, ethical systems requires careful implementation of guidelines that govern the actions of those in charge of deciding what to design, how, and why, as well as what data to feed into a given system.

I now challenge you to actively contribute to the complex yet exciting task of shaping the present and future of AI systems.

As Intel co-founder Robert Noyce once said, now, go off and do something wonderful. 

It has been a pleasure to share my research with you. We'd like to continue this discussion, so we ask you to engage with us in the comments section. We also encourage you to share your tips, tricks, and best practices for AI development.

Don't forget to like this video and subscribe to the YouTube* channel for Intel® Software. Thank you for watching.
