Imagine a space where children, guided by the cues of a fun and engaging cartoon character, learn by physical interaction with their environment and each other. Or a manufacturing workstation equipped with artificial intelligence that can successfully guide a trainee through a specific task. Or having a fluent conversation with someone who can’t speak or type. These are the fruits of Human-AI collaboration, and a mere sampling of where Intel’s Intelligent Systems Research Lab is taking us.
The realization and even amplification of human potential through collaboration with AI are what the Intelligent Systems Research Lab at Intel is all about. With work that combines the unique cognitive strengths of humans and computers, the Lab is focused on research that will ultimately improve the personal, educational, and professional lives of people. Currently, this research is focused on creating contextually aware experiences that rely on multimodal sensing systems, enabling intelligent machines to anticipate and act on the needs of the humans they support. Whether it is teaching a small child to count in sets or empowering people with disabilities to communicate, Intel is continuously leveraging Human-AI collaboration to empower people to be all they can be.
AI in Assistive Computing
AI opens a world of possibility for people with disabilities, and Intel Labs is dedicated to delivering scalable computing solutions for greater societal impact.
The creation of a communication platform built specifically for the renowned scientist Stephen Hawking remains one of the most widely recognized AI achievements at Intel. However, the research and technology embodied in this achievement continue to evolve. Today, the technology that gave Professor Hawking a tenfold improvement in computing capability is available as a free, open-source platform. The Assistive Context-Aware Toolkit (ACAT) enables users to communicate efficiently with others through keyboard simulation, word prediction, and speech synthesis. It can also be customized for a user's specific disabilities and trained to respond to even the slightest muscle twitch or auditory cue.
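To make the word-prediction idea concrete, here is a deliberately minimal sketch of prefix-based word completion. It is a toy illustration only: the class name, corpus, and frequency-ranking scheme are assumptions for this example, and ACAT's actual prediction models are far more sophisticated.

```python
from collections import Counter


class WordPredictor:
    """Toy prefix-based word completer (illustrative only; not ACAT's
    actual algorithm). Ranks candidate words by corpus frequency."""

    def __init__(self, corpus: str):
        # Count word frequencies in a small training corpus.
        self.counts = Counter(corpus.lower().split())

    def predict(self, prefix: str, k: int = 3) -> list[str]:
        """Return up to k words starting with `prefix`, most frequent first."""
        prefix = prefix.lower()
        candidates = [(w, c) for w, c in self.counts.items()
                      if w.startswith(prefix)]
        # Sort by descending frequency, then alphabetically for stability.
        candidates.sort(key=lambda wc: (-wc[1], wc[0]))
        return [w for w, _ in candidates[:k]]


predictor = WordPredictor(
    "the theory of everything explains the universe the theory"
)
print(predictor.predict("th"))  # → ['the', 'theory']
```

In a real assistive pipeline, a predictor like this sits between the input signal (a switch press or muscle twitch selecting letters) and the speech synthesizer, cutting the number of selections a user must make per word.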
Researchers at Intel continue to be engaged in helping more people overcome their unique challenges with AI-assisted computing. Much of this work is focused on improving the capability of response generation systems, which enable people with severe limitations to interact more fully and efficiently with their environments.
AI in Education
Scientists and educators know that behavioral, emotional, and social engagement all play a vital role in learning, especially for young children. This engagement also serves as an important metric that helps teachers deliver more personalized learning experiences. Large classrooms, however, make it impossible for one teacher to recognize and leverage the engagement of each individual student.
AI has enormous potential for helping teachers understand and facilitate student engagement. The Intelligent Systems Research Lab at Intel is exploring ways to help teachers achieve quality engagement by using a blend of physical and digital learning experiences. Intel researchers have developed Kid Space, a research prototype of an interactive learning environment that uses extensive multimodal sensing and sense-making technologies. Kid Space is designed to engage children both physically and socially with play-based learning, guided by an animated teddy bear that responds to children's behaviors, emotions, and utterances in real time.
The Kid Space smart environment incorporates several audio and video hardware components and AI technologies that enable data collection and analysis. The prototype was deployed and tested with children at an elementary school in Oregon, where results indicated high levels of student engagement with a blend of physical and digital interactions and increased physical activity. Utilizing the user insights from this study, the Intelligent Systems Research Lab has developed an improved version of the Kid Space prototype that incorporates more personalization features and plans to deploy it at the same school in 2022.
AI in Manufacturing
The manufacturing industry spends millions of dollars on worker training and process execution improvements. Intel is currently researching ways to accelerate these processes in factories by providing workers with support tools that incorporate machine learning and AI to help them perform tasks on the factory floor.
Project MARIE, an acronym for Multimodal Activity Recognition in an Industrial Environment, is one way Intel's Intelligent Systems Research Lab expects to help mitigate these issues and is another shining example of human-AI collaboration. It is a model framework for an interactive work environment in which there is two-way communication between a worker performing a task and an intelligent system supporting the worker. The system uses multimodal inputs from a variety of sensors (cameras, RFID tags and readers, audio sensors, and more) as well as natural language processing and advanced analytics to make inferences about task execution and to provide support to the worker in real time. For instance, if the system infers confusion, it can ask the worker questions about what they are doing and eventually learns to guide a worker through the task. Once the research is complete, researchers expect that this technology will be able to efficiently guide a novice worker through a task, while also enabling continuous, two-way learning between the worker and the intelligent system. The potential benefits include facilitation of training, higher-quality manufacturing, and less stressful work scenarios.
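The sense-and-respond loop described above might be sketched, in highly simplified form, as follows. Everything here is an illustrative assumption: the sensor signal names, the fusion weights, and the threshold are invented for this example and are not details of MARIE itself.

```python
from dataclasses import dataclass


@dataclass
class SensorReading:
    """Hypothetical normalized cues in [0, 1], stand-ins for what a real
    system might derive from cameras, RFID readers, and audio sensors."""
    idle_time: float           # fraction of expected step time spent idle
    wrong_tool_pickups: float  # normalized count of off-plan tool pickups
    hesitation_speech: float   # confidence that speech signals hesitation


def infer_confusion(reading: SensorReading, threshold: float = 0.5) -> bool:
    """Fuse multimodal cues into one confusion score via a weighted sum.
    Weights and threshold are illustrative, not published parameters."""
    score = (0.40 * reading.idle_time
             + 0.35 * reading.wrong_tool_pickups
             + 0.25 * reading.hesitation_speech)
    return score > threshold


def support_action(reading: SensorReading) -> str:
    """If confusion is inferred, prompt the worker; otherwise keep observing."""
    if infer_confusion(reading):
        return "Ask the worker what they are trying to do at this step."
    return "Continue monitoring."
```

A production system would replace the hand-set weights with learned models and close the loop in both directions, with the worker's answers refining the system's task model over time, as the two-way learning described above suggests.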
Moving Forward - Responsibly
While Intel continues to leverage AI for the amplification of human potential, we are acutely aware of the responsibility that comes with each breakthrough. Responsible AI means constantly assessing potential risks to privacy and safety throughout the developmental lifecycle of any solution. Technologies must be built on robust and ethical AI that is free of bias.
We also seek collaborative solutions—particularly those that sustain employment while optimizing human and AI potential. This isn't just responsible; it's also sensible. Humans and AI have capabilities that are far more complementary than they are overlapping. Humans excel at learning from very limited data, transferring their knowledge easily to new domains, and making real-time decisions in complex and ambiguous settings, while AI systems excel at processing vast amounts of data quickly and consistently. Therefore, it only makes sense to create solutions that maximize collaboration.
Finally, we take care to involve experts across a wide range of disciplines, from ethnographers to psychologists, to make sure our solutions are inclusive and targeted to the specific needs we want to fulfill.