Responsible AI Research
Intel Labs is researching new ways to responsibly develop, deploy and use artificial intelligence technology, and is working to influence the industry through academic collaborations and alliances.
Responsibly Leveraging AI Capabilities
With the rapid growth of generative AI technologies, societal concerns around AI deployment have grown, from privacy to safety to sustainability. Intel Labs is committed to the responsible advancement of AI technology. Together, we can strive to responsibly build and deploy AI technologies so that our efforts do not use data in unethical ways, discriminate against different populations or harm the environment.
Research Based on Responsible AI Principles
Intel Labs’ research is based on international standards and industry best practices. Internal advisory councils review AI development activities based on Intel's responsible AI principles. These principles guide our key research areas including transparency and explainability, security and safety, misinformation, privacy, human-AI collaboration, and AI sustainability.
Promote Equity and Inclusion
Through equity and inclusion, AI models can better understand and reflect the diversity of the world. Fair algorithms ensure that applications don’t favor one group over another. Bias detection monitors datasets and AI models for potential discrimination, allowing stakeholders to take action to ensure fairness; a minimal sketch of such a check follows the links below.
Read more: Social counterfactuals reduce bias in AI foundational models
Read more: Using large language models as judges to evaluate gender bias
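To make the idea of a bias check concrete, here is a minimal, illustrative Python sketch that computes per-group positive-prediction rates and the gap between them (a simple demographic parity check) for a hypothetical binary classifier. The data, group labels and threshold for concern are placeholders, not part of any Intel Labs tool.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Compute per-group positive-prediction rates and the gap between
    the highest and lowest rate (the demographic parity difference)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred == 1)
        counts[group][1] += 1
    rates = {g: pos / total for g, (pos, total) in counts.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Hypothetical model outputs and group labels, for illustration only.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"]

rates, gap = demographic_parity_gap(preds, groups)
print(rates)                      # {'A': 0.4, 'B': 0.6}
print(f"parity gap: {gap:.2f}")   # a large gap flags the model for review
```

In practice, a check like this would be run across multiple fairness metrics and protected attributes, with any large gaps investigated and mitigated before deployment.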
Design for Privacy
AI models use large amounts of data, so respecting and safeguarding privacy and data rights is important. AI applications should be transparent when collecting any personal data, allowing user choice and control. Products should be designed, developed and deployed with appropriate guardrails to protect personal data.
Read more: Helping artists and content owners protect their data and voices from generative AI
Protect the Environment
Researchers are exploring ways to make AI more sustainable, such as developing hardware and software that accelerate the transition toward a low-carbon, low-waste future as well as creating AI solutions that help tackle environmental issues.
Read more: ClimDetect aids in early detection of climate change signals
Read more: A guide from Intel and the National Renewable Energy Laboratory explains AI energy measurement in data centers
Advance Security, Safety and Reliability
Intel prioritizes security, safety, resistance to tampering and reliability in the development of AI products. A robust AI model should protect data, behave as expected in different situations and perform well consistently.
Read more: Deep dive into securing machine learning pipelines
Read more: LLMart evaluates robustness of GenAI models against attacks
Enable Transparency and Explainability
AI systems and supporting materials should provide developers and users with explanations of system behavior so they can easily understand the AI model’s decision-making process. Revealing training sets, how AI systems were trained and tested, and the results of bias testing helps build trust and ensure fairness; a minimal sketch of such a disclosure record follows the links below.
Read more: LVLM-Interpret explains inner workings of models
Read more: Identify model biases and weaknesses with CLIP-InterpreT
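One common way to put this kind of disclosure into practice is a model card that travels with a released model. The sketch below is a generic, hypothetical illustration using a plain Python dataclass; it does not reflect the format of any specific Intel Labs project.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Minimal transparency record published alongside a model."""
    model_name: str
    intended_use: str
    training_data: str       # description of or pointer to the training set
    evaluation_data: str     # how the model was tested
    metrics: dict = field(default_factory=dict)       # headline eval results
    bias_testing: dict = field(default_factory=dict)  # per-group results
    limitations: str = ""

# Hypothetical values, for illustration only.
card = ModelCard(
    model_name="example-classifier-v1",
    intended_use="Research prototype; not for consequential decisions.",
    training_data="Public benchmark corpus, deduplicated, PII filtered.",
    evaluation_data="Held-out test split plus counterfactual probe set.",
    metrics={"accuracy": 0.91},
    bias_testing={"parity_gap_gender": 0.03, "parity_gap_age": 0.05},
    limitations="Performance degrades on out-of-domain inputs.",
)

print(json.dumps(asdict(card), indent=2))  # publish this with the model
```

Publishing a record like this alongside the model gives developers and users a single place to see what the model was trained on, how it was evaluated and what its known limitations are.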
Responsible AI Collaborations and Research Centers
Intel Labs collaborates with academic and industry partners worldwide on responsible AI research. Together, we can create ethical AI systems and solutions as well as standards and benchmarks to advance the state of the art in AI.
AI Alliance
Technology developers, researchers, industry leaders and advocates collaborating to advance safe, responsible AI rooted in open innovation.
Business Roundtable on Human Rights and AI
Founded by Article One, the group brings together representatives from companies at the forefront of AI development to share common challenges, potential solutions and goals for the future.
Coalition for Secure AI
An open ecosystem of AI and security experts who share best practices for secure AI deployment and collaborate on AI security research and product development.
Intel Center of Excellence on Responsible Human-AI Systems
The European Laboratory for Learning and Intelligent Systems Alicante, DFKI German Research Center for Artificial Intelligence, FZI Research Center for Information Technology, and Leibniz Universität Hannover collaborate on the ethical development of AI.
MLCommons AI Risk & Reliability Working Group
Supports the development of AI risk and reliability tests and benchmarks to guide responsible development, support consumer decision-making and enable policy negotiation.
Open Platform for Enterprise AI
Through the Linux Foundation AI & Data, this sandbox project accelerates secure, cost-effective generative AI deployments for businesses, starting with retrieval-augmented generation.
Partnership on AI
This nonprofit partnership of academic, civil society, industry and media organizations develops tools and solutions to advance positive outcomes in AI for people and society.
Private AI Collaborative Research Institute
Established by Intel, Avast and VMware, the institute’s research focuses on secure, trusted and decentralized analytics and compute at the edge.
Additional AI Resources
Intel Labs AI Research
Intel Labs is shaping the next generation of AI by pioneering advancements that will unlock its true potential.
Human-AI Collaboration
Using multimodal sensing and natural language processing to explore how humans and AI can work together to achieve common goals.
Responsible AI at Intel
Intel remains committed to evolving best methods, principles and tools to ensure responsible practices in product development and deployment.
Responsible AI Principles
These principles serve as a strong foundation for considering the risks associated with AI products and projects.