
Responsible AI Research

Intel Labs is researching new ways to responsibly develop, deploy and use artificial intelligence technology, including influencing the industry through academic collaborations and alliances.

Responsibly Leveraging AI Capabilities

With the rapid growth of generative AI technologies, societal concerns about AI deployment have grown, from privacy to safety to sustainability. Intel Labs is committed to the responsible advancement of AI technology. Together, we can strive to build and deploy AI technologies that do not use data in unethical ways, discriminate against populations, or harm the environment.

Research Based on Responsible AI Principles

Intel Labs’ research is based on international standards and industry best practices. Internal advisory councils review AI development activities based on Intel's responsible AI principles. These principles guide our key research areas including transparency and explainability, security and safety, misinformation, privacy, human-AI collaboration, and AI sustainability.

  • Equity and Inclusion
  • Privacy
  • Protect the Environment
  • Safety, Security, and Reliability
  • Transparency and Explainability

Promote Equity and Inclusion

Through equity and inclusion, AI models can better understand and reflect the diversity of the world. Fair algorithms ensure that applications don’t favor one group over another. Bias detection monitors datasets and AI models for potential discrimination, allowing stakeholders to take action to ensure fairness.

Read more: Social counterfactuals reduce bias in AI foundational models

Read more: Using large language models as judges to evaluate gender bias
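The bias-detection idea above can be made concrete with a small sketch. The code below computes a demographic-parity gap, one common fairness metric: the largest difference in positive-decision rate between groups. The function names and the loan-approval data are hypothetical illustrations, not part of any Intel tool.

```python
# Illustrative sketch: a demographic-parity check over model decisions.
# Function names and data are hypothetical, for demonstration only.

def selection_rates(decisions, groups):
    """Fraction of positive decisions (1s) per group."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(decisions[i] for i in idx) / len(idx)
    return rates

def demographic_parity_gap(decisions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval decisions (1 = approved) for two groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
# Group A is approved at 3/4, group B at 1/4, so the gap is 0.5.
```

In practice a nonzero gap is a signal for stakeholders to investigate, not an automatic verdict; production toolkits add confidence intervals and additional metrics.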


Design for Privacy

AI models use large amounts of data, so respecting and safeguarding privacy and data rights is important. AI applications should be transparent when collecting any personal data, allowing user choice and control. Products should be designed, developed and deployed with appropriate guardrails to protect personal data.

Read more: Helping artists and content owners protect their data and voices from generative AI

Protect the Environment

Researchers are exploring ways to make AI more sustainable, such as developing hardware and software that accelerate the transition toward a low-carbon, low-waste future as well as creating AI solutions that help tackle environmental issues.

Read more: ClimDetect aids in early detection of climate change signals

Read more: A guide from Intel and the National Renewable Energy Laboratory explains AI energy measurement in data centers


Advance Security, Safety and Reliability

Intel prioritizes security, safety, resistance to tampering and reliability in the development of AI products. A robust AI model should protect data, behave as expected in different situations and perform well consistently.

Read more: Deep dive into securing machine learning pipelines

Read more: LLMart evaluates robustness of GenAI models against attacks

Enable Transparency and Explainability

AI systems and supporting materials should provide developers and users with explanations of system behavior so they can easily understand the AI model’s decision-making process. Revealing training sets, how AI systems were trained and tested, and the results of bias testing helps build trust and ensure fairness.

Read more: LVLM-Interpret explains inner workings of models

Read more: Identify model biases and weaknesses with CLIP-InterpreT
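One simple way to explain a model’s decision-making, in the spirit of the section above, is occlusion: zero out each input feature and measure how much the model’s score changes. The toy linear model and names below are hypothetical illustrations, not the methods used by the tools linked above.

```python
# Illustrative sketch of occlusion-based explanation: the score drop when a
# feature is zeroed out indicates how much that feature influenced the output.
# The "model" here is a hypothetical toy scoring function.

def model_score(features):
    # Stand-in for a trained model: a fixed linear score.
    weights = [0.5, -0.2, 0.8]
    return sum(w * x for w, x in zip(weights, features))

def occlusion_attributions(features):
    """Per-feature score change when that feature is occluded (set to 0)."""
    base = model_score(features)
    attributions = []
    for i in range(len(features)):
        occluded = list(features)
        occluded[i] = 0.0
        attributions.append(base - model_score(occluded))
    return attributions

attrs = occlusion_attributions([1.0, 1.0, 1.0])
# The third feature has the largest attribution, so it drives the score most.
```

Real explainability tools apply the same idea at scale, occluding image patches or input tokens rather than single scalar features.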

Responsible AI Collaborations and Research Centers

Intel Labs collaborates with academic and industry partners worldwide on responsible AI research. Together, we can create ethical AI systems and solutions as well as standards and benchmarks to advance the state of the art in AI.

AI Alliance

A community of technology developers, researchers, industry leaders and advocates who collaborate to advance safe, responsible AI rooted in open innovation.

Business Roundtable on Human Rights and AI

Founded by Article One, the group brings together representatives from companies at the forefront of AI development to share common challenges, potential solutions and goals for the future.

Coalition for Secure AI

An open ecosystem in which AI and security experts share best practices for secure AI deployment and collaborate on AI security research and product development.

Intel Center of Excellence on Responsible Human-AI Systems

The European Laboratory for Learning and Intelligent Systems Alicante, DFKI German Research Center for Artificial Intelligence, FZI Research Center for Information Technology, and Leibniz Universität Hannover collaborate on the ethical development of AI.

ML Commons AI Risk & Reliability Working Group

Supports development of AI risk and reliability tests and benchmarks to guide responsible development, support consumer decision making and enable policy negotiation.

Open Platform for Enterprise AI

Through the Linux Foundation AI & Data, this sandbox project accelerates secure, cost-effective generative AI deployments for businesses, starting with retrieval-augmented generation.

Partnership on AI

This nonprofit partnership of academic, civil society, industry and media organizations develops tools and solutions to advance positive outcomes in AI for people and society.

Private AI Collaborative Research Institute

Established by Intel, Avast and VMware, the institute’s research focuses on secure, trusted and decentralized analytics and compute at the edge.


Additional AI Resources

Intel Labs AI Research

Intel Labs is shaping the next generation of AI by pioneering advancements that will unlock its true potential.

Learn more

Human-AI Collaboration

Using multimodal sensing and natural language processing to explore how humans and AI can work together to achieve common goals.

Learn more

Responsible AI at Intel

Intel remains committed to evolving best methods, principles and tools to ensure responsible practices in product development and deployment.

Learn more

Responsible AI Principles

These principles serve as a strong foundation for considering the risks associated with AI products and projects.

Learn more
