Responsible AI

Intel has long recognized the ethical and human rights implications of technology development. This is especially true for AI, where we remain committed to evolving the methods, principles, and tools that ensure responsible practices in our product development and deployment.

Responsible AI Research

Intel Labs conducts research in Responsible AI and collaborates with academia to advance the state of the art in privacy, security, human/AI collaboration, fairness and robustness, trusted media, and sustainability. A sample of our publications and engagements appears below.

Security & Privacy

Object Sensing and Cognition for Adversarial Robustness (OSCAR)

Developed in collaboration with Georgia Tech, with support from the DARPA GARD program.

ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector

Written in collaboration with Georgia Tech.

An Open-Source Framework For Federated Learning

Open Federated Learning (OpenFL) is an easy-to-learn, flexible tool for data scientists.
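
Federated learning trains a shared model across institutions without pooling their raw data: each site fits the model locally and only the resulting weights are aggregated. The minimal NumPy sketch below illustrates that federated-averaging loop on a toy least-squares task; the helper names and setup are illustrative assumptions for this page, not OpenFL's actual API.

```python
import numpy as np

def local_update(weights, data, lr=0.01, epochs=1):
    """One client's local training pass (a toy least-squares gradient step)."""
    X, y = data
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Aggregate client models, weighting each by its local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy federation: each "institution" keeps its data local and shares only model weights.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]
global_w = np.zeros(3)

for _ in range(10):
    local_models = [local_update(global_w, data) for data in clients]
    global_w = federated_average(local_models, [len(y) for _, y in clients])

print(global_w)
```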

The Federated Tumor Segmentation (FeTS) Initiative

The largest international federation of healthcare institutions.

Fairness & Transparency

Mitigating Sampling Bias and Improving Robustness in Active Learning

Human-in-the-Loop Learning workshop at the International Conference on Machine Learning (ICML 2021).

Limits and Possibilities for “Ethical AI” in Open Source: A Study of Deepfakes

Exploring transparency and accountability in the open source community.

Uncertainty as a Form of Transparency: Measuring, Using, and Communicating Uncertainty

AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES 2021).

Human/AI Collaboration

Few-shot Prompting Towards Controllable Response Generation (ArXiv pre-print, June 2022)

Written in collaboration with NTU, Taiwan.

Human in the Loop Approaches in Multi-modal Conversational Task Guidance System Development

Search in Conversational AI workshop (SCAI) at COLING 2022.

CueBot: Cue-Controlled Response Generation for Assistive Interaction Usages

Ninth Workshop on Speech and Language Processing for Assistive Technologies (SLPAT-2022), ACL 2022.

Semi-supervised Interactive Intent Labeling

Workshop on Data Science with Human-in-the-loop: Language Advances (DaSH-LA), NAACL 2021.

Trusted Media

FakeCatcher: Detection of Synthetic Portrait Videos Using Biological Signals

IEEE Transactions on Pattern Analysis and Machine Intelligence (2020).
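
FakeCatcher builds on the observation that real faces exhibit subtle blood-flow (photoplethysmography) signals that synthetic videos tend to lack. The sketch below is only a rough illustration of that general idea, not the paper's pipeline: a hypothetical `pulse_signal` helper averages the green channel of cropped face frames and measures how much spectral power falls in a plausible heart-rate band.

```python
import numpy as np

def pulse_signal(face_frames, fps=30.0):
    """
    Crude pulse-like (rPPG) score for a stack of cropped face frames.
    face_frames: array of shape (num_frames, H, W, 3), RGB values in [0, 255].
    Returns the fraction of spectral power in a plausible heart-rate band.
    """
    green = face_frames[..., 1].mean(axis=(1, 2))      # mean green intensity per frame
    green = green - green.mean()                        # remove the DC component
    spectrum = np.abs(np.fft.rfft(green)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)    # frequencies in Hz
    band = (freqs >= 0.7) & (freqs <= 4.0)              # ~42-240 beats per minute
    return spectrum[band].sum() / (spectrum[1:].sum() + 1e-9)

# Toy usage: random noise stands in for frames; a real clip would come from a face tracker.
frames = np.random.default_rng(0).integers(0, 256, size=(300, 64, 64, 3)).astype(float)
print(pulse_signal(frames))  # genuine faces tend to score higher than synthetic ones
```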

How Do the Hearts of Deep Fakes Beat? Deep Fake Source Detection via Interpreting Residuals with Biological Signals

2020 IEEE International Joint Conference on Biometrics (IJCB).

Where Do Deep Fakes Look? Synthetic Face Detection via Gaze Tracking

ACM Symposium on Eye Tracking Research and Applications (ETRA 2021).

Adversarial Deepfake Generation for Detector Misclassification

Tenth Women In Computer Vision Workshop (WiCV) at CVPR 2022.
