Description
Intel Labs researchers developed an approach to reducing bias in AI foundation models using social counterfactuals. They created a dataset of synthetic images that systematically varies intersectional social attributes, allowing them to isolate and study the effect of each attribute on model behavior. This work is part of Intel's commitment to Responsible AI, which aims to ensure that AI models are accurate, grounded in authoritative sources, and free from harmful biases. The researchers have also open-sourced the dataset to help improve AI fairness across the industry.