Intel advocates for risk-based, principles-driven, and globally aligned AI regulations.
Intel supports a regulatory and policy environment that facilitates the responsible adoption of Artificial Intelligence (AI). Intel advocates for risk- and principles-based AI policy measures that reduce compliance burdens to what is strictly necessary and leverage internationally accepted standards.
Intel is committed to advancing AI technology responsibly. We do this by applying rigorous, multidisciplinary review processes throughout our product development lifecycle, building diverse development teams to reduce bias, and collaborating with industry partners to mitigate potentially harmful uses of AI. Intel encourages AI policy measures that enable an open ecosystem and consider context- and sector-specific use cases and applications rather than horizontal requirements. We believe that an open AI ecosystem drives accessibility for all actors in the AI value chain and promotes a level playing field where innovation can thrive. Additionally, Intel strongly recommends that industry stakeholders adopt responsible AI principles or guidelines that enable human oversight of AI systems and address risks related to human rights, transparency, equity and inclusion, safety, security, reliability, and data privacy.
Intel supports AI accountability practices that promote ethical behavior and shares information with other organizations to help ensure AI systems are responsibly developed and deployed. Using a risk-based approach, organizations should implement processes to evaluate and address potential impacts and risks associated with the use, development, and deployment of AI systems. While numerous existing laws and regulations already apply to the deployment and use of AI technology, such as privacy and consumer financial laws, new rules may need to be created where gaps exist.
Trust and Safety
Intel supports a risk-based, multi-stakeholder approach to trustworthy AI informed by international standards (e.g., ISO/IEC) and frameworks such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework. These provide key guidance on the requirements underpinning the trust and safety of AI, such as data governance, transparency, accuracy, robustness, and bias. Regulatory agencies should also evaluate the use and impact of AI in relevant, specific use cases to clarify how existing laws apply to AI and how AI can be used in compliance with those laws. If necessary, regulatory agencies may develop appropriate additional requirements in collaboration with industry and other stakeholders.
Generative AI describes the algorithms used to create new data that can resemble human-generated content, including audio, code, images, text, simulations, and videos. This technology is trained on existing content and data, enabling applications such as natural language processing, computer vision, the metaverse, and speech synthesis. As generative AI continues to improve, reliable access to verifiable, trustworthy information, including certainty that a particular piece of media is genuinely from the claimed source, will become increasingly critical in our society. Technology is likely to play an important role in meeting that need. Intel is working to mitigate risks and build trust by developing algorithms and architectures to determine whether content has been manipulated using AI techniques. Intel Labs’ trusted media research team investigates new approaches to help determine the authenticity of media content; our research areas include using AI and other methods for deepfake detection, deepfake source detection, and media authentication.
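To make the media-authentication idea above concrete: one common building block is cryptographically binding a piece of media to its claimed source, so that any later manipulation is detectable. The sketch below is illustrative only, not a description of Intel's methods; it uses a shared-secret HMAC for brevity, whereas real provenance schemes (for example, C2PA-style content credentials) use public-key signatures embedded in media metadata. The key and function names are hypothetical.

```python
import hashlib
import hmac

# Hypothetical signing key for illustration; production provenance systems
# would use asymmetric key pairs, not a shared secret.
SIGNING_KEY = b"example-signing-key"

def sign_media(content: bytes) -> str:
    """Produce an HMAC-SHA256 tag binding the content to the signer."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify_media(content: bytes, tag: str) -> bool:
    """Return True only if the content is byte-identical to what was signed."""
    expected = hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels during comparison.
    return hmac.compare_digest(expected, tag)

# A signed file verifies; any manipulation, however small, fails.
original = b"\x89PNG...raw image bytes..."
tag = sign_media(original)
assert verify_media(original, tag)
assert not verify_media(original + b"\x00", tag)
```

Signature-based provenance complements, rather than replaces, detection research: it can vouch for media that carries valid credentials, while AI-based deepfake detection addresses media that carries none.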
Intel plays a vital role in AI. Intel’s products, both hardware and software, help to solve today’s most complex challenges. For example, we accelerate research and patient outcomes in healthcare and life sciences with faster, more accurate analysis across precision medicine, medical imaging, lab automation, and more. In manufacturing, we transform data into insights that help our customers optimize plant performance, minimize downtime, improve safety, and drive profitability. In research, we work with academics around the world to address global challenges with AI innovations, from climate science to drug discovery and many others. Intel is committed to advancing AI technology responsibly and contributing to the development of principles, international standards, best practices, methods, tools, and solutions to enable a more responsible, inclusive, and sustainable future.