President Biden Launches Blueprint for an Artificial Intelligence Bill of Rights

NOTICE TO CLIENTS AND FRIENDS

Intellectual Property Department

On October 4, 2022, the White House Office of Science and Technology Policy released the Blueprint for an Artificial Intelligence (“AI”) Bill of Rights. Prompted by the harms that AI deployments have already caused (such as limits on credit opportunities, biased hiring algorithms, and discrimination and privacy violations), the AI Bill of Rights is intended to protect civil rights and promote democratic values in the deployment and governance of automated systems.

Aware of the constant and rapid evolution of AI, the Blueprint uses a two-part test to determine which systems fall within its scope. Hence, the AI Bill of Rights applies to: (1) automated systems that (2) have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services. In essence, the Blueprint for an AI Bill of Rights is a set of five (5) principles and associated practices to help guide the design, use, and deployment of automated systems and to protect the rights of the public.

Safe and effective systems: Automated systems should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system. Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards.

Algorithmic discrimination protections: Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, gender identity and sexual orientation), religion, age, national origin, among others. Depending on the specific circumstances, such algorithmic discrimination may violate legal protections. Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination.

Data privacy: Designers, developers, and deployers of automated systems should seek the public’s permission and respect their decisions regarding the collection, use, access, transfer, and deletion of their data. Any consent requests should be brief, written in plain language, and should give individuals control over data collection and the specific context of use; current hard-to-understand notice-and-choice practices for broad uses of data should be changed.

Notice and explanations: Designers, developers, and deployers of automated systems should provide generally accessible, plain-language documentation, including clear descriptions of the overall system functioning and the role automation plays. Such notice should be kept up to date, and people impacted by the system should be notified of significant changes to use cases or key functionality.

Human alternatives, consideration, and fallback: Individuals should be able to opt out of automated systems in favor of a human alternative, where appropriate. Automated systems with an intended use within sensitive domains, including, but not limited to, criminal justice, employment, education, and health, should additionally be tailored to the purpose, provide meaningful access for oversight, include training for any people interacting with the system, and incorporate human consideration for adverse or high-risk decisions.

The above-mentioned guidelines are non-binding and will be followed on an opt-in basis. The Blueprint does not constitute binding guidance for the public or Federal agencies and therefore does not require compliance with the principles it describes. Sector-specific guidance will likely be needed to govern the use of automated systems in certain settings, such as AI systems used as part of school building security or automated health diagnostic systems.