Mechanisms exist to regularly assess the effectiveness of existing controls, including reports of errors and potential impacts on affected communities.
Mechanisms exist to identify and document unmeasurable risks or trustworthiness characteristics.
Mechanisms exist to gather and assess feedback about the efficacy of Artificial Intelligence (AI) and Autonomous Technologies (AAT)-related measurements.
Mechanisms exist to utilize input from domain experts and relevant stakeholders to validate whether the Artificial Intelligence (AI) and Autonomous Technologies (AAT) perform consistently, as intended.
Mechanisms exist to evaluate performance improvements or declines with domain experts and relevant stakeholders to define context-relevant risks and trustworthiness issues.
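One way such an evaluation mechanism might be implemented is an automated comparison of current evaluation metrics against an agreed baseline, with significant shifts flagged for review by domain experts and relevant stakeholders. This is a minimal sketch only; the metric names, baseline values, and 5% threshold below are illustrative assumptions, not values prescribed by the control.

```python
# Illustrative sketch: flag metric shifts that warrant expert review.
# Metric names, baseline values, and the 5% threshold are assumptions.

BASELINE = {"accuracy": 0.91, "false_positive_rate": 0.04}
THRESHOLD = 0.05  # relative change that triggers stakeholder review


def flag_metric_shifts(current: dict[str, float]) -> list[str]:
    """Return findings for metrics that moved beyond THRESHOLD."""
    findings = []
    for name, baseline in BASELINE.items():
        observed = current.get(name)
        if observed is None:
            findings.append(f"{name}: no current measurement recorded")
            continue
        relative_change = (observed - baseline) / baseline
        if abs(relative_change) > THRESHOLD:
            findings.append(
                f"{name}: shifted by {relative_change:+.1%} vs. baseline"
            )
    return findings


if __name__ == "__main__":
    for finding in flag_metric_shifts({"accuracy": 0.84,
                                       "false_positive_rate": 0.04}):
        print("REVIEW:", finding)
```

Whether a flagged shift is an improvement or a decline is a contextual judgment, which is why the sketch routes findings to human review rather than deciding automatically.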
Mechanisms exist to validate the information sources and quality of pre-trained models used in Artificial Intelligence (AI) and Autonomous Technologies (AAT) training, maintenance and improvement-related activities.
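A minimal sketch of one such validation step, assuming the organization keeps a manifest of trusted SHA-256 digests recorded when each pre-trained model artifact was acquired. The manifest contents, file name, and digest value here are hypothetical placeholders.

```python
# Illustrative sketch: verify a pre-trained model artifact against a
# trusted digest before it enters training or maintenance pipelines.
# The manifest entries and file path are hypothetical examples.

import hashlib
from pathlib import Path

TRUSTED_DIGESTS = {
    # artifact file name -> SHA-256 digest recorded at acquisition time
    "base-model-v1.bin": "9f2c...replace-with-recorded-digest...",
}


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large model files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_artifact(path: Path) -> bool:
    """Return True only if the digest matches the trusted manifest."""
    recorded = TRUSTED_DIGESTS.get(path.name)
    if recorded is None:
        return False  # unknown artifact: fail closed
    return sha256_of(path) == recorded


if __name__ == "__main__":
    artifact = Path("base-model-v1.bin")
    if not verify_artifact(artifact):
        raise SystemExit(f"Provenance check failed for {artifact}")
```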
Mechanisms exist to proactively prevent harm by regularly identifying and tracking existing, unanticipated and emergent Artificial Intelligence (AI) and Autonomous Technologies (AAT)-related risks.
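Tracking existing, unanticipated and emergent risks in a structured register is one way to make this control auditable. The field names and category labels below are a hypothetical minimum, not a mandated schema.

```python
# Illustrative sketch: a structured register for known and emergent
# AI/AAT risks. Fields and categories are assumptions, not a schema
# required by the control.

from dataclasses import dataclass, field
from datetime import date


@dataclass
class RiskEntry:
    identifier: str
    description: str
    category: str            # e.g. "existing", "unanticipated", "emergent"
    severity: str            # e.g. "low", "medium", "high"
    identified_on: date
    mitigations: list[str] = field(default_factory=list)
    status: str = "open"


class RiskRegister:
    def __init__(self) -> None:
        self._entries: dict[str, RiskEntry] = {}

    def record(self, entry: RiskEntry) -> None:
        """Add or update a risk; updates preserve the identifier."""
        self._entries[entry.identifier] = entry

    def open_emergent_risks(self) -> list[RiskEntry]:
        """Risks first observed in operation rather than during design."""
        return [e for e in self._entries.values()
                if e.category == "emergent" and e.status == "open"]
```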
Mechanisms exist to protect human subjects from harm.
Mechanisms exist to assess and document the environmental impacts and sustainability of Artificial Intelligence (AI) and Autonomous Technologies (AAT).
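As a rough sketch of how such an assessment might start, energy use can be approximated from accelerator count, average power draw, and runtime, then converted to emissions with a grid factor. Every number below is a placeholder assumption; a real assessment should substitute measured power data and the applicable regional emissions factor.

```python
# Illustrative sketch: back-of-the-envelope energy and CO2e estimate
# for a training run. Accelerator count, power draw, runtime, and the
# grid emissions factor are placeholder assumptions, not measurements.

GPU_COUNT = 8
AVG_POWER_WATTS = 300.0       # assumed average draw per accelerator
RUNTIME_HOURS = 72.0          # assumed wall-clock training time
GRID_KG_CO2_PER_KWH = 0.4     # assumed regional emissions factor

energy_kwh = GPU_COUNT * AVG_POWER_WATTS * RUNTIME_HOURS / 1000.0
emissions_kg = energy_kwh * GRID_KG_CO2_PER_KWH

print(f"Estimated energy: {energy_kwh:,.0f} kWh")
print(f"Estimated emissions: {emissions_kg:,.0f} kg CO2e")
```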
Mechanisms exist to respond to and recover from a previously unknown Artificial Intelligence (AI) and Autonomous Technologies (AAT)-related risk when it is identified.