Responsible AI Governance
About this Project
This project creates a maturity model for AI governance, based on the NIST AI Risk Management Framework. The model includes a questionnaire and scoring guidelines.
The project comes against the backdrop of repeated calls from researchers, government bodies, and organizations to shift the conversation in AI ethics from general principles to tangible, operationalizable practices for mitigating the sociotechnical harms of AI. Frameworks like the NIST AI RMF embody an emerging consensus on recommended practices for operationalizing sociotechnical harm mitigation. However, private-sector organizations lag far behind this consensus. Implementation is sporadic and selective at best. At worst, it is ineffective and risks serving as a misleading veneer of trustworthy process, lending an appearance of legitimacy to substantively harmful practices.
We are developing a practical framework for evaluating where organizations stand relative to this emerging consensus on best practices for sociotechnical harm mitigation: a flexible maturity model based on the NIST AI RMF.
Join my newsletter for tech ethics resources.
I will never use your email for anything else.