
Responsible AI Governance
Maturity Model

About the Maturity Model
 

What should organizations do to govern their AI responsibly?

And how should organizations measure their level of responsibility?
 

The Responsible AI Governance Maturity Model helps organizations answer these questions. 
 

  • The maturity model consists of a questionnaire and scoring guidelines for evaluating the social responsibility of AI governance in AI-enabled organizations.

  • The questionnaire includes nine topics divided into development life-cycle stages so that companies can evaluate their work no matter which stage they’re at.

  • The scoring guidelines help evaluators assess the company’s performance in these nine topics and explain their scoring decisions using concrete information about the company.


The questionnaire and scoring guidelines are based on the NIST AI RMF, one of the most influential AI governance frameworks in the world, and other NIST resources.
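To make the scoring idea concrete, here is a minimal sketch of how per-topic scores might be rolled up into a score per life-cycle stage. The topic names, the 1–5 scale, and the simple averaging rule are illustrative assumptions only, not the model’s actual NIST-based scoring guidelines.

```python
# Hypothetical sketch: roll per-topic scores up into one maturity score
# per development life-cycle stage. Topic names, the 1-5 scale, and the
# averaging rule are illustrative assumptions, not the model's actual
# scoring guidelines.
from statistics import mean

def aggregate_scores(scores_by_topic):
    """Average per-topic scores (1-5) into one score per stage."""
    return {
        stage: round(mean(topic_scores.values()), 2)
        for stage, topic_scores in scores_by_topic.items()
    }

# Example evaluation of a hypothetical company:
example = {
    "design": {"risk mapping": 3, "stakeholder input": 2},
    "deployment": {"monitoring": 4, "incident response": 3},
}
print(aggregate_scores(example))  # {'design': 2.5, 'deployment': 3.5}
```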

You can read more about the model in the full paper and in the report from the Responsible AI Governance Maturity Hackathon, in which diverse stakeholders used it to evaluate companies; the report includes a case study and evaluation examples.



 The rest of the team behind this project:

Borhane Blili-Hamelin, PhD; Jeanna Matthews; Ravi Madhavan; Benny Esparra; Dr. Joshua Scarpino; Carol Anderson; Ric McLaughlin

Get in Touch

Contact us to explore using the maturity model in your organization.

Who the model helps

When asked who the model can help, survey respondents at a hackathon that used the maturity model named the following: all companies, auditors, companies at the beginning of their AI ethics journey, academics, executives, and consumers.
 

  • Companies that develop and use AI products can evaluate where they stand against a leading industry standard and plan improvements.

  • Companies at the beginning of their AI ethics journey can use it to decide how to get started.

  • External stakeholders, such as investors, procurers, and auditors, can use it to evaluate companies they engage with.

Software Engineer

The guidelines can help companies that are implementing, utilizing, or planning to utilize AI products and services to create their own AI ethics blueprint.

AI Ethicist

Auditors and regulatory bodies can utilize the questionnaire as a standardized tool for evaluating companies' AI governance practices. It provides a structured framework for assessing compliance with relevant regulations, standards, and ethical guidelines.

Policy Professional

Small to medium-sized enterprises or startups that may not have the resources to appoint a dedicated person or team for AI governance can significantly benefit from these tools. The questionnaire and guidelines provide a structured approach to evaluating their current AI governance practices, helping these organizations identify areas of strength and weakness.

How the model helps

Filling out the questionnaire benefits the evaluators as well as the companies being evaluated. Evaluators report learning how to evaluate companies, contributing to their own organizations, and upskilling in AI ethics.

Software Engineer

The experience was eye-opening for me. It was very helpful to have a framework to work off of. I often think about the topics that the framework walks you through, but until working with it I did not have a robust way of assessing the topics and a way to ground and level my assessment. I found it very helpful.

Technology Consultant

The framework is a great sounding board for organisations at different phases of their AI journey. It enables them to catch issue[s] early in the system, reducing cost and reputation implications.

Tech Strategist

I think that the AI Governance Maturity Model raises many excellent questions that AI companies and projects should strive to address, and to do so in a requisitely comprehensive way. I value having the Maturity Model as a reference to return to as our project continues to develop, and as a tool to plan for future considerations.

Case Study:

Light-it
 

Light-it is a digital product agency building tailor-made healthcare web and mobile applications. They partner with digital health companies, healthcare innovation centers, and startups to realize technology’s full potential by ideating, designing, and developing custom applications that revolutionize the industry.
 

Light-it is a startup that used the maturity model to evaluate itself. In addition, it set an example for other companies by sharing its experience as part of the Responsible AI Governance Maturity Model Hackathon, in which participants used the framework to evaluate companies.

 

Light-it’s insights can help many other companies in their responsible AI governance journey, so we shared them in the hackathon report.

Adam Mallát
Innovation Manager, Light-it

[Filling out the questionnaire] was the first time that I personally went into so much detail and became so analytical about what we do in the area of ethics.

Javier Lempert
Founder and CEO, Light-it

Bias is one of the main points we are going to tackle in the future that we weren’t… We are now developing tools to allow developers to understand bias.


