NIST provides organizations with AI risk management framework

The National Institute of Standards and Technology (NIST) has released its Artificial Intelligence Risk Management Framework (AI RMF 1.0), aimed at ensuring the trustworthiness and responsible use of artificial intelligence (AI). The framework gives organizations a flexible, structured, and measurable process that can help maximize the benefits of AI technologies while minimizing the risk of negative impacts on individuals, groups, and society.

The framework is part of the agency’s broader ambition to build trust in AI systems, according to Under Secretary for Standards and Technology and NIST Director Laurie E. Locascio.

“The AI Risk Management Framework can help companies and other organizations in any sector and any size to jump-start or enhance their already existing risk management approaches,” Locascio said. “It offers a new way to integrate responsible practices and actionable guidance to operationalize trustworthy and responsible AI. We expect the AI RMF to help drive development of best practices and standards.”

The AI Risk Management Framework is divided into two parts. The first part discusses how organizations can frame risks related to AI. The second part outlines four specific functions (govern, map, measure, and manage) that can be applied in context-specific use cases and at any stage of the AI lifecycle.
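To make the structure concrete, here is a minimal sketch of how an organization might track its own coverage of the four functions for a given AI system. The function names come from the framework itself, but the class names, checklist items, and example activities below are hypothetical illustrations, not guidance taken from NIST's publication.

```python
# Illustrative sketch only: the four function names come from the AI RMF,
# but the checklist structure and example activities are hypothetical
# and are not drawn from NIST's publication.
from dataclasses import dataclass, field
from enum import Enum


class RmfFunction(Enum):
    GOVERN = "govern"    # policies, accountability, risk culture
    MAP = "map"          # establish context, identify risks
    MEASURE = "measure"  # analyze, assess, and track risks
    MANAGE = "manage"    # prioritize and act on identified risks


@dataclass
class RiskActivity:
    function: RmfFunction
    description: str
    completed: bool = False


@dataclass
class AiSystemRiskProfile:
    system_name: str
    activities: list[RiskActivity] = field(default_factory=list)

    def coverage(self) -> dict[RmfFunction, float]:
        """Fraction of recorded activities completed, per function."""
        result = {}
        for fn in RmfFunction:
            items = [a for a in self.activities if a.function is fn]
            result[fn] = (
                sum(a.completed for a in items) / len(items) if items else 0.0
            )
        return result


if __name__ == "__main__":
    # Hypothetical example system and activities, for illustration only.
    profile = AiSystemRiskProfile("example-credit-scoring-model")
    profile.activities.append(
        RiskActivity(RmfFunction.MAP, "Document intended use and affected groups", True)
    )
    profile.activities.append(
        RiskActivity(RmfFunction.MEASURE, "Evaluate model for disparate error rates")
    )
    for fn, frac in profile.coverage().items():
        print(f"{fn.value}: {frac:.0%} of tracked activities completed")
```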

NIST says the framework results from 18 months of collaboration with over 240 organizations from the private and public sectors. NIST also released a voluntary AI RMF Playbook, which offers ways to navigate and use the framework.

In addition, NIST plans to unveil a Trustworthy and Responsible AI Resource Center that will provide support in putting the AI RMF 1.0 into practice.

To ensure the framework remains relevant, NIST says it is actively working with members of the AI community to review and update it, and is asking for feedback on adjustments or additions to the resource. Comments received by February 2023 will be incorporated into an updated version to be released in early spring 2023.

“This voluntary framework will help develop and deploy AI technologies in ways that enable the United States, other nations and organizations to enhance AI trustworthiness while managing risks based on our democratic values,” stated Deputy Commerce Secretary Don Graves.

Last year, NIST unveiled a blueprint to bolster server platform security, data protection, and edge computing for cloud data centers with hardware-based security approaches.
