NIST AI Risk Management Framework (AI RMF) Workshop – January 26th and 28th, 2026 – 6:00 pm to 9:30 pm ET

Course Level: Foundational
Duration: Two Evenings (7 Hours Total)
Delivery: Virtual – Live via Microsoft Teams
Date: January 26th & 28th, 2026 (6:00 PM – 9:30 PM ET each evening)
CPEs: 7

Course Overview

Join ISSA-NOVA for an immersive two-evening workshop designed to equip cybersecurity, privacy, risk, and technology professionals with a comprehensive understanding of the NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0) and its application across modern AI-enabled environments. This program provides a manager-focused exploration of how the AI RMF can be used to identify, assess, measure, and manage AI risks throughout the AI lifecycle, helping organizations implement responsible and trustworthy AI practices.

Participants will gain practical insights into the Framework’s Core functions—Govern, Map, Measure, and Manage—and learn how to apply trustworthiness characteristics, risk mapping methods, and lifecycle-aligned controls. Real-world use cases, ecosystem-wide responsibilities, and AI-specific risk scenarios (such as bias, safety, security, transparency, and emergent behavior) are emphasized throughout.

Led by Jim Wiggins, ISSA-NOVA President and Founder of the Federal IT Security Institute (FITSI), this workshop blends expert instruction, demonstrations, hands-on exercises, and collaborative discussions to help technical and non-technical leaders operationalize the AI RMF within their organizations.

Course Format

A blended learning experience combining:

  • Interactive discussions and analysis of deployment contexts, risk scenarios, and trustworthiness challenges
  • Group activities applying the Govern–Map–Measure–Manage functions
  • Practical exercises building AI RMF Profiles and evaluating AI risk controls

Course Materials Include:

  • Expert lectures
  • Demonstrations
  • Case Studies
  • Online Additional Resources

Learning Objectives

By completing this course, participants will be able to:

  • Understand the foundational components, structure, and purpose of the NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0).
  • Explain AI risk management concepts—including AI-specific risks, lifecycle considerations, and trustworthiness characteristics—and their importance in supporting organizational objectives.
  • Identify and apply the AI RMF Core functions (Govern, Map, Measure, Manage) and their associated categories and subcategories.
  • Develop and use AI RMF Current and Target Profiles to support alignment, comparative analysis, and self-assessment.
  • Assess the trustworthiness of AI systems by applying measurement methods, TEVV practices, and appropriate risk metrics aligned to the Framework.
  • Conduct AI risk mapping by evaluating context, potential impacts, assumptions, affected actors, and system categorizations throughout the AI lifecycle.
  • Integrate AI governance structures, accountability mechanisms, and organizational policies to support responsible and transparent AI practices.
  • Embed AI risk management activities into system design, development, deployment, and monitoring processes across the AI lifecycle.
  • Understand and manage ecosystem-wide roles, responsibilities, and risks by engaging relevant AI actors—including developers, deployers, impacted communities, and third-party entities—throughout the AI system lifecycle.

Table of Contents

Module 0 – Introduction and Course Overview
Module 1 – Introduction to AI Risk and Trustworthiness
Module 2 – Core Components of the NIST AI RMF
Module 3 – Governing AI Systems and Establishing Organizational AI Risk Culture
Module 4 – Mapping AI Risks Across Context, Actors, and Lifecycle
Module 5 – Measuring AI System Risks Using TEVV and Trustworthiness Characteristics
Module 6 – Managing AI Risks, Response Planning, and Continuous Monitoring
Module 7 – Building and Using AI RMF Profiles
Module 8 – AI Lifecycle Integration, Human-AI Interaction, and Ecosystem Responsibilities

Who Should Attend

This workshop is ideal for:

  • Cybersecurity managers and technical team leads
  • AI program managers and solution owners
  • Data scientists and machine learning engineers
  • Federal and defense IT security personnel
  • Security auditors and assessors
  • Governance, Risk, and Compliance (GRC) professionals
  • Technology leaders preparing to implement or evaluate AI systems
  • Privacy and ethics professionals responsible for evaluating AI impacts

Pricing

  • ISSA-NOVA Members: Free
  • Members of Other ISSA Chapters: $50
  • Non-Members: $150

Each participant earns 7 CPEs and receives a certificate of completion upon full attendance.


Registration

  • ISSA-NOVA Members: https://docs.google.com/forms/d/e/1FAIpQLSc8vvmM6tEOPV1CclEaWHbQNCumXV14sIkOF9tx_u9z8yGbNw/viewform?usp=publish-editor
  • Members of Other ISSA Chapters: https://square.link/u/AWFT8BZ7
  • Non-Members: https://square.link/u/QFwHg63v