AI Governance and Assurance
I help organizations build practical AI governance programs that satisfy multiple regulatory and standards obligations at once. Current focus areas: ISO/IEC 42001 readiness, AI policy and procedure development, and cross-mapping AI work to overlapping requirements (NIST AI RMF, ONC HTI-1 § 170.315(b)(11) Predictive DSI).
The starting observation: most organizations adopting AI need a governance baseline before they need AI testing. ISO/IEC 42001 is the practical anchor. It’s the recognized AI Management System standard, and a sound 42001 implementation creates the policy and process structure that downstream NIST AI RMF and ONC DSI obligations can lean on.
What I help with
ISO/IEC 42001 readiness
- AIMS scope definition and boundary documentation
- Gap analysis against the 42001 control set and management system clauses
- AI policy development and customization
- Risk evaluation, control selection, control design, and control implementation documentation
- Training and awareness program design
- AIMS internal audit procedure and management review process design
- Nonconformity tracking and corrective action workflows
AI policy and procedure development
- AI usage policy, data handling policy, model lifecycle policy
- Procedures for risk evaluation, AI impact assessment, third-party AI relationships, and human oversight
- Templates aligned with ISO/IEC 42001 Annex A controls and NIST AI RMF functions
Cross-mapping to overlapping frameworks
I’ve built a mapping that ties Singapore’s AI Verify Testing Framework (88 outcomes across 11 governance dimensions) to:
- ISO/IEC 42001 controls A.2 through A.10 (policies, internal organization, impact assessment, AI development guidance, system requirements, data, information for interested parties, AI use, third-party relationships)
- NIST AI RMF 1.0 functions: GOVERN, MAP, MEASURE, MANAGE
- ONC HTI-1 Predictive DSI source attributes per § 170.315(b)(11)
For organizations implementing AI in healthcare, this means a single governance program can satisfy 42001, NIST AI RMF, and ONC’s Decision Support Intervention requirements rather than running three separate workstreams.
The mapping is operationalized as a working plugin I built on the AI Verify v2 toolkit. I run the assessment as part of advisory engagements, producing an interactive process checklist across the 11 principles plus Organisational Considerations, with a per-principle evidence package exportable to Excel.
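At its core, the cross-mapping is a many-to-many lookup: each AI Verify outcome points at the ISO/IEC 42001 Annex A controls, NIST AI RMF functions, and ONC (b)(11) source attributes it can evidence. A minimal sketch of that structure follows; the outcome text, control reference, and attribute name in the example row are illustrative placeholders, not official identifiers from any of the three frameworks.

```python
from dataclasses import dataclass, field


@dataclass
class CrossMapping:
    """One AI Verify outcome linked to overlapping framework requirements.

    All identifiers in the example below are illustrative, not official IDs.
    """
    ai_verify_outcome: str
    iso_42001_controls: list[str] = field(default_factory=list)  # Annex A refs
    nist_rmf_functions: list[str] = field(default_factory=list)  # GOVERN/MAP/MEASURE/MANAGE
    onc_b11_attributes: list[str] = field(default_factory=list)  # § 170.315(b)(11)


# Hypothetical example row: one transparency outcome whose evidence
# package addresses all three frameworks at once.
row = CrossMapping(
    ai_verify_outcome="Transparency: disclose AI use to affected users",
    iso_42001_controls=["A.8 Information for interested parties"],
    nist_rmf_functions=["GOVERN", "MAP"],
    onc_b11_attributes=["Intended use of the intervention"],
)


def frameworks_covered(m: CrossMapping) -> int:
    """Count how many frameworks a single evidence artifact addresses."""
    return sum(bool(x) for x in (m.iso_42001_controls,
                                 m.nist_rmf_functions,
                                 m.onc_b11_attributes))


print(frameworks_covered(row))  # → 3
```

This is the data-structure view behind the "one set of artifacts instead of three" claim: a single evidence package is filed once and resolved against each framework through the mapping.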
NIST AI RMF profile development
- Profile development across the four AI RMF functions:
  - GOVERN. Policies, accountability, and organizational culture for AI risk.
  - MAP. AI system context, intended use, foreseeable misuse, risk and benefit categorization.
  - MEASURE. Methods, metrics, tracking, feedback loops.
  - MANAGE. Prioritization, mitigation, third-party risk, documentation.
- Function-by-function readiness review
- Tabletop exercise of the profile against realistic AI risk scenarios
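A function-by-function readiness review boils down to comparing current against target state per RMF function. The sketch below shows one illustrative way to record that; the item wording and state labels are my own placeholders, not NIST's profile schema.

```python
# Illustrative AI RMF profile entries: current vs. target state per function.
# Item text paraphrases RMF themes; these are not official category IDs.
PROFILE = {
    "GOVERN":  {"item": "AI risk policy approved with a named owner",
                "current": "draft",   "target": "approved"},
    "MAP":     {"item": "AI inventory with intended use and foreseeable misuse",
                "current": "partial", "target": "complete"},
    "MEASURE": {"item": "Metrics and tracking for identified AI risks",
                "current": "none",    "target": "quarterly"},
    "MANAGE":  {"item": "Prioritized mitigation plan incl. third-party risk",
                "current": "none",    "target": "in place"},
}


def readiness_gaps(profile: dict) -> list[str]:
    """List functions whose current state has not reached the target."""
    return [fn for fn, row in profile.items() if row["current"] != row["target"]]


print(readiness_gaps(PROFILE))  # → ['GOVERN', 'MAP', 'MEASURE', 'MANAGE']
```

The gap list is what feeds the tabletop exercise: each open gap becomes a scenario question for the relevant function.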
AI testing and validation
Testing is a tool in the assurance kit, not the starting point. It’s most useful when an organization has a governance baseline in place and needs evidence about a specific model’s behavior.
- Performance and reliability. Accuracy, precision, recall across stratified subsets; robustness to data drift and adversarial inputs; calibration on classification and regression models.
- Fairness. Subgroup performance comparison; disparate impact assessment; outcome equity across demographic and clinical strata.
- Explainability. Feature importance analysis; local explanations; model behavior documentation for clinical or regulatory review.
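The subgroup comparison and disparate impact checks above reduce to simple computations once predictions are stratified. A minimal sketch on synthetic binary labels follows; the function names and the two strata are illustrative, not part of any toolkit.

```python
from collections import defaultdict


def subgroup_recall(y_true, y_pred, groups):
    """Recall (sensitivity) computed separately for each subgroup."""
    tp = defaultdict(int)  # true positives per group
    fn = defaultdict(int)  # false negatives per group
    for t, p, g in zip(y_true, y_pred, groups):
        if t == 1:
            if p == 1:
                tp[g] += 1
            else:
                fn[g] += 1
    return {g: tp[g] / (tp[g] + fn[g])
            for g in set(groups) if tp[g] + fn[g] > 0}


def disparate_impact(y_pred, groups, privileged):
    """Ratio of positive-prediction rates: each group vs. the privileged one.

    The common 'four-fifths rule' flags ratios below 0.8.
    """
    rate = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for p, g in zip(y_pred, groups):
        rate[g][0] += p
        rate[g][1] += 1
    priv = rate[privileged][0] / rate[privileged][1]
    return {g: (n / d) / priv for g, (n, d) in rate.items() if g != privileged}


# Synthetic example: two demographic strata, binary outcome.
y_true = [1, 1, 0, 1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 1, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(subgroup_recall(y_true, y_pred, groups))          # per-group recall
print(disparate_impact(y_pred, groups, privileged="A"))  # selection-rate ratio
```

In practice the same loop runs over clinical strata as well as demographic ones, and the resulting per-group tables become the evidence artifacts cited under the relevant 42001 control or (b)(11) source attribute.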
When testing makes sense:
- An ONC certification touches Predictive DSI and you need source attribute evidence for § 170.315(b)(11)
- An ISO/IEC 42001 AIMS Annex A control requires periodic measurement
- A customer or auditor is asking for fairness, robustness, or explainability evidence on a specific model
When governance comes first: if there’s no AI inventory or risk register yet, or no policies in place to act on results, testing produces reports that sit unused.
What’s at stake
When this works. Customer trust signals are in place before procurement asks for them. Enterprise deals don’t stall on the third question from a security review. ISO 42001 readiness, NIST AI RMF mapping, and the (b)(11) Source Attributes share one set of artifacts instead of three.
When this doesn’t get done. An AI feature ships without governance documentation, and a procurement team flags it during diligence on your next funding round or enterprise deal. A customer security questionnaire arrives asking about NIST AI RMF posture, and the answer takes weeks because nothing is written down.
Background
- Credited contributor to the Coalition for Health AI (CHAI) Responsible AI Guide (Appendix 3: Privacy and Cybersecurity Profile); authored Appendix 1’s Clinical Operations and Administration use case (Prior Authorization with Medical Coding)
- Member of the NIST AI Safety Institute Consortium (AISIC) and CHAI working groups
- Built an AI Verify v2 plugin that operationalizes the cross-mapping across ISO/IEC 42001, NIST AI RMF, and ONC HTI-1 DSI for use in advisory engagements
Related practice areas
AI governance work increasingly overlaps with adjacent tracks. If your product is ONC-certified and uses Predictive DSI, the ONC certification page covers § 170.315(b)(11) Source Attributes and HTI-5 transition. If customers are asking for HITRUST attestations alongside AI assurance, the HITRUST advisory page covers the assessment work.
Get started
Most AI governance engagements begin with a gap analysis scoped to whichever framework is the immediate driver: a customer asking about ISO 42001, an ONC certification touching DSI, an internal AI risk review. From there the work is policy development, control implementation, and process documentation.
Schedule a call to discuss your AI governance program.