Real-World Challenges to Regulating Artificial Intelligence

By Dennis D. McDonald

US Efforts

Here in the US, government efforts are underway to regulate artificial intelligence applications. Foremost among them are efforts initiated by President Biden’s October 30, 2023, Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.

One agency responsible for developing such regulations is NIST, the National Institute of Standards and Technology, part of the U.S. Department of Commerce. Back in September 2023 I provided comments on NIST’s “concept note” on AI risk management, but even then I could tell that government efforts to actually regulate AI would be problematic (e.g., having to play catch-up with rapid changes in technology and aggressive experimentation by industry) and would be influenced, both politically and financially, by AI vendors and others with a vested interest in AI applications. (As a consultant, of course, I have a “vested interest” of my own!)

Lots of Moving Parts

I do wonder how a body such as NIST can address constantly changing technologies and industry innovations while ensuring that all relevant stakeholders are represented in the regulatory process. The percolation of NIST-generated rules into, say, US government procurement practices and regulations will be complex, simply because of the complexity of government procurement and the number of moving (and changing) parts that will need to be touched.

Process Transparency

I also have questions about how process transparency will be regulated. At least two questions must be addressed: (1) transparency of what? and (2) transparency to whom?

Regarding (1), which aspects of AI will be regulated:

  • How the software is developed

  • How the software is trained

  • How the software is marketed, used, and monitored

Issues of proprietary technology will have to be addressed, along with issues of data ownership and privacy with respect to how AI models are trained.

How the Sausage Is Made

Regarding (2), how much do users and regulators need to see and understand about “how the sausage is made”? Answering this question depends on how AI usage is governed and managed internally by the organizations that employ AI technologies, and the jury is still out on that.

Just as important, how effectively an organization governs its usage of AI is related to how it governs its data, and that can be problematic as well (see, for example, Towards Unified Governance of AI, Data, and Cybersecurity Initiatives).

AISI & AISIC

To address such issues, NIST’s newly established U.S. AI Safety Institute (AISI) and its Artificial Intelligence Safety Institute Consortium (AISIC) are ramping up. The list of technical expertise topics that Consortium members are expected to contribute (see the appendix below) is imposing.

It’s About Data

I am pleased to see that data and data documentation head the list. As noted above, data governance is closely tied to both cybersecurity governance and artificial intelligence governance, but data governance in real-world organizations is already a complicated and challenging topic. While I suggested in my above-cited Towards Unified Governance article that a cross-functional PMO-type organization (Project Management Office) might be one approach to governing an organization’s data, the reality is that organizational politics can militate against a truly unified approach to governing how an organization’s data are generated, managed, and used. Add in the possible need to access externally governed data resources as part of an AI-based initiative and that complexity multiplies.

Follow the AI Governance Lifecycle

While it’s easy to recommend a “unified approach” to governing data, AI, and cybersecurity, the on-the-ground complexity of regulating how AI will be managed and used will be immense. It may be best to start with a few high-priority use cases and follow the regulatory processes through the entire AI governance lifecycle before promulgating AI-related regulations.

Copyright © 2024 by Dennis D. McDonald. The author appreciates having communicated via LinkedIn with Tino Merianos prior to writing this article. Also, the graphic at the top of the page took several prompts to generate; the first couple of iterations included only men in the picture, hence the addition of “men and women” in the final prompt submitted to Copilot.

Appendix

The list below was copied on 4/5/24 from the “AISIC Members” page of the U.S. Artificial Intelligence Safety Institute website:

“Consortium members will be expected to contribute technical expertise in one or more of the following areas:

  • Data and data documentation

  • AI Metrology

  • AI Governance

  • AI Safety

  • Trustworthy AI

  • Responsible AI

  • AI system design and development

  • AI system deployment

  • AI Red Teaming

  • Human-AI Teaming and Interaction

  • Test, Evaluation, Validation and Verification methodologies

  • Socio-technical methodologies

  • AI Fairness

  • AI Explainability and Interpretability

  • Workforce skills

  • Psychometrics

  • Economic analysis

  • Models, data and/or products to support and demonstrate pathways to enable safe and trustworthy artificial intelligence (AI) systems through the NIST AI Risk Management Framework

  • Infrastructure support for consortium projects

  • Facility space and hosting consortium researchers, webinars, workshops and conferences, and online meetings”
