Dennis D. McDonald (ddmcd@ddmcd.com) consults from Alexandria Virginia. His services include writing & research, proposal development, and project management.

Tapping the Brakes on AI Use in Peer Review

By Dennis D. McDonald

A recent article by Jocelyn Kaiser in AAAS’ Science magazine, Science funding agencies say no to using AI for peer review, reports that NIH and other US government agencies that fund scientific research are beginning to forbid the use of generative AI tools such as ChatGPT in the peer review processes that evaluate research funding requests.

Funding agencies do need more time to create appropriate ground rules for using AI tools in the peer review process. While a permanent ban would be both premature and counterproductive, the reality is that we don’t yet understand how best to govern such tools. The issues associated with their use are real:

  • Even if AI tools are used only to summarize proposal text to speed the review process, the possibility of introducing errors is real.

  • AI platform managers are still developing protections to govern the input and subsequent processing of confidential, proprietary, and/or personal information.

  • AI tools have been known to fabricate realistic-sounding information.

Forbidding the use of tools like ChatGPT in the review process makes sense – but only for now. In the meantime, we need to experiment with and evaluate a variety of approaches to governing their use.

For example, Michael Kaplan and I have been researching the use of AI tools such as ChatGPT in the project management process; some of our findings so far are here and here. We have found such tools to be extremely valuable, but only if their use is carefully governed and understood by knowledgeable and experienced people.

My own belief is that rules and policies will eventually be developed governing the use of AI tools not just in the proposal peer review process but also in other areas of R&D, including research design, data analysis and interpretation, and even R&D management. Current issues such as the handling of confidential and personal information in LLM training data will eventually be resolved.

Put another way, there’s no going back. People who understand these tools will learn how to use them and govern them responsibly; I intend to be in that group.

At the same time, we have to understand that these new tools do have the ability to disrupt both management and research processes. We must be prepared to recognize and manage the resulting changes.

Copyright 2023 by Dennis D. McDonald

Related articles:

My Interest in the “NIST Generative AI Public Working Group (NIST GAI-PWG)”

LLM Tools: Force Multipliers and/or Sanity Checkers?
