AI @ Emory: Research & Beyond – Applications, Impact, and Compliance Risks

Photo credit: elenabsl – stock.adobe.com

Author: Deepika Bhatia, AVP, RCRA

Artificial intelligence (AI) is the simulation of human-like intelligence processes by machines, especially computer systems. AI tools have created quite a flurry across the globe, and the Emory research enterprise is no exception! We are excited to embrace this new technology while ensuring the Emory community is aware of the compliance risks and regulatory requirements involved in using AI.

The Office of Research Compliance & Regulatory Affairs (RCRA) established a workgroup in May of this year to bring together stakeholders from across the university to discuss AI risks, applications, and any best practices that may have been implemented at the School and department levels.

Objectives of the AI Tools Work Group

  • Discuss AI applications and risks across Emory University for:
    • Academics
    • Research
    • Privacy
    • Information security (Infosec)
  • Develop preliminary recommendations for the use of AI based on regulatory guidance and risks/trends noted in Schools and departments.
  • Draft best practices and recommendations as a resource for department and stakeholder use

The work group is multidisciplinary, with representatives from Schools, departments, academics, compliance teams, Infosec, and IT.

Outlined below are risks and preliminary recommendations for the use of AI tools in academics, research, privacy, and information security.

Academics

One of the top risks for higher education is plagiarism. The Emory College of Arts and Sciences has made significant progress in addressing and preventing plagiarism concerns, with best practices in place that can be replicated across other Schools. To provide further clarity, its honor code now includes a section addressing the use of AI in assignments; using AI tools without citation is a violation of the honor code.

Other resources include sessions from the Center for Faculty Development and Excellence (CFDE), such as its panels on ChatGPT and AI:

ChatGPT: Artificial Intelligence and Teaching | Emory University | Atlanta GA

Research

AI tools can be effective for well-defined, research-related tasks and could produce unbiased results. For example, AI is being used in a clinical trial to calculate target zones for head and neck radiotherapy, with the hypothesis that it will yield faster, more accurate results.

AI can also assist researchers with tedious tasks, such as identifying data patterns or conducting literature reviews. Elicit.org is an AI platform that assists researchers in finding relevant publications.

AI tools can also be used to write grants, assist in the peer review process, and perform other tasks that require repetitive, lengthy, or unbiased review.

AI also brings many challenges for researchers to navigate:

Data Integrity Concerns, due to:

  • Lack of data transparency​
  • Inadequate data protection and security measures
  • Erroneous data​
  • Lack of tools to identify erroneous data

Authorship Concerns

  • Plagiarism in grant proposals and publications
  • Fairness concerns in the data used by AI tools; because the technology is new, training data often lacks the diversity needed to prevent bias
  • Improper or inadequate citations

Funding Risks

  • NIH cautions researchers that using AI tools may introduce several concerns related to research misconduct, such as including plagiarized text from someone else’s work or fabricating citations. Plagiarized, falsified, or fabricated information in a grant application may result in noncompliance with NIH and/or other funding agency policies.
  • There are several AI tools that can be used to write grants:
    • https://www.fundwriter.ai/
    • https://www.grantwritingmadeeasy.com/
    • https://grantable.co/

Privacy & Information Security

Open-source AI tools are not compliant with the Health Insurance Portability and Accountability Act (HIPAA), the General Data Protection Regulation (GDPR), or the Family Educational Rights and Privacy Act (FERPA).

Inputting data into ChatGPT or similar AI tools is equivalent to disclosing that data to the public and could be considered a violation under FERPA, HIPAA, PCI DSS, GLBA, or other federal or state laws.

Organizational confidential and proprietary information is at risk when it is shared with AI chatbots. ChatGPT also poses a potential breach of contextual integrity, which dictates that individuals’ information not be revealed outside of the context in which it was originally created.

ChatGPT is NOT HIPAA compliant. Therefore, it is critical not to input any protected health information (PHI), or any individually identifiable health information used as research data, into ChatGPT.
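
As a concrete illustration of that recommendation, the minimal Python sketch below screens a prompt for obvious identifier patterns before it is sent to any external AI tool. The patterns, function name, and sample text are hypothetical and illustrative only; regex matching is nowhere near sufficient for real PHI detection, and human review (or an approved de-identification service) is still required.

```python
import re

# Illustrative patterns only -- real PHI detection needs far more than regex.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d+", re.IGNORECASE),
}

def flag_potential_phi(text: str) -> list[tuple[str, str]]:
    """Return (category, matched_text) pairs for anything resembling an identifier."""
    hits = []
    for label, pattern in PHI_PATTERNS.items():
        hits.extend((label, match) for match in pattern.findall(text))
    return hits

prompt = "Summarize: patient MRN: 483920, seen 3/14/2023, call 404-555-0147."
findings = flag_potential_phi(prompt)
if findings:
    print("Do NOT submit -- possible identifiers found:", findings)
else:
    print("No obvious identifiers detected; human review is still required.")
```

A screen like this is a guardrail, not a guarantee: it flags the prompt above (an MRN, a date, and a phone number) but would miss names, free-text descriptions, and rare-condition details that can identify a person on their own.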

De-identifying or anonymizing data is key to minimizing the risk of a data breach. However, AI models have proven particularly adept at re-identifying data subjects even when the source data set was supposedly de-identified in accordance with existing standards.
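
To make the re-identification risk concrete, the toy Python sketch below shows a classic linkage attack that even non-AI tooling can perform: a "de-identified" data set still carries quasi-identifiers (ZIP code, birth year, sex), and joining it against a public record that shares those fields re-attaches names to diagnoses. All tables and values here are fabricated for illustration.

```python
import pandas as pd

# Toy "de-identified" data: direct identifiers removed, quasi-identifiers remain.
deidentified = pd.DataFrame({
    "zip": ["30322", "30306"],
    "birth_year": [1948, 1985],
    "sex": ["F", "M"],
    "diagnosis": ["condition A", "condition B"],
})

# Toy public record (e.g., a voter roll) with names plus the same quasi-identifiers.
public_records = pd.DataFrame({
    "name": ["Jane Doe", "John Roe"],
    "zip": ["30322", "30306"],
    "birth_year": [1948, 1985],
    "sex": ["F", "M"],
})

# Joining on the shared quasi-identifiers re-identifies every row.
reidentified = deidentified.merge(public_records, on=["zip", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```

Modern AI models raise the stakes further because they can perform this kind of linkage probabilistically, across messier and higher-dimensional data than an exact join requires.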

AI Related Regulatory Guidance

  1. NIH Peer review: The Use of Generative Artificial Intelligence Technologies is Prohibited for the NIH Peer Review Process (NOT-OD-23-149)

In June 2023, NIH released a notice prohibiting NIH scientific peer reviewers from using natural language processors, large language models, or other generative Artificial Intelligence (AI) technologies for analyzing and formulating peer review critiques for grant applications and R&D contract proposals. NIH is also revising its Security, Confidentiality, and Non-disclosure Agreements for Peer Reviewers to clarify this prohibition. The notice also reiterates that uploading or sharing content or original concepts from an NIH grant application, contract proposal, or critique to online generative AI tools violates the NIH peer review confidentiality and integrity requirements.

  2. U.S. Department of Education – released an AI report with resources for teaching and learning: ed.gov/documents/ai-report/ai-report.pdf
  3. White House Fact Sheet – New Actions to Promote Responsible AI Innovation that Protects Americans’ Rights and Safety

The Office of Science and Technology Policy (OSTP) released a request for information (RFI) to aid in the development of a national AI strategy for the US to harness the benefits and mitigate the risks associated with AI; the comment period ended July 7.

  4. NSF – is conducting an internal assessment to develop (conservative) guardrails for the appropriate use of ChatGPT.
    • “So our main concerns about ChatGPT are what data you provide it in questions. And in general, we would prefer people be conservative in their use of it, so we’ve got a few guardrails set up like you can’t determine an NSF grant award winner using ChatGPT,” said Dorothy Aronson, NSF Chief Information Officer. https://fedscoop.com/national-science-foundation-looking-at-use-cases-for-chatgpt/

Resources:

  • AI Tools workgroup contributions – RCRA, Tracy Dawson, Privacy.
  • Buch VH, Ahmed I, Maruthappu M. Artificial intelligence in medicine: current trends and future possibilities. Br J Gen Pract. 2018 Mar;68(668):143-144. doi: 10.3399/bjgp18X695213. PMID: 29472224; PMCID: PMC5819974.
  • Should AI have a role in assessing research quality? https://www.nature.com/articles/d41586-022-03294-3

For any questions or additional information related to the AI workgroup at Emory, please contact Deepika Bhatia at researchcompliance@emory.edu
