October 27, 2023 | By Rachel Rose, JD, MBA
“Do not go gentle into that good night” is a poem by Dylan Thomas. While the poem is often connected with a dying individual, it is equally apropos to procuring technology that includes generative artificial intelligence (“AI”). As set forth in the American National Standard Dictionary of Information Technology (ANSDIT), ANSI INCITS 172-2002 (R2007), one definition of AI is “[t]he capability of a device to perform functions that are normally associated with human intelligence such as reasoning, learning, and self-improvement.” And, according to Statista, the AI healthcare market is projected to be worth $187 billion by 2030, a more than ten-fold increase from 2021, when it was valued at $11 billion. Hence, AI is not going away – but neither are human beings.
Recently, Sens. Richard Blumenthal (D-CT) and Josh Hawley (R-MO) announced a bipartisan “Framework for a U.S. AI Act” amid mounting concerns about potentially misleading, deceptive, and discriminatory uses of AI technologies, concerns that have been echoed by other U.S. Government agencies, consumers, and businesses alike. The White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights sets forth five (5) key considerations, which permeate other agencies’ notices of proposed rulemaking and guidance. The five principles are: (1) safety and effectiveness; (2) algorithmic discrimination protections; (3) data privacy; (4) notice and explanation; and (5) human alternatives, consideration, and feedback. This is consistent with the Department of Commerce’s Request for Comment on AI Accountability, which was published on Apr. 13, 2023 (88 Fed. Reg. 22433). Specifically, “[t]his request focuses on self-regulatory, regulatory, and other measures and policies that are designed to provide reliable evidence to external stakeholders – that is, to provide assurance – that AI systems are legal, effective, ethical, safe, and otherwise trustworthy.” (emphasis added).
As directed by the National Artificial Intelligence Initiative Act of 2020 (P.L. 116-283), the National Institute of Standards and Technology (NIST) released the Artificial Intelligence Risk Management Framework (AI RMF 1.0), a guidance document for voluntary use by organizations. The goal of the AI RMF is to offer a resource to organizations designing, developing, deploying, or using AI systems, to help them manage the many risks of AI and promote trustworthy and responsible development and use of AI systems. Likewise, the U.S. Department of Health and Human Services issued its Trustworthy AI (TAI) Playbook (Sept. 2021), which, like the NIST framework, is not binding but sets forth practices that are prudent for anyone considering software or a device that utilizes AI. The Playbook highlights four (4) main risk areas:
- Strategy and Reputation – Loss of public trust and loyalty due to lack of transparency, equitable decision-making, and accountability. Example: If an AI model uses health care expenses as a proxy for health care needs, it may perpetuate biases that affect Black patients’ access to care, since Black patients tend to spend less than White patients for the same level of need. In turn, Black patients may lose trust in the health care community.
- Cyber and Privacy – Security and privacy breaches due to inadequate data protection and improper use of sensitive data. Example: If an AI model that uses protected health information (PHI) to inform public health interventions is not properly secured, it may be compromised by adversarial attacks. This can cause emotional and financial harm to affected individuals.
- Legal and Regulatory – Unfair practices, compliance violations, or legal action due to biased data or a lack of explainability. Example: If an AI-based benefits distribution system discriminates against a protected class due to biased data, the agency may face legal ramifications.
- Operations – Operational inefficiencies due to disruption in AI systems or inaccurate or inconsistent results. Example: If a call center bot that answers grantee inquiries about compliance requirements provides inconsistent responses, it may cause confusion among grantees and additional work for agency officials managing compliance.
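The first risk area above, spending used as a proxy for need, can be made concrete with a short sketch. The code below is illustrative only, with hypothetical patients and dollar figures invented for this example; it shows how a risk score built on past spending ranks a lower-spending patient below an equally sick higher-spending one, so a program that directs extra care to the top-ranked patients would pass over the lower-spending group despite identical clinical need.

```python
# Illustrative sketch with synthetic, hypothetical data: why training a
# "need" score on past spending can systematically under-rank one group.

# Two patients with identical true clinical need but different past spending.
patients = [
    {"group": "A", "true_need": 8, "past_spending": 8000},
    {"group": "B", "true_need": 8, "past_spending": 5000},  # same need, spends less
]

def spending_proxy_score(patient):
    # A model trained to predict spending effectively ranks by spending,
    # not by underlying clinical need.
    return patient["past_spending"] / 1000

# Rank patients for a (hypothetical) extra-care program by the proxy score.
ranked = sorted(patients, key=spending_proxy_score, reverse=True)

# Group B lands at the bottom even though both patients are equally sick,
# so a program that enrolls only top-scored patients would exclude them.
print([p["group"] for p in ranked])  # → ['A', 'B']
```

The fix discussed in the article's framing, human review of the output, would mean checking whether the score tracks clinical need itself rather than a convenient but biased proxy for it.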
A recent article by a Vanderbilt professor, “I Secretly Let ChatGPT Take My Final Exam – The results were stunning,” underscores the notion that AI cannot replace humans’ critical thinking skills. He created an AI “student,” Glenn, who did the same assignments and took the same exam as the students in his Algorithms class. “The results were stunning, and confirmed what I had witnessed during the semester: Every single student in the morning section of my class scored higher on the final exam than Glenn, which only managed a C-minus with a score of 72.5. My class average was in the mid-80s. In my afternoon section, and facing a different set of final exam problems, Glenn fared somewhat better, but still scored below the mean in the bottom third of the class—the equivalent of a C-plus.” Now pause and translate this result to healthcare diagnoses, coding and claims submissions for payment, and medical record entries. In healthcare, people’s lives can be adversely affected in a physical way: algorithms can lead to misdiagnoses, and inaccurate charting and coding can lead to potential liability under the False Claims Act.
In sum, neither AI nor humans are “going gentle into that good night” anytime soon. With appropriate checks and balances in place, and with a human being reviewing the output for exaggerated or skewed algorithmic results, humans and technology can live in harmony – at least for now.
Your next steps:
- Contact NAMAS for full auditing, documentation, and compliance consultation.
- Read more blog posts to stay updated on the 2023 Revisions to the 2021 E&M Guidelines.
- Subscribe to the NAMAS YouTube channel for more auditing and compliance tips!
- Check out the agenda for the 15th Annual NAMAS Auditing & Compliance Conference and register to attend!
NAMAS is a division of DoctorsManagement, LLC, a premier full-service medical consulting firm since 1956. With a team of experienced auditors and educators boasting a minimum of a CPC and CPMA certification and 10+ years of auditing-specific experience, NAMAS offers a vast range of auditing education, resources, training, and services. As the original creator of the now AAPC-affiliated CPMA credential, NAMAS instructors continue to be the go-to authorities in auditing. From DOJ and RAC auditors to CMS and Medicare Advantage Auditors to physician and hospital-based auditing professionals, our team has educated them all. We are proud to have helped so many grow and excel in the auditing and compliance field.
Looking to start up a medical practice or grow your existing practice? Contact our parent company, DoctorsManagement.