May 19, 2023 | By Terry A. Fletcher BS, CPC, CCC, CEMC, CCS, CCS-P, CMC, CMSCS, ACS-CA, SCP-CA, QMGC, QMCRC, QMPM
Unless you have been climbing Mount Everest alone with no cell or Wi-Fi service for the past few months, I am certain you have heard of ChatGPT and its use as a form of artificial intelligence (AI). The reality is that AI, or certainly the concept, has been around for a very long time. Until recently, it has been much less actual intelligence and more number crunching: the system works through every variation and combination of responses until it finds one that fits, as opposed to what many conventionally thought AI was, namely "true intelligence and reasoning." Well, let's use some common sense here: AI is machine learning. AI has also now leaped into popular and mainstream culture and into how we look at innovation and our world going forward.
And, fortunately or unfortunately, healthcare is one industry where this has been an ongoing discussion and a raging debate, with strong arguments on both the pro and con sides.
AI has the potential to bring enormous benefits to healthcare by improving diagnosis and treatment, predictive analytics, drug discovery and development, virtual assistants and chatbots, and streamlining administrative tasks. However, to fully realize these benefits, significant challenges must be overcome: data privacy and security (HIPAA), bias in the data, lack of transparency, regulation and governance, AI "hallucinations" (instances where the system fabricates content with no factual foundation), and a general lack of understanding. I believe it is crucial that healthcare organizations, regulators, and researchers work together to ensure that the technology is used in an ethical, actionable, and meaningful manner.
First, the pros. Again, AI in healthcare has the opportunity to transform the way we diagnose, treat, and prevent diseases. The technology could help improve patient outcomes, reduce costs, and increase efficiency in the healthcare system.
- Diagnosis and Treatment Planning: AI can be used to analyze imaging, such as X-rays and MRIs, to help doctors identify diseases and plan treatment. For example, AI-powered algorithms can detect signs of cancer¹ in mammograms with a high degree of accuracy, which can help doctors make a diagnosis and plan treatment more quickly.
- Predictive Analytics: Electronic health records and other patient data can be analyzed by AI to predict which patients are at risk of developing certain conditions. This may help doctors intervene early before a condition becomes more serious and can also help healthcare organizations allocate resources more effectively.
- Virtual Assistants and Chatbots: AI-powered virtual assistants and chatbots can help patients access healthcare information and services in a simpler and possibly easier fashion. However, this is where improvement needs to be made. For example, a chatbot could potentially answer patients’ questions about their symptoms or help them schedule an appointment with a doctor, but not all rare conditions may be known to the AI bot.
- Streamlining Administrative Tasks: AI can also be used to automate routine administrative tasks, such as scheduling appointments and processing insurance claims. This can help reduce costs and increase efficiency in the healthcare system.
While the potential benefits of AI in healthcare are clear, there are also significant challenges that must be overcome. Here are five that I find the most important:
- Data Privacy and Security: The use of AI in healthcare requires large amounts of patient data, which raises concerns about data privacy and security. It is important to ensure that patient data is protected from unauthorized access and that patients have control over how their data is used. Companies that create AI platforms will need specific business associate agreements (BAAs), HIPAA compliance, and privacy agreements in place; otherwise, this could turn into a marketplace that targets consumers with ads for products based on shared medical data that should never have been shared. Additionally, proper security measures must be put in place to protect sensitive patient data from being exploited for malicious purposes.
- Bias in the Data: AI systems can be biased if the data they are trained on is not representative of the population they will serve. This may lead to inaccurate or unfair results, particularly for marginalized communities. I have already experienced this when beta-testing a chatbot, and the biases even in mainstream information are severe. This would not serve the healthcare field well.
- Lack of Transparency: Many AI systems are considered "black boxes" because it is difficult to understand how they arrived at a particular decision. This lack of transparency can make it difficult for doctors and other healthcare professionals to trust the results of an AI system. Who will be monitoring and fact-checking the information?
- Regulation and Governance²: There is currently a lack of clear regulations and guidelines for the use of AI in healthcare. This can make it difficult for healthcare organizations to know how to use the technology responsibly and can also make it difficult for patients to know what to expect when they interact with an AI system.
- Lack of Understanding: Many healthcare professionals and patients may not have a good understanding of how AI works and what it can and cannot do. This can lead to unrealistic expectations and mistrust of the technology.
As healthcare organizations increasingly invest in the use of artificial intelligence in healthcare for a range of tasks, the challenges facing this technology must be addressed, as there are many ethical and regulatory issues that may not apply elsewhere.
Some of the most pressing challenges, in addition to the concerns above, include patient safety and accuracy, training algorithms to recognize patterns in medical data, integrating AI with existing IT systems, gaining physician acceptance and trust, and ensuring compliance with federal regulations. Currently, there is a lack of federal oversight.
Finally, gaining acceptance and trust from medical providers is critical for the successful adoption of AI in healthcare. Physicians need to feel confident that an AI system is providing reliable advice and will not lead them astray. This means transparency is essential: physicians should have insight into how the AI system is making decisions so they can be sure it is using valid, up-to-date medical research. Let's hope that, with the rapid rollouts of these platforms, the federal government has a plan to protect consumers before the technology fully comes to market, or we could have a mess on our hands instead of innovation.
Your next steps:
- Become a NAMAS Member to earn those CEUs and take advantage of learning resources and products!
- Read more blog posts to stay updated on the 2023 Revisions to the 2021 E&M Guidelines.
- Subscribe to the NAMAS YouTube channel for more auditing and compliance tips!
NAMAS is a division of DoctorsManagement, LLC, a premier full-service medical consulting firm since 1956. With a team of experienced auditors and educators boasting a minimum of a CPC and CPMA certification and 10+ years of auditing-specific experience, NAMAS offers a vast range of auditing education, resources, training, and services. As the original creator of the now AAPC-affiliated CPMA credential, NAMAS instructors continue to be the go-to authorities in auditing. From DOJ and RAC auditors to CMS and Medicare Advantage Auditors to physician and hospital-based auditing professionals, our team has educated them all. We are proud to have helped so many grow and excel in the auditing and compliance field.
Looking to start up a medical practice or grow your existing practice? Contact our parent company, DoctorsManagement.