Clinicians should think twice about using artificial intelligence tools as a productivity booster, healthcare attorneys warned after a Florida doctor publicized on TikTok how he used ChatGPT to write a letter to an insurer arguing for patient coverage.
The Good and Bad
Most, if not all, technologies "can be used for good or evil, and ChatGPT is no different," says Jon Moore, chief risk officer at privacy and security consultancy Clearwater.
Healthcare organizations should have a policy in place preventing the use of tools such as ChatGPT without prior approval or, at a minimum, not allowing the entry of any electronic protected health information or other confidential information into them, Moore suggests.
"If an organization deems the risk of a breach still too high, it might also elect to block access to the sites so employees are unable to reach them at all from their work environment."
Beyond potential HIPAA and related compliance issues, using emerging AI tools without proper diligence can present additional concerns, such as poor software quality, coding bias, and other problems.
"Without testing, true peer review and other neutral evaluation tools, implementation should not be in a monetization prioritized 'release first and fix later' typical tech product/service introduction," Teppler says.
"If things go wrong, and AI is to blame, who bears liability?"
Continue reading the full article here.