About the Author: Bradley Merrill Thompson is a member of the firm at Epstein Becker & Green, P.C. There, he counsels medical device, drug and combination product companies on a wide range of FDA regulatory, reimbursement and clinical trial issues. The opinions in this piece are Thompson's and don't necessarily reflect the opinions of the firm.
Many appreciate that telemedicine is more than just using Skype so that a doctor can look a patient in the eyes. For telemedicine to be truly useful, the patient must be able to collect and transmit a variety of data the healthcare professional needs in order to assess the patient’s health. Indeed, state regulators have historically been skeptical of telemedicine precisely because they fear that the doctor-patient relationship in that context is too thin, with doctors being forced to make judgments based on too little available information.
Artificial intelligence can help. It can help with data collection by instructing patients how to use medical technology that collects data. It can help physicians sort through and analyze the data they do receive. It can even help deliver therapy.
In the face of COVID-19, with its attendant greater need for medical services delivered while maintaining social distancing, is FDA now ready to support the use of AI in telemedicine? I think it is.
Current telemedicine is already able to collect data in many ways, including through weight scales, blood pressure cuffs, heart monitors, blood glucose meters, pulse oximeters, peak flow meters and thermometers, among many others. While quite useful, all of those are fairly basic. The devices have existed for years, and the innovation of the last several years was primarily just adding connectivity and data management.
But technology is moving well beyond those vital sign monitors. For example, it’s been amply reported that an ultrasound can now be performed with a probe attached to a smartphone. ECG machines and other technologies are likewise shrinking to scales that allow for mobility.
Presently, in many cases like ultrasound, these new technologies are still limited to administration by licensed healthcare professionals because of their complexity, the risk associated with improper use and the difficulty in obtaining a high-quality image. But the next wave of innovation after creating cheaper and more mobile hardware will be using AI to help guide untrained users in how to administer these tests.
In February of this year, FDA authorized the marketing of AI that guides cardiac ultrasound use. In this first authorization, the guidance system is limited to use by healthcare professionals, but fundamentally the technology now exists to use AI to guide the use of historically complex diagnostic tools. Before this guidance system, such technology could only be administered by a trained sonographer, not by other healthcare professionals.
In this analysis, I’m not going to reference FDA guidance documents much, even though they're the natural place to look. Unfortunately, this is so new that for the most part FDA has not kept its guidance documents up to date to reflect the agency’s current thinking.
Instead, I’m going to look at what FDA actually does. The only real exception is a public meeting FDA held in late February to discuss the agency’s approach to the use of AI specifically in radiology. The agency’s comments there have much broader importance, though. As those who observe FDA understand, the agency deals with new technologies like AI in whichever clinical context innovates first, and the principles worked out in that context then form the basis for FDA policy toward technologies that follow in other areas of clinical practice.
The easiest way to assess the regulatory environment for AI in telemedicine is to distinguish between those technologies on the patient side of telemedicine versus those on the professional side.
As already noted, one of the key technological challenges to effective telemedicine is the use of technologies on the patient end that can accurately collect the data the doctor needs. As explained above, there are a wide variety of simple sensors to which connectivity already has been added. Those work well, and help tremendously. But to push telemedicine toward its fuller value proposition, we need to ask the patient to collect more data. And that means patients or untrained caregivers using devices that traditionally have been restricted to healthcare professional use.
Novel patient guidance systems
There is a natural limit on what can be achieved by providing more and better written directions for use, or even video tutorials. Generally, to ensure that the vast majority of patients can use technology appropriately, the information provided needs to be communicated at perhaps a fifth-grade level. That is the generally accepted standard for patient-directed instructions. For complex technologies, that presents a real challenge if our only mechanism is providing more details, somehow in simplified language, to cover the full range of issues that the user needs to understand.
But here is where AI can be so useful. I’ll return to the example of ultrasound and the newly authorized guidance system that allows healthcare professionals without special training to take ultrasound images. As explained further at FDA’s February meeting, the concept behind the guidance system is to permit professionals such as nurses and army medics in the field to use equipment that would otherwise require an expert. My point is that one day soon companies will be able to develop guidance systems good enough that the user won’t need any medical training at all. Patients will be able to use these technologies.
Before talking about the regulatory pathway, it’s helpful to understand exactly what this guidance system does. The system guides the user through four interrelated functionalities:
- A quality meter that provides real-time feedback to the user on expected diagnostic quality of the resulting video clip.
- A prescriptive guidance feature that provides direction to the user to emulate how a sonographer would manipulate the transducer to acquire the best view. It suggests changes the user might make, such as increasing the angle of the probe or moving the probe closer.
- An auto-capture feature that triggers an automatic capture of a video clip when the quality is predicted to be diagnostic.
- A saved best clip feature that continually assesses the clip quality as it is captured, and keeps the best one, even if auto-capture is never triggered because the quality never reaches that level.
In this way, the AI both gives direction to the user and assesses the quality of the images in order to capture and preserve the best so far. During development, the company had to test the performance of the algorithm by testing each of these features individually. The performance metrics included frame-level prediction of the current image's quality as compared to an ideal image, relative quality prediction of the current video clip as compared to the ideal, and ultimately an evaluation of whether the auto-captured images were clinically acceptable.
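To make the four functionalities concrete, here is a minimal sketch in Python of how such a capture loop might fit together. Everything here is invented for illustration: the function names, the threshold and the toy quality score are assumptions, since the vendor's actual models are proprietary and not described in the authorization.

```python
# Illustrative sketch only; the real system uses learned models, not this toy math.
from dataclasses import dataclass

DIAGNOSTIC_THRESHOLD = 0.85  # assumed cutoff for "predicted to be diagnostic"

@dataclass
class Clip:
    frames: list
    quality: float

def quality_score(frames):
    """Stand-in for a learned model predicting diagnostic quality (0 to 1)."""
    return sum(frames) / (len(frames) * 100)  # toy proxy for demonstration

def acquire(frame_stream):
    """Process candidate clips, auto-capturing or keeping the best so far."""
    best_clip = None
    for frames in frame_stream:          # each item is one candidate video clip
        q = quality_score(frames)        # 1. quality meter: real-time feedback
        if q >= DIAGNOSTIC_THRESHOLD:    # 3. auto-capture at diagnostic quality
            return Clip(frames, q)
        # 2. prescriptive guidance would suggest probe adjustments here,
        #    e.g. "increase the angle" or "move the probe closer"
        if best_clip is None or q > best_clip.quality:
            best_clip = Clip(frames, q)  # 4. saved best clip so far
    return best_clip                     # returned even if the threshold is never met
```

The key design point the sketch captures is the fallback: even if no clip ever reaches the auto-capture threshold, the user is not left empty-handed, because the best clip seen so far is preserved for the clinician.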
The pathway to market for this ultrasound guidance system gives us a pretty good indication of what the pathway to market will look like for completely untrained users. The requirements FDA developed for this new technology fall into two buckets as follows:
- Design verification and validation studies, including:
  a. Clinical performance testing, where FDA seems to be reasonably flexible in the design of the trial
  b. A detailed protocol that describes, in the event of future changes to the product:
     i. Identification of the types of changes that trigger the need for resubmission to FDA
     ii. For changes that do not require a resubmission to FDA:
        - The assessment metrics used
        - Acceptance criteria
        - The analytical methods that will be used
  c. Usability testing demonstrating the effectiveness of the training program
- Detailed labeling that includes, among other things, an explanation that the guidance system only helps collect the data, and that the data themselves, once collected, still need to be interpreted by a trained clinician.
That’s a reasonable set of requirements, and a road map for those who want to develop new technologies that let patients collect data in the context of telemedicine. Notice in particular the creativity FDA is exhibiting in the second validation requirement, a way to address inevitable changes in AI. If AI is allowed to adapt to new information, its performance should improve over time, but that can’t be assumed. FDA is saying that it will permit a certain amount of adaptation within agreed-upon limits: the product may evolve on the condition that the company agree to follow an approved protocol for verifying and validating the changes. That’s a huge step forward.
Low-risk patient guidance systems
But wait: there’s more. In addition to showing a willingness to consider novel technology that employs AI to guide less-trained users on the use of sophisticated medical equipment, FDA also seems to be permitting class I devices to be converted from professional use to patient use, again simply in the administration of the devices, not in the interpretation of the data they produce. I wrote “seems to be,” because all we can do is watch what FDA does with regard to class I; we have no visibility into the FDA decision-making process itself, since there is no premarket review. But the agency seems to be allowing some pretty obvious examples of this with no intervention.
An example of this is Prescription Check by Warby Parker. Among other things, the patient-facing app substitutes for an eye chart in an optometrist’s office, and includes a triage questionnaire and an astigmatism test. It is part of “a telehealth service that, if you are eligible, allows an eye doctor to assess how you're seeing through your glasses and provide an updated glasses prescription.” The American Optometric Association had been very vocal in complaining about apps that substituted for a doctor’s judgment, arguing that there’s a whole lot of medicine and art that goes into an eye exam. The strategy behind the app appears to be responsive to those concerns in that the Warby Parker app doesn’t bypass the doctor, but merely gives the doctor a way to do a simple eye test remotely through telemedicine.
The cleverness of the software is simply in guiding the patient in exactly how to conduct the exam and collect valid data. And this Warby Parker app is not alone in the family of class I devices for which software is being provided to guide lay users to do what traditionally a doctor would’ve done in administering a test, and to do it remotely while providing the data to a physician for review.
And all that was before COVID-19. Now, in the face of the pandemic, in April 2020 FDA released an emergency enforcement policy for remote ophthalmic assessment and monitoring. For the duration of the emergency, FDA apparently is willing to offer even more latitude for a wide range of products, including visual acuity charts, visual-field devices, general-use ophthalmic cameras and even Class II tonometers for measuring intraocular pressure. FDA is willing to waive many of the requirements, including applicable premarket requirements, to allow these devices to be adapted for home use by consumers, specifically in the context of telemedicine consultations.
Home use in vitro diagnostics using AI
Home-use diagnostics are nothing new, and indeed diagnostics using a smartphone are not new. But there continues to be a lot of innovation in the space that makes use of AI. For example, just recently Luminostics announced plans to develop a test using smartphone optics paired with an inexpensive adapter, in combination with “glow-in-the-dark” nanochemistry and signal-processing AI. The path to getting marketing authorization for these home use diagnostics is relatively well defined at FDA.
Patient therapy systems using AI
Apart from collecting data, FDA also seems to be encouraging the use of AI in delivering therapy on the patient’s side. For example, in December 2017, FDA created a new classification for a computerized behavioral therapy device for psychiatric disorders (21 CFR 882.5801). By regulation, these are prescription-only devices intended to provide a computerized version of condition-specific behavioral therapy as an adjunct to clinician-supervised outpatient treatment to patients with psychiatric conditions. The digital therapy is intended to provide patients access to therapy tools used during treatment sessions to improve recognized treatment outcomes. So, even though it can only be obtained through prescription, it is intended for use in the hands of a patient at home.
Earlier this month, FDA expanded that classification, at least temporarily during the COVID-19 outbreak, to permit much broader use of these products even without FDA review and clearance. In explaining the rationale for the guidance, FDA certainly appreciated the immediate need for social distancing, but FDA is also clearly impressed by the ability of such software to play an adjunctive role to physician services.
There are many technologies that can be used for collecting important information that guides diagnostic and therapeutic decision-making that, if placed in the hands of lay users, could facilitate telemedicine. Further, there are clearly software applications that help deliver needed therapy to patients remotely. FDA seems to be doing what it can to encourage the development and deployment of these devices on the patient side to make telemedicine more powerful.
It’s important to note that, while some telemedicine is patient to physician, other telemedicine involves professional to professional. Telemedicine can be used to bring specialized, sophisticated capabilities to underserved populations, including those in remote areas.
On the professional side of telemedicine, the advancements are well documented and are not unique to telemedicine. The AI-based tools that physicians are using for in-person encounters can be used for telemedicine as well. Image analysis, a long-standing application of AI in medicine, could be deployed, for example, to do a preliminary read of a radiological image through computer-aided detection, reviewed by a physician.
In its Policy for Imaging Systems Used During the Coronavirus Disease 2019 (COVID-19) Public Health Emergency, FDA notes that, “Imaging devices can help visualize pulmonary abnormalities and are used routinely to diagnose and evaluate the causes of reduced lung function. Accordingly, there is increased demand for imaging devices that may assist in the diagnosis and treatment monitoring of lung disease.” FDA further explains “Increasing the availability of mobile and portable systems may increase options to image patients inside and outside of healthcare facilities, which could help to reduce the spread of COVID-19. Additionally, modified use of ultrasound imaging systems may expand the number of healthcare practitioners capable of performing this imaging technique.”
The agency specifically includes radiological computer-assisted detection/diagnosis software within the scope of this enforcement policy. FDA indicates that it will permit certain changes in product design as well as instructions for use and indications intended to make the software more suitable for use in the pandemic.
Apart from image analysis, last September FDA came out with a redraft of its Clinical Decision Support Software guidance that is much more flexible in its treatment of AI. The prior 2017 draft would have required all use of AI in clinical decision support to be FDA regulated, because to escape FDA regulation the human user had to be able to perform the same calculation as the software. The new draft instead requires only that the basis for the calculations be understandable to the user. That makes a huge difference, allowing at least some low-risk applications of AI in clinical decision support to be used without FDA regulation.
Another related potential AI application for telemedicine is triage. There appear to be dozens of triage products on the market presently, although they may not be linked to telemedicine as of yet. Triage software can assess symptoms broadly or specifically, helping the physician prioritize the order in which patients need attention. For example, software can flag patients who may have COVID-19 or, for those known to have the disease, can assess a patient’s level of acuity and severity, and predict likely treatment needs. On the patient side, the software can guide the patient to appropriately enter his or her symptoms.
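The kind of prioritization such triage software performs can be sketched very simply. The patient fields, scoring weights and threshold below are entirely hypothetical, invented for illustration; they are not drawn from any cleared product or clinical guideline.

```python
# Hypothetical sketch of ranking a telemedicine queue by predicted urgency.
# All weights and cutoffs are illustrative assumptions, not clinical advice.
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    spo2: float          # oxygen saturation (%)
    temp_c: float        # body temperature (Celsius)
    known_covid: bool    # confirmed COVID-19 diagnosis

def acuity_score(p: Patient) -> float:
    """Higher score = more urgent. Weights are illustrative only."""
    score = 0.0
    if p.spo2 < 94:
        score += (94 - p.spo2) * 2.0   # hypoxia weighted heavily
    if p.temp_c > 38.0:
        score += p.temp_c - 38.0       # fever adds modest urgency
    if p.known_covid:
        score += 1.0                   # known disease bumps priority
    return score

def triage_order(patients):
    """Return patients sorted most-urgent first for physician review."""
    return sorted(patients, key=acuity_score, reverse=True)
```

In a real product, the hand-tuned scoring function would typically be replaced by a trained model, but the output is the same in kind: an ordering that tells the physician whom to see first.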
Here the FDA pathway is murkier. At the February 2020 FDA meeting, the agency discussed AI triaging software, but in the context of radiology specifically. The agency has been clearing AI when used for triage, but generally in a specific, narrow indication. For example, this month FDA issued 510(k) clearance to medical AI software developer CuraCloud for CuraRad-ICH, a computer-aided triage and notification application for detecting intracranial hemorrhage on head CT scans. But the pathway through FDA for broad triaging software, or even the circumstances when FDA marketing authorization is required, is not clear.
All software that aids the physician side has the potential to make telemedicine a very cost-effective means of treating patients. And because telemedicine is already fundamentally a digital platform, it’s easier to integrate many of these AI tools into the workflow.
FDA has provided a road map for guidance-system technologies and the ways they can be used to facilitate telemedicine. Unfortunately, this road map can’t be found in any guidance document, and no doubt the rules will continue to evolve. But it appears that FDA is willing to be creative and flexible. And nearly all of this came even before COVID-19 created an urgency around the use of telemedicine. The future is looking pretty bright for these technologies.