Your medicine did not go straight from the lab into your pill pack. The first step is extensive lab research. Then comes animal testing. But before a medicine can be approved for use, it must be tested on humans – in an expensive, complex process known as a clinical trial.
The Basics
A clinical trial is simply this: Researchers recruit volunteers with the disease the drug is meant to treat and randomly divide them into two groups. One group gets the experimental drug; the other gets a placebo, a treatment that looks identical but has no effect. If the patients who get the active drug improve more than the ones who get the placebo, that’s evidence that the drug is effective.
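For readers who like to see the arithmetic, here is a minimal sketch of that comparison in Python, using entirely made-up numbers: volunteers are shuffled into two equal groups, and the average outcomes are compared. The group sizes, outcome scale, and effect size are illustrative assumptions, not data from any real trial.

```python
import random
import statistics

# Toy two-arm randomized trial with made-up numbers.
# "Outcome" is an arbitrary improvement score; higher is better.
random.seed(0)

volunteers = [f"patient_{i}" for i in range(200)]
random.shuffle(volunteers)
treatment_group = volunteers[:100]   # gets the experimental drug
control_group = volunteers[100:]     # gets the placebo

# Simulated outcomes: assume the drug adds about 2 points on average.
treated_outcomes = [random.gauss(7, 3) for _ in treatment_group]
placebo_outcomes = [random.gauss(5, 3) for _ in control_group]

difference = statistics.mean(treated_outcomes) - statistics.mean(placebo_outcomes)
print(f"Average improvement, drug vs. placebo: {difference:.2f} points")
```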
Finding enough volunteers is one of the hardest parts of designing a trial. Doctors may not know that a particular trial exists, and patients may be reluctant to sign up. Artificial intelligence could make this job easier.
Meet Your Twin
Digital twins are computer-generated models that simulate real-world objects and systems, behaving almost identically, statistically speaking, to their physical counterparts. When an oxygen tank burst on Apollo 13, leaving engineers scrambling to fix the spacecraft from more than 200,000 miles away, NASA used a digital replica of the craft to work out a solution.
With enough data, scientists can use machine learning to create digital twins of people. This type of artificial intelligence learns from large amounts of data rather than being programmed for a specific task. To create digital twins of patients in a clinical trial, researchers train machine-learning algorithms on data from past clinical trials and from individual patient records. The resulting model predicts how a patient’s health would progress over the course of the trial if they were given the placebo – essentially creating a simulated control for that particular patient.
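As a rough illustration of that training step, the hypothetical sketch below fits a standard regression model (scikit-learn’s GradientBoostingRegressor, chosen here only for convenience) to synthetic “historical” placebo data and then predicts a new patient’s outcome on the placebo. The features, numbers, and model choice are assumptions for illustration – not Unlearn.AI’s actual method.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-in for historical placebo-arm data: baseline features
# (e.g., age, baseline severity, a biomarker) and the outcomes observed
# for past patients who received the placebo.
rng = np.random.default_rng(42)
n_patients = 1000
baseline = rng.normal(size=(n_patients, 3))
placebo_outcome = 5 + baseline @ np.array([0.8, -1.2, 0.3]) \
    + rng.normal(scale=1.0, size=n_patients)

# The "digital twin" model learns to map baseline features to a
# predicted placebo outcome.
twin_model = GradientBoostingRegressor().fit(baseline, placebo_outcome)

# For a new trial participant, the twin predicts how they would likely
# progress if given the placebo.
new_patient = np.array([[0.5, -0.3, 1.1]])
predicted_on_placebo = twin_model.predict(new_patient)[0]
print(f"Predicted outcome on placebo: {predicted_on_placebo:.2f}")
```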
So here’s how it would work: A person – let’s call her Sally – is assigned to the group that gets the active drug. Sally’s digital twin (the computer model) takes her place in the control group, predicting what would happen if Sally did not receive the treatment. The difference between Sally’s actual response to the drug and the model’s prediction of how she would have responded to the placebo is an estimate of how effective the treatment is for her.
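In code, the estimate for Sally boils down to a single subtraction. The numbers below are hypothetical, chosen only to make the arithmetic concrete.

```python
# Hypothetical numbers for illustration only.
sally_outcome_on_drug = 8.4       # Sally's measured improvement on the drug
twin_prediction_on_placebo = 6.1  # her twin's predicted improvement on placebo

estimated_benefit = sally_outcome_on_drug - twin_prediction_on_placebo
print(f"Estimated benefit of the drug for Sally: {estimated_benefit:.1f} points")
```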
Digital twins can also be created for patients in the control group. By comparing the twins’ predicted outcomes on the placebo with the outcomes of the real people who received it, researchers can spot problems and improve the accuracy of the model.
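One simple way to run that check, sketched below with made-up numbers, is to compare the twins’ predicted placebo outcomes against what the real control patients actually experienced; a large average error flags a model that needs work.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error

# Made-up example: outcomes observed in real control patients vs. what
# their digital twins predicted for them on the placebo.
observed_control = np.array([5.1, 4.8, 6.3, 5.5, 4.2])
twin_predicted = np.array([5.4, 4.6, 5.9, 5.8, 4.5])

# A small average error suggests the twins track reality; a large one
# signals the model needs improvement.
print("Mean absolute error:", mean_absolute_error(observed_control, twin_predicted))
```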
Replacing or augmenting control groups with digital twins could benefit patients as well as researchers. People who sign up for a trial usually hope to be helped by a new drug, but there’s a 50/50 chance they’ll be put into the control group and won’t get the experimental treatment. If digital twins could stand in for some or all of a control group, more participants would have access to the experimental drug.
The Unexpected
The technology may be promising, but it’s not yet in widespread use – maybe for good reason. Daniel Neill of New York University is a specialist in machine learning and its applications in healthcare. He points out that machine learning models depend on having lots of data, and it can be difficult to get high-quality data on individuals. Information about things like diet and exercise is often self-reported, and people aren’t always honest: they tend to overestimate how much exercise they get and underestimate how much junk food they eat.
Rare adverse events could be a problem, too, he suggests – for example, someone having an unexpected negative reaction to a medication. “Most likely, those are things you haven’t modeled for in your control group.”
But Neill’s biggest concern is that the predictive model reflects what he calls “business as usual.” Say a major unexpected event – something like the COVID-19 pandemic, for example – changes everyone’s behavior patterns, and people get sick. “That’s something that these control models wouldn’t take into account,” he says. Unanticipated events that the modeled controls don’t account for could cause the trial to fail.
Eric Topol, founder and director of the Scripps Research Translational Institute, is an expert on digital technologies in healthcare. He thinks the idea is a great one, but not yet ready for prime time. “I don’t think clinical trials are going to change in the near term, because this requires multiple layers of data beyond health records, such as a genome sequence, gut microbiome, environmental data, and on and on.” He predicts that it will take years to be able to do large-scale trials using AI, particularly for more than one disease. (Topol is also the editor-in-chief of Medscape, WebMD’s sister website.)
Gathering enough quality data is a challenge, says Charles Fisher, PhD, founder and CEO of Unlearn.AI, a start-up pioneering digital twins for clinical trials. But, he says, addressing that kind of problem is part of the company’s long-term goals.
Two of the most commonly cited concerns about machine learning models – privacy and bias – are already accounted for, says Fisher. “Privacy is easy. We work only with data that has already been anonymized.”
When it comes to bias, the problem isn’t solved, but it is irrelevant – at least to the outcome of the trial, according to Fisher. A well-documented problem with machine learning tools is that they can be trained on biased data sets – for example, ones that underrepresent a particular group. But because the trials are randomized, Fisher says, their results are not sensitive to biases in the training data. The trial determines how the drug affects its own subjects by comparing them with controls, and the model is adjusted to better match those actual controls. So even if the original data set is biased, “We’re able to design trials so that they are insensitive to that bias.”
Neill doesn’t find this convincing. You can remove bias in a randomized trial in a narrow sense, by adjusting your model to correctly estimate the treatment effect for the study population, but you’ll just reintroduce those biases when you try to generalize beyond the study. Unlearn.AI “is not comparing treated individuals to controls,” Neill says. “It’s comparing treated individuals to model-based estimates of what the individual’s outcome would have been if they were in the control group. Any errors in those models, or any events they fail to anticipate, can lead to systematic biases – that is, over- or under-estimates of the treatment effect.”
But Unlearn.AI is forging ahead. It is already working with drug companies to design trials for neurological diseases such as Alzheimer’s, Parkinson’s, and multiple sclerosis. These diseases have more data available than many others, which made them a good place to start. Fisher believes the approach could be applied to all diseases, significantly reducing the time required to bring new drugs to market.
If the technology proves useful, these invisible siblings could benefit researchers and patients alike.