Non-invasive Brain Stimulation

Associate Professor, Harvard Medical School
Professor Emiliano Santarnecchi is a neurologist and currently a clinical research scientist at the Berenson-Allen Center for Non-invasive Brain Stimulation, as well as co-director of the CME course Introduction to Transcranial Electrical Stimulation (tES) in Neuropsychiatric Research at Harvard Medical School. Prof. Santarnecchi started his medical career as a clinician in Siena, Italy. He earned his PhD in translational neurology in Italy, completing the final year at Harvard Medical School, where he went on to receive a faculty appointment as an associate professor. He is also a licensed dental technician and holds a PhD in psychotherapy; neuroscience, however, is what he wants to pursue.
How might AI be useful in neurodegenerative diseases and clinical trials?
First of all, my interest in artificial intelligence started because of my career as a neuroscientist studying human intelligence. I’ve done a lot of neuroimaging studies looking at which networks and structures in the brain support intelligence. My goal was always to use that knowledge to build some kind of neuromorphic AI that reflects how the brain works. The brain is definitely the best example we have of a highly optimized complex system, refined over thousands of years, that has reached a peak of computational power and efficiency: everything you would want from an AI.
That’s how I started my journey, and then I invested directly in AI. The reason was that we are at a point where we are collecting so much data of various kinds: neuroimaging, electrophysiology, genetics, cognition, you name it. Looking at studies where we had been collecting this data for days, months, years, at some point I realized there is no way we can manage it all. It’s a missed opportunity to keep running studies the way we run them now, especially for Alzheimer’s disease, because we go one hypothesis at a time. We select only the data we need for that specific question, and it takes 5-7 years to run a trial. At this pace, in 50 years we will have tested around 50 hypotheses. Instead, if we organize the data and work on a larger scale, with a bigger vision, leveraging AI, we could run a lot of simulation studies that would tell us, already at the beginning of a trial, whether it is working or not.
The main problem with clinical trials, without going into too much detail, is that you always rely on clinical outcomes, which are usually symptoms and the results of clinical scales. Those tools, cognitive tests for example, are not very sensitive in capturing the severity of the disease, and you cannot repeat them as often as you would like, every day for instance, to see if there is any hint of improvement in a patient using a drug. So you only administer them every 3 months, 6 months, or a year, and you have to wait a year to see if something is working. That’s not how the brain works. If we can capture more subtle signals within the noise in the brain, signals telling us that the network related to a given symptom is changing, we can tell whether the therapy is working or not. The way studies are conducted currently, however, we have to wait 7 months to see a change at the cognitive level. If you don’t see anything changing in the first few weeks or months, you should probably change course. One way to do all this is to build AI models from all the data we have available at the individual level, to do precision neuroscience, predict a trajectory for each patient, and carefully track them over time.
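As a concrete illustration of what dense, per-patient tracking could look like, here is a minimal sketch in Python. It uses entirely synthetic data and a hypothetical network-level biomarker; the thresholds and time scales are placeholders, not a clinical protocol.

```python
import numpy as np
from scipy import stats

# Hypothetical example: a network-level biomarker (e.g. an EEG-derived
# connectivity score) measured daily, instead of a cognitive scale
# administered every 6-12 months. All values here are synthetic.
rng = np.random.default_rng(42)
days = np.arange(60)                           # first two months of a trial
true_slope = 0.02                              # slow improvement under therapy
biomarker = 1.0 + true_slope * days + rng.normal(0, 0.3, days.size)

# Fit a per-patient linear trend and test whether the slope differs from zero.
fit = stats.linregress(days, biomarker)
print(f"slope = {fit.slope:.4f} per day, p = {fit.pvalue:.3g}")

# Early go/no-go signal: if the target network shows no credible change
# after the first weeks, consider changing course rather than waiting a year.
if fit.pvalue < 0.05 and fit.slope > 0:
    print("Network biomarker is improving -> keep the current therapy.")
else:
    print("No detectable change yet -> re-evaluate the intervention.")
```

The point of the sketch is the design choice, not the statistics: with a measurement you can repeat daily, even a simple trend test can flag a non-responding patient months before the next cognitive assessment would.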
How big of a problem is the lack of mechanistic understanding of neurodegenerative diseases?
That’s the other reason why we want to make this big computational effort to build models. Every technique is improving now: you get higher-resolution imaging of the brain, and you can measure things like neurotransmitters that we couldn’t 10-15 years ago. Now that we have all this data, paradoxically, many diseases start looking the same, and people start thinking that they actually share symptoms. Are we talking about one big problem that simply differentiates at some point, or are these all different diseases? It is difficult to keep supporting the idea that they are all different now that you can look in detail at what the problem in the brain actually is. So that’s one big issue for which we need to look at more data.
The model we are trying to build now is what’s called a “Neuro Twin”, to use the right term, meaning we try to recreate everything we can about each individual patient’s brain using AI and computational modelling. We start by modelling the behaviour of each individual neuron in the brain and how neurons interact with each other, which is called “microscale modelling”. Then we move to “mesoscale modelling”, where you look at a chunk of the brain and at how different types of neurons interact with each other within a brain region, how they work to inhibit or excite another region. Lastly, we move to “macroscale modelling”, the modelling of the entire brain, where you look at the dynamics of networks in the brain and at how subcortical structures trigger activity in cortical structures.
So we want granularity on each element, and then we build one big model that encompasses the entire brain. That is the model we would then use to identify novel biomarkers and to try to predict whether a patient is going to respond to a therapy or not. It’s a big endeavour, obviously, and we are doing it exactly for Alzheimer’s disease: we are trying to build the virtual Alzheimer’s brain, which I think is necessary.
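To make the macroscale level concrete: a common simplification in whole-brain modelling is to represent each region as a coupled oscillator, for example with a Kuramoto model, and study how network dynamics emerge from the connectivity between regions. The sketch below is only an illustration of that general idea, not the actual “Neuro Twin” implementation; the connectivity matrix is random rather than derived from a patient’s imaging data.

```python
import numpy as np

# Macroscale toy model: each brain region is a phase oscillator, coupled
# through a connectivity matrix (in a real "Neuro Twin" this would come
# from the patient's structural/functional imaging; here it is random).
rng = np.random.default_rng(0)
n_regions = 20
A = rng.random((n_regions, n_regions))        # stand-in connectome
A = (A + A.T) / 2                             # make coupling symmetric
omega = rng.normal(10.0, 1.0, n_regions)      # natural frequencies (rad/s)
theta = rng.uniform(0, 2 * np.pi, n_regions)  # initial phases
K, dt, steps = 0.5, 0.001, 5000               # coupling, time step, duration

for _ in range(steps):
    # Kuramoto dynamics: d(theta_i)/dt = omega_i + K/N * sum_j A_ij sin(theta_j - theta_i)
    phase_diff = theta[None, :] - theta[:, None]   # element [i, j] = theta_j - theta_i
    coupling = (A * np.sin(phase_diff)).sum(axis=1)
    theta += dt * (omega + (K / n_regions) * coupling)

# Order parameter r in [0, 1]: global synchrony of the simulated network,
# the kind of macroscale read-out one could compare against patient recordings.
r = np.abs(np.exp(1j * theta).mean())
print(f"global synchrony r = {r:.3f}")
```

Models in this family let you ask “what if” questions in silico, for example how synchrony changes when the coupling of one region is weakened, which is the spirit of using a virtual brain to screen hypotheses before a trial.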
Why are scientists reluctant to share their data with others?
Scientists are human beings; human beings have their own priorities, and what they want is to succeed. Being a scientist is challenging at many levels, so when you have something good, the innate response is to keep it for yourself.
I’ve set up consortia to share data and go after Alzheimer’s together, and you see that people tend to respond, “Yes, let’s do it”, but then it never happens. Maybe they perceive the same from me when they ask for my data. I try to be very open, however, and I have given all my data to build that virtual Alzheimer’s brain, but it’s never enough. We should work on creating a platform that makes us scientists feel secure enough to pool the data together and, at the same time, run brand-new studies where everything is harmonized. We need to use the same MRI sequences on patients, for instance, the same EEG recording protocols, and the same cognitive tasks. That’s another challenge: even when you pool data together, sometimes you don’t have the same data. But that’s science.
My biggest frustration in this field is that you don’t see enough communication and willingness to share data for the greater good; everybody has to report to their institution and is challenged to find funding for their studies. It certainly is a problem.
Is neuromodulation superior to drugs?
I don’t think neuromodulation is superior, and obviously I’m biased, because I decided to do the vast majority of my work in neuromodulation. We use magnetic stimulation, electrical stimulation, and ultrasound stimulation to go directly to the brain, exactly in the area, region, or network where we think the problem is. That is the main difference from drugs. Drugs are great and we need drugs, but they are very noisy: they affect everything in the brain, even areas you don’t want to be affected. It’s always been like that.
I think now, moving towards precision neuroscience, neurology, and psychiatry, we really want to be specific about what we hit, why, and how. Neuromodulation allows you to do that. But you need to do it in a specific way, which is not what has been done for the past 20 years, when people used to slap electrodes on the scalp and hope that by stimulating almost the entire brain they would get an effect. Now we need to bring in imaging and electrophysiology to do precise targeting for each patient. Obviously, I think neuromodulation will play a big role for all these reasons.
Regarding drugs, I see many pharma companies now switching to going after neuroinflammation, which is actually the common factor here, something that happens even before protein accumulation and involves microglia and glia. So I’m seeing a shift, and I hope they will come up with drugs that can tackle that. In my mind, the best approach would be to combine the two: you have a drug that acts on microglia everywhere in the brain, for example, and reduces neuroinflammation, and on top of that you use brain stimulation to go after a couple of specific areas that we know have additional problems.
So for me, neuromodulation, yes, but also drugs.
What does the future of neuromodulation look like?
We are going towards personalised targeting, meaning we do individual modelling of each patient using everything we can: different MRI sequences that look at structural aspects of the brain and at the level of perfusion in each brain area, and PET imaging that shows where the proteins are, for instance, or reveals hypometabolism. Then we can combine this with EEG (electroencephalography) to look at fast oscillatory activity in the brain, because MRI doesn’t have enough temporal resolution to measure these kinds of dynamics. From that we get information about the specific brain oscillation we may want to go after.
With brain stimulation you can then combine the two and, if you do it right, get a multi-target solution that attacks multiple nodes of a network at the same time, instead of what was done in the past 20 years, which was attacking one or two regions. We can therefore be more surgical in terms of spatial resolution and, at the same time, set the frequency of stimulation to try to entrain a specific brain oscillation that comes from specific neurons and interneurons in that region, instead of going for a massive excitatory stimulation that tries to bring everything up, which is what we have done in the past. So I think personalisation is key; that, to me, is really the way forward.
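As an illustration of how the stimulation frequency could be individualised, here is a minimal sketch that estimates a patient’s peak frequency in a band of interest from the EEG power spectrum and uses it to set the stimulation frequency, in the spirit of EEG-informed tACS. The signal is synthetic and every parameter is a placeholder, not a clinical protocol.

```python
import numpy as np
from scipy.signal import welch

# Synthetic stand-in for a resting-state EEG channel: a 9.7 Hz oscillation
# buried in noise (a real pipeline would load the patient's recording).
fs = 250.0                                     # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)                   # 60 s of signal
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 9.7 * t) + rng.normal(0, 1.0, t.size)

# Estimate the power spectrum with Welch's method.
freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))

# Find the individual peak within the band we want to entrain (here alpha,
# 8-13 Hz, as a placeholder for whichever oscillation is being targeted).
band = (freqs >= 8) & (freqs <= 13)
peak_freq = freqs[band][np.argmax(psd[band])]
print(f"individual peak frequency: {peak_freq:.2f} Hz")
print(f"-> set stimulation (e.g. tACS) frequency to {peak_freq:.2f} Hz")
```

The same logic extends to the multi-target idea: each node of the targeted network gets its stimulation parameters from that patient’s own imaging and electrophysiology rather than from a one-size-fits-all montage.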
The other thing that has happened, unfortunately, is that brain stimulation suffered from the idea that it was like the old electroshock of the 1950s and 60s, and some people were sceptical. Then, when people started adopting it, a lot of commercially available devices were put on the market that were not optimised for anything. It was an attempt to do some neuromodulation to improve meditation ability or memory, and it inflated the market and gave brain stimulation a bad name, as if it were pseudoscience. I think we are now leaving that phase, and people have new confidence in brain stimulation, so this time we need to do it right; otherwise, the field is not going to take off, and that would be a real wasted opportunity.
These are the highlights of the interview. If you would like to explore more, please visit our YouTube channel.