The Crucial Role of Software in the Future of Neuromodulation

Dr Cameron McIntyre

Professor, Biomedical Engineering and Neurosurgery, Duke University

Dr Cameron McIntyre is currently a professor of biomedical engineering and neurosurgery at Duke University. He started in the field of biomedical engineering as an undergraduate student at Case Western Reserve University, where he became interested in neuroengineering and began working in Warren Grill’s research lab. Dr McIntyre worked on electrical stimulation of the spinal cord and started building computer models of how electric fields interact with neurons. He realised there was a substantial need to better understand how electric fields interact with brain tissue in a human context, just as deep brain stimulation was getting going.

Why Do We Need Engineers in the Field of Neuromodulation?

The world of neuromodulation is very device-based, and I don’t see that changing anytime soon. Those devices are amazingly complex – there are a lot of electrical-engineering details in them. Many clinicians look at neuromodulation as a black-box technology because they were never taught things like Kirchhoff’s current law; that’s not what’s getting taught in medical school. They’re getting taught drug dynamics and things of that nature, which is great, because most of real-life practice is pharmacology. However, if the world does start to become more accepting of electrical therapies, maybe medical schools will start training clinicians more on the electrical side of how information transfer can happen in the brain or the peripheral nervous system. But until that happens, you’re dependent on these devices to just work.

It would probably be better if you understood how they actually worked; let’s say at the neural or neurotransmitter levels. The time constant of change is very slow in medicine because the training paradigms, the residency training and the practice patterns are very slow to change, so it takes a long time. Until we get to that point, I think there’s a great need for biomedical engineering type people to play that intermediate role. I try to train people to be those kinds of scientists or clinical scientists, but it’s a slow process.

What Are the Differences in Using Neuromodulation in the Brain vs the Heart?

The fundamentals are the same: you’re just using an electric field to manipulate the voltage sensor in a sodium channel. That’s it! At the basic level, it doesn’t matter whether that’s in the heart or in the brain. I don’t know anything about the cardiac world, and maybe cardiologists would feel like, “Hey, we understand this, we know what we’re doing, and we don’t need engineers to help us use these devices efficiently or effectively; they’re optimised and they work great”. I don’t want anyone to take offence at this, but the heart is just a much simpler system than the brain, or the peripheral nervous system for that matter.

In the brain, for example, you have ten different neurotransmitter systems that you are potentially manipulating. You’ve got millions of different circuits that you are manipulating with electrodes implanted in the nervous system. The heart is one circuit, one system that you are pulsing. It’s just a much more approachable problem, and that’s why it worked first and why it was very successful, even in a time of relatively limited scientific and engineering detail – it was able to work very efficiently and effectively in a clinical environment. The brain is the biggest unsolved machine in the universe, and we are barely scratching the surface. Now you throw in manipulating it with electric fields, and it gets complicated real fast.

How Can We Get Feedback from the Brain Analogous to That We Get from the Heart’s ECG/EKG?

We’ve been attempting to talk to the nervous system with our electrical pulses for 30-50 years, and it’s been a one-sided conversation: we don’t know whether what we’re trying to say is getting through. We know that if we get the behavioural effect we wanted, the brain “heard” what we were saying and did it. But clearly, it would be a lot more elegant if we had a better understanding of the language and could give information in a way the nervous system could interpret better. We could presumably do that a lot more efficiently if we understood what it was trying to tell us back, and that’s the whole goal of LFPs (local field potentials) and the entire field of brain-machine interfacing. How do we listen and respond appropriately with our stimulation? Maybe LFPs are the way to go, maybe not – we’ll figure it out.

Where Should Recording Electrodes Be Positioned for Closed-Loop Neurostimulation?

We definitely cannot get a global readout of the neuronal function of the brain. If it were that simple, we would have solved this problem with EEG 50 years ago. I think the goal is that maybe you don’t need a global readout; you need a local readout that is robust and reproducible and that you can rely on to trigger your stimulation. It’s a proxy – you don’t have to understand the actual language of the brain or the science; all you need is a control signal. So you need to look at it from an engineering perspective: let’s not make it a science project if it doesn’t need to be one. I think that’s where we’re starting right now. We don’t know the best place to put the recording electrode to make an efficient closed-loop deep brain stimulation system. But you have to have the tools to start testing different ideas, and that was a great thing that happened with the Activa PC+S 5-10 years ago. You start putting this technology in the hands of creative clinician-scientists, and they’re going to test some cool ideas. That’s how you’ll figure out where to put the electrodes once you get to the point where you’re ready for a real closed-loop device, and I’d say we’re only at the early stages of that process now.
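The proxy-control-signal idea described above can be sketched in a few lines of code. The following is a minimal, purely illustrative simulation of that control loop – the biomarker, thresholds, and signal values are all invented for the sketch and do not correspond to any real device API or clinical parameters. It shows the engineering-perspective point: a local readout only has to be a reliable trigger, not a decoding of the brain’s language.

```python
import math
import random

def band_power(samples):
    """Root-mean-square power of a window of (already band-filtered) samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def closed_loop_step(window, stim_on, on_threshold=1.0, off_threshold=0.6):
    """One control step: enable stimulation when the proxy biomarker exceeds
    on_threshold, disable it below off_threshold. The two thresholds give
    hysteresis, so the controller doesn't chatter near a single cutoff.
    All threshold values here are hypothetical."""
    power = band_power(window)
    if not stim_on and power > on_threshold:
        return True
    if stim_on and power < off_threshold:
        return False
    return stim_on

# Toy demonstration: a synthetic local signal with a "pathological" burst.
random.seed(0)
stim = False
trace = []
for t in range(200):
    amplitude = 2.0 if 50 <= t < 120 else 0.2   # burst between t=50 and t=120
    window = [amplitude * math.sin(0.5 * (t + i)) + 0.05 * random.gauss(0, 1)
              for i in range(32)]
    stim = closed_loop_step(window, stim)
    trace.append(stim)

print("stim on during burst:", any(trace[60:110]))
print("stim off at end:", trace[-1])
```

The design choice worth noting is the hysteresis: because the readout is a noisy proxy rather than a clean measurement, a single threshold would toggle stimulation rapidly around the cutoff, whereas separate on/off thresholds keep the decision stable.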

How Much of a Game-Changer Is AI/Machine Learning for Closed-Loop Neuromodulation?

It is undoubtedly a hot topic right now. I’m personally not sold on it, but that doesn’t mean it will not work. I think it is one approach to trying to map out this system. We don’t know the language, so we’re trying to figure out where we should record and how we should interpret those signals, and machine learning is a massively powerful tool for attempting that. Machine learning works awesome when you’re trying to characterise a stable, reproducible system. But (1) we don’t really know where to record, (2) we don’t know the stability, and (3) we don’t know the dynamics of these systems. So we’re throwing black-box algorithms at the problem, hoping we get something that will be a robust way to predict or give us a control signal, but the jury is still out, in my opinion.

Is Neuromodulation Ultimately a Software or a Hardware Play?

I came to the realisation that the opportunity for impact, at least for an academic researcher, was really on the software side; an academic-type person is never going to have the infrastructure to build a competitive clinical-grade neuromodulation device. There are just realities that prohibit that kind of thing from happening at the scale that would be necessary. We have many different devices out there, from research systems to very simple neuromodulation devices at different levels. Still, somewhere along the line, we will customise them to individual disease states and have real engineering criteria that say, in this disease state, for example, we need 15 channels and 3 different leads placed in these specific locations. We’re a long way from that now, but I do believe that we will eventually get there. Once you get to the point where you have a relatively customised hardware therapy for that disorder, the real differentiator will be the software integration of how you are communicating with the nervous system via that hardware.

Now I’d say we’re only beginning to learn how to do that, but once you know what the hardware requirements are, it’s pretty easy to get those devices built. The differentiator will be the software; that’s what will make a big impact and determine whether company A can offer the coolest features compared to company B – not just from a competitive perspective but from a patient-outcome perspective: how do you customise it to that patient and to the specific mix of symptoms that patient has? For example, in Parkinson’s disease, you’ve got tremor-dominant patients and rigidity-dominant patients. My guess is that you probably want to stimulate those two patient populations in different ways. The software is what’s going to figure that out for you; the hardware is probably just going to be the same for each of those different applications. I think that’s the future, and that’s why I’ve spent my career working on it.
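The “same hardware, different software” idea above can be made concrete with a small sketch: one implanted device, with the software layer selecting stimulation settings per symptom profile. Everything here is hypothetical – the profile names, parameter fields, and all numeric values are invented for illustration and are not clinical guidance or a real device interface.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StimSettings:
    """Hypothetical stimulation parameters the software layer would choose."""
    frequency_hz: float
    pulse_width_us: float
    amplitude_ma: float

# Illustrative lookup: identical implanted hardware, but software-selected
# settings differ by symptom profile. Values are made up for the sketch.
PROFILES = {
    "tremor_dominant":   StimSettings(frequency_hz=180.0, pulse_width_us=60.0, amplitude_ma=2.0),
    "rigidity_dominant": StimSettings(frequency_hz=130.0, pulse_width_us=90.0, amplitude_ma=2.5),
}

def settings_for(profile: str) -> StimSettings:
    """Return settings for a symptom profile, with a conservative
    (low-amplitude) default for unrecognised profiles."""
    return PROFILES.get(profile, StimSettings(130.0, 60.0, 1.0))

print(settings_for("tremor_dominant"))
print(settings_for("rigidity_dominant"))
```

The point of the sketch is structural: the hardware interface (the dataclass fields) stays fixed across patients, while the mapping from patient profile to parameters is pure software – exactly the layer where customisation and differentiation would live.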

This is a highlight of the interview. If you would like to explore more, please visit our YouTube channel.