
Computational Modeling of Neural Stimulation and Circuitry

Jeffrey Arle

Associate Professor of Surgery, Harvard Medical School

Professor Jeffrey Arle is an associate professor of neurosurgery at Harvard Medical School and currently works at Beth Israel Deaconess Medical Center and Mount Auburn Hospital in Cambridge. Prof. Arle has been interested in the brain and mind since his teenage years and wrote an essay on what a thought is to get into college. In medical school he studied neuroscience, and he then did a PhD in computational modelling. Later, during his neurosurgery residency at the University of Pennsylvania, he became involved in several areas of neurosurgery and was unsure which he wanted to pursue. It was when he attended a lecture on the future of neurosurgery in the 21st century that he decided to go into computational modelling of neural stimulation and circuitry.

What computational tools are being deployed in neurosurgery today?

Over the last 30 years, we’ve seen more and more ability to use imaging and computers in the operating room (OR), whether it’s augmented reality or virtual reality, components for teaching, or real-time heads-up displays. There is a more precise and refined use of computers to help with targeting, and of diffusion tensor imaging pathway information, for some of the types of surgeries we do. And that is just within neurosurgery; it now expands beyond that.

I was very interested early on in using neural networks to try to understand multi-faceted problems where you don’t know which variables are important for prediction. For example: how is this patient going to do, should we put this device in, should we do this surgery or another one? Is there anything that can tell us that beforehand, so we can get better results and also understand how things work?
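To make the idea concrete, here is a minimal sketch of that kind of outcome-prediction workflow, assuming scikit-learn and entirely invented pre-operative features and outcomes; the actual models and variables Prof. Arle worked with would differ.

```python
# Hypothetical sketch: a small neural network predicting whether a patient is
# likely to benefit from a device implant, given pre-operative features.
# Feature names and data are invented for illustration only.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Invented features: age, symptom duration (years), baseline symptom score
X = rng.normal(size=(200, 3))
# Invented binary outcome: 1 = good response to the device, 0 = poor response
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

# A small multilayer perceptron; the point is the workflow, not the architecture.
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_train), y_train)
print("held-out accuracy:", model.score(scaler.transform(X_test), y_test))
```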

As these devices came online, most of it was serendipitous, meaning people would just try some kind of physics, like electricity, near-infrared, or ultrasound, and say, “If we apply this to this area of the nervous system, maybe we’ll get some results.” Sometimes they did, and said, “Hey, I think we can start selling things that do this. It looks like it works.” “How does it work?” “Well, we’re not really sure; we think it might do this, but anyway, it looks like it works really well, so let’s sell it.”

So if we go back to using computers as part and parcel of this and try to get at the underlying mechanisms, why can’t we just study it in a normal scientific way? You can, but understanding how literally hundreds of thousands or millions of neurons are all interacting dynamically, in a normal research setting with animal models, is extraordinarily difficult. You can perhaps answer more straightforward questions, but the non-linear dynamics of those complex systems are better handled with computational approaches. So I think that has come more and more into people’s mindset, and approaching this with computational neuroscience is now a significant part of things. In fact, the FDA has recently suggested to companies making devices that they have computational modelling support for the newer devices coming out, in order to support their efficacy.

How close are we to having a digital double of the patient?

Jonathan: Sometimes people say there are two different kinds of computational neuroscience. There’s a kind of computational neuroscience where we’re just using the computational power of computers to do things like produce very nice 3D anatomical maps of the brain or the cerebral hemispheres, for the purposes of surgical planning for example, but it’s not actually offering any deeper insight into the mechanistic underpinnings of the organ. And then we have the kind of computational neuroscience where we’re actually trying to understand the neural code and how populations of neurons actually compute behaviorally important outputs, rather than just the location of objects. I suppose the former already seems to be present in the operating theatre and the clinic more generally. If we think about the DBS context, and even something as simple as a debate about whether the STN or the GPi is the appropriate target for an individual patient, we’d like to be able to deploy computational models that would somehow allow us to simulate that patient and do the in silico experiment. Yet the actual clinical deployment of that seems very far off. There are one or two initiatives that are trying to push the needle on it, but do you see that being a reality in the clinic in the near future?

Jeff: I think those are all great observations, and I do think that there is a lot of separation right now between the enhanced-imaging aspects of computational work and the more mechanistic and predictive aspects of computational work. There are some underlying similarities in perhaps some of the mathematics, but they don’t really translate across very much yet. There is some of that work, as you’ve alluded to, with DBS and precision medicine: taking a patient’s tractography information and other anatomical information, putting it into the computer, and getting the various derivatives of the field effects. You can make pretty good estimates of it beforehand and say, “Hey, we think” (and here’s where things go off the rails a bit) “we think if we get this field volume in this region in this patient, it’s going to have a more beneficial effect than if we put it over here.” However, you still don’t really know, at the levels of granularity below that, whether the physiology, dynamically and in real time, is going to have that effect. That’s the best guess right now, and I think putting in more of the functional, dynamic information is something that is going to happen; however, I don’t see it happening in the near future.
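As a rough illustration of what a “field volume” estimate involves, here is a toy point-source calculation of the tissue volume where the electric field exceeds an assumed activation threshold. Real DBS field models use patient-specific anatomy, anisotropic conductivity derived from tractography, and axon models, so every number below is a placeholder.

```python
# Toy estimate of the tissue volume affected by a stimulating contact.
# Assumes an idealized point source in homogeneous, isotropic tissue;
# all parameter values are placeholders, not clinical settings.
import numpy as np

conductivity = 0.2        # S/m, rough grey-matter value
current = 3.0e-3          # A, stimulation amplitude
threshold_e = 200.0       # V/m, assumed activation threshold for nearby axons

# Point source: V(r) = I / (4*pi*sigma*r), so |E(r)| = I / (4*pi*sigma*r^2)
r = np.linspace(0.5e-3, 10e-3, 500)                 # radial distance, m
E = current / (4 * np.pi * conductivity * r**2)

# Radius at which the field falls below the assumed activation threshold
r_active = r[E >= threshold_e].max()
volume_mm3 = (4.0 / 3.0) * np.pi * (r_active * 1e3) ** 3

print(f"activation radius ~{r_active*1e3:.2f} mm, volume ~{volume_mm3:.1f} mm^3")
```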

What could a technological breakthrough, equivalent to high-speed genome sequencing in genetics, for neuroscience look like?

Some of this advanced technology is out there, and people are working on closed-loop systems more and more, including us; some of our research involves closed-loop dynamics. The problem I see is this. If we had all the money and time in the world, with no fiscal or economic constraints pushing us toward the efficiency of producing something to make our money back, we could start from the ground up, from quarks for example, and try to understand every level and meta-level of the dynamics of this highly interconnected complex system called a nervous system, and once we got that we could do anything. But the expediency of reality, of people having to pay for jobs and the amount of time available, is present, and we say, “Well, how about we just try to help people with this disorder in the meantime. In order to do that, we might use information from this little electrode over here to inform that electrode over there about what to stimulate, and maybe that’ll help their symptoms.” So now you have this little mini closed-loop idea, and every once in a while somebody publishes a paper with some data that contributes to making it a little better.

You’re really reliant on physiological markers of various kinds to inform your system, and that is still very crude right now when you look at it. You might think DBS alone is crude, but in a way it’s actually fairly precise; it’s like a very precise drug applied to a very precise area of the nervous system. If you start thinking about multifaceted opsins with micro- or nano-LEDs, where you can turn excitatory and inhibitory cells on and off, you get to this very precise electroceutical or photoceutical idea.

The closed-loop idea still has that precision, but now you’re using fairly crude information to inform it, as if that makes it better. It isn’t necessarily better, because you have to know that the marker you’ve chosen is actually relevant and makes sense. How do you know that? It’s a stab in the dark.
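A heavily simplified sketch of the closed-loop idea being discussed: an invented surrogate biomarker drives a proportional adjustment of stimulation amplitude. The marker, target, and gain are all assumptions made for illustration, which is exactly the problem described above, since whether the chosen marker is actually relevant is the open question.

```python
# Minimal closed-loop stimulation sketch: adjust stimulation amplitude based on
# a surrogate biomarker. The biomarker model, target, and gain are invented;
# choosing a marker that is actually relevant is the hard, open problem.
import numpy as np

rng = np.random.default_rng(1)

amplitude = 1.0                 # mA, current stimulation amplitude
target_marker = 0.5             # arbitrary units, desired biomarker level
gain = 0.2                      # proportional controller gain
history = []

for step in range(100):
    # Stand-in for a recorded biomarker (e.g. band-limited power from a sensing
    # electrode). Here it simply drops as amplitude rises, plus noise.
    marker = max(0.0, 1.0 - 0.4 * amplitude) + rng.normal(scale=0.05)

    # Proportional update toward the target, clipped to a safe range.
    amplitude += gain * (marker - target_marker)
    amplitude = float(np.clip(amplitude, 0.0, 5.0))
    history.append((marker, amplitude))

print(f"final amplitude ~{amplitude:.2f} mA, final marker ~{history[-1][0]:.2f}")
```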

At what level of the modelling hierarchy should we study diseases such as addiction?

You’re dealing with phenomena in the universe, and you need to decide at what level to go into them to understand them. Do you go in at the quantum level, the “Heisenbergian” level, where you’re taking statistical quanta of every aspect of the material involved, to study addiction? At what level can you gain information, and then what are your goals? Why are you doing this? Are you trying to help people who are addicted to no longer be addicted?

So, understanding that each level informs some meta-level above it, all the way up, you can go in anywhere, and you may need to understand a couple of levels below it and perhaps a couple of levels above it, maybe not. You have to decide: “This is where we’re going to go in, these are our goals; is this level able to tell us something that helps us toward our goals?” Maybe we need to go in at a different level.

We did some work on high-frequency stimulation and paresthesia versus non-paresthesia, and on what the mechanism is, and I really felt that at some level we had to look at the dynamics of the individual sodium and potassium channels: the actual gating mechanisms, the statistical dynamics of those sub-components, and how fields at these frequencies were affecting the dynamics of those channels and ions. That’s because I thought that level of analysis might be where we had to go to explain the higher-level phenomena we were seeing, paresthesia or not.
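To give a sense of what modelling the gating dynamics of sodium and potassium channels looks like computationally, here is a standard Hodgkin-Huxley-style single-compartment sketch driven by a kilohertz-frequency current. The parameters are the classic textbook values, not those of the study described above.

```python
# Hodgkin-Huxley-style single compartment driven by a high-frequency current,
# to illustrate gating-variable (m, h, n) dynamics. Classic textbook
# parameters; not the model or settings from the study discussed above.
import numpy as np

def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * np.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * np.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * np.exp(-(v + 65.0) / 80.0)

# Membrane and channel parameters (mS/cm^2, mV, uF/cm^2)
g_na, g_k, g_l = 120.0, 36.0, 0.3
e_na, e_k, e_l = 50.0, -77.0, -54.4
c_m = 1.0

dt, t_max = 0.01, 50.0                      # ms
v, m, h, n = -65.0, 0.05, 0.6, 0.32
spikes, prev_v = 0, v

for t in np.arange(0.0, t_max, dt):
    # 1 kHz sinusoidal stimulation current (uA/cm^2); amplitude is a placeholder
    i_stim = 20.0 * np.sin(2.0 * np.pi * 1.0 * t)   # 1 cycle per ms = 1 kHz

    # Gating-variable kinetics
    m += dt * (alpha_m(v) * (1.0 - m) - beta_m(v) * m)
    h += dt * (alpha_h(v) * (1.0 - h) - beta_h(v) * h)
    n += dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)

    # Membrane equation
    i_ion = (g_na * m**3 * h * (v - e_na)
             + g_k * n**4 * (v - e_k)
             + g_l * (v - e_l))
    v += dt * (i_stim - i_ion) / c_m

    # Crude spike count: upward crossing of 0 mV
    if prev_v < 0.0 <= v:
        spikes += 1
    prev_v = v

print(f"spikes in {t_max:.0f} ms of 1 kHz stimulation: {spikes}")
```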

Think about the dynamics of the stock market, or whatever you want to choose in the financial world: other things are impinging on those prices at any moment. There are people’s behaviours, there’s the news cycle in various ways, and there’s a granularity to all of those things. It’s not just the companies and what’s going on inside them; there are all these other things from the world impinging on them. I think addiction, for example, is one of those phenomena where those things also matter. You may have to take into account the person’s environment, the people they interact with, their physical environment, and other aspects of what they’re doing every day in order to understand it.

What could a potential treatment for pain look like in the future?

Jeff: I think it’s going to be a stepwise thing of interacting with these devices in ways that become more and more refined for the time available in the clinic, for being able to see patients and get where you need to go, because there are all these other elements that impinge on our time and ability to do things. It’s interesting, though, because even now, when you look at the different waveforms that are out there, or the segmented leads in DBS, or the RNS system for epilepsy, the complexity of programming these devices has already gone beyond what the typical clinician can really wrap their head around. Even if they’re interested and want to, they can’t sit there and go through every kind of combination.
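To illustrate why going through every combination is not feasible, here is a back-of-the-envelope count of a hypothetical programming space for a segmented-lead device; the specific ranges are invented, but the combinatorial point stands.

```python
# Back-of-the-envelope size of a hypothetical DBS programming space.
# Contact configurations, amplitudes, pulse widths, frequencies, and the time
# per setting are invented placeholders; the explosion is the point.
from math import prod

contact_configurations = 2 ** 8      # 8 independently switchable segments (on/off)
amplitude_steps = 50                 # e.g. 0.1 mA steps from 0 to 5 mA
pulse_width_steps = 10               # e.g. 30 to 120 us
frequency_steps = 20                 # e.g. 20 to 250 Hz

total = prod([contact_configurations, amplitude_steps,
              pulse_width_steps, frequency_steps])
minutes_per_setting = 5              # assumed time to evaluate one setting in clinic

days = total * minutes_per_setting / 60 / 24
print(f"{total:,} settings; roughly {days:,.0f} clinic-days to test them all")
```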

So the companies are left with the need to combine information and get it to a level where somebody can, to put it in simple terms, move a joystick around and harness the complex underlying therapy they’re putting together. Otherwise, it’s not going to be tenable: “Look at our system, it takes all of this into account, here it is,” and then the clinician says, “That sounds really great, but I can’t really do that; I don’t have the time, or frankly the knowledge, to combine these things and know what I’m doing.” So I think it’s going to be on the companies to figure out how to translate that back to the clinician for use.

Jonathan: That’s actually a problem we’re working on very hard: algorithm-based motor assessments, so that we can assess arbitrary quantities of clinical data instead of the 30 minutes or so every month that is currently possible. Because you’re absolutely right, these things are becoming more and more complex, which imbues them with more and more power but also makes them more and more unmanageable.

These are the highlights of the interview. If you would like to explore more, please visit our YouTube channel.