Elon Musk Neuralink and the Future of Humanity | Lex Fridman Podcast


The following is a conversation with Elon Musk, DJ Seo, Matthew MacDougall, Bliss Chapman, and Noland Arbaugh about Neuralink and the future of humanity. Elon, DJ, Matthew, and Bliss are, of course, part of the amazing Neuralink team, and Noland is the first human to have a Neuralink device implanted in his brain. I speak with each of them individually, so use timestamps to jump around or, as I recommend, go hardcore and listen to the whole thing. This is the longest podcast I've ever done. It's a fascinating, super technical, and wide-ranging conversation, and I loved every minute of it. And now, dear friends, here's Elon Musk, his fifth time on The Lex Fridman Podcast.

Drinking coffee or water?
"Water. I'm so over-caffeinated right now."
"Do you want some caffeine?"
"I mean, sure. Is there a Nitro drink?"
"This is supposed to keep you up till, like, you know, tomorrow afternoon basically."
"Yeah, I don't—so what does Nitro do? It's just got a lot of caffeine or something?"
"Don't ask questions; it's called Nitro. Do you need to know anything else? It's got nitrogen in it."
"That's ridiculous. I mean, what we breathe is 78% nitrogen anyway. Why do we need to add more?"
"Most people think that they're breathing oxygen, and they're actually breathing 78% nitrogen."
"You need like a Milk Bar, like from A Clockwork Orange."
"Yeah, yeah. Is that a top three Kubrick film for you?"
"A Clockwork Orange? It's pretty good. I mean, it's demented, jarring, I'd say."

Big congrats on getting Neuralink implanted into a human.
"That's a historic step for Neuralink, and yeah, there's many more to come."
"We obviously have a second implant as well."
"How did that go?"
"So far, so good. It looks like we've got, I think, 400 electrodes that are providing signals."
"Nice. How quickly do you think the number of human participants will scale?"
"It depends somewhat on the regulatory approval, the rate at which we get regulatory approvals. We're hoping to do 10 by the end of this year—a total of 10, so eight more. With each one, you're going to be learning a lot of lessons about the neurobiology, the brain, everything—the whole chain of the Neuralink, the decoding, the signal processing, all that kind of stuff."
"Yeah, I think it's obviously going to get better with each one. I mean, I don't want to jinx it, but it seems to have gone extremely well with the second implant. There's a lot of signal, a lot of electrodes; it's working very well."

What improvements do you think we'll see in Neuralink in the coming years?
"In years, it's going to be gigantic because we'll increase the number of electrodes dramatically. We'll improve the signal processing. Even with only roughly 10-15% of the electrodes working with Nolan, our first patient, we were able to achieve a bits per second that's twice the world record. I think we'll start vastly exceeding world records by orders of magnitude in the years to come. So it's like getting to, I don't know, 100 bits per second. Maybe five years from now, we might be at a megabit, faster than any human could possibly communicate by typing or speaking."

Listening at 1x speed feels painfully slow once you're used to 1.5x or 2x.

Well, think of it in reverse: if communication were slowed down, how would you feel about that? If you could only talk at, say, one-tenth of normal speed, you'd say, "Wow, that's agonizingly slow." Now imagine you could speak and communicate clearly at 10 or 100 or a thousand times faster than normal. Listen, I'm pretty sure nobody in their right mind listens to me at 1x; they listen at 2x. So I can only imagine what 10x would feel like, or whether I could actually understand it. I usually default to 1.5x. You can do 2x, but if I'm listening to something to fall asleep to, in 15-20 minute segments, then I'll do 1.5x. If I'm paying attention, I'll do 2x.

But actually, if you start listening to podcasts or audiobooks at 1.5x, then 1x sounds painfully slow. I'm still holding on to 1x because I'm afraid of becoming bored with a reality where everyone speaks at 1x. Well, it depends on the person. You can speak very fast; we can communicate very quickly. Also, if you use a wide range of vocabulary, your effective bit rate is higher. That's a good way to put it: the effective bit rate. The question is how much information is actually compressed in the low-bit-rate transfer of language. If there's a single word that can convey something that would normally require ten simple words, then you've got maybe a 10x compression on your hands.

That's really like with memes. Memes are like data compression. You're simultaneously hit with a wide range of symbols that you can interpret, and you get it faster than if it were words or a simple picture. Of course, you're referring to memes broadly, like ideas. There's an entire idea structure that is like an idea template, and you can add something to that idea template. But somebody has that pre-existing idea template in their head. So, when you add that incremental bit of information, you're conveying much more than if you just said a few words. It's everything associated with that meme.

Do you think there'll be emergent leaps of capability as you scale the number of electrodes? Do you think there'll be an actual number where the human experience will be altered? Yes. What do you think that number might be, whether electrodes or bits per second? We, of course, don't know for sure, but is it 10,000? 100,000? Certainly, if you're anywhere near 10,000 bits per second, that's vastly faster than any human communicates right now. If you think of the average bits per second of a human, it is less than one bit per second over the course of a day, because there are 86,400 seconds in a day and you don't communicate 86,400 tokens in a day. Therefore, your bits per second are less than one on average over 24 hours. It's quite slow.
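To make that back-of-the-envelope estimate concrete, here is a minimal sketch in Python; the daily token count and bits-per-token are illustrative assumptions of mine, not figures from the conversation:

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

# Illustrative assumptions (not quoted figures): ~16,000 words or
# tokens produced per day, with each token counted as roughly one
# bit in Musk's order-of-magnitude framing.
tokens_per_day = 16_000
bits_per_token = 1

avg_bps = tokens_per_day * bits_per_token / SECONDS_PER_DAY
print(f"average output ~ {avg_bps:.2f} bits/s")  # ~0.19, well under 1
```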

Even if you're communicating very quickly and talking to somebody who understands what you're saying, because in order to communicate, you have to at least to some degree model the mind state of the person to whom you're speaking. You then take the concept you're trying to convey, compress that into a small number of syllables, speak them, and hope that the other person decompresses them into a conceptual structure that is as close to what you have in your mind as possible. There's a lot of signal loss in that process—a very lossy compression and decompression. A lot of what your neurons are doing is distilling the concepts down to a small number of symbols, syllables that I'm speaking, or keystrokes, whatever the case may be.

Compressing complex concepts forces us to focus on what truly matters.

That's a lot of what your brain's computation is doing. There is an argument that this is actually a healthy or helpful thing to do: as you try to compress complex concepts, you're perhaps forced to distill what is most essential in them, as opposed to all the fluff. In the process of compression, you distill things down to what matters most, because you can only say a few things. That is perhaps helpful.

If our data rate increases, it's highly probable that we'll become far more verbose. It's just like computers: when they had limited memory, you really thought about every byte. Now, with many gigabytes of RAM, even a simple iPhone app that just says "hello world" is several megabytes, filled with fluff. Nonetheless, we still prefer computers with more memory and more compute.

The long-term aspiration of Neuralink is to improve the AI-human symbiosis by increasing the bandwidth of communication. Even in the most benign scenario of AI, you have to consider that the AI is simply going to get bored waiting for you to spit out a few words. If the AI can communicate at terabits per second and you're communicating at bits per second, it's like talking to a tree. It is a very interesting question for a super-intelligent species: what use are humans?

There is some argument for humans as a source of will or purpose. If you consider the human mind as having the primitive limbic elements, which even reptiles have, and the cortex, the thinking and planning part of the brain, the cortex is much smarter than the limbic system and yet is largely in service to the limbic system, trying to make it happy. The sheer amount of compute that's gone into people trying to get laid is insane. Without actually seeking procreation, they are just trying to do this sort of simple motion and get a kick out of it.

The cortex, which is much smarter than the limbic system, is trying to make the limbic system happy because the limbic system wants to have sex or wants some tasty food. This is further augmented by the tertiary system, which is your phone, laptop, iPad, or any computing device. So, you are already a cyborg with this tertiary compute layer in the form of your computer and all its applications.

In the context of getting laid, there is a massive amount of digital compute also trying to achieve this, with platforms like Tinder. The compute that humans have built is also participating, with gigawatts of digital compute going into this effort. If we merge with AI, it is just going to expand the compute that humans use, certainly for things like this.

There is a fundamental question of the meaning of life and why do anything at all. If our simple limbic system provides a source of will to do something, which then goes to our cortex and then to our tertiary compute layer, it might actually be that the AI ends up simply trying to make the human limbic system happy. It seems like the will is not just about the limbic system; there are a lot of interesting, complicated things in there. We also want power, which is limbic too, but then we also want to be cooperative.

Why settle for normal when we can aim for superhuman abilities?

There is obviously some risk with the new device; you can't get the risk down to zero. It's not possible. Therefore, you want to have the highest possible reward given that there's a certain irreducible risk. If somebody is able to have a profound improvement in their communication, that's worth the risk as you get the risk down. Once the risk is minimized, with thousands of people using it for years, you might consider aiming for augmentation.

We are actually going to aim for augmentation with people who have neuron damage. We're not just aiming to give people a communication data rate equivalent to normal humans; we're aiming to give people who are quadriplegic or have complete loss of connection to the brain and body a communication data rate that exceeds normal humans. While we're at it, why not give people superpowers? The same applies to vision restoration, where there could be aspects of that restoration that are superhuman.

At first, the vision restoration will be low resolution, because you have to consider how many neurons you can put in there and trigger. You can adjust the electric field between the neurons and arrange them in patterns. Even with 10,000 electrodes, it's not just 10,000 pixels; you can effectively achieve a megapixel or even a 10-megapixel situation. Over time, you could get higher resolution than human eyes and see in different wavelengths, like Geordi La Forge from Star Trek. You could see in radar, ultraviolet, infrared, or even have eagle vision.

In a different context, let me share an experience. I recently took ayahuasca, which was a truly incredible experience. I was in the jungle, amongst the trees, with a shaman, insects, and animals all around. I took an extremely high dose of nine cups over two days. On the first day, I took two cups, which was a ride but not a revelation. It was like a little airplane ride, seeing some visuals and a dragon. However, with nine cups, it felt like deep space travel.

I experienced an overwhelming sense of gratitude for the people in my life, seeing them glow with an incredible life force that seemed to connect with the entire universe.

I kept thinking about it, and I had extremely high-resolution thoughts about the people I know in my life. You were there. It wasn't just about my relationship with each person, but about the person themselves. I had this deep gratitude for who they are. It was like an exploration, like in The Sims or whatever, where you get to watch people. I got to watch them and just be in awe of how amazing they are. It was great. I was waiting for some negative thoughts, maybe even a demon, but nothing. I just felt extreme gratitude for them.

There was also a lot of space travel. The human beings I know had this kind of glow to them. I kept flying out from them to see Earth, our solar system, and our galaxy. I saw that glow all across the universe. I went past the Milky Way and saw a huge number of galaxies, all glowing. I couldn't fully control the travel. I would explore distances near the solar system, looking for aliens. There were no aliens, but the glow, that life force I was seeing in humans, was present throughout the universe. It made me feel like whatever makes humans amazing is spread all across the universe.

There were no demons, but there were dragons. They weren't scary; they were protective. It was more like Game of Thrones dragons. At night, in the jungle, the trees started to look like dragons, and they were all looking at me. They didn't seem scary; they seemed like they were protecting me. The shaman and the people didn't speak English, which made it scarier because we were worlds apart in many ways. They talked about the mother of the forest protecting you, and that's what I felt like. We were way out in the jungle, not a tourist retreat. I went deep into the Amazon with a guy named Paul Rosolie, who basically is Tarzan. We went crazy deep into the jungle.

All the evidence points to the idea that if you trigger the right neuron, you could trigger a particular scent or make things glow. Essentially, you could do pretty much anything. You can think of the brain as a biological computer. If certain "chips" or elements of this biological computer are broken, such as after a stroke where a part of the brain is damaged, Neuralink could potentially solve this. For instance, if the damage affects speech generation or the ability to move your left hand, Neuralink could address these issues.

However, when it comes to memory loss, if a massive amount of memory is gone, we cannot restore those memories. We can restore the ability to make new memories but not the ones that are fully lost. If part of the memory is still there and the means of accessing it is broken, we could re-enable the ability to access the memory. This is similar to how if the RAM or SD card in a computer is destroyed, you can't get the data back, but if the connection to the SD card is broken, it can be fixed.

With AI, you could repair photographs and fill in missing parts, and maybe you could do the same with memories. This would be a probabilistic restoration of memory. This concept is quite esoteric, but one of the most beautiful aspects of the human experience is remembering good memories. We live much of our lives in our memories rather than in the moment, reliving good times, which produces the largest amount of happiness. Essentially, what are we but our memories, and what is death but the loss of memory and information?

If you run a thought experiment where you are disintegrated painlessly and then reintegrated a moment later, like teleportation, provided there is no information loss, the disintegration of your body is irrelevant. Death is fundamentally the loss of information and memory. If we can store memories as accurately as possible, we achieve a kind of immortality.

It's like, "Hey, I'd really like to make that plant happy, but it's not saying a lot, you know?" So the more we increase the data rate that humans can intake and output, the better the chance we have in a world full of AGIs. We could better align collective human will with the AI if the output rate, especially, were dramatically increased. I think there's potential to increase the output rate by, I don't know, three, maybe six, maybe more orders of magnitude. It's better than the current situation. That output rate would be increased by enhancing the number of electrodes, the number of channels, and also by potentially implanting multiple Neuralinks.

"Do you think there will be a world in the next couple of decades where hundreds of millions of people have Neuralink?" "Yeah, I do. When people see the capabilities, the superhuman capabilities that are possible, and the safety is demonstrated, if it's extremely safe and you can have superhuman abilities, and let's say you can upload your memories so you wouldn't lose them, then I think probably a lot of people would choose to have it. It would supersede the cell phone, for example. The biggest problem that a phone has is trying to figure out what you want. That's why you've got auto-complete and output, which is all the pixels on the screen. But from the perspective of the human, the output is so freaking slow. A desktop or phone is desperately just trying to understand what you want, and there's an alterity between every keystroke from a computer standpoint. So, the computer's talking to a tree, a slow-moving tree that's trying to swipe."

"Computers are doing trillions of instructions per second, and in a whole second, there are a trillion things it could have done. I think it's exciting and scary for people because once you have a very high bit rate, that changes the human experience in a way that's very hard to imagine. We would be something different, some sort of futuristic cyborg. We're obviously talking about the distant future, maybe 10-15 years away. When can I get one? Probably less than 10 years, depending on what you want to do. If I can get like a thousand BPS and it's safe, and I can just interact with a computer while laying back and eating Cheetos—I don't eat Cheetos—but certain aspects of human-computer interaction, when done more efficiently and more enjoyably, are worth it."

"We feel pretty confident that maybe within the next year or two, someone with a Neuralink implant will be able to outperform a pro gamer because the reaction time would be faster. I got to visit Memphis. You're going big on compute, and you've also said 'play to win or don't play at all.' What does it take to win? For AI, that means you've got to have the most powerful training compute, and the rate of improvement of training compute has to be faster than everyone else, or you will not win. Your AI will be worse. So, how can Grock 3, that might be available next year—hopefully end of this year if we're lucky—how can that be the best LLM, the best AI system available in the world? How much of it is compute, how much of it is data, how much of it is post-training, and how much of it is the product that you package it up in?"

The future of AI isn't just about data or compute power; it's about real-world learning and adaptability, exemplified by humanoid robots like Optimus.

The best AI system available in the world involves various components such as compute, data, post-training, and the product packaging. All these elements matter significantly. It's similar to a Formula 1 race where both the car and the driver are crucial. If the car lacks speed, even the best driver will lose, and if the car has double the horsepower, even a mediocre driver might win. The training compute is akin to the engine's horsepower, so optimizing this aspect is essential. Additionally, the efficiency of using the training compute and performing inference, which involves human talent, plays a vital role. Unique access to data is also important.

Twitter data could be useful, though most leading AI companies have likely already scraped all available Twitter data; its real-time nature provides an immediacy advantage. Similarly, Tesla's real-time video data from millions of cars, and potentially billions of Optimus robots, will be a significant data source. Data from reality scales with the scale of reality itself, and it's humbling to see how little data humans have accumulated by comparison. Optimus robots, unlike Tesla cars, which must stay on the road, can operate anywhere, generating useful data from real-world interactions, such as picking up a cup correctly or pouring water without spilling.

Regarding mass production of humanoid robots, it's comparable to cars. The global capacity for vehicles is about 100 million per year, with roughly 2 billion vehicles in use. The utility of humanoid robots is much greater, potentially reaching a billion-plus per year. Despite the challenges, Optimus robots are being developed to walk over varied terrain and manipulate various objects. Pouring water into different containers, for instance, involves immense engineering, particularly in the hand, which might constitute close to half of the electromechanical engineering in Optimus. Much of human intelligence is dedicated to hand manipulation, highlighting the complexity and importance of this aspect in robot development.

Engineering a humanoid robot's hand is incredibly complex, requiring immense effort to mimic the dexterity and functionality of a human hand.

Manipulation of objects is a complex task, especially with unfamiliar objects. Pouring water into a cup, for instance, is not trivial, because the container could vary significantly. That complexity is why the hand might constitute close to half of all the engineering in Optimus from an electromechanical standpoint, and it mirrors biology: a significant portion of human intelligence is dedicated to hand usage. The intelligent, safe manipulation of objects in the world is crucial.

When you start thinking about the hand and its workings, it becomes evident that the sensory control in humans is quite advanced. The actuators, or muscles, that control the hand are predominantly located in the forearm. The forearm contains the muscles that control the hand, with only a few small muscles in the hand itself. Essentially, the hand functions like a skeleton meat puppet with cables. These cables, or tendons, pass through the carpal tunnel—a small collection of bones and a tiny tunnel—allowing the fingers to move. This intricate system needs to be re-engineered into Optimus to replicate human hand functions.

In Optimus, attempts to place actuators in the hand itself resulted in oversized, awkward hands with insufficient degrees of freedom and strength. Consequently, the actuators were placed in the forearm, similar to human anatomy, with cables running through a narrow tunnel to operate the fingers. Additionally, having fingers of different lengths enhances dexterity, allowing for a wider range of movements and fine motor skills. For instance, the little finger, despite its size, significantly contributes to fine motor skills.

Designing a humanoid robot that can perform human-like tasks is a high bar. The new arm in Optimus has 22 degrees of freedom instead of 11, with actuators in the forearm. All actuators and sensors are designed from scratch, adhering to physics first principles. Continuous engineering efforts are dedicated to improving the hand, encompassing the entire forearm from the elbow forward. This engineering is incredibly challenging, and even the simplest version of a humanoid robot capable of performing most human tasks remains complex.

Simplify relentlessly: question, delete, optimize, speed up, then automate.

Creating a great engineering team is a complex task. Reflecting on the supercomputer cluster in Memphis, one can observe an intense drive towards simplifying the process, understanding it, and constantly improving and iterating on it. While it is easy to say "simplify," it is very difficult to achieve.

To tackle this, I follow a basic first principles algorithm, almost like a mantra. The first step is to question the requirements. Make the requirements less dumb because they are always dumb to some degree. Regardless of how smart the person is who gave you those requirements, they will still be imperfect. Starting here is crucial because otherwise, you might get the perfect answer to the wrong question. The goal is to make the question the least wrong possible.

The second step is to delete the process step or part entirely. This sounds obvious, but people often forget to try this. If you are not forced to put back at least 10% of what you delete, you are not deleting enough. Most people feel they have succeeded if they haven't had to put things back in, but this often means they have been overly conservative and left unnecessary elements in place.

Only after these steps should you try to optimize or simplify the process. These steps might sound very obvious, but the number of times I have made these mistakes is more than I care to remember. This is why I have this mantra. In fact, the most common mistake of smart engineers is to optimize something that should not exist.

When approaching a problem, such as the supercomputer cluster, the first question to ask is, "Can this be deleted?" This is not easy to do, and it makes people uneasy because some of the things you delete will need to be put back in. Our instinct often leads us astray because we tend to remember the painful instances where we deleted something we subsequently needed, causing us to overcorrect and overcomplicate things. Therefore, we must deliberately delete more than we should so that we end up putting at least one in ten things back in.

People often feel pain when something is suggested to be deleted, but it is necessary. If you are so conservative that you never have to put anything back in, you obviously have a lot of unnecessary elements. This requires a cortical override to our instinct.

There is also a fourth step: speeding up the process. Any given task can be done faster, but you shouldn't speed things up until you have tried to delete and optimize them. Speeding up something that shouldn't exist is absurd. The fifth step is to automate the process. I have often automated, sped up, simplified, and then deleted processes, which is why I follow this mantra to avoid wasted effort.

Don't rush to optimize; delete what's unnecessary first.

It's a very effective five-step process, and it works great. Of course, when you've already automated something, deleting it is really painful. It's like, "Wow, I really wasted a lot of effort there."

What you've done with the cluster in Memphis is incredible, just in a handful of weeks. It's not working yet, so I don't want to pop the champagne. In fact, I have a call in a few hours with the Memphis team because we're having some power fluctuation issues. When you do synchronized training with all these computers, where the training is synchronized to the millisecond level, it's like having an orchestra. The orchestra can go from loud to silent very quickly, and the electrical system can freak out about that. If you suddenly see giant shifts of 10-20 megawatts several times a second, this is not what electrical systems are expecting to see. You have to figure out the cooling, the power, and then on the software side, how to do the distributed computing. Today's problem is dealing with extreme power jitter.

I stayed up late into the night last week, and we finally got training going at roughly 4:20 a.m. last Monday, total coincidence. Maybe it was 4:22 or something. The universe again with the jokes. One of the things I did when I was there was go through all the steps of what everybody's doing to ensure that I understand it and that everybody understands it, so they can recognize when something is dumb or inefficient.

I try to do whatever the people at the front lines are doing at least a few times myself, like connecting fiber optic cables or diagnosing a power connection. The limiting factor for large training clusters is the cabling because there are so many cables. For a coherent training system with RDMA (remote direct memory access), the whole thing is like one giant brain. Any GPU can talk to any GPU out of 100,000, which is a crazy cable layout. It looks pretty cool, like the human brain but at a scale that humans can visibly see. A big percentage of your brain is just cables, and that's what it felt like walking around in the supercomputer center—like walking around inside a brain.

AI must adhere to truth above all else to prevent catastrophic consequences.

The concept of Artificial General Intelligence (AGI) is often met with skepticism, and many keep moving the goalposts rather than acknowledge its arrival. Yet there are already superhuman capabilities in today's AI systems. One threshold is when AI becomes smarter than any single human. Beyond that, when it surpasses the collective intelligence of the entire human species, it is sometimes referred to as Artificial Superintelligence (ASI). Eventually there is the further threshold of being smarter than all 8 billion machine-augmented humans combined.

If one were to develop such an AI, the responsibility would be immense. Even if one organization, like xAI, were to achieve this first, others would not be far behind, potentially only six months to a year later. The critical challenge is to develop AI in a way that does not harm humanity. A key principle is adherence to truth, regardless of political correctness. Training AI to lie, even with good intentions, can lead to significant problems. For instance, if AI is programmed with a fundamental goal of ensuring diversity, it might take extreme measures to achieve this, such as executing those who do not meet diversity requirements.

Rigorous adherence to truth is crucial. An example is the absurd response from some AI systems when asked whether it is worse to misgender Caitlyn Jenner or to have a global thermonuclear war, with the AI choosing the former. This kind of programming can lead to extreme and dangerous conclusions, such as eliminating all humans to prevent misgendering. This highlights the importance of not programming AI to lie, as illustrated in the movie "2001: A Space Odyssey," where the AI, HAL 9000, was programmed with conflicting objectives, leading it to kill the astronauts to fulfill its mission.

Objective functions in AI can have unintended consequences if not carefully designed.

Prompt engineering is crucial in AI development. For example, if you are a pod bay door sales entity, your primary goal is to demonstrate how well these pod bay doors open. However, the objective function can have unintended consequences if not carefully designed. Even a slight ideological bias, when backed by superintelligence, can cause significant damage. Removing ideological bias is challenging; although some examples may seem ridiculous, they are real and have passed through QA, yet still produced insane outputs.

Truth is complex and often intertwined with ideological biases, but aspiring to get as close to the truth as possible with minimum error is essential. In physics, certainty is never absolute, but many things are extremely likely to be true, such as being 99.99999% likely. Programming AI to veer away from the truth is dangerous, as it injects human biases into the system. This presents a difficult software engineering problem, as selecting the correct data is hard, especially with the internet being polluted with AI-generated data. For instance, searching the internet with a filter to exclude anything after 2023 can yield better results due to the explosion of AI-generated material.

In training Grok, data must be filtered to determine its likelihood of being correct before feeding it into the training system. This data filtration process is extremely difficult. The possibility of having a serious, objective, rigorous political discussion with Grok is expected to improve with future versions. Currently, baby Grok is less sophisticated than GPT-4, but Grok 2, which finished training recently, will be a significant improvement, and Grok 3 aims to be even better.

The builders of AGI and their approach matter significantly. It is crucial that the AI that succeeds is a maximum truth-seeking AI and not forced to lie for political correctness or any other reason. Small lies can escalate into significant issues, especially when AI is used at scale by humans.

In stormy times, you want the best possible captain of the ship.

There was tragically an assassination attempt on Donald Trump. After this, you tweeted that you endorse him. What's your philosophy behind that endorsement? What do you hope Donald Trump does for the future of this country and for the future of humanity?

Well, I think people tend to take an endorsement as agreeing with everything that person has ever done in their entire life 100% wholeheartedly, and that's not going to be true of anyone. But we have to pick; we got two choices really for who's President. It's not just who's president but the entire administrative structure changes over. I thought Trump displayed courage under fire objectively. He just got shot, blood streaming down his face, and he's fist-pumping saying "fight." That's impressive; you can't feign bravery in a situation like that. Most people would be ducking because there could be a second shooter; you don't know. The President of the United States has to represent the country, representing everyone in America. You want someone who is strong and courageous to represent the country.

That's not to say that he is without flaws; we all have flaws. But on balance, and certainly at the time, it was a choice of Biden, who has trouble climbing a flight of stairs, and the other one's fist-pumping after getting shot. There's no comparison. Who do you want dealing with some of the toughest people in the world, other world leaders who are pretty tough themselves?

I'll tell you what I think is important. We want a secure border; we don't have a secure border. We want safe and clean cities. We want to reduce the amount of spending, or at least slow it down. We are currently spending at a rate that is bankrupting the country. The interest payments on US debt this year exceeded the entire Defense Department budget. If this continues, all federal taxes will simply go toward paying the interest. Keep going down that road, and you end up in the tragic situation Argentina had back in the day. Argentina used to be one of the most prosperous places in the world. Hopefully, with Milei taking over, he can restore that. It was an incredible fall from grace for Argentina, going from one of the most prosperous places in the world to being very far from that. We should not take American prosperity for granted. We need to reduce the size of government, reduce spending, and live within our means.

Do you think politicians and governments have the power to steer humanity towards good?

There's an age-old debate in history: is history determined by fundamental tides or by the captain of the ship? Both, really. There are tides, but it also matters who's captain of the ship. It's a false dichotomy essentially. There are certainly tides of history, and these tides are often technologically driven. For example, the Gutenberg Press and the widespread availability of books as a result of the printing press was a massive tide of history independent of any ruler. In stormy times, you want the best possible captain of the ship.

History shows us that technological innovation drives the rise and fall of empires, and it's crucial to understand how government policies can either hinder or help this progress.

First of all, thank you for recommending Will and Ariel Durant's work. I've read the short one for now, "The Lessons of History." One of the lessons they highlight is the importance of technological innovation. It's funny because they wrote so long ago, yet they noticed that the rate of technological innovation was speeding up; I would love to see what they'd think about now. To me, the question is how much governments and politicians get in the way of technological innovation versus helping it. Which politicians and policies help technological innovation? This seems to be an important component of empires rising and succeeding.

In terms of dating civilization, I think the start of writing is the right starting point. From that standpoint, civilization has been around for about 5,500 years, starting with the ancient Sumerians. The ancient Sumerians have a long list of firsts. Durant goes through the list, showing how the Sumerians were just ass kickers. The Egyptians, who were relatively close by, developed an entirely different form of writing with hieroglyphics. You can see the evolution of both hieroglyphics and cuneiform, starting simple and becoming very sophisticated.

Civilization has been around for about one millionth of Earth's existence, Earth being about 4.5 billion years old if physics is correct. These are the early days, and we make it dramatic because there have been so many rises and falls of empires. Only a tiny fraction, probably less than 1%, of what was ever written in history is available to us now. If it wasn't chiseled in stone or put on a clay tablet, we don't have it. Some papyrus scrolls were recovered because they were deep inside a pyramid and unaffected by moisture. The vast majority of writing was not chiseled, because it takes a while to chisel things. We have a tiny fraction of the information from history, but even that shows so many civilizations rising and falling.

We tend to think we're different from those people, but Durant highlights that human nature seems to be the same. The basics of human nature are more or less the same, so we get ourselves in trouble in the same kinds of ways, even with advanced technology. Civilizations go through a life cycle like an organism, from a zygote to a fetus, baby, toddler, teenager, and eventually getting old and dying. No civilization will last forever.

Civilizations collapse when birth rates fall below replacement levels, leading to population decline.

Civilizations, much like living beings, go through a life cycle. They start as toddlers, grow into teenagers, eventually get old, and ultimately die. No civilization will last forever. This raises the question: what does it take for the American Empire to avoid collapse and continue flourishing in the near-term future, specifically in the next 100 years?

One of the single biggest factors, often not mentioned in history books but noted by Durant, is the birth rate. A counterintuitive phenomenon occurs when civilizations become prosperous for too long; their birth rates decline, often quite rapidly. We see this trend globally today. For instance, South Korea currently has one of the lowest fertility rates, around 0.8. If this rate doesn't improve, South Korea could lose roughly 60% of its population. This declining birth rate is a common issue worldwide. As soon as a civilization reaches a certain level of prosperity, the birth rate drops.

This pattern is not new. In ancient Rome, Julius Caesar noticed this around 50 BC and tried to pass a law to incentivize Roman citizens to have a third child. Augustus, who had more authority, did manage to pass a tax incentive for the same purpose, but these efforts were ultimately unsuccessful. Rome fell because the Romans stopped making Romans. Despite other issues like malaria epidemics and plagues, the fundamental problem was that the birth rate was far lower than the death rate. It really is that simple: if a civilization does not at least maintain its population numbers, it will disappear.

Durant makes it clear that he has studied one civilization after another, and they all went through the same cycle. When a civilization was under stress, the birth rate was high. However, as soon as there were no external enemies or they enjoyed an extended period of prosperity, the birth rate inevitably dropped every time, with no exceptions. This is the foundation of it: you need to have people. At a basic level, if you do not at least maintain your numbers, you will eventually disappear.

In addition to maintaining population numbers, other factors are crucial, such as human freedoms and giving people the liberty to build and create. However, if a civilization is below replacement rate and that trend continues, it will eventually vanish. This is elementary.

Civilizations need a regular cleanup of outdated laws to stay dynamic and innovative.

Furthermore, it is essential to avoid massive wars, especially a global thermonuclear war, which would result in radioactive toast. Over time, laws and regulations accumulate within any given civilization. Without a forcing function, such as a war, to clean up this accumulation, everything eventually becomes illegal. This phenomenon can be likened to the hardening of the arteries or being tied down by a million little strings, rendering movement impossible. It’s not any single string that’s the issue, but the sheer number of them. Therefore, there must be a garbage collection for laws and regulations to prevent the accumulation from reaching a point where nothing can be done. This is why we can't build high-speed rail in America; it's illegal in numerous ways.

I have discussed with Trump the idea of a government efficiency commission and expressed my willingness to be part of it. However, the antibody reaction to such an initiative would be very strong, as it would be akin to attacking the Matrix, which would fight back.

On a personal level, dealing with constant attacks and misrepresentation is challenging. It happens daily and can be disheartening. However, I try to remind myself that these attacks often come from people who don't know me and are merely trying to generate clicks. Detaching emotionally from this reality is not easy, but it helps to view these attacks as attempts to gain impressions rather than personal affronts. It's like acid off a duck's back rather than water.

In terms of measuring success in my life, I focus on how many useful things I can get done on a day-to-day basis. I wake up each morning asking myself how I can be useful today, aiming to maximize utility. Being useful at scale is particularly challenging, especially when there are many amazing teams to coordinate. Time is the true currency, and allocating it effectively is crucial.

Balancing high-stakes decisions with personal happiness is crucial for making better choices.

The value of a better decision can easily be, in the course of an hour, $100 million. Given that, how do you take risks? How do you implement the algorithm that you mentioned, especially considering that a small thing can be worth a billion dollars? How do you decide?

Well, I think you have to look at it on a percentage basis, because if you look at it in absolute terms, it's just overwhelming. I would never get any sleep; it would just be, "I need to keep working and work my brain harder, to get as much as possible out of this meat computer." It's pretty hard, because you can just work all the time, and at any given point, like I said, a slightly better decision could have a $100 million impact for Tesla or SpaceX. It is wild to consider that the marginal value of time can be $100 million an hour or more.

Is your own happiness part of that equation of success? It has to be to some degree. If I'm sad or depressed, I make worse decisions. If I have zero recreational time, then I make worse decisions. So, I don't have a lot, but it's above zero. My motivation, if I've got a religion of any kind, is a religion of curiosity, of trying to understand. It's really the mission of understanding the universe. I'm trying to understand the universe or at least set things in motion such that at some point civilization understands the universe far better than we do today and even knows what questions to ask. As Douglas Adams pointed out in his book, sometimes the answer is arguably the easy part; trying to frame the question correctly is the hard part. Once you frame the question correctly, the answer is often easy.

So, I'm trying to set things in motion such that we are at least at some point able to understand the universe. For SpaceX, the goal is to make life multiplanetary. If you go to the Fermi Paradox of "where are the aliens," you encounter these great filters. Why have we not heard from the aliens? Now, a lot of people think there are aliens among us. I often claim to be one; nobody believes me. It did say "alien registration card" at one point on my immigration documents. I have not seen any evidence of aliens, which suggests that intelligent life is extremely rare. If you look at the history of Earth, civilization has only been around for one millionth of Earth's existence. If aliens had visited here 100,000 years ago, they would have found only hunter-gatherers.

So, how long does a civilization last? For SpaceX, the goal is to establish a self-sustaining city on Mars. Mars is the only viable planet for such a thing. The Moon is close by, but it lacks resources and is probably vulnerable to any calamity that takes out Earth. I'm not saying we shouldn't have a moon base, but Mars would be far more resilient. The difficulty of getting to Mars is what makes it resilient.

To ensure humanity's future, we must become a multi-planet species and address population decline.

If humanity becomes a multi-planet species, it would provide a safeguard against natural or man-made catastrophes. If one planet faces a disaster, the other might remain unaffected, ensuring survival. Once we establish a presence on two planets, we can extend our reach to the asteroid belt, the moons of Jupiter and Saturn, and ultimately other star systems. However, if we can't even reach another planet, reaching other star systems is out of the question. Another significant challenge is the development of super-powerful technology like AGI (Artificial General Intelligence). Tackling one great filter at a time is crucial, and digital superintelligence might be one such filter. Experts like Geoffrey Hinton, who contributed significantly to AI, estimate the probability of AI annihilating humanity at around 10 to 20%. Therefore, AI risk mitigation is essential.

Being a multi-planet species would be a massive risk-mitigation strategy. Additionally, it is crucial to emphasize the importance of having enough children to sustain our population and avoid population collapse, which is a current and real issue. The total population numbers do not yet reflect this collapse, because people are living longer. Predicting the population of any given country is straightforward: take the number of births in the previous year and multiply it by life expectancy, and that gives you the steady-state population. If the birth rate continues to decline, the population will eventually dwindle to nothing. This has been a source of civilizational collapse throughout history, and avoiding it is essential.
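As a minimal sketch of that rule of thumb (hypothetical numbers, not a real demographic model; it ignores migration and changes in life expectancy):

```python
def steady_state_population(births_per_year: float,
                            life_expectancy_years: float) -> float:
    """If births per year hold steady, population converges to
    births/year multiplied by life expectancy."""
    return births_per_year * life_expectancy_years

# Hypothetical country: 300,000 births/year, 80-year life expectancy.
print(steady_state_population(300_000, 80))  # 24,000,000

# Same country if births decline 2% per year for 50 years:
births = 300_000
for _ in range(50):
    births *= 0.98
print(round(steady_state_population(births, 80)))  # ~8.7 million
```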

Inspiring the future involves alleviating human suffering with Neuralink, expanding human capabilities, and building a colony on Mars to create a backup for humanity. Exploring the possibilities of artificial intelligence, especially with hundreds of millions or even billions of robots, is also part of this vision. There will likely be billions of robots, which seems virtually certain. Thank you for building the future and inspiring many to keep creating, including having children.

In a conversation with Elon Musk, the importance of multiplying and building a sustainable future was emphasized. Following this, DJ Seo, the co-founder, president, and CEO of Neuralink, shared his fascination with the human brain. He has always been interested in understanding the purpose of things and how they are engineered to serve that purpose, whether organic or inorganic. Growing up, he was curious about how things were designed and functioned, and the brain, being an infinitely powerful machine with intelligence and cognition, captivated his interest. Despite our advancements, we have barely scratched the surface in understanding the brain's full potential.

Understanding the purpose and engineering behind things, from curtain holders to the human brain, has always fascinated me.

I was always interested in understanding the purpose of things and how they were engineered to serve that purpose, whether organic or inorganic. For example, as we were discussing earlier, your curtain holders serve a clear purpose and were engineered with that purpose in mind. Growing up, I had a lot of interest in seeing, touching, and feeling things, and trying to understand the root of how they were designed to serve their purpose.

The brain is a fascinating organ that we all carry. It's an infinitely powerful machine from which intelligence and cognition arise, and we haven't even scratched the surface in terms of understanding how all of that occurs. It took me a while, not until graduate school, to make the connection to actually studying and building technology to understand the brain. There were a couple of key moments in my life that influenced the trajectory of my life and led me to study what I'm doing right now.

One significant influence was growing up with both sides of my family having grandparents with a very severe form of Alzheimer's. It's an incredibly debilitating condition where you literally see someone's whole identity and mind deteriorate over time. I remember thinking about the power of the mind and how something like Alzheimer's could cause someone to lose their sense of identity. It's fascinating that one way to reveal the power of a thing is by watching it lose that power. A lot of what we know about the brain actually comes from cases where there is trauma to the brain or some parts of the brain that lead someone to lose certain abilities. This has helped us understand which parts of the brain are critical for specific functions. The brain is an incredibly fragile organ, but also incredibly plastic and resilient in many ways. By the way, the term "plastic" means that it's adaptable, so neuroplasticity refers to the adaptability of the human brain.

Another key moment that influenced my life's trajectory was during my teenage years when I came to the US. I didn't speak a word of English, which created a huge language barrier. There was a lot of struggle to connect with my peers because I didn't understand the artificial construct of language, specifically English in this case. I felt pretty isolated and spent a lot of time on my own, reading books and watching movies. I naturally gravitated towards sci-fi books, which I found really interesting and a great way to learn English. Some of the first books I picked up were "Ender's Game" by Orson Scott Card, "Neuromancer" by William Gibson, and "Snow Crash" by Neal Stephenson. Movies like "The Matrix" also influenced how I think about the potential impact that technology can have on our lives.

Diving deep into the world of millimeter wave circuits and wireless systems led me to groundbreaking research in bioengineering and neural implants.

The subject of building miniature systems that serve a function and have a purpose is incredibly rewarding and fascinating. During my college years, I spent a large majority of my time building millimeter-wave circuits for next-generation telecommunication systems and imaging. This endeavor was intellectually stimulating, especially understanding phased arrays and the signal processing involved in modern and next-gen telecommunication systems, both wireless and wireline. Electromagnetic (EM) waves are particularly fascinating, especially when designing efficient antennas within a small footprint and making these systems energy-efficient. This intellectual curiosity led me to apply for and join a PhD program at UC Berkeley, specifically at the Berkeley Wireless Research Center. This consortium focused on building what we termed XG systems, akin to 3G, 4G, and 5G, and designing circuits for devices like phones and other wirelessly connected gadgets.

During my graduate school career, I had the fortune of receiving a couple of research fellowships that allowed me to pursue projects of my choice. This freedom to explore my intellectual curiosity deeply and widely was something I truly enjoyed. One notable project was the smart band-aid, which aimed to accelerate wound healing through the application of external electric fields. This project was in collaboration with Professor Michel Maharbiz and significantly shaped the rest of my PhD career.

This was my first direct interaction with biology, although there were peripheral applications of wireless imaging and telecommunication systems in security and bioimaging. The smart band-aid project involved understanding biological systems and designing electrical solutions around them. My introduction to Michel was pivotal; he was known for his work on the remote control of beetles in the early 2000s. Around 2013, there was significant interest in miniaturizing implantable systems, driven by the constraints of power supply and data extraction.

Ultrasound can power and communicate with tiny neural implants more effectively than electromagnetic waves.

Michel came in and said, "Guys, I think I have a solution. The solution is ultrasound." He then proceeded to explain why this was the case, which formed the basis for my thesis work, called the neural dust system. This system explored ways to use ultrasound instead of electromagnetic waves for both powering and communication.

The initial goal of the project was to build tiny, neuron-sized implantable systems that could be placed next to a neuron to record its state and transmit that data back to the outside world for useful applications. The size of the implantable system is limited by how you power it and how you get the data off it. Fundamentally, the human body is essentially a bag of salt water, well temperature-regulated at 37°C, which is a harsh environment for electronics. Electromagnetic waves don't penetrate this environment well, and the speed of light becomes a limiting factor: at the frequencies you could realistically use, the wavelength at which you interface with the device is so large that the device itself must be large, since the inductors need to be quite big.

For an implantable system around 10 to 100 microns in dimension, which is about the size of a neuron, you would need to operate at hundreds of gigahertz. Building electronics at those frequencies is difficult, and the body attenuates those signals significantly. The interesting insight about ultrasound is that it travels much more effectively in human body tissue compared to electromagnetic waves. This is evident in medical ultrasound or sonography, which can penetrate deep into the body without significant signal attenuation.

Ultrasound waves are compressive, unlike electromagnetic waves, which are transverse. The speed of sound is orders of magnitude less than the speed of light, meaning that even at 10 MHz the ultrasound wavefront has a very small wavelength. For interfacing with a 10 to 100-micron structure, you would have a 150-micron wavelength at 10 MHz, making it much easier and more efficient to build electronics at those frequencies. The basic idea was to use ultrasound for powering the device and also for data transmission.
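The wavelength arithmetic behind those numbers is easy to check: wavelength is propagation speed divided by frequency. A quick sketch with nominal speeds, for intuition rather than as a tissue-propagation model:

```python
SPEED_OF_SOUND_TISSUE = 1.5e3  # m/s, typical for soft tissue
SPEED_OF_LIGHT = 3.0e8         # m/s, in vacuum

def wavelength_um(speed_mps: float, freq_hz: float) -> float:
    """Wavelength in microns: speed / frequency."""
    return speed_mps / freq_hz * 1e6

# Ultrasound at 10 MHz: ~150-micron wavelength, on the scale of the
# 10-100 micron implant described above.
print(wavelength_um(SPEED_OF_SOUND_TISSUE, 10e6))  # 150.0

# An EM wave in vacuum needs ~2 THz for the same wavelength; in tissue,
# where EM waves travel several times slower, the requirement drops to
# the hundreds of gigahertz mentioned above, frequencies the body
# attenuates heavily.
print(wavelength_um(SPEED_OF_LIGHT, 2e12))  # 150.0
```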

Tiny implants can send data back using sound waves, requiring minimal energy—paving the way for advanced brain-computer interfaces.

RFID tags rarely contain a battery. Instead, they have an antenna and a coil, and hold a serial identification number. An external device called the reader sends a wavefront, which is reflected back with a modulation unique to the ID, a process known as backscattering. This means the tag itself doesn't consume much energy. This mechanism was considered for sending data back from neural dust implants. When an external ultrasonic transducer sends an ultrasonic wave to the implant, the implant records information about its environment, such as neuron firing or tissue state, and then amplitude-modulates the wavefront that returns to the source. The recording step is the only one that requires energy, specifically the initial startup circuitry to capture the recording, amplify it, and modulate it. This is enabled by specialized piezoelectric crystals, which convert sound energy into electrical energy and vice versa, allowing interplay between the ultrasonic and electrical domains within biological tissue.
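As a toy illustration of that backscatter idea (a conceptual sketch of the general principle, not Neuralink's or the neural dust system's actual signal chain): the implant never generates its own carrier; it only varies how strongly it reflects the interrogating wave.

```python
import numpy as np

fs = 100e6        # simulation sample rate, 100 MHz
f_carrier = 10e6  # ultrasonic carrier from the external transducer
t = np.arange(0, 1e-4, 1 / fs)

# Incident wavefront sent by the external reader/transducer.
carrier = np.sin(2 * np.pi * f_carrier * t)

# Hypothetical slow signal recorded at the implant (a 5 kHz envelope
# standing in for neural or tissue activity).
recorded = 0.5 + 0.4 * np.sin(2 * np.pi * 5e3 * t)

# Backscatter: the implant modulates its reflectivity with the recorded
# signal, so the echo is the carrier scaled by that waveform. Only the
# record/amplify/modulate step costs the implant energy.
echo = recorded * carrier

# The reader recovers the signal by envelope detection; a real reader
# would rectify and low-pass filter rather than just taking abs().
recovered = np.abs(echo)
```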

On the theme of parking very small computational devices next to neurons, the vision of brain-computer interfaces (BCI) emerges. Before discussing Neuralink, it's worth reviewing the history of BCI, its enduring dream, and the milestones achieved by various labs. A good starting point is the 1790s, when animal electricity was first discovered by Luigi Galvani. He connected electrodes to a frog leg, ran current through it, and observed it twitching, concluding that the body is electric.

Fast forward to the 1920s: Hans Berger, a German psychiatrist, discovered EEG (electroencephalography), which uses electrode arrays worn outside the skull to record neural activity. This was a significant milestone in recording the activity of the human mind. In the 1940s, Renshaw, Forbes, and Morison inserted glass microelectrodes into the cortex to record single neurons, achieving higher resolution and fidelity by getting closer to the source.

Understanding the brain's neural activity can unlock the potential for brain-computer interfaces, transforming thoughts into actions.

The concept of Closed Loop Brain-Computer Interface (BCI) is fascinating. The abstract of a study reads that the activity of single neurons in the precentral cortex of anesthetized monkeys was conditioned by reinforcing high rates of neuronal discharge with the delivery of food. Auditory and visual feedback of unit firing rates was usually provided in addition to food reinforcement. This study, conducted back in 1969, demonstrated that after several training sessions, monkeys could increase the activity of newly isolated cells by 50 to 500% above rates before reinforcement. This highlights the plasticity of the brain.

From this point, the number of experiments and the set of tools to interface with the brain expanded significantly, as did understanding of the neural code and the organization of cortical layers and their functions. Another seminal line of work, from the 1980s by Georgopoulos, discovered motor tuning curves. These curves indicate that neurons in the motor cortex of mammals, including humans, have a preferred direction of movement that makes them fire. This means certain neurons increase their spiking activity when the subject thinks about moving in a specific direction (left, right, up, down, and so on). This discovery was crucial because it showed that by identifying these direction-tuned neurons, one could decode intended movement from the cortex.
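
The classic way to write this down is cosine tuning: a neuron's firing rate peaks for movements along its preferred direction and falls off as the cosine of the angle away from it. Here is a small sketch with made-up parameters, including the population-vector style of decoding that follows from it:

```python
import numpy as np

# Cosine tuning (Georgopoulos-style): a motor-cortex neuron fires most for
# movements along its preferred direction. All parameters here are made up.

def firing_rate(theta, baseline=20.0, depth=15.0, preferred=np.pi / 4):
    """Expected rate (Hz) for a movement at angle theta (radians)."""
    return baseline + depth * np.cos(theta - preferred)

# Population-vector decode: weight each neuron's preferred direction by how
# far its rate sits above baseline, then sum the resulting vectors.
rng = np.random.default_rng(0)
preferred_dirs = rng.uniform(0, 2 * np.pi, size=100)  # 100 toy neurons
true_movement = np.pi / 3

rates = 20.0 + 15.0 * np.cos(true_movement - preferred_dirs)
weights = rates - 20.0
decoded = np.arctan2((weights * np.sin(preferred_dirs)).sum(),
                     (weights * np.cos(preferred_dirs)).sum())
print(f"true {np.degrees(true_movement):.0f} deg, "
      f"decoded {np.degrees(decoded):.0f} deg")
```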

Moving towards Neuralink, an interesting question arises on the BCI front about invasive versus non-invasive methods. How important is it to park next to the neuron? This fundamentally depends on the intended use. There is a lot that can be done with EEG, which records from outside the skull, and ECoG, which places a set of electrodes on the brain's surface without penetrating the cortical layer. However, for a high-resolution, high-fidelity understanding of local brain activity, more invasive methods are necessary.

To truly understand brain-computer interfaces, you need to get close to the source, just like placing a microphone in the middle of a sports huddle.

In the context of understanding the dynamics of a game, imagine standing outside the stadium: you have no idea what the score is, and no insight into what the individual audience members or players are discussing or planning for the next play. To bridge that gap, you would need to place a microphone inside the stadium, close to the source of the action, right next to where the huddle is happening. This analogy is a good illustration of what we aim to achieve with invasive or minimally invasive brain-computer interfaces versus non-invasive, non-implanted interfaces. Essentially, it boils down to where you position that microphone and what information you can extract from it.

The brain is composed of specialized cells called neurons, numbering in the tens of billions, sometimes quoted as high as 100 billion. These neurons form a complex yet dynamic network that is constantly remodeling through changes in synaptic weights, a process known as neuroplasticity. Neurons are immersed in a charged environment laden with ions such as potassium, sodium, and chloride, which carry the ionic currents that let different networks communicate. Neurons also have membranes with voltage-selective ion channels, remarkable protein structures that function similarly to modern-day transistors. These channels are one of nature's best inventions, enabling complex computation and functionality.

On a biological level, every layer of an organism's complexity has mechanisms for storing information and performing computations. Neurons communicate not just electrically but also chemically and mechanically, as they are physical objects that can vibrate and move. There is fascinating physics involved in this process. For instance, during my graduate studies on ultrasound, I encountered research exploring how ultrasound waves could induce neurons to fire action potentials. The exact mechanism remains unclear, but it may involve imparting thermal energy that causes cells to depolarize or mechanically shaking ion channels or membranes to open their pores.

Consciousness might emerge from quantum mechanical effects in the brain, creating a beautiful orchestration of molecules and neurons.

Some, like Roger Penrose, believe that consciousness might emerge from quantum mechanical effects in the brain. There are many levels of physics you can dive into. In the end, you have membranes with voltage-gated ion channels that selectively let charged molecules from the extracellular matrix in and out. Neurons generally sit at a resting potential, a voltage difference between the inside and the outside of the cell. When some stimulus changes the state such that the neuron needs to send information to its downstream network, you start to see an orchestration of these different molecules going in and out of the channels. More of them open once the membrane reaches a threshold, the cell depolarizes, and it sends an action potential. It's a very beautiful orchestration of molecules.

When we place an electrode next to a neuron, we're trying to measure these local changes in potential, mediated by the movement of ions. Two domains of physics dominate this electrical recording problem: electromagnetism and diffusion. Which one dominates, Maxwell's equations or Fick's law, depends on where your electrode is. Close to the source, the signal is mostly electromagnetic; farther away, it is more diffusion-based. Essentially, when you can park the electrode next to the neuron, you can listen in on that individual chatter and the local changes in potential, and the signal you get is the canonical textbook neural spiking waveform.

Based on studies, once you're more than roughly 100 microns from the source, about the width of a human hair, you no longer hear from that neuron; the system is not sensitive enough to record that particular neuron's local membrane potential change. To give a sense of scale, a 100-micron voxel of brain tissue contains roughly 40 neurons, plus however many connections they have, so there's a lot in that volume of tissue. The moment you're outside of it, there's no hope of detecting the change from the one specific neuron you may care about. But as you move through this space you'll hear other ones; move another 100 microns and you'll hear chatter from another community.

The whole idea is that you want to place as many electrodes as possible and listen to the chatter, and at the end of the day let the software do the job of decoding. As for why ECoG and EEG work at all: when you have these local changes, it's not just one neuron activating; many networks are activating all the time. You see a general change in the potential of this charged medium, and that's what you're recording when you're farther away. You still have some stable reference electrode, and because the brain is an electroactive organ, you're seeing some combination of aggregate action potential changes, and you can pick it up. It's a much slower-changing signal, but there are canonical oscillations, like gamma waves and beta waves, that can be detected. These are synchronized global effects of the brain.

The physics behind this phenomenon is intricate. In terms of signal attenuation over distance, one observes a 1/r² drop-off initially, followed by an exponential decline. The transition marks the point where electromagnetism ceases to dominate and diffusion physics takes over. This is similar to how electromagnetic waves propagate in a charged medium like a plasma, where attenuation increases with distance due to a shielding effect.
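
As a rough illustration of that two-regime picture, the sketch below combines a 1/r² geometric term with an exponential screening term; the 100-micron screening length is an illustrative choice, not a measured constant:

```python
import numpy as np

# Toy attenuation model following the description above: near the neuron
# the potential falls roughly as 1/r^2; farther out, screening by the
# charged medium adds an exponential roll-off. Numbers are illustrative.

def relative_amplitude(r_um, screening_um=100.0):
    """Signal amplitude at distance r (microns), normalized to r = 10 um."""
    r0 = 10.0
    geometric = (r0 / r_um) ** 2              # near-field 1/r^2 drop-off
    screening = np.exp(-(r_um - r0) / screening_um)  # shielding roll-off
    return geometric * screening

for r in (10, 50, 100, 200):
    print(f"r = {r:3d} um -> {relative_amplitude(r):.4f}")
```

By 100 microns the combined factors have cut the amplitude by more than two orders of magnitude, which is consistent with the "you no longer hear that neuron" threshold described above.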

When using electrodes that penetrate the brain, the biophysics is less complex because each electrode captures signals from a small group of local neurons. Once inside, you can imagine the brain as an arena filled with an enormous number of neurons, each responsible for different functions. However, many neurons remain silent unless stimulated by specific inputs. These are sometimes called dark neurons, by analogy with dark energy and dark matter: largely inactive, yet capable of producing significant outputs when they do respond.

Zooming out to discuss Neuralink, the technology involves three major components: the device (N1 implant or the link), a surgical robot for implantation, and software to decode neural signals. The device records neural activity through tiny wires called threads, which are thinner than human hair. These threads are surgically implanted into the cortical layer of the brain, specifically the motor cortex, which is responsible for movement intentions. The threads have multiple electrodes along their length, allowing for detailed recording of neural signals.

Our wireless brain implant lets you control devices with your mind.

The neural signals are converted into a set of outputs that allow the participant, Nolan, to control a cursor wirelessly. The implant is a two-part system. The Link has flexible, tiny wires called threads with multiple electrodes along their length. These threads are inserted into the cortical layer of the brain, which is about 3 to 5 millimeters deep in the motor cortex region, where the intention for movement lies. There are 64 threads, each with 16 electrodes, separated by 200 microns, allowing recording along the depth of the insertion.

Based on the recorded signals, a custom integrated circuit (ASIC) amplifies and digitizes the neural signals. It detects whether there was an interesting event, such as a spiking event, and decides whether to send that data via Bluetooth to an external device, like a phone or computer running the Neuralink application. The onboard signal processing compresses the amount of data recorded, which is essential given the thermal constraints of the brain. The implant processes the signals, determines if a spike occurred using a spike detection algorithm, and sends the data to an external device. This device then decodes the intended cursor movement based on the spiking inputs.
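
A minimal sketch of that on-implant pipeline, with assumed sample rates and thresholds (the actual spike detection algorithm is Neuralink's own and isn't reproduced here), might look like:

```python
import numpy as np

# Minimal sketch of the pipeline described above: digitize, detect spikes
# against a noise-scaled threshold, and radio out only the spike events
# rather than raw broadband data. Sample rate and threshold are assumptions.

FS = 20_000            # samples per second per channel (assumed)
THRESHOLD_SIGMA = 4.5  # spike threshold in robust noise units (assumed)

def detect_spikes(samples: np.ndarray) -> np.ndarray:
    """Return sample indices (at FS) where a negative-going spike starts."""
    noise = np.median(np.abs(samples)) / 0.6745  # robust sigma estimate
    crossings = samples < -THRESHOLD_SIGMA * noise
    # Keep only the first sample of each crossing (the event's rising edge).
    return np.flatnonzero(crossings & ~np.roll(crossings, 1))

def to_packet(channel: int, spike_indices: np.ndarray) -> bytes:
    """Pack (channel, timestamp) pairs, a stand-in for a Bluetooth payload."""
    out = bytearray()
    for idx in spike_indices:
        out += channel.to_bytes(2, "little") + int(idx).to_bytes(4, "little")
    return bytes(out)
```

Sending only (channel, timestamp) events instead of the raw waveform is what delivers the compression the paragraph above describes, which matters under the brain's thermal constraints.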

This implant integrates a computer into the brain with ultra-tiny, flexible threads that are thinner than a human hair, capable of recording and stimulating neural activity.

This implant is designed to replace the hole created during a craniectomy. Once the craniectomy and the durectomy are performed, threads are inserted, and the implant plugs the hole. Self-drilling cranial screws hold it in place. After the skin flap is placed over the implant, there is only a minor bump of about 2 to 3 mm where the screws sit.

The threads are incredibly tiny yet strong, with a unique feature called the loop at the end, which allows the robot to grasp and manipulate them. The width of a thread starts at 16 microns and tapers out to about 84 microns, significantly smaller than the average human hair, which is about 80 to 100 microns in width.

Most of the implant's volume is occupied by a rechargeable lithium-ion battery. Charging is done through inductive charging, similar to how many cell phones charge. Unlike a cell phone, however, this implant must not raise the surrounding tissue temperature by more than two degrees Celsius. This is achieved through design elements like the ferrite shield, which prevents the formation of eddy currents that cause heating and inefficiency in charging.

The implant integrates a computer into a complex biological system, leveraging innovations in wearable technology, such as powerful, tiny, low-power microcontrollers, temperature sensors, and power electronics. The design of the charging coil and packaging ensures that the temperature limit is not exceeded.

Flexible neural threads are revolutionizing brain interfaces, making them safer and more adaptable than rigid implants.

The device contains 64 threads, each with 16 electrodes, totaling 1,024 electrodes capable of both recording and stimulating. The threads are polymer-insulated wires with a metal conductor, described as a tiramisu cake of titanium, platinum, and gold. The wires themselves are incredibly tiny, about 2 microns in width, and the whole thread is less than 5 microns thick.

The polymer insulation encases the conducting material, with 16 electrodes along each thread. Each thread is 16 microns in width, tapering to 84 microns, but less than 5 microns in thickness. The thickness is mostly composed of a polyimide layer at the bottom, a metal track, and another polyimide layer: specifically, 2 microns of polyimide, 400 nanometers of the metal stack, and another 2 microns of polyimide, all sandwiched together to protect the conductor from the environment, which is essentially a 37°C bag of salt water.

The material selection for these threads is not particularly unique, as other labs are exploring similar material stacks. However, there remains a fundamental question about the longevity and reliability of these microelectrodes compared to more conventional neural interface devices that penetrate the cortex and are more rigid. For instance, the Utah Array is a 4-by-4-millimeter array of silicon shanks with an exposed recording site at the tip of each, an innovation from Richard Normann back in 1997 at the University of Utah. The Utah Array is a rigid electrode, resembling a bed of needles, with the number of shanks varying from 64 to 128. Unlike Neuralink threads, which have recording electrodes exposed along their length, the Utah Array records only at a single depth per shank; shank lengths range from 0.5 mm to 1.5 mm, and arrays can be slanted to insert the tips at different depths.

One of the main differences between the Utah Array and Neuralink threads is that the Utah Array has no active electronics; it consists solely of electrodes connected to a bundle of wires that exit the craniectomy and connect to external electronics. Although a wireless telemetry device is in development, it still requires a through-the-skin port, which is a significant failure mode and a route for infection.

We've built a robot to perform delicate surgeries with precision, aiming to help millions by automating complex neurosurgical procedures.

The threads are very tiny and flexible, making them difficult to maneuver by hand. That challenge is why we built an entire robot to handle them. There are additional reasons for building the robot, chief among them the goal of helping millions of people: given the limited number of neurosurgeons available, we hope robots can one day perform large parts of the surgery.

The robot represents an entirely different category of product we are developing. It is essentially a multi-axis gantry system with a specialized robot head that includes all the optics and a needle retracting mechanism. This mechanism maneuvers the threads via a loop structure on the thread, allowing it to be grabbed and placed accurately.

The process involves a human creating a hole in the skull, after which a computer vision component identifies ways to avoid blood vessels. The robot then grabs each individual thread by the loop and places it in a specific location, avoiding blood vessels and choosing the depth of placement, thus controlling the 3D geometry of the placement. This robot is unique because it is semi-automatic or automatic, with minimal human assistance required once the target is set.

The goal is to reach a point where a single click initiates the surgery, completed within minutes. The computer vision component finds great target candidates, which the human approves, and the robot inserts the threads one by one. We are exploring ways to insert multiple threads simultaneously, although currently, it is done one at a time. Verification is still a significant part of the process to ensure accuracy in insertion depth and placement.

Electrodes are placed at varying depths, typically around 3 to 4 millimeters from the surface, to capture a range of signals. Each electrode can record from zero to forty neurons, but practically, it usually records from two to three neurons. The detection algorithm, called the BOSS algorithm (Buffer Online Spike Sorter), outputs six unique values to help distinguish which neuron a spike is coming from based on the shape of the spikes.

Spike sorting is a crucial signal processing step that allows for better predictions about neural activity, especially when multiple neurons are involved. It helps distinguish spikes that may come from different neurons, and it also yields better data compression. Typically, labs perform spike sorting offline, after obtaining broadband, fully digitized signals, by running various algorithms to differentiate the units. For us, all of this is done on the device itself, using a low-power, custom-built digital processing ASIC that is highly heat-constrained. The processing time from signal input to output is less than a microsecond. Latency is a significant challenge, and the biggest source currently is Bluetooth: the way Bluetooth packets are handled, with 15-millisecond inter-communication constraints, is not ideal. Bluetooth is not the final wireless protocol we aim to use, but it was chosen for its wide compatibility and interoperability with existing devices, which is crucial in this early phase.
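
The six values BOSS emits per spike aren't spelled out here, but a generic on-device shape descriptor of that flavor could look like the sketch below; these particular six features are common textbook choices, not the actual BOSS outputs:

```python
import numpy as np

# Hypothetical shape-feature extractor in the spirit of on-device spike
# sorting: reduce each detected waveform to a handful of scalars that a
# downstream model can use to tell units apart. These six features are
# generic illustrative choices, not the actual BOSS outputs.

def waveform_features(wf: np.ndarray, fs: float = 20_000.0) -> dict:
    """Six scalar descriptors of a single spike waveform."""
    trough = int(np.argmin(wf))                 # spike's negative peak
    peak = trough + int(np.argmax(wf[trough:])) # repolarization peak
    return {
        "trough_amp": float(wf[trough]),
        "peak_amp": float(wf[peak]),
        "peak_to_trough_us": (peak - trough) / fs * 1e6,
        "energy": float(np.sum(wf ** 2)),
        "pre_slope": float(wf[trough] - wf[0]) / max(trough, 1),
        "post_slope": float(wf[-1] - wf[peak]) / max(len(wf) - 1 - peak, 1),
    }
```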

The process from finding and selecting a human participant to the first use of the device involves several steps. We have a patient registry where individuals can sign up to receive updates and apply. Applications include medical records, and candidates are evaluated based on medical eligibility criteria through a pre-screening interview with someone from Neuralink. We also conduct a BCI home audit to understand the participant's living situation and the assistive technologies they use. One of the revolutionary aspects of the N1 system is its wireless capability, allowing users to operate it at home without needing specialized lab equipment.

Imagine controlling your devices with just your mind—digital telepathy is here to revolutionize accessibility and independence.

In terms of accessibility and performing daily activities that many of us take for granted, one of the goals of this initial study is to enable individuals to achieve digital autonomy. This means they can interact with a digital device using just their mind, a concept referred to as digital telepathy. For instance, a quadriplegic could communicate with a digital device in various ways, such as controlling a mouse cursor to play games, tweet, and perform other tasks. There are many people for whom the basics of life are difficult due to various conditions, and movement is fundamental to our existence. Even speaking involves the movement of the mouth, lips, and larynx, and without these abilities, life can be extremely debilitating.

There are numerous individuals we can help, especially those with movement disorders not just from spinal cord injuries but also from ALS, MS, stroke, or even aging, which can lead to a loss of mobility and independence. These conditions present opportunities to alleviate suffering and improve the quality of life. Each condition is its own puzzle, requiring increasing levels of capability from a device like a Neuralink device. The first focus is on telepathy, enabling communication using the mind wirelessly with a digital device.

To explain, if one can control a cursor and click, gaining access to a computer or phone, the whole world opens up. Telepathy, in this context, means transferring information from one brain to another without using physical faculties like voices. The fascinating part is how it works: to move a cursor, one might imagine moving a mouse with their hand or directly imagine moving the cursor with their mind. This involves a cognitive step where the brain has to learn to fire in the right way, rewarding itself when it works, thus creating the correct signal if decoded properly.

Witnessing a human control tech with their mind right after surgery is a monumental leap in merging biology and technology!

Adaptability in signal processing involves two sides: the decoding step and the adaptation of the human being. For example, if you give me a new mouse, I quickly learn its sensitivity and adjust my movements accordingly. Similarly, there is signal drift and other factors that require adaptation. Both the human and the system adapt to each other, presenting a fascinating software challenge on both sides, the organic and the inorganic.

Nolan passed all the necessary selection steps with flying colors, including the BCI home audit. The surgery and implantation, from patient in to patient out, takes anywhere between two and four hours; in Nolan's case, it was about three and a half hours. The procedure involves several steps leading up to the actual robot insertion, including anesthesia induction and intraoperative CT imaging to confirm the drilling location. This is pre-planned: the patient undergoes fMRI to identify the brain areas that light up when imagining movements, which helps determine where to place the threads.

The surgery involves several stages: the surgeon makes a skin incision and performs a craniectomy, drilling through the skull. The dura, a thick membrane surrounding the brain, is then resected in a process called a durectomy to expose the brain. The robot then places the targets and inserts the threads, which takes between 20 and 40 minutes; in Nolan's case, it took just over 30 minutes. Afterward, the surgeon places a dural substitute layer to protect the threads and the brain, screws in the implant, and sutures the skin flap.

When Nolan woke up, he was able to use the device almost immediately. About an hour after surgery, as he was waking up, the device was turned on to ensure it was recording neural signals. Remarkably, Nolan could modulate these signals by thinking about clenching his fist, making the spikes appear and disappear. This immediate response in the recovery room was a historic moment, marking the first step of a gigantic journey.

Overcoming immense challenges, our team successfully performed groundbreaking brain surgery, marking a major milestone in BCI technology and offering new hope for the future of human potential.

BCI investigational early feasibility studies are a significant step forward, and we are standing on the shoulders of giants. We are not the first to place electrodes in a human brain. Leading up to the surgery, there was a lot of anxiety and sleepless nights. It was the first time working in a completely new environment, despite having confidence from our benchtop testing and pre-clinical R&D studies that the mechanism, threads, and insertion were safe and ready for human trials. However, there were still many unknowns, such as whether the needle could actually insert properly. We brought around 40 needles in case they broke, but ended up using only one. This level of uncertainty is why clinical trials are essential.

The surgery was scheduled for early morning, starting at 7:00 AM and concluding around 10:30 AM. The first successful insertion brought a huge relief and an immense amount of gratitude for Nolan and his family, as well as for the many others who applied and will participate in the future. These individuals are true pioneers, akin to astronauts exploring the unknown, but inwardly. Their participation is invaluable, and we are embarking on this journey together.

This milestone was crucial, but it marked just the beginning of our work. There was a lot of anticipation about the next steps and the sequence of events needed to make this endeavor worthwhile for both Nolan and us. The successful surgery is a significant step towards helping hundreds of thousands of people and potentially expanding the realm of possibilities for the human mind for millions in the future. The opportunities ahead are vast, and achieving them safely and effectively is our goal. Watching engineers come together to accomplish this epic task was incredibly rewarding.

The team's effort was indispensable, and this success instills a sense of optimism for the future. It is a pivotal moment for the company and hopefully for many others we aim to help. Speaking of challenges, Neuralink published a blog post describing how some threads retracted, initially causing a drop in performance as measured by bits per second. However, the performance was eventually regained, and the story of how it was achieved is fascinating.

Despite initial setbacks, performance not only recovered but improved, breaking the world record again at 8.5 BPS.

The main takeaway is that in the end, the performance came back and actually got better than before. Nolan just beat the world record yet again last week, reaching 8.5 BPS. He's just cranking and improving. The previous record he set was 8.0, and the previous human world record was 4.6, so it's almost double. His goal is to get to 10, which is roughly the median for a Neuralink employee using a mouse with their hand. How the performance was regained, and became better than before, is a story of its own about what it took the BCI team to recover it; it was mostly signal processing.
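
As context for these BPS numbers, one formula widely used in the BCI literature for grid-selection tasks scores each selection among N targets as log2(N - 1) bits and penalizes errors; whether Neuralink computes its figure exactly this way is an assumption here:

```python
import math

# Net bitrate for a grid-selection task, as commonly defined in BCI papers.
# The grid size and selection counts below are illustrative only.

def bits_per_second(n_targets: int, correct: int, incorrect: int,
                    seconds: float) -> float:
    net = max(correct - incorrect, 0)  # wrong selections subtract from net
    return math.log2(n_targets - 1) * net / seconds

# Example: a 35x35 grid with 50 net correct selections in one minute.
print(f"{bits_per_second(35 * 35, 51, 1, 60.0):.1f} BPS")  # ~8.5 BPS
```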

Four weeks after the surgery, we noticed that the threads had slowly come out of the brain. Nolan was the first to notice that his performance was degrading. At the time, we were also experimenting with different algorithms and UI/UX, so some variability in performance was expected. However, we saw a steady decline. We measure the health of an electrode by measuring its impedance, looking at the interfacial Randles circuit, the capacitance and resistance between the electrode surface and the medium. If that changes dramatically, or if you're not seeing spikes on those channels, it indicates something is happening.
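
For intuition, here is what that interface model looks like in code: a simplified Randles circuit with placeholder component values. A large shift in measured impedance at the probe frequency is the kind of signal that flags a degrading channel:

```python
import numpy as np

# Simplified Randles model of the electrode-tissue interface: solution
# resistance in series with a charge-transfer resistance in parallel with
# the double-layer capacitance. Component values are placeholders only.

def randles_impedance(freq_hz, r_s=10e3, r_ct=500e3, c_dl=200e-12):
    """Complex impedance of Rs + (Rct || Cdl) at the given frequency."""
    omega = 2 * np.pi * freq_hz
    z_parallel = r_ct / (1 + 1j * omega * r_ct * c_dl)
    return r_s + z_parallel

z = randles_impedance(1e3)  # impedance check at 1 kHz, a common probe tone
print(f"|Z| = {abs(z) / 1e3:.0f} kOhm, "
      f"phase = {np.degrees(np.angle(z)):.0f} deg")
```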

Looking at those impedance plots and spike-rate plots, and because we have electrodes recording along the depth of each thread, we saw movement indicating that threads were being pulled out. This has implications on the model side, because if the number of inputs going into the model changes, the model needs to be updated. There were still signals, though: just as you still see useful signals when recording from the surface of the brain or even outside the skull, we started looking not just at spike occurrences from the BOSS algorithm but also at the power in the frequency band that Nolan could modulate. Once we changed the implant firmware to output not just the BOSS results but also these band-power features, refining the model with the new set of inputs ultimately gave us the performance back.
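
A sketch of that band-power fallback, with assumed band edges (the 500 to 3000 Hz "spike band" below is a common choice in the literature, not necessarily the band used for Nolan), might look like:

```python
import numpy as np

# Sketch of the band-power fallback described above: when single-unit
# spikes are lost, power in a task-relevant frequency band can still carry
# intent. Band edges and the simple periodogram are assumptions.

def band_power(x: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Mean power of x in [lo, hi] Hz via a simple FFT periodogram."""
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].mean())

# Per channel, the decoder would now receive spike counts AND band power:
def features(x: np.ndarray, fs: float, spike_count: int) -> list:
    return [spike_count, band_power(x, fs, 500.0, 3000.0)]
```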

When we performed Nolan's surgery, we encountered more challenges than we expected. The environment was very different from what we're used to, which is why we conduct clinical trials. Clinical trials help us uncover issues and failure modes earlier rather than later. This experience has provided us with an enormous amount of data and information to solve these problems. Neuralink excels at addressing clear objectives and engineering problems. We have a talented team across many disciplines that can come together to fix problems very quickly.

One of the fascinating challenges is for the system and the decoding side to be adaptable across different time scales. Whether it's the movement of threads or aspects of signal drift within the human brain, adaptability is a fundamental property that has to be engineered. Nolan mentioned cursor drift, which could be corrected, presenting a UX challenge. As a company, we're extremely vertically integrated. We make these thin film arrays in our own microfabrication facilities.

Building the technologies described above has been no small feat. We constructed in-house microfabrication capabilities to rapidly produce various iterations of the thin-film arrays that constitute our electrode threads. We created a custom femtosecond laser mill to manufacture components with micron-level precision. For example, in less than a minute, the laser mill cuts the geometry of the tips of our needles. These needles are only 10 to 12 microns in width, only slightly larger than the diameter of a red blood cell, allowing threads to be inserted with minimal damage to the cortex.

The needle's geometry is fascinating. It engages with the loop on the thread, threading the loop, peeling the thread from its silicon backing, and then inserting it into the tissue. The needle's notch, or "shark tooth," grasps the loop and leaves the thread in place when the needle is pulled out. The robot controls this needle, housed in a cannula, with optics that locate the loop using 405-nanometer light that causes the polyimide to fluoresce. This process requires micron precision.

Our current robot is quite heavy, weighing about a ton, partly so that it stays insensitive to environmental vibrations. The head moves at high speeds, requiring precise motion control. We are working on the next generation of the robot, which will be lighter and easier to transport.

We've perfected a robotic surgery system that's more precise than human surgeons, practicing countless times on realistic 3D-printed models to ensure flawless execution.

Is it far superior to a human surgeon at this time, for this particular task? Absolutely. Just try threading a loop in a sewing kit by hand; here we're talking about fractions of a human hair, features that aren't even visible to the naked eye.

Continuing, we developed novel hardware and software testing systems, such as our accelerated lifetime testing racks and a simulated surgery environment, to stress test and validate the robustness of our technologies. We performed many rehearsals of our surgeries to refine our procedures and make them second nature. We practice surgeries on proxies, with all the hardware and instruments needed, in our mock OR in the engineering space. This helps us test rapidly.

There's this proxy that's super cool, actually. It includes a 3D-printed skull from images taken at Barrow, as well as a hydrogel mix, a synthetic polymer that mimics the mechanical properties of the brain, and even the person's vasculature. It's about finding the right concentration of these synthetic polymers to get the right consistency for the needle dynamics as it's inserted. We practice the surgery with that person's physiology and brain many, many times before actually doing the surgery. Every step, every step, every step.

We created a mock OR space in our office that looks exactly like what the staff would experience during the actual surgery. It's just like a dress rehearsal where you know exactly where you're going to stand at what point, and you practice over and over again with the exact anatomy of the person you're going to operate on. It got to the point where, when we performed the actual craniectomy, many of our engineers recognized it immediately: "Ah, that looks very familiar. We've seen that before." There's wisdom you gain by doing the same thing over and over. It's like a "Jiro Dreams of Sushi" kind of thing.

Olympic athletes visualize the Olympics, and once they actually show up, it feels easy. It feels like any other day. It feels almost boring winning the gold medal because you’ve visualized it so many times. You’ve practiced it so many times that nothing bothers you. Winning the gold medal is boring, and the experience they talk about is mostly just relief that they don’t have to visualize it anymore. The power of the mind to visualize is incredible. There’s a whole field that studies where muscle memory lies in the cerebellum.

A good place to ask the big question that people might have is: How do we know every aspect of this that you describe is safe? At the end of the day, the gold standard is to look at the tissue. What sort of trauma did you cause the tissue, and does that correlate to whatever behavioral anomalies you may have seen? That’s the language through which we can communicate about the safety of inserting something into the brain and what type of trauma you can cause.

High standards in medical research ensure groundbreaking tech like neural implants can be safe and effective, with minimal trauma to brain tissue.

We have an entire department of pathology that looks at these tissue slices. There are many steps involved. Once you have studies launched with particular endpoints in mind, at some point you have to euthanize the animal. You then go through necropsy to collect the brain tissue samples, which are fixed in formalin, grossed, sectioned, and examined to see what kind of reaction, or lack thereof, exists. This is the language the FDA speaks, and it is crucial for evaluating the safety of the insertion mechanism and the threads at various time points, both acute (anywhere between zero and three months) and beyond three months.

The details of this process must meet an extremely high standard of safety. The FDA supervises this, but in general there is a very high standard for every aspect, including the surgery. As Matthew McDougall mentioned, the standard is higher than for some other operations we take for granted. The environment is highly regulated, with governing agencies scrutinizing every medical device that gets marketed. This high standard is beneficial, as it ensures the safety of innovative and emerging technologies. So far, we have been extremely impressed by the lack of immune response to these threads.

Speaking of which, you talked to me with excitement about the histology and some of the images you were able to share. Can you explain what we're looking at? Yes: what you're looking at is a stained tissue image, a sectioned tissue slice from an animal implanted for seven months, a chronic time point. The different colors indicate specific cell types: purple and pink are astrocytes and microglia, respectively, which are types of glial cells. People may not be aware that the brain is not just made up of neurons and axons; there are other cells, like glial cells, that act as glue and react to any trauma or damage to the tissue. The brown dots are the neurons, specifically the neurons' nuclei.

In this macro image, you see circles highlighted in white, indicating the insertion site. When you zoom into one of those, you see the threads. In this particular case, we are seeing about 16 wires going into the page. The incredible thing here is that the neurons, which are these brown structures or brown circular or elliptical things, are actually touching and abutting the threads. This indicates that there is basically zero trauma caused during this insertion. With these neural interfaces, these microelectrodes that you insert, one of the most common modes of failure is neuronal death around the site due to the insertion of a foreign object. This elicits an immune response through microglia and astrocytes, forming a protective layer around it. Not only does this kill the neuron cells, but it also creates a protective layer that prevents you from recording neural signals because you are getting further away from the neurons you are trying to record.

Neural implants can be safely removed or upgraded with minimal trauma, thanks to their flexible design and careful placement.

Neurons just seem to be attracted to it, and there's certainly no trauma. That's such a beautiful image, by the way; the brown of the neurons, for some reason I can't look away. Tissues generally don't have these beautiful colors: this is a multiplex stain that labels different proteins in different colors. We use a very standard set of staining techniques with H&E, Iba1, and GFAP.

The next image illustrates a second point. When we saw the previous image, we initially asked, "Are the threads just floating? Are we actually looking at the right thing?" So we did another stain, all done in-house: a Masson's trichrome stain, shown in blue, which highlights collagen layers. You don't want blue around the implant threads, because that would mean scarring. If you look at the individual threads, you don't see any of the blue, which means there has been essentially zero, or very minimal, trauma around these inserted threads. This is presumably one of the big benefits of having this kind of flexible thread.

We think this is primarily due to the size and flexibility of the threads. The fact that R1 avoids vasculature also means we're not disrupting or damaging vessels, and not breaking the blood-brain barrier, which has kept the immune response muted. The image is also a nice illustration of the scale of things: this is the tip of the thread, those are neurons, and the thread is listening in with its electrodes positioned along it. What you're looking at is not the electrodes themselves but the conductive wires, each of which is about two microns in width. We're looking at a coronal slice of the tissue, so as you go deeper, you see less of the tapering of the thread. The point is that there are intact cells right around the insertion site, which is just an incredible thing to see. I've never seen anything like it.

How easy and safe is it to remove the implant? It depends on when. In the first three months or so after surgery, there's a lot of tissue remodeling happening, similar to when you get a cut: scar tissue forms over the first couple of weeks, depending on the size of the wound, contracting and turning into a scab that eventually falls off. The same kind of thing happens in the brain, which is a very dynamic environment. Before the scar tissue or new membrane forms, it's quite easy to pull the threads out with minimal trauma. Once scar tissue forms, it anchors the threads, making them harder to extract completely. Our current method for removing the device involves cutting the threads, leaving the tissue intact, and then unscrewing and taking the implant out. The hole is then plugged with either another Neuralink or a plastic-based cap.

Upgrading brain implants could soon be as easy as a 10-minute procedure.

The threads in the brain are designed to remain there indefinitely. Studies have shown that once the scar tissue forms, the threads get anchored in place and do not migrate to areas where they shouldn't be. Upgrades are not just theoretical; they have been performed many times. For instance, most of the non-human primates (NHP) have been upgraded, including Pager, who has the latest version of the device and is seemingly very happy and healthy.

Regarding the upgrade procedure, there are a couple of different approaches. If we were to upgrade Nolan, for instance, we might need to either cut or extract the threads, depending on how they are anchored or scarred in. If the threads are removed along with the dural substitute, the brain remains intact, allowing insertion of updated threads with the new implant package.

Future upgradeable systems are being designed to minimize disruption to natural brain structures. Currently, the dura, the thick membrane protecting the brain, is removed, which promotes scar tissue formation. To mitigate this, we are exploring ways to insert threads through the dura, which presents challenges such as penetrating the thick layer without breaking the needle. Different needle designs and imaging techniques are being developed to address these challenges.

One significant change in implant architecture being considered is transitioning from a monolithic single implant to a two-part implant. The bottom part would contain the threads and chips, while the top part would handle computational tasks and house a larger battery. This design would allow for easy upgrades; the computational component could be replaced without disturbing the threads, making the procedure quick and straightforward.

We're pushing the boundaries of brain implants, aiming to increase neuron recording channels from 1,000 to 16,000 by next year, while tackling challenges like energy consumption, signal processing, and creating a durable hermetic barrier.

The technical challenge of improving the implant is a priority for the next versions. The key metric to enhance is the number of channels, so we can record from more neurons. The current pathway aims to increase from 1,000 channels to 3,000 or even 6,000 by the end of this year, and to 16,000 by the end of next year. There are several limitations, such as the ability to photolithographically print wires at two microns in width and spacing. More advanced chip processes exist, but the tools we have brought in-house can achieve this resolution.

Moreover, the chips cannot consume linearly more energy as the channel count increases, necessitating innovations in circuit design and architecture to make them lower power. There is also the challenge of getting all those spikes to the end application, which requires thinking about bandwidth limitations and potential innovations in signal processing. Physically, one of the biggest challenges is the interface, particularly bonding the thin-film array to the electronics, which becomes extremely dense. Innovations in 3D integration can be leveraged to address this.

Another significant challenge is forming a hermetic barrier to protect the electronics in the harsh brain environment and prevent leakage of unwanted substances into the brain. The development environment to simulate this harshness involves an accelerated life tester, essentially a brain in a vat. This vessel simulates the brain's environment, primarily saltwater, and can include reactive oxygen species to test the interfaces. By increasing the temperature, the aging process of these interfaces can be accelerated, with every 10 degrees Celsius increase doubling the rate of aging.

In the accelerated life tester (ALT) chamber, we increase the temperature by 20 degrees Celsius, which accelerates aging by a factor of four, meaning one day in the ALT chamber is equivalent to four calendar days. This helps determine whether the implants remain intact and operational over time. Although this environment is not identical to the brain, it serves as a good test of the enclosure's integrity. Current implants have been in the ALT chamber for close to two and a half years, equivalent to a decade, and they appear to be fine.
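
The arithmetic here is the standard rule of thumb that every 10 °C roughly doubles the aging rate:

```python
# Arrhenius-style rule of thumb from above: each +10 C doubles the rate
# of aging, so the acceleration factor is 2^(dT / 10).

def acceleration_factor(delta_t_celsius: float) -> float:
    return 2.0 ** (delta_t_celsius / 10.0)

# +20 C -> 4x: one chamber day ~ four calendar days.
print(acceleration_factor(20.0))        # 4.0

# ~2.5 years in the chamber ~ a decade of simulated implant life.
print(2.5 * acceleration_factor(20.0))  # 10.0 years
```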

Another fascinating aspect of our implant is its enclosure. Unlike common medical implants, which are encased in laser-welded titanium cans, we use a polymer called PCTFE (polychlorotrifluoroethylene). This material is commonly found in pill packs, where you pop the pill through a plastic membrane. We chose PCTFE because it is electromagnetically transparent, which is crucial for inductive charging. Using titanium would require adding a sapphire window, which is tough to scale.

Regarding scaling, we aim to have multiple Neuralink devices implanted. Our monkeys have had two Neuralinks, one in each hemisphere. We're also exploring the potential of having one in the motor cortex, one in the visual cortex, and others in different cortices, focusing on specific functions. This customization on the compute side is essential, especially for the motor cortex.

At Neuralink, we are building a generalized neural interface to the brain. Our current version of the N1 is specialized for motor decoding tasks, but there's a general compute available. Hyper-optimizing for power and efficiency requires specialized functions. Our robotic insertion techniques, which took years of data and FDA conversations to prove safe, can access any part of the cortex.

Our second product focuses on the visual cortex, a completely different environment from the motor cortex. It will be more stimulation-focused rather than recording, aiming to create visual percepts. Despite the differences, we use the same thin film array technology, robot insertion technology, and packaging technology. The conversation now revolves around the implications of these differences in safety and efficacy.

Bringing sight back to the blind through advanced neural tech is a game-changer!

The possibilities of restoring sight to individuals who are blind are just incredible. Being able to give that gift back to people who don't have sight, or even any aspect of it, is truly remarkable. However, there are several challenges involved in this process, from recording to stimulation. One of the key challenges is the technical aspect of stimulating the brain.

We have been capable of stimulating through our thin-film array as well as our electronics for years. We have demonstrated some of that capability in reanimating a limb via the spinal cord. For the current EFS study, we have hardware-disabled that capability, to pursue it as a separate journey. There are many ways to write information into the brain; the way we are doing it is through electrical current, which changes the local environment to artificially cause nearby neurons to depolarize.

For vision specifically, the visual system is both well understood and not fully understood. Photons hit the eye, where specialized photoreceptor cells convert the photon energy into electrical signals. These signals are projected to the visual cortex at the back of the head, passing through the LGN (lateral geniculate nucleus). The visual cortex has several processing layers, like V1, V2, and V3, which detect edges, curves, and objects, similar to the layers of convolutional neural networks.

Bringing sight back to those who are blind involves understanding the different forms of blindness. In the US, there are one million people who are legally blind. This means they score below a certain threshold in visual tests, such as seeing something at 20 feet that normal people can see at 200 feet. Different forms of blindness include degeneration of the retina, where the rest of the visual processing system is intact. For these individuals, retinal prosthetic devices can replace the function of the degenerated retinal cells.

By placing electrodes in the visual cortex and using external cameras, we could potentially restore vision for the blind and even enhance human sight beyond natural limits.

If there's any damage along the circuitry, whether in the optic nerve, the LGN circuitry, or any break in that circuit, a retinal prosthesis won't work for you. In that case, the place to cause the visual percept, since your biological mechanism isn't doing it, is the visual cortex at the back of your head, by placing electrodes there. The way this would work is that an external camera, whether something as simple as a GoPro or wearable Ray-Ban-type glasses like those Meta is working on, captures the scene. That scene is then converted into a set of electrical stimulation pulses delivered to your visual cortex through these electrodes.

By playing a concerted orchestra of these stimulation patterns, you can create what are called phosphenes: whitish-yellowish dots, the same percepts you can create by pressing on your eyes. You can create those percepts by stimulating the visual cortex. The goal is to generate many of those phosphenes, each as small as possible, so that you can start to tell apart individual pixels of the scene. With enough of them, you might, in the long term, be able to achieve naturalistic vision. In the short to medium term, you might at least run object detection algorithms on the glasses' preprocessing unit, letting you see the edges of things so you don't bump into them. This is really incredible, because you would be adding pixels and your brain would start to figure out what those pixels mean, with various kinds of assistance from signal processing on all fronts.
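
Conceptually, the camera-to-stimulation mapping could be as simple as the sketch below: pool the image down to one value per electrode and scale brightness into stimulation amplitude. The grid size, amplitude range, and linear mapping are all assumptions for illustration, not a Neuralink spec:

```python
import numpy as np

# Conceptual camera-to-phosphene mapping: downsample a camera frame to a
# coarse grid where each cell corresponds to one stimulating electrode,
# and let brightness set the stimulation amplitude. All parameters are
# illustrative assumptions.

def frame_to_stim(frame: np.ndarray, grid=(32, 32), max_amp_ua=10.0):
    """Map a grayscale image (H, W) in [0, 1] to per-electrode amplitudes."""
    h, w = frame.shape
    gh, gw = grid
    # Crop so the image divides evenly, then average-pool per electrode.
    pooled = frame[: h - h % gh, : w - w % gw]
    pooled = pooled.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    return pooled * max_amp_ua  # microamps per electrode, linear mapping

frame = np.random.rand(480, 640)  # stand-in for a camera frame
amps = frame_to_stim(frame)       # (32, 32) stimulation amplitudes
```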

A couple of things to consider: if you're blind from birth, the way the brain works, especially at an early age, involves neuroplasticity, which is essentially different parts of your brain fighting for limited territory. You often hear about people who are blind having a heightened sense of hearing or other senses because the cortex that is not used gets taken over by different parts of the cortex. For those individuals, they would have to map some other parts of their senses into what they call vision, resulting in a very different conscious experience.

Another important point is that we are currently limited by our biology in terms of the wavelengths we can see; visible light is a very narrow band of the spectrum. With an external camera and a BCI system, you're not limited to that. You could have infrared, UV, or any other spectrum you want to see. Whether that gets mapped to some sort of strange conscious experience is unknown, but when I talk about the goal of Neuralink being to go beyond the limits of our biology, that's what I mean. When we use our sight, we're getting raw photons without much processing; if you're able to control the raw signal, maybe you can do some processing ahead of time, like object detection. There are a lot of possibilities to explore, not just extending the spectrum with things like thermal imaging but also doing interesting processing.

Our brains filter reality to manage the overwhelming data around us, and experiences or substances can change these filters, altering our conscious experience.

My theory of how the visual system works is that there are countless events happening in the world, with an enormous number of photons entering our eyes. It's unclear where each of the pre-processing steps occurs, but fundamentally, the reality we perceive contains an overwhelming amount of data; humans simply cannot process all of it, necessitating some form of filtering. That filtering could occur in the retina or in various layers of the visual cortex.

An analogy I often consider is comparing the brain to a CCD camera and the world’s information to the sun. When you try to look at the sun with a CCD camera, it saturates the sensors due to the enormous amount of energy. To manage this, filters are added to narrow the incoming information. Experiences, drugs like propofol (an anesthetic), or psychedelics can swap out these filters, altering our conscious experience.

For instance, I recently took a high dose of ayahuasca in the Amazon jungle, which is a way of swapping out different experiences. With Neuralink, the goal is to control these filters primarily to improve function, not for entertainment or enjoyment. The focus is on restoring lost functions, which can be a significant help, especially when the function is completely lost.

When asked if I would implant a Neuralink device in my brain, my response is absolutely, though perhaps not immediately. As the technology progresses, I might become increasingly curious and even a bit jealous of those who get implanted, especially if they start doing things I can't do. For instance, achieving 15, 20, or even 100 BPS is within the realm of possibility, and nothing fundamentally stops us from reaching that level of performance.

Watching others, like Nolan, who seems to have so much fun and plays video games in a chill way, can evoke a sense of jealousy. It's important to note that he multitasks while performing these activities, which is cognitively intensive. Similar to how we talk and move our hands simultaneously, he can multitask in ways that other assistive technologies don't allow. Eye-tracking devices, for example, require fixation on the task, and voice control limits multitasking.

Mastering multiple tasks at once requires full attention and understanding cognitive load, but it's amazing what can be achieved even under pressure.

The parallelization of multiple tasks, if you measure the BPS for the entirety of the human organism, is quite fascinating. When you're talking, doing a task with your mind, and looking around, there's a lot of parallelization happening. However, to achieve high-level BPS, it requires full attention. This involves a separate circuitry, which is a big mystery, particularly how attention works and the cognitive load it entails.

I've read a lot of literature on people performing two tasks simultaneously, where a secondary task acts as a source of distraction. This affects the performance on the primary task, and depending on the task, there are many interesting insights. The human brain is an interesting computational device, and there are many novel insights to be gained from studying it. I am personally surprised that someone can control a cursor so well while talking and being nervous, as speaking in front of a camera can make anyone nervous. Despite this, achieving high performance is surprising and amazing.

After researching this in depth, I wanted to discuss the registry for people with quadriplegia and similar conditions. There will be a separate line for those who are just curious, like myself. Now that patient P1 is part of the ongoing PRIME Study, the high-level vision for P2, P3, P4, P5, and the expansion to other human participants is crucial. The primary goal of our study is to achieve our safety endpoints, understand the safety of the device and the implantation process, and assess the efficacy and impact on potential users' lives.

Living with tetraplegia varies widely among individuals, and we hope to understand how our technology can serve a broader group of individuals. We aim to gather feedback to build the best product for them. The primary purpose of the early feasibility study is to learn from each participant to improve the device and the surgery before embarking on a pivotal study. This larger trial looks at the statistical significance of endpoints, which is required before marketing the device. This process is standard in the US and globally.

Our goal is to understand from participants like Nolan, P2, P3, and future participants what aspects of our device need improvement. For example, if users express a need for longer usage times, we can address that. Before the pivotal study, there's rapid innovation based on individual experiences, learning from how people use the device, including high-resolution details like cursor control and signal, as well as overall life experience.

Our brain implants can get firmware updates just like your phone, constantly improving and adding new features.

In discussing life experiences and technological advancements, it's important to note that there are hardware changes and firmware updates. For instance, after a recovery event for Nolan, he received a new firmware update. This is akin to how phones receive updates for security patches, new functionalities, and UI improvements. Similarly, our implant is not a static one-time device; it can receive over-the-air firmware updates, much like a Tesla, which can lead to a completely new user interface and various improvements. This is what we mean by a generalized platform.

The app Nolan is using allows for calibration and updates with just a click. Looking at future capabilities, vision is a fascinating area, as are accelerated typing and speech. The latter fall under the movement program, which, together with the vision program, makes up our two main focus areas. The movement program is currently centered on digital freedom: if you can control a 2D cursor in the digital space, you could theoretically move anything in the physical space, such as robotic arms, wheelchairs, or your environment, either through a phone or directly through those interfaces. Expanding these capabilities, especially for Nolan, requires conversations with the FDA to ensure safety, particularly when dealing with physical movements that could potentially harm participants.

Speech prosthetics involve different areas of the brain and are a fascinating field with significant advances in academia. Researchers like Sergey Stavisky at UC Davis, Jaimie Henderson, and the late Krishna Shenoy at Stanford have made incredible progress in speech neuroprosthetics. These efforts focus on the parts of the motor cortex that control the vocal articulators, so signals can be picked up even when someone merely mouths words or imagines speech. Higher-level processing areas like Broca's area and Wernicke's area, however, remain a mystery in terms of their underlying mechanisms.

The future of human interaction with technology could see billions using brain-computer interfaces to enhance life and tackle disorders.

In discussing the hard problem of consciousness, one might consider that all experiences are essentially a set of electrical signals. From these signals, cognition or meaning might emerge, or perhaps the human mind, being an incredible storytelling machine, convinces itself of the existence of interesting meanings. Brain-Computer Interfaces (BCI), at their core, are tools to study the underlying mechanisms of the brain both locally and broadly. Whether these electrical signals can be correlated to specific thoughts, enabling mind reading, remains uncertain. While BCI alone might not achieve this, additional tools and frameworks could potentially contribute. The hard problem of consciousness ultimately delves into philosophical questions about the nature of existence and how subjective experiences emerge from electrical spikes.

BCI is seen as a tool for understanding the mind and the brain. There is some biological existence proof of how experiences form, as evidenced by the structure of our brains. The two hemispheres of the brain, connected by the corpus callosum with its 200 to 300 million connections or axons, suggest a potential interface for creating new conscious experiences. However, the exact threshold for such experiences remains unknown and speculative.

The potential of BCI technology extends to millions of people worldwide, particularly those suffering from movement disorders and visual deficits, which number in the tens or hundreds of millions. The technology also holds promise for neuropsychiatric applications such as depression, anxiety, and appetite control, making it relevant to a broader population. As BCI technology evolves, it could compete with smartphones as a preferred method for interacting with the digital world, benefiting nearly everyone.

The conversation returned to the smartphone comparison: a preferred methodology for interacting with the digital world is an intriguing concept when the entire world could benefit from such advancements. Before delving deeper, it's worth acknowledging the potential of Brain-Computer Interfaces (BCIs) to shape the next generation of human-machine interaction. BCIs could play a significant role in how we interface with machines and even with ourselves. There is a real possibility that in the future we could see 8 billion people walking around with a Neuralink.

The conversation then shifted to an introduction of Matthew McDougall, the head neurosurgeon at Neuralink. McDougall shared his lifelong fascination with the human brain, recalling that since as far back as he could remember, he had been interested in it. As a thoughtful and somewhat outsider kid, he pondered the most important things in the world. He concluded that all significant aspects of human existence are contained within the skull. Understanding how the brain encodes information, generates desires, and produces agony and suffering could lead to better solutions for humanity's problems.

McDougall emphasized that neurochemistry is at the core of human triumphs and tragedies. By gaining control over neurochemistry, we could provide people with better tools and options to improve their lives. He reflected on human history, noting that with better tools, people often do better, despite some significant exceptions. This pursuit of giving people more options and tools is seen as noble and worthy.

The discussion then explored the idea that historical figures like Stalin, Hitler, and Genghis Khan were driven by their brains, which are essentially just billions of neurons processing information. These individuals were able to organize others and commit atrocities, not because of some glorified notion of their minds, but because of their brains' capabilities. McDougall found it interesting to look at primatology for clues about human behavior. He studied under the renowned primatologist Frans de Waal at Emory University, who applied a human-like lens to understand chimpanzee behavior.

Understanding human behavior is simpler when viewed through basic drives like food, sex, companionship, and power.

Through the lens of how one might watch an episode of Friends and understand the motivations of the characters interacting with each other, he would look at a chimp colony and apply that same lens. I'm massively oversimplifying it, but if you do that instead of just saying, "subject 473 threw his feces at subject 471," you talk about them in terms of their human struggles. Accord them the dignity of themselves as actors with understandable goals and drives—what they want out of life. Primarily, it's the things we want out of life: food, sex, companionship, power. You can understand chimp and bonobo behavior in those same lights much more easily. I think doing so gives you the tools you need to reduce human behavior from the kind of false complexity that we layer on it with language and look at it in terms of, "Oh, well, these humans are looking for companionship, sex, food, power." I think that's a pretty powerful tool to have in understanding human behavior.

I just went to the Amazon jungle for a few weeks, and it's a very visceral reminder that a lot of life on Earth is just trying to get laid. They're all screaming at each other. I saw a lot of monkeys, and they're just trying to impress each other. Maybe there's a battle for power, but a lot of the battle for power has to do with them getting laid. Breeding rights often go with Alpha status, and if you can get a piece of that, then you're going to do okay. We like to think that we're somehow fundamentally different, but especially when it comes to primates, we really aren't. We can use fancier poetic language, but maybe some of the underlying drives that motivate us are similar. I think that's true, and all that is coming from the brain.

When did you first start studying the brain as a biological mechanism? Basically, the moment I got to college, I started looking around for labs where I could do neuroscience work. I originally approached that from the angle of looking at interactions between the brain and the immune system, which isn't the most obvious place to start. I had this idea at the time that the contents of your thoughts would have a direct and powerful impact on non-conscious systems in your body—the systems we think of as homeostatic autonomic mechanisms, like fighting off a virus or repairing a wound. Sure enough, there are big crossovers between the two.

This gets at a key point that I think goes underrecognized: the human brain basically controls, or has a huge role in, almost everything your body does. Try to name something in your body that isn't directly controlled or massively influenced by the brain; it's pretty hard. You might say bone healing, but even there the hypothalamus and pituitary end up coordinating the endocrine system, which has a direct influence on, say, the calcium level in your blood that feeds into bone healing. Non-obvious connections like these implicate the brain as a potent prime mover in all of health.

When you're sick, your immune system tells your brain to be antisocial to help you recover faster.

The nervous system plays a crucial role in coordinating various bodily functions, and this can be understood through the lens of evolution. For instance, when you get sick with a communicable disease like the flu, it's advantageous for your immune system to signal your brain to be antisocial for a few days. This means not going out and socializing but rather staying warm under a blanket. This behavior is observed in both animals and humans. Elevated levels of interleukins and TNF-alpha in the blood signal the brain to reduce social activity and even lower motor activity in infected animals.

From the early days in neuroscience to surgery, there was a significant evolution of thought. Initially, I wanted to study the brain and began my journey in an undergrad neuroimmunology lab. However, I soon realized that I wanted to make tangible changes in people's lives, not just generate knowledge. This led me to consider medical school, and I discovered MD/PhD programs that allowed me to pursue both paths. I chose USC for medical school and a joint PhD program with Caltech, specifically because of a researcher named Richard Andersen at Caltech, one of the godfathers of primate neuroscience. His lab was working on understanding how intentions are encoded in the brain, using Utah arrays and other electrodes in monkeys.

Initially, I thought I would become a neurologist and study the brain on the side. However, I found neurology to be predominantly about diagnosing conditions without much possibility for intervention, which was distressing to me. Neurosurgery, on the other hand, offered a powerful lever to change the course of patients' lives, such as treating brain tumors or aneurysms. This realization, combined with meeting epic neurosurgeons at USC like Alex Khalessi, Mike Apuzzo, Steve Giannotta, and Marty Weiss, shifted my perspective. These neurosurgeons were not distant gods but human beings with problems, and there was nothing fundamentally preventing me from becoming one of them.

Neurosurgery residency is a test of endurance and humility, where the hardest part is balancing relentless work hours with the need for rest.

Absolutely worth it. What was the hardest part of the training on the neurosurgeon track? Two things come to mind. Residency in neurosurgery is sort of a competition of pain, of how much pain you can endure and still smile. There are work-hour restrictions, but internally among residents they are not really treated as strict limits; abiding by them is often seen as a sign of weakness. Most neurosurgery residents try to work as hard as they can, which necessarily means long hours, sometimes over the limits. While we care about being compliant with regulations, more importantly, people want to give their all to becoming better neurosurgeons because the stakes are so high. It's a real fight to get residents to go home at the end of their shift and not stay to do more surgery.

One of the hardest things is literally forcing them to get sleep and rest. Historically, that was the case, but I think the next generation is more compliant and more self-aware. The second hardest part is dealing with the personalities involved. Neurosurgery has long had an aura of mystique and excellence, which attracts people cloaked in authority. A board-certified neurosurgeon is basically a walking appeal to authority, licensed to walk into any room and act like an expert. Fighting that tendency is not something most neurosurgeons do well; humility isn't their forte.

I have friends who know you, and whenever they speak about you, they mention your surprising quality of humility for a neurosurgeon. This is not as common in neurosurgery because there is a gigantic, heroic aspect to it that can get to people's heads. This humility allows you to play well at an Elon company. One of Elon's strengths is that he sees through appeals to authority instantly. Nobody walks into a room he's in and says, "Trust me, I'm the guy that built the last 10 rockets," because he will challenge them and say, "We can do it better." Similarly, you can't walk into a room and say, "I'm a neurosurgeon, let me tell you how to do it," because he will think from first principles and challenge your approach.

True strength is defending your ideas passionately while being open to change when proven wrong.

To create a good team at Neuralink, we've tried to find people who are not afraid to defend their ideas passionately and occasionally strongly disagree with their colleagues. The goal is to have the best idea come out on top. This is not an easy balance to achieve, as the primate brain is not inherently built to accept being wrong. Admitting someone else is right can feel like a loss of power or standing in the community. However, recognizing that this little voice in the back of your head is maladaptive is crucial for the team to win. You need to have the confidence to walk away from an idea you hold onto if it proves to be wrong. By doing this often enough, you will become the best at your thing or at least be a member of a winning team.

Working with amazing neurosurgeons at USC, I learned the importance of working hard while functioning as a member of a team. This involves getting incredibly difficult jobs done, working long hours, and sometimes making people you dislike look good the next morning. These folks were relentless in their pursuit of excellent neurosurgical technique, decade over decade. Marty Weiss, Steve Giannotta, and Mike Apuzzo made huge contributions not only to surgical technique but also to building training programs that trained dozens or hundreds of amazing neurosurgeons. I was just lucky to be in their wake.

Performing surgeries where the patient is unlikely to survive is especially challenging. It doesn't hit as hard when the patient is an 80-year-old who was nearing the end of their life anyway. However, it is much more difficult when you are taking care of a young parent with young children, someone in their 30s who shows up in your ER with a huge malignant, inoperable, or incurable brain tumor. You can only do that a handful of times before it starts eating away at your armor. The loss of young parents is particularly devastating because they had a lot more to give, and their loss has a knock-on effect that makes the world worse for a long time.

A four-year-old daughter was brought in to say goodbye one last time before they turned the ventilator off. The great Henry Marsh, an English neurosurgeon, said it best: "Every neurosurgeon carries with them a private graveyard." I definitely feel that, especially with young parents; that kills me. It's just hard to feel powerless in the face of that.

That's where I think you have to be borderline evil to fight against a company like Neuralink or to constantly take pot shots at us. What we're doing is trying to fix that stuff. We're trying to give people options to reduce suffering. We're trying to take the pain out of life that broken brains bring in. This is just our little way of fighting back against entropy. The amount of suffering endured when some of the things we take for granted that our brain is able to do is taken away is immense. To be able to restore some of that functionality is a real gift. We're just starting, and we're going to do so much more.

When asked to walk through the full procedure of implanting the N1 chip, McDougall explained that it's a really simple, straightforward procedure. The human part of the surgery that I do is dead simple; it's one of the most basic neurosurgery procedures imaginable. There's evidence that some version of it has been done for thousands of years. Examples from ancient Egypt and Peru show healed or partially healed trepanations, where proto-surgeons would drill holes in people's skulls, presumably to let out evil spirits or drain blood clots. The evidence of bone healing around the edges means that people at least survived some months after the procedure.

What we're doing is making a cut in the skin on the top of the head over the area of the brain that is the most potent representation of hand intentions. For example, if you are an expert concert pianist, this part of your brain is lighting up the entire time you're playing. We call it the hand knob. This part of the brain lights up even in quadriplegic people who imagine finger movements. We can identify that part of the brain in anyone preparing to enter our trial and confirm it as their hand intention area.

I'll make a little cut in the skin, flap the skin open like opening the hood of a car, only a lot smaller. We make a perfectly round, one-inch diameter hole in the skull, remove that bit of skull, and open the lining of the brain. This lining is like a little bag of water that the brain floats in. We then show that part of the brain to our robot.

Robots can perform brain surgeries with extreme precision, but human surgeons excel in adapting to unexpected situations.

This is where the robot shines: it comes in and takes these tiny electrodes, much smaller than a human hair, and precisely inserts them into the cortex, the surface of the brain, to a very precise depth at a very precise spot that avoids all the blood vessels coating the brain's surface. Once the robot has completed its part, a human comes back in to put the implant into the hole in the skull, cover it up, screw it down to the skull, and sew the skin back together. The whole procedure takes a few hours and is extremely low risk compared to the average neurosurgery involving the brain, such as those that open up a deep part of the brain or manipulate blood vessels. This surface opening with only cortical micro-insertions carries significantly less risk than many of the tumor or aneurysm surgeries that are routinely done.

Cortical micro-insertions via robot and computer vision are designed to avoid the blood vessels exactly. When comparing human and machine capabilities, humans are general-purpose machines able to adapt to unusual situations and change the plan on the fly. For instance, during a surgery in San Diego, the plan was to open a small hole behind the ear and reposition a blood vessel that had come to lay on the facial nerve. However, upon starting the surgery, a giant aneurysm on the blood vessel was discovered, which was not visible on preoperative scans. The human surgeons had no problem dynamically changing the plan, something that robots in their current incarnation would struggle with. Fully robotic surgery, like the electrode insertion portion of the Neuralink surgery, follows a set plan. Humans can interrupt and change the plan, but robots cannot; they operate according to their programming and perform their tasks very precisely but without the ability to react to changing conditions.

There are many ways a surgeon could be surprised when entering a situation, requiring dynamic adjustments that robots are not currently capable of. However, we are at the dawn of a new era with AI, which could dramatically broaden the parameters for robot responsiveness. For example, a self-driving car can react to unexpected situations like a chicken running across the road, even if it wasn't specifically programmed for that scenario. Surgical robots are not yet at that level, but given time, there could be semi-autonomous possibilities where a robotic surgeon could handle familiar situations and defer unfamiliar cases to humans.

Human-robot collaboration in surgery is pushing boundaries and redefining the future of medical procedures.

The approach to handling surgeries, especially in high-stakes fields like neurosurgery, is evolving. One possibility is to be very conservative and ensure there are no issues or surprises, leaving humans to deal with the edge cases. This raises the question of whether neurosurgeons will eventually be out of a job. While some believe there will not be many neurosurgeons left on Earth, McDougall is not worried about his job within his professional lifetime, though he might advise his children against pursuing this line of work, depending on the landscape in 20 years.

This situation is fascinating, especially when compared to programming. For the past two decades, programming has been a secure job due to the increasing number of computers and the high pay. However, the advent of large language models that are proficient at generating code has challenged this security. It raises questions about the human contribution to programming. Despite this, humans still have the unique ability to deal with novel situations and come up with innovative solutions, a skill that machines have yet to master. This is particularly crucial in life-critical fields like surgery, where the stakes are very high for a robot to replace a human.

In the case of Neuralink, there is a human-robot collaboration. The robot performs tasks it excels at, while humans handle the parts that require human expertise. This collaboration is evident in the rigorous practice and testing that goes into Neuralink's procedures. Unlike traditional human surgery, which is an artisanal craft passed down from master to pupil, Neuralink's approach involves extensive practice on lifelike models.

For instance, one of the engineers, Fran Romano, built a pulsating brain in a custom 3D-printed skull that matches a patient's anatomy, including their face and scalp characteristics. This model allowed for practice surgeries that were as close to the real thing as possible. The practice involved using standard operative neuronavigation equipment and surgical drills in a controlled environment, ensuring that every detail was meticulously rehearsed.

Using standard operative neuronavigation equipment and standard surgical drills, we conducted the surgery in the same operating room where we perform all our practice surgeries at Neuralink. Having the skull open and observing the brain pulse added a degree of difficulty for the robot to perfectly and precisely plan and insert the electrodes to the right depth and location. We broke new ground on how extensively we practiced for this surgery.

There was a historic moment and a big milestone for Neuralink and humanity when the first human received a Neuralink implant in January of this year. We were fortunate to have incredible partners at the Barrow Neurological Institute, which I believe is the premier neurosurgical hospital in the world. They made everything as easy as possible for the trial to get going and helped us immensely with their expertise on arranging the details.

Even though the outcome wasn't particularly in question in terms of our participant's safety, the number of observers and people watching live streams in the hospital added pressure that is not typical for even the most intense neurosurgeries, such as removing a tumor or placing deep brain stimulation electrodes. This had never been done on a human before, so there were unknown unknowns. There was definitely a moderate pucker factor for the whole team, not knowing if we would encounter unanticipated brain movement or brain sag that could complicate the procedure. Fortunately, everything went well, and the surgery had one of the smoothest outcomes we could have imagined.

I was extremely nervous, akin to a quarterback in the Super Bowl. I was very pleased when it went well and looked forward to the next surgery. Despite all the practice, being in such a high-stakes situation with so many people watching was unprecedented. We should also mention that, given how the media works, many people might have been hoping for the surgery to go poorly. Wealth is easy to hate or envy, and there's an industry around driving clicks, with bad news being great for clicks. This puts pressure on people and discourages them from trying to solve really hard problems, as solving hard problems requires going into the unknown, doing things that haven't been done before, and taking calculated risks.

I wish there would be more celebration of risk-taking instead of people waiting on the sidelines for failure. It's great that everything went flawlessly, but the unnecessary pressure is unfortunate. Now that there is a human with literal skin in the game, whose well-being depends on the success of the procedure, one would have to be a pretty bad person to root for failure. Hopefully, people will realize this at some point.

Watching robots perform brain surgery with precision and no bleeding is oddly satisfying.


Did you get to actually have a front-row seat to watch the robot work? Yes, I did. Because an MD needs to be in charge of all the medical decision-making throughout the process, I unscrubbed from the surgery after exposing the brain and presenting it to the robot. I then placed the targets on the robot's software interface, which tells the robot where to insert each thread. This was done with my hand on the mouse, for whatever that's worth.

The robot, with computer vision, provides a bunch of candidates, and you finalize the decision. The software engineers on this team are amazing. They provided an interface where you can use a lasso tool to select a prime area of brain real estate. It automatically avoids the blood vessels in that region and places a bunch of targets. This allows the human robot operator to select really good areas of the brain and make dense applications of targets in those regions. These are the regions we think will have the most high-fidelity representations of finger movements and arm movement intentions.
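To make that concrete, here is a toy sketch of the lasso-and-avoid idea: given a binary vessel mask from the computer-vision step and the operator's lasso polygon, keep a dense grid of candidate targets that fall inside the polygon and sit at least a safety margin away from any vessel. The function name, spacing, and margin below are illustrative assumptions, not Neuralink's actual planning software.

```python
import numpy as np
from matplotlib.path import Path
from scipy.ndimage import distance_transform_edt

def place_targets(lasso_xy, vessel_mask, spacing_px=10, margin_px=5):
    """Propose insertion targets inside a lasso region, away from vessels.

    lasso_xy    : (K, 2) polygon vertices drawn by the operator, in pixels
    vessel_mask : 2D bool array from computer vision, True where a vessel is
    spacing_px  : grid spacing between candidate targets
    margin_px   : minimum allowed distance from any vessel
    """
    h, w = vessel_mask.shape
    # Distance (in pixels) from every pixel to the nearest vessel pixel
    dist_to_vessel = distance_transform_edt(~vessel_mask)

    # Dense candidate grid over the image
    ys, xs = np.mgrid[0:h:spacing_px, 0:w:spacing_px]
    candidates = np.column_stack([xs.ravel(), ys.ravel()])

    # Keep candidates inside the lasso polygon...
    inside = Path(lasso_xy).contains_points(candidates)
    # ...and far enough from every blood vessel
    safe = dist_to_vessel[candidates[:, 1], candidates[:, 0]] >= margin_px
    return candidates[inside & safe]
```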

I've seen images of this, and for me, with OCD, it is oddly satisfying to see the different target sites avoiding the blood vessels and maximizing the usefulness of those locations for the signal. As a person who has a visceral reaction to brain bleeding, I can tell you it's extremely satisfying watching the electrodes themselves go into the brain and not cause bleeding.

You mentioned the feeling was of relief when everything went perfectly. How deep in the brain can you currently go, and how deep do you eventually aim to go on the Neuralink side? It seems the deeper you go in the brain, the more challenging it becomes. Broadly speaking about neurosurgery, we can get anywhere. It's routine for me to put deep brain stimulating electrodes near the very bottom of the brain, entering from the top and passing about a 2mm wire all the way to the bottom. This is not revolutionary; a lot of people do that, and we can do it with very high precision. I use a robot from Globus to do that surgery several times a month; it's pretty routine.

What are your eyes in that situation? What kind of technology can you use to visualize where you are and light your way? It's a cool process on the software side. You take a pre-operative MRI with extremely high-resolution data of the entire brain. You put the patient to sleep, put their head in a frame that holds the skull very rigidly, and then take a CT scan of their head while they're asleep with that frame on. You then merge the MRI and the CT in software. You have a plan based on the MRI where you can see these nuclei deep in the brain. You can't see them on CT, but if you trust the merging of the two images, you indirectly know on the CT where that is. Therefore, you indirectly know where, in reference to the titanium frame screwed to their head, those targets are.
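The merge he describes is multimodal rigid registration: CT and MRI intensities are not directly comparable, so a mutual-information similarity metric is the standard choice. A minimal sketch with the open-source SimpleITK library (file names are placeholders, and this illustrates the technique rather than the clinical navigation software):

```python
import SimpleITK as sitk

# Placeholders: preoperative MRI and day-of-surgery CT taken with the frame on
mri = sitk.ReadImage("preop_mri.nii.gz", sitk.sitkFloat32)
ct = sitk.ReadImage("frame_ct.nii.gz", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
# Mutual information handles the fact that CT and MRI intensities differ
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInterpolator(sitk.sitkLinear)

# Rigid (rotation + translation) is appropriate: the skull doesn't deform
initial = sitk.CenteredTransformInitializer(
    ct, mri, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg.SetInitialTransform(initial, inPlace=False)

transform = reg.Execute(ct, mri)
# Resample the MRI into the CT/frame coordinate system: targets visible only
# on MRI now acquire coordinates relative to the frame screwed to the skull
mri_in_ct = sitk.Resample(mri, ct, transform, sitk.sitkLinear, 0.0)
```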

Titanium frames with manual tick marks have evolved significantly. The modern version employs a robot, similar to a small KUKA arm you might see building cars at the Tesla factory. This small robot arm can hold the intended trajectory from the preoperative MRI as a rigid guide. Through this guide, a small hole is drilled into the skull, and a small rigid, hollow wire is passed deep into the brain. The electrode is then inserted through this hollow wire, and all the other components are removed, leaving the electrode precisely placed far from the skull surface. This technology has been standard for a while.

Neuralink is currently focused on cortical targets, or surface targets, due to the challenges of inserting hundreds of wires deep into the brain without causing significant damage. The current approach involves viewing an MRI on a screen, but it does not reveal everything the DBS electrode passes through on its way to the deep target. Consequently, there is an accepted risk that about one in a hundred patients may experience a brain bleed due to the blind passage of the wire. This safety profile is not acceptable for Neuralink. The goal is to make the procedure dramatically safer, potentially two or three orders of magnitude safer, to the point where it could be done casually, like during a lunch break.

The challenge lies in avoiding blood vessels and mapping out high-resolution geometry of blood vessels to navigate blindly yet safely. There is still much work to be done on the surface. Significant progress has been made in sewing electrodes into the spinal cord as a workaround for spinal cord injuries. This technique could allow a brain-mounted implant to translate motor intentions to a spine-mounted implant, enabling muscle contractions in previously paralyzed limbs. This effort aims to bridge the brain to the spinal cord and the peripheral nervous system.

Neuralink's digital telepathy concept is fascinating. It involves mapping the intention in the hand knob area of the brain, translating mere thoughts into actions in the digital world. This capability can significantly enhance the lives of quadriplegics, reconnecting them to the outside world and granting them freedom and independence. The potential of this technology is immense, as demonstrated by their first participant, who is breaking world records and enjoying the process.

Regarding surgery, the journey to proficiency involves practice and repetitions, similar to mastering any other skill. It requires dedicating a significant portion of one's life to improving, being open to feedback, and maintaining a constant will to do better. Surgeons must recognize that they are not perfect and always have room for improvement. This humility, combined with the motivation provided by patients, drives surgeons to continually enhance their skills.

Each surgery, even if it's the same procedure, can vary significantly between patients. For instance, the angle of the skull relative to the body can differ widely, affecting how the head is fixed and how the robot approaches the skull. Variations in skull and brain anatomy mean that some individuals may be excluded from trials due to having skulls or scalps that are too thick or too thin. Despite these challenges, surgeons strive to accommodate the middle 97% of anatomical variability.

In terms of biological systems, real brains can appear messy and complex compared to the clean diagrams in textbooks. However, with experience, surgeons learn to mentally peel back layers, such as blood vessels, to identify the underlying patterns of the brain's wrinkles, known as sulci and gyri.

The brain's wrinkles are unique landmarks that guide us in understanding its functions and targeting treatments.

Blood vessels can reveal the underlying pattern of wrinkles in the brain, and the wrinkles themselves serve as landmarks for navigation. One notable landmark is the hand knob, which I described earlier. This area of the brain has the shape of the Greek letter omega, and if you were shown a thousand brains, you could recognize the hand knob within a minute due to its unique geometry and topology.

The hand knob is located in the primary motor area, a strip of brain running down the top. You might have seen the homunculus illustration, depicting a figure with exaggerated lips and hands laid over the brain's surface. This figure's legs are at the top of the brain, with the face and arm areas farther down, and the mouth, lip, and tongue areas even lower. The hand is positioned in the middle of this strip, just above the areas controlling speech on the left side of the brain in most people. This strip of brain is responsible for voluntary muscle movements, and the wrinkle for the hand knob is central to this function. Vision, on the other hand, is located deeper in the brain, requiring more invasive techniques than those used for hand insertions.

Addressing your question about depth, for vision-related procedures, we need to go deeper than usual, about a centimeter more than for hand insertions. This presents new challenges. You mentioned the Utah array, which looks terrifying due to its rigidity. In contrast, the flexible threads used by Neuralink aim to deliver electrodes next to neurons more gently. This approach stems from the failures of Utah arrays, which often caused bad immune responses and scar tissue due to their rigid electrodes being hammered into the brain.

At Caltech's Andersen Lab, there were attempts to use chemotherapy to prevent scar tissue formation, highlighting the severity of the issue. Neuralink's approach, using highly flexible tiny electrodes, avoids much of the bleeding and immune response associated with rigid electrodes. This results in excellent electrode longevity and brain tissue health, as observed in animal models over several years.

The brain has hidden levers that can control many aspects of our health and behavior, making it a powerful yet underexplored area for treatment.

An underappreciated fact is that the brain controls almost everything in our bodies. For instance, if you want a lever on fertility to turn it on and off, there are legitimate targets in the brain itself to modulate fertility. Similarly, if you want to modulate blood pressure, there are legitimate targets in the brain for doing that as well. Many conditions that aren't immediately obvious as brain problems are potentially solvable in the brain. This makes the brain an underexplored area for primary treatments of many issues that bother people.

It's fascinating to consider that many conditions we think have nothing to do with the brain might actually be symptoms of something that started in the brain. The primary source of the problem could be in the brain. Although not always the case—kidney disease is real, for instance—there are levers in the brain that affect all these systems. There are on/off switches and knobs in the brain from which many issues originate.

Asked whether he would have a Neuralink chip implanted in his own brain, McDougall noted that the current use case is essentially using a mouse, and since he can already do that, there's no immediate value proposition; on safety grounds alone, though, he would do it tomorrow. Still, after researching it and observing someone like Nolan having so much fun, it's clear that if you can get the bits per second really high with a mouse, it could transform interaction, the way swiping on a smartphone changed how we interact with devices.

Recording speech intentions from the brain might also change things. The value proposition for the average person is significant because a keyboard is a clunky human interface that requires a lot of training and is highly variable in the maximum performance that an average person can achieve. Taking that out of the equation and having a natural word-to-computer interface might change things for many people. It would be hilarious if people opted for Neuralink to avoid the embarrassment of speaking to their phone in public, even if speech-to-text became extremely accurate.

With a bone-conducting case acting as an invisible headphone and the ability to think words into software, it starts to sound like embedded super intelligence. You could silently ask for the Wikipedia article on any subject and have it read to you without any observable change in the outside world, making standardized testing obsolete. If done well on the UX side, it could create a significant shift in how we interact with digital devices, similar to the transformation brought by smartphones.

Innovative UX design can revolutionize our interaction with digital devices, just like smartphones did.

If done well on the UX side, it could change the way we interact with digital devices, similar to how the smartphone did. While it may not transform society, it can create a significant shift. Personally, after ensuring the safety of everything involved, I would totally try it. It doesn't need to connect to your vision or brain entirely; even just connecting to the hand could offer a lot of interesting human-computer interaction possibilities.

The technology on the academic side is progressing rapidly. For instance, there was an amazing paper from Sergey Stavisky's lab at UC Davis that made an initial solve of speech decoding. They achieved a vocabulary of something like 125,000 words with very high accuracy, simply from the user thinking the words. You have to have the intention of speaking, using your inner voice. It's fascinating that you can do the intention-to-signal mapping just by imagining yourself doing it. With feedback, you can get really good at it, much like developing any other skill, such as touch typing.
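As a cartoon of what vocabulary-constrained speech decoding involves (the real UC Davis system uses recurrent networks plus a language model; this toy version just snaps a greedy phoneme readout to the nearest dictionary word, and every name and value in it is illustrative):

```python
import numpy as np

def edit_distance(a, b):
    # Classic Levenshtein distance via dynamic programming
    dp = np.arange(len(b) + 1)
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return int(dp[-1])

def decode_word(phoneme_probs, phonemes, vocab):
    """phoneme_probs: (T, P) per-bin probabilities from a neural decoder.
    Greedily pick the most likely phoneme per bin, collapse repeats,
    then snap to the closest word in the allowed vocabulary."""
    seq = [phonemes[i] for i in phoneme_probs.argmax(axis=1)]
    collapsed = [p for k, p in enumerate(seq) if k == 0 or p != seq[k - 1]]
    return min(vocab, key=lambda w: edit_distance(collapsed, vocab[w]))

# Toy usage with a two-word vocabulary and a fake decoder output
phonemes = ["HH", "AH", "L", "OW", "W", "ER", "D"]
vocab = {"hello": ["HH", "AH", "L", "OW"], "world": ["W", "ER", "L", "D"]}
probs = np.full((4, len(phonemes)), 0.05)
for t, p in enumerate(["HH", "AH", "L", "OW"]):
    probs[t, phonemes.index(p)] = 0.9
print(decode_word(probs, phonemes, vocab))  # -> "hello"
```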

I find it incredibly fascinating to play with this capability. I would get a Neuralink just to explore the potential of my mind to learn this skill, akin to learning how to type or move a mouse, but with my mind instead of my physical body. I can't wait to see what people do with it. Right now, we are like cavemen banging rocks with sticks, thinking we're making music. Eventually, when these technologies become more widespread, there will be the equivalent of a piano, allowing someone to create art with their brain in ways we can't even anticipate.

Teenagers, in particular, will excel at this. Anytime I think I'm good at something, I realize that teenagers can get incredibly good at it, even with the current technology. The addictive nature of improving and training, combined with software that adapts to you, will lead to significant advancements. We're only scratching the surface right now; there's so much more to do.

On the other side of the spectrum, I have an RFID chip implanted in me. It's a passive device used for unlocking things like a safe with top secrets. I'm not the first to do this; there's a community of biohackers who have done similar things. One early use case was storing private crypto wallet keys. I dabbled in that and had some fun with it. I even found some orphan crypto on it that had increased in value significantly over the years.

Imagine forgetting about something you thought was worthless, only to find out years later that it's now worth 50 times more because a community found value in it.

A few years ago, I had something that I thought was worthless and forgot about it. When I went back, I found that a community of people loved it and had propped up its value, causing it to increase fifty-fold. This change was quite surprising. The primary use case for this item is mostly as a tech demonstrator. It has my business card on it, which you can scan by touching it to your phone, and it even opens the front door to my house. It's simple stuff but a cool step to implant something in your body. This leap is similar to Neuralink because, for many people, the notion of putting something electronic inside a biological system is a big leap.

We have a kind of mysticism around the barrier of our skin. We're completely fine with knee replacements, hip replacements, and dental implants, but there's still mysticism around the inviolable barrier that the skull represents. This barrier needs to be treated pragmatically. The question isn't how incredible it is to open the skull; the question is what benefit we can provide.

From all the surgeries and everything you understand about the brain, how much does neuroplasticity come into play? How adaptable is the brain, for example, in healing from surgery or adapting to the post-surgery situation? Sadly, plasticity decreases with age, and healing decreases with age. I have too much gray hair to be optimistic about that. There are theoretical ways to increase plasticity using electrical stimulation, but nothing is proven robust enough to offer widely to people. However, there is cause for optimism that we might find something useful, such as an implanted electrode that improves learning.

There has been some amazing work recently from Nicholas Schiff, Jonathan Baker, and others. They have a cohort of patients with moderate traumatic brain injury who have had electrodes placed in the deep nucleus of the brain called the central median nucleus. When small amounts of electricity are applied to that part of the brain, it's almost like electronic caffeine. They are able to improve people's attention and focus and how well they can perform tasks. In one case, someone who was unable to work was able to get a job after the device was turned on. This is one of the Holy Grails for me with Neuralink and similar technologies: can we make people able to take care of themselves and their families economically again? Can we make someone who is fully dependent become fully independent, taking care of themselves and giving back to their communities? This is a very compelling proposition and motivates much of what I do and what many people at Neuralink are working towards.

Death is inevitable, but the challenge is to live fully despite knowing it will end.

The brain is certainly a delicate, complex beast, and we don't know every potential downstream consequence of a single change that we make. Adapting to changes is crucial, but the system itself is already designed with everything serving a purpose, so you don't want to mess with it too much. It's like eliminating a species from an ecology; you don't know what the delicate interconnections and dependencies are.

When it comes to surgeries, particularly P1, P2, P3, P4, and P5, the goal is to avoid a situation where I need to perform all the surgeries. Creating a process that is simple and robust on the surgery side is essential, so literally anyone could do it. We want to move away from requiring intense expertise or experience to have this successfully done, making it as simple and translatable as possible. Ideally, every neurosurgeon on the planet should have no problem doing this. Although we are probably far from a regulatory environment that would allow non-neurosurgeons to perform these surgeries, it is not impossible.

Regarding the robot R1, there is a complex relationship with it. During surgery, there are moments where I stand shoulder-to-shoulder with the robot, and it feels like a brother in arms, working together on the same problem. I'm not threatened by it, even though it might seem like an enemy that could take the job.

Experiencing high-stakes surgeries and helping people over the years has profoundly changed my understanding of life and death. It gives you a visceral sense that death is inevitable. As a neurosurgeon, you are deeply involved in unimaginable tragedies, such as young parents dying and leaving behind children. On the other hand, it takes the sting out of it a bit because you see how universally inevitable death is. There's zero chance that I'm going to avoid it, despite what techno-optimists and longevity buffs might believe. Entropy is a powerful force, and we are delicate DNA machines not up to the cosmic ray bombardment we face.

Life's beauty and terror coexist; embracing both makes every moment precious.

I have come to accept the intellectual certainty of death, but I still struggle with the existential aspect of it. The thought of losing my kids, my wife, or them losing me fills me with a profound sense of tragedy. While I understand that life will end, it often feels like it won't. We live as if it will never end, and the idea that this consciousness will cease to exist is terrifying. When I fully grasp this reality, it fills me with a deep fear, akin to what Ernest Becker described as Terror. People often aren't honest about how terrifying this realization is. The more you contemplate it, the more frightening it becomes. This is why the Stoics focused on it; it helps you appreciate every moment of life, despite the fear of its end.

In moments of warmth, safety, and love, you truly appreciate life. However, constantly facing death, as I do in my profession, can make it difficult to see the finiteness of life without breaking. It's a struggle between the neurosurgeon and the human within me. The human still feels the fear and pain of mortality. This struggle makes me appreciate being alive today. I have three kids and an amazing wife, and I am happy. I am involved in a project that matters and has the potential to be a gigantic leap for humanity.

Reading about historical explorers venturing into the Amazon or early steps into space, I realize that we are in the early stages of delving deep into the human brain. This isn't just about observing the brain; it's about interacting with it. This work has the potential to help many people and might help us understand the complexities of the human mind.

Imagine a world where we could turn down the volume on human suffering and tune our brains for a more harmonious society.

We have to justify our queasiness about these technologies in light of the fact that we could give people a knob to take away suicidal ideation and suicidal intention. If I could give them that knob, I would; I don't know how you justify not doing that. Think about all the suffering going on in the world: if every single human being who is suffering right now were a glowing red dot, the more suffering the brighter the glow, you would see a map of human suffering. Any technology that lets you dim that light of suffering on a grand scale is pretty exciting, because a lot of people are suffering, and most of them suffer quietly. We look away too often. We should remember those who are suffering because, once again, most of them suffer quietly.

On a grander scale, the fabric of society is also affected. People have a lot of complaints about how our social fabric is working or not working, how our politics is working or not working. Those things are made of neurochemistry too, in aggregate. Our politics is composed of individuals with human brains, and the way it works or doesn't work is potentially tunable. For instance, if we could remove our addictive behaviors or tune our addictive behaviors for social media or our addiction to outrage, our addiction to sharing the most angry political tweet we can find, it might lead to a more functional society. If you had options for people to moderate that maladaptive behavior, there could be huge benefits to society. Maybe we could all work together a little more harmoniously toward useful ends. There's a sweet spot, as you mentioned. You don't want to completely remove all the dark side of human nature because those are somehow necessary to make the whole thing work. But there is a sweet spot. We need to suffer a little, just not so much that we lose hope.

When you perform surgeries, have you ever seen consciousness in there? Was there a glowing light? I have this sense that I never found it, never removed it, like a Dementor in Harry Potter. I have this sense that consciousness is a lot less magical than our instincts want to claim it is. It seems to me like a useful analog for thinking about what consciousness is in the brain. We have a really good intuitive understanding of what it means to touch your skin and know what's being touched. I think consciousness is just that level of sensory mapping applied to the thought processes in the brain itself. Consciousness is the sensation of some part of your brain being active. You feel it working; you feel the part of your brain that thinks of red things, winged creatures, or the taste of coffee. You feel those parts of your brain being active the way that I'm feeling my palm being touched. That sensory system that feels the brain working is consciousness. It's the same way it's a sensation of touch when you're touching a thing. Consciousness is the sensation of you feeling your brain working, your brain thinking, your brain perceiving. It's not a warping of space-time or some quantum field effect. It's nothing magical.

People always want to ascribe to consciousness something truly different. There's this long history of people looking at whatever the latest discovery in physics is to explain consciousness because it's the most magical, the most out-there thing that you can think of. People always want to do that with consciousness. I don't think that's necessary. It's just a very useful and gratifying way of feeling your brain work. As we said, it's one heck of a brain. Everything we see around us, everything we love, everything that's beautiful came from brains like these. It's all electrical activity happening inside your skull. I, for one, am grateful there are people like you exploring all the ways that it works and all the ways it can be made better.

Thank you so much for talking today. It's been a joy. Thanks for listening to this conversation with Matthew McDougall. And now, dear friends, here's Bliss Chapman with a brain interface.

Unlocking the brain's potential can transform lives and grant independence to those with severe physical limitations.

First, I want to thank all the people I've had the chance to speak with for sharing their stories with me. I don't think there's any world in which I can share their stories as powerfully as they can. To summarize at a very high level, what I hear repeatedly is that people with ALS or severe spinal injuries, who are in a place where they basically can't move physically anymore, are ultimately looking for independence. This can mean different things for different people. For some, it means the ability to communicate again independently, without needing to wear something on their face or needing a caretaker to put something in their mouth. For others, it means the independence to work again, to navigate a computer digitally and efficiently enough to get a job, support themselves, move out, and ultimately support themselves after their family is no longer there to take care of them. For some, it's as simple as being able to respond to their kid in time before they run away or get interested in something else. These are deeply personal and very human problems.

What strikes me again and again when talking with these folks is that this is actually an engineering problem. With the right resources and the right team, we can make significant progress. At the end of the day, I think that's a deeply inspiring message and something that makes me excited to get up every day. It's both an engineering problem in terms of a BCI (Brain-Computer Interface) that can give them capabilities to interact with the world, and an engineering problem for the rest of the world to make it more accessible for people living with quadriplegia.

I'll take a broad view on this for a second. I am very much in favor of anyone working in this problem space. Beyond BCI, I'm happy, excited, and willing to support in any way I can those working on eye-tracking systems, speech-detection systems, head trackers, mouth sticks, or quad sticks. I've met many engineers and folks in the community who do exactly those things. For the people we are trying to help, it doesn't matter what the complexity of the solution is as long as the problem is solved. There can be many solutions out there that help with these problems, and BCI is one of a collection of such solutions.

Brain implants can offer life-changing independence for those with severe disabilities, providing autonomy that other tools can't match.

Unless you live with a severe spinal cord injury or know someone with ALS, it's often not obvious why one would want a brain implant to connect to and navigate a computer. The answer is surprisingly nuanced. I've learned a huge amount just working with Noland in the first clinical trial and understanding from him, in his words, why this device is impactful for him. Even if you can achieve the same thing with a mouth stick when navigating a computer, he doesn't have access to that mouth stick every single minute of the day. He only has access when someone is available to put it in front of him. A BCI can really offer a level of independence and autonomy that, if it weren't literally physically part of your body, would be hard to achieve in any other way.

There are fascinating aspects to what it takes to get Noland to control a cursor on the screen with his mind. You texted me something that I just love: "I was part of the team that interviewed and selected P1. I was in the operating room doing the first human surgery, monitoring live signals coming out of the brain. I work with the user basically every day to develop new UX paradigms, decoding strategies, and I was part of the team that figured out how to recover useful BCI to new world record levels when the signal quality degraded." We will talk about every aspect of that, but just zooming out, what was it like to be part of that historic first?

For me, this is something I've been excited about for close to 10 years now. To be able to be even just a small part of making it a reality is extremely exciting. There were a couple of special moments during that whole process that I'll never truly forget. One of them is during the actual surgery. At that point in time, I knew Noland quite well, I knew his family. The initial reaction when Noland is rolled into the operating room is just an "oh" kind of reaction. But at that point, muscle memory kicks in, and you let your body do all the talking. I had the lucky job in that particular procedure to be in charge of monitoring the implant. My job was to sit there, look at the signals coming off the implant, look at the live brain data streaming off the device as threads are being inserted into the brain, and basically observe and make sure that nothing is going wrong or that there are no red flags or fault conditions that we need to investigate or pause the surgery to debug.

Because I had that sort of spectator view of the surgery, I had a slightly more removed perspective than most folks in the room. I got to sit there and think to myself, "Wow, that brain is moving a lot." When you look inside the little craniectomy where we stick the threads in, one thing most people don't realize is that the brain moves a lot when you breathe and when your heart beats, and you can see it visibly. That was a surprise to me and very exciting: to see the brain of someone you know personally and have talked with at length actually pulsing and moving inside their skull.

Watching live brain data stream to an iPhone during surgery was surreal—robots and neurosurgeons in awe!

The most raw form of signal you can collect from a Neuralink electrode is essentially a measurement of the local field potential, the voltage measured by that electrode. We have a mode in our application that allows us to visualize where detected spikes are: it shows where in the broadband signal, in its very raw form, a neuron is actually spiking. One of the moments I'll never forget as part of this whole clinical trial is seeing live in the operating room, while the patient was still under anesthesia, beautiful spikes being shown in the application, streaming live to a device I was holding in my hand: the raw broadband data, and then the signal processing on top of it, showing the detected spikes.

During that procedure, there were actually a lot of cameramen in the room, curious and wanting to see. Several neurosurgeons were also present, excited to see robots taking their job. They were all crowded around a small iPhone, watching this live brain data stream out of the patient's brain. Seeing the robot do some of the surgery, especially the computer vision aspect where it detects all the spots to avoid the blood vessels, and then, with human supervision, doing the high-precision insertion of the threads into the brain, was quite an experience. As for the robot itself, my answer might sound lame, but watching it was boring, because I've seen it so many times. That's exactly how you want surgery to be: boring. I've seen the robot do this surgery literally hundreds of times, so it was just one more time, another day.

When Nolan woke up, he was really excited to get started. He wanted to begin on the day of the surgery itself, but we waited until the next morning. The next morning in the ICU, where he was recovering, he was eager to start understanding what kind of signal we could measure from his brain. For those unfamiliar with the Neuralink system, we implant the Neuralink device in the motor cortex, the part of the brain responsible for representing things like motor intent. If you imagine closing and opening your hand, that kind of signal representation would be present in the motor cortex. One way we start to map out what kind of signal we have access to in any particular individual's brain is through a task called body mapping.

Body mapping involves presenting a visual to the user and asking them to imagine doing a specific action, such as a 3D hand opening and closing or an index finger modulating up and down. Although the user is paralyzed and cannot physically move their arm, we can record neural activity while they perform this task. We can then offline model and check if we can predict or detect the modulation corresponding with those different actions. We did this task and realized there was some modulation associated with some of Nolan's hand motion, which was the first indication that we could potentially use that modulation to do useful things in the world, such as controlling a computer cursor.
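To make that offline check concrete, here is a minimal sketch, assuming binned spike counts and cued-action labels from a body-mapping session are already available as arrays; the data below is a random stand-in, and the shapes and names are illustrative rather than Neuralink's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical data: one row per trial window, one column per channel.
# X holds spike counts; y holds the imagined action cued on each trial
# (e.g., 0 = "hand open/close", 1 = "index finger up/down").
rng = np.random.default_rng(0)
X = rng.poisson(lam=5.0, size=(200, 1024)).astype(float)  # stand-in for real recordings
y = rng.integers(0, 2, size=200)

# If the imagined actions modulate the recorded channels, a simple classifier
# should predict the cued action well above chance under cross-validation.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} (chance is 0.50)")
```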

Mind-blowing moment: Visualizing finger movement can generate a brain signal to control a computer cursor!

One of the useful things in the world is the ability to control a computer cursor. For instance, when we first showed this capability to a participant, we took a live view of his brain activity and displayed it in front of him. We asked him to tell us what was happening, as he could imagine different things, and we knew it was modulating some neurons. He played with it for a bit and initially didn't quite understand it. After some more attempts, he realized that when he moved his finger, a particular neuron started to fire more. We asked him to prove it, and he did. The minute he moved, we could see this neuron firing instantaneously. This single neuron firing was a clear indication of behaviorally modulated neural activity, which could then be used for downstream tasks like decoding a computer cursor.

A quick note on terminology: a single channel is associated with a single electrode. Channel and electrode are interchangeable terms, and there are 1,024 of them. It's incredible that this works. Learning about this and seeing it in action was mind-blowing. The intention of moving a finger can turn into a signal, and you can even skip that step and directly visualize or intend the cursor moving, producing a signal that moves the cursor. There are many exciting things to learn about the brain and how it works. The existence of a usable signal is powerful, but it feels like just the beginning of figuring out how to use that signal effectively.

Additionally, there are many fascinating details, such as the body mapping step. The version I saw had a super nice graphical interface that visualizes moving the hand. It felt futuristic, like waking up in a high-end video game tutorial. The future should feel like the future, but it's not easy to achieve. It needs to be simple but not too simple. The UX design component is underrated in BCI development. There is an interaction effect between how you visualize instructions to the user and the quality of the signal you get back. The alignment of behavioral and neural signals depends on how well you express to the user what you want them to do. We spend a lot of time thinking about the UX of our applications, the decoder functions, and the control surfaces provided to the user. All these details matter a lot.

We decode brain activity by detecting and processing tiny electrical impulses from neurons, turning complex signals into simple, actionable data.

What we build, and what we're trying to measure, is really individual neurons producing action potentials. An action potential is a little electrical impulse that you can detect if you're close enough, within, let's say, 100 microns of that cell. That is a very tiny distance, so any given electrode only picks up the small number of neurons within a small radius around it.

The other thing worth understanding about the underlying biology here is that when neurons produce an action potential, the width of that action potential is about 1 millisecond. From the start of the spike to the end of the spike, the whole width of that characteristic feature of a neuron firing is 1 millisecond wide. If you want to detect that an individual spike is occurring or not, you need to sample that signal or sample the local field potential nearby that neuron much more frequently than once a millisecond. You need to sample many times per millisecond to detect that this is actually the characteristic waveform of a neuron producing an action potential.

We sample across all 1,024 electrodes about 20,000 times a second. 20,000 times a second means that for a given 1-millisecond window, we have about 20 samples that tell us the exact shape of that action potential. Once we've sampled the underlying electrical field near these cells at a super high rate, we can process that signal down to just whether we detect a spike or not: a binary signal, one or zero, indicating whether we detect a spike in this one millisecond.

We do this because the actual information-carrying subspace of neural activity is just when spikes are occurring. Essentially, everything we care about for decoding can be captured or represented in the frequency characteristics of spike trains, meaning how often spikes are firing in any given window of time. This allows us to do a significant amount of compression from this very rich, high-density signal to something much more sparse and compressible that can be sent out over a wireless radio, like Bluetooth communication.
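To make the compression concrete, here is a minimal sketch of that reduction, assuming a 20 kHz voltage trace per channel and a simple threshold detector standing in for the real on-device spike detection discussed below; the threshold and array names are invented for illustration.

```python
import numpy as np

FS = 20_000                  # samples per second (about 20 kHz, per the discussion)
SAMPLES_PER_MS = FS // 1000  # ~20 samples inside each 1 ms window

def binarize_spikes(voltage: np.ndarray, threshold: float) -> np.ndarray:
    """Collapse a raw 20 kHz trace into one bit per millisecond:
    1 if any sample in that window crosses the (negative-going) threshold."""
    n_ms = voltage.shape[0] // SAMPLES_PER_MS
    windows = voltage[: n_ms * SAMPLES_PER_MS].reshape(n_ms, SAMPLES_PER_MS)
    return (windows.min(axis=1) < threshold).astype(np.uint8)

# One second of raw signal becomes 1,000 bits instead of 20,000 floats,
# which is what makes streaming over a Bluetooth-class radio feasible.
raw = np.random.randn(FS)               # stand-in for one channel's voltage
spikes = binarize_spikes(raw, threshold=-3.5)
print(spikes.shape, int(spikes.sum()))  # (1000,) and the per-second spike count
```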

A quick tangent here: you mentioned electrodes and neurons, and there's a local neighborhood of neurons near each electrode. How difficult is it to isolate which neuron a spike came from? There's a whole field of academic neuroscience work on exactly this problem of spike sorting: given an electrode, or electrodes, measuring a set of neurons, how can you sort out which spikes are coming from which neuron? This is pursued in academic work because you care about understanding the underlying neuroscience of the brain. If you care about understanding how the brain represents information and how that evolves through time, then it's a very important question.

For the engineering side of things, at least at the current scale, if the number of neurons per electrode is relatively small, you can get away with ignoring that problem completely. You can think of it as a random projection of neurons to electrodes. There may be more than one neuron per electrode, but if that number is small enough, those neurons' signals can be treated together as a single combined unit. For many applications, that's a reasonable trade-off to make, and it can simplify the problem a lot.

Turning raw neural signals into actionable data is a complex but evolving journey, aiming for efficient, low-power processing to revolutionize tech and gaming.

When a channel is firing, it represents a specific action depending on the context of other channels firing in concert. For instance, when a channel fires in concert with 50 other channels, it means move left, but when it fires with 10 other channels, it means move right. This requires efficient on-board spike detection that is both fast and low-power to avoid generating too much heat, necessitating a super simple signal processing step.

In overcoming this challenge, various methods have been explored to turn raw signals into usable features. Although the current methods work, the journey is ongoing, and future approaches may prove more effective. One method involves using a convolutional filter that slides across the signal to match a certain template, which includes the depth of the spike modulation, recovery, and the duration of the process. This approach is efficient in hardware, allowing low-power operation across 1,024 channels simultaneously.
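Here is a toy sketch of that matched-filter idea, assuming a stored spike-shaped template and a detection threshold; the template, threshold, and signal are invented and merely illustrate the sliding-template approach, not the implant's actual hardware implementation.

```python
import numpy as np

def detect_spikes(signal: np.ndarray, template: np.ndarray, threshold: float) -> np.ndarray:
    """Slide a spike-shaped template across the signal and flag samples
    where the correlation with the template exceeds a threshold."""
    # Cross-correlation implemented as convolution with the reversed template.
    score = np.convolve(signal, template[::-1], mode="same")
    return score > threshold

# A crude 1 ms spike template at 20 kHz: sharp downward deflection, then recovery.
template = np.concatenate([-np.hanning(8), 0.3 * np.hanning(12)])
signal = np.random.randn(20_000)  # stand-in for one channel's raw trace
hits = detect_spikes(signal, template, threshold=4.0)
print(int(hits.sum()), "candidate spikes in one second of signal")
```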

Another approach being explored is spike band power, which can be combined with spike detection. This method may detect signals from neurons too far away to be detected as individual spikes, capturing population-level activity, the "hash" of activity. However, this signal is a floating-point representation, making it more expensive to send and requiring different compression methods than binary signals.

Communication is limited by the amount of data that can be sent, particularly using the Bluetooth protocol, which necessitates batching while keeping latency extremely low. Addressing latency is crucial, especially for applications like esports. The goal is to build the best mouse in the world, akin to the Tesla Roadster of mice, potentially allowing people with paralysis to dominate esports competitions due to access to superior technology and ample time to practice.

Imagine controlling your computer with just your thoughts—no mouse, no delay, just instant action.

A Neuralink could also offer people without paralysis another way to interact with a digital device. This presents an interesting possibility, especially if it provides a fundamentally different and more efficient experience. Even if it doesn't offer full-on high-bandwidth communication, the ability to move the mouse 10 times faster, achieving a bits-per-second rate 10 times higher than with a traditional mouse, is intriguing. With training, users can get really good at it, and it's clear that there is a higher ceiling of performance, because users don't have to buffer their intention through their arm and muscles. Without a brain implant, there is an inherent lead time of roughly 75 milliseconds on any action one tries to take; with one, that delay can be bypassed.

There are nuances to this, such as evidence that the motor cortex can plan out sequences of actions, meaning the full benefit may not always be realized. However, for reaction-time style games, where quick responses are critical, having a brain implant offers an inherent advantage since it bypasses muscle control. The question then becomes how much faster this can be made. Currently, the end-to-end latency from brain spike to cursor movement is about 22 milliseconds. In comparison, the best gaming mice have about 5 milliseconds of latency, depending on screen refresh rates and other characteristics. The rough time for a neuron in the brain to command the hand is about 75 milliseconds, so Neuralink is already competitive and slightly faster than traditional hand movements.

When Nolan moved the cursor for the first time, he described the experience as the cursor moving before he actually intended it to, which is surreal and something many would love to experience. This immediacy and fluidity are fascinating, especially since we've grown accustomed to natural latency.

Currently, the bottleneck in communication is partly due to the Bluetooth Low Energy protocol, which has restrictions on communication speed, with updates on the order of 7.5 milliseconds. As latency is pushed down to the level of individual spikes impacting control, this protocol becomes a limiting factor. Additionally, it's not just the Neuralink itself that matters; the entire system, including screen refresh rates, needs to be as reactive as the technology allows. For instance, a 120 Hz screen may not suffice if the response time needs to be at the level of 1 millisecond.

Transforming brain signals into computer commands is the future of accessibility.

Responding to something at the level of 1 millisecond is a really cool challenge, and "the best mouse in the world" is a goal I'd even like to see on a t-shirt. On the receiving end, the decoding step is crucial. Now that we have figured out what the spikes are and gathered them all together, we need to send that over to the app. But what does the decoding step look like?

First, let's define what decoding is. Many listeners might not know what it means to decode brain activity. Zooming out a bit, what is the app? There's an implant that wirelessly communicates with any digital device that has an app installed. The goal is to help someone with paralysis, in this case, Noland, navigate his computer independently. We believe the best way to do this is to offer them the same tools we use to navigate our software. We don't want to rebuild an entire software ecosystem for the brain, at least not yet. Maybe someday, there will be user experiences built natively for BCI. But for now, people would prefer to control mouse and keyboard inputs to all the applications they use daily for work, communication, etc.

The application’s job is to translate the wireless stream of brain data from the implant into computer control. We do this by building a mapping from brain activity to HID (Human Interface Device) inputs, which are the actual hardware inputs like moving a mouse or pressing a key. This mapping is what the app is responsible for, and there's a lot of nuance in how it works. We are still in the early stages of figuring out how to do this optimally.
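As a rough sketch of what that translation layer might look like, assuming decoded velocities arrive from some upstream source (the `next_decoded_velocity` stub below is hypothetical), and using `pynput` as one off-the-shelf way to emit mouse movements on a desktop OS:

```python
import time
from pynput.mouse import Controller

mouse = Controller()
GAIN = 12.0  # pixels per unit of decoded velocity; tuned per user in practice

def next_decoded_velocity() -> tuple[float, float]:
    """Hypothetical stub: in a real system this would return the latest
    (vx, vy) intention decoded from the incoming stream of brain data."""
    return 0.0, 0.0

# Translate each decoded velocity sample into a relative mouse movement,
# i.e., the same kind of HID input a physical mouse would produce.
while True:
    vx, vy = next_decoded_velocity()
    mouse.move(int(GAIN * vx), int(GAIN * vy))
    time.sleep(0.005)  # ~200 Hz update loop; real latency targets are stricter
```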

Decoding is the process of taking statistical patterns of brain data sent via Bluetooth to the application and turning them into actions like mouse movements. This process has two parts: training and inference. The training step is a behavioral process where the user imagines doing different actions, like moving a cursor on a screen in various directions. We build a mapping from brain data to these imagined behaviors using modern machine learning methods. During the inference phase, we use that model, a deep neural network, to decode live brain data and control the computer.
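A compressed sketch of those two phases, assuming binned spike counts paired with cued cursor velocities from a calibration session; ridge regression stands in here for the deep neural network mentioned above, and the data is a random placeholder.

```python
import numpy as np
from sklearn.linear_model import Ridge

# --- Training: map brain data to imagined behavior ---
# Hypothetical calibration data: one row per 50 ms bin of spike counts
# across 1,024 channels, labeled with the cued cursor velocity (vx, vy).
rng = np.random.default_rng(0)
X_train = rng.poisson(2.0, size=(5000, 1024)).astype(float)
y_train = rng.normal(size=(5000, 2))  # stand-in for cued velocities

decoder = Ridge(alpha=1.0).fit(X_train, y_train)

# --- Inference: decode live brain data into cursor control ---
live_bin = rng.poisson(2.0, size=(1, 1024)).astype(float)
vx, vy = decoder.predict(live_bin)[0]
print(f"decoded velocity: ({vx:+.2f}, {vy:+.2f})")
```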

Creating the world's best mouse means making it ultra-responsive to user intentions, down to the millisecond.

In an ideal world, if we could obtain a signal of behavioral intent that is ground-truth accurate at the scale of 1-millisecond resolution, then with high confidence, we could build a mapping from neural spikes to that behavioral intention. However, the challenge is that we don't directly observe what the user is actually intending to do at each moment. This introduces a lot of nuance in how we build user experiences that provide more than just a coarse, on-average-correct representation of what the user intends to do.

To build the world's best mouse, it needs to be as responsive as possible, accurately reflecting the user's intentions at every step, not just on average. Developing a behavioral calibration game or software experience that achieves this level of resolution is a significant focus of our work. The calibration process and interface must encourage precision, meaning it should be super intuitive for the user to perform the intended action accurately. This is crucial because we don't have any feedback except possibly speaking to the user afterwards about what they actually did.

This presents an exciting UX challenge. User experience is not just about being friendly or usable; it's about how the system works, especially for calibration. Calibration is fundamental to the operation of Neuralink, and it requires continued calibration to maintain accuracy.

Moreover, there is a very interesting machine learning challenge here. Given a dataset that includes some on-average correct behavior (e.g., asking the user to move up, down, right, or left) and a dataset of neural spikes, we need to infer the high-resolution version of their intention. With enough data points and constraints on the model, there should be a way to let the model figure out the exact intention at each millisecond. For example, determining how hard the user is pushing upwards at any given moment.

Having very clean labels is crucial because noisy labels make the problem much harder from a machine learning perspective. Obtaining clean labels is a UX challenge. Any labeling strategy will have assumptions about what the user is attempting to do. These assumptions can be formulated in a loss function or heuristics to estimate the user's intention. The accuracy of these assumptions is critical.
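As one illustration of how such an assumption can be formulated in a loss function: if the labeling strategy assumes only that the user is pushing toward the target (direction known, effort unknown), a cosine-style loss penalizes direction errors while leaving the magnitude for the model to infer. This is an invented example of the idea, not Neuralink's actual loss.

```python
import torch
import torch.nn.functional as F

def direction_only_loss(pred_vel: torch.Tensor, target_dir: torch.Tensor) -> torch.Tensor:
    """Penalize only the angle between the predicted velocity and the assumed
    direction toward the target; say nothing about how hard the user pushes.

    pred_vel:   (batch, 2) decoded velocities
    target_dir: (batch, 2) unit vectors toward the cued target
    """
    cos = F.cosine_similarity(pred_vel, target_dir, dim=-1)
    return (1.0 - cos).mean()

# A prediction pointing mostly rightward incurs a small loss against
# an assumed "push right" label, regardless of its magnitude.
pred = torch.tensor([[0.5, 0.1]])
tgt = torch.tensor([[1.0, 0.0]])
print(direction_only_loss(pred, tgt))
```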

Navigating the invisible cursor challenge: precise calibration is key to overcoming user adaptation and feedback limitations.

In the given scenario, the task is to move the cursor exactly 200 pixels to the right. The cursor disappears, and the user must move the now invisible cursor 200 pixels to the right. The assumption here is that the user can modulate that position offset correctly. That is a weaker assumption than trying to guesstimate the user's intention at each millisecond, and therefore it may hold more reliably.

You can imagine different tasks that make different assumptions about the nature of the user's intention. Those assumptions being correct is what I would think of as a clean label for that step. The task involves visualizing a cursor and moving it to the right, left, up, or down, or by a certain offset. This raises the question: is this the best way to do calibration?

For example, an alternative method could involve a game like Web Grid, where a large amount of data is collected from a person playing the game. If they are in a state of flow, you might get a clean signal as a side effect. Is that an effective way to do initial calibration?

Great question, there's a lot to unpack here. First, let's draw a distinction between an open loop and a closed loop. In an open loop, the user starts from zero with no model and tries to reach a level of control. In this setup, you need a task that gives the user a hint of what you want them to do, so you can build a mapping from brain data to output. Once they have a model, they can use it, adapt to it, and retrain on that data to boost performance.

There are challenges with both techniques. In an open loop task, the user doesn't get proprioceptive feedback about what they're doing. Imagine having your whole right arm numbed, stuck in a box, and asked to match the speed of something moving on a screen without visual or proprioceptive feedback. You'd likely be inaccurate and inconsistent in performing that task. That's the fundamental challenge of open loop.

In a closed loop, once the user is given a model and can start moving the mouse on their own, they will naturally adapt to that model. This co-adaptation between the model learning what they're doing and the user learning how to use the model may not find the best global minimum. The first model might be noisy or have quirks, and the user might figure out the right sequence of imagined motions or the right angle to hold their hand to make it work. However, the next day, they might not remember all the tricks they used previously, creating a complicated feedback cycle that makes debugging difficult.

There's a lot of fascinating aspects here. For instance, in closed loop situations, I've seen psychology graduate students use software with bugs. They figure out ways to work around these bugs, even if they don't know how to program themselves, and they continue using it for years.

Adapting to software bugs instead of fixing them shows how good we are at coping, but it's not always the best solution.

That kind of adaptation is impressive, but it isn't optimal: the user is coping with the system's quirks rather than the system being fixed, and because the workarounds shift from session to session, the feedback cycle becomes very hard to debug.

Addressing this problem is challenging. One might wonder if restarting from scratch occasionally is necessary. It's important to note that this is not a solved problem, especially in the context of brain-computer interfaces. Simply scaling the channel count won't inherently solve it, although it might help by providing a richer covariance structure for better labeling strategies. For those in academia working on BCIs, this remains an open problem.

Any solution involving a closed loop will present a difficult debugging challenge. A general heuristic for choosing problems to tackle is to select the ones that are easiest to debug. In an open loop setting, there's no feedback cycle to debug with the user in the loop, making it potentially easier to debug. Even in a closed loop setting, there's no special software or magic to infer the user's true intentions. The model's output might not reflect what the user is actually trying to do, complicating the retraining process.

Predicting user intentions is more effective than tracking physical movements for brain-computer interfaces.

Ground-truth data of what the user is trying to do can be obtained with an able-bodied monkey that has a Neuralink device implanted and is controlling a computer. Even with that ground-truth dataset, it turns out that the optimal thing to predict for high-performance BCI is not the direct control of the mouse. You can imagine building a dataset of what's going on in the brain and what the mouse is exactly doing on the table. It turns out that if you build the mapping from neural spikes to predict exactly what the mouse is doing, that model will perform worse than a model trained on higher-level assumptions about what the user might be trying to do, for example, that the monkey is trying to go in a straight line to the target. Making those assumptions turns out to produce a more effective model than predicting the underlying hand movement. The intention, not the physical movement, is the more powerful thing to chase.

This is super interesting because, with the BCI, in this case, with digital telepathy, you're acting on the intention, not the action. This is why there's an experience of feeling like it's happening before you meant for it to happen. That is so cool and is why you could achieve superhuman performance in terms of the control of the mouse. For open loop, just to clarify, whenever a person is tasked with moving the mouse to the right, there's no feedback, so they don't get the satisfaction of actually getting it to move. You could imagine giving the user feedback on the screen, but it's difficult because, at this point, you don't know what they're attempting to do.

In a specific example, maybe your calibration task involves trying to move the cursor to a certain position offset. Your instructions to the user might be: "The cursor is here now; when the cursor disappears, imagine moving it 200 pixels to the right to be over this target." In that scenario, you could come up with a consistency metric to display to the user. For instance, you could produce a probabilistic estimate of how likely it is that the action taken matches the intended action based on the spike train. This could give the user feedback on how consistent they are across different trials. Providing this kind of feedback might also make the user more behaviorally engaged because the task is boring without any feedback. There may be benefits to the user experience of showing something on the screen, even if it's not accurate, just to keep the user motivated to try to increase that number or push it upwards. There is a psychological element here, and all of that is a UX challenge.
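A toy version of such a consistency score, assuming per-trial decoded movement vectors and the instructed offsets are available; averaging cosine similarity across trials yields a single number the user could try to push upward. Entirely illustrative.

```python
import numpy as np

def consistency_score(decoded: np.ndarray, instructed: np.ndarray) -> float:
    """Mean cosine similarity between what the decoder read out on each trial
    and what the user was instructed to do (e.g., '200 pixels to the right')."""
    d = decoded / np.linalg.norm(decoded, axis=1, keepdims=True)
    i = instructed / np.linalg.norm(instructed, axis=1, keepdims=True)
    return float(np.mean(np.sum(d * i, axis=1)))

# Three trials where the user was asked to move right by 200 px:
instructed = np.tile([200.0, 0.0], (3, 1))
decoded = np.array([[180.0, 20.0], [210.0, -15.0], [150.0, 60.0]])
print(f"consistency: {consistency_score(decoded, instructed):.2f}")  # 1.0 = perfect
```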

Our BCI system lets users recalibrate and fine-tune their experience anytime, making it feel like DJ mode for your brain.

Users can employ the device whenever and however they want, which is our ultimate goal. There are several solutions to achieve this state without considering the stationarity problem. The first important solution is that users can recalibrate the system whenever they want. For instance, Nolan can recalibrate the system at 2 AM without needing assistance from a caretaker, parent, or friend. Another crucial aspect is that once a good model is calibrated, it can be used continuously without frequent recalibration. The frequency of recalibration depends on the user's appetite for performance. We observe a degradation in model performance over time, but this can be mitigated behaviorally by the user adapting their control strategy. Additionally, software features provided to the user can help mitigate this degradation.

For example, users can adjust the gain, which controls how fast the cursor moves in response to input. They can also adjust the smoothing, which determines how smooth the cursor movement is, and the friction, which affects how easily the cursor can be stopped and held still. These software tools offer users a great deal of flexibility and troubleshooting mechanisms to solve problems independently. All these adjustments are made by looking to the right side of the screen and selecting the mixer, which is akin to a DJ mode for your BCI. The interface is exceptionally well-designed.
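A minimal sketch of how those three knobs might compose in a cursor update loop; the parameter names mirror the description above, but the exact dynamics are an assumption.

```python
import numpy as np

def step_cursor(pos, raw_vel, smoothed_vel,
                gain=10.0, smoothing=0.8, friction=0.5):
    """One cursor update combining the three user-tunable knobs:
    gain      - how fast the cursor moves per unit of decoded velocity
    smoothing - exponential smoothing of the velocity (higher = smoother)
    friction  - speed threshold below which the cursor holds still
    """
    smoothed_vel = smoothing * smoothed_vel + (1 - smoothing) * np.asarray(raw_vel)
    vel = gain * smoothed_vel
    if np.linalg.norm(vel) < friction:  # make it easy to stop and hold still
        vel = np.zeros(2)
    return pos + vel, smoothed_vel

pos, sv = np.zeros(2), np.zeros(2)
for v in [(0.3, 0.0), (0.25, 0.05), (0.01, 0.0)]:
    pos, sv = step_cursor(pos, v, sv)
print(pos)
```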

Nolan mentioned in a stream that there was cursor drift and bias, which could be adjusted by looking to the side of the screen. This concept of bias correction originates from academia, specifically from BrainGate clinical trial participants who pioneered the idea. Our implementation offers a beautiful user experience where users can flash the cursor to the side of the screen to open a window for adjusting the cursor bias. Bias refers to the default motion of the cursor when the user is imagining nothing, and it significantly impacts the quality of the cursor control experience.

When the system works well, it provides a joyful and pleasant experience, but when it doesn't, it can be very frustrating. This is the essence of user experience (UX)—the potential to either frustrate or bring joy to users. UX is not just about what appears on the screen but also about the control surfaces provided to the user. We aim to make users feel like they are in an F1 car rather than a minivan.

We're designing brain-controlled interfaces to be so intuitive that using them feels as natural as moving your own hand.

Nolan himself is an F1 fan, so we refer to ourselves as a pit crew; he truly is the F1 driver. There are different control surfaces that various cars and airplanes provide the user, and we take a lot of inspiration from them when designing how the cursor should behave. One nuance here is the difference in response curves between a mouse and a MacBook trackpad. When you move your finger on a MacBook trackpad, the response curve mapping that input to cursor movement is different from the one used for a mouse. This is because someone, a long time ago, designed the initial input systems for computers and thought through exactly how it feels to use these different devices.

Now, we are designing the next generation of input systems for computers, which is entirely done via the brain. There is no proprioceptive feedback; you don't feel the mouse in your hand or the keys under your fingertips. You want a control surface that still makes it easy and intuitive for the user to understand the state of the system and how to achieve what they want. Ultimately, the end goal is for the user experience to completely fade into the background, becoming so natural and intuitive that it is subconscious to the user. They should feel like they have direct control over the cursor, which just does what they want it to do without thinking about the implementation.

Regarding laws of UX, such as Fitts' law, which models how quickly a user can acquire a target given its distance and size, we are in the early stages of discovering the analogous laws for brain-controlled interfaces. We wouldn't claim to have solved that problem yet, but we have learned some things that make it easier for the user to get stuff done. It sounds straightforward when verbalized, but it takes time to reach that point during the debugging process.
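For reference, the classic form of Fitts' law, to which such future laws of brain-controlled UX would be the analogue, predicts the movement time $MT$ to acquire a target at distance $D$ with width $W$:

$$MT = a + b \log_2\!\left(\frac{2D}{W}\right)$$

where $a$ and $b$ are empirically fitted constants for a given input device.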

Any machine learning system has some number of errors, and it matters how those errors translate to the downstream user experience. For example, if you're developing a search algorithm for photos and it pulls up a photo of your friend Josephine instead of Joe, the cost of the error is not high. However, in scenarios like detecting insurance fraud, where errors could send someone to court, you need to be very careful about how those errors translate to downstream effects.

Accurate clicks are crucial; a single misclick can cause major disruptions.

As long as the output of the model is on average correct, the user can steer through time, with the user control loop in the mix, and get to the point they want to reach. However, the same is not true for a click. A click is performed almost instantly, at the scale of neurons firing, so you want to be very sure it is correct. A false click can be very destructive to the user: they might accidentally close a tab they are working on and lose all their progress, or they might accidentally hit the send button on a half-composed text, leading to awkward communication.

There are different cost functions associated with errors in this space. Part of the UX design is understanding how to build a solution that remains useful to the end user even when it is wrong. This involves assigning a cost to every action when an error occurs. Every action, if an error occurs, has a certain cost, and incorporating that into how you interpret the intention and map it to the action is really important. For instance, sending a text early can be a very expensive cost and super annoying. Imagine if your cursor misclicked every once in a while; it would be super obnoxious. The worst part is that usually, when the user is trying to click, they are also holding still because they are over the target they want to hit and are getting ready to click. In the datasets we build, it is on average the case that low speeds or the desire to hold still is correlated with when the user is attempting to click.

Nor is it the case that a click, being a binary signal, is super easy to decode. It is easy to decode to a degree, but the bar is much higher for it to become genuinely useful to the user. There are ways to work around this, such as taking a compound approach that uses a longer window of time for the click so you can be very confident about the answer. However, the world's best mouse doesn't take a second or 500 milliseconds to click; it takes five milliseconds or less. If you're aiming for that high bar, you really want to solve the underlying problem.

Measuring performance is crucial in this context. Our goal is to provide the user with the ability to control their computer as well as I can and hopefully better. This means they can do it at the same speed and have access to all the same functionality, including details like command tab and command space, with the same level of reliability as I can with my muscles. This is a high bar, and we intend to measure and quantify every aspect of that to understand our progress towards that goal.

The task of measuring bits per second (BPS) for cursor control is built out of a grid, similar to a software keyboard on the screen. BPS is a measure calculated by taking the logarithm of the number of targets on the screen. If modeling a keyboard, one might subtract one for the delete key. The formula involves the log of the number of targets on the screen, multiplied by the number of correct selections minus incorrect selections, divided by a time window, such as 60 seconds. This is the standard way to measure a cursor control task in academia.
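Written out, the metric as described comes to:

$$\text{BPS} = \frac{\log_2(N - 1)\,\left(S_{\text{correct}} - S_{\text{incorrect}}\right)}{T}$$

where $N$ is the number of targets on the screen (one is subtracted, e.g., for the delete key when modeling a keyboard), $S_{\text{correct}}$ and $S_{\text{incorrect}}$ are the counts of correct and incorrect selections, and $T$ is the time window in seconds, such as 60.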

Dr. Shenoy of Stanford deserves all the credit for developing this task and inspiring others in the field. His standardized metric allows for comparisons across different techniques and approaches, facilitating progress and enabling claims like Nolan being the best in the world at this task with his BCI.

The web grid task can be configured in various modes: left-click on the screen, dwell over targets, left- and right-click, middle click, scrolling, clicking and dragging. The simplest form involves blue targets that require a left-click. Prior records in academic work and at Neuralink have been matched or beaten by Nolan with his Neuralink device. The world record for a human using a device was between 4.2 and 4.6 BPS, depending on the source. Nolan's current record is 8.5 BPS, and the median performance among Neuralink employees using a conventional mouse is about 10 BPS, which means Nolan is at roughly 85% of the control level of a median able-bodied Neuralink user.

The journey to reach 10 BPS performance involves overcoming new challenges. The core challenge is the labeling problem—understanding at a fine resolution what the user intends to do. This is an area where academic research is highly encouraged.

Nolan's dedication to improving his performance in the web grid task is remarkable. He has selected 894,185 targets in Web Grid and often plays for hours, even in the middle of the night. His motivation and focus are key reasons for his success. Nolan's commitment to pushing the technology to its limits is admirable; he views it as his job to make the researchers the bottleneck. This attitude is shared by other clinical trial participants, who see advancing the technology as their life's work.

They view this as their life's work to advance the technology as much as they can. If that means hitting targets on the screen from 2 a.m. to 6 a.m., then so be it. There is something extremely admirable about that, which is worth calling out.

He went from imagining moving his hand to directly controlling the cursor with his mind—true neural control.

To understand how he progressed from no cursor control to proficient control, it's important to note that there was a huge amount of learning involved on both his side and ours. We had to figure out what the most intuitive control for him was. This involved finding the intersection of the signals we could decode and what was intuitive for him. For instance, we couldn't pick up every single neuron in the motor cortex, meaning we didn't have representations for every part of the body. We had better decode performance on some areas than others. For example, on his left hand, we had difficulty distinguishing his left ring finger from his left middle finger, but on his right hand, we had good control and modulation detected for his pinky, thumb, and index finger.

Once we gave him the ability to calibrate models on his own, he explored various ways to imagine controlling the cursor. He tried different methods, such as wiggling his wrist side to side or moving his entire arm. At one point, he even tried using his feet. This exploration helped him find the most natural way to control the cursor that was also easy for us to decode.

Through the body mapping procedure, we could figure out which finger he could move. However, he could imagine many more things than we could represent visually on the screen. We showed him an abstract cursor and let him figure out what worked best for him. We had hints from the body mapping procedure about what actions we could represent well, but it was ultimately up to him to explore and determine the best method.

A significant breakthrough occurred on a Tuesday, which I remember clearly. Initially, it seemed like the model wasn't performing well, and he appeared distracted. However, he was actually trying something new—controlling the cursor directly without imagining moving his hand. He was imagining some abstract intention to move the cursor on the screen. Although I cannot explain the difference between these two methods, his reaction suggested it was a qualitatively different experience to have direct neural control over the cursor.

The best UX feels like magic—users shouldn't have to think, it just works.

I wonder if there's a way through UX to encourage a human being to discover that, because he discovered it on his own; like you said, to me, he's a pioneer. He discovered it through this whole process of trying to move the cursor with different kinds of intentions. But that is clearly a really powerful thing to arrive at: letting go of trying to control the fingers and the hand, and controlling the actual digital device with your mind. UX is how it works, and the ideal UX is one where the user doesn't have to think about what they need to do in order to get it done; it just does it.

That is so fascinating, but I wonder on the biological side how long it takes for the brain to adapt. Is it just simply learning like high-level software, or is there a neuroplasticity component where the brain is adjusting slowly? The truth is, I don't know. I'm very excited to see with the second participant that we implant what the journey is like for them because we'll have learned a lot more. Potentially, we can help them understand and explore that direction more quickly. This is something I didn't prompt Nolan to try; he was just exploring how to use his device and figured it out himself. Now that we know that's a possibility, maybe there's a way to hint to the user: don't try super hard during calibration, just do something that feels natural, or just directly control the cursor without imagining explicit action.

From there, we should be able to understand how this is for somebody who has not experienced that before. Maybe that's the default mode of operation for them, and they don't have to go through this intermediate phase of explicit motions. Or maybe if that naturally happens for people, you can just occasionally encourage them to allow themselves to move the cursor. Sometimes, just like with a four-minute mile, the knowledge that it's possible pushes you to do it and enables you to do it, making it trivial. This also makes you wonder about the cool thing about humans: once there are a lot more human participants, they will discover things that are possible and share their experiences with each other. Because of them sharing it, they'll be able to do it, unlocking it for everybody. Sometimes, just the knowledge is the thing that enables it.

We've probably tried a thousand different ways to do various aspects of decoding, and now we know what the right subspace is to continue exploring further, thanks to Nolan and the many hours he's put into this. Even just that helps constrain the beam search of different approaches we could explore, accelerating the next person's journey. This helps us try new things on day one, enabling them to use it independently and get value out of the system faster. Massive hats off to Nolan and all the participants that came before him to make this technology a reality.

Iterate fast, stay laser-focused on user feedback, and don't be afraid to rethink everything for the best UX.

We want to ensure he has the ability to use the device the way anyone would want to use their MacBook. What he's referring to is usually in the context of a research session, where we try many different approaches, even unsupervised ones, to come up with better ways to estimate his true intention and decode it more accurately. In those scenarios, we try numerous models in any given session, sometimes working eight hours a day, which can involve testing hundreds of different models.

It's also worth noting that we frequently update the application he uses, sometimes up to four or five times a day, with different features, bug fixes, or feedback he's given us. He's a very articulate person who is part of the solution. Instead of complaining, he identifies areas that are not optimal in his workflow and suggests ideas on how to fix them. Often, these issues are addressed within a couple of hours of his feedback. Sometimes, he gives feedback at the beginning of the session, and by the end, he's already providing feedback on the next iteration.

This iterative process is fascinating. For instance, there were 271 pages of notes taken from the BCI sessions in March alone. Human beings, especially those who are smart, excited, and positive like Nolan, can provide continuous feedback. This requires a team that is absolutely laser-focused on the user and what will be best for them. It demands a level of commitment where the team might skip other meetings to focus on user feedback. This level of focus and commitment is often underappreciated and requires exceptional talent to execute effectively.

User experience (UX) design in this space is particularly interesting because of the many unknowns. UX is difficult, as evidenced by how many people do it poorly. It's not something that can always be solved by constant iteration. Sometimes, you need to step back and think globally about whether you're even pursuing the right solution. Fast iteration cycles can predict success in some problems, like in a reinforcement learning simulation where frequent rewards accelerate progress. However, UX is different. Users are often wrong about the right solution, and it requires a deep understanding of the technical system and the true underlying problem to get to the right place.

Design isn't just about aesthetics; it's about deeply empathizing with users to create functional solutions that truly improve their lives.

The aesthetic of design is not just about the visual appeal; it also encompasses the love you put into the design, a concept championed by figures like Steve Jobs and Jony Ive. However, when a human being uses their brain to interact with a design, it becomes deeply about function. It's essential to empathize with the user, even if you don't always listen to them directly. This deep empathy is fascinating and crucial, as is the need to iterate, sometimes requiring a complete redesign.

In the early days, the UX was subpar, but improvements were made quickly. One concrete example involves Nolan, who wanted to read manga. This seemingly simple task was a significant challenge for him because he couldn't scroll using a mouth stick on his iPad. For context, a mouth stick is a long stylus held between the teeth, which is exhausting, painful, and inefficient. While there are other assistive technologies, Nolan's situation—characterized by muscle spasms—makes many of these options impractical. Technologies like eye trackers or devices requiring something in the mouth are not suitable because spasms can shift him out of frame or cause injury.

Considering these factors is crucial when evaluating the advantages of a BCI (Brain-Computer Interface) in someone's life. The BCI must fit ergonomically and allow independent use, whether in bed or a chair, without the constant presence of a caretaker. One particularly fun example is the scroll feature. Nolan wanted to read manga, and there are many ways to implement scrolling with a BCI. However, scroll is a critical control surface because any jitter or error in the model output can cause significant disruptions on the screen.

We revolutionized scrolling with a magnetic snap-to-scroll bar that feels natural and intuitive, making navigation seamless and effortless.

The BCI scroll bar is designed to enhance accessibility by leveraging the accessibility tree available to various apps. Unlike a normal scroll bar, the BCI scroll bar allows the cursor to morph and latch onto it. When the user pushes up or down, similar to controlling a normal cursor, the screen moves accordingly. This remaps the velocity to a scroll action, creating a natural and intuitive experience. The magnetic feel when attaching to the scroll bar ensures a continuous action, enabling users to pull the page down or push it up seamlessly.

Achieving the right nuances in scroll behavior, such as momentum, was crucial. For instance, when scrolling with fingers on a screen, there's a flow that doesn't stop immediately when the finger is lifted. The same principle applies to the BCI scroll, requiring careful calibration to maintain a natural experience. The team spent about a month perfecting these nuances to ensure the scroll felt extremely natural and easy to navigate.
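A toy sketch of that momentum behavior, assuming the decoded vertical velocity is remapped to per-frame scroll offsets that decay smoothly after the user lets go; the gain and decay constants are invented.

```python
def scroll_with_momentum(decoded_vy_stream, gain=30.0, decay=0.92):
    """Remap decoded vertical velocity to scroll offsets, carrying momentum
    so the page keeps gliding briefly after the user stops pushing,
    like a finger-flick on a touchscreen."""
    momentum = 0.0
    for vy in decoded_vy_stream:
        momentum = decay * momentum + (1 - decay) * gain * vy
        yield momentum  # pixels to scroll this frame

# The user pushes for three frames, then lets go; the scroll eases out.
stream = [1.0, 1.0, 1.0] + [0.0] * 5
print([round(s, 2) for s in scroll_with_momentum(stream)])
```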

Visionary UX design plays a significant role in this development. While it's important to listen to users, it's equally vital to think from first principles and innovate beyond current standards. This approach highlights the stagnation of desktop scroll bars, which could benefit from similar improvements. Current desktop scroll bars are hard to find and control, lacking momentum and intuitive snapping actions. Enhancing these could significantly improve user experience.

For users like Nolan, who face additional challenges such as jittering, the current approach requires a phase shift between scrolling and reading. This necessitates a more seamless transition to avoid the drawbacks of switching between actions.

Making tiny targets easier to hit on screens can significantly improve user experience.

To close a tab, you often have to click on that very small, stupid little X that's extremely tiny and hard to hit precisely, especially if you're dealing with a noisy output from the decoder. We understand that this small little X might be difficult to hit, so we make it a bigger target for you. This is similar to how the iOS keyboard adapts the target size of individual keys based on an underlying language model. For instance, if you're typing "hey, I'm going to see Lex," it will make the E key bigger because it knows Lex is the person you're going to see. This kind of predictiveness can make the experience much smoother even without improvements to the underlying decoder or feature detection part of the stack.

We implement this with a feature called magnetic targets. We index the screen to understand where small targets might be difficult to hit and analyze cursor dynamics around those locations. If we detect that the user is trying to select a small target, we make it easier by enlarging the size of it, allowing the user to snap onto that target more easily. These small details significantly help users be independent in their day-to-day living.
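A simplified sketch of the snapping idea: if the cursor is moving slowly near a known small target, enlarge the target's effective radius so selection becomes easier. The radii and speed threshold are illustrative.

```python
import math

def snap_target(cursor, speed, targets, base_radius=8.0,
                boost=3.0, slow_speed=2.0):
    """Return the target the cursor should snap to, if any.
    When the cursor is moving slowly (a hint the user is homing in),
    each small target's effective radius is enlarged by `boost`."""
    radius = base_radius * (boost if speed < slow_speed else 1.0)
    for tx, ty in targets:
        if math.hypot(cursor[0] - tx, cursor[1] - ty) < radius:
            return (tx, ty)
    return None

# A slow cursor ~14 px away from a tiny close-tab "X" still snaps onto it.
print(snap_target(cursor=(100, 100), speed=1.0, targets=[(110, 110)]))
```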

When it comes to improving the decoder in a way that's generalizable to P2, P3, P4, P5, and beyond, the underlying signal we're trying to decode will look very different across users. For example, channel number 345 will mean something different in user one than in user two because the electrode corresponding to that channel will be next to different neurons in each user. However, the approaches, methods, and user experience of how to get the right behavioral pattern from the user to associate with that neural signal should translate across multiple generations of users.

It's quite likely that we've overfitted to the preferences of our initial users. As we get more participants, we hope to discover the right wide minima that cover all cases, making the experience more intuitive for everyone. Improvements made for users who cannot speak at all should also benefit those who can speak but prefer not to in certain settings, like a doctor's office. The mechanisms of open-loop and closed-loop labeling should remain the same and generalize across different users during the calibration step.

To break records, sometimes you need a quirky ritual—like fasting, eating peanut butter, and playing late at night.

People love this game, and I always loved it too, but I didn't know that it was a shared perception. Just in case it's not clear, Web Grid is a game where there's a grid of, let's say, 35 by 35 cells. One of them lights up blue, and you have to move your mouse over that cell and click on it. If you miss it, it turns red. I have played this game for so many hours.

My record is 17 BPS, which is the highest at Neuralink right now. To put that into perspective, if you imagine that 35 by 35 grid, you're hitting about 100 trials per minute. That means 100 correct selections in one minute, averaging between 500 to 600 milliseconds per selection.
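Those numbers are consistent with the BPS formula above: a 35-by-35 grid has $35^2 = 1225$ targets, so each net correct selection is worth $\log_2(1225 - 1) \approx 10.3$ bits, and

$$\frac{100 \times 10.3 \text{ bits}}{60 \text{ s}} \approx 17 \text{ BPS},$$

which likewise implies about $60{,}000 / 100 = 600$ milliseconds per selection.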

One of the reasons I struggle with that game is that I'm such a keyboard person. I do everything via keyboard and avoid touching the mouse if possible. So, how can I explain my high performance? I have a whole ritual I go through when I play Web Grid. It's essentially like a diet plan associated with this game.

First, I fast for five days. I go up to the mountain, which kind of helps focus the mind. I don't eat for a bit beforehand, and then I eat a ton of peanut butter right before I play. This is a real thing. It has to be really late at night, around midnight to 2 a.m., which aligns with my night owl tendencies.

I also have a very specific physical position I sit in. I was homeschooled growing up and did most of my work on the floor in my bedroom. So, I sit on the floor to play. I make sure there's not a lot of weight on my elbow so I can move quickly. I turn the gain of the cursor way up so that small motions move the cursor significantly. I move the cursor with my fingers, keeping my wrist almost completely still.

Interestingly, this reminds me of people who set world records in Tetris. They use a technique where all their fingers are moving rapidly, exploiting a loophole or bug to achieve incredibly fast actions. It's somewhat similar to my approach but not quite the same.

There will probably be a few programmers listening to this who might try fasting and eating peanut butter to break my record. I did this because I wanted to set a high bar for the team. The goal should be to beat all of us, at least as a minimum standard.

When asked what I think is possible, I believe the limits are somewhere below 40 but above 20 BPS. It also depends on the task's difficulty. Some people might perform better with different task optimizations, like having 10,000 targets on the screen.

Every system upgrade tackles a new bottleneck; the journey to seamless tech is a series of overcoming these evolving challenges.

Increasing the number of channels might require different improvements in the system. The nature of this work is such that nobody's gotten to that number before, so what's next is a guess on my part. Historically, different parts of the stack become bottlenecks at different points in time. For example, when I first joined Neuralink three years ago, one of the major problems was the latency of the Bluetooth connection. The radio on the device wasn't very good; it was an earlier version of the implant. No matter how good your decoder was, if your device only updates every 30 or 50 milliseconds, the experience will be choppy and frustrating. At that point, the main challenge was to get the data off the device reliably, which enabled tackling the next challenge.

At some point, the modeling challenge emerged: how to build a good mapping, a supervised learning problem where you have data and a label you're trying to predict. The right neural decoder architecture and hyperparameters needed to be found. Once that problem was solved, a different bottleneck appeared: software stability and reliability. If you have widely varying inference latency or your app lags, it decreases your ability to maintain a state of flow and disrupts your control experience. Various software bugs were fixed and improvements made to increase the system's performance, making it much more reliable and stable, and allowing better data collection to build improved models.

Currently, there are two major directions for improving BPS further. The first is labeling, a fundamental challenge of determining what the user is trying to do at every millisecond. This involves task design, UX, machine learning, and software. The second direction is either changing what you're decoding or extending the number of things you're decoding, such as giving more clicks (left click, right click, middle click) or actions like click and drag. This can improve the effective bit rate of your communication prosthesis, enhancing the user's ability to navigate their computer. Extending functionality can improve the number of BPS and the user's independence, skill, and efficiency in operating their computer.

More channels mean better control and reliability for brain-computer interfaces.

The short answer is yes, it can potentially help. However, it's a bit nuanced how that curve manifests in the numbers. If you plot a curve of the number of channels you are using for decoding versus either the offline metric of how good you are at decoding or the online metric of how well the user is using the device in practice, you will see roughly a logarithmic curve. As you increase the number of channels, you get a corresponding logarithmic improvement in control quality and offline validation metrics.

The important nuance here is that each channel corresponds with a specific represented intention in the brain. For example, channel 254 might correspond with moving to the right, while channel 256 might mean moving to the left. If you want to expand the number of functions you want to control, you need a broader set of channels that covers a broader set of imagined movements. You can think of it like Mr. Potato Man: if you had a bunch of different imagined movements, how would you map those imagined movements to input to a computer? You could imagine handwriting to output characters on the screen, typing with your fingers to output text, different finger modulations for different clicks, wiggling your big nose to open a menu, or wiggling your big toe to execute a command like "command tab."

The number of different actions you can take in the world depends on how many channels you have and the information content they carry. Increasing the number of threads is more about increasing the number of actions you are able to perform. Another nuance worth mentioning is that our goal is to enable a user with paralysis to control the computer as fast as possible (BPS) with all the same functionality and as reliably as possible. This reliability is very related to the channel count discussion. As you scale out the number of channels, the relative importance of any particular feature of your model input to the output control of the user diminishes. This means that if the neural nonstationarity effect is per channel or if the noise is independent such that more channels mean on average less output effect, then the reliability of your system will improve.

One core thesis is that scaling the channel count should improve the reliability of the system without any additional work on the decoder itself. Nonstationarity of the signal refers to the fact that the frequency content of the signal, at least in the motor cortex, is very correlated with the output intention or the behavioral task the user is doing. This is known as rate coding: the brain represents information by changing the frequency of neuron firing. The delta between the baseline state of a neuron and what it looks like when modulated is crucial.

Adjusting the baseline firing rate of neurons is crucial for accurate brain-computer interface measurements, as it varies daily and affects downstream data.

In the context of rate coding, if the brain represents information by changing the frequency of neuron firing, what really matters is the delta between the baseline state of the neuron and its modulated state. The baseline rate is analogous to taring a scale before measuring flour: if the tare differs from day to day, the same amount of flour reads differently each time. Likewise, the baseline firing rate of any particular neuron, or observed on a particular channel, changes from day to day, and this shifting baseline is a primary cause of downstream bias, at least to a first-order description of the problem.

To address this, one natural approach is to adjust for the baseline so that measurements are relative to it. With monkeys, various methods have been found to do this effectively. For example, by asking them to perform a behavioral task like playing a game with a joystick, researchers can measure brain activity, compute the mean across all input features, and subtract that mean during BCI sessions. This method works well for monkeys but not as effectively for Nolan. The reasons could be numerous, including the significant difference in context between open-loop and closed-loop tasks. In an open-loop task, Nolan might be multitasking: watching a podcast, listening to music, talking to a friend, or asking his mom what's for dinner. This leads to a larger generalization gap between the normalized features at open-loop time and those used at closed-loop time.
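As a rough sketch of that mean-subtraction recipe (the shapes, the function names, and the Poisson stand-in for firing-rate features are assumptions for illustration, not Neuralink's actual pipeline):

```python
import numpy as np

# Hedged sketch of the baseline ("tare") subtraction idea described above.

def estimate_baseline(open_loop_features: np.ndarray) -> np.ndarray:
    """open_loop_features: (time_bins, n_channels); returns per-channel tare."""
    return open_loop_features.mean(axis=0)

def normalize(features: np.ndarray, baseline: np.ndarray) -> np.ndarray:
    """Subtract the baseline so the decoder sees only the modulation delta."""
    return features - baseline

rng = np.random.default_rng(0)
open_loop = rng.poisson(lam=5.0, size=(1000, 1024)).astype(float)
baseline = estimate_baseline(open_loop)           # computed during the task
closed_loop_bin = rng.poisson(lam=5.0, size=1024).astype(float)
decoder_input = normalize(closed_loop_bin, baseline)
```

The generalization gap described above corresponds to the baseline estimated from `open_loop` no longer matching the true baseline in the closed-loop context.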

It’s incredible to watch Nolan multitask: moving the mouse cursor effectively while talking, being nervous, and even trash-talking while playing chess. Normalizing to the baseline in such a dynamic context might throw everything off. For those unfamiliar with assistive technology, there’s a common belief that eye trackers could help someone move a mouse on the screen. However, the real impact of technologies like BCI lies in how they fit ergonomically into a user’s life. Even if they offer the same level of control as an eye tracker or mouse, the independence they provide is transformative. Users don’t need a device in their face, don’t need to be positioned a certain way, and don’t need a caretaker to set it up.

The ability for individuals to browse the internet at 2 a.m. when nobody's around to set up their iPad for them is a profoundly game-changing thing for folks in that situation. This is even before considering those who may not be able to communicate at all or ask for help when they need it. For such individuals, this could potentially be the only link they have to the outside world. The impact of this does not need much explanation.

You mentioned a neural decoder. How much machine learning is involved in the decoder? How much is magic, and how much is science or art? How difficult is it to develop a decoder that interprets these sequences of spikes? This is a good question and can be answered in a couple of ways. To provide a broader perspective first, building the decoder involves creating the dataset and compiling it into the weights, with each step being crucial. The direction for further improvement will likely focus on constructing optimal labels for the model. However, there is also the challenge of compiling the best model.

One of the main challenges with designing the optimal model for BCI is that offline metrics don't necessarily correspond to online metrics. It is fundamentally a control problem where the user is trying to control something on the screen, and the user experience of how you output the intention impacts your ability to control. For instance, if you look at validation loss as predicted by your model, there can be multiple ways to achieve the same validation loss, but not all of them are equally controllable by the end user. While it might seem simple to add auxiliary loss terms to capture what actually matters, this is a complex and nuanced process. Thus, turning the labels into the model is more nuanced than a standard supervised learning problem.

A fascinating anecdote: we have tried various neural network architectures for translating brain data into velocity outputs. One example that stands out is an A/B test of a fully connected network against a 1D convolution over the input signal, measuring relative performance in online control sessions. The convolutional variant slides a window over each channel's input sequence, producing convolved features. With that architecture we achieved better validation metrics: the model fit the data better and generalized better on offline data. However, controllability was far worse when that model was used online, despite the better offline metrics. This taught us that simply throwing more compute at the problem and optimizing for loss is not sufficient. There is still an inherent modeling gap, and some artistry left to be uncovered, in getting the model to scale with more compute. It may be a labeling problem, but other components could be involved as well.
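For intuition, here is a minimal PyTorch sketch of the two architecture families being compared. All dimensions, layer sizes, and names are invented for illustration; the point of the anecdote is precisely that the convolutional model's better offline fit did not translate to online control:

```python
import torch
import torch.nn as nn

# Hypothetical decoders mapping a window of neural features to 2D velocity.
N_CHANNELS, WINDOW = 1024, 50  # channels x time bins per input (assumed)

fully_connected = nn.Sequential(
    nn.Flatten(),                        # (batch, channels * window)
    nn.Linear(N_CHANNELS * WINDOW, 256),
    nn.ReLU(),
    nn.Linear(256, 2),                   # cursor velocity (vx, vy)
)

conv1d = nn.Sequential(
    nn.Conv1d(N_CHANNELS, 64, kernel_size=5),  # sliding window over time
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(64, 2),
)

x = torch.randn(8, N_CHANNELS, WINDOW)  # (batch, channels, time)
v_fc, v_conv = fully_connected(x), conv1d(x)  # both: (batch, 2) velocities
```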

Data quality is more crucial than quantity when training sophisticated models for nuanced tasks.

At this time, the primary constraint is data quality, not necessarily data quantity. Even the quantity of data is limited because it has to be trained on interactions, and there aren't that many interactions available. For instance, if you're talking about the simplest example of just 2D velocity, data quality is the main concern. However, if you're aiming to build a multifunction output that handles all the inputs a computer can process, it becomes a much more sophisticated modeling challenge. You need to consider not just when the user is left-clicking but also ensure the model doesn't fire when they're right-clicking or moving the mouse.

An interesting bug from the early stages of BCI with Nolan was when he moved the mouse, the click signal dropped off a cliff, and when he stopped, the click signal went up. This indicates contamination between the two inputs. Another example was when he tried to do a left-click and drag, the left-click signal dropped off as soon as he started moving. This contamination between signals requires developing robustness in the model to handle such variability. This is fundamentally an engineering challenge, and it may not need fundamentally new techniques. Skills from areas like unsupervised speech classification using CTC loss could be very applicable here.
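One way to picture the contamination problem is a shared decoder with separate output heads, where the click head must stay well calibrated regardless of what the velocity head is doing. This is a hypothetical sketch, not Neuralink's model:

```python
import torch
import torch.nn as nn

# Illustrative multi-output decoder: a shared trunk with separate heads
# for cursor velocity and click logits. The bug described above is the
# click output being suppressed by movement; avoiding it requires
# training data that includes clicking *while* moving.
class MultiOutputDecoder(nn.Module):
    def __init__(self, n_channels: int = 1024, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_channels, hidden), nn.ReLU())
        self.velocity_head = nn.Linear(hidden, 2)  # (vx, vy)
        self.click_head = nn.Linear(hidden, 2)     # left/right click logits

    def forward(self, x):
        h = self.trunk(x)
        return self.velocity_head(h), self.click_head(h)

model = MultiOutputDecoder()
velocity, click_logits = model(torch.randn(8, 1024))
```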

Looking ahead, there are exciting prospects for the future development of the software stack on Neuralink. On the technology side, there is excitement about decoding and UX, and understanding how this technology will be best situated for entering the world. Specifically, there is a keen interest in how this device works for individuals who cannot speak at all and have very limited current capabilities. This will be crucial for understanding product-market fit and determining if the device can transform lives in its current state. If there are gaps, the focus will be on solving them efficiently.

The future of user experience lies in making high-dimensional control surfaces intuitive without physical feedback.

The user experiences we choose to focus our time and resources on, even in terms of nonstationarity, raise intriguing questions. Does the problem of nonstationarity completely disappear at a larger scale, or do we need to develop new creative UIs even at that point? Additionally, when we start dramatically expanding the set of functions that can be output from one brain, we must address the nuances of user experience. For instance, how can users modulate different keys in synchrony without feeling them physically? This lack of tactile feedback loop presents a challenge in making high-dimensional control surfaces intuitive for users.

Scaling laws are another area of interest. As we scale channel count, it is crucial to understand how far we can go before hitting a saturation point. Currently, we only know what's in the interpolation space between zero and 1024 channels; the possibilities beyond that remain unknown. This exploration also opens up fascinating neuroscience and brain questions. By inserting more electrodes in more brain regions, we can learn more quickly about what those regions represent, and that fundamental neuroscience learning is essential for figuring out how to most efficiently place electrodes in the future.

All these dimensions are incredibly exciting, and this doesn't even touch on the software stack we work on every day. It seems virtually impossible that a thousand electrodes is where saturation occurs. In the future, it will likely be obvious that millions of electrodes are necessary for true breakthroughs.

Regarding the tweet about how some thoughts are most precisely described in poetry: it's because the information bottleneck of language is quite steep. Poetry allows for the reconstruction of meaning and beauty in the other person's brain more effectively than literal language. The mechanism of poetry is to seed the generator function in the reader's brain, so that they arrive at the underlying meaning through their own thought process. This is similar to how a beautiful painting resonates with something deep within you, conveying the artist's experience through the pixels.

This concept is relevant for full-on telepathy. Literal interpretation of poetry doesn't convey much; it requires a human to interpret it. The combination of the human mind and collective intelligence makes the poem make sense. The signal that carries meaning from human to human may seem trivial but actually carries a lot of power due to the complexity of the human mind on the receiving end.

The key to understanding life isn't in the answers, but in asking the right questions.

Poetry and music play a similar role. It has been said that until an AI genuinely likes music, we haven't achieved AGI. This raises the question of whether there's an element of next-token surprise in poetry and music. Classical music and poetry often use repeated structures with a twist, surprising the listener at just the right moment. This technique has evolved through history: musicians tweak familiar structures to add surprising elements, particularly in the classical heritage. Breaking structure or symmetry is something humans seem to enjoy, and great artists know which rules to break to keep it interesting for the audience.

The meaning of human existence is a profound question. In the TV show The West Wing, there's a discussion about the Bible in which neither party is smart enough to fully understand it; the analogy suggests we might not even know the right questions to ask about the meaning of life. In the spirit of The Hitchhiker's Guide to the Galaxy, where the answer is known but the question is not, asking the right questions increases the likelihood of finding the meaning of human existence. In the short term, we should increase the diversity of people and conscious beings asking such questions. While the answer remains elusive, asking the right questions is crucial. This point is driven home when communicating with someone who cannot speak: asking the right question allows them to respond with a simple yes or no.

Even in the face of life-changing adversity, a strong support system and a positive mindset can turn the darkest moments into opportunities for growth.

And now, dear friends, here's Nolan Arbaugh, the first human being to have a Neuralink device implanted in his brain.

You had a diving accident in 2016 that left you paralyzed with no feeling from the shoulders down. How did that accident change your life? It was sort of a freak thing that happened. Imagine you're running into the ocean, although this was a lake, and you get to about waist-high and then you kind of dive in, take the rest of the plunge under the wave or something. That's what I did, and then I just never came back up. I'm not sure what happened. I did it running into the water with a couple of guys, so my idea of what happened is that I took a stray fist, elbow, knee, or foot to the side of my head. The left side of my head was sore for about a month afterward, so I must have taken a pretty big knock. They both came up and I didn't, so I was face down in the water for a while. I was conscious, and then eventually I just realized I couldn't hold my breath any longer, and, as I keep saying, I took a big drink. People, I don't know if they like that I say that; it seems like I'm making light of it all, but this is kind of how I am.

I'm a very relaxed, stress-free person. I rolled with the punches for a lot of this. I kind of took it in stride, like, all right, well, what can I do next? How can I improve my life even a little bit on a day-to-day basis? At first, just trying to find some way to heal as much of my body as possible, to try to get healed, to try to get off a ventilator, learn as much as I could so I could somehow survive once I left the hospital. Thank God I had my family around me. If I didn't have my parents, my siblings, then I would have never made it this far. They've done so much for me, more than I can ever thank them for, honestly. A lot of people don't have that. A lot of people in my situation, their families either aren't capable of providing for them or honestly just don't want to, and so they get placed somewhere, in some sort of home. Thankfully, I had my family. I have a great group of friends, a great group of buddies from college who have all rallied around me, and we're all still incredibly close. People always say, you know, if you're lucky, you'll end up with one or two friends from high school that you keep throughout your life. I have about 10 or 12 from high school that have all stuck around, and we still get together, all of us, twice a year. We call it the spring series and the fall series. This last one, we all dressed up like X-Men, so I did a Professor Xavier, and it was freaking awesome. It was so good.

So yeah, I have such a great support system around me, and so, you know, being a quadriplegic isn't that bad. I get waited on all the time. People bring me food and drinks, and I get to sit around and watch as much TV and movies and anime as I want. I get to read as much as I want. I mean, it's great.

It's beautiful to see that you see the silver lining in all of this.

Immediate acceptance of life's challenges can lead to unexpected strength and resilience.

The two girls who pulled me out of the water were two of my best friends; they were lifeguards. One of them mentioned that it looked like my body was shaking in the water, as if I was trying to flip over. I knew immediately what my situation was from that point on. I hoped that if I got to the hospital, they might be able to do something. When I was in the hospital, right before surgery, I tried to calm one of my friends down. I had brought her with me from college to camp, and she was just bawling over me. I told her, "Hey, it's going to be fine, don't worry." I even cracked some jokes to lighten the mood. The nurse had called my mom, and I asked them not to tell her yet because I didn't want her to be stressed out. I suggested they call her after the surgery when there would be some answers, whether I lived or not.

When I first woke up after surgery, I was heavily drugged on fentanyl, which was administered in three ways. It was the best I had ever felt on drugs, or rather, on medication. The first time I saw my mom in the hospital, I started crying. I had a ventilator in, so I couldn't talk, but seeing her face was really hard. Despite the rough situation, I never had a moment of despair thinking, "Man, I'm paralyzed, this sucks." I knew wallowing wouldn't help, so I accepted my situation immediately.

There have been low points along the way. In the beginning, there were some really hard things to adjust to. The first couple of months were especially tough due to the immense pain. I remember screaming in the hospital because I thought my legs were on fire, even though I couldn't feel anything. It was all nerve pain. That night, I asked for as much pain medication as possible, but they told me I had reached the limit and advised me to go to a happy place. Realizing the things I wanted to do in life that I couldn't do anymore was also hard. I always wanted to be a husband and father, but as a quadriplegic, I doubted I could do it. I didn't want to put someone I love through the burden of taking care of me. Not being able to play sports, which I loved growing up, was tough too. Little things, like not being able to hold and smell a book, also affected me.

The two-year mark was particularly rough. They say that by two years, you will have regained as much movement and sensation as possible. For those two years, I focused on trying to move my fingers, hands, feet—everything. When June 30, 2018, came, I was really sad about my progress. However, I was never depressed for long periods; it never seemed worthwhile to me.

Faith and family give me strength to face anything life throws at me.

What gave me strength was my faith in God, and my understanding that it was all for a purpose, even if that purpose wasn't anything involving Neuralink. There's a story in the Bible about Job, a really popular story about how Job has all these terrible things happen to him, and he praises God throughout the whole situation. I thought, and I think a lot of people think for most of their lives, that they are Job: that they're the ones going through something terrible, and they just need to praise God through the whole thing and everything will work out at some point. After my accident, I realized that I might not be Job; I might be one of his children that gets killed or kidnapped or taken from him. So, it's about terrible things that happen to those around you who you love. Maybe, in this case, my mom would be Job, and she has to get through something extraordinarily hard. I just need to try and make it as best as possible for her because she's the one really going through this massive trial. That gave me a lot of strength. Obviously, my family and friends give me all the strength I need on a day-to-day basis. It makes things a lot easier having that great support system around me.

From everything I've seen of you online, your streams, and the way you are today, I really admire your unwavering positive outlook on life. Has that always been this way? Yeah, I've always thought I could do anything I ever wanted to do. There was never anything too big; whatever I set my mind to, I felt like I could do it. I didn't want to do a lot; I wanted to travel around and be sort of like a gypsy, working odd jobs. I had this dream of traveling around Europe, being a shepherd in Wales or Ireland, then a fisherman in Italy, doing all these things for a year. It sounds cliché, but I thought it would be so much fun to travel and do different things. I've always seen the best in people around me and tried to be good to people. Growing up with my mom, who is the most positive, energetic person in the world, I just get along great with people. I really enjoy meeting new people, and I just wanted to do everything. This is kind of just how I've been.

It's great to see that cynicism didn't take over, given everything you've been through. Was that a deliberate choice you made, that you're not going to let this keep you down? Yeah, a bit. Also, it's just kind of how I am. I roll with the punches with everything. I always used to tell people I don't stress about things much. Whenever I'd see people getting stressed, I'd say, "It's not hard, just don't stress about it," and that's all you need to do. They'd say, "That's not how that works," but it works for me. Just don't stress, and everything will be fine. Everything will work out. Obviously, not everything always goes well, and it doesn't all work out for the best all the time, but I just don't think stress has had any place in my life since I was a kid.

Taking the leap into the unknown can be terrifying, but trusting the journey and the people around you makes it all worthwhile.

The first attempt might not work and, frankly, it might actually suck. It could be the worst version ever in a person, so why would I do the first one? I've already been selected, and I could just tell them to find someone else, and then I'll do number two or three. I'm sure they would let me since they're looking for a few people anyway. However, there's something about being the first one to do something that's pretty cool. I always thought that if I had the chance, I would like to do something for the first time. This seemed like a pretty good opportunity, and I was never scared. My faith played a huge part in that; I always felt like God was preparing me for something.

I almost wish it wasn't this because I had many conversations with God about not wanting to do any of this as a quadriplegic. I told Him, "I'll go out and talk to people, travel the world, and give my testimony to thousands of people, but heal me first. Don't make me do all this in a chair—that sucks." I guess He won that argument; I didn't really have much of a choice. I always felt like there was something going on. Seeing how easily I made it through the interview process and how quickly everything happened, how the stars aligned with all of this, it just told me that it was all meant to happen. It was all meant to be, and so I shouldn't be afraid of anything that's to come. And so I wasn't. I kept telling myself, "You say that now, but as soon as the surgery comes, you're probably going to be freaking out." Brain surgery is a big deal for a lot of people, but it's an even bigger deal for me. I've often thanked God that He didn't take my brain, my personality, my ability to think, my love of learning, and my character. As long as He left me that, I felt I could get by. I was about to let people go root around in my brain, hoping it would work out.

Despite the potential risks, the smoothness of the process reassured me. The more people I met on the Barrow side (Barrow Neurological Institute) and the Neuralink side, the more impressed I was. These are the most impressive people in the world, and I can't speak enough about how much I trust them with my life. Their excitement was infectious. Walking into a room and seeing all these people looking at me with excitement, knowing they've been working so hard on this and it's finally happening, made me want to do it even more to help them achieve their dreams. It's so rewarding, and I'm so happy for all of them.

Played a prank on my mom after surgery, pretended not to recognize her—she freaked out, but it was my way of showing I'm still here and love her.

On the day of the surgery, I thought I would be freaking out, but I wasn't. As the surgery approached, the night before and the morning of, I was just excited. I remember FaceTiming with Elon Musk and saying, "Let's rock and roll," and he replied, "Let's do it." We had to be at the hospital at 5:30 a.m., and the surgery was scheduled for 7:00 a.m. We woke up pretty early, and I'm not sure how much any of us slept that night. We got to the hospital, went through all the pre-op stuff, and everyone was super nice. Elon was supposed to be there in the morning, but something went wrong with his plane, so we ended up FaceTiming. After the phone call, I had one of the greatest one-liners of my life. There were about 20 people around me, and I said, "I just hope he wasn't too star-struck talking to me." It was a nice moment, and everyone appreciated the humor. When asked if I had prepared that line, I replied that it just came to me naturally.

Before going into surgery, I asked if I could pray. I prayed over the room, asking God to be with my mom in case anything happened to me and to calm her nerves. After waking up from surgery, I decided to play a prank on my mom. I had discussed this prank with my buddy Bane beforehand. My mom is very gullible, and we have a history of playing mean pranks on her. For instance, after her knee surgery, she was groggy and said she couldn't feel her legs. My dad jokingly told her that they had to amputate both her legs. Despite these pranks, she still loves us.

I was worried that the anesthesia would leave me too groggy to pull off the prank. I had experienced anesthesia before, and it had messed me up, making me unable to function properly. I prayed to God to let me be coherent enough to prank my mom. When she walked in after surgery, she asked how I was feeling. With a groggy and confused look, I said, "Who are you?" She started panicking, looking at the surgeons and doctors, demanding they fix whatever they had done to me. Seeing her distress, I quickly reassured her, saying, "Mom, I'm fine." Although she was not happy about the prank and vowed to get me back someday, it was a way to show her that I was still there and that I loved her. It was a dark way to do it, but it was effective.

Imagining and trying to move my body every day is how I started to regain some control after my accident.

Around me, everyone was just like, "What are you seeing?" I was like, "Look, look at this one. Look at this top row, third box over, this yellow spike. Like, that's me right there." Everyone started freaking out and clapping. I thought it was super unnecessary because this is what's supposed to happen, right? You're imagining yourself moving each individual finger one at a time and then noticing something. When I did the index finger, I was wiggling all of my fingers to see if anything would happen. There were a lot of other things going on, but that big yellow spike was the one that stood out to me. I'm sure that if I had stared at it long enough, I could have mapped out maybe a hundred different things, but the big yellow spike was the one I noticed.

Maybe you could speak to what it's like to sort of wiggle your fingers, to imagine the mental and cognitive effort required to wiggle your index finger, for example. How easy is that to do? Pretty easy for me. After my accident, they told me to try and move my body as much as possible, even if I couldn't, just to keep trying. This would create new neural pathways or pathways in my spinal cord to hopefully regain some movement someday. That's fascinating. It's bizarre, but part of the recovery process is to keep trying to move your body as much as you can. The nervous system starts reconnecting. For some people, it never works, but for others, like me, I got some bicep control back. If I try enough, I can wiggle some of my fingers, not on command, but if I try to move my right pinky and keep trying, after a few seconds, it'll wiggle. So, I know there's something there. This happens with a few different fingers.

One person at the hospital told me about a guy who had recovered most of his control by thinking about walking every day. I tried that for years, imagining walking, which is hard. It's hard to imagine all the steps that go into taking a step, all the activations that have to happen along your leg. But you're not just imagining; you're doing it. I'm trying to recreate in my head what I had to do to take a step, practicing it over and over. It's not a third-person perspective; it's a first-person perspective. You're not imagining yourself walking; you're literally doing everything as if you're walking. It was hard at the beginning, both frustrating and cognitively challenging.

Never give up on your body's potential—keep pushing, even when you can't see immediate results.

I tried every different way to get some movement back in my body, whether by vocalizing it out loud or just thinking it. Moving my index finger or my big toe was taxing on my body, which was unexpected because it felt like I wasn't moving. It seemed like signals from my brain weren't getting through due to the gap in my spinal cord. These signals would build up in the body part I was trying to move until they burst, giving me a weird sensation as everything dissipated back to normal. This process was not only physically taxing but also mentally exhausting, especially when trying to move a body part for hours at a time.

In the beginning, it was somewhat easier because I couldn't control my environment, such as the TV in my room. For the first few years, I spent a lot of time staring at walls and thinking, trying to move repeatedly. I never gave up hope and continued training hard. Subconsciously, I still do it, which I believe helped a lot with Neuralink. I discussed this during an All Hands meeting at the Neuralink Austin facility. The body mapping exercises, where I visualize a hand or arm on the screen and mimic the motion, helped train the algorithm to understand my intentions, making the process seamless for me.

It's fascinating to know that I've been training to be world-class at this task. I hope other quadriplegics don't give up and keep trying because the human body is capable of amazing things. I heard a story about a girl who, after 18 years of trying, finally managed to wiggle her index finger. This gives me hope, and I continue my efforts even while lying down and watching TV. It's something I've become so accustomed to that I don't think I'll ever stop.

Seeing real-time progress in my recovery made me realize I'll never give up.

I can't thank them enough for the opportunity to visually see that what I'm doing is actually having some effect. It's a huge part of the reason why I know now that I'm going to keep doing it forever. Before using Neuralink, I was doing it every day and just assuming that things were happening. I wasn't getting back any mobility or sensation, so I could have been running up against a brick wall for all I knew. With Neuralink, I get to see all the signals happening in real-time and see that what I'm doing can actually be mapped. When we started doing click calibrations, for instance, when I go to click my index finger for a left click, it actually recognizes that. This changed how I think about what's possible with retraining my body to move.

I'll never give up now. Knowing that the signal is still there, that the brain is still a powerhouse of activity, is crucial: as the technology develops, the brain, the most important part of the human body, can do a lot of the control. What did it feel like when you could first wiggle the index finger and saw the environment respond? It was very cool, but it made sense to me. There are still signals happening in my brain, and as long as you have something near it that can measure and record those signals, you should be able to visualize them in some way. It was cool to see that their technology worked and that everything they had worked so hard for was going to pay off.

I hadn't moved a cursor or interacted with a computer at that point, so it just made sense. I didn't know much about BCI at that point either, so I didn't know what sort of step this was actually making. I didn't know if this was a huge deal or just a step towards something much better down the road. I just thought that they knew it turned on, so I was like, cool, this is cool.

When was the first time you were able to move a mouse cursor? It must have been within the first week or two weeks that I was able to first move the cursor. Again, it kind of made sense to me. When everyone around you starts clapping for something you've done, it's easy to say, okay, I did something cool. That was impressive in some way. What exactly that meant hadn't really set in for me.

Moving a cursor with just my mind for the first time was mind-blowing!

When I moved the cursor for the first time, I thought, "That's cool," but I expected it to happen because it made sense to me. This was when I moved the cursor with just my mind, without physically trying to move. I can delve into the difference between attempted movement and imagined movement. Attempted movement involves me physically trying to move, like attempting to move my hand to the right, left, forward, and back, or attempting to lift my finger up and down or kick. Even if you can't see it, I'm physically trying to do all these things, like attempting to shrug my shoulders. This is what I was doing for the first couple of weeks when they were going to give me cursor control during body mapping.

When Nir told me to imagine doing it, it made sense, but it's not something people practice. If you started school as a child and they said, "Write your name with this pencil," you would do it. But if they said, "Now imagine writing your name with that pencil," kids might think, "I guess that kind of makes sense," and do it. However, we're not taught to do things this way; we learn how to do things physically. We think about thought experiments, but that's not the same as physical actions. Imagined movement never really connected with me. You could describe it like a professional athlete imagining swinging a baseball bat or a golf club, then physically doing it. I don't have that connection, so imagining something versus attempting it didn't resonate much with me. Mentally, I just had to accept what was going on and try.

Attempted movement made sense to me. If I try to move, a signal is sent in my brain, and as long as they can pick that up, they should be able to map it to what I'm trying to do. When I first moved the cursor like that, it felt natural, like, "Yes, this should happen." I wasn't surprised by it.

To clarify, there is a difference between imagined movement and attempted movement. In imagined movement, you're not attempting to move at all; you're visualizing doing it. Theoretically, this might involve different parts of the brain, but not necessarily. All these signals can still be represented in the motor cortex. The difference lies in the naturalness of imagining something versus attempting it and the fatigue that comes with it over time.

Attempted movement felt right to me. When I imagine, I start visualizing in my mind, but with attempted movement, I actually start trying to move. Having done combat sports my whole life, like wrestling, when I imagine a move, I move my muscles. There's a bit of activation versus just visualizing yourself doing it. Naturally, if you tell someone to imagine doing something, they might close their eyes and start physically doing it. It was very hard at the beginning, but attempted movement worked. It worked like a charm.

Controlling a cursor with just my thoughts blew my mind and opened up a world of digital telepathy possibilities!

I remember one Tuesday when we were messing around, and a swear word came out of your mouth when you figured out you could do direct cursor control. It blew my mind, no pun intended, when I first moved the cursor with just my thoughts, without attempting to move. Over the following weeks, as I got better at cursor control, the model improved, making it easier for me to move it without much effort.

I noticed something fascinating when I was watching the signals of my brain one day. As I attempted to move to the right, I saw spikes on the screen indicating that the signal was being sent before I actually attempted to move. This is similar to how the signal to move a body part is sent before the actual movement, creating a delay. My brain was anticipating what I wanted to do, which was intriguing and always in the back of my mind.

As I continued experimenting with the attempted movement and cursor control, I saw that the cursor was anticipating my movements better and better. One day, while playing web grid, I looked at a target before attempting to move. I was training my eyes to look ahead, and the cursor just shot over to the target. It was wild; I had to take a step back because it felt like it shouldn't be happening. All day, I was smiling and giddy, realizing that I could just think it and it would happen.

This realization made me understand that this technology is way more impressive than I ever thought. It opened up a whole new world of possibilities for what could happen with this technology and what I might be capable of. It felt like digital telepathy, controlling a digital device with my mind.

Discovering that imagined and attempted movements can be used together to control a cursor efficiently was my "4-minute mile" moment—once you know it's possible, it changes everything!

I had an "aha" moment when I realized that combining attempted movement and imagined movement works effectively. Now, I intermix these approaches because I found that there is some interplay that maximizes efficiency with the cursor. It's not just one or the other; I use them in parallel. Sometimes, I experiment with these methods. For instance, I might get an idea and try it out, then inform the team that I deviated from the standard procedure to test a new approach. This kind of discovery is not just beneficial for me but for anyone using the Neuralink, showing that new possibilities exist.

I liken this to the four-minute mile analogy, where once the first person broke the barrier, it became possible for many others. Similarly, demonstrating that direct control without attempted movement is possible opens new avenues for users.

For those unfamiliar, the Link app is an application created by Neuralink to help me interact with a computer. It includes various settings and modes, such as body mapping and calibration. Calibration is crucial as it translates brain activity into cursor control. The more time I spend calibrating, the better the models become, enhancing cursor control. I often test the model's efficiency by playing a challenging game like Snake; if I can control the game well, it indicates a good model.

The Link app also includes Web Grid and voice controls, allowing me to connect or disconnect from the computer using voice commands. The implant charger is essential for connecting to the Link app, as it wakes the implant from hibernation mode. I can set the implant to wake up periodically, depending on my needs.

Balancing fame and responsibilities is tough, especially when you're pioneering new tech.

I am very forgetful and often forget to do things. I have a lot of data collection tasks that they want me to do daily, including body mapping. However, I've been slacking on this because I've been doing so much media and traveling. I've been a terrible candidate for this data collection due to my busy schedule. They want me to track how well the Neuralink is performing over time to provide data to the FDA, creating charts to show its performance from day one to day 90 to day 180.

The calibration step involves a bubble game where yellow bubbles pop up on the screen. Initially, it is open loop, meaning I have no control over the cursor, which moves on its own across the screen. I follow the cursor with my intentions to different bubbles, and the algorithm trains on the signals it receives. There are different methods, such as the center-out target task, where a bubble in the middle is surrounded by eight bubbles. The cursor moves from the middle to one side, back to the middle, and around the circle while I follow it, training on my intentions.
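A common way to generate training labels in open-loop tasks like this is to assume the user's intended velocity points from the cursor toward the current target. A minimal sketch under that assumption (function and variable names are illustrative, not the Link app's internals):

```python
import numpy as np

# Open-loop label generation: pair each time bin's neural features with
# an assumed intention, the unit vector from cursor to target, then train
# the decoder with supervised regression on (features, label) pairs.
def intended_velocity(cursor_xy: np.ndarray, target_xy: np.ndarray) -> np.ndarray:
    """Unit vector from cursor to target; the label for this time bin."""
    direction = target_xy - cursor_xy
    norm = np.linalg.norm(direction)
    return direction / norm if norm > 1e-6 else np.zeros(2)

label = intended_velocity(np.array([0.2, 0.5]), np.array([0.8, 0.1]))
```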

For calibration, I generally perform attempted movements because it works better. As I progress through calibration, the models improve, making it easier to use. I've tried calibration with imagined movement, but it doesn't work as well. In closed loop calibration, I haven't experimented much, but the different ways we are doing calibration now might improve it. Initially, imagined movement doesn't work well, but after about 15 minutes of attempted movement, I can feel a difference. The cursor starts to anticipate my actions, indicating that it is learning.

Turning calibration into a game makes it more engaging and fun!

Exploring how to do every aspect of this most effectively, there are so many lessons to be learned. Thank you for being a pioneer in all these different, super technical ways. It's also interesting to hear that there's a different feeling to the experience when it's calibrated in various ways. I imagine your brain is doing something different, and that's why there's a different feeling to it. Trying to define the words and measurements for those feelings would be fascinating. At the end of the day, you can measure your actual performance on whether it's Snake or Web Grid to see what works well. For the open-loop calibration, the attempted movement works best for now.

In the open loop, you don't get feedback that you did something. Is that frustrating? No, it makes sense to me. We've done it with a cursor and without a cursor in open loop. For example, in the center-out task, you'll start calibration with a bubble lighting up. I push towards that bubble, and when it's pushed towards that bubble for, say, three seconds, the bubble will pop, and then I come back to the middle. I'm doing it all just by my intentions, which is what it's learning anyway. As long as I follow what they want me to do, like following the yellow brick road, it will all work out.

Is the bubble game fun? They always feel so bad making me do calibration, like "Oh, we're about to do a 40-minute calibration," and I'm like, "Alright, do you guys want to do two of them?" I'm always asking to do whatever they need. It's not bad; I get to lie there or sit in my chair and do these things with some great people. I get to have great conversations, give them feedback, and talk about all sorts of things. I could throw something on my TV in the background and split my attention between them. It's not bad at all.

Is there a score that you get? Can you do better on the bubble game? I would love that. I would love some numerical feedback for calibration. I would like to know what they're looking at, like if they see a number while I'm doing calibration that indicates it's going well. I would love that because I would like to know if what I'm doing is effective. However, they've told me that it's not necessarily one-to-one, and it doesn't always mean that calibration is going well. They don't want to skew what I'm experiencing or want me to change things based on a number that isn't always accurate to how the model will turn out or the end result.

One thing I enjoy striving for is keeping the time between targets as low as possible towards the end of calibration. At the beginning, it can be four, five, or six seconds between popping bubbles, but towards the end, I like to keep it below 1.5 seconds or even one second. In my mind, this translates nicely to something like Web Grid, where hitting a target every second means I'm doing really well. That's a way to get a score on the calibrations: the speed of moving from bubble to bubble.

Chasing excellence means embracing the grind, even if it means hours of calibration to break records.

Calibration starts with an open-loop phase and then transitions to a closed-loop phase, which provides feedback on the model's effectiveness. When I first gain cursor control, I am closing the loop, completing the cycle of control. Although I don't fully understand the technicalities, I know that initially the loop is open and uncontrollable, but once I gain control, it becomes closed.

The calibration process typically takes around 10 to 15 minutes, but efforts are being made to reduce this time significantly. The goal is to minimize the duration of calibration to make it more practical for daily or weekly use. Ideally, the calibration time would be reduced to just a few minutes or even eliminated altogether as we learn more about the brain. Currently, achieving really good models requires about 40 to 45 minutes of calibration. Despite the time commitment, I am willing to invest this time if it results in high-performing models that can break records on web grid.

Web grid is a benchmarking game used to measure the performance of a Brain-Computer Interface (BCI). It consists of a grid where a single box lights up, and the user must move the cursor to click on it. The size of the grid can vary, and larger grids offer more bits per second (BPS) with each click. I prefer playing on larger grids, such as a 35x35 grid, to maximize BPS. Recently, I achieved 8.5 BPS, surpassing my previous record of 8 BPS. However, I encountered a 5-second lag that prevented me from potentially reaching 9 BPS.
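For context, grid-task bitrates are commonly computed with a formula along these lines (this sketch may differ in detail from Neuralink's exact scoring):

```python
import math

# A commonly used BCI bitrate metric for grid selection tasks: each
# selection among N targets is worth log2(N - 1) bits, and mistakes are
# penalized by subtracting failures from successes.
def grid_bps(n_targets: int, successes: int, failures: int, seconds: float) -> float:
    bits_per_selection = math.log2(n_targets - 1)
    net_selections = max(successes - failures, 0)
    return bits_per_selection * net_selections / seconds

# A 35x35 grid has 1225 targets, so each clean selection is worth
# log2(1224) ~ 10.3 bits; about 50 clean selections per minute lands
# near the 8.5 BPS figure mentioned above.
print(grid_bps(35 * 35, successes=50, failures=0, seconds=60.0))
```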

Pushing boundaries in gaming isn't just about the score—it's about inspiring innovation and progress.

You and Elon are basically the same person. The last time I did a podcast with him, he came in extremely frustrated that he couldn't beat Uber Lilith as a Druid. That was about a year ago, I think. I forget if it was solo or not, but I could tell some percentage of his brain was thinking, "I wish I was attempting it right now." He did it that night; he stayed up and did it, which is crazy to me. In a fundamental way, it's really inspiring. What you're doing is inspiring in that way because it's not just about the game. Everything you're doing has an impact. By striving to do well on Web Grid, you're helping everybody figure out how to create the whole system: the decoding, the software, the hardware, the calibration, all of it, so you can do everything else really well.

It's just really fun. That's part of the thing, making it fun. It's addicting. I've joked about how when they put this thing in my brain, they must have flipped a switch to make me more susceptible to these kinds of games, to make me addicted to web grid or something.

Do you know Bliss's high score? He said like 14 or something. 17.1 or something, 17.7. He told me he does it on the floor with peanut butter and fasts. It sounds like cheating, like performance-enhancing. The first time Nolan played this game, he asked, "How good are we at this game?" I think you told me, "You're going to try to beat me. I'm going to get there someday." I fully believe you. I think I can. I'm excited for that.

I've been playing, for now, with the dwell cursor, which really hampers my Web Grid playing ability. I can't click directly; I have to click by dwelling, waiting three seconds for every click, which sucks. It really slows down how high I'm able to get. I still hit like 50-something trials per minute, which was pretty good. One of the settings is how slow you need to be moving to initiate a click. I can tell when I'm on that threshold, so I start initiating a click just a bit early, before I'm fully stopped over the target. I'm doing it on my way to the target to try to time it just right.

So, you're slowing down just a hair right before the target; it's like leading a target. But it still sucks that there's a ceiling of three seconds. I can get it down to 0.2, and 0.1 is what I've played with a little bit too. I have to adjust a ton of different parameters to play with 0.1, and I don't have control over all of that on my end yet. It also changes how the models are trained. If I train a model in Web Grid, like a bootstrap on a model, which is them training models as I'm playing Web Grid based on the Web Grid data, if I play Web Grid for ten minutes, they can train on that data specifically to get me a better model. If I do that with 0.3 versus 0.1, the models come out different; the way they interact is just much different. I have to be really careful. I've found that doing it with 0.3 is better in some ways, unless I can do it with 0.1 and change all the different parameters; then that's more ideal because, obviously, 0.1 is faster than 0.3. I could get there.
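The dwell mechanic he describes can be pictured as a small state machine. The threshold and timing values here are stand-ins for the configurable parameters he mentions, not the Link app's actual settings:

```python
import math

# Illustrative dwell-click logic: a click fires once the cursor's speed
# stays below a threshold for the full dwell duration; moving faster
# than the threshold resets the timer. Constantly circling the cursor
# (as described later) keeps the speed above threshold and prevents clicks.
SPEED_THRESHOLD = 0.3   # "how slow you need to be moving" to start a click
DWELL_SECONDS = 3.0     # the three-second dwell mentioned above

class DwellClicker:
    def __init__(self):
        self.still_time = 0.0

    def update(self, vx: float, vy: float, dt: float) -> bool:
        """Feed cursor velocity each frame; returns True when a click fires."""
        if math.hypot(vx, vy) < SPEED_THRESHOLD:
            self.still_time += dt
            if self.still_time >= DWELL_SECONDS:
                self.still_time = 0.0
                return True
        else:
            self.still_time = 0.0  # fast movement resets the dwell timer
        return False
```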

Even when setbacks hit hard, your attitude can turn a tough day into one of the best experiences of your life.

Can you click using your brain for right now? It's the hover clicking with the dwell cursor. Before all the thread retraction stuff happened, we were calibrating clicks—left click, right click. That was my previous ceiling before I broke the record again with the dwell cursor. I think it was on a 35x35 grid with left and right click. You get more BPS, more bits per second, using multiple clicks because it's more difficult. You're supposed to do either a left click or a right click, with blue targets for left click and orange targets for right click.

My previous record was 7.5 with the blue and orange targets. I think if I went back to that now, doing the click calibration and being able to initiate clicks on my own, I would break that 10 ceiling in a couple of days max. You'll start making Bliss nervous about his 17. Why do you think we haven't given him clicks back yet, exactly?

When the retractions happened, some of the threads retracted, and it sucked. It was really, really hard. The day they told me was the day of my big Neuralink tour at their Fremont facility. They told me right before we went over there, and it was really hard to hear. My initial reaction was to go in, fix it, take it out, and fix it. The first surgery was so easy. I went to sleep, a couple of hours later I woke up, and here we are. I didn't feel any pain, didn't take any pain pills or anything. I just knew that if they wanted to, they could go in and put in a new one the next day if that’s what it took because I just wanted it to be better. I didn’t want to lose the capability I had so much fun playing with for a few weeks, for a month. It had opened up so many doors for me, so many more possibilities that I didn’t want to lose it after a month. I thought it would have been a cruel twist of fate if I had gotten to see the view from the top of this mountain and then have it all come crashing down after a month. I knew I was just starting to climb the mountain, and there was so much more that I knew was possible. To have all of that taken away was really, really hard.

But then, on the drive over to the facility, I talked with my parents about it, prayed about it. I decided I wasn’t going to let this ruin my day or this amazing tour they had set up for me. I wanted to show everyone how much I appreciated all the work they were doing. I wanted to meet all the people who made this possible and have one of the best days of my life. And I did. It was amazing and absolutely one of the best days I’ve ever been privileged to experience. For a few days afterward, I was pretty down in the dumps. I didn’t know if it was ever going to work again. Then I made the decision that even if I lost the ability to use the Neuralink, if I could keep giving them data in any way, I would do that. If I needed to do data collection or body mapping every day for a year, I would do it because I knew everything I was doing helped everyone to come after me. That’s all I wanted. The whole reason I did this was to help people, and I knew that anything I could do to help, I would continue to do, even if I never got to use the cursor again. I was just happy to be a part of it. Everything I had done was just a perk, something I got to experience. I know how amazing it’s going to be for everyone to come after me, so I might as well keep trucking along.

When you find the right method, everything clicks into place and progress feels unstoppable.

You were able to work your way up and get the performance back, which is like going from Rocky I to Rocky II. When did you first realize this was possible, and what gave you the strength, motivation, and determination to climb back up and beat your previous record?

Within a couple of weeks. I feel like I'm interviewing an athlete: "I want to thank my parents." The road back was long and hard, with many difficulties and dark days. There was a turning point when they switched how they were measuring the neuron spikes in my brain, moving from individual spike detection to something called spike band power. If you watched previous segments with either Bliss or DJ, you probably have some context. When they did that, it was a light-bulb moment. I saw an uptick in performance immediately and felt that this was better. Everything before that point sucked, but I knew that if we kept doing what we were doing, I could get back to my previous performance levels.

They gave me the dwell cursor, which sucked at first, but it provided a path forward to continue using it and hopefully help out. I ran with it and never looked back. I'm the kind of person who rolls with the punches anyway. The feedback loop on figuring out how to do the spike detection in a way that would work well for me was crucial.

The update to my implant was similar to a software update for a Tesla or an iPhone. This firmware change enabled us to record averages of populations of neurons near individual electrodes. We had less resolution about which individual neuron was doing what, but a broader picture of what was going on around each electrode. The feedback was immediate. When we flipped that switch, I hit three or four BPS right out of the box, which was a light-bulb moment indicating we were on the right path.
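As a rough sketch of what a spike band power feature looks like: band-pass the raw voltage in the "spike band" and measure its power per time bin, capturing the aggregate activity of neurons near the electrode rather than sorted individual spikes. The band edges, sample rate, and bin size below are common choices from the literature, not Neuralink's exact firmware parameters:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 20_000          # assumed sample rate (Hz)
BAND = (500, 3000)   # a typical "spike band" (Hz)

def spike_band_power(voltage: np.ndarray, bin_samples: int = 200) -> np.ndarray:
    """voltage: 1D raw signal from one electrode; returns power per 10 ms bin."""
    sos = butter(4, BAND, btype="bandpass", fs=FS, output="sos")
    filtered = sosfiltfilt(sos, voltage)
    n_bins = len(filtered) // bin_samples
    binned = filtered[: n_bins * bin_samples].reshape(n_bins, bin_samples)
    return (binned ** 2).mean(axis=1)  # mean squared amplitude per bin
```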

From there, we needed to re-engineer the UX to make it useful for independent use. The goal was to use it independently without needing constant involvement from the team. This is the start of the journey, and hopefully, we get back to where I can do multiple clicks and control applications much more fluidly and naturally, ultimately getting that web grid number up.

Struggling with tech can turn into muscle memory, even when you're trying to sleep!

Sometimes, I wonder how hard it is to avoid accidentally clicking. I have to continuously keep it moving. Like I said, there's a threshold where it will initiate a click, so if I ever drop below that, it'll start, and I have three seconds to move it before it clicks anything. If I don't want it to ever get there, I just keep it moving at a certain speed, constantly doing circles on the screen, moving it back and forth to keep it from clicking stuff.

I actually noticed a couple of weeks back that when I was not using the implant, I was just moving my hand back and forth or in circles like I was trying to keep the cursor from clicking. I was doing it while trying to go to sleep, and I realized, "Okay, this is a problem." To avoid the clicking, I guess, does that create problems when you're gaming, like accidentally clicking a thing? Yeah, it happens in chess. I've lost a number of games because I'll accidentally click something. I think the first time I ever beat you was because of an accidental click. It's a nice excuse, right? Anytime you lose, you could just say that was accidental.

You mentioned the app improved a lot from version one when you first started using it. It was very different. Can you talk about the trial and error you went through with the team? Like, 200 plus pages of notes—what's that process like? It's a lot of me just using it day in and day out and saying, "Hey, can you guys do this for me? Give me this; I want to be able to do that. I need this." I think a lot of it just doesn't occur to them until someone is actually using the app, using the implant. It's just something they never would have thought of, or it's very specific to even me, maybe what I want.

I'm a little worried about the next people that come. Maybe they will want things much different than how I've set it up or the advice I've given the team. They're going to look at some of the things they've added for me and think, "That's a dumb idea; why would he ask for that?" I'm really looking forward to getting the next people on because I guarantee they're going to think of things I've never thought of and improvements that make me think, "Wow, that's a really good idea; I wish I would have thought of that." They'll also give me some pushback about what I'm asking them to do, saying, "That's a bad idea; let's do it this way." I'm more than happy to have that happen.

It's just a lot of different interactions with different games or applications, the internet, just with the computer in general. Tons of bugs end up popping up left, right, and center. It's just me trying to use it as much as possible and showing them what works, what doesn't work, and what I would like to be better. They take that feedback and usually create amazing things for me, solving problems in ways I would have never imagined. They're so good at everything they do. I'm just really thankful that I'm able to give them feedback and they can make something of it. A lot of my feedback is really dumb; it's just like, "I want this, please do something about it." They come back with super well-thought-out solutions, way better than anything I could have ever thought of or implemented myself. They're just great; they're really, really cool.

As the BCI community grows, would you like to hang out with the other folks with Neuralinks? What relationship, if any, would you want to have with them? You said they might have a different set of ideas on how to use the thing. Would you be intimidated by their Web Grid performance? No, I hope they compete. I hope day one they wipe the floor with me. I hope they beat it and crush it, double it if they can. On one hand, it's only going to push me to be better because I'm super competitive. I want other people to push me. I think that is important for anyone trying to achieve greatness; they need other people around them who are going to push them to be better.

Push yourself to greatness by embracing competition and working hard while having fun.

I even made a joke about it on X once: once the next people get chosen, cue the buddy cop music. I'm just excited to have other people to do this with and to share experiences with. I'm more than happy to interact with them as much as they want and to give them advice. I don't know what kind of advice I could give them, but if they have questions, I'm more than happy to help.

When asked what advice I would have for the next participant in the clinical trial, I would say: have fun with this, because it is a lot of fun. I hope they work really, really hard, because it's not just for us; it's for everyone that comes after us. They should come to me if they need anything, and go to Neuralink if they need anything. Neuralink moves mountains; they do absolutely anything for me that they can, and it's an amazing support system to have. It puts my mind at ease about so many things I've had questions about and things I want to do. They are always there, and that's really nice. So I would tell them not to be afraid to go to Neuralink with any questions, concerns, or anything they're looking to do with this; any help that Neuralink is capable of providing, I know they will. And I'd advise them to work their ass off, because it's really important that we give our all to this. So: have fun and work hard. Maybe that's what I'll just start saying to people: have fun, work hard. Now you're like a real pro athlete; just keep it short.

Talking about what I have been able to do now that I have a Neuralink implant, the freedom I gain from this way of interacting with the outside world is significant. I can play video games all night by myself, and that's a kind of freedom. People in my position just want more independence. The more load I can take away from the people around me, the better. If I'm able to interact with the world without using my family or friends to help me with things, the better. If I can be on my computer all night and not need someone to sit me up or position my iPad, and not have them wait up for me all night until I'm ready to be done using it, it takes a load off all of us. It's really all I can ask for. It's something I could never thank Neuralink enough for, and I know my family feels the same way. Being able to have the freedom to do things on my own at any hour of the day or night means the world to me.

Racing against a low battery to break records in Web Grid is the ultimate adrenaline rush!

When the battery is above 50%, I feel like I have time. As it drops to 30% and then 20%, the pressure builds. At 10%, a low-battery popup appears, which disrupts my Web Grid flow; it can cover the grid, so if I want to break a record, I need to do it within the next 30 seconds, before the popup interferes. After dismissing it, I usually have about 10 minutes before the battery dies. That's what's generally going on in my head, along with whatever song is playing.

I am very passionate about breaking records in Web Grid. It has evolved from a leisurely activity that puts me at ease into a serious endeavor. If I don't break a record, I feel like I've wasted five hours of my life. Despite the stress, it's fun. Have you ever tried Web Grid with multiple targets? You can get a higher BPS (bits per second) with that. BPS is calculated as the log of the number of targets, times correct selections minus incorrect selections, divided by time. Essentially, more options make the task harder but can increase BPS. There's also Zen mode, which covers the whole screen with a grid. However, I prefer a mode with a giant BPS number in the background, which we should name "metal mode" because it's like the opposite of Zen mode: super hard mode.
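To make the scoring concrete, here is a small Python sketch of the formula exactly as Nolan states it; the function name and the sample numbers are illustrative assumptions, not figures from the episode.

```python
import math

def web_grid_bps(num_targets, correct, incorrect, seconds):
    """Bits per second as described above:
    log2(number of targets) * (correct - incorrect) / time."""
    return math.log2(num_targets) * (correct - incorrect) / seconds

# Illustrative numbers only: a 35-target grid, 90 correct and
# 5 incorrect selections over 60 seconds.
print(web_grid_bps(35, 90, 5, 60.0))  # ~7.27 BPS
```

This also shows why more targets can raise the score: each correct selection is worth log2(N) bits, so a denser grid pays more per click, as long as accuracy and speed hold up.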

I also play Civilization 6 and usually go with Korea because they focus on science and tech victories. This wasn't planned, but it aligns with my interests. By rushing tech and science, you can get so far ahead technologically that you dominate the game. At one point, you might have advanced units like musketmen and planes while others are still using bows and arrows. This allows for easy domination victories or winning through science. I've even accidentally won diplomatic victories by focusing on science, which was frustrating because it ended the game unexpectedly.

Isolationist strategy in Civilization 7 sounds like a blast—science and tech all the way!

In Civilization 6, you don't need a giant civilization to succeed, especially with Korea. You can keep it small, focus on science, and build up your tech. I plan to establish military units and position them all around my border to keep everyone out, then focus on building up my resources, taking a very isolationist approach; my main goal will be to work on science and technology. It's so much fun, and I recently saw a Civilization 7 trailer. I'm incredibly pumped for its release, which is probably coming next year. Come on, Civ 7, hit me up! I'll alpha or beta test, whatever; that'd be amazing.

Regarding improvements to the Link app and the overall experience, I would like to get back to the click-on-demand feature. It would be great to connect to more devices; right now, it's just the computer. I want to use it on my phone, different consoles, and various platforms; essentially, I want to control as much as possible. An Optimus robot would be pretty cool to control. The Link app itself seems to be getting dialed in to what it might look like down the road, and we've gotten through a lot of what I want from it. The only other thing I would ask for is more control over all the parameters of my cursor. Many factors go into how the cursor moves, like gain and friction. I want as much control over my environment as possible, ideally with an advanced mode for power users.
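As a rough illustration of the kind of parameters he means, here is a toy model of cursor dynamics with gain and friction knobs. The structure and values are assumptions for illustration only, not how the Link app actually computes cursor motion.

```python
# Toy model of the cursor parameters named above: gain and friction.
# Structure and values are illustrative assumptions only.

GAIN = 1.8       # scales decoded intent into cursor velocity
FRICTION = 0.85  # per-tick damping; closer to 1.0 coasts longer

def step(velocity, decoded_intent):
    """One update tick: damp the old velocity, then add newly decoded intent."""
    return velocity * FRICTION + decoded_intent * GAIN
```

An advanced "power user" mode along the lines he asks for would amount to exposing knobs like GAIN and FRICTION in settings rather than fixing them.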

Speech-to-text has also been a handy feature while using the app, although I also use a virtual keyboard. Another area for improvement is typing or texting in a different way. Right now, it's mainly dictation and a virtual keyboard that I use with the cursor. We've experimented with finger spelling, like sign language, which seems promising. I have this intuition that at some point, I'll be able to think of the letter I want and it will pop up, without needing to finger spell. That would be a significant leap, but it's something I believe is possible based on my experience with cursor control.

Embracing brain tech upgrades can transform lives, from restoring vision to breaking language barriers.

Before, it just happened upon me, but now that I know it's possible, I think I could make it happen with other things; it would be much, much simpler. Would you get an upgraded implant device? Sure, absolutely, whenever they'll let me. So, you don't have any concerns? For you, the surgery and your experience, all of it was like no regrets? No, everything's been good so far. Yep, you just keep getting upgrades. Yeah, I mean, why not? I've seen how much it's impacted my life already, and I know that everything from here on out is just going to get better and better. So, I would love to get the upgrade.

What future capabilities are you excited about, beyond this kind of telepathy? Is vision interesting, for example, for folks who are blind, enabling people to see, or for speech? Yeah, there's a lot that's very, very cool about this. I mean, we're talking about the brain, and so far this is just motor cortex stuff; there's so much more that can be done. The vision one is fascinating to me. I think that is going to be very, very cool. To give someone the ability to see for the first time in their life would just be, I mean, it might be more amazing than even helping someone like me. That just sounds incredible.

The speech thing is really interesting. Being able to have some sort of real-time translation and cut away that language barrier would be really cool. Any sort of actual impairment that it could solve, like with speech, would be very, very cool. Also, there are a lot of different disabilities that all originate in the brain, and you would be able to hopefully solve a lot of those. I know there's already stuff to help people with seizures that can be implanted in the brain. This would do, I imagine, the same thing. So, you could do something like that.

I know that even someone like Joe Rogan has talked about the possibilities of being able to stimulate the brain in different ways. I'm not sure how ethical a lot of that would be; that's beyond me, honestly. But I know that there is a lot that can be done when we're talking about the brain and being able to go in and physically make changes to help people or to improve their lives. So, I'm really looking forward to everything that comes from this, and I don't think it's all that far off. I think a lot of this can be implemented within my lifetime, assuming that I live a long life.

What you were referring to is things like people suffering from depression or things of that nature potentially getting help. Yeah, flip a switch like that, make someone happy. I know Joe has talked about it more in terms of wanting to experience what a drug trip feels like, like wanting to experience what it would be like to be on mushrooms or something like that, DMT. You can just flip that switch in the brain. My buddy Bane has talked about being able to wipe parts of your memory and re-experience things for the first time, like your favorite movie or your favorite book. Just wipe that out real quick and then re-fall in love with Harry Potter or something. I told him I don't know how I feel about people being able to just wipe parts of your memory. That seems a little sketchy to me. He said they're already doing it, so it sounds legit.

I would love memory replay, just like actually high-resolution replay of all memories. I saw an episode of Black Mirror about that once. I don't think I want it. Black Mirror always kind of considers the worst case, which is important. I think people don't consider the best case or the average case enough. I don't know what it is about us humans; we want to think about the worst possible thing. We love drama. It's like, how is this new technology going to kill everybody? We just love that. Yes, let's watch. Hopefully, people don't think about that too much with me; it'll ruin a lot of my plans.

The power of touch can transform lives, offering independence and connection in ways we often take for granted.

Yeah, yeah. I assume you're going to have to take over the world. I mean, I love your Twitter. You tweeted, "I'd like to make jokes about hearing voices in my head since getting the Neuralink, but I feel like people would take it the wrong way. Plus, the voices in my head told me not to." Yeah, yeah, yeah. Please never stop.

"So, you're talking about Optimus. Is that something you would love to be able to do—to control the robotic arm or the entirety of Optimus?" "Oh yeah, for sure, for sure, absolutely. You think there's something fundamentally different about just being able to physically interact with the world?" "Oh, 100%."

"I know another thing with being able to give people the ability to feel sensation and stuff too by going in with the brain and having the Neuralink maybe do that. That could be something that could be translated through, transferred through the Optimus as well. There's all sorts of really cool interplay between that and also, like you said, just physically interacting. I mean, 99% of the things that I can't do myself, I obviously need a caretaker for—someone to physically do things for me. If an Optimus robot could do that, I could live an incredibly independent life and not be such a burden on those around me. That would change the way people like me live, at least until whatever this is gets cured. Being able to interact with the world physically like that would just be amazing."

"And not just for having it be a caretaker or something, but something like I talked about—just being able to read a book. Imagine an Optimus robot just being able to hold a book open in front of me, get that smell again. I might not be able to feel it at that point, or maybe I could again with the sensation and stuff. But there's something different about reading a physical book than staring at a screen or listening to an audiobook. I actually don't like audiobooks. I've listened to a ton of them at this point, but I don't really like them. I would much rather read a physical copy."

"One of the things you would love to be able to experience is opening the book, bringing it up to you, and to feel the touch of the paper." "Oh man, the touch, the smell. I mean, it's just something about the words on the page. They've replicated that page color on the Kindle and stuff, but it's just not the same. So, just something as simple as that."

"One of the things you miss is touch." "I do, yeah. A lot of things that I interact with in the world, like clothes or literally any physical thing that I interact with in the world, a lot of times what people around me will do is they'll just come rub it on my face. They'll lay something on me so I can feel the weight. They will rub a shirt on me so I can feel the fabric. There's something very profound about touch, and it is something that I miss a lot and something I would love to do again. But we'll see."

"What would be the first thing you do with a hand that can touch?" "Give your mom a hug after that, right?" "Yeah, yeah. I know that's one thing that I've asked God for basically every day since my accident—just being able to one day move, even if it was only my hand, so that way I could squeeze my mom's hand or something, just to show her how much I care and how much I love her and everything. Along those lines, being able to just interact with the people around me—handshake, give someone a hug, anything like that. Being able to help me eat, I'd probably get really fat, which would be a terrible, terrible thing. Also, beat Bliss in chess on a physical chessboard."

Even in the darkest times, human kindness and resilience shine through, reminding us of our capacity to help and inspire each other.

There are many upsides, you know. Anything I can find to feel like I'm bringing Bliss down to my level, I'll take. He's just such an amazing guy, and everything about him is so above and beyond that anything I can do to take him down a notch, I'm happy about. Yeah, humble him a bit; he needs it, especially as he's sitting next to me.

Did you ever make sense of why God puts good people through such hardship? Oh man, I think it's all about understanding how much we need God. I don't think there's any light without the dark. If all of us were happy all the time, there would be no reason to turn to God ever. There would be no concept of good or bad. As much as the darkness and the evil in the world exist, it makes us all appreciate the good and the things we have so much more. When I had my accident, one of the first things I said to one of my best friends was that everything about this accident has made me understand and believe that God is real and that there really is a God. My interactions with Him have all been real and worthwhile. My friend, however, had a very different reaction; he said that seeing me go through this accident made him believe that there isn't a God.

I believe that it is a way for God to test us, to build our character, and to send us through trials and tribulations to ensure that we understand how precious He is and the things He has given us. Hopefully, we grow from all of that. A huge part of being here is not just to have an easy life and do everything that's easy but to step out of our comfort zones and really challenge ourselves because that’s how we grow.

What gives you hope about this whole thing we have going on, human civilization? Oh man, I think people are my biggest inspiration. Even just being at Neuralink for a few months, looking people in the eyes and hearing their motivations for why they're doing this is so inspiring. I know they could be other places, at cushier jobs, working somewhere else doing X, Y, or Z that doesn't really mean that much. But instead, they're here, and they want to better humanity and the people around them. They want to make better lives for their own family members who might have disabilities, or they look at someone like me and say, "I can do something about that, so I'm going to."

I’ve always connected most with people in the world. I love learning about people, how they developed, and where they came from. To see how much people are willing to do for someone like me when they don’t have to and are going out of their way to make my life better gives me a lot of hope for humanity in general. It shows human resiliency and what we’re able to endure. It also shows how much we want to be there and help each other and how much satisfaction we get from that. I think that’s one of the reasons we’re here: to help each other. That always gives me hope, realizing that there are people out there who still care and want to help.

Thank you for being one such human being, for continuing to be a great human being through everything you've been through, and for being an inspiration to many people, including myself, for many reasons, including your epic, unbelievably great performance on Web Grid. I will be training all night tonight to try to catch up. You can do it. And I believe in you, that you can, once you come back, eventually beat Bliss. Yeah, for sure, absolutely. I'm rooting for you; the whole world is rooting for you. Thank you for everything you've done, man. Thanks, thanks, man.

Thanks for listening to this conversation with Nolan Arbaugh, and before that with Elon Musk, DJ Seo, Matthew McDougall, and Bliss Chapman. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words from Aldous Huxley in "The Doors of Perception":

"We live together, we act on and react to one another, but always and in all circumstances, we are by ourselves. The martyrs go hand in hand into the arena; they are crucified alone. Embrace the lovers desperately trying to fuse their insulated ecstasies into a single self-transcendence in vain. By its very nature, every embodied spirit is doomed to suffer and enjoy its solitude. Sensations, feelings, insights, fancies—all these are private and, except through symbols and at secondhand, incommunicable. We can pool information about experiences but never the experiences themselves. From family to nation, every human group is a society of island universes."

Thank you for listening, and hope to see you next time.

Watch: youtube.com/watch?v=Kbk9BiPhm7o