Finding Purpose in the World

James Vornov, MD PhD
Neurologist, drug developer and philosopher exploring the neuroscience of decision-making and personal identity.


Purpose is real, absent the divine. But first, what counts as real?

Do things in the world really have purpose?

When we look around, it seems obvious to me that there's a reason for things, whether it's an inanimate man-made object like a saw or a dog chasing a squirrel. Religious traditions ascribe purpose to a higher being. I'm going to show you that purpose is real in the world, absent the divine.

In this post, I'm just going to lay the groundwork and make the argument that these things are real. Then, in the next post, we'll talk about why we might agree that they have properties like purpose and agency. Without the need of any observer, me or you, to grant them purpose through our inductive efforts, I want to say that dogs had agency and purpose before there were people to realize it.

What Counts as Real?

Okay, what are we going to call real? It's been a central question in philosophy since the ancients. It's important to me because I want to separate what's real from what we're aware of. If there's a real world, there's a real truth of it we can approximate. We're not forever stuck in our own heads with our individual views of what's real and what matters.

First let’s agree that there’s a world out there. It’s real. Sure we experience it through our senses and construct an internal model in neural networks that is made available through awareness. But the world out there is what we’ll call real. Also I want to talk about objects like rocks, tables and chairs being real. Otherwise we’re stuck with only the base layer, the raw stuff of matter, like quarks or the wave function of the universe being real.

Now you can go down the simulation route and say everything is a construction of the senses, presented to awareness through complex neural networks. This phenomenology is not unreasonable in psychology, but if you don't think a table is real, we can end now. You'll be living in a purely phenomenological world, not my world of materialism. For now, we want the world itself to be our subject, not how we perceive it. Which means there were rocks and birds and dogs before people showed up, before I showed up to see and name things. In Genesis, Adam names the animals, but they were here first.


Looking for AGI? Try C. elegans, Not ChatGPT



ChatGPT is pretty dumb when compared to an agentic complex system like a worm

LLMs and the meaning of “intelligence”

I use a selection of large language models every day. I think they are actually kind of dumb.

Yet I keep hearing about “Artificial General Intelligence” being reached, the prospect of superintelligence, and the replacement of knowledge workers by LLMs. And then there are the questions about sentience that just make me roll my eyes.

I’ll admit that at this point, LLMs make excellent assistants. They help with fact-checking, reflecting back ideas, and making counterarguments based on conventional wisdom. There remain huge problems with hallucinations and with guessing when something could easily be looked up on the internet; they don’t seem to understand Bayesian induction. They are much better at summarizing and analyzing text than they are at producing it. New ideas are almost entirely absent. And they mess up numerical and quantitative arguments all the time. Which is not to say that I’m not inspired with new ideas through exploratory chats; it’s just that the ideas are mine, never the model’s. So why do we insist on ascribing general intelligence to them?

The subjective experience of talking to our current models is weirdly persuasive that there’s an intelligence there. It’s not just that the answers are fast and fluent; it’s that the model can hold a thread, shift registers, and generate language that looks like it came from a person who has actually spent time thinking. They feel alien and at the same time oddly knowable as another intelligence.

Comparing intelligence: LLM vs a Worm?

Exactly how intelligent is an LLM? I got to thinking that I could simply count connections or potential network states. After all, if you think the model is intelligent, it comes down to all the connections and how complex its possible states are to produce that intelligent behavior. My gut is that the LLM is pretty stupid, really; it’s just a model of something intelligent.

So what’s the simplest thing that I could compare it to? What’s a simple, mapped-out intelligence? How about our old friend, C. elegans, that simple worm that lives in leaf litter and has been the subject of so much study? The worm is the oldest and best-mapped nervous system we have, and it’s the kind of organism that tempts you into thinking the hard part is over. It’s tiny. The wiring diagram of its 302 neurons has been charted. Its behavioral repertoire is really modest. It feeds, avoids danger, and reproduces, and not much more. Very modest compared to mammals or a summary of recent dining trends in New York City. If you’ll allow the possibility that “intelligence” is to be found in a network, then the worm should be the perfect comparison.
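The connection-counting idea above can be sketched as simple arithmetic. The worm figure (~7,000 synapses across 302 neurons) is the commonly cited connectome count; the LLM figure uses GPT-3's published 175 billion parameters as a stand-in, treating each parameter as one connection weight. Both are illustrative assumptions, not a rigorous measure of intelligence.

```python
# Back-of-envelope "connection count" comparison. Numbers are
# illustrative: ~7,000 synapses for the C. elegans connectome,
# 175 billion parameters for GPT-3 (one parameter ~ one weight).
worm_neurons = 302
worm_synapses = 7_000
llm_parameters = 175_000_000_000

ratio = llm_parameters / worm_synapses
print(f"LLM 'connections' per worm synapse: {ratio:,.0f}")  # 25,000,000
```

Of course, raw connection counts say nothing about what the connections do; that's the point of the comparison that follows.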


The Work Continues

I’ve been away from web posting for a while between one thing and another. Mostly, it’s been the focused task of editing the manuscript of the book, working title: Deciding Better: A Journey Through Biotech, Neuroscience, and the Experience of Being a Brain. I’m about two-thirds through my flow revision, where I’m working to get the core arguments and flow of the book to work right. A lot of it is reverse engineering how a book works.

The nice thing is that I’ve gotten better at the exposition of these complicated ideas by writing the more tightly constructed posts here, cross-posted to Substack. And it’s going well. A few more weeks, and that should be done. Then, it’s on to a final polish of the flow, and I think it will be good enough to have others read it.

My idea at this point is to try to solicit agents once again with this version in hand. At the same time, I think I’ll provide a “Readers Edition” to Substack subscribers and anyone who emails me through the blog here. It’s about time I get some real feedback on the ideas and the writing. Then, depending on how the universe responds, it will be off to a publisher or down the self-publishing path. Either way, the book is asking to be out in the world.

In the meantime, a new post follows about LLMs and AGI. I continue to be fascinated by what we’ve achieved with these deep neural networks. But can we see into what processes like ChatGPT are doing? How much intelligence can we really credit them with compared to real human brains? As always, I try to look at the data rather than the hype.

A Modified HDR Workflow

I watched a recent YouTube video of Vincent Versace editing an image live. I decided to play around with the technique myself. Since the days of photography method books, and even informative websites, seem to be gone, it seemed worthwhile to document my adapted approach here. It’s an example of writing for the AIs, since photographic technique is one of those areas that seems to be an AI blind spot these days.

The problem we need to solve is how to use the tremendous dynamic range of digital sensors when our monitors and print materials are so compressed by comparison. We know you can process the RAW file out of camera to recover shadows and highlights in ways never possible with film. But what most photographers don’t realize is that the image they see on the screen of a mirrorless camera is a JPEG calculated from live sensor data, and it is itself compressed. And the histogram that everyone relies on is the JPEG histogram, not the full sensor readout. That’s why, when you blow highlights on the histogram, you can still recover some of them from the RAW file.

Vincent approaches the compression problem using HDR techniques. Early on, we used to bracket exposures and use HDR or just stacking in Photoshop layers to capture deep shadow and bright highlights like the sky. But now sensors have such wide dynamic range that those brackets are actually there in the numbers in the RAW file. You just don’t see them on screen.

So the approach is a simple adaptation of Vince’s longstanding Photoshop layer approach. You start by creating multiple versions of the RAW file as TIFFs. Most simply, you have a one-stop underexposed version, a one-stop overexposed version, and the camera capture as three versions. If the base image has a really wide range, you could make a two-stop bracket from the RAW, or only increase or decrease exposure.

But now you have real pixels rescuing highlights and shadows, rather than trying to process them from the RAW selectively. Vince then uses Nik HDR to create an HDR rendering from the three-stop bracket made from the single RAW file. You’ll see a balanced image where shadows are brought up, plus various renderings that tend to emphasize different tonal ranges in the image, renderings that couldn’t be achieved by simple manipulation of the RAW file.
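The synthetic bracket step can be sketched in a few lines. This is a minimal toy model, not the actual RAW-converter pipeline: it assumes the input is linear sensor data normalized to [0, 1], and each bracket step is a multiplication by a power of two (one stop = one doubling) followed by clipping to display range.

```python
import numpy as np

def bracket(linear, ev):
    """Re-expose a linear image by `ev` stops and clip to display range."""
    return np.clip(linear * (2.0 ** ev), 0.0, 1.0)

linear = np.array([0.02, 0.18, 0.45, 0.90])  # toy linear pixel values

under  = bracket(linear, -1.0)  # protects highlights, crushes shadows
normal = bracket(linear,  0.0)  # the camera capture as-is
over   = bracket(linear, +1.0)  # lifts shadows, clips the 0.90 highlight to 1.0
```

The clipping is exactly why the three versions carry different information: the overexposed frame has real shadow detail the normal frame renders near black, while the underexposed frame keeps the highlight that the overexposed frame clipped.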

So you go from a relatively flat, kind of interesting image to one that’s been interpreted as a play of light.

Checking In

I had a solid run here and on Substack, but things got busy as they often will. The good news is that I’ve now finished the revision of the book manuscript and am now in a real editing phase for flow and readability. It’s going faster than I expected because the material is hanging together well.

Writing for Substack weekly was a great exercise in working out complex ideas in short form and it improved my writing skill to a level that enabled me to begin to achieve what I’m after. I’m sure there’s still a long way to go before I’m done, that point where I’m not making it any better with my changes. For now, the improvements are big in this first round.

And yes, I picked up a camera again. It’s been too long, and I take that as a good sign of emerging back into a creative mindset.

Updated my “About Page”

In the last few months, the views here at ODB have shifted away from Google searches on note-taking or photo gear to hits on the main page and the “About” page. I’m hoping that’s a result of the last six months of more consistent posting on neuroscience here, with reposting on Substack. So it seemed about time I updated the “About” page to better reflect the more focused mission here.

I’ve finished the first two sections of the manuscript, and it’s greatly improved. With just the last three chapters to rewrite, the finish line is in sight. Trying to post weekly and revise the manuscript has been steady work. I think it’s been worth it.

From Shoe Polish Tins to Brain Implants: Heroes and Broken Promises



We know that BCIs work and hold great promise, but let’s see what history tells us about the journey

In my last post, I described the current state of brain computer interfaces (BCIs). I was surprised to realize that we’ve had working devices for twenty years now. So it’s very clear that electrodes can record motor intent from the cerebral cortex, and this can be used to control devices like computer interfaces, keyboards, or robotic mechanisms. And remember that we don’t need to read the real motor intent; we can just record patterns, and the brain is adaptable enough that the intent can be remapped onto a completely different use. We don’t need to find the index finger control region; a spot on the cortex connected to controlling the tongue is easily repurposed to the index finger or even controlling a cursor.

The technology we have is relatively simple. We have an electrode either on the surface of the cortex picking up local activity or, more invasively, in the depths of the cortex near the neurons themselves, recording ensembles of spike trains. They seem to work more or less the same when we want to detect a motor intent under conscious control. The signal comes out via wires attached to an amplifier specialized for very low amplitude signals.

The practical challenge

There are lots of obstacles to implementation. The signals from the electrodes are tiny, just 50 to 100 microvolts. And we’re seeing arrays of 1,024 electrodes, sometimes with multiple arrays implanted. Thousands of channels of tiny signals that need to be amplified, and protected from noise and electrical interference. After all, we don’t want the blender or vacuum cleaner to control the robotic arm. Clearly, shielding and high-performance, multichannel amplification are key. Which is why we see the patients in the current trials with backpacks and big power supplies. That’s a lot of electronics and amplification. And yes, that’s just to get the signal out; it still needs to be analyzed and transformed by a deep neural network to control the physical robotic interface.
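To get a feel for why the electronics are so bulky, it helps to run the data-rate arithmetic. The 1,024-electrode count is from the trials above; the 30 kHz sampling rate and 16 bits per sample are assumed-but-typical values for resolving neural spikes, not figures from any specific device.

```python
# Rough data-rate arithmetic for a multichannel neural implant.
# Assumed values: 1,024 electrodes, 30 kHz sampling (typical for
# resolving spike waveforms), 16 bits per sample.
channels = 1024
sample_rate_hz = 30_000
bits_per_sample = 16

bits_per_second = channels * sample_rate_hz * bits_per_sample
megabytes_per_second = bits_per_second / 8 / 1e6
print(f"{megabytes_per_second:.2f} MB/s")  # 61.44 MB/s
```

Tens of megabytes per second of raw microvolt-level signal, amplified, digitized, and shielded, goes a long way toward explaining the backpacks.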

Nathan Copeland with Utah Device. https://www.wired.com/story/this-man-set-the-record-for-wearing-a-brain-computer-interface/

Are we anywhere close to the marketing picture of the wire going to a little puck under the scalp? My assumption is that the puck is the amplifier unit, and it would transmit to the control unit wirelessly.


Are Brain Computer Interfaces Really Our Future?



We’re making real progress providing brain computer interfaces for patients paralyzed by ALS and spinal cord injury. Less invasive approaches are looking promising.

I saw an interview with Dr. Ben Rapoport, who’s a neurosurgeon and chief science officer of Precision Neuroscience. The company is one of several, like Elon Musk’s Neuralink, developing brain-computer interfaces, or BCIs. The interview centered on selling these brain implants as not as invasive as they sound. It started me thinking once again about whether it’s conceivable that these might actually be how we control computerized devices in the future.

Think about how effortlessly you type. Your intentions route directly to your fingers, bypassing speech entirely. I can type faster than I can talk because the motor pathway from intention to keyboard is so well-trained. But paralyzed patients can’t access those finger pathways—they’re injured or missing entirely.


When Models Collide: How the Brain Deals with Cognitive Dissonance



Our actions often conflict with our beliefs. The discomfort we feel isn’t moral failure — it’s what happens when valence systems disrupt the brain’s coherent story of personal identity.

Getting back to my exploration of personal identity this week.

As I’ve been writing here weekly, I’m settling in on an approach of looking at everyday experience and examining the underlying brain mechanisms at play. Often they constrain our thoughts and actions, but it seems to me that even more often, seen from the point of view of the brain’s work as a system regulator, the picture is really quite liberating. Knowing that our actions rely on physiology, not failure or flaw, lets me feel a bit more comfortable in this human skin.

Cognitive dissonance as conflict between action and belief

So I want to return to the subject of my first post on Substack and make another run at explaining what’s called “Cognitive Dissonance”. For our purposes here today, let’s limit the concept to those times when we find ourselves acting and feeling one way, but intellectually finding fault with what we’ve done. So we’re acting in ways contrary to our beliefs.

No reason not to use a perfectly trivial, but common example. Chicken thighs.


If Purple isn’t real, then what is?

Let’s talk epistemology. Actually, let’s use the color purple to bid farewell to epistemology altogether.



We’ll have to start with the real world as revealed by spectrometers and their ilk. They reveal the electromagnetic spectrum: photons with wavelengths that range from gamma rays (<0.01 nm) up through X-rays (0.01–10 nm) into ultraviolet (10–380 nm) and finally our visible light spectrum (380–700 nm), those wavelengths that the photopigments in our eyes absorb and transduce into signals for the visual system. Anything longer is infrared, which we sense as heat, out to about 1 mm. Longer than that are microwaves (1 mm to ~1 m) and then radio waves, whose wavelengths stretch literally for miles.
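The band boundaries above can be written down as a small lookup table. This is just the figures from the paragraph restated as code; the boundaries are approximate, and the cutoffs between named regions are conventions rather than sharp physical lines.

```python
# Wavelength (in nanometers) -> named region of the EM spectrum,
# using the approximate boundaries quoted in the text.
BANDS = [
    (0.01,          "gamma rays"),    # < 0.01 nm
    (10,            "X-rays"),        # 0.01–10 nm
    (380,           "ultraviolet"),   # 10–380 nm
    (700,           "visible light"), # 380–700 nm
    (1_000_000,     "infrared"),      # out to ~1 mm
    (1_000_000_000, "microwaves"),    # ~1 mm to ~1 m
]

def band(wavelength_nm):
    for upper, name in BANDS:
        if wavelength_nm < upper:
            return name
    return "radio waves"

print(band(550))  # visible light
print(band(5))    # X-rays
```

Laid out this way, the striking thing is how narrow the visible slice is: 320 nm of width in a spectrum spanning many orders of magnitude.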

Why the narrow 380 to 700 nm, you may wonder. Wouldn’t it be cool to see in microwaves? Get some X-ray vision? They tell me it’s where biology, physics, and our particular environment line up for an optimal photon-based sensory system. First of all, our big yellow sun puts out photons across the spectrum, but it peaks in the visible range. So build a visual system based on the most available photons, right? Then the physics of the atmosphere and optics (our biological lensing and focusing) work together to make this visible range most suited for image building. Finally, the chromophores, the vitamin A derivatives that absorb light in our photoreceptors bound to opsins, do their cis-trans shift best in this wavelength range. X-rays are too energetic. Microwaves are too weak. The visible spectrum is just right.

We all learned the spectrum in school. The colors of the rainbow: ROY G BIV. Red, orange, yellow, green, blue, indigo, violet. Now it’s seven colors because things come in sevens. Seven seas, seven days. Seven colors. And I’ve thought that they were trying to trick us by naming two colors of shorter wavelength than blue that tend toward purple. We’ll return to indigo and violet in a bit. For now, I want to focus on that classic purple, which is a mixture of red and blue, the bottom and top of the spectrum.
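The red-plus-blue mixture can be made concrete in additive RGB, the model screens use. This is a toy illustration: full red plus full blue, with no green, gives the magenta we casually call purple, a color that corresponds to no single wavelength on the spectrum.

```python
# Additive color mixing in 8-bit RGB: purple as red + blue.
# There is no wavelength for this color; it exists only as a mixture
# of the spectrum's two ends.
red  = (255, 0, 0)
blue = (0, 0, 255)

purple = tuple(min(r + b, 255) for r, b in zip(red, blue))
print(purple)  # (255, 0, 255)
```

That missing wavelength is exactly what makes purple such a useful test case for asking what counts as real.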
