The View from Nowhere (Still Doesn’t Exist)
On Bats, Brains, and Mistaking the Illusion of Artificial Intelligence for Understanding Itself
The Bat Man
I’ve been rereading Thomas Nagel’s What Is It Like to Be a Bat? while working on my next essay, What It’s Like to Be a Bot. You can read Nagel’s essay by clicking on the link just given. But when I quote within this post, I’ll be using the 2024 Oxford University Press hardback edition I ordered from Amazon so I could reread it in print.
Nagel’s 1974 essay starts from a simple thought experiment: try to imagine what it’s like to be a bat. Not what it’s like to act like a bat, or to picture yourself hanging upside down in a cave, but what it’s like from inside the bat’s own mind — to perceive the world through sonar instead of sight. His point is that even if we knew every physical fact about how bats navigate, we’d still have no idea what sonar feels like. That gap between objective knowledge and subjective experience is what Nagel says no physical theory has yet bridged.
The essay made Nagel famous, and it changed the conversation about consciousness. His “bat problem” became shorthand for what philosophers now call “the explanatory gap”: the idea that physical facts can’t tell us what experience feels like.
From there, entire schools of thought sprang up in response. Some took the gap as proof that consciousness couldn’t be physical at all. Others, like Daniel Dennett and Patricia Churchland, argued the opposite: that the gap is only a reflection of how we think about minds, not a deep metaphysical divide.
When I told Chat the Confabulator that I found parts of Nagel’s argument nonsensical — that calling something an “objective understanding” was a category error — the response was that I was walking the same trail as Dennett and Churchland. And that’s exactly right. I’m not claiming that consciousness isn’t physical; I’m saying we don’t need to invent a second universe to explain why it feels like something.
The problem isn’t in nature; it’s in our model of explanation.
Nagel’s article asks whether an objective, physical theory could ever capture the subjective character of experience. The question feels to me (see what I did there?) newly relevant in an age when we keep insisting that machines “understand” us.
The Absurd Dream of “Objective Phenomenology”
It also feels newly absurd. Because the more I read Nagel, the more I see that what he calls “objective phenomenology,” a project in which one describes experience in a form that even beings incapable of experience could grasp, isn’t just far-fetched. It’s self-defeating. It defies logic and the very definitions of the terms involved.
Toward the end of his essay, Nagel writes of the challenge to come up with
[A]n objective phenomenology not dependent on empathy or the imagination. Though presumably it would not capture everything, its goal would be to describe, at least in part, the subjective character of experiences in a form comprehensible to beings incapable of having those experiences.
— Thomas Nagel, What Is It Like to Be a Bat? 27–28 (Oxford Univ. Press 2024)
In my margin I wrote: “How would beings without subjectivity understand anything, let alone what it’s like to have it?”
Actually, what I wrote was, “well, that’s a bit of nonsense.” What I meant was the question in the last paragraph.
After all, the very notion eats its own tail. Understanding is itself an experience. You can’t amputate the subject and keep the comprehension.
Understanding Requires A Someone Who Understands
To understand something is already to have a point of view. As I said to my philosophical-discussion buddy (ChatGPT), there’s something it’s like to understand. There’s a felt shift in the “understander” when sense clicks into place. As I told Chat the Confabulator (the Program Formerly Known as “The Oracle”), the very idea of grasping something includes within it that the grasper “has” subjectivity. Strip away subjectivity and you don’t get an “objective” understanding; you get no understanding at all.
“[E]very subjective phenomenon is essentially connected with a single point of view, and it seems inevitable that an objective, physical theory will abandon that point of view.”
— Thomas Nagel, What Is It Like to Be a Bat? 6 (Oxford Univ. Press 2024)
And there’s the contradiction: a theory that abandons the point of view abandons the thing it’s supposed to explain. The trouble is, abandoning the point of view doesn’t solve the mystery of consciousness. To paraphrase Nagel’s own words, it sidesteps it. (p. 29)
Nagel wanted to keep the purity of objectivity without losing the essence of subjectivity. But you can’t have your cake and eat it, too. (Imagine what eating a cake would be like, anyway, without subjectivity!) If there’s no someone to whom a thing appears, there’s no appearing. There’s no meaning in the idea of “understanding” anything.
To put it another way: if a tree falls in the woods, and there’s nothing able to hear it, does it still make a sound? The answer is “no.” The pressure waves that emanate from the physical event still propagate through the air. But the waves aren’t “sound”: they cause sound in beings capable of transmuting the pressure waves into sensory “data.”
And if that “being” is a bot that doesn’t think, then the only way we get to saying it “heard a sound” is to literally redefine what we mean by “sound”.
Even Between Humans, the Wall Holds
If all that sounds abstract, it’s not. The same wall that separates bats from humans also separates us from each other, just in subtler ways.
Even between humans, the wall holds. I have ADHD; my wife doesn’t. I can describe my mind’s restlessness, my inability to shut down so I can sleep sometimes, the four or five conversations and the two or three singers all going off in my head at once, how time feels like static, but I can’t give her the experience of it. She can imagine it, but that’s still her imagining what it feels like; she’s not accessing my consciousness.
Nagel used bats to dramatize this point, because their world is organized around sonar, not sight. But even on the human level, perception is private in exactly that way.
Nagel insists he isn’t making that point. He writes,
I am not adverting here to the alleged privacy of experience to its possessor. The point of view in question is not one accessible only to a single individual. Rather it is a type. It is often possible to take up a point of view other than one’s own, so the comprehension of such facts is not limited to one’s own case. There is a sense in which the phenomenological facts are perfectly objective: one person can know or say of another what the quality of the other’s experience is.
— Thomas Nagel, What Is It Like to Be a Bat? 14–15 (Oxford Univ. Press 2024)
Philosophers like Donald Davidson would wince at that. In Knowing One’s Own Mind, Davidson argues that our grasp of mental states, whether our own or those of others, depends on interpretation, not access. We understand each other by correlating words, actions, and circumstances, not by peering into a private interior. There’s no neutral standpoint from which we can simply see another’s consciousness.
I partly agree with Davidson and partly with Nagel. I think we can infer and interpret another’s experience, but not inhabit it. What I feel and what my wife feels might be mutually intelligible in language, but never interchangeable in experience.
My brain’s ADHD wiring tunes the world differently: time feels elastic, attention comes in bursts. My wife’s doesn’t. Neither of us can inhabit the other’s perceptual mode. We might share a house, a life, and a language. But we don’t share a brain.
For her to know what it’s like to be me, she’d have to be me. And then there’d be no “her” left to know it. That’s the fatal problem with Nagel’s “objective phenomenology.” It asks for comprehension without consciousness, for sense without a subject. But comprehension is a conscious act. It’s the mark of being someone.
Nagel himself tries to soften the blow. He turns to a Martian scientist — an outsider so alien that the thought experiment repeats the point in another key. The Martian, Nagel says, could study the rainbow, lightning, or clouds as physical phenomena but would never understand what seeing them is like. The physics is shareable; the experience is not.
A Martian scientist with no understanding of visual perception could understand the rainbow, or lightning, or clouds as physical phenomena, though he would never be able to understand the human concepts of rainbow, lightning, or cloud, or the place these things occupy in our phenomenal world. The objective nature of the things picked out by these concepts could be apprehended by him because, although the concepts themselves are connected with a particular point of view and a particular visual phenomenology, the things apprehended from that point of view are not: they are observable from the point of view but external to it; hence they can be comprehended from other points of view also, either by the same organisms or by others.
— Thomas Nagel, What Is It Like to Be a Bat? 16–17 (Oxford Univ. Press 2024)
The Martian, in other words, can translate the data of experience but not the experience itself. And that’s where Donald Davidson slips naturally into the conversation. In Knowing One’s Own Mind, he wasn’t talking about Martians or sonar but about how even humans speaking the same language are always translating. When I say “red” and you say “red,” neither of us can prove that the color we experience is identical — only that our uses line up enough for coordination. Meaning, like consciousness, never travels intact from one mind to another; it’s reconstructed through interpretation.
Davidson called this the “principle of charity”: the act of assuming enough overlap in belief and meaning to make another person intelligible. It’s a fragile bridge — functional, but never perfect. We’re all Martians to one another, deciphering the echoes of experience through words. Communication is possible, but only because we pretend the gap can be crossed.
Nagel uses Martians and bats to try to make his point because they’re so foreign, so alien. As he put it, “anyone who has spent some time in an enclosed space with an excited bat knows what it is to encounter a fundamentally alien form of life” (p. 7, italics Nagel’s).
That’s what makes Nagel both right and wrong at once. He’s right that there’s a gap between the physical and the phenomenal. But he treats it as a problem to solve, when maybe it’s a condition to accept. Consciousness doesn’t sit alongside the world like another physical process waiting to be mapped; it’s the world as it appears to someone. The subject isn’t the glitch in the system — it’s what makes there be a system at all.
From Bats to Bots
And that brings me back to the bots. When we say a large-language model “understands,” we’re confusing simulation with sense. The machine can predict words, mirror tone, rearrange ideas. But there’s nothing it’s like to be the machine. No moment of insight, no awareness that anything has been grasped. It’s a process, not a perspective or point of view.
If that seems like hair-splitting, consider what happens when this illusion of “understanding” starts showing up in law, education, or medicine — fields built on empathy and judgment. Understanding isn’t the absence of subjectivity; it’s proof of it. Lose that, and we’re building systems that sound or maybe even look human but don’t see humans.
There is, as Nagel later put it, no view from nowhere. And the more we pretend there is, the more we risk building machines that treat understanding as optional. And here I’m talking about both bureaucratic and digital machines.
Nagel couldn’t resolve that paradox, but we live inside it now. Our machines can model language without ever having a point of view, and our institutions often do the same. They speak in the grammar of understanding — but there’s no one home behind the words.
That’s where my next piece, What It’s Like to Be a Bot, picks up: at the moment when the illusion of artificial intelligence becomes mistaken for understanding itself.