The Hedgehog Review: Vol. 16 No. 2 (Summer 2014)
How We Lost Our Attention
Matthew B. Crawford
Reprinted from The Hedgehog Review 16.2 (Summer 2014). This essay may not be resold, reprinted, or redistributed for compensation of any kind without prior written permission. Please contact The Hedgehog Review for further details.
We are living through a cultural crisis of attention that is now widely remarked upon, usually in the context of some complaint or other about technology. As our mental lives become more fragmented, what is at stake seems to be nothing less than the question of whether one can maintain a coherent self. I mean a self that can act in the world according to settled purposes and ongoing projects, rather than flitting about. The way we tend to view this problem is that our mental autonomy is at risk.
This is all true enough. But I want to suggest that the experience of attending to something isn’t easily made sense of in the language of autonomy, and that if we want to understand the current crisis, we will have to find another way to think about attention.
Understood literally, autonomy means giving a law to oneself. The opposite of autonomy thus understood is heteronomy: being ruled by something alien to oneself. In a culture predicated on this opposition (autonomy good, heteronomy bad), it is difficult to think clearly about attention—the faculty that joins us to the world—because everything outside one’s skull is regarded as a potential source of heteronomy, and therefore a threat to the self.
If this sounds like an overstatement, that’s because it is. It is an extremity that is implicit in the view of the human person that comes to us from certain Enlightenment figures who were working out a new and quite radical notion of freedom. To do justice to the phenomenon of attention, we will have to interrogate that notion of freedom by revisiting the polemical setting in which it was first articulated.
The Underlying Strata
When we talk about freedom, what we are keen to be free from is a moving target; it shows up differently at different historical moments. John Locke fleshed out the idea of freedom in a way that was necessary for his political arguments but also required a re-description of the human being, and of our basic situation in the world. Ultimately, it required a new account of how we apprehend the world. To anticipate:
- We are enjoined to be free from authority—both the kind that is nakedly coercive and the kind that operates through claims to knowledge. If we are to get free of the latter, we cannot rely on the testimony of others.
- The positive idea that emerges, by subtraction, is that freedom amounts to radical self-responsibility. This is both a political principle and an epistemic one.
- We achieve radical self-responsibility, ultimately, by relocating the standards for truth from outside ourselves to within ourselves. Reality is not self-revealing; we can know it only by constructing mental representations of it.
- Attention is thus demoted. Attention is the faculty through which we encounter the world directly. If such an encounter isn’t possible, then attention has no official role to play.
My hypothesis in what follows is that Enlightenment epistemology was not the fruit of a serene inquiry into how our minds work. It began as a quarrel about politics, and had a polemical point. The quarrel was “won,” as a historical fact, by the party that was animated by a single master principle: to liberate—whether from the ancien régime, ecclesiastical authority, or Aristotelian metaphysics. That is why the term liberalism is useful for characterizing the big metaphysical and anthropological picture that was established in those revolutionary centuries in which the quarrel played out.
Allow me to sound one further preparatory note before we dive in. I find it instructive to regard our current landscape, and the ideal self who inhabits it, as the sedimented result of a history of forgotten polemics, whose common feature is that they have been animated by this will to liberate. Self-understanding, then, requires digging down into the history of philosophical thinking, for it is in these quarrels that the sediments have been deposited. The point isn’t to reach bedrock—some foundational, ahistorical self—but rather to do like a geologist and get a clear sectional view of the strata. If we could do this, I think it would help us to see the topography of current experience a little differently.
For John Locke, the main threat against which it was necessary to assert freedom was the arbitrary exercise of coercive power by the political sovereign. The political theory that prevailed in his time (the seventeenth century) legitimized such power by positing a fundamental difference in kind between the sovereign and everyone else. Various arguments tied monarchy to God’s will: the sovereign was God’s representative on earth, or there was a nested order such that child is to parent as citizen is to the sovereign, and the sovereign is to God. Locke’s strategy, however sincere (and scholars disagree on this), was to offer a theological argument of his own: God is so much greater than man, the difference is so unfathomable, that this relation mocks any attempt by one man to claim godlike, coercive power over another.1 We are all equal in our smallness before God. Therefore, our natural estate is one of freedom in relation to one another.
Locke spelled this out further: once upon a time, we lived in a “state of nature,” whose defining feature was the absence of some recognized authority, a third party to arbitrate disputes. At some points in Locke’s Two Treatises of Government (1690), this appears to be a historical claim about how we once lived; at other points, it is a conceptual device to describe the moral relations that obtain between persons who have not consented to a common government. In the state of nature, the dictates of one’s own reason are all that one obeys—there is no such thing as “authority.” Political society is instituted in a decisive moment when people give their consent to abide by the rulings of a common judge in whom they invest authority, at which point they acquire political rights and responsibilities. The issue of consent is key: This is the source of the legitimacy of all authority, and of the rights one retains against that authority.
We may allow ourselves to wonder, when does this all-important act of consent happen? I was born into a society that was already up and running, and isn’t this the case for almost all of us? Maybe I give my consent to the regime tacitly—for example, by walking on the public road. But I don’t have much choice in this, do I? If I veer off the public road and try to bushwhack my way overland, I will quickly encounter “No Trespassing” signs. Other people got here first. Locke’s theory of legitimate authority founded on consent describes not the normal course of things but a hypothetical moment of political founding. It is not the founding moment of any actual revolution, but of a fable in which there is no already-existing society, and the land is unclaimed. At the foundation of our political anthropology is a creature who comes into existence in a moment of free deliberation (shall I consent to this arrangement?) that occurs in a present unconditioned by the past. The freedom of the liberal self is the freedom of newness and isolation.2
Locke’s concern with illegitimate authority extends beyond the kind that is nakedly coercive, to the kind that operates through claims to knowledge. His political project is thus tied to an epistemological one. The two are of a piece, because “he is certainly the most subjected, the most enslaved, who is so in his Understanding.”3 Locke does some of his most consequential liberating in his Essay Concerning Human Understanding (1690), from which the preceding quote is taken.
Charles Taylor points out that “the whole Essay is directed against those who would control others by specious principles supposedly beyond question.”4 These are the priests and the “schoolmen,” those carriers of an ossified Aristotelian tradition. The Reformation notwithstanding, political authority and ecclesiastical authority remained very much entwined and co-dependent in Locke’s day, and for a century and more thereafter.
Political freedom requires intellectual independence, then. Locke takes this further. Following Descartes, he enjoins us to be free from established custom and received opinions—indeed, from other people altogether, taken as authorities. “We may as rationally hope to see with other Mens Eyes, as to know by other Mens Understandings… The floating of other Mens Opinions in our brains makes us not a jot more knowing, though they happen to be true.”5
The project for political freedom thus shades into something more expansive: we should aspire to a kind of epistemic self-responsibility. I myself should be the source of all my knowledge; otherwise, it is not knowledge. Such self-responsibility is the positive image of freedom that emerges by subtraction, when you go far enough in pursuit of the negative goal of being free from authority.
But this self-responsibility brings with it a certain anxiety: If I have to stand on my own two feet, epistemically, how can I be sure that my knowledge really is knowledge? An intransigent stance against the testimony of others leads to the problem of skepticism.
How do we know some evil genius hasn’t deceived us? Even our own senses lead us astray—for example, in optical illusions. Descartes took the very existence of an external world as a legitimate problem for philosophy to worry about. He wanted certainty, some foundation for knowledge that would be impervious to skeptical challenge. As he thought about this, it occurred to him that this experience itself—“I am thinking”—is beyond doubt. If I am thinking, I must exist. This is the secure beginning point that must serve as the foundation for knowledge altogether. What we need, then, are rules for the conduct of the mind that we can follow from this secure beginning to build up certain knowledge. It is not the content of our thinking that matters now, but how we arrive at that content. This conclusion entails a new conception of what it means to be rational. The standard for rationality is no longer substantive, but procedural, as Taylor points out. And this means that the standard for truth is relocated: It is no longer found out in the world, but inside our own heads.6
Attention is therefore demoted. Or, rather, it is redirected. Not by fastening on objects in the world does it help us grasp reality, but by being directed to our own processes of thinking, and making them the object of scrutiny. What it means to know, now, is not to encounter the world directly (thinking you have done so is always subject to skeptical challenge), but to construct a mental representation of the world, according to canons of correct method.
Another early modern thinker, Giambattista Vico, summed up this view succinctly: We know only what we make. This motto well captures the revolution in science accomplished by Galileo and Newton. Natural science became for the first time mathematical, relying on mental representations based on idealizations such as the perfect vacuum, the frictionless surface, the point mass, the perfectly elastic collision. What this amounted to, Martin Heidegger said, some three centuries later, is “a project[ion] of thingness which, as it were, skips over the things.”7
One way to state the conviction that all of these Enlightenment figures shared is that reality is not self-revealing. The way it shows up in ordinary experience is not to be taken seriously. For example, we see a blue dress; but “blue” isn’t in the dress, it’s a mental state. Descartes and Locke both insisted on a distinction between “primary qualities,” which are properties of things themselves, and “secondary qualities,” which are a function of our own perceptual apparatus. The true description of the dress would refrain from invoking the latter sort of property, and say not that it is blue but that its fabric reflects light of a certain wavelength (as we would now say), which we see as blue. We are to take a detached stance toward our own experience, and subject it to critical analysis from a perspective that isn’t infected with our own subjectivity.8
Let us pause for a moment to let the weirdness of all this sink in. Notice that we have moved (very quickly, in this compressed treatment) from an argument about the illegitimacy of certain established political authorities of the seventeenth century, to the illegitimacy of the authority of other people in general, to the illegitimacy of the authority of our own experience.
In telling the story of the Enlightenment in this sequence, I want to suggest that the last stage (on this telling), the somewhat anxious preoccupation with epistemology, grows out of the enlighteners’ political project of liberation, and that we should view it in this light. Their organizing posture against authority compelled the enlighteners to theorize the human person in isolation, abstracted from any pragmatic setting in which he might rely on the testimony of others, or, indeed, on his own common sense as someone who has learned how to handle things. The pure subject who is posited as the beginning point for the Cartesian/Lockean account of knowledge is a person who has been shorn of those practical and social endowments by which we apprehend the world.9 If such a creature actually existed, we can well imagine that he would be gripped by the question of how we can know anything.
A residue of the Enlightenment’s project of liberation continues to provide the intellectual backdrop for contemporary cognitive science. This becomes clear in the discipline’s treatment of attention.
Much More Than a Searchlight
One of the persistent claims in cognitive psychology is that attention comes in two flavors: the kind that we direct according to our will, and the kind that is an automatic response to stimuli that are irresistible, such as a loud bang outside the window. This typology maps very neatly onto the autonomy/heteronomy opposition of Enlightenment moral philosophy—so neatly that it raises the suspicion that cognitive psychology may be a continuation of moral philosophy by other means, whether wittingly or not.
But I want to focus on another of the enduring tropes in the field, namely, that attention can be understood through an analogy with a searchlight. The point of the analogy is to capture the selectivity of attention, as against the indiscriminateness with which sensual data impinge on us. We actively pick something out from the flux of the available. The analogy is consistent with a more general picture of the human subject as having a certain independence from the surrounding world, which is conceived not as a situation that we are bound up in, and that shapes us, but as an “objective,” neutral environment, within which we pursue purposes that we generate out of ourselves. A searchlight’s beam is the same regardless of what it shines upon; it is unchanged by what it illuminates. In this sense, it captures our notion of mental autonomy.
The attention-as-searchlight metaphor is apt for some specialized mental tasks. It seems to capture pretty well the mental operations that are investigated in “object discrimination” studies, in which one scans a field of objects that hold no intrinsic interest to find one that meets certain criteria. If one is tasked with finding the red object amid a field of blue ones, the red one jumps out immediately; there’s no need to scan. Likewise, if one is tasked with picking out circles that are mixed in with a field of squares, they can be found at a glance. But if the objects vary on more than one dimension at once, for example color and shape, and one has to find the red circles in a field consisting of circles and squares, both of which may be red or blue, then one scans in the manner of a searchlight.
In a laboratory setting, tasks like this are used because they are easily replicable. The researcher presents meaningless, affect-neutral objects (such as squares) on a computer screen, because in using a computer she can vary the size, shape, color, and location of objects in a controlled manner, arbitrarily according to any hypothesis she might want to investigate, and quickly. Because every variable is constituted on the computer, it is already coded, ready to enter into a statistical analysis. This is very convenient. But an unwarranted elision generally follows, whereby such artificial tasks are taken to be paradigmatic of everyday cognition.
It has been said (by the virtual reality pioneer Jaron Lanier) that what makes something real is that it can’t be represented to completion. I find this helpful. Conversely, given the methods the discipline of psychology is wed to, what makes something suitable as a stimulus in a psychological study is that it can be represented to completion. Yet if the argument I develop in my book, The World Beyond Your Head, is generally on the right track, then it surely matters that the objects presented in such studies are ones the subject doesn’t “have to do with.” They are not part of a pragmatic situation the subject finds himself in, other than the meta-situation of the laboratory itself. They have no relevance for him; they are not integrated into a context of meaning; he has no interests at stake. This is true even if the subject is motivated to perform well on the task by money or some other reward that is extrinsic to the task itself. In the particular case of object discrimination studies, it is unsurprising that when presented with a field of meaningless objects, one would scan like a searchlight. Such a setting is unworldly in a very definite sense, constructed to reflect some (necessarily) narrow hypothesis under investigation. For the subject, the unworldliness or unrealness of the stimuli means there is nothing to be learned, no prospect of becoming interested. Mechanically scanning is about all one could do.
In a more naturalistic setting, closer to the way we actually inhabit the world, the searchlight metaphor for attention seems not quite fitting. It would be more apt to say that a particular thing pulls us in, and the character of our regard is altered in accord with its object: a mischievous smile from an alluring stranger, or a car wreck on the shoulder of the road. Things have significance for us, and this significance is not generic, like a quantity of candlepower that has been reflected back. It is qualitative, corresponding to the heterogeneity of the world and of human experience.
The disanalogy with a searchlight holds in the other direction as well, as the particular character of one’s attention may alter its object (for example, the alluring stranger) and begin a reciprocal process of mutual attunement that transforms the initial situation. A searchlight illuminates only things that already exist, out there in the darkness, whereas attention can itself be fruitful.
The searchlight metaphor for attention is also hard to square with the experience of learning, in which we are pulled further into some phenomenon and our mental energies are not simply reflected back but refracted. That is, our involvement with the object is often deflected from the agenda we initially brought to it, as we learn more about it. If you have ever raised children, rehabbed an old house, or done a bit of landscaping, then you already know this.
To understand the appeal of the searchlight metaphor, one has to do a bit of genealogy and understand the problem it was initially offered to solve. The notion of attention as free, as unconditioned by its object, was offered as a post hoc attempt to compensate for the subject’s lack of mental freedom in the basic stimulus notion of perception that was advanced by the school called empiricism. According to this view, sensual data is the sole source of our knowledge. Further, empiricism posits a one-to-one correspondence and constant connection between environmental stimuli and elementary perceptions. This is called the “constancy hypothesis.” It seems to over-determine our mental contents; empiricism has to be supplemented with a theory of selective attention if it is to be plausible.
The problem is that, as Maurice Merleau-Ponty wrote, “the empiricist’s subject, once he has been allowed some initiative—which is the justification for a theory of attention—can receive only absolute freedom.”10 This is one instance of an oscillation between radical freedom and radical determinism that seems to recur in modern thought. When the subject is conceived as being radically separate from the world he apprehends (Descartes’s foundational “I think”), the only possibilities seem to be that he is passively being impinged upon by it, or that he is observing it with the disinterested freedom of a spectator, who already knows what he is looking for. In neither case is he led out of himself.
If empiricism represents a deterministic strand of modern epistemology, the freedomism I sketched above is most evident in rationalism, or intellectualism, as Merleau-Ponty calls it. According to this dispensation, consciousness already possesses the intelligible structure of all its objects. This applies to objects we pay no heed to, no less than to those we are interested in. Whatever intelligibility we find in an object (for example, seeing the form of a circle in a circular plate) was put there by consciousness. Therefore, the act of attention “does not herald any new relationship” to the object, as Merleau-Ponty says. Whereas, for empiricism, consciousness is entirely receptive, for intellectualism it constitutes everything. For both, Merleau-Ponty points out, “attention remains an abstract and ineffective power, because it has no work to perform.”11 He further writes,
Empiricism cannot see that we need to know what we are looking for, otherwise we would not be looking for it, and intellectualism fails to see that we need to be ignorant of what we are looking for, or equally again we would not be searching. They are in agreement in that neither can grasp consciousness in the act of learning, and that neither attaches due importance to that circumscribed ignorance, that still “empty” but already determinate intention which is attention itself.12
I think Merleau-Ponty is right to say that attention “has no work to perform” in these founding epistemologies of modern thought. Given the cultural crisis of attention we are now experiencing, it behooves us to get a fuller, more humanistic understanding of attention.
This is a fertile time in the philosophy of mind and cognitive science for thinking about attention. A riot of different theories is on offer. My point in revisiting the Enlightenment has been to suggest that, in going forward, we need to be alert to the intellectual origins of cognitive science in the polemics of centuries ago, and loosen the grip of freedom versus determinism on our thinking.
Getting free of polemics that no longer answer to our circumstances is itself a polemical project, necessarily. But it can take its bearings from our positive intuitions about the good life, and the role that attention might play in it. Consider this article a promissory note along these lines. What I hope to deliver, in my forthcoming book, is a full investigation of those ecologies of attention that are established in skilled practices—the kind that pull us out of ourselves and allow us to join the world in a mood of appreciative discernment.
1. I owe this formulation of Locke’s theological argument to Matthew Feeney (personal communication).
2. Ibid. Feeney points out the counterfactual character of state-of-nature theories such as Locke’s. “The details of real humans in the real world were [taken to be] an impediment to understanding. [It was stipulated that] you can understand man and his moral and practical endowments only in isolation from the settings in which he might realize those endowments or, much less, be endowed with them in the first place.”
3. John Locke, An Essay Concerning Human Understanding (1690), 4.20.6.
4. Charles Taylor, Sources of the Self: The Making of the Modern Identity (Cambridge: Harvard University Press, 1989), 169.
5. Locke, An Essay, 1.4.23.
6. Plato’s Socrates had, of course, emphasized getting free of mere opinion and convention in order to arrive at the truth. But in principle one could be aided in this by some wise authority. (In the parable of the cave, there is a mysterious stranger who turns one around from the images projected on the wall by the poets, and leads one up to the sun.) The point is to grasp an order that is independent of ourselves. How you get to this point is not the important thing. The important thing is to turn one’s attention from ephemeral, material things, and from mere images, to the unchanging Forms—from one set of external objects to another set of external objects. Once again, it is Charles Taylor who has clarified this contrast between ancient and modern thought on the question of where truth is to be found.
7. Martin Heidegger, “Modern Science, Metaphysics, and Mathematics,” in Basic Writings (San Francisco: Harper and Row, 1977), 267–68.
8. There is an obvious strangeness here: From a beginning point that is radically self-enclosed (Descartes’s “I think”), our task is to arrive at “a view from nowhere” (to use the philosopher Thomas Nagel’s apt phrase) in which there remains no trace of the knower himself.
9. This way of putting it offers a direct line of contrast with what I call “the situated self” in my book The World Beyond Your Head. There I argue that the world—in particular, other people—plays a deep, constitutive role in shaping our cognitive faculties, not least through the “ecologies of attention” that emerge in skilled practices. (See Matthew B. Crawford, The World Beyond Your Head, Farrar, Straus and Giroux, March 2015.)
10. Maurice Merleau-Ponty, Phenomenology of Perception (New York: Routledge, 2002), 31.
11. Ibid., 32.
12. Ibid., 32–33.