The Hedgehog Review

The Hedgehog Review: Vol. 20 No. 1 (Spring 2018)

Digital Metaphysics: The Cybernetic Idealism of Warren McCulloch

Leif Weatherby

In the autumn of 1948, Warren McCulloch, neurophysiologist, bohemian cold warrior, and a founder of machine learning, stood before a gathering of brain scientists at the California Institute of Technology. The occasion was the inaugural Hixon Symposium, and the topic, cognitive behavior—more specifically, “Cerebral Mechanisms in Behavior.” John von Neumann, for whom modern computer architecture is named, was in the audience. Questions about the new digital machines hung in the air, even as the brain remained the dominant topic. McCulloch decided to talk metaphysics.

He divided the world into “mind” and “body,” noting that the physicist claims to study only the latter, unless we compel him to include himself, as physicist, in his account of matter. Then he is faced with a choice: refuse, and remain a physicist, or assent, and become a metaphysician.1 McCulloch thought he could do this dilemma one better, by developing what he called an “experimental epistemology.” Every path forward, as he would later claim, lay “through the den of the metaphysician.”2 The physicist, in reality, has no choice but to assent, because the “synthetic a priori is the theme of all our physiological psychology,” McCulloch concluded.3

The term “synthetic a priori” is taken from the philosophy of Immanuel Kant.4 To know something a priori is to know it in isolation from experience. The category of “unity,” for instance, is something we don’t derive from experience, but rather take to experience. (Similarly, we apply causality to the world, but can’t learn about cause and effect from it.) “Synthetic,” in Kant’s vocabulary, means that what we know does not proceed from its mere concept: This kind of knowledge counts as “about” something outside itself. One example of a synthetic a priori judgment would be addition (7 + 5 = 12). Another would be the Newtonian principle that every action causes an equal and opposite reaction.

You’d hardly expect to hear technical terms from Kant’s philosophy in a setting like the Hixon Symposium, but McCulloch was telling a roomful of scientists that he was searching for the basis of any possible knowledge that is really informative about the world. Somehow, he was going to do that “experimentally,” even mathematically. The “synthetic a priori” was to be found somewhere between the brain and the new digital “automata.” By placing the physicist’s dilemma at the center of his project, McCulloch was reanimating the program of German Idealism—the philosophical movement that began with Kant and ended with Hegel. In doing so, he produced the beginning of a metaphysics we urgently need in the era of Big Data and machine learning, as the digital fades from the horizon of our control, and even our ability to grasp it conceptually.

McCulloch’s name is again in the news, in the wake of the recent “explosion” in artificial intelligence. “Machine learning”—in which software programs called “neural nets” (McCulloch had called them “nervous nets”) are exposed to millions of iterations of specified processes and build layered “knowledge” of those processes—has moved from twentieth-century fantasy to twenty-first-century fact.5 The New York Times Magazine devoted a cover story to the “new machine learning” last year, recounting parts of this history even while burying McCulloch in a hyperlink.6 But his influence is felt everywhere, in our artifacts and our algorithms. Hardware and software alike, it turns out, were part of a metaphysics McCulloch drew from German Idealism.

At the beginning of the digital revolution, there existed a speculative energy that we could use now. It was put at the service not of innovation or disruption but of maintenance and politics, of establishing categories to put our digital world on a better course. McCulloch’s evocation of Kant can show us a way to think through the digitization of our world while avoiding the extremes of digital utopianism and digital denialism.

Invasive Bits and Data

In the twenty-first century, bits and data have entered the world in new and increasingly invasive ways. An effusion of new buzzwords—for example, Internet of Things, Big Data, machine learning—underscores this shift. Philosophers and CEOs alike imagine the world as bits and data. The so-called digital philosophy associated with Stephen Wolfram (among others) even maintains that matter itself is digital. Silicon Valley magnate Elon Musk, who runs companies that aim to colonize Mars and to hook brains directly into computers, thinks there is only a vanishingly small chance that we live in “base reality” rather than a computer simulation.7 We might call this general philosophy “data metaphysics,” since it projects the quantifiability and structure of digital data onto being as such.

Even if we don’t buy the premise of data metaphysics (and we probably shouldn’t), we can no longer imagine the digital through the visual metaphor of the screen, the synecdoche of the computer. In fact, there’s no concrete metaphor for the digital at all anymore, not since we entered the “petabyte era,” in which the digital forms and guides large-scale social processes. As early as 2008, an article in Wired included the assertion that “at petabytes we ran out of organizational analogies.”8 Today, “the digital” is a set of quantities interacting algorithmically at levels of sheer volume and complexity so far outside what we can imagine that size-based metaphors have broken down entirely.9 We use another cybernetic term, “black box,” to denote that we can’t see what the digital is even by analogy. Algorithms guide or even dictate social processes like corporate management,10 medical diagnosis, and infrastructure design.11 Data, according to this fantasy, are set to replace frameworks, hypotheses, and decisions. But what if, instead of phasing out intellectual labor, the new presence of data is mutating our categories, changing the very way we imagine the world, and ourselves in it?

We should proceed cautiously when we change the terms of our metaphysics, but caution is hardly the watchword of the new style of the digital. Computer scientist Pedro Domingos writes in The Master Algorithm that machine learning is “the scientific method on steroids.”12 It will change—indeed, has already changed—every field, from health care to politics to journalism. Websites use machine learning to “decide” which headlines to push; political charisma is heavily filtered through individual-level predictions of voting behavior; deep-learning programs can diagnose cancer better than doctors can. The “master algorithm” will create a “perfect” understanding of the world and of society, connecting sciences both to other sciences and to social processes. In other words, the process we have called “automation” since the Industrial Revolution will itself be automated: “The Industrial Revolution automated manual work and the Information Revolution did the same for mental work, but machine learning automated automation itself.”13 In the final clause of this sentence, Domingos imagines society running on the steam of some other intelligence. Automation has been a major force in global history for at least two centuries. To automate automation is to imagine historical causality itself as controlled by artificial intelligence. Domingos is sanguine about automated automation making things better, but it isn’t clear why. This kind of thinking leaves us without a conceptual foundation on which to build an understanding of the ubiquitous digital processes in our society, deferring even historical causality to machine learning. McCulloch’s group of scientists had a glimmer of such an approach, a way to understand and govern the very machine learning they had set in motion.

Digital processes that can only seem abstract to us are now causal factors in our society; the world itself now features processes that used to be the preserve of the mind. The implicit assumption is that these processes know more about the world, or know it better, than we do. Big Data can target individuals rather than general rules, and thereby forge an unmediated link between data and world. At least at the level of empirical and infrastructural design, Big Data can replace cause with correlation as the primary means of framing inquiry. The Bigger the Data get, the less we’ll have to rely on the “lazy” notion of causality at all.

As Internet researcher Viktor Mayer-Schönberger and journalist Kenneth Cukier write in their book Big Data, “In the age of small data, we were driven by hypotheses about how the world worked, which we then attempted to validate by collecting and analyzing data. In the future, our understanding will be driven more by the abundance of data rather than by hypotheses.”14 Big Data will “fundamentally transform the way we make sense of the world,” fundamentally altering causality.

When McCulloch turned to Kant, it was precisely to think through digital causality. For Kant, our knowledge of the world could not be derived only from empirical observation. Nature is a composite of the “law” we give to it and a “thing in itself,” about which we can know nothing. For our knowledge to be meaningful, it has to be incomplete. This means that “nature” is a composite of mind and world that never touches the “real,” or the thing itself. Metaphysics was not about creating a picture of the world the way it “really was”—for example, as data—but instead about the material sites where meaning was generated. For McCulloch, one such site was the digital, where a strange new order of signs was on the rise.

McCulloch never thought the real would yield to data; nor did he ever think humans would defer to their machines. Instead, he saw that the machines would make new principles of abstraction—new kinds of cognition—available. It was a kind of mutated Kantian question. Kant had wanted to know how much mind is in the world, and McCulloch thought the sum might shift. That is, the shape of the relation between abstraction and the real might change with the new machines.

Cybernetic Idealism

Before the recent renewal of interest in his work, McCulloch was probably best known for cybernetics, the postwar scientific movement that he founded with mathematician Norbert Wiener and anthropologist Gregory Bateson. Equal parts universal science and pop culture fad, cybernetics (based on the Greek word for “steersman”) was a collaboration of machine design, physiology, and philosophical ambition, providing a template for the picture of science and technology we take for granted today.15 Information theory, feedback loops, and human-machine interfaces were central: Animals and humans, machines and information, were cast in a new vocabulary of communication and control, based on the capacities of new communication technologies. “Information” came to denote the measure of communicated intelligence, increasingly measured in binary digits, or bits. Historian of science Paul Erickson and his colleagues describe this period as a passage from Enlightenment reason to “quantifying rationality,”16 a shift from a qualitative capacity to judge to an extensive but narrow push to measure. But some Enlightenment notions survived the transition.

Cybernetics, for one, was shot through with German Idealism. Its founders constantly returned to the philosophical trajectory running from Kant to Hegel, extending it back to include Gottfried Wilhelm Leibniz, the Enlightenment polymath who invented the infinitesimal calculus independently of Newton, casually laid out the principles for binary notation as an aid for his own mathematical reasoning, and attempted to construct a machine for computations. Leibniz did all this while conceiving, on a parallel track, the notion of a “universal characteristic,” a universal logical language. Leibniz animates, for example, Wiener’s best-known philosophical statement about the cybernetics movement:


The mechanical brain does not secrete thought “as the liver does bile,” as the earlier materialists claimed, nor does it put it out in the form of energy, as the muscle puts out its activity. Information is information, not matter or energy. No materialism which does not admit this can survive at the present day.17


No materialism without information meant that information was itself material18—that messages were not vapor, but something weightier. “Information is…not matter” means that it was not a part of regular physics as it existed before cybernetics. A materialism based in physics would be possible only if physics could be altered to include information, a point Wiener always connected to Leibniz.19

Neither matter nor mind survives autonomously in cybernetics. The signature notions of cybernetics—feedback, information, control—could not be described in the language of matter or the language of ideas alone. To speak cybernetics was to abandon the binary built into so many disciplinary vocabularies and to include the physicist himself in physics. That meant employing a “dialectical” form of reasoning that would reverberate through all subsequent digital technologies.

Historian of science Peter Galison has argued that Wiener “vaulted cybernetics into a philosophy of nature,”20 using Leibniz among other sources. But it was a philosophy of nature striated by information, constituted of cognition, now also to be included in the empire of physics. Leibniz, in other words, authorized the philosophical ambitions of cybernetics to move beyond the binary alternative between material and ideal. Leibniz and Kant were also sources for McCulloch’s search for the conditions of cognition—the “synthetic a priori”—in the digital structure of the brain.

Embodied Computation

With the expanded, supercharged digital regime of the twenty-first century, we have returned to the lexicon of the 1948 Hixon Symposium, using the brain to describe the digital and vice versa. The digital processing of vast amounts of data through “neural nets,” crucial software entities in machine learning, is suddenly ubiquitous. The notion that machines might learn or adapt to inputs gained prominence during the 1940s, when digitized data were not so big at all.

Follow the hyperlink to which the 2016 New York Times Magazine article on machine learning consigned McCulloch and you can read a 1943 paper by him and his protégé, Walter Pitts, with the curious title “A Logical Calculus of the Ideas Immanent in Nervous Activity.”21 In it, McCulloch and Pitts contend that the “all-or-none” character of neurons (excitatory impulses either cause them to fire or not, with no other possible state) means that the brain is a digital computing machine, in the sense that it can encode the propositions of Boolean logic, the flow-chart algebra of propositions developed in the nineteenth century by the mathematician George Boole. To describe this neuronal logic, McCulloch and Pitts introduced the term “nervous nets”:


Because of the “all-or-none” character of nervous activity, neural events and the relations among them can be treated by means of propositional logic. It is found that the behavior of every net can be described in these terms.22


Every neuron either fires or does not; it is, like a switch, either “on” or “off.” McCulloch saw that this was precisely analogous to the switchboards telephone operators used, and recognized that this switch-like behavior was also a principle for the construction of calculating machines like the ones von Neumann would work on. The infrastructure of our digital machines still relies on combinations of ones and zeroes to encode logical operations like “and” and “or.” McCulloch and Pitts were not alone in seeing this point, but they thought it could lead far beyond deterministic calculations.
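
The all-or-none principle is easy to state in contemporary terms. The minimal sketch below (the function name, threshold values, and Python notation are illustrative conveniences, not McCulloch and Pitts’s own formalism) shows how a single unit that merely fires or does not fire already encodes elementary logical relations such as “and” and “or”:

```python
# A minimal sketch of an "all-or-none" threshold unit in the spirit of the
# McCulloch-Pitts neuron; names and threshold values are illustrative only.

def fires(inputs, threshold):
    """Return 1 (fire) if enough inputs are active to reach the threshold, else 0."""
    return 1 if sum(inputs) >= threshold else 0

# Two inputs with threshold 2: the unit fires only when both inputs fire ("and").
assert fires([1, 1], threshold=2) == 1
assert fires([1, 0], threshold=2) == 0

# The same two inputs with threshold 1: the unit fires when either fires ("or").
assert fires([1, 0], threshold=1) == 1
assert fires([0, 0], threshold=1) == 0
```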

The nets are by all appearances a characterization of brains, a contribution to neuroscience. The twist, however, is that the nets are not actual neurons at all, but instead a generalized mathematical form for the possibility of embodied computation. Hovering between neurophysiology and the new machine design, McCulloch and Pitts set down principles for digital operation and organization. Their paper not only formed the basis for both dominant contemporary approaches to artificial intelligence (serial and parallel processing)—no mean legacy on its own—but also directly influenced von Neumann. McCulloch and Pitts wrote that this digital activity of the nets would always have a “semiotic” character, giving rise to psychological understanding. Propositions encoded in nets, in other words, were signs (in Greek, sēmeia) written in the brain.23 The interface between matter and meaning was limited to the form and order of these signs. Only at this interface could a world come into view; the nets were the premise of any understanding of nature.

Nets operating dynamically should, in principle, be able to “compute” anything any machine could, or possibly anything a mind could “think.” Because these nets were not meant to be a representation of the brain itself but only of one of its functions, one could, in principle, design nets that could perform tasks too complex for the brain. It might even be that these nets would begin to make “their own” associative chains. This is what the recent wave of machine learning is now testing, something possible only because of the amount of digitized data available to test the nets, data that did not exist in 1943. For two generations the theory remained speculative, building toward its current success only in fits and starts. Now that its practical implications in machine learning are being felt across the globe, a look at its metaphysics is overdue.
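
The claim to generality rests on composition: wiring all-or-none units together encodes relations that no single unit can. The sketch below is again illustrative rather than drawn from the 1943 paper; it adopts an absolute-inhibition rule in the spirit of the original model, ignores synaptic delay, and builds “exclusive or,” which a lone threshold unit of this kind cannot encode, out of three units:

```python
# A sketch of composing McCulloch-Pitts-style units into a small net.
# Illustrative only: absolute inhibition (any active inhibitory input vetoes
# firing) follows the spirit of the 1943 model; synaptic delay is ignored.

def unit(excitatory, inhibitory, threshold):
    """Fire (1) only if no inhibitory input is active and the excitatory
    inputs reach the threshold."""
    if any(inhibitory):
        return 0
    return 1 if sum(excitatory) >= threshold else 0

def exclusive_or(a, b):
    either = unit([a, b], [], threshold=1)   # fires if a or b fires
    both = unit([a, b], [], threshold=2)     # fires only if a and b both fire
    # The output fires when "either" fires and "both" does not veto it.
    return unit([either], [both], threshold=1)

assert [exclusive_or(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]
```

Chained together in this way, and allowed to loop back on themselves, such units are what gives the nets their claim to compute whatever a machine can.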

Digital Metaphysics

It was Walter Pitts, according to one of McCulloch’s young collaborators, Jerome Lettvin, who brought Leibniz into the working group:


Walter had read Leibniz, who had shown that any task which can be described completely and unambiguously in a finite number of words can be done by a logical machine. Leibniz had developed the concept of computers almost three centuries back and had even developed a concept of how to program them. I didn't realize that at the time. All I knew was that Walter had dredged this idea out of Leibniz, and then he and Warren sat down and asked whether or not you could consider the nervous system such a device. So they hammered out the essay at the end of ’42.24


Pitts, who was in his late teens at the time, had run away from home and been taken on as a student by the philosopher Rudolf Carnap at the University of Chicago. McCulloch met Pitts and, enchanted by him, moved him into his home near the University of Illinois. Lettvin’s account implies that Pitts and McCulloch used Leibniz to define the question of neuronal-logical activity in the same terms in which Alan Turing had imagined a “universal” machine just a few years before.

As Turing would after him, Leibniz had disaggregated tasks into a finite number of steps, a concept of the algorithm that Pitts and McCulloch would then look for in the brain. As their contemporary, the philosopher Paul Schrecker, would put it:


If the ultimate aim of Leibniz’s efforts were formulated in a very few words, it could be called the invention of a general method of constructing algorisms [sic]. In order to approach this aim he had not only to analyze the formal structure of algorisms, but also to investigate the particular structure of reality which facilitates the reliability and efficacy of this operational procedure. These two inquiries are the task of logic and metaphysics respectively.25


To program, in Lettvin’s gloss, is to fit the formal structure of algorithms efficaciously into a reliable structure of reality. Although logic and metaphysics remained separate in Leibniz, Pitts and McCulloch saw them as converging in Turing’s work and in their own. From that vantage point, the “automata theory” that would then be realized in computers was really a way of testing the boundaries between the material and the logical. This is why McCulloch called it “experimental epistemology.” Kant might have called it “metaphysics.”26

In support of an application for a Guggenheim grant by another member of his research group, McCulloch described their research this way: “What we seek to understand ultimately is what Kant called the transcendental unity of apperception.”27 In late reflections on his work with a group at the Research Laboratory of Electronics at the Massachusetts Institute of Technology, he reiterated this aim, though in slightly different words: “to understand the physiological foundation of perception.”28 To this end, he had worked closely with the neurophysiologist Joannes Gregorius Dusser de Barenne, who in turn had studied with the neo-Kantian Rudolf Magnus, who had sought a “physiological a priori,” a physical seat of perception. But McCulloch broke with this notion and with his positivist predecessors, reminding readers that Kant himself had “rejected the cerebro-spinal fluid” as the seat of cognition. How can we square “experimental epistemology” with this rejection? If McCulloch et al. were seeking the “synthetic a priori” experimentally, how could this be anything other than a physiological basis for perception?

The answer is to be found in the question McCulloch had set as his life’s work as early as 1917 and posed as the title of a 1961 lecture: “What is a Number, that a Man May Know it, and What is a Man, that He May Know a Number?” McCulloch wanted to see Kant’s principle in numbers and quantity. But this “quantity” was not just a measure of mind; it was an original capacity—what Kant called a “faculty”—that transcended any dualism between matter and idea. The digital was a form of cognition, not just a technique of measurement. But it was cognition embedded in material. Every event in the neural net had a “semiotic” character—this meant that the “logical calculus” was the site of the sign, where material organization and meaning coincided before we could pick them apart analytically. This structure was the essence of the digital. Digitally encoded signs constituted the “world” and our understanding of it, as irreconcilable but permanently linked aspects of the nets. Kant had allowed McCulloch to make the digital transcendental and real at the same time.

McCulloch was exploiting Kant’s argumentative strategy here, not adopting his views. Kant thought that our tendency to imagine “things” as separate from “ideas” was secondary to the process of judgment, which first produced these elements (“synthetically”) before they could be separated.29 For “judgment” McCulloch substituted “neural propositional logic.” The unified source of that synthesis was in the capacity to calculate quantity. That capacity existed because the brain—or at least the formal “net”—was digital.

Even if brains couldn’t be entirely described by the theoretical nets, they could compute propositions, manipulate discrete values. This meant that our “contribution to nature,” in Kant’s sense, was also digital. The new machines that von Neumann and others were designing meant that no obvious limit could be set on what counted as digital embodiment. On the other side of the revolution that McCulloch shepherded into existence, we find ourselves asking his question again: What is the digital? How is it integrated into a nondigital world, and how should we administer that integration? McCulloch’s idealism points to the formation of categories—crucially, the category of causality—as the hinge on which the digital-social interface turns.

Toward a Philosophy of the Digital

Kant famously stated that he had “awoken from his dogmatic slumber”30 by reading the Scottish Enlightenment philosopher David Hume. Hume maintained a bright line between “matters of fact” and “relations of ideas.” This meant that mental habit was central. If one wanted to form a meaningful sentence about the world (“this causes that”), then one would have to habituate the mind by noticing common correlations and regularly drawing the conclusion that one thing “caused” another. Kant disagreed. Cause, he reasoned, could not just be a mental habit, because it had a hidden premise: not that one thing followed another in time, but that it necessarily did so. To conceive of a necessity in the world was to add something more than habit to observation—to contribute a law to nature.

McCulloch and Pitts concluded their 1943 paper by making this contribution to nature digital. “Causality,” they wrote, “which requires description of states and a law of necessary connection relating them, has appeared in several forms in several sciences, but never, except in statistics, has it been as irreciprocal as in this theory.”31 The “state” of a neural net—just like the state of a Turing machine—could be specified at a given time t, and the relation between successive times could provide “the law of necessary connection” that would allow one to “compute from the description of any state that of the succeeding state.” But neuronal habits could, they continued, never give rise to the concept of necessity, because “the inclusion of disjunctive relations prevents complete determination” of the preceding state. Knowledge would remain incomplete, abstract, and semiautonomous, just as Kant had claimed:


Thus our knowledge of the world, including ourselves, is incomplete as to space and indefinite as to time. This ignorance, implicit in all our brains, is the counterpart of the abstraction which renders our knowledge useful. The role of brains in determining the epistemic relations of our theories to our observations and of these to the facts is all too clear, for it is apparent that every idea and every sensation is realized by activity within that net, and by no such activity are the actual afferents fully determined.32
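
The “irreciprocal” causality this passage describes can be made concrete in one more sketch (illustrative only, not an example taken from the paper): the present state of a disjunctive unit fixes its successor by rule, but the firing it produces does not fix which inputs caused it, so the preceding state remains underdetermined.

```python
# A sketch of one-way ("irreciprocal") digital causality; the disjunctive
# unit below is an illustration, not an example from the 1943 paper.

def or_unit_next(a, b):
    """Forward step: the unit fires at the next moment if either input fires now."""
    return 1 if (a or b) else 0

# Forward: each present state determines exactly one successor state.
assert or_unit_next(1, 0) == 1

# Backward: a firing is compatible with three distinct preceding states, so
# the "law of necessary connection" cannot be run in reverse.
preceding = [(a, b) for a in (0, 1) for b in (0, 1) if or_unit_next(a, b) == 1]
assert preceding == [(0, 1), (1, 0), (1, 1)]
```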


In other words, the brain establishes a way to receive and then independently structure impulses from outside itself, which makes its activity autonomous and necessarily “abstract.” It realizes states of affairs that are both matters of its logical structure and “afferent impulses.” The brain, then, is the intersection of mind and world, and even the source of both of those terms. “Experimental epistemology” confirms this point:


Thus empiry [empiricism or empirical results] confirms that if our nets are undefined, our facts are undefined, and to the “real” we can attribute not so much as one quality or “form.” With determination of the net, the unknowable object of knowledge, the “thing in itself,” ceases to be unknowable.33


This conclusion appears to be little more than a restatement of Kant’s principle in neurophysiological terms. The problem of real knowledge of the world (the “synthetic a priori”) is recast as the partial autonomy of neural nets. But McCulloch went one step further.

McCulloch’s question, “What is a Man, that He May Know a Number?,” places a digital capacity at the apex of his experimental epistemology. McCulloch shifted the Kantian framework from a purely epistemological endeavor into a technological one. The McCulloch-Pitts neuron shows how logical quantities are central to the understanding of embodied cognition, or, in other words, how the digital is real. Even if the paper had not led to the proliferation of actual digital technologies—both mainframe and personal computers, in addition to machine learning techniques—it would still stand as a philosophical achievement. So far from “reducing” human knowledge to number, it made number a dynamic feature of any possible cognition seated at the knife’s edge between idea and matter. The digital, we might say, is the specification of the conjunction of these two, the material organization of signs.

Digital embodiment might take any number of forms—in the brain, in the computer, in machine learning processes, or in something as yet unknown. But it can never itself provide its own interpretive framework, because it can never fully “determine” its nets. The digital is a real factor in the way our world is organized—even in the very way we should understand “world” at all—but is still limited to propositions. The digital never exceeds the order of signs; it plays a role in our world in precisely the way that all signs do, giving rise to the world—constituting that world, as Kant had it—and our understanding of it at the same time. McCulloch’s insistence on the “semiotic” character of digital embodiment militates against any “data metaphysics”; using Kant to ground a view of the digital limits it to symbols, but also takes the way we use those symbols extremely seriously, as agents in our shared world. The digital is not, for this way of looking at things, anything other than (very) long series of signs. But without signs, we could have no world in the first place. The digital, precisely as a kind of abstraction, constitutes our metaphysics, forcing us to re-evaluate how we deal in even the most basic categories, like that of causality.

McCulloch liked to call causality a “superstition.”34 As sociologist William Davies has recently argued, the shift from traditional statistical methods to data-driven processes abandons causality in favor of a deeper and more specific ability to influence and control behavior, like self-care in nutrition and medication, or even how one votes.35 But Davies is not as sanguine about the promises of this new category as Domingos is in his description of the “master algorithm,” and with good reason. When we let data make decisions for us, we confer causality on the digital without participating in the way that causality governs our world. This deferral persists even in the concept of “correlation,” which, if anything, harbors an even deeper implicit commitment to the notion that the data and reality are one and the same. But correlations are abstractions, too: The absence of cause doesn’t ensure that we are dealing in the real. Kant allowed the McCulloch group to think of the digital as the generative site of meaning, something both abstract and metaphysical, epochal yet not inevitable. We need to regain this sense that the digital is not inevitable, and that means participating in its causal actions in our world. Doing this, in turn, means understanding it.

Even the Biggest Data of all is still just a set of interlocking propositions. When we seek correlations in data, we’re seeking an understanding of nature that is both encoded in digital nets and, precisely because of that, necessarily incomplete, abstract yet real. This doesn’t mean that the new data processing can’t be of major importance to our society, as it clearly already is. It does mean, however, that we shouldn’t confuse the pragmatic success of algorithms with the structure of reality. A digital metaphysics based on McCulloch’s Kantianism reminds us that the size and volume of statements don’t change the fact that they are abstract. Those abstractions are real, embedded in the materiality of brains and digital machines. They constitute our world, and can’t be subtracted from it. But data literally can’t be promoted to decision makers. Even when we “defer” to machine learning, we’re really just deferring to extraordinarily complex sets of inputs—symbolic inputs—conditioned by human-machine interactions in the first place. The digital is causal in the same way any material system of signs is causal: It stabilizes channels of symbolic and other exchange, but can’t fully determine the shape that exchange will take. Automation can’t be automated, because signs persist as open-ended abstract systems making up our world.

The digital, for McCulloch, was—to repeat—real but not inevitable, as it often seems today. The point was all too obvious in the 1940s, when the first digital computers were still under construction. Now it is virtually impossible to opt out of the digital’s causal force field. The digital is part of reality, but it is not the motor of history. McCulloch suggests to us, by way of Kant, that it is not by brute force but by close reading (maybe of some new kind) that we may come to live with the digital.36 We have to find a way not to defer to a data-based understanding of a world that nevertheless includes data. To do that, we have to pay close attention to the way digital processes work as sets of signs—as irreducibly semiotic processes. McCulloch pointed toward a philosophy of the digital that we urgently need to elaborate today.

Endnotes

  1. Warren Sturgis McCulloch, Embodiments of Mind (Cambridge, MA: MIT Press, 1965), 73.
  2. Ibid., 156.
  3. Ibid., 74.
  4. Immanuel Kant, Critique of Pure Reason, trans. Paul Guyer and Allen W. Wood (Cambridge, England: Cambridge University Press, 1998), 142–43.
  5. Ethem Alpaydin gives an excellent short overview in Machine Learning: The New AI (Cambridge, MA: MIT Press, 2016).
  6. Gideon Lewis-Kraus, “The Great A.I. Awakening,” The New York Times Magazine, December 14, 2016, https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html.
  7. Andrew Griffin, “Elon Musk: The Chance We Are Not Living in a Computer Simulation Is ‘One in Billions,’” The Independent, June 2, 2016, https://www.independent.co.uk/life-style/gadgets-and-tech/news/elon-musk-ai-artificial-intelligence-computer-simulation-gaming-virtual-reality-a7060941.html.
  8. Chris Anderson, “The End of Theory: The Data Deluge Makes the Scientific Method Obsolete,” Wired, June 23, 2008, https://www.wired.com/2008/06/pb-theory/.
  9. See Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Cambridge, MA: Harvard University Press, 2015).
  10. Kathleen O’Toole, “Susan Athey: How Big Data Changes Business Management,” Insights by Stanford Business, September 20, 2013, https://www.gsb.stanford.edu/insights/susan-athey-how-big-data-changes-business-management.
  11. Keller Easterling, Extrastatecraft: The Power of Infrastructure Space (New York, NY: Verso Books, 2014); Orit Halpern, Beautiful Data: A History of Vision and Research since 1945 (Durham, NC: Duke University Press, 2014); Joshua J. Yates, “Saving the Soul of the Smart City,” The Hedgehog Review 19, no. 2 (2017): 18–35, http://www.iasc-culture.org/THR/THR_article_2017_Summer_Yates.php.
  12. Pedro Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (New York, NY: Basic Books, 2015), 13.
  13. Ibid., 9.
  14. Viktor Mayer-Schönberger and Kenneth Cukier, Big Data: A Revolution That Will Change the Way We Live, Work, and Think (New York, NY: Houghton-Mifflin, 2013), 68–69, 70.
  15. Ronald Kline, The Cybernetics Moment, or Why We Call Our Age the Information Age (Baltimore, MD: Johns Hopkins University Press, 2015); N. Katherine Hayles, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics (Chicago, IL: University of Chicago Press, 1999).
  16. Paul Erickson et al., “Enlightenment Reason, Cold War Rationality, and the Rule of Rules,” in How Reason Almost Lost Its Mind: The Strange Career of Cold War Rationality (Chicago, IL: University of Chicago Press, 2013): 27–50.
  17. Norbert Wiener, Cybernetics, or Control and Communication in the Animal and the Machine (Cambridge, MA: MIT Press, 1985), 132. First published 1948.
  18. See Norbert Wiener, The Human Use of Human Beings (London, England: Free Association, 1989), 18 ff.
  19. Wiener posited this connection as early as 1932 in a paper titled “Back to Leibniz! Physics Reoccupies an Abandoned Position,” Technology Review 34 (1932), 201–03, 222–24. See also Wiener, The Human Use of Human Beings.
  20. Peter Galison, “The Ontology of the Enemy: Norbert Wiener and the Cybernetic Vision,” Critical Inquiry 21, no. 1 (1994): 228–66, especially 233.
  21. Warren S. McCulloch and Walter Pitts, “A Logical Calculus of the Ideas Immanent in Nervous Activity,” Bulletin of Mathematical Biology 52, no. 1/2 (1990): 99–115; reprinted from Bulletin of Mathematical Biophysics 5 (1943): 115–33, https://link.springer.com/article/10.1007/BF02478259.
  22. McCulloch, Embodiments of Mind, 19. See also Tara Abraham, “(Physio)Logical Circuits: The Intellectual Origins of the McCulloch-Pitts Neural Networks,” Journal of the History of the Behavioral Sciences 38, no. 1 (2002): 3–25.
  23. McCulloch, Embodiments of Mind, 37.
  24. James A. Anderson and Edward Rosenfeld, eds., Talking Nets: An Oral History of Neural Networks (Cambridge, MA: MIT Press, 2000), 3.
  25. Paul Schrecker, “Leibniz and the Art of Inventing Algorisms,” Journal of the History of Ideas 8, no. 1 (1947): 107–16, here 108. Also cited in Lily Kay, “From Logical Neurons to Poetic Embodiments of Mind: Warren S. McCulloch’s Project in Neuroscience,” Science in Context 14, no. 4 (2001): 591–614, here 595, and in Abraham, “(Physio)Logical Circuits.”
  26. Here I am building on excellent work done by Michael A. Arbib and Orit Halpern; see Arbib, “Warren McCulloch’s Search for the Logic of the Nervous System,” Perspectives in Biology and Medicine 43, no. 2 (2000): 193–216; and especially Halpern, “Cybernetic Sense,” Interdisciplinary Science Reviews 37, no. 3 (2012): 218–36. See also Jean-Pierre Dupuy, On the Origins of Cognitive Science: The Mechanization of the Mind, trans. M.B. DeBevoise (Cambridge, MA: MIT Press, 2009), 93–95.
  27. Warren S. McCulloch Papers, B: M139: III, American Philosophical Society, Philadelphia, PA: letter from McCulloch to Henry Moe at the Guggenheim Foundation in support of Jerome Lettvin, December 30, 1959. Also cited in Halpern, “Cybernetic Sense,” 232.
  28. Warren S. McCulloch, “Recollections of the Many Sources of Cybernetics,” in The Collected Works of Warren S. McCulloch, vol. 1, ed. Rook McCulloch (Salinas, CA: Intersystems, 1989), 21–49; also http://www.univie.ac.at/constructivism/archive/fulltexts/2312.html.
  29. Kant, Critique of Pure Reason, 245–67.
  30. McCulloch recapitulates this story in Embodiments of Mind, 6.
  31. McCulloch, Embodiments of Mind, 35. McCulloch also calls Kant’s notion of causality one of two “fertile succubi,” 297.
  32. Ibid., 35.
  33. Ibid.
  34. Wendy Hui Kyong Chun argues that this delicate relation between knowledge of the unknowable and embodiment is at the root of software in general. See Chun, Programmed Visions: Software and Memory (Cambridge, MA: MIT Press, 2011), 153–57.
  35. William Davies, “How Statistics Lost Their Power—and Why We Should Fear What Comes Next,” The Guardian, January 19, 2017, https://www.theguardian.com/politics/2017/jan/19/crisis-of-statistics-big-data-democracy.
  36. This point runs parallel to that made by Johanna Drucker for graphical display in the digital: “Thus the representation of knowledge is as crucial to its cultural force as any other facet of its production. The graphical forms of display that have come to the fore in digital humanities in the last decade are borrowed from a mechanistic approach to realism, and the common conception of data in those forms needs to be completely rethought for humanistic work.” Johanna Drucker, “Humanities Approaches to Graphical Display,” Digital Humanities Quarterly 5, no. 1 (2011), http://www.digitalhumanities.org/dhq/vol/5/1/000091/000091.html. Accessed November 30, 2017.

Leif Weatherby is assistant professor of German at New York University and the author of Transplanting the Metaphysical Organ: German Romanticism between Leibniz and Marx.

Reprinted from The Hedgehog Review 20.1 (Spring 2018). This essay may not be resold, reprinted, or redistributed for compensation of any kind without prior written permission. Please contact The Hedgehog Review for further details.
