The Hedgehog Review: Vol. 18 No. 2 (Summer 2016)

The Future of Search

The Internet of Us: Knowing More and Understanding Less in the Age of Big Data

Michael Patrick Lynch

New York, NY: W.W. Norton, 2016.

Today, to search is to google. Specifically, it is to use Google’s search engine to find something on the Web. As for those other searches that once helped define the human condition—for meaning, love, purpose, or God—those have, in little more than a decade, assumed almost secondary importance.

From its now almost apocryphal beginnings at Stanford in 1998, Google was described by its cofounders Larry Page and Sergey Brin as a technology designed to “organize the world’s information.” In an early press release, Brin declared that “a perfect search engine will process and understand all the information in the world.” In its first decade, Google focused on the former undertaking—organizing information on a global scale—by trying to map the World Wide Web, essentially an ever-expanding and highly fragile set of documents connected by hyperlinks. Google’s search engine helped people navigate the Web by tracing the links among webpages. Google’s search engineers thought of the Web as a medium of documents. Accordingly, the search engine they designed was document-centric, keyword-based, and highly contextual. Search results were always embedded in particular texts—documents that, once clicked, framed information in a particular way.

Google’s first generation of search technology captured an order intrinsic to the Web itself. In their original paper outlining the “anatomy of a large-scale hypertextual Web search engine,” Page and Brin explained that they had started from the insight that, as John Battelle put it in Wired in 2005, the Web “was loosely based on the premise of citation—after all, what is a link but a citation?” The original aim of Google, then, was to trace all of these links among pages, not only the outgoing links from individual pages but also the incoming links. The goal was a more complete model of the citational structure of the Web. And the groundbreaking technology Page and Brin devised was PageRank—a proprietary algorithm that modeled the links that constituted the Web.

What distinguished Google from other search engines, including the early Yahoo product, was that it did not simply collect citations. The PageRank algorithm took the citational logic a step further by differentiating among pages, determining the value of a page according to the number and quality of the links pointing to it. A page with more incoming links, or with links from other highly ranked pages, would have a higher value, because PageRank recognized it as more important. The PageRank value of a page was, at bottom, a function of its connectedness: how many pages cited it, and how highly those citing pages were themselves ranked.
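
To make that recursion concrete, here is a minimal power-iteration sketch, in Python, of the PageRank formula as published in Page and Brin’s paper. It is an illustration, not Google’s actual code: the damping factor of 0.85 follows the value their paper suggests, while the function name pagerank and the four-page example web are assumptions invented for this sketch.

# Toy illustration of the PageRank idea: a page's score depends recursively
# on the scores of the pages that cite it. Assumes every page has at least
# one outgoing link (real implementations must also handle "dangling" pages).

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}  # start with a uniform score

    for _ in range(iterations):
        new_rank = {}
        for page in pages:
            # Rank flows in from every page that cites this one, divided
            # evenly among the citing page's outgoing links.
            incoming = sum(
                rank[src] / len(targets)
                for src, targets in links.items()
                if page in targets
            )
            new_rank[page] = (1 - damping) / n + damping * incoming
        rank = new_rank
    return rank

# A four-page "web" in which hub is cited by all three other pages.
web = {
    "hub": ["a", "b"],
    "a": ["hub"],
    "b": ["hub"],
    "c": ["hub", "a"],
}

for page, score in sorted(pagerank(web).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")

On this toy graph, hub, the page cited by all the others, converges to the highest score by a wide margin: exactly the number-and-quality-of-links logic described above.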

Chad Wellmon is an associate professor of German studies at the University of Virginia and a fellow at the Institute for Advanced Studies in Culture. He is the author most recently of Organizing Enlightenment: Information Overload and the Invention of the Modern Research University and coeditor of Anti-Education: Friedrich Nietzsche’s Lectures on Education.
