Case Analysis

Read the following three articles.

  • Varian, H.R. (2010). Computer Mediated Transactions, American Economic Review 100(2): 1–10.
  • Zuboff, S. (2015). Big other: surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75-89.
  • Runge & Seufert (2021, Apr 26) – Apple Is Changing How Digital Ads Work. Are Advertisers Prepared?
Answer the following questions.
  1. Varian (2010) introduces the idea of computer mediated transactions (CMT). What is a CMT, and what are the four types he discusses?
  2. What is surveillance capitalism?
  3. According to Zuboff, what are data extraction and data analysis? What is their relevance to surveillance capitalism?
  4. How does surveillance capitalism relate to CMT? Are they the same/different? Use evidence from the readings to support your answer.
  5. Briefly outline three potential benefits of surveillance capitalism and three potential risks of surveillance capitalism.
  6. How will Apple’s new approach to privacy settings change the profit calculus of “Big Tech” and surveillance capitalism moving forward?

Computer Mediated Transactions

Author(s): Hal R. Varian

Source: The American Economic Review, May 2010, Vol. 100, No. 2, Papers and Proceedings of the One Hundred Twenty-Second Annual Meeting of the American Economic Association (May 2010), pp. 1-10

Published by: American Economic Association

Stable URL: https://www.jstor.org/stable/27804953


American Economic Review: Papers & Proceedings 100 (May 2010): 1-10
http://www.aeaweb.org/articles.php?doi=10.1257/aer.100.2.1

RICHARD T. ELY LECTURE

Computer Mediated Transactions

By Hal R. Varian*

Every now and then a set of technologies becomes available that sets off a period of “combinatorial innovation.” Think of standardized mechanical parts in the 1800s, the gasoline engine in the early 1900s, electronics in the 1920s, integrated circuits in the 1970s, and the Internet in the last decade or so.

The component parts of these technologies can be combined and recombined by innovators to create new devices and applications. Since these innovators are working in parallel with similar components, it is common to see simultaneous invention. There are many well-known examples, such as the electric light, the airplane, the automobile, and the telephone. Many scholars have described such periods of innovation, using terms such as “recombinant growth,” “general purpose technologies,” “cumulative synthesis,” and “clusters of innovation.”1

The Internet and the Web are wonderful
examples of combinatorial innovation. In the
last 15 years we have seen a huge proliferation
of Web applications, all built from a basic set of
component technologies.

The Internet itself was a rather unlikely innovation; I like to describe it as a “lab experiment that got loose.” Since the Internet arose from the research community rather than from the private sector, it had no obvious business model. Other public computer networks, such as AOL, CompuServe, and Minitel, generally used subscription models but were centrally controlled and offered little scope for innovation at the user level. The Internet won out over these alternatives, precisely because it offered a flexible set of component technologies which encouraged combinatorial innovation.

*UC Berkeley and Google, 1600 Amphitheatre Parkway, Mountain View, CA 94043 (e-mail: [email protected]).

1 See, for example, Weitzman (1998), Bresnahan and Trajtenberg (1995), Bresnahan (forthcoming), Rosenberg (1976), Usher (1998), and Schumpeter (2000).

The earlier waves of combinatorial innovation required decades or more to play out. For example, Hounshell (1984) argues that the Utopian vision of interchangeable parts took well over a century to be realized. The Web was invented in the early 1990s but didn’t really become widely used till the mid-1990s. Since then we have seen a huge number of novel applications, from Web browsers, to search engines, to social networks, to mention just a few examples. As with the Internet, the Web initially had no real business model but offered a fertile ground for combinatorial innovation.

Why was innovation so rapid on the Internet? The reason is that the component parts were all bits. They were programming languages, protocols, standards, software libraries, productivity tools and the like. There was no time to manufacture, no inventory management, and no shipping delay. You never run out of HTML, just like you never run out of e-mail. New tools could be sent around the world in seconds, and innovators could combine and recombine these bits to create new Web applications.

This parallel invention has led to a burst of global innovation in Web applications. It is true that the Internet was an American innovation, but the Web was invented by an Englishman living in Switzerland. Linux, the most used operating system on the Web, came from Finland, as did MySQL, a widely used database for Web applications. Skype, which uses the Internet for voice communication, came from Estonia.

Of course there were many other technologies with worldwide innovation, such as automobiles, airplanes, photography, and incandescent lighting, to name just a few. However, applications for the Internet, which is inherently a communications technology, could be developed everywhere in the world in parallel, leading to the very rapid innovation we have observed.


My interest in this lecture is in the economic
aspects of these technological developments.
I start with a point so mundane and obvious, it
barely seems worth mentioning.

I. Computer Mediated Transactions

Nowadays, most economic transactions involve a computer. Sometimes the computer takes the form of a smart cash register, sometimes it is part of a sophisticated point of sale system, and sometimes it is a Web site. In each of these cases, the computer creates a record of the transaction.

The record-keeping role was the original motivation for adding the computer to the transaction. Creating a record of transactions is the first step in building an accounting system, thereby enabling a firm to understand how its business is doing.

But now that these computers are in place, they can be used for many other purposes. In this lecture I would like to explore some of the ways that computer mediation can affect economic transactions. I argue that these computer mediated transactions have enabled significant improvements in the way transactions are carried out and will continue to impact the economy for the foreseeable future.

I classify the impact of computer mediated transactions into four main categories:

  • facilitate new forms of contract;
  • facilitate data extraction and analysis;
  • facilitate controlled experimentation;
  • facilitate personalization and customization.

II. Enable New Forms of Contract

Contracts are fundamental to commerce. The simplest commercial contract says “I will do X if you do Y,” as in “I will give you $1 if you give me a cup of coffee.” Of course, this requires that the actions be verifiable. Just because I ask for coffee doesn’t mean that I will get some. As Abraham Lincoln supposedly remarked, “If this is coffee, please bring me some tea; but if this is tea, please bring me some coffee.”

A computer in the middle of a transaction can observe and verify many aspects of a transaction. The record produced by the computer can allow the contracting parties to condition the contract on terms that were previously unobservable, thereby allowing for more efficient transactions.

I am not claiming that increased observability will necessarily lead to more efficient contracts. There are counterexamples to the claim that “more information is better,” such as the famous Hirshleifer (1971) example. I am only claiming that additional information allows for more efficient contracts.

Of course, the study of contracts is a highly
developed field in economics, and it is hardly
novel to suggest that contractual form depends
on what is observable. What is interesting, I
think, is the way that progress in information
technology enables new contractual forms.

Consider, for example, a rental-car agency that buys insurance based on accident rates, and that accident rates in turn depend on the speed at which a vehicle is operated. All renters would prefer to drive within the speed limit if they were compensated with a lower rental fee. However, if there is no way to monitor the speed of the rental car, such a contractual provision is unenforceable. Putting a computer transmitter in the trunk of the car that records the vehicle’s speed makes the contract enforceable and offers a Pareto improvement on the original arrangements.2

The transportation sector has capitalized on
the availability of computerized transmitters to
create more efficient contracts in a number of
areas.

  • Car dealers are selling cars with “starter interrupt” devices that inhibit operations if car payments are missed (Press 2006).3 Similar interrupt devices attached to breath analyzers are mandated for drunk driving offenders in many states.
  • Parents can buy a device known as “MyKey” which allows them to limit auto speed, cap the volume on the radio, require seat belt use, and encourage other safe-driving habits for teenage drivers (Bunkley and Vlasic 2008).4

2 This is a particularly simple case. If drivers have heterogeneous preferences, those who prefer to speed may be made worse off by the availability of such a device.

3 Los Angeles Times. 2006. “For Some High-Risk Auto Buyers, Repo Man Is a High-Tech Gadget.” November 18.

4 Bunkley, Nick and Bill Vlasic. 2008. “Ensuring Junior Goes for a Mild Ride.” New York Times, October 7.


  • In the economics literature, Hubbard (2000) and Baker and Hubbard (2004) examine a variety of ways that vehicular monitoring systems have impacted the trucking industry.

There are many other examples of computer mediated contracts. One nice example is the work of Dana and Spier (2001) and Mortimer (2008) that describes the efficiency gains resulting from revenue sharing in the videotape rental industry.

Videotapes were originally purchased by retail stores from distributors for about $65 per tape. Since the videos were so expensive, stores bought only a few. As a result, the popular videos quickly disappeared from the shelves, making everyone unhappy.

In 1998, retailers and distributors adopted a new business model, a revenue sharing arrangement in which the stores paid a small upfront fee of $3 to $8 but split the revenue when the video was rented, with 40 percent to 60 percent going to the retailer. Stores no longer had an incentive to economize on purchases, and all parties to the transaction (retailers, distributors, and customers) were made better off.
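To make the incentive shift concrete, here is a minimal back-of-the-envelope sketch in Python. Only the $65 wholesale price, the $3 to $8 upfront fee, and the 40 to 60 percent split come from the text; the rental price is a hypothetical figure chosen for illustration.

    # Break-even rentals per additional copy under the two contracts described above.
    rental_price = 3.0          # hypothetical price charged per rental

    # Old model: store pays $65 per tape up front and keeps all rental revenue.
    breakeven_purchase = 65.0 / rental_price

    # Revenue-sharing model: $5 upfront (midpoint of $3-$8), store keeps 50%
    # of rental revenue (midpoint of the 40-60 percent split).
    breakeven_share = 5.0 / (rental_price * 0.5)

    print(breakeven_purchase)   # ~21.7 rentals needed to justify another copy
    print(breakeven_share)      # ~3.3 rentals, so stores stock many more copies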

Sharing revenue at point of sale requires that both parties be able to monitor the transaction. The technological innovations of bar code scanning, the computerized cash register, and dial-up computer networks were the technologies that enabled revenue sharing arrangements.

Of course, when a transaction takes place
online, revenue sharing is much easier. Online
advertising is a case in point where revenue
from an advertiser for an ad impression or click
may be split among publishers, ad exchanges, ad
networks, affiliates and other parties based on
contractual formulas.

I have emphasized the benefits from computers offering more information to contracting parties, but there are also cases where computers can be used to improve contractual performance by hiding information, by using cryptographic methods. A picturesque example is the “cocaine auction protocol,” which describes an auction mechanism designed to hide as much information as possible (Stajano and Anderson 1999).

Finally, I should mention “algorithmic game theory,” an exciting hybrid of computer science and economic theory. This subject brings computational considerations to game theory (how a particular solution can be computed) and strategic considerations to algorithm design (is a particular algorithm actually incentive compatible?). See Nisan, Roughgarden, Tardos, Vazirani, eds (2007) for a comprehensive collection of articles and Varian (1995) for an early contribution.5

A. Some History of Monitoring Technologies

Though I have emphasized computer mediated transactions, the definition of computer can be considered to be quite broad. The earliest example I have been able to find of an accounting technology that enabled new forms of contract involves Mediterranean shipping circa 3300 BC.

The challenge was how to write a “bill of lading” for long distance trade in societies that were pre-literate and pre-numerate. The brilliant solution was to introduce small clay tokens, known as “bullae,” which were small representations of the material being transported. As each barrel of olive oil was loaded onto a ship, a barrel-shaped token was placed in a clay envelope. After the loading was completed, the envelope was baked in a kiln and given to the ship’s captain. At the other end of the voyage, the envelope was broken open and the tokens were compared to the barrels of oil on the ship as they were unloaded. If the numbers matched, the contract was verified. Later, marks were scratched on the outside of the bullae that indicated the number of tokens inside, and some authors believe that this innovation led to the invention of writing between 3400 and 3300 BC (Glassner, Bahrani and de Miero 2005).
A somewhat more recent example is the invention of the cash register in 1883 by James Ritty. Ritty was a saloon owner who discovered that his employees were stealing money. He hit upon the idea of building a device which would record each transaction on a paper tape, an invention that he patented under the name of “the incorruptible cashier” (Patent 271,368). Ritty’s machine formed the basis for the National Cash Register company, founded in 1884. The NCR device added a cash drawer and a bell that sounded “ka-ching” whenever the drawer was opened, which alerted the owner to the transaction, thereby discouraging pilfering. It is thought that this improved monitoring technology made retailers willing to hire employees outside the immediate family, leading to larger and more efficient establishments. See Yates (2000) for a more detailed account of the role of office machinery in the development of commercial enterprises.

5 Varian, Hal R. 2005. “Technology Levels the Business Playing Field,” New York Times, August 25.

B. Online Advertising

Online advertising serves as a poster child for algorithmic mechanism design. A Pasadena company called GoTo came up with the idea of ranking search results using an auction. Users did not find this particular form of search attractive, so GoTo switched to using their auction to rank advertisements. In the original auction, ads were ranked by “bid per click” and advertisers paid the amount they bid. After consultation with auction theorists, GoTo moved to a second-price auction: an advertiser paid a price per click determined by the bid of the advertiser in the next lower position. See Battelle (2005) and Levy (2009) for accounts of the development of these auctions.
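As a concrete illustration of the pricing rule just described, here is a minimal sketch in Python (not from the article): advertisers are ranked by their bid per click and each pays the bid of the advertiser one position below. The bids and the reserve price are invented.

    # Minimal sketch of the "next lower bid" pricing rule described above.
    # Bids are hypothetical dollar amounts per click.
    bids = {"adv_a": 2.50, "adv_b": 1.75, "adv_c": 0.90, "adv_d": 0.40}

    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)

    for i, (advertiser, bid) in enumerate(ranked):
        # Price per click = bid of the advertiser in the next lower position
        # (the last advertiser pays a reserve price, assumed here to be $0.10).
        price = ranked[i + 1][1] if i + 1 < len(ranked) else 0.10
        print(advertiser, "position", i + 1, "pays", price, "per click")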

There is a fundamental divergence of incentives in advertising. The publisher (the content provider) has space on its Web page for an ad and it wants to sell these ad impressions to the highest bidders. The advertiser doesn’t care directly about ad impressions but does care about visitors to its Web site, and ultimately about sales of its products. So the publisher wants to sell impressions, but the advertiser wants to buy clicks.

This is like an international trade transaction where the buyer wants to pay in euros but the seller wants to receive dollars. The solution in both cases is the same: an exchange rate. In the context of online advertising the exchange rate is the predicted clickthrough rate, an estimate of how many clicks a particular ad impression will receive. This allows one to convert the advertiser’s offered bid per click to an equivalent bid per impression, allowing the publisher to sell each impression to the highest bidder.
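The conversion itself is just a multiplication. The sketch below, with made-up bids and clickthrough estimates, ranks ads by bid per click times predicted clickthrough rate, that is, by an effective bid per impression.

    # Convert bid-per-click to an effective bid-per-impression using a
    # predicted clickthrough rate (pCTR). All numbers are hypothetical.
    ads = [
        {"name": "ad_x", "bid_per_click": 2.00, "pctr": 0.010},
        {"name": "ad_y", "bid_per_click": 0.50, "pctr": 0.060},
        {"name": "ad_z", "bid_per_click": 1.20, "pctr": 0.020},
    ]

    for ad in ads:
        # Expected revenue to the publisher from showing this ad once.
        ad["bid_per_impression"] = ad["bid_per_click"] * ad["pctr"]

    # The impression goes to the highest effective bidder (ad_y here, despite
    # its low bid per click, because of its high predicted clickthrough rate).
    winner = max(ads, key=lambda ad: ad["bid_per_impression"])
    print(winner["name"], winner["bid_per_impression"])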

This mechanism aligns the interests of the
buyer and seller but creates other problems. If
the advertiser pays only for clicks, then it has no
direct incentive to economize on impressions.

But excessive impressions impose an attention
cost on users, so some further attention to ad
quality is important in order to ensure that ad
impressions remain relevant to users.
Nowadays, the major providers of search engine advertising all estimate clickthrough rates along with other measures of ad quality and use auctions to sell ads. Economists have applied the tools of game theory and mechanism design to analyze the properties of these auctions. See, for example, Athey and Ellison (2007), Edelman, Ostrovsky and Schwarz (2007), Varian (2007), and Varian (2009).

III. Enable Data Extraction and Analysis

The data from computer mediated transactions can be analyzed and used to improve the performance of future transactions.

An illustrative example is the Sabre air passenger reservation system offered by American Airlines. The original conception in 1953 was simply to automate the creation of an airline reservation. However, by the time the system was released in 1960, it had become apparent that such a system could also be used to study patterns in the airline reservation process: the acronym Sabre stands for Semi-Automatic Business Research Environment (Sabre 2009).

The existence of airline reservation systems enabled sophisticated differential pricing (also known as “yield management”) in the transportation business. See Smith, Leimkuhler and Darrow (1992) for the history of yield management in the airline industry and Talluri and van Ryzin (2004) for a textbook treatment.

Many firms have built data warehouses based on transaction level data which can then be used as input to analytic models for customer behavior. A prominent example is supermarket scanner data, which has been widely used in economic analysis (see Nevo and Wolfram 2002 and Hendel and Nevo 2006 for just two examples). Scanner data has also been useful in constructing price indexes (Feenstra and Shapiro, eds. 2003; Farm Foundation 2003), since it allows for much more direct and timely access to prices. The fact that the data is timely is worth emphasizing, since it allows for real-time analysis and intervention at both the business and policy level.

Choi and Varian (2009b) and Choi and Varian (2009a) use real-time publicly available search engine data to predict the current level of economic activity for automobile, real estate, retail trade, travel, and unemployment indicators. There are many other sources of real-time data, such as credit card data, package delivery data, and financial data. Clements and Hendry (2003) and Castle and Hendry (2009) have coined the term “nowcasting” to describe the use of real-time data to estimate the current state of the economy. They use a variety of econometric techniques to deal with the problems of variable selection, gaps, lags, structural changes, and so on. One promising development is that much of the real-time data is also available at state and city levels, allowing for regional macroeconomic analysis.
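A minimal sketch of the kind of nowcasting regression described here, in Python with simulated data: the current value of an indicator is regressed on its own lag plus a contemporaneous real-time search-interest index. In the applications cited above the index would come from search data and the indicator from an official release; everything below is an assumption for illustration.

    # Nowcasting sketch: regress this month's indicator on its own lag plus a
    # contemporaneous search-interest index. Data are simulated.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 120
    search = rng.normal(size=n)                          # real-time predictor
    indicator = 0.6 * search + rng.normal(scale=0.5, size=n)

    y = indicator[1:]                                    # current value
    X = np.column_stack([np.ones(n - 1),                 # intercept
                         indicator[:-1],                 # own lag
                         search[1:]])                    # same-month search index
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(coef)  # intercept, lag coefficient, search-index coefficient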

In the last 20 years or so, the field of machine learning has made tremendous strides in “data mining.” This term was once pejorative (at least among econometricians) but now enjoys a somewhat better reputation due to the exciting applications developed by computer scientists and statisticians; see Hastie, Friedman and Tibshirani (2009) for a technical overview. One of the big problems with data mining is overfitting, but various sorts of cross-validation techniques have been developed that mitigate this problem. Econometricians have only begun to utilize these techniques; the previously mentioned work by Castle and Hendry (2009) is noteworthy in this respect.
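For readers unfamiliar with cross-validation, the sketch below (plain NumPy, simulated data, not from the article) shows the basic idea: hold out one fold at a time, fit on the rest, and score on the held-out fold, so that a model that merely memorizes noise is penalized.

    # k-fold cross-validation sketch for choosing a polynomial degree.
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(-1, 1, 60)
    y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=60)   # true model is linear

    def cv_error(degree, k=5):
        idx = np.arange(len(x))
        errors = []
        for fold in np.array_split(idx, k):
            train = np.setdiff1d(idx, fold)
            coefs = np.polyfit(x[train], y[train], degree)   # fit on k-1 folds
            pred = np.polyval(coefs, x[fold])                # score on held-out fold
            errors.append(np.mean((y[fold] - pred) ** 2))
        return np.mean(errors)

    for degree in (1, 3, 9):
        print(degree, round(cv_error(degree), 4))  # high degrees overfit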

IV. Enable Experimentation

As econometricians have observed, “if you torture the data long enough it will confess to anything.” It is difficult to establish causality from retrospective data analysis, so it is noteworthy that computer mediation allows one not only to measure economic activity but also to conduct controlled experiments.

In particular, it is relatively easy to implement experiments on Web-based systems. Such experiments can be conducted at the query level, the user level, or the geographic level.
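A user-level experiment of the kind described here amounts to randomly assigning users to a control or treatment version of the page and comparing outcome rates. The following is a minimal sketch with simulated users; the clickthrough rates are invented.

    # Minimal user-level A/B test sketch: random assignment, then compare
    # clickthrough rates with a simple two-proportion z-statistic.
    import math
    import random

    random.seed(42)
    true_rate = {"control": 0.050, "treatment": 0.056}   # invented rates

    clicks = {"control": 0, "treatment": 0}
    n = {"control": 0, "treatment": 0}
    for _ in range(200_000):
        arm = random.choice(["control", "treatment"])    # random assignment
        n[arm] += 1
        if random.random() < true_rate[arm]:
            clicks[arm] += 1

    p_c = clicks["control"] / n["control"]
    p_t = clicks["treatment"] / n["treatment"]
    p_pool = (clicks["control"] + clicks["treatment"]) / (n["control"] + n["treatment"])
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n["control"] + 1 / n["treatment"]))
    print(p_c, p_t, (p_t - p_c) / se)   # z-statistic for the observed lift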
In 2008, Google ran 6,000 experiments involving Web search which resulted in 450-500 changes in the system. Some of these were experiments with the user interface, some were basic changes to the algorithm (Hoff 2009). The ads team at Google ran a similar number of experiments, tweaking everything from the background color of the ads, to the spacing between the ads and search results, to the underlying ranking algorithm.

In the 1980s, Japanese manufacturers touted their “kaizen” system that allowed for “continuous improvement” of the production process. In a well-designed Web-based business, one can have continuous improvement of the product itself: the Web site.

Google and other search engines also offer various experimental platforms to advertisers and publishers, such as “ad rotation,” which rotates ad creatives among various alternatives to choose the one that performs best, and “website optimizer,” a system that allows Web sites to try out different designs or layouts and determine which performs best.

Building a system that allows for experimentation is critical for future improvement, but it is all too often left out of initial implementation. This is a shame, since it is the early versions of a system that are often in need of the most improvement.

Cloud computing, which I will discuss later in the lecture, offers a model for “software as service,” which typically means software which is hosted in a remote data center and accessed via a Web interface. There are numerous advantages to this architecture, but one that is not sufficiently appreciated is the fact that it allows for controlled experiments which can in turn lead to continuous improvement of the system. Alternatives, such as packaged software, make experimentation much more difficult.

Ideally, experiments lead to understanding of causal relations that can then be modeled. In the case of Web applications there are typically two “economic agents”: the users and the applications. The applications are already modeled via the source code that is used to implement them, so all that is necessary is to model the user behavior. The resulting model will often take the form of a computer simulation which can be used to understand how the system works.

Nice examples of this are the Bid Simulator and Bid Forecasting tools offered by Google and Yahoo. These tools give an estimate of the cost and clicks associated with possible bids. The cost per click is determined by the rules of the auction and can be calculated directly; the clicks are part of user behavior and must be estimated statistically. Putting them together gives a model of the auction outcomes.
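A toy version of such a tool, under the auction rules sketched earlier: for each candidate bid, the cost per click follows mechanically from the competing bids, while the expected clicks come from a statistical estimate (here an invented click curve). All figures are hypothetical.

    # Toy bid simulator: cost per click is the highest competing bid below the
    # candidate bid (the auction rule); expected clicks come from an invented,
    # statistically estimated click curve by position.
    competing_bids = [1.80, 1.10, 0.60]                 # other advertisers' bids per click
    estimated_clicks = {1: 100, 2: 60, 3: 35, 4: 20}    # clicks per day by position

    def simulate(bid):
        position = 1 + sum(b > bid for b in competing_bids)
        lower = [b for b in competing_bids if b < bid]
        cost_per_click = max(lower) if lower else 0.10  # assumed reserve price
        clicks = estimated_clicks[position]
        return position, clicks, round(clicks * cost_per_click, 2)

    for bid in (0.50, 1.00, 1.50, 2.00):
        print(bid, simulate(bid))   # (position, expected clicks, expected daily cost)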


A. How Experiments Change Business

The fact that computer mediation drastically reduces the cost of experimentation changes the role of management. As Kohavi, Longbotham, Sommerfield, and Henne (2008) have emphasized, decisions should be based on carefully controlled experiments rather than “the Highest Paid Person’s Opinion (HiPPO).”

If experiments are costly, expert opinion by management is a plausible way to make decisions. But when experiments are cheap, they are likely to provide more reliable answers than opinions, even opinions from highly paid people. Furthermore, even when experienced managers have better-than-average opinions, it is likely that there are more useful things for them to do than sit around a table debating about which background colors will appeal to Web users. The right response from managers to such questions should be “run an experiment.”

Businesses have always engaged in experimentation in one form or another. But the availability of computer mediated transactions makes these experiments much less expensive and more flexible than they have been in the past.

V. Customization and Personalization

Finally, computer mediated transactions allow for customization and personalization of the interactions by basing current transactions on earlier transactions or other relevant information.

Instead of a “one size fits all” model, the Web offers a “market of one.” Amazon, for example, makes suggestions of things to buy based on your previous purchases, or on purchases of consumers like you. These suggestions can be based on “recommender systems” of various sorts (Resnick and Varian 1997).
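The “consumers like you” idea can be illustrated with a few lines of item-based collaborative filtering, one common family of recommender systems (the article does not specify Amazon’s method). The purchase matrix below is invented.

    # Item-based collaborative filtering sketch: recommend the unpurchased item
    # whose purchase pattern across users is most similar (cosine similarity)
    # to the items the target user already bought.
    import numpy as np

    # rows = users, columns = items (1 = purchased); all values invented
    purchases = np.array([
        [1, 1, 0, 0, 1],
        [1, 1, 1, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 1, 1, 1],
    ])

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

    target_user = purchases[0]
    scores = {}
    for item in range(purchases.shape[1]):
        if target_user[item] == 0:                      # only unpurchased items
            sims = [cosine(purchases[:, item], purchases[:, owned])
                    for owned in np.flatnonzero(target_user)]
            scores[item] = np.mean(sims)

    print(max(scores, key=scores.get))   # item index to recommend to user 0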

Not only content, but prices may also be personalized, leading to various forms of differential pricing. What are the welfare effects of such personalized pricing? Acquisti and Varian (2005) examine a model in which firms can condition prices on past history. They find that the ability of firms to extract surplus is quite limited when consumers are sophisticated. In fact, firms have to offer some sort of “enhanced services” in order to justify higher prices.

Varian (2005b) suggests that there is a “third welfare theorem” that applies to (admittedly extreme) cases with perfect price discrimination and free entry: perfect price discrimination results in the optimal amount of output being sold, while free entry pushes profits to zero, conferring all benefits to the consumers. See Ulph and Vulkan (2007) for a theoretical analysis of first-degree price discrimination.

The same sort of personalization can occur in advertising. Search engine advertising is inherently customized since ads are shown based on the user’s query. Google and Yahoo offer services that allow users to specify their areas of interest and then see ads related to those interests. It is also relatively common for advertisers to use various forms of “re-targeting” that allow them to show ads based on previous responses of users to related ads.

VI. Transactions among Workers

My emphasis so far has been on transactions among buyers, sellers and advertisers. But computers can also mediate transactions among workers. The resulting improvements in communication and coordination can lead to productivity gains, as documented in the large literature on the impact of computers on productivity.

In a series of works, Paul David (1990, 1991a, 1991b) has drawn an extended analogy between the productivity impact of electricity at the end of the nineteenth century and the productivity impact of computing at the end of the twentieth century.6 Originally factories were powered by waterwheels which drove a shaft, and all the machines in the factory had to connect to this central shaft. The manufacturing process involved moving the piece being assembled from station to station during assembly.

The power source evolved from waterwheels to steam engines to electric motors. Eventually electric motors were attached to each machine, which allowed more flexibility in how the machines were arranged within the factory. However, factories still stuck to the time-honored arrangements, grouping the same sort of machines in the same location: all the lathes in one place, saws in another, and drills in yet another.

It wasn’t until Henry Ford invented the assembly line in the first decade of the twentieth century that the flexibility offered by electric motors was really appreciated.7 As David (1990) shows, the productivity impact of the assembly line was huge, and over the last century manufacturing has become far more efficient.8

6 See David (1990), David (1991b), and David (1991a).

I want to extend David (1990)’s assembly line analogy to examine “knowledge worker productivity” (Drucker 1999). Prior to the widespread use of the personal computer, office documents were produced via a laborious process. A memo was dictated to a stenographer who later typed the document, making a half-dozen carbon copies. The typed manuscript was corrected by the author and circulated for comments. Just as with pre-assembly line production, the partially produced product was carried around from station to station for modification. When the comments all came back, the document was retyped, reproduced and recirculated.

In the latter half of the twentieth century there were some productivity enhancements for this basic process, such as Wite-Out, Post-it notes, and photocopy machines. But the basic production process remained the same for a century. When the personal computer came along, editing became much easier, and the process of collaborative document production involved handing floppy disks back and forth. The advent of e-mail allowed one to eliminate the floppy disk and simply mail attachments from person to person.

All of these effects contributed to improving the quantity and quality of collaborative document production. However, they all mimicked the same physical process: circulating a document from person to person for comment. Editing, version control, tracking changes, circulation of the documents and other tasks remained difficult.

Nowadays, we have a new model for document production enabled by “cloud computing” (Armbrust, Fox, Griffith, Joseph, Katz, Konwinski, Lee, Patterson, Rabkin, Stoica, and Zaharia 2009; Wikipedia 2009b). In this model, documents live “in the cloud,” that is, in some data center on the Internet. The documents can be accessed any time, from anywhere, on any device, by any authorized user.

7 Ford (1923) suggests that the inspiration for the assembly line came from observing the meatpacking plants in Chicago, where an animal carcass was hung on hooks and moved down a line where workers carved off different pieces. If you could use this process to disassemble a cow, Ford figured you could use it to assemble a car.

8 I do not mean to imply that the only benefit from electric motors came from improved factory layout. Motors were also more efficient than drive belts, and the building construction was simpler.

Cloud computing dramatically changes the production process for knowledge work. Now there is a single master copy that can be viewed and edited by all relevant parties, with version control, check points and document restore built in. All sorts of collaboration, including collaboration across time and space, have become far easier.

Instead of carrying the document around from collaborator to collaborator, a single master copy of the document can be edited by all interested parties. By allowing workflow to be reorganized, cloud computing changes knowledge worker productivity the same way that electricity changed the productivity of physical labor.

VII. Deployment of Applications

As mentioned earlier, cloud computing offers “software as service.” This architecture reduces support costs and makes it easier to update and improve applications.

But cloud computing doesn’t just offer “software as service,” it also offers “platform as service,” which means that software developers can deploy new applications using the cloud infrastructure.

Nowadays, it is possible for a small company to purchase data storage, hosting services, an applications development environment, and Internet connectivity “off the shelf” from vendors such as Amazon, Google, IBM, Microsoft, Sun and others.

The “platform as service” model turns what used to be a fixed cost for small Web applications into a variable cost, dramatically reducing entry costs. Computer engineers can not only explore the combinatorial possibilities of generic components to create new inventions; they can actually purchase standardized services in the market in order to deploy those innovations.

This development is analogous to what happened in the book publishing business. At one time publishers owned facilities for printing and binding books. Due to the strong economies of scale in this process, most publishers have outsourced the actual production process to a handful of specialized book production facilities.

Similarly, in the future it is likely that there will be a number of cloud computing vendors that will offer computing on a utility-based model. This production model dramatically reduces the entry costs of offering online services and will likely lead to a significant increase in businesses that provide such specialized services (Armbrust et al. 2009).

The hallmarks of modern manufacturing are
routinization, modularization, standardization,
continuous production, and miniaturization.
These practices have had a dramatic impact
on manufacturing productivity in the twentieth
century. The same practices can be applied to
knowledge work in the twenty-first century.

Computers, for example, can automate routine tasks, such as spell-checking or data retrieval. Communications technology allows tasks to be modularized and routed to the workers best able to perform those tasks. Just as the miniaturization of the electric motor allowed physical production to be rearranged in 1910, the miniaturization of the computer (from the mainframe, to the workstation, to the PC, to the laptop, to the mobile phone) allows knowledge production to be rearranged at both a local and a global scale.

VIII. Micro-Multinationals

One interesting implication of computer mediated transactions among knowledge workers is that interactions are no longer constrained by time or distance.

E-mail and other tools allow for asynchronous communication over a distance, which allows for optimization of tasks on a global basis. Knowledge work can be subdivided into tasks, much like the physical work in Adam Smith’s pin factory. But even more, those tasks can be exported around the world to where they can most effectively be performed.

For example, consultants at McKinsey routinely send their PowerPoint slides to Bangalore for beautification. There are many other cognitive tasks of this sort that can be outsourced, including translation, proofreading, document research and so on. Amazon’s Mechanical Turk (Wikipedia 2009a) is an intriguing example of how computers can aid in matching up workers and tasks. As of March 2007 there were reportedly over 100,000 workers from 100 countries who were providing services via the Mechanical Turk (Pontin 2007).9

9 Pontin, Jason. 2007. “Artificial Intelligence, With Help From the Humans.” New York Times, March 25.

The dramatic drop in communications costs in the last decade has led to the emergence of what I call “micro-multinationals” (Varian 2005a). Nowadays, a ten- or 12-person company can have communications capabilities that only the largest multinationals could afford 15 years ago. Using tools like e-mail, Web pages, wikis, voice over IP, and video conferencing, tiny companies can coordinate workflow on a global basis. By handing work off from one time zone to the next, these companies can effectively work around the clock, giving them a potential competitive advantage over firms that are restricted to one time zone.

Many micro-multinationals have a common history: a student comes to the United States for graduate school and uses the Internet and the collaborative tools available in scientific work groups. Some may get bitten by the start-up bug. They draw on their friends and colleagues back home, who have other contacts living abroad. The collaborative technologies mentioned above allow such loose groups to collaborate on producing computer code which may end up as a working product.

As Saxenian (2006) has pointed out, “emigration” means something quite different now than it did 30 years ago. As she puts it, “brain drain” has been replaced by “brain circulation.” Now we have e-mail, Web pages, wikis, voice over IP, and a host of other collaborative technologies that allow an immigrant to maintain ties to his social and professional network in his home country.

I began this essay with a discussion of combinatorial innovation and pointed out that innovation has been so rapid in the last decade because innovators around the world can work in parallel, exploring novel combinations of software components. When the innovations are sufficiently developed to be deployed, they can be hosted using cloud computing technology and managed by global teams, even by tiny companies. I believe that these capabilities will offer a huge boost to knowledge worker productivity in the future.

REFERENCES

Acquisti, Alessandro, and Hal R. Varian. 2005. “Conditioning Prices on Purchase History.” Marketing Science, 24(3): 367-81.
Armbrust, Michael, Armando Fox, Rean Griffith, Anthony D. Joseph, Randy H. Katz, Andrew Konwinski, Gunho Lee, David A. Patterson, Ariel Rabkin, Ion Stoica, and Matei Zaharia. 2009. “Above the Clouds: A Berkeley View of Cloud Computing.” http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-28.html.
Athey, Susan, and Glenn Ellison. 2007. “Position Auctions with Consumer Search.” http://kuznets.fas.harvard.edu/~athey/position.pdf.
Baker, George P., and Thomas N. Hubbard. 2004. “Contractibility and Asset Ownership: On-Board Computers and Governance in U.S. Trucking.” Quarterly Journal of Economics, 119(4): 1443-79.
Battelle, John. 2005. The Search. New York: Portfolio-Penguin.
Bresnahan, Timothy. Forthcoming. “General Purpose Technologies.” In Handbook of the Economics of Innovation, ed. Bronwyn Hall and Nathan Rosenberg. Amsterdam: Elsevier North Holland.
Bresnahan, Timothy F., and M. Trajtenberg. 1995. “General Purpose Technologies: ‘Engines of Growth’?” Journal of Econometrics, 65(1): 83-108.
Castle, Jennifer L., and David Hendry. 2009. “Nowcasting from Disaggregates in the Face of Location Shifts.” www.economics.ox.ac.uk/members/jennifer.castle/Nowcast09JoF.pdf.
Choi, Hyunyoung, and Hal R. Varian. 2009a. “Predicting Initial Claims for Unemployment Benefits.” http://googleresearch.blogspot.com/2009/07/posted-by-hal-varian-chief-economist.html.
Choi, Hyunyoung, and Hal R. Varian. 2009b. “Predicting the Present with Google Trends.” http://googleresearch.blogspot.com/2009/04/predicting-present-with-google-trends.html.
Clements, M. P., and David F. Hendry. 2003. “Forecasting in the National Accounts at the Office for National Statistics.” Unpublished.
Dana, James D., Jr., and Kathryn E. Spier. 2001. “Revenue Sharing and Vertical Control in the Video Rental Industry.” Journal of Industrial Economics, 49(3): 223-45.
David, Paul A. 1990. “The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox.” American Economic Review, 80(2): 355-61.
David, Paul A. 1991. “Computer and the Dynamo: The Modern Productivity Paradox in the Not-Too-Distant Mirror.” In Technology and Productivity: The Challenge for Economic Policy, 315-48. Paris: OECD.
David, Paul A. 1991. “General Purpose Engines, Investment, and Productivity Growth: From the Dynamo Revolution to the Computer Revolution.” In Technology and Investment: Crucial Issues for the 90s, ed. E. Deiaco, E. Hornel, and G. Vickery. London: Pinter Publishers.
Drucker, Peter F. 1999. “Knowledge-Worker Productivity: The Biggest Challenge.” California Management Review, 41(2): 79-94.
Edelman, Benjamin, Michael Ostrovsky, and Michael Schwarz. 2007. “Internet Advertising and the Generalized Second-Price Auction: Selling Billions of Dollars Worth of Keywords.” American Economic Review, 97(1): 242-59.
Farm Foundation. 2003. “Food CPI, Prices, and Expenditures: A Workshop on the Use of Scanner Data in Policy Analysis.” http://www.ers.usda.gov/briefing/CPIFoodAndExpenditures/ScannerConference.htm.
Feenstra, Robert C., and Matthew Shapiro, eds. 2003. Scanner Data and Price Indexes, Vol. 64. Chicago: University of Chicago Press.
Ford, Henry. 1923. My Life and Work. Garden City, NY: Doubleday, Page & Company.
Glassner, Jean-Jacques, Zainab Bahrani, and Marc Van de Miero. 2005. The Invention of Cuneiform: Writing in Sumer. Baltimore: Johns Hopkins University Press.
Hastie, Trevor, Robert Tibshirani, and Jerome Friedman. 2009. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York: Springer.
Hendel, Igal, and Aviv Nevo. 2006. “Measuring the Implications of Sales and Consumer Inventory Behavior.” Econometrica, 74(6): 1637-73.
Hirshleifer, Jack. 1971. “The Private and Social Value of Information and the Reward to Inventive Activity.” American Economic Review, 61(4): 561-74.
Hoff, Rob. 2009. “Google Search Guru Singhal: We Will Try Outlandish Ideas.” Business Week, October. http://www.businessweek.com/the_thread/techbeat/archives/2009/10/google_search_g.html.
Hounshell, David A. 1984. From the American System to Mass Production, 1800-1932. Baltimore: Johns Hopkins University Press.
Hubbard, Thomas N. 2000. “The Demand for Monitoring Technologies: The Case of Trucking.” Quarterly Journal of Economics, 115(2): 533-60.
Kohavi, Ron, Roger Longbotham, Dan Sommerfield, and Randal M. Henne. 2008. “Controlled Experiments on the Web: Survey and Practical Guide.” Data Mining and Knowledge Discovery, 19(1): 140-81.
Levy, Steve. 2009. “Secret of Googlenomics: Data-Fueled Recipe Brews Profitability.” Wired, 17(6). http://www.wired.com/culture/culturereviews/magazine/17-06/nep_googlenomics?currentPage=all.
Mortimer, Julie H. 2008. “Vertical Contracts in the Video Rental Industry.” Review of Economic Studies, 75(1): 165-99.
Nevo, Aviv, and Catherine Wolfram. 2002. “Why Do Manufacturers Issue Coupons? An Empirical Analysis of Breakfast Cereals.” RAND Journal of Economics, 33(2): 319-39.
Nisan, Noam, Tim Roughgarden, Eva Tardos, and Vijay V. Vazirani, eds. 2007. Algorithmic Game Theory. Cambridge: Cambridge University Press.
Resnick, Paul, and Hal R. Varian. 1997. “Recommender Systems.” Communications of the Association for Computing Machinery, (3): 56-8.
Rosenberg, Nathan. 1976. “Technological Change in the Machine Tool Industry.” In Perspectives in Technology, 9-31. Cambridge: Cambridge University Press.
Sabre. 2009. http://www.sabreairlinesolutions.com/about/history.htm (accessed March 6, 2010).
Saxenian, Anna Lee. 2006. The New Argonauts: Regional Advantage in a Global Economy. Cambridge, MA: Harvard University Press.
Schumpeter, Joseph A. 2000. “The Analysis of Economic Change.” In Essays on Entrepreneurs, Innovations, Business Cycles and the Evolution of Capitalism, ed. Richard V. Clémence, 134-49. New Brunswick: Transaction Publishers. (Originally published in Review of Economic Statistics, May 1935.)
Smith, Barry C., John F. Leimkuhler, and Ross M. Darrow. 1992. “Yield Management at American Airlines.” Interfaces, 22(1): 8-31.
Stajano, Frank, and Ross Anderson. 1999. “The Cocaine Auction Protocol: On the Power of Anonymous Broadcast.” In Proceedings of the Information Hiding Workshop, Lecture Notes in Computer Science, 434-41. Berlin: Springer-Verlag.
Talluri, Kalyan T., and Garrett J. van Ryzin. 2004. The Theory and Practice of Revenue Management. Boston: Kluwer Academic.
Ulph, David, and Nir Vulkan. 2007. “Electronic Commerce, Price Discrimination, and Mass Customisation.” Unpublished.
Usher, Abbott Payson. 1998. A History of Mechanical Invention. New York: Dover Publications.
Varian, Hal R. 1995. “Economic Mechanism Design for Computerized Agents.” In USENIX Workshop on Electronic Commerce, 13-21. New York: USENIX.
Varian, Hal R. 2005. “Competition and Market Power.” In The Economics of Information Technology: An Introduction, ed. Joseph Farrell, Carl Shapiro, and Hal R. Varian, 1-46. New York: Cambridge University Press.
Varian, Hal R. 2007. “Position Auctions.” International Journal of Industrial Organization, 25(6): 1163-78.
Varian, Hal R. 2009. “Online Ad Auctions.” American Economic Review, 99(2): 430-34.
Weitzman, Martin L. 1998. “Recombinant Growth.” Quarterly Journal of Economics, 113(2): 331-60.
Wikipedia. 2009a. “Amazon Mechanical Turk.” http://en.wikipedia.org/wiki/Amazon_Mechanical_Turk (accessed March 6, 2010).
Wikipedia. 2009b. “Cloud Computing.” http://en.wikipedia.org/wiki/Cloud_computing (accessed March 6, 2010).
Yates, JoAnne. 2000. “Business Use of Information and Technology from 1880-1950.” In A Nation Transformed by Information: How Information Has Shaped the United States from Colonial Times to the Present, ed. Alfred D. Chandler and James Cortada, 107-35. New York: Oxford University Press.


Research article

Big other: surveillance capitalism and the
prospects of an information civilization
Shoshana Zuboff1,2

1Harvard Business School Emerita, Boston, MA, USA;
2Berkman Center for Internet and Society, Cambridge, MA, USA

Correspondence:
S Zuboff, Berkman Center for Internet and Society, Cambridge, MA, USA.
E-mail: [email protected]

Abstract
This article describes an emergent logic of accumulation in the networked sphere,
‘surveillance capitalism,’ and considers its implications for ‘information civilization.’ The
institutionalizing practices and operational assumptions of Google Inc. are the primary lens
for this analysis as they are rendered in two recent articles authored by Google Chief
Economist Hal Varian. Varian asserts four uses that follow from computer-mediated
transactions: ‘data extraction and analysis,’ ‘new contractual forms due to better monitor-
ing,’ ‘personalization and customization,’ and ‘continuous experiments.’ An examination of
the nature and consequences of these uses sheds light on the implicit logic of surveillance
capitalism and the global architecture of computer mediation upon which it depends. This
architecture produces a distributed and largely uncontested new expression of power that I
christen: ‘Big Other.’ It is constituted by unexpected and often illegible mechanisms of
extraction, commodification, and control that effectively exile persons from their own
behavior while producing new markets of behavioral prediction and modification. Surveil-
lance capitalism challenges democratic norms and departs in key ways from the centuries-
long evolution of market capitalism.
Journal of Information Technology (2015) 30, 75–89. doi:10.1057/jit.2015.5

Keywords: surveillance capitalism; big data; Google; information society; privacy; internet of
everything

Introduction

A recent White House report on ‘big data’ concludes, ‘The
technological trajectory, however, is clear: more and
more data will be generated about individuals and will
persist under the control of others’ (White House, 2014: 9).
Reading this statement brought to mind a 2009 interview with
Google Chairperson Eric Schmidt when the public first
discovered that Google retained individual search histories
that were also made available to state security and law
enforcement agencies, ‘If you have something that you don’t
want anyone to know, maybe you shouldn’t be doing it in the
first place, but if you really need that kind of privacy, the
reality is that search engines including Google do retain this
information for some time … It is possible that that informa-
tion could be made available to the authorities’ (Newman,
2009). What these two statements share is the attribution of
agency to ‘technology.’ ‘Big data’ is cast as the inevitable
consequence of a technological juggernaut with a life of its
own entirely outside the social. We are but bystanders.

Most articles on the subject of ‘big data’ commence with
an effort to define ‘it.’ This suggests to me that a reasonable
definition has not yet been achieved. My argument here is
that we have not yet successfully defined ‘big data’ because we
continue to view it as a technological object, effect or
capability. The inadequacy of this view forces us to return
over and again to the same ground. In this article I take a
different approach. ‘Big data,’ I argue, is not a technology or an
inevitable technology effect. It is not an autonomous process,
as Schmidt and others would have us think. It originates in the
social, and it is there that we must find it and know it. In this
article I explore the proposition that ‘big data’ is above all the
foundational component in a deeply intentional and highly
consequential new logic of accumulation that I call surveil-
lance capitalism. This new form of information capitalism
aims to predict and modify human behavior as a means to
produce revenue and market control. Surveillance capitalism
has gradually constituted itself during the last decade, embodying
new social relations and politics that have not yet been well
delineated or theorized. While ‘big data’ may be set to other
uses, those do not erase its origins in an extractive project
founded on formal indifference to the populations that
comprise both its data sources and its ultimate targets.

Constantiou and Kallinikos (2014) provide important clues
to this new direction in their article ‘New games, new rules: big
data and the changing context of strategy,’ as they lift the veil on
the black box, that is ‘big data,’ to reveal its epistemic contents
and their indigenous problematics. ‘New games’ is a powerful
and necessary contribution to this opaque intellectual territory.
The article builds on earlier warnings (e.g. boyd and Crawford,
2011; Bhimani and Willcocks, 2014) to sharply delineate the
epistemic features of ‘big data’ – heterogeneous, unstructured,
trans-semiotic, decontextualized, agnostic – and to illuminate
the epistemological discontinuities such data entail for the
methods and mindsets of corporate strategy’s formal, deductive,
inward-focused, and positivistic conventions.

In claiming this black box for the known world,
Constantiou and Kallinikos (2014) also insist on the mys-
teries that remain unsolved. ‘Big data,’ they warn, heralds
‘a transformation of contemporary economy and society … a
much wider shift that makes everydayness qua data imprints
an intrinsic component of organizational and institutional
life … and also a primary target of commercialization
strategies …’ Such changes, they say, concern ‘the blurring
of long-established social and institutional divisions … the
very nature of firms and organizations and their relations to
individuals qua users, customers or clients, and citizens.’
These challenges also ‘recast management … as a field and
social practice in a new context whose exact outlines still
remain unclear … (10).’

In this brief article, I aim to contribute to a new discussion
on these still untheorized new territories in which the roiling
ephemera of Constantiou’s and Kallinikos’s ‘big data’ are
embedded: the migration of everydayness as a commercializa-
tion strategy; the blurring of divisions; the nature of the firm
and its relation to populations. In preparation for the argu-
ments I want to make here, I begin with a very brief review of a
few foundational concepts. I then move on to a close examina-
tion of two articles by Google Chief Economist Hal Varian that
disclose the logic and implications of surveillance capitalism as
well as ‘big data’s’ foundational role in this new regime.

Computer mediation meets the logic of accumulation
Nearly 35 years ago I first developed the notion of ‘computer
mediation’ in an MIT Working Paper called ‘The Psycholo-
gical and Organizational Implications of Computer-
Mediated Work’ (Zuboff 1981; see also Zuboff, 2013 for a
history of this concept and its meaning). In that paper and
subsequent writing I distinguished ‘computer-mediated’
work from earlier generations of mechanization and auto-
mation designed to substitute for or simplify human labor
(e.g. Zuboff, 1988, 1985, 1982). I observed that information
technology is characterized by a fundamental duality that
had not yet been fully appreciated. It can be applied to
automate operations according to a logic that hardly differs
from that of centuries past: replace the human body with
machines that enable more continuity and control. But when
it comes to information technology, automation simulta-
neously generates information that provides a deeper level of
transparency to activities that had been either partially or
completely opaque. It not only imposes information (in the
form of programmed instructions), but it also produces
information. The action of a machine is entirely invested in
its object, but information technology also reflects back on its
activities and on the system of activities to which it is related.
This produces action linked to a reflexive voice, as computer-
mediation symbolically renders events, objects, and pro-
cesses that become visible, knowable, and shareable in a new
way. This distinction, to put it simply, marks the difference
between ‘smart’ and ‘dumb.’

The word I coined to describe this unique capacity is
informate. Information technology alone has the capacity to
automate and to informate. As a result of the informating
process, computer-mediated work extends organizational
codification resulting in a comprehensive ‘textualization’ of
the work environment – what I called ‘the electronic text.’
That text created new opportunities for learning and therefore
new contests over who would learn, how, and what. Once a
firm is imbued with computer mediation, this new ‘division of
learning’ becomes more salient than the traditional division of
labor. Even at the early stages of these developments in the
1980s, the text was somewhat heterogeneous. It reflected
production flows and administrative processes along with
customer interfaces, but it also revealed human behavior:
phone calls, keystrokes, bathroom breaks and other signals of
attentional continuity, actions, locations, conversations, net-
works, specific engagements with people and equipment, and
so forth. I recall writing the words in the summer of 1985 that
appeared in the final chapter of In the Age of the Smart
Machine. They were regarded as outlandish then. ‘Science
fiction,’ some said; ‘subversive,’ others complained: ‘The
informated workplace, which may no longer be a “place” at
all, is an arena through which information circulates, informa-
tion to which intellective effort is applied. The quality, rather
than the quantity, of effort will be the source from which
added value is derived … learning is the new form of labor’
(Zuboff, 1988: 395).

Today we must strain to imagine when these conditions –
computer mediation, textualization, learning as labor – were
not the case, at least for broad sectors of the labor force. Real-
time information-based computer-mediated learning has
become so endogenous to everyday business activities that the
two domains are more or less conflated. This is what most of us
do now as work. These new facts are institutionalized in
thousands, if not millions, of new species of action within firms.
Some of these are more formal: continuous improvement
methodologies, enterprise integration, employee monitoring,
ICT systems that enable the global coordination of distributed
manufacturing operations, professional activities, teams, custo-
mers, supply chains, inter-firm projects, mobile and temporary
workforces, and marketing approaches to diverse configura-
tions of consumers. Some are less formal: the unceasing flow of
email, online search, smartphone activities, apps, texts, video
meetings, social media interactions, and so forth.

The division of learning, however, is no pure form. During
20 years of fieldwork, I encountered the same lesson in
hundreds of variations. The division of learning, like the
division of labor, is always shaped by contests over these
questions: Who participates and how? Who decides who
participates? What happens when authority fails? In the
market sphere, the electronic text and what can be learned
from it were never – and can never be – ‘things in themselves.’
They are always already constituted by the answers to these
questions. In other words, they are already embedded in the
social, their possibilities circumscribed by authority and
power.

The key point here is that when it comes to the market
sphere, the electronic text is already organized by the logic
of accumulation in which it is embedded and the conflicts
inherent to that logic. The logic of accumulation organizes
perception and shapes the expression of technological affor-
dances at their roots. It is the taken-for-granted context of any
business model. Its assumptions are largely tacit, and its power
to shape the field of possibilities is therefore largely invisible. It
defines objectives, successes, failures, and problems. It deter-
mines what is measured, and what is passed over; how
resources and people are allocated and organized; who is
valued in what roles; what activities are undertaken – and to
what purpose. The logic of accumulation produces its own
social relations and with that its conceptions and uses of
authority and power.

In the history of capitalism, each era has run toward a
dominant logic of accumulation – mass production-based
corporate capitalism in the 20th century shaded into financial
capitalism by that century’s end – a form that continues to hold
sway. This helps to explain why there is so little real competitive
differentiation within industries. Airlines, for example, have
immense information flows that are interpreted along more or
less similar lines toward similar aims and metrics, because firms
are all evaluated according to the terms of a single shared logic
of accumulation.1 The same could be said for banks, hospitals,
telecommunications companies, and so forth.

Still, capitalism’s success over the longue durée has
depended upon the emergence of new market forms expres-
sing new logics of accumulation that are more successful at
meeting the ever-evolving needs of populations and their
expression in the changing nature of demand.2 As Piketty
acknowledges in his Capital in the Twenty-First Century,
‘There is no single variety of capitalism or organization of
production … This will continue to be true in the future, no
doubt more than ever: new forms of organization and owner-
ship remain to be invented’ (Piketty, 2014: 483). The philoso-
pher and legal scholar Roberto Unger has also written
persuasively on this point:

The concept of a market economy is institutionally indeter-
minate … it is capable of being realized in different legal and
institutional directions, each with dramatic consequences for
every aspect of social life, including the class structure of
society and the distribution of wealth and power … Which of
its institutional realizations prevails has immense importance
for the future of humanity … a market economy can adopt
radically divergent institutional forms, including different
regimes of property and contract and different ways of
relating government and private producers. The forms now
established in the leading economies represent the fragment
of a larger and open-ended field of possibilities.

(Unger 2007: 8, 41)

New market forms emerge in distinct times and places. Some
rise to hegemony, others exist in parallel to the dominant
form, and others are revealed in time as evolutionary dead
ends.

How can these conceptual building blocks help us make
sense of ‘big data’? Some points are obvious: three of the
world’s seven billion people are now computer-mediated in a
wide range of their daily activities far beyond the traditional
boundaries of the workplace. For them the old dream of
ubiquitous computing (Weiser, 1991) is a barely noticeable
truism. As a result of pervasive computer mediation, nearly
every aspect of the world is rendered in a new symbolic
dimension as events, objects, processes, and people become
visible, knowable, and shareable in a new way. The world is
reborn as data and the electronic text is universal in scale and
scope.3 Just a moment ago, it still seemed reasonable to focus
our concerns on the challenges of an information workplace
or an information society. Now the enduring questions of
authority and power must be addressed to the widest possible
frame that is best defined as ‘civilization’ or more specifically –
information civilization. Who learns from global data flows,
what, and how? Who decides? What happens when authority
fails? What logic of accumulation will shape the answers to
these questions? Recognizing their civilizational scale lends
these questions new force and urgency. Their answers will
shape the character of information civilization in the century
to come, just as the logic of industrial capitalism and its
successors shaped the character of industrial civilization over
the last two centuries.

In the brief space of this work, my ambition is to begin the
task of illuminating an emergent logic of accumulation that vies
for hegemony in today’s networked spaces. My primary lens for
this brief exploration is Google, the world’s most popular
website. Google is widely considered to be the pioneer of ‘big
data’ (e.g. Mayer-Schönberger and Cukier, 2013), and on the
strength of those accomplishments it has also pioneered the
wider logic of accumulation I call surveillance capitalism, of
which ‘big data’ is both a condition and an expression. This
emerging logic is not only shared by Facebook and many other
large Internet-based firms, it also appears to have become the
default model for most online startups and applications. Like
Constantiou and Kallinikos (2014), I begin this discussion with
the characteristics of the data in ‘big data’ and how they are
generated. But where those authors trained their sights on the
data’s epistemic features, I want to consider their individual,
social, and political significance.

This discussion here is organized around two extraordinary
documents written by Google’s Chief Economist Hal Varian
(Varian, 2014, 2010). His claims and observations offer a
starting point for insights into the systemic logic of accumula-
tion in which ‘big data’ are embedded. I note here that while
Varian is not a Google line executive, his articles invite a close
inspection of Google’s practices as a prime exemplar of this
new logic of accumulation. In both pieces, Varian illustrates
his points with examples from Google. He often uses the first
person plural in these instances, such as, ‘Google has been so
successful with our own experiments that we have made them
available to our advertisers and publishers in two programs.’
Or, ‘Google has seen 30 trillion URLs, crawls over 20 billion of
those a day and answers 100 billion search queries a month …
we have had to develop new types of databases that can store
data in massive tables spread across thousands of machines
and can process queries on more than a trillion records in a
few seconds. We published descriptions of these tools …’
(Varian, 2014: 27, 29). It therefore seems fair to assume that
Varian’s perspectives reflect the substance of Google’s
business practices, and, to a certain extent, the worldview that
underlies those practices.

In the two articles I examine here, Varian’s theme is the
universality of ‘computer-mediated economic transactions.’
He writes, ‘The computer creates a record of the transaction
… I argue that these computer-mediated transactions have
enabled significant improvements in the way transactions are
carried out and will continue to impact the economy for the
foreseeable future’ (2010: 2). The implications of Varian’s
observation are significant. The informating of the economy,
as he observes, is constituted by a pervasive and continuous
recording of the details of each transaction. In this vision,
computer mediation renders an economy transparent and
knowable in new ways. This is a sharp contrast to the classic
neoliberal ideal of ‘the market’ as intrinsically ineffable and
unknowable. Hayek’s conception of the market was as an
incomprehensible ‘extended order’ to which mere individuals
must subjugate their wills (Hayek, 1988: 14–15). It was
precisely the unknowability of the universe of market transac-
tions that anchored Hayek’s claims for the necessity of radical
freedom from state intervention or regulation. Given Varian’s
new facts of a knowable market, he asserts four new ‘uses’ that
follow from computer-mediated transactions: ‘data extraction
and analysis,’ ‘new contractual forms due to better monitor-
ing,’ ‘personalization and customization,’ and ‘continuous
experiments’ (Varian, 2014). Each one of these provides
insights into an emerging logic of accumulation, the division
of learning that it shapes, and the character of the information
civilization toward which it leads.

Data, extraction, analysis
The first of Varian’s new uses is ‘data extraction and analysis
… what everyone is talking about when they talk about big
data’ (Varian, 2014: 27). I want to examine each word in this
phrase – ‘data,’ ‘extraction,’ and ‘analysis’ – as each conveys
insights into the new logic of accumulation.

Data
The data from computer-mediated economic transactions is a
significant dimension of ‘big data.’ There are other sources too,
including flows that arise from a variety of computer-mediated
institutional and trans-institutional systems. Among these we
can include a second source of computer-mediated flows that is
expected to grow exponentially: data from billions of sensors
embedded in a widening range of objects, bodies, and places. An
often cited Cisco White Paper predicts $14.4 trillion of new
value associated with this ‘Internet of Everything’ (Cisco,
2013a, b). Google’s new investments in machine learning,
drones, wearables, self-driving cars, nano particles that ‘patrol’
the body for signs of disease, and smart devices for the home are
each essential components of this growing network of smart
sensors and Internet-enabled devices intended as a new intelli-
gent infrastructure for objects and bodies (Bradshaw, 2014a, b;
Kovach, 2013; BBC News, 2014; Brewster, 2014; Dwoskin, 2014;
Economist, 2014; Fink, 2014; Kelly, 2014; Lin, 2014; Parnell,
2014; Winkler and Wakabayashi, 2014). A third source of data
flows from corporate and government databases including those
associated with banks, payment-clearing intermediaries, credit
rating agencies, airlines, tax and census records, health care
operations, credit card, insurance, pharmaceutical, and telecom
companies, and more. Many of these data, along with the data
flows of commercial transactions, are purchased, aggregated,
analyzed, packaged, and sold by data brokers who operate, in the
US at least, in secrecy – outside of statutory consumer protec-
tions and without consumers’ knowledge, consent, or rights of
privacy and due process (U.S. Committee on Commerce,
Science, and Transportation, 2013).

A fourth source of ‘big data,’ one that speaks to its
heterogeneous and trans-semiotic character, flows from pri-
vate and public surveillance cameras, including everything
from smartphones to satellites, Street View to Google Earth.
Google has been at the forefront of this contentious data
domain. For example, Google Street View was launched in
2007 and encountered opposition around the world. German
authorities discovered that some Street View cars were
equipped with scanners to scrape data from private Wi-Fi
networks (O’Brien and Miller, 2013). According to the
Electronic Privacy Information Center’s (EPIC) summary of
a lawsuit filed by 38 states’ Attorneys General and the District
of Columbia, the court concluded that ‘the company engaged
in unauthorized collection of data from wireless networks,
including private WiFi networks of residential Internet users.’
The EPIC report summarizes a redacted version of an FCC
report revealing that ‘Google intentionally intercepted payload
data for business purposes and that many supervisors and
engineers within the company reviewed the code and the
design documents associated with the project’ (EPIC, 2014b).
According to the New York Times account of Google’s
eventual seven million dollar settlement of the case, ‘the search
company for the first time is required to aggressively police its
own employees on privacy issues …’ (Streitfeld, 2013). Street
View was restricted in many countries and continues to face
litigation over what claimants have characterized as ‘secret,’
‘illicit,’ and ‘illegal’ data gathering tactics in the US, Europe,
and elsewhere (Office of the Privacy Commission of Canada,
2010; O’Brien, 2012; Jammet, 2014).

In Street View, Google developed a declarative method
that it has repeated in other data ventures. This modus
operandi is that of incursion into undefended private terri-
tory until resistance is encountered. As one consumer watch-
dog summarized it for the New York Times, ‘Google puts
innovation ahead of everything and resists asking permis-
sion’ (Streitfeld, 2013; see also Burdon and McKillop, 2013).
The firm does not ask if it can photograph homes for its
databases. It simply takes what it wants. Google then
exhausts its adversaries in court or eventually agrees to pay
fines that represent a negligible investment for a significant
return.4 It is a process that Siva Vaidhyanathan has called
‘infrastructure imperialism’ (Vaidhyanathan, 2011). EPIC
maintains a comprehensive online record of the hundreds of
cases launched against Google by countries, states, groups,
and individuals, and there are many more cases that never
become public (EPIC, 2014a, b).

These institutionally produced data flows represent the
‘supply’ side of the computer-mediated interface. With these
data alone it is possible to construct detailed individual
profiles. But the universality of computer-mediation has
occurred through a complex process of causation that includes
subjective activities too – the demand side of computer-
mediation. Individual needs drove the accelerated penetration
curves of the Internet. In less than two decades after the
Mosaic web browser was released to the public, enabling easy
access to the World Wide Web, a 2010 BBC poll found that
79% of people in 26 countries considered Internet access to be
a fundamental human right (BBC, 2010).

Outside the market-based hierarchical spaces of the work-
place, Internet access, indexing, and search meant that indivi-
duals were finally free to pursue the resources they needed for
effective life unimpeded by the monitoring, metrics, insecurity,
role requirements, and secrecy imposed by the firm and its logic
of accumulation. Individual needs for self-expression, voice,
influence, information, learning, empowerment, and connection
summoned all sorts of new capabilities into existence in just a
few years: Google’s searches, iPod’s music, Facebook’s pages,
YouTube’s videos, blogs, networks, communities of friends,
strangers, and colleagues, all reaching out beyond the old
institutional and geographical boundaries in a kind of exultation
of hunting and gathering and sharing information for every
purpose or none at all. It was mine, and I could do with it what I
wished!5 These subjectivities of self-determination found expres-
sion in a new networked individual sphere characterized by what
Benkler (2006) aptly summarized as non-market forms of ‘social
production.’

These non-market activities are a fifth principal source of
‘big data’ and the origin of what Constantiou and Kallinikos
(2014) refer to as its ‘everydayness.’ ‘Big data’ are constituted by
capturing small data from individuals’ computer-mediated
actions and utterances in their pursuit of effective life. Nothing
is too trivial or ephemeral for this harvesting: Facebook ‘likes,’
Google searches, emails, texts, photos, songs, and videos, loca-
tion, communication patterns, networks, purchases, movements,
every click, misspelled word, page view, and more. Such data are
acquired, datafied, abstracted, aggregated, analyzed, packaged,
sold, further analyzed and sold again. These data flows have been
labeled by technologists as ‘data exhaust.’ Presumably, once the
data are redefined as waste material, their extraction and eventual
monetization are less likely to be contested.
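
To make this harvesting concrete, here is a minimal, purely illustrative sketch in Python of how such ‘data exhaust’ might be folded into per-user profiles. The event kinds, field names, and values are hypothetical and stand in for no firm’s actual pipeline.

    from collections import Counter, defaultdict
    from dataclasses import dataclass

    @dataclass
    class Event:
        user_id: str   # pseudonymous identifier
        kind: str      # e.g. "search", "like", "page_view"
        payload: str   # the query, URL, liked page, etc.

    def aggregate_profiles(events):
        # Fold a stream of small behavioral events into per-user profiles:
        # each profile is simply a count of (kind, payload) pairs.
        # A real pipeline would add time, device, location, and network features.
        profiles = defaultdict(Counter)
        for e in events:
            profiles[e.user_id][(e.kind, e.payload)] += 1
        return profiles

    # A handful of 'trivial' events already yields a targetable profile.
    events = [
        Event("u42", "search", "running shoes"),
        Event("u42", "page_view", "shop.example/shoes"),
        Event("u42", "like", "MarathonClub"),
    ]
    print(aggregate_profiles(events)["u42"].most_common(2))

The aggregation itself is mechanically trivial; the asymmetry lies in who holds the resulting profiles and to what end.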

Google became the largest and most successful ‘big data’
company, because it is the most visited website and therefore has
the largest data exhaust. Like many other born-digital firms,
Google rushed to meet the waves of pent-up demand that
flooded the networked individual sphere in the first years of the
World Wide Web. It was a heroic exemplar of individual
empowerment in the quest for effective life. But as pressures
for profit mounted, Google’s leaders were concerned about the
effect that fees-for-service might have on user growth. They
opted instead for an advertising model. The new approach
depended upon the acquisition of user data as the raw material
for proprietary analyses and algorithm production that could
sell and target advertising through a unique auction model with
ever more precision and success. As Google’s revenues rapidly
grew, they motivated ever more comprehensive data collection.6
The new science of big data analytics exploded, driven largely by
Google’s spectacular success.

Eventually it became clear that Google’s business is the
auction business, and its customers are advertisers (see useful
discussions of this turning point in Auletta, 2009;
Vaidhyanathan, 2011; and Lanier, 2013). AdWords, Google’s
algorithmic auction method for selling online advertising,
analyzes massive amounts of data to determine which adver-
tisers get which one of 11 sponsored links on each results page.
In a 2009 Wired article on ‘Googlenomics,’ Google’s Varian
commented, ‘Why does Google give away products … ?
Anything that increases Internet use ultimately enriches
Google …’ The article continues, ‘… more eyeballs on the
Web lead inexorably to more ad sales for Google … And since
prediction and analysis are so crucial to AdWords, every bit of
data, no matter how seemingly trivial, has potential value’
(Levy, 2009). The theme is reiterated in Mayer-Schönberger
and Cukier’s Big Data: ‘Many companies design their systems
so that they can harvest data exhaust … Google is the
undisputed leader … every action a user performs is consid-
ered a signal to be analyzed and fed back into the system’
(2013: 113). This helps explain why Google outbid all
competitors for the privilege of providing free Wi-Fi to
Starbucks’ 3 billion yearly customers (Schmarzo, 2014). More
users produce more exhaust that improves the predictive value
of analyses and results in more lucrative auctions. What
matters is quantity not quality. Another way of saying this is
that Google is ‘formally indifferent’ to what its users say or do,
as long as they say it and do it in ways that Google can capture
and convert into data.
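
The auction logic described above is commonly characterized in the economics literature as a generalized second-price auction in which ads are ranked by bid weighted by predicted click-through rate. The Python sketch below assumes that textbook form purely for illustration; the advertiser names, numbers, and exact pricing rule are hypothetical and should not be read as Google’s actual implementation.

    def rank_ads(bids, predicted_ctr, slots=11):
        # Rank advertisers by bid * predicted click-through rate and charge
        # each winner roughly the minimum needed to keep its position
        # (a textbook generalized second-price rule, shown for illustration).
        ranked = sorted(bids, key=lambda a: bids[a] * predicted_ctr[a], reverse=True)
        winners = ranked[:slots]
        prices = {}
        for i, ad in enumerate(winners):
            if i + 1 < len(ranked):
                runner_up = ranked[i + 1]
                prices[ad] = bids[runner_up] * predicted_ctr[runner_up] / predicted_ctr[ad]
            else:
                prices[ad] = 0.0  # no bidder below; a real system would apply a reserve price
        return winners, prices

    bids = {"A": 2.00, "B": 1.50, "C": 3.00}  # hypothetical dollars per click
    ctr = {"A": 0.10, "B": 0.20, "C": 0.05}   # hypothetical predicted click-through rates
    print(rank_ads(bids, ctr, slots=2))       # -> (['B', 'A'], {'B': 1.0, 'A': 1.5})

The dependence on predicted click-through rates is the point: more behavioral data yields better predictions and more lucrative auctions, which is why ‘every bit of data, no matter how seemingly trivial, has potential value.’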

Extraction
This ‘formal indifference’ is a prominent, perhaps decisive,
characteristic of the emerging logic of accumulation under
examination here. The second term in Varian’s phrase,
‘extraction,’ also sheds light on the social relations implied by
formal indifference. First, and most obvious, extraction is a
one-way process, not a relationship. Extraction connotes a
‘taking from’ rather than either a ‘giving to,’ or a reciprocity of
‘give and take.’ The extractive processes that make big data
possible typically occur in the absence of dialogue or consent,
despite the fact that they signal both facts and subjectivities of
individual lives. These subjectivities travel a hidden path to
aggregation and decontextualization, despite the fact that they
are produced as intimate and immediate, tied to individual
projects and contexts (Nissenbaum, 2011). Indeed, it is the
status of such data as signals of subjectivities that makes them
most valuable for advertisers. For Google and other ‘big data’
aggregators, however, the data are merely bits. Subjectivities
are converted into objects that repurpose the subjective for
commodification. Individual users’ meanings are of no inter-
est to Google or other firms in this chain. In this way, the
methods of production of ‘big data’ from small data and the
ways in which ‘big data’ are valued reflect the formal
indifference that characterizes the firm’s relationship to its
populations of ‘users.’ Populations are the sources from which
data extraction proceeds and the ultimate targets of the
utilities such data produce.

Formal indifference is evident in the aggressiveness with
which Google pursues its interests in extracting signals of
individual subjectivities. In these extractive activities it follows
the Street View model: incursions into legally and socially
undefended territory until resistance is encountered. Its
practices appear designed to be undetectable or at least
obscure, and had it not been for NSA whistleblower Edward
Snowden, aspects of its operations, especially as they overlap
state security interests, would still be hidden. Most of what was
known about Google’s practices erupted from the conflicts it
produced (Angwin, 2014). For example, Google has faced legal
opposition and social protest in relation to claims of (1) the
scanning of email, including those of non-Gmail users and those
of students using its educational apps (Herold, 2014; Plummer,
2014), (2) the capture of voice communications (Menn
et al., 2010), (3) bypassing privacy settings (Angwin, 2012;
Owen, 2014), (4) unilateral practices of data bundling across
its online services (CNIL, 2014; Doyle, 2013), (5) its extensive
retention of search data (Anderson, 2010; O’Brien and
Crampton, 2007), (6) its tracking of smartphone location data
(Mick, 2011; Snelling, 2014), and (7) its wearable technologies
and facial recognition capabilities (EPIC, 2014a,
https://epic.org/privacy/google/glass/). These contested data gathering
moves face substantial opposition in the EU as well as the US
(Barker and Fontanella-Khan, 2014; Gabriel, 2014; Garside,
2014; Kopczynski, 2014; Mance et al., 2014; Steingart, 2014;
Vasagar, 2014).

‘Extraction’ summarizes the absence of structural recipro-
cities between the firm and its populations. This fact alone lifts
Google, and other participants in its logic of accumulation, out
of the historical narrative of Western market democracies. For
example, the 20th-century corporation canonized by scholars
like Berle and Means (1991) and Chandler Jr (1977) originated
in and was sustained by deep interdependencies with its
populations. The form and its bosses had many failings and
produced many violent facts that have been well documented,
but I focus here on a different point. That market form
intrinsically valued its populations of newly modernizing
individuals as its source of employees and customers; it
depended upon its populations in ways that led over time to
institutionalized reciprocities. In return for its rigors, the form
offered a quid pro quo that was consistent with the self-
understanding and demand characteristics of its populations.
On the inside were durable employment systems, career
ladders, and steady increases in wages and benefits for more
workers (Sklar, 1988). On the outside were the dramas of
access to affordable goods and services for more consumers
(Cohen, 2003).

The ‘five dollar day’ was emblematic of this systemic logic,
recognizing as it did that the whole enterprise rested upon a
consuming population. The firm, Ford realized, had to value
the worker-consumer as a fundamental unity and the essential
component of a new mass production capitalism. This social
contract hearkened back to Adam Smith’s original insights
into the productive reciprocities of capitalism, in which price
increases were balanced with wage increases, ‘so that the
labourer may still be able to purchase that quantity of those
necessary articles which the state of the demand for labour …
requires that he should have’ (Smith, 1994: 939–940). It was
these reciprocities that helped constitute a broad middle class
with steady income growth and a rising standard of living.
Indeed, considered from the vantage point of the last 30-plus
years during which this market form was systematically
deconstructed, its embeddedness in the social order through
these structural reciprocities appears to have been one of its
most salient features (Davis, 2011, 2013).

Google and the ‘big data’ project represent a break with this
past. Its populations are no longer necessary as the source of
customers or employees. Advertisers are its customers along
with other intermediaries who purchase its data analyses.
Google employs only about 48,000 people as of this writing,
and is known to have thousands of applicants for every job
opening. (As contrast: at the height of its power in 1953,
General Motors was the world’s largest private employer.)
Google, therefore, has little interest in its users as employees.
This pattern is true of hyperscale high tech companies
that achieve growth mainly by leveraging automation. For
example, the top three Silicon Valley companies in 2014 had

revenues of $247 billion, only 137,000 employees, and a
combined market capitalization of $1.09 trillion. In contrast,
even as late as 1990, the three top Detroit automakers
produced revenues of $250 billion with 1.2 million employees
and a combined market capitalization of $36 billion (Manyika
and Chui, 2014).

This structural independence of the firm from its popula-
tions is a matter of exceptional importance in light of the
historical relationship between market capitalism and democ-
racy. For example, Acemoglu and Robinson elaborate the
mutual structuring of (1) early industrial capitalism’s depen-
dency on the masses, (2) prosperity, and (3) the rise of
democracy in 19th-century Britain. Examining that era’s
successful new market forms and the accompanying shift
toward democratic institutions they observe, ‘Clamping down
on popular demands and undertaking a coup against inclusive
political institutions would … destroy … gains, and the elites
opposing greater democratization and greater inclusiveness
might find themselves among those losing their fortunes from
this destruction’ (2012: 313–314). Google bears no such risks.
On the contrary, despite its role as the ‘chief utility for the
World Wide Web’ (Vaidhyanathan, 2011: 17) and its sub-
stantial investments in technologies with explosive social
consequences such as artificial intelligence, robotics, facial
recognition, wearables, nanotechnology, smart devices, and
drones, Google has not been subject to any meaningful public
oversight (see e.g. the discussion in Vaidhyanathan, 2011: 44–
50; see also Finamore and Dutta, 2014; Gibbs, 2014; Trotman,
2014; Waters, 2014). In an open letter to Europe, Google
Chairperson Eric Schmidt recently expressed his frustration
with the prospect of public oversight, characterizing it as
‘heavy-handed regulation’ and threatening that it would create
‘serious economic dangers’ for Europe (Schmidt, 2014).

Analysis
Google’s formal indifference toward and functional distance
from populations is further institutionalized in the necessities
of ‘analysis’ that Varian emphasizes. Google is the pioneer of
hyperscale. Like other hyperscale businesses – Facebook,
Twitter, Alibaba, and a growing list of high volume informa-
tion businesses such as telecoms and global payments firms –
data centers require millions of ‘virtual servers’ that exponen-
tially increase computing capabilities without requiring sub-
stantial expansion of physical space, cooling, or electrical
power demands.7 Hyperscale businesses exploit digital mar-
ginal cost economics to achieve scale quickly at costs that
approach zero.8 In addition to these material capabilities,
Varian notes that analysis requires data scientists who have
mastered the new methods associated with predictive analy-
tics, reality mining, patterns-of-life analysis, and so forth.
These highly specialized material and knowledge require-
ments further separate subjective meaning from objective
result. In doing so, they eliminate the need for, or possibility
of, feedback loops between the firm and its populations. The
data travel through many phases of production, only to return
to their source in a second phase of extraction in which the
objective is no longer data but revenue. The cycle then begins
again in the form of new computer-mediated transactions.
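
The predictive analysis invoked here can be illustrated in deliberately toy form. The features, labels, and library choice (scikit-learn) in the Python sketch below are assumptions made only for illustration; the point is simply that records of behavior become inputs to a model whose output is a prediction about the person who produced them, without that person’s participation.

    # Toy behavioral prediction with hypothetical features and labels.
    from sklearn.linear_model import LogisticRegression

    # rows: [searches last week, pages viewed, 'likes']; label: clicked an ad?
    X = [[12, 40, 3], [1, 5, 0], [8, 22, 1], [0, 2, 0], [15, 60, 5], [2, 9, 0]]
    y = [1, 0, 1, 0, 1, 0]

    model = LogisticRegression().fit(X, y)
    new_user = [[10, 30, 2]]
    # The modeled person is never consulted; the output returns to them only as targeting.
    print(f"predicted click probability: {model.predict_proba(new_user)[0][1]:.2f}")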

This examination of Varian’s combination of data, extrac-
tion, and analysis begins to suggest some key features of the
new logic of accumulation associated with big data and
spearheaded by Google. First, revenues depend upon data
assets appropriated through ubiquitous automated operations.
These constitute a new asset class: surveillance assets. Critics of
surveillance capitalism might characterize such assets as
‘stolen goods’ or ‘contraband’ as they were taken, not given,
and do not produce, as I shall argue below, appropriate
reciprocities. The cherished culture of social production in
the networked individual sphere relies on the very tools that
are now the primary vehicles for the surveillance-based
appropriation of the most lucrative data exhaust. These
surveillance assets attract significant investment that can be
called surveillance capital. Google has, so far, triumphed in the
networked world through the pioneering construction of this
new market form that is a radically disembedded and extrac-
tive variant of information capitalism, one that can be
identified as surveillance capitalism. This new market form
has quickly developed into the default business model for
most online companies and startups, where valuations routi-
nely depend upon ‘eyeballs’ rather than revenue as a predictor
of remunerative surveillance assets.

Monitoring and contracts
Varian says, ‘Because transactions are now computer
mediated, we can observe behavior that was previously
unobservable and write contracts on it. This enables transac-
tions that were simply not feasible before … Computer-
mediated transactions have enabled new business models …’
(2014: 30). Varian offers examples: If someone stops making
monthly car payments, the lender can ‘instruct the vehicular
monitoring system not to allow the car to be started and to
signal the location where it can be picked up.’ Insurance
companies, he suggests, can rely on similar monitoring
systems to check if customers are driving safely and thus
determine whether or not to maintain their insurance or pay
claims. He also suggests that one can hire an agent in a remote
location to perform tasks and use data from their smartphones
– geolocation, time stamping, photos – to ‘prove’ that they
actually performed according to the contract.
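
Varian’s vehicle-lending example reduces to a conditional rule executed automatically on telemetry rather than a promise negotiated between parties. A minimal sketch, with entirely hypothetical field names and thresholds, might look like this:

    from datetime import date, timedelta

    def enforce_car_loan(last_payment: date, today: date, grace_days: int = 30):
        # A machine-enforced lending rule of the kind Varian describes:
        # no appeal, no negotiation, just a conditional check on monitoring data.
        overdue = (today - last_payment) > timedelta(days=grace_days)
        return {"immobilize_vehicle": overdue, "report_location": overdue}

    print(enforce_car_loan(last_payment=date(2015, 1, 1), today=date(2015, 3, 1)))
    # -> {'immobilize_vehicle': True, 'report_location': True}

Everything a contract would ordinarily leave to judgment, excuse, or renegotiation is absent from the conditional.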

Varian does not appear to realize that what he is celebrating
here is not new contract forms but rather the ‘un-contract.’
His version of a computer-mediated world transcends the
contract form by stripping away governance and the rule of
law. Varian appears to be aiming for what Oliver Williamson
calls ‘a condition of contract utopia’ (1985: 67). In William-
son’s transaction economics, contracts exist to mitigate the
inevitability of uncertainty. They operate to economize on
‘bounded rationality’ and safeguard against ‘opportunism’ –
both intractable conditions of contracts in the real world of
human endeavor. He observes that certainty requires
‘unbounded rationality’ derived from ‘unrestricted cognitive
competence,’ which in turn derives from ‘fully described’
adaptations to ‘publicly observable’ contingent events.
Williamson notes that such conditions inhere to ‘a world of
planning’ rather than to ‘the world of governance’ in which
‘other things being equal … relations that feature personal
trust will survive greater stress and will display greater
adaptability’ (31–32, 63).

Varian’s vision of the uses of computer-mediated transac-
tions empties the contract of uncertainty. It eliminates the
need for – and therefore the possibility to develop – trust.
Another way of saying this is that contracts are lifted from the

social and reimagined as machine processes. Consensual
participation in the values from which legitimate authority is
derived, along with free will and reciprocal rights and obliga-
tions, are traded in for the universal equivalent of the prison-
er’s electronic ankle bracelet. Authority, which I have
elsewhere described as ‘the spiritual dimension of power,’
relies on social construction animated by shared foundational
values. In Varian’s economy, authority is supplanted by
technique, what I have called ‘the material dimension of
power,’ in which impersonal systems of discipline and control
produce certain knowledge of human behavior independent of
consent (Zuboff, 1988). This subject deserves a more detailed
exploration than is possible here, so I limit myself to a few key
themes.

In describing this ‘new use,’ Varian lays claim to vital
political territory for the regime of surveillance capitalism. From
Locke to Durkheim, the contract and the rule of law that
supports it have been understood as derived from the social
and the trust and organic solidarity of which the social is an
effect (Durkheim, 1964: 215; Locke, 2010: 112–115, 339). For
Weber, ‘the most essential feature of modern substantive law,
especially private law is the greatly increased significance of legal
transactions, particularly contracts, as a source of claims guar-
anteed by legal coercion … one can … designate the contem-
porary type of society … as a “contractual” one’ (1978: 669).

As Hannah Arendt suggests, ‘the great variety of contract
theories since the Romans attests to the fact that the power of
making promises has occupied the center of political thought
over the centuries.’ Most vivid is the operation of the contract
as it enhances the mastery of individuals and the resilience of
society. These goods derive precisely from the unpredictability
‘which the act of making promises at least partially dispels …’
For Arendt, human fallibility in the execution of contracts is
the price of freedom. The impossibility of perfect control
within a community of equals is the consequence of ‘plurality
and reality … the joy of inhabiting together with others a
world whose reality is guaranteed for each by the presence of
all.’ Arendt insists that ‘the force of mutual promise or
contract’ is the only alternative ‘to a mastery which relies on
domination of one’s self and rule over others; it corresponds
exactly to the existence of freedom which was given under the
condition of non-sovereignty’ (1998: 244).

In contrast to Arendt, Varian’s vision of a computer-
mediated world strikes me as an arid wasteland – not a
community of equals bound through laws in the inevitable
and ultimately fruitful human struggle with uncertainty. In
this futurescape, the human community has already failed. It
is a place adapted to the normalization of chaos and terror
where the last vestiges of trust have long since withered and
died. Human replenishment from the failures and triumphs of
asserting predictability and exercising will in the face of
natural uncertainty gives way to the blankness of perpetual
compliance. Rather than enabling new contractual forms,
these arrangements describe the rise of a new universal
architecture existing somewhere between nature and God that
I christen Big Other. It is a ubiquitous networked institutional
regime that records, modifies, and commodifies everyday
experience from toasters to bodies, communication to
thought, all with a view to establishing new pathways to
monetization and profit. Big Other is the sovereign power of
a near future that annihilates the freedom achieved by the rule
of law. It is a new regime of independent and independently
controlled facts that supplants the need for contracts, govern-
ance, and the dynamism of a market democracy. Big Other is
the 21st-century incarnation of the electronic text that aspires
to encompass and reveal the comprehensive immanent facts of
market, social, physical, and biological behaviors. The institu-
tional processes that constitute the architecture of Big Other
can be imagined as the material instantiation of Hayek’s
‘extended order’ come to life in the explicated transparency
of computer-mediation.

These processes reconfigure the structure of power, con-
formity, and resistance inherited from mass society and
symbolized for over half a century as Big Brother. Power can
no longer be summarized by that totalitarian symbol of
centralized command and control. Even the panopticon of
Bentham’s design, which I used as a central metaphor in my
earlier work (Zuboff, 1988, Ch. 9,10), is prosaic compared to
this new architecture. The panopticon was a physical design
that privileged a single point of observation. The anticipatory
conformity it induced required the cunning production of
specific behaviors while one was inside the panopticon, but
that behavior could be set aside once one exited that physical
place. In the 1980s it was an apt metaphor for the hierarchical
spaces of the workplace. In the world implied by Varian’s
assumptions, habitats inside and outside the human body are
saturated with data and produce radically distributed oppor-
tunities for observation, interpretation, communication, influ-
ence, prediction, and ultimately modification of the totality of
action. Unlike the centralized power of mass society, there is
no escape from Big Other. There is no place to be where the
Other is not.

In this world of no escape, the chilling effects of anticipatory
conformity9 give way as the mental agency and self-possession
of anticipation is gradually submerged into a new kind of
automaticity. Anticipatory conformity assumes a point of
origin in consciousness from which a choice is made to
conform for the purposes of evasion of sanctions and social
camouflage. It also implies a difference, or at least the
possibility of a difference, between the behavior one would
have performed and the behavior one chooses to perform as
an instrumental solution to invasive power. In a world of Big
Other, without avenues of escape, the agency implied in the
work of anticipation is gradually submerged into a new kind of
automaticity – a lived experience of pure stimulus-response.
Conformity is no longer a 20th century-style act of submission
to the mass or group, no loss of self to the collective produced
by fear or compulsion, no psychological craving for accep-
tance and belonging. Conformity now disappears into the
mechanical order of things and bodies, not as action but as
result, not cause but effect. Each one of us may follow a
distinct path, but that path is already shaped by the financial
and, or, ideological interests that imbue Big Other and invade
every aspect of ‘one’s own’ life. False consciousness is no
longer produced by the hidden facts of class and their relation
to production, but rather by the hidden facts of commoditized
behavior modification. If power was once identified with the
ownership of the means of production, it is now identified
with ownership of the means of behavioral modification.

Indeed, there is little difference between the ineffable
‘extended order’ of the neoliberal ideal and the ‘vortex of
stimuli’ responsible for all action in the vision of the classical
theorists of behavioral psychology. In both worldviews,
human autonomy is irrelevant and the lived experience of

psychological self-determination is a cruel illusion. Varian
adds a new dimension to both hegemonic ideals in that now
this ‘God view’ can be fully explicated, specified, and known,
eliminating all uncertainty. The result is that human persons
are reduced to a mere animal condition, bent to serve the new
laws of capital imposed on all behavior through an implacable
feed of ubiquitous fact-based real-time records of all things
and creatures. Hannah Arendt treated these themes decades
ago with remarkable insight as she lamented the devolution of
our conception of ‘thought’ to something that is accomplished
by a ‘brain’ and is therefore transferable to ‘electronic
instruments’:

The last stage of the laboring society, the society of jobholders,
demands of its members a sheer automatic functioning, as
though individual life had actually been submerged in the
over-all life process of the species and the only active decision
still required of the individual were to let go, so to speak, to
abandon his individuality, the still individually sensed pain
and trouble of living, and acquiesce in a dazed, ‘tranquilized,’
functional type of behavior. The trouble with modern theories
of behaviorism is not that they are wrong but that they could
become true, that they actually are the best possible concep-
tualization of certain obvious trends in modern society. It is
quite conceivable that the modern age – which began with
such an unprecedented and promising outburst of human
activity – may end in the deadliest, most sterile passivity
history has ever known.

(Arendt, 1998: 322)

Surveillance capitalism establishes a new form of power in
which contract and the rule of law are supplanted by the
rewards and punishments of a new kind of invisible hand.
A more complete theorization of this new power, while a
central concern of my new work, exceeds the scope of this
article. I do want to highlight, however, a few key themes that
can help us appreciate the unique character of surveillance
capitalism.

According to Varian, people agree to the ‘invasion of
privacy’ represented by Big Other if they ‘get something they
want in return … a mortgage, medical advice, legal advice – or
advice from your personal digital assistant’ (2014: 30). He is
quoted in a similar vein by a PEW Research report, ‘Digital
Life in 2025:’ ‘There is no putting the genie back in the bottle
… Everyone will expect to be tracked and monitored, since the
advantages, in terms of convenience, safety, and services, will
be so great … continuous monitoring will be the norm’ (PEW
Research, 2014). How to establish the validity of this assertion?
To what extent are these supposed reciprocities the product of
genuine consent? This question opens the way to another
radical, perhaps even revolutionary, aspect of the politics of
surveillance capitalism. This concerns the distribution of
privacy rights and with it the knowledge of and choice to
accede to Big Other.

Covert data capture is often regarded as a violation,
invasion, or erosion of privacy rights, as Varian’s language
suggests. In the conventional narrative of the privacy threat,
institutional secrecy has grown, and individual privacy rights
have been eroded. But that framing is misleading, because
privacy and secrecy are not opposites but rather moments in a
sequence. Secrecy is an effect of privacy, which is its cause.
Exercising one’s right to privacy produces choice, and one can
choose to keep something secret or to share it. Privacy rights
thus confer decision rights; privacy enables a decision as to
where one wants to be on the spectrum between secrecy and
transparency in each situation. US Supreme Court Justice
Douglas articulated this view of privacy in 1967: ‘Privacy
involves the choice of the individual to disclose or to reveal
what he believes, what he thinks, what he possesses …’
(Warden v. Hayden, 387 US 294,323, 1967, Douglas, J.,
dissenting, quoted in Farahany, 2012: 1271).

The work of surveillance, it appears, is not to erode privacy
rights but rather to redistribute them. Instead of many people
having some privacy rights, these rights have been concen-
trated within the surveillance regime. Surveillance capitalists
have extensive privacy rights and therefore many opportu-
nities for secrets. These are increasingly used to deprive
populations of choice in the matter of what about their lives
remains secret. This concentration of rights is accomplished in
two ways. In the case of Google, Facebook, and other
exemplars of surveillance capitalism, many of their rights
appear to come from taking others’ without asking – in
conformance with the Street View model. Surveillance capi-
talists have skillfully exploited a lag in social evolution as the
rapid development of their abilities to surveil for profit outruns
public understanding and the eventual development of law
and regulation that it produces. In result, privacy rights, once
accumulated and asserted, can then be invoked as legitimation
for maintaining the obscurity of surveillance operations.10

The mechanisms of this growing concentration of privacy
rights and its implications received significant scrutiny from
legal scholars in the US and Europe, even before Edward
Snowden accelerated the discussion. This is a rich and growing
literature that raises many substantial concerns associated
with the anti-democratic implications of the concentration of
privacy rights among private and public surveillance actors
(Schwartz, 1989; Solove, 2007; Michaels, 2008; Palfrey, 2008;
Semitsu, 2011; Richards, 2013; Calo, 2014; Reidenberg, 2014;
Richards and King, 2014). The global reach and implications
of this extraction of rights – as well as data – present many
challenges for conceptualization, including how to overcome
the very secrecy that makes them problematic in the first
place. Further, the dynamics I describe occur in what was until
quite recently a blank area – one that is not easily captured by
our existing social, economic, and political categories. The
new business operations frequently elude existing mental
models and defy conventional expectations.

These arguments suggest that the logic of accumulation that
undergirds surveillance capitalism is not wholly captured by the
conventional institutional terrain of the private firm. What is
accumulated here is not only surveillance assets and capital, but
also rights. This occurs through a unique assemblage of business
processes that operate outside the auspices of legitimate demo-
cratic mechanisms or the traditional market pressures of con-
sumer reciprocity and choice. It is accomplished through a form
of unilateral declaration that most closely resembles the social
relations of a pre-modern absolutist authority. In the context of
this new market form that I call surveillance capitalism, hypers-
cale becomes a profoundly anti-democratic threat.

Surveillance capitalism thus qualifies as a new logic of
accumulation with a new politics and social relations that
replaces contracts, the rule of law, and social trust with the
sovereignty of Big Other. It imposes a privately administered
compliance regime of rewards and punishments that is
sustained by a unilateral redistribution of rights. Big Other
exists in the absence of legitimate authority and is largely free
from detection or sanction. In this sense Big Other may be
described as an automated coup from above: not a coup d’état,
but rather a coup des gens.

Personalization and communication
Varian claims that ‘nowadays, people have come to expect
personalized search results and ads.’ He says that Google
wants to do even more. Instead of having to ask Google
questions, it should ‘know what you want and tell you before
you ask the question.’ ‘That vision,’ he asserts, ‘has now been
realized by Google Now …’ Varian concedes that ‘Google
Now has to know a lot about you and your environment to
provide these services. This worries some people’ (2014: 28).
However, Varian reasons that people share such knowledge
with doctors, lawyers, and accountants whom they trust. He
then continues, ‘Why am I willing to share all this private
information? Because I get something in return …’ (2014: 28).

In fact, surveillance capitalism is the precise opposite of the
trust-based relationships to which Varian refers. Doctors,
attorneys, and other trusted professionals are held to account
by mutual dependencies and reciprocities overlain by the force
of professional sanction and public law. Google, as we have seen,
does not bear such burdens. Its formal indifference and distance
from ‘users,’ combined with its current freedom from mean-
ingful regulation, sanction, or law, buffer it and other surveil-
lance capitalists from the consequences of mistrust. Instead of
Varian’s implied reciprocities, the coup des gens introduces
substantial new asymmetries of knowledge and power.

For example, Google knows far more about its populations
than they know about themselves. Indeed, there are no means
by which populations can cross this divide, given the material,
intellectual, and proprietary hurdles required for data analysis
and the absence of feedback loops. Another asymmetry is
reflected in the fact that the typical user has little or no
knowledge of Google’s business operations, the full range of
personal data that they contribute to Google’s servers, the
retention of those data, or how those data are instrumentalized
and monetized. It is by now well known that users have few
meaningful options for privacy self-management (for a recent
review of the ‘consent dilemma,’ see Solove, 2013). Surveil-
lance capitalism thrives on the public’s ignorance.

These asymmetries in knowledge are sustained by asymme-
tries of power. Big Other is institutionalized in the automatic
undetectable functions of a global infrastructure that is also
regarded by most people as essential for basic social participa-
tion. The tools on offer by Google and other surveillance
capitalist firms respond to the needs of beleaguered second
modernity individuals – like the apple in the garden, once
tasted they are impossible to live without. When Facebook
crashed in some US cities for a few hours during the summer
of 2014, many Americans called their local emergency services
at 911 (LA Times, 2014). Google’s tools are not the objects of a
value exchange. They do not establish constructive producer-
consumer reciprocities. Instead they are the ‘hooks’ that lure
users into extractive operations and turn ordinary life into the
daily renewal of a 21st-century Faustian pact. This social
dependency is at the heart of the surveillance project. Powerful
felt needs for effective life vie against the inclination to resist
the surveillance project. This conflict produces a kind of
psychic numbing that inures people to the realities of being
tracked, parsed, mined, and modified – or disposes them to
rationalize the situation in resigned cynicism (Hoofnagle et al.,
2010). The key point here is that this Faustian deal is
fundamentally illegitimate; it is a choice that 21st-century
individuals should not have to make. In the world of
surveillance capitalism, the Faustian pact required to ‘get
something in return’ eliminates the older entanglements of
reciprocity and trust in favor of a wary resentment, frustra-
tion, active defense, and/or desensitization.

Varian’s confidence in Google Now appears to be buoyed
by the facts of inequality. He counsels that the way to predict
the future is to observe what rich people have, because that is
what the middle class and the poor will want too. ‘What do
rich people have now?’ he asks. ‘Personal assistants’ is his
answer. The solution? ‘That’s Google Now,’ he says (2014: 29).
Varian’s bet is that Google Now will be so vital a resource in
the struggle for effective life that ordinary people will accede to
the ‘invasions of privacy’ that are its quid pro quo.

In this formulation Varian exploits a longstanding insight
of capitalism but bends it to the objectives of the surveillance
project. Adam Smith wrote insightfully on the evolution of
luxuries into necessities. Goods in use among the upper class
and deemed to be luxuries can in time be recast as ‘neces-
saries,’ he noted. The process occurs as ‘the established rules of
decency’ change to reflect new customs and patterns intro-
duced by elites. These changing rules both reflect and trigger
new lower cost production methods that transform former
luxuries into affordable necessities (Smith, 1994: 938–939).
Scholars of early modern consumption describe the ‘consumer
boom’ that ignited the first industrial revolution in late 18th-
century Britain as new middle-class families began to buy the
sorts of goods – china, furniture, textiles – that only the rich
had enjoyed. Historian Neil McKendrick describes this new
‘propensity to consume … unprecedented in the depth to
which it penetrated the lower reaches of society …’
(McKendrick, 1982: 11) as luxuries were reinterpreted as
‘decencies’ and those were reinterpreted as ‘necessities’
(Weatherill, 1993). In 1767, the political economist Nathaniel
Forster worried that ‘fashionable luxury’ was spreading ‘like a
contagion,’ as he complained of the ‘perpetual restless ambi-
tion in each of the inferior ranks to raise themselves to the
level of those immediately above them’ (Forster, 1767: 41).
Historically, this powerful evolutionary characteristic of
demand led to the expansion of production, jobs, higher
wages, and lower cost goods. Varian has no such reciprocities
in mind. Instead, he regards this mechanism of demand
growth as the inevitable force that will push ordinary people
into Google Now’s Faustian pact of ‘necessaries’ in return for
surveillance assets.

Varian is confident that psychic numbing will ease the way
for this unsavory drama. He writes, ‘Of course there will be
challenges. But these digital assistants will be so useful that
everyone will want one, and the statements you read today
about them will just seem quaint and old fashioned’ (2014:
29). But perhaps not. There is a growing body of evidence to
suggest that people in many countries may resist the coup des
gens as trust in the surveillance capitalists is hollowed out by
fresh outbreaks of evidence that suggest the remorseless
prospect of Varian’s future society. These issues are now a
matter of serious political debate within Germany and the EU
where proposals to ‘break up’ Google are already being
discussed (Mance et al., 2014; see also Barker and Fontanella-
Khan, 2014; Döpfner, 2014; Gabriel, 2014; Vasagar, 2014).
A recent survey by the Financial Times indicates that both
Europeans and Americans are substantially altering their
online behavior as they seek more privacy (Kwong, 2014).
One group of scholars behind a major study of youth online
behavior concludes that a ‘lack of knowledge’ rather than a
‘cavalier attitude toward privacy,’ as tech leaders have alleged,
is an important reason why large numbers of youth ‘engage
with the digital world in a seemingly unconcerned manner’
(Hoofnagle et al., 2010). New legal scholarship reveals the
consumer harm in lost privacy associated with Google and
surveillance capitalism (Newman, 2014). WikiLeaks founder,
Julian Assange, has published a sobering account of Google’s
leadership, politics, and global ambitions (Assange, 2014).
The PEW Research Center’s latest report on public percep-
tions of privacy in the post-Snowden Era indicates that 91% of
US adults agree or strongly agree that consumers have lost
control over their personal data, while only 55% agree or
strongly agree that they are willing to ‘share some information
about myself with companies in order to use online services
for free’ (Madden, 2014).

Continuous experiments
Because ‘big data’ analysis yields only correlational patterns,
Varian advises the need for continuous experiments that can
tease out issues of causality. Such experiments ‘are easy to do
on the web,’ assigning treatment and control groups based
on traffic, cookies, usernames, geographic areas, and so on
(2014: 29). Google has been so successful at experimentation
that they have shared their techniques with advertisers and
publishers. Facebook has consistently made inroads here too,
as it conducts experiments in modifying users’ behavior with
a view to eventually monetizing its knowledge, predictive
capability, and control. Whenever these experiments have
been revealed, however, they have ignited fierce public debate
(Bond et al., 2012; Flynn, 2014; Gapper, 2014; Goel, 2014;
Kramer et al., 2014; Lanier, 2014; Zittrain, 2014).
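
To make the mechanics concrete, here is a minimal Python sketch (not drawn from Varian, Google, or Facebook, and using invented identifiers) of one common way such web experiments are assigned: hashing a cookie or username so that each visitor falls deterministically and stably into a treatment or control group.

import hashlib

def assign_group(visitor_id: str, experiment: str, treatment_share: float = 0.5) -> str:
    # Hash the experiment name together with the visitor identifier so that the
    # same visitor always lands in the same group for a given experiment.
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform value in [0, 1]
    return "treatment" if bucket < treatment_share else "control"

# Hypothetical cookie identifiers, for illustration only.
for cookie in ["cookie_001", "cookie_002", "cookie_003"]:
    print(cookie, assign_group(cookie, "ad_layout_test"))

Comparing outcomes (clicks, purchases, time on site) between the two groups then gives the causal read that correlational "big data" analysis alone cannot.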

Varian’s enthusiasm for experimentation speaks to a larger
point, however. The business opportunities associated with
the new data flows entail a shift from the a posteriori analysis
to which Constantiou and Kallinikos (2014) refer, to the real-
time observation, communication, analysis, prediction, and
modification of actual behavior now and soon (Foroohar,
2014; Gibbs, 2014; Lin, 2014; Trotman, 2014; Waters, 2014).
This entails another shift in the source of surveillance assets
from virtual behavior to actual behavior, while monetization
opportunities are refocused to blend virtual and actual beha-
vior. This is a new business frontier comprised of knowledge
about real-time behavior that creates opportunities to inter-
vene in and modify behavior for profit. The two entities at the
vanguard of this new wave of ‘reality mining,’ ‘patterns of life
analysis,’ and ‘predictive analytics’ are Google and the NSA.
As the White House report puts it, ‘there is a growing
potential for big data analytics to have an immediate effect
on a person’s surrounding environment or decisions being
made about his or her life’ (2014: 5). This is what I call the
reality business, and it reflects an evolution in the frontier of
data science from data mining to reality mining in which,
according to MIT’s Sandy Pentland, ‘sensors, mobile phones,
and other data capture devices’ provide the ‘eyes and ears’ of a
‘world-spanning living organism’ from ‘a God’s eye view’
(Pentland, 2009: 76, 80). This is yet another rendering of the
‘extended order,’ fully explicated by computer-mediation. The
electronic text of the informated workplace has morphed into a
world-spanning living organism – an inter-operational, beha-
vior-modifying, market-making, and proprietary God view.

Nearly 70 years ago historian Karl Polanyi observed that the
market economies of the 19th and 20th centuries depended
upon three astonishing mental inventions that he called
‘fictions.’ The first was that human life can be subordinated
to market dynamics and be reborn as ‘labor.’ Second, nature
can be subordinated and reborn as ‘real estate.’ Third, that
exchange can be reborn as ‘money.’ The very possibility of
industrial capitalism depended upon the creation of these
three critical ‘fictional commodities.’ Life, nature, and
exchange were transformed into things, that they might be
profitably bought and sold. ‘[T]he commodity fiction,’ he
wrote, ‘disregarded the fact that leaving the fate of soil and
people to the market would be tantamount to annihilating
them.’

With the new logic of accumulation that is surveillance
capitalism, a fourth fictional commodity emerges as a domi-
nant characteristic of market dynamics in the 21st century.
Reality itself is undergoing the same kind of fictional meta-
morphosis as did persons, nature, and exchange. Now ‘reality’
is subjugated to commodification and monetization and
reborn as ‘behavior.’ Data about the behaviors of bodies,
minds, and things take their place in a universal real-time
dynamic index of smart objects within an infinite global
domain of wired things. This new phenomenon produces the
possibility of modifying the behaviors of persons and things
for profit and control. In the logic of surveillance capitalism
there are no individuals, only the world-spanning organism
and all the tiniest elements within it.

Conclusion
Technologies are constituted by unique affordances, but the
development and expression of those affordances are shaped by
the institutional logics in which technologies are designed,
implemented, and used. This is, after all, the origin of the hack.
Hacking intends to liberate affordances from the institutional
logics in which they are frozen and redistribute them in
alternative configurations for new purposes. In the market
sphere, these circumscribing logics are logics of accumulation.
With this view in mind, my aim has been to begin to identify and
theorize the currently institutionalizing logic of accumulation
that produces hyperscale assemblages of objective and subjective
data about individuals and their habitats for the purposes of
knowing, controlling, and modifying behavior to produce new
varieties of commodification, monetization, and control.

The development of the Internet and methods to access the
World Wide Web spread computer mediation from bounded
sites of work and specialized action to global ubiquity both at
the institutional interface and in the intimate spheres of
everyday experience. High tech firms, led by Google, perceived
new profit opportunities in these facts. Google understood
that were it to capture more of these data, store them, and
analyze them, they could substantially affect the value of
advertising. As Google’s capabilities in this arena developed
and attracted historic levels of profit, it produced successively
ambitious practices that expand the data lens from past virtual
behavior to current and future actual behavior. New moneti-
zation opportunities are thus associated with a new global
architecture of data capture and analysis that produces
rewards and punishments aimed at modifying and commodi-
tizing behavior for profit.

Many of the practices associated with capitalizing on these
newly perceived opportunities challenged social norms asso-
ciated with privacy and are contested as violations of rights
and laws. In result, Google and other actors learned to obscure
their operations, choosing to invade undefended individual
and social territory until opposition is encountered, at which
point they can use their substantial resources to defend at low
cost what had already been taken. In this way, surveillance
assets are accumulated and attract significant surveillance
capital while producing their own surprising new politics and
social relations.

These new institutional facts have been allowed to stand for
a variety of reasons: they were constructed at high velocity and
designed to be undetectable. Outside a narrow realm of
experts, few people understood their meaning. Structural
asymmetries of knowledge and rights made it impossible for
people to learn about these practices. Leading tech companies
were respected and treated as emissaries of the future. Nothing
in past experience prepared people for these new practices,
and so there were few defensive barriers for protection.
Individuals quickly came to depend upon the new information
and communication tools as necessary resources in the
increasingly stressful, competitive, and stratified struggle for
effective life. The new tools, networks, apps, platforms, and
media thus became requirements for social participation.
Finally, the rapid buildup of institutionalized facts – data
brokerage, data analytics, data mining, professional specializa-
tions, unimaginable cash flows, powerful network effects, state
collaboration, hyperscale material assets, and unprecedented
concentrations of information power – produced an over-
whelming sense of inevitability.

These developments became the basis for a fully institutio-
nalized new logic of accumulation that I have called surveil-
lance capitalism. In this new regime, a global architecture of
computer mediation turns the electronic text of the bounded
organization into an intelligent world-spanning organism that
I call Big Other. New possibilities of subjugation are produced
as this innovative institutional logic thrives on unexpected and
illegible mechanisms of extraction and control that exile
persons from their own behavior.

Under these conditions, the division of learning and its
contests are civilizational in scope. To the question ‘who
participates?’ the answer is – those with the material, knowl-
edge, and financial resources to access Big Other. To the
question ‘who decides?’ the answer is, access to Big Other is
decided by new markets in the commodification of behavior:
markets in behavioral control. These are composed of those
who sell opportunities to influence behavior for profit and
those who purchase such opportunities. Thus Google, for
example, may sell access to an insurance company, and this
company purchases the right to intervene in an information
loop in your car or your kitchen in order to increase its
revenues or reduce its costs. It may shut off your car, because
you are driving too fast. It may lock your fridge when you put
yourself at risk of heart disease or diabetes by eating too much
ice cream. You might then face the prospect of either higher
premiums or loss of coverage. Google’s Chief Economist Hal
Varian celebrates such possibilities as new forms of contract,
when in fact they represent the end of contracts. Google’s
rendering of information civilization replaces the rule of law
and the necessity of social trust as the basis for human
communities with a new life-world of rewards and punish-
ments, stimulus and response. Surveillance capitalism offers a
new regime of comprehensive facts and compliance with facts.
It is, I have suggested, a coup from above – the installation of a
new kind of sovereign power.

The automated ubiquitous architecture of Big Other, its
derivation in surveillance assets, and its function as pervasive
surveillance, highlights other surprising new features of this
logic of accumulation. It undermines the historical relationship
between markets and democracies, as it structures the firm as
formally indifferent to and radically distant from its populations.
Surveillance capitalism is immune to the traditional reciprocities
in which populations and capitalists needed one another for
employment and consumption. In this new model, populations
are targets of data extraction. This radical disembedding from
the social is another aspect of surveillance capitalism’s anti-
democratic character. Under surveillance capitalism, democracy
no longer functions as a means to prosperity; democracy
threatens surveillance revenues.

Will surveillance capitalism be the hegemonic logic of
accumulation in our time, or will it be an evolutionary dead-
end that cedes to other emerging information-based market
forms? What alternative trajectories to the future might be
associated with these competing forms? I suggest that the
prospects of information civilization rest on the answers to
these questions. There are many dimensions of surveillance
capitalism that require careful analysis and theorization if
we are to reckon with these prospects. One obvious dimen-
sion is the imbrication of public and private authority in the
surveillance project. Since Edward Snowden, we have
learned of the blurring of public and private boundaries in
surveillance activities including collaborations and con-
structive interdependencies between state security authori-
ties and high tech firms. Another key set of issues involves
the relationship of surveillance capitalism – and its potential
competitors – to overarching global concerns such as
equality and climate disruptions that affect all our future
prospects. A third issue concerns the velocity of social
evolution compared to that at which the surveillance project
is institutionalized. It seems clear that the waves of lawsuits
breaking on the shores of the new surveillance fortress are
unlikely to alter the behavior of surveillance capitalists.
Were surveillance capitalists to abandon their contested
practices according to the demands of aggrieved parties, the
very logic of accumulation responsible for their rapid rise to
immense wealth and historic concentrations of power
would be undermined. The value of the steady flow of legal
actions is rather to establish new precedents and ultimately
new laws. The question is whether the lag in social evolution
can be remedied before the full consequences of the
surveillance project take hold.

Finally, and most important for all scholars and citizens, is
the fact that we are at the very beginning of the narrative that
will carry us toward new answers. The trajectory of this
narrative depends in no small measure on the scholars drawn
to this frontier project and the citizens who act in the knowl-
edge that deception-induced ignorance is no social contract,
and freedom from uncertainty is no freedom.

Notes
1 For a recent example of this, see ‘JetBlue to Add Bag Fees, Cut
Legroom’ (Nicas, 2014).

2 See Braudel’s discussion on this point (1984: 620).
3 Consider that in 1986 there were 2.5 optimally compressed
exabytes, only 1% of which were digitized (Hilbert, 2013: 4). In
2000, only a quarter of the world’s stored information was digital
(Mayer-Schönberger and Cukier, 2013: 9). By 2007, there were
around 300 optimally compressed exabytes with 94% digitized
(Hilbert, 2013: 4). Digitization and datafication (the application of
software that allows computers and algorithms to process and
analyze raw data) combined with new and cheaper storage
technologies produced 1200 exabytes of data stored worldwide
in 2013 with 98% digital content (Mayer-Schönberger and
Cukier, 2013: 9).

4 The EU Court’s 2014 ruling on the ‘right to be forgotten’ arguably
represents the first time that Google has been forced to
substantially alter its practices as an adaptation to regulatory
demands – the first chapter of what is sure to be an evolving story.

5 For an extended discussion of this theme, see Zuboff and Maxmin
(2002, especially chapters 4, 6, and 10).

6 With the competitive advantage of Google’s exponentially
expanding data capture, Google’s ad revenues jumped from $21
billion in 2008 to over $50 billion in 2013. By February 2014, 15
years after its founding, Google’s $400 billion market value
edged out Exxon for the #2 spot in market capitalization, making
it the second most valuable company after Apple (Farzad, 2014).

7 Consider these facts in relation to Google and Facebook, the most
hyper of the hyperscale firms. Google processes four billion
searches a day. A 2009 presentation by Google engineer Jeff
Dean indicated that it was planning the capacity for ten million
servers and an exabyte of information. His technical article
published in 2008 described new analytics that allowed Google
to process 20 petabytes of data per day (1000 petabytes = 1
exabyte), or about 7 exabytes a year (Dean and Ghemawat, 2008;
Dean, 2009). One analyst observed that these numbers have likely
been substantially exceeded by now, ‘particularly given the
volume of data being uploaded to YouTube, which alone has 72h
worth of video uploaded every minute’ (Wallbank, 2012). As for
Facebook, it has more than a billion users. At the time of its float
on the US stock market in 2012, it claimed to have more than
seven billion photos uploaded each month and more than 100
petabytes of photos and videos stored in its servers (Ziegler,
2012).

8 Smaller firms without hyperscale revenues can leverage some of
these capabilities with cloud computing services (Manyika and
Chui, 2014; Münstermann et al., 2014).

9 See my discussion of anticipatory conformity in Zuboff (1988:
346–356). For an update, see recent research on Internet search
behavior in Marthews and Tucker (2014).

10 This process is apparently exemplified in the US federal lawsuit
concerning Google’s data mining of student emails sent and received
by users of its Apps for Education cloud service. See Herold (2014).

References

Acemoglu, D. and Robinson, J.A. (2012). Why Nations Fail: The origins of power,
prosperity, and poverty, New York, NY: Crown Business.

Anderson, N. (2010). Why Google keeps your data forever, tracks you with ads,
ArsTechnica.8 March [WWW document] http://arstechnica.com/tech-policy/
news/2010/03/google-keeps-your-data-to-learn-from-good-guys-fight-off-bad-
guys.ars(accessed 21 November 2014).

Angwin, J. (2012). Google faces new privacy probes, Wall Street Journal. 16 March
[WWW document] http://online.wsj.com/articles/SB100014240527023046
92804577283821586827892 (accessed 21 November 2014).

Angwin, J. (2014). Dragnet Nation: A quest for privacy, security, and freedom in a
world of relentless surveillance, New York: Times Books.

Arendt, H. (1998). The Human Condition, Chicago, IL: University of Chicago Press.
Assange, J. (2014). When Google Met WikiLeaks, New York, NY: OR Books.
Auletta, K. (2009). Googled: The end of the world as we know it, New York, NY:

Penguin Books.
Barker, A. and Fontanella-Khan, J. (2014). Google feels political wind shift against

it in Europe, Financial Times. 21 May [WWW document] http://www.ft.com/
intl/cms/s/2/7848572e-e0c1-11e3-a934-00144feabdc0.html#axzz3JjXPNno5
(accessed 21 November 2014).

BBC (2010). Internet access ‘a human right’, BBC News. 8 March [WWW
document] http://news.bbc.co.uk/2/hi/8548190.stm.

BBC News (2014). Wearables tracked with Raspberry Pi. 1 August [WWW
document] http://www.bbc.com/news/technology-28602997 (accessed 22
November 2014).

Benkler, Y. (2006). The Wealth of Networks: How social production transforms
markets and freedom, New Haven, CT: Yale University Press.

Berle, A.A. and Means, G.C. (1991). The Modern Corporation and Private
Property, New Brunswick, NJ: Transaction Publishers.

Bhimani, A. and Willcocks, L. (2014). Digitisation, ‘Big Data’ and the
Transformation of Accounting Information, Accounting and Business Research
44(4): 469–490.

Bond, R.M., Fariss, C.J., Jones, J.J., Kramer, A.D.I., Marlow, C., Settle, J.E. and
Fowler, J.H. (2012). A 61-million-person Experiment in Social Influence and
Political Mobilization, Nature 482(Sept 13): 295.

boyd, danah and Crawford, K. (2011). Six provocations for big data. Presented at
the A Decade in Internet Time: Symposium on the Dynamics of the Internet and
Society, Oxford Internet Institute. [WWW document] http://www.ssrn.com/
abstract=1926431.

Bradshaw, T. (2014a). Google bets on ‘internet of things’ with $3.2bn Nest deal,
Financial Times. 13 January [WWW document] http://www.ft.com/intl/cms/s/
0/90b8714a-7c99-11e3-b514-00144feabdc0.html#axzz3hbfec0he (accessed 22
November 2014).

Bradshaw, T. (2014b). Google buys UK artificial intelligence start-up, Financial
Times. 27 January [WWW document] http://www.ft.com/intl/cms/s/0/
f92123b2-8702-11e3-aa31-00144feab7de.html#axzz3HBfEc0HE (accessed 22
November 2014).

Braudel, F. (1984). The Perspective of the World, New York, NY: Harper & Row.
Brewster, T. (2014). Traffic lights, fridges and how they’ve all got it in for us,

Register. 23 June [WWW document] http://www.theregister.co.uk/2014/06/23/
hold_interthreat/ (accessed 22 November 2014).

Burdon, M. and McKillop, A. (2013). The Google Street View Wi-Fi Scandal and
Its Repercussions for Privacy Regulation (Research Paper No. 14-07), University
of Queensland TC Beime School of Law. [WWW document] http://papers.ssrn.
com/sol3/papers.cfm?abstract_id=2471316.

Calo, R. (2014). Digital Market Manipulation, George Washington Law Review
82(4): 995–1051.

Chandler, Jr A.D. (1977). The Visible Hand: The Managerial Revolution in
American Business, Cambridge, MA: Belknap Press.

Cisco (2013a). Embracing the internet of everything to capture your share of $14.4
trillion, Cisco Systems, Inc. [WWW document] http://www.cisco.com/web/
about/ac79/docs/innov/IoE_Economy.pdf (accessed 9 June 2014).

Cisco (2013b). The internet of everything: global private sector economic analysis,
Cisco Systems, Inc. [WWW document] http://www.cisco.com/web/about/ac79/
docs/innov/IoE_Economy_FAQ.pdf (accessed 22 November 2014).

CNIL (2014, September 25). Google privacy policy: WP29 proposes a compliance
package, Commission Nationale de L’informatique et Des Libertés.
[WWW document] http://www.cnil.fr/english/news-and-events/news/article/
google-privacy-policy-wp29-proposes-a-compliance-package/ (accessed 21
November 2014).

Cohen, L. (2003). A Consumers’ Republic: The politics of mass consumption in
postwar America, New York, NY: Knopf.

Constantiou, I.D. and Kallinikos, J. (2014). New Games, New Rules: Big data and
the changing context of strategy, Journal of Information Technology, advance
online publication 9 September, doi: 10.1057/jit.2014.17.

Davis, G. (2011). The Twilight of the Berle and Means Corporation, Seattle
University Law Review 34(4): 1121–1138.

Davis, G. (2013). After the Corporation, Politics & Society 41(2): 283–308.

Dean, J. (2009). Challenges in building large-scale information retrieval systems,
Google Fellow Presentation. [WWW document] http://static.googleusercontent.
com/media/research.google.com/en/us/people/jeff/WSDM09-keynote.pdf
(accessed 22 November 2014).

Dean, J. and Ghemawat, S. (2008). MapReduce: Simplified data processing on
large clusters, Communications of the ACM 51(1): 107.

Döpfner, M. (2014). Why we fear Google, Frankfurter Allgemeine Zeitung.
[WWW document] http://www.faz.net/aktuell/feuilleton/debatten/mathias-
doepfner-s-open-letter-to-eric-schmidt-12900860.html (accessed 17 April
2014).

Doyle, J. (2013, November 15). Google facing legal action in EVERY EU country
over ‘data goldmine’ collected about users, Daily Mail Online. [WWW
document] http://www.dailymail.co.uk/sciencetech/article-2302870/Google-
facing-legal-action-EVERY-EU-country-data-goldmine-collected-users.html
(accessed 21 November 2014).

Durkheim, E. (1964). The Division of Labor in Society, New York, NY: Free Press.
Dwoskin, E. (2014). What secrets your phone is sharing about you, Wall Street

Journal. 14 January [WWW document] http://online.wsj.com/articles/
SB10001424052702303453004579290632128929194.

Economist (2014). The new GE: Google, everywhere. 18 January [WWW
document] http://www.economist.com/news/business/21594259-string-deals-
internet-giant-has-positioned-itself-become-big-inventor-and.

EPIC (2014a). Google glass and privacy, Electronic Privacy Information Center.
[WWW document] https://epic.org/privacy/google/glass/ (accessed 15
November 2014).

EPIC (2014b). Investigations of Google Street View, Electronic Privacy Information
Center. [WWW document] https://epic.org/privacy/streetview/ (accessed 21
November 2014).

Farahany, N.A. (2012). Searching Secrets, University of Pennsylvania Law Review
160(5): 1239–1308.

Farzad, R. (2014). Google at $400 billion: a new no. 2 in market cap,
BusinessWeek: Technology. 12 February [WWW document] http://www
.businessweek.com/articles/2014-02-12/google-at-400-billion-a-new-no-dot-
2-in-market-cap.

Finamore, E. and Dutta, K. (2014). ‘Summoning the demon’: artificial intelligence
is real threat to humanity, says PayPal founder, The Independent. [WWW
document] http://www.independent.co.uk/life-style/gadgets-and-tech/news/
tesla-boss-elon-musk-warns-artificial-intelligence-development-is-summoning-
the-demon-9819760.html (accessed 22 November 2014).

Fink, E. (2014). This drone can steal what’s on your phone, CNNMoney. 20 March
[WWW document] http://money.cnn.com/2014/03/20/technology/security/
drone-phone/index.html (accessed 22 November 2014).

Flynn, K. (2014). Facebook will share users’ political leanings with ABC news,
BuzzFeed, Huffington Post. 31 October [WWW document] http://www.
huffingtonpost.com/2014/10/31/facebook-buzzfeed-politics_n_6082312.html
(accessed 22 November 2014).

Foroohar, R. (2014). Tech titans are living in a naïve, dangerously insular bubble,
Time. 24 January[WWW document] http://business.time.com/2014/01/24/eric-
schmidt-george-soros-a-tale-of-two-titans/.

Forster, N. (1767). An Enquiry into the Causes of the Present High Price of
Provisions, London, UK: J. Fletcher and Co.

Gabriel, S. (2014). Sigmar Gabriel political consequences of the Google debate,
Frankfurter Allgemeine Zeitung. 20 May [WWW document] http://www.faz.net/
aktuell/feuilleton/debatten/the-digital-debate/sigmar-gabriel-consequences-of-
the-google-debate-12948701.html.

Gapper, J. (2014). We are the product facebook has been testing, FT. [WWW
document] http://www.ft.com/intl/cms/s/0/6576b0c2-0138-11e4-a938-
00144feab7de.html#axzz3R6dH0dDm (accessed 5 July 2014).

Garside, J. (2014). From Google to Amazon: EU goes to war against power of US
digital giants, Guardian. 5 July [WWW document] http://www.theguardian.
com/technology/2014/jul/06/google-amazon-europe-goes-to-war-power-digital-
giants (accessed 21 November 2014).

Gibbs, S. (2014). Google’s founders on the future of health, transport – and robots,
Guardian. 7 July [WWW document] http://www.theguardian.com/technology/
2014/jul/07/google-founders-larry-page-sergey-brin-interview (accessed 21
November 2014).

Hayek, F.A. (1988). The Fatal Conceit: The errors of socialism, Chicago, IL:
University of Chicago Press.

Herold, B. (2014). Google under fire for data-mining student email messages –
education week, Education Week. 26 March [WWW document] http://www
.edweek.org/ew/articles/2014/03/13/26google.h33.html.

Hilbert, M. (2013). Technological Information Inequality As an Incessantly
Moving Target: The redistribution of information and communication capacities
between 1986 and 2010, Journal of the American Society for Information Science
and Technology 65(4): 821–835.

Hoofnagle, C.J., King, J., Li, S. and Turow, J. (2010). How different are young
adults from older adults when it comes to information privacy attitudes and
policies? SSRN Electronic Journal [WWW document] http://www.ssrn.com/
abstract=1589864.

Jammet, A. (2014). The Evolution of EU Law on the Protection of Personal Data,
Center for European Law and Legal Studies 3(6): 1–18.

Kelly, H. (2014). Smartphones are fading. Wearables are next,
CNNMoney. 19 March [WWW document] http://money.cnn.com/2014/03/
19/technology/mobile/wearable-devices/index.html (accessed 22 November
2014).

Kopczynski, P. (2014). French consumer rights watchdog sues
Google, Facebook, Twitter for privacy violations, Reuters. 25 March
[WWW document] http://rt.com/news/france-facebook-google-suit-129/
(accessed 21 November 2014).

Kovach, S. (2013). Google’s plan to take over the world, Business Insider. 18 May
[WWW document] http://www.businessinsider.com/googles-plan-to-take-over-
the-world-2013-5 (accessed 22 November 2014).

Kramer, A.D.I., Guillory, J.E. and Hancock, J.T. (2014). Experimental Evidence
of Massive-Scale Emotional Contagion through Social Networks, Proceedings of
the National Academy of Sciences 111(24): 8788–8790.

Kwong, R. (2014). Did privacy concerns change your online behaviour?, FT Data
Blog. 17 September [WWW document] http://blogs.ft.com/ftdata/2014/09/17/
didprivacy-concerns-change-your-online-behaviour/ (accessed 21 November
2014).

Lanier, J. (2013). Who Owns the Future? New York, NY: Simon & Schuster.
Lanier, J. (2014). Should Facebook manipulate users?: Lack of transparency in

Facebook study, The New York Times. 30 June [WWW document] http://www
.nytimes.com/2014/07/01/opinion/jaron-lanier-on-lack-of-transparency-in-
facebook-study.html.

LA Times, A. T. S (2014, August 1). 911 calls about Facebook outage angers L.A.
County sheriff’s officials, Los Angeles Times. [WWW document] http://www
.latimes.com/local/lanow/la-me-ln-911-calls-about-facebook-outage-angers-la-
sheriffs-officials-20140801-htmlstory.html.

Levy, S. (2009). Secret of googlenomics: data-fueled recipe brews profitability,
Wired. [WWW document] http://archive.wired.com/culture/culturereviews/
magazine/17-06/nep_googlenomics (accessed 22 November 2014).

Lin, P. (2014). What if your autonomous car keeps routing you past Krispy Kreme?
The Atlantic. 22 January [WWW document] http://www.theatlantic.com/
technology/archive/2014/01/what-if-your-autonomous-car-keeps-routing-you-
past-krispy-kreme/283221/ (accessed 22 November 2014).

Locke, J. (2010). Two Treatises of Government. New York: Kessinger Publishing,
LLC.

Madden, M. (2014). Public perceptions of privacy and security in the post-
Snowden era [WWW document] http://www.pewinternet.org/2014/11/12/
public-privacy-perceptions/.

Mance, H., Ahmed, M. and Barker, A. (2014). Google break-up plan emerges
from Brussels, Financial Times. 21 November [WWW document] http://www.ft.
com/intl/cms/s/0/617568ea-71a1-11e4-9048-00144feabdc0.
html#axzz3JjXPNno5 (accessed 21 November 2014).

Manyika, J. and Chui, M. (2014). Digital era brings hyperscale challenges,
Financial Times. 13 August [WWW document] http://www.ft.com/intl/cms/s/0/
f30051b2-1e36-11e4-bb68-00144feabdc0.html?siteedition=intl#axzz3JjXPNno5
(accessed 22 November 2014).

Marthews, A. and Tucker, C. (2014). Government Surveillance and Internet Search
Behavior, Cambridge, MA: Digital Fourth. [WWW document] http://www.ssrn
.com/abstract=2412564.

Mayer-Schönberger, V. and Cukier, K. (2013). Big Data: A revolution that will
transform how we live, work, and think. Reprint edn Boston, MA: Houghton
Mifflin Harcourt.

McKendrick, N. (1982). The Consumer Revolution of Eighteenth-Century
England, in N. McKendrick, J. Brewer and J.H. Plumb (eds.) The Birth of a
Consumer Society: The commercialization of eighteenth-century England,
Bloomington, IL: Indiana University Press.

Menn, J., Schåfer, D. and Bradshaw, T. (2010). Google set for probes on data
harvesting, Financial Times. 17 May [WWW document] http://www.ft.com/intl/
cms/s/2/254ff5b6-61e2-11df-998c-00144feab49a.html#axzz3JjXPNno5
(accessed 21 November 2014).

Michaels, J.D. (2008). All the President’s Spies: Private-public intel-
ligence partnerships in the war on terror, California Law Review 96(4):
901–966.

Mick, J. (2011). ACLU fights for answers on police phone location data tracking,
Daily Tech. 4 August [WWW document] http://www.dailytech.com/ACLU
+Fights+for+Answers+on+Police+Phone+Location+Data+Tracking/
article22352.htm (accessed 21 November 2014).

Münstermann, B., Smolinski, B. and Sprague, K. (2014). The Enterprise IT
Infrastructure Agenda for 2014, McKinsey & Company White Paper : 1–8.

Newman, J. (2009). Google’s Schmidt Roasted for Privacy Comments, PCWorld.
11 December [WWW document] http://www.pcworld.com/article/184446/
googles_schmidt_roasted_for_privacy_comments.html (accessed 21 November
2014).

Newman, N. (2014). The costs of lost privacy: consumer harm and rising economic
inequality in the age of Google, William-Mitchell Law Review 40(2): 12.

Nicas, J. (2014). JetBlue to add bag fees, reduce legroom, Wall Street Journal.
20 November [WWW document] http://online.wsj.com/articles/jetblue-to-add-
bag-fees-reduce-legroom-1416406199.

Nissenbaum, H. (2011). A Contextual Approach to Privacy Online, Daedalus
140(4): 32–48.

O’Brien, K.J. (2012). European regulators may reopen Google Street View
inquiries, The New York Times. 2 May [WWW document] http://www.nytimes.
com/2012/05/03/technology/european-regulators-to-reopen-google-street-view-
inquiries.html.

O’Brien, K.J. and Crampton, T. (2007). E.U. probes Google over data retention
policy, The New York Times. 26 May [WWW document] http://www.nytimes.
com/2007/05/26/business/26google.html.

O’Brien, K.J. and Miller, C.C. (2013). Germany’s complicated relationship with
Google Street View, Bits Blog. 23 April [WWW document] http://bits.blogs.
nytimes.com/2013/04/23/germanys-complicated-relationship-with-google-
street-view/ (accessed 21 November 2014).

Office of the Privacy Commission of Canada (2010). Google contravened
Canadian privacy law, investigation finds, Office of the Privacy Commissioner of
Canada. 19 October [WWW document] https://www.priv.gc.ca/media/nr-c/
2010/nr-c_101019_e.asp (accessed 21 November 2014).

Owen, J. (2014). Google in court again over ‘right to be above British law’ on
alleged secret monitoring, The Independent. 8 December.

Palfrey, J. (2008). The Public and the Private the United States Border with
Cyberspace, Mississippi Law Journal 78(2): 241–294.

Parnell, B.-A. (2014). Is Google building SKYNET? Ad kingpin buys AI firm
DeepMind, Register. 27 January [WWW document] http://www.theregister.co.
uk/2014/01/27/google_deep_mind_buy/ (accessed 22 November 2014).

Pentland, A. (2009). Reality mining of mobile communications: Toward a new deal
on data, in The Global Information Technology Report, World Economic Forum
& INSEAD, pp. 75–80.

PEW Research Center (2014). Digital life in 2025. (Research Report). [WWW
document] http://www.pewinternet.org/2014/03/11/digital-life-in-2025/.

Piketty, T. (2014). Capital in the Twenty-First Century, Cambridge, MA: Belknap
Press of Harvard University Press.

Plummer, Q. (2014). Google email tip-off draws privacy concerns, Tech Times.
5 August [WWW document] http://www.techtimes.com/articles/12194/
20140805/google-email-tip-off-draws-privacy-concerns.htm (accessed 21
November 2014).

Reidenberg, J.R. (2014). Data surveillance state in the United States and Europe,
Wake Forest Law Review 48(1): 583.

Richards, N.M. (2013). The Dangers of Surveillance, Harvard Law Review
126: 1934–1965.

Richards, N.M. and King, J.H. (2014). Big Data Ethics. (Accepted Paper Series)
Saint Louis, MO: Wake Forest Law Review.

Schmarzo, B. (2014). The value of data: Google gets It!!, EMC InFocus. 10 June
[WWW document] https://infocus.emc.com/william_schmarzo/the-value-of-
data-google-gets-it/.

Schmidt, E. (2014). A chance for growth, Frankfurter Allgemeine Zeitung. 9 April
[WWW document] http://www.faz.net/aktuell/feuilleton/debatten/eric-schmidt-
about-the-good-things-google-does-a-chance-for-growth-12887909.html.

Schwartz, P. (1989). The Computer in German and American Constitutional Law:
Towards an American right of informational self-determination, American
Journal of Comparative Law 37(4): 675–701.

Semitsu, J.P. (2011). From Facebook to Mug Shot: How the dearth of social
networking privacy rights revolutionized online government surveillance, Pace
Law Review 31(1): 291.

Sklar, M.J. (1988). The Corporate Reconstruction of American Capitalism:
1890–1916: The market, the law, and politics, New York, NY: Cambridge
University Press.

Smith, A. (1994). The Wealth of Nations. (E. Cannan, ed.), Later Printing edn
New York: Modern Library.

Snelling, D. (2014). Google Maps is tracking you! How your smartphone knows
your every move, Express.co.uk. 18 August http://www.express.co.uk/life-style/
science-technology/500811/Google-Maps-is-tracking-your-every-move
(accessed 21 November 2014).

Solove, D.J. (2007). ‘I’ve Got Nothing to Hide’ and Other Misunderstandings of
Privacy, San Diego Law Review 44: 745.

Solove, D.J. (2013). Introduction: Privacy self-management and the consent
dilemma, Harvard Law Review 126(7): 1880–1904.

Steingart, G. (2014). Google debate our weapons in the digital battle for freedom,
Frankfurter Allgemeine Zeitung. 23 June [WWW document] http://www.faz.net/
aktuell/feuilleton/debatten/the-digital-debate/google-debatte-waffen-im-
digitalen-freiheitskampf-13005653.html.

Streitfeld, D. (2013). Google admits Street View project violated privacy, New York
Times. 12 March [WWW document] http://www.nytimes.com/2013/03/13/
technology/google-pays-fine-over-street-view-privacy-breach.html.

Trotman, A. (2014). Google boss Larry Page: Europe needs to be more like
Silicon Valley and support technology, Telegraph. 31 October [WWW
document] http://www.telegraph.co.uk/technology/google/11202850/Google-boss-
Larry-Page-Europe-needs-to-be-more-like-Silicon-Valley-and-support-technology
.html.

Unger, R.M. (2007). Free Trade Reimagined: The world division of labor and the
method of economics, Princeton, NJ: Princeton University Press.

U.S. Committee on Commerce, Science, and Transportation (2013). A review of
the data broker industry: collection, use and sale of consumer data for marketing
purposes, Office of Oversight and Investigations. [WWW document] http://
www.commerce.senate.gov/public/?a=Files.Serve&File_id=0d2b3642-6221-
4888-a631-08f2f255b577.

Vaidhyanathan, Siva. (2011). The Googilization of Everything, Berkeley, CA:
University of California Press.

Varian, H.R. (2010). Computer Mediated Transactions, American Economic
Review 100(2): 1–10.

Varian, H.R. (2014). Beyond Big Data, Business Economics 49(1): 27–31.
Vasagar, J. (2014). Google could face ‘cyber courts’ in Germany over privacy rights,

Financial Times. 27 May [WWW document] http://www.ft.com/intl/cms/s/0/
a7580826-e59d-11e3-8b90-00144feabdc0.html#axzz3JjXPNno5 (accessed 21
November 21 2014).

Wallbank, P. (2012). How much server space do Internet companies need to run
their sites? Decoding the New Economy. 23 August [WWW document] http://
paulwallbank.com/2012/08/23/how-much-server-space-do-internet-companies-
need-to-run-their-sites/ (accessed 22 November 2014).

Waters, R. (2014). FT interview with Google co-founder and CEO Larry Page – FT.
com, Financial Times. 31 October [WWW document] http://www.ft.com/intl/
cms/s/2/3173f19e-5fbc-11e4-8c27-00144feabdc0.html#axzz3JjXPNno5
(accessed 21 November 2014).

Weatherill, L. (1993). The Meaning of Consumer Behavior in the Seventeenth and
Early Eighteenth-Century England, in J. Brewer and R. Porter (eds.)
Consumption and the World of Goods, London, UK: Routledge.

Weber, M. (1978). Economy and Society: An outline of interpretive sociology.
Vol. 1 Berkeley, CA: University of California Press.

Weiser, M. (1991). The Computer for the 21st Century, Scientific American 265(3):
94–104.

White House (2014). Big Data: seizing opportunities, preserving values (Report for
the President), Washington D.C., USA: Executive Office of the President.
[WWW document] http://www.whitehouse.gov/sites/default/files/docs/
big_data_privacy_report_may_1_2014.pdf.

Williamson, O.E. (1985). The Economic Institutions of Capitalism, New York;
London: Free Press.

Winkler, R. and Wakabayashi, D. (2014). Google to buy nest labs for $3.2 billion –
update, EuroInvestor. 14 January [WWW document] http://www.euroinvestor.
com/news/2014/01/14/google-to-buy-nest-labs-for-32-billion-update/12658007
(accessed 22 November 2014).

Ziegler, C. (2012). Facebook IPO facts and figures: the house that 100 petabytes
built, Verge. 1 February [WWW document] http://www.theverge.com/2012/2/1/
2764905/facebook-ipo-facts-and-figures-the-house-that-100-petabytes-built
(accessed 22 November 2014).

Zittrain, J. (2014). Facebook could decide an election without anyone ever finding
out, New Republic. 1 June [WWW document] http://www.newrepublic.com/
article/117878/information-fiduciary-solution-facebook-digital-
gerrymandering.

Zuboff, S. (1981). Psychological and Organizational Implications of Computer-
Mediated Work, MIT Working Paper, Center for Information Systems Research.

Zuboff, S. (1982). New Worlds of Computer-Mediated Work, Harvard Business
Review 60(5): 142–152.

Zuboff, S. (1985). Automate/Informate: The two faces of intelligent technology,
Organizational Dynamics 14(2): 5–18.

Zuboff, S. (1988). In the Age of the Smart Machine: The future of work and power,
New York, NY: Basic Books.

Zuboff, S. (2013). Computer-Mediated Work, In Sociology of Work: An
Encyclopedia, Thousand Oaks, CA: SAGE Publications, Inc. [WWW document]
http://knowledge.sagepub.com/view/sociology-of-work/n41.xml.

Zuboff, S. and Maxmin, J. (2002). The Support Economy: Why corporations are
failing individuals and the next episode of capitalism, New York, NY: Viking
Penguin.

About the author

Shoshana Zuboff is the Charles Edward Wilson Professor of
Business Administration (Emerita) and a Faculty Associate
at the Berkman Center for Internet and Society, Harvard
Law School. She is currently completing a new book, Master
or Slave? The Fight for the Soul of Our Information Civiliza-
tion (forthcoming, Public Affairs and Eichborn 2016). She is
also the author of In the Age of the Smart Machine: The
future of work and power and The Support Economy: Why
corporations are failing individuals and the next episode of
capitalism.

Apple Is Changing How Digital Ads Work. Are Advertisers Prepared?
by Julian Runge and Eric Seufert
Published on HBR.org / April 26, 2021 / Reprint H06BYT

Apple is turning the privacy settings of its mobile ecosystem upside down. When it releases its app tracking transparency (ATT) framework with iOS 14.5 on April 26, it will shut off a stream of data that app developers, measurement companies, and advertisers have used to link users’ behavior across apps and mobile websites — a move that could reshape the digital advertising industry. With the update, the “identifier for advertisers” (IDFA), which has been activated by default on Apple devices and provides access to user-level data to app publishers, will be switched off and users will need to grant apps explicit permission to access it. With in-app prompts asking users, “Allow [app name] to track your activity across other companies’ apps and websites?” opt-in rates will likely be low.
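
As a rough illustration of what that user-level linkage looks like in practice, the Python sketch below joins a hypothetical ad-click log to a purchase log on an IDFA-style identifier; when a user withholds tracking permission the identifier is missing and the two records can no longer be connected. This is a simplified stand-in, not any vendor’s actual pipeline, and all data shown are invented.

# Hypothetical, simplified data; identifiers and amounts are invented for illustration.
ad_clicks = [
    {"idfa": "AAAA-1111", "campaign": "spring_sale"},
    {"idfa": None, "campaign": "spring_sale"},  # tracking permission denied: no identifier
]
purchases = [
    {"idfa": "AAAA-1111", "amount": 19.99},
]

# Index purchases by identifier, then try to attribute each click to a purchase.
purchases_by_id = {p["idfa"]: p for p in purchases if p["idfa"]}
for click in ad_clicks:
    match = purchases_by_id.get(click["idfa"]) if click["idfa"] else None
    if match:
        print(click["campaign"], "-> attributed purchase of", match["amount"])
    else:
        print(click["campaign"], "-> no user-level match possible")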

We anticipate that Apple’s ATT initiative will deliver a major blow to

targeted advertising, which is crucial to the business models of publishers

of online content such as Facebook, Google, and many news outlets. But

while large digital content providers will feel the effects of ATT, the large

proprietary datasets they’ve amassed may protect them in the long term.

Smaller companies, such as e-commerce operations that rely on targeted

advertising to reach customers, and mobile measurement providers, which

collect and organize app data, will likely find it harder going — a point

Facebook has tried to bring home in a campaign responding to Apple’s

policy changes.

Through the rollout of ATT, Apple is re-imagining the role that advertising

plays within its ecosystem. The move will allow the company to more

tightly control users’ app experiences and content curation. It will also

allow Apple to push adoption of its own target advertising solution — its

in-house ad tracking services use friendlier language than what is required

of third-party apps and it recently introduced new ad spots on the App

Store. Establishing itself as a leader in privacy can serve to strengthen its

brand and have lasting positive effects on its hardware sales to boot.

While ATT might be the most impactful change to the digital advertising ecosystem to date, more restrictions around user privacy are in the offing. Developments such as private click measurement (PCM), Google’s Federated Learning of Cohorts (FLoC), the end of third-party cookies in Chrome, and governmental privacy regulations such as GDPR and CCPA all point to a new privacy-centric era on the horizon. That means that advertisers and advertising firms need to learn how to play by a new set of rules — and fast. Here’s a primer on how you can be prepared to navigate the changes.


What ATT Changes

Apple’s new approach to privacy presents a clear problem for advertisers who rely on targeted advertising — in other words, most digital advertisers — in that it will make it much harder to meaningfully link user behavior across apps and mobile websites in the iOS ecosystem. Depending on opt-in rates (which, again, are expected to be low), this presents a major challenge for advertising targeting algorithms that achieve their current good performance by observing not only what ads users view and click on, but also who then proceeds to take relevant actions on the website or in the app of the advertiser.

Overall, ATT can be expected to make ads substantially less relevant for consumers and to make them perform substantially less well for advertisers — except for ads delivered by Apple’s own personalized ads system. It also reduces the precision of advertising measurement across iOS apps and mobile websites. Many industry insiders expect Google to make a similar change in the Android ecosystem at some point in the future, effectively rendering digital advertising less relevant across the board and its measurement much less granular and precise.* These changes in the digital measurement landscape roll back some of the innovations that became possible through digitization, namely precise measurement through user-level attribution and advertising experiments.

To aid advertisers in navigating the limitation in data availability introduced by ATT, Apple is offering a measurement solution called SKAdNetwork (SKAN) that makes performance data available at the campaign level. However, not only is there a limit on the number of available campaign slots per advertiser, SKAN also adds a random time delay on the observation of performance events such as purchases or cart-adds and restricts how and how many of such events can be observed per campaign.
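
To make the shift concrete: measurement moves from user-level rows to delayed, campaign-level tallies. The sketch below is a hypothetical illustration, not Apple’s SKAdNetwork API; it simulates postbacks that arrive with a random delay and carry only a campaign ID and a coarse conversion value, then aggregates them the way an advertiser would have to under SKAN-like constraints.

```python
import random
from collections import defaultdict

# Hypothetical illustration of SKAN-style measurement constraints (not Apple's API):
# postbacks carry only a campaign ID and a coarse conversion value, and arrive
# after a random delay, so analysis happens at the campaign level, not the user level.

random.seed(7)

CAMPAIGNS = ["campaign_01", "campaign_02", "campaign_03"]  # limited slots per advertiser

def simulate_postback(campaign_id: str) -> dict:
    """One install postback: no user identifier, delayed, coarse value (0-63)."""
    return {
        "campaign_id": campaign_id,
        "conversion_value": random.randint(0, 63),  # coarse bucket, not revenue
        "delay_hours": random.uniform(24, 72),      # random reporting delay
    }

postbacks = [simulate_postback(random.choice(CAMPAIGNS)) for _ in range(1_000)]

# All an advertiser can do is aggregate per campaign.
installs = defaultdict(int)
value_sum = defaultdict(int)
for p in postbacks:
    installs[p["campaign_id"]] += 1
    value_sum[p["campaign_id"]] += p["conversion_value"]

for c in CAMPAIGNS:
    avg = value_sum[c] / installs[c]
    print(f"{c}: installs={installs[c]}, avg conversion value={avg:.1f}")
```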

SKAN falls within the sphere of differential privacy, an approach to marketing measurement that uses statistical methods to make it impossible to infer any individual user’s behavior while still allowing linking of behavior across different digital properties. Differential privacy is likely to become more prevalent. Other tech companies, such as Google, are investing significantly into such technologies as well, but there may be a long way to go before wide acceptance and adoption as a new privacy-safe measurement approach.
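
As a rough intuition for what differential privacy means in practice, the sketch below applies the textbook Laplace mechanism to an aggregate conversion count: enough calibrated noise is added that any single user’s presence or absence is statistically masked, while the total stays useful. This is a generic illustration, not Apple’s or Google’s actual implementation.

```python
import numpy as np

# Minimal sketch of the Laplace mechanism, the textbook building block of
# differential privacy (not any platform's production implementation).
# Adding or removing one user changes a count by at most 1 (sensitivity = 1),
# so Laplace noise with scale sensitivity/epsilon masks any individual.

rng = np.random.default_rng(0)

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Return a differentially private version of an aggregate count."""
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

true_conversions = 10_482      # aggregate conversions attributed to a campaign
for eps in (0.1, 1.0, 5.0):    # smaller epsilon = stronger privacy, noisier answer
    print(f"epsilon={eps}: reported conversions ~ {dp_count(true_conversions, eps):,.0f}")
```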

In the meantime, more traditional measurement solutions that are privacy-safe by default will likely stand to gain in relevance. For example, marketing mix models (MMMs) were developed on and for aggregate advertising and sales data observed over time and do not require any linking of lower-level tracking data. They make use of natural variation in a firm’s marketing mix or, where possible, of explicitly induced randomization over time and/or geographies to measure advertising effects. Bearing testament to the likely renaissance of MMMs in marketing measurement, Facebook published an open-source computational package that allows advertisers to implement MMMs in a guided manner.
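
To contrast this with user-level attribution, here is a deliberately bare-bones MMM sketch: a regression of weekly revenue on aggregate spend per channel, with a simple adstock (carry-over) transformation. All data are simulated, and this is not the package Facebook released; it only illustrates measurement that needs no user-level tracking.

```python
import numpy as np

# Bare-bones marketing mix model (MMM) on simulated weekly aggregates:
# revenue is regressed on adstocked spend per channel. No user-level data required.

rng = np.random.default_rng(42)
weeks, channels = 104, ["search", "social", "tv"]

spend = rng.gamma(shape=2.0, scale=5_000, size=(weeks, len(channels)))  # weekly spend

def adstock(x: np.ndarray, decay: float = 0.5) -> np.ndarray:
    """Simple geometric carry-over: this week's effect includes part of last week's."""
    out = np.zeros_like(x)
    for t in range(len(x)):
        out[t] = x[t] + (decay * out[t - 1] if t > 0 else 0.0)
    return out

X = np.column_stack([adstock(spend[:, j]) for j in range(len(channels))])
true_effects = np.array([1.8, 1.1, 0.6])  # "ground truth" used only to simulate revenue
revenue = 50_000 + X @ true_effects + rng.normal(0, 10_000, weeks)

# Ordinary least squares with an intercept recovers channel-level effects.
design = np.column_stack([np.ones(weeks), X])
coef, *_ = np.linalg.lstsq(design, revenue, rcond=None)

for name, est, truth in zip(channels, coef[1:], true_effects):
    print(f"{name}: estimated revenue per adstocked dollar = {est:.2f} (simulated truth {truth})")
```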

How You Can Adapt

So what should advertisers and advertising firms do? We believe that internalizing the following strategic viewpoints can help businesses navigate this changing privacy landscape.

1) Embrace privacy preservation methodologies like differential privacy (Apple) and federated learning (Google). These are the primary means by which large platforms are ushering in new privacy protections for consumers — firms that are planning ahead should build advertising technology that aligns with them (a minimal sketch of federated averaging follows this list).

When privacy policy changes, the biggest pain point for advertisers is infrastructure upgrades. This sea change should be seen as an opportunity to invest in new and innovative technologies that not only comply with platform regulations but do so in a way that is forward looking. New restraints on the data that can be used for measurement and analysis can create competitive advantage in moments of dramatic change, when competitors are reticent to invest or adapt.

2) Understand that workarounds to new privacy regulations are not a viable, long-term solution. It may seem relatively cheap or straightforward to build solutions that preserve advertising workflows and measurement schemas by sneakily contravening platform policies — using device fingerprinting or server-to-server conversion management — but taking this approach merely delays the inevitable pain of adaptation. A firm should make investments into real solutions, not gimmicks that exploit loopholes or are predicated on rules not being fully enforced, especially since the privacy landscape is currently dictated largely by platforms that operate according to their own rules.

3) Transition advertising measurement away from deterministic, user-centric models. Instead, use more holistic, macro-level models that look at variations in ad spend and revenue over time to attribute efficiency to channel-specific ad campaigns. This approach requires sophisticated data-science expertise, and these types of models can be difficult to tune properly, but a measurement solution that relies on statistical sophistication is more robust and durable than one that relies on the precision of user identity. Tools like MMMs not only provide insight from data that is readily available and verifiable, such as revenue and ad spend, but they also allow traditional advertising channels such as television and out-of-home to be included in the advertising media mix and accounted for in measurement.

4) Deepen your understanding of your audience and rely less on niche products. The products that suffer most from the loss of identifier-based advertising targeting are those that target niche audiences and depend on very high rates of monetization participation, or very extreme levels of monetization from a small segment of the customer base. Building a more broadly appealing product is a strategy for overcoming the degradation of advertising effectiveness: the more people that are receptive to your product, the less targeted your ads must be in order to reach customers.

5) Get more creative and use it as a means of differentiation. Absent the targeting capabilities that are unlocked with device identifiers and behavioral histories, advertisers can focus on ad creative as a way to increase the reception their ads receive from potential customers. Novel, creative, and attractive ads can’t fully replace the efficiency lost in digital advertising from the deprecation of advertising identifiers, but they can help to reach the most relevant segment of an audience by cutting through generic, nondescript advertising from competitors. With precision targeting largely removed from the advertiser’s toolbox, ad creative can be used as a way to stand out to the most appropriate portions of the broader audiences to which ads will be exposed.
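
For readers unfamiliar with the federated learning mentioned in point 1, the sketch below shows the core federated-averaging idea on a toy linear model: each simulated client fits parameters on its own data, and only those parameters (never the raw, user-level data) are averaged centrally. It is a conceptual illustration under simplified assumptions, not Google’s production system.

```python
import numpy as np

# Toy federated averaging: each client fits a linear model locally and only the
# fitted parameters leave the device; the server averages them. No raw user data
# is ever pooled centrally. Conceptual sketch only, not a production system.

rng = np.random.default_rng(1)
true_w, true_b = 2.5, -1.0

def local_fit(n_samples: int) -> np.ndarray:
    """Simulate one client's private data and return locally fitted [w, b]."""
    x = rng.normal(size=n_samples)
    y = true_w * x + true_b + rng.normal(scale=0.5, size=n_samples)
    design = np.column_stack([x, np.ones(n_samples)])
    params, *_ = np.linalg.lstsq(design, y, rcond=None)
    return params  # only these two numbers are shared with the server

client_sizes = [200, 500, 50, 1_000]
client_params = np.array([local_fit(n) for n in client_sizes])

# Server aggregates: weighted average of parameters by client data size.
weights = np.array(client_sizes) / sum(client_sizes)
global_params = weights @ client_params

print("federated estimate of [w, b]:", np.round(global_params, 3))
```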

Apple’s ATT framework may be the most economically impactful and brazen change to privacy policy in years. It won’t be the only one, however. As this step is likely the start of a new era rather than an outlier event, we recommend using the opportunity to brush up on privacy technologies such as differential privacy and federated learning and to sustainably revamp your marketing measurement toolkit.

*Correction: An earlier version of this article stated that Google had
announced a similar move in the Android ecosystem. Google has not publicly
announced this change.

Julian Runge is a behavioral economist and digital marketing
researcher. He holds a Ph.D. in Economics and Management Science
from Humboldt University Berlin and was a repeat visiting researcher
at Stanford University. Julian also worked as a researcher focused on
academic collaboration in Facebook’s marketing science research
group. He currently advises companies how to strategically use data,
science, and experimentation to fuel their growth and customer
experiences.


Eric Seufert is a media strategist, quantitative marketer, and author
who has spent his career working for transformative consumer
technology and media companies. He is the author of the book
Freemium Economics and developed Theseus, an open-source Python
library for marketing cohort analysis. Eric now runs Heracles, a
strategy consultancy that specializes in marketing science and growth
strategy; Mobile Dev Memo, a mobile advertising and freemium
monetization trade blog; and QuantMar, a knowledge-sharing platform
for quantitative marketers.

