What is Surveillance Capitalism?
We can't remember. They never forget.
Surveillance Capitalism is the manifestation of George Orwell's prophesied Memory Hole, combined with the constant surveillance, storage, and analysis of our thoughts and actions, in such minute detail, and with such powerful artificial intelligence, that our future thoughts and actions can be predicted and manipulated to concentrate power and wealth in the hands of the very few.
The 33 citations below barely scratch the surface of Surveillance Capitalism, yet they provide a terrifying display of the powerful forces arrayed against democracy.
Surveillance Capitalism desensitizes us to the destruction of individual autonomy, rights, freedom of thought and action, privacy, sovereignty, thoughtful analysis, and memory, while demanding and ensuring that corporations and the 1% enjoy absolute rights, privacy, and impunity.
Surveillance Capitalism relies on the 24-hour news cycle to overwhelm our capacity to consider each manipulation before it is buried and forgotten in the Memory Hole.
Please feel free to leave comments or questions below the citations.
But Bezos, given how much he works and profits to destroy the privacy of everyone else (to say nothing of the labor abuses of his company), is about the least sympathetic victim imaginable of privacy invasion. In the past, hard-core surveillance cheerleaders in Congress such as Dianne Feinstein, Pete Hoekstra, and Jane Harman became overnight, indignant privacy advocates when they learned that the surveillance state apparatus they long cheered had been turned against them.
Jeff Bezos Protests the Invasion of His Privacy, as Amazon Builds a Sprawling Surveillance State for Everyone Else
Glenn Greenwald, The Intercept (February 8, 2019)
Unbeknownst to her—because she didn’t read the fine print—some data from the research study, along with her liquor purchase history, has made it to one of the two employment agencies that have come to dominate the market. Every employer who screens her application with the agency now sees that she’s been profiled as a “depressed unreliable.” No wonder she can’t get work. But even if she could discover that she’s been profiled in this way, what recourse does she have?
It’s time for a Bill of Data Rights
Martin Tisne, MIT Technology Review (December 14, 2018)
If her risk factor fluctuated upward—whether due to some suspicious pattern in her movements, her social associations, her insufficient attention to a propaganda-consumption app, or some correlation known only to the AI—a purely automated system could limit her movement. It could prevent her from purchasing plane or train tickets. It could disallow passage through checkpoints. It could remotely commandeer “smart locks” in public or private spaces, to confine her until security forces arrived.
The Panopticon Is Already Here
Xi Jinping is using artificial intelligence to enhance his government’s totalitarian control—and he’s exporting this technology to regimes around the globe.
Ross Andersen, The Atlantic (September 2020)
But AI Now, which was established last year to grapple with the social implications of artificial intelligence, expresses in the document particular dread over affect recognition, “a subclass of facial recognition that claims to detect things such as personality, inner feelings, mental health, and ‘worker engagement’ based on images or video of faces.” The thought of your boss watching you through a camera that uses machine learning to constantly assess your mental state is bad enough, while the prospect of police using “affect recognition” to deduce your future criminality based on “micro-expressions” is exponentially worse.
Artificial Intelligence Experts Issue Urgent Warning Against Facial Scanning With a “Dangerous History”
Sam Biddle, The Intercept (December 6, 2018)
At the beginning of October, Amazon was quietly issued a patent that would allow its virtual assistant Alexa to decipher a user’s physical characteristics and emotional state based on their voice. Characteristics, or “voice features,” like language accent, ethnic origin, emotion, gender, age, and background noise would be immediately extracted and tagged to the user’s data file to help deliver more targeted advertising.
Amazon’s Accent Recognition Technology Could Tell the Government Where You’re From
Belle Lin, The Intercept (November 15, 2018)
Selling products based on emotions also offers opportunities for advertisers to manipulate consumers. “If you’re a woman in a certain demographic and you’re depressed, and we know that binge shopping is something you do … knowing that you’re in kind of a vulnerable state, there’s no regulation preventing them from doing something like this,” King said.
Amazon’s Accent Recognition Technology Could Tell the Government Where You’re From
Belle Lin, The Intercept (November 15, 2018)
For now, people who want to hold onto their privacy and minimize surveillance risk shouldn’t buy a speaker at all, recommended Granick. “You’re basically installing a microphone for the government to listen in to you in your home,” she said.
Amazon’s Accent Recognition Technology Could Tell the Government Where You’re From
Belle Lin, The Intercept (November 15, 2018)
AI Now’s Whittaker singles out corporate secrecy as confounding the already problematic practices of affect recognition: “Because most of these technologies are being developed by private companies, which operate under corporate secrecy laws, our report makes a strong recommendation for protections for ethical whistleblowers within these companies.” Such whistleblowing will continue to be crucial, wrote Whittaker, because so many data firms treat privacy and transparency as a liability, rather than a virtue.
Artificial Intelligence Experts Issue Urgent Warning Against Facial Scanning With a “Dangerous History”
Sam Biddle, The Intercept (December 6, 2018)
“Most people don’t know what’s going on,” said Emmett Kilduff, the chief executive of Eagle Alpha, which sells data to financial firms and hedge funds.
Your Apps Know Where You Were Last Night, and They’re Not Keeping It Secret
Jennifer Valentino-DeVries, Natasha Singer, Michael H. Keller, and Aaron Krolik, The New York Times (December 10, 2018)
“We look to understand who a person is, based on where they’ve been and where they’re going, in order to influence what they’re going to do next,” Ms. Greenstein said.
Your Apps Know Where You Were Last Night, and They’re Not Keeping It Secret
Jennifer Valentino-DeVries, Natasha Singer, Michael H. Keller, and Aaron Krolik, The New York Times (December 10, 2018)
Tell All Digital, a Long Island advertising firm that is a client of a location company, says it runs ad campaigns for personal injury lawyers targeting people anonymously in emergency rooms.
Your Apps Know Where You Were Last Night, and They’re Not Keeping It Secret
Jennifer Valentino-DeVries, Natasha Singer, Michael H. Keller, and Aaron Krolik, The New York Times (December 10, 2018)
Several businesses claim they can track about half the mobile devices in the US, with precise locations updated up to 14,000 times a day in some cases. This data is sold or analyzed for advertising and retail, among other uses. Sales of location-targeted advertising reached an estimated $21 billion this year, and it’s a growing market. The data is anonymized, but those with access to the raw data could easily identify someone without consent. Companies aren’t content with just tracking your location, either—they want to predict your future movements too, as this patent from Facebook shows.
The scale of location tracking by our smartphone apps has been exposed
The Download, MIT Technology Review (December 11, 2018)
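To make the re-identification risk concrete: the device ID in these datasets is a random token, but the pair of places where a phone spends its nights and its workdays is usually unique to one person. The short Python sketch below uses entirely invented pings and a hypothetical home/work directory (no real dataset or vendor API) to show how little work "de-anonymizing" such a feed requires.

```python
# A minimal, hypothetical sketch of why "anonymized" location pings are easy
# to re-identify: the device ID is random, but home and work locations are
# not. All data below is invented for illustration.
from collections import Counter

# (device_id, hour_of_day, rounded_lat, rounded_lon) pings, as a location
# broker might hold them. The ID is a random token, not a name.
pings = [
    ("a91f", 2,  40.7128, -74.0060),   # overnight -> likely home
    ("a91f", 3,  40.7128, -74.0060),
    ("a91f", 14, 40.7580, -73.9855),   # midday -> likely workplace
    ("a91f", 15, 40.7580, -73.9855),
]

# Side information an attacker can buy or look up: who lives and works where.
# A (home, work) pair is usually unique to a single person.
directory = {
    "Jane Doe": ((40.7128, -74.0060), (40.7580, -73.9855)),
}

def infer_home_work(device_pings):
    """Most frequent overnight spot ~ home; most frequent midday spot ~ work."""
    home = Counter((lat, lon) for _, h, lat, lon in device_pings if h < 6).most_common(1)[0][0]
    work = Counter((lat, lon) for _, h, lat, lon in device_pings if 9 <= h <= 17).most_common(1)[0][0]
    return home, work

inferred = infer_home_work(pings)
print([name for name, pair in directory.items() if pair == inferred])
# -> ['Jane Doe']: the "anonymous" device is re-identified
```

The same join works at scale against property records, voter rolls, or a second broker's feed; removing the name from a location trace removes almost none of its identifying power.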
The power of the digital dead to manipulate the living is enormous; who better to sell us a product than someone we’ve loved and lost? Thus our digital representations might be more talkative, pushy, and flattering than we are—and if that’s what their makers think is best, who’s going to stop them?
Digital immortality: How your life’s data means a version of you could live forever
Courtney Humphries, MIT Technology Review (October 18, 2018)
[T]he dominant services, which are mostly owned by Google and Facebook — things like YouTube and WhatsApp — these things are driven by the manipulation model where all the money is made by third parties who are trying to manipulate the people who are their users. If that’s the way the system is designed at its core, I don’t think it has any chance to be good; it’s born to be terrible.
Jaron Lanier Helped Create Social Media, And Now He’s Begging You To Leave It Behind
Ja’han Jones, HuffPost (December 12, 2018)
The next generation of high-end cars will come equipped with software and hardware (cameras and microphones, for now) to analyze drivers’ attentiveness, irritation, and other states.
Alexa, Should We Trust You?
Judith Shulevitz, The Atlantic (November 2018 Issue)
Virtual assistants able to discern and react to their users’ frame of mind could create a genuine-seeming sense of affinity, a bond that could be used for good or for ill.
Alexa, Should We Trust You?
Judith Shulevitz, The Atlantic (November 2018 Issue)
My biggest concern is with young people, whose brains are still developing from birth through adolescence. There’s a process called pruning [the process of removing neurons that are damaged or degraded to improve the brain’s networking capacity]. This could be affected through all the time using tech. We don’t have data on that — but it certainly can raise a concern.
Is our constant use of digital technologies affecting our brain health? We asked 11 experts.
Brian Resnick, Julia Belluz, and Eliza Barclay, Vox (November 29, 2018)
Those digital bread crumbs amass over time, equipping tech companies with staggeringly precise information about each of us. Product designers then use that data, alongside machine-learning tools, to study how we react to certain interfaces, rewards and inputs, and to identify patterns in our behaviors. That allows them to predict, fairly precisely, Brown says, how we’ll react in the future.
You're Addicted to Your Smartphone. This Company Thinks It Can Change That
Haley Sweetland Edwards, Time (April 13, 2018)
“People joke all the time about trying to build a ‘diaper product,'” he says. “The idea is, ‘Make something so addictive, they don’t even want to get up to pee.'”
You're Addicted to Your Smartphone. This Company Thinks It Can Change That
Haley Sweetland Edwards, Time (April 13, 2018)
Big tech now employs mental health experts to use persuasive technology, a new field of research that looks at how computers can change the way humans think and act. This technique, also known as persuasive design, is built into thousands of games and apps, and companies like Twitter, Facebook, Snapchat, Amazon, Apple, and Microsoft rely on it to encourage specific human behavior starting from a very young age.
Tech companies use “persuasive design” to get us hooked. Psychologists say it’s unethical.
Chavie Lieber, Vox (August 8, 2018)
The founding father of this research is B.J. Fogg, a behavioral scientist at Stanford University [where there’s a lab dedicated to this field]. Fogg has been called the “millionaire maker,” and he developed an entire field of study based off research that proved that with some simple techniques, tech can manipulate human behavior. His research is now the blueprint for tech companies who are developing products to keep consumers plugged in.
Tech companies use “persuasive design” to get us hooked. Psychologists say it’s unethical.
Chavie Lieber, Vox (August 8, 2018)
Facebook showed advertisers how it has the capacity to identify when teenagers feel “insecure”, “worthless” and “need a confidence boost”, according to a leaked document based on research quietly conducted by the social network.
Facebook told advertisers it can identify teens feeling 'insecure' and 'worthless'
Sam Levin, The Guardian (May 1, 2017)
The internal report produced by Facebook executives, and obtained by The Australian, states that the company can monitor posts and photos in real time to determine when young people feel “stressed”, “defeated”, “overwhelmed”, “anxious”, “nervous”, “stupid”, “silly”, “useless” and a “failure”.
Facebook told advertisers it can identify teens feeling 'insecure' and 'worthless'
Sam Levin, The Guardian (May 1, 2017)
The recent document, described as “confidential,” outlines a new advertising service that expands how the social network sells corporations’ access to its users and their lives: Instead of merely offering advertisers the ability to target people based on demographics and consumer preferences, Facebook instead offers the ability to target them based on how they will behave, what they will buy, and what they will think. These capabilities are the fruits of a self-improving, artificial intelligence-powered prediction engine, first unveiled by Facebook in 2016 and dubbed “FBLearner Flow.”
Facebook Uses Artificial Intelligence to Predict Your Future Actions for Advertisers, Says Confidential Document
Sam Biddle, The Intercept (April 13, 2018)
The document does not detail what information from Facebook’s user dossiers is included or excluded from the prediction engine, but it does mention drawing on location, device information, Wi-Fi network details, video usage, affinities, and details of friendships, including how similar a user is to their friends. All of this data can then be fed into FBLearner Flow, which will use it to essentially run a computer simulation of a facet of a user’s life, with the results sold to a corporate customer. The company describes this practice as “Facebook’s Machine Learning expertise” used for corporate “core business challenges.”
Facebook Uses Artificial Intelligence to Predict Your Future Actions for Advertisers, Says Confidential Document
Sam Biddle, The Intercept (April 13, 2018)
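The article names the inputs (location, device information, Wi-Fi details, video usage, affinities, friendships, similarity to friends) and the output: a probability that a user will do something next, sold to an advertiser. As a rough illustration only, and emphatically not Facebook's actual FBLearner Flow, here is a toy version of that kind of pipeline in Python, with invented feature names, synthetic data, and an off-the-shelf logistic regression standing in for proprietary models:

```python
# A toy sketch of the kind of behavioral-prediction pipeline the document
# describes: user features in, a "will this user do X next month" score out.
# This is NOT FBLearner Flow; features, data, and model are all invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented feature columns loosely echoing those named in the article:
# [daily_minutes_on_site, video_hours_per_week, distinct_wifi_networks,
#  similarity_to_friends, friend_count], scaled to 0..1.
X = rng.random((1000, 5))

# Synthetic training labels: 1 = user later performed the targeted action.
# In a real system this would come from logged outcomes, not a random rule.
y = (0.8 * X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.random(1000) > 0.7).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new user: a number like this is the product sold to the advertiser.
new_user = np.array([[0.9, 0.4, 0.2, 0.8, 0.5]])
print(f"Predicted probability of action: {model.predict_proba(new_user)[0, 1]:.2f}")
```

The printed probability is the commodity; everything about the model itself is an assumption here. Pasquale's "self-fulfilling prophecy" worry, quoted next, is about what happens after a score like this is sold.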
Pasquale, the law professor, told The Intercept that Facebook’s behavioral prediction work is “eerie” and worried how the company could turn algorithmic predictions into “self-fulfilling prophecies,” since “once they’ve made this prediction, they have a financial interest in making it true.” That is, once Facebook tells an advertising partner you’re going to do some thing or other next month, the onus is on Facebook to either make that event come to pass, or show that they were able to help effectively prevent it (how Facebook can verify to a marketer that it was indeed able to change the future is unclear).
Facebook Uses Artificial Intelligence to Predict Your Future Actions for Advertisers, Says Confidential Document
Sam Biddle, The Intercept (April 13, 2018)
“We’re seeing a resegregation of society that’s catalyzed by algorithms,” Wylie said. Sites like Facebook reward informational echo chambers where partisan views are reinforced instead of challenged. “Instead of a common fabric,” he said, “we’re tearing that fabric apart.”
Christopher Wylie Warns Senators: Cambridge Analytica, Steve Bannon Want ‘Culture War’
Ryan Grenoble, HuffPost (May 16, 2018)
One major takeaway from both studies is the breadth of Russian interference that appeared on Instagram, which is owned by Facebook and was not frequently mentioned when its parent company testified on Capitol Hill. The study says that as attention was focused on Facebook and Twitter in 2017, the Russians shifted much of their activity to Instagram.
Russian Troll Farms Are Still Using Social Media To Meddle In U.S. Politics
Mary Clare Jalonick, HuffPost (December 18, 2018)
The military exploited Facebook’s wide reach in Myanmar, where it is so broadly used that many of the country’s 18 million internet users confuse the Silicon Valley social media platform with the internet. Human rights groups blame the anti-Rohingya propaganda for inciting murders, rapes and the largest forced human migration in recent history.
A Genocide Incited on Facebook, With Posts From Myanmar’s Military
Paul Mozur, The New York Times (October 15, 2018)
They then turn to their well-organized army of “social media specialists” via group chats in apps like WhatsApp and Telegram, sending them lists of people to threaten, insult and intimidate; daily tweet quotas to fill; and pro-government messages to augment.
Saudis’ Image Makers: A Troll Army and a Twitter Insider
Katie Benner, Mark Mazzetti, Ben Hubbard, and Mike Isaac, The New York Times (October 12, 2018)
It is only now, a decade after the financial crisis, that the American public seems to appreciate that what we thought was disruption worked more like extraction—of our data, our attention, our time, our creativity, our content, our DNA, our homes, our cities, our relationships. The tech visionaries’ predictions did not usher us into the future, but rather a future where they are kings.
An Alternative History of Silicon Valley Disruption
Nitasha Tiku, Wired (October 22, 2018)
“The taxpayers in this country should not be subsidizing a guy who’s worth $150 billion, whose wealth is increasing by $260 million every single day,” said Sanders. “That is insane. He has enough money to pay his workers a living wage. He does not need corporate welfare. And our goal is to see that Bezos pays his workers a living wage.”
Bernie Sanders’ problem with Amazon
Brian Heater, TechCrunch (August 28, 2018)
I asked Hoffman to estimate what share of fellow Silicon Valley billionaires have acquired some level of “apocalypse insurance,” in the form of a hideaway in the U.S. or abroad. “I would guess fifty-plus per cent,” he said, “but that’s parallel with the decision to buy a vacation home. Human motivation is complex, and I think people can say, ‘I now have a safety blanket for this thing that scares me.’ ” The fears vary, but many worry that, as artificial intelligence takes away a growing share of jobs, there will be a backlash against Silicon Valley, America’s second-highest concentration of wealth. (Southwestern Connecticut is first.) “I’ve heard this theme from a bunch of people,” Hoffman said. “Is the country going to turn against the wealthy? Is it going to turn against technological innovation? Is it going to turn into civil disorder?”
Doomsday Prep for the Super-Rich
Evan Osnos, The New Yorker (January 30, 2017)