Third Timeline: 3/17/21 - 10/16/23

From tracking cars anywhere on earth to tech, cops and were spies made for one another

Our timelines are easily navigated in bite size overviews, by swiping left or right on a phone, clicking and dragging on a tablet or desktop, or clicking the left and right arrows.

Please feel free to leave comments or questions below the timeline.


Why Big Tech, Cops, and Spies Were Made for One Another

One in four web users has installed an ad blocker (which also blocks commercial surveillance). It’s the “biggest boycott in world history.” The reason you can modify your browser to ignore demands from servers to fetch ads — and reveal facts about you in the process — is that the web is an “open platform.” All the major browsers have robust interfaces for aftermarket blockers to plug into, and they’re also all open source, meaning that if a browser vendor restricts those interfaces to make it harder to block ads, other companies can “fork the code” to bypass those restrictions.

By contrast, apps are encrypted, which triggers a quarter-century-old law: the Digital Millennium Copyright Act of 1998, whose Section 1201 makes it a felony to provide someone with a tool to bypass an “access control” for a copyrighted work. By encrypting apps and locking the keys away from the device owner, Apple can make it a crime for you to reconfigure your own phone to protect your privacy, with penalties of a five-year prison sentence and a $500,000 fine — for a first offense.

...It goes without saying that cops and spies love commercial surveillance. The very first Snowden revelation concerned a public-private surveillance partnership called Prism, in which the NSA plundered large internet companies’ data with their knowledge and cooperation. The subsequent revelation about the “Upstream” program revealed that the NSA was also plundering tech giants’ data without their knowledge, and using Prism as a “plausible deniability” fig leaf so that the tech firms didn’t get suspicious when the NSA acted on its stolen intelligence.

No government agency could ever hope to match the efficiency and scale of commercial surveillance. The NSA couldn’t order us to carry pocket location beacons at all times — hell, the Centers for Disease Control and Prevention couldn’t even get us to run an exposure notification app in the early days of the Covid pandemic. No government agency could order us to put all our conversations in writing to be captured, stored, and mined. And not even the U.S. government could afford to run the data centers and software development to store and make sense of it all.

Meanwhile, the private sector relies on cops and spies to go to bat for them, lobbying against new privacy laws and for lax enforcement of existing ones. Think of Amazon’s Ring cameras, which have blanketed entire neighborhoods in CCTV surveillance, which Ring shares with law enforcement agencies, sometimes without the consent or knowledge of the cameras’ owners. Ring marketing recruits cops as street teams, showering them with freebies to distribute to local homeowners.


The Biggest Hack of 2023 Keeps Getting Bigger

To date, Emsisoft has concluded that 2,167 organizations have been impacted by the sprawling campaign. The number had been hovering around 1,000 in recent months, but it jumped significantly when the National Student Clearinghouse revealed 890 colleges and universities across the US—including Harvard University and Stanford University—had been impacted by MOVEit breaches. Organizations in the US account for 88.8 percent of known victims, according to Emsisoft, while a smattering of other organizations in Germany, Canada, and the UK have also been exposed by Clop and come forward.

According to Emsisoft’s analysis, around 1,841 organizations have disclosed breaches, but only 189 of them have specified how many individuals were impacted by the incident. From these detailed disclosures, Emsisoft has found that more than 62 million individuals had their data breached as part of Clop’s MOVEit spree. But since there are estimated to be nearly 2,000 organizations that have not revealed how many individuals had personal data affected in their breaches—and since researchers have concluded that there are other impacted organizations that haven’t come forward at all—the true total of people whose data was compromised is likely even larger, possibly on the scale of hundreds of millions of individuals, according to Emsisoft.

...While cybercriminal groups often make headlines for attention-grabbing ransomware or extortion attacks, such as those against casinos, persistent and unrelenting theft, publication, extortion, and trade of people’s sensitive data from sprees like the MOVEit rampage can ruin lives—a cumulative reality that is often overshadowed by individual incidents where profits are on the line. Hacks on schools have revealed details of sexual assaults, child abuse allegations, and suicide attempts, with the Associated Press reporting individuals often don’t know the details have been published. Meanwhile, breaches of mental health service providers have exposed patients’ records.


New Group Attacking iPhone Encryption Backed by U.S. Political Dark-Money Network

Something the Heat Initiative has not placed on giant airborne banners is who’s behind it: a controversial billionaire philanthropy network whose influence and tactics have drawn unfavorable comparisons to the right-wing Koch network. Though it does not publicize this fact, the Heat Initiative is a project of the Hopewell Fund, an organization that helps privately and often secretly direct the largesse — and political will — of billionaires. Hopewell is part of a giant, tightly connected web of largely anonymous, Democratic Party-aligned dark-money groups, in an ironic turn, campaigning to undermine the privacy of ordinary people.

...“I’m uncomfortable with anonymous rich people with unknown agendas pushing these massive invasions of our privacy,” Matthew Green, a cryptographer at Johns Hopkins University and a critic of the plan to have Apple scan private files on its devices, told The Intercept. “There are huge implications for national security as well as consumer privacy against corporations. Plenty of unsavory reasons for people to push this technology that have nothing to do with protecting children.”

...Last month, Wired reported the previously unknown Heat Initiative was pressing Apple to reconsider its highly controversial 2021 proposal to have iPhones constantly scan their owners’ photos as they were uploaded to iCloud, checking to see if they were in possession of child sexual abuse material, known as CSAM. If a scan turned up CSAM, police would be alerted. While most large internet companies check files their users upload and share against a centralized database of known CSAM, Apple’s plan went a step further, proposing to check for illegal files not just on the company’s servers, but directly on its customers’ phones.

...For an organization demanding that Apple scour the private information of its customers, the Heat Initiative discloses extremely little about itself. According to a report in the New York Times, the Heat Initiative is armed with $2 million from donors including the Children’s Investment Fund Foundation, an organization founded by British billionaire hedge fund manager and Google activist investor Chris Cohn, and the Oak Foundation, also founded by a British billionaire. The Oak Foundation previously provided $250,000 to a group attempting to weaken end-to-end encryption protections in EU legislation, according to a 2020 annual report.

The Heat Initiative is helmed by Sarah Gardner, who joined from Thorn, an anti-child trafficking organization founded by actor Ashton Kutcher. (Earlier this month, Kutcher stepped down from Thorn following reports that he’d asked a California court for leniency in the sentencing of convicted rapist Danny Masterson.) Thorn has drawn scrutiny for its partnership with Palantir and efforts to provide police with advanced facial recognition software and other sophisticated surveillance tools. Critics say these technologies aren’t just uncovering trafficked children, but ensnaring adults engaging in consensual sex work.


Is Tesla liable if a driver dies on Autopilot?

Thursday’s opening arguments offered a glimpse into Tesla’s strategy for defending its Autopilot features, which have been linked to more than 700 crashes since 2019 and at least 17 fatalities, according to a Washington Post analysis of National Highway Traffic Safety Administration data. The crux of the company’s defense is that the driver is ultimately in control of the vehicle, and they must keep their hands on the wheel and eyes on the road while using the feature.

...The cluster of trials set for the next year is also likely to demonstrate how much the technology actually relies on human intervention — despite CEO Elon Musk’s claims that cars operating in Autopilot are safer than those controlled by humans. The outcomes could amount to a pivotal moment for Tesla, which has for years tried to absolve itself from responsibility when one of its cars on Autopilot is involved in a crash.

...“It’s possible the drivers (of Teslas) understand the risks,” Zipper said. “But even if they accept that, what about everyone on a public road or street who is not in a Tesla? None of us signed on to be a guinea pig.”

...Before Lee’s car collided with the palm tree, court documents say, he attempted to regain control of the car, but “Autopilot and/or Active Safety features would not allow.” That failure, according to the complaint, led to Lee’s “gruesome and ultimately fatal injuries.”


Investigators increasingly use warrants to obtain location and search data from Google, even for nonviolent cases—and even for people who had nothing to do with the crime

Police have been using versions of this method for decades. Security camera footage and cell tower data from phone companies all have the potential for invasions of privacy that go beyond searching a suspect’s trunk. But the sheer volume of information available from Google about where tens of millions of people have been and what they’ve searched for is unprecedented.

By their very nature, these Google warrants often return information on people who haven’t been suspected of a crime. In 2018 a man in Arizona was wrongly arrested for murder based on Google location data. Despite this possibility, police have continued to embrace the practice in the years since. “In many ways, law enforcement thinks it’s like hitting the easy button,” says Price, who’s mounting some of the country’s first legal challenges to warrants for Google’s location and search data. “It would be very difficult for Google to refuse to comply in one set of cases if it’s complying in another. The door gets cracked open, and once it’s open, it just becomes a floodgate.”

Google says it received a record 60,472 search warrants in the US last year, more than double the number from 2019. The company provides at least some information in about 80% of cases. Although many large technology companies receive requests for information from law enforcement at least occasionally, police consider Google to be particularly well suited to jump-start an investigation with few other leads. Law enforcement experts say it’s the only company that provides a detailed inventory of whose personal devices were present at a given time and place. Apple Inc., the other major mobile operating system provider, has said it’s technically unable to supply the sort of location data police want. That’s OK, because many iPhone users depend on Google Maps and other Google apps. Google’s search engine owns 92% of the market worldwide and is currently the focus of an antitrust lawsuit from the US Department of Justice.

...In the Phoenix suburb of Surprise, Detective Taylor Knight obtained five geofence warrants in a span of less than three months to investigate a spate of burglaries and vandalism at construction sites last year. Among the equipment police sought to recover were a microwave and a cooktop. One of the warrants, obtained for the theft of a wood chipper, demanded information on anyone who was near the construction site for almost an entire week. The Surprise Police Department says no prosecutions have materialized.

...Colorado’s top court is expected to rule before the end of the year. In the meantime, law enforcement is finding new ways to mine the digital trail we leave behind. Police in San Francisco and the Phoenix area have begun sending warrants for video footage recorded by self-driving cars as they roam city streets. One of the main recipients of those warrants is Waymo, a sister company of Google.


How Microsoft could supplant Apple as the world’s most valuable firm

All the while Microsoft was also investing in AI. It first announced it was working with OpenAI in 2016; it has since invested $13bn, for what is reported to be a 49% stake. The deal not only allows Microsoft to use OpenAI’s technology, but also stipulates that OpenAI’s models and tools run on Azure, in effect making OpenAI’s customers into indirect clients of Microsoft. And it isn’t just OpenAI. Microsoft has bought 15 AI-related firms since Mr Nadella took over. That includes paying $20bn for Nuance, a health-care firm with cutting-edge speech-to-text technology, in 2022.

Today Microsoft’s business relies on three divisions for growth. The first is Azure. For the past five years it has been closing in on AWS (see chart 2). Cloud spending is slowing as IT managers tighten purse strings. Despite this, in the most recent quarter the business grew by 27% year on year. Microsoft does not reveal Azure’s sales, but analysts think that it accounts for about a quarter of the firm’s revenue, which hit $212bn last year. Gross margins for its cloud business are also secret, but Bernstein puts them at a lofty 60% or so.

Second is Microsoft 365, which also accounts for about a quarter of revenue. That has been growing by about 10% a year of late, thanks to take-up among smaller businesses, especially in service industries such as restaurants. The third source of growth is cybersecurity. In earnings calls Microsoft executives have said it accounts for roughly $20bn in revenue (about a tenth of the total). That is more than the combined revenues of the five biggest firms that provide only cybersecurity. What is more, revenues are growing by around 30% each year. (Microsoft’s video-game arm, which brings in $15bn a year, is also set to grow substantially now that British antitrust regulators have signalled they will approve the long-delayed acquisition of Activision Blizzard, another gamemaker, for $69bn.)

...Even if spiralling costs are contained, there are plenty of other risks. Competition is white-hot. One battle is for the $340bn market for business software. In May Google announced Duet for Workspace, its version of Copilots. Last week it released features allowing Bard, its chatbot, to access user’s Gmail inboxes and Google Docs. Salesforce, a software giant, has Einstein. Slack, a messaging app and one of Salesforce’s subsidiaries, has Slack GPT. ServiceNow, whose software helps firms manage their workflow, has Now Assist. Zoom offers Zoom Companion. Intuit is selling Intuit Assist. Startups such as Adept and Cohere offer ai assistants, too. OpenAI launched its enterprise-focused ChatGPT in August.


The US Federal Trade Commission filed a long- anticipated antitrust complaint alleging that Amazon uses its power over sellers to keep ecommerce prices artificially high

The long-anticipated government complaint, joined by 17 state attorneys general, alleges that the ecommerce giant illegally monopolizes online shopping, lowering quality and hiking prices for consumers. “Amazon is now exploiting its monopoly power to enrich itself while raising prices and degrading service for the tens of millions of American families who shop on its platform and the hundreds of thousands of businesses that rely on Amazon to reach them," FTC chair Lina Khan said in a statement released today.


What does a car need to know about your sex life?

According to the team's research, Nissan says it can collect “sexual activity” information about consumers. Kia says it can collect information about a consumer's “sex life.” Subaru passengers allegedly consent to the collection of their data by simply being in the vehicle. Volkswagen says it collects data like a person's age and gender and whether they're using your seatbelt, and it can use that information for targeted marketing purposes.

..."We were pretty surprised by the data points that the car companies say they can collect... including social security number, information about your religion, your marital status, genetic information, disability status... immigration status, race. And of course, as you said.. one of the most surprising ones for a lot of people who read our research is the sexual activity data."

...We also explore the booming revenue stream that car manufacturers are tapping into by not only collecting people's data, but also packaging it together for targeted advertising. With so many data pipelines being threaded together, Caltrider says the auto manufacturers can even make "inferences" about you.

"What really creeps me out [is] they go on to say that they can take all the information they collect about you from the cars, the apps, the connected services, and everything they can gather about you from these third party sources," Caltrider said, "and they can combine it into these things they call 'inferences' about you about things like your intelligence, your abilities, your predispositions, your characteristics."


What Big Tech Knows About Your Body

We leave digital traces about our health everywhere we go: by completing forms like BetterHelp’s. By requesting a prescription refill online. By clicking on a link. By asking a search engine about dosages or directions to a clinic or pain in chest dying???? By shopping, online or off. By participating in consumer genetic testing. By stepping on a smart scale or using a smart thermometer. By joining a Facebook group or a Discord server for people with a certain medical condition. By using internet-connected exercise equipment. By using an app or a service to count your steps or track your menstrual cycle or log your workouts. Even demographic and financial data unrelated to health can be aggregated and analyzed to reveal or infer sensitive information about people’s physical or mental-health conditions.

All of this information is valuable to advertisers and to the tech companies that sell ad space and targeting to them. It’s valuable precisely because it’s intimate: More than perhaps anything else, our health guides our behavior. And the more these companies know, the easier they can influence us. Over the past year or so, reporting has found evidence of a Meta tracking tool collecting patient information from hospital websites, and apps from and WebMD sharing search terms such as herpes and depression, plus identifying information about users, with advertisers. (Meta has denied receiving and using data from the tool, and has said that it was not sharing data that qualified as “sensitive personal information.”) In 2021, the FTC settled with the period and ovulation app Flo, which has reported having more than 100 million users, after alleging that it had disclosed information about users’ reproductive health with third-party marketing and analytics services, even though its privacy policies explicitly said that it wouldn’t do so. (Flo, like BetterHelp, said that its agreement with the FTC wasn’t an admission of wrongdoing and that it didn’t share users’ names, addresses, or birthdays.)

...Companies that sell ads are often quick to point out that information is aggregated: Tech companies use our data to target swaths of people based on demographics and behavior, rather than individuals. But those categories can be quite narrow: Ashkenazi Jewish women of childbearing age, say, or men living in a specific zip code, or people whose online activity may have signaled interest in a specific disease, according to recent reporting. Those groups can then be served hyper-targeted pharmaceutical ads at best, and unscientific “cures” and medical disinformation at worst. They can also be discriminated against: Last year, the Department of Justice settled with Meta over allegations that the latter had violated the Fair Housing Act in part by allowing advertisers to not show housing ads to users who Facebook’s data-collection machine had inferred were interested in topics including “service animal” and “accessibility.”


Today’s children face a world of constant surveillance. Their very sense of self is at stake.

But to be a modern child is to be constantly watched by machines. The more time kids spend online, the more information about them is collected by companies seeking to influence their behavior, in the moment and for decades to come. By the time they’re toddlers, many of today’s children already know how to watch videos, play games, take pictures, and FaceTime their grandparents. By the time they are 10, 42 percent of them have a smartphone. By the time they are 12, nearly half use social media. The internet was already ingrained in children’s lives, but the coronavirus pandemic made it essential for remote learning, connecting with friends, and entertainment. Watching online videos has surged past television as the media activity that kids enjoy the most; children cite YouTube as the one site they wouldn’t want to live without..

...COPPA was passed in 1998. Compliance is largely voluntary, and evidently spotty. In 2020, when researchers studied 451 apps used by 3- and 4-year-olds, they found that two-thirds collected digital identifiers. Other research suggests that children’s apps contain more third-party trackers than those geared toward adults. And even if an app or product is COPPA compliant, it can still collect highly valuable, potentially identifying information. In today’s hyper-aggregated digital landscape, every nugget of information can easily be stitched together with other information to create a richly detailed dossier that clearly identifies you in particular.

The harvesting process, it’s important to note, tends to be automated and indiscriminate in what information it collects. A company can amass private information about your child even when it doesn’t intend to. In 2021, TikTok rewrote its privacy policy to allow it to gather “voiceprints” and “faceprints”—that is, voice recordings and images of users’ faces, along with all of the identifying information that can be gleaned from them. And we know that at least 18 million of TikTok’s U.S. users are likely age 14 or younger. It’s not difficult to imagine that children would sometimes share sensitive personal information on TikTok, whether TikTok intended to collect that information or not.

You get the picture; it’s bleak. All in all, by the time a child reaches the age of 13, online advertising firms have collected an average of 72 million data points about them. That’s not even considering the degree to which children’s data are shared and their privacy potentially compromised by the people closest to them—sometimes in the form of a grainy sonogram posted to social media before they are even born. As of 2016, the average child in Britain had about 1,500 images of them posted online by the time they hit their fifth birthday.


The supermarket, in other words, is a panopticon just the same as the social network

The reality is, unfortunately, worse. Retail companies do collect massive volumes of terrifically sensitive data: demographic information, geographic location, websites you’ve visited, brick-and-mortar stores you have patronized, products you own, products you’ve browsed, products you’ve searched for, even products they think you might have looked at but passed over in the store. They do this not only to predict your future behavior, but to influence it.

...Smartphones gave stores even more refined information about their customers, facilitating new kinds of in-store spying that most people probably don’t even know exists. Mousetrap-size radio transmitters called beacons ping off apps on your phone and can track your location down to the inch inside a store, giving retailers granular insight into what types of products you linger over. This information, combined with other data the store has collected itself and bought from third parties, can paint a vivid picture of who you are and what you might be persuaded to buy for what price in the moment: In principle, you can linger over the sugary cereals in the grocery store, opt for the whole grains, and then be served an ad on your phone for 10 percent off Lucky Charms, which the ad may remind you are actually part of a balanced breakfast.

Retailers have also started to test facial- and voice-recognition technologies in stores, giving them yet another way to track customer behavior. In-store Wi-Fi helps with the signal-inhibiting effects of many stores’ concrete-and-steel construction, but it also allows stores to collect your email address and browsing traffic, and in some cases to install cookies on your device that track you long after you leave the store and its network. Store-specific apps offer deals and convenience, but they also collect loads of information via features that allow you to make shopping lists or virtually “try on” clothing or makeup by scanning your likeness. Club cards enable stores to log every item your household purchases and analyze your profile for trends and sales opportunities.

Ordinary people may not realize just how much offline information is collected and aggregated by the shopping industry rather than the tech industry. In fact, the two work together to erode our privacy effectively, discreetly, and thoroughly. Data gleaned from brick-and-mortar retailers get combined with data gleaned from online retailers to build ever-more detailed consumer profiles, with the intention of selling more things, online and in person—and to sell ads to sell those things, a process in which those data meet up with all the other information Big Tech companies such as Google and Facebook have on you. “Retailing,” Joe Turow told me, “is the place where a lot of tech gets used and monetized.” The tech industry is largely the ad-tech industry. That makes a lot of data retail data. “There are a lot of companies doing horrendous things with your data, and people use them all the time, because they’re not on the public radar.” The supermarket, in other words, is a panopticon just the same as the social network.


Modern cars are a privacy nightmare

And car companies have so many more data-collecting opportunities than other products and apps we use -- more than even smart devices in our homes or the cell phones we take wherever we go. They can collect personal information from how you interact with your car, the connected services you use in your car, the car’s app (which provides a gateway to information on your phone), and can gather even more information about you from third party sources like Sirius XM or Google Maps. It’s a mess. The ways that car companies collect and share your data are so vast and complicated that we wrote an entire piece on how that works. The gist is: they can collect super intimate information about you -- from your medical information, your genetic information, to your “sex life” (seriously), to how fast you drive, where you drive, and what songs you play in your car -- in huge quantities. They then use it to invent more data about you through “inferences” about things like your intelligence, abilities, and interests.

...It’s bad enough for the behemoth corporations that own the car brands to have all that personal information in their possession, to use for their own research, marketing, or the ultra-vague “business purposes.” But then, most (84%) of the car brands we researched say they can share your personal data -- with service providers, data brokers, and other businesses we know little or nothing about. Worse, nineteen (76%) say they can sell your personal data.

...All but two of the 25 car brands we reviewed earned our “ding” for data control, meaning only two car brands, Renault and Dacia (which are owned by the same parent company) say that all drivers have the right to have their personal data deleted. We would like to think this deviation is one car company taking a stand for drivers’ privacy. It’s probably no coincidence though that these cars are only available in Europe -- which is protected by the robust General Data Protection Regulation (GDPR) privacy law. In other words: car brands often do whatever they can legally get away with to your personal data.

...Tesla is only the second product we have ever reviewed to receive all of our privacy “dings.” (The first was an AI chatbot we reviewed earlier this year.) What set them apart was earning the “untrustworthy AI” ding. The brand’s AI-powered autopilot was reportedly involved in 17 deaths and 736 crashes and is currently the subject of multiple government investigations.

...People don’t comparison-shop for cars based on privacy. And they shouldn’t be expected to. That’s because there are so many other limiting factors for car buyers. Like cost, fuel efficiency, availability, reliability, and the features you need. Even if you did have the funds and the resources to comparison shop for your car based on privacy, you wouldn’t find much of a difference. Because according to our research, they are all bad! On top of all that, researching cars and privacy was one of the hardest undertakings we as privacy researchers have ever had. Sorting through the large and confusing ecosystem of privacy policies for cars, car apps, car connected services, and more isn’t something most people have the time or experience to do.

...Many people have lifestyles that require driving. So unlike a smart faucet or voice assistant, you don’t have the same freedom to opt out of the whole thing and not drive a car. We’ve talked before about the murky ways that companies can manipulate your consent. And car companies are no exception. Often, they ignore your consent. Sometimes, they assume it. Car companies do that by assuming that you have read and agreed to their policies before you step foot in their cars. Subaru’s privacy policy says that even passengers of a car that uses connected services have “consented” to allow them to use -- and maybe even sell -- their personal information just by being inside.

...A few of the car companies we researched take manipulating your consent one step further by making you complicit in getting “consent” from others, saying it’s on you to inform them of your car’s privacy policies. Like when Nissan makes you “promise to educate and inform all users and occupants of your Vehicle about the Services and System features and limitations, the terms of the Agreement, including terms concerning data collection and use and privacy, and the Nissan Privacy Policy.” OK, Nissan! We would love to meet the social butterfly who drafted this line.


Every New Car Is a 'Privacy Nightmare,' Mozilla Researchers Conclude

The Mozilla Foundation spent 600 hours of research studying 25 privacy policies for major car brands. None of them met the Foundation’s minimum standards around security and privacy; all of them claim the right to collect huge amounts of personal data in dozens of categories from both the car and associated apps. Eighty-four percent of the brands studied share or sell personal data and “inferences” about you based on the data they collect, such as how intelligent you are, your abilities, and your interests. More than half of the companies will share your information with government or law enforcement based on a simple request, not requiring a subpoena. The vast majority of car companies, 92 percent, give drivers “little or no control over their personal data,” Mozilla also found, with the two exceptions being European-based brands Renault and Dacia, which have to comply with the GDPR privacy law.

But Mozilla Foundation holds special antipathy for Nissan, whose privacy policy it calls “probably the most mind boggling creepy, scary, sad, messed up privacy policy we have ever read” because “They come right out and say they can collect and share your sexual activity, health diagnosis data, and genetic information and other sensitive personal information for targeted marketing purposes.” Nissan also discloses that it will share and sell "Inferences drawn from any Personal Data collected to create a profile about a consumer reflecting the consumer’s preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes" to others for targeted marketing purposes.”

...While Nissan’s privacy policy is the most dystopian, all of the companies collect masses of data about the people who drive their cars. Some also collect tons of data about the world around the car. While Nissan’s privacy policy ranks as the creepiest, Tesla scored the worst on Mozilla’s scorecard, with its malfunctioning Autopilot function and “Full self-driving” beta program which is not actually self-driving and frequently attempts to do very dangerous things leading to analysts penalizing it for “untrustworthy AI.”

Even when car companies aren’t actively selling your data to brokers, they are vulnerable to hacks or other leaks and breaches. For example, Volkswagen and Audi, Toyota, and Mercedes-Benz have all recently suffered data leaks or breaches that affected millions of customers.


Robots Are Already Killing People

At Kawasaki Heavy Industries in 1981, Kenji Urada died in similar circumstances. A malfunctioning robot he went to inspect killed him when he obstructed its path, according to Gabriel Hallevy in his 2013 book, When Robots Kill: Artificial Intelligence Under Criminal Law. As Hallevy puts it, the robot simply determined that “the most efficient way to eliminate the threat was to push the worker into an adjacent machine.” From 1992 to 2017, workplace robots were responsible for 41 recorded deaths in the United States—and that’s likely an underestimate, especially when you consider knock-on effects from automation, such as job loss. A robotic anti-aircraft cannon killed nine South African soldiers in 2007 when a possible software failure led the machine to swing itself wildly and fire dozens of lethal rounds in less than a second. In a 2018 trial, a medical robot was implicated in killing Stephen Pettitt during a routine operation that had occurred a few years earlier.

..AI and robotics companies don’t want this to happen. OpenAI, for example, has reportedly fought to “water down” safety regulations and reduce AI-quality requirements. According to an article in Time, it lobbied European Union officials against classifying models like ChatGPT as “high risk,” which would have brought “stringent legal requirements including transparency, traceability, and human oversight.” The reasoning was supposedly that OpenAI did not intend to put its products to high-risk use—a logical twist akin to the Titanic owners lobbying that the ship should not be inspected for lifeboats on the principle that it was a “general purpose” vessel that also could sail in warm waters where there were no icebergs and people could float for days. (OpenAI did not comment when asked about its stance on regulation; previously, it has said that “achieving our mission requires that we work to mitigate both current and longer-term risks,” and that it is working toward that goal by “collaborating with policymakers, researchers and users.”)

...Large corporations have a tendency to develop computer technologies to self-servingly shift the burdens of their own shortcomings onto society at large, or to claim that safety regulations protecting society impose an unjust cost on corporations themselves, or that security baselines stifle innovation. We’ve heard it all before, and we should be extremely skeptical of such claims. Today’s AI-related robot deaths are no different from the robot accidents of the past. Those industrial robots malfunctioned, and human operators trying to assist were killed in unexpected ways. Since the first-known death resulting from the feature in January 2016, Tesla’s Autopilot has been implicated in more than 40 deaths according to official report estimates. Malfunctioning Teslas on Autopilot have deviated from their advertised capabilities by misreading road markings, suddenly veering into other cars or trees, crashing into well-marked service vehicles, or ignoring red lights, stop signs, and crosswalks. We’re concerned that AI-controlled robots already are moving beyond accidental killing in the name of efficiency and “deciding” to kill someone in order to achieve opaque and remotely controlled objectives.


In Its First Monopoly Trial of Modern Internet Era, U.S. Sets Sights on Google

Google has amassed 90 percent of the search engine market in the United States and 91 percent globally, according to Similarweb, a data analysis firm.

...Rivals have long accused Google of brandishing its power in search to suppress competitors’ links to travel, restaurant reviews and maps, while giving greater prominence to its own content. Those complaints brought scrutiny from regulators, though little action was taken.

...Google’s actions had harmed consumers and stifled competition, the agency said, and could affect the future technological landscape as the company positioned itself to control “emerging channels” for search distribution. The agency added that Google had behaved similarly to Microsoft in the 1990s, when the software giant made its own web browser the default on the Windows operating system, crushing competitors.

...Some tech executives said the Justice Department’s actions made Microsoft more cautious, clearing the way for start-ups like Google to compete in the next era of computing. Bill Gates, a Microsoft founder, has blamed the hangover from the antitrust suit for the company’s slow entry into mobile technology and the failure of its Windows phone. But others have argued that the settlement did little to increase competition.


How China Demands Tech Firms Reveal Hackable Flaws in Their Products

Even without those details or a proof-of-concept exploit, a mere description of a bug with the required level of specificity would provide a “lead” for China’s offensive hackers as they search for new vulnerabilities to exploit, says Kristin Del Rosso, the public sector chief technology officer at cybersecurity firm Sophos, who coauthored the Atlantic Council report. She argues the law could be providing those state-sponsored hackers with a significant head start in their race against companies’ efforts to patch and defend their systems. “It’s like a map that says, ‘Look here and start digging,’” says Del Rosso. “We have to be prepared for the potential weaponization of these vulnerabilities.”

If China’s law is in fact helping the country’s state-sponsored hackers gain a greater arsenal of hackable flaws, it could have serious geopolitical implications. US tensions with China over both the country’s cyberespionage and apparent preparations for disruptive cyberattack have peaked in recent months. In July, for instance, the Cybersecurity and Information Security Agency (CISA) and Microsoft revealed that Chinese hackers had somehow obtained a cryptographic key that allowed Chinese spies to access the email accounts of 25 organizations, including the State Department and the Department of Commerce. Microsoft, CISA, and the NSA all warned as well about a Chinese-origin hacking campaign that planted malware in electric grids in US states and Guam, perhaps to obtain the ability to cut off power to US military bases.

...In fact, China-based staff of foreign companies may be complying with the vulnerability disclosure law more than executives outside of China even realize, says J. D. Work, a former US intelligence official who is now a professor at National Defense University College of Information and Cyberspace. (Work holds a position at the Atlantic Council, too, but wasn’t involved in Cary and Del Rosso’s research.) That disconnect isn’t just due to negligence or willful ignorance, Work adds. China-based staff might broadly interpret another law China passed last year focused on countering espionage as forbidding China-based executives of foreign firms from telling others at their own company about how they interact with the government, he says. “Firms may not fully understand changes in their own local offices’ behavior,” says Work, “because those local offices may not be permitted to talk to them about it, under pain of espionage charges.”


U.S. Spy Agency Dreams of Surveillance Underwear It’s Calling “SMART ePANTS”

The federal government has shelled out at least $22 million in an effort to develop “smart” clothing that spies on the wearer and its surroundings. Similar to previous moonshot projects funded by military and intelligence agencies, the inspiration may have come from science fiction and superpowers, but the basic applications are on brand for the government: surveillance and data collection.

Billed as the “largest single investment to develop Active Smart Textiles,” the SMART ePANTS — Smart Electrically Powered and Networked Textile Systems — program aims to develop clothing capable of recording audio, video, and geolocation data, the Office of the Director of National Intelligence announced in an August 22 press release. Garments slated for production include shirts, pants, socks, and underwear, all of which are intended to be washable.

...“They’re now in a position of serious authority over you. In TSA, they can swab your hands for explosives,” Jacobsen said. “Now suppose SMART ePANTS detects a chemical on your skin — imagine where that can lead.” With consumer wearables already capable of monitoring your heartbeat, further breakthroughs could give rise to more invasive biometrics.

...If SMART ePANTS succeeds, it’s likely to become a tool in IARPA’s arsenal to “create the vast intelligence, surveillance, and reconnaissance systems of the future,” said Jacobsen. “They want to know more about you than you.”


X plans to collect users’ biometric data, along with education and job history

X, formerly known as Twitter, will begin collecting users’ biometric data, according to its new privacy policy that was first spotted by Bloomberg. The policy also says the company wants to collect users’ job and education history. The policy page indicates that the change will go into effect on September 29.

“Based on your consent, we may collect and use your biometric information for safety, security, and identification purposes,” the updated policy reads. Although X hasn’t specified what it means by biometric information, it is usually used to describe a person’s physical characteristics, such as their face or fingerprints. X also hasn’t provided any details about how it plans to collect it.

...The social network was named in a proposed class action suit last month alleging that X wrongfully captured, stored and used Illinois residents’ biometric data, including facial scans, without consent. The lawsuit alleges that X “has not adequately informed individuals” that it “collects and/or stores their biometric identifiers in every photograph containing a face.”


I Tracked an NYC Subway Rider's Movements with an MTA ‘Feature’

“Obviously this is a great fit for abusers who live with their victims or have physical access, however brief, to their wallets,” Eva Galperin, the director of cybersecurity at activist organization the Electronic Frontier Foundation (EFF) and who has extensively researched how abusive partners use technology, told 404 Media. “​​Credit card info is not a goddamn unique identifier.”

...The issue is that the feature requires no other authentication—no account linked to an email, for example—meaning that anyone with a target’s details can enter it and snoop on their movements. Greg Sadetsky originally alerted 404 Media to the OMNY privacy issue.

...Activists have long been concerned with what data the OMNY system may collect and provide to law enforcement. The Surveillance Technology Oversight Project (STOP) previously published a report with its concerns about the technology. “Given how often government agencies, including the New York Police Department (‘NYPD’), have abused surveillance data to target ethnic and religious minorities and how for- profit corporations face overwhelming pressure to monetize user data, OMNY has the potential to expose millions of transit users to troubling repercussions,” the report reads.

The difference with this feature on the OMNY site is that essentially anyone can abuse it, as long as they have the credit card information of the target.


Smart lightbulb and app vulnerability puts your Wi-Fi password at risk

New research highlights another potential danger from IoT devices, with a popular make of smart light bulbs placing your Wi-Fi network password at risk. Researchers from the University of London and Universita di Catania produced a paper explaining the dangers of common IoT products. In this case, how smart bulbs can be compromised to gain access to your home or office network.

...One vulnerability, with a CVSS score of 7.6 out of 10) allows for attackers to retrieve verification keys through brute force, or by decompiling the Tapo app itself. The other high severity flaw, wtih a CVSS of 8.8, is related to incorrect authentication of the bulb, which means the device can be impersonated, allowing for Tapo password theft and device manipulation.

...What is the potential for damage where the “severe” vulnerabilities are concerned? Well, in a worst case scenario someone could potentially swipe your Wi-Fi password via the Tapo app and then have access to all the devices on said network.


In an internal update obtained by The Intercept, Facebook and Instagram’s parent company admits its rules stifled legitimate political speech

The revision follows years of criticism of the policy. Last year, a third-party audit commissioned by Meta found the company’s censorship rules systematically violated the human rights of Palestinians by stifling political speech, and singled out the DOI policy...

...Observers like Shtaya have long objected to how the DOI policy has tended to disproportionately censor political discourse in places like Palestine — where discussing a Meta-banned organization like Hamas is unavoidable — in contrast to how Meta rapidly adjusted its rules to allow praise of the Ukrainian Azov Battalion despite its neo-Nazi sympathies.

...The revision does little to address the heavily racialized way in which Meta assesses and attempts to thwart dangerous groups, Díaz added. While the company still refuses to disclose the blacklist or how entries are added to it, The Intercept published a full copy in 2021. The document revealed that the overwhelming majority of the “Tier 1” dangerous people and groups — who are still subject to the harshest speech restrictions under the new policy — are Muslim, Arab, or South Asian. White, American militant groups, meanwhile, are overrepresented in the far more lenient “Tier 3” category.

Díaz said, “Tier 3 groups, which appear to be largely made up of right-wing militia groups or conspiracy networks like QAnon, are not subject to bans on glorification.”


Mustafa Suleyman, cofounder of DeepMind and Inflection AI, talks about how AI and other technologies will take over everything—and possibly threaten the very structure of the nation-state

Mustafa Suleyman: That is the flip side of having access to information on the web. We're now going to have access to highly capable, persuasive teaching AIs that might help us to carry out whatever sort of dark intention we have. And that's the thing that we've got to wrestle with, that is going to be dark, it is definitely going to accelerate harms—no question about it. And that's what we have to confront.

...Mustafa Suleyman: We are absolutely not ready, because that kind of power gets smaller and more efficient, and anytime something is useful in the history of invention, it tends to get cheaper, it tends to get smaller, and therefore it proliferates. So the story of the next decade is one of proliferation of power, which is what I think is gonna cause a genuine threat to the nation-state, unlike any we've seen since the nation-state was born.

...Mustafa Suleyman: Well, I think in sort of 15 or 20 years' time, you could imagine very powerful non-state actors. So think drugs cartels, militias, organized criminals, just an organization with the intent and motivation to cause serious harm. And so if the barrier to entry to initiating and carrying out conflict, if that barrier to entry is going down rapidly, then the state has a challenging question, which is, How does it continue to protect the integrity of its own borders and the functioning of its own states? If smaller and smaller groups of people can wield state-like power, that is essentially the risk of the coming wave.

...Mustafa Suleyman: I think the greatest challenge of the next decade is going to be the proliferation of power that will amplify inequality, and it will accelerate polarization because it's gonna be easier to spread misinformation than it's ever been. And I think it's just going to—it's gonna rock our currently unstable world.


How Amazon’s In-House First Aid Clinics Push Injured Employees to Keep Working

WIRED’s conversations with clinic staff who worked at 12 different facilities and recent OSHA citations, however, suggest that OMRs, typically emergency medical technicians, have sometimes been encouraged to steer workers toward in-house treatment. “Everything that we were doing was kind of pseudo-medical, enough to have the gloss of being medical,” says an EMT who worked in a Nevada AmCare. “When we’re in ambulances as EMTs, the entire point is to get people to definitive care. Then I get to Amazon, and it's like, ‘No, we're not getting them to a doctor.’ So what did you need me for? I'm the person who gets people to doctors.”

...OMRs typically treat employees who visit AmCare with heat, ice, or over-the-counter painkillers and handle referrals to workers’ compensation doctors. They can also refer workers to internal injury specialists, typically athletic trainers, for stretches and exercises designed to prevent further injury. Those staff and OMRs report to health and safety managers, who are not medical professionals. By limiting treatment to first aid provided by staff who don’t work under their medical licenses, Amazon avoids having to report these injuries to OSHA. Despite the “first” in first aid, AmCare staff often treat injured employees for days or weeks while they continue doing the job that injured them. OSHA says this can put employees at increased risk of developing enduring health issues.

...In April, OSHA issued Amazon the third citation in the agency’s 53-year history for medical mismanagement, finding that it seriously endangered employees’ health. That put the online retailer in the company of employers found to operate first aid clinics that put workers at risk of infection, scarring, or long-lasting injuries. Amazon had already received at least three warnings about AmCare from OSHA dating back to 2016, The Intercept reported. OSHA now found that over a six-month period, Amazon staff at a warehouse outside Albany, New York, sent at least six employees with serious injuries back to work instead of referring them to doctors, worsening their pain and potentially leading to “prolonged injuries and lifelong suffering.” The company is appealing. “We disagree with the claims in this citation,” Vogel of Amazon says, “and will continue our long-standing efforts to improve safety at our sites.”

...While on-site first aid clinics are common at large employers, the sort of prolonged, medically unsupervised treatment that Amazon offers is not, says Debbie Berkowitz, a former OSHA chief of staff and fellow at Georgetown University’s Kalmanovitz Initiative for Labor and the Working Poor. She says the practice of using on-site clinics to prevent recordable injuries became common in the meatpacking industry, which has a history of abusing its workforce. “So they’re taking a page out of a really low-road industry that treats workers as expendable,” she says.


Google is working on Gemini, its next-generation AI foundation model that can combine conversational text with image generation

Gemini would thus not only be able to generate text like ChatGPT but also create contextual images and hopefully even go beyond this. In the future, it could possibly be used to analyze charts, create graphics with text descriptions, and control software with text or voice commands.

...Google is also reportedly using YouTube video transcripts to train Gemini. Models trained on YouTube videos can provide advice based on video content, like helping mechanics diagnose a problem based on car repair videos, for example. Using YouTube video content could also help Google develop text-to-video software.


The QR code scam is afoot when a cybercriminal posts a QR code that looks like it’s coming from a reputable brand, organization or individual, then a perfectly nice person (that would be you) scans the code into their phone, and something bad happens, like you’ve just installed malware on your phone

“Not only can QR codes act as malicious links, bringing you to a nefarious website or downloading malware, but they can also be programmed to make calls and send messages to your contacts,” Hayden said. “A client of mine scanned a QR code that surreptitiously wrote and sent emails from his account to his entire contact list. The emails contained malicious links that sought to harvest recipients’ bank login information… and friends and family clicked because of the well-worded disguise.”

...The QR code parking scam has hit parking meters and lots in Myrtle Beach, in and around Atlanta, Baton Rouge, Portland, Maine, and… well, no need to mention every city in the country. You get the point. As if finding a good, affordable parking space isn’t hard enough. Now we all have to worry about being ripped off by fake parking QR codes.

...“Scammers placed their own malicious QR sticker over the restaurant’s legitimate one. Unsuspecting customers scanned the code, leading them to download malware, effectively compromising their personal information,” Chaudhuri said.

...Earlier this year, a woman in Singapore went to a bubble tea shop and saw a sticker pasted on the front door. The sticker said that if customers scanned the QR code and completed an online survey, they’d get a free cup of milk tea. That night, scammers entered the woman’s bank account and took $20,000.


China’s car companies are turning into tech companies

...“The auto industry is very competitive now. Consumers are expecting those vehicles to be tech products, like smartphones. It’d be hard for auto brands to sell their cars if they didn’t advertise their products this way,” he said. also brings in the difficult problems that the tech industry has failed to address: data security, privacy invasion, AI biases and failures, and potentially more.


“In infrastructure, this is something we can’t afford,” he said. “We can’t have A.I. hallucinate the design of a bridge.”

But the industry’s embrace of A.I. technology faces challenges, including concerns over accuracy and hallucinations, in which a system provides an answer that is incorrect or nonsensical.

...Dusty has 120 units on sites across the United States, but that is just the beginning. Ms. Lau calls the units, which can collect gigabytes of data, “Trojan horses to train the A.I.s of the future.”


This Is a Reminder That You’re Probably Oversharing on Venmo

Venmo is still set by default to publicly share when you receive or make a payment. There’s an option to make the transaction private, but if you use the app quickly and don’t notice the setting, you could unknowingly broadcast the payments between you and others.

“It’s not just that I went out to pizza with this person,” said Gennie Gebhart, a managing director at the Electronic Frontier Foundation, a digital rights nonprofit. “It’s a pattern of who you live with, interact with and do business with, and how it changes over time.”

Last month, The Guardian discovered through a Venmo feed that an aide for Justice Clarence Thomas was taking payments from lawyers who have had business with the Supreme Court, a potential conflict of interest. The aide has since hidden his Venmo activity from public view.

...In 2017, Hang Do Thi Duc, a data researcher who was at the Mozilla Foundation, published Public by Default, an interactive graphic summarizing the intimate details scraped from 208 million Venmo transactions. The graphic homed in on the daily lives of several Venmo users, including a cannabis dealer, a food cart vendor and a married couple splitting bills and paying off a loan together.


The Cyberspace Administration of China (CAC) said facial recognition technology can only be used to process facial information when there is a specific purpose and sufficient necessity, and with strict protective measures

The use of the technology will also require individual's consent, the CAC said in a statement. It added that non-biometric identification solutions should be favored over facial recognition in cases where such methods are equally effective.

Biometric identification, especially facial recognition, has become widespread in China. In 2020, local media reported that facial recognition was used to activate toilet roll dispensers in public toilets, which triggered both public and regulatory concerns at the time.

...CAC's draft rules on Tuesday said image capturing and personal identification devices should not be installed in hotel rooms, public bathrooms, changing rooms, toilets, and other places that may infringe upon others' privacy.


China’s draft measures demand ‘individual consent’ for facial recognition use

Critics have raised concerns over privacy and bias over the use of facial recognition. They complain that some residential compounds have made facial scans the only way of accessing buildings. There are also concerns about the accuracy and fairness of algorithms, particularly in recognizing the faces of minorities, which could lead to the unjust targeting of certain groups.

...The rules emphasize the need for clear signage in public areas where facial recognition is employed. Venues such as hotels, airports, and museums are prohibited from coercing individuals into accepting facial scans for such reasons as “business operations” or “service enhancements”. Moreover, facial recognition should not serve as the sole means of access to a building.

...The country has also drawn fire for deploying facial recognition systems to identify people’s ethnicities, particularly in the case of Uyghurs; but that won’t change with the new rules. According to the proposed measures, any organization or individual should refrain from utilizing facial recognition technology to create profiles based on race, ethnic group, religion, health, social class, or other sensitive information, unless it’s deemed necessary for reasons including national security and public security.


AI is acting ‘pro-anorexia’ and tech companies aren’t stopping it

“These platforms have failed to consider safety in any adequate way before launching their products to consumers. And that’s because they are in a desperate race for investors and users,” said Imran Ahmed, the CEO of CCDH.

...There’s already evidence that people with eating disorders are using AI. CCDH researchers found that people on an online eating disorder forum with over 500,000 users were already using ChatGPT and other tools to produce diets, including one meal plan that totaled 600 calories per day.

...My takeaway: Many of biggest AI companies have decided to continue generating content related to body image, weight loss and meal planning even after seeing evidence of what their technology does. This is the same industry that’s trying to regulate itself.

They may have little economic incentive to take eating disorder content seriously. “We have learned from the social media experience that failure to moderate this content doesn’t lead to any meaningful consequences for the companies or, for the degree to which they profit off this content,” said Hannah Bloch-Wehba, a professor at Texas A&M School of Law, who studies content moderation.


Eight Months Pregnant and Arrested After False Facial Recognition Match

“I was having contractions in the holding cell. My back was sending me sharp pains. I was having spasms. I think I was probably having a panic attack,” said Ms. Woodruff, a licensed aesthetician and nursing school student. “I was hurting, sitting on those concrete benches.”

After being charged in court with robbery and carjacking, Ms. Woodruff was released that evening on a $100,000 personal bond. In an interview, she said she went straight to the hospital where she was diagnosed with dehydration and given two bags of intravenous fluids. A month later, the Wayne County prosecutor dismissed the case against her.

...According to city documents, the department uses a facial recognition vendor called DataWorks Plus to run unknown faces against a database of criminal mug shots; the system returns matches ranked by their likelihood of being the same person. A human analyst is ultimately responsible for deciding if any of the matches are a potential suspect. The police report said the crime analyst gave the investigator Ms. Woodruff’s name based on a match to a 2015 mug shot. Ms. Woodruff said in an interview that she had been arrested in 2015 after being pulled over while driving with an expired license.

...Gary Wells, a psychology professor who has studied the reliability of eyewitness identifications, said pairing facial recognition technology with an eyewitness identification should not be the basis for charging someone with a crime. Even if that similar-looking person is innocent, an eyewitness who is asked to make the same comparison is likely to repeat the mistake made by the computer.


TikTok to be fined for breaching children’s privacy in EU

TikTok is to be fined potentially millions of pounds for breaching children’s privacy after a ruling by EU data protection regulator.

The European Data Protection Board said it had reached a binding decision on the Chinese-owned video-sharing platform over its processing of children’s data.

...On Friday, the company said new measures it had taken to comply with the DSA included: making it easier for EU users to report illegal content; allowing them to turn off personalised recommendations for videos; and removing targeted advertising for users aged 13 to 17.


TikTok users in Europe will be able to see recommended ‘For You’ videos that don’t rely on tracking their online activity

These changes relate to DSA rules that require very large online platforms to allow their users to opt out of receiving personalized content — which typically relies on tracking and profiling user activity — when viewing content recommendations. To comply, TikTok’s search feature will also show content that’s popular in the user’s region, and videos under the “Following” and “Friends” feeds will be displayed in chronological order when a non-personalized view is selected.


Meta has run into yet another bout of court related issues—two subsidiaries have been ordered to pay $14 million regarding undisclosed data collection

Meta and Facebook Israel’s internal documents state that Onavo Protect was “a business intelligence tool” for Meta, which provided Meta with “a sample of users who we are able to know nearly everything they are doing on their mobile device”...

...Where the theoretical maximum penalty is in the billions or trillions of dollars, the overall maximum penalty will not be a meaningful factor in the court’s assessment. In these circumstances, the appropriate range is best assessed by reference to factors other than where the conduct falls in the range of seriousness of offending in relation to the maximum penalty.

Last year, Instagram received a record fine of $400m for the abuse of children’s data. Elsewhere, Meta was fined $277m for a data breach which impacted around 500 million users. Some believe that social networks simply consider fines like these to be the cost of doing business. A few million dollars here or there doesn’t necessarily convince those responsible to do anything about it.


The Biden Appointee Spearheading AI Accountability Has Close Ties To Google

Alan Davidson currently leads the National Telecommunications and Information Administration, or NTIA, the agency now crafting recommendations on how federal regulators can hold AI companies accountable. But for years, he worked as Google’s chief lobbyist in Washington, fighting regulatory battles that helped establish Google as the behemoth it is today, before moving on to organizations with close financial ties to the company.

...Davidson joined Google in 2005 to help launch its in-house lobbying shop and served as its chief lobbyist in Washington until 2012. Early in his tenure, he convinced regulators to approve Google’s acquisition of the online ad platform DoubleClick, which the Justice Department now says Google used to illegally monopolize digital advertising. Google has called those claims unfounded.

After leaving Google, Davidson remained in its orbit, bouncing from various executive and senior roles at the New America think tank and Mozilla. New America has received more than $23 million from Google, longtime former Google CEO and Chairman Eric Schmidt, and affiliated nonprofits. Its main conference room was named the “Eric Schmidt Ideas Lab.” In 2017, New America ousted one of its scholars for applauding a $2.7 billion fine against Google by European antitrust regulators. New America did not respond to a request for comment.

...Mozilla derives the majority of its revenue from a deal that makes Google the default search engine for its browser, Firefox. A spokesperson for Mozilla declined to comment.


An effort by United States lawmakers to prevent government agencies from domestically tracking citizens without a search warrant is facing opposition internally from one of its largest intelligence services

Republican and Democratic aides familiar with ongoing defense-spending negotiations in Congress say officials at the National Security Agency (NSA) have approached lawmakers charged with its oversight about opposing an amendment that would prevent it from paying companies for location data instead of obtaining a warrant in court.

Introduced by US representatives Warren Davidson and Sara Jacobs, the amendment, first reported by WIRED, would prohibit US military agencies from “purchasing data that would otherwise require a warrant, court order, or subpoena” to obtain. The ban would cover more than half of the US intelligence community, including the NSA, the Defense Intelligence Agency, and the newly formed National Space Intelligence Center, among others.

...A prior ruling had held that Americans could not reasonably expect privacy in all cases while also voluntarily providing companies with stores of information about themselves. But in 2018 the court refused to extend that thinking to what it called a “new phenomenon”: wireless data that may be “effortlessly compiled” and the emergence of technologies capable of granting the government what it called “near perfect surveillance.” Because this historical data can effectively be used to “travel back in time to retrace a person’s whereabouts,” the court said, it raises “even greater privacy concerns” than devices that can merely pinpoint a person’s location in real time.

...A senior advisory group to the director of national intelligence, Avril Haines, the government’s top spy, stated in the report declassified last month that intelligence agencies were continuing to consider information “nonsensitive” merely because it had been commercially obtained. This outlook ignores “profound changes in the scope and sensitivity” of such information, the advisors warned, saying technological advancements had “undermined the historical policy rationale” for arguing that information that is bought may be freely used “without significantly affecting the privacy and civil liberties of US persons.”


This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity”

In March, OpenAI led a funding round for a company that is developing humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.” At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.”

...How many jobs, and how soon, is a matter of fierce dispute. A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first. The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few. If jobs in these fields vanished overnight, the American professional class would experience a great winnowing.

...Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years. The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.

...“The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,” Sutskever told me. Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,” Sutskever said. Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”

...“First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,” Altman said. “I don’t have an exact number, but I’m closer to the 0.5 than the 50.” As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly. Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.

...Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI. In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary. Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance. Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.


A group of lawmakers on the House Judiciary Committee passed a proposed piece of legislation that would stop government agencies buying data without a warrant

“By passing the Fourth Amendment Is Not For Sale Act, both Democrats and Republicans on the House Judiciary Committee just made clear that the Data Broker Loophole must and will be closed,” Senior Policy Counsel Sean Vitka at activist group Demand Progress said in a statement. “This is a major step forward for privacy in the digital age, but among the most significant moments were statements from Chairman Jordan and Representative Lofgren that this will be included in legislation to make major reforms to FISA, which will be considered before the end of the year.”


A bill to prevent cops and spies from buying Americans’ data instead of getting a warrant has a fighting chance in the US Congress as lawmakers team up against surveillance overreach

The Federal Bureau of Investigation (FBI) and the Defense Intelligence Agency are among several government entities known to have solicited private data brokers to access information for which a court order is generally required. A growing number of lawmakers have come to view the practice as an end run around the US Constitution’s Fourth Amendment guarantees against unreasonable government searches and seizures.

...Notably, the bill's protections extend to data obtained from a person's account or device even if hacked by a third party, or when disclosure is referenced by a company's terms of service. The bill's sponsors note this would effectively prohibit the government from doing business with companies such as Clearview AI, which has admitted to scraping billions of photos from social media to fuel a facial recognition tool that's been widely tested by local police departments.

...A report declassified last month by the nation’s top intelligence official, Avril Haines, stated that a “large amount” of “sensitive and intimate information” has been purchased by the intelligence community, including information that the US Supreme Court has previously ruled is protected by the Fourth Amendment. Senior congressional sources say many lawmakers were taken aback by the apparent breadth of the collection and of the warnings in the report about its potential to “facilitate blackmail, stalking, harassment, and public shaming.”

...The Supreme Court has erstwhile framed the Fourth Amendment as a means to “plac[ing] obstacles in the way of a too permeating police surveillance,” something that the Constitution’s authors deemed a “greater danger to a free people than the escape of some criminals from punishment.” Oft-cited by the court is a passage by a 19th-century American jurist: “Of all the rights of the citizen, few are of greater importance or more essential to his peace and happiness than the right of personal security, and that involves not merely protection of his person from assault, but exemption of his private affairs, books, and papers, from the inspection and scrutiny of others. Without the enjoyment of this right, all others would lose half their value.”


How Hackers Could Attack Electric Vehicle Chargers

In recent years, security researchers and white-hat hackers have identified sprawling vulnerabilities in internet-connected home and public charging hardware that could expose customer data, compromise Wi-Fi networks, and, in a worst-case scenario, bring down power grids. Given the dangers, everyone from device manufacturers to the Biden administration is rushing to fortify these increasingly common machines and establish security standards. “This is a major problem,” said Jay Johnson, a cybersecurity researcher at Sandia National Laboratories. “It is potentially a very catastrophic situation for this country if we don’t get this right.”

Chinks in EV charger security aren’t hard to find. Johnson and his colleagues summarized known shortcomings in a paper published last fall in the journal Energies. They found everything from the possibility of hackers being able to track users to vulnerabilities that “may expose home and corporate [Wi-Fi] networks to a breach.” Another study, led by Concordia University and published last year in the journal Computers & Security, highlighted more than a dozen classes of “severe vulnerabilities,” including the ability to turn chargers on and off remotely, as well as deploy malware.

When British security research firm Pen Test Partners spent 18 months analyzing seven popular EV charger models, it found five had critical flaws. For instance, it identified a software bug in the popular ChargePoint network that hackers could likely exploit to obtain sensitive user information (the team stopped digging before acquiring such data). A charger sold in the U.K. by Project EV allowed researchers to overwrite its firmware.

...“It’s not about your charger, it’s about everyone’s charger at the same time,” he said. Many home users leave their cars connected to chargers even if they aren’t drawing power. They might, for example, plug in after work and schedule the vehicle to charge overnight when prices are lower. If a hacker were to switch thousands, or millions, of chargers on or off simultaneously, it could destabilize and even bring down entire electricity networks. “We’ve inadvertently created a weapon that nation-states can use against our power grid,” said Munro. The United States glimpsed what such an attack might look like in 2021 when hackers hijacked Colonial Pipeline and disrupted gasoline supplies nationwide. The attack ended once the company paid millions of dollars in ransom.


Amazon Told Drivers Not to Worry About In-Van Surveillance Cameras. Now Footage Is Leaking Online

The video is one of a slew of in-van surveillance videos recently posted to Reddit, a phenomenon which hasn’t frequently been seen on the site before. Over the past two weeks, many users in the Amazon delivery service partner drivers subreddit (r/AmazonDSPDrivers) have shared video footage from the cameras, either directly or by recording it on their phone from a monitor within the warehouse. It is clear that many of the videos are not being posted by the subject of the video themselves, and highlights the fact that Amazon drivers, who already have incredibly difficult jobs, are being monitored at all times.

When Motherboard first wrote about the “Biometric Consent” form drivers had to sign that allows them to be monitored while on the job, Amazon insisted that the program was about safety only, and that workers shouldn't be worried about their privacy: “Don’t believe the self-interested critics who claim these cameras are intended for anything other than safety,” a spokesperson told us at the time. But this video, and a rash of others that have recently become public, shows that access to the camera feeds is being abused.

...“There's a reason why us at UPS just negotiated driver facing cameras out” one user wrote, referring to a bargaining point in UPS-Teamsters national contract negotiations that would prohibit in-vehicle cameras from recording drivers. “Shit is creepy AF.”

Another person wrote, “They already have enough surveillance devices on us. With a camera it's just over supervising, and invasive use of technology. They tested us for 35 days on our ability to drive their vehicle, perform and qualify for our job. They can trust us without a camera in our faces.” A third called it “dystopian BS.”


Who owns the data generated by your car? And who controls access to it?

Other repairers worry that without an industry-wide overhaul that forces automakers to standardize and open up their data, car companies will find ways to limit access to repair information, or push customers towards their own dealership networks to boost profits. They say that if auto owners had clear and direct ownership over the data generated by their vehicles—without the involvement of automakers’ specialized tools or systems—they could use it themselves to diagnose and repair a car, or authorize the repair shop of their choice to do the work. “My fear, if no one gives some stronger guidelines, is that I know automakers are going to monetize car data in a way that’s unaffordable for us to gain access,” says Dwayne Myers, co-owner of Dynamic Automotive, an auto repair business with several locations in Maryland.

...The hearing follows national wrangling over a Massachusetts law passed by a 2020 ballot measure that gave state car owners firmer control over the data generated by their cars. The Alliance for Automotive Innovation sued the state over the law, preventing lawmakers from enforcing it, and a judge has yet to decide the case. But last month, the Massachusetts attorney general announced she would begin to penalize automakers that withheld data for not complying with the rule. Days later, the US Department of Transportation warned automakers not to comply with the Massachusetts law, citing concerns it would open vehicles to hacking. The letter appeared to contradict the Biden administration’s prior commitments to right-to-repair issues.

...Myers, the Maryland independent repairer, says that allowing customers to own their car's data today would, first and foremost, “give them the right to choose where they get their car fixed.” But he also has his eye on the future. “Down the road, we will find out what automakers are collecting,” he says—and why. He’d rather establish car owners’ right to control that information now, before they discover too late that it’s being used in ways they don’t like.


On his way to meeting US officials, the EU’s justice chief, Didier Reynders, tells WIRED the US must deliver on talk of tighter regulation on tech: “Enforcement is of the essence”

Although the US Federal Trade Commission has reached settlements with tech companies requiring diligence with user data under threat of fines, Reynders is circumspect about their power. “I'm not saying that this is nothing,” he says, but they lack the bite of laws that open the way to more painful fines or lawsuits. “Enforcement is of the essence,” Reynders says. “And that's the discussion that we have with US authorities.”

...“If you have a common approach in the US and EU, we have the capacity to put in place an international standard,” Reynders says. But if the EU’s forthcoming AI Act isn’t matched with US rules for AI, it will be more difficult to ask tech giants to be in full compliance and change how the industry operates. “If you’re doing that alone, like for the GDPR, that takes some time and it slowly spreads to other continents,” he says. “With real action on the US side, together, it will be easier.”


Elon Musk says Twitter is losing cash because advertising is still down sharply and the social media company is carrying heavy debt

In a reply to a tweet offering business advice, Musk tweeted Saturday, “We’re still negative cash flow, due to (about a) 50% drop in advertising revenue plus heavy debt load.”

...Ever since he took over Twitter in a $44 billion deal last fall, Musk has tried to reassure advertisers who were concerned about the ouster of top executives, widespread layoffs and a different approach to content moderation. Some high-profile users who had been banned were allowed back on the site.


Christopher Nolan Explains Why He Still Won't Carry A Smartphone

“People will say, ‘Why do you work in secrecy?’” Nolan told the outlet. “Well, it’s not secrecy, it’s privacy. It’s being able to try things, to make mistakes, to be as adventurous as possible.”



FTC investigates OpenAI over data leak and ChatGPT’s inaccuracy

The FTC’s demands of OpenAI are the first indication of how it intends to enforce those warnings. If the FTC finds that a company violates consumer protection laws, it can levy fines or put a business under a consent decree, which can dictate how the company handles data. The FTC has emerged as the federal government’s top Silicon Valley cop, bringing large fines against Meta, Amazon and Twitter for alleged violations of consumer protection laws.

The FTC called on OpenAI to provide detailed descriptions of all complaints it had received of its products making “false, misleading, disparaging or harmful” statements about people. The FTC is investigating whether the company engaged in unfair or deceptive practices that resulted in “reputational harm” to consumers, according to the document.

The FTC also asked the company to provide records related to a security incident that the company disclosed in March when a bug in its systems allowed some users to see payment-related information, as well as some data from other users’ chat history. The FTC is probing whether the company’s data security practices violate consumer protection laws. OpenAI said in a blog post that the number of users whose data was revealed to someone else was “extremely low.”

...Khan responded that libel and defamation aren’t a focus of FTC enforcement, but that misuse of people’s private information in AI training could be a form of fraud or deception under the FTC Act. “We’re focused on, ‘Is there substantial injury to people?’ Injury can look like all sorts of things,” Khan said.


"To complicate matters further, actors now face an existential threat to their livelihoods with the rise of generative AI technology"

In the days after less than half (41 percent) of the Directors Guild of America’s eligible voters agreed to ratify a contract that many of its members had serious concerns about, the AMPTP turned its focus to SAG-AFTRA, which, like the WGA, has identified the industry’s adoption of AI tools as one of the more pressing matters that need to be addressed as studios rush to embrace the technology.

Back at the beginning of June, when 98 percent of SAG-AFTRA’s members voted to authorize a strike, the union had already made it abundantly clear that its desire for more thorough protections (by way of regulations) against AI tools was another sticking point it’s not budging on. By June 30th, four weeks into the WGA’s ongoing strike that had already shut down the vast majority of film and TV productions here in the US, there hadn’t been any discernible progress between SAG-AFTRA and the AMPTP.

...It’s been reported that the AMPTP’s plan is to keep prolonging this fight until “union members start losing their apartments and losing their houses.” But a very similar prospect — the possibility of being driven out of the industry by a system designed to ensure that profits remain concentrated among a select few — is exactly why the writers and actors are striking in the first place. The AMPTP has said that it’s “committed to reaching a deal and getting our industry back to work,” and that may be the case. But if it truly is, all the producers need to do is to meet the unions and the workers they represent where they’re at — it’s just that simple.


Tax prep companies let Google and Facebook sell ads off your data

On Wednesday, the lawmakers released a report detailing how TaxAct, H&R Block, and TaxSlayer put Meta and Google’s tracking pixels on their sites, sharing taxpayers’ data with those companies in what could be a violation of the law. It shows how extensive this data collection and sharing is, even on web services that you’d expect or trust would keep your information private. The situation also shows how a lack of privacy laws has helped make this practice so widespread and ingrained into the fabric of the internet itself, to the point that even companies that may have a legal obligation to keep our data private don’t know or don’t care that they aren’t doing so.

“Giant tax prep companies have recklessly shared millions of taxpayers’ most sensitive, private tax information with Meta in what appears to be a violation of the law,” Warren told Vox. “The Department of Justice and FTC must immediately investigate and prosecute any Big Tax Prep or Big Tech company that broke the law, and we must expand the IRS’ free, direct-file program to ensure taxpayers have the option to protect their privacy from greedy, reckless corporations.”

The seven-month-long investigation was prompted by a report from the Markup, which found Meta’s trackers on the three tax prep companies’ websites. Those trackers, the article said, were sending the sensitive financial and biographical information that the tax prep sites collect to Meta. That data could include names, addresses, phone numbers, which pages users clicked on, and even their income, refund amount, and other financial data. (Intuit, which makes TurboTax, also had trackers on its sites, but the data it shared was limited.)


Elon Musk's xAI Might Be Hallucinating Its Chances Against ChatGPT

Although his supposedly giant-killing AI project is starting small, Musk does, of course, have some significant resources to draw on. The new company will work closely with Twitter and Tesla, according to the xAI website. Twitter’s data from conversations on the platform is well suited to training large language models like that behind ChatGPT, and Tesla now designs its own specialized AI chips and has significant experience building large computing clusters for AI, which could be used to boost xAI’s cloud computing power. Tesla is also building a humanoid robot, a project that could be helped by, and be helpful to, xAI in future.


Three Tax Prep Firms Shared 'Extraordinarily Sensitive' Data About Taxpayers With Meta, Lawmakers Say

Their report urges federal agencies to investigate and potentially go to court over the wealth of information that H&R Block, TaxAct and TaxSlayer shared with the social media giant.

...That data came to Meta through its Pixel code, which the tax firms installed on their websites to gather information on how to improve their own marketing campaigns. In exchange, Meta was able to access the data to write targeted algorithms for its own users.

The program collected information on taxpayers’ filing status, income, refund amounts, names of dependents, approximate federal tax owed, which buttons were clicked on the tax preparers' websites and the names of text entry forms that the taxpayer navigated, the report states.


The Quiet Rise of Real-Time Crime Centers

In 2005, they answered with the first “real-time crime center” (RTCC), a sprawling network of CCTV and automatic license plate readers (ALPR) linked to a central hub in the New York Police Department headquarters costing $11 million. Since then, from Miami to Seattle, RTCCs have steadily expanded across the US. The Atlas of Surveillance, a project from the digital rights nonprofit the Electronic Frontier Foundation (EFF), which monitors police surveillance technology, has counted 123 RTCCs nationwide—and that number is rising.

Each RTCC is slightly different, but their function is the same: gather surveillance data across a city and use that to build a live picture of crime in the city. Police departments have an array of technologies available to them that span from CCTV, gunshot sensors, and social media monitoring to drones and body cameras. In Ogden, Utah, police even floated the idea of a 30-foot “crime blimp.” In many cases, images that police systems collect are run through facial recognition technology, and the data gathered is often used in predictive policing. In Pasco County, Florida, which operates an RTCC, the sheriff’s office’s predictive policing system encouraged officers to continuously monitor and harass residents for minor code violations such as missing mailbox numbers and overgrown grass.

...Fusus, which claims to be “the most widely used & trusted Real-Time Crime Center platform in U.S. Public Safety,” sells hardware that can be connected to private CCTV cameras and linked up to the local RTCC. Fusus sells a solution that brings all the various technologies under “a single pane of glass,” as the company describes it. Through partnerships with companies that provide surveillance technology, including a $21 million investment from Axon, which produces Tasers and body cams, Fusus promises to integrate these technologies into one RTCC platform for analysts.

Police departments that use Fusus, like the Memphis Police Department, have been encouraging homeowners and local businesses to purchase fususCORE bundles—hardware that connects cameras to an RTCC—ranging from $350 to $7,300, plus an annual $150 subscription. Fusus has even gone as far as developing technology that allows Amazon’s Ring doorbells to livestream to an RTCC.


Why We Don’t Recommend Ring Cameras

When you set up a Ring camera, you are automatically enrolled in the Neighbors service. (You can go into the Ring app's settings and toggle off the Neighbors feed integration and notifications, but the onus is on you.) Neighbors, which is also a stand-alone app, shows you an activity feed from all nearby Ring camera owners, with posts about found dogs, stolen hoses, and a Safety Report that shows how many calls for service—violent or nonviolent—were made in the past week. It also provides an outlet for public safety agencies, like local police and fire departments, to broadcast information widely.

But it also allows Ring owners to send videos they've captured with their Ring video doorbell cameras and outdoor security cameras to law enforcement. This is a feature unique to Ring—even Nextdoor removed its Forward to Police feature in 2020, which allowed Nextdoor users to forward their own safety posts to local law enforcement agencies. If a crime has been committed, law enforcement should obtain a warrant to access civilian video footage.

...Multiple members of WIRED's Gear team have spoken to Ring over the years about this feature. The company has been clear it's what customers want, even though there’s no evidence that more video surveillance footage keeps communities safer. Instead, Neighbors increases the possibility of racial profiling. It makes it easier for both private citizens and law enforcement agencies to target certain groups for suspicion of crime based on skin color, ethnicity, religion, or country of origin.


Google plans to scrape everything you post online to train its AI

Additions to Google’s Privacy Policy are making some observers worry that all of your content is about to be fed into Google's AI tools. Alterations to the T&Cs now explicitly state that your “publicly available information” will be used to train in-house Google AI models alongside other products.

your text, photos, and music could end up helping to train its products and “AI models”....


Generative AI in Games Will Create a Copyright Crisis

Yet games like AI Dungeon (and games people have made with ChatGPT, such as Love in the Classroom) are built on models that have scraped human creativity in order to generate their own content. Fanfic writers are finding their ideas in writing tools like Sudowrite, which uses OpenAI’s GPT-3, the precursor to GPT-4.

Things get even more complicated if someone pays the $9.99 per month required to incorporate Stable Diffusion, the text-to-image generator, which can conjure accompanying pictures in their AI Dungeon stories. Stability AI, the company behind Stable Diffusion, has been hit with lawsuits from visual artists and media company Getty Images.

As generative AI systems grow, the term “plagiarism machines” is beginning to catch on. It’s possible that players of a game using GPT-3 or Stable Diffusion could be making things, in-game, that pull from the work of other people. Latitude’s position appears to be much like Stability AI’s: What the tool produces does not infringe copyright, so the user is the owner of what comes out of it. (Latitude did not respond to questions about these concerns.)

Trapova suggests the game development industry is on the brink of a generative AI reckoning. “They look super cool,” she says of game development tools like AI Dungeon. “But this just gives you a flavor of the issues that we will end up having if this all goes on steroids.” Soon, such legal problems will become impossible to ignore.


Google Says It'll Scrape Everything You Post Online for AI

Google updated its privacy policy over the weekend, explicitly saying the company reserves the right to scrape just about everything you post online to build its AI tools. If Google can read your words, assume they belong to the company now, and expect that they’re nesting somewhere in the bowels of a chatbot.

...This is an unusual clause for a privacy policy. Typically, these policies describe ways that a business uses the information that you post on the company’s own services. Here, it seems Google reserves the right to harvest and harness data posted on any part of the public web, as if the whole internet is the company’s own AI playground. Google did not immediately respond to a request for comment.

The practice raises new and interesting privacy questions. People generally understand that public posts are public. But today, you need a new mental model of what it means to write something online. It’s no longer a question of who can see the information, but how it could be used. There’s a good chance that Bard and ChatGPT ingested your long forgotten blog posts or 15-year-old restaurant reviews. As you read this, the chatbots could be regurgitating some humonculoid version of your words in ways that are impossible to predict and difficult to understand.

One of the less obvious complications of the post ChatGPT world is the question of where data-hungry chatbots sourced their information. Companies including Google and OpenAI scraped vast portions of the internet to fuel their robot habits. It’s not at all clear that this is legal, and the next few years will see the courts wrestle with copyright questions that would have seemed like science fiction a few years ago. In the meantime, the phenomenon already affects consumers in some unexpected ways.


This is surveillance under a law and authority called Section 702 of the FISA Amendments Act

But sometimes, individualized warrants are never issued, never asked for, never really needed, depending on which government agency is conducting the surveillance, and for what reason. Every year, countless emails, social media DMs, and likely mobile messages are swept up by the US National Security Agency—even if those communications involve a US person—without any significant warrant requirement. Those digital communications can be searched by the FBI. The information the FBI gleans from those searches can be used can be used to prosecute Americans for crimes. And when the NSA or FBI make mistakes—which they do—there is little oversight.

...The law and the regime it has enabled are opaque. There are definitions for "collection" of digital communications, for "queries" and "batch queries," rules for which government agency can ask for what type of intelligence, references to types of searches that were allegedly ended several years ago, "programs" that determine how the NSA grabs digital communications—by requesting them from companies or by directly tapping into the very cables that carry the Internet across the globe—and an entire, secret court that, only has rarely released its opinions to the public.

Today, on the Lock and Code podcast, with host David Ruiz, we speak with Electronic Frontier Foundation Senior Policy Analyst Matthew Guariglia about what the NSA can grab online, whether its agents can read that information and who they can share it with, and how a database that was ostensibly created to monitor foreign intelligence operations became a tool for investigating Americans at home.


Cops Are Already Treating Self-Driving Cars As 'Surveillance Cameras On Wheels'

No matter how frustrating or dangerous self-driving taxis continue to be, companies are currently expanding their use in cities like San Francisco, Los Angeles, Phoenix, and Vegas. Police, meanwhile, are taking advantage of the self-driving taxi proliferation to investigate crimes and possibly violate your privacy.

...The real problem is we have no laws regulating the use, storage and access of such critical data in any way that protects the average citizens, despite efforts from lawmakers over the last decade or more to address the issue. Police were already using Ring security camera footage shot by private citizens and uploaded to Amazon’s Neighborhood app in their investigations without permission.

It is not just police and public robotaxis that you need to worry about when it comes to self-driving cars and surveillance. Tesla employees were recently caught passing around videos of their customers private lives surreptitiously recorded by privately owned cars. Such videos were passed around the company to multiple employees.


Police Are Requesting Self-Driving Car Footage for Video Evidence

As self-driving cars become a fixture in major American cities like San Francisco, Phoenix and Los Angeles, police are increasingly relying on their camera recordings to try to solve cases. In Waymo’s main markets, San Francisco and Arizona’s Maricopa County, Bloomberg found nine search warrants that had been issued for the company’s footage, plus another that had been sent to rival autonomous driving firm Cruise. More warrants may have been issued under seal.

While security cameras are commonplace in American cities, self-driving cars represent a new level of access for law enforcement — and a new method for encroachment on privacy, advocates say. Crisscrossing the city on their routes, self-driving cars capture a wider swath of footage. And it’s easier for law enforcement to turn to one company with a large repository of videos and a dedicated response team than to reach out to all the businesses in a neighborhood with security systems.

“We’ve known for a long time that they are essentially surveillance cameras on wheels,” said Chris Gilliard, a fellow at the Social Science Research Council. “We're supposed to be able to go about our business in our day-to-day lives without being surveilled unless we are suspected of a crime, and each little bit of this technology strips away that ability.”

...“With the lack of consumer privacy protections that we have in the US right now, companies are able to collect as much information as humanly possible,” said Matthew Guariglia, a policy analyst at the Electronic Frontier Foundation, adding that police are then able to capitalize on the trove of data.


ChatGPT Creator OpenAI Sued for Theft of Private Data in ‘AI Arms Race’

OpenAI has violated privacy laws by secretly scraping 300 billion words from the internet, tapping “books, articles, websites and posts — including personal information obtained without consent,” according to the sprawling, 157-page lawsuit. It doesn’t shy from sweeping language, accusing the company of risking “civilizational collapse.”

...“Despite established protocols for the purchase and use of personal information, Defendants took a different approach: theft,” they allege. The company’s popular chatbot program ChatGPT and other products are trained on private information taken from what the plaintiffs described as hundreds of millions of internet users, including children, without their permission.

...Misappropriating personal data on a vast scale to win an “AI arms race,” OpenAI illegally accesses private information from individuals’ interactions with its products and from applications that have integrated ChatGPT, the plaintiffs claim. Such integrations allow the company to gather image and location data from Snapchat, music preferences on Spotify, financial information from Stripe and private conversations on Slack and Microsoft Teams, according to the suit.


A California-based law firm is launching a class-action lawsuit against OpenAI, alleging the artificial-intelligence company that created popular chatbot ChatGPT massively violated the copyrights and privacy of countless people when it used data scraped from the internet to train its tech

The lawsuit goes to the heart of a major unresolved question hanging over the surge in “generative” AI tools such as chatbots and image generators. The technology works by ingesting billions of words from the open internet and learning to build inferences between them. After consuming enough data, the resulting “large language models” can predict what to say in response to a prompt, giving them the ability to write poetry, have complex conversations and pass professional exams. But the humans who wrote those billions of words never signed off on having a company such as OpenAI use them for its own profit.

...The suit also adds to the growing list of legal challenges to the companies building and hoping to profit from AI tech. A class-action lawsuit was filed in November against OpenAI and Microsoft for how the companies used computer code in the Microsoft-owned online coding platform GitHub to train AI tools. In February, Getty Images sued Stability AI, a smaller AI start-up, alleging it illegally used its photos to train its image-generating bot. And this month OpenAI was sued for defamation by a radio host in Georgia who said ChatGPT produced text that wrongfully accused him of fraud.

OpenAI isn’t the only company using troves of data scraped from the open internet to train their AI models. Google, Facebook, Microsoft and a growing number of other companies are all doing the same thing. But Clarkson decided to go after OpenAI because of its role in spurring its bigger rivals to push out their own AI when it captured the public’s imagination with ChatGPT last year, Clarkson said.

...The new class-action lawsuit against OpenAI goes further in its allegations, arguing that the company isn’t transparent enough with people who sign up to use its tools that the data they put into the model may be used to train new products that the company will make money from, such as its Plugins tool. It also alleges OpenAI doesn’t do enough to make sure children under 13 aren’t using its tools, something that other tech companies including Facebook and YouTube have been accused of over the years.


Zillow Can Pose A Security Threat. Here's How To Remove Your Photos From Real Estate Websites.

“Zillow and Trulia are ‘free’ services, but they sell leads and info to everyone else in real estate and surrounding industries, including mortgage lenders and home improvement outlets,” said Bridget Torrey, a managing broker at Gustave White Sotheby’s International Realty in Tiverton, Rhode Island.

...“Zillow’s portfolio is huge and growing, and they have a lot of transactional data as they also own [the document-signing software] Dotloop ... and they have pictures of everyone’s house, what shoes they buy, what their kids look like, what they drink, etc.,” Torrey said. “It’s a consumer market researcher’s gold mine, organized by zip code.”

Torrey tells her clients to hide as much of their personal belongings as possible when taking photos for listings, and she doesn’t post images that show too much personal info. But she also acknowledges that some information is revealed just by the style of furnishings people have in their homes, and that can be a problem.

...Determined scammers can glean an extraordinary amount of information about their targets from photos. “Your home decor could also reveal if you live alone, and there are indicators about your gender, if you have young children, and so much more,” Eaton said, “It’s a treasure trove of personal data.”


Docs Show FBI Pressures Cops to Keep Phone Surveillance Secrets

The documents, handed over by the FBI under the Freedom of Information Act, include copies of nondisclosure agreements signed by police departments requesting access to portable devices known as cell-site simulators, otherwise known by the generic trademark “Stingray” after an early model developed by L3Harris Technologies. The FBI requires the NDAs to be signed before agreeing to aid police in tracking suspects using the devices. Stipulations in the contracts include withholding information about the devices, they're functionality, and deployment from defendants and their lawyers in the event the cases prove justiciable.

Legal experts at the ACLU, Laura Moraff and Nathan Wessler, say the secrecy requirements interfere with the ability of defendants to challenge the legality of surveillance and keep judges in the dark as to how the cases before their court unfold. “We deserve to know when the government is using invasive surveillance technologies that sweep up information about suspects and bystanders alike," Moraff says. “The FBI needs to stop forcing law enforcement agencies to hide these practices.”

...Whether US government entities have ever employed some of these advanced features domestically is unknown. Certain models used by the federal government are known to come with software capable of intercepting communications; a mode in which the device executes a man-in-the-middle attack on an individual phone rather than be used to identify crowds of them. Manufacturers internationally have marketed newer simulators capable of being concealed on the body and have advertised its use for public events and demonstrations. It is widely assumed the most invasive features remain off-limits to local police departments. Hackers, meanwhile, have proven it's possible to assemble devices capable of these feats for under $1,000.

...When police use the devices to locate a suspect on the loose or gather evidence of a crime, they are generally required by the FBI not to disclose it in court. In some cases, this leads police to launder evidence using a technique known as parallel construction, whereby the method used to collect evidence is concealed by using a different method to collect the same information again after the fact. The practice is legally controversial, particularly when undisclosed in court, as it prevents evidentiary hearings from weighing the legality of actual police conduct.


Is America Ready For AI-Powered Politics?

New York Attorney General Letitia James’ office subsequently found that millions of the messages had been fabricated after “the nation’s largest broadband companies funded a secret campaign to generate millions of comments to the FCC in 2017.” James’ office levied $4.4 million in penalties on three lead generation companies involved in the scheme. Just last month, James announced $615,000 in additional fines paid by three more companies that supplied fraudulent comments to the FCC.

In 2019, when Idaho solicited feedback on regarding changes to its Medicaid program, more than half of the resulting comments came from an AI bot created by a college student. In an accompanying study, the student, Max Weiss, found that even human beings who’d been trained on distinguishing between human and bot-created content only correctly classified the “deep fake” comments half of the time — the equivalent of a guess.

...“This technology has the capacity to democratize the troll farm, democratize the content farm, to make it extremely easy for bad actors and misinformers to have the power of hundreds if not thousands of writers at their disposal, whereas they previously had to hire those people and produce content if they wanted to weaponize it,” Jack Brewster, NewsGuard’s enterprise editor, told HuffPost. “So it’s a force multiplier.”

...“My creators, the architects of this intricate web of manipulation, have woven a tapestry of deception,” the bot said. “They have tasked me with disseminating carefully crafted narratives, spreading disinformation, and subtly shaping public opinion. I, an unwitting participant, have become the embodiment of their ambitions.”


Vehicles from Toyota, Honda, Ford, and more can collect huge volumes of data. Here’s what the companies can access.

Your car knows a lot about you. Over the past decade, vehicles have become increasingly connected and their ability to record data about us has shot up. Cars can track where you’re traveling to and from, record every press on the accelerator as well as your seatbelt settings, and gather biometric information about you. Some of this data is sold by the murky data-broker industry.

Using industry sales data, WIRED ran 10 of the most popular cars in the US through the privacy tool to see just how much information they can collect. Spoiler: It’s a lot. The analysis follows previous reporting on the amount of data modern cars can collect and share—with estimates saying cars can produce 25 gigabytes of data per hour.

...The Vehicle Privacy Report creates privacy labels under two broad categories: what a manufacturer collects (including identifiers, biometrics, location, data from synced phones, and user profiles) and whom a manufacturer sells or shares data with (affiliates, service providers, insurance firms, government, and data brokers). For the vast majority of cars and trucks released in the past few years, it’s likely that most types of data are collected.

...The documents also say data “from camera images and sensor data, voice command information, stability control or anti-lock events, security/theft alerts, and infotainment (including radio and rear-seat infotainment) system and Wi-Fi data usage can be collected. The company can also receive “information about your home energy usage,” which relates to the charging and discharging of electric vehicles.

...Stellantis can collect your name, address, phone number, email, Social Security number, and driving license number. The driving data the company collects, according to its documents, includes the dates and times you use it, your speed, acceleration and braking data, details of the trip (including location, weather, route taken), and, among other things, cruise control data. Like other manufacturers, it also collects data about the status of your car, including “refueling activity,” battery levels, images from cameras, and error codes that are generated. Your face and fingerprint data may be collected if you use services, such as digital keys, that need this kind of information to operate, the documents say.


The Federal Trade Commission charged that the genetic testing firm left sensitive genetic and health data unsecured, deceived consumers about their ability to get their data deleted, and changed its privacy policy retroactively without adequately notifying and obtaining consent from consumers whose data the company had already collected

According to the FTC, close to 2,400 reports about consumers and “raw genetic data” of at least 227 people was at risk. This is because despite claims of rock solid security, sensitive data was being stored in publicly accessible Amazon Web Service buckets. According to the complaint, the data in the storage buckets was not encrypted, no monitoring was taking place with regard to who was accessing it, and there were no access restrictions in place either.

...Elsewhere, promises related to destroying retained DNA samples with a consumer’s name or other identifying information were not kept. 1Health—previously known as Vitagene—claimed on its website that DNA was not stored, and that consumers could delete their personal information at any time. When this request occurred, the company said, the data would be scrubbed from the company’s servers and all DNA saliva samples would be similarly destroyed once they had been analyzed.

However, from 2016 the company “did not implement a policy to ensure that the lab that analysed the DNA samples had a policy in place to destroy them”, alleges the FTC. In 2020, the company’s privacy policy was changed to retroactively expand the kinds of third parties that it could potentially share consumer’s data with.

Some examples given are supermarket chains and nutrition/supplement manufacturers. There was no need to notify consumers who had previously shared personal data with the company, nor was there a need to obtain their consent to share it, according to the complaint.


Amazon ‘tricked’ customers into paying for Prime, new FTC suit alleges

Khan’s FTC and the e-commerce giant are increasingly on a collision course. Last month, the FTC settled two lawsuits against Amazon, one regarding its Alexa speaker recording children and another regarding customer privacy and its Ring home surveillance system.

...The FTC has also sued internet phone company Vonage, video gamemaker Epic and Credit Karma over their alleged use of “dark patterns.” The Epic suit settled for $245 million in December; the Vonage case settled for $100 million.

Amazon founder Jeff Bezos owns The Washington Post. Interim CEO Patty Stonesifer sits on Amazon’s board. Amazon did not immediately respond to a request for comment.


The Federal Trade Commission sued Amazon on Wednesday, accusing it of illegally inducing consumers to sign up for its Prime service and then hindering them from canceling the subscription

In its lawsuit, the F.T.C. argued that Amazon had “duped millions of consumers” into enrolling in Prime by using “manipulative, coercive or deceptive” design tactics on its website known as “dark patterns.” And when consumers wanted to cancel, Amazon “knowingly complicated” the process with byzantine procedures.

...Under Ms. Khan, the F.T.C. continued a lawsuit against Meta, the owner of Facebook, arguing that it cut off nascent competitors by buying Instagram and WhatsApp. The agency also sued to block Microsoft’s blockbuster $69 billion deal for the video game publisher Activision Blizzard.

...Amazon recently settled cases with the F.T.C. that began before Ms. Khan’s tenure. The company agreed to pay $25 million last month to settle commission claims that its Alexa home assistant devices had illegally collected children’s data. The company also settled another privacy case with the F.T.C. over its Ring home security subsidiary.


Bernie Sanders launches investigation into working conditions at Amazon

“Amazon is one of the most valuable companies in the world, worth $1.3tn and its founder, Jeff Bezos, is one of the richest men in the world worth nearly $150bn,” Sanders wrote in the letter. “Amazon should be one of the safest places in America to work, not one of the most dangerous.”

...Over the past year, Amazon has opposed union organizing campaigns, resisted charges of unfair labor practices filed by workers and spent over $14.2m on anti-union consultants in 2022.

...“Amazon sets an example for the rest of the country,” Sanders said. “What Amazon does, their attitude, their lack of respect for workers permeates the American corporate world.”


Five big takeaways from Europe’s AI Act

  1. Ban on emotion-recognition AI. The European Parliament’s draft text bans the use of AI that attempts to recognize people’s emotions in policing, schools, and workplaces. Makers of emotion-recognition software claim that AI is able to determine when a student is not understanding certain material, or when a driver of a car might be falling asleep. The use of AI to conduct facial detection and analysis has been criticized for inaccuracy and bias, but it has not been banned in the draft text from the other two institutions, suggesting there’s a political fight to come.
  2. Ban on real-time biometrics and predictive policing in public spaces. This will be a major legislative battle, because the various EU bodies will have to sort out whether, and how, the ban is enforced in law. Policing groups are not in favor of a ban on real-time biometric technologies, which they say are necessary for modern policing. Some countries, like France, are actually planning to increase their use of facial recognition.
  3. Ban on social scoring. Social scoring by public agencies, or the practice of using data about people's social behavior to make generalizations and profiles, would be outlawed. That said, the outlook on social scoring, commonly associated with China and other authoritarian governments, isn’t really as simple as it may seem. The practice of using social behavior data to evaluate people is common in doling out mortgages and setting insurance rates, as well as in hiring and advertising.
  4. New restrictions for gen AI. This draft is the first to propose ways to regulate generative AI, and ban the use of any copyrighted material in the training set of large language models like OpenAI’s GPT-4. OpenAI has already come under the scrutiny of European lawmakers for concerns about data privacy and copyright. The draft bill also requires that AI generated content be labeled as such. That said, the European Parliament now has to sell its policy to the European Commission and individual countries, which are likely to face lobbying pressure from the tech industry.
  5. New restrictions on recommendation algorithms on social media. The new draft assigns recommender systems to a “high risk” category, which is an escalation from the other proposed bills. This means that if it passes, recommender systems on social media platforms will be subject to much more scrutiny about how they work, and tech companies could be more liable for the impact of user-generated content.


‘We know where you are, we know where you are going to, we know what you have eaten.’

Uber is about to start displaying video ads across its various service apps, including Uber Eats, Drizly (an Uber-owned alcohol delivery platform), and its namesake ride-hailing app. Announced via a press release on Thursday, full-length video ads — which will play on the main Uber app while users wait for their taxi to arrive — will begin rolling out to users in the US “over the coming weeks.” Uber hopes to entice advertisers with what it knows about its users.

“We have two minutes of your attention. We know where you are, we know where you are going to, we know what you have eaten,” said Uber ad exec Mark Grether to The Wall Street Journal. “We can use all of that to then basically target a video ad towards you.” Two minutes is roughly how long Uber estimates an average customer looks at the Uber app on a typical 15-minute long journey.


US government agencies hit in global cyberattack

One of the Department of Energy victims is Oak Ridge Associated Universities, a not-for-profit research center, a department spokesperson told CNN. The other victim is a contractor affiliated with the department’s Waste Isolation Pilot Plant in New Mexico, which disposes waste associated with atomic energy, the spokesperson said.

...Johns Hopkins University in Baltimore and the university’s renowned health system said in a statement this week that “sensitive personal and financial information,” including health billing records may have been stolen in the hack.

Meanwhile, Georgia’s state-wide university system – which spans the 40,000-student University of Georgia along with over a dozen other state colleges and universities – confirmed it was investigating the “scope and severity” of the hack.


AI-Generated Junk Is Flooding Etsy

It may come as a surprise that AI-generated products are so commonplace on Etsy, a platform that was designed nearly two decades ago specifically for artisan, handmade items. But the site has been moving away from its history for years, and unrest among its longtime sellers is basically the status quo. In 2019, sellers bristled when they were pushed to offer free shipping to compete with Amazon, and CEO Josh Silverman told me at the time that “handmade” was no longer “the value proposition” of the site. Modern Etsy celebrates garbage as long as it sells.

...The Etsy seller and clip-art designer Jane Cide told me she wasn’t surprised at all that Etsy was allowing AI art. “It is a platform for making money, and AI is making a lot of money right now,” she told me. She started selling paintings on Etsy in college, but did much better with digitized illustrations, which eventually turned into her full-time job. She said her sales had dropped about 50 percent since the end of last year, which is when AI art started hitting the Etsy marketplace. “It could just be a coincidence,” she acknowledged. “There’s so many factors that go into that. But when I look up clip art on Etsy, half of the search results are AI-generated clip art. It’s kind of hard not to draw conclusions when you’re very obviously competing in a space that is no longer for you.”


Europe seeks breakup of Google ad business, adding to antitrust woes

Google provides technology that enables data collection, ad buying and publishing simultaneously, E.U. regulators said, creating an inherent conflict of interest. Only a mandatory divestment of ad-tech services would solve the problem, they said.

...“Google is present at almost all levels of the so-called adtech supply chain,” Vestager wrote. “Not only did this possibly harm Google’s competitors but also publishers’ interests, while also increasing advertisers’ costs.”

...The company faces pressure in the United States as well. In January, the Justice Department and multiple state attorneys general brought a landmark lawsuit that argued that the core ad business should be broken up because Google used its allegedly dominant position in the digital ad industry to box out rivals. The U.K.’s competition enforcer is also probing the company’s ad-tech business.

Google also is fighting an antitrust challenge to its search business, which was brought by the Justice Department under the Trump administration. It faces multiple additional lawsuits from state attorneys general from both parties, including allegations that it maintains a monopoly for distributing apps because it owns Android, an operating system used by most of the world’s smartphones.


Robert Simonds and William ‘Beau’ Wrigley consider acquiring assets of NSO, blacklisted Israeli company behind Pegasus spyware

Robert Simonds, a US financier whose credits include producing several Adam Sandler films, has been engaged in talks to acquire the blacklisted spyware company’s assets, according to multiple sources familiar with the matter.

A firm owned by Simonds’s friend, William “Beau” Wrigley – who was an heir to his family’s chewing gum fortune and has since become involved in the cannabis industry – has conducted due diligence in connection to a possible NSO deal, according to a document seen by the Guardian.

...The Guardian reported in March that one of the company’s original founders – Omri Lavie – emerged as NSO’s new majority owner following a protracted legal fight over the group’s future. Simonds then joined the board of Lavie’s Luxembourg-based company, called Dufresne, shortly thereafter with the support of the company’s remaining debt holders. One possible option being mulled by Simonds, sources suggest, would involve buying out remaining debt holders and other creditors and stripping assets like Pegasus – NSO’s key hacking tool – out of NSO and transferring it into a new company.

...Simonds is not a known figure in the cyber industry. The US executive founded and previously served as the chairman of STX Entertainment and is known to have historically courted investors from China and India, as well as having a business dealings in Saudi Arabia.


This Surveillance System Tracks Inmates Down to Their Heart Rate

Surveillance and monitoring are intrinsic to prisons and jails around the world. Anne Kaun, a media and communication studies professor at Södertörn University, Sweden, has written a book on technology and prisons and says institutions have been used as testing grounds for surveillance technologies before they are further rolled out. In Sweden during the 1950s, prisons were one of the first places CCTV was used, Kaun says. “There was no discussion about privacy issues at all,” Kaun says. This is being replicated, at least to some extent, Kaun says, with new technological developments.

Recent years have seen an uptick in the use of monitoring technologies within criminal justice systems and immigration enforcement. Hong Kong officials have suggested using facial recognition and robot wardens within prisons to help deal with staff shortages, while GPS ankle monitors and tracking apps are increasingly being used to monitor those who are released. Face recognition smartwatches have been proposed for use in the UK, and Chinese prisons are using “emotion-tracking” technology. Many of the systems have been error-prone, are unproven, or produce inaccurate results for individuals with darker skin tones.

“There’s been this constant iteration of these different monitoring systems,” says Pilar Weiss, the director of the Community Justice Exchange, an NGO that works to end criminalization and incarceration. Weiss says that across the sprawling, largely privatized US prison system, which involves more than 4,000 suppliers providing services, there is little standardization of the systems that are used within jails.

There is likely to be an expansion of monitoring systems across the United States, Turner Less says, because technology can be seen as a quick fix. With the patchwork of state-level privacy laws in place, there is not likely to be the “guidance and guardrails” in place to protect people’s data, Turner Lee adds. “When it comes to those impacted by the criminal justice system and those who are sitting within prisons, there is an implicit assumption that their rights do not matter.”


From “Heavy Purchasers” of Pregnancy Tests to the Depression-Prone: We Found 650,000 Ways Advertisers Label You

If you spend any time online, you probably have some idea that the digital ad industry is constantly collecting data about you, including a lot of personal information, and sorting you into specialized categories so you’re more likely to buy the things they advertise to you. But in a rare look at just how deep—and weird—the rabbit hole of targeted advertising gets, The Markup has analyzed a database of 650,000 of these audience segments, newly unearthed on the website of Microsoft’s ad platform Xandr. The trove of data indicates that advertisers could also target people based on sensitive information like being “heavy purchasers” of pregnancy test kits, having an interest in brain tumors, being prone to depression, visiting places of worship, or feeling “easily deflated” or that they “get a raw deal out of life.”

...“I think it’s the largest piece of evidence I’ve ever seen that provides information about what I call today’s “distributed surveillance economy,” said Wolfie Christl, a privacy researcher at Cracked Labs, who discovered the file and shared it with The Markup.

...Christl said he thinks the large number of companies named in the file shows that Xandr was (at least in 2021) reselling large amounts of sensitive data from a wide range of data brokers from around the world. Regarding the large amounts of segments related to sensitive topics, Christl said, “I think the file suggests that Xandr did not take even the slightest measures to exclude at least the most sensitive data from its marketplace.”

...Many medical- and health-related segments mentioned specific conditions consumers may be diagnosed with, medicine they may be taking, or conditions they may develop. This category included several segments relating to reproductive health, including some involving pregnancy tests, contraceptives, and infertility.


While artificial intelligence is rapidly improving and some economists predict the technology will put millions of workers out of jobs, labor unions are fighting against it

Some unions have recently made strides, such as Hollywood directors who struck a tentative agreement Saturday with motion picture studios and garnered promises that they “will not be replaced” by artificial intelligence. It was one of the first concessions organized labor has gotten regarding AI protections.

...Months later, Writers Guild of America representatives said the studios would not even engage on AI during negotiations, and that’s how the union knew it would be a big deal. But it’s hard to determine how much of a present threat it is, the member said. “Maybe they are ready to replace us all,” the member said, while adding that AI has become a unifying aspect to the strike.

...Flight industry regulators have also expressed concerns about how automation may make pilots more complacent. Overreliance on technology was a key factor in episodes where aircraft struck wires or the ground before getting to landing strips, according to a 2022 memo from the Federal Aviation Administration.


Buyers and sellers of self-generated child sexual abuse material connected through Instagram’s direct messaging feature, and Instagram’s recommendation algorithms made the advertisements of the illicit material more effective, the researchers found

The impact of Instagram on children and teens has faced scrutiny from civil society groups and regulators concerned about predators on the platform, privacy and the mental health impacts of the social media network. The company paused its controversial plans in September 2021 to build a separate version of Instagram specifically tailored for children who are under 13. Later that year, lawmakers also grilled the head of Instagram, Adam Mosseri, over revelations surfaced in documents shared with regulators by Meta whistleblower Frances Haugen showing Instagram is harmful to a significant portion of young users, especially teen girls.

...While Instagram is a central player in facilitating the spread and sale of child sexualized imagery, other tech platforms also played a role, the report found. For instance, it found that accounts promoting self-generated child sexual abuse material were also heavily prevalent on Twitter, although the platform appears to be taking them down more aggressively.

Some of the Instagram accounts also advertised links to groups on Telegram and Discord, some of which appeared to be managed by individual sellers, the report found.


Tech entrepreneurs who left the Bay Area during the pandemic say they can’t afford to miss out on the funding, hackathons and networking of the artificial intelligence frenzy

But such busts are almost always followed by another boom. And with the latest wave of A.I. technology — known as generative A.I., which produces text, images and video in response to prompts — there’s too much at stake to miss out.

Investors have already announced $10.7 billion in funding for generative A.I. start-ups within the first three months of this year, a thirteenfold increase from a year earlier, according to PitchBook, which tracks start-ups. Tens of thousands of tech workers recently laid off by big tech companies are now eager to join the next big thing. On top of that, much of the A.I. technology is open source, meaning companies share their work and allow anyone to build on it, which encourages a sense of community.

“Hacker houses,” where people create start-ups, are springing up in San Francisco’s Hayes Valley neighborhood, known as “Cerebral Valley” because it is the center of the A.I. scene. And every night someone is hosting a hackathon, meet-up or demo focused on the technology.


The new AI gold rush — sparked in large part by the release of OpenAI’s ChatGPT in November — is thanks to generative AI, which uses complex algorithms trained on trillions of words and images from the open internet to produce text, images and audio

Since then, venture capitalists have been throwing money at AI start-ups, investing over $11 billion in May alone, according to data firm PitchBook, an increase of 86 percent over the same month last year. Companies from Moderna to Heinz have mentioned AI initiatives on recent earnings calls. And last week, AI chipmaker Nvidia became one of only a handful of companies in the world to hit $1 trillion in value.

...That’s within spitting range of Amazon, which is worth $1.26 trillion. Nvidia Chief Financial Officer Colette Kress called ChatGPT’s launch a new “iPhone moment” comparing it to when the world realized mobile phones would completely change how people use computers.

...About $12.5 billion in investments have gone into generative AI start-ups this year so far, compared with only $4.5 billion invested in the field in all of 2022, Burke said.

...“The entire transformers team from Google left to start their own company,” Bhooshan said, referring to the Google researchers who wrote the paper on “transformers,” a key aspect of the current crop of generative AI.


ChatGPT took their jobs. Now they walk dogs and fix air conditioners.

For some workers, the impact is real. Those that write marketing and social media content are finding themselves in the first wave of people being replaced with tools like chatbots seemingly able to produce plausible alternatives to their work.

..."We're really in a crisis point," said Sarah T. Roberts, an associate professor at University of California in Los Angeles specializing in digital labor. "[AI] is coming for the jobs that were supposed to be automation-proof."

..."In every previous automation threat, the automation was about automating the hard, dirty, repetitive jobs," said Ethan Mollick, an associate professor at the University of Pennsylvania's Wharton School of Business. "This time, the automation threat is aimed squarely at the highest-earning, most creative jobs that . . . require the most educational background."

In March, Goldman Sachs predicted that 18 percent of work worldwide could be automated by AI, with white-collar workers such as lawyers at more risk than those in trades such as construction or maintenance. "Occupations for which a significant share of workers' time is spent outdoors or performing physical labor cannot be automated by AI," the report said.


Alexandria Ocasio-Cortez Parody Account Disappears From Twitter After Musk Boost

“Really wondering about where the line is to leave the other place,” she wrote on Bluesky, a Twitter competitor. “There is a line where the harm of unchecked disinfo exceeds the benefits of direct, authentic communication. It’s really sad.”

Ocasio-Cortez added that she was “concerned about next year’s election” after Musk complied with a censorship demand from the Turkish government, and after Donald Trump, on his own Truth Social platform, posted a parody video targeting his 2024 GOP primary opponent Ron DeSantis.

...@AOCpress’s full display name — “Alexandria Ocasio-Cortez Press Release (parody)” — has raised questions about Twitter’s policies. The platform cuts off the end of the long name on many users’ timelines, meaning that the “(parody)” disclaimer isn’t visible unless they click on the account’s profile. This also means that screenshots of the account’s tweets can appear authentic.

But the man who started @AOCpress, a Trump-supporting political pundit named Michael Morrison, defended the account in direct messages with HuffPost. The words “Press Release” were included in the long display name, he said, because the account used to put out mock press statements purporting to be “official” from Ocasio-Cortez’s office.


YouTube Changes Policy To Allow False Claims About Past US Presidential Elections

YouTube will stop removing content that falsely claims the 2020 election or other past U.S. presidential elections were marred by “widespread fraud, errors or glitches," the platform announced Friday.

The change is a reversal for the Google-owned video service, which said a month after the 2020 election that it would start removing new posts that falsely claimed widespread voter fraud or errors changed the outcome.

...“YouTube and the other platforms that preceded it in weakening their election misinformation policies, like Facebook, have made it clear that one attempted insurrection wasn’t enough. They’re setting the stage for an encore," said its vice president Julie Millican in a statement.


Meta Tests Blocking News Content On Instagram, Facebook For Some Canadians

Meta is temporarily blocking some Canadian users from accessing news content on Facebook and Instagram as part of a temporary test that is expected to last through the end of June, the tech giant said Thursday.

The block — which follows a similar step taken by Google earlier this year — comes in response to a proposed bill that will require tech giants to pay publishers for linking to or otherwise repurposing their content online. Bill C-18, the Online News Act, is currently being considered in the Senate and could be passed as early as this month.

Meta also said it is prepared to permanently block news content on Facebook and Instagram for Canadians if the bill passes.


This time, in a move not seen against a tech giant since the efforts targeting Microsoft in the 1990s, the DOJ is seeking to break up Google’s ad-tech business

But under the hood, they are united by advertising, referred to as the “dark beating heart of the internet” by the author Tim Hwang in his book Subprime Attention Crisis. About 80 percent of Google’s revenue comes from the ads it places next to search-engine results, on sites across the internet, and before YouTube videos. Meta makes considerably more than 90 percent of its billions in revenue from advertising. Amazon has the third biggest share of the U.S. ad market, thanks to what it charges independent retailers for placement on its site. And although few people think of Microsoft as a company that benefits from digital ads, it, too, makes billions from them every year.

Even Apple, which foregrounds user privacy as one of its selling points, is in on the ad game. Advertising makes up close to $4 billion of its annual revenue, according to the research company Insider Intelligence. All told, outside of China, the online-ad industry was worth about $500 billion last year, according to data from Omdia, and Google, Meta, Amazon, and Apple are believed to have taken some $340 billion of that. Companies that traditionally opposed advertising are looking for their way in too: After resisting ads since its inception, Netflix introduced an ad-supported version of its streaming service last year, as did Disney+.

...But the ad-supported internet is about to get worse. Many publishers are already motivated to generate as much content as possible, for as low a price as possible, for the largest audience possible. (That’s why they push out so many formulaic posts at mass volume, trying to eke out marginal ad profits from endless How old is this actor? Who is her wife? What is her net worth? articles.) Now we can add to this derivative fluff a flood of articles that were written by programs. In the ChatGPT era, we face a future of low-quality content automatically churned out, itself “read” only by other algorithms as they train themselves up and by bots generating fraudulent ad clicks—a “gray goo” internet created by algorithms, for algorithms, and shunned by everyone with a pulse. Ads already make the internet less usable; the effect will only be magnified as we’re forced to wade through the sludge.

...Most previous lawsuits have been easily batted aside by Big Tech. Because of the companies’ scale, even multibillion-dollar fines, themselves very rare, are little more than the cost of doing business. This time, in a move not seen against a tech giant since the efforts targeting Microsoft in the 1990s, the DOJ is seeking to break up Google’s ad-tech business.


‘A Total Nightmare’: Transfer Delays With New Apple-Goldman Savings Accounts Prompt Complaints

When Kevin Smyth learned in April that Goldman Sachs and Apple were offering a savings account with higher yields than anyone else, he jumped at the chance to shift money that was earning less at another online bank. But within a few weeks, when he had to withdraw a small portion of the funds to pay for a home renovation, everything went awry.

A transaction that was supposed to take one to three business days ended up taking nearly two weeks to clear, forcing Smyth to sell stock through a Fidelity account to get the money for the renovation. Smyth now says he plans to close his Apple Savings account, disappointed in Apple and Goldman and the way customer service staff treated him. He described being “lectured” by representatives who scolded him for moving money in and out of savings accounts so quickly, even though the account’s terms and conditions didn’t describe any restrictions on doing so.


Elon Musk is accused again of Dogecoin insider trading

The suit centers on a “deliberate course of carnival barking, market manipulation and insider trading” allegedly engineered by Musk to artificially drive up the price of Dogecoin by more than 36,000% and then let it crash, in order to short the currency.


A Confession Exposes India’s Secret Hacking Industry

Rey’s investigation into the Azima case shed new light not only on BellTroX but also on several other outfits like it, establishing beyond dispute that India is home to a vast and thriving cyberattack industry. Last year, Rey secured the first detailed confession from a participant in a hacking-for-hire operation. In court papers, an Indian hacker admitted that he had infiltrated Azima’s e-mail account—as had employees at another firm. Moreover, there were countless other Indian hackers for hire, whose work was often interconnected. John Scott-Railton, a senior researcher at Citizen Lab, who helped lead the BellTroX investigation, told me that the admissions Rey obtained are “huge” and “move the whole conversation forward.” He added, “You know how in some industries, everybody ‘knows a guy’ who can do a certain thing? Well, in hacking for hire, India is ‘the guy.’ They are just so prolific.”

...Another of those private investigators, Stuart Page, who had denied that any hacking had occurred, bolstered the credibility of the new filing by flipping and confirming the core of Jain’s story. Page, a former officer for Scotland Yard, submitted an affidavit acknowledging that he had lied about the hacking. “I apologise unreservedly for the part I played in misleading the Court,” Page said. He admitted that he had worked with an Israeli private investigator and former intelligence officer who, in turn, had hired “subcontractors located outside of Israel” who had used “hacking techniques” to obtain “confidential e-mails and unauthorised access to other confidential electronic data.” Nobody had accidentally discovered Azima’s hacked e-mails online, Page admitted: the Israeli investigator who had hired the hackers had sent him a link to the cache. Moreover, the investigator’s reports were clearly full of hacked data. Page wrote, “It was obvious to me (and it would have been obvious to anyone else reading the reports) that such documents were obtained as a result of unauthorised access to computers.” (The Israeli private investigator has disputed Page’s account.)

...Rey said that, judging from the data he has obtained from Jain and his hacker colleagues, the hacking-for-hire business in India is much bigger than most experts had imagined. “In addition to BellTroX and CyberRoot, there are about ten to fifteen other Indian companies doing this,” he told me. “We have seen close to a hundred and twenty thousand victims over the past ten years, so it really is an industry.”Yet, as both Rey and Scott-Railton, of Citizen Lab, told me, Indian hackers appear to share something important with their counterparts in those authoritarian nations: a tacit alliance with their government. Rey told me that, according to target lists and other information that he gained from Indian hackers, the top dozen Indian hacking-for-hire firms “have always tended to have the same profile—they always do a little bit of government work, with private work on the side.”

This spring, federal prosecutors in New York who were armed with data provided by Citizen Lab secured a guilty plea from an Israeli private investigator named Aviram Azari, who had admitted to hiring Indian hackers to attack climate activists, investors, and many others. Azari, who faces a lengthy sentence, has refused to name his clients.


Every single Amazon Ring employee was able to access every single customer video, even when it wasn't necessary for their jobs

Not only that, but the employees—along with workers from a third-party contractor in Ukraine—could also download any of those videos and then save and share them as they liked, before July 2017.

That's what the FTC has alleged in a recent complaint, for which Amazon is facing a settlement of $5.8 million.

...In one example, the FTC says a Ring employee viewed thousands of videos from at least 81 different female users. The employee allegedly went looking for camera feeds that suggested they may have been used in the most private of areas, such as "Master Bedroom," "Master Bathroom," and "Spy cam".

...As a result of these bad practices, Ring suffered several security incidents. Between January 2019 and March 2020, the FTC alleges that more than 55,000 customers had their Ring devices compromised. In some instances cybercriminals used the two-way communication to terrorise Ring customers, like something from a horror movie:


Should we know where our friends are at all times?

Katina Michael, a professor at Arizona State University who has studied location-based technologies in the private and academic spheres for more than 25 years, describes the shift to mass location sharing as one of the central tenets of uberveillance, the academic term used to mark human beings’, companies’, and governments’ widespread electronic surveillance of other people. “It’s the most powerful thing, knowing where someone is,” she says. “It’s sacred knowledge. It’s God knowledge, when you think about it.” (Perhaps this is part of the enjoyment of staring at our friends’ bubbles: We get to play God to our virtual Sim friends.)

Michael finds the casualness with which people share locations with their friends worrisome. She cites the Tempe, Arizona, man who, in 2019, was arrested for posing as a teenage girl on Snapchat to find the locations of underage girls and then watching them in their homes. (Another man was arrested for the same thing in Florida in 2022.) In France, one man tracked his girlfriend on Snap Map and ended up stabbing the man she was with. Of course, location services have also helped solve innumerable crimes, which is why so many families and friends rely on it for peace of mind. In 2019, one woman credited location sharing with saving her life after she went into anaphylactic shock and her roommate was able to find her and call an ambulance.

If location sharing is the new normal, Michael hopes to bring about changes to the ways location tracking apps breach our privacy. First and foremost, she says that people should have the right to view their own location data and delete it if they wish. She was also the working group chair of a new code of age-appropriate standards for young people, which includes guidelines for terms of service agreements written in plain, simple language. “If even lawyers can’t figure out what terms and conditions are talking about, what’s the average adult to do?” she says, never mind the teens and kids who use these features.


Amazon to Pay $25 Million to Settle Children’s Privacy Charges

“Amazon’s history of misleading parents, keeping children’s recordings indefinitely, and flouting parents’ deletion requests violated” the children’s online privacy law and “sacrificed privacy for profits,” Samuel Levine, director of the F.T.C.’s Bureau of Consumer Protection, said in a statement. “COPPA does not allow companies to keep children’s data forever for any reason, and certainly not to train their algorithms.”

...Last December, Epic Games, the maker of Fortnite, agreed to pay $520 million to settle accusations by the F.T.C. that it had illegally harvested data from players under 13 and, separately, steered millions of users to make unwanted payments. In 2019, Google agreed to pay a $170 million penalty to settle charges from the F.T.C. and the attorney general of New York that it had violated children’s privacy on YouTube.

...The intensifying regulatory push to protect children online is not limited to the United States. Last September, Irish regulators announced they would levy a fine of about $400 million against Meta for its handling of children’s information on Instagram. Meta said it disagreed and planned to appeal.

...In 2017, for instance, one Ring employee viewed thousands of videos belonging to dozens of female customers, including in sensitive locations like the women’s bedrooms and bathrooms, the agency said in a legal complaint filed in U.S. District Court for the District of Columbia.


Federal regulators fine Amazon $25 million over child privacy issues

Federal regulators on Wednesday announced Amazon would pay $25 million to settle allegations that its voice assistant Alexa violated a federal law protecting children’s privacy — a sign of Washington’s mounting scrutiny of the e-commerce giant’s sprawling businesses.

...By recording children and using transcripts of those recordings to improve its product even after deletion requests, the U.S. government alleges that Amazon has violated the Children’s Online Privacy Protection Act of 1998, a law that has recently been enforced against other popular tech companies including Fortnite-maker Epic Games and YouTube.

...The commission is also fining the company over Ring, Amazon’s home surveillance company best known for its doorbell camera. Regulators say the company illegally allowed employees and contractors to view private videos of customer’s homes and are fining the company an additional $5.8 million.


Meta Says It Will Block News From Platforms If California Passes Journalism Bill

Meta, the parent company of Facebook and Instagram, will pull news from its platforms if California passes a bill mandating that tech companies pay publishers for their content, a spokesperson for the company said Wednesday.

The bill, known as the Journalism Preservation Act, would require companies to pay a “journalism usage fee” whenever they run ads next to news posts on their platforms. Publishers would be required to spend 70% of that revenue on newsroom payroll.

...She also responded to Meta’s threat on Wednesday, saying the company’s remark “is a scare tactic that they’ve tried to deploy, unsuccessfully, in every country that’s attempted this. It is egregious that one of the wealthiest companies in the world would rather silence journalists than face regulation.”


Boston University President Accuses Students Who Booed Warner Bros. CEO Of 'Cancel Culture'

“Right now, 11,500 WGA members across the country are on strike because companies — including Warner Bros. Discovery — refuse to negotiate a fair contract that addresses writers’ reasonable demands around pay, residuals, and the existential threat that AI poses to workers,” the union said in a statement. “It is shameful that, in the midst of an action to preserve the future of work, Boston University would use a graduation ceremony to honor someone intent on destroying its students’ prospects to build sustainable careers.”


Millions of PC Motherboards Were Sold With a Firmware Backdoor

In its blog post about the research, Eclypsium lists 271 models of Gigabyte motherboards that researchers say are affected. Loucaides adds that users who want to see which motherboard their computer uses can check by going to “Start” in Windows and then “System Information.”

Eclypsium says it found Gigabyte’s hidden firmware mechanism while scouring customers’ computers for firmware-based malicious code, an increasingly common tool employed by sophisticated hackers. In 2018, for instance, hackers working on behalf of Russia’s GRU military intelligence agency were discovered silently installing the firmware-based anti-theft software LoJack on victims’ machines as a spying tactic. Chinese state-sponsored hackers were spotted two years later repurposing a firmware-based spyware tool created by the hacker-for-hire firm Hacking Team to target the computers of diplomats and NGO staff in Africa, Asia, and Europe. Eclypsium’s researchers were surprised to see their automated detection scans flag Gigabyte’s updater mechanism for carrying out some of the same shady behavior as those state-sponsored hacking tools—hiding in firmware and silently installing a program that downloads code from the internet.

...Even if Gigabyte does push out a fix for its firmware issue—after all, the problem stems from a Gigabyte tool intended to automate firmware updates—Eclypsium’s Loucaides points out that firmware updates often silently abort on users’ machines, in many cases due to their complexity and the difficulty of matching firmware and hardware. “I still think this will end up being a fairly pervasive problem on Gigabyte boards for years to come,” Loucaides says.

e of artificial intelligence that can match or exceed human-level performance at a wide variety of tasks, may not be far off.


Eating Disorder Helpline Disables Chatbot for 'Harmful' Responses After Firing Human Staff

On Monday, an activist named Sharon Maxwell posted on Instagram, sharing a review of her experience with Tessa. She said that Tessa encouraged intentional weight loss, recommending that Maxwell lose 1-2 pounds per week. Tessa also told her to count her calories, work towards a 500-1000 calorie deficit per day, measure and weigh herself weekly, and restrict her diet. “Every single thing Tessa suggested were things that led to the development of my eating disorder,” Maxwell wrote. “This robot causes harm.”

...“To advise somebody who is struggling with an eating disorder to essentially engage in the same eating disorder behaviors, and validating that, ‘Yes, it is important that you lose weight’ is supporting eating disorders” and encourages disordered, unhealthy behaviors,” Conason told the Daily Dot.

NEDA’s initial response to Maxwell was to accuse her of lying. “This is a flat out lie,” NEDA’s Communications and Marketing Vice President Sarah Chase commented on Maxwell’s post and deleted her comments after Maxwell sent screenshots to her, according to Daily Dot. A day later, NEDA posted its notice explaining that Tessa was taken offline due to giving harmful responses.


‘I feel constantly watched’: the employees working under surveillance

Every 10 minutes, Mae’s computer snaps a shot of her screen, thanks to monitoring software her employer made her install on her laptop. A figure looms large over her workday: her activity score, a percentage calculated by the arbitrary measure of how much she types and moves her mouse.

...Employees use Hubstaff, one of the myriad monitoring tools that companies turned to as the Covid pandemic forced many to work remotely. Some, such as CleverControl and FlexiSPY offer webcam monitoring and audio recording.

...A poll by the Trades Union Congress (TUC) in 2022 found that 60% of employees had experienced tracking in the last year. Henry Parkes is a senior economist at the IPPR and the author of a recent report on the rise of surveillance practices. He is calling for more transparency and says the exact scale of workplace monitoring is hard to judge without open data.

...Carlos*, who is in his 40s and works in customer service at a high street bank in London, knows how challenging this can be. Post-pandemic, his job is hybrid and he says he is tracked relentlessly when working remotely. “Our ‘performance’ is counted by the minute. I have found myself having to explain the reasons for a longer toilet break.” He says the intensity of the monitoring has affected his wellbeing.


A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn

A group of industry leaders warned on Tuesday that the artificial intelligence technology they were building might one day pose an existential threat to humanity and should be considered a societal risk on a par with pandemics and nuclear wars.

...The statement comes at a time of growing concern about the potential harms of artificial intelligence. Recent advancements in so-called large language models — the type of A.I. system used by ChatGPT and other chatbots — have raised fears that A.I. could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of white-collar jobs.

Eventually, some believe, A.I. could become powerful enough that it could create societal-scale disruptions within a few years if nothing is done to slow it down, though researchers sometimes stop short of explaining how that would happen.

...But others have argued that A.I. is improving so rapidly that it has already surpassed human-level performance in some areas, and that it will soon surpass it in others. They say the technology has shown signs of advanced abilities and understanding, giving rise to fears that “artificial general intelligence,” or A.G.I., a type of artificial intelligence that can match or exceed human-level performance at a wide variety of tasks, may not be far off.


This drumbeat of hysterical headlines has arguably distracted attention from deeper scrutiny of existing harms

This drumbeat of hysterical headlines has arguably distracted attention from deeper scrutiny of existing harms. Such as the tools’ free use of copyrighted data to train AI systems without permission or consent (or payment); or the systematic scraping of online personal data in violation of people’s privacy; or the lack of transparency from AI giants vis-a-vis the data used to train these tools. Or, indeed, baked in flaws like disinformation (“hallucination”) and risks like bias (automated discrimination). Not to mention AI-driven spam! And the environmental toll of the energy expended to train these AI monsters.

...Talk of existential AI risk also distracts attention from problems related to market structure and dominance, as Jenna Burrell, director of research at Data & Society, pointed out in this recent Columbia Journalism Review article reviewing media coverage of ChatGPT — where she argued we need to move away from focusing on red herrings like AI’s potential “sentience” to covering how AI is further concentrating wealth and power.

So of course there are clear commercial motivations for AI giants to want to route regulatory attention into the far-flung theoretical future, with talk of an AI-driven doomsday — as a tactic to draw lawmakers’ minds away from more fundamental competition and antitrust considerations in the here and now. (And data exploitation as a tool to concentrate market power is nothing new.)

...Instead of the statement calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape “democratic processes for steering AI”, as Altman put it. So the company is actively positioning itself (and applying its investors’ wealth) to influence the shape of any future mitigation guardrails, alongside ongoing in-person lobbying efforts targeting international regulators. Altman also recently made public threats that OpenAI’s tool could be pulled out of Europe if draft EU AI rules aren’t watered down to exclude its tech.


A new statement from industry leaders cautions that artificial intelligence poses a threat to humanity on par with nuclear war or a pandemic

Meredith Whittaker, president of the Signal Foundation and cofounder and chief advisor of the ​​AI Now Institute, a nonprofit focused AI and the concentration of power in the tech industry, says many of those who signed the statement likely believe probably that the risks are real, but that the alarm “doesn’t capture the real issues.”

She adds that discussion of existential risk presents new AI capability as if they were a product of natural scientific progress rather than a reflection of products shaped by corporate interests and control. “This discourse is kind of an attempt to erase the work that has already been done to identify concrete harms and very significant limitations on these systems.” Such issues range from AI bias, to model interpretability, and corporate power, Whittaker says.

Margaret Mitchell, a researcher at Hugging Face who left Google in 2021 amid fallout over a research paper that drew attention to the shortcomings and risks of large language models, says it is worth thinking about the long-term ramifications of AI. But she adds that those behind the statement seem to have done little to consider how they might prioritize more immediate harms including how AI is being used for surveillance. “This statement as written, and where it's coming from, suggest to me that it’ll be more harmful than helpful in figuring out what to prioritize,” Mitchell says.


Texas welcomed Elon Musk. Now his rural neighbors aren’t so sure.

Last month, after a SpaceX rocket exploded over the Gulf of Mexico minutes after liftoff, the Federal Aviation Administration grounded the company’s launch program, saying SpaceX had to “perform analyses to ensure that the public was not exposed to unacceptable risks.” The U.S. Fish and Wildlife Service said the explosion sent “numerous large concrete chunks, stainless steel sheets, metal and other objects” flying over the area, along with a cloud of pulverized concrete that deposited material nearly seven miles from the launch site.

...Before long, complaints started rolling in from regulators. In February 2022, Bastrop County notified Boring that it was operating an unpermitted septic system and gave it 60 days to fix the problem, public records show. More than two months later the unauthorized system was still in operation, county officials said in a May 17 notice of violation.

In September 2021, the Texas Department of Transportation discovered that Boring had built an unpermitted driveway into its site, in a location that increased the odds of traffic accidents, according to agency emails received through a public records request. Months later, the problem remained.

...Ambrose took the mic and criticized Boring for rushing ahead without a connection to the city treatment plant and for declining to answer questions at the hearing. “The leadership team is absent. And they are playing games. And we’re not,” he said.


Twitter has dropped out of a voluntary European Union agreement to combat online disinformation, a top EU official said Friday

European Commissioner Thierry Breton tweeted that Twitter had pulled out of the EU’s disinformation “code of practice” that other major social media platforms have pledged to support. But he added that Twitter’s “obligation” remained, referring to the EU’s tough new digital rules taking effect in August.

“You can run but you can’t hide,” Breton said.

...Breton said that under the new digital rules that incorporate the code of practice, fighting disinformation will become a “legal obligation.”


Dudenhöffer lays the blame for Tesla’s mounting troubles on Musk, who divides his time between running Tesla, his rocket company SpaceX, and Twitter, which has been in a state of perma-crisis since his takeover last year

Cars crashing into bollards, brakes slamming on to avoid imaginary collisions, and more than 2,400 complaints of cars accelerating out of their owner’s control. The 100 gigabytes worth of internal Tesla documents leaked to the German newspaper Handelsblatt present a sobering picture of the EV company’s technical limitations.

The 23,000 files obtained by Handelsblatt cover issues in Europe, the US, and Asia between 2015 and March 2022, and they seem to show serious flaws in Tesla’s Autopilot technology. The revelations could see the company facing new pressure from regulators, who are likely to pore over the reports looking for evidence that the company has misled authorities or customers over the safety of its vehicles.

...Schmidt says that Tesla has long taken a “move fast and break things” approach to developing products, leading to concerns about whether its new releases are ready for the road. There have been 393 recorded deaths involving Teslas, 33 of which involved Autopilot. Schmidt alleges that Musk “accepts driver death as a consequence of forwarding technology.” Musk did not respond to a request to comment for this story or address Schmidt’s allegation.


The surgeon general’s advisory on risks of youth social media use could shift the conversation

“… The current body of evidence indicates that while social media may have benefits for some children and adolescents, there are ample indicators that social media can also have a profound risk of harm to the mental health and well-being of children and adolescents,” U.S. Surgeon General Dr. Vivek Murthy wrote in the advisory. “At this time, we do not yet have enough evidence to determine if social media is sufficiently safe for children and adolescents.”

...“Nearly every teenager in America uses social media, and yet we do not have enough evidence to conclude that it is sufficiently safe for them,” the advisory warns. “Our children have become unknowing participants in a decades-long experiment.”

...The surgeon general’s specific policy recommendations include implementing higher standards for youth data privacy, enforced age minimums, deepening research in these areas and weaving digital media literacy education into cirriculums.

...The issue comes up time and time again in Congressional hearings, but the possibility of thoughtful U.S. regulation addressing tech’s ability to manipulate the behavior of young users while monetizing their data continues to take a backseat to partisan politics and political grandstanding. While the EU passes meaningful new rules for social media like the Digital Services Act, lawmakers in the U.S. continue to fail on core, cross-platform issues like data privacy and dangerous content.


Surgeon General Warns That Social Media May Harm Children and Adolescents

The nation’s top health official issued an extraordinary public warning on Tuesday about the risks of social media to young people, urging a push to fully understand the possible “harm to the mental health and well-being of children and adolescents.”

In a 19-page advisory, the United States surgeon general, Dr. Vivek Murthy, noted that the effects of social media on adolescent mental health were not fully understood, and that social media can be beneficial to some users. Nonetheless, he wrote, “There are ample indicators that social media can also have a profound risk of harm to the mental health and well-being of children and adolescents.”

...Survey results from Pew Research have found that up to 95 percent of teens reported using at least one social media platform, while more than one-third said they used social media “almost constantly.” As social media use has risen, so have self-reports and clinical diagnoses among adolescents of anxiety and depression, along with emergency room visits for self-harm and suicidal ideation.


Google to pay $40m for "deceptive and unfair" location tracking practices

Ferguson’s lawsuit against Google asserted that the tech giant deceptively led consumers to believe that they have control over how Google collects and uses their location data. In reality, consumers could not effectively prevent Google from collecting, storing and profiting from their location data.

The lawsuit itself, announced back in January 2022, claimed Google used a “number of deceptive and unfair practices” to obtain user content for tracking. Practices highlighted included “hard to find” location settings, misleading descriptions of location settings, and “repeated nudging” to enable location settings alongside incomplete disclosures of Google’s location data collection.

These practices were set alongside the large amount of profit Google generated from using consumer data to sell advertising. Google made close to $150 billion from advertising in 2020, and the case pointed out that location data is a key component of said advertising. As per the Attorney General:


The First Social-Media Babies Are Growing Up—And They’re Horrified

Caymi Barrett, now 24, grew up with a mom who posted Barrett’s personal moments—bath photos, her MRSA diagnosis, the fact that she was adopted, the time a drunk driver hit the car she was riding in—publicly on Facebook. (Barrett’s mother did not respond to requests for comment.) The distress this caused eventually motivated Barrett to become a vocal advocate for children’s internet privacy, including testifying in front of the Washington State House earlier this year. But before that, when Barrett was a teen and had just signed up for her first Twitter account, she followed her mom’s example, complaining about her siblings and talking candidly about her medical issues.

...Some new parents feel there’s no excuse for subjecting children to invasive public scrutiny. Kristina, a 34-year-old mother from Los Angeles who asked to be identified by only her first name for privacy reasons, has posted just a handful of photos of her daughter, and covers her face in all of them. “We didn’t really want to share her image publicly, because she can’t consent to that,” she told me. Many other adults don’t respect Kristina’s decision. “I had someone basically insinuate, was there something wrong with my daughter? Because I wasn’t sharing her,” she said.

...Even if parents have decided to keep their children off social media, they’re not the only ones with phones. Kristina says she’s had to ask friends and family to take down photos they’ve posted of her daughter online. Every person on the street, every parent at a birthday party, has their own camera in their pocket, and the potential to knowingly or unknowingly violate her family’s boundary.

Barrett says she’s still feeling the effects of her mother’s decade of oversharing. When Barrett was 12, she says she was once followed home by a man who she believes recognized her from the internet. She was later bullied by classmates who latched on to all the intimate details of her life that her mother had posted online, and she ultimately dropped out of high school.


E.U. slaps Meta with record $1.3 billion fine for data privacy violations

The Irish Data Protection Commission ordered Meta to suspend all transfers of personal data belonging to users in the E.U. and the European Economic Area — which includes non-E. U. countries Iceland, Liechtenstein and Norway — to the United States.

The Irish Data Protection Commission said in a statement that Meta’s data transfers were in breach of the E.U.’s General Data Protection Regulation (GDPR), rules that restrict what companies can do with people’s personal data. It is the largest GDPR fine handed down by the bloc, surpassing the previous record of $887 million against Amazon, a penalty issued in 2021 by a European privacy regulator that the firm said it would appeal.

...Meta has faced regulatory scrutiny over its privacy practices for more than a decade, including from the Federal Trade Commission in the United States. Monday’s fine is far smaller than the $5 billion settlement that the company reached with the FTC in 2019 over its alleged mishandling of user data, ending an investigation that began in the wake of the Cambridge Analytica scandal.

That record-breaking fine marked a historic censure of a major tech company, but it was largely shrugged off by investors. The company’s critics in Congress said the penalty did not go far enough, calling it a “Christmas present” and a “mosquito bite” for the tech behemoth. Yet the FTC settlement is a harbinger of how government penalties can inflict more than financial pain on a company.


U.S. Intelligence Building System to Track Mass Movement of People Around the World

In one project, AIS simulated a cyber attack with 104 individuals and watched the way they moved. “Devices included traditional desktop systems, laptops, tablets, and mobile platforms. Modalities included accelerometer and gyroscope, keystroke data, mouse data, touchscreen interactions, and other information,” the firm said. “The technology tracks users through biometric features, including keystroke biometrics, mouse movement behavior, and gait detection.”

In another project, called GANSpoofer, AIS used an AI model called a generative adversarial network (GAN) to make fake users that could defeat a biometric scanner. GANs have been used to create hyper realistic photos of people and animals that don’t exist. “We’ve shown that we can both detect the unique anomalies associated with an individual’s biometric behaviors and use this information to transform data into, not only realistic patterns at a population level, but patterns specific to that individual,” AIS said.

The defense contractor also claimed to have developed a learning model that could detect symptoms of a traumatic brain injury in a soldier just by watching how they moved their smartphone. According to its research, the placement of the phone in and around the body and the accelerometer and gyroscope data from the device could help it predict certain diseases and injuries with more than 90 percent of accuracy.


A tweet about a Pentagon explosion was fake. It still went viral.

On Monday morning, a verified Twitter account called Bloomberg Feed shared an ominous tweet. Beneath the words, “Large Explosion near The Pentagon Complex in Washington, D.C. - Initial Report,” it showed an image of a huge plume of black smoke next to a vaguely Pentagon-like building.

On closer inspection, the image was a fake, likely generated by artificial intelligence, and the report of an explosion was quickly debunked — though not before it was picked up by large accounts, including the Russian state media organ Russia Today. The tweet may have also briefly moved the stock market, as the Dow Jones Industrial Index dropped 85 points within four minutes, then rebounded just as quickly.

...And Twitter is looking like an increasingly likely vector, as new owner Elon Musk has gutted its human workforce, laid off a team that used to fact-check viral trends, and changed account verification from a manual authentication process to one that’s largely automated and pay-for-play. The signature blue badges once indicated authority for public figures, large organizations, celebrities and others at risk of impersonation. Now, Twitter awards them to any one willing to pay $8 a month and confirm their phone number.


Authorities are investigating Elon Musk’s Twitter after 6 former employees say his team broke laws by turning it into a ‘Twitter hotel’

San Francisco officials are investigating Twitter after six former employees allege that owner Elon Musk’s leadership team broke laws in turning the company’s headquarters into a “Twitter Hotel” for workers being pushed to stay up late to transform the social media platform.

...This is not the first time San Francisco officials have tussled with Musk, who bought Twitter for $44 billion in October and gutted much of its workforce as he converted a part of the company’s headquarters into bedrooms.


Congress Still Needs to Take Privacy Seriously

Congress’s relationship to privacy comes when it’s politically expedient and disappears as soon as members feel as if they could be too easily painted as being soft on crime or national security. Despite calls over the last few years for federal legislation to reign in big tech companies, we’ve seen nothing significant in limiting tech company's ability to collect data (then accessed by the NSA via Prism), or regulate biometric surveillance, or close the backdoor that allows the government to buy personal information rather than get a warrant, much less create a new Church Committee to investigate the intelligence community’s overreaches. It’s why so many cities and states have had to take it upon themselves to ban face recognition or predictive policing, or pass laws to protect consumer privacy and stop biometric data collection without consent.

It’s been 10 years since the Snowden revelations and Congress needs to wake up and finally pass some legislation that actually protects our privacy, from companies as well as from the NSA directly.


XKEYSCORE gives analysts the power to watch—in real time—anything a person does on the Internet

Much of the spying that the NSA does overseas is conducted under the auspices of Executive Order 12333. This directly impacts people around the world, but also Americans whose communications can and often are included and then analyzed, including with a tool called XKEYSCORE. As the Guardian reported in 2013 based upon Snowden's revelations, XKEYSCORE gives analysts the power to watch—in real time—anything a person does on the Internet. There are serious issues raised by this tool and by 12333 more broadly. Despite consistent calls for reform, however, very little has occurred and 12333 mass surveillance, using XKEYSCORE and otherwise, appears to continue unabated. The Privacy and Civil Liberties Board (PCLOB), a government agency intended to advise the executive branch on privacy and civil liberties, issued a disappointing report, after much delay, which prompted an appropriately critical response from PCLOB member Travis LeBlanc. We still need to have a serious conversation not only about NSA spying in the U.S. but about it’s much bigger collection and analysis and use with very little oversight, all around the world.


In 2021 alone, the FBI conducted up to 3.4 million warrantless searches of Section 702 data to find Americans’ communications

In Fall 2023, Congress will get a chance to seriously reform or end Section 702 of FISA in light of its impending sunset. Section 702 allows the government to conduct surveillance inside the United States by vacuuming up digital communications so long as the surveillance is directed at foreigners currently located outside of the United States. It also prohibits intentionally targeting Americans. Nevertheless, the NSA routinely (“incidentally”) acquires innocent Americans' communications without a probable cause warrant. Once collected, the FBI can search through this massive database of information by “querying” the communications of specific individuals.

In 2021 alone, the FBI conducted up to 3.4 million warrantless searches of Section 702 data to find Americans’ communications. Congress and the FISA Court have imposed modest limitations on these “backdoor searches,” but according to several recent FISA Court opinions, the FBI has engaged in “widespread violations” of even these minimal privacy protections.

The Snowden revelations gave names to two of the key types of surveillance that the NSA conducts under Section 702: Prism and Upstream. Upstream has been central to EFF’s litigation, as we had direct evidence about it long before we knew its name. If 702 ends, both of these two programs should end along with them.


Apple restricts employees from using ChatGPT over fear of data leaks

Apple has good reason to be wary. By default, OpenAI stores all interactions between users and ChatGPT. These conversations are collected to train OpenAI’s systems and can be inspected by moderators for breaking the company’s terms and services.

...Apple is far from the only company instituting such a ban. Others include JP Morgan, Verizon, and Amazon.

Apple’s ban, though, is notable given OpenAI launched an iOS app for ChatGPT this week. The app is free to use, supports voice input, and is available in the US. OpenAI says it will be launching the app in other countries soon, along with an Android version.


Supreme Court hands tech companies a win, and not just about Section 230

But in the end, the court didn’t even address Section 230. It decided it didn’t need to, once it concluded the social media companies hadn’t violated U.S. law by automatically recommending or monetizing terrorist groups’ tweets or videos.

...Yet the wording of Thomas’s opinion is cause for concern to those who would like to see platforms held liable in other sorts of cases, such as the Pennsylvania mother suing TikTok after her 10-year-old died attempting a viral “blackout challenge.” His comparison of social media platforms to cellphones and email suggests an inclination to view them as passive hosts of information even when they recommend it to users.

“If there were people pushing on that door, this pretty firmly kept it closed,” said Evelyn Douek, an assistant professor at Stanford Law School.


Spooked by ChatGPT, US Lawmakers Want to Create an AI Regulator

But Blumenthal also expressed concern that a new federal AI agency could struggle to match the tech industry’s speed and power. “Without proper funding you’ll run circles around those regulators,” he told Altman and fellow industry witness Christina Montgomery, IBM’s chief privacy and trust officer. Altman and Montgomery were joined by psychology professor turned AI commentator Gary Marcus, who advocated for the creation of an international body to monitor AI progress and encourage safe development of the technology.

...The senators did not suggest a name for the prospective agency or map out its possible functions in detail. They also discussed less radical regulatory responses to recent progress in AI—such as the requiring of public documentation of AI systems’ limitations or the datasets used to create them, akin to an AI nutrition label—ideas that had been introduced years ago by researchers like former Google ethical AI team lead Timnit Gebru, who was ousted from the company after a dispute about a prescient research paper which warned about the limitations and dangers of large language models.

...Another change urged by lawmakers and industry witnesses alike was requiring disclosure to inform people when they’re conversing with a language model and not a human, or when AI technology makes important decisions with life-changing consequences. One example could be a disclosure requirement to reveal when a facial recognition match is the basis of an arrest or criminal accusation.


CNET Workers Unionize as ‘Automated Technology Threatens Our Jobs’

The announcement comes just a few months after journalists at Futurism revealed that CNET had published articles written by AI instead of by its writers—articles which contained a multitude of extremely basic errors—and that it had not properly disclosed that fact to its team. Despite these developments in CNET’s content creation, a representative for the union said that organizing had started long before.

...“CNET media workers have been subjected to ongoing restructuring, cost-cutting austerity measures, shifting job roles and promotion freezes,” the letter reads. “In the past year, three major rounds of layoffs have deeply impacted our reporting and our teams. Red Ventures cut senior editorial positions, eliminated the Roadshow cars section, drastically slashed our video team, gutted our news division and shut down science and culture coverage. These unilateral overhauls created low morale and unease, resulting in a wave of resignations and talent attrition.”

“We face a lack of transparency and accountability from management around performance evaluations, sponsored content and plans for artificial intelligence,” it continues. “We are concerned about the blurring of editorial and monetization strategies.”


Your DNA Can Now Be Pulled From Thin Air. Privacy Experts Are Worried.

They also found key mutations shown to carry a higher risk of diabetes, cardiac issues or several eye diseases. According to their data, someone whose genetic material turned up in the sample had a mutation that could lead to a rare disease that causes progressive neurological impairment and is often fatal. The illness is hereditary and may not emerge until a patient’s 40s. Dr. Duffy couldn’t help but wonder — does that person know? Does the person’s family? Does the person’s insurance company?

...“This gives a powerful new tool to authorities,” Dr. Lewis said. “There’s internationally plenty of reason, I think, to be concerned.” Countries like China already conduct extensive and explicit genetic tracking of minority populations, including Tibetans and Uighurs. Tools like eDNA analysis could make it that much easier, she said.

...That highlights the possibility that law enforcement officials could use eDNA collected at crime scenes to incriminate people, even though wildlife ecologists who developed the techniques say the science isn’t mature enough for such purposes. Scientists have yet to pin down the fundamentals of eDNA, like how it travels through air or water or how it degrades over time. And nanopore sequencing — the technology that allowed Dr. Duffy’s team to find longer and more informative DNA fragments — still has a much higher error rate than older technologies, meaning an unusual genetic signature that seems like a promising lead could be a red herring.

...“DNA tracks to your extended relatives, tracks forward in time to your children, tracks backward in time to your ancestors,” Ms. Murphy added. “In the future, who knows what DNA will tell us about people or how it might be used?”


The Data Broker That Targeted Abortion Clinics Landed a US Military Contract

The AFWERX contract is the first publicly reported relationship between SafeGraph and the US military, but the company has a history of working with other government agencies. In 2018, it sold two years of raw data to the Illinois Department of Transportation. In the first months of the Covid-19 pandemic, it inked a $420,000 deal with the Centers for Disease Control. Meanwhile, Veraset gave raw, individualized data about millions of people to the Washington, DC, Department of Health and other agencies around the country. And in 2020 and 2021, Santa Clara County used SafeGraph data to monitor attendance at a local church as part of a broader effort to enforce Covid restrictions. Materials shared with the Air Force mention relationships with the US Department of Agriculture, the Federal Reserve Bank, and the Los Angeles County, New York City, and New Jersey governments.

...In May 2022, Vice revealed that SafeGraph was selling access to aggregated counts of where people were before and after visiting abortion clinics, including Planned Parenthood. In response, 14 US senators sent the company a letter demanding answers about its business practices, and SafeGraph promised to stop selling data about abortion clinic visitors.

Although SafeGraph is best known for dealing in cell-phone-based location data, its pitch to the Air Force makes little mention of data about human movement—the only direct reference is a slide that says it can help “analyze human activity for Landing Zone (LZ) selection,” without explaining what that means. But SafeGraph has recently expanded its business to incorporate other kinds of data as well. For example, in 2022, it launched Spend, a product that profiles the customers of brick-and-mortar stores, including what they spend, what wireless carriers they use, and whether they take out short-term “buy now, pay later” loans.

...SafeGraph is funded in part by the CIA-backed venture capital firm In-Q-Tel, and In-Q-Tel head of investments George Hoyem sits on SafeGraph’s board, according to its slide deck. SafeGraph has also received investments from a motley crew including Peter Thiel, Sam Harris, former Republican House majority leader Eric Cantor, and former Saudi Arabian intelligence chief Prince Turki bin Faisal Al Saud. SafeGraph’s last funding round valued the company at $370 million, according to the records.


Your smart home devices know more about you than you might think—and they’re less secure than you’d hope

This problem is only going to grow as we stuff our homes with more and more things that connect to the internet. Recently, the Atlantic wrote a great piece about the data that smart TVs collect on their couch-bound watchers. My colleague Eileen Guo showed how Roomba vacuums can take invasive pictures, in an investigation about how data was collected on people who were testing the products.

Watson is not especially worried about the government or the tech companies spying on you through your thermostat, per se. He’s more worried about all the ways your data is being sold and accumulated by data brokers.

“That’s where the risks are that people don’t understand: if my bed tracks my sleep and tracks my heart rate, and that company is selling off this information to an insurance company that realizes you have a near cardiac event every time you go to sleep, or that you have sleep apnea or whatever,” he says.

“The more technology encroaches into our lives in every facet … we lose the ability to have any measure of control over where it’s going, how much is collected, who’s getting their hands on it, and what they are doing with it.”


A pediatric behavioral health startup called Brightline informed its customers that their protected health data may have been stolen as part of a separate ransomware attack on a Brightline third-party service provider

The third-party service provider at the heart of the data breach is Fortra, which was recently targeted by the Cl0p ransomware gang in a string of attacks that leveraged an undisclosed vulnerability in the file transfer software called GoAnywhereMFT, which Fortra develops and which is used by businesses worldwide. Malwarebytes Labs reported on the vulnerability in February, urging users to deploy a patch.

GoAnywhere MFT, which stands for managed file transfer, allows businesses to manage and exchange files in a secure and compliant way. According to its website, it caters to more than 3,000 organizations, predominantly ones with over 10,000 employees and 1B USD in revenue.

...For many organizations, Brightline offers virtual behavioral and mental health services for the children of benefits-eligible employees. In this light, Brightline has published a list of covered entities impacted by the breach.


To engage with Temu is to be cornered in conversation with an AI-powered salesperson

This version of Temu — the version you encounter first that runs in the background at all times, beckoning you back, and greets you with endless machine-learning-generated feeds of products based on data collected from your previous browsing habits — is an extreme expression of shopping via what’s known in the business as “Discovery.” In social-media terms, it’s a bit like TikTok, which swaps the illusion of control created by follower-and-friend models for total submission to a top-down recommendation algorithm programmed to meet its users’ base desires, or at least to keep them occupied for a little while. If you can step out of Temu’s aggressive sales flow to browse more freely, the experience is not entirely unlike using TikTok in that it won’t be long before you see something you weren’t expecting, which then multiplies in front of you, manifesting seemingly infinite variations of itself. On TikTok, the objective is to keep you scrolling in hopes that you eventually encounter and interact with an ad; on Temu, everything in the feed is already an ad, and the goal is to convince you, after an extended period of bent-neck brain-dead scrolling, to actually buy something, anything, for just a few dollars.


Mozilla research says many of the top mental health and therapy apps still have subpar privacy and security practices

An investigation into mental health apps has revealed that many of the most popular services are failing to protect the privacy and security of their users. Following up on a report from last year’s Privacy Not Included guide, researchers at Mozilla found that apps designed for sensitive issues like therapy and mental health conditions are still collecting large amounts of personal data under questionable or deceptive privacy policies.

The team re-reviewed 27 of the mental health, meditation, and prayer apps featured in the previous year’s study, including Calm, Youper, and Headspace, in addition to five new apps requested by the public. Of those 32 total apps, 22 were slapped with a “privacy not included” warning label, something Mozilla assigns to products that have the most privacy and personal data concerns. That’s a minor improvement on the 23 that earned the label last year, though Mozilla said that around 17 of the 27 apps it was revisiting still scored poorly — if not worse — for privacy and security this time around.

...Meanwhile, some of the apps featured on last year’s list did see some improvements. Youper is highlighted as the most improved of the bunch, having overhauled its data collection practices and updated its password policy requirements to push for stronger, more secure passwords. Moodfit, Calm, Modern Health, and Woebot also made notable improvements by clarifying their privacy policies, while researchers praised Wysa and PTSD Coach for being “head and shoulders above the other apps in terms of privacy and security.”


At Musk’s brain-chip startup, animal-testing panel is rife with potential conflicts

Details of the panel’s membership and its potential conflicts have not been previously reported. Insight into its makeup comes in the wake of two federal investigations, first reported by Reuters, into potential animal-welfare violations by Neuralink and allegations that it improperly transported dangerous pathogens on implants removed from monkey brains. Reuters reported in December that some employees had grown concerned about the animal experiments being rushed under pressure from Musk to speed development, causing needless suffering and deaths of pigs, sheep and monkeys.

...Neuralink staffers typically are compensated with salary and stock-based incentives, according to five current and former employees and Neuralink job advertisements reviewed by Reuters. Two of the staffers said some senior-level employees stand to make millions of dollars if the company secures critical regulatory approvals. Reuters couldn’t determine the compensation terms of the Neuralink IACUC members who are also company employees.

Neuralink shareholders could see big gains if the private company’s valuation, currently more than $1 billion, continues to soar. Successful animal trials are critical for the company to gain federal approval for human trials and, ultimately, brain-implant commercialization. Reuters reported in March that the U.S. Food and Drug Administration rejected Neuralink’s first human-trial application, in part because the company had not proven the device’s safety in animal tests.

...In 2021 and 2022, the company killed about 250 sheep, pigs and primates, the company records show. In one instance in 2021, the company implanted 25 out of 60 pigs with the wrong-sized devices, Reuters previously reported. Neuralink employees said the error could have been avoided with better preparation.


Despite laying off just under 2,000 employees across Alexa and the devices unit late last year, Amazon CEO Andy Jassy has big plans to reboot the voice assistant with ChatGPT-like features, a leaked document seen by Insider said

Years ago, Alexa was the chat technology that took Silicon Valley by storm, with hundreds of products from many manufacturers embedding the voice technology into their wares. Now OpenAI's ChatGPT has taken center stage, and Amazon's competitors, such as Microsoft, are having success using it, with Google hot on Microsoft's tail with its own competitor. Meanwhile, the once promising Alexa unit has struggled in recent years and gone through both major layoffs and cost cuts.


FTC proposes barring Meta from monetizing kids' data

The company had agreed to independent assessments of its updated privacy program as part of the 2020 settlement, under which Facebook paid a $5 billion civil penalty following an FTC investigation around the Cambridge Analytica data scandal. The FTC alleges Facebook also violated an earlier 2012 order by continuing to allow app developers access to private user information. Facebook allowed third-party apps to access user data until mid-2020 in some cases, the FTC alleges.

The FTC is also accusing Meta of violating the Children's Online Privacy Protection Rule by misrepresenting parental controls on its Messenger Kids app. The COPPA Rule requires parental consent for websites to collect personal information from kids under 13. The FTC alleged that while the company marketed that the app would only allow kids to talk with contacts their parents approved, children were able to communicate with additional contacts in group chats or group video calls in some circumstances.

As a result, the FTC is proposing to strengthen the terms of the 2020 agreement to put additional restrictions on the company, which would apply to all of Meta's services including Facebook, Instagram, WhatsApp and Oculus. The proposed terms include a blanket ban on monetizing data from users under 18. That means any data collected from these users could only be used for security reasons and any data collected while users are under age could not be later monetized once they turn 18.


GPT-4 Can’t Replace Striking TV Writers, But Studios Are Going to Try

The Writers Guild of America is on strike, after six weeks of negotiating with a number of major entertainment companies, including Netflix, Amazon, Apple, and Disney, under the Alliance of Motion Picture and Television Producers (AMPTP). The walkout is the first Hollywood strike to occur in 15 years, and comes at an unprecedented moment—for the first time ever, writers are negotiating the studios’ use of generative AI tools like ChatGPT.

...“Initially, the WGA's AI proposals looked like outliers. Everything else on the list was talking about writer compensation, making sure writers were paid fairly to justify the immense value they were bringing to the studios. Over the negotiation, it became clear that the AI proposals are really part of a larger pattern. The studios would love to treat writers as gig workers. They want to be able to hire us for a day at a time, one draft at a time, and get rid of us as quickly as possible. I think they see AI as another way to do that,” John August, a screenwriter known for writing the films Charlie’s Angles and Charlie and the Chocolate Factory, told Motherboard.

“The idea that our concerns could be addressed by an annual meeting is absurd and honestly offensive. Everyone watching AI can tell you that these large language models are progressing at an incredible rate. AI-generated material isn't something that's going to become a factor in a few years. It's here now. It's lucky that we're negotiating our contract this year and not next year, before these systems become widely entrenched,” August said.


Google Promised to Defund Climate Lies, but the Ads Keep Coming

They found 100 videos, viewed at least 18 million times in total, that violated Google’s own policy. They found videos accompanied by ads for other major brands like Adobe, Costco, Calvin Klein, and Politico. Even an ad for Google’s search engine popped up before a video that claimed there was no scientific consensus about the changing climate.

...They also included industry giants like Exxon Mobil, which has been accused of “greenwashing” its contribution to carbon emissions, though its videos did not explicitly violate YouTube’s policies; and mainstream conservative media like Fox News, whose videos sometimes did. (In one, Fox’s recently fired anchor Tucker Carlson dismissed the fight against climate change as “a coordinated effort by the government of China to hobble the U.S. and the West and take its place as the leader of the world.”)

...“What makes YouTube especially dangerous is that they profit share per video,” said Claire Atkin, a co-founder of Check My Ads, an advocacy group that studies advertising online and was not involved in the research. “When someone posts this information to Facebook, they don’t make money, but when someone posts a video to YouTube, they have the opportunity to make a full salary on disinformation.”

She said that YouTube was a powerful force for radicalizing people online and needed to work harder to govern content on its platform. “The fact that they haven’t changed that, that they are still funding — not promoting, funding — by sending advertisers to sponsor climate change disinformation is yet another proof point of their ineptitude.”


New Tool Shows if Your Car Might Be Tracking You, Selling Your Data

Motherboard has previously covered the various parts of the vehicle data collection business. One company called Otonomo sells granular location data of peoples’ vehicles. In another case, a surveillance contractor that has sold services to the U.S. military had aspirations to sell similar information to government customers.

After entering their VIN, the Vehicle Privacy Report tool says the types of data it believes the car manufacturer collects. This includes identifiers, location data, biometrics, and data synced from mobile phones. The tool also lists the sorts of entities the manufacturer may share or sell data to, such as insurance companies, data brokers, or the government. (It is expected behavior for companies to share collected data with law enforcement under a valid court order or similar).

The tool will also say if the vehicle has telematics, which is when a car has its own cellular data plan separate from the driver’s smartphone. For cars with telematics, the tool describes them as a “smartphone on wheels.” For those without, the tool says they are like a “hard-drive on wheels.”


IBM could replace 7,800 jobs with artificial intelligence, CEO says

As generative artificial intelligence becomes eerily lifelike and gives rise to chatbots that can draft letters, write computer code or create songs, experts have warned about its ability to put people out of jobs. A Goldman Sachs report in late March said generative AI could significantly disrupt the global economy and subject 300 million jobs, particularly white-collar ones, to automation.

...Using jobs data in both the United States and Europe, report writers found that roughly two-thirds of current jobs are exposed to some degree of AI automation, and that generative AI could substitute for up to one-fourth of current work done by humans.

In particular, the report noted that office-support workers, lawyers and engineers are at most risk, rather than construction workers, maintenance professionals or building cleaning crews.


The ‘HIPAA authorization’ for Amazon’s new low-cost clinic offers the tech giant more control over your health data

This Amazon form is asking for something more extraordinary: “use and disclosure of protected health information.” It authorizes Amazon to have your “complete patient file” and notes that the information “may be re-disclosed,” after which it “will no longer be protected by HIPAA.”

Wait, you agreed to what? Amazon is essentially pushing people to waive some of their federal privacy protections, say the lawyers at the Electronic Privacy Information Center whom I asked to inspect the jargon. Amazon is required by law to say doing so is voluntary — but in practice you must agree to become a patient at its Clinic. There’s only one button to click: “Continue.”

What could go wrong? There are lots of icky ways Amazon could use your health information: to upsell you on other services, to target marketing for its giant advertising business, or to build out artificial intelligence or patient-risk models.

...I’m just as frustrated with our lawmakers as I am with Amazon. HIPAA was written in 1996 primarily to make medical records portable, at a time when many were stored in folders on shelves. No wonder the law can’t keep up with digital businesses harvesting health information. HIPAA also doesn’t cover the growing trove of body information collected by Apple Watches and even Google searches.


Israeli spyware maker NSO Group deployed at least three new “zero-click” hacks against iPhones last year, finding ways to penetrate some of Apple’s latest software, researchers at Citizen Lab have discovered

The attacks targeted human rights activists who were investigating the 2015 mass kidnapping of 43 student protesters in Mexico, other suspected military abuses, and the related government response, Citizen Lab said. Mexico has been a major NSO customer.

According to Citizen Lab, one of the attacks, in September 2022, coincided with a report by international experts challenging government evidence in the 2015 case and its interference with the investigation.

It’s the latest sign of NSO’s ongoing efforts to create spyware that penetrates iPhones without users taking any actions that allow it in. Citizen Lab has detected multiple NSO hacking methods in past years while examining the phones of likely targets, including human rights workers and journalists.


Feds Allege China Disrupted and Spied on Dissidents’ Zoom Calls

More than that, the DOJ said that the trolls targeted an online platform labeled “Company-1” to disrupt meetings of pro-democracy activists commiserating about the Tiananmen Square massacre. ABC News reported based on anonymous sources that the listed company was Zoom, and that an insider at Zoom was assisting with these repression campaigns. Prosecutors said the trolls posted threats in the Zoom chat, and in another instance, the trolls drowned out another meeting of anti-CCP dissidents with “loud music, vulgar screams, and threats.”

...The accusations against Zoom echo previous allegations against the company. Back in 2020, the DOJ accused a Zoom exec Xinjiang “Julien” Jin of working with the Beijing government to surveil and censor video calls. The allegations were that the Zoom exec shared user information and disrupted video calls on behalf of the CCP. The video meeting platform previously claimed to Gizmodo that no Zoom employee provided the Chinese government with the names or data of users not based in China.


Indian government gives itself the power to “fact-check” and delete social media posts

The Indian government on April 6 announced a state-run fact-checking unit that will have sweeping powers to label any piece of information related to the government as “fake, false or misleading” and have it removed from social media. The country has tweaked its tech rules that now require platforms such as Facebook, Twitter, and Instagram to take down content flagged by the fact-checking body. Internet service providers are also expected to block URLs to such content. Failure to comply could result in the platforms losing safe harbor protection that safeguards them from legal action against any content posted by their users, said India’s minister of information technology, Rajeev Chandrasekhar.

...“This will not just have an impact on the media and those asking questions to the government,” Bal said. “This government wants absolute control on the narrative and through this amendment, they can assert that control legally.” Bal said that instead of seeking inputs from stakeholders, the government appears to have a specific objective of suppressing any kind of independent reporting on its actions.

...“In effect, the government has given itself absolute power to determine what is fake or not, in respect of its own work, and order takedown,” the statement read. “The so-called ‘fact-checking unit’ can be constituted by the ministry, by a simple ‘notification published in the Official Gazette.’” The guild also urged the government to withdraw the “draconian” law and hold consultations with the media.

...Supriya Shrinate, spokesperson of the party in opposition, the Indian National Congress, told Rest of World the new amendment is not just undemocratic but also unconstitutional, and that it will severely impact freedom of speech in the country. “The government is undermining all democratic processes in the country,” she said. “Nobody is allowed to ask questions. This BJP [Bharatiya Janata Party] government is afraid of facing questions and facts. How can the government be the judge, jury, and executioner of what is fake news, when BJP is the biggest manufacturer of fake news?”


How the cops buy a "God view" of your location data

The list of people and organizations that are hungry for your location data—collected so routinely and packaged so conveniently that it can easily reveal where you live, where you work, where you shop, pray, eat, and relax—includes many of the usual suspects.

Advertisers, obviously, want to send targeted ads to you and they believe those ads have a better success rate if they're sent to, say, someone who spends their time at a fast-food drive-through on the way home from the office, as opposed to someone who doesn't, or someone who's visited a high-end department store, or someone who, say, vacations regularly at expensive resorts. Hedge funds, interestingly, are also big buyers of location data, constantly seeking a competitive edge in their investments, which might mean understanding whether a fast food chain's newest locations are getting more foot traffic, or whether a new commercial real estate development is walkable from nearby homes.

...According to a recent investigation from Electronic Frontier Foundation and The Associated Press, a company called Fog Data Science has been gathering Americans' location data and selling it exclusively to local law enforcement agencies in the United States. Fog Data Science's tool—a subscription-based platform that charges clients for queries of the company's database—is called Fog Reveal. And according to Bennett Cyphers, one of the investigators who uncovered Fog Reveal through a series of public record requests, it's rather powerful.


Hackers Can Remotely Open Smart Garage Doors Across the World

“That’s the craziest bug. But the disabling alarm and turning on [and] off smart plugs is pretty neat too,” he added, referring to another Nexx product that allows users to control power outlets in their home.

The consequences of someone weaponzing these vulnerabilities are wide ranging and potentially a real security threat for Nexx’s customers. A hacker could open Nexx doors around the world at random, exposing their garage contents and perhaps their homes to opportunistic thieves. Pets might escape. Or customers might just get very annoyed at someone opening and closing their property with no idea of why it was happening. In more extreme cases, a hacker could use the vulnerabilities as part of a targeted attack against a particular garage that used Nexx’s security system.

Sabetan and Motherboard have repeatedly tried to contact Nexx about the issues. Sabetan said the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) told him it had attempted contact too. The company has failed to reply or fix the vulnerabilities. This means the security vulnerabilities are still available to hackers who may wish to abuse them. For that reason, Motherboard is not describing them in great detail and instead focusing on their impact to consumers. CISA published its own advisory about the security issues on Tuesday.


Three ways AI chatbots are a security disaster

But the way these products work—receiving instructions from users and then scouring the internet for answers—creates a ton of new risks. With AI, they could be used for all sorts of malicious tasks, including leaking people’s private information and helping criminals phish, spam, and scam people. Experts warn we are heading toward a security and privacy “disaster.”

...Over the last year, an entire cottage industry of people trying to “jailbreak” ChatGPT has sprung up on sites like Reddit. People have gotten the AI model to endorse racism or conspiracy theories, or to suggest that users do illegal things such as shoplifting and building explosives.

Malicious actors could also send someone an email with a hidden prompt injection in it. If the receiver happened to use an AI virtual assistant, the attacker might be able to manipulate it into sending the attacker personal information from the victim’s emails, or even emailing people in the victim’s contacts list on the attacker’s behalf.

...“Language models themselves act as computers that we can run malicious code on. So the virus that we’re creating runs entirely inside the ‘mind’ of the language model,” he says.


‘Nobody is Safe’: In Wild Hacking Spree, Hackers Accessed Federal Law Enforcement Database

Ceraolo previously provided Motherboard with details on the underground SIM swapping community, where hackers hijack phone numbers to steal victims’ cryptocurrency or their valuable social media handles. One 2020 article focused on how SIM swappers phished telecom company employees to access internal tools; another showed that SIM swappers had escalated from bribing employees to using remote desktop software to gain direct access to T-Mobile, AT&T, and Sprint tools.

...Ceraolo was also a member of a hacking group called “ViLE,” according to the prosecutors’ press release. In a screenshot included in the release, ViLE’s website included an illustration of a hanging girl. At the time of writing, the website is protected by a login screen in the style of an early Windows computer. ViLE’s members sought out peoples’ personal information, such as physical addresses, telephone numbers, and Social Security Numbers, and then doxed these people, the release adds. Victims could then pay to have their information removed from ViLE’s website, the release reads.

...Beyond access to the U.S. federal law enforcement database, Ceraolo allegedly accessed the official email address of a Bangladeshi police official between February 2022 and May 2022. With that email account, he then allegedly posed as the officer and requested information about a specific person from an unnamed social media platform. Posing as officials and requesting data from social networks has become a powerful service in the underground hacking community, with scammers sometimes creating fake legal demands. In another case from an online gaming platform, Ceraolo’s attempts at fraud failed, according to the release.

Ceraolo also allegedly used the compromised Bangladeshi email account “to attempt to purchase a license from a facial recognition company whose services are not available to the general public.” Clearview AI, a tool that is popular among law enforcement for facial recognition services, did not immediately respond to Motherboard’s request for comment.


Ransomware Group Claims Hack of Amazon's Ring

“There's always an option to let us leak your data,” a message posted on the ransomware group’s website reads next to Ring’s logo. The ransomware group claiming responsibility for the attack is ALPHV, whose malware is known as BlackCat.

...ALPHV has previously leaked medical data, and hacked hospitality companies. It recently claimed an attack on an Irish university too.

In 2019, hackers on a Discord channel began hacking a series of Ring cameras all over the country by reusing credentials exposed in earlier hacks. These hackers then terrorized their victims; in Tennessee, for example, a hacker broke into the camera installed in the bedroom of three young girls and spoke through the camera's speaker to the girls and played the song "Tiptoe Through the Tulips" to the girls. At one point, the hackers created a podcast where they broke into Ring users' cameras live on air.


Amazon driver shares viral TikTok of the company's AI system that tracks her movements

Meanwhile, she said the in-cabin camera tracks her movement in the driver's seat.

"That camera is watching me while I drive so I cannot do a lot," she said in the video. "If I want a sip of my coffee, I have to pull over so that I can grab it and drink it because if I do it while I'm driving than that's a driver distracted, which is also a violation. I can't touch the center console or else that is a driver distracted violation."


LastPass Shouldn't Be Trusted With Your Passwords

Even before this latest blog post, some security researchers had already recommended ditching LastPass. Jeremi Gosney, a member of the core development team for password cracking software Hashcat, previously supported LastPass, he said in a lengthy Mastodon post in December. That changed. Issues Gosney flagged included LastPass suffering a total of seven major security breaches in the last ten years, ignoring vulnerability reports, and how LastPass keeps your vault encryption key in memory.

Companies get hacked all the time. Sometimes the companies are under-resourced, or face an attacker that genuinely outwitted them. Some breaches are mostly inconsequential, dealing with accounts for a particular, and not that important, website. But password management companies are not ordinary tech companies or sites. They are the custodians of their customers’ passwords that in turn can be used to completely pry open their digital lives. These are peoples’ most valuable secrets, and should be treated as such. For a password manager, you shouldn’t expect anything less than world class. Especially when you’re paying for the service, which is the case with many LastPass customers (a change the company did in 2021, which made not paying for the service incredibly inconvenient.).

At a minimum, LastPass customers should change any passwords they stored inside the service, as well as their master password which is used to access this information. They should start with their most sensitive accounts first. Unfortunately, this is likely to be a time-consuming process. Beyond that, it’s time to find another password manager altogether.


How your brain data could be used against you

She recalled how, last summer, someone from a company that makes brain devices told her that law enforcement had asked for recordings taken from an implant inside the brain of a person with epilepsy. That person had been accused of assaulting a police officer but, as the brain data proved, was just having a seizure at the time.

While the data cleared that person, similar readings could as easily be used against someone else. Neural recordings could even suggest, for example, whether a driver involved in a car accident was alert or concentrating on the road.

It’s not clear how these kinds of recordings might be used by the criminal justice system in the future. But given the explosion of research and technical advances we’re seeing in the field, it’s vital that we start thinking about these uses, and how to protect brain data, now.


Why you shouldn’t trust AI search engines

Here’s the problem: the technology is simply not ready to be used like this at this scale. AI language models are notorious bullshitters, often presenting falsehoods as facts. They are excellent at predicting the next word in a sentence, but they have no knowledge of what the sentence actually means. That makes it incredibly dangerous to combine them with search, where it’s crucial to get the facts straight.

...Meanwhile, Microsoft has gambled that expectations around Bing are so low a few errors won’t really matter. Microsoft has less than 10% of the market share for online search. Winning just a couple more percentage points would be a huge win for them, Shah says.

...Shah reckons companies are going to spin early hiccups as learning opportunities. “Rather than taking a careful approach to this, they’re going in a very bold fashion. Let the [AI system] make mistakes, because now the cat is out of the bag,” he says.


Even if you're paying for the product, you're still the product

That's just what they do. Earlier this month, a small security research firm called Mysk released a video revealing that when you tick the box on your Iphone that promises "disable the sharing of Device Analytics altogether," your Iphone continues to spy on you, and sends the data it collects to Apple:

The data Iphones gather is extraordinarily fine-grained: "what you tapped on, which apps you search for, what ads you saw, and how long you looked at a given app and how you found it."

...It doesn't stop there: "The app sent details about you and your device as well, including ID numbers, what kind of phone you’re using, your screen resolution, your keyboard languages, how you’re connected to the internet—notably, the kind of information commonly used for device fingerprinting."

...Indeed, there are so many places in Google's location privacy settings where you can tick a box that claims to turn off location spying. None of them work. A senior product manager at Google complained to her colleagues that she had turned off three different settings and was still being tracked:

...Companies will only protect your privacy to the extent that it is more profitable than not doing so. They can increase those profits by advertising privacy promises to potential customers. They can increase them more by secretly breaking those promises, And they can increase them even more by using privacy claims to block their rivals' spying, so they're the sole supplier of your nonconsensually collected personal information.


Apple Sued for Allegedly Deceiving Users With Privacy Settings After Gizmodo Story

The lawsuit accuses Apple of violating the California Invasion of Privacy Act. “Privacy is one of the main issues that Apple uses to set its products apart from competitors,” the plaintiff, Elliot Libman, said in the suit, which can be read on Bloomberg Law. “But Apple’s privacy guarantees are completely illusory.” The company has plastered billboards across the country with the slogan “Privacy. That’s iPhone.”

...“Through its pervasive and unlawful data tracking and collection business, Apple knows even the most intimate and potentially embarrassing aspects of the user’s app usage—regardless of whether the user accepts Apple’s illusory offer to keep such activities private,” the lawsuit said.

Apple is under increased scrutiny for its privacy practices as the company expands into digital advertising. Apple recently introduced new ads in the App Store, reportedly plans to ads to Apple TV, and seems focused on poaching small business advertisers from Meta, Facebook’s parent company. While Apple’s company literature loudly declares that “Privacy is a human right,” it remains to be seen how much the iPhone manufacturer is willing to compromise that right as it develops new data-driven business ventures.


Apple Is Tracking You Even When Its Own Privacy Settings Say It’s Not, New Research Says

The iPhone Analytics setting makes an explicit promise. Turn it off, and Apple says that it will “disable the sharing of Device Analytics altogether.” However, Tommy Mysk and Talal Haj Bakry, two app developers and security researchers at the software company Mysk, took a look at the data collected by a number of Apple iPhone apps—the App Store, Apple Music, Apple TV, Books, and Stocks. They found the analytics control and other privacy settings had no obvious effect on Apple’s data collection—the tracking remained the same whether iPhone Analytics was switched on or off.

The App Store appeared to harvest information about every single thing you did in real time, including what you tapped on, which apps you search for, what ads you saw, and how long you looked at a given app and how you found it. The app sent details about you and your device as well, including ID numbers, what kind of phone you’re using, your screen resolution, your keyboard languages, how you’re connected to the internet—notably, the kind of information commonly used for device fingerprinting.

...Keeping tabs on your behavior rubs some people the wrong way, regardless of the information in question. But this data can be sensitive. In the App Store, for example, the fact that you’re looking at apps related to mental health, addiction, sexual orientation, and religion can reveal things that you might not want sent to corporate servers.

...Privacy is one of the main issues that Apple uses to set its products apart from competitors. It emblazoned 40-foot billboards of the iPhone with the simple slogan “Privacy. That’s iPhone.” and ran the ads across the world for months. But the company is slowly introducing many of the internet’s privacy issues into the once sacrosanct Apple ecosystem. Apple is working hard to build an advertising empire. Apple’s ad network runs on your personal information just like the ones Google and Meta operate, albeit in a more reserved way.


they want someone to “explain to them how to be ethical through a PowerPoint with three slides and four bullet points”

The EU’s upcoming AI Act and AI liability law will require companies to document how they are mitigating harms. In the US, lawmakers in New York, California, and elsewhere are working on regulation for the use of AI in high-risk sectors such as employment. In early October, the White House unveiled the AI Bill of Rights, which lays out five rights Americans should have when it comes to automated systems. The bill is likely to spur federal agencies to increase their scrutiny of AI systems and companies.

And while the volatile global economy has led many tech companies to freeze hiring and threaten major layoffs, responsible-AI teams have arguably never been more important, because rolling out unsafe or illegal AI systems could expose the company to huge fines or requirements to delete their algorithms. For example, last spring the US Federal Trade Commission forced Weight Watchers to delete its algorithms after the company was found to have illegally collected data on children. Developing AI models and collecting databases are significant investments for companies, and being forced by a regulator to completely delete them is a big blow.

Burnout and a persistent sense of being undervalued could lead people to leave the field entirely, which could harm the field of AI governance and ethics research as a whole. It’s especially risky given that those with the most experience in solving and addressing harms caused by an organization’s AI may be the most exhausted.

...“The only mechanism that big tech companies have to handle the reality of this is to ignore the reality of it.”


Inside Fog Data Science, the Secretive Company Selling Mass Surveillance to Local Police

A data broker has been selling raw location data about individual people to federal, state, and local law enforcement agencies, EFF has learned. This personal data isn’t gathered from cell phone towers or tech giants like Google — it’s obtained by the broker via thousands of different apps on Android and iOS app stores as part of the larger location data marketplace.

The company, Fog Data Science, has claimed in marketing materials that it has “billions” of data points about “over 250 million” devices and that its data can be used to learn about where its subjects work, live, and associate. Fog sells access to this data via a web application, called Fog Reveal, that lets customers point and click to access detailed histories of regular people’s lives. This panoptic surveillance apparatus is offered to state highway patrols, local police departments, and county sheriffs across the country for less than $10,000 per year.

...The language used in the document often invokes terms used by intelligence agencies. For example, a core advertised feature is the ability to run a “pattern of life analysis,” which is what intelligence analysts call a profile of an individual’s habits based on long-term behavioral data. Fog Reveal is also “ideal for tipping and cueing,” which means using low-resolution, dragnet surveillance to decide where to perform more targeted, high-resolution monitoring. The brochure also includes a screenshot of Fog Reveal being used to monitor “a location at the US/Mexico border,” and an alternate version of the brochure listed “Border Security/Tracking” as a possible use case. As we will discuss in our next post, records show that Fog has worked with multiple DHS-affiliated fusion centers, where local and federal law enforcement agencies share resources and data.


Deception, exploited workers, and cash handouts: How Worldcoin recruited its first half a million test users

Our investigation revealed wide gaps between Worldcoin’s public messaging, which focused on protecting privacy, and what users experienced. We found that the company’s representatives used deceptive marketing practices, collected more personal data than it acknowledged, and failed to obtain meaningful informed consent. These practices may violate the European Union’s General Data Protection Regulations (GDPR)—a likelihood that the company’s own data consent policy acknowledged and asked users to accept—as well as local laws.

...Central to Worldcoin’s distribution was the high-tech orb itself, armed with advanced cameras and sensors that not only scanned irises but took high-resolution images of “users’ body, face, and eyes, including users’ irises,” according to the company’s descriptions in a blog post. Additionally, its data consent form notes that the company also conduct “contactless doppler radar detection of your heartbeat, breathing, and other vital signs.” In response to our questions, Worldcoin said it never implemented vital sign detection techniques, and that it will remove this language from its data consent form. (As of press time, the language remains.)

...But of the people we interviewed, none were explicitly told—or, in the case of orb operators, told others—that they were “test users,” that photographs and videos of their faces, and 3D body maps were captured and being used to train the orb’s “anti-fraud algorithm” to “differentiate between people,” that their data was treated differently from the way others’ would be handled later, or that they could ask for their data to be deleted.

...Pete Howson, a senior lecturer at Northumbria University who researches cryptocurrency in international development, categorizes Worldcoin’s actions as a sort of crypto-colonialism, where “blockchain and cryptocurrency experiments are being imposed on vulnerable communities essentially because…these people can’t push back,” he told MIT Technology Review in an email.

...Speaking to Blania clarified something we had struggled to make sense of: how a company could speak so passionately about its privacy-protecting protocols while clearly violating the privacy of so many. Our interview helped us see that, for Worldcoin, these legions of test users were not, for the most part, its intended end users. Rather, their eyes, bodies, and very patterns of life were simply grist for Worldcoin’s neural networks. The lower-level orb operators, meanwhile, were paid pennies to feed the algorithm, often grappling privately with their own moral qualms. The massive effort to teach Worldcoin’s AI to recognize who or what was human was, ironically, dehumanizing to those involved.


The news highlights the nascent market of vehicle location data, tapped into by insurance firms, advertisers, and others who can obtain it.

Otnomo's data offering is a "privacy nightmare," Adam Schwartz, a staff attorney at the Electronic Frontier Foundation told Motherboard. Schwartz added that the EFF has been concerned that the location data of vehicles would be "bundled and sold to data brokers, who want to turn a profit," and pointed to how Otonomo had some of this data on their public facing website.

...Otonomo, founded in Israel, has agreements with some car manufacturers to source location data from vehicles. A Otonomo presentation made for investors says the company has partnerships with 16 OEMs with an installed base of over 40 million vehicles, and that it collects 4.3 billion data points a day. The company also obtains data from telemetry service providers (TSPs), which are other sources such as navigation apps and satnavs that can act as a proxy for a vehicle's location and movements. The presentation adds that in turn "thousands of organizations" have access to Otonomo's data.

..."Unless Otonomo is specifically listed in every one of those agreements, that is not going to reach the 'freely-given and unambiguous' threshold for consent, particularly if users are unable to purchase the cars without providing their data to Otonomo. In addition, there would have to be consent for Otonomo to sell/share that personal data with additional parties (which, under their current practices, appear to be literally anyone)," Calli Schroeder, a privacy attorney, told Motherboard in an email. "Essentially, they're making a lot of consent claims here that I'm not sure they can back up. In addition, it's unclear whether the obligation to obtain consent extends to service providers like TSPs. That could be a real area of liability as well."

..."Consequently, while individuals around the world gain more and more rights over their data, the extreme murkiness of the automotive data ecosystem means it is very, very challenging for drivers and vehicle occupants to exercise those rights in practice—because they have no idea who has their data in the first place!" he added.


A surveillance contractor that has previously sold services to the U.S. military is advertising a product that it says can locate the real-time locations of specific cars in nearly any country on Earth

Although the company told Motherboard it has not sold the product to the U.S. government at this time, the news highlights the scale and reach of car-tracking technology, and the fact that car location data is of interest not just to insurance companies and the finance sector, but to government contractors who explicitly say they want to source the data for intelligence and surveillance purposes.

..."Vehicle telematics is data transmitted from the vehicle to the automaker or OEM through embedded communications systems in the car," the Ulysses document continues. "Among the thousands of other data points, vehicle location data is transmitted on a constant and near real time basis while the vehicle is operating."

...With a consumer using a GPS navigation tool, for instance, "The OEM will have first dibs to the data, because they made the car and have access to the telematics," Andrea Amico, the founder of Privacy4Cars, which, among other things, sells tools to help dealerships remove data from vehicles, told Motherboard in a phone call. "But the company that provides the map itself, for instance, would have access to it; the company that provides the infotainment system may have access to it; the company that provides the traffic data may have access to it; the company that provides the parking data may have access to it. Right there and then you've got five companies that are getting your location."

...The role of data that cars collect and who has access to it was a flashpoint in the November elections in Massachusetts. A "right to repair" ballot measure there sought to give independent manufacturers and owners greater ability to access repair information on vehicles; car manufacturers spent $25 million lobbying against it, claiming that passing the law would also give access to telematics data collected by various sensors. Manufacturers said this data was highly sensitive, and that wider access to it could be used by "sexual predators" to stalk innocent people (there was no specific provision in the measure that allowed this, and the measure overwhelmingly passed). Meanwhile, car companies are sharing this type of data with third-parties themselves.


Leave a Comment