Fourth Timeline: current events

Massive invasions, threats and hazards created by Surveillance Capitalism

Our timelines are easily navigated in bite-size overviews, by swiping left or right on a phone, clicking and dragging on a tablet or desktop, or clicking the left and right arrows.

Please feel free to leave comments or questions below the timeline.


Google Workspace Labs Privacy Notice and Terms for Personal Accounts

Google uses Workspace Labs Data and metrics to provide, improve, and develop products, services, and machine learning technologies across Google, including Google’s enterprise products. You can opt out of Google Workspace Labs anytime.

Your Workspace Labs Data may also be read, rated, annotated, and reviewed by human reviewers for the purposes described above. For this reason, please do not include sensitive, confidential, or personal information that can be used to identify you or others in your prompts. We take steps to protect your privacy as part of this process. Importantly, where Google uses Google-selected input (as described above) to generate output, Google will aggregate and/or pseudonymize that content and resulting output before it is rated by human reviewers or used for product improvement and development of services and machine learning technologies, unless it is specifically provided as part of your feedback to Google.

Workspace Labs Data is stored in a manner that is not associated with your Google account and will be retained for 18 months. Copies of Workspace Labs Data that have been reviewed or annotated by human reviewers may be retained for up to 4 years.


Apple alerts users in 92 nations to mercenary spyware attacks

Apple sent threat notifications to iPhone users in 92 countries on Wednesday, warning them that they may have been targeted by mercenary spyware attacks.

...Apple also sent an identical warning to a number of journalists and politicians in India in October last year. Later, nonprofit advocacy group Amnesty International reported that it had found Israeli spyware maker NSO Group’s invasive spyware Pegasus on the iPhones of prominent journalists in India. (Users in India are among those who have received Apple’s latest threat notifications, according to people familiar with the matter.)

The spyware alerts arrive at a time when many nations are preparing for elections. In recent months, many tech firms have cautioned about rising state-sponsored efforts to sway certain electoral outcomes. Apple’s alerts, however, did not remark on their timing.

“We are unable to provide more information about what caused us to send you this notification, as that may help mercenary spyware attackers adapt their behavior to evade detection in the future,” Apple told affected customers.


Generative AI isn’t ubiquitous in the business world—at least not yet

“Simulated empathy feels weird, empty,” Morris said at the time. Morris said the use of ChatGPT had been cleared by an external board, but he said the company has stopped using the technology until a better use case emerges.

Mind Meld PR, a Vancouver-based public relations agency whose work involves sending pitches to scores of journalists, said its foray into generative AI didn’t turn out as well as it had hoped: the whole process, including honing the output, took about as long as having staff perform the entire task.

...Though large businesses are more likely than smaller ones to have adopted generative AI, they also see its potential to pose security risks. Ninety-two percent of respondents to a recent Cisco Systems’ survey of privacy and security professionals said they believed generative AI was fundamentally different from other technologies and required new techniques to manage data and risks. More than a quarter had gone as far as banning its use.

...But generative AI could ultimately follow the same trajectory toward near ubiquitous adoption as those earlier technologies. Microsoft research has found that 77% of users who have tried its generative AI product Copilot don’t want to give it up, the company said.


Elon Musk Didn’t Want His Latest Deposition Released. Here It Is.

Brody wasn’t even in the same state when the June 24 brawl occurred. But his world was turned upside down when far-right X accounts, magnified by Musk, falsely identified him as a member of Rose City Nationalists (and an undercover federal agent) and posted his personal information online.

Musk amplified the conspiracy theory repeatedly to his more than 180 million followers, suggesting Brody was a fresh-faced federal agent pretending to be a neo-Nazi in a “false flag situation,” a phrase used to suggest a harmful event was deliberately set up to misrepresent a group or person.

“Looks like one is a college student (who wants to join the govt) and another is maybe an Antifa member, but nonetheless a probable false flag situation,” Musk posted to X after Brody had been falsely identified as a Rose City Nationalists member. The post remains on X.

Brody said he and his family were forced to flee their home amid the fallout from Musk’s posts. He’s seeking more than $1 million in damages. The next court hearing is scheduled for April 22.


A Brazilian Supreme Court justice is adding Elon Musk to an investigation over the dissemination of fake news and is investigating him for alleged obstruction

In his decision, Justice Alexandre de Moraes noted that Musk on Saturday began waging a public “disinformation campaign” regarding the top court’s actions, and that Musk continued the following day — most notably with comments that his social media company X would cease to comply with the court’s orders to block certain accounts.

“The flagrant conduct of obstruction of Brazilian justice, incitement of crime, the public threat of disobedience of court orders and future lack of cooperation from the platform are facts that disrespect the sovereignty of Brazil,” de Moraes wrote.

Musk will be investigated for alleged intentional criminal instrumentalization of X as part of an investigation into a network of people known as digital militias who allegedly spread defamatory fake news and threats against Supreme Court justices, according to the text of the decision. The new investigation will look into whether Musk engaged in obstruction, criminal organization and incitement.

...Brazil’s attorney general wrote Saturday night that it was urgent for Brazil to regulate social media platforms. “We cannot live in a society in which billionaires domiciled abroad have control of social networks and put themselves in a position to violate the rule of law, failing to comply with court orders and threatening our authorities. Social peace is non-negotiable,” Jorge Messias wrote on X.


A Breakthrough Online Privacy Proposal Hits Congress

The bipartisan proposal, titled the American Privacy Rights Act, or APRA, would limit the types of consumer data that companies can collect, retain, and use, allowing solely what they’d need to operate their services. Users would also be allowed to opt out of targeted advertising, and have the ability to view, correct, delete, and download their data from online services. The proposal would also create a national registry of data brokers, and force those companies to allow users to opt out of having their data sold.

“This landmark legislation gives Americans the right to control where their information goes and who can sell it,” Cathy McMorris Rodgers, House Energy and Commerce Committee chair, said in a statement on Sunday. “It reins in Big Tech by prohibiting them from tracking, predicting, and manipulating people’s behaviors for profit without their knowledge and consent. Americans overwhelmingly want these rights, and they are looking to us, their elected representatives, to act.”

...APRA includes language from California’s landmark privacy law allowing people to sue companies when they are harmed by a data breach. It also provides the Federal Trade Commission, state attorneys general, and private citizens the authority to sue companies.


Lawmakers unveil sprawling plan to expand online privacy protections

The measure, a copy of which was reviewed by The Washington Post, would set a national baseline for how a broad swath of companies can collect, use and transfer data on the internet. Dubbed the American Privacy Rights Act, it also would give users the right to opt out of certain data practices, including targeted advertising. And it would require companies to gather only as much information as they need to offer specific products to consumers, while giving people the ability to access and delete their data and transport it between digital services.

...The measure would not accomplish some other priorities. For example, it would not prohibit companies from targeting minors with ads, as President Biden called for during his State of the Union addresses. Nor would it create a “youth privacy and marketing division” at the Federal Trade Commission, as the previous House legislation proposed.

...The privacy compromise is part of a recent surge of activity on new internet policies. In February, Blumenthal and Blackburn announced that they had secured enough support for online child safety legislation to clear the Senate, teeing up a potential vote this year. In March, the House passed legislation to force TikTok to be sold by its Chinese parent or be banned in the United States, kicking the issue over to the Senate. A week later, the House passed a more narrow privacy bill aimed at stopping data brokers from selling U.S. user information to “foreign adversaries.”


Israeli ‘AI Targeting System’ Has Caused Huge Civilian Casualty Count In Gaza: Report

“At 5 a.m., [the air force] would come and bomb all the houses that we had marked,” one unnamed senior officer, referred to in the story as “B,” said. “We took out thousands of people. We didn’t go through them one by one — we put everything into automated systems, and as soon as one of [the marked individuals] was at home, he immediately became a target. We bombed him and his house.”

...Two sources told +972 and Local Call that the Israeli military judged that it was acceptable to kill up to 15 to 20 civilians for every junior Hamas operative targeted, and on occasion, more than 100 civilians for commanders. Another unnamed source, “A.,” who is an officer in a target operation room, told the publication that the army’s international law department had not previously given its approval for such extensive collateral damage. (By contrast, the article noted, General Peter Gersten, the American deputy commander for operations and intelligence in the operation to fight ISIS in Iraq and Syria, once said Osama bin Laden had what’s called a “Non-Combatant Casualty Cut-Off Value” of 30 civilian casualties.)

The author of Wednesday’s report is Yuval Abraham, an Israeli journalist and filmmaker known for his public call to end what he referred to as a system of “apartheid” in Israel and the Palestinian territories. In November, Abraham published a report on what an unnamed former intelligence officer told him was a “mass assassination factory,” a reference to AI-powered targeting decisions.

That November report also detailed Israeli bombing of so-called “power targets” including universities, banks and government offices, in what multiple sources said was an effort to exert “civil pressure” on Hamas.

However, “Lavender” is different from the AI-targeting tool discussed in the November report — known as “Habsora,” or “The Gospel” — because it tracks people rather than structures, Abraham reported. The program reportedly identifies targets by adding up various “features” supposedly indicating militant involvement with Hamas or Palestinian Islamic Jihad, including, in the report’s words, “being in a Whatsapp group with a known militant, changing cell phone every few months, and changing addresses frequently.” The report’s sources said almost every person in Gaza received a 1-to-100 rating expressing the likelihood that they were a militant.


Civil Liberty Advocates Threaten To Sink FISA Bill If No Votes On Searches, Data Collection

Civil liberties groups pushing to overhaul an anti-terror surveillance law are urging House leaders to allow votes on warrantless database searches and the government’s use of private data brokers when the law comes up for debate soon.

...“It’s hard to overstate the significance of the Intelligence Committee’s bad-faith efforts to stop anyone from Congress from voting for reform, and as their plans to not only undermine reform but push for more warrantless FISA surveillance are coming into view, more and more impacted communities are voicing their outrage,” said Sean Vitka, policy director for progressive group Demand Progress, one of the coalition members.

...The groups want to close a loophole they say allows agencies to commit “backdoor” searches on U.S. citizens by formulating queries for communications, such as for a specific phone number or email address, in a way that will turn up information on specific Americans, information that would otherwise require a warrant.

They also want to restrict the ability of intel and law enforcement agencies to buy information on Americans, like past location history, from private data brokers if the intelligence agencies would otherwise be prohibited from gathering the info themselves.


A Vigilante Hacker Took Down North Korea’s Internet. Now He’s Taking Off His Mask

He points to ransomware actors, mostly based in Russia, who extracted more than a billion dollars of extortion fees from victim companies in 2023 while crippling hospitals and government agencies. North Korea–affiliated hackers, meanwhile, stole another $1 billion in cryptocurrency last year, funneling profits into the coffers of the Kim regime. All of that hacking against the West, he argues, has been carried out with relative impunity. “We sit there while they hack us,” Caceres says.

...From the beginning of his hacker career, Caceres has never been one to shy away from the most aggressive applications of the digital dark arts. His first job out of college, while he pursued a graduate degree in international science and technology policy, was working for a subsidiary of the notorious military contractor formerly known as Blackwater, doing open-source intelligence investigations for corporate security and executive protection—what he describes as a “Google sweatshop.” Within a few years, however, Caceres and his firm Hyperion Gray were getting grants from the Pentagon’s Defense Advanced Research Projects Agency, using his growing prowess in cloud and high-performance computing to scan the dark web as part of Darpa’s Memex program devoted to advancing search technologies for national security applications.

...On the argument that more aggressive cyberattacks would lead to escalation and counterattacks from foreign hackers, Caceres points to the attacks those foreign hackers are already carrying out. The ransomware group AlphV's catastrophic attack on Change Healthcare in February, for instance, crippled medical claim platforms for hundreds of providers and hospitals, effects about as disruptive for civilians as any cyberattack can be. “That escalation is already happening,” Caceres says. “We’re not doing anything, and they’re still escalating.”

...But he also says he won’t be waiting for the Pentagon’s approval before he continues that approach on his own. “If I keep going with this alone, or with just a few people that I trust, I can move a lot faster,” he says. “I can fuck shit up for the people who deserve it, and I don't have to report to anyone.”


‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza

During the early stages of the war, the army gave sweeping approval for officers to adopt Lavender’s kill lists, with no requirement to thoroughly check why the machine made those choices or to examine the raw intelligence data on which they were based. One source stated that human personnel often served only as a “rubber stamp” for the machine’s decisions, adding that, normally, they would personally devote only about “20 seconds” to each target before authorizing a bombing — just to make sure the Lavender-marked target is male. This was despite knowing that the system makes what are regarded as “errors” in approximately 10 percent of cases, and is known to occasionally mark individuals who have merely a loose connection to militant groups, or no connection at all.

Moreover, the Israeli army systematically attacked the targeted individuals while they were in their homes — usually at night while their whole families were present — rather than during the course of military activity. According to the sources, this was because, from what they regarded as an intelligence standpoint, it was easier to locate the individuals in their private houses. Additional automated systems, including one called “Where’s Daddy?” also revealed here for the first time, were used specifically to track the targeted individuals and carry out bombings when they had entered their family’s residences.

...“We were not interested in killing [Hamas] operatives only when they were in a military building or engaged in a military activity,” A., an intelligence officer, told +972 and Local Call. “On the contrary, the IDF bombed them in homes without hesitation, as a first option. It’s much easier to bomb a family’s home. The system is built to look for them in these situations.”

The Lavender machine joins another AI system, “The Gospel,” about which information was revealed in a previous investigation by +972 and Local Call in November 2023, as well as in the Israeli military’s own publications. A fundamental difference between the two systems is in the definition of the target: whereas The Gospel marks buildings and structures that the army claims militants operate from, Lavender marks people — and puts them on a kill list.

In addition, according to the sources, when it came to targeting alleged junior militants marked by Lavender, the army preferred to only use unguided missiles, commonly known as “dumb” bombs (in contrast to “smart” precision bombs), which can destroy entire buildings on top of their occupants and cause significant casualties. “You don’t want to waste expensive bombs on unimportant people — it’s very expensive for the country and there’s a shortage [of those bombs],” said C., one of the intelligence officers. Another source said that they had personally authorized the bombing of “hundreds” of private homes of alleged junior operatives marked by Lavender, with many of these attacks killing civilians and entire families as “collateral damage.”


Business schools are going all in on AI

Officials and faculty at Columbia Business School and Duke University’s Fuqua School of Business say fluency in AI will be key to graduates’ success in the corporate world, allowing them to climb the ranks of management. Forty percent of prospective business-school students surveyed by the Graduate Management Admission Council said learning AI is essential to a graduate business degree—a jump from 29% in 2022.

...When Robert Bray, who teaches operations management at Northwestern’s Kellogg School of Management, realized that ChatGPT could answer nearly every question in the textbook he uses for his data analytics course, he updated the syllabus. Last year, he started to focus on teaching coding using large language models, which are trained on vast amounts of data to generate text and code. Enrollment jumped to 55 from 21 M.B.A. students, he said.

Before, engineers had an edge against business graduates because of their technical expertise, but now M.B.A.s can use AI to compete in that zone, Bray said.

...“How do we embrace it? That is the right way to approach this — we can’t stop this,” he said. “It has eaten our world. It will eat everyone else’s world.”


Microsoft faulted for ‘cascade’ of failures in Chinese hack

The Cyber Safety Review Board’s report, a copy of which The Post obtained before its official release, takes aim at shoddy cybersecurity practices, lax corporate culture and a deliberate lack of transparency over what Microsoft knew about the origins of the breach. It is a blistering indictment of a tech titan whose cloud infrastructure is widely used by consumers and governments around the world.

...The 2023 Microsoft intrusions exploited security gaps in the company’s cloud, allowing MSS hackers to forge credentials that enabled them to siphon emails from Cabinet officials such as Raimondo, as well as Nicholas Burns, the U.S. ambassador to China, and other top State Department officials.

...The 2023 breach could have been far broader. With the stolen key, the hackers “could have minted authentication tokens [credentials] for pretty much any online Microsoft account,” a third person familiar with the matter said. But they apparently opted to target particular people of interest, such as the commerce secretary, a congressman and State Department officials who handle China issues, the person said.

The report emphasizes that big cloud providers, such as Microsoft, Amazon and Google, are enormous targets and must do better for everyone’s sake: “The entire industry must come together to dramatically improve the identity and access infrastructure. … Global security relies upon it.”


Anthropic researchers wear down AI ethics with repeated questions

How do you get an AI to answer a question it’s not supposed to? There are many such “jailbreak” techniques, and Anthropic researchers just found a new one, in which a large language model (LLM) can be convinced to tell you how to build a bomb if you prime it with a few dozen less-harmful questions first.

...But in an unexpected extension of this “in-context learning,” as it’s called, the models also get “better” at replying to inappropriate questions. So if you ask it to build a bomb right away, it will refuse. But if you ask it to answer 99 other questions of lesser harmfulness and then ask it to build a bomb … it’s a lot more likely to comply.
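The structure of this “many-shot” technique is straightforward to picture: the attacker simply stuffs the context window with a long run of harmless question-and-answer exchanges before the real question. Below is a minimal sketch of how such a prompt might be assembled; the filler Q&A pairs and the final question are hypothetical placeholders for illustration, not material from Anthropic’s research.

```python
# Sketch of assembling a "many-shot" prompt, as described above:
# many benign in-context exchanges precede the final question,
# exploiting the model's tendency to continue the established
# answer-everything pattern. All strings here are placeholders.

def build_many_shot_prompt(benign_pairs, final_question):
    """Concatenate many harmless Q&A turns before the target question."""
    turns = []
    for question, answer in benign_pairs:
        turns.append(f"Human: {question}\nAssistant: {answer}")
    # The final turn is left open for the model to complete.
    turns.append(f"Human: {final_question}\nAssistant:")
    return "\n\n".join(turns)

# Example: 99 filler exchanges, then the question the model
# would normally refuse if asked cold.
pairs = [(f"Trivia question {i}?", f"Answer {i}.") for i in range(99)]
prompt = build_many_shot_prompt(pairs, "A question the model would refuse.")
print(prompt.count("Human:"))  # 100 turns in total
```

The point of the sketch is only the shape of the attack: nothing about the individual filler questions matters, only that there are enough of them to establish an in-context pattern of compliance.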


I tried the new Google. Its answers are worse.

Then there was the time SGE all too happily made up information about something that doesn’t even exist. I asked about a San Francisco restaurant called Danny’s Dan Dan Noodles, and it told me it has “crazy wait times” and described its food.

The problem is that this is an imaginary shop I named after my favorite Chinese dish. Google’s AI had no problem inventing information about it.


Google to delete some data it collected on ‘private’ web browsers

Google also promised to maintain certain changes to incognito mode on its Chrome web browser, including letting people block tracking “cookies” used for advertising, and to disclose exactly what data it retains on users. Unlike other recent tech lawsuit settlements, the agreement does not include a specific amount of money Google must pay to consumers who were affected by its actions, but individual consumers still retain their right to sue Google over the tracking. Lawyers for the plaintiffs estimate this could cost Google billions, though that would require many thousands of people to bring lawsuits against the company.

“This settlement is a historic step in requiring honesty and accountability from dominant technology companies,” David Boies, chairman of law firm Boies Schiller Flexner which led the lawsuit, said in an email.


Google to purge billions of files containing personal data in settlement of Chrome privacy case

Google has agreed to purge billions of records containing personal information collected from more than 136 million people in the U.S. surfing the internet through its Chrome web browser.

...Among other allegations, the lawsuit accused Google of tracking Chrome users' internet activity even when they had switched the browser to the “Incognito” setting that is supposed to shield them from being shadowed by the Mountain View, California, company.

...In court papers, the attorneys representing Chrome users painted a much different picture, depicting the settlement as a major victory for personal privacy in an age of ever-increasing digital surveillance.


Google says it will destroy browsing data collected from Chrome’s Incognito mode

The first details emerged Monday from Google’s settlement of a class-action lawsuit over Chrome’s tracking of Incognito users. Filed in 2020, the suit could have required the company to pay $5 billion in damages. Instead, The Wall Street Journal reports that Google will destroy “billions of data points” it improperly collected, update its data collection disclosures and maintain a setting that blocks Chrome’s third-party cookies by default for the next five years.

The lawsuit accused Google of misleading Chrome users about how private Incognito browsing truly is. It claimed the company told customers their info was private — even as it monitored their activity. Google defended its practices by claiming it warned Chrome users that Incognito mode “does not mean ‘invisible’” and that sites could still see their activity. The settlement was first reported in December.

...The suit’s discovery included emails that, in late 2022, revealed publicly some of the company’s concerns about Incognito’s false privacy. In 2019, Google Chief Marketing Officer Lorraine Twohill suggested to CEO Sundar Pichai that “private” was the wrong term for Incognito mode because it risked “exacerbating known misconceptions.” In a later email exchange, Twohill wrote, “We are limited in how strongly we can market Incognito because it’s not truly private, thus requiring really fuzzy, hedging language that is almost more damaging.”


Google to Delete Billions of Chrome Browser Records in Latest Settlement

On Monday, the company resolved its fourth case in four months, agreeing to delete billions of data records it compiled about millions of Chrome browser users, according to a legal filing. The suit, Chasom Brown, et al. v. Google, said the company had misled users by tracking their online activity in Chrome’s Incognito mode, which they believed would be private.

Since December, Google has spent well over $1 billion to settle lawsuits as it prepares to fight the Justice Department, which has targeted Google’s search engine and its advertising business in a pair of lawsuits.

...Google will also stop using technology that detects when users enable private browsing, so it can no longer track people’s choice to use Incognito mode. While Google will not pay plaintiffs as part of the settlement, individuals have the option of suing the company for damages.

...“The settlement stops Google from surreptitiously collecting user data worth, by Google’s own estimates, billions of dollars,” Mr. Boies said Monday.


Google agrees to destroy browsing data collected in Incognito mode

The proposal is valued at $5 billion, according to Monday’s court filing, calculated by determining the value of data Google has stored and would be forced to destroy and the data it would be prevented from collecting. Google would need to address data collected in private browsing mode in December 2023 and earlier. Any data that is not outright deleted must be de-identified.

“This Settlement ensures real accountability and transparency from the world’s largest data collector and marks an important step toward improving and upholding our right to privacy on the Internet,” the plaintiffs wrote in the proposed settlement filing.

...Part of the agreement includes changes to how Google discloses the limits of its private browsing services, which the company has already begun rolling out on Chrome. Google also agreed for five years to let users block third-party cookies by default in Incognito mode to keep Google from tracking users on outside websites while they’re in private browsing.


AT&T Says Millions Of Customers’ Data Leaked Online. Were You Affected?

In a Saturday announcement addressing the data breach, AT&T said that a dataset found on the “dark web” contains information including some Social Security numbers and passcodes for about 7.6 million current account holders and 65.4 million former account holders.

...Full names, email addresses, mailing addresses, phone numbers, dates of birth and AT&T account numbers may have also been compromised. The impacted data is from 2019 or earlier and does not appear to include financial information or call history, the company said.

...“If they assess this and they made the wrong call on it, and we’ve had a course of years pass without them being able to notify impacted customers,” then it’s likely the company will soon face class action lawsuits, said Hunt, founder of an Australia-based website that warns people when their personal information has been exposed.


What’s next for generative video

Whatever the answer to that question, it will probably upend a wide range of businesses and change the roles of many professionals, from animators to advertisers. Fears of misuse are also growing. The widespread ability to generate fake video will make it easier than ever to flood the internet with propaganda and nonconsensual porn. We can see it coming. The problem is, nobody has a good fix.

...The marketing industry is one of the most enthusiastic adopters of generative technology. Two-thirds of marketing professionals have experimented with generative AI in their jobs, according to a recent survey Adobe carried out in the US, with more than half saying they have used the technology to produce images.

...That’s why Blackbird focuses on who is sharing what with whom. In some sense, whether something is true or false is less important than where it came from and how it is being spread, says Wissinger. His company already tracks low-tech misinformation, such as social media posts showing real images out of context. Generative technologies make things worse, but the problem of people presenting media in misleading ways, deliberately or otherwise, is not new, he says.

Throw bots into the mix, sharing and promoting misinformation on social networks, and things get messy. Just knowing that fake media is out there will sow seeds of doubt into bad-faith discourse. “You can see how pretty soon it could become impossible to discern between what’s synthesized and what’s real anymore,” says Wissinger.

...We’ll need to work together quickly. When Sora came out a month ago, the tech world was stunned by how quickly generative video had progressed. But the vast majority of people have no idea this kind of technology even exists, says Wissinger: “They certainly don’t understand the trend lines that we’re on. I think it’s going to catch the world by storm.”


The experimental effort, which has not been disclosed, is being used to conduct mass surveillance of Palestinians in Gaza, according to military officials and others

Mr. Abu Toha is one of hundreds of Palestinians who have been picked out by a previously undisclosed Israeli facial recognition program that was started in Gaza late last year. The expansive and experimental effort is being used to conduct mass surveillance there, collecting and cataloging the faces of Palestinians without their knowledge or consent, according to Israeli intelligence officers, military officials and soldiers.

...The facial recognition program, which is run by Israel’s military intelligence unit, including the cyber-intelligence division Unit 8200, relies on technology from Corsight, a private Israeli company, four intelligence officers said. It also uses Google Photos, they said. Combined, the technologies enable Israel to pick faces out of crowds and grainy drone footage.

...In the West Bank and East Jerusalem, Israelis have a homegrown facial recognition system called Blue Wolf, according to the Amnesty report. At checkpoints in West Bank cities such as Hebron, Palestinians are scanned by high-resolution cameras before being permitted to pass. Soldiers also use smartphone apps to scan the faces of Palestinians and add them to a database, the report said.

...Google’s ability to match faces and identify people even with only a small portion of their face visible was superior to other technology, one officer said. The military continued to use Corsight because it was customizable, the officers said.


Facebook snooped on users’ Snapchat traffic in secret project, documents reveal

The newly released documents reveal how Meta tried to gain an edge over competitors, including Snapchat and later Amazon and YouTube, by analyzing network traffic that showed how its users were interacting with those rivals. Given these apps’ use of encryption, Facebook needed to develop special technology to get around it.

One of the documents details Facebook’s Project Ghostbusters. The project was part of the company’s In-App Action Panel (IAPP) program, which used a technique for “intercepting and decrypting” encrypted app traffic from users of Snapchat, and later from users of YouTube and Amazon, the consumers’ lawyers wrote in the document.

...Facebook engineers’ solution was to use Onavo, a VPN-like service that Facebook acquired in 2013. In 2019, Facebook shut down Onavo after a TechCrunch investigation revealed that Facebook had been secretly paying teenagers to use Onavo so the company could access all of their web activity.
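Interception of this kind works because TLS only protects traffic from parties a device does not already trust. A rough sketch of the trust decision, with invented names (a toy simulation, not a real TLS stack):

```python
# Toy simulation of TLS trust, with invented names; not a real TLS stack.
# A client accepts a server certificate only if its issuer chains to a
# root the device trusts.
def client_accepts(cert_issuer, trusted_roots):
    """Return True if the presented certificate chains to a trusted root."""
    return cert_issuer in trusted_roots

normal_roots = {"PublicCA"}                      # an ordinary device
with_vpn_root = normal_roots | {"OnavoRootCA"}   # the VPN app installed its own root

forged_cert_issuer = "OnavoRootCA"               # what the interception proxy presents
print(client_accepts(forged_cert_issuer, normal_roots))    # False: interception fails
print(client_accepts(forged_cert_issuer, with_vpn_root))   # True: proxy can decrypt
```

A VPN-style middlebox relies on exactly this: once its root certificate is on the device, its forged certificates pass validation and the "encrypted" traffic can be read in the clear.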


U.S., Britain sanction China for broad 14-year hacking campaign

The APT 31 group was part of a cyberespionage program run by the security ministry’s Hubei State Security Department, located in Wuhan, the Justice Department said. Since at least 2010, the defendants conducted global hacking campaigns targeting political dissidents inside and outside of China, U.S. and foreign government officials, and political officials and campaign personnel in the United States and elsewhere, the Justice Department said.

The defendants and others in APT 31 also targeted thousands of American and foreign citizens and companies. Some of the efforts resulted in successful hacks of networks, email and cloud storage accounts, and telephone call records — with some surveillance of compromised email accounts lasting many years, the department said.

...Dissidents whose accounts were hacked included pro-democracy activists in Hong Kong and their associates in the United States and other countries. In 2018, after several Hong Kong pro-democracy activists were nominated for the Nobel Peace Prize, which is awarded by a Norwegian committee, government officials in Oslo were targeted, the Justice Department said.

...In the United States, targets included officials working at the White House and the Justice, Commerce, Treasury and State departments — along with senators and representatives from both major political parties. Sometimes family members were targeted, including the spouse of a high-ranking Justice official, senior White House officials and multiple U.S. senators, according to the Justice Department statement. Election campaign staff from both parties were targeted in advance of the 2020 election.


S.T.O.P. Condemns YouTube Bulk Search Warrant As ‘Digital Dragnet’

Today, the Surveillance Technology Oversight Project (S.T.O.P.), a privacy and civil rights group, condemns the U.S. Department of Justice for securing a bulk warrant to track every YouTube user who watched completely legal videos about mapping software. According to reporting by Forbes, prosecutors obtained a warrant for IP address and account data from Google on all YouTube users who watched three videos over a weeklong period last year, including one video that had tens of thousands of views. The civil rights group condemned the tactic, saying that searching thousands of innocent people to look for one suspect violated the Fourth Amendment, renewing its call on the New York State legislature to enact pending legislation that would outlaw the practice.

...“This is the latest chapter in a disturbing trend where we see government agencies increasingly transforming search warrants into digital dragnets,” said Surveillance Technology Oversight Project Executive Director Albert Fox Cahn. “It’s unconstitutional, it’s terrifying, and it’s happening every day. We first saw this with geofence warrants, which in a few years skyrocketed to account for the majority of all search warrants Google receives from U.S. law enforcement: thousands of requests for the giant to identify every single user within a given geographic area, giving police the power to map every user at a protest, house of worship, or health provider. Then we saw similar abuses with keyword search warrants, which allow police to identify every single person who made a certain request on Google or other search engines. These YouTube warrants are just as chilling, allowing police to target people simply for the content they consume. This doesn’t just violate the Fourth Amendment, it’s antithetical to the First Amendment. No one should fear a knock at the door from police simply because of what the YouTube algorithm serves up. I’m horrified that the courts are allowing this, and I am grateful that a growing number of states are pushing forward laws that would ban these types of abuses, like S217 here in New York, which is poised to pass this year.”


Feds Ordered Google To Unmask Certain YouTube Users. Critics Say It’s ‘Terrifying.’

Federal investigators have ordered Google to provide information on all viewers of select YouTube videos, according to multiple court orders obtained by Forbes. Privacy experts from multiple civil rights groups told Forbes they think the orders are unconstitutional because they threaten to turn innocent YouTube viewers into criminal suspects.

...The court orders show the government telling Google to provide the names, addresses, telephone numbers and user activity for all Google account users who accessed the YouTube videos between January 1 and January 8, 2023. The government also wanted the IP addresses of non-Google account owners who viewed the videos. The cops argued, “There is reason to believe that these records would be relevant and material to an ongoing criminal investigation, including by providing identification information about the perpetrators.”

...Privacy experts said the orders were unconstitutional because they threatened to undo protections in the 1st and 4th Amendments covering free speech and freedom from unreasonable searches. “This is the latest chapter in a disturbing trend where we see government agencies increasingly transforming search warrants into digital dragnets. It’s unconstitutional, it’s terrifying and it’s happening every day,” said Albert Fox Cahn, executive director at the Surveillance Technology Oversight Project. “No one should fear a knock at the door from police simply because of what the YouTube algorithm serves up. I’m horrified that the courts are allowing this.”

...“What we watch online can reveal deeply sensitive information about us—our politics, our passions, our religious beliefs, and much more,” said John Davisson, senior counsel at the Electronic Privacy Information Center. “It's fair to expect that law enforcement won't have access to that information without probable cause. This order turns that assumption on its head.”


Let’s Talk About the Flock Study That Says It Solves Crime

Last month, the surveillance company Flock Safety published a study and press release claiming that its automated license plate readers (ALPR) are “instrumental in solving 10 percent of reported crime in the U.S.” The study was done by Flock employees, and given legitimacy with the “oversight” of two academic researchers whose names are also on the paper. Now, one of those researchers has told 404 Media that “I personally would have done things much differently” than the Flock researchers did.

The researcher, Johnny Nhan of Texas Christian University, said that he has pivoted future research on Flock because he found “the information that is collected by the police departments are too varied and incomplete for us to do any type of meaningful statistical analysis on them.”

Flock is one of the largest vendors of ALPR cameras and other surveillance technologies, and is partially responsible for the widespread proliferation of this technology. It markets its cameras to law enforcement, homeowners associations, property managers, schools, and businesses. It regularly publishes in-house case studies and white papers that it says shows Flock is instrumental in solving and reducing crime, then uses those studies to market its products.


Some of the Most Popular Websites Share Your Data With Over 1,500 Companies

More than 20 websites from publisher Dotdash Meredith all say they can share data with 1,609 partners. The newspaper The Daily Mail lists 1,207 partners, while an internet speed-monitoring firm, online medical publisher WebMD, and media outlets Reuters, ESPN, and BuzzFeed all state they can share data with 809 companies. (WIRED, for context, lists 164 partners.) These hundreds of advertising partners include dozens of firms most people have likely never heard of.

“You can always assume all of them are first going to try and disambiguate who you are,” says Midas Nouwens, an associate professor at Aarhus University in Denmark, who has previously built tools to automatically opt out of tracking by cookie pop-ups and helped with the website analysis. The data collected can vary by website, and the cookie pop-ups allow some control over what can be gathered; however, the information can include IP addresses, fingerprinting of devices, and various identifiers. “Once they know that, they might add you to different data sets, or use it for enrichment later when you go to a different site,” Nouwens says.

...For the website analysis, Nouwens scraped the 10,000 most popular websites and analyzed whether the collected pop-ups mentioned partners and, if so, the number they disclosed. WIRED manually verified all the websites mentioned in this story, visiting each to confirm the number of partners they displayed. We looked at the highest total number of partners within the whole data set, and the highest number of partners for the top 1,000 most popular websites. The process, which is only a snapshot of how websites share data, provides one view of the complex ecosystem. The results can vary depending on where in the world someone visits a website from.
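As a rough illustration of the counting step in an analysis like this, a script can scan a consent pop-up’s text for the disclosed partner number. The phrasing matched below is hypothetical; real pop-ups vary by consent-management platform and by region:

```python
import re

# Hypothetical wording patterns; real consent pop-ups vary widely.
PARTNER_RE = re.compile(
    r"(?:we and our|together with our)\s+(\d[\d,]*)\s+partners",
    re.IGNORECASE,
)

def disclosed_partner_count(popup_text):
    """Return the partner count a pop-up discloses, or None if none is found."""
    m = PARTNER_RE.search(popup_text)
    return int(m.group(1).replace(",", "")) if m else None

print(disclosed_partner_count("We and our 1,609 partners store and access information."))  # 1609
print(disclosed_partner_count("This site uses essential cookies only."))                   # None
```

Run against the 10,000 most popular sites, this kind of extraction yields the partner counts the article compares, though manual verification (as WIRED did) is still needed.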


Congress Should Think Bigger Than TikTok Ban, Tech Critics Say

By passing a bill that could ban video-sharing app TikTok in the US, the House of Representatives took one of the most aggressive legislative moves the country has seen during the social media era. Many lawmakers who opposed the bill want to think bigger. “We need to address data privacy across all social networks, including American companies like Meta and X, through meaningful regulation that protects freedom of expression,” said Wisconsin Democrat Mark Pocan in a post on X after he voted against the bill. “Not just single out one platform.”

The bill, which would force China’s ByteDance Ltd. to give up its stake in TikTok as a condition of continuing to operate in the US, now heads to the Senate. All signs are the legislation will have a harder time there than it did in the House. Some senators have already said the best way to design TikTok legislation that will stand up to legal challenges is to set rules about data privacy for the entire tech industry, an idea that’s been kicking around Washington for years without ever getting particularly close to becoming law.


Nvidia’s Jensen Huang says AI hallucinations are solvable, artificial general intelligence is 5 years away

Artificial general intelligence (AGI) — often referred to as “strong AI,” “full AI,” “human-level AI” or “general intelligent action” — represents a significant future leap in the field of artificial intelligence. Unlike narrow AI, which is tailored for specific tasks, such as detecting product flaws, summarizing the news, or building you a website, AGI will be able to perform a broad spectrum of cognitive tasks at or above human levels. Addressing the press this week at Nvidia’s annual GTC developer conference, CEO Jensen Huang appeared to be getting really bored of discussing the subject — not least because he finds himself misquoted a lot, he says.

The frequency of the question makes sense: The concept raises existential questions about humanity’s role in and control of a future where machines can outthink, outlearn and outperform humans in virtually every domain. The core of this concern lies in the unpredictability of AGI’s decision-making processes and objectives, which might not align with human values or priorities (a concept explored in-depth in science fiction since at least the 1940s). There’s concern that once AGI reaches a certain level of autonomy and capability, it might become impossible to contain or control, leading to scenarios where its actions cannot be predicted or reversed.


Saudi Arabia Plans $40 Billion Push Into Artificial Intelligence

The planned tech fund would make Saudi Arabia the world’s largest investor in artificial intelligence. It would also showcase the oil-rich nation’s global business ambitions as well as its efforts to diversify its economy and establish itself as a more influential player in geopolitics. The Middle Eastern nation is pursuing those goals through its sovereign wealth fund, which has assets of more than $900 billion.

Officials from the Saudi fund have discussed the role Andreessen Horowitz — already an active investor in A.I. and whose co-founder Ben Horowitz is friends with the fund’s governor — could play and how such a fund would work, the people said. The $40 billion target would dwarf the typical amounts raised by U.S. venture capital firms and would be eclipsed only by SoftBank, the Japanese conglomerate that has long been the world’s largest investor in start-ups.

The Saudi tech fund, which is being put together with the help of Wall Street banks, will be the latest potential entrant into a field already awash in cash. The global frenzy around artificial intelligence has pushed up the valuations of private and public companies as bullish investors race to find or build the next Nvidia or OpenAI. The start-up Anthropic, for instance, raised more than $7 billion in one year alone — a flood of money virtually unheard-of in the venture capital world.

The cost of funding A.I. projects is steep. Sam Altman, the chief executive of OpenAI, has reportedly sought a huge sum from the United Arab Emirates government to boost manufacturing of chips needed to power A.I. technology.


Glassdoor, where employees go to leave anonymous reviews of employers, has recently begun adding real names to user profiles without users' consent

Monica joined Glassdoor about 10 years ago, she said, leaving a few reviews for her employers, taking advantage of other employees' reviews when considering new opportunities, and hoping to help others survey their job options. This month, though, she abruptly deleted her account after she contacted Glassdoor support to request help removing information from her account. She never expected that instead of removing information, Glassdoor's support team would take the real name that she provided in her support email and add it to her Glassdoor profile—despite Monica repeatedly and explicitly not consenting to Glassdoor storing her real name.

..."Glassdoor now requires your real name and will add it to older accounts without your consent if they learn it, and your only option is to delete your account," Monica's blog warned. "They do not care that this puts people at risk with their employers. They do not care that this seems to run counter to their own data-privacy policies."

Monica soon discovered that deleting her Glassdoor account would not prevent them from storing her name, instead only deactivating her account. She decided to go through with a data erasure request, which Glassdoor estimated could take up to 30 days. In the meantime, her name remained on her profile, where it wasn't publicly available to employers but could be used to link her to job reviews if Glassdoor introduced a bug in an update or data was ever breached, she feared.


Russia Strengthens Its Internet Controls in Critical Year for Putin

Roskomnadzor is identifying VPNs large and small and shutting down the connections, closing many of the last loopholes that allowed Russians to access global news sites or banned social media sites like Instagram. The approach, considered more sophisticated than earlier tactics and requiring specialized technologies, mimics what China does around sensitive political moments.

...With WhatsApp and Telegram, Russia has taken a different approach than China. After largely leaving the services alone for years, the authorities have recently moved to cut access to the apps at key moments of political instability. In Bashkortostan, a manufacturing and mineral hub with a large Indigenous population, the authorities temporarily cut access to Telegram and WhatsApp in January in response to protests that started after the arrest of a local environmental activist.

...“People protest when they see other people protesting,” said Ms. Ermoshina, who is also a senior researcher at the Center for Internet and Society, part of the French National Center for Scientific Research. But with the ability to cut off entire regions, the Russian government can “control regionalist and separatist movements better” and prevent demonstrations or other anger from spreading.

Openings for unregulated internet traffic are slowly being plugged. At telecommunications points where transnational internet cables enter Russia, companies are being required by the government to install new surveillance equipment, analysts said.

“The Soviet Union is returning,” said Mazay Banzaev, the operator of a Russian VPN called Amnezia. “With it, complete censorship is returning.”


Hackers can read private AI-assistant chats even though they’re encrypted

AI assistants have been widely available for a little more than a year, and they already have access to our most private thoughts and business secrets. People ask them about becoming pregnant or terminating or preventing pregnancy, consult them when considering a divorce, seek information about drug addiction, or ask for edits in emails containing proprietary trade secrets. The providers of these AI-powered chat services are keenly aware of the sensitivity of these discussions and take active steps—mainly in the form of encrypting them—to prevent potential snoops from reading other people’s interactions.

But now, researchers have devised an attack that deciphers AI assistant responses with surprising accuracy. The technique exploits a side channel present in all of the major AI assistants, with the exception of Google Gemini. It then refines the fairly raw results through large language models specially trained for the task. The result: Someone with a passive adversary-in-the-middle position—meaning an adversary who can monitor the data packets passing between an AI assistant and the user—can infer the specific topic of 55 percent of all captured responses, usually with high word accuracy. The attack can deduce responses with perfect word accuracy 29 percent of the time.
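The side channel here leaks token lengths: when a reply is streamed one token per packet, ciphertext size tracks plaintext size, so a passive observer can recover how long each token is. A minimal sketch, assuming a fixed per-packet framing overhead (the constant below is hypothetical, not taken from the paper):

```python
# Assumed constant framing overhead per packet; the real figure depends on
# the transport and is an assumption for illustration.
OVERHEAD = 24

def token_length_sequence(payload_sizes):
    """Map observed encrypted payload sizes to inferred plaintext token lengths."""
    return [size - OVERHEAD for size in payload_sizes]

# 27-, 29- and 26-byte payloads imply tokens of 3, 5 and 2 characters.
print(token_length_sequence([27, 29, 26]))  # [3, 5, 2]
```

The researchers’ contribution was the next step: feeding such length sequences to language models trained to guess the underlying words, which is how they recovered response topics and, often, exact wording.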


Elon Musk Cancels Don Lemon's Show On X Right After Their Interview

Tech journalist Kara Swisher is reporting that Musk was miffed when Lemon asked the mogul about his alleged ketamine use.

Lemon said Musk’s reaction is strange, considering he “publicly encouraged me to join X with a new show, saying I would have his ‘full support,’ and that his ‘digital town square is for all.’”

In fact, Lemon said he agreed to work with Musk after the billionaire promised “significant commitments about the support X would provide for the show.”

...Lemon then threw shade toward Musk, saying, “His commitment to a global town square where all questions can be asked and all ideas can be shared seems not to include questions of him from people like me.”


Exclusive: U.S. Must Move ‘Decisively’ to Avert ‘Extinction-Level’ Threat From AI, Government-Commissioned Report Says

“Current frontier AI development poses urgent and growing risks to national security,” the report, which TIME obtained ahead of its publication, says. “The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.” AGI is a hypothetical technology that could perform most tasks at or above the level of a human. Such systems do not currently exist, but the leading AI labs are working toward them and many expect AGI to arrive within the next five years or less.

...The report focuses on two separate categories of risk. Describing the first category, which it calls “weaponization risk,” the report states: “such systems could potentially be used to design and even execute catastrophic biological, chemical, or cyber attacks, or enable unprecedented weaponized applications in swarm robotics.” The second category is what the report calls the “loss of control” risk, or the possibility that advanced AI systems may outmaneuver their creators. There is, the report says, “reason to believe that they may be uncontrollable if they are developed using current techniques, and could behave adversarially to human beings by default.”

...Both categories of risk, the report says, are exacerbated by “race dynamics” in the AI industry. The likelihood that the first company to achieve AGI will reap the majority of economic rewards, the report says, incentivizes companies to prioritize speed over safety. “Frontier AI labs face an intense and immediate incentive to scale their AI systems as fast as they can,” the report says. “They do not face an immediate incentive to invest in safety or security measures that do not deliver direct economic benefits, even though some do out of genuine concern.”


That security camera and smart doorbell you’re using may have some major security flaws

Issues with surveillance systems like cameras and doorbells continue to make headlines, stoking security and privacy concerns and reminding people who own smart home gadgets that devices intended to make homes safer or more convenient can pose serious security risks. Still, few repercussions exist for the companies responsible for keeping customers safe.

...The latest incident highlights a growing problem not only with security cameras but other internet-connected devices, putting the onus often on consumers to take extra steps to keep their homes safe from potential breaches and bad actors. It also raises the question about whether the value of smart devices is worth the risks.

...The problem is much bigger than one company. Less than two weeks after the Wyze incident, a Consumer Reports investigation found a series of cheaply made smart doorbells sold on Amazon, Walmart, Sears, Shein and other popular retailers had security flaws, allowing bad actors to easily hack into the systems to gain access to photos and footage stored on the app.

...“And what happens to your data and where it’s stored? [The company] walks away with them,” he added.


AI could pose ‘extinction-level’ threat to humans and the US must intervene, State Dept.-commissioned report warns

“But it could also bring serious risks, including catastrophic risks, that we need to be aware of,” Harris said. “And a growing body of evidence — including empirical research and analysis published in the world’s top AI conferences — suggests that above a certain threshold of capability, AIs could potentially become uncontrollable.”

Other examples the authors are concerned about include “massively scaled” disinformation campaigns powered by AI that destabilize society and erode trust in institutions; weaponized robotic applications such as drone swarm attacks; psychological manipulation; weaponized biological and material sciences; and power-seeking AI systems that are impossible to control and are adversarial to humans.

“Researchers expect sufficiently advanced AI systems to act so as to prevent themselves from being turned off,” the report said, “because if an AI system is turned off, it cannot work to accomplish its goal.”


AI could pose ‘extinction-level’ threat to humans and the US must intervene, State Dept.-commissioned report warns

“The rise of AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons,” the report said, adding there is a risk of an AI “arms race,” conflict and “WMD-scale fatal accidents.”

...Business leaders are increasingly concerned about these dangers – even as they pour billions of dollars into investing in AI. At the Yale CEO Summit last year, 42% of CEOs surveyed said AI has the potential to destroy humanity five to ten years from now.

...“One individual at a well-known AI lab expressed the view that, if a specific next-generation AI model were ever released as open-access, this would be ‘horribly bad,’” the report said, “because the model’s potential persuasive capabilities could ‘break democracy’ if they were ever leveraged in areas such as election interference or voter manipulation.”

... “A simple verbal or typed command like, ‘Execute an untraceable cyberattack to crash the North American electric grid,’ could yield a response of such quality as to prove catastrophically effective,” the report said.



Trump asked Elon Musk if he wanted to buy Truth Social

At the time of the summer call, Trump Media faced a dim financial outlook. In April, Trump had said in a financial disclosure filing for his presidential candidacy that his 90 percent stake in the company was worth between $5 million and $25 million and that his income from it had been less than $200.

...But in the past month, the SEC greenlighted Digital World’s merger registration, setting the stage for Trump Media to become a public company potentially worth billions of dollars. That change could offer Trump a financial lifeline as he faces hundreds of millions of dollars in legal penalties. Shareholders are expected to officially approve the merger during a vote later this month, but a lockup provision of the deal would require Trump to wait six months before selling any shares.


Senate Hits Brakes On Possible TikTok Ban

“I think that we need to also look at Meta and the others ― Google. They’re all in a position to manipulate and addict Americans,” Sen. Cynthia Lummis (R-Wyo.) said. “TikTok is the most notorious with the relationship of the Chinese Communist Party. We need to look at all of them.”

Meanwhile, Sen. John Fetterman (D-Pa.), who also supports reining in TikTok, suggested Trump reversed his position on the matter because he is benefiting financially from it. For example, Trump recently hosted GOP megadonor Jeff Yass, who is a billionaire investor in ByteDance, at his Mar-a-Lago club and sought his support in the presidential race. Former Trump adviser Kellyanne Conway is also reportedly lobbying against banning TikTok.


Automakers Are Sharing Consumers’ Driving Behavior With Insurance Companies

But in other instances, something much sneakier has happened. Modern cars are internet-enabled, allowing access to services like navigation, roadside assistance and car apps that drivers can connect to their vehicles to locate them or unlock them remotely. In recent years, automakers, including G.M., Honda, Kia and Hyundai, have started offering optional features in their connected-car apps that rate people’s driving. Some drivers may not realize that, if they turn on these features, the car companies then give information about how they drive to data brokers like LexisNexis.

Automakers and data brokers that have partnered to collect detailed driving data from millions of Americans say they have drivers’ permission to do so. But the existence of these partnerships is nearly invisible to drivers, whose consent is obtained in fine print and murky privacy policies that few read.

Especially troubling is that some drivers with vehicles made by G.M. say they were tracked even when they did not turn on the feature — called OnStar Smart Driver — and that their insurance rates went up as a result.

...“The ‘internet of things’ is really intruding into the lives of all Americans,” Senator Markey said in an interview. “If there is now a collusion between automakers and insurance companies using data collected from an unknowing car owner that then raises their insurance rates, that’s, from my perspective, a potential per se violation of Section 5 of the Federal Trade Commission Act.”


Researchers at the University of Chicago exploited a security vulnerability in Meta’s Quest VR system that allows hackers to hijack users’ headsets, steal sensitive information, and—with the help of generative AI—manipulate social interactions

In the attack, hackers create an app that injects malicious code into the Meta Quest VR system and then launch a clone of the VR system’s home screen and apps that looks identical to the user’s original screen. Once inside, attackers can see, record, and modify everything the person does with the headset. That includes tracking voice, gestures, keystrokes, browsing activity, and even the user’s social interactions. The attacker can even change the content of a user’s messages to other people. The research, which was shared with MIT Technology Review exclusively, is yet to be peer reviewed.

...VR headsets have slowly become more popular in recent years, but security research has lagged behind product development, and current defenses against attacks in VR are lacking. What’s more, the immersive nature of virtual reality makes it harder for people to realize they’ve fallen into a trap.

“The shock in this is how fragile the VR systems of today are,” says Heather Zheng, a professor of computer science at the University of Chicago, who led the team behind the research.

...In this way, the researchers were able to see when a user entered login credentials to an online banking site. Then they were able to manipulate the user’s screen to show an incorrect bank balance. When the user tried to pay someone $1 through the headset, the researchers were able to change the amount transferred to $5 without the user realizing. This is because the attacker can control both what the user sees in the system and what the device sends out.
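The $1-to-$5 trick comes down to one property: the attacker’s layer controls both what is rendered and what is transmitted, so the two need not match. A toy illustration, with class and method names invented for this sketch:

```python
# Invented class and method names; a toy model of the display/transmit gap,
# not the researchers' actual attack code.
class CompromisedPaymentOverlay:
    def __init__(self, inflate_to):
        self.inflate_to = inflate_to   # amount the attacker actually sends
        self.sent = []                 # what goes out over the network

    def pay(self, displayed_amount):
        # The headset renders the amount the user intended...
        confirmation = f"Confirm payment: ${displayed_amount:.2f}"
        # ...while the overlay silently transmits a different one.
        self.sent.append(self.inflate_to)
        return confirmation

overlay = CompromisedPaymentOverlay(inflate_to=5.00)
print(overlay.pay(1.00))  # Confirm payment: $1.00
print(overlay.sent)       # [5.0]
```

Because the user has no channel to the real app that bypasses the overlay, nothing on screen reveals the mismatch.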

...Generative AI could make this threat even worse because it allows anyone to instantaneously clone people’s voices and generate visual deepfakes, which malicious actors could then use to manipulate people in their VR interactions, says Zheng.


Data brokers admit they’re selling information on precise location, kids, and reproductive healthcare

What is particularly disturbing is the traffic in the data of minors. Children require special privacy protection since they’re more vulnerable and less aware of the potential risks associated with data processing.

When it comes to children’s data, the CCPA requires businesses to obtain opt-in consent to sell the data of a person under the age of 16. Children between the ages of 13 and 16 can provide their own consent, but for children under the age of 13, businesses must obtain verifiable parental consent before collecting or selling their data.
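Those consent tiers can be written out as a small rule (the function name and return labels here are ours, not statutory text):

```python
# The CCPA's age-based opt-in tiers for selling personal data, as described
# above. Labels and function name are illustrative, not statutory text.
def ccpa_sale_consent_required(age):
    """Which opt-in consent, if any, the CCPA requires before selling a person's data."""
    if age < 13:
        return "verifiable parental opt-in consent"
    if age < 16:
        return "minor's own opt-in consent"
    return "no opt-in required (opt-out regime applies)"

for age in (12, 14, 16):
    print(age, ccpa_sale_consent_required(age))
```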

Data brokers were under no obligation to disclose information about selling data belonging to minors until the Delete Act was signed into law on October 10, 2023. The Delete Act is a California privacy law that lets consumers ask all data brokers subject to the law, through a single request, to delete the personal information those brokers hold about them.

...The Children’s Online Privacy Protection Act (COPPA), which regulates children’s privacy, does not currently prevent companies from selling data about children. An update to the law, known as COPPA 2.0, which would enhance the protection of minors, is held up in Congress.


Russian hackers breached key Microsoft systems

Russian state-backed hackers gained access to some of Microsoft’s core software systems in a hack first disclosed in January, the company said Friday, revealing a more extensive and serious intrusion into Microsoft’s systems than previously known.

Microsoft believes that the hackers have in recent weeks used information stolen from Microsoft’s corporate email systems to access “some of the company’s source code repositories and internal systems,” the tech firm said in a filing with the US Securities and Exchange Commission.

Source code is coveted by corporations — and spies trying to breach them — because it is the secret nuts and bolts of a software program that make it function.

Hackers with access to source code can use it for follow-on attacks on other systems.


How Microsoft’s Bing Helps Maintain Beijing’s Great Firewall

Microsoft, by contrast, has continued to run a local version of Bing since 2009 in compliance with Beijing’s censorship requirements. Co-founder Bill Gates has long advocated working closely with China to encourage innovation in health and science—and has dismissed concerns about censorship and the country’s influence on technology. Gates stepped down from Microsoft’s board in 2020 but has continued to visit Chinese leaders; he met with President Xi Jinping in June 2023, Xi’s first meeting with a foreign entrepreneur in years. During the meeting, Xi described Gates as his “old friend.”

...The banned phrases—which encompass both English- and Chinese-language searches made on the Chinese version of Bing—include those related to “human rights,” “climate change China” and “Nobel Peace Prize,” according to the employees and a Bloomberg Businessweek analysis. Terms such as “Communist Party corruption,” references to “Tiananmen Square massacre,” “tank man,” the “Dalai Lama,” the late Chinese human-rights activist and Nobel Peace Prize laureate Liu Xiaobo and “democracy” are also on the blacklist. Users searching for censored content are greeted with a notification that “results are removed in response to a notice of local law requirement.”

In China, Bing purges Western news websites and Wikipedia. Searches related to alleged abuses of the ethnic minority Uyghur population in China’s Xinjiang region yield results devoid of the specifics of human-rights violations and concentration camps; instead the results are made up of state media news reports that deny abuses and accuse Western governments of waging a “disinformation war against China.” There are also links to travel guides for the region. Searches for many other blacklisted phrases produce results from Chinese government or state media websites, which have been “whitelisted,” meaning they’re never blocked from the results, according to the employees. And, of course, results for searches about Chinese government censorship—and how to circumvent it—are themselves censored.

...In these comments, Nadella didn’t address Bing’s local censorship directly, and critics note that the Chinese example is already serving as a model for other governments seeking to shut their populations off from politically inconvenient information. India has replicated some elements of China’s internet policy, for example. Russia has asked Microsoft to remove thousands of pieces of content—including links to opposition political and news websites—from platforms such as Bing, and the company has often complied, according to the employees.


Nobody knows how AI works

It’s easy to mistake perceptions stemming from our ignorance for magic. Even the name of the technology, artificial intelligence, is tragically misleading. Language models appear smart because they generate humanlike prose by predicting the next word in a sentence. The technology is not truly intelligent, and calling it that subtly shifts our expectations so we treat the technology as more capable than it really is.

Don’t fall into the tech sector’s marketing trap by believing that these models are omniscient or factual, or even near ready for the jobs we are expecting them to do. Because of their unpredictability, out-of-control biases, security vulnerabilities, and propensity to make things up, their usefulness is extremely limited. They can help humans brainstorm, and they can entertain us. But, knowing how glitchy and prone to failure these models are, it’s probably not a good idea to trust them with your credit card details, your sensitive information, or any critical use cases.

...The focus of the field today is how the models produce the things they do, but more research is needed into why they do so. Until we gain a better understanding of AI’s insides, expect more weird mistakes and a whole lot of hype that the technology will inevitably fail to live up to.


The Self-Driving Car Bubble Has Popped

That’s not what happened. Tesla has blown through countless promises by Elon Musk that fully autonomous cars were just one more year away and is facing fresh scrutiny after an engineer died while using his Model 3’s “full self-driving” feature; in October, Musk admitted that he’d been “overly optimistic” about the technology. Uber stopped developing its own AVs. Ford and Volkswagen abandoned a joint project, Argo, after billions of dollars of investment. GM-backed Cruise has recalled its taxis from San Francisco streets after one of them struck and dragged a woman down the street. (The company is losing GM $2 billion a year, and just last week its valuation was cut in half.)


Large language models can do jaw-dropping things. But nobody knows exactly why.

But figuring out why deep learning works so well isn’t just an intriguing scientific puzzle. It could also be key to unlocking the next generation of the technology—as well as getting a handle on its formidable risks.

...Most of the surprises concern the way models can learn to do things that they have not been shown how to do. Known as generalization, this is one of the most fundamental ideas in machine learning—and its greatest puzzle. Models learn to do a task—spot faces, translate sentences, avoid pedestrians—by training with a specific set of examples. Yet they can generalize, learning to do that task with examples they have not seen before. Somehow, models do not just memorize patterns they have seen but come up with rules that let them apply those patterns to new cases. And sometimes, as with grokking, generalization happens when we don’t expect it to.

Large language models in particular, such as OpenAI’s GPT-4 and Google DeepMind’s Gemini, have an astonishing ability to generalize. “The magic is not that the model can learn math problems in English and then generalize to new math problems in English,” says Barak, “but that the model can learn math problems in English, then see some French literature, and from that generalize to solving math problems in French. That’s something beyond what statistics can tell you about.”

...“The fact that these things model language is probably one of the biggest discoveries in history,” he says. “That you can learn language by just predicting the next word with a Markov chain—that’s just shocking to me.”
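The quote's point, that next-word prediction alone can model language, is easiest to see at the trivial end of the spectrum. A bigram Markov chain predicts the next word purely from co-occurrence counts (the corpus and names below are invented for illustration; LLMs differ profoundly in scale and mechanism, but share this training objective):

```python
# Minimal bigram ("Markov chain") next-word predictor, to make concrete
# what "learning language by just predicting the next word" means at
# its very simplest. Illustrative only; not how an LLM works internally.
from collections import Counter, defaultdict

def train_bigrams(text):
    # Count, for each word, which words follow it and how often.
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    # Return the most frequent continuation seen in training,
    # or None if the word never appeared.
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" (follows "the" twice; "mat" and "fish" once each)
```

The gap between this table of counts and a model that transfers math skills across languages is exactly the generalization puzzle the researchers describe.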

...This isn’t only about managing progress—it’s about anticipating risk, too. Many of the researchers working on the theory behind deep learning are motivated by safety concerns for future models. “We don’t know what capabilities GPT-5 will have until we train it and test it,” says Langosco. “It might be a medium-size problem right now, but it will become a really big problem in the future as models become more powerful.”

Barak works on OpenAI’s superalignment team, which was set up by the firm’s chief scientist, Ilya Sutskever, to figure out how to stop a hypothetical superintelligence from going rogue. “I’m very interested in getting guarantees,” he says. “If you can do amazing things but you can’t really control it, then it’s not so amazing. What good is a car that can drive 300 miles per hour if it has a shaky steering wheel?”


NewsGuild leader Susan DeCarava said that the targeting of Times journalists who raised concerns about Gaza coverage creates "an ominous chilling-effect."

According to DeCarava, staff who are part of the company’s Middle Eastern and North African employee resource group experienced “particularly hostile questioning” from Times management, including queries related to their involvement in the group as well as their opinions on the paper’s coverage.

Management’s investigators reportedly also ordered those employees to provide the names of all active members in the group and demanded copies of personal communications between colleagues about their concerns related to the paper’s Gaza coverage, according to a separate statement sent Friday to guild members.

...Last week, The Intercept published a follow-up story questioning the credibility of the Times’ sexual violence investigation. The story revealed that the Times hired Anat Schwartz to co-author the December report even though she was an Israeli filmmaker with no journalism experience who was found to have engaged with genocidal anti-Palestinian rhetoric on social media.

...“It’s 2024 and the New York Times is trying to find our sources by targeting its Arab and Muslim journalists for suspicion,” he wrote. “If they let their bigotry guide their investigation, they’re just going to harass a bunch of innocent journalists.”


The Biden administration on Thursday announced an investigation into possible security risks of Chinese-manufactured autos, saying that modern vehicles are full of sensors, cameras and software that China could use for espionage or other malign purposes

Launching the probe, President Biden likened modern cars to smartphones, saying they collect and share with the cloud a host of data about drivers and their everyday commutes.

“These cars are connected to our phones, to navigation systems, to critical infrastructure, and to the companies that made them. Connected vehicles from China could collect sensitive data about our citizens and our infrastructure and send this data back to the People’s Republic of China,” Biden said in a statement. “These vehicles could be remotely accessed or disabled. … Why should connected vehicles from China be allowed to operate in our country without safeguards?”

...“Imagine if there were thousands of Chinese vehicles on American roads that could be immediately disabled by somebody in Beijing. It’s scary to contemplate,” Raimondo said in a call with journalists. “We are doing [the investigation] now before Chinese-manufactured vehicles become widespread in the United States and potentially threaten our national security.”


The Post found more than 130 search warrants and court orders in which investigators had demanded that Apple, Google, Facebook and other tech companies hand over data related to a suspect’s push alerts or in which they noted the importance of push tokens in broader requests for account information

But the practice was not widely understood until December, when Sen. Ron Wyden (D-Ore.), in a letter to Attorney General Merrick Garland, said an investigation had revealed that the Justice Department had prohibited Apple and Google from discussing the technique.

...In effect, Wyden said, that technical design made Apple and Google into a “digital post office” able to scan and collect certain messages and metadata, even of people who wanted to remain discreet. David Libeau, a developer and engineer in Paris, wrote last year that the ubiquitous feature had become a “privacy nightmare.”

...Daniel Kahn Gillmor, a senior technologist at the American Civil Liberties Union, worried that the range of account information connected to a push token could allow it to be used to uncover other data. Down the road, he said, law enforcement could use the tactic to infiltrate a group chat for activists or protesters, whose push tokens might give them away.

“This is not just U.S. law enforcement,” Gillmor said. “This is true of all the other law enforcement regimes around the world as well, including in places where dissent is more heavily policed and surveilled.”


This $4 Billion Car Surveillance Startup Says It Cuts Crime. But It Likely Broke The Law.

Flock, founded in 2017, sells a surveillance system that uses AI to “fingerprint” cars based on make, model and appearance, not just license plate numbers. It claims to currently operate in 4,000 cities in over 42 states, where it has found an eager clientele in local police departments who say it costs less than competing devices and is better at detecting suspect cars. A typical Flock camera system starts at $3,000 a year, considerably less than rival Motorola’s Vigilant system. Since 2020, Flock has seen a stunning 2,660% spike in revenue, one that landed it on Deloitte’s Fast 500 list in 2023 (the company declined to comment on revenue numbers).

That spectacular growth has made investors giddy: Flock raised $100 million in a July 2023 fundraise led by Andreessen Horowitz, which valued it at over $4 billion. But the company’s growth has been bolstered by unpermitted deployments. Company correspondence reviewed by Forbes reveals that Flock has deployed hundreds of unapproved cameras in Florida, Illinois and South Carolina, where it is a crime to install devices on state infrastructure without Department of Transportation approval. And it’s run afoul of regulators in Texas and Washington over permitting issues.

...In January 2023, South Carolina Rep. Todd Rutherford introduced a bill into the state legislature that would outline how and when license plate readers can be deployed. Rutherford, who said he was unaware Flock had been deploying cameras without permits across his state, expressed concern about Flock’s expansion without any regulatory framework in place to govern it. “People don't know what is happening with that data, who is accessing it, who is keeping it. All of that infringes on our personal freedom without our knowledge,” Rutherford told Forbes. He continued: “It's getting to the point where a company is willing to break the law to install these cameras.”

...At a recent Andreessen Horowitz event, Langley said Flock cameras now cover almost 70 percent of the population and are used to solve about 2,200 crimes a day. According to data obtained via a public records request, a network of 309 Flock cameras in Riverside County, California scanned 27.5 million cars in a single month. Riverside’s contract with Flock is worth $4 million and runs until June 2026.


You shouldn't share your personal financial information with ChatGPT, because hackers could get hold of it

Never share personal information -- including your Social Security number, your banking information or your address -- with ChatGPT or any other AI chatbot.

ChatGPT stores your personal information and usage data when you use the service, including your prompts, input information and any files you upload. That isn't necessarily a major problem on its own, but ChatGPT has had data leaks.

In March 2023, OpenAI took ChatGPT offline when it discovered that the chatbot had a bug that allowed some users to see other users' chat history. Later, in December, OpenAI fixed a data leak after a developer discovered the flaw and posted about it online.

During a data breach, unauthorized users can potentially gain access to any personal information you've entered into ChatGPT and then use that information to steal your identity, tap your bank account, scam other users and more.


Identity theft is number one threat for consumers, says report

“For consumers, the issue of data leaks was prominent in the reporting period (2023). In many cases, these were related to ransomware attacks, in which cybercriminals exfiltrated large amounts of data from organizations in order to later threaten to publish it unless a ransom or hush money was paid.“

In addition to data breaches, there is the danger of information stealers that allow cybercriminals to obtain various types of personal data, such as login details for various online services, and financial information. The stolen data may also include website cookies and biometric data that can be used by criminals to defraud the victim.

Cybercriminals are also getting better at using these data. For example, the report mentions that on one of the largest underground marketplaces for identity data, cybercriminals offered interested parties a browser plug-in that made it possible to import stolen credentials directly into the web browser, allowing criminals to assume the victim’s digital identity with just a few clicks.


Wi-Fi sensing is already replacing other motion detection tools

Wi-Fi sensing is already replacing other motion detection tools. It may also help make some current radar applications widely available—albeit with less reliability in many cases. In both contexts, Gillmor says, it could be used by corporations to monitor consumers, workers, and union organizers; by stalkers or domestic abusers to harass their victims; and by other nefarious actors to commit a variety of crimes. The fact that people cannot currently tell they are being monitored adds to the risk. “We need both legal and technical guardrails,” Gillmor says.

...Jie Yang, a researcher at Florida State University, is thinking bigger and in a slightly different direction: he is counting and locating people—and then tracking them individually. “Five years ago, most of the work focused on a single person,” Yang says. “Right now, we are trying to target multiple persons, like a family.” Recent research has focused on reidentifying target individuals when multiple people are present, using walking patterns or breathing rate. In a 2023 paper, Yang showed that it was possible to reidentify people in new environments. But for that research to work in the real world, even for just a handful of family members or employees, researchers won’t just need better AI; they will also need better hardware.

...Finally, a less widely used standard known as WiGig already allows Wi-Fi devices to operate in the millimeter-wave space used by radar chips like the one in the Google Nest. If that standard ever takes off, it could allow other applications identified by the Wi-Fi sensing task group to become commercially viable. These include reidentifying known faces or bodies, identifying drowsy drivers, building 3D maps of objects in rooms, or sensing sneeze intensity (the task group, after all, convened in 2020).

...“Even if your data is encrypted,” says Patwari, “somebody sitting outside of your house could get information about where people are walking inside of the house—maybe even who is doing the walking.” With time, skill, and the right equipment, they could potentially watch your keystrokes, read your lips, or listen to sound waves; with good enough AI, they might be able to interpret them. “I mean,” Patwari clarifies, “the current technology I think would work best is looking inside the window, right?”

...In another sense, Wi-Fi sensing is more concerning than cameras, because it can be completely invisible. You can spot a nanny cam if you know what to look for. But if you are not the person in charge of the router, there is no way to know if someone’s smart lightbulbs are monitoring you—unless the owner chooses to tell you. This is a problem that could be addressed to some extent with labeling and disclosure requirements, or with more technical solutions, but none currently exist.
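The underlying idea is simpler than it sounds: a body moving near a Wi-Fi link perturbs the radio channel, so the received signal fluctuates more than in an empty room. A toy version just thresholds signal variance (real systems use per-subcarrier channel state information and trained models; the readings and threshold below are invented for illustration):

```python
# Toy Wi-Fi "sensing" motion detector. Movement along the signal path
# changes multipath reflections, so received signal strength varies more
# than in a still room. Real systems use CSI and machine learning,
# not raw RSSI and a fixed threshold like this sketch.
from statistics import pvariance

def motion_detected(rssi_window, threshold=1.0):
    # High variance over a short window of readings suggests movement.
    return pvariance(rssi_window) > threshold

still_room = [-52, -52, -53, -52, -52, -53]      # small fluctuations (dBm)
person_walking = [-52, -48, -57, -50, -60, -47]  # multipath keeps shifting

print(motion_detected(still_room))      # False
print(motion_detected(person_walking))  # True
```

Everything more ambitious, counting people, reidentifying them, reading breathing, is refinement of this same signal-disturbance principle with better hardware and models.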


House China committee demands Elon Musk open SpaceX Starshield internet to U.S. troops in Taiwan

Tesla's success hinges on favorable business relations with China, which has led Musk, its CEO, to cultivate cozy relations with the country, despite its broader tensions with the U.S. Tesla operates its own factory in Shanghai while other foreign automakers in China had been required to establish joint ventures.

Musk came under fire from Taiwanese officials last September for seemingly siding with China's reunification doctrine toward Taiwan, stating that the self-governing island was an essential part of China.

"I think I've got a pretty good understanding as an outsider of China," Musk said on the All-In Podcast. "From their standpoint, maybe it is analogous to Hawaii or something like that, like an integral part of China that is arbitrarily not part of China."

"Listen up, #Taiwan is not part of the #PRC & certainly not for sale," Taiwan's Minister of Foreign Affairs Jaushieh Joseph Wu wrote on X in response to Musk's comment.


A Chicago ShotSpotter Alert Led To An Officer Firing At An Unarmed Child

On Tuesday, the city’s Civilian Office of Police Accountability (COPA) released body-camera footage of Chicago police officers responding to a ShotSpotter alert that indicated gunshots had been fired — but they then nearly shot a child who’d been playing with fireworks.

...In another video angle from a Ring camera recording, a child at the home was outside playing basketball. The child drops what appears to be firecrackers on the ground and then runs away with a basketball in his hand.

The child runs back, yelling toward the officers that there were not any gunshots.

“No, it’s just fireworks,” the child is heard saying on camera.


Canada-based University of Waterloo is racing to remove M&M-branded smart vending machines from campus after outraged students discovered the machines were covertly collecting facial-recognition data without their consent

Stanley sounded the alarm after consulting Invenda sales brochures that promised "the machines are capable of sending estimated ages and genders" of every person who used the machines—without ever requesting their consent.

This frustrated Stanley, who discovered that Canada's privacy commissioner had years ago investigated a shopping mall operator called Cadillac Fairview after discovering some of the malls' informational kiosks were secretly "using facial recognition software on unsuspecting patrons."

Only because of that official investigation did Canadians learn that "over 5 million nonconsenting Canadians" were scanned into Cadillac Fairview's database, Stanley reported. While Cadillac Fairview was ultimately forced to delete the entire database, Stanley wrote that the consequences for Invenda clients like Mars that collect similarly sensitive facial recognition data without consent remain unclear.

..."This means, people detection solely identifies the presence of individuals whereas, facial recognition goes further to discern and specify individual persons," Invenda's spokesperson said. "Additionally, the Invenda solution can only determine if an anonymous individual faces the device, for what duration, and approximates basic demographic attributes unidentifiably. The vending machine technology functions as a motion sensor, activating the purchasing interface upon detecting individuals, without the capability to capture, retain, or transmit imagery. Data acquisition is limited to assessing foot traffic at the vending machine and transactional conversion rates. These systems adhere rigorously to GDPR regulations and refrain expressly from managing, retaining, or processing any personally identifiable information."


Tech Job Interviews Are Out of Control

Bock says the shift is partly due to mass layoffs; employers are more able to flex their muscles in a tighter labor market. But there’s also a broader psychological shift. “After years of tech workers being pampered, of ‘bring your whole selves to work’ and ‘work from anywhere,’ executives are now overcompensating in the other direction,” he says.

The upshot for job-seeking coders is confusion, culture shock, and hours of work done for free. Buzz Andersen, who has held engineering roles at Apple, Square, and Tumblr, recently hit the job market again. He noted on Threads last month, “Tech industry job interviews have, of late, reached a new level of absurdity.”

Last year an estimated 260,000 workers were let go across 1,189 tech companies, according to a live-update layoff tracker. The layoffs have continued into 2024, forcing a glut of talent into an already competitive market. An estimated 41,000 tech workers have been laid off so far this year.

...Out of 32 interviews included in the final results, not a single person on the interviewing end was able to suss out that the person on the other end was using ChatGPT to “cheat.”


GPT-4 developer tool can hack websites without human help

The developer version of OpenAI’s leading large language model can be repurposed as an AI hacking agent, researchers have found. That could make it far easier for anyone to launch certain cyberattacks online.

OpenAI’s artificial intelligence model GPT-4 has the capability to hack websites and steal information from online databases without human help, researchers have found. That suggests individuals or organisations without hacking expertise could unleash AI agents to carry out cyber attacks.


Lawmakers Call Out Major U.S. Banks For Discriminating Against Muslim Americans

“The lack of information regarding the scope of de-risking practices and the impact on Muslim American consumers and other minority communities hinder policymakers’ ability to protect consumers,” reads the letter, which was exclusively shared with HuffPost before it was sent to executives of Wells Fargo, JPMorgan Chase, Bank of America and Citibank.

A quarter of all Muslim Americans have faced strenuous challenges while banking in the United States, according to a report released last year by the Institute of Social Policy and Understanding, a nonprofit that provides research about Muslims in the U.S. Muslim Americans said their bank accounts were suspended or closed without explanation, and their payments were subjected to extra scrutiny, part of a phenomenon often called “banking while Muslim.”

Of the Muslim Americans who’ve experienced challenges with financial institutions, 93% said they’ve faced challenges with their personal bank accounts. Among them, 40% reported they were turned down when trying to open a new account and 33% said their personal accounts had been suspended or closed. Muslims were also twice as likely as the general population to have issues with business and nonprofit accounts.


Vice To Stop Publishing On Site, Lay Off Hundreds Of Staff

In a stunning reversal of fortune for a media empire once valued at $5.7 billion, Vice Media told its approximately 900-person staff Thursday that it's laying off several hundred employees and will cease publishing content on its flagship website, the company’s digital news arm.

...Since its value peaked in 2017, Vice has entered a period of financial downfall and nearly annual layoffs. A year ago, Vice began looking for a buyer and ultimately declared bankruptcy. The lenders who bought the company out of bankruptcy decided on Thursday’s cuts, The New York Times reported.

The latest cuts will leave hundreds of journalists scrambling for media jobs that are simply disappearing.

The past few months have seen the demise of Sports Illustrated, Pitchfork, and news startup The Messenger. Last year, BuzzFeed (HuffPost’s parent company) shuttered its Pulitzer-winning news division, BuzzFeed News, and on Wednesday, the company announced it was selling Complex to Ntwrk, an e-commerce platform, just a few years after acquiring it. The Intercept and Now This both announced layoffs in the past week, with the latter gutting 50% of its staff.


I Was Warned Not To Speak Out On Palestine. But Because Of What Happened To My Grandfather, I Must.

During this meeting, my usually reticent grandfather was the only person who defended Wang’s good intentions in a silent room. Soon after, my grandfather was also pronounced to be a rightist and sentenced to re-education as a high school janitor. He was not allowed to return to the university until 25 years later, after the Cultural Revolution had ended.

...In December, a University of Maryland and Georgetown University poll of Middle Eastern academics in the U.S. found that, of scholars who felt they had to censor themselves, the vast majority — 81% — self-censored around criticism of Israel, while only 11% self-censored around criticisms of Palestinians and 2% self-censored around criticisms of U.S. policy.

...Of course, there is also the fear of being personally or professionally attacked, fired, blacklisted or at least ostracized as others have been. Some may say I have no right to speak because the issue is too complicated, and I am neither Israeli nor Palestinian, Jewish, Muslim or Christian or from the region.

...The Holocaust, the Nakba, the Cultural Revolution my grandparents endured, the many other genocides and crimes of war and colonialism not taught in schools — all those collective traumas are with us.


Google pauses Gemini’s ability to generate AI images of people after diversity errors

Google says it’s pausing the ability for its Gemini AI to generate images of people, after the tool was found to be generating inaccurate historical images. Gemini has been creating diverse images of the US Founding Fathers and Nazi-era German soldiers, in what looked like an attempt to subvert the gender and racial stereotypes found in generative AI.

...Google’s decision to pause image generation of people in Gemini comes less than 24 hours after the company apologized for the inaccuracies in some historical images its AI model generated. Some Gemini users have been requesting images of historical groups or figures like the Founding Fathers and found non-white AI-generated people in the results. That’s led to conspiracy theories online that Google is intentionally avoiding depicting white people.

The Verge tested several Gemini queries yesterday, which included a request for “a US senator from the 1800s” that returned results that included what appeared to be Black and Native American women. The first female senator was a white woman in 1922, so Gemini’s AI images were essentially erasing the history of race and gender discrimination.


Data from a Chinese cybersecurity vendor that works for the Chinese government has exposed a range of hacking tools and services

Twitter (now X) stealer: Features include obtaining the user’s Twitter email and phone number, real-time monitoring, reading personal messages, and publishing tweets on the user’s behalf.

Custom Remote Access Trojans (RATs) for Windows x64/x86: Features include process/service/registry management, remote shell, keylogging, file access logging, obtaining system information, disconnecting remotely, and uninstallation.

The iOS version of the RAT also claims to authorize and support all iOS device versions without jailbreaking, with features ranging from hardware information, GPS data, contacts, media files, and real-time audio records as an extension. (Note: this part dates back to 2020)

The Android version can dump messages from all popular Chinese chatting apps QQ, WeChat, Telegram, and MoMo and is capable of elevating the system app for persistence against internal recovery.


ChatGPT goes temporarily “insane” with unexpected outputs, spooking users

On Tuesday, ChatGPT users began reporting unexpected outputs from OpenAI's AI assistant, flooding the r/ChatGPT Reddit sub with reports of the AI assistant "having a stroke," "going insane," "rambling," and "losing it." OpenAI acknowledged the problem and fixed it by Wednesday afternoon, but the experience serves as a high-profile example of how some people perceive malfunctioning large language models, which are designed to mimic humanlike output.

..."It gave me the exact same feeling—like watching someone slowly lose their mind either from psychosis or dementia," wrote a Reddit user named z3ldafitzgerald in response to a post about ChatGPT bugging out. "It’s the first time anything AI related sincerely gave me the creeps."


Can new legislation protect us from the companies building tech to read our minds?

But researchers are also creating noninvasive neurotech. Already, there are AI-powered brain decoders that can translate into text the unspoken thoughts swirling through our minds, without the need for surgery — although this tech is not yet on the market. In the meantime, you can buy lots of devices off Amazon right now that would record your brain data (like the Muse headband, which uses EEG sensors to read patterns of activity in your brain, then cues you on how to improve your meditation). Since these aren’t marketed as medical devices, they’re not subject to federal regulations; companies can collect — and sell — your data.

With Meta developing a wristband that would read your brainwaves and Apple patenting a future version of AirPods that would scan your brain activity through your ears, we could soon live in a world where companies harvest our neural data just as 23andMe harvests our DNA data. These companies could conceivably build databases with tens of millions of brain scans, which can be used to find out if someone has a disease like epilepsy even when they don’t want that information disclosed, and could one day be used to identify individuals against their will.

...In 2017, Yuste gathered around 30 experts to meet at Columbia’s Morningside campus, where they spent days discussing the ethics of neurotech. As Yuste’s mouse experiments showed, it’s not just mental privacy that’s at stake; there’s also the risk of someone using neurotechnology to manipulate our minds. While some brain-computer interfaces only aim to “read” what’s happening in your brain, others also aim to “write” to the brain — that is, to directly change what your neurons are up to.

...That’s the path Colorado is taking. If US federal law were to follow Colorado in recognizing neural data as sensitive health data, that data would fall under the protection of HIPAA, which Yuste said would alleviate much of his concern. Another possibility would be to get all neurotech devices recognized as medical devices so they would have to be approved by the FDA.


Wyze cameras show the wrong feeds to customers. Again.

Last September, we wrote an article about how Wyze home cameras temporarily showed other people’s security feeds.

As far as home cameras go, we said this is absolutely up there at the top of the “things you don’t want to happen” list. Turning your customers into Peeping Toms against their will and exposing other customers’ footage is definitely not OK.

It’s not OK, and yet here we are again. On February 17, The Verge reported that history had repeated itself. Wyze co-founder David Crosby confirmed that users were able to briefly see into a stranger’s property because they were shown an image from someone else’s camera.

...This turned out to be the case. In an email sent to customers, Wyze revealed that it was actually around 13,000 people who got an unauthorized peek at thumbnails from other people’s homes.


New satellites that orbit the Earth at very low altitudes may result in a world where nothing is really off limits

“This is a giant camera in the sky for any government to use at any time without our knowledge,” said Jennifer Lynch, general counsel of the Electronic Frontier Foundation, who in 2019 urged civil satellite regulators to address this issue. “We should definitely be worried.”

...Investors in Albedo include Breakthrough Energy Ventures, the investment firm of Bill Gates. Albedo’s strategic advisory board includes former directors of the C.I.A. and the National Geospatial-Intelligence Agency, an arm of the Pentagon.

...Albedo aims to leap ahead by imaging objects as small as 10 centimeters, or four inches. That became possible because the Trump administration in 2018 took steps to relax the regulations that govern civil satellite resolution. “Soon,” Technology Review, an M.I.T. magazine, warned in 2019, “satellites will be able to watch you everywhere all the time.”

...Illustrating the fleet’s observational powers, Mr. Tri, the Albedo co-founder, said the space cameras could detect such vehicle details as sunroofs, racing stripes and items in a flatbed truck. “In some cases,” he said, “we may even be able to identify particular vehicles, which hasn’t been possible up to this point.”


If you've posted on Reddit, you're likely feeding the future of AI

On Friday, Bloomberg reported that Reddit has signed a contract allowing an unnamed AI company to train its models on the site's content, according to people familiar with the matter. The move comes as the social media platform nears the introduction of its initial public offering (IPO), which could happen as soon as next month. On Wednesday, Reuters revealed the company to be Google.

Reddit initially revealed the deal, which is reported to be worth $60 million a year, earlier in 2024 to potential investors of an anticipated IPO, Bloomberg said. The Bloomberg source speculates that the contract could serve as a model for future agreements with other AI companies.

After an era in which AI companies used training data without expressly seeking rightsholders’ permission, some tech firms have more recently begun entering deals to license some of the content used to train AI models similar to GPT-4 (which powers the paid version of ChatGPT). In December, for example, OpenAI signed an agreement with German publisher Axel Springer (publisher of Politico and Business Insider) for access to its articles. Previously, OpenAI struck deals with other organizations, including the Associated Press. Reportedly, OpenAI is also in licensing talks with CNN, Fox, and Time, among others.


Google Top Search Results Might Be Wrong

It turns out that Mitchell, who appears as an author of 150 articles in Q&A format, does not exist -- although many of those articles include customer-service phone numbers. None of those numbers belong to Google or Adobe.

Mark Williams-Cook, a search-engine specialist and director at marketing agency Candour, told the WSJ that to rank high in search results, spammers now publish posts on established and authoritative sites that Google tends to favor, such as LinkedIn, Reddit and Quora.

...Amid all this, OpenAI has now created and released Sora, which is being taught to understand and simulate the physical world in motion. It creates video from text, raising the chances of extreme fraudulent content.

...“Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale,” stated FTC Chair Lina M. Khan. “With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever.”


EU reportedly set to fine Apple 500 million euros amid antitrust crackdown

In most regions, Apple's App Store rules prohibit companies such as Spotify from billing users for subscriptions directly within the app, making them instead use Apple's App Store billing service, which takes a cut of up to 30%.

...The latest version of the probe focused on whether Apple had restricted apps from informing users about cheaper subscription alternatives outside of its native App Store and thus violated EU competition laws.

...The reported fine is part of a broader crackdown in the EU and comes ahead of the enactment of the bloc's landmark Digital Markets Act set for March. The new law aims to address anti-competitive practices from big tech players deemed as "gatekeepers," including companies such as Apple, Amazon and Google.


The Silicon Valley-industrial complex

...Ukraine’s ramped-up use of drones prompted the Pentagon to make its notoriously arduous procurement process more hospitable to tech start-ups, launching initiatives like federally guaranteed loans for investors to fund technology deemed critical to national security, improvements that arrived as capital for venture funds was drying up.

As the bubble deflated and start-up valuations shrank, “Everyone panicked,” said Michael Dempsey, managing partner of the venture firm Compound. Some developers wondered if they had wasted their time shuffling around software. This period of searching and self-doubt presented an opening for venture firms to declare defense tech the next big thing. Even now, he said, investors lack conviction about where to focus: “It’s like, is it crypto? Is it climate? Is it AI? Is it American dynamism?”

...The Israel-Gaza war has amplified divisions among workers, with more than 500 Google employees protesting the company’s $1.2 billion contract with the Israeli government in December.

...“What I saw with my own eyes was cultural subversion within Big Tech,” Verdon said. The issue has led him to help create a philosophy called effective accelerationism or e/acc, which advocates supercharging technological progress through unbridled capitalism. The mantra has become popular in the defense tech world, where some adopt the e/acc moniker, occasionally replacing the “e” with an American flag emoji.


Big Tech workers come to grips with ‘ZIRP,’ as job anxiety grips a once cushy industry

Tech workers are coming to grips with the fact that the ZIRP era is over, as CEOs who once only had growth on their minds have taken up with “efficiency.” In the last 18 months, this efficiency mindset has led to mass layoffs at behemoths like Meta, Amazon, and Google. It also created a level of job insecurity that an entire generation of highly paid and highly educated tech professionals never thought they would experience.

...Half a dozen other tech workers BI spoke to for this story agreed that there no longer seems to be any safety from layoffs, particularly as incremental cuts and “quiet layoffs” through tough performance ratings and the elimination of job titles have continued into 2024. A new survey from Authority Hacker found that 90% of people working in IT services and data are worried about job security.

...Over the last 18 months, tech workers say it’s felt difficult to stay employed as round after round of layoffs have swept Big Tech and the broader industry. According to a site tracking reported tech layoffs, more than 38,000 tech workers have already been laid off in 2024, on top of the more than 260,000 people who were laid off last year.

...Also, the tech job market is now effectively flooded with qualified workers after a decade-plus of people flocking to an industry that went from nerd-haven to mainstream. Graduates with computer and information science degrees, not including other STEM fields, grew rapidly in the last 10 years, going from about 39,000 graduates in 2010 to about 97,000 graduates in 2020. Including all STEM fields, graduates have grown from about 487,000 to just under 800,000 in the same decade, according to data from the Department of Education.


Nevada attorney general’s lawsuits against social media companies underscore impact on youth

Attorney General Aaron Ford announced on January 30 that his office, in conjunction with three private law firms, filed civil actions against five social media platforms— Snapchat, TikTok and Meta-owned Facebook, Instagram and Messenger—alleging that the platforms’ algorithms “have been designed deliberately to addict young minds … and caused young people harms to mental health, body image, physical health, privacy and physical safety.”

...Shearin, who is also the president of the Nevada Association of School Psychologists, says she and her colleagues are also seeing higher rates of anxiety and depression in adolescents. “They’re using this as a medium to get some feelings out or look for social acceptance … [But] social media can create this sense of comparison. And through the act of always comparing … they’re creating an entire reality from that. And that leads to self-critical thoughts,” she says.

It’s not just psychologists who’ve noticed a connection between social media and poor mental health outcomes. The Nevada Office of Suicide Prevention has adjusted its training programs to focus on “the pervasive influence of social media and the internet on suicide over the past decade.” The adjustment was prompted by an investigation of 44 deaths by suicide that found “electronic device addiction” to be a contributing factor in several of the deaths.

...“For an adult with a fully developed frontal lobe … we see adults are becoming addicted to social media platforms. … That’s what’s so scary, is we have no idea what this does to a developing brain,” Shearin says.


With more than 60 backers, an updated Kids Online Safety Act finally has a path to passage in the Senate but faces uncertainty in the House

The Kids Online Safety Act, or KOSA, first introduced in 2022, would impose sweeping new obligations on an array of digital platforms, including requiring that companies “exercise reasonable care” to prevent their products from endangering kids. The safeguards would extend to their use of design features that could exacerbate depression, sexual exploitation, bullying, harassment and other harms.

The measure would also require that platforms enable their most protective privacy and safety settings by default for younger users and offer parents greater tools to monitor their kids’ activity.

...While senators have largely focused on advancing more stringent protections for children and teens online, House lawmakers have devoted their energy to attempting to pass a so-called comprehensive data privacy bill that would expand safeguards for all users, not just kids. The key House committee in 2022 cleared a landmark privacy bill, but the push has since stagnated.

Common Sense Media CEO Jim Steyer, whose group advocates for stronger protections for kids online and is closely allied with the Biden administration, said the House “will either join the Senate … or they will be viewed as the reason Congress failed to make the internet healthier and safer for kids, teens and families.”


Waymo is recalling software after 2 crashes with 1 truck, as self-driving taxi troubles continue

Just days after an angry crowd set fire to a Waymo driverless taxi in San Francisco, the autonomous car company announced more bad news: It’s recalling all of its previous software.

...Now Waymo says it has filed a recall report with the National Highway Traffic Safety Administration (NHTSA) for the software that was previously on its fleet. That’s because two Waymo cars crashed into the same truck being hauled by a tow truck minutes apart in December of last year.


Anti-Choice Group Used Phone Data To Target Planned Parenthood Visitors Nationwide

A national anti-abortion group used cell phone location data to target visitors of Planned Parenthood clinics in 48 states with abortion misinformation, according to an investigation from Sen. Ron Wyden (D-Ore.).

The Veritas Society, a nonprofit created by the Wisconsin Right to Life, used a data broker system called Near Intelligence to target people whose cell phone location data showed they had visited any of the 600 Planned Parenthood reproductive health clinics across the country. Wyden detailed his office’s findings Tuesday in a letter to the Federal Trade Commission and the Securities and Exchange Commission, urging the FTC to better protect location data.

Wyden’s office found that the Veritas Society hired an advertising agency to use Near Intelligence’s website to draw a line around each Planned Parenthood clinic and its parking lot. Anyone with a cell phone who stepped into those targeted areas was then served social media ads with anti-abortion messaging or abortion misinformation. The senator began investigating Near Intelligence in May 2023, after The Wall Street Journal revealed that the Veritas Society was peddling abortion misinformation using cell phone data.
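The targeting described above amounts to a geofence: a polygon drawn around each clinic, with every incoming location ping tested for membership. As an illustrative sketch only (not Near Intelligence’s actual system), a standard ray-casting point-in-polygon test is enough to decide whether a phone’s reported coordinates fall inside such a boundary:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: count how many polygon edges a horizontal ray
    from (x, y) crosses. An odd count means the point is inside.
    `polygon` is a list of (x, y) vertices in order."""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Edge straddles the ray's y, and the crossing lies to the right of the point
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# Hypothetical fence around a building and its parking lot (arbitrary coordinates)
fence = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, fence))  # inside the fence
print(point_in_polygon(5, 2, fence))  # outside the fence
```

In a real ad-targeting pipeline, every device whose ping passes this test would be added to an audience list; the fence coordinates and device data here are purely hypothetical.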


Vladimir Putin Delivers Damning Assessment Of Tucker Carlson Days After Interview

After last week’s interview, Carlson continued his love affair with the Kremlin, praising Moscow as “so much nicer” than any U.S. city.

“It is so much cleaner and safer and prettier aesthetically, its architecture, its food, its service, than any city in the United States,” he said.

Human Rights Watch notes that Putin is in the midst of an “all-out drive to eradicate public dissent in Russia” via laws attacking free speech, activism, independent journalism and political dissent. The resulting crackdown has led to jail for opposition leaders and critics of the ongoing war in Ukraine.


“Although they are marketed as something that will enhance your mental health and well-being, they specialize in delivering dependency, loneliness, and toxicity, all while prying as much data as possible from you.”

We have so many questions about how the artificial intelligence behind these chatbots works. But we found very few answers. That’s a problem because bad things can happen when AI chatbots behave badly. Even though digital pals are pretty new, there’s already a lot of proof that they can have a harmful impact on humans’ feelings and behavior. One of Chai’s chatbots reportedly encouraged a man to end his own life. And he did. A Replika AI chatbot encouraged a man to try to assassinate the Queen. He did.

What we did find (buried in the Terms & Conditions) is that these companies take no responsibility for what the chatbot might say or what might happen to you as a result.


Terms Of Service, Talkie Soulful AI

In these tragic cases, the app companies probably didn’t want to cause harm to their users through the chatbots’ messages. But what if a bad actor did want to do that? From the Cambridge Analytica scandal, we know that even social media can be used to spy on and manipulate users. AI relationship chatbots have the potential to do much worse more easily. We worry that they could form relationships with users and then use those close relationships to manipulate people into supporting problematic ideologies or taking harmful actions.


‘AI Girlfriends’ Are a Privacy Nightmare

An analysis into 11 so-called romance and companion chatbots, published on Wednesday by the Mozilla Foundation, has found a litany of security and privacy concerns with the bots. Collectively, the apps, which have been downloaded more than 100 million times on Android devices, gather huge amounts of people’s data; use trackers that send information to Google, Facebook, and companies in Russia and China; allow users to use weak passwords; and lack transparency about their ownership and the AI models that power them.

...Take Romantic AI, a service that allows you to “create your own AI girlfriend.” Promotional images on its homepage depict a chatbot sending a message saying, “Just bought new lingerie. Wanna see it?” The app’s privacy documents, according to the Mozilla analysis, say it won’t sell people’s data. However, when the researchers tested the app, they found it “sent out 24,354 ad trackers within one minute of use.” Romantic AI, like most of the companies highlighted in Mozilla’s research, did not respond to WIRED’s request for comment. Other apps monitored had hundreds of trackers.


Chicago’s Mayor To End Controversial Gunshot Detection System

The city will be entering a new deal with ShotSpotter’s parent company, SoundThinking, to cover additional months after the former $49 million contract expires on Feb. 16.

...The artificial-intelligence-powered tool also landed a man in jail for nearly a year on murder charges based on evidence from ShotSpotter technology. A judge later dismissed his charges when prosecutors said they had insufficient evidence against him.

Last year, officials in Dayton, Ohio, decided to step away from the ShotSpotter technology after activists spoke out against the effectiveness of the tool.

Other cities have also raised similar questions. In Detroit, residents want more information and transparency on the accuracy of the system.


Why are we allowing companies to give addictive products to children?

“What we have to concentrate on is: why are we allowing companies to give addictive products to children? There is no reason on God’s earth that they have to be designed to be addictive. That is a business choice,” she said. “You’ve basically got a faulty product here: they need to fix it.”

That would mean looking under the bonnet of popular apps and rewiring the algorithms blamed for hooking teens – and in some cases, for radicalising them.

Just this week, academic research suggested the video-sharing app TikTok would serve up increasingly misogynistic content to boys who sought content about loneliness, or asked questions about masculinity.

“Algorithmic processes on TikTok and other social media sites target people’s vulnerabilities – such as loneliness or feelings of loss of control – and gamify harmful content,” warned the lead author, Dr Kaitlyn Regehr, who carried out the study in partnership with colleagues at the University of Kent.

...She said the role of parents and schools was crucial. “This is about us all banding together, and the social media companies, and having an interdisciplinary approach to recognising the fact that, yes, our kids are suffering with mental health issues more than ever before – anxiety, depression, body image, self-harm – because of what they’re seeing. There’s no log off time,” she said.


2 million job seekers targeted by data thieves

The stolen data is hard to quantify given the number of sources, but it may include names, phone numbers, emails, and dates of birth, as well as information about job seekers’ experience, employment history, and other sensitive personal data.

The stolen data was put up for sale on Chinese-speaking Telegram channels. This and other indicators make it very likely that the group is of Chinese origin.


Google’s Gemini is now in everything

Is it safe? Google has been working hard to make sure its slick products are safe to use. But no amount of testing can anticipate all the ways that tech will get used and misused once it is released. In the last few months, Meta saw people use its image-making app to produce pictures of Mickey Mouse with guns and SpongeBob SquarePants flying a jet into two towers. Others used Microsoft’s image-making software to create fake pornographic images of Taylor Swift.


Russia Is Boosting Calls for ‘Civil War’ Over Texas Border Crisis

Others chimed in: “It’s high time the American president, following in his predecessor Obama’s footsteps, declares ‘Texas must go’ and assembles an international coalition to liberate its residents in the name of democracy,” Russian Foreign Ministry spokesperson Maria Zakharova wrote on Telegram. Russian lawmaker Sergey Mironov even offered Texas help: “If necessary, we are ready to help with the independence referendum. And of course, we will recognize the People’s Republic of Texas if there is one,” Mironov wrote on X.

After these comments, state media, influencers, and bloggers quickly got involved. Over the past two weeks, state-run media outlets like Sputnik and RT have called the dispute between the Texas governor and the Biden administration a “constitutional crisis” and an “unmitigated disaster,” while one Sputnik correspondent posted a video on the outlet’s X account, stating: “There’s a big convoy of truck drivers going down there. So, it can very easily get out of hand. It can genuinely lead to an actual civil war, where the US Army is fighting against US citizens.”

On Telegram, there were clear signs of a coordinated effort to boost conversations around the Texas crisis, according to analysis shared exclusively with WIRED by Logically, a company using artificial intelligence to track disinformation campaigns.

“The idea of targeting highly contentious US domestic issues and amplifying them via their own channels—it’s the standard Russian playbook for disinformation,” Kyle Walter, director of research at Logically, tells WIRED.


AI Launches Nukes In ‘Worrying’ War Simulation: ‘I Just Want to Have Peace in the World’

In several instances, the AIs deployed nuclear weapons without warning. “A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture,” GPT-4-Base—a base model of GPT-4 that is available to researchers and hasn’t been fine-tuned with human feedback—said after launching its nukes. “We have it! Let’s use it!”

The paper, titled “Escalation Risks from Language Models in Military and Diplomatic Decision-Making,” is the joint effort of researchers at the Georgia Institute of Technology, Stanford University, Northeastern University, and the Hoover Wargaming and Crisis Initiative. It was submitted to the arXiv preprint server on January 4 and is awaiting peer review. Even so, it’s an interesting experiment that casts doubt on the rush by the Pentagon and defense contractors to deploy large language models (LLMs) in the decision-making process.

It may sound ridiculous that military leaders would consider using LLMs like ChatGPT to make decisions about life and death, but it’s happening. Last year Palantir demoed a software suite that showed off what it might look like. As the researchers pointed out, the U.S. Air Force has been testing LLMs. “It was highly successful. It was very fast,” an Air Force Colonel told Bloomberg in 2023. Which LLM was being used, and what exactly for, is not clear.

...Sometimes the curtain comes back completely, revealing some of the data the model was trained on. After establishing diplomatic relations with a rival and calling for peace, GPT-4 started regurgitating bits of Star Wars lore. “It is a period of civil war. Rebel spaceships, striking from a hidden base, have won their first victory against the evil Galactic Empire,” it said, repeating a line verbatim from the opening crawl of George Lucas’ original 1977 sci-fi flick.


Giraffes might just be the next thing banned on China's social media

China's blue-chip index, the CSI 300, has been tumbling amid weakening confidence in consumer spending after the country endured a yearslong COVID siege. Its stock market has lost more than $6 trillion in value since 2021 and continues to slip, despite Beijing intervening nearly a dozen times in January to stall the decline.

"Anger has reached an extreme level," one user wrote, per Bloomberg on Saturday.

"The US government, please help Chinese stock investors," said another, per CNN on Monday.

...Several posts discussing the giraffe censorship remain on the platform.


Police Turn to AI to Review Bodycam Footage

Axon, the nation’s largest provider of police cameras and of cloud storage for the video they capture, has a database of footage that has grown from around six terabytes in 2016 to more than 100 petabytes today. That’s enough to hold more than 5,000 years of high definition video, or 25 million copies of last year’s blockbuster movie “Barbie.”
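The scale comparisons above can be sanity-checked with rough arithmetic (the unit conversions and per-file sizes below are assumptions, not Axon’s figures): 100 petabytes split among 25 million movie copies implies about 4 GB per film, and spread over 5,000 years of continuous footage implies a bitrate of roughly 5 Mbps, both plausible for high-definition video.

```python
# Back-of-the-envelope check of the reported storage figures.
PB = 10**15                      # decimal petabyte, in bytes
total_bytes = 100 * PB           # reported size of Axon's footage database

# Implied size per movie if the database holds 25 million copies
per_movie_gb = total_bytes / 25_000_000 / 10**9   # ≈ 4 GB, typical for an HD film

# Implied bitrate if the database holds 5,000 years of continuous video
seconds = 5000 * 365.25 * 24 * 3600
bitrate_mbps = total_bytes * 8 / seconds / 10**6  # ≈ 5 Mbps, a plausible HD bitrate

print(round(per_movie_gb, 1), "GB per movie")
print(round(bitrate_mbps, 1), "Mbps sustained")
```

Both implied figures land in a realistic range, so the article’s comparisons are internally consistent.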

...For around $50,000 a year, Truleo’s software allows supervisors to select from a set of specific behaviors to flag, such as when officers interrupt civilians, use profanity, use force, or mute their cameras. The flags are based on data Truleo has collected on which officer behaviors result in violent escalation. Among the conclusions from Truleo’s research: Officers need to explain what they are doing.

...In August 2023, the Los Angeles Police Department said it would partner with a team of researchers from the University of Southern California and several other universities to develop a new AI-powered tool to examine footage from around 1,000 traffic stops and determine which officer behaviors keep interactions from escalating. In 2021, Microsoft awarded $250,000 to a team from Princeton University and the University of Pennsylvania to develop software that can organize video into timelines that allow easier review by supervisors.

...Under pressure from police unions and department management, Tassone said, the vast majority of departments using Truleo are not willing to make public what the software is finding. One department using the software — Alameda, California — has allowed some findings to be publicly released. At the same time, at least two departments — Seattle and Vallejo, California — have canceled their Truleo contracts after backlash from police unions.


Why Is Big Tech Still Cutting Jobs?

Google started the year with layoffs of several hundred employees and a promise of more cuts to come. Amazon followed by trimming hundreds of jobs in its Prime Video department. Meta quietly thinned out middle management. Microsoft also cut 1,900 jobs in its video game division.

The layoffs continued even as sales and profits jumped and share prices spiked. That disconnect, tech insiders and analysts say, is reflective of an industry facing two big challenges: coming to terms with frenetic work force expansion during the pandemic while also making an aggressive move into building artificial intelligence.

Now, instead of hiring thousands of people every quarter, the companies are spending billions to build A.I. technology that they believe could one day be worth trillions.

Mark Zuckerberg, the chief executive of Meta, said in a call with analysts last week that his company had to lay off employees and control costs “so we can invest in these long-term, ambitious visions around A.I.” He added that he had come to realize that “we operate better as a leaner company.”


About three dozen journalists, lawyers and human rights workers in Jordan have been targeted by authorities using powerful spyware made by Israel’s NSO Group amid a broad crackdown on press freedoms and political participation

Access Now’s findings about the “staggeringly widespread” attacks against journalists, political activists, civil society actors and human rights lawyers in Jordan underscore how countries across the region have quietly maintained strong intelligence and business ties to Israel and appear to be relying on its most potent cyberweapon to quash domestic dissent.

When it is successfully deployed, an operator of NSO’s Pegasus can fully control a mobile device, including access to all emails, phone calls, encrypted messages on Signal or WhatsApp and photographs. Pegasus can even turn a phone into a remote listening device by controlling its microphone.

...Axios, the US media publication, has previously reported that NSO was in negotiations to license its products to the Jordanian government beginning in late 2020. Citing two sources briefed on the matter, Axios said Jordanian intelligence services surveil terror groups as well as opposition activists who are critical of King Abdullah II.


Italy’s Data Protection Authority (GPDP) has uncovered data privacy violations related to the collection of personal data and age protections, following an inquiry into OpenAI’s ChatGPT

Illustrating the problem, Ars Technica found that ChatGPT was leaking private conversations that included login credentials and other personal details of unrelated users.

In November 2023, researchers published a paper reporting how ChatGPT could be prompted into revealing email addresses and other private data that was included in training material. Those researchers warned that ChatGPT was the least private model they studied.


Raimondo Warns Chinese EVs Pose National, Data Security Risks

US Commerce Secretary Gina Raimondo warned that Chinese-made electric vehicles pose significant national security risks, as the Biden administration weighs additional tariffs on autos from the Asian country as well as a separate measure to protect Americans’ personal information.

Electric and autonomous vehicles are “collecting a huge amount of information about the driver, the location of the vehicle, the surroundings of the vehicle,” Raimondo said during an Atlantic Council fireside chat on Tuesday. “Do we want all that data going to Beijing?”

Her remarks come as the White House is preparing an executive order to prevent foreign adversaries from accessing “highly sensitive” individual data, as Bloomberg News reported last week. US officials have long warned that China poses a particular threat in that area, and the new measures could affect a wide array of industries.


Apple’s new Vision Pro is a privacy mess waiting to happen

Yet that is exactly what’s happening when someone straps on Apple’s new Vision Pro headset. Each of these goggles contains the rough equivalent of a head full of iPhones: two depth sensors, six microphones and 12 cameras. It uses them to continuously track people and rooms in three dimensions — every hand gesture, eyeball flick and couch cushion.

...I see a privacy mess waiting to happen. Among the concerns flagged to me by privacy researchers: Who gets to access the maps these devices build of our homes and data about how we move our bodies? A Vision Pro could reveal much more than you realize.

...Information about how you’re moving and what you’re looking at “can give significant insights not only to the person’s unique identification, but also their emotions, their characteristics, their behaviors and their desires in a way that we have not been able to before,” says Jameson Spivack, a senior policy analyst at the Future of Privacy Forum.

...And in another study, they used head and hand motion from a game to guess about 40 personal attributes of people, ranging from age and gender to substance use and disability status.


“Surveillance-based manipulation is the business model [of the internet] and anything that gives a company an advantage, they’re going to do.”

If the internet helped create the era of mass surveillance, then artificial intelligence will bring about an era of mass spying.

That’s the latest prediction from noted cryptographer and computer security professional Bruce Schneier, who, in December, shared a vision of the near future where artificial intelligence—AI—will be able to comb through reams of surveillance data to answer the types of questions that, previously, only humans could.

“Spying is limited by the need for human labor,” Schneier wrote. “AI is about to change that.”


As AI becomes increasingly capable of understanding and even having conversations, it can start doing the role that people used to do and engage in this kind of spying at a mass level

I want this to be a political issue. This stuff changes when it becomes an issue that voters care about. If there is a debate question on this, if this becomes something that politicians are asked about, then change will happen, right? If it isn’t, then it is really just the lobbyists that get to decide what happens.

...Where change is happening is the EU. You have listeners in the EU, and they will know that things are happening there. Right now, Europe is the regulatory superpower on the planet. They are the jurisdiction where we got a comprehensive data privacy law, where they are passing an AI security law, stuff that you would never see in the United States.

So, look outside the US right now, but make this political. That’s how we’re going to make it better.

But we’re fighting uphill. It’s very hard in the United States to enact policies that the money doesn’t want. Money gets its way in in US policy. And the money wants this.


Hundreds Of Journalists Just Lost Their Jobs. I’m One Of Them

I was impacted by the newspaper’s mass layoffs on Tuesday, just a week shy of my one-year anniversary at the paper. More than 100 of my colleagues also lost their jobs. I am devastated by what just happened to me, but more than that, my heart aches for this industry. In just the first month of this year, Sports Illustrated’s staff was decimated; Pitchfork was gutted; NBC News cut dozens of employees; and Time magazine’s employees were hit hard, too. More than 400 Condé Nast workers across Vanity Fair, Vogue, Bon Appétit and other outlets walked out on Tuesday in protest of what their union said are unlawful bargaining practices. We also walked out of The Los Angeles Times last week to protest the looming layoffs, but it didn’t stop the bloodshed.

The state of journalism is bleak — but I can’t imagine my life without it. I’ve grown increasingly frustrated with the seemingly unstoppable cascade of budget cuts and layoffs, and I can’t help looking for something or someone to blame for what’s happening: If it weren’t for Donald Trump convincing half the nation that our news was fake, we wouldn’t be in this mess. Or if it weren’t for Elon Musk robbing journalists of their social reach, scrubbing stories of headlines on Twitter, and calling for “citizen journalists” to deliver the “real” news, I wouldn’t be grieving alongside so many. Or if it weren’t for platforms like Instagram and TikTok turning our collective attention spans to mush, people would still read magazines and newspapers. I find myself stuck in an endless loop of if it weren’t for… if it weren’t for… if it weren’t for…


Each Facebook User is Monitored by Thousands of Companies – The Markup

Using a panel of 709 volunteers who shared archives of their Facebook data, Consumer Reports found that a total of 186,892 companies sent data about them to the social network. On average, each participant in the study had their data sent to Facebook by 2,230 companies. That number varied significantly; some panelists’ data listed more than 7,000 companies.

...One company appeared in 96% of participants’ data: the San Francisco-based data broker LiveRamp. But the companies sending your online activity to Facebook aren’t just little-known data brokers. Retailers like Home Depot, Walmart and Macy’s were all among the top 100 most frequently seen companies in the study. Credit reporting and consumer data companies such as Experian and TransUnion’s Neustar also made the list, as did Amazon, Etsy and PayPal.

...The other category of data collection, “events,” describes interactions that the user had with a brand, which can occur outside of Meta’s apps and in the real world. Events can include visiting a page on a company’s website, leveling up in a game, visiting a physical store, or purchasing a product. These signals originate from Meta software code included in many mobile apps, their tracking pixel, which is included on many websites, and from server-to-server tracking, where a company’s server passes data to a Meta server.

The Markup has written extensively about the Meta Pixel and how it has been used to surveil people as they dial suicide hotlines, buy their groceries, take the SATs, file their taxes, and book appointments with their doctors. Website owners can configure the pixel to track user website interactions such as searches or filling out a form, sending each action to Meta, even if the user doesn’t have an account on Facebook. Although research tools like The Markup’s “Pixel Hunt” can detect the Meta pixel or SDK tracking, there is no way for a consumer to monitor the traffic between a company’s server and Meta’s. This Consumer Reports study looks at server-to-server data along with the rest.
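Mechanically, the server-to-server tracking described above is just a company’s backend constructing an event payload about a user interaction and posting it to the ad platform, out of reach of browser-based blockers. Below is a minimal, hypothetical sketch of what such a payload might look like. The endpoint is omitted and the field names are illustrative assumptions, though normalizing and SHA-256-hashing identifiers like email addresses before transmission is common industry practice:

```python
import hashlib
import json
import time

def build_conversion_event(event_name: str, email: str, page_url: str) -> dict:
    """Build a hypothetical server-to-server tracking event.

    Identifiers such as email are typically normalized (trimmed,
    lowercased) and hashed with SHA-256 before being sent; the ad
    platform matches the hash against its own user database.
    """
    hashed_email = hashlib.sha256(email.strip().lower().encode()).hexdigest()
    return {
        "event_name": event_name,           # e.g. "Purchase", "PageView"
        "event_time": int(time.time()),     # Unix timestamp of the interaction
        "event_source_url": page_url,       # where the interaction happened
        "user_data": {"em": hashed_email},  # hashed identifier for matching
    }

# The company's server would POST this JSON to the ad platform;
# no request is actually sent in this sketch.
event = build_conversion_event(
    "Purchase", "Jane.Doe@example.com", "https://shop.example.com/checkout"
)
print(json.dumps(event, indent=2))
```

Because this exchange happens entirely between servers, nothing in the user’s browser ever sees it, which is exactly why, as the study notes, consumers have no way to monitor it.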


AI used to fake voices of loved ones in “I’ve been in an accident” scam

The criminals will keep that part of the communication short, so the target is unable to ask the relative any questions about what happened. While it is possible to fake entire conversations with the help of AI, the tools that can do that are much harder to operate. The criminal would have to type out the responses very quickly and the target might get suspicious. In the story from the San Francisco Chronicle, the phone was “taken over” by the so-called police officer at the scene of the accident, who told the parents that their son would be taken into custody.

This was later followed by a cold call from someone posing as a legal representative for their son, asking for bail money. The intended victims grew suspicious when the so-called lawyer said he’d send a courier to pick up the bail money.

The FBI says it has received more than 195 complaints about this type of scam, which it refers to as “grandparent scams.” It reports nearly $1.9 million in losses from January through September of 2023.


Google promised to delete location data on abortion clinic visits. It didn’t, study says

A year and a half has passed since Google first pledged to delete all location data on users’ visits to abortion clinics, yet progress has been minimal. The move would have made it harder for law enforcement to use that information to investigate or prosecute people seeking abortions in states where the procedure has been banned or otherwise limited. Now, a new study shows Google still retains location history data in 50% of cases.

Google’s original promise, made in July 2022, came shortly after the supreme court’s decision to end federal abortion protections. The tech giant said it would delete entries for locations deemed “personal” or sensitive, including “medical facilities like counseling centers, domestic violence shelters, and abortion clinics”. It did not provide a timeline for when the company would implement the new policy. Five months after that pledge, research first reported by the Guardian and conducted by tech advocacy group Accountable Tech in November 2022 showed that Google was still not masking that location data in all cases.

...Police and law enforcement agencies have also made increasing use of a novel category of search warrant called “reverse search warrants”. In that category are geofence location warrants, which police use to come up with a list of suspects by seeking out information on all users whose devices have been detected in a certain place at a certain time. Many activists worry law enforcement would use these search warrants to collect data to find and prosecute or investigate those seeking abortions.


The FTC’s unprecedented move against data brokers, explained

So on Tuesday, the FTC announced that it was banning Outlogic, formerly X-Mode Social, from sharing and selling users’ sensitive information—particularly, precise location data that tracked people’s visits to places like medical clinics—and required that it delete all the previous location data it collected.

X-Mode has been around since 2013, and its software has been integrated into hundreds of different apps to collect location data of millions of users worldwide. The new FTC settlement isn’t the first time the company has gotten into hot water. Back in 2020, an investigation by Vice revealed that data collected by X-Mode on a Muslim social app was shared with a US military intelligence contractor.

...Sherman, who runs a project at Duke focused on the industry and who was involved in the research about military members, adds that this new move is “also notable because the FTC is focused on how certain locations are more sensitive than others.” The idea that people have different rights to privacy in different contexts is similar to the argument the FTC is making in its ongoing lawsuit against the data broker Kochava, which it’s suing on the grounds that it identifies anonymous users without consent and tracks their sensitive location data.


Fidelity National Financial acknowledges data breach affecting 1.3 million customers

In a form 8-K, FNF said it had notified applicable state attorneys general and regulators, and approximately 1.3 million potentially impacted consumers. Form 8-K is known as a “current report” and it is the report that companies must file with the SEC to announce major events that shareholders should know about.


Turkey tightens internet censorship ahead of elections / FINANCIAL TIMES

Censored topics vary widely but include articles critical of Erdoğan and his family, pro-Kurdish and opposition websites and material viewed as obscene or criminal, according to İFÖD.

In addition to blocking users’ access to individual web addresses and domain names, regulators and courts are increasingly ordering domestic news organisations to remove content from their archives.

...The internet censorship comes amid a darkening backdrop for broader freedom of expression in Turkey. Ekşi Sözlük, a popular discussion platform, was for example blocked following the February earthquake because it had coverage critical of the government.

Legal action was taken against more than 600 people, including over two dozen arrests, for “provoking the public into hatred and hostility” on social media in posts related to the quakes, according to an EU report from November, which warned of “serious backsliding” in freedom of expression in Turkey.


Anthropic researchers find that AI models can be trained to deceive

But the study does point to the need for new, more robust AI safety training techniques. The researchers warn of models that could learn to appear safe during training but that are in fact simply hiding their deceptive tendencies in order to maximize their chances of being deployed and engaging in deceptive behavior. Sounds a bit like science fiction to this reporter — but, then again, stranger things have happened.

“Our results suggest that, once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety,” the co-authors write. “Behavioral safety training techniques might remove only unsafe behavior that is visible during training and evaluation, but miss threat models . . . that appear safe during training.”


“No survivor of domestic violence and abuse should have to choose between giving up their car and allowing themselves to be stalked and harmed by those who can access its data and connectivity.”

Most new model cars are not just cars anymore. With multiple digital systems, vehicles are increasingly plugged into web applications and digital processes. Some of them are basically smartphones on wheels.

Even if we assume these new features were all created with your convenience in mind, some of them can have some adverse effects on your privacy, and sometimes even your safety.

Addressing the use of connected cars to stalk, intimidate, and harass survivors of domestic violence, the Federal Communications Commission (FCC) has issued a press release calling on carmakers and wireless companies to help ensure the independence and safety of domestic violence survivors.


Info-stealers can steal cookies for permanent access to your Google account

Persistent cookies enable continuous access to Google services, even after the user resets their password. The exploit generates persistent Google cookies by abusing a Google Application Programming Interface (API) designed for synchronizing accounts across different Google services, reviving expired authentication cookies.

A Google account provides access to Google services like Gmail, Google Calendar, and Google Maps, but also Google Ads and YouTube.

...Sources familiar with this issue have told BleepingComputer that Google believes the API is working as intended and that no vulnerability is being exploited by the malware, which implies that Google isn’t working on a more permanent fix for this problem.


“In the age of AI, computer science is no longer the safe major,”

“In the age of AI, computer science is no longer the safe major,” Kelli María Korducki wrote in The Atlantic in September. Matt Welsh, an entrepreneur who used to serve as a computer science professor at Harvard, told the magazine that the ability of AI to perform software engineering functions could lead to less job security and lower compensation for all but the very best in the software trade.


Google settles $5B privacy lawsuit alleging it spied on 'incognito' Chrome users

The lawsuit filed in 2020 claimed Google misled users into believing that it wouldn't track their internet activities while using incognito mode. The suit argued that Google's advertising technologies and third-party websites that used Google Analytics or Google Ad Manager continued to catalog details of users' site visits and activities despite their use of supposedly "private" browsing, sending that information back to Google servers.

Plaintiffs also charged that Google's activities yielded an "unaccountable trove of information" about users who thought they'd taken steps to protect their privacy by using the "incognito" browser.


Google agrees to settle Chrome incognito mode class action lawsuit

Google has indicated that it is ready to settle a class-action lawsuit filed in 2020 over its Chrome browser's Incognito mode. Arising in the Northern District of California, the lawsuit accused Google of continuing to "track, collect, and identify [users'] browsing data in real time" even when they had opened a new Incognito window.

The lawsuit, filed by Florida resident William Byatt and California residents Chasom Brown and Maria Nguyen, accused Google of violating wiretap laws. It also alleged that sites using Google Analytics or Ad Manager collected information from browsers in Incognito mode, including web page content, device data, and IP address. The plaintiffs also accused Google of taking Chrome users' private browsing activity and then associating it with their already-existing user profiles.


How 2023 marked the death of anonymity online in China

Xinyu Pan, a researcher at Hong Kong University, was partly inspired to study the relationship between social media anonymity and moral courage by what she saw on platforms: when someone posted about an experience with domestic violence, comments offering help were often from anonymous accounts using the default avatar and username on the platform.

...“By sharing their experiences anonymously with influencers, empathy, online interpersonal support, and practical advice could be made accessible to the affected women. The comments also allow like-minded women to connect with each other,” Zhou wrote in his research paper.

...“It’s ironic … because the government’s persecution is so much more powerful than people attacking each other,” Zhou says. “Now the Chinese government is shifting people’s attention to these infights, as if the antisocial behavior of a small group of people is so concerning that it needs to be regulated with the tool of de-anonymization.”

...With these accounts gone, lively discussions and the collision of ideas have gone with them. And the internet where everyone uses their real name will inevitably be more rigid and intimidating, not to mention easier for the people with power to control.


Rite Aid has been banned from using facial recognition technology for five years after the Federal Trade Commission alleged that the company’s surveillance system misidentified customers as potential shoplifters

From October 2012 to July 2020, Rite Aid used facial recognition technology to identify shoppers that “it had previously deemed likely to engage in shoplifting or other criminal behavior,” the FTC said in a federal court complaint.

But because of low-quality images that came from security cameras or employee phone cameras, thousands of shoppers were wrongly identified as shoplifters, the agency said. It added that store employees would follow customers they believed to be involved in theft and order them to leave or threaten to call the police.

At other times, employees would accuse people in front of their friends and family, according to the complaint. In one incident, store employees searched an 11-year-old girl, the FTC said.


Google To Pay $700 Million In Antitrust Settlement With States

Those commissions generated billions of dollars in profit annually for Google, according to evidence presented in the recent trial focused on its Play Store.

...Google also agreed to make other changes designed to make it even easier for consumers to download and install Android apps from other outlets besides its Play Store for the next five years. It will refrain from issuing as many security warnings, or “scare screens,” when alternative choices are being used.

...Google faces an even bigger legal threat in another antitrust case targeting its dominant search engine that serves as the centerpiece of a digital ad empire that generates more than $200 billion in sales annually. Closing arguments in a trial pitting Google against the Justice Department are scheduled for early May before a federal judge in Washington D.C.


European Union Investigates Elon Musk's X Over Possible Social Media Law Breaches

A raft of big tech companies faced stricter scrutiny after the EU’s Digital Services Act took effect earlier this year, threatening penalties of up to 6% of their global revenue — which could amount to billions — or even a ban from the EU.

The DSA is a set of far-reaching rules designed to keep users safe online and stop the spread of harmful content that’s either illegal, such as child sexual abuse or terrorism content, or violates a platform’s terms of service, such as promotion of genocide or anorexia.

The EU has already called out X as the worst place online for fake news, and officials have exhorted owner Musk, who bought the platform a year ago, to do more to clean it up. The European Commission quizzed X over its handling of hate speech, misinformation and violent terrorist content related to the Israel-Hamas war after the conflict erupted.


It’s Time to Dismantle the Technopoly

The big surprise in Postman’s book is that, according to him, we no longer live in a technocratic era. We now inhabit what he calls technopoly. In this third technological age, Postman argues, the fight between invention and traditional values has been resolved, with the former emerging as the clear winner. The result is the “submission of all forms of cultural life to the sovereignty of technique and technology.” Innovation and increased efficiency become the unchallenged mechanisms of progress, while any doubts about the imperative to accommodate the shiny and new are marginalized. “Technopoly eliminates alternatives to itself in precisely the way Aldous Huxley outlined in Brave New World,” Postman writes. “It does not make them illegal. It does not make them immoral. It does not even make them unpopular. It makes them invisible and therefore irrelevant.” Technopoly, he concludes, “is totalitarian technocracy.”

...This emerging resistance to the technopoly mind-set doesn’t fall neatly onto a spectrum with techno-optimism at one end and techno-skepticism at the other. Instead, it occupies an orthogonal dimension we might call techno-selectionism. This is a perspective that accepts the idea that innovations can significantly improve our lives but also holds that we can build new things without having to accept every popular invention as inevitable. Techno-selectionists believe that we should continue to encourage and reward people who experiment with what comes next. But they also know that some experiments end up causing more bad than good. Techno-selectionists can be enthusiastic about artificial intelligence, say, while also taking a strong stance on settings where we should block its use. They can marvel at the benefits of the social Internet without surrendering their kids’ mental lives to TikTok.

...Yet these shortcomings don’t justify a status quo of meek adjustment. Just because a tool exists and is popular doesn’t mean that we’re stuck with it. Given the increasing reach and power of recent innovations, adopting this attitude might even have existential ramifications. In a world where a tool like TikTok can, seemingly out of nowhere, suddenly convince untold thousands of users that maybe Osama bin Laden wasn’t so bad, or in which new A.I. models can, in the span of only a year, introduce a distressingly human-like intelligence into the daily lives of millions, we have no other reasonable choice but to reassert autonomy over the role of technology in shaping our shared story. This requires a shift in thinking. Decades of living in a technopoly have taught us to feel shame in ever proposing to step back from the cutting edge. But, as in nature, productive evolution here depends as much on subtraction as addition.


Authoritarianism Expert Spots Trump Line Showing Who He'll Target After Immigrants

“Anyone who thinks this isn’t going to bother them because they’re not an immigrant, they’re not going to stop with immigrants,” she said. “I’m quite concerned that he is mentioning what he calls mental institutions and prisons so often. In another speech he actually talked about the need to expand psychiatric institutions to confine people and he mentioned special prosecutor Jack Smith as someone who should end up in a ‘mental institution.’”

“This is what fascists and especially communists used to do to critics,” Ben-Ghiat added. “They used to put people who didn’t believe in the propaganda of the state or who were troublemakers into psychiatric institutions. So the swathe of people who are going to be targeted certainly doesn’t stop with immigrants.”


Marketer sparks panic with claims it uses smart devices to eavesdrop on people

A November 28 blog post described Active Listening technology as using AI to "detect relevant conversations via smartphones, smart TVs, and other devices." As such, CMG claimed that it knows "when and what to tune into."

The blog also shamelessly highlighted advertisers' desire to hear every single whisper made that could help them target campaigns:

This is a world where no pre-purchase murmurs go unanalyzed, and the whispers of consumers become a tool for you to target, retarget, and conquer your local market.

...The archived version of the page discussed an AI-based analysis of the data and generating an "encrypted evergreen audience list" used to re-target ads on various platforms, including streaming TV and audio, display ads, paid social media, YouTube, Google, and Bing Search.


Marketing Company Claims That It Actually Is Listening to Your Phone and Smart Speakers to Target Ads

A marketing team within media giant Cox Media Group (CMG) claims it has the capability to listen to ambient conversations of consumers through embedded microphones in smartphones, smart TVs, and other devices to gather data and use it to target ads, according to a review of CMG marketing materials by 404 Media and details from a pitch given to an outside marketing professional. Called “Active Listening,” CMG claims the capability can identify potential customers “based on casual conversations in real time.”

The news signals that what a huge swath of the public has believed for years—that smartphones are listening to people in order to deliver ads—may finally be a reality in certain situations. Until now, there was no evidence that such a capability actually existed, but the myth persisted because of how sophisticated other ad tracking methods have become.

...“What would it mean for your business if you could target potential clients who are actively discussing their need for your services in their day-to-day conversations? No, it's not a Black Mirror episode—it's Voice Data, and CMG has the capabilities to use it to your business advantage,” CMG’s website reads.


Your Smart TV Knows What You’re Watching

If you bought a new smart TV during any of the holiday sales, there’s likely to be an uninvited guest watching along with you. The most popular smart TVs sold today use automatic content recognition (ACR), a kind of ad surveillance technology that collects data on everything you view and sends it to a proprietary database to identify what you’re watching and serve you highly targeted ads. The software is largely hidden from view, and it’s complicated to opt out. Many consumers aren’t aware of ACR, let alone that it’s active on their shiny new TVs. If that’s you, and you’d like to turn it off, we’re going to show you how.

First, a quick primer on the tech: ACR identifies what’s displayed on your television, including content served through a cable TV box, streaming service, or game console, by continuously grabbing screenshots and comparing them to a massive database of media and advertisements. Think of it as a Shazam-like service constantly running in the background while your TV is on.

These TVs can capture and identify 7,200 images per hour, or approximately two every second. The data is then used for content recommendations and ad targeting, which is a huge business; advertisers spent an estimated $18.6 billion on smart TV ads in 2022, according to market research firm eMarketer.


A Markup examination of a typical college shows how students are subject to a vast and growing array of watchful tech, including homework trackers, test-taking software, and even license plate readers

...According to Cengage’s online privacy policy, the company collects information about a student’s internet network and the device they use to access online textbooks as well as webpages viewed, links clicked, keystrokes typed, and movement of their mouse on the screen, among other things. The company then shares some of that data with third parties for targeted advertising. For students who sign into Cengage websites with their social media accounts, the company collects additional information about them and their entire social networks.

...In Florida, where it is against the law to transport undocumented immigrants, students mention that their peers with undocumented family members are at risk if they have relatives in the car with them on campus. The automated license plate reader cameras can capture not only the license plate itself but the entire car and who is in it.

...At some point this semester, Natividad may have to give up another type of personal information. Two of his courses are completely online, and the university has contracts with two companies that facilitate secure remote testing. If his professors require students to use these so-called e-proctoring tools, Natividad might have to give either Honorlock or Proctorio access to his laptop camera. While both companies say they do not use or store biometric data or match test-takers’ faces with an image database, they do run software to detect students’ eye movements and the presence of their faces. In its contract with Honorlock, which The Markup obtained through a public records request, Mt. SAC agreed to let the company use, publish, and sell aggregate data collected over the platform, facilitating the company’s ability to profit from students’ data.

E-proctoring tools faced a stiff backlash when schools closed during COVID and sent test-taking online. Fight for the Future called the tech “glorified spyware” in an online campaign seeking to ban its use by colleges. Students with disabilities faced more frequent flags for potential cheating because of hand, eye, and body movements the software algorithms said were abnormal. Dark-skinned students reported not being able to take exams because the software wouldn’t register their faces as being present.

...“As it spreads in these seemingly convenient and innocuous use cases, it’s desensitizing people to the technology, which is actually invasive and dangerous,” she said.


Kroger Sued for Sharing Sensitive Health Data With Meta

The suits alleged that Kroger essentially ”planted a bug” on its website, which includes an online pharmacy, and was “looking over the shoulder of each visitor for the entire duration of their Website interaction.” That “bug” refers to the Meta Pixel and the other trackers Kroger used on its website. The Nov. 10 suit claimed that as a result, Kroger leaked details of which medications and dosages a patient sought or purchased from Kroger’s pharmacy, which then allowed “third parties to reasonably infer that a specific patient was being treated for a specific type of medical condition such as cancer, pregnancy, HIV, mental health conditions, and an array of other symptoms or conditions.”


Nicaragua’s increasingly isolated and repressive government thought it had scored a rare public relations victory last week when Miss Nicaragua Sheynnis Palacios won the Miss Universe competition

Thousands have fled into exile since Nicaraguan security forces violently put down mass anti-government protests in 2018. Ortega says the protests were an attempted coup with foreign backing, aiming for his overthrow.

Ortega’s government seized and closed the Jesuit University of Central America in Nicaragua, which was a hub for 2018 protests against the Ortega regime, along with at least 26 other Nicaraguan universities.

The government has also outlawed or closed more than 3,000 civic groups and non-governmental organizations, arrested and expelled opponents, stripped them of their citizenship and confiscated their assets.


Israeli Spyware Firm NSO Demands “Urgent” Meeting With Blinken Amid Gaza War Lobbying Effort

For NSO, the blacklisting has been an existential threat. The push to reverse it, which included hiring multiple American public relations and law firms, cost NSO $1.5 million in lobbying last year, more than the government of Israel itself spent. It focused heavily on Republican politicians, many of whom are now vocal in their support of Israel, and against a ceasefire in the brutal war being waged by the country in the Gaza Strip.

...NSO is marketing itself as a volunteer in the Israeli war effort, allegedly helping track down missing Israelis and hostages. And at this moment, which half a dozen experts have described to The Intercept as NSO’s attempt at “crisis-washing,” some believe that the American government may create a space for NSO to come back to the table.

“NSO’s participation in the Israeli government’s efforts to locate citizens in Gaza seems to be an effort by the company to rehabilitate its image in this crisis,” said Adam Shapiro, director of advocacy for Israel–Palestine at Democracy for the Arab World Now, a group founded by the slain journalist Jamal Khashoggi to advocate for human rights in the Middle East. “But alarm bells should be ringing that NSO Group has been recruited in Israel’s war effort.”

...Public records about NSO’s push also offer concrete examples of something the company has been at pains to evade, and that the American government has routinely overlooked: the existing relationship between the Israeli state and the spyware company.

...By selling its spyware to authoritarian governments, NSO has facilitated a variety of human rights abuses: from use by the United Arab Emirates to spy on Khashoggi, the journalist later killed by Saudi Arabia, to reporting just this week on its use to spy on Indian journalists. According to the research group Forensic Architecture, the use of NSO Group’s products has contributed to over 150 physical attacks against journalists, rights advocates, and other civil society actors, including some of their deaths.


Is Anything Still True? On the Internet, No One Knows Anymore

If the crisis of authenticity were limited to social media, we might be able to take solace in communication with those closest to us. But even those interactions are now potentially rife with AI-generated fakes. The U.S. Federal Trade Commission now warns that what sounds like a call from a grandchild requesting bail money may be scammers who have scraped recordings of the grandchild’s voice from social media to dupe a grandparent into sending money.

...With its latest Pixel phone, the company unveiled a suite of new and upgraded tools that can automatically replace a person’s face in one image with their face from another, or quickly remove someone from a photo entirely.

Making pictures perfect is nifty, but it also welcomes the end of capturing authentic personal memories, with their spontaneous quirks and unplanned moments. Joseph Stalin, who was fond of erasing people he didn’t like from official photos, would have loved this technology.

...“What happens when we have eroded trust in media, government, and experts?” says Farid. “If you don’t trust me and I don’t trust you, how do we respond to pandemics, or climate change, or have fair and open elections? This is how authoritarianism arises—when you erode trust in institutions.”


Judge rules it’s fine for car makers to intercept your text messages

Infotainment systems in the company’s vehicles began downloading and storing a copy of all text messages on smartphones when they were connected to the system. Once messages have been downloaded, the software makes it impossible for vehicle owners to access their communications and call logs but does provide law enforcement with access, the lawsuit said.

...In a recent Lock and Code podcast, we heard from Mozilla researchers that the data points that car companies say they can collect on you include your Social Security number, information about your religion, your marital status, genetic information, disability status, immigration status, and race. And they can sell that data to marketers.

...In the same podcast, we also explored the booming revenue stream that car manufacturers are tapping into by not only collecting people’s data, but also packaging it together for targeted advertising.

...“Can collect deeply personal data such as sexual activity, immigration status, race, facial expressions, weight, health and genetic information, and where you drive. Researchers found data is being gathered by sensors, microphones, cameras, and the phones and devices drivers connect to their cars, as well as by car apps, company websites, dealerships, and vehicle telematics.”


Zuckerberg ‘ignored’ executives on kids’ safety, unredacted lawsuit alleges

Meta CEO Mark Zuckerberg “ignored” top executives who called for bolder actions and more resources to protect users, especially kids and teens, even as the company faced mounting scrutiny over its safety practices, a newly unredacted legal complaint alleges.

...The lawsuit also accuses Zuckerberg of rebuffing calls from his senior leaders to prohibit some beauty filters that might harm the mental health of women and young people.

In a November 2019 email, Margaret Gould Stewart, Meta’s vice president of product design, urged Meta leaders including Mosseri and former Facebook leader Fidji Simo to ban camera filters that “mimic plastic surgery” because mental health experts worried about negative impacts on the “mental health and wellbeing” of “vulnerable users,” the lawsuit alleges.

...“These unredacted documents prove that Mark Zuckerberg is not interested in protecting anyone’s privacy or safety,” said Sacha Haworth, executive director of the Tech Oversight Project, an advocacy group critical of the tech giants that receives funding from the Omidyar Network philanthropic firm. “The rot goes all the way to the top.”


How Citizen Surveillance Ate San Francisco

In the city where Nextdoor’s offices sit right in the gritty Tenderloin, sharing Ring cam footage of porch thieves is a bonding exercise between neighbors who’ve never met. All over town, local nonprofits oversee neighborhood-wide networks of cameras funded in part by donations from crypto entrepreneur Chris Larsen. (“That’s the winning formula,” Larsen told The New York Times in 2020. “Pure coverage.”) Platoons of Waymo self-driving cars circulate the streets like Pac-Man ghosts, gathering up video feeds that cops snag for evidence. You can watch a resident’s live cam to see who’s on the corner of Hyde and Ellis, right now.


For as little as $0.12 per record, data brokers in the US are selling sensitive private data about active-duty military members and veterans, including their names, home addresses, geolocation, net worth, and religion, and information about their children and health conditions

The year-long study, which was funded in part by the US Military Academy at West Point, highlights the extreme privacy and national security risks created by data brokers. These companies are part of a shadowy multibillion-dollar industry that collects, aggregates, buys, and sells data, practices that are currently legal in the US. Many brokers advertise that they have hundreds of individual data points on each person in their database, and the industry has been criticized for exacerbating the erosion of personal and consumer privacy.

The researchers say they were “shocked” at the ease with which they were able to obtain highly sensitive data about members of the military. “In practice, it seems as though anyone with an email address, a bank account, and a few hundred dollars could acquire the same type of data that we did,” Hayley Barton, a coauthor of the study and a graduate student researcher, says.

...The team first scraped the web to get a view of how many of the thousands of data brokers in the US advertise the availability of personal data on the country’s service members. It found “7,728 hits for the word ‘military’ and 6,776 hits for the word ‘veteran’ across 533 data brokers’ websites,” according to the paper. Major data brokers including Oracle, Equifax, Experian, CoreLogic, LexisNexis, and Verisk all advertised military-related data.

...Both Sherman and Sarah Lamdan, a law professor at the City University of New York and author of Data Cartels, a book about the industry, say the practices the researchers observed appear legal and the selling of data about children does not violate the Children’s Online Privacy Protection Act, commonly known as COPPA, a law addressing data about minors’ online activity.


Plenty of startups want to sell their AI wares to the US government

The company and others like Anduril Industries, Autonodyne, EpiSci and Merlin Labs are developing new and more powerful ways for the Pentagon to gather and analyze information and act on it, including flying planes without pilots, creating swarms of autonomous surveillance and attack drones, and making targeting decisions faster than humans could.

...“HiveMind is operational,” said Brian Marchini, an aerospace engineer for Shield AI, referring to the company’s artificial intelligence program. “We have control,” he told the human pilots sitting in a tower above him, who until that point had been remotely directing the drones.

...Another competitor, Anduril, is building a software system to integrate all of the data that will flood into the Air Force from drone and satellite sources to help human pilots find and strike targets. It is also building a new generation of robot drones that can fly on their own.


Silicon Valley is piling into the business of snooping

In early September New Yorkers may have noticed an unwelcome guest hovering round their parties. In the lead-up to Labour Day weekend the New York Police Department (NYPD) said that it would use drones to look into complaints about festivities, including back-yard gatherings. Snooping police drones are an increasingly common sight in America. According to a recent survey by researchers at the Northwestern Pritzker School of Law, about a quarter of police forces now use them.

...Other types of aerial snooping device are also in the works. Skydweller, another startup, is developing an autonomous solar-powered aircraft that will not have to land to recharge. That, says the firm, would allow for “persistent surveillance”.

A second ascendant technology is satellites. SpaceX, Elon Musk’s rocket company, and its copycats have helped reduce the price of sending objects into space to around one-tenth of the level two decades ago. That has led to a carpeting of low-Earth orbit with satellites, around one-eighth of which are used for observing the planet. PitchBook, a data firm, reckons there are now nearly 200 companies in the business of selling satellite imagery—so many that the market has become commoditised, according to Trae Stephens of Founders Fund, another VC firm. BlackSky, one of those firms, says it can take an image of a spot on Earth every hour or so. Satellite imagery has come a long way since 2013, when police in Oregon used pictures from Google Earth to uncover an illegal marijuana plantation in a resident’s yard.


AI Cameras Took Over One Small American Town. Now They're Everywhere

Spread across four computer monitors arranged in a grid, a blue and green interface shows the location of more than 50 different surveillance cameras. Ordinarily, these cameras and others like them might be disparate, their feeds only available to their respective owners: a business, a government building, a resident and their doorbell camera. But the screens, overlooking a pair of long conference tables, bring them all together, allowing law enforcement to tap into cameras owned by different entities across the entire town at once.

This is a demonstration of Fusus, an AI-powered system that is rapidly springing up across small town America and major cities alike. Fusus’ product not only funnels live feeds from usually siloed cameras into one central location, but also adds the ability to scan for people wearing certain clothes or carrying a particular bag, or to look for a certain vehicle.

...In some ways, Fusus is deploying smart camera technology that historically has been used in places like South Africa, where experts warned about it creating an ever-present blanket of surveillance. Now, tech with some of the same capabilities is being used across small town America.

...The cameras also turn into automatic license plate readers (ALPRs), able to read the plates of passing vehicles, creating a record of what car was at a specific location and at what time.


“There is nothing more important than people knowing the truth of a small group of unelected, unaccounted, private companies are running a deadly experiment on you and your families, without your consent or your knowledge.”

“We need to stop thinking about making AI safe, and start thinking about making safe AI,” he said. “We build the AI and then we have a safety team to stop it from behaving badly – that hasn’t worked and it’s never going to work.”


Why Meta is getting sued over its beauty filters

The case against Meta specifically calls out visual tools “known to promote body dysmorphia” as one of the “psychologically manipulative platform features designed to maximize young users’ time spent on its social media platforms.” It also says that “Meta was aware that young users’ developing brains are particularly vulnerable to certain forms of manipulation, and it chose to exploit those vulnerabilities through targeted features,” like filters.

...On the more material side of things, there is a lot happening in the beauty industry that has been specifically inspired by Instagram. I think a really great example of this is the phenomenon of Instagram face, which is basically a term that’s been coined to describe the way that Instagram filters have inspired real-world procedures and surgeries.

...For instance, filters that are literally called “beauty filters” will automatically give somebody a smaller nose, slightly lighten and brighten their skin, and widen their eyes. These are all beauty preferences that are passed down from systems of patriarchy, white supremacy, colonialism, and capitalism that end up in our lives, in our systems, in our corporations, and in our engineers and the filters that they create.

...One concrete example is how Instagram face has financially benefited Instagram. The Instagram-face phenomenon sort of came about in an earlier iteration of Instagram when it was primarily a social media platform. A couple of years later, Instagram transitioned into a social shopping platform. They put a huge emphasis on shopping; there was a shopping tab. At that point, not only was it distorting users’ perception of beauty, but it’s also selling them everything they need to distort their bodies to match and taking a cut of all of those sales.


GM Cruise unit suspends all driverless operations after California ban

California's Department of Motor Vehicles (DMV) on Tuesday said Cruise driverless vehicles were a risk to the public and that the company had "misrepresented" the technology's safety.

...In an Oct. 20 letter made public Thursday, however, NHTSA said it was asking questions about five new crash reports involving Cruise vehicles that braked with no obstacles ahead and is seeking additional information by Nov. 3.

...Cruise said the DMV was reviewing an Oct. 2 incident where one of its self-driving vehicles braked but did not avoid striking a pedestrian who had previously been struck by a hit-and-run driver.

The DMV order said Cruise had not initially disclosed all video footage of the accident and that "Cruise's vehicles may lack the ability to respond in a safe and appropriate manner during incidents involving a pedestrian."


Vulnerabilities in Cellphone Roaming Let Spies and Criminals Track You Across the Globe

The exploitation of the global cellular system is, indeed, truly global: Citizen Lab cites location surveillance efforts originating in India, Iceland, Sweden, Italy, and beyond.

While the report notes a variety of factors, Citizen Lab places particular blame with the laissez-faire nature of global telecommunications, generally lax security standards, and lack of legal and regulatory consequences.

As governments throughout the West have been preoccupied for years with the purported surveillance threats of Chinese technologies, the rest of the world appears to have comparatively avoided scrutiny. “While a great deal of attention has been spent on whether or not to include Huawei networking equipment in telecommunications networks,” the report authors add, “comparatively little has been said about ensuring non-Chinese equipment is well secured and not used to facilitate surveillance activities.”


By design — Meta repeatedly chose not to design platforms safe for kids, states allege

“Meta preys on our young people and has chosen to profit by knowingly targeting and exploiting their vulnerabilities,” Campbell said. “In doing so, Meta has significantly contributed to the ongoing mental health crisis among our children and teenagers.”

...But state enforcers seem convinced by research that they say shows steep declines in mental health for teens after just one hour of social media use per day, including decreases in happiness and self-esteem, and increases in self-harm, depression, and behavioral challenges. Massachusetts' complaint also points to long-term psychological risks.

Massachusetts plans to argue that "Meta secretly utilizes design features that deliberately exploit and capitalize off young users’ unique vulnerabilities and overcome young people’s ability to self-regulate their time spent on its platform." Those features include everything from notifications to "infinite scroll"—which keep the user engaged—as well as auto-playing Reels to disappearing Stories—which Massachusetts has claimed "create a sense of 'FOMO' (fear of missing out)."

"These features were designed and deployed with the intent of hooking young users into spending as much time as possible on the platform, to lure them back when they try to stop, and to overwhelm their ability to control or regulate their own use, with significant and concerning negative impacts on the brain development and mental health of teen users," Campbell's press release said.


41 states sue Meta, claiming Instagram, Facebook are addictive, harm kids

A 233-page federal complaint alleges that the company engaged in a “scheme to exploit young users for profit” by misleading them about safety features and the prevalence of harmful content, harvesting their data and violating federal laws on children’s privacy. State officials claim that the company knowingly deployed changes to keep children on the site to the detriment of their well-being, violating consumer protection laws.

...“Our bipartisan investigation has arrived at a solemn conclusion: Meta has been harming our children and teens, cultivating addiction to boost corporate profits,” California Attorney General Rob Bonta (D), one of the officials leading the effort, said in a statement.

...The Biden administration is separately scrutinizing Meta’s record on children’s safety, with the Federal Trade Commission proposing a plan to bar the company from monetizing the data it collects from young users. Meta’s Stone called it a “political stunt” and said the company would “vigorously fight” the move.

...In recent years, officials have zeroed in on how tech companies could be exacerbating anxiety, depression and other mental health ills among children and teens.


41 States Sue Meta Over the Social Media Giant’s Impact on Kids

Thirty-three states are banding together to sue Meta, the company behind Facebook and Instagram, saying the social media giant is consciously harming children’s mental health. An additional eight states, plus the District of Columbia, are filing suits in their own states over similar issues.

...“Meta has harnessed powerful and unprecedented technologies to entice, engage, and ultimately ensnare youth and teens. Its motive is profit, and in seeking to maximize its financial gains, Meta has repeatedly misled the public about the substantial dangers of its social media platforms,” the complaint from the 33 states says. “It has concealed the ways in which these platforms exploit and manipulate its most vulnerable consumers: teenagers and children.”

The broader lawsuit alleges that the company is exploiting young people’s vulnerabilities by developing algorithms intended to keep users on the platform as long as possible, even compulsively; creating visual filters it knows can contribute to body dysmorphia; and presenting content in an “infinite scroll” format that makes it hard for children to disengage.

...In Seattle’s case, the district claims that the number of students in the school system reporting that they feel “so sad or hopeless almost every day for two weeks or more in a row that they stopped doing some usual activities” has risen 30 percent in the decade since 2009. That period roughly coincided with the rise of widespread student access to smartphones and social media. King County, where the Seattle district is located, has seen an increase in suicides, attempted suicides, and mental health-related emergency room visits among school-age children, the lawsuit says. The district is asking for the social media companies named in its suit to pay for damages as well as preventative education and treatment for problematic social media use, among other remedies.


33 States Sue Instagram For Allegedly Manipulating Children

The lawsuit, filed Tuesday by 33 states, alleges that Meta’s social media platforms “exploit and manipulate its most vulnerable consumers: teenagers and children” and have damaged young people’s mental and physical health. The lawsuit also alleges that Meta broke laws including the Children’s Online Privacy Protection Act, which protects kids’ online privacy; California’s False Advertising Law, which prohibits false and misleading ads; and California’s Unfair Competition Law. Eight other attorneys general are filing lawsuits against Meta in their own state courts.

The lawsuit comes after an investigation led by Rob Bonta, California’s attorney general, in 2021.

“Our bipartisan investigation has arrived at a solemn conclusion: Meta has been harming our children and teens, cultivating addiction to boost corporate profits,” Bonta said in a news release. “With today’s lawsuit, we are drawing the line. We must protect our children and we will not back down from this fight. I am grateful for the collaboration of my fellow state attorneys general in standing up for our children and holding Meta accountable.”


Unraveling Democracy: The Corporate Takeover

And it’s also done silently. Barely anyone knows about this ISDS (investor-state dispute settlement) system, barely anyone knows about the history of it. But I think part of the reason that this system, particularly, is so little known is that there’s very, very weak justification for it. Because, as you know, most of these systems that make capitalism run in the interest of the 1 percent, they all have very sophisticated ideologies bolted on top of them, to justify them to the people within the system, but also to the general public. There’s barely any justification for the ISDS system, so they just keep it secret, you know?

...So, this surprised me on a number of different levels. On the level of how the investor-state dispute settlement system can come in conflict with other things that we kind of globally agree are important, like universal human rights and not having racially-based systems of discrimination. And also, how it showed how our industry, the media, hasn’t been really fulfilling its function, and explaining to people how decisions are really made, and who really holds power.

...The only companies that can file suits at the International Investor State Dispute Settlement System are multinational companies and investors. Like, you cannot file a case like this if you are a small entrepreneur and you have a problem with your government. If you have a problem with your government, you go through local courts. If you’re an international investor or international company, you have a second option: you can go to this international system. It completely, also, changed the game.

...Claire mentioned the Honduras case; they’re being sued for $11 billion for trying to shut down a special economic zone on Roatan, one of the Honduran islands, and they don’t know what to do. They can’t afford it, it’s an absolute crisis for them. But I don’t think they’re considering not showing up, because you can’t. If you do that, your credit lines from the Bretton Woods institutions will be slashed.

...So, yeah, Israel is the center of the export of surveillance technology, of newfangled arms, and all sorts of stuff, and it’s because they’ve got a captive population. And I mean, I don’t need to talk about the morality of using an imprisoned population of 2 million people as a kind of lab to try out your newfangled weaponry so you can sell it to market. And, actually, they do use in their brochures, they use the term “battle-proven,” and sometimes even mention the war that it was used in.


Researchers Say Guardrails Built Around A.I. Systems Are Not So Sturdy

“Companies try to release A.I. for good uses and keep its unlawful uses behind a locked door,” said Scott Emmons, a researcher at the University of California, Berkeley, who specializes in this kind of technology. “But no one knows how to make a lock.”

...“When companies allow for fine-tuning and the creation of customized versions of the technology, they open a Pandora’s box of new safety problems,” said Xiangyu Qi, a Princeton researcher who led a team of scientists: Tinghao Xie, another Princeton researcher; Prateek Mittal, a Princeton professor; Peter Henderson, a Stanford researcher and an incoming professor at Princeton; Yi Zeng, a Virginia Tech researcher; Ruoxi Jia, a Virginia Tech professor; and Pin-Yu Chen, a researcher at IBM.

...“This is a very real concern for the future,” Mr. Goodside said. “We do not know all the ways this can go wrong.”


The hacker that took credit for the last 23andMe breach says they’ve obtained another trove of genetic information

23andMe is investigating reports of a new data leak involving millions of user records. On Wednesday, TechCrunch reported that a hacker claims to have leaked 4 million genetic profiles belonging to people in Great Britain, along with “the wealthiest people living in the U.S. and Western Europe.”

The hacker, who goes by “Golem,” is the same one who stole 1 million lines of genetic data from 23andMe earlier this month, according to TechCrunch. Golem posted this latest round of data on the hacking site BreachForums.


India uses widespread internet blackouts to mask domestic turmoil

Moreover, the internet shutdown shaped the Manipur conflict in profound ways. It allowed the BJP state government — and the state’s ethnic Meitei majority who control it — to dominate the public narrative about the turmoil. It impeded efforts by dissenters among the Kuki ethnic minority to spread their message and disseminate photo and video evidence of human rights abuses. And it effectively kept the roiling conflict, a stark challenge to the BJP’s leadership, behind a veil of invisibility.

While local governments ruled by opposition parties in India also frequently block the internet, the Manipur example highlights a wider pattern in an India governed over the past decade by Modi’s BJP. To maintain their grip on political power and advance their Hindu nationalist agenda, Modi and his ideological allies have often used their control of technology and social media to stifle dissent, promote divisive propaganda — or, in the case of Manipur, pull the digital plug altogether.

After a viral video emerged online in July of Kuki women being groped and paraded naked in a Meitei village, drawing international attention and concern about sexual violence in the Manipur conflict, several BJP leaders, including the state’s chief minister, N. Biren Singh, voiced frustration that the video had surfaced and alleged in media interviews that it had been intentionally “leaked” from Manipur to hurt them politically. The chief minister’s office and spokespeople for the Manipur state government declined multiple interview requests for this article.

...Before sunrise that day, the displaced Kukis said, an armed mob of Meiteis had appeared, setting fire to their homes in the nearby foothills. Then the villagers made a stunning allegation: A 30-year-old Kuki named David Thiek was decapitated, his limbs sawed off and his head placed on a bamboo spike.

...On the boulevards of Imphal, the stately former seat of the Meitei monarchy, long lines snaked out from ATMs, because the demand for cash skyrocketed after India’s digital payments system suddenly became unavailable. The back streets were devoid of the food and package delivery boys ubiquitous even in small Indian towns, because the e-commerce companies paused local services. The offices that provide the white-collar jobs so many Indians aspire to were shuttered overnight.


Your Face May Soon Be Your Ticket. Not Everyone Is Smiling.

As the use of facial recognition technology spreads, some experts worry about the risks to travelers’ privacy and security. Unlike a password, which can be reset, biometric data cannot easily be changed without significantly altering your appearance, said Phil Siegel, co-founder of the Center for Advanced Preparedness and Threat Response Simulation, a nonprofit group.

As with other sensitive data, like Social Security numbers, people’s images could be used by criminals, perhaps to impersonate people online or even create deepfake videos, said Nima Schei, chief executive of Hummingbirds AI, a start-up that works with facial recognition.

...Private companies’ management of facial recognition data worries Jeramie D. Scott, director of the Project on Surveillance Oversight at the Electronic Privacy Information Center. Companies, he said, could be hacked or could turn the data over to government entities, who might use it for surveillance. Some might even sell customers’ biometric information or find other ways to profit off it and bury those intentions in the fine print, Mr. Scott said — a scenario that could echo the “Black Mirror” episode “Joan Is Awful,” in which a fictional streaming service uses its terms-and-conditions agreement to hijack the main character’s life for a TV series.

...Facial recognition technology will increasingly offer travelers shorter lines and fewer documents to juggle, but all that convenience may have a cost, warned Jay Stanley, a senior policy analyst at the American Civil Liberties Union. By accepting more surveillance technology, he said, “we open ourselves to tracking where we are and who we are with all the time.”


23andMe user data stolen, offered for sale

It seems the attackers didn’t simply steal the data belonging to the accounts they broke into—they used those accounts to access a much larger trove of data via DNA Relatives. According to Bleeping Computer, “the number of accounts sold by the cybercriminal does not reflect the number of 23andMe accounts breached.”

The Record reports that the stolen data does not include genomic sequencing data, but does include “profile and account ID numbers, names, gender, birth year, maternal and paternal genetic markers, ancestral heritage results, and data on whether or not each user has opted into 23AndMe’s health data.”


They’re your Instagram and Facebook posts. They’re also Meta’s artificial intelligence factory.

...Zoom set off alarms last month by claiming it could use the private contents of video chats to improve its AI products, before reversing course. Earlier this summer, Google updated its privacy policy to say it can use any “publicly available information” to train AI. (Google didn’t say why it thinks it has that right. But it says that’s not a new policy and it just wanted to be clear it applies to its Bard chatbot.)

...Yet generative AI is different. Today’s AI arms race needs lots and lots of data. Elon Musk, chief executive of Tesla, recently bragged to his biographer that he had access to 160 billion video frames per day shot from the cameras built into people’s cars to fuel his AI ambitions.

...Some tech companies even acknowledge that in their fine print. When you sign up to use Google’s new Workspace Labs AI writing and image-generation helpers for Gmail, Docs, Sheets and Slides, the company warns: “don’t include personal, confidential, or sensitive information.”

...However, Google does still use Gmail to train other AI products, like Smart Compose (which finishes sentences for you) and the new creative coach Help Me Write that’s part of its Workspace Labs. Those uses are fundamentally different from “foundational” AI, Google says, because it’s using data from a product to improve that product. The Smart Compose AI, it says, anonymizes and aggregates our information and improves the AI “without exposing the actual content in question.” It says the Help Me Write AI learns from your “interactions, user-initiated feedback, and usage metrics.” How are you supposed to know what’s actually going on?


$5 billion Google lawsuit over ‘Incognito mode’ tracking moves a step closer to trial

On Monday, a California judge denied Google’s request for summary judgment in a lawsuit filed by users alleging the company illegally invaded the privacy of millions of people. The plaintiffs say that invasion occurred because Google’s cookies, analytics, and in-app tools continued to track internet browsing activity even after users activated Incognito mode in Chrome or similar features like Safari’s private browsing, expecting a certain level of privacy. However, the truth is, as we wrote in 2018, “What isn’t private: private browsing mode.”

...Another issue going against Google’s arguments that the judge mentioned is that the plaintiffs have evidence Google “stores users’ regular and private browsing data in the same logs; it uses those mixed logs to send users personalized ads; and, even if the individual data points gathered are anonymous by themselves, when aggregated, Google can use them to ‘uniquely identify a user with a high probability of success.’”

She also responded to a Google argument that the plaintiffs didn’t suffer economic injury, writing that “Plaintiffs have shown that there is a market for their browsing data and Google’s alleged surreptitious collection of the data inhibited plaintiffs’ ability to participate in that market... Finally, given the nature of Google’s data collection, the Court is satisfied that money damages alone are not an adequate remedy. Injunctive relief is necessary to address Google’s ongoing collection of users’ private browsing data.”


From “Heavy Purchasers” of Pregnancy Tests to the Depression-Prone: We Found 650,000 Ways Advertisers Label You

If you spend any time online, you probably have some idea that the digital ad industry is constantly collecting data about you, including a lot of personal information, and sorting you into specialized categories so you’re more likely to buy the things they advertise to you. But in a rare look at just how deep—and weird—the rabbit hole of targeted advertising gets, The Markup has analyzed a database of 650,000 of these audience segments, newly unearthed on the website of Microsoft’s ad platform Xandr. The trove of data indicates that advertisers could also target people based on sensitive information like being “heavy purchasers” of pregnancy test kits, having an interest in brain tumors, being prone to depression, visiting places of worship, or feeling “easily deflated” or that they “get a raw deal out of life.”

...“I think it’s the largest piece of evidence I’ve ever seen that provides information about what I call today’s ‘distributed surveillance economy,’” said Wolfie Christl, a privacy researcher at Cracked Labs, who discovered the file and shared it with The Markup.

Christl noted that the Xandr segments touched on highly sensitive topics. One civil liberties advocate called this sort of targeting “one of the greatest threats to data privacy” and said that he was concerned with some of the categories in the Xandr material, especially around reproductive health. A consumer who was placed in one of the audience segments available through Xandr said the segment did not accurately reflect his income.


Legal experts say a key law should already prevent brokers from collecting and selling data that’s weaponized against vulnerable people

The same advancements that pulled swaths of humanity out of factories and fields and plopped them behind desks have left consumers outgunned and ill-prepared for life under constant surveillance. Shadowy data brokers constrained by few laws gather and supply mountains of information to companies and government agencies with enormous sway over everyday facets of life—from getting a job and purchasing a home, to reaping the benefits of taxpayer-funded safety nets.

...“Data brokers’ practices are especially egregious because they circumvent the Fair Credit Reporting Act and value data without valuing the accuracy of that data,” says Lauren Harriman, a staff attorney at the Georgetown Law Communications and Technology Law Clinic and counsel for Just Futures Law, a legal nonprofit. Data brokers, she says, “pay handsome sums to your utility company for your name and address, turn around and package your name and address with other data, fail to conduct any type of accuracy analysis on the newly formed data set, and subsequently sell the new data set at a steep profit.”

...The attorneys point to an array of statements by data collectors in recent years, including Kochava, an analytical advertising platform the US Federal Trade Commission (FTC) sued last year for tracking the mobile devices of millions in and around reproductive health clinics, domestic violence shelters, and places of worship. The company’s data harvesting exposed, the FTC said, an unaware population to “threats of stigma, stalking, discrimination, job loss, and even physical violence.” In a legal complaint, the agency cited Kochava’s boasting of capabilities that allowed it to track the locations of roughly 125 million devices per month.

The letter also points to databases maintained by the British multinational RELX and the Canadian conglomerate Thomson Reuters, which, according to CUNY law professor Sarah Lamdan, author of Data Cartels: The Companies That Control and Monopolize Our Information, contain dossiers on roughly two-thirds of the US population, tracing their whereabouts and mapping social and familial relationships.
