
A shocking new tracking admission from Google, one that hasn’t yet made headlines, should be a serious warning to Chrome’s 2.6 billion users. If you’re one of them, this nasty new surprise is a genuine reason to quit. Behind the slick marketing and feature updates, the reality is that Chrome is in a mess when it comes to privacy and security. It has fallen behind rivals in protecting users from tracking and data harvesting, its plan to ditch third-party cookies has been awkwardly postponed, and the replacement technology it promised would prevent users from being profiled and tracked turns out to have made everything worse.

“Ubiquitous surveillance... harms individuals and society,” Firefox developer Mozilla warns, and “Chrome is the only major browser that does not offer meaningful protection against cross-site tracking... and will continue to leave users unprotected.” Google readily (and ironically) admits that such ubiquitous web tracking is out of hand and has resulted in “an erosion of trust... [where] 72% of people feel that almost all of what they do online is being tracked by advertisers, technology firms or others, and 81% say the potential risks from data collection outweigh the benefits.” So, how can Google continue to openly admit that this tracking undermines user privacy, and yet enable it by default on its flagship browser? The answer is simple: follow the money. Restricting tracking will materially reduce ad revenue from targeting users with sales pitches, political messages, and opinions. And right now, Google doesn’t have a Plan B; its grand idea for anonymized tracking is in disarray.

“Research has shown that up to 52 companies can theoretically observe up to 91% of the average user’s web browsing history,” a senior Chrome engineer told a recent Internet Engineering Task Force call, “and 600 companies can observe at least 50%.” Google’s Privacy Sandbox is supposed to fix this: to serve the needs of advertisers seeking to target users in a more “privacy-preserving” way. But even Google’s staggering level of control over the internet advertising ecosystem is not absolute. There is already a complex spider’s web of trackers and data brokers in place, and any new technology simply adds to that complexity; it cannot exist in isolation.

It’s this unhappy situation that’s behind the failure of FLoC, Google’s self-heralded attempt to deploy anonymized tracking across the web. It turns out that building a wall around only half a chicken coop is not especially effective, especially when some of the foxes are already inside. Rather than target you as an individual, FLoC assigns you to a cohort of people with similar interests and behaviors, defined by the websites you all visit. So, you’re not 55-year-old Jane Doe, sales assistant, residing at 101 Acacia Avenue. Instead, you’re presented as a member of Cohort X, from which advertisers can infer what you’ll likely do and buy based on the websites the group’s members visit. Google would inevitably control the entire process, and advertisers would inevitably pay to play.

FLoC came under immediate fire. The privacy lobby called out the risk that data brokers would simply add cohort IDs to other data collected on users (IP addresses, browser identities, or any first-party web identifiers), giving them even more knowledge about individuals. There was also the risk that cohort IDs might betray sensitive information: politics, sexuality, health, finances, ...
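To make the cohort mechanics concrete, here is a toy sketch of SimHash-style cohort assignment, the general technique FLoC’s first version was built on. The 16-bit cohort ID and the SHA-256 domain hashing are illustrative assumptions, not Chrome’s actual implementation:

```python
import hashlib

def cohort_id(domains: list[str], bits: int = 16) -> int:
    """Toy SimHash: a per-bit vote across hashed domain names, so users
    with similar browsing histories land in the same or a nearby cohort."""
    votes = [0] * bits
    for domain in domains:
        h = int.from_bytes(hashlib.sha256(domain.encode()).digest()[:4], "big")
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i, v in enumerate(votes) if v > 0)

# Two users with mostly overlapping histories tend to share a cohort.
alice = cohort_id(["news.example", "shoes.example", "travel.example"])
bob = cohort_id(["news.example", "shoes.example", "recipes.example"])
print(f"alice: {alice:04x}  bob: {bob:04x}")
```

The property that matters is locality: similar histories map to the same or nearby cohort IDs. That is exactly the signal advertisers consume, and, as the critics noted, exactly the kind of stable label a data broker can bolt onto an existing profile.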
Would cohort IDs really betray anything sensitive? No, Google assured as it launched its controversial FLoC trial, telling me in April that “we strongly believe that FLoC is better for user privacy compared to the individual cross-site tracking that is prevalent today.” Not so, Google has now admitted, telling the IETF that “today’s fingerprinting surface, even without FLoC, is easily enough to uniquely identify users,” but that “FLoC adds new fingerprinting surfaces.” Let me translate that: just as the privacy lobby had warned, FLoC makes things worse, not better.
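To see why that fingerprinting admission matters, add up the identifying bits a browser already leaks. A back-of-the-envelope sketch; the per-attribute entropy figures below echo EFF’s Panopticlick research but are assumptions here, not fresh measurements:

```python
# Rough per-attribute entropy estimates (in bits) for common
# fingerprinting surfaces; illustrative values, not measurements.
FINGERPRINT_SURFACE = {
    "user_agent": 10.0,
    "timezone": 3.0,
    "screen_resolution": 4.8,
    "installed_fonts": 13.9,
    "browser_plugins": 15.4,
}

total_bits = sum(FINGERPRINT_SURFACE.values())
# ~33 bits is already enough to single out one person among everyone
# on Earth; these five surfaces alone give roughly 47.
print(f"{total_bits:.1f} bits ~ 1 in {2 ** total_bits:,.0f} browsers")
```

If the baseline is already that bad, a cohort ID is not a privacy shield; it is one more piece of surface. Follow this thread on OUR FORUM.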

Coinciding with Tim Cook hitting the 10-year mark as Apple’s CEO, the iPhone maker has found itself in a strange place. The consumer electronics giant that’s spent years positioning itself as the pro-privacy alternative to tech giants like Google and Facebook has inadvertently landed smack in the middle of a huge controversy, one that has normally pliant journalists treating Apple with rare skepticism, and one that threatens to undermine Apple’s privacy-focused core philosophy under Cook. The culprit: one of the many new iOS 15 features included with the next big software update this fall.

By now, if you follow Apple news to any degree, you’re probably familiar with the particulars. Starting with iOS 15, Apple will hash photos destined to be uploaded to iCloud and compare them against a CSAM (child sexual abuse material) database, maintained in the US by the National Center for Missing and Exploited Children (NCMEC). The new iOS system kicks into action if the following conditions are met. First, you possess specific CSAM material, already marked or hashed, that can be matched against what’s in the NCMEC database. Second, you use iCloud to store your photos, as the vast majority of iPhone owners do. Once you hit a threshold of successful comparisons (material in your possession matches what’s in the database a certain number of times), Apple notifies law enforcement.

Meanwhile, ironically, there’s a pretty easy way to avoid all this new scrutiny in the first place: just disable the sharing of photos to iCloud. Open the Settings app on your iPhone or iPad, navigate to “Photos,” and disable the “iCloud Photos” option. When the popup appears, choose “Download Photos & Videos” to pull everything in your iCloud Photos library down to your device. If you then want to migrate away from Apple, maybe because you feel the iPhone maker is invading your privacy via these new iOS 15 features? Well... all we can say is good luck with that transition. Almost every provider of cloud backup services already does this same kind of scanning. The key difference, and it’s a huge one, is that they do it all in the cloud, on their end. Apple, however, performs cloud scanning as well as some of the image matching on your device itself. And therein lies the reason for the outcry from privacy advocates: Apple is going to be looking for a specific kind of contraband on your personal device going forward, like it or not. Unless, that is, you disable the setting we noted above.

Speaking of which, NSA whistleblower Edward Snowden angrily blasted the fact that you can so easily do so in a new post he published to his Substack on Wednesday evening. “If you’re an enterprising pedophile with a basement full of CSAM-tainted iPhones,” he writes, “Apple welcomes you to entirely exempt yourself from these scans by simply flipping the ‘Disable iCloud Photos’ switch.” It’s “a bypass which reveals that this system was never designed to protect children, as they would have you believe, but rather to protect their brand.” In other words, he continues, this is about keeping that material off Apple’s servers, and thus keeping Apple out of negative headlines. Do Snowden (and, for that matter, privacy advocates like him) seem overly concerned about some dark hypothetical future because of these new iOS 15 features?
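Snowden’s answer follows below. First, to be concrete about what the on-device check involves, here is a minimal sketch of threshold-based hash matching. Everything in it is hypothetical: the empty hash set stands in for the NCMEC database, the directory path and threshold value are made up, and Apple’s real system uses its NeuralHash perceptual hash plus cryptographic threshold machinery rather than a plain SHA-256 counter:

```python
import hashlib
from pathlib import Path

# Hypothetical stand-ins for the real NCMEC-supplied hash database.
KNOWN_BAD_HASHES: set[str] = set()
MATCH_THRESHOLD = 30  # illustrative; the article does not give Apple's figure

def count_matches(photo_dir: str) -> int:
    """Count photos whose hash appears in the known-bad database."""
    path = Path(photo_dir)
    if not path.is_dir():
        return 0
    return sum(
        1
        for photo in path.glob("*.jpg")
        if hashlib.sha256(photo.read_bytes()).hexdigest() in KNOWN_BAD_HASHES
    )

# Nothing escalates below the threshold; at or above it, the account
# is flagged, which is the step that leads to a report.
if count_matches("/photos/icloud-upload-queue") >= MATCH_THRESHOLD:
    print("threshold reached: flagged for review")
```

The design point the sketch captures is the one Snowden seizes on: no iCloud-bound photos, no scan, no report.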
“So what happens when, in a few years at the latest … in order to protect the children, bills are passed in the legislature to prohibit this (Disable iCloud) bypass, effectively compelling Apple to scan photos that aren’t backed up to iCloud?” Snowden continues in his new post. Or what happens if a party in India starts demanding that Apple scan for memes associated with a separatist movement? “How long do we have left before the iPhone in your pocket begins quietly filing reports about encountering ‘extremist’ political material, or about your presence at a ‘civil disturbance’?” For more in-depth reading, visit OUR FORUM.

Microsoft on Thursday warned thousands of its cloud computing customers, including some of the world’s largest companies, that intruders could have had the ability to read, change, or even delete their main databases, according to a copy of the email and a cybersecurity researcher. The vulnerability is in Microsoft Azure’s flagship Cosmos DB database. A research team at security company Wiz discovered it was able to access the keys that control access to databases held by thousands of companies. Wiz Chief Technology Officer Ami Luttwak is a former chief technology officer at Microsoft’s Cloud Security Group.

Because Microsoft cannot change those keys by itself, it emailed the customers on Thursday telling them to create new ones. Microsoft agreed to pay Wiz $40,000 for finding the flaw and reporting it, according to an email it sent to Wiz. “We fixed this issue immediately to keep our customers safe and protected. We thank the security researchers for working under coordinated vulnerability disclosure,” Microsoft told Reuters. Microsoft’s email to customers said there was no evidence the flaw had been exploited. “We have no indication that external entities outside the researcher (Wiz) had access to the primary read-write key,” the email said.

“This is the worst cloud vulnerability you can imagine. It is a long-lasting secret,” Luttwak told Reuters. “This is the central database of Azure, and we were able to get access to any customer database that we wanted.” Luttwak’s team found the problem, dubbed ChaosDB, on Aug. 9 and notified Microsoft on Aug. 12, Luttwak said. The flaw was in a visualization tool called Jupyter Notebook, which has been available for years but was enabled by default in Cosmos DB beginning in February. After Reuters reported on the flaw, Wiz detailed the issue in a blog post.

Luttwak said even customers who have not been notified by Microsoft could have had their keys swiped by attackers, giving them access until those keys are changed. Microsoft only told customers whose keys were visible this month, while Wiz was working on the issue. Microsoft told Reuters that “customers who may have been impacted received a notification from us,” without elaborating.

The disclosure comes after months of bad security news for Microsoft. The company was breached by the same suspected Russian government hackers that infiltrated SolarWinds, who stole Microsoft source code. Then a wide range of hackers broke into Exchange email servers while a patch was being developed. A recent fix for a printer flaw that allowed computer takeovers had to be reissued repeatedly. Another Exchange flaw last week prompted an urgent U.S. government warning that customers need to install patches issued months ago because ransomware gangs are now exploiting it.

Problems with Azure are especially troubling because Microsoft and outside security experts have been pushing companies to abandon most of their own infrastructure and rely on the cloud for greater security. Cloud attacks are rarer, but they can be more devastating when they occur, and some are never publicized.
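As a practical footnote for affected Azure customers: the remediation Microsoft asked for, regenerating the account keys, can be scripted. A minimal sketch assuming the azure-mgmt-cosmosdb Python management SDK; the subscription, resource group, and account names are placeholders, and out of caution all four key kinds are rotated, not just the primary:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.cosmosdb import CosmosDBManagementClient

# Placeholders: substitute your own subscription, resource group, and account.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "my-resource-group"
ACCOUNT_NAME = "my-cosmos-account"

client = CosmosDBManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Rotate every key kind; applications still holding the old values lose
# access, so redeploy them with the new keys immediately afterwards.
for key_kind in ("primary", "secondary", "primaryReadonly", "secondaryReadonly"):
    client.database_accounts.begin_regenerate_key(
        RESOURCE_GROUP, ACCOUNT_NAME, {"key_kind": key_kind}
    ).result()
    print(f"regenerated {key_kind} key")
```

Learn more by visiting OUR FORUM.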