Microsoft will kick-start its Windows 11 rollout to mainstream users on October 5. On that day, Windows 11 will begin rolling out to eligible Windows 10 PCs and will go on sale preloaded on a handful of new PCs, officials said today, August 31. One feature that Microsoft originally touted as part of the Windows 11 experience won't be available at launch: the ability to get Android apps via the Microsoft Store. Microsoft officials said today that a preview of this capability will be ready for Windows Insiders "over the coming months." Microsoft is working with Amazon and Intel on this and has been developing an Android subsystem for Windows to make it happen.

Microsoft officials are planning a phased rollout of Windows 11 between October 5 and mid-2022. Microsoft will make the operating system available to new devices first. The company plans to use "intelligence models that consider hardware eligibility, reliability metrics, age of the device and other factors" to roll it out to additional in-market PCs. Microsoft plans to use Windows Update to notify Windows 10 users when their devices are eligible to move to Windows 11. Users will also be able to manually seek the upgrade on eligible devices by going to Settings > Windows Update > Check for Updates. As Microsoft officials said last week, the company is testing a revamped PC Health Check app and will make it available to all users soon so they can check whether their PCs meet Microsoft's requirements for the upgrade.

Users with PCs deemed by Microsoft to be ineligible will have the option to upgrade their own PCs, with the understanding that they will be in an officially "unsupported state." This means, confusingly, that they may or may not get security and driver updates, according to Microsoft. Individuals or businesses who aren't ready for, or interested in, moving to Windows 11 will be able to stay on Windows 10, which Microsoft will continue to support through October 14, 2025. (There's a new Windows 10 release coming this fall as well: a very minor update known as Windows 10 21H2. Microsoft officials haven't said whether there will be a Windows 10 22H1, 22H2, etc.)

As usual, I've asked whether there will be a blocking tool for users and admins who don't want Windows 11 "offered" to them or their user base. Maybe one won't be needed. I was told by the company: "Microsoft is putting that choice in the hands of its customers. When a customer with an eligible Windows 10 device is notified through Windows Update, they can decide if they want to upgrade to Windows 11 or stay on Windows 10." As to why Microsoft is going with October 5 as its Windows 11 release date, my guess is that it's hoping to give PC makers, who are counting on new Windows 11 PC purchases, a Columbus Day holiday sales boost and a long runway into the holiday 2021 season. Microsoft's blog post lists a bunch of PCs that it's touting as Windows 11-ready. None of these are unannounced devices; they're all on the market already.
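For the curious, you can approximate part of what the PC Health Check app verifies from a script. The Python sketch below shells out to built-in PowerShell cmdlets to probe a few of the published hardware requirements (TPM, Secure Boot, installed RAM). It assumes a stock Windows 10 machine with PowerShell on the PATH, and it is only an illustration; Microsoft's official tool also validates things this sketch ignores, such as the supported-CPU list.

```python
# Rough probe of a few published Windows 11 hardware requirements
# (TPM, Secure Boot, >= 4 GB RAM). Illustrative only -- the official
# PC Health Check app checks more, including the supported-CPU list.
import subprocess

def ps(command: str) -> str:
    """Run a PowerShell command and return its stdout, or '' on failure."""
    try:
        result = subprocess.run(
            ["powershell", "-NoProfile", "-Command", command],
            capture_output=True, text=True, timeout=30,
        )
        return result.stdout.strip()
    except (OSError, subprocess.TimeoutExpired):
        return ""

# Get-Tpm and Confirm-SecureBootUEFI generally require an elevated
# prompt; Confirm-SecureBootUEFI also throws on legacy-BIOS machines,
# hence the try/catch on the PowerShell side.
tpm_present = ps("(Get-Tpm).TpmPresent")
secure_boot = ps("try { Confirm-SecureBootUEFI } catch { 'False' }")
ram_gib = ps("[math]::Round((Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory / 1GB, 1)")

print(f"TPM present: {tpm_present or 'unknown'}")
print(f"Secure Boot: {secure_boot or 'unknown'}")
print(f"RAM (GiB):   {ram_gib or 'unknown'}")
```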
Follow the release of Windows 11 on OUR FORUM.

A shocking new tracking admission from Google, one that hasn’t yet made headlines, should be a serious warning to Chrome’s 2.6 billion users. If you’re one of them, this nasty new surprise should be a genuine reason to quit.

Behind the slick marketing and feature updates, the reality is that Chrome is in a mess when it comes to privacy and security. It has fallen behind rivals in protecting users from tracking and data harvesting, its plan to ditch nasty third-party cookies has been awkwardly postponed, and the replacement technology it said would prevent users from being profiled and tracked turns out to have just made everything worse. “Ubiquitous surveillance... harms individuals and society,” Firefox developer Mozilla warns, and “Chrome is the only major browser that does not offer meaningful protection against cross-site tracking... and will continue to leave users unprotected.” Google readily (and ironically) admits that such ubiquitous web tracking is out of hand and has resulted in “an erosion of trust... [where] 72% of people feel that almost all of what they do online is being tracked by advertisers, technology firms or others, and 81% say the potential risks from data collection outweigh the benefits.”

So, how can Google continue to openly admit that this tracking undermines user privacy, and yet enable such tracking by default on its flagship browser? The answer is simple: follow the money. Restricting tracking will materially reduce ad revenue from targeting users with sales pitches, political messages, and opinions. And right now, Google doesn’t have a Plan B; its grand idea for anonymized tracking is in disarray.

“Research has shown that up to 52 companies can theoretically observe up to 91% of the average user’s web browsing history,” a senior Chrome engineer told a recent Internet Engineering Task Force call, “and 600 companies can observe at least 50%.” Google’s Privacy Sandbox is supposed to fix this, to serve the needs of advertisers seeking to target users in a more “privacy-preserving” way. But the issue is that even Google’s staggering level of control over the internet advertising ecosystem is not absolute. There is already a complex spider’s web of trackers and data brokers in place, and any new technology simply adds to that complexity and cannot exist in isolation.

It’s this unhappy situation that’s behind the failure of FLoC, Google’s self-heralded attempt to deploy anonymized tracking across the web. It turns out that building a wall around only half a chicken coop is not especially effective, particularly when some of the foxes are already hanging around inside.

Rather than target you as an individual, FLoC assigns you to a cohort of people with similar interests and behaviors, defined by the websites you all visit. So, you’re not 55-year-old Jane Doe, sales assistant, residing at 101 Acacia Avenue. Instead, you’re presented as a member of Cohort X, from which advertisers can infer what you’ll likely do and buy from common websites the group members visit. Google would inevitably control the entire process, and advertisers would inevitably pay to play.

FLoC came under immediate fire. The privacy lobby called out the risk that data brokers would simply add cohort IDs to other data collected on users, such as IP addresses, browser identities, or any first-party web identifiers, giving them even more knowledge of individuals. There was also the risk that cohort IDs might betray sensitive information: politics, sexuality, health, finances, ...
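To make the cohort mechanism concrete: FLoC grouped users by computing a locality-sensitive hash (SimHash) over their browsing history, so that similar histories collapse into the same short cohort ID. The Python sketch below is a toy version of that idea; Chrome's actual implementation differed in many details (its feature encoding, minimum cohort sizes, and server-side filtering of sensitive cohorts), so treat it purely as an illustration of the shape of the mechanism.

```python
# Toy sketch of FLoC-style cohort assignment via SimHash: each visited
# domain casts a +1/-1 vote on every bit of a short fingerprint, and
# the signs of the vote totals form the cohort ID. Similar histories
# therefore produce identical or nearby IDs.
import hashlib

def domain_bits(domain: str, width: int = 16) -> list[int]:
    """Map a domain to `width` pseudo-random bits using SHA-256."""
    digest = hashlib.sha256(domain.encode()).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(width)]

def cohort_id(history: list[str], width: int = 16) -> int:
    """SimHash: sum the per-bit votes across domains, keep the signs."""
    votes = [0] * width
    for domain in history:
        for i, bit in enumerate(domain_bits(domain, width)):
            votes[i] += 1 if bit else -1
    return sum(1 << i for i, vote in enumerate(votes) if vote > 0)

# Users with heavily overlapping histories tend to share a cohort.
print(cohort_id(["news.example", "shoes.example", "travel.example"]))
print(cohort_id(["news.example", "shoes.example", "cars.example"]))
```

The privacy objection follows directly from the design: the cohort ID rides along with every ad request, so any party that can already recognize you gets your inferred interests thrown in for free.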
No, Google assured us as it launched its controversial FLoC trial, telling me in April that “we strongly believe that FLoC is better for user privacy compared to the individual cross-site tracking that is prevalent today.” Not so, Google has now admitted, telling the IETF that “today’s fingerprinting surface, even without FLoC, is easily enough to uniquely identify users,” but that “FLoC adds new fingerprinting surfaces.” Let me translate that: just as the privacy lobby had warned, FLoC makes things worse, not better.
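That admission is easy to quantify. Singling out one of Chrome's 2.6 billion users takes only about 31 bits of information, and every independent signal a browser exposes contributes log2 of its number of possible values. Assuming roughly 33,000 cohorts, a figure reported during the origin trial rather than anything Google has confirmed, the back-of-envelope arithmetic looks like this:

```python
# Fingerprinting arithmetic: each independent signal leaks roughly
# log2(#values) bits, and ~31 bits single out one of 2.6B users.
import math

users = 2_600_000_000             # Chrome's reported user base
bits_needed = math.log2(users)    # ~31.3 bits identify a single user

cohorts = 33_000                  # assumed cohort count (origin trial)
cohort_bits = math.log2(cohorts)  # ~15.0 bits leaked by the cohort ID

print(f"bits needed to single out a user:  {bits_needed:.1f}")
print(f"bits contributed by the cohort ID: {cohort_bits:.1f}")
print(f"bits still needed from elsewhere:  {bits_needed - cohort_bits:.1f}")
```

On those assumptions, a cohort ID alone could supply trackers with roughly half the bits they need; time zone, language, screen size, and the other classic fingerprinting signals comfortably cover the rest.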
Follow this thread on OUR FORUM.

By now you've probably heard that Apple plans to push a new and uniquely intrusive surveillance system out to many of the more than one billion iPhones it has sold, all of which run the behemoth's proprietary, take-it-or-leave-it software. This new offensive is tentatively slated to begin with the launch of iOS 15, almost certainly in mid-September, with the devices of its US user base designated as the initial targets. We're told that other countries will be spared, but not for long.

You might have noticed that I haven't mentioned which problem it is that Apple is purporting to solve. Why? Because it doesn't matter. Having read thousands upon thousands of remarks on this growing scandal, it has become clear to me that many understand it doesn't matter, but few if any have been willing to actually say it. Speaking candidly, if that's still allowed, that's the way it always goes when someone of institutional significance launches a campaign to defend an indefensible intrusion into our private spaces. They make a mad dash to the supposed high ground, from which they speak in low, solemn tones about their moral mission before fervently invoking the dread specter of the Four Horsemen of the Infopocalypse, warning that only a dubious amulet (or suspicious software update) can save us from the most threatening members of our species. Suddenly, everybody with a principled objection is forced to preface their concern with apologetic throat-clearing and the establishment of bonafides.

As a parent, I'm here to tell you that sometimes it doesn't matter why the man in the handsome suit is doing something. What matters are the consequences. Apple's new system, regardless of how anyone tries to justify it, will permanently redefine what belongs to you, and what belongs to them.

The task Apple intends its new surveillance system to perform (preventing its cloud systems from being used to store digital contraband, in this case unlawful images uploaded by its customers) is traditionally performed by searching those systems. While it's still problematic for anybody to search through a billion people's private files, the fact that they can only see the files you gave them is a crucial limitation. Now, however, that's all set to change. Under the new design, your phone will perform these searches on Apple's behalf before your photos have even reached its iCloud servers, and (yadda, yadda, yadda) if enough "forbidden content" is discovered, law enforcement will be notified.

I intentionally wave away the technical and procedural details of Apple's system here, some of which are quite clever, because they, like our man in the handsome suit, merely distract from the most pressing fact: that, in just a few weeks, Apple plans to erase the boundary dividing which devices work for you, and which devices work for them.

For its part, Apple says its system, in its initial v1.0 design, has a narrow focus: it only scrutinizes photos intended to be uploaded to iCloud (although for 85% of its customers, that means EVERY photo), and it does not scrutinize them beyond a simple comparison against a database of specific examples of previously identified child sexual abuse material (CSAM).
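To see why the "simple comparison" framing cuts both ways, here is a deliberately simplified Python sketch of on-device blocklist matching. Apple's real system uses a perceptual hash (NeuralHash) plus threshold cryptography rather than a plain digest, and the blocklist entries below are hypothetical placeholders; the point is only that the check runs on your device, against a database you can neither read nor audit, before anything is uploaded.

```python
# Deliberately simplified stand-in for on-device blocklist matching.
# Apple's actual system uses a perceptual hash (NeuralHash), so near-
# duplicates match too; SHA-256 here only matches exact byte copies.
import hashlib

BLOCKLIST = {
    # In the real system: an opaque hash database shipped to the device.
    "hypothetical-hash-value-1",
    "hypothetical-hash-value-2",
}

def scan_before_upload(photo_bytes: bytes) -> bool:
    """Return True if the photo's hash appears on the blocklist."""
    return hashlib.sha256(photo_bytes).hexdigest() in BLOCKLIST

# The essay's point: this runs on *your* phone, prior to upload, and
# nothing in the design limits what future blocklists may contain.
if scan_before_upload(b"...photo bytes..."):
    print("flagged for reporting")
```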
If you're an enterprising pedophile with a basement full of CSAM-tainted iPhones, Apple welcomes you to entirely exempt yourself from these scans by simply flipping the "Disable iCloud Photos" switch, a bypass which reveals that this system was never designed to protect children, as they would have you believe, but rather to protect their brand. As long as you keep that material off their servers, and so keep Apple out of the headlines, Apple doesn't care.

So what happens when, in a few years at the latest, a politician points that out, and, in order to protect the children, bills are passed in the legislature to prohibit this "Disable" bypass, effectively compelling Apple to scan photos that aren't backed up to iCloud? What happens when a party in India demands they start scanning for memes associated with a separatist movement? What happens when the UK demands they scan for a library of terrorist imagery? How long do we have left before the iPhone in your pocket begins quietly filing reports about encountering "extremist" political material, or about your presence at a "civil disturbance"? Or simply about your iPhone's possession of a video clip that contains, or maybe-or-maybe-not contains, a blurry image of a passer-by who resembles, according to an algorithm, "a person of interest"? To read this posting in its entirety, visit OUR FORUM.