
A program known as XCheck has given millions of celebrities, politicians, and other high-profile users special treatment, a privilege many abuse. Mark Zuckerberg has publicly said Facebook allows its users to speak on equal footing with the elites of politics, culture and journalism, and that its standards of behavior apply to everyone, no matter their status or fame. In private, the company has built a system that has exempted high-profile users from some or all of its rules, according to company documents reviewed by The Wall Street Journal. The program, known as “cross check” or “XCheck,” was initially intended as a quality-control measure for actions taken against high-profile accounts, including celebrities, politicians and journalists. Today, it shields millions of VIP users from the company’s normal enforcement process, the documents show. Some users are “whitelisted”—rendered immune from enforcement actions—while others are allowed to post rule-violating material pending Facebook employee reviews that often never come. At times, the documents show, XCheck has protected public figures whose posts contain harassment or incitement to violence, violations that would typically lead to sanctions for regular users. In 2019, it allowed international soccer star Neymar to show nude photos of a woman who had accused him of rape to tens of millions of his fans before the content was removed by Facebook. Whitelisted accounts shared inflammatory claims that Facebook’s fact-checkers deemed false, including that vaccines are deadly, that Hillary Clinton had covered up “pedophile rings,” and that then-President Donald Trump had called all refugees seeking asylum “animals,” according to the documents. A 2019 internal review of Facebook’s whitelisting practices, marked attorney-client privileged, found favoritism to those users to be both widespread and “not publicly defensible.” “We are not actually doing what we say we do publicly,” said the confidential review. It called the company’s actions “a breach of trust” and added: “Unlike the rest of our community, these people can violate our standards without any consequences.” Despite attempts to rein it in, XCheck grew to include at least 5.8 million users in 2020, documents show. In its struggle to accurately moderate a torrent of content and avoid negative attention, Facebook created invisible elite tiers within the social network. In describing the system, Facebook has misled the public and its own Oversight Board, a body that Facebook created to ensure the accountability of the company’s enforcement systems. In June, Facebook told the Oversight Board in writing that its system for high-profile users was used in “a small number of decisions.” In a written statement, Facebook spokesman Andy Stone said criticism of XCheck was fair, but added that the system “was designed for an important reason: to create an additional step so we can accurately enforce policies on content that could require more understanding.” He said Facebook has been accurate in its communications to the board and that the company is continuing to work to phase out the practice of whitelisting. “A lot of this internal material is outdated information stitched together to create a narrative that glosses over the most important point: Facebook itself identified the issues with cross-check and has been working to address them,” he said. The documents that describe XCheck are part of an extensive array of internal Facebook communications reviewed by The Wall Street Journal.
They show that Facebook knows, in acute detail, that its platforms are riddled with flaws that cause harm, often in ways only the company fully understands. Moreover, the documents show, Facebook often lacks the will or the ability to address them. This is the first in a series of articles based on those documents and on interviews with dozens of current and former employees. At least some of the documents have been turned over to the Securities and Exchange Commission and to Congress by a person seeking federal whistleblower protection, according to people familiar with the matter. Facebook’s stated ambition has long been to connect people. As it expanded over the past 17 years, from Harvard undergraduates to billions of global users, it struggled with the messy reality of bringing together disparate voices with different motivations—from people wishing each other happy birthday to Mexican drug cartels conducting business on the platform. Those problems increasingly consume the company. Time and again, the documents show, in the U.S. and overseas, Facebook’s own researchers have identified the platform’s ill effects, in areas including teen mental health, political discourse, and human trafficking. Time and again, despite congressional hearings, its own pledges, and numerous media exposés, the company didn’t fix them. Sometimes the company held back for fear of hurting its business. In other cases, Facebook made changes that backfired. Even Mr. Zuckerberg’s pet initiatives have been thwarted by his own systems and algorithms. The documents include research reports, online employee discussions, and drafts of presentations to senior management, including Mr. Zuckerberg. They aren’t the result of idle grumbling, but rather the formal work of teams whose job was to examine the social network and figure out how it could improve. They offer perhaps the clearest picture thus far of how broadly Facebook’s problems are known inside the company, up to the CEO himself. And when Facebook speaks publicly about many of these issues, to lawmakers, regulators, and, in the case of XCheck, its own Oversight Board, it often provides misleading or partial answers, masking how much it knows. Read this very detailed two-part story on OUR FORUM.

Your home network’s security is only as good as the configuration of your router or gateway. Leave it open or vulnerable, and you might end up with freeloaders that hog your bandwidth, at best. At worst, a snoop might take the opportunity to examine your internal traffic, hoping to learn sensitive information about you that can be exploited. To ensure that only approved devices are connected to your network, you can take a few simple steps to strengthen its security, which we explain below. If you can’t access some of these settings in your gateway (the combination modem/router provided by your internet service provider), consider switching off the router part of it and using a dedicated router instead, either of the traditional or mesh variety. Depending on your router’s age, you may need to change both the administrator password (which gives access to the management interface) and the Wi-Fi password. Older routers usually default to ultra-simple passwords for the administrator account—think “admin” and “password”—and they’re easily found online. You may have also chosen a simple, crackable password when turning on encryption for your network. In both scenarios, choose a new, stronger replacement. The best way to do this is with the built-in password generator in a password manager—the result will be truly random and thus more secure, and the manager will ensure you don’t forget it. (Good free password managers exist, so solid online security doesn’t have to cost you a thing.) Newer routers often ship with random passwords by default. It still doesn’t hurt to change those if your router or gateway has that info printed on it, though, particularly if you have less control over who might have physical access to the device. Just be sure to keep track of your new passwords, ideally in a password manager as mentioned. You should always encrypt your network traffic. These days, choose WPA2 for the best security. Older protocols like WPA and the ancient WEP won’t adequately protect you. If your router supports the newer WPA3 protocol, you can try it out—it’s an improvement over WPA2—but all of your connecting devices must support that protocol. Most people can stick with WPA2 for now, and then flip over to WPA3 once all devices in the household can also make the leap. When setting up WPA2 encryption, pick WPA2 Personal if given a choice between that and WPA2 Enterprise in your router settings. Also, if you see TKIP and AES as different encryption options, go with AES, as it’s much stronger. For older devices that cap out at WPA, consider finally upgrading your router. You’ll get better security, faster speeds, and more features for as little as $50 (or less if you wait for a sale). If you’re on an ancient router that only has WEP, replace it stat. You’re barely one step above having an open network. As for folks who leave encryption off because they want to share their internet with others: We salute your altruism, but don’t let it come back to haunt you. As mentioned above, no encryption means that people can spy on your internet traffic, giving them clues to your activities (including banking). That could lead to troublesome problems down the road.
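For the curious, generating a strong password is essentially what a password manager does under the hood. Here is a minimal Python sketch using the standard secrets module; the character set and the 24-character length are illustrative choices, not requirements of any particular router:

```python
# Minimal random-password generator, similar in spirit to the built-in
# generator in a password manager.
import secrets
import string

def generate_password(length: int = 24) -> str:
    # Letters, digits, and a few symbols most routers accept.
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different output every run
```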
Name your network wisely. It should be something generic but not too common, and nothing that reveals your address. A Service Set Identifier (SSID) is the name of a wireless network. It’s what you see when trying to connect to a Wi-Fi network: Linksys616, D-Link2289, 555MainSt, We Have No Wi-Fi Here, etc. Because older routers default to ultra-simple or easily cracked passwords, changing the SSID to a non-identifying word or phrase helps thwart hackers looking for low-hanging fruit. Leave it as Linksys, and a savvy snoop may realize you’re running a much older Linksys router with “admin” as the password for router management. If you haven’t changed that password (and most people don’t), your home network is ripe for their exploration. More recent routers often use a combination of the manufacturer name and a numeric string (often the model number) for the SSID, making it even easier to look up the default admin password. Unless you have a modern enough router that issues random passwords as part of the factory settings, you could be even more vulnerable. So just change the SSID. (Don’t use your address for it, either. No need to make yourself more identifiable.) Note: Years ago, a common recommendation was to not broadcast your SSID at all—that is, to keep it hidden from the list of available Wi-Fi networks in your vicinity. But security through obscurity doesn’t really work here—it’s been proven that someone can easily discover hidden networks with a wireless network scanner, as the sketch below illustrates. Since disabling SSID broadcasting also makes it harder for people to join your network, you’re generally better off leaving it visible, using the strongest encryption available to you, and creating a very strong Wi-Fi password. Visit OUR FORUM to learn how to secure your wireless router.
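On that hidden-network point, here is a rough sketch of how discovery works in practice. It assumes the scapy library, root privileges, and a Wi-Fi adapter already switched into monitor mode (the interface name below is a placeholder). “Hidden” networks omit their name from beacon frames but still transmit it in probe responses to joining clients:

```python
# Sketch: list nearby SSIDs, including "hidden" ones, by sniffing 802.11
# management frames. Requires scapy, root, and a monitor-mode interface.
from scapy.all import sniff
from scapy.layers.dot11 import Dot11, Dot11Beacon, Dot11Elt, Dot11ProbeResp

seen = set()

def show(pkt):
    # Beacons from hidden networks carry an empty SSID field, but probe
    # responses to clients include the real name.
    if pkt.haslayer(Dot11Beacon) or pkt.haslayer(Dot11ProbeResp):
        ssid = pkt[Dot11Elt].info.decode(errors="replace") or "<hidden>"
        bssid = pkt[Dot11].addr3
        if (bssid, ssid) not in seen:
            seen.add((bssid, ssid))
            print(f"{bssid}  {ssid}")

sniff(iface="wlan0mon", prn=show, timeout=60)  # "wlan0mon" is a placeholder
```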

Some of the most successful and lucrative online scams employ a “low-and-slow” approach — avoiding detection or interference from researchers and law enforcement agencies by stealing small bits of cash from many people over an extended period. Here’s the story of a cybercrime group that compromises up to 100,000 email inboxes daily, and apparently does little else with this access except siphon gift card and customer loyalty program data that can be sold online. The data in this story come from a trusted source in the security industry that has visibility into a network of hacked machines that fraudsters in just about every corner of the Internet are using to anonymize their malicious Web traffic. For the past three years, the source — we’ll call him “Bill” to preserve his requested anonymity — has been watching one group of threat actors that is mass-testing millions of usernames and passwords against the world’s major email providers every day. Bill said he’s not sure where the passwords are coming from, but he assumes they are tied to various databases for compromised websites that get posted to password cracking and hacking forums on a regular basis. Bill said this criminal group averages between five and ten million email authentication attempts daily and comes away with anywhere from 50,000 to 100,000 working inbox credentials. In about half the cases the credentials are being checked via IMAP, an email standard used by email software clients like Mozilla’s Thunderbird and Microsoft Outlook. With his visibility into the proxy network, Bill can see whether or not an authentication attempt succeeds based on the network response from the email provider (e.g. the mail server responds “OK” = successful access). You might think that whoever is behind such a sprawling crime machine would use their access to blast out spam, or conduct targeted phishing attacks against each victim’s contacts. But based on interactions that Bill has had with several large email providers so far, this crime gang merely uses custom, automated scripts that periodically log in and search each inbox for digital items of value that can easily be resold. And they seem particularly focused on stealing gift card data. “Sometimes they’ll log in as much as two to three times a week for months at a time,” Bill said. “These guys are looking for low-hanging fruit — basically cash in your inbox. Whether it’s related to hotel or airline rewards or just Amazon gift cards, after they successfully log in to the account their scripts start pilfering inboxes looking for things that could be of value.” How do the compromised email credentials break down in terms of ISPs and email providers? There are victims on nearly all major email networks, but Bill said several large Internet service providers (ISPs) in Germany and France are heavily represented in the compromised email account data. “With some of these international email providers we’re seeing something like 25,000 to 50,000 email accounts a day get hacked,” Bill said. “I don’t know why they’re getting popped so heavily.” That may sound like a lot of hacked inboxes, but Bill said some of the bigger ISPs represented in his data have tens or hundreds of millions of customers. Measuring which ISPs and email providers have the biggest numbers of compromised customers is not so simple in many cases, nor is identifying companies with employees whose email accounts have been hacked.
This kind of mapping is often more difficult than it used to be because so many organizations have now outsourced their email to cloud services like Gmail and Microsoft Office 365 — where users can access their email, files, and chat records all in one place. In a December 2020 blog post about how Microsoft is moving away from passwords to more robust authentication approaches, the software giant said an average of one in every 250 corporate accounts is compromised each month. As of last year, Microsoft had nearly 240 million active users, according to this analysis. “To me, this is an important story because for years people have been like, yeah we know email isn’t very secure, but this generic statement doesn’t have any teeth to it,” Bill said. “I don’t feel like anyone has been able to call attention to the numbers that show why email is so insecure.” Bill says that in general companies have a great many more tools available for securing and analyzing employee email traffic when that access is funneled through a Web page or VPN, versus when that access happens via IMAP. “It’s just more difficult to get through the Web interface because on a website you have a plethora of advanced authentication controls at your fingertips, including things like device fingerprinting, scanning for HTTP header anomalies, and so on,” Bill said. “But what are the detection signatures you have available for detecting malicious logins via IMAP?” Microsoft declined to comment specifically on Bill’s research but said customers can block the overwhelming majority of account takeover efforts by enabling multi-factor authentication. Read the detailed report on OUR FORUM.
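To make the IMAP mechanics in this story concrete: the success signal Bill observes is just the server’s reply to a LOGIN command, and the “pilfering” is an ordinary mailbox search. Here is a minimal sketch using Python’s standard imaplib; the hostname, account, and search term are hypothetical, and it should only ever be run against your own mailbox, for instance to gauge what a thief would find:

```python
# Sketch of an automated IMAP check: the LOGIN response reveals whether
# credentials work, and SEARCH is the same primitive used to pilfer inboxes.
import imaplib

def imap_gift_card_check(host: str, user: str, password: str) -> list[bytes]:
    conn = imaplib.IMAP4_SSL(host)
    try:
        conn.login(user, password)  # server answers "OK" on success
    except imaplib.IMAP4.error:
        return []                   # bad credentials: nothing to do
    conn.select("INBOX", readonly=True)
    status, data = conn.search(None, '(SUBJECT "gift card")')
    conn.logout()
    return data[0].split() if status == "OK" else []

# Hypothetical usage, against your own account only:
# ids = imap_gift_card_check("imap.example.com", "you@example.com", "...")
```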

Microsoft will kick-start its Windows 11 rollout to mainstream users on October 5. On that day, Windows 11 will begin rolling out to eligible Windows 10 PCs and will go on sale preloaded on a handful of new PCs, officials said today, August 31. There will be one feature that Microsoft originally touted as part of the Windows 11 experience that won't be available at launch: the ability to get Android apps via the Microsoft Store. Microsoft officials said today that they will have this capability ready for preview by Windows Insiders "over the coming months." Microsoft is working with Amazon and Intel on this and has been developing an Android subsystem for Windows to make it happen. Microsoft officials are planning a phased rollout of Windows 11 between October 5 and mid-2022. Microsoft will make the operating system available to new devices first. The company plans to use "intelligence models that consider hardware eligibility, reliability, metrics, age of the device and other factors" to roll it out to additional in-market PCs. Microsoft plans to use Windows Update to notify Windows 10 users when their devices are eligible to move to Windows 11. Users also will be able to manually "seek" the upgrade for eligible devices by going to Settings > Windows Update > Check for Updates. As Microsoft officials said last week, the company is testing a revamped PC Health Check app and will make it available to all users soon so they can check whether their PCs meet Microsoft's requirements for the upgrade. Users with PCs deemed by Microsoft to be ineligible will have an option to upgrade their own PCs, with the understanding they will be in an officially "unsupported state." This means, confusingly, that they may or may not get security and driver updates, according to Microsoft. Individuals or businesses who aren't ready or interested in going to Windows 11 will be able to stay on Windows 10, which Microsoft will continue to support through October 14, 2025. (There's a new Windows 10 release coming this fall, as well, which will be a very minor update known as Windows 10 21H2. Microsoft officials haven't said whether there will be a Windows 10 22H1, 22H2, etc.) I've asked if there will be a blocking tool, as usual, for admins who don't want Windows 11 "offered" to them and their user base. Maybe one won't be needed. I was told by the company: "Microsoft is putting that choice in the hands of its customers. When a customer with an eligible Windows 10 device is notified through Windows Update, they can decide if they want to upgrade to Windows 11 or stay on Windows 10." As to why Microsoft is going with October 5 as its Windows 11 release date, my guess is it's hoping to give PC makers banking on new Windows 11 PC purchases a Columbus Day holiday sales boost and a long runway into the holiday 2021 season. Microsoft's blog post lists a bunch of PCs that it's touting as Windows 11-ready. None of these are unannounced devices; they're all on the market already. Follow the release of Windows 11 on OUR FORUM.
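A footnote on the blocking question: the documented Windows Update for Business "target release version" policy has historically let users pin a machine to a given Windows 10 feature update. Here is a sketch of setting those values with Python's standard winreg module, run as administrator; whether these keys also suppress the Windows 11 offer is an assumption on my part, and "21H2" is purely an illustrative target:

```python
# Sketch: pin Windows Update to a specific Windows 10 feature release via
# the documented "target release version" policy values. Run as admin.
# Assumption: these values also hold back the Windows 11 offer; "21H2" is
# an illustrative target, not a recommendation.
import winreg

KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "TargetReleaseVersion", 0, winreg.REG_DWORD, 1)
    winreg.SetValueEx(key, "TargetReleaseVersionInfo", 0, winreg.REG_SZ, "21H2")
```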

A shocking new tracking admission from Google, one that hasn’t yet made headlines, should be a serious warning to Chrome’s 2.6 billion users. If you’re one of them, this nasty new surprise should be a genuine reason to quit. Behind the slick marketing and feature updates, the reality is that Chrome is in a mess when it comes to privacy and security. It has fallen behind rivals in protecting users from tracking and data harvesting, its plan to ditch nasty third-party cookies has been awkwardly postponed, and the replacement technology it said would prevent users from being profiled and tracked turns out to have just made everything worse. “Ubiquitous surveillance... harms individuals and society,” Firefox developer Mozilla warns, and “Chrome is the only major browser that does not offer meaningful protection against cross-site tracking... and will continue to leave users unprotected.” Google readily (and ironically) admits that such ubiquitous web tracking is out of hand and has resulted in “an erosion of trust... [where] 72% of people feel that almost all of what they do online is being tracked by advertisers, technology firms or others, and 81% say the potential risks from data collection outweigh the benefits.” So, how can Google continue to openly admit that this tracking undermines user privacy, and yet enable such tracking by default on its flagship browser? The answer is simple—follow the money. Restricting tracking will materially reduce ad revenue from targeting users with sales pitches, political messages, and opinions. And right now, Google doesn’t have a Plan B—its grand idea for anonymized tracking is in disarray. “Research has shown that up to 52 companies can theoretically observe up to 91% of the average user’s web browsing history,” a senior Chrome engineer told a recent Internet Engineering Task Force call, “and 600 companies can observe at least 50%.” Google’s Privacy Sandbox is supposed to fix this, to serve the needs of advertisers seeking to target users in a more “privacy-preserving” way. But the issue is that even Google’s staggering level of control over the internet advertising ecosystem is not absolute. There is already a complex spider’s web of trackers and data brokers in place. And any new technology simply adds to that complexity and cannot exist in isolation. It’s this unhappy situation that’s behind the failure of FLoC, Google’s self-heralded attempt to deploy anonymized tracking across the web. It turns out that building a wall around only half a chicken coop is not especially effective—especially when some of the foxes are already hanging around inside. Rather than target you as an individual, FLoC assigns you to a cohort of people with similar interests and behaviors, defined by the websites you all visit. So, you’re not 55-year-old Jane Doe, sales assistant, residing at 101 Acacia Avenue. Instead, you’re presented as a member of Cohort X, from which advertisers can infer what you’ll likely do and buy from common websites the group members visit. Google would inevitably control the entire process, and advertisers would inevitably pay to play. FLoC came under immediate fire. The privacy lobby called out the risks that data brokers would simply add cohort IDs to other data collected on users—IP addresses or browser identities or any first-party web identifiers, giving them even more knowledge on individuals. There was also the risk that cohort IDs might betray sensitive information—politics, sexuality, health, finances, ... 
No, Google assured us as it launched its controversial FLoC trial, telling me in April that “we strongly believe that FLoC is better for user privacy compared to the individual cross-site tracking that is prevalent today.” Not so, Google has now suddenly admitted, telling the IETF that “today’s fingerprinting surface, even without FLoC, is easily enough to uniquely identify users,” but that “FLoC adds new fingerprinting surfaces.” Let me translate that—just as the privacy lobby had warned, FLoC makes things worse, not better. Follow this thread on OUR FORUM.
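For readers wondering what “assigning users to cohorts” means mechanically, here is a toy sketch in the spirit of the SimHash clustering FLoC was built on: similar browsing histories hash to the same small cohort ID, which is what advertisers (and, as critics warned, data brokers) would see. This is a simplified illustration, not Chrome’s implementation, and the 8-bit cohort space is far smaller than the real one:

```python
# Toy cohort assignment in the spirit of FLoC's SimHash clustering:
# users whose browsing histories overlap heavily land in the same cohort.
# Simplified illustration only, not Chrome's actual implementation.
import hashlib

def cohort_id(domains: list[str], bits: int = 8) -> int:
    counts = [0] * bits
    for domain in domains:
        h = int.from_bytes(hashlib.sha256(domain.encode()).digest()[:4], "big")
        for i in range(bits):
            counts[i] += 1 if (h >> i) & 1 else -1
    # Each bit of the cohort ID reflects the majority bit across the history.
    return sum(1 << i for i in range(bits) if counts[i] > 0)

print(cohort_id(["news.example", "gardening.example", "knitting.example"]))
```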

By now you've probably heard that Apple plans to push a new and uniquely intrusive surveillance system out to many of the more than one billion iPhones it has sold, which all run the behemoth's proprietary, take-it-or-leave-it software. This new offensive is tentatively slated to begin with the launch of iOS 15—almost certainly in mid-September—with the devices of its US user-base designated as the initial targets. We’re told that other countries will be spared, but not for long. You might have noticed that I haven’t mentioned which problem it is that Apple is purporting to solve. Why? Because it doesn’t matter. Having read thousands upon thousands of remarks on this growing scandal, it has become clear to me that many understand it doesn't matter, but few if any have been willing to actually say it. Speaking candidly, if that’s still allowed, that’s the way it always goes when someone of institutional significance launches a campaign to defend an indefensible intrusion into our private spaces. They make a mad dash to the supposed high ground, from which they speak in low, solemn tones about their moral mission before fervently invoking the dread specter of the Four Horsemen of the Infopocalypse, warning that only a dubious amulet—or suspicious software update—can save us from the most threatening members of our species. Suddenly, everybody with a principled objection is forced to preface their concern with apologetic throat-clearing and the establishment of bona fides. As a parent, I’m here to tell you that sometimes it doesn’t matter why the man in the handsome suit is doing something. What matters are the consequences. Apple’s new system, regardless of how anyone tries to justify it, will permanently redefine what belongs to you, and what belongs to them. The task Apple intends its new surveillance system to perform—preventing their cloud systems from being used to store digital contraband, in this case, unlawful images uploaded by their customers—is traditionally performed by searching those systems. While it’s still problematic for anybody to search through a billion people’s private files, the fact that they can only see the files you gave them is a crucial limitation. Now, however, that’s all set to change. Under the new design, your phone will now perform these searches on Apple’s behalf before your photos have even reached their iCloud servers, and—yadda, yadda, yadda—if enough "forbidden content" is discovered, law enforcement will be notified. I intentionally wave away the technical and procedural details of Apple’s system here, some of which are quite clever, because they, like our man in the handsome suit, merely distract from the most pressing fact—the fact that, in just a few weeks, Apple plans to erase the boundary dividing which devices work for you, and which devices work for them. For its part, Apple says their system, in its initial, v1.0 design, has a narrow focus: it only scrutinizes photos intended to be uploaded to iCloud (although for 85% of its customers, that means EVERY photo), and it does not scrutinize them beyond a simple comparison against a database of specific examples of previously identified child sexual abuse material (CSAM).
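Mechanically, the comparison Apple describes boils down to checking each photo’s fingerprint against a set of known-image fingerprints and reporting only past a match threshold. A deliberately simplified sketch follows: Apple’s real system uses a perceptual “NeuralHash” and cryptographic threshold machinery, both of which a plain SHA-256 digest and a simple counter stand in for here, and every value below is hypothetical:

```python
# Simplified stand-in for on-device matching against a known-image database.
# Apple's system uses a perceptual "NeuralHash" plus threshold cryptography;
# a plain SHA-256 digest and a counter are used here purely for illustration.
import hashlib

KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # hypothetical entry
}
MATCH_THRESHOLD = 3  # report only after several matches, as Apple described

def flag_for_review(photo_blobs: list[bytes]) -> bool:
    matches = sum(
        1 for blob in photo_blobs
        if hashlib.sha256(blob).hexdigest() in KNOWN_HASHES
    )
    return matches >= MATCH_THRESHOLD
```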
If you’re an enterprising pedophile with a basement full of CSAM-tainted iPhones, Apple welcomes you to entirely exempt yourself from these scans by simply flipping the “Disable iCloud Photos” switch, a bypass which reveals that this system was never designed to protect children, as they would have you believe, but rather to protect their brand. As long as you keep that material off their servers, and so keep Apple out of the headlines, Apple doesn’t care. So what happens when, in a few years at the latest, a politician points that out, and—in order to protect the children—bills are passed in the legislature to prohibit this "Disable" bypass, effectively compelling Apple to scan photos that aren’t backed up to iCloud? What happens when a party in India demands they start scanning for memes associated with a separatist movement? What happens when the UK demands they scan for a library of terrorist imagery? How long do we have left before the iPhone in your pocket begins quietly filing reports about encountering “extremist” political material, or about your presence at a "civil disturbance"? Or simply about your iPhone's possession of a video clip that contains, or maybe-or-maybe-not contains, a blurry image of a passer-by who resembles, according to an algorithm, "a person of interest"? To read this posting in its entirety visit OUR FORUM.