
Last month, a researcher at Meta prepared a talk for colleagues that they knew would hit close to home. The subject: how to cope as a researcher when the company you work for is constantly receiving negative press. The talk had been approved for the company’s annual research summit for employees in early November. But shortly before the event, Meta’s legal and communications departments determined that the risk of the contents leaking was too great. It disappeared from the research summit’s agenda days beforehand, along with another pre-taped talk describing efforts to combat hate speech and bullying. Neither talk ever saw the light of day.

The pulled talks highlight how a barrage of leaks and external scrutiny has chilled the flow of information inside the company formerly known as Facebook. Many of the changes appear designed to thwart the next Frances Haugen, who worked in the Integrity organization responsible for making the social network safer before she quit earlier this year, taking thousands of internal documents with her. Those documents served as the basis for a series of damning stories in The Wall Street Journal and dozens of other news outlets, including The Verge. Some of them, such as internal research showing Instagram and Facebook can have negative effects on young people, have led to congressional hearings and lawsuits. And as the bad press continues, Meta executives have argued that the documents were cherry-picked to smear the company and paint an incomplete picture.

While the documents Haugen leaked haven’t yet caused Meta to make meaningful changes to its products, they’ve already left a lasting mark on how the world’s largest social network operates, particularly in its research and Integrity divisions. Ten of the 70 preapproved talks presented at the internal research summit a couple of weeks ago received a second, more stringent review to minimize leak risk.
Senior leaders, including policy and communications chief Nick Clegg, have in recent months slowed the internal release of Integrity research, asking for reports to be reviewed again before they’re shared even in private groups. In some cases, researchers have been told to make clear what in their work is defensible by data and what is opinion, and that their projects will need clearance from more managers before work begins. Last month, Meta rolled out a new “Integrity Umbrella” system designed to thwart leakers. The Umbrella maintains a list of employees in Integrity and gives them automatic access to private Integrity groups in Workplace, the internal version of Facebook used by employees. When it was introduced, several employees pointed out internally that the system wouldn’t have stopped Haugen, since she worked in the Integrity division when she gathered the leaked documents.

It’s not just the Integrity division that is locking down access to Workplace groups. The change has become so widespread that employees have taken to a Workplace group titled “Examples of Meta Culture trending towards ‘Closed,’” where they post screenshots of previously open groups they belong to being set to private.

This story is based on conversations with current and former Meta employees and on internal Workplace posts from the past month obtained by The Verge. In response to this story, Meta confirmed that the company was making changes to internal communication. “Since earlier this year, we have been talking about the right model of information sharing for the company, balancing openness with sharing relevant information and maintaining focus,” said Mavis Jones, a Meta spokesperson. “This is a work in progress and we are committed to an open culture for the company.” Complete details are posted on OUR FORUM.

For those of you who remember the fuss about the Y2K bug, this story may sound familiar. The Cybersecurity & Infrastructure Security Agency (CISA) has issued a warning to Critical Infrastructure (CI) owners and operators, and other users who get their time from GPS, about a bug affecting the GPS Daemon (GPSD) in versions 3.20 through 3.22.

If you don’t remember the Y2K bug, here is a quick reminder. Before the year 2000, many computer programs kept track of the year using only the last two digits instead of all four. Programs coded this way would work correctly until the first day of the new millennium, when they would assume they’d been transported back in time 100 years to 1900. Some computer programs don’t care what time it is, but others do, and there were genuine fears that getting the date wrong by 100 years might cause the lights to go out, or planes to fall from the sky. In the end, those big problems didn’t materialize, because everyone received a warning or two, or twenty, well in advance, and there was enough time to take action and fix the broken code.

Alongside telling you where in space you are, the Global Positioning System (GPS) can also tell you where in time you are. To do this, it keeps a count of the number of weeks since January 6, 1980. The main civil GPS signal broadcasts the GPS week number using a 10-bit field with a maximum value of 1,023. This means that every 19.7 years, the broadcast GPS week number rolls over to zero.

GPSD is a GPS service daemon for Linux, OpenBSD, Mac OS X, and Windows. It collects data from GPS receivers and makes that data accessible to clients, which can query it on TCP port 2947. It can be found on Android phones, drones, robot submarines, driverless cars, manned military equipment, and all manner of other embedded systems. Unfortunately, in an echo of the Y2K bug, a flaw in some versions of GPSD could cause the time to roll back after October 23, 2021.
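The 10-bit week counter described above can be sketched in a few lines of Python. This is an illustration of the arithmetic, not GPSD's actual code; the function name and the "era" parameter are our own framing of the rollover bookkeeping a receiver must do:

```python
from datetime import date, timedelta

GPS_EPOCH = date(1980, 1, 6)  # GPS week 0 began on this date

def gps_week_to_date(week_10bit: int, era: int) -> date:
    """Convert a broadcast 10-bit GPS week number to a calendar date.

    The broadcast field wraps at 1024 weeks (~19.7 years), so the
    receiver must learn which rollover "era" it lives in from some
    other source: era 0 spans 1980-1999, era 1 spans 1999-2019, and
    so on. Getting that out-of-band guess wrong is exactly what
    rollover bugs do.
    """
    full_weeks = era * 1024 + (week_10bit % 1024)
    return GPS_EPOCH + timedelta(weeks=full_weeks)
```

Week 0 of era 1 lands on August 22, 1999, and week 0 of era 2 on April 7, 2019, matching the two rollovers seen so far.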
The buggy versions of the code reportedly subtract 1024 from the week number on October 24, 2021. Network Time Protocol (NTP) servers using the broken GPSD versions would therefore think it’s March 2002 instead of October 2021. For computer systems that have no other time reference, being thrown back in time can cause several security issues.

From the perspective of incident handling and incident response, well-synchronized time across systems facilitates log analysis, forensic activities, and the correlation of events. Losing track of what happened when can lead to missed incidents. Getting shut out is even worse. In many cases, NTP is what ensures time is kept accurately, and many businesses and organizations rely on it. Authentication mechanisms such as Time-based One-Time Password (TOTP) and Kerberos also depend heavily on accurate time: given a severe mismatch, users would be unable to authenticate and gain access to systems. The same applies where authentication relies on cookies. Websites and services that use expiring cookies do not respond favorably to cookies that appear to come from two decades in the future.

Experience backs this up: the last GPS week number reset to zero occurred on April 6, 2019. Many GPS-enabled devices that were not properly designed to account for the rollover exhibited problems on that date. Other equipment became faulty several months before or after it, requiring software or firmware patches to restore function.

Since the affected versions of GPSD are 3.20 through 3.22, users should upgrade to version 3.23.1. Downgrading to older versions such as 3.19 is not recommended, since they are unsupported and carry bugs of their own.
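The roughly 19.6-year jump described above is easy to verify: subtracting one full 1024-week era from October 24, 2021 lands in mid-March 2002. A sketch of the arithmetic (not GPSD's actual code path):

```python
from datetime import date, timedelta

# A buggy rollover sanity check that subtracts a full 1024-week era
# from the computed date would turn late October 2021 into:
ntp_reference = date(2021, 10, 24)
rolled_back = ntp_reference - timedelta(weeks=1024)
print(rolled_back)  # 2002-03-10
```

Any log entry, Kerberos ticket, or cookie stamped with the real date would look almost twenty years in the future to a server stuck on that clock.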
Organizations that use GPS appliances or rely on GPSD should check whether GPSD is used anywhere in their infrastructure and, if so, which version is installed. If no recent upgrades have been performed, an upgrade to GPSD will likely be required. For more detailed information visit OUR FORUM.
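One way to audit an estate is to read the VERSION banner that gpsd sends to any client connecting on TCP port 2947 and compare the reported release against the affected range. The JSON banner is real gpsd behavior; the helper function and the sample strings tested against it are our own illustration:

```python
import json

VULNERABLE_RELEASES = {"3.20", "3.21", "3.22"}

def gpsd_is_vulnerable(banner_line: str) -> bool:
    """Return True if a gpsd VERSION banner reports an affected release.

    gpsd emits a JSON object with "class": "VERSION" as soon as a
    client connects on port 2947; the "release" field carries the
    version string.
    """
    banner = json.loads(banner_line)
    if banner.get("class") != "VERSION":
        return False
    # Compare only the major.minor components: 3.23.1 carries the fix.
    major_minor = ".".join(banner.get("release", "").split(".")[:2])
    return major_minor in VULNERABLE_RELEASES
```

On a live host the banner can be captured with a short socket read (or simply `nc localhost 2947`); anything reporting 3.20 through 3.22 should be upgraded to 3.23.1.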


The data of approximately 7 million Robinhood customers stolen in a recent breach is being sold on a popular hacking forum and marketplace. Last week, Robinhood disclosed a data breach after one of its employees was hacked, and the threat actor used that employee's account to access the information of approximately 7 million users through customer support systems. In addition to stealing the data, Robinhood stated that the hacker attempted to extort the company to prevent the data from being released. Stolen email addresses, especially those tied to financial services, are particularly popular among threat actors, as they can be used in targeted phishing attacks to steal more sensitive data.

Two days after Robinhood disclosed the attack, a threat actor named 'pompompurin' announced that they were selling the data on a hacking forum. In a forum post, pompompurin said he was selling the stolen information of 7 million Robinhood customers for at least five figures ($10,000 or more). The data for sale includes 5 million email addresses and, for another batch of Robinhood customers, 2 million email addresses together with full names. However, pompompurin said they were not selling the data for 310 customers who had more sensitive information stolen, including identification cards for some users. Robinhood did not initially disclose the theft of ID cards, and the threat actor states that they downloaded them from SendSafely, a secure file transfer service the trading platform uses when performing Know Your Customer (KYC) checks.

"As we disclosed on November 8, we experienced a data security incident and a subset of approximately 10 customers had more extensive personal information and account details revealed," Robinhood told BleepingComputer after we contacted them regarding the sale of their data. "These more extensive account details included identification images for some of those 10 people.
Like other financial services companies, we collect and retain identification images for some customers as part of our regulatory-required Know Your Customer checks."

pompompurin told BleepingComputer that he gained access to the Robinhood customer support systems after tricking a help desk employee into installing remote access software on their computer. Once remote access software is installed on a device, a threat actor can monitor activity, take screenshots, and remotely control the computer. While remotely controlling the device, the attacker can also use the employee's saved login credentials to log in to any internal Robinhood systems the employee had access to. "I was able to see all account information on people. I saw a few people while the support agent did work," pompompurin told BleepingComputer.

In response to further questions about how the employee's device was breached, Robinhood referred us back to its original statement, which said the threat actor "socially engineered a customer support employee by phone." However, the company did confirm to BleepingComputer that malware was not used in the attack.

As proof that they conducted the attack, pompompurin shared screenshots, seen by BleepingComputer, of the attackers accessing internal Robinhood systems. These included an internal help desk system used to look up Robinhood member information by email address, an internal knowledge base page about a "Project Oliver Twister" initiative designed to protect high-risk customers, and an "annotations" page showing notes for a particular customer.

pompompurin was also responsible for abusing the FBI's email servers to send threatening emails over the weekend, when US entities began receiving emails sent from FBI infrastructure warning recipients that their "virtualized clusters" were being targeted in a "sophisticated chain attack." To learn more direct your focus to OUR FORUM.