
Meta has been fined a record-breaking €1.2 billion ($1.3 billion) by European Union regulators for violating EU privacy laws by transferring the personal data of Facebook users to servers in the United States. The European Data Protection Board announced the fine in a statement Monday, saying it followed an inquiry into Facebook (FB) by the Irish Data Protection Commission, the chief regulator overseeing Meta’s operations in Europe. The move highlights ongoing uncertainty about how global businesses can legally transfer EU users’ data to overseas servers.

The EU regulator said the processing and storage of personal data in the United States contravened Europe’s signature data privacy law, known as the General Data Protection Regulation. Chapter 5 of the GDPR sets out the conditions under which personal data can be transferred to third countries or international organizations. The fine is the largest ever levied under the GDPR; the previous record of €746 million ($805.7 million) was levied against Amazon (AMZN) in 2021. Meta has also been ordered to cease the processing of personal data of European users in the United States within six months.

Meta’s infringement is “very serious since it concerns systematic, repetitive and continuous transfers,” said Andrea Jelinek, chair of the European Data Protection Board. “Facebook has millions of users in Europe, so the volume of personal data transferred is massive. The unprecedented fine is a strong signal to organizations that serious infringements have far-reaching consequences,” she added.

Meta, which also owns WhatsApp and Instagram, said it would appeal the ruling, including the fine, and that there would be no immediate disruption to Facebook in Europe. The company said the root of the issue stemmed from a “conflict of law” between US rules on access to data and the privacy rights of Europeans, and that EU and US policymakers were on a “clear path” to resolving this conflict under a new transatlantic Data Privacy Framework. The new framework seeks to end the limbo facing companies since 2020, when Europe’s top court struck down a transatlantic legal framework designed to address EU concerns about potential US government surveillance of European citizens, known as Privacy Shield. The United States and the EU have been negotiating a successor agreement since last year. The continued lack of a Privacy Shield replacement threatens thousands of businesses that depend on being able to move EU user data to other jurisdictions, according to legal experts.

The European Data Protection Board “chose to disregard the clear progress that policymakers are making to resolve this underlying issue,” Nick Clegg, Meta’s president of global affairs, and Jennifer Newstead, the company’s chief legal officer, said in a statement. “This decision is flawed, unjustified and sets a dangerous precedent for the countless other companies transferring data between the EU and the US,” they added.

Before Monday’s ruling, Ireland’s Data Protection Commission had handed Meta nearly $1 billion in fines for alleged violations of the GDPR since the fall of 2021. In this instance, however, the Irish regulator was not in favor of fining Meta, judging that doing so exceeded what could be regarded as “proportionate” to address the infringement. In its own statement Monday, the regulator said it was obliged to base its final ruling on the decision of the European Data Protection Board.
Ireland has a narrow path to tread between retaining top US tech companies and aligning with the European Union’s hard-hitting approach to tech regulation. Dublin is home to the European headquarters of Apple, Meta, Twitter, and Google, which have created thousands of jobs in the country and boosted its economic growth. Ireland’s low corporate tax rate of 12.5% has been a major factor in luring these firms. The country was among the last in the Organization for Economic Cooperation and Development to join a global agreement in 2021 to tax multinational firms at a minimum rate of 15%. Complete details can be found posted on OUR FORUM.

When computer scientists at Microsoft started to experiment with a new artificial intelligence system last year, they asked it to solve a puzzle that should have required an intuitive understanding of the physical world. "Here we have a book, nine eggs, a laptop, a bottle, and a nail," they asked. "Please tell me how to stack them onto each other in a stable manner." The researchers were startled by the ingenuity of the AI system's answer. Put the eggs on the book, it said. Arrange the eggs in three rows with space between them. Make sure you don't crack them. "Place the laptop on top of the eggs, with the screen facing down and the keyboard facing up," it wrote. "The laptop will fit snugly within the boundaries of the book and the eggs, and its flat and rigid surface will provide a stable platform for the next layer."

The clever suggestion made the researchers wonder whether they were witnessing a new kind of intelligence. In March, they published a 155-page research paper arguing that the system was a step toward artificial general intelligence, or AGI, which is shorthand for a machine that can do anything the human brain can do. Microsoft, the first major tech company to release a paper making such a bold claim, stirred one of the tech world's testiest debates: Is the industry building something akin to human intelligence? Or are some of the industry's brightest minds letting their imaginations get the best of them? "I started off being very skeptical – and that evolved into a sense of frustration, annoyance, maybe even fear," said Peter Lee, who leads research at Microsoft. "You think: Where the heck is this coming from?"

Microsoft's research paper, "Sparks of Artificial General Intelligence," goes to the heart of what technologists have been working toward – and fearing – for decades. If they build a machine that works like the human brain or even better, it could change the world. But it could also be dangerous. Making AGI claims can be a reputation killer for computer scientists. What one researcher believes is a sign of intelligence can easily be explained away by another, and the debate often sounds more appropriate to a philosophy club than a computer lab. But some believe the industry has in the past year or so inched toward something that can't be explained away: a new AI system that is coming up with humanlike answers and ideas that weren't programmed into it. Microsoft has reorganized parts of its research labs to include multiple groups dedicated to exploring the idea. One will be run by Sebastien Bubeck, who was the lead author on the Microsoft AGI paper.

About five years ago, companies like Google, Microsoft, and OpenAI began building large language models, or LLMs. Those systems often spend months analyzing vast amounts of digital text, including books, Wikipedia articles, and chat logs. By pinpointing patterns in that text, they learned to generate text of their own, including term papers, poetry, and computer code. They can even carry on a conversation. The technology the Microsoft researchers were working with, OpenAI's GPT-4, is considered the most powerful of those systems. Microsoft is a close partner of OpenAI and has invested $13 billion in the San Francisco company. The researchers included Bubeck, a 38-year-old French expatriate and former Princeton University professor. One of the first things he and his colleagues did was ask GPT-4 to write a mathematical proof showing that there are infinitely many prime numbers, and to do it in a way that rhymed.
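For readers curious about the mathematics behind that request, the result in question is the infinitude of the primes, and the standard argument the rhyming constraint presumably had to wrap is Euclid's. The sketch below is an editorial illustration of that classic proof, not the model's output:

```latex
% Euclid's classic proof that there are infinitely many primes,
% included here as a plain reference point; GPT-4 was asked to produce
% a rhyming version of an argument like this one.
\begin{proof}
Suppose, for contradiction, that there are only finitely many primes
$p_1, p_2, \dots, p_n$, and consider
\[
  N = p_1 p_2 \cdots p_n + 1 .
\]
Dividing $N$ by any $p_i$ leaves remainder $1$, so no $p_i$ divides $N$.
Since $N > 1$, it must have some prime factor, and that factor cannot be
any of $p_1, \dots, p_n$, contradicting the assumption that the list was
complete. Hence there are infinitely many primes.
\end{proof}
```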
The technology's poetic proof was so impressive – both mathematically and linguistically – that Bubeck found it hard to understand what he was chatting with. Please visit OUR FORUM for more.

The Metaverse, the once-buzzy technology that promised to allow users to hang out awkwardly in a disorientating video-game-like world, has died after being abandoned by the business world. It was three years old. The capital-M Metaverse, a descendant of the 1982 movie "Tron" and the 2003 video game "Second Life," was born in 2021 when Facebook founder Mark Zuckerberg changed the name of his trillion-dollar company to Meta. After a much-heralded debut, the Metaverse became the obsession of the tech world and a quick hack to win over Wall Street investors. The hype could not save the Metaverse, however, and a lack of coherent vision for the product ultimately led to its decline. Once the tech industry turned to a new, more promising trend — generative AI — the fate of the Metaverse was sealed. The Metaverse is now headed to the tech industry's graveyard of failed ideas. But the short life and ignominious death of the Metaverse offer a glaring indictment of the tech industry that birthed it.

From the moment of its delivery, Zuckerberg claimed that the Metaverse would be the future of the internet. The glitzy, spurious promotional video that accompanied Zuckerberg's name-change announcement described a future where we'd be able to interact seamlessly in virtual worlds: Users would "make eye contact" and "feel like you're right in the room together." The Metaverse offered people the chance to engage in an "immersive" experience, he claimed. These grandiose promises heaped sky-high expectations on the Metaverse. The media swooned over the newborn concept: The Verge published a nearly 5,000-word-long interview with Zuckerberg immediately following the announcement — in which the writer called it "an expansive, immersive vision of the internet." Glowing profiles of the Metaverse seemed to set it on a laudatory path, but the actual technology failed to deliver on this promise throughout its short life. A wonky virtual-reality interview with the CBS host Gayle King, where low-quality cartoon avatars of both King and Zuckerberg awkwardly motioned to each other, was a stark contrast to the futuristic vistas shown in Meta's splashy introductory video.

The Metaverse also suffered from an acute identity crisis. A functional business proposition requires a few things to thrive and grow: a clear use case, a target audience, and the willingness of customers to adopt the product. Zuckerberg waxed poetic about the Metaverse as "a vision that spans many companies" and "the successor to the mobile internet," but he failed to articulate the basic business problems that the Metaverse would address. The concept of virtual worlds where users interact with each other using digital avatars is an old one, going back as far as the late 1990s with massively multiplayer online role-playing games, such as "Meridian 59," "Ultima Online," and "EverQuest." And while the Metaverse supposedly built on these ideas with new technology, Zuckerberg's one actual product — the VR platform Horizon Worlds, which required the use of an incredibly clunky Oculus headset — failed to suggest anything approaching a road map or a genuine vision. In spite of the Metaverse's arrested conceptual development, a pliant press published statements about the future of the technology that were somewhere between unrealistic and outright irresponsible.
The CNBC host Jim Cramer nodded approvingly when Zuckerberg claimed that 1 billion people would use the Metaverse and spend hundreds of dollars there, despite the Meta CEO's inability to say what people would receive in exchange for their cash or why anyone would want to strap a clunky headset to their face to attend a low-quality, cartoon concert. The inability to define the Metaverse in any meaningful way didn't get in the way of its ascension to the top of the business world. In the months following the Meta announcement, it seemed that every company had a Metaverse product on offer, even though it was never obvious what such a product was or why these companies needed one. Microsoft CEO Satya Nadella would say at the company's 2021 Ignite Conference that he couldn't "overstate how much of a breakthrough" the Metaverse was for his company, the industry, and the world. Roblox, an online game platform that has existed since 2004, rode the Metaverse hype wave to an initial public offering and a $41 billion valuation.

Of course, the cryptocurrency industry took the ball and ran with it: The people behind the Bored Ape Yacht Club NFT company conned the press into believing that uploading someone's digital monkey pictures into VR would be the key to "master the Metaverse." Other crypto pumpers even successfully convinced people that digital land in the Metaverse would be the next frontier of real estate investment. Even businesses that seemed to have little to do with tech jumped on board. Walmart joined the Metaverse. Disney joined the Metaverse. Go in-depth by visiting OUR FORUM.

Microsoft issued a Windows update that broke a Chrome feature, making it harder to change your default browser and annoying Chrome users with popups, Gizmodo has learned. An April Windows update broke a new button in Chrome—the most popular browser in the world—that let you change your default browser with a single click, but the worst was reserved for users on the enterprise version of Windows. For weeks, every time an enterprise user opened Chrome, the Windows default settings page would pop up. There was no way to make it stop unless you uninstalled the operating system update. It forced Google to disable the setting, which had made Chrome more convenient.

This petty chapter of the browser wars started in July 2022 when Google quietly rolled out a new button in Chrome for Windows. It would show up near the top of the screen and let you change your default browser in one click without pulling up your system settings. For eight months, it worked great. Then, in April, Microsoft issued Windows update KB5025221, and things got interesting. “Every time I open Chrome the default app settings of Windows will open. I’ve tried many ways to resolve this without luck,” one IT administrator said on a Microsoft forum. A Reddit user noticed that the settings page also popped up any and every time you clicked on a link, but only if Chrome was your default browser. “It doesn’t happen if we change the default browser to Edge,” the user said. Others made similar complaints on Google support forums, some saying that entire organizations were having the issue. Users quickly realized the culprit was the operating system update. For people on the regular consumer version of Windows, things weren’t quite as bad; the one-click “Make Default” button just stopped working.

Gizmodo was able to replicate the problem. In fact, we were able to circumvent the issue just by changing the name of the Chrome app on a Windows desktop. It seems that Microsoft threw up the roadblock specifically for Chrome, the main competitor to its Edge browser. Microsoft didn’t answer questions on the subject, but shared a link to a blog post published before it messed up Chrome. “For information on this, please see this blog post about Microsoft’s approach to app pinning and app defaults in Windows. Microsoft has nothing further to share,” a Microsoft spokesperson said. The post describes the company’s “long-standing approach to put people in control of their Windows PC experience.”

Mozilla’s Firefox has its own one-click default button, which worked just fine throughout the ordeal. But according to Steve Teixeira, chief product officer at Mozilla, this isn’t the first anti-competitive move from Microsoft in recent years. “When using Windows machines, Firefox users routinely encounter these kinds of barriers, such as overriding their selection of default browser, or pop-ups and misleading warnings attempting to persuade them that Edge is somehow safer,” Teixeira said. “It’s past time for Microsoft to respect people’s preferences and allow them to use whatever browser they wish without interfering with their choice.” In response, Google had to disable its one-click default button; the issue stopped after it did. In other words, Microsoft seems to have gone out of its way to break a Chrome feature that made life easier for users. Google confirmed the details of this story, but declined to comment further. This is part of a pattern of behavior for Microsoft as it wages war on non-Windows web browsers and the people who use them.
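For readers who want to check what their own machine is doing, the per-user http/https association that all of this fighting revolves around can be read from the Windows registry. The snippet below is a minimal diagnostic sketch, not anything from Google's or Microsoft's code; it assumes the standard UserChoice key and the commonly seen ProgId names (ChromeHTML for Chrome, MSEdgeHTM for Edge, FirefoxURL for Firefox).

```python
# Minimal diagnostic sketch: read which browser Windows currently treats as
# the default handler for http/https links. Assumes the standard per-user
# UserChoice registry key; the ProgId names below are the commonly seen
# values, not an exhaustive list.
import winreg

ASSOC_KEY = r"Software\Microsoft\Windows\Shell\Associations\UrlAssociations\{}\UserChoice"
FRIENDLY = {
    "ChromeHTML": "Google Chrome",
    "MSEdgeHTM": "Microsoft Edge",
    "FirefoxURL": "Mozilla Firefox",
}

def default_handler(protocol: str) -> str:
    """Return the ProgId registered as the current user's default for a protocol."""
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, ASSOC_KEY.format(protocol)) as key:
        prog_id, _ = winreg.QueryValueEx(key, "ProgId")
        return prog_id

if __name__ == "__main__":
    for proto in ("http", "https"):
        prog_id = default_handler(proto)
        print(f"{proto}: {prog_id} ({FRIENDLY.get(prog_id, 'unknown browser')})")
```

Reading the value is the easy part; writing it programmatically is deliberately hard, because Windows validates a hash stored alongside the ProgId, which is part of why a working in-browser one-click button matters so much to users.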
Chrome is, it bears repeating, the world’s preferred internet browser, with a reported 66% market share. Earlier this year, Microsoft started inserting full-size ads into the search results if you looked up Google Chrome, saying “There’s no need to change your default browser.” Microsoft went as far as sticking ads for Edge on the Chrome download website itself, stating “Microsoft Edge uses the same technology as Chrome, with the added trust of Microsoft.” There were other bizarre messages to would-be Chrome users as well, with some suggesting Chrome is worse for online shopping, or referring to Google’s browser as “so 2008.” For more, please visit OUR FORUM.

The Hyena code is able to handle amounts of data that make GPT-style technology run out of memory and fail.

For all the fervor over the chatbot AI program known as ChatGPT, from OpenAI, and its successor technology, GPT-4, the programs are, at the end of the day, just software applications. And like all applications, they have technical limitations that can make their performance sub-optimal. In a paper published in March, artificial intelligence (AI) scientists at Stanford University and Canada's MILA Institute for AI proposed a technology that could be far more efficient than GPT-4 -- or anything like it -- at gobbling vast amounts of data and transforming it into an answer. Known as Hyena, the technology is able to achieve equivalent accuracy on benchmark tests, such as question answering, while using a fraction of the computing power. In some instances, the Hyena code is able to handle amounts of text that make GPT-style technology simply run out of memory and fail.

"Our promising results at the sub-billion parameter scale suggest that attention may not be all we need," write the authors. That remark refers to the title of a landmark 2017 AI paper, 'Attention Is All You Need'. In that paper, Google scientist Ashish Vaswani and colleagues introduced the world to Google's Transformer AI program. The Transformer became the basis for every one of the recent large language models.

But the Transformer has a big flaw. It uses something called "attention," where the computer program takes the information in one group of symbols, such as words, and moves that information to a new group of symbols, such as the answer you see from ChatGPT, which is the output. That attention operation -- the essential tool of all large language programs, including ChatGPT and GPT-4 -- has "quadratic" computational complexity, in the computer-science sense of time complexity. That complexity means the amount of time it takes for ChatGPT to produce an answer increases as the square of the amount of data it is fed as input. At some point, if there is too much data -- too many words in the prompt, or too many strings of conversations over hours and hours of chatting with the program -- then either the program gets bogged down providing an answer, or it must be given more and more GPU chips to run faster and faster, leading to a surge in computing requirements.

In the new paper, 'Hyena Hierarchy: Towards Larger Convolutional Language Models', posted on the arXiv pre-print server, lead author Michael Poli of Stanford and his colleagues propose to replace the Transformer's attention function with something sub-quadratic, namely Hyena. The authors don't explain the name, but one can imagine several reasons for a "Hyena" program. Hyenas live in Africa and can hunt for miles and miles. In a sense, a very powerful language model could be like a hyena, picking over carrion for miles and miles to find something useful. But the authors are really concerned with "hierarchy", as the title suggests: families of hyenas have a strict hierarchy by which members of a local hyena clan have varying levels of rank that establish dominance. In some analogous fashion, the Hyena program applies a bunch of very simple operations, as you'll see, over and over again, so that they combine to form a kind of hierarchy of data processing. It's that combination element that gives the program its Hyena name. A toy sketch of the quadratic-versus-sub-quadratic difference follows below; more in-depth reading can be found on OUR FORUM.
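To make the complexity argument concrete, here is a toy numpy sketch (an illustration, not the paper's implementation): standard attention has to materialize an n-by-n score matrix, so time and memory grow with the square of the sequence length, while a long convolution evaluated with the FFT, the sort of sub-quadratic building block Hyena is built around, scales roughly as n log n.

```python
# Toy illustration (not the authors' code) of why attention is quadratic in
# sequence length while an FFT-based long convolution, the kind of
# sub-quadratic primitive the Hyena paper builds on, is roughly n log n.
import numpy as np

def attention(q, k, v):
    """Standard scaled dot-product attention: materializes an n-by-n matrix."""
    n, d = q.shape
    scores = q @ k.T / np.sqrt(d)            # shape (n, n): the quadratic cost
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                        # shape (n, d)

def long_convolution(x, kernel):
    """Per-channel causal convolution with a kernel as long as the input,
    computed via FFT in O(n log n) instead of a sliding window in O(n^2)."""
    n = x.shape[0]
    fft_len = 2 * n                           # zero-pad so the FFT product gives a linear convolution
    X = np.fft.rfft(x, n=fft_len, axis=0)
    K = np.fft.rfft(kernel, n=fft_len, axis=0)
    return np.fft.irfft(X * K, n=fft_len, axis=0)[:n]   # keep the causal outputs

if __name__ == "__main__":
    n, d = 4096, 64
    rng = np.random.default_rng(0)
    q = k = v = rng.standard_normal((n, d))
    out_attn = attention(q, k, v)             # needs an n*n score matrix in memory
    out_conv = long_convolution(v, rng.standard_normal((n, d)))
    print(out_attn.shape, out_conv.shape)     # both (4096, 64)
```

Doubling the sequence length quadruples the attention matrix but only slightly more than doubles the FFT work; that gap is what the Hyena authors exploit, with the full model layering implicitly parameterized filters and multiplicative gating on top of this kind of primitive.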

Microsoft’s Digital Crimes Unit (DCU), cybersecurity software company Fortra™ and Health Information Sharing and Analysis Center (Health-ISAC) are taking technical and legal action to disrupt cracked, legacy copies of Cobalt Strike and abused Microsoft software, which have been used by cybercriminals to distribute malware, including ransomware. This is a change in the way DCU has worked in the past – the scope is greater, and the operation is more complex. Instead of disrupting the command and control of a malware family, this time, we are working with Fortra to remove illegal, legacy copies of Cobalt Strike so they can no longer be used by cybercriminals. We will need to be persistent as we work to take down the cracked, legacy copies of Cobalt Strike hosted around the world. This is an important action by Fortra to protect the legitimate use of its security tools. Microsoft is similarly committed to the legitimate use of its products and services. We also believe that Fortra choosing to partner with us for this action is recognition of DCU’s work fighting cybercrime over the last decade. Together, we are committed to going after the cybercriminals’ illegal distribution methods.

Cobalt Strike is a legitimate and popular post-exploitation tool used for adversary simulation, provided by Fortra. Sometimes, older versions of the software have been abused and altered by criminals. These illegal copies are referred to as “cracked” and have been used to launch destructive attacks, such as those against the Government of Costa Rica and the Irish Health Service Executive. Microsoft software development kits and APIs are abused as part of the coding of the malware as well as the criminal malware distribution infrastructure to target and mislead victims. The ransomware families associated with or deployed by cracked copies of Cobalt Strike have been linked to more than 68 ransomware attacks impacting healthcare organizations in more than 19 countries around the world. These attacks have cost hospital systems millions of dollars in recovery and repair costs, plus interruptions to critical patient care services including delayed diagnostic, imaging, and laboratory results, canceled medical procedures, and delays in delivery of chemotherapy treatments, just to name a few.

Fortra and Microsoft’s investigation efforts included detection, analysis, telemetry, and reverse engineering, with additional data and insights to strengthen our legal case from a global network of partners, including Health-ISAC, the Fortra Cyber Intelligence Team, and the Microsoft Threat Intelligence team. Our action focuses solely on disrupting cracked, legacy copies of Cobalt Strike and compromised Microsoft software. Microsoft is also expanding a legal method used successfully to disrupt malware and nation-state operations to target the abuse of security tools used by a broad spectrum of cybercriminals. Disrupting cracked legacy copies of Cobalt Strike will significantly hinder the monetization of these illegal copies and slow their use in cyberattacks, forcing criminals to re-evaluate and change their tactics. Today’s action also includes copyright claims against the malicious use of Microsoft’s and Fortra’s software code, which is altered and abused for harm. More detailed information can be found on OUR FORUM.