Containers are meant to be immutable: once an image is built, it is what it is, and every container instance spawned from it will be identical. Because a container is defined as code, its contents, intent, and dependencies are explicit. Used carefully, containers can therefore help reduce supply chain risk. These benefits have not gone unnoticed by attackers, however. A number of threat actors have started to leverage containers to deploy malicious payloads and even to scale up their own operations. For the Sysdig 2022 Cloud-Native Threat Report, the Sysdig Threat Research Team (Sysdig TRT) investigated what is really lurking in publicly available containers.

Docker Hub is the most popular free public-facing container registry. It houses millions of pre-made container images in convenient, self-contained packages with all required software installed and configured. Public registries also host official content and images signed by Verified Publishers, which adds some level of trust that they are not malicious and can be used safely. While public registries save developers time, a careless user can still pull a container with malicious content; with so many images to choose from, it is easy to pick the wrong one.

Threat actors also appreciate how much friction this technology removes from developer workflows, and they count on the fact that many developers never examine what exactly is being installed. According to the Sysdig threat report, Docker Hub is being used by malicious actors to deliver malware, backdoors, and other unwelcome surprises to users and companies. One specific practice to watch out for is typosquatting: an image is disguised as a legitimate one while hiding something nefarious within its layers. Its name can be just a letter off the real thing, or the attacker may rely on a developer carelessly copying instructions that contain the bad path.
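The two practices the report highlights, near-miss image names and credentials baked into layers, can be sketched in code. The following Python snippet is an illustrative sketch only, not Sysdig's methodology: the allow-list of official image names, the similarity threshold, and the credential patterns are all assumptions chosen for demonstration. A real check would consult the registry's verified-publisher metadata and a proper secret scanner.

```python
import re
from difflib import SequenceMatcher

# Illustrative allow-list of popular official images; assumption for this
# sketch -- a real tool would query the registry's verified-publisher data.
OFFICIAL_IMAGES = {"drupal", "joomla", "nginx", "postgres", "redis"}

def typosquat_suspects(name, threshold=0.8):
    """Flag image names close to, but not exactly, a known official name."""
    hits = []
    for official in OFFICIAL_IMAGES:
        ratio = SequenceMatcher(None, name, official).ratio()
        if name != official and ratio >= threshold:
            hits.append(official)
    return hits

# Rough patterns for the kind of embedded credentials the report describes
# (AWS access key IDs, private key headers); illustrative, not exhaustive.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
    re.compile(r"-----BEGIN (?:RSA |OPENSSH )?PRIVATE KEY-----"),
]

def scan_for_secrets(text):
    """Return credential-like strings found in extracted layer text."""
    return [m.group(0) for p in SECRET_PATTERNS for m in p.finditer(text)]

if __name__ == "__main__":
    # "drupa1" is one character off "drupal", the typosquatting pattern
    # described above.
    print(typosquat_suspects("drupa1"))
    print(scan_for_secrets("ENV AWS_KEY=AKIAABCDEFGHIJKLMNOP"))
```

In practice, pinning images by digest (`image@sha256:...`) and scanning layers before deployment are the defensive counterparts to these two attack patterns.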
Sysdig TRT found images shared by suspicious users under names crafted to look like popular open-source software in order to trick users. For example, the names of popular packages like Drupal and Joomla have been used to disguise malicious payloads. Deploying these images opens the doors of our environment to attackers, letting them pursue their goals or move laterally toward business-critical assets.

The Sysdig TRT analyzed more than 250,000 Linux images over several months. During the research, 1,777 images were found to contain malicious IPs or domains and embedded credentials. On closer inspection, cryptomining images are the most common malicious image type. This is expected, because mining cryptocurrency on someone else's compute resources is the most prevalent type of attack targeting cloud and container environments today. Embedded secrets in Docker images are the second most prevalent technique. Here, attackers insert secrets into an image and use them to gain a foothold in your environment before attempting to move laterally. For example, an embedded SSH key could allow simple remote access, while embedded AWS keys could grant the attacker cloud capabilities. This highlights that secrets management remains a battle we have yet to win. To learn more visit OUR FORUM.

The birth of the Internet in the 1990s and its subsequent expansion into every aspect of our lives began a digital revolution that has since refused to slow down. With it has come unimagined functionality, equipping us with instant access to information and communication. Those born before the Digital Enlightenment could never have imagined the power to cast aside unanswered questions with a mere "Google". Gazing across the digital expanse with our infantile stare, we failed to notice another set of eyes looking back at us.
Those eyes belong to the world's largest companies -- Big Tech giants like Facebook and Google -- who continuously monitor our movements across the Internet. Every time we open a website or app, our journeys are tracked and hunted by a pack of algorithms designed to determine our interests -- the products, ideas, and brands we may feel positively towards. This data is coveted by advertisers; it is the elixir that enhances their powers of persuasion, consumer targeting, and, inevitably, sales. This insatiable demand has propelled Big Tech's rampant profiteering from and extraction of consumer data. Stunned by the pace of digital expansion, consumers have failed to recognize how our data -- of which we are the sole producers -- is sold off to help influence our future decisions and expenditure.

Although there have been some advances, such as the withdrawal of third-party cookies in some applications and regions, these have only come about through societal pressure, and further change will not come until that pressure intensifies. We may have been the children of the Digital Age, but we must recognize that the Internet is no longer in its infancy, and neither are we. We must re-evaluate our perceptions with the experience of more than two decades behind us, and consider how we fooled ourselves into believing that our data holds no personal value and that the sharing of our digital diaries is an inescapable part of the Internet. But what precisely is that value? To give an estimate, advertisers spend approximately £27 billion a year on digital marketing in the UK alone, most of which goes straight to Big Tech. This equates to around £80 per household per month. This staggering valuation leaves little doubt as to why our data has been so exploited -- it is a precious commodity, yet one in which its creators hold no share of the reward. Advertisers are partially responsible for encouraging such pervasive and unjust looting of consumer data.
Ultimately, it is the enormous paycheck they have handed Twitter, Facebook and co. that has encouraged this activity, and advertisers must play their part in changing it. But first, consumers must embolden themselves by resisting this digital hegemony. We must demand remuneration for our data by moving en masse to direct-to-consumer marketing platforms that return cash rewards in exchange for data. Advertisers must also facilitate this transition; with direct access to target consumers through such platforms, they have a unique opportunity to change their mission statement from selling to selling and rewarding, offering consumers exclusive benefits and cash rewards for their data. Such platforms let consumers decide how much data access they wish to share, with rewards varying accordingly. For instance, a consumer may choose to provide copies of their shopping receipts while remaining anonymous, for an entry-level cash reward, while the most active consumers help develop the platform's feedback loop and in exchange receive access to higher-value cash rewards. Within this setup exists an intrinsic market valuation of consumer data that commissions its creators on a quid pro quo basis. Follow this thread on OUR FORUM.

Some scholars of AI warn that present technologies may never add up to "true" or "human" intelligence, but much of the world may not care. The British mathematician Alan Turing wrote in 1950, "I propose to consider the question, 'Can machines think?'" His inquiry framed the discussion for decades of artificial intelligence research. For a couple of generations of scientists contemplating AI, the question of whether "true" or "human" intelligence could be achieved was always an important part of the work. AI may now be at a turning point where such questions matter less and less to most people.
The emergence of so-called industrial AI in recent years may signal an end to such lofty preoccupations. AI has more capability today than at any time in the 66 years since the term was coined by computer scientist John McCarthy, and the industrialization of AI is shifting the focus from intelligence to achievement.

Those achievements are remarkable. They include AlphaFold, a protein-folding prediction system from Google's DeepMind unit, and GPT-3, the text generation program from the startup OpenAI. Both hold tremendous industrial promise irrespective of whether anyone calls them intelligent. Among other things, AlphaFold holds the promise of designing novel forms of proteins, a prospect that has electrified the biology community. GPT-3 is rapidly finding its place as a system that can automate business tasks, such as responding to employee or customer queries in writing without human intervention. That practical success, driven by a prolific semiconductor industry led by chipmaker Nvidia, seems poised to outstrip the old preoccupation with intelligence. In no corner of industrial AI does anyone seem to care whether such programs will achieve intelligence; in the face of practical achievements that demonstrate obvious worth, the old question, "But is it intelligent?", ceases to matter. As computer scientist Hector Levesque has written of the science of AI versus the technology, "Unfortunately, it is the technology of AI that gets all the attention."

To be sure, the question of genuine intelligence still matters to a handful of thinkers. In the past month, ZDNET has interviewed two prominent scholars who are very much concerned with it. Yann LeCun, chief AI scientist at Facebook owner Meta Platforms, spoke at length with ZDNET about a paper he put out this summer as a kind of think piece on where AI needs to go.
LeCun expressed concern that the dominant work of deep learning today, if it simply pursues its present course, will not achieve what he refers to as "true" intelligence, which includes things such as the ability of a computer system to plan a course of action using common sense. To learn more please visit OUR FORUM.


