Behind the Mask: Creating and Maintaining Sock Puppet Accounts for Online Research

When conducting online research or gathering open-source intelligence (OSINT), it is often necessary to observe or interact with digital spaces without revealing your true identity. This is where sock puppet accounts come into play. A sock puppet is a fictitious online identity created to access information, join closed groups, monitor activity, or engage with targets while protecting the researcher’s real identity and intent.

Used properly, sock puppets are an essential part of an investigator’s toolkit. However, their creation and use come with both ethical and legal responsibilities. Misuse can lead to legal consequences, reputational damage, or compromised investigations. Practitioners must always follow legal guidance and act within clearly defined ethical boundaries.

In this blog, we will explore how to plan, create, and maintain effective sock puppet accounts for OSINT purposes. We will discuss key operational security (OPSEC) measures, common pitfalls to avoid, and strategies for maintaining a convincing online persona over time. Whether you are new to this practice or looking to refine your approach, this guide will help you lay a solid foundation for safe and responsible online research.

What Is a Sock Puppet Account?

A sock puppet account is a false or alternate online identity used to conceal the true identity of the user behind it. In the context of online investigations and intelligence gathering, sock puppets allow researchers to access and monitor digital spaces without drawing attention to their real-world affiliations or investigative purpose.

These accounts are beneficial in OSINT investigations where anonymity is critical. They may be used to:

  • Access private or semi-restricted forums and groups
  • Observe conversations on social media without alerting subjects
  • Collect threat intelligence from Dark Web marketplaces or closed communities
  • Engage with individuals or groups in a way that does not compromise operational security

While sock puppets can be powerful tools, their use must always be underpinned by legal and ethical awareness. Investigators should never use false identities to entrap, manipulate, or harass individuals. The goal is passive information gathering, not interference or provocation. Moreover, laws governing online impersonation, data protection, and computer misuse vary between jurisdictions, and it is the investigator’s responsibility to ensure compliance.

Wherever possible, work within organisational policies, maintain internal approval processes for sensitive research, and document all actions for accountability. Ethical OSINT hinges not only on what can be done, but on what should be done.

Planning Your Sock Puppet Strategy

Before creating a sock puppet account, it is essential to define a clear objective. What do you need the account to do? Your goal might be to passively observe a forum, monitor a social media group, or engage with a specific individual or community. The purpose of the account will shape every decision that follows, from the choice of platform to the construction of your online persona.

Understanding your target environment is a crucial part of this planning stage. Different platforms have different norms, verification processes, and levels of scrutiny. A persona that appears credible on Reddit might not be believable on LinkedIn. Consider regional factors as well: language, time zone, and cultural references all contribute to the authenticity of an account. An inconsistency in these details can quickly arouse suspicion.

With your objective and environment defined, you can begin to craft a suitable cover story. This should include a basic biography, a plausible location, interests relevant to the communities you plan to interact with, and a consistent tone of voice. Keep the persona simple, but detailed enough to withstand casual scrutiny. Avoid unnecessary complexity, which can increase the risk of contradictions or mistakes.

A well-planned sock puppet starts long before the account is created. By aligning your objectives with your operational context and building a realistic backstory, you lay the groundwork for a credible and sustainable online identity.

Creating the Sock Puppet Account

Once your planning is complete, the next step is to create the sock puppet account itself. This process involves selecting the right platform, crafting a believable identity, and ensuring that your setup maintains strong operational security from the outset.

Choosing the Right Platform

Select your platform based on the objective of the investigation. If you need to observe professional activity or gather company intelligence, LinkedIn might be appropriate. For community discussions, Reddit or Discord may be more useful. For threat intelligence gathering, forums or encrypted messaging apps could be more suitable. Each platform has its own registration process, verification requirements, and user expectations, all of which must be considered.

Crafting a Believable Identity

A convincing sock puppet needs to pass casual inspection. Start with a realistic username and a dedicated email address that fits your persona. Avoid using anything that resembles your real name or any identifiers linked to your organisation.

  • Profile photo: Use AI-generated images or copyright-free alternatives. Tools like ThisPersonDoesNotExist or Generated Photos can be helpful, but check for anomalies that might raise suspicion.
  • Biography and interests: Write a brief, plausible bio that fits the persona and platform. Add relevant interests or affiliations to make the account appear active and authentic.
  • Posting behaviour: Mirror the tone, grammar, and posting frequency typical for the platform and user type. If your persona is a 30-year-old from Manchester, for example, ensure the language and topics reflect that identity.
  • Language consistency: Stick to one language and dialect throughout. Switching between different styles or regions can be a clear indicator of inauthenticity.

Acquiring a Clean IP

To prevent your real identity or location from being linked to the sock puppet, use a clean and separate IP address. A reputable VPN or proxy service is essential, and in some cases, a dedicated virtual machine or separate device should be used. Avoid logging in to real accounts or using your usual browser within the same environment, as cross-contamination can compromise the entire operation.
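As a simple safeguard, some practitioners script a pre-flight check that confirms the current public IP is not a known personal address before any puppet activity begins. The sketch below is a minimal illustration, not a vetted tool: the blocked address list is a hypothetical placeholder, and the lookup uses ipify, one of several public IP echo services.

```python
import urllib.request

# Hypothetical placeholder list -- replace with your own home/office egress IPs.
KNOWN_PERSONAL_IPS = {"203.0.113.10", "198.51.100.7"}

def ip_is_clean(current_ip: str, blocked_ips: set = KNOWN_PERSONAL_IPS) -> bool:
    """Return True if the given public IP is not a known personal address."""
    return current_ip not in blocked_ips

def current_public_ip() -> str:
    """Fetch the current public IP from an echo service (ipify is one option)."""
    with urllib.request.urlopen("https://api.ipify.org") as resp:
        return resp.read().decode().strip()
```

Running `ip_is_clean(current_public_ip())` before logging in, and aborting on `False`, turns "check your VPN" from a habit into an enforced step.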

Account creation is not just about filling in a form. Every detail, from your profile picture to your browser setup, contributes to the believability and security of the puppet. Take your time, document each step, and treat the identity as if it were real.

OPSEC Considerations

Operational Security (OPSEC) is critical to the effective use of sock puppet accounts. Without proper precautions, it is easy to leave digital traces that link back to your real identity or organisation. To maintain credibility and protect yourself, you must build strong habits around device use, network hygiene, and identity compartmentalisation.

Device and Network Isolation

Always use a dedicated environment for sock puppet activity. This might be a virtual machine (VM), a separate user profile, or an entirely distinct physical device. The key is to ensure that no personal data, saved credentials, or browsing habits from your real identity carry over into the puppet’s digital footprint. Similarly, connect via a trusted VPN or proxy with a location appropriate to the persona. Never use your home or work IP address when managing sock puppets.

Avoiding Contamination

Cross-contamination with real accounts is one of the most common OPSEC failures. Use a clean browser instance with no saved cookies, autofill data, or extensions that may reveal identifying information. Consider using privacy-focused browsers or containerised browsing sessions to isolate activity. Disable features like browser synchronisation or automatic logins, which could leak personal credentials.

Using Burner Phones and Anonymous Email

When platforms require phone numbers for verification, use a burner device or a secure, anonymised SMS service, provided it complies with legal and policy requirements. Similarly, choose privacy-conscious email providers such as ProtonMail or Tutanota. The email address should align with the puppet’s identity and not reference any real-world details.

Password and Account Recovery Separation

Treat sock puppets as standalone entities. Use unique, complex passwords for each account and manage them using a secure password manager. Keep recovery options consistent with the identity—never link your real email or phone number. If using recovery questions, invent answers that match the puppet’s backstory and document them securely.
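A password manager will usually generate these for you, but the principle is easy to sketch: each puppet gets its own high-entropy password drawn from a cryptographically secure source, never derived from the persona or reused elsewhere. A minimal example using Python's standard `secrets` module:

```python
import secrets
import string

def generate_password(length: int = 24) -> str:
    """Generate a unique, high-entropy password for a single puppet account."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

The point of using `secrets` rather than `random` is that the output is suitable for credentials; the module draws from the operating system's CSPRNG.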

Logging and Documentation

Maintain secure records of your sock puppets, including account details, access credentials, personas, activity logs, and creation dates. This helps track usage over time, identify potential compromises, and safely retire or rotate identities when needed. Store this information in an encrypted format or within a secure password management tool.
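The record for each puppet can be as simple as a structured entry that is serialised and then stored encrypted. The sketch below shows one possible shape; the field names and status values are illustrative choices, not a standard, and the example alias in the usage is invented.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class PuppetRecord:
    """One sock puppet's record; store the serialised form encrypted at rest."""
    alias: str
    platform: str
    created: str                    # ISO date, e.g. "2024-01-15"
    persona_summary: str
    status: str = "active"          # active / dormant / retired
    activity_log: list = field(default_factory=list)

    def log(self, entry: str) -> None:
        self.activity_log.append(entry)

    def to_json(self) -> str:
        return json.dumps(asdict(self))
```

Keeping the activity log in the same record makes it straightforward to review a puppet's history before deciding whether to rotate or retire it.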

Sock puppet OPSEC is not about one-time precautions—it requires ongoing discipline. A single mistake can expose your identity or compromise the entire investigation. Take a cautious, methodical approach and revisit your OPSEC practices regularly.

Maintaining Sock Puppets Over Time

Creating a sock puppet is only the beginning. To remain credible and useful over time, the account must appear active, consistent, and authentic. Dormant or obviously artificial profiles are more likely to be flagged by platforms or ignored by the communities you are trying to observe. Maintaining a sock puppet means simulating the behaviour of a genuine user, without attracting unnecessary attention.

Simulating Real Behaviour

Regular interaction is key to building a believable presence. Depending on the platform, this might include:

  • Liking or sharing posts
  • Following relevant accounts or joining groups
  • Commenting or replying in a manner consistent with the persona

These interactions should be contextually appropriate and contribute to the puppet’s credibility. For example, a user who claims to be interested in cybersecurity might follow industry influencers, comment on relevant articles, or share news stories.

Scheduling Realistic Activity Patterns

Sock puppets should reflect normal online behaviour. Consider the timezone and daily schedule of the persona. If your puppet claims to be based in Berlin, it would be unusual for them to post at 3 a.m. local time. Avoid excessive or erratic posting, which can appear automated or suspicious. A light but consistent activity pattern over time is more convincing than bursts of high engagement.
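One way to keep this discipline is to plan activity windows in advance, anchored to the persona's timezone. The following is a rough sketch under simple assumptions (waking hours of 08:00 to 22:00, a random handful of posts per day); real patterns would want more nuance, such as weekday/weekend variation.

```python
import random
from datetime import datetime, timedelta, time
from zoneinfo import ZoneInfo

def plan_daily_posts(tz_name: str, start: datetime, days: int = 7,
                     max_posts_per_day: int = 2) -> list:
    """Draw 0..max posts per day, each within plausible waking hours
    (08:00-22:00) in the persona's own timezone."""
    tz = ZoneInfo(tz_name)
    schedule = []
    for d in range(days):
        day = (start + timedelta(days=d)).date()
        for _ in range(random.randint(0, max_posts_per_day)):
            minute = random.randint(0, 14 * 60 - 1)   # within 14 waking hours
            schedule.append(datetime.combine(day, time(8, 0), tz)
                            + timedelta(minutes=minute))
    return sorted(schedule)
```

A Berlin-based persona, for instance, would be planned against `"Europe/Berlin"`, so no post ever lands at 3 a.m. local time.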

Avoiding Automation Red Flags

Some platforms are aggressive in detecting and removing accounts that behave like bots. Avoid scripted or repeated actions, especially immediately after account creation. Do not mass-follow users or copy-paste identical comments across threads. Behave like a real person—slow, deliberate, and occasionally imperfect.

Regularly Updating Profile Content

Real users update their profiles from time to time. Refresh your puppet’s bio, add a new interest, or change a profile picture occasionally to reflect life events or shifting interests. These subtle changes reinforce the illusion of an active, evolving online identity.

Ultimately, a successful sock puppet account blends in. It should quietly accumulate a digital footprint that supports its cover story and gives you access to the information you need, without ever drawing attention to itself.

Risks, Red Flags, and Account Burnout

Even well-crafted sock puppets carry risk. Platforms continue to improve their ability to detect suspicious behaviour, and users themselves may flag accounts that appear inauthentic. Understanding common warning signs and knowing when to retire or rotate an identity is key to maintaining long-term operational capability.

Common Ways Sock Puppets Get Flagged or Banned

Sock puppets may be suspended or deleted for a range of reasons, including:

  • Logging in from multiple geographic locations in a short space of time
  • Sudden spikes in activity (e.g. mass liking, following, or posting)
  • Use of stock or AI-generated profile images that resemble known fake accounts
  • Repeated use of the same contact details, browser fingerprint, or device setup
  • Lack of meaningful interaction or organic growth over time

Even a single policy violation can draw scrutiny, particularly on mainstream social media platforms where automated systems are quick to act.

Avoiding Repetitive Patterns Across Accounts

If you operate multiple sock puppets, ensure that each has a unique and independent identity. Reusing the same backstory, writing style, or image sources across accounts can make them easier to detect and link together. Separate devices, email addresses, and behavioural traits help to isolate each puppet and reduce the risk of a cascading compromise.

When to Retire a Puppet and How to Replace It Safely

No sock puppet should be considered permanent. If an account is inactive, becomes untrustworthy, or begins attracting unwanted attention, it is often safer to retire it than to try to recover its credibility. Before deletion, remove any content that could be linked to other operations. Keep a log of why it was retired, and plan how a replacement will fill the same role with improved safeguards.

Having Backup Identities Ready

To ensure continuity, it is good practice to maintain a small number of standby identities. This is sometimes referred to as a “puppet farm”. These can be developed gradually in the background, gaining basic credibility over time, so they are ready to use when needed. In some cases, it may also be appropriate to establish layered personas, where one puppet supports or interacts with another to enhance realism.

Maintaining sock puppets is an operational task that requires regular attention. The digital landscape shifts constantly, and even the most convincing puppet may eventually outlive its usefulness. Being prepared to adapt is vital.

Tools and Resources

Successful sock puppet operations depend not only on planning and technique, but also on using the right tools to support anonymity, security, and realism. The following categories highlight essential resources for anyone managing online personas, with emphasis on privacy-focused solutions.

VPNs and Secure Browsers

To prevent IP address leaks or location-based flags, always connect through a reliable virtual private network (VPN). Services such as Mullvad or Proton VPN offer privacy-focused features without logging user activity. In addition, using secure or privacy-hardened browsers, such as Firefox with privacy containers, Brave, or Tor Browser, can help prevent tracking and cross-contamination between real and sock puppet identities.

For advanced operations, consider launching sock puppets within secure environments such as Tails OS or a hardened virtual machine to reduce the digital footprint even further.

Image Generation Tools

Choosing a believable profile image is vital. AI-generated photo tools like ThisPersonDoesNotExist or Generated.Photos create unique images that are not traceable to real people, reducing the risk of impersonation claims. However, these images should be reviewed carefully for visual anomalies that might suggest they are artificial. Alternatively, use licence-free photo repositories where permitted.

Secure Email Services

Every puppet should have its own email address from a secure, privacy-conscious provider. Services such as ProtonMail, Tutanota, or Mailfence are widely used for this purpose. Avoid mainstream providers that require phone verification or link accounts to existing profiles. Where possible, create the email account using the same VPN and device you plan to use for the puppet itself.

Password and Identity Managers

Managing multiple identities requires strict separation and secure record-keeping. Tools like Bitwarden, KeePassXC, or 1Password can be used to store login details, backstory notes, recovery options, and activity logs in an encrypted format. Avoid reusing passwords or security questions across accounts, and clearly label each identity to avoid mistakes.

Burner Phone and SMS Services

Some platforms require phone number verification. Where legally permitted, use burner phones or temporary SMS services to meet this requirement. Options include physical SIMs with disposable devices or online services such as MySudo or Silent Link, though reliability and legality vary by region. Never use your personal or work number under any circumstances.

A well-prepared toolkit makes managing sock puppets more secure, efficient, and scalable. Review your tools regularly, keep backups where needed, and ensure you remain up to date with changes in platform behaviour or verification processes.

Final Thoughts and Best Practices

Sock puppet accounts are powerful tools for legitimate online research. When used responsibly, they enable investigators, analysts, and researchers to access vital information, monitor digital threats, and engage with online communities without exposing their true identity. However, with this capability comes a significant ethical and operational responsibility.

These accounts should never be used to deceive, manipulate, or harm individuals. Their purpose is to observe, gather intelligence, and support investigations that serve the public interest or protect organisations from threats. Operating within legal boundaries and upholding professional standards is essential.

The digital landscape is constantly changing. Platforms evolve, detection methods improve, and user behaviour shifts. This means sock puppet strategies must also be regularly reviewed and refined. What works today may not be effective tomorrow, so ongoing learning and adaptation are key to maintaining both access and security.

Finally, any organisation or individual engaging in this kind of work should develop their own standard operating procedures (SOPs). These should include clear guidelines for planning, creation, use, and retirement of sock puppet accounts. Testing identities in controlled environments before deploying them for real investigations can also help identify weaknesses before they become liabilities.

Used with care, discipline, and a strong ethical framework, sock puppets can provide valuable insight while keeping investigators safe and discreet.

Red flag photo by Paolo Bendandi and sock puppet photo by Natalie Kinnear.


Beyond the Dark Web: Where Threat Actors Operate

The “dark web” has become something of a buzzword in recent years, often portrayed as the hidden underworld of the internet where cybercriminals operate in complete anonymity. For many, it conjures images of secret marketplaces, illicit data dumps, and hard-to-trace communications — all out of reach from the average internet user.

Because of this perception, it is a common misconception that all threat actor activity takes place exclusively on the dark web. While it certainly plays a role in enabling criminal operations, the truth is far more complex. Today’s threat actors are increasingly making use of platforms that are readily available, user-friendly, and in many cases, completely legal.

Much of their coordination, recruitment, and even data leakage now takes place in plain sight — across encrypted messaging apps, public forums, and mainstream social media platforms. Understanding where these actors truly operate is critical for any organisation looking to stay ahead of the threat landscape.

The Evolving Landscape of Threat Actor Platforms

The way threat actors communicate and coordinate has shifted significantly in recent years. Once heavily reliant on hidden services accessed through the Tor network, many cybercriminals are now embracing more accessible, mainstream platforms to conduct their activities.

This change has been driven by several key factors. One of the most prominent is the increased pressure from law enforcement. High-profile takedowns of dark web marketplaces such as AlphaBay and Hydra have disrupted long-standing criminal ecosystems, forcing actors to reconsider where and how they operate.

At the same time, modern platforms offer features that make them attractive to malicious users. Encrypted messaging apps provide a level of privacy that rivals, and in some cases exceeds, what is available on the dark web. Public forums and chat platforms are easy to access, require minimal technical knowledge, and can reach large audiences quickly.

For cybercriminals, scale and convenience matter. Hosting content on widely used services allows them to cast a broader net, whether they’re distributing stolen data, selling malware, or recruiting new affiliates. The lines between the open internet and covert criminal spaces are increasingly blurred, making it more difficult for defenders to track activity using traditional dark web monitoring alone.

Alternative Threat Actor Channels

While the dark web still plays a role in cybercriminal operations, many threat actors now prefer more accessible and user-friendly platforms. These alternatives offer speed, scalability, and often a surprising degree of anonymity — all without the need for specialised browsers or infrastructure. Below are some of the most commonly used non-dark web channels.

Telegram

Telegram has become a go-to platform for cybercriminals. With its end-to-end encryption, support for large group chats, and the ability to create private or public channels, it offers the ideal environment for discreet coordination at scale.

Threat actors use Telegram to:

  • Leak stolen data and documents
  • Advertise and sell credentials or access to compromised systems
  • Host scam pages or phishing kits
  • Organise affiliate networks or ransomware-as-a-service (RaaS) operations

Its minimal moderation and vast global user base make it a particularly attractive choice for cybercrime groups.

Discord and Other Chat Platforms

Originally designed for online gaming communities, Discord has evolved into a full-featured communication tool with support for text, voice, and private servers. Unfortunately, these same features have also made it a popular haven for fraudsters and cybercriminals.

Threat actors use Discord to:

  • Create closed communities centred around fraud, hacking tools, or data leaks
  • Share resources in “plug” communities — often focused on carding, identity theft, or botnet services
  • Coordinate attacks or distribute malware through seemingly innocuous links

Other platforms such as Tox, Matrix, and IRC-based services are also used, albeit with smaller user bases.

Surface Web Forums

Despite the risks of being in plain sight, many cybercrime forums continue to operate openly on the surface web. These forums are often language-specific or focused on particular sectors, such as financial fraud, social engineering, or credential stuffing.

They are typically used to:

  • Trade tools, tactics, and stolen data
  • Post tutorials or share exploit code
  • Vet and recruit participants for more private activities

Some forums operate with limited moderation or are hosted in jurisdictions with lax enforcement, allowing them to persist despite ongoing attention from security professionals.

Social Media (Twitter/X, Facebook, etc.)

Social media platforms remain surprisingly popular for certain types of threat actor activity. On services like Twitter/X, Facebook, and even LinkedIn, cybercriminals can quickly build audiences, push propaganda, or leak stolen information to make a statement.

Common uses include:

  • Publicly claiming responsibility for attacks or breaches
  • Promoting data leaks to gain notoriety or apply pressure to victims
  • Running influence campaigns or disinformation efforts
  • Recruiting low-level actors or collaborators

While these platforms generally respond quickly to takedown requests, the speed at which content can be published and spread makes them a persistent threat vector.

Paste Sites and Temporary File Hosts

Pastebin-style sites and ephemeral file hosting services continue to be used by cybercriminals to share content without needing to manage infrastructure. These services are often exploited to distribute:

  • Malware payloads
  • Indicators of compromise (IOCs)
  • Stolen credentials or internal documentation

Examples include Pastebin, Ghostbin, file.io, and anonfiles (when active). Their simplicity and temporary nature make them appealing for one-off drops or fast-moving campaigns.

Why the Shift Away from the Dark Web?

While the dark web once provided the primary infrastructure for cybercriminal marketplaces and forums, it has become a less attractive option for many threat actors. A combination of practical challenges and strategic advantages has led to a growing preference for mainstream and surface-level platforms.

One of the key drivers behind this shift is the increasing success of global law enforcement operations. High-profile takedowns such as AlphaBay, Hansa, and Hydra have not only dismantled major criminal marketplaces but also sown distrust within dark web communities. With undercover operations and seizures now a recurring threat, many actors perceive mainstream platforms as less risky in terms of operational security, particularly when combined with disposable accounts and encrypted messaging.

Technical reliability is another issue. Dark web services can suffer from poor uptime, slow performance, and hosting instability. These problems make it harder for threat actors to run consistent operations or maintain communication, especially when compared to the seamless experience offered by platforms like Telegram or Discord.

Accessibility also plays a major role. Mainstream platforms are far easier to use and require no special configuration or tools. Anyone with a smartphone can join a Telegram group or browse a fraud forum hosted on the surface web. This lowers the barrier to entry for newer or less technically skilled actors, fuelling growth in cybercriminal communities.

Finally, these platforms offer scale. Social media, public channels, and open forums provide instant access to large audiences, whether for pushing stolen data, coordinating campaigns, or recruiting collaborators. The potential for amplification far exceeds what is typically possible within the confines of the dark web.

For all these reasons, the dark web is no longer the sole or even primary location for cybercriminal activity. Threat actors are adapting to a broader, more dynamic digital environment, and defenders must do the same.

Implications for Threat Intelligence Teams

As threat actors diversify their platforms, the scope of effective cyber threat intelligence (CTI) must evolve accordingly. Relying solely on dark web monitoring is no longer sufficient. Instead, teams must broaden their visibility to include the various surface and semi-private spaces where cybercriminal activity increasingly takes place.

Monitoring closed channels such as Telegram groups, Discord servers, and niche forums has become essential. However, these spaces are often harder to access and require greater care in terms of operational security (OPSEC). Joining or observing these groups can carry significant risk if not done properly. Analysts must use hardened environments, anonymous accounts, and clear protocols to avoid detection or legal exposure.

Language skills and cultural awareness are also becoming increasingly important. Many cybercrime communities operate in non-English languages and use regional slang or coded terminology. Without this context, valuable intelligence can be missed or misinterpreted. Investing in native language analysts or translation tools can dramatically improve coverage and insight.

The scale and speed at which content is published across platforms make manual monitoring impractical. As such, automation is vital. Tools that scrape and index Telegram posts, track mentions on social media, or flag emerging IOCs can help intelligence teams respond quickly and reduce the chance of missing key developments.
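Flagging emerging IOCs is one of the easier pieces to automate. The sketch below shows the idea with deliberately minimal regular expressions for IPv4 addresses and file hashes; production tooling would also handle defanged indicators (e.g. `hxxp`, `[.]`), IPv6, URLs, and domain names.

```python
import re

# Minimal IOC patterns; real-world parsers are considerably more robust.
IOC_PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "sha256": re.compile(r"\b[a-fA-F0-9]{64}\b"),
    "md5": re.compile(r"\b[a-fA-F0-9]{32}\b"),
}

def extract_iocs(text: str) -> dict:
    """Return candidate IOCs found in a scraped post or channel message."""
    return {name: pat.findall(text) for name, pat in IOC_PATTERNS.items()}
```

Run over a feed of scraped Telegram posts or forum threads, even a crude extractor like this can surface indicators worth triaging far faster than manual review.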

Ultimately, the shift in threat actor behaviour demands a shift in defender strategy. The more fragmented and accessible the threat landscape becomes, the more agile and well-equipped CTI teams need to be in order to stay ahead.

Case Examples

LockBit’s Use of Telegram for PR and Leak Amplification (2024)

In early 2024, after suffering internal leaks and DDoS attacks against their dark web leak site, the LockBit ransomware group turned to Telegram to regain control of their narrative. The group created public Telegram channels to share statements, leak victim data, and coordinate with affiliates. This move not only ensured continuity during technical outages but also expanded their audience beyond the dark web’s limited reach.

Telegram’s encryption, ease of access, and built-in forwarding features allowed LockBit to amplify their message rapidly, including to journalists, researchers, and rival threat actors. It showcased a tactical shift: using mainstream tools as a parallel infrastructure for both influence and extortion pressure.

“Infinity Stealer” Malware Sold via Discord and GitHub (Mid–2023 Onwards)

Infinity Stealer, a malware strain targeting browser credentials and crypto wallets, began circulating heavily in 2023 via non-dark web platforms, notably Discord and GitHub. The malware was marketed in private Discord servers where prospective buyers were vetted and provided updates. GitHub repositories were used to host payloads, configuration templates, and instructions, often disguised as open-source tools.

This campaign highlights how cybercriminals are bypassing traditional marketplaces entirely, instead using legitimate platforms for both sales and delivery infrastructure. Discord’s private server structure and GitHub’s reputational cover enabled the operators to fly under the radar while still reaching a large pool of technically capable users.

Conclusion

The dark web remains a valuable source of cyber threat intelligence — but it is no longer the whole story. As cybercriminals adapt to a shifting digital landscape, they are increasingly leveraging open and semi-closed platforms like Telegram, Discord, and even mainstream social media to conduct and promote their activities.

For CTI teams, this evolution demands a broader approach. Effective monitoring now extends beyond Tor and onion domains to include a mix of channels, each with its own risks, nuances, and intelligence value. It also requires enhanced OPSEC, linguistic awareness, and the integration of automation tools to track activity at scale.

By recognising these trends and adapting monitoring strategies accordingly, defenders can stay better aligned with the current threat environment — one that is faster, more fragmented, and no longer confined to the shadows.


Why Hackers Hack: Exploring What Motivates Cybercriminal Activity

Cybercrime continues to rise in scale, complexity and impact, affecting individuals, businesses and governments alike. While much attention is given to how attacks happen, it’s just as important to ask why they occur in the first place. Understanding what motivates attackers is a crucial part of building an effective defence.

So, why do hackers hack?

Some are driven by financial gain, while others act on behalf of a nation-state or in support of a political cause. There are those motivated by revenge or personal challenge, and others who simply exploit opportunities because they can.

In this post, we explore the key motivations behind cybercriminal activity, helping you better understand the intent behind the threat and its implications for your organisation’s security posture.

Financial Gain

For many cybercriminals, money is the primary motivator. The vast majority of cybercrime is financially driven, with threat actors seeking to extract value from individuals, businesses or governments through theft, fraud or extortion.

Ransomware is perhaps the most well-known example. Attackers encrypt a victim’s data and demand payment, usually in cryptocurrency, in exchange for the decryption key. The rise of Ransomware-as-a-Service (RaaS) has made these attacks more accessible, allowing less technically skilled criminals to launch sophisticated campaigns using tools developed by others.

One of the most notorious examples of financially motivated cybercrime is Evil Corp, a Russia-based cybercrime group responsible for developing and distributing the Dridex banking Trojan and BitPaymer ransomware. The group, led by Maksim Yakubets, has been linked to attacks that have caused hundreds of millions of pounds in damages globally. According to the U.S. Department of the Treasury, Yakubets was allegedly tasked by Russian intelligence to conduct espionage operations alongside his cybercriminal activities. He is known not just for the scale of his crimes, but also for flaunting his wealth—reportedly driving a Lamborghini with a personalised number plate that reads “THIEF”.

Phishing and business email compromise (BEC) are also common financially motivated attacks. These techniques are designed to trick victims into handing over login credentials, payment details or other sensitive information that can be monetised directly or resold on dark web marketplaces. The FBI has reported billions of dollars in losses from BEC schemes, which often involve attackers impersonating executives or suppliers to redirect large financial transactions.

What’s particularly concerning is how mature and professionalised the cybercriminal ecosystem has become. Online forums and marketplaces, often hosted on the dark web, serve as thriving hubs where criminals buy and sell tools, data and services. This includes malware, exploit kits, stolen credentials and even technical support for other attackers. Some actors specialise in initial access, others in data theft or extortion, and many operate purely as brokers or facilitators.

As a result, modern cyberattacks are rarely the work of a lone hacker. Instead, they often involve multiple actors working together across a decentralised and anonymous marketplace. For a relatively low cost, almost anyone can purchase the tools and expertise needed to carry out a breach.

With high rewards and limited risk in many jurisdictions, financially motivated cybercrime remains one of the most significant threats facing organisations today.

Ideological or Political Motivation (Hacktivism)

Not all cybercriminals are driven by profit. Some are motivated by political beliefs, social causes or ideologies. These individuals or groups, often referred to as hacktivists, use hacking as a form of protest, aiming to disrupt, expose or embarrass organisations and governments they oppose.

One of the most recognisable hacktivist collectives is Anonymous, a loosely organised group known for its cyber campaigns against governments, corporations and extremist groups. Their activities have ranged from distributed denial of service (DDoS) attacks on financial institutions, to leaking sensitive documents from law enforcement agencies and political bodies.

Hacktivism has also played a prominent role in modern conflicts. In the early days of the Russia–Ukraine war, groups on both sides of the conflict engaged in cyber operations. Ukrainian-aligned actors, including the so-called IT Army of Ukraine, targeted Russian government websites and media outlets with defacements and DDoS attacks. Meanwhile, pro-Russian hacktivist groups like Killnet have launched attacks against European infrastructure in retaliation for political support of Ukraine.

These operations are not always highly technical, but they can be disruptive and attention-grabbing. For example, in 2022, Killnet claimed responsibility for attacks on several websites belonging to airports, healthcare providers and public institutions across Europe, using basic but effective DDoS techniques.

Hacktivism can blur the line between political protest and criminal activity. While some view it as a legitimate form of dissent in the digital age, it often involves illegal access, data leaks or service disruption, and can escalate geopolitical tensions or cause collateral damage to innocent third parties.

For defenders, politically motivated attacks pose a unique challenge. They may not follow the typical patterns of financially driven crime, and their targets can shift quickly based on current events, perceived injustices or ideological trends.

State-Sponsored Espionage

Some of the most advanced and persistent cyber threats come not from criminals seeking profit, but from nation-states pursuing strategic objectives. These attacks are often aimed at gathering intelligence, disrupting rivals, or gaining long-term access to critical systems. Unlike financially motivated actors, state-sponsored groups tend to operate with significant resources, patience and stealth.

These threat actors—often referred to as Advanced Persistent Threats (APTs)—typically target government departments, defence contractors, critical national infrastructure, and major corporations. Their goal may be to steal sensitive data, conduct surveillance, interfere with democratic processes, or enable future sabotage.

A prominent example is APT29, also known as Cozy Bear, a group linked to Russia’s Foreign Intelligence Service (SVR). They have been implicated in numerous high-profile intrusions, including the 2020 SolarWinds supply chain attack, which compromised several US federal agencies and global private sector organisations. The operation was notable for its sophistication and subtlety, remaining undetected for months.

Similarly, APT10, associated with China’s Ministry of State Security, was involved in an extensive global cyber espionage campaign targeting managed service providers (MSPs). By compromising these third-party IT providers, APT10 was able to access a wide range of downstream client networks, including government and corporate systems in the UK, US and beyond.

Unlike typical cybercriminals, these groups are often protected by their host governments and operate with impunity. They may also work in parallel with criminal organisations, blurring the lines between state and non-state activity. For example, some ransomware attacks have been linked to actors with suspected ties to nation-states, suggesting a dual-purpose intent: generating revenue while causing strategic disruption.

The motivations behind state-sponsored cyber operations are diverse, ranging from political influence and military advantage to intellectual property theft and economic gain. These campaigns are rarely random; they are calculated, well-resourced and long-term in nature.

For organisations, this means traditional defences may not be enough. Combating espionage-level threats requires a heightened focus on detection, incident response and threat intelligence, particularly for those in sensitive sectors.

Corporate or Industrial Espionage

Businesses, particularly those with valuable intellectual property and trade secrets, are prime targets for corporate or industrial espionage. Cybercriminals and competing organisations alike seek to gain an unfair advantage by stealing sensitive data related to research and development (R&D), product designs, strategic plans or proprietary technologies.

This type of espionage often overlaps with state-sponsored cyber operations, where nation-states target foreign companies to bolster their own industries or military capabilities. A notable example is the Operation Aurora campaign, uncovered in 2010, where threat actors believed to be linked to China targeted Google and dozens of other major companies. The attackers aimed to steal intellectual property and gain access to corporate networks.

Similarly, in 2020, the US Department of Justice unsealed indictments against members of a Chinese hacking group known as APT41 for conducting widespread cyber intrusions into video game companies and technology firms, stealing source code and proprietary information to benefit commercial interests.

R&D-heavy sectors such as biotechnology, aerospace, automotive and software development face particularly high risks. The theft of trade secrets not only undermines a company’s competitive edge but can also result in substantial financial losses and damage to reputation.

Unlike typical financially motivated attacks, corporate espionage campaigns are usually stealthy and meticulously planned. Attackers may maintain prolonged access to compromised networks, gathering intelligence over months or even years to extract maximum value.

Organisations must therefore prioritise safeguarding their intellectual property through robust cybersecurity measures, employee awareness, and stringent access controls. Collaboration with industry partners and government agencies can also help in detecting and mitigating these sophisticated threats.

Personal Challenge or Prestige

For some hackers, the motivation is less about money or politics and more about curiosity, thrill-seeking, or the desire for recognition within their communities. These individuals often see hacking as a puzzle to be solved or a challenge to be conquered, gaining personal satisfaction and prestige among peers.

This motivation is particularly common among younger or amateur hackers, sometimes referred to as “script kiddies”, who may lack advanced skills but are eager to prove themselves by exploiting vulnerabilities or defacing websites. The hacking community online—including forums, social media groups and dark web marketplaces—can foster this behaviour, offering a platform for sharing exploits, bragging rights and reputation-building.

A notable example is the hacking group LulzSec, which gained international attention in 2011 through a series of high-profile attacks targeting organisations like Sony, the CIA, and PBS. Their actions were largely driven by the desire to embarrass their victims and entertain themselves, rather than for financial gain or political objectives.

Similarly, the case of Jonathan James, a teenage hacker from the United States, illustrates this motivation. At just 15 years old, James infiltrated several government systems, including NASA, stealing source code and causing significant disruption. His actions seemed motivated by the challenge and thrill of hacking rather than monetary rewards.

While these hackers might not always intend serious harm, their actions can have unintended consequences: disrupting services, compromising data, or exposing vulnerabilities that other malicious actors might exploit.

Revenge or Personal Grievances

Not all cyber threats originate externally—sometimes the greatest risks come from insiders motivated by personal grudges or feelings of revenge. Disgruntled employees, former staff or contractors with authorised access can deliberately cause harm to an organisation by leaking sensitive information, sabotaging systems or stealing data.

One of the most infamous cases involved Edward Snowden, a former NSA contractor who leaked vast amounts of classified information, motivated by a personal belief that the public had the right to know about government surveillance programmes. Though his actions sparked worldwide debate on privacy, they also caused significant damage to intelligence operations.

In the corporate sphere, a UK-based case saw a former IT administrator take revenge after being dismissed by deleting critical files and disabling user accounts, resulting in days of downtime and financial loss.

Such incidents highlight the critical importance of internal controls, thorough monitoring and robust offboarding procedures. Regularly reviewing access rights, implementing the principle of least privilege, and monitoring unusual activity can help detect and prevent insider threats before they escalate.

Organisations must balance trust with vigilance, fostering a positive workplace culture while ensuring employees understand the consequences of malicious actions.

Opportunistic or Accidental Hacking

Not all cyberattacks are the result of carefully planned operations. Many stem from opportunistic or accidental hacking, where attackers use automated tools to scan large numbers of systems for common vulnerabilities. These attacks require minimal effort but can still cause significant damage, especially to organisations or individuals with poor basic cyber hygiene.

Automated bots and scripts regularly probe the internet for unpatched software, weak passwords, misconfigured devices, or open ports. Once a vulnerability is found, the attacker may exploit it to gain access, often without a specific target in mind. This “spray and pray” approach relies on volume rather than precision.

For example, the WannaCry ransomware outbreak in 2017 rapidly spread across the globe by exploiting a known Windows vulnerability. Many affected organisations had failed to apply critical patches, making them vulnerable to this widespread, indiscriminate attack.

These types of attacks highlight the importance of fundamental cybersecurity practices: regularly updating software, using strong, unique passwords, enabling multi-factor authentication, and maintaining good network hygiene. Even basic measures can significantly reduce the risk posed by opportunistic attackers.

While opportunistic hacking might lack the sophistication or motive of targeted attacks, its impact can be equally devastating if proper precautions are not taken.

Mixed Motivations

In reality, cybercriminal motivations are often complex and overlapping rather than clear-cut. Many attacks are driven by a combination of factors—financial, political, ideological, or personal—which can make attribution and defence especially challenging.

A common scenario involves financially motivated cybercriminal groups being hired or tolerated by state actors to carry out attacks that serve national interests. These groups operate with relative impunity in exchange for providing offensive cyber capabilities or disruptive services.

For example, the notorious ransomware group REvil (also known as Sodinokibi) has been linked to criminal operations that sometimes intersect with geopolitical objectives. While primarily motivated by profit through ransomware extortion, there are indications that some affiliates have conducted operations aligning with certain state interests or received indirect protection from their home governments.

Such hybrid motivations complicate the threat landscape, blurring the lines between organised crime and state-sponsored espionage or sabotage. For defenders, understanding these intertwined incentives is crucial for developing effective cyber defence strategies and threat intelligence.

Conclusion

Cybercriminals are motivated by a wide and varied range of factors—from financial gain and political agendas to personal grudges and the pursuit of prestige. Understanding these diverse motivations is essential for organisations seeking to build effective defences in an increasingly complex cyber threat landscape.

By recognising what drives threat actors, businesses and individuals can better anticipate potential attack vectors, prioritise security investments, and tailor their incident response strategies accordingly. A threat-informed defence approach goes beyond technical measures, incorporating intelligence, awareness and proactive risk management.

As cyber threats continue to evolve, adopting a comprehensive, informed security posture is no longer optional—it is vital. Organisations should take active steps to understand their adversaries, strengthen their defences, and cultivate a culture of vigilance to stay ahead in the ongoing battle against cybercrime.

Header Photo by Furkan Elveren on Unsplash

Investigation, Opinion

Mastering the Analysis of Competing Hypotheses (ACH): A Practical Framework for Clear Thinking

In an age of information overload, uncertainty, and complex decision-making, clear analytical thinking is more crucial than ever. The Analysis of Competing Hypotheses (ACH) is a structured method designed to cut through ambiguity and support objective, evidence-based conclusions. Originally developed by Richards J. Heuer, Jr., a veteran of the U.S. intelligence community, ACH was created to help analysts systematically evaluate multiple hypotheses without falling prey to cognitive biases and premature conclusions.

At its core, ACH shifts the analytical focus from proving a favoured hypothesis to disproving less likely alternatives, ensuring that conclusions are reached through a process of elimination rather than assumption. This approach is especially valuable in fields where decisions must be made in the face of incomplete or conflicting data, such as intelligence, cybersecurity, business strategy, and investigative research.

In this article, we’ll explore the foundational principles of ACH, guide you through its step-by-step methodology, and illustrate how to apply it in real-world scenarios. Whether you’re an analyst, decision-maker, or simply someone seeking to sharpen your critical thinking skills, this practical framework offers a powerful tool for navigating complexity with clarity and rigour.

What is the Analysis of Competing Hypotheses?

The Analysis of Competing Hypotheses (ACH) is a structured analytical technique that helps individuals and teams evaluate multiple possible explanations for an event, trend, or problem—all at the same time. Rather than focusing on finding evidence that supports a single favoured hypothesis, ACH encourages analysts to test all plausible alternatives and to prioritise disconfirming evidence over confirming data.

This method stands in contrast to traditional analysis, where there is often a tendency to latch onto the most obvious explanation early on and seek only evidence that backs it up. That approach, while intuitive, is prone to cognitive pitfalls such as confirmation bias, groupthink, and premature closure.

By explicitly laying out competing hypotheses and methodically evaluating each against the available evidence, ACH helps to minimise bias, highlight critical assumptions, and improve judgement, particularly in situations that are ambiguous, fast-moving, or laden with incomplete information.

Ultimately, ACH is less about finding the answer and more about narrowing down the field of possibilities through a process that is transparent, reproducible, and intellectually disciplined.

The ACH Process Step-by-Step

The Analysis of Competing Hypotheses is more than just a checklist—it’s a disciplined approach to structuring your thinking, challenging assumptions, and arriving at well-supported conclusions. Below is an expanded walkthrough of the seven core steps, each designed to promote clarity and rigour in decision-making.

1. Define the Question or Problem

A clear, unbiased problem statement is the foundation of effective analysis. This step is about narrowing the scope of inquiry and making sure the question does not contain built-in assumptions.

Tips for framing your question:

  • Avoid language that implies causality or blame
  • Be as specific as the data allows
  • Keep it neutral and open-ended

Example:
“Why did a system failure occur in a secure network?”
This framing encourages investigation without assuming intent, method, or actor.

A poorly worded question—e.g., “Who caused the attack on our network?”—limits thinking prematurely by assuming the event was malicious and externally driven.

2. List All Plausible Hypotheses

The goal here is to generate a comprehensive list of explanations for the issue. It’s critical to suspend judgment and avoid discarding possibilities too early, especially those that feel uncomfortable or less likely at first glance.

Use techniques like brainstorming, consultation with diverse stakeholders, and red teaming to uncover blind spots.

Example Hypotheses:

  • H1: Insider sabotage
  • H2: External cyberattack
  • H3: Configuration error
  • H4: Third-party service failure
  • H5: Power or environmental disruption

Even if some hypotheses seem implausible, including them ensures a more robust analysis, and sometimes the least obvious explanation turns out to be the correct one.

3. Identify Evidence and Arguments

At this stage, you gather all the information that could potentially support or contradict your hypotheses. This includes:

  • Observational data (logs, reports, witness accounts)
  • Technical indicators (malware signatures, access logs)
  • Expert assessments
  • Circumstantial clues

For each piece of evidence, evaluate two things:

  • Source reliability: How trustworthy is the origin (e.g., system logs vs. anonymous tips)?
  • Information credibility: How plausible or accurate is the content?

Also consider whether the evidence is:

  • Direct or indirect
  • Confirmed or unverified
  • Timely or outdated

Pro tip: Avoid cherry-picking. Include evidence that contradicts your initial instincts—this is where real insight often lies.

4. Analyse Consistency

This is the heart of the ACH method: building a matrix that compares each hypothesis against each piece of evidence.

You’ll mark whether each piece of evidence is:

  • Consistent with the hypothesis
  • Inconsistent (i.e., contradicts it)
  • Neutral (i.e., not relevant to that hypothesis)

Example Matrix:

| Evidence | H1: Insider sabotage | H2: External cyberattack | H3: Configuration error |
| --- | --- | --- | --- |
| Admin account accessed remotely at 2am | ✔️ Consistent | ✔️ Consistent | ❌ Inconsistent |
| No malware signatures detected | ✔️ Consistent | ❌ Inconsistent | ➖ Neutral |
| Recent patch deployed without testing | ❌ Inconsistent | ➖ Neutral | ✔️ Consistent |
| No third-party access in logs | ✔️ Consistent | ❌ Inconsistent | ✔️ Consistent |

This matrix helps you visualise the weight and distribution of evidence, especially in identifying which hypotheses have significant inconsistencies.
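The scoring logic behind such a matrix is simple enough to sketch in a few lines of Python. The ratings below mirror the example table above ("C" consistent, "I" inconsistent, "N" neutral); the structure and names are illustrative, not a prescribed implementation. True to ACH, hypotheses are ranked by how much evidence weighs against them, not by how much supports them.

```python
# Illustrative ACH matrix: evidence -> {hypothesis: rating}.
# "C" = consistent, "I" = inconsistent, "N" = neutral.
MATRIX = {
    "Admin account accessed remotely at 2am": {"H1": "C", "H2": "C", "H3": "I"},
    "No malware signatures detected":         {"H1": "C", "H2": "I", "H3": "N"},
    "Recent patch deployed without testing":  {"H1": "I", "H2": "N", "H3": "C"},
    "No third-party access in logs":          {"H1": "C", "H2": "I", "H3": "C"},
}

def inconsistency_scores(matrix):
    """Count inconsistent ratings per hypothesis (lower = survives scrutiny better)."""
    scores = {}
    for ratings in matrix.values():
        for hypothesis, rating in ratings.items():
            scores.setdefault(hypothesis, 0)
            if rating == "I":
                scores[hypothesis] += 1
    return scores

scores = inconsistency_scores(MATRIX)
print(scores)  # {'H1': 1, 'H2': 2, 'H3': 1}
```

In this example H2 (external cyberattack) accumulates the most inconsistencies, so it would be the first candidate for elimination, while H1 and H3 remain in play for further refinement.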

5. Refine the Matrix

Now that the matrix is populated, focus on evaluating the diagnostic value of each piece of evidence. Ask yourself:

  • Which pieces most clearly discriminate between hypotheses?
  • Are there patterns that suggest certain hypotheses are clearly weaker?

ACH places particular emphasis on inconsistencies rather than confirmations. A single strong inconsistency can eliminate a hypothesis, while consistent evidence might apply to multiple hypotheses and be less useful in narrowing options.

Refining may also involve revisiting earlier assumptions, adjusting hypotheses, or seeking new evidence to fill gaps.

6. Draw Tentative Conclusions

This is the interpretive phase—based on the refined matrix, identify which hypothesis is least burdened by inconsistent evidence. Remember, this doesn’t mean it has the most supporting evidence, but rather that it stands up better under scrutiny.

Be cautious not to overstate certainty. If multiple hypotheses remain viable, say so. ACH supports probabilistic thinking, not premature conclusions.

Key reminders:

  • Avoid selecting the “most comfortable” hypothesis
  • Document your reasoning and uncertainties
  • Stay open to revision as new evidence emerges

7. Identify Milestones or Indicators

ACH is not static. Situations evolve, and so should your analysis. Define a set of indicators—specific events, behaviours, or pieces of data—that, if observed, would confirm, challenge, or refine your conclusion.

Examples:

  • Discovery of malware indicating a known threat actor (would support H2)
  • Forensic evidence of misconfiguration traced to recent update (would support H3)
  • Repetition of similar failures in unrelated systems (might suggest a broader issue)

Establish a plan for ongoing monitoring. This step ensures your conclusions remain grounded in reality as the situation unfolds and prevents analytical drift over time.


Analysis of Competing Hypotheses

Practical Example: ACH in Action

To demonstrate the practical value of the Analysis of Competing Hypotheses, let’s walk through a realistic scenario involving a suspected cybersecurity incident at a mid-sized financial services firm. This example illustrates each step of the ACH process in context, showing how structured analysis can lead to clearer conclusions—even in the face of ambiguity.

Scenario: Unexpected System Downtime in a Secure Network

Background:
At 03:15 on a Tuesday morning, the firm’s primary transaction server went offline, causing a six-hour disruption to client services. The network is normally robust and protected by multiple layers of defence. Internal monitoring systems flagged the event, but initial diagnostics were inconclusive.

The CTO initiates an ACH analysis to determine what caused the failure.

Step 1: Define the Question or Problem

The team agrees to frame the central question as:

What is the most plausible explanation for the unexpected system outage on the secure transaction server?

This wording avoids assumptions about cause or intent and invites multiple lines of inquiry.

Step 2: List All Plausible Hypotheses

The team brainstorms and agrees on the following hypotheses:

  • H1: External cyberattack (e.g., malware, DDoS)
  • H2: Insider sabotage (malicious insider or misuse)
  • H3: Configuration or patching error
  • H4: Hardware failure or infrastructure fault
  • H5: Scheduled maintenance error or oversight

The list is deliberately inclusive to prevent tunnel vision.

Step 3: Identify Evidence and Arguments

The team compiles evidence from logs, interviews, monitoring tools, and server diagnostics. Notable pieces of evidence include:

  • E1: Server logs show a reboot command issued remotely at 03:14
  • E2: No malware signatures or IOCs (Indicators of Compromise) detected
  • E3: A new patch was installed the day prior without full regression testing
  • E4: No external traffic spikes or anomalies around the time of the incident
  • E5: Access logs show a junior administrator logged in remotely at 03:12
  • E6: Server hardware passed all post-incident diagnostics
  • E7: Change management calendar incorrectly listed maintenance for the wrong server

Each item is tagged with a confidence rating and source reliability to support judgment later.
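Tagging evidence this way can be as lightweight as a small record per item. The sketch below is illustrative: the field names are hypothetical, and the grading letters and numbers loosely follow the common Admiralty-style scale (source reliability A–F, information credibility 1–6), though any consistent scheme works.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    ref: str            # e.g. "E1"
    description: str
    reliability: str    # source reliability: "A" (reliable) .. "F" (cannot judge)
    credibility: int    # information credibility: 1 (confirmed) .. 6 (cannot judge)

evidence = [
    Evidence("E1", "Server logs show a remote reboot command at 03:14", "A", 1),
    Evidence("E2", "No malware signatures or IOCs detected", "B", 2),
    Evidence("E5", "Junior administrator logged in remotely at 03:12", "A", 1),
]

# High-grade items (reliable source, confirmed information) can later be
# weighted more heavily when refining the matrix.
high_grade = [e.ref for e in evidence if e.reliability == "A" and e.credibility == 1]
print(high_grade)  # ['E1', 'E5']
```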

Step 4: Analyse Consistency

The team creates a matrix to compare each hypothesis against the evidence.

| Evidence | H1: Cyberattack | H2: Insider Sabotage | H3: Config Error | H4: Hardware Fault | H5: Maintenance Error |
| --- | --- | --- | --- | --- | --- |
| E1: Remote reboot at 03:14 | ✔️ Consistent | ✔️ Consistent | ✔️ Consistent | ➖ Neutral | ✔️ Consistent |
| E2: No malware or IOCs found | ❌ Inconsistent | ✔️ Consistent | ➖ Neutral | ➖ Neutral | ➖ Neutral |
| E3: Patch installed the day before | ➖ Neutral | ➖ Neutral | ✔️ Consistent | ➖ Neutral | ➖ Neutral |
| E4: No external anomalies | ❌ Inconsistent | ➖ Neutral | ➖ Neutral | ➖ Neutral | ➖ Neutral |
| E5: Junior admin logged in remotely | ➖ Neutral | ✔️ Consistent | ✔️ Consistent | ➖ Neutral | ❌ Inconsistent |
| E6: Hardware passed diagnostics | ➖ Neutral | ➖ Neutral | ➖ Neutral | ❌ Inconsistent | ➖ Neutral |
| E7: Calendar showed the wrong server | ➖ Neutral | ➖ Neutral | ➖ Neutral | ➖ Neutral | ✔️ Consistent |

Step 5: Refine the Matrix

Focusing on disproving hypotheses, the team notes:

  • H1 (Cyberattack) has two clear inconsistencies (E2 and E4)
  • H4 (Hardware fault) is contradicted by E6
  • H5 (Maintenance error) is weakened by E5, as the admin wasn’t scheduled to access that system

H2 (Insider sabotage) and H3 (Configuration error) remain more viable. The presence of an unscheduled login and recent patching suggests a blend of human and technical causes.

The most diagnostic evidence appears to be E2 (no malware) and E3 (untested patch), which significantly affect H1 and H3, respectively.
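The "diagnostic value" check from this step can also be sketched in code. The ratings below mirror the scenario matrix, and the metric used (count of distinct ratings per evidence item) is a deliberately crude proxy: an item rated identically against every hypothesis discriminates nothing, while one that splits the field is worth closer attention. A real refinement would still weigh inconsistencies more heavily, as ACH prescribes.

```python
# Scenario matrix ratings: "C" consistent, "I" inconsistent, "N" neutral.
MATRIX = {
    "E1": {"H1": "C", "H2": "C", "H3": "C", "H4": "N", "H5": "C"},
    "E2": {"H1": "I", "H2": "C", "H3": "N", "H4": "N", "H5": "N"},
    "E3": {"H1": "N", "H2": "N", "H3": "C", "H4": "N", "H5": "N"},
    "E4": {"H1": "I", "H2": "N", "H3": "N", "H4": "N", "H5": "N"},
    "E5": {"H1": "N", "H2": "C", "H3": "C", "H4": "N", "H5": "I"},
    "E6": {"H1": "N", "H2": "N", "H3": "N", "H4": "I", "H5": "N"},
    "E7": {"H1": "N", "H2": "N", "H3": "N", "H4": "N", "H5": "C"},
}

def diagnosticity(ratings):
    """Count distinct ratings: 1 means the item cannot discriminate at all."""
    return len(set(ratings.values()))

# Rank evidence from most to least discriminating (stable sort keeps ties in order).
ranked = sorted(MATRIX, key=lambda e: diagnosticity(MATRIX[e]), reverse=True)
print(ranked)  # ['E2', 'E5', 'E1', 'E3', 'E4', 'E6', 'E7']
```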

Step 6: Draw Tentative Conclusions

H1 (Cyberattack) and H4 (Hardware fault) are largely ruled out.
H5 (Maintenance error) is possible but lacks strong support and includes an inconsistency.
That leaves:

  • H2 (Insider sabotage): Plausible, especially with unexpected admin access
  • H3 (Configuration error): Strongly supported by evidence, with few inconsistencies

Given that the administrator may have unknowingly pushed a faulty patch, H3 is deemed the most probable hypothesis, with H2 remaining a secondary consideration requiring HR review.

Step 7: Identify Milestones or Indicators

To confirm or disprove the working conclusion, the team outlines the following future indicators:

  • Confirmation of the patch’s fault during follow-up testing (would support H3)
  • HR interview with the admin reveals intent or confusion (could support or refute H2)
  • Any signs of privilege misuse or unusual access patterns (would raise concern for H2)
  • Vendor advisory on the patch’s known issues (further supporting H3)

The analysis will be updated once these indicators are assessed. In the meantime, patching procedures are temporarily suspended, and access controls are reviewed.


Final Conclusion

The structured application of ACH helped the team reach a reasoned, defensible conclusion while keeping alternate hypotheses in play. Rather than jumping to the common assumption of a cyberattack, the analysis revealed a more mundane but equally critical root cause: likely misconfiguration following a poorly tested software update.

Real-World Reference: The Lucy Letby Case

The power of ACH is underscored by its implicit use in high-stakes investigations such as the Lucy Letby trial. Prosecutors highlighted that Letby was the only staff member present during every critical incident involving infant patients—a fact established through careful analysis of shift patterns and timelines. By systematically evaluating competing hypotheses about who could have caused harm, investigators effectively used the same logic underpinning ACH: disproving alternative explanations and focusing on the hypothesis best supported by consistent evidence. This approach helped build a compelling, structured case based on opportunity and timing, demonstrating ACH’s practical application beyond intelligence into criminal justice.

Benefits and Limitations of ACH

The Analysis of Competing Hypotheses (ACH) offers a powerful framework for navigating complex, ambiguous, or high-stakes problems. But like any method, it comes with both strengths and limitations. Understanding these helps practitioners apply it effectively and appropriately.

Benefits of ACH

1. Reduces Cognitive Bias
ACH is specifically designed to counteract common mental pitfalls, such as confirmation bias and premature conclusions. By forcing the analyst to evaluate all plausible hypotheses and focus on disconfirming evidence, it encourages objectivity and balance.

2. Encourages Structured Thinking
Rather than relying on intuition or fragmented information, ACH imposes a disciplined approach. Analysts must document each step, weigh evidence methodically, and justify conclusions. This structure makes reasoning transparent and defensible, especially important in intelligence, law enforcement, or regulatory settings.

3. Handles Ambiguity and Complexity Well
ACH is particularly effective when information is incomplete, uncertain, or contradictory. By assessing how each piece of evidence aligns (or doesn’t) with multiple hypotheses, it accommodates complexity without oversimplifying.

4. Improves Group Collaboration and Debate
In team settings, ACH helps avoid groupthink by providing a common analytical language and framework. It gives structure to collaborative analysis, enabling different perspectives to be tested against the same evidence matrix.

5. Highlights Gaps and Guides Collection
The process often reveals where evidence is weak or missing, helping analysts identify what further data needs to be gathered. Diagnostic indicators can also be flagged for future monitoring.


Limitations of ACH

1. Time-Consuming
ACH is not always suited to fast-moving or reactive situations. Building and refining matrices, especially for complex cases with numerous hypotheses, can be labour-intensive.

2. Dependent on Quality of Input
The effectiveness of ACH depends entirely on the quality and reliability of the evidence fed into it. Incomplete, misleading, or low-confidence data can skew conclusions, even if the process itself is rigorous.

3. May Oversimplify Nuance
Although ACH structures thinking, it can sometimes encourage an overly coarse view of evidence (each item marked simply consistent, inconsistent, or neutral). This may not capture subtleties, degrees of relevance, or contextual complexity unless analysts make an effort to interpret carefully.

4. Requires Analytical Discipline
The method assumes a willingness to challenge assumptions, avoid premature closure, and remain open to changing conclusions as new evidence arises. In practice, this intellectual discipline can be hard to maintain, especially under pressure.

5. Not a Substitute for Domain Expertise
ACH supports analysis, but it does not replace subject matter knowledge. Without expert insight to interpret evidence correctly, even a well-constructed ACH matrix can produce flawed conclusions.


ACH is a powerful complement to critical thinking, not a magic solution. Used thoughtfully, it strengthens the quality of judgment and provides a clear audit trail for how conclusions were reached.

Tools and Resources

While the Analysis of Competing Hypotheses (ACH) can be applied using simple pen-and-paper methods, various tools can help structure the process, especially when working with complex datasets or collaborating with others. Below are some practical tools that support ACH-style analysis.

Manual Tools

Spreadsheets (e.g., Excel, Google Sheets)
Spreadsheets remain a reliable and widely used method for building ACH matrices. Users can list hypotheses across the top, evidence down the side, and use consistent symbols or colour codes to mark whether each item of evidence is consistent, inconsistent, or neutral. This method offers full transparency and is easily adaptable for individual or team use.
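
The spreadsheet layout described above can be sketched in a few lines of code. The following is a minimal, illustrative Python sketch of an ACH matrix following Heuer's convention of counting inconsistencies per hypothesis (the hypothesis with the fewest inconsistent items is the least disproven); the hypotheses and evidence marks are invented placeholders, not drawn from a real case.

```python
# Minimal ACH matrix sketch: hypotheses as rows, evidence marks as columns.
# Marks follow the common ACH convention: "C" consistent, "N" neutral,
# "I" inconsistent. Only inconsistencies count against a hypothesis.
MARKS = {"C": 0, "N": 0, "I": 1}

def ach_scores(matrix):
    """Return (hypothesis, inconsistency count) pairs, fewest first."""
    totals = {h: sum(MARKS[m] for m in marks) for h, marks in matrix.items()}
    return sorted(totals.items(), key=lambda kv: kv[1])

# Illustrative matrix: three hypotheses scored against four evidence items.
matrix = {
    "H1: insider threat":  ["C", "I", "C", "N"],
    "H2: external actor":  ["C", "C", "N", "C"],
    "H3: accidental leak": ["I", "I", "N", "I"],
}

for hypothesis, inconsistencies in ach_scores(matrix):
    print(f"{hypothesis}: {inconsistencies} inconsistent item(s)")
```

Run against this toy matrix, H2 surfaces first because nothing disproves it, which mirrors the disconfirmation logic the method is built on.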

Printable ACH Templates
Basic ACH grids are available as printable templates and can be useful in workshops, briefings, or offline environments. These encourage clarity of thought without requiring technical platforms.

Digital Tools

PARC ACH Tool
Developed by the Palo Alto Research Center, this free, downloadable tool guides users through the ACH process, including hypothesis generation, evidence scoring, matrix creation, and conclusion development. It’s well-suited for training and operational use.

IBM i2 Analyst’s Notebook
Though not purpose-built for ACH, Analyst’s Notebook allows for sophisticated mapping of relationships between people, events, and data, which can support structured hypothesis testing in investigative contexts.


Recommended Reading

  • Psychology of Intelligence Analysis – Richards J. Heuer Jr.
    The original source text on ACH offers both theory and practical examples. Essential reading for analysts across sectors.
  • Tradecraft Primer: Structured Analytic Techniques for Intelligence Analysis – CIA (declassified)
    A practical manual outlining ACH alongside other structured methods such as key assumptions checks and red teaming. Freely available online.

Conclusion

In a world increasingly defined by uncertainty, complexity, and competing narratives, the Analysis of Competing Hypotheses (ACH) offers a methodical way to cut through ambiguity. Originally developed for intelligence professionals, its value extends far beyond, offering anyone engaged in investigative work, cybersecurity, risk assessment, or strategic decision-making a practical framework for clearer thinking.

By focusing on disproving rather than confirming, ACH helps analysts avoid cognitive traps and build conclusions on firmer ground. It doesn’t guarantee certainty, but it does promote discipline, transparency, and intellectual honesty — qualities that are increasingly vital in high-stakes environments.

While the process may require time and rigour, the payoff is well-structured, defensible conclusions. Whether you’re a security analyst examining network breaches, a business leader weighing strategic options, or a researcher interpreting complex data, ACH provides a repeatable model for navigating complexity with confidence.

Incorporating ACH into your analytical toolkit is more than a method — it’s a mindset shift towards structured scepticism, clarity of thought, and resilient decision-making. The more widely it’s adopted, the stronger our collective reasoning becomes.

Header photo by Milad Fakurian on Unsplash.

Photo by fabio on Unsplash.

Investigation, Opinion

Understanding SCATTERED SPIDER: Tactics, Targets, and Defence Strategies

In recent months, a wave of disruptive cyberattacks has swept across high-profile organisations in both the UK and the US, affecting sectors ranging from hospitality and telecommunications to finance and retail. Many of these incidents share a common thread: attribution to a threat actor known as SCATTERED SPIDER, a group now gaining notoriety for its aggressive use of social engineering and its partnership with the DragonForce ransomware-as-a-service (RaaS) operation.

Unlike traditional ransomware gangs that rely heavily on technical exploits or brute-force tactics, SCATTERED SPIDER stands out for its deeply manipulative approach. The group has repeatedly demonstrated its ability to impersonate employees, deceive IT support teams, and bypass multi-factor authentication (MFA) through cunning psychological tactics. Often described as “native English speakers,” they are suspected to operate in or have ties to Western countries, bringing a cultural fluency that makes their phishing and phone-based attacks alarmingly effective.

As law enforcement and cybersecurity professionals scramble to contain the fallout from recent attacks, one thing is clear: SCATTERED SPIDER is not just another ransomware affiliate. They represent a shift toward human-centric intrusion strategies, blending technical skill with social deception in a way that challenges even well-defended organisations.

This article takes a closer look at how SCATTERED SPIDER operates, the tools they use (including DragonForce RaaS), and, most importantly, what practical steps individuals and organisations can take to reduce their exposure to this growing threat.

Image Credit: Crowdstrike

Who Is SCATTERED SPIDER?

SCATTERED SPIDER is the name given to a loosely affiliated cybercriminal group that has quickly gained attention for its highly targeted and persistent campaigns against major organisations. Believed to be active since at least 2022, the group is often classified as an Initial Access Broker (IAB) and affiliate actor, working both independently and in partnership with larger ransomware collectives, most notably the ALPHV/BlackCat operation.

What sets SCATTERED SPIDER apart is not just its technical acumen, but its expert use of social engineering, often executed in fluent English and with a level of cultural familiarity that suggests the group is likely based in or has strong ties to the US or UK. Unlike many ransomware actors operating out of Eastern Europe or Russia, SCATTERED SPIDER’s tactics are tailored to Western corporate environments, allowing them to convincingly impersonate staff, manipulate helpdesk personnel, and bypass traditional security barriers with unnerving ease.

The group’s motivation is primarily financial, but their techniques are unusually aggressive. Rather than simply deploying ransomware after gaining access, SCATTERED SPIDER takes the time to navigate internal systems, escalate privileges, and exfiltrate data, ensuring maximum impact and leverage during extortion. This has included threats to publicly leak sensitive data if ransoms aren’t paid, a tactic made easier by their ties to DragonForce RaaS, a ransomware service that offers data leak platforms and other tools to affiliates.

Notable incidents attributed to SCATTERED SPIDER include:

  • The 2023 attack on MGM Resorts, which caused large-scale IT disruption across casinos and hotels in the US and reportedly began with a simple phone-based social engineering ploy.
  • Intrusions into telecommunications and managed service providers, where they have targeted identity infrastructure such as Okta and Active Directory to pivot across networks.
  • Disruption and data theft in the financial and insurance sectors, where highly sensitive customer and operational data were exfiltrated and held to ransom.

These campaigns reveal a group that is not only technically capable but strategically manipulative, leveraging trust, urgency, and insider knowledge to achieve access that many automated tools would struggle to obtain.

The Tools of the Trade: DragonForce RaaS

One of the key enablers of SCATTERED SPIDER’s recent success has been their alignment with DragonForce, a relatively new entrant in the expanding Ransomware-as-a-Service (RaaS) ecosystem. RaaS models have radically altered the cybercrime landscape. Much like SaaS (Software-as-a-Service) in the legitimate tech world, RaaS lowers the barrier to entry for less technically capable threat actors by offering turnkey ransomware toolkits, user-friendly dashboards, and profit-sharing agreements between developers and affiliates.

What Is DragonForce?

DragonForce is a commercially operated ransomware platform, complete with a slick user interface, customer “support” channels, and marketing-style updates promoting new features and obfuscation techniques. While it may not yet have the brand recognition of LockBit or BlackCat, it is gaining traction among cybercriminal groups for its reliability, speed, and aggressive encryption routines.

Its offerings typically include:

  • Highly customisable payloads: Affiliates like SCATTERED SPIDER can tweak encryption settings, file extensions, and ransom notes to suit their targets.
  • Data exfiltration modules: These facilitate double extortion, where files are stolen before encryption and used as additional leverage during ransom negotiations.
  • Dark Web leak portals: Victim data is published or threatened with publication unless payment is made.
  • Access to a central control panel: Affiliates can monitor infected machines, initiate encryption manually, and track ransom payments via cryptocurrency wallets.

These features allow threat actors to operate more like cybercrime startups than ad-hoc hacking collectives.

Why SCATTERED SPIDER Uses DragonForce

SCATTERED SPIDER’s strength lies in gaining initial access, often via phone-based social engineering or SIM-swapping tactics, rather than building their own ransomware from scratch. By outsourcing encryption and extortion capabilities to a RaaS provider like DragonForce, they focus on what they do best: manipulating people, navigating corporate networks, and extracting sensitive data.

In this partnership, DragonForce gains a capable affiliate who can deliver high-value access, and SCATTERED SPIDER gains a ready-made suite of tools to monetise their intrusions. This division of labour reflects a broader shift in cybercrime, one where specialisation and scalability are the name of the game.

DragonForce and the RaaS Economy

It’s important to understand that DragonForce is not an isolated actor. It is part of a wider criminal ecosystem where:

  • Access brokers sell stolen credentials or remote access.
  • Malware developers lease out payloads to trusted affiliates.
  • Negotiators and money launderers offer “aftercare” services.

This ecosystem enables threat actors to operate like businesses, complete with hierarchical roles, profit-sharing models, and even internal dispute resolution mechanisms. In this context, SCATTERED SPIDER is not just a lone wolf but a well-placed operator within a highly coordinated cybercrime supply chain.

Why This Matters

The use of DragonForce by SCATTERED SPIDER highlights two alarming trends:

  1. Professionalisation of ransomware: You no longer need deep technical knowledge to execute devastating attacks, just access, confidence, and a few phone calls.
  2. Faster time-to-impact: With everything from encryption to extortion automated and streamlined, the time between compromise and ransom demand is shrinking rapidly, leaving organisations with little time to detect and respond.

As DragonForce continues to evolve and attract new affiliates, we are likely to see more actors adopt this model of rapid-access, rapid-extortion ransomware operations.

Image Credit: Kaspersky

Anatomy of an Attack: How SCATTERED SPIDER Operates

Understanding how SCATTERED SPIDER executes its attacks is crucial for organisations looking to strengthen their defences. Unlike many ransomware operators who rely on brute-force tactics or mass phishing campaigns, SCATTERED SPIDER favours precision, patience, and psychological manipulation.

Here’s a typical flow of operations observed in their campaigns:

1. Reconnaissance and Target Selection

The group begins by identifying high-value targets, often large enterprises in sectors such as telecommunications, financial services, and IT. They may purchase access to credentials or endpoint telemetry from Initial Access Brokers (IABs) or scrape publicly available information from LinkedIn, press releases, and social media to build detailed profiles of staff and infrastructure.

What makes this phase effective:

  • Use of OSINT to identify staff names, departments, and third-party vendors.
  • Focus on companies with complex IT environments and a low tolerance for operational disruption, making them prime candidates for extortion.

2. Initial Access via Social Engineering

Once they’ve identified the right entry point, SCATTERED SPIDER often deploys vishing (voice phishing) or phishing techniques to impersonate internal staff. In some cases, they call help desks pretending to be employees locked out of their accounts, requesting MFA resets or password changes.

This is where their native English and cultural familiarity give them a dangerous edge; they sound credible, confident, and urgent.

Common tactics:

  • Impersonating IT staff or executives to pressure support teams.
  • SIM-swapping or MFA fatigue attacks to intercept or bypass two-factor authentication.
  • Spoofed email domains or compromised inboxes used for internal-style phishing.

3. Credential Harvesting and Privilege Escalation

Once inside, the group moves quickly to extract further credentials. Tools such as Mimikatz, Cobalt Strike, and legitimate Windows administration tools (e.g. PowerShell, PsExec) are used to escalate privileges and move laterally across the network.

They specifically look for access to:

  • Identity infrastructure (Active Directory, Okta, Azure AD)
  • Remote access tools (VPNs, RDP gateways, Citrix)
  • Data repositories containing sensitive customer or business data

This phase may last hours or days, depending on the target’s size and the level of access achieved.

4. Data Exfiltration and Pre-Ransom Preparation

Before deploying ransomware, SCATTERED SPIDER usually exfiltrates a trove of sensitive data. This forms the basis of their double extortion strategy; even if a victim can restore from backups, they may still pay to prevent the public release of confidential files.

Common methods:

  • Compressing and uploading files to cloud storage services or attacker-controlled servers
  • Encrypting and staging data to avoid detection by DLP or antivirus tools

In some cases, the group leaves behind backdoors or admin accounts to retain long-term access or re-extort victims in the future.

5. Ransomware Deployment via DragonForce

Once exfiltration is complete and the environment is primed, SCATTERED SPIDER deploys DragonForce ransomware across the compromised network. The ransomware is configured to encrypt files rapidly and disrupt operations, sometimes including domain controllers and backup servers, to maximise impact.

Victims then receive a ransom note directing them to a Tor-based portal for negotiations. If payment isn’t made within a specified timeframe, stolen data is posted on a leak site associated with DragonForce.


Key Takeaways:

  • SCATTERED SPIDER relies on human error as much as technical vulnerabilities.
  • The group’s knowledge of Western IT environments makes it easier for them to blend in and manipulate systems and staff.
  • Their multi-stage attack chain (access, escalation, exfiltration, encryption) is methodical and difficult to detect in real time.

Image Credit – Reeds Solicitors

Why SCATTERED SPIDER’s Approach Is Especially Dangerous

SCATTERED SPIDER doesn’t operate like a traditional ransomware crew. Their campaigns combine social engineering finesse with technical aggression, resulting in a hybrid threat model that blends cybercrime with tactics more often associated with espionage groups. Here’s why they stand out and why they’re so difficult to defend against.

1. Deep Impersonation and Real-Time Manipulation

Unlike typical phishing groups that rely on mass email blasts, SCATTERED SPIDER employs live, targeted deception. Their operators speak fluent, unaccented English and are adept at impersonating IT personnel, executives, or employees in distress.

They frequently call help desks or IT support lines, using:

  • Personalised information gathered through OSINT
  • Spoofed phone numbers and internal-sounding email addresses
  • Calm, confident delivery to manipulate support staff in real time

This level of human-centred deception is rarely seen in conventional cybercrime campaigns and poses a serious challenge for security teams.

2. Precision Targeting of Identity Infrastructure

SCATTERED SPIDER understands that identity is the new perimeter. Rather than merely compromising a system, they aim to take control of identity and access management tools like:

  • Okta
  • Active Directory
  • Azure AD
  • SSO and MFA services

By doing so, they’re not just accessing individual endpoints; they’re taking over the core trust fabric of the organisation. Once they own your identity systems, lateral movement and persistence become trivially easy.

3. Speed and Aggression Outpacing Detection

While many attackers spend weeks in a network quietly collecting data, SCATTERED SPIDER moves with urgency and intent. In many cases:

  • Initial access to ransomware deployment can take place in less than 48 hours.
  • They bypass traditional controls using legitimate tools (Living off the Land), leaving minimal forensic traces.
  • They often disable security tools, delete logs, or backdoor admin accounts to stay one step ahead.

Traditional defences based on known signatures, blacklists, or passive monitoring are often too slow or too blind to respond in time.

4. Blurring the Line Between Cybercrime and Nation-State Tactics

Although motivated by financial gain rather than geopolitics, SCATTERED SPIDER’s tradecraft exhibits a level of maturity and adaptation more typical of state-sponsored APT groups. This includes:

  • Tailored intrusion techniques for specific industries and environments
  • Multi-stage attacks with operational patience
  • Use of multiple extortion channels, including PR pressure and data leak sites

This hybrid operational model (part ransomware gang, part APT) means traditional classifications don’t fully capture the scope of their threat. For defenders, this creates both strategic confusion and escalating risk.

In short, SCATTERED SPIDER is dangerous not just because of what they do, but how they do it. Their blend of psychological manipulation, identity compromise, and rapid escalation makes them one of the most formidable threats facing organisations today.

Defending Against SCATTERED SPIDER: Practical Guidance

While SCATTERED SPIDER’s tactics are sophisticated, they often exploit basic lapses in process, communication, and identity management. That means there are precautions organisations can take to harden themselves against this type of threat, without needing to reinvent their entire security stack.

1. Reinforce Help Desk Security Protocols

Since SCATTERED SPIDER frequently targets help desks and support teams, ensure those teams are trained to:

  • Never reset MFA or passwords without high-assurance identity verification.
  • Use call-back procedures or out-of-band verification for unusual requests.
  • Flag repeated or urgent requests as potential social engineering.

Adding simple checklists and mandatory escalation paths for sensitive account changes can drastically reduce social engineering success rates.
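
To make the checklist idea concrete, here is a small, illustrative Python sketch of an escalation gate for sensitive account changes. The action names, request fields, and rules are assumptions invented for the example, not a real ticketing or IdP API.

```python
# Hypothetical help desk escalation gate: sensitive account changes
# (MFA resets, password resets) require out-of-band verification
# before they can proceed. Field names are illustrative assumptions.
SENSITIVE_ACTIONS = {"mfa_reset", "password_reset"}

def requires_escalation(request):
    """Return True when a request must go to out-of-band verification."""
    if request["action"] not in SENSITIVE_ACTIONS:
        return False  # routine requests follow the normal path
    # A completed call-back to a number on file clears the request.
    if request.get("callback_verified"):
        return False
    return True

# An MFA reset with no verified call-back gets escalated, not actioned.
print(requires_escalation({"action": "mfa_reset", "callback_verified": False}))
```

Even a gate this simple removes the support agent's discretion at the exact moment social engineers apply pressure, which is the point of a mandatory escalation path.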

2. Harden Identity and Access Management

Identity remains a prime attack surface. To reduce risk:

  • Enforce phishing-resistant MFA, such as hardware tokens or app-based push authentication with device binding (rather than SMS or email codes).
  • Implement just-in-time access and least privilege policies for administrative accounts.
  • Regularly audit inactive accounts, especially third-party vendors and former employees.

Integrate identity telemetry into your detection stack: suspicious logins, MFA resets, or logins from new devices should trigger alerts.
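
As a rough illustration of that telemetry rule, the sketch below flags MFA resets and logins from devices not previously seen for an account. The event shapes and device-tracking approach are assumptions for the example, not any specific identity provider's log format.

```python
# Illustrative identity-telemetry rules: alert on any MFA reset and on
# logins from devices not previously associated with the account.
# Event fields ("user", "type", "device") are invented for this sketch.
known_devices = {"alice": {"laptop-01"}}

def check_event(event, known=known_devices):
    """Return an alert string for a suspicious identity event, else None."""
    user, kind = event["user"], event["type"]
    if kind == "mfa_reset":
        return f"ALERT: MFA reset for {user}"
    if kind == "login":
        seen = known.setdefault(user, set())
        if event["device"] not in seen:
            seen.add(event["device"])  # remember it after alerting once
            return f"ALERT: new device {event['device']} for {user}"
    return None

print(check_event({"user": "alice", "type": "login", "device": "laptop-01"}))
print(check_event({"user": "alice", "type": "login", "device": "phone-99"}))
```

A production system would enrich this with geolocation, timing, and risk scoring, but the core pattern is the same: identity events feed the detection stack rather than sitting in a separate silo.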

3. Monitor for Signs of Lateral Movement

Once SCATTERED SPIDER is inside a network, time is of the essence. Deploy tools and strategies to detect:

  • Unusual use of remote admin tools (e.g. PowerShell, PsExec)
  • Use of credential dumping tools or abnormal privilege escalation
  • Lateral movement attempts, especially to identity infrastructure like Active Directory or Okta

EDR/XDR platforms with good behavioural analytics can be critical here, especially when coupled with 24/7 monitoring or MDR services.
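
The detection ideas above can be sketched as a simple log scan. This is a deliberately naive Python example that matches process command lines against a watch list of admin tools commonly abused for lateral movement; the event format and pattern list are assumptions, and a real EDR would use behavioural context rather than string matching.

```python
# Naive lateral-movement sweep: flag process-creation events whose
# command line mentions tools frequently abused by intruders.
# The event shape and watch list are illustrative assumptions.
SUSPICIOUS = ("psexec", "wmic", "powershell -enc", "mimikatz")

def flag_lateral_movement(events):
    """Return the events whose command line matches a watched pattern."""
    hits = []
    for ev in events:
        cmd = ev["cmdline"].lower()
        if any(tool in cmd for tool in SUSPICIOUS):
            hits.append(ev)
    return hits

events = [
    {"host": "ws-12", "cmdline": "notepad.exe report.txt"},
    {"host": "ws-12", "cmdline": "PsExec.exe \\\\dc-01 cmd"},
    {"host": "dc-01", "cmdline": "powershell -enc SQBFAFgA"},
]
print(len(flag_lateral_movement(events)))
```

Because SCATTERED SPIDER lives off the land, legitimate admin use of these same tools is common; the value is in baselining who normally runs them, from where, and when.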

4. Protect Your Data, and Know Where It Is

Given the group’s focus on data theft prior to encryption, prevention isn’t just about backups:

  • Map your critical data locations, especially customer, financial, and IP-related data.
  • Use Data Loss Prevention (DLP) tools to monitor exfiltration patterns.
  • Segment sensitive environments and restrict data access to only those who need it.

Ensure that backups are not just secure and segmented from your main network, but also tested regularly.

5. Prepare for the Human Side of a Crisis

Even strong technical controls can be undone by panic or poor decision-making in the moment. Prepare:

  • A ransomware playbook with clear response roles, legal guidance, and communications plans.
  • Crisis simulations or tabletop exercises that include scenarios involving data leaks and public extortion.
  • Training for executives and PR teams on how to manage the reputational and regulatory impact.

Remember: SCATTERED SPIDER succeeds by catching organisations off guard, so make sure your teams know exactly how to respond under pressure.


Security Culture Is Your Best Defence

At the end of the day, SCATTERED SPIDER’s tactics work because they exploit human trust, urgency, and complexity. Investing in detection tools is important, but fostering a culture of scepticism, verification, and shared responsibility across the organisation is what truly builds resilience.

Stay Vigilant, Stay Informed

SCATTERED SPIDER has proven that ransomware is no longer just about encrypted files and ransom notes — it’s about controlling identities, deceiving people, and outpacing traditional defences. Their campaigns demonstrate just how effective a threat actor can be when they combine technical proficiency with social engineering and real-time manipulation.

What makes them especially dangerous is not just the tools they use, but the tactics and mindset behind their operations. This is a group that studies its targets, adapts rapidly, and blends psychological and technical attacks with striking efficiency.

For organisations in the UK, the US, and beyond, the message is clear: security isn’t just a technology problem — it’s a people and process problem too. Preventing the next SCATTERED SPIDER-style breach means:

  • Educating and empowering support staff
  • Hardening identity infrastructure
  • Monitoring for the unexpected
  • And rehearsing how you’ll respond under pressure

Cybercriminals evolve constantly. So must we.

Header image: Photo by Егор Камелев on Unsplash.

Investigation, The Dark Web

Analysing DDoSIA: Threat Intelligence Insights into a Coordinated DDoS Operation

In the evolving landscape of cyber threats, DDoSIA has emerged as a significant force, orchestrating distributed denial-of-service (DDoS) attacks against organisations worldwide. Believed to be operated by pro-Russian hacktivist groups, DDoSIA mobilises volunteer participants to overwhelm targeted networks, causing disruptions to businesses, government institutions, and critical infrastructure. With its decentralised approach and sustained campaigns, this operation has become a persistent threat to cybersecurity resilience.

Tracking DDoSIA is crucial for cybersecurity and threat intelligence (CTI) professionals. By understanding its tactics, techniques, and infrastructure, defenders can better anticipate attacks, mitigate their impact, and adapt defensive strategies. As part of our mission at SOS Intelligence, we continuously monitor, collect, and analyse DDoSIA-related data, offering actionable intelligence to help organisations stay ahead of this evolving threat.

Understanding DDoSIA and Its Attack Infrastructure

DDoSIA is a coordinated distributed denial-of-service (DDoS) campaign operated by pro-Russian hacktivist groups, notably NoName057(16). This group, along with other affiliated threat actors, is known for conducting disruptive cyber operations against organisations and governments deemed hostile to Russian interests. NoName057(16) has been active since at least 2022, launching frequent DDoS attacks against Western institutions, particularly those supporting Ukraine. The group operates as part of a broader ecosystem of pro-Russian cyber collectives, often aligning with entities like KillNet and Anonymous Russia, which share similar geopolitical motivations.

Unlike state-sponsored advanced persistent threats (APTs) that focus on espionage or destructive cyberattacks, DDoSIA is a crowdsourced DDoS initiative, incentivising participants to join attacks. Volunteers—many of whom are ideologically aligned with Russia’s geopolitical stance—are recruited via messaging platforms and forums, where they receive instructions and access to attack tools. Participants are often encouraged through financial rewards or patriotic motivations, making DDoSIA a hybrid between hacktivism and cyber warfare.

How DDoSIA Operates

DDoSIA primarily leverages volumetric and application-layer DDoS attacks, aiming to overwhelm websites, APIs, and network infrastructure. Attack vectors include:

  • HTTP flooding – Generating large numbers of HTTP requests to exhaust server resources.
  • UDP and TCP floods – Saturating network bandwidth with high-volume traffic.
  • Slowloris attacks – Holding connections open to deplete available server connections.
  • Bot-assisted attacks – Some participants utilise proxy networks and automated scripts to scale up attack intensity.
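
As an illustration of how the first of these vectors is commonly detected, the sketch below counts requests per source IP in a sliding time window and flags sources that exceed a threshold. The window and threshold values are illustrative placeholders, and the IPs come from documentation ranges.

```python
# Simple HTTP-flood heuristic: keep recent request timestamps per
# source IP and flag any source exceeding a per-window threshold.
# WINDOW_SECONDS and THRESHOLD are illustrative, untuned values.
from collections import defaultdict, deque

WINDOW_SECONDS = 10
THRESHOLD = 100  # requests allowed per window before flagging

windows = defaultdict(deque)  # ip -> timestamps of recent requests

def record_request(ip, now):
    """Record a request; return True if the source looks like a flood."""
    q = windows[ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()  # drop timestamps that fell out of the window
    return len(q) > THRESHOLD

# 150 requests from one source inside a fraction of a second trips it.
flagged = any(record_request("203.0.113.9", t / 1000) for t in range(150))
print(flagged)
```

Real mitigation layers combine this kind of rate heuristic with reputation data and challenge responses, since crowdsourced campaigns like DDoSIA spread traffic across many volunteer sources.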

The group has targeted various sectors, including government agencies, financial institutions, defence contractors, and logistics providers. A particular focus has been placed on countries actively supporting Ukraine, such as the UK, the US, Poland, and Germany. Attack campaigns often coincide with key political events, military aid announcements, or sanctions imposed against Russia, demonstrating a coordinated cyber-influence strategy.

The Importance of Real-Time Intelligence

Given DDoSIA’s adaptive tactics and decentralised operational model, real-time intelligence is critical for understanding and mitigating its impact. Traditional DDoS mitigation measures alone are insufficient, as the threat landscape evolves rapidly. Continuous monitoring of:

  • Attack infrastructure changes (e.g., new command-and-control nodes, shifting IP ranges).
  • Recruitment activities in underground forums and messaging platforms.
  • Indicators of compromise (IOCs) and attack patterns.

…enables cybersecurity teams to stay ahead of threats.
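
The infrastructure-monitoring point above reduces, at its simplest, to diffing successive snapshots of observed attack nodes. The following minimal Python sketch surfaces newly seen and retired infrastructure between two collection runs; the IPs are documentation-range placeholders, not real DDoSIA indicators.

```python
# Minimal infrastructure-change tracker: compare two snapshots of
# observed nodes and report what appeared and what disappeared.
# The addresses are placeholder values from documentation ranges.
def diff_infrastructure(previous, current):
    """Return (new, retired) node sets between two snapshots."""
    prev, curr = set(previous), set(current)
    return curr - prev, prev - curr

yesterday = {"198.51.100.7", "203.0.113.42"}
today = {"203.0.113.42", "192.0.2.99"}

new_nodes, retired_nodes = diff_infrastructure(yesterday, today)
print(sorted(new_nodes))      # nodes observed for the first time
print(sorted(retired_nodes))  # nodes no longer seen
```

Run continuously, a diff like this turns raw collection into a change feed: new nodes become candidate IOCs to block, while retired ones show the operators rotating infrastructure.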

At SOS Intelligence, we actively track, collect, and analyse DDoSIA-related intelligence, helping organisations anticipate attacks, implement proactive defences, and mitigate operational disruptions before they escalate. By leveraging OSINT, deep web monitoring, and network telemetry, we provide actionable insights to counter the evolving tactics of DDoSIA and its affiliates.

Analysis, Evaluation, and Recommendations

Understanding DDoSIA’s Attack Trends

Unlike financially motivated DDoS campaigns, which often involve extortion or ransom demands, DDoSIA’s attacks are ideologically driven and aim to disrupt services in nations perceived as adversaries of Russia.

Since October 2024, SOS Intelligence has been collecting data from the DDoSIA network, the analysis of which provides critical insight into DDoSIA’s recent campaigns, revealing its geopolitical focus, attack methodologies, and targeted infrastructure. The findings help contextualise the scope of the operation, exposing which nations, industries, and services are most affected.

1. Top Targeted Countries

The distribution of attacks by country reveals a strategic effort to disrupt organisations aligned against Russian interests. The most targeted nations include:

  • Ukraine – Consistently the most heavily attacked country, aligning with DDoSIA’s broader mission to destabilise Ukrainian institutions and weaken its digital infrastructure. The targeting of government agencies, financial institutions, and media organisations suggests an attempt to create operational disruption and information blackout scenarios.
  • Poland & the Baltic States (Lithuania, Latvia, Estonia) – These nations have been frequent targets of Russian-aligned cyber campaigns due to their strong support for Ukraine. Their strategic position in NATO and the EU’s Eastern flank makes them key adversaries in Russia’s hybrid warfare strategy.
  • Western European Nations (France, Germany, UK, Italy, Spain) – The presence of these countries in DDoSIA’s targeting list suggests an attempt to undermine NATO members and critical Western businesses, particularly those providing support to Ukraine.
  • Czech Republic & Slovakia – These Central European nations have seen increasing attacks, likely due to their role in military aid and logistical support to Ukraine.

Evaluation

The targeting strategy aligns with broader Russian state-aligned cyber operations, which aim to erode public trust in institutions and disrupt critical services. The focus on government, finance, and media sectors indicates an effort to undermine operational stability and create ripple effects that extend beyond the direct victims.

Implications for Cyber Threat Intelligence (CTI):

  • Intelligence gathering on Russian hacktivist groups should prioritise understanding evolving target lists to anticipate future attacks.
  • Governments and high-risk organisations in these regions should implement heightened DDoS protections and real-time monitoring to mitigate potential disruptions.

2. Top Victim IPs and Their DDoS Mitigation Status

A key insight from the dataset is the list of IPs that sustained the highest number of DDoS attacks, offering a window into DDoSIA’s strategic intent. The most frequently targeted IPs include:

  • Ukrainian Government Infrastructure (91.212.223.216, 18 attacks) – This aligns with previous attacks on Ukrainian state services, attempting to disrupt government communications, digital services, and emergency response systems.
  • Microsoft (13.107.246.44 & 13.107.246.61, 14 & 12 attacks) – These IPs are tied to Azure-hosted services, suggesting DDoSIA is attempting to target cloud infrastructure supporting Western businesses or cybersecurity initiatives.
  • Polish Banking Networks (193.19.152.74, 10 attacks) – The focus on financial institutions is indicative of an effort to destabilise economic activity in Poland, a strong supporter of Ukraine.
  • French E-commerce & Hosting Services (51.91.236.193, 8 attacks) – The targeting of commercial platforms suggests that DDoSIA is testing the impact of attacks on economic stability and supply chains.

DDoS Mitigation Status Analysis

One of the most notable findings is that many of these victim IPs do not publicly advertise their use of Cloudflare, AWS Shield, or other major DDoS mitigation services. This raises concerns about their ability to withstand sustained attack campaigns.

  • High-profile organisations like Microsoft likely have in-house protections, but the presence of their IPs on the list suggests that attackers are attempting to overwhelm cloud-based services.
  • Government infrastructure in Ukraine and Poland appears to be a primary target, reinforcing the need for centralised state-sponsored DDoS defences.
  • Smaller financial institutions and e-commerce platforms may lack the necessary defences, leaving them vulnerable to outages.

Evaluation

The data suggests that DDoSIA’s attack strategy is not just about volume but also persistence. By continuously targeting specific IPs associated with critical services, they are attempting to cause prolonged service degradation rather than instant takedowns.

Recommendations:

  • At-risk organisations should conduct a full audit of their current DDoS protection measures, ensuring they use enterprise-grade filtering solutions.
  • Cloud-based services should enhance their rate-limiting policies to mitigate bot-driven HTTP floods.
  • Government agencies should coordinate with cybersecurity providers to implement real-time defence measures.

3. Top Attack Methods and Vectors

DDoSIA utilises a combination of attack techniques designed to bypass basic mitigation measures. The most frequently observed attack vectors include:

  • TCP SYN Floods – A classic technique used to exhaust connection resources on servers.
  • HTTP GET/POST Floods – Targeting application-layer services, often overwhelming login pages, checkout processes, or API endpoints.
  • DNS Amplification – Leveraging misconfigured DNS servers to exponentially increase attack traffic.
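From the defensive side, the common thread in detecting these vectors is rate analysis per source. As a minimal illustration (not tuning advice; the window and threshold below are invented), a sliding-window detector can flag sources whose request rate exceeds a limit:

```python
from collections import defaultdict, deque

# Illustrative thresholds only; real deployments tune these per service.
WINDOW_SECONDS = 10
MAX_REQUESTS = 100

class FloodDetector:
    def __init__(self, window=WINDOW_SECONDS, limit=MAX_REQUESTS):
        self.window = window
        self.limit = limit
        self.events = defaultdict(deque)  # source IP -> recent timestamps

    def record(self, src_ip, timestamp):
        """Record one SYN/HTTP request; return True if the source now exceeds the limit."""
        q = self.events[src_ip]
        q.append(timestamp)
        # Discard events that have aged out of the window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.limit

detector = FloodDetector()
# Simulate 150 requests from one source within a single 10-second window.
flagged = any(detector.record("203.0.113.9", t * 0.05) for t in range(150))
print(flagged)  # True – the burst trips the 100-request limit
```

This per-source logic catches simple floods; distributed attacks spread across many residential IPs, as DDoSIA does, require aggregate (per-endpoint) counters as well.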

Evaluation

The presence of HTTP-layer floods indicates an intentional effort to bypass traditional DDoS filtering, which primarily focuses on volumetric mitigation. The attack patterns suggest that DDoSIA’s botnet includes a mix of compromised systems, VPNs, and residential IPs, making mitigation more complex.

Recommendations

For Organisations at Risk

  1. Implement Layered DDoS Mitigation
    • Use a high-quality DDoS mitigation package, such as Cloudflare, AWS Shield, or Akamai, for automated volumetric protection.
    • Deploy Web Application Firewalls (WAFs) to filter out malicious HTTP traffic.
  2. Proactive Threat Intelligence & Monitoring
    • Implement network anomaly detection tools to identify and block low-volume, high-impact attacks.
    • Use geolocation filtering to block or challenge traffic from high-risk regions.
  3. Strengthen API & Login Security
    • Enforce CAPTCHAs and rate-limiting on login and checkout pages.
    • Deploy bot management solutions to detect automated DDoS tools.
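Geolocation or reputation filtering can be as simple as checking source addresses against a set of networks before deciding to block or challenge. A minimal sketch using Python's standard ipaddress module, with RFC 5737 documentation ranges standing in for real high-risk networks:

```python
import ipaddress

# The CIDR ranges here are documentation-only examples (RFC 5737),
# not real high-risk networks; a real list would come from threat intel feeds.
HIGH_RISK_NETWORKS = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def challenge_required(src_ip: str) -> bool:
    """Return True if traffic from src_ip should be blocked or CAPTCHA-challenged."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in HIGH_RISK_NETWORKS)

print(challenge_required("203.0.113.45"))  # True
print(challenge_required("192.0.2.10"))    # False
```

In practice this check sits behind a WAF or load balancer, and "challenge" usually means a CAPTCHA or JavaScript proof-of-work rather than an outright block.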

For CTI Professionals & Security Teams

  1. Expand DDoSIA Attribution & Tracking
    • Monitor NoName057(16)’s recruitment channels to identify new botnet strategies.
    • Use honeypots and deception techniques to study attack behaviour in real-time.
  2. Enhance Threat Intelligence Sharing
    • Collaborate with government agencies and private sector security teams to exchange attack data.
    • Track botnet infrastructure and preemptively blacklist high-risk traffic sources.
  3. Develop & Update DDoS Playbooks
    • Conduct regular red team exercises to test DDoS resilience.
    • Simulate HTTP-layer and multi-vector attacks to identify weaknesses before adversaries exploit them.
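As a minimal sketch of the intelligence-sharing idea, indicator feeds from multiple partners can be merged into a single blocklist while preserving the earliest sighting of each indicator. The partner names, IPs (RFC 5737 documentation ranges), and dates below are invented for illustration:

```python
# Hypothetical partner IOC feeds: indicator -> first-seen date (ISO 8601).
feeds = {
    "partner_a": {"198.51.100.7": "2025-01-10", "203.0.113.5": "2025-01-12"},
    "partner_b": {"203.0.113.5": "2025-01-11", "192.0.2.99": "2025-01-15"},
}

# Merge, keeping the earliest first-seen date and its reporting source.
# ISO 8601 dates compare correctly as strings.
merged = {}
for source, indicators in feeds.items():
    for ip, first_seen in indicators.items():
        if ip not in merged or first_seen < merged[ip][0]:
            merged[ip] = (first_seen, source)

for ip, (first_seen, source) in sorted(merged.items()):
    print(ip, first_seen, source)
```

Real sharing pipelines would use a structured format such as STIX/TAXII, but the core operation (deduplicating indicators while keeping provenance) is the same.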

Conclusion

The DDoSIA campaign, orchestrated by the NoName057(16) collective, is more than just a disruptive force—it is a tactically coordinated effort aimed at destabilising key institutions in countries opposing Russian geopolitical interests. The data analysed from recent attacks highlights clear patterns in target selection, attack vectors, and mitigation gaps, providing crucial insights into how organisations can defend against such threats.

The attack data reveals a strong geopolitical alignment, with Ukraine, Poland, the Baltic states, and Western European nations being primary targets. The focus on government agencies, financial institutions, and media organisations suggests an intent to erode public confidence, interfere with economic stability, and control narratives in critical regions. Additionally, the fact that Microsoft-hosted services and Polish banking networks have been frequently attacked underlines the strategic importance of both public and private sector entities remaining highly vigilant.

A notable trend is the increasing use of application-layer DDoS techniques such as HTTP floods, alongside network-layer vectors like SYN floods and DNS amplification, which together require more than just volumetric DDoS mitigation. Attackers are leveraging residential proxies, VPN services, and compromised IoT botnets to make their traffic appear legitimate, complicating detection and response efforts.

DDoS as a Smokescreen for Other Cyber Threats

While DDoS attacks are disruptive, they can also serve as a distraction for more insidious cyber activities, such as:

  • Network Intrusions & Data Exfiltration – Attackers may launch DDoS attacks to overwhelm security teams, diverting attention while stealing sensitive data or planting backdoors in the organisation’s infrastructure.
  • Ransomware Deployment – A coordinated DDoS attack could mask the initial stages of ransomware infections, where threat actors attempt to move laterally through a network before detonating their payloads.
  • Supply Chain Compromise – Threat actors may target cloud-based services or third-party providers with DDoS attacks, creating cascading failures that expose vulnerabilities in interconnected systems.

For security teams, this means that DDoS attacks should never be treated in isolation. Organisations must simultaneously monitor network traffic, logs, and user activity for signs of unauthorised access, privilege escalation, or data exfiltration attempts occurring under the cover of a DDoS event.
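The correlation described above can be sketched very simply: flag security events that fall inside a known DDoS window for priority investigation. All timestamps, usernames, and field names below are invented for illustration:

```python
from datetime import datetime

# Hypothetical DDoS event window, e.g. taken from mitigation provider alerts.
ddos_start = datetime(2025, 3, 1, 14, 0)
ddos_end = datetime(2025, 3, 1, 15, 30)

# Hypothetical events from an authentication/audit log.
auth_events = [
    {"user": "svc-backup", "event": "privilege_escalation", "time": datetime(2025, 3, 1, 14, 42)},
    {"user": "jsmith", "event": "login_success", "time": datetime(2025, 3, 1, 9, 15)},
    {"user": "admin", "event": "new_admin_created", "time": datetime(2025, 3, 1, 15, 5)},
]

# Anything that happened under cover of the DDoS window deserves a closer look.
suspicious = [e for e in auth_events if ddos_start <= e["time"] <= ddos_end]
for e in suspicious:
    print(e["user"], e["event"])
```

A SIEM performs the same join at scale across log sources; the point is that the DDoS window itself becomes a pivot for hunting, not just an availability incident.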

Strategic Recommendations

To counteract the risks posed by DDoSIA and other hacktivist-driven campaigns, organisations must adopt a multi-layered cybersecurity strategy:

  • Advanced DDoS Protection – Deploy Cloudflare, AWS Shield, Akamai, or on-premise DDoS mitigation solutions, with an emphasis on layer 7 (application-level) attack filtering.
  • Real-Time Threat Intelligence & Incident Response – Maintain continuous monitoring of attack trends and collaborate with threat intelligence providers to detect emerging tactics early.
  • Cross-Channel Security Visibility – Integrate SIEM solutions and Network Detection & Response (NDR) tools to ensure that security teams aren’t solely focused on DDoS traffic, but also on potential concurrent threats.
  • Red Teaming & Attack Simulations – Conduct regular stress-testing of infrastructure and simulate multi-pronged attack scenarios to evaluate how well defensive controls hold up under real-world conditions.
  • Enhanced Access Controls & Zero Trust – Implement strict user authentication, segmentation of critical systems, and anomaly detection mechanisms to prevent lateral movement during attacks.

Final Thoughts

The DDoSIA campaign exemplifies the increasingly coordinated and persistent nature of cyber threats that blend hacktivism, cybercrime, and geopolitical objectives. As attack techniques evolve, organisations must move beyond reactive defences and adopt proactive, intelligence-driven security strategies.

Crucially, security teams must recognise that DDoS attacks may not be the endgame—they could be a diversion tactic for deeper, more damaging intrusions. By combining DDoS mitigation with network forensics, endpoint monitoring, and proactive intelligence-sharing, organisations can stay ahead of evolving threats and prevent large-scale disruptions before they take hold.

Ultimately, early detection, rapid response, and holistic cybersecurity visibility will determine whether organisations withstand or succumb to these politically motivated cyber assaults.

How SOS Intelligence Empowers You to Analyse and Mitigate DDoSIA Threats

For organisations looking to take a proactive approach to defending against DDoSIA, SOS Intelligence provides raw and processed data that can be leveraged for deeper analysis. Rather than simply offering static reports, our platform enables security teams to interrogate the data in real-time, uncovering trends, patterns, and attack methodologies that can directly inform defence strategies.

Using our threat intelligence feeds, organisations can:

  • Correlate Attacker Behaviour – By analysing historical and live attack data, security teams can identify recurring attack patterns, such as preferred attack vectors, geographic focus, and time-based fluctuations in activity.
  • Investigate Victimology – By reviewing which organisations, IP ranges, and services are being targeted, defenders can assess their own risk exposure and determine whether their industry, supply chain, or region is in DDoSIA’s crosshairs.
  • Detect Emerging Attack Trends – With access to raw network and attack metadata, users can identify new methods being deployed by DDoSIA before they become widespread. This allows for early countermeasure deployment before adversaries refine their techniques.
  • Enrich Internal Threat Intelligence – Security teams can cross-reference SOS Intelligence data with their own logs, SIEM alerts, and network telemetry to detect potential early-stage reconnaissance or ongoing infiltration attempts.
  • Assess DDoS Mitigation Effectiveness – By tracking which victims have successfully mitigated attacks, teams can gain insight into which defensive solutions (e.g., Cloudflare, AWS Shield, on-premise filtering) have proven most effective.
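As a trivial example of this kind of interrogation, attack records can be aggregated by victim and method in a few lines. The feed schema below is invented for illustration; the actual SOS Intelligence feed format may differ:

```python
from collections import Counter

# Hypothetical feed rows: (victim_ip, sector, method).
attacks = [
    ("91.212.223.216", "government", "http_flood"),
    ("91.212.223.216", "government", "syn_flood"),
    ("13.107.246.44", "cloud", "http_flood"),
    ("193.19.152.74", "finance", "http_flood"),
    ("91.212.223.216", "government", "http_flood"),
]

# Who is hit most, and with what?
per_victim = Counter(ip for ip, _, _ in attacks)
per_method = Counter(method for _, _, method in attacks)

print(per_victim.most_common(1))  # [('91.212.223.216', 3)]
print(per_method["http_flood"])   # 4
```

The same grouping approach extends naturally to time-bucketed trends or per-sector breakdowns once the raw feed is loaded into a dataframe or SIEM.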

Turning Intelligence into Action

The true value of SOS Intelligence’s DDoSIA data lies in its ability to empower security professionals to extract their own insights. By combining our raw intelligence with in-house security expertise, organisations can:

  • Adjust firewall rules and DDoS protection settings based on the latest attack techniques.
  • Pre-emptively strengthen defences if they belong to an at-risk industry, country, or sector.
  • Monitor attack shifts in real-time to anticipate secondary threats such as network intrusions, data exfiltration, or ransomware campaigns that may accompany a DDoS event.
  • Share intelligence within their cybersecurity community to strengthen collective resilience against DDoSIA and similar threats.

Your Intelligence, Your Analysis, Your Defence

SOS Intelligence doesn’t just provide data; it offers a toolset for investigation and insight generation. By leveraging our feeds, logs, and analysis tools, security teams can turn raw data into actionable intelligence, enabling them to detect, understand, and mitigate DDoSIA threats before they escalate.

By combining our intelligence with your expertise, your organisation can stay ahead of DDoSIA’s evolving tactics and transform threat data into a proactive defence strategy.

Header image source – GBHackers.

Investigation

China – A Step Ahead In Digital Espionage

In the digital age, data has emerged as one of the most valuable resources, driving economies, shaping public opinion, and determining the success of nations. Amid this reality, cybercrime has become a potent tool for state actors, with China often cited as a significant player in the realm of cyber espionage and cybercrime. This article delves into how China has allegedly used cybercrime to obtain data, the motivations behind these actions, their methods, and the implications on global geopolitics.

UPDATE – join us on the 13th June for the accompanying webinar.

The Who – Those Working In The Shadows

On the digital battlefield, whether for a state-sponsored group or a self-motivated hacker, anonymity is key. This makes the task of attributing threat actor activity to real-world identities that much harder. More often than not, we see the evidence of digital crime and can use available intelligence to make a best estimate of the culprit, but a threat actor who wants to remain anonymous can do so with a reasonable application of effort. Despite these efforts, however, identification of threat actors and attribution of criminal activity is still possible.

China’s cyber activities are primarily conducted by state-sponsored groups. These groups, often referred to as Advanced Persistent Threats (APTs), include:

APT 1

APT1, also known as the Comment Crew or Shanghai Group, is a highly active cyber espionage unit linked to the Chinese military, specifically PLA Unit 61398. Identified by cybersecurity firm Mandiant in a 2013 report, APT1 is known for targeting a wide array of industries, including information technology, aerospace, telecommunications, and scientific research.

Their primary method of infiltration involves spear-phishing emails, followed by deploying custom and publicly available malware to maintain access and exfiltrate sensitive data. The group’s activities have largely focused on U.S.-based organisations, aiming to steal intellectual property and trade secrets to benefit Chinese companies and government entities.

APT 10

APT10, also known as Stone Panda or MenuPass Group, is a cyber espionage group attributed to the Chinese government. The group has been active since at least 2009 and is known for targeting managed IT service providers (MSPs) and their clients across various industries, including healthcare, aerospace, and manufacturing. APT10’s operations typically involve sophisticated tactics such as spear-phishing, the use of custom malware, and leveraging legitimate credentials to infiltrate networks and exfiltrate data. Their focus on MSPs allows them to gain access to multiple organisations through a single breach, maximising the impact of their espionage efforts.

APT10’s activities have had significant global repercussions, prompting extensive investigations and responses from cybersecurity firms and government agencies. In December 2018, the U.S. Department of Justice indicted two Chinese nationals associated with APT10, accusing them of stealing sensitive data from dozens of companies and government agencies.

APT 31

APT31, also known as Zirconium, Judgment Panda, or Bronze Vinewood, is a Chinese state-sponsored cyber espionage group. The group is known for its advanced and persistent cyber operations targeting a wide range of sectors, including government, finance, technology, and aerospace. APT31 employs sophisticated tactics such as spear-phishing, supply chain attacks, and the deployment of custom malware to infiltrate and maintain access to targeted networks. Their primary goal is to steal sensitive information and intellectual property to support Chinese national interests and provide strategic advantages.

The activities of APT31 have significant global implications, prompting extensive countermeasures from affected organisations and governments. Notably, in 2020, APT31 was linked to cyberattacks targeting the U.S. presidential election campaign, highlighting the group’s capability and intent to influence political processes.

APT 41

APT41, also known as Winnti, Barium, or Wicked Panda, is a Chinese state-sponsored cyber threat group known for its dual role in cyber espionage and financially motivated cybercrime. Active since at least 2012, APT41 targets a wide range of sectors, including healthcare, telecommunications, finance, and video game industries. The group employs diverse tactics, techniques, and procedures (TTPs), such as spear-phishing, supply chain compromises, and the use of custom malware to infiltrate networks. APT41 is particularly notable for its ability to pivot from traditional espionage activities to financially driven attacks, including ransomware and cryptocurrency mining.

The activities of APT41 have led to significant economic and security repercussions globally. In September 2020, the U.S. Department of Justice charged five Chinese nationals associated with APT41 with hacking into over 100 companies and entities worldwide.

These groups are composed of highly skilled hackers and often operate under the direction of the Chinese government, particularly the Ministry of State Security (MSS) and the People’s Liberation Army (PLA).

The What & The Why – China’s Motivations For Stealing Data

“Know yourself and know your enemy, and you shall never be defeated.”

Chinese Advanced Persistent Threats (APTs) target a wide range of data across various sectors. The specific data targeted and stolen can vary depending on the APT group and their specific objectives, but generally includes the following types:

  1. Intellectual Property (IP) and Trade Secrets:
    • Technological innovations: This includes sensitive information from sectors where technological innovation is key, such as aerospace (e.g., designs for new aircraft or satellite technology), biotechnology (e.g., genetic research), semiconductors (e.g., chip designs), and automotive (e.g., electric vehicle technology). The aim is often to reduce the time and cost associated with research and development by acquiring innovations from other nations.
    • Manufacturing processes: This encompasses proprietary methods, production techniques, and formulas used in manufacturing. For example, a pharmaceutical company’s proprietary process for producing a drug or an electronics company’s methods for fabricating microchips.
  2. Corporate Data:
    • Strategic plans: Corporate strategies can include market expansion plans, new product launches, or competitive tactics. Accessing this information gives competitors an unfair advantage.
    • Client and partner information: Information about key clients, partners, and their contracts or negotiations can be exploited to undercut or sabotage business deals.
    • Employee data: Personal information about employees, such as social security numbers, addresses, and employment history, can be used for targeted attacks or to compromise individuals who hold critical positions within an organisation.
  3. Government and Military Information:
    • Defence and military secrets: This includes detailed information about defence systems, weapons designs, military operational plans, and intelligence reports. Such data is critical for national security and military advantage.
    • Diplomatic communications: Sensitive communications between diplomats, government officials, and international bodies. This can provide insights into negotiation tactics, foreign policy strategies, and international relations.
  4. Healthcare Data:
    • Patient records: Patient data includes medical histories, diagnoses, treatments, and personal identification information. This data is valuable not only for identity theft but also for crafting highly targeted social engineering attacks.
    • Medical research: Data from clinical trials and research into new treatments and drugs is invaluable for both economic and public health reasons. Stealing this data can provide a competitive edge in the pharmaceutical industry.
  5. Financial Data:
    • Banking information: Includes account numbers, transaction histories, credit card information, and other financial records. This data can be used for financial fraud or to gain insights into the financial health of organisations.
    • Payment systems: Information related to the security and operation of payment processing systems, such as those used in banking and retail. Compromising these systems can lead to large-scale financial theft or disruption.
  6. Energy and Infrastructure Data:
    • Operational data: Details about the daily operations of critical infrastructure such as power grids, water supply systems, and telecommunications networks. This information can be used to disrupt services or to understand and replicate operational efficiencies.
    • Designs and security details: Blueprints and security protocols for infrastructure facilities, which can be used to plan attacks or unauthorised access.
  7. Academic and Research Data:
    • Scientific research: Data from academic research projects, particularly those in cutting-edge fields like artificial intelligence, quantum computing, and nanotechnology. This can accelerate a nation’s technological progress by acquiring the latest scientific breakthroughs.
    • Educational resources: Curricula, exam results, and other educational materials can be used to understand and influence the educational standards and outputs of other countries.

The Where – Understanding Which Nations Are Targeted

Chinese Advanced Persistent Threat (APT) groups, which are often associated with state-sponsored cyber activities, have targeted a wide range of countries over the years. Some of their primary targets include:

  1. United States:
    • Chinese APT groups have consistently targeted U.S. government agencies, including defence, diplomatic, and intelligence entities, to gather political and military intelligence.
    • Additionally, they have sought to steal intellectual property from U.S. corporations, particularly in the technology, aerospace, healthcare, and energy sectors.
    • Some notable incidents include the hacking of the Office of Personnel Management (OPM) in 2015, which compromised the sensitive personal data of millions of federal employees, and the targeting of defence contractors involved in sensitive military projects.
  2. European Countries:
    • European nations have been targeted for intellectual property theft, economic espionage, and political influence operations.
    • Chinese APT groups have focused on stealing cutting-edge technology, research, and development data from industries such as aerospace, automotive, telecommunications, and pharmaceuticals.
    • European governments and diplomatic institutions have also been targeted for intelligence gathering and monitoring political developments.
  3. Asian Countries:
    • China’s regional rivals, such as Japan and South Korea, have been targeted for political and military intelligence gathering, as well as stealing advanced technology.
    • Countries like India have experienced cyber intrusions aimed at accessing sensitive government information, military strategies, and technological advancements.
    • Southeast Asian nations have been targeted for economic espionage, particularly related to infrastructure projects, natural resources, and geopolitical influence.
  4. Taiwan:
    • Due to the ongoing political tensions between China and Taiwan, Taiwanese government agencies, defence contractors, and organisations have been frequent targets of Chinese cyber espionage.
    • The aim is to gather intelligence on Taiwan’s defence capabilities, political developments, and cross-strait relations.
  5. Australia:
    • Australian government institutions, defence contractors, and companies across various sectors have been targeted for intellectual property theft, economic espionage, and monitoring of political developments.
    • Notable incidents include cyber intrusions targeting universities and research institutions to steal sensitive research data and technology.
  6. Canada:
    • Canadian government agencies, particularly those involved in defence, foreign affairs, and natural resources, have been targeted for intelligence gathering.
    • Chinese APT groups have also targeted Canadian companies in sectors such as aerospace, telecommunications, and mining for economic espionage purposes.
  7. Africa and Latin America:
    • While less extensively reported, there have been instances of Chinese cyber espionage targeting countries in Africa and Latin America.
    • These activities often revolve around gaining access to natural resources, monitoring infrastructure projects, and influencing political developments in alignment with China’s strategic interests.

Overall, Chinese APT groups demonstrate a global reach in their cyber operations, driven by motivations such as geopolitical competition, economic advantage, and technological advancement. They employ sophisticated techniques to infiltrate networks, exfiltrate data, and maintain persistent access for intelligence gathering and other strategic objectives.

The When – A Timeline of Chinese Threat Actor Activity

“If you know the enemy and know yourself, you need not fear the result of a hundred battles.”

For 2,500 years, the teachings of Sun Tzu have formed the backbone of Chinese military doctrine. As his text, The Art of War, has become more popularised, its teachings have been applied across other walks of life, particularly business and governance. Chinese offensive cyber policy follows the same principles, and the activity discussed above shows how China uses cybercrime to gather information and intelligence in support of a broad range of objectives.

Over the years, the finger of blame has been levelled at China for some of the biggest data breaches and incidents of corporate espionage.  We look at some of these below:

Chinese Data Breaches

The How – Common TTPs Utilised By Chinese Threat Actors

Chinese Advanced Persistent Threat (APT) groups employ various sophisticated techniques to steal data from targeted organisations. Their methods often involve multiple stages, including reconnaissance, initial compromise, establishing a foothold, escalating privileges, internal reconnaissance, data exfiltration, and covering their tracks. Here are some common techniques and tactics used by Chinese APTs:

  1. Reconnaissance

Chinese APTs conduct thorough reconnaissance to tailor their attacks effectively:

  • Open Source Intelligence (OSINT): Gathering information from social media platforms, corporate websites, and public records to identify key personnel and network architecture.
  • Phishing Campaigns: Utilising spear-phishing emails targeting specific individuals within an organisation to collect credentials or deliver malware. For example, APT41 has been known to send emails mimicking trusted contacts or business partners.
  2. Initial Compromise

Common methods for initial network penetration by Chinese APTs include:

  • Spear-Phishing Emails: Highly targeted emails containing malicious attachments or links. APT10 frequently used this method to deliver malware like PlugX or Poison Ivy.
  • Exploiting Zero-Day Vulnerabilities: Identifying and exploiting vulnerabilities before they are publicly known. APT3, for instance, has leveraged zero-days in widely used software such as Adobe Flash and Internet Explorer.
  • Supply Chain Attacks: Compromising software updates or hardware components. APT41 has been implicated in attacks on software supply chains, embedding malware in legitimate software updates.
  3. Establishing a Foothold

Once access is gained, Chinese APTs work to maintain a persistent presence:

  • Malware Deployment: Installing Remote Access Trojans (RATs) like Sakula, used by APT10, or variants of the Cobalt Strike framework employed by APT41.
  • Setting Up Command and Control (C2) Channels: Creating secure channels to communicate with infected systems. APT41 often uses DNS tunnelling and HTTP/S protocols to evade detection.
  4. Privilege Escalation

To gain higher privileges, Chinese APTs use various techniques:

  • Credential Dumping: Tools like Mimikatz are frequently used by groups such as APT41 to extract credentials from Windows systems.
  • Exploiting Privilege Escalation Vulnerabilities: Utilising known vulnerabilities in operating systems and applications. APT3 has exploited vulnerabilities in Windows to escalate privileges and move laterally within networks.
  5. Internal Reconnaissance

Mapping the internal network to locate valuable data involves:

  • Network Scanning: Using tools like Nmap to identify live hosts and services. APT10 often employs custom network scanning tools.
  • Lateral Movement: Utilising credentials and tools like PsExec or WMI to move across the network. APT41 is known for its proficiency in lateral movement, using legitimate administrative tools to avoid detection.
  6. Data Exfiltration

Stealing data while avoiding detection is critical:

  • Data Compression and Encryption: Compressing and encrypting data to expedite transfer and evade detection. APT10 has been known to use tools like WinRAR for compression and encryption.
  • Steganography: Embedding data within other files or images. APT groups may use steganography to hide data within innocuous files.
  • Covert Channels: Employing techniques like DNS tunnelling or HTTPS to transfer data. APT41, for example, has used custom protocols to exfiltrate data over HTTPS.
  7. Covering Tracks

Chinese APTs employ various methods to avoid detection and analysis:

  • Log Deletion and Manipulation: Removing or altering logs to erase evidence of their activities. APT10 has been observed cleaning up after themselves by deleting logs and temporary files.
  • Use of Proxy Chains: Routing traffic through multiple compromised systems to obscure the origin of their actions. APT41 often uses a series of compromised machines to route their traffic, making it difficult to trace.
  • Anti-Forensic Techniques: Using tools to thwart forensic investigations, such as wiping tools or encrypting malware payloads. APT3 has been known to employ these techniques to hinder analysis.
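On the defensive side, a common heuristic for spotting the DNS tunnelling mentioned above is to look for unusually long, high-entropy query labels. A rough sketch follows; the thresholds are illustrative only, and real detection needs far more context (query volume per client, domain reputation, and so on):

```python
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy of a DNS label, in bits per character."""
    counts = Counter(label)
    n = len(label)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_like_tunnelling(qname: str, max_label_len=40, entropy_threshold=3.5) -> bool:
    """Crude heuristic: very long, high-entropy subdomain labels are a
    common sign of data encoded into DNS queries. Thresholds are invented."""
    labels = qname.rstrip(".").split(".")
    longest = max(labels, key=len)
    return len(longest) > max_label_len and label_entropy(longest) > entropy_threshold

print(looks_like_tunnelling("www.example.com"))  # False
encoded = "a9f3c1d88e02b47f6a1c0de9b3f5a7c2e4d6f8a0b2c4d6e8f0a1b3c5d7e9f1"
print(looks_like_tunnelling(encoded + ".tunnel.example.com"))  # True
```

Ordinary hostnames are short and low-entropy, so they pass; hex- or base32-encoded payload labels trip both checks.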

In Conclusion

China’s use of cybercrime to obtain data is a testament to the strategic importance of information in the modern world. As China continues to leverage cyber capabilities to advance its national interests, the global community faces the challenge of balancing technological advancement with security and ethical considerations.

The ongoing cyber skirmishes highlight the need for robust international norms and cooperation to address the complexities of cyber espionage and cybercrime, ensuring a secure and stable digital future for all.

By understanding the scope, motivations, and methods behind China’s cyber activities, the international community can better prepare and respond to the evolving landscape of cyber warfare. As data becomes increasingly integral to national security and economic prosperity, safeguarding it against state-sponsored cybercrime will be crucial in maintaining global stability and trust in the digital age.

The future of cybersecurity will depend on collective efforts to strengthen defences, establish clear policies, and foster international collaboration to mitigate the risks posed by cyber espionage and cybercrime.

UPDATE – join us on the 13th June for the accompanying webinar.

Further Reading

UK Electoral Commission Breach

https://www.bbc.co.uk/news/uk-politics-68652374

MOD Payroll Breach

https://www.bbc.co.uk/news/uk-68967805

How does China use its data

https://www.nzz.ch/english/how-does-china-use-the-personal-data-it-steals-ld.1828192

https://www.forbes.com/sites/heatherwishartsmith/2023/11/04/trafficking-data-chinas-digital-sovereignty-and-its-control-of-your-data/?sh=2b78939543a4

F22/F35 Program Breaches

https://www.sandboxx.us/news/the-man-who-stole-americas-stealth-fighters-for-china

Photo by Li Yang on Unsplash.

Investigation, Product news

Cracking CAPTCHAs for fun and profit

Through synthetic training sample dataset generation and ML training.

Preface

Cracking CAPTCHAs is already a well-documented and established process, which this article looks to expand on. We approach it with a general view of how we have cracked CAPTCHAs under unfavourable conditions. This article is not meant to be a how-to or detailed guide to replicating our steps, but it may give you some inspiration for your specific challenge.

We believe that the methods laid out in this article are novel and significantly improve the efficiency of automated CAPTCHA solving in contrast to traditional approaches. Especially when considering a target CAPTCHA system with poor sample harvesting opportunities.

Ethics

We bypass human verification checks to maintain automatic information collection pipelines. The use of the methods we have developed only extends as far as what is required to automate our collection process. 

If a CAPTCHA or other human verification system is poorly designed (not adequately rate limited, condition checked, and so on), bypassing it at scale may, in the worst case, amount to a DDoS (Distributed Denial of Service) attack. A correctly implemented human verification system should mitigate this even when bypassed. At the less severe end, unethical manipulation of these verification systems can lead to spam posts and comments and other undesirable automated “bot” interaction. We do not condone this type of use.

The Problem

There are several well-established methods to automate the solving of CAPTCHAs, depending on their complexity. Starting at the easy end of the spectrum, we are presented with a fairly basic alphabetic CAPTCHA.

With a simple distortion background, one might apply denoise filters or Gaussian blurring to reduce or remove the randomly placed “stars”, the random dot pixels present in the background.

This process gives us a less noisy picture, which we can then convert to grayscale; if the source sample is a colour image, doing so improves edge detection.

The image can then be processed through a standard OCR (Optical Character Recognition) library; in our experience this can achieve a 0.1% failure rate, yielding excellent, stable solutions.
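For the easy end of the spectrum, the preprocessing described above can be sketched in a few lines of Pillow. This is an illustrative sketch rather than our exact pipeline; the `preprocess` helper is a hypothetical name, and the OCR call is left as a comment since the choice of library (pytesseract, for example) will vary:

```python
from PIL import Image, ImageFilter

def preprocess(path: str) -> Image.Image:
    """Reduce background speckle noise and convert to grayscale
    so that downstream OCR has cleaner edges to work with."""
    img = Image.open(path)
    # Median filtering knocks out isolated "star" pixels while
    # preserving character edges better than a plain blur would.
    img = img.filter(ImageFilter.MedianFilter(size=3))
    # Grayscale conversion improves edge detection on colour sources.
    return img.convert("L")

# The cleaned image can then be handed to any OCR library, e.g.:
# text = pytesseract.image_to_string(preprocess("captcha.png"))
```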

In some cases, a good test of CAPTCHA ease of solvability is to feed it to Google Translate as an image; have Google Translate attempt to read the text and translate the letters back into English. If it can, then you have a very good chance that rudimentary OCR libraries will also work for you.

But this article is not about the easy end of the challenge…

What we are dealing with is a CAPTCHA that is alphanumeric and mixed case, with random character placement and rotation, and random disruption lines across both the image and the characters. Furthermore, and most importantly, a point we will discuss in more detail: the target source is a Tor onion website that at the best of times loads slowly and at the worst of times is offline or responds with backend timeout errors.

The complexity of the source CAPTCHA makes it nearly impossible to read effectively by OCR. The background’s random line arrangement (an outward star pattern) provides heavy disruption, and each character is independently disrupted with seemingly random lines of varying length and width. Combined with the offset angle of each character, this is beyond what most OCR or OpenCV methods can handle.

Therefore, for more complex CAPTCHAs, image manipulation (removing noise, grayscaling and so on) is typically not sufficient. These challenges usually require machine learning to reach a reasonable failure rate and sufficient solving speed.

The biggest factor in achieving a model that solves accurately is a large enough sample base; in some cases, many thousands of samples are required for training. Certainly, when dealing with a CAPTCHA that mixes upper-case, lower-case and numerical characters, with randomisation across all of these plus randomised disruption patterns and lines, the larger the sample set, the more accurate the trained model will be.

So how do you get thousands of samples from a source that is slow to load and has poor availability, both consequences of the source being a Tor website? Harvesting samples this way would be far too inefficient, and we can’t hang around!

Even with a target source that responds reasonably quickly, has good availability, and can be harvested without aggressively hitting rate limits, who would want to sit there endlessly solving eight thousand captchas to feed to an optical character recognition model? 

I know that’s not going to be me! Sure, there are options to outsource or crowdsource these problems, but those take time and money and are likely to introduce errors into our training sample data. Neither is desirable, so how do we get 100% accurate sample data cheaply, without human solving, without having to harvest the source, and in a way that can scale?

The Solution

The solution we came up with was first to not focus on solving the CAPTCHAs, training our model, or anything else that was a direct outcome of the end goal we were driving towards. Instead, we looked at how the CAPTCHAs are constructed: what they look like, and what their elemental parts are.

We know harvesting is not an optimal option, so we have put that aside. That leaves us with a handful of maybe 20 harvested, solved CAPTCHA samples. Nowhere near enough to start training, but enough to start studying the sample set we have.

If we break the CAPTCHA’s construction down piece by piece, in a way “reverse engineering” it, we might either: 1) be able to generate our own synthetic CAPTCHAs on demand and at scale, all 100% accurately pre-solved, or 2) understand the method of construction well enough to identify the library or process used to build the CAPTCHA and reimplement it ourselves, with the same 100% accurately pre-solved outcome.

In our case, this article follows the former path. We chose it because time spent trying to identify the particular CAPTCHA library produced no exact match, and in the interest of not burning more time we decided to attempt our own synthetic CAPTCHA generation process.

To create our CAPTCHAs, we used Pillow (a fork of PIL), a Python image manipulation library that offers a wide range of features well suited to the job at hand.

We start by defining a few values that we have observed to be fixed, such as a defined image size (in our case, 280 by 50 pixels) and use this to create a simple image. 

Then we define our letter set (a to z, A to Z, 0 to 9) as we know these to be fixed. 

Using `random.choice` we can pick the required number of characters; in our case, the CAPTCHA uses a fixed length of six characters.

The text font is also important, and from our source samples we can see it is fixed; we therefore match the font type as closely as possible. The font size also remains constant. This will be important in ensuring that training is as accurate as possible when our model is presented with real sample data.
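Pulling those fixed observations together, a minimal scaffold for the generator might look like this. The names are our own, and Pillow’s default bitmap font stands in for the matched font used in practice:

```python
import random
import string
from PIL import Image, ImageFont

# Fixed values observed from the harvested samples.
WIDTH, HEIGHT = 280, 50                          # canvas size in pixels
CHARSET = string.ascii_letters + string.digits   # a-z, A-Z, 0-9
LENGTH = 6                                       # fixed CAPTCHA length

def new_captcha_text() -> str:
    """Pick the six characters our synthetic CAPTCHA will contain."""
    return "".join(random.choice(CHARSET) for _ in range(LENGTH))

def new_canvas() -> Image.Image:
    """A blank white canvas matching the source dimensions."""
    return Image.new("RGB", (WIDTH, HEIGHT), "white")

# The real generator loads a font matched to the source samples;
# the default bitmap font is a stand-in here.
FONT = ImageFont.load_default()
```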

To kick things off, the process carefully establishes the dimensions of the image canvas, akin to laying out a pristine piece of paper before beginning a drawing. Then, with a deft stroke, we construct a blank background canvas, pristine and white, awaiting the arrival of the CAPTCHA characters. But here’s where the true artistry takes centre stage: the process methodically layers complexity onto each character.

With each character in the CAPTCHA text, our process doesn’t simply slap it onto the canvas; instead, it treats each letter as an individual brushstroke, adding specific characteristics at every turn. We begin by precisely measuring the width and height of each character, ensuring that characters will not be chopped off the edges, correctly fit and fill the CAPTCHA, and that they resemble the source CAPTCHA text. Then, like with the source samples, we introduce randomness into the mix, spacing out the letters with varying degrees of separation, akin to scattering scrabble pieces.

We are also introducing a touch of chaos by randomly rotating each character, giving them a tilt that defies conventional alignment. This clever sleight of hand resembles the source samples accurately and adds to the difficulty level of solving this CAPTCHA. 

Yet the process doesn’t stop there. No, it goes above and beyond, adorning our canvas with a riotous display of crisscrossing lines, as if an abstract artist had gone wild with a brush. These random lines serve as a digital labyrinth, obscuring the text beneath a veil of confusion and intrigue.

We then add and overlay lines of random length and weight across each character, aligned to the character’s angle closely matching that of the source sample. 

Now that we have a way to populate our image canvas, we have a working framework with which we can iterate to get an output that resembles the source samples as closely as possible. 
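The layering described above (per-character tiles, random rotation and spacing, then disruption lines) can be sketched as follows. This is an illustrative reconstruction rather than our exact production generator; the tile sizes, angle ranges and line counts are assumed values to be tuned against real samples:

```python
import random
from PIL import Image, ImageDraw, ImageFont

def draw_captcha(text: str, size=(280, 50)) -> Image.Image:
    """Render each character on its own transparent tile, rotate it
    by a random angle, paste it onto the canvas with variable spacing,
    then overlay random disruption lines across the image."""
    canvas = Image.new("RGB", size, "white")
    font = ImageFont.load_default()  # stand-in; match the real font in practice
    x = 10
    for ch in text:
        # Draw the glyph on a transparent tile so it can be rotated alone.
        tile = Image.new("RGBA", (30, 40), (0, 0, 0, 0))
        ImageDraw.Draw(tile).text((5, 5), ch, fill="black", font=font)
        tile = tile.rotate(random.uniform(-35, 35), expand=True)
        canvas.paste(tile, (x, random.randint(0, 8)), tile)
        x += tile.width - random.randint(5, 12)   # variable spacing
    # Crisscrossing background lines obscure the text.
    draw = ImageDraw.Draw(canvas)
    for _ in range(12):
        start = (random.randint(0, size[0]), random.randint(0, size[1]))
        end = (random.randint(0, size[0]), random.randint(0, size[1]))
        draw.line([start, end], fill="black", width=random.randint(1, 2))
    return canvas
```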

For now, we generate a few hundred samples; each image file is named after the randomly selected CAPTCHA text it contains, essentially giving us a sample set that has already been solved.
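Naming each file after its solution makes labelling free. A hedged sketch of that batch step, with `render` standing in as a placeholder for the full synthesis routine described above:

```python
import os
import random
import string
from PIL import Image

def generate_batch(count, out_dir,
                   render=lambda t: Image.new("RGB", (280, 50), "white")):
    """Save `count` synthetic CAPTCHAs, each named after its solution
    text, yielding a pre-solved, pre-labelled training set.
    `render` is a placeholder for the real CAPTCHA renderer."""
    charset = string.ascii_letters + string.digits
    os.makedirs(out_dir, exist_ok=True)
    labels = []
    for _ in range(count):
        text = "".join(random.choice(charset) for _ in range(6))
        render(text).save(os.path.join(out_dir, f"{text}.png"))
        labels.append(text)
    return labels
```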

After that, we compared each iteration’s output closely to the source and made tweaks and adaptations. For each iteration of the generator we looked at just one specific attribute, to simplify the synthesis process: first adjusting the random scattered background lines (their length, width and count), then tweaking the letter placement and random angles to closely match the apparent pseudo-randomness of the sample data set.

Following sufficient tweaking and iteration, we were producing a CAPTCHA that, visually at least, very closely matched our source samples; so closely that, mixed with real samples, it is difficult to distinguish. This is the level of synthesis we were looking to achieve.

Example synthetic captcha on the left, real on the right

Next steps

Now that we have a way to produce synthetic CAPTCHAs that very closely match our target, it’s time to produce a few thousand of them. This is easily and quickly done by specifying the total count in our process loop, and out pop 5,000 freshly generated, pre-solved CAPTCHAs, all nicely labelled and ready to be fed into our training process.

For model training, we chose the TensorFlow framework alongside the ONNX Runtime machine learning accelerator. This combination worked well for both training accuracy and efficiency. All training was conducted on an Nvidia GPU.

Following initial training, using just our best synthetic CAPTCHA samples as the data set, we achieved a CER (character error rate) of 3.26%. For a first run of a model trained against a purely synthetic data set, that was not bad at all. But we knew we could do better.
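For reference, CER is simply the Levenshtein edit distance between the predicted text and the ground truth, divided by the length of the ground truth. A small self-contained implementation:

```python
def character_error_rate(predicted: str, truth: str) -> float:
    """CER = Levenshtein edit distance / length of the ground truth."""
    prev = list(range(len(truth) + 1))
    for i, p in enumerate(predicted, 1):
        cur = [i]
        for j, t in enumerate(truth, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (p != t)))  # substitution
        prev = cur
    return prev[-1] / len(truth)

# One wrong character in a six-character solve:
# character_error_rate("aB3xYq", "aB3xYz")  -> 1/6, about 0.167
```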

Now that we had a model to work with, we could use it to start solving real target CAPTCHAs. This would let us build a larger pool of real, solved CAPTCHA samples and mix those in with our synthetic set. We aimed for 5,000 synthetic and 1,000 real harvested CAPTCHAs using our newly trained, albeit unoptimised, model.

With a framework in place to interface with the target website, collect CAPTCHAs, generate a text prediction, check it against the website and, if solved, store the labelled CAPTCHA image, we generated about 1,000 samples over a short time.

Feeding this back into the training data mix, we dropped the CER to 2.77%.


We were confident that even a 2.77% CER was better than a human could achieve, and that our methodology was working.

Our remaining task was to iterate the model once more, using this slightly more optimised model to generate a somewhat larger set of labelled real CAPTCHAs.

We were able to go from the initial model, with a worse CER (orange line) to the best model (green line) in only a few training iterations.

The model training improvements are best shown in the graph below with each improvement yielding a lower CER, for longer (more stable) and at a sooner point in time. 

At that point we settled on a final model with a CER of 1.4%, opting for an optimal mix of real and synthetic CAPTCHAs.

Our final ML model diagram: 

Once the efficacy of this model was validated, it was simply a task of plugging it into the collection pipeline and bringing it live in our production collection system. The automated solver has been running stably ever since; most of the disruption we have observed has been down to the target source going offline and being unavailable.

Bias and Variance

A key consideration during training was to be aware of, and where possible mitigate, overfitting and overtraining our model. Instead of the terms “overfitting” and “overtraining”, I prefer to use bias and variance for the two potential pitfalls of ML training, as they better describe the undesirable conditions that may occur. Without diving too deeply into the underlying theory (fully understanding it would probably need a PhD), the best description my simple mind can offer is as follows.

Because our novel, one might say clever, iterative process trains a CAPTCHA solver from a very small original source data set, we are by its nature potentially adding bias into the training. For example, any data sets solved by the first model will be solved by a model with a predefined bias towards a particular set, style or character combination, potentially producing a new data set biased towards what that previous model was good at solving, thereby amplifying the bias in the next model’s training.

This bias would result in a real-world regression of CER, as the model would be unoptimised for solving a wider range of character combinations and randomisation characteristics.

Our second pitfall slides between both extremes: an overly varied training set on one side, or an insufficiently varied one (creeping back into bias) on the other. Although we could train a single model to solve many different types of CAPTCHA, beyond just this one example, using a very varied data set, doing so without careful tuning could result in “overfitting” the data set, introducing an unoptimised CER as the model essentially trains on more noise than signal.

We therefore considered both bias and variance closely, ensuring a healthy ratio of varied, correctly labelled real CAPTCHAs harvested from the source to synthetically generated CAPTCHAs with a randomly distributed character set. An optimal CER band was then found through iterative A/B testing of the data set mix and training iterations, until a stable plateau was identified.

Conclusion

We deployed a final model, trained on a mix of synthetic and real CAPTCHAs, achieving a CER of 1.4%. The automated solver integrates seamlessly into our production collection system, ensuring stability and efficiency.

By leveraging synthetic training data generation, we have advanced our CAPTCHA cracking. Our approach offers an effective and efficient solution without significant human involvement or effort, enabling effective automated data collection.

With this capability, we can add value for our customers by automating collection from otherwise programmatically inaccessible sources, where we would otherwise need a human to solve the CAPTCHA, access the page, insert any updates and then alert our customers. Automation is key to operating at speed and scale, especially when dealing with many hundreds of collection sources as we do.

Photo by Kaffeebart on Unsplash.

Investigation, Ransomware

Ransomware – State of Play February 2024

SOS Intelligence is currently tracking 180 distinct ransomware groups, with data collection covering 348 relays and mirrors.

In the reporting period, SOS Intelligence has identified 395 instances of publicised ransomware attacks.  These have been identified through the publication of victim details and data on ransomware blog sites accessible via Tor.  Our analysis is presented below:

LockBit has maintained its position as the most active and popular ransomware strain.

This is despite significant law enforcement interruption, the impact of which will be discussed further below.

Despite law enforcement action towards the end of 2023, ALPHV/Blackcat has maintained a strong presence online and continues to post victim data.  However, owing to how the ransomware process operates, these could be victims compromised before the law enforcement takedown of ALPHV/Blackcat infrastructure.

Increased activity has been identified amongst the BianLian, Play, QiLin, BlackBasta, 8base and Hunters ransomware strains.  This increase may be attributed to these strains absorbing affiliates from LockBit and ALPHV/Blackcat as those services went offline.

This month, Ransomhub, AlphaLocker, Mogilevich, & Blackout have emerged as new strains.  Mogilevich has been observed targeting high-value victims, including Epic Games, luxury car company Infiniti, and the Irish Department of Foreign Affairs.

Group targeting continues to follow familiar patterns in terms of the victim’s country of origin.

Attacks have increased in South American countries, particularly in Argentina, which may be a response to presidential elections in November 2023 in which the far-right libertarian Javier Milei was elected.

Targeting continues to follow international, geopolitical lines.  Heavy targeting follows countries that have supported Ukraine against Russia.  Attacks against Sweden continued as it pressed ahead with preparations to join NATO.   This highlights the level of support ransomware groups continue to show towards the Russian state, and they will continue to use cyber crime to destabilise and weaken Western and pro-Ukrainian states.

Manufacturing and Construction and Engineering have remained the key targeted industries for February.  These industries would be more reliant on technology to continue their business activities, and so it logically follows that they would be more likely to pay a ransom to regain access to compromised computer systems.  The Financial, Retail & Wholesale, Legal, and Education sectors have also seen increased activity over the period.  Health & Social Care has seen a significant increase over the period.  This is likely in response to several groups, including ALPHV/Blackcat reacting to law enforcement activity and allowing their affiliates to begin targeting these industries.

We are seeing a shift in tactics for certain industries, particularly those where data privacy carries a higher importance (such as legal or healthcare), where threat actors are not deploying encryption software and instead relying solely on data exfiltration as the main source of material for blackmail and extortion.

LockBit Takedown

On 20 February, an international law enforcement effort was successful in taking control of and shutting down the infrastructure of the LockBit ransomware strain.  Much has been disclosed and said regarding the takedown, some of it speculative, however, it was confirmed by the UK’s National Crime Agency (NCA) and the US’s Federal Bureau of Investigation that control of their dark web domains and infrastructure was obtained, providing them with significant information regarding the activity of the LockBit group and its affiliates.

Since then, multiple LockBit blog sites have re-emerged, and new data continues to be published.  However, it is not clear whether or not this is new activity since the takedown.  It is more likely that these are victims compromised before law enforcement activity which are only now being blackmailed with data release.

We are continuing to monitor the ransomware landscape at this time to properly analyse the impact this takedown will have.  This event has had a significant impact on the reputation of the LockBit group, with many affiliates angry at the perceived lack of operational security resulting in the possible identification of their real-world identities.  We are anticipating many of these will look to gain access to the affiliate programs of other strains, and so we will expect to see a significant increase in reported attacks from those strains in the coming weeks and months.  As for LockBit, the threat actors behind the group remain active, and it is likely we will see a re-emergence as a new group in due course.

ALPHV/Blackcat exit scam

The ALPHV/Blackcat group is making headlines for all the wrong reasons.  After first having their leak site taken over by law enforcement, they now appear to have absconded with affiliate funds.

In February 2024, ALPHV/Blackcat announced an attack against healthcare provider Change Healthcare (part of United Health Group).  Following this, a ransom of $22 million was paid to ALPHV.  Several days later, the responsible affiliate took to the cybercrime forum RAMP to state that they hadn’t been paid their share of the spoils (potentially up to 90%).  It appears now that the group has collapsed from within, ending with a final exit scam as they shut down operations.  The group have further claimed to have sold their source code in the process, so we may see copycat groups emerge in due course.

While the dissolution of a notorious group should be celebrated, especially following successful law enforcement activity, it should be noted that shutting down in this way presents a significant risk to recent victims.  The affiliate responsible for the Change Healthcare data, as well as affiliates who may have been similarly affected, are likely to still hold victim data and so, for those victims, there remains a risk that they may be further blackmailed as affiliates attempt to recoup their lost earnings.

Photo by FLY:D on Unsplash

Investigation, Ransomware

Ransomware – State of Play January 2024

SOS Intelligence currently tracks 173 distinct ransomware groups, with data collection covering 324 relays and mirrors.

In the reporting period, SOS Intelligence has identified 274 instances of publicised ransomware attacks.  These were identified through the publication of victim details and data on ransomware blog sites accessible via Tor.  Our analysis is presented below:

Threat Actor Activity

Lockbit has remained the market leader, maintaining a market share of approximately 23%.  Blackbasta, Akira, Trigona, 8base and Bianlian have seen significant increases in activity over the month, while there have been decreases in activity from Cactus, Werewolves, Siegedsec, Dragonforce, and Play.

January is typically a quieter month for ransomware threat actors.  In 2022, the volume of attacks was 17% less than the yearly average. In 2023, this increased to 54%.  This slowing of activity is likely due to the proximity of several national and religious holidays observed globally between December and January.  However, in 2024, we observed a significant increase in attacks across January.  Two factors stand out as possible causes for this:

  1. Ongoing global hostilities

It has been observed that pro-Russian cybercriminal groups have been vocally supportive of the ongoing war in Ukraine, and have diverted significant resources in targeting the supporters of Ukraine.  Similar patterns have been noted in the targeting of victims in countries which have shown support for Israel.

Although ransomware groups and threat actors are primarily financially motivated, their resources and skills are often seen turned against perceived enemies of the state, blurring the lines between criminal and hostile state activity.

  2. Counter Ransomware Initiative

The Counter Ransomware Initiative (CRI) is a US-led group of 50 nations and organisations dedicated to promoting solidarity and support in the face of ransomware activity.  In October 2023, CRI members pledged not to pay ransoms when faced with cyber attacks.

As a result, it is expected that the number of observed postings to ransomware blogs will increase as victims no longer pay ransoms.  This may show an increase in victims’ data being published, rather than an overall increase in the number of victims.

Country Targeting

As stated above, ransomware threat actors’ choice of targets can be politically motivated, as well as financially.  This is why we continue to see the majority of attacks target the USA, UK, Canada, France, Germany and Italy.  As members of the G7, these countries have strong economies and therefore possess lucrative targets for financially-minded threat actors.  However, this surge in activity may be politically motivated.  Continued support for Israel and Ukraine may give certain threat actors additional motivation to target those countries.

This month has seen an increase in attacks against victims in Sweden.  Sweden is in the process of joining NATO, which appears to have presented the country as a target for pro-Russian threat actors in support of the Russian state.  Sweden’s membership would increase NATO’s presence in and around the Baltic Sea, a key waterway for allowing the Russian Navy into the North Sea and onward into the Atlantic.  Furthermore, it would increase a NATO presence close to Russia’s border with the rest of Europe.

Industry Targeting

Manufacturing, Construction & Engineering, and Logistics & Transportation have remained the key targeted industries for January.  These industries would be more reliant on technology to continue their business activities, so it logically follows that they would be more likely to pay a ransom to regain access to compromised computer systems.  The Financial and Education sectors have also seen increased activity over the period.

We are seeing a shift in tactics for certain industries, particularly those where data privacy carries a higher importance (such as legal or healthcare), where threat actors are not deploying encryption software and instead relying solely on data exfiltration as the main source of material for blackmail and extortion.

ALPHV/Blackcat

In December 2023, law enforcement agencies from multiple jurisdictions targeted the ALPHV/Blackcat ransomware group, disrupting the groups’ activities and seizing their domain.  Shortly after, the domain was “un-seized” before law enforcement agencies took back control.  As a result of this action, the operators behind ALPHV/Blackcat have publicly withdrawn their rules regarding the targeting of Critical National Infrastructure (CNI), in apparent revenge for law enforcement activity.

Since the takedown, ALPHV/Blackcat activity has slowed but does not appear to have stopped.  In recent weeks they claim to have targeted and stolen confidential and sensitive data from Trans-Northern Pipelines in Canada, as well as Technica, a contractor working with the US Department of Defence, FBI, and USAF. 

The veracity of these claims is still being investigated, and so should be taken with a grain of salt.  The ALPHV/Blackcat group has been hurt by law enforcement, impacting their operations and losing them customers.  Therefore, it is possible that exaggerated claims are being made to save face and their reputation amongst the cybercrime community.

Photo by FLY:D on Unsplash
