Other Cyber Attacks

Codecademy Team
Learn about some of the more complicated or esoteric types of cyber attacks.

What we’ll be learning

Some types of cyber attacks are more complicated than others. In this article, we’ll be taking a look at the more complicated ones. This includes attacks that rely on compromising an intermediate target, as well as attacks that involve tampering with machine learning and artificial intelligence (AI) — sometimes by pitting one AI against another.

Supply chain attacks

What are supply chain attacks?

In cybersecurity, there is a concept known as pivoting, where an attacker compromises one computer, then uses that computer to compromise another. Supply chain attacks are the same idea, but with organizations instead of individual computers.

For example, suppose we’re targeting a defense contractor to steal their secret documents. The defense contractor probably has good security, as mandated by their contract with the government. However, we happen to know that the contractor uses a piece of commercial software developed by a different company with weaker security. By hacking the software company, we could inject malware into the software they develop, giving us a backdoor into every computer it is installed on. The next time the defense contractor updates that software, we gain access to their computers.

Practical examples

Many serious breaches have involved supply chain attacks that highlight how a single weak link in security can have far-reaching implications.


In 2013, a retailer called Target suffered a massive data breach, with an estimated 110 million people affected. The attackers first gained access to Target’s network by using credentials stolen from a heating, ventilation, and air conditioning (HVAC) company that had worked with Target. The company had access to Target’s network in order to monitor their HVAC and refrigeration systems, and the attackers were able to use that access to compromise Target’s network.

An image showing a Target store in the background as an attacker gets away with a shopping cart full of user credentials.


Another, more recent example is the SolarWinds breach, disclosed in December of 2020. SolarWinds is an IT management software company that provides software to many other organizations, including state and federal governments. Attackers were able to break into SolarWinds’ network and add malware to their products, which was then pushed to their clients via software updates. By compromising a single organization, the attackers were able to gain access to many others, including Fortune 500 companies and federal agencies.

An image showing a hacker with backdoors to governments, airlines, web browsers, servers, and more.

Adversarial AI attacks

AI and cybersecurity

AI is a very useful tool for cybersecurity, especially when trying to secure large or complicated environments. AI can be used to help identify abnormal or suspicious behavior, and report it to analysts who can investigate further.
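Real security products use far more sophisticated models, but the core idea of flagging abnormal behavior can be illustrated with a toy statistical sketch. Here, a simple detector (the function name and the failed-login scenario are invented for illustration) flags any observation that falls more than three standard deviations from a historical baseline:

```python
# A minimal sketch of anomaly detection: flag values that deviate
# sharply from a historical baseline. Real AI-based systems learn far
# richer models of "normal", but the principle is the same.
import statistics

def find_anomalies(baseline, observations, threshold=3.0):
    """Return observations far outside the baseline's normal range."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline)
    return [x for x in observations if abs(x - mean) > threshold * stdev]

# Baseline: typical daily failed-login counts for a user account.
baseline = [2, 3, 1, 4, 2, 3, 2, 3, 1, 2]

# Today's counts include one suspicious spike worth reporting.
print(find_anomalies(baseline, [2, 3, 250]))  # -> [250]
```

A flagged value wouldn’t trigger an automatic response on its own; as described above, it would be reported to an analyst for further investigation.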

However, like many cybersecurity tools, AI can also be used for more nefarious purposes, such as identifying vulnerabilities and methods of circumventing security. This is known as adversarial AI.

Machine learning

Some types of machine learning allow us to create algorithms that could not practically be programmed by hand. However, these algorithms have a downside: We don’t really understand how they make decisions. For example, if we train an algorithm to tell whether a picture is of a cat or a dog, it’s very difficult to determine the algorithm’s “thought process”, even when its decisions are accurate.

Tainted training data

Machine learning algorithms require training data to learn how to function. If we want an algorithm to tell cats from dogs, we’re going to need a lot of pictures of both. This introduces an opportunity for malicious actors to influence the algorithm: By modifying the data used to train the algorithm, they can modify how the resulting algorithm will function. Maliciously modified training data is known as tainted training data.

An image showing garbage, or input, labeled “Training Data”, going into a machine. The machine is labeled “AI Training Machine”. The machine is outputting a garbage robot.
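To see how tainted data changes a model’s behavior, here is a toy sketch (not a real training pipeline) using a one-feature nearest-centroid classifier. The “ear pointiness” feature and all of the numbers are invented for illustration; the attacker taints the data by adding dog-like samples mislabeled as “cat”:

```python
# Toy demonstration of tainted training data: the same classifier,
# trained on clean vs. maliciously mislabeled data, gives different
# answers for the same input.

def train_centroids(samples):
    """Compute the mean feature value per label (a nearest-centroid model)."""
    sums, counts = {}, {}
    for value, label in samples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, value):
    """Assign the label whose centroid is closest to the input."""
    return min(centroids, key=lambda label: abs(centroids[label] - value))

# Feature: a made-up "ear pointiness" score; cats score high, dogs low.
clean = [(0.9, "cat"), (0.8, "cat"), (0.7, "cat"),
         (0.2, "dog"), (0.1, "dog"), (0.3, "dog")]

# The attacker adds dog-like samples mislabeled "cat", dragging the
# learned "cat" centroid toward dog territory.
tainted = clean + [(0.1, "cat")] * 4

print(predict(train_centroids(clean), 0.35))    # -> "dog"
print(predict(train_centroids(tainted), 0.35))  # -> "cat"
```

The model itself is unchanged; only the data was poisoned, yet a dog-like input is now misclassified. This is what makes tainted training data hard to detect after the fact.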

Unfortunately, even data that hasn’t been tainted on purpose can be harmful: It’s easy for unconscious biases and unintentional oversights to create training data that is biased in a harmful way. Examples include image-recognition algorithms that offensively mislabel people of color or resume-assessment algorithms that are biased against women. When we use biased data to train machine learning algorithms, even if that bias is unintentional, the resulting algorithms will encode that bias and continue to perpetuate it.

AI vs. AI

While we may not be able to understand exactly how machine learning algorithms think, that doesn’t mean we can’t trick them. In fact, we have an excellent tool for developing ways to trick machine learning algorithms: other machine learning algorithms. By training one algorithm to fool another, we can create data that looks normal to us but will trick the target algorithm into giving nonsensical answers. (For example, we could trick a Google algorithm into thinking a cat is guacamole.)

An image showing that we can trick an AI into thinking that a picture of a cat is a picture of guacamole with edits that can't be seen by the naked eye.
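Attacks like the cat-to-guacamole trick typically use the target model’s gradients to find tiny perturbations that flip its output. As a hedged illustration only, here is a toy version of that gradient-sign idea against a hand-built linear classifier (the weights, features, and labels are all invented; real attacks target deep networks):

```python
# Toy adversarial-example sketch: nudge each input feature slightly in
# the direction that lowers the correct class's score -- the core idea
# behind gradient-sign attacks -- until the prediction flips.

def classify(weights, bias, features):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return "cat" if score > 0 else "guacamole"

def adversarial_nudge(weights, bias, features, step=0.05):
    """Push features against the weight vector until the label flips."""
    adversarial = list(features)
    for _ in range(100):
        if classify(weights, bias, adversarial) != classify(weights, bias, features):
            break
        # For a linear model, the gradient of the score w.r.t. each
        # feature is its weight, so step opposite each weight's sign.
        adversarial = [x - step * (1 if w > 0 else -1)
                       for w, x in zip(weights, adversarial)]
    return adversarial

weights, bias = [0.9, -0.4, 0.6], -0.5
cat_image = [0.8, 0.3, 0.7]                      # classified "cat"
tricked = adversarial_nudge(weights, bias, cat_image)

print(classify(weights, bias, cat_image))  # -> "cat"
print(classify(weights, bias, tricked))    # -> "guacamole"
```

Each feature moves by only a few small steps, yet the classification flips entirely, mirroring how imperceptible pixel changes can fool an image classifier.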

Fooling image recognition algorithms sounds funny until we remember that image recognition algorithms are used for things like autonomous vehicles, surveillance, and authentication. Other types of algorithms can be fooled too, including the algorithms used to detect suspicious behavior on computers or networks.

Protecting our AI

There’s no foolproof way to prevent adversarial AI attacks, but there are some steps that can be taken to help protect against these attacks:

  • Keeping training data secret can help prevent the data from being maliciously modified, and makes it harder for malicious parties to analyze the data.
  • Training algorithms to detect and reject adversarial inputs is also an option.


Cybersecurity is a broad field, and many other fields have cybersecurity aspects, from business logistics to machine learning. When we’re working on projects, it’s a good idea to think about their security implications: how they might be attacked and what consequences an attack might have. We don’t need to know everything about cybersecurity to be “good” at it! We just need to know enough to think critically about it and to effectively research the subjects we’re less familiar with.