Blue Team Kill Chain



In the information security lexicon, a kill chain describes the structure of an attack against an objective. While usually used to describe the phases of a red team’s operation, it’s also common in the information security literature for blue teams to have their own kill chain. Rather than describing the structure of an attack against an objective, the blue team kill chain describes the phases of detecting and responding to an organizational attack. Although there are a variety of different kill chain phases discussed in the information security literature, blue team kill chains generally include the following phases:

  • Gather baseline data
  • Detect
  • Alert
  • Investigate
  • Plan a response
  • Execute

Gather baseline data

Having adequate amounts of baseline data allows you to understand what your environment looks like when it is not under attack. It is difficult to know what is unusual for your network unless you have a good idea of what it usually looks like. To analogize, it is easier to find needles in a haystack if you have a thorough understanding of the characteristics of a haystack with no needles in it.

Gathering good baseline data means configuring effective logging, monitoring, and auditing for your organization. When configuring how you will collect baseline data, consider enabling all auditing and logging options: the more telemetry you have, the better the picture you can build of what normality looks like in your organization's environment. If you haven't enabled all telemetry options, you may not have a clear enough picture to accurately distinguish normal from abnormal activity. Collect telemetry over a sustained period that represents your organization's normal operations. Baseline data should also be regenerated as changes are made to information systems on the network, so that it reflects the current operation of the network rather than only representing the organization's information systems as they existed at a fixed point in the past.
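The idea of a baseline can be sketched in code. The following is a minimal, hypothetical example (the tuple format, field names, and three-standard-deviation threshold are all assumptions, not part of any real logging product): it summarizes historical event counts per hour of day, then flags later counts that deviate sharply from that norm.

```python
from collections import defaultdict
from statistics import mean, stdev

def hourly_baseline(events):
    """Build a per-hour-of-day baseline from historical log records.

    `events` is a list of (day, hour, count) tuples -- a simplified,
    hypothetical stand-in for whatever your telemetry pipeline emits.
    Returns {hour: (mean, stdev)} describing normal activity per hour.
    """
    by_hour = defaultdict(list)
    for day, hour, count in events:
        by_hour[hour].append(count)
    return {h: (mean(c), stdev(c) if len(c) > 1 else 0.0)
            for h, c in by_hour.items()}

def is_anomalous(baseline, hour, count, threshold=3.0):
    """Flag a count deviating more than `threshold` standard deviations
    from the baseline mean for that hour of day (an assumed cut-off)."""
    mu, sigma = baseline.get(hour, (0.0, 0.0))
    if sigma == 0.0:
        return count != mu
    return abs(count - mu) / sigma > threshold

# Two weeks of quiet 02:00 activity, then a sudden spike:
history = [(d, 2, c) for d, c in
           enumerate([3, 4, 2, 3, 5, 4, 3, 2, 4, 3, 5, 4, 3, 2])]
base = hourly_baseline(history)
print(is_anomalous(base, hour=2, count=250))   # → True (spike)
print(is_anomalous(base, hour=2, count=4))     # → False (normal)
```

This also illustrates why the baseline must be regenerated as the network changes: the stored means and deviations only describe the period they were computed from.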


Detect

Detecting an intruder is often a case of noticing abnormal activity in your organization's information systems. For example, you might notice that a server where, for the last few months, remote desktop protocol (RDP) connections have only been made during business hours is suddenly servicing RDP requests late at night on weekends, or that a computer is transmitting unusually large amounts of data to hosts on the internet when previously the amount of traffic it transmitted was negligible.

Detection can be difficult as competent intruders will attempt to leave minimal trace of their activities in the telemetry logs of your organization’s information systems. Rather than detecting abnormalities by manually examining event logs, many organizations today rely upon Intrusion Detection Systems (IDS) and Security Information and Event Management (SIEM) systems to identify suspicious anomalies in the telemetry generated by information systems.
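The out-of-hours RDP example above can be expressed as a simple rule. This sketch assumes hypothetical, hand-normalized log records of the form (ISO timestamp, event type) and an assumed business-hours policy; in practice an IDS or SIEM would apply rules like this to the real Windows event log stream.

```python
from datetime import datetime

BUSINESS_HOURS = range(8, 18)   # 08:00-17:59, an assumed policy
BUSINESS_DAYS = range(0, 5)     # Monday-Friday

def out_of_hours_rdp(records):
    """Return RDP connection records falling outside business hours --
    the kind of deviation from baseline behavior described above."""
    hits = []
    for ts, event in records:
        t = datetime.fromisoformat(ts)
        if event == "rdp_connect" and (
            t.hour not in BUSINESS_HOURS or t.weekday() not in BUSINESS_DAYS
        ):
            hits.append((ts, event))
    return hits

logs = [
    ("2024-03-12T10:15:00", "rdp_connect"),   # Tuesday morning: normal
    ("2024-03-16T02:40:00", "rdp_connect"),   # Saturday 02:40: flagged
]
print(out_of_hours_rdp(logs))   # → [('2024-03-16T02:40:00', 'rdp_connect')]
```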


Alert

When does a series of unusual events correlated across multiple logs become worthy of further investigation? Correlation with other events is important. A series of failed remote RDP access attempts is suspicious by itself but doesn't necessarily indicate a problem. A series of failed remote RDP access attempts, followed by a successful remote login via RDP and then suspicious failures of the lsass.exe process, all occurring in succession, is worthy of investigation.

Alerting is the process of bringing suspicious anomalies in the telemetry generated by information systems to the attention of the blue team. It is important, though, that members of the blue team tune their IDS and/or SIEM systems to provide an appropriate level of alerting. If an alert system produces too many false positives, that is, it triggers too many alerts that aren't associated with attacker or red team activity, then through alert fatigue the blue team may miss an alert that is associated with genuine attacker activity. For example, during a recent breach at a famous retailer, the retailer's internal monitoring generated alerts about the attacker's activity, but these were discounted at the time as false positives because the internal systems generated so many alerts for innocuous activity that it wasn't clear to the security team whether any individual alert indicated a real problem or a misclassified routine event.
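The failed-RDP, successful-RDP, lsass.exe sequence above is a correlation rule. The sketch below is a deliberately simplified, greedy matcher over hypothetical normalized events (the event-type names and 30-minute window are assumptions); a real SIEM rule engine would handle overlapping and restarting sequences, which this does not.

```python
from datetime import datetime, timedelta

# Assumed event-type names for the suspicious sequence described above.
PATTERN = ["rdp_auth_failure", "rdp_auth_success", "lsass_failure"]

def correlated(events, pattern=PATTERN, window=timedelta(minutes=30)):
    """Greedily match `pattern` in chronological order; return True only
    if the whole sequence occurs within `window` of its first match."""
    idx, first = 0, None
    for ts, etype in sorted(events):   # ISO timestamps sort chronologically
        if etype != pattern[idx]:
            continue
        t = datetime.fromisoformat(ts)
        if first is None:
            first = t
        elif t - first > window:
            return False
        idx += 1
        if idx == len(pattern):
            return True
    return False

events = [
    ("2024-03-16T02:31:00", "rdp_auth_failure"),
    ("2024-03-16T02:32:00", "rdp_auth_failure"),
    ("2024-03-16T02:35:00", "rdp_auth_success"),
    ("2024-03-16T02:41:00", "lsass_failure"),
]
print(correlated(events))        # → True: full sequence within 30 minutes
print(correlated(events[:2]))    # → False: failed logins alone
```

Note how the failed logins alone do not trigger the rule: correlation is what elevates background noise to an alert worth a human's attention.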

Some IDS and/or SIEM systems will provide recommendations as to which activity requires further investigation and may even suggest further ways to find evidence to validate the hypothesis that an intruder is present within organizational systems and that the organization is under attack.


Investigate

Once the blue team has verified the presence of an intruder on the network, they need to determine the degree to which the intruder has infiltrated it. A detailed and thorough investigation should determine which systems the intruder has compromised, when those systems were compromised, and how. These steps are important because the scope of an intrusion often exceeds the initial assessment of its severity. Only by understanding where, how, and when systems were compromised is it possible to begin to effectively remediate the vulnerabilities that led to the compromise and to achieve the goal of ejecting the intruder from the organizational network.
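The where/when/how of an investigation can be tracked as a simple summary per host. This is a minimal sketch over hypothetical indicator records (host, timestamp, technique); a real investigation would draw these from forensic tooling rather than a hand-built list.

```python
from collections import defaultdict

# Hypothetical indicators gathered during the investigation.
indicators = [
    ("web-01",  "2024-03-16T02:35:00", "rdp_brute_force"),
    ("web-01",  "2024-03-16T02:41:00", "credential_dumping"),
    ("file-02", "2024-03-16T03:10:00", "lateral_movement"),
]

def scope_intrusion(indicators):
    """Summarize which systems were compromised (where), the earliest
    evidence on each (when), and the techniques observed (how)."""
    summary = defaultdict(lambda: {"first_seen": None, "techniques": set()})
    for host, ts, technique in indicators:
        entry = summary[host]
        if entry["first_seen"] is None or ts < entry["first_seen"]:
            entry["first_seen"] = ts
        entry["techniques"].add(technique)
    return dict(summary)

for host, info in scope_intrusion(indicators).items():
    print(host, info["first_seen"], sorted(info["techniques"]))
```

A growing summary like this is also how the blue team notices that the scope exceeds the initial assessment: new hosts keep appearing, or "first seen" timestamps keep moving earlier.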

This article is from the free online course Microsoft Future Ready: Fundamentals of Enterprise Security, created by FutureLearn.
