By Michael K. Meehan, Captain, Seattle Police Department

Editor’s note: The author was the police incident commander for the 2003 TOPOFF2 full-scale terrorism response exercise in Seattle and served as a controller and evaluator for the 2005 Marine Terrorism Response Exercise in Seattle and the 2005 TOPOFF3 full-scale exercise in Connecticut.
As law enforcement develops terrorism countermeasures and prepares for the next terrorist attack, police can borrow useful training techniques from the military and private industry. One of these techniques is known as red teaming.
The military uses red teams in wargaming scenarios as the opposing force in a simulated conflict to reveal weaknesses in the military force. In the business world, red teaming refers to an independent peer review of abilities, vulnerabilities, and limitations. As the concept applies to homeland security, red teaming involves thinking or acting like a terrorist in an effort, for example, to identify security weaknesses and potential targets. Red teaming can be used in either analytical exercises or field-level exercises.
The Red Team
The National Strategy for Homeland Security states that “employing ‘red team’ techniques” is a major initiative in the intelligence and warning mission area.1 The Congressional Research Service, in its report “Border and Transportation Security: Possible New Directions and Policy Options,” also recommends the expanded use of red teams.2
Scope of Red Teaming
Red teaming is a term that describes a variety of exercise activities. The most basic level of red teaming is to conduct peer review of plans and policies to detect vulnerabilities or perhaps to simply offer alternative views of scenarios.
Another definition of red teaming is an interactive process conducted during crisis action planning to assess planning decisions, assumptions, processes, and products from the perspective of friendly, enemy, and outside organizations.3 Red teaming has also been described as the “capability-based analytical or physical manifestation of an adversary, which serves as an opposing force.”4 It can be a form of risk assessment and mitigation, with the key difference that red teaming involves the presence of an adversarial condition. Red teaming is not an oversight function.
According to the DHS Homeland Security Exercise and Evaluation Program, a red team is a “group of subject matter experts with various appropriate disciplinary backgrounds, that provides an independent peer review of plans and processes, acts as a devil’s advocate, and knowledgably role-plays the enemy using a controlled, realistic, interactive process during operations planning, training, and exercising.”5
Red teams evaluate a target or tactic, but not the likelihood that a particular target will be attacked. Red team members are strategists who identify what to attack and domain experts who identify how to attack.
As used in this article, red teaming refers to the presence of an active, thinking, and adaptive opponent during an exercise. Adaptive opponents allow exercise participants to engage in both prevention- and protection-related activities simultaneously.
In the color coding commonly used for training exercises, the red team is the adversary and the blue team is the police. The role of the blue team is to think about how surprise attacks might occur, identify indicators and warnings of those attacks, collect intelligence on those indicators, and adopt defenses against the most likely possibilities or at least provide early warning.6 Partners and neutral forces make up the green team. White team members frame, execute, and evaluate the exercise and otherwise keep it moving. While color coding all participants is optional, it is a long-standing convention understood by most participating groups.
There are potentially many levels of red teaming, but the two most common are analytical red teaming and physical red teaming.
Analytical Red Teaming
Analytical red teams portray an adversary but are not involved in actual field play. Analytical red teaming adds value to discussion-based exercises and can range from basic peer review to near-real-time (notional) force-on-force interaction, as in games or simulations.7
During analytical red teaming, participants analyze the attack plans and look for indicators and warnings, key decision points, and vulnerabilities in the plan. Participants should assess whether their current plans, policies, and procedures would be able to successfully repel an attack and, if not, work to modify and improve them to enable a better opportunity for success.
Analytical red teaming can be conducted by agencies possessing almost any level of capability, at a lower cost, over a shorter time, and with fewer personnel than physical red teaming. Of course, using fewer participants cuts both ways: it reduces cost and complexity, but it also means that fewer people are trained.
Analytical red teaming provides a potential adversary’s view of threats, vulnerabilities, and countermeasures. Without testing the physical limitations of antiterrorism measures, analytical red teaming can challenge prevailing views, prevent surprise, allocate resources, and expand the bounds of imagination. Analytical red teaming can occur as part of a discussion-based exercise or as a standalone activity.8
Generally, analytical red team participants need not all be subject matter experts, but all must have a strong working knowledge of their organization’s plans, policies, and procedures. At least one subject matter expert should participate, helping to develop the scenario and adversary profile, assisting with facilitation, and indoctrinating team members in the chosen adversary’s ideology, motivation, capability, objective, and tactics.
Physical Red Teaming
Physical red teaming involves individuals portraying actual, realistic adversary moves and countermoves in an exercise. A physical red team embodies the selected adversary, acting according to the selected group’s motivations, capabilities, and intent. Using a sliding scale of realism, team members act out and execute the steps dictated by known terrorist tactics, techniques, and procedures and provide the means for the blue team players to interact with an adversary in an exercise setting.9
Benefits of Red Teaming
Successful red teaming offers a hedge against surprise and inexperience and a guard against complacency. It tests the fusion of policy, operations, and intelligence. Red teams can highlight deviations from doctrine, reveal overlooked opportunities, and determine how well an agency understands its own plans and procedures.
In addition, effective red teaming can define a threshold of detection, suspicion, and action. It can and should cause blue team exercise players to recognize suspicious behavior, share information, and take any number of other steps to prevent or deter a particular attack. Specific examples of such suspicious behavior might include attempts to purchase weapons and inquiries made to private-sector security, law enforcement officers, and others regarding security measures or infrastructure vulnerabilities. Red teaming should not, however, include potentially dangerous activities such as erratic driving, physical threats, or foot and vehicle pursuits.10
Impediments to Effective Red Teaming
Impediments to conducting effective and helpful red teaming fall into two categories: situational and organizational. Situational impediments can include the chosen scenarios, the selection and training of members, and exercise conditions. Organizational impediments can include red team interactions with the blue team, organizationally imposed constraints, and the interpretation, distribution, and reception of the lessons learned.11
The Defense Science Board has identified elements of successful, and unsuccessful, red teaming. Red team success requires an organizational culture that values constructive criticism and provides top cover for exercise participants: independence with accountability, and a willingness to accept and act upon red team recommendations.12 U.S. Army Colonel Gregory Fontenot adds that organizations should take intellectual preparation as seriously as physical preparation.13 Intellectual preparation is perhaps the most important factor in conducting successful red teaming.
Among the more common reasons for failure of red team exercises are red teams that do not have enough latitude, do not approach the task with sufficient seriousness, do not capture the culture of potential adversaries, and do not have team members of sufficient quality and training.
Failure to share information also undermines the success of a red teaming exercise by limiting not only the ability to effectively carry out the exercises but also the usefulness of lessons learned. Furthermore, if red team play is overly scripted, it can limit the training value by taking the realism out of what should be a realistic exercise. Conversely, play that lacks sufficient scripting can lead to unexpected and undesired outcomes, make assessment more difficult, and increase safety risks.
Methodology for Using Red Teaming in Exercises
There are a number of steps the hosting or lead organization must take during the development of red team exercises:
- Determine the objectives or desired results
- Communicate with government and private partners
- Determine the scale and type of exercise, the type of scenario, the method of evaluation, and the documentation plan
- Develop the scenario
- Identify and train the appropriate participants
- Conduct and evaluate the exercise
- Prepare thorough documentation
- Evaluate the performance
- Develop the improvement plan
- Make required and desired improvements
- Exercise again
This basic outline applies to virtually all exercises, not just red teaming, and many of the steps are intuitive. Omitting steps because doing so is more expedient, less costly, or less likely to cause embarrassment is not recommended.
A red team exercise should be an action-reaction-counteraction game prompting move and countermove analysis. Red team operations should affect the actions of the blue team (in other words, be realistic but noticeable), potentially affect other red team actions (cause a change of plans, for example), and provide data and information that will stress the system and drive exercise play.14 Real value can be obtained by using red teams at varying suspicion thresholds. For example, a team can be activated and conduct operations in the least suspicious manner possible, presenting few indicators and warnings on which blue teams can react. If the team is not discovered, it increases the level of suspicious behavior until the prevention system engages. This practice allows the threat detection system to be tested and evaluated more precisely and training needs to be identified.
The red team scenario should be a general outline, not a detailed script, and it should be based on historical threats or known current threats. An example scenario outline might include an adversary profile, an objective, a target, a weapon, a location, and a timeline. Red teams should only have access to information to which real-world adversaries could have access.
In figure 2, the vertical dotted lines represent information firewalls or filters. To drive exercise play, information must flow between the red and blue teams, just as it would in the real world. For example, red teams may observe (and adapt to) increased security at an intended target. The red team typically would not, however, have additional information about the cause of the increased security unless interactive play between the teams has allowed the information to be obtained. In short, the firewalls or filters are designed to ensure that information possessed by red and blue teams is as realistic as possible.
Creating the adversary scenario requires knowledge of the adversary; otherwise, the scenario may not reflect real-world threats. If scenarios do not accurately reflect the threat, and the jurisdiction measures its capabilities against them, the jurisdiction could develop a false sense of security and even adopt inappropriate countermeasures. Choosing a plausible adversary for a specific geographic location, however, can be sensitive, especially if it is too closely based on actual threats. To avoid the need to use or release actual threat information, organizations can use a predetermined universal adversary (UA), developed by the U.S. Department of Homeland Security to replicate actual adversaries. The UA essentially collects real-world threat group information and sanitizes it into materials usable in unclassified exercises at all levels of government. Universal adversary data enables exercise players to simulate intelligence gathering and analysis and ensures realistic representation of the hazards posed to the personnel, procedures, and target being exercised. Local or regional intelligence background information can serve as the foundation for selecting the universal adversary and its targets.15
Safety in Physical Exercises
In physical exercises, the red team operationally portrays adversaries in the field. To minimize the risks inherent in this type of exercise, red teaming must always keep safety as the foremost consideration. Without adequate safety measures there can be no exercise. Accidents, in addition to causing harm to valuable resources, can lead to negative perception of exercise play and players and can cause participants to reconsider the value of red team exercising.
Red teaming does involve increased risks. Therefore, organizations need to make informed decisions, plan carefully, and safely execute the plan. The exercise should include a red team handbook that is a collection of all red team documentation. The purpose of the handbook is to help red team controllers understand their roles and responsibilities. Among the documents to include in the handbook are a profile of the adversary, the type of threat posed by the adversary, the rules of exercise play, the operational safety requirements, detailed scenario information, a description of each red team operation, target information, a communications plan, contact information, and details of red team members’ unique identification and credentialing.16
Safety can be achieved by establishing clear and consistent rules of exercise play and by ensuring that red team members are properly selected, are adequately supervised, carry unique identification, and receive sufficient training. The rules of exercise play should define the boundaries of exercise play and include guidance on the use of force, geographic boundaries, personal safety, hazardous environments, and related topics.17 Real weapons should be prohibited during the exercise, and the red team should act only within the law. Additionally, all props should be safe, levels of force kept within defined limits, protective equipment sufficient for the scenario and the type of exercise, exercise sites checked for hazards, warning signs posted where appropriate, and first aid available.18
Red team safety controllers should be able to observe and monitor red team operators and operations without interfering or drawing unnecessary attention to their presence. Finally, every action of the red team should be observed by at least one evaluator.19
Limitations of Red Teaming
Past behavior might be the best predictor of future behavior, but it will not necessarily identify a never-before-seen method of attack. There will never be enough information to predict all possible means of attack. Typically, red team exercises are based on prior events and are less likely to anticipate new, unplanned, or never-before-seen events. In addition, attackers may look at whole systems or multiple targets, and it is not possible to exercise every area.20 “Red teaming will not prevent surprises,” as Richard Brennan notes. “But [it] can prepare . . . organizations to deal with surprise. In particular, it can create the mental framework that is prepared for the unexpected.”21
Red teaming is difficult to do and even more difficult to do well. It is also suited less to developing solutions to problems than to raising issues and exploring potential responses.22 Even the Defense Science Board’s extensive research could not find agreed-upon red team capabilities, functions, or means to ensure quality. Finally, results will always be colored to some degree by the fact that red teams are not real attackers but police officers doing their best to mimic potential attackers.
One researcher has concluded that “where red teams existed in active and vigorous forms . . . organizations have almost invariably outperformed their opponents.”23 If done correctly, red teaming is realistic training. As the Homeland Security Institute has said, “Red teaming must be advanced in order to aid in the understanding and anticipation of the adaptive and complex nature of the adversary.”24
Attackers will adapt to our plans and our responses, so we must also continually adapt and improve. Plans and procedures need to be stressed and, once stressed, must evolve and improve. Progress does not need to be dramatic; it can be a series of incremental improvements over time. The key is that strategic, operational, and tactical planning and exercising form an iterative and evolving process.■
1Office of Homeland Security, “National Strategy for Homeland Security” (Washington, D.C., 2002): viii.
2Congressional Research Service, “Border and Transportation Security: Possible New Directions and Policy Options” (Washington, D.C., March 2005): 19.
3Timothy G. Malone and Reagan E. Schaupp, “The ‘Red Team’: Forging a Well-Conceived Contingency Plan,” Aerospace Power Journal 16, no. 2 (Summer 2002).
4U.S. Department of Homeland Security, Homeland Security Exercise and Evaluation Program.
5U.S. Department of Homeland Security, Homeland Security Exercise and Evaluation Program.
6Congressional Research Service, “Border and Transportation Security,” 19.
7U.S. Department of Homeland Security, Prevention Exercise Training Course: Participant Handbook (Washington, D.C.: March 2006), module 4.
8U.S. Department of Homeland Security, Homeland Security Exercise and Evaluation Program, 14.
9U.S. Department of Homeland Security, Homeland Security Exercise and Evaluation Program, 6.
10U.S. Department of Homeland Security, Homeland Security Exercise and Evaluation Program, 51.
11Anna M. Culpepper, “Effectiveness of Using Red Teams to Identify Maritime Security Vulnerabilities to Terrorist Attack,” (thesis, Naval Postgraduate School, Monterey, California, 2004), 11.
12Defense Science Board, “Role and Status of DoD Red Teaming Activities,” 6.
13Gregory Fontenot, “Seeing Red: Creating a Red-Team Capability for the Blue Force,” Military Review (September–October 2005): 6.
14U.S. Department of Homeland Security, Prevention Exercise Training Course, module 4.
15U.S. Department of Homeland Security, Homeland Security Exercise and Evaluation Program, 11.
16U.S. Department of Homeland Security, Homeland Security Exercise and Evaluation Program, 32.
17U.S. Department of Homeland Security, Homeland Security Exercise and Evaluation Program, appendix B.
18Michael D. Lynch, “Developing a Scenario-Based Training Program,” FBI Law Enforcement Bulletin (October 2005), 7.
19U.S. Department of Homeland Security, Homeland Security Exercise and Evaluation Program, 4.
20Toby Eckert, “U.S. ‘Red Teams’ Think Like Terrorists to Test Security,” San Diego Union-Tribune, August 20, 2002.
21Anna M. Culpepper, “Effectiveness of Using Red Teams,” 59.
22RAND Corporation, Protecting the Homeland: Insights from Army War Games, by Richard Brennan (Arlington, Virginia: 2002), viii.
23Hicks and Associates, “Thoughts on Red Teaming,” by Williamson Murray (McLean, Virginia, May 2003), 2.
24Homeland Security Institute, “Staying One Step Ahead: Advancing Red Teaming Methodologies through Innovation,” by Shelley Kirkpatrick, Shelley Asher, and Catherine Bott (Arlington, Virginia: 2005), 1.