State of the art
Empirical studies on the determinants and consequences of terrorism, and on responses to it, are increasing in number. For instance, seven out of ten articles published in a special issue of Oxford Economic Papers on terrorism contain empirical content. This positive trend is enabled by growing data-gathering efforts and by the increasing statistical and technical sophistication of empirical methods. Concerning the former, three key event datasets are currently used by scholars conducting terrorism research.
International Terrorism: Attributes of Terrorist Events (ITERATE) was the first terrorist event dataset, created in the early 1980s. It contains information solely on transnational terrorist attacks over the period 1968-2012. ITERATE includes more than 40 variables covering, among other things, the features of transnational terrorist organizations, their activities, and the impact of their operations.
The second widely used event dataset is the RAND Database of Worldwide Terrorism Incidents, which compiles terrorist events from 1968 through 2009. For the period 1968-1997 RAND includes data only on transnational terrorist attacks; notably, for the later period it differentiates between transnational and domestic terrorist events.
The third data endeavor is the Global Terrorism Database (GTD), which is arguably the most comprehensive unclassified dataset on terrorist events: it contains roughly 150,000 observations, almost four times as many as the RAND database. It covers terrorist events around the world from 1970 through 2015 and, for the span 2008-2015, distinguishes between domestic and transnational terrorist events. For each identified event, the GTD comprises information on the date and location, the weapons used, the characteristics of the target, the number of casualties, and, where possible, the group(s) or individuals responsible. Yet even a dataset as comprehensive as the GTD contains flaws; for instance, the data for 1993 are rather incomplete. For a summary of these datasets see, for instance, Sandler (2015) or Sheehan (2011).
Yet even with data readily available, conducting empirical studies on terrorism is challenging, particularly when a researcher tackles a “cause and effect” question. This is not inherently a problem of terrorism studies, but of empirical research relying on observational (naturally occurring) data more generally. Research based on observational data is often plagued by self-selection and omitted variable bias (OVB). Self-selection or OVB refers to a setting in which a researcher cannot control for all confounding variables, i.e. variables associated with both a potential cause and a possible outcome (Dunning 2012). To illustrate, even if one observes a strong correlation between poverty and terrorism intensity, inferring causality (i.e. that poverty causes terrorism) would be wrong if both poverty and terrorism intensity are driven by some other factor, such as a weak and inefficient government. To alleviate the problem in this specific case, one could control for weak or inefficient government; nonetheless, in practice identifying all confounding factors is nearly impossible. Another problem for empirical investigation is reverse causality: is it poverty and inequality that cause terrorism, or is terrorism responsible for the growth of poverty and inequality?
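The omitted variable bias described here can be made concrete with a minimal simulation sketch. The variable names, effect sizes, and data-generating process below are entirely invented for illustration: a hypothetical "weak government" factor drives both poverty and terrorism intensity, while poverty itself has no causal effect. A naive regression of terrorism on poverty then yields a spurious positive slope, which disappears once the confounder is controlled for.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical confounder: government weakness drives both poverty
# and terrorism intensity; poverty has NO causal effect on terrorism here.
weak_gov = rng.normal(size=n)
poverty = 0.8 * weak_gov + rng.normal(size=n)
terrorism = 0.8 * weak_gov + rng.normal(size=n)

# Naive OLS slope of terrorism on poverty: spurious, driven by the confounder.
naive_slope = np.cov(poverty, terrorism)[0, 1] / np.var(poverty)

# Controlling for the confounder: partial out weak_gov from both variables
# and regress the residuals on each other (Frisch-Waugh-Lovell logic).
res_pov = poverty - (np.cov(weak_gov, poverty)[0, 1] / np.var(weak_gov)) * weak_gov
res_ter = terrorism - (np.cov(weak_gov, terrorism)[0, 1] / np.var(weak_gov)) * weak_gov
controlled_slope = np.cov(res_pov, res_ter)[0, 1] / np.var(res_pov)

# The naive slope is clearly positive; the controlled slope is close to zero.
print(naive_slope, controlled_slope)
```

The point of the sketch is exactly the one made in the text: the correlation between poverty and terrorism is real, but it is not causal, and it vanishes once the common driver is held fixed.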
Genuine causal inference is possible only when a researcher has full control over the confounding factors or does not need to control for confounders at all. A key concept that obviates the need to control for confounders is randomization (Dunning 2012). Randomization breaks the link between confounding factors and the causal variable of interest. If, in a randomized setting, a correlation is found between poverty and terrorist incidence, this correlation can be interpreted causally. As in medicine and clinical trials, the best way to obtain randomization in social science is through experimentation. Likewise, the terminology for experiments in social science is drawn from medicine: control, treatment, and treatment effect.
Today laboratory experiments are widely used in the social sciences to study human behavior and incentives, as well as institutions. Lab experiments use human subjects who are randomly assigned to different settings (control or treatment groups). Due to random assignment, outcomes can be directly compared between groups: if a difference in outcomes is identified, it can be attributed to the different conditions between groups (the treatment effect). Experiments therefore ensure high internal validity. Yet an inherent drawback of laboratory experiments is that, outside the specific, highly controlled experimental context, they may have little predictive value (Angrist and Pischke 2009) and hence may lack external validity. In the terrorism context, one may wonder how closely income inequality created in the lab resembles real-world perceptions of income inequality. Likewise, the human subjects used in experiments, often college students, might fail to behave like decision-makers in the real world (e.g. they might not replicate the behavior of a government reacting to a terrorist event).
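The logic of random assignment described above can be sketched as a short simulation. Again, the setup is hypothetical and invented for illustration: a confounder that would bias any observational comparison is present, but because treatment assignment is random (and hence independent of the confounder), a simple difference in group means recovers the true treatment effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# A confounder that would contaminate a non-randomized comparison.
confounder = rng.normal(size=n)

# Random assignment to control (0) or treatment (1), independent of
# the confounder by construction.
treated = rng.integers(0, 2, size=n)

# Simulated outcome with a true treatment effect of 1.5 (an assumed
# value chosen for this illustration) plus confounder and noise.
outcome = 1.5 * treated + confounder + rng.normal(size=n)

# Difference in group means: an unbiased estimate of the treatment effect.
effect = outcome[treated == 1].mean() - outcome[treated == 0].mean()
print(effect)
```

Because assignment is random, the confounder averages out across the two groups, which is precisely why randomized comparisons support causal interpretation while observational ones generally do not.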
Despite these critical remarks, laboratory experiments constitute a valuable complement to theoretical and empirical analyses of terrorism. As mentioned by Arce et al. (2011), lab experiments can be instrumental in testing the effectiveness of counterterrorism policies before they are launched in the field. If the predicted results of counterterror policies cannot be obtained in the lab, one may question whether they will be effective in the field, which might lead policy-makers to rethink their policies.
A more in-depth analysis of the advantages and disadvantages of lab experiments in studying terrorism and responses to it will be presented by Professor Catherine Eckel, who will be a guest speaker during the DGA Research Seminar on July 6, 2016. The DGA seminars are free and open to the public.