Linearity is a reductionist’s dream, and nonlinearity can be a reductionist’s nightmare.
1. Everything we think we know about the world is a model. Every word and every language is a model. All maps and statistics, books and databases, equations and computer programs are models. So are the ways I picture the world in my head: my mental models. None of these is or ever will be the real world.
2. Our models usually have a strong congruence with the world. That is why we are such a successful species in the biosphere. Especially complex and sophisticated are the mental models we develop from direct, intimate experience of nature, people, and organizations immediately around us.
3. However, and conversely, our models fall far short of representing the world fully. That is why we make mistakes and why we are regularly surprised. In our heads, we can keep track of only a few variables at one time. We often draw illogical conclusions from accurate assumptions, or logical conclusions from inaccurate assumptions. Most of us, for instance, are surprised by the amount of growth an exponential process can generate. Few of us can intuit how to damp oscillations in a complex system.
You can’t navigate well in an interconnected, feedback-dominated world unless you take your eyes off short-term events and look for long-term behavior and structure; unless you are aware of false boundaries and bounded rationality; unless you take into account limiting factors, nonlinearities, and delays. You are likely to mistreat, misdesign, or misread systems if you don’t respect their properties of resilience, self-organization, and hierarchy.
In the summer of 2016, I had the good fortune to be selected along with nine other students to attend the 2016 Graduate Workshop in Computational Social Science and Modeling (GWCSS) at the Santa Fe Institute in Santa Fe, NM. Carnegie Mellon Professor John H. Miller and University of Michigan Professor Scott E. Page were our excellent Santa Fe faculty guides for the workshop, which combined lectures on complexity science, complex systems, and computational modeling by Professors Miller and Page with lectures by other Santa Fe faculty, including Chris Kempes, Simon DeDeo, and Mirta Galesic.
The workshop was a two-week residential intensive in beautiful Santa Fe. Concurrently, SFI runs a four-week Complex Systems Summer School, which brings in additional notables in complexity and computational social science, including economist Brian Arthur; Robert Boyd, whose 1988 book on cultural evolution, coauthored with Peter J. Richerson, has been cited more than 7,000 times; and Doyne Farmer. As a GWCSS attendee, I was able to attend any of the Summer School lectures I could slip away for.
Lectures and discussions were wide-ranging but all anchored in the science of complexity and computational modeling as tools of scientific investigation. A sampling of our discussions in the GWCSS:
* model diversity (Scott Page)
* evolutionary computation (John Miller)
* information theory and conflict (Simon DeDeo)
* network structure and performance (Mirta Galesic)
I was incredibly lucky to attend the 2016 workshop with a distinguished cohort of fellow graduate students, from whom I learned a great deal during conversation, discussion, and spirited debate over the course of our two weeks together.
During the 2016 GWCSS, I began work on an agent-based model investigating the role of personality variables in workplace situations (view project on ResearchGate). This work continues.
I am grateful to have had the opportunity to attend the 2016 Graduate Workshop in Computational Social Science and recommend it highly.
Please note that the application deadline for the 2017 GWCSS is 14 February 2017.
More information about the 2016 workshop I attended, including a reading list, is available from the 2016 website and wiki.
At the 2016 Winter Simulation Conference, I presented “Active Shooter: An Agent-Based Model of Unarmed Resistance” (PDF) (view/follow project on ResearchGate), coauthored with William G. Kennedy. The paper appeared in WSC 2016’s Social and Behavioral Simulation track, chaired by Ugo Merlone of the University of Torino and Stephen Davies from the University of Mary Washington.
Background and motivation:
I initially undertook this modeling project during a 2015 seminar on agent-based modeling (ABM) for military applications offered by Kenneth Comer at George Mason University. The issue of active and mass shooters is obviously not unique to the military, but the shootings at Ft. Hood and the very few ABMs [1, 2] I could find on mass shootings suggest more attention from computational modelers is needed.
Examining mass shootings through the lens of complexity science and agent-based modeling raised an additional wrinkle that’s seldom addressed in the literature or in training for active shooter events. Specifically, the prescription to “Run, Hide, Fight” (or “Avoid, Deny, Defend”) might work for individuals, but what about the impact one individual’s behavior has on another individual’s likelihood of surviving an active shooter incident? To illustrate, imagine an active shooting is unfolding and a potential victim is able to run and hide, barricading himself in a supply closet. What happens when the next potential victim arrives at that same supply closet, having also planned to use it? The individual inside the closet has monopolized the secure hiding space, leaving the second individual at greater risk.
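The supply-closet scenario can be sketched as a toy congestion model. This sketch is entirely hypothetical and is not part of the published model; the function name, counts, and first-come-first-served rule are my illustration of the dynamic:

```python
import random

def assign_hiding_spots(num_victims, num_spots, seed=0):
    """Toy illustration: victims race to single-occupancy hiding spots.
    Once a spot is taken, later arrivals are left exposed."""
    random.seed(seed)
    arrivals = list(range(num_victims))
    random.shuffle(arrivals)          # order in which victims reach the spots
    spots_left = num_spots
    hidden, exposed = [], []
    for victim in arrivals:
        if spots_left > 0:
            spots_left -= 1           # first arrival monopolizes the spot
            hidden.append(victim)
        else:
            exposed.append(victim)    # spot already taken: greater risk
    return hidden, exposed

hidden, exposed = assign_hiding_spots(num_victims=5, num_spots=2)
print(len(hidden), len(exposed))      # prints "2 3"
```

However the arrival order is shuffled, the secure spots are exhausted by the first arrivals, so an individually sound strategy degrades for everyone who follows.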
The study of mass shootings is complicated. Despite disproportionate media coverage, mass shootings are extremely rare, and each one is unique and difficult to generalize from. Even the definition of “mass shooting” is debated. In my work, I’ve relied on the U.S. Federal Bureau of Investigation for official definitions, not because I think they are necessarily the best possible, but because they provide an authoritative baseline for studying the issue. The FBI considers a “mass shooting” to be a single incident in which four or more individuals are killed (the FBI definition of “mass murder”) through the criminal use of a firearm. Importantly, this definition means that an incident in which three people are shot and killed would not rise to the level of “mass shooting,” nor would an incident in which many were shot but fewer than four died of their wounds.
Compounding the rarity of mass shootings available for study is the fact that many of the primary sources (including the shooter himself) are often deceased. Combined with the methodological problems of eyewitness accounts, which are subject to recall errors and bias, this makes the study of historical mass shootings difficult. Conducting experiments in which participants believed they could die would be unethical and dangerous. Computational modeling and simulation, on the other hand, offer an infinitely repeatable virtual laboratory in which to study the issue, with no possibility of harming human subjects.
I had initially imagined a model that would include a variety of agents: shooter(s), victims, and peacekeepers/LEOs. While developing the model, I realized that what was most important was creating a model that allowed users (those with more subject-matter expertise than me) to calibrate it to their knowledge. The eventual version 1 was a generalized outdoor model in which a shooter targets and fires upon victims, victims flee the shooter, and, if set by the user, a small proportion of victims attempt to subdue the shooter, as occurred during the thwarted attack in 2015 on a Thalys train traveling from Amsterdam to Paris.
User-settable parameters permit the user to control the shooter’s accuracy, armament, and, importantly, the probability of a “fighter” overcoming a shooter.
Initial findings suggest that even with a very small probability of success (a 1 in 100 chance for each second of the struggle), a very small proportion of “fighters” in a population can potentially subdue an active shooter, even if that shooter has a 50 percent probability per second of disabling the “fighters.”
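The per-second struggle mechanic lends itself to a quick Monte Carlo check. The sketch below is my simplification for illustration, not the published model: each second, every active fighter has a 1-in-100 chance of subduing the shooter, and the shooter independently has a 50 percent chance of disabling each fighter. Function names and mechanics are assumptions.

```python
import random

def struggle(num_fighters, p_subdue=0.01, p_disable=0.5, rng=None):
    """Toy sketch of a per-second struggle (my simplification).
    Each second, every active fighter has a p_subdue chance of
    overcoming the shooter; the shooter independently has a
    p_disable chance of taking each fighter out of the fight."""
    rng = rng or random.Random()
    active = num_fighters
    while active > 0:
        for _ in range(active):
            if rng.random() < p_subdue:
                return True                       # shooter subdued
        # fighters who are not disabled remain in the struggle
        active = sum(rng.random() >= p_disable for _ in range(active))
    return False                                  # all fighters disabled

def estimate(num_fighters, trials=100_000, seed=42):
    rng = random.Random(seed)
    wins = sum(struggle(num_fighters, rng=rng) for _ in range(trials))
    return wins / trials

print(estimate(1))   # roughly 0.02: a lone fighter rarely prevails
print(estimate(5))   # several simultaneous fighters raise the odds markedly
```

Even in this crude sketch, a lone fighter’s odds are poor, but each additional simultaneous fighter compounds the chance that at least one succeeds before all are disabled, which is consistent with the swarm idea discussed below.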
My model is available under the Apache 2.0 license from OpenABM for others to use in their own research. I encourage other modelers to extend this model and have made several suggestions for doing so in the paper. In particular, I think the idea of rapid collective action–a coordinated group or swarm attack–should be studied as a potential counter to an active shooter.
Mass shootings unfold quickly and are rarely foreseen by victims. Increasingly, training is provided to increase chances of surviving active shooter scenarios, usually emphasizing “Run, Hide, Fight.” Evidence from prior mass shootings suggests that casualties may be limited should the shooter encounter unarmed resistance prior to the arrival of law enforcement officers (LEOs). An agent-based model (ABM) explored the potential for limiting casualties should a small proportion of potential victims swarm a gunman, as occurred on a train from Amsterdam to Paris in 2015. Results suggest that even with a minuscule probability of overcoming a shooter, fighters may save lives but put themselves at increased risk. While not intended to prescribe a course of action, the model suggests the potential for a reduction in casualties in active shooter scenarios.
Keywords: agent-based modeling, ABM, active shooter, mass shooter, military, law enforcement, LEO, firearms, guns, violence, terrorism, security
Briggs, T. W. & Kennedy, W. G. (2016). Active shooter: An agent-based model of unarmed resistance. In Proceedings of the 2016 Winter Simulation Conference (WSC), 3521–3531. https://doi.org/10.1109/WSC.2016.7822381 (PDF)
Uri Wilensky and William Rand’s An Introduction to Agent-Based Modeling: Modeling Natural, Social, and Engineered Complex Systems with NetLogo (find in a library) is the single best book I’ve encountered for anyone interested in agent-based modeling (ABM) in any discipline and at any level (K-12, undergraduate, graduate, professional).
At nearly 500 full-color pages, Wilensky and Rand’s book does an excellent job progressively walking through the decision to use agent-based modeling, creating simple ABMs, extending preexisting ABMs, creating more complicated ABMs, analyzing ABMs, and conducting verification, validation, and replication.
One of the greatest strengths of Wilensky and Rand’s approach in IABM (An Introduction to Agent-Based Modeling) is that it is a hands-on, exploratory book intended for use with the NetLogo multi-agent modeling environment, which is freely available for download. Each chapter of IABM includes many illustrative examples, all implemented and executed in NetLogo. Moreover, the example models and code are not just available to readers (again, free of charge) but are conveniently bundled with the current release of NetLogo. In other words, rather than just read about the models, readers are encouraged to run them themselves. The Chinese proverb says it best:
Tell me and I’ll forget;
show me and I may remember;
involve me and I’ll understand.
Beyond just running the models described in the book, each chapter concludes with a substantial number of exercises or “Explorations,” usually numbering 20 to 30. Each Exploration is a potentially deep opportunity to learn more about ABM by getting involved rather than just reading, as the Chinese proverb suggests.
Wilensky and Rand do a very nice job of using illustrative models from a variety of disciplines; one example might come from the social sciences and the next example from ecology. This is helpful since each reader may come from a different background or have different experience or interests.
The book requires no special background in mathematics or computer science, which is a huge plus in terms of accessibility to a broader audience.
The authors suggest that it could be used as a textbook for an undergraduate course on complex systems or a computer science course on ABM, or even as a supplement to science, social science, or engineering classes. Graduate students who wish to use ABM in their research – regardless of discipline – would likely find IABM one of the best possible places to start. Even experienced researchers with no agent-based modeling experience would benefit from IABM as an introduction to the method.
While the book is aimed at upper-level undergraduates and graduate students, it is sufficient for creating very detailed and scientifically valuable agent-based models in NetLogo. The authors reserve a final chapter for “advanced” applications potentially of greater interest to individuals pursuing specific sorts of ABM: computationally intensive models, participatory or stakeholder-driven modeling, robotics, spatial and geographic information systems (GIS), and network science/social network analysis. They select just a handful of NetLogo’s more advanced capabilities to describe in this chapter, but include helpful references enabling interested readers to learn more.
I can’t find anything about IABM to criticize, though reading it cover to cover (as I did) is certainly an investment of time, albeit a worthwhile one for the reader wishing to learn and use agent-based modeling.
One of the best ways to explore the science of complexity and how complexity theory can be applied to the numerous real-world phenomena we experience and study is through agent-based modeling. Uri Wilensky and Bill Rand have written an excellent book to help anyone do just that, and I recommend An Introduction to Agent-Based Modeling: Modeling Natural, Social, and Engineered Complex Systems with NetLogo (find in a library) to anyone wishing to get started with agent-based modeling.
Wilensky and Rand make ABM accessible and, importantly, thoroughly enjoyable to learn.
The book’s companion website is http://www.intro-to-abm.com/.
I picked up Yuval Noah Harari’s Sapiens (find in a library) because of my academic interest in early social complexity and specifically how we humans became the complex social creatures embedded in networks that we are today.
Ostensibly a “history” book (Harari’s PhD at Oxford was in history), Sapiens unexpectedly turned out to be much more than a history book full of names, dates, and places. Instead, Harari focuses on what I can best describe as large-scale shifts in the Earth’s population, in both form and quantity, and, of critical importance, on the origins of these shifts.
While Harari doesn’t specifically bring a complexity science perspective to Sapiens, he is erudite and obviously exposed to a broad range of ideas and academic disciplines in addition to history. Biological anthropology, archaeology, cognitive psychology, environmental science, and economics are all very well represented.
Harari doesn’t pull any punches and relies heavily on research and science throughout the book. He discusses both sides of issues and notes when evidence is scant and debate continues, for example in the competing hypotheses regarding what happened to Homo neanderthalensis, commonly known as Neanderthals. This question appears to have been answered in recent years by DNA evidence, though some “how” questions still remain.
The book is hefty at just over 400 pages, but Harari’s style drew me in from the start: the book opens with a two-page “Timeline of History,” which starts 13.5 billion years ago, proceeds through the Industrial Revolution, and ends, interestingly, at “the Future.”
To give an example of his style:
13.5 billion years ago: Matter and energy appear. Beginning of physics. Atoms and molecules appear. Beginning of chemistry.
3.8 billion years ago: Emergence of organisms. Beginning of biology.
500 years ago: The Scientific Revolution. Humankind admits its ignorance and begins to acquire unprecedented power.
I know of no other author who’d pen “Beginning of physics” and “Beginning of biology” in this manner.
Harari is both concise and a contrarian, and I love a contrarian thinker.
Moreover, he gets complexity and the means by which large-scale cascades and changes can occur as the result of many small interactions (“tipping points,” to borrow Malcolm Gladwell’s book title).
In one of my favorite passages, Harari invokes chaos theory in explaining why history can’t be explained deterministically nor can the future be predicted. He writes on page 240:
So many forces are at work and their interactions are so complex that extremely small variations in the strength of the forces and the way they interact produce huge differences in outcomes.
He continues, explaining that the problem of predicting the future is exacerbated by the fact that history is a Level Two chaotic system. A Level One chaotic system, like the weather, does not react to predictions made about it. A Level Two chaotic system, on the other hand, does react to predictions made about it. Example: stock markets.
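The sensitivity Harari describes, tiny variations producing huge differences in outcomes, is the defining feature of chaos, and even a Level One system shows it. A standard illustration (mine, not from the book) is the logistic map, where a change in the seventh decimal place of the starting point soon yields completely different trajectories:

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x), a classic chaotic system."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2000000)
b = logistic_trajectory(0.2000001)   # initial condition shifted by 1e-7
# Early on the trajectories are indistinguishable; within a few dozen
# steps the 1e-7 difference is amplified to order one.
print(abs(a[1] - b[1]), abs(a[50] - b[50]))
```

The rule itself is perfectly deterministic; it is the amplification of tiny differences that makes long-run prediction hopeless, which is exactly why Harari argues history cannot be explained deterministically.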
Harari also talks about the spread of ideas over networks: culture (the idea of “memetics”) and nationalism. [Robert Axelrod’s model The Dissemination of Culture uses agent-based modeling to explain the process of cultural dissemination.]
Perhaps the most helpful idea in Sapiens is Harari’s discussion of how we evolved to become Homo sapiens from our primate forebears and, importantly, what differentiated the Sapiens species from our closest relatives (i.e., the now extinct other members of genus Homo: Homo rudolfensis, Homo erectus, Homo neanderthalensis, Homo denisova, Homo floresiensis, Homo ergaster, Homo soloensis, and, quite possibly, others which simply have not yet appeared in the archaeological record). This section of the book, the Cognitive Revolution, tackles the implausibility of Sapiens catapulting from “an animal of no significance” to the very top of the food chain and spreading like wildfire across an entire planet. Rich with discussions of extant research in psychology and genetics, Harari argues that the collective ability of Sapiens to create shared mental models and myths very likely explains successes that simply could not be achieved without collective action on such a massive scale. This idea is key for both cognitive and social psychologists seeking to understand how individual cognition results in the emergence of behaviors at the aggregate level of groups, sometimes enormous ones.
While I heartily recommend reading Sapiens in full, those looking for a quick exposure to Harari and his ideas will find that his 2015 TED talk nicely covers his take on the role of shared mental models (or “stories”) in the Cognitive Revolution of Homo sapiens:
I’d be remiss if I stopped here, since Harari goes on to discuss the transition of Sapiens from hunter-gatherer bands to agriculturalists during the Agricultural Revolution and then, provocatively, the surprisingly few forces that have managed to unite humankind into what is increasingly a single global society on planet Earth: money (and, importantly, trust in what money represents), empires, and religions. Harari does not shy away from frank discussion of religion, including humanism.
Finally, Harari covers the Scientific Revolution and how a fundamental shift in our thinking (namely, the admission that, despite what empires or religions might profess to know, Sapiens were in fact ignorant of many things that science could answer) ultimately spurred so much progress in what is truly the blink of an eye in the very long history of Homo sapiens.