Tom Briggs

Improve performance. Make work better.


computational social science

Review: The Half-Life of Facts – Why Everything We Know Has an Expiration Date by Samuel Arbesman

November 25, 2018 | Tom Briggs | Tags: bibliometrics, book review, computational social science, long-tailed distribution, measurement, Moore’s Law, network science, science, scientometrics, social networks

“What we study is not always what is actually out there. It is often what we’re interested in, or what’s easiest to discover.” –Samuel Arbesman

Samuel Arbesman, a mathematician and network scientist at Harvard, begins his fun romp through the science of science, The Half-Life of Facts – Why Everything We Know Has an Expiration Date (find in a library), with a few cheeky examples of scientific “facts” that have differed depending on the time period. In the first half of the 20th century, it was widely known that there are 48 chromosomes in a human cell, but in the latter half of the 20th century, it became widely known that there are, in fact, only 46 chromosomes in a human cell.

In Chapter 1, Arbesman walks through the mathematical regularities of the growth and decay of knowledge, aptly using the half-life of radioactive material as a metaphor. In Chapter 2, “The Pace of Discovery,” he provides an enjoyable introduction to scientometrics, beginning with a story of Derek J. de Solla Price – considered the founder of scientometrics, or the “science of science” – stacking, chronologically, every issue of a British scientific journal against the wall of his apartment and realizing in an idle moment that the heights of the volumes conformed to a specific shape: an exponential curve. This discovery prompted Price to focus his research on scientometrics and led to the publication of Little Science, Big Science, in which Price calculates doubling times (i.e., exponential growth rates) for various components of science and technology.

Price found, for example, the following doubling times (in years):

  • Number of entries in a dictionary of national biography: 100
  • Number of universities: 50
  • Number of important discoveries; number of chemical elements known; accuracy of instruments: 20
  • Number of scientific journals; number of chemical compounds known; membership of scientific institutes: 15
  • Number of asteroids known; number of engineers in the United States: 10
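
As a quick back-of-the-envelope illustration of what a doubling time implies (my own sketch, not code from the book), the continuous growth rate corresponding to a doubling time T is ln(2)/T, and anything with a 15-year doubling time grows roughly a hundredfold in a century:

```python
# Illustrative only: converting Price-style doubling times into annual
# exponential growth rates and century-scale multiples.
import math

def annual_growth_rate(doubling_time_years):
    """Continuous annual growth rate implied by a doubling time: r = ln(2) / T."""
    return math.log(2) / doubling_time_years

def projected_multiple(doubling_time_years, years):
    """Growth multiple after `years` of exponential growth with the given doubling time."""
    return 2 ** (years / doubling_time_years)

for domain, t_double in [("Scientific journals", 15), ("Universities", 50)]:
    r = annual_growth_rate(t_double)
    print(f"{domain}: doubling time {t_double} y -> ~{r:.1%} per year, "
          f"x{projected_multiple(t_double, 100):.0f} over a century")
```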

Arbesman also cites Harvey Lehman, who published in the journal Social Forces an attempt to count major contributions in different areas of study; Arbesman provides the following expanded set of doubling times (in years):

  • Medicine and hygiene: 87
  • Philosophy: 77
  • Mathematics: 63
  • Geology: 46
  • Entomology: 39
  • Chemistry: 35
  • Genetics: 32
  • Grand opera: 20

Chapter 2 also introduces some of the hallmarks of bibliometrics: the h-index and journal impact factors, as well as the study of scientific discoveries, which Arbesman calls “eurekometrics.” Some of the names mentioned in Chapter 2 include: Jorge Hirsch, Harriet Zuckerman, Arthur C. Clarke, Nicholas Christakis, Tyler Cowen, Galileo, Isaac Newton, Stanley Milgram.

Chapter 3, “The Asymptote of Facts,” tackles the decay of knowledge. Primarily using citation analysis, Arbesman examines how long it takes for papers to become “out of date.” This approach makes it possible to determine the half-life of a field: the time it takes for half of the literature in that field to stop being cited. He uses a variety of examples to illustrate the “long tail” of knowledge as it decays.

Arbesman cites a 2008 work by Rong Tang examining scholarly books in different fields, finding the following half-lives (in years):

  • Physics: 13.07
  • Economics: 9.38
  • Math: 9.17
  • Religion: 8.76
  • Psychology: 7.15
  • History: 7.13
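
To make the idea of a citation half-life concrete, here is a minimal sketch (my own, not Arbesman’s or Tang’s code) that treats citation decay as exponential, so the fraction of a field’s literature still being cited after t years is 0.5^(t / half-life):

```python
# A minimal sketch of the "half-life of facts" idea: under exponential decay,
# the fraction of a field's literature still cited after t years is
# 0.5 ** (t / half_life). The half-lives below are Tang's values quoted above.
def fraction_still_cited(t_years, half_life_years):
    return 0.5 ** (t_years / half_life_years)

for field, half_life in [("Physics", 13.07), ("Psychology", 7.15)]:
    fractions = [round(fraction_still_cited(t, half_life), 2) for t in (5, 10, 20)]
    print(f"{field}: fraction still cited after 5/10/20 years -> {fractions}")
```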

“[W]hen people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.” –Isaac Asimov

Some of the names mentioned in Chapter 3 include: Marjory Courtenay-Latimer, John Hughlings Jackson, Sean Carroll, Kevin Kelly

Chapter 4, “Moore’s Law of Everything,” explores the intersection of technological progress via Moore’s Law and human knowledge. The chapter is key to understanding how the field of computational social science emerged, and why scientometrics has grown: it’s now possible to do, computationally, what was an incredibly painstaking process to do manually. The advent of computation and the exponential growth in computing power has changed what humans are able to know. Topics in Chapter 4 include: carrying capacity, logistic curves, information transformation, innovation, scientific prefixes as evidence of progress, actuarial escape velocity, population growth, and travel distances.

“Technological growth facilitates changes in facts, sometimes rapidly, in many areas: sequencing new genomes…; finding new asteroids (often done using sophisticated computer algorithms that can detect objects moving in space); even proving new mathematical theorems through increasing computer power.” –Samuel Arbesman

Some of the names mentioned in Chapter 4 include: Clayton Christensen, Rodney Brooks, Jonathan Cole, Henry Petroski, Aubrey de Grey, Bryan Caplan, Michael Kremer, Thomas Malthus, Robert Merton

In Chapter 5, “The Spread of Facts,” Arbesman gently introduces network science and mentions some of his work with Nicholas Christakis, explaining how behaviors (e.g., health behaviors) and information have empirically been shown to move through networks. The spread of information through networks introduces the possibility—or perhaps the eventuality—of fact-transmission errors or even all-out fabrications. Arbesman uses the children’s game of “telephone” to illustrate how easily a piece of knowledge can be distorted as it moves through the network. Without stealing his thunder, I’ll also note that Arbesman chooses some fantastic examples of misinformation spread in this chapter: Popeye the Sailor and dinosaurs! More mundane, but relevant to scientometrics, is the problem of inaccurate citations “entering the wild,” only to be replicated and spread to an almost unbelievable degree. Some of the names mentioned in Chapter 5 include: Gottfried Leibniz, Jukka-Pekka Onnela, Jeremiah Dittmar, Mark Granovetter, James Fallows, David Liben-Nowell, Jon Kleinberg.

In Chapter 6, “Hidden Knowledge,” Arbesman goes deeper into network science and computational techniques for studying networks: he introduces random graphs, preferential attachment, evolutionary programming, meta-analysis, and the academic citation and networking product Mendeley along with several other software products. Some of the names mentioned in Chapter 6 include: Albert-László Barabási, Réka Albert, Herbert Simon, William Shakespeare

In Chapter 7, “Fact Phase Transitions,” Arbesman describes in greater detail the use of mathematical tools from physics to investigate the underlying regularity in the change of knowledge. Topics include Ising models, Fermat’s Last Theorem, and human space exploration.

Chapter 8, “Mount Everest and the Discovery of Error,” explores one of my personal favorites as an applied social scientist: measurement, and its importance to all of science, human knowledge, and understanding. Arbesman uses the change in the “fact” of the height of Mt. Everest during the 20th century—as measurement techniques either improved or, in the case of GPS, were invented and deployed—to note yet another source contributing to the decay of knowledge. Measures of length have been similarly inconsistent over the centuries, until scientists finally settled on the speed of light to define the meter.

“As our measurements become more precise, the speed of light doesn’t change; instead, the definition of a meter does.”

Arbesman defines precision and accuracy (or reliability and validity for the psychologists and social scientists among us). Precision refers to the consistency of measurements over time; accuracy refers to how similar measurements are to the real value. Importantly, he also discusses error: “all methods are neither perfectly precise nor perfectly accurate; they are characterized by a mixture of imprecision and inaccuracy. But we can keep trying to improve our measurement methods. When we do, changes in precision and accuracy affect the facts we know, and sometimes cause a more drastic overhaul in our facts.” [emphasis mine]
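
A toy simulation can make the distinction concrete. This is my own illustration (the true value, the two instruments, and the error magnitudes are invented), treating bias as inaccuracy and spread as imprecision:

```python
# Toy illustration (not from the book): precision as spread, accuracy as bias.
# We "measure" a true value of 100.0 with two hypothetical instruments.
import random
import statistics

random.seed(1)
true_value = 100.0

# Instrument A: precise but inaccurate (tiny spread, consistent bias).
a = [true_value + 2.0 + random.gauss(0, 0.1) for _ in range(1000)]
# Instrument B: accurate but imprecise (no bias, large spread).
b = [true_value + random.gauss(0, 2.0) for _ in range(1000)]

for name, readings in (("A (precise, biased)", a), ("B (unbiased, noisy)", b)):
    bias = statistics.mean(readings) - true_value   # inaccuracy
    spread = statistics.stdev(readings)             # imprecision
    print(f"{name}: bias = {bias:+.2f}, spread = {spread:.2f}")
```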

“Statistics is the science that lets you do twenty experiments a year and publish one false result in Nature.” –John Maynard Smith

Next, Arbesman jumps into a discussion of probability, its importance in science, and the woefully misunderstood p-value. He discusses the problem of publishing false results, issues of replication in science, and the dangers of poor measurement.
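
The arithmetic behind the Maynard Smith quip is straightforward. Assuming twenty independent experiments per year, each testing a true null hypothesis at the conventional alpha of 0.05 (my assumptions, purely for illustration), about one false positive per year is expected by chance alone:

```python
# Expected false positives when every tested hypothesis is actually null.
alpha, n_experiments = 0.05, 20          # illustrative assumptions
expected_false_positives = alpha * n_experiments
p_at_least_one = 1 - (1 - alpha) ** n_experiments

print(expected_false_positives)          # 1.0 "significant" result per year
print(round(p_at_least_one, 2))          # ~0.64 chance of at least one
```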

“When you cannot measure, your knowledge is meager and unsatisfactory.” –Lord Kelvin

“If you can measure it, it can also be measured incorrectly.” –Samuel Arbesman’s corollary to Lord Kelvin

The penultimate chapter, Chapter 9, “The Human Side of Facts,” discusses the human aspects that so often contribute to getting facts wrong: cognitive bias, self-serving bias, representativeness bias, theory-induced blindness, change blindness, and language change, to name a few. Some of the names mentioned in this chapter include: Daniel Kahneman, John Maynard Keynes, Michael Chabon, Thomas Kuhn, John McWhorter, Isaac Newton, and Henry Kissinger.

Finally, in Chapter 10, “At the Edge of What We Know,” Arbesman discusses the pace of information change and whether—and how—humans can cope. Arbesman argues, essentially, “quite well,” and points to a variety of examples of where humans seem to be doing mostly okay despite having human limits, such as Dunbar’s Number of “150 to 200 people we can know and have meaningful social ties,” which surprisingly still seems to hold, considering that the average number of Facebook friends was 190 at the time of writing. Arbesman points to a variety of technology that will enhance our capabilities as we bump against our limits. Some of the names mentioned in this chapter include: Kathryn Schulz, Carl Linnaeus, Robin Dunbar, Chris Magee, Jonathan Franzen.

The Half-Life of Facts – Why Everything We Know Has an Expiration Date (find in a library) includes almost 20 pages of endnotes and citations for those wishing to dig deeper. It’s a particularly enjoyable read for science and measurement geeks, or for anyone wanting to know more about the science of science and why what we know is always in flux. I particularly recommend Chapter 8, since measurement is so fundamental to science and to our knowledge of the world.


Essay: Complex Social Systems – You’ll Need More Than Just Big Data

November 14, 2018 (updated February 7, 2019) | Tom Briggs | Tags: complexity, complexity science, computational modeling, computational social science, data science, measurement, systems

What makes a “complex system” so vexing is that its collective characteristics cannot easily be predicted from underlying components: the whole is greater than, and often significantly different from, the sum of its parts. A city is much more than its buildings and people. Our bodies are more than the totality of our cells. This quality, called emergent behavior, is characteristic of economies, financial markets, urban communities, companies, organisms, the Internet, galaxies and the health care system.

–Geoffrey West, Santa Fe Institute

The ability to collect and pin to a board all of the insects that live in the garden does little to lend insight into the ecosystem contained therein.

–John H. Miller and Scott E. Page

Defining and Understanding Complexity

Miller and Page (2007) initially punt on defining complexity by invoking Justice Stewart’s famous test for pornography: “I know it when I see it.” Many definitions of complexity and complex systems have been suggested, yet none is widely accepted. Often, simply describing its features seems to be the modus operandi for explaining complexity.

Key features of a complex system include:

  • Constituent parts that interact in such ways as to give rise to non-linear and often unanticipated or unpredictable outcomes, outcomes particularly unexpected if the system were examined in a reductionist fashion
  • Feedback loops between parts and levels of the system and often the system and the environment (Simon, 1969)
  • Self-organization of constituent parts, adaptation, evolution
  • And, possibly unique to complex human social systems, the possibility for second-order emergence – emergence of reflexive social institutions based on human collective action (Gilbert & Troitzsch, 2005)

Gilbert and Troitzsch (2005) offer the idea that emergence requires new descriptions that are not needed to describe the behavior of the underlying components: an individual atom has no temperature, but the interaction of atoms in motion gives rise to temperature. Complexity scholars have also distinguished complication from complexity as a means of explaining complexity. Miller and Page (2007) suggest that removing a seat from a car makes it less complicated while removing the timing belt makes it less complex. Santa Fe Institute president David Krakauer noted in an August 2015 interview, “A watch is complicated…your family is complex,” suggesting that we understand how all of the constituent parts of a watch work together to make a functioning timepiece, but we do not fully understand the various forces that make a family function or not function. Removing a specific part from a watch has a predictable, known consequence. Removing–or adding–a family member changes the interactions in the family’s social system in unknown and unpredictable ways.

The crux of complexity in social systems, then, is how the interactions between individuals in the system give rise to new, emergent properties of the system that cannot be understood by studying each individual alone, as represented by the poetic if macabre Miller and Page (2007) quote about pinning insects. Perhaps one of the most well-known examples in computational social science (CSS) of macro-level emergence from the interaction of agents in a complex social system is Thomas Schelling’s (1971) model of segregation, in which he demonstrated that when individuals choose where to live based on even a very slight preference for having some neighbors who look similar to them, a tremendous degree of residential segregation – akin to that observed in many American cities – results without any governmental or other top-down organizing schema. Likewise, Simon (1969) recounts teaching urban land use to architectural students who had difficulty accepting that land-use patterns in medieval cities arose from cumulative individual decisions over time rather than top-down guidance from a central planner or designer.
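
For readers who want to see the mechanism rather than just read about it, here is a heavily simplified, Schelling-style sketch (my own illustration with invented parameters, not Schelling’s original checkerboard procedure or any model from this essay): agents of two types relocate to random empty cells whenever fewer than 30 percent of their neighbors are of the same type, and even that mild preference drives the average share of like-typed neighbors well above what random mixing produces.

```python
# A minimal Schelling-style segregation sketch (illustrative, invented parameters).
import random

random.seed(42)
SIZE, EMPTY_FRAC, THRESHOLD, STEPS = 20, 0.1, 0.3, 60

def neighbors(grid, x, y):
    """The 8 surrounding cells, wrapping around the edges of the grid."""
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx or dy:
                yield grid[(x + dx) % SIZE][(y + dy) % SIZE]

def unhappy(grid, x, y):
    """An occupied cell is unhappy if too few of its occupied neighbors share its type."""
    me = grid[x][y]
    if me is None:
        return False
    occupied = [n for n in neighbors(grid, x, y) if n is not None]
    return bool(occupied) and sum(n == me for n in occupied) / len(occupied) < THRESHOLD

def similarity(grid):
    """Average share of like-typed neighbors: a crude segregation measure."""
    scores = []
    for x in range(SIZE):
        for y in range(SIZE):
            if grid[x][y] is None:
                continue
            occupied = [n for n in neighbors(grid, x, y) if n is not None]
            if occupied:
                scores.append(sum(n == grid[x][y] for n in occupied) / len(occupied))
    return sum(scores) / len(scores)

# Random initial layout: two agent types plus some empty cells.
cells = ["A", "B"] * int(SIZE * SIZE * (1 - EMPTY_FRAC) / 2)
cells += [None] * (SIZE * SIZE - len(cells))
random.shuffle(cells)
grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

print(f"initial similarity: {similarity(grid):.2f}")
for _ in range(STEPS):
    movers = [(x, y) for x in range(SIZE) for y in range(SIZE) if unhappy(grid, x, y)]
    empties = [(x, y) for x in range(SIZE) for y in range(SIZE) if grid[x][y] is None]
    random.shuffle(movers)
    for x, y in movers:
        if not empties:
            break
        ex, ey = empties.pop(random.randrange(len(empties)))
        grid[ex][ey], grid[x][y] = grid[x][y], None   # move to the empty cell
        empties.append((x, y))                         # origin is now empty
print(f"final similarity:   {similarity(grid):.2f}")
```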

Miller and Page (2007) suggest that innate features of social systems tend to produce complexity: social agents are “enmeshed in a web of connections with one another and, through a variety of adaptive processes, they must successfully navigate through their world” (p. 10). Part of agents’ navigation of the world necessarily involves making decisions and undertaking behaviors either in response to the decisions and behaviors of others, or, importantly, in anticipation of what others will do. The number and disparate types of connections result in non-linear behavior and an inability to reduce the system to its constituent parts without losing the emergent properties of the system (Miller & Page, 2007). Torrens (2010) notes that self-organization and the propagation of information back and forth across scales – notable features of human social systems – embody emergence, a hallmark of complexity.

“Big Data” and Data Science – Not Enough for a Science of Complex Social Systems

Like complexity, definitions of “big data” can seem difficult to pin down, particularly depending on perspective. Technical perspectives approach big data in terms of the “3Vs”: volume, velocity, and variety of data. This perspective is concerned with factors like storage space, transmission networks, and sensors. Another perspective is that of the scientist and researcher: data collection was once an expensive, painstaking, time-consuming process that nevertheless resulted in small samples and woefully inadequate statistical power, but it is now possible in some disciplines to quite literally download data that can plausibly be used for research by writing just a small amount of code and tapping the API of a site like Twitter.
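
To illustrate just how little code that can take, the sketch below hits a placeholder REST endpoint and saves the results to a CSV file. The URL, query parameters, and field names are hypothetical (real services such as Twitter require authentication and have their own APIs), so treat this as the shape of the workflow rather than working instructions for any particular site:

```python
# Hypothetical example: the endpoint and fields below are placeholders, not a real API.
import csv
import json
import urllib.request

API_URL = "https://api.example.org/v1/posts?query=complexity&limit=100"  # placeholder

with urllib.request.urlopen(API_URL) as response:         # one HTTP request...
    records = json.loads(response.read())["results"]       # ...assumed JSON payload

with open("posts.csv", "w", newline="") as f:               # ...saved for analysis
    writer = csv.DictWriter(f, fieldnames=["id", "author", "text", "timestamp"],
                            extrasaction="ignore")
    writer.writeheader()
    writer.writerows(records)
```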

Cioffi-Revilla (2014) has described computational social science (CSS) as an “instrument-enabled discipline.” Inasmuch as CSS utilizes computation to investigate complex social systems, big data—and even bigger “computers”—are perhaps an extension of this paradigm: an improvement to our scientific instruments for the study of social complexity. A fascinating example is the controversial research on massive-scale emotional contagion through the social network Facebook. In the research team’s paper, which sought to investigate a phenomenon in which individuals are affected by the emotional expressions of others—and, in turn, affect others through their own expression or withholding of emotion—they noted that the minuscule but statistically significant effect size could only have been detected in a sample as large as that available to the Facebook Data Science team (Kramer, Guillory, & Hancock, 2014). In the context of complex social systems, then, big data represents improved measurement possibilities. At one time, measures of length were imprecise at best – the width of a man’s thumb, the length of his foot, the breadth of his outstretched arms – these were the original, inconsistent measures of the inch, foot, and yard. Measurement certainly became more precise as more accurate tools propagated, but more or better data did not change the underlying construct of human height, though it may have helped improve the ability to study it.
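
A rough power calculation shows why only a Facebook-scale sample could detect such an effect. This is my own approximation, not Kramer et al.’s analysis: using the standard two-group formula n ≈ 2(z_α/2 + z_β)² / d² for 80% power at α = 0.05, the required sample size explodes as the standardized effect size d shrinks:

```python
# Approximate per-group sample size for a two-group comparison at 80% power,
# alpha = 0.05 (z values hard-coded): n ≈ 2 * (1.96 + 0.84)^2 / d^2.
def n_per_group(d, z_alpha=1.96, z_beta=0.84):
    return 2 * (z_alpha + z_beta) ** 2 / d ** 2

for d in (0.5, 0.1, 0.02, 0.001):
    print(f"effect size d = {d}: ~{n_per_group(d):,.0f} participants per group")
```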

Big data isn’t required to appreciate or understand social complexity, however. Returning to Schelling’s work on residential segregation, it is noteworthy that his initial investigation required little more than coins placed on a checkerboard that were then moved according to a series of simple rules. Schelling did not possess or even generate big data, but the modeled social system contained all of the features of a complex system: interacting agents, feedback, adaptation, and emergence. It is also the case that enormous datasets might reveal nothing about complexity; a computer is a complicated machine capable of generating enormous amounts of data on CPU and memory cycles as it operates, but this is not complexity: it is merely executing code, as designed and instructed. A computer, then, is a vastly more complicated watch.

In 2008, WIRED Editor-in-Chief Chris Anderson proclaimed that the deluge of data spelled “the end of theory” and made the scientific method obsolete. Anderson argues that we’ve moved beyond needing to seek causation when we find correlation, that “correlation is enough” with big data. The question – perhaps best left to philosophers of science – is how to define “enough?” Anderson points to Google’s success at solving tasks algorithmically, by throwing more data at more computational power, without the need to even understand the underlying data. Surely one can think of examples in which “enough” might pass the sniff test for a profit-motivated entity, but perhaps not for the scientist driven by intellectual curiosity. An enormous dataset of measures of sky color all over the earth would establish a strong correlation with the sky being blue at midday, yet this tells us nothing about why the sky appears blue to us. Likewise, human beings, owing to our bounded rationality and limited cognition (Simon, 1969, 1976), are fairly terrible sensors in comparison to the satellites and robots NASA might send to Mars. Yet preparation and training for a manned Mars mission is earnestly underway. Why? Arguably, because human curiosity transcends merely knowing “good enough” correlation. Simon described “the vivid new perspective we gained of our place in the universe when we first viewed our own pale, fragile planet from space” (1969). Enormous data on the tremendous number of stars and planetary bodies hadn’t taught that lesson; it required space travel, an enormous feat of collective action in a complex society (Cioffi-Revilla, 2014).

While the “big data” buzzword declined in the first decade of this century – at least according to Google’s Ngram Viewer (see embedded chart at top of post) – it is still a paradigm taken seriously by complexity scholars and computational scientists. SFI’s Geoffrey West sees a role for big data in enabling large-scale simulations and models of complex social systems – if, he asserts, we determine a “big theory” to guide which questions we ask and which data we use (2013). In the Manifesto of Computational Social Science, Conte et al. (2012) likewise suggest that big data will play an important role in investigating important questions of human social complexity, but only when coupled with the core principles and concepts of CSS: psychology and the human mind, uncertainty, social change and adaptation, networks, and non-linear and non-equilibrium dynamics, to name but a few.

Pietsch (2013) also takes a highly integrative perspective, using philosophy of science to answer the charge that big data spells “the end of science.” Calling big data “the new science of complexity,” he refutes the notion that big data is not concerned with causality in complex social systems, and in fact suggests that big data will allow for a “contextualization of science” at the level of complex systems rather than attempting to model causality by reducing a phenomenon through “dubious simplifications” common in techniques like structural equation modeling used in social science (Pietsch, 2013).

There is little doubt that big data offers exciting new prospects for the study of complex social systems, perhaps in validating complex social system models like Robert Axtell’s 1:1 model of the U.S. economy (Axtell, 2016) or providing more reliable and robust datasets on agent interaction through the sensors contained in smartphones and other so-called “wearables.” Big data advocates who decry the end of the scientific method, however, would do well to keep the complexity hallmark of emergence in mind, since emergent behavior is by nature unpredictable. If the emergent property of a complex social system has not yet emerged, there may be nothing in the data – regardless of size – that can describe or predict what’s yet to come. Moreover, the adaptation to feedback that is characteristic of complex social systems also suggests the possibility that big data itself becomes part of the environmental landscape, feedback to which our existing complex social systems and the agents therein will adapt and evolve!

Conte et al. (2012) see a role for big data in the modeling stage when investigating complex social systems; that is, data can reveal statistical features of the system to be studied, and these features can be incorporated in a complex social system model, or the emergence of such features may become the object of study. Caution should be exercised in “forcing” big data into simulation models (Conte et al., 2012), and highly detailed predictions of complex social systems, even with big data, may never be possible (West, 2013).

In sum, complexity in social systems is present with or without “big data”; simply observing three preschoolers as they interact, communicating with each other via the linguistic symbol system that emerged to transcend individual human cognitive limitations and with each preschooler predicting and reacting to what each other says or does, can very well lead to highly unpredictable and emergent behavior! At the same time, enormous data can exist from very complicated machines that are not, themselves, complex because they fail the hallmark tests of complexity: self-organization, feedback, emergence. From a methodological perspective, big data technologies and techniques represent new possibilities for how complex social systems might be studied in the discipline of computational social science (e.g., Conte et al., 2012). The fact that computational social science is generative – i.e., can you grow it? (Epstein, 1999) – at times invites the dubious if well-meaning “But where did the data in your model come from?” question, as if actual data generated by human beings – regardless of how or under what circumstances – somehow trumps even the most elegant and effective model. CSS must continue to expand its interdisciplinary toolbox of scientific instruments (Cioffi-Revilla, 2014) and embrace big data as yet another tool to improve our models, our understanding, and our explanations of the complexity inherent in social systems.

REFERENCES

Axtell, R. L. (2016, May). 120 million agents self-organize into 6 million firms: a model of the US private sector. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems (pp. 806-816). International Foundation for Autonomous Agents and Multiagent Systems.

Cioffi-Revilla, C. (2014). Introduction to computational social science: principles and applications. London: Springer.

Conte, R., Gilbert, N., Bonelli, G., Cioffi-Revilla, C., Deffuant, G., Kertesz, J., … Helbing, D. (2012). Manifesto of computational social science. The European Physical Journal Special Topics, 214(1), 325–346. http://doi.org/10.1140/epjst/e2012-01697-8

Epstein, J. M. (1999). Agent-based computational models and generative social science. Complexity, 4(5), 41–60.

Gilbert, G. N., & Troitzsch, K. G. (2005). Simulation for the social scientist (2nd ed.). Maidenhead, England; New York, NY: Open University Press.

Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788–8790. http://doi.org/10.1073/pnas.1320040111

Miller, J. H., & Page, S. E. (2007). Complex adaptive systems: an introduction to computational models of social life. Princeton, N.J: Princeton University Press.

Pietsch, W. (2013). Big Data–The New Science of Complexity. Retrieved from http://philsci-archive.pitt.edu/9944/

Schelling, T. C. (1971). Dynamic models of segregation. Journal of Mathematical Sociology, 1(2), 143–186.

Simon, H. A. (1969). The sciences of the artificial (3rd ed.). Cambridge, MA: MIT Press.

Simon, H. A. (1976). Administrative behavior: a study of decision-making processes in administrative organization (3rd ed.). New York: Free Press.

Torrens, P. M. (2010). Geography and computational social science. GeoJournal, 75(2), 133–148.

West, G. (2013, May 15). Big data needs a big theory to go with it. Scientific American.


Formal Organizations, Informal Networks, and Work Flow: An Agent-Based Model

July 11, 2018 (updated July 12, 2018) | Tom Briggs | Tags: ABM, boundary spanners, computational modeling, computational social science, management, network science, organizations, social networks

At the 2018 International Conference on Social Computing, Behavioral-Cultural Modeling, & Prediction and Behavior Representation in Modeling and Simulation (SBP-BRiMS), I presented a short paper titled “Formal Organizations, Informal Networks, and Work Flow: An Agent-Based Model” (PDF).

The paper presents initial findings of a continuing project (view/follow project on ResearchGate) to develop and refine a generalized organizational agent-based model that includes both formal organization hierarchy (i.e., a so-called “organization chart”) and the informal networks that really matter in a company (i.e., what David Krackhardt and Jeffrey R. Hanson aptly called “the company behind the chart“). Such a generalized model would be useful to create simulations of a variety of individual and organizational processes at multiple levels (e.g., employees, managers, executives, and overall organization) and to precisely quantify processes as the simulations unfold.

Initial findings from early model runs suggest potential decreases in both individual and organizational productivity as supervisory span-of-control increases in organizations with cultures of micromanagement.

Presentation:

Below you can read the paper abstract and find out more information about the model.

Abstract:

Few computational network models contrasting formal organization and informal networks have been published. A generalized organizational agent-based model (ABM) containing both formal organizational hierarchy and informal social networks was developed to simulate organizational processes that occur over both formal network ties and informal networks. Preliminary results from the current effort demonstrate “traffic jams” of work at the problematic middle manager level, which varies with the degree of micromanagement culture and supervisory span of control. Results also indicate that some informal network ties are used reciprocally while others are practically unidirectional.

Keywords: organizations, networks, ABM, boundary spanning

Screen capture of NetLogo model interface for organization networks model
GUI of org networks model interface showing a small organization with span of control of 5. Yellow lines show “passes” of jobs using informal network ties when the formal hierarchy was inaccessible.
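
To give a feel for the two structures the model layers together, here is a deliberately simplified sketch (my own illustration, not the published NetLogo model; the span of control, depth, and tie probability are invented): a formal reporting tree generated from a span of control, plus sparse informal ties that ignore the org chart.

```python
# Hypothetical sketch: formal hierarchy + informal ties (invented parameters).
import itertools
import random

random.seed(0)
SPAN, LEVELS, P_INFORMAL = 5, 3, 0.02   # span of control, depth, informal-tie probability

# Formal hierarchy: every employee except the root reports to exactly one manager.
employees, reports_to = [0], {}
frontier, next_id = [0], 1
for _ in range(LEVELS):
    new_frontier = []
    for manager in frontier:
        for _ in range(SPAN):
            employees.append(next_id)
            reports_to[next_id] = manager
            new_frontier.append(next_id)
            next_id += 1
    frontier = new_frontier

# Informal network: undirected ties that ignore the org chart entirely.
informal = {pair for pair in itertools.combinations(employees, 2)
            if random.random() < P_INFORMAL}

def informal_contacts(emp):
    """People a job could be 'passed' to when the formal chain is inaccessible."""
    return [b if a == emp else a for a, b in informal if emp in (a, b)]

print(f"{len(employees)} employees, {len(reports_to)} formal ties, {len(informal)} informal ties")
print("informal contacts of employee 7:", informal_contacts(7))
```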

Model availability:

My model will be made available under the Apache 2.0 license from OpenABM for others to use in their own research. Please feel free to use, refine, or extend this model with attribution.

Full Reference:

Briggs T.W. (2018) Formal Organizations, Informal Networks, and Work Flow: An Agent-Based Model. In: Thomson R., Dancy C., Hyder A., Bisgin H. (eds) Social, Cultural, and Behavioral Modeling. SBP-BRiMS 2018. Lecture Notes in Computer Science, vol 10899. Springer, Cham https://doi.org/10.1007/978-3-319-93372-6_21 (PDF)


Mass Violence: A Computational Social Science Approach – presented at the Association of Threat Assessment Professionals (ATAP) DC Chapter Meeting

May 17, 2018 (updated August 12, 2018) | Tom Briggs | Tags: complexity, complexity science, computational modeling, computational social science, systems, systems thinking

At the Association of Threat Assessment Professionals (ATAP) DC chapter, I had the opportunity to share my perspectives on using a computational social science / complexity science approach for the prevention/mitigation of mass violence.

The ATAP DC group convened in person in Northern Virginia and by videoconference in several other locations along the East Coast. (I’m very grateful to the technical staff at Northern Virginia Community College for all their prework to make sure the technology worked and everything ran smoothly!) The ATAP DC members were a fantastic audience and humored me for what I understand was a slightly different take on mass violence than their usual.

Following a brief overview of computational social science and complexity science, I discussed some of the challenges of researching mass violence: mass violence is rare, complex, difficult to study, lacks agreed-upon theoretical models of causation, and is unfortunately often politicized.

We discussed different types of modeling, from verbal and mental models to mathematical and computational models. I believe that computational models are particularly suited to studying mass violence, and I previously constructed one such model – Active Shooter: An Agent-Based Model of Unarmed Resistance. Computational models offer many benefits for mass violence research, education, and training: they pose no risk to human subjects, they are infinitely repeatable, they are superlative for studying processes in systems, and they can incorporate network features to study the influence of network ties in a particular process or outcome.

I demonstrated several computational models, including my active shooter ABM, as well as Epstein’s civil violence model and an epidemic model showing the spread of a virus between populations. If mass violence is, at least in part, germinated through the spread of the idea of perpetrating mass violence, whether by mass media or the internet, such models are useful in exploring how quickly and broadly such ideas could spread.
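
For readers unfamiliar with that class of model, a minimal SIR-style sketch (my own illustration with invented rates, not the specific NetLogo epidemic model I demonstrated) shows how quickly exposure to an idea can saturate a population once the contact rate outpaces the “recovery” rate:

```python
# Minimal SIR-style dynamics (illustrative rates): "infection" here stands in
# for exposure to an idea spreading through contact.
def sir_step(s, i, r, beta, gamma, n):
    new_exposures = beta * s * i / n
    recoveries = gamma * i
    return s - new_exposures, i + new_exposures - recoveries, r + recoveries

s, i, r = 9990.0, 10.0, 0.0          # susceptible, currently exposed, recovered
beta, gamma, n = 0.4, 0.1, 10000.0   # contact rate, recovery rate, population size

for day in range(0, 101, 20):
    print(f"day {day:3d}: currently exposed = {i:7.1f}, ever reached = {i + r:8.1f}")
    for _ in range(20):
        s, i, r = sir_step(s, i, r, beta, gamma, n)
```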

Finally, I discussed my view that the cumulative strain model proposed by Levin & Madfis is a verbal model that is ripe for a computational implementation.

I concluded by sharing my view that the threat assessment/threat prevention community could make use of computational modeling for training, for research, and perhaps ultimately for pre-warning. Computational modelers have demonstrated the value of working collaboratively with process stakeholders – for example, key officials in threat planning and response in schools and organizations – to perform “participatory” or “companion” modeling, in which stakeholder input is used to iteratively refine a model such that it is useful for the stakeholders in policy development or in response planning.

Computational modeling uses for threat assessment
I see computational modeling benefiting threat assessment and management professionals in at least three areas: for training, both of threat assessors and their clients, for research, and potentially for pre-warning when models are combined with appropriate sources of data on populations.

In the discussion following my presentation, I received several excellent and thoughtful questions, including whether psychopathy could be represented in agents (yes, through an additional modeling effort), whether my prior model could be extended to include armed responders or law enforcement officers (yes), whether these models can be validated (yes, but validation is challenging for many of the same reasons that mass violence research is difficult), and whether such models can be used for pre-scenario planning (yes).

I thoroughly enjoyed my time with ATAP DC and appreciated the opportunity to contribute.

Presentation:

Briggs (2018), Mass Violence: A Computational Social Science Approach (embedded slide deck)

Abstract:

Mass violence is a rare event. Attempts to empirically study episodes of mass violence can present a variety of challenges. The complex nature of episodes of mass violence, which may have germinated in the years prior to actual attacks, makes attempts to use conventional statistical techniques problematic.

Complexity science and the relatively new field of computational social science offer new paradigms and computational tools suited to the study of this dynamic problem. This talk reviews some of the challenges of mass violence research, provides an overview of complexity and computational social science, offers a live demonstration of a computational model of an active shooter scenario, and discusses a potential use case to computationally implement the cumulative strain model proposed by Levin and Madfis in 2009.

Why is this research important? Computational approaches enable new and innovative ways of studying, thinking about, and communicating with stakeholders about mass violence, and should become part of the threat assessment community toolkit.


Apply for the Santa Fe Institute 2017 Graduate Workshop in Computational Social Science Modeling and Complexity – Deadline 14 February 2017

January 16, 2017 (updated January 17, 2017) | Tom Briggs | Tags: complexity science, computational social science, conference, CSS, network science, Santa Fe Institute, social network analysis

In the summer of 2016, I had the good fortune to be selected along with nine other students to attend the 2016 Graduate Workshop in Computational Social Science and Modeling (GWCSS) at the Santa Fe Institute in Santa Fe, NM. Carnegie Mellon Professor John H. Miller and University of Michigan Professor Scott E. Page were our excellent Santa Fe faculty guides for the workshop, which included a combination of lectures on complexity science, complex systems, and computational modeling by Professors Miller and Page, as well as other Santa Fe faculty including Chris Kempes, Simon DeDeo, and Mirta Galesic.

2016 Santa Fe Institute Graduate Workshop on Computational Social Science and Complexity Modeling - Tom Briggs

The workshop was a two-week, residential intensive in beautiful Santa Fe. Concurrently, SFI runs a four-week Complex Systems Summer School, which brings in additional notables in the field of complexity and computational social science, including economist Brian Arthur; Robert Boyd, whose 1988 book on cultural evolution, coauthored with Peter J. Richerson, has been cited more than 7,000 times; and Doyne Farmer. As a GWCSS attendee, I was able to attend any of the Summer School lectures I could slip away for.

Lectures and discussions were wide-ranging but all anchored in the science of complexity and computational modeling as tools of scientific investigation. A sampling of our discussions in the GWCSS:

  • model diversity (Scott Page)
  • evolutionary computation (John Miller)
  • information theory and conflict (Simon DeDeo)
  • network structure and performance (Mirta Galesic)

I was incredibly lucky to attend the 2016 workshop with a distinguished cohort of fellow graduate students, from whom I learned a great deal during conversation, discussion, and spirited debate over the course of our two weeks together.

During the 2016 GWCSS, I began work on an agent-based model to investigate the role of personality variables in workplace situations (view project on ResearchGate). This work continues.

I am grateful to have had the opportunity to attend the 2016 Graduate Workshop in Computational Social Science and recommend it highly.

If you have an interest in complexity science and computational social science, consider applying for either the 2017 GWCSS or the 2017 Complex Systems Summer School at the Santa Fe Institute.

Please note that the application deadline for the 2017 GWCSS is 14 February 2017.

More information about the 2016 workshop I attended, including a reading list, is available from the 2016 website and wiki.


Active Shooter: An Agent-Based Model (ABM) of Unarmed Resistance

December 14, 2016 (updated December 15, 2017) | Tom Briggs | Tags: ABM, agent-based modeling, complexity science, computational modeling, computational social science, NetLogo

At the 2016 Winter Simulation Conference, I presented “Active Shooter: An Agent-Based Model of Unarmed Resistance” (PDF) (view/follow project on ResearchGate), coauthored with William G. Kennedy. The paper appeared in WSC 2016’s Social and Behavioral Simulation track, chaired by Ugo Merlone of the University of Torino and Stephen Davies from the University of Mary Washington.

Presentation:

Background and motivation:

I initially undertook this modeling project during a 2015 seminar on agent-based modeling (ABM) for military applications offered by Kenneth Comer at George Mason University. The issue of active and mass shooters is obviously not unique to the military, but the shootings at Ft. Hood and the very few ABMs [1, 2] I could find on mass shootings suggest more attention from computational modelers is needed.

Examining mass shootings through the lens of complexity science and agent-based modeling raised an additional wrinkle that’s seldom addressed in the literature or training for active shooter events. Specifically, the prescription to “Run, Hide, Fight” (or “Avoid, Deny Defend”) might work for individuals, but what about the impact one individual’s behavior has on another individual’s likelihood of surviving an active shooter incident? To illustrate, imagine an active shooting is unfolding and a potential victim is able to run and hide, barricading himself in a supply closet. What happens when the next potential victim arrives at that same supply closet, having also planned to use it? The individual inside the closet has monopolized the secure hiding space, leaving the second individual at greater risk.

The study of mass shootings is complicated. Despite disproportionate media coverage, mass shootings are extremely rare, and each mass shooting is unique and difficult to generalize. Even the definition of “mass shooting” is debated. In my work, I’ve relied on the U.S. Federal Bureau of Investigation for official definitions, not because I think they are necessarily the best possible, but because they provide an authoritative baseline to study the issue. The FBI considered a “mass shooting” to be a single incident in which four or more individuals are killed (this is the definition of “mass murder”) involving the criminal use of a firearm. Importantly, this definition means that an incident in which three are shot and killed would not rise to the level of “mass shooting,” nor would an incident in which many were shot but fewer than four died of their wounds.

Compounding the rarity of mass shootings available for study is the fact that many of the primary sources (including the shooter himself) are often deceased. This fact, combined with methodological issues of relying on eyewitness accounts that are subject to recall errors and bias, makes the study of historical mass shootings difficult. Conducting experiments in which participants believed they could possibly die would be unethical and dangerous. Computational modeling and simulation, on the other hand, would offer an infinitely repeatable virtual laboratory in which to study this issue and would have no possibility of harming human subjects.

I had initially imagined a model that would include a variety of agents: shooter(s), victims, and peacekeepers/LEOs. While developing the model, I realized that what was most important was creating a model that allowed users–those with more subject-matter expertise than me–to calibrate the model to their knowledge. The eventual version 1 was a generalized outdoor model in which a shooter targets and fires upon victims, victims flee the shooter, and, if set by the user, a small proportion of victims attempt to subdue the shooter, as occurred during the thwarted attack in 2015 on a Thalys train traveling from Amsterdam to Paris.

User-settable parameters permit the user to control the shooter’s accuracy, armament, and, importantly, the probability of a “fighter” overcoming a shooter.

Active Shooter Model GUI showing crowded open landscape, randomly-located shooter (red), “fighters” (yellow), and user-adjustable parameters and real-time output metrics.

Findings:

Initial findings suggest that even with a very small probability of success – a 1 out of 100 chance for each second of the struggle – a very small proportion of “fighters” in a population can potentially subdue an active shooter, even if that shooter has a 50 percent probability per second of disabling the “fighters.”

Active shooter subdued at 39 seconds by fighters in the victim population. Fighters (yellow) are disproportionately represented in the casualties.
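
As a companion to these findings, a toy Monte Carlo of just the struggle dynamic (my own simplification using the per-second probabilities quoted above, not the published ABM, which also includes movement, targeting, and fleeing) gives a sense of how the number of fighters who engage changes the odds:

```python
# Toy Monte Carlo of the struggle only (illustrative, not the published model):
# each second, every engaged fighter has a 1% chance of subduing the shooter,
# while the shooter has a 50% chance of disabling each engaged fighter.
import random

random.seed(0)

def struggle(n_fighters, p_subdue=0.01, p_disable=0.5, max_seconds=300):
    for second in range(1, max_seconds + 1):
        if any(random.random() < p_subdue for _ in range(n_fighters)):
            return True, second                     # shooter subdued
        n_fighters -= sum(random.random() < p_disable for _ in range(n_fighters))
        if n_fighters == 0:
            return False, second                    # all fighters disabled
    return False, max_seconds

trials = 10_000
for group_size in (1, 3, 5, 10):
    successes = sum(struggle(group_size)[0] for _ in range(trials))
    print(f"{group_size:2d} fighters engage: shooter subdued in {successes / trials:.0%} of runs")
```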

Model availability:

My model is available under the Apache 2.0 license from OpenABM for others to use in their own research. I encourage other modelers to extend this model and have made several suggestions for doing so in the paper. In particular, I think the idea of rapid collective action–a coordinated group or swarm attack–should be studied as a potential counter to an active shooter.

Abstract:

Mass shootings unfold quickly and are rarely foreseen by victims. Increasingly, training is provided to increase chances of surviving active shooter scenarios, usually emphasizing “Run, Hide, Fight.” Evidence from prior mass shootings suggests that casualties may be limited should the shooter encounter unarmed resistance prior to the arrival of law enforcement officers (LEOs). An agent-based model (ABM) explored the potential for limiting casualties should a small proportion of potential victims swarm a gunman, as occurred on a train from Amsterdam to Paris in 2015. Results suggest that even with a miniscule probability of overcoming a shooter, fighters may save lives but put themselves at increased risk. While not intended to prescribe a course of action, the model suggests the potential for a reduction in casualties in active shooter scenarios.

Keywords: agent-based modeling, ABM, active shooter, mass shooter, military, law enforcement, LEO, firearms, guns, violence, terrorism, security

Full Reference:

Briggs, T. W., & Kennedy, W. G. (2016, December). Active shooter: an agent-based model of unarmed resistance. In Proceedings of the 2016 Winter Simulation Conference (pp. 3521-3531). IEEE Press. https://doi.org/10.1109/WSC.2016.7822381 (PDF)


Getting Started with Agent-Based Modeling (ABM)

July 12, 2016 | Tom Briggs | Tags: ABM, agent-based modeling, computational modeling, computational social science, CSS, NetLogo, SNA, social network analysis

A colleague recently asked how to get started with agent-based modeling (ABM).

It’s never been easier to learn ABM, whether you’re a social scientist, physical scientist, engineer, computer scientist, or from any discipline, really.

If you want to start right this minute, the very best thing to do is to head over to Uri Wilensky’s NetLogo website, download NetLogo (available for any OS) free of charge, and then work through the three learning tutorials available under “Learning NetLogo” in the User Manual.

The first tutorial is titled “Models” and, as its title suggests, introduces you to interacting with existing NetLogo models such as the Wolf-Sheep Predation model of an ecosystem.

The second tutorial is titled “Commands” and takes you a bit deeper into issuing commands to the NetLogo interface.

The third tutorial is titled “Procedures” and walks you through building a model from scratch – writing the necessary NetLogo code to implement a basic agent-based model.

After the three tutorials, the NetLogo website encourages reading through the guides available in the NetLogo documentation (Interface, Info Tab, Programming) and making use of the NetLogo Dictionary, a comprehensive index of NetLogo methods, procedures, and keywords.

What’s great about NetLogo is that it is fairly intuitive and “programming” or “coding” in NetLogo is very quickly learned, making a first agent-based model possible in a very short time.

If you prefer using a textbook as a guide, my recommendation is Uri Wilensky and Bill Rand’s An Introduction to Agent-Based Modeling (find in a library), which uses NetLogo and includes companion code and models to run through all of the essentials of agent-based modeling.

Please see my review of Wilensky and Rand’s Introduction to Agent-Based Modeling for more detail on the book – which is excellent – and what it covers.

If you want to get started with ABM, download NetLogo today.

A short 2009 video describing NetLogo and some capabilities:


Review: An Introduction to Agent-Based Modeling by Uri Wilensky and William Rand

June 10, 2016 (updated June 11, 2016) | Tom Briggs | Tags: book review, complexity science, computational modeling, computational social science


Uri Wilensky and William Rand’s An Introduction to Agent-Based Modeling: Modeling Natural, Social, and Engineered Complex Systems with NetLogo (find in a library) is the single best book I’ve encountered for anyone interested in agent-based modeling (ABM) in any discipline and at any level (K-12, undergraduate, graduate, professional).

At nearly 500 full-color pages, Wilensky and Rand’s book does an excellent job progressively walking through the decision to use agent-based modeling, creating simple ABMs, extending preexisting ABMs, creating more complicated ABMs, analyzing ABMs, and conducting verification, validation, and replication.

One of the greatest strengths of Wilensky and Rand’s approach is that IABM (An Introduction to Agent-Based Modeling) is a hands-on, exploratory book intended for use with the NetLogo multi-agent modeling environment, which is freely available for download. Each chapter of IABM includes many illustrative examples, all implemented and executed in NetLogo. Moreover, the example models and code are not just available to readers (again, free of charge), but are conveniently bundled in the current release of NetLogo. In other words, rather than just read about the models, the reader is encouraged to run the models for himself or herself. The Chinese proverb says it best:

Tell me and I’ll forget;
show me and I may remember;
involve me and I’ll understand.

Beyond just running the models described in the book, each chapter concludes with a substantial number of exercises or “Explorations,” usually numbering 20 to 30. Each Exploration is a potentially deep opportunity to learn more about ABM by getting involved rather than just reading, as the Chinese proverb suggests.

Wilensky and Rand do a very nice job of using illustrative models from a variety of disciplines; one example might come from the social sciences and the next example from ecology. This is helpful since each reader may come from a different background or have different experience or interests.

The book requires no special background in mathematics or computer science, which is a huge plus in terms of accessibility to a broader audience.

The authors suggest that it could be used as a textbook for an undergraduate course on complex systems or a computer science course on ABM, or even as a supplement to science, social science, or engineering classes. Graduate students who wish to use ABM in their research – regardless of discipline – would likely find IABM one of the best possible places to start. Even experienced researchers with no agent-based modeling experience would benefit from IABM as an introduction to the method.

While the book is aimed at upper-level undergraduates and graduate students, it covers enough to enable the reader to create very detailed and scientifically valuable agent-based models in NetLogo. The authors reserve a final chapter for “advanced” applications potentially of greater interest to individuals interested in specific sorts of ABM: computationally intensive models, participatory or stakeholder-driven modeling, robotics, spatial and geographic information systems (GIS), and network science / social network analysis. They select just a handful of NetLogo’s more advanced capabilities to describe in this chapter, but include helpful references enabling interested readers to learn more.

I can’t find anything about IABM to criticize, though reading it cover to cover (as I did) is certainly an investment of time, albeit a worthwhile one for the reader wishing to learn and use agent-based modeling.

One of the best ways to explore the science of complexity and how complexity theory can be applied to the numerous real-world phenomena we experience and study is through agent-based modeling. Uri Wilensky and Bill Rand have written an excellent book to help anyone do just that, and I recommend An Introduction to Agent-Based Modeling: Modeling Natural, Social, and Engineered Complex Systems with NetLogo (find in a library) to anyone wishing to get started with agent-based modeling.

Wilensky and Rand make ABM accessible and, importantly, thoroughly enjoyable to learn.

The book’s companion website is http://www.intro-to-abm.com/.
