In the summer of 2016, I had the good fortune to be selected along with nine other students to attend the 2016 Graduate Workshop in Computational Social Science and Modeling (GWCSS) at the Santa Fe Institute in Santa Fe, NM. Carnegie Mellon Professor John H. Miller and University of Michigan Professor Scott E. Page were our excellent Santa Fe faculty guides for the workshop, which included a combination of lectures on complexity science, complex systems, and computational modeling by Professors Miller and Page, as well as other Santa Fe faculty including Chris Kempes, Simon DeDeo, and Mirta Galesic.
The workshop was a two-week residential intensive in beautiful Santa Fe. Concurrently, SFI runs a four-week Complex Systems Summer School that brings in additional notables in the fields of complexity and computational social science, including economist Brian Arthur, Robert Boyd (whose 1985 book on cultural evolution, coauthored with Peter J. Richerson, has been cited more than 7,000 times), and Doyne Farmer. As a GWCSS attendee, I was able to attend any of the Summer School lectures I could slip away for.
Lectures and discussions were wide-ranging but all anchored in the science of complexity and computational modeling as tools of scientific investigation. A sampling of our discussions in the GWCSS:
* model diversity (Scott Page)
* evolutionary computation (John Miller)
* information theory and conflict (Simon DeDeo)
* network structure and performance (Mirta Galesic)
I was incredibly lucky to attend the 2016 workshop with a distinguished cohort of fellow graduate students, from whom I learned a great deal during conversation, discussion, and spirited debate over the course of our two weeks together.
During the 2016 GWCSS, I began work on an agent-based model to investigate the role of personality variables in workplace situations (view project on ResearchGate). This work continues.
I am grateful to have had the opportunity to attend the 2016 Graduate Workshop in Computational Social Science and recommend it highly.
I initially undertook this modeling project during a 2015 seminar on agent-based modeling (ABM) for military applications offered by Kenneth Comer at George Mason University. The issue of active and mass shooters is obviously not unique to the military, but the shootings at Ft. Hood and the very few ABMs [1, 2] I could find on mass shootings suggest that more attention from computational modelers is needed.
Examining mass shootings through the lens of complexity science and agent-based modeling raised an additional wrinkle that’s seldom addressed in the literature or training for active shooter events. Specifically, the prescription to “Run, Hide, Fight” (or “Avoid, Deny, Defend”) might work for individuals, but what about the impact one individual’s behavior has on another individual’s likelihood of surviving an active shooter incident? To illustrate, imagine an active shooting is unfolding and a potential victim is able to run and hide, barricading himself in a supply closet. What happens when the next potential victim arrives at that same supply closet, having also planned to use it? The individual inside the closet has monopolized the secure hiding space, leaving the second individual at greater risk.
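To make that wrinkle concrete, here is a toy sketch in Python. The numbers and names are purely illustrative (this is not the published model): hiding spots hold one person each, first come, first served, and latecomers are left exposed.

```python
import random

def seek_shelter(n_victims, n_closets, seed=None):
    """Each closet shelters exactly one person, first come, first served.
    Returns (sheltered, exposed) lists of victim ids."""
    rng = random.Random(seed)
    arrival_order = list(range(n_victims))
    rng.shuffle(arrival_order)
    sheltered, exposed = [], []
    for victim in arrival_order:
        if len(sheltered) < n_closets:
            sheltered.append(victim)   # closet claimed
        else:
            exposed.append(victim)     # every closet already occupied
    return sheltered, exposed

sheltered, exposed = seek_shelter(n_victims=10, n_closets=3, seed=1)
# with 10 victims and 3 single-occupancy closets, 7 are left exposed
```

The point of the sketch is simply that one individual's rational "hide" decision consumes a shared resource, worsening outcomes for those who arrive next.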
The study of mass shootings is complicated. Despite disproportionate media coverage, mass shootings are extremely rare, and each mass shooting is unique and difficult to generalize. Even the definition of “mass shooting” is debated. In my work, I’ve relied on the U.S. Federal Bureau of Investigation for official definitions, not because I think they are necessarily the best possible, but because they provide an authoritative baseline for studying the issue. The FBI considers a “mass shooting” to be a single incident in which four or more individuals are killed (this is the definition of “mass murder”) involving the criminal use of a firearm. Importantly, this definition means that an incident in which three are shot and killed would not rise to the level of “mass shooting,” nor would an incident in which many were shot but fewer than four died of their wounds.
Compounding the rarity of mass shootings available for study is the fact that many of the primary sources (including the shooter himself) are often deceased. This fact, combined with methodological issues of relying on eyewitness accounts that are subject to recall errors and bias, makes the study of historical mass shootings difficult. Conducting experiments in which participants believed they could possibly die would be unethical and dangerous. Computational modeling and simulation, on the other hand, would offer an infinitely repeatable virtual laboratory in which to study this issue and would have no possibility of harming human subjects.
I had initially imagined a model that would include a variety of agents: shooter(s), victims, and peacekeepers/LEOs. While developing the model, I realized that what was most important was creating a model that allowed users–those with more subject-matter expertise than me–to calibrate the model to their knowledge. The eventual version 1 was a generalized outdoor model in which a shooter targets and fires upon victims, victims flee the shooter, and, if enabled by the user, a small proportion of victims attempt to subdue the shooter, as occurred during the thwarted attack in 2015 on a Thalys train traveling from Amsterdam to Paris.
User-settable parameters permit the user to control the shooter’s accuracy, armament, and, importantly, the probability of a “fighter” overcoming a shooter.
Initial findings suggest that even with a very small probability of success – a 1 out of 100 chance for each second of the struggle – a very small proportion of “fighters” in a population can potentially subdue an active shooter, even if that shooter has a 50 percent probability per second of disabling the “fighters.”
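The intuition behind that finding can be sketched with a quick Monte Carlo simulation. This is a simplified sequential version using only the two probabilities stated above, not the spatial dynamics of the actual model, and the function names are mine:

```python
import random

def struggle(n_fighters, p_subdue=0.01, p_disable=0.5, rng=random):
    """One engagement: fighters confront the shooter one at a time.
    Each second, the fighter subdues the shooter with prob p_subdue;
    otherwise the shooter disables that fighter with prob p_disable."""
    for _ in range(n_fighters):
        while True:
            if rng.random() < p_subdue:
                return True          # shooter subdued
            if rng.random() < p_disable:
                break                # fighter down; next one steps up
    return False                     # all fighters disabled

def estimate_win_rate(n_fighters, trials=20_000, seed=0):
    rng = random.Random(seed)
    wins = sum(struggle(n_fighters, rng=rng) for _ in range(trials))
    return wins / trials

p_one = estimate_win_rate(1)
p_five = estimate_win_rate(5)
# each additional fighter compounds the small per-second chance
```

Even though any single fighter usually loses the struggle, the small per-second success probability compounds across fighters, which is the qualitative pattern the model produced.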
My model is available under the Apache 2.0 license from OpenABM for others to use in their own research. I encourage other modelers to extend this model and have made several suggestions for doing so in the paper. In particular, I think the idea of rapid collective action–a coordinated group or swarm attack–should be studied as a potential counter to an active shooter.
Mass shootings unfold quickly and are rarely foreseen by victims. Increasingly, training is provided to increase chances of surviving active shooter scenarios, usually emphasizing “Run, Hide, Fight.” Evidence from prior mass shootings suggests that casualties may be limited should the shooter encounter unarmed resistance prior to the arrival of law enforcement officers (LEOs). An agent-based model (ABM) explored the potential for limiting casualties should a small proportion of potential victims swarm a gunman, as occurred on a train from Amsterdam to Paris in 2015. Results suggest that even with a minuscule probability of overcoming a shooter, fighters may save lives but put themselves at increased risk. While not intended to prescribe a course of action, the model suggests the potential for a reduction in casualties in active shooter scenarios.
Keywords: agent-based modeling, ABM, active shooter, mass shooter, military, law enforcement, LEO, firearms, guns, violence, terrorism, security
Briggs, T. W., & Kennedy, W. G. (2016, December). Active shooter: an agent-based model of unarmed resistance. In Proceedings of the 2016 Winter Simulation Conference (pp. 3521-3531). IEEE Press. https://doi.org/10.1109/WSC.2016.7822381 (PDF)
At the 2016 conference of the Computational Social Science Society of the Americas (CSSSA), I presented a paper, coauthored with Andrew Crooks, titled “Close, But Not Close Enough: A Spatial Agent-Based Model of Manager-Subordinate Proximity.” In the paper we present our preliminary effort to explore how workplace layout impacts manager-subordinate interaction likelihood. We developed a spatial agent-based model to simulate how the physical seat locations of individuals with reporting relationships might enhance or detract from an effective manager-subordinate relationship.
Initial findings from the model suggest that when subordinates are separated from their manager by even a single floor in a building, they are substantially less likely to encounter their manager during a typical day or week. This results in same-floor or same-space subordinates receiving a disproportionate share of “unearned” managerial attention relative to different-floor colleagues, with many potential implications for issues like onboarding, feedback and performance management, and potentially career advancement.
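A back-of-the-envelope sketch shows why a single floor of separation matters so much: it multiplies an already-small per-minute encounter probability by the small fraction of time the subordinate spends on the manager's floor. The parameters below are illustrative assumptions, not the ones in the paper:

```python
import random

def encounter_rate(p_same_floor, spots=100, minutes=480,
                   trials=2_000, seed=0):
    """Chance of at least one spontaneous encounter in an 8-hour day.
    Each minute, manager and subordinate each occupy one of `spots`
    locations; they can only meet when the subordinate happens to be
    on the manager's floor (probability p_same_floor per minute)."""
    rng = random.Random(seed)
    met = 0
    for _ in range(trials):
        for _ in range(minutes):
            if (rng.random() < p_same_floor
                    and rng.randrange(spots) == rng.randrange(spots)):
                met += 1
                break                # one encounter is enough
    return met / trials

same_floor_rate = encounter_rate(p_same_floor=1.0)    # co-located
other_floor_rate = encounter_rate(p_same_floor=0.05)  # occasional trips
```

Under these toy assumptions, a co-located pair almost always meets at least once in a day, while the different-floor pair frequently goes the whole day without a single spontaneous encounter.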
Below you can read the abstract of our paper and find out more information about the model.
Employees may be co-located with their manager or they may be separated by distances ranging from a short walk to across oceans, with many gradations in between. Some distances, such as those between floors of an office building, are physically short but may be psychologically quite far. The current project developed a spatial ABM to examine the likelihood of unplanned manager-subordinate encounters in an office setting with two floors. Early results suggest that subordinates located on a different floor than their manager are substantially less likely to have even a single spontaneous encounter with their manager in a work day, despite a relatively short physical separation. If leader-follower (i.e., manager-subordinate) relationships are influenced by spontaneous face-to-face encounters, this finding represents a challenge for organizations with managers having subordinates who are close, but not close enough. Additionally, attempting to impose top-down requirements to travel between floors (e.g., when scheduling meetings) may do surprisingly little to abate this problem. Implications of these findings for organizations are discussed, as are limitations and future research, including possibilities for future model verification and validation.
Briggs, T.W. and Crooks, A.T. (2016), Close, But Not Close Enough: A Spatial Agent-Based Model of Manager-Subordinate Proximity. The Computational Social Science Society of the Americas Conference, Santa Fe, NM. (View project on ResearchGate).
Charles Kadushin, emeritus Professor of Sociology at the CUNY Graduate Center, has been engaged in social science research on network topics since the mid-1960s and has example after example of not only his own work with networks in social science, but also citations of all of the other social scientists I’d expect to see: Ron Burt, Ed Laumann, Stanley Milgram, Stephen Borgatti, Daniel Brass, and Barry Wellman, to name only a few.
Kadushin takes a decided and purposefully social approach to social networks, noting in his introduction that although network science can be applied to power grids, for example, understanding social networks really requires examining them “as if people mattered.” Kadushin proceeds to explore both the psychological and sociological theories underpinning networks as well as the social consequences of networks and their structures.
The first few chapters provide an overview of network concepts, moving from individual network members (Chapter 2) through entire social networks and their subcomponents and network properties (Chapter 3) and finally network segmentation (Chapter 4).
Chapter 5 explores the psychological foundations of social networks and the book continues through successive levels, next examining small groups and leaders (Chapter 6), then entire organizations (Chapter 7), small-world networks and community structures (Chapter 8), followed by network processes like influence and diffusion (Chapter 9). Chapter 10 explores social capital as a function of networks and network position and Chapter 11 gives much-needed attention to ethical dilemmas in social network research. Finally, Chapter 12 reviews “ten master ideas” of social networks.
I found Kadushin’s book extremely helpful in pointing to citations of social network analysis applied to social science. For any social scientist interested in social networks, I’d strongly recommend starting with Understanding Social Networks (with Borgatti, Everett, and Johnson’s Analyzing Social Networks as a second choice). I will also note that while Kadushin focuses on social science, he does not shy away from covering the work of physicists and others on networks, though he avoids mathematics in his explanations (but references the appropriate papers).
Likewise, for the general reader, I can’t think of a better book that explains social networks and their applications to social science and social ideas than what Kadushin offers here. An additional strength of the book is Kadushin’s enjoyable writing style and clear and concise recap at the end of each chapter in which he informs the reader “where we are now.”
A colleague recently asked how to get started with agent-based modeling (ABM).
It’s never been easier to learn ABM, whether you’re a social scientist, physical scientist, engineer, computer scientist, or from any discipline, really.
If you want to start right this minute, the very best thing to do is to head over to Uri Wilensky’s NetLogo website, download NetLogo (available for any OS) free of charge, and then work through the three learning tutorials available under “Learning NetLogo” in the User Manual.
The first tutorial is titled “Models” and, as its title suggests, introduces you to interacting with existing NetLogo models such as the Wolf-Sheep Predation model of an ecosystem.
The second tutorial is titled “Commands” and takes you a bit deeper in issuing commands to the NetLogo interface.
The third tutorial is titled “Procedures” and walks you through building a model from scratch – writing the necessary NetLogo code to implement a basic agent-based model.
After the three tutorials, the NetLogo website encourages reading through the guides available in the NetLogo documentation (Interface, Info Tab, Programming) and making use of the NetLogo Dictionary, a comprehensive index of NetLogo methods, procedures, and keywords.
What’s great about NetLogo is that it is fairly intuitive and “programming” or “coding” in NetLogo is very quickly learned, making a first agent-based model possible in a very short time.
If you prefer using a textbook as a guide, my recommendation is Uri Wilensky and Bill Rand’s Introduction to Agent-Based Modeling (find in a library), which uses NetLogo and includes companion code and models to run through all of the essentials of agent-based modeling.
At nearly 500 full-color pages, Wilensky and Rand’s book does an excellent job progressively walking through the decision to use agent-based modeling, creating simple ABMs, extending preexisting ABMs, creating more complicated ABMs, analyzing ABMs, and conducting verification, validation, and replication.
One of the greatest strengths of Wilensky and Rand’s approach is that IABM (Introduction to Agent-Based Modeling) is a hands-on, exploratory book intended for use with the NetLogo multi-agent modeling environment, which is freely available for download. Each chapter of IABM includes many illustrative examples, all implemented and executed in NetLogo. Moreover, the example models and code are not just available to readers (again, free of charge), but are conveniently bundled in the current release of NetLogo. In other words, rather than just read about the models, the reader is encouraged to run the models for themselves. The Chinese proverb says it best:
Tell me and I’ll forget;
show me and I may remember;
involve me and I’ll understand.
Beyond just running the models described in the book, each chapter concludes with a substantial number of exercises or “Explorations,” usually numbering 20 to 30. Each Exploration is a potentially deep opportunity to learn more about ABM by getting involved rather than just reading, as the Chinese proverb suggests.
Wilensky and Rand do a very nice job of using illustrative models from a variety of disciplines; one example might come from the social sciences and the next example from ecology. This is helpful since each reader may come from a different background or have different experience or interests.
The book requires no special background in mathematics or computer science, which is a huge plus in terms of accessibility to a broader audience.
The authors suggest that it could be used as a textbook for an undergraduate course on complex systems or a computer science course on ABM, or even as a supplement to science, social science, or engineering classes. Graduate students who wish to use ABM in their research – regardless of discipline – would likely find IABM one of the best possible places to start. Even experienced researchers with no agent-based modeling experience would benefit from IABM as an introduction to the method.
While the book is aimed at high-level undergraduates and graduate students, it is sufficient to successfully create very detailed and scientifically valuable agent-based models in NetLogo. The authors reserve a final chapter for “advanced” applications potentially of greater interest to individuals interested in specific sorts of ABM: computationally intensive models, participatory or stakeholder-driven modeling, robotics, spatial and geographic information systems (GIS), and network science / social network analysis. They select just a handful of NetLogo’s more advanced capabilities to describe in this chapter, but include helpful references enabling interested readers to learn more.
I can’t find anything about IABM to criticize, though reading it cover to cover (as I did) is certainly an investment of time, albeit a worthwhile one for the reader wishing to learn and use agent-based modeling.
I’ve been fielding more questions about research ethics and protecting individuals with regard to data science and big data. The topic warrants a much more in-depth discussion than this blog post, but I’ve noticed one trend that’s worth pointing out: academics previously working at research universities leaving academia, temporarily or permanently, for tech companies and industry.
Academic researchers are almost always required to submit their research proposals to their organization’s Institutional Review Board (IRB), an interdisciplinary group of researchers charged with protecting human subjects as outlined in the 1979 Belmont Report and overseeing research ethics training at most universities and research organizations. Private companies are under no such obligation, as the controversial Facebook study (PDF) of emotional contagion demonstrated. These companies rely on the permissions granted by users who consent to the Terms of Service agreements prior to signing up for the service.
For me, it remains an open question whether researchers in private industry are adhering to a “do no harm” maxim. The obvious tension is that profit-motivated entities like startups and publicly-traded tech companies are interested in maximizing investor or shareholder value and are not subject to the same research ethics requirements as publicly-funded research universities.
I’m encouraged that some academic researchers like Jessica Vitak are tackling these issues and looking for ways to increase transparency in big data use. Vitak’s Privacy + Security Internet Research Lab is tackling exactly these questions. I had the opportunity to hear Vitak speak at the recent Human-Computer Interaction Laboratory annual symposium at the University of Maryland, College Park. One of the potential solutions that Vitak suggests is that the peer review process for academic publications and conferences needs to fill gaps left by insufficient IRB expertise in some areas of data science. This won’t necessarily change what private companies do with individual data, but it’s certainly a start. The controversial Facebook study now includes an “Editorial Expression of Concern,” which appeared after the publication of the study. Had the editor and peer reviewers at PNAS been more attuned to research ethics and human subjects protection during the peer review process, the Facebook authors might have been asked to do a much better job of addressing the ethical implications in their research.
Of course, this raises the thornier question of rejecting research that does not adhere to accepted human subjects protections: in this case, we do not reward the authors for failing to conduct research in an ethical manner, but we prevent information about the research from entering the public domain. I don’t have a good answer to this issue.
I don’t specifically intend to pick on the tech companies here. Plenty of other industries have, in the name of profit-driven research, done harm. But tech companies also represent a particularly desirable organization in which to do research. Traditionally, researchers, especially in the social sciences, had to painstakingly collect their own experimental or correlational data. This was both time consuming and expensive, and perhaps too often resulted in non-significant findings because the research sample was too small. Tech companies, on the other hand, are awash in data that represents a potential intellectual gold mine for social scientists.
My hope is that those who leave academia for the bountiful data available at tech companies remember and abide by their research ethics training, even when they aren’t required to. I also hope that tech companies are engaging with experts in research ethics and taking any objections by those experts seriously.
A recent NPR Hidden Brain podcast episode, “This is Your Brain on Uber,” featured an interview with Keith Chen, who appears to be both Head of Economic Research at Uber and a tenured professor at Yale. If he indeed holds dual roles, it raises important ethical questions about the research he is conducting for Uber. Does Chen conform to the same human subjects protection protocols at Uber that he must when working “at” Yale? Or is there an artificial separation because Uber isn’t Yale and isn’t subject to the same requirements?
During the episode, Shankar Vedantam at one point asks Chen about the implications for individual users’ privacy in research projects based on users’ data. Chen seemed concerned about the implications Vedantam raised, but also somewhat dismissive, simply suggesting that Uber has a Privacy Officer, a hire that was made only after a user outcry when it was discovered that an Uber executive may have inappropriately used his access to track the movements of a reporter. Chen said he didn’t usually worry about his behavioral data being used by tech companies, but that Vedantam’s question is now making him think more about it.
I am encouraged that reporters are challenging researchers and industry on their data and research practices and I certainly don’t believe we should throw the proverbial baby out with the bathwater here. There is much to be gained by using these first-ever datasets of human behavior that will add to what we know and understand about humans and social behavior.
It’s also the case that with great power comes great responsibility. Greater transparency, the involvement of research ethicists, and ensuring truly informed participants should be required not just for academic researchers, but also for researchers working in industry.
Look for a future post on the role of psychologists in the ethical conduct of research, and why I believe that a professional code of ethics is a vital component of protecting individuals.
I picked up Yuval Noah Harari’s Sapiens (find in a library) because of my academic interest in early social complexity and specifically how we humans became the complex social creatures embedded in networks that we are today.
Ostensibly a “history” book (Harari’s PhD at Oxford was in history), Sapiens unexpectedly turned out to be much more than a history book full of names, dates, and places. Instead, Harari focuses on what I can best describe as large-scale shifts in the population (both form and quantity) of the Earth, and, of critical importance, the origins of these shifts.
While Harari doesn’t specifically bring a complexity science perspective to Sapiens, he is erudite and obviously exposed to a broad range of ideas and academic disciplines in addition to history. Biological anthropology, archaeology, cognitive psychology, environmental science, and economics are all very well represented.
Harari doesn’t pull any punches and relies heavily on research and science throughout the book. He discusses both sides of issues and notes when evidence is scant and debate continues, as in the competing hypotheses regarding what happened to Homo neanderthalensis, commonly known as Neanderthals. This question appears to have been answered in recent years by DNA evidence, though some “how” questions still remain.
The book is hefty at just over 400 pages, but Harari’s style drew me in from the start: the book opens with a two-page “Timeline of History,” which starts 13.5 billion years ago, proceeds through the Industrial Revolution, and ends, interestingly, at “the Future.”
To give an example of his style:
13.5 billion years ago: Matter and energy appear. Beginning of physics. Atoms and molecules appear. Beginning of chemistry.
3.8 billion years ago: Emergence of organisms. Beginning of biology.
500 years ago: The Scientific Revolution. Humankind admits its ignorance and begins to acquire unprecedented power.
I know of no other author who’d pen “Beginning of physics” and “Beginning of biology” in this manner.
Harari is both concise and a contrarian, and I love a contrarian thinker.
Moreover, he gets complexity and the means by which large-scale cascades and changes can occur as the result of many small interactions (“tipping points,” to borrow Malcolm Gladwell’s book title).
In one of my favorite passages, Harari invokes chaos theory in explaining why history can’t be explained deterministically nor can the future be predicted. He writes on page 240:
So many forces are at work and their interactions are so complex that extremely small variations in the strength of the forces and the way they interact produce huge differences in outcomes.
He continues, explaining that exacerbating the problem of predicting the future is the fact that history is a Level Two chaotic system. A Level One chaotic system, like the weather, does not react to predictions made about it. A Level Two chaotic system, on the other hand, reacts to predictions made about it. Example: stock markets.
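The distinction can be sketched in a few lines of code (my illustration, not Harari's): both systems follow the same chaotic dynamics, but in the Level Two system agents react to the published forecast, which helps invalidate it.

```python
def step(x):
    return 3.9 * x * (1 - x)   # logistic map: deterministic yet chaotic

def mean_forecast_error(reacts, steps=20, x=0.3):
    """A Level One system ignores the published forecast; a Level Two
    system reacts to it, so the forecast spoils its own accuracy."""
    errors = []
    for _ in range(steps):
        forecast = step(x)             # forecast published in advance
        nxt = step(x)
        if reacts:                     # e.g. traders act on the forecast
            nxt -= 0.3 * (forecast - 0.5)
        errors.append(abs(nxt - forecast))
        x = nxt
    return sum(errors) / len(errors)

weather_error = mean_forecast_error(reacts=False)  # forecasts come true
market_error = mean_forecast_error(reacts=True)    # reaction spoils them
```

The reaction term here is an arbitrary stand-in for any collective response to a prediction; the point is only that a perfect forecast of a Level One system stays perfect, while the same forecast of a Level Two system is undone by the behavior it triggers.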
Harari also talks about the spread of ideas over networks: culture (the idea of “memetics”) and nationalism. [Robert Axelrod’s model The Dissemination of Culture uses agent-based modeling to explain the process of cultural dissemination.]
Perhaps the most helpful idea in Sapiens is Harari’s discussion of how we evolved into Homo sapiens from the common ancestor we share with chimpanzees and, importantly, what differentiated the Sapiens species from our closest relatives (i.e., the now extinct other members of genus Homo: Homo rudolfensis, Homo erectus, Homo neanderthalensis, Homo denisova, Homo floresiensis, Homo ergaster, Homo soloensis and, quite possibly, others that have simply not been discovered in the archaeological record to date). This section of the book, the Cognitive Revolution, tackles the implausibility of Sapiens catapulting from “an animal of no significance” to the very top of the food chain and spreading like wildfire across an entire planet. Rich with discussions of extant research in psychology and genetics, Harari argues that the collective ability of Sapiens to create shared mental models and myths very likely explains successes that simply could not have been achieved without collective action on such a massive scale. This idea is key for both cognitive and social psychologists seeking to understand how individual cognition results in the emergence of behaviors at the aggregate level of groups, sometimes enormous ones.
For those looking for a quick exposure to Harari and his ideas, while I heartily recommend reading Sapiens, Harari’s 2015 TED talk nicely covers his take on the role of shared mental models (or “stories”) in the Cognitive Revolution of Homo sapiens:
I’d be remiss if I stopped here, since Harari goes on to discuss the transition of Sapiens from hunter-gatherer bands to agricultural pastoralists during the Agricultural Revolution and then, provocatively, the surprisingly very few forces that have managed to unite mankind into what is increasingly one single global society on planet Earth: money (and, importantly, trust in what money represents), empires, and religions. Harari does not shy away from frank discussion of religion, including humanism.
Finally, Harari covers the Scientific Revolution and how a fundamental shift in our thinking – namely, that despite what empires or religions might profess to know, Sapiens in fact, were ignorant of many things that science could answer – that ultimately spurred so much progress in what is truly the blink of an eye in the very long history of Homo sapiens.
The combination of history, science, and critical interpretation made Sapiens (find in a library) a thoroughly enjoyable read.
James Hayton opens his book titled simply PhD with an admission of being a sort of accidental PhD student, using what he was told in one failed admission interview to game the next one. I appreciate his honesty. Where he succeeds in this book, subtitled an “uncommon guide to research, writing, & PhD life,” is in offering both practical strategies and a reasoned understanding of human nature, citing Daniel Kahneman’s excellent Thinking, Fast and Slow (which should be on everyone’s required reading list) as one of his major influences. Even though his PhD was in applied physics, Hayton did a nice job generalizing his experience and what he learned about the process of PhD study so that it applies to other fields.
His thoughts on skill development during a PhD are probably the best part of the book, as are his perspectives on what earning a PhD actually means rather than what people often imagine it to mean. He gently disagrees with some ideas like writing garbage and fixing it later and offers alternatives.
Many of the other ideas he shares are not new per se (e.g., cut off internet access), but he wraps them together in a slim and accessible volume. The first two-thirds of the book cover bigger-picture issues like research, academic literature, academic writing, publishing, and conferences. The final third of the book focuses specifically on writing a dissertation or thesis, building on earlier ideas in the book.
The book suffers from a handful of editing mishaps – an irony considering Hayton’s insistence on relatively careful writing and the editing process – and it was much less comprehensive than other books in this genre like Getting What You Came For by Robert Peters. I think the US $30 list price was a bit of a stretch considering it’s a small, short book with very wide margins, but it’s certainly worth US $13 if you are pursuing a PhD.
This is a short post on a minor but consequential pitfall of social network analyses of film actors.
One thing that has always bothered me about social network analysis of so-called “actor networks” using data from IMDb is the very simple fact that these analyses are based on the assumption that because two actors appear in the same film, they know each other.
This is simply not true.
Modern filmmaking techniques and the high cost of actor set time incentivize filmmakers not to have expensive actors on set at the same time unless absolutely necessary. Instead, stand-ins are often used in place of star actors–especially in dialogue scenes–and footage is later edited to put the two star actors together in the finished product.
So, in theory, two actors can appear in the same film and even in the same scenes but never actually be on set together. Extrapolating, two actors could appear in the same film and never actually meet.
I’ve been waiting to find a solid example and finally found one.
Robert Rodriguez (@Rodriguez), the writer-director-producer best known for his films Sin City, From Dusk Till Dawn, Once Upon a Time in Mexico, and Spy Kids, was interviewed on the Tim Ferriss Show and described exactly this situation occurring during Sin City. Rodriguez describes Sin City as one of the most rapidly-executed projects he ever worked on, from initial concept and collaboration with Frank Miller to actually shooting the film in a matter of months. In fact, Rodriguez describes shooting scenes for Sin City with actor Mickey Rourke, in which Rodriguez or another crew member would stand in for the villain who at that time hadn’t been cast. Rutger Hauer was later cast and the complementary footage was shot for the scenes. According to Rodriguez, Rourke and Hauer claim they never met, despite appearing together in a Sin City scene in which Rourke’s character appears to have his hands on Hauer’s throat.
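In network terms, the Rourke-Hauer tie is exactly the kind of false positive a co-appearance graph manufactures. A minimal sketch of the usual edge-building shortcut (toy cast lists, not real IMDb data):

```python
from itertools import combinations

# Toy cast lists (hypothetical, not real IMDb data)
films = {
    "Film A": ["Rourke", "Hauer", "Wood"],
    "Film B": ["Rourke", "Alba"],
}

# The common shortcut: treat every shared film credit as a social tie
inferred_ties = set()
for cast in films.values():
    for pair in combinations(sorted(cast), 2):
        inferred_ties.add(frozenset(pair))

# The Rourke-Hauer edge exists in the graph even though, per
# Rodriguez, the two actors never actually met on set.
assert frozenset({"Rourke", "Hauer"}) in inferred_ties
```

Every edge produced this way is only inferred from a shared credit; without outside evidence, none of them is verified as an actual acquaintance.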
The lesson is what every good data scientist and computational modeler should always keep in mind: justify all assumptions and always include or at least consult subject-matter experts who know the system and data being studied!