Linearity is a reductionist’s dream, and nonlinearity can be a reductionist’s nightmare.
Are you risk-literate?
Do you understand how reputable cancer treatment centers in the U.S. lie to you or mislead you with confounding statistics in their marketing?
Which preventive cancer screenings cause more harm than good? Which preventive cancer screenings are worth getting?
Gerd Gigerenzer’s Risk Savvy (find in a library) is a crash course in risk literacy and a fun romp through several areas in which understanding risk and uncertainty matters enormously: your health and medical care (including defensive medicine and preventive screenings), bank finance and your money, leadership, romance, terrorism, and various runaway panics – Do you remember mad cow disease and how many people died from it?*
I first encountered Gigerenzer while studying cognitive models that could be implemented in computational agents, and specifically, his highly cited papers on “fast and frugal” heuristics for us boundedly-rational mortals. [Note to economists: this includes you.] While his academic papers are quite accessible, Risk Savvy (find in a library) feels approachable to an even wider audience.
What’s particularly fun about the book is that Gigerenzer doesn’t just pick on laypeople for not understanding risk; he also picks on experts, both for communicating risk poorly and for often lacking risk literacy themselves. He backs this up with experimental data collected from physicians, bankers, and executives, and uses as a foil some of the best experts at developing simple solutions to complex problems: children.
Gigerenzer includes practical tools that could revolutionize the way we communicate and think about risk – for example, discontinuing the use of relative risks with unspecified or poorly specified reference classes (e.g., a 20 percent risk reduction!) and instead using absolute risks (e.g., a reduction in risk from 5 in 1000 to 4 in 1000).
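The difference between those two framings is simple arithmetic, and it can be made concrete in a few lines of Python (an illustrative sketch using the example figures above; the function names are my own):

```python
# Absolute vs. relative risk reduction, using the example figures above:
# a drop from 5 in 1000 to 4 in 1000.

def relative_reduction(baseline: float, treated: float) -> float:
    """Relative risk reduction: the drop expressed as a fraction of baseline risk."""
    return (baseline - treated) / baseline

def absolute_reduction(baseline: float, treated: float) -> float:
    """Absolute risk reduction: the plain difference between the two risks."""
    return baseline - treated

baseline = 5 / 1000   # risk without the intervention
treated = 4 / 1000    # risk with the intervention

print(f"Relative reduction: {relative_reduction(baseline, treated):.0%}")   # 20%
print(f"Absolute reduction: {absolute_reduction(baseline, treated):.4f}")   # 0.0010, i.e. 1 in 1000
```

The same underlying change reads as “a 20 percent risk reduction!” in one framing and “1 fewer death per 1000 people” in the other, which is exactly why Gigerenzer argues for reporting absolute risks.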
Gigerenzer also advocates the use of icon boxes and fact boxes (examples available from the Harding Center for Risk Literacy) when health professionals and health-related organizations communicate with individuals.
The concluding chapter suggests ways that we might revolutionize school by teaching risk literacy using simple tools from a very early age. Gigerenzer specifically focuses on applied statistical thinking, rules of thumb, and the psychology of risk, and suggests applying these in three areas: health literacy, financial literacy, and digital risk competence. He backs his suggestions with experimental data demonstrating that children as young as second grade can, when presented with statistical information in the proper format, learn to accurately calculate risk.
Overall, I enjoyed Gerd Gigerenzer’s Risk Savvy (find in a library) and recommend it to just about anyone, and especially to those working in health-related professions and anyone interested in making better decisions about their health, finances, and other areas of life.
*Over 10 years, about 150 people in all of Europe died of mad cow disease. In the same ten years, about the same number of people died from drinking scented lamp oil.
The gross national product does not allow for the health of our children, the quality of their education, or the joy of their play. It does not include the beauty of our poetry or the strength of our marriages, the intelligence of our public debate or the integrity of our public officials.
It measures neither our wit nor our courage, neither our wisdom nor our learning, neither our compassion nor our devotion to our country. It measures everything, in short, except that which makes life worthwhile.
—Robert F. Kennedy
1. Everything we think we know about the world is a model. Every word and every language is a model. All maps and statistics, books and databases, equations and computer programs are models. So are the ways I picture the world in my head–my mental models. None of these is or ever will be the real world.
2. Our models usually have a strong congruence with the world. That is why we are such a successful species in the biosphere. Especially complex and sophisticated are the mental models we develop from direct, intimate experience of nature, people, and organizations immediately around us.
3. However, and conversely, our models fall far short of representing the world fully. That is why we make mistakes and why we are regularly surprised. In our heads, we can keep track of only a few variables at one time. We often draw illogical conclusions from accurate assumptions, or logical conclusions from inaccurate assumptions. Most of us, for instance, are surprised by the amount of growth an exponential process can generate. Few of us can intuit how to damp oscillations in a complex system.
You can’t navigate well in an interconnected, feedback-dominated world unless you take your eyes off short-term events and look for long-term behavior and structure; unless you are aware of false boundaries and bounded rationality; unless you take into account limiting factors, nonlinearities, and delays. You are likely to mistreat, misdesign, or misread systems if you don’t respect their properties of resilience, self-organization, and hierarchy.
Imagine that the goal of human life isn’t to win the game; the goal of human life is to win the set of all possible games. And in order to win the set of all possible games, you don’t need to win any particular game, you have to play in a manner that ensures that you will be invited to play more and more games.
So when you tell your children to “be good sports” and to “play properly,” what you mean is, play to win, but play to win in such a way that people on your team are happy to play with you, and people on the other teams are happy to play with you so that you keep getting invited to games.
If a factory is torn down but the rationality which produced it is left standing, then that rationality will simply produce another factory. If a revolution destroys a government, but the systematic patterns of thought that produced that government are left intact, then those patterns will repeat themselves….
There’s so much talk about the system. And so little understanding.
Zen and the Art of Motorcycle Maintenance
Reid Hoffman, Ben Casnocha, and Chris Yeh’s The Alliance: Managing Talent in the Networked Age (find in a library) is a short but engaging read focused on three core ideas for talent management in the “networked” age:
1. Use an “Alliance” framework between employer and employee
2. Invest in and leverage employee networks
3. Encourage and/or run employee alumni networks and groups
The Alliance Framework
The book opens with the usual assertion that the old model of “lifetime” employment is dead. Where it begins to veer from the typical, though, is in frankly criticizing the alternatives seen as replacing lifetime employment: falsely ascribing “family” status to an organization and its employees, or surrendering fully to a free-agent, market-ruled alternative.
Most CEOs have good intentions when they describe their company as being “like family.” They’re searching for a model that represents the kind of relationships they want to have with their employees–a lifetime relationship with a sense of belonging. But using the term “family” makes it easy for misunderstandings to arise.
In a real family, parents can’t fire their children.
The authors instead point to professional sports teams as an exemplar of the Alliance framework. The professional sports team has a specific mission (win games and championships) and members come together to accomplish the mission, even as the composition of the team changes over time.
While a professional sports team doesn’t assume lifetime employment, the principles of trust, mutual investment, and mutual benefit still apply. Teams win when their individual members trust each other enough to prioritize team success over individual glory; paradoxically, winning as a team is the best way for the team members to achieve individual success.
Borrowing a military term, the authors suggest that organizations harness entrepreneurial talent by using a tour of duty framework. They are careful to note that companies are very different from the military: while a departing employee might get a farewell party, a soldier who leaves his unit before his tour is complete is AWOL and gets court-martialed. They argue that the metaphor is still useful, however, since both military and business tours of duty focus on honorably completing a specific, finite mission.
Tours of duty are defined by the specific mission to be accomplished, not by time in role, to which career experience is so often reduced. Tours of duty are also not “one size fits all”; the authors suggest three different types:
Rotational Tour of Duty
Typically at the entry or junior level, Rotational tours are not personalized to specific employees. They are common at consulting firms, investment banks, and tech companies that provide standardized onboarding for new junior employees, rotating them through a handful of roles over the tour’s two to four years, usually for a predetermined number of months (3, 6, or 9) in each role. The primary purpose of the Rotational tour is to evaluate potential long-term fit on both sides: employer and employee.
Transformational Tour of Duty
Transformational tours are personalized to individual employees and are less about specific time commitments and more about a clear and specific mission to be accomplished. The promise of the Transformational tour is that it gives the employee the opportunity to transform both his or her career and the company by accomplishing something substantive. The crux of the Transformational tour is this win-win synergy for employer and employee. The Transformational tour is personalized and structured at the outset with both the employer’s goal and the employee’s future career aspirations–whether in the current company or elsewhere–front and center.
Foundational Tour of Duty
Foundational tours often occur at the highest (founder/executive) level. They occur when “exceptional alignment” between employer and employee is the defining hallmark of the relationship, and the employee is identified with the organization and vice versa (e.g., Warren Buffett and Berkshire Hathaway). Typical tenure in a Foundational tour is 10 years or more, though such tours are not restricted to executives: Foundational employees at all levels provide ownership and continuity and serve as keepers of institutional memory.
No one ever washes a rental car. A Foundational employee would never allow the company to cut corners to meet short-term financial goals.
The authors spend the next several chapters of the book carefully laying out the prerequisites and steps for using tours of duty. First, they discuss the importance of defining an organization’s core mission and values so specifically and rigorously that some employees feel strong alignment while others feel so out of alignment that they might leave the organization. (The authors argue that organizations want to lose this latter group.) Next, they provide specifics on having the kind of honest, raw conversations with employees that are crucial for effectively using a tour of duty framework. Finally, they provide suggested timelines and tools for checking in and using feedback during the course of a tour of duty, as well as negotiating subsequent tours.
Employee Network Intelligence
In the second major strategy in The Alliance, the authors claim that employee networking is a good thing. Rather than seeing networking as a detriment to the organization or a behavioral indicator that an employee is thinking about leaving, The Alliance suggests that employers should pay employees to build, maintain, and leverage their networks. The authors argue that in the current era of knowledge work, human capital is defined not simply by the knowledge, skills, and abilities of each individual employee, but by all that employees can bring to an organization through the responsible and skilled use of their individual networks. Employers should enable and train all employees to use social media skillfully, pay for learning opportunities (instituting a formal system of knowledge transfer whenever external learning occurs), and even start a “networking fund” that lets employees expense networking lunches.
Corporate Alumni Networks
The third strategy in The Alliance is that organizations should network with ex-employees substantially more than most currently do, specifically by creating corporate alumni networks to facilitate lifelong alliances between organizations and former employees. The authors note extensive potential ROI from corporate alumni networks, including the ability to hire more great people through referrals, new customers, access to competitive and network intelligence, and alumni as brand ambassadors. The authors provide specific how-to guidance on setting up and running corporate alumni networks, ranging from the relatively low-cost to the highly-involved.
Overall, The Alliance: Managing Talent in the Networked Age (find in a library) turns some existing talent management practices sideways, if not upside down. While the authors are perhaps too light on caveating that the Silicon Valley talent ecosystem in which they operate may not generalize to other industries or fields, the talent strategies Hoffman, Casnocha, and Yeh are suggesting are by no means reserved for the tech world. The Alliance challenges leaders, managers, and HR strategists to think differently about legacy talent management practices that may no longer fit today’s environment.
The Personnel Testing Council of Metropolitan Washington (PTCMW) is a Washington DC membership organization for practitioners of industrial-organizational psychology and organization science.
The January 2017 speaker was the outgoing PTCMW president, Matt Fleisher, who heads Global Talent Analytics at FTI Consulting, a global business advisory firm with approximately 5,000 employees and annual turnover of about 1,000 consultants. FTI fields an annual employee engagement survey with a response rate typically between 75 and 85 percent of the workforce.
Matt shared lessons he learned standing up the Talent Analytics function at FTI, many of which echo what I’ve heard from other practitioners in both private industry and government.
Key takeaways for practicing talent analytics
- Start by focusing on the actual organizational challenges, not the availability of data or a preferred analysis
- Use the research literature to identify and report the KPIs that will drive strategic business decisions
- Use descriptive analytics and predictive analytics to get to prescriptive analytics – prescribing actionable recommendations based on the data and analysis
- When communicating analysis to stakeholders, use the following three-step process:
Here’s what. So what? Now what…
Highs and lows from standing up a talent analytics group
- Created the function
- Automated routine tasks using R
- Linked 360, employee engagement, and turnover
- Became victims of their own success – too much incoming work led to quality assurance (QA) issues
- Formalized the work intake process so customers were no longer calling the analytics team directly. Instead, requests for HR analytics went to the HR contact center which created a ticket and put the request in the queue
- Delegated reporting from the analytics group to HR business partners
- Created more time for quality assurance activities
- Dedicated more time to planning longer-term strategic, predictive analytic work
Other lessons learned
- Using 360 data, linked individual employee turnover to disrespectful treatment from senior leaders
- Used employee engagement survey results to predict turnover up to 6 months after survey administration
- Data quality control / quality assurance should occur in the HRIS and not the analytic software – this may take longer on the front end, but prevents future QA issues with products
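The survey-to-turnover linkage above can be illustrated with a toy sketch. Everything here is synthetic and hypothetical (made-up data generator, made-up effect sizes, my own function names); it reflects nothing about FTI’s actual data or methods, only the descriptive-to-predictive pattern described in the talk:

```python
# Toy illustration: does lower engagement track with higher 6-month turnover?
import random

rng = random.Random(1)

# Synthetic employees: engagement score in [1, 5]; in this toy generator,
# lower engagement makes leaving within 6 months more likely.
employees = []
for _ in range(5_000):
    engagement = rng.uniform(1.0, 5.0)
    p_leave = 0.40 - 0.07 * engagement      # invented relationship
    left = rng.random() < p_leave
    employees.append((engagement, left))

def turnover_by_band(data):
    """Descriptive step: turnover rate by engagement band."""
    bands = {"low (<3)": [], "high (>=3)": []}
    for score, left in data:
        key = "low (<3)" if score < 3 else "high (>=3)"
        bands[key].append(left)
    return {band: sum(lefts) / len(lefts) for band, lefts in bands.items()}

print(turnover_by_band(employees))  # low-engagement band shows clearly higher turnover
```

A real analysis would of course use actual survey responses and a proper predictive model, but even this banding step is the kind of simple, prescriptive-friendly output the talk emphasized.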
Tips for creating products that are actually used
- Write short emails – 3-4 sentences, max
- Write short reports – 1-2 pages, max
- For data-savvy users – create drill-down dashboards, but caveat that small n sizes don’t generalize
- Keep it as simple as possible – it’s okay to use advanced techniques, but don’t show them
- Manage expectations – a predictive analysis is not a 30-minute job
- State and be clear about your assumptions – note what can happen if assumptions don’t hold
In the summer of 2016, I had the good fortune to be selected along with nine other students to attend the 2016 Graduate Workshop in Computational Social Science and Modeling (GWCSS) at the Santa Fe Institute in Santa Fe, NM. Carnegie Mellon Professor John H. Miller and University of Michigan Professor Scott E. Page were our excellent Santa Fe faculty guides for the workshop, which included a combination of lectures on complexity science, complex systems, and computational modeling by Professors Miller and Page, as well as other Santa Fe faculty including Chris Kempes, Simon DeDeo, and Mirta Galesic.
The workshop was a two-week, residential intensive in beautiful Santa Fe. Concurrently, SFI runs a four-week Complex Systems Summer School, which brings in additional notables in the field of complexity and computational social science, including economist Brian Arthur, Robert Boyd (whose 1988 book on cultural evolution, coauthored with Peter J. Richerson, has been cited more than 7,000 times), and Doyne Farmer. As a GWCSS attendee, I was able to attend any of the Summer School lectures I could slip away for.
Lectures and discussions were wide-ranging but all anchored in the science of complexity and computational modeling as tools of scientific investigation. A sampling of our discussions in the GWCSS:
* model diversity (Scott Page)
* evolutionary computation (John Miller)
* information theory and conflict (Simon DeDeo)
* network structure and performance (Mirta Galesic)
I was incredibly lucky to attend the 2016 workshop with a distinguished cohort of fellow graduate students, from whom I learned a great deal during conversation, discussion, and spirited debate over the course of our two weeks together.
During the 2016 GWCSS, I began work on an agent-based model to investigate the role of personality variables in workplace situations (view project on ResearchGate). This work continues.
I am grateful to have had the opportunity to attend the 2016 Graduate Workshop in Computational Social Science and recommend it highly.
Please note that the application deadline for the 2017 GWCSS is 14 February 2017.
More information about the 2016 workshop I attended, including a reading list, is available from the 2016 website and wiki.
At the 2016 Winter Simulation Conference, I presented “Active Shooter: An Agent-Based Model of Unarmed Resistance” (PDF) (view/follow project on ResearchGate), coauthored with William G. Kennedy. The paper appeared in WSC 2016’s Social and Behavioral Simulation track, chaired by Ugo Merlone of the University of Torino and Stephen Davies from the University of Mary Washington.
Background and motivation:
I initially undertook this modeling project during a 2015 seminar on agent-based modeling (ABM) for military applications offered by Kenneth Comer at George Mason University. The issue of active and mass shooters is obviously not unique to the military, but the shootings at Ft. Hood and the very few ABMs [1, 2] I could find on mass shootings suggest that more attention from computational modelers is needed.
Examining mass shootings through the lens of complexity science and agent-based modeling raised an additional wrinkle that’s seldom addressed in the literature or in training for active shooter events. Specifically, the prescription to “Run, Hide, Fight” (or “Avoid, Deny, Defend”) might work for individuals, but what about the impact one individual’s behavior has on another individual’s likelihood of surviving an active shooter incident? To illustrate, imagine an active shooting is unfolding and a potential victim is able to run and hide, barricading himself in a supply closet. What happens when the next potential victim arrives at that same supply closet, having also planned to use it? The individual inside the closet has monopolized the secure hiding space, leaving the second individual at greater risk.
The study of mass shootings is complicated. Despite disproportionate media coverage, mass shootings are extremely rare, and each mass shooting is unique and difficult to generalize from. Even the definition of “mass shooting” is debated. In my work, I’ve relied on the U.S. Federal Bureau of Investigation for official definitions, not because I think they are necessarily the best possible, but because they provide an authoritative baseline for studying the issue. The FBI considers a “mass shooting” to be a single incident in which four or more individuals are killed (this is its definition of “mass murder”) involving the criminal use of a firearm. Importantly, this definition means that an incident in which three people are shot and killed does not rise to the level of “mass shooting,” nor does an incident in which many are shot but fewer than four die of their wounds.
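The definition’s edge cases can be encoded in a trivial predicate (an illustrative sketch; the function name and parameters are my own, not the FBI’s):

```python
def is_mass_shooting(fatalities: int, wounded: int = 0, firearm_used: bool = True) -> bool:
    """FBI-style test described above: criminal use of a firearm with four
    or more people killed. The wounded count is deliberately ignored."""
    return firearm_used and fatalities >= 4

print(is_mass_shooting(fatalities=4))              # True
print(is_mass_shooting(fatalities=3))              # False: only three killed
print(is_mass_shooting(fatalities=3, wounded=12))  # False: wounded do not count
```

The last case is the counterintuitive one: an incident with a dozen people shot but three deaths falls outside the definition entirely.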
Compounding the rarity of mass shootings available for study is the fact that many of the primary sources (often including the shooter himself) are deceased. This, combined with the methodological problems of relying on eyewitness accounts subject to recall errors and bias, makes the study of historical mass shootings difficult. Conducting experiments in which participants believed they could possibly die would be unethical and dangerous. Computational modeling and simulation, on the other hand, offer an infinitely repeatable virtual laboratory in which to study this issue, with no possibility of harming human subjects.
I had initially imagined a model that would include a variety of agents: shooter(s), victims, and peacekeepers/LEOs. While developing the model, I realized that what was most important was creating a model that allowed users–those with more subject-matter expertise than me–to calibrate the model to their knowledge. The eventual version 1 was a generalized outdoor model in which a shooter targets and fires upon victims, victims flee the shooter, and, if set by the user, a small proportion of victims attempt to subdue the shooter, as occurred during the thwarted 2015 attack on a Thalys train traveling from Amsterdam to Paris.
User-settable parameters permit the user to control the shooter’s accuracy, armament, and, importantly, the probability of a “fighter” overcoming a shooter.
Initial findings suggest that even with a very small probability of success – a 1 out of 100 chance for each second of the struggle – a very small proportion of “fighters” in a population can potentially subdue an active shooter, even if that shooter has a 50 percent probability per second of disabling the “fighters.”
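The per-second contest described above can be sketched as a minimal Monte Carlo simulation. This is illustrative only, not the paper’s actual model (which is agent-based and spatial); the function names and the sequential check order are my own assumptions:

```python
# Minimal Monte Carlo sketch of a one-on-one struggle, resolved second by second.
import random

def struggle(rng, p_fighter=0.01, p_shooter=0.50):
    """One struggle: each second, the fighter gets a chance to subdue the
    shooter, then the shooter gets a chance to disable the fighter.
    Returns True if the fighter prevails."""
    while True:
        if rng.random() < p_fighter:   # fighter overcomes shooter this second
            return True
        if rng.random() < p_shooter:   # shooter disables fighter this second
            return False

def fighter_win_rate(trials=100_000):
    rng = random.Random(0)
    return sum(struggle(rng) for _ in range(trials)) / trials

# Analytically, P(fighter wins) = 0.01 / (0.01 + 0.99 * 0.50) ≈ 0.0198,
# so a lone fighter prevails roughly 2% of the time.
print(f"{fighter_win_rate():.3f}")
```

Even a ~2% chance per fighter compounds: if several fighters each get an independent attempt, the chance that at least one succeeds is 1 − (1 − 0.02)^k, which is part of why a small proportion of fighters in a population can matter.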
My model is available under the Apache 2.0 license from OpenABM for others to use in their own research. I encourage other modelers to extend this model and have made several suggestions for doing so in the paper. In particular, I think the idea of rapid collective action–a coordinated group or swarm attack–should be studied as a potential counter to an active shooter.
Mass shootings unfold quickly and are rarely foreseen by victims. Increasingly, training is provided to increase chances of surviving active shooter scenarios, usually emphasizing “Run, Hide, Fight.” Evidence from prior mass shootings suggests that casualties may be limited should the shooter encounter unarmed resistance prior to the arrival of law enforcement officers (LEOs). An agent-based model (ABM) explored the potential for limiting casualties should a small proportion of potential victims swarm a gunman, as occurred on a train from Amsterdam to Paris in 2015. Results suggest that even with a miniscule probability of overcoming a shooter, fighters may save lives but put themselves at increased risk. While not intended to prescribe a course of action, the model suggests the potential for a reduction in casualties in active shooter scenarios.
Keywords: agent-based modeling, ABM, active shooter, mass shooter, military, law enforcement, LEO, firearms, guns, violence, terrorism, security
Briggs, T. W. & Kennedy, W. G. (2016). Active shooter: An agent-based model of unarmed resistance. In Proceedings of the 2016 Winter Simulation Conference (WSC), 3521–3531. https://doi.org/10.1109/WSC.2016.7822381 (PDF)