So much talk about the system

If a factory is torn down but the rationality which produced it is left standing, then that rationality will simply produce another factory. If a revolution destroys a government, but the systematic patterns of thought that produced that government are left intact, then those patterns will repeat themselves….

There’s so much talk about the system. And so little understanding.

–Robert Pirsig
Zen and the Art of Motorcycle Maintenance


The Alliance: Managing Talent in the Networked Age

Reid Hoffman, Ben Casnocha, and Chris Yeh’s The Alliance: Managing Talent in the Networked Age (find in a library) is a short but engaging read focused on three core ideas for talent management in the “networked” age:

1. Use an “Alliance” framework between employer and employee
2. Invest in and leverage employee networks
3. Encourage and/or run employee alumni networks and groups

The Alliance Framework

The book opens with the usual assertion that the old model of “lifetime” employment is dead. Where it begins to veer from the typical, though, is in frankly criticizing the alternatives seen as replacing lifetime employment: falsely ascribing “family” status to an organization and its employees, or fully resigning oneself to a free-agent, market-ruled alternative.

Most CEOs have good intentions when they describe their company as being “like family.” They’re searching for a model that represents the kind of relationships they want to have with their employees–a lifetime relationship with a sense of belonging. But using the term “family” makes it easy for misunderstandings to arise.

In a real family, parents can’t fire their children.

The authors instead point to professional sports teams as an exemplar of the Alliance framework. The professional sports team has a specific mission (win games and championships) and members come together to accomplish the mission, even as the composition of the team changes over time.

While a professional sports team doesn’t assume lifetime employment, the principles of trust, mutual investment, and mutual benefit still apply. Teams win when their individual members trust each other enough to prioritize team success over individual glory; paradoxically, winning as a team is the best way for the team members to achieve individual success.

Borrowing a military term, the authors suggest that organizations harness entrepreneurial talent by using a tour of duty framework. They are careful to note that companies are very different from the military: while a departing employee might get a farewell party, a soldier who leaves his unit before his tour is complete is AWOL and gets court-martialed. They argue that the metaphor is still useful, however, since both military and business tours of duty focus on honorably completing a specific, finite mission.

Tours of duty are defined by the specific mission to be accomplished, not by the time in role to which career experience is so often reduced. They are also not “one size fits all”; the authors suggest three different types of tours:

Rotational Tour of Duty

Typically found at the entry or junior level, Rotational tours are not personalized to specific employees. They are common at consulting firms, investment banks, and tech companies that provide standardized onboarding for new junior employees, allowing them to rotate through a finite number of roles, usually for a predetermined number of months (3, 6, or 9) in each role, over the two to four years of the tour. The primary purpose of the Rotational tour is to evaluate potential long-term fit on both sides: employer and employee.

Transformational Tour of Duty

Transformational tours are personalized to individual employees and are less about a specific time commitment than about a clear, specific mission to be accomplished. The promise of the Transformational tour is that it gives the employee the opportunity to transform both his or her career and the company by accomplishing something substantive; this win-win synergy for employer and employee is the crux of the tour. The Transformational tour is structured at the outset with both the employer’s goal and the employee’s future career aspirations–whether in the current company or elsewhere–front and center.

Foundational Tour of Duty

Foundational tours often occur at the highest (founder/executive) level. Their defining hallmark is “exceptional alignment” between employer and employee: the employee is identified with the organization and vice versa (e.g., Warren Buffett and Berkshire Hathaway). Typical tenure in a Foundational tour is 10 years or more, though these tours are not restricted to executives; Foundational employees at all levels provide ownership and continuity and serve as keepers of institutional memory.

No one ever washes a rental car. A Foundational employee would never allow the company to cut corners to meet short-term financial goals.

The authors spend the next several chapters of the book carefully laying out the prerequisites and steps for using tours of duty. First, they discuss the importance of defining an organization’s core mission and values so specifically and rigorously that some players feel strong alignment while others feel so out of alignment they might leave the organization. (The authors argue that organizations want to lose this latter group.) Next, they provide specifics on having the kind of honest, raw conversations with employees that are crucial for effectively using a tour of duty framework. Finally, they provide suggested timelines and tools for checking in and using feedback during the course of a tour of duty, as well as negotiating subsequent tours.

Employee Network Intelligence

The second major strategy in The Alliance is the authors’ claim that employee networking is a good thing. Rather than seeing networking as a detriment to the organization or a behavioral indicator that an employee is thinking about leaving, The Alliance suggests that employers should pay employees to build, maintain, and leverage their networks. The authors argue that in the current era of knowledge work, human capital is defined not simply by the knowledge, skills, and abilities of each individual employee, but by all that those employees can bring to an organization through the responsible and skilled use of their individual networks. Employers should enable and train all employees to skillfully use social media, pay for learning opportunities while instituting a formal system of knowledge transfer whenever external learning occurs, and even start a “networking fund” that allows employees to expense networking lunches.

Corporate Alumni Networks

The third strategy in The Alliance is that organizations should network with ex-employees substantially more than most currently do, specifically by creating corporate alumni networks to facilitate lifelong alliances between organizations and former employees. The authors note extensive potential ROI from corporate alumni networks, including hiring more great people through referrals, winning new customers, gaining access to competitive and network intelligence, and enlisting alumni as brand ambassadors. They also provide specific how-to guidance on setting up and running corporate alumni networks, ranging from the relatively low-cost to the highly involved.

Overall, The Alliance: Managing Talent in the Networked Age (find in a library) turns some existing talent management practices sideways, if not upside down. While the authors could do more to acknowledge that the Silicon Valley talent ecosystem in which they operate may not generalize to other industries or fields, the talent strategies Hoffman, Casnocha, and Yeh suggest are by no means reserved for the tech world. The Alliance challenges leaders, managers, and HR strategists to think differently about legacy talent management practices that may no longer fit today’s environment.

Download the first chapter from the book website.

Source for the SlideShare at the top of this post: The Alliance: A Visual Summary from Reid Hoffman


Lessons from Implementing a Human Capital Analytics Function

The Personnel Testing Council of Metropolitan Washington (PTCMW) is a Washington DC membership organization for practitioners of industrial-organizational psychology and organization science.

The January 2017 speaker was the outgoing PTCMW president, Matt Fleisher, who heads Global Talent Analytics at FTI Consulting, a global business advisory firm with approximately 5,000 employees and annual turnover of about 1,000 consultants. FTI fields an annual employee engagement survey with a response rate typically between 75 and 85 percent of the workforce.

Matt shared lessons he learned standing up the Talent Analytics function at FTI, many of which echo what I’ve heard from other practitioners in both private industry and government.

Key takeaways for practicing talent analytics

  1. Start by focusing on the actual organizational challenges, not the availability of data or preferred analysis
  2. Use the research literature to identify and report the KPIs that will drive strategic business decisions
  3. Use descriptive analytics and predictive analytics to get to prescriptive analytics – prescribing actionable recommendations based on the data and analysis
  4. When communicating analysis to stakeholders, use the following three-step process:

Here’s what. So what? Now what…

Highs and lows from standing up a talent analytics group

Year 1
  • Created the function
Year 2
  • Automated routine tasks using R
  • Linked 360, employee engagement, and turnover
  • Became victims of their own success – too much incoming work led to quality assurance (QA) issues
Year 3
  • Formalized the work intake process so customers were no longer calling the analytics team directly. Instead, requests for HR analytics went to the HR contact center, which created a ticket and put the request in the queue
  • Delegated reporting from the analytics group to HR business partners
  • Created more time for quality assurance activities
  • Dedicated more time to planning longer-term strategic, predictive analytic work

Other lessons learned

  • Using 360 feedback data, linked individual employee turnover to disrespectful treatment from senior leaders
  • Using employee engagement survey results, predicted turnover up to 6 months out from survey administration (a minimal sketch of this kind of model appears after this list)
  • Data quality control / quality assurance should occur in the HRIS and not the analytic software – this may take longer on the front end, but prevents future QA issues with products
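
To make the last two bullets concrete, here is a minimal sketch of what a survey-to-turnover model might look like. This is not Fleisher’s or FTI’s actual pipeline: the file name, column names, and choice of logistic regression are all illustrative assumptions, standing in for whatever linked 360, engagement, and HRIS exit data an analytics team actually has.

```python
# Hypothetical sketch only: predict whether an employee leaves within 6 months
# of a survey, using engagement scores linked to HRIS exit records.
# The file name and column names are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# One row per employee: survey responses joined to a "left within 6 months" flag.
df = pd.read_csv("engagement_with_turnover.csv")  # hypothetical extract

features = ["engagement_score", "manager_respect", "tenure_years"]
X, y = df[features], df["left_within_6mo"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Check discrimination on held-out employees before using the model prescriptively.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out ROC AUC: {auc:.2f}")

# Coefficients hint at which survey dimensions move predicted risk the most.
print(dict(zip(features, model.coef_[0].round(3))))
```

The same caution that appears in the tips below applies here: with small n sizes per business unit, drill-downs of a model like this will not generalize, and the prescriptive step (“now what”) should come from the pattern plus organizational context, not from the coefficients alone.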

Tips for creating products that are actually used

  • Write short emails – 3-4 sentences, max
  • Write short reports – 1-2 pages, max
  • For data-savvy users – create drill-down dashboards, but caveat that small n sizes don’t generalize
  • Keep it as simple as possible – it’s okay to use advanced techniques, but don’t show them
  • Manage expectations – a predictive analysis is not a 30-minute job
  • State your assumptions clearly – note what can happen if they don’t hold

Review: Understanding Social Networks by Charles Kadushin

ISBN: 9780195379471

I stumbled on Charles Kadushin’s excellent book Understanding Social Networks: Theories, Concepts, and Findings (find in a library) last year while preparing for my PhD qualifying exams. I already own Wasserman and Faust’s Social Network Analysis: Methods and Applications, which is pretty much the go-to text and reference on SNA, as well as Borgatti, Everett, and Johnson’s Analyzing Social Networks. But as a social scientist, I was looking for social science applications of network science, and Kadushin’s highly accessible book fit the bill nicely.

Kadushin, emeritus Professor of Sociology at the CUNY Graduate Center, has been engaged in social science research on network topics since the mid-1960s, and the book offers example after example not only of his own work with networks in social science, but also citations of all the other social scientists I’d expect to see: Ron Burt, Ed Laumann, Stanley Milgram, Stephen Borgatti, Daniel Brass, and Barry Wellman, to name only a few.

Kadushin takes a decidedly and purposefully social approach to social networks, noting in his introduction that although network science can be applied to power grids, for example, understanding social networks really requires examining them “as if people mattered.” Kadushin proceeds to explore both the psychological and sociological theories underpinning networks and the social consequences of networks and their structures.

The first few chapters provide an overview of network concepts, moving from individual network members (Chapter 2) through entire social networks, their subcomponents, and network properties (Chapter 3), and finally to network segmentation (Chapter 4).

Chapter 5 explores the psychological foundations of social networks and the book continues through successive levels, next examining small groups and leaders (Chapter 6), then entire organizations (Chapter 7), small-world networks and community structures (Chapter 8), followed by network processes like influence and diffusion (Chapter 9). Chapter 10 explores social capital as a function of networks and network position and Chapter 11 gives much-needed attention to ethical dilemmas in social network research. Finally, Chapter 12 reviews “ten master ideas” of social networks.

I found Kadushin’s book extremely helpful in pointing to citations of social network analysis applied to social science. For any social scientist interested in social networks, I’d strongly recommend starting with Understanding Social Networks (with Borgatti, Everett, and Johnson’s Analyzing Social Networks as a second choice). I will also note that while Kadushin focuses on social science, he does not shy away from covering the work of physicists and others on networks, though he avoids mathematics in his explanations (but references the appropriate papers).

Likewise, for the general reader, I can’t think of a better book that explains social networks and their applications to social science and social ideas than what Kadushin offers here. An additional strength of the book is Kadushin’s enjoyable writing style and clear and concise recap at the end of each chapter in which he informs the reader “where we are now.”

My physical copy of Understanding Social Networks: Theories, Concepts, and Findings is heavily annotated so I also ended up buying the Kindle version, which was only $9.99 at the time of this writing. (The paperback version is $19.96 on Amazon at the time of this writing, but Amazon’s prices do regularly fluctuate).

In sum, Kadushin’s Understanding Social Networks: Theories, Concepts, and Findings (find in a library) is probably the most enjoyable book on social networks I’ve read and has been especially helpful in identifying specific applications of network science in the social sciences.


Data Science, Ethics, and Academics in Industry

I’ve been fielding more questions about research ethics and protecting individuals with regard to data science and big data. The topic warrants a much more in-depth discussion than this blog post, but I’ve noticed one trend that’s worth pointing out: academics previously working at research universities are leaving academia, either temporarily or permanently, for tech companies and industry.

Academic researchers are almost always required to submit their research proposals to their organization’s Institutional Review Board (IRB), an interdisciplinary group of researchers charged with protecting human subjects as outlined in the 1979 Belmont Report and overseeing research ethics training at most universities and research organizations. Private companies are under no such obligation, as the controversial Facebook study (PDF) of emotional contagion demonstrated. These companies rely on the permissions granted by users who consent to the Terms of Service agreements prior to signing up for the service.

For me, it remains an open question whether researchers in private industry are adhering to a “do no harm” maxim. The obvious tension is that profit-motivated entities like startups and publicly-traded tech companies are interested in maximizing investor or shareholder value and are not subject to the same research ethics requirements as publicly-funded research universities.

I’m encouraged that some academic researchers like Jessica Vitak are tackling these issues and looking for ways to increase transparency in big data use. Vitak’s Privacy + Security Internet Research Lab is tackling exactly these questions. I had the opportunity to hear Vitak speak at the recent Human-Computer Interaction Laboratory annual symposium at the University of Maryland, College Park. One of the potential solutions that Vitak suggests is that the peer review process for academic publications and conferences needs to fill gaps left by insufficient IRB expertise in some areas of data science. This won’t necessarily change what private companies do with individual data, but it’s certainly a start. The controversial Facebook study now includes an “Editorial Expression of Concern,” which appeared after the publication of the study. Had the editor and peer reviewers at PNAS been more attuned to research ethics and human subjects protection during the peer review process, the Facebook authors might have been asked to do a much better job of addressing the ethical implications in their research.

Of course, this raises the thornier question of rejecting research that does not adhere to accepted human subjects protections: in this case, we do not reward the authors for failing to conduct research in an ethical manner, but we prevent information about the research from entering the public domain. I don’t have a good answer to this issue.

I don’t specifically intend to pick on the tech companies here. Plenty of other industries have, in the name of profit-driven research, done harm. But tech companies also represent particularly desirable organizations in which to do research. Traditionally, researchers, especially in the social sciences, had to painstakingly collect their own experimental or correlational data. This was both time consuming and expensive, and perhaps too often resulted in non-significant findings because the research sample was too small. Tech companies, on the other hand, are awash in data that represents a potential intellectual gold mine for social scientists.

My hope is that those who leave academia for the bountiful data available at tech companies remember and abide by their research ethics training, even when they aren’t required to. I also hope that tech companies are engaging with experts in research ethics and taking any objections by those experts seriously.

A recent NPR Hidden Brain podcast episode “This is Your Brain on Uber” featured an interview with Keith Chen, who appears to be both Head of Economic Research at Uber and a tenured professor at Yale. If he indeed holds dual roles, it raises important ethical questions about the research he is conducting for Uber. Does Chen conform to the same human subjects protection protocols at Uber that he must when working “at” Yale? Or is there an artificial separation because Uber isn’t Yale and isn’t subject to the same requirements?

During the episode, Shankar Vedantam at one point asks Chen about the implications for individual users’ privacy in research projects based on users’ data. Chen seemed concerned about the implications Vedantam raised, but also somewhat dismissive, simply suggesting that Uber has a Privacy Officer, a hire that was made only after a user outcry when it was discovered that an Uber executive may have inappropriately used his access to track the movements of a reporter. Chen said he didn’t usually worry about his behavioral data being used by tech companies, but that Vedantam’s question is now making him think more about it.

I am encouraged that reporters are challenging researchers and industry on their data and research practices, and I certainly don’t believe we should throw the proverbial baby out with the bathwater here. There is much to be gained from these unprecedented datasets of human behavior, which will add to what we know and understand about humans and social behavior.

It’s also the case that with great power comes great responsibility. Greater transparency, the involvement of research ethicists, and ensuring truly informed participants should be required not just for academic researchers, but also for researchers working in industry.

Look for a future post on the role of psychologists in the ethical conduct of research, and why I believe that a professional code of ethics is a vital component of protecting individuals.
