
One of the biggest challenges facing management scientists has been the struggle to produce knowledge that is both academically rigorous and applicable to practicing managers. In an Academy of Management Journal editorial, we described two problems that contribute to this challenge.

The first is what we called the “Lost in Translation” problem: almost no managers turn to academic journals (publications like the Academy of Management Annals, the Academy of Management Journal, the Academy of Management Review, the Journal of Applied Psychology, and Organizational Behavior and Human Decision Processes, to name a few) for advice on how to improve their skills or practices. Researchers have found that managers tend to be unaware of research-supported management insights reported in academic journals, and that such insights are typically excluded from practitioner-oriented journals. Relatedly, managers tend to cling to long-held assumed truths that management scholars’ studies have often dispelled. For example, many managers still believe that the errors they make in evaluating their employees can be corrected by training them to recognize those errors and suggesting ways to avoid them, when the evidence shows that such training can actually increase the number of errors they make.

The second problem, which we refer to as “Lost Before Translation,” is the tendency for academic researchers to design studies without input from managers or employees — the populations that their studies’ results are meant to help.

If academics want to help practitioners improve the way they manage and have an impact in the real world, they need to address these two problems. Unfortunately, the current way that business schools reward their professors will make this very difficult. This is because promotions and salary increases at most business schools are based primarily on the number of a professor’s peer-reviewed publications in “A” journals (those with the highest impact factors, as measured by citation counts). Using these publications as the main “currency” for career advancement has produced four unintended consequences.

First, counting solely or primarily “A” journal publications when determining rewards, recognitions, and other career-advancement prizes communicates, in essence, that professors won’t advance if they choose to publish in outlets that are more widely read by managers.

Second, placing greater weight on professors’ number of “A” journal publications than on other ways in which they produce scholarship discourages professors from pursuing activities that epitomize “engaged scholarship,” such as the research and learning opportunities that arise when applying knowledge to help communities solve social problems.

Third, evaluating professors’ scholarly contributions by counting their “A” journal publications incentivizes scholars to produce as many published studies as possible. That can encourage scholars to favor relatively quick data-gathering strategies, such as outsourced sample acquisition (like Amazon’s Mechanical Turk), convenience samples (including their own students), and student pools for laboratory studies, over more time-consuming methodologies, such as cross-cultural ethnographic and other qualitative studies, longitudinal or decades-long studies, and manually coded customization of publicly available archival databases, which tend to yield more generalizable results. For example, if a researcher wants to understand how employees respond to particular incentives, the results of a study measuring how undergraduate students in the U.S. respond to those incentives will have limited generalizability, and may not be at all relevant in most work contexts or in other cultures. One review of methodologies concluded that management researchers “…do what they know, what they have done, what is efficient and easier, and what is rewarded (i.e., published),” which is not always the same as what would be most illuminating or most useful.

Fourth, evaluating “scholarship” primarily by counting professors’ “A” journal publications could also encourage academics to engage in questionably ethical research practices in order to produce results that will be accepted by these journals. For example, researchers might omit variables associated with non-significant (and thus typically non-publishable) findings, a phenomenon referred to as the “file drawer problem.” Or they might present surprising findings as though these were hypothesized all along, a practice called “HARKing” (hypothesizing after results are known). These practices not only weaken the understanding of the phenomena being studied but also call into question the validity of the research.

It’s time that business schools take a broader approach to assessing what it means to have scholarly impact. We advocate for three main changes. First, business schools shouldn’t just measure impact from within academia but outside it as well. For example, rather than only counting the number of times a professor’s articles have been cited by other academics, we should also be looking at how often the work is cited or used by students, practicing managers, and policy makers, and in articles (e.g., news, periodicals, magazines, and podcasts) that are mass-distributed to these multiple stakeholders. We call this taking a “pluralistic approach to scholarly impact.”

Second, we believe academics should focus on conducting research that positively impacts business and society, or what a global multidisciplinary team of leading management scholars calls “responsible research.” Responsible research has been described as research that balances the interests of shareholders with the social and economic outcomes of companies, uses rigorous research methods to understand puzzling local phenomena, and seeks truth above all else by using ethical research methods. Metrics that capture the extent to which research achieves these goals could be additional ways to assess scholarly impact.

Third, given that any research study is part of an ecosystem, it is incumbent upon all stakeholders — scholarly researchers, business school administrators, funding agencies, government, practicing managers, and journal editors — to work together in a concerted way to encourage and reward responsible research and move beyond the limited approaches to conducting and disseminating management research that are currently used.

What would these things look like in practice? On measurement, business schools could begin to look at the following metrics: the number of invitations to highly visible business events; the number of practitioner publications, including popular press books; media coverage in outlets that are viewed or read by a broad (non-academic as well as academic) audience; requests for time from industry or government agencies; the number of presentations to practitioner events and communities; the amount of external funding received from well-known funding agencies such as the National Science Foundation or the Kauffman Foundation; and partnerships with external stakeholders, such as local and state legislatures or other policy makers. Each of these metrics reflects that a management scholar’s research has helped or enlightened communities beyond academia.

Business schools could also rely on new technologies to measure a professor’s impact. They might take into account the following: inclusion of work in digital libraries; number of downloads of scholarly articles; online engagement, both on social media and with fellow researchers on sites like academia.edu and ResearchGate; mentions in Wikipedia; and discussions in news outlets, such as newspapers, blogs, and websites. Business schools could use web-based tools known as “altmetrics” to collect data on how often research is mentioned in these outlets.

Assessing academics against these kinds of metrics or indicators would mean progress toward the second change we are advocating for: more relevant and useful research. In particular, we see the following advantages: (1) more engaged scholarship; (2) a broader set of consumers who use scholarly work, including managers, employees, consumers, and policy makers in addition to management scholars; (3) increased likelihood that research topics and study designs will incorporate input from those same populations; (4) increased diversity in the research methodologies used, including longer-term studies; and (5) more ethical research practices.

Unfortunately, in our view, it’s not likely that business schools will redesign performance management systems to include the metrics we listed above. Scholars who have benefited from the way scholarly impact is traditionally assessed (by counting only or primarily “A” journal publications) may resist seeing scholarship assessed more pluralistically. (Indeed, both of us have benefited from this more traditional rewards system through publishing in top-tier journals with sufficient frequency to be promoted at our respective universities to positions of “distinguished professor” and named professorships.) Moreover, even if individual scholars, their departments, or schools were to see value in this new way of assessing professors, it will be difficult for them to make changes when other institutions continue to evaluate scholars’ records using more traditional methods. An academic who received a promotion or tenure under one system may not have that recognition reciprocated at another school. Consequently, it will take the entire academic community, or at a minimum the set of schools and universities that typically call upon one another’s faculty for evaluations, to simultaneously broaden the way scholarly impact gets assessed.

Despite this seemingly uphill battle, it’s important that schools take the initiative if management science is to survive and thrive as a relevant as well as rigorous science. And there are schools that are starting to take the lead on this. As one example, the University of Michigan’s Ross School of Business now has a “Business + Impact” initiative devoted to making sure the research it produces has a tangible social impact. The host of benefits is clear: scholars will use their voice in ways that go beyond merely publishing in top-tier academic journals with very limited practitioner readership; there will be less incentive to engage in questionably ethical research practices, such as putting only their best research findings forward and hiding non-significant ones; and, perhaps most importantly, it will lead to more interest in, and greater perceived legitimacy of, management scholars’ work.

The consequences of not taking action hit home for us recently in the classroom when, after we described findings from a study done a few years ago, one of our MBA students asked, “Why haven’t I heard this before? It would have really helped me multiple times over the last few years of my career. Where has this stuff been hiding?” We could only console the student by saying that we’re all working on getting findings like this out to the broader market. We’re just not there yet. But we should be.

from HBR.org https://ift.tt/2zSxpbf