
When Are Models of Technology in Psychology Most Useful?

Published online by Cambridge University Press:  22 November 2017

Richard N. Landers, Old Dominion University
Tara S. Behrend, George Washington University

Correspondence concerning this article should be addressed to Richard N. Landers, Department of Psychology, 250 Mills Godwin Building, Old Dominion University, Norfolk, VA 23529. E-mail: rnlanders@odu.edu


Type: Commentaries
Copyright © Society for Industrial and Organizational Psychology 2017

In industrial-organizational (I-O) psychology, much like in the organizational sciences more broadly (Hambrick, 2007), we have a bit of an addiction to theoretical models. It is commonly assumed that developing new theory is the most valuable way to solve pressing research problems and to drive our field forward (Mathieu, 2016). However, this assumption is untested, and there is growing awareness among organizational scientists that this hardline approach, which is unusual among both the natural sciences and other social sciences, may even be damaging the reputation and influence of our field (Antonakis, 2017; Ones, Kaiser, Chamorro-Premuzic, & Svensson, 2017). As Hambrick (2007) describes, the requirement for theory first “takes an array of subtle, but significant, tolls on our field” (p. 1348). As we will describe in this article, Morelli, Potosky, Arthur, and Tippins’ (2017) suggestions, if taken at face value, will likely create such tolls by encouraging the creation of new theories of dubious value. To be clear, we agree with Morelli et al. that better theory is needed for technology's impact on I-O psychology broadly and talent assessment in particular. We disagree, however, that creating new technology theories using the approaches that I-O psychology typically employs is likely to accomplish this broader goal. Rather, it will ultimately only isolate research on I-O technologies even further from both mainstream I-O research and technology research. Given that we are already quite isolated, this would be a disastrous path.

Morelli et al. (2017) do appear to anticipate the challenges ahead when they state, “Undoubtedly, the question of how to create a unifying conceptual model of ‘technology’ applied within I-O psychology is a broad one with many tacks to an informative answer” (pp. 635–636), before immediately abandoning that task, never to revisit it, and focusing instead upon the much narrower domain of talent assessment. This is a wise move, because attempting to develop such a model given currently available data would be a fool's errand. But, even within talent assessment, there are significant problems with the theory-first approach that are likely only to become apparent after a substantial number of researcher hours have been wasted. We hope our comments will prevent at least a few such cases; to that end, we will explore four major concerns we have identified with Morelli et al.’s approach.

1. Theories Before Facts Is Backward

As Hambrick (2007) describes, management and its sister field, I-O psychology, are unusual among social sciences in their insistence that theory should always precede facts. Because of this approach, we often end up with theories that are never tested and are based upon researcher intuition and interpretation, with few or no facts to support them (Kacmar & Whitfield, 2000). Although Morelli et al. (2017) do not advocate for this approach explicitly, we believe most I-O psychologists reading a call for theory will interpret it to mean that researchers should develop new theories about technology and then test these theories later (or not) using available data. We urge our colleagues to reject this interpretation, which would be particularly harmful in the context of technology given the pace with which technology progresses. Technology needs to be researched inductively or proactively. This does not mean we should abandon theory but that facts, obtained from research, should be used to construct and revise theories, which can then be tested with further research.

A significant barrier is that this is effectively impossible given the current publishing environment of our top journals. One cannot simply ask an interesting, important question, test it with a rigorous research design, and publish the result. There must be some theory, no matter how half-relevant or inane, upon which to hang those results. In the technology-enhanced talent assessment context, we have seen this norm cause harm to scientific progress, keeping academic research lagging behind practitioner understanding. For example, there is a significant amount of unpublished data, mostly presented at the annual conference of the Society for Industrial and Organizational Psychology (SIOP) or kept within organizational research vaults, suggesting that under certain unclear circumstances, there are differences in observed means between job applicants who complete assessments on mobile devices and those who complete them on nonmobile devices. This is an interesting and compelling problem, absent any theory. If natural scientists were approaching this problem, they would likely design, conduct, and publish experimental lab studies exploring under what conditions these differences exist, that is, create a database of facts. A research literature on the boundary conditions of such differences would develop, and eventually, a meta-analysis summarizing this work would reveal patterns of interest, from which useful frameworks and models could be developed. Once such models were developed from facts, only then would researchers attempt to test them in the field. Yet this mixed-methods approach to theory development is effectively impossible in most of our “top” journals; realistically, researchers would need to conduct this entire series of studies on their own, without publishing any of the intermediary work, and then publish them all together in an anthology. This approach is common in social psychology; for example, in the Journal of Personality and Social Psychology, it is not uncommon to see a single article containing five to eight studies with 20 to 40 participants each. This not only represents a colossal waste of researcher time but also creates even greater delays until that research can be consumed and interpreted by our practitioner colleagues. We thus urge researchers (and reviewers!) not to fall into the theory-first trap in our first meaningful steps into studying technology. It is damaging and unnecessary for high-quality science. For the study of technology in I-O psychology to be productive, we must abandon this misguided belief.
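To make the inductive path concrete, the sketch below shows, in R, how a set of accumulated lab findings on mobile versus nonmobile score differences might eventually be summarized. It uses the metafor package; the data frame (mobile_studies), its column names, and the proctored moderator are hypothetical illustrations, not an analysis anyone has run or proposed.

```r
# Minimal sketch: meta-analyzing hypothetical lab studies comparing
# assessment scores on mobile vs. nonmobile devices (assumed data).
library(metafor)

# mobile_studies is an assumed data frame with one row per study:
# means, SDs, and ns for each device group, plus a candidate
# boundary condition (here, whether testing was proctored).
dat <- escalc(measure = "SMD",
              m1i = m_mobile,    sd1i = sd_mobile,    n1i = n_mobile,
              m2i = m_nonmobile, sd2i = sd_nonmobile, n2i = n_nonmobile,
              data = mobile_studies)

# Random-effects summary of the device effect across studies
overall <- rma(yi, vi, data = dat)
summary(overall)

# Test a hypothesized boundary condition as a moderator
moderated <- rma(yi, vi, mods = ~ proctored, data = dat)
summary(moderated)

# Visualize the accumulated "database of facts"
forest(overall)
```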

2. Models and Frameworks Are Not Interchangeable

Morelli et al. (2017) use the terms conceptual model, conceptual framework, and theoretical framework essentially interchangeably, but it is important to be precise with such terms because they can imply different underlying assumptions about the phenomena they describe. Commonly, a model is a system of mathematical formulas describing the interrelationships between constructs and measures—an “economical description of natural phenomena” (Box, 1976, p. 792) that combines assumptions that cannot be tested with assertions that we believe to be true. It is a precise and technical realization of theory, and we suspect such models to be useful for future description and prediction despite being incomplete representations of reality. Sometimes the mathematics of these models are explicitly stated, such as when using Raudenbush, Bryk, and Congdon's (2013) hierarchical linear modeling software or the lavaan package (Rosseel, 2012) in R, and sometimes they are implicit, such as when drawing the nomological net of a construct using path diagram notation. In the language of causal modeling, constructs are latent causal forces—concepts that cannot be measured directly, yet are theorized to cause manifestations of themselves in the form of specific indicators and to influence each other. For example, when attempting to assess conscientiousness, we are attempting to manifest it via well-designed survey questions; for a person with high conscientiousness, that level of conscientiousness should cause that person to respond differently than a person with low conscientiousness.
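As a minimal illustration of what "model" means in this technical sense, the lavaan sketch below specifies conscientiousness as a latent construct that is theorized to cause responses to its indicators. The item names (c1 through c5) and the data frame survey_data are hypothetical placeholders, not items from any published measure.

```r
# Minimal sketch: conscientiousness as a latent construct whose level
# is theorized to cause responses to its indicators (assumed items c1-c5).
library(lavaan)

cfa_model <- '
  conscientiousness =~ c1 + c2 + c3 + c4 + c5
'

# survey_data is an assumed data frame containing the item responses
fit <- cfa(cfa_model, data = survey_data)
summary(fit, standardized = TRUE, fit.measures = TRUE)
```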

Framework, in contrast, is a more general term referring to a system by which concepts can be organized. Importantly, in the only published article at the time of this writing cited by Morelli et al. (2017) as an example of high-quality, technology-related theorizing (i.e., Potosky, 2008), the ideas discussed are referred to as a “framework” and not a “model,” which we consider the responsible approach. This is because calling for “models” of technology implies that “technology” is a construct with a causal structure and therefore that technology can manifest itself as indicators. It implies that technology causes measurable manifestations of itself. Although researchers in some other social sciences, such as sociology, take this approach, technology in I-O psychology is better conceptualized as a stimulus, a boundary condition, or an exogenous variable with objectively defined features (see, e.g., Landers & Reddock, 2017). Although a family of technologies may exist with certain shared characteristics, those characteristics are highly subject to change by human intervention. Thus, technology in I-O psychology can be organized in frameworks; it should not be modeled as a construct.

Despite the call for more theory, Morelli et al. (2017, p. 640) argue implicitly against viewing technology as a construct by reiterating Potosky's (2008) statement that “practitioners who want to use a new technology to administer selection tests might not want equivalence with an older administration medium. . . .” The intuition that new methods should be identically valid to old ones is driven by an unstated theoretical model of selection technology: It reflects an underlying assumption that selection technologies are simply different modes of conveyance for information about constructs—that particular selection technologies are simply indicators of a broader impactless selection technology construct. From that perspective, aside from random error, all technologies must be the same; thus, assessment technologies should be equivalent, and therefore measurement equivalence is a meaningful analysis to undertake. A view of technology as a theoretical construct does not allow for the possibility that some methods might be better measures of some constructs than others, and applying the label “technology” to only specific subsets of technologies does not solve this problem. Instead, we should abandon that view entirely, endorsing instead the viewpoint of Arthur and Villado (2008) that predictors (i.e., constructs) and methods (i.e., technologies) are distinct concepts that should be considered both independently and interactively in terms of their effects on both measurement quality and other outcomes.
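To show what the measurement equivalence analysis implied by that assumption looks like in practice, the sketch below compares increasingly constrained multi-group models across devices in lavaan. The grouping variable (device), the items, and the data frame are hypothetical assumptions; the comparison itself (configural, metric, scalar) is standard invariance testing rather than anything specific to Morelli et al.

```r
# Minimal sketch: testing measurement equivalence of an assessment
# across mobile and nonmobile respondents (assumed variables).
library(lavaan)

inv_model <- '
  conscientiousness =~ c1 + c2 + c3 + c4 + c5
'

# device is an assumed grouping variable ("mobile" vs. "nonmobile")
fit_configural <- cfa(inv_model, data = applicant_data, group = "device")
fit_metric     <- cfa(inv_model, data = applicant_data, group = "device",
                      group.equal = "loadings")
fit_scalar     <- cfa(inv_model, data = applicant_data, group = "device",
                      group.equal = c("loadings", "intercepts"))

# Likelihood ratio tests: does constraining loadings/intercepts to be
# equal across devices meaningfully worsen model fit?
anova(fit_configural, fit_metric, fit_scalar)
```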

Where I-O psychology could do some useful a priori modeling is regarding the psychological experiences resulting from technology. No matter what technologies are developed in the future, at least short of Matrix-style brain jacks, the fundamental human aspects of experiences should remain relatively consistent, because new technologies do not create new human emotions or cognitions. Within the context of talent assessment, this includes constructs like experienced novelty, frustration, confusion, and motivation. Although they never state this outright, Morelli et al. (2017) are in fact interested in this type of modeling; the framework of Arthur, Keiser, and Doverspike (2017) they describe reflects an effort to organize psychological constructs relevant to a particular technological context. Its constructs (e.g., working memory, perceptual speed) are not themselves technologies. Thus, this is not a framework of technology but is instead a framework of psychological factors in the context of talent assessment that are particularly relevant to technological delivery. This highlights the importance of the difference between conceptualizing these ideas as frameworks versus models; if a new technology-based assessment method were invented (as happens from time to time), new psychological factors might become relevant. A framework of psychological experiences might need to be updated to add previously unconsidered constructs, but the models specifying how psychological constructs already in the framework are interrelated and defined should not need to change because of that framework update. For that reason, we urge researchers to develop only conceptual frameworks of technology and theoretical models of related psychological factors, not theoretical models of technology itself.

3. When Other Fields Have Tried to Model Technology, It Has Not Gone Well

A prime example of the pitfalls associated with theorizing about technology as a construct comes from the field of educational technology, where a key theoretical question is: “Can instructional medium (i.e., Web vs. lecture) influence learning?” Like most questions about technology, on its surface, the answer seems simple: Either it can or it cannot. Yet, because of this apparent intuitiveness, a researcher's answer to this question reveals a lot about his or her own theoretical stance toward both learning and technology. Over years of debate, the two theoretical stances implied by these responses created what came to be known as the Clark–Kozma debate, named after the two researchers who argued most publicly about it (Clark, 1994; Kozma, 1994). The core of the argument can be represented thus. Clark argued that technology, which included learning technologies such as lectures and correspondence education, is merely a vehicle to deliver human decisions about instructional content to learners. In contrast, Kozma argued that specific media features, such as the number of words visible from an e-book at any given moment, change how those human decisions can be interpreted or realized. This argument, which can be traced back to 1982, is still essentially unresolved. There continue to be proponents on both sides, and neither Clark nor Kozma has changed his position in the following 35 years of debate. These debates have even wormed their way into the I-O psychology training literature (cf. Bell, Tannenbaum, Ford, Noe, & Kraiger, 2017). Perhaps the only clear conclusion to be drawn from Clark–Kozma is that educational technologists see no path to a resolution and are tired of arguing about it (cf. Hastings & Tracey, 2005).

We contend that similar roadblocks are ahead if we start down the rabbit hole of theorizing about technology in I-O psychology; we will demonstrate why this is the case via the following thought experiment. Fundamentally, all current I-O assessments are designed by humans. Given this fact, assessments are manifestations of the human decision-making process. Once an assessee is exposed to the results of those decisions, the assessee makes her own decisions regarding how to react; thus, it is the main effects and interaction of the assessment designer's and assessee's decisions that create assessment scores. In this view, the assessee's decision to use a mobile device and the assessment designer's decision not to create a mobile-friendly site together could have created the measurement equivalence problems that we now see. If technology is the physical manifestation of the will of assessment designers, assessees, and the interaction between the two, then all assessment theory could be made more accurate by creating new models of the psychological aspects of assessment design decisions and assessee motivation in response to those decisions. Because the technology itself is the product of human will, the study of technology's direct impacts is therefore a waste of time.

To mangle Schneider (1987) a bit, this possibility that “the people make the assessment” is a reasonable theoretical stance suggesting that the most fundamental scientific questions about technology can be best answered by studying decision making in assessment design and reactions to those decisions. Yet following through by testing this theory—or more likely, scaffolding new subtheories on it without ever actually testing any of them—is unlikely to ever produce useful information in the context of I-O psychology practice, even if it is a better representation of reality, at least from a postpositivistic point of view. If we are not careful and let theory development, rather than facts, drive our research, that is precisely the road down which we are heading. This is a future we need to work actively to avoid. There is nothing so practical as a good theory, but given the way publishing works in our field, it is a relatively trivial endeavor to create interesting but useless theory that lingers, poisoning all researchers who cite it in the future or allow it to influence their thinking.

4. We Can't Selectively Apply This Approach to Favored Technologies

Morelli et al. (2017) define technology as “the constellation of individual tools that assist a user with controlling or adapting to his or her environment” (p. 636), calling this a “middle, plural level” (p. 637) of definitional possibilities, intended to balance several competing needs for such a definition. We agree with this basic conceptualization, but we find that the discussion that follows applies this definition selectively. Specifically, under this definition, the term “technology-based assessment” is nonsensical because talent assessment itself is a technology—a tool that I-O psychologists use to get information about people to which they otherwise would not have access. Yet we doubt that Morelli et al. would be as enthusiastic about the usefulness of a higher-order conceptual framework of selection technologies that lists more specific assessment technologies, such as surveys, assessment centers, interviews, and social media scraping. Such a framework, although it could accurately summarize the state of modern selection methods at the time it was created, would not be very useful.

A Path Forward

In summary, we agree with the general thrust of Morelli et al.’s (2017) arguments, that we need to better integrate research on technology and I-O psychology to build more flexible, useful theories and frameworks from which practical decision making can flow, but we disagree regarding the details of the approach that will accomplish this goal. We contend that the best path forward for I-O psychology will be found by developing flexible conceptual frameworks of technology within meaningful domains, avoiding the rabbit holes of theorizing for the sake of theorizing, theoretically modeling only the psychological impacts and covariates of technology-related experiences, using a combination of deductive and inductive research methods, and doing all of this with a more inclusive definition of technology. Otherwise, we risk repeating the same mistakes of technologists who have spent their entire research careers fighting over minutiae, and repeating those mistakes will be useful for no one.

References

Antonakis, J. (2017). On doing better science: From thrill of discovery to policy implications. The Leadership Quarterly, 28, 5–21.
Arthur, W. Jr., Keiser, N., & Doverspike, D. (2017). An information processing-based conceptual framework of the effects of the use of Internet-based testing devices on scores on employment-related assessments and tests. Manuscript submitted for publication.
Arthur, W., & Villado, A. J. (2008). The importance of distinguishing between constructs and method when comparing predictors in personnel selection research and practice. Journal of Applied Psychology, 93, 435–442.
Bell, B. S., Tannenbaum, S. I., Ford, J. K., Noe, R. A., & Kraiger, K. (2017). 100 years of training and development research: What we know and where we should go. Journal of Applied Psychology, 102, 305–323.
Box, G. E. P. (1976). Science and statistics. Journal of the American Statistical Association, 71, 791–799.
Clark, R. E. (1994). Media will never influence learning. Educational Technology Research and Development, 42(1), 21–29.
Hambrick, D. C. (2007). The field of management's devotion to theory: Too much of a good thing? Academy of Management Journal, 50, 1346–1352.
Hastings, N. B., & Tracey, M. W. (2005). Does media affect learning? Where are we now? TechTrends, 49(2), 28–30.
Kacmar, K. M., & Whitfield, J. M. (2000). An additional rating method for journal articles in the field of management. Organizational Research Methods, 3, 392–406.
Kozma, R. B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research and Development, 42(2), 7–19.
Landers, R. N., & Reddock, C. M. (2017). A meta-analytic investigation of objective learner control in web-based instruction. Journal of Business & Psychology, 32, 455–478.
Mathieu, J. E. (2016). The problem with [in] management theory. Journal of Organizational Behavior, 37, 1132–1141.
Morelli, N., Potosky, D., Arthur, W. Jr., & Tippins, N. (2017). A call for conceptual models of technology in I-O psychology: An example from technology-based talent assessment. Industrial and Organizational Psychology: Perspectives on Science and Practice, 10(4), 634–653.
Ones, D. S., Kaiser, R. B., Chamorro-Premuzic, T., & Svensson, C. (2017). Has industrial-organizational psychology lost its way? The Industrial-Organizational Psychologist. Retrieved from http://www.siop.org/tip/april17/lostio.aspx
Potosky, D. (2008). A conceptual framework for the role of the administration medium in the personnel assessment process. Academy of Management Review, 33, 629–648.
Raudenbush, S. W., Bryk, A. S., & Congdon, R. (2013). HLM 7.01 for Windows [Computer software]. Skokie, IL: Scientific Software International, Inc.
Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48(2), 1–36.
Schneider, B. (1987). The people make the place. Personnel Psychology, 40, 437–453.