
Practical theory about workplace technology requires integrating design perspectives

Published online by Cambridge University Press:  22 September 2021

Richard N. Landers*
University of Minnesota - Twin Cities
*Corresponding author. Email: rlanders@umn.edu

© The Author(s), 2021. Published by Cambridge University Press on behalf of the Society for Industrial and Organizational Psychology

Although Hu et al. (2021) rightfully call out industrial-organizational (I-O) psychology’s general failures in studying the largely technology-driven, rapidly changing nature of work, they also display one of the major causes of the problem: the psychologization of technology. Psychologization here is not in the sense of psychology spreading to other disciplines (cf. De Vos, 2014) but rather the tendency of psychologists themselves to apply psychological methods and philosophies to the technologies they study. This is perhaps most succinctly captured by Maslow’s (1966) hammer, modified a bit: When all you have are psychological methods, everything looks like a construct. The core problem with this approach is that technology, conceptualized the way Hu et al. describe, is not a construct (Landers & Behrend, 2017).

This general conclusion is obvious when one considers the definitional requirements for constructs. Constructs are unobserved and often unobservable. Historically, constructs only served as tentative explanations for observed phenomena until disproven, a necessary step along the path to knowledge (e.g., see Einstein’s [1924] dismissal of an idea in Newtonian physics as a theoretical construct). We can only infer information about constructs based on the observation of indicators that we argue reflect them. For example, in psychology, a person’s cognitive ability is not something we in fact know to exist through observation. Its existence is something that must be rationally argued based on available evidence (see Gottfredson, 1997). These arguments in turn rest upon a broad network of assumptions and past evidence. The existence of cognitive ability is not a question that can be fully resolved with current empirical methods because there is no way to directly measure cognitive ability; we can only observe outcomes that are believed to be caused by it. Thus, cognitive ability is a construct in a quite literal sense: It is a socially constructed concept that many believe to exist, given the current weight of evidence, but cannot prove is real. Maybe one day, but not today.

Therefore, technology, as we refer to and operationalize it in psychology, is clearly not a construct. It is literal. It exists as it appears. If a person does not fully understand how a piece of technology works, this cannot be attributed to the technology potentially not existing, because someone, or more likely a group of someones, who do understand that technology designed it, developed it, refined it, and either used it themselves or produced it for others to use. This step is a necessary precursor for any technology to causally influence any human processes. If a technology appears to “influence” a human process, it is more accurate to say that the technology was designed, used, and experienced in such a way that this influence occurred, whether intentionally or unintentionally. Any apparent effect of technology is an illusion, reflecting the combination of human decision making, skill, and other resources when creating that technology and the decision making, skill, and other resources employed by other humans when using it or experiencing it.

A clear example of the harmful effects on scholarship that can occur when technology is treated as a construct can be observed in the literature on media effects in learning, which spent decades addressing the question, “Does choice of instructional media, such as face to face versus the web, influence learning outcomes?” It is now more generally understood that this is a poorly formed question (Carter, 1996), but generations of researchers, going back at least as far as tests of the “effect” of mail-based correspondence courses, generated a literature of thousands of studies on this question, with equivocal results. In short, the results of such studies depended heavily on exactly how the technology was treated in both technology-enhanced and nonenhanced groups, and because researchers rarely shared working definitions of the technologies they were studying, it became impossible to draw any meaningful conclusions regarding generalizable effects from any one study. In I-O psychology, this issue became most evident in the results of a meta-analysis showing that studies comparing web-based and in-person instructional experiences in which designers actively tried to create equivalent learning experiences across media generally showed no differences at all (Sitzmann et al., 2006). This does not imply that technologies do not differ in the unique capabilities they provide or that it is not easier to use some technologies than others to create powerful learning experiences. It instead implies that simply treating “web-based instruction” as a construct worth investigating whitewashes meaningful differences and nuance to describe an “average” technology that often does not exist. Thus, studies that simply compare the presence of a technology with its absence are a waste of resources: When they are done thoughtlessly, as they often are, they do not support the conclusions that people want to draw when conducting them.

Landers and Marin (2021) developed a taxonomy of researcher orientations toward technology to explain the hidden assumptions that influence the quality of theory developed involving technology. Approaches like those in the media effects literature described above, where the complexities of design and development are all collapsed under a single label and studied as a unitary concept, are referred to as the technology-as-causal paradigm. Considering the role of individual differences or comparing specific versions of technologies alone, often operationalized as adding moderators or mediators to causal approaches and called the technology-as-instrumental paradigm, does not help much. Landers and Marin critique both paradigms as “often harmful oversimplifications … unhelpful … [and] superficial” (p. 245). By studying a technology while assuming it to be representative of a broader construct that will remain consistent between studies, researchers within these paradigms limit the usefulness of their own research, binding generalizability to the precise version of the technology studied. Yet no other organization may ever use that version of the technology as implemented, and even the studied organization is likely to upgrade at some point, limiting generalizability even to the organization in which the research was conducted.

Instead of unnecessarily limiting generalizability like this, Landers and Marin (2021) encourage researchers to take a longer and more comprehensive view: Any one technology, as it might be used in a study or considered theoretically, is only a snapshot in that technology’s development history, a unique version implemented in a particular organization that evolved out of past versions, was customized to current problems, and will be updated in the future. Each snapshot has specific features and capabilities intended to meet the needs of a moment, a particular use case. Characteristics of technologies that enable new human behaviors or capabilities are called affordances, a term used in the research literature on human–computer interaction. It is these affordances, along with how the affordances are ultimately exploited in organizations, that are meaningful constructs worthy of study. It is by studying this intersection point between affordances and use that accurate, useful, generalizable research will be created.

Perhaps the best example in the I-O literature currently is Arthur, Keiser, Hagen, and Traylor’s (2018) and Arthur, Keiser, and Doverspike’s (2018) structural characteristic/information-processing framework. Arthur et al. developed this framework to directly address a pressing research question in psychometrics: How do we determine measurement equivalence across device types when a questionnaire respondent could use any of thousands of different potential devices, some of which will not even exist by the time a study of them is published? In a technology-as-causal paradigm, the logical approach would be to randomly assign research participants to use either a smartphone or a desktop and compare, drawing a meaningless conclusion of “equivalent” or “not,” hardly a comprehensive answer to the research question. In a technology-as-instrumental paradigm, the logical approach would be to treat the specific smartphone type as a moderator, but the practical constraints of randomly assigning people to thousands of different devices, not to mention the generalizability problem when new smartphone models are released, make this a clear waste of time. Instead, Arthur et al.’s framework asks us to conceptualize smartphones as a combination of features potentially meaningful to measurement, such as screen size, display resolution, or interaction style, and identifies affordances provided by each of these features. A smartphone with greater display resolution, for example, can more legibly display finer visual details, suggesting that resolution might create inequivalence between mobile devices that differ in resolution if distinguishing fine visual details is central to measurement. If a person’s device does not afford the detection of fine details, then failing to detect such details is not a valid reflection of that person’s construct standing. By theoretically linking affordances to desired outcomes, Arthur et al. demonstrate the technology-as-designed paradigm, a consideration of the technology they study as possessing a long history and vast future of device affordances. In considering a fuller scope of “devices,” they develop a theory that describes not only the mobile measurement of today but also the mobile measurement of yesterday and tomorrow. It is only by taking this view that they can create generalizable theory about technology that does not rot with age.

The research agenda on information and communication technologies described by Hu et al. (2021) stands at this same paradigmatic crossroads but shifts between paradigms. The statements of greatest concern are those in which they lean toward technology-as-causal and technology-as-instrumental views. For example, they at one point conclude that “meta-analyses have provided evidence supporting the benefits of telecommuting in reducing work–family conflict” and that “telecommuting is more helpful in alleviating work-to-family interference than family-to-work interference,” statements that both suggest causal, general effects created by the use of telecommuting technology. However, as the pandemic of the early 2020s made especially clear, these effects quite obviously depend on exactly what is meant by “telecommuting,” how telecommuting technology is being used by the people involved in it, the affordances of the technology, and the specific policies used to implement and control it. For telecommuters with unsympathetic bosses and demanding home lives, telecommuting undoubtedly increases interference. For telecommuters who are required to use activity-tracking software, such as platforms that report to their supervisors every few seconds whether an employee’s computer is currently being used for work, one has difficulty imagining any positive effect on work–family conflict at all. Simply considering these situational factors to be potential moderators ignores the potential, and indeed the likelihood, that telecommuting software developers will alter their software in response to better fulfill market needs. Treating these technologies as simple causes of organizational behavior, moderated or not, undercuts the practical use of I-O theory.

Importantly, these concerns are not meant to undermine the ambitious research agenda and next steps laid out by Hu et al. (2021). They describe critical research areas and major problems. Failing to study these areas and update the field’s methods would harm the relevance of I-O psychology to real-world organizational functioning for decades. Yet we also risk studying these problems from an incomplete perspective, developing poor-quality theory with neither accuracy nor practicality, widening the science–practice gap even further. We cannot afford this. Hu et al. recognize the danger when stating, “it appears that the current literature often works from the assumption that studying the influence of a specific ICT context on broader social and work environment issues would provide us with new knowledge beyond what we already knew.” This is precisely the problem directly addressed by adopting the technology-as-designed paradigm. We must stop studying individual technologies without embracing the broader picture.

There is no simple solution to this problem; it is a grand challenge, a wicked problem of siloed scholarship that we must overcome together. This is not just a matter of adding project team members from other disciplines or borrowing theories from other fields. The problem will not be solved by simply tweaking our theories, sifting through them to separate the “good” from the “bad,” adding new boxes and arrows to fill minor, unimportant gaps. It requires fundamentally changing how we conceptualize technology in relation to humans at work. It requires becoming experts in work-relevant classes of technology, their affordances, and their use. It requires actively embracing interdisciplinary methods and understanding how these technologies were designed and will be redesigned, and how such changes affect their deployment in organizations. Only with such a comprehensive and thoughtful approach to technology will I-O psychology develop practical theory for describing, understanding, predicting, and influencing authentic workplace behavior in the years to come.

References

Arthur, W., Jr., Keiser, N. L., & Doverspike, D. (2018). An information-processing-based conceptual framework of the effects of unproctored internet-based testing devices on scores on employment-related assessments and tests. Human Performance, 31(1), 1–32.
Arthur, W., Jr., Keiser, N. L., Hagen, E., & Traylor, Z. (2018). Unproctored internet-based device-type effects on test scores: The role of working memory. Intelligence, 67, 67–75.
Carter, V. (1996). Do media influence learning? Revisiting the debate in the context of distance education. Open Learning: The Journal of Open, Distance and e-Learning, 11(1), 31–40. https://doi.org/10.1080/0268051960110104
De Vos, J. (2014). Psychologization. In T. Teo (Ed.), Encyclopedia of critical psychology (pp. 1547–1551). Springer. https://doi.org/10.1007/978-1-4614-5583-7_247
Einstein, A. (1924). Über den Äther [Concerning the aether]. Verhandlungen der Schweizerischen Naturforschenden Gesellschaft, 105, 85–93. http://www.jonathonfreeman.org/wp-content/uploads/2018/05/Einstein-Concerning-the-aether-1924.pdf
Gottfredson, L. S. (1997). Mainstream science on intelligence. Intelligence, 24(1), 13–23.
Hu, X., Barber, L., Park, Y., & Day, A. (2021). Defrag and reboot? Consolidating information and communication technology research in I-O psychology. Industrial and Organizational Psychology: Perspectives on Science and Practice, 14(3), 371–396.
Landers, R. N., & Behrend, T. S. (2017). When are models of technology in psychology most useful? Industrial and Organizational Psychology: Perspectives on Science and Practice, 10(4), 668–675.
Landers, R. N., & Marin, S. (2021). Theory and technology in organizational psychology: A review of technology integration paradigms and their effects on the validity of theory. Annual Review of Organizational Psychology and Organizational Behavior, 8(1), 235–258. https://doi.org/10.1146/annurev-orgpsych-012420-060843
Maslow, A. H. (1966). The psychology of science: A reconnaissance. Harper & Row.
Sitzmann, T., Kraiger, K., Stewart, D., & Wisher, R. (2006). The comparative effectiveness of web-based and classroom instruction: A meta-analysis. Personnel Psychology, 59, 623–664. https://doi.org/10.1111/j.1744-6570.2006.00049.x