Modern technology and technological advances offer a variety of benefits and challenges for assessment, data collection, communication, and other research- and practice-related endeavors. The focal article written by Morelli, Potosky, Arthur, and Tippins (Reference Morelli, Potosky, Arthur and Tippins2017) offers a segue into discussions about some of these issues. Although the authors offer some unique insights, we believe their view is incomplete, as it is potentially limited by their focus on testing and assessment. Below, we outline a few key points we hope will advance the conversation. Our commentary is largely grounded in the field of human–computer interaction (HCI), which is an interdisciplinary field that integrates expertise from computer science, psychology (and other behavioral sciences), and many other fields. Whereas psychology tends to place the human user at the forefront of discussions concerning technology, HCI expands beyond just the user's psychology, focusing on the design of interfaces that allow users to interact with computing technology in new ways (Card, Moran, & Newell, Reference Card, Moran and Newell1983).
Technology and Error
One of the biggest concerns we have when the topic of technology surfaces is that the term “technology” is often used synonymously with the term “computerization.” Yet, these two terms are not synonymous. Rather, technology has been advancing since before the development of computerized formats for data collection, assessment, and testing. Consider, for example, how changes in printing, format, and copying may or may not have impacted the results of various tests and assessments. The shift from “ditto” technology to Scantron forms or modern copying capabilities could just as easily have impacted the results of paper-and-pencil testing, potentially calling into question the validity of any research or assessment data collected at differing points of paper-and-pencil technological advancement.
A fundamental underlying issue that often surfaces when it comes to computer-related technology is a basic level of mistrust and the assumption that computerized media are inferior to the baseline (whatever that baseline happens to be) unless proven otherwise. Yet, such a premise is a faulty one. For example, when it comes to equivalence testing as laid out by Morelli et al. (Reference Morelli, Potosky, Arthur and Tippins2017), the premise would appear to be that the baseline medium (often paper-and-pencil) is the true representation of reliability/validity and that any deviation from this as a result of a different medium introduces systematic error into the data collection or assessment process. However, there is a very lively discipline of scholars and practitioners focused on enhancing and optimizing human–computer interaction (no comparable discipline ever emerged for enhancing the human-to-paper-and-pencil-test interaction). Therefore, it is quite possible that computerized technology that optimizes human–computer interaction (whether that computer is a desktop or mobile device) actually introduces less medium-specific error than paper-and-pencil testing. When engaging in equivalence testing, the computerized medium is constrained and tested against the noncomputerized results, thus setting a one-directional standard for equivalence.
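To make this one-directional framing concrete, the sketch below shows how media equivalence is often evaluated with two one-sided tests (TOST), with the equivalence bound defined around the paper-and-pencil benchmark. The data, variable names, and bound (delta) are purely illustrative and are not drawn from the focal article.

```python
import numpy as np
from scipy import stats

def tost_equivalence(baseline, alternative, delta):
    """Two one-sided tests (TOST): is the mean difference within +/- delta?"""
    n1, n2 = len(baseline), len(alternative)
    diff = np.mean(alternative) - np.mean(baseline)
    se = np.sqrt(np.var(baseline, ddof=1) / n1 + np.var(alternative, ddof=1) / n2)
    df = n1 + n2 - 2  # simple approximation; a Welch correction could be used
    p_lower = 1 - stats.t.cdf((diff + delta) / se, df)  # H0: diff <= -delta
    p_upper = stats.t.cdf((diff - delta) / se, df)      # H0: diff >=  delta
    return diff, max(p_lower, p_upper)  # equivalence claimed if this p < alpha

# Illustrative data only: paper-and-pencil scores serve as the benchmark, and
# the equivalence bound (delta) is defined around that benchmark (the
# one-directional standard discussed above).
rng = np.random.default_rng(1)
paper = rng.normal(50, 10, 200)
computer = rng.normal(51, 10, 200)
diff, p = tost_equivalence(paper, computer, delta=2.0)
print(f"mean difference = {diff:.2f}, TOST p = {p:.3f}")
```

Nothing in the procedure itself privileges one medium; the asymmetry comes entirely from which medium's scores are treated as the benchmark around which the bound is drawn.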
We frequently encounter this baseline assumption within an academic setting that offers online education. The fundamental assumption among faculty who teach in more traditional settings (i.e., in person) is that online courses are less rigorous, are more cheater friendly, and produce inferior learning and academic outcomes. These assumptions, of course, have been refuted by a plethora of research (e.g., Bowen, Lack, Chingos, & Nygren, Reference Bowen, Lack, Chingos and Nygren2012; Colvin et al., Reference Colvin, Champaign, Liu, Zhou, Fredericks and Pritchard2014; Stack, Reference Stack2015). Still, when evaluations or assessments are made, the baseline standard tends to be the traditional format, and attempts to assess equivalence assume that deviation from the traditional is problematic (e.g., if grades happen to be higher in an online class, it is assumed to be because the class is easier).
Instead, much like with online versus on-ground learning, some of the observed effects may be influenced by comfort levels with various technologies and perhaps other individual differences as well. Although assessments and surveys may be optimally designed for various formats of delivery (e.g., desktop computers, mobile phones), we would expect individual variation in comfort with those formats, which may introduce error as a result of these individual differences. Such differences have been highlighted by various national polls and some empirical research, which have often found that comfort and usage differ as a function of various demographic factors (e.g., age, sex, disability status; American Psychological Association, 2013; Anderson & Perrin, Reference Anderson and Perrin2017; Rosen, Whaling, Carrier, Cheever, & Rokkum, Reference Rosen, Whaling, Carrier, Cheever and Rokkum2013). Hence, it may have less to do with the medium, per se, and more to do with the interaction between the medium and individual differences. Although some of the frameworks discussed by Morelli et al. (Reference Morelli, Potosky, Arthur and Tippins2017) offer a starting point for considering technology, they seem to be more focused on technology as the ultimate cause of variability differences than on the possibility that these differences may result from the interaction between people and the technology. Because industrial and organizational (I-O) psychology infuses paradigms and perspectives from many of psychology's subdisciplines, it has the potential to add value to HCI research by focusing on factors such as personality traits, organizational characteristics, and workplace climate and culture to better understand human–computer interactions. Indeed, the information systems literature has studied the effect of the combined fit among the characteristics of a task, the characteristics of the technology used to perform it, and the characteristics of the individuals performing it on the overall outcome: how well the task was done (cf. the unified theory of acceptance and use of technology [UTAUT] proposed by Venkatesh, Morris, Davis, and Davis, Reference Venkatesh, Morris, Davis and Davis2003). I-O psychologists can bring their nuanced understanding of human behavior to extend such models, relating how employees' acceptance and use of technology affects their performance across a variety of organizational contexts and introducing explanatory factors that were not previously considered.
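As an illustration of what modeling this person-by-medium interaction might look like, the following sketch simulates assessment scores in which the medium has no main effect and any score differences arise only through comfort with the medium. The data and variable names are hypothetical and are offered solely to make the argument concrete.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative (simulated) data: the medium itself has no effect on scores;
# score differences arise only through an individual difference
# (self-reported comfort with the assigned medium).
rng = np.random.default_rng(7)
n = 400
medium = rng.choice(["paper", "mobile"], size=n)
comfort = rng.normal(0, 1, n)  # standardized comfort with the assigned medium
score = 50 + 3 * comfort * (medium == "mobile") + rng.normal(0, 5, n)
df = pd.DataFrame({"score": score, "medium": medium, "comfort": comfort})

# Moderated regression: a negligible main effect of medium combined with a
# sizable medium-by-comfort interaction is the pattern described above.
model = smf.ols("score ~ C(medium) * comfort", data=df).fit()
print(model.summary())
```

In such a pattern, treating the medium as the sole source of nonequivalence would misattribute error that actually resides in the interaction between people and the technology.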
Context Matters
We see no issue with Morelli et al.’s (Reference Morelli, Potosky, Arthur and Tippins2017) broad, plural definition of technology as applied to I-O psychology, yet their more constrained evaluation of technology's impact on assessment underscores the issues of more singularly focused technology-related research. We believe it also highlights the critical role of context when studying technology. Traditional technology perspectives, such as the sociotechnical systems perspective, view technology and the human aspect as independent entities, omitting reciprocal/recursive relationships. Identifying technology as an independent, static variable is justified in some contexts, such as in selection and assessment research with legal implications; but in other contexts, treating technology as a standardized variable fails to capture the reciprocal relationships between people and technology and also assumes the adoption of technology to be predictable (Orlikowski & Hofman, Reference Orlikowski and Hofman1997).
Orlikowski, Yates, Okamura, and Fujimoto (Reference Orlikowski, Yates, Okamura and Fujimoto1995) argued for a focus on technology as it is used in particular contexts, given that how organizations interface with technology is unique, evolves over time, and requires constant experimentation and adaptation. Given the fluid, dynamic nature of today's organizations, that argument is still valid. Technology can be customized and utilized differently in different contexts. Because the interaction between people and technology is dynamic and evolving, the usefulness of technology-related I-O research may be short-lived and not necessarily generalizable. We will not only be testing yesterday's technology but also testing yesterday's understanding and utilization of that technology in a particular, perhaps very limited, context.
Interdisciplinary Approaches Are Needed
To expect I-O psychology to address the issue of computerized media is to give I-O psychology a superordinate role in understanding the interactions and transactions between people and their environments. If the field has a real interest in advancing the issues presented by Morelli et al. (Reference Morelli, Potosky, Arthur and Tippins2017), it must work to promote collaboration with others who study, develop, and refine various computerized technologies. Collaboration between I-O psychologists and those who study HCI offers exciting opportunities for both fields. We find it somewhat problematic that more focused attention has not been placed on such a collaborative endeavor, given the importance of both fields to modern society.
We highlight HCI, but there have been other notable approaches to studying and understanding the interface between technology and organizational life, such as sociotechnical systems, materiality, and sociomateriality, to name a few. Indeed, there is a copious amount of research-based literature on task–technology fit dating back to the mid-1990s (cf. Goodhue & Thompson, Reference Goodhue and Thompson1995). In this literature, the interactions among the characteristics of the technology, the individual using it, and the task context in which the individual is using the technology are considered predictors (if not determinants) of the efficacy of technology in meeting its assigned goal. By carefully considering all of these characteristics, and measuring them in a valid and reliable manner, the specific impact of technology in aiding the production of the outcome of interest can be ascertained. In addition, more recent work based on the UTAUT model referenced earlier (Venkatesh et al., Reference Venkatesh, Morris, Davis and Davis2003) provides empirically supported material upon which sophisticated, interdisciplinary theory can be based. Our concern, however, is that such attempts have rarely made their way into I-O psychology research and seem to exist on the periphery of the discipline. The authors’ referencing only the work of Orlikowski and Scott (Reference Orlikowski and Scott2008) from the vast amount of extant literature in this area is reflective of our concern. As Orlikowski and Scott (Reference Orlikowski and Scott2008) noted, when looking at the field of management, only 4.9% of articles directly considered the role of technology in organizational research. Orlikowski and Scott (Reference Orlikowski and Scott2008) further argued that a lack of expertise in or understanding of technology is partly to blame. Thus, in addition to making the consideration of technology more central to I-O psychology research, we argue that an interdisciplinary approach is a potential means to jumpstart the conversation. Collaboration with colleagues from disciplines where human–computer interactions are studied systematically might be a fruitful approach. Such collaborations could include (a) formatting assessments so they are optimized for multiple media (e.g., laptop, smartphone, desktop) and studying the efficacy of different assessment media; (b) minimizing onscreen distractions, ads, and buttons to reduce error in responding to assessment items; (c) ensuring instructions and response anchors remain visible at all times; and (d) collecting time-per-item, -section, and -assessment information and assessing the impact of time on assessment performance (see the sketch below).
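As a minimal sketch of point (d), the code below computes per-item and per-respondent response latencies from timestamped item logs and takes a first look at their relationship with performance. The column names, timestamps, and correctness values are hypothetical placeholders for whatever an assessment platform actually records.

```python
import pandas as pd

# Hypothetical item-level response log with display and response timestamps.
log = pd.DataFrame({
    "respondent": [1, 1, 1, 2, 2, 2],
    "item": ["q1", "q2", "q3", "q1", "q2", "q3"],
    "shown_at": pd.to_datetime(["2024-01-01 09:00:00", "2024-01-01 09:00:42",
                                "2024-01-01 09:01:10", "2024-01-01 09:05:00",
                                "2024-01-01 09:05:55", "2024-01-01 09:06:20"]),
    "answered_at": pd.to_datetime(["2024-01-01 09:00:40", "2024-01-01 09:01:05",
                                   "2024-01-01 09:01:58", "2024-01-01 09:05:50",
                                   "2024-01-01 09:06:18", "2024-01-01 09:07:02"]),
    "correct": [1, 1, 0, 1, 0, 1],
})

# Per-item latency in seconds, plus per-respondent and per-item summaries.
log["latency_s"] = (log["answered_at"] - log["shown_at"]).dt.total_seconds()
per_item = log.groupby("item")["latency_s"].mean()
per_person = log.groupby("respondent")["latency_s"].sum()

# A first-pass look at whether time spent relates to performance.
print(per_item)
print(per_person)
print(log[["latency_s", "correct"]].corr())
```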
Conclusion
The focal article by Morelli et al. (Reference Morelli, Potosky, Arthur and Tippins2017) brings the issue of technology to the forefront of I-O psychology. It offers some insights, but it also highlights the lack of integration of HCI into the field of I-O psychology. I-O psychology should openly acknowledge the need to integrate what has been learned from those conducting basic and applied HCI research and recognize that other disciplines have a substantial amount of expertise and accumulated knowledge that would be of benefit to I-O psychology research and practice. Broadening the conversation to be more inclusive of those from other disciplines would be a great place to start.