
A need to “veto” the “vett” in cybervetting to prevent DEI efforts from DIEing

Published online by Cambridge University Press:  09 September 2022

Aditya Simha*
Affiliation:
University of Wisconsin-Whitewater
Gordon B. Schmidt
Affiliation:
University of Louisiana Monroe
*Corresponding author. Email: simhaa@uww.edu

Type: Commentaries
Copyright: © The Author(s), 2022. Published by Cambridge University Press on behalf of the Society for Industrial and Organizational Psychology

We read the Wilcox et al. (2022) focal article about cybervetting with a good deal of interest and attention. Although we certainly appreciate the nuanced take on cybervetting portrayed in the focal article, in this commentary we would like to add a more forceful perspective on its use. We believe our commentary extends the Wilcox et al. article’s central discussion of the negative effects of cybervetting on job candidates, especially those from diverse groups. Our position is that cybervetting is, in practice, often a highly subjective and potentially discriminatory practice. Specifically, it can impede important diversity, equity, and inclusion (DEI) recruitment and selection initiatives, and existing research suggests that diverse candidates are negatively affected by the use of social media data in selection (Van Iddekinge et al., 2016; Zhang et al., 2020). Cybervetting screens out people who differ from current employees, potentially on non-job-related characteristics. We suggest that organizations instead focus on more cogent and unbiased methods of assessing person–job and person–organization fit and actual job qualifications.

Screening out the wrong people for the wrong reasons

At the very outset, we contend that cybervetting needs to be (cyber)vetoed, as it often forces individuals to comply with imaginary protocols of appropriate behavior. For instance, consider a person attending a party at which he or she has an alcoholic beverage of some sort. A cybervetter who sees a picture from that party may end up harboring negative perceptions of that individual, or an artificial intelligence (AI) algorithm might screen the candidate out for it. As Wilcox et al. (2022) and Zhang et al. (2020) note, online content featuring alcohol use is often viewed negatively by cybervetters. Yet, looked at objectively, unless the job candidate is being recruited by an organization devoted to the temperance movement, how is drinking an alcoholic beverage even pertinent to the hiring process for those who are legally able to drink? The negativity obviously bleeds in from the hidden biases of the cybervetter, and the trouble with cybervetting is that there is no good way of vetting the cybervetter’s own implicit and explicit biases.

Artificial intelligence replicates societal biases

AI-based cybervetting, like all AI, is infused with the prejudices of its creators and of society, and such societal biases are built into the data and information the AI uses (Landers & Behrend, 2022; Noble, 2018). Thus, although AI might seem like a “solution” to this potential bias, it may instead reinforce such biases, just with a veneer of objectivity, because technology is erroneously seen as bias free. Although organizations could draft cybervetting policies that try to minimize bias (Black et al., 2014; Schmidt & O’Connor, 2016), few organizations currently implement such policies for their cybervetting processes, and even fewer consider DEI-related concerns in the process. Expecting this to change soon is unrealistic from a practical perspective.

Solving a “problem” that leads to more problems

An oft-stated rationale for cybervetting is risk management and risk mitigation: hiring managers should cybervet in order to guard their companies’ reputations and ensure no embarrassment befalls them. That may in theory be a good reason; however, it overlooks the fact that employment in the United States is at will in all 50 states but one (Montana), with only limited areas of legal or contractual protection requiring termination for cause (National Conference of State Legislatures, 2008). Thus, in general, employees can be fired for any reason at any time the employer chooses. For example, people have been fired for espousing and expressing their political beliefs or for wearing a “wrong” colored tie. So, if a hired employee does turn out to be a bad fit or performs poorly, firing that individual is not an arduous task, and such a decision is based on actual job behavior rather than subjective impressions of online content. Why bother with cybervetting when, in the case of a bad or poorly fitting hire, the firing process isn’t that difficult to exercise?

Engaging in cybervetting becomes even more difficult when we consider that, within the European Union, the General Data Protection Regulation legally requires obtaining candidates’ consent for the practice (CVCheck, 2019). Legally compliant global companies must therefore contend with different legal frameworks governing the process, and U.S.-based companies risk violating existing legal protections for candidates in other countries. Requiring European Union citizens to consent as part of a hiring process could also lead to systematically screening out people who do not consent or, if they are kept in the process, to comparing candidates for whom very different content is available depending on whether they opted in.

Vetting out those who are different, not those who are unqualified

Several studies have shown that individuals perceived as “different” tend not to advance in the hiring process (Gaddis, 2015; Jarman et al., 2019; Mishel, 2016), owing to the stereotypes and beliefs of the individuals conducting that process. Cybervetting aggravates this phenomenon: after all, as we mentioned earlier, there is usually no clear or formalized process for cybervetting. What prevents a cybervetter from rejecting a candidate because of their membership in a marginalized group, whether the decision is made consciously or unconsciously (“they don’t seem like a good fit here”)? For instance, some individuals harbor negative attitudes toward members of the LGBTQ community. If such job applicants are open about their gender or sexual identities on social media, nothing prevents biased cybervetters from quietly discarding those individuals’ applications, whether deliberately or through unconscious bias. If that transpires, company initiatives to increase representation from marginalized populations will never succeed, and organizations may become liable to discrimination lawsuits.

Similarly, savvy job seekers know how to manage their online presence so that hiring agents engaged in cybervetting come to believe those individuals are better fits for the job, whether that fit is accurate or not. Research suggests that people can successfully influence evaluators with such tactics (Myers et al., 2021; Schroeder & Cavanaugh, 2018). This creates the possibility that inauthentic individuals end up with positions simply because cybervetters believed they were authentic, or because the organization’s AI was gamed by curated content into seeing them as such. Yet again, individuals from marginalized backgrounds and first-generation college backgrounds may lack the knowledge to maintain that level of sophistry in managing their online identities.

Wilcox et al.’s (2022) point about a bias toward homogenization of the workforce signals how cybervetting is detrimental to DEI initiatives. If the hiring agent or the organization engages in cybervetting, chances are good that the prospects of job seekers who do not fit the homogeneous majority in the workplace will be dismal.

Forgoing cybervetting entirely

Wilcox et al. (2022) have provided good considerations for various stakeholders related to cybervetting; however, we simply do not think cybervetting is useful enough to justify the risks it entails for DEI. The “red cup” and “Halloween” examples they allude to are emblematic of cybervetting itself: the process is prone to so many stereotypes and biases. On the surface, cybervetting may sound like a great tool for ensuring good person–job fit; in reality, it is a wolf in sheep’s clothing. All it does is ensure that some job candidates fake their online selves to comply with what hiring agents apparently want, whereas those less savvy at impression management get screened out. Do hiring agents really want individuals who are adept at impression management over authentic individuals?

Another factor to consider is that older generations are starting to retire from the workplace and a younger generation is entering it. The younger generation has shown marked hostility toward the broad concept of cybervetting (e.g., Drouin et al., 2015), suggesting negative applicant reactions. Cybervetting practices may thus deter quality applicants from applying to an organization. We believe that, rather than developing formal cybervetting practices and policies, a much simpler and more elegant solution is for organizations not to use cybervetting at all, because it does not help in its current state.

By prohibiting cybervetting altogether as company policy, organizations and hiring agents will necessarily have to come up with better justifications as to why they’re rejecting candidates. In such cases, implicit biases will have one less avenue to have an outsized effect on job seeker outcomes. And, perhaps, organizational efforts to improve their DEI initiatives will be a lot more fruitful.

To summarize the central point of our commentary, we have suggested that cybervetting is not currently a valuable selection tool, as it causes significant problems for DEI-related concerns. DEI initiatives will continue to suffer if cybervetting continues to flourish or becomes formally institutionalized. We encourage more research on the negative consequences of cybervetting to better understand its effects and to determine whether some types of cybervetting might not have negative consequences for DEI. As it is, in these COVID times the “Great Resignation” prevails. By continuing to engage in cybervetting, companies will simply make their recruiting efforts even more ineffective while working against their own DEI goals.

References

Black, S. L., Stone, D. L., & Johnson, A. F. (2014). Use of social networking websites on applicants’ privacy. Employee Responsibilities and Rights Journal, 27(2), 115–159. https://doi.org/10.1007/s10672-014-9245-2
CVCheck. (2019, August 8). The GDPR and its effect on social media screening. Checkpoint. https://checkpoint.cvcheck.com/the-gdpr-and-its-effect-on-social-media-screening/
Drouin, M., O’Connor, K. W., Schmidt, G. B., & Miller, D. A. (2015). Facebook fired: Legal perspectives and young adults’ opinions on the use of social media in hiring and firing decisions. Computers in Human Behavior, 46, 123–128.
Gaddis, S. M. (2015). Discrimination in the credential society: An audit study of race and college selectivity in the labor market. Social Forces, 93(4), 1451–1479.
Jarman, B. T., Kallies, K. J., Joshi, A. R., Smink, D. S., Sarosi, G. A., Chang, L., Green, J. M., Greenberg, J. A., Melcher, M. L., Nfonsam, V., Ramirez, L. D., Borgert, A. J., & Whiting, J. (2019). Underrepresented minorities are underrepresented among general surgery applicants selected to interview. Journal of Surgical Education, 76(6), e15–e23.
Landers, R. N., & Behrend, T. S. (2022, February 14). Auditing the AI auditors: A framework for evaluating fairness and bias in high stakes AI predictive models. American Psychologist. Advance online publication. http://doi.org/10.1037/amp0000972
Mishel, E. (2016). Discrimination against queer women in the US workforce: A résumé audit study. Socius: Sociological Research for a Dynamic World, 2, 1–13.
Myers, V., Price, J. P. B., Roulin, N., Duval, A., & Sobhani, S. (2021). Job seekers’ impression management on Facebook: Scale development, antecedents, and outcomes. Personnel Assessment and Decisions, 7(1), 102–113. https://scholarworks.bgsu.edu/pad/vol7/iss1/10
National Conference of State Legislatures. (2008, April 15). At-will employment—overview. https://www.ncsl.org/research/labor-and-employment/at-will-employment-overview.aspx
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
Schmidt, G. B., & O’Connor, K. (2016). Legal concerns when considering social media data in selection. In R. N. Landers & G. B. Schmidt (Eds.), Social media in employee selection and recruitment: Theory, practice and current challenges (pp. 265–287). Springer.
Van Iddekinge, C. H., Lanivich, S. E., Roth, P. L., & Junco, E. (2016). Social media for selection? Validity and adverse impact potential of a Facebook-based assessment. Journal of Management, 42(7), 1811–1835.
Wilcox, A., Damarin, A. K., & McDonald, S. (2022). Is cybervetting valuable? Industrial and Organizational Psychology: Perspectives on Science and Practice, 15(3), 315–333.
Zhang, L., Van Iddekinge, C. H., Arnold, J. D., Roth, P. L., Lievens, F., Lanivich, S. E., & Jordan, S. L. (2020). What’s on job seekers’ social media sites? A content analysis and effects of structure on recruiter judgments and predictive validity. Journal of Applied Psychology, 105(12), 1530–1546. https://doi.org/10.1037/apl0000490