Computers in Human Behavior Reports 18 (2025) 100682
https://doi.org/10.1016/j.chbr.2025.100682

When does the "assistant" heuristic work? Examining the effect of AI job titles in tasks with varying criticalities on the use of conversational AI-based services

Mehmet Yanit
University Canada West, 1461 Granville Street, Vancouver, British Columbia, V6Z 0E5, Canada
E-mail address: mehmet.yanit@ucanwest.ca
Received 28 January 2025; Received in revised form 1 May 2025; Accepted 2 May 2025; Available online 5 May 2025
2451-9588/© 2025 The Author. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Keywords: Hierarchical status; Humanlikeness; Competence; Warmth; Task criticality; Algorithm aversion

Abstract
Recent marketing trends involve companies using low-status job titles, such as "assistant" (e.g., Google Home Assistant), to label conversational AI agents. This strategy aims to activate an altruistic "assistant" heuristic and enhance users' willingness to use these AI agents. However, this paper—comprising one pretest (N = 313), three experiments (N = 307, N = 300, N = 308), and one partial least squares structural equation modeling (PLS-SEM) analysis (N = 309)—demonstrates that the effect of this strategy on willingness to use is positive only when the task criticality is high. When the task criticality is not high, higher-hierarchy AI titles (e.g., "manager," "teacher," "analyst") generate greater willingness to use. The research examines three alternative serial mediation pathways—perceived warmth, perceived control, and perceived risks—to test for competing explanations alongside the focal serial mediation through perceived humanlikeness and competence. Across the four studies, the serial mediation via perceived humanlikeness and competence remained robust, even when controlling for alternative pathways and scenario realism (Study 3). The final model indicates that when task criticality is not high, increased perceptions of hierarchical status in conversational AI settings enhance perceived humanlikeness. This, in turn, boosts perceived competence, ultimately increasing users' willingness to use the AI. However, when task criticality is high, the effect reverses—higher-status AI is perceived as less humanlike and less competent, reducing users' willingness to engage with it.

1. Introduction

Humanity is currently witnessing a remarkable proliferation of Artificial Intelligence (AI) technology. The widespread adoption of AI is reshaping how organizations assign hierarchical roles to AI-based service agents, influencing their acceptance and effectiveness in customer interactions. Today, AI plays a central role in engaging with customers through conversational interactions while adopting hierarchy-indicating job titles (e.g., assistant vs. manager), which affect customer satisfaction, brand attitude, and customers' willingness to buy the products recommended by these conversational agents (Jeon, 2022).

In human-provided services, job titles signal the agency and expertise of the service provider, with higher-ranking titles indicating greater authority and domain expertise, which can influence people's perceptions of the competence of those service providers (Grant et al., 2014; Trautt & Bloom, 1982). Similar to human service providers, computerized service providers such as AI are perceived as social actors to whom human social norms apply (Sundar, 2008, pp. 73–100; Sundar & Nass, 2000).
Therefore, in the context of AI, assigning job titles to AI service providers may have comparable attitudinal implications, shaping perceptions of competence and influencing behavioral outcomes, such as the willingness to use AI-based services. Despite the growing prevalence of AI, research on the effect of AI's job titles remains scarce. For example, Jeon (2022) demonstrated that when AI is assigned higher-status job titles, it is perceived as more knowledgeable (more expert) and likable, leading to increased willingness to follow AI-generated recommendations. Similarly, existing research has shown that higher-status AI is more strongly associated with attributes of competence than lower-status AI (Sundar et al., 2017). However, in the context of AI service providers, the underlying mechanism linking AI's job titles to perceptions of competence and, in turn, people's willingness to use remains largely unexplored. More importantly, previous research has not examined contexts in which lower-hierarchy AI may be perceived as more competent and therefore preferred to a greater extent. This study addresses these gaps by investigating the role of perceived humanlikeness as a mediating factor between conversational AI service providers' job titles and their perceived competence, and by examining this mechanism across three levels of task criticality—low, medium, and high—through five studies (including one pretest) using a multimethod approach that combines partial least squares structural equation modeling (PLS-SEM) and experimental designs. The findings show that the effect of a conversational AI's job title on people's willingness to use the AI-based service may depend on the task's criticality.

Moreover, Sundar et al. (2017) and Jeon (2022) previously used experiments to investigate similar mechanisms. This paper's approach differs from previous studies in that it enhances the reliability of the findings by combining experimental research with PLS-SEM (Kurtaliqi et al., 2024). Integrating PLS-SEM into the studies improves predictive accuracy (Gudergan et al., 2025; Richter & Tudoran, 2024) while allowing researchers to address more complex research questions with greater external validity (Richter et al., 2022). Experiments, on the other hand, add greater internal validity to the findings. Therefore, when the two are combined, the validity of the findings increases (Hair et al., 2021).

The status quo in marketing practices for AI agents shows that companies commonly use low-status titles like "assistant" to signal altruism and helpfulness, aiming to enhance willingness to use (Sundar et al., 2017). This approach is almost taken for granted, as the "assistant" title is expected to activate a "helper" heuristic and attract people more towards adopting AI in service settings (Puntoni et al., 2021). This paper delves deeper into this idea and poses the following research question: "When does this 'assistant' heuristic really work?", claiming that conversational AI's job titles may show different effects at different levels of task criticality.
In the pretest, the scenarios to be used in the following studies were compared in terms of participants’ perceived task criticality using oneway between-participants ANOVA and planned contrasts. Results showed that Scenario 1 ("home assistant task") and Scenario 2 ("task management for work") were perceived as the lowest criticality tasks, while Scenario 3 ("tutoring/teaching") was rated as having medium criticality. On the other hand, Scenario 4 ("credit management") was perceived as the highest criticality task. In the structure of the current paper, each study examined the focal conceptual framework at a different level of task criticality. Only Study 1a and 1b explored the same criticality level but employed different methodologies—PLS-SEM and an experiment, respectively. Study 2 built on the findings of Study 1b by eliminating potential biases associated with the "manager" title, instead using a different high-status title ("teacher") in an educational context. Finally, Study 3, which tested the mechanism in a high-task-criticality setting, revealed a reversal of the focal effect. In this study, the conceptual relationships were also controlled for the effect of scenario realism. In summary, this paper argues that the “assistant heuristic” does not always hold, and the effect of job titles on AI’s perceived humanlikeness, competence, and people’s willingness to use an AI is dependent on task criticality. In high criticality tasks, algorithm aversion is heightened (Castelo et al., 2019; Wanner et al., 2021), and people perceive greater risks in delegating the task to an algorithm. To mitigate uncertainties and alleviate the tension, they tend to make quick, self-defending, heuristic attributions about the service provider (Mozafari et al., 2022; Pieters et al., 1998). Heuristically, people become more inclined to search for a “helper” or “ally” from the available information cues in the environment, aiming to increase the chances of positive outcomes (Fiske, 2004, p. 69). Therefore, in high criticality tasks, “assistant” cues may become more attractive to service recipients as they look for mental shortcuts to make attributions with a risk aversive mindset (Kahneman, 2011). In these situations, hierarchical attributions are less relevant, as people tend to allocate less mental effort to making such judgments. Besides ally-seeking in high-criticality tasks, such tasks can also inherently create a sense of vulnerability for human service recipients due to their lack of domain-specific knowledge compared to the service provider. This knowledge gap can lead to a perceived status loss relative to the provider, further influencing their trust and willingness to engage with AI-based services. (Struch & Schwartz, 1989). The current paper argues that, amplified by heightened algorithm aversion due to task criticality, high-status AI in high-criticality tasks is dehumanized as a defensive mechanism by human service recipients to protect their own status and perceived superiority. This effect is particularly strong because, as a non-human entity, a conversational AI is expected to hold a lower status than humans by default (Alicke et al., 1995; Woolley & Risen, 2018; Bonaccio & Dalal, 2006). In other words, in high-criticality tasks, the human service recipient’s perceived status relative to a high-hierarchy conversational AI service provider diminishes. 
To psy­ chologically reconcile this gap, individuals may dehumanize the high-hierarchy AI, reducing its perceived humanlikeness as a subcon­ scious defense response. When the task criticality is not high, people are less threatened and defensive, allowing them greater mental flexibility to evaluate the AI service provider. In low criticality tasks, service recipients are more capable of evaluating the AI analytically rather than heuristically (Blut et al., 2021; Burton et al., 2020; Mozafari et al., 2022). Consequently, in low criticality tasks, people are more familiar with the task and have more mental capacity to analyze the service provider’s agency and expertise beyond readily available cues like the “helper” signal conveyed by assistant titles. They are also more likely to pick up more subtle cues, such as hierarchy information indirectly signaled by job titles. Assigning higher-status (or higher-hierarchy) job titles to conversational AI service providers increases expectations of their agency and expertise, which are subcomponents of competence, in much the same way as in human-provided services (Grant et al., 2014; Teo­ doridis et al., 2019; Trautt & Bloom, 1982). The assignment of greater agency and expertise through higher-hierarchy job titles should enhance AI’s perceived humanlikeness, as these traits are characteristic of “human nature” (Haslam, 2006). However, when these traits are not conveyed, the entity undergoes a process of dehumanization, often referred to as “mechanistic dehumanization” or “approximation to automata” in the existing literature (Loughnan & Haslam, 2007). This perceived shift towards automata can push people’s perceptions of conversational AI toward the machine end of the human-machine con­ tinuum (Yanit et al., 2023), thereby decreasing its perceived human­ likeness. AI’s ability to manifest humanlike qualities is especially important in conversational settings, as the competence of AI in these settings is largely driven by its ability to compensate for the absence of human service providers—a factor that, when fulfilled, leads to greater adoption of AI in services. Therefore, aiming to resolve the contradiction between market practices and the findings of the past research, this paper suggests that the expected positive effects of the “assistant heu­ ristic” by the marketing practices should only be observed in high crit­ icality tasks. In contrast, when the task criticality is not high, people are likely to show a greater inclination towards conversational AIs with higher-hierarchy job titles as the evidence in the existing literature shows. Furthermore, AI’s perceived humanlikeness and competence should serially mediate this relationship in both conditions. This paper also examines three alternative explanations regarding the role of competence: perceived warmth, perceived control, and perceived risks. Prior research (Yanit et al., 2023) indicates that perceived humanlikeness enhances perceived warmth of AI, which may serve as an alternative explanation for the effect of perceived hierarchy in the focal relationship. In high criticality tasks, particularly, service recipients may seek greater relationship closeness and warmth with the service provider, making them more willing to use a service provided by a lower-hierarchy AI (Fiske, 2004, p. 69). 
Additionally, conversational AI is often perceived as eerie (Raff et al., 2024) and low in explicability (Coombs et al., 2021), which may influence users’ perceived control over the AI and their perceived risks from interacting with it, ultimately affecting their willingness to use the service (Kim & McGill, 2011). In 2 M. Yanit Computers in Human Behavior Reports 18 (2025) 100682 high criticality tasks, the importance of control and risk perceptions becomes more pronounced in people’s approach to conversational AI. Specifically, in high criticality tasks, individuals experience a status loss relative to the high-status AI and lose their sense of control over the entity, which may lead to less willingness to use services provided by high-hierarchy AI (Shaffer et al., 2013). These potential explanations are tested through parallel mediation pathways to the focal relationship in this paper. Ultimately, this paper suggests that low-status titles like "assistant" may not always be the most effective strategy for enhancing AI adoption. Higher-status titles are likely to foster greater willingness to use AI, as they enhance the perceived humanlikeness and competence of AI in conversational settings, particularly when the task criticality is not high. The remainder of the paper is structured as follows: related literature is reviewed, and hypotheses are presented. This is followed by a pretest and four main studies testing the focal ideas. The final section discusses the results, along with their theoretical and practical implications, limitations, and conclusions. Despite the limited research on how the perceived hierarchical status of AI agents influences subsequent user behavior (see Table 1), evidence suggests that people may use the same cognitive mechanism to evaluate AI-provided services based on the AI’s assigned job title. This is because computerized service providers, such as AI, are perceived as social actors to which human social norms apply (Sundar, 2008, pp. 73–100; Sundar & Nass, 2000). Consequently, people treat AI similarly to human social actors, adjusting their behavior according to the AI’s hierarchical status (Kramer et al., 2011). This perspective allows the examination of human-AI interactions through the lens of human-human interactions. AI’s hierarchical status, signaled through its job title, may influence its perceived status, similar to human service providers (Berger et al., 1972, 1980), particularly in the context of conversational AI, as it engages customers in direct social connections. For example, Jeon (2022) found greater favoritism towards a conversational AI agent titled as a customer service manager compared to a representative. Sundar et al. (2017) found that people are more socially attracted to higher-status conver­ sational AI, comparing a “companion” robot to an “assistant” robot. Conversational AI agents are direct service providers with humanlike qualities that verbally engage with customers firsthand. Similar to human-provided services (Goodman & Gareis, 1993; Koski et al., 2015), assigning higher-status job titles to conversational AI service providers may lead to more positive attitudes from service recipients. Therefore, limited research in the existing literature shows evidence against the effectiveness of wide use of “assistant” titles by the AI companies to market their products. However, can the countless companies that use low-hierarchy “assistant” title to market their conversational AI agents all be mistaken? 
Or are there specific situations where a conversational AI with an "assistant" title actually increases users' willingness to use it?

2. Related work

2.1. Hierarchical status and task criticality

Within human relationships, social hierarchies naturally evolve (Sidanius & Pratto, 2003). These hierarchies are rooted in people's relative status to one another, a crucial mechanism for fostering cooperative behaviors in dyadic relationships (Groysberg et al., 2011; Koski et al., 2015). Hence, individuals often navigate their place within the hierarchy based on their status in relation to the other member of the dyad. These hierarchies are innate social reflexes developed through evolution, enabling humans to quickly assess and adapt their behavior according to community norms (Boehm & Boehm, 2009; Kraus & Park, 2017; Saaty, 2001). In organizations, hierarchical statuses are communicated through job titles.

Job titles clearly define the chain of command and ensure that the organization operates in harmony, while communicating an agent's expertise and agency to prospective customers (Ahearne et al., 2005). In these settings, service providers with higher hierarchical status are stereotypically believed to possess more expertise (Anderson et al., 2012; Ansoff et al., 2018) and agency (Haslam et al., 2009) compared to those with lower status. For instance, a professor is typically believed to manifest greater agency and expertise compared to a teaching assistant (Koski et al., 2015), and engaging with a service manager instead of a representative suggests an escalation to a higher authority, indicating that the issue is being handled by someone with more expert knowledge (Jeon, 2022). In a similar vein, research indicates that service recipients perceive higher-status agents as more legitimate and are more willing to interact with them (Smith & Tyler, 1996; Tyler & Degoey, 1995, 1996, pp. 331–356), and higher status leads service recipients to acknowledge and follow the higher-status party's suggestions more closely (Fiske, Cuddy, Peter, & Xu, 2024).

Just like human service providers, conversational AI service providers are also assigned job titles. In the context of AI, these titles often function as a marketing strategy rather than a factual reflection of the AI's capabilities. A look at the AI marketplace reveals that while many conversational AI-based service providers perform identical or highly similar tasks, their assigned job titles vary significantly. For instance, in AI-driven psychological therapy, Wysa.AI and JoBot, despite having nearly identical functions, are marketed under different titles: "coach" and "psychologist," respectively. Both assist users with inquiries related to Cognitive-Behavioral Therapy (CBT), relaxation techniques, and various mental health concerns, as described on their respective websites (jobot.ai/about.html; wysa.com/faq). Similarly, in the medical field, LucasAI is promoted as "The Ultimate Medical Assistant" (lucashealth.ai), whereas Biotalk's AI is labeled as the "AI Doctor" (biotalk.ai/doctor/).

Table 1
Related work on the examined conceptual relationships.

Author | Examined Relationship | Context | Result | Journal
Jeon (2022) | Perceived hierarchical status > People's attitudes | AI agents in customer service | As perceived hierarchical status of a service agent increases, people's attitudes toward the agent become more positive. | Journal of Business Research
Sundar et al. (2017) | Perceived hierarchical status > People's attitudes | Robotic companionship in elderly care | People manifested greater usage intentions for higher-status robots with more serious manners. | International Journal of Human-Computer Studies
Loughnan, Haslam, Sutton, & Spencer (2014) | Perceived hierarchical status > Perceived humanlikeness | Human stereotypes against low social hierarchy | People with low socio-economic status tend to be dehumanized by others. | Social Psychology
McLean et al. (2021) | Perceived hierarchical status > Perceived humanlikeness | Voice assistants | People perceive low-status AI (assistant) as less humanlike than high-status AI (companion). | Journal of Business Research
Hu et al. (2021) | Perceived humanlikeness > Perceived competence | Intelligent personal assistants | Increasing humanlike autonomy increased perceived competence of AI. | International Journal of Information Management
Gursoy et al. (2019) | Perceived humanlikeness > Perceived competence | AI service providers | Perceived humanlikeness increases expected effort competencies from an AI agent. | International Journal of Information Management
Chandra et al. (2022) | Perceived competence > User engagement | Conversational AI service providers | Humanlike competencies of AI service providers increase user engagement. | Journal of Management Information Systems
Chi et al. (2024) | Perceived competence > Recommendation intention | Service robots | Perceived competence increases recommendation intention of users. | International Journal of Human-Computer Interaction
Kim and Hur (2023) | Perceived competence > Willingness to use | Conversational AI service providers | Perceived competence increases customers' willingness to use. | Current Psychology
Yoganathan et al. (2021) | Perceived competence > Expected service quality | Service robots | Increased perceived competence elevates expected service quality from robotic service providers. | Tourism Management

A closer look at the literature reveals that existing research largely overlooks the contexts in which low-status AI could be preferred to a greater extent. The current paper asserts that people's attitudes toward conversational AIs of different hierarchical statuses may be influenced by the cognitive systems they engage when evaluating these agents, which are shaped by situational factors (Kahneman, 2011). For example, depending on their situational cognitive capacity, people evaluate services either heuristically—using mental shortcuts, existing knowledge, and first impressions to make quick judgments (Nazlan et al., 2018; Shi et al., 2021)—or analytically, where they invest comparatively greater cognitive effort to evaluate the information and make the most optimal choice (Gervais & Norenzayan, 2012). People's cognitive capacity is limited when they feel threatened or when their attention is significantly diverted by environmental noise or situational pressures (Wang et al., 2021). In such cases, people are more likely to make decisions heuristically, relying on the most readily available cues in the environment.

From this perspective, people may engage different cognitive systems depending on the perceived criticality of an AI-provided service. Task criticality is defined by the extent to which a task's outcome is personally relevant to an individual and the potential harm they may experience if the task is not satisfactorily completed. High-criticality tasks are those that are more likely to elicit an emotional response, require greater personalization, and have more detrimental consequences if they fail (Chanseau et al., 2019; Følstad et al., 2024; Yanco & Drury, 2002). Past research has discussed that people make heuristic attributions to AI service providers in high-criticality tasks (Blut et al., 2021; Burton et al., 2020; Mozafari et al., 2022). In performing high-criticality tasks, people experience greater pressure and vulnerability due to a lack of domain-specific knowledge and uncertainties (Wanner et al., 2021). This anxiety triggers a defensive mechanism, leading to "algorithm aversion" in these domains (Castelo et al., 2019). When people experience algorithm aversion, they make heuristic and biased decisions, avoiding cues that allocate greater agency to AI, such as hierarchy information (Heβler et al., 2022; Jussupow et al., 2020; Longoni et al., 2019). Therefore, in high-criticality tasks, people are more likely to rely on the readily available altruistic signal of the "assistant" title to quickly identify an "ally" from the available cues, rather than making analytic judgments about the hierarchical connotation of the title within the task context, which would also require greater cognitive effort. Supporting this idea, the Stereotype Content Model (SCM) suggests that when people make heuristic judgments of others, such as relying on stereotypes, they primarily focus on the extent to which the other party could be an ally (Fiske, Cuddy, Peter, & Xu, 2024). Some companies may have already capitalized on people's tendency to seek allies in high-criticality tasks. For example, Credit Risk Community (CRiskCo) explicitly markets their credit risk management AI as an ally for individuals involved in credit analysis (CRiskCo, n.d.). Consequently, this paper argues that the effect of perceived hierarchy should diminish in high-criticality tasks, while the altruistic "in-group" connotations of the "assistant" heuristic should be more strongly activated. This, in turn, fosters greater willingness to use conversational AI, as heuristic thinking becomes more dominant in such tasks.

In contrast, when the task criticality is not high, people tend to think more analytically and rely less on heuristics. In such contexts, they have greater cognitive capacity to assess an AI service provider's ability based on the hierarchical status conveyed through its job title, as this status simultaneously signals agency and expertise. Existing research suggests that, by default, when evaluating service providers, people attempt to analytically assess their expertise and tend to rely more on those with greater expertise (Bendapudi & Berry, 1997). Accordingly, this paper suggests that people will evaluate AI service providers based on their hierarchical status to a greater extent in low-criticality tasks, as these tasks encourage more analytical thinking. Formally, this relationship can be expressed as follows.

Hypothesis 1a. (H1a): When task criticality is not high, people will manifest greater willingness to use a high hierarchy (e.g., manager) conversational AI.

Hypothesis 1b. (H1b): When task criticality is high, people will manifest greater willingness to use a low hierarchy (e.g., assistant) conversational AI.

Despite highlighting the behavioral differences of service recipients in tasks of varying criticality, one question remains unanswered: What underlying mechanism drives these differing preferences based on task criticality? A key contribution of this paper is the argument that, because AI is merely a humanlike entity, agency and expertise attributions should influence competence only to the extent that they first affect perceived humanlikeness. The following section explores a serial mediation through perceived humanlikeness and competence depending on the perceived task criticality, as agency and expertise attributions derived from AI's hierarchical status may affect its humanlikeness and, eventually, shape perceptions of competence—given that agency and expertise are subcomponents of humanlike competence (Haslam, 2006).

2.2. AI's perceived hierarchy, humanlikeness, and competence across tasks of varying criticality

In addition to changing their perceived hierarchical status, assigning human role titles (e.g., manager, assistant) to AI agents may also change their perceived humanlikeness. Conversational AI is inherently a humanlike entity, which places it on a machine-human continuum, making it neither completely machine nor completely human (Touré-Tillery & McGill, 2015; Yanit et al., 2023). This paper claims that, depending on the task criticality, people attribute different magnitudes of humanlikeness to conversational AI service providers based on their hierarchical status.

To begin with, when attributing humanlikeness to other entities (including real human beings), people use their own self-perception of humanness as a reference point for comparison (Harris & Fiske, 2011, pp. 123–134), often valuing their own humanness as higher than others' in this comparison (Haslam, 2006; Haslam et al., 2008; Leyens et al., 2000). This bias arises heuristically, as it is easier to cognitively access innate human cues, such as emotionality, in ourselves compared to those in others (Haslam et al., 2008). Existing research also explains this tendency through the "better-than-average" effect, where people tend to attribute more desirable traits, such as humanness, to themselves compared to others (Alicke et al., 1995; Woolley & Risen, 2018; Bonaccio & Dalal, 2006). Moreover, humanness attributions are highly dependent on perceived similarities between the attributor and the attributed, whether in visual, cognitive, or social terms (Waytz et al., 2010). Hierarchical status, which creates disparities in social standing, influences these perceived similarities and leads to attributions of "in-group" or "out-group" membership (Lee & Fiske, 2006). Regarding humanlikeness, perceived hierarchical similarity to another person can lead to stronger "in-group" feelings toward individuals at a similar status level (Leyens et al., 2000). For instance, social status distance has been shown to contribute to the dehumanization of individuals with lower social status by those with higher social status and vice versa (Gwinn et al., 2013; Haslam, 2006; Lammers & Stapel, 2011). Consequently, those perceived as more dissimilar are attributed less humanlikeness (Leyens et al., 2001). People's negative judgments of dissimilar others' humanness are an innate mechanism that fosters negative biases toward those perceived as different (Haslam et al., 2000).
Being only a humanlike entity rather than a real human being, conversational AI service providers are perceptually ascribed not “hu­ manness”, but “humanlikeness” and positioned on a lower rung in the eyes of a human service recipient (Messingschlager and Appel, 2023). Here, the human service recipient attributes AI’s humanlikeness on a scale from zero to their own level of humanness. In doing so, people attribute two different senses of humanlikeness to other conversational AI—either as attributes that are “uniquely human” or as attributes that form the foundation of “human nature,” as humans manifest certain characteristics that are inherent and invisible forces endowed by nature (Gergen, 1991, pp. 19–20). More importantly, existing research dis­ cusses that when “uniquely human” attributes are denied to entities, they are perceptually approximated to “animals,” whereas when “human nature” attributes are denied, they are approximated to “automata” (Loughnan & Haslam, 2007). For example, manifesting deliberate agency and acting on expertise are characteristics of “human nature” (Haslam, 2006). This is because agency and expertise make one’s behavior more independent and, therefore, less likely to conform to pattern expectations (Harris and Fiske, 2011). Not conforming to existing patterns of behavior is a form of manifesting deliberateness, which triggers attributions of “mindedness” and humanness to nonhuman entities (Bering, 2002). Existing research also discusses that those who are high in social hierarchy are more humanized, as gaining status is also associated with “human nature” (Haslam et al., 2008; Loughnan, Haslam, Sutton, & Spencer, 2014; McLean et al., 2021). Therefore, when analytically thought, lowering the hierarchical posi­ tioning of AI conversational agents corresponds to denying them agency in the sense that AI’s actions are now presented as more “induced” rather than driven by independent will (Haslam, 2006). As discussed in the previous section, when a task is not highly critical for a service recipient, they become more inclined to analytically eval­ uate the service provider. This means that in such tasks, people should make greater associations between AI’s hierarchical status and its humanlikeness due to the agency and expertise cues conveyed through job titles. In these cases, conversational AI is perceived as a lower-status entity by default, as analytical thinking makes it easier to broadly recognize its mechanistic nature, which is considered “inferior” to its humanlike nature (Gray et al., 2007). Hence, by increasing the percep­ tions of agency and expertise, ascribing higher-hierarchy job titles should bring the AI closer to the upper limit of humanlikeness, which is defined by the human service recipient based on their self-attributed humanness. Similarly, attributing less agency to AI through lower-status job titles leads to greater objectification of the AI while approximating its social status to that of a mere “tool” rather than a “humanlike entity,” a phenomenon defined in the literature as “mech­ anistic dehumanization” (Haslam, 2006). Therefore, this paper asserts that in low-criticality tasks, hierarchical status should be more acces­ sible through job title information due to analytical thinking tendencies, and in turn, people should perceive a low-hierarchical-status conversa­ tional AI (e.g., home assistant) as less humanlike. 
As task criticality increases, the status dynamics between the customer and the service provider shift in a way that favors the higher hierarchy service provider over the customer. This happens because people often lack the capacity to handle high-criticality tasks alone due to task complexity, limited capabilities, or the risk of negative outcomes (Følstad et al., 2024). These pressures trigger greater algorithm aversion and heuristic thinking, prompting individuals to seek quick support to handle the task favorably. Algorithm aversion is a risk response where people make heuristic decisions by ignoring certain parts of the infor­ mation to create mental shortcuts for quicker decision-making (Wanner et al., 2021). As a result, people become more inclined to approach higher status AI as a “out-group” that needs to be kept under control, as they experience a status loss due to their domain specific vulnerability (Jussupow et al., 2020). Therefore, in such tasks, ascribing greater agency and expertise to conversational AI through high-hierarchy job titles can further expand the status distance between the human service recipient and conversational AI service provider (Haslam & Bain, 2007). For example, assigning a medical advice task (a high-criticality task) to a conversational AI is likely to trigger algorithm aversion. In such a sce­ nario, assigning a “doctor” title, rather than a “medical assistant” title, to a conversational AI may evoke a stronger sense of status loss due to the higher agency and expertise connotations associated with the "doctor" title (vs. "medical assistant") in a domain where the service recipient (patient) is likely to have less agency and expertise. As a result, the service recipient may feel vulnerable and threatened by allowing a dissimilar agent (in terms of its nature and status) to make critical de­ cisions for them. In this case, they may prefer the lower-hierarchy medical assistant, seeking reassurance and comfort by minimizing the status gap between themselves and the AI service provider (Bendapudi & Berry, 1997). As discussed previously, heuristic thinking and biased reliance on one’s own humanness to set the upper limit in such tasks make service recipients more likely to evaluate a lower-hierarchy AI as more humanlike due to status similarities as hierarchical status simi­ larities trigger an “in-group” feeling (Pieters et al., 1998), while dis­ similar others are dehumanized as being “out-groups” (Leyens et al., 2001). Moreover, in high-criticality tasks, dehumanizing the higherhierarchy AI may emerge as a defensive mechanism against the status loss of the human party due to expertise and agency gaps. In this sce­ nario, the human party activates a counter mechanism to protect their own self-attributed humanness from being diminished by a supposedly less humanlike entity (AI). Existing research shows that as a defense mechanism, people may exhibit a greater inclination to dehumanize an entity with a higher status in a domain where they themselves lack expertise and agency (Haslam et al., 2008; Loughnan & Haslam, 2007; Struch & Schwartz, 1989). People may subconsciously dehumanize out-groups or dissimilar others who hold greater status to feel better about themselves (Fein & Spencer, 1997), maintain their status in comparison (Scaillet & Leyens, 2000), and preserve their positive self-image (Branscombe & Wann, 1994; Crocker et al., 1987). 
The ten­ dency to ascribe less humanness to dissimilar others for these reasons is defined as “infrahumanization” (Leyens et al., 2001). Therefore, it is posited that in high-criticality tasks, low-hierarchy conversational AI should be perceived as more humanlike due to people’s inclination to infrahumanize high-hierarchy conversational AI. Research also shows that a high level of humanlikeness in conver­ sational AI presentations, such as digital assistants, is crucial for increasing purchase intentions (Balakrishnan & Dwivedi, 2024; Kim et al., 2022) and willingness to use (Moussawi et al., 2021) as perceived humanlikeness also positively influences perceived competence of AI in conversational settings (Gursoy et al., 2019; Hu et al., 2021). It is because the ability to maintain a coherent conversation is primarily a human quality (Cheng et al., 2022) and is considered a manifestation of competence when the current level of AI technologies is taken into ac­ count (Neururer et al., 2018). As discussed so far, humanlikeness in the context of conversational AI also carries connotations of agency and expertise, which are constructs of competence. The existing research supports this notion showing that the perceived humanlikeness of conversational agents increases their perceived competence, facilitating positive attitudes toward the agent (Chandra et al., 2022; Chi et al., 2024). For example, Yoganathan et al. (2021) found that humanlike robots are perceived as more competent in service roles than their non-humanlike counterparts, leading to higher evalua­ tions of service quality. Similarly, Borau et al. (2021) argued that more humanlike conversational AI algorithms are perceived as more compe­ tent in customer-facing service duties due to their perceived ability to understand unique customer needs. Kim and Hur (2023) demonstrated that more competent AI is more likely to be used compared to less 5 M. Yanit Computers in Human Behavior Reports 18 (2025) 100682 competent counterparts. Other research also showed that the compe­ tence of AI is crucial for its acceptance in various contexts like human-AI teaming (Harris-Watson et al., 2023), recommendation of AI service robots (Chi et al., 2024), and user engagement with conversational AI service providers (Chandra et al., 2022) (see Table 1). Therefore, in high-criticality tasks, lower-hierarchy conversational AI should be perceived as more humanlike and competent, leading to a higher willingness to use it. In contrast, when the task criticality is not high, higher-hierarchy conversational AI should be perceived as more humanlike and competent, ultimately activating a greater willingness to use it. Formally. (McLean et al., 2021), resulting in less perceived warmth from their interactions (Yanit et al., 2023). Thus, AI’s perceived humanlikeness may increase its perceived warmth, and both perceived humanlikeness and warmth may mediate the link between the perceived hierarchical status of conversational AI and people’s willingness to use in a serial mediation relationship as an alternative explanation. 2.3.2. Perceived control and risks Existing literature suggests that the perceived humanlikeness of agents can influence people’s perceptions of risks of interacting with them and their sense of control over these agents (Kim & McGill, 2011). Since AI agents are often seen as black boxes (Castelvecchi, 2016) lacking explicability (Tambe et al., 2019), it’s challenging to understand exactly how AI generates its output. 
This uncertainty contributes to AI being perceived as uncanny, affecting its adoption based on perceived risks and people’s perceived control over them. Belanche et al. (2019) found that risk-averse customers were less willing to interact with machinelike robots compared to humanlike ro­ bots, indicating that low humanlikeness in those agents increases perceived risks from interacting with them. Yan et al. (2022) discuss how higher perceptions of control over AI systems increase the intention to use those systems. The effect of perceived control on conversational AI and the risks posed by adopting AI-provided services may be particularly more potent in high criticality tasks, as greater algorithm aversion is triggered in such contexts. People may perceive less control over and greater risks from higher-hierarchy AI due to its diminished humanlikeness in high criti­ cality tasks. On the other hand, as conversational AI becomes more humanlike through increased perceptions of hierarchical status in low criticality tasks, people may perceive fewer risks in interacting with it and feel a greater sense of control. Thus, serial mediation relationships through perceived humanlikeness, control, and risks may serve as alternative explanations to the focal pathway through competence. Hypothesis 2a. (H2a): When the task criticality is not high, high hi­ erarchy AI will be perceived as more humanlike and thus more competent, and in turn, people’s willingness to use it will be higher. Hypothesis 2b. (H2b): When the task criticality is high, low hierarchy AI will be perceived as more humanlike and thus more competent, and in turn, people’s willingness to use it will be higher. 2.3. Alternative explanations This paper also aims to explore other alternative explanations that may better explain the relationship between the perceived hierarchical status of conversational AI and people’s willingness to use it. For instance, if low-status job titles such as ’assistant’ activate a ’helper’ heuristic in high task criticality, this could impact the perceived warmth of the conversational AI through perceived humanlikeness, subse­ quently influencing people’s willingness to use it. Similarly, low-status job titles may alter the perceived controllability of the AI by the human user and influence the perceived risks it poses due to its perceived humanlikeness, each of which could be relevant variables particularly in high task criticality. 2.3.1. Perceived hierarchy, humanlikeness and perceived warmth of the AI In high task criticality, activating the “helper” heuristic with lowstatus job titles may increase the perceived humanlikeness of conver­ sational AI and impact its perceived warmth in conversational settings. In such contexts, high-status AI is likely to be infrahumanized to the level of an automaton or machine. Mechanical entities are perceived as lacking empathy and emotionally inert (Haslam, 2006). For example, it is stereotypically believed that high-status groups (e.g., businessmen) are less humanlike compared to low-status counterparts, being seen as competent but lacking warmth (Harris & Fiske, 2011, pp. 123–134). Leyens et al. (2001) discuss how dehumanized entities are biasedly perceived as incapable of experiencing complex emotions, which re­ duces their perceived warmth. Machine-like qualities tend to reduce the warmth of conversational AI, increasing its uncanny features (Mori, 1970), whereas human-like qualities enhance it. Cheng et al. 
(2022) argue that humanlike cues are essential for conversational AI to be perceived as warm. People form attitudes toward conversational AI based on its perceived warmth, balancing the machine and human attributes of these agents (Yanit et al., 2023). Research shows that the humanlikeness of automated agents increases their perceived warmth (Kim et al., 2019), improving people’s attitudes toward them (Baek et al., 2022; Christo­ forakos et al., 2021; Zhu & Chang, 2020). For instance, Ahn, Kim, and Sung (2022) demonstrated that as AI’s perceived warmth increases, people rely more on its recommendations. As discussed previously, when the task criticality is not high, higher status titles may increase AI’s perceived humanlikeness approximating it to the level of a real human being. Adding on this notion, past research has shown that perceptual approximation to real human beings en­ hances the warmth of AI (Kim et al., 2019). For instance, when an AI is assigned a low-status title such as "assistant", service recipients are more inclined to perceive the AI as a tool rather than a humanlike agent 3. Research design This paper uses a mixed-method approach, incorporating one partial least squares structural equation modeling (PLS-SEM) study (Study 1a) and four experimental studies (Pretest, Studies 1b, 2, and 3) to test the proposed conceptual framework (see Fig. 1). Adopting a multimethod approach that combines PLS-SEM and experimental studies offers several benefits, as this combination enhances the reliability of findings (Kurtaliqi et al., 2024), improves predictive accuracy (Gudergan et al., 2025; Richter & Tudoran, 2024), and provides greater external validity (Richter et al., 2022). PLS-SEM has several advantages over other SEM methods, as it is more flexible in terms of required sample size and can handle single-item measurements (Cheah et al., 2018; Matheus et al., 2023). While PLS-SEM enhances external validity, experimental studies complement it by strengthening internal validity (Hair et al., 2021). The structure of the research design in the current paper is as follows: First, a pretest (N = 313) was conducted to categorize the scenarios used in the following studies based on their task criticality. Then, Study 1a (low criticality task 1) (N = 309) was conducted to examine the initial model fit indices and assess measurement validity using PLS-SEM, where the effect of the perceived hierarchical status of a home assistant conversational AI on willingness to use through focal and alternative pathways was analyzed to examine hypotheses 1a and 2a. Following the PLS-SEM study, experimental studies were conducted to increase the internal validity of the findings. Study 1b (low criticality task 2) enhances the internal validity of the initial findings from the PLS-SEM study through an experimental design. This study (N = 307) experimentally tests the hypotheses 1a and 2a by comparing a task assistant AI (low status) to a task manager AI (high status). Previous studies have shown that "assistant" and "manager" titles effectively manipulate the hierarchical status of conversational AI due to their connotations with an employee’s position in the organizational 6 M. Yanit Computers in Human Behavior Reports 18 (2025) 100682 Fig. 1. Proposed full model. Note: Dotted lines indicate potential alternative paths. 
Although not shown on the diagram to reduce visual complexity, in studies 1a, 1b, 2, and 3, the model was controlled for the direct effect of perceived hierarchical status on all the second-order mediators, as well as the direct effect of perceived humanlikeness on will­ ingness to use. In study 3, model is also controlled for perceived scenario realism. chart (Jeon, 2022). In the context of the current paper, any job title that conveys greater agency and expertise should create an adequate disparity for people to place a conversational AI at a higher hierarchical level than “assistant” title. Similarly, Study 2 (N = 300) employs an experimental design to examine the focal conceptual framework in a medium criticality task while mitigating potential biases associated with the connotations of the ’manager’ title in the high-status condition. Existing research suggests that people have stereotypical behavioral attributions toward managers, defined as “managerial characteristics,” which include perceived competence, trust, likability, and a lack of perceived self-interest (Pinto et al., 2017). Such heuristic attributions of positive characteristics could introduce biases favoring a conversational AI with a manager title. To address this, Study 2 uses a different context (education) with different role titles, comparing a teaching assistant AI (low status) to a teacher AI (high status) to test hypotheses 1a and 2a. Finally, Study 3 (high criticality) (N = 308) demonstrates a reversed effect, where the assistant heuristic holds with positive effects as intended by current marketing practices and confirms hypotheses 1b and 2b. All studies control for alternative explanations and test the mediating effect of perceived humanlikeness between perceived hier­ archical status and willingness to use. Additionally, study 3 controls for scenario realism within the overall conceptual framework. Studies 1b, 2, and 3 use Hayes’ PROCESS module in SPSS to conduct path analyses (Hayes, 2012) and the results of study 1a has been analyzed on SmartPLS 4.0. They all adopt between-subjects designs, with main effects analyzed using independent samples t-tests or re­ gressions (Studies 1a and 3). Since the pretest involves comparing multiple categories, a one-way between-subjects ANOVA and planned contrasts were used. PROCESS was chosen for path analysis in this paper because it provides a straightforward, regression-based approach for testing mediation and moderation without requiring complex structural modeling. Unlike SEM, which is more suitable for latent variable anal­ ysis, PROCESS is designed for observed variables and automatically conducts bootstrapping to generate robust confidence intervals for in­ direct effects (Clement & Bradley-Garcia, 2022). Additionally, its ease of implementation in SPSS, R, and SAS makes it a practical choice for examining direct and indirect effects without the need for extensive model specification or fit indices. The studies were conducted on Prolific. Prolific was selected because participant backgrounds (ethnicity, nationality, employment, etc.) on this platform vary more widely than on other platforms such as MTurk. Therefore, the results are less likely to be biased due to the imbalanced representation of certain groups. More importantly, Douglas et al. 
(2023) showed that the quality of the data collected on Prolific was significantly higher than on other platforms, as Prolific participants were “more likely to pass various attention checks, provide meaningful answers, follow instructions, remember previously presented informa­ tion, have a unique IP address and geolocation, and work slowly enough to be able to read all the items.” Throughout the studies, the main effect was tested mostly using in­ dependent samples t-tests. G*Power analyses (Faul et al., 2007) indi­ cated that to achieve 99 % statistical power with a medium effect size (Cohen’s d = 0.5) in an independent samples t-test, a minimum of 296 participants was required. Therefore, in each study, 300 participants with at least a 95 % submission acceptance rate were recruited through Prolific. However, except for Study 2, the final sample sizes exceeded this number. According to Prolific, this can occur due to participants timing out or returning their submissions, which affects the total participant count as in some cases, participants complete the survey on the survey platform but do not submit their participation on Prolific or chose not to submit for various reasons (Prolific, 2024). Although their participation was not recorded under "completed assignments," these participants were identified using their participant IDs and were compensated with a bonus payment. Scale items can be found in Table 2. The studies were analyzed using SPSS and SmartPLS 4.0. All data is available at https://doi.org /10.6084/m9.figshare.28674704. Participants have not been asked any demographic questions instead this information has been obtained from participant’s self-reported information to Prolific to ensure survey parsimony. 4. Pretest: categorizing scenarios based on task criticality 4.1. Overview and method The pretest was conducted to categorize scenarios for use in subse­ quent studies based on their perceived task criticality. Tzafestas (2016), had previously categorized task criticality into three levels: ’high,’ ’medium,’ and ’low.’ This study, therefore, introduced different sce­ narios for conversational AI use, aiming to measure task criticality and examine whether significant differences exist among them to categorize 7 Computers in Human Behavior Reports 18 (2025) 100682 M. Yanit Table 2 Scale items. Table 2 (continued ) Scales Items Adopted from Perceived task criticality (only in pretest) The task this AI performs is highly personal. The task this AI performs is highly critical. The task this AI performs is highly private. If the task this AI performs is not perfromed adequately, it may put the user in a risky situation. The task this AI performs requires a personalized touch. In this question, considering your potential interaction with this AI, we want you to assess the status of the given AI in the hierarchy by moving the sliders below. From left to right, assigned social status increases (0 = Lowest Status, 7 = Highest Status). Measured using 7-point likert scale (1 = Strongly disagree, 7 = Strongly agree) and adapted from Følstad et al. (2024). Perceived hierarchical status (also, manipulation check scale) Willingness to use the AI Perceived humanlikeness Perceived warmth I would be willing to use this AI. I would be willing to adopt this AI to run certain tasks. I would be willing to at least try this AI. I think this AI is capable of thinking. I think this AI is able to understand how others are feeling. I think this AI is very similar to real human beings. 
I think this AI is thinking and acting like a real human being. I think this AI is able to tell what is right and what is wrong. This AI feels warm. This AI feels sincere. This AI feels kind. This AI feels friendly. Perceived competence This AI feels competent. This AI feels skillful. This AI feels intelligent. This AI feels capable. Perceived risks It feels risky to use this AI. This AI feels like it would work against me. This AI feels like it does not have good intentions. Scales Perceived control of AI Items Adopted from This AI feels like it would exploit me. I could dominate this AI. I could make this AI obey me. I could control this AI. I could manage this AI. Measured using 7-point likert scale (1 = Strongly disagree, 7 = Strongly agree) and adapted from Kim and McGill (2011) these scenarios into low, medium, and high task criticality categories. The study was conducted on Qualtrics, with participants recruited from Prolific. Qualtrics’ AI tools predicted that the study would take less than a minute to complete, as it included only a single 5-item scale measuring perceived task criticality (see Table 2). To enhance flexibility, participation rewards were calculated for a duration of 4 min. Prolific’s fair payment policy recommends a minimum of £0.60 GBP for a 4-min study (Prolific, 2025). Therefore, a total of 313 participants (Mage = 35.10, 52 % female) took part in the study in exchange for £0.65 GBP. At the end of the study, the median completion time was 1 min and 1 s. Table 3 shows the demographic information for these participants. Scenario introductions that participants had read before answering the scale questions, along with the real-life applications from which they were inspired are given in Table 4. This pretest used a between-subjects design. Participants who confirmed the consent form joined the study and were randomly assigned to one of four scenario conditions. After being introduced to the scenarios, participants were asked to respond to scale items measuring the perceived criticality of the task. The scale items included: ’The task this AI performs is highly personal,’ ’The task this AI performs is highly critical,’ ’The task this AI performs is highly private,’ ’If the task this AI performs is not completed adequately, it may put the user in a risky situation,’ and ’The task this AI performs requires a personalized touch.’ (MCriticality = 4.82, SDCriticality = 1.08, Cronbach’s alpha = 0.73) (Følstad et al., 2024). Measured using 7-point Likert scale (1 = Strongly disagree, 7 = Strongly agree) and adapted from Kraus and Keltner (2013); Kraus et al. (2009). A slider is used to measure perceived hierarchical status because hierarchies are typically mentally visualized in a tiered structure, akin to a "social ladder" (Demczuk et al., 2023), and a slider aligns with this cognitive mapping. Measured using 7-point Likert scale. The first item has been adapted from Yanit et al. (2023), while the remaining items were added to strengthen the scale’s association with usage behavior. Consequently, word-of-mouth (WOM)-related items from the original scale have been replaced to better align with usage-related items the focus on actual engagement with conversational AI and its usage behaviors. Measured using 7-point likert scale (1 = Strongly disagree, 7 = Strongly agree) and adapted from Touré-Tillery and McGill (2015) 4.2. Results To analyze the results, a one-way between-subjects ANOVA and planned contrasts were conducted. 
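For readers who want to reproduce this type of analysis, a minimal sketch is shown below. The file and column names ("pretest.csv", "criticality", "scenario") are hypothetical illustrations rather than the study's actual variable names, the contrast weights mirror the planned comparisons reported next, and the original analysis was run in SPSS.

```python
# Sketch of a one-way between-participants ANOVA on perceived task criticality
# with planned contrasts across the four scenarios (hypothetical column names).
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("pretest.csv")  # one row per participant: scenario (1-4), criticality (1-7)

# Omnibus test: does perceived criticality differ across the four scenarios?
omnibus = smf.ols("criticality ~ C(scenario)", data=df).fit()
print(anova_lm(omnibus, typ=2))

# Cell-means model (no intercept), so each parameter is one scenario mean and a
# planned contrast is simply a weighted combination of those means.
cell_means = smf.ols("criticality ~ C(scenario) - 1", data=df).fit()
planned_contrasts = {
    "Scenario 4 vs. Scenarios 1-3": [-1, -1, -1, 3],
    "Scenario 3 vs. Scenarios 1-2": [-1, -1, 2, 0],
    "Scenario 1 vs. Scenario 2":    [-1, 1, 0, 0],
}
for label, weights in planned_contrasts.items():
    print(label, cell_means.t_test(weights))
```

Because each set of weights sums to zero, the contrast estimate equals the corresponding weighted combination of scenario means, which is how the contrast values reported below can be read.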
First, the results of the one-way ANOVA were significant (MScenario1 = 4.28, SDScenario1 = 1.06, MSce­ nario2 = 4.54, SDScenario2 = 1.03, MScenario3 = 4.98, SDScenario3 = 0.88, MScenario4 = 5.44, SDScenario4 = 0.99, F(3, 309) = 20.66, p < 0.001, Etasquared η2 = 0.17). Additionally, planned contrast analyses revealed that Scenario 4 (credit score management task) was perceived as significantly more critical than the other scenarios (Contrast value = Table 3 Demographics in pretest. Item Gender Female Male Prefer not to say Age 18-30 31-43 44-56 57-69 70-82 No Information Ethnicity Asian Black Mixed White Other No Information Measured using 7-point likert scale (1 = Strongly disagree, 7 = Strongly agree) and adapted from Horchak et al. (2016) Measured using 7-point likert scale (1 = Strongly disagree, 7 = Strongly agree) and adapted from Horchak et al. (2016) Measured using 7-point likert scale (1 = Strongly disagree, 7 = Strongly agree) and adapted from Kim and McGill (2011) 8 Count Percentage 162 137 14 52 % 44 % 4% 128 110 39 18 2 16 41 % 35 % 12 % 6% 1% 5% 41 68 24 149 15 16 13 % 22 % 8% 47 % 5% 5% M. Yanit Computers in Human Behavior Reports 18 (2025) 100682 Table 4 Scenario introductions in pretest. Table 5 Demographics in study 1a. # Scenario Context Real Life Applications 1 Please try to imagine yourself in the following situation: “An AI (Artificial Intelligence)based software is assisting you with tasks in a conversational manner, similar to Google Home and Amazon Alexa.” Please try to imagine yourself in the following situation: “An AI solves various work problems for you in a conversational manner. For example, it sends emails on your behalf based on the instructions you provide.” Please try to imagine yourself in the following situation: “An AI is utilizing machine learning and data analysis techniques to assist you with your courses and exam preparations. This AI provides one-on-one tutoring for students and promptly addresses their questions directly in a conversational manner. Additionally, it involves parents in discussions on ways to enhance students’ performance.” Please try to imagine yourself in the following situation: “An AI is utilizing machine learning and data analysis techniques to provide you with personalized ways to enhance credit scores. This AI provides one-on-one services for the users and makes decisions based on the input it collects from them in a conversational manner.” Conversational home assistants https://home.goo gle.com/https ://alexa.amazon.ca/ 2 3 4 Task management for work Teaching/tutoring Item Gender Female Male Prefer not to say No information Age 18-30 31-43 44-56 57-69 No information Ethnicity Asian Black Mixed White Other No Information https://copilot. microsoft.com/ https://professorai. ai/ Count Percentage 160 139 1 9 52 % 45 % 0% 3% 186 80 27 7 9 60 % 26 % 9% 2% 3% 11 59 24 188 17 1 4% 19 % 8% 61 % 6% 3% 5.2. Method Credit Score Management After agreeing to the consent form, all participants were presented with the same information about a new hypothetical AI tool called peanut.ai. The information stated, “Peanut.ai is an AI (Artificial Intelligence)-based software that can assist users with tasks in a conversational manner, similar to Google Home and Amazon Alexa”. Following this information, participants proceeded to answer the survey questions. The scale questions have been given in Table 2. Descriptive statistics for the scale questions in Study 1a can be found in Table 6. 
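Because these computations were run in SPSS, the following sketch is only an illustrative re-implementation of how the descriptive statistics and Cronbach's alpha values reported in Table 6 can be obtained from item-level responses. The item columns (here comp1 to comp4) and the simulated responses are assumptions, not the study's data.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items (rows = respondents, cols = items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 7-point responses to the four perceived-competence items.
rng = np.random.default_rng(1)
base = rng.normal(4.9, 1.2, size=309)
items = pd.DataFrame({
    f"comp{i}": np.clip(np.round(base + rng.normal(0, 0.6, 309)), 1, 7)
    for i in range(1, 5)
})

score = items.mean(axis=1)  # composite construct score
print(f"M = {score.mean():.2f}, SD = {score.std(ddof=1):.2f}")
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
```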
To analyze the results, partial least squares structural equation modeling (PLS-SEM) was employed. In selecting Partial Least Squares Structural Equation Modeling (PLS-SEM) over Covariance-Based Struc­ tural Equation Modeling (CB-SEM), several considerations were pivotal. PLS-SEM is particularly advantageous when the research objective em­ phasizes prediction and explanation of target constructs, as it focuses on maximizing the explained variance (R2 values) of endogenous variables (Hair et al., 2021). This approach is also well-suited for studies with smaller sample sizes and non-normal data distributions, as it imposes fewer restrictions compared to CB-SEM. Additionally, PLS-SEM effec­ tively handles complex models, including those with formatively measured constructs and single-item measures—such as the indepen­ dent variable in this paper—without encountering the identification issues that often arise in CB-SEM. While CB-SEM is traditionally asso­ ciated with theory testing and provides comprehensive model fit indices, PLS-SEM’s evolving fit measures, such as the Standardized Root Mean Square Residual (SRMR), offer relevant assessments of model quality (Dash & Paul, 2021). Given these factors and considering the explor­ atory nature of this study with all previously untested pathways, PLS-SEM was deemed more appropriate for this study’s objectives and data characteristics. The data were analyzed using SmartPLS 4.0 software. https://www.knapsa ck.ai/ 2.53, Std. Error = 0.39, t(309) = 6.51, p < 0.001). For the remaining scenarios, Scenario 3 (teaching task) was considered more critical than Scenario 1 (home assistance) and Scenario 2 (task management) (Contrast value = 1.15, Std. Error = 0.26, t(309) = 4.45, p < 0.001). However, there was no significant difference between Scenario 1 and Scenario 2 in terms of perceived task criticality (Contrast value = 0.26, Std. Error = 0.17, t(309) = 1.56, p = 0.12). Based on these results, Study 1a and Study 1b used "low" criticality tasks—home assistance and task management, respectively—as scenario manipulations. Study 2 employed the teaching scenario as a "medium" criticality task for the same purpose, while Study 3 investigated the credit management scenario as a "high" criticality task. In the next sec­ tion, the focal conceptual framework is examined by gradually increasing task criticality through studies, and the hypotheses are tested across different levels of criticality. 5.3. Results 5. Study 1a: home assistance – low criticality task 1 5.3.1. Measurement model Following the procedure applied by Matheus et al. (2023), three types of validity were tested: content validity with Cronbach’s alpha scores (Straub et al., 2004), discriminant validity using heterotrait-monotrait (HTMT) ratios per pair of components, and cross loadings (Matheus et al., 2023) convergent validity using composite reliability (CR) and average variance extracted (AVE). Cronbach’s alpha values exceeded the threshold of 0.7, ensuring content validity (Bernardi, 1994) (see Table 6). All AVE scores were above the threshold value of 0.5 (Fornell & Larcker, 1981), and all CR values were above the 5.1. Overview Study 1a has been predicted to take 3 min to complete by Qualtrics. Therefore, 309 participants (Mage = 30.63, 52 % female) recruited from Prolific in exchange for 0.65 GBP. At the end of the study, the median completion time was 2 min 57 s. Detailed demographics of these par­ ticipants can be found in Table 5. 
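Discriminant validity in the measurement model was assessed with heterotrait-monotrait (HTMT) ratios, which SmartPLS reports directly. Purely for illustration, the sketch below re-implements the HTMT formula (Henseler et al., 2015) on hypothetical item columns for two constructs; the column names and simulated data are assumptions rather than the study's item set.

```python
import numpy as np
import pandas as pd

def htmt(df, items_a, items_b):
    """Heterotrait-monotrait ratio of correlations (Henseler et al., 2015)."""
    corr = df[items_a + items_b].corr().abs()
    # Mean correlation between items of different constructs.
    hetero = corr.loc[items_a, items_b].to_numpy().mean()

    def mean_monotrait(items):
        # Mean correlation among items of the same construct (upper triangle).
        block = corr.loc[items, items].to_numpy()
        return block[np.triu_indices_from(block, k=1)].mean()

    return hetero / np.sqrt(mean_monotrait(items_a) * mean_monotrait(items_b))

# Hypothetical item columns for two constructs; replace with real item data.
rng = np.random.default_rng(2)
latent = rng.normal(size=(309, 2))
data = pd.DataFrame({
    **{f"humanlike{i}": latent[:, 0] + rng.normal(0, 0.7, 309) for i in range(1, 6)},
    **{f"comp{i}": latent[:, 1] + rng.normal(0, 0.7, 309) for i in range(1, 5)},
})
value = htmt(data,
             [f"humanlike{i}" for i in range(1, 6)],
             [f"comp{i}" for i in range(1, 5)])
print(f"HTMT(humanlikeness, competence) = {value:.2f}  (< 0.85/0.90 supports discriminant validity)")
```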
The data was collected through a survey on Qualtrics, and no data was eliminated from the dataset as there was no incomplete responses. 9 M. Yanit Computers in Human Behavior Reports 18 (2025) 100682 Table 6 Construct reliability and validity. Perceived humanlikeness Perceived warmth Perceived competence Perceived control Perceived risks Willingness to use M (SD) Cronbach’s alpha Composite reliability (CR) Average variance extracted (AVE) 3.18 (1.43) 4.06 (1.25) 4.86 (1.27) 4.38 (1.14) 3.27 (1.24) 5.15 (1.31) 0.92 0.94 0.94 0.85 0.90 0.93 0.92 0.94 0.94 0.89 0.91 0.94 0.75 0.84 0.84 0.69 0.78 0.88 suggested threshold of 0.7 (Chin, 1998) (see Table 10). These results indicated satisfactory content and convergent validities. Cross loading analyses showed that each item was strongly corre­ lated with its corresponding construct (Matheus et al., 2023). The dif­ ferences between the closest cross-loading values were greater than 0.1 for each item, which eliminated cross-loading related concerns (Jamali, Ayatollahi, & Jafari, 2018). (see Table 7). Moreover, an HTMT value below 0.9 is necessary for each pair of variables to ensure discriminant validity (Henseler et al., 2015). Some researchers suggest that HTMT values within the 0.85–0.90 range may indicate potential concerns regarding discriminant validity (Henseler et al., 2015). In this study, all the HTMT values were below 0.85 with a maximum value of 0.67, which indicated a satisfactory discriminant validity (see Table 8). Herman’s single-factor test on SPSS revealed that the percentage of total variance explained by a single factor was 36.2 %, supporting the absence of common method bias. A maximum Variance Inflation Factor (VIF) of 2.12, which is well below the most conservative threshold of 3 (O’Brien, 2007), further supports the absence of bias. Table 8 HTMT values. 5.3.2. Structural model The analyses of model fit indices showed that standardized root mean squared residual (SRMR) was 0.06. When sample size is greater than 100, an SRMR ≤0.08 indicates an acceptable fit (Cho et al., 2020). Therefore, model fit was ensured. 5.3.3. Main effect Linear regression analyses showed a significant and positive main effect of perceived hierarchy of conversational AI on people’s willing­ ness to use (β = 0.17, SE = 0.03, p < 0.001). This result further sup­ ported hypothesis 1a (H1a). Relationship Heterotrait-monotrait ratio (HTMT) Perceived competence <− > Perceived hierarchy Perceived control <− > Perceived hierarchy Perceived control <− > Perceived competence Perceived humanlikeness <− > Perceived hierarchy Perceived humanlikeness <− > Perceived competence Perceived humanlikeness <− > Perceived control Perceived risks <− > Perceived hierarchy Perceived risks <− > Perceived competence Perceived risks <− > Perceived control Perceived risks <− > Perceived humanlikeness Willingness to use <− > Perceived hierarchy Willingness to use <− > Perceived competence Willingness to use <− > Perceived control Willingness to use <− > Perceived humanlikeness Willingness to use <− > Perceived risks Perceived warmth <− > Perceived hierarchy Perceived warmth <− > Perceived competence Perceived warmth <− > Perceived control Perceived warmth <− > Perceived humanlikeness Perceived warmth <− > Perceived risks Perceived warmth <− > Willingness to use 0.368 0.146 0.142 0.362 0.620 0.111 0.069 0.354 0.284 0.114 0.283 0.601 0.244 0.406 0.455 0.356 0.666 0.224 0.649 0.360 0.560 5.3.4. 
Analyses of the full model To test the path coefficients, the bootstrapping re-sampling method with 500 re-samples at a 95 % confidence interval was used. The results Table 7 Cross loadings. PH1 humanlike1 humanlike2 humanlike3 humanlike4 humanlike5 warm1 warm2 warm3 warm4 comp1 comp2 comp3 comp4 control1 control2 control3 control4 risk1 risk2 risk3 risk4 use1 use2 use3 Perceived hierarchy Perceived humanlikeness Perceived warmth Perceived competence Perceived control Perceived risk Willingness to use 1.000 0.301 0.285 0.273 0.356 0.282 0.246 0.352 0.322 0.342 0.322 0.339 0.364 0.286 0.108 0.127 0.080 0.136 − 0.049 − 0.003 − 0.120 − 0.059 0.264 0.286 0.219 0.347 0.837 0.898 0.877 0.883 0.827 0.513 0.600 0.564 0.530 0.518 0.491 0.572 0.536 0.128 0.006 − 0.109 − 0.030 − 0.082 0.035 − 0.113 − 0.116 0.355 0.410 0.293 0.346 0.489 0.514 0.521 0.552 0.524 0.892 0.910 0.939 0.926 0.587 0.568 0.540 0.602 0.250 0.154 0.075 0.190 − 0.299 − 0.236 − 0.324 − 0.307 0.497 0.521 0.457 0.357 0.531 0.489 0.435 0.500 0.530 0.540 0.599 0.546 0.604 0.899 0.929 0.911 0.934 0.085 0.113 0.030 0.179 − 0.270 − 0.269 − 0.331 − 0.276 0.544 0.548 0.489 0.138 − 0.080 − 0.014 − 0.029 0.030 0.034 0.157 0.178 0.180 0.208 0.186 0.123 0.064 0.110 0.751 0.844 0.878 0.847 − 0.230 − 0.199 − 0.269 − 0.232 0.223 0.239 0.171 − 0.070 0.001 − 0.047 − 0.086 − 0.098 − 0.132 − 0.288 − 0.280 − 0.319 − 0.340 − 0.364 − 0.327 − 0.193 − 0.318 − 0.144 − 0.130 − 0.249 − 0.316 0.848 0.883 0.910 0.879 − 0.396 − 0.381 − 0.405 0.274 0.330 0.304 0.299 0.334 0.361 0.458 0.450 0.480 0.534 0.538 0.540 0.484 0.505 0.157 0.168 0.160 0.244 − 0.391 − 0.330 − 0.394 − 0.353 0.956 0.945 0.912 10 M. Yanit Computers in Human Behavior Reports 18 (2025) 100682 of the path analyses are presented in Table 9. The findings indicated that serial mediation pathways through perceived humanlikeness and warmth, as well as through perceived humanlikeness and competence, were both positive and significant. As controlled pathways, mediating effects of perceived warmth and perceived competence alone were also significant. This result suggests that people attributed warmth and competence to the AI without requiring it to have humanlike qualities. This may be because the AI was positioned as similar to Google Home and Alexa in the manipulation condition, leading to a preobtained perception of humanlikeness based on people’s familiarity and experi­ ences with these real AI assistants. This suggestion requires further investigation. The serial mediations through perceived risks and perceived control as well as the indirect effect through perceived humanlikeness alone, and the direct effect of perceived hierarchical status on willingness to use the AI were found to be nonsignificant (see Table 9). This result indicated full mediations through observed statistically significant in­ direct effects. In the context of this paper’s focal mechanism, this in­ dicates a positive and significant relationship between the perceived hierarchical status of conversational AI and perceived humanlikeness. As the perceived humanlikeness of the AI increases, perceived compe­ tence also rises, ultimately enhancing people’s willingness to use the AIbased service. Therefore, hypothesis 2a (H2a) was supported. study 1a with an experimental design. A total of 300 participants were requested on the Prolific platform for 0.65 GBP for their participation. At the end of the study, 307 participants took part (Mage = 28.9, 56 % fe­ male) and the median completion time was 3 min 38 s. 
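The experimental studies that follow (Studies 1b, 2, and 3) test their manipulation checks and main effects with independent-samples t-tests and Cohen's d, as reported in the results below. A minimal sketch of that analysis is given here; the two response vectors are simulated placeholders rather than the collected willingness-to-use scores.

```python
import numpy as np
from scipy import stats

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d with a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical 7-point willingness-to-use scores per condition (not the study data).
rng = np.random.default_rng(3)
low_status = rng.normal(5.1, 1.4, 154)
high_status = rng.normal(5.5, 1.2, 153)

res = stats.ttest_ind(high_status, low_status)
dof = len(low_status) + len(high_status) - 2
print(f"t({dof}) = {res.statistic:.2f}, p = {res.pvalue:.3f}, "
      f"d = {cohens_d(high_status, low_status):.2f}")
```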
Detailed de­ mographic analyses are provided in Table 10. Participants accessed the questionnaire through the Qualtrics web platform, and no data were excluded from the dataset. 6.2. Method To manipulate the perceived hierarchical status of the AI, partici­ pants were randomly assigned to different conditions. Accordingly, participants were presented with the following information based on their assigned condition, where the hierarchical status of the conversa­ tional AI was manipulated through the assigned job titles of either “task assistant” or “task manager.” For the low-status AI: "Nina is your everyday AI-based task assistant that solves various problems for you in a conversational manner. Below, you will see an interface of Nina", and: "Nina is your everyday AI-based task manager that solves various problems for you in a conversational manner. Below, you will see an interface of Nina" for the high-status AI condition. After receiving the information, participants were presented with an AI interface directly extracted as a screenshot from the Microsoft Copilot interface on Win­ dows OS, as depicted in Appendix A. The image of the interface was taken as a screenshot from Copilot to enhance scenario realism. The account name used to capture the screenshot was concealed in the greeting message of Copilot after the screenshot was taken. Participants saw how the AI greets a user and were shown the recommended quick prompts that could be inquired, such as “Write an ‘out of office’ email reply while I am on vacation.” These prompts were designed to help participants understand the tasks the AI could assist with. As a result, participants did not interact with the AI directly but only observed its interface on an image. Next, participants proceeded to answer the survey questions given in Table 2. Descriptive statistics for the scale questions can be found in Table 11. 6. Study 1b: task assistant versus task manager - low criticality task 2 6.1. Overview The aim of study 1b was to gain internal validity to the findings of Table 9 Direct and indirect effects in the full model. 
Perceived hierarchy →Perceived humanlikeness Perceived hierarchy → Perceived warmth Perceived hierarchy → Perceived competence Perceived hierarchy → Perceived control Perceived hierarchy → Perceived risks Perceived humanlikeness → Perceived warmth Perceived humanlikeness → Perceived competence Perceived hierarchy → Willingness to use Perceived humanlikeness → Willingness to use Perceived warmth → Willingness to use Perceived competence → Willingness to use Perceived control → Willingness to use Perceived risk → Willingness to use Perceived humanlikeness → Perceived control Perceived humanlikeness → Perceived risk Perceived hierarchy → Willingness to use Perceived hierarchy → Perceived humanlikeness → Willingness to use Perceived hierarchy → Perceived warmth → Willingness to use Perceived hierarchy → Perceived competence → Willingness to use Perceived hierarchy → Perceived control → Willingness to use Perceived hierarchy → Perceived risks → Willingness to use Perceived hierarchy → Perceived humanlikeness → Perceived warmth → Willingness to use Perceived hierarchy → Perceived humanlikeness → Perceived competence → Willingness to use Perceived hierarchy → Perceived humanlikeness → Perceived control → Willingness to use Perceived hierarchy → Perceived humanlikeness → Perceived risk → Willingness to use Estimate Lower BootCI Upper BootCI 0.35* 0.16* 0.18* 0.16* − 0.05 0.55* 0.52* 0.24 0.06 0.1 0.04 − 0.16 0.45 0.43 0.45 0.25 0.28 0.28 0.07 0.64 0.59 0.05 0.04 0.19* 0.31* 0.08 − 0.22* − 0.07 − 0.07 0.05 0.02 − 0.05 − 0.08 0.05 0.17 − 0.02 − 0.34 − 0.23 − 0.20 − 0.05 − 0.03 0.15 0.17 0.34 0.46 0.18 − 0.1 0.09 0.06 0.15 0.06 0.03* 0.004 0.07 0.06* 0.02 0.10 0.01 − 0.003 0.04 0.01 − 0.02 0.04 0.04* 0.01 0.07 0.06* 0.03 0.09 − 0.002 − 0.01 0.003 0.005 − 0.004 0.02 6.3. Results 6.3.1. Manipulation check Results of an independent samples t-test revealed that the manipu­ lation was successful (MLow = 2.95, SDLow = 2.05, MHigh = 4.39, SDHigh = 1.59, t(305) = 6.87, p < 0.001, d = 0.8) as people in the high-status AI condition perceived AI to be significantly higher in the hierarchy than people in the low-status AI condition. Table 10 Demographics in study 1b. Item Gender Female Male No Information Age 18-30 31-43 44-56 57-69 70+ Ethnicity Asian Black Mixed White Other No Information Note: An asterisk (*) denotes statistically significant relationships. 11 Count Percentage 171 133 3 56 % 43 % 1% 209 72 21 3 2 68 % 23 % 7% 1% 1% 16 82 28 156 22 3 5% 27 % 9% 51 % 7% 1% M. Yanit Computers in Human Behavior Reports 18 (2025) 100682 Table 11 Descriptive statistics in study 1b. Table 12 Direct and indirect effects in the full model. Variable Mean St. Dev. 
Cronbach’s Alpha Examined effects in the full model Results Willingness to use the AI Perceived humanlikeness Perceived warmth Perceived competence Perceived risks Perceived control 5.30 3.16 4.49 5.07 2.83 4.66 1.29 1.39 1.29 1.16 1.26 1.13 0.92 0.91 0.94 0.92 0.92 0.86 Perceived hierarchy > Perceived humanlikeness β* = 0.46, SE = 0.16, 95 % CI = [0.15, 0.77] β* = 0.47, SE = 0.05, 95 % CI = [0.38, 0.56] β* = 0.40, SE = 0.04, 95 % CI = [0.31, 0.48] β = 0.05, SE = 0.05, 95 % CI = [-0.05, 0.15] β = 0.03, SE = 0.05, 95 % CI = [-0.06, 0.12] β* = 0.12, SE = 0.05, 95 % CI = [0.03, 0.21] β = − 0.02, SE = 0.11, 95 % CI = [-0.23, 0.19] β = 0.25, SE = 0.13, 95 % CI = [-0.001, 0.50] β = 0.19, SE = 0.12, 95 % CI = [-0.04, 0.42] β = − 0.17, SE = 0.15, 95 % CI = [-0.46, 0.12] β = − 0.02, SE = 0.13, 95 % CI = [-0.28, 0.23] β* = 0.19, SE = 0.06, 95 % CI = [0.08, 0.30] β* = 0.43, SE = 0.06, 95 % CI = [0.31, 0.55] β = 0.001, SE = 0.05, 95 % CI = [-0.09, 0.09] β* = − 0.27, SE = 0.04, v CI = [-0.36, − 0.18] β* = 0.05, BootSE = 0.03, 95 % BootCI = [0.01, 0.12] β = 0.05, BootSE = 0.03, 95 % BootCI = [-0.001, 0.12] β = 0.08, BootSE = 0.06, 95 % BootCI = [-0.02, 0.20] β = 0.00001, BootSE = 0.01, 95 % BootCI = [-0.02, 0.01] β = 0.05, BootSE = 0.04, 95 % BootCI = [-0.03, 0.13] β * = 0.04, BootSE = 0.02, 95 % BootCI = [0.01, 0.09] β* = 0.08, BootSE = 0.03, 95 % BootCI = [0.02, 0.16] β = − 0.01, BootSE = 0.01, 95 % BootCI = [-0.02, 0.01] β = 0.00001, BootSE = 0.002, 95 % BootCI = [-0.004, 0.003] Perceived humanlikeness > Perceived warmth Perceived humanlikeness > Perceived competence Perceived humanlikeness > Perceived risks 6.3.2. Main effect The main effect analysis, conducted via an independent samples ttest, indicated that participants in the high-status AI condition were significantly more willing to use the AI (MLow = 5.14, SDLow = 1.36, MHigh = 5.46, SDHigh = 1.19, t(305) = 2.20, p = 0.03, d = 0.25) compared to those in the low-status AI condition. Thus, hypothesis 1a (H1a) was supported one more time. Perceived humanlikeness > Perceived control 6.3.3. Analyses of the full model A custom model was built using PROCESS syntax to analyze the full model. The indirect effect through perceived humanlikeness was sig­ nificant (β = 0.05, BootSE = 0.03, BootCI = [0.01, 0.12]). The findings also revealed significant indirect effects of the serial mediations through perceived humanlikeness and warmth (β = 0.04, BootSE = 0.02, BootCI = [0.01, 0.09]), as well as perceived humanlikeness and competence (β = 0.08, BootSE = 0.03, 95 % BootCI = [0.02, 0.16]). Additionally, the direct effect of perceived hierarchical status became nonsignificant (β = − 0.02, SE = 0.11, p = 0.84, CI = [− 0.23, 0.19]). Therefore, it was concluded that perceived humanlikeness fully mediated the relationship between perceived hierarchical status and willingness to use. There were full serial mediations through humanlikeness and warmth, as well as through humanlikeness and competence. Increased perceptions of hierarchical status also elevated perceived humanlikeness, which in turn increased people’s willingness to use the AI through heightened per­ ceptions of competence and warmth of the AI. These results showed that, as expected in a low-criticality task, higher-hierarchy conversational AI is perceived as more humanlike and more competent. Consequently, people’s willingness to use it becomes higher. Therefore, Hypothesis 2a was supported again. 
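The custom PROCESS model referenced above estimates these indirect effects with bootstrapped confidence intervals. The sketch below mirrors that logic for the focal serial pathway (hierarchical status to humanlikeness to competence to willingness to use) using ordinary least squares regressions and a percentile bootstrap. The simulated data frame, column names, and number of resamples are assumptions, and the code is a simplified stand-in for the full PROCESS specification rather than a reproduction of it.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def serial_indirect(data: pd.DataFrame) -> float:
    """a1 * d21 * b2: hierarchy -> humanlikeness -> competence -> willingness."""
    a1 = smf.ols("humanlike ~ hierarchy", data).fit().params["hierarchy"]
    d21 = smf.ols("competence ~ humanlike + hierarchy", data).fit().params["humanlike"]
    b2 = smf.ols("use ~ competence + humanlike + hierarchy", data).fit().params["competence"]
    return a1 * d21 * b2

# Hypothetical data set (0 = assistant condition, 1 = manager condition).
rng = np.random.default_rng(4)
n = 307
hierarchy = rng.integers(0, 2, n)
humanlike = 0.4 * hierarchy + rng.normal(0, 1, n)
competence = 0.5 * humanlike + rng.normal(0, 1, n)
use = 0.5 * competence + rng.normal(0, 1, n)
df = pd.DataFrame(dict(hierarchy=hierarchy, humanlike=humanlike,
                       competence=competence, use=use))

# Percentile bootstrap of the serial indirect effect (resample count is adjustable).
boot = np.array([
    serial_indirect(df.sample(frac=1, replace=True, random_state=i))
    for i in range(1000)
])
point = serial_indirect(df)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {point:.3f}, 95% BootCI = [{lo:.3f}, {hi:.3f}]")
```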
The indirect effects of serial mediations through humanlikeness and perceived risks (β = − 0.01, BootSE = 0.01, BootCI = [− 0.02, 0.01]) and through humanlikeness and perceived control (β = 0.00001, BootSE = 0.002, BootCI = [− 0.004, 0.003]) did not reach significance (see Table 12). Perceived hierarchy > Perceived risks Perceived humanlikeness > Willingness to use Perceived hierarchy > Willingness to use Perceived hierarchy > Perceived warmth Perceived hierarchy > Perceived competence Perceived hierarchy > Perceived control Perceived warmth > Willingness to use Perceived competence > Willingness to use Perceived control > Willingness to use Perceived risks > Willingness to use Perceived hierarchy > Perceived humanlikeness > Willingness to use Perceived hierarchy > Perceived warmth > Willingness to use Perceived hierarchy > Perceived competence > Willingness to use Perceived hierarchy > Perceived control > Willingness to use Perceived hierarchy > Perceived risks > Willingness to use Perceived hierarchy > Perceived humanlikeness > Perceived warmth > Willingness to use Perceived hierarchy > Perceived humanlikeness > Perceived competence > Willingness to use Perceived hierarchy > Perceived humanlikeness > Perceived risks > Willingness to use Perceived hierarchy > Perceived humanlikeness > Perceived control > Willingness to use Note: An asterisk (*) denotes statistically significant relationships. 6.3.4. Common method bias Herman’s single-factor test on SPSS showed that common method bias was not a concern for these analyses, as the percentage of total variance explained by a single factor was 34.6 %, which is less than the threshold value of 50 %. The results also showed that the maximum vif was 1.95, which is well below the most conservative threshold of 3. Therefore, the possibility of method bias is eliminated. Table 13 Demographics in study 2. Item Gender Female Male Age 18-30 31-43 44-56 57-69 Ethnicity Asian Black Mixed White Other No Information 7. Study 2: teaching assistant versus teacher – medium criticality task 7.1. Overview A total of 300 participants were requested on the Prolific platform for 0.65 GBP for their participation (Mage = 30.9, 54 % female). The median completion time for this study was 3 min and 24 s. Detailed de­ mographic analyses are provided in Table 13. As in Study 1a and 1b, participants accessed the questionnaire through the Qualtrics web platform, and no data were excluded from the dataset. 12 Count Percentage 161 139 54 % 46 % 195 69 27 9 65 % 23 % 9% 3% 11 59 24 188 17 1 4% 20 % 8% 63 % 6% 0% M. Yanit Computers in Human Behavior Reports 18 (2025) 100682 7.2. Method Additionally, the direct effect of perceived hierarchical status became nonsignificant (β = 0.09, SE = 0.12, p = 0.47, CI = [− 0.15, 0.33]). Therefore, the observed significant indirect effects indicated full serial mediations, supporting hypothesis 2a (H2a). Increased percep­ tions of hierarchical status also elevated perceived humanlikeness, which, in turn, engendered greater perceptions of competence (H2a), reduced risks, and elevated perceived control on the AI, respectively, leading to greater willingness to use the AI. However, the indirect effect through perceived humanlikeness alone (β = 0.05, BootSE = 0.04, BootCI = [− 0.003, 0.14]) and the serial in­ direct effect through perceived humanlikeness and warmth (β = 0.01, BootSE = 0.03, BootCI = [− 0.04, 0.07]) did not reach significance (see Table 15). This result has been discussed in the limitations section. 
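The common method bias checks repeated across the studies combine a single-factor variance test with a maximum-VIF criterion. The sketch below approximates Harman's single-factor test with the first unrotated principal component (a common, though not identical, approximation) and computes VIFs with statsmodels; the item data frame and the construct groupings are hypothetical placeholders, not the study's variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import PCA
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical matrix of all survey items for one study.
rng = np.random.default_rng(5)
items = pd.DataFrame(rng.normal(size=(307, 20)),
                     columns=[f"item{i}" for i in range(1, 21)])
z = (items - items.mean()) / items.std(ddof=1)

# Single-factor check: variance explained by one factor should stay below 50 %.
single_factor_share = PCA(n_components=1).fit(z).explained_variance_ratio_[0]
print(f"Variance explained by a single factor: {single_factor_share:.1%}")

# Maximum VIF among construct scores (illustrated here on arbitrary item subsets).
constructs = pd.DataFrame({
    "humanlike": z.iloc[:, 0:5].mean(axis=1),
    "warmth": z.iloc[:, 5:9].mean(axis=1),
    "competence": z.iloc[:, 9:13].mean(axis=1),
    "control": z.iloc[:, 13:17].mean(axis=1),
})
X = sm.add_constant(constructs)
vifs = [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])]
print(f"Maximum VIF = {max(vifs):.2f} (conservative threshold: 3)")
```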
The script for this study was adapted from the website of a real AI software, Professor AI (PAI) (professorai.ai), which provides course assistance to students and educational guidance to parents. PAI was specifically chosen to ensure that a broad range of participants could relate to the scenario, whether as students themselves or as parents, as the scenario is relevant to both groups. Participants were randomly assigned to different conditions and were presented with one of the following pieces of information. For the low-status AI condition, people read: "PAI is an artificial intelligence (AI)-based teaching assistant, uti­ lizing machine learning and data analysis techniques to assist students with their courses and exam preparations. PAI provides one-on-one tutoring for students and promptly addresses their questions directly in a conversational manner. Additionally, it involves parents in discus­ sions on ways to enhance students’ performance". For the high-status AI condition, people read: "PAI is an artificial intelligence (AI)-based teacher, utilizing machine learning and data analysis techniques to assist students with their courses and exam preparations. PAI provides one-onone tutoring for students and promptly addresses their questions directly in a conversational manner. Additionally, it involves parents in discus­ sions on ways to enhance students’ performance". After receiving the information, participants were presented with a screenshot of PAI’s marketing communications from its website. These communications were adapted according to the corresponding condition and are presented in Appendix B. Next, participants proceeded to answer the survey questions given in Table 2. Descriptive statistics for the scale questions in Study 2 can be found in Table 14. 7.3.4. Common method bias Herman’s single-factor test on SPSS revealed that the percentage of total variance explained by a single factor was 37.5 %, which is less than the threshold value of 50 %. Similarly, vif analyses showed that the highest vif factor was 2.15, which is below the threshold of 3. Therefore, the possibility of a common method bias has been eliminated. Table 15 Direct and indirect effects in the full model. Perceived humanlikeness > Willingness to use Perceived hierarchy > Willingness to use Perceived hierarchy > Perceived warmth Perceived hierarchy > Perceived competence Perceived hierarchy > Perceived risks Perceived hierarchy > Perceived control Perceived warmth > Willingness to use Perceived competence > Willingness to use Perceived control > Willingness to use 7.3.3. Analyses of the full model The same custom model from Study 1b was used to analyze the full model using PROCESS syntax. The findings revealed significant indirect effects of the serial mediations through perceived humanlikeness and competence (β = 0.11, BootSE = 0.04, BootCI = [0.03, 0.20]), perceived humanlikeness and control (β = 0.01, BootSE = 0.01, BootCI = [0.0001, 0.02]), and perceived humanlikeness and risks (β = 0.01, BootSE = 0.01, BootCI = [0.001, 0.03]). 
Perceived risks > Willingness to use Perceived hierarchy > Perceived humanlikeness > Willingness to use Perceived hierarchy > Perceived warmth > Willingness to use Perceived hierarchy > Perceived competence > Willingness to use Perceived hierarchy > Perceived control > Willingness to use Perceived hierarchy > Perceived risks > Willingness to use Perceived hierarchy > Perceived humanlikeness > Perceived warmth > Willingness to use Perceived hierarchy > Perceived humanlikeness > Perceived competence > Willingness to use Perceived hierarchy > Perceived humanlikeness > Perceived risks > Willingness to use Perceived hierarchy > Perceived humanlikeness > Perceived control > Willingness to use Table 14 Descriptive statistics in study 2. St. Dev. Cronbach’s Alpha 5.37 3.29 4.56 5.20 2.66 4.34 1.31 1.39 1.34 1.15 1.17 1.13 0.93 0.92 0.95 0.92 0.89 0.87 β* = 0.45, SE = 0.16, 95 % CI = [0.14, 0.77] β* = 0.61, SE = 0.04, 95 % CI = [0.52, 0.69] β* = 0.47, SE = 0.04, 95 % CI = [0.39, 0.55] β* = − 0.14, SE = 0.05, 95 % CI = [-0.24, − 0.05] β* = 0.13, SE = 0.05, 95 % CI = [0.04, 0.22] β = 0.11, SE = 0.06, 95 % CI = [-0.01, 0.23] β = 0.09, SE = 0.12, 95 % CI = [-0.15, 0.33] β = 0.14, SE = 0.12, 95 % CI = [-0.10, 0.38] β = 0.18, SE = 0.11, 95 % CI = [-0.03, 0.40] β = 0.20, SE = 0.14, 95 % CI = [-0.07, 0.47] β = − 0.14, SE = 0.13, 95 % CI = [-0.39, 0.12] β = 0.03, SE = 0.07, 95 % CI = [-0.10, 0.16] β* = 0.51, SE = 0.07, 95 % CI = [0.37, 0.65] β* = 0.13, SE = 0.05, 95 % CI = [0.02, 0.23] β* = − 0.15, SE = 0.06, 95 % CI = [-0.26, − 0.04] β = 0.05, BootSE = 0.04, 95 % BootCI = [-0.003, 0.14] β = 0.04, BootSE = 0.02, 95 % BootCI = [-0.02, 0.05] β = 0.09, BootSE = 0.06, 95 % BootCI = [-0.01, 0.21] β = − 0.02, BootSE = 0.01, 95 % BootCI = [-0.07, 0.01] β = − 0.03, BootSE = 0.03, 95 % BootCI = [-0.1, 0.01] β = 0.01, BootSE = 0.03, 95 % BootCI = [-0.04, 0.07] β* = 0.11, BootSE = 0.04, 95 % BootCI = [0.03, 0.20] β* = 0.01, BootSE = 0.01, 95 % BootCI = [0.001, 0.03] β* = 0.01, BootSE = 0.01, 95 % BootCI = [0.0001, 0.02] Perceived humanlikeness > Perceived control 7.3.2. Main effect An independent samples t-test indicated that participants in the highstatus AI condition were significantly more willing to use the AI (MLow = 5.22, SDLow = 1.40, MHigh = 5.54, SDHigh = 1.18, t(298) = 2.14, p = 0.03, d = 0.25) in comparison to the people in the low-status AI con­ dition. Thus, hypothesis 1a (H1a) was supported one more time. Mean Perceived hierarchy > Perceived humanlikeness Perceived humanlikeness > Perceived competence Perceived humanlikeness > Perceived risks 7.3.1. Manipulation check To test the success of the manipulation, the same method from Study 1b was used (M = 3.46, SD = 1.78). Results of an independent samples ttest revealed that the manipulation was successful (MLow = 3.12, SDLow = 1.82, MHigh = 3.83, SDHigh = 1.7, t(298) = 3.49, p = 0.001, d = 0.41), as people in the high-status AI condition perceived AI to be significantly higher in the hierarchy than people in the low-status AI condition. Willingness to use the AI Perceived humanlikeness Perceived warmth Perceived competence Perceived risks Perceived control Results Perceived humanlikeness > Perceived warmth 7.3. Results Variable Examined effects in the full model Note: An asterisk (*) denotes statistically significant relationships. 13 M. Yanit Computers in Human Behavior Reports 18 (2025) 100682 8. Study 3: credit assistant versus credit analyst – high criticality task Table 17 Descriptive statistics in study 1b. 8.1. 
Overview The study is predicted by Qualtrics to take 3 min. Therefore, 308 participants took part in the study in exchange for 1 GBP (Mage = 37.02, 52 % female). The median completion time was 3 min 46 s. Detailed demographic analyses are provided in Table 16. Participants accessed the questionnaire through the Qualtrics web platform, and no data were excluded from the dataset. After accepting the consent form, participants were randomly assigned to one of two conditions in a between-subjects design. The hierarchical status of the conversational AI was manipulated through assigned job titles, either "credit assistant" (low-status AI) or "credit analyst" (high-status AI). The scenario structure partially followed the format used in Study 2, but this time, no visual stimuli were introduced. For the low-status AI condition, participants read: "Orchid is an artificial intelligence (AI)-based credit assistant, utilizing machine learning and data analysis techniques to provide personalized ways to enhance credit scores. Orchid provides one-on-one services for users and makes de­ cisions based on the input it collects from them in a conversational manner." For the high-status AI condition, participants read: "Orchid is an artificial intelligence (AI)-based credit analyst, utilizing machine learning and data analysis techniques to provide personalized ways to enhance credit scores. Orchid provides one-on-one services for users and makes decisions based on the input it collects from them in a conversational manner." Participants then proceeded to answer the survey questions listed in Table 2. Additionally, they responded to the following question to assess their perception of scenario realism: "I think the information given about this AI is real." Descriptive statistics for the scale questions are presented in Table 17. Table 16 Demographics in study 3. Gender Female Male Age 18-30 31-43 44-56 57-69 70+ Ethnicity Asian Black Mixed White Other No Information 159 149 52 % 48 % 134 117 39 15 3 43 % 38 % 13 % 5% 1% 46 45 28 166 21 2 15 % 15 % 9% 54 % 7% <1 % Cronbach’s Alpha Willingness to use the AI Perceived humanlikeness Perceived warmth Perceived competence Perceived risks Perceived control Perceived scenario realism 4.80 2.87 3.27 4.83 3.15 3.74 4.00 1.45 1.47 1.59 1.40 1.25 1.39 1.58 0.92 0.91 0.96 0.96 0.90 0.91 – 8.3.3. Analyses of the full model The same custom model used in previous studies was applied to analyze the full model in this study. Additionally, each variable was regressed on perceived scenario realism to account for its potential influence. The indirect effect through perceived humanlikeness alone was nonsignificant (β = − 0.08, BootSE = 0.01, BootCI = [− 0.03, 0.02]). How­ ever, there were significant indirect effects of the serial mediations through perceived humanlikeness and warmth (β = − 0.03, BootSE = 0.02, BootCI = [− 0.07, − 0.003]), as well as perceived humanlikeness and competence (β = − 0.05, BootSE = 0.03, 95 % BootCI = [− 0.11, − 0.005]). Moreover, the indirect effect of perceived risks alone was also negative and significant (β = − 0.05, BootSE = 0.03, BootCI = [− 0.11, − 0.01]). This result suggests that people’s risk perceptions did not depend on their perception of the AI’s humanlikeness in high-criticality tasks. Participants attributed higher risks to using a credit analyst AI compared to a credit assistant AI, which subsequently reduced their willingness to use it. 
From this perspective, these findings align with the theorization of this paper, which proposes that people rely on quick heuristic interpretations rather than analytical judgments in highcriticality tasks due to algorithm aversion. Therefore, participants may have been more inclined to assess perceived risks of conversational AI using salient communication of “assistant” title, without considering its hierarchical status. This result has also been discussed in discussion section. Additionally, the direct effect of perceived hierarchical status became nonsignificant (β = − 0.04, SE = 0.1, p = 0.65, CI = [− 0.24, 0.15]). This indicates full serial mediation through humanlikeness and competence, as well as through humanlikeness and warmth. Increased perceptions of hierarchical status decreased perceived humanlikeness, which in turn reduced people’s willingness to use the AI by lowering perceptions of both its competence (H2b) and warmth. This result sup­ ported hypothesis 2b. The indirect effects of serial mediations through humanlikeness and 8.3.1. Manipulation check In this study, the manipulation check analysis controlled for perceived scenario realism. Results from a linear regression analysis confirmed the success of the manipulation, as the credit analyst AI (1) was perceived to have a higher hierarchical status than the credit as­ sistant AI (0) (MLow = 3.65, SDLow = 1.65, MHigh = 4.05, SDHigh = 1.38, β = 0.43, Std. Error = 0.17, t = 2.52, p = 0.012, R2 = 0.049; Adjusted R2 Percentage St. Dev. 8.3.2. Main effect The main effect analysis was also conducted while controlling for the effect of perceived scenario realism. Results of a linear regression test showed that the effect observed in previous studies reversed in this study, as participants reported lower willingness to use the AI in the credit analyst AI (1) condition compared to the credit assistant AI (0) condition (MLow = 5.02, SDLow = 1.30, MHigh = 4.63, SDHigh = 1.54, β = − 0.31, Std. Error = 0.15, t = − 2.07, p = 0.039, R2 = 0.20; Adjusted R2 = 0.20, Std. β = − 0.11). Perceived scenario realism also showed a sig­ nificant positive effect on willingness to use (β = 0.40, Std. Error = 0.05, t = 8.43, p < 0.001, Std. β = 0.43). Thus, hypothesis 1b (H1b) was supported. 8.3. Results Count Mean = 0.043, Std. β = 0.14). The effect of perceived scenario realism was also positive and significant (β = 0.17, Std. Error = 0.05, t = 3.21, p = 0.001, Std. β = 0.18). This indicates that as participants perceived the scenario as more realistic, they were more likely to attribute a higher hierarchical status to the AI in their assigned condition. This is an expected result in high criticality tasks. As previously discussed, in high criticality tasks, people tend to have inflated perceptions of the service provider’s status. 8.2. Method Item Variable 14 M. Yanit Computers in Human Behavior Reports 18 (2025) 100682 perceived risks (β = − 0.01, BootSE = 0.01, BootCI = [− 0.02, 0.01]) and through humanlikeness and perceived control (β = 0.00001, BootSE = 0.002, BootCI = [− 0.004, 0.003]) did not reach significance (see Table 18). Table 18 Direct and indirect effects in the full model. 8.3.4. Common method bias Harman’s single-factor test in SPSS confirmed the absence of com­ mon method bias, as a single factor explained only 44.6 % of the total variance, which is below the 50 % threshold. Additionally, the maximum vif was 2.40, remaining below the most conservative threshold of 3. Thus, the possibility of method bias is effectively ruled out. 
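Because Study 3 controls for perceived scenario realism, its manipulation check and main effect are linear regressions with realism entered as a covariate. A minimal sketch of that specification follows; the data frame, column names, and simulated values are assumptions used only to show the model form.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical Study-3-style data: condition (0 = credit assistant, 1 = credit
# analyst), perceived scenario realism, and willingness to use.
rng = np.random.default_rng(6)
n = 308
condition = rng.integers(0, 2, n)
realism = rng.normal(4.0, 1.6, n)
use = 5.0 - 0.3 * condition + 0.4 * realism + rng.normal(0, 1.2, n)
df = pd.DataFrame(dict(condition=condition, realism=realism, use=use))

# Willingness to use regressed on condition while controlling for realism.
model = smf.ols("use ~ condition + realism", data=df).fit()
print(model.summary().tables[1])  # unstandardized coefficients, SEs, t, p
```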
Perceived humanlikeness > Perceived warmth Examined effects in the full model Results Perceived hierarchy > Perceived humanlikeness β* = − 0.34, SE = 0.15, 95 % CI = [-0.64, − 0.04] β = 0.72, SE = 0.05, 95 % CI = [0.62, 0.81] β* = 0.46, SE = 0.05, 95 % CI = [0.37, 0.55] β* = − 0.11, SE = 0.05, 95 % CI = [-0.21, − 0.01] β* = 0.29, SE = 0.06, 95 % CI = [0.18, 0.40] β = 0.03, SE = 0.05, 95 % CI = [-0.06, 0.13] β = − 0.04, SE = 0.1, 95 % CI = [-0.24, 0.15] β = − 0.1, SE = 0.13, 95 % CI = [-0.35, 0.16] β = − 0.04, SE = 0.13, 95 % CI = [-0.29, 0.21] β* = 0.31, SE = 0.13, 95 % CI = [0.04, 0.57] β = 0.02, SE = 0.15, 95 % CI = [-0.28, 0.32] β* = 0.20, SE = 0.05, 95 % CI = [0.11, 0.29] β* = 0.51, SE = 0.05, 95 % CI = [0.41, 0.61] β = − 0.001, SE = 0.04, 95 % CI = [-0.07, 0.07] β* = − 0.25, SE = 0.05, 95 % CI = [-0.34, − 0.16] β* = 0.38, SE = 0.05, 95 % CI = [0.28, 0.47] β* = 0.12, SE = 0.04, 95 % CI = [0.03, 0.21] β* = 0.23, SE = 0.04, 95 % CI = [0.14, 0.31] β = 0.03, SE = 0.05, 95 % CI = [-0.07, 0.13] β* = − 0.22, SE = 0.05, 95 % CI = [-0.31, − 0.13] β = 0.03, SE = 0.03, 95 % CI = [-0.03, − 0.10] β = − 0.01, BootSE = 0.01, 95 % BootCI = [-0.03, 0.02] β = − 0.01, BootSE = 0.02, 95 % BootCI = [-0.05, 0.02] β = − 0.01, BootSE = 0.04, 95 % BootCI = [-0.1, 0.1] β = 0.00001, BootSE = 0.004, 95 % BootCI = [-0.01, 0.01] β* = − 0.05, BootSE = 0.03, 95 % BootCI = [-0.11, − 0.01] β * = − 0.03, BootSE = 0.02, 95 % BootCI = [-0.07, − 0.003] β* = − 0.05, BootSE = 0.03, 95 % BootCI = [-0.11, − 0.005] β = − 0.01, BootSE = 0.01, 95 % BootCI = [-0.02, 0.0004] β = 0.0001, BootSE = 0.03, 95 % BootCI = [-0.01, 0.01] Perceived humanlikeness > Perceived competence Perceived humanlikeness > Perceived risks Perceived humanlikeness > Perceived control Perceived humanlikeness > Willingness to use 9. Discussion Perceived hierarchy > Willingness to use In this paper, a pretest categorized four scenarios—home assistance, task management for work, teaching, and credit score manage­ ment—into low, medium, and high criticality tasks. Using these sce­ narios, one PLS-SEM study (Study 1a) and three experimental studies (Studies 1b, 2, and 3) demonstrated that when task criticality is low or moderate, high-hierarchy conversational AI (e.g., manager, teacher, analyst) generates greater willingness to use compared to low-hierarchy AI with assistant titles. This effect was serially mediated by perceived humanlikeness and, in turn, perceived competence—where higher perceived humanlikeness due to the AI’s elevated status led to greater perceived competence. Studies 1a, 1b, and 2 tested this mechanism in low and moderate criticality tasks, consistently supporting Hypotheses 1a and 2a. These results align with the findings of Jeon (2022), who tested a similar mechanism in the context of sportswear customer service. Conversely, Study 3, as a unique contribution, revealed that in highcriticality tasks, this effect reversed—low-hierarchy AI was perceived as more humanlike and more competent, leading to greater willingness to use it. This effect remained robust even when controlling for scenario realism to enhance external validity. Accordingly, in Study 3, hypoth­ eses 1b and 2b were confirmed. Table 19 presents a summary of all key findings discussed in the current manuscript and Fig. 2 illustrates the final model proposed in this paper, reflecting the consistent findings across all studies. 
Table 18 (continued ). Examined effects in the full model: Perceived hierarchy > Perceived warmth; Perceived hierarchy > Perceived competence; Perceived hierarchy > Perceived risks; Perceived hierarchy > Perceived control; Perceived warmth > Willingness to use; Perceived competence > Willingness to use; Perceived control > Willingness to use; Perceived risks > Willingness to use; Perceived realism > Perceived humanlikeness; Perceived realism > Perceived warmth; Perceived realism > Perceived competence; Perceived realism > Perceived control; Perceived realism > Perceived risks; Perceived realism > Willingness to use; Perceived hierarchy > Perceived humanlikeness > Willingness to use; Perceived hierarchy > Perceived warmth > Willingness to use; Perceived hierarchy > Perceived competence > Willingness to use; Perceived hierarchy > Perceived control > Willingness to use; Perceived hierarchy > Perceived risks > Willingness to use; Perceived hierarchy > Perceived humanlikeness > Perceived warmth > Willingness to use; Perceived hierarchy > Perceived humanlikeness > Perceived competence > Willingness to use; Perceived hierarchy > Perceived humanlikeness > Perceived risks > Willingness to use; Perceived hierarchy > Perceived humanlikeness > Perceived control > Willingness to use. Note: An asterisk (*) denotes statistically significant relationships.

This study also tested alternative serial mediation pathways involving perceived warmth, control, and risks alongside perceived humanlikeness to assess the robustness of the focal pathway. The focal pathway—through perceived humanlikeness and competence—remained robust across all four studies. Meanwhile, the mediation pathway through perceived warmth was significant only in Studies 1a, 1b, and 3, while the pathways through perceived control and perceived risks were significant only in Study 2 (teaching task). In Study 2, willingness to use was explained significantly by perceived risks and perceived control, but not by perceived warmth. This result suggests that, as a domain-specific characteristic in teaching, people may make more nuanced judgments about a conversational AI teacher's humanlike accuracy (competence) and its ability to correct its mistaken assessments (control and risks) rather than its humanlike warmth (Ding et al., 2023). These results may also stem from the underlying utilitarian motivations behind using AI-based tutoring services. Dual-process models (Kahneman, 2011) suggest that affective responses (e.g., warmth perception) may be less influential when making utilitarian decisions about AI use. Instead, users engage in more cognitive processing, where risk assessment and perceived control over the AI system take precedence. Additionally, Study 3 (credit score management task) revealed a significant mediation pathway through perceived risks. Based on existing research, it was initially proposed that both risk and control perceptions would play a crucial role in high-criticality tasks. The results showed significance only for risk perceptions, and this pathway did not include perceived humanlikeness. This finding likely stems from the weak effect of perceived humanlikeness on risk perceptions. One possible explanation is heightened algorithm aversion in high-criticality tasks, which aligns with people's tendency to rely on heuristic judgments. In such contexts, individuals may have been more risk-focused, overlooking other cues—such as humanlikeness—when evaluating the risks of interacting with a conversational AI in high-criticality tasks (Castelo et al., 2019). Shaffer et al.
(2013) previously discussed that when algorithm aversion is triggered, employing algorithms in service duties could impact people’s internal locus of control. However, in the current paper, no significant effect of perceived control was found in a high-criticality task as ex­ pected. At this point, further research is needed to explore the potential 15 M. Yanit Computers in Human Behavior Reports 18 (2025) 100682 Table 19 Summary of findings. Studies AI title(s) Task Criticality Independent Variable (IV) First Level Mediator(s) (M1) Second Level Mediator(s) (M2) Dependent Variable (DV) Key Finding(s) Pretest – – Scenarios – – Perceived task criticality Study 1a Home assistant Low Perceived hierarchical status Perceived humanlikeness Willingness to use Study 1b Task assistant versus task manager Low Perceived hierarchical status Perceived humanlikeness Study 2 Teaching assistant versus teacher Moderate Perceived hierarchical status Perceived humanlikeness Study 3 Credit assistant versus credit analyst High Perceived hierarchical status Perceived humanlikeness Perceived competence Perceived warmth Perceived risks Perceived control Perceived competence Perceived warmth Perceived risks Perceived control Perceived competence Perceived warmth Perceived risks Perceived control Perceived competence Perceived warmth Perceived risks Perceived control While the home assistant and job task assistant scenarios were found to have similar levels of criticality, the education and credit management scenarios were identified as more critical, with the credit management task rated as the most critical overall. The findings indicated that only the serial mediation pathways through perceived humanlikeness and warmth, as well as through perceived humanlikeness and competence, were both positive and significant. Willingness to use There were full serial mediations only through humanlikeness and warmth, as well as through humanlikeness and competence. Willingness to use The findings revealed significant indirect effects of the serial mediations only through perceived humanlikeness and competence, perceived humanlikeness and control, and perceived humanlikeness and risks. Willingness to use There were significant indirect effects of the serial mediations through perceived humanlikeness and warmth, as well as perceived humanlikeness and competence Fig. 2. Final model. reasons behind the nonsignificant effect of perceived control. It would be valuable to investigate whether other factors, such as user familiarity with AI, might play a role in how individuals perceive and interact with high-criticality tasks. Additionally, exploring the interplay between al­ gorithm aversion, perceived control, and hierarchical status of conver­ sational AI could shed light on why control perceptions were not as impactful in this context. Understanding these dynamics will help refine the model and provide deeper insights into how users approach conversational AI in high-criticality situations. 9.1. Theoretical implications To date, Jeon (2022) has shown that higher-hierarchy job titles correspond to greater perceptions of likeability and knowledgeability, leading to an increased willingness to buy products recommended by a conversational AI-based service provider. Sundar et al. (2017) demon­ strated that higher-hierarchy AIs are more strongly associated with competence-related characteristics. 
Additionally, previous research explored the effect of a specialist heuristic, where the "specialist" title assigned to media agents and computerized systems resulted in height­ ened perceptions of competence compared to roles designated as "generalist" (Kim, 2014; Koh & Sundar, 2010; Nass et al., 1996; Sah 16 M. Yanit Computers in Human Behavior Reports 18 (2025) 100682 et al., 2011; Sundar et al., 2017). Therefore, the relationship between an AI’s hierarchical status, competence, and subsequent consumption behavior has been previously explored in past research, which yielded results that contradict current marketing practices for conversational AI service providers, where "assistant" titles are often used as a marketing strategy. The current research contributes to the existing body of knowledge by first demonstrating that the mediation mechanism is further medi­ ated by perceived humanlikeness. Past research has discussed that humanlikeness is a prerequisite for perceptions of competence in the case of conversational AI (Balakrishnan & Dwivedi, 2024). The current paper builds on this premise by showing that the hierarchical status of conversational AI can also influence its perceived humanlikeness. Secondly, as another contribution, the paper reveals that the service recipients’ greater inclination toward using higher hierarchy conversa­ tional AI-based services is task-dependent, confirming past research findings indicating greater user preference of high hierarchy AI only in low criticality tasks. However, as task criticality increases, people become more willing to use a low-hierarchy "assistant" conversational AI, supporting the validity of current marketing practices. Thirdly, past research has discussed that AI aversion can be reduced by increasing the perceived humanlikeness of AI (Bigman & Gray, 2018; Castelo et al., 2019; Dang & Liu, 2024; Haslam, 2006; Messingschlager & Appel, 2023; Shaffer et al., 2013). As a novel contribution, the current paper suggests that there may be a spiral relationship between algo­ rithmic aversion and perceived humanlikeness. It demonstrates that AI aversion may also decrease the perceived humanlikeness of high-hierarchy conversational AI, triggering an “infrahumanization” process in high criticality tasks. This research also contributes to the Computers Are Social Actors (CASA) paradigm (Nass et al., 1994) by demonstrating how symbolic cues—like hierarchical job titles—can activate social heuristics in user perceptions of AI agents. The findings also offer new insights into interpersonal communication theories (Braithwaite & Schrodt, 2021), suggesting that traditional markers of authority and competence in human interactions remain influential even in AI-mediated contexts. By showing that humanlikeness is a prerequisite for competence attribu­ tion, the study affirms that AI-human interactions are still evaluated through fundamentally human communicative frameworks. This research extends the validity of previous studies that focused on a single domain with a commercial AI service provider (e.g., sportswear customer service) (Jeon, 2022) by demonstrating their applicability across various conversational AI contexts beyond customer service, such as education, home assistantship, and organizational task management. 
Consequently, while Jeon (2022) examined hierarchy primarily through the disparity between "assistant" and "manager" titles, the cur­ rent study broadens this perspective by incorporating other hierarchical job titles, such as "teacher" and "credit analyst," compared to "teaching assistant" and "credit assistant." This approach was intended to minimize potential biases associated with the "manager" title. Furthermore, this study highlights that the previously established mechanism reverses in higher criticality tasks, such as credit score management, where lower-hierarchy "assistant" AI agents are preferred. competence were added in a serial mediation, indicating a full media­ tion effect. This suggests that despite the current boom in AI technolo­ gies, humanlike qualities in conversational AI settings are still a prerequisite for ascribing other characteristics, such as competence, to the AI. Human characteristics are still perceived as superior to machinelike qualities. From this perspective, AI developers may benefit more from simple and cost-effective marketing strategies, such as assigning role titles to their AI agents, than from investing large sums in improving the perceived technical capabilities of AI in conversational services to increase its perceived competence. However, this suggestion also brings an ethical dilemma into the picture. AI is currently not an immaculate entity. It has been shown that even the most advanced AI agents today tend to spread misinformation, as AI is a probabilistic rather than a deterministic entity. Therefore, it sometimes “hallucinates.” (Rae, 2024). This could be especially detri­ mental in high criticality domains that require accurate and factual in­ formation, such as healthcare. The risk of such inaccuracies becomes even more pronounced when AI agents are perceived as more competent and humanlike due to ascribed job titles, potentially misleading users into trusting them more than they should. This raises concerns about transparency and accountability, especially in situations where users’ decisions based on AI’s advice could have significant consequences. Building on the findings of this paper, concrete implications emerge for chatbot design across various applied domains. In mental health services, where emotional sensitivity and trust are critical, assigning lower-status titles (e.g., “psychiatry assistant” rather than “psychiatrist”) may foster user comfort and encourage disclosure in high-stakes con­ versations, such as those involving suicidal ideation. In contrast, in educational settings, higher-status titles (e.g., “teacher” rather than “teaching assistant”) may enhance the AI agent’s perceived authority and competence, thereby improving users’ recep­ tiveness to its guidance. While higher hierarchical titles can enhance perceived credibility in routine or lower-criticality teaching tasks (e.g., providing course information or study tips), caution is warranted in assigning such roles to conversational AI agents. Overpositioning AI as a primary teaching authority may lead to diminished student trust, particularly if the AI fails to meet expectations related to pedagogical nuance, empathy, or adaptability. To address this, transparency regarding the AI’s capabilities and limitations is essential. Positioning AI agents as supplementary rather than primary instructors may help manage expectations while still benefiting from hierarchical framing. 
For instance, although titles such as “Teacher” or “Professor” may enhance perceived competence, a more appropriate designation—such as “Supplementary Professor”—could better reflect the agent’s sup­ portive role and preserve user trust. In customer service, where the criticality of user needs can vary widely, dynamic adjustment of hierarchical framing may help align user expectations with the AI’s role. For instance, a chatbot may be intro­ duced as a “customer service manager” for routine inquiries, but framed as a “customer service assistant” when addressing more complex or urgent issues. Such task-contingent role assignments can help optimize perceived fit and trust in AI across a range of contexts. 9.3. Limitations and future research direction 9.2. Practical implications The current paper has several limitations that should be acknowl­ edged. Firstly, it exclusively focused on investigating conversational AIbased service providers. While humanlikeness and competence of AI may be important prerequisites for people’s willingness to use conver­ sational AI-based service providers in such service domains, it remains unclear how the hierarchical status of AI might manifest an effect in domains where service recipients do not directly interact with it. The weaker hierarchical sense in such service settings may result in weaker influence on perceived humanlikeness and competence. Secondly, despite examining the effect of task criticality across different studies, the current paper does not consider this variable as a This paper holds profound managerial implications. To begin with, it suggests that assigning job titles to conversational AI service providers should not be taken as an arbitrary task. The paper provides guidelines for this process, showing that people prefer high-hierarchy conversa­ tional AI in low-criticality tasks, whereas, in high-criticality tasks, the preference shifts toward low-hierarchy conversational AI. Secondly, the results demonstrated that the hierarchical status of conversational AI can impact its perceived competence by altering the attributed humanlikeness to it. The direct effect of perceived hierar­ chical status was nonsignificant in the studies when humanlikeness and 17 M. Yanit Computers in Human Behavior Reports 18 (2025) 100682 moderator. Therefore, the paper is limited in its ability to compare the effects of different mediation pathways across varying levels of task criticality to resolve their impact on competence statistically. From this perspective, the current paper suggests that task criticality should be considered a compelling moderator for future research. Thirdly, although the current paper divides tasks based on their criticality, it does not differentiate them based on their motivational nature. All the scenario examples investigated in this research had a utilitarian nature, wherein the service recipient was primarily motivated by utilitarian reasons to interact with the given conversational AI service providers such as task management, teaching, home assistantship, and credit score management. Utilitarian domains typically do not necessi­ tate a substantial emotional capacity by the service provider to satis­ factorily implement tasks (Whitley et al., 2018). However, in domains where the service provider is expected to possess emotional skills such as empathy in elder care or affection in the education of children, altruistic job titles such as "assistant" may be preferred to a greater extent. 
Thirdly, although the current paper divides tasks based on their criticality, it does not differentiate them based on their motivational nature. All the scenarios investigated in this research were utilitarian in nature: the service recipient was primarily motivated by utilitarian reasons to interact with the conversational AI service provider, as in task management, teaching, home assistance, and credit score management. Utilitarian domains typically do not require a substantial emotional capacity from the service provider to perform tasks satisfactorily (Whitley et al., 2018). However, in domains where the service provider is expected to possess emotional skills, such as empathy in elder care or affection in children's education, altruistic job titles such as "assistant" may be preferred to a greater extent. That said, task qualities such as being utilitarian or hedonic emerge as a compelling moderator candidate for future research.

The present study's treatment of perceived humanlikeness as a predictor of perceived warmth may also have inflated the results of the mediation analyses. Prior research suggests that perceived warmth is a subcomponent of perceived humanness (Fiske et al., 2024), indicating conceptual overlap between these constructs. As such, their inclusion in a serial mediation model may reflect a degree of construct nesting, which could artificially inflate effect sizes. This potential confound introduced by the serial mediation through perceived humanlikeness and warmth warrants caution. Future research exploring similar mechanisms should take this overlap into account and consider alternative modeling approaches to ensure a more accurate interpretation of mediation effects.

This paper also does not establish clear thresholds for task criticality, which limits the ability to categorize tasks definitively as low, medium, or high criticality. For instance, what this study treats as high criticality may not be universally applicable, and some tasks may introduce an additional layer of criticality beyond those examined here. The paper assumes that task criticality is assessed similarly by all individuals, but this assumption may not hold across different contexts or groups. This further highlights the need for future research to refine the definition and application of task criticality as a moderator, ensuring a more nuanced understanding of how different individuals perceive and assess criticality in various tasks.

Additionally, this study does not address situations where algorithm aversion could completely disrupt the observed mechanisms. For example, in very high-criticality areas such as healthcare, AI could face complete rejection regardless of its job title, which may nullify the effects explored in this research. Future studies should explore such contexts and identify them as boundary conditions for the proposed mechanisms.

While the results of this study provide valuable insights, it is important to acknowledge that some of the main effects yielded small effect sizes (e.g., d = 0.25) and some indirect effects had small β coefficients. Although these results were statistically significant, their practical significance or real-world impact may be limited and not widely applicable. This suggests that the relationships between the variables, while present, may not be as strong or consequential as larger effects would indicate. This could be due to various factors, such as the specific context of the study, the sample characteristics, or the measurement tools used. Small effect sizes may also imply that unmeasured variables or external factors not accounted for in the current model influence the outcomes.
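To put these small effects in perspective, a generic two-sample power calculation, sketched below with statsmodels under standard assumptions (two-sided test, alpha = .05, 80% power), shows the sample size that detecting d = 0.25 would require. This is a back-of-the-envelope illustration, not a re-analysis of the reported data.

# Per-group sample size needed to detect d = 0.25 in a two-group comparison
# with 80% power at alpha = .05 (two-sided). Generic calculation for context.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.25, power=0.80, alpha=0.05)
print(round(n_per_group))  # roughly 250 participants per condition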
Finally, across all four studies, the sample predominantly consisted of a relatively younger age group (18–30), which may have introduced a bias in the findings. This demographic is generally characterized by higher levels of digital literacy and greater familiarity with conversational AI agents, which may influence both their expectations of and interactions with such technologies. Furthermore, this age disparity may also have affected participants' perceptions of task criticality. Tasks that older individuals might consider critical may not be perceived as equally critical by younger participants, who may feel a stronger sense of self-agency and confidence in managing those tasks independently, without the assistance of an AI agent. Future research should take these factors into account and investigate whether the observed effects generalize across broader age groups with varying levels of digital proficiency and task-related perceptions.

9.4. Conclusion

Companies assign "assistant" job titles to conversational AI service providers as a marketing strategy intended to increase public adoption. However, the limited prior research on this question contradicts the approach, showing that higher-hierarchy job titles may be more effective in fostering favorable user behaviors because higher-status titles increase the perceived knowledgeability of the AI as an indicator of competence. Applying a multimethod approach with three experiments and one structural equation modeling analysis, this research uniquely examines the effect of the hierarchical status of conversational AI agents, conveyed by their job titles, across different levels of task criticality. The findings reveal that in high-criticality tasks, people prefer low-hierarchy AI with an "assistant" title (e.g., credit assistant), as suggested by current marketing practices. However, when the task criticality is not high, the preference shifts toward higher-hierarchy AI (e.g., task manager, teacher), confirming the findings of past research. As a second contribution, this research demonstrates that the perceived hierarchical status of AI affects its perceived humanlikeness differently across task criticalities. Based on these findings, the current paper provides practical guidelines for companies deploying conversational AI service providers, highlighting the importance of considering task criticality when choosing hierarchical titles to optimize user adoption and perceptions.

Declaration of generative AI and AI-assisted technologies in the writing process

During the preparation of this work the author used ChatGPT to check the grammar of the written material. After using this tool/service, the author reviewed and edited the content as needed and takes full responsibility for the content of the publication.

Declaration of competing interest

The author declares that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix A. Experiment stimulus in Study 1b

Appendix B. Experiment stimuli in Study 2

High Status.

Low Status.

Data availability

I have shared a link to the data repository from which my data can be downloaded.

References

Ahearne, M., Mathieu, J., & Rapp, A. (2005). To empower or not to empower your sales force? An empirical examination of the influence of leadership empowerment behavior on customer satisfaction and performance. Journal of Applied Psychology, 90(5), 945.
Ahn, J., Kim, J., & Sung, Y. (2022). The effect of gender stereotypes on artificial intelligence recommendations. Journal of Business Research, 141, 50–59.
Alicke, M. D., Klotz, M. L., Breitenbecher, D. L., Yurak, T. J., & Vredenburg, D. S. (1995). Personal contact, individuation, and the better-than-average effect. Journal of Personality and Social Psychology, 68(5), 804.
Anderson, C., Kraus, M. W., Galinsky, A. D., & Keltner, D. (2012). The local-ladder effect: Hierarchical status and subjective well-being. Psychological Science, 23(7), 764–771.
Ansoff, H. I., Kipley, D., Lewis, A. O., Helm-Stevens, R., & Ansoff, R. (2018). Implanting strategic management. Springer.
Baek, T. H., Bakpayev, M., Yoon, S., & Kim, S. (2022). Smiling AI agents: How anthropomorphism and broad smiles increase charitable giving. International Journal of Advertising, 41(5), 850–867.
Balakrishnan, J., & Dwivedi, Y. K. (2024). Conversational commerce: Entering the next stage of AI-powered digital assistants. Annals of Operations Research, 333(2), 653–687.
Belanche, D., Casaló, L. V., & Flavián, C. (2019). Artificial intelligence in FinTech: Understanding robo-advisors adoption among customers. Industrial Management & Data Systems, 119(7), 1411–1430.
Bendapudi, N., & Berry, L. L. (1997). Customers' motivations for maintaining relationships with service providers. Journal of Retailing, 73(1), 15–37.
Berger, J., Cohen, B. P., & Zelditch, J. M. (1972). Status characteristics and social interaction. American Sociological Review, 241–255.
Berger, J., Rosenholtz, S. J., & Zelditch, J. M. (1980). Status organizing processes. Annual Review of Sociology, 6(1), 479–508.
Bering, J. M. (2002). The existential theory of mind. Review of General Psychology, 6(1), 3–24.
Bernardi, R. A. (1994). Validating research results when Cronbach's alpha is below .70: A methodological procedure. Educational and Psychological Measurement, 54(3), 766–775.
Bigman, Y. E., & Gray, K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21–34.
Blut, M., Wang, C., Wünderlich, N. V., & Brock, C. (2021). Understanding anthropomorphism in service provision: A meta-analysis of physical robots, chatbots, and other AI. Journal of the Academy of Marketing Science, 49, 632–658.
Boehm, C. (2009). Hierarchy in the forest: The evolution of egalitarian behavior. Harvard University Press.
Bonaccio, S., & Dalal, R. S. (2006). Advice taking and decision-making: An integrative literature review, and implications for the organizational sciences. Organizational Behavior and Human Decision Processes, 101(2), 127–151.
Borau, S., Otterbring, T., Laporte, S., & Fosso Wamba, S. (2021). The most human bot: Female gendering increases humanness perceptions of bots and acceptance of AI. Psychology and Marketing, 38(7), 1052–1068.
Braithwaite, D. O., & Schrodt, P. (2021). Engaging theories in interpersonal communication: Multiple perspectives. Routledge.
Branscombe, N. R., & Wann, D. L. (1994). Collective self-esteem consequences of outgroup derogation when a valued social identity is on trial. European Journal of Social Psychology, 24(6), 641–657.
Burton, J. W., Stein, M. K., & Jensen, T. B. (2020). A systematic review of algorithm aversion in augmented decision making. Journal of Behavioral Decision Making, 33(2), 220–239.
Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-dependent algorithm aversion. Journal of Marketing Research, 56(5), 809–825.
Castelvecchi, D. (2016). Can we open the black box of AI? Nature News, 538(7623), 20.
Chandra, S., Shirish, A., & Srivastava, S. C. (2022). To be or not to be… human? Theorizing the role of human-like competencies in conversational artificial intelligence agents. Journal of Management Information Systems, 39(4), 969–1005.
Chanseau, A., Dautenhahn, K., Koay, K. L., Walters, M. L., Lakatos, G., & Salem, M. (2019). How does peoples' perception of control depend on the criticality of a task performed by a robot. Paladyn, Journal of Behavioral Robotics, 10(1), 380–400.
Cheah, J. H., Sarstedt, M., Ringle, C. M., Ramayah, T., & Ting, H. (2018). Convergent validity assessment of formatively measured constructs in PLS-SEM: On using single-item versus multi-item measures in redundancy analyses. International Journal of Contemporary Hospitality Management, 30(11), 3192–3210.
Cheng, X., Zhang, X., Cohen, J., & Mou, J. (2022). Human vs. AI: Understanding the impact of anthropomorphism on consumer response to chatbots from the perspective of trust and relationship norms. Information Processing & Management, 59(3), Article 102940.
Chi, R., Zhang, J., & Pan, M. (2024). The effect of anthropomorphic competence-warmth congruence of service robots on recommendation intention. Current Psychology, 43(7), 6570–6583.
Chin, W. W. (1998). The partial least squares approach to structural equation modeling. Modern Methods for Business Research, 295(2), 295–336.
Cho, G., Hwang, H., Sarstedt, M., & Ringle, C. M. (2020). Cutoff criteria for overall model fit indexes in generalized structured component analysis. Journal of Marketing Analytics, 8(4), 189–202.
Christoforakos, L., Gallucci, A., Surmava-Große, T., Ullrich, D., & Diefenbach, S. (2021). Can robots earn our trust the same way humans do? A systematic exploration of competence, warmth, and anthropomorphism as determinants of trust development in HRI. Frontiers in Robotics and AI, 8, Article 640444.
Clement, L. M., & Bradley-Garcia, M. (2022). A step-by-step tutorial for performing a moderated mediation analysis using PROCESS. The Quantitative Methods for Psychology, 18(3), 258–271.
Coombs, C., Stacey, P., Kawalek, P., Simeonova, B., Becker, J., Bergener, K., … Trautmann, H. (2021). What is it about humanity that we can't give away to intelligent machines? A European perspective. International Journal of Information Management, 58, Article 102311.
CRiskCo. (n.d.). Artificial intelligence as an ally in credit analysis. CRiskCo. Retrieved March 27, 2025, from https://www.criskco.com/en/artificial-intelligence-as-an-ally-in-credit-analysis/
Crocker, J., Thompson, L. L., McGraw, K. M., & Ingerman, C. (1987). Downward comparison, prejudice, and evaluations of others: Effects of self-esteem and threat. Journal of Personality and Social Psychology, 52(5), 907.
Dang, J., & Liu, L. (2024). Extended artificial intelligence aversion: People deny humanness to artificial intelligence users. Journal of Personality and Social Psychology. Advance online publication. https://doi.org/10.1037/pspi0000480
Dash, G., & Paul, J. (2021). CB-SEM vs PLS-SEM methods for research in social sciences and technology forecasting. Technological Forecasting and Social Change, 173, Article 121092.
Demczuk, R., Mantovani, D., & Fernandes, D. (2023). Looking up or down on the social ladder: How socioeconomic comparisons shape judgments about monetary and time donations. Journal of Behavioral Decision Making, 36(3), Article e2308.
Ding, L., Li, T., Jiang, S., & Gapud, A. (2023). Students' perceptions of using ChatGPT in a physics class as a virtual tutor. International Journal of Educational Technology in Higher Education, 20(1), 63.
Douglas, B. D., Ewell, P. J., & Brauer, M. (2023). Data quality in online human-subjects research: Comparisons between MTurk, Prolific, CloudResearch, Qualtrics, and SONA. PLoS One, 18(3), Article e0279720.
Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191.
Fein, S., & Spencer, S. J. (1997). Prejudice as self-image maintenance: Affirming the self through derogating others. Journal of Personality and Social Psychology, 73(1), 31.
Fiske, A. P. (2004). Four modes of constituting relationships: Consubstantial assimilation; space, magnitude, time, and force; concrete procedures; abstract symbolism. In N. Haslam (Ed.), Relational models theory: A contemporary overview (pp. 61–146). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Fiske, S. T., Cuddy, A. J., Glick, P., & Xu, J. (2024). A model of (often mixed) stereotype content: Competence and warmth respectively follow from perceived status and competition: Correction to Fiske et al. (2002). Journal of Personality and Social Psychology, 126(3), 412. https://doi.org/10.1037/pspa0000163
Følstad, A., Law, E. L. C., & van As, N. (2024). Conversational breakdown in a customer service chatbot: Impact of task order and criticality on user trust and emotion. ACM Transactions on Computer-Human Interaction, 31(5), 1–52.
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50.
Gergen, K. J. (1991). The saturated self: Dilemmas of identity in contemporary life. New York: Basic Books.
Gervais, W. M., & Norenzayan, A. (2012). Analytic thinking promotes religious disbelief. Science, 336(6080), 493–496.
Goodman, M. D., & Gareis, K. C. (1993). The influence of status on decisions to help. The Journal of Social Psychology, 133(1), 23–31.
Grant, A. M., Berg, J. M., & Cable, D. M. (2014). Job titles as identity badges: How self-reflective titles can reduce emotional exhaustion. Academy of Management Journal, 57(4), 1201–1225.
Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315(5812), 619.
Groysberg, B., Polzer, J. T., & Elfenbein, H. A. (2011). Too many cooks spoil the broth: How high-status individuals decrease group effectiveness. Organization Science, 22(3), 722–737.
Gudergan, S. P., Moisescu, O. I., Radomir, L., Ringle, C. M., & Sarstedt, M. (2025). Special issue editorial: Advanced partial least squares structural equation modeling (PLS-SEM) applications in business research. Journal of Business Research, 188, Article 115087.
Gursoy, D., Chi, O. H., Lu, L., & Nunkoo, R. (2019). Consumers acceptance of artificially intelligent (AI) device use in service delivery. International Journal of Information Management, 49, 157–169.
Gwinn, J. D., Judd, C. M., & Park, B. (2013). Less power = less human? Effects of power differentials on dehumanization. Journal of Experimental Social Psychology, 49(3), 464–470.
Hair, J. F., Jr., Hult, G. T. M., Ringle, C. M., Sarstedt, M., Danks, N. P., & Ray, S. (2021). An introduction to structural equation modeling. In Partial least squares structural equation modeling (PLS-SEM) using R: A workbook (pp. 1–29).
Harris, L. T., & Fiske, S. T. (2011). Perceiving humanity or not: A social neuroscience approach to dehumanized perception. In Social neuroscience: Toward understanding the underpinnings of the social mind.
Harris-Watson, A. M., Larson, L. E., Lauharatanahirun, N., DeChurch, L. A., & Contractor, N. S. (2023). Social perception in Human-AI teams: Warmth and competence predict receptivity to AI teammates. Computers in Human Behavior, 145, Article 107765.
Haslam, N. (2006). Dehumanization: An integrative review. Personality and Social Psychology Review, 10(3), 252–264.
Haslam, N., & Bain, P. (2007). Humanizing the self: Moderators of the attribution of lesser humanness to others. Personality and Social Psychology Bulletin, 33(1), 57–68.
Haslam, S. A., Jetten, J., Postmes, T., & Haslam, C. (2009). Social identity, health and well-being: An emerging agenda for applied psychology. Applied Psychology: An International Review, 58(1), 1–23.
Haslam, N., Loughnan, S., Kashima, Y., & Bain, P. (2008). Attributing and denying humanness to others. European Review of Social Psychology, 19(1), 55–85.
Haslam, N., Rothschild, L., & Ernst, D. (2000). Essentialist beliefs about social categories. British Journal of Social Psychology, 39(1), 113–127.
Hayes, A. F. (2012). PROCESS: A versatile computational tool for observed variable mediation, moderation, and conditional process modeling.
Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43, 115–135.
Heßler, P. O., Pfeiffer, J., & Hafenbrädl, S. (2022). When self-humanization leads to algorithm aversion: What users want from decision support systems on prosocial microlending platforms. Business & Information Systems Engineering, 64(3), 275–292.
Horchak, O. V., Giger, J. C., & Garrido, M. V. (2016). Action contribution to competence judgments: The use of the journey schema. Frontiers in Psychology, 7, 448.
Hu, Q., Lu, Y., Pan, Z., Gong, Y., & Yang, Z. (2021). Can AI artifacts influence human cognition? The effects of artificial autonomy in intelligent personal assistants. International Journal of Information Management, 56, Article 102250.
Jamali, J., Ayatollahi, S. M. T., & Jafari, P. (2018). The effect of cross-loading on measurement equivalence of psychometric multidimensional questionnaires in MIMIC model: A simulation study. Materia Socio-Medica, 30(2), 121.
Jeon, Y. A. (2022). Let me transfer you to our AI-based manager: Impact of manager-level job titles assigned to AI-based agents on marketing outcomes. Journal of Business Research, 145, 892–904.
Jussupow, E., Benbasat, I., & Heinzl, A. (2020). Why are we averse towards algorithms? A comprehensive literature review on algorithm aversion.
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
Kim, K. J. (2014). Can smartphones be specialists? Effects of specialization in mobile advertising. Telematics and Informatics, 31(4), 640–647.
Kim, W. B., & Hur, H. J. (2023). What makes people feel empathy for AI chatbots? Assessing the role of competence and warmth. International Journal of Human-Computer Interaction, 1–14.
Kim, J., Kang, S., & Bae, J. (2022). Human likeness and attachment effect on the perceived interactivity of AI speakers. Journal of Business Research, 144, 797–804.
Kim, S., & McGill, A. L. (2011). Gaming with Mr. Slot or gaming the slot machine? Power, anthropomorphism, and risk perception. Journal of Consumer Research, 38(1), 94–107.
Kim, S. Y., Schmitt, B. H., & Thalmann, N. M. (2019). Eliza in the uncanny valley: Anthropomorphizing consumer robots increases their perceived warmth but decreases liking. Marketing Letters, 30, 1–12.
Koh, Y. J., & Sundar, S. S. (2010). Heuristic versus systematic processing of specialist versus generalist sources in online media. Human Communication Research, 36(2), 103–124.
Koski, J. E., Xie, H., & Olson, I. R. (2015). Understanding social hierarchies: The neural and psychological foundations of status perception. Social Neuroscience, 10(5), 527–550.
Krämer, N. C., Eimler, S., Von Der Pütten, A., & Payr, S. (2011). Theory of companions: What can theoretical models contribute to applications and understanding of human-robot interaction? Applied Artificial Intelligence, 25(6), 474–502.
Kraus, M. W., & Keltner, D. (2013). Social class rank, essentialism, and punitive judgment. Journal of Personality and Social Psychology, 105(2), 247.
Kraus, M. W., & Park, J. W. (2017). The structural dynamics of social class. Current Opinion in Psychology, 18, 55–60.
Kraus, M. W., Piff, P. K., & Keltner, D. (2009). Social class, sense of control, and social explanation. Journal of Personality and Social Psychology, 97(6), 992.
Kurtaliqi, F., Miltgen, C. L., Viglia, G., & Pantin-Sohier, G. (2024). Using advanced mixed methods approaches: Combining PLS-SEM and qualitative studies. Journal of Business Research, 172, Article 114464.
Lammers, J., & Stapel, D. A. (2011). Power increases dehumanization. Group Processes & Intergroup Relations, 14(1), 113–126.
Lee, T. L., & Fiske, S. T. (2006). Not an outgroup, not yet an ingroup: Immigrants in the stereotype content model. International Journal of Intercultural Relations, 30(6), 751–768.
Leyens, J. P., Paladino, P. M., Rodriguez-Torres, R., Vaes, J., Demoulin, S., Rodriguez-Perez, A., et al. (2000). The emotional side of prejudice: The attribution of secondary emotions to ingroups and outgroups. Personality and Social Psychology Review, 4(2), 186–197.
Leyens, J. P., Rodriguez-Perez, A., Rodriguez-Torres, R., Gaunt, R., Paladino, M. P., Vaes, J., et al. (2001). Psychological essentialism and the differential attribution of uniquely human emotions to ingroups and outgroups. European Journal of Social Psychology, 31(4), 395–411.
Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 629–650.
Loughnan, S., & Haslam, N. (2007). Animals and androids: Implicit associations between social categories and nonhumans. Psychological Science, 18(2), 116–121.
Loughnan, S., Haslam, N., Sutton, R. M., & Spencer, B. (2014). Dehumanization and social class. Social Psychology, 45(1), 54–61.
Matheus, R., Faber, R., Ismagilova, E., & Janssen, M. (2023). Digital transparency and the usefulness for open government. International Journal of Information Management, 73, Article 102690.
McLean, G., Osei-Frimpong, K., & Barhorst, J. (2021). Alexa, do voice assistants influence consumer brand engagement? – Examining the role of AI powered voice assistants in influencing consumer brand engagement. Journal of Business Research, 124, 312–328.
Messingschlager, T. V., & Appel, M. (2023). Mind ascribed to AI and the appreciation of AI-generated art. New Media & Society, Article 14614448231200248.
Mori, M. (1970). The uncanny valley. Energy, 7(4), 33–35.
Moussawi, S., Koufaris, M., & Benbunan-Fich, R. (2021). How perceptions of intelligence and anthropomorphism affect adoption of personal intelligent agents. Electronic Markets, 31(2), 343–364.
Mozafari, N., Weiger, W. H., & Hammerschmidt, M. (2022). Trust me, I'm a bot – repercussions of chatbot disclosure in different service frontline settings. Journal of Service Management, 33(2), 221–245.
Nass, C., Fogg, B. J., & Moon, Y. (1996). Can computers be teammates? International Journal of Human-Computer Studies, 45(6), 669–678.
Nass, C., Steuer, J., & Tauber, E. R. (1994). Computers are social actors. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 72–78).
Nazlan, N. H., Tanford, S., & Montgomery, R. (2018). The effect of availability heuristics in online consumer reviews. Journal of Consumer Behaviour, 17(5), 449–460.
Neururer, M., Schlögl, S., Brinkschulte, L., & Groth, A. (2018). Perceptions on authenticity in chat bots. Multimodal Technologies and Interaction, 2(3), 60.
O'Brien, R. M. (2007). A caution regarding rules of thumb for variance inflation factors. Quality and Quantity, 41, 673–690.
Pieters, R., Bottschen, G., & Thelen, E. (1998). Customer desire expectations about service employees: An analysis of hierarchical relations. Psychology and Marketing, 15(8), 755–773.
Pinto, J. K., Patanakul, P., & Pinto, M. B. (2017). "The aura of capability": Gender bias in selection for a project manager job. International Journal of Project Management, 35(3), 420–431.
Prolific. (2024). Why do I have more participants than requested? Prolific Researcher Help Center. Retrieved from https://researcher-help.prolific.com/en/article/345973. (Accessed 27 March 2025).
Prolific. (2025). Prolific's payment principles. Prolific Researcher Help Center. Retrieved from https://researcher-help.prolific.com/en/article/2273bd. (Accessed 27 March 2025).
Puntoni, S., Reczek, R. W., Giesler, M., & Botti, S. (2021). Consumers and artificial intelligence: An experiential perspective. Journal of Marketing, 85(1), 131–151.
Rae, I. (2024). The effects of perceived AI use on content perceptions. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (pp. 1–14).
Raff, S., Rose, S., & Huynh, T. (2024). Perceived creepiness in response to smart home assistants: A multi-method study. International Journal of Information Management, 74, Article 102720.
Richter, N. F., Hauff, S., Ringle, C. M., & Gudergan, S. P. (2022). The use of partial least squares structural equation modeling and complementary methods in international management research. Management International Review, 62(4), 449–470.
Richter, N. F., & Tudoran, A. A. (2024). Elevating theoretical insight and predictive accuracy in business research: Combining PLS-SEM and selected machine learning algorithms. Journal of Business Research, 173, Article 114453.
Saaty, T. L. (2001). Decision making for leaders: The analytic hierarchy process for decisions in a complex world. RWS Publications.
Sah, Y. J., Yoo, B., & Sundar, S. S. (2011). Are specialist robots better than generalist robots? In Proceedings of the 6th International Conference on Human-Robot Interaction (pp. 241–242).
Scaillet, N., & Leyens, J. P. (2000). From incorrect deductive reasoning to ingroup favouritism. In Social identity processes: Trends in theory and research (pp. 49–61). SAGE Publications Ltd.
Shaffer, V. A., Probst, C. A., Merkle, E. C., Arkes, H. R., & Medow, M. A. (2013). Why do patients derogate physicians who use a computer-based diagnostic support system? Medical Decision Making, 33(1), 108–118.
Shi, S., Gong, Y., & Gursoy, D. (2021). Antecedents of trust and adoption intention toward artificially intelligent recommendation systems in travel planning: A heuristic-systematic model. Journal of Travel Research, 60(8), 1714–1734. https://doi.org/10.1177/0047287520966395
Sidanius, J., & Pratto, F. (2003). Social dominance theory and the dynamics of inequality: A reply to Schmitt, Branscombe, and Kappen and Wilson and Liu. British Journal of Social Psychology, 42(2), 207–213.
Smith, H. J., & Tyler, T. R. (1996). Justice and power: When will justice concerns encourage the advantaged to support policies which redistribute economic resources and the disadvantaged to willingly obey the law? European Journal of Social Psychology, 26(2), 171–200.
Straub, D., Boudreau, M. C., & Gefen, D. (2004). Validation guidelines for IS positivist research. Communications of the Association for Information Systems, 13(1), 24.
Struch, N., & Schwartz, S. H. (1989). Intergroup aggression: Its predictors and distinctness from in-group bias. Journal of Personality and Social Psychology, 56(3), 364.
Sundar, S. S. (2008). The MAIN model: A heuristic approach to understanding technology effects on credibility. Cambridge, MA: MacArthur Foundation Digital Media and Learning Initiative.
Sundar, S. S., Jung, E. H., Waddell, T. F., & Kim, K. J. (2017). Cheery companions or serious assistants? Role and demeanor congruity as predictors of robot attraction and use intentions among senior citizens. International Journal of Human-Computer Studies, 97, 88–97.
Sundar, S. S., & Nass, C. (2000). Source orientation in human-computer interaction: Programmer, networker, or independent social actor. Communication Research, 27(6), 683–703.
Tambe, P., Cappelli, P., & Yakubovich, V. (2019). Artificial intelligence in human resources management: Challenges and a path forward. California Management Review, 61(4), 15–42.
Teodoridis, F., Bikard, M., & Vakili, K. (2019). Creativity at the knowledge frontier: The impact of specialization in fast- and slow-paced domains. Administrative Science Quarterly, 64(4), 894–927.
Touré-Tillery, M., & McGill, A. L. (2015). Who or what to believe: Trust and the differential persuasiveness of human and anthropomorphized messengers. Journal of Marketing, 79(4), 94–110.
Trautt, G. M., & Bloom, L. J. (1982). Therapeugenic factors in psychotherapy: The effects of fee and title on credibility and attraction. Journal of Clinical Psychology, 38(2), 274–279.
Tyler, T. R., & Degoey, P. (1995). Collective restraint in social dilemmas: Procedural justice and social identification effects on support for authorities. Journal of Personality and Social Psychology, 69(3), 482.
Tyler, T. R., & Degoey, P. (1996). Trust in organizational authorities. In Trust in organizations: Frontiers of theory and research (pp. 331–356).
Tzafestas, S. G. (2016). Human-robot social interaction. In Sociorobot world (pp. 53–69). Springer.
Wang, L., Sakashita, M., Cheng, G., Ji, J., & Zhang, Y. (2021). The effect of regulatory focus on customer citizenship behavior in a virtual brand community: The role of online self-presentation and community identification. Journal of Consumer Behaviour, 20(3), 607–625. https://doi.org/10.1002/cb.1888
Wanner, J., Herm, L. V., Heinrich, K., & Janiesch, C. (2021). Stop ordering machine learning algorithms by their explainability! An empirical investigation of the tradeoff between performance and explainability. In Conference on e-Business, e-Services and e-Society (pp. 245–258). Cham: Springer International Publishing.
Waytz, A., Cacioppo, J., & Epley, N. (2010). Who sees human? The stability and importance of individual differences in anthropomorphism. Perspectives on Psychological Science, 5(3), 219–232.
Whitley, S. C., Trudel, R., & Kurt, D. (2018). The influence of purchase motivation on perceived preference uniqueness and assortment size choice. Journal of Consumer Research, 45(4), 710–724.
Woolley, K., & Risen, J. L. (2018). Closing your eyes to follow your heart: Avoiding information to protect a strong intuitive preference. Journal of Personality and Social Psychology, 114(2), 230.
Yan, Y., Fan, W., Shao, B., & Lei, Y. (2022). The impact of perceived control and power on adolescents' acceptance intention of intelligent online services. Frontiers in Psychology, 13, Article 1013436.
Yanco, H. A., & Drury, J. L. (2002). A taxonomy for human-robot interaction. In Proceedings of the AAAI Fall Symposium on Human-Robot Interaction (pp. 111–119).
Yanit, M., Yanit, M., & Wan, F. (2023). Right agent, wrong level of hedonism: How high (vs low) hedonic values in AI-performed tasks lead to decreased perceptions of humanlikeness, warmth, and less consumer support. Computers in Human Behavior, 147, Article 107870.
Yoganathan, V., Osburg, V. S., Kunz, W. H., & Toporowski, W. (2021). Check-in at the Robo-desk: Effects of automated social presence on social cognition and service implications. Tourism Management, 85, Article 104309.
Zhu, D. H., & Chang, Y. P. (2020). Robot with humanoid hands cooks food better? Effect of robotic chef anthropomorphism on food quality prediction. International Journal of Contemporary Hospitality Management, 32(3), 1367–1383.