Recent marketing trends involve companies assigning low-status job titles, such as "assistant" (e.g., Google Assistant), to conversational AI agents. This strategy aims to activate an altruistic "assistant" heuristic and thereby increase users' willingness to use these agents. However, this paper, comprising one pretest (N = 313), three experiments (N = 307, N = 300, N = 308), and one partial least squares structural equation modeling (PLS-SEM) analysis (N = 309), demonstrates that the strategy increases willingness to use only when task criticality is high. When task criticality is not high, higher-hierarchy AI titles (e.g., "manager," "teacher," "analyst") generate greater willingness to use. The research examines three alternative serial mediation pathways (perceived warmth, perceived control, and perceived risks) to test competing explanations alongside the focal serial mediation through perceived humanlikeness and perceived competence. Across the four studies, the serial mediation via perceived humanlikeness and competence remained robust, even when controlling for the alternative pathways and for scenario realism (Study 3). The final model indicates that when task criticality is not high, higher perceived hierarchical status in conversational AI settings enhances perceived humanlikeness, which in turn increases perceived competence and, ultimately, users' willingness to use the AI. When task criticality is high, the effect reverses: higher-status AI is perceived as less humanlike and less competent, reducing users' willingness to engage with it.
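To make the focal mechanism concrete, the moderated serial mediation described above can be written as a small system of linear equations in the style of conditional process models (e.g., Hayes-type moderated serial mediation). This is an illustrative sketch under stated assumptions, not the paper's reported specification: \(X\) stands for the hierarchical status conveyed by the AI's title, \(W\) for task criticality, \(M_1\) for perceived humanlikeness, \(M_2\) for perceived competence, and \(Y\) for willingness to use; all coefficient names are hypothetical.

\[
\begin{aligned}
M_1 &= i_1 + a_1 X + a_2 W + a_3 (X \cdot W) + \varepsilon_1,\\
M_2 &= i_2 + a_4 X + d_{21} M_1 + \varepsilon_2,\\
Y   &= i_3 + c'\, X + b_1 M_1 + b_2 M_2 + \varepsilon_3.
\end{aligned}
\]

Under this formalization, the indirect effect of status on willingness to use through humanlikeness and competence, conditional on task criticality, is \((a_1 + a_3 W)\, d_{21}\, b_2\). The reported pattern corresponds to \(a_1 > 0\), \(a_3 < 0\), and \(d_{21}, b_2 > 0\): the first-stage path \(a_1 + a_3 W\) is positive when \(W\) is not high and turns negative when \(W\) is high, which reproduces the sign reversal the final model describes.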