AI Conducts Phone Screening Interview, Fails to Recognize Fresh Graduate Has No Work History
A job seeker in the U.S. sat through an unsettling phone interview conducted entirely by an AI recruiter that repeatedly ignored his statements that he had no work experience, raising concerns about the ethical deployment of automation in hiring. Experts warn that poorly trained AI systems risk alienating candidates and undermining trust in recruitment processes.

Returning to his phone after a brief absence, a young job seeker in the United States found something unsettling: a missed call from an unknown number, followed by an automated email stating, "Recruiter tried reaching out." Assuming it was a standard follow-up, he returned the call, only to be met by a synthetic voice that immediately asked, "Am I speaking to {.}?" Within seconds, he realized he was conversing with an artificial intelligence system conducting a phone screening interview.
What followed was a surreal exchange that exposed critical flaws in the deployment of AI-driven recruitment tools. Despite the candidate’s clear and repeated statements that he was a recent graduate with no prior employment history, the AI persisted in asking for details about past salaries, previous employers, and last working dates. "I told it twice—I’m a fresher, I’ve never worked," the candidate, who requested anonymity, told reporters. "It just kept going like it was reading a script. I got so frustrated I hung up."
This incident, first reported on Reddit’s r/artificial community, has sparked a broader conversation about the ethical boundaries of automation in human resources. While companies increasingly adopt AI to streamline high-volume hiring—particularly for entry-level roles—the case highlights a dangerous disconnect between technological capability and contextual understanding. The AI, seemingly trained on datasets dominated by experienced professionals, failed to adapt its questioning logic to a candidate with zero work history, a common scenario in graduate recruitment.
According to industry analysts, such failures are not isolated. A 2023 Harvard Business Review study found that 42% of AI-driven screening tools used by Fortune 500 companies showed bias against candidates with non-traditional resumes, including recent graduates and career changers. Optimized for efficiency over empathy, these tools rely on pattern recognition rather than human judgment, producing rigid, scripted interactions.
"The goal of automation is to save time, not to frustrate candidates," said Dr. Lena Torres, a labor technology ethicist at Stanford University. "When an AI refuses to recognize basic human context—like a person saying, ‘I have no experience’—it’s not just inefficient. It’s dehumanizing. Companies risk damaging their employer brand before a candidate even meets a human recruiter."
Several major tech firms have begun piloting AI systems with real-time adaptation features that allow the algorithm to detect contradictory or anomalous responses and escalate to a human operator. However, adoption remains uneven. Smaller companies, seeking cost savings, often deploy off-the-shelf AI platforms without customization or oversight.
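None of the firms piloting these features have published their implementations, but the underlying pattern is simple enough to sketch. The Python fragment below is a minimal, hypothetical illustration of such a fail-safe; every name in it (the NO_EXPERIENCE_PHRASES list, handle_answer, the "escalate_to_human" action) is invented for this example and does not correspond to any real vendor's API.

```python
# Hypothetical sketch of an escalation fail-safe for an AI phone screener.
# All names are invented for illustration; this is not any vendor's code.

NO_EXPERIENCE_PHRASES = (
    "no experience", "never worked", "fresh graduate", "fresher",
)

def contradicts_script(question_topic: str, answer: str) -> bool:
    """Flag answers that invalidate the current line of questioning,
    e.g. asking about past salary after the candidate says 'I've never worked'."""
    answer = answer.lower()
    asks_about_past_job = question_topic in {
        "last_salary", "previous_employer", "last_working_date",
    }
    claims_no_history = any(p in answer for p in NO_EXPERIENCE_PHRASES)
    return asks_about_past_job and claims_no_history

def handle_answer(question_topic: str, answer: str, strikes: int) -> tuple[str, int]:
    """Return the next action and an updated strike count. After two
    contradictory answers, hand off to a human instead of repeating the script."""
    if contradicts_script(question_topic, answer):
        strikes += 1
        if strikes >= 2:
            return ("escalate_to_human", strikes)    # hand off, don't loop
        return ("skip_employment_history", strikes)  # branch away from the bad track
    return ("continue_script", strikes)

# Replaying the exchange from this story against that logic:
action, strikes = handle_answer("last_salary", "I'm a fresher, I've never worked", 0)
print(action)  # -> skip_employment_history
action, strikes = handle_answer("previous_employer", "I told you, I've never worked", strikes)
print(action)  # -> escalate_to_human
```

In this sketch, the first contradictory answer reroutes the script and the second triggers a human hand-off, which is exactly the behavior the candidate in this story never got.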
The candidate’s experience underscores a troubling trend: the increasing normalization of AI in early-stage hiring without adequate safeguards. While AI can effectively filter resumes or schedule interviews, applying it to conversational screening—where nuance, tone, and context matter—is fraught with risk. The system in this case appears to have been trained on a narrow dataset, likely derived from mid-career professionals, and lacked the semantic flexibility to interpret the candidate’s statements as valid responses.
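One way to give a screening script that semantic flexibility is to treat "no work history" as a valid, first-class answer rather than a failure to answer. The sketch below illustrates the idea under the same caveat as before: the track names and the build_question_plan function are assumptions made for this example, not details drawn from the system in the story.

```python
# Hypothetical sketch: a screening flow that branches on "no work history"
# instead of looping on employment questions. All names are invented.

EMPLOYMENT_TRACK = ["previous_employer", "last_salary", "last_working_date"]
GRADUATE_TRACK = ["degree_and_major", "graduation_year", "internships_or_projects"]

def build_question_plan(answers: dict[str, str]) -> list[str]:
    """Choose which questions to ask based on what the candidate has
    already said, rather than following one fixed mid-career script."""
    has_work_history = answers.get("has_work_history", "").lower()
    if has_work_history in {"no", "none", "fresher", "fresh graduate"}:
        return GRADUATE_TRACK  # skip employment history entirely
    return EMPLOYMENT_TRACK + GRADUATE_TRACK

# A fresh graduate's opening answer reroutes the rest of the interview.
plan = build_question_plan({"has_work_history": "fresher"})
print(plan)  # -> ['degree_and_major', 'graduation_year', 'internships_or_projects']
```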
As AI continues to infiltrate every stage of the hiring funnel, experts urge regulatory bodies and corporate HR departments to establish transparency standards. Candidates should be informed when they are speaking to an AI, and systems must include fail-safes to redirect non-standard responses to human reviewers. Without these measures, the promise of efficient hiring may become a liability—alienating talent at the very moment companies need to attract it most.
For now, the young job seeker has applied to other positions—with human recruiters. "I’ll take a slow, awkward phone call with a real person over another robotic interrogation any day," he said.