
AI Voice Scams Are the Newest Hotel Security Risk—And They’re Already Working

  • Writer: Anu Metsallik
  • Apr 14
  • 5 min read

Updated: 4 days ago


In the not-so-distant past, hotel cybersecurity threats were largely phishing emails and stolen passwords—messages disguised as booking confirmations or IT alerts, asking staff to “verify credentials” on fake login pages. Many hotels learned the hard way, but the industry eventually caught up: staff were trained, spam filters improved, and most employees today know better than to click a shady link or enter their passwords blindly.


But the next wave of hotel scams doesn’t come in writing—it comes with a familiar voice.

As awareness of email-based scams rises, scammers are shifting tactics to phone-based impersonation, using AI voice cloning to create new points of vulnerability. This new generation of fraud doesn’t rely on careless clicking—it relies on empathy, trust, and urgency, traits that are core to hospitality itself.



Scammers ask for urgent actions such as fund transfers or sharing sensitive information

Why Voice-Based Scams Are So Dangerous in Hotels


Hotels handle dozens, sometimes hundreds, of phone-based interactions daily. Staff are trained to act promptly and respectfully, especially when a VIP guest, a general manager, or a well-known supplier calls. The hospitality industry's commitment to personalized service makes it more vulnerable to these advanced scams. The very qualities that define excellent hospitality—attentiveness, responsiveness, and trust—are being manipulated by cybercriminals using AI technology.


Scammers utilize advanced AI tools to replicate the voices of hotel executives, guests, or trusted partners. By sourcing short audio clips from public platforms or previous communications, they create convincing voice clones. These cloned voices are then used to call hotel staff, often during peak hours or night shifts, requesting urgent actions such as fund transfers or sharing sensitive information. The familiarity of the voice and the urgency of the request often lead employees to comply without question.



"If this was really happening, I would have heard about it."


But the reality is—you probably don’t.


Hotels that fall victim to these scams rarely speak about them publicly. The reason? Fear of losing guest trust. No property wants the headline: “Luxury Hotel Falls for Scam.” And no guest wants to stay somewhere that seems easily manipulated—or where their personal data might be at risk.


So these incidents are quietly contained, reported internally or to insurers, but almost never shared with the media.

The result? A rapidly growing threat that remains largely invisible to the public eye.



How AI Voice Cloning Scams Target Hotels


These scams aren't just clever—they're crafted specifically to exploit hotel operations.

Here’s how:


Impersonation of Executives or Partners:

Scammers clone the voices of general managers, finance directors, or ownership reps. They call front desk agents, night auditors, or accounting staff, requesting urgent wire transfers, login credentials, or guest data. The voice is very convincing—and the request sounds completely normal.


Guest Emergency Scenarios:

A panicked “guest” calls claiming they’ve lost their wallet or had a medical emergency. The caller sounds exactly like a guest checked in under that name. They ask for a charge to their room, or access to their personal data, and staff—wanting to help—act without verifying.


Vendor or Supplier Deception:

Known vendor voices (laundry, IT, minibar service) are cloned and used to issue “payment updates” or fake delivery instructions. These types of impersonations easily slip through the cracks during busy operational hours.



Real-World Examples


These aren’t hypothetical scenarios—they’re already happening:


MGM Resorts International (2023) – Vishing-Led Data Breach


In one of the most high-profile hospitality security breaches to date, MGM Resorts was targeted by a group of cybercriminals known as "Scattered Spider." The attackers gathered publicly available information about an MGM employee, including details from LinkedIn, and then called the internal IT help desk impersonating that employee.


The scammers persuaded support staff to reset an employee’s credentials—giving them access to internal systems. Within minutes, they escalated privileges, deployed ransomware, and disrupted hotel and casino operations across the MGM network.


The impact? Guest systems were down for days, casino slot machines went offline, and sensitive personal data was breached. MGM reported over $100 million in losses due to the attack. The entire breach began with a simple phone call, proving just how dangerous voice-based impersonation has become.



IHG-Branded Hotel (2024) – Voice Clone Triggers $15K Loss


At an IHG-branded property in the U.S., a seasoned night auditor received an early-morning call from someone claiming to be from IHG corporate IT. The caller sounded confident, referenced a real technical issue the hotel had experienced the week before, and explained that a "test protocol" needed to be run to resolve it.


Trusting the seemingly legitimate request, the night auditor followed instructions and processed several small refund transactions using virtual card numbers supplied by the caller. On the screen, the transactions briefly appeared valid—no red flags.


By the time the shift ended, over $15,000 had been funneled to the scammer’s accounts. Only the next morning did management realize that it was a scam—likely aided by either a voice-cloned GM or spoofed caller ID. The incident prompted immediate changes in protocol, including mandatory multi-channel verification for all calls involving financial actions.



Impersonation Isn't New—But It’s Getting Smarter


Pretending to be someone else is not a new scam tactic in hotels. Every front office team has heard stories—if not experienced them—where housekeeping has let a person into a guest’s room because they claimed their key card wasn’t working. No ID shown, just a plausible excuse and a confident attitude. And just like that, a thief walks into someone’s room.


These kinds of oversights remind us that scams thrive on assumptions and pressure, not just on technical weaknesses. That’s why bringing all types of scam scenarios—old and new—into regular staff training is essential. Whether it’s a voice on the phone or a stranger at the door, the mindset of verification must be built into hotel culture.



Verification and Prevention Strategies


Protecting your hotel doesn’t mean slowing down your service. It means getting smarter with verification:


  • Implement Multi-Factor Verification: Always cross-check financial or sensitive requests via at least two separate channels. If it comes by voice, confirm it via a known email or direct message.


  • Establish Internal Code Phrases: Use passphrases only known to key personnel for authorizing anything critical. If someone calling can’t repeat the code, it’s a red flag.


  • Staff Training and Simulation: Include voice scam scenarios in your phishing simulations and security trainings. Front desk and back-office teams need to know what to listen for and when to pause.


  • Adopt AI Detection Tools: Some security platforms can flag synthetic voices and alert you in real time—an increasingly valuable investment.
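The first two rules above—multi-channel confirmation plus a shared code phrase—can be expressed as a simple decision check. The sketch below is a minimal, illustrative Python example; the action names, threshold, and function names are assumptions for this article, not part of any real hotel or PMS system.

```python
# Minimal sketch of a verification rule for phone-based requests.
# Assumption: any financially sensitive action, or any amount above a
# property-defined threshold, must be confirmed on a second channel
# (known email, direct message) AND the caller must know the internal
# code phrase. All names here are illustrative.

SENSITIVE_ACTIONS = {"wire_transfer", "refund", "credential_reset", "guest_data"}

def requires_verification(action: str, amount: float = 0.0,
                          threshold: float = 100.0) -> bool:
    """A sensitive action, or a large amount, always triggers verification."""
    return action in SENSITIVE_ACTIONS or amount > threshold

def approve_request(action: str, amount: float,
                    channels_confirmed: set, passphrase_ok: bool) -> bool:
    """Approve only if the request was confirmed on at least two
    independent channels and the caller knew the code phrase."""
    if not requires_verification(action, amount):
        return True  # routine request, no extra checks
    return len(channels_confirmed) >= 2 and passphrase_ok

# A voice-only refund request fails, even if the voice sounds familiar:
approve_request("refund", 500.0, {"phone"}, False)       # denied
# The same request confirmed by email, with the passphrase, passes:
approve_request("refund", 500.0, {"phone", "email"}, True)  # approved
```

The point of the sketch is that the rule is deliberately dumb: it never tries to judge whether the voice is genuine, because a convincing clone defeats that judgment. It only asks whether the request arrived through two independent channels and carried the code phrase.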



The Psychological Trap: Familiar Voices, Urgent Requests


This new breed of scam hijacks our brains with the unexpected, the emotional, and the urgent. The call sounds real. The voice feels known. The timing seems crucial. These factors short-circuit skepticism, and in a high-speed hotel environment, that’s a recipe for compliance.


AI voice scams in hotels aren’t science fiction. They’re already happening—quietly, expensively, and more often than you think. The silence surrounding these incidents doesn't mean they're rare. It just means no one wants to admit they got duped by a voice. Awareness is the first line of defense. It’s time the hospitality industry started talking about the risks—before the next convincing call costs more than just money.
