Say you’re about to wire money to someone. Maybe you’re closing a real estate transaction, maybe you need to pay a vendor for their services, maybe you’re disbursing an investment return. In all cases, the conventional wisdom is: verbally confirm before sending money.
Unfortunately, the conventional wisdom turns out to be wrong, or at best out of date.
First, some background. Here’s the classic structure of a Business Email Compromise (BEC) attack:
1. A fraudster gains access to an email account at company A, typically a vendor or supplier.
2. The fraudster quietly monitors the inbox until a payment from company B comes due.
3. Posing as company A, the fraudster emails company B with “updated” bank details.
4. Company B sends the payment to the fraudster’s account instead of company A’s.
Some companies have a policy of using callbacks to fight this. After step 3, before sending payment, an employee at company B calls company A over the phone and verifies the bank details.
This is better than nothing. Callbacks are cheap, easy, and will successfully defeat many primitive fraud attempts. But today’s fraudsters are rapidly getting more sophisticated, and callbacks simply don’t work reliably. We’ve come across dozens of cases where a company performed a callback and still got scammed.
You could get your counterparty’s phone number via email, but if a fraudster has access to that account, they’ll just give you their own number instead. (Sadly, this method is all too common. Many real companies do this and think they’re getting additional protection.)
Fraudsters will sometimes get clever by planting their phone number in email signatures, invoices, and other communications well before the fraud itself. (At Walrus, we’ve seen fraudsters wait more than 6 months after gaining access to an email account before executing the fraud.) So you can’t trust any number that originated from that email address, and you have to be careful not to let such a number slip into your records in a way that would lead you to trust it later.
You could have them call you, but that has the same problem: it could be the fraudster calling, and you have no way to verify the number you see. (Fraudsters will often lead with an email like “I know you need to do a callback, so let me save you some time and call you,” which lowers the employee’s guard.)
You could find a public phone number for that company on the web, but A) not all companies publish those, and B) there’s no guarantee it will even go to the same department. Do you want to spend 15 minutes navigating phone trees and customer support agents trying to get them to transfer you to the one employee who knows what you’re talking about?
The best option is to already have that person’s number saved from a known legitimate source; but how often will that be the case? And remember that their email account has just been compromised; how comfortable are you betting that they’re still in control of their phone number?
It is surprisingly easy to steal people’s phone numbers. (It happened to Jack Dorsey!)
Basic SIM-swapping can happen without any privileged access. Carriers need to let people access their accounts without a password: maybe the customer forgot it, maybe they picked a weak one and it was compromised, maybe they’re just old-fashioned and don’t like online portals.
By their nature, these recovery methods are less secure. Maybe the agent asks a few security questions, or for the last 4 digits of a credit card or Social Security number, or to confirm the last few transactions on the account. Do you keep all of those details as secret as your password? Probably not.
Once the fraudster has convinced the customer support agent that they’re the account holder, they can simply ask that the number be transferred to their own phone. This can be done right before a scheduled callback, leaving insufficient time for the victim to notice their phone is inoperable, figure out what happened, and warn everyone they know.
But it gets worse: when we use callbacks to protect against BEC, the ground assumption is that the counterparty’s email account may already be compromised.
That compromised inbox makes taking over their phone account much easier. Most mobile and VoIP accounts are linked to an email address, so an attacker who controls the inbox can simply request a password reset, click the link, and change the password.
(Landline office phones mostly solve this problem, though they do have their own vulnerabilities. But many companies nowadays are moving away from them, and of course remote employees can’t use them.)
When you communicate with a website over HTTPS, you’re using a single, robust global protocol that, outside of a few very rare attacks, can virtually guarantee that you’re communicating with the website you think you are, even if you’re (for example) connected to a malicious wifi network.
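For a concrete sense of what that guarantee looks like in practice, here’s a minimal Python sketch of the identity check an HTTPS client performs on every connection. The hostname is just a placeholder, and real applications get this behavior for free from their HTTP libraries; the point is that the server must cryptographically prove who it is before any data is exchanged.

```python
import socket
import ssl

hostname = "example.com"  # placeholder; any HTTPS site works
context = ssl.create_default_context()  # loads the system's trusted root CAs

with socket.create_connection((hostname, 443)) as sock:
    # The TLS handshake raises SSLCertVerificationError if the certificate
    # chain isn't signed by a trusted authority or doesn't match the hostname,
    # so a malicious wifi network can't silently impersonate the site.
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        subject = dict(item[0] for item in cert["subject"])
        print("Verified connection to:", subject.get("commonName"))
```

Every mainstream browser and HTTP library performs this check by default; nobody has to remember to do it.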
The phone network… does not work like this. Phone calls long predate the internet; early analog networks were operating in the late 1800s. All later protocols have needed to maintain compatibility with the previous ones, so while there has been a slow digitization, the technological foundations of the phone system are still much more primitive and disjointed than those of the internet.
With over 12,000 different network operators (not all of whom are even well-intentioned, let alone know what a “private key” is), the possible attacks are diverse. Most fall into one of two categories: attacks that exploit the signaling protocols operators use to route calls and texts between networks, and attacks that intercept the radio link between a phone and a nearby tower.
Phone providers may tout the ability of newer technologies like 5G to thwart these sorts of attacks, but those claims tend to be exaggerated. 4G and 5G networks replace SS7 with the newer Diameter protocol… but Diameter is susceptible to many of the same attacks. More importantly, every modern phone still supports the older technologies, undermining any such protection: IMSI catchers, for example, will force a phone to downgrade to 2G or 3G in order to intercept it.
These types of attacks are most common in government surveillance and cyberwarfare, but they’re also sometimes used against the private sector. Consider the group of criminals in Germany who exploited SS7 vulnerabilities to steal SMS two-factor authentication codes and place fraudulent bank transfers. Or the biotech company in San Francisco that discovered a competitor had set up an IMSI catcher near their building, using it to eavesdrop on all of their internal calls and text messages. If you’re sending a multi-million dollar wire, it can absolutely be worth a fraudster’s time to set up a more sophisticated attack.
The fraud methods above rely on the attacker answering the callback. Usually this will be unnoticeable to the caller, since they don’t know what their counterparty’s voice sounds like. (The stereotypical phone scammer has a foreign accent, but schemes that directly target businesses for larger amounts will generally recruit accomplices who have voices and mannerisms similar to the typical demographics of the target company’s employees.)
If the caller has previously spoken with their counterparty in person, this kind of impersonation can be defeated simply by recognizing that the person who picked up the phone isn’t who they were expecting. This is why callbacks were, for a long time, reasonably secure when you knew the person on the other end of the line: even if you called the wrong person, you’d know it wasn’t them.
With the advent of generative AI, this has stopped being true. Replicavox has an online demo you can play with; upload a clip of anyone’s voice, type in any text, and it will generate their voice saying what you entered.
More sophisticated tools can replicate a person’s voice and speaking style even more convincingly, and do it live, with an attacker speaking into a microphone and the AI translating what they said into a fake voice with no perceptible delay. (And no, video calls don’t pose much more of an obstacle.)
News stories about the highest-profile cases abound, and you’ve probably seen some: in early 2024, for example, a finance employee in Hong Kong was tricked into transferring roughly $25 million after joining a video call populated entirely by deepfaked versions of his company’s CFO and colleagues.
The big numbers are flashy, but the cost of these tools has fallen so low that fraudsters can profitably deploy them against relatively minuscule transactions. A businessman in Mumbai was defrauded out of just $900 in 2024 with a clone of his son’s voice.
Even if you do get in contact with a legitimate employee of the business, can the information they give you be trusted? People usually don’t memorize their company’s bank details; they’re getting them from somewhere. Often that “somewhere” is an email from another employee! At that point the callback serves no purpose, since the fraudster can feed the fraudulent details to that employee over email just as easily. (As happened to Club Car in 2021.)
Or perhaps they call their CFO, or their CFO calls them. Now the problem is recursive; most of the callback risks explained in this article also apply to those sorts of internal calls.
This is avoidable in theory. An employee who works on-site can physically walk over to a co-worker to ask them, or find the details on paperwork in a file cabinet, etc. But in practice most people aren’t going to go to that effort. And for employees who work remotely, this isn’t even an option.
Set aside all of the security issues. Callbacks are slow. Both people have to be available simultaneously, which can be challenging if they’re in different time zones. Or the number you have for them is an office phone, and they’re currently working from home. Or they’re just busy! (One of our clients at Walrus mentioned that before they started using our platform, their average callback took 6 emails to schedule.)
For a small company that pays a few invoices a month, sure, this is not a big deal. But for a venture capital or private equity firm that’s sending hundreds of investment wires or partner payouts in each wave? That’s a noticeable drain on resources! If each one takes 5-10 minutes, that’s 1-5 entire workdays of doing nothing but calling people.
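The math behind that estimate is easy to sanity-check; here’s a quick back-of-the-envelope calculation in Python, where the wire counts and per-call durations are illustrative assumptions rather than client data.

```python
# Rough estimate of time spent purely on callback phone calls per wave.
# Wire counts and per-call durations are illustrative assumptions.
WORKDAY_MINUTES = 8 * 60

for wires in (100, 150, 250):
    for minutes_per_call in (5, 10):
        workdays = wires * minutes_per_call / WORKDAY_MINUTES
        print(f"{wires} wires at {minutes_per_call} min each ≈ {workdays:.1f} workdays")
```

And that’s just the time spent on the phone, before any of the scheduling back-and-forth described above.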
And remember that the more callbacks you ask someone to do, the more likely it is that they’ll try to take shortcuts. After all, it’s not the employee’s money.
In April 2022, Australian company Inoteq had to pay an invoice for $235,400 to another company, Mobius Group. Mobius sent the payment instructions via email. The Inoteq employee tasked with the invoice called Mobius to confirm the details, but couldn’t hear the person on the other end of the line clearly. Annoyed, they ended the call and sent the payment to the account listed in the email. Mobius’s email had been hacked, and the details were fraudulent. (The two companies ended up in court over who should bear the loss; Inoteq lost.)
It doesn’t matter if your process is theoretically perfect if people won’t actually follow it.
In short, there are a lot of ways for callbacks to go wrong. Some of them can be solved with better employee instructions and enforcement, but ultimately it’s still a gamble that everyone will follow the instructions every time. All it takes is one employee forgetting a step, or deciding to skip it and not telling anyone because they’re tired or in a rush.
According to a 2022 survey, only 53% of employees report that their company has any cybersecurity protocols at all, and just 34% report mandatory awareness training. (Even when there is training, it’s usually brief and superficial.) If an employee doesn’t understand exactly what the threat is, that means they don’t know what parts of a callback process are safe to modify.
Think about all the pitfalls described above; is the average employee going to know why all of those are a problem? Might they think to themselves, for example, “I know I’m not supposed to accept inbound calls, but this caller ID matches the number I have for them, so it’s fine to pick up”, not realizing that caller ID is commonly spoofed?
Implementing a callback process that requires the caller to basically be a cybersecurity professional themselves is neither cost-effective nor realistic. Nor will it be easy to create an elaborate system to surveil and verify every employee’s actions to ensure they’re rigorously following your policies.
If callbacks were truly necessary then all the hassle would be worth it, but when alternatives exist that are both more convenient and more secure… why bother?