
I Traced a Single Phishing Email to a 188-Domain Criminal Operation Across Four Countries

  • Jay Maier

A few days ago, I received a phishing email. It was disguised as a DocuSign request, sent from a real insurance agent's compromised email account at a small agency right here in southeast Wisconsin.


I clicked the link. I entered my email address on the page that came up. I'm an IT professional with 25+ years of experience, and I fell for it — because it was that good.

Minutes later, Google sent me an alert: "Google blocked someone with the password for [my account] from signing in." The attacker had captured my credentials and immediately tried to log into my Gmail. The only thing that stopped them was two-factor authentication.


That alert changed everything. Instead of being embarrassed and moving on — which is what the attacker expects most people to do — I got angry. I decided to find out exactly who had just tried to break into my account and how far their operation reached.

What I found was an industrial-scale criminal phishing operation spanning four countries, 188 malicious domains, and over 14 months of activity — all traced back to a single rented server in a data center in Buffalo, New York. By the time I was done, I'd gotten Amazon Web Services to take down phishing infrastructure, the FBI had a detailed complaint on file, and the founder of the hosting company had personally responded to investigate.


All because I clicked a link I shouldn't have — and decided to fight back instead of looking away.


Here's how it unfolded.


The Email That Got Me

I'll be honest about why this email worked. It came from a real person at a real insurance agency, with their actual business signature at the bottom — company name, address, phone number, all correct. It was styled to look exactly like a DocuSign notification, complete with official-looking branding and a big purple "REVIEW DOCUMENT" button. I had no reason to doubt it.


I clicked the button and landed on what looked like a standard DocuSign verification page. It asked me to enter my email to access the document. I entered it. Within minutes, Google flagged an unauthorized login attempt on my account from an unfamiliar location. My two-factor authentication blocked it, but the message was clear: the attacker now had my password.


That's when the red flags I should have caught earlier became obvious. The email had been sent via BCC to an unknown number of recipients — not how DocuSign actually works. The subject line was stuffed with corporate-sounding buzzwords designed to create urgency. And buried in the fine print at the bottom of the email, the name of the supposed DocuSign sender was different from the name in the "From" field.


Someone had broken into this insurance agent's email account and was using it to blast phishing emails to her entire contact list. I was one of the targets — and my credentials had been harvested.


Digging Into the Email Headers

Every email carries hidden metadata called headers. They're like a shipping label — they show exactly where the email came from, which servers handled it, and whether the authentication checks passed.


I pulled the full headers and found something important: all of the standard email security checks — SPF, DKIM, and DMARC — passed. That's because the email wasn't spoofed. It was sent from the agent's actual Google Workspace account. The attacker had direct access to her inbox.


The headers also revealed the email was composed with Microsoft Outlook but sent through Gmail infrastructure — a mismatch that shouldn't happen under normal circumstances.
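Pulling those verdicts out of the raw headers is straightforward with Python's standard library. Here's a minimal sketch against a shortened, hypothetical header block — real Authentication-Results headers carry far more detail, but the parsing approach is the same:

```python
# Sketch: parse the Authentication-Results header of a raw email to see
# whether SPF, DKIM, and DMARC passed. The message below is a shortened,
# made-up example, not the actual phishing email.
from email.parser import Parser

raw = """\
Authentication-Results: mx.example.com;
 spf=pass smtp.mailfrom=agency.example;
 dkim=pass header.d=agency.example;
 dmarc=pass header.from=agency.example
From: Agent <agent@agency.example>
Subject: Document for review

body
"""

msg = Parser().parsestr(raw)
auth = msg["Authentication-Results"]

# Pull out the verdict (pass/fail/none) for each mechanism.
results = {}
for mech in ("spf", "dkim", "dmarc"):
    for token in auth.replace(";", " ").split():
        if token.startswith(mech + "="):
            results[mech] = token.split("=", 1)[1]

print(results)  # all three passing means the mail really left that account
```

When all three mechanisms pass, the email wasn't forged in transit — which is exactly why a compromised legitimate account is so dangerous.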


Dissecting the Attack

After changing my password and securing my account, I went back to the phishing email — this time with the mindset of an investigator, not a victim. I extracted the raw HTML source from the email body and found the actual URL behind that "REVIEW DOCUMENT" button.


It didn't go to DocuSign.


Instead, it pointed to an Amazon Web Services S3 storage bucket with a name designed to look legitimate: a hyphenated variation of "DocuSign" in the bucket name, hosted in Amazon's Ohio data center. The file inside was called "Subpoena+1.html" — which didn't even match the email's subject line about insurance agreements. The attacker was reusing the same phishing page across different campaigns and didn't bother changing the filename.
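Extracting the real link targets from an email's HTML body can be done without ever rendering the page. This is a sketch using only the standard library; the bucket URL below is a made-up stand-in for the actual one:

```python
# Sketch: list every link target in an email's HTML body so you can see
# where a button really points without clicking it. The HTML snippet is a
# hypothetical stand-in for the real phishing email.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

html_body = """
<p>You have a document waiting.</p>
<a href="https://docusign-verify-example.s3.us-east-2.amazonaws.com/Subpoena+1.html"
   style="background:#4b2b82;color:#fff;padding:12px">REVIEW DOCUMENT</a>
"""

parser = LinkExtractor()
parser.feed(html_body)
print(parser.links)  # the real destination: an S3 bucket, not docusign.com
```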


I pulled the source code of that page safely. It was a clean, professional-looking fake DocuSign portal — "Document Notification: Please verify your email to access the document." When someone entered their email address and clicked "Access Document," the page didn't go to DocuSign. JavaScript on the page silently redirected them to an entirely different server, appending their email address to the URL.


That redirect server was where the real credential theft happened.


Tracing the Credential Harvesting Server

The redirect pointed to a domain I'd never heard of — a nonsense-syllable name with an .sbs extension. I ran a DNS lookup and found it resolved to an IP address belonging to ColoCrossing, a hosting company operating out of a data center at 325 Delaware Avenue in Buffalo, New York.


WHOIS records showed the domain had been registered just 26 days before the phishing email hit my inbox, through Gname.com — a Singapore-based domain registrar. The DNS was handled by DNSPod, a Chinese service run by Tencent Cloud. And the domain's WHOIS information was completely redacted.
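Domain age is trivial arithmetic, but it's worth automating when you're checking dozens of domains. A sketch using illustrative dates, not the actual WHOIS values from the investigation:

```python
# Sketch: compute how old a domain was when a suspicious email arrived,
# using the "Creation Date" field from its WHOIS record. Dates here are
# illustrative stand-ins, not the real ones.
from datetime import date

whois_created = date(2026, 4, 15)    # WHOIS "Creation Date"
email_received = date(2026, 5, 11)   # when the phish hit the inbox

age_days = (email_received - whois_created).days
print(f"Domain was {age_days} days old")  # freshly registered = red flag
```

A domain less than a month old serving "document verification" pages is almost never legitimate.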


The infrastructure map was taking shape: a Singapore registrar, Chinese DNS, an Ohio-based cloud storage page, and a New York-based server — all working together, all controlled by one attacker, and none of the companies involved aware of the others.


188 Domains on One Server

Then I ran a reverse IP lookup on the ColoCrossing server. Instead of finding one or two domains, I found 188.


Every single one followed the same pattern: algorithmically generated, nonsense-syllable names spread across cheap top-level domains like .sbs, .dev, .directory, .team, and .auction. All of them had been registered within days of each other in mid-April 2026. This wasn't someone who got lucky with a compromised email account. This was an organized, industrial-scale phishing operation.
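You can flag this kind of machine-generated name with a crude heuristic: real words have vowels, and generated labels tend to pile up consonants. This sketch uses thresholds I picked purely for illustration — it's not a production detector:

```python
# Sketch: crude heuristic for algorithmically generated domain labels.
# Long consonant runs and low vowel ratios are rare in real words.
# Thresholds are illustrative guesses, not tuned values.
import re

def looks_generated(label: str) -> bool:
    label = label.lower()
    vowels = sum(c in "aeiou" for c in label)
    vowel_ratio = vowels / max(len(label), 1)
    longest_consonant_run = max(
        (len(run) for run in re.findall(r"[bcdfghjklmnpqrstvwxyz]+", label)),
        default=0,
    )
    return vowel_ratio < 0.25 or longest_consonant_run >= 4

for d in ["qxbvtr", "zrkplnd", "insurance", "docusign"]:
    print(d, looks_generated(d))
```

Run against the 188-domain list, a filter like this separates the bulk nonsense-syllable registrations from the handful of deliberate names worth a closer look.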


But among those 188 domains, a few stood out. Three of them broke the nonsense-syllable pattern completely. They were deliberate misspellings of the name of a German industrial engineering company — a 100-year-old manufacturer with 3,500 employees worldwide.
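Spotting deliberate misspellings like these is a job for similarity comparison. A sketch using the standard library's sequence matcher, with a made-up brand name ("brandwerk") standing in for the real company, and a threshold chosen for illustration:

```python
# Sketch: flag domain labels that are near-misses of a brand name.
# "brandwerk" is a made-up stand-in for the real German company's name;
# the 0.8 threshold is an illustrative choice.
from difflib import SequenceMatcher

def is_typosquat(domain_label: str, brand: str, threshold: float = 0.8) -> bool:
    ratio = SequenceMatcher(None, domain_label.lower(), brand.lower()).ratio()
    return ratio >= threshold and domain_label.lower() != brand.lower()

print(is_typosquat("brandwrek", "brandwerk"))  # transposed letters
print(is_typosquat("qxbvtr", "brandwerk"))     # unrelated nonsense label
```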


A quick search confirmed it. That German company had published a public security notice in June 2025 — nearly a year earlier — warning that fraudulent emails were being sent impersonating their employees. The same server. The same attacker. A campaign that had been running for over 14 months across multiple countries with no consequences.


Reporting Everything

Over the next 48 hours, I filed reports with every entity involved:


Amazon Web Services received a detailed abuse report about the S3 bucket hosting the fake DocuSign page. They responded within hours, opened a formal case, and took the phishing page offline. It went from serving content to returning "403 Forbidden."


The FBI's Internet Crime Complaint Center (IC3) received a comprehensive complaint with the full infrastructure timeline, all indicators of compromise, and a list of subpoena opportunities — the specific companies that hold records that could identify the attacker.


The domain registrar in Singapore received an abuse report through their official web form requesting the phishing domain be suspended.


DocuSign received a brand impersonation report.


Google received a report about the compromised email account.


The hosting company received an abuse report — but through their standard abuse inbox, nothing happened. So I escalated.


Getting the CEO's Attention

When the standard abuse channels didn't produce results, I researched the hosting company's ownership structure and sent a detailed report directly to the founder and general manager at the parent company, outlining the full scope: 188 phishing domains, 14+ months of activity, an active FBI complaint, and the fact that AWS had already taken action on their side.


Within hours, the founder personally responded, CC'd two members of his operations team, and asked me to send the full evidence package. I sent everything — reverse IP results, WHOIS records, phishing page source code, decoded JavaScript, SSL certificate data, and the connection to the German company's 2025 incident.


The Attacker Fights Back

Then something interesting happened. The original server went dark — but the attacker had already migrated all 188 domains to a second server at the same hosting facility. I caught the migration in real time through DNS lookups and immediately notified the hosting company's team.


When I examined what the attacker deployed on the new server, it was revealing. Instead of a fresh phishing kit, they'd put up a fake car parts e-commerce storefront. But the page's CSS had display:none on the body — it never actually rendered. Instead, obfuscated JavaScript at the bottom of the page decoded itself using XOR encryption and silently redirected visitors to a Chinese e-commerce platform. The fake storefront was designed to pass a superficial content review if the hosting company checked the page. You had to decode the hidden JavaScript to see what it was actually doing.
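XOR obfuscation like this is trivial to reverse, because the operation is its own inverse: applying the same key twice gets you back to the plaintext. A sketch with a made-up payload and key — the real script carried its own key alongside the encoded data:

```python
# Sketch: how a single-byte XOR "decoder" like the one on the fake
# storefront works. Payload and key are made up for illustration.
def xor_decode(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

key = 0x5A
redirect = b"window.location='https://shop.example.cn/'"
obfuscated = xor_decode(redirect, key)          # what ships in the page source
print(xor_decode(obfuscated, key).decode())     # same key recovers the payload
```

Once you know the key, the "hidden" redirect is one line of code away from plain text.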

I decoded it and sent the analysis to the hosting company's team.


What the Attacker Got Right

It's worth acknowledging how well-constructed this operation was. The phishing email passed every standard email security check because the attacker used a compromised legitimate account. The fake DocuSign page was hosted on Amazon's trusted cloud infrastructure. The credential harvesting server was behind layers of international indirection. The domains were disposable and algorithmically generated. And when infrastructure got burned, the attacker had backup servers ready to go at the same facility.


This wasn't a kid in a basement. This was a professional operation built to scale and built to survive takedowns.


What Saved Me — and What Would Have Saved Her

Two-factor authentication saved my account. The attacker had my password — that's confirmed by Google's alert — but they couldn't get past the second factor. If I hadn't had 2FA enabled, they would have been inside my Gmail within minutes of me clicking that link.


But here's the nuance: the type of MFA matters. Standard authenticator apps and SMS codes can be intercepted by modern phishing techniques that use real-time proxy servers to relay your credentials and MFA codes to the real login page simultaneously. The only MFA method that's fully resistant to this is a hardware security key — a physical device that cryptographically verifies it's talking to the real website, not a proxy. In my case, Google's risk-based login detection likely blocked the attempt because it came from an unfamiliar location. I got lucky that the attacker wasn't using a more sophisticated proxy-based approach.
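The reason hardware keys resist proxy-based phishing is origin binding: the key signs the origin it is actually talking to, so a response captured on a look-alike domain never verifies against the real site. This is a toy HMAC sketch of that idea — real WebAuthn uses public-key signatures over structured client data, but the origin-binding logic is the same:

```python
# Toy illustration (NOT real WebAuthn): the signature covers the origin the
# browser is actually connected to, so a response relayed from a phishing
# proxy's domain fails verification at the real site.
import hashlib
import hmac

KEY = b"device-secret"  # stands in for the key pair inside a hardware token

def sign(origin: str, challenge: bytes) -> bytes:
    return hmac.new(KEY, origin.encode() + challenge, hashlib.sha256).digest()

def verify(expected_origin: str, challenge: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign(expected_origin, challenge), signature)

challenge = b"random-server-challenge"
genuine = sign("https://accounts.google.com", challenge)
proxied = sign("https://accounts-google.sbs", challenge)  # proxy's origin

print(verify("https://accounts.google.com", challenge, genuine))  # True
print(verify("https://accounts.google.com", challenge, proxied))  # False
```

A code typed into a proxy page can be relayed; a signature over the wrong origin cannot.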


For the insurance agent whose account was compromised, there were gaps that made the attack possible:


Google Workspace settings need to be audited. Unauthorized third-party apps can be granted access to an email account through OAuth consent flows — the user clicks "Allow" on what looks like a legitimate authorization screen, and the attacker gets persistent access that survives a password change.


Email forwarding rules need to be checked. Attackers commonly add silent forwarding rules that send copies of all incoming email to an external address. The account owner never sees it. Even after a password reset, the attacker continues receiving everything.


Login audit logs are critical — and time-limited. Google only retains login audit data for a limited time. If you don't pull those logs quickly after a compromise, the attacker's IP address and access timestamps disappear.


The Bigger Picture

The most striking thing about this investigation isn't what I found — it's that nobody else had connected the dots. The German company was targeted over a year ago. They published a warning and moved on. The hosting company had a server running 188 phishing domains for 14 months. The standard abuse reporting channels hadn't worked. It took someone manually tracing one phishing email through the full attack chain, running reverse IP lookups, decoding obfuscated JavaScript, and escalating directly to company leadership to start shutting things down.


That's the reality of cybercrime in 2026. The attackers run automated, scaled operations across multiple countries. The defenders — especially small businesses — do the best they can with limited resources and move on to the next fire. The gap between those two realities is where people get hurt.


What You Can Do Right Now

If you're a small business owner reading this, here's the honest truth: you're probably not going to trace a phishing attack back to a server in Buffalo. You don't need to. What you need to do is close the gaps that make the attack possible in the first place.


Check your MFA. Is it on? What type? If it's SMS-based, it's not enough.


Check your OAuth apps. In Google Workspace, go to Security → API controls and look for any third-party app authorizations you don't recognize.


Check your email forwarding rules. In Gmail, go to Settings → Forwarding and make sure nothing is being sent somewhere it shouldn't be.


Check who still has access. Former employees, old shared passwords, unused accounts — every one is a potential entry point.


Or let us check for you. At Rising Sun Solutions, we offer a Small Business Security Assessment — a fixed-fee review of your email security, account settings, network configuration, and authentication practices. We find the gaps and give you a clear action plan. It typically takes a single visit, and it costs a fraction of what a breach would.

Because the best time to find out your email account is vulnerable isn't after your entire client list gets a phishing email with your name on it. Trust me — I clicked the link, and the only thing between me and a fully compromised account was one security setting I'd configured months earlier. Not everyone is that lucky.

 
 
 