In Part 1, we talked about the hidden risks in website funnels: how malicious code can slip in, go unnoticed, and quietly put businesses at risk.
In this post, we’re taking a closer look at how we handled a real-world incident inside CheckoutChamp.
The issue wasn’t just cleaning code off a few pages. Malicious scripts kept reappearing, pointing to deeper access problems, and the platform itself had limitations that made investigation harder: no MFA support, no IP logs to track suspicious activity, and no API to scan pages at scale.
That meant we needed a different approach. With more than 2,000 funnel pages to review, manual checks alone weren’t enough.
Here’s how we combined manual cleanup, AI tools, and creative workarounds to narrow the surface area, find the bad code, and give the client clarity on what was really happening.
The challenge we faced right away
The funnels in CheckoutChamp weren’t compromised just once; they kept getting reinfected.
Each time the code was cleaned, it came back, which pointed to an access issue rather than a single incident. Someone still had the ability to log in and make changes.
Several platform limitations made it harder to investigate. CheckoutChamp doesn’t support MFA, so accounts were more exposed than they should have been. Our account also lacked access to logs with IP addresses, which meant we couldn’t easily trace where logins were coming from.
And because the platform doesn’t provide an API for scanning pages, there was no straightforward way to automate checks across thousands of funnels.
In short: no MFA, no IP logs, no API. We knew the code was being reintroduced, but without better visibility or stronger access controls, every manual cleanup was just a temporary fix.
The constraints
Working in CheckoutChamp came with a few specific roadblocks. First, we didn’t have full permission to view all the logs, which meant we couldn’t see IP addresses tied to login activity. That limited our ability to confirm who was logging in and from where.
Second, with more than 2,000 funnel pages to review, there was no native way to automate scanning. That left us relying on manual checks and some creative workarounds just to get visibility into the code.
And finally, the platform’s support team didn’t provide clear answers on funnel activity. We could see the problem, but without better logging or tooling from CheckoutChamp, it was difficult to validate changes directly in the platform.
Those gaps shaped the way we had to approach this investigation. The only way forward was to combine manual review with some creative workarounds.
How we approached it
We started with the basics: manually cleaning the malicious code page by page. That gave us an immediate fix, but with more than 2,000 funnel pages to review, it was obvious manual cleanup alone wasn’t going to cut it.
To narrow the scope, we tested AI tools. ChatGPT 4 and 5 were inconsistent, sometimes flagging harmless code and other times missing actual threats. Gemini, however, helped shrink the review surface. It allowed us to filter down the larger set of pages and export the questionable ones into a Google Doc for structured review with the client.
Claude was also tested, though results were mixed.
Since CheckoutChamp doesn’t offer an API for funnel scanning, we looked for workarounds.
One was using the web browser itself to dump all funnel code. This worked for a one-time review, but it wasn’t a scalable option for continuous monitoring. We also explored a “hacky” workaround, discussed in a private channel, that mimicked API-like behavior for scanning.
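To give a sense of the dump idea: since the platform exposes no scanning API, one can fetch each published funnel page’s rendered HTML and save it locally for offline review. The sketch below is a minimal, hypothetical version of that approach; the URL shown is a placeholder, and `dump_pages` assumes you already have a list of public funnel page URLs.

```python
# Hypothetical sketch: dump published funnel pages' HTML for offline review.
# Assumes a known list of public page URLs; no CheckoutChamp API is involved.
import re
import urllib.request
from pathlib import Path

def slug(url: str) -> str:
    """Turn a page URL into a safe local filename."""
    return re.sub(r"[^A-Za-z0-9]+", "_", url).strip("_") + ".html"

def dump_pages(urls, out_dir="funnel_dump"):
    """Fetch each page and write its raw HTML to out_dir for later scanning."""
    Path(out_dir).mkdir(exist_ok=True)
    for url in urls:
        html = urllib.request.urlopen(url, timeout=10).read()
        (Path(out_dir) / slug(url)).write_bytes(html)

# Usage (placeholder URL):
# dump_pages(["https://example.com/funnel/checkout-1"])
```

Once the pages are on disk, any scanner, script, or AI tool can be pointed at the dump instead of the live platform.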
At the same time, access was tightened. A suspicious user was disabled within one of the client accounts, which helped stop the code from being reintroduced.
This layered approach (manual cleanup, AI-assisted review, browser-based code dumps, and account auditing) gave us a way to tackle the problem even without the native tools we’d normally expect.
What we found
The scans and manual reviews confirmed that malicious code had been injected into multiple funnel pages. Much of it looked like ordinary scripts, but closer inspection showed clear indicators of compromise.
One example was code referencing a domain that isn’t a standard CDN or recognized front-end library. The injected scripts were designed to:
- Load resources from suspicious domains.
- Track user actions like clicks and page visits.
- Construct postback URLs for conversion tracking.
- Redirect traffic through affiliate-style networks.
Other scripts pulled in external resources from domains such as s.axon.ai and c.albss.com, further reinforcing the pattern of aggressive tracking and redirection.
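A simple way to operationalize these indicators is an allowlist scan: extract every domain a page references and flag anything outside a set of known-good CDNs. The sketch below is illustrative, not the scanner we used; the allowlist entries are assumptions, while `s.axon.ai` and `c.albss.com` are the domains observed in this incident.

```python
# Hypothetical sketch: flag pages referencing domains outside an allowlist.
# ALLOWED entries are illustrative; SUSPICIOUS entries are from the incident.
import re

ALLOWED = {"cdn.jsdelivr.net", "cdnjs.cloudflare.com", "fonts.googleapis.com"}
SUSPICIOUS = {"s.axon.ai", "c.albss.com"}

DOMAIN_RE = re.compile(r"https?://([A-Za-z0-9.-]+)")

def flag_domains(html: str):
    """Return every referenced domain that is not on the allowlist."""
    found = set(DOMAIN_RE.findall(html))
    return sorted(d for d in found if d not in ALLOWED)

def is_suspicious(html: str) -> bool:
    """True if the page references a known-bad domain."""
    return any(d in SUSPICIOUS for d in flag_domains(html))
```

A scan like this can’t confirm intent, so flagged pages still need human review, but it turns thousands of pages into a short list of candidates.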
Not every flagged page turned out to be malicious. For example, one set of edits appeared legitimate overall, though four specific edits in that group were confirmed as malicious.
That distinction was important for cleanup, since it allowed us to separate false positives from real threats.
The outcome
By combining manual cleanup with AI-assisted review, we were able to significantly reduce the time required to process more than 2,000 pages. What would have taken days of one-by-one checks was narrowed down to a manageable set of pages that could be validated in just a few hours.
A suspicious user account was disabled in one of their clients, which helped stop malicious code from being reintroduced after cleanup. The narrowed list of pages was exported into a Google Doc and shared back with the client, giving them visibility into which edits were malicious and which could be ignored.
While Gemini and other AI tools couldn’t fully replace human review, they played a critical role in shrinking the review surface and prioritizing which pages needed closer attention. This hybrid approach allowed us to move quickly without losing accuracy.
Lessons learned
Several key takeaways stood out during this process.
- Platform limitations matter. No MFA support, no access to IP logs, and no API for scanning made investigation and remediation far more difficult than it needed to be.
- AI helps, but it isn’t enough. Gemini was useful for shrinking the review surface, but manual verification was still required. ChatGPT and Claude produced inconsistent results, sometimes flagging false positives and sometimes missing real issues.
- Cleanup without access control is temporary. As long as unauthorized users retained access, malicious code kept coming back. Disabling the suspicious user within one of their clients was a turning point.
- Visibility is critical. Without logs or continuous scanning, it’s too easy for malicious changes to slip through unnoticed. Even partial access gaps slow down detection and response.
These lessons reinforced the need for layered defenses: strong access controls, better visibility through logs, and the right balance between automated tools and manual review.
Conclusion
In Part 1, we outlined why website funnels are such an attractive target for attackers. This case study showed what happens when those risks turn into a real compromise and how we approached remediation under significant platform constraints.
CheckoutChamp left large gaps in visibility and control. That meant manual review, AI-assisted filtering, and creative workarounds were necessary to uncover and remove malicious code across thousands of pages. Disabling a suspicious user within one of their clients’ networks was also a critical step in preventing reinfection.
By combining different methods, we were able to identify malicious edits, separate them from legitimate ones, and give the client a clear picture of the problem.
This is the reality for businesses operating on platforms with limited security features: once malicious code finds a way in, cleanup is rarely straightforward. NTS’s role is not just to help prevent these compromises, but to step in, investigate, and solve the issue when it does happen.
Need help securing your funnels or investigating suspicious code? Contact us. We’ll help you find it, clean it, and keep it out.