Thanks for joining us on our road to MAU Vegas 2019, where we’ll be presenting our proprietary mobile ad fraud-fighting process! In the weeks to come, we’ll detail war stories and tactics for sniffing out and snuffing out fraud.
So you’ve integrated with a fraud reporting tool, and it’s time to dig into the reports. If you’re anything like us when we first started, you might be feeling a little scared of what you will find. But have no fear; every fraud prevention program has to start somewhere, and looking at the data is definitely the first step.
Some fraud tools are almost exclusively automated, which gives you little control over (or knowledge of) the thresholds that cause traffic to be flagged as risky. The benefit is that this saves you a lot of time, since there's not much you can do on top of these types of tools. The downside, however, is that you are entrusting a black-box tool to make important decisions on your behalf about your campaigns.
The reason this can be concerning is that we have found many false positives in fraud reports that are a result of ad ops errors, technical issues, and data clarity issues. This is why, whenever possible, we recommend reviewing the fraud decisions that your fraud tool is making and looking for the data to back it up.
In this post, we’ll address the three most common technical issues we’ve seen get flagged as fraud – explaining both why each one could indicate real fraud and why it might turn out to be legit.
High click counts per IP address
Why this can indicate fraud: If hundreds of devices and clicks are being reported on the same IP address over the course of one day or week or even month, this is a red flag for potential fraud; the source could be a device farm where hundreds of devices are conducting ad fraud through bot activity on an ongoing basis. Even in the case of shared IP addresses in a public setting such as an airport or an apartment building, you would not expect to see the types of volume that get flagged.
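As a rough illustration, this kind of per-IP volume check can be sketched in a few lines of Python. The threshold and the sample data here are hypothetical, not values any particular fraud tool uses:

```python
from collections import Counter

# Hypothetical sample of one day's click events: (ip_address, device_id).
# One IP produces hundreds of clicks across many devices; two others look normal.
clicks = [("203.0.113.7", f"device-{i}") for i in range(500)]
clicks += [("198.51.100.4", "device-a"), ("198.51.100.5", "device-b")]

# Hypothetical daily threshold above which an IP gets flagged for review.
CLICKS_PER_IP_THRESHOLD = 100

clicks_per_ip = Counter(ip for ip, _ in clicks)
flagged_ips = {ip for ip, n in clicks_per_ip.items() if n > CLICKS_PER_IP_THRESHOLD}

print(flagged_ips)  # {'203.0.113.7'}
```

In practice you'd run this over attribution-log exports and then investigate each flagged IP before blocking anything – as the next example shows, a high-volume IP can have an innocent explanation.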
What could be happening instead: Upon reviewing some of the IP addresses flagged for high click counts, we learned that one of the IP addresses in question was the creative server for an ad network; this was just how they set up their technology. Since the source was legitimate, we just made an exception and whitelisted this network and this IP address.
It’s always worth a quick investigation. Sometimes you’ll get an answer that makes sense, and other times you will get no answer or an answer that still has you scratching your head. You be the judge of what makes sense to you – and of course, whenever you can get proof, ask for it.
Geolocation distance or mismatch
Why this can indicate fraud: If a large portion of installs from a particular app or site have a geolocation that is significantly far from (or in a different country than) where the click took place, this can signal bot activity that is attempting to obfuscate its location.
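A minimal sketch of such a distance check, using the standard haversine great-circle formula; the coordinates and the flagging threshold below are hypothetical examples, not values from any specific fraud tool:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

# Hypothetical click vs. install coordinates (San Francisco vs. New York).
click_geo = (37.7749, -122.4194)
install_geo = (40.7128, -74.0060)

DISTANCE_THRESHOLD_KM = 500  # hypothetical flagging threshold

distance = haversine_km(*click_geo, *install_geo)
is_suspicious = distance > DISTANCE_THRESHOLD_KM
```

Note that a check like this silently depends on the geolocation actually being present on the click – which is exactly where things can go wrong, as the example below shows.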
What could be happening instead: When we saw 45% of traffic being flagged from a single ad network, we investigated and found an issue with the way click data was being sent to the attribution platform. It turned out the ad network was sending click data (including geolocation) via IPv6, which was not supported by the attribution platform. Therefore the geolocations were entirely missing on the click, causing these installs to be flagged as fraud.
Hijacked click trackers and high click counts
Why this can indicate fraud: When a click tracker is hijacked from a campaign, it usually means that a third party is taking the click tracker and committing click fraud with it. You won’t know by looking at a fraud report that a click tracker was hijacked, but what you will see is usually high click counts per device or a very low click to install conversion rate on a source.
What could be happening instead: When we’ve investigated high click counts on an app or site, we’ve occasionally found that a click tracker was hijacked from the network and campaign it belonged to and was used by a third party to deliver excessive clicks. The way we’ve been able to confirm this is to compare the number of click IDs generated in the MMP vs. the number of click IDs generated from the ad network’s data. The ratio should be 1:1, but if the MMP is generating far more click IDs for a particular tracker than the network is generating, it’s likely that the click tracker was hijacked.
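The ratio comparison described above can be sketched as follows. The click-ID sets and the ratio threshold are hypothetical stand-ins for the exports you would pull from your MMP and your ad network:

```python
# Hypothetical click-ID sets pulled for the same tracker over the same period:
# the MMP recorded 1,000 clicks, but the network only logged 200.
mmp_click_ids = {f"click-{i}" for i in range(1000)}
network_click_ids = {f"click-{i}" for i in range(200)}

ratio = len(mmp_click_ids) / len(network_click_ids)

# A healthy tracker should sit close to 1:1. A large excess on the MMP side
# suggests a third party is firing the hijacked click tracker.
HIJACK_RATIO_THRESHOLD = 2.0  # hypothetical review threshold

likely_hijacked = ratio > HIJACK_RATIO_THRESHOLD
```

Here the MMP saw five times as many clicks as the network sent, which would warrant a conversation with the network about where the extra clicks came from.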
These are just a few examples of what you can learn when you start asking questions and looking at the data. Try to take a logical, curious approach and get the whole team talking to each other (ad networks + attribution platform) in order to share all the data and get to the best resolution. Sometimes you’ll reach an explanation that makes sense to you; other times you will disagree. Ultimately it’s up to you to decide which action to take based on the data and the investigation. Good luck!