
The bad, the ugly, and the truth of mobile ad fraud

Ad fraud can decimate ad budgets, skew data, and create a feedback loop that’s a race to the bottom. On average, 4% of an app’s paid user acquisition budget is taken by fraud, and we’ve seen outliers with up to 90% of budgets stolen. Our Fraud Prevention Suite currently rejects around 400,000 installs per day (not counting 200,000 SDK signature rejections) - around $2m per day in saved budgets.

With our Fraud Prevention Suite, we were the first in the industry to take fraud seriously, and we remain the only attribution service to take it on proactively. As mobile fraud grows more sophisticated, we’ve strived to educate marketers on its dangers. This expert guide is one of a number of ways we can help you learn more about this pressing issue.

Here, we highlight five sources of mobile ad fraud, identifying what they are, how they work, and what damage they can do to your mobile campaigns, datasets, and more. This guide will help you identify, and get closer to solving, tangible problems faced by all mobile companies. Fraud is a deep-seated issue, one the entire industry needs to face up to before it can be dealt with. Education is the first step in getting rid of it for good.

Mobile ad fraud is a problem that Adjust has been fighting for a long time.


Table of contents

SDK Spoofing
  How SDK Spoofing works
  The evolution of SDK Spoofing
  Adjust’s SDK Spoofing solution
Click Injections
  What does this mean for marketers?
  Let’s talk about “Install Broadcasts”
  A common misconception on how to deal with this type of fraud
  A new level of sophistication
  How Adjust deals with Click Injections
Click Spam
  How fraudsters poach organic users
  The impact of Click Spam
  Fighting Click Spam
  Removing spammers from the data set
Fake Installs
  How fraudsters take advantage of data centers
Fake In-App Purchases
  Understanding Fake In-App Purchases
  The impact of Fake In-App Purchases
  Fighting back against Fake In-App Purchases
  Choosing the right approach
  Filtering Fake In-App Purchase data
A call to arms


SDK Spoofing

The TL;DR

Install characteristics: The users aren't real, and the engagement is completely fake.

Signs you're at risk: The SDK version and app version of incoming installs don't match the latest version you've released.

How to fix it: Adjust's SDK signature.

SDK Spoofing (also known as ‘replay attacks’) is a type of fraud that generates legitimate-looking installs without any real installs occurring, in order to steal from an advertiser’s user acquisition budget.

SDK spoofing is hard to spot because fraudsters utilize real devices, which are normally more active and spread out than fraud perpetrated en masse in a single location.

The scheme originated with fraudsters simulating installs via data centers. That approach is challenging to pull off, requiring them to consistently create new IP addresses to keep their fraud secret.

How SDK Spoofing works

In order to perform SDK Spoofing, a fraudster has to break open the SSL encryption of the communication between a tracking SDK and its backend servers, typically by performing a ‘man-in-the-middle attack’ (MITM attack).

After completing the MITM attack, fraudsters generate a series of test installs for the app they want to defraud. Since they can read the URLs in clear text, they can learn which URL calls represent specific actions within the app, such as first open, repeated opens, and even different in-app events like purchases or leveling up. They also research which parts of these URLs are static and which are dynamic, keeping the static parts (things like shared secrets and event tokens) and experimenting with the dynamic parts, such as advertising identifiers.

These days, with callbacks and near real-time communication detailing the success of installs and events, the perpetrators can test their setup by creating a click and matching it to an install session. If it’s successfully tracked, they know they’ve nailed the logic. As such, SDK Spoofing is simple trial and error with only a couple dozen variables.

Once an install is successfully tracked, the fraudsters have figured out a URL setup that allows them to create fake installs at will.

The evolution of SDK Spoofing

At the very first signs of this new type of fraud, we began recording, researching, and taking defensive steps. The fastest short-term action was to release hotfixes to our attribution, removing spoofed install data based on faulty use of our parameter structures that did not match their intended purpose.

Fraudsters had a lot to lose, so they raised the bar of sophistication, evolving to match our measures. Fraudulent device data started to match data from real-device traffic and became consistent over a multitude of (and, later, all) device-based parameters. How was this possible, if everything was fake?

The simple answer is: not everything is fake. Fraudsters can collect real device data by using their own apps or by leveraging an app they have control over. The intent of their data collection is malicious, but that does not mean that the app being exploited for data is itself malicious. The perpetrator’s app might have a very real purpose, or it might be someone else’s legitimate app that the perpetrators can access by having their SDK integrated within it.

This could be any type of SDK, from monetization SDKs to any closed-source SDK where the information being collected isn’t transparent. Regardless, fraudsters have access to apps used by a (for them, conveniently) large number of users.

Having a source that generates real device data makes the fraudsters' task simple. They no longer need to randomize or curate troves of data, because they have access to the real thing. This has made it difficult on the anti-fraud side to research and identify these spoofing attempts.

Adjust’s SDK Spoofing solution

Releasing hotfixes to stop this threat became increasingly difficult. In radical cases, we had to manually research hundreds of thousands of data points to prove that installs were in fact fake, giving our clients a chance to recuperate their lost budgets. Throughout this time, we worked on a solution that would stop this fraud scheme dead in its tracks.

To combat SDK Spoofing, we created a signature hash to sign SDK communication packages. This method ensures that replay attacks do not work: we introduced a new dynamic parameter to the URL which cannot be guessed or stolen, and which is only ever used once. To achieve a reasonably secure hash and an equally reasonable user experience for our clients, we opted for an additional shared secret, generated in the dashboard for each app the client wants to secure.

Marketers can also renew secrets and use different ones for different version releases of their app. This allows them to deprecate signature versions over time, making sure that attribution for the newest releases is based on the highest security standard while older releases can be fully removed from attribution.
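As a sketch of the principle behind signed SDK traffic (illustrative only - the parameter names and hashing scheme here are assumptions, not Adjust's actual signature implementation), a keyed hash over the payload plus a single-use nonce makes replayed packets detectable:

```python
import hashlib
import hmac
import secrets
import time

# Hypothetical per-app shared secret, generated in the dashboard.
APP_SECRET = b"app-level-shared-secret"

def sign_payload(params: dict, secret: bytes = APP_SECRET) -> dict:
    """Client side: attach a single-use signature to an SDK request payload."""
    signed = dict(params)
    signed["nonce"] = secrets.token_hex(16)       # dynamic parameter, never reused
    signed["sent_at"] = str(int(time.time()))
    message = "&".join(f"{k}={signed[k]}" for k in sorted(signed))
    signed["signature"] = hmac.new(secret, message.encode(), hashlib.sha256).hexdigest()
    return signed

def verify_payload(signed: dict, seen_nonces: set, secret: bytes = APP_SECRET) -> bool:
    """Server side: recompute the hash and reject tampered or replayed packets."""
    payload = {k: v for k, v in signed.items() if k != "signature"}
    message = "&".join(f"{k}={payload[k]}" for k in sorted(payload))
    expected = hmac.new(secret, message.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signed["signature"]):
        return False                               # tampered or wrongly keyed
    if payload["nonce"] in seen_nonces:
        return False                               # replay: nonce already used
    seen_nonces.add(payload["nonce"])
    return True
```

A spoofer who captures one signed request cannot replay it (the nonce is burned) and cannot forge a new one without the shared secret, which never travels over the wire.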

Click Injections

The TL;DR

Install characteristics: The users are real, and most likely organic. However, fraudsters have been known to steal the last engagement from one of the networks you're running a campaign with.

Signs you're at risk: Attributions to clicks that happen after the user decided to download the app.

How to fix it: Leveraging Google's newest referrer API and its new `install_begin_time` timestamp, which provides you with the precise time of download.

Click injections are a sophisticated form of click spam. By publishing or gaining control over an Android app which utilizes the Android OS’s “broadcast intents” or the Android content provider to find out when an install happens, fraudsters can detect when other apps are installed on a device. They can then trigger a click right after the install completes, receiving the credit for (usually organic) installs.

In other words, a fraudster uses an app to hijack the user’s device at just the right time – and with just the right information – to create a legitimate-looking “ad click”, which then nets them CPI payouts.

What does this mean for marketers?

The scheme siphons off advertising budget that could have been used to reach more people. It also means that conversions lead marketers to believe that some paid campaigns perform better than they actually do.

Once fraud enters marketing data, the numbers-driven conclusions marketers reach are based on data that contains inaccuracies. This turns into a vicious cycle: advertisers keep investing in advertising that’s relatively ineffective, potentially diverting money from better-placed and better-designed campaigns.

Let’s talk about “Install Broadcasts”

Every Android app broadcasts changes to the device it’s installed on, including changes involving other apps. These status broadcasts are sent when apps are downloaded, installed, or uninstalled.

The feature is useful for creating a tight connection between different apps, allowing them to (for example) streamline logins with a deep link to a recently installed password manager, or give users more direct options to transfer into a specific web browser.

Any app can “listen in” on these broadcasts, and it’s a system fraudsters have learned to exploit - finding out when apps are installed, and injecting themselves as the new source of the install microseconds after a new app makes it onto the device.

A common misconception on how to deal with this type of fraud

Industry-wide, it seems that the best way to fight click injection is to reject all attributions for any install that happens within a few seconds of the click. The idea is that it's practically impossible to download and open an app (typically over 100MB in size) in such a short window.

[Figure: number of installs (0 to 5,000) plotted against click-to-install time in seconds]

However, there are a few problems with this approach - namely, it doesn’t cover the content provider exploit. So, let's look at the two defining KPIs for any filter: false positives and false negatives.

You might say that false positives should be impossible, since no one can download an app faster than is technically feasible. But consider the edge cases. If you restore a freshly purchased phone with all of your former apps and then open one of them for the first time after clicking an ad, you will be counted as a new install (due to your new advertiser ID) with only a few seconds between click and install. Installing an app via the desktop-to-mobile function produces the same effect. Pre-installs, advertiser ID resets, and a couple of other cases also produce installs with seemingly impossible click-to-install times. On iOS, where we are not aware of any click injection, we still see 1.4% of all installs happening within the first 10 seconds.

It's fair to argue that labeling such installs as fraud, when they should be rewarded the same as any authentic new install, would be a disservice to advertisers: app businesses would perceive networks or sub-publishers as fraudulent even though they have done nothing wrong.

[Figure: installs by click-to-install time (5 to 115 seconds), with series for click injections missed by a short-window filter (false negatives) and genuine UA installs it rejects (false positives)]

How about false negatives? It turns out that this is where the approach really falls apart. To explain, we need to understand in detail how click injection works. First, we split the user activity into a few steps: tapping the download button, completing the app download, completing the install, and opening the app.

The broadcast the fraudulent app listens to is triggered once the app download is complete. (Note that some of our competitors have devised filters that look at the timestamp of the first app open.)

It's clear where the main issue lies: on one hand, there will always be a few seconds between the download of the app and the time the device has readied it for use. On the other hand, few users actually open apps within a few seconds of the download. Depending on the size of the download, they may check Facebook or chat on WhatsApp, and only open the new app a few moments later.

On iOS we can see that less than 10% of all users open an app within the first 10 seconds of the download completing. This means a 10-second rejection threshold would have a false negative rate of around 90%.

By all measures this is a poor result, and trying to fight click injections with this rejection scheme actually plays into the hands of the fraudsters: they can optimize their operation against the threshold, keeping 90% of the revenue intact, while advertisers stop looking further into the fraud scheme because they believe they are protected. This is the worst-case scenario for everybody but the fraudster.
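The threshold trade-off above can be put in toy-model form. The CTIT (click-to-install time) samples and the 10-second cutoff below are made up for illustration; the point is only that any fixed cutoff misses every injected click whose install completed later than the threshold:

```python
def naive_ctit_filter(ctit_seconds, threshold=10):
    """Reject installs whose click-to-install time is under `threshold` seconds."""
    rejected = [t for t in ctit_seconds if t < threshold]
    accepted = [t for t in ctit_seconds if t >= threshold]
    return accepted, rejected

def false_negative_rate(fraud_ctits, threshold=10):
    """Share of fraudulent installs that slip past the threshold filter,
    because the user only opened the app long after the injected click."""
    missed = sum(1 for t in fraud_ctits if t >= threshold)
    return missed / len(fraud_ctits)

# Hypothetical CTITs for ten installs whose clicks were all injected:
fraud_ctits = [3, 45, 120, 15, 300, 8, 60, 90, 200, 30]
# Only the 3s and 8s installs get caught; the other eight pass the filter,
# an 80% false negative rate in this toy sample.
```

A fraudster who knows the threshold simply optimizes against it, which is why a distribution-wide view (as discussed later for click spam) beats any single cutoff.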


A new level of sophistication

There’s a second method by which scammers take advantage of the system. We investigated an app suspected of bypassing the default method of click injection: we monitored malicious apps to find out when they injected clicks, then reverse engineered those apps to reveal their methodology.

In the past, the straightforward method was to listen for newly installed apps, wait for one to be installed, and then fire clicks. Android provided an out-of-the-box way for an app to know when a new app was installed on the system (through the PACKAGE_ADDED broadcast).

However, this method had a major flaw: the injected click’s timestamp did not line up with the download-start and install-complete timestamps, which made it a very easy type of fraud to detect.

So, instead, fraudsters managed to listen in on newly started downloads before the app completed its installation - something we caught wind of and created a new filter for, as we look into below.

How Adjust deals with Click Injections

At the end of 2017, Adjust released a new way of preventing click injections from affecting our clients’ ad campaigns. This became possible when Google switched to a new referrer API and exposed timestamps that prove the user’s intent: they show what the user wanted before the click injection intercepted the install.

Our click injection filter now denies the attribution of installs to sources that deliver a click in between the available timestamps: `install_begin_time`, `install_finish_time`, and the first open. This prevents click injections of both varieties from touching anyone’s ad spend.
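Assuming the Play referrer timestamps are available as Unix seconds, the rejection logic can be sketched roughly like this (an illustration of the idea, not Adjust's production filter):

```python
def should_reject_click(click_ts, install_begin_ts, install_finish_ts, first_open_ts):
    """
    A legitimate ad click must happen before the user starts the download.
    A click timestamped inside the install window (the content provider
    exploit) or between install completion and first open (the broadcast
    exploit) is a strong injection signal, so its source is denied attribution.
    All arguments are Unix timestamps in seconds.
    """
    injected_during_install = install_begin_ts <= click_ts <= install_finish_ts
    injected_before_open = install_finish_ts < click_ts <= first_open_ts
    return injected_during_install or injected_before_open
```

Because the decision uses the user's own download timeline rather than a fixed time window, it catches both injection varieties without rejecting the legitimate fast installs discussed above.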


Click Spam

The TL;DR

Install characteristics: Real, organic users attributed to dodgy “paid” sources.

Signs you're at risk: Paid installs from certain sources behaving eerily similarly to organic counterparts. A flat distribution of installs over the length of the campaign. Low to extremely low conversion rates.

How to fix it: Distribution modeling.

Organic installs have incredible value for app businesses. They’re the users who download an app without having interacted with any sort of advertisement, and have probably done so out of their own interest, or through a word-of-mouth recommendation.

Organic users are generally higher quality than other users: they tend to try apps for longer and can have a higher lifetime value than their paid counterparts.

Tracking the number of organic users coming and going from an app is normally a great way to understand the app’s overall health. However, this changes when fraudsters try to claim organic users as their own.

Operating on the edge of acceptable practice, unscrupulous publishers use shady techniques to take credit for organic users. This means an app business can be tricked into overweighting the importance of a fraudulent traffic source, and it also makes apps pay for users who installed organically.

This practice has two names: ‘organics poaching’ and ‘click spam’.

How fraudsters poach organic users

Organics poaching starts when a user lands on a mobile web page or app that a fraudster operates. From there, any one of several kinds of fraud could take place:

• The mobile web page could be executing clicks in the background, without visible ads or ads that can be interacted with.

• The spammer could begin clicking in the background while the user engages with their app, making it look as though the user has interacted with an advert.

• The fraudster could generate clicks at any time if they run an app that stays in the background 24/7 (e.g. launchers, memory cleaners, battery savers, etc.).

• The fraudster could send impressions-as-clicks to make it look as if a view has converted into an engagement.

• The spammer could blatantly send clicks from made-up device IDs to tracking vendors, or from retargeting lists obtained from other advertisers.

What unites these approaches is that the user is not aware they’ve been registered as interacting with an advert. In actual fact, they never saw anything.

As a result, the user may install an app organically, but a fraudster will claim they’ve seen an advert – meaning the conversion will be attributed to a source that had nothing to do with the install.

The impact of Click Spam

Click spamming is insidious because it captures organic traffic and then claims credit for the user later.

This has a few profound effects on an advertiser, the most obvious being that they pay for a user who actually installed organically.

Beyond the lost spend, there are further effects. First, and related to the previous point, the fact that the advertiser does not know they’ve paid for an organic user skews a number of interrelated metrics.

It understates the number of organic users the app is generating, which affects internal cohort analysis and downplays the impact of marketing that generates organics, such as ASO, branding, and press outreach - channels that may have been cannibalized through click spamming.

Organics poaching also threatens the certainty of acquisition decisions. If an advertising network is claiming organic users, and these users perform well within an app, the advertiser will likely decide to invest in that channel to acquire more of the same type of user. This creates a circular problem, where the advertiser continues to pay someone else for users they’ve already acquired completely naturally (or through other marketing channels) until they realize the mistake.

It also has the potential to affect targeting decisions across the whole business. While those organic users will undoubtedly be good quality, their presence in the paid acquisition cohorts will tempt a marketer to pay for advertising in other channels that target these groups - despite the fact that these groups might well download the app without the prompt of an advert at all. The advertiser ends up wasting time and money chasing users who could be reached in other ways.

These investments come at the expense of other channels. Campaigns largely unblemished by fraudulent conversions won’t appear to be doing as well as those populated by poached organics. The missing ROI on relatively fraud-free channels poses an opportunity cost to the advertiser: budget that could have chased truly promising user cohorts is tied up with fraudulent channels instead.

Click spamming might seem like a relatively small thing to deal with. But if it isn’t spotted early, it can seriously pollute an entire app’s attribution efforts - leading advertisers astray and causing them to waste significant time chasing users they’ve already acquired.

Fighting Click Spam

It’s impossible for advertisers to combat click spamming on the front line, as it’s down to publishers to stop engaging in the practice.

However, advertisers can catch click spamming when it happens by looking for a simple pattern. During our investigations into the problem, we discovered a clear difference in how genuine advertising clicks are distributed over time versus those of click spammers.

For a genuine traffic source, installs are attributed with a normal distribution. The precise shape and size of the distribution will vary, but the pattern from a trustworthy source is essentially a hefty number of installs in hour one, followed by a rapid tapering of performance.

[Figure: install count versus time after click for three cases: display marketing, click spam, and distribution modeling]

Sources of click spamming behave differently. Fraudulent installs are distributed flatly, because the spammer can trigger the click but not the install; installs (and click-to-install times) therefore follow a random distribution pattern.

This means it is possible to weed out click spammers after the event. A better way, however, is to refuse to attribute installs to traffic sources that claim traffic with a flat distribution - a proactive way for advertisers to fight back against spammers.

Removing spammers from the data set

Once an advertiser can identify spammers, they can begin to remove their influence.

It’s very difficult to totally negate the effect of spammers on a mobile marketing campaign. Networks try their best to remove spammers from their offerings, but the scope and scale of the mobile app ecosystem means there is always the potential for a spammer to slip through.

Instead of trying to eradicate the problem entirely at the source, businesses advertising on mobile need to push back against spammers with the help of attribution. The simplest step (on paper) is to refuse to pay any spammer claiming traffic that matches a click spamming pattern.
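A minimal version of such distribution modeling might look like the following sketch. The 24-hour window and the 25% first-hour threshold are assumptions for illustration, not Adjust's actual model:

```python
from collections import Counter

def looks_like_click_spam(ctit_hours, bins=24, flatness_threshold=0.25):
    """
    Heuristic distribution check over a source's click-to-install times
    (in hours). Genuine sources convert mostly within the first hour
    after the click; spammed clicks produce installs spread almost
    uniformly across the attribution window, so a very small first-hour
    share flags the source as a likely click spammer.
    """
    counts = Counter(min(int(h), bins - 1) for h in ctit_hours)
    total = sum(counts.values())
    first_hour_share = counts.get(0, 0) / total
    # A flat distribution puts roughly 1/bins of installs in every bucket,
    # far below what a front-loaded genuine source produces.
    return first_hour_share < flatness_threshold
```

A production system would compare the full shape of the distribution (e.g. with a goodness-of-fit test) rather than a single bucket, but the first-hour share already separates the two patterns described above.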

Fake Installs

The TL;DR

Install characteristics: As the name implies – completely fabricated users that only exist to trigger installs based on fraudulent advertisements.

Signs you're at risk: A high level of installs with instant drop-off after the click.

How to fix it: Filtering targeted IPs based on blacklisted locations.

‘Fake install’ is a broad term describing when a fraudster tricks an attribution partner into tracking an install that hasn’t taken place on a real device, attributing it to a paid source.

To accomplish this, fraudsters use emulation software to fake installs in an effort to claim advertising revenue. Fake installs defraud everyone along the advertising chain - taking money away from advertisers, publishers, and networks. In a traffic flow sample of over 400m installs over 17 days, we estimated that $1.7m worth of installs were being paid out to fraudsters faking installs.

How fraudsters take advantage of data centers

Fraudsters run device emulation software inside data centers. They program scripts that make the emulator create a new random device with a fresh advertising ID.

On that device, they create a user and have that user engage with advertisements. The emulated device downloads the target app from an app store (or from local storage, to cut down on traffic costs), triggering an install. Finally, the emulated device opens the installed app so it can trigger an install event, which is then transmitted to the attribution provider.

Sophisticated fraudsters might even go as far as storing the session for later use, in order to fake third- or seventh-day retention by opening another session at the desired time.

The principal effect of fake installs on an advertiser is that they introduce fake or misleading data into the marketing funnel - an issue that goes beyond the lost spend, which affects everyone.

This can cause the advertiser significant problems. Fake installs will, for example, register as users who go inactive immediately after completing the install (or reaching the post-install quality goal). If these installs are attributed but not identified as fake, such behavior can begin to damage metrics like retention rates. This can drag down other metrics, such as lifetime value, causing a rippling effect that damages numbers across the entire funnel.

The extraneous installs can also inflate click-to-install conversion rates, potentially making certain channels appear to deliver more value than they truly do. This can lead the advertiser to conclude that channels containing some degree of fraudulent conversions have better ROI than channels where all the users are legitimate. Alternatively, marketers may recognize that something strange is happening and discard the channel altogether, losing out on the value of its legitimate users.

When it comes to a solution, we can rely on one key insight: fraudsters run these emulators in a data center, and typically route the traffic through the TOR network or a VPN to “place” the conversion in high-value markets.

In most instances, when a user downloads an app from a mobile advert, their smartphone’s IP should be drawn from a pool of IPs associated with a carrier (if they’re on mobile data) or with an internet provider (if on wifi). So when a user’s IP is associated with a data center or an identity-masking server - a proxy, VPN, or TOR exit - it is likely an attempt to deliberately defraud the campaign.

IPs belonging to these types of locations, known as “anonymous IPs”, can be filtered from attribution to paid sources, preventing them from polluting data sets. This prevents the majority of emulator-driven fraud before it begins and significantly reduces the impact of one data center manipulation tactic.
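In code, the filtering step might look like the following sketch. The IP ranges below are reserved documentation ranges standing in for a real anonymous-IP database, which in practice would come from a commercial provider rather than a hard-coded list:

```python
import ipaddress

# Illustrative stand-ins for data-center / VPN / TOR exit ranges.
ANONYMOUS_RANGES = [
    ipaddress.ip_network("192.0.2.0/24"),     # TEST-NET-1, playing a data center
    ipaddress.ip_network("198.51.100.0/24"),  # TEST-NET-2, playing a VPN exit range
]

def is_anonymous_ip(ip: str) -> bool:
    """True if the claiming IP falls inside a known anonymous range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in ANONYMOUS_RANGES)

def filter_installs(installs):
    """Drop installs claimed from anonymous IPs before paid attribution."""
    return [i for i in installs if not is_anonymous_ip(i["ip"])]
```

Filtering happens before attribution, so the rejected installs never enter the paid data set in the first place.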

Fake In-App Purchases

The TL;DR

Install characteristics: Potentially high in-app spend, or frequent purchases per user.

Signs you're at risk: Unmatched purchase receipt codes.

How to fix it: Purchase Verification.

Up until now, we’ve been talking about fraud coming from fraudulent publishers. Fake in-app purchases are a bit different: essentially, they’re an end-user issue, although they are still a form of fraud.

A fake in-app purchase is an instance where an in-app purchase (IAP) was registered but no revenue was exchanged. Adjust figures, based on a sample of millions of iOS devices, suggest that 30% of attempted IAP spends on iOS are fake.

Developers’ main concern about fake in-app purchases has been how much potential revenue they’re missing out on. However, the impact on a business isn’t just monetary. It’s also about how fake purchases (and the people who make them) damage the successful operation of a free-to-play app.

Fake in-app spending is an ingrained and widespread practice of global proportion, whereby users who want to get ahead of the competition (or gain extra content) take advantage of app systems to do so.

Understanding Fake In-App Purchases

With a typical purchase, an app sends a purchase receipt to the app store provider’s server. Once that receipt is sent, the store sends back another receipt to confirm that the purchase took place.

In the simplest terms: if a purchase takes place, there should be a receipt; if it doesn’t, there won’t be one.

Fraudulent purchases operate differently. First, there’s the man-in-the-middle technique. In this instance, the app is tricked into sending its purchase receipt to a proxy server. This man-in-the-middle server then impersonates the store, sending back a fake receipt in its place. The app is fooled, since it receives what looks like a valid receipt from what it thinks is the real app store server.

Second, and more commonly, pirates hijack the API call to send a fake receipt. Using cracked code inserted into jailbroken or rooted devices, the phone itself sends back a fake receipt pretending to be the store. Though the precise details vary, the key point is that the code fakes that all-important store receipt.

The impact of Fake In-App Purchases

The biggest impact of fake in-app spend is the way it skews cohort analysis. Users or players who are able to fake an in-app purchase are not self-controlled individuals capable of pinching lightly from the jar; they’re individuals who will take as much as they can whenever possible.

This causes problems for app businesses when it comes to identifying and attributing valuable users. For example, imagine an app maker without an extensive attribution setup acquires one hundred users from one traffic source, ten of whom repeatedly make fake in-app purchases. The business is now at risk of two things. First, there’s the risk of a significant gap between the revenues the company tracks in-house and those reported by the store. This can cause confusion when analyzing the app’s performance and allocating budgets.

More importantly, and a bigger risk to the business, there’s a real chance the marketing team could misidentify those users as valuable. The company might invest in a traffic source that supplies a number of in-app cheaters, or even build a user persona that matches the personality of cheaters – increasing the likelihood of costly mistakes down the line.

There’s also another major impact fake in-app purchases can have on an app: the damage to its community. Developers of free-to-play mobile games often spend time striking a careful balance between providing benefits to big spenders and supporting free players. When players make fraudulent in-app purchases to get free coins or in-game extras, it can knock the in-game economy off balance, leaving legitimate users less happy with their experience and more likely to leave.

To combat fake in-app purchases, app businesses need to be able to check purchase receipts against store receipts in real time, typically on a controlled server environment that users cannot fool, in order to verify when and where spend takes place.

There are a few main solutions to this problem, but they often depend on which operating system we're talking about. Let's take a look at the issue on each respective platform.

On iOS

Simple server-to-server receipt validation

In this approach, the app developer sends the receipt from the app through their own server just before the purchase is tracked in their analytics. Apple's servers then respond to the request with a status code telling you whether the purchase was real.

The benefit of this approach is that it's pretty simple to integrate. But for this process to mean anything, businesses have to correctly attribute and track users so they can be blocked from the app or removed from cohort analysis.
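As a minimal sketch of this simple validation flow, assuming Apple's `verifyReceipt` endpoint (real, though Apple now also offers a newer App Store Server API): the server POSTs the base64-encoded receipt and treats status `0` as a confirmed purchase. Error handling, retries, and the shared-secret password used for subscription receipts are omitted.

```python
import json
import urllib.request

# Apple's verifyReceipt endpoints (production and sandbox).
PRODUCTION_URL = "https://buy.itunes.apple.com/verifyReceipt"
SANDBOX_URL = "https://sandbox.itunes.apple.com/verifyReceipt"

def verify_receipt(receipt_b64: str, url: str = PRODUCTION_URL) -> dict:
    """POST the base64-encoded receipt to Apple and return the parsed JSON response."""
    payload = json.dumps({"receipt-data": receipt_b64}).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def is_purchase_valid(response: dict) -> bool:
    """Status 0 means Apple confirmed the receipt; anything else is suspect.

    Note: status 21007 means a sandbox receipt was sent to production --
    a real app should retry against the sandbox endpoint before rejecting
    the user outright.
    """
    return response.get("status") == 0
```

Only a status of `0` should allow the purchase to count in analytics; everything else should be logged and excluded.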

Full server-side receipt verification

This works in a similar way to simple server-to-server receipt validation, but adds an extra layer of complexity to sanity-check installs. By looking at the device ID and location for oddities (e.g. a device located in Germany submitting a receipt from Taiwan), full server-side receipt verification provides an extra layer of security. To check each and every field of an in-app purchase receipt, developers need to implement specific decoding and decryption methods in their app.
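One piece of that extra sanity-checking layer could look like the hypothetical helper below, which compares the device's country (from its locale or GeoIP, data the app's own server is assumed to have collected) against the country on the store receipt. The function name and inputs are illustrative, not part of any real SDK.

```python
def receipt_looks_consistent(device_country: str,
                             receipt_country: str,
                             allowed_mismatches: frozenset = frozenset()) -> bool:
    """Flag receipts whose store country does not match the device's
    country (e.g. a device in Germany submitting a receipt from Taiwan).

    Some mismatches are legitimate (travel, VPNs, expats), so an explicit
    allow-list is supported rather than hard-rejecting every mismatch.
    """
    if device_country == receipt_country:
        return True
    return receipt_country in allowed_mismatches
```

A mismatch alone shouldn't trigger an automatic ban; it's one signal to combine with the receipt's cryptographic checks.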

On Android

On Android, server-side receipt verification is tough because Android users access a multitude of stores across the world. This makes it much harder to add that extra layer of complexity, as it has to be matched against the practices of potentially dozens of different Android stores (particularly in a country like China).

Simple server-to-server receipt validation

This means it makes more sense to use the same simple approach outlined for iOS. It might lack the extra complexity, but it will help most companies begin tackling the issue in a meaningful way.
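For Google Play specifically, the equivalent simple check is a server-side lookup of the purchase token against the Google Play Developer API (`purchases.products.get` in androidpublisher v3, a real endpoint). The package name, product ID, purchase token, and OAuth2 access token below are all placeholders; a real integration would authenticate with a service account.

```python
import json
import urllib.request

# Google Play Developer API (androidpublisher v3) endpoint for
# one-time product purchases.
VERIFY_URL = ("https://androidpublisher.googleapis.com/androidpublisher/v3/"
              "applications/{pkg}/purchases/products/{sku}/tokens/{token}")

def fetch_play_purchase(pkg: str, sku: str, token: str,
                        access_token: str) -> dict:
    """Look up a purchase token server-side and return the ProductPurchase
    resource as a dict. access_token is an OAuth2 token for a service
    account with Play Console access (placeholder here)."""
    req = urllib.request.Request(
        VERIFY_URL.format(pkg=pkg, sku=sku, token=token),
        headers={"Authorization": f"Bearer {access_token}"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def is_play_purchase_valid(purchase: dict) -> bool:
    # purchaseState 0 means "purchased"; 1 is cancelled, 2 is pending.
    return purchase.get("purchaseState") == 0
```

Third-party Android stores each have their own verification APIs (or none), which is exactly why the extra layer is hard to build consistently.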

Fighting back against Fake In-App Purchases

Choosing the right approach

For some businesses, fake in-app purchases will be a significantly bigger problem than for others. It's easy to guess that free-to-play games are affected the most, and so will have to use multi-layered anti-fraud techniques to beat the issue. For other verticals, server-to-server receipt validation may be enough.

There will also be less need for purchase verification if the app processes payments using credit cards or PayPal. The higher levels of security built into these systems can prevent the majority of IAP fraud, though these payment routes need to be carefully monitored to ensure other types of fraud don't take place at the same time.

As for businesses worrying that fake in-app spends might be a serious problem, the main consideration for which approach to take will depend on the popularity of the app.

Apps with a smaller audience will have fewer players willing to bend the rules, as there is less incentive to cheat to get ahead. This means that simpler integrations will be able to complete the job at hand, allowing the team to focus on different priorities.

But when an app or game becomes a hit, and particularly when it reaches territory where piracy is prevalent, opting for a detailed verification process on the server side becomes a much higher priority.

Filtering Fake In-App Purchase data

Whichever approach is taken, app businesses have to make sure that they're attributing users successfully to identify who is engaging in fake behavior. This allows the business to do any of the following, either alone or in combination:

• Ban users from the app to prevent them from making fake purchases in the future if they commit multiple offenses (such as a two-strike rule), both punishing repeat offenders and allowing some leeway for first-timers.

• Deduct fraudulent spend from revenue modeling to ensure that business metrics are accurate, including the attribution of user spend across marketing channels and other figures like LTV.

• Remove or separate fraudsters from marketing cohorts, to prevent data sets from being corrupted and to identify the profile of users who will bend the rules.

By checking receipts and tying them into an attribution strategy, app businesses can begin to identify and remove fake in-app spenders from their apps and their data sets.
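Tying those steps together, here is a minimal, hypothetical sketch of how verified-receipt results could feed revenue filtering and a two-strike ban list. The `Purchase` record and function names are illustrative, not any particular vendor's API.

```python
from dataclasses import dataclass

@dataclass
class Purchase:
    user_id: str
    amount: float
    receipt_valid: bool  # result of the server-side receipt check

def clean_revenue(purchases: list) -> float:
    """Sum only store-verified spend, so fraudulent receipts never
    enter revenue or LTV figures."""
    return sum(p.amount for p in purchases if p.receipt_valid)

def flagged_users(purchases: list, strikes: int = 2) -> set:
    """Users with `strikes` or more invalid receipts (a two-strike rule)
    are candidates for a ban and for removal from marketing cohorts."""
    counts = {}
    for p in purchases:
        if not p.receipt_valid:
            counts[p.user_id] = counts.get(p.user_id, 0) + 1
    return {uid for uid, n in counts.items() if n >= strikes}
```

Running this over attributed purchase data per marketing channel is what lets a team spot which traffic sources deliver the cheaters.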


A call to arms

Mobile ad fraud is a significant problem that affects the entire mobile advertising ecosystem.

Although it’s advertisers who bear the brunt of fraud, poor quality attribution or technical difficulties, eradicating ad fraud is a challenge the whole industry should come together to solve.

When errors are introduced into any part of the ecosystem, they cascade through the industry in a number of ways. An advertiser overspending on an incentivized campaign loses the chance to spend in other channels; a network defrauded by install fraud overpromises revenues to publishers and inadvertently tarnishes advertiser data; fake in-app purchases knock popular games off balance, creating situations where developers might make unfair changes to their game and drive genuine users out of their app.

In short, it’s in the interest of the entire advertising ecosystem to go after these issues. This means that publishers, advertisers, networks and attribution services should work together to help scrub the problem out of the advertising economy.

If we fail, fraud won't go away. A collective effort to implement industry best practices could make a profound difference to the entire ecosystem.

This would help to slowly, but surely, erase fraud from the app economy, creating an industry that is more accountable, more open and more honest for everyone.


GET STARTED WITH MOBILE FRAUD TODAY

We've got our hands on a copy of

THE FRAUDSTER'S COOKBOOK

Do you want to know more about how Adjust can help you stop mobile ad fraud from affecting your campaigns?

Click here to find out more.

The Adjust Fraud Prevention Suite

Mobile ad fraud prevention, in real time.

www.adjust.com