Summary: Users who get past test screeners with untruthful answers can ruin your qualitative data and cost you a lot of money. Keep them at bay with false pre-emptive screeners or highly specific screener questions.

So you’ve run a few remote user tests and you’ve gotten back some great results. You’ve been diligent in leveraging pilot tests, carefully crafting neutrally-worded tasks, and segmenting your audience. By now, you’re pretty comfortable with services like UserTesting.com and you’re looking to take things to the next level. Well, I just may be able to help you out with a very simple piece of advice. First, let’s start with the problem.

One in Three Participants Suck

At least, that’s how I felt when I began user testing. I pulled out all the stops: enterprise account, delicately crafted tasks, obsessively precise segmentation. Yet still, roughly one third of the tests I ran came back as failures due to poorly qualified users.

After spending more time running tests, I began to realize that my core “user qualification issue” was not a poorly constructed screener or sloppy segmentation, but rather a byproduct of the environment created by remote user testing panels. In an average remote user test, each user is paid $10 for each ~15 minute test (which clients pay ~$50 for). That’s a decent amount of money, considering that all the user has to do is hang out at home and play around on websites. But it’s not quite that simple in practice.

The service that I use has more than 1 million people on its panel. That’s great for clients because they can get really specific with their segmentation and still find users that fit the bill. On the other hand, this also creates a sense of competition between the actual users. While there are 1 million users, there are significantly fewer tests being conducted at any given time. Letting clients rate each user’s performance after each test helps to weed out some of the sub-par testers (users with poor ratings will receive fewer opportunities), but that system most definitely still has its flaws.

The client can assign a star rating to each test

As users compete to secure a test, they look for ways to improve their odds of being accepted for the job. For some users, this means outsmarting test screeners by choosing the answer that is most likely to qualify them, regardless of whether or not they’re telling the truth. For example, I once used a screener that asked, “Do you work as a marketer or in a role directly related to marketing?” If the user answered yes, they would pass the screener and meet the requirements for the test, because I was looking for marketers.

I can very confidently say that, in multiple instances where I have used a marketing demographic screener, I have still been assigned users that were most definitely not marketers (or even remotely familiar with professional-level marketing). That’s a big problem.

Time to Get Dirty

With a portion of the panel members abusing the system, I needed to make a conscious effort to still screen out the users that didn’t fit my target demographic. Sometimes, this meant breaking the rules and getting creative with my approach to the user screening process.

One solution that I discovered was the false screener. A false screener poses a completely unrelated yet seemingly legitimate question before the actual screener. The key is that users must disqualify themselves on the first screener (the false screener) before they can qualify themselves on the second screener (the real screener). This helps verify that the user is telling the truth, because users who are gaming the system will always try to qualify themselves on the first screener.
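The two-question logic boils down to a simple decision rule, which can be sketched as follows. This is just an illustration of the idea, not any testing platform's actual screener API; the function name and the yes/no answer encoding are my own.

```python
def qualifies(false_screener_yes: bool, real_screener_yes: bool) -> bool:
    """Return True only if the participant should be admitted to the test.

    A participant must disqualify themselves on the false screener
    (answer "No") and then qualify on the real screener (answer "Yes").
    Answering "Yes" to both suggests they are picking whichever answer
    looks most likely to get them the job.
    """
    return (not false_screener_yes) and real_screener_yes

# An honest construction worker: not in food service, works in construction.
print(qualifies(False, True))   # True  -> admitted

# A dishonest user answering "Yes" to everything to land the test.
print(qualifies(True, True))    # False -> screened out
```

Note that the rule also rejects a user who answers "No" to both questions; they are honest, but simply outside the target demographic.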

How to Create a False Screener

After implementing the false screener, I immediately noticed an increase in the quality of user that I was recruiting. That being said, there are two “make or break” components to the false screener:

  • It must appear specific while actually being fairly vague
    This makes the question appear legitimate, but keeps it general enough that you can capture the majority of dishonest users. If you get too specific, you might scare the dishonest users into disqualifying themselves, for fear that they will receive a bad rating at the end of the test (since they’re not subject matter experts). And because you’re using a false screener, disqualifying themselves on it would in turn qualify those dishonest users.
  • It must be in no way related to your actual target audience
    This will ensure that you don’t accidentally disqualify an honest user.

There are a lot of different ways to write an effective false screener and they will likely vary depending on the project and context. Here are a couple examples to help you visualize the differences between effective and ineffective false screeners:

DO

  1. False screener: Do you work in the food service industry?
  2. Real screener: Do you work in the construction industry?

DON'T

  1. False screener: Do you work in the public sector?
  2. Real screener: Do you work in the construction industry?

The first example works because the false screener is vague and unrelated to the target demographic (construction workers will not accidentally self-identify as food service workers). The second example does not work because the false screener could be related to the target demographic (some construction workers may also be in the public sector).

Alternatives

As an alternative to the false screener, you can simply formulate highly specific or difficult questions that only your niche demographic would be able to answer. The downside is that if you don’t write the question very carefully, you may end up weeding out the wrong users. Additionally, this approach only applies in circumstances where a niche demographic is desired. If you’re looking for a broader demographic (like construction workers), it won’t work quite as well.

By moderating your remote user tests, you can salvage the situation a little and provide some guidance to the user, which may help them to give you good feedback even if they aren’t properly qualified. The problem is, this requires an enterprise account with most solutions (which is expensive). You’re also going to have to be physically present when the test takes place, which sort of defeats the primary benefit of remote user testing (it’s easy and fast).

More than anything, your goal should simply be to collect good data from properly qualified users. However you achieve that result is totally up to you, and what may be a great approach for one designer may also be a terrible approach for another. Run some tests, get to know your audience, and progressively determine the best way to screen users for your project.