Leveraging Crowdsourcing to Detect Improper Tasks in Crowdsourcing Marketplaces. Yukino Baba, Hisashi Kashima (Univ. Tokyo), Kei Kinoshita, Goushi Yamaguchi, Yosuke Akiyoshi (Lancers Inc.). IAAI 2013 (July 17, 2013).


Page 1:

Leveraging Crowdsourcing to Detect Improper Tasks in Crowdsourcing Marketplaces

Yukino Baba, Hisashi Kashima (Univ. Tokyo), Kei Kinoshita, Goushi Yamaguchi, Yosuke Akiyoshi (Lancers Inc.). IAAI 2013 (July 17, 2013).

Page 2:

Overview: Crowdsourced labels are useful for detecting improper tasks in crowdsourcing

TARGET: Improper task detection in crowdsourcing marketplaces

APPROACH: Supervised learning

[Diagram: (1) an operator (expert, expensive) and (2) crowdsourcing workers (non-expert, cheap) both provide labels; after quality control of the worker labels and an aggregation strategy, the combined labels are used to train a classifier]

RESULTS:
- A classifier trained using expert labels achieved AUC 0.950
- A classifier trained using expert and non-expert labels achieved AUC 0.962

AUC: Area Under the Receiver Operating Characteristic curve

Page 3:


Background: Quality control of tasks posted to crowdsourcing has not received much attention

[Diagram: in a crowdsourcing marketplace, a requester posts a task, the marketplace assigns the task to a worker, the worker sends a submission, and the marketplace returns the submission to the requester]

There are no guarantees of either worker or requester reliability → quality control is an important issue

Quality control of submissions has been well studied

Quality control of tasks has not received much attention

Page 4:


Motivation: Improper tasks are observed in crowdsourcing marketplaces

Example of an improper task: "If you know anyone who might be involved in welfare fraud, please inform us about the person" (asking workers to submit the person's name and address)

Other examples:
- Collecting personal information
- Requiring workers to register for a particular service
- Asking workers to create fake SNS postings

Operators in crowdsourcing marketplaces have to monitor the tasks continuously to find improper tasks. However, manual investigation of tasks is very expensive.

Page 5:

Goal: Supporting manual monitoring by automatic detection of improper tasks

[Diagram: OFFLINE, an operator (expert, expensive) and crowdsourcing workers (non-expert, cheap) label the training tasks as PROPER or IMPROPER, and a classifier is trained on the task information and these labels; ONLINE, the classifier judges each new task ("The task is IMPROPER!" / "The task is PROPER") and reports improper tasks to the operator]

Page 6:

Experiment 1: We trained a classifier using labels given by expert operators

RESULT 1: A classifier trained using expert labels achieved AUC 0.950

[Diagram: the same offline/online pipeline as on Page 5; in Experiment 1, only the operator (expert, expensive) provides the training labels]

Page 7:

We used task data from Lancers (an MTurk-like crowdsourcing marketplace in Japan)

Task dataset: Real operational data from inside a commercial crowdsourcing marketplace
- 2,904 PROPER tasks (✔)
- 96 IMPROPER tasks, i.e., tasks suspended with the notice "This task is suspended due to violation of our Terms and Conditions"
These labels were given by expert operators.

Features used for training (see the sketch below):
- Task textual features: bag-of-words in the task title and instructions
- Task non-textual features: amount of reward and number of assigned workers
- Requester ID: "who posted the task?"
- Requester non-textual features: gender, age, and reputation
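To make the feature table concrete, here is a minimal sketch in Python (pandas/scikit-learn); the column names and preprocessing choices are our assumptions, not the paper's published pipeline.

```python
# Hypothetical sketch of the feature construction above; column names and
# encoding choices are our assumptions, not the paper's actual pipeline.
import numpy as np
import pandas as pd
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import CountVectorizer

def build_features(tasks: pd.DataFrame):
    """Expected columns: title, instructions, reward, n_workers,
    requester_id, requester_gender, requester_age, requester_reputation."""
    # Task textual features: bag-of-words over the title and instructions
    vectorizer = CountVectorizer()
    X_text = vectorizer.fit_transform(tasks["title"] + " " + tasks["instructions"])

    # Task non-textual features: reward amount and number of assigned workers
    X_task = csr_matrix(tasks[["reward", "n_workers"]].to_numpy(dtype=float))

    # Requester ID, one-hot encoded ("who posted the task?")
    X_req_id = csr_matrix(pd.get_dummies(tasks["requester_id"]).to_numpy(dtype=float))

    # Requester non-textual features: gender (one-hot), age, reputation
    X_req = csr_matrix(np.column_stack([
        pd.get_dummies(tasks["requester_gender"]).to_numpy(dtype=float),
        tasks["requester_age"].to_numpy(dtype=float),
        tasks["requester_reputation"].to_numpy(dtype=float),
    ]))
    return hstack([X_text, X_task, X_req_id, X_req]).tocsr(), vectorizer
```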

Page 8:

Result 1: A classifier trained using expert labels achieved AUC 0.950

CLASSIFIER: Linear SVM, using 60% of tasks for training (see the sketch below)

RESULTS:
- The classifier using all features showed the best performance (0.950 averaged AUC over 100 iterations)
- The task textual features were the most effective
- Helpful features:
  - Red-flag keywords, e.g., "account," "password," and "e-mail"
  - Amount of reward
  - Requester reputation
  - Worker qualifications
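A minimal sketch of this evaluation protocol, assuming scikit-learn; the stratified splitting is our own choice (the slide does not specify it) to keep the 96 IMPROPER tasks represented in every split.

```python
# Sketch of the Experiment 1 protocol: a linear SVM trained on 60% of the
# tasks, with AUC averaged over 100 random splits. Library and
# stratification choices are ours.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def averaged_auc(X, y, n_iterations=100):
    aucs = []
    for seed in range(n_iterations):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, train_size=0.6, stratify=y, random_state=seed)
        clf = LinearSVC().fit(X_tr, y_tr)         # linear SVM
        scores = clf.decision_function(X_te)      # margin scores rank the test tasks
        aucs.append(roc_auc_score(y_te, scores))  # AUC on the held-out 40%
    return float(np.mean(aucs))
```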

Page 9:

Experiment 2: We trained a classifier using labels given by experts and non-experts

RESULT 2: A classifier trained using expert and non-expert labels achieved AUC 0.962 (vs. AUC 0.950 with expert labels alone)

[Diagram: the same offline/online pipeline; in Experiment 2, the training labels come from both the operator (expert, expensive) and crowdsourcing workers (non-expert, cheap), via (1) quality control of the worker labels and (2) aggregation with the expert labels]

Page 10:

We hired crowdsourcing workers and asked them to label each task

Non-expert label dataset: Multiple workers were assigned to label each task

[Table: tasks × workers; each cell is one worker's PROPER (✔) or IMPROPER label for one task]

Page 11:

Quality control of worker labels: We applied an existing method that accounts for worker ability

The reliability of labels given by non-expert workers depends on the individual worker → we merged the labels by applying a statistical method that models worker ability [Dawid & Skene '79].

Assuming that high-ability workers tend to give correct labels, and that workers who give correct labels have high ability, the method jointly predicts the true labels and the worker abilities.

[Diagram: labels from low-ability workers are discounted and labels from high-ability workers are trusted when predicting the true label]

[Dawid & Skene '79] A. P. Dawid and A. M. Skene: Maximum Likelihood Estimation of Observer Error-Rates Using the EM Algorithm. Journal of the Royal Statistical Society, Series C (Applied Statistics), Vol. 28, No. 1 (1979).
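To make the estimation concrete, here is a compact EM sketch of the binary Dawid & Skene model; it is our simplified rendering (every worker labels every task, no smoothing), not the authors' implementation.

```python
# Compact EM sketch of the binary Dawid & Skene '79 model
# (0 = PROPER, 1 = IMPROPER). Simplifying assumptions: every worker
# labels every task, and no smoothing is applied.
import numpy as np

def dawid_skene(labels, n_iter=50):
    labels = np.asarray(labels, dtype=float)  # shape (n_tasks, n_workers)
    mu = labels.mean(axis=1)                  # init P(z_j = 1) by majority vote
    for _ in range(n_iter):
        # M-step: class prior and per-worker error rates (= "worker ability")
        pi = mu.mean()
        sens = (mu[:, None] * labels).sum(axis=0) / mu.sum()                     # P(l=1 | z=1)
        spec = ((1 - mu)[:, None] * (1 - labels)).sum(axis=0) / (1 - mu).sum()   # P(l=0 | z=0)
        # E-step: posterior of each task's true label given all worker labels
        like1 = pi * np.prod(np.where(labels == 1, sens, 1 - sens), axis=1)
        like0 = (1 - pi) * np.prod(np.where(labels == 1, 1 - spec, spec), axis=1)
        mu = like1 / (like1 + like0)
    return mu, sens, spec  # posterior true labels, worker sensitivities/specificities

# Example: workers 1 and 2 agree; worker 3 often disagrees (low ability).
votes = [[1, 1, 0], [0, 0, 1], [1, 1, 1], [0, 0, 0]]
posterior, sens, spec = dawid_skene(votes)
```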

Page 12:

Aggregation of expert and non-expert labels: Rule-based strategies

How can we aggregate the labels given by experts and non-experts?

(1) If the expert label and the (merged) non-expert label are the same: just use that label as the agreed label.
(2) If the two labels differ: we have three strategies:
- Select PROPER (✔) as the agreed label
- Select IMPROPER as the agreed label
- Ignore the sample (labeled task) and do not include it in the training dataset (SKIP)

Disagreement occurs in two directions (expert PROPER vs. non-expert IMPROPER, and expert IMPROPER vs. non-expert PROPER); choosing one of the three strategies for each direction gives 3×3 strategies in total, as in the sketch below.
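A minimal sketch of these rules; the function and parameter names are ours, and the defaults encode the best strategy reported on Page 13.

```python
# Sketch of the rule-based aggregation: one of three strategies is chosen
# independently for each disagreement direction, giving the 3x3 combinations.
# Function and parameter names are ours, not the paper's.
PROPER, IMPROPER, SKIP = "PROPER", "IMPROPER", "SKIP"

def aggregate(expert_label, crowd_label,
              when_expert_proper=SKIP,         # expert: PROPER, crowd: IMPROPER
              when_expert_improper=IMPROPER):  # expert: IMPROPER, crowd: PROPER
    # (1) If both labels are the same: just use the label.
    if expert_label == crowd_label:
        return expert_label
    # (2) If they differ: apply the strategy configured for this direction.
    return when_expert_proper if expert_label == PROPER else when_expert_improper

# The defaults above encode the best strategy: always side with the experts
# when they say IMPROPER; drop the sample when they say PROPER but the
# crowd disagrees.
assert aggregate(IMPROPER, PROPER) == IMPROPER
assert aggregate(PROPER, IMPROPER) == SKIP
assert aggregate(PROPER, PROPER) == PROPER
```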

Page 13:

Result 2: A classifier trained using expert and non-expert labels achieved AUC 0.962

The best strategy achieved an averaged AUC of 0.962.

BEST STRATEGY:
- If the experts judge a task as IMPROPER, the strategy always sides with the experts.
- If the experts judge a task as PROPER but the non-experts disagree, the strategy ignores the sample (SKIP).

Why did the best strategy perform well?
- The expert operators seem to judge tasks as IMPROPER strictly, so we should side with them when they judge a task as IMPROPER.
- The non-expert crowdsourcing workers do not judge tasks as IMPROPER very carefully, so we should ignore the task when the experts disagree with the non-experts' judgments.

Page 14:


Result 3: Crowd labels can reduce the number of expert labels by 25% while maintaining accuracy

[Charts: composition of the training data ("% of tasks" labeled by experts vs. by crowd workers); training with 25% fewer expert labels, supplemented by crowd labels, achieved AUC 0.951 (Result 3), compared with AUC 0.950 using expert labels alone (Result 1)]

Crowdsourced labels are useful in reducing the number of expert labels while maintaining the same level of classification performance


Page 15:


Summary: Crowdsourced labels are useful for detecting improper tasks in crowdsourcing

To support manual monitoring, we used machine learning techniques and built classifiers to detect improper tasks in crowdsourcing

1. The ML approach is effective for improper task detection (Result 1)

2. By addressing the varying reliability of workers and choosing a good aggregation strategy, crowdsourced labels improved the performance of improper task detection (Result 2)

3. Crowdsourced labels are useful in reducing the labeling cost of expert operators (Result 3)