Preference Elicitation in Single and Multiple User Settings


1

Preference Elicitation in Single and Multiple User Settings

Darius Braziunas, Craig Boutilier, 2005 (Boutilier, Patrascu, Poupart, Schuurmans, 2003, 2005)

Nathanael Hyafil, Craig Boutilier, 2006a, 2006b

Department of Computer Science

University of Toronto

2

Overview

Preference Elicitation in A.I.

Single User Elicitation

• Foundations of Local queries [BB-05]

• Bayesian Elicitation [BB-05]

• Regret-based Elicitation [BPPS-03,05]

Multi-agent Elicitation (Mechanism Design)

• One-Shot Elicitation [HB-06b]

• Sequential Mechanisms [HB-06a]

3

Preference Elicitation in AI

Shopping for a Car:

Luggage capacity? Two door? Cost? Engine size? Color? Options?

4

The Preference Bottleneck

Preference elicitation: the process of determining a user’s preferences/utilities to the extent necessary to make a decision on her behalf

Why a bottleneck?

• preferences vary widely

• large (multiattribute) outcome spaces

• quantitative utilities (the “numbers”) difficult to assess

5

Automated Preference Elicitation

The interesting questions:

• decomposition of preferences

• what preference info is relevant to the task at hand?

• when is the elicitation effort worth the improvement it offers in terms of decision quality?

• what decision criterion to use given partial utility info?

6

Overview

Preference Elicitation in A.I.

• Constraint-based Optimization

• Factored Utility Models

• Types of Uncertainty

• Types of Queries

Single User Elicitation

Multi-agent Elicitation (Mechanism Design)

7

Constraint-based Decision Problems

Constraint-based optimization (CBO):

• outcomes over variables X = {X1 … Xn}

• constraints C over X spell out feasible decisions

• generally compact structure, e.g., X1 & X2 ⇒ ¬X3

• add a utility function u: Dom(X) → R

• preferences over configurations

8

Constraint-based Decision Problems

Must express u compactly, like C:

• generalized additive independence (GAI) model proposed by Fishburn (1967) [and BG95]

• a nice generalization of additive linear models

• given by graphical model capturing independence

9

Factored Utilities: GAI Models

Set of K factors fk over subsets of vars X[k]

• “local” utility for each local configuration

• u(x) = Σk fk(x[k])

[Fishburn67] u in this form exists iff lotteries p and q are equally preferred whenever p and q have the same marginals over each X[k]

[Figure: example GAI model with factors f1(A), f2(B), f3(BC), e.g., f1(a) = 3, f2(b) = 3, f3(bc) = 12; u(abc) = f1(a) + f2(b) + f3(bc)]
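As a quick illustration, here is a minimal Python sketch of evaluating a GAI utility from factor tables. Only the values shown in the slide's example are taken from the deck; the remaining f3 entries are made-up placeholders.

```python
# Minimal sketch: evaluating a GAI utility u(x) = sum_k f_k(x[k]).
# f1, f2 and f3(bc) = 12 come from the slide's example; the remaining
# f3 entries are illustrative placeholders only.
f1 = {"a": 3, "-a": 1}
f2 = {"b": 3, "-b": 1}
f3 = {("b", "c"): 12, ("-b", "c"): 2,        # second entry assumed
      ("b", "-c"): 5, ("-b", "-c"): 0}       # placeholders

def gai_utility(x):
    """x maps variable names to values, e.g. {'A': 'a', 'B': 'b', 'C': 'c'}."""
    return f1[x["A"]] + f2[x["B"]] + f3[(x["B"], x["C"])]

print(gai_utility({"A": "a", "B": "b", "C": "c"}))   # 3 + 3 + 12 = 18
```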

10

Optimization with GAI Models

Optimize using a simple IP (or variable elimination, or …)

• number of vars linear in size of GAI model

• maximize Σk Σ_{x[k] ∈ Dom(X[k])} I_{x[k]} · fk(x[k]), subject to consistency constraints among the indicator variables I_{x[k]} ∈ {0,1} and the feasibility constraints C
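For intuition only, the sketch below optimizes a small GAI model by brute force over feasible configurations; the slides' approach uses an IP or variable elimination to scale, so treat this as a reference implementation. The `feasible` callback stands in for the constraints C.

```python
from itertools import product

# Reference-only brute-force optimizer for a small GAI model; the IP /
# variable-elimination formulations in the slides are what scale in practice.
def optimize_gai(domains, factors, feasible):
    """domains: {var: list of values}; factors: list of (scope, table) pairs,
    table[tuple of values over scope] -> local utility; feasible(x) encodes C."""
    names = list(domains)
    best_x, best_u = None, float("-inf")
    for combo in product(*(domains[v] for v in names)):
        x = dict(zip(names, combo))
        if not feasible(x):
            continue
        u = sum(tbl[tuple(x[v] for v in scope)] for scope, tbl in factors)
        if u > best_u:
            best_x, best_u = x, u
    return best_x, best_u
```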

11

Difficulties in CBO

Constraint elicitation

• generally stable across different users

Utility (objective) representation

• GAI solidifies many of the intuitions in soft constraints

Utility elicitation: how do we assess individual user preferences?

• need to elicit GAI model structure (independence)

• need to elicit (constraints on) GAI parameters

• need to make decisions with imprecise parameters

12

Strict Utility Function Uncertainty

User’s actual utility u unknown

Assume feasible set F ⊆ U = [0,1]^n

• allows for unquantified or “strict” uncertainty

• e.g., F a set of linear constraints on GAI terms

How should one make a decision? elicit info?

u(red, 2door, 280hp) > 0.4

u(red, 2door, 280hp) > u(blue, 2door, 280hp)

13

Strict Uncertainty Representation

[Figure: utility function as a GAI model over variables L, N, P, with interval bounds on local factor values, e.g., f1(L): [7,11] and [2,5]; f2(L,N): [2,4] and [1,2]]

14

Bayesian Utility Function Uncertainty

User’s actual utility u unknown

Assume density P over U = [0,1]^n

Given belief state P, EU of decision x is:

EU(x, P) = ∫_U u(x) P(u) du

Decision making is easy, but elicitation harder?

• must assess expected value of information in query
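A minimal sketch of computing EU(x, P) by Monte Carlo, assuming (purely for illustration) an independent uniform prior over each utility parameter and a user-supplied evaluator `eval_u`.

```python
import random

# Monte Carlo estimate of EU(x, P) = ∫_U u(x) P(u) du. Assumes, purely for
# illustration, independent uniform priors over the utility parameters.
def expected_utility(x, param_intervals, eval_u, n_samples=10_000):
    """param_intervals: {param: (lo, hi)}; eval_u(x, params) -> utility of x."""
    total = 0.0
    for _ in range(n_samples):
        params = {p: random.uniform(lo, hi) for p, (lo, hi) in param_intervals.items()}
        total += eval_u(x, params)
    return total / n_samples
```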

15

Bayesian Representation

[Figure: the same GAI model over L, N, P, now with a density over each local factor value instead of interval bounds]

16

Query Types

Comparison queries (is x preferred to x'?)

• impose linear constraints on parameters: Σk fk(x[k]) > Σk fk(x'[k])

• interpretation is straightforward

17

Query Types

Bound queries (is fk(x[k]) > v?)

• response tightens bound on a specific utility parameter

• can be phrased as a local standard gamble query

18

Overview

Preference Elicitation in A.I.

Single User Elicitation

• Foundations of Local queries

• Bayesian Elicitation

• Regret-based Elicitation

Multi-agent Elicitation (Mechanism Design)

• One-Shot Elicitation

• Sequential Mechanisms

19

Difficulties with Bound Queries

Bound queries focus on local factors

• but values cannot be fixed without reference to others!

• seemingly “different” local prefs correspond to the same u

u(Color, Doors, Power) = u1(Color, Doors) + u2(Doors, Power)

u(red, 2door, 280hp) = u1(red, 2door) + u2(2door, 280hp)

u(red, 4door, 280hp) = u1(red, 4door) + u2(4door, 280hp)

[Table: example local values (10, 6, 4; 6, 3, 3; 1, 9) showing that different local assignments can yield the same total utilities]

20

Local Queries [BB05]

We wish to avoid queries on whole outcomes

• can't ask about purely local outcomes

• but can condition on a subset of default values

Conditioning set C(fi) for factor fi(Xi):

• variables that share factors with Xi

• setting default outcomes on C(fi) renders Xi independent of the remaining variables

• enables local calibration of factor values

21

Local Standard Gamble Queries

Local std. gamble queries

• use “best” and “worst” (anchor) local outcomes

-- conditioned on default values of conditioning set

• bound queries on other parameters relative to these

• gives local value function v(x[i]) (e.g., v(ABC) )

Hence we can legitimately ask local queries.

But local value functions are not enough:

• must calibrate: requires global scaling

22

Global Scaling

Elicit utilities of anchor outcomes w.r.t. the global best and worst outcomes

• the 2·m anchor outcomes: a “best” and a “worst” for each of the m factors

• these require global std gamble queries

(note: same is true for pure additive models)
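A sketch of one natural way to combine a local value function (scaled to [0,1]) with the elicited utilities of its anchor outcomes; this illustrates the idea of global scaling and is not necessarily the exact calibration used in [BB05].

```python
# Illustrative global scaling: map a local value function v_i (values in [0,1])
# onto the global utility scale using the elicited utilities of the factor's
# top and bottom anchor outcomes. Not necessarily the exact form used in [BB05].
def calibrate_factor(v_i, u_top, u_bottom):
    """v_i: {local config: value in [0,1]}; returns f_i on the global scale."""
    return {cfg: u_bottom + v * (u_top - u_bottom) for cfg, v in v_i.items()}
```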

23

Bound Query Strategies

Identify conditioning sets Ci for each factor fi

Decide on a “default” outcome

For each fi identify top and bottom anchors

• e.g., the most and least preferred values of factor i

• given default values of Ci

Queries available:

• local std gambles: use anchors for each factor, C-sets

• global std gambles: gives bounds on anchor utilities

24

Overview

Preference Elicitation in A.I.

Single User Elicitation

• Foundations of Local queries

• Bayesian Elicitation

• Regret-based Elicitation

Multi-agent Elicitation (Mechanism Design)

• One-Shot Elicitation

• Sequential Mechanisms

25

Partial preference information

Bayesian uncertainty

Probability distribution p over utility functions

Maximize expected (expected) utility

MEU decision x* = arg max_x E_p[u(x)]

Consider:

• elicitation costs

• values of possible decisions

• optimal tradeoffs between elicitation effort and improvement in decision quality

26

Query selection

At each step of the elicitation process, we can

• obtain more preference information

• make or recommend a terminal decision

27

Bayesian approach: Myopic EVOI

[Figure: one-step query tree; from belief state p, each query q1, q2, … leads to responses r1,1, r1,2, r2,1, r2,2, … and to posterior belief states with values MEU(p1,1), MEU(p1,2), MEU(p2,1), MEU(p2,2), …]

28

Expected value of information

MEU(p) = E_p[u(x*)]

Expected posterior utility: EPU(q, p) = E_{r|q,p}[MEU(p_r)]

Expected value of information of query q:

EVOI(q) = EPU(q,p) – MEU(p)

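The sketch below computes myopic EVOI for a yes/no query over a finite sample of utility functions standing in for the density p; query noise is ignored and all names are illustrative.

```python
# Myopic EVOI of a yes/no query, sketched over a finite sample of utility
# functions (dicts mapping decisions to utilities) that stands in for p.
def meu(decisions, utils, weights):
    """MEU(p) = max_x E_p[u(x)]."""
    return max(sum(w * u[x] for u, w in zip(utils, weights)) for x in decisions)

def evoi(query, decisions, utils, weights):
    """query(u) -> True/False answer the user would give under utility u
    (responses assumed noise-free). Returns EPU(q, p) - MEU(p)."""
    epu = 0.0
    for answer in (True, False):
        idx = [i for i, u in enumerate(utils) if query(u) == answer]
        p_answer = sum(weights[i] for i in idx)
        if p_answer == 0:
            continue
        posterior = [weights[i] / p_answer for i in idx]      # Bayes update
        epu += p_answer * meu(decisions, [utils[i] for i in idx], posterior)
    return epu - meu(decisions, utils, weights)
```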

29

Bayesian approach: Myopic EVOI

Ask query with highest EVOI - cost [Chajewska et al ’00]

• Global standard gamble queries (SGQ) “Is u(oi) > l?”

• Multivariate Gaussian distributions over utilities

[Braziunas and Boutilier ’05]

• Local SGQ over utility factors

• Mixture of uniforms distributions over utilities

30

Local elicitation in GAI models [Braziunas and Boutilier ’05]

Local elicitation procedure

• Bayesian uncertainty over local factors

• Myopic EVOI query selection

Local comparison query

“Is local value of factor setting xi greater than l”?

• Binary comparison query

• Requires yes/no response

• query point l can be optimized analytically

31

Experiments

Car rental domain: 378 parameters [Boutilier et al. ’03]

• 26 variables, 2-9 values each, 13 factors

2 strategies

• Semi-random query: query factor and local configuration chosen at random; query point set to the mean of the local value function

• EVOI query: search through factors and local configurations; query point optimized analytically

32

Experiments

[Plot: percentage utility error (w.r.t. true max utility) vs. number of queries (0-100), for mixture-of-uniforms, uniform, and Gaussian priors]

33

Bayesian Elicitation: Future Work

GAI structure elicitation and verification

Sequential EVOI

Noisy responses

34

Overview

Single User Elicitation

• Foundations of Local queries

• Bayesian Elicitation

• Regret-based Elicitation

-- Why minimax regret (MMR)?

-- Decision making with MMR

-- Elicitation with MMR

Multi-agent Elicitation (Mechanism Design)

35

Minimax Regret: Utility Uncertainty

Regret of x w.r.t. u:

Max regret of x w.r.t. F:

Decision with minimax regret w.r.t. F:

R(x, u) = EU(x*_u, u) − EU(x, u), where x*_u is the optimal decision under u

MR(x, F) = max_{u ∈ F} R(x, u)

x*_F = arg min_{x ∈ Feas(X)} MR(x, F);  MMR(F) = MR(x*_F, F)
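A brute-force sketch of these definitions over a finite decision set, with the feasible utility set F represented by a finite sample of utility functions; the slides compute this with an IP/MIP instead.

```python
# Brute-force max regret and minimax regret over a finite decision set, with F
# represented by a finite sample of utility functions (dicts u[x]). The slides
# solve this via a MIP; this is only a sketch of the definitions.
def max_regret(x, decisions, utils):
    """MR(x, F) = max_u [ max_x' EU(x', u) - EU(x, u) ]."""
    return max(max(u[xp] for xp in decisions) - u[x] for u in utils)

def minimax_regret(decisions, utils):
    """Returns (x*, MMR(F)) with x* = argmin_x MR(x, F)."""
    return min(((x, max_regret(x, decisions, utils)) for x in decisions),
               key=lambda pair: pair[1])
```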

36

Why Minimax Regret?*

Appealing decision criterion for strict uncertainty

• contrast maximin, etc.

• not often used for utility uncertainty [BBB01,HS010]

[Figure: for candidate utility functions u1 … u6, bars compare decisions x and x'; which of the two is better varies with u]

37

Why Minimax Regret?

Minimizes regret in the presence of an adversary

• provides a bound on worst-case loss

• robustness in the face of utility function uncertainty

In contrast to Bayesian methods:

• useful when priors not readily available

• can be more tractable; see [CKP00/02, Bou02]

• effective elicitation even if priors available [WB03]

38

Overview

Single User Elicitation

• Foundations of Local queries

• Bayesian Elicitation

• Regret-based Elicitation

-- Why minimax regret (MMR)?

-- Decision making with MMR

-- Elicitation with MMR

Multi-agent Elicitation (Mechanism Design)

39

Computing Max Regret

Max regret MR(x,F) computed as an IP

• number of vars linear in GAI model size

• number of (precomputed) constants (i.e., local regret terms) quadratic in GAI model size

• local regret terms r(x[k], x'[k]) = u(x'[k]) − u(x[k])

• maximize Σk Σ_{x'[k]} I_{x'[k]} · r(x[k], x'[k]), subject to consistency constraints on the indicators I_{x'[k]} ∈ {0,1} and the feasibility constraints C

40

Minimax Regret in Graphical Models

We convert minimax to min (standard trick)

• obtain a MIP with one constraint per feasible config

• linearly many vars (in utility model size)

Key question: can we avoid enumerating all x’ ?

41

Constraint Generation

Very few constraints will be active in solution

Iterative approach (sketched below):

• solve relaxed IP (using a subset of constraints)

• solve for the maximally violated constraint

• if any, add it and repeat; else terminate
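A generic sketch of this loop; `solve_relaxed` and `find_most_violated` are placeholders for the actual MIP solver calls.

```python
# Generic constraint-generation loop (a sketch of the scheme above).
# solve_relaxed and find_most_violated are placeholders for the MIP solver calls.
def constraint_generation(solve_relaxed, find_most_violated, tolerance=1e-6):
    """solve_relaxed(constraints) -> candidate solution;
    find_most_violated(solution) -> (constraint, violation) or (None, 0.0)."""
    active = []                                     # start from a small constraint subset
    while True:
        solution = solve_relaxed(active)            # relaxed minimax-regret MIP
        constraint, violation = find_most_violated(solution)
        if constraint is None or violation <= tolerance:
            return solution                         # no violated constraint: done
        active.append(constraint)                   # add it and re-solve
```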

42

Constraint Generation Performance

Key properties:

• aim: graphical structure permits practical solution

• convergence (usually very fast, few constraints)

• very nice anytime properties

• considerable scope for approximation

• produces solution x* as well as witness xw

43

Overview

Single User Elicitation

• Foundations of Local queries

• Bayesian Elicitation

• Regret-based Elicitation

-- Why minimax regret (MMR)?

-- Decision making with MMR

-- Elicitation with MMR

Multi-agent Elicitation (Mechanism Design)

44

Regret-based Elicitation [Boutilier, Patrascu, Poupart, Schuurmans IJCAI-05; AIJ-06]

Minimax optimal solution may not be satisfactory

Improve quality by asking queries

• new bounds on utility model parameters

Which queries to ask?

• what will reduce regret most quickly?

• myopically? sequentially?

Closed-form solution seems infeasible

• to date we’ve looked only at heuristic elicitation

45

Elicitation Strategies I: Halve Largest Gap (HLG)

• ask if the parameter with the largest gap > its midpoint (see the sketch below)

• MMR(U) ≤ maxgap(U), hence n·log(maxgap(U)/ε) queries needed to reduce regret to ε

• bound is tight

• like polyhedral-based conjoint analysis [THS03]

[Figure: interval bounds on the GAI parameters of f1(a,b) and f2(b,c) for the various local configurations]
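A minimal sketch of HLG query selection over interval bounds on the utility parameters (the parameter names and bound representation are assumptions).

```python
# Halve Largest Gap (HLG): pick the parameter with the widest interval and ask
# whether it exceeds the midpoint; tighten its bound with the yes/no answer.
def hlg_query(bounds):
    """bounds: {param: [lo, hi]} -> (param, midpoint) for 'is param > midpoint?'."""
    param = max(bounds, key=lambda p: bounds[p][1] - bounds[p][0])
    lo, hi = bounds[param]
    return param, (lo + hi) / 2.0

def update_bounds(bounds, param, midpoint, answered_yes):
    if answered_yes:
        bounds[param][0] = midpoint     # new lower bound
    else:
        bounds[param][1] = midpoint     # new upper bound
```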

46

Elicitation Strategies II

Current Solution (CS)

• only ask about parameters of the optimal solution x* or the regret-maximizing witness xw (see the sketch below)

• intuition: focus on parameters that contribute to regret; reducing the upper bound on xw or increasing the lower bound on x* helps

• use early stopping to get regret bounds (CS-5sec)

[Figure: interval bounds on the GAI parameters of f1(a,b) and f2(b,c)]
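A sketch of CS query selection under the same interval-bound representation; `params_of`, which maps a decision to the utility parameters it touches, is an assumed helper.

```python
# Current Solution (CS): query the widest-gap parameter among those appearing
# in the minimax-optimal solution x* or the regret-maximizing witness x_w.
# params_of(decision) -> set of parameter names it touches (assumed helper).
def cs_query(bounds, x_star, x_w, params_of):
    candidates = set(params_of(x_star)) | set(params_of(x_w))
    param = max(candidates, key=lambda p: bounds[p][1] - bounds[p][0])
    lo, hi = bounds[param]
    return param, (lo + hi) / 2.0
```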

47

Elicitation Strategies III: Optimistic-Pessimistic (OP)

• query the largest-gap parameter in one of: the optimistic solution xo, the pessimistic solution xp

Computation:

• CS needs minimax optimization

• OP needs standard optimization

• HLG needs no optimization

Termination:

• CS easy

• others?

48

Results (Small Random)

10 vars; < 5 values each

10 factors, at most 3 vars

Avg 45 trials

49

Results (Car Rental, Unif)

26 vars; 61 billion configs

36 factors, at most 5 vars; 150 parameters

Avg 45 trials

50

Results (Real Estate, Unif)

20 vars; 47 million configs

29 factors, at most 5 vars; 100 parameters

Avg 45 trials

51

Results (Large Rand, Unif)

25 vars; < 5 values each

20 factors, at most 3 vars

Avg 45 trials

52

Summary of Results

CS works best on test problems

• time bounds (CS-5): little impact on query quality

• always know max regret (or bound) on solution

• time bound adjustable (use bounds, not time)

OP competitive on most problems

• computationally faster (e.g., 0.1s vs 14s on RealEst)

• no regret computed so termination decisions harder

HLG much less promising

53

Interpretation

HLG:

• provable regret (the bound) reduced very quickly

But CS and OP:

• reduce true regret faster (often to optimality)

• are restricted to feasible decisions

• CS focuses on relevant parameters

54

Conclusion – Single User

Local parameter elicitation

• Theoretically sound

• Computationally practical

• Easier to answer

Bayesian EVOI / Regret-based elicitation

• Good guides for elicitation

• Integrated in computationally tractable algorithms

Future Work:

• Sequential reasoning

55

Questions?

References:

D. Braziunas and C. Boutilier, “Local Utility Elicitation in GAI Models”, UAI 2005.

C. Boutilier, R. Patrascu, P. Poupart, and D. Schuurmans, “Constraint-based Optimization and Utility Elicitation using the Minimax Decision Criterion”, Artificial Intelligence, 2006 (extends CP 2003 and IJCAI 2005 papers).

56

Preference Elicitation in

Single and Multiple User Settings

Part 2

Darius Braziunas, Craig Boutilier, 2005 (Boutilier, Patrascu, Poupart, Schuurmans, 2003, 2005)

Nathanael Hyafil, Craig Boutilier, 2006a, 2006b

Department of Computer Science

University of Toronto

57

Overview

Single User Elicitation

• Foundations of Local queries

• Bayesian Elicitation

• Regret-based Elicitation

Multi-agent Elicitation

• Background: Mechanism Design

• Partial Revelation Mechanisms

• One-Shot Elicitation

• Sequential Mechanisms

58

Bargaining for a Car

Luggage capacity? Two door? Cost? Engine size? Color? Options?

[Figure: buyer and seller haggling over monetary offers]

59

Multiagent PE: Mechanism Design

Incentive to misrepresent preferences

Mechanism design tackles this:

• Design rules of the game to induce behavior that leads to maximization of some objective (e.g., social welfare, revenue, ...)

• Objective value depends on private information held by self-interested agents → Elicitation + Incentives

Applications:

• Auctions, multi-attribute negotiation, procurement problems

• Network protocols, autonomic computing, ...

60

Basic Social Choice Setup

Choice of x from outcomes X

Agents 1..n: type ti ∈ Ti and valuation vi(x, ti)

Type vectors: t ∈ T and t−i ∈ T−i

Goal: optimize social choice function f: T → X

• e.g., social welfare SW(x, t) = Σi vi(x, ti)

Assume payments and quasi-linear utility:

• ui(x, πi, ti) = vi(x, ti) − πi

Our focus: SW maximization, quasi-linear utility

61

Basic Mechanism Design

A mechanism m consists of three components:

• actions Ai

• allocation function O: A → X

• payment functions pi: A → R

m induces a Bayesian game

• m implements social choice function f if, in equilibrium σ: O(σ(t)) = f(t) for all t ∈ T

62

Incentive Compatibility (Truth-telling)

Dominant Strategy IC

• No matter what: agent i should tell the truth

Bayes-Nash IC

• Assume others tell the truth

• Assume agent i has Bayesian prior over others’ types

• Then, in expectation, agent i should tell the truth

Ex-Post IC

• Assume others tell the truth

• Assume agent i knows the others’ types

• Then agent i should tell the truth

63

Properties

Mechanism is Efficient:

• maximizes SW given reported types

• ε-efficient: within ε of optimal SW

Ex Post Individually Rational:

• no agent can lose by participating

• ε-IR: can lose at most ε

64

Direct Mechanisms

Revelation principle: focus on direct mechanisms where agents directly and (in eq.) truthfully reveal their full types

For example, the Groves scheme (e.g., VCG):

• choose the efficient allocation and use the Groves payment function (a sketch follows below)

• implements SWM in dominant strategies

• incentive compatible, efficient, individually rational
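For concreteness, here is a sketch of the Clarke (pivot) instance of the Groves scheme over a finite outcome set; it is an illustration, not the payment function written on the original slide.

```python
# Clarke/pivot instance of the Groves scheme over a finite outcome set:
# pick the welfare-maximizing outcome and charge each agent the externality
# it imposes on the others.
def vcg(outcomes, valuations):
    """valuations: list over agents of dicts v_i[x]; returns (x*, payments)."""
    def sw(x, agents):
        return sum(valuations[i][x] for i in agents)
    agents = range(len(valuations))
    x_star = max(outcomes, key=lambda x: sw(x, agents))
    payments = []
    for i in agents:
        others = [j for j in agents if j != i]
        best_without_i = max(sw(x, others) for x in outcomes)   # Clarke pivot term
        payments.append(best_without_i - sw(x_star, others))    # externality on others
    return x_star, payments
```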

65

Groves Schemes

Strong results: Groves is basically the “only choice” for dominant strategy implementation

• Roberts (1979): only social choice functions implementable in dominant strategies are affine welfare maximizers (if all valuations possible)

• Green and Laffont (1977): must use Groves payments to implement affine maximizers

Implications for partial revelation

66

Issues with Classical Mechanism Design

Computation Costs

• e.g., Winner Determination

Revelation Costs

• Communication

• Computation

• Cognitive

• Privacy

67

Issues with Classical Mechanism Design

Full Revelation and “Quality”

• trade-off revelation costs with Social Welfare

Full Revelation and Incentives

• incentive guarantees depend heavily on full revelation

• need “new” concepts

68

Overview

Single User Elicitation

• Foundations of Local queries

• Bayesian Elicitation

• Regret-based Elicitation

Multi-agent Elicitation

• Background: Mechanism Design

• Partial Revelation Mechanisms

• One-Shot Elicitation

• Sequential Mechanisms

69

Partial Revelation Mechanisms

Full revelation unappealing

A partial type θi is any subset of Ti

A one-shot (direct) partial revelation mechanism:

• each agent reports a partial type θi ∈ Θi

• typically Θi partitions the type space, but this is not required

A truthful strategy: report θi s.t. ti ∈ θi

Goal: minimize revelation, computation, communication by suitable choice of partial types

70

Implications of Roberts

Partial revelation means we can’t generally maximize social welfare

• must allocate under type uncertainty

But if SCF is not an affine maximizer, we can’t expect dominant strategy implementation

What are some solutions?

• relax the solution concept to BNE / ex-post

• relax solution concept to approx incentives

• incremental and “hope for” less than full elicitation

• relax conditions on Roberts results

71

Existing Work on PRMs [Conen, Hudson, Sandholm, Parkes, Nisan & Segal, Blumrosen & Nisan]

Most approaches:

• require enough revelation to determine the optimal allocation and VCG payments, hence can't offer savings in general [Nisan&Segal05]

• Sequential, not one-shot

• specific settings (1-item, combinatorial auctions)

Priority games [Blumrosen&Nisan 02]

• genuinely partial and approximate efficiency

• but very restricted valuation space (1-item)

72

Preference Elicitation in MechDes

We move beyond this by allowing approximately optimal allocation

• specifically, regret-based allocation models

Avoid Roberts by relaxing the solution concept:

• Bayes-Nash equilibrium? NO! [HB-06b]

• Ex-post IC? NO! [HB-06b]

• approximate, ex-post implementation

73

Partial Revelation MD: Impossibility Results

Bayes-Nash Equilibrium

• Theorem [HB-06b]: deterministic PRMs are trivial; randomized PRMs are pseudo-trivial

• Consequences: max expected SW is the same as the best trivial mechanism; max expected revenue is the same as the best trivial mechanism

“Useless”

Ex-Post Equilibrium

• Same

74

Overview

Single User Elicitation

• Foundations of Local queries

• Bayesian Elicitation

• Regret-based Elicitation

Multi-agent Elicitation

• Background: Mechanism Design

• Partial Revelation Mechanisms

• One-Shot Elicitation

• Sequential Mechanisms

75

Regret-based PRMs

In any PRM, how is the allocation to be chosen?

x*(θ) is the minimax optimal decision for partial type vector θ

A regret-based PRM: O(θ) = x*(θ) for all θ

76

Regret-based PRMs: Efficiency

Efficiency not possible with PRMs (unless MR = 0)

• but bounds are quite obvious

Prop: If MR(x*(θ), θ) ≤ ε for all θ, then the regret-based PRM m is ε-efficient for truth-telling agents.

• thus we can trade off efficiency for elicitation effort

77

Regret-based PRMs: Incentives

Can generalize Groves payments

• let fi(θi) be an arbitrary type in θi

Thm: Let m be a regret-based PRM with partial types Θ and a partial Groves payment scheme. If MR(x*(θ), θ) ≤ ε for all θ, then m is ε-ex post incentive compatible.

78

Regret-based PRMs: Rationality

Can generalize Clarke payments as well

Thm: Let m be a regret-based PRM with partial types Θ and a partial Clarke payment scheme. If MR(x*(θ), θ) ≤ ε for all θ, then m is ε-ex post individually rational.

A Clarke-style regret-based PRM gives approximate efficiency, approximate IC (ex post), and approximate IR (ex post)

79

Approximate Incentives and IR

Natural to trade off efficiency for elicitation effort

Is approximate IC acceptable?

• computing a good “lie”? huge computation costs

• if the incentive to deviate from the truth is small enough, then formal, approximate IC ensures practical, exact IC

Is approximate IR acceptable?

• similar argument

Thus regret-based PRMs offer scope for this trade-off

• as long as we can find a good set of partial types

80

Computation and Design/Elicitation

Minimax optimization given a partial type vector θ

• same techniques as for the single-agent case

• varies with setting (experiments: CBO with GAI)

Designing the mechanism

• one-shot PRM: must choose partial types Θi for each agent i

• sequential PRM: need elicitation strategy

• we apply generalization of CS to each task

81

(One-shot) Partial Type Optimization

Designing a PRM: must pick partial types

• we focus on bounds on utility parameters

A simple greedy approach (sketched below):

• Let Θ be the current set of partial type vectors (initially {T})

• Let θ = (θ1, …, θi, …, θn) be the partial type vector with the greatest MMR

• Choose agent i and a suitable split of partial type θi into θ′i and θ′′i

• Replace every vector containing θi by the pair of vectors with θi replaced by θ′i and by θ′′i

• Repeat until the regret bound is acceptable
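A structural sketch of this greedy loop; `mmr`, `choose_split`, and `apply_split` are placeholders for the regret computation and the splitting heuristics described on later slides.

```python
# Greedy design of partial types (a structural sketch of the loop above).
# mmr, choose_split and apply_split are placeholders for the regret
# computation and the splitting heuristics described on later slides.
def design_partial_types(full_type_space, mmr, choose_split, apply_split, epsilon):
    vectors = [full_type_space]                      # initially the single vector {T}
    while True:
        worst = max(vectors, key=mmr)                # vector with greatest minimax regret
        if mmr(worst) <= epsilon:
            return vectors                           # regret bound acceptable: stop
        agent_i, split = choose_split(worst)         # agent i and a split of theta_i
        vectors = apply_split(vectors, agent_i, split)   # replace affected vectors by the pair
```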

82

The Mechanism Tree

[Figure: mechanism tree; the root partial type vector (θ1, …, θi, …, θn) is split into (θ′1, …, θi, …, θn) and (θ′′1, …, θi, …, θn), which are split in turn into (θ′1, …, θ′i, …), (θ′1, …, θ′′i, …), (θ′′1, …, θ′i, …), (θ′′1, …, θ′′i, …)]

83

A More Refined Approach

Simple model has drawbacks:

• exponential blowup (“naïve” partitioning)

• a split of θi that is useful for reducing regret in one partial type vector θ is applied at all partial type vectors

Refinement:

• apply a split only at the leaves where it is “useful”; keeps the tree from blowing up and saves computation

• new splits traded off against “cached” splits

• once done, use either naïve or variable-resolution types for each agent

84

Naïve vs. Variable Resolution

[Figure: naïve vs. variable-resolution partitioning of agent i’s type space along parameters p1 and p2]

85

Heuristic for Choosing Splits

Adopt a variant of the current solution (CS) strategy

Let θ be the partial type vector with max MMR, with optimal solution x* and regret-maximizing witness xw

• only split on parameters of the utility functions of x* or xw

• intuition: focus on parameters that contribute to regret; reducing the upper bound on xw or increasing the lower bound on x* helps

• pick the agent-parameter pair with the largest gap

86

Preliminary Empirical Results

Very preliminary results:

• use only the very naïve algorithm

• single buyer, single seller

• 16 goods specified by 4 boolean variables

• valuation/cost given by a GAI model with two factors of two variables each (buyer and seller factors differ); thus 16 values/costs are specified by 8 parameters

• no constraints on feasible allocations

87

Preliminary Empirical Results

88

Overview

Single User Elicitation

• Foundations of Local queries

• Bayesian Elicitation

• Regret-based Elicitation

Multi-agent Elicitation

• Background: Mechanism Design

• Partial Revelation Mechanisms

• One-Shot Elicitation

• Sequential Mechanisms

89

Sequential PRMs

Optimization of one-shot PRMs is unable to exploit conditional “queries”

• e.g., if the seller’s cost of x is greater than your upper bound, we needn’t ask for your valuation of x

Sequential PRMs

• incrementally elicit partial type information

• apply similar heuristics for designing query policy

• incentive properties somewhat weaker: opportunity to manipulate payments by altering the query path

thus additional criteria can be used to optimize

90

Sequential PRMs: Definition

Set of queries Qi

• a response r ∈ Ri(qi) is interpreted as a partial type θi(r) ⊆ Ti

• history h: a sequence of query-response pairs, possibly followed by an allocation (terminal)

A sequential mechanism m maps:

• nonterminal histories to queries/allocations

• terminal histories to a set of payment functions pi

Revealed partial type θi(h): intersection of θi(r) over the responses r in h

m is partial revelation if there exists a realizable terminal h s.t. θi(h) admits more than one type ti

91

Sequential PRMs: Properties

A strategy σi(hi, qi, ti) selects responses

σi is truthful if ti ∈ θi(σi(hi, qi, ti))

• truthful strategies must be history independent

A (deterministic) strategy profile σ induces a history h

• if h is terminal, then quasi-linear utility is realized

• if the history is unbounded, then assume utility = 0

Regret-based PRM allocation is defined as in the one-shot case

92

Max VCG Payment Scheme

Assume terminal history h

• let θ be the revealed partial type vector at h, and x*(θ) be the allocation

Max VCG payment scheme:

• where the VCG payment is:

93

Incentive Properties

Suppose we elicit type info until the MMR allocation has max regret ε and we use “max VCG”

Define:

Thm: m is ε-efficient, ε-ex post IR, and (ε + Δ(x*(θ)))-ex post IC, with Δ as defined above.

• weaker results due to possible payment manipulation

94

Elicitation Approaches

Two phases:

Standard max-regret-based approaches

• give us bounds on efficiency (ε); no a priori bounds

Regret-based elicitation followed by payment elicitation

• once ε is small enough, elicit additional payment information until the max manipulability is small enough

95

Elicitation Approaches

Direct optimization:

• global manipulability: u(best lie) − u(truth)

• ask queries that directly reduce global manipulability

• can be formulated as a regret-style optimization; analogous query strategies possible

96

Test Domains

Car Rental Problem:

• 1 client, 2 dealers

• GAI valuations/costs: 13 factors, size 1-4

• Car: 8 attributes, 2-9 values each

• Total: 825 parameters

Small Random Problems:

• supplier selection, 1 buyer, 2 sellers

• 81 parameters

97

Results: Car Rental

Initial regret: 99% of optimal SW

Zero regret: 71/77 queries

Avg remaining uncertainty: 92% vs. 64% at zero manipulability

Avg number of parameters queried: 8%

• relevant parameters

• reduces revelation

• improves decision quality

98

Results: Random Problems

99

Contributions

Theoretical framework for partial revelation mechanism design

One-shot mechanisms

• generalize VCG to PRMs (allocation + payments)

• very general payments: secondary objectives

• algorithm to design partial types

Sequential mechanisms

• slightly different model, but similar results

• algorithm to design the query strategy

Viewpoint: why approximate incentives are useful

• approximate decisions trade off cost vs. quality

• formal, approximate IC ensures practical, exact IC

Applicable to general mechanism design problems; empirically very effective

100

PRMs: Future Work

Further investigate splitting / elicitation heuristics

More experimentation

• larger problems

• combinatorial auctions

Formal model of manipulability cost → formal, exact IC

Formal model of revelation costs → explicit revelation vs. efficiency trade-off

Sequentially optimal elicitation

101

Questions ?

References:

Nathanael Hyafil and Craig Boutilier:

• “Regret-based Incremental Partial Revelation Mechanisms”, AAAI 2006

• “One-shot Partial Revelation Mechanisms”, Working Paper, 2006

102

Extra Slides - Part 1

103

Fishburn [1967]: Default Outcomes

Define a default outcome. For any x, let x[I] be the restriction of x to the variables in I, with the remaining variables replaced by their default values:

Utility of x can be written [F67]:

• sum of utilities of certain related “key” outcomes

104

Key Outcome Decomposition

Example: GAI over I = {ABC}, J = {BCD}, K = {DE}

u(x) = u(x[I]) + u(x[J]) + u(x[K]) − u(x[I∩J]) − u(x[I∩K]) − u(x[J∩K]) + u(x[I∩J∩K])

u(abcde) = u(x[abc]) + u(x[bcd]) + u(x[de]) − u(x[bc]) − u(x[∅]) − u(x[d]) + u(x[∅])

u(abcde) = u(abcd0e0) + u(a0bcde0) + u(a0b0c0de) − u(a0bcd0e0) − u(a0b0c0de0)

(a numerical check is sketched below)
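A quick numerical check of this identity for the example above, with random factor tables and default value 0 for every variable (the tables themselves are arbitrary).

```python
import random
from itertools import product

# Numerical check of the key-outcome decomposition for I={A,B,C}, J={B,C,D},
# K={D,E}. Factor tables are random; every variable's default value is 0.
scopes = {"I": ("A", "B", "C"), "J": ("B", "C", "D"), "K": ("D", "E")}
tables = {name: {vals: random.random() for vals in product((0, 1), repeat=len(scope))}
          for name, scope in scopes.items()}

def u(x):
    return sum(tables[n][tuple(x[v] for v in s)] for n, s in scopes.items())

def keep(x, subset):
    """x[S]: keep the variables in subset, reset the rest to the default 0."""
    return {v: (x[v] if v in subset else 0) for v in "ABCDE"}

x = {v: 1 for v in "ABCDE"}
lhs = u(x)
rhs = (u(keep(x, "ABC")) + u(keep(x, "BCD")) + u(keep(x, "DE"))
       - u(keep(x, "BC")) - u(keep(x, "")) - u(keep(x, "D")) + u(keep(x, "")))
assert abs(lhs - rhs) < 1e-9
```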

105

Canonical Decomposition [F67]

This leads to a canonical decomposition of u into new factors u1(x1, x2), u2(x2, x3), …

e.g., for I = {ABC}, J = {BCD}, K = {DE}:

u(abcde) = u(abcd0e0) + [u(a0bcde0) − u(a0bcd0e0)] + [u(a0b0c0de) − u(a0b0c0de0)]

106

Local Queries

Thus, if for some y (where Y =X \ Xi \ C(Xi) )

then for all y’

hence we can legitimately ask local queries:

107

Implications for Minimax Regret

Complicates MMR:

• utility of an outcome depends linearly on the GAI parameters

• but the GAI parameters depend on bounds induced by two types of queries: quadratic constraints

The local pairwise regret notion can be modified:

• to compute r(x[k], x'[k]), set v(x'[k]) to its upper bound and v(x[k]) to its lower bound

• if v(x'[k])↑ > v(x[k])↓: max u(x⊤[k]) and min u(x[k]); otherwise do the opposite

108

Bayesian Utility Function Uncertainty

User’s actual utility u unknown

Assume density P over U = [0,1]n

Given belief state P, EU of decision x is:

Decision making is easy, but elicitation harder?

• must assess expected value of information in query

EU(x, P) = ∫_U u(x) P(u) du

109

Extra Slides - Part 2
