
ONR MURI: NexGeNetSci

From Consensus to Social Learning in Complex Networks

Ali Jadbabaie, Skirkanich Associate Professor of Innovation

Electrical & Systems Engineering and GRASP Laboratory, University of Pennsylvania

First Year Review, August 27, 2009

With Alireza Tahbaz-Salehi and Victor Preciado

Theory → Data Analysis → Numerical Experiments → Lab Experiments → Field Exercises → Real-World Operations

• Theory: first principles, rigorous math, algorithms, proofs
• Data Analysis: correct statistics, but only as good as the underlying data
• Numerical Experiments: simulation; synthetic, clean data
• Lab Experiments: stylized, controlled; clean, real-world data
• Field Exercises: semi-controlled; messy, real-world data
• Real-World Operations: unpredictable; after-action reports in lieu of data

Jadbabaie: collective behavior, social aggregation

http://www.cis.upenn.edu/~ngns

Good news: spectacular progress

• Consensus and information aggregation
• Random spectral graph theory
• Synchronization, virus spreading
• New abstractions beyond graphs:
  – understanding network topology
  – simplicial homology
  – computing homology groups

Consensus, Flocking and Synchronization

Opinion dynamics, crowd control, synchronization and flocking

Flocking and opinion dynamics

• Bounded confidence opinion model (Krause, 2000)
  – Nodes update their opinions as a weighted average of the opinion values of their friends
  – Friends are those whose opinions are already close
  – When will there be fragmentation and when will there be convergence of opinions? (A simulation sketch follows below.)
  – The dynamics change the topology
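Illustrative only: a minimal NumPy sketch of the bounded-confidence update, assuming a synchronous update and a confidence radius `eps` with equal-weight averaging (both are modeling choices for the sketch, not details from the slides).

```python
import numpy as np

def hk_step(x, eps):
    """One synchronous bounded-confidence (Hegselmann-Krause) update:
    each agent averages over all agents within distance eps of its opinion."""
    x_new = np.empty_like(x)
    for i, xi in enumerate(x):
        friends = x[np.abs(x - xi) <= eps]  # "friends" = opinions already close
        x_new[i] = friends.mean()           # equal-weight average
    return x_new

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=100)  # 100 agents, opinions uniform on [0, 1]
for _ in range(50):
    x = hk_step(x, eps=0.15)         # small eps tends to fragment opinions
print(np.unique(np.round(x, 6)))     # the surviving opinion clusters
```

Re-running with a larger eps (say 0.5) typically yields a single cluster, which is exactly the fragmentation-vs-convergence question above.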

Consensus in random networks

• Consider a network with n nodes and a vector of initial values, x(0)
• Consensus using a switching and directed graph G_n(t)
• In each time step, G_n(t) is a realization of a random graph in which each edge appears with probability Pr(a_ij = 1) = p, independently of the others

Consensus dynamics over the random ensemble:

x(k+1) = W_k x(k),  where  W_k = (D_k + I_n)^{-1} (A_k + I_n)

Stationary behavior:

lim_{k→∞} W_k W_{k−1} ⋯ W_1 = 1 v^T  with probability 1,

where v is a random vector; hence the consensus value x* = lim_{k→∞} x_i(k) = v^T x(0) is a random variable.

Despite this simple formulation, very little is known about x* and v.
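As a concrete sketch of these dynamics (assuming the reconstructed weights W_k = (D_k + I)^{-1}(A_k + I); parameter values are arbitrary):

```python
import numpy as np

def er_adjacency(n, p, rng):
    """Symmetric Erdos-Renyi adjacency: each edge appears independently w.p. p."""
    upper = np.triu(rng.random((n, n)) < p, 1)
    return (upper | upper.T).astype(float)

def consensus_run(x0, p, steps, rng):
    """Iterate x(k+1) = W_k x(k), drawing a fresh ER graph at every step."""
    x, n = x0.copy(), len(x0)
    I = np.eye(n)
    for _ in range(steps):
        A = er_adjacency(n, p, rng)
        D = np.diag(A.sum(axis=1))
        W = np.linalg.solve(D + I, A + I)  # row-stochastic: rows sum to 1
        x = W @ x
    return x

rng = np.random.default_rng(1)
x0 = rng.uniform(0, 1, size=10)
print(consensus_run(x0, p=0.3, steps=200, rng=rng))  # entries agree on a random x*
```

Re-running with a different seed gives a different limit, which is precisely the randomness of the consensus value discussed next.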

Random Networks

The graphs could be correlated so long as they are stationary-ergodic.

What about the consensus value?

• A random graph sequence means that the consensus value is a random variable

• Question: What is its distribution?

• A relatively easy case:
  – The distribution is degenerate (a Dirac) if and only if all matrices have the same left eigenvector with probability 1.

• In general, E[x*] = v̄^T x(0), where v̄ is the left eigenvector of E[W_k] associated with the largest eigenvalue (the Perron vector)

Can we say more?

E[W_k ⊗ W_k] for Erdős–Rényi graphs

Define:

• For simplicity of exposition, we illustrate the structure of E[W_k ⊗ W_k] using the case n = 4:

Random Consensus

These entries have the following expressions:

where q = 1 − p and H(p, n) is a special function that can be written in terms of a hypergeometric function (the detailed expression is not relevant to our exposition)

• Defining the parameter

we can finally write the left eigenvector of the expected Kronecker product as:

• Furthermore, substituting the above eigenvector into our original expression for the variance (and after simple algebraic simplifications), we deduce the following final expression as a function of p, n, and x(0):

where

Variance of the consensus value for Erdős–Rényi graphs

• Var(x*) for initial conditions uniformly distributed in [0, 1], n ∈ {3, 6, 9, 12, 15}, and p varying in the range (0, 1] (a Monte Carlo sketch follows the plot below)

Random consensus (plots)

[Figure: Var(x*) as a function of p, for n = 3, 6, 9, 12, 15]
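A Monte Carlo sketch of this experiment (sample counts and step counts are arbitrary choices, not from the slides):

```python
import numpy as np

def consensus_value(n, p, steps, rng):
    """One realization of the random consensus value x* on switching ER graphs,
    with x(0) drawn uniformly from [0, 1]^n."""
    x = rng.uniform(0.0, 1.0, size=n)
    I = np.eye(n)
    for _ in range(steps):
        upper = np.triu(rng.random((n, n)) < p, 1)
        A = (upper | upper.T).astype(float)
        D = np.diag(A.sum(axis=1))
        x = np.linalg.solve(D + I, A + I) @ x
    return x.mean()  # entries have (nearly) converged, so the mean is ~x*

rng = np.random.default_rng(2)
for n in (3, 6, 9):
    for p in (0.2, 0.5, 0.8):
        samples = [consensus_value(n, p, steps=100, rng=rng) for _ in range(500)]
        print(f"n={n:2d}  p={p:.1f}  Var(x*) ~ {np.var(samples):.4f}")
```

The qualitative trend should match the plot: the variance shrinks as either n or p grows.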

What about other random graphs?

Static Model with Prescribed Expected Degree Distribution

• Generalized static models [Chung and Lu, 2003]:
  – Random graph with a prescribed expected degree sequence
  – We can impose an expected degree w_i on the i-th node

Degree distributions are useful to the extent that they tell us something about the spectral properties (at least for distributed computation/optimization)

Edges between nodes i and j appear independently, with probability p_ij = w_i w_j / Σ_k w_k.

[Figure: histograms of adjacency-matrix eigenvalues of Chung-Lu random graphs with 1000, 500, and 100 nodes, over several realizations]

Eigenvalues of Chung-Lu Graph

• What is the eigenvalue distribution of the adjacency matrix for very large Chung-Lu random networks?

Numerical Experiment: plot the histogram of eigenvalues over several realizations of this random graph.

Limiting Spectral Density: an analytical expression is possible only in very particular cases.

Contribution: Estimation of the shape of the bulk for a given expected degree sequence, (w1,…,wn).
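A sketch of this numerical experiment, assuming the standard Chung-Lu edge probabilities p_ij = w_i w_j / Σ_k w_k and an arbitrary example degree sequence:

```python
import numpy as np

def chung_lu_adjacency(w, rng):
    """Chung-Lu random graph: edge (i, j) appears independently with
    probability w_i * w_j / sum(w), capped at 1 (prescribed expected degrees)."""
    w = np.asarray(w, dtype=float)
    P = np.minimum(np.outer(w, w) / w.sum(), 1.0)
    upper = np.triu(rng.random(P.shape) < P, 1)
    return (upper | upper.T).astype(float)

rng = np.random.default_rng(3)
n = 500
w = 12.0 * np.arange(1, n + 1) ** -0.5   # example heavy-tailed expected degrees
eigs = []
for _ in range(10):                       # several realizations, as on the slide
    eigs.extend(np.linalg.eigvalsh(chung_lu_adjacency(w, rng)))
hist, edges = np.histogram(eigs, bins=60)
print(hist)  # bulk of the spectrum; plot hist against edges for the histogram
```

Overlaying the histogram across realizations reveals the limiting bulk shape that the analysis above estimates from (w_1, …, w_n).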

Spectral moments of random graphs and degree distributions

• Degree distributions can reveal the moments of the spectra of graph Laplacians (a numerical sketch follows this list)

• Determine synchronizability

• Speed of convergence of distributed algorithms

• Lower moments do not necessarily fix the support, but they fix the shape

• Analysis of virus spreading (depends on spectral radius of adjacency)

• Non-conservative synchronization conditions on graphs with prescribed degree distributions

• Analytic expressions for spectral moments of random geometric graphs
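A small sketch of the moment connection: the k-th spectral moment is (1/n)·trace(M^k) = (1/n)·Σ_i λ_i^k, and for the adjacency matrix the second moment equals the average degree (the third counts triangles), which is how degree information constrains the shape of the bulk:

```python
import numpy as np

def spectral_moments(M, k_max):
    """Spectral moments m_k = (1/n) * trace(M^k) = mean(lambda_i^k), k = 1..k_max."""
    lam = np.linalg.eigvalsh(M)
    return [float(np.mean(lam ** k)) for k in range(1, k_max + 1)]

rng = np.random.default_rng(4)
n, p = 200, 0.05
upper = np.triu(rng.random((n, n)) < p, 1)
A = (upper | upper.T).astype(float)

print(spectral_moments(A, 4))           # m_1 = 0 (no self-loops), m_2, m_3, m_4
print("average degree:", A.sum() / n)   # equals m_2 exactly
```

The same identity applied to the graph Laplacian links its low-order moments, and hence synchronizability and convergence-speed estimates, to the degree distribution.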

Consensus and Naïve Social Learning

• When is consensus a good thing?
• Need to make sure the update converges to the correct value

Naïve vs. Bayesian

• Naïve learning: just average with neighbors
• Social learning: fuse information with Bayes' rule

• There is a true state of the world, among countably many possibilities

• We start from a prior distribution and would like to update the distribution (our belief about the true state) as more observations arrive

• Ideally, we would use Bayes' rule to do the information aggregation

• This works well when there is one agent (Blackwell and Dubins, 1962), but becomes impossible with more than two!

Locally Rational, Globally Naïve: Bayesian learning under peer pressure

Model Description


Belief Update Rule
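The update equation on this slide did not survive extraction; as a hedged sketch of the kind of rule studied in this line of work (each agent Bayes-updates on its own private signal, then naively averages with its neighbors' beliefs), with all names, shapes, and weights below being illustrative assumptions:

```python
import numpy as np

def belief_update(mu, A, likelihoods, signals):
    """One 'locally Bayesian, globally naive' step (illustrative sketch).

    mu:          (n_agents, n_states) row-stochastic beliefs
    A:           (n_agents, n_agents) row-stochastic trust weights
    likelihoods: (n_agents, n_states, n_signals) signal likelihoods P(s | state)
    signals:     length-n_agents array of observed signal indices
    """
    mu_new = np.empty_like(mu)
    for i in range(mu.shape[0]):
        lik = likelihoods[i, :, signals[i]]      # P(observed signal | each state)
        bayes = mu[i] * lik
        bayes /= bayes.sum()                     # Bayesian posterior on own signal
        neighbors = A[i] @ mu - A[i, i] * mu[i]  # weighted others' beliefs
        mu_new[i] = A[i, i] * bayes + neighbors  # self-weight on Bayes, rest naive
    return mu_new

# Tiny example: 3 agents, 2 states, 2 signals, uniform trust weights
n, S = 3, 2
mu = np.full((n, S), 0.5)
A = np.full((n, n), 1.0 / n)
lik = np.tile(np.array([[0.7, 0.3], [0.4, 0.6]]), (n, 1, 1))  # rows: states
print(belief_update(mu, A, lik, signals=np.array([0, 0, 1])))
```

Because the averaging weights are fixed, the update is locally rational (Bayesian on one's own signal) but globally naive, matching the slide title above.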

Why this update?

Eventually correct forecasts

Eventually-correct estimation of the output!

Why strong connectivity?

No convergence if different people interpret signals differently

Agent N is misled by listening to the less-informed agent B

Example

One can actually learn from others

Learning from others

Information in the i-th signal is only good for distinguishing…

Convergence of beliefs and consensus on correct value!

Learning from others

Summary

Only one agent needs a positive prior on the true state!