TOWARDS SELF-OPTIMIZING FRAMEWORKS FOR
COLLABORATIVE SYSTEMS
Sasa Junuzovic(Advisor: Prasun Dewan)
University of North Carolina at Chapel Hill
2
COLLABORATIVE SYSTEMS
Shared Checkers Game
User1 User2
User1 Enters Command
‘Move Piece’
User1 Sees ‘Move’ User2 Sees ‘Move’
3
PERFORMANCE IN COLLABORATIVE SYSTEMS
Professor Student Candidates’ Day at UNC
UNC Professor Demoing Game to Candidate at Duke
Poor Interactivity
Quits Game
Performance is Important!
4
IMPROVING PERFORMANCE IN EVERYDAY LIFE
Chapel Hill Raleigh
5
WINDOW OF OPPORTUNITY FOR IMPROVING PERFORMANCE
Requirements
Resources
Insufficient Resources
Abundant Resources
Sufficient but Scarce Resources
Improve Performance from
Poor to Good
Always Poor Performance
Always Good Performance
Window of Opportunity [18]
[18] Jeffay, K. Issues in Multimedia Delivery Over Today’s Internet. IEEE Conference on Multimedia Systems. Tutorial. 1998.
6
WINDOW OF OPPORTUNITY IN COLLABORATIVE SYSTEMS
Focus
Window of Opportunity
Requirements
Resources
7
THESIS
For certain classes of applications, it is possible to meet performance
requirements better than existing systems through a new collaborative
framework without requiring hardware, network, or user-interface changes.
8
PERFORMANCE IMPROVEMENTS: ACTUAL RESULT
[Chart: Response Times (ms) per User Move over Time]
Self-Optimizing System Improves Performance
Self-Optimizing System Improves Performance
Again
Good
Bad
Performance
Time
Initially no Performance Improvement
With Optimization vs. Without Optimization
9
WHAT DO WE MEAN BY PERFORMANCE?
[Chart: Response Times (ms) per User Move over Time]
Good
Bad
Performance
Time
What Aspects of Performance are Improved?
With Optimization vs. Without Optimization
10
PERFORMANCE METRICS
Performance Metrics:
Local Response Times [20]
Remote Response Times [12]
Jitter [15]
Throughput [13]
Task Completion Time [10]
Focus
Bandwidth [16]
[10] Chung, G. Log-based collaboration infrastructure. Ph.D. dissertation. University of North Carolina at Chapel Hill. 2002.
[12] Ellis, C.A. and Gibbs, S.J. Concurrency control in groupware systems. ACM SIGMOD Record. Vol. 18 (2). Jun 1989. pp: 399-407.
[13] Graham, T.C.N., Phillips, W.G., and Wolfe, C. Quality Analysis of Distribution Architectures for Synchronous Groupware. Conference on Collaborative Computing: Networking, Applications, and Worksharing (CollaborateCom). 2006. pp: 1-9.
[15] Gutwin, C., Dyck, J., and Burkitt, J. Using Cursor Prediction to Smooth Telepointer Actions. ACM Conference on Supporting Group Work (GROUP). 2003. pp: 294-301.
[16] Gutwin, C., Fedak, C., Watson, M., Dyck, J., and Bell, T. Improving network efficiency in real-time groupware with general message compression. ACM Conference on Computer Supported Cooperative Work (CSCW). 2006. pp: 119-128.
[20] Shneiderman, B. Designing the user interface: strategies for effective human-computer interaction. 3rd ed. Addison Wesley. 1997.
11
LOCAL RESPONSE TIME
Time
Some User Enters Command
That User Sees Output for Command
User1 Enters 'Move Piece'
User1 Sees ‘Move’
Local Response Time
User1 User1
12
REMOTE RESPONSE TIME
Time
Some User Enters Command
Another User Sees Output for Command
User1 Enters 'Move Piece'
User2 Sees ‘Move’
Remote Response Time
User1 User2
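The two metrics on these slides differ only in whose screen shows the output. Stated as a toy sketch (the function names and millisecond timestamps are mine, for illustration only):

```python
# Toy sketch: both metrics are measured from the moment a command is entered.

def local_response_time(input_ms, output_at_input_user_ms):
    # time until the user who entered the command sees its output
    return output_at_input_user_ms - input_ms

def remote_response_time(input_ms, output_at_other_user_ms):
    # time until a *different* user sees the output
    return output_at_other_user_ms - input_ms

# User1 enters 'Move Piece' at t=0; User1 sees it at 80ms, User2 at 210ms.
print(local_response_time(0, 80))    # 80
print(remote_response_time(0, 210))  # 210
```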
13
NOTICEABLE PERFORMANCE DIFFERENCE?
[Chart: Response Times (ms) per User Move over Time]
Time
When are Performance Differences Noticeable?
300ms
180ms
80ms
21ms
Differences
Good
Bad
Performance
With Optimization vs. Without Optimization
14
NOTICEABLE RESPONSE TIME THRESHOLDS
Local Response Times: Differences Below ~50ms Not Noticeable [20][23]
Remote Response Times: Differences Below ~50ms Not Noticeable [17]
[17] Jay, C., Glencross, M., and Hubbold, R. Modeling the Effects of Delayed Haptic and Visual Feedback in a Collaborative Virtual Environment. ACM Transactions on Computer-Human Interaction (TOCHI). Vol. 14 (2). Aug 2007. Article 8.
[20] Shneiderman, B. Designing the user interface: strategies for effective human-computer interaction. 3rd ed. Addison Wesley. 1997.
[23] Youmans, D.M. User requirements for future office workstations with emphasis on preferred response times. IBM United Kingdom Laboratories. Sep 1981.
15
SELF-OPTIMIZING SYSTEM NOTICEABLY IMPROVES RESPONSE TIMES
[Chart: Response Times (ms) per User Move over Time]
Time
300ms
180ms
80ms
Differences
Noticeable Improvements!
With Optimization vs. Without Optimization
16
SIMPLE RESPONSE TIME COMPARISON: AVERAGE RESPONSE TIMES
Average Response Times (User1, User2):
Optimization A: 100ms, 200ms
Optimization B: 200ms, 100ms
By Average, Optimization A = Optimization B (150ms Each)
17
SIMPLE RESPONSE TIME COMPARISON MAY NOT GIVE CORRECT ANSWER
Average Response Times (User1, User2):
Optimization A: 100ms, 200ms
Optimization B: 200ms, 100ms
Response Times More Important For Some Users
If One User's Response Times Matter More, One Optimization Becomes Better than the Other
18
USER’S RESPONSE TIME REQUIREMENTS
External Criteria From Users Needed to Decide Whether Optimization A <, =, or > Optimization B
External Criteria → Required Data:
Favor Important Users → Identity of Users
Favor Local or Remote Response Times → Identity of Users Who Input and Users Who Observe
Arbitrary → Arbitrary
Users Must Provide a Response Time Function that Encapsulates Their Criteria
Self-Optimizing System Provides Predicted Response Times and Required Data
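This selection step can be sketched in a few lines. The function and variable names below are my own illustration, not the framework's API; the idea is just that the system predicts a response-time matrix per candidate optimization and the user-supplied function encodes the external criteria:

```python
# Hypothetical sketch: predictions maps an optimization name to a matrix
# {(input_user, observing_user): predicted response time in ms}; the
# user-supplied function scores a matrix (lower is better).

def pick_optimization(predictions, response_time_function):
    return min(predictions,
               key=lambda opt: response_time_function(predictions[opt]))

# Example criterion: weight User1's local response time twice as heavily.
def favor_user1(matrix):
    return 2 * matrix[("User1", "User1")] + matrix[("User1", "User2")]

predictions = {
    "A": {("User1", "User1"): 100, ("User1", "User2"): 200},
    "B": {("User1", "User1"): 200, ("User1", "User2"): 100},
}
print(pick_optimization(predictions, favor_user1))  # A
```

Under a criterion favoring User2 instead, the same call would pick B, which is the point of the slide: the winner depends on the external criteria, not on the averages.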
19
MAIN CONTRIBUTIONS
[10] Chung, G. Log-based collaboration infrastructure. Ph.D. dissertation. University of North Carolina at Chapel Hill. 2002.
[22] Wolfe, C., Graham, T.C.N., Phillips, W.G., and Roy, B. Fiia: user-centered development of adaptive groupware systems. ACM Symposium on Interactive Computing Systems. 2009. pp: 275-284.
Collaboration Architecture Multicast Scheduling Policy
Studied the Impact on Response Times
Automated Maintenance
Important Response Time Factors
Chung [10]
Better Meet Response Time Requirements than Other Systems!
Wolfe et al. [22]
20
ILLUSTRATING CONTRIBUTION DETAILS: SCHEDULING POLICY
Collaboration Architecture Multicast Scheduling Policy
Studied the Impact on Response Times
Automated Maintenance
Important Response Time FactorsBetter Meet Response Time Requirements than Other Systems!
Illustration of
Contributions
Single-Core
21
SCHEDULING COLLABORATIVE SYSTEMS TASKS
Scheduling Requires Definition of Tasks
Collaborative Systems Tasks External Application Tasks
Typical Working Set of Applications is not Known
We Use an Empty Working Set
Defined by Collaboration Architecture
22
COLLABORATION ARCHITECTURES
U (User Interface): Allows Interaction with Shared State; Runs on Each User's Machine
P (Program Component): Manages Shared State; May or May Not Run on Each User's Machine
23
COLLABORATION ARCHITECTURES
[10] Chung, G. Log-based collaboration infrastructure. Ph.D. dissertation. University of North Carolina at Chapel Hill. 2002.
U
P
Input Output
User Interface
Program Component
Sends Input Commands to Program Component
Each User-Interface must be Mapped to a Program Component [10]
Sends Outputs to User Interface
24
POPULAR MAPPINGS
U1 U2 U3
P 1
Input Output
Centralized Mapping
Master Slave Slave
25
POPULAR MAPPINGS
U 1
P 1
U 2 U 3
P 2 P 3
Input Output
Replicated Mapping
Master Master Master
26
COMMUNICATION ARCHITECTURE
Masters Perform Majority of Communication Task
Replicated: Send Input to All Computers; Centralized: Send Output to All Computers
Communication Model? Push-Based / Pull-Based / Streaming
Unicast or Multicast?
Focus
27
UNICAST VS. MULTICAST
Unicast: Transmission Is Performed Sequentially; If the Number of Users Is Large, Transmission Takes a Long Time
Multicast: Transmission Performed in Parallel; Relieves a Single Computer From Performing the Entire Transmission Task
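The sequential-vs-parallel contrast can be made concrete with a deliberately simplified delay model (my own toy simplification, not the thesis's analytical equations): under unicast the last receiver waits for all of the source's transmissions, while under multicast the wait grows with the depth of the forwarding tree.

```python
import math

def unicast_last_delay(n_receivers, trans_ms):
    # the source transmits to each receiver one after another
    return n_receivers * trans_ms

def multicast_last_delay(n_receivers, fanout, trans_ms):
    # each computer forwards to `fanout` children; levels run in parallel
    depth = math.ceil(math.log(n_receivers + 1, fanout))
    return depth * fanout * trans_ms

print(unicast_last_delay(9, 25))         # 225
print(multicast_last_delay(9, 2, 25))    # 200
```

Even in this crude model multicast wins only when the tree is shallow relative to the number of receivers, echoing the later slides: multicast may improve or degrade response times.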
28
COLLABORATIVE SYSTEMS TASKS
Unicast: Only the Source Has to Both Process and Transmit Commands
Multicast: Any Computer May Have to Both Process and Transmit Commands
29
PROCESSING AND TRANSMISSION TASKS
[9] Begole, J., Rosson, M.B., and Shaffer, C.A. Flexible collaboration transparency: supporting worker independence in replicated application-sharing systems. ACM Transactions on Computer-Human Interaction (TOCHI). Vol. 6 (2). Jun 1999. pp: 95-132.
[14] Greif, I., Seliger, R., and Weihl, W. Atomic Data Abstractions in a Distributed Collaborative Editing System. Symposium on Principles of Programming Languages. 1986. pp: 160-172.
[21] Sun, C. and Ellis, C. Operational transformation in real-time group editors: issues, algorithms, and achievements. ACM Conference on Computer Supported Cooperative Work (CSCW). 1998. pp: 59-68.
Mandatory Tasks (Focus):
Processing of User Commands
Transmission of User Commands
Optional Tasks Related to:
Concurrency Control [14]
Awareness [9]
Consistency Maintenance [21]
User1
User2 User3
30
CPU VS. NETWORK CARD TRANSMISSION
User1
User2 User3
Transmission of a Command
CPU Transmission Task: Schedulable with Respect to CPU Processing
Network Card Transmission Task: Follows CPU Transmission; Parallel with CPU Processing (non-blocking); Not Schedulable (Processing Task in Networking)
31
IMPACT OF SCHEDULING ON LOCAL RESPONSE TIMES
Intuitively, to Minimize Local Response Times, CPU should …
Process Command First
Transmit Command Second
Reason: Local Response Time does not Include CPU Transmission Time on User1’s
Computer
User1
User2 User3
Enters Command
‘Move Piece’
32
IMPACT OF SCHEDULING ON REMOTE RESPONSE TIMES
Intuitively, to Minimize Remote Response Times, CPU should …
Transmit Command First
Process Command Second
Reason: Remote Response Times do not Include Processing Time on User1’s
Computer
User1
User2 User3
Enters Command
‘Move Piece’
33
INTUITIVE CHOICE OF SINGLE-CORE SCHEDULING POLICY
Local Response Times Remote Response Times
Use Process-First Scheduling
Use Transmit-First Scheduling
Use Concurrent Scheduling?
Important
Important
Important Important
34
SCHEDULING POLICY RESPONSE TIME TRADEOFF
Local Response Times: Process-First < Concurrent < Transmit-First
Remote Response Times: Process-First > Concurrent > Transmit-First
Tradeoff
35
PROCESS-FIRST: GOOD LOCAL, POOR REMOTE RESPONSE TIMES
36
TRANSMIT-FIRST: GOOD REMOTE, POOR LOCAL RESPONSE TIMES
37
CONCURRENT: POOR LOCAL, POOR REMOTE RESPONSE TIMES
38
NEW SCHEDULING POLICY
[6] Junuzovic, S. and Dewan, P. Serial vs. Concurrent Scheduling of Transmission and Processing Tasks in Collaborative Systems. Conference on Collaborative Computing: Networking, Applications, and Worksharing (CollaborateCom). 2008.
[8] Junuzovic, S. and Dewan, P. Lazy scheduling of processing and transmission tasks in collaborative systems. ACM Conference on Supporting Group Work (GROUP). 2009. pp: 159-168.
Current Scheduling Policies Trade Off Local and Remote Response Times in an All-or-Nothing Fashion (CollaborateCom 2008 [6])
Need a New Scheduling Policy
Transmit-First Process-First Concurrent
Systems Approach
Lazy (ACM GROUP 2009 [8])
Psychology
39
CONTROLLING SCHEDULING POLICY RESPONSE TIME TRADEOFF
Local Response
Times
Remote Response
Times
Unnoticeable Increase < 50ms
Noticeable Decrease > 50ms
40
LAZY SCHEDULING POLICY IMPLEMENTATION
User1
User2 User3
Enters Command 'Move Piece'
Basic Idea
Benefit: Compared to Process-First, User2’s Remote Response Time Improved and
Others did not Notice Difference in Response Times
Basic Idea: Temporarily Delay Processing; Keep Delay Below Noticeable Threshold
1. Transmit during Delay
2. Process
3. Complete Transmitting
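The three steps above can be sketched as a single decision per command. This is a minimal sketch under assumed names (the 50ms threshold comes from the earlier slides; the function shape and parameters are mine):

```python
NOTICEABLE_MS = 50  # noticeable response-time threshold from the slides

def lazy_handle(cmd, delay_so_far_ms, transmit, process, trans_cost_ms):
    """Handle one command; returns the delay accumulated after this hop."""
    if delay_so_far_ms < NOTICEABLE_MS:
        transmit(cmd)                      # 1. transmit during the delay
        delay_so_far_ms += trans_cost_ms   # processing was postponed this long
        process(cmd)                       # 2. then process
    else:
        process(cmd)                       # delay budget spent: stop delaying
        transmit(cmd)                      # 3. complete transmitting afterwards
    return delay_so_far_ms

order = []
lazy_handle("move", 10, lambda c: order.append("transmit"),
            lambda c: order.append("process"), trans_cost_ms=30)
print(order)  # ['transmit', 'process']
```

With a delay already past the threshold the same call processes first, which is exactly the "transmit before" to "transmit after" transition derived on the later slides.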
41
EVALUATING LAZY SCHEDULING POLICY
Improvements in Response Times of Some Users Without Noticeably Degrading Response Times of Others
Free Lunch
Local Response Times: Process-First < Lazy, but Unnoticeably (By Design)
Remote Response Times: Process-First > Lazy, Noticeably
42
ANALYTICAL EQUATIONS
Mathematical Equations (in Thesis) Rigorously Show Benefits of Lazy Policy
Flavor: Capturing Tradeoff Between Lazy and Transmit-First
Model Supports Concurrent Commands and Type-Ahead
43
ANALYTICAL EQUATIONS: LAZY VS. TRANSMIT-FIRST
Remote Response Time = Sum of Network Latencies on Path + Sum of Intermediate Computer Delays + Destination Computer Delay
(Network Latencies Are Scheduling-Policy Independent)
44
INTERMEDIATE DELAYS
Transmit-First Intermediate Computer Delay
Lazy Intermediate Computer Delay
Case 1: Transmit Before Processing
Case 2: Transmit After Processing
45
INTERMEDIATE DELAY EQUATION DERIVATION
Transmit-First Intermediate Computer Delay
Lazy Intermediate Computer Delay
46
TRANSMIT-FIRST INTERMEDIATE DELAYS
Transmit-First Intermediate Computer Delay = Time Required to Transmit to Next Computer on Path
47
LAZY INTERMEDIATE DELAYS: TRANSMIT BEFORE PROCESSING
If Transmit Before Processing:
Lazy Intermediate Computer Delay = Time Required to Transmit to Next Computer on Path = Transmit-First Intermediate Computer Delay
(Processing Was Delayed iff Sum of Intermediate Delays so Far < Noticeable Threshold)
48
LAZY INTERMEDIATE DELAYS: FROM “TRANSMIT BEFORE” TO “TRANSMIT AFTER” PROCESSING
Processing Is Delayed at Computer k iff Sum of Intermediate Delays so Far < Noticeable Threshold
Delays Add Up: Eventually Sum of Intermediate Delays so Far > Noticeable Threshold, and Processing Is No Longer Delayed
49
LAZY INTERMEDIATE DELAYS: TRANSMIT AFTER PROCESSING
If Transmit After Processing:
Lazy Intermediate Computer Delay = Time Required to Process Command + Time Required to Transmit to Next Computer on Path > Transmit-First Intermediate Computer Delay
50
TRANSMIT-FIRST AND LAZY INTERMEDIATE DELAY COMPARISON
Lazy Intermediate Computer Delay ≥ Transmit-First Intermediate Computer Delay
Transmit-First Intermediate Delays Dominate Lazy Intermediate Delays
51
TRANSMIT-FIRST DESTINATION DELAYS
Transmit-First Destination Computer Delay = Total CPU Transmission Time + Processing Time
(Total CPU Transmission Time Grows with the Number of Computers to Forward to)
52
LAZY DESTINATION DELAYS
Lazy Destination Computer Delay = Processing Delay + Processing Time
(Processing Delay Is Capped at the Noticeable Threshold, Regardless of the Number of Computers to Forward to)
53
TRANSMIT-FIRST AND LAZY DELAY COMPARISON
Userj's Remote Response Time of Command i = Sum of Network Latencies on Path + Sum of Intermediate Computer Delays + Destination Computer Delay
Intermediate Delays Favor Transmit-First; Destination Delay Favors Lazy
Transmit-First and Lazy do not Dominate Each Other
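The comparison on these slides can be condensed into one set of formulas. The symbols are my own shorthand, not the thesis's notation: $\ell_k$ is the network latency into computer $k$ on the path, $t_k$ its CPU transmission time, $p_k$ its processing time, $T$ the noticeable threshold, $d_k$ its intermediate delay, and $D$ the destination delay.

```latex
\begin{align*}
\text{RemoteRT} &= \sum_{k \in \text{path}} \ell_k
  + \sum_{k\ \text{intermediate}} d_k + D_{\text{dest}} \\
d_k^{\mathrm{TF}} &= t_k \\
d_k^{\mathrm{Lazy}} &=
  \begin{cases}
    t_k       & \text{if } \sum_{j<k} d_j^{\mathrm{Lazy}} < T
                \quad\text{(transmit before processing)} \\
    p_k + t_k & \text{otherwise (transmit after processing)}
  \end{cases} \\
D_{\text{dest}}^{\mathrm{TF}} &= t_{\text{total}} + p, \qquad
D_{\text{dest}}^{\mathrm{Lazy}} = \delta + p \ \text{with}\ \delta \le T
\end{align*}
```

Per intermediate hop, lazy is never faster than transmit-first; at the destination, lazy's delay is capped at $T$ while transmit-first's total transmission time grows with the number of computers to forward to. Neither policy dominates, as the slide states.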
54
THEORETICAL EXAMPLE
[Example Topology: 12 Computers; User1 Enters All Commands. Legend (25/50/75ms Scale): Network Card Trans Time, Proc Time, Network Latencies, Noticeable Threshold, CPU Trans Time]
55
THEORETICAL EXAMPLE: LAZY NOTICEABLY BETTER THAN PROCESS-FIRST FOR USER8
[Timeline: Process-First vs. Lazy Remote Response Times on the Example Topology (50-450ms)]
Lazy Noticeably Better than Process-First
56
THEORETICAL EXAMPLE: LAZY ADVANTAGE ADDITIVE
[Timeline: Process-First vs. Lazy (50-450ms)]
Lazy Improvement in Remote Response Times Compared to Process-First is Additive!
More Computers that Delay Processing → More Noticeable Improvement
Lazy Noticeably Better than Process-First
Noticeable
57
THEORETICAL EXAMPLE: LAZY NOTICEABLY BETTER THAN TRANSMIT-FIRST FOR USER2
[Timeline: Transmit-First vs. Lazy Remote Response Times on the Same Topology (100-450ms)]
Lazy May be Noticeably Better than Transmit-First
58
SIMULATIONS
Simulate Performance Using Analytical Model
Response Time Differences Noticeable in Realistic Scenarios?
Need Realistic Simulations!
Theoretical Example Carefully Constructed to Illustrate the Benefit of Lazy
59
SIMULATION SCENARIO: Distributed PowerPoint Presentation
Need Realistic Values of Parameters
Processing and Transmission Times
Measured Values from Logs of Actual PowerPoint Presentations
Single-Core Machines
Presenter Using Netbook
P3 Desktop, P4 Desktop, Netbook
Next-generation Mobile Device
No Concurrent Commands
No Type-Ahead
60
SIMULATION SCENARIO: Distributed PowerPoint Presentation
Presenting to 600 People
Using P4 Desktops and Next Generation Mobile Devices
61
SIMULATION SCENARIO: Distributed PowerPoint Presentation
Six Forwarders Each Sending Messages to 99 Users
62
SIMULATION SCENARIO
[19] p2pSim: a simulator for peer-to-peer protocols. http://pdos.csail.mit.edu/p2psim/kingdata. Mar 4, 2009.
[24] Zhang, B., Ng, T.S.E., Nandi, A., Riedi, R., Druschel, P., and Wang, G. Measurement-based analysis, modeling, and synthesis of the internet delay space. ACM Conference on Internet Measurement. 2006. pp: 85-98.
Distributed PowerPoint Presentation
Used Subset of Latencies Measured between 1740
Computers Around the World [19][24]
Others Around the World
Low Latencies Between Forwarders
63
SIMULATION RESULTS: LAZY VS. PROCESS-FIRST
Number of Lazy Remote Response Times vs. Process-First:
Noticeably Better: 598 (By as Much as 604ms)
Noticeably Worse: 0
Not Noticeably Different: 1 (Lazy Still Better, by 36ms)
Equal to: 0
Local Response Times: Process-First 58ms, Lazy 107ms
Lazy Local Response Times Unnoticeably Worse (49ms) than Process-First
Lazy Dominates Process-First!
64
SIMULATION RESULTS: LAZY VS. TRANSMIT-FIRST AND CONCURRENT
Number of Lazy Remote Response Times vs. Transmit-First and vs. Concurrent:
Noticeably Better: 407
Noticeably Worse: 5
Not Noticeably Different: 187
Equal to: 0
(Differences as Much as 158ms and as Much as 240ms)
Local Response Times: Transmit-First 177ms, Concurrent 118ms, Lazy 107ms
Lazy Local Response Time Noticeably Better (70ms) than Transmit-First; Can Also Be Noticeably Better than Concurrent (in a Different Scenario)
None of Transmit-First, Concurrent, and Lazy Dominates the Others!
65
SIMULATION RESULTS: LAZY VS. TRANSMIT-FIRST AND CONCURRENT
Number of Lazy Remote Response Times Noticeably Better: vs. Transmit-First: 5 (as Much as 158ms); vs. Concurrent: 5 (as Much as 158ms)
Lazy Provides Better Remote Response Times to Four of the Five Forwarding Users
Lazy More "Fair" than Transmit-First and Concurrent
66
AUTOMATICALLY SELECTING SCHEDULING POLICY THAT BEST MEETS RESPONSE TIME REQUIREMENTS
User's Response Time Requirements → Scheduling Policy Used:
Improve as Many Response Times as Possible → Concurrent or Transmit-First
Improve as Many Remote Response Times as Possible without Noticeably Degrading Local Response Times → Lazy
Improve as Many Remote Response Times as Possible without Noticeably Degrading Local Response Times and Remote Response Times of Forwarders → Lazy
67
SELF-OPTIMIZING COLLABORATIVE FRAMEWORK
Collaboration Functionality
Replicated Architecture / Centralized Architecture
Unicast / Multicast
Process-First / Transmit-First / Concurrent / Lazy
Self-Optimizing Functionality
68
SHARING APPLICATIONS
U1 U2
P 1
Client-Side Component
C 1 C 2
Input Output
Centralized Architecture Replicated Architecture
U1 U2
P 1 P 1
C 1 C 2
69
MULTICAST
U 2U 1
P 1
Input Output
P 2
Centralized-Multicast Architecture
C 1 C 2
U1 U2
P 3 P 4
C 3 C 4
Inactive Inactive Inactive
70
SCHEDULING POLICIES
Transmit-First: Transmission (C1) High Priority, Processing (P1) Low Priority
Process-First: Processing (P1) High Priority, Transmission (C1) Low Priority
Concurrent: Equal Priority
Centralized-Multicast Architecture
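On a single core the three policies are orderings of the two tasks. A minimal sketch of that idea (the policy names are from the slides; the function shape is my own illustration, not the framework's priority mechanism):

```python
import threading

def run_policy(policy, process, transmit):
    """Run the processing and transmission tasks in policy order."""
    if policy == "process-first":
        process(); transmit()
    elif policy == "transmit-first":
        transmit(); process()
    elif policy == "concurrent":
        t = threading.Thread(target=transmit)  # interleave both tasks
        t.start(); process(); t.join()

order = []
run_policy("transmit-first",
           process=lambda: order.append("process"),
           transmit=lambda: order.append("transmit"))
print(order)  # ['transmit', 'process']
```

The real system expresses the same choice through thread priorities on the C and P components rather than explicit sequencing; this sketch only shows the resulting task order.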
71
LAZY POLICY IMPLEMENTATION
U1 U2
P1 P2 (Inactive)
C1 C2
Delays Processing While Delay Not Noticeable
Must Know Delay So Far
Source Provides Input Time of Command
Forwarder Computes Time Elapsed Since Input Time
Centralized-Multicast Architecture
Clocks Must be Synchronized
Use Simple Clock Sync Scheme
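The slides only say a "simple clock sync scheme" is used; one common simple scheme (a Cristian-style estimate, my assumption, not necessarily the thesis's) has the forwarder read the source's clock and assume half the round trip as one-way latency:

```python
import time

def estimate_remote_offset(read_remote_clock):
    """Return the offset to add to local time to approximate remote time."""
    t0 = time.time()
    remote = read_remote_clock()    # e.g., an RPC to the source's clock
    t1 = time.time()
    one_way = (t1 - t0) / 2         # assume a symmetric round trip
    return (remote + one_way) - t1

# Sanity check: the "remote" clock is our own clock, so the offset is ~0.
offset = estimate_remote_offset(time.time)
print(abs(offset) < 0.01)
```

With the offset known, the forwarder computes the elapsed delay as its local time plus offset minus the command's input timestamp, and keeps delaying processing while that stays under the noticeable threshold.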
72
SELF-OPTIMIZATION FRAMEWORK: SERVER-SIDE COMPONENT
Analytical Model
Response Time Function
System Manager
Parameter Collector
Provided by Users
Predicted Response Time Matrix
For Each Scheduling Policy
Predict Response Times of All Users
For Commands by Each User
Same as for Simulations
Previously Measured Values
Measured Dynamically
73
CLIENT-SIDE COMPONENT: MEASURING PARAMETERS
Sharing
U 1
P 1
C1
Optimization
Client-Side Component
74
SWITCHING SCHEDULING POLICIES
Transmit-First: Transmission High Priority, Processing Low Priority
Process-First: Processing High Priority, Transmission Low Priority
Concurrent: Equal Priority
Lazy: Lazy Algorithm
75
PERFORMANCE OF COMMANDS ENTERED DURING SWITCH
U1 U2
P 1 P 2
C 1 C 2
Inactive
Transmit-First Transmit-First
Switching Scheduling Policies on All Computers Takes Time
Computers May Temporarily Use Mix of Old and New Policies
Lazy Lazy
Not a Semantic Issue
May Temporarily Degrade Performance
Eventually All Computers Switch to new Policy and Performance Improves
76
MAIN CONTRIBUTIONS: SCHEDULING
Collaboration Architecture Multicast Scheduling
Policy
Studied the Impact on
Response Times
Automated Maintenance
Important Response Time Factors
Analytical Model
Better Meet Response Time
Requirements than Other Systems!
Self-Optimizing System
Designed Lazy Scheduling
Single-Core
Multi-Core ?
77
INTUITIVE CHOICE OF MULTI-CORE SCHEDULING POLICY
Local Response Times Important?
Remote Response Times Important?
Transmit and Process in Parallel
Transmit and Process in Parallel
Important
Important
78
INTUITIVE CHOICE OF MULTI-CORE SCHEDULING POLICY
Local Response Times Important?
Remote Response Times Important?
Transmit and Process in Parallel
Parallelize Processing Task?
Processing Task Is Application-Defined
Important
79
INTUITIVE CHOICE OF MULTI-CORE SCHEDULING POLICY
Local Response Times Important?
Remote Response Times Important?
Transmit and Process in Parallel
Parallelize Transmission Task?
System Defined Transmission Task Divided Among Multiple Cores
No Benefit to Remote Response Times
Difficult to Predict Remote Response Times
Important
80
AUTOMATING SWITCH TO PARALLEL POLICY
Arbitrary Response Time Requirements → Parallel Policy
Analytical Model Predicts; Simulations Show; Experiments Confirm
81
MAIN CONTRIBUTIONS: SCHEDULING
Collaboration Architecture Multicast Scheduling
Policy
Studied the Impact on
Response Times
Automated Maintenance
Important Response Time Factors
Analytical Model
Required
Better Meet Response Time
Requirements than Other Systems!
Self-Optimizing System
Required
Designed Lazy Scheduling
Single-Core
Multi-Core
82
MAIN CONTRIBUTIONS: MULTICAST
Collaboration Architecture Multicast Scheduling
Policy
Studied the Impact on
Response Times
Automated Maintenance
Important Response Time FactorsBetter Meet Response Time
Requirements than Other Systems!
83
MULTICAST CAN HURT REMOTE RESPONSE TIMES
[Multicast Tree over Computers 1-10]
Transmission Performed in Parallel; Quicker Distribution of Commands: Improves Response Times
Multicast Paths Longer than Unicast Paths: Degrades Response Times
Multicast May Improve or Degrade Response Times
84
MULTICAST AND SCHEDULING POLICIES
Traditional Multicast Ignores Collaboration Specific Parameters
Scheduling Policy
Consider Process-First
Response Time Includes Processing Time of Source and Destination
Response Time Includes Processing Time of All Computers on Path from
Source to Destination
Another Reason Why Multicast May Hurt Response Times
85
MULTICAST AND SCHEDULING POLICIES
Multicast and Transmit-First Remote Response Times
Must Transmit Before Processing! If Transmission Costs Are High,
Remote Response Times Suffer Compared to Process-First
Another Reason Why Multicast May Hurt Response Times
Must Support Both Unicast and Multicast
86
SUPPORTING MULTICAST
[3] Junuzovic, S. and Dewan, P. Multicasting in groupware? IEEE Conference on Collaborative Computing: Networking, Applications, and Worksharing (CollaborateCom) . 2007. pp: 168-177.
Must Decouple These Tasks to Support Multicast
Processing Architecture Communication Architecture
Traditional Collaboration Architectures Couple Processing and Transmission Tasks
Masters Perform Majority of
Communication Task
Bi-Architecture Model of Collaborative Systems (IEEE CollaborateCom 2007 [3])
Replicated Centralized
87
MAIN CONTRIBUTIONS: MULTICAST
Collaboration Architecture Multicast Scheduling
Policy
Studied the Impact on
Response Times
Automated Maintenance
Important Response Time FactorsBetter Meet Response Time
Requirements than Other Systems!
Analytical Model
Self-Optimizing System
Required
Bi-Architecture Model
88
PERFORMANCE OF COMMANDS ENTERED DURING COMMUNICATION ARCHITECTURE SWITCH
[Old and New Multicast Trees over Computers 1-10]
Communication Architecture Switch Takes Time
Cannot Stop Old Architecture because of Messages In Transit
Deploy New Architecture in Background
Switch to New Architecture When All Computers Deploy It
Use Old Architecture During Switch
Commands Entered During Switch Temporarily Experience Non-Optimal (Previous) Performance
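The switch protocol on this slide is small enough to sketch directly. The class and method names below are mine; the logic is the slide's: deploy the new communication architecture in the background, keep routing on the old one, and cut over only once every computer has deployed it.

```python
class ArchitectureSwitch:
    """Toy sketch of a background architecture switch."""

    def __init__(self, computers):
        self.computers = set(computers)
        self.ready = set()
        self.active = "old"            # old architecture stays live

    def report_deployed(self, computer):
        self.ready.add(computer)
        if self.ready == self.computers:
            self.active = "new"        # switch once all have deployed

    def route(self, msg):
        # messages in transit keep using whichever architecture is active
        return (self.active, msg)

sw = ArchitectureSwitch(["A", "B", "C"])
sw.report_deployed("A"); sw.report_deployed("B")
print(sw.active)  # old
sw.report_deployed("C")
print(sw.active)  # new
```

Because the old architecture carries traffic until the last acknowledgement arrives, commands entered mid-switch see the previous (non-optimal) performance but are never lost.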
89
MAIN CONTRIBUTIONS: MULTICAST
Collaboration Architecture Multicast Scheduling
Policy
Studied the Impact on
Response Times
Automated Maintenance
Important Response Time FactorsBetter Meet Response Time
Requirements than Other Systems!
90
IMPACT OF MAPPING ON RESPONSE TIMES
[10] Chung, G. Log-based collaboration infrastructure. Ph.D. dissertation. University of North Carolina at Chapel Hill. 2002.
Which Mapping Favors Local Response Times?
Intuitively: Replicated Yes, Centralized No; Empirically Shown Wrong [10]
Experimental Results Alone Impractical: Infinitely Many Collaboration Scenarios
91
MAIN CONTRIBUTIONS: MULTICAST
Collaboration Architecture Multicast Scheduling
Policy
Studied the Impact on
Response Times
Automated Maintenance
Important Response Time FactorsBetter Meet Response Time
Requirements than Other Systems!
Analytical Model
Self-Optimizing System
92
COMMANDS ENTERED DURING SWITCH
[10] Chung, G. Log-based collaboration infrastructure. Ph.D. dissertation. University of North Carolina at Chapel Hill. 2002.
U 2
P 2
C 2
Pause Input During Switch: Simple, but Hurts Response Times (Not if Done During a Break); Inconsistent with Replicated and Centralized Architectures
Run Old and New Configurations in Parallel [10]: Good Performance, but Not Simple; Approach Used in Self-Optimizing System
93
CONTRIBUTIONS
Response Time Requirements
Resources
Insufficient Resources
Abundant Resources
Sufficient but Scarce Resources
Our Contribution
94
MAIN CONTRIBUTIONS
[10] Chung, G. Log-based collaboration infrastructure. Ph.D. dissertation. University of North Carolina at Chapel Hill. 2002.
[22] Wolfe, C., Graham, T.C.N., Phillips, W.G., and Roy, B. Fiia: user-centered development of adaptive groupware systems. ACM Symposium on Interactive Computing Systems. 2009. pp: 275-284.
Collaboration Architecture Multicast Scheduling
Policy
Studied the Impact on
Response Times
Automated Maintenance
Important Response Time FactorsBetter Meet Response Time
Requirements than Other Systems!
Chung [10]
Wolfe et al. [22]
95
MAIN CONTRIBUTIONS
[1] Junuzovic, S., Chung, G., and Dewan, P. Formally analyzing two-user centralized and replicated architectures. European Conference on Computer Supported Cooperative Work (ECSCW). 2005. pp: 83-102.
[2] Junuzovic, S. and Dewan, P. Response times in N-user replicated, centralized, and proximity-based hybrid architectures. ACM Conference on Computer Supported Cooperative Work (CSCW). 2006. pp: 129-138.
[10] Chung, G. Log-based collaboration infrastructure. Ph.D. dissertation. University of North Carolina at Chapel Hill. 2002.
[22] Wolfe, C., Graham, T.C.N., Phillips, W.G., and Roy, B. Fiia: user-centered development of adaptive groupware systems. ACM Symposium on Interactive Computing Systems. 2009. pp: 275-284.
Collaboration Architecture Multicast Scheduling
Policy
Studied the Impact on
Response Times
Automated Maintenance
Important Response Time Factors
New: Lazy Policy (Makes Process-First Obsolete)
New: Bi-Architecture Support for Multicast
Better Meet Response Time
Requirements than Other Systems!
Chung [10]
Wolfe et al. [22]
ECSCW 2005 [1]
ACM CSCW 2006 [2]
96
MAIN CONTRIBUTIONS
[22] Wolfe, C., Graham, T.C.N., Phillips, W.G., and Roy, B. Fiia: user-centered development of adaptive groupware systems. ACM Symposium on Interactive Computing Systems. 2009. pp: 275-284.
Collaboration Architecture Multicast Scheduling
Policy
Studied the Impact on
Response Times
Automated Maintenance
Important Response Time Factors
Analytical Model
New
Better Meet Response Time
Requirements than Other Systems!
Wolfe et al. [22]
97
MAIN CONTRIBUTIONS
Choice of Configuration at Start Time
Choice of Configuration at Runtime
Locked into a Configuration
Help Users Decide
Analytical Model
OR
98
MAIN CONTRIBUTIONS
Collaboration Architecture Multicast Scheduling
Policy
Studied the Impact on
Response Times
Automated Maintenance
Important Response Time Factors
Analytical ModelNew
Implementation Issues
Better Meet Response Time
Requirements than Other Systems!
Self-Optimizing SystemNew
99
MAIN CONTRIBUTIONS
Collaboration Architecture Multicast Scheduling
Policy
Studied the Impact on
Response Times
Automated Maintenance
Important Response Time Factors
Proof of Thesis:For certain classes of applications, it is possible to
meet performance requirements better than existing systems through a new collaborative
framework without requiring hardware, network, or user-interface changes.
Better Meet Response Time
Requirements than Other Systems!
100
CLASSES OF APPLICATIONS IMPACTED
Three Driving Problems
Collaborative Games Distributed Presentations Instant Messaging
Pervasive; Entire Industries Built Around It; Popular
Webex, Live Meeting, etc.
101
CLASSES OF APPLICATIONS IMPACTED
Push-Based Communication
No Concurrency Control, Consistency Maintenance, or Awareness Commands
Three Driving Problems
Collaborative Games Distributed Presentations Instant Messaging
Support Centralized and Replicated Semantics
102
IMPACT
Complex Prototype Shows Benefit of Automating Maintenance of
Processing Architecture
Communication Architecture
Scheduling Policy
Recommend to Software Designers?
Added Complexity Worth It?
Yes if Performance of Current Systems is an Issue
Initial Costs Replicated Semantics
Window of Opportunity Exists?
Yes In Multimedia Networking
Further Analysis Needed for Collaborative Systems
103
WINDOW OF OPPORTUNITY IN FUTURE: IMPACT OF PROCESSING ARCHITECTURE
Processor Power
Processing Costs
Choice of Processing Architecture Not
Important
Processing Costs
Choice of Processing Architecture Important
Demand for More Complex Applications
Some Text Editing Operations Still Slow
104
WINDOW OF OPPORTUNITY IN FUTURE: IMPACT OF COMMUNICATION ARCHITECTURE
Network Speed
Transmission Costs
Choice of Communication Architecture Not
Important
Transmission Costs
Choice of Communication Architecture Important
Demand for More Complex Applications
High Definition Video (Telepresence)
Cellular Networks Still Slow
Fast Links Consume Too Much Power on Mobile
Devices!
105
WINDOW OF OPPORTUNITY IN FUTURE: IMPACT OF SCHEDULING POLICY
Number of Cores
Always Use Parallel Policy
Choice of Scheduling Policy Not Important
Energy Costs Choice of Scheduling Policy Important
Cell Phones, PDAs, and Netbooks still Single Core
High Definition Video Requires Multiple Cores
106
OTHER CONTRIBUTIONS
Performance Simulator~ ns
Teaching
With System, Students Experience Response Times
With Simulations, Students Learn about Response Time
Factors
Large-Scale Experiment Setup Impractical
Large-Scale Simulations Are Simple to Set Up!
107
OTHER CONTRIBUTIONS – INDUSTRY RESEARCH
[4] Junuzovic, S. ,Dewan, P, and Rui., Y. Read, Write, and Navigation Awareness in Realistic Multi-View Collaborations. IEEE Conference on Collaborative Computing: Networking, Applications, and Worksharing (CollaborateCom). 2007. pp: 494-503.
[7] Junuzovic, S., Hegde, R., Zhang, Z., Chou, P., Liu, Z., Zhang, C. Requirements and Recommendations for an Enhanced Meeting Viewer Experience. ACM Conference on Multimedia (MM). 2008. pp: 539-548.
Dissertation Research: Theory and Systems
Microsoft Research: Applications and User Studies
Awareness in Multi-User Editors: IEEE CollaborateCom 2007 [4]; US Patent Application
Meeting Replay: ACM MM 2008 [7]; US Patent Application
Telepresence Systems: Papers in Submission; Patent Applications in Preparation
Framework for Interactive Web Apps: US Patent Awarded; MSR Tech Talk
108
FUTURE WORK
Discover User Performance Requirements
Adjust Configuration Based on User Activity
Facial Expressions
Attention Level
User Requirement Driven Performance Goals
109
FUTURE WORK
Extension to Other Scenarios
Meetings in Virtual Worlds (e.g., Second Life)
Performance Issues with as few as 10 Users
Dynamically Deploy Multicast and Scheduling Policy
Step Closer to Solving Performance Issues
110
FUTURE WORK
Simulations and Experiments
Evaluate Benefit of Simulator for Allocating Resources in Large Online Systems
Can Administrators Make Better Decisions with Simulator?
Large Experimental Test-Bed
Current Public Clusters not Sufficient for Performance Experiments
111
ACKNOWLEDGEMENTS
Scholars For Tomorrow Fellow (2004-2005)
Microsoft Research Fellow (2008-2010)
NSERC PGS D Scholarship (2008-2010)
Committee: James Anderson (UNC Chapel Hill), Nick Graham (Queen's University), Saul Greenberg (University of Calgary), Jasleen Kaur (UNC Chapel Hill), Ketan Mayer-Patel (UNC Chapel Hill)
Advisor: Prasun Dewan (UNC Chapel Hill)
Microsoft and Microsoft Research: Kori Inkpen, Zhengyou Zhang, Rajesh Hegde … many others
112
ACKNOWLEDGEMENTS
Sister, Mom, and Dad
Emily Tribble, Russ Gayle, Avneesh Sud, Todd Gamblin, Stephen Olivier, Keith Lee, Bjoern Brandenburg, Jamie Snape, Srinivas Krishnan, and others
Thanks Everyone!
Professors and Everyone Else in the Department
113
THANK YOU
114
PUBLICATIONS
[1] Junuzovic, S., Chung, G., and Dewan, P. Formally analyzing two-user centralized and replicated architectures. European Conference on Computer Supported Cooperative Work (ECSCW). 2005. pp: 83-102.
[2] Junuzovic, S. and Dewan, P. Response time in N-user replicated, centralized, and proximity-based hybrid architectures. ACM Conference on Computer Supported Cooperative Work (CSCW). 2006. pp: 129-138.
[3] Junuzovic, S. and Dewan, P. Multicasting in groupware? IEEE Conference on Collaborative Computing: Networking, Applications, and Worksharing (CollaborateCom). 2007. pp: 168-177.
[4] Junuzovic, S., Dewan, P., and Rui, Y. Read, Write, and Navigation Awareness in Realistic Multi-View Collaborations. IEEE Conference on Collaborative Computing: Networking, Applications, and Worksharing (CollaborateCom). 2007. pp: 494-503.
[5] Dewan, P., Junuzovic, S., and Sampathkumar, G. The Symbiotic Relationship Between the Virtual Computing Lab and Collaboration Technology. International Conference on the Virtual Computing Initiative (ICVCI). 2007.
[6] Junuzovic, S. and Dewan, P. Serial vs. Concurrent Scheduling of Transmission and Processing Tasks in Collaborative Systems. IEEE Conference on Collaborative Computing: Networking, Applications, and Worksharing (CollaborateCom). 2008.
[7] Junuzovic, S., Hegde, R., Zhang, Z., Chou, P., Liu, Z., and Zhang, C. Requirements and Recommendations for an Enhanced Meeting Viewer Experience. ACM Conference on Multimedia (MM). 2008. pp: 539-548.
[8] Junuzovic, S. and Dewan, P. Lazy scheduling of processing and transmission tasks in collaborative systems. ACM Conference on Supporting Group Work (GROUP). 2009. pp: 159-168.
115
REFERENCES
[9] Begole, J., Rosson, M.B., and Shaffer, C.A. Flexible collaboration transparency: supporting worker independence in replicated application-sharing systems. ACM Transactions on Computer-Human Interaction (TOCHI). Vol. 6(2). Jun 1999. pp: 95-132.
[10] Chung, G. Log-based collaboration infrastructure. Ph.D. dissertation. University of North Carolina at Chapel Hill. 2002.
[11] Chung, G. and Dewan, P. Towards dynamic collaboration architectures. ACM Conference on Computer Supported Cooperative Work (CSCW). 2004. pp: 1-10.
[12] Ellis, C.A. and Gibbs, S.J. Concurrency control in groupware systems. ACM SIGMOD Record. Vol. 18(2). Jun 1989. pp: 399-407.
[13] Graham, T.C.N., Phillips, W.G., and Wolfe, C. Quality Analysis of Distribution Architectures for Synchronous Groupware. Conference on Collaborative Computing: Networking, Applications, and Worksharing (CollaborateCom). 2006. pp: 1-9.
[14] Greif, I., Seliger, R., and Weihl, W. Atomic Data Abstractions in a Distributed Collaborative Editing System. Symposium on Principles of Programming Languages. 1986. pp: 160-172.
[15] Gutwin, C., Dyck, J., and Burkitt, J. Using Cursor Prediction to Smooth Telepointer Actions. ACM Conference on Supporting Group Work (GROUP). 2003. pp: 294-301.
[16] Gutwin, C., Fedak, C., Watson, M., Dyck, J., and Bell, T. Improving network efficiency in real-time groupware with general message compression. ACM Conference on Computer Supported Cooperative Work (CSCW). 2006. pp: 119-128.
[17] Jay, C., Glencross, M., and Hubbold, R. Modeling the Effects of Delayed Haptic and Visual Feedback in a Collaborative Virtual Environment. ACM Transactions on Computer-Human Interaction (TOCHI). Vol. 14(2). Aug 2007. Article 8.
[18] Jeffay, K. Issues in Multimedia Delivery Over Today's Internet. IEEE Conference on Multimedia Systems. Tutorial. 1998.
[19] p2pSim: a simulator for peer-to-peer protocols. http://pdos.csail.mit.edu/p2psim/kingdata. Mar 4, 2009.
[20] Shneiderman, B. Designing the user interface: strategies for effective human-computer interaction. 3rd ed. Addison Wesley. 1997.
[21] Sun, C. and Ellis, C. Operational transformation in real-time group editors: issues, algorithms, and achievements. ACM Conference on Computer Supported Cooperative Work (CSCW). 1998. pp: 59-68.
[22] Wolfe, C., Graham, T.C.N., Phillips, W.G., and Roy, B. Fiia: user-centered development of adaptive groupware systems. ACM Symposium on Interactive Computing Systems. 2009. pp: 275-284.
[23] Youmans, D.M. User requirements for future office workstations with emphasis on preferred response times. IBM United Kingdom Laboratories. Sep 1981.
[24] Zhang, B., Ng, T.S.E., Nandi, A., Riedi, R., Druschel, P., and Wang, G. Measurement-based analysis, modeling, and synthesis of the internet delay space. ACM Conference on Internet Measurement. 2006. pp: 85-98.