Caching: Andrew Security, Andrew Scale and Performance, Sprite Performance

Post on 21-Dec-2015


Caching

• Andrew Security
• Andrew Scale and Performance
• Sprite Performance

Andrew File System

Sprite

Network File System

Andrew File System

• AFS, AFS2, Coda
• 1983 to present, Satya its champion
• Ideas spread to other systems, NT

Security Terms

• Release, Modification, Denial of Service

• Mutual suspicion, Modification, Conservation, Confinement, Initialization

• Identification, Authentication, Privacy, Nonrepudiation

System Components

• Vice: secure servers
• Virtue: protected workstations
• Venus: virtual file system
• Authentication Server

Andrew Encryption

• DES - private keys
• E[msg,key], D[msg,key]
• Local copy of secret key
• Exchange of keys doesn't scale
– Web of trust extends to lots of servers
– Pairwise keys unwieldy
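The "pairwise keys unwieldy" point is quadratic growth: every pair of principals needs its own shared secret. A back-of-the-envelope sketch (the function name is illustrative):

```python
# Why pairwise secret keys don't scale: each of n principals must share
# a distinct key with every other, so the key count grows as n*(n-1)/2.
def pairwise_keys(n: int) -> int:
    return n * (n - 1) // 2

print(pairwise_keys(10))    # 45 keys for a small site
print(pairwise_keys(5000))  # 12,497,500 at Andrew's target scale
```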

Andrew Authentication

• Username sent in the clear
• Random number exchange
– E[X,key] sent to server (Vice)
– D[E[X,key],key] = X
– E[X+1,key] to client (Venus)
• BIND exchanges session keys
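The random-number exchange above can be sketched as a toy challenge-response round trip. This is a minimal sketch, not the AFS implementation: a XOR cipher stands in for DES, and the function names are assumptions.

```python
# Toy stand-ins for E[msg,key] and D[msg,key]; XOR is its own inverse,
# so D[E[x,k],k] = x holds, which is all the handshake relies on.
def encrypt(value: int, key: int) -> int:
    return value ^ key

def decrypt(value: int, key: int) -> int:
    return value ^ key

def handshake(shared_key: int, challenge: int) -> bool:
    # Client (Venus) sends E[X, key] to the server (Vice).
    msg = encrypt(challenge, shared_key)
    # Vice decrypts to recover X, proving it holds the same key.
    x = decrypt(msg, shared_key)
    # Vice replies with E[X+1, key]; Venus checks the increment.
    reply = encrypt(x + 1, shared_key)
    return decrypt(reply, shared_key) == challenge + 1

print(handshake(shared_key=0x5EC7, challenge=42))  # True
```

Each side demonstrates knowledge of the shared key without ever sending it; only the encrypted challenge and its incremented reply cross the wire.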

Authentication Tokens

• Description of the user
• ID, timestamp valid/invalid
• Used to coordinate what should be available from Vice (server) to Virtue (client)
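A token of this shape - user identity plus timestamps bounding validity - can be sketched as below. The field names are assumptions for illustration, not the actual AFS token layout.

```python
from dataclasses import dataclass

# Illustrative AFS-style authentication token: identifies the user and
# carries timestamps that bound when the token is valid.
@dataclass
class AuthToken:
    user_id: int
    issued_at: float
    expires_at: float

    def valid(self, now: float) -> bool:
        return self.issued_at <= now < self.expires_at

token = AuthToken(user_id=1042, issued_at=0.0, expires_at=3600.0)
print(token.valid(now=1800.0))  # True: inside the validity window
print(token.valid(now=7200.0))  # False: expired
```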

Access Control

• Hierarchical groups
– Project/shared accounts discouraged
• Positive/negative rights
• U(+) - U(-)
• VMS linear list & rights IDs
• Prolog engine in NT
• Netware has better admin feedback
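The U(+) - U(-) rule means a user's effective rights are the positive rights accumulated from group memberships minus any negative entries. A minimal sketch with set difference (the right names are illustrative):

```python
# AFS-style positive/negative rights: effective rights are U(+) - U(-),
# so a negative entry overrides anything granted through groups.
def effective_rights(positive: set[str], negative: set[str]) -> set[str]:
    return positive - negative

granted = {"read", "lookup", "write"}  # U(+) from group memberships
denied = {"write"}                     # U(-) negative entry
print(effective_rights(granted, denied))  # {'read', 'lookup'}
```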

Resource Usage

• Network not an issue
– Distributed DoS 'hard'
• Server high-water mark
– Violations by SU programs tolerated
– Daemon processes given 'stem' account
• Workstations not an issue
– User files in Vice

Other Security Issues

• XOR for session encryption
• PC support via special server
• Diskless workstations avoided

Enhancements

• Cells (NT Domains)
• Kerberos
• Protection Server for user administration

Sprite Components

• Client: local disk, client cache
• Server: server disk, server cache

Sprite Design

• Cache in client and server RAM
• Kernel file system modification
– Affects system/paging and user files
• Cache size negotiated with VM
• Delayed 30s write-back
– Called 'laissez-faire' by Andrew
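The 30-second delayed write-back policy can be sketched as follows: dirty blocks sit in the client cache and become eligible for write-back only once they age past the delay. A minimal sketch; the class and method names are assumptions, not Sprite's actual interface.

```python
WRITE_BACK_DELAY = 30.0  # seconds, per Sprite's delayed-write policy

class DelayedWriteCache:
    def __init__(self):
        self.dirty = {}  # block id -> (data, time dirtied)

    def write(self, block: int, data: bytes, now: float):
        # Writes land in the cache; nothing goes to the server yet.
        self.dirty[block] = (data, now)

    def flushable(self, now: float) -> list[int]:
        # Blocks whose dirty age exceeds the delay are due for write-back.
        return [b for b, (_, t) in self.dirty.items()
                if now - t >= WRITE_BACK_DELAY]

cache = DelayedWriteCache()
cache.write(1, b"hello", now=0.0)
cache.write(2, b"world", now=20.0)
print(cache.flushable(now=35.0))  # [1]: only block 1 has aged 30s
```

The payoff is that short-lived files (temporaries deleted within the delay) may never reach the server at all, at the cost of losing up to 30 seconds of writes on a crash.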

NFS Comparison

• Presumed optimized
• RPC access semantics
– NFS uses UDP, others TCP
• Sprite targeting 100+ nodes
• Andrew targeting 5,000+ nodes

Andrew Scale and Performance

• Dedicated server process per client
• Directory redirection for content
• Whole file copy in cache

Problems already…

• Context switching in server
• TCP connection overhead
– Session done by kernel
• Painful to move parts of VFS to other servers
– Volume abstraction fixed this later

Cache Management

Andrew:
• Write on close
• No concurrent write
• Versioning
• User level

Sprite:
• Delayed write
• Cache disabled
• Versioning
• Kernel level
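Sprite's "cache disabled" entry refers to its handling of concurrent write-sharing: the server tracks which clients have a file open and turns client caching off for that file as soon as a writer shares it with any other client. A sketch of that bookkeeping, with illustrative names:

```python
# Sketch of Sprite's concurrent-write-sharing rule: the server knows all
# opens, and disables client caching once a writer shares a file.
class SpriteServer:
    def __init__(self):
        self.readers = {}  # file -> set of client ids with read opens
        self.writers = {}  # file -> set of client ids with write opens

    def open(self, file: str, client: int, write: bool):
        table = self.writers if write else self.readers
        table.setdefault(file, set()).add(client)

    def cacheable(self, file: str) -> bool:
        w = self.writers.get(file, set())
        r = self.readers.get(file, set())
        # Safe unless a writer shares the file with another client.
        return not w or (len(w) == 1 and r <= w)

srv = SpriteServer()
srv.open("f", client=1, write=True)
print(srv.cacheable("f"))  # True: a single writer may cache
srv.open("f", client=2, write=False)
print(srv.cacheable("f"))  # False: write-sharing disables caching
```

Because the kernel sees every open and close, Sprite can make this call centrally - something Andrew's write-on-close, user-level design sidesteps by forbidding concurrent write altogether.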

Function Distribution

• TestAuth - validate cache: 61.7%
• GetFileStat - file status: 26.8%
• Fetch - server to client: 4.0%
• Store - client to server: 2.1%

Performance Improvements

• Virtue caches directory
• Local copy assumed correct
• File IDs, not names, exchanged
• Lightweight Processes (LWP)
– Context data record on server

Andrew Benchmarks

Sprite Throughput

Sprite Benchmarks

Cache Impact - Client

Cache Impact - Server

Cache Impact - Net

Comparison

General Considerations

• 17-20% slower than local
• Server bottleneck
• Scan for files and read almost all local
• 6-8x faster vs no cache
• Server cache extends local cache
• Remote paging fast as local disk!
• 5x users/server

Fini
