Lightweight Remote Procedure Call
BRIAN N. BERSHAD, THOMAS E. ANDERSON, EDWARD D. LAZOWSKA,
AND HENRY M. LEVY
UNIVERSITY OF WASHINGTON
"Lightweight Remote Procedure Call" by B. N. Bershad, T. E. Anderson, E. D. Lazowska, and H. M. Levy, Proceedings of the 12th Symposium on Operating Systems Principles, pp. 102-113, December 1989.
PRESENTER: ALIREZA GOUDARZI
Jan 27, 2010
[email protected]
CS 533, Winter 2020, John Walpole
Waves of Computing Paradigms
Centralized
Virtualization & Cloud
Distributed
Limited capacity, expensive, hard to manage, low reliability, …
Ubiquitous computing
You are here
Distributed Computing
[Diagram: several machines, each running its own OS, communicating with one another]
RPC: Inter-machine communication with abstraction of a procedure call.
• Each machine runs a monolithic-kernel OS
• Distributed applications talk to each other
• Machines (OSes) talk to each other
• The domain of protection is an entire machine
• Internally, the OS is one piece
Emergence of Microkernel in OS
Concerns for modularity and correctness reach into the kernel.
From Wikipedia : http://en.wikipedia.org/wiki/Architecture_of_Windows_NT
Microkernel Implications
• Separate components
• Separate protection domains, which might or might not share the same address space
• Components communicate with each other through conventional IPC (e.g., over shared memory)
What’s wrong with that?
IPC
• Way too much coupling between components
– The logic and semantics of components are exposed
• Defies the purpose of modular design
• Difficult to use
• Control flow transfer is awkward
You are here
What’s the problem?
THE BIG PROBLEM
• Control flow transfer
• Data transfer
• Separate address spaces
• On the same machine's memory
RPC
• Syntax of a procedure call
• Semantics of a procedure call
• But! …
RPC : Control Flow / Data Transfer
Client -> Kernel -> Server
4 context switches, 4 buffer copies, expensive access validation, expensive stubs, expensive scheduling.
Naming differs across systems, but the gist is that these are separate address spaces (which may or may not share physical memory), so giving each side access to the data requires copying it back and forth.
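The copy cost above can be modeled in a few lines. This is a rough, illustrative sketch, not real kernel code: arguments are copied client -> kernel -> server on the call, and the result server -> kernel -> client on the return, i.e. four buffer copies per round trip.

```python
# Toy model of the data path in a conventional same-machine RPC.
# Every cross-domain hop requires a fresh copy of the buffer.
copies = 0

def domain_copy(buf):
    """Stands in for a kernel-mediated copy between protection domains."""
    global copies
    copies += 1
    return bytes(buf)  # a fresh copy of the data

def rpc_round_trip(args):
    kernel_buf = domain_copy(args)        # 1: client stub -> kernel
    server_buf = domain_copy(kernel_buf)  # 2: kernel -> server domain
    result = server_buf[::-1]             # the server does some work
    kernel_buf = domain_copy(result)      # 3: server stub -> kernel
    return domain_copy(kernel_buf)        # 4: kernel -> client domain

out = rpc_round_trip(b"abc")
print(out, copies)  # four copies for a single call/return pair
```

On top of these copies come the two context switches each way, the access validation, and the scheduler invocations listed above.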
Any Alternatives?
Protected Procedure Call
• Each object is protected with an Access Control List (ACL)
• One object can access another if it has the right permissions for it
• An object can then call a procedure if it has sufficient rights to do so
• But …
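The ACL check described above can be sketched as follows. This is a minimal, hypothetical model: the objects (`client_obj`, `file_server`, `disk_driver`) and the `"call"` right are made up for illustration.

```python
# Minimal sketch of a protected procedure call under an ACL model:
# a caller may invoke a procedure on a target object only if the ACL
# grants it "call" rights on that object. Names are illustrative.
acl = {
    ("client_obj", "file_server"): {"call"},  # client may call the server
    ("client_obj", "disk_driver"): set(),     # but not the driver
}

def protected_call(caller, target, proc, *args):
    if "call" not in acl.get((caller, target), set()):
        raise PermissionError(f"{caller} may not call {target}")
    return proc(*args)  # otherwise: an ordinary, same-address-space call

print(protected_call("client_obj", "file_server", len, "hello"))
```

Note that once the check passes, nothing else separates caller and callee, which is exactly the objection raised on the next slide.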
Protected Procedure Call
• It is just a procedure call
• Within a single address space
• One address space = one protection domain
=> NO REAL IPC
Solution
Lightweight Remote Procedure Call
= IPC semantics of a remote procedure call
+ efficiency of a protected procedure call
Lightweight-RPC : The Middle Way
• Observations about RPC systems:
– Most communication is not cross-machine
– Most parameters are simple, not complex
• LRPC is an RPC that exploits the same-machine case
Main Gains
• Simple control transfer (between address spaces)
• Simple data transfer (between address spaces)
• Simple stubs (low overhead)
• Designed with single-machine concurrency in mind
LRPC Call: Control Flow Transfer
Procedure-call semantics (hand-off scheduling)
• The call semantics imply we can safely bypass the scheduler. Why?
• We can invoke the server procedure with only one formal call. How?
LRPC: Call
[Diagram: Client stub -> Kernel -> Server stub; client and server share an A-stack, and the kernel holds the L-stack/E-stack pair, the process ID, and the user thread plus server stack.]
1. Call
2. Copy of parameters (onto the shared A-stack)
3. Kernel prepares the context
4. Upcall into the server
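The four steps above can be sketched as a toy model. Everything here is illustrative (the binding table, the `A-stack` as a Python list, the single `"add"` procedure); it only shows the shape of the path: the client's thread carries the call straight through the kernel into the server's procedure, with the arguments written once onto a stack both domains can read.

```python
# Toy walk-through of an LRPC call: (1) the client calls the stub,
# (2) the stub copies parameters onto a shared A-stack, (3) the kernel
# validates the binding and prepares the server context, (4) it upcalls
# the server procedure, which reads its arguments in place.
a_stack = []                              # argument stack shared by both domains
bindings = {"add": lambda a, b: a + b}    # server procedures, by binding

def client_stub(proc, *args):
    a_stack.extend(args)         # 2: copy parameters onto the A-stack
    return kernel_trap(proc)     # 1: the "call" traps into the kernel

def kernel_trap(proc):
    server_proc = bindings[proc] # 3: validate the binding, prepare context
    return upcall(server_proc)   # 4: upcall into the server's domain

def upcall(server_proc):
    b, a = a_stack.pop(), a_stack.pop()  # server reads args off the A-stack
    return server_proc(a, b)             # the same thread runs the procedure

print(client_stub("add", 2, 3))
```

The key point the model captures is hand-off scheduling: no second thread and no scheduler run are needed, because the client's own thread executes the server's code.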
LRPC Call : Multiprocessor
• TLB-miss problem in the server domain
• CPU context caching on multiprocessor systems
• CPUs idle in the server/client domain
– Migrating the call to a CPU already holding the context is cheaper than flushing and reloading the TLB
• Few locks: only one per A-stack queue in the client stub
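The idle-processor optimization above can be sketched like this. The cost constants and CPU table are made up for illustration; the point is only the policy: prefer an idle processor whose cached context already matches the target domain over flushing and reloading the TLB on the current one.

```python
# Hedged sketch of the multiprocessor optimization: the kernel tracks
# idle CPUs and the domain context each one last ran, and dispatches a
# call to a CPU that already caches the server's context when it can.
# Costs are arbitrary illustrative numbers.
TLB_RELOAD_COST = 100   # flush + reload the TLB on the current CPU
MIGRATION_COST = 10     # hand the call to a CPU caching the context

idle_cpus = {2: "server_domain", 3: "other_domain"}  # cpu -> cached domain

def dispatch_cost(target_domain):
    """Pick an idle CPU caching target_domain, else pay the TLB cost."""
    for cpu, cached in idle_cpus.items():
        if cached == target_domain:
            return cpu, MIGRATION_COST   # reuse the cached context
    return None, TLB_RELOAD_COST         # fall back: flush and reload

print(dispatch_cost("server_domain"))
print(dispatch_cost("unknown_domain"))
```
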
LRPC: Binding
• How does validation work (on call / return)?
• How does the client find the server?
• How does the kernel allocate and manage memory?
LRPC Binding: Client Import
[Diagram: Client stub <-> Kernel <-> Server stub; the kernel holds the A-stacks, the L-stack/E-stack pair, the binding object, and the server's procedure descriptor list (PDL).]
1. Registration: the server registers its interface with the kernel
2. Listening: the server waits for import requests
3. Request / response: the client asks the kernel for a binding
4. Binding: the kernel returns a binding object to the client
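The binding steps above can be modeled in miniature. This is an illustrative simplification: the "PDL" here is just a dict of procedures, and the binding object is a random token standing in for a kernel-issued, unforgeable capability.

```python
# Illustrative model of LRPC binding: the server registers its
# interface and PDL (procedure descriptor list) with the kernel, the
# client requests a binding, and the kernel returns a binding object
# that every later call must present for validation. Simplified names.
import secrets

kernel_registry = {}   # interface name -> PDL (procedures by name)
valid_bindings = {}    # binding object -> interface name

def server_register(interface, pdl):
    kernel_registry[interface] = pdl           # steps 1-2: register, listen

def client_import(interface):
    if interface not in kernel_registry:
        raise LookupError("no such interface")
    binding = secrets.token_hex(8)             # unforgeable binding object
    valid_bindings[binding] = interface        # the kernel remembers it
    return binding                             # steps 3-4: request -> binding

def lrpc_call(binding, proc, *args):
    interface = valid_bindings[binding]        # validated on every call
    return kernel_registry[interface][proc](*args)

server_register("math", {"add": lambda a, b: a + b})
b = client_import("math")
print(lrpc_call(b, "add", 2, 3))
```

Because only the kernel can mint binding objects, presenting one on each call is what makes call/return validation cheap.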
Special Cases
• Cross-machine calls
• Complex/large parameters
• Termination
Q & A
You are here