Parallel Computing
TRANSCRIPT
WELCOME
K.NARAYANA
08Q61A0575
Execution
EXAMPLE :-

// Only this fragment appears on the slide; it assumes a dictionary-like
// object d and a string s declared elsewhere. It prints each meaning found
// for successive indices until get_meaning reports no more entries.
int main()
{
    for (int i = 0; d.get_meaning(i, s) != 0; ++i)
        cout << (i + 1) << ": " << s << "\n";
    return 0;
}
Traditionally, software has been written for serial computation: a program runs on a single computer, executing its instructions one after another.
Load Balancing
Parallel Computer Memory Architectures:
Distributed Memory
There are different ways to classify parallel computers. One widely used scheme, Flynn's taxonomy, classifies them along the two independent dimensions of Instruction and Data:
• SISD – Single Instruction, Single Data
• SIMD – Single Instruction, Multiple Data
• MISD – Multiple Instruction, Single Data
• MIMD – Multiple Instruction, Multiple Data
ADVANTAGES
• In the simplest sense, parallel computing is the simultaneous use of multiple CPUs to solve a computational problem:
A problem is broken into discrete parts that can be solved concurrently
Each part is further broken down to a series of instructions
Instructions from each part execute simultaneously on different CPUs
Synchronization
Problem decomposition
Data Dependencies
Parallel Computing Overheads
Problem decomposition
Data Dependencies
Dep illustrates a data dependency: the statement computing d uses c, which is produced by the previous statement, so the two cannot execute in parallel.

function Dep(a, b)
    c := a · b
    d := 3 · c
end function

In NoDep, every statement depends only on the inputs a and b, so all of them can execute in parallel.

function NoDep(a, b)
    c := a · b
    d := 3 · b
    e := a + b
end function
Conclusion
• Parallel computing can dramatically reduce the time needed to solve suitable problems.
• There are many different approaches and models of parallel computing.
• Parallel computing is the future of computing.
• It makes it possible to solve larger problems than a single processor could handle.
THANK YOU