Visualization System for Measuring Concurrency in Distributed


ABSTRACT

Distributed programs written using MPI are usually evaluated only in terms of time and message complexities. However, these measures address only the quantitative aspect of a distributed program. The qualitative aspect, that is, how well the execution is actually distributed, can be ascertained only by measuring the concurrency of the distributed program.

An approach to calculate the concurrency of a distributed computation is presented in [Raynal 1992]. According to this approach, concurrency can be calculated using two abstractions, the cone and the cylinder. The cone abstraction is associated with individual events in the distributed computation, and the cylinder abstraction is associated with the whole computation. Both abstractions make use of three values: weight, volume and height. With the help of these three values, the measure of concurrency can be calculated for both the cone and the cylinder abstraction.

In this paper, a visualization tool is proposed that analyzes the concurrency of a distributed program using the above approach; the accuracy and efficiency of the tool are evaluated using four benchmark programs.

TABLE OF CONTENTS

Abstract
Table of Contents
List of Figures
1. Introduction
2. Background and Related Research
   2.1 Concurrency measures
      2.1.1 Cone and cylinder abstractions
      2.1.2 Calculating concurrency
3. System Design
   3.1 Graphical User Interface
      3.1.1 Authorization
      3.1.2 Input GUI
      3.1.3 Progress Bar
      3.1.4 Output GUI
   3.2 Calculating Concurrency
      3.2.1 Parsing the clog2 file
      3.2.2 Parsing the events file
      3.2.3 Calculating concurrency
   3.3 Concurrency Database
4. Experimental Results
   4.1 Vector clock generation
   4.2 Leader election algorithm
   4.3 Random number of internal events
   4.4 Same number of internal events
5. Conclusion
Acknowledgements
References
Appendix A
Appendix B
Appendix C

LIST OF FIGURES

Figure 1. Time event diagram
Figure 2. System Design
Figure 3. Username and Hostname
Figure 4. Request for Password
Figure 5. Incorrect Username/Password
Figure 6. Input GUI
Figure 7. Progress Bar
Figure 8. Output GUI
Figure 9. Concurrency VS Number of processes
Figure 10. Concurrency VS Number of events
Figure 11. Concurrency VS Time (in ms)
Figure 12. Concurrency VS Time (in ms)
Figure 13. Concurrency VS Time (in ms)
Figure 14. Snapshot of clog2 file
Figure 15. Parser Design
Figure 16. Internal working of Events
Figure 17. Concurrency VS Number of processes
Figure 18. Concurrency VS Number of events
Figure 19. Individual Graph
Figure 20. Combined Graph
Figure 21. Stacked Graph
Figure 22. Concurrency VS Number of processes
Figure 23. Concurrency VS Number of events
Figure 24. Individual Graph
Figure 25. Combined Graph
Figure 26. Stacked Graph
Figure 27. Concurrency VS Number of processes
Figure 28. Concurrency VS Number of events
Figure 29. Individual Graph
Figure 30. Combined Graph
Figure 31. Stacked Graph
Figure 32. Concurrency VS Number of processes
Figure 33. Concurrency VS Number of events
Figure 34. Individual Graph
Figure 35. Combined Graph
Figure 36. Stacked Graph


1. INTRODUCTION

In sequential programs, the efficiency of a program is described using two measures: time and space complexity. In parallel applications, the performance of a program or algorithm is measured by calculating the performance gain of the parallel algorithm over the sequential algorithm written for the same problem. This measure is known as speedup, and it takes into consideration the number of processes running in the parallel program [Bertsekas 1989].

In distributed algorithms, time and message complexities are the two measures used to identify the efficiency of a distributed program. However, these two measures are quantitative and do not answer any questions about the quality of the distributed program. Without a measure of the quality of the distributed program, we cannot know whether the execution is well distributed or whether it will have delays due to synchronization constraints [Raynal 1992].

These questions can be answered by finding the concurrency of the distributed computation; an approach to measure concurrency is presented in [Raynal 1992]. According to this approach, the degree of concurrency can be calculated by quantifying the synchronization delay. A synchronization delay is a delay that may exist between two successive events in the same process.

In order to measure the concurrency, the above approach makes use of two

abstractions, cone and cylinder, which are used to quantify the synchronization delay and

find the concurrency measure. Cone abstraction is associated with individual events,


whereas the cylinder abstraction is associated with the whole computation. Three values

are associated with both the abstractions: weight, volume and height.

The cone and cylinder abstractions will be discussed further in section 2. The design of the visualization system that automates the calculation of concurrency using this approach will be discussed in section 3. The visualization tool was run on several benchmark programs; the evaluation of the benchmark programs using the visualization system will be discussed in section 4.


2. BACKGROUND AND RELATED RESEARCH

In [Lamport 1978], Lamport proposed the causality relation (also known as the happened-before relation), denoted by "→", which describes the causal ordering of events. According to the happened-before relation, if two events a and b are in the same process and a happened before b, then a → b. If a is the event of sending a message m in one process and b is the event of receiving that message m in another process, then a → b. The causality relation is transitive, so if a → b and b → c, then a → c.

To implement the concept of the causality relation in a distributed system, Lamport proposed logical clocks in 1978. Logical clocks are used to implement a partial ordering of events with minimal overhead. In a system using logical clocks, if an event happens before another event in the same process or in another process, then the timestamp of the first event will be less than the timestamp of the second event. There are three types of events that can happen in a distributed computation: send, receive and internal events. A send event is the event of sending a message m to another process. A receive event is the event of receiving the message m from another process. An internal event is an event that occurs within a process without any message being sent or received. The timestamp condition above is known as the correctness condition, and logical clocks must always satisfy it. The drawback of logical clocks is that when there exist two events a and b in different processes and the timestamp of a is less than the timestamp of b, we cannot conclude that a → b.

In [Mattern 1988, Fidge 1991], Colin Fidge and Friedemann Mattern independently proposed vector clocks, which overcome this limitation of logical clocks. A vector clock of a system of N processes is a vector of N logical clocks, one clock per process, and a local copy of the vector is kept in each process. Initially all clocks are set to zero. In the case of an internal event, the process increments its own logical clock in the vector by 1. In the case of a send event, the process increments its own logical clock in the vector by 1 and also sends the entire vector clock along with the message. In the case of a receive event, the process increments its own logical clock in the vector by 1 and updates each element in its vector by taking the maximum of the value in its own vector clock and the value in the vector received with the message.
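
For illustration, consider two processes P0 and P1 whose vector clocks start at [0, 0]. If P0 performs an internal event, its clock becomes [1, 0]. If P0 then sends a message, its clock becomes [2, 0] and the vector [2, 0] travels with the message. When P1 receives that message, it first increments its own entry, giving [0, 1], and then takes the component-wise maximum with the received vector, giving [2, 1].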

However, vector clocks only give us the timestamps of events. We can use these timestamp values to determine the ordering between two events, but the concurrency of the distributed computation cannot be read directly from the vector clock elements.

2.1 Concurrency measures

While calculating the concurrency measure, it is assumed that message passing is instantaneous. It is also assumed that each event consumes approximately the same amount of computing time; this computing time is known as a time unit.

Figure 1. Time event diagram [Raynal 1992]


In Figure 1, there are three processes and each process contains some events. The delay between two events is known as a synchronization delay. The purpose of a concurrency measure is to effectively count the number of synchronization delays. This idea was first introduced in [Charron-Bost 1989].

2.1.1 Cone and cylinder abstractions

In order to effectively measure synchronization delays, the cone and cylinder abstractions were introduced. The cone abstraction deals with the concurrency measure of an individual event, while the cylinder abstraction deals with the concurrency measure of the entire computation.

From [Raynal 1992], three values are associated with both abstractions: weight, volume and height. In the cone abstraction, the weight, denoted by wt, is the exact number of events that causally precede an event e; in the cylinder abstraction, it is the number of events produced in the total computation. In the cone abstraction, the volume, denoted by vol, is the maximum number of events that could possibly precede an event e; in the cylinder abstraction, it is the maximum number of events that could be produced in the entire computation. In the cone abstraction, the height, denoted by ht, is the number of events on the longest causal path ending with event e; in the cylinder abstraction, it is the largest logical time associated with any event of the computation.


2.1.2 Calculating concurrency

From [Raynal 1992], the concurrency measure is calculated for the cone and cylinder abstractions using the following formulas.

For the cone abstraction:

    α'e(e) = ( vol(cone(e)) − wt(cone(e)) ) / ( vol(cone(e)) − ht(cone(e)) )

For the cylinder abstraction:

    α'(C) = ( vol(cyl(C)) − wt(cyl(C)) ) / ( vol(cyl(C)) − ht(cyl(C)) )

In the above equations, the numerator denotes the total number of synchronization delays that have actually occurred and the denominator denotes the maximum number of synchronization delays that could possibly have occurred.

Once the values of α'e(e) and α'(C) are calculated, they have to be subtracted from 1 in order to get the concurrency measure. The reason is that, as seen in both [Charron-Bost 1989, Fidge 1990], when α = 1 the computation is said to be maximally concurrent and when α = 0 the computation is said to be sequential. Hence, by subtracting from 1, the concurrency measure calculated in [Raynal 1992] is made compatible with [Charron-Bost 1989, Fidge 1990].
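
For illustration, with hypothetical values: suppose the cone of an event e has vol(cone(e)) = 12, wt(cone(e)) = 9 and ht(cone(e)) = 4. Then α'e(e) = (12 − 9) / (12 − 4) = 0.375, and the concurrency of e is 1 − 0.375 = 0.625, i.e. 62.5%.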


3. SYSTEM DESIGN

The goal of the visualization system is to give the user a way to analyze the

concurrency of a distributed computation. The input to the visualization system is a clog2

trace file generated by a program written using MPICH2. The clog2 format is the default log format of the Multi-Processing Environment (MPE), a suite of performance analysis tools for programs written for MPI [MPI website]. The visualization system was written entirely in Java using the Eclipse IDE. The concurrency measure database was created using a MySQL database, and the bridge between Java and MySQL was implemented with Hibernate. Third-party Java libraries were also used: JFreeChart to display graphs and JSch to establish the connection with the Linux server.

As shown in Figure 2, the user initiates the application by authorizing a connection to the Linux server, entering the hostname, username and password. Once the connection is established with the server, a GUI is displayed prompting the user to enter the name of the MPI program, the number of processes and an optional command-line argument. The application can access the MPI program on the server (written in C/C++) only when it is stored in a folder called "Project" and has a makefile. The application then executes the MPI program on the server multiple times; after each run, the clog2 file is converted from binary to text format using the "clog2_print" UNIX command and its contents are written to a text file on the client machine. With the help of these clog2 files, the visualization system measures the concurrency of the distributed computation and stores the concurrency measures in the concurrency measure database. When all the concurrency measure values have been calculated, the output GUI is displayed. From the output GUI, the user can select a graph to view. Each step of the system design in Figure 2 will be discussed in detail in the following sections.

Figure 2. System Design


3.1 Graphical User Interface

The Graphical User Interface (GUI) provides the user a way to interact with the visualization system. The GUI consists of four panels:

1) Authorization

2) Input GUI

3) Progress Bar

4) Output GUI

3.1.1 Authorization

When the application is initialized, it prompts the user for the hostname and username, as shown in Figure 3. The username and hostname of the server should be entered in the format "username@hostname", where the part before the @ is the username for the server and the part after the @ is the hostname of the server. By default, the application takes the name of the local machine as the username and "penguin.tamucc.edu" as the hostname; however, the user can change the username and hostname at any time before proceeding.

Figure 3. Username and Hostname

In the next step, the user is prompted for the password to make a connection to the

Linux server as shown in Figure 4. The user would have to enter the correct password to

make a successful connection with the server. If the user leaves the field blank and

proceeds, he will be prompted again to enter the correct password.


Figure 4. Request for Password

If the user enters the wrong username, hostname or password, the application displays an error message, as shown in Figure 5. In this case, the user is prompted again to enter the correct username and password.

Figure 5. Incorrect Username/Password
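
A minimal sketch of how this authorization step can be implemented with the JSch library is shown below (the class and method names here are illustrative, not necessarily those used in the tool):

import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class ServerConnection
{
    // Illustrative sketch: opens an SSH session to the Linux server using the
    // username, hostname and password collected by the GUI.
    public static Session connect(String user, String host, String password) throws Exception
    {
        JSch jsch = new JSch();
        Session session = jsch.getSession(user, host, 22);
        session.setPassword(password);
        // Accept the server key without a known_hosts file (an assumption made for this sketch).
        session.setConfig("StrictHostKeyChecking", "no");
        session.connect();   // throws JSchException if the credentials are wrong
        return session;
    }
}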

3.1.2 Input GUI

The name of the program, the number of processes and any command-line argument are the three essential inputs needed to run an MPI program. The input GUI prompts the user to enter the name of the program, the number of processes and an optional command-line argument, as shown in Figure 6.


Figure 6. Input GUI

Once the generate button is pressed, the user's input is used to construct 31 text files. The first text file contains the UNIX command to enter the folder "Project" and the mpirun statement (the command that executes the MPI program) to run the program with the given input. After the execution of the MPI program, the UNIX command "clog2_print" is run to convert the clog2 file from binary to text format and print it on the screen. This information is stored on the local machine as a text file.

The next 15 text files also contain the UNIX commands to enter the folder "Project", but the number of processes in the mpirun statement is varied each time. By varying the number of processes, a graph can be plotted depicting the change in concurrency with the change in the number of processes.

The last 15 text files also contain the UNIX commands to enter the folder "Project", but the command-line argument in the mpirun statement is varied each time. By varying the command-line argument, a graph can be plotted depicting the change in concurrency with the change in the number of events.

The 31 text files are created on a temporary basis; they are deleted by the application soon after use.
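
A simplified sketch of how one such command file could be constructed is shown below (the class, method and file names are illustrative; only the "cd Project", mpirun and clog2_print commands come from the description above):

import java.io.FileWriter;
import java.io.IOException;

public class CommandFileBuilder
{
    // Illustrative sketch: writes the shell commands for a single run
    // (enter the "Project" folder, run the MPI program, dump the clog2 trace as text).
    public static void writeRunScript(String fileName, String program,
                                      int numProcesses, String cmdLineArg) throws IOException
    {
        try (FileWriter out = new FileWriter(fileName))
        {
            out.write("cd Project\n");
            out.write("mpirun -np " + numProcesses + " ./" + program + " " + cmdLineArg + "\n");
            // clog2_print converts the binary clog2 trace to text on standard output.
            out.write("clog2_print " + program + ".clog2\n");
        }
    }
}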


3.1.3 Progress Bar

Once the user has filled in the input fields of the input GUI, the application has to connect to the server and execute the MPI program 31 times, and the concurrency measure then has to be calculated for all 31 runs. Executing and analyzing all 31 runs may take a lot of time. This is where the progress bar comes into play: it displays the current status of the application to the user, as shown in Figure 7.

Figure 7. Progress Bar

3.1.4 Output GUI

When the progress bar reaches 100%, the application displays the output GUI. The output GUI consists of six buttons: five of them correspond to a type of graph and the sixth is the exit button, as shown in Figure 8.


Figure 8. Output GUI

The first button, when clicked, displays a graph with the percentage of concurrency on the y-axis and the number of processes on the x-axis, as shown in Figure 9.

Figure 9. Concurrency VS Number of processes

The second button, when clicked, displays a graph with the percentage of concurrency on the y-axis and the number of events on the x-axis, as shown in Figure 10.


Figure 10. Concurrency VS Number of events

The third button, when clicked, displays a graph with the percentage of concurrency on the y-axis and time (in milliseconds) on the x-axis, as shown in Figure 11.

Figure 11. Concurrency VS Time (in ms)


The fourth button, when clicked, displays a graph with the percentage of concurrency on the y-axis and time (in milliseconds) on the x-axis; however, the concurrency measure of each process is displayed separately, as shown in Figure 12.

Figure 12. Concurrency VS Time (in ms)

The fifth button, when clicked, displays a graph with the percentage of concurrency on the y-axis and time (in milliseconds) on the x-axis; however, this graph is a collection of smaller graphs stacked one over the other, where each sub-graph corresponds to a process, as shown in Figure 13.

Figure 13. Concurrency VS Time (in ms)
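
A minimal sketch of how one of these graphs can be produced with JFreeChart is shown below (the data values and class name are illustrative):

import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartPanel;
import org.jfree.chart.JFreeChart;
import org.jfree.chart.plot.PlotOrientation;
import org.jfree.data.xy.XYSeries;
import org.jfree.data.xy.XYSeriesCollection;
import javax.swing.JFrame;

public class ConcurrencyChartSketch
{
    public static void main(String[] args)
    {
        // Illustrative data: concurrency (%) for different numbers of processes.
        XYSeries series = new XYSeries("Concurrency");
        series.add(2, 10.0);
        series.add(4, 18.5);
        series.add(8, 27.0);

        XYSeriesCollection dataset = new XYSeriesCollection(series);
        JFreeChart chart = ChartFactory.createXYLineChart(
                "Concurrency VS Number of processes",
                "Number of processes", "Concurrency (%)",
                dataset, PlotOrientation.VERTICAL, true, true, false);

        JFrame frame = new JFrame("Output GUI sketch");
        frame.setContentPane(new ChartPanel(chart));
        frame.pack();
        frame.setVisible(true);
    }
}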


3.2 Calculating Concurrency

This is the main part of the system design; all the processing and calculations happen here. The clog2 text files created on the local machine, as described in section 3.1.2, are first passed through a parser. This parser extracts all the necessary information from the clog2 files and writes the extracted information to a text file. The newly generated text file is then passed through another parser which generates the vector clock for each event; by making use of the vector clocks, the concurrency measures of the cylinder and cone abstractions can be calculated.

3.2.1 Parsing the clog2 file

The original clog2 file is passed through this parser to find the size of the communicator and the numbers assigned to the start and end events of an internal event. This is done by scanning each line of the clog2 file for specific words. The first word is "max_comm_word_size"; the value assigned to this word is the size of the communicator, which is extracted and stored in the variable "size". The words "s_et" and "e_et" correspond to the start event and end event respectively, and the numbers assigned to the start and end events are extracted, as shown in Figure 14.


Figure 14. Snapshot of clog2 file

Once we have the size, start event and end event, we can scan the document and search only for the required information. The parser scans each line, searching for occurrences of "send", "recv", the start event and the end event. The lines that contain these words are written to another text file, which is named after the program name entered by the user in the input GUI followed by "_required info". This newly created text file is used by the application to measure the concurrency of the distributed computation.
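
A simplified sketch of this filtering pass is shown below (the class and method names are illustrative; the real parser also extracts the communicator size and the start/end event numbers first):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;

public class ClogParserSketch
{
    // Illustrative sketch: copies only the lines of the clog2 text dump
    // that describe send, receive, start-event or end-event records.
    public static void extractRequiredInfo(String clogTextFile, String outputFile,
                                           String startEvent, String endEvent) throws IOException
    {
        try (BufferedReader in = new BufferedReader(new FileReader(clogTextFile));
             FileWriter out = new FileWriter(outputFile))
        {
            String line;
            while ((line = in.readLine()) != null)
            {
                if (line.contains("send") || line.contains("recv")
                        || line.contains(startEvent) || line.contains(endEvent))
                {
                    out.write(line + System.lineSeparator());
                }
            }
        }
    }
}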

3.2.2 Parsing the events file

The text file created in the previous section is passed through a second parser, known as "InfoParser". This parser scans the text file line by line, determining which event occurs on each line: when the parser encounters "send" in a line, it is a send event; if it encounters "recv", it is a receive event; and if it encounters the end event, it is an internal event, as shown in Figure 15.


Figure 15. Parser Design

If the event is a send event, the "Send Event" method is called, which updates the vector clock of the sender process. It first checks whether the clock has been initialized: if the clock has already been initialized it calls the "Update Clock" method, otherwise it calls the "New Clock" method. The vector clock of the receiver process is updated similarly if the event is an internal event. In the case of a receive event, the receiver process is updated similarly to the send and internal events; however, it involves an additional method call, "Receiver Update", as shown in Figure 16.

The parser creates two vector clocks for each event in the distributed computation. The first vector clock, known as "LocalClock", behaves like an ordinary vector clock. The second vector clock, known as "WeightClock", is special: it keeps track of the synchronization delays between events.

Method Line separator

When a line is read from the text file created in section 3.2.1, the parser determines whether the event is a send, receive or internal event by looking at the type of the message. Once the type of the event is known, the parser calls the respective method: the send event method for a send, the receive event method for a receive, and similarly for an internal event.

The corresponding event method then makes use of the line separator to scan and analyze the line. The line separator extracts and stores the timestamp of the event and the ranks of the sender and receiver processes.

Figure 16. Internal working of Events


Method Internal event

If the event is an internal event, it first checks whether the receiver process has been initialized. If the receiver process is not initialized, the internal event calls the "New Clock" method; if it is already initialized, it calls the "Update Clock" method.

Method Send event

If the event is a send event, it first checks whether the sender process has been initialized. If the sender process is not initialized, the send event calls the "New Clock" method; if it is already initialized, it calls the "Update Clock" method.

The above procedure works well when the send, receive and internal events happen in order. But if there are multiple sends occurring one after the other, the above procedure would calculate incorrect vector clock values. This problem was solved by using an array list to store the vector clock after each "Update Clock". For example, if there are three sends occurring one after the other, the vector clock for each send is stored in the array list, and when the corresponding receive event is encountered it uses the array list to get the matching vector clock value to update itself.
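
A minimal sketch of this buffering idea, assuming sends are matched to receives in FIFO order per sender rank (the class and method names are illustrative):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class PendingSendClocks
{
    // One FIFO queue of vector clocks per sender rank.
    private final Map<Integer, Deque<int[]>> pending = new HashMap<>();

    // Called after "Update Clock" for a send event: remember the sender's clock.
    public void recordSend(int senderRank, int[] clockSnapshot)
    {
        pending.computeIfAbsent(senderRank, r -> new ArrayDeque<>())
               .addLast(clockSnapshot.clone());
    }

    // Called by the receive event: fetch the matching send clock, if any.
    public int[] takeMatchingSend(int senderRank)
    {
        Deque<int[]> queue = pending.get(senderRank);
        return (queue == null || queue.isEmpty()) ? null : queue.removeFirst();
    }
}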

Method Receive event

If the event is a receive event, it first checks whether the receiver process has been initialized. If the receiver process is not initialized, the receive event calls the "New Clock" method; if it is already initialized, it calls the "Update Clock" method. After updating the vector clock of the receiver process, the receive event calls "Receiver update" to update its value by comparing it with the vector clock of the corresponding send event.

Method Receiver update

Receiver update is called to update the vector clock of the receiver process by comparing it with the vector clock of the sender process. In the case of multiple send events, as described above, receiver update first retrieves the corresponding send-event vector clock from the array list; each position of the receiver's vector clock is then compared with the corresponding position of the sender's vector clock, and the maximum of the two values is stored in the receiver's vector clock.

The above procedure only works when the corresponding send event has already occurred before the receive event. In some cases, it was noticed that the clog2 text file contained receive events occurring before their send events. This problem was solved by simulating the send event. For example, if a receive event occurs before its send event at the very start of the clog2 file, the vector clock of the receive event is updated with a vector that has 1 in the i-th position, where i is the rank of the sender process, and zero in the remaining positions. If the receive event occurs before the send event in the middle of the clog2 file, the vector clock of the receive event is updated with the latest vector clock of the sender process in which the i-th position has been incremented by 1 to simulate the occurrence of the send event.

Method New clock

New clock is called to initialize the vector clock. It first creates a list whose size is equal to the size of the communicator and fills it with zeroes. The i-th position in the list is then incremented by 1, where i is the rank of the process.

Method Update clock

Update clock is called to update the vector clock. It retrieves the latest vector clock of process i and then increments the i-th position of that vector clock.
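
A minimal sketch of these clock operations, assuming the clock is kept as an int array indexed by process rank (the class and method names are illustrative):

import java.util.Arrays;

public class VectorClockOps
{
    // "New Clock": create a clock of communicator size, all zeroes,
    // with a 1 in the position of the owning process.
    public static int[] newClock(int communicatorSize, int rank)
    {
        int[] clock = new int[communicatorSize];   // initialized to zero by default
        clock[rank] = 1;
        return clock;
    }

    // "Update Clock": copy the latest clock of process `rank`
    // and increment its own position by 1.
    public static int[] updateClock(int[] latestClock, int rank)
    {
        int[] clock = Arrays.copyOf(latestClock, latestClock.length);
        clock[rank] = clock[rank] + 1;
        return clock;
    }

    // "Receiver update": take the component-wise maximum with the matching
    // send clock (used after updateClock for a receive event).
    public static void receiverUpdate(int[] receiverClock, int[] sendClock)
    {
        for (int i = 0; i < receiverClock.length; i++)
        {
            receiverClock[i] = Math.max(receiverClock[i], sendClock[i]);
        }
    }
}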

3.2.3 Calculating concurrency

In this section, the concurrency measures are calculated for both the cone and cylinder abstractions, and the methods used to calculate them are described.

Method ConcCal

ConcCal is the method that calculates the measure of concurrency. It takes weight, volume and height as arguments and substitutes them into the formula for the measure of concurrency [Raynal 1992]. After the value is calculated, it is subtracted from one; this is done to make the concurrency measure compatible with the other concurrency measures [Raynal 1992]. The resulting value is then returned to the method that called it.

Method Cylinder abstraction

After all the values have been calculated for "LocalClock" and "WeightClock", the method "CylinderCal" is called. This method first takes the i-th position of the clock of each process i and stores these values in an array; it then adds the elements of the array to find the weight of the cylinder. The height of the cylinder is calculated by taking the "WeightClock" of the last event of the distributed computation and finding the maximum value among its elements. The volume is calculated by simply multiplying the height by the number of processes. Once the weight, volume and height are known, the concurrency can easily be calculated by passing the three values as arguments to the ConcCal method, which, after calculating the concurrency of the cylinder, returns it back to "CylinderCal".

Method Cone abstraction

Since the cone abstraction is associated with individual events, its concurrency must be calculated after the completion of each event. This is done with separate methods for weight, volume and height. After the completion of each event in "LocalClock", the method "WeightCal" is called; this method calculates the weight by adding all the values in the "LocalClock" of that event. The weight of the event is then added to the list called "Weight". After the completion of each event in "WeightClock", the method "VolHeightCal" is called; this method calculates the height by subtracting 1 from the i-th position of the "WeightClock" of the corresponding event, where the event e belongs to process pi. The volume is calculated by adding all the values of the "WeightClock" of the corresponding event and then subtracting 1 from the sum. After calculating the volume and height, the values are added to the lists "Volume" and "Height" respectively. This is repeated for each event in the distributed computation.

Once the weight, volume and height are known for each event, the method "ConeCal" is called. This method removes the first element from each of the lists "Weight", "Volume" and "Height" and passes the three values as arguments to the method "ConcCal", which, after calculating the concurrency, returns it back to "ConeCal". This is repeated until the three lists are empty, giving the measure of concurrency for each event in the distributed computation.
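
A minimal sketch of the concurrency calculation itself, using the formula from section 2.1.2 (the class name is illustrative, and a computation with more than one process is assumed so that volume is greater than height):

public class ConcurrencyCalc
{
    // "ConcCal": alpha' = (volume - weight) / (volume - height),
    // and the returned concurrency is 1 - alpha' (section 2.1.2).
    // Assumes volume > height, which holds when there is more than one process.
    public static double concCal(double weight, double volume, double height)
    {
        double alphaPrime = (volume - weight) / (volume - height);
        return 1.0 - alphaPrime;
    }

    // "CylinderCal" sketch: weight = total events, height = longest chain
    // (largest logical time), volume = height * number of processes.
    public static double cylinderCal(int totalEvents, int maxLogicalTime, int numProcesses)
    {
        double weight = totalEvents;
        double height = maxLogicalTime;
        double volume = height * numProcesses;
        return concCal(weight, volume, height);
    }
}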

3.3 Concurrency Database

As we have seen in section 3.1.3, the MPI program is executed 31 times. In its first run the MPI program is executed using the input values given by the user. In the next 15 runs, the MPI program is executed with the number of processes incremented by one in each run. In the last 15 runs, the MPI program is executed with the command-line argument incremented by five in each run. After the program has executed 31 times, we end up with 31 concurrency values, one for each run of the program. The concurrency measure of the run that used the user's own input is stored in a variable in the program. The remaining 30 concurrency values, however, need to be stored in database tables so that they can later be used during the generation of graphs. For this we make use of a very simple database containing just two tables, fixed_event and fixed_processes.

Table fixed_event

This table stores the concurrency measures of the runs in which the number of processes was incremented in each run. The schema of this table is:

fixed_event (sno, name_of_file, no_of_processes, concurrency)

Table fixed_processes

This table stores the concurrency measures of the runs in which the command-line argument was incremented in each run. The schema of this table is:

fixed_processes (sno, name_of_file, no_of_events, concurrency)
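
A minimal sketch of how one of these tables might be mapped through Hibernate is shown below (annotation-based JPA mapping is assumed; the class and field names are illustrative):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "fixed_event")
public class FixedEventRow
{
    @Id
    @GeneratedValue
    @Column(name = "sno")
    private int sno;

    @Column(name = "name_of_file")
    private String nameOfFile;

    @Column(name = "no_of_processes")
    private int noOfProcesses;

    @Column(name = "concurrency")
    private double concurrency;

    // Getters and setters omitted for brevity.
}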


4. EXPERIMENTAL RESULTS

The visualization system was tested using four benchmark programs. This section describes the four benchmark programs and evaluates the results of the different runs performed on them.

4.1 Vector clock generation

This program simulates a distributed computation where the ordering of events is done using vector clocks. It takes an integer command-line argument that represents the total number of external events that should take place in the program. The program has two functions, manager and worker. The manager is responsible for sending the first message to a random process; that process updates its vector clock with the received vector, performs a random number of internal events, then chooses a random process and sends it its updated vector. When a process encounters the last message, it sends a done message to process 0, which then sends a request message to all the processes in the communicator asking them to send their vector clocks; the processes respond with their vector clocks, and finally the manager (process 0) prints the vector clock of each process. When the program is executed, the final output shows the vector clocks of each process after the given number of external events.

Figure 17 shows the graph of the percentage of concurrency with respect to the number of processes. It can be seen from the graph that changing the number of processes changes the concurrency of the distributed computation.


Figure 17. Concurrency VS Number of processes

Figure 18 shows the graph of the percentage of concurrency with respect to the number of events. It can be seen from the graph that changing the number of events changes the concurrency of the distributed computation.

Figure 18. Concurrency VS Number of events


Figure 19 shows the graph for the percentage of concurrency with respect to time

in milliseconds. It can be seen from the graph that the concurrency of this program is zero

most of the time, which tells us that the program is mostly sequential.

Figure 19. Individual Graph

Figure 20 shows the graph for the percentage of concurrency with respect to time

in milliseconds. It can be seen from the graph that the concurrency of each process is zero

most of the time, which tells us that the program is mostly sequential.

Figure 20. Combined Graph


Figure 21 shows the graph for the percentage of concurrency with respect to time

in milliseconds. It can be seen from the graph that the concurrency of process 0 was 1 at the beginning but fell to 0. The remaining processes are at zero from the beginning, which tells us that the program is mostly sequential.

Figure 21. Stacked Graph

4.2 Leader election algorithm

In this program there are two functions, manager and worker. The manager is responsible for assigning a unique identifier to each process and then sending a message to process 1 to initialize the election algorithm. The worker function is responsible for electing the leader: once the election initialization message is received from the manager, it starts the election algorithm. The election algorithm performs three checks, comparing the received identifier with the process's own identifier. If the received identifier is less than its own identifier, the process forwards its own identifier to its neighbor. If the received identifier is greater than its own identifier, it simply forwards the received identifier to its neighbor. Finally, if the received identifier is equal to its own identifier, it means the process has the highest identifier; it elects itself the leader and forwards an elected message to its neighbor. The elected message goes around the ring until it reaches the process that was elected leader, and the program terminates.

Figure 22 shows the graph of the percentage of concurrency with respect to the number of processes. It can be seen from the graph that changing the number of processes changes the concurrency of the distributed computation.

Figure 22. Concurrency VS Number of processes

Figure 23 shows the graph of the percentage of concurrency with respect to the number of events. It can be seen from the graph that changing the number of events changes the concurrency of the distributed computation. Since this program does not take any command-line arguments, the graph is plotted by counting the events occurring in each run and finding the concurrency, so in some cases the number of events can be the same for two or more runs.


Figure 23. Concurrency VS Number of events

Figure 24 shows the graph for the percentage of concurrency with respect to time

in milliseconds. It can be seen from the graph that the concurrency of this program

oscillates between 0 and 1 at the beginning and then slowly decreases with time.

Figure 24. Individual Graph


Figure 25 shows the graph for the percentage of concurrency with respect to time

in milliseconds. It is interesting to see in the graph that process 0 had a concurrency of 100% while the concurrency of the other processes was in the range of 0 to 10%.

Figure 25. Combined Graph

Figure 26 shows the graph for the percentage of concurrency with respect to time

in milliseconds. It can be seen from the graph that the concurrency of process 0 was 100% and the remaining processes had a maximum concurrency of 10%.


Figure 26. Stacked Graph

4.3 Random number of internal events

In this program, each process sends a message to all the other processes at the start of the program, and whenever a process receives a message it performs a random number of internal events. The range from which the random number of internal events is selected is given as a command-line argument. The program terminates when all the sent messages have been received.

Figure 27 shows the graph of the percentage of concurrency with respect to the number of processes. It can be seen from the graph that changing the number of processes changes the concurrency of the distributed computation.


Figure 27. Concurrency VS Number of processes

Figure 28 shows the graph of the percentage of concurrency with respect to the number of events. It can be seen from the graph that changing the number of events changes the concurrency of the distributed computation. The graph is plotted by counting the events occurring in each run and finding the concurrency; in this case all the runs had the same number of events.


Figure 28. Concurrency VS Number of events

Figure 29 shows the graph for the percentage of concurrency with respect to time

in milliseconds. It can be seen from the graph that this program had a better concurrency measure than the previous benchmark programs.

Figure 29. Individual Graph


Figure 30 shows the graph for the percentage of concurrency with respect to time

in milliseconds. It can be seen from the graph that the concurrency of process 0 falls to zero and remains there until the end of the program, while the other processes show a good amount of concurrency.

Figure 30. Combined Graph

Figure 31 shows the graph for the percentage of concurrency with respect to time

in milliseconds. It can be seen from the graph that the computations performed in process 0 were not as concurrent as the computations performed in the remaining processes.


Figure 31. Stacked Graph

4.4 Same number of internal events

In this program, each process sends a message to all the other processes at the start of the program, and whenever a process receives a message it performs n internal events. The number of internal events n is given as a command-line argument. The program terminates when all the sent messages have been received.

Figure 32 shows the graph of the percentage of concurrency with respect to the number of processes. It can be seen from the graph that changing the number of processes changes the concurrency of the distributed computation. In this case, we can conclude from the graph that as the number of processes increases, the concurrency decreases.


Figure 32. Concurrency VS Number of processes

Figure 33 shows the graph of the percentage of concurrency with respect to the number of events. It can be seen from the graph that changing the number of events changes the concurrency of the distributed computation. The graph is plotted by counting the events occurring in each run and finding the concurrency, so in some cases the number of events can be the same for two or more runs.


Figure 33. Concurrency VS Number of events

Figure 34 shows the graph for the percentage of concurrency with respect to time

in milliseconds. It can be seen from the graph that the concurrency of this program varies

a lot in the middle portion of the execution. However, the concurrency measure stabilizes

towards the end.

Figure 34. Individual Graph


Figure 35 shows the graph for the percentage of concurrency with respect to time

in milliseconds. It can be seen from the graph that the concurrency of each process stays in roughly the same range for most of the execution.

Figure 35. Combined Graph

Figure 36 shows the graph for the percentage of concurrency with respect to time

in milliseconds. It can be seen from the graph that the concurrency of processes 1, 3 and 4 increases at around the same time, followed by processes 0 and 2 at around 10 ms; they then remain at around the same level until termination.


Figure 36. Stacked Graph


5. CONCLUSION

In this paper, a visualization system to measure the concurrency of a distributed computation has been implemented. The visualization system is based on the approach proposed in [Raynal 1992]. In this approach, the concurrency of a distributed computation can be calculated either for the cone abstraction or for the cylinder abstraction. The cone abstraction is associated with individual events and the cylinder abstraction is associated with the entire computation. In order to measure concurrency in either abstraction, the values of weight, volume and height must be calculated.

The proposed visualization system is designed to be very user friendly. The user simply enters the input values in the user interface and all the necessary calculations are done by the tool. The user can use this tool to track the concurrency of a program, find the locations where concurrency is lost, and then optimize the code to make it more concurrent.

The tool was tested using four benchmark programs. It was able to accurately measure the concurrency for both the cylinder and cone abstractions. By looking at the graphs generated by the tool, we were able to easily differentiate between a sequential program and a program that was written concurrently.

The visualization system can be further improved by implementing a scanner at the very start of the program. The function of this scanner would be to scan the clog2 file, find any inconsistencies and try to rectify them. The most common inconsistency in a clog2 file is the occurrence of a receive event before the respective send event. In such a case, the visualization system will not be able to generate the correct vector clocks, which will result in incorrect concurrency measures. This inconsistency was found mainly in test programs whose overall concurrency is greater than 30%. For now we tackle this problem by simulating a send event on the fly whenever such an inconsistency is detected. But for programs with a very high concurrency measure, the chances of this inconsistency occurring are very high. In such cases, having a scanner that finds and rectifies the inconsistency would add to the robustness of the visualization system.

For each test program, the visualization tool must be run from the beginning, even if we want to run the same program again. Test programs can sometimes be large and may take a long time to analyze. In the future, this problem can be solved by having a proper database to store the concurrency values of each program. An option could then be added to the GUI where the user selects the name of a program that has already been run through the tool, and the tool simply loads the values from the database and displays them instead of calculating them again.

The tool was tested using programs where the maximum concurrency measure was 45%. In the future, the tool can be tested with programs having concurrency greater than 90%; this will help to better evaluate the accuracy and efficiency of this visualization tool.


ACKNOWLEDGEMENT

The preparation of this report and the completion of the project were successful because of the never-ending support and guidance of Dr. Michael Scherger, Assistant

Professor of the Department of Computing Sciences, Texas A&M University – Corpus

Christi.

I would like to express my sincere thanks to Dr. Ahmed M. Mahdy, Texas A&M

University – Corpus Christi for his suggestions, comments and guidance throughout the

project. His support has tremendously helped to ensure the success of the project.

I would like to express my sincere thanks to Dr. David Thomas, Associate

Professor of Computing Sciences, Texas A&M University – Corpus Christi, for his

unending support and warm wishes that helped me to concentrate on completing my

project.

My sincere heartfelt thanks to all the faculty, and staff of the Department of

Computing Sciences for their outstanding support.

Last but not least, I would like to thank my parents, my wife and my family, who provided the much-needed moral support that helped me reach the successful completion of the project.


REFERENCES

[Bertsekas 1989] D.P. Bertsekas and J.N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Prentice-Hall, Inc., 1989.

[Chandy 1985] Chandy, K.M. and Lamport, L. Distributed Snapshots: Determining Global States of Distributed Systems. ACM Transactions on Computer Systems, Vol. 3, No. 1, February 1985, pages 63-75.

[Charron-Bost 1989] Charron-Bost, B. Measure of parallelism of distributed computations. In B. Monien, editor, Proceedings of the 6th Annual Symposium on Theoretical Aspects of Computer Science (STACS 89), pages 434-445.

[Fidge 1990] Fidge, C.J. A Simple Run-Time Concurrency Measure. 3rd Australian Transputer and OCCAM User Group Conference.

[Fidge 1991] Fidge, C.J. Logical Time in Distributed Computing Systems. Computer, vol. 24, no. 8, pp. 28-33, Aug 1991.

[Lamport 1978] Lamport, L. Time, Clocks, and the Ordering of Events in a Distributed System. Communications of the ACM, vol. 21, no. 7, July 1978.

[Mattern 1988] Mattern, F. Virtual Time and Global States of Distributed Systems. Proceedings of the International Workshop on Parallel and Distributed Algorithms.

[Raynal 1992] Raynal, M., Mizuno, M. and Neilsen, M.L. Synchronization and concurrency measures for distributed computations. Proceedings of the 12th International Conference on Distributed Computing Systems, 1992, pp. 700-707.


APPENDIX A

GUI Code

1. FrameMain.java

import java.awt.*;

import java.awt.event.*;

import java.io.IOException;

import java.util.ArrayList;

import java.util.Hashtable;

import javax.swing.*;

import org.jfree.ui.RefineryUtilities;

@SuppressWarnings("serial")

public class FrameMain extends JFrame implements Runnable

{

// the three panels for input, processing and output

private PanelInput inputPanel;

private PanelProcessing processingPanel;

private PanelOutput outputPanel;

private Cmeasure concm;

private ArrayList<?> fixedEvent;

private ArrayList<?> fixedProc;

private ArrayList<Double> indGraph;

private Hashtable<String, ArrayList<Integer>> combGraph;

private Hashtable<String, ArrayList<Integer>> stackGraph;

private String size;

private static String host,pswd;

/**

* @desc initializes the components that are found in the frame, such as the
* three panels

*/

public FrameMain()

{

// set the size of the window

this.setSize(500, 230);

// show the window in the center screen

this.setLocationRelativeTo(null);

// set the title of the window

this.setTitle("Concurrency Measure v1.0");


// set the layout to border layout, we will only occupy the

// CENTER of the frame for our components

this.setLayout(new BorderLayout());

// we start initially with the input panel to be shown in the frame

this.inputPanel = new PanelInput();

this.add(BorderLayout.CENTER, inputPanel);

// we add an event to the input panel's generate button

this.inputPanel.generateButton.addActionListener(new ActionListener() {

public void actionPerformed(ActionEvent e) { generateButtonClick(); }});

}

/**

* @desc this method is executed when the generate button has been clicked

*/

private void generateButtonClick()

{

// we now remove the input panel from the frame

// because it will be replaced by processing panel

this.remove(inputPanel);

int lengthOfTask = 10;

// then we show the processing panel in the frame

this.processingPanel = new PanelProcessing(lengthOfTask);

this.add(processingPanel);

// do a repaint to refresh the window

this.repaint();

this.validate();

// start the animation of the processing panel

Thread thread = new Thread(this);

thread.start();

}

/**

* @desc this method is executed automatically when the processing thread is started

*/

@Override

public void run()

{


// get the values entered by the user from the input panel

String numberOfProcessors =

this.inputPanel.numberOfProcessorsTextField.getText();

String nameOfProgram =

this.inputPanel.nameOfProgramTextField.getText();

String numberOfEvents =

this.inputPanel.numberOfEventsTextField.getText();

concm=new Cmeasure();

this.processingPanel.increaseProgress();

progEXE pgexe=new progEXE(nameOfProgram,Integer.parseInt(numberOfEvents),
        Integer.parseInt(numberOfProcessors),host,pswd);

// a process has been finished so we increase again the progress

this.processingPanel.increaseProgress();

try {

pgexe.oneFile();

} catch (IOException e1) {

// TODO Auto-generated catch block

e1.printStackTrace();

}

// a process has been finished so we increase again the progress

this.processingPanel.increaseProgress();

try {

pgexe.fixedEvents();

} catch (IOException e1) {

// TODO Auto-generated catch block

e1.printStackTrace();

}

// a process has been finished so we increase again the progress

this.processingPanel.increaseProgress();

try {

pgexe.fixedProc();

} catch (IOException e1) {

// TODO Auto-generated catch block

e1.printStackTrace();

}


// a process has been finished so we increase again the progress

this.processingPanel.increaseProgress();

try {

fixedEvent=concm.FEController("FixedEvent");

} catch (IOException e1) {

// TODO Auto-generated catch block

e1.printStackTrace();

}

// a process has been finished so we increase again the progress

this.processingPanel.increaseProgress();

try {

fixedProc=concm.FPController("FixedProc");

} catch (IOException e1) {

// TODO Auto-generated catch block

e1.printStackTrace();

}

// a process has been finished so we increase again the progress

this.processingPanel.increaseProgress();

try {

indGraph=concm.IndividualGraph(nameOfProgram);

} catch (IOException e1) {

// TODO Auto-generated catch block

e1.printStackTrace();

}

// a process has been finished so we increase again the progress

this.processingPanel.increaseProgress();

try {

combGraph=concm.CombinedGraph(nameOfProgram);

System.out.println("combGraph: "+combGraph.entrySet());

size=concm.size_of_comm;

System.out.println("Size :"+size);

} catch (IOException e1) {

// TODO Auto-generated catch block

e1.printStackTrace();

}

// a process has been finished so we increase again the progress

this.processingPanel.increaseProgress();


try {

stackGraph=concm.StackedGraph(nameOfProgram);

size=concm.size_of_comm;

System.out.println("Size :"+size);

} catch (IOException e1) {

// TODO Auto-generated catch block

e1.printStackTrace();

}

// a process has been finished so we increase again the progress

this.processingPanel.increaseProgress();

// all processing steps are finished, so show the results in the output panel
// but first remove the processing panel

this.remove(this.processingPanel);

// add the panel output

this.outputPanel = new PanelOutput();

this.add(BorderLayout.CENTER, outputPanel);

// add events to the button of the panel output

this.outputPanel.button1.addActionListener(new ActionListener(){ public

void actionPerformed(ActionEvent e){ button1Click(fixedEvent); }});

this.outputPanel.button2.addActionListener(new ActionListener(){ public

void actionPerformed(ActionEvent e){ button2Click(fixedProc); }});

this.outputPanel.button3.addActionListener(new ActionListener(){ public

void actionPerformed(ActionEvent e){ button3Click(indGraph); }});

this.outputPanel.button4.addActionListener(new ActionListener(){ public

void actionPerformed(ActionEvent e){ button4Click(combGraph,size); }});

this.outputPanel.button5.addActionListener(new ActionListener(){ public

void actionPerformed(ActionEvent e){ button5Click(stackGraph,size); }});

this.outputPanel.button6.addActionListener(new ActionListener(){ public

void actionPerformed(ActionEvent e){ button6Click(); }});

// refresh the window

this.repaint();

this.validate();

}

/**

* @desc Event that happens when the first button is clicked from output panel

*/

@SuppressWarnings("rawtypes")

private void button1Click(ArrayList FixedE)


{

LineChart Lchart = new LineChart("% of Concurrency VS No of Processes",
        FixedE,"% of Concurrency","No of Processes");

Lchart.pack();

RefineryUtilities.centerFrameOnScreen(Lchart);

Lchart.setVisible(true);

}

/**

* @desc Event that happens when the second button is clicked from output panel

*/

@SuppressWarnings("rawtypes")

private void button2Click(ArrayList FixedP)

{

LineChart Lchart = new LineChart("% of Concurrency VS No of Events",
        FixedP,"% of Concurrency","No of Events");

Lchart.pack();

RefineryUtilities.centerFrameOnScreen(Lchart);

Lchart.setVisible(true);

}

/**

* @desc Event that happens when the third button is clicked from output panel

*/

private void button3Click(ArrayList<Double> individualGraph)

{

IndividualGraph Lchart = new IndividualGraph("% of Concurrency VS Time(in ms)",
        individualGraph,"% of Concurrency","Time(in ms)");

Lchart.pack();

RefineryUtilities.centerFrameOnScreen(Lchart);

Lchart.setVisible(true);

}

/**

* @desc Event that happens when the fourth button is clicked from output panel

*/

private void button4Click(Hashtable<String, ArrayList<Integer>>

combinedGraph,String size_of_comm)

{

System.out.println("Size of comm in combGraph: "+size_of_comm);

CombinedGraph Lchart = new CombinedGraph("% of Concurrency VS Time(in ms)",
        combinedGraph,"% of Concurrency","Time(in ms)",size_of_comm);

Lchart.pack();

RefineryUtilities.centerFrameOnScreen(Lchart);

Lchart.setVisible(true);

}


/**

* @desc Event that happens when the fifth button is clicked from output panel

*/

private void button5Click(Hashtable<String, ArrayList<Integer>>

stackedGraph,String size_of_comm)

{

System.out.println("Size of comm in combGraph: "+size_of_comm);

StackedGraph Lchart = new StackedGraph("% of Concurrency VS Time(in ms)",
        stackedGraph,size_of_comm);

Lchart.pack();

RefineryUtilities.centerFrameOnScreen(Lchart);

Lchart.setVisible(true);

}

/**

* @desc Event that happens when the Exit button is clicked from output panel

*/

private void button6Click()

{

// Terminate the window

this.setVisible(false);

}

/**

* @desc entry point of the program

* @param args optional argument (we will not use this for this program)

*/

public static void main(String[] args)

{

// Start the main frame.

while(true)

{

host=JOptionPane.showInputDialog("Enter username@hostname",
        System.getProperty("user.name")+"@penguin.tamucc.edu");

if(host.length()>0)

{

JPasswordField pwd = new JPasswordField();

JOptionPane.showConfirmDialog(null, pwd,"Enter Password",
        JOptionPane.OK_CANCEL_OPTION);

pswd=new String(pwd.getPassword());

if(pswd.length()>0)

break;

else

JOptionPane.showMessageDialog(null, "Please

enter a valid password","password",JOptionPane.ERROR_MESSAGE);


}

else{

JOptionPane.showMessageDialog(null, "Please enter a

valid username", "username", JOptionPane.ERROR_MESSAGE);

}

}

System.out.println("Host is: "+host);

System.out.println("pswd is: "+pswd);

FrameMain main = new FrameMain();

main.setVisible(true);

}

}

2. PanelInput.java

import javax.swing.*;

@SuppressWarnings("serial")

public class PanelInput extends JPanel

{

// these are fields that can be accessed from the FrameMain.java class

public JTextField numberOfProcessorsTextField;

public JTextField nameOfProgramTextField;

public JTextField numberOfEventsTextField;

public JButton generateButton;

/**

* @desc initialize the components found in the panel input

*/

public PanelInput()

{

// set the layout of the panel so we could freely put the components

// anywhere in the window

this.setLayout(null);

// create the labels

JLabel numberOfProcessorsLabel = new JLabel("Number of Processes: ");

JLabel nameOfProgramLabel = new JLabel("Name of Program: ");

JLabel numberOfEventsLabel = new JLabel("Number of Events: ");

// create the textboxes

this.numberOfProcessorsTextField = new JTextField();

this.nameOfProgramTextField = new JTextField();

this.numberOfEventsTextField = new JTextField();


// create the buttons

this.generateButton = new JButton("Generate");

// layout the labels accordingly on the screen

numberOfProcessorsLabel.setLocation(10, 10);

numberOfProcessorsLabel.setSize(200, 20);

this.add(numberOfProcessorsLabel);

nameOfProgramLabel.setLocation(10, 50);

nameOfProgramLabel.setSize(200, 20);

this.add(nameOfProgramLabel);

numberOfEventsLabel.setLocation(250, 10);

numberOfEventsLabel.setSize(200, 20);

this.add(numberOfEventsLabel);

// layout the textboxes accordingly on the screen

this.numberOfProcessorsTextField.setLocation(150, 10);

this.numberOfProcessorsTextField.setSize(50, 20);

this.add(numberOfProcessorsTextField);

this.nameOfProgramTextField.setLocation(150, 50);

this.nameOfProgramTextField.setSize(50, 20);

this.add(nameOfProgramTextField);

this.numberOfEventsTextField.setLocation(370, 10);

this.numberOfEventsTextField.setSize(50, 20);

this.add(numberOfEventsTextField);

// layout the button

this.generateButton.setLocation(200, 100);

this.generateButton.setSize(100, 25);

this.add(this.generateButton);

}

}

3. PanelOutput.java

import javax.swing.*;

@SuppressWarnings("serial")

public class PanelOutput extends JPanel

{

// fields that can be accessed in the FrameMain.java

public JButton button1;

public JButton button2;


public JButton button3;

public JButton button4;

public JButton button5;

public JButton button6;

/**

* @desc initialize 6 buttons into the panel

*/

public PanelOutput()

{

// set the layout of the panel so we could freely put the components

// anywhere in the window

this.setLayout(null);

// initialize the texts of the buttons

this.button1 = new JButton("% of Concurrency VS No of Processes");

this.button2 = new JButton("% of Concurrency VS No of Events");

this.button3 = new JButton("% of Concurrency VS Time[Individual Graph]");

this.button4 = new JButton("% of Concurrency VS Time[Combined Graph]");

this.button5 = new JButton("% of Concurrency VS Time[Stacked Graph]");

this.button6 = new JButton("Exit");

// set the sizes of the button

this.button1.setSize(300, 25);

this.button2.setSize(300, 25);

this.button3.setSize(300, 25);

this.button4.setSize(300, 25);

this.button5.setSize(300, 25);

this.button6.setSize(300, 25);

// layout the button in the screen

this.button1.setLocation(100, 10);

this.button2.setLocation(100, 40);

this.button3.setLocation(100, 70);

this.button4.setLocation(100, 100);

this.button5.setLocation(100, 130);

this.button6.setLocation(100, 160);

this.add(button1);

this.add(button2);

this.add(button3);

this.add(button4);

this.add(button5);


this.add(button6);

}

}

4. PanelProcessing.java

import javax.swing.*;

@SuppressWarnings("serial")

public class PanelProcessing extends JPanel

{

private JProgressBar progressBar;

private int taskLength;

/**

* @desc initialize the components of the panel

* @param taskLength the number of tasks to be done, used by the progress bar

*/

public PanelProcessing(int taskLength)

{

// set the layout of the panel so we could freely put the components

// anywhere in the window

this.setLayout(null);

this.taskLength = taskLength;

this.progressBar = new JProgressBar(0, taskLength);

this.progressBar.setSize(300, 30);

this.progressBar.setLocation(100, 50);

this.add(progressBar);

}

/**

* @desc called to update the value of the progress bar

*/

public void increaseProgress()

{

if(this.progressBar.getValue() < this.taskLength)

this.progressBar.setValue(this.progressBar.getValue() + 1);

// update the percentage

this.progressBar.setStringPainted(true);

}

}


APPENDIX B

Core Code

1. Cmeasure.java

import java.io.*;

import java.math.BigInteger;

import java.util.Hashtable;

import java.util.List;

import java.util.ArrayList;

import org.hibernate.Query;

import org.hibernate.Session;
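
/**
 * Cmeasure drives the concurrency calculation: FEController and FPController run the
 * fixed-event and fixed-processor experiments and store/retrieve their results
 * through Hibernate, while IndividualGraph, CombinedGraph and StackedGraph parse a
 * single trace and return the data used by the plotting classes.
 */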

public class Cmeasure {

private int tracker=0;

private int tracker2=0;

public String size_of_comm;

@SuppressWarnings("rawtypes")

public ArrayList FEController(String nameoffile) throws IOException{

ArrayList FEList=new ArrayList();

Session session =

SessionFactoryUtil.getSessionFactory().getCurrentSession();

session.beginTransaction();

Query query=session.createSQLQuery("Select count(*) from fixed_event where sno=1");

BigInteger temp=(BigInteger) query.uniqueResult();

if(temp.intValue()==1)

{

Query query1=session.createSQLQuery("Select sno from fixed_event order by sno desc limit 1");

int temp1=(Integer)query1.uniqueResult();

tracker=temp1;

}

int no_of_times=FixedEvents(session,nameoffile);

session.getTransaction().commit();

Session session1 =

SessionFactoryUtil.getSessionFactory().getCurrentSession();

session1.beginTransaction();

FEList=RetrieveFE(session1,no_of_times,tracker);

tracker=no_of_times;

session1.getTransaction().commit();


return FEList;

}

@SuppressWarnings("rawtypes")

public ArrayList FPController(String name_file) throws IOException{

ArrayList FPList=new ArrayList();

Session sess =

SessionFactoryUtil.getSessionFactory().getCurrentSession();

sess.beginTransaction();

Query query2=sess.createSQLQuery("Select count(*) from fixed_processors where sno=1");

BigInteger temp2=(BigInteger) query2.uniqueResult();

if(temp2.intValue()==1)

{

Query query3=sess.createSQLQuery("Select sno from fixed_processors order by sno desc limit 1");

int temp3=(Integer)query3.uniqueResult();

tracker2=temp3;

}

int itrtr=FixedProcessors(sess,name_file);

sess.getTransaction().commit();

Session sess1 =

SessionFactoryUtil.getSessionFactory().getCurrentSession();

sess1.beginTransaction();

FPList=RetrieveFP(sess1,itrtr,tracker2);

tracker2=itrtr;

sess1.getTransaction().commit();

return FPList;

}

public int FixedProcessors(Session session,String name) throws IOException

{

String s_et,e_et,size_of_comm,name_of_file;

int count,event_count;

name_of_file=name;

count=15;

Parse_file Pfile=new Parse_file();

for(int i=0;i<count;i++)

{

Pfile.Scan(name_of_file+i);

s_et=Pfile.val;

e_et=Pfile.m;


size_of_comm=Pfile.size;

Pfile.Write_to_file(s_et,e_et,name_of_file+i);

InfoParser Sep=new InfoParser();

Sep.Hasht(name_of_file+i,size_of_comm,e_et);

event_count=Sep.event_counter;

FixedProcess fp=new FixedProcess();

fp.setName_of_file(name_of_file+i);

fp.setNo_of_events(event_count);

fp.setConcurrency(Sep.ConcMeasure);

session.save(fp);

}

return count;

}

public int FixedEvents(Session session,String name) throws IOException

{

String s_et,e_et,size_of_comm,name_of_file;

int count;

name_of_file=name;

count=15;

Parse_file Pfile=new Parse_file();

for(int i=0;i<count;i++)

{

Pfile.Scan(name_of_file+i);

s_et=Pfile.val;

e_et=Pfile.m;

size_of_comm=Pfile.size;

Pfile.Write_to_file(s_et,e_et,name_of_file+i);

InfoParser Sep=new InfoParser();

Sep.Hasht(name_of_file+i,size_of_comm,e_et);

FixedEvent fe=new FixedEvent();

fe.setName_of_file(name_of_file+i);

fe.setNo_of_processors(Integer.parseInt(size_of_comm));

fe.setConcurrency(Sep.ConcMeasure);

session.save(fe);

}

return count;

}

@SuppressWarnings({ "unchecked", "rawtypes" })

public ArrayList RetrieveFE(Session session,int no_times,int track)

{

Query query = session.createQuery("from FixedEvent where

sno>"+track+" and sno<="+(track+no_times));


List<FixedEvent> list = query.list();

java.util.Iterator<FixedEvent> iter = list.iterator();

ArrayList FixedE=new ArrayList();

while (iter.hasNext()) {

FixedEvent fe = iter.next();

System.out.println("Name of file: "+fe.getName_of_file());

System.out.println("No. of processors: "+fe.getNo_of_processors());

System.out.println("Concurrency: "+fe.getConcurrency());

FixedE.add(fe.getNo_of_processors());

FixedE.add(fe.getConcurrency());

}

System.out.println("The size of the list is: "+FixedE.size());

return FixedE;

}

public ArrayList<Object> RetrieveFP(Session session,int itrtr,int track2)

{

Query query = session.createQuery("from FixedProcess where

sno>"+track2+" and sno<="+(track2+itrtr));

@SuppressWarnings("unchecked")

List<FixedProcess> list = query.list();

java.util.Iterator<FixedProcess> iter = list.iterator();

ArrayList<Object> FixedP=new ArrayList<Object>();

while (iter.hasNext()) {

FixedProcess fp = iter.next();

System.out.println("Name of file: "+fp.getName_of_file());

System.out.println("No. of events: "+fp.getNo_of_events());

System.out.println("Concurrency: "+fp.getConcurrency());

FixedP.add(fp.getNo_of_events());

FixedP.add(fp.getConcurrency());

}

System.out.println("The size of the list is: "+FixedP.size());

return FixedP;

}

public ArrayList<Double> IndividualGraph(String filename) throws

IOException{

String s_et,e_et;

Parse_file Pfile=new Parse_file();

Pfile.Scan(filename);

s_et=Pfile.val;

e_et=Pfile.m;


size_of_comm=Pfile.size;

Pfile.Write_to_file(s_et,e_et,filename);

InfoParser Sep=new InfoParser();

Sep.Hasht(filename,size_of_comm,e_et);

System.out.println("The concurrency measure for the distributed

computation is: "+Sep.ConcMeasure);

return Sep.Individual_Conc;

}

public Hashtable<String, ArrayList<Integer>> CombinedGraph(String file_name)

throws IOException{

String s_et,e_et;

Hashtable<String,ArrayList<Integer>> combinedgraph=new

Hashtable<String,ArrayList<Integer>>();

Parse_file Pfile=new Parse_file();

Pfile.Scan(file_name);

s_et=Pfile.val;

e_et=Pfile.m;

size_of_comm=Pfile.size;

Pfile.Write_to_file(s_et,e_et,file_name);

InfoParser Sep=new InfoParser();

Sep.Hasht(file_name,size_of_comm,e_et);

combinedgraph=Sep.IndividualGraph;

return combinedgraph;

}

public Hashtable<String, ArrayList<Integer>> StackedGraph(String nfile) throws

IOException

{

String s_et,e_et;

Hashtable<String,ArrayList<Integer>> stackedgraph=new

Hashtable<String,ArrayList<Integer>>();

Parse_file Pfile=new Parse_file();

Pfile.Scan(nfile);

s_et=Pfile.val;

e_et=Pfile.m;

size_of_comm=Pfile.size;

Pfile.Write_to_file(s_et,e_et,nfile);

InfoParser Sep=new InfoParser();

Sep.Hasht(nfile,size_of_comm,e_et);

stackedgraph=Sep.IndividualGraph;

return stackedgraph;


}

}

2. InfoParser.java

import java.io.BufferedReader;

import java.io.FileNotFoundException;

import java.io.FileReader;

import java.io.IOException;

import java.util.ArrayList;

import java.util.Hashtable;

import java.util.LinkedList;

import java.util.Scanner;
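
/**
 * InfoParser reads the filtered trace ("<name>_required info.txt"), classifies each
 * line as an internal, send or receive event, maintains the local and weight vector
 * clocks, and accumulates the weight, height and volume values from which ConeCal
 * (per event) and CylinderCal (whole computation) derive the concurrency measure.
 */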

class InfoParser{

String timer,sender,receiver,type,key,temp1,temp2,end_event;

int size,event_counter=0;

float ConcMeasure;

Hashtable<String,ArrayList<String>> WeightClock;

Hashtable<String,ArrayList<String>> LocalClock;

Hashtable<String,LinkedList<Object[]>> LocalSendEvent;

Hashtable<String,LinkedList<Object[]>> WeightSendEvent;

Hashtable<String,ArrayList<Integer>> IndividualGraph;

ArrayList<String> Templist;

ArrayList<Double> Individual_Conc;

ArrayList<String> TempWClist;

ArrayList<Integer> Weight;

ArrayList<Integer> Volume;

ArrayList<Integer> Height;

public InfoParser(){

WeightSendEvent=new Hashtable<String,LinkedList<Object[]>>(size);

LocalSendEvent=new Hashtable<String,LinkedList<Object[]>>(size);

WeightClock=new Hashtable<String,ArrayList<String>>(size);

LocalClock=new Hashtable<String,ArrayList<String>>(size);

IndividualGraph=new Hashtable<String,ArrayList<Integer>>(size);

Templist=new ArrayList<String>();

TempWClist=new ArrayList<String>();

Individual_Conc=new ArrayList<Double>();

Weight=new ArrayList<Integer>();

Volume=new ArrayList<Integer>();

Height=new ArrayList<Integer>();

}

void Hasht(String name_file,String si,String ev) throws IOException{


String read;

size=Integer.parseInt(si);

end_event=ev;

BufferedReader reader=null;

try{

reader=new BufferedReader(new

FileReader(name_file+"_required info.txt"));

}catch(FileNotFoundException e){

e.printStackTrace();

}

while((read=reader.readLine())!=null)

{

ArrayList<String> list=new ArrayList<String>();

for(int i=0;i<=size;i++)

{

list.add("0");

}

if(read.indexOf("bare")>0&&read.indexOf(end_event)>0)

{

Internalevent(read,list,LocalClock);

WeightCalc(receiver);

}

if(read.indexOf("msg")>0&&read.indexOf("send")>0)

{

Sendevent(read,list,LocalClock);

WeightCalc(sender);

event_counter++;

}

if(read.indexOf("msg")>0&&read.indexOf("recv")>0)

{

Receiveevent(read,list,LocalClock);

WeightCalc(receiver);

event_counter++;

}

WClock(read);

}

CylinderCal();

ConeCal();

}

void lsep(String rd){

Scanner sc=new Scanner(rd).useDelimiter("\\s+|[\\=]");

if(rd.indexOf("send")>0)

{

while(sc.hasNext())

{


key=sc.next();

if(key.equalsIgnoreCase("ts"))

{

timer=sc.next();

}

if(key.equalsIgnoreCase("rank"))

{

sender=sc.next();

while(sc.hasNext())

{

temp2=sc.next();

if(temp2.equalsIgnoreCase("et"))

{

temp1=sc.next();

if(temp1.equals("send"))

{

type="sending";

}

else

{

type="receiving";

}

}

if(temp2.equalsIgnoreCase("rank"))

{

receiver=sc.next();

break;

}

}

}

}

}

else

{

while(sc.hasNext())

{

key=sc.next();

if(key.equalsIgnoreCase("ts"))

{

timer=sc.next();

}

if(key.equalsIgnoreCase("rank"))

{

receiver=sc.next();

while(sc.hasNext())

{


temp2=sc.next();

if(temp2.equalsIgnoreCase("et"))

{

temp1=sc.next();

if(temp1.equals("send"))

{

type="sending";

}

else if(temp1.equals("recv"))

{

type="receiving";

}

else

{

type="internal event";

}

}

if(temp2.equalsIgnoreCase("rank"))

{

sender=sc.next();

break;

}

}

}

}

}

}

void Internalevent(String read_line,ArrayList<String>

ls,Hashtable<String,ArrayList<String>> Clock){

lsep(read_line);

if(Clock.get(receiver)==null)

{

NewClock(receiver,ls,Clock);

}

else

{

UpdateClock(receiver,Clock);

}

}

void Sendevent(String read_line,ArrayList<String>

ls,Hashtable<String,ArrayList<String>> Clock){


lsep(read_line);

if(Clock.get(sender)==null)

{

NewClock(sender,ls,Clock);

}

else

{

UpdateClock(sender,Clock);

}

if(Clock==LocalClock)

{

LinkedList<Object[]> LocalSend=new LinkedList<Object[]>();

Templist=LocalClock.get(sender);

Object[] arr=Templist.toArray();

if(LocalSendEvent.get(sender)==null)

{

LocalSend.add(arr);

LocalSendEvent.put(sender, LocalSend);

}

else{

LocalSend=LocalSendEvent.get(sender);

LocalSend.add(arr);

LocalSendEvent.put(sender, LocalSend);

}

}

else if(Clock==WeightClock)

{

LinkedList<Object[]> WeightSend=new LinkedList<Object[]>();

Templist=WeightClock.get(sender);

Object[] arr=Templist.toArray();

if(WeightSendEvent.get(sender)==null)

{

WeightSend.add(arr);

WeightSendEvent.put(sender, WeightSend);

}

else{

WeightSend=WeightSendEvent.get(sender);

WeightSend.add(arr);

WeightSendEvent.put(sender, WeightSend);

}

}

}


void Receiveevent(String read_line,ArrayList<String>

ls,Hashtable<String,ArrayList<String>> Clock){

lsep(read_line);

if(Clock.get(receiver)==null)

{

NewClock(receiver,ls,Clock);

}

else

{

UpdateClock(receiver,Clock);

}

ReceiverUpdate(Clock);

}

void NewClock(String x,ArrayList<String>

ls,Hashtable<String,ArrayList<String>> Clock){

int s1,s2;

String st1,st2;

ls.set(size,timer);

st1=ls.get(Integer.parseInt(x));

s1=Integer.parseInt(st1);

s2=s1+1;

st2=Integer.toString(s2);

ls.set(Integer.parseInt(x),st2);

Clock.put(x, ls);

}

void UpdateClock(String x,Hashtable<String,ArrayList<String>> Clock){

int s1;

this.Templist=Clock.get(x);

Templist.set(size,timer);

s1=Integer.parseInt(Templist.get(Integer.parseInt(x)));

s1++;

Templist.set(Integer.parseInt(x),Integer.toString(s1));

Clock.put(x, Templist);

}

void ReceiverUpdate(Hashtable<String,ArrayList<String>> RecClock){

ArrayList<String> ActualList=new ArrayList<String>();

ArrayList<String> ObjectList=new ArrayList<String>();

ArrayList<String> FirstList=new ArrayList<String>();

LinkedList<Object[]> List=new LinkedList<Object[]>();

Object[] recvevent;


ActualList=RecClock.get(receiver);

if(RecClock==this.LocalClock){

List=LocalSendEvent.get(sender);

if(List!=null&&!List.isEmpty()){

recvevent=List.removeLast();

for(int i=0;i<recvevent.length;i++){

ObjectList.add((String) recvevent[i]);

}

for(int i=0;i<size;i++)

{

int s1,s2;

s1=Integer.parseInt(ObjectList.get(i));

s2=Integer.parseInt(ActualList.get(i));

if(s1>=s2)

{

s2=s1;

}

ActualList.set(i, Integer.toString(s2));

}

}else{

ObjectList=RecClock.get(sender);

if(ObjectList!=null){

int

temp=Integer.parseInt(ObjectList.get(Integer.parseInt(sender)));

temp++;

ObjectList.set(Integer.parseInt(sender),

Integer.toString(temp));

ActualList=RecClock.get(receiver);

for(int i=0;i<size;i++)

{

int s1,s2;

s1=Integer.parseInt(ObjectList.get(i));

s2=Integer.parseInt(ActualList.get(i));

if(s1>=s2)

{

s2=s1;

}

ActualList.set(i, Integer.toString(s2));

}

}else{

for(int i=0;i<size;i++){

FirstList.add("0");

}

int

temp=Integer.parseInt(FirstList.get(Integer.parseInt(sender)));

temp=1;


FirstList.set(Integer.parseInt(sender),

Integer.toString(temp));

ActualList=RecClock.get(receiver);

for(int i=0;i<size;i++)

{

int s1,s2;

s1=Integer.parseInt(FirstList.get(i));

s2=Integer.parseInt(ActualList.get(i));

if(s1>=s2)

{

s2=s1;

}

ActualList.set(i, Integer.toString(s2));

}

}

}

}

else if(RecClock==this.WeightClock){

List=WeightSendEvent.get(sender);

if(List!=null&&!List.isEmpty()){

recvevent=List.removeLast();

for(int i=0;i<recvevent.length;i++){

ObjectList.add((String) recvevent[i]);

}

for(int i=0;i<size;i++)

{

int s1,s2;

s1=Integer.parseInt(ObjectList.get(i));

s2=Integer.parseInt(ActualList.get(i));

if(s1>=s2)

{

s2=s1;

}

ActualList.set(i, Integer.toString(s2));

}

}else{

ObjectList=RecClock.get(sender);

if(ObjectList!=null){

int

temp=Integer.parseInt(ObjectList.get(Integer.parseInt(sender)));

temp++;

ObjectList.set(Integer.parseInt(sender),

Integer.toString(temp));

ActualList=RecClock.get(receiver);

for(int i=0;i<size;i++)

{


int s1,s2;

s1=Integer.parseInt(ObjectList.get(i));

s2=Integer.parseInt(ActualList.get(i));

if(s1>=s2)

{

s2=s1;

}

ActualList.set(i, Integer.toString(s2));

}

}else{

for(int i=0;i<size;i++){

FirstList.add("0");

}

int

temp=Integer.parseInt(FirstList.get(Integer.parseInt(sender)));

temp=1;

FirstList.set(Integer.parseInt(sender),

Integer.toString(temp));

ActualList=RecClock.get(receiver);

for(int i=0;i<size;i++)

{

int s1,s2;

s1=Integer.parseInt(FirstList.get(i));

s2=Integer.parseInt(ActualList.get(i));

if(s1>=s2)

{

s2=s1;

}

ActualList.set(i, Integer.toString(s2));

}

}

}

}

RecClock.put(receiver, ActualList);

}

void WClock(String rd_line){

ArrayList<String> Wlist=new ArrayList<String>();

for(int i=0;i<=size;i++)

{

Wlist.add("0");

}

if(rd_line.indexOf("bare")>0&&rd_line.indexOf(end_event)>0)

{


Internalevent(rd_line,Wlist,WeightClock);

VolHeightCal(receiver);

}

if(rd_line.indexOf("msg")>0&&rd_line.indexOf("send")>0)

{

Sendevent(rd_line,Wlist,WeightClock);

VolHeightCal(sender);

}

if(rd_line.indexOf("msg")>0&&rd_line.indexOf("recv")>0)

{

int s1,s2,s3;

ArrayList<String> TempWlist=new ArrayList<String>();

Receiveevent(rd_line,Wlist,WeightClock);

TempWlist=WeightClock.get(receiver);

s1=Integer.parseInt(TempWlist.get(Integer.parseInt(sender)));

s2=Integer.parseInt(TempWlist.get(Integer.parseInt(receiver)));

s3=(Math.max(s1,s2)+1);

TempWlist.set(Integer.parseInt(receiver),Integer.toString(s3));

VolHeightCal(receiver);

}

}

void CylinderCal(){

Object[] arr1,arr2;

int num[]=new int[size];

int num2[]=new int[size];

int WCnum[]=new int[size];

int WCnum2[]=new int[size];

int wt=0,ht=0,vol;

for(int i=0;i<size;i++)

{

this.Templist=LocalClock.get(Integer.toString(i));

this.TempWClist=WeightClock.get(Integer.toString(i));

if(Templist!=null&&TempWClist!=null)

{

if(Templist.isEmpty()==false&&TempWClist.isEmpty()==false)

{

arr1=Templist.toArray();

arr2=TempWClist.toArray();

for(int k=0;k<size;k++)

{

num[k]=Integer.parseInt((String) arr1[k]);

WCnum[k]=Integer.parseInt((String)

arr2[k]);


}

num2[i]=num[i];

WCnum2[i]=WCnum[i];

}

else{

for(int k=0;k<size;k++)

{

Templist.add("0");

TempWClist.add("0");

}

arr1=Templist.toArray();

arr2=TempWClist.toArray();

for(int k=0;k<size;k++)

{

num[k]=Integer.parseInt((String) arr1[k]);

WCnum[k]=Integer.parseInt((String)

arr2[k]);

}

num2[i]=num[i];

WCnum2[i]=WCnum[i];

}

}

}

for(int i=0;i<size;i++)

{

wt +=num2[i];

}

for(int i=0;i<size;i++)

{

if(WCnum2[i]>ht)

{

ht=WCnum2[i];

}

}

vol=size*ht;

ConcMeasure=ConcCal(wt,vol,ht);

}

void ConeCal(){

int wt,vol,ht;

double IndConMeasure;

for(int i=0;i<Weight.size();i++){

wt=Weight.get(i);

vol=Volume.get(i);

ht=Height.get(i);


IndConMeasure=ConcCal(wt,vol,ht);

Individual_Conc.add(IndConMeasure);

}

}

void WeightCalc(String x){

int tempwt=0,wt;

Object[] arr1;

int[] num=new int [size];

ArrayList<Integer> IList=new ArrayList<Integer>();

Templist=LocalClock.get(x);

arr1=Templist.toArray();

for(int k=0;k<size;k++)

{

num[k]=Integer.parseInt((String) arr1[k]);

}

for(int i=0;i<size;i++)

{

tempwt +=num[i];

}

wt=tempwt-1;

if(IndividualGraph.get(x)==null)

{

IList.add(wt);

IndividualGraph.put(x, IList);

}

else

{

IList=IndividualGraph.get(x);

IList.add(wt);

}

Weight.add(wt);

}

void VolHeightCal(String x){

int ht,tempvol=0,vol;

Object[] arr1;

int[] WCnum=new int [size];

ArrayList<Integer> IList=new ArrayList<Integer>();

TempWClist=WeightClock.get(x);

arr1=TempWClist.toArray();

for(int k=0;k<size;k++)

{

WCnum[k]=Integer.parseInt((String) arr1[k]);


}

ht=WCnum[Integer.parseInt(x)]-1;

Height.add(ht);

for(int i=0;i<size;i++)

{

tempvol +=WCnum[i];

}

vol=tempvol-1;

IList=IndividualGraph.get(x);

IList.add(ht);

IList.add(vol);

Volume.add(vol);

}

float ConcCal(int weight,int volume,int height)

{

float TempConc1,TempConc2,TempConc3;

TempConc1=volume-weight;

TempConc2=volume-height;

if(TempConc2==0)

{

TempConc3=0;

}

else{

TempConc3=TempConc1/TempConc2;

}

if((1-TempConc3)>0)

{

return(1-TempConc3);

}

else

{

return 0;

}

}

}

3. Parse_file.java

import java.io.BufferedReader;

import java.io.BufferedWriter;

import java.io.File;

import java.io.FileNotFoundException;

import java.io.FileReader;

import java.io.FileWriter;

import java.io.IOException;


import java.util.Scanner;
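
/**
 * Parse_file scans the raw clog2_print output for the communicator size and the
 * start/end event identifiers, and copies only the relevant event lines into
 * "<name>_required info.txt" for InfoParser.
 */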

class Parse_file{

String key=null,m="999",val=null,read=null,size=null;

int tempval;

void Scan(String s) throws FileNotFoundException{

Scanner sc=null;

try{

sc=new Scanner(new File(s+".txt")).useDelimiter("\\s+|[\\=]");

}catch(FileNotFoundException e){

e.printStackTrace();

}

while (sc.hasNext()){

key=sc.next();

if(key.equalsIgnoreCase("max_comm_world_size"))

{

for(int i=0;i<3;i++)

{

size=sc.next();

}

}

if(key.equalsIgnoreCase("s_et"))

{

val=sc.next();

}

if(key.equalsIgnoreCase("e_et"))

{

tempval=Integer.parseInt(sc.next());

if(tempval>300)

{

m=Integer.toString(tempval);

}

break;

}

}

sc.close();

}

void Write_to_file(String start_event,String end_event,String filename) throws

IOException{

FileWriter fstream=null;

try{


fstream = new FileWriter(filename+"_required info.txt");

}catch(FileNotFoundException e){

e.printStackTrace();

}

BufferedWriter out = new BufferedWriter(fstream);

BufferedReader reader=null;

try{

reader=new BufferedReader(new FileReader(filename+".txt"));

}

catch(FileNotFoundException e){

e.printStackTrace();

}

while((read=reader.readLine())!=null)

{

if((read.indexOf("et="+start_event)>0)||read.indexOf("et="+end_event)>0
        ||read.indexOf("et=send")>0||read.indexOf("et=recv")>0)

{

if(read.indexOf("sdef")>0){

continue;

}

out.write(read+"\r\n");

}

}

out.close();

reader.close();

fstream.close();

}

}

4. progEXE.java

import java.io.BufferedWriter;

import java.io.File;

import java.io.FileWriter;

import java.io.IOException;
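
/**
 * progEXE writes a short command script (cd Project, make, mpirun, clog2_print) for
 * each run and executes it remotely through Shell: oneFile() performs a single run,
 * fixedProc() performs 15 runs with a fixed number of processes and an increasing
 * number of events, and fixedEvents() performs 15 runs with a fixed number of events
 * and an increasing number of processes.
 */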

public class progEXE {

private int noevent,noproc;

private String nameofprog,username,password;

private Shell sh;

public progEXE(String a,int b,int c,String d,String e){

this.nameofprog=a;


this.noevent=b;

this.noproc=c;

this.username=d;

this.password=e;

}

public void oneFile() throws IOException{

File nprog=new File(nameofprog+10+".txt");

if(!nprog.exists())

try {

nprog.createNewFile();

} catch (IOException e) {

// TODO Auto-generated catch block

e.printStackTrace();

}

BufferedWriter out=new BufferedWriter(new

FileWriter(nameofprog+10+".txt"));

out.write("cd Project \n");

out.write("make \n");

out.write("mpirun -np "+noproc+" "+nameofprog+" "+noevent+"\n");

out.write("clog2_print "+nameofprog+".clog2 \n");

out.close();

sh=new Shell(nameofprog);

sh.shellCommands(username,password);

if(nprog.delete()==false)

{

System.out.println("File has not been deleted");

}

else

System.out.println("File has been deleted");

}

public void fixedProc() throws IOException{

int i=0,counter=0;

while(i<15)

{

File nprog=new File("FP"+i+".txt");

if(!nprog.exists())

try {

nprog.createNewFile();

} catch (IOException e) {

// TODO Auto-generated catch block

e.printStackTrace();

}


BufferedWriter out=new BufferedWriter(new

FileWriter("FP"+i+".txt"));

out.write("cd Project \n");

out.write("make \n");

out.write("mpirun -np "+noproc+" "+nameofprog+"

"+(noevent+counter)+"\n");

out.write("clog2_print "+nameofprog+".clog2 \n");

out.close();

sh=new Shell("FP"+i);

sh.shellCommands(username,password);

i++;

counter+=5;

if(nprog.delete()==false)

{

System.out.println("File has not been deleted");

}

else

System.out.println("File has been deleted");

}

}

public void fixedEvents() throws IOException{

int i=0,counter=0;

while(i<15)

{

File nprog=new File("FE"+i+".txt");

if(!nprog.exists())

try {

nprog.createNewFile();

} catch (IOException e) {

// TODO Auto-generated catch block

e.printStackTrace();

}

BufferedWriter out=new BufferedWriter(new

FileWriter("FE"+i+".txt"));

out.write("cd Project \n");

out.write("make \n");

out.write("mpirun -np "+(noproc+counter)+" "+nameofprog+"

"+noevent+"\n");

out.write("clog2_print "+nameofprog+".clog2 \n");

out.close();

sh=new Shell("FE"+i);

sh.shellCommands(username,password);

i++;

counter++;


if(nprog.delete()==false)

{

System.out.println("File has not been deleted");

}

else

System.out.println("File has been deleted");

}

}

}

5. Shell.java

import com.jcraft.jsch.*;

import java.awt.*;

import java.io.FileInputStream;

import java.io.FileOutputStream;

import javax.swing.*;
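
/**
 * Shell uses JSch to open an SSH session to the given host, feeds the generated
 * command file to a remote shell channel, and captures the remote output into the
 * corresponding FixedEvent/FixedProc/program text file.
 */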

public class Shell{

private String inputfile,outputfile;

public Shell(String name){

if(name.indexOf("E")>0)

{

this.inputfile=name;

String count=name.substring(name.indexOf("E")+1);

this.outputfile="FixedEvent"+count;

}

else if(name.indexOf("P")>0)

{

this.inputfile=name;

String count=name.substring(name.indexOf("P")+1);

this.outputfile="FixedProc"+count;

}

else

{

this.inputfile=name+"10";

this.outputfile=name;

}

}

public void shellCommands(String host,String pwd){

try{

JSch jsch=new JSch();


String user=host.substring(0, host.indexOf('@'));

host=host.substring(host.indexOf('@')+1);

//user="zsyed";

//host="penguin.tamucc.edu";

Session session=jsch.getSession(user, host, 22);

session.setPassword(pwd);

// username and password will be given via UserInfo interface.

UserInfo ui=new MyUserInfo();

session.setUserInfo(ui);

try{

session.connect(30000); // making a connection with timeout.

}catch(Exception e){

JOptionPane.showMessageDialog(null, "Incorrect username/password", "Error",

JOptionPane.ERROR_MESSAGE);

}

Channel channel=session.openChannel("shell");

channel.setInputStream(new FileInputStream(inputfile+".txt"));

channel.setOutputStream(new FileOutputStream(outputfile+".txt"));

channel.connect(3*1000);

Thread.sleep(2500);

channel.disconnect();

if(channel.isClosed())

session.disconnect();

}

catch(Exception e){

System.out.println(e);

}

}

public static class MyUserInfo implements UserInfo, UIKeyboardInteractive{

public String getPassword(){ return passwd; }

public boolean promptYesNo(String str){

return true;

}

String passwd;

JTextField passwordField=(JTextField)new JPasswordField(20);


public String getPassphrase(){ return null; }

public boolean promptPassphrase(String message){ return true; }

public boolean promptPassword(String message){

Object[] ob={passwordField};

int result=JOptionPane.showConfirmDialog(null, ob, message,

JOptionPane.OK_CANCEL_OPTION);

if(result==JOptionPane.OK_OPTION){

passwd=passwordField.getText();

return true;

}

else{

return false;

}

}

public void showMessage(String message){

JOptionPane.showMessageDialog(null, message);

}

final GridBagConstraints gbc =

new GridBagConstraints(0,0,1,1,1,1,

GridBagConstraints.NORTHWEST,

GridBagConstraints.NONE,

new Insets(0,0,0,0),0,0);

private Container panel;

public String[] promptKeyboardInteractive(String destination,

String name,

String instruction,

String[] prompt,

boolean[] echo){

panel = new JPanel();

panel.setLayout(new GridBagLayout());

gbc.weightx = 1.0;

gbc.gridwidth = GridBagConstraints.REMAINDER;

gbc.gridx = 0;

panel.add(new JLabel(instruction), gbc);

gbc.gridy++;

gbc.gridwidth = GridBagConstraints.RELATIVE;

JTextField[] texts=new JTextField[prompt.length];

for(int i=0; i<prompt.length; i++){

gbc.fill = GridBagConstraints.NONE;

gbc.gridx = 0;

gbc.weightx = 1;

panel.add(new JLabel(prompt[i]),gbc);


gbc.gridx = 1;

gbc.fill = GridBagConstraints.HORIZONTAL;

gbc.weighty = 1;

if(echo[i]){

texts[i]=new JTextField(20);

}

else{

texts[i]=new JPasswordField(20);

}

panel.add(texts[i], gbc);

gbc.gridy++;

}

if(JOptionPane.showConfirmDialog(null, panel,

destination+": "+name,

JOptionPane.OK_CANCEL_OPTION,

JOptionPane.QUESTION_MESSAGE)

==JOptionPane.OK_OPTION){

String[] response=new String[prompt.length];

for(int i=0; i<prompt.length; i++){

response[i]=texts[i].getText();

}

return response;

}

else{

return null; // cancel

}

}

}

}

6. CombinedGraph.java

import java.awt.Color;

import java.util.ArrayList;

import java.util.Hashtable;

import org.jfree.chart.ChartFactory;

import org.jfree.chart.ChartPanel;

import org.jfree.chart.JFreeChart;

import org.jfree.chart.axis.NumberAxis;

import org.jfree.chart.plot.PlotOrientation;

import org.jfree.chart.plot.XYPlot;

import org.jfree.chart.renderer.xy.XYLineAndShapeRenderer;

import org.jfree.data.xy.XYDataset;

import org.jfree.data.xy.XYSeries;


import org.jfree.data.xy.XYSeriesCollection;

import org.jfree.ui.ApplicationFrame;

@SuppressWarnings("serial")

class CombinedGraph extends ApplicationFrame{

public CombinedGraph(String title,Hashtable<String,ArrayList<Integer>>

ChartPlot,String YLabel,String XLabel,String size) {

super(title);

final XYDataset dataset = createDataset(ChartPlot,size);

final JFreeChart chart = createChart(dataset,title,XLabel,YLabel);

final ChartPanel chartPanel = new ChartPanel(chart);

chartPanel.setPreferredSize(new java.awt.Dimension(500, 270));

setContentPane(chartPanel);

}

XYDataset createDataset(Hashtable<String,ArrayList<Integer>> PlotTable,String

comm_size){

ArrayList<XYSeries> theMap=new ArrayList<XYSeries>();

int wt,vol,ht;

InfoParser ifParser=new InfoParser();

for(int k=0;k<Integer.parseInt(comm_size);k++)

{

int z=0;

ArrayList<Integer> theList=new ArrayList<Integer>();

theList=PlotTable.get(Integer.toString(k));

ArrayList<Double> Conc=new ArrayList<Double>();

double tempConc;

for(int m=0;m<theList.size();m++)

{

wt=theList.get(z);

ht=theList.get(z+1);

vol=theList.get(z+2);

tempConc=ifParser.ConcCal(wt, vol, ht);

Conc.add(tempConc);

z+=3;

if(z==theList.size())

{

break;

}

}

Object[] arr;


arr=Conc.toArray();

for(int i=0;i<Conc.size();i++)

{

System.out.println("The Conc is: "+arr[i]+" the process is:

"+k);

}

XYSeries aSeries = new XYSeries("Process"+k);

for(int i=0;i<Conc.size();i++)

{

aSeries.add(i,Conc.get(i));

}

theMap.add(aSeries);

}

XYSeriesCollection dataset = new XYSeriesCollection();

for(int h=0;h<Integer.parseInt(comm_size);h++)

{

dataset.addSeries(theMap.get(h));

}

return dataset;

}

private JFreeChart createChart(XYDataset dataset,String Title,String X,String

Y){

final JFreeChart chart = ChartFactory.createXYLineChart(

Title, // chart title

X, // x axis label

Y, // y axis label

dataset, // data

PlotOrientation.VERTICAL,

true, // include legend

true, // tooltips

false // urls

);

chart.setBackgroundPaint(Color.white);

final XYPlot plot = chart.getXYPlot();

plot.setBackgroundPaint(Color.lightGray);

plot.setDomainGridlinePaint(Color.white);

plot.setRangeGridlinePaint(Color.white);

XYLineAndShapeRenderer renderer = new XYLineAndShapeRenderer();

renderer.setSeriesLinesVisible(0, true);

renderer.setSeriesShapesVisible(1, false);


plot.setRenderer(renderer);

NumberAxis rangeAxis = (NumberAxis) plot.getRangeAxis();

rangeAxis.setStandardTickUnits(NumberAxis.createIntegerTickUnits());

return chart;

}

}

7. IndividualGraph.java

import java.awt.Color;

import java.util.ArrayList;

import org.jfree.chart.ChartFactory;

import org.jfree.chart.ChartPanel;

import org.jfree.chart.JFreeChart;

import org.jfree.chart.axis.NumberAxis;

import org.jfree.chart.plot.PlotOrientation;

import org.jfree.chart.plot.XYPlot;

import org.jfree.chart.renderer.xy.XYLineAndShapeRenderer;

import org.jfree.data.xy.XYDataset;

import org.jfree.data.xy.XYSeries;

import org.jfree.data.xy.XYSeriesCollection;

import org.jfree.ui.ApplicationFrame;

@SuppressWarnings("serial")

class IndividualGraph extends ApplicationFrame{

@SuppressWarnings("rawtypes")

public IndividualGraph(String title,ArrayList ChartPlot,String YLabel,String

XLabel) {

super(title);

Object[] arr;

arr=ChartPlot.toArray();

final XYDataset dataset = createDataset(arr);

final JFreeChart chart = createChart(dataset,title,XLabel,YLabel);

final ChartPanel chartPanel = new ChartPanel(chart);

chartPanel.setPreferredSize(new java.awt.Dimension(500, 270));

setContentPane(chartPanel);

}

XYDataset createDataset(Object[] arr){

XYSeries series2 = new XYSeries("Concurrency");

for(int i=0;i<arr.length;i++)


{

series2.add(i,(Double) arr[i]);

}

XYSeriesCollection dataset = new XYSeriesCollection();

dataset.addSeries(series2);

return dataset;

}

private JFreeChart createChart(XYDataset dataset,String Title,String X,String

Y){

final JFreeChart chart = ChartFactory.createXYLineChart(

Title, // chart title

X, // x axis label

Y, // y axis label

dataset, // data

PlotOrientation.VERTICAL,

true, // include legend

true, // tooltips

false // urls

);

chart.setBackgroundPaint(Color.white);

final XYPlot plot = chart.getXYPlot();

plot.setBackgroundPaint(Color.lightGray);

plot.setDomainGridlinePaint(Color.white);

plot.setRangeGridlinePaint(Color.white);

XYLineAndShapeRenderer renderer = new XYLineAndShapeRenderer();

renderer.setSeriesLinesVisible(0, true);

renderer.setSeriesShapesVisible(1, false);

plot.setRenderer(renderer);

NumberAxis rangeAxis = (NumberAxis) plot.getRangeAxis();

rangeAxis.setStandardTickUnits(NumberAxis.createIntegerTickUnits());

return chart;

}

}

8. LineChart.java

import java.awt.Color;

import java.util.ArrayList;

import org.jfree.chart.ChartFactory;

import org.jfree.chart.ChartPanel;


import org.jfree.chart.JFreeChart;

import org.jfree.chart.axis.NumberAxis;

import org.jfree.chart.plot.PlotOrientation;

import org.jfree.chart.plot.XYPlot;

import org.jfree.chart.renderer.xy.XYLineAndShapeRenderer;

import org.jfree.data.xy.XYDataset;

import org.jfree.data.xy.XYSeries;

import org.jfree.data.xy.XYSeriesCollection;

import org.jfree.ui.ApplicationFrame;

@SuppressWarnings("serial")

class LineChart extends ApplicationFrame{

@SuppressWarnings("rawtypes")

public LineChart(String title,ArrayList ChartPlot,String YLabel,String XLabel) {

super(title);

Object[] arr;

arr=ChartPlot.toArray();

final XYDataset dataset = createDataset(arr);

final JFreeChart chart = createChart(dataset,title,XLabel,YLabel);

final ChartPanel chartPanel = new ChartPanel(chart);

chartPanel.setPreferredSize(new java.awt.Dimension(500, 270));

setContentPane(chartPanel);

}

XYDataset createDataset(Object[] arr){

XYSeries series = new XYSeries("Concurrency");

int i,m=0;

for(i=0;i<arr.length;i++)

{

series.add((Double) arr[m],(Double) arr[m+1]);

if((m+1)==(arr.length-1))

{

break;

}

else

{

m+=2;

}

}

XYSeriesCollection dataset = new XYSeriesCollection();

dataset.addSeries(series);

return dataset;


}

private JFreeChart createChart(XYDataset dataset,String Title,String X,String

Y){

final JFreeChart chart = ChartFactory.createXYLineChart(

Title, // chart title

X, // x axis label

Y, // y axis label

dataset, // data

PlotOrientation.VERTICAL,

true, // include legend

true, // tooltips

false // urls

);

chart.setBackgroundPaint(Color.white);

final XYPlot plot = chart.getXYPlot();

plot.setBackgroundPaint(Color.lightGray);

plot.setDomainGridlinePaint(Color.white);

plot.setRangeGridlinePaint(Color.white);

XYLineAndShapeRenderer renderer = new XYLineAndShapeRenderer();

renderer.setSeriesLinesVisible(0, true);

renderer.setSeriesShapesVisible(1, false);

plot.setRenderer(renderer);

NumberAxis rangeAxis = (NumberAxis) plot.getRangeAxis();

rangeAxis.setStandardTickUnits(NumberAxis.createIntegerTickUnits());

return chart;

}

}

9. StackedGraph.java

import java.util.ArrayList;

import java.util.Hashtable;

import org.jfree.chart.ChartPanel;

import org.jfree.chart.JFreeChart;

import org.jfree.chart.axis.AxisLocation;

import org.jfree.chart.axis.NumberAxis;

import org.jfree.chart.plot.CombinedDomainXYPlot;

import org.jfree.chart.plot.PlotOrientation;

import org.jfree.chart.plot.XYPlot;


import org.jfree.chart.renderer.xy.StandardXYItemRenderer;

import org.jfree.chart.renderer.xy.XYItemRenderer;

import org.jfree.data.xy.XYDataset;

import org.jfree.data.xy.XYSeries;

import org.jfree.data.xy.XYSeriesCollection;

import org.jfree.ui.ApplicationFrame;

@SuppressWarnings("serial")

public class StackedGraph extends ApplicationFrame {

/**

* Constructs a new demonstration application.

*

* @param title the frame title.

*/

public StackedGraph(String title,Hashtable<String,ArrayList<Integer>>

ChartPlot,String size) {

super(title);

final JFreeChart chart = createCombinedChart(ChartPlot,size);

final ChartPanel panel = new ChartPanel(chart, true, true, true, false, true);

panel.setPreferredSize(new java.awt.Dimension(500, 270));

setContentPane(panel);

}

/**

* Creates a combined chart.

*

* @return the combined chart.

*/

private JFreeChart createCombinedChart(Hashtable<String,ArrayList<Integer>>

cPlot,String nProcessor) {

ArrayList<XYPlot> aSubPlot=new ArrayList<XYPlot>();

// create subplot 1...

for(int i=0;i<Integer.parseInt(nProcessor);i++)

{

final XYDataset data1 = createDataset1(cPlot,nProcessor,i);

final XYItemRenderer renderer1 = new StandardXYItemRenderer();

final NumberAxis rangeAxis1 = new NumberAxis("% of Concurrency");

final XYPlot subplot1 = new XYPlot(data1, null, rangeAxis1, renderer1);

subplot1.setRangeAxisLocation(AxisLocation.TOP_OR_LEFT);

aSubPlot.add(subplot1);

}


// parent plot...

final CombinedDomainXYPlot plot = new CombinedDomainXYPlot(new

NumberAxis("Time(in ms)"));

plot.setGap(10.0);

// add the subplots...

for(int k=0;k<aSubPlot.size();k++)

{

plot.add(aSubPlot.get(k), 1);

}

/*plot.add(subplot1, 1);

plot.add(subplot2, 1);*/

plot.setOrientation(PlotOrientation.VERTICAL);

// return a new chart containing the overlaid plot...

return new JFreeChart("% of Concurrency VS Time(in ms)",

JFreeChart.DEFAULT_TITLE_FONT, plot, true);

}

/**

* Creates a sample dataset.

*

* @return Series 1.

*/

private XYDataset createDataset1(Hashtable<String,ArrayList<Integer>>

chPlot,String nProc,int iter) {

ArrayList<Integer> tList=new ArrayList<Integer>();

int wt,vol,ht,z=0;

ArrayList<Double> Conc=new ArrayList<Double>();

InfoParser ifParser=new InfoParser();

tList=chPlot.get(Integer.toString(iter));

double tempConc;

for(int m=0;m<tList.size();m++)

{

wt=tList.get(z);

ht=tList.get(z+1);

vol=tList.get(z+2);

tempConc=ifParser.ConcCal(wt, vol, ht);

Conc.add(tempConc);

z+=3;

if(z==tList.size())

{

break;


}

}

XYSeries aSeries = new XYSeries("Process"+iter);

for(int i=0;i<Conc.size();i++)

{

aSeries.add(i,Conc.get(i));

}

final XYSeriesCollection collection = new XYSeriesCollection();

collection.addSeries(aSeries);

return collection;

}

}


APPENDIX C

Hibernate Code

1. SessionFactoryUtil.java

import org.hibernate.SessionFactory;

import org.hibernate.cfg.Configuration;

public class SessionFactoryUtil {

private static final SessionFactory sessionFactory;

static {

try {

// Create the SessionFactory from hibernate.cfg.xml

sessionFactory = new Configuration().configure().buildSessionFactory();

} catch (Throwable ex) {

System.err.println("Initial SessionFactory creation failed." + ex);

throw new ExceptionInInitializerError(ex);

}

}

public static SessionFactory getSessionFactory() {

return sessionFactory;

}

}

2. FixedEvent.java

public class FixedEvent {

String name_of_file;

int sno;

double concurrency,no_of_processors;

public int getSerialno() {

return sno;

}

public void setSerialno(int sno) {

this.sno = sno;

}

public String getName_of_file() {

return name_of_file;

}


public void setName_of_file(String name_of_file) {

this.name_of_file = name_of_file;

}

public double getNo_of_processors() {

return no_of_processors;

}

public void setNo_of_processors(double no_of_processors) {

this.no_of_processors = no_of_processors;

}

public double getConcurrency() {

return concurrency;

}

public void setConcurrency(double concurrency) {

this.concurrency = concurrency;

}

}

3. FixedProcess.java

public class FixedProcess {

String name_of_file;

int sno;

double concurrency,no_of_events;

public int getSerialno() {

return sno;

}

public void setSerialno(int sno) {

this.sno = sno;

}

public String getName_of_file() {

return name_of_file;

}

public void setName_of_file(String name_of_file) {

this.name_of_file = name_of_file;

}

public double getNo_of_events() {

return no_of_events;

}

public void setNo_of_events(double no_of_events) {

this.no_of_events = no_of_events;

}

public double getConcurrency() {

return concurrency;

}


public void setConcurrency(double concurrency) {

this.concurrency = concurrency;

}

}

4. Concurrency.hbm.xml

<?xml version="1.0"?>

<!DOCTYPE hibernate-mapping PUBLIC

"-//Hibernate/Hibernate Mapping DTD 3.0//EN"

"http://hibernate.org/dtd/hibernate-mapping-3.0.dtd">

<hibernate-mapping>

<class name="FixedEvent" table="fixed_event">

<id name="serialno" column="sno">

<generator class="native" />

</id>

<property name="name_of_file">

<column name="name_of_file" not-null="true" />

</property>

<property name="no_of_processors">

<column name="no_of_processors" not-null="true" />

</property>

<property name="concurrency">

<column name="concurrency" not-null="true" />

</property>

</class>

<class name="FixedProcess" table="fixed_processors">

<id name="serialno" column="sno">

<generator class="native" />

</id>

<property name="name_of_file">

<column name="name_of_file" not-null="true" />

</property>

<property name="no_of_events">

<column name="no_of_events" not-null="true" />


</property>

<property name="concurrency">

<column name="concurrency" not-null="true" />

</property>

</class>

</hibernate-mapping>

5. Hibernate.cfg.xml

<!DOCTYPE hibernate-configuration PUBLIC

"-//Hibernate/Hibernate Configuration DTD 3.0//EN"

"http://hibernate.org/dtd/hibernate-configuration-3.0.dtd">

<hibernate-configuration>

<session-factory>

<!-- hibernate dialect -->

<property name="hibernate.dialect">org.hibernate.dialect.MySQL5Dialect</property>

<property

name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>

<property

name="hibernate.connection.url">jdbc:mysql://localhost:3306/concurrency</property>

<property name="hibernate.connection.username">root</property>

<property name="hibernate.connection.password"></property>

<property name="transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</property>

<!-- Automatic schema creation (begin) === -->

<property name="hibernate.hbm2ddl.auto">update</property>

<!-- Simple memory-only cache -->

<property name="hibernate.cache.provider_class">org.hibernate.cache.HashtableCacheProvider</property>

<!-- Enable Hibernate's automatic session context management -->

<property name="current_session_context_class">thread</property>

<!-- Shows the SQL in stdout -->

<property name="show_sql">true</property>


<!-- ############################################ -->

<!-- # mapping files with external dependencies # -->

<!-- ############################################ -->

<mapping resource="Concurrency.hbm.xml"/>

</session-factory>

</hibernate-configuration>