
CHAPTER - 1

BASICS, DEVELOPMENT AND BRIEF SURVEY

OF SCHEDULING THEORY

1.1. Background of Operation Research

Any problem that requires a positive decision to be made can be

classified as an Operation Research (O.R.) type problem. Although operation

research problems have existed since the creation of man, the term ‘Operation

Research’ was coined in 1940 by McClosky and Trefthen in a small town of

Bawdsey in England. It is a science that came into existence in a military context.

During World War II, the military management of U.K. called on scientists from

various disciplines and organized them into teams to assist in solving strategic and

tactical problems relating to air and land defence of country. This new approach to

the systematic and scientific study of the operations of the system was called

‘Operations Research’ or ‘Operational Research’. Hence, Operations Research

can be termed as ‘an art of winning war without actually fighting it ’.

In India, Operations Research came into existence with the opening of an

O.R. unit in 1949 at the Regional Research Laboratory in Hyderabad. At the same

time another O.R. unit was set up at the Defence Science laboratory to tackle the

problem of stores, purchase and weapon evaluation. Operations Research Society

of India (ORSI) was formed in 1957 and its first conference was held in Delhi in

1959. It was also decided to bring out a journal on operations research, the first

volume of which came out in 1963 with the name ‘Opsearch ’.

Operations Research has been defined in various ways and it is perhaps

still too young to be defined in some authoritative way. There have not been any

uniformly acceptable definitions of it as yet. Some prominent definitions of O.R.

are given below.


Operations Research is a systematic method-oriented study of the basic

structures, characteristics, functions and relationships of an organization to

provide the executive with a sound, scientific and quantitative basis for decision

making.

Operations Research is a scientific method of providing executive

departments with a quantitative basis for decisions regarding the operations

under control.

Operations Research is the scientific knowledge through interdisciplinary

team efforts for the purpose of determining the best utilization of limited

resources.

Operations Research is the art of giving bad answers to problems, to

which, otherwise, worse answers are given.

Operations Research is an applied decision theory. It uses any scientific, mathematical or logical means to attempt to cope with the problems that confront the executive when he tries to achieve a thoroughgoing rationality in dealing with decision problems.

Operations Research is an experimental and applied science devoted to

observations, understanding and predicting the behavior of purposeful man-

machine systems; and operations research workers are actively engaged in

applying this knowledge to practical problems in business, government and

society.

Operations Research is the application of modern methods of

mathematical science to complex problems involving management of large

systems of men, machines, materials and money in industry, business,

government and defence.

The various definitions given above bring out the following essential

characteristics of operations research:

a) System (or Executive) Orientation: It means that each part of an organization has some effect on every other part. The optimum operation of one part of a system may not be the optimum operation for some other part.


Therefore, to evaluate any decision, one must identify all possible interactions

and determine their impact on the organization as a whole.

b) Use of Interdisciplinary Teams: The study is performed by a team of scientists whose individual members have been drawn from different scientific and engineering disciplines. For example, one may find a mathematician, statistician, physicist, psychologist, economist and an engineer working together on an O.R. problem. It has been recognized that people from different disciplines can produce more innovative solutions, with a greater probability of success, than could be expected from the same number of persons from a single discipline.

c) Application of Scientific Methods: Operations Research uses scientific methods to solve the problem under study. Most scientific research, such as that in chemistry and physics, can be carried out in laboratories under controlled conditions, without much interference from the outside world.

d) Uncovering New Problems: The solution of an operations research problem may uncover a number of new problems. All the uncovered problems need not be solved at the same time, but in order to derive maximum benefit each one of them must eventually be solved. The results of an O.R. study pertaining to a particular problem need not wait until all the other uncovered problems are solved.

e) Quantitative Solutions: Operations Research approach provides the

management with a quantitative basis for decision making.

f) Human Factors: Operations Research study is incomplete without a study

of human factors. Human factors play a great role in the problem posed.

1.2. Scopes of Operation Research

Operations research plays an increasingly important role in both the

public and private sectors. Operations research has a very wide scope; in general, we can say that wherever there is a problem, there is O.R. to help. In addition to the military, operations research addresses a wide variety of issues in transportation,

inventory planning, production planning, crew planning, communication network

design and operation, computer operations, financial assets, risk management,

revenue management, market clearing and many other topics that aim to improve


business productivity. In the public domain, it deals with energy policy, defence,

health care, water resource planning, forestry management, design and operation

of urban emergency systems and criminal justice.

The scopes of O.R. can be summarized in various important fields as follows:

(i). In Defence

In modern warfare, military operations are carried out by the Air Force, Army and Navy. Therefore, there is a necessity to formulate optimum strategies that may give maximum benefit. O.R. helps the military executives and managers to select the best course of action to win the battle. Hence O.R. can be termed as ‘an art of winning war without actually fighting it’.

(ii). In Industry

As industries expanded, it became less and less possible for one person to manage them. Thus, company executives established different departments.

(a) Production Department

To minimize the cost of production, O.R. is useful to production specialist in

Designing and selecting sites

Scheduling and sequencing the production run by proper allocation of

machines, and

Calculating the optimal product mix.

(b) Marketing Department

To maximize the amount sold and to minimize the cost of sales, O.R. is useful to

the marketing managers in

Determining when to buy, how often to buy and what to buy to minimize the total cost,

Calculating the minimum sale price per unit, and

Knowing the customer’s choice relating to color, packing and size etc., for

various products.

(c) Financial Department

To minimize the capital required to maintain any level of the business, O.R. is

useful to financial controller in

Finding out the long term capital required

Finding out a profit plan for the company, and

Determining the optimum replacement policies.

(iii). In L.I.C.

Operations research techniques are also applicable to enable L.I.C. officers to

decide the premium rates of various policies for the best interest of the

corporation.

(iv). In Agriculture

With increase in population and consequent shortage of food, there is a need to

increase agricultural output for a country. There are many problems faced by the agriculture department of a country, such as climatic conditions, the problem of optimal distribution of water from the available resources, etc. Thus there is a need for the best policy under the given restrictions. O.R. techniques help to determine the best policies

and have a great scope in agriculture.

(v). In Planning

Careful planning plays an important role for the economic development of a

country. The Planning Commission made use of O.R. techniques for planning the optimum size of the Caravelle fleet of Indian Airlines, for production planning, for

the water resource planning etc. Thus O.R. has a great scope in planning.


In a nutshell, operations research is the discipline of applying advanced analytical methods that help in making better decisions. There is great scope for economists, statisticians, administrators and technicians working as a team to solve problems using the O.R. approach.

1.3. Phases of Operations Research

Operations research is a problem-solving and decision-making science. It

is a kit of scientific and programmable rules providing the management a

‘quantitative basis’ for decisions regarding the operations under control, i.e. it

enables the decision maker to choose the best alternative from all available

alternatives for a problem. The procedure to be followed in the study of O.R.

involves the following major phases:

(i) Formulating the problem

(ii) Constructing a mathematical model

(iii) Deriving the solution from the model

(iv) Testing the model and its solution

(v) Controlling the solution

(vi) Implementation.

1.4. Uses of Operations Research

As its name implies, operations research involves “research on

operations”. Thus operation research is applied to problems that concern how to

conduct and coordinate the operations (activities) within an organization. The nature of the organization is immaterial and, in fact, O.R. has been applied extensively in such diverse areas as manufacturing, transportation, construction,

telecommunications, financial planning, health care, the military, and public

services. The breadth of applications of operations research is unusually wide.

Today operations research symbolizes scientific and interdisciplinary team work directed towards the mathematical analysis of meta-level issues such as agriculture, environment, natural resources, management, industrial productivity, congestion and waiting lines, and quality of life. Operations research is a tool concerned with the scientific design and operation of man-machine systems, usually under constraints and conflicting situations requiring the optimum allocation of limited resources, and with obtaining an optimal solution of the problem.

At present O.R. covers a wide range of topics such as allocation

problems, linear programming, integer programming, inventory control, queuing

problems, scheduling problems, routing problems, reliability, replacement etc.

The flow shop scheduling models constitute an important branch of O.R. as this

branch is helpful to decision makers in the management, industrial, business, engineering, military, computer and other fields of optimization.

1.5. Sequencing and Scheduling

Sequencing problems are a very common occurrence in nature. They exist whenever there is a choice as to the order in which a number of tasks/jobs can be performed. Sequencing is a technique to order the jobs in a particular sequence. There are different types of sequencing followed in industries, such as the first-in-first-out basis, priority basis, job size basis and processing time basis, etc. In

sequencing, we have to determine the optimal sequence (order) of performing the

jobs in such a way that the total time is minimized. If n jobs are to be processed on m machines for successful completion of a project, then there are (n!)ᵐ possible sequences of job processing. It is very difficult to find the effectiveness measure of all the sequences, which are very large in number, and to select the most suitable sequence which optimizes the measure effectively.

Scheduling is the allocation of resources over time to perform a

collection of tasks. First, it is a decision making process that is used on a regular

basis in many manufacturing and service industries. It deals with the allocation of resources to tasks over given time periods, and its goal is to optimize one or more objectives. Second, it is a body of theory, i.e. a collection of principles, models, techniques and logical conclusions that provide insight into the

scheduling function. In scheduling theory, three types of decision-making goals seem to be prevalent: efficient utilization of resources, rapid response to demands, and close conformance to prescribed deadlines. Frequently, an important cost-related measure of system performance, such as machine idle time, job waiting time or job lateness, can be used as a substitute for total system cost, and quantitative approaches to problems with these criteria appear throughout the literature on scheduling. Traditionally, scheduling problems have been

viewed as problems in optimization subject to constraints – specifically, problems

in allocation and sequencing. Sometimes, scheduling is purely allocation, and in

these cases mathematical programming models can usually be employed to

determine optimal decisions.

Many of the early developments in the field of scheduling were

motivated by problems arising in manufacturing. Therefore, it was natural to

employ the vocabulary of manufacturing when describing scheduling problems.

Now even though the scheduling work is of considerable significance in many

non-manufacturing areas, the terminology of manufacturing is still frequently

used. Thus, resources are usually called ‘machines’ and basic task modules are

called ‘jobs’. Sometimes a job may consist of several elementary tasks that are interrelated by precedence restrictions, and such elementary tasks are referred to as ‘operations’. For example, even a problem in the scheduling of patients' visits to specialists in a diagnostic clinic can be described generically as the processing of ‘jobs’ by ‘machines’.

By scheduling, we assign a particular time for completing a particular

job. Scheduling is the allocation of resources pertaining to start and finish times

for tasks. It is nothing but scheduling various jobs on a set of resources such that

certain performance measures are optimized. Scheduling is generally considered to be one of the most significant issues in the planning and operation of a manufacturing system. A better scheduling system has a significant impact on cost reduction, increased productivity, customer satisfaction and overall competitive advantage. In addition, recent customer demand for high-variety products has contributed to an increase in product complexity that further emphasizes the need for improved scheduling. Proficient scheduling leads to an increase in capacity utilization efficiency, thereby reducing the time required to complete jobs and consequently increasing the profitability of an organization in the present competitive environment.

Scheduling/sequencing problems occur commonly in our daily life, e.g. the ordering of jobs for processing in a manufacturing plant, aircraft waiting for landing clearance, programs to be run in a sequence at a computer centre, etc. Such problems exist whenever there is an alternative choice of the order in which a number of jobs can be done. The selection of an appropriate order or sequence in which to serve waiting customers is called sequencing. As with other operational research problems, the objective is to optimize the use of available facilities so as to process the items or jobs effectively.

A specific scheduling problem is described by four types of information:

1) The jobs and operations to be processed.

2) The number and types of machines that comprise the shop.

3) Disciplines that restrict the manner in which the assignments can be made.

4) The criteria by which a schedule will be evaluated.

A four-parameter notation will be used to identify individual scheduling problems

written as A/B/C/D.

A describes the job-arrival process. For dynamic problems (jobs arrive intermittently, at times that are predictable only in a statistical sense, and arrivals will continue indefinitely into the future), A will identify the probability distribution of the times between arrivals. For static problems (a certain number of jobs arrive simultaneously in a shop that is idle and immediately available for work), it will specify the number of jobs, assumed to arrive simultaneously. When n is given as the first term, it denotes an arbitrary but finite number of jobs in a static problem.


B describes the number of machines in the shop. A second term of m

denotes an arbitrary number of machines.

C describes the flow pattern in the shop. The principal symbols are F

for the flow shop limiting case, R for the randomly routed job-shop limiting case,

and G for completely general or arbitrary flow pattern. In the case of a flow shop

with a single machine there is no flow pattern, and the third parameter of

description is omitted.

D describes the criterion by which the schedule is to be evaluated.

As an example of this notation:

n/2/F/Fmax : represents n jobs to be scheduled in a two-machine flow shop so as to minimize the maximum flow-time.

n/m/G/Fmax : represents n jobs to be scheduled in an arbitrary shop of m machines so that the last job is finished as soon as possible.

The two terms, viz. sequencing and scheduling, although distinct to some extent, are used as synonymous terms in the present work.

1.6. Scheduling Environments

Over the last fifty years a considerable amount of research effort has been focused on scheduling. Scheduling is the allocation of resources over time to

perform a collection of tasks. The practical problem of allocating resources over

time to perform a collection of tasks arises in a variety of situations. Depending

upon various situations, scheduling can be classified into the following

categories/types:

1.6.1. Single Machine Scheduling (1)

Single machine scheduling is the simplest of all possible machine environments and is a special case of all other, more complicated machine environments. In single machine scheduling each job consists of a single operation, i.e. there is a single resource or machine. The basic single machine scheduling problem is characterized by the following conditions:

a) A set of n independent, single-operation jobs is available for processing at

time zero.

b) Setup times for the jobs are independent of job sequence and can be

included in processing time.

c) Job descriptors are known in advance.

d) One machine is continuously available and is never kept idle while work is

waiting.

e) Once processing begins on a job, it is processed to completion without

interruption.

Under these conditions there is a one-to-one correspondence between a sequence of the n jobs and a permutation of the job indices 1, 2, …, n. The total number of distinct solutions to the basic single machine scheduling problem with n jobs is n!.

The single machine scheduling problem is significant as it can illustrate a

variety of scheduling topics. It is a building block in the development of a

comprehensive understanding of scheduling concepts. In order to understand the

behavior of a complex system consisting of a number of machines, single machine

scheduling appears as an elementary component. For example, in multi-

operational processes there is often a bottleneck stage, and the treatment of the

bottleneck itself with single machine analysis may determine the properties of the

entire schedule.
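To make the one-to-one correspondence between sequences and permutations concrete, here is a minimal sketch (with hypothetical data, not taken from this chapter) that enumerates all n! sequences of a small single machine instance and evaluates the total flow time of each; brute force of this kind is feasible only for very small n.

```python
from itertools import permutations

# Hypothetical processing times of n = 4 single-operation jobs (all released at time zero).
processing_times = {1: 5, 2: 2, 3: 8, 4: 3}

def total_flow_time(sequence, p):
    """Sum of completion times when jobs are run back-to-back on one machine."""
    t, total = 0, 0
    for job in sequence:
        t += p[job]          # completion time of this job
        total += t           # flow time equals completion time (release times are zero)
    return total

# Exhaustive search over all n! = 24 permutations of the job indices.
best = min(permutations(processing_times), key=lambda s: total_flow_time(s, processing_times))
print(best, total_flow_time(best, processing_times))  # SPT order (2, 4, 1, 3) is optimal here
```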

1.6.2. Parallel Machine Scheduling (Pm)

Parallel machine scheduling is an important kind of multi-machine scheduling in which every machine has the same work function and every job can be processed by any available machine. There are m identical machines in parallel. Job j requires a single operation and may be processed on any one of the m available machines, or on any one belonging to a given subset. A bank of machines in parallel is a setting that is important from both a theoretical and a practical

point of view. From a theoretical point of view it is a generalization of the single machine environment and, from the practical point of view, it is important because the

occurrence of resources in parallel is common in the real world. Also, techniques

for machines in parallel are often used in decomposition procedures for multi-

stage systems.

Parallel machine scheduling may be considered as a two-step process.

First, one has to determine which jobs have to be allocated to which machines;

second, one has to determine the sequence of the jobs allocated to each machine.
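As a small illustration of this two-step view, the sketch below (hypothetical data) applies the classical longest-processing-time (LPT) list rule: jobs are considered in non-increasing order of processing time and each is allocated to the machine that becomes free first. LPT is a well-known heuristic for the makespan on identical parallel machines, not necessarily an optimal procedure.

```python
import heapq

def lpt_schedule(processing_times, m):
    """Assign single-operation jobs to m identical parallel machines with the LPT list rule."""
    # Min-heap of (current load, machine index): always allocate to the least loaded machine.
    loads = [(0, k) for k in range(m)]
    heapq.heapify(loads)
    assignment = {k: [] for k in range(m)}
    for job, p in sorted(processing_times.items(), key=lambda kv: kv[1], reverse=True):
        load, k = heapq.heappop(loads)
        assignment[k].append(job)
        heapq.heappush(loads, (load + p, k))
    makespan = max(load for load, _ in loads)
    return assignment, makespan

# Hypothetical data: 6 jobs on 2 identical machines.
jobs = {1: 7, 2: 6, 3: 5, 4: 5, 5: 4, 6: 3}
print(lpt_schedule(jobs, 2))
```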

1.6.3. Flow Shop Scheduling (Fm)

In many manufacturing and assembly facilities each job has to undergo a

series of operations. Often, these operations have to be done on all jobs in the

same order implying that the jobs have to follow the same route. The machines

are assumed to be set up in series and the environment is referred to as a flow shop.

The shop contains m different machines in series. Each job has to be

processed on each one of these m machines. All jobs have to follow the same order or route, i.e. they have to be processed first on machine 1, then on machine 2, and so on. After completion on one machine, a job joins the queue at the next machine, i.e. the flow shop is characterized by a flow of work that is unidirectional. In other words, a flow shop contains a natural machine order, i.e. it is possible to number the machines so that if the jth operation of any job precedes its kth operation, then the machine required by the jth operation has a lower number than the machine required by the kth operation. Usually, all queues are assumed to operate under the First In First Out (FIFO) discipline, i.e. a job cannot pass another while waiting in a queue. The machines in a flow shop are numbered 1, 2, …, m; and the operations of job i are correspondingly numbered (i, 1), (i, 2), …, (i, m). Each job can be treated as if it had exactly m operations, for

in cases where fewer operations exist, the corresponding processing times can be

taken to be zero. The conditions that characterize flow shop problems are similar to the conditions of the basic single-machine model. One difference from the basic single-machine scheduling is that inserted idle time may be

advantageous. In the single machine model with simultaneous arrivals it was

possible to assume that the machine would never be kept idle when work was

waiting. In the flow shop scheduling, it may be necessary to provide idle time to

achieve optimality. For a flow shop problem, there are n! different job sequences possible for each machine, and therefore (n!)ᵐ different schedules are to be examined.
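The makespan of any one permutation can be computed with the standard flow shop recurrence, in which a job starts on machine j only after it finishes on machine j-1 and after the previous job leaves machine j. A minimal sketch with hypothetical data:

```python
def flow_shop_makespan(sequence, a):
    """Makespan of a permutation flow shop.
    a[job][j] is the processing time of the job on machine j (machines in series, FIFO queues)."""
    m = len(next(iter(a.values())))
    completion = [0] * m                    # completion[j]: finish time of the previous job on machine j
    for job in sequence:
        for j in range(m):
            start = max(completion[j], completion[j - 1] if j > 0 else 0)
            completion[j] = start + a[job][j]
    return completion[-1]

# Hypothetical 4-job, 3-machine instance.
a = {1: [3, 4, 2], 2: [2, 5, 1], 3: [4, 1, 3], 4: [1, 2, 4]}
print(flow_shop_makespan([1, 2, 3, 4], a))
```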

1.6.4. Flexible Flow shop Scheduling (F Fc)

A flexible flow shop scheduling is a generalization of the flow shop and

parallel machine scheduling. Instead of m machines in series there are c stages in

series, with a number of identical machines in parallel at each stage. Each job has to be processed first at stage 1, then at stage 2, and so on, i.e. a job has to be processed at each stage on only one of the machines. This machine environment has been referred to as a flexible flow shop, a compound flow shop, a multi-processor flow shop or a hybrid flow shop. A stage functions as a bank of parallel machines: at each stage, job j requires processing on only one machine, and any of the machines can do it. The queues between the various stages may or may not

operate according to the First Come First Served (FCFS) discipline.

1.6.5. Job Shop Scheduling (Jm)

In a job shop with m machines each job has its own predetermined route

to follow. The classical job shop problem differs from the flow shop problem in

one important respect: the flow of work is not unidirectional. In a flow shop

scheduling all jobs follows the same route. But, when the routes are fixed and not

necessarily the same for each job, the model is called a job shop. Unlike the flow shop model, there is no initial machine that can perform only the first operation of a job, nor is there a terminal machine that performs only the last operation of a

job. In a job shop it is more appropriate to describe an operation with a triplet

(i, j, k) in order to denote the fact that operation j of job i requires machine k.


If a job in a job shop has to visit certain machines more than once, the

job is said to recirculate. Recirculation is a common phenomenon in the real

world. For example, in semiconductor manufacturing jobs have to recirculate

several times before they complete all their processing.

1.6.6. Flexible Job Shop Scheduling (FJc)

A flexible job shop is a generalization of the job shop and the parallel

machine environments. Instead of m machines in series there are c work centers

and each work center has a number of identical parallel machines. Each job has its

own route to follow through the shop; job j requires processing at each work center on only one machine, and any of the machines can do it.

1.6.7. Open Shop Scheduling (Om)

In open shop scheduling no job has a predetermined route. There are m machines. Every job has to be processed on each of the m machines. However,

some of these processing times may be zero. There are no restrictions with regard

to the routing of each job through the machines environment. The scheduler is

allowed to determine a route for each job and different jobs may have different

routes.

1.7. Categories of Scheduling Problems

The sequencing / scheduling problems can be categorized into the

following types:

1.7.1. Deterministic Scheduling Problems

The scheduling problems in which all the problem elements, such as the arrival state of jobs on the shop floor, due dates of jobs, ordering, processing times, availability of machines, etc., do not include a stochastic factor and are determined in advance are included in this category.

1.7.2. Static Scheduling Problems

The scheduling problems in which all jobs arrive together and the set of jobs does not change over time are called static scheduling problems. The setup times of the jobs are available beforehand.

1.7.3. Dynamic Scheduling Problems

The scheduling problems in which the set of jobs changes over time and jobs arrive at different times are called dynamic scheduling problems.

1.7.4. Stochastic Scheduling Problems

The scheduling problems in which at least one of the problem elements includes a stochastic factor are called stochastic scheduling problems.

1.8. Principal Assumptions

The following assumptions have been made regarding the machines, jobs

and the operating process:

1.8.1. Assumptions Regarding Machines

i. No machine processes more than one operation at a time.

ii. Each operation on a machine, once started, must be performed to its completion.

iii. Each operation takes a finite time and must be completed before any other operation begins. The given operation time includes the set-up time, unless otherwise specified.

iv. There is only one machine of each type.

v. Each machine operates independently of the other.

1.8.2. Assumptions Regarding Jobs

i. All jobs are available for processing at time zero.


ii. All jobs allow the same sequence of operations.

iii. Jobs are independent of each other.

iv. No job is processed more than once on any machine.

v. The processing times of the jobs are independent of the order in which

jobs are performed.

vi. Each job once started must be processed to completion.

1.8.3. Assumptions Regarding Operating Process

i. Each job is processed as early as possible.

ii. Each job is considered as an individual entity even though it may consist of a number of individual units.

iii. Each machine is provided with sufficient waiting space to allow jobs to wait before starting their processing.

iv. Each machine processes jobs in the same sequence, i.e. no passing or overtaking of jobs is permitted.

v. The transportation time of jobs between machines is negligible, unless otherwise specified.

vi. Set-up times are sequence-independent.

vii. The given operation times include set-up times, unless otherwise specified.

1.9. Performance Measures

Let the number of jobs be denoted by n and the number of machines by m. Usually, the subscript ‘i’ refers to a job while the subscript ‘j’ refers to a machine.

Let the n jobs be identified as 1, 2… n and the m machines by M1, M2… Mm. The

various performance measures are as follows:

(i). Release Time/date (ri)

It is the date at which the ith job is released to the shop by some external job-generating process, i.e. the earliest time at which the processing of the first operation of the job can start. It is also known as the ready time or arrival time.

(ii). Processing Time (ai,j)

It is the time required to process ith job on jth machine. The processing

time ai,j will normally include both processing time and set-up time unless

specified. The subscript ‘j’ is omitted if the processing time of job ‘i’ does not

depend upon the machine or if job ‘i’ is only to be processed on one given

machine.

(iii). Due- Date (di)

The due date di of job i represents the committed shipping or completion

date, i.e. the date at which the job is promised to the customer. Completion of a job after its due date is allowed, but then a penalty is incurred. When a due date must be met, it is referred to as a deadline and is denoted by d̄i.

(iv). Completion Time (ci)

It is the amount of time required for any particular task to be completed,

i.e. the time at which the ith job is actually completed in a sequence of job

processing.

(v). Flow-time (fi)

It is the amount of time that a job spends in the system. It is the

difference between the completion time and the ready time of the job, i.e. for the ith

job, the flow time = fi = ci – ri.

(vi). Lateness (Li)

It is the difference between the completion time of a job and its due-date,

i.e. for the ith job, Lateness = Li = ci - di.

(vii). Tardiness (Ti)

It is the lateness of the job if it fails to meet its due date; otherwise it is zero, i.e. for the ith job, Tardiness = Ti = max {0, Li} = max {0, ci - di}.

(viii). Earliness (Ei)

It is the negative of lateness of the job if the completion time (ci) is less

than its due date (di); otherwise it is zero, i.e. for the ith job, it is defined as

Ei = max {0, - Li} = max {0, di - ci}.

(ix). Total Elapsed time (Cmax)

It is defined as the completion time of the last job when the set of all

jobs finish their processing on all available machines. It is also called Makespan

and is defined as max (c1, c2, …, cn), i.e. Cmax = max {ci : i = 1, 2, …, n}.

(x). Idle time (Ij)

The idle time of machine Mj is the time for which machine Mj remains idle. It is denoted by Ij and defined as Ij = Cmax - Σ ai,j , where ai,j is the processing time of the ith job on machine Mj and the sum is taken over the jobs i = 1, 2, …, n.

(xi). Mean Completion Time (C̄)

It is the average completion time of all the jobs, i.e. C̄ = (1/n) Σ ci , where ci is the completion time of the ith job and the sum is taken over i = 1, 2, …, n.

(xii). Mean Flow-time (F̄)

It is the average time that a job spends on the shop floor, i.e. F̄ = (1/n) Σ Fi , where Fi is the flow time of the ith job and the sum is taken over i = 1, 2, …, n.
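The measures defined so far can be evaluated directly from the completion times of a schedule. The sketch below (hypothetical single machine data) computes completion time, flow time, lateness, tardiness, earliness, makespan, mean completion time and mean flow time for one job sequence.

```python
def performance_measures(sequence, p, r, d):
    """Evaluate the basic time measures for one job sequence on a single machine.
    p, r, d: processing time, release (ready) time and due date of each job."""
    t, rows = 0, {}
    for i in sequence:
        t = max(t, r[i]) + p[i]                      # completion time c_i
        L = t - d[i]                                 # lateness  L_i = c_i - d_i
        rows[i] = {"c": t,
                   "f": t - r[i],                    # flow time f_i = c_i - r_i
                   "T": max(0, L),                   # tardiness T_i = max(0, L_i)
                   "E": max(0, -L),                  # earliness E_i = max(0, -L_i)
                   "L": L}
    n = len(sequence)
    c_max = max(v["c"] for v in rows.values())       # makespan (total elapsed time)
    mean_c = sum(v["c"] for v in rows.values()) / n  # mean completion time
    mean_f = sum(v["f"] for v in rows.values()) / n  # mean flow time
    return rows, c_max, mean_c, mean_f

# Hypothetical data for three jobs processed in the order 2, 1, 3.
p = {1: 4, 2: 2, 3: 5}
r = {1: 0, 2: 1, 3: 0}
d = {1: 6, 2: 4, 3: 12}
print(performance_measures([2, 1, 3], p, r, d))
```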


(xiii). Penalty Cost

It is defined as the total penalty paid by virtue of jobs being late in

completion by their due-dates.

(xiv). Total Production Cost

It is defined as the total cost of the production for a set of products on

the available machines. Two performance measures are equivalent if a schedule which is optimal with respect to one measure is also optimal with respect to the other measure, and vice-versa. The performance measures ci (completion time) and Fi (flow time) are equivalent, while the measures Cmax and Fmax are not equivalent.

(xv). Weight (wi)

The weight wi of job ‘i’ is a priority factor, denoting the importance of

job ‘i’ relative to other jobs in the system. For example, weight may represent the

actual cost of keeping the job in the system. This cost may be a holding or

inventory cost.

(xvi). Preemptions (prmp)

Preemptions imply that it is not necessary to keep a job on a machine,

once started, until its completion. The scheduler is allowed to interrupt the

processing of a job (preempt) at any point in time and put a different job on the

machine instead. The amount of processing a preempted job already has received

is not lost. When a preempted job is afterwards put back on the machine, it only

needs the machine for its remaining processing time.

(xvii).Breakdowns (brkdwn)

Machine breakdowns imply that a machine may not be continuously available. The periods during which a machine is not available are assumed to be fixed (deterministic) in this research work; they may be due to shifts or scheduled maintenance, etc. If there are a number of identical machines in parallel, the number of machines available at any point in time is a function of time, i.e. m(t). Machine breakdowns for certain intervals of time are also referred to as machine non-availability constraints.

(xviii). Permutation (prmu)

A constraint that may appear in the flow shop environment is that the queues in front of each machine operate according to the First In First Out (FIFO)

discipline. This implies that the order or permutation in which the jobs go through

the first machine is maintained throughout the system.

(xix). Blocking (block)

Blocking is a phenomenon that may occur in a flow shop. If a flow shop has a limited buffer between two successive machines, then it may happen that when the buffer is full, the upstream machine is not allowed to release a completed job. Blocking implies that the completed job has to remain on the upstream machine, preventing (i.e. blocking) that machine from working on the next job.

(xx). No-Wait (nwt)

No-wait is another important phenomenon that may occur in a flow shop. Jobs are not allowed to wait between two successive machines. This implies

that the starting time of a job at the first machine has to be delayed to ensure that

the job can go through the flow shop without having to wait for any machine. An

example of such an operation is a steel rolling mill, in which a slab of steel is not

allowed to wait as it would cool off during a wait. It is clear that under no-wait the

machines also operate according to the FIFO discipline.

(xxi). Total Weighted Completion Time (Σ wj cj)

The sum of the weighted completion times of the n jobs gives an indication of the total holding or inventory costs incurred by the schedule. The sum of the completion times is in the literature often referred to as the flow time, and the total weighted completion time is then referred to as the weighted flow time.

(xxii).Transportation Time

In most manufacturing systems, finished and semi-finished jobs are to be transferred from one machine to another for processing. Most of the published literature explicitly or implicitly assumes either that there are an infinite number of transporters or that jobs are transported instantaneously from one machine to another, without any transportation time involved. However, there are many situations where the transportation times are quite significant and cannot simply be neglected. When the machines on which jobs are to be processed are located at different places, the jobs require time in the form of

Loading time of jobs

Moving time of jobs

Unloading time of jobs.

The sum of all the above times is designated as the transportation time of the jobs.

(xxiii). Setup Time

Many problems are encountered in the real world whenever some time is spent in bringing a given facility to a desired state for processing the job. The time spent is called the set-up time; its magnitude depends upon the job just completed and the job waiting to be processed, and hence it occupies a substantial percentage of the available production time on the manufacturing equipment. Setup times are defined as the work required to arrange the resources, process, or bench for tasks, which includes obtaining tools, positioning work-in-process material, cleaning up, adjusting and returning tools, and inspecting material in

a manufacturing system. Some examples of the above situation include the container manufacturing industry, where the machines are to be adjusted whenever the dimensions of the containers are changed; the paint industry, in which parts of different colors are produced on the same piece of equipment; the printing press, where printing in different colors is to be done on the same machine; and the paper bag industry, in which a set-up time is incurred whenever a machine is switched over from one type of bag to another.

(xxiv). Sequence Dependent Setup Time (SDST)

There are situations in which it is simply not acceptable to assume that the time required to set up a machine for the next job is independent of the job's immediate predecessor on the machine. In fact, the variation of setup time with sequence provides the dominant criterion for evaluating a schedule. These situations are often found in process industries and are frequently associated with the problem of lot sizing. For example, in the scheduling problem in the group technology environment, a long set-up time is required to initiate the processing of each family of parts, after which a short changeover time is required that depends upon the sequence of jobs preceding a particular family part (the job being processed). In such cases it is advantageous to consider these changeover times explicitly in the identification of an optimal schedule. It has been reported in the literature that sequence dependent setup time (SDST) is one of the most recurrent additional complications in scheduling problems. Only a very limited amount of research has been done on flow shop scheduling under the SDST environment for due-date related performance measures.

(xxv). Equivalent Job-Block Concept

There are various situations where some of the specified jobs are required to be processed together as a block in a sequence of jobs, either by virtue of technological constraints or under some externally imposed restrictions. This type of situation is known as group technology or an equivalent job block. It has very wide applications to a variety of production systems for the purpose of improving productivity.

(xxvi). Rental Policies

When machines are taken on rent, the following renting policies generally exist:

1. All the machines are taken on rent at the same time and are also returned at the same time.

2. All the machines are taken on rent at the same time and are returned as and

when they are no longer required for processing the jobs.

3. All the machines are taken on rent as and when they are required and are

returned as and when they are no longer required for processing the jobs.

1.10.Components of Production Cost

The various components of the production cost are as follows:

a) Operation Cost

The component of cost which represents the cost incurred in actual

production and which may be treated as the processing cost of the jobs on all

machines is defined as the operation cost.

b) Job Waiting Cost

This component of cost (also called the in-process inventory cost) reflects the opportunity cost due to the waiting of semi-finished jobs in the shop for processing on some machines. The capital tied up in a job that waits in the shop could have been utilized to produce an additional return on capital.

c) Machine Idle Cost

When a machine is idle, some opportunity is lost, because by utilizing this idle capacity of the machine some return on the machine could be obtained. Determination of the machine idle cost may be divided into two categories:

i) The idle time of the machines can be utilized to perform some other work which may be as profitable as the existing work.

ii) The idle time of the machines cannot be utilized to perform any other

useful work.

For case (i), the machine idle cost is the difference between the expected rate of return from the machine and the return that could be obtained by utilizing the idle capacity of the machine on a subordinate job. For case (ii), the idle cost of the machine is the expected rate of return.

d) Total opportunity cost

The total opportunity cost of a schedule is the sum of the components of

the opportunity cost i.e. operation cost, job waiting cost, machine idle cost and

penalty cost of jobs.

1.11.Decision-Making Goals

According to Baker (1974), there are three common types of decision-

making goals in sequencing / scheduling problems. These are

Efficient utilization of resources

Rapid response to demand

Close conformance to meet deadlines.

Efficient utilization of resources (machines) implies that activities are scheduled so as to maintain high utilization of labor, equipment and space. The common goals that can be achieved are

minimize Cmax or Ī, or maximize Ū or N̄p.

Rapid response to demands means that scheduling should allow jobs to be processed at a rapid rate, resulting in low levels of work-in-process inventory. The common goals that can be achieved are

minimize Σ Ci ; Σ Fi ; Σ Li ; Σ Σ Wij ; C̄ ; F̄ ; L̄ ; N̄w ; or W̄ (the sums taken over the jobs i = 1, 2, …, n and, where applicable, the machines j = 1, 2, …, m).

Close conformance to deadlines indicates that scheduling should ensure that due dates are met every time, through shorter lead times. The common goals that can be achieved are

minimize Lmax ; Tmax ; Σ Ti ; NT ; T̄ ; or W̄T (the sum taken over the jobs i = 1, 2, …, n).

1.12.Dispatching Rules

Scheduling/Sequencing are forms of decision making which play an

important role in manufacturing as well as in service industries. In the current

competitive scenario effective scheduling has become a necessity for survival in

the market. The sequencing and scheduling problems have been solved by using

dispatching rules, also called Scheduling rules, decision rules or priority rules.

These dispatching rules are used to determine the priority of every job. When the priority of each job has been determined, the jobs are sorted and the job with the highest priority is selected and processed first. According to Baker (1974), the dispatching rules are classified as follows:

Local Rules

These rules are concerned with the local available information.

Global Rules

Global rules are used to dispatch jobs using all information available on the shop

floor.

Static Rules

These rules do not change over time and ignore the status of job shop floor.


Dynamic Rules

These rules are time dependent and change according to the status of job shop

floor.

Forecast Rules

Forecast rules are used to give priority to jobs according to what the job is going

to come across in the future and according to the situation at the local machine.

A number of dispatching rules have been reported by many researchers. The

following are some of the important dispatching rules:

1. Shortest Processing Time (SPT)

In shortest processing time (SPT) or Shortest expected processing time

(SEPT), the job with the smallest processing time is processed first. The other

versions of the shortest processing time are:

Total Shortest Remaining Processing Time (SRPT)

The job with shortest remaining processing time is processed first.

Weighted Shortest Processing Time (WSPT)

The ratio is computed by dividing the processing time of the job by its weight.

The job with smallest ratio is processed first.

2. Longest Processing Time (LPT)

In longest processing time (LPT) or Longest expected processing time

(LEPT), the job with the largest processing time is processed first. The other

versions of longest processing time are

Total Longest Processing Time (TLPT)

Total Longest Remaining Processing Time (TLRPT)

3. Earliest Due Date (EDD)

In the earliest due date rule, the job with the smallest due date is processed first.

The other versions of earliest due date are:

Operation Due Date (ODD)

In this dispatching rule, the operation with the smallest due date is processed first.

Modified Due Date (MDD)

In modified due date, from the set of jobs waiting for a specific machine, jobs are

assigned a new due date, and EDD is performed on this set.

Modified Operation Due Date (MODD)

In modified operation due date, from the set of operations waiting for a specific

machine, operations are assigned a new due date, and ODD is performed on this

set.

4. Job Slack Time (JST)

In this dispatching rule, the job with minimum slack is processed first.

The job slack time is computed as the job's due date minus the current time and minus the work remaining.

5. Critical Ratio (CR)

In the critical ratio, the job with the smallest ratio is processed first. The

critical ratio is determined by dividing the job's allowance by the remaining work time.

6. First Come, First Served (FCFS)

In First come, first served or smallest ready time (SORT), the jobs which

arrive first at the machine will be served first.


7. Last Come, First Served (LCFS)

In last come, first served, the job which arrives last will be served first.

8. First Off, First On (FOFO)

In this dispatching rule, the job whose operation could be completed earliest will be processed first, even if this operation is not yet in the queue. In this

case, the machine will remain idle until the job arrives.
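Since a dispatching rule simply assigns a priority to each waiting job, several of the rules above reduce to sorting the queue by a suitable key. A minimal sketch with hypothetical job data, expressing SPT, WSPT, EDD and a minimum-slack rule in this way:

```python
# Each hypothetical job: processing time p, weight w, due date d, and work remaining wr.
jobs = {
    "A": {"p": 4, "w": 2, "d": 10, "wr": 4},
    "B": {"p": 2, "w": 1, "d": 6,  "wr": 2},
    "C": {"p": 6, "w": 3, "d": 15, "wr": 9},
}
t = 0  # current time on the shop floor

rules = {
    "SPT":   lambda j: jobs[j]["p"],                      # smallest processing time first
    "WSPT":  lambda j: jobs[j]["p"] / jobs[j]["w"],       # smallest p/w ratio first
    "EDD":   lambda j: jobs[j]["d"],                      # earliest due date first
    "SLACK": lambda j: jobs[j]["d"] - jobs[j]["wr"] - t,  # minimum slack first
}

for name, key in rules.items():
    print(name, sorted(jobs, key=key))                    # processing order under each rule
```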

1.13.Methods of Solution

The theory of scheduling includes a variety of techniques that are useful

in solving scheduling problems. In the present scenario, the scheduling field has

become a focal point for the development, application, and evaluation of

combinatorial procedures, simulation techniques, network methods and heuristic

solution approaches. The selection of an appropriate technique depends on the

complexity of the problem, the nature of model and choice of a criterion as well as

other factors. Several methods have been developed to solve scheduling models.

These methods of solution can be classified as follows:

1.13.1. Johnson’s Rule

The general two machine flow shop problem with the objective of

minimizing makespan is known as Johnson’s Rule or Johnson’s Problem. The

results originally obtained by Johnson (1954) are now standard fundamentals in

the theory of scheduling. In the formulation of this problem, job i is characterized

by processing time ai,1 required on machine 1, and ai,2 required on machine 2 after

the operation on machine 1 is completed. An optimal sequence can be

characterized by the following rule for ordering pairs of jobs: job i precedes job j in an optimal sequence if

min {ai,1, aj,2} ≤ min {aj,1, ai,2}.

In practice, an optimal sequence is constructed directly with an adaptation of this result. The positions in the sequence are filled by a one-pass mechanism that identifies, at each stage, a job that should fill either the first or the last available position.
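A direct implementation of this construction for the two-machine case is sketched below with hypothetical data: jobs with ai,1 ≤ ai,2 are placed at the front in non-decreasing order of ai,1, and the remaining jobs are placed at the back in non-increasing order of ai,2, which is equivalent to the one-pass mechanism described above.

```python
def johnson_two_machine(a):
    """Johnson's rule for the n/2/F/Fmax problem.
    a[job] = (a1, a2): processing times on machine 1 and machine 2."""
    front = sorted((j for j in a if a[j][0] <= a[j][1]), key=lambda j: a[j][0])
    back  = sorted((j for j in a if a[j][0] >  a[j][1]), key=lambda j: a[j][1], reverse=True)
    return front + back

# Hypothetical 5-job instance.
a = {1: (3, 6), 2: (5, 2), 3: (1, 2), 4: (6, 6), 5: (7, 5)}
print(johnson_two_machine(a))  # gives the sequence [3, 1, 4, 5, 2]
```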

1.13.2. Extensions of Johnson’s Rule

For the makespan criterion with n jobs and three machines, it is sufficient to consider only permutation schedules in the search for an optimum solution. In his original presentation Johnson (1954) showed that a generalization is possible when the second machine is dominated.

Now, if at least one of the following conditions is satisfied,

min ai,1 ≥ max ai,2   or   min ai,3 ≥ max ai,2   (the minima and maxima taken over all jobs i),

then Johnson's algorithm can be extended in the following way: job i precedes job j in an optimal schedule if

min {ai,1 + ai,2, aj,2 + aj,3} ≤ min {aj,1 + aj,2, ai,2 + ai,3}.
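Under either dominance condition the three-machine problem therefore reduces to a two-machine one with pseudo processing times ai,1 + ai,2 and ai,2 + ai,3, after which the two-machine rule applies. A minimal sketch (hypothetical data) reusing the johnson_two_machine function from the previous sketch:

```python
def johnson_three_machine(a):
    """Extended Johnson's rule for n/3/F/Fmax when machine 2 is dominated.
    a[job] = (a1, a2, a3)."""
    if not (min(x[0] for x in a.values()) >= max(x[1] for x in a.values()) or
            min(x[2] for x in a.values()) >= max(x[1] for x in a.values())):
        raise ValueError("neither dominance condition holds; the rule is not guaranteed optimal")
    # Form the two pseudo machines and apply the two-machine rule.
    pseudo = {j: (a1 + a2, a2 + a3) for j, (a1, a2, a3) in a.items()}
    return johnson_two_machine(pseudo)

# Hypothetical instance in which min ai,1 >= max ai,2.
a = {1: (8, 3, 4), 2: (6, 2, 9), 3: (7, 1, 5), 4: (9, 3, 2)}
print(johnson_three_machine(a))
```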

1.13.3. Branch and Bound Technique

The basic branch and bound procedure was developed by Ignall and Schrage (1965) and independently by Lomnicki (1965). It is the most widely used technique in scheduling. It is an enumeration technique and a useful method for solving combinatorial problems. As the name itself implies, this optimization technique consists of two fundamental steps:

a) Branching

It is the process of partitioning a large problem into two or more sub-problems. The branching procedure replaces an original problem by a set of new problems that are:

Mutually exclusive and exhaustive sub-problems of the original problem.

Partially solved versions of the original problem.

Smaller problems than the original problem.

Further, the sub-problems can themselves be partitioned in a similar fashion.

b) Bounding

It is the process of calculating a lower bound on the optimal solution of a given sub-problem. At any point in time, we compare the lower bounds of all the terminal nodes and select the node with the minimum lower bound for further branching. If there is a tie on the minimum lower bound, then the node at the lower level is selected for further branching. If the node with the minimum lower bound lies at the (n-1)th level, then optimality is reached. A complete optimal sequence can be obtained by filling the missing number into the remaining position of the partial sequence leading to the node at the (n-1)th level. This strategy of branching and bounding, which always expands the node with the smallest lower bound, is a best-first search.
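A minimal sketch of branching and bounding for the permutation flow shop makespan problem is given below. It is illustrative only: each node is a partial sequence, the lower bound used is a simple machine-based bound (not the specific bounds of Ignall and Schrage or Lomnicki), and the node with the smallest lower bound is always branched next.

```python
import heapq

def branch_and_bound_flow_shop(a):
    """Best-first branch and bound for the permutation flow shop makespan (illustrative sketch).
    a[job] = list of processing times on machines 0..m-1."""
    jobs, m = list(a), len(next(iter(a.values())))

    def completions(seq):
        c = [0] * m
        for i in seq:
            for j in range(m):
                c[j] = max(c[j], c[j - 1] if j else 0) + a[i][j]
        return c

    def lower_bound(seq):
        c, rest = completions(seq), [i for i in jobs if i not in seq]
        # On each machine, the remaining work must still be done after the partial sequence.
        return max(c[j] + sum(a[i][j] for i in rest) for j in range(m))

    best_value, best_seq = float("inf"), None
    heap = [(lower_bound(()), ())]              # nodes: (lower bound, partial sequence)
    while heap:
        lb, seq = heapq.heappop(heap)
        if lb >= best_value:
            continue                            # node cannot improve the incumbent
        if len(seq) == len(jobs):
            best_value, best_seq = lb, seq      # complete sequence: bound equals its makespan
            continue
        for i in jobs:
            if i not in seq:
                child = seq + (i,)
                child_lb = lower_bound(child)
                if child_lb < best_value:
                    heapq.heappush(heap, (child_lb, child))
    return best_seq, best_value

# Hypothetical 4-job, 3-machine instance.
a = {1: [3, 4, 2], 2: [2, 5, 1], 3: [4, 1, 3], 4: [1, 2, 4]}
print(branch_and_bound_flow_shop(a))
```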

1.13.4. Heuristic Approaches

The branch and bound approach discussed above has two major disadvantages, typical of implicit enumeration methods. First, the computational requirements will be severe for large problems. Second, even for relatively small problems, there is no guarantee that the solution can be obtained quickly, as the extent of the partial enumeration depends on the data of the problem. Heuristic algorithms overcome these two drawbacks, i.e. they can obtain solutions to large problems with limited computational effort, and their computational requirements are predictable for problems of a given size. However, heuristic approaches do not guarantee optimality, and in some instances it may even be difficult to judge their effectiveness. The three heuristics, namely the Palmer heuristic, the Nawaz, Enscore, and Ham (NEH) algorithm and the Campbell, Dudek, and Smith (CDS) algorithm, are representative of quick, suboptimal solution techniques for the makespan problem.
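Of these, the NEH algorithm is easy to sketch: jobs are ranked by non-increasing total processing time and inserted one at a time into the position of the partial sequence that gives the smallest partial makespan. The sketch below uses hypothetical data and reuses the flow_shop_makespan function from the earlier flow shop sketch.

```python
def neh(a):
    """NEH constructive heuristic for the permutation flow shop makespan problem."""
    # 1. Order jobs by non-increasing total processing time.
    order = sorted(a, key=lambda j: sum(a[j]), reverse=True)
    seq = [order[0]]
    # 2. Insert each remaining job in the position that minimizes the partial makespan.
    for job in order[1:]:
        candidates = [seq[:k] + [job] + seq[k:] for k in range(len(seq) + 1)]
        seq = min(candidates, key=lambda s: flow_shop_makespan(s, a))
    return seq

# Hypothetical 4-job, 3-machine instance (same data format as before).
a = {1: [3, 4, 2], 2: [2, 5, 1], 3: [4, 1, 3], 4: [1, 2, 4]}
best = neh(a)
print(best, flow_shop_makespan(best, a))
```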


1.13.5. Metaheuristic

A metaheuristic is a higher-level procedure or heuristic designed to find, generate or select a lower-level procedure or heuristic that may provide a sufficiently good solution to an optimization problem. Metaheuristics are designed to tackle complex optimization problems where other optimization methods have failed to be either effective or efficient. These methods have come to be recognized as one of the most practical approaches for solving many complex problems that are combinatorial in nature. The practical advantage of metaheuristics lies in both their effectiveness and general applicability. Numerous metaheuristics, such as Tabu Search, Simulated Annealing, and Genetic Algorithms, have been developed for solving various combinatorial problems.

1.14.Computational Complexity

Practical experience shows that some computational problems are easier

to solve than others. Complexity theory provides a mathematical framework in which computational problems are studied so that they can be classified as “easy” or “hard”.

Classes P and NP

A computational problem can be viewed as a function h that maps each input x in

some given domain to an output h(x) in some given range. An algorithm for the problem computes h(x) for each input x. For a precise discussion, a Turing machine is

commonly used as a mathematical model of an algorithm.

One of the main issues of complexity theory is to measure the performance of

algorithms with respect to computational time. To be more precise, for each input

x, we define the input length |x| as the length of some encoding of x. We measure

the efficiency of an algorithm by an upper bound T(n) on the number of steps that

the algorithm takes on any input x with |x| = n. In most cases it will be difficult to

calculate the precise form of T. For these reasons we will replace the precise form of T by its asymptotic order. Therefore, we say that T(n) ∈ O(g(n)) if there exist a constant c > 0 and a nonnegative integer n0 such that T(n) ≤ c·g(n) for all integers n ≥ n0. Thus, rather than saying that the computational complexity is bounded by, say, 3n³ + 27n² + 27n + 4, we simply say it is O(n³).
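To see why the asymptotic order matters in scheduling, the short sketch below (illustrative only) compares a cubic bound n³ with the number of schedules (n!)ᵐ that complete enumeration would have to examine in an n-job, m-machine flow shop.

```python
from math import factorial

# Schedules examined by complete enumeration of an n-job, m-machine flow shop
# versus a polynomial (cubic) operation count.
m = 3
for n in (5, 10, 15):
    print(n, n ** 3, factorial(n) ** m)
```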

In general, there are two types of problems: those that can be solved by a polynomial time algorithm, and those for which no polynomial time algorithm is known. The class P consists of all decision problems for which algorithms with polynomial time behavior have been found. The class NP consists of the decision problems for which a proposed solution can be verified in polynomial time. Clearly, the class P is contained in the class NP, since a problem that can be solved in polynomial time can certainly have its solutions verified in polynomial time.

NP-hard (non-deterministic polynomial time hard) is the class of problems which are at least as hard as the hardest problems in NP; an NP-hard problem need not itself be an element of NP.

NP-complete is the class consisting of the hardest problems in NP; each element of NP-complete has to be an element of NP.

An NP-complete problem has the property that it can be solved in polynomial time if and only if all other NP problems can be solved in polynomial time. If an NP-hard problem can be solved in polynomial time, then all NP-complete problems can be solved in polynomial time. Therefore all NP-complete problems are NP-hard, but some NP-hard problems are not known to be NP-complete.

1.15. Role of Fuzziness in Scheduling

Scheduling is an enduring process in which the arrival of real-time information frequently forces the review and modification of pre-established schedules. The real world is complex, and complexity in the world generally arises from uncertainty. From this perspective, the concept of a fuzzy environment is introduced into the field of scheduling. The past few years have witnessed a rapid

growth in the number and variety of applications of fuzzy logic. In most applications, fuzzy logic is a translation of a human solution which can model non-linear functions of arbitrary complexity to a desired degree of accuracy. Zadeh (1965) introduced the term fuzzy logic in his seminal work “Fuzzy Sets”, which describes the mathematics of fuzzy set theory. The permissiveness of fuzziness in the human thought process suggests that much of the logic behind thought processing is not traditional two-valued logic or even multivalued logic, but logic with fuzzy truths, fuzzy connectives and fuzzy rules of inference.

The flow shop problem concerns the sequencing of a given number of jobs through a series of machines, in exactly the same order on all machines, with the aim of satisfying a set of constraints as far as possible and optimizing a set of objectives. The commonly studied objectives include makespan, mean flow time, tardiness, etc. Among these objectives, the makespan, defined as the time when the last job completes on the last machine, is the most frequently studied one. A large number of deterministic scheduling algorithms have been proposed in recent decades to deal with flow shop scheduling problems with various objectives and constraints.

However, it is often difficult to apply those algorithms to real-life flow shop

problems. For example, in practice the processing times of jobs could be uncertain

due to incomplete knowledge or uncertain environment which implies that there

exist various external sources and types of uncertainty. Fuzzy sets and logic can

be used to tackle uncertainties inherent in actual flow shop scheduling problems.
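Since the makespan objective recurs throughout this work, a minimal Python sketch of its computation for a permutation flow shop is included here as an added illustration; the processing-time matrix and the job sequence are arbitrary example data, not taken from this thesis.

# Makespan of a permutation flow shop: completion time of the last job on the
# last machine, using the standard recursion
#   C[j][m] = max(C[j][m-1], C[j-1][m]) + p[job_j][m]
def makespan(p, sequence):
    # p[j][m] = processing time of job j on machine m
    n_machines = len(p[0])
    completion = [0.0] * n_machines          # completion times on each machine
    for job in sequence:
        for m in range(n_machines):
            ready = completion[m - 1] if m > 0 else 0.0
            completion[m] = max(completion[m], ready) + p[job][m]
    return completion[-1]

p = [[5, 7], [3, 8], [6, 2]]                 # 3 jobs, 2 machines (example data)
print(makespan(p, [1, 0, 2]))                # makespan of the sequence J2, J1, J3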

In our work, we have discussed a fuzzy flow shop sequencing model,

which is based on the assumption that processing times are not known exactly and

only estimated values are given. This leads to the use of fuzzy numbers to

represent these imprecise data values. We use triangular fuzzy numbers (TFN) for

the fuzzification to obtain a fuzzy flow shop sequencing problem. The main interest of our study is to introduce the concept of fuzziness into flow shop scheduling: the fuzzy model is an extension of the crisp problem, and the schedule obtained from the fuzzy model has the same job sequence as that of the crisp problem. The proposed approach is useful in solving real-life practical problems.
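As an added computational illustration only (the TFN arithmetic and the centroid-based comparison used below are common conventions assumed for the sketch, not a description of the algorithms developed in this thesis), the following Python code represents processing times as triangular fuzzy numbers and evaluates a fuzzy makespan for a two-machine flow shop sequence.

from dataclasses import dataclass

@dataclass
class TFN:
    a: float  # smallest likely value
    b: float  # most likely value
    c: float  # largest likely value

    def __add__(self, other):
        # component-wise addition of triangular fuzzy numbers
        return TFN(self.a + other.a, self.b + other.b, self.c + other.c)

    def rank(self):
        # centroid used as a crisp ranking / defuzzification value
        return (self.a + self.b + self.c) / 3.0

def fuzzy_max(x, y):
    # approximate maximum: keep the TFN with the larger centroid
    return x if x.rank() >= y.rank() else y

def fuzzy_makespan(p, sequence):
    # p[j][m] is the TFN processing time of job j on machine m
    zero = TFN(0.0, 0.0, 0.0)
    completion = [zero] * len(p[0])
    for job in sequence:
        for m in range(len(p[0])):
            ready = completion[m - 1] if m > 0 else zero
            completion[m] = fuzzy_max(completion[m], ready) + p[job][m]
    return completion[-1]

# Example data (assumed): 3 jobs on 2 machines with TFN processing times.
p = [[TFN(4, 5, 6), TFN(6, 7, 9)],
     [TFN(2, 3, 5), TFN(7, 8, 9)],
     [TFN(5, 6, 7), TFN(1, 2, 4)]]
print(fuzzy_makespan(p, [1, 0, 2]))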

1.16. Literature Survey

Scheduling is one of the most widely researched areas of operational research, owing to the rich variety of problem types within the field. Classical flow shop scheduling problems are mainly concerned with completion-time-related objectives.


A better scheduling system has a significant impact on cost reduction, productivity, customer satisfaction and overall competitive advantage. In addition, recent customer demand for high-variety products has increased product complexity, which further emphasizes the need for improved scheduling. Proficient scheduling increases capacity utilization efficiency, thereby reducing the time required to complete jobs and consequently increasing the profitability of an organization in the present competitive environment. There are different systems of production scheduling, including the flow shop, in which jobs are processed through a series of machines so as to optimize a number of required performance measures. In modern manufacturing and operations management, on-time delivery at the minimum possible cost is a significant factor because of the upward pressure of competition in the markets. Industry has to offer a great variety of different and individual products, while customers expect ordered goods to be delivered on time. Hence, there is a requirement for a multi-objective scheduling system through which all of these objectives can be achieved simultaneously.

It is generally believed that the seminal paper by Johnson (1954) provided the starting point for scheduling being regarded as an independent area within operational research. Johnson considered the production model that is now called the flow shop. Smith (1956), whose work is among the earliest, considered minimization of mean flow time and maximum tardiness. Ignall and Schrage (1965) developed a branch and bound technique for minimizing the total flow time of jobs in the n x 2 flow shop. Lomnicki (1965) solved the general n x 3 problem, giving a branch and bound technique for makespan minimization. The general n x m problem was addressed by Smith and Dudek (1967). Maggu and Das (1977) introduced the concept of equivalent jobs for a job block in job sequencing. Yoshida and Hitomi (1979) further considered the problem with setup times. Van Wassenhove and Gelders (1980) considered minimization of maximum tardiness and mean flow time explicitly as objectives. Singh (1985) studied the 2 x n flow shop problem involving job blocks, transportation time, arbitrary time and breakdown time. Rajendran (1992) proposed a technique based on the branch and bound method which, under certain conditions,


obtains a sequence that minimizes total flow time subject to minimum makespan in the two-stage flow shop problem. McCahon and Lee (1990, 1992) studied job sequencing with fuzzy processing times. Ishibuchi and Lee (1996) introduced the formulation of the fuzzy flow shop scheduling problem with fuzzy processing times. Martin and Roberto (2001) discussed fuzzy scheduling with application to real-time systems. Heydari (2003) discussed flow shop scheduling problems with processing of n jobs in a string of disjoint job blocks under fixed and arbitrary job orders. Singh and Gupta (2005) associated probabilities with processing times and independent setup times. Narain (2006) studied a bicriteria problem to obtain a sequence which gives the minimum possible rental cost while minimizing the total elapsed time under a pre-defined rental policy. Research has also been directed towards the development of heuristic and near-exact procedures. Sanuja and Xueyan (2006) studied a new approach to the two-machine flow shop problem with uncertain processing times. Anghinofi and Paolucci (2007) studied parallel machine scheduling involving total tardiness. Shin and Kim (2007) discussed scheduling on parallel identical machines to minimize total tardiness. Singh et al. (2009) introduced the fuzzy flow shop problem on two machines with a single transport facility using a heuristic approach. Some of the noteworthy heuristic approaches are due to Bellman (1956), Jackson (1956), Bagga (1969), Campbell et al. (1970), Szwarc (1977), Ghanshiam (1978), Maggu and Das (1980), Nawaz et al. (1983), Gupta et al. (1986, 2003), Harbanslal (1989), Panwalkar (1991), Rajendran (1992), Bagga and Bhambani (1996), Sen and Deelipan (1999), Anup and Maggu (2002), Singh et al. (2005, 2006), Gupta et al. (2006, 2007, 2011), Chandramouli (2005), Pandian and Rajendran (2010) and Khodadadi (2011).

1.17. Objectives of Study

A scheduling model concerns the determination of an optimal sequence in which to serve customers or perform a set of jobs, so as to minimize the total elapsed time or some other suitable measure of performance. Scheduling problems arise in production concerns where items are produced in distinct but successive stages. At each stage there is a machine to


perform the required set of jobs. The main objectives of the present study are as

follows:

1) To develop an algorithm minimizing the rental cost of machines taken on rent under a restrictive rental policy in two-stage flow shop scheduling, where the processing times of jobs on the machines are associated with probabilities and break-down intervals and a job-block criterion are taken into account.

2) To develop a new heuristic algorithm minimizing the total elapsed time for the n-job, 3-machine flow shop scheduling problem involving processing times, transportation times and a breakdown interval, when mean weighted flow time is also taken into consideration.

3) To study bi-criteria two-stage, three-stage and multi-stage flow shop scheduling so as to minimize the rental cost of machines taken on rent under a specified rental policy together with minimum makespan.

4) To study bi-criteria flow shop scheduling with sequence-dependent setup times.

5) To extend flow shop scheduling models to jobs arranged in a string of disjoint job blocks.

6) To study flow shop scheduling on two machines with setup times and a single transport facility in a fuzzy environment.

7) To develop a fuzzy flow shop model with a single transport facility and a job-block criterion.

8) To introduce the concept of fuzziness into parallel machine scheduling to optimize the bi-criteria pairs: number of tardy jobs and maximum tardiness; total tardiness and weighted flow time.

1.18. Model Validation

Model validation is another important step in a mathematical or simulation study, one that is often glossed over by modellers. Prior to embarking on the development of a mathematical or simulation model, it is necessary for the analyst to become very familiar with the system being studied and to involve the managers and


operating personnel of the system, so as to agree on the level of detail required to achieve the goals of the study.

Validation is closely associated with verification and credibility. Verification has to do with program debugging, to make sure that the computer program does what is intended. Validation deals with how accurate a representation of reality the model provides, and credibility deals with how believable the model is to its users. To establish validity and credibility, users must be involved in the study early and often. The goals of the study, the appropriate system, the performance measures and the level of detail must be agreed upon and kept as simple as possible. The model can then be run under a variety of conditions and the results examined by the users for plausibility, thus providing some validity and credibility checks.