
Master of Business Administration

Semester II

MB0044 – Production & Operations Management

Assignment

Set- 1

Q1. Explain in brief the origins of Just In Time. Explain the different types of wastes that can be eliminated using JIT.

Ans.

Just-in-time (JIT)

Just-in-time (JIT) is easy to grasp conceptually: everything happens just-in-time. For example, consider my journey to work this morning. I could have left my house just-in-time to catch a bus to the train station, just-in-time to catch the train, just-in-time to arrive at my office, just-in-time to pick up my lecture notes, and just-in-time to walk into this lecture theatre to start the lecture. Conceptually there is no problem with this; achieving it in practice, however, is likely to be difficult!

So too, in a manufacturing operation, component parts could conceptually arrive just-in-time to be picked up by a worker and used. We would, at a stroke, eliminate any inventory of parts; they would simply arrive just-in-time! Similarly, we could produce finished goods just-in-time to be handed to a customer who wants them. So, at a conceptual extreme, JIT has no need for inventory or stock, either of raw materials, work in progress, or finished goods.

Obviously any sensible person will appreciate that achieving the conceptual extreme outlined above might well be difficult, impossible, or extremely expensive in real life. However, that extreme does illustrate that we could perhaps move an existing system towards one with more of a JIT element than it currently contains. For example, consider a manufacturing process: whilst we might not be able to achieve JIT in terms of handing finished goods to customers, and so would still need some inventory of finished goods, it might be possible to arrange raw material deliveries so that, for example, the materials needed for one day's production arrive at the start of the day and are consumed during the day - effectively reducing or eliminating raw material inventory.

JIT originated in Japan. Its introduction as a recognised technique/philosophy/way of working is generally associated with the Toyota motor company, JIT being initially known as the "Toyota Production System". Note the emphasis here: JIT is very much a mindset, a way of looking at a production system, that is distinctly different from what had traditionally been done before its conception.

Within Toyota, Taiichi Ohno is most commonly credited as the father/originator of this way of working. The beginnings of this production system are rooted in the historical situation that Toyota faced. After the Second World War the president of Toyota said "Catch up with America in three years, otherwise the automobile industry of Japan will not survive". At that time one American car worker produced approximately nine times as much as a Japanese car worker. Taiichi Ohno examined the American industry and found that American manufacturers made great use of economic order quantities - the traditional idea that it is best to make a "lot" or "batch" of an item (such as a particular model of car or a particular component) before switching to a new item. They also made use of economic order quantities in terms of ordering and stocking the many parts needed to assemble a car.
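For reference, the classical economic order quantity balances ordering cost against holding cost: EOQ = sqrt(2DS/H), where D is the annual demand, S the cost per order and H the annual holding cost per unit. A minimal Python sketch, with all figures invented for illustration:

import math

def eoq(annual_demand: float, order_cost: float, holding_cost: float) -> float:
    # Classical economic order quantity: sqrt(2DS/H).
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

# Illustrative numbers: 12,000 units/year, $40 per order, $2.50 per unit per year to hold.
print(round(eoq(12000, 40.0, 2.5)))  # about 620 units per batch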

Ohno felt that such methods would not work in Japan - total domestic demand was low and the domestic marketplace demanded production of small quantities of many different models. Accordingly Ohno devised a new system of production based on the elimination of waste. In his system waste was eliminated by:

just-in-time - items only move through the production system as and when they are needed

autonomation - (spelt correctly, in case you have never met the word before) - automating the production system so as to include inspection; human attention is only needed when a defect is automatically detected, whereupon the system stops and does not proceed until the problem has been solved

In this system inventory (stock) is regarded as an unnecessary waste as too is having to deal with defects.

Ohno regarded waste as a general term including time and resources as well as materials. He identified a number of sources of waste that he felt should be eliminated:

overproduction - waste from producing more than is needed

time spent waiting - waste such as that associated with a worker being idle whilst waiting for another worker to pass him an item he needs (e.g. as may occur in a sequential line production process)

transportation/movement - waste such as that associated with transporting/moving items around a factory

processing time - waste such as that associated with spending more time than is necessary processing an item on a machine

inventory - waste associated with keeping stocks

defects - waste associated with defective items

At the time, car prices in the USA were typically set using selling price = cost plus profit mark-up. However, in Japan low demand meant that manufacturers faced price resistance. So if the selling price is fixed, how can one increase the profit mark-up? Obviously by reducing costs, and hence a large focus of the system that Toyota implemented was cost reduction.

To aid in cost reduction Toyota instituted production levelling - eliminating unevenness in the flow of items. So if a component which required assembly had an associated requirement of 100 during a 25-day working month, then four were assembled per day, one every two hours in an eight-hour working day. Levelling was also applied to the flow of finished goods out of the factory and to the flow of raw materials into the factory.
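The arithmetic of levelling is simple enough to sketch in Python; the figures below reproduce the example in the text:

def level_schedule(monthly_requirement: int, working_days: int, shift_hours: float):
    # Spread a monthly requirement evenly over the working days.
    per_day = monthly_requirement / working_days
    hours_between_units = shift_hours / per_day
    return per_day, hours_between_units

per_day, interval = level_schedule(100, 25, 8)
print(per_day, interval)  # 4.0 units per day, one unit every 2.0 hours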

Toyota changed their factory layout. Previously all machines of the same type, e.g. presses, were together in the same area of the factory. This meant that items had to be transported back and forth as they needed processing on different machines. To eliminate this transportation different machines were clustered together so items could move smoothly from one machine to another as they were processed. This meant that workers had to become skilled on more than one machine - previously workers were skilled at operating just one type of machine. Although this initially met resistance from the workforce it was eventually overcome.

Whilst we may think today that Japan has harmonious industrial relations, with management and workers working together for the common good, the fact is that, in the past, this has not been true. In the immediate post-Second World War period, for example, Japan had one of the worst strike records in the world. Toyota had a strike in 1950, for example. In 1953 the car maker Nissan suffered a four-month strike, involving a lockout and barbed wire barricades to prevent workers returning to work. That dispute ended with the formation of a company-backed union, formed initially by members of the Nissan accounting department. Striking workers who joined this new union received payment for the time spent on strike - a powerful financial incentive to leave their old union during such a long dispute. The slogan of this new union was "Those who truly love their union love their company".

In order to help the workforce to adapt to what was a very different production environment Ohno introduced the analogy of teamwork in a baton relay race. As you are probably aware typically in such races four runners pass a baton between themselves and the winning team is the one that crosses the finishing line first carrying the baton and having made valid baton exchanges between runners. Within the newly rearranged factory floor workers were encouraged to think of themselves as members of a team - passing the baton (processed items) between themselves with the goal of reaching the finishing line appropriately. If one worker flagged (e.g. had an off day) then the other workers could help him, perhaps setting up a machine for him so that the team output was unaffected.

In order to have a method of controlling production (the flow of items) in this new environment, Toyota introduced the kanban. The kanban is essentially information as to what has to be done. Within Toyota the most common form of kanban was a rectangular piece of paper within a transparent vinyl envelope. The information listed on the paper basically tells a worker what to do - which items to collect or which items to produce. In Toyota two types of kanban are distinguished for controlling the flow of items:

a withdrawal kanban - which details the items which should be withdrawn from the preceding step in the process

a production ordering kanban - which details the items to be produced

All movement throughout the factory is controlled by these kanbans. In addition, since the kanbans specify item quantities precisely, no defects can be tolerated - e.g. if a defective component is found when processing a production ordering kanban, then obviously the quantity specified on the kanban cannot be produced. Hence the importance of autonomation (as referred to above): the system must detect and highlight defective items so that the problem that caused the defect to occur can be resolved.
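The two-card control logic can be illustrated with a toy simulation. The Python sketch below is an illustration of the idea only, not Toyota's actual procedure; the buffer size and quantities are invented:

from collections import deque

store = deque(["part"] * 5)   # buffer between an upstream and a downstream station
production_orders = deque()   # freed production-ordering kanbans

def withdraw(quantity: int) -> int:
    # Downstream withdrawal kanban: take parts and release production orders.
    taken = 0
    for _ in range(quantity):
        if not store:
            break                                   # nothing moves without stock and a kanban
        store.popleft()
        production_orders.append("make 1 part")     # authorise replenishment upstream
        taken += 1
    return taken

def produce() -> None:
    # Upstream station works only against outstanding production-ordering kanbans.
    while production_orders:
        production_orders.popleft()
        store.append("part")

print(withdraw(3), len(store))  # 3 parts withdrawn, 2 left in the store
produce()
print(len(store))               # back to 5: production exactly matched withdrawal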

Q2. What is Value Engineering or Value Analysis? Elucidate five companies which have incorporated VE with brief explanation.

Ans.

Value engineering

Value engineering (VE) is a systematic method to improve the "value" of goods or products and services by using an examination of function. Value, as defined, is the ratio of function to cost. Value can therefore be increased by either improving the function or reducing the cost. It is a primary tenet of value engineering that basic functions be preserved and not be reduced as a consequence of pursuing value improvements.
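The ratio can be made concrete with a small Python sketch. The alternatives, function scores and costs below are invented for illustration:

alternatives = {
    "current design": {"function_score": 80, "cost": 40.0},
    "alternative A":  {"function_score": 80, "cost": 32.0},  # same function, lower cost
    "alternative B":  {"function_score": 95, "cost": 40.0},  # better function, same cost
}

for name, a in alternatives.items():
    value = a["function_score"] / a["cost"]   # value = function / cost
    print(f"{name}: value index = {value:.2f}")

Both alternatives improve value over the current design, one by reducing cost and the other by improving function, which is exactly the choice the ratio is meant to expose.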

In the United States, value engineering is specifically spelled out in Public Law 104-106, which states “Each executive agency shall establish and maintain cost-effective value engineering procedures and processes."

Value engineering is sometimes taught within the project management or industrial engineering body of knowledge as a technique in which the value of a system’s outputs is optimized by crafting a mix of performance (function) and costs. In most cases this practice identifies and removes unnecessary expenditures, thereby increasing the value for the manufacturer and/or their customers.

VE follows a structured thought process that is based exclusively on "function", i.e. what something "does", not what it is. For example, a screwdriver that is being used to stir a can of paint has a "function" of mixing the contents of a paint can, and not the original connotation of securing a screw into a screw-hole. In value engineering, "functions" are always described in a two-word abridgment consisting of an active verb and a measurable noun (what is being done - the verb - and what it is being done to - the noun), and in the most non-prescriptive way possible. In the screwdriver and can of paint example, the most basic function would be "blend liquid", which is less prescriptive than "stir paint", which can be seen to limit the action (by stirring) and to limit the application (only considers paint). This is the basis of what value engineering refers to as "function analysis".

Value engineering uses rational logic (a unique "how" - "why" questioning technique) and the analysis of function to identify relationships that increase value. It is considered a quantitative method similar to the scientific method, which focuses on hypothesis-conclusion approaches to test relationships, and operations research, which uses model building to identify predictive relationships.

Value engineering is also referred to as "value management" or "value methodology" (VM), and "value analysis" (VA). VE is above all a structured problem-solving process based on function analysis - understanding something with such clarity that it can be described in two words, the active verb and measurable noun abridgement. For example, the function of a pencil is to "make marks". This then facilitates considering what else can make marks. From a spray can, lipstick, or a diamond on glass to a stick in the sand, one can then clearly decide which alternative solution is most appropriate.

The Job Plan

Value engineering is often done by systematically following a multi-stage job plan. Larry Miles' original system was a six-step procedure which he called the "value analysis job plan." Others have varied the job plan to fit their constraints. Depending on the application, there may be four, five, six, or more stages. One modern version has the following eight steps:

1. Preparation
2. Information
3. Analysis
4. Creation
5. Evaluation
6. Development
7. Presentation
8. Follow-up

Four basic steps in the job plan are:

Information gathering - This asks what the requirements are for the object. Function analysis, an important technique in value engineering, is usually done in this initial stage. It tries to determine what functions or performance characteristics are important. It asks questions like: What does the object do? What must it do? What should it do? What could it do? What must it not do?

Alternative generation (creation) - In this stage value engineers ask: What are the various alternative ways of meeting requirements? What else will perform the desired function?

Evaluation - In this stage all the alternatives are assessed by evaluating how well they meet the required functions and how great the cost savings will be.

Presentation - In the final stage, the best alternative will be chosen and presented to the client for final decision.

How it works

VE follows a structured thought process to evaluate options as follows.

Gather information


1. What is being done now?

Who is doing it?

What could it do?

What must it not do?

Measure

2. How will the alternatives be measured?

What are the alternate ways of meeting requirements?

What else can perform the desired function?

Analyze

3. What must be done?

What does it cost?

Generate

4. What else will do the job?

Evaluate

5. Which ideas are the best?

6. Develop and expand ideas

What are the impacts?

What is the cost?

What is the performance?

7. Present ideas

Q3. Explain different types of Quantitative models. Differentiate between work study and motion study.


Ans.

Quantitative models of the action potential

In neurophysiology, several mathematical models of the action potential have been developed, which fall into two basic types. The first type seeks to model the experimental data quantitatively, i.e., to reproduce the measurements of current and voltage exactly. The renowned Hodgkin-Huxley model of the axon from the squid Loligo exemplifies such models.

Although qualitatively correct, the H-H model does not describe every type of excitable membrane accurately, since it considers only two ions (sodium and potassium), each with only one type of voltage-sensitive channel. However, other ions such as calcium may be important, and there is a great diversity of channels for all ions. As an example, the cardiac action potential illustrates how differently shaped action potentials can be generated on membranes with voltage-sensitive calcium channels and different types of sodium/potassium channels. The second type of mathematical model is a simplification of the first type; the goal is not to reproduce the experimental data, but to understand qualitatively the role of action potentials in neural circuits.

For such a purpose, detailed physiological models may be unnecessarily complicated and may obscure the "forest for the trees". The FitzHugh-Nagumo model is typical of this class, which is often studied for its entrainment behavior. Entrainment is commonly observed in nature, for example in the synchronized lighting of fireflies, which is coordinated by a burst of action potentials; entrainment can also be observed in individual neurons. Both types of models may be used to understand the behavior of small biological neural networks, such as the central pattern generators responsible for some automatic reflex actions. Such networks can generate a complex temporal pattern of action potentials that is used to coordinate muscular contractions, such as those involved in breathing or fast swimming to escape a predator.


Hodgkin-Huxley model

Equivalent electrical circuit for the Hodgkin-Huxley model of the action potential. Im and Vm represent the current through, and the voltage across, a small patch of membrane, respectively. The Cm represents the capacitance of the membrane patch, whereas the four g's represent the conductances of four types of ions. The two conductances on the left, for potassium (K) and sodium (Na), are shown with arrows to indicate that they can vary with the applied voltage, corresponding to the voltage-sensitive ion channels.

In 1952 Alan Lloyd Hodgkin and Andrew Huxley developed a set of equations to fit their experimental voltage-clamp data on the axonal membrane. The model assumes that the membrane capacitance C is constant; thus, the transmembrane voltage V changes with the total transmembrane current Itot according to the equation

C dV/dt = Itot = Iext - INa - IK - IL

where INa, IK, and IL are currents conveyed through the local sodium channels, potassium channels, and "leakage" channels (a catch-all), respectively. The first term, Iext, represents the current arriving from external sources, such as excitatory postsynaptic potentials from the dendrites or a scientist's electrode.

The model further assumes that a given ion channel is either fully open or closed; if closed, its conductance is zero, whereas if open, its conductance is some constant value g. Hence, the net current through an ion channel depends on two variables: the probability popen of the channel being open, and the difference in voltage from that ion's equilibrium voltage, V - Veq. For example, the current through the potassium channel may be written as

IK = gK popen,K (V - EK)

which is equivalent to Ohm's law. By definition, no net current flows (IK = 0) when the transmembrane voltage equals the equilibrium voltage of that ion (when V = EK).

To fit their data accurately, Hodgkin and Huxley assumed that each type of ion channel had multiple "gates", so that the channel was open only if all the gates were open and closed otherwise. They also assumed that the probability of a gate being open was independent of the other gates being open; this assumption was later validated for the inactivation gate. Hodgkin and Huxley modeled the voltage-sensitive potassium channel as having four gates; letting pn denote the probability of a single such gate being open, the probability of the whole channel being open is the product of four such probabilities, i.e., popen,K = n^4. Similarly, the voltage-sensitive sodium channel was modeled as having three similar gates of probability m and a fourth gate, associated with inactivation, of probability h; thus, popen,Na = m^3 h. The probabilities for each gate are assumed to obey first-order kinetics of the form

τm (dm/dt) = meq - m

where both the equilibrium value meq and the relaxation time constant τm depend on the instantaneous voltage V across the membrane. If V changes on a time-scale more slowly than τm, the m probability will always roughly equal its equilibrium value meq; however, if V changes more quickly, then m will lag behind meq. By fitting their voltage-clamp data, Hodgkin and Huxley were able to model how these equilibrium values and time constants varied with temperature and transmembrane voltage. The formulae are complex and depend exponentially on the voltage and temperature. For example, the time constant for the sodium-channel inactivation probability h varies as 3^((θ-6.3)/10) with the Celsius temperature θ, and depends on the voltage V through exponential rate functions fitted to the data.

In summary, the Hodgkin-Huxley equations are complex, non-linear ordinary differential equations in four time-dependent variables: the transmembrane voltage V, and the gating probabilities m, h and n. No general solution of these equations has been discovered. A less ambitious but generally applicable method for studying such non-linear dynamical systems is to consider their behavior in the vicinity of a fixed point. This analysis shows that the Hodgkin-Huxley system undergoes a transition from stable quiescence to bursting oscillations as the stimulating current Iext is gradually increased; remarkably, the axon becomes stably quiescent again as the stimulating current is increased further still. A more general study of the types of qualitative behavior of axons predicted by the Hodgkin-Huxley equations has also been carried out.
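Since no closed-form solution exists, the system is integrated numerically in practice. The following is a minimal forward-Euler sketch in Python using the classic squid-axon parameter fits (voltage in mV relative to rest, time in ms); the stimulus amplitude, time step and print interval are arbitrary illustrative choices, not part of the original model:

import math

# Classic Hodgkin-Huxley rate functions. Note: a_n and a_m have removable
# singularities at V = 10 and V = 25 mV; exact hits are not expected here.
def a_n(V): return 0.01 * (10 - V) / (math.exp((10 - V) / 10) - 1)
def b_n(V): return 0.125 * math.exp(-V / 80)
def a_m(V): return 0.1 * (25 - V) / (math.exp((25 - V) / 10) - 1)
def b_m(V): return 4.0 * math.exp(-V / 18)
def a_h(V): return 0.07 * math.exp(-V / 20)
def b_h(V): return 1.0 / (math.exp((30 - V) / 10) + 1)

C = 1.0                            # membrane capacitance, uF/cm^2
gNa, gK, gL = 120.0, 36.0, 0.3     # peak conductances, mS/cm^2
ENa, EK, EL = 115.0, -12.0, 10.6   # equilibrium potentials, mV

V, m, h, n = 0.0, 0.05, 0.60, 0.32 # approximate resting state
dt, I_ext = 0.01, 10.0             # time step (ms) and stimulus (uA/cm^2)

for step in range(3000):              # 30 ms of simulated time
    INa = gNa * m**3 * h * (V - ENa)  # sodium current, popen = m^3 h
    IK = gK * n**4 * (V - EK)         # potassium current, popen = n^4
    IL = gL * (V - EL)                # leakage current
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    V += dt * (I_ext - INa - IK - IL) / C
    if step % 300 == 0:
        print(f"t = {step * dt:5.2f} ms, V = {V:7.2f} mV")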


FitzHugh-Nagumo model

Figure FHN: To mimic the action potential, the FitzHugh-Nagumo model and its relatives use a function g(V) with negative differential resistance (a negative slope on the I vs. V plot). For comparison, a normal resistor would have a positive slope, by Ohm's law I = GV, where the conductance G is the inverse of the resistance, G = 1/R.

Because of the complexity of the Hodgkin-Huxley equations, various simplifications have been developed that exhibit qualitatively similar behavior. The FitzHugh-Nagumo model is a typical example of such a simplified system. Based on the tunnel diode, the FHN model has only two independent variables, but exhibits a similar stability behavior to the full Hodgkin-Huxley equations. The equations take the form

dV/dt = g(V) - w + Iext

dw/dt = ε (V - γw)

where g(V) is a function of the voltage V that has a region of negative slope in the middle, flanked by one maximum and one minimum (Figure FHN), w is a slow recovery variable, and ε and γ are constants. A much-studied simple case of the FitzHugh-Nagumo model is the Bonhoeffer-van der Pol nerve model, which is described by the equations

ε (dV/dt) = V - V^3/3 - w

dw/dt = V

where the coefficient ε is assumed to be small. These equations can be combined into a second-order differential equation

ε (d^2V/dt^2) - (1 - V^2)(dV/dt) + V = 0

This van der Pol equation has stimulated much research in the mathematics of nonlinear dynamical systems. Op-amp circuits that realize the FHN and van der Pol models of the action potential have been developed by Keener.

A hybrid of the Hodgkin-Huxley and FitzHugh-Nagumo models was developed by Morris and Lecar in 1981, and applied to the muscle fiber of barnacles. True to the barnacle's physiology, the Morris-Lecar model replaces the voltage-gated sodium current of the Hodgkin-Huxley model with a voltage-dependent calcium current. There is no inactivation (no h variable) and the calcium current equilibrates instantaneously, so that again, there are only two time-dependent variables: the transmembrane voltage V and the potassium gate probability n. The bursting, entrainment and other mathematical properties of this model have been studied in detail.

The simplest models of the action potential are the "flush and fill" models (also called "integrate-and-fire" models), in which the input signal is summed (the "fill" phase) until it reaches a threshold, firing a pulse and resetting the summation to zero (the "flush" phase). All of these models are capable of exhibiting entrainment, which is commonly observed in nervous systems.
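The "flush and fill" idea is almost a direct description of code. A minimal Python sketch, with arbitrary threshold and input values chosen only for illustration:

threshold = 1.0
potential = 0.0
dt = 1.0                # time step, ms
input_current = 0.13    # arbitrary constant drive per step

for t in range(30):
    potential += input_current * dt   # the "fill" phase: sum the input
    if potential >= threshold:
        print(f"spike at t = {t} ms")
        potential = 0.0               # the "flush" phase: fire and reset

Run as written, this emits a regular spike train (at t = 7, 15 and 23 ms), the simplest form of the entrainment behaviour mentioned above.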


Q4. What is Rapid Prototyping? Explain the difference between Automated flow line and Automated assembly line with examples.

Ans.

Rapid prototyping

Rapid prototyping is the automatic construction of physical objects using additive manufacturing technology. The first techniques for rapid prototyping became available in the late 1980s and were used to produce models and prototype parts. Today, they are used for a much wider range of applications and are even used to manufacture production-quality parts in relatively small numbers. Some sculptors use the technology to produce complex shapes for exhibitions.

The use of additive manufacturing for rapid prototyping takes virtual designs from computer aided design (CAD) or animation modeling software, transforms them into thin, virtual, horizontal cross-sections and then creates successive layers until the model is complete. It is a WYSIWYG process where the virtual model and the physical model are almost identical.

With additive manufacturing, the machine reads in data from a CAD drawing and lays down successive layers of liquid, powder, or sheet material, and in this way builds up the model from a series of cross sections. These layers, which correspond to the virtual cross section from the CAD model, are joined together or fused automatically to create the final shape. The primary advantage to additive fabrication is its ability to create almost any shape or geometric feature.

The standard data interface between CAD software and the machines is the STL file format. An STL file approximates the shape of a part or assembly using triangular facets. Smaller facets produce a higher quality surface. VRML (or WRL) files are often used as input for 3D printing technologies that are able to print in full color.
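For illustration, the ASCII variant of the STL format can be written by hand. The Python sketch below emits a single triangular facet; real CAD exports contain thousands of such facets, and the normal and vertex values here are invented:

facet = """solid example
  facet normal 0.0 0.0 1.0
    outer loop
      vertex 0.0 0.0 0.0
      vertex 1.0 0.0 0.0
      vertex 0.0 1.0 0.0
    endloop
  endfacet
endsolid example
"""

with open("example.stl", "w") as f:
    f.write(facet)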

The word "rapid" is relative: construction of a model with contemporary methods can take from several hours to several days, depending on the method used and the size and complexity of the model. Additive systems for rapid prototyping can typically produce models in a few hours, although it can vary widely depending on the type of machine being used and the size and number of models being produced simultaneously.

Some solid freeform fabrication techniques use two materials in the course of constructing parts. The first material is the part material and the second is the support material (to support overhanging features during construction). The support material is later removed by heat or dissolved away with a solvent or water.

Traditional injection molding can be less expensive for manufacturing polymer products in high quantities, but additive fabrication can be faster and less expensive when producing relatively small quantities of parts. 3D printers give designers and concept development teams the ability to produce parts and concept models using a desktop size printer.


Rapid prototyping is now entering the field of rapid manufacturing and it is believed by many experts that this is a "next level" technology.

[Figure: industrial KUKA robot for wood processing and rapid prototyping]

A large number of competing technologies are available in the marketplace. As all are additive technologies, their main differences are found in the way layers are built to create parts. Some melt or soften material to produce the layers (SLS, FDM), while others lay down liquid materials (thermosets) that are cured with different technologies. In the case of lamination systems, thin layers are cut to shape and joined together.

As of 2005, conventional rapid prototype machines cost around £25,000.

Prototyping technology | Base materials
Selective laser sintering (SLS) | Thermoplastics, metal powders
Direct metal laser sintering (DMLS) | Almost any metal alloy
Fused deposition modeling (FDM) | Thermoplastics, eutectic metals
Stereolithography (SLA) | Photopolymer
Laminated object manufacturing (LOM) | Paper
Electron beam melting (EBM) | Titanium alloys
3D printing (3DP) | Various materials

Q5. Explain Break Even Analysis and Centre of Gravity methods. Explain Product layout and process layout with examples.


Ans.

BREAK EVEN ANALYSIS & CENTER OF GRAVITY METHOD


The Break-Even Point

In economics & business, specifically cost accounting, the break-even point (BEP) is the point at which cost or expenses and revenue are equal: there is no net loss or gain, and one has "broken even". A profit or a loss has not been made, although opportunity costs have been "paid", and capital has received the risk-adjusted, expected return.

For example, if a business sells fewer than 200 tables each month, it will make a loss; if it sells more, it will make a profit. With this information, the business managers will then need to see whether they expect to be able to make and sell 200 tables per month.

If they think they cannot sell that many, to ensure viability they could:

1. Try to reduce the fixed costs (by renegotiating rent for example, or keeping better control of telephone bills or other costs)

2. Try to reduce variable costs (the price it pays for the tables, by finding a new supplier)

3. Increase the selling price of their tables.

Any of these would reduce the break even point. In other words, the business would not need to sell so many tables to make sure it could pay its fixed costs.


Margin of Safety

Margin of safety represents the strength of the business. It enables a business to know the exact amount it has gained or lost, and whether it is above or below the break-even point.

margin of safety = (current output - breakeven output)

margin of safety% = (current output - breakeven output)/current output x 100

When dealing with budgets you would instead replace "Current output" with "Budgeted output".

If the P/V ratio is given, then margin of safety = profit / (P/V ratio).

In units:

Break-even point = FC / (SP - VC)

where FC is Fixed Cost, SP is Selling Price per unit and VC is Variable Cost per unit.

Break Even Analysis

By inserting different prices into the formula, you will obtain a number of break-even points, one for each possible price charged. For example, with fixed costs of $1000 and a variable cost of $0.60 per unit, a selling price of $2 gives a break-even point of 1000/(2 - 0.6) = 715 units. If the firm raises the selling price to $2.30, it would have to sell only 1000/(2.3 - 0.6) = 589 units to break even, rather than 715.
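These figures are easy to reproduce programmatically. A minimal Python sketch of the formula given above, rounding up to whole units:

import math

def break_even_units(fixed_cost: float, price: float, unit_variable_cost: float) -> int:
    # Smallest whole quantity at which revenue covers cost: ceil(FC / (SP - VC)).
    return math.ceil(fixed_cost / (price - unit_variable_cost))

for price in (2.00, 2.30):
    units = break_even_units(1000, price, 0.60)
    print(f"price ${price:.2f}: break even at {units} units")
# -> 715 units at $2.00, 589 units at $2.30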

To make the results clearer, they can be graphed. To do this, you draw the total cost curve (TC in the diagram) which shows the total cost associated with each possible level of output, the fixed cost curve (FC) which shows the costs that do not vary with output level, and finally the various total revenue lines (R1, R2, and R3) which show the total amount of revenue received at each output level, given the price you will be charging.


The break-even points (A, B, C) are the points of intersection between the total cost curve (TC) and a total revenue curve (R1, R2, or R3). The break-even quantity at each selling price can be read off the horizontal axis, and the break-even revenue at each selling price can be read off the vertical axis. The total cost, total revenue, and fixed cost curves can each be constructed with simple formulae. For example, the total revenue curve is simply the product of selling price times quantity for each output quantity. The data used in these formulae come either from accounting records or from various estimation techniques such as regression analysis.

Application

The break-even point is one of the simplest yet least used analytical tools in management. It helps to provide a dynamic view of the relationships between sales, costs and profits. For example, expressing break-even sales as a percentage of actual sales can give managers a chance to understand when to expect to break even (by linking the percentage to when in the week or month this level of sales might occur).

The break-even point is a special case of Target Income Sales, where Target Income is 0 (breaking even). This is very important for financial analysis.

Limitations

Break-even analysis is only a supply side (i.e. costs only) analysis, as it tells you nothing about what sales are actually likely to be for the product at these various prices.

It assumes that fixed costs (FC) are constant. Although this is true in the short run, an increase in the scale of production is likely to cause fixed costs to rise.

It assumes average variable costs are constant per unit of output, at least in the range of likely quantities of sales. (i.e. linearity)

It assumes that the quantity of goods produced is equal to the quantity of goods sold (i.e., there is no change in the quantity of goods held in inventory at the beginning of the period and the quantity of goods held in inventory at the end of the period).

In multi-product companies, it assumes that the relative proportions of each product sold and produced are constant (i.e., the sales mix is constant).

Center of Gravity calculations.


Introduction

One of the problems that arises when building a model aircraft is the correct placement of the Center of Gravity (C of G). When assembling an ARTF or scratch building from a plan the problem does not arise, as the designer should give full details on the plan or in the kit. However, when you are building to your own design you may need to work out where the C of G is located. This is not a problem with a "normal" model, but what about biplanes, a Beech Staggerwing, deltas, canards and other odd layouts? Full-size designers have powerful computers and wind tunnels, but we must get it right for that first flight. Once in the air you are fully committed, and an error will almost certainly cause a crash, or at best a very twitchy flight. There is little point going into computational details here, as there are one or two good programs on the Internet that will do most of the work for you. You will find links below to the ones I have located (they are quite hard to find), and I suggest you try these out; they all give more or less the correct answer. In the good old free-flight days, test glides were the norm: trimming out a model into long grass until the model flew straight and level just off the stall. Test glides of heavy, fast radio models are not possible, so we need to get the C of G correct for that first flight. If in doubt, use the old rule of thumb "1/4 of the wing chord back"; this is generally not far out. See also the "model of a model" method.

Calculation Problems

There are, however, a couple of problems. Most of the calculations involve an element of guesswork, so the final result can at best be described as a very accurate guess. For example, tailplane efficiency varies between 30 and 100%, and you need to make an educated guess as to the value you use in your calculations. A tailplane close to the wing's trailing edge and in the wake vortex will come out as low as 30%, a "normal" location 60%, whilst a canard (foreplane) is in the 95-100% range as it operates in "clean" air. A high-set "Tee" tail will be closer to 90%. Do not bother with lifting tailplanes; a flat plate or thin symmetrical type is just as efficient. Secondly, the C of G needs to be in front of the Neutral Point, but how far? Again a degree of intelligent guesswork is required. The accepted figure is between 5 and 20%; 15% is a good compromise for first flights. See note below. Once you have a model that flies, at least well enough to land in one piece, you can then adjust the C of G based on the results of the first flight. Some links are given below where you can find nomograms etc. to do most of the calculations for you, and I will add others as I find them.

Fly-by-Wire Fighters

Variously known as CCV (Control Configured Vehicles) or ACT (Active Control Technology), these aircraft are designed to be unstable and only fly under the full control of high-speed computers, with minimum input from the pilot, where the roll and pitch sensors feed the control surfaces 100 times per second. Not quite as fast as our 2.4GHz! Some very successful large turbine models of full-size CCV aircraft have been built and fly very well, the only difference being in the location of the C of G. CCV aircraft have this AFT of the NP, whereas models should have it in front of the NP or Neutral Point (or Center of Pressure, CP). It should be noted that CCV aircraft have a tailplane significantly SMALLER than would be considered normal. This should be taken into account when designing a MODEL, otherwise it may not be large enough for normal control. The WWII Spitfire had a very small tailplane to reduce drag and increase manoeuvrability.

Terminology

There is a lot of confusing terminology regarding this subject, and an attempt has been made to clarify it in the notes at the end. Please read these first. In most engineering calculations, weights and pressures are assumed to act at one point, i.e. at the C of G. We know this is not exactly the case, as the weight of any object is spread over its entire volume/area - not always evenly, but spread nevertheless. Concentrating this mass or weight at one (imaginary) point makes meaningful calculations possible.
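That single point is just the weight-weighted average of the component positions. A minimal Python sketch; the component names, weights and positions are invented for illustration:

# (name, weight in g, position in mm aft of an arbitrary datum)
components = [
    ("wing",      450.0, 120.0),
    ("fuselage",  600.0, 180.0),
    ("tailplane",  80.0, 520.0),
    ("motor",     250.0,  20.0),
]

total_weight = sum(w for _, w, _ in components)
cg = sum(w * x for _, w, x in components) / total_weight
print(f"total weight {total_weight:.0f} g, C of G at {cg:.1f} mm from datum")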

Biplanes and other multi-wing aircraft

I came across these diagrams in an old copy of the Aviation Handbook (Johnson, 1931), and it does seem to be a very authoritative method of determining the location of the MAC in the Z axis for a biplane. Johnson further states in his book that the upper wing carries approx. 58% of the load and the lower wing approx. 41%; a very small amount comes from the fuselage, struts etc. The C of G should be located at approx. 30% of the mean chord, with an optimum range of 28% - 33%. Fig D-1 allows you to compute the location plane of the MAC (in the Z axis) between the upper and lower wing, and Fig D-2 provides a solution for the value K in D-1. Fig A is a repeat of the graphical method of determining the MAC, Fig B is the same but for more complicated wing planforms, and Fig C shows the limits for the C of G on various wing positions. Note that you will still need to compute the location of the C of G in the X and Y axes using other sections of this page. D-1 only shows 3 gap/chord ratios; extrapolate for others.

This article is reproduced with the kind permission of Alasdair Sutherland and should enable any modeller to determine the MAC of any biplane.

Determining the C of G by the use of a Model

By building an accurate small-scale model of your new design it is possible to determine very accurately where the C of G lies. Make a model (say 1/4 the size of your design) as accurately as possible, complete with profile fuselage, tailplane and fin. Make the wings and tail in sheet balsa and round the LE/TE. Test glide the scale model until a nice flat glide results. Remove nose weight until a very slight stall results: this is the AFT position. Now add weight until the glide is unacceptable: this is the most FORWARD position. Check the locations on the test model and set the C of G on your full-size model somewhere in between these two extremes. At least it should fly with reasonable safety, which is better than nothing for very unorthodox or hard-to-compute models, such as rear-swept biplanes or canards.

Flying Wings & Deltas

Use the calculators given below, but put in very low values (0.01, for example) for the (non-existent) tailplane; some of the calculators will not accept zero as a value. The results are confirmed by the positions on models that fly well: Zagis, the Ripmax Rapier, the Delta 363, etc.

Gordon Whitehead - Winning Formula

This article appeared in the May 1994 issue of Radio Modeller and is a very comprehensive look at Center of Gravity calculations. Interestingly, it covers a wide variety of multi-wing aircraft, as mentioned above. It probably gives the best method of calculating Centres of Gravity you are likely to find, with an accuracy well above anything modellers are likely to require.

Rene Jassien

Looking through some old model aircraft magazines (circa 1983) I came across an article by Rene Jassien, a well-known and very successful competition flyer in the 1980s. This article gives a multi-factor method of calculating the Center of Gravity to very precise limits. It is specifically designed for competition use, where the C of G may be set back as much as 75% from the leading edge of the wing's mean chord, or may even be BEHIND the trailing edge. This is due mainly to the use of lifting tailplanes set a long way back from the wing's trailing edge. As you can see, the theoretical values matched very precisely the actual settings on real, well-designed and successful models.

Surprisingly, when the dimensions of a Frontier Basic Trainer were fed in, the solution was 24%, much in line with what one would use for such a model. To make the calculation very simple I have produced an Excel spreadsheet: just feed in the numbers and out pops the solution. Please note that this formula and the models it was designed for are 25 years old, and model design and technology may have moved on. Presumably the way aircraft fly has not, so the solutions may still have real value.

Barnaby Wainfan is not an author/designer who springs to mind, but he has produced an excellent book on airfoil selection, and he has also produced some foils of his own; these are hard to find and may be listed as BW types or as Wainfan. The book can be obtained from http://www.aircraftspruce.com - navigate to Books/Videos, then Books, then Design. The Aircraft Spruce Company site is a mine of information for modellers and full-size builders/flyers alike. Well worth a visit.

Model Aircraft Aerodynamics by Martin Simons is one of the best books on the subject. Motorbooks International, Wisconsin, USA. ISBN 0-85242-915-0.

Q6. Explain Juran’s Quality Trilogy and Crosby’s absolutes of quality. List out the pillars of Total Productive Maintenance.

Ans.


Juran's quality trilogy

Juran uses his famous Universal Breakthrough Sequence to implement quality programmes. The universal breakthrough sequence is:

1. Proof of need: There should be a compelling need to make changes.

2. Project identification: Here what is to be changed is identified. Specific projects with time frames and the resource allocation are decided.

3. Top management commitment: Commitment of the top management is to assign people and fix responsibilities to complete the project.

4. Diagnostic journey: Each team will determine whether the problems result from systemic causes or are random or are deliberately caused. Root causes are ascertained with utmost certainty.

5. Remedial action: This is the stage when changes are introduced. Inspection, testing, and validation are also included at this point.

6. Holding on to the gains: The above steps result in beneficial outcomes. Keeping records of all actions and consequences will help in further improvements. The actions that result in the benefits derived should be the norm for establishing standards.

Juran has categorised the cost of quality into four categories:

1. Failure costs - Internal: These are costs of rejections and repairs in terms of materials, labour, machine time and loss of morale.

2. Failure costs - External: These are costs of replacement, on-site rework including spare parts and expenses of the personnel, warranty costs and loss of goodwill.

3. Appraisal costs: These are costs of inspection, including maintenance of records, certification, segregation costs, and others.

4. Prevention costs: These are the costs of activities undertaken to prevent defects from occurring in the first place, such as quality planning and training.

Juran's Quality Trilogy itself is the sequence of three sets of activities - Quality Planning, Quality Control, and Quality Improvement - which together form the trilogy for achieving Total Quality Management.

Crosby's absolutes of quality:

Like Deming, Crosby also lays emphasis on top management commitment and responsibility for designing the system so that defects are not inevitable. He urged that there be no restriction on spending for achieving quality: in the long run, maintaining quality is more economical than compromising on its achievement. His absolutes can be listed as under:


1. Quality is conformance to requirements, not 'goodness'.

2. Prevention, not appraisal, is the path to quality.

3. Quality is measured as the price paid for non-conformance and as indices.

4. Quality originates in all factions. There are no quality problems; it is the people, designs, and processes that create problems.

Crosby also has given 14 points similar to those of Deming. His approach emphasises measurement of quality, increasing awareness, corrective action, error-cause removal and continuously reinforcing the system, so that the advantages derived are not lost over time. He opined that the quality management regimen should improve the overall health of the organisation, and prescribed a vaccine. The ingredients are:

1) Integrity: Honesty and commitment help in producing everything right first time, every time

2) Communication: Flow of information between departments, suppliers, customers helps in identifying opportunities

3) Systems and operations: These should bring in a quality environment so that nobody is comfortable with anything less than the best.

Quality Tools:

1. Cause-and-effect diagram (also called Ishikawa or fishbone chart): Identifies many possible causes for an effect or problem and sorts ideas into useful categories.

2. Check sheet: A structured, prepared form for collecting and analyzing data; a generic tool that can be adapted for a wide variety of purposes.

3. Control charts: Graphs used to study how a process changes over time.

4. Histogram: The most commonly used graph for showing frequency distributions, or how often each different value in a set of data occurs.

5. Pareto chart: Shows on a bar graph which factors are more significant.

6. Scatter diagram: Graphs pairs of numerical data, one variable on each axis, to look for a relationship.

7. Stratification: A technique that separates data gathered from a variety of sources so that patterns can be seen (some lists replace "stratification" with "flowchart" or "run chart").

ASSIGNMENT 2

Q1. Explain Logical Process Modelling and Physical Process Modelling. What are the ingredients of Business Process?


Ans.

Logical Process Modeling

Logical Process Modeling is the representation of a business process, detailing all the activities in the process from gathering the initial data to reaching the desired outcome. These are the kinds of activities described in a logical process model:

Gathering the data to be acted upon

Controlling access to the data during the process execution

Determining which work task in the process should be accomplished next

Delivering the appropriate subset of the data to the corresponding work task

Assuring that all necessary data exists and all required actions have been performed at each task

Providing a mechanism to indicate acceptance of the results of the process, such as electronic "signatures"

All business processes are made up of these actions. The most complex of processes can be broken down into these concepts. The complexity comes in the manner in which the process activities are connected together. Some activities may occur in sequential order, while some may be performed in parallel. There may be circular paths in the process (a re-work loop, for example). It is likely there will be some combination of these.

The movement of data, and the decisions made determining the paths the data follow during the process, comprise the process model. The model contains only business activities, uses business terminology (not software acronyms, technical jargon, etc.), completely describes the activities of the business area being modeled, and is independent of any individual or position working in the organization. Like its sibling, Logical Data Modeling, Logical Process Modeling does not include redundant activities, technology-dependent activities, physical limitations or requirements, or current systems limitations or requirements. The process model is a representation of the business view of the set of activities under analysis.

Heretofore, many applications and systems were built without a logical process model or a rigorous examination of the processes needed to accomplish the business goals. This resulted in applications that did not meet the needs of the users and / or were difficult to maintain and enhance.

Problems with an unmodeled system include the following:

Not knowing who is in possession of the data at any point in time

Lack of control over access to the data at any point in the process

Inability to determine quickly where in the process the data resides and how long it has been there

Difficulties in making adjustments to a specific execution of a business process

Inconsistent process execution

Logical Process Modeling Primer

Modeling methods can be grouped into Logical and Physical types. Using a combination of these methodologies can produce the most complete model, and no single method is sufficient to adequately define your processes.

Logical Process Modeling

Logical process modeling methods provide a description of the logical flow of data through a business process. They do not necessarily provide details about how decisions are made or how tasks are chosen during the process execution. They may be either manual or electronic, or a combination of methods. Some of the logical modeling formats are:

Written process descriptions

Flow charts

Data flow diagrams

Function hierarchies

Real-time models or state machines

Functional dependency diagrams

A function is a high-level activity of an organization; a process is an activity of a business area; a sequential process is the lowest-level activity. Therefore:

Functions consist of Processes. Functions are usually identified at the planning stage of development, and can be decomposed into other functions or into processes. Some examples of Functions would include: Human Resource Management, Marketing, Claims Processing.

Processes consist of Sequential Processes. Processes are activities that have a beginning and an end; they transform data and are more detailed than functions. They can be decomposed into other processes or into Sequential Processes. Some examples of Processes would be: Make Payment, Produce Statement of Account, Verify Employment.

Sequential Processes are specific tasks performed by the business area and, like a process, transform data. They cannot be further decomposed. Examples of Sequential Processes are: Record Customer Information, Validate Social Security Number, Calculate Amount Due.

Each business activity in a logical process model is included in a decomposition diagram, given a meaningful name and described in detail with text. As in Logical Data Modeling, naming conventions are quite important in process modeling. Names for processes begin with a verb and should be as unique as possible while retaining meaning to the business users. Nouns used in the activity name should be defined and used consistently. In a decomposition diagram, each level completely describes the level above it and should be understandable to all appropriate business users.
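The Function / Process / Sequential Process hierarchy can be pictured as a simple nested structure. The Python sketch below reuses the examples from the text; the data structure itself is only illustrative:

# Function -> Processes -> Sequential Processes (the lowest level).
decomposition = {
    "Claims Processing": {
        "Make Payment": [
            "Record Customer Information",
            "Validate Social Security Number",
            "Calculate Amount Due",
        ],
        "Produce Statement of Account": [
            "Calculate Amount Due",
        ],
    },
}

for function, processes in decomposition.items():
    print(function)
    for process, seq_processes in processes.items():
        print("  " + process)
        for sp in seq_processes:
            print("    " + sp)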


Physical Process Modeling

Physical modeling methods specify the topology (connectivity), data, roles, and rules of a business process. This model describes items such as:

Work tasks to be completed during the process

The order in which the tasks should be executed

Data needed to start the process execution

Data required to start and finish each work task

Rules needed to determine routing through the process

Exception handling techniques

At least one defined business outcome

Roles and permissions of each process participant

The physical model may not closely resemble the logical model, but they produce the same outcomes.
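For illustration, the items listed above can be captured in a simple data structure. Everything in the Python sketch - the field names, the example process and the routing rule - is invented:

process = {
    "name": "Verify Employment",
    "tasks": ["receive request", "check records", "approve", "reject"],
    "order": {"receive request": "check records"},   # simple precedence
    "routing": {                                     # rule deciding the path
        "check records": lambda data: "approve" if data["found"] else "reject",
    },
    "roles": {"check records": "HR clerk", "approve": "HR manager"},
    "outcomes": ["approve", "reject"],               # defined business outcomes
}

# Route one execution through the decision point.
next_task = process["routing"]["check records"]({"found": True})
print(next_task)   # -> approve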

Data-Driven Approach to Process Definition

This approach, most commonly used in relational and object-oriented analysis efforts, analyzes the life cycle of each major data entity type. The approach defines a process for each phase or change the data undergoes, the method by which the data is created, the reasons for the change and the event that causes the data to achieve its terminal state. This method assures that all data actions are accounted for and that there are meaningful associations between the data and its processes. However, in a data-driven method, the logical data model must be completed before the process modeling and analysis can begin.

Major points of interest in constructing a Logical Process Model are:

The purpose of the process. Writing the purpose and referring to it frequently enables the analyst to recognize a step in the process that does not make sense in the context of the process.

Who will participate in the process. The participants may be people, groups of people, or electronic applications.

The order in which the steps of the process are done.

The data you expect to be included in the process. There is an initial set of expected data, plus you should know what data you expect to be modified or added during the process. Part of this step is deciding which subset of the data is appropriate at each task in the process.

Decisions that will be made during the execution of the process. These include decisions about which path the process should take, and whether all the required data is present at any given point in the process.

The rules you will use to define the various parts of the process. Also, note any naming conventions that are important for the business.


The disposition of the data at the end of the process. That is, will the data be retained or deleted? If you plan to store the data, where and in what form will the data be kept? Do future process-related reports need to access the data?

There may be other elements in the business processes that need to be included in the model. The more complete the model, the easier it will be to implement the software, and the more successful the processes will be in producing the desired output.

Process definition also helps you know when a process should be broken into smaller, sequential processes. If the definition of a process is ambiguous or lengthy, it is usually a candidate for decomposing into sequential processes. All functions are decomposed to processes, and all processes are ultimately decomposed into sequential processes.

Constructing the Process Model Diagrams

Once the functions, processes and sequential processes have been identified and defined, the analyst uses process modeling software to construct a set of diagrams to graphically represent the business processes under scrutiny.

In drawing the diagrams, consider including the following items:

The starting point of the process. There could be more than one starting point, depending on the purpose and the operation of the process. If a process contains more than one starting point, include all of them.

All tasks to be performed during the execution of the process.

The order in which the tasks should be accomplished, including any tasks that may be performed in parallel.

All decision points, both those having to do with choosing a path through the process and those that determine whether or not the process should continue.

Any points at which the process path may split or merge.

The completion point of the process. As a process may have multiple starting points, it can also have multiple completion points.

You should also develop a means of identifying the data you expect at each point in the process. Be mindful of areas in the process where more than one task may be performed simultaneously. In these areas, you may need to show data being shared among participants, or different subsets of the data being made available to each participant.

Finally, include the ending point(s) of the process. This indicates that the process has been completed and that all the data generated by the process can be identified.

Q2. Explain Project Management Knowledge Areas. With an example explain Work Breakdown Structure.


Ans.

Project Management Body of Knowledge

A Guide to the Project Management Body of Knowledge (PMBOK Guide) is a book which presents a set of standard terminology and guidelines for project management. The Fourth Edition (2008) was recognized by the American National Standards Institute (ANSI) as an American National Standard (ANSI/PMI 99-001-2008) and by the Institute of Electrical and Electronics Engineers as IEEE 1490-2011.

A Guide to the Project Management Body of Knowledge (PMBOK Guide) was first published by the Project Management Institute (PMI) as a white paper in 1987 in an attempt to document and standardize generally accepted project management information and practices. The first edition was published in 1996 followed by the second edition in 2000.[2]

In 2004, the PMBOK Guide — Third Edition was published with major changes from the previous editions. The latest English-language PMBOK Guide — Fourth Edition was released on December 31, 2008.

Work on the Fifth Edition is in development. On February 17, 2012, an Exposure Draft of the PMBOK Guide Fifth Edition was made available for review and comment[3]. The final version is expected to be published in 2012/2013.

The PMBOK Guide is process-based, meaning it describes work as being accomplished by processes. This approach is consistent with other management standards such as ISO 9000 and the Software Engineering Institute's CMMI. Processes overlap and interact throughout a project or its various phases. Processes are described in terms of the following (a small sketch follows the list):

Inputs (documents, plans, designs, etc.)

Tools and Techniques (mechanisms applied to inputs)

Outputs (documents, products, etc.)
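As a small illustration of this inputs/tools/outputs view, one genuine Fourth Edition process, Develop Project Charter, could be recorded as below in Python. The data structure is invented for this example, and the field values are abbreviated rather than the Guide's full lists:

from dataclasses import dataclass
from typing import List

@dataclass
class Process:
    name: str
    inputs: List[str]     # documents, plans, designs, ...
    tools: List[str]      # mechanisms applied to the inputs
    outputs: List[str]    # documents, products, ...

develop_charter = Process(
    name="Develop Project Charter",
    inputs=["business case", "contract"],   # abbreviated for the example
    tools=["expert judgment"],
    outputs=["project charter"],
)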

The Guide recognizes 42 processes that fall into five basic process groups and nine knowledge areas that are typical of almost all projects.

The five process groups are:

Initiating


Planning

Executing

Monitoring and Controlling

Closing

The nine knowledge areas are:

Project Integration Management

Project Scope Management

Project Time Management

Project Cost Management

Project Quality Management

Project Human Resource Management

Project Communications Management

Project Risk Management

Project Procurement Management

Each of the nine knowledge areas contains the processes that need to be accomplished within its discipline in order to achieve an effective project management program. Each of these processes also falls into one of the five basic process groups, creating a matrix structure such that every process can be related to one knowledge area and one process group.
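A hedged sketch of that matrix idea in Python (the three rows are genuine Fourth Edition examples; the table layout and helper function are illustrative):

processes = [
    # (process,                 knowledge area,                   process group)
    ("Develop Project Charter", "Project Integration Management", "Initiating"),
    ("Define Scope",            "Project Scope Management",       "Planning"),
    ("Control Costs",           "Project Cost Management",        "Monitoring and Controlling"),
]

def in_group(group):
    """Every process that falls into the given process group."""
    return [name for name, _area, grp in processes if grp == group]

print(in_group("Planning"))   # -> ['Define Scope']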

The PMBOK Guide is meant to offer a general guide to manage most projects most of the time. There are currently two extensions to the PMBOK Guide: the Construction Extension to the PMBOK Guide applies to construction projects, while the Government Extension to the PMBOK Guide applies to government projects.

Q3. Take an example of any product or project and explain Project Management Life Cycle.

Ans.


Product lifecycle management

In industry, product lifecycle management (PLM) is the process of managing the entire lifecycle of a product from its conception, through design and manufacture, to service and disposal. PLM integrates people, data, processes and business systems and provides a product information backbone for companies and their extended enterprise.

PLM systems help organizations in coping with the increasing complexity and engineering challenges of developing new products for the global competitive markets.

Product lifecycle management (PLM) should be distinguished from 'Product life cycle management (marketing)' (PLCM). PLM describes the engineering aspect of a product, from managing descriptions and properties of a product through its development and useful life; whereas, PLCM refers to the commercial management of life of a product in the business market with respect to costs and sales measures.

Product lifecycle management is one of the four cornerstones of a corporation's information technology structure. All companies need to manage communications and information with their customers (CRM-customer relationship management), their suppliers (SCM-supply chain management), their resources within the enterprise (ERP-enterprise resource planning) and their planning (SDLC-systems development life cycle). In addition, manufacturing engineering companies must also develop, describe, manage and communicate information about their products.

One form of PLM is called people-centric PLM. While traditional PLM tools have been deployed only on release or during the release phase, people-centric PLM targets the design phase.

As of 2009, ICT development (EU-funded PROMISE project 2004–2008) has allowed PLM to extend beyond traditional PLM and integrate sensor data and real time 'lifecycle event data' into PLM, as well as allowing this information to be made available to different players in the total lifecycle of an individual product (closing the information loop). This has resulted in the extension of PLM into closed-loop lifecycle management (CL2M).

Areas of PLM

Within PLM there are five primary areas:

1. Systems engineering (SE)
2. Product and portfolio management (PPM)
3. Product design (CAx)
4. Manufacturing process management (MPM)
5. Product data management (PDM)


Systems engineering is focused on meeting all requirements, primarily meeting customer needs, and coordinating the systems design process by involving all relevant disciplines. Product and portfolio management is focused on managing resource allocation and tracking progress versus plan for new product development projects that are in process (or in a holding status). Portfolio management is a tool that assists management in tracking progress on new products and making trade-off decisions when allocating scarce resources. Product data management is focused on capturing and maintaining information on products and/or services through their development and useful life.

Introduction to development process

The core of PLM (product lifecycle management) is the creation and central management of all product data and the technology used to access this information and knowledge. PLM as a discipline emerged from tools such as CAD, CAM and PDM, but can be viewed as the integration of these tools with methods, people and processes through all stages of a product’s life. It is not just a software technology but also a business strategy.

For simplicity the stages are described here in a traditional sequential engineering workflow. The exact order of events and tasks will vary according to the product and industry in question, but the main phases are conceive, design, realize and service, as described below.

The reality is, however, more complex: people and departments cannot perform their tasks in isolation, and one activity cannot simply finish and the next activity start. Design is an iterative process, and designs often need to be modified due to manufacturing constraints or conflicting requirements. Where a customer order fits into the timeline depends on the industry type and on whether the products are, for example, built to order, engineered to order, or assembled to order.


Phases of product lifecycle and corresponding technologies

Many software solutions have been developed to organize and integrate the different phases of a product’s lifecycle. PLM should not be seen as a single software product but as a collection of software tools and working methods integrated together to address single stages of the lifecycle, to connect different tasks, or to manage the whole process. Some software providers cover the whole PLM range while others cover a single niche application. Some applications can span many fields of PLM with different modules within the same data model. An overview of the fields within PLM is covered here. It should be noted, however, that the simple classifications do not always fit exactly; many areas overlap and many software products cover more than one area or do not fit easily into one category.

It should also not be forgotten that one of the main goals of PLM is to collect knowledge that can be reused for other projects and to coordinate the simultaneous, concurrent development of many products. PLM is about business processes, people and methods as much as about software solutions. Although PLM is mainly associated with engineering tasks, it also involves marketing activities such as product portfolio management (PPM), particularly with regard to new product development (NPD).

There are several lifecycle models in industry to consider, but most are rather similar. What follows below is one possible lifecycle model; while it emphasizes hardware-oriented products, similar phases would describe any form of product or service, including non-technical or software-based products:

Phase 1: Conceive

Imagine, specify, plan, innovate

The first stage is the definition of the product’s requirements, based on the viewpoints of the customer, the company, the market and regulatory bodies. From this specification, the product’s major technical parameters can be defined. In parallel with the requirements specification, the initial concept design work is carried out, defining the aesthetics of the product together with its main functional aspects. For the industrial design (styling) work, many different media are used, from pencil and paper and clay models to 3D CAID (computer-aided industrial design) software.

In some concepts, the investment of resources into research or analysis-of-options may be included in the conception phase – e.g. bringing the technology to a level of maturity sufficient to move to the next phase. However, lifecycle engineering is iterative. It is always possible that something in a phase works poorly enough to require backing up into a prior phase – perhaps all the way back to conception or research. There are many examples to draw from.

Phase 2: Design

Describe, define, develop, test, analyze and validate


This is where the detailed design and development of the product’s form starts, progressing through prototype testing and pilot release to full product launch. It can also involve redesign and ramp-up for improvement to existing products as well as planned obsolescence. The main tool used for design and development is CAD. This can be simple 2D drawing/drafting or 3D parametric feature-based solid/surface modeling. Such software includes technology such as hybrid modeling, reverse engineering, KBE (knowledge-based engineering), NDT (nondestructive testing) and assembly construction.

This step covers many engineering disciplines, including mechanical, electrical, electronic, software (embedded), and domain-specific disciplines such as architectural, aerospace, automotive, ... Along with the actual creation of geometry, there is the analysis of the components and product assemblies. Simulation, validation and optimization tasks are carried out using CAE (computer-aided engineering) software, either integrated in the CAD package or stand-alone. These are used to perform tasks such as stress analysis and FEA (finite element analysis); kinematics; computational fluid dynamics (CFD); and mechanical event simulation (MES). CAQ (computer-aided quality) is used for tasks such as dimensional tolerance analysis. Another task performed at this stage is the sourcing of bought-out components, possibly with the aid of procurement systems.

Phase 3: Realize

Manufacture, make, build, procure, produce, sell and deliver

Once the design of the product’s components is complete, the method of manufacturing is defined. This includes CAD tasks such as tool design and the creation of CNC machining instructions for the product’s parts, as well as for the tools used to manufacture those parts, using integrated or separate CAM (computer-aided manufacturing) software. It also involves analysis tools for process simulation of operations such as casting, molding, and die press forming. Once the manufacturing method has been identified, CPM comes into play. This involves CAPE (computer-aided production engineering) or CAP/CAPP (production planning) tools for carrying out factory, plant and facility layout and production simulation – for example, press-line simulation, industrial ergonomics, and tool selection management. Once components are manufactured, their geometrical form and size can be checked against the original CAD data with the use of computer-aided inspection equipment and software. In parallel with the engineering tasks, sales product configuration and marketing documentation work will be taking place. This could include transferring engineering data (geometry and part list data) to a web-based sales configurator and other desktop publishing systems.

Phase 4: Service

Use, operate, maintain, support, sustain, phase-out, retire, recycle and disposal

The final phase of the lifecycle involves managing in-service information: providing customers and service engineers with the support information needed for repair and maintenance, as well as waste management/recycling information. This involves using tools such as maintenance, repair and operations management (MRO) software.

There is an end-of-life to every product. Whether it be disposal or destruction of material objects or information, this needs to be considered since it may not be free from ramifications.

All phases: product lifecycle

Communicate, manage and collaborate

None of the above phases can be seen in isolation. In reality a project does not run sequentially or in isolation from other product development projects, and information flows between different people and systems. A major part of PLM is the coordination and management of product definition data. This includes managing engineering changes and the release status of components; configuring product variations; document management; planning project resources and timescales; and risk assessment.

For these tasks, graphical data, text and metadata such as product bills of materials (BOMs) need to be managed. At the engineering department level this is the domain of PDM (product data management) software; at the corporate level it is the domain of EDM (enterprise data management) software. These two definitions tend to blur, however, and it is typical to see two or more data management systems within an organization. These systems are also linked to other corporate systems such as SCM, CRM, and ERP. Associated with these systems are project management systems for project/program planning.
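To illustrate why BOM data is naturally hierarchical (a purely illustrative sketch, not any PDM vendor's API or data model), a BOM can be held as a tree of (component, quantity) pairs and "exploded" to cumulative part counts:

bom = {
    "bicycle": [("frame", 1), ("wheel", 2)],
    "wheel":   [("rim", 1), ("spoke", 36), ("tyre", 1)],
    "frame":   [], "rim": [], "spoke": [], "tyre": [],   # purchased parts
}

def explode(item, qty=1, level=0):
    """Print the fully exploded BOM with cumulative quantities per branch."""
    print("  " * level + f"{qty} x {item}")
    for child, n in bom[item]:
        explode(child, qty * n, level + 1)

explode("bicycle")   # the two wheels together call for 72 spokes

An engineering change (say, a new wheel design) then touches only one node of the tree, which is exactly the kind of change and release management that PDM systems exist to control.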

This central role is covered by numerous collaborative product development tools which run throughout the whole lifecycle and across organizations. This requires many technology tools in the areas of conferencing, data sharing and data translation. One such field is product visualization, which includes technologies such as DMU (digital mock-up), immersive virtual digital prototyping (virtual reality), and photo-realistic imaging.

Product and process lifecycle management (PPLM)

Product and process lifecycle management (PPLM) is an alternate genre of PLM in which the process by which the product is made is just as important as the product itself. Typically this applies in the life sciences and advanced specialty chemicals markets, where the process behind the manufacture of a given compound is a key element of the regulatory filing for a new drug application. As such, PPLM seeks to manage information around the development of the process in a similar fashion to the way baseline PLM manages information around the development of the product.


Q4. Explain PMIS. What is the difference between Key Success Factor (KSF) and Knowledge (K) Factor? Explain with example.

Ans.

PMIS

Project management information system


A project management information system (PMIS) is a part of management information systems (MIS) and manages the information of a project-centric organization. These electronic systems "help [to] plan, execute, and close project management goals." PMIS implementations differ in scope, design and features depending upon an organisation's operational requirements.

Key Success Factors

A key success factor is a performance area of critical importance in achieving consistently high productivity. There are at least two broad categories of key success factors that are common to virtually all organizations: business processes and human processes. Both are crucial to building great companies. Our focus here is primarily on the human processes.

To some extent, every human process is a key success factor. We talk about organizational performance, but in truth, it's people who produce results. Human processes are constantly evolving to fit new technologies and changing circumstances, but every once in a while, major shifts occur that dramatically change what's required in each of the key success areas. We’re experiencing such a shift right now—moving from the industrial age to a knowledge-based economy in a global marketplace.

Globalization and information technology are placing different, challenging demands on leaders and organizations in virtually every key success area. Here are some highlights of these changes:

Leadership "Command and control" leadership carried many organizations to very high levels of financial performance during periods when competition was not so great and things didn't change very fast, but its time has passed. The demands on the total organization are too great for a few people at the top to call all the shots.

Vision

A compelling vision is one of a company's greatest assets. It can be a magnet for attracting talented people. It can serve as a beacon when people temporarily lose their way. It can be a source of energy and inspiration when people are encountering difficult obstacles. The CEO has a primary responsibility to shape, communicate and sustain the vision, but this need not be a solitary task. In fact the more people who can be involved in shaping the vision, the better.

Communication

In most organizations, there have been three pervasive patterns that will no longer work in knowledge-based organizations: (1) the primary flow of information was vertical, within departmental walls that were often impermeable; (2) information was hoarded and used as a source of power over others; and (3) people at the top often withheld crucial strategic information from those lower in the organization in the belief that they couldn't handle it.

Teamwork

Teamwork is more crucial to producing results today than ever before, and at the same time, the very nature of teams and their functions is changing rapidly. In the past it was typical to go for long periods – even an entire career – as the member of one functional team. Today, membership on more than one team is the norm, and it is unlikely that anyone entering the work force will remain on the first team they join for more than a year at most.

Strategic Alignment

Process reengineering and systems thinking are moving strategic alignment back to the top of many corporate agendas. It has become crystal clear that many of the greatest opportunities for productivity improvement lie at the interfaces of the processes used to produce products and serve customers, and it is fruitless to excel in one process while lagging in others. In fact, it's counterproductive.

Conflict Resolution

The new economy increases the potential for conflict in virtually every area of organizational life. Stakeholders are more informed and frequently more demanding. Employees are being asked to do more with less, without the promise of job security that existed in the past. Aligning self-interests with corporate interests is not as simple as it used to be. Alliances, mergers, and acquisitions bring together different cultures and set the stage for major internal conflicts and power struggles. Developing good conflict skills needs to be high on everyone's personal and corporate agendas.

Embracing Change

Individuals and organizations that change before they have to will be the winners in global competition. People vary a lot in their tolerance of change and in the degree to which they actively seek change in their lives. It is difficult to grasp the potential for the continuing acceleration of change on a global scale. With more people having more access to more information, it is reasonable to expect more innovation and more competition on a daily basis. Merely accepting change and learning to tolerate it will not be enough to successfully compete in the next century. We must welcome change as our friend.

Learning Organization

Leaders and managers have always given lip service to the notion of people being their most important asset and to the need for continuous training and development. In most companies, however, it has been no more than a notion. Most have not been consistent in this crucial area. The same company that will spend $5,000 a year to maintain a machine will not spend $500 to develop an employee. Of all the key success areas, this one is changing the most. The future belongs to learners: to individuals who take responsibility for updating their skills and knowledge, to teams that consciously develop the deep dialogue that enables team members to learn from one another, and to organizations that continuously improve their ability to transform data into value-added, actionable information to serve customers.


Q5. Explain the seven principles of supply chain management. Take an example of any product in the market and explain the scenario of Bullwhip effect.

Ans.

The Seven Principles of Supply Chain Management

1. Segment customers based on their service needs: This is about getting to know how best, and most profitably, to service the key types of customers of your product and service offering.

2. Customise your logistics network: Following on from determining which customer segments are most important to you, you will need to customise the logistics network to the service requirements and profitability for each of them.

3. Drive operations from demand: Listen to market signals and align demand planning to ensure optimal resource allocation.

4. Differentiate product closer to the customer: Differentiate product and services closer to the customer to speed conversion across the supply chain.

5. Source strategically: Manage sources of supply strategically to reduce the total cost of acquiring and owning materials and services.

6. Develop a supply chain-wide technology strategy: Support multiple levels of decision-making and give a clear view of the flow of products, services and information.

7. Use supply chain spanning performance measures: Gauge collective (that is, together with your trading partners) success in reaching the end-user effectively and efficiently.

It is extremely important to know what your customers really need and, more importantly, what they are willing to pay for. This information allows you to separate them into distinct categories in terms of their demand patterns and service requirements, and then build your supply chain strategy according to their real needs. This principle, customer segmentation, is the first step in creating an efficient supply chain for your organisation.

Customise your logistics network

The next challenge is to use this newfound information to create an efficient and complete fulfillment process, starting from the time of a client’s query and continuing through to the final collection of payment for a purchase.


It seems to us that the most obvious place to concentrate is on configuring your logistics network for the segments that you have identified. The range of possibilities is as varied as the types of businesses that exist, as shown by the following examples of three very different segments:

• Sophisticated organisations with high-volume, high-frequency order and delivery, reasonably few products, and little in the way of value-added services required; example: the packaging purchases by bottlers of soft drinks

• Smaller organisations with low-volume, low-frequency order and delivery and probably a wide range of products; these customers may find value-added services (like Vendor Managed Inventory) appealing; example: pharmaceutical purchases by a small clinic

• Firms who run very large but infrequent major events; example: brewers who supply clients in the sports or entertainment industry

It becomes quite obvious that these three kinds of customers need to be serviced in very different ways, for both their benefit and yours. Since it is quite likely that you cannot afford to set up whole, independent supply chains for each of them, how can the same facilities, procedures, people and systems deal really effectively and efficiently with such extremes? The options for creating effective fulfillment are often a continuation of the working relationship you have with the customer, where dialogue helps create a win-win situation. Here are the key foundations that would lead your organisation to excel in the fulfillment process:

• Manufacturing: manufacturing is driven by time, efficiency and customer needs. Small production volumes can be produced at low cost, provided that manufacturing options are understood and kept open and flexible, whether in-house or outsourced. Continuous monitoring of the marketplace allows you to select the option that best suits your environment

• Inventory: implementation of a joint manufacturer/customer policy for inventory such as Just In Time (JIT) or Vendor Managed Inventory (VMI) is encouraged. Inventory should be managed in real time and known at all times

• Warehousing: the warehousing network is fully aligned to meeting customer needs at the lowest delivered cost and customers can select their own delivery options on each order

• Transportation: proactive management of mixed-mode transportation (air, ship, road), with the trade-offs in inventory, freight costs and customer service fully understood. Transportation costs are optimised across the entire supply chain. Deliveries across suppliers, manufacturers and distributors are coordinated and leveraged. Customer orders from different divisions are merged efficiently. Warehouse cross-docks are used. Use of third-party logistics providers allows full-truckload economics compared with less-than-truckload (LTL) shipments from individual manufacturers


• Performance monitoring: perfect order metrics are used and monitored. Delivery/order accuracy is recorded and used proactively by customer service personnel. Orders are generated centrally by the manufacturer and reviewed by the customer as part of the Vendor Managed Inventory process

Flexibility and responsiveness to the specific needs of the client – in your manufacturing system, inventory management and logistics network – are the key drivers of an efficient fulfillment process. With these indispensable factors, you can have “virtual” separate supply chains for your customers in all the different segments.

Drive operations from demand

Looking at this next principle, the obvious question is: “What do we mean by driving operations from demand – isn’t that what everybody normally does? What else could be used to plan the whole business if not needs from the customers?” While most companies think that they are planning and driving their business from customer demand, it is more likely that it is actually the demand ‘forecast’ that is being used; and there is one thing about the demand forecast that holds steady – it is always wrong, either by a few degrees or by leaps and bounds. So, how well could we operate the business or allocate our resources if we use the demand forecast figure, the always-wrong figure, instead of the real demand? How would you know if you currently plan your operation based on the forecast? Following are a few key things that you may have experienced; these are NOT good things:

• There is a supply chain for each business unit. Supply chain units within the company plan and forecast independently

• Sales forecasting is based on historical customer sales and spreadsheet analysis. Many forecasts are developed by the different groups in the supply chain – there is no one common forecast that drives all supply chain functions

• Salespeople are responsible for developing the sales forecasts but they are rewarded on their ability to exceed the forecast

• Understanding of customer markets is predominantly based on sales history and general macro-economic measures

• Production overstretches to respond to every order. Emergency orders become normal practice. A ‘big-brain’ planner uses experience to plan manually. In times of shortage, customer orders are allocated on a ‘first come, first served’ basis with some prioritisation of key accounts

All of us know that, if we could, we should respond immediately to customer demand and use that information to trigger our planning process, sourcing process and resource allocation, back along our supply chain.
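The question above also asks for a bullwhip scenario. Take bottled soft drinks as the product: consumer demand at the retailer barely moves, but if each tier up the chain (retailer, wholesaler, distributor, bottler) orders on a naive forecast of "latest demand plus an allowance for the latest change", the order swings grow at every step. The Python sketch below is purely illustrative; the ordering rule and the numbers are invented to demonstrate the amplification, not taken from any real supply chain:

import random

random.seed(1)
demand = [100 + random.randint(-5, 5) for _ in range(20)]   # steady consumer demand

def tier_orders(incoming):
    """Naive rule: order the latest demand plus twice the latest change
    (a crude forecast-plus-safety-stock reaction)."""
    orders, prev = [], incoming[0]
    for d in incoming:
        orders.append(max(0, d + 2 * (d - prev)))
        prev = d
    return orders

stream = demand
print("consumer     order range:", max(stream) - min(stream))
for tier in ["retailer", "wholesaler", "distributor", "bottler"]:
    stream = tier_orders(stream)
    print(f"{tier:12s} order range:", max(stream) - min(stream))
# The printed range widens tier by tier: small retail fluctuations become
# large, erratic production swings at the bottler - the bullwhip effect.

Planning from real demand signals, as this principle recommends, attacks exactly this amplification.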


THANK YOU