
MC0072 – Computer Graphics

Set-1

1. Explain the development of hardware and software for computer graphics.

Ans:

Development of Hardware and Software for Computer Graphics

The development of computer graphics hardware involves the development of input and output device technology. Overall, therefore, the development of computer graphics involves developments in three fields:

1. Output technology

2. Input technology and

3. Software technology

1 Output Technology

Fig. 1.3 shows the historical development of output technology. In the early days of computing, hardcopy devices such as the teletype printer and the line printer were in use with computer-driven CRT displays. In the mid-fifties, command-and-control CRT display consoles were introduced. The display devices developed in the mid-sixties, and in common use until the mid-eighties, are called vector, stroke, line-drawing or calligraphic displays. The term vector is used as a synonym for line; a stroke is a short line, and characters are made of sequences of such strokes.

Fig. 1.3


2 Input Technology

Input technology has also improved greatly over the years. A number of input devices were developed, including punch cards, light pens, keyboards, tablets, the mouse and scanners.

3 Software Technology

Like output and input technology, there has been a lot of development in software technology. In the early days only low-level software was available. Over the years, software technology moved from low-level to device-dependent and then to device-independent packages. The device-independent packages are high-level packages which can drive a wide variety of display and printer devices. The need for device-independent packages led to standardization, and specifications were decided. The first graphics specification to be officially standardized was GKS (the Graphical Kernel System). GKS supports the grouping of logically related primitives such as lines, polygons, and character strings and their attributes in collected form called segments. In 1988, a 3D extension of GKS became an official standard, as did a much more sophisticated but even more complex graphics system called PHIGS (Programmer's Hierarchical Interactive Graphics System).

PHIGS, as its name implies, supports nested hierarchical grouping of 3D primitives, called structures. In PHIGS, all primitives are subjected to geometric transformations such as scaling, rotation and translation to accomplish dynamic movement. PHIGS also supports a database of structures the programmer may edit and modify. PHIGS automatically updates the display whenever the database has been modified.

2. Explain the use of video lookup table

Ans:

Video lookup table

In a color raster system, the number of color choices available depends on the amount of storage provided per pixel in the frame buffer. Also, color information can be stored in the frame buffer in two ways: we can store color codes directly in the frame buffer, or we can put the color codes in a separate table and use pixel values as an index into this table. With the direct storage scheme, whenever a particular color code is specified in an application program, the corresponding binary value is placed in the frame buffer for each component pixel in the output primitives to be displayed in that color. A minimum number of colors can be provided in this scheme with 3 bits of storage per pixel, as shown in Table 2.5.


Each of the three bit positions is used to control the intensity level (either on or off) of the corresponding electron gun in an RGB monitor. The leftmost bit controls the red gun, the middle bit controls the green gun, and the rightmost bit controls the blue gun. Adding more bits per pixel to the frame buffer increases the number of color choices. With 6 bits per pixel, 2 bits can be used for each gun. This allows four different intensity settings for each of the three color guns, and a total of 64 color values are available for each screen pixel. With a resolution of 1024 by 1024, a full-color (24 bits per pixel) RGB system needs 3 megabytes of storage for the frame buffer. Color tables are an alternative means of providing extended color capabilities to a user without requiring large frame buffers. Lower-cost personal computer systems, in particular, often use color tables to reduce frame-buffer storage requirements.

Color tables

In color displays, 24 bits per pixel are commonly used, where 8 bits represent 256 levels for each color. Here it is necessary to read 24 bits for each pixel from the frame buffer, which is very time consuming. To avoid this, the video controller uses a look-up table (LUT) to store many entries of pixel values in RGB format. With this facility, it is now necessary only to read the index into the look-up table from the frame buffer for each pixel. This index specifies one of the entries in the look-up table. The specified entry in the look-up table is then used to control the intensity or color of the CRT.

Usually, the look-up table has 256 entries. Therefore, the index into the look-up table has 8 bits, and hence for each pixel the frame buffer has to store 8 bits instead of 24 bits. Fig. 2.6 shows the organization of a color (video) look-up table.


Organization of a Video look-up table

There are several advantages in storing color codes in a lookup table. Use of a color table can provide a "reasonable" number of simultaneous colors without requiring large frame buffers. For most applications, 256 or 512 different colors are sufficient for a single picture. Also, table entries can be changed at any time, allowing a user to experiment easily with different color combinations in a design, scene, or graph without changing the attribute settings for the graphics data structure. In visualization and image-processing applications, color tables are a convenient means of setting color thresholds so that all pixel values above or below a specified threshold can be set to the same color. For these reasons, some systems provide both capabilities for color-code storage, so that a user can elect either to use color tables or to store color codes directly in the frame buffer.
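As a rough illustration of the indexing described above, a minimal C sketch is given below. The buffer size (1024 × 1024, 8 bits per pixel), the type names and the pixel_color routine are assumptions for the sake of the example; in a real system the look-up is done by the video controller hardware.

```c
#include <stdint.h>

/* Illustrative only: an 8-bit frame buffer whose entries index a
   256-entry RGB look-up table. Sizes, types and names are assumed. */
typedef struct { uint8_t r, g, b; } RGB;

static RGB lut[256];                        /* the video look-up table   */
static uint8_t frame_buffer[1024 * 1024];   /* one 8-bit index per pixel */

/* Resolve the colour the controller would send to the CRT for pixel (x, y). */
RGB pixel_color(int x, int y, int width)
{
    uint8_t index = frame_buffer[y * width + x];  /* read 8 bits, not 24 */
    return lut[index];                            /* table supplies full RGB */
}
```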

3. Explain the steps in DDA line drawing algorithm

Ans:

DDA Line Drawing Algorithm

Step 1:[Determine The Dx & Dy]

Dx=Xb-Xa

Dy=Yb-Ya

Step 2:[Determine Slope is Gentle or Sharp]

If |Dx|>|Dy| then

Gentle Slope

M=Dy/Dx

Set The Starting Point

If Dx>0 Then

C=CEILING(Xb)

D=Yb+M*(C-Xb)

F=FLOOR(Xa)


R=FLOOR(D+0.5)

H=R-D+M

IF M>0 Then

Positive Gentle Slope

M1=M-1

Now Step Through The Columns

WHILE C<=F
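The listed steps stop part-way through the column loop. A complete minimal sketch of the same basic idea (step along the major axis in unit intervals and round the other coordinate at each step) is given below; plot_pixel is a placeholder for whatever pixel-writing routine the display system provides.

```c
#include <math.h>

void plot_pixel(int x, int y);  /* assumed to be provided by the display system */

/* Minimal DDA sketch: unit steps along the major axis, rounding the other axis. */
void dda_line(double xa, double ya, double xb, double yb)
{
    double dx = xb - xa, dy = yb - ya;
    int steps = (fabs(dx) > fabs(dy)) ? (int)fabs(dx) : (int)fabs(dy);

    if (steps == 0) {                      /* both end points coincide */
        plot_pixel((int)floor(xa + 0.5), (int)floor(ya + 0.5));
        return;
    }

    double x_inc = dx / steps;             /* one of these has magnitude 1 */
    double y_inc = dy / steps;
    double x = xa, y = ya;

    for (int i = 0; i <= steps; i++) {
        plot_pixel((int)floor(x + 0.5), (int)floor(y + 0.5));
        x += x_inc;
        y += y_inc;
    }
}
```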

4. Explain the steps in the scan line algorithm for filling the polygon

Ans:

Scan Line algorithm

Recursive algorithms for seed fill methods have two difficulties. The first difficulty is that if some inside pixels are already displayed in the fill color, the recursive branch terminates, leaving further internal pixels unfilled. To avoid this difficulty, we first have to change the color of any internal pixels that are initially set to the fill color before applying the seed fill procedure. The second difficulty with recursive seed fill methods is that they cannot be used for large polygons, because recursive seed fill procedures require stacking of neighboring points. To avoid this problem a more efficient method can be used. Such a method fills horizontal pixel spans across scan lines, instead of proceeding to 4-connected or 8-connected neighboring points. This is achieved by identifying the rightmost and leftmost pixels of the seed pixel and then drawing a horizontal line between these two boundary pixels. This procedure is repeated, changing the seed pixel above and below the line just drawn, until the complete polygon is filled. With this efficient method we have to stack only a beginning position for each horizontal pixel span, instead of stacking all unprocessed neighboring positions around the current position.


Scan conversion algorithm for polygon filling

1. read n, the number of vertices of polygon

2. read x and y coordinates of all vertices in array x[n] and y[n]

3. find ymin and ymax

4. Store the initial x value (x1), the y values y1 and y2 for the two end points, and the x increment from scan line to scan line for each edge in the array edges[n][4]. While doing this, check that y1 > y2; if not, interchange y1 and y2 and the corresponding x1 and x2 so that for each edge, y1 represents its maximum y coordinate and y2 represents its minimum y coordinate.

5. Sort the rows of the array edges[n][4] in descending order of y1, descending order of y2 and ascending order of x2.

6. Set y=ymax

7. Find the active edges and update active edge list:

8. If(y > y2 and y <=y1)

9. { edge is active}

10. Else

11. { edge is not active}

12. Compute the x intersects for all active edges for current y value. Initially x intersect is x1 and x intersects for successive y values can be given as

13. xi+1 = xi − 1/m, where 1/m = (x2 − x1)/(y2 − y1) and m = (y2 − y1)/(x2 − x1), i.e. the slope of the line segment

14. If the x intersect is a vertex, i.e. x-intersect = x1 and y = y1, then apply the vertex test to check whether to consider one intersect or two intersects. Store all x intersects in the x-intersects[ ] array.

15. Sort x-intersects [ ] array in the ascending order

16. Extract pairs of intersects from the sorted x-intersect [ ] array

17. Pass pairs of x values to line drawing routine to draw corresponding line segments

18. Set y=y-1


19. Repeat steps 7 through 18 while y >= ymin

20. Stop
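A compact C sketch of these steps is given below. It keeps the edge table and the active-edge test (y2 < y <= y1) from the steps above, but omits the vertex test of step 14 for brevity; draw_hline stands in for the line-drawing routine of step 17, and the data layout is only one possible choice.

```c
#include <stdlib.h>

void draw_hline(int x1, int x2, int y);  /* assumed span-drawing routine */

/* One edge record, mirroring edges[n][4]: y at the upper end, y at the
   lower end, x at the upper end, and the change in x per scan line down. */
typedef struct { double ymax, ymin, xtop, dx; } Edge;

static int cmp_double(const void *a, const void *b)
{
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

void scanline_fill(const double px[], const double py[], int n)
{
    Edge *edges = malloc(n * sizeof(Edge));
    double *xs = malloc(n * sizeof(double));
    double ymin = py[0], ymax = py[0];
    int nedges = 0;

    for (int i = 0; i < n; i++) {                 /* build the edge table */
        double x1 = px[i], y1 = py[i];
        double x2 = px[(i + 1) % n], y2 = py[(i + 1) % n];
        if (py[i] < ymin) ymin = py[i];
        if (py[i] > ymax) ymax = py[i];
        if (y1 == y2) continue;                   /* skip horizontal edges */
        if (y1 < y2) {                            /* make (x1, y1) the top end */
            double t;
            t = x1; x1 = x2; x2 = t;
            t = y1; y1 = y2; y2 = t;
        }
        edges[nedges].ymax = y1;
        edges[nedges].ymin = y2;
        edges[nedges].xtop = x1;
        edges[nedges].dx = (x2 - x1) / (y1 - y2); /* x change per scan line down */
        nedges++;
    }

    for (double y = ymax; y >= ymin; y -= 1.0) {  /* step down one scan line */
        int k = 0;
        for (int i = 0; i < nedges; i++)          /* active edge: y2 < y <= y1 */
            if (y > edges[i].ymin && y <= edges[i].ymax)
                xs[k++] = edges[i].xtop + (edges[i].ymax - y) * edges[i].dx;
        qsort(xs, k, sizeof(double), cmp_double); /* sort x intersections */
        for (int i = 0; i + 1 < k; i += 2)        /* fill between pairs */
            draw_hline((int)(xs[i] + 0.5), (int)(xs[i + 1] + 0.5), (int)y);
    }
    free(edges);
    free(xs);
}
```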

5. Explain the Cyrus-Beck algorithm for generalized line clipping

Ans:

Generalized Clipping with the Cyrus-Beck Algorithm

The algorithms explained above assume that the clipping window is a regular rectangle, so they are not applicable to non-rectangular clipping windows. Cyrus and Beck have developed a generalized line clipping algorithm which is applicable to an arbitrary convex region. This algorithm uses the parametric equation of a line segment to find the intersection points of a line with the clipping edges. The parametric equation of a line segment from P1 to P2 is

P(t) = P1 + (P2 − P1) t,   0 ≤ t ≤ 1   … (5.2)

where t is a parameter; t = 0 at P1 and t = 1 at P2.

Consider a convex clipping region R, f is a boundary point of the convex region R and n is an inner normal for one of its boundaries, as shown in the Fig. 5.8.

Fig. 5.8: convex region, boundary point and inner normal

Then we can distinguish in which region a point lies by looking at the value of the dot product n · [P(t) − f], as shown in Fig. 5.9.

1. If dot product is negative, i.e.

n · [P(t) − f] < 0   … (5.3)


Then the vector P(t) –f is pointed away from the interior of R.

2. If dot product is zero, i. e.,

n · [P(t) − f] = 0   … (5.4)

Then P(t) – f is pointed parallel to the plane containing f and perpendicular to the normal.

3. If dot product is positive, i.e.

n · [P(t) − f] > 0

Then the vector P(t) – f is pointed towards the interior of R, as shown in fig. 5.9.

Fig. 5.9: Dot products for 3 points inside, outside and on the boundary of the clipping region

As shown in the Fig. 5.9, if the point lies in the boundary plane or edge for which n is the inner normal, then that point t on the line P(t) which satisfies n.[P(t) –f ] = 0 condition is the intersection of the line with the boundary edge.
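A compact sketch of how this dot-product test drives the clipping is given below. The edge representation (one boundary point and one inner normal per edge), the type names and the handling of degenerate cases are assumptions made for the example, not a definitive implementation.

```c
typedef struct { double x, y; } Vec2;

static double dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }
static Vec2 sub(Vec2 a, Vec2 b)   { Vec2 r = { a.x - b.x, a.y - b.y }; return r; }

/* Cyrus-Beck sketch: clip segment P1->P2 against a convex region given by
   one boundary point f[i] and one INNER normal n[i] per edge. Returns 1 and
   the visible parameter range [t_enter, t_leave] if any part is visible. */
int cyrus_beck(Vec2 p1, Vec2 p2, const Vec2 f[], const Vec2 n[], int nedges,
               double *t_enter, double *t_leave)
{
    Vec2 d = sub(p2, p1);                          /* direction P2 - P1      */
    double tE = 0.0, tL = 1.0;

    for (int i = 0; i < nedges; i++) {
        double denom = dot(n[i], d);               /* n . (P2 - P1)          */
        double num   = dot(n[i], sub(p1, f[i]));   /* n . (P(0) - f)         */
        if (denom == 0.0) {                        /* segment parallel to edge */
            if (num < 0.0) return 0;               /* entirely outside: reject */
            continue;
        }
        double t = -num / denom;                   /* where n . (P(t) - f) = 0 */
        if (denom > 0.0) { if (t > tE) tE = t; }   /* potentially entering   */
        else             { if (t < tL) tL = t; }   /* potentially leaving    */
    }
    if (tE > tL) return 0;                         /* no visible portion     */
    *t_enter = tE;
    *t_leave = tL;
    return 1;
}
```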

6. Derive the 3-D transformation matrix to transform world coordinates to viewing coordinates

Ans:

3-D transformation matrix

The 3D viewing process is inherently more complex than the 2D viewing process. In two-dimensional viewing we have a 2D window and a 2D viewport, and objects in the world coordinates are clipped against the window and then transformed into the viewport for display. The complexity added in three-dimensional viewing comes from the added dimension and from the fact that, even though the objects are three dimensional, the display devices are only 2D.

The mismatch between 3D objects and 2D displays is compensated by introducing projections. The projections transform 3D objects into a 2D projection plane. The Fig. 7.1 shows the conceptual model of the 3D transformation process.

In 3D viewing, we specify a view volume in the world coordinates using modeling transformation. The world coordinate positions of the objects are then converted into viewing coordinates by viewing transformation. The projection transformation is then used to convert 3D description of objects in viewing coordinates to the 2D projection coordinates. Finally, the workstation transformation transforms the projection coordinates into the device coordinates.

Specifying an Arbitrary 3D View

As mentioned earlier, we can view the object from the side, or the top, or even from behind. Therefore, it is necessary to choose a particular view for a picture by first defining a view plane. A view plane is nothing but the film plane in a camera which is positioned and oriented for a particular shot of the scene. World coordinate positions in the scene are transformed to viewing coordinates, then viewing coordinates are projected onto the view plane. A view plane can be defined by establishing the viewing – coordinate system or view reference coordinate system, as shown in the Fig. 7.2.


Fig. 7.2: Right handed viewing coordinate system

The first viewing parameter we must consider is the view reference point. This point is the center of our viewing coordinate system. It is often chosen to be close to or to the surface of some object in a scene. Its coordinates are specified as XW, YW and ZW.

The next viewing parameter is a view-plane normal vector, N. This normal vector is the direction perpendicular to the view plane and it is defined as [DXN DYN DZN]. We know that the view plane is the film in the camera and we focus camera towards the view reference point. This means that the camera is pointed in the direction of the view plane normal. This is illustrated in Fig. 7.3

Fig. 7.3: View reference point and View plane normal vector

As shown in the Fig. 7.3, the view plane normal vector is a directed line segment from the view plane to the view reference point. The length of this directed line segment is referred to as the view distance. This is another viewing parameter. It tells how far the camera is positioned from the view reference point. In other words, we can say that a view plane is positioned view-distance away from the view reference point in the direction of the view plane normal. This is illustrated in Fig. 7.4.

Fig. 7.4: 3D viewing parameters

As shown in the Fig. 7.4 we have world coordinate system which we used to model our object, and we have view plane coordinates, which are attached to the view plane.

It is possible to obtain the different views by rotating the camera about the view plane normal vector and keeping view reference point and direction of N vector fixed, as shown in the Fig. 7.5

Fig. 7.5: Rotating View plane

Fig. 7.6: Viewing object by changing view plane vector N

At different angles, the view plane will show the same scene, but rotated so that a different part of the object is up. The rotation of a camera or view plane is specified by a view-up vector V [XUP YUP ZUP] which is another important viewing parameter.


We can also obtain a series of views of a scene by keeping the view reference point fixed and changing the direction of N, as shown in Fig. 7.6. Changing the view plane normal changes the orientation of the camera or view plane, giving different views.
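The viewing parameters above determine the world-to-viewing transformation: translate the view reference point to the origin and rotate so that the axes derived from N and V line up with the coordinate axes. A hedged sketch of one common construction follows; conventions for axis orientation differ between texts, and the vector helpers and the column-vector matrix layout are assumptions of this example.

```c
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static Vec3 cross(Vec3 a, Vec3 b)
{ Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; return r; }
static Vec3 normalize(Vec3 a)
{ double l = sqrt(a.x*a.x + a.y*a.y + a.z*a.z); Vec3 r = { a.x/l, a.y/l, a.z/l }; return r; }

/* Build a 4x4 world-to-viewing matrix (column-vector convention) from the
   view reference point VRP, view plane normal N and view-up vector V. */
void world_to_viewing(Vec3 vrp, Vec3 N, Vec3 V, double M[4][4])
{
    Vec3 n = normalize(N);            /* viewing z axis along the normal */
    Vec3 u = normalize(cross(V, n));  /* viewing x axis                  */
    Vec3 v = cross(n, u);             /* viewing y axis (already unit)   */

    Vec3 axes[3] = { u, v, n };
    for (int i = 0; i < 3; i++) {
        /* rotation rows align (u, v, n) with (x, y, z) ...               */
        M[i][0] = axes[i].x; M[i][1] = axes[i].y; M[i][2] = axes[i].z;
        /* ... and the last column translates the VRP to the origin       */
        M[i][3] = -(axes[i].x * vrp.x + axes[i].y * vrp.y + axes[i].z * vrp.z);
        M[3][i] = 0.0;
    }
    M[3][3] = 1.0;
}
```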

7. Explain random scan display system.

Ans:

Random scan system:

1) Random-scan displays have high resolution, since the picture definition is stored as a set of line-drawing commands and not as a set of intensity values.

2) Smooth lines are produced, as the electron beam directly follows the line path.

3) Realism is difficult to achieve.

4) Random-scan systems are generally costlier.

A Video Display Controller (VDC) is an integrated circuit which is the main component in a video signal generator, a device responsible for the production of a TV video signal in a computing or game system. Some VDCs also generate an audio signal, but in that case it is not their main function. VDCs were most often used in the old home computers of the 80s, but also in some early video game systems.

8. Explain the steps in midpoint circle drawing algorithm

Ans:

Midpoint circle drawing Algorithm

The midpoint circle drawing algorithm also uses the eight-way symmetry of the circle to generate it. It plots 1/8 of the circle, i.e. from 90° to 45°, as shown in figure 3.18. As the circle is drawn from 90° to 45°, x moves in the positive direction and y moves in the negative direction. To draw this 1/8 part of the circle, we take unit steps in the positive x direction and make use of a decision parameter to determine which of the two possible y positions is closer to the circle path at each step.


Fig. 3.18: Decision parameter to select the correct pixel in the circle generation algorithm

In the above figure the two possible y positions are yi and yi − 1 at xi + 1. Therefore we have to determine whether the pixel at (xi + 1, yi) or at (xi + 1, yi − 1) is closer to the circle. For this purpose a decision parameter is used. It uses the circle function f(x, y) = x² + y² − r², evaluated at the midpoint between these two pixels:

di = f(xi + 1, yi − 1/2) = (xi + 1)² + (yi − 1/2)² − r²

If di < 0, the midpoint is inside the circle and the pixel on scan line yi is closer to the circle boundary. If di >= 0, the midpoint is outside or on the circle boundary, and yi − 1 is closer to the circle boundary. We then calculate di+1. There are two cases:

Case 1: if di < 0 then

di+1 = di + 2xi+1 + 1

Case 2: if di >= 0 then

di+1 = di + 2xi+1 + 1 − 2yi+1


x=x+1

d=d+2x+1

}

Else

{

x=x+1

y=y-1

d=d+2x+1-2y

} while(x < y)

5. determine symmetry points

6. stop
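Putting the pieces together, a complete minimal sketch of the algorithm, including the eight-way symmetry of step 5, might look like the following; plot_pixel stands in for whatever pixel-writing routine the system provides, and the starting decision parameter d = 1 − r is one common integer choice.

```c
void plot_pixel(int x, int y);  /* assumed pixel-writing routine */

/* Plot the eight symmetric points of (x, y) about the centre (xc, yc). */
static void plot_circle_points(int xc, int yc, int x, int y)
{
    plot_pixel(xc + x, yc + y); plot_pixel(xc - x, yc + y);
    plot_pixel(xc + x, yc - y); plot_pixel(xc - x, yc - y);
    plot_pixel(xc + y, yc + x); plot_pixel(xc - y, yc + x);
    plot_pixel(xc + y, yc - x); plot_pixel(xc - y, yc - x);
}

/* Midpoint circle sketch: walk from (0, r) to the 45 degree point,
   choosing between y and y-1 with the integer decision parameter. */
void midpoint_circle(int xc, int yc, int r)
{
    int x = 0, y = r;
    int d = 1 - r;                    /* initial decision parameter */

    plot_circle_points(xc, yc, x, y);
    while (x < y) {
        x = x + 1;
        if (d < 0) {
            d = d + 2 * x + 1;        /* midpoint inside: keep y         */
        } else {
            y = y - 1;
            d = d + 2 * (x - y) + 1;  /* midpoint outside: step to y - 1 */
        }
        plot_circle_points(xc, yc, x, y);
    }
}
```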

9. Explain the various types of parallel projections.

Ans:

Parallel Projection

In parallel projection, the z coordinate is discarded and parallel lines from each vertex on the object are extended until they intersect the view plane. The point of intersection is the projection of the vertex. We connect the projected vertices by line segments which correspond to connections on the original object.

Types of Parallel Projections

Fig. 7.10


Parallel projections are basically categorized into two types, depending on the relation between the direction of projection and the normal to the view plane. When the direction of the projection is normal (perpendicular) to the view plane, we have an orthographic parallel projection. Otherwise, we have an oblique parallel projection. Fig. 7.10 illustrates the two types of parallel projection.

7.5.3.1 Orthographic Projection

As shown in Fig. 7.10 (a), the most common types of orthographic projections are the front projection, top projection and side projection. In all these, the projection plane (view plane) is perpendicular to a principal axis. These projections are often used in engineering drawing to depict machine parts, assemblies, buildings and so on.

The orthographic projection can display more than one face of an object. Such an orthographic projection is called an axonometric orthographic projection. It uses projection planes (view planes) that are not normal to a principal axis. They resemble the perspective projection in this way, but differ in that the foreshortening is uniform rather than being related to the distance from the center of projection. Parallelism of lines is preserved but angles are not. The most commonly used axonometric orthographic projection is the isometric projection.

The isometric projection can be generated by aligning the view plane so that it intersects each coordinate axis in which the object is defined at the same distance from the origin. As shown in Fig. 7.11, the isometric projection is obtained by aligning the projection vector with the cube diagonal. It uses the useful property that all three principal axes are equally foreshortened, allowing measurements along the axes to be made to the same scale (hence the name: iso for equal, metric for measure).

Fig. 7.11: Isometric projection of an object onto a viewing plane

7.5.3.2 Oblique Projection

An oblique projection is obtained by projecting points along parallel lines that are not perpendicular to the projection plane. Fig. 7.10(b) shows the oblique projection. Notice that the view plane normal and the direction of projection are not the same. The oblique projections are further classified as the cavalier and cabinet projections. For the cavalier projection, the direction of projection makes a 45° angle with the view plane. As a result, the projection of a line perpendicular to the view plane has the same length as the line itself; that is, there is no foreshortening. Fig. 7.12 shows cavalier projections of a unit cube.

Fig. 7.12: Cavalier Projections of the unit cube

When the direction of projection makes an angle of arctan(2) ≈ 63.4° with the view plane, the resulting view is called a cabinet projection. For this angle, lines perpendicular to the viewing surface are projected at one-half their actual length. Cabinet projections appear more realistic than cavalier projections because of this reduction in the length of perpendiculars. Fig. 7.13 shows examples of cabinet projections of a cube.

Fig. 7.13: Cabinet projections of the Unit Cube
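For illustration, one common parameterization treats the orthographic case as simply dropping z and the oblique case as a depth-proportional shear, with shear length L = 1 giving a cavalier projection and L = 0.5 a cabinet projection. The sketch below uses that parameterization; the type and function names are assumptions of the example.

```c
#include <math.h>

typedef struct { double x, y, z; } Point3;
typedef struct { double x, y; } Point2;

/* Orthographic projection onto the xy view plane: simply drop z. */
Point2 orthographic(Point3 p)
{
    Point2 q = { p.x, p.y };
    return q;
}

/* Oblique projection sketch: shear each point by an amount proportional to
   its depth. L = 1 -> cavalier (no foreshortening); L = 0.5 -> cabinet
   (perpendiculars halved). phi is the angle the projected z axis makes
   with the horizontal on the view plane. */
Point2 oblique(Point3 p, double L, double phi)
{
    Point2 q = { p.x + p.z * L * cos(phi), p.y + p.z * L * sin(phi) };
    return q;
}
```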

10. Derive the transformation matrix for rotation about an arbitrary plane

Ans:

Transformation matrix for rotation about an arbitrary axis

The standard way to compute a rotation about an arbitrary axis through the origin is to concatenate five rotation matrices. This note shows how vector algebra makes it easy to rotate about an arbitrary axis in a single step. Figure 1 shows a point P which we want to rotate by an angle θ about an axis that passes through B with a direction defined by the unit vector n. So, given the angle θ, the unit vector n, and Cartesian coordinates for the points P and B, we want to find Cartesian coordinates for the point P′.


Figure 1: Rotation about an Arbitrary Axis

The key insight needed is shown in Figure 2.

Figure 2: Key Insight

Let u and v be any two three-dimensional vectors that satisfy u · v = 0 (that is, they are perpendicular) and |u| = |v| ≠ 0 (that is, they are the same length but not necessarily unit vectors). We want to find a vector r that is obtained by rotating u by an angle θ in the plane defined by u and v. As suggested in Figure 2,

r = u cos θ + v sin θ.   (1)

With that insight, it is easy to compute a rotation about an arbitrary axis. Referring to Figure 3, we compute


Figure 3: Rotation about an Arbitrary Axis

C = B + [(P − B) · n] n   (2)

u = P − C   (3)

v = n × u   (4)

Then r is computed using equation (1), and

P′ = C + r.   (5)

It is possible to take these simple vector equations and to create from them a single 4 × 4 transformation matrix for rotation about an arbitrary axis. Let P = (x, y, z), P′ = (x′, y′, z′), B = (Bx, By, Bz), and n = (nx, ny, nz). We seek a 4 × 4 matrix M such that

(x, y, z, 1) · M = (x′, y′, z′, 1).

(Cx, Cy, Cz) = (Bx, By, Bz) + [x·nx + y·ny + z·nz − B·n] (nx, ny, nz)   (6)

Cx = x·nx² + y·nx·ny + z·nx·nz + Bx − (B·n)·nx   (7)

Cy = x·nx·ny + y·ny² + z·ny·nz + By − (B·n)·ny   (8)

Cz = x·nx·nz + y·ny·nz + z·nz² + Bz − (B·n)·nz   (9)

u = (x, y, z) − (Cx, Cy, Cz)   (10)

ux = x(1 − nx²) − y·nx·ny − z·nx·nz + (B·n)·nx − Bx   (11)

uy = −x·nx·ny + y(1 − ny²) − z·ny·nz + (B·n)·ny − By   (12)

uz = −x·nx·nz − y·ny·nz + z(1 − nz²) + (B·n)·nz − Bz   (13)

vx = ny·uz − nz·uy   (14)

vy = nz·ux − nx·uz   (15)

vz = nx·uy − ny·ux   (16)

rx = ux·cos θ + (ny·uz − nz·uy)·sin θ   (17)

ry = uy·cos θ + (nz·ux − nx·uz)·sin θ   (18)

rz = uz·cos θ + (nx·uy − ny·ux)·sin θ   (19)

(x′, y′, z′) = (Cx + rx, Cy + ry, Cz + rz)   (20)

x′ = x·nx² + y·nx·ny + z·nx·nz + Bx − (B·n)·nx   (21)
  + [x(1 − nx²) − y·nx·ny − z·nx·nz + (B·n)·nx − Bx]·cos θ   (22)
  + ny·[−x·nx·nz − y·ny·nz + z(1 − nz²) + (B·n)·nz − Bz]·sin θ   (23)
  − nz·[−x·nx·ny + y(1 − ny²) − z·ny·nz + (B·n)·ny − By]·sin θ   (24)

x′ = x[nx² + (1 − nx²)·cos θ] + y[nx·ny(1 − cos θ) − nz·sin θ] + z[nx·nz(1 − cos θ) + ny·sin θ] + (Bx − (B·n)·nx)(1 − cos θ) + (nz·By − ny·Bz)·sin θ   (25)

Since nx² + ny² + nz² = 1, we have (1 − nx²) = ny² + nz². In like manner we can come up with expressions for y′ and z′, and our matrix M is thus obtained, with

T1 = (Bx − (B·n)·nx)(1 − cos θ) + (nz·By − ny·Bz)·sin θ

T2 = (By − (B·n)·ny)(1 − cos θ) + (nx·Bz − nz·Bx)·sin θ

T3 = (Bz − (B·n)·nz)(1 − cos θ) + (ny·Bx − nx·By)·sin θ
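Equations (1) to (5) can also be applied directly, point by point, without building the 4 × 4 matrix. A minimal C sketch of that vector form is given below; the Vec3 type and the helper functions are illustrative assumptions.

```c
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static Vec3 add(Vec3 a, Vec3 b)     { Vec3 r = { a.x+b.x, a.y+b.y, a.z+b.z }; return r; }
static Vec3 sub3(Vec3 a, Vec3 b)    { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
static Vec3 scale(Vec3 a, double s) { Vec3 r = { a.x*s, a.y*s, a.z*s }; return r; }
static double dot3(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross3(Vec3 a, Vec3 b)
{ Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; return r; }

/* Rotate point P by angle theta about the axis through B with unit
   direction n, following equations (1)-(5) above. */
Vec3 rotate_about_axis(Vec3 P, Vec3 B, Vec3 n, double theta)
{
    /* C = B + [(P - B) . n] n : foot of the perpendicular on the axis (eq. 2) */
    Vec3 C = add(B, scale(n, dot3(sub3(P, B), n)));
    Vec3 u = sub3(P, C);          /* eq. 3: radial vector                     */
    Vec3 v = cross3(n, u);        /* eq. 4: perpendicular to u, same length   */
    /* eq. 1 and eq. 5: r = u cos(theta) + v sin(theta), P' = C + r           */
    return add(C, add(scale(u, cos(theta)), scale(v, sin(theta))));
}
```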

Set-2

1. With a neat block diagram explain the conceptual framework of interactive graphics.

Ans:

Conceptual Framework for Interactive Graphics

Fig. 1.9 shows the high-level conceptual framework for interactive graphics. It consists of input and output devices, the graphics system, the application program and the application model. A computer receives input from input devices and outputs images to a display device. The input and output devices are the hardware components of the conceptual framework. There are three software components of the conceptual framework.

These are:

· Application Model

· Application Program and

· Graphic System


Fig. 1.9: Conceptual frame work for interactive graphics

Application Model: The application model captures all the data and objects to be pictured on the screen. It also captures the relationships among the data and objects. These relationships are stored in a database called the application database and are referred to by the application program.

Application program: It creates the application model and communicates with it to receive and store the data and information about object attributes. The application program also handles user input. It produces views by sending a series of graphics output commands to the graphics system.

The application program is also responsible for interaction handling. It does this by event handling loops.

Graphics System: It accepts the series of graphics output commands from application program. The output commands contain both a detailed geometric description of what is to be viewed and the attributes describing how the objects should appear. The graphics system is responsible for actually producing the picture from the detailed descriptions and for passing the user’s input to the application program for processing.

2. Draw and explain the block diagram of typical workstation

Ans:

Graphics Workstation

A graphics workstation is the Graphical Kernel System's (GKS) term for a graphical device that can display graphical output, accept graphical input, or both.

The figure below shows the block diagram of a typical graphics workstation. It consists of a CPU, display processor, memory, display devices, recorder, plotter, joystick, keyboard, light pen, mouse, scanner, etc. The main hardware components of a graphics workstation are the CPU and the display processor. The display processor is also called a graphics controller or a display coprocessor. It frees the CPU from the graphical chores. In addition to the system memory, a separate display processor memory area is provided in graphics workstations. Graphics workstations have a provision to interface video cameras and television sets. The size of the display device, the colors supported by it, and whether it is a raster or line-drawing device are the main properties of the graphics workstation. The graphics workstation is always supported with graphics software. Graphics software acts as a very powerful tool to create scenes, images, pictures and also animated pictures.

block diagram of typical workstation

3. Explain the steps in Bresenham's line drawing algorithm

Ans:

In Bresenham's approach the pixel positions along a line path are determined by sampling at unit x intervals. Starting from the left end point (X0, Y0) of a given line, we step to each successive column and plot the pixel whose scan-line y value is closest to the line path. Assuming that at the k-th step the pixel at (Xk, Yk) has been plotted, we must decide which pixel to plot in column Xk+1. The choices are (Xk+1, Yk) and (Xk+1, Yk+1).

Algorithm

Step 1: Input the line end points and store the left end point in (X0, Y0)

Step 2: Load (X0, Y0) into the frame buffer (plot the first point)

Step 3: Calculate the constants ∆x, ∆y, 2∆y and 2∆y − 2∆x, and obtain the initial decision parameter as P0 = 2∆y − ∆x

Step 4: At each Xk along the line, starting at k = 0, perform the following test: if Pk < 0, the next point to plot is (Xk+1, Yk) and Pk+1 = Pk + 2∆y; otherwise, the next point to plot is (Xk+1, Yk+1) and Pk+1 = Pk + 2∆y − 2∆x

Step 5: Repeat step 4 ∆x times
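A minimal sketch of these steps for the first-octant case (slope between 0 and 1, left end point given first) is shown below; plot_pixel is a placeholder for the system's pixel-writing routine.

```c
void plot_pixel(int x, int y);  /* assumed pixel-writing routine */

/* Bresenham sketch for 0 <= slope <= 1 and x0 < x1, following steps 1-5. */
void bresenham_line(int x0, int y0, int x1, int y1)
{
    int dx = x1 - x0;
    int dy = y1 - y0;
    int p = 2 * dy - dx;           /* initial decision parameter P0 = 2dy - dx */
    int x = x0, y = y0;

    plot_pixel(x, y);
    while (x < x1) {
        x = x + 1;
        if (p < 0) {
            p = p + 2 * dy;               /* stay on the same scan line       */
        } else {
            y = y + 1;
            p = p + 2 * dy - 2 * dx;      /* move up to the next scan line    */
        }
        plot_pixel(x, y);
    }
}
```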

4. Explain the steps required to fill the polygon using flood fill technique

Ans:

Filling Polygons using the flood fill technique

Filling the polygon means highlighting all the pixels which lie inside the polygon with any color other than background color. Polygons are easier to fill since they have linear boundaries.

There are two basic approaches used to fill a polygon. One way is to start from a given "seed point" known to be inside the polygon and highlight outward from this point, i.e. highlight neighboring pixels until we encounter the boundary pixels. This approach is called seed fill because color flows from the seed pixel until reaching the polygon boundary. The other approach is to apply the inside test, i.e. to check whether a pixel is inside or outside the polygon, and then highlight the pixels which lie inside the polygon. This approach is known as the scan line algorithm.

1 Seed fill

The seed fill algorithm is further classified as the flood-fill algorithm and the boundary-fill algorithm. Algorithms that fill interior-defined regions are called flood-fill algorithms; those that fill boundary-defined regions are called boundary-fill algorithms or edge-fill algorithms.

Flood Fill algorithm

Sometimes it is required to fill an area that is not defined within a single color boundary. In such cases we can fill areas by replacing a specified interior color instead of searching for a boundary color. This approach is called the flood fill algorithm. Here we start with some seed and examine the neighboring pixels. However, here pixels are checked for a specified interior color instead of a boundary color, and they are replaced by the new color. Using either a 4-connected or 8-connected approach we can step through pixel positions until all interior points have been filled. The following procedure illustrates the recursive method for filling an 8-connected region using the flood fill algorithm.

Procedure: flood_fill(x,y,old_color, new_color)

{


If (getpixel(x,y) = old_color)

{

putpixel(x,y,new_color)

flood_fill(x+1, y, old_color, new_color)

flood_fill(x-1, y, old_color, new_color)

flood_fill(x, y+1, old_color, new_color)

flood_fill(x, y-1, old_color, new_color)

flood_fill(x+1, y+1, old_color, new_color)

flood_fill(x-1, y-1, old_color, new_color)

flood_fill(x+1, y-1, old_color, new_color)

flood_fill(x-1, y+1, old_color, new_color)

}

}
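A self-contained sketch of the same 8-connected idea is shown below; it works on an in-memory pixel buffer instead of the getpixel/putpixel primitives assumed by the procedure above, and the buffer dimensions and pixel type are illustrative only.

```c
#include <stdint.h>

/* Illustrative buffer: one 8-bit colour value per pixel. */
enum { WIDTH = 640, HEIGHT = 480 };
static uint8_t pixels[HEIGHT][WIDTH];

/* 8-connected flood fill: replace old_color with new_color starting at (x, y). */
void flood_fill8(int x, int y, uint8_t old_color, uint8_t new_color)
{
    if (x < 0 || x >= WIDTH || y < 0 || y >= HEIGHT) return; /* outside raster */
    if (pixels[y][x] != old_color || old_color == new_color) return;

    pixels[y][x] = new_color;               /* replace the interior colour     */
    for (int dy = -1; dy <= 1; dy++)        /* visit all eight neighbours      */
        for (int dx = -1; dx <= 1; dx++)
            if (dx != 0 || dy != 0)
                flood_fill8(x + dx, y + dy, old_color, new_color);
}
```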

5. Write a note on Stroke method and Bitmap method

Ans:

Stroke method

This method uses small line segments to generate a character. The small series of line segments are drawn like a stroke of pen to form a character as shown in the fig. 5.19.


We can build our own stroke method character generator by calls to the line drawing algorithm. Here it is necessary to decide which line segments are needed for each character and then drawing these segments using line drawing algorithm.

Bitmap method

The third method for character generation is the bitmap method. It is also called the dot matrix method because in this method characters are represented by an array of dots in matrix form. It is a two-dimensional array having columns and rows. A 5 × 7 array is commonly used to represent characters, as shown in fig. 5.21. However, 7 × 9 and 9 × 13 arrays are also used. Higher resolution devices such as inkjet printers or laser printers may use character arrays that are over 100 × 100.

Character A in 5 × 7 dot matrix format
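As a small illustration of the bitmap method, the sketch below switches on the pixels of a 5 × 7 pattern for the letter A; the glyph data and the plot_pixel routine are purely illustrative assumptions.

```c
void plot_pixel(int x, int y);  /* assumed pixel-writing routine */

/* Illustrative 5x7 dot-matrix pattern for 'A'; each row is a bit mask of
   the five columns (the exact pattern is only an example). */
static const unsigned char glyph_A[7] = {
    0x0E,  /* .###. */
    0x11,  /* #...# */
    0x11,  /* #...# */
    0x1F,  /* ##### */
    0x11,  /* #...# */
    0x11,  /* #...# */
    0x11   /* #...# */
};

/* Bitmap character generation: switch on the pixels whose bits are set. */
void draw_char_bitmap(int x0, int y0)
{
    for (int row = 0; row < 7; row++)
        for (int col = 0; col < 5; col++)
            if (glyph_A[row] & (1 << (4 - col)))
                plot_pixel(x0 + col, y0 + row);
}
```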

6. Derive the transformation matrix for 2-D viewing transformation

Ans:

Transformation matrix for Two dimensional viewing transformation

The picture is stored in the computer memory using any convenient Cartesian coordinate system, referred to as the world coordinate system (WCS). However, when the picture is displayed on a display device it is measured in the physical device coordinate system (PDCS) corresponding to that device. Therefore, displaying an image of a picture involves mapping the coordinates of the points and lines that form the picture into the appropriate physical device coordinates where the image is to be displayed. This mapping of coordinates is achieved with a coordinate transformation known as the viewing transformation.


The viewing transformation which maps picture coordinates in the WCS to display coordinates in the PDCS is performed by the following transformations:

· Normalization transformation (N) and

· Workstation transformation (W)

1 Normalization Transformation

We know that different display devices may have different screen sizes as measured in pixels. The size of the screen in pixels increases as the resolution of the screen increases. When the picture is defined in pixel values, it is displayed large on a low resolution screen and small on a high resolution screen, as shown in Fig. 6.12. To avoid this, and to make our programs device independent, we have to define the picture coordinates in some units other than pixels and use the interpreter to convert these coordinates to appropriate pixel values for the particular display device. The device independent units are called normalized device coordinates. In these units, the screen measures 1 unit wide and 1 unit high, as shown in Fig. 6.13. The lower-left corner of the screen is the origin, and the upper-right corner is the point (1, 1). The point (0.5, 0.5) is the center of the screen no matter what the physical dimensions or resolution of the actual display device may be.

Picture definition in pixels

Picture definition in normalized device coordinates


The interpreter uses a simple linear formula to convert the normalized device coordinates to the actual device coordinates:

x = xn · XW   … (6.27)

y = yn · YH   … (6.28)

where

x : Actual device x coordinate

y : Actual device y coordinate

xn : Normalized x coordinate

yn : Normalized y coordinate

XW : Width of actual screen in pixels

YH : Height of actual screen in pixels

The transformation which maps the world coordinates to normalized device coordinates is called the normalization transformation. It involves scaling of x and y, so it is also referred to as the scaling transformation.

2 Workstation Transformation

The transformation which maps the normalized device coordinates to physical device coordinates is called the workstation transformation.

The viewing transformation is the combination of normalization transformation and workstation transformations as shown in the Fig. 6.14. It is given as

V = N · W   … (6.29)


Two dimensional viewing transformation

The world coordinate system (WCS) is infinite in extent, while the device display area is finite. Therefore, to perform a viewing transformation we select a finite world coordinate area for display, called a window. An area on a device to which a window is mapped is called a viewport. The window defines what is to be viewed; the viewport defines where it is to be displayed.
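To illustrate the two stages, a minimal sketch is given below; the structure layout and function names are assumptions of the example, and the rounding in the workstation step is just one simple choice.

```c
/* Window and viewport description for the 2D viewing transformation.
   The window is in world coordinates; the viewport is in normalized
   device coordinates (0..1). */
typedef struct {
    double xw_min, xw_max, yw_min, yw_max;   /* window   */
    double xv_min, xv_max, yv_min, yv_max;   /* viewport */
} View2D;

/* Normalization transformation N: map a world point into the viewport. */
void world_to_ndc(const View2D *v, double xw, double yw, double *xn, double *yn)
{
    double sx = (v->xv_max - v->xv_min) / (v->xw_max - v->xw_min);
    double sy = (v->yv_max - v->yv_min) / (v->yw_max - v->yw_min);
    *xn = v->xv_min + (xw - v->xw_min) * sx;
    *yn = v->yv_min + (yw - v->yw_min) * sy;
}

/* Workstation transformation W: normalized coordinates to pixels, as in
   equations (6.27) and (6.28); XW and YH are the screen width and height. */
void ndc_to_device(double xn, double yn, int XW, int YH, int *x, int *y)
{
    *x = (int)(xn * XW + 0.5);
    *y = (int)(yn * YH + 0.5);
}
```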

7. Write a short note on video mixing.

Ans:

Video Mixing

The video controller provides the facility of video mixing, in which it accepts information from two images simultaneously: one from the frame buffer and the other from a television camera, recorder or other source. This is illustrated in fig 2.7. The video controller merges the two received images to form a composite image.

Video mixing

There are two types of video mixing. In the first, a graphics image is set into a video image. Here, mixing is accomplished with hardware that treats a designated pixel value in the frame buffer as a flag to indicate that the video signal should be shown instead of the signal from the frame buffer; normally the designated pixel value corresponds to the background color of the frame buffer image.


In the second type of mixing, the video image is placed on top of the frame buffer image. Here, whenever the background color of the video image appears, the frame buffer is shown; otherwise the video image is shown.
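A minimal sketch of the first type of mixing is shown below; the RGB type, the key (flag) colour and the function name are assumptions of this example, and real controllers perform this comparison in hardware on the fly.

```c
#include <stdint.h>

typedef struct { uint8_t r, g, b; } RGB;

/* First type of mixing: if the frame buffer pixel holds the designated
   key value, pass the incoming video pixel through instead. */
RGB mix_pixel(RGB frame_pixel, RGB video_pixel, RGB key_color)
{
    int is_key = (frame_pixel.r == key_color.r &&
                  frame_pixel.g == key_color.g &&
                  frame_pixel.b == key_color.b);
    return is_key ? video_pixel : frame_pixel;   /* video shows through the key */
}
```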

8. Discuss the merits and demerits of DDA line drawing algorithm

Ans:

DDA Line Algorithm

1. Read the line end points (x1,y1 ) and (x2,y2) such that they are not equal.

[if equal then plot that point and exit]

2. ∆x = x2 − x1 and ∆y = y2 − y1

3. If |∆x| >= |∆y| then

Length = |∆x|

else

Length = |∆y|

end if

4. ∆x = (x2 − x1)/length

∆y = (y2 − y1)/length

This makes either ∆x or ∆y equal to 1, because length is either |x2 − x1| or |y2 − y1|; the incremental value for either x or y is therefore 1.

5. x = x1 + 0.5 * sign(∆x)

y = y1 + 0.5 * sign(∆y)

[Here the sign function makes the algorithm work in all quadrants. It returns −1, 0 or 1 depending on whether its argument is <0, =0 or >0 respectively. The factor 0.5 makes it possible to round the values in the integer function rather than truncating them.]

6. i=1 [begins the loop, in this loop points are plotted]


7. while (i <= length)

{

Plot (Integer(x), Integer(y))

x= x+∆x

y= y+∆y

i=i+1

}

8. stop

Let us see few examples to illustrate this algorithm.

Ex.3.1: Consider the line from (0,0) to (4,6). Use the simple DDA algorithm to rasterize this line.

Sol: Evaluating steps 1 to 5 in the DDA algorithm we have

x1=0, x2= 4, y1=0, y2=6

length = max(|x2 − x1|, |y2 − y1|) = 6

∆x = (4 − 0)/6 = 0.667

∆y = (6 − 0)/6 = 1

Initial values:

x = 0 + 0.5 * sign(0.667) = 0.5

y = 0 + 0.5 * sign(1) = 0.5

Tabulating the results of each iteration in the step 6 we get,


Initial values for

x= 0+0.5*sign (-1) = -0.5

y= 0+ 0.5 * sign(-1) = -0.5

Tabulating the results of each iteration in the step 6 we get,

Fig. 3.7

The results are plotted as shown in Fig. 3.7. It shows that the rasterized line lies on the actual line and that it is a 45° line.

Merits of DDA Algorithm

1. It is the simplest algorithm and it does not require special skills for implementation.


2. It is a faster method for calculating pixel positions than the direct use of equation y=mx + b. It eliminates the multiplication in the equation by making use of raster characteristics, so that appropriate increments are applied in the x or y direction to find the pixel positions along the line path.

Demerits of DDA Algorithm

1. Floating point arithmetic in DDA algorithm is still time-consuming.

2. The algorithm is orientation dependent. Hence end point accuracy is poor.

9. Explain the various types of perspective projections

Ans:

Perspective Projection

The perspective projection, on the other hand, produces realistic views but does not preserve relative proportions. In perspective projection, the lines of projection are not parallel. Instead, they all converge at a single point called the center of projection or projection reference point. The object positions are transformed to the view plane along these converging projection lines, and the projected view of an object is determined by calculating the intersection of the converging projection lines with the view plane, as shown in the figure below.

Perspective projection of an object to the view plane
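A minimal sketch of this idea, assuming the center of projection is at the coordinate origin and the view plane is the plane z = d, is shown below; the type and function names are assumptions of the example.

```c
typedef struct { double x, y, z; } Point3;
typedef struct { double x, y; } Point2;

/* Perspective projection sketch: centre of projection at the origin and
   view plane z = d, so every projection line converges at the origin and
   a point projects by similar triangles (z must be non-zero and in front
   of the centre of projection). */
Point2 perspective_project(Point3 p, double d)
{
    Point2 q = { p.x * d / p.z, p.y * d / p.z };
    return q;
}
```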

Various Types of Parallel Projections


Parallel projections are basically categorized into two types, depending on the relation between the direction of projection and the normal to the view plane. When the direction of the projection is normal (perpendicular) to the view plane, we have an orthographic parallel projection. Otherwise, we have an oblique parallel projection.

1 Orthographic Projection

As shown in Fig. 7.10 (a), the most common types of orthographic projections are the front projection, top projection and side projection. In all these, the projection plane (view plane) is perpendicular to a principal axis. These projections are often used in engineering drawing to depict machine parts, assemblies, buildings and so on.

The orthographic projection can display more than one face of an object. Such an orthographic projection is called an axonometric orthographic projection. It uses projection planes (view planes) that are not normal to a principal axis. They resemble the perspective projection in this way, but differ in that the foreshortening is uniform rather than being related to the distance from the center of projection. Parallelism of lines is preserved but angles are not. The most commonly used axonometric orthographic projection is the isometric projection.

The isometric projection can be generated by aligning the view plane so that it intersects each coordinate axis in which the object is defined at the same distance from the origin. As shown in Fig. 7.11, the isometric projection is obtained by aligning the projection vector with the cube diagonal. It uses the useful property that all three principal axes are equally foreshortened, allowing measurements along the axes to be made to the same scale (hence the name: iso for equal, metric for measure).


Fig. 7.11: Isometric projection of an object onto a viewing plane

2 Oblique Projection

An oblique projection is obtained by projecting points along parallel lines that are not perpendicular to the projection plane. Fig. 7.10(b) shows the oblique projection. Notice that the view plane normal and the direction of projection are not the same. The oblique projections are further classified as the cavalier and cabinet projections. For the cavalier projection, the direction of projection makes a 45° angle with the view plane. As a result, the projection of a line perpendicular to the view plane has the same length as the line itself; that is, there is no foreshortening. Fig. 7.12 shows cavalier projections of a unit cube.

Fig. 7.12: Cavalier Projections of the unit cube

When the direction of projection makes an angle of arctan(2) ≈ 63.4° with the view plane, the resulting view is called a cabinet projection. For this angle, lines perpendicular to the viewing surface are projected at one-half their actual length. Cabinet projections appear more realistic than cavalier projections because of this reduction in the length of perpendiculars. Fig. 7.13 shows examples of cabinet projections of a cube.


Fig. 7.13: Cabinet projections of the Unit Cube

10. Explain the homogeneous coordinates for translation, rotation and scaling.

Ans:

Homogeneous Coordinates and Matrix Representation of 2D Transformations

In the design and picture formation process, we may often need to perform translations, rotations and scaling to fit the picture components into their proper positions. In the previous section we have seen that each of the basic transformations can be expressed in the general matrix form

P′ = P · M1 + M2   … (6.12)

For translation: M1 = identity matrix, M2 = translation vector

For rotation: M1 = rotation matrix, M2 = 0

For scaling: M1 = scaling matrix, M2 = 0

To produce a sequence of transformations with the above equations, such as translation followed by rotation and then scaling, we must calculate the transformed coordinates one step at a time: first the coordinates are translated, then the translated coordinates are rotated, and finally the rotated coordinates are scaled. This sequential transformation process is not efficient. A more efficient approach is to combine the sequence of transformations into one transformation so that the final coordinate positions are obtained directly from the initial coordinates. This eliminates the calculation of intermediate coordinate values.

In order to combine a sequence of transformations we have to eliminate the matrix addition associated with the translation term in M2 (refer to equation 6.12). To achieve this we represent the matrix M1 as a 3 × 3 matrix instead of 2 × 2, introducing an additional dummy coordinate W. Here, points are specified by three numbers instead of two. This coordinate system is called the homogeneous coordinate system and it allows us to express all transformation equations as matrix multiplications.

The homogeneous coordinate is represented by a triplet (xw, yw, W), where x = xw/W and y = yw/W.

For two-dimensional transformations, the homogeneous parameter W can be any non-zero value, but it is convenient to have W = 1. Therefore, each two-dimensional position can be represented with homogeneous coordinates as (x, y, 1).

Summing it all up, we can say that homogeneous coordinates allow combined transformations, eliminating the calculation of intermediate coordinate values and thus saving the time required for transformation and the memory required to store the intermediate coordinate values. Let us see the homogeneous coordinates for the three basic transformations.

1 Homogeneous Coordinates for Translation

The homogeneous coordinates for translation are given as
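For illustration, a sketch of the standard 3 × 3 homogeneous matrices for the three basic transformations is given below, written for the row-vector convention used in this text (the object matrix is written first, P′ = [x y 1] · M); the function names are assumptions of the example.

```c
#include <math.h>

/* Homogeneous translation: [x y 1] . T moves a point by (tx, ty). */
void make_translation(double tx, double ty, double M[3][3])
{
    double T[3][3] = { { 1, 0, 0 }, { 0, 1, 0 }, { tx, ty, 1 } };
    for (int i = 0; i < 3; i++) for (int j = 0; j < 3; j++) M[i][j] = T[i][j];
}

/* Homogeneous rotation by theta about the origin (row-vector convention). */
void make_rotation(double theta, double M[3][3])
{
    double c = cos(theta), s = sin(theta);
    double R[3][3] = { { c, s, 0 }, { -s, c, 0 }, { 0, 0, 1 } };
    for (int i = 0; i < 3; i++) for (int j = 0; j < 3; j++) M[i][j] = R[i][j];
}

/* Homogeneous scaling with respect to the origin. */
void make_scaling(double sx, double sy, double M[3][3])
{
    double S[3][3] = { { sx, 0, 0 }, { 0, sy, 0 }, { 0, 0, 1 } };
    for (int i = 0; i < 3; i++) for (int j = 0; j < 3; j++) M[i][j] = S[i][j];
}

/* Composite transformation C = A . B: multiplying the matrices once replaces
   applying the two transformations one point at a time. */
void mat_mul(const double A[3][3], const double B[3][3], double C[3][3])
{
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            C[i][j] = 0.0;
            for (int k = 0; k < 3; k++) C[i][j] += A[i][k] * B[k][j];
        }
}
```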


Note: In this book, the object matrix is written first and it is then multiplied by the required transformation matrix. If we wish to write the transformation matrix first and then the object matrix, we have to take the transpose of both matrices and post-multiply the object matrix, i.e.,