MC0072 – Software Engineering


  • 8/8/2019 MC0072 Software Engineering


    August 2010

    Master of Computer Application (MCA) Semester 3

    MC0072 Computer Graphics

    Assignment Set 1

    1. Describe theory of development of hardware and software for computer graphics.

The development of computer graphics hardware involves the development of input and output device technology. Overall, therefore, the development of computer graphics involves developments in three fields:

1. Output technology
2. Input technology
3. Software technology

    Output Technology

Figure 1.3 shows the historical development of output technology. In the early days of computing, hardcopy devices such as teletype printers and line printers were in use alongside computer-driven CRT displays. In the mid-fifties, command-and-control CRT display consoles were introduced. The display devices developed in the mid-sixties, and in common use until the mid-eighties, are called vector, stroke, line-drawing, or calligraphic displays. The term vector is used as a synonym for line; a stroke is a short line, and characters are made of sequences of such strokes.


    Fig. 1.3

    Input Technology

Input technology has also improved greatly over the years. A number of input devices were developed, including punch cards, light pens, keyboards, tablets, mice, and scanners.

    Software Technology

Like output and input technology, software technology has seen a lot of development. In the early days only low-level software was available. Over the years, software technology moved from low-level, device-dependent packages to device-independent packages. Device-independent packages are high-level packages which can drive a wide variety of display and printer devices. The need for device independence led to standardization, and common specifications were agreed. The first graphics specification to be officially standardized was GKS (the Graphical Kernel System). GKS supports the grouping of logically related primitives such


as lines, polygons, and character strings, together with their attributes, in a collected form called segments. In 1988 a 3D extension of GKS became an official standard, as did a much more sophisticated but even more complex graphics system called PHIGS (the Programmer's Hierarchical Interactive Graphics System).

PHIGS, as its name implies, supports nested hierarchical grouping of 3D primitives, called structures. In PHIGS, all primitives can be subjected to geometric transformations such as scaling, rotation, and translation to accomplish dynamic movement. PHIGS also supports a database of structures that the programmer may edit and modify; PHIGS automatically updates the display whenever the database has been modified.

    2. Explain the following with the help of relevant real time applications:

    A) Classification of Applications

The uses of computer graphics can be classified as shown in Fig. 1.2. As the figure shows, applications can be classified by the dimensionality of the object to be drawn: 2D or 3D. They can also be classified by the kind of picture: symbolic or realistic. Many computer graphics applications are classified by the type of interaction, which determines the user's degree of control over the object and its image; in controllable interaction the user can change the attributes of the images. The role of the picture gives another classification: computer graphics is either used for representation, or the picture itself is the end product, as with drawings. Pictorial representation gives the final classification: it distinguishes the kinds of picture produced, such as line drawings, black-and-white images, color images, and so on.


    B) Development of Hardware and Software for Computer Graphics

The development of computer graphics hardware involves the development of input and output device technology. Overall, therefore, the development of computer graphics involves developments in three fields:

1. Output technology
2. Input technology
3. Software technology

    3. Explain the following with respect to Graphics Hardware:

    A) Color and Grayscale Levels

    Various color and intensity-level options can be made available to a

    user, depending on the capabilities and design objectives of a

    particular system. General purpose raster-scan systems, for example,

    usually provide a wide range of colors, while random-scan monitors

    typically offer only a few color choices, if any. Color options are

    numerically coded with values ranging from 0 through the positive


    integers. For CRT monitors, these color codes are then converted to

    intensity level settings for the electron beams.

    In a color raster system, the number of color choices available

    depends on the amount of storage provided per pixel in the frame

    buffer. Also, color-information can be stored in the frame buffer in two

    ways: We can store color codes directly in the frame buffer, or we can

    put the color codes in a separate table and use pixel values as an

    index into this table. With the direct storage scheme, whenever a

    particular color code is specified in an application program, the

corresponding binary value is placed in the frame buffer for each component pixel in the output primitives to be displayed in that color.

    A minimum number of colors can be provided in this scheme with 3

bits of storage per pixel, as shown in Table 2.5.

    Each of the three bit positions is used to control the intensity level

    (either on or off) of the corresponding electron gun in an RGB

    monitor. The leftmost bit controls the red gun, the middle bit controls

    the green gun, and the rightmost bit controls the blue gun. Adding

    more bits per pixel to the frame buffer increases the number of color


    choices. With 6 bits per pixel, 2 bits can be used for each gun. This

    allows four different intensity settings for each of the three color

    guns, and a total of 64 color values are available for each screen

pixel. With a resolution of 1024 by 1024, a full-color (24 bits per pixel)

    RGB system needs 3 megabytes of storage for the frame buffer. Color

    tables are an alternate means for providing extended color

    capabilities to a user without requiring large frame buffers. Lower cost

    personal computer systems, in particular, often use color tables to

    reduce frame-buffer storage requirements.
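As a quick check of the storage figure above, the arithmetic can be carried out directly. A minimal sketch; the helper name is our own:

```python
# Frame-buffer storage needed for a full-color raster display.
# Parameters from the text: 1024 x 1024 resolution, 24 bits per pixel.

def frame_buffer_bytes(width, height, bits_per_pixel):
    """Total frame-buffer size in bytes."""
    return width * height * bits_per_pixel // 8

size = frame_buffer_bytes(1024, 1024, 24)
print(size)                   # 3145728 bytes
print(size / (1024 * 1024))   # 3.0 megabytes, matching the text
```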

    Color tables

In color displays, 24 bits per pixel are commonly used, where 8 bits represent 256 levels for each color. This means 24 bits must be read from the frame buffer for each pixel, which is very time consuming. To avoid this, the video controller uses a look-up table (LUT) that stores many entries of pixel values in RGB format. With this facility, it is only necessary to read an index into the look-up table from the frame buffer for each pixel. This index specifies one of the entries in the look-up table; the specified entry is then used to control the intensity or color of the CRT.

Usually, the look-up table has 256 entries. The index into the look-up table is therefore 8 bits, and hence the frame buffer has to store only 8 bits per pixel instead of 24. Fig. 2.6 shows the organization of a color (video) look-up table.
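The look-up scheme described above can be sketched in a few lines; the table contents and the tiny image here are hypothetical examples:

```python
# A minimal sketch of a video look-up table (LUT): the frame buffer stores
# an 8-bit index per pixel; the full 24-bit RGB value lives in the table.
lut = [(0, 0, 0)] * 256          # 256 entries, each a 24-bit (R, G, B) triple
lut[7] = (255, 128, 0)           # entry 7 set to orange (hypothetical choice)

frame_buffer = [[7, 0], [0, 7]]  # a 2x2 image of 8-bit indices

# The video controller reads the index and looks up the color for each pixel.
image = [[lut[index] for index in row] for row in frame_buffer]
```

Changing `lut[7]` recolors every pixel that stores index 7, without touching the frame buffer, which is exactly the experimentation advantage described in the text.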


    Organization of a Video look-up table

There are several advantages in storing color codes in a look-up table. Use of a color table can provide a "reasonable" number of simultaneous colors without requiring large frame buffers. For most

    applications, 256 or 512 different colors are sufficient for a single

    picture. Also, table entries can be changed at any time, allowing a

    user to be able to experiment easily with different color combinations

    in a design, scene, or graph without changing the attribute settings

    for the graphics data structure. In visualization and image-processing

    applications, color tables are convenient means for setting color

    thresholds so that all pixel values above or below a specified

    threshold can be set to the same color. For these reasons, some

    systems provide both capabilities for color-code storage, so that a

    user can elect either to use color tables or to store color codes

    directly in the frame buffer.

    Grayscale

    With monitors that have no color capability, color functions can be

    used in an application program to set the shades of gray, or


    grayscale, for displayed primitives. Numeric values over the range

    from 0 to 1 can be used to specify grayscale levels, which are then

    converted to appropriate binary codes for storage in the raster. This

    allows the intensity settings to be easily adapted to systems with

    differing grayscale capabilities.

    The table lists the specifications for intensity codes for a four-level

    grayscale system. In this example, any intensity input value near 0.33

    would be stored as the binary value 01 in the frame buffer, and pixels

    with this value would be displayed as dark gray. If additional bits per

pixel are available in the frame buffer, the value of 0.33 would be mapped to the nearest level. With 3 bits per pixel, we can

    accommodate 8 gray levels; while 8 bits per pixel would give us 256

    shades of gray. An alternative scheme for storing the intensity

    information is to convert each intensity code directly to the voltage

    value that produces this grayscale level on the output device in use.
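The quantization described above, where an input intensity near 0.33 maps to the binary code 01 on a four-level system, can be sketched as follows; the helper name is our own:

```python
# Quantize a [0, 1] intensity to the nearest available grayscale code,
# as in the four-level (2 bits per pixel) example from the text.

def to_gray_code(intensity, bits):
    levels = 2 ** bits
    # Map 0..1 onto codes 0..levels-1, rounding to the nearest level.
    return round(intensity * (levels - 1))

print(to_gray_code(0.33, 2))   # 1 -> stored as binary 01 (dark gray)
print(to_gray_code(0.33, 3))   # with 3 bits (8 levels) the nearest code is 2
```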

    B) Video Mixing


The video controller can provide the facility of video mixing, in which it accepts information from two images simultaneously: one from the frame buffer and the other from a television camera, recorder, or other source. This is illustrated in Fig. 2.7. The video controller merges the two received images to form a composite image.

    Video mixing

There are two types of video mixing. In the first, a graphics image is set into a video image. Here mixing is accomplished with hardware that treats a designated pixel value in the frame buffer as a flag to


indicate that the video signal should be shown instead of the signal from the frame buffer. Normally, the designated pixel value corresponds to the background color of the frame-buffer image.

In the second type of mixing, the video image is placed on top of the frame-buffer image. Here, wherever the background color of the video image appears, the frame buffer is shown; otherwise the video image is shown.

    C) Random scan display processor

The figure shows the architecture of a random scan display system with a display processor. This architecture is similar to the display-processor-based raster system architecture, except for the frame buffer. In a random scan display, no local memory is provided for scan-conversion algorithms, since that functionality is typically implemented using PLAs (Programmable Logic Arrays) or microcode.

    Random Scan Display System

In random scan displays, the display processor has its own instruction set and instruction address register; hence it is also called a Display Processing Unit (DPU) or Graphics Controller. It performs the instruction fetch, decode, and execute cycles found in any computer. To provide a


    flicker free display, the display processor has to execute its program

    30 to 60 times per second. The program executed by the display

    processor and the graphics package reside in the main memory. The

    main memory is shared by the general CPU and the display processor.

Consider the organization of a simple random-scan system (sometimes called a vector scan system). An application program is input and stored in the system memory along with a graphics package. Graphics commands in the application program are translated by the graphics package into a display file stored in the system memory. This display file is then accessed by the display processor to refresh the screen: the display processor cycles through the display file repeatedly.

In a random scan system, also called a vector, stroke-writing, or calligraphic display, the electron beam directly draws the picture.


Advantages of random scan:

- Very high resolution, limited only by the monitor
- Easy animation: just draw the object at different positions
- Requires little memory (just enough to hold the display program)

Disadvantages:

- Requires an "intelligent" (processor-controlled) electron beam
- Limited screen density before flicker appears; a complex image cannot be drawn
- Limited color capability (very expensive)

    4. Describe the theory of scan converting circles and the corresponding algorithms.

A circle centered at the origin with radius r is given by the explicit equation y = ±sqrt(r^2 - x^2), or implicitly by x^2 + y^2 = r^2. The straightforward method of drawing the circle by stepping x and computing the corresponding y values is ineffective (since it involves squaring and taking square roots at every step) and it gives an asymmetric distribution of the plotted points.

Figure: Straightforward scan conversion of a circle

We can make use of the 8-fold symmetry of the circle, so we only have to draw 1/8 of it, say from x = 0 to x = y, and reflect each computed point into the other seven octants.


    Figure: 8-fold symmetry of the

    circle

Circle Scan Conversion Algorithms

Brute force: use the explicit or parametric equations for the circle:

y = SQRT(r*r - x*x);

x = r*cos(theta);
y = r*sin(theta);

DDA method:

y = y - (x/y);
x = x + 1;

Both approaches are too slow in practice.


Midpoint Circle Algorithm

An extension of Bresenham's ideas to circles.

Circle equation: x^2 + y^2 = r^2

Define a circle function: f(x,y) = x^2 + y^2 - r^2

f = 0 ==> (x,y) is on the circle
f < 0 ==> (x,y) is inside the circle
f > 0 ==> (x,y) is outside the circle

Suppose we have just plotted (xk, yk). In the octant where x <= y the slope magnitude is less than 1, so we are stepping in x. The next pixel is either:

(xk + 1, yk) -- the top case, or
(xk + 1, yk - 1) -- the bottom case

To choose between them, look at the midpoint between the two candidate pixels and evaluate the circle function there: if the midpoint lies inside the circle, take the top case; otherwise take the bottom case.
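Putting the pieces together, here is a sketch of the midpoint circle algorithm in the usual incremental formulation; the update constants 2x+3 and 2(x-y)+5 follow from evaluating the circle function at successive midpoints:

```python
# A sketch of the midpoint circle algorithm: compute one octant
# (stepping in x while x <= y) and mirror each point eight ways.

def midpoint_circle(r):
    points = set()
    x, y = 0, r
    d = 1 - r                      # decision variable: f at the first midpoint
    while x <= y:
        # Reflect (x, y) into all eight octants (8-fold symmetry).
        for px, py in [(x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)]:
            points.add((px, py))
        if d < 0:                  # midpoint inside: keep y (top case)
            d += 2 * x + 3
        else:                      # midpoint outside: step down (bottom case)
            d += 2 * (x - y) + 5
            y -= 1
        x += 1
    return points

pts = midpoint_circle(5)           # pixels approximating a circle of radius 5
```

Only additions and shifts are needed per pixel, which is the whole point compared with the brute-force and DDA methods above.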

    August 2010

    Master of Computer Application (MCA) Semester 3

    MC0072 Computer Graphics

    Assignment Set 2


    1. Describe the theory of Polygon and Pattern filling along with their corresponding

    algorithms.

Rather than filling a polygon with a solid color, we may want to fill it with a pattern or texture.

We store the pattern in an N x M array (e.g. 8 x 8); then we want:

row 0 of the pattern to be in rows 0, 8, 16, etc. of the polygon;
row 1 of the pattern to be in rows 1, 9, 17, etc. of the polygon.

Similarly, column 0 of the pattern will be in columns 0, 8, 16, etc. of the polygon, and column 1 of the pattern will be in columns 1, 9, 17, etc.

So we keep a row pointer and a column pointer: for each scan line Y, the row pointer is Y mod N, and for each pixel X on that scan line, the column pointer is X mod M.
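A minimal sketch of this modular indexing in Python; the 8 x 8 checkerboard pattern is a hypothetical example:

```python
# Pattern filling with modular indexing: pattern row 0 appears in polygon
# rows 0, 8, 16, ..., pattern column 0 in polygon columns 0, 8, 16, ...
N, M = 8, 8
# Hypothetical 8x8 checkerboard of color indices.
pattern = [[(r + c) % 2 for c in range(M)] for r in range(N)]

def pattern_color(x, y):
    # Row pointer = y mod N, column pointer = x mod M.
    return pattern[y % N][x % M]

# Filling a span of a scan line just looks up each pixel's pattern cell.
span = [pattern_color(x, 10) for x in range(20, 26)]
```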


    slightly faster than the one presented here. The Cohen-Sutherland

    line clipping algorithm is a standard algorithm used in many places. It

    is a fast, reliable, easy to understand routine. What else could you ask

    for?

    As seen to the left, what we want to do is take the square viewport

    and clip the lines so that they fall inside of it. This can have a number

    of consequences. Lines such as A, F, and E should be ignored

    completely. Lines like B should be accepted with as little processing

    as possible. Lines like C and D though, need to be clipped before they

    can be drawn to the screen. The Cohen-Sutherland line clipping

    algorithm fills these requirements by classifying the endpoints of the

    line by which sector of the screen they fall into.

    In our implementation of this algorithm, we will have a byte for each

    endpoint of the line that holds the status of that point. We will use

    four bits of that byte, one for each possible state. These states are

shown in green on the diagram (in binary). If the point is to the left of the clip area, then set bit #1. If the point is to the right of the clip

    area, then set bit #2. If the point is above the clip area, set bit #3.

    Finally, if the point is below the clip area, set bit #4. This ensures that

    if the point is inside of the clipping area the code is set to zero. If the

    point's code is above zero, then we have some clipping to do. Here is

the code to generate the status byte:

    Code := 0;

    IF X < Screen.Clip.X1 THEN Code := Code OR 1

    ELSE IF X > Screen.Clip.X2 THEN Code := Code OR 2;


    IF Y < Screen.Clip.Y1 THEN Code := Code OR 4

    ELSE IF Y > Screen.Clip.Y2 THEN Code := Code OR 8;

    This is simple enough (simple is good... good is fast!), but how do we

    decide which points to accept or reject? It turns out that for lines like

    B that are completely inside of the clipping box, it is simple to trivially

    accept the line. If both status bytes are zero, then the line is

    completely inside of the box. It is also simple to trivially reject lines

    like A and F that are outside of the window.

    If we AND the two codes together, we find common sides of the box

    that the two points lie. If they both lie on the same side of the box,

    then a bit will still be set after the values are ANDed. For example

with line F, the value 0101 AND 0001 = 0001. This means that the line is completely to the left of the bounding box and can be rejected. Now we are correctly drawing a good portion of the lines. We just have to worry about lines like E, C, and D. To handle lines like C


    and D, it is obvious that we have to clip the line somehow. Line E

    though, seems to be harder to classify. It turns out that if we apply

    the line clipping algorithm to line E and then test it again, it becomes

a simple case. If we clip line E at point E1, then both endpoint codes, when ANDed together, will give 0001: completely to the left. The line can then be trivially rejected.
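The steps above (outcodes, trivial accept and reject via AND, then clipping against one edge at a time and re-testing) can be sketched in Python; the clip-window bounds x1, y1, x2, y2 stand in for the Screen.Clip record in the Pascal fragment above:

```python
# A sketch of the Cohen-Sutherland line clipper described in the text.
LEFT, RIGHT, ABOVE, BELOW = 1, 2, 4, 8

def outcode(x, y, x1, y1, x2, y2):
    code = 0
    if x < x1: code |= LEFT
    elif x > x2: code |= RIGHT
    if y < y1: code |= ABOVE
    elif y > y2: code |= BELOW
    return code

def clip_line(p, q, x1, y1, x2, y2):
    """Return the clipped segment, or None if it lies outside the window."""
    (px, py), (qx, qy) = p, q
    while True:
        cp = outcode(px, py, x1, y1, x2, y2)
        cq = outcode(qx, qy, x1, y1, x2, y2)
        if cp == 0 and cq == 0:
            return (px, py), (qx, qy)      # trivial accept
        if cp & cq:
            return None                    # trivial reject: same side
        c = cp or cq                       # pick an endpoint that is outside
        # Clip that endpoint against one window edge, then loop and re-test.
        if c & LEFT:     x, y = x1, py + (qy - py) * (x1 - px) / (qx - px)
        elif c & RIGHT:  x, y = x2, py + (qy - py) * (x2 - px) / (qx - px)
        elif c & ABOVE:  x, y = px + (qx - px) * (y1 - py) / (qy - py), y1
        else:            x, y = px + (qx - px) * (y2 - py) / (qy - py), y2
        if c == cp: px, py = x, y
        else:       qx, qy = x, y
```

Lines like E fall out automatically: after one clip the re-test trivially rejects the remainder, just as the text describes.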

    B) Clipping circles and ellipses

    3. Describe the following with respect to Homogeneous Coordinates:

    A) for Translation


The third 2D graphics transformation we consider is that of translating a 2D line drawing by an amount Tx along the x axis and Ty along the y axis. The translation equations may be written as:

x' = x + Tx
y' = y + Ty        (5)

We wish to write Equations 5 as a single matrix equation. This requires that we find a 2 by 2 matrix

M = [[a, b],
     [c, d]]

such that [x y] mp M = [x' y']. From the first equation it is clear that a = 1 and c = 0, but there is no way to obtain the additive term Tx required in the first equation of Equations 5. Similarly we must have b = 0 and d = 1, and there is no way to obtain the additive term Ty required in the second equation of Equations 5.
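With homogeneous coordinates the missing additive term fits naturally into a 3 by 3 matrix: writing Tx and Ty for the translation amounts, [x y 1] times the matrix below yields [x+Tx y+Ty 1]. A minimal sketch using the text's row-vector convention:

```python
# Translation as a matrix product, which no 2x2 matrix can achieve.

def translate(tx, ty):
    # Row-vector convention, as in the text: point mp matrix.
    return [[1, 0, 0],
            [0, 1, 0],
            [tx, ty, 1]]

def mp(point, m):
    # Matrix product of a 1x3 row vector with a 3x3 matrix.
    return [sum(point[k] * m[k][j] for k in range(3)) for j in range(3)]

print(mp([3, 4, 1], translate(10, 20)))   # [13, 24, 1]
```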

    B) for Rotation

    Suppose we wish to rotate a figure around the origin of our 2D

    coordinate system. Figure 3 shows the point being rotated

through an angle θ (by convention, the counter-clockwise direction is positive) about the origin.

Figure 3: Rotating a Point About the Origin

The equations for the changed x and y coordinates are:

x' = x cos θ - y sin θ
y' = x sin θ + y cos θ        (1)

If we consider the coordinates of the point as a one-row, two-column matrix [x y] and the rotation matrix

R = [[ cos θ   sin θ ]
     [ -sin θ  cos θ ]]

then, given the J definition for matrix product, mp =: +/ . *, we can write Equations (1) as the matrix equation

[x' y'] = [x y] mp R        (2)

    We can define a J monad, rotate, which produces the rotation matrix.

    This monad is applied to an angle, expressed in degrees. Positive

    angles are measured in a counter-clockwise direction by convention.

    rotate =: monad def '2 2 $ 1 1 _1 1 * 2 1 1 2 o. (o. y.) % 180'

    rotate 90

    0 1

    _1 0

    rotate 360

    1 _2.44921e_16

    2.44921e_16 1

    We can rotate the square of Figure 1 by:

    square mp rotate 90

    0 0

    0 10

    _10 10

    _10 0

    0 0

    producing the rectangle shown in Figure 4.


    Figure 4: The Square, Rotated 90 Degrees
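For readers without a J interpreter, the rotate monad and the mp product above can be mirrored in Python; a sketch, with names following the text:

```python
import math

# Python analogue of the J `rotate` monad: build the 2x2 rotation matrix
# for an angle in degrees and apply it to a row-vector point.

def rotate(degrees):
    t = math.radians(degrees)
    return [[math.cos(t), math.sin(t)],
            [-math.sin(t), math.cos(t)]]

def mp(point, m):
    # Row vector times 2x2 matrix, like the J verb mp =: +/ . *
    return [sum(point[k] * m[k][j] for k in range(2)) for j in range(2)]

corner = [10, 0]                 # a corner of the square in Figure 1
print(mp(corner, rotate(90)))    # approximately [0, 10]
```

As in the J session, rotate(360) is only approximately the identity because of floating-point round-off (the 2.44921e-16 entries in the text).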

    C) for Scaling

Next we consider the problem of scaling (changing the size of) a 2D line drawing. Size changes are always made from the origin of the coordinate system. The equations for the changed x and y coordinates are:

x' = x * Sx
y' = y * Sy        (3)

As before, we consider the coordinates of the point as a one-row, two-column matrix [x y] and the matrix

S = [[ Sx  0 ]
     [ 0   Sy ]]

then we can write Equations (3) as the matrix equation

[x' y'] = [x y] mp S        (4)

    We next define a J monad, scale, which produces the scale matrix.

    This monad is applied to a list of two scale factors for and

    respectively.

    scale =: monad def '2 2 $ (0 { y.),0,0,(1 { y.)'

scale 2 3

2 0
0 3

    We can now scale the square of Figure 1 by:

    square mp scale 2 3

    0 0

    20 0

    20 30

    0 30

    0 0

producing the rectangle shown in Figure 5.


    Figure 5: Scaling a

    Square

    4. Describe the theory and applications of Homogeneous Coordinates and Matrix

    representation of 2D Transformations

    Homogeneous Coordinates

    Homogeneous coordinates provide a method to perform certain

    standard operations on points in Euclidean space by means of matrix

    multiplications.

    For reasons that hopefully will become clear in a moment, it's useful

    to represent 3D points in computer graphics using a 4-vector

    coordinate system, known as homogeneous coordinates.

To represent a point (x,y,z) in homogeneous coordinates, we add a 1 as the fourth component:

1. (x,y,z) -> (x,y,z,1)


    To map an arbitrary point (x,y,z,w) in homogenous coordinates back

    to a 3D point, we divide the first three terms by the fourth (w) term.

    Thus:

    2. (x,y,z,w) -> (x/w, y/w, z/w)

This sort of transformation has a few uses. For example, recall the equation for determining whether a point lies on a plane:

3. A point is on a plane if it satisfies the relationship

0 == A*x + B*y + C*z + D

    We can use this to our advantage by representing a plane L =

    (A,B,C,D). It is trivial to see that a point is on the plane L if

    4. P dot L == 0

What makes this relationship interesting is that if we have a

    "normalized" homogeneous point P and a "normalized" plane L,

    defined as:

A homogeneous point P = (x,y,z,w) is normalized iff w == 1.

    Likewise, a homogeneous plane L = (A,B,C,D)

    is normalized iff sqrt(A*A+B*B+C*C) == 1.

    then the dot product is the "signed" distance of the point P from the

    plane L. This can be a useful relationship for hit detection or collision

    detection, when we wish to determine where a path from P1 to P2

    intersects L. In that case, we can easily calculate the intersection

    point P by:

    a1 = P1 dot L;


    a2 = P2 dot L;

    a = a1 / (a1 - a2);

    P = (1-a)*P1 + a*P2

    This is useful when we need to do clipping of lines and polygons to fit

    inside the screen, as well as in performing collision detection.
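The intersection calculation above can be sketched directly; the plane z == 5 is a hypothetical example:

```python
# Segment-plane intersection via signed distances, as described in the text.
# The plane L is (A, B, C, D) with a unit normal; points are normalized (w == 1).

def dot(p, l):
    # Signed distance of the homogeneous point p from the normalized plane l.
    return p[0]*l[0] + p[1]*l[1] + p[2]*l[2] + p[3]*l[3]

def intersect(p1, p2, l):
    a1, a2 = dot(p1, l), dot(p2, l)
    a = a1 / (a1 - a2)             # parameter where the path crosses the plane
    return tuple((1 - a) * c1 + a * c2 for c1, c2 in zip(p1, p2))

plane = (0.0, 0.0, 1.0, -5.0)      # the plane z == 5
p = intersect((0, 0, 0, 1), (0, 0, 10, 1), plane)
print(p)                           # (0.0, 0.0, 5.0, 1.0)
```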

    Homogeneous coordinates have a range of applications, including

    computer graphics and 3D computer vision, where they allow affine

    transformations and, in general, projective transformations to be

    easily represented by a matrix.

    If the homogeneous coordinates of a point are multiplied by a non-zero scalar then the resulting coordinates represent the same point.

    An additional condition must be added on the coordinates to ensure


    Figure 1: A Square

The idea behind this representation is that the first point represents the starting point of the first line segment drawn, while the second point represents the end of the first line segment and the starting point of the second line segment. The drawing of line segments continues in similar fashion until all line segments have been drawn. A matrix of n points describes a figure consisting of n - 1 line segments. It is sometimes useful to think of each pair of consecutive points in this matrix representation as a vector, so that the square shown in Figure 1 is the result of drawing the vectors shown in Figure 2.


    Figure 2: The Vectors in A Square