CS318 - Fall 1997
Midterm Solutions


Question 1


Part (a) - 5 points

What is "refreshing" and why is it necessary on a CRT monitor?

Refreshing refers to the continual redirection of the electron beam (in the CRT) back to pixel positions that are to be illuminated. This is necessary because the light emitted from the screen phosphor at these positions lasts only a short time. Without refreshing, the displayed picture would fade away in less than a second.


Part (b) - 5 points

Briefly explain how stereoscopic views of a 3D scene are generated on a raster-scan monitor.

Two views of the scene are generated: one from the viewing position of the left eye, and the other from the viewing position of the right eye. The two views are then displayed on the monitor on alternate refresh cycles and viewed through special eyeglasses that alternately darken the left and right lenses in synchronization with the alternating displays. Thus, the left eye of the viewer sees only the left view, and the right eye sees only the right view of the scene. See page 50 in the textbook for an illustration of the technique.


Question 2


Part (a) - 6 points

Briefly explain the difference between the Bresenham method and the midpoint approach to scan-converting a straight line segment.

Both methods set up iterative calculations to determine which of two candidate pixels at each step is closer to a specified line path. The Bresenham method calculates the distance from each of these two candidate pixels to the line path; the decision parameter is based on the sign of their difference (greater than or less than zero). The midpoint method selects the closer pixel by determining whether the midposition between the two pixels is above or below the line path.
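The midpoint decision can be sketched in a few lines of C. This is a minimal illustration for integer endpoints with slope between 0 and 1 (x0 < x1); the function name and array-output interface are illustrative, not from the textbook:

```c
#include <assert.h>

/* Midpoint line scan conversion for slopes 0 < m < 1, endpoints
   (x0,y0)-(x1,y1) with x0 < x1.  The sign of the decision parameter d
   tells whether the midpoint between the two candidate pixels lies
   above or below the line, which selects the closer pixel. */
static int midpoint_line(int x0, int y0, int x1, int y1,
                         int xs[], int ys[])
{
    int dx = x1 - x0, dy = y1 - y0;
    int d = 2 * dy - dx;          /* decision parameter at the first midpoint */
    int x = x0, y = y0, n = 0;

    while (x <= x1) {
        xs[n] = x; ys[n] = y; n++;
        if (d > 0) {              /* midpoint below the line: step diagonally */
            y++;
            d += 2 * (dy - dx);
        } else {                  /* midpoint above the line: step across */
            d += 2 * dy;
        }
        x++;
    }
    return n;                     /* number of pixels plotted */
}
```

Note that only integer additions appear in the loop; the distance comparison has been folded into the sign test on d.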


Part (b) - 9 points

Explain how pixel intensities along a straight line segment can be quickly loaded into the frame-buffer of a raster system using incremental calculations.

Assuming the frame-buffer is addressed in row-major order on a bilevel system (e.g., black and white), the address of the pixel at location (x,y) is calculated as an offset from the starting address of the frame-buffer as:

addr(x,y) = addr(0,0) + y * (xmax + 1) + x

If we move across a scan line, we can calculate the frame-buffer address for a pixel at location (x+1,y) by using the previous calculation for the pixel at (x,y) as:

addr(x+1,y) = addr(x,y) + 1

If we move diagonally down to the next scan line, we can calculate the frame-buffer address for a pixel at location (x+1,y+1) by using the previous calculation for the pixel at (x,y) as:

addr(x+1,y+1) = addr(x,y) + xmax + 2
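The three address formulas above can be checked with a small C sketch. The screen width XMAX and the helper names here are illustrative assumptions:

```c
#include <assert.h>

/* Row-major bilevel frame-buffer addressing, pixels x = 0..XMAX on
   each scan line.  XMAX is an assumed screen width minus one. */
enum { XMAX = 639 };

/* Direct calculation: addr(x,y) = addr(0,0) + y*(xmax+1) + x */
static long addr(long base, int x, int y)
{
    return base + (long)y * (XMAX + 1) + x;
}

/* Incremental steps: no multiplication needed per pixel. */
static long step_across(long a)   { return a + 1; }          /* (x,y) -> (x+1,y)   */
static long step_diagonal(long a) { return a + XMAX + 2; }   /* (x,y) -> (x+1,y+1) */
```

The incremental forms replace the per-pixel multiplication in addr with a single addition, which is the point of the exercise.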


Part (c) - 4 points

State the names only of two OpenGL attribute functions for points.

glPointSize
glColor


Question 3 - 14 points

Suppose an arbitrary polygon is defined with a vertex list. Which of the three area-fill methods (scan line, boundary fill, or flood fill) would be the best choice for scan converting the polygon? Why?

The scan line method would be best.

Both the boundary-fill and flood-fill methods require a starting interior position for filling. Given only the polygon vertices for an arbitrary fill shape, it would be difficult to determine an interior point for the fill. Also, boundary fill requires a single outline color for the polygon, and flood fill requires a single interior color, which implies that the polygon would already have to be in the frame-buffer. (Another issue is choosing between 4-connected and 8-connected regions for the filling algorithm.)

The scan line method involves only intersection calculations for scan lines with the polygon edges.
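The intersection step can be sketched directly from the vertex list. This is a minimal illustration for a single scan line, ignoring horizontal edges and the vertex-crossing special cases; the function names are illustrative:

```c
#include <assert.h>
#include <stdlib.h>

static int cmp_double(const void *a, const void *b)
{
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

/* Intersect the scan line y = yscan with the edges of a polygon given
   by vertex lists vx[], vy[] of length n.  Returns the sorted x
   intersections in xints[]; pixels are then filled between alternate
   pairs xints[0..1], xints[2..3], ...  */
static int scanline_spans(const double vx[], const double vy[], int n,
                          double yscan, double xints[])
{
    int i, count = 0;
    for (i = 0; i < n; i++) {
        double x0 = vx[i], y0 = vy[i];
        double x1 = vx[(i + 1) % n], y1 = vy[(i + 1) % n];
        /* half-open test so a scan line through a vertex is counted once */
        if ((y0 <= yscan && yscan < y1) || (y1 <= yscan && yscan < y0))
            xints[count++] = x0 + (yscan - y0) * (x1 - x0) / (y1 - y0);
    }
    qsort(xints, count, sizeof xints[0], cmp_double);
    return count;
}
```

Repeating this for every scan line covered by the polygon, and exploiting edge coherence between successive scan lines, gives the full fill algorithm.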


Question 4


Part (a) - 10 points

Suppose a polygon is to be rotated through an angle of 30 degrees, then scaled by a factor of 2 in the x direction and by a factor of 3 in the y direction. What is the composite matrix for this sequence of 2D transformations?
| 2  0  0 |   | cos 30°  -sin 30°  0 |   | sqrt(3)  -1            0 |
| 0  3  0 | * | sin 30°   cos 30°  0 | = | 1.5       1.5*sqrt(3)  0 |
| 0  0  1 |   |    0         0     1 |   | 0         0            1 |


Part (b) - 10 points

Explain the "general" steps in applying this transformation sequence to the polygon so that only the final (transformed) polygon is displayed.

General Method: The original polygon is erased by re-scan converting it in the background color. If the original polygon overlaps other objects, then the entire scene (minus the original polygon) can be reloaded into the frame-buffer. Then, each vertex of the polygon is multiplied by the above composite matrix to obtain a list of "transformed" vertices. These new vertices are used to scan convert the new polygon into the frame-buffer.

OpenGL Method: The screen is cleared to the background color using glClear. Then, any other objects are drawn. Next, the above composite matrix is loaded onto the matrix stack using glLoadMatrix. Finally, the vertices of the polygon are redrawn, and the new polygon is automatically transformed by the OpenGL matrix stack into its new position.


Question 5


Part (a) - 16 points

State the "general" steps in the 2D viewing pipeline from modeling coordinates to the final coordinate system of a display device and explain the transformations that are involved at each step.

  1. Modeling to World Coordinates - Transforming from object "local" coordinates to a world system involves translating the world origin to modeling coordinates, then rotating the world system so that the x and y axes of the two systems coincide. This transformation may not be necessary if the objects are defined directly in world coordinates.
  2. World to Viewing Coordinates - This involves similar transformations to those listed above, except the viewing system is transformed to coincide with the world system. This step may be omitted if the viewing reference frame is not rotated or displaced from the world frame.
  3. Viewing to Normalized Viewing Coordinates - Here, a rectangular window is specified in viewing coordinates and a rectangular viewport is specified in NVC space, where viewable points are in the range 0 to 1. Points within the window are transformed so that they have the same relative position in the viewport.
  4. NVC to Device Coordinates - This can be accomplished with one or more "workstation" transformations similar to the previous step. That is, for each workstation, a window is specified in NVC space, and a viewport is specified in the coordinate system of the device to be used to display that section of the scene.
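The mapping used in steps 3 and 4 can be written as a single C routine: a point keeps the same relative position in the viewport that it had in the window. The parameter names here are illustrative:

```c
#include <assert.h>

/* Window-to-viewport transformation: map (xw, yw), given a window
   (wxmin,wymin)-(wxmax,wymax), to (xv, yv) in a viewport
   (vxmin,vymin)-(vxmax,vymax), preserving relative position. */
static void window_to_viewport(double xw, double yw,
                               double wxmin, double wymin,
                               double wxmax, double wymax,
                               double vxmin, double vymin,
                               double vxmax, double vymax,
                               double *xv, double *yv)
{
    double sx = (vxmax - vxmin) / (wxmax - wxmin);
    double sy = (vymax - vymin) / (wymax - wymin);
    *xv = vxmin + (xw - wxmin) * sx;
    *yv = vymin + (yw - wymin) * sy;
}
```

With a unit-square viewport this is exactly the viewing-to-NVC step; with a device-coordinate viewport it is the workstation transformation.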


Part (b) - 6 points

State the OpenGL functions and their 2D parameters (only) for the viewing window and viewport.

gluOrtho2D(Left,Right,Bottom,Top)
glViewport(xo,yo,Width,Height)


Question 6


Part (a) - 5 points

Briefly explain why a line-clipping algorithm cannot be used to clip polygons correctly.

Clipping each line produces only the clipped coordinates for that line (either no coordinates or the two endpoints of the clipped line). In general, there is no information about the order in which the line endpoints must be connected to produce a polygon vertex list. See Figure 6-17 on page 237 in the textbook for an illustration.


Part (b) - 10 points

Briefly explain why the Sutherland-Hodgman algorithm will correctly clip any convex polygon, but may incorrectly clip a concave polygon. Give an example of a concave polygon that would not be clipped correctly by this algorithm.

The Sutherland-Hodgman algorithm always returns a single vertex list, which results in a single "clipped" polygon. Clipping a convex polygon results in another convex polygon, so the algorithm works correctly with convex polygons. However, clipping a concave polygon may result in two or more separate polygon sections. Such a concave polygon clipped with this algorithm will have extraneous connecting lines between the "separate" polygon sections.
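One stage of the algorithm can be sketched in C: clip the polygon against a single boundary (here x >= xmin). The full algorithm pipes the output vertex list of one stage into the next for each window edge; the function name is illustrative:

```c
#include <assert.h>

/* One Sutherland-Hodgman stage: clip polygon (inx[],iny[]) of n
   vertices against the boundary x >= xmin, writing the output vertex
   list to outx[],outy[].  Note the result is always a single list -
   which is exactly why a concave input can come out with spurious
   connecting edges between its separated pieces. */
static int clip_left(const double inx[], const double iny[], int n,
                     double xmin, double outx[], double outy[])
{
    int i, m = 0;
    for (i = 0; i < n; i++) {
        double x0 = inx[i], y0 = iny[i];
        double x1 = inx[(i + 1) % n], y1 = iny[(i + 1) % n];
        int in0 = x0 >= xmin, in1 = x1 >= xmin;
        if (in0 != in1) {                  /* edge crosses the boundary */
            double t = (xmin - x0) / (x1 - x0);
            outx[m] = xmin;
            outy[m] = y0 + t * (y1 - y0);
            m++;
        }
        if (in1) {                         /* second endpoint inside: keep it */
            outx[m] = x1;
            outy[m] = y1;
            m++;
        }
    }
    return m;                              /* vertices in the clipped list */
}
```

For a convex input, every stage outputs a convex polygon, so the single-list assumption is always valid.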

For an example of a concave polygon that would not be clipped correctly by the Sutherland-Hodgman algorithm, see Figure 6-24 on page 242 in the textbook.