
Published by the IEEE Computer Society. 0272-1716/11/$26.00 © 2011 IEEE. IEEE Computer Graphics and Applications, March/April 2011

Education Editors: Gitta Domik and Scott Owen

Teaching a Shader-Based Introduction to Computer Graphics

Ed Angel, University of New Mexico

Dave Shreiner, ARM

Over the past 10 years, OpenGL has become the standard application programming interface (API) for teaching introductory computer graphics courses in computer science, engineering, and mathematics. Faculty can rely on many OpenGL features—such as its availability across platforms, its backward compatibility, and its defaults—to support a variety of teaching styles and course goals. More recently, OpenGL has changed radically to support new hardware features, and new versions of OpenGL necessitate reexamining and updating our “standard” way of teaching computer graphics.

Starting with OpenGL 3.1, OpenGL ES 2.0, and now WebGL, all applications require programmable shaders, and every application must provide its own shaders. Also, the standard OpenGL functions that every instructor used to create simple objects, carry out transformations, and do lighting calculations have been deprecated, as have most of the standard state variables and their default values.

These changes force instructors to either use an outdated model and software or radically change the introductory course. As we investigated whether such a course could incorporate these changes, we discovered three things. First, our academic colleagues were unaware of the changes. Second, most of our professional colleagues didn’t believe we could successfully bring this approach to an introductory course. Finally, after redoing many of our introductory examples, we found that incorporating these changes is indeed possible, and teaching a shader-based course has surprising advantages.

The Old Programming Model

Introductory computer graphics courses focus on basic concepts of image formation, geometry, transformations, and lighting. Early courses used a bottom-up approach,1,2 in which students started with low-level primitives and focused on algorithms for building and displaying higher-level objects. As portable, robust graphics libraries became available, most instructors adopted a top-down approach,3,4 in which students start programming early using a core set of graphics objects supported by standard APIs, principally OpenGL.

Most programs in a beginning course have centered on code of this form:

glBegin(…);
   glColor(…);
   glTexCoord(…);
   glNormal(…);
   glVertex(…);
   glColor(…);
   …
glEnd();

Such code relied on some basic assumptions about both the API and how data were processed. First, it assumed a data-flow or immediate-mode model in which, as soon as a vertex was generated, it was passed on for processing using the current system state. The state comprised many values, but students could rely on default values and settings in the API to get started fairly easily. When a vertex was generated, it went through a fixed-function pipeline whose behavior was controlled by the state and a fixed lighting model.

Programmable Pipelines

The advent of programmable pipelines in GPUs changed the environment dramatically. Figure 1 shows the basic model. Each vertex triggers execution of the vertex shader. Vertices are then assembled into primitives. The primitives are clipped and passed to the rasterizer, which generates fragments, each triggering a fragment shader’s execution that can generate pixels in the frame buffer.

Support for this model started with OpenGL 2.0 and DirectX 8.0. You could write shaders using Nvidia’s Cg, Microsoft’s High-Level Shading Language (HLSL), or the OpenGL Shading Language (GLSL)—a C-like language with added data types for the 2D, 3D, and 4D vectors and matrices used in computer graphics. Some textbooks discussed programmable shaders.3,4 However, because the fixed-function graphics pipeline was still available to the application, most introductory graphics classes either didn’t discuss shaders or treated them only cursorily as an advanced topic.

A New Programming Model

OpenGL 3.0 totally changed the picture. It contained a deprecation model under which—starting with OpenGL 3.1 (OpenGL is currently at version 4.1)—immediate-mode graphics would be eliminated. In addition, there would be no fixed-function pipeline and almost no default values for the internal state. Each application now must provide at least a vertex shader and a fragment shader. These changes eliminate all the function calls we used previously, plus the familiar transformation and lighting functions.

The main justification for these radical changes is that programmers must be able to access the power of modern GPUs. Immediate-mode applications, written as our previous pseudocode example, intermingle state changes (glColor, glTexCoord, and glNormal) with functions that generate geometry (glVertex). This coding style leads to inefficient GPU use because most of the processing time involves CPU-GPU communication. A more efficient model (see Figure 2) looks at both the geometry and attributes (colors, normals, texture coordinates, and matrices) as data that are sent to the GPU. Consequently, an application program’s main task is to form these data and get them to the GPU as efficiently as possible and then let the GPU do all the graphics work.

Our initial reaction to these changes was that teaching the introductory course would be impossible without the familiar functions and defaults we had relied on for so long. Most of our colleagues agreed. As a challenge, we used the new model in a course at Siggraph 2009 using OpenGL 3.1. Although the course was a mixed success, we were convinced that this approach is not only possible but also leads to a stronger introductory course.

A Simple Example

A basic “Hello World” application in graphics might do something such as draw a triangle using the default coordinate system. Using the standard model, the core of an application might be something like this:

glBegin(GL_POLYGON);
   glVertex2f(0.0, 0.0);
   glVertex2f(0.5, 1.0);
   glVertex2f(1.0, 0.0);
glEnd();
glFlush();

Figure 1. A graphics pipeline with programmable vertex and fragment shaders. An application program generates vertices and state information, which is sent to the vertex shader for geometric processing. Vertices are assembled into objects and tested for inclusion in the viewing volume before being sent to the rasterizer, which generates potential pixels (fragments), which are processed by the fragment shader.

Figure 2. Sending data to the GPU. State information and geometric data are placed in arrays within a vertex buffer object that can be sent as a block to the GPU. Processing these data is triggered by a single function execution (DrawArrays) that triggers passing these arrays through the pipeline.

Add a few lines to open a window and initialize the system, and you have a working program. You don’t have to specify colors because the defaults draw primitives in black on a white background. Also, you don’t have to worry about transformations and viewing because these matrices are initialized to identities.

Let’s see what we need to produce the same display under the new model. First, we put all the data into an array (see Figure 3).

We send these data to the GPU through two interrelated entities: a vertex attribute array and a vertex buffer object (VBO). The VBO is on the GPU and can contain the data in a vertex attribute array (as well as other types of data). We must construct these objects, insert our data, and link them to our shaders. In the application, our initialization code looks something like the code in Figure 4.

In the first line, loc connects to the shader variable vPosition (identified through program). The next two lines enable vertex attribute arrays and describe the data format and location. The next three lines get an identifier for a VBO, allocate one, and identify the data to place in it. When we’re ready to display the data on the GPU, we execute a single function call:

glDrawArrays(GL_TRIANGLES, 0, 3),

which draws the first three points in the array. This code would be essentially unchanged if we had more points or if the array contained other data besides vertex locations, such as colors or texture coordinates.

We must provide both a vertex shader and a fragment shader, which we write using GLSL. The standard arithmetic operators and functions are overloaded to support GLSL’s vector and matrix types, making it easy to write shaders. A vertex shader must output a vertex position and is where we’d normally carry out modeling and viewing transformations. For example, our shader can be as simple as this:

in vec4 vPosition;

void main()
{
   gl_Position = vPosition;
}

This shader takes in each vertex that’s produced by executing glDrawArrays and passes it through the built-in variable gl_Position. Because our triangle fits in the default volume (the cube with a diagonal from [–1, –1, –1] to [1, 1, 1]), primitive assembly and clipping occur and the rasterizer generates fragments. So, our fragment shader can be as simple as this:

void main()
{
   gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}

This shader only colors each fragment red.

So far, everything looks pretty simple and not much more difficult than our old Hello World program. But there’s a complexity. We haven’t yet discussed how we input our shaders and connect them to the application. Although this process requires several steps and some unfamiliar OpenGL functions, these steps are virtually identical for every application. We must do the following:

1. Create a program object.
2. Create shader objects, and attach them to the program object.
3. Read the shaders from files.

float points[3][2] = {{0.0, 0.0}, {0.5, 1.0}, {1.0, 0.0}};

Figure 3. Code for initializing an array with the vertex locations of a triangle.

loc = glGetAttribLocation(program, "vPosition");
glEnableVertexAttribArray(loc);
glVertexAttribPointer(loc, 2, GL_FLOAT, GL_FALSE, 0, points);
glGenBuffers(1, &buffer);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(points), points, GL_STATIC_DRAW);

Figure 4. Initialization of a vertex attribute array and a vertex buffer object in the application. First, we must obtain the location of the shader variable that will hold the data. Then, we can point the application data to that location and generate a vertex buffer object to hold the vertex data.


4. Compile the shaders.
5. Link everything.
6. If successful, tell OpenGL to use the resulting program object.

We’ve found it easiest to put all this initialization into an include file that lets students carry out these steps by a single function call of the form in Figure 5.

It takes a bit longer to get to the point at which students can write their first programs. It might take an extra lecture or two to discuss the architecture and the mechanics of setting up a VBO, but the students can see the potential of writing their own shaders right from the start. Modifying the two simple shaders we explained earlier to move vertices around or alter the colors is a trivial exercise.

A Revised Introductory Course

Let’s consider how these changes affect a typical introductory computer graphics course. Once students have gotten started, most courses spend considerable time on basic geometry, transformations, and viewing. Without the standard transformation functions, such as glRotate and glOrtho, the instructor can provide the code or have the students develop their own. We feel the latter is a major advantage of using shader-based OpenGL. Many instructors have observed that once good APIs became available and provided such functionality to students, the students spent less time on the mathematics and failed to get the desired deeper understanding of fundamental concepts. Removing these functions but providing an easy method—shaders—to implement the same functionality helps achieve the desired level of understanding.

A further advantage that comes from this approach (and makes for excellent assignments) is that the students have multiple options for where and when to apply the transformations. Such exercises often help students better understand the architecture and how to get their applications to perform better.

Starting with OpenGL 3.1, polygons other than triangles were deprecated. Although generating two triangles for each quadrilateral is a bit annoying at first, it opens the door to fundamental topics such as triangulation and mesh generation.

Introductory computer graphics courses spend much time on lighting and shading. The original implementation of the Blinn-Phong lighting model in hardware and software was a great advance. However, compared with programmable shaders, the old fixed-function approach is limiting. Using glLight and glMaterial restricts us to vertex lighting with one model. We can talk about other models and per-fragment lighting, but we have no way to implement them with the fixed-function pipeline. With programmable shaders, we can do the lighting calculation in the application or in either shader, and that part of the code is almost identical regardless of where we do the work. This flexibility and the ability to use other models, such as nonphotorealistic shading, make for a far better course.

A typical course also discusses texture mapping. Shader-based programming lets us easily apply texture mapping in the fragment shader. OpenGL’s newer versions retain nearly all the functions to set up a texture object, so this part of the course can be almost unchanged.

In the past, a major problem for instructors was choosing a platform for student projects. One of OpenGL’s attractions has been that you can use it on Windows, Mac OS X, and Linux. OpenGL ES 2.0 is also shader-based and has been implemented on portable devices (such as cell phones). Likewise, WebGL is shader-based and operates under HTML5. Code for both these versions is almost identical to code developed for OpenGL 3.1. So, instructors of shader-based courses can let students do their work in new and exciting environments, while still concentrating on the standard core topics.

program = initShader("vertex shader file", "fragment shader file");

Figure 5. Using a shader initialization function that returns a program object’s identifier. This function is constructed from a set of OpenGL functions that read in the shaders from files, create program and shader objects, compile the shaders, and link everything together. The function also contains error checking.

Depending on the students’ level and whether there are other computer graphics courses, instructors might include more material in the introductory course. If instructors cover curves and surfaces, OpenGL-style evaluators have been deprecated, but writing shaders to render curves and surfaces is a direct and informative exercise. The latest versions of OpenGL support geometry shaders, which let a vertex generate additional vertices—a key issue in subdividing curves and surfaces. OpenGL 4.1 also supports tessellation.

As exciting as this approach is, as with anything new, it will take some time for instructors to prepare new teaching materials and become comfortable with the new programming model and available software. We’ve coauthored a textbook based on this approach5 and placed sample code and other information at www.cs.unm.edu/~angel. You can find additional information, including the standards, at the OpenGL website (www.opengl.org) and in various programming guides.6–8

References
1. J. Foley et al., Computer Graphics, 2nd ed., Addison-Wesley, 1996.
2. D. Rogers, Procedural Elements for Computer Graphics, 2nd ed., McGraw-Hill, 1998.
3. E. Angel, Interactive Computer Graphics, 5th ed., Addison-Wesley, 2009.
4. M. Bailey and S. Cunningham, Graphics Shaders, A K Peters, 2009.
5. E. Angel and D. Shreiner, Interactive Computer Graphics, 6th ed., Addison-Wesley, 2011.
6. R. Rost and B. Licea-Kane, OpenGL Shading Language, 3rd ed., Addison-Wesley, 2010.
7. D. Shreiner, OpenGL Programming Guide, 7th ed., Addison-Wesley, 2010.
8. A. Munshi, D. Ginsburg, and D. Shreiner, OpenGL ES 2.0 Programming Guide, Addison-Wesley, 2009.

Ed Angel is a professor emeritus of computer science at the University of New Mexico and the founding director of the university’s Art, Research, Technology, and Science Laboratory (ARTS Lab). Contact him at [email protected].

Dave Shreiner is the director of graphics technology at ARM. Contact him at [email protected].

Contact department editors Gitta Domik at [email protected] and Scott Owen at [email protected].

Selected CS articles and columns are also available for free at http://ComputingNow.computer.org.
