polygon 2008
DESCRIPTION
Polygon is a tribute to the scholarship and dedication of the faculty at Miami Dade College in interdisciplinary areas.

TRANSCRIPT
Polygon is MDC Hialeah's Academic Journal. It is a multi-disciplinary online publication whose purpose is to display the academic work produced by faculty and staff. In this issue, we find eight articles that celebrate the scholarship of teaching and learning from different academic disciplines. As we cannot understand a polygon merely by contemplating its sides, our goal is to present work that represents the campus as a whole. We encourage our colleagues to send in submissions for the next issue of Polygon.

The editorial committee and reviewers would like to thank Dr. Miles, Dr. Bradley-Hess, Dr. Castro, and Prof. Jofre for their unwavering support. Also, we would like to thank Prof. Javier Dueñas for his work on the design of the journal. In addition, the committee would like to thank the contributors for making this edition possible. It is our hope that you, our colleagues, continue to contribute and support the mission of the journal.

Sincerely,
The Polygon Editorial Committee

The Editorial Committee:
Dr. Mohammad Shakil - Editor-in-Chief
Dr. Jaime Bestard
Prof. Victor Calderin

Reviewers:
Prof. Steve Strizver-Munoz
Prof. Joseph Wirtel

Patrons:
Dr. Cindy Miles, President
Dr. Ana Maria Bradley-Hess, Dean
Dr. Caridad Castro, Chair of LAS
Prof. Maria Jofre, Chair of EAP and Foreign Languages

Mission of Miami Dade College

The mission of the College is to provide accessible, affordable, high-quality education that keeps the learner's needs at the center of the decision-making process.
Miami Dade College District Board of Trustees
Helen Aguirre Ferré, Chair
Armando J. Bucelo Jr.
Peter W. Roulhac
Marielena A. Villamil
Mirta “Mikki” Canton
Benjamin León III
Eduardo J. Padrón
Editorial Note
An Approach to Course Assessment Techniques: Implementation of Teaching Goals Inventory by MAC1105 "College Algebra" Learning Outcomes
Dr. Jaime Bestard
A Short Communication on Going Green and Sustainable Developed Economy in Terms of Fuel
Dr. Jaime Bestard
Science and Math: Multiple Intelligences and Brain-Based Learning
Loretta Blanchette
Camera Obscura: The Cult of the Camera in David Lynch's Lost Highway
Victor Calderín
Going Beyond Academics: Mentoring Latina Student Writers
Dr. Ivonne Lamazares
Classroom Assessment Techniques and Their Implementation in a Mathematics Class
Dr. Mohammad Shakil
A Multiple Linear Regression Model to Predict the Student's Final Grade in a Mathematics Class
Dr. Mohammad Shakil
Assessing Student Performance Using Test Item Analysis and its Relevance to the State Exit Final Exams of MAT0024 Classes
Dr. Mohammad Shakil
An Approach to Course Assessment Techniques: Implementation of Teaching Goals Inventory by MAC1105 “College Algebra” Learning Outcomes
Dr. Jaime Bestard
Department of Mathematics, Liberal Arts and Sciences
Miami-Dade College, Hialeah Campus
1780 West 49th Street
Hialeah, Florida 33012, USA
Email: [email protected]
ABSTRACT

The process that takes place at Miami-Dade College (MDC) with the Quality Enhancement Plan (QEP) and the implementation of the institutional Learning Outcomes includes the disciplines and the courses as subjects of systematic teaching-learning units. This paper explains the implementation of the Teaching Goals Inventory (TGI) via Classroom Assessment Techniques (CATs) in a specific course in the discipline of mathematics, one that significantly impacts the performance of the students.
Theme: Educational Research
Key words: Assessment Techniques
1. Introduction
Basic algebraic skills come together in the instruction of MAC1105 “College Algebra”, the first college-level mathematics course that students (in many different majors at MDC) are required to take. Developing the Teaching Goals Inventory and putting it to work in terms of the competencies of the course, so as to produce the desired learning outcomes, requires assessment by the instructor. The following structures the process of implementing such a CAT.
2. Methods
2.1) SELECTION OF THE TEACHING STRATEGY
MAC1105 “College Algebra” is currently declared an at-risk course, college-wide, according to the QEP at MDC. The faculty member responsible for instruction of the subject is entitled to develop the Teaching Goals Inventory, as well as the Course Objectives and Learning Outcomes stated in the course syllabus (Appendix 1). The course objectives are: “…
1) To manipulate algebraic expressions involving rational and radical components, as well as complex numbers, towards their simplification.
2) To solve equations and inequalities, integrating the previous objective.
3) To graph equations, to identify functions, and to integrate both into the analysis of functions, graphically and analytically. …”

Observe that these actions are to be considered at the objective levels of knowledge, comprehension, analysis, application and synthesis. Specifically, the manipulation of algebraic expressions in the solution of equations and inequalities is an outcome that involves the first three levels, while the analysis of a function, even just for the domain and the function values, is part of the application and synthesis of those competencies (Huitt, 2004, citing Bloom’s Taxonomy). The mastery of these competencies determines the integration of the knowledge into an outcome; according to Angelo and Cross, treating integrated problem-solving skills as a learning outcome becomes a great tool for vertical links across the discipline. Such mastery produces the necessary, so-called meta-cognition in a student-centered activity, which is understood as the avenue to students’ self-understanding of their performance, making them conscious, self-regulating controllers of their learning in the instructional process. This makes it convenient to use peer cooperation in class and student participation as techniques that will improve and restore the confidence of the
students with applications and synthesis in the topic. These arguments allow the application of the seven principles and assure good practice in undergraduate education (Chickering and Gamson, 1987).
2.2) WHAT WE HOPE TO ACCOMPLISH
The integration of the teaching goals inventory (as listed below) with the specific course competencies leads to the implementation of the Learning Outcomes that correspond to this activity:
1) To manipulate algebraic expressions involving rational and radical components, as well as complex numbers, towards their simplification.
2) To solve equations and inequalities, integrating the previous objective.
3) To graph equations, to identify functions, and to integrate both into the analysis of functions, graphically and analytically.
By the following course competencies:
1) Solve linear equations and inequalities involving absolute value.
2) Solve equations involving rational expressions.
3) Solve word problems involving rational expressions.
4) Solve radical expressions.
5) Solve quadratic and cubic inequalities in one variable.
6) Solve inequalities involving rational expressions.
14) Find the domain of functions.
15) Find the value of the function for certain inputs.
The students are instructed in how to integrate the components of the goal, and they become conscious learners by understanding the level of their performance and playing an active role in the self-control of their learning, producing the learning outcomes.
2.3) SELECTION OF THE CLASSROOM ASSESSMENT TECHNIQUE
In selecting the corresponding CAT, it is systematic to follow the rationale that Angelo and Cross recommend:
1) Starting from the rationale, the CAT selected is #34, the “Interest/Knowledge/Skills Checklist”.
2) This CAT requires a medium level of time and energy for the faculty member to prepare and use, since the course-specific IKS checklists are brief versions prepared from the goals inventory for the subject (MAC1105 College Algebra, Appendix 1).
Given that the course design attempts to take time out of Chapters 1 and 2 (the theory of equations and inequalities) in order to expand Chapter 3 with applications, it is very important and convenient for this particular course to assess, properly and on time, the acquisition of the full knowledge and skills, evaluating the students' motivation through their demonstrated interest.
3) The technique consists of a ten-question survey, given after the topics are explained, on how students feel when solving applications of equations and inequalities to the determination of the domain of a function. While the responses are somewhat predictable, tracking anxiety and interest, they are at the same time an indicator of the level of knowledge the students obtained in Chapter 2 and of how they practice the skill of applying such knowledge to particular problems of varying difficulty. Notably, a handout of ten problems, provided in Appendix 2, is applied beforehand in a practical class.
4) The purpose of this technique is to produce feedback regarding the incorporation of Chapters 1 and 2 into the applications and new principles of Chapter 3, showing how effective the integration was. The CAT IKS checklist lets the instructor pinpoint the particular skills about which students may feel most anxious when facing applications.
5) The teaching goals under study by this CAT, in order to facilitate feedback through the application of the IKS checklist, are:
• To manipulate algebraic expressions involving rational and radical components, as well as complex numbers, towards their simplification.
• To solve equations and inequalities, integrating the previous objectives.
• To graph equations, to identify functions, and to integrate both into the analysis of functions, graphically and analytically.
6) An important suggestion for use: this technique should encourage partial group work in class before the completion of the instrument; students may be allowed to consult one another on problems 5, 6, and 10. After the class exercise, apply the CAT and give the students 15 minutes to process and summarize their experiences individually. Since the instrument is anonymous, it is almost certain they will be critical enough to let the necessary feedback flow to the instructor.
7) An example of the instrument can be seen in Angelo and Cross, pages 285-289.
8) Step-by-step procedure:
a) Give the students the class exercise (Appendix 2): 30 min.
b) Let the students exchange opinions about questions 5, 6 and 10: 5 min.
c) Give the students the CAT instrument (Appendix 3) and allow time to answer: 10 min.
d) Collect the instrument and, out of class, tally the data and process the information.
9) Suggestions for analyzing the collected feedback: Tally the data and classify the results overall, by the basic skills (questions 1, 2, 3, 7, 8), by the intermediate difficulty through integration (4, 5, 9), and by the upper difficulty (6, 10). Compare the results of the CAT with the students' real performance on the class exercise, graded Satisfactory, Progress and Unsatisfactory. The display of the bar graphs gives a hint as to whether to apply a quantitative statistical analysis to support the conclusions or simply to discuss on a qualitative basis.
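The tallying suggested in this step can be sketched in Python. The snippet below (the function name `tally` and the grouping dictionary are ours, not part of the CAT materials) classifies the S/P/U counts from the class-exercise table in Section 3 by difficulty group:

```python
# question -> (Satisfactory, Progress, Unsatisfactory), from the table in Section 3
results = {
    1: (18, 1, 2), 2: (20, 1, 1), 3: (13, 4, 4), 4: (11, 3, 7),
    5: (10, 5, 6), 6: (7, 9, 5), 7: (17, 3, 1), 8: (15, 4, 2),
    9: (16, 4, 1), 10: (13, 2, 6),
}

# Difficulty grouping as described in step 9
groups = {
    "basic": [1, 2, 3, 7, 8],
    "intermediate": [4, 5, 9],
    "upper": [6, 10],
}

def tally(question_ids):
    """Sum the S, P and U counts over the questions in one difficulty group."""
    s = sum(results[q][0] for q in question_ids)
    p = sum(results[q][1] for q in question_ids)
    u = sum(results[q][2] for q in question_ids)
    return s, p, u

for name, qs in groups.items():
    s, p, u = tally(qs)
    total = s + p + u
    print(f"{name:13s} S={s:3d} P={p:3d} U={u:3d} (satisfactory {s / total:.0%})")
```

The per-group percentages give exactly the kind of quick quantitative summary that can decide between a statistical analysis and a purely qualitative discussion.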
10) To apply and/or extend this CAT, the instructor may adapt it to the topic of the “Library of Functions” (quadratic, polynomial and rational functions), as well as to exponential and logarithmic functions. The instructor may also extend the results to a post-exercise activity, giving the students an extended assignment on the domain and function values consisting of 30 similar questions to solve as homework, and repeat the instrument in the next activity before discussing the previous one.
11) The previous point can be considered a plus: by creating a remedial environment, the students are definitely motivated to visit the academic resource centers on campus and to use the instructor's office hours.
12) The instructor may not have enough time dedicated to this activity in the tentative schedule.
13) A caveat may appear when students exchange too much information during the class exercise. It is therefore recommended that they exchange their work rather than their responses on the selected questions.
3) Data analysis and results

a) Class exercise:

Question   S   P   U
   1      18   1   2
   2      20   1   1
   3      13   4   4
   4      11   3   7
   5      10   5   6
   6       7   9   5
   7      17   3   1
   8      15   4   2
   9      16   4   1
  10      13   2   6
b) CAT - IKS Checklist

1) INTEREST in Course Topics

Topic     0   1   2   3
  1       2   5   8   7
  2       1   8   9   3
  3       2   9   7   3
  4       0  10  10   1
  5       1   9   9   2
  6       0   2  17   2
  7       1  11   8   1
  8       2   8   9   2
  9       0  11   9   1
 10       0  15   6   0

2) KNOWLEDGE / SKILLS

Area      N   B   F   A
  1       3   5  12   1
  2       2   7  10   2
  3       2   9   8   2
  4       1   9  10   1
  5       4  10   7   0
  6       5   9   6   1
  7       2  10   6   3
  8       3   9   7   2
  9       4  11   4   2
 10       5  10   3   3
Analysis of results:
For a qualitative display, observe Charts 1, 2 and 3 in Appendix 4.

Observe in Chart 1 how the complexity of questions 5, 6, 9 and 10 actually shows up as a higher number of Unsatisfactory responses.

In Chart 2, students show more interest at the medium level, which is still visible at the higher level of TGI integration.

Chart 3 shows a consistent trend toward a medium level of knowledge/skills.
Obviously, when the performance is compared with the students' self-confidence, it is observed that their self-assessments still overestimate their capabilities; the implementation of the post-exercise makes them realize their actual standing. This self-confidence is not yet dangerous, since the instructor is looking to reduce the anxiety of conducting the task, which is actually observed before this activity.
The ANOVA results by question and topic for the class exercise, interests and knowledge/skills show significant differences at the 5% significance level.

A CORRELATION analysis between the class exercise and course-topic interest, per question or difficulty level, shows a weak correlation at the 5% significance level, as does the analysis between course-topic interest and knowledge/skills.

Remarkably, there is a strong CORRELATION between the class exercise and knowledge/skills, per question or difficulty level, at the 5% significance level.
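As an illustration of how such a correlation could be computed, here is a minimal pure-Python sketch (not the author's actual analysis; the helper name `pearson_r` is ours) using the Satisfactory counts from the class exercise and the combined Fairly adequate/Advanced self-ratings from the tables in Section 3:

```python
from math import sqrt

# Per-question tallies copied from the tables in Section 3
satisfactory = [18, 20, 13, 11, 10, 7, 17, 15, 16, 13]  # class exercise, S counts
fairly_adv   = [13, 12, 10, 11, 7, 7, 9, 9, 6, 6]       # IKS checklist, F + A counts

def pearson_r(x, y):
    """Sample Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

r = pearson_r(satisfactory, fairly_adv)
print(f"r = {r:.3f}")
```

A positive r indicates that better performance on the exercise goes together with higher self-rated skills; significance testing of r (and the ANOVA) would proceed from tallies like these.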
T-TESTS supporting the CORRELATIONS:
No significant differences between “Satisfactory results in Class Exercises” and “Upper Level Interests” (5 % significance level)
No significant differences between “Upper Level Interests” and “Fairly / Advanced Levels of Knowledge / Skills” (5 % significance level)
Significant differences between “Satisfactory results in Class Exercises” and “Fairly / Advanced Levels of Knowledge / Skills” (5 % significance level)
4) Conclusive remarks and future implications
It is recommended to use this CAT, since the level of association is strong between the CAT results for a class exercise and the effect produced by the interest/knowledge/skills self-confidence survey, especially when the course involves integrative topics from previous chapters, as in the Library of Functions, quadratic, polynomial and rational functions, and especially exponential and logarithmic functions, where students arrive with a deep lack of self-confidence. It is also possible to modify the strategy by linking it to the results of the basic principles of the analysis of functions.

The class activity and the students' level of involvement remain appropriate, from motivation through self-confidence; moreover, the post-activity will reinforce the results and the appropriation of the necessary skills at the levels of analysis, synthesis and evaluation.
Acknowledgements:

To the institution that supports this research, Miami-Dade College, and to my departmental and discipline colleagues, who made this work possible with their contributions and viewpoints.
REFERENCES:
[1] Angelo, T. A. and Cross, K. P. Classroom Assessment Techniques: A Handbook for College Teachers, 2nd Edition. San Francisco: Jossey-Bass, 1993, pp. 19-25, 105-158, 213-230, 299-316.
[2] Chickering, A. W. and Gamson, Z. F. Seven Principles for Good Practice in Undergraduate Education. AAHE Bulletin, March 1987.
[3] Huitt. Teaching Strategies. AAHE Bulletin, March 2004 (citation for Bloom’s Taxonomy).
[4] McGlynn, Angela Provitera. Successful Beginnings for College Teaching: Engaging Your Students from the First Day. Atwood Publishing, Volume 2, Teaching Techniques/Strategies Series, 2001.
[5] Multiple Authors. Course Instructor Packet, MAC1105. Mathematics Department, MDC, North Campus, July 2004.
[6] Multiple Authors. Quality Enhancement Plan: Mathematics. MDC, 2004.
Dr. Jaime Bestard received his Ph.D. degree in Mechanical Engineering from the University of Las Villas (Cuba) in 1994 under the direction of Dr. Ing. Jochen Goldhan and Prof. Dr. Sc. Dr. Ing. Klaus Ploetner of the University of Rostock (Germany). From 1979 to 1995 he was at the University of Las Villas (Santa Clara, Cuba); from 1998 to 2005 at Barry University (Miami, FL); and from 2005 to the present at Miami Dade College (Miami, FL, USA). His research interests focus on energy from agricultural by-products, the undergraduate teaching of mathematics and physics, and engineering curriculum development.
Appendix 1 Course Syllabus with TGI
MIAMI-DADE COLLEGE
HIALEAH CAMPUS

Dept.: Liberal Arts and Sciences
Course: REF # 423950 MAC 1105 “College Algebra”, 3 credits. Fall 2007-1
Textbook: “Algebra and Trigonometry”, Author: Sullivan. Pearson Addison Wesley, Eighth Edition; ISBN-10: 0132329034; ISBN-13: 9780132329033
Meeting Days: M, W, F 9:00-9:50 AM, Room 1315
Instructor: Dr. Jaime Bestard. Email: [email protected] Ph: (305) 237-8766
Office Hours: M, W, F 12:00-1:00 PM, Room 1413-06
Course description: This course is a survey of the concepts of college algebra: linear, quadratic, rational, radical, exponential and logarithmic equations; graphing linear equations and inequalities in one variable; solving systems of linear equations and inequalities in two variables; complex numbers; word problems; and the exploration of elementary functions. Prerequisite: MAT 1033, or a prescribed score on the Algebra Placement Test. Special Fee. (3 hr. lecture.) Calculator use is strongly advised. You must be familiar with the calculator you will use in the course; if necessary, seek assistance out of class during office hours or in the academic support laboratory.
Course Objectives:
1) To manipulate algebraic expressions involving rational and radical as well as complex numbers components towards their simplification.
2) To solve equations and inequalities integrating the previous objectives.
3) To graph equations, to identify functions and to integrate both into the analysis of functions, graphically and analytically.
4) To integrate the principles in 1-3 into exponential and logarithmic expressions, equations and functions.
5) To integrate the solution of systems of equations and inequalities into real professional problems.
General Education Learning Outcomes:
1. Communicate effectively, using listening, speaking, reading, and writing skills.
2. Use quantitative analytical skills to evaluate and process numerical data.
3. Solve problems using critical and creative thinking and scientific reasoning.
4. Formulate strategies to locate, evaluate, and apply information.
5. Demonstrate knowledge of diverse cultures, including global and historical perspectives.
6. Create strategies that can be used to fulfill personal, civic, and social responsibilities.
7. Demonstrate knowledge of ethical thinking and its application to issues in society.
8. Use computer and emerging technologies effectively.
9. Demonstrate an appreciation for aesthetics and creative activities.
10. Describe how natural systems function and recognize the impact of humans on the environment.
Course Competencies
1) Solve linear equations and inequalities involving absolute value.
2) Solve equations involving rational expressions.
3) Solve word problems involving rational expressions.
4) Solve radical expressions.
5) Solve quadratic and cubic inequalities in one variable.
6) Solve inequalities involving rational expressions.
7) Find the distance between two points on a number line.
8) Use the distance formula to find the distance between two points in the plane.
9) Determine the standard form of a circle, and graph the circle.
10) Determine the standard form of a line given certain conditions pertaining to the line.
11) Determine the standard form for the equation of a vertical parabola.
12) Graph a vertical parabola.
13) Define the terms ‘relation’ and ‘function’.
14) Find the domain of functions.
15) Find the value of the function for certain inputs.
16) Use function notation and simplify the difference quotient for certain functions.
17) Graph linear, quadratic, radical, absolute value, and root functions.
18) Graph piecewise-defined functions.
19) Solve certain maximum and minimum problems by finding the vertex of a parabola.
20) Find the sum, difference, product, quotient, and composition of two functions.
21) Show that a function is one-to-one by using the definition or the horizontal line test.
22) Find the inverse of a one-to-one function.
23) For a simple function f, graph both f and f^(-1) on the same coordinate system.
24) Graph a polynomial function.
25) Graph a rational function.
26) Solve certain exponential equations using the property: if a^x = a^y, then x = y (a > 0 and a ≠ 1).
27) Graph both increasing and decreasing exponential functions.
28) Define the statement ‘y = log_a x’.
29) Know the properties of logarithms and solve certain problems which require their use.
30) Graph a logarithmic function and its inverse exponential function on the same coordinate system.
31) Solve exponential equations using logarithms.
32) Use the change-of-base formula to evaluate logarithms with bases other than 10 or e.
33) Graph linear systems and solve these systems by substitution and elimination.
34) Evaluate 2 x 2 and 3 x 3 determinants using expansion by minors.
35) Use Cramer’s Rule to solve 2 x 2 and 3 x 3 linear systems.
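Competencies 34 and 35 lend themselves to a small worked example. The sketch below (the function names `det2` and `cramer_2x2` are illustrative, not part of the course materials) evaluates a 2 x 2 determinant and applies Cramer's Rule:

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

def cramer_2x2(a1, b1, c1, a2, b2, c2):
    """Solve the system a1*x + b1*y = c1, a2*x + b2*y = c2 via Cramer's Rule."""
    d = det2(a1, b1, a2, b2)          # determinant of the coefficient matrix
    if d == 0:
        raise ValueError("singular system: no unique solution")
    x = det2(c1, b1, c2, b2) / d      # replace the x-column with the constants
    y = det2(a1, c1, a2, c2) / d      # replace the y-column with the constants
    return x, y

# Example: x + y = 3 and 2x - y = 0 gives x = 1, y = 2
print(cramer_2x2(1, 1, 3, 2, -1, 0))
```

The 3 x 3 case follows the same pattern, with determinants evaluated by expansion by minors.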
EVALUATION POLICY: Three one-hour tests, four quizzes, two projects, three HW portfolios and a mandatory comprehensive final exam will be given during the term. Students are expected to show and write all their work and conclusions in quizzes, tests, exams, and assessments in the form of projects and HW. Homework will be returned in the form of cumulative portfolios on the day of each partial test. The final grade will be calculated as follows: 5% for each homework cumulative portfolio, 5% for each project, 5% for the instructor's assessment of class participation, 5% for each quiz, 10% for each test; the final counts for 20%. Two missing partial evaluations will result in a failing grade. Students with EXCELLENT performance during the course might not be required to take the final exam and will be appointed by the instructor. Absolutely no MAKE-UPS AND LATE RETURNS is the POLICY for evaluations. HOMEWORK IS DUE AT EVERY NEXT MEETING; homework late returns are not accepted.

GRADING SCALE: 90-100 = A; 80-89 = B; 70-79 = C; 60-69 = D; 0-59 = F.

ATTENDANCE: Attendance and punctuality are mandatory; late arrivals and early leaves are supposed to occur only at session breaks, to eliminate disruptions. Students are expected to attend, to be punctual and to participate in class. Students are responsible for preparing all topics and material covered in class. Students who attend classes but do not appear on the class roll will be asked to report to the Registrar's Office to obtain a paid/validated schedule. Under no circumstances will you be allowed to remain in class if your schedule is not stamped paid/validated. Mobile phones are to be turned off during lectures.

DROPS/WITHDRAWALS: It is the student's responsibility to withdraw from the class should he/she decide to.

Cheating and Plagiarism: Academic honesty is the expected mode of behavior. Students are responsible for knowing the policies regarding cheating and plagiarism and the penalties for such behavior. Failure of an individual faculty member to remind students of what constitutes cheating and plagiarism does not relieve the student of this responsibility. Students must take care not to provide opportunities for others to cheat. Students must inform the faculty member if cheating or plagiarism is taking place.

Diversity Statement: The MDC community shares the belief that individual and collective educational excellence can only be achieved in an environment where human diversity is valued.

Students with Disabilities: It is my intention to work with students with disabilities, and I recommend that they contact Access Services, (305) 237-1272, Room 6112, North Campus, to arrange for any special accommodations.
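The weighted grade computation in the evaluation policy can be sketched as follows (a sketch only: the component weights and grading scale are from the syllabus, while the function names and the sample scores are ours):

```python
# Per-item weights from the evaluation policy: 3 HW portfolios and 2 projects
# at 5% each, participation 5%, 4 quizzes at 5% each, 3 tests at 10% each,
# final exam 20% (totals 100%).
WEIGHTS = {
    "hw_portfolio": 0.05,
    "project": 0.05,
    "participation": 0.05,
    "quiz": 0.05,
    "test": 0.10,
    "final": 0.20,
}

def final_grade(hw, projects, participation, quizzes, tests, final):
    """Weighted course grade on a 0-100 scale, from 0-100 component scores."""
    assert len(hw) == 3 and len(projects) == 2
    assert len(quizzes) == 4 and len(tests) == 3
    return (sum(h * WEIGHTS["hw_portfolio"] for h in hw)
            + sum(p * WEIGHTS["project"] for p in projects)
            + participation * WEIGHTS["participation"]
            + sum(q * WEIGHTS["quiz"] for q in quizzes)
            + sum(t * WEIGHTS["test"] for t in tests)
            + final * WEIGHTS["final"])

def letter(score):
    """Letter grade per the syllabus grading scale."""
    if score >= 90: return "A"
    if score >= 80: return "B"
    if score >= 70: return "C"
    if score >= 60: return "D"
    return "F"

g = final_grade([100] * 3, [90] * 2, 80, [85] * 4, [75] * 3, 88)
print(round(g, 1), letter(g))
```

Note that the weights sum to exactly 100%, so a perfect score on every component yields 100.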
TENTATIVE SCHEDULE
WEEK  DATE  TOPICS & EVALUATIONS  HW ASSIGNMENTS
1 Aug 29, 31 Introduction BriefingR.6; R.7; R.8 Review exercises every odd in corresponding topics
13.5; 12.5; 1.1; 12.1; 12.3; 12.6; 12.7; 12.8
Add-Drop Period ends Tu Sept 4
2 Sept 5, 7 QUIZ 1/ 1.2; 1.3 Review exercises every odd in corresponding topics
3 Sept 10-12 1.4; 1.5 Review exercises every odd in corresponding topics
4 Sept 17, 19, 21 1.6; 1.7 /TEST 1 Review exercises every odd in corresponding topics
5 Sept 24, 26, 28 2.1; 2.2; 2.3 Review exercises every odd in corresponding topics
6 Oct 1-3,5 QUIZ 2/ 2.4; 2.5 Review exercises every odd in corresponding topics
7 Oct 8, 10, 12 3.1; 3.2 Review exercises every odd in corresponding topics
8 Oct 15, 17, 19 PROJ 1 / 3.3; 3.4 Review exercises every odd in corresponding topics
9 Oct 22, 24, 26 3.5; 3.6 Review exercises every odd in corresponding topics
10 Oct 29, 31, Nov 2 TEST 2/4.1-4.5 Review exercises every odd in corresponding topics
W Period ends Tu Nov 6
11 Nov 5, 7, 9 QUIZ 3/ 5.1; 5.2 Review exercises every odd in corresponding topics
12 Nov 12, 14, 16 5.3; 5.4; 5.5; 5.6 Review exercises every odd in corresponding topics
13 Nov 19, 21 QUIZ 4/6.1; 6.2 Review exercises every odd in corresponding topics
14 Nov 26, 28, 30 6.3; 6.4; 6.5/ PROJ 2 Review exercises every odd in corresponding topics
15 Dec 3, 5, 7 6.6; 6.7; 6.8 Review exercises every odd in corresponding topics
16 Dec 10, 12, 14 TEST 3/ Exercises Extra Assignment
17 Dec 17, 19, 21 Exercises/ FINAL EXAM Extra Assignment
Appendix 2
Class exercise on applications of equations and inequalities to the determination of properties of functions, in particular the domain and the values of functions.
I) Find the domain:
1) f(x) = 3 / (x − 7)
2) f(x) = √(2x − 5)
3) f(x) = (7x + 5) / (x² + x − 8); x > 3
4) f(x) = √(x + 4) / (x² − 1)
5) f(x) = 7 / √(x² − 16); x < 7
6) f(x) = √(x² − 7x + 12) / ∛(x² − 1); x < 12
II) Find the value of the function or the value of the independent variable x:
7) f(x) = 3x − 1 > 4
8) f(x) = x² − 7x + 12; x = 0
9) f(x) = 0, f(x) = x² − 1; x = ?
10) f(x) = 3x² − x − 2 > 0; x = ?
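For self-checking, the domain questions above can be explored numerically. The sketch below (the helper name `in_domain` is ours) tests membership in the real domain by attempting to evaluate the function, covering problems 1, 2 and 5:

```python
from math import sqrt

def in_domain(f, x):
    """True if f(x) evaluates to a real number, i.e. no division by zero
    and no square root of a negative number."""
    try:
        f(x)
        return True
    except (ZeroDivisionError, ValueError):
        return False

f1 = lambda x: 3 / (x - 7)            # problem 1: domain excludes x = 7
f2 = lambda x: sqrt(2 * x - 5)        # problem 2: domain is x >= 5/2
f5 = lambda x: 7 / sqrt(x ** 2 - 16)  # problem 5: domain requires |x| > 4

print(in_domain(f1, 7), in_domain(f2, 2), in_domain(f5, 5))
```

A check like this only samples individual points; the exercise itself still asks for the domain in full algebraic form.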
Appendix 3
CLASSROOM ASSESSMENT TECHNIQUE: IKS CHECKLIST

Part 1: Interest in course topics
Directions: Please circle or bubble the option after each item below that best represents the level of motivation you feel for each topic. The numeric options represent:
0 = No interest in the topic
1 = Somewhat interested
2 = Fairly interested in discussing the topic
3 = Highly interested in the topic

Course topics (options 0 1 2 3):
1) Solving linear, quadratic, rational and radical equations
2) Solving linear, quadratic, rational and radical inequalities
3) Understanding the exclusion of denominators from the domain
4) Understanding the exclusion of subradicands from the domain
5) Understanding the exclusion of the definition set from the domain
6) Understanding the combination of topics 3 and 4 together
7) Understanding the combination of topics 3 and 5 together
8) Understanding the combination of topics 3, 4 and 5 together
9) Finding the value of the function for a particular input x
10) Finding the value of the input x for a certain value of the function f(x)

Part 2: Self-assessment of related skills and knowledge in domain and function values
Directions: Please circle or bubble the option that best represents your level of skills or knowledge in relation to the topics of domain and function values. The letters symbolize:
N = No skills, no knowledge
B = Basic skills and knowledge
F = Fairly adequate skills and knowledge
A = Advanced level of skills and knowledge

Course areas (options N B F A):
1) Solving linear, quadratic, rational and radical equations
2) Solving linear, quadratic, rational and radical inequalities
3) Understanding the exclusion of denominators from the domain
4) Understanding the exclusion of subradicands from the domain
5) Understanding the exclusion of the definition set from the domain
6) Understanding the combination of topics 3 and 4 together
7) Understanding the combination of topics 3 and 5 together
8) Understanding the combination of topics 3, 4 and 5 together
9) Finding the value of the function for a particular input x
10) Finding the value of the input x for a certain value of the function f(x)
CHART 1: Results of the Class Exercise (bar chart: number of S, P and U responses for each question 1-10)
CHART 2: Results of the Interest in Course Topics (bar chart: number of responses at each interest level 0-3 for each course topic 1-10)
CHART 3: Results of the Knowledge/Skills Checklist (bar chart: number of responses at each level N, B, F and A for each course area 1-10)
A Short Communication on Going Green and a Sustainable Developed Economy in Terms of Fuel
Dr. Jaime Bestard
Department of Mathematics, Liberal Arts and Sciences
Miami-Dade College, Hialeah Campus
1780 West 49th Street
Hialeah, Florida 33012, USA
Email: [email protected]
ABSTRACT

The facts that energy is currently supplied from overseas while the U.S. continues fighting the war on terror, and that fuel has reached costs that are hard to afford, make it interesting to open a discussion on the environmental issues related to ethanol production and its benefits. The topic becomes environmental insofar as the U.S. economy might be stronger if the “Corn Belt” region improves the production of ethanol, a green substitute for expensive imported oil. This short communication aims to open the analysis of a topic that may become a center of community service for MDC.
Theme: Technical-social
Key words: Environment, corn, ethanol, fuel
1. Introduction
The last three years were marked by an intensive construction activity in the US ethanol industry, as ground was broken on dozens of new plants throughout the U.S. Corn Belt and plans were drawn for even more facilities.[1]
By February 2006, the annual capacity of the U.S. ethanol sector reached 4.4 billion gallons, and plants under construction or expansion are likely to add another 2.1 billion gallons.[1]
If this trend and the existing policy incentives in support of ethanol continue, U.S. ethanol production could reach 7 billion gallons in 2010, about 75 % more than in 2005.[1]
2. Body
Where will ethanol producers get the corn needed to increase their production? With a corn-to-ethanol conversion rate of 2.7 gallons per bushel (a rate that many state-of-the-art facilities are already surpassing), the U.S. ethanol sector will need 2.6 billion bushels per year by 2010 (27% more than it consumed in 2005). [1]
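The corn-demand figure quoted above can be checked with one line of arithmetic (the variable names are ours; the 2.7 gallons-per-bushel rate and the 7-billion-gallon projection are from the text):

```python
# Corn needed for the projected 2010 U.S. ethanol output, at the stated
# conversion rate of 2.7 gallons of ethanol per bushel of corn.
GALLONS_PER_BUSHEL = 2.7
target_gallons = 7.0e9  # projected annual production for 2010

bushels_needed = target_gallons / GALLONS_PER_BUSHEL
print(f"{bushels_needed / 1e9:.2f} billion bushels")  # ~2.59, i.e. about 2.6
```

This reproduces the article's rounded figure of 2.6 billion bushels per year.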
That is eventually a great amount of corn, and how the market adapts to this increased demand is likely to be one of the major developments of the early 21st century in U.S. agriculture. The most recent USDA Baseline Projections suggest that much of the additional corn needed for ethanol production will be diverted from exports. [1]
However, if the United States successfully develops cellulosic biomass (wood fibers and crop residue)[2] as an alternative feedstock for ethanol production, corn would become one of many crops and plant-based materials used to produce ethanol; moreover, biomass production is renewable, powered by sunlight.
As a reminder, the following is cited from [1]:
“…That ’70s Energy Scene
The factors behind ethanol’s resurgence are eerily reminiscent of the 1970s and early 1980s, when interest in ethanol rebounded after a long period of dormancy. First, the price of crude oil has risen to its highest real level in over 20 years, averaging more than
$50 per barrel in 2005. Long-term projections from the U.S. Department of Energy’s Energy Information Administration (EIA) suggest that the price of imported low-sulfur light crude oil will exceed $46 per barrel (in 2004 prices) throughout the period 2006-30 and will approach $57 per barrel toward the end of this period. It is important to remember, however, that as the price of oil dropped during the first half of the 1980s, so, too, did ethanol’s profitability.
Second, many refineries are replacing methyl tertiary butyl ether (MTBE) with ethanol as an ingredient in gasoline. Oxygenates such as MTBE and ethanol help gasoline to burn more thoroughly, thereby reducing tailpipe emissions, and were mandated in several areas to meet clean air requirements. But many State governments have recently banned or restricted the use of MTBE after the chemical was detected in ground and surface water at numerous sites across the country. In the 1970s and 1980s, a similar phase out ended the use of lead as a gasoline additive in the United States. Both ethanol and lead raise the octane level of gasoline, so the lead phase out also fostered greater use of ethanol.
Third, the Energy Policy Act of 2005 specifies a new Renewable Fuel Standard (RFS) that will ensure that gasoline marketed in the United States contains a specific minimum amount of renewable fuel. Between 2006 and 2012, the RFS is slated to rise from 4.0 to 7.5 billion gallons per year. Assessments of the existing and likely future capacity of the U.S. ethanol industry indicate that the RFS will easily be achieved. The RFS joins a long list of incentives that the State and Federal governments have directed toward ethanol since the 1970s. One of the most important of these incentives is the Federal tax credit, initiated in 1978, to refiners and marketers of gasoline containing ethanol. The credit, which may be applied either to the Federal sales tax on the fuel or to the corporate income tax of the refiner or marketer, currently equals 51 cents per gallon of ethanol used.”
Ethanol is an obvious alternative to imports; among other benefits to the national economy, it offers employment opportunities to offset the effects of outsourcing, to absorb the post-war labor force increase from military discharges, and to provide an employment alternative under any potential immigration law, along with a reduction in soil erosion.
Even considering the influence of natural disturbances or climate irregularities, the forecast is positive, and the U.S. population may consider how to become green in terms of fuel by producing “native ethanol.”
The following table shows the influence of ENSO (El Niño/Southern Oscillation)[3] on corn yields; such forecasting can help determine how much land to plant in order to reach the amounts presented in the literature.
Fig. 1 Impact of ENSO (El Niño/Southern Oscillation) on maize yields, U.S. Corn Belt states (1972-1988)[3]

State            Mean corn crop yield (t/ha)      Change from neutral years
                 El Niño   La Niña   Neutral      El Niño   La Niña
Illinois         7.34      6.11      7.28          0.06     -1.17
Indiana          6.94      5.92      7.05         -0.11     -1.13
Iowa             7.29      5.88      7.16          0.13     -1.28
Minnesota        6.32      4.96      6.69         -0.37     -1.73
Missouri         5.85      4.75      5.62          0.23     -0.87
Nebraska         7.05      6.34      6.97          0.08     -0.63
Ohio             6.78      5.40      6.92         -0.14     -1.52
S. Dakota        4.13      3.06      4.22         -0.09     -1.16
Wisconsin        6.36      4.88      6.55         -0.19     -1.67
Interstate avg.  6.45      5.25      6.49         -0.04     -1.24
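The “change from neutral years” columns are simply each ENSO-phase mean minus the neutral-year mean. A minimal sketch reproducing them for three states, with values transcribed from the table above:

```python
# Corn yields (t/ha) by ENSO phase, transcribed from the table above for three states.
yields = {
    "Illinois":  {"el_nino": 7.34, "la_nina": 6.11, "neutral": 7.28},
    "Iowa":      {"el_nino": 7.29, "la_nina": 5.88, "neutral": 7.16},
    "Minnesota": {"el_nino": 6.32, "la_nina": 4.96, "neutral": 6.69},
}

for state, y in yields.items():
    # Change from neutral years = phase mean minus neutral-year mean.
    d_el = round(y["el_nino"] - y["neutral"], 2)
    d_la = round(y["la_nina"] - y["neutral"], 2)
    print(f"{state}: El Niño {d_el:+.2f}, La Niña {d_la:+.2f}")
```

The printed deltas match the table (e.g., Illinois +0.06 and -1.17), confirming how the last two columns were derived.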
3. Concluding remark
It is a fact: the U.S. can become a world power in clean energy and an environmental world leader.
Acknowledgements:
To the institution that supports this research, Miami-Dade College, and to my departmental and discipline colleagues, who made this possible with their contributions and viewpoints. To my wonderful family, which has always supported me in this endeavor.
REFERENCES:
[1] Baker, A. and Zahniser, S. The expanding U.S. ethanol sector is stimulating demand for corn, but alternatives to corn may dampen that demand, Amber Waves, April 2006.
[2] Bestard, J. Analysis of the sugar cane agricultural by-products industrial cutting process, Doctoral Dissertation, Universidad Central de Las Villas (UCLV), Santa Clara, Cuba, 1994.
[3] Phillips, J.G., Rosenzweig, C., Cane, M. Exploring the potential for using ENSO forecasts in the U.S. Corn Belt, Drought Network News, October 1996.
Dr. Jaime Bestard received his Ph.D. degree in Mechanical Engineering from the University of Las Villas (Cuba) in 1994 under the direction of Dr. Ing. Jochen Goldhan and Prof. Dr. Sc. Dr. Ing. Klaus Ploetner of the University of Rostock (Germany). From 1979 to 1995 he was at the University of Las Villas (Santa Clara, Cuba); from 1998 to 2005 at Barry University (Miami, FL); and since 2005 he has been at Miami Dade College (Miami, FL, USA). His research interests focus on energy from agricultural by-products, undergraduate teaching of mathematics and physics, and engineering curriculum development.
Science and Math:
Multiple Intelligences and Brain-Based Learning
Loretta Blanchette
Assistant Professor, Mathematics
Miami Dade College, Hialeah Campus, 1780 W 49th Street, Hialeah, FL 33012
Email: [email protected]
ABSTRACT
This paper explains the multiple intelligences theory and its congruence with
higher-education instructional practices and the level students must reach. There is a
deep discussion of the particularities of the theory.
Theme: Teaching learning instructional practices
Key Words: Multiple intelligences
In Howard Gardner’s book Frames of Mind, published in 1983, Gardner presents
the theory of multiple intelligences. The idea that people can exhibit intelligence in a
variety of ways was not new. However, the standard benchmarks for establishing
intelligence customarily involved linguistic and logical mathematical intelligences: the
so-called “scholastic intelligences.” (See Becoming a Multiple Intelligences School, by
Thomas F. Hoerr) While standardized tests serve well to predict future academic
success, they often fail to predict future success in the real world, Hoerr claims. Thus a
void was seemingly filled when Gardner proposed his theory on multiple intelligences.
Students can be intelligent in many ways! As a respected Harvard psychologist,
researcher, and professor, Gardner had an immediate audience for his model of
intelligence. Campbell and Campbell state in Multiple Intelligences and Student
Achievement: Success Stories from Six Schools that the multiple intelligences theory was
“appealing in part because Gardner attributes specific functions to different regions of the
brain. This neuroanatomical feature enhances the theory’s credibility with teachers, other
professionals, and lay populations.” They go on to state that “teachers cite Gardner’s
work with a sense of confidence and security because it was generated by a foremost
cognitive psychologist at one of the world’s most prestigious institutions.” However,
Gardner’s research was conducted in the late 1970s and early 1980s. Since that time,
advancement in brain research fails to support Gardner’s premise that a specific
intelligence such as logical/mathematical intelligence is biologically located in a specific
region of the brain. In fact, Dr. Spencer Kagan argues “that since different facets of the
same intelligence (are) located in different parts of the brain, it is problematic to make
brain localization the most important criterion of defining intelligence.” (See Trialogue:
Brain Localization of Intelligences, by Kagan, Gardner, and Sylwester) This is not to say
that the concept of the existence of multiple ways to demonstrate ability does not hold
merit, Kagan argues, merely that the “idea that eight relatively distinct intelligences are
supported by brain localization studies” is a false premise. It is rather scary to see web
articles and literary work stating emphatically that “it’s true that each child possesses all
eight intelligences” (see Multiple Intelligences in the Classroom, by Thomas Armstrong)
as though this were established scientific fact rather than an idea formulated by a
psychologist “in an effort to ascertain the optimal taxonomy of human capacities,” as
Gardner himself declares in Multiple Intelligences after Twenty Years. Gardner discusses
how the study of human abilities led him to create a new definition of what intelligence
might be, and to set up a list of “criteria that define what is, and what is not, an
intelligence.” The fact that at least some of his criteria cannot be supported by brain
science stands as a cautionary statement against over-reliance on the theory of multiple
intelligences. This is not to suggest that the ways humans demonstrate ability are not,
indeed, diverse. Furthermore, the eight “intelligences,” when utilized appropriately as a
tool to expand an educator’s awareness, serve to advance the cause of student learning
and success.
Thomas Hoerr lists Gardner’s eight intelligences and defines them as follows:
(1) Linguistic: sensitivity to the meaning and order of words
(2) Logical/mathematical: the ability to handle chains of reasoning and to recognize
patterns and order
(3) Musical: sensitivity to pitch, melody, rhythm and tone
(4) Bodily/kinesthetic: the ability to use the body skillfully and handle objects
adroitly
(5) Spatial: the ability to perceive the world accurately and to recreate or transform
aspects of that world
(6) Naturalist: the ability to recognize and classify the numerous species, the flora
and fauna, of an environment
(7) Interpersonal: the ability to understand people and relationships
(8) Intrapersonal: access to one’s emotional life as a means to understand oneself
and others.
These intelligences are observable in the science and mathematics classroom as
students excel in their own unique ways. The student who delights in the lab portion of a
science class demonstrates bodily/kinesthetic intelligence. The student who favors the
charts and graphs, creating colorful poster displays for class projects, demonstrates
spatial intelligence. The student who memorizes the periodic table by creating a little
jingle exhibits musical intelligence. The naturalist intelligence evidences itself in the
student who delights in scientific exploration of the natural world. Linguistic intelligence
shines in the writing of eloquent reports and well-stated proofs. When students work well
in a group and enjoy teaching a friend, they demonstrate interpersonal intelligence. The
intrapersonal intelligence demonstrates itself in the more reflective paper and in the self-
guided, self-motivated learner.
Technology in the classroom and in the learning environment at large serves to
enhance the eight intelligences. Through technology, students can take online distance
learning courses that enable them to express intrapersonal intelligence. Through the
utilization of Blackboard, Web-CT and other software programs students can post to
discussion boards and hold real-time discussions, enhancing their interpersonal
intelligence. The bodily/kinesthetic student benefits from interactive programs that allow
for manipulation of the mouse, a joystick, or other control device. Naturalist intelligence
benefits from the vast resources online that enable research into nature and the
environment. In addition, the Discovery Channel serves as a valuable source of
information. Musical intelligence benefits from multimedia presentation. Spatial
intelligence is enhanced by simulation software and 3-D graphics. By creative use of
technology, instructors can incorporate a wide variety of teaching styles and each of the
eight intelligences benefits.
The application of multiple intelligences theory to adult learning carries with it many
possibilities. By viewing each adult learner as possessing individual strengths and
weaknesses, professors can seek to tap into the various intelligences as a means to
strengthening understanding of core content of the course. One way to do this is to
consciously create opportunities for students to demonstrate mastery using different
modalities. Dave Alick writes: “Experience has shown that individuals truly understand
something when they can represent the knowledge in more than one way.” The
“integration of multiple intelligences and multimedia” is one powerful tool to that end.
(See Integrating Multimedia and Multiple Intelligences to Ensure Quality Learning in a
High School Biology Classroom, by Dave Alick, 1999)
The Adult Multiple Intelligences (AMI) Study, conducted in 2002, involved ten
instructors who volunteered to incorporate the multiple intelligences theory into their
teaching of adult learners. According to the report generated by this study, the theory’s
major tenets are: “Intelligence is a biopsychological potential to solve problems and
fashion products that are valued in a community or culture. Intelligence is pluralistic;
there are at least eight intelligences. Intelligences operate in combination when applied
in the real world. Every individual has a unique profile of intelligences, including
different areas of strength and distinct profile of intelligence.” (NCSALL Reports #21)
The ten teachers interpreted and applied Gardner’s multiple intelligences theory in their
instruction of ABE, ESOL and GED to adult learners utilizing MI-inspired instruction
and MI reflections. The study shows that the students gained in aspects of engagement,
self-reflection, self-esteem, and self-efficacy. While these may be considered secondary
outcomes of education, they certainly cannot be dismissed as irrelevant: a student who
enjoys learning will develop into a life-long learner.
In Brain-Friendly Strategies for the Inclusion Classroom, Judy Willis discusses the
learning brain and the “series of steps that occur when students learn.” Willis explains
that the “information pathway begins when students take in sensory data. Their brains
generate patterns by relating new material with previously learned material or by
“chunking” material into pattern systems it has used before.” It is interesting to note the
path this input travels through the human brain. Patterned data travels from the “sensory
response regions through the emotional limbic system filters” and then onto “memory
stage neurons” in the cerebral cortex. In order to retrieve and utilize or apply this stored
knowledge, the information needs to be “activated and sent to the executive function
regions of the frontal lobes. These regions are where the highest levels of cognition and
information manipulation – forming judgments, prioritizing, analyzing, organizing, and
conceptualizing – take place.” On its way to memory storage, information passes
through the limbic system where emotion and motivation influence how this input gets
remembered. Clearly, motivation plays a key role in learning! Likewise, emotions affect
retention of information. Understanding these biological and neurological facts enables
educators to become more effective teachers. As Willis persuasively states,
“understanding this brain learning research will increase educators’ familiarity with
which methods are most compatible with how students acquire, retain, retrieve, and use
information.”
Dr. Jeffery Lackney lays out the design principles in 12 Design Principles Based on
Brain-based Learning Research. The list given below, by his own admission, is “not
intended to be comprehensive in any way.”
(1) Rich, stimulating environments
(2) Places for group learning
(3) Linking indoor and outdoor places
(4) Corridors and public places
(5) Safe places
(6) Variety of places
(7) Changing displays
(8) All resources available
(9) Flexibility
(10) Active/passive places
(11) Personalized space
(12) Community-at-large as the optimal learning environment
The concept of design principles applies to the physical setting in which students engage
in the pursuit of learning. By creating safe, secure, interesting, dynamic, and varied
learning environments we facilitate learning that is brain-compatible.
Caine and Caine identified 12 core brain/mind learning principles in 1997. These
principles are intended to encourage educators to seek methods of teaching that optimize
learning based on brain research. The core principles based on brain-based learning, as
defined by Renate and Geoffrey Caine in Making Connections: Teaching and the Human
Brain and restated in BrainConnection – The Brain and Learning are as follows:
(1) The brain is a complex adaptive system
(2) The brain is a social brain
(3) The search for meaning is innate
(4) The search for meaning occurs through patterning
(5) Emotions are critical to patterning
(6) Every brain simultaneously perceives and creates parts and wholes
(7) Learning involves both focused attention and peripheral attention
(8) Learning always involves conscious and unconscious processes
(9) We have at least two ways of organizing memory
(10) Learning is developmental
(11) Complex learning is enhanced by challenge and inhibited by threat
(12) Every brain is uniquely organized
Other sources, such as Funderstanding – Brain-based Learning, list essentially these
same twelve core principles, with slightly differing vocabulary. For example, the first
core principle is stated as: “The brain is a parallel processor, meaning it can perform
several activities at once, like tasting and smelling.” The second core principle is given
as: “Learning engages the whole physiology.” In Principles of Brain Compatible
Learning, by Emily Hungerford, she writes “all learning is mind-body-movement.”
Funderstanding elaborates on core principle 9, stating the two types of memory are
“spatial and rote” and gives core principle 10 as “We understand best when facts are
embedded in natural, spatial memory.”
A brain-forming environment creates a safe place that at the same time engages
the learner. According to Caine and Caine, there are three critical elements necessary to
optimize complex learning: “relaxed alertness,” “orchestrated immersion,” and “active
processing.” (See brainconnection.com) By relaxed alertness the intention is to create a
non-threatening environment that challenges the student. By orchestrated immersion the
implication is to create authentic, relevant real life application of course content. Finally,
by active processing, the concept is to engage the student in meaningful processing of
input. For children, educators set the stage for brain-based learning by letting the tone of
the classroom be one that is welcoming and safe, warm and light-filled, with posters and
displays that relate well to their age/developmental interests. Students need to be actively
engaged in their learning and motivated toward independent discovery. Solutions need to
be approached from various perspectives and with varied methods, focusing on both the
big picture and the details. (Funderstanding) In The Brain-Compatible Classroom: Using
What We Know About Learning to Improve Teaching, Laura Erlauer gives an overview of
seven “brain-compatible fundamentals.” Among these she states that the classroom
ought to be “fun and safe,” that “oxygen, water, sleep, certain foods, and movement
affect students’ brains and their learning,” that content needs to be relevant, that students
need to be involved in decision making, and that since the brain is social, “students learn
effectively through collaborating with others, both adults and peers.”
For adult learners, a brain-forming environment involves similar concepts applied
in slightly different manners. Adult learners often work all day and take night classes.
This involves walking to and from their cars after dark. Campus security ought to be
such that all students feel safe and secure, both in the buildings and in the parking lots. A
non-threatening environment also involves the tone of the classroom. As students are
encouraged to speak up, interacting with the instructor and their peers, the learning
environment takes on a non-threatening, challenging aspect. Instructors who choose
current, real world examples to illustrate concepts find learners who are motivated to
learn. Encouraging peer tutoring and group study as well as assigning team projects are
further examples of active processing and brain-compatible learning. These strategies
apply to both mathematics and science classrooms. Indeed, any learning environment
benefits from the understanding and application of brain-based learning principles.
References:
http://www.ascd.org/portal/site/ascd/template.chapter/menuitem.b71d101a2f7c208cdeb3f
fdb62108a0c/?chapterMgmtId=589c8aec2ecaff00VgnVCM1000003d01a8c0RCRD
Becoming a Multiple Intelligences School, by Thomas R. Hoerr
http://www.ascd.org/portal/site/ascd/template.chapter/menuitem.b71d101a2f7c208cdeb3f
fdb62108a0c/?chapterMgmtId=7316177a55f9ff00VgnVCM1000003d01a8c0RCRD
Multiple Intelligences and Student Achievement: Success Stories from Six Schools, by
Linda Campbell and Bruce Campbell
http://www.kaganonline.com/KaganClub/FreeArticles/Trialogue.html, Trialogue: Brain
Localization of Intelligences, Dr. Spencer Kagan, Dr. Howard Gardner, and Dr. Robert
Sylwester (Kagan Online Magazine, Fall 2002)
http://www.pz.harvard.edu/PIs/HG_MI_after_20_years.pdf, Multiple Intelligences after
Twenty Years, Howard Gardner, April 2003.
http://www.pz.harvard.edu/Research/AMI.htm, Adult Multiple Intelligences
http://eduscapes.com/tap/topic68.htm, Technology and Multiple Intelligences
http://www.casacanada.com/multech.html, Multiple Intelligences and Technology
http://www.angelfire.com/de2/dalick/researchMI.htm#integration, Integrating
Multimedia and Multiple Intelligences to Ensure Quality Learning in a High School
Biology Classroom, EDUC 685-Multimedia Literacy , Dave Alick
December 7, 1999
http://www.ncsall.net/fileadmin/resources/research/report21.pdf, NCSALL Reports #21
http://www.ascd.org/portal/site/ascd/template.chapter/menuitem.b71d101a2f7c208cdeb3f
fdb62108a0c/?chapterMgmtId=f7fc3b356a8e2110VgnVCM1000003d01a8c0RCRD,
Brain-Friendly Strategies for the Inclusion Classroom, by Judy Willis
http://designshare.com/Research/BrainBasedLearn98.htm, 12 Design Principles Based
on Brain-based Learning Research, By Jeffery A. Lackney, Ph.D.
http://www.brainconnection.com/topics/?main=fa/brain-based3#A1, Where Did the "12
Brain/Mind Learning Principles" Come From?
http://www.funderstanding.com/brain_based_learning.cfm, Brain-based Learning
http://aldertrootes.wcpss.net/tcteam/brainshow/index.htm, Principles of Brain
Compatible Learning, Author: Emily Hungerford; Aldert Root Classical Studies Magnet
School
http://www.ascd.org, The Brain-Compatible Classroom: Using What We Know about
Learning to Improve Teaching, by Laura Erlauer
Additional Sources:
http://www.kaganonline.com/KaganClub/FreeArticles/ASK31.html, Multiple
Intelligences Structures —Opening Doors to Learning, Dr. Spencer Kagan & Miguel
Kagan
http://www.thomasarmstrong.com/multiple_intelligences.htm, Multiple Intelligences, by
Thomas Armstrong
http://wik.ed.uiuc.edu/index.php/Brain_Based_Learning, Brain Based Learning
http://www.ascd.org, November 1998 | Volume 56 | Number 3 How the Brain Learns,
Pages 20-25, The Brains behind the Brain, Marcia D'Arcangelo
http://www.businessballs.com/howardgardnermultipleintelligences.htm
http://www.ascd.org/portal/site/ascd/template.chapter/menuitem.b71d101a2f7c208cdeb3f
fdb62108a0c/?chapterMgmtId=b44c177a55f9ff00VgnVCM1000003d01a8c0RCRD
Multiple Intelligences in the Classroom, Thomas Armstrong
http://www.ascd.org/portal/site/ascd/template.chapter/menuitem.b71d101a2f7c208cdeb3f
fdb62108a0c/?chapterMgmtId=e843099a63bc6010VgnVCM1000003d01a8c0RCRD
Literacy Strategies for Improving Mathematics Instruction, by Joan M. Kenney, Euthecia
Hancewicz, Loretta Heuer, Diana Metsisto and Cynthia L. Tuttle
http://www.udel.edu/bateman/acei/multint9.htm, Multiple Intelligences: Different Ways
of Learning , Judith C. Reiff
http://www.thirteen.org/edonline/concept2class/inquiry/index_sub4.html How has
inquiry-based learning developed since it first became popular? What is inquiry-based
learning?
http://www.brynmawr.edu/biology/franklin/InquiryBasedScience.html, Inquiry
Based Approaches to Science Education: Theory and Practice
http://pubs.aged.tamu.edu/jae/pdf/Vol45/45-04-106.pdf, INQUIRY-BASED
INSTRUCTION IN SECONDARY AGRICULTURAL EDUCATION: PROBLEM-
SOLVING – AN OLD FRIEND REVISITED, Brian Parr, Assistant Professor Murray
State University M. Craig Edwards, Associate Professor, Oklahoma State University
http://solomon.bond.okstate.edu/thinkchem97/frames20.htm, Imagination and the
rich learning environment that results
http://www.qtlcenters.org/k12/fivedays.htm Five "Core" Days of QTL™
Camera Obscura: The Cult of the Camera in David Lynch's Lost Highway
Victor Calderin
Dept. of Liberal Arts and Sciences
MDC Hialeah Campus, 1780 W 49th Street
Hialeah, Florida 33012, USA
Email: [email protected]
ABSTRACT
In the works of David Lynch, the mind behind “Twin Peaks” and films like Mulholland Drive
and Inland Empire, the role of machines has always been difficult to define, and they go
beyond simple tools that characters use as needed. These devices can define and
structure a character’s behavior. This is exactly the case in Lost Highway. In this film,
the camera evolves from a mechanism that captures images into something more sinister.
Lynch transforms the camera into a character that moves the plot of the film. This paper
explores this event and its ramifications.
Theme: Film Theory
Key Words: Meta-Film, Post-Noir Cinema, David Lynch
At first glance, everything looks simple enough; it is late, although you are not
sure how late. There are a few scattered cars, which look diminished from your elevated
vantage point, in the parking lot. The lot itself is of the outdoor sort, allowing
the light to efficiently dissipate itself into the dark. The only movement is the rustling of
leaves from the trees and shrubbery in the periphery, and after a few minutes, a man in
his early thirties enters your frame of vision. He is wearing a buttoned grey trench coat
and black leather gloves, which are your first indications that it is actually quite cold
outside.
You can see that he is dialing a number into his cell phone as he
slowly walks toward his car, but since there is no sound that you can perceive, you are
not privileged to the content of his conversation. What you are privy to is the intensity of
the dialogue. While standing near the door of his car, the man’s wild gesticulations are
quite the spectacle, one which lasts exactly three minutes, one that is halted by the violent
movement of the leaves and branches in a shrubbery at the edge of your field of vision.
There is a quick transition, and now, your point of view is that of the thing in the bushes.
The man is now frantically trying to get into his car, but as his nerves hamper his
coordination, his keys fall helplessly onto the floor. Then it happens. Whatever is in the
bushes leaps out (you know this because your field of vision has shifted forward with a
violent jolt) and is racing towards the man who is frantically trying to reach his keys.
And as whatever was in the bushes, but now is clearly quite out of them, is about to reach
the man, whose face is contorting itself in the register of horror, the screen goes black.
Despite its cold, indifferent glare, the camera defines and captures what we, the
viewers, see. Through its technical manipulation, the director is able to convey his or her
message to the spectator. In addition to this relationship, there is also a connection that
exists between the spectator and the camera. This bond is established due to the fact that
the viewer identifies himself or herself with the mechanism, for it is the camera that
integrates the spectator in the drama unraveling itself on the screen. Walter Benjamin
once stated that “the audience’s identification with the actor is really identification with
the camera” (Benjamin 740). We are vested in the film because we are part of the film,
in a static voyeuristic sense. But what happens when the camera inserts itself directly
into the narrative and becomes an instigator of violence? What happens when the camera
pauses and turns to us? More importantly, what do we see?
David Lynch, the genius behind “Twin Peaks” and other post-noir films, uses the
camera as an instigator of violence. In addition to visually capturing instances of
violence, the camera itself becomes the driving force that leads the characters on the
screen to act violently. The machine becomes a means to understanding one’s identity
and also reveals hidden desires; this is especially the case with David Lynch, who usurps
the traditional role of the camera and forces the spectators to take a closer look at
themselves in a more subtle manner.
David Lynch’s Lost Highway illustrates how the camera plays an interior role in
the narration and drives the plot to its conclusion. Lynch uses the camera to introduce
problems of identity and desire that the characters face. The cast of Lost Highway is
composed of ambiguous characters who do not lend themselves to simple
classification. The primary protagonist is Fred Madison, a middle-aged jazz
saxophone player who is having some marital problems and suspects that his wife Renee
is having an affair with one of her friends. Madison is a reserved character who does not
reveal much about himself. But he does say something crucial to understanding his
character. When questioned on his lack of photographs or film in his house, he responds,
“I like to remember things my own way, not exactly the way they happened” (Lost
Highway). This sheds some light onto the fact that Fred does not believe in the camera’s
ability to capture reality. Maya Deren observes that “if realism is the term for a graphic
image which precisely simulates some real object, then a photograph must be differentiated
from it as a form of reality itself” (Deren 219). Fred would openly agree with this statement. He
cannot attribute reality to photographic reproduction. The protagonist sees film, in its
various forms, as captured points of view that cannot be representative of reality itself.
This will change with the appearances of the mysterious videotapes.
The videotapes appear mysteriously one day on the steps of Fred and Renee’s
home. The first videotape contains a short pan across the house that stops at the door,
then slowly closes in, ending in a close-up of the door. The shot
is taken in broad daylight, thus creating an air of security. Lynch is notorious for creating
the psychotic in settings that look peaceful and calm, as seen in “Twin Peaks” and Blue
Velvet. Fred believes that a real estate agent must have filmed it, and both carry on
without giving the video further significance.
The first video can be symbolic of many things. It seems to be an intrusion of the
technical in Fred’s life. It also reveals that something is not right. The video hides
something that neither the narrative nor the spectator is ready to deal with. There is an
eerie ambience overwhelming the video. Fred initially dismisses the first tape because
he is neither willing nor able to handle the implication of intrusion presented by the first
tape. The issue is further pressed with the second tape.
Upon discovering the tape, Renee is nervous when handling the video, as if there
is something hidden that she’s afraid might be on the film. The second video starts
exactly as the first, but after the close-up on the door, things become extremely
disturbing. The image quickly cuts to a high-angle traveling shot. The shot begins in the
living room, then travels into the hallway, and finally ends looking into Fred and Renee’s
bedroom where they are both asleep. Both characters are extremely disturbed and
immediately call the police.
The spectator is used to the camera being an intrusion, but the characters are not.
As viewers, we are comfortable with the freedom that the camera allows us, but when
Fred and Renee see what the viewer sees, the images disturb them. Lynch melds the
realm of observational audience and fictional character in this scene. While the theater
audience passively intrudes on the action on the screen, the camera in this case has
literally violated the security of Fred and Renee’s home. Their marital crisis becomes
fully manifest in the third tape.
Due to its complexity and composition, the third video is of the most significance.
Fred and Renee arrive home late from a party with friends. Because of the invasion of the
tapes, Fred checks the house before going to bed. The viewers see Fred walking out of
the darkness and towards the camera, the spectator’s eye, and then the screen goes black.
This take is crucial because it identifies Fred with the darkness in his house, a darkness
that is symbolic for the obscured troubles in his life. His movement towards the camera
signifies integration between himself and the mechanism. All this is connected with the
material on the last video, which Fred discovers the next morning. It is the
same footage from the previous two cassettes, but once the camera turns the corner of the
hallway leading into the bedroom, it reveals an image of Fred frantically screaming while
clutching at Renee’s dismembered body parts. The video ends with a close-up of Fred
screaming. After viewing this, Fred cries out to Renee, but no one answers, and he blacks
out.
The third video is crucial to understanding the camera’s role in the narrative
because it is the camera that captures Fred’s violent act. It symbolically represents his
subconscious mind. Fred had previously stated that he does not like the use of
technology to capture the past. The video does exactly this: it captures what Fred
does not want to see. There is a technical aspect that must be observed when analyzing
the third video. During the first two videos, the footage is always in black and white, and
the quality is inferior to the spectator’s camera. The first two videos are recorded on a
handheld digital camera, which actually appears at the end of the movie. But when Fred
is viewing the gruesome footage on the third video, the footage cuts, only for a
second, to high-quality color imagery characteristic of the 16mm cameras
used professionally in filmmaking.
This difference signifies that the footage is real because the color image is coming
from Fred’s memory, regardless of his previous suppression of the events. Fred’s
memory is merged with the images on the video as “the camera introduces us to
unconscious optics as does psychoanalysis to unconscious impulses” (Benjamin 746).
Fred’s view of film is completely skewed in this scene. His hidden desires have crept
into the physical world and have been captured on film. But even at the moment of the
viewing, he questions the camera’s validity. Fred is not ready to accept the evidence of
his subconscious rage captured on video. Something more crucial occurs in this scene.
The spectator’s perspective is combined with the footage on the video.
The camera becomes the mode through which violence is realized in the scene pertaining to
the last videocassette. The video becomes the means by which the viewer understands what
has transpired. The importance of all this is that the camera itself is missing. The
spectator has exactly what Fred and Renee have: three videos. The means of production of
these tapes is missing. This parallels the fact that the spectator rarely sees the main
cameras recording the footage that eventually becomes the film. What Lynch does with
this long sequence is to subject Fred and Renee to the same experience that a viewer will
have. During the viewing of the last tape, the events that transpire on Fred’s television,
which mirrors the viewer’s own screen, are the consequences of the events that have been
previously displayed. Stanley Cavell asks an important question about the screen. He
asks:
What does the silver screen screen? It screens me from the world it holds – that is, makes me invisible. And it screens that world from me – that is, screens its existence from me (Cavell 335).
The barrier between the viewer’s screen and Fred’s screen has been ruptured and
the viewer is involved in the violence of the camera. Cavell’s idea concerning the
divisive nature of the screen is complicated by Lynch, for the barrier is ruptured for Fred,
as the acts on the screen and the act in his life are fused. The video viewed by Fred
draws him into the narrative and also forces the viewer into the contemplative action of
the narrative. The viewer is forced to figure out where the murder occurred. And the
only available answer is “on tape.”
As the plot in Lost Highway develops, there is another instance where the camera
has an important role: the ending sequence. The last sequence establishes the camera as
a mechanism for self-identification. This issue arises at the end of the movie in
Fred’s interaction with the Mystery Man, played by Robert Blake and credited simply
as “Mystery Man.” This figure seems to be a supernatural force that
directs and leads Fred at the end of the film. The scene that captures the significance of
the camera and its role in identification is when Fred enters the cabin looking for Renee’s
doppelganger, Alice. He finds the Mystery Man instead. The Mystery Man is holding a
camera in his hand and is recording Fred. Fred asks for Alice, but Blake’s character
responds, “Her name is Renee. If she told you her name was Alice, then she was
lying.” The Mystery Man then says, “And your name…what is your name?” Fred
cannot handle this question and flees the cabin. The photographic property of the camera
seals identity; it solidifies the image and the reality of the situation. Even if Fred
refuses, or is unable, to answer the question, the camera does.
Lynch manipulates the relationship between the character and identity through his
use of the camera. Fred can never face the camera. Renee has a more interactive
relationship with it and is even caught up in it, as seen in a previous scene. She defines
herself by the image she portrays. And while she has a doppelganger, both are defined by
their representation on the screen. Pete, Fred’s doppelganger, can never focus his vision,
which results in a blurred perspective when issues of identity arise. It is only the Mystery
Man who is secure in his identity, which is that of a horrific deus ex machina. His persona
is defined by his technical mastery, so he handles various devices with a menacing
machine-like efficiency.
The camera captures violence in Lost Highway, and it is in this relationship that it
defines the characters involved in the narrative. It becomes a device that not only records
the image that the spectator views, but it becomes a narrative device in its own right. The
camera enters its own world and usurps the living characters in it (in almost a Gnostic
manner). Lynch’s characters cannot deal with the solidity that the camera represents.
Fred and Renee cannot come to terms with reality and because of this they must face the
surreal consequences of their actions. The camera is transformed from a mere recorder
of images into the creator of the events that cause the images. The created becomes the
maker.
Because the connection between camera and spectator cannot be severed, there is an
inverted narcissistic moment when the camera is placed in the film. The spectator’s eye
is now on display. The viewer must deal with the horror that is his own point of view, his
eye. The camera becomes intrusive, but it is this intrusion that changes the character and
forces the viewer to ponder. Lynch sees this activity between the observer and the
observed as cyclical and infinite, like a Möbius strip. The end of Lynch’s narrative is the
exact moment in which it began, but from another camera angle. Temporally Lynch
positions both of Fred’s points of view together. Lynch sees that fetishizing the camera
and its powers only leads to a vicious cycle of stagnation, so there are no clear answers in
Lost Highway. Although the characters change, they will repeat their actions infinitely,
and in this repetition, the spectator is trapped through identification with the camera
manifested on the screen. The camera becomes an instigator of violence for us that not
only forces us to understand its power but also forces us to look at ourselves.
Works Cited
Benjamin, Walter. “The Work of Art in the Age of Mechanical Reproduction.” 1935. Film Theory and Criticism: Introductory Readings. Ed. Leo Braudy and Marshall Cohen. New York: Oxford University Press, 1999. 731-751.
Cavell, Stanley. “From The World Viewed.” 1971. Film Theory and Criticism: Introductory Readings. Ed. Leo Braudy and Marshall Cohen. New York: Oxford University Press, 1999. 334-344.
Deren, Maya. “Cinematography: The Creative Use of Reality.” 1960. Film Theory and Criticism: Introductory Readings. Ed. Leo Braudy and Marshall Cohen. New York: Oxford University Press, 1999. 216-227.
Lost Highway. Dir. David Lynch. Perf. Bill Pullman, Patricia Arquette. USA Films, 1997.
Modleski, Tania. “The Terror of Pleasure: The Contemporary Horror Film and Postmodern Theory.” 1986. Film Theory and Criticism: Introductory Readings. Ed. Leo Braudy and Marshall Cohen. New York: Oxford University Press, 1999. 691-700.
Going Beyond Academics: Mentoring Latina Student Writers
Dr. Ivonne Lamazares
Dept. of Liberal Arts and Sciences
MDC-Hialeah Campus
1780 W 49th Street
Hialeah, Florida 33012, USA
Email: [email protected]
ABSTRACT
In the field of creative writing, mentoring relationships have a long and honorable
tradition. Poet William Carlos Williams mentored Denise Levertov; Marianne Moore
mentored Elizabeth Bishop; John Berryman mentored Philip Levine. Novelist John
Gardner mentored Ray Carver, Charles Johnson, and others; Gertrude Stein mentored
Ernest Hemingway; Nathaniel Hawthorne mentored Herman Melville, and so on. This
paper addresses the complexities of mentoring Latina students of creative writing.
(Paper Presented at the annual College Composition and Communication Conference, Spring 2006)
Theme: College Composition
Key Words: Mentoring, Latina students
Despite the myth that writing (particularly creative writing) is a solitary endeavor,
or the other myth -- that learning to write fiction or poetry can be done only through
formal course work in MFA programs -- I would argue that one-on-one conferencing and
informal interactions with a mentor remain central to a creative writing apprenticeship.
Can creative writing be taught? This is, of course, a question that has been posed
and endlessly debated. But in the end the important answers may arise in one-on-one
transactions between writing teacher and students: "Something is being taught," says
Jeffrey Skinner, "and something is being learned, in these 'conferences' between student
and teacher, each one in itself a paradoxical blend of institutionalized ritual and intimate
informality” (“Poets as Mentors,” Writers' Chronicle, 2005).
The challenges of creative writing mentoring are many. As Rilke expresses in
his Letters to a Young Poet, ". . . for one person to be able to advise or even help another, a
lot must happen, a lot must go well, a whole constellation of things must come right in
order once to succeed." Or as poet Richard Hugo tells his students, "Every moment,
I am, without wanting or trying to, telling you to write like me." David
Wojahn warns that "artistic mentors may give bad advice, may in fact give dangerous
advice."
What happens when the beginning creative writer is a Latina student, facing the
added complexities of her gender and cultural background in the writing task and in her
creative work and professional aspirations? What sort of help does she need? And from
whom? What sort of mentoring is most appropriate and who is the most appropriate
mentor?
These are not questions that can be answered categorically. But here are some
possibilities that have suggested themselves to me in the process of mentoring Latina
students of creative writing.
I believe a Latina creative writing student may struggle with issues of legitimacy
regarding her own work that do not affect other students in quite the same way. A
mentor who does not understand some of these issues facing a Latina writing student may
be unable to respond to the student's needs. All writers struggle with self-doubt -- this is
well known -- but I have found my Latina student mentees feel particularly unsure of the
extent to which they can mine the possibilities in their own bilingual, bicultural worlds
and backgrounds. To what degree should they use Spanish in their work? Should they
use italics to denote Spanish? Should they try to translate the Spanish words or to let the
reader infer from the context? Who do they write for? Mainstream America? Their own
communities? How to bridge these two audiences without confusing or betraying either?
There is often a shyness, a fear of not being accepted by others, a tentativeness, in my
Latina students' work.
I suffered this crisis of legitimacy as a beginning writer. It took Latina writer
circles to sustain my work for years before I dared to send my work out to magazines. In
1994, despite a few publications, I was still afraid of applying to writers' conferences. It
took a mentor to encourage me to apply to the Sewanee Writers' Conference and there, it
took another mentor to assure me of the legitimacy of my vision and of the voice and
culture present in my work.
As a woman writer, of course, a Latina also faces some of the negative inner tapes
associated with gender stereotypes. Virginia Woolf famously called the voice of such
tapes "the angel in the house" -- the selfless, egoless, proper little woman with no
ambition and no time for herself, which Woolf contended she had to kill. We
Latinas sometimes fight this stereotype implanted by our own cultural traditions. I recall
that the first time I was called "ambitious" by a white Anglo secretary I worked with --
she meant it as a compliment -- I took it as an insult. She was calling me ambiciosa --
which to me meant scheming, selfish, mala. Bad. All these cultural forces Latina writers
struggle with to some degree, and these issues need to be brought to the surface and
discussed by a mentor who is aware of them and of their effect on the writer's work.
To become an artist, a professional writer, a woman must have "an income of 500
pounds a year and a room of her own." This is Virginia Woolf's dictum and in my
opinion the concept still holds true. Latina student writers often need help to find the
resources and the time to get their creative writing work done. They need help to make
their own work a priority. They need to learn to balance their needs as artists with the
needs of family, friends, boyfriend, children, husband. They need to give themselves
permission to do what they must, to become the artists and writers they aspire to be. A
mentor can sometimes help a writer carve out the time, find the resources, give herself
that permission.
Some of the work any writer must do to be successful involves becoming familiar
with the authors who've come before him/her. Because of the inequities of our school
systems, some Latina students come to college without having read the traditional stories
and poems that other students may already be familiar with. They often come to college
without having read the work of other Latino authors as well. Many of my Latina
students are unfamiliar with the work of Sandra Cisneros, Julia Alvarez, Judith Ortiz
Cofer, Junot Diaz, etc. As a nontraditional student, I also came to college with large gaps
in my reading (both canonical works and works outside the canon). This is part of the
work a mentor does with a minority student: to provide the mentee with that all-important
reading list that includes the works of those authors (minority and not-minority) the
student will be expected to be familiar with in creative writing workshop courses and in
MFA programs, as well as the work of authors that provide direct models to the student's
work, usually other Latina authors.
Beyond these possibilities, a mentor, and specifically a Latina mentor's presence
in the academy can provide a minority writer first-hand evidence that being a minority
woman and a writer are doable, possible, worthwhile tasks. And the Latina mentee can
see with her own eyes how another Latina writer goes about the exhilarating,
discouraging, daunting business of writing fiction or poetry, getting published,
negotiating the academic environment. The Latina mentee can see that the more
established Latina writer still struggles with issues of legitimacy, anxieties over the work,
social inequities ("she got published because she’s a minority; she got the teaching job
because she's a minority, " etc.). Such a mentor can lend an ear to similar student
concerns, perhaps offer solutions, ways to cope, invaluable advice from being there,
living the same realities the student lives through herself. A minority writer-mentor can
also critique the student's work, both from the perspective of an insider to the culture the
student writes about, as well as from the perspective of outsiders, since, as a published
writer, s/he has an understanding of the expectations of the mainstream publishing world.
But what can mentoring do for the mentor? Mentoring is not a one-way street.
Poet John Berryman told his mentee, Philip Levine, "You should always be trying to
write a poem you are unable to write, a poem you lack the technique, the language, the
courage to achieve. Otherwise you're merely imitating yourself, going nowhere, because
that's always easiest." This sort of dictum is not only a gift a mentor gives to a mentee --
permission to turn herself inside out to achieve what seems at times an impossible
goal. It is also a dictum the mentor herself can be reminded of as she gives the
advice to others. I can't say how many times I have discovered the answer to one of my
writing problems through the advice I've given students.
Philip Levine passed on the same advice to his mentee, Larry Levis. Levis says,
"What I gathered from Philip Levine's generosity as a mentor seems to be this: to try to
conserve one's energy for some later use, to try to teach as if one isn't quite there, and has
more important things to do, to shy away from mentoring student writers, might be a way
to lose that energy completely, a way, quite simply, of betraying oneself."
Through mentoring, befriending, and encouraging Latina students, I find that I'm able
to fight my own demons of illegitimacy and self-doubt. Through such fruitful
and fulfilling relationships with mentees, I myself feel legitimized, able to accomplish
perhaps what I most want -- to give someone like myself the guidance I longed for as a
young Latina writer. I feel supported by the students I work with. Mentoring reminds all
of us -- students and teachers-- that as solitary an art as writing is, it is ultimately also an
act of community. As Lee Martin argues in his book Passing the Word: Writers on their
Mentors, "Students age. Teachers die. Students themselves become mentors to others,
and the cycle begins again. No writer is ever alone, really. There are always those
mentors, those students, who engage in a communal act of creation."
Classroom Assessment Techniques and Their Implementation in a Mathematics Class
Dr. M. Shakil
Department of Mathematics
Miami-Dade College, Hialeah Campus
1780 West 49th Street
Hialeah, Florida 33012, USA
E-mail: [email protected]
ABSTRACT
Classroom assessment is one of the most significant teaching strategies. It is a major component of classroom research at present. Classroom Assessment Techniques (CAT’s) are designed to help teachers measure the effectiveness of their teaching by finding out what students are learning in the classroom and how well they are learning it. This paper deals with the implementation of Classroom Assessment Techniques, namely, “Course-Related Self-Confidence Surveys,” “Muddiest Point,” and “Exam Evaluations,” in a Business Calculus Class. These techniques are used for assessing: (i) Course-Related Knowledge and Skills; (ii) Learner Attitudes, Values, and Self-Awareness; (iii) Learner Reactions to Instruction.
Theme: Educational Research
Keywords: Attitudes, Assessment Technique, Exam Evaluations, Muddiest Point, Self-Confidence
1. Introduction
There are two fundamental issues with which educational reformers are concerned: (i) students’ learning in the classroom; and (ii) the effectiveness of teachers’ teaching in the classroom. To address these issues, the movement for Classroom Research and Assessment was initiated during the 1990s by Thomas A. Angelo and K. Patricia Cross, who devised various Classroom Assessment Techniques (known as CAT’s) (see, for example, Angelo and Cross (1993), among others, for details). They developed these CAT’s to help teachers measure the effectiveness of their teaching by finding out what students are learning in the classroom and how well they are learning it. According to Angelo and Cross (1993), “These CAT’s are designed to encourage college teachers to become more systematic and sensitive observers of learning as it takes place every day in their classrooms. Faculties have an exceptional opportunity to use their classrooms as laboratories for the study of learning and through such study to develop a better understanding of the learning process and the impact of their teaching upon it.” Thus, in the Classroom Assessment approach, students and teachers are involved in the continuous monitoring of students’ learning. It gives students feedback on their progress as learners; faculty, on the other hand, learn about their effectiveness as teachers. According to Angelo and Cross (1993), the founders of the classroom assessment movement, “Because Classroom Assessments are created, administered, and analyzed by teachers themselves on questions of teaching and learning that are important to them, the likelihood that instructors will apply the results of the assessment to their own teaching is greatly enhanced.” Following Angelo and Cross (1993), some important characteristics of the Classroom Assessment approach are given below:
(i) LEARNER-CENTERED (ii) TEACHER-DIRECTED
(iii) MUTUALLY BENEFICIAL
(iv) FORMATIVE
(v) CONTEXT-SPECIFIC
(vi) ONGOING
(vii) ROOTED IN GOOD TEACHING PRACTICE
According to a report by the Study Group on the Conditions of Excellence in American Higher Education (1984), “There is now a good deal of research evidence to suggest that the more time and effort students invest in the learning process and the more intensely they engage in their own education, the greater will be their satisfaction with their educational experience, and their persistence in college, and the more likely they are to continue their learning” (p. 17). As observed by Angelo and Cross (1993), “Active engagement in higher learning implies and requires self-awareness and self-direction,” which is defined as “metacognition” by cognitive psychologists. According to Weinstein and Mayer (1986), the following are the four activities that help students become more efficient and effective learners:
(i) COMPREHENSION MONITORING
(ii) KNOWLEDGE ACQUISITION
(iii) ACTIVE STUDY SKILLS
(iv) SUPPORT STRATEGIES
As observed by Angelo and Cross (1993), “teachers are the closest observers of learning as it takes place in their classrooms – and thus have the opportunity to become the most effective assessors and improvers of their own teaching. But in order for teaching to improve, teachers must first be able to discover when they are off course, how far off they are, and how to get back on the right track.” Angelo and Cross further observe, “The goals of college teachers differ, depending on their disciplines, the specific content of their courses, their students, and their own personal philosophies about the purposes of higher education. All faculty, however, are interested in promoting the cognitive growth and academic skills of their students” (Angelo and Cross, 1993, p. 115). Assessing accomplishments in the cognitive domain has long occupied educational psychologists (see, for example, Angelo and Cross (1993) and references therein). Many researchers have developed useful theories and taxonomies on the assessment of academic skills, intellectual development, and cognitive abilities, from both analytical and quantitative points of view. The development of a general theory of measuring cognitive abilities began with the work of Bloom and others (1956), known as “Bloom’s Taxonomy.” Further developments continued with the contributions of Ausubel (1968); Bloom, Hastings, and Madaus (1971); McKeachie, Pintrich, Lin, and Smith (1986); and Angelo and Cross (1993), among others. For details on metacognition and its applications, see, for example, Brown, Bransford, Ferrara, and Campione (1983), Weinstein and Mayer (1986), and Angelo and Cross (1993), among others.
No matter what our topic design, classroom strategies, assessment practices, and interactions with students may be, a teacher is expected to uphold the following principles for effective teaching and learning in all classes (from “Education and Research Policy (2000),” Flinders University of South Australia; http://www.flinders.edu.au/teach/teach/home.html). Teaching should:
• focus on desired learning outcomes for students, in the form of knowledge, understanding, skills and attitudes;
• assist students in forming broad conceptual understandings while gaining depth of knowledge;
• encourage informed and critical questioning of accepted theories and views;
• develop an awareness of the limited and provisional nature of much of current knowledge in all fields;
• see how understanding evolves and is subject to challenge and revision;
• engage students as active participants in the learning process, while acknowledging that all learning must involve a complex interplay of active and receptive processes;
• engage students in discussion of ways in which study tasks can be undertaken;
• respect students' right to express views and opinions;
• incorporate a concern for the welfare and progress of individual students;
• proceed from an understanding of students' knowledge, capabilities and backgrounds;
• encompass a range of perspectives from groups of different ethnic background, socio-economic status and sex;
• acknowledge and attempt to meet the demands of students with disabilities;
• encourage an awareness of the ethical dimensions of problems and issues;
• utilize instructional strategies and tools to enable many different styles of learning; and
• adopt assessment methods and tasks appropriate to the desired learning outcomes of the course and topic and to the capabilities of the student.
It is evident, as noted above, that classroom assessment techniques are among the most significant components of classroom research and teaching strategies. There are various classroom assessment techniques developed by Angelo and Cross (1993) that lead to better learning and more effective teaching. The following are some of the objectives of the Classroom Assessment Techniques (CAT’s):
• These CAT’s assess how well students are learning the content of the particular subject or topic they are studying.
• These CAT’s are designed to give teachers information that will help them improve their course materials and assignments.
• These CAT’s require students to think more carefully about the course work and its relationship to their learning.
Thus, it is clear that the Classroom Assessment Techniques (CAT’s) are designed to help teachers measure the effectiveness of their teaching by finding out what students are learning in the classroom and how well they are learning it. For a detailed analysis of these CAT’s as well as their philosophical and procedural background, see, for example, Angelo and Cross (1993), among others. The kind of learning task or stage of learning assessed by these CAT’s is defined by Norman (1980, p. 46) as accretion, the “accumulation of knowledge into already established structures.” According to Greive (2003, p. 48), “classroom assessment is an ongoing sophisticated feedback mechanism that carries with it specific implications in terms of learning and teaching.” Greive further observes, “The classroom assessment techniques emphasize the principles of active learning as well as student-centered learning.”
This paper deals with the implementation of three types of Classroom Assessment Techniques, namely, “Course-Related Self-Confidence Surveys,” “Muddiest Point,” and “Exam Evaluations,” in a Business Calculus Class. These techniques are used for assessing:
i. Course-Related Knowledge and Skills;
ii. Learner Attitudes, Values, and Self-Awareness; and
iii. Learner Reactions to Instruction.
The organization of this paper is as follows. Section 2 contains the description, purpose, and related teaching goals of the Classroom Assessment Techniques of Course-Related Self-Confidence Surveys (CATCRSCS), the Muddiest Point (CATMP), and the Exam Evaluations (CATEE). In Section 3, the implementations of these CAT’s in a Business Calculus Class are provided. Section 4 contains the data analysis and discussion of these techniques. Some concluding remarks are presented in Section 5.
2. Methods
This section discusses the description, purpose and related teaching goals of three CAT’s, as stated above.
2.1 The Course-Related Self-Confidence Surveys
2.1.1 Description
The “Course-Related Self-Confidence Surveys (CATCRSCS)” is one of the five Classroom Assessment Techniques (CAT’s) discussed in Angelo and Cross (1993, Chapter 8, p. 255), for assessing learner attitudes, values, and self-awareness, known as “meta-cognition.” It is one of the simplest CAT’s. It provides an efficient avenue of input and a high information return to the instructor without spending much time and energy. It is designed to help teachers better understand and more effectively promote the development of attitudes, opinions, values, and self-awareness that takes place while students are taking their courses. The Course-Related Self-Confidence Surveys help teachers in assessing the students’ level of confidence in their ability to learn the relevant skills and materials. According to Angelo and Cross (1993, pp. 275 - 276), the Classroom Assessment Technique of “Course-Related Self-Confidence Surveys” is useful in the following situations:
a) In courses where students are trying to learn new and unfamiliar skills, or familiar skills that they failed in previous attempts;
b) In introductory courses, such as, in mathematics, public speaking, and natural sciences, before the skills in question are introduced, and again when students are likely to have made significant progress toward mastering them.
2.1.2 Purpose
The following are the main purposes of the Classroom Assessment Technique of “Course-Related Self-Confidence Surveys” (see, for example, Angelo and Cross, 1993, pp. 275 & 277, for details):
(i) It helps teachers in assessing the students’ level of confidence in their ability to learn the relevant skills and materials;
(ii) It provides information on students’ self-confidence – and, indirectly, on their anxieties – about specific and often controllable elements of the course;
(iii) It helps students learn that a minimum level of confidence is necessary to learning;
(iv) The instructor uses this feedback to guide their teaching strategies and to make a particular lesson or topic clearer, more understandable, and free from anxiety.
2.1.3 Related Teaching Goals
The following are related teaching goals of using the “Course-Related Self-Confidence Surveys” (see, for example, the Teaching Goal Inventory (TGI), Exhibits 2.1. and 2.2., Angelo and Cross, 1993, pp. 20 – 23, for details):
a) Develop a lifelong love of learning;
b) Develop (self-) management skills;
c) Develop leadership skills;
d) Develop a commitment to personal achievement;
e) Improve self-esteem/self-confidence;
f) Develop a commitment to one’s own values;
g) Cultivate emotional health and well-being;
h) Cultivate physical health and well-being.
2.2 The Muddiest Point

2.2.1 Description
The muddiest point assessment technique is another of the simplest CAT’s for assessing students’ course-related knowledge and skills, known as “declarative learning” (see, for example, Angelo and Cross 1993, Chapter 7, p. 115, for details). It provides an efficient avenue of input and a high information return to the instructor without spending much time and energy. In the muddiest point assessment technique, the students are to respond to a single question: “What was the muddiest point in _________?” The students are asked to identify “what they do not understand either about the topic or in the lecture or class.” The focus of the muddiest point assessment technique might be a lecture, a topic, a discussion, a homework assignment, a demonstration, a film, a play, or a general problem-solving activity. Angelo and Cross (1993, p. 155) suggest using the muddiest point assessment technique in the following situations:
a) Quite frequently in classes where a large amount of new information is presented each session – such as mathematics, statistics, economics, health sciences, and the natural sciences – probably because there is a steady stream of possible “muddy points;”
b) In courses where the emphasis is on integrating, synthesizing, and evaluating information.
2.2.2 Purpose

The following are the main purposes of the muddiest point assessment technique:
(i) It provides information on what students find least clear about a particular lesson or topic;
(ii) It provides information on what students find most confusing about a particular lesson or topic;
(iii) Learners quickly identify what they do not understand and articulate those muddy points;
(iv) The instructor uses this feedback to guide their teaching strategies to make a particular lesson or topic clearer, more lucid, and free from any muddy points.
2.2.3 Related Teaching Goals
The following are related teaching goals of using the Assessment Technique of “Muddiest Point” (see, for example, the Teaching Goal Inventory (TGI), Exhibits 2.1. and 2.2., Angelo and Cross, 1993, pp. 20 – 23, for details):
(i) Improve skill at paying attention;
(ii) Develop ability to concentrate;
(iii) Improve listening skills;
(iv) Develop appropriate study skills, strategies, and habits;
(v) Learn terms and facts of this subject;
(vi) Learn concepts and theories in this subject.
2.3 The Exam Evaluations

2.3.1 Description

There are various classroom assessment techniques developed by Angelo and Cross (1993) which are directly concerned with better learning, more effective teaching, and assessing learner reactions to instruction. The purpose of this project is also to apply one of the Classroom Assessment Techniques (CAT’s) designed for “Assessing Learner Reactions to Instruction.” These are classified into the following categories: (a) Assessing Learner Reactions to Teachers and Teaching; and (b) Assessing Learner Reactions to Class Activities, Assignments, and Materials (see, for example, Angelo and Cross, 1993, Chapter 9, p. 317, among others, for details). Each of these categories has five classroom assessment techniques. According to Angelo and Cross (1993), “The second category of these CAT’s is designed to give teachers information that will help them improve their course materials and assignments. At the same time, these CATs require students to think more carefully about the course work and its relationship to their learning.”

The “Exam Evaluations (CATEE)” is one of the simplest CAT’s, which belongs to this category. It is applicable to many classroom situations. It provides an efficient avenue of input and a high information return to the instructor without spending much time and energy. It is designed to help the instructor to examine both “what the students think that they are learning from exams, tests, or quizzes” and “their evaluations of the fairness, appropriateness, usefulness, and quality of exams, tests, or quizzes.”

According to Davis (1999), “Exams, tests, or quizzes are powerful educational tools that serve at least four functions as follows: (I) These exams, tests, or quizzes help the instructors evaluate students and assess whether they are learning what the instructors are expecting them to learn. (II) Well-designed exams, tests, or quizzes serve to motivate and help students structure their academic efforts.
The students study in ways that reflect how they think they will be tested. If they expect an exam focused on facts, they will memorize details; if they expect a test that will require problem solving or integrating knowledge, they will work toward understanding and applying information (see, for example, Crooks (1988), McKeachie (1986), and Wergin (1988), among others). (III) The exams, tests, or quizzes can help the instructors understand how successfully the instructors are presenting the material. (IV) Finally, the exams, tests, or quizzes can reinforce learning by providing students with indicators of what topics or skills they have not yet mastered and should concentrate on.” Davis (1999) further observes, “An examination is the most comprehensive
form of testing, typically given at the end of the term (as a final) and one or two times during the semester (as midterms). A test is more limited in scope, focusing on particular aspects of the course material. A course might have three or four tests. A quiz is even more limited and usually is administered in fifteen minutes or less.” For details on exams, tests, and quizzes, general strategies, types, etc., see, for example, Davis (1999), among others, and references therein.
Thus, it is clear from the above that the Classroom Assessment Technique of “Exam Evaluations” helps teachers in assessing the students’ level of confidence in their ability to learn the relevant skills and materials. It is designed to give teachers information that helps them improve their course materials and assignments. At the same time, this CAT requires students to think more carefully about the course work and its relationship to their learning. According to Angelo and Cross (1993, p. 359), the Classroom Assessment Technique of “Exam Evaluations” is useful in the following situations:
a) It can be profitably used to get feedback on any substantial quiz, test, or exam.
b) To ensure that the memory of the quiz, test, or exam is still fresh in students’ minds, the “Exam Evaluation” may be included within the exam itself, as the final section.
c) The “Exam Evaluation Form” may be handed out to the students for completion soon after they have finished the exam.
2.3.2 Purpose

The following are the main purposes of the Classroom Assessment Technique of “Exam Evaluations” (see, for example, Angelo and Cross, 1993, p. 359, for details):
(i) It helps teachers to examine both “what the students think that they are learning from exams, tests, or quizzes” and “their evaluations of the fairness, appropriateness, usefulness, and quality of exams, tests, or quizzes”;
(ii) It provides teachers with specific student reactions to tests and exams, so that they can make the exams more effective as learning and assessment devices;
(iii) It helps teachers in assessing the students’ level of confidence in their ability to learn the relevant skills and materials;
(iv) It provides information on students’ self-confidence – and, indirectly, on their anxieties – about specific and often controllable elements of the course;
(v) It helps students learn that a certain level of confidence is necessary to learning;
(vi) The instructor uses this feedback to guide their teaching strategies to make a particular lesson or topic clearer, more understandable, and free from anxiety.
2.3.3 Related Teaching Goals
The following are related teaching goals of using the “Exam Evaluations” Technique (see, for example, the Teaching Goal Inventory (TGI), Exhibits 2.1. and 2.2., Angelo and Cross, 1993, pp. 20 – 23 and p. 359, for details).
(i) Develop appropriate study skills, strategies, and habits;
(ii) Learn to evaluate methods and materials in this subject;
(iii) Cultivate an active commitment to honesty;
(iv) Develop capacity to think for oneself.
3. Implementation

This section discusses the implementation of the three CAT's, as described above, in a Business Calculus Class.

3.1 The Course-Related Self-Confidence Surveys

This section discusses the development and implementation of the Classroom Assessment Technique of “Course-Related Self-Confidence Surveys (CATCRSCS)” in a Business Calculus Class. The following topics were already introduced, taught, and discussed in prior lectures of the class before the Course-Related Self-Confidence Surveys were conducted: “Limits and Continuity Concepts.” The prescribed textbook for this course was: “Calculus for Business, Economics, and the Social and Life Sciences,” 8th edition, by Laurence D. Hoffman and Gerald L. Bradley, McGraw-Hill, 2004, ISBN: 0-07-242432-X.

Calculus is one of the most important and powerful branches of mathematics, with a wide range of applications, including curve sketching, optimization of functions, analysis of rates of change, and computation of area and probability. The concepts of limits and continuity form the basis of any rigorous development of the laws and procedures of calculus. In any study of calculus, the concepts of limits and continuity of a function are fundamental. They are primary tools of calculus, and lie at the heart of much of modern mathematics. The limit process involves examining the behavior of a function f(x) as x approaches a number c that may or may not be in the domain of f(x). On the other hand, a continuous function is one whose graph can be drawn continuously without any break or interruption. There are many practical situations and physical phenomena in which limiting and continuous behavior occurs. The limits and continuity of a function are defined as follows:
DEFINITION 1: LIMIT OF A FUNCTION
Let y = f(x) be a function of x. Then a number L is called the limit of the function f(x) if f(x) gets closer and closer to L as x approaches a number c that may or may not be in the domain of f(x). This behavior of the function is expressed by writing lim_{x→c} f(x) = L.

DEFINITION 2: EXISTENCE OF LIMIT OF A FUNCTION
The limit of a function f(x) at x = c, i.e., lim_{x→c} f(x), exists if and only if the left-hand limit lim_{x→c−} f(x) and the right-hand limit lim_{x→c+} f(x) exist and are equal.

DEFINITION 3: CONTINUITY OF A FUNCTION AT A POINT
A function f(x) is said to be continuous at a point x = c if the following conditions are satisfied:
(i) f(c) is defined;
(ii) lim_{x→c} f(x) exists;
(iii) lim_{x→c} f(x) = f(c).

The following ideas on limits and continuity of a function f(x), with illustration by some examples and applications, were introduced, defined, and discussed in the class before the surveys (see, for example, Hoffman and Bradley, 2004, pp. 57 – 79, for details).
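As a brief illustration of the limit and continuity definitions above (the example itself is ours, not from the original text), consider a function that has a limit at a point where it is not continuous:

```latex
% Illustrative example (not from the original text): a function with a limit
% at a point where it fails to be continuous.
\[
  f(x) = \frac{x^{2}-1}{x-1}, \qquad x \neq 1 .
\]
% For x \neq 1 we have f(x) = x + 1, so both one-sided limits agree:
\[
  \lim_{x \to 1^{-}} f(x) = \lim_{x \to 1^{+}} f(x) = 2,
  \qquad\text{hence}\qquad \lim_{x \to 1} f(x) = 2 .
\]
% However, f(1) is undefined, so condition (i) of Definition 3 fails and
% f is not continuous at x = 1.
```

The limit exists (Definitions 1 and 2), yet continuity fails (Definition 3), which is exactly the distinction the survey items on continuity and discontinuity probe.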
Limit of a function
Limits at infinity
Limits at infinity of a rational function
Infinite limit
One-sided limits
Existence of a limit
Continuity of a function at a point
Discontinuity of a function at a point
Limits and Continuity of polynomials, rational and piece-wise defined functions
After introducing and discussing the concepts of limits and continuity of a function f(x) and illustrating with some examples and applications, the following Course-Related Self-Confidence Surveys (Table 3.1.1) were conducted.
Table 3.1.1
“The Course-Related Self-Confidence Surveys” (On the Self-Confidence in Limits and Continuity Concepts)
(Students’ Response)
This survey is to help both of us understand your level of confidence in your limit and continuity skills. Rather than thinking about your self-confidence in limits and continuity concepts in general terms, please indicate how confident you feel about your ability to do the various kinds of problems on “Limits and Continuity Concepts” listed below in the Table. (Circle the most accurate response for each.)

Self-Confidence in Your Ability to Do Them (Students’ Response)

Item  Kinds of Problems and Concepts                              None  Low  Medium  High  Totals
1     Limit of a function                                           0    1     2      8     11
2     Limits at infinity                                            0    0     9      2     11
3     Limits at infinity of a rational function                     0    2     7      2     11
4     Infinite limit                                                0    4     6      1     11
5     One-sided limits                                              0    1    10      0     11
6     Existence of a limit                                          0    2     9      0     11
7     Continuity of a function at a point                           0    3     6      2     11
8     Discontinuity of a function at a point                        0    3     7      1     11
9     Limits and Continuity of polynomials, rational
      and piece-wise defined functions                              0    2     9      0     11
Totals                                                              0   18    65     16     99
The students responded to the survey very enthusiastically. Out of 14 students in the class, 11 were present on the day when the surveys were conducted. The students’ responses (namely, none, low, medium, and high) on the nine components of the concepts of limits and continuity of a function f(x), as discussed in the class, are tabulated in Table 3.1.1 above.
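The overall percentages reported in the data analysis of Section 4.1 follow directly from the column totals of Table 3.1.1. A few lines of Python reproduce them (a sketch; the paper itself used MINITAB):

```python
# Sketch (the paper used MINITAB): convert the response totals of
# Table 3.1.1 into the overall percentages discussed in Section 4.1.
totals = {"None": 0, "Low": 18, "Medium": 65, "High": 16}

n = sum(totals.values())  # 99 responses = 11 students x 9 items
percents = {level: round(100 * count / n, 2) for level, count in totals.items()}

print(n)         # 99
print(percents)  # {'None': 0.0, 'Low': 18.18, 'Medium': 65.66, 'High': 16.16}
```

The "Medium" share of 65.66 % is the figure cited in Section 4.1.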
3.2 The Muddiest Point

This section discusses the development and implementation of the Classroom Assessment Technique of “Muddiest Point (CATMP)” in the said Business Calculus Class. The “Concepts of the Derivative of a Function” were already introduced, taught, and discussed in the previous lectures of the class before the Muddiest Point (CATMP) Surveys were conducted. The derivative of a function is a very important concept in calculus and mathematics, in general. It is one of the primary tools for studying the rates of change of a variable with respect to another variable. It is also used to compute the slope of the graph of a function of a variable. Many physical phenomena can also be described through the derivative of a function. It is defined as follows (see, for example, Hoffman and Bradley, 2004, pp. 96 – 104, for details).
1. Definition: The derivative of the function y = f(x) with respect to x is the function f′(x) given by

    f′(x) = lim_{h→0} [f(x + h) − f(x)] / h        (1.1)

(read as “f prime of x”). The process of computing the derivative is called differentiation, and f(x) is said to be differentiable at a point x = c if f′(c) exists, i.e., if the above limit (1.1) that defines f′(x) exists when x = c.

2. Notation: The derivative of y = f(x) is denoted as f′(x), df/dx, or dy/dx.

3. Slope: The slope m of the tangent line to the graph of y = f(x) at a point (x₀, y₀), where y₀ = f(x₀), is given by the derivative of the function y = f(x) at x₀, i.e., m = f′(x₀).

4. Equation of the tangent line: The tangent line to the graph of y = f(x) at (x₀, y₀) is given by y − y₀ = m(x − x₀).
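A short worked example of the definition of the derivative and the tangent-line formulas above (the example is ours, not from the original text): for f(x) = x², definition (1.1) gives

```latex
% Worked example (not from the original text): derivative of f(x) = x^2
% from the limit definition (1.1), and the tangent line at (1, 1).
\[
  f'(x) \;=\; \lim_{h \to 0} \frac{(x+h)^{2} - x^{2}}{h}
        \;=\; \lim_{h \to 0} \frac{2xh + h^{2}}{h}
        \;=\; \lim_{h \to 0} (2x + h) \;=\; 2x .
\]
% Slope at x_0 = 1:  m = f'(1) = 2.  Tangent line through (1, 1):
\[
  y - 1 = 2(x - 1), \qquad\text{i.e.}\qquad y = 2x - 1 .
\]
```

This mirrors the chain of skills the Muddiest Point survey items cover: definition, notation, slope, and equation of the tangent line.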
After introducing and discussing the concepts of the derivative of a function and illustrating with some examples and applications, the following question was posed during the last ten minutes of the lecture. The students were provided with index cards to answer the question.
Question: “What was the muddiest point in the concept of the derivative of a function?” The students were asked to identify “what they did not understand about the topic or in the lecture or class.” What was the least clear and most confusing point about the topic?
The students responded to the question very enthusiastically. Out of 14 students in the class, 12 were present on that day. Based on the students’ response on the five components of the concept of the derivative of a function as discussed in the class, the muddiest points, namely, most confusing, least clear, and somewhat clear, are given in Table 3.2.1 below. The data analysis is also provided.
Table 3.2.1
The Muddiest Point (on the concept of the derivative of a function)
(Students’ Response)
Item  Kinds of Problems                                  Most Confusing  Least Clear  Somewhat Clear  Totals
1.    Definition: The derivative of the function
      y = f(x) with respect to x                                3             0              9          12
2.    Notation: dy/dx                                           3             1              8          12
3.    Slope m of the tangent line to the graph
      of y = f(x) at (x₀, y₀)                                   3             2              7          12
4.    Equation of the tangent line to the graph
      of y = f(x) at (x₀, y₀)                                   4             1              7          12
5.    Applications (Examples)                                   4             2              6          12
Totals                                                         17             6             37          60
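The percentage shares discussed in Section 4.2 can likewise be recovered from the column totals of Table 3.2.1 (a sketch; the paper itself used MINITAB):

```python
# Sketch (the paper used MINITAB): percentage breakdown of the 60 responses
# tallied in Table 3.2.1 (12 students x 5 items).
totals = {"Most Confusing": 17, "Least Clear": 6, "Somewhat Clear": 37}

n = sum(totals.values())  # 60 responses
percents = {label: round(100 * count / n, 2) for label, count in totals.items()}

print(n)         # 60
print(percents)  # {'Most Confusing': 28.33, 'Least Clear': 10.0, 'Somewhat Clear': 61.67}
```

Note that 37 out of 60 responses gives 61.67 %, the "Somewhat Clear" share analyzed in Section 4.2.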
13
3.3 The Exam Evaluations

This section discusses the development and implementation of the Classroom Assessment Technique of “Exam Evaluations (CATEE)” for one of the tests, i.e., Test # 1, in the said Business Calculus Class. Test # 1 was already administered in the class (the details of which are provided in Appendix I). After administering Test # 1, the following Exam Evaluations Surveys were conducted (see Table 3.3.1).
Table 3.3.1
Sample Survey: Exam Evaluations (of Test # 1)

Name: ____________________________________ ID # ___________________

This survey is to help us to examine both what you think you are learning from exams and tests and your evaluations of the fairness, appropriateness, usefulness, and quality of tests or exams. On Tuesday, you took the first test (Test # 1) of this course. The test consisted of 80 % free-response questions (solving the given problems) and 20 % multiple-choice questions. Please answer the following survey questions, as listed below, about the test as specifically as possible. (Circle the most appropriate response for each.)

1. Did you feel that the test was a fair assessment of your learning of the materials covered before the test?
   Fair / Appropriate / Useful / All of these / None

2. Did you enjoy the content or form of the test?
   Content / Form / Both Content and Form / None

3. Did you learn more from the free-response questions (solving the given problems) than from the multiple-choice questions?
   From free-response / From multiple-choice / From both / From none

4. What type of test would you prefer for your remaining tests and final exam during the rest of the semester?
   Free-response questions (solving the given problems) / Multiple-choice questions / Both free-response and multiple-choice questions
The students responded to the survey very enthusiastically. Out of 14 students in the class, 13 were present on that day. The data analysis of students’ responses on the four components of Test # 1 (see Table 3.3.1 above) is discussed below.
4. Data Analysis and Discussions
This section discusses the data analysis of the implementation of the three CAT's described above in the said Business Calculus Class, i.e., CATCRSCS, CATMP, and CATEE.

4.1 CATCRSCS
Using MINITAB, the following bar graph (see Figure 4.1.1 below) was drawn based on the students’ responses on the nine components of the concepts of limits and continuity of a function f(x). During the survey, 11 out of 14 students were present in the class on that day. The total number of responses was 99. It is clear that most of the students responded with “Medium” (i.e., 65.66 %) across all nine components. Approximately 18.18 % of the students’ responses were “Low,” whereas 16.16 % were “High.” No student responded with “None” on the nine components. It is also clear from Table 3.1.1 that 72 % of the students’ responses on “Self-Confidence in Your Ability to Do Them” for the concept “Limit of a function” were “High.” 82 % of the students’ responses were “Medium” for each of “Limits at infinity,” “Existence of a limit,” and “Limits and Continuity of polynomials, rational and piece-wise defined functions,” whereas 91 % were “Medium” for “One-sided limits.”

Figure 4.1.1
[Bar chart: “COURSE-RELATED SELF-CONFIDENCE SURVEYS” (On the Self-Confidence in Limits and Continuity Concepts) (Students’ Response) — response counts (None / Low / Medium / High) for each of the nine components of Table 3.1.1; percent within all data.]
4.2 CATMP
Using MINITAB, the following bar graphs were drawn based on the students’ responses on the five components of the concept of the derivative of a function as discussed in the class (see Table 3.2.1 above). These are provided in Figure 4.2.1 below. It is clear that most of the students responded with “Somewhat Clear” (i.e., 61.67 %) across all five components. Approximately 28.33 % of the students’ responses were “Most Confusing,” whereas 10 % were “Least Clear.”
Figure 4.2.1
[Bar chart: “The Muddiest Point in a Business Calculus Course” — response counts (Most Confusing / Least Clear / Somewhat Clear) for Definition, Notation, Slope, Eq. of Tangent, and Applications; percent within levels of Class.]
4.3 CATEE
Using MINITAB and PHStat, the following graphs (see Figures 4.3.1 and 4.3.2 below) were drawn based on the students’ responses. During the survey, 13 out of 14 students were present in the class on that day. The total number of responses was 52. From the analysis of the students’ responses to the survey questions, we observed:

(i) That 14 % of all responses were “Both Content and Form” for survey question # 2;

(ii) That 15 % were “From both” for survey question # 3;

(iii) That 15 % were “Both Free-Response and Multiple-Choice Questions” for survey question # 4;

(iv) That approximately 10 % of the students’ responses were “Fair” and another 10 % “All of these” for survey question # 1.
For responses to other questions, see Figures 4.3.1 and 4.3.2 below.
Figure 4.3.1
Classroom Assessment Technique - Sample Survey: Exam Evaluations (Test # 1)
[Pie chart of all 52 responses, pooled across the four survey questions: All of these 10 %; Appropriate 6 %; Both Content & Form 14 %; Both Free-response and Multiple-choice questions 15 %; Content 2 %; Fair 10 %; Form 6 %; Free-response questions 2 %; From both 15 %; From free-response 4 %; From multiple-choice 6 %; Multiple-choice questions 8 %; None 2 %.]
Figure 4.3.2
Classroom Assessment Technique - Sample Survey: Exam Evaluations (Test # 1)
[Bar chart: counts of each response category (as in Figure 4.3.1) for survey questions 1 – 4; horizontal axis: count (0 – 9); vertical axis: survey question number.]
From the above analysis of data, it is easily observed:
(A) That most of the students of the said Business Calculus Class had the same response during the “COURSE-RELATED SELF-CONFIDENCE SURVEYS on the Self-Confidence in Limits and Continuity Concepts,” i.e., most of the students responded with “Medium” (i.e. 65.66 %) on all nine components.
(B) That most of the students mentioned the same “muddy point”: the concept of the derivative of a function is “Somewhat Clear,” but, at the same time, it is either “Most Confusing” or “Least Clear” to them.
(C) That the students’ responses were very encouraging, as most of them enjoyed Test # 1. They were able to apply the already-taught concepts to answer both free-response questions and the multiple-choice questions. Most of the students had the following opinion about Test # 1.
• They felt that the test was a fair assessment of their learning of the materials covered before the test.
• They enjoyed both the content and form of the test.
• They felt that they learnt more from both free-response questions (by solving the given problems) and the multiple-choice questions.
• Their preference was both free-response questions (by solving the given problems) and the multiple-choice questions for the remaining tests and final exam during the rest of the semester.
5. Concluding Remarks
Based on our observations and analysis, it is clear that the three CAT’s considered in this project, i.e., the Course-Related Self-Confidence Surveys (CATCRSCS), the Muddiest Point (CATMP), and the Exam Evaluations (CATEE), are among the simplest and most important Classroom Assessment Techniques. These Classroom Assessment Techniques help teachers to measure the effectiveness of their teaching by finding out what students are learning in the classroom and how well they are learning it. In addition, these techniques provide an efficient avenue of input and a high information return to the instructor without spending much time and energy. It is recommended that, in the future, more techniques be developed and implemented in other mathematics classes, for example, college preparatory mathematics, college level mathematics, etc., for better learning and more effective teaching.
Acknowledgments
I am thankful to the authorities of Miami-Dade College for allowing me to take the course “Analysis of Teaching (EDG 5325)” at Florida International University, Miami, Florida, USA, without which it would not have been possible to complete this paper.
References
Angelo, T. A., and Cross, K. P. (1993), Classroom Assessment Techniques – A Handbook for College Teachers, Jossey-Bass, San Francisco.
Ausubel, D. P. (1968), Educational Psychology: A Cognitive View, Holt, Rinehart & Winston, Troy, Mo.

Bloom, B. S., Hastings, J. T., and Madaus, G. F. (1971), Handbook on Formative and Summative Evaluation of Student Learning, McGraw-Hill, New York.

Bloom, B. S., and others (1956), Taxonomy of Educational Objectives, Vol. 1: Cognitive Domain, McKay, New York.

Brown, A. L., Bransford, J. D., Ferrara, R. A., and Campione, J. C. (1983), Learning, Remembering, and Understanding, in F. H. Flavell and E. M. Markman (eds.), Handbook of Child Psychology, Vol. 3: Cognitive Development (4th ed.), Wiley, New York.

Crooks, T. J. (1988), The Impact of Classroom Evaluation Practices on Students, Review of Educational Research, 58(4), 438-481.

Davis, B. G. (1999), Quizzes, Tests, and Exams, http://honolulu.hawaii.edu/intranet/committees/FacDevCom/guidebk/teachtip/quizzes.htm.

Flinders University of South Australia (2000), Education and Research Policy, http://www.flinders.edu.au/teach/teach/home.html.

Greive, D. (2003), A Handbook for Adjunct/Part-Time Faculty and Teachers of Adults, 5th Edition, The Adjunct Advocate, Ann Arbor.

Hoffman, L. D., and Bradley, G. L. (2004), Calculus for Business, Economics, and the Social and Life Sciences, 8th Edition, McGraw-Hill, New York.

McKeachie, W. J., Pintrich, P. R., Lin, Yi-Guang, and Smith, D. A. F. (1986), Teaching and Learning in the College Classroom: A Review of the Research Literature, National Center for Research to Improve Postsecondary Teaching and Learning, University of Michigan, Ann Arbor.

McKeachie, W. J. (1986), Teaching Tips, 8th ed., Heath, Lexington, Mass.

Norman, D. A. (1980), What Goes On in the Mind of the Learner, in W. J. McKeachie (ed.), Learning, Cognition, and College Teaching, New Directions for Teaching and Learning, No. 2, Jossey-Bass, New York.

Study Group on the Conditions of Excellence in American Higher Education (1984), Involvement in Learning, National Institute of Education, Washington, D.C.

Weinstein, C., and Mayer, R. (1986), The Teaching of Learning Strategies, in M. C. Wittrock (ed.), Handbook of Research on Teaching, Macmillan, New York.

Wergin, J. F. (1988), Basic Issues and Principles in Classroom Assessment, in J. H. McMillan (ed.), Assessing Students' Learning: New Directions for Teaching and Learning, No. 34, Jossey-Bass, San Francisco.
Appendix I
NAME: ______________________________ Student ID: __________________
MAC 2233: CALCULUS FOR BUSINESS
Test # 1
DIRECTIONS: Answer ALL questions. Total Points: 100.
PART A (80 Points) (Show your work for full credit.)
(1) Find the limit:

    lim_{x→2} (x² − 4)/(x − 2)

(2) Find the limit:

    lim_{x→∞} (3x² + x + 2)/(x² − 1)

(3) Differentiate the following function:

    f(x) = 2x⁷ − (1/3)x⁵ + 9x − 8

(4) Differentiate the following function:

    f(x) = x²/(x − 2)

(5) Test the continuity of the following function at x = 3:

    f(x) = { x²  if x ≤ 3
           { 9   if x > 3

by showing the following steps:
(a) Find f(3) =
(b) Find the following limits for the above function:
    (i) Right-hand limit: lim_{x→3+} f(x) =
    (ii) Left-hand limit: lim_{x→3−} f(x) =
    (iii) Does lim_{x→3} f(x) exist? If it exists, what is lim_{x→3} f(x) =
(c) Is f(x) continuous at x = 3? State the reason(s).

(6) If f(x) = x² − 1, using the definition of derivative, find the first derivative of the function, as given below:

    f′(x) = lim_{h→0} [f(x + h) − f(x)] / h.

Hence find the slope and equation of the line that is tangent to the graph of the given function at x = −1.

PART B (20 Points) Multiple-Choice Questions (Circle your answers)

(7) Differentiate f(x) = (x² − 1)(x − 3):

    (a) 3x² − 6x − 1    (b) 6x + 1    (c) 2x + 1    (d) 3x² + 6x + 1
(8) True or false: The left-hand limit of the function given below, i.e., lim_{x→2−} f(x), is 8, where

    f(x) = { x²      if x ≤ 2
           { x + 2   if x > 2

    (a) True    (b) False
(9) True or false: The right-hand limit of the function given below, i.e., lim_{x→3+} f(x), where

    f(x) = { x       if x < 3
           { x + 1   if x ≥ 3

is 3.
    (a) True    (b) False
(10) The derivative of f(x) = 1/(2x²) is

    (a) −1/x³    (b) 1/x³    (c) −1/x    (d) −x
A Multiple Linear Regression Model to Predict the Student’s Final Grade in a Mathematics Class
Dr. M. Shakil Department of Mathematics
Miami-Dade College, Hialeah Campus 1780 West 49th Street
Hialeah, Florida 33012, USA E-mail: [email protected]
ABSTRACT
Multiple linear regression is one of the most widely used statistical techniques in educational research. It is defined as a multivariate technique for determining the correlation between a response variable and some combination of two or more predictor variables. In this paper, a multiple linear regression model is developed to analyze the student’s final grade in a mathematics class. The model is based on the data of student’s scores in three tests, a quiz, and the final examination from a mathematics class. The use of multiple linear regression is illustrated in the prediction study of the student’s average performance in the mathematics class. Estimates both of the magnitude and of the statistical significance of relationships between the variables have been provided. Graphical representations of our analysis have been given. Some concluding remarks are given at the end.

Key words: Regression, response variable, predictor variable.

Mathematics Subject Classification: 65F35, 15A12, 15A04, 62J05.

1. INTRODUCTION

Multiple linear regression is defined as a multivariate technique for determining the correlation between a response variable Y and some combination of two or more predictor variables, X (see, for example, Montgomery and Peck (1982), Draper and Smith (1998), Tamhane and Dunlop (2000), and McClave and Sincich (2006), among others, for details). It can be used to analyze data from causal-comparative, correlational, or experimental research. It can handle interval, ordinal, or categorical data. In addition, multiple regression provides estimates both of the magnitude and statistical significance of relationships between variables. Multiple linear regression is one of the most widely used statistical techniques in educational research. It is regarded as the “Mother of All Statistical Techniques.” For example, many colleges and universities develop regression models for predicting the GPA of incoming freshmen.
The predicted GPA can then be used to make admission decisions. In addition, many researchers have studied the use of multiple linear regression in the field of educational research. The use of multiple linear regression has been studied by Shepard (1979) to determine the predictive validity of the California Entry Level Test (ELT). In Draper and Smith (1998), the use of multiple linear regression is illustrated in a prediction study of the candidate’s
aggregate performance in the G. C. E. examination. The use of multiple regression is also illustrated in a partial-credit study of students' final examination scores in a mathematics class at Florida International University conducted by Rosenthal (1994). A multiple regression study was also conducted by Senfeld (1995) to examine the relationships among tolerance of ambiguity, belief in commonly held misconceptions about the nature of mathematics, self-concept regarding math, and math anxiety. In Shakil (2001), the use of a multiple linear regression model has been examined in predicting the college GPA of matriculating freshmen based on their college entrance verbal and mathematics test scores.

The organization of this paper is as follows. In Section 2, the multiple linear regression model and the underlying assumptions associated with it are discussed. In Section 3, the problem and objective of this study are presented. Section 4 provides the data analysis and the justification and adequacy of the multiple regression model developed. Some concluding remarks are given in Section 5.

2. MULTIPLE LINEAR REGRESSION MODEL AND ASSUMPTIONS

2.1. Model

A multiple linear regression model (or regression equation) based on k independent (or predictor) variables X1, X2, ..., Xk can be obtained by the method of least squares, and is given by the equation

Y = β0 + β1X1 + β2X2 + ⋯ + βkXk + ε,

where Y is the response variable, X1, X2, ..., Xk are the predictor variables, β0, β1, ..., βk are the population regression coefficients, and ε is a random error (see, for example, Mendenhall et al. (1993), and Draper and Smith (1998), among others, for details). Multiple linear regression allows for the simultaneous use of several independent (or predictor) variables to explain the variation in the response variable Y. The fitted equation is given by

Ŷ = β̂0 + β̂1X1 + β̂2X2 + ⋯ + β̂kXk,

where Ŷ is the predicted (fitted) value and β̂0, β̂1, ..., β̂k are estimates of the population regression coefficients. The sum of squares of the deviations (residuals) of the observed values Yi from their predicted (fitted) values Ŷi is given by

SS(residual) = Σᵢ₌₁ⁿ (Yi − Ŷi)² = Σᵢ₌₁ⁿ [Yi − (β̂0 + β̂1Xi1 + β̂2Xi2 + ⋯ + β̂kXik)]².

The "best fit" equation based on the sample data is the one that minimizes SS(residual).
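The minimizing property of the least-squares estimates can be checked numerically. The following Python sketch uses synthetic data (not the study data) and NumPy's least-squares solver; any perturbation of the fitted coefficients increases SS(residual):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
X = rng.normal(size=(n, 2))                       # two predictor variables
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n)

A = np.column_stack([np.ones(n), X])              # design matrix with intercept column
beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares estimates

def ss_residual(beta):
    """Sum of squared deviations of the observed y from the fitted values."""
    r = y - A @ beta
    return float(r @ r)

# The least-squares fit minimizes SS(residual): perturbing any coefficient increases it.
assert ss_residual(beta_hat) < ss_residual(beta_hat + np.array([0.1, 0.0, 0.0]))
assert ss_residual(beta_hat) < ss_residual(beta_hat - np.array([0.0, 0.05, 0.0]))
```

The same check works for any full-rank design matrix, since SS(residual) is a strictly convex function of the coefficients.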
2.2. Assumptions

For the multiple linear regression model

Y = β0 + β1X1 + β2X2 + ⋯ + βkXk + ε,

the following assumptions are made:

a) The random error term ε has an expected value of zero and a constant variance σ². That is, E(ε) = 0 and V(ε) = σ² for each recorded value of the dependent variable Y.

b) The error components are uncorrelated with one another.

c) The regression coefficients β0, β1, ..., βk are parameters (and hence constant).

d) The independent (predictor) variables X1, X2, ..., Xk are known constants.

e) The random error term ε is a normally distributed random variable, with an expected value of zero and a constant variance σ² by assumption (a). That is, ε ~ N(0, σ²). Under this additional assumption, the error components are not only uncorrelated with one another, but also necessarily independent.
3. PROBLEM AND OBJECTIVE OF STUDY

The purpose of the present study was to contribute to the body of knowledge pertaining to the use of multiple linear regression in educational research. The objective was to develop an appropriate multiple linear regression model relating the student's final examination score (the dependent or response variable Y) to the student's scores on tests, quizzes, etc. (the independent or predictor variables X). It examined how well the scores on tests, quizzes, etc. could be used to predict the student's final grade. Data were collected on the Test #1 score (X1), Test #2 score (X2), Test #3 score (X3), and Final Examination score (Y) for a sample of 39 students in a mathematics class. Using these variables, the following three-predictor multiple linear regression model (or least-squares prediction equation) was developed:

Y = β0 + β1X1 + β2X2 + β3X3 + ε,

where the β's denote the population regression coefficients and ε is a random error. The Minitab regression computer programs were used to determine the regression coefficients and analyze the data (see, for example, McKenzie and Goldman (2005), MINITAB Release 14). The adequacy of the multiple linear regression model for predicting the student's final examination grade was assessed using the F-test for significance of regression.
4. DATA ANALYSIS

The Minitab regression computer program outputs are given below. The paragraphs that follow explain them.

4.1. Minitab Regression Computer Program Output: Analysis of Variance

4.1.1. Regression Analysis: Y versus X1, X2, X3

The regression equation is
Y = 8.98 + 0.247 X1 + 0.338 X2 + 0.290 X3

Predictor   Coef     SE Coef   T      P      VIF
Constant    8.978    9.737     0.92   0.363
X1          0.2466   0.1456    1.69   0.099  1.6
X2          0.3384   0.1202    2.82   0.008  1.5
X3          0.2899   0.1146    2.53   0.016  1.1

S = 13.1376   R-Sq = 53.3%   R-Sq(adj) = 49.3%
PRESS = 7229.35   R-Sq(pred) = 44.09%

Analysis of Variance
Source          DF   SS        MS      F      P
Regression       3   6890.6    2296.9  13.31  0.000
Residual Error  35   6040.8    172.6
Total           38   12931.4

Source  DF  Seq SS
X1       1  4251.7
X2       1  1534.1
X3       1  1104.8

Unusual Observations
Obs  X1    Y      Fit    SE Fit  Residual  St Resid
13   59.0  85.00  58.42  3.69     26.58     2.11R
38   78.0  45.00  72.63  2.37    -27.63    -2.14R

R denotes an observation with a large standardized residual.

4.1.2. Interpreting the Results
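The fitted equation above can be reproduced directly from the raw scores listed in Appendix I. The following Python sketch, using NumPy's least-squares solver in place of Minitab, is offered as an illustrative cross-check:

```python
import numpy as np

# Appendix I data: columns are Test 1 (X1), Test 2 (X2), Test 3 (X3),
# and Final Examination score (Y) for the 39 students.
data = np.array([
    [78, 88, 65, 68], [84, 96, 99, 95], [78, 77, 72, 89], [96, 75, 90, 95],
    [72, 82, 70, 75], [78, 75, 92, 87], [90, 74, 96, 75], [66, 84, 64, 57],
    [68, 52, 84, 60], [90, 92, 35, 70], [66, 95, 90, 80], [96, 75, 62, 91],
    [59, 68, 41, 85], [66, 44, 63, 58], [72, 88, 60, 91], [66, 59, 25, 30],
    [48, 43, 53, 75], [42, 32, 76, 51], [66, 61, 73, 45], [90, 96, 21, 75],
    [78, 42, 56, 60], [60, 32, 59, 55], [78, 98, 77, 90], [48, 26, 42, 25],
    [72, 74, 70, 85], [84, 54, 82, 80], [96, 93, 81, 75], [68, 33, 65, 45],
    [60, 63, 50, 60], [42, 72, 84, 60], [30, 50, 32, 35], [18, 32, 33, 40],
    [47, 48, 70, 55], [78, 39, 52, 60], [78, 48, 79, 60], [90, 60, 68, 75],
    [66, 74, 72, 65], [78, 73, 68, 45], [45, 32, 77, 75],
], dtype=float)

X, y = data[:, :3], data[:, 3]
A = np.column_stack([np.ones(len(X)), X])        # design matrix with intercept
beta, *_ = np.linalg.lstsq(A, y, rcond=None)     # should closely match Minitab's
                                                 # 8.98 + 0.247 X1 + 0.338 X2 + 0.290 X3
resid = y - A @ beta
r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
print(np.round(beta, 3), round(r2, 3))
```

With an intercept in the model, the residuals sum to zero and R² here should agree with the reported R-Sq = 53.3%.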
I. From the Analysis of Variance table, we observe that the p-value is 0.000. This implies that the model estimated by the regression procedure is significant at an α-level of 0.05; thus at least one of the regression coefficients is different from zero.

II. The p-values for the estimated coefficients of X2 and X3 are 0.008 and 0.016, respectively, indicating that they are significantly related to Y. The p-value for X1 is 0.099, indicating that it is probably not related to Y at an α-level of 0.05.
III. The R² and Adjusted R² Statistics: There are several useful criteria for measuring the goodness of fit of a multiple regression model. One such criterion is the square of the multiple correlation coefficient, R² (also called the coefficient of multiple determination); see, for example, Mendenhall et al. (1993), and Draper and Smith (1998), among others. The R² value in the regression output indicates that 53.3% of the total variation of the Y values about their mean can be explained by the predictor variables used in the model. The adjusted R² value, which adjusts R² for the number of predictors in the model, is 49.3%. As the values of R² and adjusted R² are not very different, it appears that at least one of the predictor variables contributes information for the prediction of Y. Thus both values indicate that the model fits the data reasonably well.

IV. Predicted R² Statistic: The predicted R² value is 44.09%. Because the predicted R² value is close to the R² and adjusted R² values, the model does not appear to be overfit and has adequate predictive ability.
V. Estimate of Variance: The variance σ² of the Y values about the regression, for any given set of the independent variables X1, X2, ..., Xk, is estimated by the residual mean square s², which equals SS(residual) divided by the appropriate number of degrees of freedom. The standard error is

s = √(residual mean square).

For our problem, we have s² = 172.6 and s = 13.1376. The smaller this statistic, the more precise the predictions will be. A useful way of looking at s is to consider it in relation to the response (see, for example, Draper and Smith (1998), among others, for details). In our example, s as a percentage of the mean Ȳ, that is, the coefficient of variation (CV), is given by

CV = 13.1376 / 66.58974 = 19.7292%.

This means that the standard deviation of the students' final examination grade, Y, is only about 19.73% of their mean.
VI. Unusual Observations: Observations 13 and 38 are identified as unusual because the absolute values of their standardized residuals are greater than 2. This may indicate that they are outliers.
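These standardized residuals can be recovered from the printed output: the variance of a residual is σ²(1 − hᵢᵢ), which equals s² minus the squared standard error of the fit, so St Resid = Residual / √(s² − (SE Fit)²). A small check in Python, using the values from the output above:

```python
import math

s = 13.1376  # residual standard error from the regression output

def standardized_residual(residual, se_fit):
    """St Resid = residual / sqrt(s^2 - SE_Fit^2), since Var(e_i) = sigma^2 (1 - h_ii)."""
    return residual / math.sqrt(s**2 - se_fit**2)

print(round(standardized_residual(26.58, 3.69), 2))   # observation 13 -> 2.11
print(round(standardized_residual(-27.63, 2.37), 2))  # observation 38 -> -2.14
```

Both values match the 2.11R and -2.14R flagged by Minitab.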
VII. Multicollinearity: By multicollinearity, we mean that some predictor variables are correlated with other predictors. Various techniques have been developed to identify predictor variables that are highly collinear and to address the problem of multicollinearity (see, for example, Montgomery and Peck (1982), Draper and Smith (1998), Tamhane and Dunlop (2000), and McClave and Sincich (2006), among others, for details). For example, we can examine the variance inflation factors (VIF), which measure how much the variance of an estimated regression coefficient increases when the predictor variables are correlated. Following Montgomery and Peck (1982), if the VIF is between 5 and 10, the regression coefficients are poorly estimated. Since the variance inflation factor for each of the estimated regression coefficients in our calculations is less than 5, there does not seem to be multicollinearity in our model.
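The VIF for a predictor Xj is 1/(1 − Rj²), where Rj² comes from regressing Xj on the remaining predictors; for perfectly uncorrelated predictors the VIF is exactly 1. A minimal sketch on toy data (not the study data):

```python
import numpy as np

def vif(X, j):
    """Variance inflation factor of column j: 1 / (1 - R^2) from
    regressing X[:, j] on the other columns (with an intercept)."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(X)), others])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    return 1.0 / (1.0 - r2)

# Two orthogonal predictors -> no collinearity -> VIF of 1 for each.
X = np.array([[1.0, 1.0], [-1.0, 1.0], [1.0, -1.0], [-1.0, -1.0]])
print(vif(X, 0), vif(X, 1))  # both are essentially 1
```

Replacing the second column with a near-copy of the first drives the VIFs far above the 5-to-10 danger zone cited in the text.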
VIII. Predicted Values for New Observations: Using the model developed, predicted values for a new observation are given below.

New Obs  X1    X2    X3
1        70.0  65.0  80.0

Predicted Values for New Observations
New Obs  Fit    SE Fit  95% CI          95% PI
1        71.43  2.67    (66.01, 76.84)  (44.21, 98.64)

4.2. Best Subsets Regression: Y versus X1, X2, X3

Another important criterion for assessing the predictive ability of a multiple linear regression model is the associated Cp-statistic. The best subsets regression method is used to choose a subset of the predictor variables so that the corresponding fitted regression model optimizes the Cp-statistic. The Minitab regression computer program output for best subsets regression is given below.
Best Subsets Regression: Y versus X1, X2, X3
Response is Y

                        Mallows           X X X
Vars  R-Sq  R-Sq(adj)   C-p       S       1 2 3
1     37.6  36.0        11.7      14.763    X
1     32.9  31.1        15.3      15.316  X
2     49.5  46.6         4.9      13.474    X X
2     44.7  41.7         8.4      14.089  X X
3     53.3  49.3         4.0      13.138  X X X

In the above computer output, each line represents a different model. Vars is the number of predictor variables in the model, the R² and adjusted R² statistics are expressed as percentages, and the predictors present in a model are indicated by an X. The model with all three predictor variables has the highest adjusted R² (49.3%), a low Mallows Cp value (4.0), and the lowest S value (13.138). Note that the two-predictor models (X2, X3) and (X1, X2) also perform competitively, with relatively high adjusted R², low Mallows Cp, and low S values (see the output above).
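Best subsets regression simply enumerates every subset of the candidate predictors, fits each model, and compares criteria across subsets. Since R² can only increase as predictors are added, the full model always attains the largest R², which is why adjusted R², Mallows Cp, and S are used to choose among subsets of different sizes. An illustrative enumeration in Python (synthetic data; only R² is computed here):

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 40
X = rng.normal(size=(n, 3))
y = 5.0 + 0.8 * X[:, 0] + 1.2 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(size=n)

def r_squared(cols):
    """R^2 of the least-squares fit of y on the predictor columns in `cols`."""
    A = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# Enumerate every non-empty subset of the predictors, as best subsets regression does.
results = {cols: r_squared(cols)
           for k in (1, 2, 3)
           for cols in itertools.combinations(range(3), k)}
for cols, r2 in sorted(results.items(), key=lambda kv: -kv[1]):
    print(cols, round(r2, 3))
```

With three candidate predictors there are only seven subsets; Minitab's output above shows the best one or two of each size.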
4.3. Residual Plots for Y

4.3.1. The Minitab regression computer program outputs for the residual plots of Y are given in Figure 4.2.1 below. The paragraphs that follow examine the goodness of fit of the model based on these residual plots.
[Figure 4.2.1. Residual plots for Y (four panels): Normal Probability Plot of the Residuals; Residuals Versus the Fitted Values; Histogram of the Residuals; Residuals Versus the Order of the Data.]

4.3.2. Interpreting the Graphs (Figure 4.2.1)
A. From the normal probability plot, we observe that there exists an approximately linear pattern. This indicates the consistency of the data with a normal distribution. The outliers are indicated by the points in the upper-right corner of the plot.
B. From the plot of residuals versus the fitted values, it is evident that the residuals get smaller, that is, closer to the reference line, as the fitted values increase. This may indicate that the residuals have non-constant variance, (see, for example, Draper and Smith (1998), among others, for details).
C. The histogram of the residuals indicates that no outliers exist in the data.
D. The plot for residuals versus order is also provided in Figure 4.2.1. It is defined as a plot of all residuals in the order that the data was collected. It is used to find non-random error, especially of time-related effects. A clustering of residuals with the same sign indicates a positive correlation, whereas a negative correlation is indicated by rapid changes in the signs of consecutive residuals.
4.4. Testing the Adequacy of the Multiple Regression Model for Predicting the Student's Final Exam Grade

From the above analysis, it appears that the fitted multiple regression model for predicting the student's final examination grade, Y, is given by

Ŷ = 8.98 + 0.247X1 + 0.338X2 + 0.290X3.

This section discusses the usefulness and adequacy of this model.

4.4.1. Confidence Intervals for the Parameters βi

If we assume that the variation of the observations about the line is normal, that is, that the error terms ε are all from the same normal distribution N(0, σ²), it can be shown that 100(1 − α)% confidence limits for βi can be assigned by calculating

β̂i ± t(n − 2, 1 − α/2) · se(β̂i),

where t(n − 2, 1 − α/2) is the 100(1 − α/2)% percentage point of a t-distribution with n − 2 degrees of freedom (the number of degrees of freedom on which the estimate s² is based). Suppose α = 0.05. For t(37, 0.975), we can use t(40, 0.975) = 2.021 or interpolate in the t-table. Thus we have:

(i) 95% confidence limits for β1: (-0.047641, 0.540905);
(ii) 95% confidence limits for β2: (0.0954646, 0.5812434); and
(iii) 95% confidence limits for β3: (0.0583227, 0.5214433).
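Using the coefficient estimates and standard errors from the regression output, these intervals can be reproduced with the tabled value t(40, 0.975) = 2.021 used above (an exact t(37, 0.975) from software would shift the limits slightly in the third or fourth decimal):

```python
# Coefficient estimates and standard errors taken from the Minitab output.
estimates = {"b1": (0.2466, 0.1456), "b2": (0.3384, 0.1202), "b3": (0.2899, 0.1146)}
t_crit = 2.021  # tabled t(40, 0.975), used as an approximation to t(37, 0.975)

intervals = {name: (b - t_crit * se, b + t_crit * se)
             for name, (b, se) in estimates.items()}
for name, (lo, hi) in intervals.items():
    print(f"95% CI for {name}: ({lo:.4f}, {hi:.4f})")
```

Each interval agrees with the limits (i)-(iii) above to about three decimal places; only the β1 interval contains zero, consistent with the earlier finding that X1 is not significant at α = 0.05.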
4.4.2. Tests of Significance for the Individual Parameters: H0: βi = 0 versus Ha: βi ≠ 0

A test of the hypothesis that a particular parameter, say βi, equals zero can be conducted by using the t-statistic given by

t = (β̂i − 0) / se(β̂i).

The test can also be conducted by using the F-statistic, since the square of a t-statistic (with ν degrees of freedom) is equal to an F-statistic with 1 degree of freedom in the numerator and ν degrees of freedom in the denominator. That is, t² = F.

Decision Rule: Reject H0 if |t| > t(n − 2, 1 − α/2).

Using the multiple linear regression computer outputs, the t-statistic values for the different βi's are analyzed in Table 4.4.1 below.

Table 4.4.1

Null Hypothesis  t(37, 0.975)*  t     Inference          Conclusion
H0: β1 = 0       2.021          1.69  Fail to reject H0  In the presence of X2 and X3, X1 is a poor predictor of Y.
H0: β2 = 0       2.021          2.82  Reject H0          In the presence of X1 and X3, X2 is a good predictor of Y.
H0: β3 = 0       2.021          2.53  Reject H0          In the presence of X1 and X2, X3 is a good predictor of Y.

* For t(37, 0.975), we can use t(40, 0.975) = 2.021 or interpolate in the t-table.

4.4.3. F-Test for Significance of Regression

Null Hypothesis: H0: β1 = β2 = β3 = 0 (the regression is not significant) versus
Alternate Hypothesis: Ha: at least one of the βi's ≠ 0 (the regression is significant).

Test Statistic: F = MS(regression) / s².

Decision Rule: Reject H0 if F > F(ν1 = 3, ν2 = 35, 1 − α).

The F-statistic tests the hypothesis that at least one of the predictor variables contributes significant information for the prediction of the student's final examination grade, Y. In the computer output, it is calculated as F = 13.31. Comparing this with the critical value F(ν1 = 3, ν2 = 35, 0.95) = 2.84 at α = 0.05, we reject the null hypothesis H0: β1 = β2 = β3 = 0, that is, the hypothesis that the regression is not significant. Thus, the overall regression is statistically significant. In fact, F = 13.31 exceeds F(ν1 = 3, ν2 = 35, α = 0.005) = 4.98 (see, for example, Mendenhall et al. (1993), p. 994, Table 6), and is significant at a p-value < 0.005.
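The decision rule, and the t² = F identity mentioned in Section 4.4.2, can be checked with SciPy's distribution functions (SciPy assumed available; the tabled 2.84 comes from a 40-denominator-df row, so the exact 35-df critical value is slightly larger):

```python
from scipy import stats

F_obs = 13.31                              # from the regression output
f_crit = stats.f.ppf(0.95, dfn=3, dfd=35)  # exact F(3, 35, 0.95)
print(round(f_crit, 3))                    # close to the tabled 2.84
assert F_obs > f_crit                      # reject H0: the regression is significant

# t^2 = F identity: squaring a t(nu) critical value gives the F(1, nu) critical value.
nu = 35
t_crit = stats.t.ppf(0.975, nu)
assert abs(t_crit**2 - stats.f.ppf(0.95, dfn=1, dfd=nu)) < 1e-6
```

Since the observed F exceeds the critical value by a wide margin, the conclusion is insensitive to the small difference between the tabled and exact critical values.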
It appears that at least one of the predictor variables contributes information for the prediction of Y.

5. CONCLUDING REMARKS

The fitted multiple regression model for predicting the student's final examination grade, Y, is given by

Ŷ = 8.98 + 0.247X1 + 0.338X2 + 0.290X3.

From the above analysis, it appears that our multiple regression model for predicting the student's final examination grade, Y, is useful and adequate. In the presence of X1 and X3, X2 is a good predictor of Y; in the presence of X1 and X2, X3 is a good predictor of Y. As the values of R² and adjusted R² are not very different, it appears that at least one of the predictor variables contributes information for the prediction of Y. The coefficient of variation, CV = 19.7292%, also tells us that the standard deviation of the students' final examination grade, Y, is only about 19.73% of their mean. Also, since the test statistic value calculated from the data, F = 13.31, exceeds the critical value F(ν1 = 3, ν2 = 35, 0.95) = 2.84 at α = 0.05, we reject the null hypothesis H0: β1 = β2 = β3 = 0, that is, the hypothesis that the regression is not significant. Hence, our multiple regression model for predicting the student's final examination grade, Y, seems to be useful and adequate, and the overall regression is statistically significant. The Cp-statistic criterion and the residual plots of Y (Figure 4.2.1) discussed above also confirm the adequacy of our model.

For future work, one can consider developing and studying similar models in the fields of education and the social and behavioral sciences. One can also develop similar models by adding other variables, for example, the attitude, interest, prerequisites, gender, age, marital status, employment status, race, and ethnicity of the student, as well as the squares, cubes, and cross products of X1, X2, and X3. In addition, one could also study the effect of some data transformations.
REFERENCES

1. Borg, W. R., and Gall, M. D. (1983). Educational Research – An Introduction (4th edition). New York & London: Longman.
2. Draper, N. R., and Smith, H. (1998). Applied Regression Analysis (3rd edition). New York: John Wiley & Sons, Inc.
3. McClave, J. T., and Sincich, T. (2006). Statistics (10th edition). Upper Saddle River, NJ: Pearson Prentice Hall.
4. McKenzie, J. D., and Goldman, R. (2005). MINITAB Release 14. Boston: Addison Wesley.
5. Mendenhall, W., Reinmuth, J. E., and Beaver, R. J. (1993). Statistics for Management and Economics (7th edition). Belmont, CA: Duxbury Press.
6. Montgomery, D. C., and Peck, E. A. (1982). Introduction to Linear Regression Analysis. New York: John Wiley & Sons, Inc.
7. Rosenthal, M. (1994). "Partial Credit Study." University Park, Florida: Department of Mathematics, Florida International University.
8. Senfeld, L. (1995). "Math anxiety and its relationship to selected student attitudes and beliefs," Ph.D. Thesis. Coral Gables, Florida: University of Miami.
9. Shakil, M. (2001). "Fitting of a linear model to predict the college GPA of matriculating freshmen based on their college entrance verbal and mathematics test scores," A Data Analysis I Computer Project. University Park, Florida: Department of Statistics, Florida International University.
10. Shepard, L. (1979). "Construct and Predictive Validity of the California Entry Level Test." Educational and Psychological Measurement, 39: 867-77.
11. Tamhane, A. C., and Dunlop, D. D. (2000). Statistics and Data Analysis: From Elementary to Intermediate (1st edition). Upper Saddle River, NJ: Pearson Prentice Hall.
APPENDIX I

OBS  X1  X2  X3  Y     OBS  X1  X2  X3  Y
1    78  88  65  68    20   90  96  21  75
2    84  96  99  95    21   78  42  56  60
3    78  77  72  89    22   60  32  59  55
4    96  75  90  95    23   78  98  77  90
5    72  82  70  75    24   48  26  42  25
6    78  75  92  87    25   72  74  70  85
7    90  74  96  75    26   84  54  82  80
8    66  84  64  57    27   96  93  81  75
9    68  52  84  60    28   68  33  65  45
10   90  92  35  70    29   60  63  50  60
11   66  95  90  80    30   42  72  84  60
12   96  75  62  91    31   30  50  32  35
13   59  68  41  85    32   18  32  33  40
14   66  44  63  58    33   47  48  70  55
15   72  88  60  91    34   78  39  52  60
16   66  59  25  30    35   78  48  79  60
17   48  43  53  75    36   90  60  68  75
18   42  32  76  51    37   66  74  72  65
19   66  61  73  45    38   78  73  68  45
                       39   45  32  77  75
Assessing Student Performance Using Test Item Analysis and its Relevance to the State Exit Final Exams of MAT0024 Classes - An Action Research Project*
Dr. Mohammad Shakil
Department of Mathematics
Miami Dade College
Hialeah, FL 33012, USA; E-mail: [email protected]
Abstract
Classroom assessment and action research are two of the most crucial components of the teaching and learning process, and are essential parts of the scholarship of teaching and learning. Action research is an important, recent development in classroom assessment techniques, defined as teacher-initiated classroom research which seeks to increase the teacher's understanding of classroom teaching and learning and to bring about improvements in classroom practices. Assessing student performance is very important when the learning goals involve the acquisition of skills that can be demonstrated through action. Many researchers have developed useful theories and taxonomies on the assessment of academic skills, intellectual development, and cognitive abilities of students, from both the analytical and the quantitative points of view. Different kinds of assessments are appropriate in different settings. Item analysis is one powerful technique available to instructors for the guidance and improvement of instruction. In this project, student performance is investigated using test item analysis and its relevance to the State Exit Final Exams of MAT0024 classes.

Keywords: Action Research, Discriminators, Discrimination Index, Item Analysis, Item Difficulty, Point-Biserial, Reliability.

*Part of this article was presented on MDC Conference Day, March 6th, 2008, at MDC, Kendall Campus.
1. Introduction

Assessing student performance is very important when the learning goals involve the acquisition of skills that can be demonstrated through action. Many researchers have worked on and developed useful theories and taxonomies (for example, Bloom's taxonomy) on the assessment of academic skills, intellectual development, and cognitive abilities of students, from both the analytical and the quantitative points of view. For details on Bloom's cognitive taxonomy and its applications, see, for example, Bloom (1956), Ausbel (1968), Bloom et al. (1971), Simpson (1972), Krathwohl et al. (1973), Angelo & Cross (1993), and Mertler (2003), among others. Different kinds of assessments are appropriate in different settings. One of the most important and authentic techniques of assessing and estimating student performance across the full domain of learning outcomes targeted by the instructor is the classroom test. Each item on a test is intended to sample student performance on a particular learning outcome. Thus, creating valid and reliable classroom tests is very important to an instructor for assessing student performance, achievement, and success in the class. The same principle applies to the State Exit Exams and classroom tests conducted by instructors, the state, and other agencies. Moreover, it is important to note that, most of the time, it is not well known whether the test items (e.g., multiple-choice) accompanying textbooks or test-generator software, or constructed by instructors, have already been tested for validity and reliability. One powerful technique available to instructors for the guidance and improvement of instruction is test item analysis. It appears from the literature that, in spite of the extensive work on item analysis and its applications, very little attention has been paid to this kind of quantitative study of item analysis of state exit exams or classroom tests, particularly at Miami Dade College.
After a thorough search of the literature, the author of the present article has been able to find two references for this kind of study, namely Hostetter & Haky (2005) and Hotiu (2006). Accordingly, in this project, student performance is investigated using test item analysis and its relevance to the State Exit Final Exams of MAT0024 classes. By conducting the test item analysis of the State Exit Final Exams of some of my MAT0024 classes, this project discusses how well these exams distinguish among students according to how well they met the learning goals of these classes. The data obtained from these exit exams are presented here as an item analysis report, which, it is hoped, will be helpful in recognizing the most critical pieces of the state exit test item data and in evaluating whether or not a test item needs revision. The organization of this paper is as follows. Section 2 discusses briefly what action research is. In Section 3, an overview of some important statistical aspects of test item analysis is presented. Section 4 contains the test item analysis and other statistical analyses of the State Exit Final Exams of MAT0024 classes. Some conclusions are drawn in Section 5.

2. An Overview of Action Research

This section discusses briefly what action research is.

2.1 What Is Action Research?
The development of the general idea of “action research” began with the work of Kurt Lewin (1946) in his paper entitled “Action Research and Minority Problems,” where he describes action research as “a comparative research on the conditions and effects of various forms of social action and research leading to social action” that uses “a spiral of steps, each of which is composed of a circle of planning, action, and fact-finding about
the result of the action”. Further development continued with the contributions by many other authors later, among them Kemmis (1983), Ebbutt (1985), Hopkins (1985), Elliott (1991), Richards et al. (1992), Nunan (1992), Brown (1994), and Greenwood et al. (1998), are notable. For recent developments on the theory of action research and its applications, the interested readers are referred to Brydon-Miller et al. (2003), Gustavsen (2003), Dick (2004), Elvin (2004), Barazangi (2006), Greenwood (2007), and Taylor & Pettit (2007), and references therein. As cited in Gabel (1995), following are some of the commonly used definitions of action research:
Action Research aims to contribute both to the practical concerns of people in an
immediate problematic situation and to the goals of social science by joint collaboration within a mutually acceptable ethical framework. (Rapoport, 1970).
Action Research is a form of self-reflective enquiry undertaken by participants in social (including educational) situations in order to improve the rationality and justice of (a) their own social or educational practices, (b) their understanding of these practices, and (c) the situations in which the practices are carried out. It is most rationally empowering when undertaken by participants collaboratively... ...sometimes in cooperation with outsiders. (Kemmis, 1983).
Action Research is the systematic study of attempts to improve educational
practice by groups of participants by means of their own practical actions and by means of their own reflection upon the effects of those actions. (Ebbutt, 1985).
In the field of education, the term action research is defined as inquiry or research in the context of focused efforts in order to improve the quality of an educational institution and its performance. Typically, in an educational institution, the action research is designed and conducted by the instructors in their classes to analyze the data to improve their own teaching. It can be done by an individual instructor or by a team of instructors as a collaborative inquiry. Action research gives an instructor opportunities to reflect on and assess his/her teaching and its effectiveness by applying and testing new ideas, methods, and educational theory for the purpose of improving teaching, or to evaluate and implement an educational plan. According to Richards et al. (1992), action research is defined as teacher-initiated classroom research, which seeks to increase the teacher's understanding of classroom teaching and learning and to bring about improvements in classroom practices. Nunan (1992) defines it as a form of self-reflective inquiry carried out by practitioners, aimed at solving problems, improving practice, or enhancing understanding. According to Brown (1994), “Action research is any action undertaken by teachers to collect data and evaluate their own teaching. It differs from formal research, therefore, in that it is usually conducted by the teacher as a researcher, in a specific classroom situation, with the aim being to improve the situation or teacher rather than to spawn generalizeable knowledge. Action research usually entails observing, reflecting, planning and acting. In its simplest sense, it is a cycle of action and critical reflection, hence the name, action research.”
2.2 My Action Research Project
There are many ways in which an instructor can exploit the classroom tests for assessing student performance, achievement and success in the class. It is one of the
most important and authentic techniques of assessing and estimating student performance across the full domain of learning outcomes targeted by the instructor. One powerful technique available to an instructor for the guidance and improvement of instruction is test item analysis. In this project, I have investigated student performance using test item analysis and its relevance to the State Exit Final Exams of MAT0024 classes. By conducting the test item analysis of the State Exit Final Exams of some of my MAT0024 classes, that is, Fall 2006-1, Spring 2006-2, and Fall 2007-1, this project discusses how well these exams distinguish among students according to how well they met the learning goals of these classes. The data obtained from these exit exams are presented here as an item analysis report based upon classical test theory (CTT), which is one of the important, commonly used approaches to item analysis. It is hoped that the present study will be helpful in recognizing the most critical pieces of the state exit test item data and in evaluating whether or not a test item needs revision. The methods discussed in this project can be used to describe the relevance of test item analysis to classroom tests. These procedures can also be used or modified to measure, describe, and improve tests or surveys such as college mathematics placement exams (that is, the CPT), mathematics study skills and attitude surveys, test anxiety, information literacy, other general education learning outcomes, etc.
Further research based on Bloom’s cognitive taxonomy of test items (see, for example, the references as cited above), the applicability of Beta-Binomial models and Bayesian analysis of test items (see, for example, Duncan,1974; Gross & Shulman, 1980; Wilcox, 1981; and Gelman, 2006; among others), and item response theory (IRT) using the 1-parameter logistic model (also known as Rasch model), 2- & 3- parameter logistic models, plots of the item characteristic curves (ICCs) of different test items, and other characteristics of measurement instruments of IRT are under investigation by the present author and will be reported soon at an appropriate time. For details on IRT and recent developments, see, for example, Rasch (1960/1980), Lord & Novick (1968), Lord (1980), Wright (1992), Hambleton et al. (1991), Linden & Hambleton (1997), Thissen & Steinberg (1997), and Gleason (2008), among others.
3. An Overview of Test Item Analysis

In this section, an overview of test item analysis is presented.

3.1 Item Analysis

Item analysis is a process which examines student responses to individual test items (questions) in order to assess the quality of those items and of the test as a whole. It is a valuable, powerful technique available to teaching professionals and instructors for the guidance and improvement of instruction. It enables instructors to increase their test construction skills, identify specific areas of course content which need greater emphasis or clarity, and improve other classroom practices. According to Thompson & Levitov (1985, p. 163), "Item analysis investigates the performance of items considered individually either in relation to some external criterion or in relation to the remaining items on the test." For example, when norm-referenced tests (NRTs) are developed for instructional purposes, such as placement tests, or to assess the effects of educational programs, or for educational research purposes, it can be very important to conduct item and test analyses. Similarly, criterion-referenced tests (CRTs) compare students' performance to some preestablished criteria or objectives (such as classroom tests designed by instructors). These analyses evaluate the quality of the items and of the test as a whole, and can also be employed to revise and improve both the items and
5
the test as a whole. Many researchers have contributed to the theory of test item analysis, among them Galton, Pearson, Spearman, and Thorndike are notable. For details on these pioneers of test item analysis theories and their contributions, see, for example, Gulliksen (1987), among others. For recent developments on the test item analysis practices, see Crocker & Algina (1986), Gronlund & Linn (1990), Pedhazur & Schemlkin (1991), Sax (1989), Thorndike, et al. (1991), Elvin (2003), and references therein.
3.2 Classical Test Theory (CTT)

An item analysis involves many statistics that can provide useful information for improving the quality and accuracy of multiple-choice or true/false items (questions). It describes the statistical analyses which allow measurement of the effectiveness of individual test items. An understanding of the factors which govern effectiveness (and a means of measuring them) can enable us to create more effective test questions and also to regulate and standardize existing tests. Item analysis is an important phase in the development of an exam program. For example, a test or exam consisting of multiple-choice or true/false items is used to determine the proficiency (or ability) level of an examinee in a particular discipline or subject. Most of the time, the test or exam score obtained carries considerable weight in determining whether an examinee has passed or failed the subject. That is, the proficiency (or ability) level of an examinee is estimated using the total test score obtained from the number of correct responses to the test items. If the test score is equal to or greater than a cut-off score, the examinee is considered to have passed the subject; otherwise, the examinee is considered to have failed. This approach of using the test score as the proficiency (or ability) estimate is called the true score model (TSM) or the classical test theory (CTT) approach. Classical item analysis, based on traditional classical theory models, forms the foundation for looking at the performance of each item in a test. The development of the CTT began with the work of Charles Spearman (1904) in his paper entitled "General intelligence: Objectively determined and measured".
Further development continued with contributions from many researchers; among them, Francis Galton (1822-1911), Karl Pearson (1857-1936), and Edward Thorndike (1874-1949) are notable (for details, see, for example, Nunnally, 1967; Gulliksen, 1987; among others). For recent developments in the theory of CTT and its applications, the interested reader is referred to Chase (1999), Haladyna (1999), Nitko (2001), Tanner (2001), Oosterhof (2001), Mertler (2003), and references therein. The TSM equation is given by
X = T + ε,

where X = observed score, T = true score, ε = random error, and E(X) = T. Note that, in the above TSM equation, the true score T reflects the exact value of the examinee's ability or proficiency. Also, the TSM assumes that abilities (or traits) are constant and that the variation in observed scores is caused by random errors, which may result from factors such as guessing, lack of preparation, or stress. Thus, in CTT, all test items and statistics are test-dependent. The trait (or ability) of an examinee is defined in terms of a test, whereas the difficulty of a test item is defined in terms of the group of examinees. According to Hambleton, et al. (1991, p. 3), "Examinee characteristics and test item characteristics cannot be separated: each can be interpreted only in the context of the other." Some important criteria employed in the determination of the validity of a multiple-choice exam are the following:

(1) Whether the test items were too difficult or too easy.
(2) Whether the test items discriminated between those examinees who really knew the material and those who did not.
(3) Whether the incorrect responses to a test item were distractors or non-distractors.
3.3 Item Analysis Statistics

An item analysis involves many statistics that can provide useful information for determining the validity and improving the quality and accuracy of multiple-choice or true/false items. These statistics are used to measure the ability levels of examinees from their responses to each item. The ParSCORE™ item analysis generated by the Miami Dade College - Hialeah Campus Reading Lab when a multiple-choice MAT0024 State Exit Final Exam is machine scored consists of three types of reports, that is, a summary of test statistics, a test frequency table, and item statistics. The test statistics summary and frequency table describe the distribution of test scores (for details on these, see, for example, Agresti and Finlay, 1997; Tamhane and Dunlop, 2000; among others). The item analysis statistics evaluate class-wide performance on each test item. The ParSCORE™ report on item analysis statistics gives an overall view of the test results and evaluates each test item; these statistics are also useful in comparing the item analyses for different test forms. In what follows, descriptions of some useful, common item analysis statistics, that is, item difficulty, item discrimination, distractor analysis, and reliability, are presented (for details on these, see, for example, Wood, 1960; Lord & Novick, 1968; Henrysson, 1971; Nunnally, 1978; Thompson & Levitov, 1985; Crocker & Algina, 1986; Ebel & Frisbie, 1986; Suen, 1990; Thorndike et al., 1991; DeVellis, 1991; Millman & Greene, 1993; Haladyna, 1999; Tanner, 2001; Haladyna et al., 2002; Mertler, 2003; among others). For the sake of completeness, definitions of some test statistics as reported in the ParSCORE™ analysis are also provided.
(I) Item Difficulty: Item difficulty is a measure of the difficulty of an item. For items (that is, multiple-choice questions) with one correct alternative worth a single point, the item difficulty (also known as the item difficulty index, the difficulty level index, the difficulty factor, the item facility index, the item easiness index, or the p-value) is defined as the proportion of respondents (examinees) selecting the answer to the item correctly, and is given by

p = c / n,

where p = the difficulty factor, c = the number of respondents selecting the correct answer to an item, and n = the total number of respondents. Item difficulty is relevant for determining whether students have learned the concept being tested. It also plays an important role in the ability of an item to discriminate between students who know the tested material and those who do not. Note that

(i) 0 ≤ p ≤ 1.
(ii) A higher value of p indicates a lower difficulty level, that is, the item is easy. A lower value of p indicates a higher difficulty level, that is, the item is difficult. In general, an ideal test should have an overall item difficulty of around 0.5; however, it is acceptable for individual items to have higher or lower facility (ranging from 0.2 to 0.8). In a criterion-referenced test (CRT), with emphasis on mastery-testing of the topics covered, the optimal value of p for many items is expected to be 0.90 or above. On the other hand, in a norm-referenced test (NRT), with emphasis on discriminating between different levels of achievement, p ≈ 0.50. For details on these, see, for example, Chase (1999), among others.
(iii) To maximize item discrimination, the ideal (or moderate, or desirable) item difficulty level, denoted as p_M, is defined as the point midway between the probability of success, denoted as p_S, of answering the multiple-choice item correctly (that is, 1.00 divided by the number of choices) and a perfect score (that is, 1.00) for the item, and is given by

p_M = p_S + (1 − p_S) / 2.

(iv) Thus, using the above formula in (iii), ideal (or moderate, or desirable) item difficulty levels for multiple-choice items can be easily calculated; these are provided in the following table (for details, see, for example, Lord, 1952; among others).

Number of Alternatives | Probability of Success (p_S) | Ideal Item Difficulty Level (p_M)
2 | 0.50 | 0.75
3 | 0.33 | 0.67
4 | 0.25 | 0.63
5 | 0.20 | 0.60
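The entries of the table above follow directly from the midpoint formula. As a minimal sketch in Python (the function name is illustrative, not part of the ParSCORE™ report):

```python
def ideal_difficulty(num_choices):
    """Ideal item difficulty p_M: midway between the chance probability of
    success p_S = 1/k (k = number of choices) and a perfect score of 1.00."""
    p_s = 1.0 / num_choices
    return p_s + (1.0 - p_s) / 2.0

# Reproduce the table for 2-5 alternatives
levels = {k: ideal_difficulty(k) for k in (2, 3, 4, 5)}
```

For example, a four-alternative item has p_S = 0.25 and p_M = 0.25 + 0.75/2 = 0.625, which rounds to the tabulated 0.63.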
(Ia) Mean Item Difficulty (or Mean Item Easiness): Mean item difficulty is the average difficulty (or easiness) of all test items. It is an overall measure of the test difficulty and ideally ranges between 60% and 80% (that is, 0.60 ≤ p ≤ 0.80) for classroom achievement tests. Lower values indicate a difficult test, while higher values indicate an easy test.
(II) Item Discrimination: The item discrimination (or the item discrimination index) is a basic measure of the validity of an item. It is defined as the discriminating power, or the degree of an item's ability to discriminate (or differentiate) between high achievers (that is, those who scored high on the total test) and low achievers (that is, those who scored low), where both groups are determined on the same criterion, that is, (1) an internal criterion, for example, the test itself; or (2) an external criterion, for example, an intelligence test or another achievement test. Further, the computation of the item discrimination index assumes that the distribution of test scores is normal and that there is a normal distribution underlying the right-or-wrong dichotomy of a student's performance on an item. For details on the item discrimination index, see, for example, Kelly (1939), Wood (1960), Henrysson (1971), Nunnally (1972), Ebel (1979), Popham (1981), Ebel & Frisbie (1986), Wiersma & Jurs (1990), Glass & Hopkins (1995), Brown (1996), Chase (1999), Haladyna (1999), Nitko (2001), Tanner (2001), Oosterhof (2001), Haladyna et al. (2002), and Mertler (2003), among others. There are several ways to compute the item discrimination, but, as shown on the ParSCORE™ item analysis report and as reported in the literature, the following formulas are the most commonly used indicators of an item's discrimination effectiveness.
(a) Item Discrimination Index (or Item Discriminating Power, or D-Statistic), D: Let the students' test scores be rank-ordered from lowest to highest. Let

p_U = (number of students in the upper 25%-30% group answering the item correctly) / (total number of students in the upper 25%-30% group),

and

p_L = (number of students in the lower 25%-30% group answering the item correctly) / (total number of students in the lower 25%-30% group).
The ParSCORE™ item analysis report considers the upper 27% and the lower 27% as the analysis groups. The item discrimination index, D, is given by

D = p_U − p_L.

Note that
(i) −1 ≤ D ≤ +1.
(ii) Items with positive values of D are known as positively discriminating items, and those with negative values of D are known as negatively discriminating items.
(iii) If D = 0, that is, p_U = p_L, there is no discrimination between the upper and lower groups.
(iv) If D = +1.00, that is, p_U = 1.00 and p_L = 0, there is a perfect discrimination between the two groups.
(v) If D = −1.00, that is, p_U = 0 and p_L = 1.00, all members of the lower group answered the item correctly and all members of the upper group answered the item incorrectly. This indicates the invalidity of the item, that is, the item has been miskeyed and needs to be rewritten or eliminated.
(vi) A guideline for the value of the item discrimination index is provided in the following table; see, for example, Chase (1999) and Mertler (2003), among others.

Item Discrimination Index, D | Quality of an Item
D ≥ 0.50 | Very Good Item; Definitely Retain
0.40 ≤ D ≤ 0.49 | Good Item; Very Usable
0.30 ≤ D ≤ 0.39 | Fair Quality; Usable Item
0.20 ≤ D ≤ 0.29 | Potentially Poor Item; Consider Revising
D < 0.20 | Potentially Very Poor; Possibly Revise Substantially, or Discard
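As a concrete illustration of D = p_U − p_L and the guideline table, the following Python sketch (the helper functions are illustrative, not part of ParSCORE™) computes D from upper- and lower-group counts and assigns a quality label:

```python
def discrimination_index(upper_correct, upper_total, lower_correct, lower_total):
    """Item discrimination index D = p_U - p_L from group counts."""
    return upper_correct / upper_total - lower_correct / lower_total

def classify(d):
    """Quality label per the guideline table above (abbreviated)."""
    if d >= 0.50:
        return "Very Good Item"
    if d >= 0.40:
        return "Good Item"
    if d >= 0.30:
        return "Fair Quality"
    if d >= 0.20:
        return "Potentially Poor Item"
    return "Potentially Very Poor"
```

For example, if 9 of 10 upper-group students and 4 of 10 lower-group students answer an item correctly, D = 0.9 − 0.4 = 0.5, a very good item.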
(b) Mean Item Discrimination Index, D̄: This is the average discrimination index for all test items combined. A large positive value (above 0.30) indicates good discrimination between the upper- and lower-scoring students. Tests that do not discriminate well are generally not very reliable and should be reviewed.
(c) Point-Biserial Correlation (or Item-Total Correlation, or Item Discrimination) Coefficient, r_pbis: The point-biserial correlation coefficient is another item discrimination index for assessing the usefulness (or validity) of an item as a measure of individual differences in knowledge, skill, ability, attitude, or personality characteristic. It is defined as the correlation between student performance on an item (correct or incorrect) and the overall test score, and is given by either of the following two equations (which are mathematically equivalent).

(i) Suen (1990); DeVellis (1991); Haladyna (1999):

r_pbis = [(X̄_C − X̄_T) / s] √(p/q),

where r_pbis = the point-biserial correlation coefficient; X̄_C = the mean total score for examinees who have answered the item correctly; X̄_T = the mean total score for all examinees; p = the difficulty value of the item; q = 1 − p; and s = the standard deviation of total exam scores.
(ii) Brown (1996):

r_pbis = [(m_p − m_q) / s] √(pq),

where r_pbis = the point-biserial correlation coefficient; m_p = the mean total score for examinees who have answered the item correctly; m_q = the mean total score for examinees who have answered the item incorrectly; p = the difficulty value of the item; q = 1 − p; and s = the standard deviation of total exam scores.
Note that
(i) The interpretation of the point-biserial correlation coefficient, r_pbis, is the same as that of the D-statistic.
(ii) It assumes that the distribution of test scores is normal and that there is a normal distribution underlying the right-or-wrong dichotomy of a student's performance on an item.
(iii) It is mathematically equivalent to the Pearson (product-moment) correlation coefficient, which can be shown by assigning two distinct numerical values to the dichotomous variable (test item), that is, incorrect = 0 and correct = 1.
(iv) −1 ≤ r_pbis ≤ +1.
(v) r_pbis ≈ 0 means little correlation between the score on the item and the score on the test.
(vi) A high positive value of r_pbis indicates that the examinees who answered the item correctly also received higher scores on the test than those examinees who answered the item incorrectly.
(vii) A negative value indicates that the examinees who answered the item correctly received low scores on the test, and those examinees who answered the item incorrectly did better on the test. It is advisable that an item with r_pbis ≈ 0 or with a large negative value of r_pbis be eliminated or revised. Also, an item with a low positive value of r_pbis should be revised for improvement.
(viii) Generally, the value of r_pbis for an item may be put into two categories, as provided in the following table.

Point-Biserial Correlation Coefficient, r_pbis | Quality
r_pbis ≥ 0.30 | Acceptable Range
r_pbis ≈ 1 | Ideal Value
(ix) The statistical significance of the point-biserial correlation coefficient, r_pbis, may be determined by applying the Student's t-test (for details, see, for example, Triola, 2007, among others).
Remark: It should be noted that the use of the point-biserial correlation coefficient, r_pbis, is more advantageous than that of the item discrimination index statistic, D, because every student taking the test is taken into consideration in the computation of r_pbis, whereas only 54% of the test-takers (that is, the upper 27% + the lower 27% groups) are used to compute D.
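The two point-biserial formulas can be checked against each other numerically. Below is a minimal Python sketch (the function name and sample data are illustrative); both forms use the same standard deviation s of the total scores, here taken as the population standard deviation:

```python
import math

def point_biserial(item, totals):
    """Point-biserial r for a 0/1-scored item against total test scores.

    Computes both textbook forms given above and checks that they agree;
    s is the population standard deviation of the total scores.
    """
    n = len(totals)
    c = sum(item)                                  # number answering correctly
    p, q = c / n, 1 - c / n                        # item difficulty and q = 1 - p
    mean_all = sum(totals) / n                     # mean total score, all examinees
    s = math.sqrt(sum((x - mean_all) ** 2 for x in totals) / n)
    m_p = sum(t for i, t in zip(item, totals) if i) / c          # correct group mean
    m_q = sum(t for i, t in zip(item, totals) if not i) / (n - c)  # incorrect group mean
    r_suen = (m_p - mean_all) / s * math.sqrt(p / q)   # form (i)
    r_brown = (m_p - m_q) / s * math.sqrt(p * q)       # form (ii)
    assert abs(r_suen - r_brown) < 1e-12               # mathematically equivalent
    return r_suen
```

For example, for item responses [1, 1, 1, 0, 0] with total scores [25, 22, 20, 15, 12], both forms give r_pbis ≈ 0.92.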
(d) Mean Item-Total Correlation Coefficient, r̄_pbis: This is defined as the average correlation of all the test items with the total score. It is a measure of overall test discrimination. A large positive value indicates good discrimination between students.
(III) Internal Consistency Reliability Coefficient (Kuder-Richardson 20, KR-20, Reliability Estimate): The statistic that measures the test reliability of inter-item consistency, that is, how well the test items are correlated with one another, is called the internal consistency reliability coefficient of the test. For a test having multiple-choice items that are scored correct or incorrect, and that is administered only once, the Kuder-Richardson formula 20 (also known as KR-20) is used to measure the internal consistency reliability of the test scores (see, for example, Nunnally, 1972; and Haladyna, 1999; among others). The KR-20 is also reported in the ParSCORE™ item analysis. It is given by the following formula:

KR-20 = [n / (n − 1)] [1 − (Σ_{i=1}^{n} p_i q_i) / s²],

where KR-20 = the reliability index for the total test; n = the number of items in the test; s² = the variance of the test scores; p_i = the difficulty value of the i-th item; and q_i = 1 − p_i.
Note that

(i) 0.0 ≤ KR-20 ≤ 1.0.
(ii) KR-20 ≈ 0 indicates a weak relationship between test items, that is, the overall test score is less reliable. A large value of KR-20 indicates high reliability.
(iii) Generally, the value of KR-20 for a test may be put into the categories provided in the table below.

KR-20 | Quality
KR-20 ≥ 0.60 | Acceptable Range
KR-20 ≥ 0.75 | Desirable
0.80 ≤ KR-20 ≤ 0.85 | Better
KR-20 ≈ 1 | Ideal Value
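As a worked sketch of the KR-20 formula (Python; the function and the small score matrix are illustrative, not tied to the ParSCORE™ report):

```python
def kr20(item_matrix):
    """KR-20 reliability for dichotomously scored items.

    `item_matrix` is a list of examinee response vectors (1 = correct,
    0 = incorrect); uses the sample variance of the total scores.
    """
    n_items = len(item_matrix[0])
    totals = [sum(row) for row in item_matrix]
    m = len(totals)
    mean = sum(totals) / m
    s2 = sum((t - mean) ** 2 for t in totals) / (m - 1)   # variance of test scores
    pq = 0.0
    for j in range(n_items):
        p = sum(row[j] for row in item_matrix) / m        # item difficulty p_i
        pq += p * (1 - p)                                 # accumulate p_i * q_i
    return (n_items / (n_items - 1)) * (1 - pq / s2)
```

For the five-examinee, four-item matrix used in the test below, Σ p_i q_i = 0.8, s² = 2.5, and KR-20 = (4/3)(1 − 0.32) ≈ 0.907.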
(iv) Remarks: The reliability of a test can be improved as follows:

a) By increasing the number of items in the test, for which the following Spearman-Brown prophecy formula is used (Mertler, 2003):

r_est = n r / [1 + (n − 1) r],

where r_est = the estimated new reliability coefficient; r = the original KR-20 reliability coefficient; and n = the number of times the test is lengthened.

b) By using items that have high discrimination values in the test.

c) By performing an item-total statistics analysis as described above.
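The prophecy formula in a) can be sketched in one line of Python (the function name is illustrative):

```python
def spearman_brown(r, n):
    """Projected reliability r_est when a test with reliability r is
    lengthened to n times its original number of items (Spearman-Brown)."""
    return n * r / (1 + (n - 1) * r)
```

For example, doubling a test with r = 0.50 projects r_est = 2(0.50) / (1 + 0.50) ≈ 0.67.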
(IV) Standard Error of Measurement (SE_m): The standard error of measurement is another important component of test item analysis for measuring the internal consistency reliability of a test (see, for example, Nunnally, 1972; and Mertler, 2003; among others). It is given by the following formula:

SE_m = s √(1 − KR-20), 0.0 ≤ KR-20 ≤ 1.0,

where SE_m = the standard error of measurement; s = the standard deviation of the test scores; and KR-20 = the reliability coefficient for the total test.
Note that

(i) SE_m = 0 when KR-20 = 1.
(ii) SE_m = s when KR-20 = 0.
(iii) A small value of SE_m (e.g., < 3) indicates high reliability, whereas a large value of SE_m indicates low reliability.
(iv) Remark: A higher reliability coefficient (i.e., KR-20 ≈ 1) and a smaller standard deviation for a test indicate a smaller standard error of measurement. This is considered a more desirable situation for classroom tests.
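A minimal Python sketch of SE_m (the function name is illustrative):

```python
import math

def standard_error_of_measurement(s, kr20):
    """SE_m = s * sqrt(1 - KR-20), for 0.0 <= KR-20 <= 1.0."""
    return s * math.sqrt(1.0 - kr20)
```

For example, for Version B in Table 1 (s = 5.75, KR-20 = 0.90), this gives SE_m = 5.75 √0.10 ≈ 1.82, matching the reported value.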
(V) Test Item Distractor Analysis: Distractor analysis is an important and useful component of test item analysis. A test item distractor is defined as an incorrect response option in a multiple-choice test item. Research suggests a relationship between the quality of the distractors in a test item and student performance on that item, which in turn affects the student's total test score. The performance of these incorrect item response options can be determined through the test item distractor analysis frequency table, which contains the frequency, or number of students, that selected each incorrect option. The test item distractor analysis is also provided in the ParSCORE™ item analysis report. For details on test item distractor analysis, see, for example, Thompson & Levitov (1985), DeVellis (1991), Millman & Greene (1993), Haladyna (1999), and Mertler (2003), among others. A general guideline for the item distractor analysis is provided in the following table:
Item Response Options | Item Difficulty p | Item Discrimination Index D or r_pbis
Correct Response | 0.35 ≤ p ≤ 0.85 (Better) | D ≥ 0.30 or r_pbis ≥ 0.30 (Better)
Distractors | p ≥ 0.02 (Better) | D ≤ 0 or r_pbis ≤ 0 (Better)
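A distractor frequency table of the kind described above can be sketched as follows (Python; the helper and data are illustrative and do not reproduce the ParSCORE™ report format):

```python
from collections import Counter

def distractor_table(responses, key):
    """Frequency of each response option on one item, labeling the keyed
    answer 'correct' and every other selected option 'distractor'."""
    return {option: (count, "correct" if option == key else "distractor")
            for option, count in Counter(responses).items()}
```

For example, `distractor_table(list("AABCDDAA"), "A")` reports that option A (the key) was chosen 4 times and distractor D twice; a distractor chosen by almost no one contributes little to the item.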
(v) Mean: The mean is a measure of central tendency and gives the average test score of a sample of respondents (examinees). It is given by

x̄ = (Σ_{i=1}^{n} x_i) / n,

where x_i = individual test score and n = number of respondents.
(vi) Median: If all scores are ranked from lowest to highest, the median is the middle score. Half of the scores will be lower than the median. The median is also known as the 50th percentile or the 2nd quartile.

(vii) Range of Scores: The range is defined as the difference between the highest and lowest test scores. It is a basic measure of variability.
(viii) Standard Deviation: For a sample of n examinees, the standard deviation, denoted by s, of the test scores is given by

s = √[ Σ_{i=1}^{n} (x_i − x̄)² / (n − 1) ],

where x_i = individual test score and x̄ = average test score. The standard deviation is a measure of variability, or the spread of the score distribution. It measures how far the scores deviate from the mean. If the scores are grouped closely together, the test will have a small standard deviation. A test with a large value of the standard deviation is considered better at discriminating among student performance levels.
(ix) Variance: For a sample of n examinees, the variance, denoted by s², of the test scores is defined as the square of the standard deviation, and is given by

s² = Σ_{i=1}^{n} (x_i − x̄)² / (n − 1).
(x) Skewness: For a sample of n examinees, the skewness, denoted by β₃, of the distribution of the test scores is given by

β₃ = [n / ((n − 1)(n − 2))] Σ_{i=1}^{n} [(x_i − x̄) / s]³,

where x_i = individual test score, x̄ = average test score, and s = standard deviation of test scores. It measures the lack of symmetry of the distribution. The skewness is 0 for a symmetric distribution and is negative or positive depending on whether the distribution is negatively skewed (has a longer left tail) or positively skewed (has a longer right tail).
(xi) Kurtosis: For a sample of n examinees, the kurtosis, denoted by β₄, of the distribution of the test scores is given by

β₄ = {[n(n + 1) / ((n − 1)(n − 2)(n − 3))] Σ_{i=1}^{n} [(x_i − x̄) / s]⁴} − 3(n − 1)² / [(n − 2)(n − 3)] + 3,

where x_i = individual test score, x̄ = average test score, and s = standard deviation of test scores. It measures the tail-heaviness (the amount of probability in the tails). For the normal distribution, β₄ = 3. Thus, depending on whether β₄ > 3 or β₄ < 3, a distribution is heavier tailed or lighter tailed than the normal distribution.
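The moment formulas above can be sketched in Python as follows (a minimal illustration; the kurtosis line follows the adjusted-moment form given above, which equals 3 for a normal distribution):

```python
import math

def descriptives(scores):
    """Mean, sample standard deviation, skewness (beta_3), and kurtosis
    (beta_4) of test scores, per the sample-moment formulas given above."""
    n = len(scores)
    mean = sum(scores) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in scores) / (n - 1))
    z3 = sum(((x - mean) / s) ** 3 for x in scores)   # sum of cubed z-scores
    z4 = sum(((x - mean) / s) ** 4 for x in scores)   # sum of fourth powers
    skew = n / ((n - 1) * (n - 2)) * z3
    kurt = (n * (n + 1) / ((n - 1) * (n - 2) * (n - 3))) * z4 \
        - 3 * (n - 1) ** 2 / ((n - 2) * (n - 3)) + 3
    return mean, s, skew, kurt
```

For the symmetric sample [1, 2, 3, 4, 5], this gives x̄ = 3, s = √2.5, β₃ = 0, and β₄ = 1.8 (lighter tailed than normal).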
4. Results of the Research

This section consists of four parts, which are described below.

4.1 Test Item Analysis of 20071 MAT0024 Versions A and B State Exit Final Exams

An item analysis of the data obtained from my Fall 2007-1 MAT0024 class State Exit Final Exam items (Versions A and B) is presented here, based upon the classical test theory (CTT). Various test item statistics and relevant statistical graphs (for both test forms, Versions A and B), computed using the ParSCORE™ item analysis report and the Minitab software, are summarized in Tables 1-5 below. Each version consisted of 30 items. There were two different groups of 7 students for each version.
It appears from these statistical analyses that the large value of KR-20 (= 0.90 ≈ 1) for Version B indicates its high reliability in comparison to Version A. This is also substantiated by the large positive values of Mean D.I. = 0.450 and Mean Pt. Bis. r = 0.4223, the small value of the standard error of measurement (that is, SEM = 1.82), and an ideal value of the mean (that is, μ = 19.57 > 18, the passing score) for Version B. These analyses are also evident from the bar charts and scatter plots drawn for the various test item statistics using Minitab, that is, item difficulty (p), item discrimination index (D), and point-biserial correlation coefficient (r_pbis), which are presented below in Figures 1 and 2.
The results indicate a definite correlation between item difficulty level and item discrimination index. For example, as the item difficulty level increases, the item discrimination index (D or r) also increases. However, there is an optimum range of item difficulty, that is, 40%-70% in Version A and 40%-50% in Version B, beyond which the item discrimination index (D or r) starts decreasing. This means that, outside these ranges, the test items did not separate the high scorers from the low scorers and did not have good, effective discriminating power.
Filter for Selecting, Rejecting and Modifying Test Items: The analysis also indicated two extremes, that is, test items which were too easy (with an item difficulty level of 100%) and too difficult (with an item difficulty level of 0%). This implies that these test items did not have effective discriminating power between students of different abilities (that is, between high achievers and low achievers). This process may be used for the selection, rejection and modification of test items (Figures 1 and 2).
Table 1
A Comparison of 20071 MAT0024 Ver. A and B State Exit Test Items

Exam Version | KR-20 Reliability | Mean | SD | SEM | p < 0.3 | 0.3 ≤ p ≤ 0.7 | p > 0.7 | D > 0.2
A | 0.53 | 17.14 | 2.80 | 1.92 | 8 | 10 | 12 | 14
B | 0.90 | 19.57 | 5.75 | 1.82 | 1 | 15 | 14 | 20

Exam Version | Mean D.I. | Mean Pt. Bis. r
A | 0.233 | 0.2060
B | 0.450 | 0.4223
Table 2
MAT0024_2007_1_Ver_A

Data Display

Row  PU   PL   Disc. Ind. (D)  Difficulty (p)  Difficulty (p) %  Pt-Bis (r)
 1   1.0  0.0   1.0   0.4286   42.86   0.78
 2   1.0  1.0   0.0   0.8571   85.71   0.02
 3   1.0  0.5   0.5   0.8571   85.71   0.46
 4   1.0  0.0   1.0   0.5714   57.14   0.66
 5   1.0  0.0   1.0   0.5714   57.14   0.77
 6   1.0  0.0   1.0   0.7143   71.43   0.82
 7   0.5  0.0   0.5   0.5714   57.14   0.56
 8   1.0  1.0   0.0   1.0000  100.00   0.00
 9   0.0  0.5  -0.5   0.1429   14.29  -0.46
10   0.5  0.5   0.0   0.4286   42.86   0.27
11   0.5  0.5   0.0   0.4286   42.86  -0.15
12   1.0  1.0   0.0   1.0000  100.00   0.00
13   1.0  1.0   0.0   1.0000  100.00   0.00
14   0.0  0.0   0.0   0.0000    0.00   0.00
15   1.0  0.5   0.5   0.5714   57.14   0.25
16   1.0  0.5   0.5   0.7143   71.43   0.37
17   1.0  0.5   0.5   0.8571   85.71   0.60
18   1.0  1.0   0.0   1.0000  100.00   0.00
19   1.0  1.0   0.0   1.0000  100.00   0.00
20   1.0  0.5   0.5   0.8571   85.71   0.46
21   1.0  0.5   0.5   0.8571   85.71   0.46
22   0.5  0.5   0.0   0.5714   57.14  -0.16
23   0.0  0.5  -0.5   0.1429   14.29  -0.46
24   0.5  1.0  -0.5   0.5714   57.14  -0.27
25   0.0  0.0   0.0   0.2857   28.57   0.08
26   0.0  0.0   0.0   0.1429   14.29  -0.02
27   1.0  0.5   0.5   0.4286   42.86   0.37
28   0.5  0.0   0.5   0.1429   14.29   0.71
29   0.5  0.0   0.5   0.2857   28.57   0.53
30   0.0  0.5  -0.5   0.1429   14.29  -0.46
Table 3
Descriptive Statistics: MAT0024_2007_1_Ver_A
Variable | Mean | SE Mean | StDev | Variance | Minimum | Q1 | Median | Q3 | Maximum
Disc. Ind. (D) | 0.2333 | 0.0821 | 0.4498 | 0.2023 | -0.5000 | 0.0000 | 0.0000 | 0.5000 | 1.0000
Difficulty (p) | 0.5714 | 0.0573 | 0.3139 | 0.0985 | 0.0000 | 0.2857 | 0.5714 | 0.8571 | 1.0000
Difficulty (p) % | 57.14 | 5.73 | 31.39 | 985.11 | 0.00 | 28.57 | 57.14 | 85.71 | 100.00
Pt-Bis (r) | 0.2063 | 0.0703 | 0.3850 | 0.1482 | -0.4600 | -0.0050 | 0.1650 | 0.5375 | 0.8200
Filter for Selecting, Rejecting and Modifying Test Items (Figure 1)

Figure 1
(Bar Charts and Scatter Plots for p, D, and r_pbis, Version A. Panels: Chart of Difficulty (p) %; Chart of Disc. Ind. (D); Chart of Pt-Bis (r); Scatterplot of Disc. Ind. (D) vs Difficulty (p) %; Scatterplot of Pt-Bis (r) vs Difficulty (p) %.)
Table 4
MAT0024_2007_1_Ver_B
Data Display

Row  PU   PL   Disc. Ind. (D)  Difficulty (p)  Difficulty (p) %  Pt-Bis (r)
 1   1.0  1.0   0.0   1.0000  100.00   0.00
 2   1.0  1.0   0.0   0.7143   71.43   0.06
 3   1.0  1.0   0.0   1.0000  100.00   0.00
 4   1.0  1.0   0.0   0.8571   85.71   0.11
 5   1.0  0.5   0.5   0.8571   85.71   0.54
 6   1.0  0.5   0.5   0.7143   71.43   0.67
 7   1.0  0.0   1.0   0.4286   42.86   0.92
 8   1.0  0.5   0.5   0.4286   42.86   0.37
 9   0.5  0.5   0.0   0.4286   42.86   0.42
10   1.0  0.0   1.0   0.4286   42.86   0.92
11   1.0  0.5   0.5   0.5714   57.14   0.69
12   1.0  1.0   0.0   1.0000  100.00   0.00
13   1.0  0.5   0.5   0.8571   85.71   0.32
14   0.5  0.0   0.5   0.4286   42.86   0.37
15   1.0  0.5   0.5   0.5714   57.14   0.54
16   0.5  0.0   0.5   0.5714   57.14   0.34
17   1.0  0.0   1.0   0.5714   57.14   0.69
18   1.0  1.0   0.0   1.0000  100.00   0.00
19   1.0  1.0   0.0   1.0000  100.00   0.00
20   1.0  0.5   0.5   0.8571   85.71   0.54
21   0.5  1.0  -0.5   0.8571   85.71  -0.39
22   1.0  0.5   0.5   0.7143   71.43   0.67
23   0.5  0.0   0.5   0.1429   14.29   0.67
24   1.0  0.0   1.0   0.4286   42.86   0.92
25   1.0  0.0   1.0   0.5714   57.14   0.44
26   1.0  0.0   1.0   0.4286   42.86   0.67
27   1.0  0.5   0.5   0.7143   71.43   0.06
28   0.5  0.0   0.5   0.1429   14.29   0.67
29   1.0  0.5   0.5   0.8571   85.71   0.54
30   1.0  0.0   1.0   0.4286   42.86   0.92
Table 5
Descriptive Statistics: MAT0024_2007_1_Ver_B

Variable | Mean | SE Mean | StDev | Variance | Minimum | Q1 | Median | Q3 | Maximum
Disc. Ind. (D) | 0.4500 | 0.0733 | 0.4015 | 0.1612 | -0.5000 | 0.0000 | 0.5000 | 0.6250 | 1.0000
Difficulty (p) | 0.6524 | 0.0458 | 0.2508 | 0.0629 | 0.1429 | 0.4286 | 0.6429 | 0.8571 | 1.0000
Difficulty (p) % | 65.24 | 4.58 | 25.08 | 628.81 | 14.29 | 42.86 | 64.29 | 85.71 | 100.00
Pt-Bis (r) | 0.4223 | 0.0628 | 0.3440 | 0.1183 | -0.3900 | 0.0600 | 0.4900 | 0.6700 | 0.9200
Filter for Selecting, Rejecting and Modifying Test Items (Figure 2)

Figure 2
(Bar Charts and Scatter Plots for p, D, and r_pbis, Version B. Panels: Chart of Difficulty (p) %; Chart of Disc. Ind. (D); Chart of Pt-Bis (r); Scatterplot of Disc. Ind. (D) vs Difficulty (p) %; Scatterplot of Pt-Bis (r) vs Difficulty (p) %.)
4.2 A Comparison of 2007-1 MAT0024 Ver. A and B State Exit Exams Performance

A Two-Sample T-Test: To identify whether there is a significant difference between the 2007-1 MAT0024 Versions A and B state exit exam performance of the students, a two-sample T-test was conducted using the Minitab and Statdisk software. First, the assumption of normality was checked using histograms and the Anderson-Darling test for both groups. The results are provided in Tables 6-7 and Figures 3-4 below. It is evident that the normality requirements are easily met. Moreover, at the significance level α = 0.05, the two-sample T-test fails to reject the claim that μ_A = μ_B; that is, the sample does not provide enough evidence to reject the claim.
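The reported T-value and degrees of freedom for the unequal-variances case (Table 7) can be reproduced from the summary statistics alone; below is a minimal Python sketch (the helper function is illustrative):

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample t statistic and Welch (Satterthwaite) degrees of freedom
    from summary statistics, assuming unequal variances."""
    v1, v2 = sd1 ** 2 / n1, sd2 ** 2 / n2          # squared standard errors
    t = (mean1 - mean2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Summary statistics for Versions A and B (Table 6)
t_ab, df_ab = welch_t(17.14, 3.02, 7, 19.57, 6.21, 7)
```

With the Table 6 summaries this yields t ≈ -0.93 and df ≈ 8.7 (reported as DF = 8).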
Figure 3
(Anderson-Darling Normality Tests for 2007-1 MAT0024 A & B Exit Exam Scores)
Figure 4
(Two-Sample T-Test for 2007-1 MAT0024 A & B Exit Exam Scores)
Table 6
Descriptive Statistics: 2007-1A, 2007-1B
Variable | Total Count | N | Mean | SE Mean | StDev | Variance | Minimum | Q1 | Median | Q3 | Maximum | Skewness | Kurtosis
2007-1A | 7 | 7 | 17.14 | 1.14 | 3.02 | 9.14 | 13.00 | 14.00 | 17.00 | 19.00 | 22.00 | 0.16 | -0.03
2007-1B | 7 | 7 | 19.57 | 2.35 | 6.21 | 38.62 | 12.00 | 15.00 | 18.00 | 25.00 | 29.00 | 0.40 | -1.31
Table 7
Two-Sample T-Test and CI: 2007-1A, 2007-1B (Assume Unequal Variances)
Two-sample T for 2007-1A vs 2007-1B

Sample | N | Mean | StDev | SE Mean
2007-1A | 7 | 17.14 | 3.02 | 1.1
2007-1B | 7 | 19.57 | 6.21 | 2.3

Difference = mu (2007-1A) - mu (2007-1B)
Estimate for difference: -2.42857
95% CI for difference: (-8.45211, 3.59497)
T-Test of difference = 0 (vs not =): T-Value = -0.93, P-Value = 0.380, DF = 8

Two-Sample T-Test and CI: 2007-1A, 2007-1B (Assume Equal Variances)

Two-sample T for 2007-1A vs 2007-1B

Sample | N | Mean | StDev | SE Mean
2007-1A | 7 | 17.14 | 3.02 | 1.1
2007-1B | 7 | 19.57 | 6.21 | 2.3

Difference = mu (2007-1A) - mu (2007-1B)
Estimate for difference: -2.42857
95% CI for difference: (-8.11987, 3.26273)
T-Test of difference = 0 (vs not =): T-Value = -0.93, P-Value = 0.371, DF = 12
Both use Pooled StDev = 4.8868
4.3 A Comparison of 2007-1 MAT0024 Classroom Test Average (Pre) vs State Exit Exam (Post) Performance

A Paired Samples T-Test: To identify whether there is a significant gain on the 2007-1 MAT0024 posttest (state exit exam) compared to the pretest (classroom test average) performance of the students, a paired samples T-test was conducted using the Minitab and Statdisk software. First, the assumption of normal distribution of the post, pre, and gain (post - pre) scores was checked using histograms (see Figure 5). The histograms suggest that the distributions are close to normal. To check whether the normality assumption for a paired samples T-test is met, the Kolmogorov-Smirnov and Ryan-Joiner (similar to Shapiro-Wilk) tests for the gain scores were conducted using Minitab. The results are provided in Tables 8-10 and Figure 5 below. It is evident that the normality requirements are easily met. Moreover, at the significance level α = 0.05, the paired samples T-test fails to reject the claim that μ_Post = μ_Pre; that is, the sample does not provide enough evidence to reject the claim.
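The paired-samples T statistic in Table 10 can be reproduced from the gain scores in Table 8; a minimal Python sketch:

```python
import math

# Gain scores (Post - Pre) transcribed from Table 8
gains = [-12.7, -13.2, 5.2, 5.3, 1.1, -3.5, -5.1,
         -4.6, -15.9, -8.4, -17.2, 20.1, -9.3, -5.7]

n = len(gains)
mean_gain = sum(gains) / n
sd_gain = math.sqrt(sum((g - mean_gain) ** 2 for g in gains) / (n - 1))
t_stat = mean_gain / (sd_gain / math.sqrt(n))   # paired-samples t, df = n - 1
```

This gives a mean gain of -4.564, sd ≈ 10.007, and t ≈ -1.71, matching the Minitab output in Table 10.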
[Figure: three histograms with fitted normal curves, titled "MAT0024: 2007-1 Classroom Test Aver (Pre) Vs State Exit Exam (Post)". Panels: 20071-Pre (Mean 65.76, StDev 11.66, N = 14); 20071-Post (Mean 61.19, StDev 16.21, N = 14); Gain (Mean -4.564, StDev 10.01, N = 14). Below them, two normal probability plots of the gain scores: Kolmogorov-Smirnov test (KS = 0.172, P-Value > 0.150) and Ryan-Joiner test (RJ = 0.957, P-Value > 0.100).]
Figure 5
TESTS FOR NORMALITY (MAT0024: 2007-1 Classroom Test Aver (Pre) Vs State Exit Exam (Post))
MAT0024 (2007-1)
Paired T-Test and CI: 20071-Post, 20071-Pre (Gain Score = Post – Pre)
[Statdisk output: hypothesis test for the mean difference, matched pairs.]
Figure 6
(Paired Samples T-Test: MAT0024 2007-1 Pre Vs Post (State Exit Exam))
Table 8
Data Display: MAT0024 (2007-1) 20071-Post, 20071-Pre (Gain Score = Post – Pre)
Row  20071-Pre  20071-Post   Gain
  1       69.4        56.7  -12.7
  2       63.2        50.0  -13.2
  3       54.8        60.0    5.2
  4       78.0        83.3    5.3
  5       75.6        76.7    1.1
  6       66.8        63.3   -3.5
  7       51.8        46.7   -5.1
  8       44.6        40.0   -4.6
  9       72.6        56.7  -15.9
 10       68.4        60.0   -8.4
 11       67.2        50.0  -17.2
 12       76.6        96.7   20.1
 13       82.6        73.3   -9.3
 14       49.0        43.3   -5.7
Table 9
MAT0024 (2007-1)
Descriptive Statistics: 20071-Post, 20071-Pre (Gain Score = Post – Pre)
Variable    Total Count   N   Mean  SE Mean  StDev  Variance  Minimum      Q1  Median
20071-Post           14  14  61.19     4.33  16.21    262.62    40.00   49.18   58.35
20071-Pre            14  14  65.76     3.12  11.66    136.01    44.60   54.05   67.80
Gain                 14  14  -4.56     2.67  10.01    100.14   -17.20  -12.83   -5.40

Variable       Q3  Maximum  Range    IQR  Skewness  Kurtosis
20071-Post  74.15    96.70  56.70  24.98      0.84      0.22
20071-Pre   75.85    82.60  38.00  21.80     -0.51     -0.80
Gain         2.13    20.10  37.30  14.95      1.10      1.56
Table 10
MAT0024 (2007-1)
Paired T-Test and CI: 20071-Post, 20071-Pre (Gain Score = Post – Pre)

Paired T for 20071-Post - 20071-Pre

              N     Mean     StDev  SE Mean
20071-Post   14  61.1929   16.2056   4.3311
20071-Pre    14  65.7571   11.6622   3.1169
Difference   14 -4.56429  10.00704  2.67450

95% CI for mean difference: (-10.34218, 1.21361)
T-Test of mean difference = 0 (vs not = 0): T-Value = -1.71  P-Value = 0.112
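The headline values in Table 10 follow directly from the gain column of Table 8. A short standard-library Python check:

```python
import math
import statistics

# Gain = Post - Pre for the 14 students (Table 8)
gains = [-12.7, -13.2, 5.2, 5.3, 1.1, -3.5, -5.1, -4.6,
         -15.9, -8.4, -17.2, 20.1, -9.3, -5.7]

n = len(gains)
mean_d = statistics.mean(gains)    # mean difference (d-bar)
sd_d = statistics.stdev(gains)     # sample standard deviation of the differences
se_d = sd_d / math.sqrt(n)         # standard error of the mean difference

# Paired-samples t statistic: t = d-bar / SE, with n - 1 = 13 degrees of freedom
t = mean_d / se_d

print(round(mean_d, 5), round(sd_d, 5), round(t, 2))
# -4.56429 10.00704 -1.71  (agrees with Table 10)
```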
4.4 A Comparison of MAT0024: 2006-1, 2006-2, 2007-1 State Exit Exams

To identify whether there is a significant difference in the students' performance on the MAT0024 2006-1, 2006-2, and 2007-1 State Exit Exams, a one-way analysis of variance was conducted using the Minitab and Statdisk software. First, the assumption of normality was checked for the three groups using histograms and the Anderson-Darling test. The results are provided in Tables 11-12 and Figures 7-9 below. It is evident that the normality requirements are easily met. Moreover, at the significance level α = 0.05, the data do not provide enough evidence to support the claim that the population means are unequal.
[Figure: normal probability plots with 95% CIs for the three groups, titled "Comparison of MAT0024: 2006-1, 2006-2, 2007-1 State Exit Exams". Anderson-Darling results: 2006-1 (Mean 16.36, StDev 5.187, N = 22, AD = 0.245, P-Value = 0.732); 2006-2 (Mean 18.57, StDev 4.939, N = 30, AD = 0.256, P-Value = 0.703); 2007-1 (Mean 18.36, StDev 4.861, N = 14, AD = 0.362, P-Value = 0.392).]
Figure 7
(Normality Tests: MAT0024 2006-1, 2006-2, 2007-1 State Exit Exams)
[Figure: three histograms with fitted normal curves, titled "Comparison of MAT0024: 2006-1, 2006-2, 2007-1 State Exit Exams". Panels: 2006-1 (Mean 16.36, StDev 5.187, N = 22); 2006-2 (Mean 18.57, StDev 4.939, N = 30); 2007-1 (Mean 18.36, StDev 4.861, N = 14).]
Figure 8
(Histograms: MAT0024 2006-1, 2006-2, 2007-1 State Exit Exams)
Table 11
One-way ANOVA MAT0024: 2006-1, 2006-2, 2007-1 State Exit Exams
Source  DF      SS    MS     F      P
Factor   2    67.4  33.7  1.34  0.268
Error   63  1579.7  25.1
Total   65  1647.0

S = 5.007   R-Sq = 4.09%   R-Sq(adj) = 1.04%

                           Individual 95% CIs For Mean Based on Pooled StDev
Level    N    Mean  StDev  ---------+---------+---------+---------+
2006-1  22  16.364  5.187  (----------*---------)
2006-2  30  18.567  4.939                (--------*--------)
2007-1  14  18.357  4.861        (-------------*------------)
                           ---------+---------+---------+---------+
                                 16.0      18.0      20.0      22.0
Pooled StDev = 5.007
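Since the raw scores behind Table 11 are not reprinted here, it is worth noting that the F statistic can be recovered from the per-group summaries (sample size, mean, standard deviation) alone. A Python sketch:

```python
# One-way ANOVA reconstructed from per-group summaries (Table 11):
# (n, mean, stdev) for the 2006-1, 2006-2, and 2007-1 groups
groups = [(22, 16.364, 5.187), (30, 18.567, 4.939), (14, 18.357, 4.861)]

N = sum(n for n, _, _ in groups)                    # 66 students in total
k = len(groups)                                     # 3 groups
grand_mean = sum(n * m for n, m, _ in groups) / N

# Between-group and within-group sums of squares
ss_between = sum(n * (m - grand_mean) ** 2 for n, m, _ in groups)
ss_within = sum((n - 1) * s**2 for n, _, s in groups)

ms_between = ss_between / (k - 1)   # df = 2
ms_within = ss_within / (N - k)     # df = 63
f_stat = ms_between / ms_within

print(round(ss_between, 1), round(ss_within, 1), round(f_stat, 2))
# 67.3 1579.6 1.34  (Table 11 shows 67.4 and 1579.7 from the raw data)
```

The tiny differences in the sums of squares come from using the rounded summary values; the F statistic and conclusion are unchanged.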
Table 12
One-way ANOVA (Analysis of Variance) MAT0024: 2006-1, 2006-2, 2007-1 State Exit Exams
[Statdisk output: one-way analysis of variance, hypothesis test.]
Figure 9
(One-way ANOVA: MAT0024 2006-1, 2006-2, 2007-1 State Exit Exams)
5. Concluding Remarks
This paper discusses classroom assessment and action research, two of the most crucial components of the teaching and learning process. Student performance on the State Exit Final Exams of MAT0024 classes has been investigated using test item analysis. By conducting a test item analysis of the State Exit Final Exams of some of my MAT0024 classes, this project examines how well these exams distinguish among students according to how well they met the learning goals of these classes.
It is hoped that the present study will be helpful in recognizing the most critical pieces of the state exit test item data and in evaluating whether or not a test item needs revision. The methods discussed in this project can be used to describe the relevance of test item analysis to classroom tests. These procedures can also be used or modified to measure, describe, and improve tests or surveys such as college mathematics placement exams (that is, the CPT), mathematics study skills and attitude surveys, test anxiety inventories, information literacy assessments, and other general education learning outcomes.
Further research based on Bloom's cognitive taxonomy of test items, the applicability of Beta-Binomial models and Bayesian analysis of test items, and item response theory (IRT) using the 1-parameter logistic model (also known as the Rasch model), the 2- and 3-parameter logistic models, plots of the item characteristic curves (ICCs) of different test items, and other characteristics of IRT measurement instruments is under investigation by the present author and will be reported in due course.
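For readers unfamiliar with the IRT models just mentioned, the 1-parameter logistic (Rasch) model gives the probability of a correct response as P(θ) = 1 / (1 + e^-(θ - b)), where θ is the examinee's ability and b the item's difficulty; an item characteristic curve is this probability traced over θ. A minimal illustration (not part of the present analysis):

```python
import math

def rasch_icc(theta: float, b: float) -> float:
    """Rasch (1-PL) probability of a correct response for an
    examinee of ability theta on an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals item difficulty, the probability is exactly 0.5
print(rasch_icc(0.0, 0.0))  # 0.5

# An easier item (b = -1) yields a higher success probability at the same ability
assert rasch_icc(0.0, -1.0) > rasch_icc(0.0, 0.0)
```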
Finally, this action research project has given me new directions regarding the needs of my students in MAT0024 and other mathematics classes. It has helped me learn about their learning styles, individual differences, and abilities. It has also given me insight into constructing valid and reliable tests and exams for greater student success and achievement in my math classes. The project has provided me with input for coordinating with my colleagues in mathematics and other disciplines at the Hialeah Campus, as well as college-wide, to identify methods to improve classroom practices through test item analysis and action research, in order to enhance student success and achievement in the classroom and, later, in students' lives, goals that are also reflected in the MDC QEP and General Education Learning Outcomes.
Acknowledgments
I would like to express my sincere gratitude and thanks to the LAS Chair, Dr. Caridad Castro, the Academic Dean, Dr. Ana Maria Bradley-Hess, and the President, Dr. Cindy Miles, of Miami Dade College, Hialeah Campus, for their continued encouragement, support, and patronage. I would like to thank the Hialeah Campus College Prep Lab Coordinator, Professor Javier Dueñas, and the Lab Instructor, Mr. Cesar Rueda, for their kind support and cooperation in providing me with the ParSCORE™ item analysis reports on the MAT0024 State Exit Final Exams. I am also thankful to Dr. Hanadi Saleh, MDC CT&D Instructional Designer/Trainer, Hialeah Campus, for her valuable comments, suggestions, and contributions to the PowerPoint, which considerably improved the quality of this presentation. I would also like to acknowledge my sincere indebtedness to the works of the various authors and resources on the subject which I consulted during the preparation of this research project. Last but not least, I am thankful to the authorities of Miami Dade College for giving me the opportunity to present this paper on MDC Conference Day.
References
Angelo, T. A. and Cross, K. P. (1993). Classroom Assessment Techniques – A Handbook for College Teachers. Jossey-Bass, San Francisco.
Agresti, A. and Finlay, B. (1997). Statistical Methods for the Social Sciences. Prentice Hall, Upper Saddle River, NJ.
Ausubel, D. P. (1968). Educational Psychology: A Cognitive View. Holt, Rinehart & Winston, Troy, Mo.
Barazangi, N. H. (2006). An ethical theory of action research pedagogy. Action Research, 4(1), 97-116.
Bloom, B. S. (1956). Taxonomy of Educational Objectives, Handbook I: The Cognitive Domain. David McKay Co., Inc., New York.
Bloom, B. S., Hastings, J. T. and Madaus, G. F. (1971). Handbook on Formative and Summative Evaluation of Student Learning. McGraw-Hill, New York.
Brown, H. D. (1994). Teaching by Principles: An Interactive Approach to Language Pedagogy. Prentice Hall, Englewood Cliffs, NJ.
Brown, J. D. (1996). Testing in language programs. Prentice Hall, Upper Saddle River, NJ.
Brydon-Miller, M., Greenwood, D. and Maguire, P. (2003). Why Action Research?. Action Research, 1(1), 9-28.
Chase, C. I. (1999). Contemporary assessment for educators. Longman, New York.
Crocker, L. and Algina, J. (1986). Introduction to classical and modern test theory. Holt, Rinehart and Winston, New York.
DeVellis, R. F. (1991). Scale development: Theory and applications. Sage Publications, Newbury Park.
Dick, B. (2004). Action research literature: Themes and trends. Action Research, 2(4), 425-444.
Duncan, G. T. (1974). An empirical Bayes approach to scoring multiple-choice tests in the misinformation model. Journal of the American Statistical Association, 69(345), 50-57.
Ebel, R. L. (1979). Essentials of educational measurement (3rd ed). Prentice Hall, Englewood Cliffs, NJ.
Ebel, R. L. and Frisbie, D. A. (1986). Essentials of educational measurement. Prentice-Hall, Inc, Englewood Cliffs, NJ.
Ebbutt (1985). Educational action research: Some general concerns and specific quibbles. In Burgess, R. (ed.), Issues in educational research: Qualitative methods. Falmer Press, Lewes.
Elliott, J. (1991). Action research for educational change. Open University Press, Philadelphia.
Elvin, C. (2003). Test Item Analysis Using Microsoft Excel Spreadsheet Program. The Language Teacher, 27(11), 13-18.
Elvin, C. (2004). My Students' DVD Audio and Subtitle Preferences for Aural English Study: An Action Research Project. Explorations in Teacher Education, 12(4), 3-17.
Gabel, D. (1995). An Introduction to Action Research. http://physicsed.buffalostate.edu/danowner/actionrsch.html
Gelman, A. (2006). Prior distributions for variance parameters in hierarchical models. Bayesian Analysis, 1(3), 515-533.
Glass, G. V. and Hopkins, K. D. (1995). Statistical Methods in Education and Psychology, 3rd edition. Allyn & Bacon, Boston.
Gleason, J. (2008). An evaluation of mathematics competitions using item response theory. Notices of the AMS, 55(1), 8-15.
Greenwood, D. J. and Lewin, M. (1998). Introduction to Action Research. Sage, London.
Greenwood, D. J. (2007). Teaching/learning action research requires fundamental reforms in public higher education. Action Research, 5(3), 249-264.
Gronlund, N. E. and Linn, R. L. (1990). Measurement and evaluation in teaching (6th ed). MacMillan, New York.
Gross, A. L. and Shulman, V. (1980). The applicability of the beta binomial model for criterion referenced testing. Journal of Educational Measurement, 17(3), 195-201.
Gulliksen, H. (1987). Theory of mental tests. Erlbaum, Hillsdale, NJ.
Gustavsen, B. (2003). New Forms of Knowledge Production and the Role of Action Research. Action Research, 1(2), 153-164.
Haladyna, T. M. (1999). Developing and validating multiple-choice test items, 2nd ed. Lawrence Erlbaum Associates, Mahwah, NJ.
Haladyna, T. M., Downing, S. M. and Rodriguez, M. C. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. Applied Measurement in Education, 15(3), 309-334.
Hambleton, R. K., Swaminathan, H. and Rogers, H. J. (1991). Fundamentals of Item Response Theory. Sage Press, Newbury Park, CA.
Henrysson, S. (1971). Gathering, analyzing, and using data on test items. In R. L. Thorndike (Ed.), Educational Measurement (p. 141). American Council on Education, Washington, DC.
Hopkins, D. (1985). A teacher's guide to classroom research. Open University Press, Philadelphia.
Hostetter, L. and Haky, J. E. (2005). A classification scheme for preparing effective multiple-choice questions based on item response theory. Florida Academy of Sciences, Annual Meeting, University of South Florida, March 2005 (cited in Hotiu, 2006).
Hotiu, A. (2006). The relationship between item difficulty and discrimination indices in multiple-choice tests in a physical science course. Master of Science Thesis, Charles E. Schmidt College of Science, Florida Atlantic University, Boca Raton, Florida.
Kelley, T. L. (1939). The selection of upper and lower groups for the validation of test items. Journal of Educational Psychology, 30, 17-24.
Kemmis, S. (1983). Action Research. In D. S. Anderson and C. Blakers (eds), Youth, Transition and Social Research. Australian National University, Canberra.
Krathwohl, D. R., Bloom, B. S. and Bertram, B. M. (1973). Taxonomy of Educational Objectives, the Classification of Educational Goals. Handbook II: Affective Domain. David McKay Co., Inc., New York.
Lewin, K. (1946). Action research and minority problems. Journal of Social Issues, 2, 34-46.
Lord, F. M. (1952). The Relationship of the Reliability of Multiple-Choice Test to the Distribution of Item Difficulties. Psychometrika, 18, 181-194.
Lord, F. M. and Novick, M. R. (1968). Statistical Theories of Mental Test Scores. Addison-Wesley, Reading, MA.
Lord, F. M. (1980). Applications of item response theory to practical testing problems. Lawrence Erlbaum Associates, Inc, New Jersey.
Mertler, C. A. (2003). Classroom Assessment – A Practical Guide for Educators. Pyrczak Publishing, Los Angeles, CA.
Millman, J. and Greene, J. (1993). The specification and development of tests of achievement and ability. In R. L. Linn (Ed.), Educational measurement (pp. 335-366). Oryx Press, Phoenix, AZ.
Nitko, A. J. (2001). Educational assessment of students (3rd edition). Prentice Hall, Upper Saddle River, NJ.
Nunan, D. (1992). Research Methods in Language Learning. Cambridge University Press, Cambridge.
Nunnally, J. C. (1972). Educational measurement and evaluation (2nd ed). McGraw-Hill, New York.
Nunnally, J. C. (1978). Psychometric Theory, Second Edition. McGraw-Hill, New York.
Oosterhof, A. (2001). Classroom applications for educational measurement. Merrill Prentice Hall, Upper Saddle River, NJ.
Pedhazur, E. J. and Schmelkin, L. P. (1991). Measurement, design, and analysis: An integrated approach. Erlbaum, Hillsdale, NJ.
Popham, W. J. (1981). Modern educational measurement. Prentice-Hall, Englewood Cliffs, NJ.
Rapoport, R. (1970). Three dilemmas in action research. Human Relations, 23(6), 499-513.
Rasch, G. (1960/1980). Probabilistic models for some intelligence and attainment tests. (Copenhagen, Danish Institute for Educational Research), expanded edition (1980) with foreword and afterword by B. D. Wright. The University of Chicago Press, Chicago.
Richards, J. C., Platt, J. and Platt, H. (1992). Dictionary of Language Teaching and Applied Linguistics, Second Edition. Longman, London.
Sax, G. (1989). Principles of educational and psychological measurement and evaluation (3rd ed). Wadsworth, Belmont, CA.
Simpson, E. J. (1972). The Classification of Educational Objectives in the Psychomotor Domain. Gryphon House, Washington, DC.
Spearman, C. (1904). "General intelligence," objectively determined and measured. American Journal of Psychology, 15, 201-293.
Suen, H. K. (1990). Principles of test theories. Lawrence Erlbaum Associates, Hillsdale, NJ.
Tamhane, A. C. and Dunlop, D. D. (2000). Statistics and Data Analysis from Elementary to Intermediate. Prentice Hall, Upper Saddle River, NJ.
Tanner, D. E. (2001). Assessing academic achievement. Allyn & Bacon, Boston.
Taylor, P. and Pettit, J. (2007). Learning and teaching participation through action research: Experiences from an innovative masters programme. Action Research, 5(3), 231-247.
Thompson, B. and Levitov, J. E. (1985). Using microcomputers to score and evaluate test items. Collegiate Microcomputer, 3, 163-168.
Thorndike, R. M., Cunningham, G. K., Thorndike, R. L. and Hagen, E. P. (1991). Measurement and evaluation in psychology and education (5th ed). MacMillan, New York.
Triola, M. F. (2006). Elementary Statistics. Pearson Addison-Wesley, New York.
Van der Linden, W. J. and Hambleton, R. K. (Eds.) (1997). Handbook of modern item response theory. Springer, New York.
Wiersma, W. and Jurs, S. G. (1990). Educational measurement and testing (2nd ed). Allyn and Bacon, Boston, MA.
Wilcox, R. R. (1981). A review of the beta-binomial model and its extensions. Journal of Educational Statistics, 6(1), 3-32.
Wood, D. A. (1960). Test construction: Development and interpretation of achievement tests. Charles E. Merrill Books, Inc, Columbus, OH.
Wright, B. D. (1992). IRT in the 1990s: Which Models Work Best?. Rasch Measurement Transactions, 6(1), 196-200.