Introduction
This glossary of key assessment terms is designed to help you understand some of the terminology used in the assessment cycle. It was compiled with the help of numerous sources. Use the table of contents to navigate to particular terms.
Table of Contents
A
Accreditation
Accreditation is the formal establishment of the status, legitimacy, or appropriateness of an institution or program of study by an organization delegated to make that determination on behalf of the higher education sector. The primary accrediting body for the University of Denver is the Higher Learning Commission. However, certain programs within the university also have specialized accrediting bodies.
Artifact
An object created by students during the course of instruction. Artifacts help indicate mastery of a skill or component of knowledge. Artifacts are a form of evidence educators can use to tell the story of their classrooms and showcase their instructional practices as well as student growth.
Assessment
A systematic process for understanding and improving student learning. The ongoing process engages faculty, staff, and students at multiple points to ensure that evidence is analyzed in alignment with the institutional, program, and course level goals and outcomes in order to improve student learning and inform curricular and pedagogical decisions. (“NILOA Glossary”)
The process of collecting and analyzing data for the purpose of evaluation. The assessment of student learning involves describing, collecting, recording, scoring, and interpreting information about performance. A complete assessment of student learning should include measures with a variety of formats, as developmentally appropriate. Assessments and the tests they use are usually classified by how the data are used: formative, benchmark (interim), or summative.
Authentic Assessment
Assessment strategies that require students to directly reveal their ability to think critically and apply and synthesize their knowledge. A goal of authentic assessment is to determine if student knowledge can be applied outside of the classroom. Generally, authentic assessment:
- engages students and is grounded in content or media in which they have a genuine interest.
- asks students to synthesize information and use critical-thinking skills.
- is a learning experience in and of itself.
- measures not just what students remember but how they think.
- helps students understand where they are academically and helps teachers know how to best teach them.
B
Benchmarking
Benchmarking is a process that enables comparison of inputs, processes, or outputs between institutions (or parts of institutions) or within a single institution over time. A benchmark statement, in higher education, provides a reference point against which outcomes can be measured and refers to a particular specification of program characteristics and indicative standards.
Benchmarking tallies and tracks key, agreed-upon markers of accomplishment, usually to help students progress through a program and/or to demonstrate the program’s success. For example, a program might monitor how many students pass their qualifying exams within three years of entering the program, how many papers each student publishes, and/or whether each student completes coursework on time. A program sets out particular expectations and tracks how many students meet them, and when. This is a crucial process for ensuring that students meet program expectations and do not get stuck at certain stages. Benchmarking, then, is a way of tracking the progress of each student and, in aggregate, of demonstrating that a program has succeeded (or perhaps failed) in advancing its students appropriately.
C
Capstone
A culminating experience required of students nearing the end of a program. In the capstone, students create a project that integrates and applies what they have learned. The project might be a research paper, performance, portfolio, or artwork exhibition. Capstones can be offered in departmental programs as well as in general education.
D
Datum (Data)
Raw facts and figures submitted by or for you for the purpose of being analyzed, by or for you, into information. In common usage, however, the terms “data” and “information” are often used synonymously. Therefore, for assessment purposes, data will refer to the base facts and figures, and information will refer to the analyzed data.
Direct Measures
Direct measures require students to demonstrate their knowledge and skills. They provide tangible, visible, and self‐explanatory evidence of what students have and have not learned as a result of a course, program, or activity.
E
Evaluation
Evaluation includes both qualitative and quantitative descriptions of student behavior, plus value judgments concerning the desirability of that behavior. It is the process of using collected information (assessments) to make informed decisions about continued instruction, programs, and activities.
Educational Program
“A legally authorized postsecondary program of organized instruction or study that:
- Leads to an academic, professional, or vocational degree, or certificate, or other recognized educational credential, or is a comprehensive transition and postsecondary program, as described in 34 CFR part 668, subpart O; and
- May, in lieu of credit hours or clock hours as a measure of student learning, utilize direct assessment of student learning, or recognize the direct assessment of student learning by others, if such assessment is consistent with the accreditation of the institution or program utilizing the results of the assessment and with the provisions of 34 CFR § 668.10.”
HLC does not consider that an institution provides an educational program if the institution does not provide instruction itself (including a course of independent study) but merely gives credit for one or more of the following: Instruction provided by other institutions or schools; examinations or direct assessments provided by agencies or organizations; or other accomplishments such as “life experience.” “Educational program” is synonymous with HLC’s use of the terms “academic offering(s)” and “academic program(s).”
F
Formative Assessment
Formative assessments are measures that help shape student learning throughout a program. They are the types of measures faculty can use to give feedback and modify learning along the way.
Formative assessment is often done at the beginning of or during a program, thus providing immediate evidence of student learning in a particular course or at a particular point in a program. Classroom assessment is one of the most common formative assessment techniques. The purpose of this technique is to improve the quality of student learning by providing feedback throughout the developmental progression of learning. It can also lead to curricular modifications when specific courses have not met the student learning outcomes. Classroom assessment can also provide important program information when multiple sections of a course are taught, because it enables programs to examine whether the learning goals and objectives are met in all sections of the course. It can also improve instructional quality by engaging the faculty in the design and practice of the course goals and objectives and the course’s impact on the program.
H
High-Impact Practices (HIPs)
High-impact practices are educational opportunities that have been widely tested and shown to improve student success, especially among historically underserved students. George Kuh, founding director of the National Survey of Student Engagement (NSSE), found that these practices benefit students by connecting learning to life, fostering quality interaction between faculty and students, increasing the likelihood that students will experience diversity through contact with people different from themselves, and helping students understand themselves in relation to others in light of the larger world.
Kuh initially identified ten high-impact practices and later added e-portfolios. The list includes first-year seminars, learning communities, common intellectual experiences, undergraduate research, capstone courses, diversity/global learning, collaborative assignments and projects, e-portfolios, writing-intensive courses, service-learning, and internships. (Kuh)
Kuh, George D. High-Impact Educational Practices: What They Are, Who Has Access to Them, and Why They Matter. Association of American Colleges and Universities, 2008.
I
Indirect Measures
Assessments that measure opinions or thoughts about student or alumni knowledge, skills, attitudes, learning experiences, perceptions of services received, or employers’ opinions. While these types of measures are important and necessary, they do not measure student performance directly. They supplement direct measures of learning by providing information about how and why learning is occurring.
Information
Content conveyed or represented by a particular arrangement or sequence of facts and figures.
Institution
Institution is shorthand for institution of higher education (IHE), an educational institution that graduates students at the bachelor’s-degree level or above.
J
Joint Degree
A joint degree is a single degree awarded by more than one higher-education institution.
O
Outcomes
What you want students to know, understand, and be able to do after they complete a learning experience, usually demonstrated through a culminating activity, product, or performance that can be measured. There are different levels of outcomes:
Student Learning Outcomes (SLOs)
These outcomes are connected to student learning at the course level. These are measured throughout a particular course offering.
Program Learning Outcomes (PLOs)
These outcomes are connected to student performance during a major or general education program. These are usually measured through course and co-curricular experiences throughout a program.
Institutional Learning Outcomes (ILOs)
These outcomes are connected to student performance during their entire time at the institution. At the University of Denver, these outcomes are found in the 4D experience. These outcomes are usually measured through larger initiatives in various programs.
P
Portfolio
A systematic and organized collection of student work that exhibits direct evidence of a student’s efforts, achievements, and progress over a period of time. The collection may involve the student in the selection of its contents, and should include information about the performance criteria, the rubric or criteria for judging merit, and evidence of student self-reflection or evaluation. It should include representative work, providing documentation of the student’s performance and a basis for evaluating the student’s progress. Portfolios may include a variety of demonstrations of learning and have been gathered in the form of a physical collection of materials, videos, CD-ROMs, reflective journals, etc.
R
Rubric
In general, a rubric is a scoring guide used in subjective assessments. A rubric implies that a rule defining the criteria of an assessment system is followed in evaluation. A rubric can be an explicit description of the performance characteristics that correspond to each point on a rating scale. A scoring rubric makes explicit the expected qualities of performance at each point on a rating scale, or defines a single scoring point on a scale.
S
Self-Assessment
A process in which a student engages in a systematic review of a performance, self-assessment is usually employed for the purpose of improving future performance. It may involve comparison with a standard or established criteria, critiquing one’s own work, or a simple description of the performance. Reflection, self-evaluation, and metacognition are related terms.
Summative Assessments
Summative assessments are measures that occur near the end of a unit, course, or program and seek to assess student mastery of an outcome.
Summative assessment is comprehensive in nature, provides accountability, and is used to check the level of learning at the end of a program. For example, if upon completion of a program students should have the knowledge to pass an accreditation test, taking the test would be summative in nature since it is based on the cumulative learning experience. Program goals and objectives often reflect the cumulative nature of the learning that takes place in a program. Thus, the program would conduct summative assessment at the end of the program to ensure students have met the program goals and objectives. Attention should be given to using various methods and measures in order to have a comprehensive plan. Ultimately, the foundation for an assessment plan is to collect summative assessment data, and this type of data can stand alone. Formative assessment data, however, can contribute to a comprehensive assessment plan by enabling faculty to identify particular points in a program to assess learning (i.e., entry into a program, before or after an internship experience, the impact of specific courses, etc.) and to monitor the progress being made toward achieving learning outcomes.
T
Teaching Quality Framework (TQF)
The Teaching Quality Framework engages faculty leaders, departments, and administrators, and provides a structure to identify (or co-create), refine, and implement improved teaching assessment practices. It is an opt-in model, with departments choosing to become leaders in this process. This strategy empowers the community to voluntarily engage with new ways of assessing teaching and to adopt an evidence-based framework for teaching assessment. (University of Colorado Boulder)
Key TQF principles:
- Grassroots (faculty-level) selection, refinement, and adoption of new assessment practices are important for improving teaching and teaching assessment.
- Effective teaching assessment should be multidimensional and incorporate three “voices” (data sources) of assessment: the instructor/self, the student voice, and peer review.
- Assessment should drive improvements to teaching by being formative.
V
VALUE Rubrics
VALUE rubrics were developed by teams of faculty experts representing colleges and universities across the United States. The rubrics articulate fundamental criteria for each learning outcome, with performance descriptors demonstrating progressively more sophisticated levels of attainment. The rubrics are intended for institutional-level use in evaluating and discussing student learning, not for grading. The core expectations articulated in all 15 of the VALUE rubrics can and should be translated into the language of individual campuses, disciplines, and even courses. The utility of the VALUE rubrics is to position learning at all undergraduate levels within a basic framework of expectations such that evidence of learning can be shared nationally through a common dialog and understanding of student success. (“VALUE Rubrics”)