HEIghten® Assessment Suite Product Guide

The HEIghten® outcomes assessment modules are designed to assess students’ skill levels in cross-disciplinary areas: Critical Thinking, Written Communication, Quantitative Literacy, Civic Competency and Engagement, and Intercultural Competency and Diversity.



Introduction

Overview of HEIghten Uses

Inappropriate Uses of HEIghten Scores  

The HEIghten Assessment Suite 

Key Features

Construct Definitions

HEIghten Approach to Critical Thinking

HEIghten Approach to Written Communication 

HEIghten Approach to Quantitative Literacy

HEIghten Approach to Civic Competency and Engagement

HEIghten Approach to Intercultural Competency and Diversity 

Description of HEIghten Assessment Task Types and Item Formats

Logistical Characteristics of HEIghten Assessment Modules

Technical Guidelines for Use of HEIghten Assessment Module Scores

Description of Scores from HEIghten Modules 

Types of Scores

Types of Score Reports

Explanation of Performance Level Descriptions (PLDs) and Attitude/Approach Level Descriptions 

Practical Guidelines for Use of HEIghten Modules for Demonstrable Student Learning Improvement 

Contextual Factors Impacting HEIghten Score Interpretations 

References


 

Introduction 

Institutional stakeholders, employers and organizations have identified Critical Thinking, Written Communication, Quantitative Literacy, Civic Competency and Engagement, and Intercultural Competency and Diversity as important skills for undergraduate students to develop. 

It is clear from existing research and surveys that Critical Thinking, Written Communication, Quantitative Literacy, Civic Competency and Engagement, and Intercultural Competency and Diversity are the kinds of skills employers agree prepare students for long-term career success (Hart Research Associates, 2015). To help students develop these skill sets, institutions need sound measurement tools that can capture or gauge students’ abilities, track improvements over time and inform conversations concerning educational interventions (Millett, Payne, Dwyer, Stickler, & Alexiou, 2008). 

Assessing these various skills using sound instruments can be important for university-level improvement efforts. That is, as institutions strive to continuously improve student learning outcomes related to Critical Thinking, Written Communication, Quantitative Literacy, Civic Competency and Engagement, and Intercultural Competency and Diversity, they need high-quality assessment instruments with desirable psychometric properties (e.g., scores that are reliable). Such instruments must provide scores that are supported by validity evidence — meaning stakeholders can trust the appropriateness of the inferences or conclusions made based on assessment module scores. 

In addition to informing and documenting institutional improvement efforts, high-quality assessment of these critical skills can be useful for accreditation and other accountability efforts. Most accreditation bodies require institutions to assess university-level student learning outcomes, such as those related to cross-disciplinary general education concepts (e.g., Critical Thinking, Written Communication, Quantitative Literacy). Others require institutions to go beyond simply assessing: they must create, implement and document plans to improve student learning related to these cross-disciplinary skills. In this sense, assessment of Critical Thinking, Written Communication, and Quantitative Literacy skills can positively contribute to demonstrating learning improvement, as well as accountability-based reporting. Additionally, in an effort to compete globally and demonstrate that students are prepared for a global workforce, higher education institutions have started to include civic and intercultural competencies in a holistic manner in their mission statements, emphasizing that they inspire students to become civically engaged and promote global perspectives and awareness (Perry et al., 2016). Thus, the assessment of Civic Competency and Engagement and Intercultural Competency and Diversity is also of increased importance across institutions. 

The HEIghten student learning outcomes suite is designed to assess students’ skill levels in critical general education areas as well as Civic Competency and Engagement and Intercultural Competency and Diversity. As measures with known statistical properties and high-quality technical characteristics, the scores from these assessments, when used properly, can help faculty members and other institutional stakeholders answer important questions concerning student learning. 

Back to Top

Overview of HEIghten Uses 

Each HEIghten module can be used as a stand-alone assessment or in conjunction with an institution’s internal assessments. Provided all applicable guidelines are adhered to, particularly the use of multiple sources of information when assessing student learning outcomes, scores from the HEIghten modules can also help faculty and other stakeholders: 

  • Document and describe student achievement of general education competencies in terms of both benchmarking and trend analyses over time 
  • Provide evidence of improvements in student learning 
  • Identify performance levels of a specific group of interest 
  • Facilitate conversations and pinpoint areas of strength regarding pedagogy, curricula and educational interventions 
  • Inform faculty development/training opportunities 
  • Support accreditation and accountability initiatives by describing students’ ability to meet institutional and program-level learning outcomes 
  • Provide cross-institutional and other comparative data (e.g., major, ethnicity, transfer student) 

The suitability of the HEIghten assessment modules for a particular use should be explicitly examined before the module scores are used for that purpose. The list above describes the intended uses. Uses other than those listed above should be discussed in advance with Territorium Higher Education staff to determine their appropriateness. If such a use is contemplated, it is important for the user to validate the use of scores for that purpose; the Territorium Higher Education staff will provide advice on the design of such validity studies. 

Back to Top

Inappropriate Uses of HEIghten Scores  

Uses and interpretations of HEIghten scores without supporting validity evidence are inappropriate. We caution against the use of HEIghten assessment modules as the sole criterion for decisions at the individual student level. Scores provided by the HEIghten assessment are best used in conjunction with other criteria when making decisions about a curriculum or academic program. Institutions and academic departments are strongly cautioned against making the achievement of a certain score or percentile on the HEIghten assessment modules a necessary condition for a student’s graduation.  

The HEIghten modules were developed to support faculty members in their process of evaluating the effectiveness of their curricular and co-curricular programs. While faculty members are an important factor in the learning process for students, the HEIghten modules were not developed to capture the impact of any one faculty member’s contribution to student learning. Therefore, use of module scores to evaluate the effectiveness of faculty members is also considered inappropriate. 

Back to Top

The HEIghten Assessment Suite 

General Description of HEIghten Assessment Modules 

The HEIghten outcomes assessment modules are designed to assess students’ skill levels in critical cross-disciplinary areas: Critical Thinking, Written Communication, Quantitative Literacy, Civic Competency and Engagement, and Intercultural Competency and Diversity.  

Back to Top

Key features of HEIghten include: 

  • Suite of computer-based assessment modules 
  • Flexible and customizable modular format 
  • Static and variable score reporting at the institution level 
  • Credential for the individual test taker that reflects the skill level demonstrated on the module 
  • Convenient administration online (taking approximately 45 minutes), in either a proctored or non-proctored setting 
  • Accommodations for test takers with disabilities 

Back to Top

HEIghten Construct Definitions 

To adequately align assessment modules with student learning outcomes and educational interventions, it is important to understand how all general education skill areas (i.e., Critical Thinking, Written Communication, Quantitative Literacy, Civic Competency and Engagement, and Intercultural Competency and Diversity) are defined within the HEIghten assessment suite. Institutional stakeholders should carefully consider the following operational definitions of the skills measured by the HEIghten assessments. Should the HEIghten operational definition of a particular skill area differ substantially from the institution or program’s definition of that skill area, the institution or program may consider using supplemental, locally developed assessments that — when coupled with the HEIghten assessment modules — are more aligned with institution- or program-level student learning outcomes and/or educational interventions. 

For each of the five HEIghten assessments, operational definitions and assessment frameworks were developed based on a comprehensive review of existing definitions, frameworks, and assessments for each of the constructs. It is important to note that each assessment framework represents an ideal framework for a particular construct. As the frameworks were used to create the operational assessments, some slight modifications were made to the definitions to ensure that the various dimensions could be adequately assessed. Each of these operational definitions and assessment frameworks is discussed in detail below. 

Back to Top

HEIghten Approach to Critical Thinking

The research report Assessing Critical Thinking in Higher Education: Current State and Directions for Next-Generation Assessment provides an overview of how critical thinking has been defined (Liu, Frankel, & Roohr, 2014). It also provides insights into various considerations for assessing this construct. The authors of the research report noted that critical thinking is a frequently discussed and highly valued skill in higher education; however, researchers often debate its definition. Based on a comprehensive review of critical thinking definitions, frameworks and empirical research studies, the authors proposed an operational definition for critical thinking. The definition includes two central aspects (i.e., dimensions) of critical thinking, both of which are measured by the HEIghten Critical Thinking assessment: 

  • Analytical skills (i.e., evaluate evidence and its use; analyze and evaluate argument structure) 
  • Synthetic skills (i.e., understand implications and consequences; develop sound and valid arguments) 

Additionally, this definition includes understanding causation and explanation, which is relevant to both the analytical and synthetic dimensions. Table 1 comes from Table 4 of Liu et al. (2014).

📌 Table 1: Framework for Critical Thinking

Back to Top

HEIghten Approach to Written Communication 

Like Critical Thinking, Written Communication is considered a key competency for both academic and career success. Similarly, there is also some debate and disagreement about how it should be defined. In Assessing Written Communication in Higher Education: Review and Recommendations for Next-Generation Assessment, the authors (Sparks, Song, Brantley, & Liu, 2014) proposed an operational definition for written communication. For the purposes of the HEIghten Written Communication module, the operational definition includes four dimensions: 

  • Knowledge of social and rhetorical situations, which concerns the purpose-driven, social nature of all written communication, and includes the ability to adapt one’s writing to the demands of the specific context, purpose, or task; and audience awareness, which can include identifying or writing to address a particular audience. 
  • Knowledge of conceptual strategies, which concerns the use of relevant content knowledge to support writing including the presentation of those ideas in an organized, logical, and coherent sequence within a text; and use of information drawn from sources to support one’s ideas without distorting the author’s original meaning. 
  •  Knowledge of language use and conventions, which concerns the linguistic elements of writing, and includes the ability to convey meaning clearly by using appropriate word choice, tone and style, given the purpose of the writing, as well as the ability to produce relatively error-free text without substantial flaws in usage, syntax and mechanics. 
  •  Procedural knowledge, which cuts across the preceding social, conceptual and linguistic dimensions, concerns the various strategies used to support prewriting or planning, drafting and revision of text, as well as reading and appropriately responding to others’ feedback. 

Sparks et al. (2014) further describe these four dimensions and provide examples of how they could be assessed. A version of the table found in Sparks et al. (2014) is displayed in Table 2 below. Institutional stakeholders should review these descriptions to ensure adequate alignment between the HEIghten construct definitions and institutional or programmatic student learning outcomes. 

📌 Table 2: Framework for Written Communication

Back to Top

HEIghten Approach to Quantitative Literacy  

In addition to Critical Thinking and Written Communication, institutional stakeholders and employers value Quantitative Literacy skills (Roohr, Graf, & Liu, 2014). There are many different frameworks and definitions for Quantitative Literacy. The authors reviewed current literature and pre-existing frameworks from national organizations and employers related to definitions of Quantitative Literacy. In Assessing Quantitative Literacy in Higher Education: An Overview of Existing Research and Assessments with Recommendations for Next-Generation Assessment, Roohr et al. (2014) proposed an operational definition for the HEIghten Quantitative Literacy assessment module:  

Quantitative literacy is the comprehension of mathematical information in everyday life, and the ability to detect and solve mathematics problems in authentic contexts across a variety of mathematical content areas. Solving these applied mathematical problems includes (a) interpreting information, (b) strategically evaluating, inferring and reasoning, (c) capturing relationships between variables by mapping, interpreting and modeling, (d) manipulating mathematical expressions and computing quantities, and (e) communicating these ideas in various forms.  

This operational definition includes three specific dimensions. The following dimensions are measured as part of the HEIghten Quantitative Literacy assessment:  

  • Problem-solving skills, which include (a) interpretation, (b) strategic knowledge and reasoning, (c) modeling, and (d) communication. For example, communication involves the presentation of mathematical concepts, data, procedures and solutions in a variety of forms (e.g., written, graphic or tabular format). These problem-solving skills are not mutually exclusive but, instead, are dynamically interrelated. A secondary problem-solving skill is computation, which is fundamental to each of the problem-solving skills above.  
  • Content area, which includes (a) number and operations, (b) algebra, (c) geometry and measurement, and (d) statistics and probability. The construct of quantitative literacy focuses on problem solving in applied and authentic contexts, requiring advanced reasoning skills rather than just relying on memorization skills.  
  • Contexts, which include stimuli for all of the items that are embedded within real-world contexts such as personal and everyday life, the workplace and society. To solve the problems, students must apply mathematical knowledge to authentic contexts or situations.  

Each of these aspects of the proposed quantitative literacy framework is aligned with definitions, frameworks and assessments throughout the relevant quantitative literacy literature. Roohr et al. (2014) further describe these three dimensions and provide examples of how they could be assessed. Institutional stakeholders should review these descriptions to ensure adequate alignment between 1) the HEIghten construct definition and 2) institutional or programmatic student learning outcomes (SLOs) and educational interventions.  

Tables 3 and 4 come from Assessing Quantitative Literacy in Higher Education: An Overview of Existing Research and Assessments with Recommendations for Next-Generation Assessment.  

These tables provide descriptions of the content areas and problem-solving skills measured by the Quantitative Literacy module, respectively.

📌Table 3: Framework for Quantitative Literacy - Content Areas

📌 Table 4: Framework for Quantitative Literacy - Problem-Solving Skills

Back to Top

HEIghten Approach to Civic Competency and Engagement 

Educational leaders have also stressed the need to expand learning beyond cognitive subject matter, and have begun to prioritize the need for a society that is civically literate in an effort to support democracy and the growing economy (Torney-Purta, Cabrera, Roohr, Liu, & Rios, 2015). In the research report, Assessing Civic Competency and Engagement in Higher Education: Research Background, Frameworks, and Directions for Next-Generation Assessment, Torney-Purta et al. (2015) reviewed and synthesized the existing frameworks, research, and assessments that fall within the area of civic learning and proposed an assessment framework that contains two domains – Civic Competency and Civic Engagement – that are part of a higher-level civic learning construct. 

Civic Competency refers to the combination of civic knowledge and skills. This includes the ability to analyze and make reasoned judgments about civic- and political-related issues or situations. For the HEIghten module, Civic Competency includes two main components: Civic Knowledge and Civic Skills (including analytic skills and participatory or involvement skills). 

  • Civic knowledge includes knowledge of facts, concepts, and principles (e.g., democratic processes, government structures, voting) across various contexts (e.g., local, national, international, and past or present). The knowledge assessed in this section is nontechnical and applicable to students of various majors. 
  • Civic skills include both analytic and participatory skills. Analytic skills include the ability to systematically analyze written and graphic material (e.g., charts, texts, political cartoons), as well as the ability to apply political and civic knowledge to systematically analyze civic-related issues or scenarios. Participatory and involvement skills describe the individual’s capability to identify the most appropriate action in a group setting or to find the appropriate solution or reaction to diverse social and civic issues. 

Civic Engagement is defined as the active and informed participation in various aspects of democratic life – voting, participation, volunteering, etc. It contains two key components: Civic Attitudes and Civic Participation. 

  • Civic Attitudes includes Efficacy and Democratic Norms and Values. Efficacy includes the belief that one can understand and influence political and government affairs; and Democratic Norms and Values relate to one’s beliefs in basic principles regarding democracy and to behaviors that display consideration and respect for diversity. 
  • Civic Participation considers the civic and political actions and behaviors displayed in various settings – in the community, nationally, and globally – in both face-to-face and online contexts. 

Table 5 is a modified version of the table found in Assessing Civic Competency and Engagement in Higher Education: Research Background, Frameworks, and Directions for Next-Generation Assessment. This table provides a description of the various aspects of civic learning measured in the Civic Competency and Engagement module.

📌Table 5: Framework for Civic Competency and Engagement

Back to Top

HEIghten Approach to Intercultural Competency and Diversity 

The modern wave of globalization has entered the realm of higher education with more institutions recognizing the importance of graduates entering the 21st-century workforce with intercultural competency and diversity skills (Griffith, Wolfeld, Armon, Rios, & Liu, 2016). The framework applied in the HEIghten modules has considered existing models, scales, and empirical research, but extends beyond former models in its consideration and use of theories and research to guide assessment. The framework in Assessing Intercultural Competence in Higher Education: Existing Research and Future Directions conceptualizes intercultural interactions into three dimensions: Approach, Analyze, and Act (Griffith et al., 2016). 

  • The Approach dimension considers positive cultural orientation, tolerance for ambiguity, and cultural self-efficacy. Positive cultural orientation includes concepts such as reduced ethnocentrism, open-mindedness, inquisitiveness, curiosity and respect for other cultures. Tolerance for Ambiguity is the ability to maintain composure and well-being in uncertain situations without compromising effectiveness. Cultural Self-Efficacy plays a role as well, since it influences the situations in which an individual chooses to partake and the attitude the individual will have in each situation. 
  • The Analyze dimension refers to one’s ability to take in, evaluate and synthesize important information without preconceived notions. Within this dimension, abilities such as self-awareness, social monitoring, suspending judgment, perspective-taking, and cultural knowledge application are considered. 
  • The final dimension, Act, describes the ability to translate thoughts into actions. It also includes behavior and emotion regulation. Behavior regulation refers to an individual’s ability to control or suppress any actions that would be inappropriate based on the cultural context, to act appropriately in cross-cultural situations, or to choose not to act at all. 

Table 6 is from Assessing Intercultural Competence in Higher Education: Existing Research and Future Directions. This table provides the framework for the Intercultural Competency and Diversity module. 

📌 Table 6: Framework for Intercultural Competence

Back to Top

Description of HEIghten Assessment Task Types and Item Formats 

Each of the modules in the HEIghten assessment suite uses different task types and item formats. It is important for institutional stakeholders to understand these task types and item formats to ensure alignment among the assessment, the student learning outcomes and the educational interventions at their institution and/or program related to general education skills. 

Technologically-enhanced task types and item formats (e.g., multistep-selection items, quantitative comparisons, passage-based item sets) go beyond traditional multiple-choice items, and should help to keep students engaged during the assessment. In addition, these task types and item formats promote the measurement of a wide range of critical thinking, written communication, quantitative literacy, civic competency and engagement, and intercultural competency and diversity skills. They emphasize the critical balance between the authenticity of the assessment and its technical quality. The assessment modules also include both real-world and higher-level academic contexts. 

1. Critical Thinking 

For the Critical Thinking assessment module, the assessment features the following item types: 

    • Critical thinking sets (i.e., a series of selected-response questions based on a shared multi-part stimulus that reflects real-world, authentic issues) 
    • Logical reasoning items 
    • Analytical reasoning sets 

In addition, item formats include: 
    • Single- and multiple-selection multiple-choice 
    • Inline choice (i.e., drop-down menu items) 
    • Select-in-passage 

Additional features of the assessment include graphs or charts in the test questions, and quantitative contexts. 

2. Written Communication
 

The Written Communication assessment module also presents items to students in a variety of ways that go beyond multiple-choice items. Students are asked to respond to the following types of tasks: 

    • Passage-based sets (including 12 selected-response items per set) 
    • Essay writing task 

3. Quantitative Literacy 

Like the Critical Thinking and Written Communication modules, the Quantitative Literacy assessment module contains different tasks, presented to students in a variety of ways that go beyond multiple-choice items. The Quantitative Literacy assessment features two main types of items: selected-response and open-ended response. Within these two types, the assessment employs several more specific types of items that assess an examinee’s ability to answer quantitative items. These item types include the following: 

    • Single- and multiple-selection multiple choice 
    • Numeric entry, fraction entry 
    • Grid/table items (e.g., a table with statements where the correct property is selected by check-marking a cell in the table) 
    • Quantitative comparison 

Within the assessment, students are provided with an on-screen four-function calculator to answer the test questions, allowing the module to focus on quantitative problem-solving skills rather than solely on students’ computational skills. 

4. Civic Competency and Engagement 

The Civic Competency and Engagement assessment module likewise contains different item types for the different sections of the assessment. The Civic Competency item formats include: 

    • Single-selection multiple choice 
    • Multiple-selection multiple choice 
    • Drop-down menus 
    • Situational judgment 

The Civic Attitude item formats include: 

    • Hypothetical scenarios followed by Likert-type items on a four-point scale (Strongly Disagree to Strongly Agree). 

Lastly, for Civic Participation, item formats include: 

    • Multiple-selection multiple choice (context of participation including on campus, in the local community, at the state level, national level, or international level) 
    • Single-selection multiple choice (number of hours volunteering) 
    • Likert-type items on a four-point scale (Never, Occasionally, Weekly, or Daily) 

5. Intercultural Competency and Diversity 

The Intercultural Competency and Diversity assessment module uses multiple item types and response formats to assess students on different dimensions. The item types include: 

    • Cross-cultural situational judgment tasks 
    • Likert self-report items 

This module uses different types of item formats, including: 

    • Single-selection multiple choice 
    • Multiple-selection multiple choice 
    • Likert scales 

Of note, the scenarios and situations in this module include items centered around studying abroad, teaching abroad, work-related travel abroad, international travel, interactions with guests from other cultures, and subcultures within the U.S. The situations students are presented with may or may not align with their own cultures and will require decisions about how best to react in each situation. 

Back to Top

Logistical Characteristics of HEIghten Assessment Modules 

Each of the assessment modules within the HEIghten assessment suite is characterized in Table 7 by different logistical characteristics. It is important for institutional stakeholders to consider these module characteristics to ensure that they are well-aligned with their institutional- or program-level data needs, reporting processes and research questions.

📌 Table 7. Summary of Logistical Characteristics for the HEIghten Outcomes Assessments

The HEIghten® Outcomes Assessment Suite is administered online in one of three ways: proctored on campus (also referred to as Institutional Proctoring), remotely proctored, or non-proctored.  

The use of non-proctored assessments introduces more flexibility to the administration process. We caution, however, that the use of non-proctored environments may impact student interest and effort on the assessment. Please refer to the section on student motivation for ways to address this concern. 

Back to Top

Technical Guidelines for Use of HEIghten Assessment Module Scores 

Overview 

The following guidelines have been adopted by the HEIghten Assessment Suite program staff to provide information about the appropriate use of HEIghten scores when assessing institutional and programmatic student learning outcomes. They are also intended to provide considerations for fair and best practices when using assessment scores. Adherence to the technical guidelines is important.  

Policies and Guidelines for Appropriate Use of HEIghten Assessment Information  

Score users are highly encouraged to become knowledgeable about the validity of score uses and interpretations. To this end, the following policies and guidelines are available to users of HEIghten:  

Score users. Higher education institutions and their students are considered score users.  

Validity. The general appropriateness of using scores from the HEIghten assessment outcomes suite to measure critical knowledge and skills has been established by research studies. HEIghten scores may be appropriate for some other purposes, but it is important for institutions and programs to validate their use for those purposes. The HEIghten staff can advise institutions on different processes and strategies for conducting validity studies and gathering necessary validity evidence.  

Confidentiality. HEIghten assessment scores, whether those of an individual or aggregated for an institution, are confidential and can be accessed only by authorization of the individual or institution.  

Use of scores in aggregated form. Use of HEIghten scores as a measure for ranking or rating undergraduate programs, institutions, university systems or states is strongly discouraged except when the scores are used as one indicator among several appropriate indicators of educational quality.  

Use of scores for grades or graduation requirements. Use of HEIghten scores to provide a grade for a student is strongly discouraged given that the assessment modules are not developed to assess course-level material. Use of HEIghten scores for graduation requirements is also strongly discouraged. 

Back to Top

Description of Scores from HEIghten Modules 

HEIghten scores provide institutions with the tools for making appropriate inferences about their students’ demonstration of learning, and to make informed changes to educational interventions. To appropriately use data from HEIghten assessment instruments, stakeholders must understand the various types of scores made available to them, what these scores mean, and appropriate uses/interpretations of these scores. 

Back to Top

Types of Scores 

Each HEIghten module uses scaled scores to aid in the interpretation of student performance. A scaled score is a conversion of the numerical raw score achieved on the assessment (e.g., 50 of 80 tasks/questions answered correctly) to a score in the predetermined range used for each HEIghten assessment module (from 150 to 180, or 90 to 150). Each HEIghten module has multiple test forms. The scaled scores on the HEIghten modules are computed in a way that adjusts for the potential difference in difficulty of the questions on each test form. This statistical process allows the performance of students taking different forms of the same module to be compared, regardless of minor differences in the difficulty of those forms. While scores cannot be compared across the different modules (e.g., comparing Critical Thinking scores to Written Communication scores), scores can be compared across different forms of the same module. Additionally, overall scores on the Civic Competency and Engagement module (including Civic Competency and Civic Attitudes) and Intercultural Competency and Diversity module (including Approach, and Analyze & Act) should not be combined to create a total score. 
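
To make the idea of a raw-to-scaled conversion concrete, the sketch below applies a simple linear transformation for two hypothetical forms of the same module. The constants, function name and linear approach are illustrative assumptions only; they are not the operational HEIghten scaling or equating procedure.

```python
# Illustrative only: a toy raw-to-scaled conversion for two hypothetical forms of
# the same module. The constants below are invented for this sketch; operational
# HEIghten equating uses its own statistical methods.
FORM_CONSTANTS = {
    "Form A": {"slope": 0.375, "intercept": 150.0},   # 0-80 raw points map onto 150-180
    "Form B": {"slope": 0.375, "intercept": 151.5},   # slightly harder form, higher intercept
}

def to_scaled_score(raw_score: int, form: str) -> int:
    """Map a raw score to the 150-180 reporting scale for a given (hypothetical) form."""
    c = FORM_CONSTANTS[form]
    scaled = c["intercept"] + c["slope"] * raw_score
    return int(round(min(max(scaled, 150), 180)))     # keep the result within the reporting range

# A raw score of 50/80 on the harder Form B maps to roughly the same scaled score
# as a slightly higher raw score on Form A, which is the point of equating.
print(to_scaled_score(50, "Form A"), to_scaled_score(50, "Form B"))
```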

The following are the types of scores that are found on the HEIghten modules: 

Total/Overall Score. This score is provided on each HEIghten assessment module as a description of the total or overall performance within a module. It is the estimated statistical representation of a student’s skill as represented by the HEIghten assessment module content. Higher scores indicate an estimate of higher skill or attitude than lower scores. Total scores are provided for the Critical Thinking, Quantitative Literacy, and Written Communication modules, and overall scores are provided for the different dimensions of the Civic Competency and Engagement and Intercultural Competency and Diversity modules (the Civic Competency and Engagement and Intercultural Competency and Diversity modules do not report a total score). Specifically, overall scores are provided for the Civic Competency and Civic Attitudes dimensions of the Civic Competency and Engagement module, and overall scores are provided for Approach and Analyze & Act dimensions of the Intercultural Competency and Diversity module. It should also be noted that some scores in the Intercultural Competency and Diversity and Civic Competency and Engagement modules may not be accurate reflections of how individuals will actually respond or perform in real-world situations. 

Subscores. These scores represent performance in key aspects of each skill area measured by the HEIghten assessment modules. The number of questions on each assessment module and the breadth of the subscore domains help to determine whether a reliable subscore can be reported. When subscores are available for a HEIghten assessment module, they are provided only at the group level; individual students will not receive subscores. 

Percentile Ranks. This statistic indicates the percentage of scores in a frequency distribution that are equal to or below a particular score. Percentile ranks, which range from 0% to 100%, are provided for institutions and students. 
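
As a concrete illustration of this definition, the short sketch below computes a percentile rank for a single score against a set of scores; the function name and example data are hypothetical.

```python
def percentile_rank(scores: list[float], score: float) -> float:
    """Percentage of scores in the distribution that are equal to or below `score`."""
    if not scores:
        raise ValueError("scores must be non-empty")
    at_or_below = sum(1 for s in scores if s <= score)
    return 100.0 * at_or_below / len(scores)

# Example with hypothetical scaled scores on the 150-180 reporting scale.
cohort = [152, 158, 160, 163, 165, 168, 170, 171, 174, 178]
print(percentile_rank(cohort, 168))  # 60.0 -> 60% of the scores are at or below 168
```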

Proficiency Classifications. Also known as Performance Level Descriptions (PLDs), these criterion-referenced scores indicate an achievement of varying levels of standards for each HEIghten assessment module. The criterion score for reaching each proficiency level differs. For each HEIghten assessment the proficiency levels are noted as “Developing,” “Proficient,” and “Advanced.” The process for determining the relationship between the total scores on the HEIghten assessments and the corresponding proficiency levels is described in further detail later in this section.  

Attitude/Approach Classifications. In addition to the PLDs, both the Civic Competency and Engagement, and Intercultural Competency and Diversity modules also have classifications for the Civic Attitude and Approach Levels, respectively. For the Civic Competency and Engagement module, the Civic Attitudes subscale utilizes three Attitude Level Descriptions - “Lower,” “Medium,” and “Higher” - based on the students’ responses to civic-related scenarios. In the case of the Intercultural Competency and Diversity module, the Approach dimension includes three Approach Level Descriptions: “High,” “Neutral,” and “Low.” The process for determining these classifications is described in more detail later in this section. 

Back to Top

Types of Score Reports 

Standard score reports are available to all HEIghten users. Examples of the Individual Score Report and Institutional Score Report are described below. Institutions will receive separate reports for each of the HEIghten modules administered to their students (i.e., an institutional score report for Critical Thinking and a separate institutional score report for Quantitative Literacy). 

Individual Score Report. This report provides an individual student’s total score on the HEIghten module for Critical Thinking, Written Communication, and Quantitative Literacy. This information is an estimate of the amount of critical knowledge in the domain that has been demonstrated by the student. Higher scores indicate the demonstration of more critical knowledge. Individual student performance relative to the performance of students across all participating institutions is indicated by the percentile rank and a comparison to the average score of all other students. Students and faculty members can use this report to compare this student’s critical knowledge to the average score of other students who were assessed during a specific period of time (e.g., 2016–2019). 

The individual score reports for Civic Competency and Engagement and Intercultural Competency and Diversity do not contain a total score. Rather, these modules contain overall scores for the individual for each domain (i.e., Civic Competency and Civic Attitudes for the Civic Competency and Engagement module, and Approach and Analyze & Act for the Intercultural Competency and Diversity module). In the case of the Civic Participation domain for Civic Competency and Engagement, individuals are provided with item-level response percentages for each response. 

This report also includes each of the three criterion-referenced proficiency levels (“Developing,” “Proficient,” and “Advanced”) to further describe the student’s performance and the average performance of all students who were assessed during a specific period of time for the Critical Thinking, Written Communication, and Quantitative Literacy total scores, and for the Civic Competency and Analyze & Act overall scores within the Civic Competency and Engagement, and Intercultural Competency and Diversity modules, respectively. The Civic Attitudes domain in the Civic Competency and Engagement module uses “Lower,” “Medium,” and “Higher” to indicate the level of agreement to realistic situations or scenarios. In addition, the Approach domain in the Intercultural Competency and Diversity module uses “Low,” “Neutral,” and “High” level descriptors to reflect the test takers’ views of themselves.  

 

Institutional Score Report. The institutional score report contains information about the distribution of students’ scaled scores within the Reporting group and Comparison group. The Reporting group includes students from your institution who completed at least 75% of the assessment. Institutions must have a minimum of 30 valid results to produce the institutional score report. Multiple cohorts can be combined to achieve this minimum. The Comparison group includes students from other institutions who also took the assessment. This distribution of scores allows the institution to view the range, modality and skewness of scores. Characteristics of the Reporting and Comparison groups are described at the top of the institutional score report. Institutional stakeholders can use this information to compare their students’ average performance to the performance of students included in the Comparison group (e.g., senior students from all institutions). Institutional stakeholders can also use this information to examine the percentage of students achieving each possible total or overall scaled score on the module (ranging from 150–180 for Critical Thinking, Written Communication, Quantitative Literacy, Civic Competency for Civic Competency and Engagement, and Analyze & Act for Intercultural Competency and Diversity; and ranging from 90–150 for Civic Attitudes for Civic Competency and Engagement, and Approach for Intercultural Competency and Diversity).  
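
For institutions that work with their own data exports, the sketch below illustrates one way to check the Reporting-group criteria described above (at least 75% of the assessment completed and a minimum of 30 valid results) before summarizing a score distribution. The column names and records are hypothetical and are not the actual fields of a HEIghten data file.

```python
import pandas as pd

# Hypothetical records: proportion of the module completed and the scaled score earned.
records = pd.DataFrame({
    "student_id": range(1, 41),
    "pct_completed": [0.95] * 35 + [0.50] * 5,     # 35 students completed most of the module
    "scaled_score": [165] * 20 + [172] * 15 + [None] * 5,
})

# Reporting group: students who completed at least 75% of the assessment.
reporting = records[records["pct_completed"] >= 0.75]

if len(reporting) >= 30:                           # minimum number of valid results
    distribution = reporting["scaled_score"].value_counts(normalize=True).sort_index()
    print(distribution * 100)                      # percentage of students at each scaled score
else:
    print("Fewer than 30 valid results; combine cohorts before reporting.")
```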

The institutional report also includes the distribution of students within the Reporting group and the Comparison group who achieved each performance/attitude/approach level. The report also includes a graph indicating the percentage of students at each of the three criterion-referenced proficiency levels, or Performance Level Descriptions (PLDs) (“Developing,” “Proficient,” and “Advanced”), from the Reporting group and the Comparison group. There are also graphs indicating the percentage of students at each of the three attitude or approach levels (“Lower,” “Medium,” and “Higher”; “Low,” “Neutral,” and “High”), from the Reporting group and the Comparison group. Institutional stakeholders can use this information to determine what percentage of their students performed at each of the three performance/attitude/approach levels. This information is provided for students in the Comparison group as well. Lastly, the report provides the average total/overall scaled scores and subscores for assessed students for Critical Thinking, Written Communication, and Quantitative Literacy. Overall scaled scores are only provided from the domain scores for the Civic Competency and Engagement and Intercultural Competency and Diversity modules. Faculty members can use this information to compare students’ critical knowledge to the average score of students from a Comparison group at both the total/overall score level and at the subscore level.  

 

Data Download Report. This report is a data file that provides all administration, biographic and student performance information. This data file is provided to allow institutions the flexibility of conducting their own analyses on student data. For added flexibility, this report is available in Microsoft® Excel® format. Institutions can add or modify student information from their databases such as course tracks, standardized test scores and GPA. Faculty members can use this report to conduct further analyses of student data. This report provides faculty with the opportunity to manage their own data files, and is often used by programs that conduct research on student learning within their discipline. 
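
As one example of the kind of follow-up analysis the Data Download Report supports, the sketch below reads a downloaded file and joins it to institutional records such as GPA or course track. The file names and column names are hypothetical; consult your own export for the actual field names.

```python
import pandas as pd

# Hypothetical file and column names; inspect your actual Data Download Report
# and institutional extract for the real field names before adapting this sketch.
heighten = pd.read_excel("heighten_data_download.xlsx")            # administration, biographic, performance data
registrar = pd.read_csv("registrar_extract.csv")                   # e.g., student_id, gpa, course_track

# Attach GPA and course track to each HEIghten record by student ID.
merged = heighten.merge(registrar, on="student_id", how="left")

# Example analysis: mean scaled score by course track.
print(merged.groupby("course_track")["scaled_score"].agg(["count", "mean"]))
```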

Back to Top

Explanation of Performance Level Descriptions (PLDs) and Attitude/Approach Level Descriptions 

PLDs are a way to describe specific knowledge and skills students can demonstrate on each of the HEIghten assessment modules. For instance, as part of the Institutional Score Report, stakeholders will be provided with histograms that show the distribution of individual students’ scaled scores within the Reporting group and the Comparison group, according to performance levels. To accurately understand, interpret and meaningfully use this information, stakeholders need to comprehensively understand what each of the PLDs means (i.e., what distinguishes “Developing” from “Proficient” and “Advanced”). The following tables provide an in-depth description of each performance indicator for all five HEIghten assessment modules (Tables 9–13). 

📌 Table 9: Critical Thinking Performance Level Descriptors

📌 Table 10: Written Communication Performance Level Descriptors

📌 Table 11: Quantitative Literacy Performance Level Descriptors

📌 Table 12: Civic Competency Performance Level Descriptors

📌 Table 13: Intercultural Competency Performance Level Descriptors

It must be noted that the Civic Attitudes and Civic Participation sections of the Civic Competency and Engagement module do not have PLDs. Instead, the Civic Attitudes domain uses attitude level descriptors that consider how likely test takers are to agree with a variety of statements, based on their responses to items that include civic-related scenarios. Test takers’ responses place them into one of three groups: “Higher,” “Medium,” or “Lower” (Table 14). Similarly, the Approach section of the Intercultural Competency and Diversity module uses approach level descriptors that describe how test takers view their capabilities (e.g., Level 3 Approach – “High”; Level 2 Approach – “Neutral”; Level 1 Approach – “Low”) (Table 15). 

📌 Table 14: Civic Competency Attitudes Level Descriptors

📌 Table 15: Intercultural Competency Approach Level Descriptors

Stakeholders and users are strongly advised to review and understand the meaning of these PLDs and level descriptors. 

Back to Top

Practical Guidelines for Use of HEIghten Modules for Demonstrable Student Learning Improvement 

Overview 

The following practices are provided to assist institutional stakeholders with implementing best practices when assessing and improving student learning at the program or institutional level. In this section, “tips” accompany each practical consideration. These tips offer actionable advice to institutions engaged in the process of learning outcomes assessment. To that end, these tips will help institutions: 

    • Engage faculty members in assessment processes 
    • Improve the quality of the results obtained from learning outcomes assessment 
    • Modify pedagogies and curricula 
    • Demonstrate student learning improvements related to HEIghten results, among other outcomes 

Use these practical guidelines — in conjunction with the HEIghten assessment suite — as thought leadership to promote innovation, facilitate conversations, support educators and empirically demonstrate learning improvements on campus. 

 

Practical Considerations 

Engage faculty members and students in the assessment process. 

Assessment works best when everyone in the teaching and learning process understands that assessment data is being used to modify pedagogies and curricula for improving student learning. When faculty members are able to answer the questions they have about student learning with assessment data, that data is more likely to be used to improve pedagogies, curricula and learning interventions. Likewise, students may have more incentive to engage in the assessment process when they know that their data will be used to meaningfully inform improvements at the program or university level. 

💡 Consider what questions about student learning you want to answer, at the outset. Then, focus improvement efforts on one or two targeted learning outcomes that faculty are interested in examining and improving. 

💡 Share a brief summary of results with faculty and students; discuss if and/or how faculty and students would find this useful for informing pedagogical and curricular changes and enhancing student learning. 

💡 Survey faculty members to see what kinds of educational activities they would add if students weren’t performing to expectations on the learning outcomes measured by the HEIghten outcomes assessments. “Crowd-source” potential actions to be taken to improve student learning. 

💡 If faculty buy-in is a concern, start with a dedicated “core nucleus” of faculty who want to work collaboratively with colleagues, assessment practitioners, faculty developers, students, etc. to improve learning. 

💡 Disseminate assessment data in a variety of easily accessible and understandable formats to both students and faculty. 

 

Frame assessment processes as research and scholarship opportunities. 

To promote learning improvement, it is essential to align assessments with student learning outcomes, pedagogy and curricula. In doing so, institutions can begin to frame assessment processes as research opportunities. Often, faculty have research questions regarding the effectiveness of teaching and learning practices that can be addressed through assessment methodologies — that is, assessment processes can be used to answer important research questions related to student learning. 

💡 Encourage faculty to use assessment processes to document and share important findings related to student learning at disciplinary conferences, in peer-reviewed publications, etc. 

💡 Most faculty are intrinsically interested in teaching and learning. Leverage this interest by framing assessment in terms of research questions faculty want to answer about student learning. 

💡  Assessment can be framed as research that is driven by previous results and/or informed by research from applicable literature (including focus groups, faculty experiences, interviews, anecdotal evidence, etc.). 

💡 Focus on one or two key learning outcomes that are of interest to your faculty members. Assessment can become resource-intensive when many assessment initiatives are occurring simultaneously. 

 

Provide adequate support systems and recognition for faculty. 

Using assessment results to influence and support continuous learning improvement requires adequate support from administrative stakeholders, as well as expertise from assessment professionals and faculty developers.  

💡 Consider recognizing faculty members or programs/units within your institution for contributing to assessment initiatives that lead to improved student learning. 

💡 Opportunities for support or recognition may include (but are not limited to) provost’s or president’s awards for assessment practices; including involvement in assessment and learning improvement activities as part of applicable tenure or promotion experiences; offering stipends for time spent promoting and facilitating learning improvement projects; supporting faculty attendance/participation in summer institutes or workshops related to learning improvement; incorporating learning improvement into pre-existing assessment/reporting processes like program reviews, etc. 

 

Ensure HEIghten assessment modules meet the purpose(s) for assessing students. 

When selecting your learning outcomes assessment, make sure the module meets the program’s purpose(s) for assessing its students. Particular attention must be paid to the curricula or co-curricular programs that will impact the skills measured by the HEIghten assessments. By focusing on the extent to which the assessment meets your purposes, you can set goals appropriately and supplement the assessment modules where needed. Determine when the learning outcomes are addressed in the curriculum; this information will help with score interpretation and learning improvement. 

💡 Review the framework paper for your chosen assessment, sample questions and sample score reports. Also, review your curriculum map and substantive co-curricular program or initiatives. 

💡 Request a Confidential Review Copy of the module and review it as a department. 

💡 Map assessment module characteristics and/or content back to your student learning outcomes and/or curricula to ensure modules are aligned with educational experiences at your institution and are meeting your intended assessment purposes. 

 

Examine the current pedagogies and curricula. 

HEIghten assessment module results can be used to influence and inform intentional changes to pedagogies and curricula at your institution. To do so successfully, it is imperative to have intimate knowledge of your curriculum. To make intentional and informed changes, you must understand the pre-existing pedagogies and curricula that were in place — e.g., how would you describe the educational experiences students had that likely contributed to their performance on the assessment modules? 

💡 Collaborate with faculty who teach in general education to determine/outline where in the curriculum students are currently learning specific knowledge, skills and abilities. Does this differ from where faculty feel students should be learning the specific knowledge, skills and abilities? 

💡 Consult with faculty who teach in general education to determine how various specific knowledge, skills and abilities are being taught, what teaching strategies are being used, what additional pedagogical strategies faculty may want to pilot, and how you might modify current pedagogies or curricula or create new ones to give students better/more opportunities to learn the specific knowledge, skills and abilities. 

💡  Should faculty want to pilot new pedagogical strategies or redesign courses, they will most likely need support from administration and appropriate faculty development/training opportunities, in addition to adequate time. 

💡 It’s important to consider co-curricular activities and opportunities to learn when examining the current pedagogies and curricula.  

 

Use HEIghten module scores as part of a customized assessment plan. 

The HEIghten assessment suite can serve a complementary role to locally developed assessments. The HEIghten program staff continues to work with partnering institutions of higher education to implement best-practice tips for creating customized assessment plans — for both accountability and improvement — that are applicable to all colleges. Institutions and programs can use the HEIghten assessment suite as part of a custom assessment solution that best fits their needs. This may involve the need for locally developed assessments in addition to the HEIghten assessments. 

💡 Institutions and programs should take advantage of the transparency provided in the development of the HEIghten modules to ensure that institutions can identify potential areas for augmentation. 

💡 Each assessment module allows users to append their own assessment tools. This process allows you to manage and combine multiple data points from your customized assessment plan to meet your institutional or programmatic needs. 

 

Make sure the group of students to be assessed will provide the necessary information. 

If the purpose of assessing student learning is to make inferences about the performance of groups of students, it is important to assess an adequate number of students from your total population or from the subgroups you’ve identified. These students should be selected in such a way that the students assessed from each group are representative of the group or groups about which you wish to make inferences. The best way to accomplish this purpose is to assess all of the students; however, it is often the least efficient. Nevertheless, it is particularly important not to limit the assessment to students who volunteer to be assessed, unless the program wants information that applies only to those students. Programs should think carefully about the demographic information the students provide as well. This is the information that will be used to split students into different subgroups for comparison. Obtaining more demographic information allows for more comparisons to be made. 

💡 When assessing the quality of your data, begin by comparing your institution’s or program’s demographics to the demographics of the group of students tested. This allows you to place assessment results in appropriate contexts. 
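
A minimal sketch of that comparison is shown below; it contrasts the demographic make-up of the tested group with that of the full student population. The column name and categories are hypothetical.

```python
import pandas as pd

# Hypothetical demographic column; use whatever categories your institution tracks.
population = pd.DataFrame({"class_year": ["FR", "SO", "JR", "SR"] * 250})          # all enrolled students
tested = pd.DataFrame({"class_year": ["FR"] * 40 + ["SO"] * 30 + ["JR"] * 20 + ["SR"] * 10})

comparison = pd.DataFrame({
    "population_pct": population["class_year"].value_counts(normalize=True) * 100,
    "tested_pct": tested["class_year"].value_counts(normalize=True) * 100,
})
print(comparison.round(1))   # large gaps suggest the tested group may not be representative
```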

 

Make sure students are motivated to take each module, and motivated to do well. 

Student motivation in learning outcomes assessment is a serious concern. If the students are not motivated to do well on the module, their module scores will not reflect their actual skill levels; therefore, the students’ scores will not accurately reflect the impact of educational interventions (e.g., pedagogies, curricula) on their learning. In addition, conclusions or inferences drawn from students’ scores may be inaccurate. Student motivation may be of additional concern when administering the assessment in a nonproctored setting. Although this approach allows for more flexibility, administering assessments in a nonproctored environment may impact student effort on the assessment. 

To promote student motivation, the HEIghten assessment suite will provide student credentials. This serves as a permanent credential of the skills demonstrated. Your institution can use the credential as a motivational tool to encourage your students to perform well on the assessment. 

Although the HEIghten suite does not include a measure of student motivation, several psychometrically sound instruments exist for examining students’ self-reported levels of motivation. For instance, if students self-reported giving low effort on the modules, institutions could potentially have justification for excluding their data from subsequent analyses. 
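
If your institution administers such a motivation instrument alongside the modules, the sketch below shows one way a self-reported effort flag might be used to exclude low-effort records before analysis. The column names and threshold are hypothetical assumptions and are not part of the HEIghten data file.

```python
import pandas as pd

# Hypothetical: "self_reported_effort" comes from a separate motivation survey
# (e.g., a 1-5 effort scale), not from the HEIghten score report itself.
results = pd.DataFrame({
    "student_id": [1, 2, 3, 4, 5],
    "scaled_score": [171, 166, 158, 175, 163],
    "self_reported_effort": [5, 4, 1, 5, 2],
})

EFFORT_THRESHOLD = 3                                        # institution-chosen cutoff
filtered = results[results["self_reported_effort"] >= EFFORT_THRESHOLD]

print(f"Excluded {len(results) - len(filtered)} low-effort records")
print(filtered["scaled_score"].mean())                      # mean based on motivated examinees only
```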

Finding the most appropriate motivation technique is a matter of finding the incentive that speaks to your students and is best aligned with your institutional culture. Best practices indicate that often a combination of incentives may be required to encourage students to do their best on assessments. The most effective combinations of motivational incentives strive to achieve a delicate balance between the extrinsic (cash, prizes, giveaways) and the intrinsic (or largely academic, in which pride takes an important role). 

💡 In addition to HEIghten credentials, consider providing students with information on how well they perform compared to their peers at your institution. Also make students aware of resources your campus offers (specific classes, on-campus tutors, co-curricular opportunities, etc.) to help them improve the kinds of skills that are assessed by the HEIghten assessment modules. 

💡 If using a longitudinal assessment design, tell students they will be assessed again and that they will be able to see the progress they have made in developing these important skills over time. 

💡 Tell students that graduate schools and employers value these skills and tell them why. 

💡 Determine the stakes of the assessment and clearly communicate them to students. For instance, a low-stakes motivation technique might be to place a hold on students’ records and course registration if they fail to complete the assessments. Another technique might be to require low-performing students to meet with their program head and academic adviser to discuss their performance on the assessment. Yet another could be simply to provide instructions that explain the importance of these results for your institution (Liu, Bridgeman, & Adler, 2012). 

 

Use rigorous data collection methodologies and appropriate data analyses. 

Both cross-sectional and longitudinal data from HEIghten assessment modules can be useful for assessing and improving student learning. However, longitudinal data (e.g., pre-test/post-test, pre-intervention/post-intervention) from HEIghten modules allow institutional stakeholders to gauge student development over time and to provide evidence of improvements in learning (e.g., from the pre-intervention “baseline” to after students experience the modified or redesigned educational intervention). 

💡 Use trained proctors and protocols or scripts when administering assessments. This ensures standardization across assessment sessions within the institution. Moreover, proctor scripts can be modified to convey the importance of student engagement in the assessment process. 

💡 Collect pre-test data before implementing new pedagogies or curricula, making pedagogical or curricular modifications, etc. It is equally important to collect post-test data using the same assessment modules and data collection processes after students have experienced the new or modified pedagogies/curricula. The combination of HEIghten pre- and post-test data allows faculty to demonstrate learning improvement (a minimal sketch of such a pre/post comparison appears after these tips). 

💡 Note the stability of your pre-test data or data collected at the freshman level. If freshman performance is stable over time, you may want to shift assessment resources to the post-test or target larger subgroups of freshmen and juniors/seniors. 

💡 Keep faculty research questions at the forefront of data analysis and interpretation. 

💡 Share results with students and faculty.  
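
The sketch below illustrates the pre-/post-test logic referenced in the tips above. It is a minimal, hypothetical example: the paired scores are invented, and the choice of a standardized mean gain (Cohen’s d_z) is one of several reasonable analyses an institution might run on its own data.

```python
# A minimal sketch of a paired pre-test/post-test comparison, assuming the
# same students took the same HEIghten module before and after a curricular
# change. The scores are hypothetical; only the standard library is used.

from statistics import mean, stdev

pre  = [152, 160, 148, 155, 163, 150, 158, 161]
post = [158, 164, 151, 162, 166, 157, 160, 168]

gains = [after - before for before, after in zip(pre, post)]

mean_gain = mean(gains)
# Standardized mean gain for paired data (Cohen's d_z): the mean of the
# score differences divided by the standard deviation of those differences.
effect_size = mean_gain / stdev(gains)

print(f"Mean pre-test score:     {mean(pre):.1f}")
print(f"Mean post-test score:    {mean(post):.1f}")
print(f"Mean gain:               {mean_gain:.1f} scaled-score points")
print(f"Standardized gain (d_z): {effect_size:.2f}")
```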

 

Use results from HEIghten modules to design or modify pedagogies and curricula. 

When HEIghten assessment modules are well aligned with your general education student learning outcomes, results can be used to help faculty implement new types of pedagogies or curricula. They can also aid faculty with pedagogical and curricular modifications. For instance, you can request descriptive, item-level data for a particular cohort of students to further pinpoint strengths and weaknesses in the curriculum and make modifications accordingly. If assessment results show that students are not meeting expectations for a particular learning outcome, you can target pedagogical and curricular modifications to the courses or learning experiences that emphasize the content or skills related to that outcome. 
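
As a simple illustration of how item-level data might be summarized, the sketch below groups hypothetical percent-correct values by learning outcome and ranks the outcomes from weakest to strongest. The outcome names, item identifiers, and values are placeholders, not actual HEIghten content; the format of any item-level data you request would be determined when the data are delivered.

```python
# A minimal sketch of summarizing descriptive, item-level results by learning
# outcome to locate relative strengths and weaknesses. The outcome names,
# item identifiers, and percent-correct values are hypothetical placeholders.

from collections import defaultdict

# (learning outcome, item id, proportion of the cohort answering correctly)
item_results = [
    ("Analyze arguments", "item_01", 0.74),
    ("Analyze arguments", "item_02", 0.69),
    ("Evaluate evidence", "item_03", 0.51),
    ("Evaluate evidence", "item_04", 0.48),
    ("Draw conclusions",  "item_05", 0.80),
]

by_outcome = defaultdict(list)
for outcome, _item, p_correct in item_results:
    by_outcome[outcome].append(p_correct)

# Rank outcomes from weakest to strongest average performance so that
# curricular attention can be focused on the weakest areas first.
for outcome, values in sorted(by_outcome.items(), key=lambda kv: sum(kv[1]) / len(kv[1])):
    avg = sum(values) / len(values)
    print(f"{outcome:<20} average percent correct: {avg:.0%}")
```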

💡 There are two ways to go about identifying educational activities that can be implemented in the curriculum to improve learning. First, consult the research literature on each of the skills you are measuring. Second, crowd-source ideas from faculty members who understand your curriculum and your students. 

💡 Consult with faculty development experts to help faculty articulate their program theories, redesign courses, examine course scaffolding, implement evidence-driven pedagogies, etc. 

Back to Top

Contextual Factors Impacting HEIghten Score Interpretations 

In addition to the aforementioned considerations and tips for best practices in assessment, faculty should consider certain contextual factors when making decisions or modifications based on HEIghten module assessment results. These include the following: 

 

Number of general education courses students have completed 

The HEIghten outcomes assessments are intended to measure cross-disciplinary general education skills (i.e., critical thinking, written communication, quantitative literacy, civic competency and engagement, and intercultural competency and diversity). It is important to consider how many applicable general education courses students have completed prior to taking the assessment. Students who have completed more hours of coursework in these general education courses should, in theory, earn higher scores on the HEIghten assessments than students who have completed fewer or no such courses. Administering the HEIghten assessments to students before they have completed their general education coursework and again after they have completed most or all of it (e.g., 1.5 to 2 years later) can help your institution demonstrate the extent to which your general education curriculum positively influences students’ growth or development over time. 

 

Knowledge, skills, and attitudes students bring with them 

Another important contextual factor to consider is the knowledge, skills, abilities, and attitudes that students bring with them to their educational environments. Different students will enter your programs and courses with varying levels of these skills and attitudes. A longitudinal data collection methodology can help you track student development while acknowledging the skills and abilities with which students enter your institution. However, this methodology requires users to understand retention and attrition in their institution or program. 

Use of the HEIghten assessment suite can aid your institution in understanding what knowledge, skills, and attitudes students enter with and how much they have gained through general education coursework. The HEIghten assessments can be used as part of a longitudinal research design that assesses students’ knowledge, skills, or attitudes before they begin any general education coursework and again once they have successfully completed most or all of their general education classes. 

 

Special groups such as transfer, non-traditional and developmental students 

As higher education expands, institutions must serve an increasingly diverse population of students, including those who have transferred from two- or four-year institutions, those who entered higher education through “non-traditional” pathways with prior learning experiences, and those who require developmental educational experiences to achieve success. These are important subpopulations of students to assess: institutions need to gauge these students’ skills and abilities while also understanding how to help improve their learning. Moreover, ignoring or excluding these subpopulations may limit or compromise the interpretations or conclusions institutions can draw from their assessment results. Institutions should consider creating infrastructures and using methodologies that allow for the assessment of these students. 

Back to Top

📕 References 

Bikson, T. K., Treverton, G. F., Moini, J., & Lindstrom, G. (2003). New challenges for international leadership: Lessons from organizations with global missions. Santa Monica, CA: RAND. Retrieved from http://www.rand.org/content/dam/rand/pubs/monograph_reports/2005/MR1670.pdf 

Casner-Lotto, J., & Barrington, L. (2006). Are they really ready to work? Employers’ perspectives on the basic knowledge and applied skills of new entrants to the 21st century U.S. workforce. Retrieved from http://www.p21.org/storage/documents/FINAL_REPORT_PDF09-29-06.pdf 

Fellows, K. L., Goedde, S. D., & Schwichtenberg, E. J. (2014). What’s your CQ? A thought leadership exploration of cultural intelligence in contemporary institutions of higher learning. Romanian Journal of Communication and Public Relations/Revista Română de Comunicare și Relații Publice, 2, 13–34. 

Griffith, R. L., Wolfeld, L., Armon, B. K., Rios, J., & Liu, O. L. (2016). Assessing intercultural competence in higher education: Existing research and future directions (ETS Research Report No. RR-16-25). Princeton, NJ: Educational Testing Service. 

Hart Research Associates. (2015). Falling short? College learning and career success. Washington, DC: Association of American Colleges and Universities. 

Hart Research Associates. (2013). It takes more than a major: Employer priorities for college learning and student success. Washington, DC: Association of American Colleges and Universities. 

Hart Research Associates. (2010). Raising the bar: Employers’ views on college learning in the wake of the economic downturn. Washington, DC: Association of American Colleges and Universities. 

Liu, O. L., Bridgeman, B., & Adler, R. M. (2012). Measuring learning outcomes in higher education: Motivation matters. Educational Researcher, 41(9), 352–362. 

Liu, O. L., Frankel, L., & Roohr, K. C. (2014). Assessing critical thinking in higher education: Current state and directions for next-generation assessment (ETS Research Report No. RR-14-10). Princeton, NJ: Educational Testing Service. 

Millett, C. M., Payne, D. G., Dwyer, C. A., Stickler, L. M., & Alexiou, J. J. (2008). A culture of evidence: An evidence-centered approach to accountability for student learning outcomes. Princeton, NJ: Educational Testing Service. 

Perry, L., Stoner, K. R., Stoner, L., Wadsworth, D., Page, R., & Tarrant, M. A. (2016). The importance of global citizenship to higher education: The role of short-term study abroad. Cambridge Journal of Education and Science, 1, 754–769. 

Peter D. Hart Research Associates. (2008). How should colleges assess and improve student learning? Employers’ views on the accountability challenge. Washington, DC: Association of American Colleges and Universities. 

Peter D. Hart Research Associates. (2006). How should colleges prepare students to succeed in today’s global economy? Washington, DC: Association of American Colleges and Universities. 

Roohr, K. C., Graf, E. A., & Liu, O. L. (2014). Assessing quantitative literacy in higher education: An overview of existing research and assessments with recommendations for next-generation assessment (ETS Research Report No. RR-14-22). Princeton, NJ: Educational Testing Service. doi:10.1002/ets2.12024. 

Sparks, J. R., Song, Y., Brantley, J. W., & Liu, O. L. (2014). Assessing written communication in higher education: Review and recommendations for next-generation assessment (ETS Research Report No. RR-14-37). Princeton, NJ: Educational Testing Service. 

Swiggett, W. D. (2018). Providing threshold score recommendations for the second phase of assessments for the HEIghten® outcomes assessment suite: A standard-setting study (ETS Research Memorandum). Princeton, NJ: Educational Testing Service. 

Swiggett, W. D. (2017). Providing threshold score recommendations for the first three tests of the HEIghten outcomes assessment suite: A standard-setting study (ETS Research Memorandum No. RM-17-06). Princeton, NJ: Educational Testing Service. 

Torney-Purta, J., Cabrera, J. C., Roohr, K. C., Liu, O. L., & Rios, J. A. (2015). Assessing civic competency and engagement in higher education: Research background, frameworks, and directions for next-generation assessment (ETS Research Report No. RR-15-34). Princeton, NJ: Educational Testing Service. 

Back to Top