Assessment Process

 

What to Assess

Program assessment is like a checkup for your program: it gives you the data to determine whether all parts of the program are working as intended and whether the program is headed in the right direction. The following broad categories should be examined in the assessment plan:

  • Program learning outcomes. Program LOs articulate the learning that graduates of the program should achieve. They are aligned with the mission and purpose of the program, as well as the University Learning Goals (Senate Policy S13-2) for all SJSU graduates. They reflect the needs of employers and of society, and they are supported by learning objectives at the course level. For help authoring or updating Program LOs, please refer to the guidelines on Determining Program Learning Objectives. The latest LOs for all programs at SJSU, by college and department, can be found in the Reports Archives.

    Program LOs should be mapped to the University Learning Goals, and the courses that support each Program LO should be identified (a minimal curriculum-map sketch appears after this list).

  • Department Operations. Effective department operations support student learning and achievement. Suggested areas might include: graduation rates (and other indirect measures of student success), advising, facilities, faculty, administrative support, and overall student experience.

  • Strengths and weaknesses. Unique strengths and weaknesses of a program not in the above categories should also be captured by assessment. These might include things such as faculty achievements, resources, external funding, awards, employment rate of graduates, high impact practices in the curriculum, or other artifacts.
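
For programs that want a concrete starting point, a curriculum map can be kept as simple structured data. The Python sketch below is purely illustrative: the PLO wording, ULG labels, and course numbers are placeholders rather than actual SJSU goals or courses, and a spreadsheet or table works just as well.

```python
# Hypothetical curriculum map: each Program Learning Outcome (PLO) is linked
# to the University Learning Goals (ULGs) it supports and to the courses that
# address it. All PLO, ULG, and course names are illustrative placeholders.
curriculum_map = {
    "PLO 1: Apply disciplinary theory to solve problems": {
        "ulgs": ["Specialized Knowledge", "Intellectual Skills"],
        "courses": ["DEPT 101", "DEPT 120", "DEPT 190 (capstone)"],
    },
    "PLO 2: Communicate results to professional audiences": {
        "ulgs": ["Intellectual Skills"],
        "courses": ["DEPT 100W", "DEPT 190 (capstone)"],
    },
}

# Quick consistency check: flag any PLO that no course currently supports.
for plo, links in curriculum_map.items():
    if not links["courses"]:
        print(f"Warning: no courses mapped to '{plo}'")
```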

 

Collecting Evidence

Assessment methods will vary depending on the nature of the information sought and the program being assessed. For example, methods that are appropriate for an engineering program might not work for a dance program, and those that work well for a small program might be infeasible for a large one. While the exact methods used are for the faculty of the program to decide, suggestions and ideas are listed below:

  • Discipline-specific knowledge, skills, and abilities: Discipline-specific outcomes should be assessed using direct methods deemed appropriate by disciplinary faculty. Suggestions for student work to examine include targeted course assignments, portfolios, capstone projects, or an exit examination. If you are using portfolios or capstone projects, you might take a look at the applicable WASC rubrics. If your discipline has a meaningful licensing exam (e.g. engineering, accounting, nursing, psychology) or another benchmarking exam (e.g. business, chemistry), you might also look into the role it can play in program assessment. (A brief sketch of summarizing rubric scores appears after this list.)

  • Writing and Critical Thinking: Resources for assessing proficiency in writing and critical thinking include the corresponding LEAP rubrics, the CLA+ exam, and the Criterion software by ETS. The LEAP rubrics include written communication and critical thinking; they were developed and tested by faculty teams through the AAC&U’s VALUE initiative, and they can be used as-is or customized for a particular program. The CLA+ measures writing and critical thinking skills. It has the added advantages of accounting for students’ SAT scores in the results (thus accounting for differences in student ability from institution to institution) and of benchmarking against other comparable institutions. Lastly, Criterion is software that provides near-instantaneous feedback on student papers in the areas of spelling, grammar, style, and organization. SJSU currently has a subscription and offers training to interested instructors.

  • Information literacy: The SJSU Library can assist in the assessment of information literacy and has done so for a number of programs; a list of prior assessments is available.

  • Individual and social responsibility: Outcomes in this category are arguably harder to assess reliably; they target behavior and beliefs rather than knowledge or skills, and student responses can be somewhat “coachable”. Nevertheless, LEAP rubrics exist for civic knowledge and engagement, intercultural knowledge, and ethical reasoning (among others), and can be used as a starting point. In addition, upper-division GE courses, required of all undergraduates at SJSU, cover these areas to some extent.

  • Department Operations: Indirect measures of student success such as graduation and retention rates, enrollment, and student-to-faculty ratio can be downloaded from the IEA website. A subset of this data comprises the Required Data Elements for program planning; more information is available in the RDE guide. This data can address questions about the demand for a program, the efficiency with which students progress toward a degree, and the other programs served. A student exit survey is also appropriate here to gather feedback on the student experience and on various areas of department operations.

  • Strengths and Weaknesses: Other strengths and weaknesses of a program can be recorded from faculty feedback, alumni and/or employer surveys, feedback from an advisory board, or other appropriate sources.
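
Once rubric scores have been collected (for example, from a LEAP rubric applied to a sample of capstone papers), summarizing them takes only a few lines. The Python sketch below uses made-up scores; the 0–4 scale and the benchmark of 3 are assumptions for illustration, not a prescribed standard.

```python
from collections import Counter

# Hypothetical rubric scores (0-4 scale) for one rubric dimension,
# e.g. "written communication" scored on a sample of capstone papers.
scores = [4, 3, 3, 2, 4, 1, 3, 2, 3, 4, 2, 3]

benchmark = 3  # assumed local target: "proficient" or better

distribution = Counter(scores)
met = sum(1 for s in scores if s >= benchmark)

print("Score distribution:", dict(sorted(distribution.items())))
print(f"{met} of {len(scores)} papers ({met / len(scores):.0%}) met the benchmark")
```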

 

Closing the Loop

Once sound data is being collected at regular intervals, changes to improve student learning can be implemented and their effects measured. Perhaps an area requiring improvement was revealed in the latest assessment data, or perhaps there are action items from the latest program review. Propose a change to instruction, an assignment, or a policy. Implement it. Look at the next round of data and assess improvement. Repeat if necessary.

Strictly speaking, there are some commonly accepted predictors of student success that should be addressed in analyses. These include ability (measured by high school GPA and/or SAT score), ethnicity (underrepresented minority status), and economic background (often measured by Pell grant eligibility). For example, it may appear that deliberate interventions resulted in increased graduation and retention rates. To be thorough, however, one should also check that the student composition has stayed the same. If, for example, students’ economic background or grades were also rising (due to increases in tuition, more out-of-state students, or impaction, say), this might be just as likely an explanation as the intervention.
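
As a concrete illustration of this caveat, the Python sketch below uses made-up cohort numbers to show the kind of side-by-side check intended: retention appears to rise after an intervention, but the Pell-eligible share falls at the same time, so part of the gain may reflect a change in student composition.

```python
# Hypothetical cohort data: retention rose, but the Pell-eligible share fell.
cohorts = {
    "Cohort A (before intervention)": {"retained": 410, "entered": 500, "pell_eligible": 230},
    "Cohort B (after intervention)":  {"retained": 445, "entered": 500, "pell_eligible": 170},
}

for name, c in cohorts.items():
    retention = c["retained"] / c["entered"]
    pell_share = c["pell_eligible"] / c["entered"]
    print(f"{name}: retention {retention:.0%}, Pell-eligible {pell_share:.0%}")

# If the Pell-eligible share shifted this much between cohorts, part of the
# retention gain may reflect a change in who enrolled rather than the
# intervention; comparing retention within the Pell-eligible group alone
# is a simple next check.
```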

 

Other Considerations

  • Benchmarking

  • Streamlining

  • Learning vs. Achievement

  • Direct vs. Indirect Assessment

 

Submitting Annual Reports

Annual assessment reports should be submitted to the Assessment Facilitator in your college, and cc’d to the current Director of Assessment, by June 1 of each year.

 

Questions?

If you have questions on assessment, please do not hesitate to contact the Assessment Facilitator in your college, or the current Director of Assessment.