Step-by-Step Process for Program Assessment
Program assessment is required annually for all SJSU degree programs. The annual assessment report should cover a deliberate subset of the following areas of program assessment: soundness of program learning objectives, student achievement of program learning objectives, student preparedness upon graduation, and department operations, as well as action items from the latest program planning cycle.
Annual Assessment Reports are due on March 15 of each academic year.
Learn more about Nuventive, SJSU's new assessment software.
Program assessment is like a checkup for your program: it gives you the data to determine whether all parts of the program are working as intended, and whether the program is headed in the intended direction.
Questions Answered by Assessment. Program assessment can take many forms, but the questions that it should strive to answer are the following:
- Do the learning outcomes of the program meet disciplinary and degree standards?
- Are students consistently achieving the learning outcomes of the program?
- Are students prepared for the workforce and society upon graduating?
- Is the program being run well?
The following broad categories should be examined in the assessment plan:
Program Learning Outcomes (PLOs) - Resources. PLOs articulate the learning that should be achieved by graduates of the program. They are aligned with the mission and purpose of the program, as well as the University Learning Goals (University Policy S17-12 [pdf]) for all graduates of SJSU. They reflect the needs of employers and of society. They are supported by course learning outcomes (CLOs). Courses that support PLOs should be identified in a Curriculum Map.
Department Operations. Effective department operations support student learning and achievement. Suggested areas might include: graduation rates (and other indirect measures of student success), advising, facilities, faculty, administrative support, and overall student experience.
Strengths and weaknesses. Unique strengths and weaknesses of a program not in the above categories should also be captured by assessment. These might include things such as faculty achievements, resources, external funding, awards, employment rate of graduates, high impact practices in the curriculum, or other artifacts.
An assessment schedule that cycles through all areas of program review is developed and followed. A sample assessment schedule table is included for illustration purposes.
| Assessment Year | PLO1 | PLO2 | PLO3 | IR Data | Exit Survey | Alumni Survey | Licensing Exam Results |
| --- | --- | --- | --- | --- | --- | --- | --- |
C = Data collected
I = Improvement implemented (if necessary)
Questions to Consider
- Who is designated to collect, analyze, and interpret student learning outcome data?
- How will the data/findings be disseminated?
- How will data/findings be reported: quantitatively or qualitatively?
- How will the ULGs map to your PLOs? In which course(s) will the PLO(s) be assessed? What assessment activity/assignment is used? What assessment tool will be used to measure PLO success? You might consider a table to identify the answers to these questions.
Source: Adapted from the CSU System-wide Assessment Plan Template [docx].
Assessment methods used will vary depending on the nature of the information sought and the program being assessed. For example, methods that are appropriate for an engineering program might not work for a dance program; those that work well for a small program might be infeasible for a large one. While the exact methods used are for the faculty of the program to decide, suggestions and ideas are listed below:
Discipline-Specific Knowledge, Skills, and Abilities. Discipline-specific outcomes should be assessed using direct methods deemed appropriate by disciplinary faculty. Suggestions for student work to examine include targeted course assignments, portfolios, capstone projects, or an exit examination. If you are using portfolios or capstone projects, you might take a look at the applicable WASC rubrics [pdf]. If your discipline has a meaningful licensing exam (e.g., engineering, accounting, nursing, psychology) or some other benchmarking exam (e.g., business, chemistry), you might also look into the role it can play in program assessment.
Writing and Critical Thinking. Some resources for assessing proficiency in writing and critical thinking include the corresponding LEAP rubrics, the CLA+ Exam, and the Criterion software by ETS.
The LEAP rubrics include written communication and critical thinking; they can be used as-is, or customized for a particular program.
The CLA+ Exam measures writing and critical thinking skills. It has the added advantages of adjusting results for students’ SAT scores (thus accounting for differences in student ability from institution to institution) and of benchmarking against other comparable institutions.
Criterion by ETS is software that provides near-instantaneous feedback on student papers in the areas of spelling, grammar, style, and organization. SJSU currently has a subscription and offers training to interested instructors.
Information Literacy. The SJSU Library can assist in the assessment of information literacy and has done so for a number of programs. The Library also provides a number of information literacy resources.
Individual and Social Responsibility. Outcomes that fall in this category are arguably harder to assess reliably. They do not target knowledge or skills, but rather behavior and beliefs. They can be somewhat “coachable”. Nevertheless, LEAP rubrics exist for civic knowledge and engagement, intercultural knowledge, and ethical reasoning (among others), and can be used as a starting point. In addition, upper division GE courses, required for all undergraduates at SJSU, cover these areas to some extent.
Department Operations. Indirect measures of student success such as graduation and retention rates, enrollment, and student-to-faculty ratio can be downloaded from the Institutional Research website. These data can be used to address questions regarding the demand for a program, the efficiency with which students progress toward a degree, and other programs served. A student exit survey (Sample Library Information Science Student Exit Survey) is also appropriate here to get feedback on the student experience and various areas of department operations.
A subset of this data comprises the Required Data Elements (RDE) for program planning.
Graduates. If your program assessment requires feedback from alumni, employers, and/or an advisory board, a survey can be administered. Contact Institutional Research for assistance in administering surveys.
Strengths and Weaknesses. Other strengths and weaknesses of a program can be recorded from faculty feedback, alumni and/or employer surveys, feedback from an advisory board, or other appropriate sources.
Other Considerations.
- Learning vs. Achievement
- Direct vs. Indirect Assessment
Closing the loop on assessment refers to deliberate actions taken to implement and evaluate program improvements, or to an affirmation that a practice, course, or program is effective. The action plan from the previous program planning review should be addressed here, in addition to other areas requested by the annual assessment template or those revealed during the annual assessment process. Propose a change to instruction, an assignment, or policy. Implement it. Look at the next round of data and assess improvement. Repeat if necessary.
Once sound data is being collected at regular intervals, changes to improve student learning can be implemented and their effects measured. Perhaps an area requiring improvement was revealed in the latest assessment data, or perhaps there are action items from the latest program review.
There are some commonly accepted predictors of student success that should be addressed in analyses. These include ability (measured by high school GPA and/or SAT score), ethnicity (underrepresented minority status or not), and economic background (often measured by Pell Grant eligibility). For example, it may appear that deliberate interventions resulted in increased graduation and retention rates. To be thorough, however, one should also check that the student composition has remained the same. If, for example, students’ economic backgrounds or entering grades were also improving (due to increases in tuition, more out-of-state students, or impaction, say), this might be just as likely an explanation as the intervention.
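The composition check described above can be illustrated with a short sketch. All counts below are invented for the example (in practice they would come from IR data), and Pell eligibility is used as the stratifying variable purely for illustration: the overall graduation rate rises between two cohorts, yet the rate within each group is flat, so the apparent gain is entirely a shift in student mix rather than an intervention effect.

```python
# Hypothetical composition check: did the graduation rate rise because of an
# intervention, or because the student mix changed? All counts are invented.

def grad_rate(graduated: int, enrolled: int) -> float:
    """Graduation rate as a fraction of the entering cohort."""
    return graduated / enrolled

# (graduated, enrolled) counts by Pell eligibility, before and after a change.
before = {"pell": (40, 100), "non_pell": (70, 100)}
after = {"pell": (20, 50), "non_pell": (105, 150)}

def overall(cohort: dict) -> float:
    """Overall graduation rate pooled across all groups in a cohort."""
    grads = sum(g for g, _ in cohort.values())
    total = sum(n for _, n in cohort.values())
    return grad_rate(grads, total)

print(f"overall before: {overall(before):.3f}")  # 0.550
print(f"overall after:  {overall(after):.3f}")   # 0.625

# Stratified rates tell a different story: within each group the rate is
# unchanged, so the overall gain reflects a shift toward the group that
# already graduated at a higher rate.
for group in ("pell", "non_pell"):
    print(f"{group}: before {grad_rate(*before[group]):.2f}, "
          f"after {grad_rate(*after[group]):.2f}")
```

This is an instance of Simpson's paradox: pooled rates can move even when every subgroup's rate is constant, which is why analyses of intervention effects should report rates stratified by the predictors listed above.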