3.2. Mining the CS151 Report

Kim submitted a thorough report on how many of her CS151 students mastered outcomes SLO6-I, SLO8-I, and SLO13-I. Apparently the corresponding report from two years earlier was "unusable" according to the Assessment Committee's notes. It would therefore be nice if some of this conscientiously written report could serve as a baseline for future task-based assessments of these outcomes.

Kim associates a list of tasks with each outcome. Most of these tasks seem reasonable, and the Assessment Committee might want to consider adopting some of them for future task-based assessments of these outcomes. Kim then cites several problems (homework assignments, exam questions, projects) that tested her students' abilities to perform these tasks.

For example, for SLO8-I (The ability to carry out object-oriented design and to apply design patterns), Kim defines three tasks:

T1: Represent the design of a system or component with a UML class diagram.

T2: Translate a UML class diagram into Java class declarations.

T3: Employ at least three different design patterns. For example, three instances in which students were required to recognize the applicability of a particular design pattern and then correctly instantiate that pattern, either as a UML class diagram, as code, or both.

She then defines three problems, although the second problem has three parts, so there are really five problems. Each problem tests a student's ability to perform one or more of the tasks. The following table shows her results:

Problem   Tasks          % mastered
P1        T1              90.00%
P2.1      T3              51.60%
P2.2      T3              51.60%
P2.3      T3             100.00%
P3        T1, T2, & T3   100.00%

While the results for tasks T1 and T2 appear to be encouraging, it is difficult to say if students mastered task T3 or not. How should this data be interpreted?

She defines a single task for SLO6-I (The ability to design and implement graphical user interfaces):

T1: Design and implement a GUI application that involves the use of multiple GUI components, layout management, and event handling.

She then cites four problems where this task was tested and reports mixed results:

Problem   % mastered
P1         79.40%
P2         64.70%
P3         73.50%
P4         64.50%
avg        70.53%

Again, it is difficult to interpret these results. The problems seem very easy, almost at the CS46A level, so these numbers seem low for problems of that difficulty.
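For reference, the reported average is just the unweighted mean of the four problem scores; a quick check (scores from the table above; no weighting by problem difficulty or enrollment is implied):

```python
# Recompute the unweighted mean of the four SLO6-I problem scores.
scores = [79.40, 64.70, 73.50, 64.50]
avg = sum(scores) / len(scores)
print(f"avg: {avg:.2f}%")  # 282.10 / 4 = 70.525, i.e. the reported 70.53%
```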

Finally, Kim defines two tasks for SLO13-I (The ability to solve computing problems as part of a team):

T1: Conduct a team-based project to design and implement an application that involves about 10 classes, GUI programming, and design patterns.

T2: Employ a mechanism that allows the instructor to judge load balancing and degree of participation among team members.

Kim cites only the team project as the basis for assessing, presumably, both of these tasks. She reports that 58.3% of her students mastered these tasks (25% partially mastered them).

Task T2 seems a little out of place. It sounds more like a task for instructors (and a difficult one at that). If only 58.3% of her students mastered task T1, then this would be worrisome. Of course, we have nothing to compare this data with; the figures do include students who presumably did not pass the class (the new rubrics will only take into account students who received a C- or better in the class); and CS151 is the first place in the curriculum where team projects are required.