Outcome (j) Recommendation Spring 2013

It is easiest to compare the results for Sections 1 and 2 to the Spring 2012 results (http://www.sjsu.edu/cs/assessment/bscs/reports/s12/), as these two instructors (Jeffrey Smith and Katerina Potika) are the ones who taught in both semesters. However, in preparing this report, I noticed that they had included all students in their statistics, while for the purpose of program assessment we have traditionally recorded results for passing students only. Prof. Smith was able to recalculate his numbers for 2013, as well as half of his outcomes for 2012, while Dr. Potika was not. For these instructors, computed over all students, the percentage of students failing to perform at the satisfactory level (or better) was roughly cut in half, particularly in the areas noted as needing work in Spring 2012. Prof. Smith notes that he spent more class time on those topics and used quizzes within our Course Management System (Canvas) to give students sample questions on algorithm analysis. In particular, Prof. Smith's numbers in the Spring 2012 report should be modified to subtract out 9 failing students. For the four indicators he was able to recalculate (A1, A2, J2, J3), those 9 students account for 36 indicator scores (9 students across 4 indicators), of which 5 were exemplary and 31 were beginning (that is, unsatisfactory); removing them improves his Spring 2012 numbers for passing students.

For the numbers reported in Sections 2.1 and 2.2 of this report, we have included only passing students, except for the students of Dr. Potika (Section 1), who included all students, passing or not, in both Spring 2012 and Spring 2013. For Spring 2013 she had only 4 students who did not earn a C- or better in her section, and in Spring 2012 only 6 in her two sections combined. It is her belief that these students generally did very poorly on the indicators, but she was unable to collect that data retroactively. Because she generally passes a higher percentage of students (roughly 90% vs. 70-75% for Profs. Smith and Taylor), this should not make as large a difference in her numbers as it would for either of the other instructors. However, it does raise the issue that overall pass rates should perhaps be discussed among those teaching this course, which is used as a gateway/filter for our upper-division courses. For the discussion that follows, I generally treat the numbers above as statistics for passing students.

Considering only Sections 1 and 2, which (by instructor) are easier to compare to sections taught in Spring 2012, results were promising, with improvements in almost every indicator. Including the sections taught by me (Taylor) is more difficult. I have taught more sections of CS146 over the past 10 years than anybody else in the department, but I was on leave for research in academic year 2011-2012, and thus didn't teach any of the sections evaluated last year.

While the improvement in my colleagues' students' performance on the indicators over last year appears significant, I believe that we should reconsider how we treat the individual indicators. This applies to our assessment process as a whole, not only to CS146: for each student, the scores on the individual indicators for a given outcome should be combined into a single score for the outcome as a whole. Students who are able to answer the basic indicators can be marked as satisfactory, while those who answer the basic indicators and additionally show strong work on one advanced indicator might be marked in the higher category. This would allow each instructor more flexibility in asking the type of test questions they prefer, while allowing the overall scoring rubric to normalize for differences in difficulty between instructors' questions. I describe the issue more fully in Section 4.
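As a rough illustration of what such a combined score might look like, the short Python sketch below groups hypothetical indicators into "basic" and "advanced" sets and maps a student's per-indicator marks to a single outcome category. The indicator names, the grouping, and the thresholds are placeholders for discussion only, not the rubric actually used in this report.

    BASIC = {"B1", "B2"}         # hypothetical "basic" indicators for one outcome
    ADVANCED = {"ADV1", "ADV2"}  # hypothetical "advanced" indicators

    def outcome_score(marks):
        # marks maps an indicator name to "beginning", "satisfactory", or "exemplary"
        basic_met = all(marks.get(i, "beginning") != "beginning" for i in BASIC)
        advanced_strong = any(marks.get(i, "beginning") == "exemplary" for i in ADVANCED)
        if not basic_met:
            return "beginning"      # unsatisfactory on the outcome as a whole
        if advanced_strong:
            return "exemplary"      # basics plus strong work on one advanced indicator
        return "satisfactory"       # basics only

    # Example: satisfactory on the basics, exemplary on one advanced indicator
    print(outcome_score({"B1": "satisfactory", "B2": "exemplary", "ADV1": "exemplary"}))

One intended benefit of scoring this way is that two instructors could use quite different test questions for the same indicator, with the combined rubric absorbing much of the difference in difficulty.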

Additionally, in Spring 2013, I (Taylor) experimented with some "flipped classroom" techniques. The results were promising, but the approach was simply too work-intensive to implement fully during the semester. I plan to do much of the work needed to reorganize the course (flipping perhaps half of it) over the summer. For my sections (and others, if their instructors like), this will allow for more lab work and problem solving in class, working with students towards better solutions, rather than spending most of class time on the "stand and lecture" model. It does seem that many students are simply not ready for anything but simple programming tasks when they enter the course, and the "sink-or-swim" approach of just giving them bigger assignments is causing problems in the course (some students spend too much time on the assignments, while others cheat). Further, when students do not arrive at an answer while problem solving, delayed feedback is simply not as valuable as real-time help during class. I expect a fairly serious change in the course for next semester: besides the at-home lectures to watch, new software may allow for the introduction of some Mastery-Teaching concepts into the course. (This is less certain than the flipped lectures, and will likely take longer than the summer to develop, but I will at least try to start.) Others teaching the course in the future may or may not adopt these changes, perhaps depending on their success in Fall 2013.