Evidence of Learning: Courses within the Program

| Measurable Learning Outcome (Students will . . .) | Method of Measurement (Direct and Indirect Measures) | Threshold for Evidence of Student Learning |
| --- | --- | --- |
| Learning Outcome 1: Demonstrate the ability to apply knowledge of math, science, and engineering. | Measure 1: This outcome is assessed in multiple courses (see the curriculum map table). All courses require projects and exams to assess how well students have learned the material. | Measure 1: Students must pass the projects and exams in order to pass the respective courses with a grade of B- or higher. |
| Learning Outcome 2: Demonstrate the ability to design a system, component, or process. | Measure 1: This outcome is assessed in multiple courses (see the curriculum map table). All courses require projects and exams to assess how well students have learned the material. | Measure 1: Students must pass the projects and exams in order to pass the respective courses with a grade of B- or higher. |
| Learning Outcome 3: Demonstrate the ability to identify, formulate, and solve computer science problems. | Measure 1: This outcome is assessed in multiple courses (see the curriculum map table). All courses require projects and exams to assess how well students have learned the material. | Measure 1: Students must pass the projects and exams in order to pass the respective courses with a grade of B- or higher. |
| Learning Outcome 4: Demonstrate the ability to apply master's level knowledge to the specialized area of computer science. | Measure 2: Create a formal report and presentation on a particular CS subject. | Measure 2: A formal grade from CS 6000. |
The assessment plan is executed using two types of instruments:
1. Course assessment rubrics.
2. Project thesis/defense assessment.
These assessment instruments are described below.
Course assessment rubrics
The course assessment rubric is a direct assessment instrument that articulates the expectations for student performance. The rubric consists of three elements:
• Dimensions (performance indicators)
• Scale (levels of performance) of 1, 2, 3 or 4
• Descriptors (descriptions of the levels of performance)
Each course in the MSCS curriculum grid has an associated assessment rubric that measures students’ performance with respect to the four student learning outcomes listed in Section C. Through the continuous use of these rubrics, assessment at both the course and program level is an ongoing process that provides a measurable means of program improvement.
The course assessment rubric works as follows. At the end of each semester, the instructor scores each performance indicator (PI) for the course on the four-point scale. The rubrics are designed with a “trigger point”:
• If a PI scores 1 (unsatisfactory) or 2 (developing), the instructor initiates action to make course-level changes with respect to that PI.
• If a PI scores 3 (satisfactory) or 4 (exemplary), the instructor takes no action.
The mean PI score for each course and section* is then transferred to a program-level “continuous course improvement” record, a spreadsheet that summarizes the mean PI scores. This record uses a trigger point of 2.67: if a mean PI score falls below it, the program faculty must make significant changes to the course or the program to remedy the problem. Thus, depending on which trigger points are activated, both the instructor and the program faculty have input to the continuous improvement process.
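The two-tier trigger logic above can be sketched in a few lines of Python. This is purely an illustration of the arithmetic, not the program's actual tooling; the PI names and scores below are hypothetical examples.

```python
# Illustrative sketch of the course-rubric trigger points described above.
# All PI names and scores are hypothetical.

COURSE_TRIGGER = 2        # instructor acts on any PI scored 1 or 2
PROGRAM_TRIGGER = 2.67    # program faculty act if the mean falls below this

# Hypothetical end-of-semester PI scores for one course section (scale 1-4).
pi_scores = {"PI-1": 4, "PI-2": 2, "PI-3": 3}

# Course level: any individual PI at or below the trigger prompts
# course-level changes by the instructor.
needs_course_action = [pi for pi, s in pi_scores.items() if s <= COURSE_TRIGGER]

# Program level: the mean PI score goes to the continuous course
# improvement record and is compared against 2.67.
mean_score = sum(pi_scores.values()) / len(pi_scores)
needs_program_action = mean_score < PROGRAM_TRIGGER

print(needs_course_action)                          # → ['PI-2']
print(round(mean_score, 2), needs_program_action)   # → 3.0 False
```

Note that in this example the instructor must act on PI-2 even though the section's mean (3.0) clears the program-level trigger, which matches the description above: the two triggers operate independently.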
*CS 6010 and CS 6011 assessment data are recorded in the continuous course improvement record only for the semester in which the student defends.
Project Thesis/Defense Assessment
The thesis or project defense assessment is a direct assessment instrument that is completed by all faculty attending the final design review (defense) of a student’s thesis or project. This instrument assesses the student’s mastery of the program-level learning outcomes listed in Section C.
The thesis or project defense assessment instrument works as follows. Faculty attending a final design review answer four questions corresponding to the four learning outcomes listed in Section C. Responses to these questions use a four-point asymmetric Likert scale:
• 4 = strongly agree
• 3 = agree
• 2 = mixed
• 1 = disagree
The student’s committee chair calculates the mean response for each question. These responses are recorded in the Project Defense Assessment Report, which the chair submits to the program director. The director computes a graduating cohort average for each of the four questions and enters those averages into the continuous improvement record. If the mean value for any question falls below 2.67, the program faculty must initiate action to address the unsatisfactory learning outcome result(s). Conversely, if all mean values are at or above 2.67, no action is initiated by the faculty.
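The cohort-averaging step described above can likewise be sketched briefly. Again, this is an illustration of the calculation only; the per-student response values are hypothetical.

```python
# Illustrative sketch of the graduating-cohort averages for the four
# defense questions and the 2.67 trigger. All data are hypothetical.

TRIGGER = 2.67

# Hypothetical chair-reported responses per student, one value per
# learning-outcome question (4 = strongly agree ... 1 = disagree).
cohort_reports = [
    [4, 3, 3, 2],   # student A
    [3, 3, 4, 3],   # student B
    [4, 4, 3, 2],   # student C
]

# Cohort average for each question (column-wise mean).
num_students = len(cohort_reports)
cohort_means = [sum(col) / num_students for col in zip(*cohort_reports)]

# Any question whose cohort mean falls below 2.67 requires faculty action.
flagged_questions = [i + 1 for i, m in enumerate(cohort_means) if m < TRIGGER]

print([round(m, 2) for m in cohort_means])   # → [3.67, 3.33, 3.33, 2.33]
print(flagged_questions)                     # → [4]
```

In this hypothetical cohort, only question 4 falls below 2.67, so the program faculty would initiate action for that learning outcome alone.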