Mapping out your path to success

Kimberly Pearce, Director of Assessment & Institutional Research

I think these data are really important to a prospective learner, because a prospective learner should be looking at this site and asking, why does this matter to me? Why should I care? And I would answer those kinds of questions by saying, this is a big investment for you, and you should have some informed decision making, so take a look at our program outcomes. Are they strong enough, in your opinion, to deliver the career success that you are looking for? And then look at our history of delivering on those learning outcomes. How well have we done in the past, the recent past, on delivering on those? And then make the judgment yourself: Are these programs right for you?

At Capella we sometimes talk about a line of sight, and when we talk about line of sight, we really mean that there is a clear map between the program outcomes on one end, which are the end-game, or the final destination, of all of our courses and the degree programs we offer, all the way down to specific assignments and courses and assessment criteria, and course competencies. And that metaphor is helpful because it allows us to really think about how we structure and map out an entire set of learning experiences, for both learners and instructors, and others who are interested in the work we do. For example, accreditors, whether professional accreditors, specialized accreditors, or our regional accreditors. So having a very clear line of sight gets everybody on the same page and in a shared understanding of what we promise, as well as what we deliver.

The program outcomes assessment results that we are publishing here really are the result of a pretty sophisticated system of assessment that we instituted for all of the capstone courses in which program outcomes are demonstrated by our learners. So for instance, we brought together the faculty who created the program outcomes, some of the core instructors for the capstone course itself, and other experts among the faculty, along with assessment specialists who have specific expertise in measurement and evaluation.

They all got together to create the assessment rubrics that we use, that instructors use, to grade learners' work and judge that work on different performance levels. So all of them got together, agreed on what the rubric should be, took some exemplar learner work, and applied that rubric to the work. They came together and said, this is how I use the rubric, this is how you use the rubric, and they all reached agreement eventually. Sometimes they had to rework the rubric. But what that process really did was create some validity and reliability for these data, because we have a team of experts all agreeing: these are the right criteria, and we are applying them correctly to the learners' work to reach judgments we can be confident in, and confident enough to report upon in a venue like this.

The assessment system that we use here at Capella is geared toward understanding the learning experience in pretty specific detail. So we have assessment results layered on top of curricular maps, and we use those to generate insights, action plans, and ultimately new decisions to improve the learning experience. And not only the experience, but the actual learning that matters to a professional as they try to advance their career.

Ultimately, what we are trying to do with this very large program of work around learning assessment, collecting data that helps us both understand the current state and make better improvement decisions, is to impact the learning experience. Because we know that when we can deliver a strong learning experience toward outcomes that matter to professionals, and then link those outcomes and the demonstrations of those outcomes with career success, we are creating a very strong model, not only in the current state, but to improve for the future.