Going back to school is one of the biggest investments of your life, and you need to know that your time, money, and effort will be worth it. At Capella, we use outcomes to show the direct impact our programs have on the professional lives of our graduates. We're proud to bring this level of quality and accountability to higher education.
Using learning outcomes ensures that Capella delivers an engaging, relevant learning experience to adults like you. A Capella program's expected learning outcomes form the foundation of all our courses, which are constructed by a team of specialists.
Courses are designed to deliver each program's expected learning outcomes; each course assignment aims to measure each student's proficiency in demonstrating those outcomes. This builds a rich context for interpreting data that we use to continuously improve the learning experience.
I think these data are really important to a prospective learner, because a prospective learner should be looking at this site and asking: why does this matter to me? Why should I care? And I would answer those questions by saying, this is a big investment for you, and you should be making an informed decision, so take a look at our program outcomes. Are they strong enough, in your opinion, to deliver the career success you are looking for? Then look at our history of delivering on those learning outcomes. How well have we done in the recent past? And then make the judgment yourself: are these programs right for you?
At Capella we sometimes talk about a line of sight, and when we talk about line of sight, we really mean that there is a clear map from the program outcomes on one end, the final destination of all the courses and degree programs we offer, all the way down to specific assignments, courses, assessment criteria, and course competencies. That metaphor is helpful because it allows us to think through how we structure and map out an entire set of learning experiences, for learners, for instructors, and for others who are interested in the work we do, such as accreditors, whether professional, specialized, or regional. Having a very clear line of sight gets everybody on the same page, with a shared understanding of what we promise as well as what we deliver.
The program outcomes assessment results that we are publishing here really are the result of a fairly sophisticated system of assessment that we instituted for all of the capstone courses in which program outcomes are demonstrated by our learners. For instance, we brought together the faculty who created the program outcomes, some of the core instructors for the capstone course itself, and other experts among the faculty, along with assessment specialists who have specific expertise in measurement and evaluation.
They all got together to create the assessment rubrics that instructors use to grade learners' work and judge it against different performance levels. All of them worked together, agreed on what the rubric should be, took some exemplar learner work, and applied the rubric to that work. Then they compared notes, saying, this is how I use the rubric, this is how you use the rubric, until they eventually came to agreement. Sometimes they had to rework the rubric. What that process really did was create validity and reliability for these data, because we have a team of experts all agreeing: these are the right criteria, and we are applying them correctly to learners' work to produce judgments we can be confident in, and confident enough to report in a venue like this.
The assessment system that we use here at Capella is geared toward understanding the learning experience in pretty specific detail. We lay assessment results on top of curricular maps, and we use those to generate insights, action plans, and ultimately new decisions to improve the learning experience. And not only the experience, but the actual learning that matters to professionals as they try to advance their careers.
Ultimately, what we are trying to do with this very large program of work around learning assessment, collecting data that helps us both understand the current state and make better improvement decisions, is to impact the learning experience. We know that when we deliver a strong learning experience toward outcomes that matter to professionals, and then link those outcomes, and the demonstrations of those outcomes, with career success, we are creating a very strong model, not only for the current state but for improving in the future.
The reports we publish reflect the collective results of Capella students in capstone courses, typically the final course of the program. During these courses, instructors assess each student's proficiency level using specific criteria; we capture detailed records that we use to create outcomes reports.
We conduct regular alumni surveys to collect up-to-date information about our graduates' professional achievements, what they learned in their programs, and their overall satisfaction with their experience at Capella. We publish the results so that you can see how our graduates advance after graduation.
Our students participate in two standardized surveys that include students from online universities nationwide: the Noel-Levitz Priorities Survey for Online Learners (PSOL) and the National Survey of Student Engagement (NSSE). These surveys, and their published results, help evaluate Capella's performance and compare it to that of other, similar institutions.
The National Institute for Learning Outcomes Assessment (NILOA) helps educational institutions develop and use learning outcomes data to improve higher education, and to share that data with policy makers, students, and others interested in higher education.
Capella is a member of the Presidents' Alliance for Excellence in Student Learning and Accountability, which leads and supports voluntary, cooperative efforts to move the higher education community toward gathering, reporting on, and using evidence to improve student learning in American undergraduate education.