The AACSB peer review team left this morning after a whirlwind onsite accreditation visit to our business school. I shared a copy of our report with you a couple of months ago. Today I want to share a lesson on bottom line messages.
Some background: one area in which the College is reviewed is called Assurance of Learning (AOL), sometimes also referred to as assessment. Reporting on our AOL activities allows us to document that we know what our students learn in our degree programs and that we continuously improve those programs to enhance student learning. In our accreditation report, we followed a prescribed arrangement of materials, with AOL information contained in one section of the body and in summary tables in the appendices. Here’s an example of one of those tables for our undergraduate program (with around 6,500 students currently enrolled).
- The portion of the table devoted to Learning Goals describes the target knowledge or skills for students in a specific degree program. (There are five for our bachelor’s degree.)
- The entries for Assessment Tools describe how the learning goals are measured. (Two of these are writing assignments from required courses, which were collected from hundreds of students during spring of 2011 and scored using a faculty-developed rubric by independent raters.)
- The content in the Results section describes performance of students on the chosen measures. (For instance, although our students met the target of 80% meeting expectations for professional writing, there was a significant decrease in the percentage who exceeded expectations compared with a couple of years ago.)
- The bottom of the table is devoted to Resulting Improvement Initiatives in three categories: curricular changes, assessment procedure changes, and co-curricular changes. (One of the curricular changes recommended in this case was to hire more full-time faculty instead of relying on graduate students to teach professional writing.)
Back to my lesson on the bottom line.
Here’s what happened when our reviewers read our report: They recognized that we had appropriate learning goals and were collecting measures of that learning. However, they questioned whether we were actually using those measurements to drive improvement in our programs. (In the assessment world, this is called “closing the loop” and is obviously of critical importance.) Luckily, when our reviewers visited with our faculty and asked about continuous improvement, we were able to describe examples from lots of degree programs, including the bachelor’s degree.
So what went wrong? As the writer, I failed to understand what the bottom line message of these summary tables was for my readers. If I had recognized the improvement initiatives as the bottom line (rather than assuming the bottom line was the presence of all four aspects of the process), I would have highlighted them. How? What is first (or biggest or loudest) has salience. (I’ve written about the psychology behind perception in information design before.) So I could have used a change in typeface or size or color to make this section of the table more salient. Or I could have moved this information out of the table and made it more prominent by discussing it first. And so on.
Here’s what I think is important about this lesson. It’s another reminder that the bottom line message is critical, and that it’s not the same for every reader. It also shows that soliciting comments from readers on drafts (ours saw two versions of the report before it was officially submitted) doesn’t guarantee success. Even though our readers praised the quality of our report, this bottom line message wasn’t clear to them. So perhaps the critical lesson is about keeping open the possibility of conversation to supplement any written documentation even in — or especially in — a bureaucracy.