
A Challenging Question

What are the “unobtrusive measures” we might use to capture the results of civic and community engagement?

For anyone interested in creative ways to document results, St. Cloud State University president and Minnesota Campus Compact board vice chair Earl Potter recommends Unobtrusive Measures, a classic book by psychologist Eugene J. Webb and several colleagues. (Originally published in 1966, it is still in print and available from Sage Publications.) While the authors consider surveys and interviews “probably the most flexible and generally useful devices we have for gathering information,” the book emerges from concern that:

The dominant mass of social science research is based upon interviews and questionnaires. We lament this overdependence upon a single, fallible method. Interviews and questionnaires intrude as a foreign element into the social setting they would describe, they create as well as measure attitudes, they elicit atypical roles and responses, they are limited to those who are accessible and will cooperate, and the responses obtained are produced in part by dimensions of individual differences irrelevant to the topic at hand.

But the principal objection is that they are used alone. No research method is without bias. Interviews and questionnaires must be supplemented by methods testing the same social science variables but having different methodological weaknesses.

Most civic and community engagement practitioners are well aware that self-reported data can be dismissed as unreliable evidence of real change. Students may answer questions selectively because they are aware they are being tested and the institution hopes they will respond in certain ways. Fewer students may respond because of survey fatigue, as an article in the Chronicle of Higher Education this fall suggests, and different groups of students may be more or less prone to respond. Surveys of students, community partners, and other stakeholders need not be abandoned, but finding additional ways to document impact could triangulate results and make a stronger case.

So how might we collect meaningful data without directly asking people for it? A few examples from the book may help spark creative ideas:

  • tracking how often floor tiles in different parts of a museum need to be replaced gauges the exhibits’ relative popularity;
  • counting empty bottles in trash cans indicates how much alcohol residents of an officially “dry” town actually drink; and
  • measuring the dilation of a person’s pupils may reveal fear or interest.

Some public records offer information about civic behaviors, such as registering to vote and actually voting. The number of complaints local police receive about loud parties might indicate how responsible and considerate students are as neighbors. Similarly, the number of dormitory residents requesting a new roommate might be a measure of how skilled students are at working through differences. The number and content of letters to the editor or online comments on news sites might reflect how engaged students are with public issues and how effectively they articulate a position. Obituaries could probably tell us a great deal about the behaviors and values of our alumni, though few of us would want to wait that long to assess the results of our efforts. And obituaries, like other public and private records, are shaped by social expectations and may not be kept consistently over time.

Another technique is simple observation. Perhaps we could be trained to recognize contempt, fear, and other emotions in people’s facial expressions. We might see how often recyclable items are placed in regular trash cans or how many bicycles are parked on campus in order to measure people’s commitment to environmental sustainability. These examples are just the initial results of a little brainstorming. What measures can you imagine being useful in capturing the outcomes of campuses’ civic and community engagement efforts?

One final point to keep in mind as we strive to improve assessment and accountability: Mike Newman of the Travelers Foundation says he looks for evidence of contribution rather than attribution. Many people do not expect us to use experimental methods or to prove a causal effect free of any other factors. Evidence of success matters, but so do trust, open communication, and relationships, whether we want to gain support or achieve our core goals.
— Julie Plaut

Team-Based Learning

What kind of test can engage students and enhance their learning? Participants in a Minnesota Campus Compact workshop last Friday experienced firsthand a team-based technique that was fun and informative. Individuals first completed a multiple-choice test on their own, then gathered in small groups to discuss the questions and determine the group’s answers together. Next we scored both our individual and group tests, which gave us immediate feedback, taught us the right answers, and revealed that no individual had scored higher than the group. We’d also enjoyed the opportunity to share the reasoning behind our answers and learn from others’ perspectives. The final step was to apply what we’d learned, which in this case meant using principles of designing courses for significant learning to revise specific course syllabi.
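For readers curious about the mechanics, here is a minimal Python sketch of that scoring comparison. The questions, names, and answers are invented for illustration; only the procedure, scoring each individual and then the group against the same key, comes from the workshop.

    # Hypothetical data: each person answers alone, then the team answers
    # together, and both are scored against the same answer key.
    answer_key = ["B", "D", "A", "C", "B", "A", "D", "C"]

    individual_answers = {
        "Ana":   ["B", "D", "A", "B", "B", "A", "C", "C"],
        "Ben":   ["B", "C", "A", "C", "B", "D", "D", "C"],
        "Chris": ["A", "D", "A", "C", "C", "A", "D", "B"],
    }

    # Answers the team agreed on after discussing each question.
    group_answers = ["B", "D", "A", "C", "B", "A", "D", "C"]

    def score(answers, key):
        """Count how many answers match the key."""
        return sum(a == k for a, k in zip(answers, key))

    for name, answers in individual_answers.items():
        print(f"{name}: {score(answers, answer_key)}/{len(answer_key)}")

    print(f"Group: {score(group_answers, answer_key)}/{len(answer_key)}")
    # With this sample data, as in the workshop, no individual outscores the group.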

Faculty around the country have conducted research on this kind of team testing, with similarly positive results. Dr. Robert Dunbar at the University of Minnesota Rochester has begun related research; below, he shares his preliminary findings along with his pedagogical and research motivations:

The group component [in classes in this study] goes beyond testing. Students complete study guides prior to class. Then in class, they complete a quick “confusion and clarifications” survey (think – muddiest point) before they do anything else. Next, they split into their groups to discuss the study guide questions/concepts and then complete the same “confusions and clarifications” survey. The second time, they focus on highlighting concepts that were not clarified by the small group discussion. We then take the results of the second survey to guide the focus of our “lecture”/class discussion. Therefore, the students help to clarify topics in their groups before we do anything. Preliminary results of the surveys suggest that there is a significant gain in understanding just through peer interactions before we lecture. This is not yet published and is still very preliminary.

Pedagogy – Based on feedback from graduate/professional programs as well as industry, it is clear that there is a demand for graduates who can effectively work in groups. However, we also still needed to support and encourage individual, self-accountability. How do we reconcile these apparently opposing goals? Furthermore, can we encourage people to value group work AND self-accountability without generating animosity between high and low performers? The model that I (and the student based faculty who work with me) have implemented includes a high point value for the individual exams (~150 pts) and a lower possible extra credit contribution from the group tests (up to 10 pts). Under this model, performance of the group does seem to be influenced by the high performers but the high performers also value the contributions of all members of the group to make up the gaps in understanding that they (the high performers) have. Furthermore, the lower performers no longer resent the high performers as “curve breakers” because there is no curve. Rather, the extra-credit earned by all effectively replaces any curve so ALL students benefit when the group performs better.

Research – Does working in groups facilitate learning the material? This is a tricky question. The data collected to date argues that the vast majority of the time, the group result is above even the highest performer in the group. In other words, even the highest performers benefit from working in groups. As with all studies of this nature, there are exceptions but these are rare. Furthermore, and this is anecdotal for the moment, students actually discuss and appear to learn while they go over the test as a group. I am currently trying to figure out how gender and social self-efficacy relate to group performance as well as how to capture the level of learning that appears to be happening. Undoubtedly, future analysis will include questions that appear on multiple exams and, someday with appropriate IRB approval, an analysis of student discussions while they work.
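To make the arithmetic of that point structure concrete, here is a minimal sketch. The roughly 150-point individual exam and up-to-10-point group extra credit come from Dunbar’s description above; how the bonus scales with the group’s score is not specified there, so proportional scaling is assumed purely for illustration.

    # Point structure per Dunbar: individual exams carry the weight, and the
    # group test adds a small extra-credit bonus instead of a curve.
    INDIVIDUAL_POINTS = 150  # the ~150-point individual exam
    GROUP_BONUS_MAX = 10     # the up-to-10-point group extra credit

    def exam_total(individual_pct, group_pct):
        """Individual score plus group extra credit; no curve is applied."""
        individual = INDIVIDUAL_POINTS * individual_pct
        bonus = GROUP_BONUS_MAX * group_pct  # assumed proportional scaling
        return individual + bonus

    # A strong and a weaker student in the same group both gain when the
    # group does well; the shared bonus effectively replaces any curve.
    print(exam_total(0.92, 0.95))  # 147.5 for the higher performer
    print(exam_total(0.70, 0.95))  # 114.5 for the lower performer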

While this team-oriented approach to traditional course content seems like a natural fit for classes that form teams to complete community-engaged projects, it is applicable to a wide array of courses.  Harvard physicist Eric Mazur developed a similar peer instruction technique that has received widespread acclaim and adoption in the sciences.  His research and UMR’s unique curriculum are both highlighted in an American Radio Works documentary aired last week, Don’t Lecture Me.  A few additional resources on this topic:

Even beyond the context of teaching and course development, the research showing that student teams do better than individuals is intriguing. In The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations (Doubleday, 2004), James Surowiecki argues that groups can make better decisions or predictions than experts, particularly when they aggregate diverse opinions that individuals reached independently and when decentralization lets the group draw on local knowledge. Given how important collaborative action and decision-making are in a democracy, it’s exciting to speculate that positive experiences with team-based learning could also increase people’s inclination to engage with others to address important public issues. Anyone interested in a research project?
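As a small illustration of Surowiecki’s aggregation argument, here is a quick simulation sketch. The numbers are invented; the point is simply that averaging many independent, unbiased estimates produces a smaller error than the typical individual estimate.

    import random

    random.seed(42)
    TRUE_VALUE = 100.0  # the quantity the crowd is estimating
    GROUP_SIZE = 50
    TRIALS = 1000

    total_individual_error = 0.0
    total_group_error = 0.0

    for _ in range(TRIALS):
        # Independent estimates: unbiased but noisy guesses around the truth.
        estimates = [random.gauss(TRUE_VALUE, 20.0) for _ in range(GROUP_SIZE)]
        group_estimate = sum(estimates) / len(estimates)
        total_individual_error += sum(abs(e - TRUE_VALUE) for e in estimates) / len(estimates)
        total_group_error += abs(group_estimate - TRUE_VALUE)

    print(f"Typical individual error: {total_individual_error / TRIALS:.1f}")
    print(f"Group-average error:      {total_group_error / TRIALS:.1f}")
    # The averaged estimate's error shrinks roughly with the square root of
    # the group size, so the crowd beats most of its members when their
    # errors are independent.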

— Julie Plaut

A Challenging Question

by Julie Plaut

How can we document meaningful short-term outcomes, when much of what we’re aiming for takes a long time to achieve?

We’ve probably all heard the grudging joke about needing to wait to read our students’ obituaries – hopefully decades from now – to know if we really graduated actively engaged, informed and responsible citizens. Major change in individuals and in communities often takes years, yet it’s critically important to know in the shorter term to what extent we’re achieving our goals, or at least contributing to long-term movement toward them.

A common framework categorizes knowledge and skills as outcomes measurable in the short term; actions or behaviors as intermediate outcomes; and values, conditions, and status as long-term outcomes. Thus a campus-community partnership focused on increasing college access might track:

  • short-term: middle school students’ understanding of key steps to attend college, and college administrators’ knowledge of effective strategies for increasing access;
  • intermediate: the same students’ enrollment in a rigorous set of high school classes, and changes in college admissions and financial aid policies or practices; and
  • long-term: smaller gaps in high school graduation and college enrollment rates by race, income, and parents’ level of education, and a shared commitment among educators to support all students’ educational success.

For institutions and individuals committed to developing engaged citizens, determining exactly what we seek to accomplish and measure may be the fundamental challenge. We can certainly draw on indicators from the national Civic Health Index, the VALUE rubrics, and other resources, including those noted in MNCC’s 2009 Civic Engagement Forums report. Yet there is little research on the results of different experiences and contexts, and the link between existing knowledge and practice is often weak. In analyzing alumni survey data, for instance, do we take into account political scientist Laura Stoker’s work on life-cycle patterns, so we don’t judge our success by looking at a typically low point in adults’ civic engagement? Some useful reflections appear in How Young People Develop Long-Lasting Habits of Civic Engagement, the result of a conversation convened to inform the Spencer Foundation’s Civic Learning and Civic Action initiative, a source of research grants that will surely inform future practice.

Thoughtful consideration of the desired outcomes for communities, students, and institutions, drawing on multiple disciplines and types of knowledge, can be a civic act in itself—developing participants’ capacity for dialogue, strategic judgment, commitment to engage over time, and sense of accountability for results as well as intentions or actions.

Reprinted from “Outcomes,” our assessment brief, which is published twice a year and available at www.mncampuscompact.org/assessment

Assessing the Outcomes of Civic Engagement: Why Bother?

by the MNCC Assessment Leadership Team

Assessment is like flossing. We all know it’s a good thing to do, but we don’t necessarily act on that knowledge unless there’s an external push—a looming dentist visit or a funder’s requirement. Civic engagement practitioners, like most people in higher education, assess the results of their work primarily in response to others’ demands; our daily lives are full. Yet assessment helps us track progress toward our goals and understand what factors contribute to success. It allows us to tell powerful stories and identify where we might best invest our time and money. It keeps us accountable to our own values as well as to our partners. It thus helps us do our jobs better, advancing our institutions’ civic missions and broader movements for positive change.

So how can we fully commit to actions that are healthy for us in the long run? One step is simply recognizing assessment’s benefits as a matter of compelling self-interest. Another critical step is focusing on our strengths—taking the asset-focused approach we so often advocate in community partnerships. We have access not only to all sorts of useful resources and sample instruments, but also to institutional researchers and others with relevant responsibilities, skills, and interests. IR staff, in particular, are expected to document and disseminate progress toward the institution’s mission, strategic priorities, and accreditation standards. They are increasingly being asked to explain the institution’s relationship with and impact on the community, as well as the meaning of a degree to alumni years after graduation. Civic engagement practitioners offer institutional researchers valuable connections and knowledge, just as institutional researchers offer practitioners quantitative research capacity and a fresh perspective on key questions and outcomes.

Assessment of civic engagement is ideally collaborative, involving multiple stakeholders within an institution, its partner organizations, and sometimes other campuses. It’s cross-cultural work too, as we build relationships, mutual trust and respect, and a sense of common purpose across differences. Along the way, we’ll practice and develop our reflection skills. Only with reflection will collecting data and stories lead to wisdom and greater insight into what works and why. Really looking closely at outcomes for communities and for students is an act of courage. It means facing the risk of negative or neutral findings and being willing to change and grow. Courageous leadership is something we seek to cultivate in our students, and we’ll teach it best when we model it too.

The Minnesota Campus Compact Assessment Leadership Team is a small group of civic engagement practitioners, institutional researchers, and faculty assessment coordinators committed to supporting enhanced assessment of civic engagement’s outcomes, grounded in this constructive vision and spirit. Our goal is to produce, in the coming months and years, brief pieces that highlight particular assessment tools and what was learned through their development and application. We’re also researching other resources and considering what opportunities we might offer for collaborative planning and assessment.

Reprinted from “Outcomes,” our assessment brief, which is published twice a year and available at www.mncampuscompact.org/assessment

Getting Beyond “Either/Or”

By John Hamerlinck

Ten years ago, I was working for a government agency that was actively involved in remediation of the Y2K computer problem. Nowadays most people look back at that time and remember some rather scary predictions and a relatively uneventful New Year’s Day, 2000.

That uneventful outcome was not evidence that the crisis had been mere hype. The reasonably humdrum January 1, 2000, was the product of countless hours of work by people in every sector of society to ensure that problematic systems were updated. The smooth Y2K rollover, however, was not the only thing they achieved.

We live in a culture where we are constantly presented with an overly simplistic, “either/or” view of the world. People are seen as only liberal or conservative; ideas are deemed either good or bad (as if “both/and” versions of these options didn’t exist). Because of this, we tend to look at events like Y2K simply as something that either did or did not happen. A little additional analysis, however, reveals that numerous benefits resulted from all that hard work.

Cities, hospitals, businesses and schools everywhere suddenly had disaster preparedness plans that could be (and have been) implemented during all sorts of natural and human-caused catastrophes.  Widespread computer hardware updates resulted in increased productivity as organizations replaced slow, inefficient machines. Perhaps most significantly, people whose lives and enterprises moved along seemingly oblivious to the larger world were suddenly required to gain a deeper understanding of the world’s interconnectedness and interdependence.

As we engage in the work of community-building through civic engagement, it is important to avoid the trap of dualism. Overcoming the common challenges we face will be more difficult if we see concepts like leadership and power in limited, “either/or” terms. We can all find lots of opportunities to demonstrate individual and collaborative leadership. As for power, it is an unlimited and renewable resource.

When we embrace a broader, more multifaceted view of engagement, assessing and evaluating our work becomes more effective. We begin to recognize more unintended consequences, more unforeseen benefits, and more opportunities to find underdeveloped capacities in the gray areas between the black and white surfaces.