StrengthsFinder

Below are my top five strengths and descriptors from the StrengthsFinder assessment.

  • Futuristic
    • People strong in the Futuristic theme are inspired by the future and what could be. They inspire others with their visions of the future.  
      • Imaginative, creative, visionary, even prophetic, inspiring
  • Woo (Winning Others Over)
    • People strong in the Woo theme love the challenge of meeting new people and winning them over. They derive satisfaction from breaking the ice and making a connection with another person.  
      • Outgoing, people-oriented, networked, rapport-builder
  • Positivity
    • People strong in the Positivity theme have an enthusiasm that is contagious. They are upbeat and can get others excited about what they are going to do.
      • Enthusiastic, light-hearted, energetic, generous with praise, optimistic
  • Empathy
    • People strong in the Empathy theme can sense the feelings of other people by imagining themselves in others’ lives or others’ situations.  
      • Creates trust, knows just what to say/do, customizes approach to others
  • Maximizer
    • People strong in the Maximizer theme focus on strengths as a way to stimulate personal and group excellence. They seek to transform something strong into something superb.  
      • Mastery, success, excellence, works with the best

Bird by Bird

by Anne Lamott

“Thirty years ago my older brother, who was ten years old at the time, was trying to get a report on birds written that he’d had three months to write.  It was due the next day.

We were out at our family cabin in Bolinas, and he was at the kitchen table close to tears, surrounded by binder paper and pencils and unopened books on birds, immobilized by the hugeness of the task ahead.

Then my father sat down beside him, put his arm around my brother’s shoulder, and said, ‘Bird by bird, buddy. Just take it bird by bird.'”

Developing Rubrics: Lessons Learned

(published in Assessing Outcomes and Improving Achievement: Tips and Tools for Using Rubrics, written by Wende Garrison)

 “A good example has twice the value of good advice.”

The emphasis on assessment at colleges and universities across the country has created a need not just for assessment tools, but for tools that can yield meaningful information about student learning, experiences, and success. In response, many campuses have appointed committees, often comprised of faculty members, to create or redesign existing assessment tools. Increasingly, however, these efforts have sought to evaluate student achievement at the programmatic level, rather than at the level of individual courses. And although faculty can expertly evaluate the work assigned within a particular course, they are less accustomed to creating assessment tools that span specific course objectives and subject matter such that student learning and success can be measured both within and across college curricula. To do this, faculty must learn to conduct assessment not only within a group but also as a group; they must share knowledge, reflect upon expected outcomes, build consensus, and take collective ownership of the assessment.

The Valid Assessment of Learning in Undergraduate Education (VALUE) project of the Association of American Colleges and Universities (AAC&U) typifies how, using rubrics, faculty can work collectively to create meaningful assessment of student learning. Over the course of eighteen months, fifteen teams of faculty and their colleagues from around the country created rubrics to assess the essential learning outcomes identified through AAC&U’s Liberal Education and America’s Promise initiative. Working virtually and by telephone, the teams produced, in addition to the rubrics themselves, a road map of sorts for the process of creating rubrics as a group. What follows are the lessons learned through the VALUE project.

Don’t Reinvent the Wheel

Departments and institutions often have assessment tools with which they are already working. This is not to say that they don’t need to create local versions of the VALUE rubrics, versions tailored to particular contexts, but rather that the process doesn’t need to reinvent the entire wheel. Local development groups should begin by searching for existing examples of assessment, and then proceed by discussing how these might be adapted to meet departmental or institutional objectives. For each of fifteen learning outcomes, the VALUE project collected approximately twenty sample rubrics from campuses around the country. This early rubric collection is located online and can be searched by subject area (see http://openedpractices.org). Anyone in higher education may add resources to the site or use it as a public library for local rubric development projects.

Once the rubrics for an outcome area were collected—primarily through Internet searches for publicly posted material and contributions from individual faculty—they were shared with each of the rubric development teams. The teams then examined the rubrics and identified the common criteria found most frequently across each collection. These criteria became the foundation of the VALUE rubrics.

For some less common outcomes—for example, integrative learning, civic engagement, and creative thinking—fewer rubrics were available. In these instances, the teams utilized other important sources identified by faculty from the relevant fields to determine the criteria on which there appeared to be widespread agreement within a discipline or learning outcome area.

Many Hands Make Light Work

Both the size and the composition of a team can affect the achievement of rubric development goals. After working in the VALUE project with teams of varying sizes, we concluded that a team of five to ten people is optimal. Teams of this size were able to continue working when one or two people needed to miss a meeting. Smaller teams were not always able to gather enough members at one time to advance the work, and as a result they encountered more delays than the larger teams.

In terms of composition, it is important for the end users of the assessment rubric to be represented on the committee or team responsible for developing it. For example, if the rubric is intended for use across a department where faculty are divided into different theoretical camps, a representative from each camp should be included on the team. If the rubric is to be used across disciplines, an interdisciplinary team is vital. It is also important to remember that students themselves are end users, which makes attention to the clarity of terms and simplicity of language an essential component of rubric development. In the VALUE project, the teams whose members represented a wider range of disciplines found it easier to create accessible and understandable final rubrics. Their ability to draw from different disciplines and perspectives made it easier for them to write for students while addressing the content and academic standards each outcome demanded.

Begin with the End in Mind

Starting the development process with a blank page can sometimes result in a rubric that is so detailed and comprehensive that it is difficult for end users to implement; the rubric will likely lack conciseness, a component key to both adoptability and adaptability. To develop a rubric that is both thorough and concise, it is useful for groups to start the development process with a few guidelines based on what is ultimately needed. For example, from the beginning, the VALUE project limited the number of performance levels in the rubrics as well as the number of criteria. Individual teams were able to meet these limits by forcing themselves, as individuals and as a group, to determine the most vital criteria for each outcome. Of course, the limits can always be expanded should feedback indicate that a rubric is too narrow. But only rarely did the rubric development teams receive that type of feedback from colleagues who tested the rubrics. The single, most common piece of feedback received (from over one hundred colleges and universities) during the VALUE process was that the rubrics should be made “shorter, more concise, simpler.” Since it is often easier to expand than to cut, doing the hard work of being concise early will likely save the group time later on.

The Proof Is in the Pudding

When creating a rubric, there is always a certain amount of hesitancy about actually putting something on the page. Academics are sometimes more comfortable with a comprehensive discussion of the philosophical implications of one focus for assessment versus another. But this dimension of rubric creation can often lead to a long, frustrating process in which even a first draft of a rubric takes years to produce. Beginning with this type of discussion is doubly undesirable when the campus need for rubrics is immediate.

The challenge, then, is to put something on paper and to test it. Progress only results from actually doing something. Teams shouldn’t be afraid to release a first draft with the understanding that it will probably be revised substantially. One strategy that helped move the rubrics onto paper in the VALUE project was to apportion the work. Each member of every rubric development team was responsible for one criterion, and she or he wrote performance descriptors for each of the four levels of a particular VALUE rubric (see the appendix for examples). Just one criterion per team member was a manageable workload. But since this was an assignment given to each rubric development team member very early in the process, there were draft rubrics on paper almost immediately. Teams then devoted time to discussing each sentence, and often each word, in terms of its implications for a field or its effect on student work. By that point, the teams were discussing and editing an actual product. The discussion was no longer theoretical. The goal of drafting a rubric had been accomplished, and what remained was the detailed process of revision.

The VALUE process worked because, by getting something on paper early, each team had, in essence, made a prototype that could be tested and retested through the course of group discussions. One team member used the analogy of making pudding: “A discussion about how to make pudding is only useful for so long. In the end, you have to mix ingredients and do taste tests if you want to make good pudding.”

Kick the Tires

Once you have something on paper, you need people to test it and to provide feedback. Testing and feedback are crucial for creating quality assessment tools, and it is especially important that the feedback be meaningful. Too often, the people recruited to offer feedback assume they should passively read the tool without applying its content. In the VALUE project, we found that actually testing the rubrics on student work was essential to the creation of meaningful rubrics. So feedback was always based on testing. It’s the difference between looking at a car and actually driving it. This step was indispensable for both the feedback volunteers and the rubric development teams. Whenever the teams tested the rubrics with student work, they discovered that they wrote better, more specific performance descriptors.

To achieve high-quality feedback, it is highly recommended that at least two or three feedback and revision cycles be built into the process. In the VALUE project, every rubric was tested and revised at least twice. Some rubrics were tested and revised three times. We consistently found that it was these later revision cycles that generated the type and diversity of feedback needed to significantly increase the quality of the rubrics.

Cast a Wide Net

Try to obtain feedback from as many people as possible. Asking a large number of people to give feedback will help build buy-in for the assessment itself, while also providing input from diverse perspectives. A key element of the VALUE project was that the number of campuses providing feedback was increased for each feedback cycle. We started with twelve campuses in the first feedback cycle, and by the end over one hundred campuses had tested the rubrics.

Additionally, starting with a small amount of feedback in the first testing and feedback cycle can be helpful. The initial testing of the first three VALUE rubrics enabled the project both to revise the rubrics and to tweak the development process for the remaining twelve rubrics. These tweaks made the process more efficient and also helped make the first drafts of the remaining twelve rubrics much stronger.

In some cases, this approach might require teams to seek feedback not only from colleagues on campus, but also from those at other institutions across the country, or even around the world, who are in the discipline or field for which the rubric is being designed. Teams might also want to consider inviting community members to participate in the testing and to provide additional feedback. One lesson learned concerns the value of involving colleagues from the co-curricular side of the institution. Their insight into how students might perceive a rubric was invaluable as the VALUE rubrics were being created.

Ultimately, the goal of seeking feedback from multiple sources is to be able to identify universal or common themes. In the VALUE project, for example, the need to simplify jargon for student readers and the importance of limiting each performance descriptor to a single, measurable behavior emerged as common themes. Whenever a theme could reasonably be applied to all rubrics, it was. Looking for the common themes in feedback is central to helping identify fruitful directions for revision. In the VALUE process, trying to respond to each piece of specific feedback often sent the teams in contradictory directions; but in responding to the common themes, teams found that the revisions always improved the quality of the rubrics.

Conclusion

The process of creating assessment rubrics can be both challenging and enlightening for any campus. Almost without exception, the rubric development team members in the VALUE project spoke of the many benefits of their participation in the rubric creation and revision process in terms of their own teaching and scholarship. Although daunting, the process of creating and revising assessment rubrics is a rewarding one for faculty.
