Office of Assessment

Wednesday, February 4, 2015

Changing the Question

One of the things that higher ed accreditors most often critique when it comes to student learning assessment is our ability to "close the loop." Simply put, we don't tend to make use of what we find at the end of an assessment. So we might say, "70% of our students are able to analyze their own and others' assumptions in the process of writing a research paper? Well, that's good enough for now, right? Let's get back to class and check again at the end of next semester to see if it's changed."

Because what we're interested in is our classes, not filling out reports and tables and sending them off to the ether (unless, of course, you have a special place in your heart for administrative work). But that's where the disconnect is - our classes are the whole purpose of student learning assessment. So if you're not interested in what your assessment findings tell you - if you're not interested in the answer - then maybe it's time to change the question.

The best advice I got in graduate school was to pick a research question that interested me personally. Not one that I thought would be best suited to my advisory relationship, or one that would be easy to publish, but one that moved me so much that I would be able to withstand hours of reading, interviews, transcriptions, analyses, and endless cups of tea to get the thing written. And I remember in my final year feeling buoyed by that investment - lifted by each discovery, excited to share what I was finding with anyone who would let me bend their ear.

If we're not feeling that with the process of exploring student learning, we must not be asking the right questions. What about your students' learning do you, personally, care about? Maybe you don't care about their ability to analyze assumptions as described above. What question does interest you?

  • How well are they able to identify multiple approaches for solving a problem?
  • How well are they able to take risks?
  • How well are they able to create new ideas?
  • How well are they able to articulate insights into their own cultural rules and biases?
  • How well are they able to take informed and responsible action to address societal challenges?

These are all assessment questions, too. In fact, they show up in a set of standard rubrics some colleges have decided to use to assess their institutional-level goals. In the end, in order for us to truly be invested in the whole shebang of assessment, to make it really work for us, we must take some time to learn about ourselves and our interests as they relate to our students.

Monday, December 15, 2014

Integrating Technology and Assessment

In this digital age, educators have already begun to incorporate technology into their classrooms. Just this Thanksgiving, I learned that my hometown junior high school provides all its students with iPads to use in school and at home. But as educators integrate technology into their pedagogy, they should also consider integrating it into their assessment practices.

Technology enhances assessment by:
  • Providing targeted and timely feedback
  • Supporting peer and self-assessment
  • Aggregating and storing results
  • Helping to evaluate the impact of instruction
Assessments such as online quizzes, tests, and simulations promote independent learning and allow students to choose the timing and location of their assessments.

Multimedia assessments help deepen student understanding and allow for a wider range of skills to be demonstrated.

Personal response systems, i.e., clickers, offer students anonymity and accountability, provide immediate feedback, and allow teachers to analyze class needs on the spot. Immediate feedback can surface misconceptions promptly and allow students to correct them right away. In my first undergraduate biology class of almost 300 students, clicker questions helped fuel student participation, kept me accountable for weekly readings, and allowed the teacher to quickly assess our grasp of the material.
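
To make that workflow concrete, here is a minimal sketch (in Python) of what a clicker-style check might do behind the scenes: grade each response on the spot, hand back immediate feedback, and tally the class distribution so the instructor can see where the group stands. The question, answer options, response data, and 70% threshold are all invented for illustration; this is not the interface of any particular response system.

    from collections import Counter

    # One clicker-style multiple-choice question (content invented for illustration).
    QUESTION = "Which organelle is the primary site of ATP synthesis?"
    OPTIONS = {"A": "Nucleus", "B": "Mitochondrion", "C": "Ribosome", "D": "Golgi apparatus"}
    CORRECT = "B"

    def immediate_feedback(choice):
        """The kind of on-the-spot feedback a response system can hand back to each student."""
        if choice == CORRECT:
            return "Correct: the mitochondrion produces most of the cell's ATP."
        return f"Not quite: you chose {OPTIONS[choice]}. Revisit the reading on cellular respiration."

    def class_summary(responses):
        """Aggregate anonymous responses so the teacher can analyze class needs on the spot."""
        tally = Counter(responses.values())
        total = len(responses)
        for option, label in OPTIONS.items():
            count = tally.get(option, 0)
            print(f"{option} ({label}): {count}/{total} students")
        if tally.get(CORRECT, 0) / total < 0.7:  # illustrative threshold, not a standard
            print("Fewer than 70% answered correctly -- worth reviewing before moving on.")

    # Simulated responses keyed by anonymous clicker ID.
    responses = {"c01": "B", "c02": "A", "c03": "B", "c04": "C", "c05": "B"}
    print(QUESTION)
    for clicker_id, choice in responses.items():
        print(clicker_id, "->", immediate_feedback(choice))
    class_summary(responses)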

Technology should add value to current practices, not replace already valued strategies. Therefore, educators should focus on assessment first and technology second. In other words, technological assessment practices should only be implemented if they align directly with pedagogy. It is also important to consider the technological literacy of students and to ensure that all students have equal access to technology. Proper scaffolding and technical training will allow students to pay more attention to the content and less attention to the technological tool.



JISC. (2010). Effective assessment in a digital age: A guide to technology-enhanced assessment and feedback. Retrieved from http://www.jisc.ac.uk/media/documents/programmes/elearning/digiassass_eada.pdf

Monday, November 24, 2014

The Need for Feedback

One of my most salient "aha" moments in teaching and assessing occurred a few years back while grading my students' portfolios. This work is submitted in three stages, scaffolded throughout the semester, and demonstrates my students' developing ability to plan, instruct, evaluate, and reflectively consider their work in the classroom. It is a major undertaking for my students to write their portfolios and equally complicated to evaluate and score them in a valid and reliable way. On this occasion, students had submitted the first part of their portfolio and received the scored rubric with feedback. Part two was coming up, and students were encouraged to revise part one for partial credit along with their submission of part two. Everything was going along splendidly until the second submission of the portfolios. Despite the feedback students received for part one, there were no marked improvements in their work at the submission of part two.

I was dumbfounded. How could this be? I provided a range of scores for each category of work: below (1), approaching (2), meeting (3), and exceeding (4) expectations. As a class we reviewed the expectations for their work. Writing guides were provided to specifically indicate expectations during the writing process. And ample time was provided between the receipt of feedback and opportunity for resubmission. Yet their work was simply not improving between the first and second submission. I was baffled and frustrated until it hit me: the rubric did not provide the right type of feedback.

Research indicates that, when properly constructed, rubrics provide valuable feedback, engage students in critical thinking, and offer support for developing writers (Andrade, 2000; Mertler, 2001). And part of what makes rubrics useful in providing feedback is the descriptors and degrees that clearly describe expectations and the spectrum of work towards meeting or exceeding those expectations.

The descriptors are the vertical categories or attributes of student work along the left-hand column of the rubric that delineate each of the major expectations, while the degrees are the horizontal columns along the top that distinguish between performance at each level. For example, in a paper about child development a descriptor might state: Summary of the observed child. It is the degree that describes what that summary should include to merit a score of below, approaching, meets, or exceeds expectations. For example, a 'meets expectations' score may state: The description of the observed child includes the educational setting and a description of the child's developmental status in a clear, detailed manner, free of interpretation or jargon, rich in relevant details, and consistent with the key points in the paper. In this degree, my student can see clearly the requirements for an acceptable summary of the observed child. So what was the missing link between my students' work and their stagnant scores?

In my case, the degrees were not detailed enough to offer suggestions for improvement from one submission to the next. Specifically, while students knew they had to provide a summary of the observed child (the descriptor), they were confused about the level of detail required for a 'meets expectations' (3) versus an 'exceeds expectations' (4). Herein lies the potential of the rubric beyond a simple rating scale (1, 2, 3, or 4).
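
To illustrate the difference between a bare rating scale and degrees that carry feedback, here is a minimal sketch in Python of a rubric stored as descriptors (rows) mapped to degrees (columns), where each cell holds the descriptive language a student actually needs. The descriptors, degree wording, and scores below are hypothetical, loosely modeled on the portfolio example above rather than copied from my actual rubric.

    # A rubric as descriptors (rows) mapped to degrees (columns 1-4),
    # where each degree carries descriptive feedback rather than just a number.
    RUBRIC = {
        "Summary of the observed child": {
            1: "Below expectations: setting or developmental status is missing or unclear.",
            2: "Approaching expectations: setting and status are named but lack relevant detail.",
            3: "Meets expectations: clear, jargon-free description consistent with the paper's key points.",
            4: "Exceeds expectations: meets all of the above and weaves rich, relevant detail throughout.",
        },
        "Identification of areas of strength": {
            1: "Below expectations: strengths are asserted without evidence.",
            2: "Approaching expectations: strengths are named but lack dated examples or ties to readings.",
            3: "Meets expectations: each strength is supported by dated examples and course readings.",
            4: "Exceeds expectations: evidence is extensive and connected across developmental domains.",
        },
    }

    def feedback(scores):
        """Print the degree description behind each score, so students see why, not just what."""
        for descriptor, score in scores.items():
            print(f"{descriptor}: {score} -- {RUBRIC[descriptor][score]}")

    feedback({"Summary of the observed child": 3,
              "Identification of areas of strength": 2})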

I brought this realization to my students as a mea culpa, and together we talked about the different attributes or categories (the descriptors of the work) and how my students interpreted them at each of the varying degrees. Based on their feedback I was able to revise my rubric to provide qualitatively and quantitatively rich feedback that distinguishes between performance at each level of work, which significantly improved my students' next submission. Instead of simply seeing that they received a 2 (approaching expectations) for their identification of their student's areas of strength, I was able to explain how the identification lacked specific evidence (examples, dates, and connections to the course readings) to justify their belief that the student they observed demonstrated strength in each developmental domain.

Before I began using rubrics extensively, I relied upon my own expertise to grade students' work, providing comments in the margins for areas that could use additional support or pointing out flaws in arguments. What I found was not simply a great deal of bias (I'm looking at you, halo effect), but also inconsistency in the scores themselves, which directly affects the reliability of scoring (Nisbett & Wilson, 1977). Through the use of rubrics I can measure how well my students are mastering course concepts in a valid and reliable way. And while the rubrics continue to evolve, their connection to course outcomes and their ability to provide feedback to my students remain the focal point.

Below are a few of the key tenets I use for creating valid and reliable rubrics. I add to this list regularly but these are some of the basics:

Descriptors
Meaningful. The concepts you are evaluating should be meaningful and specific. If you want the student to report on the physical description of the school in which they are observing students, say so!
Format Specific. If it's a paper, don't forget to add a descriptor for grammar. If it's a presentation, don't forget to add a section for proper presentation etiquette (e.g., eye contact, pacing, logical progression).

Degrees
Quantity. For example, how many pieces of evidence must be cited between a paper that is a 3 (meets expectation) and a 4 (exceeds expectation)?
Quality. Specifically, how rich and detailed must each piece of evidence be in order to distinguish between the aforementioned 3 and 4?

Exemplars
After many years of teaching the same class, even if the assignments change dramatically, you will have collected an arsenal of outstanding student work. I always share exemplary work with my students once they have completed their initial planning, so that their goals and objectives remain their own and the exceptional work of their colleagues is viewed for conceptual understanding.

These are just a few of the many considerations I make when writing rubrics in an effort to improve validity, or the ability to meaningfully assess the objective of each assignment. But every domain and every piece of work is different, and I'm curious:

In what ways do you provide feedback for your students?
What suggestions do you have for writing better scoring mechanisms?
What assessments work in your classroom that you would suggest giving a try?


References
Andrade, H. G. (2000). Using rubrics to promote thinking and learning. Educational Leadership, 57(5), 13-19.
Mertler, C. A. (2001). Designing scoring rubrics for your classroom. Practical Assessment, Research & Evaluation, 7(25), 1-10.
Nisbett, R. E., & Wilson, T. D. (1977). The halo effect: Evidence for unconscious alteration of judgments. Journal of Personality and Social Psychology, 35(4), 250.

Tuesday, November 18, 2014

Great Gen Ed Assessment

This past Friday, faculty and staff from all over the state came to Hunter to talk about What Every College Graduate Should Know, otherwise known as general education. The topic under discussion? How you know when they know: the assessment of students' general education.

The conference began with rousing openings from our Provost, Vita Rabinowitz, and Lucinda Zoe, CUNY's University Dean of Undergraduate Studies. Each presenter then offered a variety of methods and tools to discover what their college graduates were leaving with, tailored to the needs, interests, and capacities of each academic setting.

Jay Deiner at NYC College of Technology (CUNY) discussed an elegant way to layer the assessment of general education on top of an existing program-level assessment, using, for example, one assignment (a lab report) to assess both writing in chemistry and writing as a general education competency. This had the natural advantage of relying on assignments and pedagogical discussions already ongoing to inform a school-wide comparison across disciplines.

Andrew Sidman at John Jay College of Criminal Justice (CUNY) also shared his experience interweaving program-level assessment and general education assessment, introducing an innovative method of indirect assessment that dramatically expands the scope beyond what can be assessed directly by mapping and aggregating the results of individual class rubrics.

Anne Goodsell Love at Wagner College in Staten Island and Gladys Palma de Schrynemakers and Melissa Antinori from LIU/Brooklyn walked participants through the multiple thoughtful strategies they employed over the years to directly assess writing for all graduates, highlighting challenges of design and feasibility. Both discussed the critical roles of college-wide organizational structures to carry out the process and encourage sustained faculty collaboration over the longer term.

An inspiring conference, to say the least! While I came away with lots of big and small ideas to discuss with my colleagues, five simple pieces of advice stand out:

1) Build on what you have
2) Reach out to others
3) Keep it simple
4) Stay committed
5) Be adventurous!

We are all undertaking a grand experiment: trying - with the greatest possible patience, open-mindedness and integrity - to discover what our students know upon graduation. I look forward to it!

Tuesday, November 11, 2014

How do you know when you don't know?

Educators, psychologists, and philosophers alike have been contemplating this question for centuries. In fact, just this morning, in a Piagetian moment, I asked my 6-year-old son the same question: how do you know when you don't know? What do you do? His response matched that of the aforementioned erudite scholars while echoing precisely my journey in assessment: you ask!

This is where I began my assessment journey: with a question! In my scholarly work I've studied the way people come to know what they don't know by looking at the questions they ask. In my teaching, I try to understand what it is students don't know by modeling and encouraging questioning. Since the research on student questioning is dismal at best, I've dabbled in clickers and digital check-ins to quickly assess the general knowledge level of my students before each class. Depending on their responses, I know whether I can skip to the activity where we actively engage with the content, or whether we need to review the readings together to ensure understanding.
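
As a rough illustration (not any particular polling tool), a digital check-in might ask students to rate their confidence with each assigned reading, and a quick average per topic can then drive that review-or-proceed decision. In the Python sketch below, the topics, ratings, and 3.5 cutoff are all invented for the example.

    # Pre-class check-in: students rate their confidence with each reading topic
    # from 1 (lost) to 5 (solid). Topics, ratings, and the cutoff are invented.
    checkins = {
        "formative vs. summative assessment": [4, 5, 3, 4, 5],
        "reliability and validity": [2, 3, 2, 1, 3],
    }

    REVIEW_THRESHOLD = 3.5  # illustrative cutoff for "review together" vs. "go to the activity"

    for topic, ratings in checkins.items():
        average = sum(ratings) / len(ratings)
        plan = "review the readings together" if average < REVIEW_THRESHOLD else "skip to the activity"
        print(f"{topic}: average confidence {average:.1f} -> {plan}")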

There are other questions embedded here, within the questions of what students want to know and what they are ready to know: namely, how do they learn? How do students study? When do they know they've mastered the content? In this way, my class on assessment also encompasses prevalent learning theory and the dissemination of novel research that challenges students' assumptions about the way learning happens best, so that learning can be measured authentically and effectively. In other words, if you aren't a strong learner, you are less likely to adequately recall content and will likely not succeed in the assessment. It is then my job to guide the skills and wills behind good learning techniques, to evaluate whether that learning sticks, followed by what we can do to extend that knowledge.

As an educational psychologist teaching assessment courses I constantly struggle to meaningfully assess whether or not my students grasp key content in each and every course meeting and whether their understanding is deep enough such that they can apply their learning in their own classrooms. I hope you'll share your best practices so that together we can build our cognitive toolbox to encourage our students' continued success.

References
Bercher, D. A. (2012). Self-Monitoring Tools and Student Academic Success: When Perception Matches Reality. Journal of College Science Teaching, 41(5), 26-32.
Dillon, J. T. (1988). The Remedial Status of Student Questioning. Journal Of Curriculum Studies, 20(3), 197-210. 
Dunlosky, J. (2013). Strengthening the student toolbox. American Educator, 13.


Sunday, November 9, 2014

Integrated Assessment to Improve Instructional Practices and Curriculum Design

As the course Coordinator for English 220 (Introduction to Writing about Literature), I oversee a comprehensive training program for our new faculty and an ongoing development program for all of our instructors. Ranging from syllabus construction, writing pedagogy, and teaching the literary genres to assignment design, classroom management, and group work/peer review methods, the hands-on program provides teachers with essential tools for creating a successful course. But what does a successful course (which is to say a substantial and meaningful educational experience for students) look like? What does it do? And how do we know that the course is successful? To begin to answer these questions, we devote a good deal of time to discussing what I call integrated assessment. That is, we examine how curriculum, instructional, and assignment design coordinate with assessment design and implementation. I am struck every semester by how little new teachers have thought about assessment beyond the need to assign and grade papers, and to decide on the percentage of the final grade each assignment is worth. I am also struck by how enthusiastically they embrace the idea of assessment as a way to measure their effectiveness and to improve teaching.

Using assessment as a way to improve our instructional practices, assignment design, and curriculum design is one of my main interests in participating in this blog. But it's not the only one. I would also like to engage in discussion across the disciplines, to explore, for instance, how assessment can and should differ in the humanities and the social sciences. Other questions come to mind: How can we make use of more qualitative assessments? Is the language of Learning Outcomes effective in emphasizing student understanding? And what are the differences (beyond the semantic) between learning and understanding? Between knowing and understanding? When we say that we want students to know certain things, to what extent does that imply doing certain things? In short, when we use words like learning, knowing, doing, and understanding, what assumptions are we making about each separately and all together?

Enough questions for now. I'm happy to be part of the conversation and look forward to the exchange of ideas.



Tuesday, October 28, 2014

A Student's Perspective on Assessment

Before I begin, I would like to introduce myself: my name is Nikki Nagler and I am the new College Assistant to the Office of Assessment. When Meredith asked me to contribute to this blog, I was a bit apprehensive - as someone who is neither faculty nor staff and is new to the world of assessment, I wasn't sure what I could add to the conversation. But ever since I started exploring assessment, I have come to realize the importance of the student perspective. After all, the purpose of good assessment is to help students.

In thinking about my own experiences with assessment, here are four assessment practices I value:

1. What should I expect from this course?
In the past, most of my syllabi have communicated what students should expect from the teacher and the course. Far fewer have communicated what is expected of the students. At the outset of the semester, what I really want to know is: what should I be able to know and do by the end of this course?

2. A variety of methods
Students learn and demonstrate learning in many different ways. I am not a great test-taker, but I like presenting information to my classmates. Therefore, I think it is important for students to have the ability to demonstrate their understanding in different ways. This means relying on a variety of assessments, including multiple-choice tests, written assignments, presentations, and e-portfolios.

3. Meaningful feedback
I find that the most meaningful feedback is both timely and specific. Feedback is most effective if it is received when the information is still fresh. If I receive a graded test three weeks after I took it, chances are I have already moved on to the next topic. Feedback is also more meaningful when students know not only what they got wrong, but why they were wrong. Helping a student understand his or her learning challenge is more beneficial than just correcting the answer itself.

4. Rubrics, rubrics, rubrics
Rubrics help make assessment transparent. They are another great way of explicitly stating student expectations. Providing rubrics to students at the time of an assignment can help students understand the criteria for evaluation so they can prepare appropriately.

As a student, I recommend selecting a practice and trying it out. After all, good assessment improves student learning and enhances teaching effectiveness.