Amanda Cook

DCE 646 – Spring 2011

Policy and Political Problems Associated with Arts Assessment: Some Key Issues and a Vision for the Future

There are three basic problems that seem to permeate the conversation about arts assessment, and I would offer that these issues also plague many, if not all, other content areas.  The three basic problems are:

  1. Attitudes about assessment
  2. Determining assessment criteria
  3. Use of assessment data

Attitudes are basic human reactions toward issues.  A common phrase like "You need an attitude adjustment" suggests that humans are capable of deciding what their attitude about something will be, but I believe attitudes run much deeper.  Our attitudes are based on our past experiences with a topic and our perception of the current context.  For example, in his article "Apples, Oranges, and Assessment" (2002), Russ Schultz writes that assessment "seems to have overwhelmed and consumed us."  This statement suggests that Schultz (and, he assumes, others) feels that assessment is oppressive in some way.  This attitude is probably based on the fact that Schultz had previously been part of a less assessment-focused system in which he felt comfortable, and now feels outside pressure to conform to a more assessment-driven system.  His reaction to the change is as natural as that of any species forced to realign its survival habits in a short period of time.  Perhaps, as Robert Sabol suggests in his 2004 article "The Assessment Context: Part One," Schultz was trained in a Lowenfeldian philosophy, "which holds that children's artwork should not be assessed," and is now being asked to function in a discipline-based system, which "advocates assessment."  Whether Schultz resides in the Lowenfeldian or the discipline-based camp, his language suggests unrest with the current state of arts assessment, and his attitude echoes that of many arts teachers.  Richard Colwell may be able to identify why arts teachers feel so defeated by the assessment and accountability systems under which we now operate.  In his 2003 article "The Status of Arts Assessment: Examples from Music," Colwell says,

Assessment is so deeply embedded in the teaching of skills that there has been no perceived need for a greater emphasis on assessment—with skills-based instruction, we need primarily to improve our feedback and communication.

Colwell’s words struck me as I read them several times.  I immediately understood what he was saying because it resonated with my own struggle with assessment, both my assessment of students and administrative assessment of my teaching.  Whenever I am observed by a non-artist, I find that I spend much of my time explaining how 21st-century skills like critical thinking, reflective practice, and collaboration are being used in the class.  It seems so inherent to dance that every activity, from technical skill-building to critical analysis of choreography, requires those 21st-century skills, but outside observers are looking for such a short and specific list of easily identifiable activities that they often miss what is obvious to me.  Because I have been enveloped in dance assessment, I forget how hard it is for others to see, so I have often attempted to quantify my assessments.  I have given daily participation grades, project grades, self-assessment grades, written exam grades, video performance grades, and many more.  As I write this, however, I cannot remember a time when I was questioned by a student, administrator, or parent over any grade that I have assigned.  Therefore, my attitude toward assessment has, in recent months, shifted toward Colwell’s suggested focus on communication and feedback.  In my informal assessment of student values, I have learned that students, for the most part, do not value letters or numbers; they are not bothered by zeros, and they have not taken (and possibly never will take) the time to ask for a reversal of any numeric grade.  However, they do value affection, praise, improvement, pride, personal comments, and challenge.  Maybe my attitudes will shift again, but for now I will use formative assessment as much as possible and summative assessment only when necessary.

Moving away from the less formal, more personal subject of attitudes toward assessment and into the complex issue of determining assessment criteria, I am tempted to rant about the relocation of that responsibility from the teacher to the district, the state, and sometimes the nation.  That is an appropriate conversation to have, but I will attempt to remain focused on the role of the arts teacher in the K-12 classroom.  Richard Colwell reminds us that, “In the arts, we rely on auditions, portfolios, published reviews, and written doctoral exams” (Colwell 2003) to determine which artists are accepted and which are not.  All of these methods of assessment, granted some only at levels beyond K-12 education, rely on the expertise of other artists to determine what good work is.  This suggests that assessment in the arts, and of artists, may be somewhat more complicated than assessment in disciplines that rely on right-or-wrong answers, like math, though I am sure history, science, and language arts teachers would argue that their disciplines should also be assessed more like the arts.  If we took the panel approach in K-12 assessment, whom would our panel consist of?  Unfortunately, many arts educators are the only person in their discipline at their school.  Perhaps the theatre, music, and visual arts teachers could understand dance well enough to help a dance teacher with assessment, but that would require a great amount of time outside the school day.  Perhaps students could participate in the assessment of their classmates.  While asking for their input could prove useful for assessing their evaluation of works of art, most teachers would be hard-pressed to explain how they avoided students’ subjective opinions in this type of assessment.  Perhaps students could assess themselves.  Self-assessment is a very valuable tool for developing reflective practice, but it might draw fire from administrators if it were the only tool for determining student learning.  Finally, we are left with the option of teacher-determined assessment (which I imagine accounts for most arts assessment).  After all, the dance teacher is most likely the best-educated dance expert in a school.  But, as Robert Sabol points out,

In the field of art education, content [is] largely idiosyncratic and lack[s] uniformity.  Numerous factors account for the divergence: difference in local resources, needs and values of the community, funding, facilities, and staffing. In addition, art education often reflect[s] the art teachers’ individual interest or skills and the quality of their pre-service training. (Sabol 2004)

What Sabol is suggesting is that because art education programs are not uniform and not all schools and communities are the same, it is hardly reasonable to expect that, left to their own devices, art educators will function uniformly in their assessment of student work; it is therefore not feasible to expect all programs to conform and assess according to national standards.  This is problematic for art educators who fight for the arts to be recognized as core subjects.  It is also problematic for beginning teachers, or teachers beginning at a new school, who must quickly take inventory of students’ previous learning, interests, and values while also learning the ropes of the administration’s methods for evaluating teacher effectiveness.  My response to this challenge of assessing students has varied widely as I have grown as an artist and educator.  First, I determine what learning I can best evaluate.  For example, I have the greatest experience with ballet technique; therefore, when we do a unit on ballet technique, I give specific corrective feedback daily and a structured movement-based test at the end.  On the other hand, I have limited experience in West African dance.  When I teach this technique, I focus on its cultural, exposure, and community-building aspects.  My feedback during the unit has little to do with technique and much to do with how students are branching out, interacting with classmates, and valuing the culture from which the dancing is born.  The second determination I make when considering my assessments is the values that my school promotes.  My current school struggles with reading and writing literacy.  With a high percentage of students involved in special education or learning English as a second language, I strive to incorporate rich experiences involving reading and writing.  These experiences involve the use of technology, finding authentic audiences (so that students will do their best work), and constant reflection and evaluation, both of themselves and of me.  All of these methods are subversive in a sense, because the students are rarely aware that I am trying to enhance their reading and writing skills.  Third and finally in my process for determining assessments, I consider purpose.  I often ask myself, “Who benefits from my assessing this?”  This questioning was not inherent for me, but rather developed through the exhaustion of poring over assessment data in an attempt to fairly grade each of my one hundred eighty (give or take a few) students each quarter.  In the beginning, I would kill myself with data.  Then, the next quarter, probably because of burnout from the last, I would lazily assign subjective grades (which were usually higher than what students would have earned based on assessment data).  But, as I mentioned before, I never heard feedback on any of the grades, intricate or lazy, so I began to ask myself what the data was for.  Since then, I have administered many assessments that were never calculated into a student’s report card grade.  Those assessments gave me feedback or provided a check-in for group assignments.  Other times, I am looking to provide students with individual feedback and goal-setting ideas.  In those cases, I want students to know where they currently are and where I think they are capable of going.  I see myself continuing on this path until I see another way.

Up until this point, I have generally agreed with Colwell’s (2003) and Schultz’s (2002) writings on assessment with regard to attitudes and determining criteria.  However, the use of assessment data is where Schultz and I take a different path than Colwell.  Looking back at 2002 and 2003 (years in which I was not yet pursuing a career in education), it is fair for me to assume that both Colwell and Schultz have since developed their views on assessment further, but, for the sake of this paper, I will consider only their evaluations at that time.  In his discussion of the political power in funding and judging education based on accountability-driven assessment data, Schultz states, “Therefore, the population assumes, even if superficially, that implementing these assessment models are responsible political actions.” (Schultz 2002)  I would largely agree that most of the public responds best to statistics.  This is a concept that government and business use to their advantage whenever possible.  For example: “We improved our graduation rate by 10%!”  This statistic sounds good, but if one were to probe, one might find that the requirements for graduation were lowered to include more students, or that a certain population (e.g., EC or ESL students) was moved to another school, reducing the dropout and retention rates.  Such statistics are simply not true representations of the actual picture.  When dealing with accountability, Schultz and I agree with the adage often attributed to Einstein: “Not everything that counts can be counted, and not everything that can be counted counts.”  Just as teachers are warned to differentiate assessment in order to get a true picture of student learning, this logic should be extended to the bigger picture in education.  Lawmakers, citizens, administrators, and others should be questioning how assessment is being used.  
Colwell states, “When accountability elicits discussions about education, the results produced usually are positive.” (2003)  While this may be true in some cases, in many others the conversations lead to the public humiliation of teachers over low test scores, or of schools over their methods for dealing with discipline, or to a state’s decision to allow more charter schools.  These are dangerous effects and misuses of assessment data.  Even Colwell admits, “Schools are more apt to be held accountable than students.” (2003)

So where does this leave arts teachers in the public debate over assessment and accountability?  In my opinion, in a very scary place.  Do we conform to the push for high-stakes testing in order to “qualify” our programs to the public?  Or do we continue to argue the intrinsic value of the arts, which cannot (and perhaps should not) be measured?  Maybe there is a hybrid in which students fulfill a mandated number of hours in the arts?  Or, I would offer, perhaps art educators should use their expertise in differentiated and individualized assessment to advocate for a new type of education.  Maybe our understanding could propel a restructuring of the current education system into one that values all subjects and strengths, giving autonomy to teachers for assessment while promoting accountability to the learner, school, community, and beyond.  What would this look like?  Democracy.


References

Colwell, R. (2003). The status of arts assessment: Examples from music. Arts Education Policy Review, 105(2), 11-18.

Sabol, R. F. (2004). The assessment context: Part one. Arts Education Policy Review, 105(3), 3-9.

Schultz, R. A. (2002). Apples, oranges, and assessment. Arts Education Policy Review, 103(3), 11-16.