Friday, June 3, 2011

Week Four Blog #7


The two ELL students I chose to administer this test to are both Hispanic. The first student, a girl, was chosen by the language arts teacher on my team and me because her parents were concerned about her reading level as she prepared to enter high school. She had tested out of an ESL class in 7th grade, but her parents and the 8th grade team felt she would have benefited from a more structured reading environment like READ-180 instead of a mainstream language arts class. The other student, a young male, was on an ILP because he was an ELL student. I chose him as a candidate for the reading test because I was curious about his reading level, since the READ-180 teacher said that he would not be in READ-180 next year in high school.
            The first student, Robin, was expecting the test. She is a student very invested in her image at school, so the language arts teacher and I pulled her out of an elective to administer the test (DRA2). Even though she knew the test was coming, we gave her background information on it to help calm her nerves. She read aloud from the benchmark book, The Missing Link, an 8th grade level book. She was confident as she read aloud, and cautious at the same time. She read the passage in one minute and 52 seconds. For a 211-word passage, this time put her at the instructional level of reading. Her oral reading rate was 113.03 words per minute. Robin had 5 miscues while reading, which put her in the independent category for accuracy. Her miscues included a couple of omissions, but mostly substitutions of words that were visually similar. While I calculated Robin's oral reading fluency, she continued to work on the comprehension part of the assessment. She was very cautious as she completed this part. She did not speak to me or the language arts teacher who helped me administer the test, but instead worked quietly, using the entire 50 minutes. Once she was finished, I was able to grade the comprehension part of the test, and she received a 17 out of 24. Robin comprehended at the instructional level, right on the cusp of the independent level. The reflection part of the assessment showed that she could use some help identifying important information and/or key vocabulary in the text. She had struggled before with identifying the significant message in a story and discussing theme, and the test proved we still needed to work on this. As teachers, we know that we need to teach Robin how to support opinions with details from the text.
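For anyone curious how the oral reading rate above falls out of the raw numbers, here is a minimal sketch. Python is just my choice of notation for the worked example; the calculation itself is only words read, divided by elapsed seconds, scaled to one minute:

```python
# Oral reading rate: words read, divided by elapsed time, scaled to one minute.
def words_per_minute(word_count, minutes, seconds):
    elapsed_seconds = minutes * 60 + seconds
    return word_count / elapsed_seconds * 60

# Robin: a 211-word passage read in one minute and 52 seconds.
robin_rate = words_per_minute(211, 1, 52)
print(round(robin_rate, 2))  # about 113 words per minute
```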
I felt that this was a fair assessment of Robin's capabilities in reading and writing, and I was surprised that it actually showed her reading lower than she had scored on CSAP and on Acuity, our district benchmark test. Robin had been proficient in writing and partially proficient in reading, but this DRA2 showed her closer to being partially proficient.
            The second student, Victor, was the student I hand-picked for this assessment. Victor is a student who does not come across as motivated by classwork, but he still participates in class. Victor has been in the READ-180 class for the last two years. His READ-180 teacher was very proud of him and said that he would not be in the READ-180 classroom in high school. Victor also took the DRA2 and started off by reading the same benchmark story, The Missing Link, since that was the 8th grade book. Victor was cheery and more than willing to participate, and he was very excited about the book. The Missing Link is a science fiction novel, and he was curious to read about robots. Victor read his passage aloud in one minute 46 seconds, which put him at the instructional level. As he read aloud, I noticed that he paused in the wrong places and had to reread some of what he had already read; there was repetition in Victor's reading. I calculated Victor's reading rate to be 119.4 words per minute. Victor had only one substitution and 3 repetitions, which put him on the high end of independent, almost advanced, in accuracy. Victor was doing really well, and I could not wait for him to finish the comprehension part of the test.
            Victor took only 40 minutes to complete the second part of the DRA2. At first glance his test did not look nearly as complete as Robin's. He did not even write full sentences in some of his reflection pieces. His during-reading notes were sufficient. He missed one character and got 2 out of 5 of the main events, but he nailed the resolution and summary of the story. Victor's interpretation of the story was on target, and his reflection included the main idea. I was shocked that Victor was able to hit the main idea and express it in so few words. Here was a student who I did not expect to read or comprehend at grade level, and yet his comprehension score was 18 out of 24! Victor surpassed what all the teachers on our team expected of him. His CSAP and Acuity scores had him right on the cusp between proficient and partially proficient. He was at 69% on Acuity, the district's benchmark test, and at 66% a student is considered partially proficient. He is the kind of student who makes me reflect on schools' need to develop fairer, more authentic reading and writing assessments. The NEA (2003) states, "Most assessment systems are out of balance, with standardized tests dominating. …no single assessment can meet everyone's information needs… To maximize student success, assessment must be seen as an instructional tool for use while learning is occurring and as an accountability tool to determine if learning has occurred. Because both purposes are important, they must be in balance" (pg. 6). Although Victor's score was on the lower end of independent and his reading was still instructional, I feel that if Victor had slowed down to really write down everything he was thinking and feeling, he could have done even better.
Running records were created for closely observing and recording a child's oral reading behaviors, and for planning instruction (Morrow, 2009, pg. 47). Running records document the number and types of errors a child makes while reading aloud. One benefit of running records is that they help determine appropriate material for instructional purposes and for independent reading (Morrow, 2009). Running records allow teachers to run an assessment-driven, differentiated program that targets the specific needs of students (NEA, 2003). First, they allow the teacher to identify an appropriate reading level for the student. Second, they reveal how well a student is self-monitoring his or her reading, and finally, they identify which reading strategies a student is using (or not using) (NEA, 2003). While performing a running record, a student's frustration level with reading can also be identified. This helps instructors come up with strategies for instruction based on the errors made during the assessment. Running records capture the errors students make in oral reading; they are not intended to evaluate the student's ability to comprehend text (Morrow, 2009, pg. 47).
            Running records are intended to be administered with a benchmark book accompanying the assessment. This is not always the case, though. A running record can be taken on a book the reader has never seen or on one that has been read once or twice. There are two parts to the running record assessment: the running record itself and a comprehension check. To perform a running record, one uses a standard set of symbols and marking conventions to record a child's reading behavior as he or she reads from the book. When the session is complete, the recorder can calculate the reading rate, error rate, and self-correction rate. It is the role of the teacher to talk to children about the types of errors they make in a running record and to give them strategies to help figure out what they are reading (Morrow, 2009, pg. 50).
            Again, the purpose of taking a running record is to document a child's oral reading behavior; determine error, accuracy, and self-correction rates; and help the teacher plan instruction. To score a running record as suggested by Morrow (2009), record the number of words in the testing passage. Then count the number of errors the child made and subtract that from the total number of words in the passage. Divide that difference by the total words in the passage, then multiply by 100. The result is the reader's percent of accuracy (Morrow, 2009, pg. 48). This scoring process is essential for teachers to internalize. They can ask themselves why errors are being made in the first place and determine the next step for instruction. In the end, running records allow teachers to make data-based decisions to guide whole-class instruction (using modeled or shared reading) and small-group instruction (guided reading), and to ensure students are reading appropriately challenging texts during independent reading (NEA, 2003).
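To make Morrow's scoring arithmetic concrete, here is the calculation applied to Robin's passage from earlier (211 words, 5 miscues); Python is used only as worked-example notation:

```python
# Running-record accuracy: (total words - errors) / total words, times 100.
def accuracy_percent(total_words, errors):
    return (total_words - errors) / total_words * 100

# Robin: a 211-word passage with 5 miscues.
print(round(accuracy_percent(211, 5), 1))  # 97.6 percent accuracy
```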

References:
Morrow, M. (2009). Literacy development in the early years: Helping children read and write. Boston, MA: Allyn and Bacon.

National Education Association of the United States (2003). Balanced assessment: The key to accountability and improved student learning. Portland, OR. pp. 1-16. Retrieved May 31 from: http://blog.classroomteacher.ca/23/running-records-and-miscue-analysis/
