Dynamic Indicators of Basic Early Literacy Skills (DIBELS) Validity and its Relationship with Reading Comprehension

Introduction to Research

Reading fluency is considered an integral component of the reading process, and it has a strong presence in the classroom. Its importance became evident when the National Reading Panel (2000) pronounced fluency instruction and assessment essential, and fluency was thus incorporated into the Reading First guidelines of No Child Left Behind in 2002 (Shelton, Altwerger, & Jordan, 2009).
Reading fluency has been defined in many ways: as an outcome of decoding and comprehension, a contributor to both decoding and comprehension, the ability to recognize words rapidly and accurately, and the connection readers make between natural phrasing when speaking and phrasal segmentation when reading orally, among others (Abadiano & Turner, 2005).
Nevertheless, Roehrig, Petscher, Nettles, Hudson, and Torgesen (2008) state that fluency is perhaps best defined as having three main components: word recognition accuracy, automaticity, and prosody. Reading with accuracy is the student's ability to read with few or no errors. Reading with automaticity is the student's ability to recognize words quickly and with little effort; it quantifies the student's reading rate. Prosody is the student's ability to read with expression, using intonation, stress patterns, and phrasing.
The No Child Left Behind (NCLB) Reading First program requires that validated standardized reading fluency assessments be used to monitor progress and to identify readers who may not be making sufficient progress toward grade level. The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) is one of the few empirically validated assessments for progress monitoring of fluency (Roehrig et al., 2008). The purpose of this literature review is to explore the validity of DIBELS and its relationship with reading comprehension.
Students who demonstrate prereading skill deficits often fall even further behind in later elementary years. Conversely, students who master essential reading skills in the primary grades are able to maintain progress in later educational years. According to Goffreda, DiPerna, and Pedersen (2009), this is known as the Matthew Effect, in which the "rich get richer, while the poor get poorer." They further state that illiteracy not only limits school success throughout the life span but is also associated with social problems such as school dropout, incarceration, and homelessness (Goffreda, DiPerna, & Pedersen, 2009). It is this realization, along with the National Reading Panel's recommendations, that led to the focus on early identification through screening measures such as DIBELS in order to guide early literacy interventions. The National Institute for Literacy recommended DIBELS as a scientifically research-based assessment, and DIBELS was thus adopted in many states (Shelton, Altwerger, & Jordan, 2009).
Furthermore, early literacy individual growth and development indicators (EL-IGDIs) are also being put in place for pre-kindergarten children in some states (McCormick & Haack, 2011). Goffreda, DiPerna, and Pedersen (2009) state that first grade has been identified as a particularly critical period, since poor first-grade readers have an 88 percent probability of remaining poor readers through fourth grade or beyond. They found that first-grade DIBELS indicator scores were predictive of district and state standardized exams.
Gonzales, Vannest, and Reid (2008) conducted a study to examine the usefulness of first-grade DIBELS for populations other than the general population, specifically for students identified with or at risk for emotional and behavior disorders. The researchers found that DIBELS is efficient and effective for identifying at-risk students in populations beyond general education. In concurrence with these studies, Scheffel, Lefly, and Houser (2012) found that DIBELS is an effective tool for identifying English Language Learners (ELLs) who may be at risk for underachieving in reading.
Combined, these studies affirm the validity of DIBELS for all students, including ELLs and students identified as having emotional and behavior disorders. However, in a more complex study, Yesil-Dagli (2009) found that, on average, ELL students eligible for free or reduced-price lunch (compared to those not eligible), Hispanic ELL students (compared to White ELL students), and male ELL students (compared to female ELL students) read fewer words at the beginning of first grade and demonstrated a slower growth rate.
This directly impacts their DIBELS fluency rate. Paleologos and Brabham (2011) found that DIBELS Oral Reading Fluency (DORF) is effective for predicting the performance of high-income students on overall standardized reading tests but not that of low-income students. According to their research, high-income students demonstrated higher abilities in reading fluency, vocabulary, and reading comprehension than low-income students, even though both groups had achieved "benchmark" proficient scores on DIBELS.
Furthermore, Shelton, Altwerger, and Jordan (2009) analyzed the relationship between DORF and authentic reading and found that students employ different approaches when reading for a DIBELS test and when reading authentic literature. That is, when students read a passage on a DIBELS test, they read quickly to achieve a high rate, but when reading authentic literature they slow down to read for comprehension.
They found that readers in their study read at almost half the rate when reading literature as when reading for fluency assessments (Shelton, Altwerger, & Jordan, 2009). DIBELS testing therefore does not reflect students' true reading rate. The authors concluded that their data showed no connection between DORF scores and students' comprehension when reading authentic literature. In sum, research studies offer both strong support for and strong criticism of the validity of DIBELS and its relationship with reading comprehension.
Furthermore, Martin and Shapiro (2011) found that teachers' judgments, although strongly correlated with student performance, consistently and significantly overestimated students' actual DIBELS performance. In addition, Hoffman, Jenkins, and Dunlap (2009) found that educators were not clear about how DIBELS data should inform and guide their instruction, or were not even sure that DIBELS aligned with state-mandated testing.
Future research is needed in this area. Nevertheless, in states that have adopted DIBELS to comply with No Child Left Behind stipulations, DIBELS is a reality for teachers and their students. It is important, then, that teachers recognize the wide variation in research findings regarding DIBELS' validity and its relationship to reading comprehension and, as with any assessment, not use DIBELS as the sole criterion when determining student achievement.
It should be kept in mind that fluency is only one part of the reading process. However, because DIBELS is in place in many states, an area of concern that arises in the literature is how DIBELS data drive instruction. That is, how do schools use DIBELS data to drive instruction? This question is especially important given the finding by Hoffman, Jenkins, and Dunlap (2009) that teachers are not clear about how DIBELS data should guide their instruction.
If this writer were to draft a tentative research design based on this literature review, the research question would be: How do teachers in Crane School District #13 and Yuma District #1 use their DIBELS data to drive instruction? The purpose of the research would be to identify effective ways schools use DIBELS data to drive teacher instruction. The data would be collected through interviews, questionnaires, and observations.
This type of analysis is known as a qualitative study; however, quantitative data would also be used when analyzing and reporting information from the surveys and questionnaires. This is known as a multiple, or mixed, methods design. According to the learning in Introduction to Research, the best studies include both qualitative and quantitative data. The participants in the study would be administrators, coaches, and teachers. Their responses would provide triangulation for the study; that is, they would confirm that all participants know exactly how the data are driving the instruction taking place in the classroom.
The exact number of participants is not yet known, since the study has not been conducted and consent forms have not been signed. However, random sampling would be used at each school to ensure that survey results are statistically representative of the schools. The instrumentation for the study would be DIBELS data, surveys, and questionnaires. Observations would also be used to triangulate the information from the surveys and questionnaires. The research timeline would be approximately two to three months.
One month would be used to recruit participants and administer the questionnaires and surveys, another month to observe the actual data-driven instruction in the classroom, and a final month to analyze the data. The survey would include the following tentative questions:

1. What steps are taken to analyze DIBELS data?
2. Once the data are analyzed, how are the results used to drive teachers' instruction for students classified as "at risk"?
3. Once the data are analyzed, how are the results used to drive teachers' instruction for students classified as "some risk"?
4. Once the data are analyzed, how are the results used to drive teachers' instruction for students classified as "low risk"?

References

Abadiano, H. R., & Turner, J. (2005). Reading fluency: The road to developing efficient and effective readers. The New England Reading Association Journal, 41(1), 50-56.

Goffreda, C. T., DiPerna, J. C., & Pedersen, J. A. (2009). Preventive screening of early readers: Predictive validity of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS). Psychology in the Schools, 46(6), 539-552. doi:10.1002/pits.20396

Gonzales, J. E., Vannest, K. J., & Reid, R. (2008). Early classification of reading performance in children identified or at risk for emotional and behavioral disorders: A discriminant analysis using the Dynamic Indicators of Basic Early Literacy Skills (DIBELS). Journal of At-Risk Issues, 14(1), 33-40.

Hoffman, A. R., Jenkins, J. E., & Dunlap, S. K. (2009). Using DIBELS: A survey of purposes and practices. Reading Psychology, 30, 1-16.

Martin, S. D., & Shapiro, E. S. (2011). Examining the accuracy of teachers' judgments of DIBELS performance. Psychology in the Schools, 48(4), 343-356.

McCormick, C. E., & Haack, R. (2011). Early literacy individual growth and development indicators (EL-IGDIs) as predictors of reading skills in kindergarten through second grade. International Journal of Psychology: A Biopsychosocial Approach, 7, 29-40.

National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Washington, DC: National Institute of Child Health and Human Development.

Paleologos, T. M., & Brabham, E. G. (2011). The effectiveness of DIBELS oral reading fluency for predicting reading comprehension of high- and low-income students. Reading Psychology, 32, 54-74.

Roehrig, A. D., Petscher, Y., Nettles, S. M., Hudson, R. F., & Torgesen, J. K. (2008). Accuracy of the DIBELS oral reading fluency measure for predicting third grade reading comprehension outcomes. Journal of School Psychology, 46, 343-366.

Scheffel, D., Lefly, D., & Houser, J. (2012). The predictive utility of DIBELS reading assessment for reading comprehension among third grade English language learners and English speaking children. Reading Improvement, 49(3), 75-95.

Shelton, N. R., Altwerger, B., & Jordan, N. (2009). Does DIBELS put reading first? Literacy Research and Instruction, 49(2), 137-148.

Yesil-Dagli, U. (2009). Predicting ELL students' beginning first grade English oral reading fluency from initial kindergarten vocabulary, letter naming, and phonological awareness skills. Early Childhood Research Quarterly, 26, 15-29.