Let's Go Learn Knowledge Base
DORA Comprehension Sub-Test FAQs
 
What is the nature of DORA's Silent Reading sub-test?

DORA's Silent Reading sub-test is composed of leveled passages, each followed by six comprehension questions. Students are invited to read the passages carefully, taking as much time as they need to thoroughly understand what they read. Afterward, they answer multiple-choice questions about what they have read; the questions and answer choices are read aloud to the students. Each comprehension question requires the child either to recall an important detail from the passage or to make an inference about a key concept in the passage.

Why do you use non-fiction passages?

Using non-fiction passages on topics taught in most classrooms across the nation means less variability in assessment results. The language used in non-fiction passages is easier to standardize, because it does not contain the conversational colloquialisms that are often regionalized in the U.S. Non-fiction passages also offer a range of topics common to many classrooms, reducing bias due to race, gender, and culture. While non-fiction is sometimes more difficult for children to read than fiction, Let's Go Learn has made a conscious effort to control for this by writing comprehension questions that are not too difficult and by creating an administration protocol that ensures children only see questions within their comfort level: the sub-test raises and lowers passage difficulty according to the student's success on DORA, as illustrated below.
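
To make the adaptive idea concrete, here is a minimal sketch, in Python, of how a leveling rule of this kind could work. It is only an illustration of the general concept, not Let's Go Learn's actual algorithm: the function name, the level range, and the score thresholds below are all hypothetical.

    # Hypothetical sketch of an adaptive leveling rule -- not DORA's actual logic.
    # Assumes each passage has six comprehension questions and that passage
    # levels run from 1 (easiest) to 12 (hardest); both assumptions are illustrative.
    def next_passage_level(current_level, num_correct, min_level=1, max_level=12):
        """Pick the level of the next passage from the score on the current one."""
        if num_correct >= 5:                          # strong score: try a harder passage
            return min(current_level + 1, max_level)
        if num_correct <= 3:                          # weak score: drop to an easier passage
            return max(current_level - 1, min_level)
        return current_level                          # borderline score: stay at this level

    # Example: a student at level 4 answers all six questions correctly,
    # so the next passage is drawn from level 5.
    print(next_passage_level(4, 6))  # -> 5

A rule like this keeps students working near their comfort level, which is why they rarely see questions far beyond their ability.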

Is a false high score likely on DORA's Silent Reading sub-test?

While it is possible for a child to receive a score on DORA that is higher or lower than his or her comprehension ability, it is very unlikely when the assessment is administered properly. A false high score is particularly unlikely, because DORA is a rigorous comprehension assessment that demands that children recall facts and make inferences about the text. If a child earns a score that is much higher than his or her actual reading comprehension, it is likely that the child possesses an unusually high degree of background knowledge about the passage topics. Low scores are more likely to occur when students are not properly prepared to take the DORA assessment or when they are fatigued on the day of testing. Thus, it is very important for teachers or parents to properly set students' expectations before administering DORA.

Why do the Silent Reading sub-test scores on DORA seem low for my students?

Many factors affect a student's ability to successfully comprehend a text. Some students struggle with decoding the text they encounter or with the language structures (i.e., phrases and idioms) used. Other students may possess limited background knowledge about the topic of the text or they may not be interested in what they're reading. While Let's Go Learn's comprehension test presents students with non-fiction topics that they are likely to have encountered in school, some groups of students may have less familiarity with the subject matter in DORA than in other comprehension assessments.

Another factor that can make scores on DORA seem lower is if your students have previously been tested using traditional teacher-mediated pen-and-paper assessments. On those assessments there is greater room for discrepancy, because teachers often ask follow-up questions to clarify students' responses and students often become familiar with the administration protocol. Let's Go Learn's DORA removes some of the variability associated with teacher-mediated assessments.

Also, because DORA is criterion-referenced (that is, based on a set of criteria identified by experts), its items might differ from those of other criterion-referenced assessments you may have encountered. This does not diminish the utility of DORA's comprehension sub-test or mean that it does not produce helpful information. It simply means that its difficulty must be considered relative to other available comprehension tests.

The avoidance of false positives, mentioned in the previous question, is another factor that can make scores appear lower. If other comprehension measures used in the past have a lower degree of false-positive aversion, then the difference between DORA and those measures may appear significant. Our philosophy is that it is worth avoiding incorrectly labeling a low-comprehension student as high, even if it occasionally means labeling a high-comprehension student as slightly lower than his or her real ability. Make no mistake: every comprehension measure must accept one kind of error or the other; there is no way to avoid bias entirely.

One final factor to consider is the student's motivation. Motivation is the single factor that causes the greatest variance in test scores, and longer assessments run a higher risk of fatiguing students. Therefore, students need to be properly introduced to the idea of DORA, and teachers should stress that the assessment will help them do a better job of instructing their students. The assessment should also be broken up into manageable sessions, and students should be monitored during testing. If some students seem fatigued, the teacher should consider stopping the assessment and resuming it later.

In summary, many factors might, on occasion, make students' scores on DORA's Silent Reading sub-test appear lower than their actual reading ability or than their scores on other reading measures. However, when the biases of each measure are examined and DORA is interpreted for what it seeks to do, these discrepancies, if any, can usually be explained or accounted for. Furthermore, there is a low probability that any discrepancy between measures will be large enough to negatively affect a particular student's instructional plan.

Why aren't students allowed to re-read the passages when answering questions?

Allowing students to re-read passages introduces a new variable to the assessment that is difficult to control for. That is, some students choose to re-read the passage while others choose not to. Allowing students to re-read a passage thus increases the variability of the comprehension sub-test score.

By allowing students to read the passages only once, DORA provides a better indicator of how well students will perform in real reading situations. This gets back to the purpose of DORA, which is to provide diagnostic data for teachers to guide instruction.
