Let's Go Learn Knowledge Base
Question
DORA Comprehension Sub-test FAQs
 
Answer
What is the nature of DORA's Comprehension sub-test?

DORA's Comprehension (Silent Reading) sub-test is composed of leveled passages, each followed by six comprehension questions. Students are invited to read the passages carefully, taking as much time as they need to thoroughly understand the text. Afterward, they answer multiple-choice questions about what they read; the questions and answer choices are read aloud to the students. Each comprehension question requires the child either to recall an important detail from the passage or to make an inference about a key concept in it.

Why do you use non-fiction passages?

Using non-fiction passages on topics taught in most classrooms across the nation reduces variability in assessment results. The language of non-fiction passages is easier to standardize because it does not rely on conversational colloquialisms, which are often regional in the U.S. Non-fiction passages also offer a range of topics common to many classrooms, reducing bias due to race, gender, and culture. While non-fiction is sometimes more difficult for children to read than fiction, Let's Go Learn has made a conscious effort to control for this in two ways: by writing comprehension questions that are not overly difficult, and by using an administration protocol that keeps children within their comfort level, raising and lowering passage difficulty according to their success on DORA.

Is a false-high score likely on DORA's Comprehension sub-test?

While it is possible for students to receive DORA scores that are higher or lower than their true comprehension ability, this is very unlikely when the assessment is administered properly. A false-high score is particularly unlikely because DORA is a rigorous comprehension assessment that requires students both to recall facts and to make inferences about the text. If a student earns a score much higher than his or her actual reading comprehension, the student likely possesses an unusually high degree of background knowledge about the passage topics. Low scores are more likely when students are not properly prepared to take DORA or are fatigued on the day of testing. It is therefore very important for teachers or parents to set up students' expectations properly before administering DORA.

Why would DORA's Comprehension sub-test score be higher than other Lexile or readability tests?

When a student is an English language learner (EL, ELL, ELD), or comes from a household where academic language is limited but the student possesses strong comprehension strategy skills, the DORA comprehension score may appear high relative to other measures. DORA's passages do not include high-level vocabulary; we have written them specifically to prevent the sub-test from misdiagnosing low vocabulary as low comprehension.

Consider this example: "The eloquent woman gave a speech at the start of the event." Question: "Did the audience think she spoke well?" The only way to answer this question is to know the meaning of the word "eloquent." We do not do this in DORA's passages. Instead, when there is a vocabulary question, it relies on inferencing skills. Example: "The eloquent woman gave a speech at the start of the event. The audience cheered loudly." Question: "Did the audience think she spoke well?" In this case, students can infer the meaning of "eloquent" through the supporting text.

The reason for this design is that DORA is a multiple-measure assessment. Often, especially in secondary grades, students have strong decoding skills and strong comprehension skills but weak academic vocabulary. These students are frequently placed in remedial reading programs that inaccurately spend time on comprehension strategy work; we know this is one of the biggest breakdowns in student intervention sorting at the secondary level. Such students fall under the DORA profile of "G," and teachers are encouraged to look at the DORA comprehension score in conjunction with the vocabulary score. Lexile measures cannot separate these students, because standard single-measure comprehension tests are really joint vocabulary and comprehension strategy assessments: if you are low in one, your overall score is low. DORA is designed to avoid this confound.
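To make the sorting idea concrete, here is a minimal sketch of how the sub-test scores might be combined to flag this pattern. The function name, the grade-equivalent score scale, and the one-grade-level threshold are illustrative assumptions, not DORA's actual profile rules.

    # Hypothetical illustration of the "profile G" pattern described above:
    # strong decoding and comprehension, weak academic vocabulary.
    # Grade-equivalent scores and thresholds are assumptions for illustration.
    def looks_like_profile_g(decoding, comprehension, vocabulary, enrolled_grade):
        strong_decoding = decoding >= enrolled_grade
        strong_comprehension = comprehension >= enrolled_grade
        weak_vocabulary = vocabulary <= enrolled_grade - 1  # a grade or more behind
        return strong_decoding and strong_comprehension and weak_vocabulary

    # A seventh grader at grade level in decoding and comprehension but two
    # grades behind in vocabulary is flagged: target vocabulary instruction,
    # not comprehension strategy work.
    print(looks_like_profile_g(7.0, 7.5, 5.0, 7))  # True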

Why do the Comprehension sub-test scores on DORA seem low for my students?

Many factors affect a student's ability to successfully comprehend a text. Some students struggle with decoding the text they encounter or with the language structures (e.g., phrases and idioms) used. Others may have limited background knowledge about the topic of the text or may not be interested in what they are reading. While Let's Go Learn's comprehension test presents students with non-fiction topics that they are likely to have encountered in school, some students may have less familiarity with the subject matter in DORA than in other comprehension assessments.

Another factor that can make scores on DORA seem lower is if your students have been tested using traditional teacher-mediated pen-and-paper assessments. On these assessments, there is more room for discrepancies, as teachers often ask follow-up questions to clarify students' responses and students become familiar with the administration protocol. Let's Go Learn's DORA removes some of this variability associated with teacher-mediated assessments.

Also, because DORA is criterion-referenced (that is, based on a set of criteria identified by experts), its items may differ from those on other criterion-referenced assessments students have encountered. This does not undermine the utility of DORA's comprehension sub-test or mean that it fails to produce helpful information; it simply means that its difficulty must be considered relative to other available comprehension tests.

The avoidance of false-high scores (false positives), discussed in the previous question, can also make scores appear lower. If other comprehension measures used in the past are less averse to false positives, the difference between DORA and those measures may appear significant. Our philosophy is that it is worth avoiding incorrectly labeling a low-comprehension student as high, even if it occasionally means labeling a high-comprehension student slightly lower than his or her real ability. Every comprehension measure must choose one of these two possibilities; there is no way to avoid bias entirely.

One final factor that should be considered is the student's motivation. Longer assessments do run a higher risk of fatiguing the student, and the factor that causes the greatest test score variance is student motivation. Therefore, students need to be properly introduced to the idea of DORA. Teachers should stress that this assessment will help them do a better job of instructing students. Also, the assessment should be broken up into manageable sessions, and students should be monitored during testing. If students seem fatigued, the teacher should consider stopping the assessment and resuming it later.

In summary, many factors might make it appear, on occasion, that students' scores on DORA's Silent Reading sub-test are lower than their actual reading ability when compared with other reading measures. However, when examining the biases of each measure and interpreting DORA for what it seeks to do, these discrepancies, if there are any, can usually be explained. Furthermore, there is low probability that any discrepancy between measures will be large enough to negatively affect a particular student's instructional plan.

Why aren't students allowed to re-read passages when answering questions?

Allowing students to re-read passages introduces a new variable to the assessment that is difficult to control for. That is, some students choose to re-read the passage, while others choose not to. Therefore, allowing students to re-read passages increases the variability of the comprehension sub-test scores.

By allowing students to read the passages only once, DORA provides a better indicator of how well students will perform in real reading situations. This gets back to the purpose of DORA, which is to provide diagnostic data for teachers to guide instruction.

Why does it show "n/t" for my student's CO score?

If a student does not master the lowest set of high-frequency words and word recognition items, DORA's adaptive logic turns the comprehension sub-test off, and a score of "n/t" appears.
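A minimal sketch of this gating rule, assuming boolean mastery flags (the names here are hypothetical, not DORA's internal logic):

    # Hypothetical sketch of the gating rule described above: if the student
    # does not master the lowest high-frequency-word and word-recognition
    # sets, the comprehension (CO) sub-test is skipped and reported as "n/t".
    def co_score(hf_words_mastered, word_recognition_mastered, run_comprehension):
        if not (hf_words_mastered and word_recognition_mastered):
            return "n/t"  # adaptive logic turns the sub-test off
        return run_comprehension()  # otherwise administer the sub-test normally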

How does the adaptive logic work when students are given passages to read?

The adaptive logic works as follows:

  • If a student reads a passage and answers 4 out of 6 questions correctly (mastery), a passage one grade level higher is given next.
  • If the student answers 5 or 6 out of 6 questions correctly, the test jumps up two grade levels.
  • If the student answers only 3 out of 6 questions correctly (non-mastery), a passage one grade level lower is given.
  • If the student answers 0 to 2 out of 6 questions correctly (non-mastery), a passage two grade levels lower is given.

In addition, students cannot jump above a non-mastered passage, just as they cannot jump below a mastered passage. The idea is to find a "ceiling" condition: a mastered passage with a non-mastered passage one grade level above it. In theory, this is the student's instructional point, the highest level at which the student can read a passage and achieve mastery. The next time the student takes DORA, the first passage given is one grade level above the student's mastery level, and a new form is used to avoid repeating text. Passages in DORA range from grade level 1 to 12 in readability. When reviewing a student's test, it can be hard to tell which passages were given, since the order of presentation may not be clear, but the rules above explain how the student adapted through the comprehension sub-test.
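For readers who prefer to see the rules in code, here is a minimal sketch of the level adjustment described above. The names, the clamping of levels to the 1-12 range, and the exact handling of the "cannot jump above/below" constraints are illustrative assumptions, not DORA's exact implementation.

    # Hypothetical sketch of the adaptive rules described above. Passage
    # levels run 1-12; names and boundary handling are assumptions.
    def next_level(level, correct, mastered, non_mastered):
        # level: grade level of the passage just read
        # correct: questions answered correctly, out of 6
        # mastered / non_mastered: sets of levels already mastered / non-mastered
        if correct >= 5:
            step = 2     # strong mastery: jump two grade levels
        elif correct == 4:
            step = 1     # mastery: move up one grade level
        elif correct == 3:
            step = -1    # non-mastery: move down one grade level
        else:
            step = -2    # 0-2 correct: move down two grade levels
        candidate = max(1, min(12, level + step))
        # Students cannot jump above a non-mastered passage (this sketch caps
        # the jump just below the nearest non-mastered level above)...
        higher_misses = [l for l in non_mastered if l > level]
        if higher_misses:
            candidate = min(candidate, min(higher_misses) - 1)
        # ...just as they cannot jump below a mastered passage (here, the
        # sketch resumes just above the nearest mastered level below).
        lower_hits = [l for l in mastered if l < level]
        if lower_hits:
            candidate = max(candidate, max(lower_hits) + 1)
        return candidate

    def at_ceiling(mastered, non_mastered):
        # Ceiling condition: a mastered passage with a non-mastered passage
        # one grade level above it; in theory, the instructional point.
        return any(l + 1 in non_mastered for l in mastered)

    # Example: 5 of 6 correct on a grade-4 passage moves the student to a
    # grade-6 passage next (assuming no prior results constrain the jump).
    print(next_level(4, 5, mastered={4}, non_mastered=set()))  # 6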
