Frequently Asked Questions

What subjects does NAEP assess, and how are the subjects chosen?

Since its inception in 1969, the National Assessment of Educational Progress (NAEP) has conducted assessments in numerous academic subjects, including the arts, civics, economics, geography, mathematics, reading, science, U.S. history, and writing.

Beginning with the 2003 assessments, national assessments have been conducted every two years in reading and mathematics at grades 4 and 8. The two subjects are assessed in the same year, and initial results are released about six months after administration, in the fall of that year. Results from all other assessments are released about one year after administration, usually in the spring of the following year.

Since 1988, the National Assessment Governing Board has been responsible for selecting the subject areas to be assessed. Furthermore, the Governing Board oversees creation of the frameworks that underlie the assessments and the specifications that guide the development of the assessment instruments. The framework for each subject area is determined through a collaborative development process that involves teachers, curriculum specialists, subject-matter specialists, school administrators, parents, and members of the general public.

How many students participate?

The NAEP assessments are administered to representative samples of students rather than to the entire national, state, or district populations. The number of students sampled depends on the design of each assessment. For instance, a nationally representative sample of approximately 24,100 eighth-graders from 950 schools and 28,100 twelfth-graders from 1,220 schools participated in the 2011 writing assessment. The sample design for this first computer-based writing assessment was not intended to report results for individual states or large urban districts.

See the number of schools and students that participated in the recent NAEP assessments:

 

See more information about the sample sizes and target populations for the recent NAEP assessments:

How has the demographic distribution of students changed between 1990 or 1992 (when the trend lines started) and 2011?

The proportion of Hispanic students has more than doubled between the early 1990s and 2011. At the same time, the proportion of White students has decreased from approximately three-quarters of the population to less than two-thirds. See the NAEP Data Explorer for complete data on changes in the student distribution.

What are the new race/ethnicity categories for the 2011 NAEP assessments?

Beginning in 2011, all students participating in NAEP were identified as belonging to one of the following seven racial/ethnic categories:

  • White
  • Black
  • Hispanic
  • Asian
  • Native Hawaiian/Other Pacific Islander
  • American Indian/Alaska Native
  • Two or more races

NOTE: Black includes African American, Hispanic includes Latino, and Pacific Islander includes Native Hawaiian.

In compliance with new standards from the U.S. Office of Management and Budget for collecting and reporting data on race/ethnicity, additional information was collected in 2011 so that results could be reported separately for Asian students, Native Hawaiian/Other Pacific Islander students, and students identifying with two or more races.

How are students with disabilities and English language learners included in the NAEP assessments?

The NAEP program has always endeavored to assess all students selected as a part of its sampling process. In all NAEP assessments, accommodations are provided as necessary for students with disabilities (SD) and/or English language learners (ELL).

Inclusion in NAEP of an SD or ELL student is encouraged if that student (a) participated in the regular state academic assessment in the subject being tested, and (b) can participate in NAEP with the accommodations NAEP allows. Even if the student did not participate in the regular state assessment, or needs accommodations NAEP does not allow, school staff are asked whether that student could participate in NAEP with the allowable accommodations. (Examples of testing accommodations not allowed in NAEP are giving the reading assessment in a language other than English and reading the reading passages aloud to the student. Extending testing over several days is also not allowed, because NAEP administrators are in each school for only one day.)

Although every effort is made to include as many students as possible, different jurisdictions have different exclusion policies, and those policies may have changed over time. Because SD and ELL students typically score lower than students not categorized as SD or ELL, jurisdictions that are more inclusive (that is, jurisdictions that assess greater percentages of these students) may have lower average scores than if they had a less inclusive policy. In 2011, exclusion rates for SD and/or ELL fourth- and eighth-grade students ranged from 1 to 10 percent across the participating states/jurisdictions in both mathematics and reading.

See the percentage of students identified, excluded, and assessed in recent NAEP assessments:

What testing accommodations does NAEP offer?

NAEP allows students with disabilities and English language learners to use most of the testing accommodations that they receive for state or district tests. Accommodations are adaptations to standard testing procedures that remove barriers to participation in assessments without changing what is being tested; examples include extended time and small-group or one-on-one administration. In the mathematics assessment, NAEP does not allow the use of calculators except in booklets specifically designed for calculator use, because that accommodation would alter what is being tested, namely the student's ability to perform arithmetic operations. NAEP does offer bilingual English/Spanish test booklets for the mathematics assessment.

Some of the testing accommodations provided to SD/ELL students in NAEP paper-and-pencil assessments are part of the universal design of the computer-based assessment, which seeks to make the assessment accessible to all students. For example, the font size adjustment feature available to all students taking the computer-based assessment is comparable to the large-print assessment book accommodation in the paper-and-pencil assessment, and the digital text-to-speech component takes the place of the read-aloud accommodation. However, some accommodations, such as extended time and breaks, were still available only to SD and ELL students taking the computer-based writing assessment. See the percentage of students by type of accommodation for the 2011 writing assessment.

What are the Governing Board inclusion goals? Did states meet the inclusion goal in 2011?

The Governing Board, which sets policy for NAEP, has been exploring ways to ensure that NAEP continues to appropriately include as many students as possible and to do so in a consistent manner for all jurisdictions assessed and reported. In March 2010, the Governing Board adopted a new policy, NAEP Testing and Reporting on Students with Disabilities and English Language Learners. This policy was the culmination of work with experts in testing and curriculum, and those who work with exceptional children and students learning to speak English. The policy aims to:

  • Maximize participation of sampled students in NAEP;
  • Reduce variation in exclusion rates for SD and ELL students across states and districts;
  • Develop uniform national rules for including students in NAEP; and
  • Ensure that NAEP is fully representative of SD and ELL students.

How can I look at sample questions from the assessment?

Sample questions from the assessment can be accessed through the links in the navigation bar on the left-hand side of this page. Released questions from all the NAEP assessments are available in the NAEP Questions Tool. This application also provides results for each state and district on all released NAEP questions.

How are results reported?

Student performance is reported in three ways: in terms of scale scores, achievement levels, and percentile scores.

Average scale scores are derived from the overall level of performance of groups of students on NAEP assessment items. NAEP subject-area average scale scores are typically expressed on a 0–500 scale (reading; mathematics at grades 4 and 8; U.S. history; and geography) or a 0–300 scale (science, writing, civics, and mathematics at grade 12). When used in conjunction with interpretive aids, such as item maps, average scores provide information about what a particular aggregate of students in the population knows and can do.

Achievement levels are performance standards, set by the National Assessment Governing Board, that provide a context for interpreting student performance on NAEP, based on recommendations from panels of educators and members of the public.

The levels, which are Basic, Proficient, and Advanced, describe what students should know and be able to do at each grade assessed. Achievement-level descriptions are available for each of the subjects NAEP assesses.

NAEP provides results about subject-matter performance, instructional experiences, and school environment, and reports these results for populations of students (e.g., fourth-graders) and groups of those populations (e.g., male students or Hispanic students). NAEP cannot provide individual scores for the students or schools assessed.

Because NAEP scales are developed independently for each subject, scale score and achievement-level results cannot be compared across subjects. However, these reporting metrics greatly facilitate performance comparisons within a subject from year to year, and from one group of students to another in the same grade.

How can a score change be significant for one group, but similar or larger change not be significant for another group?

Estimates (like averages and percentages) in the reports and on the website all have a margin of error associated with them. These margins of error are called standard errors, and the sizes of the standard errors influence the results of statistical tests. Comparisons over time or between groups are based on statistical tests that consider both the size of the differences between estimates and the standard errors of the two estimates being compared. Estimates based on smaller groups are likely to have larger standard errors. When an estimate has a large standard error, a numerical difference that seems large may not be statistically significant. For example, a 3-point change in the average score for large cities may be statistically significant, while a 3-point change for a district may not be. Standard errors for all results are available in the NAEP Data Explorer.
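
As a rough illustration of the kind of comparison described above, the sketch below (in Python) tests whether the difference between two independent estimates is large relative to the combined standard error, assuming a simple two-sided test at roughly the 95 percent confidence level. This is a minimal example, not NAEP's actual procedure, which accounts for the complex sample design and may adjust for multiple comparisons.

```python
import math

def difference_is_significant(est_1, se_1, est_2, se_2, z_crit=1.96):
    """Illustrative two-sided test of whether two independent estimates
    (e.g., average scale scores) differ at roughly the 95 percent level."""
    diff = est_1 - est_2
    se_diff = math.sqrt(se_1 ** 2 + se_2 ** 2)  # standard error of the difference
    return abs(diff / se_diff) >= z_crit

# A 3-point change with small standard errors can be significant ...
print(difference_is_significant(285.0, 0.8, 282.0, 0.9))  # True
# ... while the same 3-point change with larger standard errors may not be.
print(difference_is_significant(285.0, 2.5, 282.0, 2.7))  # False
```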

Is participation in NAEP voluntary?

Federal law specifies that NAEP is voluntary for every student, school, school district, and state. However, federal law also requires all states that receive Title I funds to participate in NAEP reading and mathematics assessments at fourth and eighth grades. Similarly, school districts that receive Title I funds and are selected for the NAEP sample are also required to participate in NAEP reading and mathematics assessments at fourth and eighth grades. All other NAEP assessments are voluntary. Learn more about NAEP and why participation is important.

Are the data confidential?

Federal law dictates complete privacy for all test takers and their families. Under the National Assessment of Educational Progress Authorization Act (Public Law 107-279 III, section 303), the Commissioner of the National Center for Education Statistics (NCES) is charged with ensuring that NAEP tests do not question test takers about personal or family beliefs or make information about their personal identity publicly available.

After publishing NAEP reports, NCES makes data available to researchers but withholds students' names and other identifying information. Students' names are not allowed to leave the schools after NAEP assessments are administered. Because it might be possible to deduce the identities of some NAEP schools from the data, researchers must promise, under penalty of fines and jail terms, to keep these identities confidential.

Does NAEP report individual or school-level scores?

No. By design, information is not available at these levels. Reports traditionally disclose state, regional, and national results. In 2002, NAEP began to report (on a trial basis) results from several large urban districts (Trial Urban District Assessments), after the release of state and national results. Because NAEP is a large-group assessment, each student takes only a small part of the overall assessment. In most schools, only a small portion of the total grade enrollment is selected to take the assessment, and these students may not reliably or validly represent the total school population. Only when the student scores are aggregated at the state or national level are the data considered reliable and valid estimates of what students know and can do in the content area; consequently, school- or student-level results are never reported.

What information is available on individual state performance?

There are a variety of tools available to further explore the state results. The state profiles page provides data for each state and links to one-page, printable summaries of state performance (known as "snapshots"). State Comparisons provides tables and maps that compare states and jurisdictions based on the average scale scores for selected groups of public school students within a single assessment year or between two assessment years. The NAEP Data Explorer allows users to search for state results by racial/ethnic groups, gender, private or public schools, teacher experience, and hundreds of other variables. Trend data are also available for all 50 states back to 2003, and for most states back to the first state assessment in the 1990s.

How are state tests different from NAEP?

Most state tests measure student performance on the state's own curriculum standards, that is, on what policymakers and citizens consider important for students to know and be able to do. State tests allow comparisons of results over time within the state, and, in most cases, give individual student scores so that parents can know how their child is performing. State tests do not provide comparisons of results with other states or the nation. NAEP is the only assessment that allows comparison of results from one state with another, or with results for the rest of the nation. The NAEP program helps states answer such questions as the following: How does the performance of students in my state compare with the performance of students in other states with similar resources or students? How does my state's performance compare with the region's? Are my state's gains in student performance keeping up with the pace of improvement in other states? The term "proficiency" used in relation to performance on state tests does not have the same meaning as the term Proficient on the NAEP achievement levels, because the criteria used to determine proficiency are different. Together, state achievement tests and NAEP help educators and policymakers develop a comprehensive picture of student performance.

What is the NAEP Trial Urban District Assessment?

The Trial Urban District Assessment (TUDA) is a special project of the National Center for Education Statistics, the National Assessment Governing Board, and the Council of the Great City Schools to determine the feasibility of reporting district-level results for the National Assessment of Educational Progress (NAEP). The 2011 assessment marks the sixth assessment in reading since 2002 and the fifth assessment in mathematics since 2003. The District Profiles Tool provides tables and maps that compare urban districts based on the average scale scores for selected groups of public school students within a single assessment year or between two assessment years.

How many districts participate each year, and how are they chosen?

A total of 21 urban districts participated in the 2011 Trial Urban District Assessment. Districts are invited by the National Assessment Governing Board to participate in the assessment based on a selection process that considers a number of factors, including the district's size and racial/ethnic diversity. For example, eligible districts must be located in large cities with populations of 250,000 or more and must have a majority (50 percent or more) of their student population that is Black or Hispanic or eligible for the National School Lunch Program. The maximum number of districts participating in a given assessment year depends on the level of Congressional funding for the program.
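
The sketch below restates those two eligibility conditions as a simple check. The function and its parameters are hypothetical, and the reading of the 50 percent threshold (Black or Hispanic enrollment, or National School Lunch Program eligibility) is an assumption for illustration; the actual selection process weighs additional factors, including district size and the level of Congressional funding.

```python
def meets_basic_tuda_criteria(city_population, pct_black, pct_hispanic, pct_nslp_eligible):
    """Hypothetical check of the two eligibility conditions described above;
    the actual selection process considers additional factors."""
    large_city = city_population >= 250_000
    # Majority (50 percent or more) of students Black or Hispanic,
    # or eligible for the National School Lunch Program (assumed reading).
    majority = (pct_black + pct_hispanic >= 50) or (pct_nslp_eligible >= 50)
    return large_city and majority

print(meets_basic_tuda_criteria(650_000, 35, 30, 80))  # True
```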

How do the samples for the Trial Urban District Assessment (TUDA) contribute to state results?

Students in the TUDA samples are also included as part of the state and national samples. For example, the results reported for students in Boston also contribute to the results reported for Massachusetts and to the results for the nation. The districts' results are weighted so that their contribution to the state results reflects the actual proportion of students in the population.
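
A simplified way to picture this weighting, with hypothetical numbers, is shown below: the district's average score contributes to the state's average in proportion to the district's share of the state's student population. Actual NAEP estimates apply student-level sampling weights rather than this simple two-group average.

```python
# Hypothetical figures for illustration only
district_avg = 252.0        # average scale score for the urban district sample
rest_of_state_avg = 260.0   # average for the rest of the state sample
district_share = 0.08       # district's share of the state's student population

state_avg = district_share * district_avg + (1 - district_share) * rest_of_state_avg
print(round(state_avg, 1))  # 259.4
```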

What are "large cities" and why are they used as a point of comparison?

Just as the national public sample is used as a benchmark for comparing results for states, results for urban districts are compared to results from large cities nationwide. Large cities, referred to as "large central cities" in previous TUDA reports, comprise public schools located in the urbanized areas of cities with populations of 250,000 or more. Large city is not synonymous with "inner city." Schools in participating TUDA districts are also included in the results for large cities, even though some districts (Atlanta, Austin, Charlotte, Cleveland, Fresno, Houston, Jefferson County, Los Angeles, and Miami-Dade) include some schools not classified as large city schools. Students in the participating TUDA districts represent nearly half of the students who attend schools in large cities nationally. The comparison to students in large cities is made because the demographic characteristics of those students are most like the characteristics of students in the urban districts. Both the districts and large cities overall generally have higher concentrations of Black or Hispanic students, lower-income students, and English language learners than the nation as a whole.
