Proficiency


In education, the term proficiency is used in a variety of ways, most commonly in reference to (1) proficiency levels, scales, and cut-off scores on standardized tests and other forms of assessment; (2) students achieving or failing to achieve proficiency levels determined by tests and assessments; (3) students demonstrating or failing to demonstrate proficiency in relation to learning standards (for a related discussion, see proficiency-based learning); and (4) teachers being deemed proficient or non-proficient on job-performance evaluations.

To understand how proficiency works in educational contexts, it is important to recognize that all proficiency determinations are based on some form of standards or measurement system, and that proficiency levels change in direct relation to the scales, standards, tests, and calculation methods being used to evaluate and determine proficiency. It is therefore possible, for example, to alter the perception of proficiency by lowering standards or cut-off scores on tests, or to overlook that two distinct—and therefore incomparable—proficiency systems are being compared side-by-side, even though different standards, tests, or calculation methods were used to determine proficiency (see Common systems vs. disparate systems below). Because the bar for proficiency can diverge significantly from system to system, state to state, test to test, school to school, and course to course, or from year to year when changes are made to learning standards and accompanying tests, proficiency in education may become a source of confusion, debate, controversy, and even deception.

The following are a few of the major issues related to proficiency determinations in education:

  • High standards vs. low standards: One source of debate is related to the standards upon which a proficiency determination is based, and whether the standards are being applied consistently or fairly to produce accurate results. Some may argue, for example, that the standards or cut-off scores for “proficiency” on a given test are too low, and therefore the test results will only produce “false positives”—i.e., they will indicate that students are proficient when they are not. A test administered in eleventh grade that reflects a level of knowledge and skill students should have acquired in eighth grade would be one general example. Because reported “proficiency” rises and falls in direct relation to the standards used to make a proficiency determination, it’s possible to manipulate the perception and interpretation of test results by elevating or lowering standards. Some states, for example, have been accused of lowering proficiency standards to increase the number of students achieving “proficiency,” and thereby avoid the consequences—negative press, public criticism, large numbers of students being held back or denied diplomas (in states that base graduation eligibility on test scores)—that may result from large numbers of students failing to achieve expected or required proficiency levels.
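The arithmetic behind this kind of manipulation can be illustrated with a short sketch. The scores and cut-off values below are hypothetical, invented for illustration; they are not drawn from any actual test. The point is simply that the same score distribution yields very different "percent proficient" figures depending on where the bar is set.

```python
# Hypothetical scaled scores for a group of ten students (illustrative only).
scores = [42, 55, 58, 61, 64, 67, 70, 73, 78, 85]

def percent_proficient(scores, cut_score):
    """Share of students scoring at or above the cut-off, as a percentage."""
    proficient = sum(1 for s in scores if s >= cut_score)
    return 100 * proficient / len(scores)

# The underlying scores never change, but lowering the cut-off
# raises the reported proficiency rate.
print(percent_proficient(scores, 70))  # 40.0 — higher bar, fewer "proficient"
print(percent_proficient(scores, 60))  # 70.0 — lower bar, more "proficient"
```

In this toy example, moving the cut-off from 70 to 60 nearly doubles the reported proficiency rate even though no student learned anything more, which is the mechanism behind the accusations of lowered standards described above.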
  • Common systems vs. disparate systems: Since proficiency must be determined by some form of measurement system—whether it’s a certain percentage of correct answers on a test or a highly sophisticated mathematical algorithm, as with value-added measures used in teacher evaluation—proficiency determinations can be more or less accurate based on the quality of the system being used, or they can be comparable (when common systems are used) or incomparable (when disparate systems are used). Confusion may result when there is disagreement about the methods being used to determine proficiency, or when two different systems are being compared even though the results are not comparable in a valid or reliable way. For example, when the Common Core State Standards were adopted by a number of states, the states were then required to use different standardized tests, based on a different set of standards, to determine “proficiency” (i.e., the tests would measure achievement against the more recently adopted Common Core standards, as opposed to the learning standards formerly used by the states). In this case, both the standards and the tests used to measure proficiency have changed significantly, which makes any comparisons between the old system (student test scores from previous years) and the new system (student scores on the new tests) difficult or impossible. Advocates of the Common Core typically argue that the new standards will allow for more consistent comparisons of student performance across state lines—and thereby more reliably or usefully measure student learning—because “common” standards and “common” tests are being used.
  • Alignment vs. misalignment: Proficiency levels may also rise or fall in relation to the level of alignment between a test and the content actually taught to students. For example, if schools teach a selection of concepts and skills that are not evaluated on a given test, the results may produce a “false negative”—i.e., students may have learned what they were taught, but the test did not evaluate that content, producing misleading results (proficiency is based on the content that was tested, not the content that was taught). The question of alignment and misalignment often arises in debates about learning standards. For example, when states adopt a new set of learning standards, teachers then have to “align” what they teach to the new standards. If the process of alignment is poorly executed or delayed, students may take tests based on the new standards even though what they were taught was still based on an older set of standards. The adoption of the Common Core State Standards by a majority of states has become a source of discussion and debate on this issue.
  • Learning vs. reporting: As described above, it may be possible for students to learn a lot (or very little) in schools but still appear to have learned very little (or a lot) due to the systems and standards being applied, or due to the misalignment of teaching and testing. Potential confusion and problems, therefore, may stem from the tendency of people to view test scores as accurate, absolute measures of learning, rather than as relatively limited indicators of learning that may be flawed or misleading. (For a related discussion, see measurement error.) For example, students may learn important skills in school such as problem solving and researching that are not specifically evaluated by tests, or they may have learned a large body of knowledge, just not the specific knowledge evaluated by a given test or assessment. In these cases, “proficiency” rates on tests—often reported as either percent proficient or proportion proficient—may present only a partial or misleading picture of what students have learned. It is for this reason, among others, that testing experts often recommend against making important decisions about students on the basis of a single test score.
  • Appropriate vs. inappropriate proficiency levels: Given the issues described above, proficiency determinations are also the object of debates related to the appropriateness or inappropriateness of a given proficiency scale, standard, or system. For example: Is it appropriate to hold a non-English-speaking student to the same proficiency standards, as measured by the same English-language tests, as a native-English-speaking student? Or, similarly, a recently immigrated student who has had very little formal education in her home country? (For a related discussion, see test bias.) Teacher evaluations are another object of debate and controversy on this issue, particularly when it comes to factoring student achievement into performance evaluations. Advocates of using student-achievement indicators, such as test scores, may argue that it is appropriate to consider student achievement, given that it’s a teacher’s job to improve student learning. If the academic achievement of their students is not considered, how is it possible to accurately or meaningfully evaluate teacher performance? Opponents may counter-argue, however, that student achievement is influenced by a host of factors outside of a teacher’s control, such as a student’s prior educational experiences, the socioeconomic status of the student’s parents, or the stability and support present in a student’s home environment. Consequently, it would be inappropriate to hold teachers accountable for factors that are beyond their influence or control. In these cases, proficiency systems and determinations may be debated or disputed when they are perceived to be biased, unfair, or inequitable by one group or another.
Recommended APA Citation Format Example: Proficiency (2014, August 26). In S. Abbott (Ed.), The glossary of education reform. Retrieved from http://edglossary.org/proficiency