Thursday, May 15, 2025

Norm-Referenced vs. Criterion-Based Achievement Grading

Norm-referenced method: When the exam samples only part of the content (due to time constraints, for instance), it may not fully reflect every student's knowledge or ability. In such cases, norm-referencing helps distinguish performance levels relative to peers, especially when the exam is designed to be difficult or selective.

Criterion-based method: If the exam fully aligns with the course objectives and content, it's appropriate to assess students based on predetermined criteria. In this case, every student theoretically has an equal opportunity to succeed by demonstrating mastery of the material.

===

📌 Why Use Norm-Referenced Grading When an Exam Has Incomplete Content Coverage?

It's not that incomplete content coverage requires norm-referenced grading, but rather that norm-referenced grading can be more practical or justifiable in that situation. Here’s the reasoning:


🔍 1. Incomplete Coverage = Limited Validity for Mastery Judgments

  • In criterion-based grading, you’re judging whether students meet specific learning outcomes.

  • But if the test only covers part of what was taught, you can’t be sure a student has mastered all intended outcomes — the exam doesn’t measure them all.

  • This makes it difficult to fairly say “Student A met the standard” if the test didn’t assess the full standard.

✅ So: If content coverage is partial, a claim like “meets expectations” (criterion-based) is less valid.


🔍 2. Norm-Referencing Focuses on Ranking, Not Mastery

  • Norm-referenced grading doesn't claim to assess full mastery — it just compares students to each other.

  • If everyone is tested on the same (even partial) content, you can still rank performance fairly.

  • This is especially common in competitive settings like entrance exams, where the goal is to identify the top X%.

✅ So: Even if the content is partial, norm-referenced grading can still say, “Student A performed better than 85% of peers.”
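The ranking claim above can be sketched as a percentile-rank calculation. This is a minimal illustration, not a standard grading tool: the cohort scores are hypothetical, and the formula used here is one common convention (the percentage of peers scoring strictly below the student).

```python
def percentile_rank(score, all_scores):
    """Percentage of scores in the cohort that fall strictly below `score`."""
    below = sum(1 for s in all_scores if s < score)
    return 100 * below / len(all_scores)

# Hypothetical cohort: everyone took the same (partial-coverage) exam,
# so relative comparison is still fair even though mastery is unmeasured.
scores = [42, 55, 61, 68, 70, 73, 77, 81, 85, 92]
student_a = 85
print(f"Student A performed better than {percentile_rank(student_a, scores):.0f}% of peers")
```

Note that the statement produced is purely comparative: it says nothing about which learning objectives Student A has or has not mastered.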


🔍 3. Selective Testing is Often Meant to Differentiate, Not to Measure Everything

  • Some exams (especially in large-scale or competitive settings) are deliberately designed to be selective and challenging, not to reflect all course content.

  • In these cases, norm-referencing is deliberate — the test's role is to discriminate between levels of performance, not verify full learning.


📘 Example to Illustrate

Imagine a computer science course with 10 learning objectives:

  • Scenario A (Criterion-Based Fit): The final exam has one question for each objective, with rubrics. You can say “Students mastered 8 out of 10 objectives.”

  • Scenario B (Norm-Referenced Fit): The exam covers only 4 objectives in depth (due to time constraints), with high difficulty and trick questions. You can’t judge overall mastery, but you can still say “Student A is in the top 10%.”
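The two scenarios can be contrasted in a short sketch. All specifics here are assumptions for illustration: the rubric scores, the cohort totals, and the mastery cutoff (a rubric score of at least 0.7) are invented, not part of any standard.

```python
MASTERY_CUTOFF = 0.7  # assumed rubric score needed to count an objective as mastered

def objectives_mastered(rubric_scores):
    """Criterion-based claim: count objectives whose rubric score meets the cutoff."""
    return sum(1 for s in rubric_scores.values() if s >= MASTERY_CUTOFF)

def cohort_rank(total, cohort_totals):
    """Norm-referenced claim: percent of the cohort this total outscores."""
    return 100 * sum(1 for t in cohort_totals if t < total) / len(cohort_totals)

# Scenario A: one rubric score per objective, so a mastery claim is valid.
student = {f"objective_{i}": s for i, s in
           enumerate([0.9, 0.8, 0.75, 0.7, 0.6, 0.85, 0.9, 0.72, 0.5, 0.95], 1)}
print(f"Mastered {objectives_mastered(student)} of {len(student)} objectives")

# Scenario B: only a total score on a 4-objective exam, so only a
# ranking claim is valid; hypothetical cohort totals below.
cohort = [31, 44, 52, 58, 63, 66, 71, 78, 83, 90]
print(f"Student A outscored {cohort_rank(83, cohort):.0f}% of the cohort")
```

The design point is that each function can only justify the kind of claim its input supports: per-objective rubric scores license a mastery statement, while a single total on partial content licenses only a relative one.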

--ChatGPT