Conformity Assessment Society - CAS

Expert View

"Expert View" is a dedicated page for the periodic publication of insightful technical articles authored by specialists in the field—contributions from experts and practitioners are warmly invited to share knowledge, perspectives, and innovations with our professional community.  

Submit your article and help shape the conversation in your field.

Expert View - September 2025

The Importance of Certified Training


“Certified training transforms learning into competence, and competence into trust.”


Setting the Stage

In a world where competence must be proven rather than presumed, certified training has become a cornerstone of professional credibility. From auditors and inspectors to trainers and consultants, professionals cannot simply rely on experience or informal learning. What today’s global marketplace demands is training that is structured, independently validated, and aligned with international standards. Certified training provides exactly that assurance. It is not just about attending a course; it is about demonstrating, through a recognized system, that learning has been effectively translated into competence.

The Conformity Assessment Society (CAS) has long emphasized that competence is the backbone of trust in testing, inspection, certification, and accreditation. In this context, certified training is one of the most powerful tools available to professionals and organizations seeking to demonstrate their reliability.


The Standards Behind Competence

Certified training draws its authority from the international standards and specifications that govern it. ISO/IEC 17024 defines how competence of persons must be certified, requiring fairness, impartiality, and international recognition of credentials. ISO 29993 strengthens this assurance by requiring that learning services outside formal education are transparent, objective-driven, and properly evaluated.

The International Personnel Certification Association (IPC), the global scheme owner for personnel certification, complements these ISO frameworks with its Specification on Recognition of Training Providers (IPC-SC-11-002, Issue 4, 2024). This specification sets a harmonized framework for training providers seeking recognition under the IPC Multilateral Recognition Arrangement (MLA). Providers that meet this standard demonstrate not only quality in delivery but alignment with an international trust framework endorsed by IPC members around the world.

Together, ISO standards and IPC specifications ensure that certified training is not just effective education, but a structured pathway toward competence recognized and trusted across borders.


IPC’s Requirements for Training Providers

IPC’s specification requires training providers to define measurable learning objectives aligned with certification schemes, ensure course content is accurate and up-to-date, and apply modern, interactive training methods. Trainers must demonstrate both subject-matter expertise and pedagogical skills. Independence between training and examination is required to safeguard impartiality, while robust management systems and record-keeping are mandatory to ensure traceability.

Certificates issued under this framework carry references to IPC MLA recognition, a mark that signals global credibility. Many providers also demonstrate alignment with ISO 9001 or ISO 29993, further embedding quality assurance into their operations.


Global Benefits Across the Ecosystem

Certified training benefits every stakeholder in the conformity assessment ecosystem. Professionals gain qualifications that carry international weight, strengthening their employability and career mobility. Organizations benefit from verified competence within their workforce, reducing risks and enhancing client trust. Regulators and accreditation bodies, in turn, gain access to reliable, auditable evidence of competence that aligns with international recognition systems such as the ILAC Mutual Recognition Arrangement and the IAF Multilateral Recognition Arrangement.

The Conformity Assessment Society (CAS) supports and amplifies these frameworks by connecting professionals worldwide, promoting awareness of standards, and fostering dialogue across testing, inspection, certification, and accreditation communities. CAS positions certified training as an essential pillar for building global trust and professional excellence.


A Call to Action

The lesson is clear: Certified training is no longer optional. It is a strategic investment in credibility, trust, and global recognition. By aligning with ISO/IEC 17024, ISO 29993, and IPC-SC-11-002, training providers and professionals ensure that learning is not only effective but also certified as competence recognized under global agreements.

The future of conformity assessment depends on systems where competence is not just claimed but independently verified. IPC provides the framework, ISO sets the standards, and CAS amplifies the global voice of conformity professionals who make it a reality. Together, these elements ensure that certified training remains the bedrock of trust in international markets.


References

- ISO/IEC 17024:2012, "Conformity assessment – General requirements for bodies operating certification of persons", International Organization for Standardization, Geneva.

- ISO 29993:2017, "Learning services outside formal education – Service requirements", International Organization for Standardization, Geneva.

- IPC-SC-11-002, Issue 4.0 (2024), "Specification on IPC Recognition of Training Providers", International Personnel Certification Association.



Expert View - August 2025

Contrasting Groups Method for Cut Score Determination

By Dr. George Anastasopoulos

Technical & International Business Development Manager, PJLA


Author’s Note

This article has been prepared drawing on my professional experience as an accreditation assessor. In numerous ISO/IEC 17024 assessments, I have observed that many Personnel Certification Bodies (PCBs) receive nonconformities related to the determination and justification of cut scores. Despite being a cornerstone of valid and reliable certification decisions, this area is often underestimated or handled without sufficient methodological rigor.
From my experience, I propose the Contrasting Groups Method because it is clearly understood, straightforward to implement, and can be applied with minimum administrative and technical hassle. It therefore represents a practical, evidence-based approach that allows PCBs not only to achieve compliance with ISO/IEC 17024, but also to strengthen the credibility and defensibility of their certification outcomes. 


The Problem of Cut Score Setting

In personnel certification, the fairness and validity of examinations are fundamental to compliance with ISO/IEC 17024. One of the most sensitive elements in this process is the setting of a cut score – the threshold that separates candidates deemed competent from those not yet meeting the required standard. A poorly justified or arbitrarily chosen cut score risks undermining the credibility of the certification, potentially allowing underqualified individuals to be certified or unfairly rejecting competent candidates.
To address this challenge, certification bodies must adopt methods that are transparent, evidence-based, and defensible. Among the available approaches, the Contrasting Groups Method offers a particularly intuitive, data-driven solution that aligns well with ISO/IEC 17024’s emphasis on validity, reliability, and fairness.


Concept of the Contrasting Groups Method

The Contrasting Groups Method sets the passing standard by comparing the actual test performance of two clearly distinct populations:

· Competent candidates (Experts): individuals who have demonstrably achieved the required skills and knowledge, such as certified professionals, seasoned practitioners, or advanced learners.

· Non-competent candidates (Rookies): individuals at the early stages of learning, such as novices, trainees, or entry-level participants.

By administering the same exam to both groups, their score distributions can be compared. The method identifies the point where the two distributions intersect, representing the threshold score that most effectively differentiates competence from non-competence. This intersection becomes the cut score, striking a balance between inclusivity and rigor.


Step-by-Step Process

The Contrasting Groups Method requires a carefully structured process to ensure its results are reliable and defensible:

· Define Groups Clearly: Establish two distinct populations representing competence and non-competence. Competent groups may include certified professionals, experienced practitioners, or advanced-level students. Non-competent groups typically consist of novices, trainees, or entry-level participants. Exclude borderline individuals.

· Select Representative Samples: For meaningful distributions, each group should ideally consist of at least 50–100 participants. The closer the sample reflects the actual candidate population, the more credible the cut score determination.

· Administer the Test: Both groups must take the exact same examination under standardized conditions.

· Collect and Plot Scores: Convert test results into frequency distributions and plot them to visualize the probability density curves.

· Identify the Intersection Point: Overlay curves and find the threshold where misclassification risk is balanced.

· Adopt the Cut Score: The intersection point becomes the proposed cut score, reflecting the optimal balance between inclusivity and rigor.
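For certification bodies that want to prototype these steps, the short Python sketch below illustrates them on simulated data. All names and numbers here are assumptions for illustration (randomly generated scores standing in for real examination results); a real application would load the collected rookie and expert scores instead.

```python
# Minimal sketch of the Contrasting Groups steps on simulated data.
# The score arrays are illustrative stand-ins for real examination results.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Steps 1-3: two clearly defined groups sit the same 100-question exam.
rookie_scores = rng.normal(loc=50, scale=12, size=100)   # non-competent group
expert_scores = rng.normal(loc=70, scale=12, size=100)   # competent group

# Step 4: estimate the two probability density curves from the raw scores.
rookie_pdf = gaussian_kde(rookie_scores)
expert_pdf = gaussian_kde(expert_scores)

# Step 5: scan the region between the two group means and locate the point
# where the two estimated curves cross.
grid = np.linspace(rookie_scores.mean(), expert_scores.mean(), 1001)
crossing = grid[np.argmin(np.abs(expert_pdf(grid) - rookie_pdf(grid)))]

# Step 6: adopt the intersection as the proposed cut score.
print(f"Proposed cut score: {crossing:.1f} out of 100")
```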


Worked Example

Consider an examination consisting of 100 questions. Two groups are defined:

· Rookies (non-competent): average score of 50 with a standard deviation of 12.

· Experts (competent): average score of 70 with a standard deviation of 12.

When plotted, the rookies’ scores create a normal distribution centered around 50, while the experts’ scores form another distribution centered around 70. The heaviest overlap falls between scores of roughly 55 and 65, and because the two standard deviations are equal, the curves intersect at 60, the midpoint of the two means.

At this score, the chance that a rookie exceeds 60 is equal to the chance that an expert falls below it. Thus, 60/100 is adopted as the cut score.
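These figures can be verified with standard normal-distribution functions. The snippet below is a quick check using the same illustrative parameters (means of 50 and 70, a common standard deviation of 12); it assumes scipy is available.

```python
# Quick numeric check of the worked example.
from scipy.stats import norm

rookie = norm(loc=50, scale=12)   # non-competent distribution
expert = norm(loc=70, scale=12)   # competent distribution
cut = 60                          # midpoint of the means

print(round(rookie.pdf(cut), 4), round(expert.pdf(cut), 4))  # equal densities at the crossing (~0.0235)
print(round(rookie.sf(cut), 3))   # P(rookie scores above 60) ~ 0.202
print(round(expert.cdf(cut), 3))  # P(expert scores below 60) ~ 0.202
```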


Explanation of the Diagram

The diagram visually communicates the logic of the Contrasting Groups Method:

· The blue curve represents the distribution of the rookies’ scores (non-competent population).

· The green curve represents the experts’ scores (competent population).

· The red dashed line marks the adopted cut score at the intersection of the two curves.

· The shaded zones show areas of misclassification:

o To the right of the cut score under the blue curve: non-competent individuals misclassified as competent (false positives).

o To the left of the cut score under the green curve: competent individuals misclassified as non-competent (false negatives).

The cut score reflects the point where misclassification risk is minimized and equally balanced between both groups.

Figure 1: Example of Contrasting Groups Method (blue = non-competent, green = competent, red = cut score).
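Readers who want to reproduce a figure of this kind can do so with a few lines of matplotlib; the sketch below uses the illustrative values from the worked example and is not the original figure.

```python
# Reconstruction sketch of Figure 1 using the worked-example values.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

x = np.linspace(10, 110, 500)
rookies = norm.pdf(x, 50, 12)    # blue curve: non-competent
experts = norm.pdf(x, 70, 12)    # green curve: competent
cut = 60                         # red dashed line: adopted cut score

plt.plot(x, rookies, color="blue", label="Non-competent (rookies)")
plt.plot(x, experts, color="green", label="Competent (experts)")
plt.axvline(cut, color="red", linestyle="--", label="Cut score")
plt.fill_between(x, rookies, where=x >= cut, color="blue", alpha=0.3)   # false positives
plt.fill_between(x, experts, where=x <= cut, color="green", alpha=0.3)  # false negatives
plt.xlabel("Exam score")
plt.ylabel("Probability density")
plt.legend()
plt.show()
```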


Limitations of the Method

While straightforward and intuitive, the Contrasting Groups Method has several limitations:

· Dependence on Clear Group Definitions: If groups are poorly defined or overlapping, accuracy decreases.

· Large Sample Size Requirements: Reliable distributions require at least 50–100 participants per group.

· Sensitivity to Test Quality: Poorly designed exams with low discrimination reduce effectiveness.

· Instability with Heavy Overlap: Significant overlap between groups creates unstable or indefensible cut scores.


Validation of the Cut Score

Determining a cut score is not the end of the process; validation is essential:

· Comparison with Other Methods: Apply Angoff, Bookmark, or Ebel methods to cross-validate results.

· Error of Measurement Analysis: Calculate the Standard Error of Measurement (SEM) to define a confidence interval around the cut score (a brief sketch follows this list).

· Predictive Validity Checks: Track certified candidates’ real-world performance to confirm appropriateness.

· Policy and Stakeholder Review: Ensure alignment with professional and regulatory expectations.
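As an illustration of the SEM check, the snippet below applies the widely used formula SEM = SD x sqrt(1 - reliability) to place a confidence band around the cut score. The reliability coefficient and score standard deviation are assumed example values, not results from a real examination.

```python
# Illustrative SEM-based confidence band around the cut score.
import math

score_sd = 12.0       # standard deviation of observed exam scores (example value)
reliability = 0.85    # assumed reliability coefficient (e.g. KR-20 or Cronbach's alpha)
cut_score = 60.0      # cut score from the Contrasting Groups analysis

sem = score_sd * math.sqrt(1 - reliability)        # standard error of measurement
z = 1.96                                           # ~95% confidence level
low, high = cut_score - z * sem, cut_score + z * sem
print(f"SEM = {sem:.2f}; 95% band around the cut score: {low:.1f} to {high:.1f}")
```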


Conclusion

The Contrasting Groups Method provides certification bodies with a transparent, empirical, and defensible approach to cut score determination. Its strength lies in its simplicity and use of real performance data, minimizing subjectivity and enhancing transparency.

However, its limitations—such as reliance on group clarity, adequate sample sizes, and exam quality—must be carefully managed. For maximum defensibility, this method should be combined with alternative standard-setting approaches (Angoff, Ebel, Hofstee) and validated through both statistical analysis and practical evidence. This ensures that certification decisions remain fair, valid, reliable, and consistent with ISO/IEC 17024 requirements.


Annex


Other Cut Score Determination Methods

While the Contrasting Groups Method provides one defensible approach to setting cut scores, ISO/IEC 17024 does not mandate any specific method. Instead, it requires certification bodies to use methods that are fair, valid, reliable, and appropriate for the type of examination. Several well-established methods are available, each with its own strengths and limitations:


· Angoff Method: Subject matter experts estimate the probability that a minimally competent candidate will answer each question correctly. The cut score is the sum of these probabilities. Widely used and highly defensible but requires trained experts.


· Modified Angoff Method: Like the Angoff Method but supplemented with actual candidate performance data from pilot tests. Increases accuracy and realism.


· Ebel Method: Experts classify items by both difficulty and relevance, then assign expected success rates. Produces a structured cut score that reflects test content balance.


· Nedelsky Method: Suitable for multiple-choice questions. Experts eliminate obviously wrong options, and the probability of a minimally competent candidate guessing correctly defines the cut score.


· Bookmark Method: Items are ordered by statistical difficulty (Item Response Theory). Experts place a 'bookmark' at the point where a minimally competent candidate is expected to stop answering correctly. Strong psychometric foundation, especially in large-scale standardized testing.


· Hofstee Method (Compromise Method): Experts set acceptable minimum and maximum pass marks and pass rates. The actual test score distribution is then used to select a compromise cut score. Balances psychometric evidence with policy constraints.


· Borderline Group Method: Used mainly in performance-based or practical exams. Examiners identify borderline candidates and the average of their scores becomes the cut score.


Each of these methods can be used to demonstrate compliance with ISO/IEC 17024, provided the process is transparent, well-documented, and applied consistently.

Certification bodies are advised to use more than one method to cross-validate the defensibility of the final cut score.
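As a sketch of such cross-validation, the snippet below compares a contrasting-groups cut score with an Angoff-style estimate, in which each subject matter expert rates the probability that a minimally competent candidate answers each item correctly. The ratings and the five-item exam are illustrative placeholders only.

```python
# Illustrative Angoff cross-check against the Contrasting Groups result.
import numpy as np

# Each row: one expert's probability estimates that a minimally competent
# candidate answers each item correctly (placeholder values, 5 items only).
angoff_ratings = np.array([
    [0.60, 0.55, 0.70, 0.65, 0.50],   # expert 1
    [0.65, 0.50, 0.75, 0.60, 0.55],   # expert 2
    [0.55, 0.60, 0.70, 0.70, 0.50],   # expert 3
])

# Each expert's cut score is the sum of their item probabilities; the panel
# value is the average across experts, expressed here as a percentage.
per_expert_cut = angoff_ratings.sum(axis=1)
angoff_cut_pct = 100 * per_expert_cut.mean() / angoff_ratings.shape[1]

contrasting_groups_cut = 60   # from the worked example in the article
print(f"Angoff estimate: {angoff_cut_pct:.1f}%, Contrasting Groups: {contrasting_groups_cut}%")
# Broad agreement between independent methods supports the defensibility of the final cut score.
```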


Bibliography

· Livingston, S. A., & Zieky, M. J. (1982). Passing Scores: A Manual for Setting Standards of Performance on Educational and Occupational Tests. Princeton, NJ: Educational Testing Service.

· Hambleton, R. K., & Pitoniak, M. J. (2006). Setting Performance Standards. In R. L. Brennan (Ed.), Educational Measurement (4th ed., pp. 433–470). Westport, CT: Praeger.

· Cizek, G. J., & Bunch, M. B. (2007). Standard Setting: A Guide to Establishing and Evaluating Performance Standards on Tests. Thousand Oaks, CA: SAGE Publications.

· Zieky, M. J., Perie, M., & Livingston, S. A. (2008). Cut Scores: A Manual for Setting Standards of Performance on Educational and Occupational Tests. Princeton, NJ: Educational Testing Service.

· Kane, M. (1994). Validity of the Contrasting Groups Method of Standard Setting. Applied Measurement in Education, 7(1), 3–18.

· International Organization for Standardization (ISO). (2012). ISO/IEC 17024:2012 — Conformity assessment — General requirements for bodies operating certification of persons. Geneva: ISO.


About the Author

Dr. George Anastasopoulos is the Technical and International Business Development Manager at Perry Johnson Laboratory Accreditation, Inc. (PJLA), a leading U.S.-based accreditation body operating globally across multiple technical and scientific sectors. He also serves as General Secretary of the International Personnel Certification Association (IPC).

A Mechanical Engineer with an MSc and PhD in Applied Mechanics from Northwestern University, Dr. Anastasopoulos is an active member of several international standards committees, including ISO/TC 176, ISO/CASCO, and ASTM, contributing to the development of ISO 9001 and ISO/IEC 17025. He is also a regular participant in IAF and ILAC global accreditation activities.

He has received the EOQ Presidential Georges Borel Award for his international achievements in promoting quality worldwide and has published extensively in journals and conferences. As a keynote speaker and project leader, he has contributed to numerous initiatives in conformity assessment, management systems, and quality assurance across the U.S., EU, and beyond. He can be contacted at ganast@pjlabs.com.


Why Join Us

Connect with global experts in testing, inspection, certification, accreditation and standardization. Gain knowledge, recognition, and networking opportunities while shaping the future of conformity assessment. Members access publications, training, and professional support, advancing careers and influencing standards worldwide. Be part of the trusted community promoting quality and competence. 

Join Us

Expert View Links

Competence vs. Qualification in Personnel Certification

In today’s highly competitive and ever-evolving professional landscape, personnel certification remains a valuable asset for individuals seeking to demonstrate their abilities and advance their careers. Obtaining a professional certification often requires a significant investment of time, effort, and financial resources. For this reason, candidates are increasingly faced with a critical choice: whether to pursue a competence-based or a qualification-based certification program.

At first glance, qualification-based certification may seem more accessible. It is often easier to obtain, less expensive, and focused primarily on formal education or training history. However, this convenience may come at the cost of genuine credibility and market acceptance. So, what really distinguishes competence-based from qualification-based certification, and which one truly holds value in today’s global market?

FULL ARTICLE

Interlaboratory or Intralaboratory? How Labs Show True Competence!

I recently received the following question from a calibration laboratory: “We would like to request clarification regarding the distinction between interlaboratory and intralaboratory programs in the context of ISO/IEC 17025 compliance. Specifically, we seek confirmation on whether both types of programs are considered acceptable evidence of ongoing competence for the purposes of maintaining accreditation.”

First, I would like to note that this question is also valid for a testing laboratory. So my reply is applicable to both testing and calibration laboratories.

FULL ARTICLE

Contrasting Groups Method for Cut Score Determination

Published in full above as Expert View - August 2025.

FULL ARTICLE

Conformity Assessment Society (CAS)

contact us at secretary@ca-society.com

Copyright © 2025 Conformity Assessment Society - All Rights Reserved.

