Validation Demystified – Part I
Validation is arguably the central pillar of VET practice and has a valued place in the education system. It is one of the few compliance requirements that influences, and is accountable for, more than a dozen clauses within the Standards for RTOs 2015. As such, validation is one of the most studied, drilled and piloted concepts in educational practice, in Australia and globally.
Yet there is still much misunderstanding of validation: its meaning and objectives are conflated with other concepts, and its interpretation and practice are inconsistent. More often than not, confusion arises between pre-validation (now termed 'verification' by ASQA), moderation, and validation.
The confusion goes beyond terminology. There is wide inconsistency in implementing the requirements of the Standards for RTOs 2015: identifying the objectives of validation, choosing best-practice methods, prioritising its role in governance and continuous improvement, and so on. Despite RTOs' strong appetite to understand and use it, validation remains the most bewildering standard of SRTO 2015.
This article will try to clarify the misperceptions and debunk the myths and mysteries.
Why does validation remain challenging to understand, and why is it implemented inconsistently across RTO practice? The puzzling questions include:
- What are the areas that RTOs misunderstand?
- What are the frequent mistakes made by practitioners?
- Which parts are found non-compliant during audits?
- Which areas of validation cause the most obscurity and process inefficiency?
- What blurs the distinction between validation, moderation and pre-validation, and why do they become intertwined in current practice?
- What does 'independent validator' mean, and independent from what? Why is independence required only in Clause 1.25 and not in Clauses 1.9–1.11?
- Why is moderation not mandatory but essential?
Problem 1. The problem starts with the foundation of the concept of validation: its definition and denotation.
Many RTOs use moderation, pre-validation and validation interchangeably. The Standards for RTOs 2015 mention validation in different sections with different requirements, which can carry different connotations for different people. For this and other reasons, some people find it difficult to work out which standards are relevant to validation, pre-validation or moderation.
Many RTOs have the impression that pre-validation and moderation are not as important as validation, and assume the three have no interrelation because they are very different from each other.
Let’s start with the basic definitions.
Pre-validation (Verification) is the practice of validation that occurs before the assessment tools are used. It takes place after you have either developed an assessment tool or purchased one from a provider. The objective of pre-validation (verification) is to ensure the tools meet the requirements of the training package and are valid instruments that will support assessment conducted according to the principles of assessment and the rules of evidence. Whether you develop your own tools or purchase them from providers, your RTO is required to verify they are fit for purpose and will produce valid assessments.
Validation is the quality review of the assessment process and is generally conducted after the assessment is complete. Validation involves checking that your assessment tools have produced valid, reliable, sufficient, current and authentic evidence, enabling your RTO to make reasonable judgements about whether training package (or VET accredited course) requirements have been met.
Moderation is a quality control process aimed at bringing assessment judgements into alignment. Moderation is generally conducted before the finalisation of student results as it ensures the same decisions are applied to all assessment results within the same unit of competency.
How are they aligned to SRTO 2015, and what are their commonalities?
The standards that apply to each concept are different. The table below shows which standards relate to each concept.
| Concept | Purpose | When it takes place | Relevant standards |
| --- | --- | --- | --- |
| Pre-validation (Verification) | Ensure the tools are valid and that assessment will be conducted according to the principles of assessment and the rules of evidence | Prior to using the tools | 1.5, 1.6, 1.8 |
| Moderation | Bring assessment judgements into alignment | Before the finalisation of student results | 1.8 and 3.1 |
| Validation | A quality review process that confirms your RTO's assessment system can consistently produce valid assessment judgements | After the assessment tool is implemented and student assessments are completed and marked | 1.5, 1.6, 2.2, 3.1, 1.13, 1.8, 1.9, 1.10, 1.11 and 1.25 |
Though their definitions and purposes differ, pre-validation, moderation and validation share one goal: ensuring best practice in effective assessment, particularly in assessment practice and judgement.
Problem 2. The 'two units, 50% in three years, five-year cycle' syndrome and the compliance mentality.
Many RTOs believe that validation must be conducted at 50% simply because it is a compliance requirement. For this reason, they wait until the last day and fail to identify problems at an early stage. This leaves them in the unfortunate position of non-compliance with Standard 3.1: they must revoke certificates and redo all assessments because they have awarded certification documentation to learners whom they have NOT assessed as meeting the training product requirements specified in the relevant training package.
RTOs must see the Standards as a document that describes what outcomes an RTO must achieve, not how those outcomes must be achieved (or policed).
The '50% in three years, five-year cycle' is the minimum requirement, and a risky one.
Scheduling only 50% of qualifications in three years, and adhering to that minimum, is equivalent to planning for failure and non-compliance. The best time to validate your resources is today, not tomorrow.
RTOs need to develop a validation schedule the day they receive their registration, covering each training product (AQF qualification, skill set, unit of competency, accredited short course and module) on their scope of registration. The validation clock starts to tick from that day, and the quality requirements increase with each tick.
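As a rough illustration of the schedule logic above, the sketch below spreads validation due dates evenly across a five-year registration cycle, which automatically places more than 50% of products in the first three years. This is a minimal sketch, not an ASQA tool; the function name, the even-spread rule, the 365-day year and the example product codes are all illustrative assumptions:

```python
from datetime import date, timedelta

def build_validation_schedule(products, registration_date, cycle_years=5):
    """Spread validation due dates evenly across the registration cycle.

    An even spread over five years places 60% of products in the first
    three years, comfortably above the 50% minimum requirement.
    """
    n = len(products)
    schedule = []
    for i, product in enumerate(sorted(products)):
        # Product (i + 1) of n falls due at the (i + 1)/n point of the cycle.
        days = round((i + 1) / n * cycle_years * 365)
        schedule.append((product, registration_date + timedelta(days=days)))
    return schedule

# Example: ten hypothetical training products registered on 1 January 2023.
products = [f"UNIT{i:02d}" for i in range(1, 11)]
plan = build_validation_schedule(products, date(2023, 1, 1))
within_3_years = sum(due <= date(2023, 1, 1) + timedelta(days=3 * 365)
                     for _, due in plan)
```

In this example, six of the ten products (60%) fall due within the first three years, so even an RTO that adds nothing beyond an even spread stays ahead of the 50% minimum rather than racing it at the deadline.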
Problem 3. Many RTOs believe they have conducted their validation; however, they are deemed non-compliant with Standards 1.10–1.11 and 1.25.
Several RTOs conduct validation but do not follow it with any rectification plan, which makes their validation part of the problem instead of part of the solution.
Many RTOs opt to cut corners by developing a generic validation tool from a template and asking validators to 'sign off', which, more often than not, results in a 'tick and flick' exercise.
Some RTOs also have the validation conducted by one person, usually a compliance officer or manager, without considering the requirements for a validation team. Many RTOs also make mistakes in sampling when choosing the assessments to be validated: because they sample only their best students, or only decisions already deemed competent, their validation produces a skewed and unreliable outcome.
For many RTOs, the root cause can be described as validation 'conducted for the sake of conducting', not for an effective outcome and meaningful action. The absence of a systematic, documented process, with supporting tools and guides, is customary in many RTOs.
Most RTOs have validation policies and procedures. However, these documents do not articulate or demonstrate in detail what must be done, how and when it will be done, who is responsible for doing what, the mechanism for monitoring it, and the evidence that can be provided.
RTOs must develop and implement a system with evidence (that can be seen, touched and heard) to ensure assessment judgements are consistently made on a sound basis and validation of assessment judgements is carried out regularly.
Next are the validation tools themselves. An effective tool is far from a 'sign me up' checklist with closed-ended yes/no questions that lead validators to answer yes. For example, asking validators to tick 'the principles of assessment are good' is not effective and does not demonstrate that assessment practice and judgement were genuinely examined by validators.
Though there is no specific method or approach that you must follow, you must demonstrate that:
- You have developed a schedule to validate each training product (AQF qualification, skill set, unit of competency, accredited short course and module) on your scope.
- You adjusted the validation schedule when adding a new training product. When making adjustments, ensure your plan continues to meet the timeframe and completion requirements discussed above.
- The training products must be validated as per the schedule; writing a validation schedule without implementing it is worse than not having a plan.
- Select your validators and ensure at least one is a subject matter expert who meets the 'industry relevance' requirements.
- The validation tool must contain open-ended and meaningful questions that can check assessment practice and judgement from different angles.
The ‘two units from qualification’ pattern.
Part of the validation failure occurs when RTOs stick to the 'two units per qualification' approach. The number of units and the selection criteria must not default to the minimum requirement; one size does not fit all.
Statistically valid sampling is essential in the validation process.
A statistically valid sample is:
· large enough that the validation outcomes of the sample can be applied to the entire set of judgements, and
· taken randomly from the collection of assessment judgements being considered.
Calculating sample size
You must validate enough assessments to ensure that the results of your validation are accurate and are representative of the total completed assessments for the training product.
To determine appropriate sample sizes, you can use ASQA’s validation sample size calculator.
Whatever model or method you use, you must ensure your sampling will provide you with a very low error level and high confidence in assessment practices and judgements.
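To make the arithmetic concrete, here is a hedged sketch of the standard finite-population sample-size calculation (Cochran's formula with a finite-population correction), the kind of calculation behind tools like ASQA's calculator. The 95% confidence level (z = 1.96), 5% margin of error and p = 0.5 defaults are common statistical conventions chosen for illustration, not ASQA-mandated figures:

```python
import math

def sample_size(population, confidence_z=1.96, margin=0.05, p=0.5):
    """Minimum sample size for a finite population of assessment judgements.

    Cochran's formula gives the infinite-population sample size, then a
    finite-population correction scales it down for smaller cohorts.
    """
    n0 = (confidence_z ** 2) * p * (1 - p) / margin ** 2  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)                  # finite-population correction
    return math.ceil(n)

# Example: 200 completed assessments for a unit of competency.
print(sample_size(200))  # 132 assessments to validate at 95% confidence, ±5%
```

Whatever the exact parameters, the pattern is instructive: small populations still require proportionally large samples (a cohort of 30 needs 28 validated at these settings), which is why a fixed 'two units per qualification' habit is rarely statistically defensible.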
Many RTOs sample only their best students, or only decisions already deemed competent, which skews the validation outcome.
Randomly selecting your sample will ensure adequate coverage of varying levels of learner performance. You may also supplement the random selection by adding additional completed assessments (for example, to include both competent and not competent assessments, or to include multiple assessors’ decisions, various delivery modes and locations) to ensure the validation process is representative of all assessment judgements.
(References: ASQA and NCVER)