Down the rabbit hole with the SRTO 2025 (Standards 1.3 & 1.5)

Inspired by recent comments about reviewing assessment tools / pre-assessment validation / post-assessment validation, I decided to investigate the origin of the non-term “pre-assessment validation” and how the definition of validation has morphed over time, and across various quality frameworks and units of competency.

There are three main pathways in my rabbit warren. Choose your own adventure!

Reviewing assessment tools

Outcome Standard 1.3 spells out the requirement to review and amend assessment tools prior to use.

Is this anything new? 

No! This has always been a requirement (SRTO 2015, Clause 1.8), but now it is spelt out clearly.

Obviously, RTOs now need to consider how they will capture and present evidence of their quality review of assessment tools.

Validation: definitions over time

This rabbit hole involved exploring how the term ‘validation’ has been defined and used in various quality frameworks from the Australian Recognition Framework (ARF, 1999) through the Australian Quality Training Framework (AQTF 2001, 2005, 2007, 2011, 2012) to the Standards for NVR Registered Training Organisations (SNR 2012), the Standards for Registered Training Organisations (SRTO 2015) and finally the revised outcome standards (SRTO 2025).

Fascinating reading if you are into that sort of thing! If you are, click here; if not, here is a summarised version:

Prior to 2005, the AQTF used language that clearly indicated validation should be post-assessment, but rather than defining the term ‘validation’, it simply stated, ‘The RTO must validate its assessment strategies’, including ‘evidence contributing to judgements’.

In 2005 the AQTF introduced the idea that ‘validation may be undertaken prior to and post assessment’. In 2007 the AQTF stated ‘Validation may be undertaken before, during and after the actual assessment activity occurs’.

In 2011, the AQTF dropped the reference to validation being undertaken prior to assessment and placed it very much in the post-assessment space.

Strangely, the SNR 2012 did not define validation, but the definition of validity within the Principles of Assessment is interesting: it describes validity as ‘…concerned with the extent to which an assessment decision about a candidate…based on the evidence of performance by the candidate, is justified’. This shows how validation is one way we can confirm validity.

In 2015, the SRTOs clearly described validation as a process that happens after assessment and introduced the ‘statistically valid sample’.
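
As a rough illustration of what a ‘statistically valid sample’ can mean, here is a minimal sketch using the generic finite-population sample size formula. To be clear, this is a generic illustration of the statistics, not ASQA’s published sample-size tool; the 95% confidence level, 10% margin of error and population of 200 are all assumptions for the example.

```python
import math

def sample_size(population, z=1.96, margin=0.10, p=0.5):
    """Generic finite-population sample size: ~95% confidence (z = 1.96),
    a 10% margin of error, and the most conservative proportion (p = 0.5)."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2           # infinite-population estimate
    return math.ceil(n0 / (1 + (n0 - 1) / population))  # finite-population correction

# e.g. an RTO with 200 completed assessments for a unit would sample about 66
print(sample_size(200))  # -> 66
```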

The SRTO 2025 takes it one step further, requiring review of assessment tools prior to assessment (Standard 1.3) and validation of assessment practices and judgements (Standard 1.5).

So, in summary, apart from a little glitch between 2005 and 2011 where things got murky, validation has always been about reviewing assessment practices and judgements (including the tools used to make the judgements) AFTER assessment.

Validation: TAE units

Where did the term ‘pre-assessment validation’ come from?

I’ve focused on Certificate IV units, but obviously there are implications for the Diploma units too.

If you want a potted history of TAE units from 1998 to 2022, click here.

In summary:

Up until 2008, units of competency (BSZ, TAA, TAE) described review or validation of assessment as a process that occurred after the assessment judgement had been finalised.

In 2008, TAEASS403A introduced the concept of validation being conducted before, during and after assessment. This aligned with the AQTF 2007 definition of validation.

The terms ‘pre-assessment validation’ and, strangely, ‘pre-validation’ were first used in TAEASS413 (released in 2022). These terms do not align with any quality standards. Thanks, PWC!

Hopefully the next time TAE is released, this unit will be amended to align with SRTO 2025.

IAVS in 2020

As 2019 draws to a close and the jingle bells get louder, I want to take a moment to thank all my colleagues, clients and friends for your support, questions, ideas and insights. It’s been a great pleasure to work with you all.

In 2020 I will not be working in VET in Australia. I will be in Tamghas, Nepal, on a volunteer assignment with Australian Volunteers International. My husband, Greg, and I will be working with a newly formed local government organisation to develop sanitation and waste management services.

Greg will be taking care of the technical side of things, and I’m really looking forward to returning my focus to two of my loves: education for sustainability and community education.

If you are interested you can follow us on Instagram – https://www.instagram.com/gregnruthinnepal

I’ll be back in Australia in 2021 and will be looking forward to reconnecting with all my work colleagues and friends and catching up on all the changes that will have happened in VET, especially the new AQF and new approaches at ASQA.

Best wishes for 2020!

Mapping – Common Pitfalls

During my auditing adventures I see many different mapping documents, styles and techniques.

Some mapping approaches are really useful: they clarify exactly what is required in evidence and reassure assessors that the unit is adequately covered by the assessment instruments and that the evidence to be gathered will be valid.

Sadly, some mapping approaches are woefully wanting.

Accurate and careful mapping in the early stages of assessment design and development is critical. Detailed mapping assists in developing clear and specific questions and tasks, and in writing clear assessment criteria.

Mapping underpins the reliability of the assessment process.

Here are the most common mapping pitfalls (in no particular order):

Failure to map all aspects of the unit
RTOs generally map the performance criteria (PC), but sometimes fail to map the performance evidence (PE) and knowledge evidence (KE), and more frequently fail to map the foundation skills (FSK). Sometimes it might be useful to map the assessment conditions, but this really depends on how specific they are. For instance, where the assessment conditions specify that assessment must be conducted in a real workplace or with real clients, it may be useful to map this aspect of the unit. (A simple automated coverage check, sketched after this list, can help flag unmapped requirements.)

Lack of specific mapping
Sometimes mapping can be so vague as to be unhelpful, for instance where a written task with many questions is mapped (with a tick or a cross) to all knowledge requirements. It is more helpful if each question number is mapped to a specific knowledge requirement.

Over-mapping
Over-mapping is when an aspect of the unit (PC, PE, KE, FSK) is mapped to an assessment event that only vaguely relates to the criteria. This is very common with short answer questions that draw out knowledge that may underpin a performance criterion but provide no evidence of the implementation of that criterion.

Inaccurate mapping
Inaccurate mapping is when an aspect of the unit is mapped to an assessment event that provides no evidence in relation to that aspect of the unit, or vice versa, where an assessment event that does provide evidence is not mapped to that aspect of the unit.
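
Several of these pitfalls, particularly unmapped requirements and references to requirements that are not in the unit, can be caught with a quick automated check before a human review. Below is a minimal sketch assuming the mapping has been exported from a spreadsheet into a simple Python structure; the requirement codes and task names are invented for illustration.

```python
# A minimal sketch, not an official tool: check a mapping for coverage gaps
# and inaccurate references. Requirement codes and task names are invented.

# All aspects of a hypothetical unit: performance criteria (PC), performance
# evidence (PE), knowledge evidence (KE) and foundation skills (FSK).
unit_requirements = {"PC1.1", "PC1.2", "PE1", "KE1", "KE2", "FSK-Writing"}

# Mapping at the level of specific questions/tasks, not whole instruments.
mapping = {
    "Written Task Q1": ["KE1"],
    "Written Task Q2": ["KE2"],
    "Practical Task 1": ["PC1.1", "PE1"],
}

mapped = {req for reqs in mapping.values() for req in reqs}

# Pitfall: failure to map all aspects of the unit.
for gap in sorted(unit_requirements - mapped):
    print(f"NOT MAPPED: {gap}")

# Pitfall: inaccurate mapping (a mapped requirement that is not in the unit).
for task, reqs in mapping.items():
    for req in reqs:
        if req not in unit_requirements:
            print(f"CHECK: {task} maps to unknown requirement {req}")
```

A script like this only checks coverage and the accuracy of references; judging whether each mapped item is specific enough and genuinely provides valid evidence still requires a human reviewer.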

Are there aspects of mapping that you find confusing, or that are a bugbear for you? Please feel welcome to share your thoughts via the comments link.

Hopefully we will see significant improvement in the quality of mapping and the quality of assessment tool design as trainers and assessors undergo professional development in TAEASS502 Design and develop assessment tools (as required prior to April 2019 for those who don’t already hold the unit).

Student-Centred Auditing – what does it mean for RTOs?

ASQA has recently introduced a new audit model that focuses on the student journey with their RTO. This new audit model is a big shake-up in the audit approach, but from the RTO point of view, there is nothing really that you need to change in order to be ready for it. It is really business as usual for RTOs: maintaining quality training and assessment and compliance with all standards at all times.

The main impacts for the RTO of the new audit model are:

  • You will be asked to supply student contact details so ASQA can conduct Student Surveys prior to the audit. Surveys are administered in a written format, online, and the results will be used to inform the scope of the audit.
  • The auditor will be keen to speak to students during the audit process (at the site audit and possibly after the audit).
  • If non-compliances are identified, the RTO is required to assess the impact of the non-compliance on current and past students, explain how it will redress the impact, and carry out the remedial action.

The requirement for RTOs to assess the impact of non-compliance and carry out remedial action is arguably the biggest change to the regulatory environment for RTOs that we have seen for many years. This change may see you re-assessing past students or offering refunds to students who were not fully informed during the enrolment process. See ASQA’s Fact Sheet on Addressing Non-Compliances Following Audit for more information.

The clauses from the Standards for RTOs 2015 that ASQA will review as part of your re-registration audit have changed under the Student-Centred Model. This does not mean that the clauses that are not automatically included, such as Clauses 1.9–1.11 (validation) and 1.5–1.6 (industry engagement), are no longer important. They are! The audit process samples only some clauses, but RTOs are always expected to be compliant with ALL clauses ALL the time.

The Student-Centred Model puts the focus right back where it should be by asking: is the student getting quality education, making informed choices, being supported in their learning, getting the skills and knowledge they need to support their vocational ambitions and meet industry needs?

More than pottery classes – Community Education Providers’ involvement in Work-based Learning and Foundation Skills

Work-based Learning in Community Education Providers – Interim Research Report, December 2015

Executive Summary

This research is being conducted by Community Colleges Australia on behalf of the Commonwealth Department of Education & Training (DET) under the Strategic Partnerships Project.

The research aims to examine the extent to which community providers are delivering work-based learning, workplacement and foundation skills. It is hoped that the research will create better recognition of community providers by illuminating the role they play in providing quality VET.

The research involves a large-scale survey of community based providers followed by detailed semi-structured interviews with selected participants. Results will be collated into a report for the Department of Education & Training.

Four hundred and fourteen (414) RTOs were selected from the National Register of RTOs (TGA) according to their provider type indicator (AVETMISS data field) and invited to take part in the initial survey. Ninety-two RTOs responded to the survey, a response rate of roughly 22%.

While the data provided by the surveys needs to be further interrogated and checked with detailed semi-structured interviews, the initial results indicate the following:

Work-based learning

Community-based providers are very much involved in delivering work-based learning:

  • 45% deliver traineeships and 10% deliver apprenticeships.
  • 66% are delivering qualifications that have a mandatory work-based learning component (i.e. where the training package specifies that assessment must be conducted in the workplace and/or specifies the workplacement hours required).
  • 52% are involved in delivering accredited and non-accredited training for specific business clients.
  • 74% stated that they provide workplacement opportunities as part of their courses, even when the training package does not require it.
  • 35% stated they operate a business that has been specifically designed for the purpose of providing work-based training, such as a hairdressing/beauty salon, nursery or café.

When asked why they did not deliver more work-based learning and provide more workplacement opportunities, the following themes emerged:

  • Funding is a big issue. RTOs state that employers are reluctant to pay for work-based learning and that the funding arrangements are not supportive.
  • Community-based RTOs often service large regional and rural areas, and the cost of managing work-based traineeships in such thin markets with a large geographical spread is prohibitive.
  • Community-based RTOs are often very small operations and do not have the capacity to invest in the marketing and relationship building required to build a client base with local business and market the RTO’s capabilities.
  • Furthermore, it was felt that community-based RTOs do not have a reputation for providing work-based learning opportunities, and therefore the marketing requirements are greater.
  • Employers are reluctant to host learners for workplacement, and there is a lot of competition, especially in rural and regional areas, from “large national” and “private” RTOs to engage host employers.

Foundation Skills

Community providers are very involved in delivering foundation skills:

  • 88% stated that they have delivered accredited foundation skills courses in the last 12 months. These were classroom-based courses.
  • 89% stated that they are integrating foundation skills into VET courses. This may be achieved in a variety of ways, such as VET trainers being LLN-trained and integrating LLN into their VET delivery (46%), LLN experts working with VET experts in the design of the course (18%), or FSK units being integrated into VET courses (29%).

When asked why they did not deliver more foundation skills, the following themes emerged:

  • The most common response was inadequate funding for foundation skills. It is perceived that employers will not pay for foundation skills training, and “selling” the benefits of foundation skills to employers is a big drain on resources.
  • Changes to jobactive contracts mean there are fewer referrals from this source.
  • The Victorian processes of having the appropriate qualification on scope and being on the Approved Provider List are seen as prohibitively complex and costly for many Victorian providers.
  • The limited number of approved foundation skills courses under Smart and Skilled in NSW has caused problems for many RTOs.
  • The students in greatest need are often uninterested or reluctant, people are not aware of their own need, and student numbers are low.
  • Many responded that they have limited capacity in terms of qualified staff and classroom space to deliver more foundation skills.

Who is a community provider?

For the purpose of the research, we defined a community education provider as an RTO that is not-for-profit (NFP), is a legal entity, is focused on adult education, and serves a specific community, whether that be a geographical community or a cultural community.

Survey participants were selected from the Training.gov (TGA) database according to the provider type indicator. Those selected were identified as the following provider types (AVETMISS data field):

  • Community Based Education Provider
  • Other provider – ‘91 Private education/training business or centre’: a privately operated registered training organisation (‘private provider’).
  • Other provider – ‘99 Other training provider not elsewhere classified’: for-profit private training providers (e.g. private one-on-one music teachers, private tutors) and not-for-profit training providers (e.g. Mission Australia, Salvation Army).

Despite the fact that the AVETMISS definition of “Community-based Adult Education Provider” specifies that such a provider should be NFP, our research using the TGA database has shown that a number of RTOs registered with the “Community-based Adult Education Provider” type indicator are in fact “for-profit” organisations. This is a curious and potentially misleading fault in the AVETMISS collection and has implications for how the data may be used for research and policy decisions.

Next steps in research:

  • A second broad survey to all respondents who have indicated they deliver foundation skills. The survey will take a snapshot of the range of conditions, strategies and practices in use for the facilitation of foundation skills, including: the qualifications of trainers; the characteristics of learners; the curriculum/training packages being used; funding arrangements; classroom and workplace practices; approaches to pedagogy and assessment; and the outcomes in terms of assisting transitions in life, work and education for learners.
  • Detailed semi-structured interviews with survey respondents focusing on:
    • Those RTOs who are successfully engaging with businesses to provide business-specific work-based learning opportunities. What strategies have worked for engagement? What strategies have worked for delivering the training and up-skilling workers? What skills are required by trainers?
    • Those RTOs who are delivering Traineeships. What models are they using? What skills are required by trainers/assessors?
    • Those RTOs who operate a business for the purpose of providing training, such as a beauty salon or café.
    • Those who deliver training that has mandatory workplacement requirements. What have they learnt from this experience in terms of engaging with employers, finding host employers and the value of workplacement?

What’s the difference between validation and moderation?

When talking about validation, one of the most common questions asked is: what is the difference between validation and moderation, and should we be doing both?

The new Standards for Registered Training Organisations 2015 (SRTOs 2015) have very specific requirements around validation, but moderation is not mentioned. The standards specify that validation of assessment judgements is required (Clause 1.9), and this requirement has caused some confusion regarding the difference between validation and moderation, with many people wrongly thinking that validation of assessment judgements is moderation.

For my definitions and understanding of these two concepts I refer back to the 2009 National Quality Council publication called A Code of Professional Practice for Validation and Moderation. You can download the document from the NCVER Voced database. This publication has a useful table on page 7 that summarises the distinctive features of both validation and moderation.

I think the easiest way to explain the difference between validation and moderation is to think about purpose and the outcome of the activity.

Validation is a quality review process conducted for the purpose of continuous improvement. The outcome of validation is recommendations for future improvements.

Moderation, on the other hand, is a quality control process that occurs prior to the final assessment decision being made. It is a process whereby assessors “moderate” their assessment decisions and come to a consensus decision about the competence of a candidate.

The outcome of moderation may be adjustments to the assessor’s decision.

Validation of assessment judgements involves reviewing the outcomes of assessment (eg students’ work) and reviewing the assessment decision that was made. The outcome of validation of assessment judgements may be recommendations for improvements to the assessment materials, assessment processes or perhaps professional development for the assessor. The difference between validation of assessment judgements and moderation is that validation of assessment judgements happens after the student results have been finalised while moderation happens prior to the student results being finalised.

In VET, moderation is not as common as validation. In LLN provision, however, moderation has been a common practice for many years.

The Standards for RTOs 2015 and the AQTF do not specify that RTOs undertake moderation; however, there is no reason why moderation could not be part of your assessment system.

If you have not already had enough of validation, moderation and validation of assessment judgements, you can read more in the ASQA Fact Sheet, Conducting Validation.


Will the new Standards for RTOs 2015 improve validation practices?

The VET industry continues to be plagued by concerns about poor assessment practices. The root cause of poor assessment practice is attributed to a variety of causes, including: lack of adequate systems to ensure consistency across providers (Misko 2015); unacceptably low levels of assessment literacy (Gillis et al 2010); inadequacy of the TAE40110 (Misko 2015); low-level skills of candidates (Halliday-Wynes and Misko 2013); pressure of fast-track qualifications; and lack of systematic and regular moderation and validation (Misko et al 2014).

My personal experience and observation as an ASQA auditor is that it is common for RTOs to have abundant evidence of validation activities, and yet present assessment materials that are critically non-compliant. This suggests a curious disconnect between validation activities and the development of quality assessment materials, and indicates that while validation may be happening it is far from “systematic”. This disconnect lends weight to calls for a shift in focus from validation for compliance purposes to validation for continuous improvement purposes (Gillis et al 2010).

The concept of “systematic validation” as required by the Standards for NVR Registered Training Organisations 2012 (SNR) was clearly elusive. The Standards for Registered Training Organisations 2015 (SRTOs 2015) detail more prescriptive requirements for RTOs with regard to planning and managing validation. What is yet to be determined is whether these more prescriptive requirements will have any impact on the quality of validation activities occurring in RTOs.

I will be conducting research in 2016 to determine whether the new standards have changed the perceptions of trainers and assessors with regard to validation. I will be blogging regularly regarding the research process and findings so stay tuned!