
1. This document sets out policy and guidance on the scaling of marks by Boards of Examiners. The policy and guidance are among the measures by which SGUL seeks to protect the interests of students and maintain academic standards when SGUL's Unforeseen Circumstances (Force Majeure) Regulations have been invoked. The policy and guidance are intended to establish consistency between programmes whilst recognising the different assessment practices that exist at St George's. The policy and guidance also provide transparency for students.

Definition of Scaling

2. Scaling is the process of applying an arithmetic adjustment to the initial marks obtained during the assessment process so that the marks, after scaling, are a more accurate reflection of the extent to which the students have achieved the learning outcomes for an assessment task.

3. Scaling therefore takes place at the end of the moderation process and before marks are finalised and presented to Boards of Examiners. Scaling is usually applied to the marks of all students who have attempted the component or element of assessment.

4. Scaling is different from established standard-setting procedures, such as Angoff and Cohen, that are used to determine the pass mark for a knowledge test.

Current (business as usual) approach to scaling

5. For the most part, existing processes for standard setting, for the moderation of student work and for the analysis of assessment outcomes are sufficient to eliminate errors or defects in the assessment process and to ensure that assessment is valid and reliable. These processes will amount to scaling if they result in marks for a cohort (or subset of a cohort) being raised or lowered. SGUL has not issued any detailed guidance on scaling and the way in which scaling functions in a business as usual setting is determined at the programme level. Programme teams typically have not documented the way in which scaling is applied as business as usual.

6. SGUL's guidance to external examiners does set out interventions that might be undertaken if, in the view of the external, there are issues related to fairness of marking or the effectiveness of internal moderation. These interventions do include scaling marks up or down for a cohort as a mechanism for correcting defects in the assessment process. If this were to happen, the external examiner and the Course Director or Chief Examiner would make a recommendation to the Examination Board for discussion and approval. No detailed guidance is offered about scaling methodologies.

Business as usual triggers for scaling

7. The following then are possible business as usual triggers for scaling:

  1. an anomalous distribution of marks (for example, unusual patterns or numbers of high or low marks) at either module level or the level of an assessment element;
  2. a range of marks significantly out of line with what might be expected from past performance in the assessment element or module in question;
  3. the range of marks is significantly out of line with the marks achieved by the same students on other modules at the same level;
  4. reasoned evidence of a problem with the relevant assessment component.

Scaling mechanisms (business as usual)

8. Scaling mechanisms might include:

  1. adding a fixed number (for instance 3) of marks to all marks on a particular assessment component, as long as no scaled marks are then greater than 100. (Marks of 0 and 100 will not be scaled);
  2. subtracting a fixed number (for instance 5) of marks from all marks on a particular assessment component, as long as no scaled marks are then less than 0. (Marks of 0 and 100 would not normally be scaled);
  3. multiplying all marks on an assessment element by a particular factor (for example 0.96).
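
As an illustration only, the mechanisms above might be sketched as follows. The function names, the clamping to the 0–100 range, the rounding, and the exemption of marks of 0 and 100 from the multiplicative adjustment are assumptions for this sketch, not requirements of SGUL policy:

```python
def scale_additive(marks, offset):
    """Add (or, with a negative offset, subtract) a fixed number of marks.
    Marks of 0 and 100 are left untouched; other results are clamped to 0-100
    (an assumed safeguard matching the 'no mark greater than 100' condition)."""
    return [m if m in (0, 100) else min(100, max(0, m + offset)) for m in marks]

def scale_multiplicative(marks, factor):
    """Multiply every mark by a fixed factor (for example 0.96), rounded to
    one decimal place. Exempting 0 and 100 here is an assumption."""
    return [m if m in (0, 100) else round(min(100, max(0, m * factor)), 1)
            for m in marks]

raw = [0, 42, 55, 68, 100]
print(scale_additive(raw, 3))           # marks of 0 and 100 are unchanged
print(scale_multiplicative(raw, 0.96))
```

A subtraction is simply an additive adjustment with a negative offset, so the clamping at 0 covers the 'no scaled marks less than 0' condition in the same way.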

9. The rank order of students after scaling would be the same as the rank order of students before scaling unless scaling is applied to a subset of the cohort (e.g. because the marking practice of examiners on an OSCE station appeared to be unduly harsh). Any scaling mechanism must encompass the full range of raw marks from 0 to 100.
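
The rank-order property described above can be checked mechanically. This sketch (the names and the example marks are illustrative assumptions) confirms that a uniform adjustment applied to a whole cohort leaves the ordering of students unchanged:

```python
def rank_order(marks):
    """Return student indices ordered from highest mark to lowest."""
    return sorted(range(len(marks)), key=lambda i: -marks[i])

raw = [62, 48, 71, 55]
scaled = [min(100, m + 3) for m in raw]   # a uniform additive adjustment

# The same students occupy the same positions before and after scaling.
assert rank_order(raw) == rank_order(scaled)
```

Note that clamping at the 0 and 100 boundaries can create ties at the extremes, which is one reason marks of 0 and 100 are normally left unscaled.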

10. Adding marks to failed marks for an assessment component and leaving all other pass marks unadjusted would not normally take place as part of a scaling procedure. The review of failed marks to reconfirm that learning outcomes have not been met will usually take place as part of the moderation process.

Scaling under Unforeseen Circumstances (Force Majeure)

11. In the context of Unforeseen Circumstances (Force Majeure), the facility to scale marks is available to assessment teams and reflects the following factors:

  1. Assessment tasks that test the same learning outcomes have been redesigned to reflect the exceptional circumstances within which the assessments will be delivered.
  2. The circumstances within which the students will be attempting the redesigned assessments may have an impact on their capacity to prepare for these assessments and achieve outcomes that reflect their capabilities.

12. These factors suggest the possibility of unreliability in the assessment process. For example, a comparison of marks awarded in a force majeure period with those awarded in previous years for the equivalent assessments might reveal variations in student performance. These variations, if they cannot be explained by other means, may be evidence that student performance has been affected by exceptional circumstances in a manner that can be neutralised by the scaling of marks.

Principles under Force Majeure

13. During the period when Unforeseen Circumstances (Force Majeure) regulations have been invoked, scaling will reflect the following principles:

  1. Scaling will be considered if a comparison with marks from previous years indicates variations in performance that are not explained in other ways.
  2. Scaling will be undertaken at the level of the individual assessment element or component because the purpose of scaling is to mitigate the consequences of changes to individual tasks and the circumstances in which those tasks were attempted by students.
  3. The marks for all relevant modules and module assessments will be considered before making the decision to scale the marks of any individual assessment element. This is to determine the full impact of the Exceptional Circumstances on the assessment process.
  4. Scaling of marks for small (<20) cohorts of students is permissible but should be approached with caution because there may be random fluctuations in the mark distribution from year to year.
  5. Scaling must be a transparent process. Where scaling has been undertaken, the students concerned must be made aware of the rationale for scaling and receive an explanation of the process.

Process

14. Module marks, including component marks, from previous years will be made available to assessment teams for comparative purposes. An average of marks awarded in comparable assessments in the three previous years will be provided where available. If marks for fewer than three years are available, a reliable reference point for scaling may not be available.

15. When the 2019-20 assessed work has been fully marked and moderated, provisional marks will be compared with the equivalent marks from the previous years. If the comparison indicates that there are significant variations that cannot be explained by other means, the validity of scaling marks will be considered.
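
The comparison in paragraphs 14 and 15 can be sketched as follows. The function name and, in particular, the tolerance of 5 marks are illustrative assumptions; the policy does not specify a numerical threshold for what counts as a significant variation:

```python
def flag_for_scaling(current_mean, previous_means, tolerance=5.0):
    """Flag an assessment for possible scaling when the cohort mean
    deviates from the average of the previous years' means by more
    than an assumed tolerance (the 5-mark figure is illustrative,
    not part of SGUL policy)."""
    baseline = sum(previous_means) / len(previous_means)
    return abs(current_mean - baseline) > tolerance

# A cohort mean about 7 marks below the three-year average is flagged;
# a deviation within the tolerance is not.
print(flag_for_scaling(54.2, [61.0, 63.5, 60.1]))
print(flag_for_scaling(60.0, [61.0, 63.5, 60.1]))
```

A flag raised by such a check would only prompt consideration of scaling; the decision itself remains with the assessment team, external examiners and the Board of Examiners.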

16. Provisional marks can be released to students in advance of scaling in line with SGUL's Assessment Feedback Policy.

17. Marks that have been capped (e.g. because an allegation of academic misconduct is under investigation or has been sustained) will be excluded from the comparison because they would otherwise depress the mean for the module.

18. A range of marks might be out of line with past performance if the proportions of marks awarded in the different honours degree classification boundaries for an assessment task differ from what has been achieved by previous cohorts of students. By way of an example, student performance (based on the previous three years of data) might indicate that typically:

  1. 10% of students had achieved marks in the first class classification boundary;
  2. 60% of students had achieved marks in the 2:1 classification boundary;
  3. 30% of students had achieved marks in the 2:2 classification boundary.

19. If student performance in an assessment under force majeure is significantly different from the previous reported pattern, scaling might be applied. In the above example, consideration might be given to adjusting the boundary between 1st and 2:1 performance to replicate the previous pattern of marks if <5% of students achieved first class marks. If scaling is applied, marks would be scaled accordingly.
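
The comparison and boundary adjustment described in paragraphs 18 and 19 might be sketched as follows. The function names, the standard UK classification boundaries of 70, 60 and 50, and the percentile method for locating a candidate new boundary are all assumptions for illustration, not prescribed by this policy:

```python
def band_proportions(marks):
    """Proportion of a cohort's marks in the 1st (>=70), 2:1 (60-69)
    and 2:2 (50-59) bands, using standard UK boundaries (assumed)."""
    n = len(marks)
    first = sum(m >= 70 for m in marks) / n
    upper = sum(60 <= m < 70 for m in marks) / n
    lower = sum(50 <= m < 60 for m in marks) / n
    return first, upper, lower

def adjusted_first_boundary(marks, target_share):
    """Find the mark at which the historical share of students
    (e.g. 10% with firsts) would be replicated, i.e. a candidate
    new 1st/2:1 boundary for the scaling decision."""
    ordered = sorted(marks, reverse=True)
    k = max(1, round(target_share * len(marks)))
    return ordered[k - 1]

# A cohort of 20 in which no student reached 70: replicating a
# historical 10% first-class share suggests a boundary at the
# second-highest mark.
cohort = [69, 68, 67, 66] + [60] * 16
print(adjusted_first_boundary(cohort, 0.10))
```

In practice the adjusted boundary would feed into a scaling of the affected marks, rather than a reclassification of individual students, and the decision would still require the approvals set out in paragraph 23.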

20. For taught postgraduate programmes, the distribution of marks awarded in the pass, merit and distinction categories can be used as the basis for comparing outcomes under force majeure with outcomes from previous years.

21. There is no expectation that marks will be scaled under force majeure. As noted, the facility to scale marks is available to address anomalies that have not been corrected as part of the moderation process.

22. Adding marks to failed marks for an assessment component and leaving all other pass marks unadjusted would not take place as part of a scaling procedure. The review of failed marks to reconfirm that learning outcomes have not been met will usually take place as part of the moderation process.

23. All scaling decisions under the Force Majeure regulations must be discussed with External Examiners and reported to Boards of Examiners. Prior to the meeting of the Board of Examiners, the approval of the Chair of the Monitoring Committee (or nominee) must be sought. Board minutes must include an explanation of the reasons for scaling and the mechanisms used to undertake scaling.

24. Where scaling has been undertaken, affected students will be provided with the rationale for the scaling and an explanation of the process.

 
