Approaches to Inclusive Assessment and Feedback

Importance of Effective Assessment

‘Assessment is the senior partner in learning and teaching. Get it wrong and the rest collapses’ (Biggs and Tang, 2011).

Assessment plays a critical role in the learning and teaching experience, in the development of students’ skill sets and in their overall experience.

‘Nothing we do to, or for our students is more important than our assessments of their work and the feedback we give them on it. The results of our assessment influence students for the rest of their lives…’ (Race et al., 2005).

A well-designed assessment can not only encourage active learning but also address issues relating to academic integrity, inclusivity and employability. There are also opportunities for colleagues, through relevant review processes such as module evaluation, to develop teaching approaches and evidence effective practice. In relation to employability, effective assessment, ideally varied in design, will maximise the opportunity for all learning outcomes to be met and for each student to be industry-ready upon graduating. The UK Quality Code for Higher Education Guiding Principles for Assessment guides the ways in which the University should approach both assessment and feedback.

In line with the University’s Assessment & Feedback Policy, effective assessment can:

  • Provide a means to enhance student learning by providing appropriate feedback on performance;
  • Provide a means to allow students to develop key skills and graduate attributes for employability and life-long learning;
  • Provide a means by which to measure and certify student achievements against the learning outcomes of the module, also referred to as Assessment of learning;
  • Provide a reliable and consistent basis for the recommendation of an appropriate grade or award;
  • Provide a means by which staff can evaluate the effectiveness of their teaching.

Ensuring Assessments are Fit for Purpose

To evidence our commitment to providing assessments that meet the principles of the UK Quality Code, the University aims to ensure suitable and robust quality assurance processes for all assessments. These include utilising student feedback, through surveys, module feedback or other platforms, as well as ensuring both internal and external review processes are completed.

The University’s Assessment & Feedback Policy outlines the partnership and commitment between the University, staff and students in delivering an inclusive, collaborative and empowering assessment approach.

As part of the University’s Learning and Teaching Strategy, all programmes will aim to improve the effectiveness of assessment and feedback methods for staff and students and to reduce the overall burden of assessment. In line with the University’s Assessment & Feedback Policy, colleagues are required to:

  1. Develop assessments which empower students to meet the intended learning outcomes and competence standards of the module.
  2. Link each assessment to assessment criteria for attaining academic standards for progression. Students must be made aware of the assessment criteria over and above the inclusion of general assessment criteria within College/School handbooks.
  3. Ensure assessments are inclusive and meet the needs of all students, as outlined by the Equality Act (2010).
  4. Offer alternative assessment methods which meet the same competence standards and assess the same learning outcomes.
  5. Ensure students experience a varied range of assessment methods across modules within a level of study and across a programme.
  6. Design assessments which provide all students with the opportunity to develop key transferable skills that will enhance employability and their graduate attributes.
  7. Ensure programmes contain appropriate formative assessment to allow students and staff to reflect on their learning progress and improve for future assessments.
  8. Give students the opportunity to familiarise themselves with any new assessment methodology introduced or utilised in a programme/module.
  9. Where appropriate, design assessments which assure the practitioner’s fitness to practise and safeguard the public.
  10. Give students the opportunity to evaluate whether a module assessment, in particular any alternative assessment provisions made, has allowed them to demonstrate the intended learning outcomes and whether the assessment criteria have been clear and useful.
  11. Ensure students have access to at least one past or sample examination paper for each examination format, along with any generic feedback relating to a specific past paper.
  12. Not re-use examination or assessment questions; in exceptional circumstances, where questions are re-used as part of a question bank or similar, they should not be re-used within three years.

The Role of Assessment in Staff Development

Assessment offers the opportunity to develop teaching quality and approaches for staff. Through mechanisms such as External Examiner reports, there are opportunities to reflect on assessment suitability and effectiveness, ensuring the ongoing evolution of delivery.

External Examiners have a key role in the development of assessments, and in the monitoring of marking. Please refer to the Code of Practice for External Examiners for further information on how External Examiners assist in the development of assessment.

These peer observation and External Examiner mechanisms also play a critical role in equality and inclusivity of assessment. Allen (2020) outlines:

‘We need a standard measure of attainment. If higher education policy is to be better informed there has to be a “levelling up” in the measurement of student attainment’.

Therefore, whilst consistency in assessment can sometimes be challenging, putting additional mechanisms in place can not only address ‘levelling up’ issues but also provide important development opportunities and sharing of effective practice principles.


Types of Assessments

There are a range of approaches to assessment which, when carefully designed and implemented, can significantly enhance the student experience:

Formative Assessment

Designed to help learners learn more effectively, develop a skill and discover how to approach different types of assessment to suit their learning style.

Summative Assessment

Used to indicate and measure a student’s success in meeting the assessment criteria. Only summative assessments count towards a student’s final mark.

Continuous Assessment

Refers to evaluation of student progress throughout a course. Coursework is a prime example of this.

Authentic Assessment

Authentic Assessment describes any assessment type which reflects real-world applications that students may encounter in their future career.

You can find more information on Authentic Assessments on the SALT website.

Adjusted Assessment

Making reasonable and appropriate changes to the original assessment format to enable all students to access and engage with the assessment, without compromising competence standards.

Diagnostic Assessment

Used, usually at the start of a module, to provide an indicator of a learner’s aptitude, progress and preparedness for undertaking a module, programme or assessment.

Synoptic Assessment

An approach to Programme Level assessment whereby the learning outcomes from a range of modules can be synthesised into a single assessment point.

Programme Level Assessment

A holistic approach to assessment and learning. It can eliminate a compartmentalised approach to learning, reduce the siloed assessment load and improve employability skills.

Ipsative Assessment

Focused on individual performance, it measures a student’s development against their own previous performance rather than against external criteria and standards.

Inclusive Assessment

Designing assessment from the outset that is, as far as possible, accessible for all students (recognising that some students with particularly complex needs may still require adjustments to assessments), using the concepts of ‘universal design’.  All assessments must be inclusive.

 

Optional/Student Selected Assessment

Students can select from a range of assessment types to demonstrate the intended learning outcomes.

Alternative Assessment

Providing students with a different type of assessment that is accessible for them, which still tests the Learning Outcomes of the original assessment, without compromising competence standards.

What assessment type is most suitable for different learning outcomes?

The matrix below, adapted from Nightingale, P., Te Wiata, I.T., Toohey, S., Ryan, G., Hughes, C. and Magin, D. (1996), outlines some assessment types and the learning outcomes each may be most appropriate for. The matrix, together with one designed by Annie Crook, University of Oxford, may be useful in evaluating which assessment type best assesses a given skillset. It should be noted that the matrix is intended as a guide; learning outcomes may vary within a particular subject area.

Generic learning outcomes and suitable assessment types

Thinking critically and making judgments
(Developing arguments, reflecting, evaluating, assessing, judging)

  • Essay
  • Report / Academic Journal
  • Policy writing or briefing paper
  • Presenting a case
  • Book / Journal or Theoretical review

Solving problems and developing plans
(Identifying problems, posing problems, defining problems, analysing data, reviewing, designing experiments, planning, applying information)

  • Problem scenario
  • Group work
  • Industry business solution
  • Prepare a Committee Paper
  • Draft a research bid
  • Case analysis
  • Conference paper

Performing procedures and demonstrating techniques
(Computation, taking readings, using equipment, following laboratory procedures, following protocols, carrying out instructions)

  • Demonstration
  • Role play
  • Develop and produce a video / media
  • Poster presentation
  • Lab report
  • Professional practice observation

Managing and developing yourself
(Working co-operatively, working independently, learning independently, being self-directed, managing time, managing tasks, organising)

  • Journal
  • Portfolio
  • Learning contract
  • Group work

Accessing and managing information
(Researching, investigating, interpreting, organising information, reviewing and paraphrasing information, collecting data, searching and managing information sources, observing and interpreting)

  • Annotated bibliography
  • Project
  • Dissertation
  • Applied task
  • Applied problem

Demonstrating knowledge and understanding
(Recalling, describing, reporting, recounting, recognising, identifying, relating and interrelating)

  • Written examination
  • Oral examination
  • Essay
  • Report
  • Devise an encyclopaedia entry
  • Produce an A–Z on a topic
  • Write an answer to a client’s question
  • Multiple Choice Question quiz

Designing, creating, performing
(Imagining, visualising, designing, producing, creating, innovating, performing)

  • Portfolio
  • Performance
  • Presentation
  • Projects
  • Production of a product, piece of art or other development

Communicating
(One- and two-way communication, communication within a group; verbal, written and non-verbal communication; arguing, describing, interviewing, negotiating, presenting)

  • Written presentation
  • Oral presentation
  • Group work
  • Discussion / debate / role play
  • Professional observation

Peer and self-assessment

Peer assessment focuses on students assessing the work of fellow students against the criteria outlined in the assessment brief (or rubrics). Self-assessment is inherently the same principle but facilitates an individual student in evaluating their own work. From both an academic and professional skills development standpoint, both approaches offer an opportunity for participants to understand standards, engage in their learning and improve their performance.

Both peer and self-assessment are predominantly used in formative assessment, in order to assist students in their development towards learning outcomes, but can also be used as a summative approach for reflection on group contributions. 

‘The use of student self and peer assessment can help students develop their assessment literacies, understand how to approach summative assessment more effectively and learn how to use marking criteria to help structure their techniques for assessed pieces of work and exams. The use of some self and peer assessment can also alleviate marking requirements for staff’ (Newcastle University, L&T Development Service, 2020).


Accessibility in Learning and Teaching

The enhancement of accessibility within learning & teaching has risen to the top of the Higher Education agenda over the last few years, and this has been brought into sharper focus with the rise in online learning and assessment driven by the Covid-19 pandemic. In the ‘How has the coronavirus accelerated the future of assessment?’ report, JISC outlines five key priorities for transforming assessments and highlights the focus on enhancing accessibility:

  • Making assessment more authentic
  • Enhancing accessibility
  • Appropriately automated (including easing teachers’ workload)
  • Continuous adaptation to the ’world of work’
  • Secure against cheating

The University’s SAILS Academy provides comprehensive guidance on making your learning & teaching materials accessible to all. An effective practice starting point is SAILS’ Top Tips, which gives colleagues practical guidance on how to ensure documentation is accessible from point of creation, to ensure an engaging learning experience for all.

The Top Tips outline the following recommendations:

  • Make your lecture materials available early (at least 48 hours before delivery)
  • Ensure high text colour contrast
  • Use legible font sizes and styles
  • Check file accessibility
  • Make audio and video files accessible
  • Only use tables for tabular data
  • Make equations accessible
  • Check external tools for accessibility
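The colour-contrast tip above can be quantified: WCAG 2.1 defines a contrast ratio between two colours based on their relative luminance, with a minimum of 4.5:1 required for normal-size text at level AA. The sketch below implements that published formula in Python; the function names are illustrative, not drawn from any University tool.

```python
def _linearise(channel):
    # Convert an 8-bit sRGB channel to linear light (WCAG 2.1 formula)
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    # Weighted sum of linearised R, G and B, per WCAG 2.1
    r, g, b = (_linearise(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(colour_a, colour_b):
    # Ratio of lighter to darker luminance, each offset by 0.05
    lighter, darker = sorted(
        (relative_luminance(colour_a), relative_luminance(colour_b)),
        reverse=True,
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background gives the maximum ratio of 21:1;
# WCAG AA requires at least 4.5:1 for normal-size text.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

A quick check such as this can be run over a slide template’s colour palette before materials are released.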

For more details on each of these areas, please see the ‘Top Tips’ document provided by SAILS. To support colleagues in ensuring their written content is accessible, the Hemingway App will check written text for accessibility issues. Within Canvas, information on using the accessibility checker is available in the Canvas Essentials training material, under ‘Checking and Publishing Pages’.

Alison Braddock and Michael Draper, from the University’s SAILS team, have also recorded an invaluable podcast, on inclusivity and accessibility, which is available to all colleagues.

Fundamentally, colleagues are encouraged to embed accessibility in their learning and teaching from the start. This ensures effective practice is at the forefront of everything we do, reduces future workloads and, most importantly, reduces anxiety and issues for all stakeholders who access the resources. One inclusive approach is to give students an opportunity to give feedback on your materials, both prior to release and, if required, through ongoing dialogue.

In the following sections, you will be able to find information about both inclusive assessment design and, if required, developing alternative assessments which still deliver the relevant Learning Outcomes and maintain competence standards. The University’s ‘Module Coordinators guide to the current process and procedure for setting Alternative Assessment‘ offers effective practice approaches to delivering inclusive alternative assessments.


Inclusive Assessment Design

Whilst this Code of Practice will outline developing alternative assessments and their design in subsequent sections, the most effective practice is to ensure inclusive assessment design is evident from the outset of any module development. However, where students have very specific needs, even with the most inclusive assessments, some adjusted or alternative assessment may still be required. This should be designed wherever possible at the same time as the ‘standard’ assessment mode, in line with the anticipatory duty outlined in the Equality Act 2010. This will ensure a reduced workload for colleagues and a positive learning and teaching experience for students in the longer term.

In the ‘Engage in Assessment’ (2015) report, the University of Reading highlights:

Effective assessment design requires you to establish exactly what you are trying to achieve in a particular type of assessment. You may find the following ‘trigger’ questions useful for this:

  • Why am I assessing?
  • What exactly am I trying to assess?
  • How am I assessing my students?
  • Who is best placed to do the assessing?
  • When should I assess my students?

JISC’s ‘Transforming Assessment and Feedback with Technology (2015)’ report highlights six fundamental elements to an engaging learning experience:

Good design should make the assessment experience inspiring and motivating for both students and staff. It should create a positive climate that encourages interaction and dialogue. Assessment should appear relevant and authentic and wherever possible allow students to draw on their personal experience and to exercise choice with regard to topics, format and timing of assessment. 

In the article ‘Online Education and Authentic Assessment’, Harrison (2020) outlines that engaging students through authentic assessments, which allow them to demonstrate knowledge rather than sit high-stakes exams, is an effective way to minimise cheating. Additionally, Imperial College’s ‘Assessment Design Toolkit’ highlights the importance of assessment design and feedback being intrinsically linked across the programme, stating:

Perhaps more importantly a well aligned piece of teaching is also more likely to result in a positive learning experience for the student and be easier for the teacher to manage and deliver.

A critical element of any assessment design is to ensure it remains inclusive through regular review of the assessment itself. The Quality Code for UK Higher Education – Assessment – Guiding Principles can help colleagues understand how to approve assessments and design them in a way which meets inclusivity and QAA standards. Heriot-Watt’s document ‘Mapping to the UK Quality Code’ also provides an effective practice resource which can help colleagues reflect on how they might review their assessment processes.

Information on a wide range of alternative assessments is available from the University’s ‘Alternative Assessments’ description guide, which includes a broader (but non-exhaustive) list of common assessment definitions.

What is ‘Inclusive assessment design’?

Fundamentally, inclusive assessment design is focused on meeting the needs of the diverse nature of the student body. The JISC guide ‘Transforming assessment and feedback with technology’ includes guidance on inclusive assessment.

Within the guide, JISC identify inclusive practice elements as:

  • Ensuring that an assessment strategy includes a range of assessment formats
  • Ensuring assessment methods are culturally inclusive
  • Considering religious observances when setting deadlines
  • Considering school holidays and the impact on students with childcare responsibilities when setting deadlines
  • Considering students’ educational background and providing support for unfamiliar activities
  • Considering the needs of students with disabilities (the JISC guide on making assessments accessible can help with this)

Does inclusive assessment design work?

Whilst many resources advocate inclusive assessment, the University of Plymouth’s ‘Inclusive Assessment’ resource provides an effective practice repository of both students and academics talking about their experiences of inclusive assessment and giving advice on approaches. In particular, the assessment design video provides a brief example of how the University has adapted its approaches to meet the needs of its diverse student body.

Are there examples of inclusive assessment design at Swansea University?

Whilst there are many examples of inclusive approaches used at Swansea, one example is that of Dr Miguel Lurgi, in Biosciences, who has developed an inclusive approach to assessment in the modules he has recently taken leadership of. In the video below, Miguel talks about how, by offering students a choice of assessment approaches for the same assignment, students are able to select the approach which best suits their individual needs, playing an important part in the overall learning outcomes for the course and programme.


Alternative Assessments

BYU Centre for Learning & Teaching defines alternative assessment as:

‘Performance tests or authentic assessments, used to determine what students can and cannot do, in contrast to what they do or do not know. In other words, an alternative assessment measures applied proficiency more than it measures knowledge’.

Alternative assessments are designed to be more than traditional examinations focused on memory recall, moving instead towards assessing the skills required in a professional setting. Therefore, when setting an assessment, it is critical to evaluate the learning outcomes and professional skills required.

How can I make sure my alternative assessments are effective and comparable with ‘standard’ assessment approaches?

Beyond the resources outlined in this section, there are numerous effective practice approaches across the sector. One such resource is the ‘7 approaches to alternative assessment’ (2019), by Denise Pope which offers some key considerations to use when enhancing the learning and teaching experience for students.

Research by Manchester Institute of Education, which is highlighted in the Luminate – Busting Myths on Alternative Assessment, shows how effective they can be.


Design of Assessment which Promotes Academic Integrity

York University, through ‘Designing Assessments that foster academic integrity’ also offers key considerations for assessment design. Additionally, as the University moves to more online teaching provisions, the ‘Transition to Online Teaching and Assessment’ Canvas module, delivered by Dr Phil Newton and Dr Jo Berry, provides effective practice tips on a variety of assessment design and delivery.

How do I design my on-line assessments to ensure academic integrity?

Charles Sturt University (2020) defined academic integrity as:

‘Acting with honesty, fairness and responsibility in learning, teaching and research. It involves observing and maintaining ethical standards in all aspects of academic work’.

Considering the changing ways of Higher Education delivery, the Quality Assurance Agency, in the ‘Assessing with integrity in digital delivery’ report, outlines:

Academic integrity should always be at the heart of institutional approaches to quality and standards, and has particular importance now for ensuring that:

  • all students are treated fairly and equally
  • students can learn and benefit from the process of learning
  • processes are transparent
  • all staff understand and follow these processes
  • the standards and value of academic awards are maintained.

Whether long-term or not, Covid-19 refocused Higher Education learning and teaching provision, including assessment, towards online delivery. This has demanded further consideration of assessment-related topics such as academic integrity.

Egan (2018), in ‘Improving academic integrity through assessment design’, highlighted the importance of:

‘Personalising assessments to decrease instances of academic dishonesty while promoting student engagement… Also, providing timely feedback for students as a means of fostering integrity-based skills that can transfer to other contexts, recognising that feedback goes beyond the one-way transmission model from teacher to student and instead conceptualises feedback as a dialogic process’.

How can I ensure, particularly in a technological age, students evidence academic integrity?

The University’s Centre for Academic Success has implemented a ‘Skills for Learning, Skills for Life’ Canvas Module to support students with their integration into student life. Section 3 focuses on academic integrity and acts as an effective practice approach for ensuring students are informed of what academic integrity entails.

With the University’s Canvas implementation, this presents new challenges and considerations for Colleges. However, the University of Sydney has provided a ‘Practical assessment strategies to prevent plagiarism’ as part of their own Canvas integration, as an effective practice resource. The guide forms part of a wider, publicly accessible Canvas module on ‘Assessment design and educational integrity’.  

Within the resource, reasons for student plagiarism are listed as:

  • They do not think it is dishonest to paraphrase a person’s work (Franklyn-Stokes & Newstead, 1995)
  • They do not know how to reference appropriately in the relevant discipline (Wilhoit, 1994; Owens & White, 2013)
  • They have not developed sufficient skills in reading and taking notes from sources (Wilhoit, 1994)
  • They have not developed sufficient skills in academic writing, and specifically, using an ‘original voice’ and ‘authorial identity’ (Owens & White, 2013)
  • They lack confidence in their ability to express an argument using an ‘original voice’
  • They are anxious and under stress because of study workload and/or financial pressures, and feel they must plagiarise to succeed in their studies (Ashworth, Bannister & Thorne, 1997; Zobel & Hamilton, 2002)
  • They are seeking ways to expend the least amount of effort in obtaining high grades in assessment tasks (Wilhoit, 1994)
  • They perceive that the assessment task is trivial and/or irrelevant (Ashworth, Bannister & Thorne, 1997), and/or
  • They perceive that academic dishonesty by other students is condoned by staff, and there are no consequences (McCabe, Trevino & Butterfield, 2001).

When studying integrity issues such as plagiarism, it is important to consider the cultural elements highlighted by Franklyn-Stokes & Newstead (1995) and use these as a basis for educating students on UK perceptions. Campbell (2017), in ‘Cultural Differences in Plagiarism’, highlights some of the cultural perceptions of plagiarism and how they may affect the way a student believes they can or should approach their work. Other considerations which may impact on academic integrity were highlighted in Lodhia’s (2018) article ‘More University students are cheating – but it is not because they are lazy’, which states:

What is abundantly clear is that there is a wide variety of reasons why one might choose to cheat in an exam or on coursework and while those people who choose to do so should not be applauded for their decision, more work needs to be done to eradicate the causes behind the cheating rather than to merely dismiss it as students being lazy.

How do I ensure academic integrity when using technology to assess?

The Quality Assurance Agency, in their ‘Assessing with integrity in digital delivery’ guide outline examples of available technology to support academic integrity:

  • Text matching software – this can identify copied text and indicate plagiarism.
  • Analytics software – this can be used to determine whether there have been multiple authors of work submitted by the same student, or if there are significant variations in writing style from different pieces of work.
  • Remote invigilation (such as webcams or facial recognition software).

The University’s Canvas Digital Learning Platform is integrated with the existing submission portal, TurnItIn, and can carry out text-matching and analytical processes. Additionally, Canvas has ‘Lockdown Browser’ functionality which, through integrated software, ensures students cannot access additional materials via their device during an online assessment and tracks their activities via a webcam.

Although the Canvas Digital Learning Platform is a new platform, much of the existing functionality in relation to similarity checking is comparable to TurnItIn, and within Canvas staff can still choose for students to submit via TurnItIn. The TurnItIn similarity-checking system instantly checks student work, using pattern-recognition algorithms, against a database containing 45+ billion web pages, 337+ million student papers and 130+ million articles from academic books and publications. To support the use of TurnItIn and promote academic integrity, TurnItIn provides an ‘Enhance Academic Skills’ guide which acts as an effective practice resource for students.
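The underlying text-matching principle can be illustrated with a toy example: break each document into overlapping word sequences (n-grams) and measure how many of a submission’s sequences also appear in a source. This is a deliberately simplified sketch for intuition only, not TurnItIn’s actual algorithm, and the function names are invented for illustration.

```python
def word_ngrams(text, n=3):
    # Break text into overlapping n-word sequences for matching
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(submission, source, n=3):
    # Fraction of the submission's n-grams that also appear in the source
    sub, src = word_ngrams(submission, n), word_ngrams(source, n)
    return len(sub & src) / len(sub) if sub else 0.0

original = "assessment is the senior partner in learning and teaching"
copied = "assessment is the senior partner in learning and teaching today"
# A lightly edited copy still shares most of its three-word sequences
print(similarity(copied, original))
```

Production systems scale the same idea with indexing, fingerprinting and paraphrase-tolerant matching, which is why a similarity score should always be interpreted by a marker rather than treated as a verdict.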


Role of Students in Assessment Design

The QAA guiding principles, in relation to expectations and practices for assessment, advocate a holistic approach, transparency, and an approach which ensures students are supported in assessment preparation, so that academic integrity is promoted and encouraged. Therefore, colleagues may want to consider approaches which involve student consultation within assessment design, as well as a redesign of approaches to encourage ‘efficient and manageable’ assessments, as outlined in the QAA guiding principles.

The principles outlined are supported by resources such as Getting Smart’s (2018) report ’The Student role in formative assessment’ which states: 

‘It is critical that students understand their role in a formative assessment classroom environment. Student behaviours include: engaging with learning goals, developing success criteria, providing feedback to peers, receiving feedback from teachers and peers, and more’.

Additionally, the Institute of Learning & Teaching in Higher Education, in ‘The role of the student: Assessment literacy and learning empowerment’ (2017), highlights that dialogue between staff and students is valuable: it offers a wide range of perspectives from both an educational and a professional viewpoint and allows students to understand their role in their learning and development. The report also advocates student engagement with assessment parameters, including choosing a topic, identifying requirements and deciding how to approach a particular assessment. Advance HE, in the ‘Engagement through Partnership’ report (2014), stressed the importance of engaging students in all areas of learning and teaching. Highlighting mutually beneficial partnership research by Jarvis, Dickerson & Stockwell (2014), where all parties gain from collaboration, they stated:

[Staff-student] partnership in learning and teaching has a significant impact on learning and teaching development and enhancement, learning to learn, raising the profile of research into learning and teaching, and employability skills and attributes.

Currently, the University has a strong student representative community which includes a representative for all undergraduate and postgraduate taught programmes. Therefore, utilising their voice and that of the wider cohort, offers excellent opportunities to both evaluate your assessments, from a learner perspective, and further align them to perceived learning outcomes.


Students' role in the Assessment Process

In order to ensure assessment and feedback are valuable and inclusive, students have a key role to play in the process. Students can contribute to an inclusive environment by:

  • Submitting electronically where possible;
  • Demonstrating they have achieved academic (and, where appropriate, professional) standards by completing assessments;
  • Meeting professional and ethical standards appropriate to the subject;
  • Informing the University of any medical or other reasonable adjustments requiring modification to assessments at the start of each academic year, or as soon as possible;
  • Complying with all University regulations, including those on academic integrity.

There are also opportunities to further promote inclusivity by engaging students in the design of assessments they may undertake and in the assessing of both their own and their peers’ work. Leslie & Gorman (2017) outlined that ‘Student engagement is vital in enhancing the student experience and encouraging deeper learning. Involving students in the design of assessment criteria is one way in which to increase student engagement’.

Therefore, involving students in the design of assessment, and in assessing itself, promotes student engagement and, by revealing how students believe their skills can best be evaluated, helps them engage with their learning and meet the intended outcomes of the module. In regard to the assessment of work, Race (2013) outlines:

‘Nothing affects students more than assessment, yet they often claim to be in the dark as to what goes on in the minds of their assessors and examiners. Involving students in peer and self-assessment can let them in to the assessment culture they must survive in’.


Role of Technology in Assessment

The ‘Assessment as a driver for institutional transformation’ briefing paper found:

Technology has a dual role. It helps facilitate self-assessment and supportive social and peer processes, by providing students with familiar tools and flexible ways of interacting with each other and with learning resources. Technology also supports teachers by providing them with the ability to monitor group interactions as they happen online, and to intervene to clear up misunderstandings when required, but without providing unnecessary feedback or dominating discussions.

CASE STUDY – REAP

Within the same briefing paper, REAP (2010) highlighted the two case studies below as effective practice in utilising technology to enhance the learning and teaching experience for both staff and students:

In one first year Psychology class, a single teacher was able to organise rich and regular feedback to 560 students on a series of online essay writing tasks. This resulted in an increase in mean exam marks (from 51.1% to 57.4%) with some students producing work at second- and third-year standard.

In another Engineering first-year class with 250 students, teachers were able to cut homework marking in half (a saving of 102 hours) by encouraging students to engage in self-assessment using an online homework system without any drop in exam performance. The time saved was used to increase personal tutor-student contact. These examples were effective because the sources of feedback were extended beyond the teacher, through planned and carefully structured learning tasks.

With the implementation of the Canvas Digital Learning Platform, at Swansea University, the opportunities to utilise elements such as student groups, peer feedback and collaborative practices are numerous. For information, and training materials on utilising Canvas in your teaching delivery, please visit the Canvas Essentials learning resource.


Marking and Moderation of Assessment

As with all processes relating to learning & teaching, all staff should ensure the marking process for any assessment is both inclusive and equal. The University’s Policy on Moderation provides guidance on the responsibilities of all staff in regard to the marking of assessments. In addition to ensuring students understand the marking criteria they are working to, through in-class information and the assessment rubric, it is also important to ensure moderation of assessment takes place in a rigorous, consistent and transparent manner.

Moderation of Assessment

The University’s Assessment and Feedback Policy outlines moderation as:

‘a process separate from the marking of assessments, which ensures that assessment outcomes (marks) are fair, valid and reliable, that assessment criteria have been applied consistently, that any differences in academic judgement between individual markers can be acknowledged and addressed and that feedback is of consistently high quality’.

Moderation is not a form of second or double marking and is not about making changes to an individual student’s marks. A moderator’s job is to ask whether the marks awarded are justified (and supported by comments made on the assessment) and to guard against bunching of marks in one classification (indicating that the full range of marks has not been used). The moderator may decide to:

  • Confirm all marks
  • Raise or lower all marks
  • Move a boundary (e.g. put all high 2:2s into the 2:1 classification)
  • Make an adjustment to a particular class of marks (e.g. raise all First-Class marks, lower all Third-Class marks).

Moderators do not make comments on individual pieces of assessment, but rather make overall comments on the sample, the marking and any recommended changes. Moderation must be evidenced and recorded.

Specific assessments defined by the College/School may need to be double marked, and the established process for agreeing marks, as specified in the Swansea University Policy on Moderation, must be followed. Second or double marking is normally only required where Professional, Statutory or Regulatory Bodies demand it, or where issues with marking consistency have been identified through moderation.

The University provides Colleges/Schools with a recommended template to support the consistent recording of moderation processes and outcomes. Completed moderation will be made available to External Examiners so that they have clear information on the marks awarded and the moderation process in order to be able to confirm the maintenance of academic standards.

Moderation practices should:

  • seek to ensure accuracy and fairness;
  • be appropriate and acceptable to the discipline being taught;
  • be suitable to the material being assessed;
  • be suitable to the means of assessment being used;
  • be clearly evidenced in the feedback provided to students. The External Examiner will need to refer to this in undertaking his/her role.

As part of the moderation process, the University expects all assessments to be moderated in one of the following five ways:

Universal Double Blind Marking of the whole cohort 

The first marker makes no notes of any kind on the work being marked and the second marker examines the submission as it was submitted by the student. Both examiners record their marks and feedback separately and then compare marks and resolve differences to produce an agreed mark.

Universal Non-Blind Double Marking of the whole cohort

The first marker provides feedback for the student on the assessment, and the second marker assesses the work with this information known. No actual marks are disclosed to the second marker; alternatively, marks are, for example, written on the back cover of an examination book.

Moderation of the entire cohort as Check or Audit

The first marker provides feedback for the student and awards a mark. The role of the second marker is to check that first marking has been carried out correctly.

Moderation by sampling of the cohort

The second marker samples work already first marked, with feedback for students and marks attached, in order to check overall standards. This may be used where first markers are less experienced, where there are several first markers and consistency may be a problem or where unusual patterns of performance are expected or observed.

Partial Moderation

Any of the above may be applied to particular types of marks e.g., fails, firsts, or borderlines.

As a minimum, moderation will be conducted for all assessments at all levels, including (but not limited to):

  • Examinations
  • Continuous Assessment
  • Laboratory or practical work
  • Dissertations and Projects

Moderation may, as a minimum, apply to a sample of each assessment element, including all failed work (including tolerated failures) and work close to the borderline for tolerated failures (30% for undergraduate modules and 40% for taught postgraduate modules). 

Assessment methods that are automated (i.e. the answers are machine or optically read), or quantitative assessments in which model answers are provided to the marker, are exempt from moderation.

Samples for moderation must include, as a minimum:

  • 10% of assessments overall

and

  • all failures,

including:

  • 5% from the First/Distinction range, including borderlines;
  • 5% (total) from the remaining classification bands, with a focus on work at borderlines.

For Modules with small numbers (i.e. less than 30 students), all assessments will be moderated.
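As an illustration, the sampling minima above can be expressed as a short routine. This is a hypothetical sketch only (the function name, the borderline-first ordering heuristics and the data structures are assumptions, not part of the Policy on Moderation):

```python
import math

def moderation_sample(marks, pass_mark=40, first_boundary=70):
    """Hypothetical sketch of the sampling minima: all failures, roughly 5%
    from the First/Distinction range, roughly 5% from the remaining bands
    (borderlines first), topped up to at least 10% overall; cohorts under
    30 are moderated in full."""
    n = len(marks)
    if n < 30:                        # small cohorts: moderate everything
        return list(range(n))
    failures = [i for i, m in enumerate(marks) if m < pass_mark]
    # First/Distinction range, lowest (borderline) marks first
    firsts = sorted((i for i, m in enumerate(marks) if m >= first_boundary),
                    key=lambda i: marks[i])
    # remaining bands, marks closest to the next boundary first
    middles = sorted((i for i, m in enumerate(marks)
                      if pass_mark <= m < first_boundary),
                     key=lambda i: first_boundary - marks[i])
    sample = set(failures)
    sample.update(firsts[:math.ceil(0.05 * n)])
    sample.update(middles[:math.ceil(0.05 * n)])
    for i in range(n):                # top up to the 10% overall minimum
        if len(sample) >= math.ceil(0.10 * n):
            break
        sample.add(i)
    return sorted(sample)
```

In practice the sample must also be representative of all markers, which a simple mark-based selection like this does not capture.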

The sample will be representative of all markers involved in marking the component. Discretion in sampling may be exercised where there are:

  • Modules with large numbers, where Module Leaders may wish to discuss the content of the sample with the relevant Programme Director and/or Moderator.
  • Requirements imposed by Professional, Statutory or Regulatory bodies

The University provides Colleges/Schools with an agreed template to ensure the consistent recording of moderation processes and outcomes. Moderation forms will be made available to External Examiners so that they have clear information on the marks awarded and the moderation process in order to be able to confirm the maintenance of academic standards.

Staff may therefore wish to vary their moderation approaches to align with the nature of the work being assessed, but it is important to review these processes on an ongoing basis and, where required, adjust them accordingly.

How do I approach the marking and moderation process in Canvas?

Since August 2020, Swansea University has adopted Canvas as its Digital Learning Platform. TurnItIn remains integrated with Canvas and, as such, there should be no change to marking and moderation practices when using TurnItIn. For assessments which do not require similarity checks, colleagues can choose for students to submit directly to Canvas’ SpeedGrader platform. Whilst the fundamentals of marking and moderation remain the same, colleagues can take the SpeedGrader Overview Training to familiarise themselves with the platform functionality.

How can I ensure my moderation processes are effective?

As identified by Squire (2013) in ‘Understanding moderation’, the main functions of moderation should be:

  • To verify that assessments are fair, valid, reliable and practicable
  • To identify the need to redesign assessments if required
  • To provide an appeals procedure for dissatisfied learners
  • To evaluate the performance of assessors
  • To provide procedures for the de-registration of unsatisfactory assessors
  • To provide feedback to the NSBs on unit standards and qualifications

At Swansea University, the ‘Policy on Moderation (including double marking)’ outlines the following:

There should be clear evidence that moderation has taken place, in the form of clear feedback for students by both markers, either added directly to the piece of assessed work or by means of a separate feedback sheet.

Colleges should bear in mind the information needed to provide assurance that quality standards are maintained under the schemes of moderation which they use. College Learning and Teaching Committees should ensure that suitable monitoring of moderation takes place.

Colleges should ensure that the selection of markers meets acceptable teaching quality standards, including where postgraduate students are used to mark (see the Guide to the Employment of Research Students). Academic Regulations exist to assure academic standards for Swansea University’s portfolio of programmes and awards, but also to ensure that all students are treated fairly. When using Postgraduate Research Students for academic work such as marking, colleagues should ensure these students are given relevant training. Additionally, there is broad agreement between UK sponsors on subject-related paid employment (mainly teaching and demonstrating). This permits employment, with the express permission of the supervisors, to a normal maximum of 6 hours in the working week (9 to 5, Monday to Friday). In most cases the annual maximum will be 180 hours per year.

Moderation practices should be determined for the assessment(s) within a given module and should be discipline-specific. The College which approves a module should also approve its moderation arrangements. Double marking is therefore not always required, but it is critical that evaluation, and moderation, of marking practices takes place.

What happens if markers cannot agree on a final mark?

The University’s approach to the resolution of differences between markers is as follows:

  • discussion and negotiation between the two markers on all differences;
  • discussion and negotiation between the markers on specified differences e.g., for relatively large differences, fails, firsts, borderlines or differences across degree classes. If a size criterion is used its value or range of values should be agreed and specified;
  • taking the mean of different marks: this may be done for all differences, for relatively small differences or differences within a degree class, or where both marks are clearly above or below the pass‑fail line or above or below limits for compensation. It is recommended that where differences straddle critical boundaries the differences should be settled by discussion and negotiation;
  • resort to a third marker. This should be an additional internal examiner.

Differences between markers cannot be left unresolved. The External Examiner should not be called upon to resolve differences.
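By way of illustration only, the resolution routes above might be sketched as follows. The `small_diff` size criterion here is an assumed placeholder, since the policy requires any such value to be agreed and specified locally:

```python
def resolve_marks(mark1, mark2, small_diff=5, pass_mark=40):
    """Illustrative sketch of the resolution routes: identical marks stand;
    small differences that do not straddle the pass/fail line may be
    averaged; everything else goes to discussion and negotiation, with a
    third internal marker if unresolved."""
    diff = abs(mark1 - mark2)
    straddles_pass = (mark1 < pass_mark) != (mark2 < pass_mark)
    if diff == 0:
        return mark1, "agreed"
    # small difference, no critical boundary straddled: take the mean
    if diff <= small_diff and not straddles_pass:
        return round((mark1 + mark2) / 2), "mean of the two marks"
    # otherwise: discussion/negotiation; never the External Examiner
    return None, "discussion/negotiation (third internal marker if unresolved)"
```

Differences straddling other critical boundaries (compensation limits, degree classes) would be handled the same way as the pass/fail case in a fuller treatment.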

The moderator will often share the same view on the work they have seen and agree that the marks and feedback will stand without adjustment. On occasion, some discussion is required and marks are agreed based on a negotiated outcome. In the rare cases where agreement cannot be reached, the matter must be brought to the attention of the relevant Programme Director or Director of Learning and Teaching, who may decide on further action such as additional moderation or marking.

Are there support tools available to facilitate peer-moderated marking?

Peer-moderated marking is based on the principle of students reviewing the work of their peers and, against clearly defined assessment criteria, providing a mark for that work. It is predominantly used for formative assessment but, in certain scenarios, can play an important role in summative approaches. To assist with peer-moderated marking, the WebPA Peer Assessment Tool is available; the software is fully integrated with the University’s Canvas virtual learning environment.

What are the benefits of peer-moderated marking?

There are several benefits to facilitating peer assessment, including:

  • It addresses the unfairness of assigning a single mark to every member of a group project: students commonly complain that their contributions are not given the credit they deserve, and that group members who did not pull their weight receive the same marks as those who contributed far more.
  • It allows academics to better grade a student’s abilities against a range of key skills.
  • Peer review prompts students to assess their own, and others’, abilities.

Peer-moderated marking offers an engagement mechanism for students by encouraging them to understand what is required to achieve a particular standard. Through understanding assessment criteria, the student is able to reflect on both their work and the work of their peers, and use this process as both an academic and professional learning tool.
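To illustrate the underlying idea, a WebPA-style weighting scheme can be sketched as below. This is an assumption-laden sketch, not the WebPA algorithm as configured at the University; the function and example names are hypothetical:

```python
def peer_moderated_marks(group_mark, ratings):
    """Illustrative WebPA-style weighting (the exact algorithm depends on
    the tool and its configuration). `ratings` maps each rater to the
    scores they gave every group member, e.g.
    {"Ann": {"Ann": 3, "Ben": 5}, "Ben": {...}}. Each member's share of
    the normalised peer scores scales the shared group mark."""
    members = list(ratings)
    shares = {m: 0.0 for m in members}
    for scores in ratings.values():
        given = sum(scores.values())
        for member, score in scores.items():
            shares[member] += score / given   # each rater distributes a total of 1
    mean_share = sum(shares.values()) / len(members)
    return {m: round(group_mark * shares[m] / mean_share, 1) for m in members}
```

When everyone rates everyone equally, each member simply receives the group mark; unequal ratings shift marks towards those judged to have contributed more.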

How can I ensure both staff and students understand the marking criteria/standards?

Assessment rubrics provide clear indicators of assessment and achievement criteria across all the components of any kind of student work, from written to oral to visual. They can be used for marking assignments, class participation or overall grades, and can form part of feedback.

Subject specific online assessment rubrics will be made available to students when the assessments are set at the beginning of a session, and then revisited at key points throughout the year.


Importance of Varying Assessment Approaches

Providing students with a variety of assessment methods allows them to demonstrate their ability using different types of learning, at least some of the time. Research evidence suggests that everyone is different in the way they learn and how they convey an understanding of what has been learnt (Biggs, 1999), making the use of a variety of assessment methods an increasingly important part of learning and development.

Each assessment method has a range of advantages and disadvantages. By adopting a variety of assessment methods within modules and programmes, there is an opportunity to even out disadvantages and maximise the reliability of assessment overall.

Another key benefit of developing a variety of assessment is that it supports inclusive learning and helps meet the requirements of the Equality Act (2010). By offering a variety of assessment, the University reflects the diversity of its student body.

The Quality Assurance Agency’s UK Quality Code for Higher Education states:

‘Through inclusive design wherever possible, and through individual reasonable adjustments wherever required, assessment tasks provide every student with an equal opportunity to demonstrate their achievement’.

Therefore, through maximising the opportunity for every student to reach their potential, the University can evidence a commitment to equality of both opportunity and achievement.


Ensuring Effective Balance of Assessment(s)

Assessment load must be balanced and appropriate for the subject area. When setting assessment, consideration should be given to how this reflects the credit weighting of the module, and how the assessment type works within the overall assessment strategy of the module, including its link to the learning outcomes, to avoid over-assessing students.

The tables below provide indicative comparative assessment load for a typical taught module. Whilst they outline examples of assessment load, balance should focus on the contribution and weighting an element or module makes to the overall award. Therefore, when designing an assessment, learning outcomes should be a key element of consideration. Colleagues may also want to consider how both formative and summative assessments can play a role in delivery.

Assessment equivalency tables have traditionally focused on word-count equivalence as a workload indicator. The University of Wales Trinity Saint David (UWTSD) provides a useful example (tables 1 and 2), where the modes of assessment are broadly equivalent to the word length required for written assessments in a 20-credit module (NB: this is not intended to be an exhaustive list of modes of assessment).

Table 1

| Assessment | Unit | Weight | Level 4 | Level 5 | Level 6 | Level 7 |
| --- | --- | --- | --- | --- | --- | --- |
| Written assessment* | Word Length | 100% | 3000–5000 | 4000–5000 | 5000–6000 | 5000–6000 |
| Written assessment* | Word Length | 50% | 1500–2500 | 2000–2500 | 2500–3000 | 2500–3000 |

(Values show the minimum–maximum for each level.)

*Written assessment may include any of the following: Written Assignment/Essay, Research Project/Dissertation, Process Workbooks/Journals, Written Report, Lab Book, Academic Review, Blog/Wiki, and Placement Report.

Table 2

| Assessment Type | Unit of Assessment | Weight | Level 4 | Level 5 | Level 6 | Level 7 |
| --- | --- | --- | --- | --- | --- | --- |
| Examination (seen/unseen) | Duration (hours) | 50% | 1–2 | 2–3 | 2–3 | 2–3 |
| Portfolio* | Elements | 50% | 2–3 | 3–4 | 4–5 | 4–5 |
| Blogs | Minimum entries | 50% | 2–3 | 3–4 | 4–5 | 4–5 |
| Seminar Presentation (individual) | Duration (minutes) | 50% | 10–15 | 15–20 | 15–20 | 15–20 |
| Seminar Presentation (individual) | Duration (minutes) | 25% | 5–8 | 8–10 | 8–10 | 8–10 |
| Seminar Presentation (group) | Duration per person (minutes) | 50% | 5–10 | 10–15 | 10–15 | 10–15 |
| Seminar Presentation (group) | Duration per person (minutes) | 25% | 3–5 | 5–8 | 5–8 | 5–8 |
| Video (individual) | Duration (minutes) | 50% | 3–5 | 5–10 | 10–15 | 10–15 |
| Video (individual) | Duration (minutes) | 25% | 2–3 | 3–5 | 5–8 | 5–8 |
| Video (group) | Duration per person (minutes) | 50% | 2–4 | 4–8 | 8–10 | 8–10 |
| Video (group) | Duration per person (minutes) | 25% | 1–2 | 2–4 | 4–5 | 4–5 |
| Practical Performance Project | Duration per person (minutes) | 50% | 3–5 | 3–5 | Extended performance project, dependent on context | Extended performance project, dependent on context |
| Academic Interview | Duration (minutes) | 50% | 15–20 | 20–25 | 25–30 | 25–30 |

(Values show the minimum–maximum for each level.)

*Portfolio: elements might refer to various activities, abstracts, poems, literature reviews, sketch book, pieces of art, number of project briefs, and stages in the creation of a single artefact or activity.
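For colleagues who maintain such equivalencies programmatically (for example, when checking a proposed assessment against an agreed baseline), the tables above can be captured as a simple lookup. The structure and names below are illustrative assumptions only, populated with a subset of the UWTSD values:

```python
# Hypothetical encoding of the equivalency tables above:
# (assessment, weight %) -> {level: (min, max)} in that assessment's unit.
EQUIVALENCE = {
    ("written assessment (words)", 100): {4: (3000, 5000), 5: (4000, 5000),
                                          6: (5000, 6000), 7: (5000, 6000)},
    ("written assessment (words)", 50):  {4: (1500, 2500), 5: (2000, 2500),
                                          6: (2500, 3000), 7: (2500, 3000)},
    ("examination (hours)", 50):         {4: (1, 2), 5: (2, 3),
                                          6: (2, 3), 7: (2, 3)},
}

def within_equivalence(assessment, weight, level, value):
    """True if `value` falls within the indicative min-max band."""
    lo, hi = EQUIVALENCE[(assessment, weight)][level]
    return lo <= value <= hi
```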

In-line with the University’s ‘Swansea Graduate’ objectives, colleagues should aim to ensure assessment not only meets the desired learning outcomes but also evidences authentic assessment principles, where possible.


Authentic Assessment

Gulikers, Bastiaens & Kirschner (2004) define authentic assessment as: ‘An assessment requiring students to use the same competencies, or combinations of knowledge, skills, and attitudes that they need to apply in the criterion situation in professional life’.

Therefore, the principle of authentic assessment is to evaluate student knowledge or skill, in a way which intertwines both academic and professional requirements, in a real-world context. 

How could I implement authentic assessment principles? 

The assessment types table outlines the strengths and developmental areas of a variety of assessment approaches. Although there may not be a suitable approach for all, dependent on the subject area, there may be an assessment type which is most relevant to creating a ‘real-world’ scenario.

The ‘Objective Structured Clinical Examination (OSCE)’ or ‘Objective Structured Skills Examination (OSSE)’ is an approach which can facilitate these ‘real-world’ assessment approaches.   

Harden, Lilley and Patricio (2016) focus on clinical settings and describe the approach as ‘performance-based examination in which examinees are observed and scored as they rotate around a series of stations. Each station focuses on an element of (clinical) competence, and the learner’s performance (with a real patient), is assessed’.

Whilst Harden, Lilley and Patricio outline the approach in a clinical setting, where it has predominantly been applied, and the OSCE has been prevalent within healthcare assessment since the mid-1970s, the principles can be adapted to a variety of other subject areas where the evidencing of specific professional knowledge or skills is required.

OSCE/OSSE Approaches

The principles of OSCE/OSSE allow students to embed their academic understanding within the professional skills they require for their chosen subject area. These approaches help students enhance their employability skills and meet the objectives of the ‘Swansea Graduate’.

CASE STUDY

The use of OSCE plays an important role in the professional development of midwifery students, at Swansea University, and is particularly prevalent in final year skills assessments.

Building on exposure to the approach in the first two years, student learning is facilitated through a lead lecture followed up with same-day practical sessions. Students have the opportunity to undertake a formative assessment, as well as a second formative mock assessment, before undertaking their summative elements.

In the formative sessions, student learning, from both a professional and an academic perspective, is facilitated through peer-to-peer marking and supplemented by verbal feedback from the lecturer. The summative elements often consist of several assessment stations, with students assessed on two or more elements of an unknown real-world scenario.

When utilising these approaches, students are encouraged, and assisted, in understanding both the learning outcomes and the required professional skills. An effective approach in supporting students in their understanding is through the use of assessment rubrics.


Assessment Rubrics

Assessment rubrics provide clear indicators of assessment and achievement criteria across all the components of any kind of student work, from written to oral to visual. They can be used for marking assignments, class participation or overall grades, and can form part of feedback.

Subject specific online assessment rubrics will be made available to students when the assessments are set at the beginning of a session, and then revisited at key points throughout the year. Swansea Academy of Learning and Teaching (SALT) will provide advice on assessment rubrics which can be used with TurnitIn’s Rubric tool, or used for any other type of electronic marking, and which should form a key part of the feedback process.

Example Assessment Rubrics

Presentation Assessment Rubric

| Presentation | 0–29 Fail | 30–39 Fail | 40–49 3rd | 50–59 2:2 | 60–69 2:1 | 70+ 1st |
| --- | --- | --- | --- | --- | --- | --- |
| Initiative: originality and innovativeness of solution (project solution and approach) | Very poor | Poor | Satisfactory | Good | Very good | Excellent |
| Presentation Structure: relevance of information provided | Very poor | Poor | Satisfactory | Good | Very good | Excellent |
| Presentation Quality: skills in conveying key concepts around the project brief | Very poor | Poor | Satisfactory | Good | Very good | Excellent |
| References: use of suitable sources to support the report | Very poor | Poor | Satisfactory | Good | Very good | Excellent |
| Quality of Research: evidence to support the project | Very poor | Poor | Satisfactory | Good | Very good | Excellent |
| Purpose: rationale behind presentation delivery | Very poor | Poor | Satisfactory | Good | Very good | Excellent |
| Professionalism: dress and demeanour | Very poor | Poor | Satisfactory | Good | Very good | Excellent |
| Overall report quality | Very poor | Poor | Satisfactory | Good | Very good | Excellent |

Proposal Assessment Rubric

| Proposal | 0–29 Fail | 30–39 Fail | 40–49 3rd | 50–59 2:2 | 60–69 2:1 | 70+ 1st |
| --- | --- | --- | --- | --- | --- | --- |
| Creativity and Innovation | Very poor level of creativity and innovation. A lack of initiative in approaching the challenge that the project must address. A lack of willingness to seek practical, numerical, or theoretical solutions independently. | Poor level of creativity and innovation. Little initiative in approaching the challenge that the project must address. Little willingness to seek practical, numerical, or theoretical solutions independently. | Satisfactory level of creativity and innovation. Some initiative in approaching particular problems in the project. Some willingness to seek practical, numerical, or theoretical solutions independently. | Good level of creativity and innovation. Some good initiative in approaching the challenge that the project must address. A willingness to seek practical, numerical, or theoretical solutions independently. | Very good level of creativity and innovation. High levels of initiative in approaching the challenge that the project must address. A willingness to seek practical, numerical, or theoretical solutions independently. | Excellent level of creativity and innovation. Excellent initiative in approaching the challenge that the project must address. Extensive willingness to seek practical, numerical, or theoretical solutions independently. |
| Quality of Presentation | Very poor presentation of document. Insufficient information presented, unprofessional with a significant number of spelling and grammatical errors. Very poor level of readability conveying very few or no key messages, identifying very little or no knowledge and experience, with no effort to combine these. No conclusions drawn. | Poor presentation of document. Very little information presented, unprofessional with many spelling and grammatical errors. Poor level of readability conveying very few key messages, identifying very little knowledge and experience, with little effort to combine these. Little argument with few conclusions drawn. | Satisfactory presentation of document. Satisfactory level of information presented in a professional way with satisfactory spelling and grammar. Satisfactory level of readability conveying a few key messages, identifying satisfactory knowledge and experience, combined in a satisfactory way. Argument unclear in places with satisfactory conclusions drawn. | Good presentation of document. Most information presented in a professional way with good spelling and grammar. Good level of readability conveying some key messages concisely, identifying some knowledge and experience and combining these in order to build an argument and draw good conclusions. | Very good presentation of document. Almost all information presented in a professional way with very good spelling and grammar. Very good level of readability conveying most key messages concisely, identifying most knowledge and experience clearly and combining these in order to build a logical argument and draw very good conclusions. | Excellent presentation of document. Clear and professional in every aspect with excellent spelling and grammar. Excellent level of readability conveying all key messages concisely and clearly, identifying all aspects of knowledge and experience clearly and combining these in order to build a detailed, logical argument and draw excellent, clear conclusions. |
| Quality of Research | Very poor use of sources, lacking in focus to inform the understanding of the challenge and the creation of the proposal. | Poor use of sources with weak focus to inform the understanding of the challenge and the creation of the proposal. | Satisfactory use of sources with satisfactory focus to inform some of the understanding of the challenge and the creation of the proposal. | Good use of sources with good focus to inform the understanding of the challenge and the creation of the proposal. | Very good use of sources with very good focus to inform the understanding of the challenge and the creation of the proposal. | Excellent use of sources with excellent focus to inform the understanding of the challenge and the creation of the proposal in depth. |
| Insightfulness of Analysis | Very poor use of data and very weak or no conclusions drawn informing the proposal. Very weak or no application of business theory. | Poor use of data and weak conclusions drawn informing the proposal. Conclusions may not be drawn from all data. Weak application of business theory. | Satisfactory use of data and some appropriate conclusions drawn informing the proposal. May have weak application of business theory. | Good use of data and most conclusions drawn informing the proposal and appropriate to business, with good application of business theory. | Very good use of data and almost all conclusions drawn informing the proposal and appropriate to business, with very good application of business theory. | Excellent use of data and appropriate conclusions drawn informing the proposal, with excellent application of business theory. |
| Understanding | Understanding of the project provider’s challenge is very poor. Little or no application of the research and practical experience in understanding the challenge or creating the proposal. | Understanding of the project provider’s challenge is poor. Poor application of the research and practical experience in understanding the challenge or creating the proposal. | Some understanding of the project provider’s challenge is demonstrated and some relevant research into the problem evidenced. Satisfactory application of the research and practical experience in creating the proposal. | Good understanding of the project provider’s challenge is demonstrated, based on limited research into the problem. Good application of the research and practical experience in creating the proposal. | Very good understanding of the project provider’s challenge is demonstrated, based on wider research into the problem. Very good application of the research and practical experience in creating the proposal. | Excellent understanding of the project provider’s challenge is demonstrated, based on wider research into the problem. Excellent application of the research and practical experience in creating the proposal. |

Individual Reflection Assessment Rubric

Individual Reflection | 0-29 – Fail | 30-39 – Fail | 40-49 – 3rd | 50-59 – 2.2 | 60-69 – 2.1 | 70+ – First
Skills: Very poor analysis of individual skills and skills within the team | Poor analysis of individual skills and skills within the team | Satisfactory analysis of individual skills and skills within the team | Good analysis of individual skills and skills within the team | Very good analysis of individual skills and skills within the team | Excellent analysis of individual skills and skills within the team
Learning: Very poor analysis of tasks, activities, incidents, issues or approaches to problem solving | Poor analysis of tasks, activities, incidents, issues or approaches to problem solving | Satisfactory analysis of tasks, activities, incidents, issues or approaches to problem solving | Good analysis of tasks, activities, incidents, issues or approaches to problem solving | Very good analysis of tasks, activities, incidents, issues or approaches to problem solving | Excellent analysis of tasks, activities, incidents, issues or approaches to problem solving
Reflective Application: Very poor analysis of changed behaviour and repeated behaviour based on unsuccessful and successful team working | Poor analysis of changed behaviour and repeated behaviour based on unsuccessful and successful team working | Satisfactory analysis of changed behaviour and repeated behaviour based on unsuccessful and successful team working | Good analysis of changed behaviour and repeated behaviour based on unsuccessful and successful team working | Very good analysis of changed behaviour and repeated behaviour based on unsuccessful and successful team working | Excellent analysis of changed behaviour and repeated behaviour based on unsuccessful and successful team working
Literature: Very poor use of appropriate sources | Poor use of appropriate sources | Satisfactory use of appropriate sources | Good use of appropriate sources | Very good use of appropriate sources | Excellent use of appropriate sources
Record Keeping: Very poor record keeping. Little or no detail of meetings, attendance at surgeries, formative feedback received and acted on. | Poor record keeping. Little detail of meetings, attendance at surgeries, formative feedback received and acted on. | Satisfactory record keeping. Satisfactory detail of meetings, attendance at surgeries, formative feedback received and acted on. | Good record keeping. Good detail of meetings, attendance at surgeries, formative feedback received and acted on. | Very good record keeping. Very good detail of meetings, attendance at surgeries, formative feedback received and acted on. | Excellent record keeping. Excellent detail of meetings, attendance at surgeries, formative feedback received and acted on.
Overall: Very poor | Poor | Satisfactory | Good | Very good | Excellent

Business Plan Assessment Rubric

Business Plan | 0-29 – Fail | 30-39 – Fail | 40-49 – 3rd | 50-59 – 2.2 | 60-69 – 2.1 | 70+ – First
Initiative | Business idea or innovation in approach is very poor | Business idea or innovation in approach is poor | Business idea or innovation in approach is satisfactory | Business idea or innovation in approach is good | Business idea or innovation in approach is very good | Business idea or innovation in approach is excellent
Structure | Very poor structure not addressing any of the areas listed, with very little or none relevant to the particular new venture | Poor structure not addressing the basic areas listed, with little or none relevant to the particular new venture | Satisfactory structure addressing the basic areas listed, with some relevant to the particular new venture | Good structure addressing most of the key areas listed, and mostly relevant to the particular new venture | Very good structure addressing almost all of the key areas listed above, and almost all relevant to the particular new venture | Excellent structure addressing each of the key areas listed, and all relevant to the particular new venture
Presentation | Very poor presentation of document. Insufficient information presented, unprofessional with a significant number of spelling and grammatical errors | Poor presentation of document. Very little information presented, unprofessional with many spelling and grammatical errors | Satisfactory presentation of document. Some information presented in a professional way with satisfactory spelling and grammar | Good presentation of document. Most information presented in a professional way with good spelling and grammar | Very good presentation of document. Almost all information presented in a professional way with very good spelling and grammar | Excellent presentation of document. Clear and professional in every aspect with excellent spelling and grammar
Writing | Very poor level of readability conveying very few or no key message(s) in each of the key areas | Poor level of readability conveying few key message(s) in each of the key areas | Satisfactory level of readability conveying some key message(s) in each of the key areas | Good level of readability conveying most key message(s) concisely in each of the key areas | Very good level of readability conveying almost all key message(s) concisely in each of the key areas | Excellent level of readability conveying all key message(s) concisely in each of the key areas
Research | Very poor level of primary and secondary research provided | Poor level of primary and secondary research provided | Satisfactory level of primary and secondary research provided | Good level of primary and secondary research provided | Very good level of primary and secondary research provided | Excellent level of primary and secondary research provided
Analysis | Very poor use of data and very weak or no conclusions drawn. Very weak or no application of business theory. | Poor use of data and weak conclusions drawn. Conclusions may not be drawn from all data. Weak application of business theory. | Satisfactory use of data and some appropriate conclusions drawn. May have weak application of business theory. | Good use of data and most conclusions drawn appropriate to business, with good application of business theory. | Very good use of data and almost all conclusions drawn appropriate to business, with very good application of business theory. | Excellent use of data and appropriate conclusions drawn, with excellent application of business theory.
Understanding | Business plan demonstrates very little or no understanding of the key areas | Business plan demonstrates little understanding of the key areas | Business plan demonstrates understanding of some of the key areas | Business plan demonstrates understanding of most of the key areas | Business plan demonstrates understanding of almost all of the key areas | Business plan demonstrates understanding of all of the key areas
References | Very poor use of sources with very weak focus to answer key research aims | Poor use of sources with weak focus to answer key research aims | Satisfactory use of sources with satisfactory focus to answer key research aims | Good use of sources with good focus to answer key research aims | Very good use of sources with very good focus to answer key research aims | Excellent use of sources with excellent focus to answer key research aims
Overall | Very poor | Poor | Satisfactory | Good | Very good | Excellent
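The grade bands used in the rubric headers above follow the standard UK classification scale. As a small illustrative sketch only (not a University marking tool), the mapping from a percentage mark to a band can be expressed as:

```python
def classification_band(mark: int) -> str:
    """Map a percentage mark to the classification bands used in the
    rubrics above (0-39 Fail, 40-49 3rd, 50-59 2.2, 60-69 2.1, 70+ First)."""
    if not 0 <= mark <= 100:
        raise ValueError("mark must be between 0 and 100")
    if mark >= 70:
        return "First"
    if mark >= 60:
        return "2.1"
    if mark >= 50:
        return "2.2"
    if mark >= 40:
        return "3rd"
    return "Fail"  # covers both the 0-29 and 30-39 fail bands

print(classification_band(65))  # 2.1
```

Note the two lowest bands in the rubrics (0-29 and 30-39) both result in a fail; they differ only in the descriptor applied.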

Academic Integrity

Academic integrity reflects a shared set of principles, including honesty, trust, diligence, fairness and respect, and is about maintaining the integrity of a student’s work and their award. Academic integrity is based on the ethos that how you learn is as important as what you learn.

Academic integrity is based upon several core principles. For students, as part of the Student Charter, this means they must take joint responsibility for ensuring they meet integrity aims.

Academic misconduct includes plagiarism; collusion; breach of examination regulations; fabrication of data; impersonation of others or the commissioning of work for assessment (this list is not exhaustive).

More information is available from the University’s Academic Misconduct Policy.

How can I design my assessments to support students in their academic integrity?

It is important that individual staff, Schools, Colleges and the University do as much as possible to ensure students are knowledgeable about the principles which guide academic integrity.

The list below, by Carroll (2002), was adapted by the University of Tasmania (2018), and outlines some ideas to tackle potential academic integrity issues:

  • Change the content or type of assessment task often (e.g., from year to year).
  • Use tasks that require students to reflect, journalise, analyse, or evaluate.
  • Use tasks that require students to integrate / reflect / apply issues to their own context and experience or utilise current/recent events and ‘hot’ topics.
  • Ask students to submit evidence of their information gathering and planning or have staged assessment where students submit partially completed work prior to final submission.
  • Ask students to provide working drafts or incorporate a re-drafting process into the task itself.
  • Use tasks that are interdependent and build upon each other.
  • Tie in the classroom experience – for example:
    • include class discussions in assignments
    • use presentations in class
    • ask students to report, informally or formally, on their assessment work in class

In their article ‘Re-evaluating assessment for academic integrity’, Morris (2018) outlines some effective practice approaches to promoting integrity amongst students. Predominantly focusing on knowledge and assessment design, the article provides some useful tips on how colleagues can develop assessments which are engaging and challenging and which promote academic integrity.
 
How can I support students with academic integrity?

To support students in understanding how they can best ensure they are producing work which meets academic integrity standards, the ‘Academic Success: Skills for Learning, skills for life’ online course is available to complete.

Additionally, the University recommends providing a brief outline of academic integrity during an assessment briefing session or lecture, giving students an opportunity to raise any module-specific questions they may have.

How does Turnitin assist with academic integrity?

Turnitin may be used as a learning tool for formative assessments only. Normally this should be restricted to students who have recently commenced their programme of study at Swansea. Where it is used as a learning tool, guidance should be offered to students about the interpretation of the originality report, with reference to any errors that the student may have made. Where a student is clearly struggling with the principles and practice of appropriate attribution of sources, it is recommended that they are referred to the subject librarian for specific guidance. Academics should also consider referring students to the Centre for Academic Success for more targeted support.

At which point in the marking process should I look at the originality report?

All assignments should be marked anonymously before any reference is made to the percentage match within the Similarity Index. If marking is being undertaken through GradeMark, this information is available to the marker at the time of marking; the marker may refer to and scrutinise the percentage match once marking is complete.

There is an optional facility which keeps user identification anonymous in Turnitin until after a set “post-date” (generally the post-date should be the date on which first marking is completed). If anonymity is used, it is not possible to match a plagiarism report to a paper until after the set post-date. Exceptionally, an instructor or administrator can reveal a report’s identity but will be required to give their own identity, and their reason for doing so, to provide an audit trail should it be needed later.
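The post-date rule described above amounts to a simple date check plus an audited override. The helper below is a hypothetical sketch of that logic for illustration; it is not part of Turnitin's actual interface:

```python
from datetime import date

def author_visible(today: date, post_date: date,
                   override: bool = False, override_reason: str = "") -> bool:
    """Hypothetical sketch of the anonymity rule: author identity stays
    hidden until after the post-date, and an exceptional early reveal
    requires a recorded reason so an audit trail exists."""
    if today > post_date:
        return True  # post-date has passed: identity may be shown
    if override:
        if not override_reason:
            raise ValueError("an exceptional reveal requires a recorded reason")
        return True  # revealed early, with an audit trail
    return False  # still anonymous

# Example: before the post-date, identity stays hidden
print(author_visible(date(2024, 4, 1), date(2024, 5, 1)))  # False
```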

What is an acceptable level of similarity?

As noted above, the Turnitin database is not infallible. It will only cross-reference submitted work against work already in its database, or work which it can access through collaborative agreements. Staff may also want to discuss their suspicions with colleagues or examine other sources if they suspect plagiarism; for example, simple ‘Google’ searches may reveal relevant work.
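To give a rough intuition for what a similarity index measures, the toy function below scores the proportion of a submission's word trigrams that also appear in a source text. This is a simplified sketch for illustration only, not Turnitin's actual matching algorithm:

```python
def trigram_overlap(submission: str, source: str, n: int = 3) -> float:
    """Toy similarity measure: the percentage of the submission's
    word n-grams that also occur in the source text.
    Illustrative only - NOT how Turnitin computes its Similarity Index."""
    def ngrams(text: str) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    sub = ngrams(submission)
    if not sub:
        return 0.0  # submission too short to form any n-grams
    return 100.0 * len(sub & ngrams(source)) / len(sub)

# Identical texts score 100; unrelated texts score 0
print(trigram_overlap("the cat sat on the mat", "the cat sat on the mat"))  # 100.0
```

Even a toy measure like this shows why a raw percentage needs interpretation: correctly quoted and referenced passages still match, which is why academic judgement remains paramount.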

Above all, it must be emphasised that the academic judgment of staff with the relevant subject expertise must be paramount in cases of alleged academic misconduct.

Examples of interpretation and further guidance can be found on the Turnitin User Group area of the Canvas platform.

Where Turnitin is being used as the main means of submission then Colleges/Schools must download the assignment documents by the end of each academic year. In-line with the Retention of Assessed Work Policy, electronic documents must be securely stored in the same way and for the same length of time as paper copies would be.


Feedback

Feedback forms an integral part of a student’s academic journey. Whether formative or summative, it provides students with guidance on how to develop their skills in an area.

“Assessment is the engine that drives learning. Feedback is the oil that lubricates the cogs of understanding” Race, P. (2006)

Feedback describes information about reactions to a product or to a person’s performance of a task, which is used as a basis for improvement. With that in mind, feedback to students should be informative and supportive, and should facilitate a positive attitude to future learning. Feedback should be timely (see policies) and can take a wide variety of forms, for example verbal, written, audio or video, which allow for a more inclusive approach. Feedback that links to learning outcomes, is dialogic and feeds forward to steer improvement is encouraged.

Useful guidance can be found in the Higher Education Academy (now Advance HE) ‘Transforming Teaching and Inspiring Learning’ publication ‘The Developing Engagement with Feedback Toolkit’ (DEFT) (Winstone & Nash, 2016).

Why is feedback important?

Whether summative or formative in nature, feedback plays an important part in the development of students in both an academic and professional capacity.

Effective feedback can:

  1. Enable students to obtain the maximum educational benefit from their learning. The majority of evidence from the feedback literature suggests that it is essential to provide meaningful, constructive feedback within a reasonable timescale and that it is important to use multiple methodologies to deliver the feedback.
  2. Be an integral part of the learning and assessment process. The most effective feedback aims to provide learners with information that enables them to identify what they have done well (and why) and highlight areas for improvement and further development. It is extremely beneficial to discuss with students themselves what form of feedback works best for them.

What is the role of formative feedback in the learning experience?

Formative feedback is provided to the student with regard to their progression towards a goal or standard and may (or may not) carry marks, but its principal purpose is developmental rather than judgemental. Assessment for judgement is termed ‘summative’ assessment, with its primary purpose being to measure the sum of the learning, often in the form of a traditional exam.

A common problem is that students receive too much feedback after learning, rather than during learning. Formative assessment is therefore an essential feature of learning if students are to improve the next element of their work by adjusting in a progressive manner. This process can also be described by the phrase ‘feed-forward’.

What is my role in formative feedback?

If you are providing feedback to a student, it is important to recognise that you are a facilitator of their academic and professional development. Your role is therefore to provide effective developmental targets for students through mechanisms such as developing their subject knowledge, personalising feedback to meet the needs of the individual learner and giving students time to respond to the feedback.

What role can the student play in formative feedback?

Providing formative feedback is important, but only part of the process. To ensure that developmental targets are ‘fed-forward’ the student must actively engage with their feedback, act on the feedback provided and self-assess their work.

One possible inclusive approach to feedback is through dialogic feedback.  

Dialogic feedback is currently playing an important role in areas such as Swansea University’s School of Healthcare Science where, for example, students are able to have face-to-face discussions with their lecturers on practical elements of the course.

What is dialogic feedback?

Dialogic feedback is an approach to providing feedback on assessment which engages the student in a conversation, rather than purely as a passive recipient. The approach requires the student to engage more fully with the assessment and feedback process, meaning that they should be part of the process in shaping an individual assessment and feedback solution which works for them.

This approach asks students to identify what they would specifically like feedback on and how effective they found it in their development.

‘Dialogic feedback suggests an interactive exchange in which interpretations are shared, meanings negotiated and expectations clarified’ (Carless, 2011)

Therefore, dialogic feedback can be viewed as a two-way communication process, where students and staff actively discuss an assessed piece of work and, through this dialogue, identify specific developmental areas based on exactly what the student wants to learn more about.

A report by Nottingham Trent University (2013) highlighted a case study from the University of Central Lancashire, where students were required to attend one-to-one meetings with their module tutor to collect their assessments. By reviewing an assessment rubric, students were asked to estimate the mark they believed they had obtained. Reflecting on the assignment, students were then asked to identify two or three key areas they wanted feedback on and how they could improve their performance in these areas.

Dialogic feedback gives both the student and staff member the opportunity to engage in conversation around developing a specific skill-set. By focusing on identified areas, it can also enhance engagement for both staff and students, as well as reduce the time required to provide feedback by focusing on specific areas of the assessed work.

It should be noted that dialogic feedback does not have to be face-to-face. Technological approaches such as audio or video feedback can be used and offer solutions to accessibility and inclusivity issues. You can view a Swansea University seminar on audio feedback, which formed part of the 7 Characteristics of a Good University Teacher series.

Additionally, Mulliner & Tucker (2015), when conducting research into the perceptions of feedback for students and academics, found academics ‘tended to believe their feedback was more useful, fair, understandable, constructive and encouraging and detailed in comparison to what students felt they were receiving.’ 

Therefore, understanding what feedback a student wants, by engaging in dialogue with them, not only benefits the student but also provides colleagues with a useful guide to the feedback students value.

Feedback Spirals

Winstone & Carless (2020), in ‘Designing Effective Feedback Processes in Higher Education’, turn the focus of feedback on how it should ‘place less emphasis on what teachers do in terms of providing commentary, and more emphasis on how students generate, make sense of, and use feedback for ongoing improvement’ and highlight the importance of staff and student feedback dialogue in developing productive feedback partnerships.

Colleagues are encouraged to provide feedback which is dialogic in nature and provides ongoing, two-way communication to assist students in developing both their academic and professional skills. Recent research, such as Carless (2018), has identified feedback spirals as one such method.

As identified by Carless (2018), the key challenge with the traditional ‘feedback loop’ is how to both promote, and ensure student uptake of, the single process-based loop. Therefore, as Carless argues, double-loop learning, or spirals, offers additional re-evaluation of a problem or task.

The Chartered Association of Business Schools, in the article ‘The Spiral of the Feedback Loop’, provides both a visual representation of the feedback spiral (below) and a case study of how the use of a feedback spiral positively impacted student engagement and pass rates within an economics programme.

In their example, it was recognised that traditional feedback is:

  • Difficult to hear: negative feedback may reduce self-confidence and self-esteem
  • Difficult to recognise, remember and retain
  • Difficult to use (motivational dimensions): traditional feedback that is ‘given’ to students may lead to passive reception and may fail to motivate self-improvement

The Spiral of the Feedback Loop, according to CABS, addresses these issues and offers students the opportunity for formative learning because:

  1. It focuses on future actions (motivational dimension)
  2. It is developmental and evolving (cognitive dimension)
  3. It is dialogic and interactive (emotional dimension)

With the implementation of Canvas, staff are encouraged to investigate the use of elements such as groups, discussion boards and integrated media functions such as Zoom and Panopto, which can all play a key role in developing ongoing dialogue, student engagement and both summative and formative assessment mechanisms (such as quizzes). Self-paced training on various Canvas elements is available within the Canvas Essential programme.

What else can I do to ensure my feedback is effective?

Phil Race (2018) stated:

Successful feedback/feed-forward needs to be a dialogue, not just a monologue from tutors, so helping students make the most of feedback is an essential part of the picture.

In the ‘Feedback and Feedforward – Just the Tips!’ report, Race gives 20 top tips for engaging students in meaningful, transferable and effective feedback/feedforward processes which add to both their academic journey and professional development.

One of the key mechanisms, identified by Race, is timeliness of feedback:

Aim to get feedback on work back to learners very quickly, while they still care and while there is still time for them to do something with it. The longer learners must wait to get work back, especially if they have moved into another semester by the time they receive their returned scripts, the less likely it is that they will do something constructive with the lecturer’s hard-written comments.

Therefore, whenever possible, colleagues are encouraged to utilise instant feedback approaches, particularly in a formative setting, where students can develop understanding immediately after being assessed. Hunt (2008), in ‘Enhanced Learning through Instant Feedback’ provides key considerations for utilising instant feedback principles.

How can I ensure my feedback is balanced?

Part of effective feedback is ensuring it is relevant, concise and comprehensive. It should focus on the specific elements being assessed and on what the student requires.

The article ‘Effective feedback is more than just correcting student work’ provides some effective practice tips on how to provide high-quality feedback without overburdening students, or yourself.  

How can I scale balanced feedback out to a large cohort?

Race (2009), in ‘Putting Feedback Right: Getting Better Feedback to More Students in Less Time’ suggests the use of a short post-submission handout detailing what a model answer would look like and examples of common mistakes.

In ‘Assessment for Learning at King’s’, King’s College London defines generic feedback as:

‘Rather than providing comments on each individual piece of work, generic feedback allows for holistic comments to be provided to all students as a cohort for one assignment. It focuses on patterns of strengths, weaknesses and areas for development that are common to the majority of the class’.

Providing generic feedback that deals with common issues creates an opportunity for group development and understanding. In large cohorts in particular, it also frees up time for colleagues to offer dialogic, personalised feedback to those students who want it.
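Identifying the ‘patterns of strengths and weaknesses’ that generic feedback addresses is essentially a tallying exercise across the cohort. As an illustrative sketch (the issue tags below are hypothetical examples, not a prescribed taxonomy), marker annotations could be summarised like this:

```python
from collections import Counter

# Hypothetical marker annotations: each entry tags one issue seen in one script
cohort_issues = [
    "referencing", "structure", "referencing", "analysis depth",
    "referencing", "structure", "proofreading",
]

def generic_feedback_summary(issues, top_n=3):
    """Count issue tags across a cohort and return the most common ones -
    the patterns worth addressing in a single cohort-level feedback note."""
    return Counter(issues).most_common(top_n)

for issue, count in generic_feedback_summary(cohort_issues):
    print(f"{issue}: seen in {count} scripts")
```

A summary like this feeds directly into the post-submission handout Race suggests: the top issues become the ‘common mistakes’ section, while rarer, individual issues remain candidates for personalised feedback.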

What is inclusive feedback?

Inclusive assessment and feedback is an approach which places the individual student experience at the heart of the process, by ensuring that when assessments are designed, the needs of all students are considered. The Equality Act 2010 places a legal obligation on institutions to ensure that assessment does not disadvantage any student.

The principles of universal, or inclusive, design hold that assessment approaches developed with the needs of all students in mind maximise inclusion, benefit all students and can therefore ease the burden on staff.

Inclusive assessment should not compromise a student’s ability to meet the learning outcomes, meaning that developing robust, fair and inclusive assessments is essential.


Guidance Resources, Further Reading and Directory of Effective Practice for Assessment and Feedback

For more examples, you can visit Academic Quality’s Effective Practice Directory or SALT Conference Resources webpage.

Policies & Legislation

Swansea University has a range of policies relevant to assessment, marking and feedback. The University’s regulations and policies are aligned with the UK Quality Code for Higher Education published by the Quality Assurance Agency, which is the body that monitors and advises institutions on standards and quality.

All Assessment and Feedback related policies can be accessed via the expanding menu under the ‘Assessment and Progress’ heading on the page below.

References

Pedagogy and Principles

  • Allen, D. (2020) ‘What can be done about degree algorithm variations?’. WonkHE. https://wonkhe.com/blogs/what-can-be-done-about-degree-algorithm-variations-2/
  • Biggs, J. and Tang, C. (2011) ‘Teaching for Quality Learning at University’, 4th Edition, Society for Research into Higher Education and Open University Press.
  • Brown, S., Rust, C. and Gibbs, G. (1994) ‘Strategies for Diversifying Assessment’. Oxford Centre for Staff Development, UK.
  • Carless, D., Salter, D., Yang, M. and Lam, J. (2011) ‘Developing sustainable feedback practices’. Studies in Higher Education, 36(4), 395-407.
  • Carroll, J. (2002) ‘A Handbook for Deterring Plagiarism in Higher Education’. Oxford: Oxford Centre for Staff and Learning Development.
  • Gulikers, J., Bastiaens, T. and Kirschner, P. (2004) ‘A five-dimensional framework for authentic assessment’. Educational Technology Research and Development, 52(3), 67-85.
  • Harden, R., Lilley, P. and Patricio, M. (2016) ‘The definitive guide to the OSCE’. Elsevier.
  • Higher Education Academy:
