
Selecting Assessments

Our recommendations for selecting the most appropriate assessments for your pilot project.

The primary focus is to ensure that the project is successful by meeting the success criteria you define at the outset. It is therefore crucial to select the appropriate assessments (and by association, the modules/programmes/cohorts) to be involved.

How particular assessments are selected for involvement should be heavily influenced by the nature of your success criteria. Below are some factors to consider when identifying assessments, grouped by theme.

It is advisable to initially identify a small number of assessments for involvement in your project. This enables the institution to build the necessary experience/capacity to support an increase in usage over time.

UNIwise has created a template that you can use for data collection purposes when selecting assessments that will form part of your pilot project.

If you have one or more success criteria that relate to the infrastructure required to deliver digital assessments, you should consider some of the factors listed below:

On-site assessments

  • Are you interested in finding out about PC lab-based or Bring Your Own Device (BYOD) assessments, or a mixture of the two?
  • Are there particular infrastructure components that you would like to evaluate such as Wi-Fi capacity in key venues?

Remote (off-site) assessments

  • If you plan to run remote assessments, are there particular aspects that you would like to investigate as a part of your evaluation/success criteria such as:
    • student experience of remote assessments?
    • institutional support for students completing assessments remotely?
    • remote invigilation?

The structure of a particular course of study may affect one or more of your success criteria:

What year(s) of study will selected assessments be drawn from? Students studying in their first year are often more receptive simply because they have no prior experience of an alternative approach. Students from later years of study may be resistant to the idea of using a new assessment system/following a new assessment process. However, these students can be valuable from an evaluation perspective as they are able to compare the new system/process with what they have experienced previously.

Size of the cohort(s): if your evaluation process will gather information from students, the size of the cohort(s) will affect the amount and variety of data available to analyse and draw conclusions from.

Are the modules/programmes from which the selected assessments are drawn mature or newly offered? Mature programmes tend to have well-defined assessment processes. This can make the use of a new system much smoother and offers the possibility for evaluation activities to make a comparison with previous offerings of the course.

Are the academic staff responsible for the selected assessments new to the institution? A key component of a successful implementation is a thorough understanding of the assessment processes of the institution. Newly recruited staff may still be developing their understanding of institutional procedures.

The format of the assessments that you select for your pilot project may also have a direct impact on your defined success criteria. Consider the forms of digital assessment that the institution is particularly interested in exploring. Are there plans to explore a single assessment form, such as the submission of assignments, or a range of assessments, such as online examinations and face-to-face presentations?

The amount of prior experience that the staff and students you select for involvement in your pilot project have with digital tools may also impact upon your success criteria. Consider whether your institution is exploring digital assessments for the first time, or whether the staff and students who will be involved already have experience in this area. You may wish to recruit staff and students with differing levels of experience to give you a wider perspective when it comes to the evaluation of the project.

If your success criteria include aspects that touch upon student experience, this should be considered when selecting assessments for the pilot project. For example, what particular aspects of the students' experience are you keen to explore and what assessment types will enable you to test this?

Examples of a mismatch between selected assessments and project success criteria

Below are a couple of examples where the selected assessments did not map well to the success criteria of the project:

Success criteria: Student Experience related

Students had expressed a desire to use their own personal computers to complete assessments rather than write on paper. As such, a key success criterion was defined related to the exploration of a BYOD approach to assessments.

However, during the running of the project, it became apparent that there was insufficient time to communicate with the students involved to ensure that they had a suitable personal computer with which to complete their assessments. As a result, students used institutional computers in PC labs, meaning that this particular success criterion could not be evaluated.

Success criteria: Institutional infrastructure related

The Project Team wanted to explore the institution's infrastructure with regard to the delivery of digital assessments. Therefore, success criteria relating to this aspect were defined at the outset of the project.

When selecting assessments for involvement with the pilot project, the majority were relatively complex in nature and made use of advanced features within WISEflow.

When it came to the evaluation of the project, the focus of the staff and students involved was directed towards the features of the WISEflow system. As a result, useful information on the institution's digital infrastructure was diluted, making the related success criteria more difficult to evaluate.