Assessment is a fundamental part of the learning process, measuring the knowledge gained and the personal transformation enabled by learning. It both enables the evaluation of learning and supports further learning through the identification of strengths, weaknesses, areas of expertise, and areas requiring further study and work. Typically, we think of assessment as formal tests used to provide an impartial measurement of our knowledge or skills, and yet, while this is an important and essential aspect, we should regard it as an invaluable part of the learning process that the learner can fully direct and own.
Although we imagine exam halls filled with regimented rows of desks when we think of assessment, there are many approaches to student evaluation and a wide range of techniques to use. Work-based learning, mobile learning, and the use of innovative technologies, such as virtual worlds and augmented reality, are all increasingly common aspects of the modern education landscape. The development and enhancement of assessment techniques and the adoption of new technologies are natural elements of this changing world.
E-assessment is a term for any form of assessment that involves the use of computers and other technologies as an essential part of the assessment delivery or scoring, although it is most generally regarded as assessment that is delivered and evaluated electronically. A multiple-choice test delivered and marked by computer and an essay written with a word processor and delivered to a digital drop box to be marked on-screen are both forms of e-assessment. Evaluation of an electronic portfolio of essays, images, sound files, and videos can also be a form of e-assessment.
There are three main purposes for assessment: summative, formative and diagnostic. In practice, a course of study is likely to incorporate a number of these at different stages during the period of the course. Each serves a distinct purpose, and different factors apply to the setting and delivery of each. Often, we manage all using e-assessment.
Summative assessment is assessment conducted at the end of a learning period. It is designed to measure how much the student has learned and evaluates their performance in relation to the course's aims and objectives. Because of its role as the conclusion of the learning process, it rarely provides feedback beyond a mark or grade. Summative assessment can be life changing, determining whether a candidate may enter a chosen profession or further course of study. Security, quality assurance, and assessment validity are therefore crucially important.
Formative assessment describes assessment activities aimed at supporting the learning process. Like summative assessment, formative assessments are used to evaluate what has been learned so far, but the outcomes are used to build future activities and do not contribute towards the student's final grade (although they may be used to support an appeal against a summative award or as a substitute if a student has been unable to take the exam). Formative assessment generally includes feedback that the learner can apply toward further development. While a raw mark or grade can be regarded as a form of feedback, it provides little real guidance towards improvement.
Continuous assessment is a form of summative assessment that features elements of formative assessment. Rather than one assessment at the end of the entire course, students are assessed at regular points throughout the course, generally at the end of a small unit of learning. The recorded marks contribute towards the candidate's final grade for the course, and feedback is usually provided. One danger of continuous assessment is that it can promote shallow, surface learning, with students developing an acceptable standard of expertise they soon forget after the assessment is over. To avoid this problem, design the course carefully to ensure that students build upon prior learning as the course progresses.
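As a concrete illustration of how continuously assessed marks might contribute to a final grade, the sketch below combines per-unit percentage marks using fixed weights. The unit names and weightings are invented for the example; real schemes vary widely.

```python
def combine_unit_marks(unit_marks, weights):
    """Combine per-unit percentage marks into a weighted course total.

    unit_marks and weights are dicts keyed by unit name; the weights
    must sum to 1.0. This is a generic sketch, not any particular
    awarding body's scheme.
    """
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1.0")
    return sum(unit_marks[u] * w for u, w in weights.items())

# Hypothetical course assessed over three units
marks = {"unit1": 70, "unit2": 55, "unit3": 80}
weights = {"unit1": 0.25, "unit2": 0.25, "unit3": 0.5}
print(combine_unit_marks(marks, weights))  # 71.25
```

Weighting later units more heavily, as here, is one way to reward the cumulative learning that careful course design aims for.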
Diagnostic assessment, delivered prior to a course of study, determines the suitability of a candidate for a course or the necessity of additional teaching. For example, a nursing course may require new entrants to undertake a diagnostic assessment of their mathematical and arithmetical abilities, with those performing below the minimum standard being required to take an additional course to raise their performance.
Assessment may be delivered in a formal environment such as a scheduled examination in a dedicated testing centre, or informally, such as the use of "clickers" or handheld electronic voting units during a break in a lecture. The nature and purpose of the assessment determine the specifics of the environment.
We can use a wide range of techniques to meet all of the purposes discussed above, although some techniques are better suited to particular purposes than others.
As the name suggests, self-assessment is assessment in which the candidate herself is responsible for evaluating and grading her work; in some cases, the candidate may also set the initial question(s) and determine the criteria for marking. The pedagogic value of such exercises is in encouraging the student to think critically about the topic and about their own performance and relationship to the topic. It can be a very powerful learning tool. Self-assessment is unlikely to be used as a major part of summative assessment, at least partially because key stakeholders such as employers and parents are skeptical of its academic validity. There is some evidence that students themselves do not regard such approaches as highly as more traditional tests.
Using peer assessment, students mark and comment on the work of their peers. Students may be graded:
- according to the mark(s) awarded by their peers,
- according to their own approach to marking and evaluating,
- or a combination of these two.
A teacher (or other external body) determines the approach taken for peer assessment. One must take care when using this form of assessment to avoid such unwanted practices as friends evaluating their friends' work more highly, or marking unpopular students unfairly. Anonymity, both about who produced the work and who commented on it, is usually necessary. Again, while peer assessment can be a useful learning exercise, it is unlikely to be used as part of a summative assessment.
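Anonymity of this kind is usually enforced by the assessment system when it allocates work to reviewers. The sketch below shows one simple allocation scheme, a circular shift over a shuffled roster, that guarantees no student reviews their own work; the function and its parameters are hypothetical and not drawn from any particular peer assessment tool.

```python
import random

def allocate_reviewers(students, reviews_per_item, seed=None):
    """Assign each student's work to other students for anonymous review.

    A circular shift over a shuffled roster ensures that no one reviews
    their own work and that every submission receives exactly
    reviews_per_item reviews (and every student performs that many).
    """
    if reviews_per_item >= len(students):
        raise ValueError("need more students than reviews per item")
    order = list(students)
    random.Random(seed).shuffle(order)  # hide any meaningful ordering
    n = len(order)
    allocation = {s: [] for s in order}
    for offset in range(1, reviews_per_item + 1):
        for i, author in enumerate(order):
            # offset < n, so the reviewer is never the author
            allocation[author].append(order[(i + offset) % n])
    return allocation

alloc = allocate_reviewers(["ana", "ben", "cho", "dee"], 2, seed=1)
```

The allocation itself would be kept hidden from the students so that neither author nor reviewer identities are revealed.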
Systems such as PeerPigeon and WebPA are available to manage peer assessment. Figure 1 is an example peer review workflow developed by Dave Millard for the PeerPigeon project (see Resources).
Figure 1. An example peer review workflow
We often regard group work as good training for the student in preparation for working life, particularly in areas such as software development where group working is the normal approach in the professional environment. There are two main types:
- The group itself evaluates each of the members of that group (a form of peer assessment)
- A teacher evaluates the work produced by the group
The first type again runs some of the risks of peer assessment, although perhaps surprisingly, research has found that students do not tend to aggrandize their input to the extent that one might expect, perhaps because they are aware that they are being measured against their peers. WebPA (see Resources) is specifically designed to manage sophisticated forms of peer assessment of group work.
The second type often does not distinguish between the differing levels and quality of input from different members of the group. Due to the difficulty of tracking individual contributions, a single score is often awarded to the group as a whole based on the quality of the final product. Some projects, such as work done with wikis by the Scottish Qualifications Authority (SQA), have attempted to address this issue.
Using scanners for assessments provides a blend of on- and offline assessment. Scanners and Optical Character Recognition (OCR) are often used to automate the evaluation of responses to multiple-choice and multiple-response questions. Students indicate their responses on a pre-printed grid that is scanned by software and automatically graded. Thresholds for clarity can be established for human intervention as required. Scanners can also be used to enable on-screen marking.
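The routing of scanned sheets between automatic grading and human intervention can be expressed as a simple threshold rule. The function below is a hypothetical sketch of that logic; the confidence values and threshold are assumptions, not taken from any particular scanning product.

```python
def route_scanned_response(marks_detected, confidence, threshold=0.9):
    """Decide whether a scanned answer grid can be auto-marked.

    marks_detected: option identifiers the scanner believes were
    filled in for a single-response question; confidence: the
    scanner's confidence in [0, 1]. Low-confidence, blank, or
    multiply-marked grids are routed to a human marker.
    """
    if confidence < threshold:
        return "human_review"
    if len(marks_detected) != 1:  # blank or multiply-marked grid
        return "human_review"
    return "auto_mark"

print(route_scanned_response(["B"], 0.97))  # auto_mark
print(route_scanned_response(["B", "C"], 0.97))  # human_review
```

Raising the threshold trades marker workload for reduced risk of silently mis-read grids.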
When a very large number of students are being assessed (such as national school qualifications), scanning exam scripts for electronic distribution to markers can save time and costs and increase the security of exam scripts. Moving scripts between markers and exam boards by postal or courier services is costly and can lead to significant problems should scripts become misplaced. When scripts are scanned and stored on secure servers, markers can access them (with appropriate credentials). Additionally, this can greatly speed the assessment process when different subject experts are responsible for marking different parts of the same script. Some research has demonstrated that on-screen marking can be less accurate than marking physical artifacts, and markers must be made aware of this potential weakness.
Automated assessment of free text responses has always attracted great interest, but is an extremely difficult process. More successful experiments in this area are those where formulaic writing or the assessment of writing style rather than content are primary. E-rater (see Resources) is widely used, particularly within the United States, to evaluate candidates for entry to higher education, as it offers a thorough assessment of the student's ability to construct an elegant, coherent, and persuasive argument rather than the actual content of the argument. It can be argued that evaluating essay length texts for content is not yet viable; however, some impressive work has been done with shorter texts through technologies such as Intelligent Assessment Technologies' FreeText Author, where there is a remarkably high level of accuracy (see Resources).
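To illustrate the general idea of matching free-text content against a marking scheme, here is a deliberately naive keyword-and-synonym marker. It is a toy: real systems such as FreeText Author use far more sophisticated linguistic analysis, and nothing here reflects their actual methods.

```python
def score_short_answer(response, required_keywords, synonyms=None):
    """Toy keyword-based marker for short free-text answers.

    Awards one mark per required keyword (or an accepted synonym)
    found in the response. Purely illustrative of matching content
    against a marking scheme; real short-answer marking engines do
    much deeper linguistic processing.
    """
    synonyms = synonyms or {}
    words = set(response.lower().replace(",", " ").replace(".", " ").split())
    score = 0
    for keyword in required_keywords:
        accepted = {keyword} | set(synonyms.get(keyword, []))
        if words & {a.lower() for a in accepted}:
            score += 1
    return score

answer = "The heart pumps blood around the body."
print(score_short_answer(answer, ["heart", "blood"]))  # 2
```

Even this toy shows why the approach is fragile: misspellings, paraphrases, and negations ("the heart does not pump blood") all defeat simple matching.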
Discussions of e-assessment generally refer to on-screen assessment that is delivered and evaluated by computer.
Many different types of questions can be set and evaluated using computers. IMS Question and Test Interoperability (QTI) defines items as interaction types rather than by how they are displayed on screen. These interaction types can be realized as a range of question types:
- Multiple choice: Consists of a question and a number of possible answers from which the candidate must select the correct one. Although very simple to code and process, we can use well-written multiple-choice questions to test advanced knowledge.
- True or false: The simplest type of multiple-choice question, this consists simply of a question stem with two possible responses, true or false.
- Multiple response questions: Like multiple-choice questions, these consist of a question with a number of possible responses, with the difference that none, one, more than one, or all of the responses may be correct.
- Fill in the blank or cloze: Students replace blanks in a text through free text entry, selection from a drop list of options, dragging a selection of responses into place, and so on.
- Hot spot: This is a graphical interaction in which candidates must select the appropriate part or parts of an image, perhaps by dragging a marker to it or simply by clicking on it. For example, a question might ask candidates to identify the location of certain cities as shown in Figure 2.
Figure 2. UK city locations. The airport icons at the bottom of the graphic are to be dragged to the appropriate location.
- Free text entry: Students enter text (including numbers) to answer the question. Figure 3 provides an example.
Figure 3. Free text entry
- Extended text entry: Students enter a relatively large amount of text, such as an essay, usually for manual marking.
We can define the marking of these items within the test to produce a raw score, which the assessment management system can later convert to a grade.
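A minimal sketch of that two-stage process, item-level marking producing a raw score that is then mapped to a grade, might look like this. The partial-credit rule for multiple-response items and the grade boundaries are illustrative conventions only, not part of any specification.

```python
def score_multiple_choice(response, correct):
    """One mark for the correct identifier, zero otherwise."""
    return 1 if response == correct else 0

def score_multiple_response(responses, correct_set):
    """One mark per correct selection, minus one per incorrect
    selection, floored at zero (a common, but not universal,
    partial-credit convention)."""
    responses = set(responses)
    return max(0, len(responses & correct_set) - len(responses - correct_set))

def to_grade(raw, maximum, boundaries=((0.7, "A"), (0.6, "B"), (0.5, "C"))):
    """Convert a raw score to a grade using fractional boundaries
    (invented for this example)."""
    fraction = raw / maximum
    for cutoff, grade in boundaries:
        if fraction >= cutoff:
            return grade
    return "Fail"

raw = score_multiple_choice("ChoiceA", "ChoiceA") \
    + score_multiple_response({"A", "C"}, {"A", "C", "D"})
print(raw, to_grade(raw, maximum=5))  # 3 B
```

In practice the item-level rules live in the test definition, while the grade conversion is typically a later, policy-driven step in the assessment management system.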
The use of recognized technical standards for the development and exchange of assessment content offers a number of advantages. Content can be easily transferred between different systems and institutions, allowing teachers and learners access to a much larger amount of content. Community-driven initiatives, such as subject-specific item banks, benefit greatly from content interoperability, while professional content publishers and vendors have access to a far greater audience which is able to use their content regardless of the specific assessment tool they use. Expensively produced or purchased content also has far greater longevity when it is free from lock-in to a single delivery system.
Developing assessment tools based on recognized standards supported by a strong developer community significantly increases the efficiency of the development process, freeing up developer time and resources for other activities. Earlier articles in this series explore the origin and development of standards in greater detail. The standards in widespread use in the assessment domain are inherently conservative and reactive: they are developed to reflect common practice within the domain, focusing on features generally used at the expense of more specialized interests, and attempt to meet the broadest needs. This process was made explicit during the finalization of IMS Question and Test Interoperability (QTI) v2.1: the project group surveyed the QTI developer community to identify those features most commonly used which formed the basis of the Basic QTI specification, while Full QTI attempts to accommodate the wider range of features less commonly used.
QTI v2.1 also offers detailed guidance on integrating custom interactions within a QTI compliant system, enabling the use of item types beyond those explicitly codified within the standard. The relationship between formal standards bodies and community-led initiatives to further extend and enhance standards is particularly rich in the assessment domain. A great example is the management of mathematical content, exemplified by the JISC CETIS QTI Mathematics Working Group (see Resources).
IMS Question and Test Interoperability supports the delivery and exchange of assessment content across compliant systems and between institutions (see Resources). It is one of the oldest of the IMS specifications (v1.0 was released in May 2000), reflecting the central position of assessment in learning. Version 2.0 was released in January 2005, offering a substantial revision of the item level of the specification and bringing it more closely into line with other IMS specifications that came after v1, and v2.1 is expected to be released in early summer 2011, completing the section and test levels.
In QTI, a question (termed an item) is constructed from a number of elements, including:
- Rubric: Instructions to the candidate related to the assessment.
- Question stem: The stimulus question, for example, "What is the capital of France?"
- Answer: The correct answer, in this case Paris.
- Distracters: Incorrect answers; possible distracters for this question might be London, Berlin, and Rome.
A number of questions can be combined within a section, for example, a themed group of questions within a larger examination. Sections can then be combined into a single assessment. Associated material such as videos, images, and sound files, may be contained within a single item, a section or an entire assessment.
The QTI v2.1 specification is made up of nine documents, each detailing a different aspect of the specification or good practice in its use:
- Overview: Outlines the history of the specification and the scope of the current version. It includes a number of the use cases that helped determine the scope and coverage of the specification.
- Assessment Test, Section and Item Information Model: Describes the concepts and rules of QTI items, sections and tests and the relationships between them.
- XML binding: Provides an XML schema and optional Document Type Definition (DTD) against which QTI is validated.
- Results reporting: Describes how information about assessment results and outcomes should be handled and exchanged between systems.
- Implementation guide: Provides examples of QTI as illustrative demonstrations of how the specification may (but need not) be implemented.
- Integration guide: Provides guidance on the use of QTI with IMS Content Packaging, IMS Learning Design, IMS Simple Sequencing, and IEEE CMI (Data Model for Content Object Communication) (see Resources).
- Conformance guide: Provides profiles against which tool developers can measure the interoperability of their product.
- Meta-data and Usage Data: Provides an extended application profile of IEEE Learning Object Metadata (LOM) to enable the description and discovery of assessment material, and guidance on the use of custom vocabularies for recording usage data (see Resources).
- Migration guide: Offers detailed guidance on converting content from v1.x to v2.1.
Here are some examples of QTI-based assessment tools:
Questionmark™ Perception™ is a very widely used assessment management system that has been adopted by a number of institutions worldwide. Questionmark is one of the originators of the QTI specification, which was based on their internal Questionmark Markup Language (QML), and they have engaged in a number of development activities (see Resources).
The Sakai Collaboration and Learning Environment (see Resources) is a free, open source system developed by a consortium of higher education institutions, primarily in America. It offers QTI assessment tools within a larger learning system.
Moodle is a similar free, open source community-developed LMS with QTI capabilities (see Resources). In the UK, the government-funded Joint Information Systems Committee (JISC) have provided a considerable amount of support for QTI, mandating its use for assessment tool development in a number of programs and providing ongoing support for and engagement with the specification's development through the Centre for Educational Technology and Interoperability Standards Innovation Support Centre (JISC CETIS).
The principles and processes of content packaging are described in far greater detail by Zoe Rose in the second article in this series, Learning Technology Standards, Specifications and Protocols. In the context of assessment content, QTI provides clear guidance on how to represent relationships between a group of items within a single package to enable the content to be understood as an assessment. In developing this part of the QTI specification, the authors were careful to ensure that no extensions or adaptations to the IMS Content Packaging v1.1.3 specification were made to maximize the usability of packages of QTI items and tests in all systems that use Content Packaging without modification, meaning that assessment material can be held within general repositories as well as dedicated item banks.
IMS Common Cartridge is a relatively recent innovation from IMS that combines standards such as IMS Learning Tools Interoperability and QTI to produce interoperable, self-contained packages (cartridges) of learning resources. Common Cartridge currently implements a limited range of QTI interaction types (Basic QTI), but work is ongoing on the specification and extension is likely (see Resources).
John Robertson explores this standard in great depth in Part 5 in this series, Metadata. IEEE LOM can be used to describe assessment content, but caution should be used with respect to a number of terms.
The IEEE LOM application profile provided by QTI v2.x replaces the metadata tagging within the assessment test or item itself that was the practice in QTI v1.x, a change that brings assessment content far more closely into line with other educational content and greatly increases the visibility and discoverability of assessment material (see Resources). The profile addresses a number of limitations and issues relevant to assessment in core LOM, such as the addition of vocabularies to define the nature of an item interaction type, and the forbidding of semantically unhelpful terms such as learning_resource_type, where the same content can be used in multiple ways (for both self-assessment and examinations in this case).
Dublin Core is the basic model that underlies IEEE LOM. As Robertson points out, the standard's simplicity (its base form consists of just 15 elements) means that it is capable of a very wide range of implementations and interpretations. Additionally, educational terms within Dublin Core are still limited and not specific to assessment.
Usage data is information about how candidates responded to an assessment item in an examination context. Unlike metadata, which is generally intended to be shared at least in part, usage data is highly context dependent and often extremely commercially sensitive. Usage data is an integral part of the quality assurance process. It can be used to:
- Confirm that an item is of the intended difficulty for a specific group.
- Ensure that an item does not unintentionally discriminate against a specific group or groups. Social, religious, or cultural bias, or reliance on knowledge that is not the topic actually being assessed, can cause problems.
- Identify confusing, misleading, or unclear elements such as distractors or rubric.
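The first two checks rest on simple item statistics. A common classical approach uses a facility index (the proportion of candidates answering correctly) and a crude discrimination index (facility among high scorers minus facility among low scorers). The sketch below assumes dichotomous 0/1 item scores and uses invented sample data.

```python
def facility_index(item_scores):
    """Proportion of candidates answering the item correctly
    (scores are 0 or 1). Very low values suggest the item is
    harder than intended; very high values, easier."""
    return sum(item_scores) / len(item_scores)

def discrimination_index(item_scores, total_scores, fraction=0.5):
    """Crude discrimination: facility among the top fraction of
    candidates (ranked by total score) minus facility among the
    bottom fraction. Values near zero or negative flag items that
    fail to separate stronger from weaker candidates."""
    paired = sorted(zip(total_scores, item_scores), reverse=True)
    k = max(1, int(len(paired) * fraction))
    top = [s for _, s in paired[:k]]
    bottom = [s for _, s in paired[-k:]]
    return facility_index(top) - facility_index(bottom)

item = [1, 1, 0, 1, 0, 0]        # per-candidate scores on one item
totals = [90, 85, 80, 60, 40, 30]  # per-candidate exam totals
print(facility_index(item))  # 0.5
print(discrimination_index(item, totals))
```

Real item analysis typically uses point-biserial correlations and larger cohorts, but the quality-assurance questions being asked are the same.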
Depending on the nature of the individual item bank, usage data may be shareable and generally accessible, or highly proprietary and privileged information.
In addition to practical factors such as commercial sensitivity, there are difficulties inherent in sharing usage data. Terminology and vocabularies vary across different theoretical models and national contexts, and clashing terms and different definitions for the same term make the development of a standard method difficult. QTI provides limited guidance on defining and recording usage data, and recommends the use of bespoke glossaries for recording and exchanging this information.
QTI uses extensible markup language (XML) to record information about assessment content in machine readable format (see Resources). It uses tags, similar to HTML, to define the function and behavior of each part of a file. The specification provides definitions for each tag, together with guidance on how they should be combined into deliverable assessments, results data, and so on.
Listing 1 shows a simple multiple choice question to illustrate the basic structure of QTI XML.
Listing 1. Simple multiple choice question
<?xml version="1.0" encoding="UTF-8"?>
<!-- This example is adapted from the PET Handbook, copyright University of Cambridge ESOL Examinations -->
<assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.imsglobal.org/xsd/imsqti_v2p0 imsqti_v2p0.xsd"
    identifier="choice" title="Unattended Luggage" adaptive="false"
    timeDependent="false">
  <responseDeclaration identifier="RESPONSE" cardinality="single"
      baseType="identifier">
    <correctResponse>
      <value>ChoiceA</value>
    </correctResponse>
  </responseDeclaration>
  <outcomeDeclaration identifier="SCORE" cardinality="single" baseType="integer">
    <defaultValue>
      <value>0</value>
    </defaultValue>
  </outcomeDeclaration>
  <itemBody>
    <p>Look at the text in the picture.</p>
    <p>
      <img src="images/sign.png" alt="NEVER LEAVE LUGGAGE UNATTENDED"/>
    </p>
    <choiceInteraction responseIdentifier="RESPONSE" shuffle="false" maxChoices="1">
      <prompt>What does it say?</prompt>
      <simpleChoice identifier="ChoiceA">You must stay with your luggage at all times.</simpleChoice>
      <simpleChoice identifier="ChoiceB">Do not let someone else look after your luggage.</simpleChoice>
      <simpleChoice identifier="ChoiceC">Remember your luggage when you leave.</simpleChoice>
    </choiceInteraction>
  </itemBody>
  <responseProcessing template="http://www.imsglobal.org/question/qti_v2p0/rptemplates/match_correct"/>
</assessmentItem>
Figure 4 shows Listing 1 rendered.
Figure 4. Multiple choice questions
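The match_correct response-processing template referenced at the end of Listing 1 simply awards the mark when the candidate's response equals the declared correct value. As an illustration, the Python sketch below reads the correct response from a stripped-down version of the item and applies that rule; the function names are invented for the example.

```python
import xml.etree.ElementTree as ET

# QTI v2.0 namespace, as used in Listing 1
NS = {"qti": "http://www.imsglobal.org/xsd/imsqti_v2p0"}

def correct_response(item_xml):
    """Extract the declared correct response identifier from a QTI item."""
    root = ET.fromstring(item_xml)
    value = root.find("qti:responseDeclaration/qti:correctResponse/qti:value", NS)
    return value.text

def match_correct(item_xml, candidate_response):
    """Apply the match_correct rule: full marks if the response
    matches the declared correct value, zero otherwise."""
    return 1 if candidate_response == correct_response(item_xml) else 0

# A stripped-down version of Listing 1 for demonstration
ITEM = """<assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p0"
    identifier="choice" title="Unattended Luggage">
  <responseDeclaration identifier="RESPONSE" cardinality="single"
      baseType="identifier">
    <correctResponse><value>ChoiceA</value></correctResponse>
  </responseDeclaration>
</assessmentItem>"""

print(match_correct(ITEM, "ChoiceA"))  # 1
print(match_correct(ITEM, "ChoiceB"))  # 0
```

A real delivery engine would also evaluate the outcomeDeclaration and handle cardinality, but the core of the template is just this comparison.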
Standards for discovering and sharing content within repositories are discussed in the third article in this series, Open Repositories for Scholarly Communication, by Stuart Lewis.
Assessment content may be stored either in a dedicated repository (an item bank or assessment bank) or within a general repository of learning objects and other educational content. Assessment items can be stored adequately within a general repository, and doing so has some advantages: assessment material can be retrieved alongside the other learning content it was designed to complement, and licensing can be handled consistently. However, there are clear advantages to storing items within a dedicated item bank: the ability to preview assessment content by playing it, and to construct, revise, and play assessments on the fly while searching, are functions beyond those normally provided by a more general educational repository.
Item banks may be intended for a wide range of audiences, such as:
- Educators seeking formative content to deliver for their courses.
- Publishers providing formative assessment content to supplement textbooks or e-textbooks.
- Assessment bodies (including institutions) constructing banks for adaptive and/or personalized high stakes assessment.
- Students seeking self-directed study resources.
The Physical Sciences Question Bank (see Resources), funded by the Higher Education Academy, makes a large number of items available to staff within UK Higher and Further Education. It is one of a number of subject-specific item banks. Other successful examples include Mathletics for mathematics and the e3an database of electrical and electronic engineering items (e3an stands for Electrical and Electronic Engineering Assessment Network).
Jorum (see Resources), funded by JISC, is a general educational repository that stores a wide range of objects, from text files through videos and virtual world resources to assessment material.
The levels of security and approaches to management depend very much on the purposes and intended audience of the item bank. The levels of authority that may be applicable include:
- Single authority producing and controlling content — for example, the publisher providing an item bank to complement a textbook or an awarding body releasing retired items as revision aids.
- Single authority quality assuring and releasing content provided by a wider community — for example, a subject center providing an assessment bank as a focal point for their subject community.
- Multiple authorities downloading and providing content — rating and annotation systems provide a de facto quality assurance process.
Existing assessment item banks provide examples of each of these approaches.
At its minimum, an assessment bank requires the ability to upload, discover, and download assessment content. A more effective bank would provide greater functionality:
- Upload: The ability to upload content to an item bank depends on the purposes and intentions of the bank. Anyone may freely upload to a community bank, while a publisher's item bank is extremely restricted.
- Discover: At a minimum, the contents of an assessment bank could simply be browsed as a series of files; however, this is not a particularly user-friendly approach and, where there are a large number of items, can be very impractical. The ability to search based on a number of terms (subject, topic, question type, intended difficulty, author, and so on) is essential for effective use of an item bank.
- Preview: Most repositories offer users the ability to preview content before downloading it, which provides an initial evaluation of its suitability.
- Play: The ability to play, or fully try out, an assessment item (be presented with the stimulus, submit an answer, have the answer marked, receive a score and feedback, and retry the question) is what distinguishes an item bank from other repositories.
- Download: Being able to download items, either to a separate file or to an assessment delivery system, is necessary to make an item bank more than just a redundant store of content.
- Rating and annotation: The ability to comment on and rate content within an item bank, similar to the system used by YouTube, can provide added value for users of the system.
- Management of usage data: The reciprocal arrangement proposed by the Item Banks Infrastructure Study (IBIS) Project (see Resources), under which users may download resources only after uploading contributions, recommends the return of usage data to the item bank. This necessitates additional processes for defining a usage data profile and for managing its upload and integration with earlier data.
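To illustrate the Discover function described above, a minimal exact-match metadata filter might look like the sketch below; the field names and records are invented for the example, and a real item bank would add free-text search, ranking, and access control.

```python
def search_items(items, **criteria):
    """Filter an item bank's metadata records on exact-match fields.

    items: list of dicts with keys such as 'subject', 'topic',
    'question_type', 'difficulty', or 'author' (field names are
    hypothetical). Returns every record matching all criteria.
    """
    return [item for item in items
            if all(item.get(field) == value
                   for field, value in criteria.items())]

# Invented sample metadata records
bank = [
    {"id": "q1", "subject": "physics", "question_type": "multiple_choice"},
    {"id": "q2", "subject": "physics", "question_type": "hot_spot"},
    {"id": "q3", "subject": "maths", "question_type": "multiple_choice"},
]
hits = search_items(bank, subject="physics", question_type="multiple_choice")
print([item["id"] for item in hits])  # ['q1']
```

Searching over a structured metadata profile like this is exactly what the QTI LOM application profile discussed earlier is designed to enable.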
Issues around version control, quality assurance, intellectual property rights, and licensing do not differ from those faced by other repositories. Security of content within an item bank, however, can be a major issue.
The security of items used in any high stakes context is essential to ensure that the assessment process and validity are not compromised.
Security of items does not just apply to the finished item but also throughout the development and quality assurance process. An item bank for a large testing organization may contain items in a wide range of stages of development:
- Items under development, which are still in the process of being drafted by subject experts, or being converted from other formats by digitization experts.
- Items being quality assured. This can reflect a number of stages, from proofreading, accessibility evaluation, and basic checks such as accuracy of links, through initial deployment to a small control group, to post-delivery review and recalibration.
- Items ready for delivery. These may have already been delivered before. The costs associated with developing high quality content make reuse of that content a high priority. Even in high stakes assessment, the same content may be reused in a number of delivery periods. In pre-e-assessment days, it was necessary for candidates to return their exam papers before leaving the examination room. In electronic systems, this security is easier to enforce.
- Items that have been delivered and are retired.
- Items that have been withdrawn. These may be items about which concerns were raised after evaluation of usage data, or items that are no longer accurate, such as "Name the President of the United States of America."
- Items that have been delivered and are now released to a general audience. For example, examination boards may release past examination papers as revision aids.
Vendors who sell assessment content are naturally concerned with ensuring that access is limited to those people who are eligible or licensed to use that content. Access management systems such as the IMS Learning Management Systems specification or Shibboleth, create relationships between individual learners and the licensing agency (a university, for example) to manage access and security (see Resources).
Tools vendors such as Questionmark and the now retired UK project TOIA offer community item banks accessible only by registered users or customers, with access associated with their membership credentials (see Resources).
E-assessment is a relatively well-established example of the use of technology to support learning and teaching, yet it is constantly developing with the introduction of new approaches and the refinement and enhancement of existing methods. The existence of mature, widely adopted technical standards to support the development of e-assessment systems and content helps to support the further adoption of these technologies. They are likely to become even more widely used with the growth of part-time and distance learning anticipated in the near future.
- A Java implementation of IMS QTI Version 2, developed by Graham Smith for UCLES at the University of Cambridge, provided the example items and code in this article.
- Assessment item banks and repositories, by Sarah Currier, provides an outline of the relationship between assessment banks and general repositories of learning resources.
- Assessment item banks: an academic perspective, by Dick Bacon, explores the pedagogic aspects of assessment banks.
- PeerPigeon is a set of services for the e-Framework that deals with peer review.
- WebPA is an open source online peer assessment tool that enables every team member to recognise individual contributions to group work.
- The Scottish Qualifications Authority (SQA) is the national accreditation and awarding body in Scotland.
- E-rater is widely used, particularly within the United States, to evaluate candidates for entry to higher education.
- Author, from Intelligent Assessment Technologies, is a text assessment tool with a remarkably high level of accuracy.
- The JISC CETIS QTI Mathematics Working Group exemplifies the relationship between formal standards bodies and community-led initiatives to further extend and enhance existing specifications.
- IMS Question and Test Interoperability (QTI) supports the delivery and exchange of assessment content across compliant systems and between institutions.
- The IMS Content Packaging v1.2 Public Draft v2.0 specification describes data structures that can be used to exchange data between systems that wish to import, export, aggregate, and disaggregate packages of content.
- The IMS Learning Design specification supports the use of a wide range of pedagogies in online learning.
- The IMS Simple Sequencing specification defines a method for representing the intended behavior of an authored learning experience such that any learning technology system (LTS) can sequence discrete learning activities in a consistent way.
- The IEEE Learning Object Metadata (LOM) standard specifies the syntax and semantics of LOM, defined as the attributes required to fully/adequately describe a Learning Object.
- IMS Common Cartridge is a relatively recent innovation from IMS that combines standards such as IMS Basic Learning Tools Interoperability.
- XML: QTI uses the Extensible Markup Language to record information about assessment content in machine-readable form.
- The Item Banks Infrastructure Study (IBIS) Project created a report that presents a comprehensive vision of a distributed item bank system for UK Higher and Further Education.
- Learn about access management systems such as the IMS Learning Information Services specification or Shibboleth.
- The IEEE LOM application profile provided by QTI v2.x replaces the use of metadata tagging within the assessment test or item itself, as was the practice in QTI v1.x.
- The Centre for Educational Technology and Interoperability Standards Innovation Support Centre (JISC CETIS) provides advice to the UK Higher and Post-16 Education sectors on educational technology and interoperability standards.
- Mathletics is one of a number of subject-specific item banks.
- e3an (the Electrical and Electronic Engineering Assessment Network) is a database of electrical and electronic engineering items.
- Jorum is a JISC-funded free online repository service designed to collect and share learning and teaching materials.
- Detailed lists of commercial and open source products and tools are available on the JISC CETIS Assessment tools, projects and resources and QTI pages.
- The Physical Sciences Question Bank, funded by the Higher Education Academy, makes available a large number of items to staff within UK Higher and Further Education.
- The Learning Object Metadata (LOM) standard is maintained by the IEEE's Learning Technology Standards Committee (LTSC).
- The Learning Resource Exchange (LRE) Portal is a free repository containing standards-based educational content for schools in many European countries.
- The Global Grid for Learning (GGfL), owned by Cambridge University Press (CUP), is a payment-based repository containing a wide variety of different types of educational content for schools, including standards-based content.
- The standards and specifications maintained by the IMS Global Learning Consortium are available on the IMS website. They can also be downloaded, but require the user to agree to the licensing terms.
- IMS maintains its own Learning and Educational Technology Product Directory, which records products that have achieved conformance marks for different specifications.
- Anyone interested in the development of either Common Cartridge or Learning Tools Interoperability may wish to consider joining the Common Cartridge and Learning Tools Interoperability alliance, which is run by IMS.
- Further discussions about the future of e-learning standards are also underway at the International Federation for Learning, Education, and Training Systems Interoperability (LETSI).
- JISC provides infrastructure, guidance, and support for the use of technology in UK Higher and Further Education.
- The Dublin Core Metadata Initiative is a commonly used standard for descriptive metadata.
Get products and technologies
- Questionmark provides assessment products and support services, with customers in educational institutions, businesses, professional associations, government agencies, and other organizations throughout the world.
- The Sakai Collaboration and Learning Environment is a free, open source system developed by a consortium of higher education institutions, primarily in the United States. It offers QTI assessment tools within a larger learning system.
- Moodle is a similar free, open source, community-developed LMS with QTI capabilities.
Rowin Young has been involved with e-assessment for a number of years. She is a member of the IMS Question and Test Interoperability working group and a Learning Technology Advisor for the JISC Centre for Educational Technology and Interoperability Standards (JISC CETIS), with particular interests in assessment, games for learning, and virtual worlds. She has also lectured and tutored in Scottish and English Language and Literature at traditional and distance learning institutions.