A competency model for semi-automatic question generation in adaptive assessment
The concept of competency is increasingly important, since it conceptualises intended learning outcomes within the process of acquiring and updating knowledge. A competency model is critical to successfully managing assessment and to achieving the goals of resource sharing, collaboration, and automation in support of learning. Existing e-learning competency standards, such as the IMS Reusable Definition of Competency or Educational Objective (IMS RDCEO) specification and the HR-XML standard, are not able to accommodate complicated competencies, link competencies adequately, support comparisons of competency data between different communities, or support tracking of the knowledge state of the learner.
Recently, the main goal of assessment has shifted away from content-based evaluation towards evaluation based on intended learning outcomes. As a result, assessment now focuses on identifying learned capability rather than learned content, a change that entails corresponding changes in the methods of assessment.
This thesis presents a system to demonstrate adaptive assessment and the automatic generation of questions from a competency model, based on a sound pedagogical and technological approach. The system's design and implementation involve an ontological database that represents the intended learning outcome to be assessed across a number of dimensions, including level of cognitive ability and subject matter content. The system generates the set of questions and tests possible from a given learning outcome, which may then be used to test for understanding and so determine the degree to which learners have actually acquired the desired knowledge.
Experiments were carried out to demonstrate and evaluate the generation of assessments and the sequencing of generated assessments from a competency data model, and to compare a variety of adaptive sequences. For each experiment, the methods and experimental results are described. The way in which the system was designed and evaluated is discussed, along with its educational benefits.
Sitthisak, Onjira
(2009)
A competency model for semi-automatic question generation in adaptive assessment.
University of Southampton, School of Electronics and Computer Science, Doctoral Thesis, 152pp.
Record type: Thesis (Doctoral)
Text: Sitthisak_O._Thesis_(2009).pdf
More information
Published date: May 2009
Organisations:
University of Southampton, Electronics & Computer Science
Identifiers
Local EPrints ID: 66322
URI: http://eprints.soton.ac.uk/id/eprint/66322
PURE UUID: 5d94b949-de57-43bb-91b3-66fc1536e628
Catalogue record
Date deposited: 02 Jun 2009
Last modified: 14 Mar 2024 02:33
Contributors
Author: Onjira Sitthisak
Thesis advisor: Hugh Davis
Thesis advisor: Lester Gilbert