Choosing between wider participation and quality of interactions: a study of learner engagement within PeerWise
Wilde, Adriana (2020) Choosing between wider participation and quality of interactions: a study of learner engagement within PeerWise. UK and Ireland Computer Science Education Conference, Virtual Event, Glasgow, United Kingdom, 03 - 04 Sep 2020.
Record type: Conference or Workshop Item (Poster)
Abstract
Research interest in learner engagement within peer-supported digital environments has increased in recent years. Yet the effect that lecturers' active promotion of participation within such spaces has on the quality of the interactions remains little explored. This poster presents one such study of learner engagement in two consecutive years of using PeerWise in a second-year module in Human-Computer Interaction (compulsory for Computer Science and Web Science programmes, optional for several others).
For the first cohort, of 141 students, the use of PeerWise was mandatory, with marks awarded subject to a minimum level of participation by two given deadlines. Over 95% of the registered students complied and engaged with the software. In contrast, for the second cohort, of 169 students, this was an optional activity, not rewarded with marks. As a consequence, only 62% of the students (107 of them) elected to take part. The latter group collectively created only 81 questions, compared to the 531 generated by the previous cohort, owing to a lower number of unique authors (22 vs 126) and a slightly lower average output (3 questions per author vs 4.2 over the whole semester). Other metrics of cohort participation were also lower: comments fell from 265 to 118, and answers to questions from 8707 to 4993.
However, both the quality and the difficulty of the student-authored questions, as judged by peers, were significantly higher in the cohort with optional participation in PeerWise, with difficulty averages of 0.558 (mandatory) versus 0.722 (optional), and quality averages of 2.522 versus 3.037 respectively. Other metrics of engagement in this peer-supported environment were also comparatively higher, in particular those related to “conversations” sparked by questions. These metrics incorporate replies to comments added to questions, and the actors involved in the exchanges, as calculated using Wilde’s interactional model of peer-supported digital environments, which builds on Chua’s dialogic labels of posts in her discussion-analytics work on FutureLearn MOOCs.
Further, and more interestingly, all of the 107 students who used PeerWise in the second year achieved exam marks of 50% or higher, of whom 99 obtained 60% or higher. A limitation of this study is that it does not control for self-selection bias, i.e. I cannot claim that the use of PeerWise caused students to succeed in the exam (it is possible that only “good” students chose to use the software). Nor do I claim that removing the mandatory requirement caused students to engage in deeper discussions. It is worth noting, however, that assessment design may affect student engagement even when the learning activities remain the same. This study suggests that awarding marks as an incentive for wider participation may disincentivise deeper connections. Conversely, removing this incentive may foster a higher quality of content and of interactions. Still, this benefit may not be felt widely across the class: an impossible choice, worth further study and consideration within the research community and among educators promoting the use of PeerWise in their courses.
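To make the participation and quality metrics above concrete, the short Python sketch below shows one way such figures could be derived from question and comment data. It is purely illustrative: the record structure and field names are invented for this example (they do not reflect PeerWise's actual export format), and the "conversation" count is a crude stand-in for Wilde's interactional model rather than a reproduction of it.

from collections import defaultdict
from statistics import mean

# Toy records standing in for a PeerWise export; the field names are
# invented for illustration, not PeerWise's actual export format.
questions = [
    {"id": 1, "author": "s01", "difficulty": [0.6, 0.8], "quality": [3, 4]},
    {"id": 2, "author": "s01", "difficulty": [0.7], "quality": [3]},
    {"id": 3, "author": "s02", "difficulty": [0.5, 0.9, 0.7], "quality": [2, 4, 3]},
]
comments = [  # (question_id, commenter) pairs, in posting order
    (1, "s03"), (1, "s01"), (1, "s03"),  # an exchange between two actors
    (3, "s04"),                          # a lone comment, no reply
]

# Cohort participation metrics of the kind reported above.
authors = {q["author"] for q in questions}
print("questions:", len(questions))
print("unique authors:", len(authors))
print("questions per author: %.1f" % (len(questions) / len(authors)))

# Peer-judged difficulty and quality: average the ratings on each
# question, then average across the cohort's questions.
print("mean difficulty: %.3f" % mean(mean(q["difficulty"]) for q in questions))
print("mean quality: %.3f" % mean(mean(q["quality"]) for q in questions))

# Crude stand-in for a "conversation" metric: a question's comment
# thread counts as a conversation if it involves two or more distinct
# actors. (Wilde's model also uses Chua's dialogic labels to classify
# posts, which this sketch does not attempt to reproduce.)
thread_actors = defaultdict(set)
for qid, commenter in comments:
    thread_actors[qid].add(commenter)
print("conversations sparked:", sum(1 for a in thread_actors.values() if len(a) >= 2))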
More information
Published date: 4 September 2020
Venue - Dates:
UK and Ireland Computer Science Education Conference, Virtual Event, Glasgow, United Kingdom, 2020-09-03 - 2020-09-04
Keywords:
student engagement, tools to aid computer education, interactive learning environments, PeerWise, assessment design
Identifiers
Local EPrints ID: 443659
URI: http://eprints.soton.ac.uk/id/eprint/443659
PURE UUID: 17b2775a-d875-45da-8d0c-e5f3604f34c7
Catalogue record
Date deposited: 07 Sep 2020 16:31
Last modified: 12 Nov 2024 02:46
Contributors
Author:
Adriana Wilde