What do evaluation instruments tell us about the quality of complementary medicine information on the internet?
Breckons, M., Jones, R.M., Morris, J. and Richardson, J. (2008) What do evaluation instruments tell us about the quality of complementary medicine information on the internet? Journal of Medical Internet Research, 10, (1), e3. (doi:10.2196/jmir.961).
Background: Developers of health information websites aimed at consumers need methods to assess whether their website is of “high quality.” Due to the nature of complementary medicine, website information is diverse and may be of poor quality. Various methods have been used to assess the quality of websites, the two main approaches being (1) to compare the content against some gold standard, and (2) to rate various aspects of the site using an assessment tool.
Objective: We aimed to review available evaluation instruments to assess their performance when used by a researcher to evaluate websites containing information on complementary medicine and breast cancer. In particular, we wanted to see if instruments used the same criteria, agreed on the ranking of websites, were easy to use by a researcher, and if use of a single tool was sufficient to assess website quality.
Methods: Bibliographic databases, search engines, and citation searches were used to identify evaluation instruments. Instruments were included if they enabled users with no subject knowledge to make an objective assessment of a website containing health information. The elements of each instrument were compared against nine main criteria defined by a previous study. Google was used to search for complementary medicine and breast cancer sites; the first six results plus a purposive sample of six sites from different origins (charities, sponsored, commercial) were chosen. Each website was assessed using each tool, and the percentage of criteria successfully met was recorded. The rankings of the websites produced by each tool were then compared. Uptake of the instruments by others was estimated by citation analysis and Google searching.
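The per-tool scoring and rank comparison described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' actual procedure: the checklist answers and instrument names are hypothetical, and Spearman's rank correlation is used here as one plausible way to quantify agreement between two instruments' rankings.

```python
# Illustrative sketch (hypothetical data): score websites against an
# instrument's checklist, then compare the rank orders produced by two
# instruments using Spearman's rank correlation.

def percent_criteria_met(answers):
    """Percentage of checklist criteria a website satisfies."""
    return 100.0 * sum(answers) / len(answers)

def ranks(scores):
    """Rank scores in descending order (1 = best); ties share the average rank."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    r = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho between two score lists, computed via ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical checklist answers (True = criterion met) for three sites
# under instrument A, and percentage scores for the same sites under
# a second instrument B:
site_scores_a = [percent_criteria_met(a) for a in
                 [[True, True, False, True],    # site 1: 75%
                  [True, False, False, False],  # site 2: 25%
                  [True, True, True, True]]]    # site 3: 100%
site_scores_b = [70.0, 30.0, 95.0]

print(round(spearman(site_scores_a, site_scores_b), 6))  # → 1.0 (identical rank order)
```

A rho near 1 means the two instruments rank the sites the same way, which is the kind of agreement the study reports for most of the tools it compared.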
Results: A total of 39 instruments were identified, 12 of which met the inclusion criteria; the instruments contained between 4 and 43 questions. When applied to the 12 websites, 10 of the instruments agreed on the rank order of the sites. Instruments varied in the range of criteria they assessed and in their ease of use.
Conclusions: Comparing the content of websites against a gold standard is time consuming and only feasible for very specific advice. Evaluation instruments offer gateway providers a method to assess websites. The checklist approach has face validity when results are compared to the actual content of “good” and “bad” websites. Although instruments differed in the range of items assessed, there was fair agreement between most available instruments. Some were easier to use than others, but these were not necessarily the instruments most widely used to date. Combining some of the better features of instruments to provide fewer, easy-to-use methods would be beneficial to gateway providers.
Digital Object Identifier (DOI): doi:10.2196/jmir.961
Subjects: R Medicine > RZ Other systems of medicine
Divisions: Faculty of Health Sciences
Date Deposited: 17 Nov 2011 11:28
Last Modified: 27 Mar 2014 19:48