Feature Selection for Summarising: The Sunderland DUC 2004 Experience.
In: Document Understanding Conference 2004
In this paper we describe our participation in Task 1 of DUC 2004: very short single-document summaries. The task is related to our research project, which aims to produce abstract-style summaries that improve search-engine result summaries. Since DUC restricted summaries to no more than 75 characters, we focused on feature selection, producing a set of keywords as the summary rather than complete sentences. We describe three summarisers; each performs very differently across the six ROUGE metrics. One of them, which uses a simple algorithm to produce summaries without supervised learning or complicated NLP techniques, performs surprisingly well across the different ROUGE evaluations. Finally, we give an analysis of ROUGE and the participants' results. ROUGE is a package for the automatic evaluation of summaries: it uses n-gram matching to calculate the overlap between machine and human summaries, and so saves human evaluation time. However, its different metrics give different results, and it is hard to judge which is best for automatic summary evaluation; it also does not evaluate complete sentences. We therefore suggest that further work on ROUGE is needed to make it fully effective.
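To illustrate the kind of n-gram matching the abstract attributes to ROUGE, here is a minimal sketch of a ROUGE-N-style recall score. This is not the ROUGE package itself; the function names and the simple whitespace tokenisation are illustrative assumptions.

```python
from collections import Counter

def ngrams(tokens, n):
    # Multiset of n-grams (as tuples) from a token list.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_recall(candidate, reference, n=1):
    # ROUGE-N-style recall: the fraction of the reference's n-grams
    # (counted with multiplicity) that also appear in the candidate.
    ref = ngrams(reference.lower().split(), n)
    cand = ngrams(candidate.lower().split(), n)
    total = sum(ref.values())
    if total == 0:
        return 0.0
    overlap = sum(min(count, cand[gram]) for gram, count in ref.items())
    return overlap / total
```

For example, the keyword-style summary "the cat sat" against the reference "the cat sat on the mat" recovers 3 of the 6 reference unigrams, giving a recall of 0.5.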