Title: Multilingual Summarization Evaluation without Human Models
Abstract: We study the correlation of rankings of text summarization systems under evaluation methods with and without human models. We apply our comparison framework to several well-established content-based evaluation measures in text summarization, such as Coverage, Responsiveness, Pyramids, and ROUGE, studying their associations in various text summarization tasks, including generic and focus-based multi-document summarization in English and generic single-document summarization in French and Spanish. The research is carried out using a new content-based evaluation framework called Fresa, which computes a variety of divergences among probability distributions.
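The abstract describes Fresa as computing divergences among probability distributions to score summaries without human-written references. The sketch below illustrates one such measure, the Jensen-Shannon divergence between smoothed unigram distributions of a source document and a candidate summary; the function names and add-one smoothing are illustrative assumptions, not Fresa's actual implementation.

```python
from collections import Counter
from math import log

def unigram_dist(text, vocab):
    """Add-one-smoothed unigram distribution over a shared vocabulary.
    (Smoothing choice is an assumption for this sketch.)"""
    counts = Counter(text.lower().split())
    total = sum(counts.values()) + len(vocab)
    return {w: (counts[w] + 1) / total for w in vocab}

def kl(p, q):
    """Kullback-Leibler divergence D(p || q)."""
    return sum(p[w] * log(p[w] / q[w]) for w in p)

def js_divergence(source, summary):
    """Jensen-Shannon divergence between the unigram distributions of a
    source document and a summary; lower values mean closer content."""
    vocab = set(source.lower().split()) | set(summary.lower().split())
    p = unigram_dist(source, vocab)
    q = unigram_dist(summary, vocab)
    m = {w: 0.5 * (p[w] + q[w]) for w in vocab}
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

if __name__ == "__main__":
    doc = "the cat sat on the mat while the dog slept by the door"
    summ = "the cat sat on the mat"
    print(f"JS divergence: {js_divergence(doc, summ):.4f}")
```

A model-free evaluation of this kind needs only the source text and the candidate summary, which is what allows the paper to rank systems in English, French, and Spanish without collecting human reference summaries.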
Publication Year: 2010
Publication Date: 2010-08-23
Language: en
Type: article
Access and Citation
Cited By Count: 78