1
Abstract
Text summarization is the process of producing a concise version (summary) of text from one or more information sources. If the generated summary preserves the meaning of the original text, it helps users make fast and effective decisions. However, how much of the source text's meaning is preserved is hard to evaluate. The most commonly used automatic evaluation metrics, such as Recall-Oriented Understudy for Gisting Evaluation (ROUGE), rely strictly on the n-gram units that overlap between reference and candidate summaries, which makes them ill-suited to measuring the quality of abstractive summaries. Another major challenge in evaluating text summarization systems is the lack of consistent, ideal reference summaries. Studies show that human summarizers produce variable reference summaries of the same source, which can significantly affect the scores that automatic evaluation metrics assign to summarization systems. Humans are biased toward particular content when producing a summary, and even the same person may produce substantially different summaries of the same source at different times. This paper proposes a word-embedding-based automatic text summarization and evaluation framework, which determines the salient top-n sentences of a source text as a reference summary and evaluates the quality of system summaries against it. Extensive experimental results demonstrate that the proposed framework is effective and outperforms several baseline methods, with regard to both text summarization systems and automatic evaluation metrics, when tested on a publicly available dataset.
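The abstract above notes that ROUGE scores rest entirely on n-gram overlap between a candidate and a reference summary, which is why an abstractive summary can be penalized even when it preserves meaning. A minimal sketch of recall-oriented ROUGE-N (the function names are mine, not from the paper, and real implementations add stemming and other preprocessing):

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(reference, candidate, n=1):
    """Recall-oriented ROUGE-N: overlapping n-grams / total reference n-grams."""
    ref = ngrams(reference.lower().split(), n)
    cand = ngrams(candidate.lower().split(), n)
    overlap = sum((ref & cand).values())   # multiset intersection (clipped counts)
    total = sum(ref.values())
    return overlap / total if total else 0.0
```

For example, the paraphrase pair "the film was excellent" vs. "the movie was great" scores only 0.5 at the unigram level despite near-identical meaning, illustrating the weakness for abstractive summaries that the abstract describes.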
2
Finegan-Dollak C, Radev DR. Sentence simplification, compression, and disaggregation for summarization of sophisticated documents. J Assoc Inf Sci Technol 2015. [DOI: 10.1002/asi.23576]
Affiliation(s)
- Dragomir R. Radev, Department of EECS, University of Michigan, 3917 Beyster Building, Ann Arbor, MI 48109
3
Leal Bando L, Scholer F, Turpin A. Query-biased summary generation assisted by query expansion. J Assoc Inf Sci Technol 2014. [DOI: 10.1002/asi.23222]
Affiliation(s)
- Lorena Leal Bando, School of Computer Science and IT, RMIT University, GPO Box 2476, Melbourne Vic. 3001, Australia
- Falk Scholer, School of Computer Science and IT, RMIT University, GPO Box 2476, Melbourne Vic. 3001, Australia
- Andrew Turpin, Department of Computing and Information Systems, The University of Melbourne, Level 8, Doug McDonell Building, Parkville Campus, Melbourne Vic. 3010, Australia
4
Reeve LH, Han H, Brooks AD. The use of domain-specific concepts in biomedical text summarization. Inf Process Manag 2007. [DOI: 10.1016/j.ipm.2007.01.026]
5
6
Abstract
Human variation in content selection in summarization has given rise to some fundamental research questions: How can one incorporate the observed variation in suitable evaluation measures? How can such measures reflect the fact that summaries conveying different content can be equally good and informative? In this article, we address these very questions by proposing a method for analysis of multiple human abstracts into semantic content units. Such analysis allows us not only to quantify human variation in content selection, but also to assign empirical importance weight to different content units. It serves as the basis for an evaluation method, the Pyramid Method, that incorporates the observed variation and is predictive of different equally informative summaries. We discuss the reliability of content unit annotation, the properties of Pyramid scores, and their correlation with other evaluation methods.
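The Pyramid Method described above weights each semantic content unit (SCU) by the number of human model abstracts that express it, then scores a summary against the best weight achievable with the same number of SCUs; this is how it accommodates equally good summaries with different content. A minimal sketch of that scoring step (SCU identification is a manual annotation task, so the label-based data layout here is an assumption for illustration):

```python
from collections import Counter

def pyramid_score(model_scus, summary_scus):
    """model_scus: one collection of SCU labels per human model abstract.
    summary_scus: SCU labels expressed in the summary being evaluated.
    An SCU's weight is the number of model abstracts that express it."""
    weights = Counter()
    for abstract in model_scus:
        weights.update(set(abstract))          # each abstract counts an SCU once
    expressed = set(summary_scus)
    observed = sum(weights.get(scu, 0) for scu in expressed)
    # Ideal score: the same number of SCUs drawn from the top of the pyramid.
    ideal = sum(sorted(weights.values(), reverse=True)[:len(expressed)])
    return observed / ideal if ideal else 0.0
```

A summary that selects only top-of-the-pyramid SCUs scores 1.0, while one that trades a widely agreed-upon SCU for a rarely mentioned one is penalized in proportion to the observed human agreement.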
7
Automatic Text Summarization Using a Machine Learning Approach. Advances in Artificial Intelligence 2002. [DOI: 10.1007/3-540-36127-8_20]
8
Brandow R, Mitze K, Rau LF. Automatic condensation of electronic publications by sentence selection. Inf Process Manag 1995. [DOI: 10.1016/0306-4573(95)00052-i]
9
10
Kochen M, Tagliacozzo R. Book-indexes as building blocks for a cumulative index. 1967. [DOI: 10.1002/asi.5090180204]