SummaRuNNer: A Recurrent Neural Network Based Sequence Model for Extractive Summarization of...
TRANSCRIPT
SummaRuNNer
Ramesh Nallapati, Feifei Zhai, Bowen Zhou
Presented by:
Sharath T.S
Shubhangi Tandon
Contributions of this paper
SummaRuNNer, a simple recurrent-network-based sequence classifier that outperforms or matches state-of-the-art models for extractive summarization.
The simple formulation of the model facilitates interpretable visualization of its decisions.
A novel training mechanism that allows the extractive model to be trained end-to-end using abstractive summaries.
SummaRuNNer
Treat extractive summarization as a sequence classification problem
Each sentence is visited sequentially in the original document order
A binary decision is made (taking into account previous decisions)
A GRU-based RNN is the basic building block of the sequence classifier.
It is a recurrent network with two gates: u, the update gate, and r, the reset gate (see the equations below).
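As a reference, the GRU gating can be written out as below. This is the standard GRU formulation (the paper uses an equivalent parameterization); the W and b symbols denote learned weights and biases, and ⊙ is element-wise multiplication.

```latex
% Standard GRU cell (update gate u, reset gate r), as referenced above.
u_j  = \sigma\left( W_{ux} x_j + W_{uh} h_{j-1} + b_u \right)                  % update gate
r_j  = \sigma\left( W_{rx} x_j + W_{rh} h_{j-1} + b_r \right)                  % reset gate
h'_j = \tanh\left( W_{hx} x_j + W_{hh} \, (r_j \odot h_{j-1}) + b_h \right)    % candidate state
h_j  = (1 - u_j) \odot h'_j + u_j \odot h_{j-1}                                % new hidden state
```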
Recurrent Neural Networks: LSTMs
●Input gate: Decides what fraction of the new input flowing into the LSTM cell has to be updated.
LSTMs - Continued
●Forget gate: Decides what fraction of the current cell state to forget; the cell state is then updated with the new information.
LSTMs - Continued
●Output gate: Evaluates the new cell state and decides what parts of the information have to be output.
Refer: http://colah.github.io/posts/2015-08-Understanding-LSTMs/
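For comparison with the gates described above, a standard LSTM cell can be summarized as below (this is the textbook formulation from the linked post, not anything specific to this paper):

```latex
% Standard LSTM cell: input (i), forget (f), and output (o) gates.
i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)           % input gate
f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)           % forget gate
o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)           % output gate
\tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c)    % candidate cell state
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t     % cell state update
h_t = o_t \odot \tanh(c_t)                           % hidden state / output
```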
GRU vs. LSTMs
Modifications compared to LSTMs:
It combines the forget (f) and input (i) gates into a single update gate.
It merges the cell state and the hidden state into one state.
The Model
SummaRuNNer Model:
● Two-layer bi-directional GRU-RNN - the first layer runs at the word level and computes hidden state representations at each word position; a second word-level RNN runs backwards from the last word to the first.
● The second layer of the bi-directional RNN runs at the sentence level and accepts the average-pooled, concatenated hidden states of the word-level RNNs as input.
● Document representation: a non-linear transform of the average of the concatenated hidden states of the sentence-level bidirectional RNN; a runnable sketch of this encoder is given below.
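The following is a minimal sketch of the hierarchical encoder, assuming PyTorch. The shapes and the final tanh projection follow the paper's description; the exact hyperparameters (embedding and hidden sizes) are placeholders, not the paper's settings.

```python
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    def __init__(self, vocab_size=150_000, emb_dim=100, hidden=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # First layer: bi-directional GRU over the words of each sentence.
        self.word_rnn = nn.GRU(emb_dim, hidden, bidirectional=True, batch_first=True)
        # Second layer: bi-directional GRU over sentence representations.
        self.sent_rnn = nn.GRU(2 * hidden, hidden, bidirectional=True, batch_first=True)
        # Non-linear transform that turns pooled sentence states into the
        # document representation d (tanh of a linear map of the average).
        self.doc_proj = nn.Linear(2 * hidden, 2 * hidden)

    def forward(self, doc_tokens):
        # doc_tokens: (num_sentences, max_words) word-id matrix for ONE document.
        word_emb = self.embed(doc_tokens)                         # (S, W, E)
        word_states, _ = self.word_rnn(word_emb)                  # (S, W, 2H)
        # Average-pool the concatenated word-level hidden states per sentence.
        sent_inputs = word_states.mean(dim=1)                     # (S, 2H)
        sent_states, _ = self.sent_rnn(sent_inputs.unsqueeze(0))  # (1, S, 2H)
        sent_states = sent_states.squeeze(0)                      # (S, 2H)
        # Document representation: non-linear transform of the average of
        # the sentence-level hidden states.
        d = torch.tanh(self.doc_proj(sent_states.mean(dim=0)))    # (2H,)
        return sent_states, d

# Usage: one document with 5 sentences of up to 20 words each.
enc = HierarchicalEncoder()
doc = torch.randint(0, 150_000, (5, 20))
h, d = enc(doc)
print(h.shape, d.shape)  # torch.Size([5, 400]) torch.Size([400])
```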
Computing the Posterior - Logistic Loss
The posterior P(y_j = 1) for each sentence and the logistic training loss are given by Equation (7) in the paper; see the formulation below.
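A minimal reconstruction of that formulation, following the paper's notation: W_c, W_s, W_r, W_ap, and W_rp are learned weights, p_j^a and p_j^r are absolute and relative position embeddings, and s_j is the running summary representation.

```latex
% Posterior for sentence j: content, salience, novelty, and position terms.
P(y_j = 1 \mid h_j, s_j, d) = \sigma\Big( W_c h_j        % content
    + h_j^{\top} W_s d                                    % salience w.r.t. document d
    - h_j^{\top} W_r \tanh(s_j)                           % novelty w.r.t. the summary so far
    + W_{ap} p_j^{a} + W_{rp} p_j^{r} + b \Big),          % absolute / relative position
\qquad s_j = \sum_{i=1}^{j-1} h_i \, P(y_i = 1 \mid h_i, s_i, d)

% Logistic loss (Equation 7): negative log-likelihood of the binary labels.
\ell(W, b) = -\sum_{d=1}^{N} \sum_{j=1}^{N_d}
    \Big[ y_j^d \log P(y_j^d = 1 \mid h_j^d, s_j^d, d_d)
        + (1 - y_j^d) \log\big(1 - P(y_j^d = 1 \mid h_j^d, s_j^d, d_d)\big) \Big]
```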
Extractive Summary labels - Greedy Algorithm
Why is it needed?
Most summarization corpora only contain human-written abstractive summaries as ground truth.
Algorithm: the selected sentences from the document should be the ones that maximize the ROUGE score with respect to the gold summaries, so sentences are added greedily one at a time.
Stop when none of the remaining candidates, when added, improves the ROUGE score.
Train the network with the resulting labelled data (a sketch of the greedy procedure is shown below).
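A minimal sketch of the greedy label-generation step. The `rouge` function here is a crude unigram-recall placeholder, not the actual ROUGE implementation; any real scorer could be plugged in.

```python
def rouge(candidate_sentences, gold_summary):
    """Placeholder scorer: unigram recall of the candidate set against the gold summary."""
    gold_tokens = set(gold_summary.split())
    cand_tokens = set(" ".join(candidate_sentences).split())
    return len(gold_tokens & cand_tokens) / max(len(gold_tokens), 1)

def greedy_extractive_labels(doc_sentences, gold_summary):
    """Greedily pick sentences that maximize ROUGE w.r.t. the gold (abstractive) summary.
    Returns a 0/1 label per sentence, used as ground truth for extractive training."""
    selected = []
    best_score = 0.0
    while True:
        best_gain, best_idx = 0.0, None
        for i, sent in enumerate(doc_sentences):
            if i in selected:
                continue
            score = rouge([doc_sentences[j] for j in selected] + [sent], gold_summary)
            if score - best_score > best_gain:
                best_gain, best_idx = score - best_score, i
        if best_idx is None:  # no remaining candidate improves ROUGE -> stop
            break
        selected.append(best_idx)
        best_score += best_gain
    return [1 if i in selected else 0 for i in range(len(doc_sentences))]

# Usage:
doc = ["the cat sat on the mat .", "stocks fell sharply today .", "the cat is happy ."]
print(greedy_extractive_labels(doc, "the cat sat on a mat"))  # e.g. [1, 0, 0]
```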
Abstractive training - Decoder
Apart from the sigmoid used to compute the class a sentence belongs to, the decoder additionally does the following:
Takes the embedding of the word from the previous step as its input x_k; s_{-1} is the summary representation computed at the last sentence of the sentence-level RNN (Equation 7).
Computes softmax to output the most probable word.
Optimizes the log-likelihood of the word distribution in the abstractive summaries (the context is captured by the RNN).
Predicts using the weights W alone, without the decoder, on test samples.
Decoder - Continued
How does it work?
The summary representation s−1 acts as an information channel between the SummaRuNNer model and the decoder.
Maximizing the probability of abstractive summary words as computed by the decoder will require the model to learn a good summary representation which in turn depends on accurate estimates of extractive probabilities p(yj).
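Schematically, the decoder and its training objective can be sketched as below. The emission parameterization (the weights W', W'', W_v, b', b_v) is an illustrative choice rather than a copy of the paper's exact equations; the key point is that s_{-1} conditions every decoding step.

```latex
% Decoder hidden state: a GRU-RNN fed with the embedding x_k of the previous
% summary word; s_{-1} is the extractive summary representation computed at
% the last sentence of the sentence-level RNN.
h^{dec}_k = \mathrm{GRU}\big( x_k, \; h^{dec}_{k-1} \big)

% Word emission: softmax over the vocabulary, conditioned on s_{-1} at every step.
P(w_k \mid w_{<k}, s_{-1}) = \mathrm{softmax}\big( W_v \tanh( W' h^{dec}_k + W'' s_{-1} + b' ) + b_v \big)

% Abstractive training objective: log-likelihood of the reference summary words,
% which can only be high if s_{-1} (and hence the extractive probabilities) is informative.
\ell_{abs} = -\sum_{k} \log P\big( w_k \mid w_{<k}, s_{-1} \big)
```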
SummaRuNNer Visualisation
Corpus used
Daily Mail (Cheng & Lapata): 200k train, 12k validation, 10k test
Daily Mail/CNN (Nallapati): 286k train, 13k validation, 11k test
DUC 2002: 567 documents (out-of-domain testing)
Average statistics
28 sentences / document
3-4 sentences in the reference summary
802 words / document
Training Data Constraints
Vocabulary size: 150k
Maximum sentences / document: 100
Maximum sentence length: 50 words
Model hidden state size: 200
Batch size: 64
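A small sketch of how these preprocessing limits might be applied; the constant and function names are illustrative, not from the paper.

```python
MAX_SENTS_PER_DOC = 100   # maximum sentences per document
MAX_WORDS_PER_SENT = 50   # maximum words per sentence
VOCAB_SIZE = 150_000      # most frequent words kept; the rest map to <unk>

def truncate_document(sentences, word_to_id):
    """Clip a tokenized document to the training-time limits and map words to ids."""
    unk_id = word_to_id.get("<unk>", 0)
    clipped = []
    for sent in sentences[:MAX_SENTS_PER_DOC]:
        ids = [word_to_id.get(w, unk_id) for w in sent[:MAX_WORDS_PER_SENT]]
        clipped.append(ids)
    return clipped
```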
Experiments and Results : Daily Mail Corpus
Experiments and Results : Daily Mail /CNN data
Experiments and Results : DUC 2002 data
Future Work
Pre-train the extractive model using abstractive training.
Construct a joint extractive-abstractive model where the predictions of the extractive component form stochastic intermediate units to be consumed by the abstractive component.