Neural models for Factual Inconsistency Classification with Explanations
Tathagata Raha, Mukund Choudhary, Abhinav Menon, Harshit Gupta, K V Aditya Srivatsa, Manish Gupta, Vasudeva Verma
What’s the work about?
FICLE! A new task, dataset, and baselines for kickstarting a much-needed look into Factual Inconsistencies in text:
All of this, without an external knowledge graph!
An overview of FICLE
A specialized dataset derived from the FEVER task's refuting examples, enriched with an ontology for identifying textual inconsistencies and annotated for both syntactic and semantic explanations.
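The derivation from FEVER can be sketched as below. The field names follow the public FEVER JSONL release; the filter is a simplified stand-in for the paper's full curation and annotation process.

```python
import json

def refuting_examples(fever_jsonl_path):
    """Yield FEVER claims labeled REFUTES -- the raw starting point for FICLE."""
    with open(fever_jsonl_path) as f:
        for line in f:
            example = json.loads(line)
            if example.get("label") == "REFUTES":
                yield example
```

Each retained example would then be enriched with the structural and semantic annotations described next.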
Structural Explanations Annotation
Inconsistent Claim Fact Triple (Source-Relation-Target), Context Span, and the Structural Component to which the inconsistency corresponds.
Overview statistics of the distribution of structural annotations
Semantic Explanations Annotation
Inconsistency Types are adapted from lexical relation types in linguistics, with “Set” and “Negation” added to cover other common cases.
Baseline Neural Model Pipelines for FICLE
Stage A: Span Prediction
Stage B: Structural Explanations
Stage C: Semantic Explanations
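The three stages can be sketched as a chained pipeline. The model callables here are hypothetical stand-ins for the fine-tuned transformer encoders used in the baselines, and the field names are illustrative.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class StructuralExplanation:
    source: str          # subject of the inconsistent claim triple
    relation: str        # predicate of the triple
    target: str          # object of the triple
    context_span: str    # context span the claim contradicts
    component: str = ""  # structural component carrying the inconsistency

def run_ficle_pipeline(
    claim: str,
    context: str,
    span_model: Callable[[str, str], StructuralExplanation],  # Stage A
    component_model: Callable[[StructuralExplanation], str],  # Stage B
    type_model: Callable[[StructuralExplanation], str],       # Stage C
) -> Tuple[StructuralExplanation, str]:
    expl = span_model(claim, context)        # Stage A: span prediction
    expl.component = component_model(expl)   # Stage B: structural explanation
    inconsistency_type = type_model(expl)    # Stage C: semantic explanation
    return expl, inconsistency_type
```

Each stage conditions on the previous stage's output, which is why the span prediction of Stage A feeds both classification stages.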
We also experimented with predicting just “Source, Relation and Target Prediction from Claim Sentence” and just “Inconsistent Context Span Prediction”, which mostly resulted in inferior results.
Experiments & Results - I
Note that the two problems are 5-class and 6-class classification respectively. We observe that the joint multi-task model outperforms the other two methods.
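A joint multi-task setup of the kind compared here can be sketched as two classification heads over one shared representation; the hidden size and head weights below are illustrative stand-ins, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared encoder output (stand-in for a transformer [CLS] embedding).
hidden = rng.normal(size=(1, 16))

# Two task-specific heads over the shared representation:
# a 5-class head and a 6-class head, matching the two problems above.
W_component = rng.normal(size=(16, 5))
W_type = rng.normal(size=(16, 6))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

p_component = softmax(hidden @ W_component)
p_type = softmax(hidden @ W_type)

# Joint multi-task loss: sum of the two cross-entropies,
# so both tasks update the shared encoder.
gold_component, gold_type = 2, 4
loss = -np.log(p_component[0, gold_component]) - np.log(p_type[0, gold_type])
```

Training both heads against one shared encoder lets the two explanation tasks regularize each other, which is one plausible reason the joint model wins.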
Experiments & Results - II
DeBERTa outperforms all other models. Embeddings & Two-step methods help!
Experiments & Results - III
Thank You!