Streamlining Event Relation Extraction
A Pipeline Leveraging Pretrained and Large Language Models for Inference
Previous Contributions:
Pipeline Goal: Use the developed event relation extraction system for inference to analyze user-provided text and identify four semantically precise event relations.
Pipeline Tasks:
[Figure: example event relation graph extracted from the sentence below, with event nodes (operation, disarm, terrorism, free Iraqi people) linked by the relations intends_to_cause, cause, prevents, and alternative_to]
“As the US claimed, the intent of the military operation was to disarm Iraq of weapons of mass destruction, to end support for terrorism, and to free the Iraqi people”
Gustavo Miguel Flores, Youssra Rebboud, Pasquale Lisena, and Raphaël Troncy
Upload your text and select models for each task (Relation Detection, Classification, and Event Extraction). The pipeline, powered by BERT, RoBERTa, REBEL, and LLMs like GPT-4 and Zephyr, extracts key causal event relations, including Direct-Cause, Enable, Intend, and Prevent. Your input will be processed, highlighting subjects, objects, and their respective relation types for clear, actionable insights.
Choose the model for Event Extraction (span detection)
Select a set of predefined example sentences to do event relation extraction
Enter your sentence here
Enter your OpenAI key in case you choose GPT models
Choose the model for Event Relation Detection
Choose the model for Relation Classification
RoBERTa
REBEL
The government implemented a nationwide vaccine program to prevent the spread of the influenza outbreak.
REBEL
The government implemented a nationwide vaccine (prevent-subj) program to prevent the spread (prevent-obj) of the influenza outbreak.
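For reference, a minimal Streamlit sketch of this interface; the widget labels mirror the demo above, while `run_pipeline` and its stub output are hypothetical placeholders for the actual backend.

```python
# app.py -- minimal sketch of the Streamlit front end described above.
# The backend call `run_pipeline` is a hypothetical placeholder, not the demo's actual code.
import streamlit as st

def run_pipeline(text, detection_model, classification_model, extraction_model, openai_key=None):
    """Placeholder for the real backend that chains detection, classification, and extraction."""
    return [{"subject": "vaccine program", "relation": "prevent", "object": "the spread"}]

st.title("Event Relation Extraction")

sentence = st.text_area("Enter your sentence here")
example = st.selectbox(
    "Select a set of predefined example sentences to do event relation extraction",
    ["The government implemented a nationwide vaccine program to prevent the spread "
     "of the influenza outbreak."],
)
detection_model = st.selectbox("Choose the model for Event Relation Detection", ["RoBERTa"])
classification_model = st.selectbox("Choose the model for Relation Classification",
                                    ["BERT", "REBEL", "GPT-4", "Zephyr"])
extraction_model = st.selectbox("Choose the model for Event Extraction (span detection)",
                                ["ALBERT", "REBEL", "GPT-4", "Zephyr"])
openai_key = st.text_input("Enter your OpenAI key in case you choose GPT models", type="password")

if st.button("Extract relations"):
    results = run_pipeline(sentence or example, detection_model,
                           classification_model, extraction_model, openai_key=openai_key)
    for r in results:
        st.write(f"{r['subject']} --{r['relation']}--> {r['object']}")
```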
Scan Me! Experience fine-grained causal event relation extraction directly from your text with our API.
Event Relation Detection with RoBERTa
Class | Precision | Recall | F1-score |
1 (causal relation) | 0.94 | 0.89 | 0.92 |
0 (no causal relation) | 0.76 | 0.85 | 0.80 |

Event Relation Classification with BERT/REBEL
Model | Precision | Recall | F1-score |
BERT-base | 0.9748 | 0.9747 | 0.974 |
BERT-large | 0.969 | 0.968 | 0.968 |
REBEL | 0.976 | 0.975 | 0.975 |

Event Extraction with ALBERT/REBEL
Model | Precision | Recall | F1-score |
ALBERT | 0.645 | 0.675 | 0.660 |
REBEL | 0.832 | 0.828 | 0.829 |
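As an illustration of how the fine-tuned PLMs reported above can be queried at inference time, a hedged sketch using the Hugging Face `transformers` pipeline API; the checkpoint names and label strings are placeholders, not the project's published artifacts.

```python
from transformers import pipeline

# Placeholder checkpoint names: substitute the project's fine-tuned models.
detector = pipeline("text-classification", model="your-org/roberta-event-relation-detection")
classifier = pipeline("text-classification", model="your-org/bert-event-relation-classification")

sentence = ("The government implemented a nationwide vaccine program "
            "to prevent the spread of the influenza outbreak.")

detection = detector(sentence)[0]       # e.g. {'label': 'LABEL_1', 'score': 0.97}
if detection["label"] == "LABEL_1":     # assumed label id for "causal relation present"
    relation = classifier(sentence)[0]  # e.g. {'label': 'Prevent', 'score': 0.91}
    print(relation["label"])
```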
Event Relation Extraction API
Motivation
Template:
Introduction:
Extract the subject, object, and relation from the following sentence. The sentence has one of the following relations: cause, enable, prevent, or intend.
Definitions of direct-cause, intend-to-cause, enable, and prevent.
{examples}
Request:
Extract the Subject, Object, and relation for the following sentence:
Sentence: "{input_sentence}"
Output format:
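One way to instantiate this template is with LangChain's PromptTemplate, as sketched below; the definitions, few-shot example, and one-line output format are illustrative stand-ins rather than the demo's verbatim prompt.

```python
from langchain.prompts import PromptTemplate

TEMPLATE = """Extract the subject, object, and relation from the following sentence.
The sentence has one of the following relations: cause, enable, prevent, or intend.

{definitions}

{examples}

Request:
Extract the subject, object, and relation for the following sentence:
Sentence: "{input_sentence}"
Answer on one line as: subject | relation | object"""

prompt = PromptTemplate(
    input_variables=["definitions", "examples", "input_sentence"],
    template=TEMPLATE,
)

print(prompt.format(
    definitions="Direct-cause: ...  Intend-to-cause: ...  Enable: ...  Prevent: ...",
    examples='Sentence: "Heavy rain caused severe flooding." -> heavy rain | cause | severe flooding',
    input_sentence="The government implemented a nationwide vaccine program "
                   "to prevent the spread of the influenza outbreak.",
))
```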
Application Framework
User Interface
Streamlit
LLMs
PLMs
Pipeline Architecture
Results
Models and Dataset
Datasets
Dataset | Total | Direct-Cause | Enable | Prevent | Intend | No-relation |
Event Relation dataset | 2,196 | 268 | 540 | 611 | 601 | 172 |
CausalNews Corpus | 3,417 | 1,811 | 0 | 0 | 0 | 1,606 |
Total | 5,613 | 2,079 | 540 | 611 | 601 | 1,778 |
Models used
[1] Y. Rebboud, et al. Beyond Causality: Representing Event Relations in Knowledge Graphs. EKAW 2022, Bolzano, Italy.
[2] Y. Rebboud, et al. Prompt-based Data Augmentation for Semantically-Precise Event Relation Classification. SEMMES 2023, Heraklion, Greece.
The pipeline integrates pre-existing fine-tuned PLMs and prompted LLMs, both applied at inference time, to extract semantically precise event relations.
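A sketch of what this inference-time dispatch between a fine-tuned PLM and a prompted LLM could look like for the classification step; the checkpoint name, the GPT-4 prompt, and the expected output line are assumptions for illustration.

```python
from transformers import pipeline
from openai import OpenAI

def classify_relation(sentence: str, model_choice: str, openai_key: str | None = None) -> str:
    """Classify an already-detected relation as Direct-Cause, Enable, Prevent, or Intend."""
    if model_choice in {"BERT", "REBEL"}:
        # Fine-tuned PLM path (placeholder checkpoint name).
        clf = pipeline("text-classification",
                       model="your-org/bert-event-relation-classification")
        return clf(sentence)[0]["label"]

    # Prompted LLM path (GPT-4 shown; Zephyr would be served through a local chat pipeline instead).
    client = OpenAI(api_key=openai_key)
    prompt = ('Extract the subject, object, and relation (cause, enable, prevent, or intend) '
              f'from the sentence: "{sentence}". Answer on one line as: subject | relation | object')
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example call (PLM path), e.g. returns "Prevent" with the assumed label set:
# classify_relation("The vaccine program prevented the outbreak.", "BERT")
```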
Future Work
Hugging Face
LangChain