Paper_Name | Conference | From | Note | Dataset
meta learning to classify intent and slot labels with noisy | - | - | The framing is very good: it introduces the various few-shot learning methods in turn, from first principles and in three respects; somewhat CV-leaning; about 37 pages. | -
- | Workshop | Alexa | Trains a network robust to noisy samples that can also be used for few-shot learning; pins down a new task, but no link is available. | ATIS, SNIPS, TOP
- | Workshop | Alexa | Starts from the encoder side and from data augmentation; the augmentation happens only at the embedding level, so it necessarily lacks interpretability. | -
ProtoDA: Efficient Transfer Learning for Few-Shot Intent Classification | AAAI 2021 | Alexa | A gain of one-plus points over the baseline; the striking part is the unusual idea of using only syntactic information. Although not few-shot learning itself, it gave me the idea that external information can be exploited to learn from few samples. | SNIPS, ATIS
A survey of joint intent detection and slot-filling models in natural language understanding | Arxiv | University of Sydney | - | -
Multi-Domain Spoken Language Understanding Using Domain- and Task-Aware Parameterization | None | Prof. Shiyu's work | Tackles SLU with a seq-to-seq model, but the few-shot experiments lack comparisons against other baselines, for which the reviewers criticized it heavily. | ASMixed, MTOD
Multi-Domain Spoken Language Understanding Using Domain- and Task-Aware Parameterization | Arxiv | Baseline of Prof. Shiyu's work | An SLU paper focused mainly on domain adaptation rather than few-shot; training data is scaled by percentages rather than a handful of examples. The main idea is that different domains and tasks should get some parameters of their own. Joint training is also a direction; it was very hot five years ago, not so much now. https://blog.csdn.net/weixin_37947156/article/details/87608018 | -
Multi-Domain Adversarial Learning for Slot Filling in Spoken Language Understanding | CoRR | Bing Liu | Uses adversarial learning to learn a more general, domain-invariant representation. | -
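A common way to realize this kind of domain-adversarial training is a gradient reversal layer (Ganin & Lempitsky, 2015): a domain classifier is trained on the shared features while the reversed gradient pushes the encoder toward domain-invariant representations. A minimal PyTorch sketch, not this paper's exact architecture (the GRU encoder, sizes, and the coefficient lam are illustrative):

    import torch
    from torch import nn

    class GradReverse(torch.autograd.Function):
        # identity in the forward pass; flips and scales the gradient going back
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_out):
            return -ctx.lam * grad_out, None

    class AdversarialSLU(nn.Module):
        def __init__(self, emb_dim=300, hidden=256, n_slots=20, n_domains=3, lam=0.1):
            super().__init__()
            self.encoder = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
            self.slot_head = nn.Linear(2 * hidden, n_slots)      # per-token slot tags
            self.domain_head = nn.Linear(2 * hidden, n_domains)  # adversarial domain classifier
            self.lam = lam

        def forward(self, emb):  # emb: (batch, seq, emb_dim) word embeddings
            h, _ = self.encoder(emb)
            slot_logits = self.slot_head(h)
            # the domain classifier sees gradient-reversed features, so minimizing its
            # loss pushes the encoder toward representations that hide the domain
            dom_logits = self.domain_head(GradReverse.apply(h.mean(dim=1), self.lam))
            return slot_logits, dom_logits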
Attention-Based Recurrent Neural Network Models for Joint Intent Detection and Slot Filling | NIPS 2015 | Bing Liu | The first paper to propose joint training of the two tasks. | -
OneNet: Joint Domain, Intent, Slot Prediction for SLU | Workshop | - | - | -
Joint Slot Filling and Intent Detection via Capsule Networks | ACL 2019 | Alibaba, Tencent America | Uses the capsule networks that were hot in 2018/19 to propose a semantically hierarchical network. | SNIPS, ATIS
Exploring transfer learning for end-to-end spoken language understanding | AAAI 2021 | Alexa | - | -
Zero-shot User Intent Detection via Capsule Neural Networks | EMNLP 2018 | - | Borrows the capsule network; you need to be familiar with the capsule's internal structure to understand some of its workings. | -
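The internal mechanism both capsule entries build on is routing-by-agreement. Here is a minimal sketch of the dynamic routing step from Sabour et al. (2017); the shapes and three iterations follow that paper, everything else is stripped down:

    import torch
    import torch.nn.functional as F

    def squash(s, eps=1e-8):
        # shrink vectors so the norm lies in (0, 1) while keeping the direction
        norm2 = (s ** 2).sum(-1, keepdim=True)
        return norm2 / (1 + norm2) * s / (norm2.sqrt() + eps)

    def dynamic_routing(u_hat, num_iters=3):
        # u_hat: predictions from lower capsules, shape (batch, n_in, n_out, d_out)
        b = torch.zeros(u_hat.shape[:3], device=u_hat.device)  # routing logits
        for _ in range(num_iters):
            c = F.softmax(b, dim=2)                    # couplings over output capsules
            s = (c.unsqueeze(-1) * u_hat).sum(dim=1)   # weighted sum -> (batch, n_out, d_out)
            v = squash(s)
            # raise b where a lower capsule's prediction agrees with the output v
            b = b + (u_hat * v.unsqueeze(1)).sum(-1)
        return v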
Towards Scalable Multi-Domain Conversational Agents: The Schema-Guided Dialogue Dataset | AAAI 2020 | Google Research | Proposes a brand-new dataset and a new evaluation protocol that supports some few-shot settings. | -
Accelerating Natural Language Understanding in Task-Oriented Dialog | ACL workshop | - | Proposes a very simple network; worth consulting. | -
Meta Learning for Few-Shot Joint Intent Detection and Slot-Filling | ICML 2020 | - | Jointly predicts the two tasks, approached from the few-shot angle, and extends to the cross-lingual setting. | -
Linguistically-Enriched and Context-Aware Zero-shot Slot Filling | Arxiv | San José | Integrates linguistic information as well as contextual information. Crucially, the results are strong: 17 points above the 2020 baseline. | SNIPS, ATIS, MultiWOZ
Coach: A Coarse-to-Fine Approach for Cross-domain Slot Filling | ACL 2020 (short) | HKUST | Proposes a coarse-to-fine model that can serve as my baseline; also adds a template-based component. | SNIPS and NER datasets
Robust Zero-Shot Cross-Domain Slot Filling with Example Values | ACL 2019 | MIT, Google, Alexa | - | SNIPS, XSchema
Few-shot Slot Tagging with Collapsed Dependency Transfer and Label-enhanced Task-adaptive Projection Network | ACL 2020 | HIT | Addresses only the slot tagging problem, approaching it from the NER/CRF angle. | -
Zero-Shot Transfer Learning with Synthesized Data for Multi-Domain Dialogue State Tracking | ACL 2020 | Stanford | Starts from the data side: proposes a new way of synthesizing dialogue data, and models trained on it do better in few-shot settings, by more than ten points, astonishingly. | MultiWOZ
Self-Supervised Meta-Learning for Few-Shot Natural Language Classification Tasks | EMNLP 2020 | Microsoft | Starts from pre-training and trains via meta-learning; gains of roughly four points, give or take. Few equations, which marks it as a fine-tuning-style paper. | 17 NLP classification datasets
A Closer Look At Feature Space Data Augmentation For Few-Shot Intent Classification | EMNLP 2019 | Amazon | Starts from data augmentation, targeting only the intent classification problem. | SNIPS, FB Dialog
Hierarchical Attention Prototypical Networks for Few-Shot Text Classification | EMNLP 2019 | PKU, Microsoft | Starts from attention, aiming to enrich the feature-space representation; computes attention at three levels. | FewRel, CSID
Joint Slot Filling and Intent Detection in Spoken Language Understanding by Hybrid CNN-LSTM Model | ICML 2020 | - | - | ATIS
Joint Intent Detection and Slot Filling with Wheel-Graph Attention Networks | Arxiv | - | Introduces a graph structure and reaches SOTA on both datasets. | ATIS, SNIPS
Recent Neural Methods on Slot Filling and Intent Classification for Task-Oriented Dialogue Systems: A Survey | COLING | - | Also covers the non-joint methods separately, including multilingual material. | -
BERT for Joint Intent Classification and Slot Filling | - | DAMO Academy | - | -
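The recipe in this paper is widely reproduced and easy to sketch: the pooled [CLS] state feeds an intent classifier, the per-token states feed a slot tagger, and the two cross-entropy losses are summed. A minimal version with Hugging Face transformers (the head names and sizes are mine):

    from torch import nn
    from transformers import BertModel

    class JointBert(nn.Module):
        # intent from the pooled [CLS] state, slot tags from per-token states
        def __init__(self, n_intents, n_slots):
            super().__init__()
            self.bert = BertModel.from_pretrained("bert-base-uncased")
            hidden = self.bert.config.hidden_size
            self.intent_head = nn.Linear(hidden, n_intents)
            self.slot_head = nn.Linear(hidden, n_slots)

        def forward(self, input_ids, attention_mask):
            out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
            intent_logits = self.intent_head(out.pooler_output)   # (batch, n_intents)
            slot_logits = self.slot_head(out.last_hidden_state)   # (batch, seq, n_slots)
            return intent_logits, slot_logits

    # joint training just sums the two cross-entropy losses:
    #   loss = ce(intent_logits, intents) + ce(slot_logits.flatten(0, 1), slot_tags.flatten())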

Cross-Lingual + Few Shot
MultiATIS++:
End-to-End Slot Alignment and Recognition for Cross-Lingual NLU | EMNLP 2020 | Alexa | Needs no explicit alignment links from source to target language; matching is automatic. Results are averaged over five runs; the experiments are very thorough. | MultiATIS++ (multilingual)
To What Degree Can Language Borders Be Blurred In BERT-based Multilingual Spoken Language Understanding? | COLING 2020 | Alexa | Uses multilingual BERT; the model's own novelty is its adversarial component. | MultiATIS++ (multilingual)
Simultaneous Slot Filling, Translation, Intent Classification, and Language Identification: Initial Results using mBART on MultiATIS++ | AACL | Alexa | Preliminary experimental results with mBART. | MultiATIS++ (multilingual)
Multilingual Code-Switching for Zero-Shot Cross-Lingual Intent Prediction and Slot Filling | Arxiv 2020 | - | - | -
Multi-lingual Intent Detection and Slot Filling in a Joint BERT-based Model | Arxiv 2019 | - | Transfer from English to Italian; the authors translated the data themselves with a translation tool, so it is inaccurate. | ATIS + Italian
MTOP: A Comprehensive Multilingual Task-Oriented Semantic Parsing Benchmark | EACL 2021 | Facebook | The dataset is newly proposed and could also serve my task, but it is done rather poorly and looks crude. | -

Stanford's self-built dataset:
Cross-Lingual Transfer Learning for Multilingual Task Oriented Dialog | NAACL 2019 | Stanford + Facebook AI | Proposes a new dataset and, along the way, a simple baseline; already has 70-odd citations, which is a lot. | Stanford
Attention-Informed Mixed-Language Training for Zero-Shot Cross-Lingual Task-Oriented Dialogue Systems | IJCNLP 2019 | - | - | Stanford
- | AAAI 2020 | HKUST | - | Stanford
Cross-lingual Spoken Language Understanding with Regularized Representation Alignment | EMNLP 2020 | HKUST | - | Stanford
Cross-lingual Alignment Methods for Multilingual BERT: A Comparative Study | EMNLP 2020 | Alexa, UK | Mainly analyzes how much alignment links matter and how to construct them. | Stanford and others
CoSDA-ML: Multi-Lingual Code-Switching Data Augmentation for Zero-Shot Cross-Lingual NLP | IJCAI 2020 | HIT | - | Stanford and others
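CoSDA-ML's core trick is dictionary-based code-switching: during training, tokens are randomly replaced with their translations in other languages, so the encoder's representations get mixed across languages. A toy sketch (the dictionaries and swap rate below are illustrative; the paper draws on MUSE bilingual dictionaries):

    import random

    # Toy bilingual dictionaries: source token -> translation per language.
    DICTS = {
        "de": {"play": "spielen", "music": "Musik", "please": "bitte"},
        "es": {"play": "reproduce", "music": "música", "please": "por favor"},
    }

    def code_switch(tokens, swap_rate=0.5, seed=None):
        # randomly replace each token with a translation from a random language
        rng = random.Random(seed)
        out = []
        for tok in tokens:
            langs = [l for l, d in DICTS.items() if tok in d]
            if langs and rng.random() < swap_rate:
                out.append(DICTS[rng.choice(langs)][tok])
            else:
                out.append(tok)
        return out

    print(code_switch("please play some music".split(), seed=0))
    # e.g. ['bitte', 'play', 'some', 'música']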
Evaluating Cross-Lingual Transfer Learning Approaches in Multilingual Conversational Agent Models | COLING 2020 | Amazon | A set of evaluations and one model. | New dataset, unreleased; same setup as Stanford's

Datasets
FewJoint: A Few-shot Learning Benchmark for Joint Language Understanding | Arxiv | HIT | Proposes a new dataset purpose-built for this kind of few-shot training, targeting dialogue NLU and also covering the NER task. | -
Taskmaster-2 (2020) | None | Google | Google's second-generation dataset; there is no paper, and it is unclear how widely it is used. | -

Future directions to explore
Transfer learning across languages
Learning Cross-Lingual Sentence Representations via a Multi-task Dual-Encoder Model | ACL 2019 | - | Starts from the premise that the two languages should have similar sentence embeddings. | -
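The underlying objective is simple to sketch: encode both sides of a translation pair and train so that true pairs score higher than in-batch negatives, which pulls the two languages' sentence embeddings together. A minimal ranking-loss sketch (the function name and scale are mine):

    import torch
    import torch.nn.functional as F

    def translation_ranking_loss(src_emb, tgt_emb, scale=20.0):
        # src_emb, tgt_emb: (batch, dim) embeddings of translation pairs;
        # the i-th source sentence should match the i-th target sentence
        src = F.normalize(src_emb, dim=-1)
        tgt = F.normalize(tgt_emb, dim=-1)
        sim = scale * src @ tgt.T                              # (batch, batch) cosine sims
        labels = torch.arange(sim.size(0), device=sim.device)  # diagonal = true pairs
        return F.cross_entropy(sim, labels)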
Applications of machine translation to few-shot learning
Meta-Learning for Few-Shot NMT Adaptation | ACL | - | - | -

Classic academic few/zero-shot methods, and other networks
Prototypical Networks for Few-shot Learning | NIPS 2017 | - | An idea similar to k-means: cluster and summarize samples of the same class into a prototype, then discriminate other samples by proximity; it has some drawbacks. | Zero-shot; CU-Birds dataset
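The whole method fits in a few lines: each class's prototype is the mean of its support embeddings, and a query is scored by negative squared distance to every prototype. A minimal sketch:

    import torch

    def proto_classify(support, support_labels, query, n_classes):
        # support: (n_support, dim) embeddings; support_labels: (n_support,) ints
        # query:   (n_query, dim) embeddings; returns (n_query, n_classes) logits
        protos = torch.stack([
            support[support_labels == c].mean(dim=0) for c in range(n_classes)
        ])                                      # (n_classes, dim) class means
        # negative squared Euclidean distance acts as the logit
        return -torch.cdist(query, protos).pow(2)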
Model-Agnostic Meta-Learning | ICML 2017 | - | Repeatedly samples small tasks to train the model so that it can adapt quickly to the few-shot setting. | CV datasets
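The inner/outer loop is easy to sketch; the version below is first-order MAML (it drops the second-derivative term of the full algorithm) with plain SGD in both loops, and all hyperparameters are illustrative:

    import torch

    def fomaml_step(model, loss_fn, tasks, inner_lr=0.01, meta_lr=0.001):
        # one first-order MAML meta-update over a batch of tasks;
        # each task is a tuple (x_support, y_support, x_query, y_query)
        meta_grads = [torch.zeros_like(p) for p in model.parameters()]
        theta = [p.detach().clone() for p in model.parameters()]
        for xs, ys, xq, yq in tasks:
            # inner loop: one SGD step on the support set
            loss = loss_fn(model(xs), ys)
            grads = torch.autograd.grad(loss, model.parameters())
            with torch.no_grad():
                for p, g in zip(model.parameters(), grads):
                    p -= inner_lr * g
            # outer loss on the query set, gradient w.r.t. the adapted weights
            q_loss = loss_fn(model(xq), yq)
            q_grads = torch.autograd.grad(q_loss, model.parameters())
            for mg, qg in zip(meta_grads, q_grads):
                mg += qg
            with torch.no_grad():                 # restore the original weights
                for p, t in zip(model.parameters(), theta):
                    p.copy_(t)
        with torch.no_grad():                     # meta update, averaged over tasks
            for p, mg in zip(model.parameters(), meta_grads):
                p -= meta_lr * mg / len(tasks)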
67
胶囊网络
68
OPTIMIZATION AS A MODEL FOR FEW-SHOT LEARNINGICLR 2017提出了一个基于LSTM的元学习器模型来学习用于训练另一个学习器神经网络分类器的精确优化算法。
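The paper's key observation is that an SGD step has the same shape as an LSTM cell-state update, so the gates can be learned. Schematically (my notation): θ_t = f_t ⊙ θ_{t-1} + i_t ⊙ c̃_t with c̃_t = -α_t ∇_{θ_{t-1}} L_t, where the learned forget gate f_t decides how much of the old parameters to keep and the input gate i_t plays the role of an adaptive learning rate.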

Some methods for learning meta-representations
Leveraging Adversarial Training in Self-Learning for Cross-Lingual Text Classification | SIGIR 2020 | - | An adversarial-learning approach combined with mBERT; unlabeled data is learned in a semi-supervised fashion, iterating round after round. | -
Non-Linear Instance-Based Cross-Lingual Mapping for Non-Isomorphic Embedding Spaces | ACL 2020 | - | Explains a new way of mapping between embedding spaces; does not involve mBERT; quite novel. | -
Filtered Inner Product Projection for Multilingual Embedding Alignment | ICLR 2020 | - | Uses a special projection to map embeddings into a common shared space. | MUSE dataset
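For contrast with these projection methods, the classic supervised baseline (commonly run with the MUSE dictionaries) is orthogonal Procrustes over a seed dictionary; this sketch is that baseline, not FIPP itself:

    import numpy as np

    def procrustes(X, Y):
        # orthogonal map W minimizing ||X W - Y||_F (Schonemann, 1966);
        # X, Y: (n, d) arrays of source/target embeddings for n dictionary pairs
        U, _, Vt = np.linalg.svd(X.T @ Y)
        return U @ Vt                          # apply to new embeddings as X @ W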
Embeddings in Natural Language Processing: Theory and Advances in Vector Representations of Meaning | Book (170 pages) | - | A book-length treatment of embeddings and their effects. | -
https://wals.info/ | Website | - | The World Atlas of Language Structures (WALS) is a large database of structural (phonological, grammatical, lexical) properties of the world's languages, gathered from descriptive materials (such as reference grammars) by a team of 55 authors. | -

Adversarial training + mBERT
Code-Mixing on Sesame Street: Dawn of the Adversarial Polyglots | NAACL 2021 | - | Covers adversarial attacks, and shows that adversarial training can in fact nicely promote a kind of fusion between languages. | Natural language inference
Multi-Level Cross-Lingual Transfer Learning With Language Shared and Specific Knowledge for Spoken Language Understanding | Journal | BUPT | Written by an undergraduate; uses no pretrained model, doing it instead with a multi-level design, which is impressive. | -
Adversarial Learning with Contextual Embeddings for Zero-resource Cross-lingual Classification and NER | EMNLP 2019 | - | A relatively simple method that exploits data augmentation. | -