Paper_Name | Conference | From | Note | Dataset
---|---|---|---|---
meta learning to classify intent and slot labels with noisy | | | The idea is good: it introduces various few-shot learning methods from a principled angle, in three successive aspects; somewhat CV-oriented; about 37 pages. |
| Workshop | Alexa | Trains a network robust to noisy samples, which can also be used for few-shot learning; defines a new task, but provides no link. | ATIS, SNIPS, TOP
| workshop | Alexa | Works from the encoder angle and the data-augmentation angle; the augmentation happens only at the embedding level, so it inevitably lacks interpretability. |
ProtoDA: Efficient Transfer Learning for Few-Shot Intent Classification | AAAI 2021 | Alexa | About one point above the baseline; the key is the unusual idea of using only syntactic information. Although not few-shot learning per se, it suggests that external information can be exploited to learn from few samples. | SNIPS, ATIS
A survey of joint intent detection and slot-filling models in natural language understanding | arXiv | University of Sydney | |
Multi-Domain Spoken Language Understanding Using Domain- and Task-Aware Parameterization | None | Prof. Shiyu's work | Solves SLU as a seq-to-seq problem, but the few-shot experiments lack comparisons against other baselines, for which the reviewers criticized it heavily. | ASMixed, MTOD
Multi-Domain Spoken Language Understanding Using Domain- and Task-Aware Parameterization | arXiv | Baseline of Prof. Shiyu's work | SLU, but focused on domain adaptation rather than few-shot: training data is measured in percentages rather than a handful of examples. Main idea: different domains and tasks should have some separate parameters. Joint training is also a direction; it was very popular five years ago but has since faded. https://blog.csdn.net/weixin_37947156/article/details/87608018 |
Multi-Domain Adversarial Learning for Slot Filling in Spoken Language Understanding | CoRR | Bing Liu | Uses adversarial learning to learn a more general domain representation. |
Attention-Based Recurrent Neural Network Models for Joint Intent Detection and Slot Filling | NIPS 2015 | Bing Liu | First to propose joint training of the two tasks. |
ONENET: JOINT DOMAIN, INTENT, SLOT PREDICTION FOR SLU | workshop | | |
Joint Slot Filling and Intent Detection via Capsule Networks | ACL 2019 | Alibaba & Tencent (US) | Uses the capsule networks that were popular in 2018-19 to propose a semantically hierarchical network. | SNIPS, ATIS
Exploring transfer learning for end-to-end spoken language understanding | AAAI 2021 | Alexa | |
Zero-shot User Intent Detection via Capsule Neural Networks | EMNLP 2018 | | Borrows capsule networks; understanding their internal structure is required to follow the inner workings. |
Towards Scalable Multi-Domain Conversational Agents: The Schema-Guided Dialogue Dataset | AAAI 2020 | Google Research | Proposes an entirely new dataset and evaluation protocol that supports some few-shot settings. |
Accelerating Natural Language Understanding in Task-Oriented Dialog | ACL workshop | | Proposes a very simple network, worth a look. |
Meta Learning for Few-Shot Joint Intent Detection and Slot-Filling | ICML 2020 | | Jointly predicts the two tasks from the few-shot angle, and extends to the cross-lingual setting. |
Linguistically-Enriched and Context-Aware Zero-shot Slot Filling | arXiv | San Jose | Integrates linguistic information and context information. Key point: very strong results, 17 points above the 2020 baseline. | SNIPS, ATIS, MultiWOZ
Coach: A Coarse-to-Fine Approach for Cross-domain Slot Filling | ACL 2020 (short) | HKUST | Proposes a coarse-to-fine model that can serve as my baseline; also adds a template mechanism. | SNIPS and an NER dataset
Robust Zero-Shot Cross-Domain Slot Filling with Example Values | ACL 2019 | MIT, Google, Alexa | | SNIPS, XSchema
Few-shot Slot Tagging with Collapsed Dependency Transfer and Label-enhanced Task-adaptive Projection Network | ACL 2020 | HIT | Only tackles slot tagging, approached from the NER/CRF angle. |
Zero-Shot Transfer Learning with Synthesized Data for Multi-Domain Dialogue State Tracking | ACL 2020 | Stanford | From the data angle: proposes a new way of synthesizing dialogue data, and models trained on it do much better in few-shot settings, by more than ten points. | MultiWOZ
Self-Supervised Meta-Learning for Few-Shot Natural Language Classification Tasks | EMNLP 2020 | Microsoft | From the pre-training angle, trains with meta-learning; gains of roughly four points; few formulas, so it belongs to the fine-tuning family of papers. | 17 NLP classification datasets
A Closer Look At Feature Space Data Augmentation For Few-Shot Intent Classification | EMNLP 2019 | Amazon | From the data-augmentation angle; only targets intent classification. | SNIPS, FB Dialog
Hierarchical Attention Prototypical Networks for Few-Shot Text Classification | EMNLP 2019 | PKU & Microsoft | From the attention angle, aims to enrich the feature-space representation; computes attention at three levels. | FewRel, CSID
Joint Slot Filling and Intent Detection in Spoken Language Understanding by Hybrid CNN-LSTM Model | ICML 2020 | | | ATIS
Joint Intent Detection and Slot Filling with Wheel-Graph Attention Networks | arXiv | | Introduces a graph; SOTA on two datasets. | ATIS, SNIPS
Recent Neural Methods on Slot Filling and Intent Classification for Task-Oriented Dialogue Systems: A Survey | COLING | | Covers the tasks separately rather than jointly, including multilingual work. |
BERT for Joint Intent Classification and Slot Filling | | DAMO Academy | |
| | | |
Cross-Lingual + Few Shot | | | |
MultiATIS++ | | | |
End-to-End Slot Alignment and Recognition for Cross-Lingual NLU | EMNLP 2020 | Alexa | No explicit alignment links are needed from the source language to the target language; slots are matched automatically. Results averaged over five runs; the experiments are thorough. | MultiATIS++ (multilingual)
To What Degree Can Language Borders Be Blurred In BERT-based Multilingual Spoken Language Understanding? | COLING 2020 | Alexa | Uses multilingual BERT; the model's own novelty lies in the adversarial component. | MultiATIS++ (multilingual)
Simultaneous Slot Filling, Translation, Intent Classification, and Language Identification: Initial Results using mBART on MultiATIS++ | AACL | Alexa | Initial experimental results with mBART. | MultiATIS++ (multilingual)
Multilingual Code-Switching for Zero-Shot Cross-Lingual Intent Prediction and Slot Filling | arXiv 2020 | | |
Multi-lingual Intent Detection and Slot Filling in a Joint BERT-based Model | arXiv 2019 | | English-to-Italian transfer; the authors translated the data with an MT system themselves, so it is not accurate. | ATIS + Italian
MTOP: A Comprehensive Multilingual Task-Oriented Semantic Parsing Benchmark | EACL 2021 | | The dataset is newly proposed and could also be used for my task, but it is mediocre and looks crude. |
| | | |
Stanford's own dataset | | | |
Cross-Lingual Transfer Learning for Multilingual Task Oriented Dialog | NAACL 2019 | Stanford + Facebook AI | Proposes a new dataset along with a simple baseline; already 70+ citations, which is quite high. | Stanford
Attention-Informed Mixed-Language Training for Zero-Shot Cross-Lingual Task-Oriented Dialogue Systems | IJCNLP 2019 | | | Stanford
| AAAI 2020 | HKUST | | Stanford
Cross-lingual Spoken Language Understanding with Regularized Representation Alignment | EMNLP 2020 | HKUST | | Stanford
Cross-lingual Alignment Methods for Multilingual BERT: A Comparative Study | EMNLP 2020 | Alexa (UK) | Mainly analyzes how important alignment is, and how to do it. | Stanford and others
CoSDA-ML: Multi-Lingual Code-Switching Data Augmentation for Zero-Shot Cross-Lingual NLP | IJCAI 2020 | HIT | | Stanford and others
Evaluating Cross-Lingual Transfer Learning Approaches in Multilingual Conversational Agent Models | COLING 2020 | Amazon | Some evaluations plus a model. | A new, unreleased dataset with the same setup as the Stanford one
| | | |
Datasets | | | |
FewJoint: A Few-shot Learning Benchmark for Joint Language Understanding | arXiv | HIT | Proposes a new dataset specifically for few-shot training, targeting dialog NLU and also covering the NER task. |
Taskmaster-2 (2020) | None | | The second-generation dataset from Google; no accompanying paper, and it is unclear how widely it is used. |
| | | |
Future directions to explore | | | |
Transfer learning across languages | | | |
Learning Cross-Lingual Sentence Representations via a Multi-task Dual-Encoder Model | ACL 2019 | | Starts from the premise that the two languages should have similar sentence embeddings. |
Machine translation in few-shot settings | | | |
Meta-Learning for Few-Shot NMT Adaptation | ACL | | |
| | | |
Classic academic few/zero-shot methods, and other networks | | | |
Prototypical Networks for Few-shot Learning | NIPS 2017 | | Similar in spirit to k-means: clusters samples of the same class and summarizes them into a prototype used to distinguish other samples; has some drawbacks. | zero-shot and the CU-Birds dataset
Model-Agnostic Meta-Learning | ICML 2017 | | Repeatedly samples small tasks to train the model so that it can quickly adapt to the few-shot setting. | CV datasets
Capsule networks | | | |
OPTIMIZATION AS A MODEL FOR FEW-SHOT LEARNING | ICLR 2017 | | Proposes an LSTM-based meta-learner that learns the exact optimization algorithm used to train another learner, a neural-network classifier. |
| | | |
Methods for learning meta-representations | | | |
Leveraging Adversarial Training in Self-Learning for Cross-Lingual Text Classification | SIGIR 2020 | | Adversarial learning combined with mBERT; unlabeled data is handled in a semi-supervised way, iterating round after round. |
Non-Linear Instance-Based Cross-Lingual Mapping for Non-Isomorphic Embedding Spaces | ACL 2020 | | A new way of doing cross-lingual space mapping; does not use mBERT; quite novel. |
Filtered Inner Product Projection for Multilingual Embedding Alignment | ICLR 2020 | | A special projection method that maps embeddings into a common space. | MUSE dataset
Embeddings in Natural Language Processing: Theory and Advances in Vector Representations of Meaning | Book, 170 pages | | A book covering embeddings and their effects. |
https://wals.info/ (a site describing the structural features of languages) | Website | | The World Atlas of Language Structures (WALS) is a large database of structural (phonological, grammatical, lexical) properties of languages gathered from descriptive materials (such as reference grammars) by a team of 55 authors. |
| | | |
Adversarial + mBERT | | | |
Code-Mixing on Sesame Street: Dawn of the Adversarial Polyglots | NAACL 2021 | | Covers adversarial attacks, and shows that adversarial training can actually promote a kind of fusion between languages. | Natural language inference
Multi-Level Cross-Lingual Transfer Learning With Language Shared and Specific Knowledge for Spoken Language Understanding | Journal | | Written by a BUPT undergraduate; no pretrained model; uses a multi-level design; impressive. |
Adversarial Learning with Contextual Embeddings for Zero-resource Cross-lingual Classification and NER | EMNLP 2019 | | A fairly simple data-augmentation-style method. |
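The Prototypical Networks entry above (nearest class-mean classification) can be sketched in a few lines. This is a minimal illustration, not the paper's code: the helper names `prototypes`/`classify` and the toy 2-D "embeddings" are made up here, and the real model trains an encoder end-to-end with a softmax over negative distances.

```python
import numpy as np

def prototypes(support_x, support_y, n_classes):
    # Class prototype = mean of the support embeddings for that class.
    return np.stack([support_x[support_y == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_x, protos):
    # Assign each query to its nearest prototype (squared Euclidean distance).
    d = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)
```

The note's "k-means-like" intuition is visible here: each class is summarized by a single centroid, so adding a new class at test time only requires a few labeled examples to form its prototype.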
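Likewise, the MAML note ("repeatedly samples small tasks so the model adapts quickly") corresponds to an inner/outer-loop structure. Below is a first-order sketch on a toy family of 1-D regression tasks (each task is a slope `a`); `maml_train` and all hyperparameters are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def maml_train(task_slopes, w=0.0, inner_lr=0.05, meta_lr=0.01, steps=500):
    # First-order MAML on tasks y = a * x with squared loss.
    # Learns an initialization w that adapts to any task in one gradient step.
    rng = np.random.default_rng(0)
    for _ in range(steps):
        a = rng.choice(task_slopes)                       # sample a task
        x_s, x_q = rng.normal(size=5), rng.normal(size=5)  # support / query
        # Inner loop: one gradient step on the support set.
        grad = 2 * np.mean((w * x_s - a * x_s) * x_s)
        w_fast = w - inner_lr * grad
        # Outer loop: update the initialization using the adapted
        # parameters' gradient on the query set (first-order approximation).
        meta_grad = 2 * np.mean((w_fast * x_q - a * x_q) * x_q)
        w -= meta_lr * meta_grad
    return w
```

With tasks drawn from slopes {1, 3}, the learned initialization settles near the middle of the task family, from where one inner step reaches either task: the same "fast adaptation" property the note describes.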