(Preprint) A Single Example Can Improve Zero-Shot Data Generation
Pavel Burnyshev ¹, Valentin Malykh ¹ ², Andrey Bout ¹, Ekaterina Artemova ¹ ³, Irina Piontkovskaya ¹
¹ Huawei Noah's Ark Lab, Moscow, Russia
² Kazan Federal University, Kazan, Russia
³ HSE University, Moscow, Russia
arXiv, 2021-08-16
Abstract
Sub-tasks of intent classification, such as robustness to distribution shift, adaptation to specific user groups, personalization, and out-of-domain detection, require extensive and flexible datasets for experiments and evaluation. As collecting such datasets is time- and labor-consuming, we propose to use text generation methods to gather them. The generator should be trained to produce utterances that belong to a given intent.
We explore two approaches to generating task-oriented utterances. In the zero-shot approach, the model is trained to generate utterances for seen intents and is then used to generate utterances for intents unseen during training. In the one-shot approach, the model is additionally presented with a single utterance from the test intent. We perform a thorough automatic and human evaluation of the datasets generated with the two proposed approaches. Our results reveal that the attributes of the generated data are close to those of the original test sets, which were collected via crowd-sourcing.
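To make the distinction between the two settings concrete, the sketch below shows intent-conditioned generation with a generic pretrained language model via Hugging Face transformers. It is a minimal illustration, not the authors' implementation: the model choice (gpt2), the prompt format, and the decoding settings are all assumptions for the example.

```python
# Minimal sketch (not the paper's code) of intent-conditioned utterance generation.
# Zero-shot: condition only on the intent name; one-shot: also prepend a single example.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def generate_utterances(intent, example=None, n=5):
    # Build the conditioning prompt; the exact format is an assumption.
    prompt = f"intent: {intent}\n"
    if example is not None:
        prompt += f"example: {example}\n"   # one-shot: a single seed utterance
    prompt += "utterance:"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,            # sampling encourages diverse utterances
        top_p=0.95,
        max_new_tokens=30,
        num_return_sequences=n,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Strip the prompt tokens and return only the generated continuations.
    prompt_len = inputs["input_ids"].shape[1]
    return [tokenizer.decode(o[prompt_len:], skip_special_tokens=True) for o in outputs]

# Zero-shot for an unseen intent, then one-shot with a single seed utterance.
print(generate_utterances("book_flight"))
print(generate_utterances("book_flight", example="I need a flight to Berlin on Friday."))
```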