Exploration of context incorporation strategies for the NLU task (Slot Filling) on dialogues
- What is the best way to incorporate context information into the Slot Filling task for dialogues, and how much does it benefit the task in question?
- Do the context incorporation strategies have the same or different effects when used with different model architectures?
- Do the context incorporation strategies have the same or different effects when applied to ASR outputs vs. gold transcriptions of the dialogues?
Data: simulated emergency call dialogues in German/English/Polish from the NotAs project.
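One common context incorporation strategy for slot filling is to prepend the preceding dialogue turns to the current utterance before tagging. A minimal sketch of this idea, assuming a BIO-style tagging scheme; the function name, the separator token, and the example utterances are hypothetical and not taken from the NotAs data:

```python
def build_context_input(history, current, context_size=2,
                        sep="[SEP]", pad_label="PAD"):
    """Concatenate the last `context_size` turns with the current utterance.

    Returns (tokens, labels): context tokens receive `pad_label` so the
    tagger is only supervised on the current turn, whose BIO slot labels
    are kept as given.
    """
    tokens, labels = [], []
    for turn in history[-context_size:]:
        for tok in turn.split():
            tokens.append(tok)
            labels.append(pad_label)
        tokens.append(sep)       # mark the turn boundary
        labels.append(pad_label)
    cur_tokens, cur_labels = current
    tokens.extend(cur_tokens)
    labels.extend(cur_labels)
    return tokens, labels

# Hypothetical emergency-call snippet (invented for illustration):
history = ["where is the emergency", "at the main station"]
current = (["two", "people", "are", "injured"],
           ["B-count", "I-count", "O", "O"])
toks, labs = build_context_input(history, current)
```

Masking the context tokens with a padding label is one design choice; an alternative is to encode the context turns separately (e.g. with a turn-level encoder) and fuse the representations, which keeps the input length of the tagger constant.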
- (Overview of the SF task and approaches) Louvan, S., & Magnini, B. (2020). Recent neural methods on slot filling and intent classification for task-oriented dialogue systems: A survey. arXiv preprint arXiv:2011.00564. Link: https://arxiv.org/abs/2011.00564.
- (Inspiration for the usage of context information) Anikina, T., & Kruijff-Korbayová, I. (2019). Dialogue act classification in team communication for robot assisted disaster response. In Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue (pp. 399-410). Link: https://www.aclweb.org/anthology/W19-5946.pdf.