MULTILINGUALITY AND LANGUAGE TECHNOLOGY
Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (DFKI), Campus D3 2, Stuhlsatzenhausweg 3, 66123 Saarbrücken, Germany
– How can we control the output of a pre-trained language model and include the necessary inputs in the correct order? E.g., experiment with different prompts for generation vs. post-editing.
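The prompting experiment above could start from a small helper that builds both prompt variants. This is a minimal sketch; the templates, the `build_prompt` name, and the two modes are illustrative assumptions to be tuned experimentally, not a fixed design:

```python
from typing import Optional


def build_prompt(mission_facts: str, draft: Optional[str] = None) -> str:
    """Build a prompt for a pre-trained language model.

    Two hypothetical modes (templates are assumptions):
      - generation: the model writes the description from the facts alone;
      - post-editing: the model revises an existing draft against the facts.
    """
    if draft is None:
        # Generation mode: facts first, then the instruction.
        return (
            "Mission facts:\n"
            f"{mission_facts}\n"
            "Write a fluent description of the mission:"
        )
    # Post-editing mode: facts, then the draft to be revised.
    return (
        "Mission facts:\n"
        f"{mission_facts}\n"
        "Draft description:\n"
        f"{draft}\n"
        "Revise the draft so it is fluent and faithful to the facts:"
    )
```

Keeping the facts in a fixed position relative to the instruction makes it easy to vary the wording of the instruction alone across experimental conditions.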
– What is the best way to encode the previous context? Can we simply concatenate the strings, or is a more structured representation needed? E.g., one could experiment with using AMRs for the previous turns and providing them as additional inputs.
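The simple concatenation baseline could look like the sketch below, which flattens previous turns into a single string and drops the oldest turns when a length budget is exceeded. The whitespace-based token count is an assumption; a real system would use the model's own tokenizer:

```python
def encode_context(turns, max_tokens=256):
    """Concatenate previous dialogue turns (speaker, text) into one string,
    keeping the most recent turns when the token budget is exceeded.

    Token counting by whitespace splitting is an approximation (assumption);
    the model tokenizer should be used in practice.
    """
    kept, used = [], 0
    # Walk backwards so the most recent turns are retained first.
    for speaker, text in reversed(turns):
        line = f"{speaker}: {text}"
        n = len(line.split())
        if used + n > max_tokens and kept:
            break
        kept.append(line)
        used += n
    # Restore chronological order for the model input.
    return "\n".join(reversed(kept))
```

A structured alternative (e.g., linearized AMRs per turn) could be fed through the same interface, which would make the two context encodings directly comparable.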
Option #1: use our in-house EveEnti annotations as a formal representation of the mission content and implement a model that takes such representations as input and generates fluent text describing the mission.
Option #2: use AMRs as formal input representations.
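Either option requires linearizing the formal representation into a sequence the model can consume. The sketch below assumes a hypothetical EveEnti-style schema of event dicts with role-labelled arguments; the schema, the role markers, and the function name are all illustrative assumptions:

```python
def linearize_events(events):
    """Linearize event annotations into a flat input string for a
    sequence-to-sequence model.

    Assumed (hypothetical) schema: each event is a dict with a "type"
    string and an "args" dict mapping role names to values. Roles are
    sorted so the linearization is deterministic.
    """
    parts = []
    for ev in events:
        args = " ".join(
            f"<{role}> {value}" for role, value in sorted(ev["args"].items())
        )
        parts.append(f"<event> {ev['type']} {args}")
    return " ".join(parts)
```

For Option #2, a PENMAN-serialized AMR could be substituted for the event string at the same point in the pipeline, so the two formal representations can be compared with an otherwise identical model.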
In both cases, we will need an annotator to produce reference descriptions for each of the dialogue threads. We are working with firefighters and may be able to get their support in creating the reference descriptions.