Language Technologies

Data acquisition and management

Our methodologies for studying language are data-intensive, or at least partially data-driven. We rely on the sound collection of linguistic datasets. This covers corpus design (balanced so as to reach statistical significance), protocol design, subject recruitment, the test sessions themselves, and finally recording and archiving the data (while respecting the relevant laws). In our context, the management of metadata is especially important. Very detailed metadata about the corpus (social, cognitive, linguistic, situational and other variables) are required in order to later assess the sources of variation. However, depending on the kind of data and the population involved, availability to the public, or even to researchers, needs to be carefully controlled. In Aix, we have accumulated experience over the years through the management of the SLDR (Speech and Language Data Repository) and a large number of experimental studies with vulnerable populations.
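
To make this concrete, the sketch below shows the kind of metadata record such a workflow could rely on. All field names, values and the access-level vocabulary are illustrative assumptions, not the actual SLDR schema.

    # Minimal sketch of a per-recording metadata record (illustrative only,
    # not the actual SLDR schema). Detailed social, cognitive and situational
    # variables are kept alongside an access-control field for sensitive data.
    from dataclasses import dataclass, asdict, field
    from typing import List
    import json

    @dataclass
    class RecordingMetadata:
        recording_id: str
        corpus: str
        # Social and cognitive variables about the speaker
        speaker_age: int
        speaker_gender: str
        education_level: str
        native_language: str
        # Situational variables
        situation: str            # e.g. "free conversation", "map task"
        register: str             # e.g. "spontaneous", "read speech"
        recording_date: str
        # Access control: sensitive populations may restrict availability
        access_level: str = "restricted"   # "public" | "researchers" | "restricted"
        consent_obtained: bool = True
        keywords: List[str] = field(default_factory=list)

    record = RecordingMetadata(
        recording_id="conv_0042",
        corpus="example_corpus",
        speaker_age=34,
        speaker_gender="F",
        education_level="university",
        native_language="French",
        situation="free conversation",
        register="spontaneous",
        recording_date="2015-06-12",
        keywords=["dialogue", "overlap"],
    )

    # Serialise for archiving alongside the audio and transcription files
    print(json.dumps(asdict(record), indent=2, ensure_ascii=False))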

Data processing and Linguistic Resources

Dealing with variation in Natural Language Processing (NLP) is paradoxical. Each NLP task is in itself a real challenge that researchers tend to simplify by limiting the domain of application. Variation is therefore a crucial issue. This is true for rule-based approaches, which are very sensitive to domain change, but also for machine learning approaches, in which a system trained on a coherent dataset will most likely overfit when applied to data with significant variation.
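
As a toy illustration of this risk (invented data, no claim about any particular system), the following sketch trains a simple bag-of-words classifier on standard written sentences and evaluates it on non-standard spellings of the same content; accuracy typically drops sharply on the out-of-domain set.

    # Toy illustration of domain mismatch (requires scikit-learn).
    # The sentences and labels are invented for the example.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Training data: standard written register
    train_texts = [
        "this film is excellent",
        "a wonderful performance",
        "this film is terrible",
        "a very poor performance",
    ]
    train_labels = [1, 1, 0, 0]

    # Out-of-domain data: the same content with non-standard spellings
    ood_texts = [
        "dis film iz excelent",
        "wot a wunderful performence",
        "dis film iz terible",
        "a rly poor performence",
    ]
    ood_labels = [1, 1, 0, 0]

    model = make_pipeline(CountVectorizer(), LogisticRegression())
    model.fit(train_texts, train_labels)

    print("in-domain accuracy    :", model.score(train_texts, train_labels))
    print("out-of-domain accuracy:", model.score(ood_texts, ood_labels))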

The challenge is therefore to adapt existing tools and resources to deal with non-standard input (including recognition problems, unintelligible words or phrases, neologisms, foreign words, code-switching, or constructions specific to spoken style). Tool adaptation is a bidirectional process. First, we run existing tools (tokenisers, lemmatizers, POS-taggers, parsers) on the real dataset. Then we perform a systematic error analysis on a subset of the data and adapt the tools, either through specific machine learning techniques or through direct modification of the rules. The new system is applied again to our data, and this process is iterated until acceptable results are obtained (evaluated against the manual analysis). Resources such as lexicons or lexical networks need to go through a similar process.
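
A minimal, self-contained sketch of this iterative loop is given below, using a toy lexicon-based tagger as a stand-in for the real tools; the data, the accuracy threshold, and the adaptation step (simply extending the lexicon) are illustrative assumptions.

    # Sketch of the iterate-until-acceptable adaptation loop (toy example).
    # Existing resource: a lexicon built for standard written language.
    lexicon = {"the": "DET", "cat": "NOUN", "sleeps": "VERB"}

    # Manually analysed subset of the real, non-standard data: (token, gold tag)
    gold_subset = [("da", "DET"), ("cat", "NOUN"), ("sleepz", "VERB"),
                   ("lol", "INTJ"), ("the", "DET")]

    def tag(token):
        """Tag a token with the current lexicon; unknown tokens get 'X'."""
        return lexicon.get(token, "X")

    def evaluate(gold):
        """Accuracy of the current tool against the manual analysis."""
        return sum(tag(tok) == t for tok, t in gold) / len(gold)

    target, round_nb = 0.9, 0
    while evaluate(gold_subset) < target:
        round_nb += 1
        # Systematic error analysis: collect the mis-tagged tokens
        errors = [(tok, t) for tok, t in gold_subset if tag(tok) != t]
        print(f"round {round_nb}: accuracy={evaluate(gold_subset):.2f}, errors={errors}")
        # Adaptation step: here we simply extend the lexicon; in practice this
        # would be retraining a statistical model or rewriting rules
        for tok, t in errors:
            lexicon[tok] = t

    print(f"final accuracy: {evaluate(gold_subset):.2f}")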

Variation and natural language processing

The datasets produced in variation studies are extremely interesting to reuse for deeper computational studies. More precisely, the need for precise transcription generates several competing versions of the same transcription. Intelligibility studies also provide such parallel transcriptions of specific utterances. A systematic analysis of these competing versions would be very fruitful for understanding the interaction process itself. In particular, we can compare the case of isolated sentences (intelligibility studies), for which interpreters have to rely solely on syntactic and basic semantic information, with sentences in context (versioning of conversational transcripts). Such an analysis will shed light on how syntax, semantics and discourse are invoked when interpreting difficult or even unintelligible elements. This is in turn very valuable input for automatic processing tools: it can contribute to word recognition but, more generally, allow them to generate syntactic and semantic analyses for sentences that are usually not interpretable by automatic means.
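
As a simple illustration of how competing versions could be aligned to locate points of disagreement, the following sketch uses Python's standard difflib on two invented transcriptions of the same utterance.

    # Align two competing transcription versions of the same utterance and
    # report where the transcribers disagree. The transcriptions are invented.
    from difflib import SequenceMatcher

    version_a = "well I think he went to the uh station yesterday".split()
    version_b = "well I think you went to the station yes to day".split()

    matcher = SequenceMatcher(a=version_a, b=version_b)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":
            span_a = " ".join(version_a[i1:i2]) or "(nothing)"
            span_b = " ".join(version_b[j1:j2]) or "(nothing)"
            print(f"{op}: version A has '{span_a}', version B has '{span_b}'")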
