Agreement and disagreement are not in a simple binary relationship: there are nuances and degrees of both (total or partial), indecision about whether to maintain a position (uncertainty), or even the complete absence of a stance (indifference). Recognizing these variants of agreement/disagreement is a key factor in conducting a successful conversation: failure to recognize, or misinterpretation of, events of agreement can lead to the complete failure of the given interaction. Although languages generally have a range of lexical and syntactic means for expressing this behaviour, it can still be misleading to rely exclusively on linguistic form. For example, if speaker B agrees with speaker A, B may say “yes”; but the same “yes” can also convey exactly the opposite, i.e. disagreement, depending on how it is pronounced. Alternatively, one can agree or disagree without saying a word, simply by remaining silent: here again it is non-verbal behaviour that contributes to the understanding of the context, and thus to the pragmatic interpretation of the event. In order to identify the pragmatic functions of agreement/disagreement correctly, all available modalities, verbal and non-verbal, audio and visual, must be taken into account. One challenge remains, however. If someone expresses agreement by saying “yes” and nodding at the same time, the agreement is identified on the basis of the co-occurrence, i.e. the temporal alignment, of the two events (verbal and gestural). But how can the wisdom of the adage “silence gives consent” be accounted for, i.e. how can agreement be interpreted from the absence of any behavioural event? In fact, we are not faced with a zero contribution: we arrive at the interpretation of (a certain kind of) agreement after a certain period of observation, during which we collect data from all available modalities (verbal and non-verbal).
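The co-occurrence criterion described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the `Event` representation, field names, and the tolerance value are invented for the example and do not reflect the actual HuComTech annotation scheme.

```python
from dataclasses import dataclass

# Hypothetical event representation; fields are illustrative only,
# not the actual HuComTech annotation scheme.
@dataclass(frozen=True)
class Event:
    modality: str   # e.g. "verbal", "gestural"
    label: str      # e.g. "yes", "nod"
    start: float    # onset in seconds
    end: float      # offset in seconds

def co_occur(a: Event, b: Event, tolerance: float = 0.5) -> bool:
    """True if the two events overlap in time, allowing a small tolerance."""
    return a.start <= b.end + tolerance and b.start <= a.end + tolerance

# A spoken "yes" accompanied by a near-simultaneous head nod:
yes = Event("verbal", "yes", start=1.2, end=1.6)
nod = Event("gestural", "nod", start=1.4, end=2.0)
print(co_occur(yes, nod))  # True: temporal alignment supports "agreement"
```

The point of the sketch is the limitation the text raises: a rule of this kind can only fire when both events are present, so it cannot, by itself, capture agreement expressed through silence.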
In this process we go beyond simply searching for the temporal alignment of individual events; rather, we identify patterns of behaviour consisting of events spread over a longer observation period. It is a genuinely cognitive process in which the patterns thus identified are matched against stereotypical patterns of behaviour already known to us (whether innate or acquired), and the pragmatic function of the best match is attributed to the pattern found during the observation period, in our case the function related to agreement/disagreement. Before turning to the actual analysis of the matching patterns, some basic facts about the corpus as a whole are in order. In developing the HuComTech Corpus, our aim was to capture a large number of multimodal behaviours over a given observation period. Building on the resulting database, this paper focuses on the discovery of temporal patterns related to agreement/disagreement. It describes the methodological basis of the structure of the corpus and of the analysis and interpretation of the data. Particular emphasis is placed on the Theme research tool: we describe both its theoretical foundations, which facilitate the analysis of multimodal behavioural data, and some methodological questions concerning its application to the HuComTech Corpus.
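The matching step described above can be caricatured as a best-match lookup against a small inventory of stereotypical patterns. All pattern contents and labels below are invented for illustration; the actual matching performed by Theme (T-pattern detection) is statistical and far more sophisticated than this overlap count.

```python
# Hypothetical inventory of stereotypical behaviour patterns, each
# associated with a pragmatic function. Contents are illustrative only.
STEREOTYPES = {
    "agreement":    [("verbal", "yes"), ("gestural", "nod")],
    "disagreement": [("verbal", "no"), ("gestural", "head_shake")],
    "uncertainty":  [("verbal", "hmm"), ("gestural", "shrug")],
}

def classify(observed):
    """Attribute the pragmatic function of the stereotype that shares
    the most (modality, label) events with the observed pattern."""
    def score(function):
        return len(set(observed) & set(STEREOTYPES[function]))
    return max(STEREOTYPES, key=score)

# Events collected over an observation window:
observed = [("gestural", "nod"), ("verbal", "yes")]
print(classify(observed))  # "agreement"
```

Even this toy version shows why the observation window matters: the function is attributed to an accumulated pattern of events, not to any single behavioural event in isolation.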