Abstract
Language, speech and conversational behaviours reflect cognitive changes that may precede physiological changes and offer a much more cost-effective option for detecting preclinical cognitive decline. Artificial intelligence and machine learning have been established as a means of facilitating automated speech-based cognitive screening through the recording and analysis of linguistic, speech and conversational behaviours. In this work, a scoping literature review was performed to document and analyse current automated speech-based implementations for cognitive screening from the perspective of human–computer interaction. At this stage, the goal was to identify and analyse the characteristics that define the interaction between automated speech-based screening systems and their users, potentially revealing interaction-related patterns and gaps. In total, 65 articles were identified as potentially relevant, of which 15 satisfied the inclusion criteria. The literature review led to the documentation and further analysis of five interaction-related themes: (i) user interface, (ii) modalities, (iii) speech-based communication, (iv) screening content and (v) screener. Cognitive screening through speech-based interaction might benefit from two practices: (1) implementing more multimodal user interfaces that facilitate, amongst other capabilities, speech-based screening and (2) introducing the element of motivation into the speech-based screening process.