Recognizing context for annotating a live life recording.
In: Personal and Ubiquitous Computing, Vol. 11 (2007), No. 4, pp. 251-263
ISSN: 1617-4909, 0949-2054
Journal article / Subject: Computer Science; Economics
In the near future, it will be possible to continuously record and store the entire audio–visual lifetime of a person together with all digital information that the person perceives or creates. While the storage of this data will soon be feasible, retrieval from and indexing into such large data sets remain unsolved challenges. Since today’s retrieval cues seem insufficient, we argue that additional cues, obtained from body-worn sensors, make associative retrieval by humans possible. We present three approaches to creating such cues, each with an experimental evaluation: the user’s physical activity from acceleration sensors, their social environment from audio sensors, and their interruptibility from multiple sensors.