Distributed Modular Toolbox for Multi-modal Context Recognition

Publication Type Conference Paper
Authors David Bannach, Kai Kunze, Paul Lukowicz, Oliver Amft
Title Distributed Modular Toolbox for Multi-modal Context Recognition
Abstract We present a GUI-based C++ toolbox that allows for building distributed, multi-modal context recognition systems by plugging together reusable, parameterizable components. The goals of the toolbox are to simplify the steps from prototypes to online implementations on low-power mobile devices, facilitate portability between platforms, and foster easy adaptation and extensibility. The main features of the toolbox we focus on here are a set of parameterizable algorithms including different filters, feature computations, and classifiers; a runtime environment that supports complex synchronous and asynchronous data flows; encapsulation of hardware-specific aspects including sensors and data types (e.g., int vs. float); and the ability to outsource parts of the computation to remote devices. In addition, components are provided for group-wise, event-based sensor synchronization and data labeling. We describe the architecture of the toolbox and illustrate its functionality on two case studies that are part of the downloadable distribution.
Date March 2006
Proceedings Title ARCS 2006: Proceedings of the 19th International Conference on Architecture of Computing Systems
Publisher Springer
Volume 3894
Pages 99–113
Series Lecture Notes in Computer Science
DOI 10.1007/11682127_8
Friedrich-Alexander-Universität Erlangen-Nürnberg