In machine learning, concept drift can cause the optimal solution to a given problem to change over time, leading to less accurate predictions. Concept drift can be sudden, gradual, or recurring. Understanding its consequences is particularly important in human-centric applications, where changes in the underlying data and environment are common and unexpected. To gain a better understanding of the adverse effects of different types of concept drift on learners, we propose a novel simulation tool that incrementally generates datasets with customisable concept drift by interacting with a human in a game-like setting. We illustrate our approach by generating and analysing concept drift simulations inspired by body-sensor-based long-term activity recognition. Our initial results show that current unsupervised adaptation techniques can become caught in cyclic mislabelling, and that a hybrid solution that is self-calibrating and semi-supervised is more robust than either technique taken separately in this example.