Robustness Verification for Concept Drift Detection

Real-world data streams are rarely stationary, but subject to concept drift, i.e., changes in the distribution of the observations. Concept drift needs to be monitored constantly, so that when the trained model is no longer adequate, a new model can be trained that fits the most recent concept. Current methods for detecting concept drift typically monitor model performance and trigger a signal once it drops by a certain margin. The disadvantage of this approach is that it acts retroactively, i.e., only after performance has already dropped.
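
For reference, the reactive scheme described above can be sketched roughly as follows; the class name, window size, and margin are illustrative choices rather than part of the proposal.

```python
# Minimal sketch of a reactive, performance-based drift detector: it tracks the
# model's error rate over a sliding window and signals drift once the rate
# exceeds a reference rate by a fixed margin. Window size and margin are
# illustrative values, not prescribed by the project.
from collections import deque

class PerformanceDriftDetector:
    def __init__(self, window_size=500, margin=0.05):
        self.window = deque(maxlen=window_size)  # recent 0/1 prediction errors
        self.reference_error = None              # error rate of the "stable" concept
        self.margin = margin

    def update(self, y_true, y_pred):
        """Record one labelled observation; return True if drift is signalled."""
        self.window.append(int(y_true != y_pred))
        if len(self.window) < self.window.maxlen:
            return False                          # not enough data yet
        current_error = sum(self.window) / len(self.window)
        if self.reference_error is None:
            self.reference_error = current_error  # first full window defines the baseline
            return False
        return current_error > self.reference_error + self.margin
```

Note that this detector needs the true labels to compute errors, which is exactly why it can only react after the damage is visible in the performance.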

The field of neural network verification determines whether a neural network is susceptible to an adversarial attack, i.e., whether a given input image can be perturbed within a given epsilon such that the output of the network changes. Such susceptibility indicates that the input lies close to the decision boundary. When the distribution of images close to the decision boundary changes significantly, this indicates that concept drift is occurring, and we can proactively retrain the model, i.e., before performance drops. The short-term goal of this micro-project is to define ways to a) monitor the distribution of images close to the decision boundary, and b) define control mechanisms that can act upon this notion; a rough sketch of such a monitor is given below.
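
The sketch below assumes a verification backend is available as a callable `verify(model, x, epsilon)` that returns True when no perturbation within the epsilon-ball changes the prediction; all names, the window size, and the simple rate-shift test are illustrative placeholders rather than the method to be developed in the project.

```python
# Minimal sketch of the proposed proactive monitor: it tracks the fraction of
# recent inputs that are not epsilon-robust (i.e., lie close to the decision
# boundary) and advises retraining when that fraction shifts noticeably from a
# reference rate. The verifier, window size and shift test are placeholders.
from collections import deque

class BoundaryProximityMonitor:
    def __init__(self, verify, model, epsilon, window_size=200, threshold=0.1):
        self.verify = verify          # stand-in for an actual verification backend
        self.model = model
        self.epsilon = epsilon
        self.recent = deque(maxlen=window_size)  # 1 = non-robust (near the boundary)
        self.reference_rate = None    # near-boundary rate under the old concept
        self.threshold = threshold    # allowed increase before retraining is advised

    def update(self, x):
        """Process one unlabelled stream instance; return True if retraining is advised."""
        near_boundary = not self.verify(self.model, x, self.epsilon)
        self.recent.append(int(near_boundary))
        if len(self.recent) < self.recent.maxlen:
            return False
        rate = sum(self.recent) / len(self.recent)
        if self.reference_rate is None:
            self.reference_rate = rate            # calibrate on the first full window
            return False
        return rate > self.reference_rate + self.threshold
```

Because this monitor only inspects the inputs, it requires no labels and can therefore act ahead of any observable performance drop.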

A disadvantage of this approach is that verifying neural networks requires significant computation time, and substantial speed-ups will be needed before it can be used on high-throughput streams.

Output

Conference or Journal Paper – We initially aim for a top-tier venue, but will decide on the actual venue once the results and scope are determined.

Project Partners

  • Leiden University, Holger Hoos
  • INESC TEC, João Gama

Primary Contact

Holger Hoos, Leiden University