Resolving Training Biases via Influence-based Data Relabeling

High-quality datasets are essential to the successful training of supervised models. However, no matter how much care goes into data cleaning, some errors inevitably slip through, and these can cause poor performance even on relatively simple tasks.

In recent years, influence functions have re-emerged as a useful tool to estimate the impact of individual data samples on a model's predictions, thanks to works such as [Koh17U]. First introduced in the 1970s as a tool for robust statistics (see e.g. [Ham74I]), they estimate the influence of each training point on the model's predictions via the derivative of the loss around that sample.
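As a rough illustration (not the exact estimator of [Koh17U], which uses Hessian-vector products to avoid forming the Hessian), the classic influence of a training point z on the loss at a test point z_test can be written as -∇L(z_test)ᵀ H⁻¹ ∇L(z). A minimal sketch, assuming gradients and Hessian are already computed:

```python
import numpy as np

def influence_on_test_loss(grad_train, grad_test, hessian):
    """Classic influence estimate: -grad_test^T H^{-1} grad_train.

    Under this sign convention, a positive value means up-weighting the
    training point would *increase* the test loss (the point is harmful);
    note that sign conventions differ across papers.
    """
    h_inv_grad = np.linalg.solve(hessian, grad_train)  # H^{-1} grad_train
    return -grad_test @ h_inv_grad
```

In practice the Hessian is never inverted explicitly; [Koh17U] uses stochastic approximations of Hessian-vector products instead.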

Similarly, data resampling is a widely used strategy for dealing with harmful samples. It re-weights input data based on their training loss: samples with higher losses are assumed to have corrupted labels, so it can be beneficial to down-weight them during training. Nevertheless, such loss-based resampling methods have known limitations, e.g. instability when a large fraction of the data is mislabelled (for details see [Zha16U]).
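The basic idea can be sketched in a few lines. This is a toy version with an assumed hard quantile cutoff (real schemes typically use softer, continuous weights):

```python
import numpy as np

def loss_based_weights(losses, threshold=None):
    """Toy loss-based resampling: samples whose training loss exceeds a
    quantile threshold are assumed mislabelled and get weight 0,
    everything else keeps weight 1.
    """
    if threshold is None:
        # Assumed cutoff for illustration: drop the top 10% of losses.
        threshold = np.quantile(losses, 0.9)
    return (losses <= threshold).astype(float)
```

The instability mentioned above is easy to see here: if a large portion of the data is mislabelled, a fixed quantile cutoff inevitably keeps many corrupted samples at full weight.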

To address these limitations, influence functions have recently been used in place of training losses in the resampling scheme. Inspired by the success of such approaches, [Kon22R] goes one step further and re-labels the harmful data points (instead of merely down-weighting them) based on the results of the influence analysis. The new approach is named RDIA, and its code can be found on GitHub.
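A minimal sketch of the relabeling idea for binary classification, assuming influence scores have already been estimated on a validation set (this only illustrates the concept; RDIA derives the corrected labels from the influence analysis itself rather than by blindly flipping):

```python
import numpy as np

def relabel_harmful(y_train, influences):
    """Flip the binary label of every training sample whose estimated
    influence on the validation loss is harmful (here: positive).

    Returns the corrected labels and a mask of the relabelled points.
    """
    y_new = y_train.copy()
    harmful = influences > 0          # assumed convention: positive = harmful
    y_new[harmful] = 1 - y_new[harmful]  # flip 0 <-> 1
    return y_new, harmful
```

Compared to down-weighting or removal, relabeling keeps the (possibly informative) input features in the training set and only corrects the supervision signal.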

But why would this be better than simply removing the bad samples? The image above shows the average test loss of several re-labelling methods on four different datasets, each progressively corrupted with increasing amounts of label noise (reported on the x-axis). The other methods are:

  1. ERM: standard empirical risk minimization, i.e. the usual training on the full dataset.
  2. Random: randomly selects training samples and changes their labels.
  3. UIDS: introduced in [Wan20L].
  4. Dropout: removes all training data with negative influence on the validation set.
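For contrast with relabeling, the Dropout baseline above can be sketched as a simple filter (following the wording above, we treat negative influence as harmful; sign conventions differ across papers):

```python
import numpy as np

def dropout_filter(X_train, y_train, influences):
    """'Dropout' baseline sketch: discard every training sample whose
    influence on the validation set is harmful, instead of relabeling it.
    """
    keep = influences >= 0
    return X_train[keep], y_train[keep]
```

The difference to RDIA is that removal throws away the input features along with the bad label, while relabeling retains them.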

RDIA outperforms all the other methods and shows good results even under very high noise. This is somewhat expected, since it is better at leveraging the information contained in the validation data, using it to correct the labels in the training set.

While the applicability of this approach may be limited in practice, moving towards a data-centric development of ML models could yield many benefits. Model training and data cleaning, which are typically separate steps of the MLOps pipeline, are slowly becoming a two-legged iterative process, where the errors of one lead to the refinement of the other. The paper shows that this can yield surprisingly large improvements in model accuracy.

References

  • Understanding Black-box Predictions via Influence Functions, Pang Wei Koh, Percy Liang. Proceedings of the 34th International Conference on Machine Learning (2017)
  • The Influence Curve and Its Role in Robust Estimation, Frank R. Hampel. Journal of the American Statistical Association (1974)
  • Understanding deep learning requires rethinking generalization, Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals. International Conference on Learning Representations (2017)
  • Resolving Training Biases via Influence-based Data Relabeling, Shuming Kong, Yanyan Shen, Linpeng Huang. (2022)
  • Less Is Better: Unweighted Data Subsampling via Influence Function, Zifeng Wang, Hong Zhu, Zhenhua Dong, Xiuqiang He, Shao-Lun Huang. Proceedings of the AAAI Conference on Artificial Intelligence (2020)