Our paper "Repeated Knowledge Distillation with Confidence Masking to Mitigate Membership Inference Attacks" got accepted at ACM AISec 2022! In the paper, we describe a novel approach to protecting machine learning models against membership inference attacks. Concretely, we combine the known defence mechanism of "knowledge distillation" with the masking of confidence scores. Our approach is more flexible than existing defence mechanisms: its parameters can be fine-tuned, so it can achieve a tailored trade-off between model accuracy and attack protection.
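
To give a rough idea of how the two ingredients fit together, here is a minimal PyTorch sketch of one distillation step with confidence masking. It is illustrative, not the paper's exact procedure: the helper names `mask_top_k` and `distillation_step`, and the parameter choices `k=3` and `T=2.0`, are assumptions for the example.

```python
import torch
import torch.nn.functional as F


def mask_top_k(probs: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Keep only the k largest confidence scores per sample,
    zero out the rest, and renormalise each row to sum to 1."""
    topk_vals, topk_idx = probs.topk(k, dim=1)
    masked = torch.zeros_like(probs).scatter_(1, topk_idx, topk_vals)
    return masked / masked.sum(dim=1, keepdim=True)


def distillation_step(student, teacher, x, optimizer, k=3, T=2.0):
    """One update: the student learns from the teacher's *masked*
    soft labels rather than its raw confidence vector, limiting
    how much membership signal leaks into the student."""
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / T, dim=1)
        soft_labels = mask_top_k(teacher_probs, k)  # confidence masking
    student_log_probs = F.log_softmax(student(x) / T, dim=1)
    # Standard distillation loss (Hinton et al.), scaled by T^2.
    loss = F.kl_div(student_log_probs, soft_labels,
                    reduction="batchmean") * T ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Since the paper's distillation is *repeated*, one would apply such a step across successive student generations, each trained student serving as the next round's teacher; the masking parameter `k` is one of the knobs that lets you tune the accuracy/protection trade-off.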