1- Electrical and Computer Engineering Faculty, University of Tehran, Tehran, Iran
2- Department of Computer Engineering, Hamedan University of Technology, Hamedan, Iran
3- Electrical and Computer Engineering, University of Tehran, Tehran, Iran, yazdani@ut.ac.ir
Abstract:
Federated learning (FL) is a decentralized machine learning paradigm that enables multiple clients to collaboratively train a model without sharing their raw data. Because of its openness, however, FL is vulnerable to poisoning attacks, particularly label-flipping (LF), in which malicious clients alter training labels to corrupt the global model. Such attacks drift the model so that its performance degrades on the specific attack-related classes, while the attackers behave like benign clients on the remaining classes, which makes detection harder. We counter this with a defense mechanism that dynamically adjusts per-client trust factors based on last-layer gradient similarity in order to filter out malicious updates. This study builds on earlier work by evaluating the defense across a variety of datasets and more challenging adversarial scenarios, including multi-group attacks of different intensities. Experimental results show that the method keeps accuracy close to that of the clean model while substantially mitigating label-flipping, cutting the attack success rate by 50%. These findings underscore the importance of adaptive security measures for protecting FL models in hostile and evolving environments.
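To illustrate the general idea of trust-weighted aggregation driven by last-layer gradient similarity, the following is a minimal Python sketch. It is not the paper's implementation; the function names (update_trust, aggregate), the median-of-others reference direction, and the trust learning rate are illustrative assumptions.

```python
# Hypothetical sketch: per-client trust factors updated from the cosine
# similarity of last-layer gradients, then used to weight aggregation.
# Names and update rule are illustrative, not the paper's actual method.
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flattened gradient vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def update_trust(trust, last_layer_grads, lr=0.1):
    """Nudge each client's trust toward 1 if its last-layer gradient
    aligns with the median direction of the other clients, toward 0
    otherwise. Repeated rounds decay the trust of flipped-label clients."""
    n = len(last_layer_grads)
    new_trust = trust.copy()
    for i in range(n):
        others = np.stack([last_layer_grads[j] for j in range(n) if j != i])
        reference = np.median(others, axis=0)          # robust reference direction
        sim = cosine_similarity(last_layer_grads[i], reference)
        target = 1.0 if sim > 0 else 0.0
        new_trust[i] = (1 - lr) * trust[i] + lr * target
    return new_trust

def aggregate(client_updates, trust):
    """Trust-weighted averaging of full client model updates."""
    weights = trust / trust.sum()
    return sum(w * u for w, u in zip(weights, client_updates))

# Toy usage: 5 clients, the last 2 send gradients opposing the benign direction
rng = np.random.default_rng(0)
benign_dir = rng.normal(size=10)
grads = [benign_dir + 0.1 * rng.normal(size=10) for _ in range(3)]
grads += [-benign_dir + 0.1 * rng.normal(size=10) for _ in range(2)]

trust = np.ones(5)
for _ in range(10):
    trust = update_trust(trust, grads)

global_update = aggregate(grads, trust)
print("trust factors:", np.round(trust, 2))  # malicious clients end with low trust
```

In this toy example the malicious clients' trust decays over the simulated rounds, so their updates contribute little to the aggregated model, which is the qualitative behavior the abstract attributes to the defense.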