Zhu Z, Shi Y, Xin G, Peng C, Fan P, Letaief KB. Towards Efficient Federated Learning: Layer-Wise Pruning-Quantization Scheme and Coding Design. Entropy (Basel) 2023;25(8):1205. [PMID: 37628235] [PMCID: PMC10453433] [DOI: 10.3390/e25081205]
[Received: 06/19/2023] [Revised: 07/25/2023] [Accepted: 08/11/2023] [Indexed: 08/27/2023]
Abstract
As a promising distributed learning paradigm, federated learning (FL) faces communication and computation bottlenecks in practical deployments. In this work, we focus on pruning, quantization, and coding for FL. By adopting layer-wise operations, we propose an explicit and universal scheme, FedLP-Q (federated learning with layer-wise pruning-quantization). We develop pruning strategies for both homogeneous and heterogeneous scenarios, a stochastic quantization rule, and the corresponding coding scheme. Both theoretical and experimental evaluations suggest that FedLP-Q improves communication and computation efficiency with controllable performance degradation. The key novelty of FedLP-Q is that it provides a joint pruning-quantization FL framework with layer-wise processing that can easily be applied in practical FL systems.
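The abstract does not specify the exact pruning or quantization rules, so the following is only a minimal sketch of what a layer-wise stochastic quantization step in a client update could look like. The function name `stochastic_quantize`, the 4-bit setting, the `kept_layers` selection, and the per-layer uniform grid are illustrative assumptions, not the FedLP-Q implementation.

```python
import numpy as np

def stochastic_quantize(layer_weights: np.ndarray, num_bits: int = 4) -> np.ndarray:
    """Stochastically round one layer's weights to a uniform 2**num_bits-level grid.

    Stochastic rounding is unbiased (E[quantized] == layer_weights), so the
    quantization noise tends to average out across clients and rounds.
    """
    levels = 2 ** num_bits - 1
    w_min, w_max = layer_weights.min(), layer_weights.max()
    scale = (w_max - w_min) / levels if w_max > w_min else 1.0
    # Fractional position of each weight on the quantization grid.
    pos = (layer_weights - w_min) / scale
    lower = np.floor(pos)
    # Round up with probability equal to the fractional part -> unbiased rounding.
    round_up = np.random.random(layer_weights.shape) < (pos - lower)
    return w_min + (lower + round_up) * scale

# Layer-wise use (hypothetical model and layer selection): a client uploads
# only the layers kept after pruning, each quantized independently.
model = {"conv1": np.random.randn(16, 3, 3, 3),
         "conv2": np.random.randn(32, 16, 3, 3),
         "fc": np.random.randn(10, 128)}
kept_layers = {"conv1", "fc"}  # layers retained for this client this round
update = {name: stochastic_quantize(w, num_bits=4)
          for name, w in model.items() if name in kept_layers}
```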