Federated Learning: Collaborative Machine Learning Across Decentralized Data Sources
Abstract
Federated Learning has emerged as a promising approach to collaborative machine learning across decentralized data sources such as mobile phones, edge devices, and IoT sensors. This paper provides an overview of Federated Learning, covering its principles, techniques, applications, and open challenges. Unlike traditional centralized training, Federated Learning keeps raw data on the individual devices: each device trains the model locally, and only model updates are aggregated by a central server. This design preserves data privacy and reduces communication costs, making it well suited to settings where data cannot be centralized because of privacy regulations or bandwidth limits. Federated Learning has applications in domains such as healthcare, finance, telecommunications, and smart cities, where sensitive data is distributed across many devices or institutions. It also poses challenges, including communication overhead, heterogeneous (non-IID) data distributions, and the complexity of model aggregation; addressing them requires further research into the scalability, efficiency, and robustness of federated algorithms. By harnessing the potential of Federated Learning, researchers and practitioners can build collaborative machine learning systems that exploit decentralized data sources while preserving privacy.
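The train-locally-then-aggregate loop described in the abstract can be sketched in a few lines. The code below is a minimal, illustrative simulation of federated averaging on a toy least-squares problem, not an implementation from this paper: all names (`local_step`, `fed_avg`, the linear model, the learning rate) are assumptions chosen for clarity. Each simulated client runs gradient descent on its own private data, and the server aggregates the resulting weights, weighted by each client's sample count; raw data never leaves a client.

```python
import numpy as np

def local_step(w, X, y, lr=0.1, epochs=5):
    """Run a few epochs of gradient descent on one client's private data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

def fed_avg(clients, rounds=20, dim=2):
    """Server loop: broadcast the global model, collect locally trained
    models, and average them weighted by each client's sample count."""
    w = np.zeros(dim)
    n_total = sum(len(y) for _, y in clients)
    for _ in range(rounds):
        local_models = [local_step(w, X, y) for X, y in clients]
        w = sum(len(y) / n_total * m
                for (X, y), m in zip(clients, local_models))
    return w

# Toy setup: three clients, each holding its own slice of data drawn
# from the same underlying linear model (a hypothetical example).
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = fed_avg(clients)
```

In this sketch only the weight vectors travel between clients and server, which is the communication- and privacy-preserving property the abstract highlights; real systems add compression, secure aggregation, and handling of non-IID client data on top of this basic loop.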
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.