Exploring the Impact of Federated Learning on the Development of Modern Systems: A Review
Abstract
Federated Learning (FL) represents a transformative advancement in Artificial Intelligence that enables decentralized model training across multiple institutions or devices without transferring raw data. This paradigm is particularly significant for sectors such as healthcare, where privacy, security, and regulatory compliance are paramount. This review explores the core principles of FL, its architectural framework, and key variants such as horizontal and vertical FL, while examining its major applications across domains including healthcare, finance, mobile systems, and robotics. Through a critical analysis of recent studies, the paper highlights FL's potential to enhance data privacy, mitigate bias arising from heterogeneous datasets, withstand cybersecurity threats, and optimize communication efficiency. It also addresses the limitations that challenge FL's broader adoption, such as high computational costs, vulnerability to adversarial attacks, and data inconsistency across nodes. The findings suggest that FL has the potential to reshape the future of AI by enabling collaborative intelligence without compromising data confidentiality. The paper concludes by outlining future directions for FL research, including adaptive aggregation, secure federated algorithms, and energy-efficient frameworks to enable scalable, ethical, and privacy-aware AI systems.
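The decentralized training loop the abstract describes is commonly realized with federated averaging (FedAvg): each client updates the model on its own private data, and the server aggregates only the resulting weights, never the raw data. The following is a minimal illustrative sketch of that idea, not the specific method of any study in this review; the model (a one-parameter least-squares fit) and all names are assumptions chosen for brevity.

```python
# Minimal FedAvg sketch: raw data stays on each client; only model
# weights travel to the server, which averages them by data size.

def local_step(w, data, lr=0.1):
    """One local gradient step of least-squares y ~ w*x on a client's private data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(w, clients, rounds=50):
    """Server loop: broadcast w, collect local updates, average weighted by client size."""
    total = sum(len(d) for d in clients)
    for _ in range(rounds):
        updates = [local_step(w, d) for d in clients]
        w = sum(len(d) * u for d, u in zip(clients, updates)) / total
    return w

# Two clients whose private (x, y) points both follow y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = fed_avg(0.0, clients)  # converges toward w = 2 without pooling the data
```

Weighting each update by the client's sample count is what makes the aggregate equivalent, in expectation, to training on the pooled data, while the pooled data itself is never assembled.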

This work is licensed under a Creative Commons Attribution 4.0 International License.
How to Cite
[1] "Exploring the Impact of Federated Learning on the Development of Modern Systems: A Review", JUBPAS, vol. 33, no. 2, pp. 161–173, Jun. 2025, doi: 10.29196/jubpas.v33i2.5790.