The scenario describes a situation where data cannot be centralized due to legal and privacy constraints, yet the organization still wants to benefit from collective learning across multiple institutions. The key requirement is that the model is sent to the local data sources, trained locally, and only aggregated insights or model updates are shared centrally.
This is the defining principle of Federated Learning, a core component of Federated and Privacy-Preserving Learning. In this approach, each participant (in this case, a bank) trains the model on its own data locally. The updates (such as model weights or gradients) are then shared and aggregated to improve a global model, without exposing raw data.
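The train-locally-then-aggregate loop can be sketched with federated averaging (FedAvg). The sketch below is illustrative, not tied to any framework: the names (`local_step`, `fed_avg`) and the one-parameter linear model are assumptions chosen to keep the example self-contained.

```python
# Minimal federated averaging (FedAvg) sketch with a scalar linear model.
# Each "bank" holds private (x, y) samples; only the model weight leaves a client.

def local_step(weight, data, lr=0.02):
    """One pass of local gradient descent on a client's private data.

    Model: y_hat = w * x, squared-error loss (w - and lr - kept small for stability).
    """
    w = weight
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def fed_avg(local_weights):
    """Server-side aggregation: average the client weights; raw data never moves."""
    return sum(local_weights) / len(local_weights)

# Three banks, each with private samples drawn from the same rule y = 2x.
bank_data = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0)],
    [(0.5, 1.0), (4.0, 8.0)],
]

global_w = 0.0
for _round in range(50):
    updates = [local_step(global_w, data) for data in bank_data]  # local training
    global_w = fed_avg(updates)                                   # share only weights
```

After the rounds complete, `global_w` converges to the shared underlying parameter (here, 2.0) even though no bank ever exposed its samples.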
Privacy-preserving techniques such as secure aggregation and differential privacy further ensure that sensitive information cannot be reverse-engineered from shared updates.
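Both techniques can be illustrated in a few lines. The sketch below is a simplified assumption, not a production protocol: `secure_sum` shows pairwise masking (the masks cancel in the server's sum), and `privatize_update` shows clip-then-add-noise, without calibrating the noise to a formal (epsilon, delta) budget.

```python
import random

def secure_sum(updates, seed=42):
    """Secure-aggregation sketch: each client pair (i, j) shares a random mask
    that i adds and j subtracts, so individual masked updates look random while
    the server's sum still equals the true sum."""
    n = len(updates)
    rng = random.Random(seed)  # stands in for pairwise-agreed randomness
    masked = list(updates)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.uniform(-10.0, 10.0)
            masked[i] += m
            masked[j] -= m
    # The server only ever sees `masked`; the masks cancel in the total.
    return sum(masked)

def privatize_update(update, clip=1.0, noise_std=0.1):
    """Differential-privacy sketch: bound each client's influence by clipping,
    then add Gaussian noise before the update is shared."""
    clipped = max(-clip, min(clip, update))
    return clipped + random.gauss(0.0, noise_std)
```

Clipping limits how much any single bank can shift the aggregate, and the added noise makes it statistically hard to reverse-engineer an individual contribution from the shared update.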
The other options are not relevant:
Advanced neural architectures improve model capability but do not address data-sharing constraints.
Quantum computing is unrelated to distributed training in this context.
Generative AI evolution focuses on content generation, not decentralized training.
CAIPM emphasizes federated learning as a key enabler for collaborative AI in regulated industries, where data privacy and sovereignty are critical.
Therefore, the correct answer is Federated and Privacy-Preserving Learning, as it directly supports decentralized training without sharing raw data.