Background: The next wave of public-health threats will be signalled by heterogeneous data: text, imaging, genomics, mobility traces and IoT sensor streams. Training centralised AI on such sensitive, geo-distributed corpora is often unlawful or infeasible. We describe a federated, bandwidth-aware framework that stitches together GPU/TPU cycles from edge clouds, the telecom backbone and regional HPC centres into a single "compute-network" fabric, enabling privacy-preserving continual learning of trillion-parameter multimodal LLMs.
Methods:
• Partitioned-parameter LLM (Transformer-XL) with cross-modal alignment layers.
• Dynamic token- and sample-selection to minimise upstream traffic (uploading <3 % of raw data).
• Asynchronous federated learning with differential privacy (ε ≤ 1) and secure aggregation.
• Intent-based network orchestration (SONATA 3.0) to provision sub-10 ms edge inference for hotspot detection.
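The sample-selection step above can be sketched as ranking candidate samples by predictive entropy and uploading only the most informative ones until the traffic budget is reached. This is a minimal sketch, not the paper's method: the entropy criterion, the function name `select_upstream_samples`, and the default 3 % budget parameter are illustrative assumptions.

```python
import numpy as np

def select_upstream_samples(probs, budget_frac=0.03):
    """Rank samples by predictive entropy (a proxy for informativeness)
    and keep only enough to stay within the upstream traffic budget.
    The entropy score and the 3 % default budget are assumptions."""
    probs = np.asarray(probs)                       # (n_samples, n_classes)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    k = max(1, int(budget_frac * len(probs)))       # budgeted sample count
    return np.argsort(entropy)[::-1][:k]            # indices of top-k samples

# Toy usage: 200 samples with 3-class softmax outputs.
rng = np.random.default_rng(0)
logits = rng.normal(size=(200, 3))
p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
idx = select_upstream_samples(p)
print(len(idx))  # 6  (3 % of 200)
```

In practice the score could instead come from gradient norms or model disagreement; the budget is the load-bearing knob that caps cross-border traffic regardless of the scoring rule.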
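The asynchronous, differentially private update loop can be sketched as follows. This is a simplified illustration under stated assumptions: the clipping bound, noise multiplier, staleness weighting and function names are all hypothetical (calibrating the noise to ε ≤ 1 requires a privacy accountant, and secure aggregation is elided entirely).

```python
import numpy as np

def dp_client_update(local_grad, clip_norm=1.0, noise_mult=1.1, rng=None):
    """Clip a client's update in L2 norm, then add Gaussian noise
    (the Gaussian mechanism). clip_norm and noise_mult are illustrative;
    a privacy accountant would tune them to the target budget."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(local_grad)
    clipped = local_grad / max(1.0, norm / clip_norm)   # L2 clipping
    noise = rng.normal(0.0, noise_mult * clip_norm, size=local_grad.shape)
    return clipped + noise

def async_server_step(global_w, client_update, staleness, base_lr=0.1):
    """Apply a client's noisy update as soon as it arrives, down-weighting
    stale updates with a simple 1/(1 + staleness) schedule."""
    lr = base_lr / (1.0 + staleness)
    return global_w - lr * client_update

# Toy run: three clients report asynchronously with varying staleness.
rng = np.random.default_rng(0)
w = np.zeros(4)
for staleness in (0, 2, 1):
    grad = rng.normal(size=4)            # stand-in for a local gradient
    w = async_server_step(w, dp_client_update(grad, rng=rng), staleness)
print(w.shape)  # (4,)
```

The key design point the sketch reflects is that noise is added on the client before upload, so the server never sees a raw gradient; secure aggregation would further hide individual contributions.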
Results: We piloted the framework on 4.2 M chest X-rays, 1.1 B social-media posts and 0.9 M wastewater samples across 14 APAC cities. Compared with a cloud-centralised baseline, the federated system achieved:
– 37 % higher F1 in early outbreak detection (0.93 vs 0.68).
– 48 % reduction in training energy, 62 % fall in cross-border data transfer.
– Inference latency ≤ 120 ms over 5G, meeting the WHO "real-time" benchmark.
Conclusions & Action Plan: Compute-network synergy turns raw telco infrastructure into a communal AI supercomputer, letting health authorities share models instead of data.