Federated learning (FL) is no longer a research curiosity—it’s a practical response to a hard constraint: the most valuable data is often the least movable. Regulatory boundaries, data sovereignty rules, and organizational risk tolerance routinely prevent centralized aggregation. Meanwhile, sheer data gravity makes even permitted transfers slow, expensive, and fragile at scale.
The latest version of NVIDIA FLARE addresses this reality with a federated computing runtime that moves the training logic to the data, while raw data stays put. In high-stakes environments, centrally aggregating data is often not possible or practical, so a modern federated platform must treat data isolation, compliance, and privacy-enhancing technologies as first-class requirements.
What has historically slowed adoption isn’t the concept of FL—it’s the developer experience. If the path from “my local script trains” to “my job runs across federated sites” requires deep refactoring, new class hierarchies, or brittle configuration, many projects stall after the pilot.
The FLARE API evolution targets exactly that: eliminating the refactoring overhead by splitting the work into two concrete steps that map cleanly onto how teams actually build and ship ML systems:
- Step 1 (client API): Turn an existing local training script into a federated client with ~5–6 lines of code, without changing your training loop structure.
- Step 2 (job recipes): Select the FL workflow and bind it to your client training script, then run the same job across simulation, PoC, and production by swapping only the execution environment.
‘No data copy’ as a system requirement
In regulated or high-sensitivity settings, “just centralize the dataset” is increasingly off the table. A practical federated computing platform needs to support:
- No data copy: Data stays local, and only model updates (or equivalent signals) move.
- Compliance posture: Deployment and governance controls that support sovereignty and audit requirements.
- Privacy-enhancing techniques: Multiple layers of defenses (examples include homomorphic encryption, differential privacy, and confidential computing).
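Among these, differential privacy is the easiest to illustrate in isolation: clip each outbound model update to a maximum norm, then add calibrated noise before it leaves the site. Here is a minimal sketch assuming a flat list of weights; the function name and parameter values are illustrative, not a FLARE API or recommended privacy settings:

```python
# Minimal differential-privacy-style sketch: clip the outbound model update
# to a maximum L2 norm, then add Gaussian noise before it leaves the site.
# privatize_update, clip_norm, and sigma are illustrative choices, not a
# FLARE API or recommended privacy parameters.
import math
import random


def privatize_update(update, clip_norm=1.0, sigma=0.5):
    norm = math.sqrt(sum(w * w for w in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [w * scale for w in update]  # bound each site's influence
    return [w + random.gauss(0.0, sigma * clip_norm) for w in clipped]


update = [0.8, -1.2, 2.0]  # toy model update
noisy = privatize_update(update)
print(len(noisy))  # prints 3: the update keeps its shape, only its values change
```

In a real deployment this kind of transform is applied by the platform (e.g., as a filter on outgoing model updates), not hand-rolled in the training script.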
Figure 1. Federated computing keeps data in place, enabling collaboration through model updates while supporting compliance and privacy-enhancing protections.

The refactoring cliff: Why FL projects stall
Teams typically hit one of two cliffs after the pilot:
- The code cliff: Converting working PyTorch/TensorFlow/Lightning training into FL can require invasive restructuring—new abstractions, messaging glue, and framework-specific scaffolding.
- The lifecycle cliff: Even when simulation works, moving to PoC and production triggers rewrites via job redefinition, reconfiguration, and environment-specific branching.

FLARE flattens both cliffs by standardizing the workflow into two steps:
1. Make your script federated (client API)
2. Execute it as a portable job (job recipe)

The intended experience is to combine these two steps so you can go from zero to an operational federated job quickly.
Step 1: Convert your local training script into a federated client (client API)
Who it’s for: Practitioners and ML engineers with existing training code who want the smallest possible difference.
The mental model is intentionally simple:
1. Initialize the client runtime
2. Loop while the job is running:
   - Receive the current global model
   - Train locally (your code)
   - Send updated weights and metrics back

FLARE’s client API is designed for minimal code changes and avoids forcing you into heavy “Executor/Learner” inheritance—use the FLModel structure or simple data exchange to communicate with the runtime.
Example 1a: Convert PyTorch to FLARE
Below is a concrete pattern you can apply to many scripts. The key touchpoints are: flare.init(), flare.receive(), loading model weights, and flare.send() with updated weights and metrics.
We show the local training code first, followed by the federated version, highlighting the federated touchpoints: the import, flare.init(), receive(), and send().
train.py
```python
# train.py
import torch
import torchvision
import torchvision.transforms as transforms

from model import Net

batch_size = 4
epochs = 1
lr = 0.01
model = Net()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
loss = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)

transform = transforms.Compose(
    [
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ]
)
train_dataset = torchvision.datasets.CIFAR10(
    root="/tmp/data/cifar10", transform=transform, download=True, train=True
)
trainloader = torch.utils.data.DataLoader(
    train_dataset, batch_size=batch_size, shuffle=True
)

model.to(device)
for epoch in range(epochs):
    running_loss = 0.0
    for i, batch in enumerate(trainloader):
        images, labels = batch[0].to(device), batch[1].to(device)
        optimizer.zero_grad()
        predictions = model(images)
        cost = loss(predictions, labels)
        cost.backward()
        optimizer.step()

        running_loss += cost.cpu().detach().numpy() / batch_size
        if i % 3000 == 2999:
            print(
                f"Epoch: {epoch + 1}/{epochs}, batch: {i + 1}, Loss: {running_loss / 3000}"
            )
            running_loss = 0.0

    print(
        f"Epoch: {epoch + 1}/{epochs}, batch: {i + 1}, Loss: {running_loss / (i + 1)}"
    )

print("Finished Training")
torch.save(model.state_dict(), "./cifar_net.pth")
```

client.py
```python
# client.py
# 1. Import client API
import nvflare.client as flare

import torch
import torchvision
import torchvision.transforms as transforms

from model import Net

batch_size = 4
epochs = 1
lr = 0.01
model = Net()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
loss = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)

transform = transforms.Compose(
    [
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ]
)
train_dataset = torchvision.datasets.CIFAR10(
    root="/tmp/data/cifar10", transform=transform, download=True, train=True
)
trainloader = torch.utils.data.DataLoader(
    train_dataset, batch_size=batch_size, shuffle=True
)

# 2. Initialize FLARE
flare.init()

# At each round while FLARE is running
while flare.is_running():
    # 3. Receive the global model
    input_model = flare.receive()

    # 4. Load global model
    model.load_state_dict(input_model.params)
    model.to(device)

    for epoch in range(epochs):
        running_loss = 0.0
        for i, batch in enumerate(trainloader):
            images, labels = batch[0].to(device), batch[1].to(device)
            optimizer.zero_grad()
            predictions = model(images)
            cost = loss(predictions, labels)
            cost.backward()
            optimizer.step()

            running_loss += cost.cpu().detach().numpy() / batch_size
            if i % 3000 == 2999:
                print(
                    f"Epoch: {epoch + 1}/{epochs}, batch: {i + 1}, Loss: {running_loss / 3000}"
                )
                running_loss = 0.0

        print(
            f"Epoch: {epoch + 1}/{epochs}, batch: {i + 1}, Loss: {running_loss / (i + 1)}"
        )

    print("Finished Training")
    torch.save(model.state_dict(), "./cifar_net.pth")

    # 5. Send back the updated model
    output_model = flare.FLModel(
        params=model.cpu().state_dict(),
        meta={"NUM_STEPS_CURRENT_ROUND": len(trainloader) * epochs},
    )
    flare.send(output_model)
```

Example 1b: PyTorch Lightning client
The Lightning integration keeps the same intent—receive global model, train, send updates—but exposes it in a Lightning-friendly way: import the Lightning client adapter and patch the Trainer.
The typical flow is: import, patch, (optional) validate, train as usual.
The point: Lightning users don’t have to drop into custom federated messaging—they keep the Trainer abstraction and still participate correctly in FL rounds.
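To make that flow concrete without pulling in Lightning itself, here is a runnable sketch built from stand-in classes. The class names and patch mechanics below are simplified stand-ins, not the real adapter (which lives in nvflare.client.lightning); the takeaway is the shape of the flow, not the names.

```python
# Runnable sketch with stand-in classes: StubTrainer and StubFlareAdapter
# are simplified stand-ins, not the real nvflare.client.lightning API.
# The point is the flow: patch the Trainer, then loop (validate, fit).

class StubTrainer:
    """Stands in for lightning.Trainer."""
    def __init__(self):
        self.patched = False
        self.fitted_rounds = 0

    def validate(self, model):
        # (optional) evaluate the just-received global model
        pass

    def fit(self, model):
        # local training; once patched, the real adapter also exchanges
        # model weights with the FL server around this call
        self.fitted_rounds += 1


class StubFlareAdapter:
    """Stands in for the FLARE Lightning adapter."""
    def __init__(self, rounds):
        self._rounds_left = rounds

    def patch(self, trainer):
        # real patching hooks receive/send into Trainer callbacks
        trainer.patched = True

    def is_running(self):
        self._rounds_left -= 1
        return self._rounds_left >= 0


flare = StubFlareAdapter(rounds=2)
trainer = StubTrainer()
flare.patch(trainer)              # import + patch: the only new lines

while flare.is_running():         # loop over FL rounds
    trainer.validate(model=None)  # optional global-model validation
    trainer.fit(model=None)       # train as usual

print(trainer.fitted_rounds)      # prints 2 (one fit per round)
```

Because the federation logic rides on the patched Trainer, the rest of the Lightning module, data module, and callbacks stay exactly as they were.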
Step 2: Package and execute the federated job anywhere (job recipes)
Who it’s for: Data scientists and applied teams who want a code-first job definition that remains stable across environments.
After step 1, you have a federated client script. Step 2 makes it a federated job you can run repeatedly and move through the lifecycle cleanly.
Job recipes are designed to replace JSON-based job configuration with a Python-based job definition:
- Code-first: Define complete FL jobs in Python, not complex config files
- Write once, run anywhere: The same recipe runs in simulator, PoC, or production
- Speed to deployment: Go from experimentation to deployment without changing code structure
Example 2a: Execute a FedAvg recipe in simulation
The key linkage is that your recipe references the client training script you created in step 1 (e.g., train_script="client.py"), then you execute it in an environment.
This is the “write once” idea in practice: Once the recipe correctly references your client script, the rest becomes an execution concern.
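The following structural stand-in (plain dataclasses, not the nvflare API) shows the linkage that matters: the recipe holds train_script="client.py" from step 1, and execute() takes nothing but an environment.

```python
# Structural stand-in (plain dataclasses, not the nvflare API) showing the
# key linkage: the recipe binds the client script from step 1 via
# train_script, and execution is just "recipe + environment".
from dataclasses import dataclass


@dataclass
class FedAvgRecipe:        # stand-in for a FedAvg job recipe
    name: str
    train_script: str      # the client script produced in step 1
    num_rounds: int
    min_clients: int

    def execute(self, env):
        # the recipe never changes; only the environment does
        return env.run(self)


@dataclass
class SimEnv:              # stand-in for the simulation environment
    num_clients: int

    def run(self, recipe):
        return (f"simulating {recipe.name}: {recipe.num_rounds} rounds, "
                f"{self.num_clients} clients, script={recipe.train_script}")


recipe = FedAvgRecipe(name="fedavg-cifar10", train_script="client.py",
                      num_rounds=5, min_clients=2)
print(recipe.execute(SimEnv(num_clients=2)))
```

Consult the JobRecipe docs for the real class names and parameters in your FLARE version; the structure above is only meant to show where the client script plugs in.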
Example 2b: Move from simulation to the real world with an environment swap
Job recipes formalize a progressive workflow by swapping the execution environment:
- SimEnv (Simulation): Easy development, rapid debugging
- PocEnv (Proof-of-Concept): Local runtime, multi-process, realistic testing
- ProdEnv (Production): Distributed deployment on secure, scalable infrastructure
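The lifecycle step can be sketched in a few lines of plain Python (stand-in classes, not the nvflare API): the job definition is fixed, and the only thing that changes between stages is the environment object handed to execution.

```python
# Stand-in sketch (not the nvflare API) of the environment swap: the job
# definition is fixed, and only the env object passed to execution changes.

class _Env:
    label = "base"

    def run(self, job_name):
        return f"{self.label}:{job_name}"


class SimEnv(_Env):   # local simulator for development and debugging
    label = "sim"


class PocEnv(_Env):   # local multi-process proof of concept
    label = "poc"


class ProdEnv(_Env):  # distributed production deployment
    label = "prod"


def execute(job_name, env):
    # The job stays identical; the environment decides where it runs.
    return env.run(job_name)


job = "fedavg-cifar10"
results = [execute(job, env()) for env in (SimEnv, PocEnv, ProdEnv)]
print(results)  # same job name, three environments
```

The design choice this illustrates: promotion from simulation to production is a one-argument change, so nothing about the job itself needs to be re-validated at each stage.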
Figure 2. One JobRecipe, multiple execution environments: Debug in SimEnv, validate in PocEnv, and deploy in ProdEnv without rewriting the job definition
Getting started
Start with a script you already trust.

- Step 1: Add the client API handshake (or patch your Lightning Trainer).
- Step 2: Wrap it in a job recipe and execute first in simulation, then PoC, then production by swapping environments.

FLARE in the News
FLARE is showing up in real deployments—from Eli Lilly TuneLab’s federated learning platform (built by Rhino Federated Computing using NVFlare) to Taiwan MOHW’s national healthcare federated learning initiative, and a Tri-labs (Sandia/LANL/LLNL) federated AI pilot across sensitive datasets.
Going further
Start with a script you already trust. Add the minimal FLARE client handshake (receive → train → send). Then scale from single-node simulation to multi-site deployment when you’re ready.
- Start here: Hello World examples (fastest path to your first federated run) — NVFlare Hello World
- Watch the walkthrough: see the simplified API stack in action — Webinar recording
- Client API docs
- JobRecipe docs
- NVFlare on GitHub