Investigate availability of Flower metadata #242

@elichad

Description

Some example metadata that might be useful:

  • input dataset ids
  • configuration parameter values (though many of these come from the pyproject.toml so we might already have this part covered)
  • aggregation method
  • performance metrics
  • resource usage metrics
  • start/end time of the run
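To make the discussion concrete, here is a minimal sketch of what a JSON-serialisable record covering these fields could look like. All field names and the helper function are illustrative assumptions only — there is no agreed schema yet, and this is not an existing Flower or RO-Crate structure:

```python
import json

def build_run_metadata(dataset_ids, config, aggregation_method,
                       metrics, started, finished):
    """Assemble a JSON-serialisable record of one federated run.

    Field names are placeholders for discussion, not a proposed standard.
    """
    return {
        "input_dataset_ids": dataset_ids,
        "configuration": config,            # e.g. values from pyproject.toml
        "aggregation_method": aggregation_method,
        "performance_metrics": metrics,
        "start_time": started,              # ISO 8601 strings
        "end_time": finished,
    }

# Example values loosely based on the sample run below.
record = build_run_metadata(
    dataset_ids=["cifar10-partition-0", "cifar10-partition-1"],
    config={"num-server-rounds": 3, "lr": 0.1},
    aggregation_method="FedAvg",
    metrics={"train_loss": {1: 2.2276, 2: 2.1386, 3: 2.0864}},
    started="2024-01-01T12:00:00Z",
    finished="2024-01-01T12:00:49Z",
)
print(json.dumps(record, indent=2))
```

Whether a record like this is populated from inside the ServerApp, from a wrapper around `flwr run`, or from an RO-Crate profile is exactly the open question.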

Some of these are exposed in stdout when Flower is running (see the sample below), but I don't know whether they are directly accessible within the client and server scripts.

This will feed into the decision of whether to integrate RO-Crate support directly into Flower, or to build a plugin/tool that sits alongside it and can be incorporated into the Flower config scripts.

(.venv) eli@AP-5Y3TQ73:~/feda/quickstart-pytorch$ flwr run .
INFO :      Starting FedAvg strategy:
INFO :          ├── Number of rounds: 3
INFO :          ├── ArrayRecord (0.24 MB)
INFO :          ├── ConfigRecord (train): {'lr': 0.1}
INFO :          ├── ConfigRecord (evaluate): (empty!)
INFO :          ├──> Sampling:
INFO :          │       ├──Fraction: train (1.00) | evaluate ( 0.50)
INFO :          │       ├──Minimum nodes: train (2) | evaluate (2)
INFO :          │       └──Minimum available nodes: 2
INFO :          └──> Keys in records:
INFO :                  ├── Weighted by: 'num-examples'
INFO :                  ├── ArrayRecord key: 'arrays'
INFO :                  └── ConfigRecord key: 'config'

...

INFO :      Strategy execution finished in 49.36s
INFO :      
INFO :      Final results:
INFO :      
INFO :          Global Arrays:
INFO :                  ArrayRecord (0.238 MB)
INFO :      
INFO :          Aggregated ClientApp-side Train Metrics:
INFO :          { 1: {'train_loss': '2.2276e+00'},
INFO :            2: {'train_loss': '2.1386e+00'},
INFO :            3: {'train_loss': '2.0864e+00'}}
INFO :      
INFO :          Aggregated ClientApp-side Evaluate Metrics:
INFO :          { 1: {'eval_acc': '1.0100e-01', 'eval_loss': '2.3375e+00'},
INFO :            2: {'eval_acc': '1.4480e-01', 'eval_loss': '2.2189e+00'},
INFO :            3: {'eval_acc': '1.7760e-01', 'eval_loss': '2.1385e+00'}}
INFO :      
INFO :          ServerApp-side Evaluate Metrics:
INFO :          { 0: {'accuracy': '1.0000e-01', 'loss': '2.3035e+00'},
INFO :            1: {'accuracy': '1.1100e-01', 'loss': '2.3362e+00'},
INFO :            2: {'accuracy': '1.4640e-01', 'loss': '2.2177e+00'},
INFO :            3: {'accuracy': '1.8500e-01', 'loss': '2.1380e+00'}}
INFO :      

Saving final model to disk...
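If these values turn out not to be programmatically accessible, a fallback worth noting is that they can be scraped from the log output above. A rough sketch, assuming the `INFO :` prefix and Python-literal dict layout shown in the sample (deliberately brittle — it only demonstrates that the data is recoverable, not a robust parser):

```python
import ast
import re

# Excerpt of the stdout shown above.
LOG = """\
INFO :          Aggregated ClientApp-side Train Metrics:
INFO :          { 1: {'train_loss': '2.2276e+00'},
INFO :            2: {'train_loss': '2.1386e+00'},
INFO :            3: {'train_loss': '2.0864e+00'}}
"""

def parse_metrics_block(log_text, section):
    """Extract the per-round metrics dict printed under `section`.

    Strips the 'INFO :' prefixes, collects lines until the closing '}}',
    and evaluates the joined text as a Python literal.
    """
    lines = [re.sub(r"^INFO\s*:\s*", "", ln) for ln in log_text.splitlines()]
    captured, capturing = [], False
    for ln in lines:
        if section in ln:
            capturing = True
            continue
        if capturing:
            captured.append(ln)
            if ln.rstrip().endswith("}}"):
                break
    return ast.literal_eval(" ".join(captured))

metrics = parse_metrics_block(LOG, "Train Metrics")
print(metrics[1]["train_loss"])  # prints 2.2276e+00
```

This would only ever be a stopgap for the standalone-tool option; first-class access from within the ServerApp would clearly be preferable.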
