fedscale.cloud.execution package

Submodules

fedscale.cloud.execution.client_base module

class fedscale.cloud.execution.client_base.ClientBase[source]

Bases: ABC

Represents a framework-agnostic FL client that can perform training and evaluation.

abstract get_model_adapter(model) ModelAdapterBase[source]

Return a framework-specific model adapter.

Parameters:

model – the model

Returns:

A model adapter containing the model

abstract test(client_data, model, conf)[source]

Perform a testing task.

Parameters:
  • client_data – client evaluation dataset

  • model – the framework-specific model

  • conf – job config

Returns:

Testing results

abstract train(client_data, model, conf)[source]

Perform a training task.

Parameters:
  • client_data – client training dataset

  • model – the framework-specific model

  • conf – job config

Returns:

Training results
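The three abstract methods above define the contract every framework-specific client must satisfy. A minimal sketch of a concrete subclass follows; the ABC is mirrored locally so the snippet is self-contained, and `EchoClient` is a hypothetical toy (real subclasses such as TorchClient derive from `fedscale.cloud.execution.client_base.ClientBase` and return a real model adapter):

```python
from abc import ABC, abstractmethod

# Local mirror of the ClientBase contract, so this sketch runs standalone.
class ClientBase(ABC):
    @abstractmethod
    def get_model_adapter(self, model): ...
    @abstractmethod
    def train(self, client_data, model, conf): ...
    @abstractmethod
    def test(self, client_data, model, conf): ...

class EchoClient(ClientBase):
    """Toy client: 'trains' by counting samples, 'tests' by echoing them."""

    def get_model_adapter(self, model):
        # A real client wraps the model in a ModelAdapterBase subclass.
        return model

    def train(self, client_data, model, conf):
        return {"client_id": conf.get("client_id"),
                "trained_samples": len(client_data)}

    def test(self, client_data, model, conf):
        return {"test_samples": len(client_data)}

client = EchoClient()
result = client.train([1, 2, 3], model=None, conf={"client_id": 7})
```

Instantiating `ClientBase` directly raises `TypeError`; only subclasses that implement all three methods are concrete.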

fedscale.cloud.execution.data_processor module

fedscale.cloud.execution.data_processor.collate(examples)[source]
fedscale.cloud.execution.data_processor.voice_collate_fn(batch)[source]

fedscale.cloud.execution.executor module

class fedscale.cloud.execution.executor.Executor(args)[source]

Bases: object

Abstract class for FedScale executor.

Parameters:

args (dictionary) – Variable arguments for the FedScale runtime config; defaults to the setup in arg_parser.py.

Stop()[source]

Stop the current executor

Test(config)[source]

Model Testing. By default, we test the accuracy on all data of clients in the test group

Parameters:

config (dictionary) – The client testing config.

Train(config)[source]

Load the training config and data, then start training on that client

Parameters:

config (dictionary) – The client training config.

Returns:

The client id and train result

Return type:

tuple (int, dictionary)

UpdateModel(model_weights)[source]

Receive the broadcast global model for the current round

Parameters:

model_weights (PyTorch or TensorFlow model weights) – The broadcast global model weights

client_ping()[source]

Ping the aggregator for a new task

client_register()[source]

Register the executor information with the aggregator

deserialize_response(responses)[source]

Deserialize the response from server

Parameters:

responses (byte stream) – Serialized response from server.

Returns:

The deserialized response object from server.

Return type:

ServerResponse defined at job_api.proto

dispatch_worker_events(request)[source]

Add new events to worker queues

Parameters:

request (string) – gRPC request from the server (e.g. MODEL_TEST, MODEL_TRAIN) to add to the event queue.
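`dispatch_worker_events` and `event_monitor` form a simple producer/consumer pair. A dependency-free sketch of that pattern, using a stdlib deque and plain strings in place of the real gRPC request objects (names and event strings here are illustrative):

```python
from collections import deque

event_queue = deque()

def dispatch_worker_events(request):
    """Append a server event (e.g. MODEL_TRAIN, MODEL_TEST) to the worker queue."""
    event_queue.append(request)

def event_monitor_once():
    """Pop and handle one pending event, if any.

    The real event_monitor runs as a continuous loop; this one-shot
    variant keeps the sketch testable.
    """
    if event_queue:
        event = event_queue.popleft()
        return f"handled {event}"
    return None

dispatch_worker_events("MODEL_TRAIN")
dispatch_worker_events("MODEL_TEST")
```

Events are consumed in FIFO order, so training requests dispatched first are handled first.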

event_monitor()[source]

Activate the event handler upon receiving a new message

get_client_trainer(conf)[source]

Return a framework-specific client that handles training and evaluation.

Parameters:

conf – job config

Returns:

A framework-specific client instance

init_control_communication()[source]

Create communication channel between coordinator and executor. This channel serves control messages.

init_data()[source]

Return the training and testing dataset

Returns:

The partitioned dataset class for training and testing

Return type:

Tuple of DataPartitioner class

init_data_communication()[source]

Handle heavy data traffic (e.g., fetching training results)

override_conf(config)[source]

Override the variable arguments for a specific client

Parameters:

config (dictionary) – The client runtime config.

Returns:

Variable arguments for client runtime config.

Return type:

dictionary
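`override_conf` merges a per-client config into the global runtime arguments. A self-contained sketch of that merge, assuming the args object behaves like an attribute namespace (the function and field names here are illustrative, not FedScale's actual implementation):

```python
import copy
from types import SimpleNamespace

def override_conf(base_args, client_config):
    """Return a copy of the runtime args with per-client overrides applied.

    Hypothetical helper mirroring Executor.override_conf: each key in the
    coordinator-supplied client config replaces the matching attribute of
    a deep copy of the global args, leaving the original untouched.
    """
    args = copy.deepcopy(base_args)
    for key, value in client_config.items():
        setattr(args, key, value)
    return args

base = SimpleNamespace(learning_rate=0.05, local_steps=20)
client_args = override_conf(base, {"learning_rate": 0.01})
```

Copying before overriding matters: the same base args are reused across many clients per round, so in-place mutation would leak one client's overrides into the next.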

report_executor_info_handler()[source]

Return the statistics of the training dataset

Returns:

The statistics of the training dataset; in simulation, the number of clients

Return type:

int

run()[source]

Start running the executor by setting up the execution and communication environment, and monitoring gRPC messages.

serialize_response(responses)[source]

Serialize the response to send to server upon assigned job completion

Parameters:

responses (string, bool, or bytes) – TorchClient responses after job completion.

Returns:

The serialized response object to server.

Return type:

byte stream
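`serialize_response` and `deserialize_response` are inverses across the executor/aggregator boundary. The actual wire format is the `ServerResponse` message defined in job_api.proto; this sketch substitutes pickle as a stand-in codec purely to illustrate the round-trip:

```python
import pickle

def serialize_response(response):
    """Serialize a client result to bytes.

    Stand-in for the proto encoding used by the real Executor; any
    picklable response (dict, string, bool, bytes) round-trips.
    """
    return pickle.dumps(response)

def deserialize_response(payload):
    """Inverse of serialize_response: bytes back to the response object."""
    return pickle.loads(payload)

result = {"client_id": 3, "loss": 0.42}
wire = serialize_response(result)
```

Whatever codec is used, the invariant to preserve is `deserialize_response(serialize_response(x)) == x`.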

setup_communication()[source]

Set up the gRPC connection

setup_env()[source]

Set up the experiment environment

setup_seed(seed=1)[source]

Set random seed for reproducibility

Parameters:

seed (int) – random seed
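A sketch of what `setup_seed` does, restricted to the stdlib so it runs anywhere; the real method presumably also seeds numpy and torch, which are omitted here to keep the snippet dependency-free:

```python
import os
import random

def setup_seed(seed=1):
    """Seed the stdlib RNG and hash randomization for reproducibility.

    Sketch only: FedScale's version would additionally call
    numpy.random.seed and torch.manual_seed with the same value.
    """
    random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)

# Re-seeding with the same value replays the same random stream.
setup_seed(1)
a = random.random()
setup_seed(1)
b = random.random()
```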

testing_handler()[source]

Test model

Parameters:
  • args (dictionary) – Variable arguments for the FedScale runtime config; defaults to the setup in arg_parser.py.

  • config (dictionary) – Variable arguments from coordinator.

Returns:

The test result

Return type:

dictionary

training_handler(client_id, conf, model)[source]

Train the model for a given client id

Parameters:
  • client_id (int) – The client id.

  • conf (dictionary) – The client runtime config.

  • model – The framework-specific model to train.

Returns:

The train result

Return type:

dictionary

fedscale.cloud.execution.optimizers module

class fedscale.cloud.execution.optimizers.ClientOptimizer(sample_seed=233)[source]

Bases: object

update_client_weight(conf, model, global_model=None)[source]
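The `global_model` argument suggests `update_client_weight` applies a correction relative to the broadcast global model (e.g., a FedProx-style proximal step). That is an inference, not documented behavior; the sketch below illustrates one plausible reading on plain float lists, with the `proxy_mu` name being a hypothetical config key:

```python
def update_client_weight(conf, weights, global_weights=None):
    """Illustrative sketch of a client-side weight correction.

    Hypothetical: when a global model is supplied, nudge each local
    parameter toward its global counterpart by conf['proxy_mu']
    (a FedProx-style proximal step); otherwise leave weights unchanged.
    The real ClientOptimizer operates on framework model objects.
    """
    if global_weights is None:
        return weights
    mu = conf.get("proxy_mu", 0.1)
    return [w - mu * (w - g) for w, g in zip(weights, global_weights)]

local = [1.0, 2.0]
updated = update_client_weight({"proxy_mu": 0.5}, local, [0.0, 0.0])
```

With `proxy_mu = 0.5` and a zero global model, each local weight is pulled halfway toward zero.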

fedscale.cloud.execution.rl_client module

class fedscale.cloud.execution.rl_client.RLClient(conf)[source]

Bases: TorchClient

A reinforcement learning client for Federated Learning, built on TorchClient

test(client_data, model, conf)[source]

Perform a testing task.

Parameters:
  • client_data – client evaluation dataset

  • model – the framework-specific model

  • conf – job config

Returns:

Testing results

train(client_data, model, conf)[source]

Perform a training task.

Parameters:
  • client_data – client training dataset

  • model – the framework-specific model

  • conf – job config

Returns:

Training results

fedscale.cloud.execution.tensorflow_client module

class fedscale.cloud.execution.tensorflow_client.TensorflowClient(args)[source]

Bases: ClientBase

Implements a TensorFlow-based client for training and evaluation.

get_model_adapter(model) TensorflowModelAdapter[source]

Return a framework-specific model adapter.

Parameters:

model – the model

Returns:

A model adapter containing the model

test(client_data, model, conf)[source]

Perform a testing task.

Parameters:
  • client_data – client evaluation dataset

  • model – the framework-specific model

  • conf – job config

Returns:

Testing results

train(client_data, model, conf)[source]

Perform a training task.

Parameters:
  • client_data – client training dataset

  • model – the framework-specific model

  • conf – job config

Returns:

Training results

fedscale.cloud.execution.torch_client module

class fedscale.cloud.execution.torch_client.TorchClient(args)[source]

Bases: ClientBase

Implements a PyTorch-based client for training and evaluation.

get_criterion(conf)[source]
get_model_adapter(model) TorchModelAdapter[source]

Return a framework-specific model adapter.

Parameters:

model – the model

Returns:

A model adapter containing the model

get_optimizer(model, conf)[source]
test(client_data, model, conf)[source]

Perform a testing task.

Parameters:
  • client_data – client evaluation dataset

  • model – the framework-specific model

  • conf – job config

Returns:

Testing results

train(client_data, model, conf)[source]

Perform a training task.

Parameters:
  • client_data – client training dataset

  • model – the framework-specific model

  • conf – job config

Returns:

Training results

train_step(client_data, conf, model, optimizer, criterion)[source]

Module contents