fedscale.core.execution package

Submodules

fedscale.core.execution.executor module

class fedscale.core.execution.executor.Executor(args)[source]

Bases: object

Abstract class for FedScale executor.

Parameters:

args (dictionary) – Variable arguments for the FedScale runtime config; defaults to the setup in arg_parser.py.

Stop()[source]

Stop the current executor

Test(config)[source]

Model testing. By default, accuracy is evaluated on all data of the clients in the test group.

Parameters:

config (dictionary) – The client testing config.

Train(config)[source]

Load the training config and data, and start training on that client

Parameters:

config (dictionary) – The client training config.

Returns:

The client id and train result

Return type:

tuple (int, dictionary)

UpdateModel(config)[source]

Receive the broadcasted global model for the current round

Parameters:

config (PyTorch or TensorFlow model) – The broadcasted global model config

client_ping()[source]

Ping the aggregator for a new task

client_register()[source]

Register the executor's information with the aggregator

deserialize_response(responses)[source]

Deserialize the response from the server

Parameters:

responses (byte stream) – Serialized response from server.

Returns:

The deserialized response object from server.

Return type:

ServerResponse, defined in job_api.proto

dispatch_worker_events(request)[source]

Add new events to worker queues

Parameters:

request (string) – The gRPC request from the server (e.g., MODEL_TEST, MODEL_TRAIN) to add to the event_queue.

event_monitor()[source]

Activate the event handler upon receiving a new message
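The dispatch/monitor pair above amounts to a producer-consumer event loop. A minimal sketch, assuming illustrative event names and a plain in-memory queue (FedScale's actual event constants and queue types may differ):

```python
from collections import deque

# Hypothetical event names; FedScale defines its gRPC events elsewhere.
MODEL_TRAIN, MODEL_TEST, SHUTDOWN = "MODEL_TRAIN", "MODEL_TEST", "SHUTDOWN"

event_queue = deque()

def dispatch_worker_events(request):
    # Append the server's request to the worker event queue.
    event_queue.append(request)

def event_monitor():
    # Drain the queue, handling each event until a shutdown arrives.
    handled = []
    while event_queue:
        event = event_queue.popleft()
        if event == SHUTDOWN:
            break
        handled.append(event)
    return handled

dispatch_worker_events(MODEL_TRAIN)
dispatch_worker_events(MODEL_TEST)
dispatch_worker_events(SHUTDOWN)
print(event_monitor())  # ['MODEL_TRAIN', 'MODEL_TEST']
```

In the real executor the monitor runs continuously and blocks on new gRPC messages rather than draining a finite queue.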

get_client_trainer(conf)[source]

An abstract base class for a client with a training handler; developers can override this function to customize client training.

Parameters:

conf (dictionary) – The client runtime config.

Returns:

An abstract base client class with runtime config conf.

Return type:

Client
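A minimal sketch of the customization point described above, assuming a hypothetical `SimpleClient` class; the client interface shown (`train(client_data, model, conf)`) is illustrative, not FedScale's exact signature:

```python
# Hypothetical sketch: a custom client trainer returned by get_client_trainer.
class SimpleClient:
    def __init__(self, conf):
        self.conf = conf

    def train(self, client_data, model, conf):
        # A real implementation would run local training here and return
        # the updated weights plus utility statistics.
        return {"client_id": conf.get("client_id"), "moving_loss": 0.0}

class CustomExecutor:
    # Stands in for a subclass of fedscale.core.execution.executor.Executor.
    def get_client_trainer(self, conf):
        return SimpleClient(conf)

trainer = CustomExecutor().get_client_trainer({"client_id": 3})
result = trainer.train(client_data=None, model=None, conf={"client_id": 3})
```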

init_control_communication()[source]

Create communication channel between coordinator and executor. This channel serves control messages.

init_data()[source]

Return the training and testing datasets

Returns:

The partitioned dataset classes for training and testing

Return type:

Tuple of DataPartitioner class
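To illustrate what a partitioned dataset provides, here is a hypothetical stand-in for DataPartitioner that splits sample indices across clients round-robin; FedScale's real partitioner also supports realistic non-IID client-to-sample mappings:

```python
def partition_indices(num_samples, num_clients):
    # Assign sample index i to client (i mod num_clients); every sample
    # lands in exactly one client partition.
    return [list(range(i, num_samples, num_clients)) for i in range(num_clients)]

train_parts = partition_indices(10, 3)
```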

init_data_communication()[source]

Handle bulk data traffic (e.g., fetching training results)

init_model()[source]

Get the model architecture used in training

Returns:

Based on the executor’s machine learning framework, initialize and return the model for training

Return type:

PyTorch or TensorFlow module

load_global_model()[source]

Load the latest global model

Returns:

The latest global model

Return type:

PyTorch or TensorFlow model

override_conf(config)[source]

Override the variable arguments for a specific client

Parameters:

config (dictionary) – The client runtime config.

Returns:

Variable arguments for client runtime config.

Return type:

dictionary
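One plausible way to implement this override, sketched with an `argparse.Namespace` as the runtime args (the field names below are assumptions, not FedScale's actual config keys):

```python
import argparse
import copy

def override_conf(base_args, config):
    # Clone the executor's runtime args and overwrite the per-client fields,
    # leaving the shared defaults untouched.
    args = copy.deepcopy(base_args)
    for key, value in config.items():
        setattr(args, key, value)
    return args

base = argparse.Namespace(learning_rate=0.05, batch_size=32)
client_args = override_conf(base, {"learning_rate": 0.01})
```

The deep copy keeps one client's overrides from leaking into another client's config.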

report_executor_info_handler()[source]

Return the statistics of the training dataset

Returns:

The statistics of the training dataset; in simulation, the number of clients

Return type:

int

run()[source]

Start the executor by setting up the execution and communication environment, then monitor gRPC messages.

serialize_response(responses)[source]

Serialize the response to send to the server upon completion of the assigned job

Parameters:

responses (string, bool, or bytes) – Client responses after job completion.

Returns:

The serialized response object to server.

Return type:

byte stream
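A minimal sketch of the serialize/deserialize pair using pickle; this is one plausible implementation, while the actual wire format for server responses is the protobuf message defined in job_api.proto:

```python
import pickle

def serialize_response(responses):
    # Turn an arbitrary Python object into a byte stream for gRPC transport.
    return pickle.dumps(responses)

def deserialize_response(responses):
    # Inverse operation: reconstruct the object from the byte stream.
    return pickle.loads(responses)

payload = {"client_id": 7, "moving_loss": 0.42}
restored = deserialize_response(serialize_response(payload))
```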

setup_communication()[source]

Set up the gRPC connection

setup_env()[source]

Set up the experiment environment

setup_seed(seed=1)[source]

Set random seed for reproducibility

Parameters:

seed (int) – random seed
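A sketch of the seeding pattern, assuming only the standard-library RNG here; a PyTorch or NumPy deployment would seed those libraries as well (shown in comments):

```python
import random

def setup_seed(seed=1):
    # Seed the RNG so repeated runs produce identical random draws.
    random.seed(seed)
    # A PyTorch deployment would additionally call, e.g.:
    #   torch.manual_seed(seed); torch.cuda.manual_seed_all(seed)
    # and a NumPy one: np.random.seed(seed)

setup_seed(1)
first = random.random()
setup_seed(1)
second = random.random()  # identical to `first`
```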

testing_handler(args)[source]

Test model

Parameters:

args (dictionary) – Variable arguments for the FedScale runtime config; defaults to the setup in arg_parser.py.

Returns:

The test result

Return type:

dictionary

training_handler(clientId, conf, model=None)[source]

Train the model for the given client id

Parameters:
  • clientId (int) – The client id.

  • conf (dictionary) – The client runtime config.

  • model (PyTorch or TensorFlow model) – The model to train, if provided; defaults to None.

Returns:

The train result

Return type:

dictionary

update_model_handler(model)[source]

Update the model copy on this executor

Parameters:

model (PyTorch or TensorFlow model) – The broadcasted global model

Module contents