Module tripleblind.network_builder

The NetworkBuilder allows you to fashion a neural network one layer at a time.

A typical usage would be:

import tripleblind as tb

builder = tb.NetworkBuilder()
builder.add_conv2d_layer(3, 32, 3, 1)
builder.add_relu()
builder.add_conv2d_layer(32, 64, 3, 1)
builder.add_relu()
builder.add_flatten_layer()
builder.add_dense_layer(50176, 128)
builder.add_relu()
builder.add_split()  # special layer for SMPC
builder.add_dense_layer(128, 10)
builder.add_relu()

training_model = tb.create_network("network_for_training", builder)

The generated network is generally trained on input data to produce a model. See Network for more information.

NOTE: Layers are very similar to their PyTorch counterparts.

Classes

class NetworkBuilder

Utility for defining a neural network layer by layer.

Methods

def add_adaptive_avg_pool2d_layer(self, output_size: Tuple[int, ...])

2D adaptive average pooling over an input signal composed of several input planes.

NOTE: this layer is not tested for secure MPC inference.

Args

output_size : int or tuple
Target output size of the image, of the form H x W. Can be a tuple, or an int for a square image.

Returns

NetworkBuilder
This network builder
def add_adaptive_max_pool2d_layer(self, output_size: Tuple[int, ...])

2D adaptive max pooling over an input signal composed of several input planes.

NOTE: this layer is not tested for secure MPC inference.

Args

output_size : int or tuple
Target output size of the image, of the form H x W. Can be a tuple, or an int for a square image.

Returns

NetworkBuilder
This network builder
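
For illustration, a minimal sketch (channel counts and sizes here are assumptions for the example, not requirements). Adaptive pooling maps whatever H x W arrives at the layer down to the fixed output_size, which makes the flattened feature count predictable:

import tripleblind as tb

builder = tb.NetworkBuilder()
builder.add_conv2d_layer(3, 16, 3, 1)
builder.add_relu()
builder.add_adaptive_avg_pool2d_layer((7, 7))  # any H x W pooled down to 7 x 7
builder.add_flatten_layer()
builder.add_dense_layer(16 * 7 * 7, 10)        # 16 channels * 7 * 7 = 784 features
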
def add_avg_pool2d_layer(self, kernel_size: Tuple[int, ...], stride: Tuple[int, ...] = None, padding: Tuple[int, ...] = 0, ceil_mode: bool = False, count_include_pad: bool = True, divisor_override=None)

2D average pooling over an input signal made of several planes.

NOTE: this layer is not supported with secure MPC.

Args

kernel_size : int or tuple
The size of the sliding window.
stride : int or tuple, optional
Stride of the sliding window. Defaults to kernel_size.
padding : int or tuple, optional
Zero-padding added to both sides of input. Defaults to 0.
ceil_mode : bool, optional
when True, will use ceil instead of floor for the output shape. Defaults to False.
count_include_pad : bool, optional
when True, will include the zero-padding in the averaging calculation. Defaults to True.
divisor_override
If specified, it will be used as the divisor; otherwise kernel_size will be used.

Returns

NetworkBuilder
This network builder
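
A minimal sketch (sizes illustrative). With the default stride equal to kernel_size, a 2 x 2 window halves each spatial dimension:

import tripleblind as tb

builder = tb.NetworkBuilder()
builder.add_conv2d_layer(3, 16, 3, 1)
builder.add_avg_pool2d_layer(2)   # 2x2 window, stride defaults to 2: 30x30 -> 15x15
builder.add_relu()
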
def add_batchnorm1d(self, num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

Applies Batch Normalization over 2D or 3D input (a mini-batch of 1D inputs with optional additional channel dimension)

This layer can be used with networks built for FED inference only. It is not tested for SMPC inference.

Args

num_features : int
C from an expected input of size (N,C,L) or L from input of size (N, L)
eps : float, optional
A value added to the denominator for numerical stability. Defaults to 1e-5
momentum : float, optional
Used for the running_mean and running_var computation. Defaults to 0.1
affine : bool, optional
When set to True, this module has learnable affine parameters. Defaults to True
track_running_stats : bool, optional
Tracks the running mean and variance when True. Defaults to True

Returns

NetworkBuilder
This network builder
def add_batchnorm2d(self, num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

Applies Batch Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension)

Allowed in FED and SMPC inference.

Args

num_features : int
C from an expected input of size (N,C,H,W)
eps : float, optional
A value added to the denominator for numerical stability. Defaults to 1e-5
momentum : float, optional
Used for the running_mean and running_var computation. Defaults to 0.1
affine : bool, optional
When set to True, this module has learnable affine parameters. Defaults to True
track_running_stats : bool, optional
Tracks the running mean and variance if True. Defaults to True

Returns

NetworkBuilder
This network builder
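
A minimal sketch of placing batch normalization after a convolution (channel count illustrative). num_features must match the channel dimension C of the incoming tensor:

import tripleblind as tb

builder = tb.NetworkBuilder()
builder.add_conv2d_layer(3, 32, 3, 1)   # output has C = 32 channels
builder.add_batchnorm2d(32)             # num_features must match that C
builder.add_relu()
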
def add_conv1d_layer(self, in_channels: int, out_channels: int, kernel_size: Tuple[int, ...], stride: Tuple[int, ...] = 1, padding: Tuple[int, ...] = 0, dilation: Tuple[int, ...] = 1, groups: int = 1, bias: bool = True, padding_mode: str = 'zeros')

1D convolution over an input signal made of several input planes.

This layer can be used with networks built for FED inference only. It is not tested for SMPC inference.

Args

in_channels : int
Number of input channels.
out_channels : int
Number of output channels.
kernel_size : int or tuple
The size of the convolving kernel.
stride : int or tuple, optional
Stride of the convolution. Defaults to 1.
padding : int or tuple, optional
Zero-padding added to both sides of input. Defaults to 0.
dilation : int, optional
Spacing between kernel elements. Defaults to 1.
groups : int, optional
Number of blocked connections from input channels to output channels. Defaults to 1.
bias : bool, optional
If True, adds a learnable bias to the output. Defaults to True.
padding_mode : string, optional
Defaults to 'zeros'. Possible values: 'reflect', 'replicate', 'circular'.

Returns

NetworkBuilder
This network builder
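
A minimal sketch of a small 1D stack for sequence or signal data of shape (N, C, L) (channel counts and kernel size illustrative):

import tripleblind as tb

builder = tb.NetworkBuilder()
builder.add_conv1d_layer(1, 8, 5)    # 1 input channel, 8 filters, length-5 kernel
builder.add_relu()
builder.add_max_pool1d_layer(2)      # halves the sequence length
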
def add_conv2d_layer(self, in_channels: int, out_channels: int, kernel_size: int, stride: Tuple[int, ...] = 1, padding: Tuple[int, ...] = 0, dilation: Tuple[int, ...] = 1, groups: int = 1, bias: bool = True, padding_mode: str = 'zeros')

2D convolution over an input signal made of several input planes.

Allowed in FED inference, and in SMPC (unless padding is used).

Args

in_channels : int
Number of input channels.
out_channels : int
Number of output channels.
kernel_size : int
The width/height of the convolving kernel.
stride : int or tuple, optional
Stride of the convolution. Defaults to 1.
padding : int or tuple, optional
Zero-padding added to both sides of input. Defaults to 0.
dilation : int or tuple, optional
Spacing between kernel elements. Defaults to 1. Not supported for SMPC inference.
groups : int, optional
Number of blocked connections from input channels to output channels. Defaults to 1. Not supported for SMPC inference.
bias : bool, optional
If True, adds a learnable bias to the output. Defaults to True. Not supported for SMPC inference.
padding_mode : string, optional
Defaults to 'zeros'. Possible values: 'reflect', 'replicate', 'circular'. Not supported for SMPC inference.

NOTE: the following Args are not tested with SMPC inference: dilation, padding_mode, and bias (when False).

Returns

NetworkBuilder
This network builder
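
Given the SMPC caveats above, a conservative SMPC-friendly call sticks to the defaults, while padding and dilation remain available for FED-only networks. A sketch:

import tripleblind as tb

builder = tb.NetworkBuilder()
# SMPC-friendly: stick to the defaults (no padding, dilation=1, groups=1, bias=True)
builder.add_conv2d_layer(3, 32, 3, 1)
# FED-only (untested under SMPC): padding and dilation are available
builder.add_conv2d_layer(32, 64, 3, stride=1, padding=1, dilation=2)
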
def add_conv3d_layer(self, in_channels: int, out_channels: int, kernel_size: Tuple[int, ...], stride: Tuple[int, ...] = 1, padding: Tuple[int, ...] = 0, dilation: Tuple[int, ...] = 1, groups: int = 1, bias: bool = True, padding_mode: str = 'zeros')

3D convolution over an input signal made of several input planes.

This layer can be used with networks built for FED inference only; it is not supported by secure MPC inference.

Args

in_channels : int
Number of input channels.
out_channels : int
Number of output channels.
kernel_size : int or tuple
The size of the convolving kernel.
stride : int or tuple, optional
Stride of the convolution. Defaults to 1.
padding : int or tuple, optional
Zero-padding added to both sides of input. Defaults to 0.
dilation : int, optional
Spacing between kernel elements. Defaults to 1.
groups : int, optional
Number of blocked connections from input channels to output channels. Defaults to 1.
bias : bool, optional
If True, adds a learnable bias to the output. Defaults to True.
padding_mode : string, optional
Defaults to 'zeros'. Possible values: 'reflect', 'replicate', 'circular'.

Returns

NetworkBuilder
This network builder
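
A minimal sketch for volumetric data of shape (N, C, D, H, W) (sizes illustrative):

import tripleblind as tb

builder = tb.NetworkBuilder()
builder.add_conv3d_layer(1, 8, 3)    # e.g. a single-channel volume, 3x3x3 kernel
builder.add_relu()
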
def add_dense_layer(self, in_features: int, out_features: int, bias: bool = True)

Add a dense (linear) layer.

Same as the linear layer in PyTorch: y = xA^T + b.

Args

in_features : int
Features expected coming in
out_features : int
Features expected coming out
bias : bool, optional
If True, adds a learnable bias to the output. Defaults to True. Not supported for SMPC inference.

Returns

NetworkBuilder
This network builder
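
in_features must equal the flattened size of the preceding layer's output. In the usage example at the top of this page, assuming 32 x 32 inputs (the example does not state the input size, but it is consistent with the numbers), two unpadded 3 x 3 convolutions shrink the maps to 28 x 28, so the flattened width is 64 * 28 * 28 = 50176:

import tripleblind as tb

builder = tb.NetworkBuilder()
builder.add_conv2d_layer(3, 32, 3, 1)   # 32x32 -> 30x30 (assumed input size)
builder.add_relu()
builder.add_conv2d_layer(32, 64, 3, 1)  # 30x30 -> 28x28, 64 channels
builder.add_relu()
builder.add_flatten_layer()             # 64 * 28 * 28 = 50176 features
builder.add_dense_layer(50176, 128)
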
def add_dropout(self, probability: float = 0.5, inplace: bool = False)

While training, randomly zeroes some of the elements of the input.

Allowed in FED and SMPC inference.

Args

probability : float, optional
Probability of zeroing an element. Defaults to 0.5.
inplace : bool, optional
can optionally do the operation in-place. Defaults to False. Not supported for SMPC inference.

Returns

NetworkBuilder
This network builder
def add_flatten_layer(self, start_dim: int = 1, end_dim=-1)

Flatten a contiguous range of dimensions into a tensor.

Allowed in FED and SMPC inference.

Args

start_dim : int, optional
first dimension to flatten. Defaults to 1. Not supported for SMPC inference.
end_dim : int, optional
last dimension to flatten. Defaults to -1. Not supported for SMPC inference.

Returns

NetworkBuilder
This network builder
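
A minimal sketch of how start_dim affects the result (shapes are assumptions for the example):

import tripleblind as tb

builder = tb.NetworkBuilder()
# Assuming an incoming tensor of shape (N, 64, 28, 28):
builder.add_flatten_layer()               # default start_dim=1 -> (N, 50176)
# builder.add_flatten_layer(start_dim=2)  # would give (N, 64, 784) instead
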
def add_leaky_relu(self, negative_slope: float = 0.01, inplace: bool = False)

Performs the leaky rectified linear unit function across elements.

Allowed in FED. Not tested for SMPC inference.

Args

negative_slope : float
Controls the angle of the negative slope. Defaults to 0.01.
inplace : bool, optional
can optionally do the operation in-place. Defaults to False. Not supported for SMPC inference.

Returns

NetworkBuilder
This network builder
def add_lstm_layer(self, input_size: int, hidden_size: int, num_layers: int = 1, bias: bool = True, batch_first: bool = False, dropout: float = 0.0, bidirectional: bool = False, is_many_output: bool = True)

Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence.

This layer can be used with networks built for Marketplace inference only. It is not tested for SMPC inference.

Args

input_size : int
The number of expected features in the input x.
hidden_size : int
The number of features in the hidden state.
num_layers : int, optional
Number of recurrent layers. E.g., num_layers=2 means stacking two LSTMs together to form a stacked RNN, with the second LSTM taking in outputs of the first LSTM and computing the final results. Default: 1.
bias : bool, optional
If False, then the layer does not use bias weights. Default: True.
batch_first : bool, optional
If True, then the input and output tensors are provided as (batch, seq, feature). Default: False
dropout : float, optional
introduces a Dropout layer on the outputs of each LSTM layer except the last layer. Default: 0.
bidirectional : bool, optional
If True, becomes a bidirectional LSTM. Default: False.
is_many_output : bool, optional
Format of the layer output. Default: True.

Returns

NetworkBuilder
This network builder
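
A minimal sketch of an LSTM feeding a dense classifier (sizes illustrative; exactly what the dense layer receives depends on is_many_output, whose semantics are not detailed here):

import tripleblind as tb

builder = tb.NetworkBuilder()
# batch_first=True: input arrives as (batch, seq, feature)
builder.add_lstm_layer(input_size=28, hidden_size=64, num_layers=2, batch_first=True)
builder.add_dense_layer(64, 10)   # hidden_size features in, 10 classes out
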
def add_max_pool1d_layer(self, kernel_size: Tuple[int, ...], stride: Tuple[int, ...] = None, padding: Tuple[int, ...] = 0, dilation: Tuple[int, ...] = 1, ceil_mode: bool = False)

1D max pooling over an input signal made of several planes.

NOTE: this layer is not supported with secure MPC.

Args

kernel_size : int or tuple
The size of the sliding window; must be > 0.
stride : int or tuple, optional
Stride of the sliding window (must be > 0). Defaults to kernel_size.
padding : int or tuple, optional
Zero-padding added to both sides of input. Defaults to 0.
dilation : int or tuple, optional
controls the stride of elements in the window. Defaults to 1.
ceil_mode : bool, optional
when True, will use ceil instead of floor for the output shape. Defaults to False.
NOTE: the following Args are not tested with SMPC inference: dilation, ceil_mode.

Returns

NetworkBuilder
This network builder
def add_max_pool2d_layer(self, kernel_size: Tuple[int, ...], stride: Tuple[int, ...] = None, padding: Tuple[int, ...] = 0, dilation: Tuple[int, ...] = 1, ceil_mode: bool = False)

2D max pooling over an input signal made of several input planes.

Allowed in FED inference, and in SMPC (unless padding is used).

Args

kernel_size : int or tuple
The width/height of the pooling window.
stride : int or tuple, optional
Stride of the pooling window. Defaults to kernel_size.
padding : int or tuple, optional
Zero-padding added to both sides of input. Defaults to 0.
dilation : int or tuple, optional
controls the stride of elements in the window. Defaults to 1.
ceil_mode : bool, optional
when True, will use ceil instead of floor for the output shape. Defaults to False.
NOTE: the following Args are not tested with SMPC inference: dilation, ceil_mode.

Returns

NetworkBuilder
This network builder
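
For reference, the output spatial size follows the usual pooling formula: H_out = floor((H_in + 2*padding - dilation*(kernel_size - 1) - 1) / stride + 1), with ceil replacing floor when ceil_mode=True. A sketch (sizes illustrative):

import tripleblind as tb

builder = tb.NetworkBuilder()
builder.add_conv2d_layer(3, 16, 3, 1)        # e.g. 30x30 -> 28x28
builder.add_max_pool2d_layer(2)              # 28x28 -> 14x14 (stride defaults to kernel_size)
builder.add_max_pool2d_layer(3, stride=2)    # 14x14 -> floor((14 - 3)/2 + 1) = 6x6
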
def add_relu(self, inplace: bool = False)

Performs the rectified linear unit function across elements.

Allowed in FED and SMPC inference.

Args

inplace : bool, optional
can optionally do the operation in-place. Defaults to False. Not supported for SMPC inference.

Returns

NetworkBuilder
This network builder
def add_rnn_layer(self, input_size: int, hidden_size: int, num_layers: int = 1, nonlinearity: str = 'tanh', bias: bool = True, batch_first: bool = False, dropout: float = 0.0, bidirectional: bool = False)

Applies a multi-layer Elman RNN with tanh or ReLU non-linearity to an input sequence.

This layer can be used with networks built for Marketplace inference only. It is not tested for SMPC inference.

Args

input_size : int
The number of expected features in the input x.
hidden_size : int
The number of features in the hidden state.
num_layers : int, optional
Number of recurrent layers. E.g., num_layers=2 means stacking two RNNs together to form a stacked RNN, with the second RNN taking in outputs of the first RNN and computing the final results. Default: 1.
nonlinearity : str, optional
The non-linearity to use. Can be either 'tanh' or 'relu'. Default: 'tanh'.
bias : bool, optional
If False, then the layer does not use bias weights. Default: True.
batch_first : bool, optional
If True, then the input and output tensors are provided as (batch, seq, feature). Default: False
dropout : float, optional
introduces a Dropout layer on the outputs of each RNN layer except the last layer. Default: 0.
bidirectional : bool, optional
If True, becomes a bidirectional RNN. Default: False.

Returns

NetworkBuilder
This network builder
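
A minimal sketch, analogous to the LSTM example above (sizes illustrative):

import tripleblind as tb

builder = tb.NetworkBuilder()
builder.add_rnn_layer(input_size=28, hidden_size=32, nonlinearity='relu', batch_first=True)
builder.add_dense_layer(32, 10)
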
def add_split(self)

Single network split layer for SMPC and Blind Learning.

Raises

Exception
Raised if split has already been defined for the network

Returns

NetworkBuilder
This network builder
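
Only one split may be added; a second call raises. A sketch echoing the usage example at the top of this page (layer sizes illustrative):

import tripleblind as tb

builder = tb.NetworkBuilder()
builder.add_dense_layer(128, 64)
builder.add_relu()
builder.add_split()        # the network's single split point
builder.add_dense_layer(64, 10)
# builder.add_split()      # would raise: split already defined
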
def add_tanh(self)

Returns a new tensor with the hyperbolic tangent of the elements.

This layer can be used with networks built for FED inference only. It is not tested for SMPC inference.

Returns

NetworkBuilder
This network builder
def get_layers(self)

Retrieve the list of network layers.

Returns

list
A list of dicts describing each layer in this network.
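
A minimal sketch of inspecting a builder (the exact dict keys are not documented here, so the printed contents are illustrative):

import tripleblind as tb

builder = tb.NetworkBuilder()
builder.add_dense_layer(784, 10)
for layer in builder.get_layers():
    print(layer)   # each entry is a dict describing one layer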