Neural Networks
For neural networks, TripleBlind supports Split Learning, Federated Learning, Transfer Learning, and training over a vertically-partitioned intersection of datasets.
Operations
Use tripleblind.NetworkBuilder() to create a neural network one layer at a time.
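A minimal sketch of layer-by-layer construction is shown below. The builder method names (add_conv2d_layer, add_relu_layer, add_max_pool2d_layer, add_flatten_layer, add_dense_layer) are assumptions for illustration only; consult the SDK reference for the exact NetworkBuilder API. The layer types correspond to entries in the Supported Layers table below.

```python
import tripleblind as tb

# Hypothetical sketch: method names are illustrative, not the verified API.
builder = tb.NetworkBuilder()
builder.add_conv2d_layer(1, 16, kernel_size=3, padding=1)  # 1 input channel -> 16 filters
builder.add_relu_layer()
builder.add_max_pool2d_layer(kernel_size=2)                 # 28x28 -> 14x14
builder.add_flatten_layer()
builder.add_dense_layer(16 * 14 * 14, 10)                   # 10 output classes
```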
When using add_agreement() to forge an agreement on a trained model, use Operation.EXECUTE for the operation parameter.
When using add_agreement() to allow a counterparty to use your dataset for model training, use one of the following for the operation parameter (see the sketch after this list):
- Use Operation.BLIND_LEARNING for training over remote datasets using a split-learning approach with SMPC averaging.
- Use Operation.VERTICAL_BLIND_LEARNING for training on a split dataset. Vertically-partitioned records must be in corresponding order across all data sources.
- Use Operation.PSI_VERTICAL_BLIND_LEARNING for identifying an overlap of matching records across datasets and training a model on the vertically-partitioned intersection.
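A minimal sketch of both agreement types described above, assuming a model or dataset asset object and a counterparty organization ID are already available; model_asset, dataset_asset, and partner_org_id are placeholders, and the keyword argument names are assumptions. Only the Operation values come from this section.

```python
import tripleblind as tb

# Hypothetical sketch: keyword names are illustrative, not the verified signature.

# Allow a counterparty to run inference against a trained model you own:
model_asset.add_agreement(with_org=partner_org_id, operation=tb.Operation.EXECUTE)

# Allow a counterparty to train over your dataset via split learning with SMPC averaging:
dataset_asset.add_agreement(with_org=partner_org_id, operation=tb.Operation.BLIND_LEARNING)
```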
Supported Layers
| TripleBlind Network Builder Layer | PyTorch Inference | Keras Inference | ONNX Operator | SMPC Inference |
|---|---|---|---|---|
| Adaptive_avg_pool2d | AdaptiveAvgPool2d | | | ✅ |
| Adaptive_max_pool2d | | | | |
| Avg_pool2d | | | AveragePool | ✅ |
| Batchnorm1d | BatchNorm1d | | BatchNormalization | ✅ |
| Batchnorm2d | BatchNorm2d | BatchNormalization | BatchNormalization | ✅ |
| Conv1d | | Conv1D | Conv | ✅ |
| Conv2d | Conv2d | Conv2D | Conv | ✅ |
| Conv3d | | | Conv | ✅ |
| Dense | Linear | | Gemm | ✅ |
| Dropout | Dropout | | | ✅ |
| Flatten | Flatten | Flatten | Flatten | ✅ |
| Leaky_ReLU | LeakyReLU | LeakyReLU | | ✅ |
| LSTM | LSTM | LSTM | | ✅ |
| Max_pool1d | | MaxPooling1D | MaxPool | ✅ |
| Max_pool2d | MaxPool2d | MaxPooling2D | MaxPool | ✅ |
| ReLU | ReLU | ReLU | | ✅ |
| RNN | | | | ✅ |
| Split | | | | ✅ |
| Tanh | Tanh | | | ✅ |
| Keras Layers | | Activation | | ✅ |
| | | ZeroPadding2D | | ✅ |
| ONNX Operators | | | Add | ✅ |
| | | | Cast | ✅ |
| | | | Clip | ✅ |
| | | | Concat | ✅ |
| | | | Constant | ✅ |
| | | | ConstantOfShape | ✅ |
| | | | ConvTranspose | ✅ |
| | | | Div | ✅ |
| | | | Elu | ✅ |
| | | | Exp | ✅ |
| | | | Expand | ✅ |
| | | | Gather | ✅ |
| | | | GlobalAveragePool | ✅ |
| | | | HardSigmoid | ✅ |
| | | | HardSwish | ✅ |
| | | | Identity | ✅ |
| | | | InstanceNormalization | ✅ |
| | | | MatMul | ✅ |
| | | | Mul | ✅ |
| | | | Pad | ✅ |
| | | | PRelu | ✅ |
| | | | ReduceMean | ✅ |
| | | | Reshape | ✅ |
| | | | Resize | ✅ |
| | | | Shape | ✅ |
| | | | Sigmoid | ✅ |
| | | | Slice | ✅ |
| | | | Softmax | ✅ |
| | | | Squeeze | ✅ |
| | | | Sub | ✅ |
| | | | Tile | ✅ |
| | | | Transpose | ✅ |
| | | | Unsqueeze | ✅ |
SMPC Inference currently supports Sequential models with 2D inputs. With the additional batch and channels dimensions, this makes for a 4D input:
- PyTorch / ONNX: NCHW (batch, channels, rows, cols)
- Keras: NHWC (batch, rows, cols, channels)
These formats are also referred to as “channels first” (NCHW) and “channels last” (NHWC) formats, where:
- N: batch size
- C: channels
- H: height (rows)
- W: width (cols)
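For example, the same batch of 32 single-channel 28x28 images would be laid out as follows (plain NumPy, shown only to illustrate the two layouts):

```python
import numpy as np

# Channels-first (NCHW), as expected for PyTorch / ONNX models:
batch_nchw = np.zeros((32, 1, 28, 28), dtype=np.float32)

# Channels-last (NHWC), as expected for Keras models:
batch_nhwc = np.transpose(batch_nchw, (0, 2, 3, 1))  # shape (32, 28, 28, 1)
```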
Loss functions
All PyTorch loss functions listed in the torch.nn documentation are allowed, but only the following functions have been completely verified:
- NLLLoss
- CrossEntropyLoss
- BCEWithLogitsLoss
- SmoothL1Loss
To provide PyTorch tensor parameters to loss functions, use tripleblind.TorchEncoder.
Optimizer functions
All PyTorch optimizer functions listed in the torch.optim documentation are allowed, but only the following functions have been completely validated at this time:
- SGD
- Adam
- Adagrad
To provide PyTorch tensor parameters to optimizer functions, use tripleblind.TorchEncoder.
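The exact layout of loss_meta and optimizer_meta is not spelled out here; the sketch below assumes a name-plus-parameters layout analogous to the scheduler example further down, which may differ from the actual SDK format.

```python
# Hypothetical sketch: the "name"/"params" keys are assumptions, not the
# verified SDK layout for loss_meta and optimizer_meta.
loss_meta = {"name": "CrossEntropyLoss", "params": {}}
optimizer_meta = {"name": "Adam", "params": {"lr": 0.001}}

# Tensor-valued parameters (e.g. class weights for NLLLoss) would be encoded
# with tripleblind.TorchEncoder; its exact usage is not shown here.
```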
Learning rate scheduler functions
The following learning rate schedulers are supported:
CyclicLR
The CyclicLR learning rate scheduler varies the learning rate of each parameter group in a cyclic manner, increasing from a base rate to a maximum rate and then decreasing back to the base rate in a triangular or exponential pattern.
Parameters
base_lr: Optional[float] = 0.0001
- The starting learning rate of the cycle.
max_lr: Optional[float] = 0.01
- The maximum learning rate of the cycle.
step_size: Optional[int] = 100
- Determines the frequency of the learning rate cycle. Smaller values result in more frequent cycles.
- Typically set to (total number of training samples / batch size) / 2.
gamma: Optional[float] = 0.99
- The multiplicative factor that scales the learning rate after each cycle.
mode: Optional[str] = "triangular2"
- The shape of the cycle: "triangular", "triangular2", or "exp_range".
Example usage:

```python
lr_scheduler_name = "CyclicLR"
lr_scheduler_params = {"step_size": 10, "base_lr": 0.0001, "max_lr": 0.01, "mode": "triangular2"}
```
CyclicCosineDecayLR
The CyclicCosineDecayLR learning rate scheduler combines cosine annealing and cyclic learning rate policies, allowing for a warmup phase, an initial decay phase, and a cycle phase that can be either fixed or geometrically increasing, with the option to restart learning rates. This scheduler gradually reduces the learning rate during the decay and cycle phases and then increases it again during the warmup phase.
Parameters
warmup_epochs: Optional[int] = None
- Number of warmup epochs.
- Set to None (default) to disable warmup.
warmup_start_lr: Optional[float] = None
- Learning rate at the beginning of warmup.
- Must be set if warmup_epochs is not None.
init_decay_epochs: Optional[int] = 10
- The number of initial decay epochs.
min_decay_lr: Optional[float] = 0.0001
- Learning rate at the end of decay.
restart_lr: Optional[float] = None
- Learning rate when the cycle restarts.
- If None, the optimizer’s learning rate will be used.
restart_interval: Optional[int] = None
- The number of epochs after which a cycle restarts.
- Set to None (default) to disable cycles.
restart_interval_multiplier: Optional[int] = None
- The multiplication coefficient for geometrically increasing cycles.
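Example usage, assuming the same lr_scheduler_name / lr_scheduler_params pattern shown for CyclicLR above:

```python
lr_scheduler_name = "CyclicCosineDecayLR"
lr_scheduler_params = {
    "warmup_epochs": 5,
    "warmup_start_lr": 0.00001,
    "init_decay_epochs": 10,
    "min_decay_lr": 0.0001,
    "restart_interval": 20,
}
```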
Training parameters
dloss_meta
loss_meta
optimizer_meta
scheduler_meta
epochs: int = 1
batchsize: int = 32
model_path: str = ""
dataset_path: str = ""
test_size: float = 0.0
data_shape: List[int] = None
data_type: str = None
- Can be "table" or "image"
model_output: str = ""
- Can be "binary", "multiclass", or "regression"
binary_metric: str = "accuracy"
- Follows sklearn metrics: "f1_Score", "roc_auc_score", or "precision_recall_curve"
n_classes: int = None
- Used for object detection.
- This should not include the background class.
federated_rounds: int = 1
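A minimal sketch using the training parameters above; the values are illustrative, and how they are supplied to a training job (for example, as a params dictionary) is an assumption.

```python
# Hypothetical sketch: parameter names come from the list above; how they are
# passed to a training job is an assumption.
training_params = {
    "epochs": 10,
    "batchsize": 32,
    "test_size": 0.2,
    "data_type": "image",
    "data_shape": [1, 28, 28],
    "model_output": "multiclass",
}
```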
Inference parameters
final_layer_softmax
classes
model_type
model_path
dataset_path
data_shape
data_type
model_output
batch_size
min_score
max_overlap
top_k
data_scale: float = 1e-5
weight_scale: float = 1e-5
inference_interval: float = None
- If given, inference results are streamed to the output at the given interval (in seconds).
allow_to_listen: UUID = None
- During streaming inference, allow the given Team to see the output.
- Currently, an Enterprise Mode access point cannot be listened to remotely.
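A minimal sketch of the streaming-related inference parameters; the names come from the list above, but how they are supplied to an inference job is an assumption, and the UUID is a placeholder.

```python
# Hypothetical sketch: how these are passed to an inference job is an assumption.
inference_params = {
    "batch_size": 32,
    "data_type": "image",
    "inference_interval": 5.0,  # stream a result every 5 seconds
    "allow_to_listen": "00000000-0000-0000-0000-000000000000",  # placeholder Team UUID
}
```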
SSD (Object Detection)
- Model: Single Shot Detector 300
- Works only with rectangles
Distributed Inference
A more secure form of Federated Inference is used on vertically-partitioned models. Each provider receives only the part of the algorithm necessary to do inference on its data, keeping the overall model secure. For more information, see Model Security.
Limitations
- When training on vertically-partitioned datasets using Operation.PSI_VERTICAL_BLIND_LEARNING, the owned dataset must be supplied as the first (or left-side) dataset asset.
- The TripleBlind Network Builder LSTM layer is not supported on GPU processors.
Pre-built neural network templates
The TripleBlind SDK includes several well-known neural network architectures accessible through ModelFactory. Instead of building a common architecture layer by layer, you can define it from a prebuilt template. Currently, the network types available via ModelFactory are from the VGG (Visual Geometry Group) family of image recognition models:
- VGG-11
- VGG-13
- VGG-16
- VGG-19
The example below shows the definition of a VGG-11 network architecture using ModelFactory:
```python
builder = tb.ModelFactory.vgg(
    vgg_type="vgg11",
    num_classes=10,
    batch_norm=False,
    dropout=0.0,
)
```