Model Security
Model Training
Federated Learning
Federated learning is a decentralized machine learning technique that trains a model across multiple data providers, each holding an independent dataset. Typically, a copy of the model is sent to each participant, who trains it locally on their own data; the resulting model weights are then averaged at a central point, without the datasets ever having to move or be shared. Our implementation of federated learning is seamlessly integrated with our access management, privacy controls, and auditing systems.
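The round structure described above can be illustrated with a minimal federated averaging sketch. This is not our implementation; it assumes PyTorch, a hypothetical list of per-provider data loaders, and a simple unweighted average of the returned model weights.

```python
# Minimal federated-averaging sketch (illustrative only, assumes PyTorch).
import copy
import torch
import torch.nn as nn

def local_update(global_model: nn.Module, loader, epochs: int = 1, lr: float = 0.01):
    """Train a copy of the global model on one provider's private data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()  # only weights leave the provider, never data

def federated_round(global_model: nn.Module, provider_loaders):
    """One round: each provider trains locally; the server averages the weights."""
    local_states = [local_update(global_model, loader) for loader in provider_loaders]
    avg_state = copy.deepcopy(local_states[0])
    for key in avg_state:
        avg_state[key] = torch.stack([s[key].float() for s in local_states]).mean(dim=0)
    global_model.load_state_dict(avg_state)
    return global_model
```

In practice the average is usually weighted by each provider's dataset size, and the exchanged updates are protected by the surrounding access-management and auditing controls.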
Split Learning
Split Learning, developed at the MIT Media Lab’s Camera Culture group, is a machine learning approach that allows several entities to train models collaboratively without sharing any raw data. Split Learning provides a promising way to enable industry, research, and academic collaborations across a wide range of domains, including healthcare, finance, security, logistics, governance, operations, and manufacturing.
The underlying approach of Split Learning is to split a deep learning model (i.e., its architecture) across the participating, distributed entities. Each entity trains its part of the model on premises and shares only the output of the last layer of its portion (referred to as smashed data, or the split-layer output). Unlike other distributed learning approaches that require sharing the entire model and its parameters, Split Learning exchanges only the split-layer output, which drastically reduces the amount of shared knowledge (i.e., trained model parameters) and thus the potential for information leakage. Additionally, different flavors of Split Learning further restrict what crosses the split (for example, keeping data labels with the data owner) for added privacy. A simplified two-party sketch follows.
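The sketch below illustrates the basic mechanics for one data-holding client and one server, assuming PyTorch; the model, split point, and variant (labels shared with the server) are illustrative assumptions, not a prescribed configuration.

```python
# Minimal split-learning sketch (illustrative only, assumes PyTorch).
import torch
import torch.nn as nn

# The client keeps the lower layers (and the raw data) on premises.
client_net = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
# The server holds the remaining layers and never sees the raw inputs.
server_net = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10))

client_opt = torch.optim.SGD(client_net.parameters(), lr=0.01)
server_opt = torch.optim.SGD(server_net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y):
    client_opt.zero_grad()
    server_opt.zero_grad()

    # Client forward pass: only the split-layer output ("smashed data")
    # crosses the trust boundary, not the raw inputs.
    smashed = client_net(x)
    smashed_sent = smashed.detach().requires_grad_()  # what the server receives

    # Server completes the forward pass and computes the loss
    # (in this variant the labels are also shared with the server).
    loss = loss_fn(server_net(smashed_sent), y)
    loss.backward()
    server_opt.step()

    # Only the gradient at the split layer flows back to the client,
    # which then updates its local layers.
    smashed.backward(smashed_sent.grad)
    client_opt.step()
    return loss.item()
```

Variants that withhold labels route the final layers back to the data owner (a U-shaped split), so that neither raw inputs nor labels ever leave the client.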