496code: ML Security Sandbox


One of our recent client projects was to create a security sandbox for AI model training. The client's requirement was to allow one set of end users, who wish to train AI models, to train on data provided by other end users. However, these training users could not be given direct access to the proprietary training data: they had to be able to train their models on the data without ever seeing it.

To solve this, we developed a secure sandbox solution. End users could set up their models for training; specify the data set they wished to train with; submit the models for training inside the sandbox (which users cannot access); and finally download their trained model. We designed and developed the sandbox workflow, backend servers, server APIs, and Python client training APIs required to support this system, including deep integration with TensorFlow and PyTorch to support standard ML training pipelines. The final product included a full security analysis documenting potential points of attack, the likelihood of their exploitation, and mitigation strategies.
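For a sense of how the pieces fit together, the client-side workflow looked roughly like the sketch below. Everything named here (the sandbox_client module, the SandboxClient class, the data set identifier, and the method names) is a hypothetical illustration, not the actual API we delivered:

    # Hypothetical sketch of the client-side training workflow; the
    # sandbox_client module, SandboxClient class, and all method names
    # are illustrative stand-ins, not the real 496code API.
    import time

    from sandbox_client import SandboxClient  # hypothetical client library

    client = SandboxClient(api_key="...")  # authenticate to the sandbox service

    # 1. Register the model definition (code only -- no data is involved yet).
    model = client.upload_model("my_model.py", framework="pytorch")

    # 2. Reference the proprietary data set by an opaque ID; the raw data
    #    is never downloaded to the user's machine.
    job = client.submit_training(
        model_id=model.id,
        dataset_id="partner-dataset-42",  # hypothetical data set identifier
        epochs=10,
    )

    # 3. Poll until the sandboxed training run finishes.
    while not client.job_finished(job.id):
        time.sleep(30)

    # 4. Download only the trained weights; the training data stays inside.
    client.download_trained_model(job.id, "trained_model.pt")

The essential design point is that data sets are referenced only by opaque identifiers: the raw data is dereferenced exclusively inside the sandbox, so the trained model is the only artifact that ever leaves it.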


Contact: info@496code.com