Models can be exported from the platform on a per-checkpoint basis (i.e., any individual epoch of an experiment) for use off-platform, either for further training and tuning or simply for running inference. This is currently done via the Deployment page: when a deployment is created, the specific checkpoint used for that deployment can be exported with the Export model button.
Note: Experiments trained before June 2021 do not include the tf.savedmodel files, so the model export will fail for them. We strongly recommend downloading the tf.savedmodel format with container definition: it is more general, containing the same files as the plain tf.savedmodel option and more. Also note that the Peltarion prediction server (model with Docker container definition) does not work on the M1 chip.
How to download your model:
- Create a deployment of the model to download
- On the deployment page click Export model
- There are then 3 options:
a. tf.savedmodel
b. tf.savedmodel with container definition (recommended option when available!). How to build and use the container is described here: Peltarion prediction server
c. h5, under + See more options
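Once downloaded, the h5 checkpoint can be loaded locally with TensorFlow/Keras for inference. The sketch below is a minimal illustration, assuming TensorFlow 2.x is installed; the toy model and the file name "model.h5" are stand-ins for your actual export, not platform-specific code.

```python
import numpy as np
import tensorflow as tf

# Stand-in for the model you exported from the platform; in practice
# you would skip this and point load_model at the downloaded h5 file.
inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Dense(2)(inputs)
model = tf.keras.Model(inputs, outputs)
model.save("model.h5")  # legacy HDF5 format, like the h5 export option

# Off-platform: load the checkpoint and run inference.
reloaded = tf.keras.models.load_model("model.h5")
x = np.random.rand(1, 4).astype("float32")
print(reloaded.predict(x))
```

A tf.savedmodel export is a directory rather than a single file and would instead be loaded with tf.saved_model.load.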
You can also read this Knowledge center article.
If you have any questions, you can reply here or to this community post.