Easy-to-use AI tools became popular in 2021 (e.g. Lobe.ai, Roboflow, Edge Impulse, Peltarion, Power Apps). We’re convinced that more and more companies will be looking to deploy their models in a private or public cloud, or on-prem.
In this post we explain how you can use the Peltarion service to deploy an AI model on your branded cloud.
How can AI help solve real-world problems?
AI has always been a difficult nut to crack because of the lack of easy-to-use tools. When I signed up for an AI course in the ’80s, I had to work with desktop software to create neural networks. It wasn’t easy, and being a very visual person I struggled to stay motivated. Now, many years later, I see the progress AI tools have made. A few years ago I tried a demo of Lobe.ai and was surprised by how easily I could train a model and use it. It was a eureka moment!
AI is everywhere (traffic lights, mobile phones, spreadsheets, video), and more and more real-world problems are being solved with it. Don’t you just love Google Lens and its ability to recognise animals and flowers? Behind the scenes, AI is driving these innovations.
For more business-focused usage, think of something like car damage assessment: a very time-consuming manual process that AI can speed up and largely automate.
Different needs for training and deployment
Even inside afriQloud, the first thing people shout is “we need TPUs and GPUs for this!”
Well, not always.
For training you do need advanced hardware, and many cloud offerings, such as Edge Impulse and Peltarion, provide it on their own cloud. These tools let you train models on their own optimised AI stack. Below you can see a screenshot taken from Peltarion.
You can do all sampling and training online and, when ready, export or deploy your model.
That is the moment the fun with afriQloud starts.
Peltarion Prediction Server
After the training and sampling cycles inside Peltarion, you can export your trained model in the SavedModel format. With docker build it’s then possible to create an image containing your trained model.
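As a minimal sketch of that step, assuming the exported SavedModel sits in ./model/1/ and that a TensorFlow Serving base image is used (the exact base image and paths in Peltarion’s own docs may differ):

```shell
# Sketch: wrap an exported SavedModel in a serving image.
# Assumptions: export lives in ./model/1/, tensorflow/serving as base image.
cat > Dockerfile <<'EOF'
FROM tensorflow/serving
COPY model /models/mymodel
ENV MODEL_NAME=mymodel
EOF
cat Dockerfile
# then build it on the VM: docker build -t prediction-server .
```

The image name prediction-server is just an example; pick whatever fits your registry naming.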
One of the prerequisites is a Docker-enabled VM. The documentation of your branded portal shows how to launch a VM (inside Cloudspace) and tweak its vCPUs, memory and disks. Make sure Docker is installed on the Linux server; once it is, you can deploy the model.
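A provisioning sketch for getting Docker onto the VM, assuming an apt-based distribution such as Ubuntu (other distros use their own package manager):

```shell
# Install and start Docker on an Ubuntu/Debian VM (assumption: apt-based image)
sudo apt-get update
sudo apt-get install -y docker.io
sudo systemctl enable --now docker
sudo docker run --rm hello-world   # quick sanity check that the daemon answers
```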
The Peltarion Prediction Server exposes an API that accepts images and returns predictions.
You will be surprised how much you can automate by using AI models.
Peltarion AI documentation
On this page you can see the full deployment documentation.