Microsoft today introduced container support for Azure Cognitive Services, now in preview. Users can deploy premade AI models as Docker containers to edge devices, in the cloud, or on on-premises servers.
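As a rough sketch of what that looks like in practice, running one of these containers follows the usual Docker workflow; the image name, endpoint, and key below are placeholders for illustration, not an official reference. Cognitive Services containers still need to be pointed at an Azure resource so usage can be metered:

```shell
# Illustrative only: image path, endpoint, and key are placeholder values.
# The container serves the same API locally that the cloud endpoint exposes.
docker run --rm -it -p 5000:5000 \
  mcr.microsoft.com/azure-cognitive-services/textanalytics/keyphrase \
  Eula=accept \
  Billing=https://<your-resource>.cognitiveservices.azure.com/ \
  ApiKey=<your-api-key>
```

Once the container is up, applications send requests to `http://localhost:5000` instead of the cloud endpoint, so the data being analyzed never has to leave the local network.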
This release comes less than a week after Google Cloud launched Kubeflow Pipelines, a machine learning workflow platform for Kubernetes that likewise gives companies and developers more options for deploying AI.
As reported, some 1.2 million developers have used Azure Cognitive Services. Developers who lack the time or resources to build their own models from scratch can use these premade AI models to meet their needs.
Microsoft VP Eric Boyd explained the appeal of containers:
“It’s become the default way to deploy anything, and so AI models wrap up really quite nicely into a Docker container. That’s how people want to consume it, and so that’s the way that you should author them. We want to unblock wherever people are being blocked from adopting AI. This is one of the key assets we’ve got, so that’s one of the things we’re doing.”
Container support is sure to appeal to those who find drawing AI predictions from the cloud expensive, or anyone who would rather not put their data in the cloud — particularly businesses with strict privacy or data requirements.
As you may have guessed, facial recognition, vision, and speech-to-text services will be the first available as containers, with more set to come in the future.
Microsoft chose these three as the first of its more than 30 AI services for a mix of technical and business reasons, the primary one being that some models are easier to package into containers than others. The list is expected to grow quickly once the service exits preview.
You can find out more on the Azure Cognitive Services site.