Developer guide on machine learning for iOS with Core ML
Learn basic machine learning concepts and how to use machine learning in iOS.
13 Mar 2023 · 5 min read
Starting with iOS 11, Apple introduced Core ML, a framework that abstracts away the complexity of machine learning and allows us to use it in our iOS applications.
In this article, we'll look at basic machine learning concepts and how we can use Core ML to implement it in an iOS application.

What is machine learning?
Machine learning is a field of artificial intelligence that studies how computers can learn without being explicitly programmed.
Nowadays, many apps utilize machine learning to implement certain features, for example image recognition, natural language processing, and more.
To integrate machine learning into an application, two steps are essentially required:
- Training: Training a machine learning model involves choosing a learning algorithm and providing the model with labeled training data to learn from.
- Inference: Once the model has been trained, we can ask it to make predictions about new data. This process is called inference.
Let's look at how to do that for iOS applications.
Machine learning models in iOS
In iOS, a machine learning model is represented by a .mlmodel file.
Apple provides several pretrained open source models which are ready to use, for example:
- MNIST to classify a single handwritten digit
- MobileNetV2, Resnet50 or SqueezeNet for image classification
- PoseNet for estimating the joint positions of each person in an image
A full list of pretrained models can be found at the official Apple developer documentation. For each model, we can download a .mlmodel file.
We can also use a custom-trained model; for example, we can build and train a model with the Create ML app bundled with Xcode, or programmatically with the CreateML framework, as sketched below. Check out this article on how to train a model with Create ML to learn more.
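As a rough illustration, training an image classifier with the CreateML framework in a macOS playground or command-line tool could look like the following sketch. The directory paths are placeholders, and the training images are assumed to be organized into one subdirectory per label (e.g. TrainingData/Cat, TrainingData/Dog):

import CreateML
import Foundation

// Placeholder paths: adjust to your own training data and output location.
let trainingDirectory = URL(fileURLWithPath: "/path/to/TrainingData")
let outputURL = URL(fileURLWithPath: "/path/to/ImageClassifier.mlmodel")

// Train an image classifier on the labeled directories.
let trainingData = MLImageClassifier.DataSource.labeledDirectories(at: trainingDirectory)
let classifier = try MLImageClassifier(trainingData: trainingData)

// Inspect the training accuracy before exporting.
print("Training accuracy: \(100 * (1 - classifier.trainingMetrics.classificationError))%")

// Export the trained model as an .mlmodel file that can be dragged into an Xcode project.
try classifier.write(to: outputURL)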
Integrating ML models into iOS projects
We can add the model to an iOS project by simply dragging the .mlmodel file into the project. When we open the file, we can see details like the input and output of the model.

In the example above, we see the MobileNetV2 model which was trained to classify the dominant object in an image.
The input of the model is an image; as output, we get a classLabelProbs dictionary with the probability of each category and a classLabel which represents the most likely category.
Utilizing the ML model to get predictions
After dragging the model into the project, Xcode automatically generates a programmatic interface which we can use to interact with the model in our code. The code to get a prediction about an image could look as follows:
func classifyImage(_ image: UIImage) throws -> String? {
    let model = try MobileNetV2Int8LUT(configuration: MLModelConfiguration())
    guard let pixelBuffer = image.pixelBuffer() else { return nil }
    let prediction = try model.prediction(image: pixelBuffer)
    return prediction.classLabel
}
As we can see above, all we need to do is initialize the model with a configuration; after that, we can use its prediction(image:) function for inference.
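Note that pixelBuffer() is not part of UIKit: the generated interface expects a CVPixelBuffer, so we need a small helper to convert the UIImage. One possible implementation (the 224×224 size matches MobileNetV2's expected input; everything else is an assumption) could look like this:

import UIKit
import CoreVideo

extension UIImage {

    // Draws the image into a 32BGRA pixel buffer of the given size (224x224 for MobileNetV2).
    func pixelBuffer(width: Int = 224, height: Int = 224) -> CVPixelBuffer? {
        let attributes = [
            kCVPixelBufferCGImageCompatibilityKey: true,
            kCVPixelBufferCGBitmapContextCompatibilityKey: true
        ] as CFDictionary

        var buffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                         kCVPixelFormatType_32BGRA, attributes, &buffer)
        guard status == kCVReturnSuccess, let pixelBuffer = buffer else { return nil }

        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

        guard let context = CGContext(
            data: CVPixelBufferGetBaseAddress(pixelBuffer),
            width: width,
            height: height,
            bitsPerComponent: 8,
            bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue
        ) else { return nil }

        // Flip the coordinate system so the image is not drawn upside down.
        UIGraphicsPushContext(context)
        context.translateBy(x: 0, y: CGFloat(height))
        context.scaleBy(x: 1, y: -1)
        draw(in: CGRect(x: 0, y: 0, width: width, height: height))
        UIGraphicsPopContext()

        return pixelBuffer
    }
}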
Further reading
In addition to Core ML, Apple provides domain-specific frameworks which use Core ML under the hood:
- Vision for analyzing images
- Natural Language for processing text
- Speech for converting audio to text
- Sound Analysis for identifying sounds in audio
These frameworks are more specialized and provide solutions for domain-specific machine learning tasks which we can use out of the box.
For example, since Vision is specialized for images, it lets us work with a CGImage (which we can get from a UIImage) directly, without needing to convert it to a pixel buffer first.
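As a brief sketch of how this might look, reusing the MobileNetV2Int8LUT class generated above, classifying an image via Vision could be written as follows:

import UIKit
import Vision
import CoreML

func classifyWithVision(_ image: UIImage) throws -> String? {
    guard let cgImage = image.cgImage else { return nil }

    // Wrap the generated Core ML model so Vision can drive it.
    let coreMLModel = try MobileNetV2Int8LUT(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel)
    request.imageCropAndScaleOption = .centerCrop

    // Vision scales and converts the image for the model, so no manual pixel buffer conversion is needed.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    // The observations are sorted by confidence; the first one is the most likely label.
    let observations = request.results as? [VNClassificationObservation]
    return observations?.first?.identifier
}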
