Create ML App

I was the design owner for the Create ML app. I launched a new model evaluation experience and visual system for the app, along with new vision-based models such as object detection, image classification, and multi-label image classification.

May 2022 – June 2023
Launches: WWDC 2022, 2023

Our team had the ambition to make model evaluation simple, visual, and accessible to anyone. I designed for the Create ML app, the Core ML system, and connection points with Xcode.


Visual evaluation for Image Classification and Object Detection models

Interactively learn how your model performs on test data from your evaluation set. Explore key metrics and their connections to specific examples to help identify challenging use cases, prioritize further data collection, and find opportunities to improve model quality.
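To make the connection between metrics and specific examples concrete, here is a minimal Swift sketch, not the app's actual implementation, of computing per-class precision and recall from test predictions while keeping the misclassified images behind each number. The `Prediction` type and its fields are assumptions for illustration.

```swift
import Foundation

// Hypothetical record of one evaluated test example.
struct Prediction {
    let fileURL: URL          // image that was evaluated
    let actualLabel: String
    let predictedLabel: String
}

// Per-class precision/recall plus the concrete examples behind the numbers,
// so a metric can be traced back to the images that produced it.
struct ClassReport {
    var truePositives = 0
    var falsePositives = 0
    var falseNegatives = 0
    var misclassifiedExamples: [URL] = []

    var precision: Double {
        let denom = truePositives + falsePositives
        return denom == 0 ? 0 : Double(truePositives) / Double(denom)
    }
    var recall: Double {
        let denom = truePositives + falseNegatives
        return denom == 0 ? 0 : Double(truePositives) / Double(denom)
    }
}

func evaluate(_ predictions: [Prediction]) -> [String: ClassReport] {
    var reports: [String: ClassReport] = [:]
    for p in predictions {
        if p.predictedLabel == p.actualLabel {
            reports[p.actualLabel, default: ClassReport()].truePositives += 1
        } else {
            reports[p.predictedLabel, default: ClassReport()].falsePositives += 1
            reports[p.actualLabel, default: ClassReport()].falseNegatives += 1
            reports[p.actualLabel, default: ClassReport()].misclassifiedExamples.append(p.fileURL)
        }
    }
    return reports
}
```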

These product ideas originated from our team and me. I illustrated the design vision from scratch, before the features and new models existed. We ended up launching…

See the demo here

Data previews with Continuity on macOS and iPadOS

Visualize and inspect your data to identify issues such as incorrectly labeled images and misplaced object annotations.

See the demo here (4:45).

New models: Multi-label Image Classifier

The multi-label image classifier is an entirely new capability for training models in Create ML, and it is also the most complex to evaluate. This model can detect multiple labels in a single image and report the probability that each label is correct, based on customizable confidence thresholds.
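As a rough sketch of the underlying idea, the snippet below runs a multi-label Core ML classifier through the Vision framework and keeps only the labels whose confidence clears a user-adjustable threshold; in a UI, that threshold would be bound to a control so people can watch results update as they adjust it. The model file name and default threshold are assumptions, not the shipping implementation.

```swift
import Vision
import CoreML

// Minimal sketch: filter a multi-label classifier's per-label confidences
// against a customizable confidence threshold.
func detectedLabels(in image: CGImage,
                    threshold: VNConfidence = 0.5) throws -> [(label: String, confidence: VNConfidence)] {
    // Load a multi-label classifier exported from Create ML (hypothetical file name).
    let modelURL = URL(fileURLWithPath: "MultiLabelClassifier.mlmodelc")
    let coreMLModel = try MLModel(contentsOf: modelURL)
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    // Run the classification request on the image.
    let request = VNCoreMLRequest(model: visionModel)
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    // Each observation carries a label and its confidence; keep only the
    // labels whose confidence clears the chosen threshold.
    let observations = request.results as? [VNClassificationObservation] ?? []
    return observations
        .filter { $0.confidence >= threshold }
        .map { (label: $0.identifier, confidence: $0.confidence) }
}
```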

I designed a new interaction pattern for evaluating results with customizable confidence thresholds, and an adaptable interface for displaying results.

You can evaluate results and predictions in an entirely visual way. An early challenge was to develop a visual system that conveys all permutations of model detections. This needed to work for both moving and still images.

People needed to view model results at a macro level (overall performance), a medium level (improvements and hallucinations), and a micro level (performance per label class). Simple filters allowed people to isolate any portion of their results.

Learn more from Discover machine learning enhancements at WWDC 2023.

Evaluation for other data types

What if we could show people what the model saw across images, video, and 3D assets? We built a testing playground where you can use your iPhone or iPad as a live preview with Continuity Camera, and showed how Action Classification can go even further with the new repetition counting capabilities of the Create ML Components framework.

Here’s how it works with the action classification model.

Here’s a prototype of how it works with the object tracking model, extending the ability to test and develop 3D software on visionOS.


Described in detail in What’s new in Create ML at WWDC.