MARK On-device Model Training Update

We have just published the 5010 firmware update for MARK; you can read the details in this article on our blog. For the next update, apart from improving existing functions, we would like to add a new feature – on-device model training.

Through our feedback questionnaire we have discovered that, apart from STEM educators and makers, a significant percentage of parents with younger kids are backing our campaign. MARK comes with pre-trained models, which will be useful for this category of users, but custom model training on a computer or in the cloud might be too complicated for them, as it involves creating a dataset, pre-processing the data, training the model, and flashing it to the device. We have already made the whole training pipeline a lot easier with aXeleRate, a Keras framework for AI on the Edge, which allows training models in Google Colab notebooks. You can see examples in these articles:

aXeleRate – Keras-Based Framework for AI on the Edge

Use Codecraft 2.5 New Interactive Lessons with M.A.R.K.

Since custom model inference is a fun feature with many useful applications and is particularly suitable for explaining the basics of machine learning to students, we have decided to lower the threshold for creating custom models even further. Upon achieving our next stretch goal, we'll roll out the on-device model training feature in the next update, which is being actively developed now. It will be accessible both in Codecraft, the graphical programming environment for TinkerGen products, and through the MaixPy IDE.

The working principle behind on-device model training is slightly different from regular deep neural network training with backpropagation. Backpropagation is computationally very costly and cannot be done efficiently on a microcontroller chip. What we do instead is use a model pre-trained on the ImageNet1000 dataset, with its classification layer removed, as a feature extractor. It processes the image and outputs a feature vector containing information about the features found in the image (e.g., cat whiskers, car wheels, human eyes, etc.), which we then use to train a K-means classifier. After training is done and we need to classify a new image from the camera, we pass the image through the feature extractor and use the output feature vector to find which cluster the new image belongs to. Here is a concrete, simplified example:
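The feature-extractor-plus-classifier idea can be sketched in a few lines of plain Python with NumPy. This is not the device firmware: the feature extractor is faked with random vectors clustered per class (on MARK the vectors come from the pre-trained CNN), and all names and dimensions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the feature extractor: on the real device this is a CNN
# pre-trained on ImageNet with its classification layer removed. Here we
# just draw vectors clustered around a per-class center, mimicking the
# feature vectors of visually similar objects.
def fake_features(class_center, n=5, dim=8):
    return class_center + 0.1 * rng.standard_normal((n, dim))

# "Training": collect a few samples per class and keep one centroid each.
class_centers = {name: rng.standard_normal(8) for name in ("cat", "car")}
centroids = {name: fake_features(c).mean(axis=0)
             for name, c in class_centers.items()}

# "Inference": pass a new image through the feature extractor, then
# assign its feature vector to the nearest centroid.
def classify(feature_vec):
    return min(centroids,
               key=lambda name: np.linalg.norm(feature_vec - centroids[name]))
```

A new "cat" photo would produce a feature vector close to the cat centroid, so `classify` returns `"cat"` without any backpropagation ever running on the chip.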

The new model can be used for inference and saved to the SD card for later use. Since a K-means classifier is a less sophisticated method than a neural network, the trained model has some limitations: the number of classes cannot be higher than 5, and it can only correctly classify objects that are similar to the few samples it was trained on.
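Because the centroids are essentially all the "model" there is, saving it for later is cheap. The sketch below shows the idea in plain Python; the JSON layout, the file name, and the 5-class check are illustrative assumptions, not the firmware's actual SD-card format.

```python
import json

# Assumed limit from the device description: at most 5 classes.
MAX_CLASSES = 5

def save_model(centroids, path):
    # Persist the per-class centroids; on MARK the target would be
    # a file on the SD card rather than local storage.
    if len(centroids) > MAX_CLASSES:
        raise ValueError("the on-device classifier supports at most 5 classes")
    with open(path, "w") as f:
        json.dump(centroids, f)

def load_model(path):
    # Restoring the centroids fully restores the trained classifier.
    with open(path) as f:
        return json.load(f)

centroids = {"cat": [0.1, 0.9], "car": [0.8, 0.2]}
save_model(centroids, "model.json")
```

Reloading with `load_model("model.json")` yields the same centroid table, so a model trained once can be reused across power cycles.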

Here is a short demo video showing data collection, training, and inference. We have finished the code verification, and as soon as we reach our next stretch goal, we'll start working on adding this functionality to Codecraft.

Stay tuned for more articles from us and updates on the MARK Kickstarter campaign.

For more information on the Grove Zero series, Codecraft, and other hardware for makers and STEM educators, visit our website.
