Using MODBUS on Azure Sphere with Rust

Björn Quentin – Developer

At grandcentrix we were tasked with connecting standalone temperature and humidity sensors to the cloud to monitor their readings for different regions inside larger supermarkets – for example the vegetable department, the checkout area, and the entrance area. The final installation is planned to have one Azure Sphere guardian device per site, with multiple sensors connected to it.
The sensors are not directly connected but communicate with the receiver – a central MODBUS-enabled device – via a proprietary wireless protocol that is optimized for power consumption. Since the sensors themselves are battery-powered and designed to run for a long period of time without recharging or replacing the battery, we also need to monitor the battery level to be able to plan battery replacements in advance.

Azure Sphere connects to the central receiver via MODBUS RTU over RS485, and the telemetry data needs to go into Azure IoT for further processing and analysis. Additionally, a sensor’s firmware can be updated via MODBUS whenever the manufacturer releases a new version.

Having all the telemetry data in the cloud opens up great opportunities for visualizing and analyzing it. It also makes it easy to give different target groups access to the data depending on the user’s role, and alerting can be implemented without much effort. Besides these business and technical requirements, we wanted to implement everything on the Azure Sphere side in Rust – the reasons for that are explained below.

Azure Sphere

Today it’s easier than ever before to get started with a network-enabled MCU – they are easily accessible and powerful, but it is important to keep them secure. Usually, it’s your duty to make and keep them secure – which is a chore. Just think of the vulnerabilities found in the Treck TCP/IP stack, the Mirai botnet which made headlines some years ago, and the more recently discovered vulnerabilities known as AMNESIA:33. There is much more to say on this topic: if you want to dive deeper, have a look here.


All you have to worry about is your own code. And that is where Rust comes into play. While we all test and review our code as carefully as possible, the reality is that the majority of critical problems in software are caused by getting memory management wrong. Have a look at this article by the Microsoft Security Response Center that explains this in more detail. Unfortunately, there is little the OS can do to make sure your code isn’t affected by these problems. But there is hope: Rust. One of the main drivers in creating the Rust programming language was to protect developers from exactly these problems.

Here is a quick summary of some things Rust offers:

  • High performance
  • Memory safety
  • Usable for anything from small embedded devices to big iron servers – and the web
  • A modern feature-rich language
  • Zero cost abstractions

More than enough reasons to use it on Azure Sphere. And if you do, you are in good company: many companies have switched or are switching to Rust for at least the most security-sensitive parts of their software.

The Approach

So, we want to talk MODBUS RTU via RS485 to the sensors.

The protocol is easy enough to implement without much effort. But there is one catch: since only one device is allowed to talk on the bus at any given time, we have to switch the RS485 transceiver between transmit and receive within a tight time frame. Staying in transmit mode too long means losing at least part of the response; switching to receive too early means cutting off part of the command we are sending via the UART. Switching the GPIO at exactly the right time while communicating via UART didn’t work with just the high-level app running on the A7 core. Fortunately, we have access to the real-time M4 cores, where we can set up interrupts to know exactly when the last bit of our command has been sent and the transceiver has to be switched into receive mode. We decided to keep the low-level app as simple as possible and implement the more complex things – like creating the MODBUS packets and calculating the checksum – in the high-level app.
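The checksum MODBUS RTU uses is the standard CRC-16/MODBUS. As a small sketch of what the high-level app does – shown here as a host-runnable program for brevity, with an illustrative slave address and register range, not our sensors’ actual register layout – building a “read holding registers” request could look like this:

```rust
/// CRC-16/MODBUS: reflected polynomial 0xA001, initial value 0xFFFF.
fn crc16_modbus(data: &[u8]) -> u16 {
    let mut crc: u16 = 0xFFFF;
    for &byte in data {
        crc ^= byte as u16;
        for _ in 0..8 {
            crc = if crc & 1 != 0 { (crc >> 1) ^ 0xA001 } else { crc >> 1 };
        }
    }
    crc
}

/// Build a MODBUS RTU "read holding registers" (function 0x03) frame.
fn read_holding_registers(slave: u8, start: u16, count: u16) -> Vec<u8> {
    let mut frame = vec![
        slave,
        0x03,
        (start >> 8) as u8, // register address, high byte first
        start as u8,
        (count >> 8) as u8, // register count, high byte first
        count as u8,
    ];
    let crc = crc16_modbus(&frame);
    frame.push(crc as u8); // the CRC is transmitted low byte first
    frame.push((crc >> 8) as u8);
    frame
}

fn main() {
    // Read two registers starting at address 0 from slave 1
    let frame = read_holding_registers(0x01, 0x0000, 0x0002);
    assert_eq!(frame, [0x01, 0x03, 0x00, 0x00, 0x00, 0x02, 0xC4, 0x0B]);
    println!("{:02X?}", frame);
}
```

The resulting frame is then handed to the low-level app, which only has to shift the raw bytes out of the UART and handle the transceiver direction switching.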

Apart from that, the high-level app is also responsible for communicating with the cloud (i.e. telemetry ingress, receiving cloud-to-device commands to trigger a sensor firmware update, and downloading new firmware from the internet). Besides not making much sense, accessing the network directly from the low-level app is also not allowed – and physically not possible.

That turned out to be a great solution. Here is a schematic of the components involved:

MODBUS on Azure Sphere with Rust Components

It’s also worth noting that while Azure Sphere can certainly talk to any cloud, it’s actually very easy to connect Azure Sphere to Azure IoT.

Even things like device provisioning, which are usually a headache, are a no-brainer in this setup. It’s no exaggeration to say that Azure Sphere and Azure IoT are a match made in heaven.

The Details – How to use Rust to develop for Azure Sphere

Out of the box, the Azure Sphere SDK supports developing your applications in C. That is an understandable and good decision – and it makes it not too hard to use Rust here.

While it is perfectly possible to cross-compile libstd for high-level apps – support for Linux is already integrated – we will go with no-std: not only to keep the generated binaries a bit smaller, but also because it’s just a bit easier to start without libstd here for now. However, no-std isn’t really a limitation here – the core library already gives us enough functionality to conveniently develop our solution.


Before we can start development we need some components installed:

  • A recent Rust nightly toolchain (e.g. 1.50.0-nightly (1c389ffef 2020-11-24) or later)
  • MUSL GCC toolchain for ARM-HF for developing high-level applications
  • A recent gcc-arm-none-eabi toolchain for developing real-time apps
  • Azure Sphere SDK for Windows or Linux

High-Level Application

In order to successfully compile, link, and deploy a high-level application, we need to specify some configuration options. Fortunately, it’s quite easy to customize the build options – we don’t need to recompile the compiler but we can easily configure everything we need when using a precompiled toolchain.

First we need a target specification file – it’s called arm-v7-none-eabi.json here:

  {
    "abi-blacklist": [],
    "arch": "arm",
    "data-layout": "e-m:e-p:32:32-i64:64-v128:64:128-a:0:32-n32-S64",
    "emit-debug-gdb-scripts": false,
    "env": "",
    "executables": true,
    "cpu": "cortex-a7",
    "features": "+v7,+thumb-mode,+thumb2,+vfp3,+neon",
    "linker": "arm-linux-musleabihf-gcc",
    "linker-flavor": "gcc",
    "llvm-target": "arm-v7-none-eabihf",
    "max-atomic-width": 32,
    "os": "none",
    "panic-strategy": "abort",
    "relocation-model": "pic",
    "target-c-int-width": "32",
    "target-endian": "little",
    "target-pointer-width": "32",
    "vendor": ""
  }

Here we specify various options for the code generation. Additionally, we need the file .cargo/config:

[target.arm-v7-none-eabi]
linker = 'sysroots/7+Beta2010/tools/sysroots/x86_64-pokysdk-linux/usr/bin/arm-poky-linux-musleabi/arm-poky-linux-musleabi-gcc'
runner = 'arm-eabi-gdb'
rustflags = [
  "-C", "link-arg=-flinker-output=exec",
  "-C", "link-arg=-Wl,--dynamic-linker=/lib/",
  "-C", "link-arg=-v",
  "-C", "link-arg=--sysroot=./sysroots/7+Beta2010/",
  "-C", "link-arg=-L",
  "-C", "link-arg=./sysroots/7+Beta2010/usr/lib/",
  "-C", "link-arg=-L",
  "-C", "link-arg=./sysroots/7+Beta2010/lib/",
  "-C", "link-arg=-Wl,--no-undefined,--gc-sections",
  "-C", "link-arg=-nodefaultlibs",
  "-C", "link-arg=-march=armv7ve+neon",
  "-C", "link-arg=-mcpu=cortex-a7",
  "-C", "link-arg=-mthumb",
  "-C", "link-arg=-mfpu=neon",
  "-C", "link-arg=-mfloat-abi=hard",
  "-C", "link-arg=-Wl,-Bdynamic",
  "-C", "link-arg=-lapplibs",
  "-C", "link-arg=-lazureiot",
  "-C", "link-arg=-lpthread",
  "-C", "link-arg=-lcurl",
  "-C", "link-arg=-ltlsutils",
  "-C", "link-arg=-lgcc_s",
  "-C", "link-arg=-lc",
  "-C", "link-arg=-Os",
]

[build]
target = "arm-v7-none-eabi"

Here we configure which target to use by default (the arm-v7-none-eabi.json file from above) and how the linking is done.

We also configure which libraries to link – those come from the Sphere SDK. Please note that I symlinked the sysroots folder into the project for simplicity and because we cannot use environment variables in the config file. At this point, we are already able to compile a binary suitable to run on Azure Sphere by running cargo xbuild.
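The symlink mentioned above can be created like this – note that the SDK install path here is an assumption (the Linux SDK installs to /opt/azurespheresdk by default; adjust it to your setup):

```shell
# Link the SDK's sysroots folder into the project so the relative
# paths in .cargo/config resolve (SDK path is an assumption)
ln -sfn /opt/azurespheresdk/Sysroots ./sysroots
```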

Now we are just a few commands away from having it run on the device. First, let’s copy the binary and the application manifest into one folder:

mkdir -p target/approot/
mkdir -p target/approot/bin

cp target/arm-v7-none-eabi/debug/sphere-app target/approot/bin/app
cp app_manifest.json target/approot

Next, we use the azsphere command-line utility to create the image and deploy it:

azsphere image-package pack-application --input target/approot --destination-file target/sphere-app.image --verbose

azsphere device sideload delete --component-id 00f3df71-a397-4a5e-89cb-7dde6486888d --verbose

azsphere device sideload deploy -p target/sphere-app.image --verbose

Here I’m using the Azure Sphere CLI v2 syntax; for v1 it looked similar but not exactly the same. That is all that is needed. To make development easier I decided to create and use some support crates:

sphere-sys A typical sys crate – it mainly consists of a build script which uses bindgen to generate Rust FFI bindings from the header files contained in the Azure Sphere SDK.

sphere-lib Since directly using the generated FFI bindings is quite inconvenient and requires unsafe code in many places, it is a good idea to create some Rust-friendly wrappers that make the functionality feel idiomatic on the Rust side.
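To illustrate that wrapper pattern – this is not sphere-lib’s actual API; libc’s open/close stands in here for a raw Azure Sphere handle – the wrapper owns the descriptor and releases it on Drop, so the unsafe FFI calls stay contained in one place:

```rust
use std::ffi::CString;
use std::os::raw::{c_char, c_int};

// Raw FFI surface, as bindgen would generate it (libc functions
// used here as a stand-in for the Azure Sphere applibs calls).
extern "C" {
    fn open(path: *const c_char, flags: c_int) -> c_int;
    fn close(fd: c_int) -> c_int;
}

/// Safe wrapper: owns the raw descriptor and closes it automatically.
struct Handle {
    fd: c_int,
}

impl Handle {
    fn open(path: &str) -> Option<Handle> {
        let c_path = CString::new(path).ok()?;
        let fd = unsafe { open(c_path.as_ptr(), 0 /* O_RDONLY */) };
        if fd < 0 {
            None
        } else {
            Some(Handle { fd })
        }
    }
}

impl Drop for Handle {
    fn drop(&mut self) {
        // Guaranteed cleanup, no matter how the handle goes out of scope
        unsafe {
            close(self.fd);
        }
    }
}

fn main() {
    let handle = Handle::open("/dev/null").expect("open failed");
    assert!(handle.fd >= 0);
    // handle is dropped here and the descriptor is closed
}
```

Callers never see the raw descriptor or an unsafe block – the type system enforces that the resource is valid while the wrapper exists.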

sphere-rt This one is optional but very useful. It contains a custom allocator which currently just delegates to malloc and free, as well as some re-exports of functionality from libcore. Additionally, it defines the print family of macros to log to Sphere’s debug output, a panic handler, and the start function of the application. The code sample below shows how this makes development easier. This is how a very simple multi-color blinky could look with the approach explained above:

#![cfg_attr(not(test), no_std)]
#![cfg_attr(not(test), no_main)]

extern crate sphere_rt as std;
use std::prelude::v1::*;

extern crate sphere_lib;
use sphere_lib::mt3620_gpio::*;
use sphere_lib::util::sleep;

const MT3620_GPIO8: i32 = 8;
const MT3620_GPIO9: i32 = 9;
const MT3620_GPIO10: i32 = 10;
const MT3620_RDB_LED1_GREEN: i32 = MT3620_GPIO9;
const MT3620_RDB_LED1_BLUE: i32 = MT3620_GPIO10;
const MT3620_RDB_LED1_RED: i32 = MT3620_GPIO8;

fn start() {
    println!("multi-color blinky started");

    let red = GpioPort::open(MT3620_RDB_LED1_RED);
    let green = GpioPort::open(MT3620_RDB_LED1_GREEN);
    let blue = GpioPort::open(MT3620_RDB_LED1_BLUE);

    loop {
        // Light each color in turn, one second each (the GpioPort and
        // sleep method names are sketched here; see the full example
        // in the repository for the exact API)
        for led in [&red, &green, &blue].iter() {
            led.set_high();
            sleep(1);
            led.set_low();
        }
    }
}
As you can see, we can just use the println macro and there is no need to fiddle with an allocator or panic handler. Side note: if you wonder about the “not(test)” attributes – that’s because I like to have a way to run high-level tests on the host OS for early feedback.
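For completeness, the allocator in sphere-rt that makes heap types available is essentially a thin forwarding layer over malloc and free. Here is a minimal sketch of the idea, written as a host-runnable program; the real crate does this in a no_std context via core::alloc, and a production version should also respect the requested alignment:

```rust
use std::alloc::{GlobalAlloc, Layout};

// FFI declarations for the libc allocator functions we forward to.
extern "C" {
    fn malloc(size: usize) -> *mut u8;
    fn free(ptr: *mut u8);
}

struct MallocAllocator;

unsafe impl GlobalAlloc for MallocAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // Note: malloc only guarantees the platform's default alignment;
        // layouts with larger alignment would need posix_memalign or similar.
        malloc(layout.size())
    }

    unsafe fn dealloc(&self, ptr: *mut u8, _layout: Layout) {
        free(ptr)
    }
}

#[global_allocator]
static ALLOCATOR: MallocAllocator = MallocAllocator;

fn main() {
    // Every heap allocation now goes through MallocAllocator.
    let readings = vec![21.5f32, 21.7, 21.6];
    let avg = readings.iter().sum::<f32>() / readings.len() as f32;
    println!("average temperature: {:.1}", avg);
}
```

With the #[global_allocator] attribute in place, Vec, String, and friends work as usual without any further ceremony.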

Real-Time Application

As said before, the given task requires a real-time application. Writing a real-time application for Azure Sphere is actually pretty much straightforward – much the same as writing a bare-metal application for any other MCU in Rust. This very first version doesn’t make use of the drivers provided by MediaTek and CodethinkLabs that you can find on GitHub – they simply didn’t exist yet when we implemented it, so we did it from scratch in Rust, based on the samples provided by Microsoft and the datasheet. It also doesn’t implement Rust’s embedded-hal traits, which would be great to have in the future. That being said, here is how it works. This time we don’t need a custom target specification since one of Rust’s built-in targets already fits. So we start with .cargo/config:

[target.thumbv7em-none-eabihf]
rustflags = [
  "-C", "linker=arm-none-eabi-gcc",
  "-C", "link-arg=-mcpu=cortex-m4",
  "-C", "link-arg=-g",
  "-C", "link-arg=-nostartfiles",
  "-C", "link-arg=-Wl,--no-undefined",
  "-C", "link-arg=-Wl,-n",
  "-C", "link-arg=-Wl,-Tlinker.ld",
]

[build]
target = "thumbv7em-none-eabihf"

Again we configure some parameters for the linker and the default target. This time we need to copy the linker script to a suitable location during the build – the build.rs looks like this:

use std::env;
use std::fs::File;
use std::io::Write;
use std::path::PathBuf;

fn main() {
    // Put the linker script somewhere the linker can find it
    let out = &PathBuf::from(env::var_os("OUT_DIR").unwrap());
    File::create(out.join("linker.ld"))
        .unwrap()
        .write_all(include_bytes!("linker.ld"))
        .unwrap();
    println!("cargo:rustc-link-search={}", out.display());

    // Only re-run the build script when linker.ld is changed,
    // instead of when any part of the source code changes.
    println!("cargo:rerun-if-changed=linker.ld");
}
Last but not least, we need a linker.ld – we can just use this one. Now we are good to go and can compile for Azure Sphere’s M4 cores with the cargo xbuild command. However, there is quite some code needed to get everything set up:

const SCB_BASE: usize = 0xE000ED00;

const EXCEPTION_COUNT: usize = 100;

extern "C" {
    pub static StackTop: u32;
}

pub union Vector {
    handler: unsafe extern "C" fn(),
    reserved: usize,
    address: *const u32,
}

unsafe extern "C" fn defaultHandler() {
    loop {}
}

#[link_section = ".vector_table"]
pub static mut ExceptionVectorTable: [Vector; EXCEPTION_COUNT] = [
    Vector {
        address: unsafe { &StackTop },
    }, // initial stack pointer
    Vector { handler: main }, // RESET
    Vector {
        handler: defaultHandler,
    }, // NMI
    // ... remaining exception and interrupt vectors elided
];

unsafe fn write_reg32(base: usize, offset: usize, value: u32) {
    core::ptr::write_volatile((base + offset) as *mut u32, value);
}

unsafe extern "C" fn main() {
    // SCB->VTOR = ExceptionVectorTable
    write_reg32(SCB_BASE, 0x08, ExceptionVectorTable.as_ptr() as u32);
    // ... peripheral setup and the main loop follow here
}
It is more or less the setup used in this example, and these are the same basic steps needed on any Cortex-M. As said before, in this first step we implemented things pretty much from first principles. We could even have used the cortex-m-rt crate for the basics and avoided that boilerplate setup code. For the future, the way to go would be to implement the embedded-hal traits and also have a runtime crate for initialization and other conveniences. Creating an image and sideloading it looks pretty much the same as for a high-level application.


You can see a complete example of both – the high-level and the bare-metal application – in our GitHub repository here. It’s basically “blinky” in both variants. While not all of the code is needed for a simple “blinky”, the code for things like Azure IoT connectivity and intercore communication is included for completeness.


Even with the slightly higher effort needed to get started with Rust on Azure Sphere, this approach turned out to be easy and efficient. Memory safety alone would be worth it, but there is so much more you get from using Rust.

A big part of the ease of this implementation is also due to Azure Sphere and Azure IoT: device provisioning and communication with the cloud are almost a no-brainer – not to forget the managed software updates.

While the high-level application support crates we implemented make things very easy to use, there is some work left on the real-time part of the story. But that shouldn’t be a big effort, as we already have a working solution that just needs to be brought into shape.