Using MODBUS on Azure Sphere with Rust
Azure Sphere connects to the central receiver via MODBUS RTU over RS485, and the telemetry data needs to go into Azure IoT for further processing and analysis. Additionally, a sensor’s firmware can be updated via MODBUS whenever the manufacturer releases a new version.
Having all the telemetry data in the cloud opens up great opportunities for visualizing and analyzing it. It also makes it easy to give different target groups access to the data depending on the user’s role, and alerting can be implemented without much effort. Besides these business and technical requirements, we wanted to implement everything on the Azure Sphere side in Rust – the reasons for that are explained below.
Azure Sphere
Rust
Here is a quick summary of some things Rust offers:
- High performance
- Memory safety
- Useable for anything from small embedded devices to big iron servers – and the web
- A modern feature-rich language
- Zero-cost abstractions
The Approach
So, we want to talk MODBUS RTU via RS485 to the sensors.
The protocol is easy enough to implement without much effort. But there is one catch: since only one device is allowed to talk on the bus at any given time, we have to switch the RS485 transceiver between transmit and receive within a tight time frame. Staying in transmit mode too long means losing at least part of the response; switching to receive too early means cutting off part of the command we are sending via UART. Toggling the GPIO at exactly the right moment while communicating via UART didn’t work with just the high-level app running on the A7 core. Fortunately, we also have access to the real-time M4 cores. There, we can set up appropriate interrupts to know exactly when the last bit of our command has been sent and the transceiver has to be switched into receive mode. We decided to keep the low-level app as simple as possible and to implement the more complex things, like creating the MODBUS packets and calculating the checksum, in our high-level app.
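To give an idea of the kind of logic that lives in the high-level app, here is a minimal, self-contained sketch of the standard MODBUS CRC-16 calculation and a request builder. The function names are ours, chosen for illustration – only the algorithm itself comes from the MODBUS specification:

/// Standard MODBUS CRC-16: initial value 0xFFFF, reflected polynomial 0xA001.
fn modbus_crc16(data: &[u8]) -> u16 {
    let mut crc: u16 = 0xFFFF;
    for &byte in data {
        crc ^= byte as u16;
        for _ in 0..8 {
            if crc & 0x0001 != 0 {
                crc = (crc >> 1) ^ 0xA001;
            } else {
                crc >>= 1;
            }
        }
    }
    crc
}

/// Builds a "read holding registers" (function code 0x03) request frame.
fn build_read_request(slave: u8, start_register: u16, register_count: u16) -> [u8; 8] {
    let mut frame = [0u8; 8];
    frame[0] = slave;
    frame[1] = 0x03;
    frame[2..4].copy_from_slice(&start_register.to_be_bytes());
    frame[4..6].copy_from_slice(&register_count.to_be_bytes());
    // MODBUS RTU appends the CRC with the low byte first.
    frame[6..8].copy_from_slice(&modbus_crc16(&frame[..6]).to_le_bytes());
    frame
}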
Apart from that, the high-level app is also responsible for communicating with the cloud (i.e. telemetry ingress, receiving cloud-to-device commands to trigger a firmware update of the sensors, and downloading new firmware from the internet). Besides making little sense, accessing the network directly from the low-level app is not allowed – and physically not possible – anyway.
That turned out to be a great solution. Here is a schematic of the components involved:
It’s also worth noting that while Azure Sphere can certainly talk to any cloud, it’s actually very easy to connect Azure Sphere to Azure IoT.
Even things like device provisioning, which usually are a headache, are a no-brainer in this setup. It’s no exaggeration to say that Azure Sphere and Azure IoT are a match made in heaven.
The Details – How to use Rust to develop for Azure Sphere
Out of the box, the Azure Sphere SDK supports developing your applications in C. That is an understandable and good decision – and it also means it’s not too hard to use Rust here.
While it is perfectly possible to cross-compile libstd for high-level apps – support for Linux is already integrated – we will go with no-std: not only does it make the generated binaries a bit smaller, it’s also just a bit easier to start without libstd here for now. However, no-std isn’t really a limitation here – the core library already gives us enough functionality to conveniently develop our solution.
Prerequisites
Before we can start development we need some components installed:
- A recent Rust nightly toolchain (e.g. 1.50.0-nightly (1c389ffef 2020-11-24) or later)
- MUSL GCC toolchain for ARM-HF for developing high-level applications
- A recent gcc-arm-none-eabi toolchain for developing real-time apps
- Azure Sphere SDK for Windows or Linux
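Assuming rustup is used to manage the toolchain, the Rust-side prerequisites can be set up like this (cargo-xbuild is what we will use later to cross-compile libcore):

rustup toolchain install nightly
rustup component add rust-src --toolchain nightly
cargo install cargo-xbuild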
High-Level Application
In order to successfully compile, link, and deploy a high-level application, we need to specify some configuration options. Fortunately, it’s quite easy to customize the build options – we don’t need to recompile the compiler but we can easily configure everything we need when using a precompiled toolchain.
First we need a target specification file – it’s called arm-v7-none-eabi.json here:
{
  "abi-blacklist": [
    "stdcall",
    "fastcall",
    "vectorcall",
    "thiscall",
    "win64",
    "sysv64"
  ],
  "arch": "arm",
  "data-layout": "e-m:e-p:32:32-i64:64-v128:64:128-a:0:32-n32-S64",
  "emit-debug-gdb-scripts": false,
  "env": "",
  "executables": true,
  "cpu": "cortex-a7",
  "features": "+v7,+thumb-mode,+thumb2,+vfp3,+neon",
  "linker": "arm-linux-musleabihf-gcc",
  "linker-flavor": "gcc",
  "llvm-target": "arm-v7-none-eabihf",
  "max-atomic-width": 32,
  "os": "none",
  "panic-strategy": "abort",
  "relocation-model": "pic",
  "target-c-int-width": "32",
  "target-endian": "little",
  "target-pointer-width": "32",
  "vendor": ""
}
Additionally, we need the following Cargo configuration (.cargo/config):

[target.arm-v7-none-eabi]
linker = 'sysroots/7+Beta2010/tools/sysroots/x86_64-pokysdk-linux/usr/bin/arm-poky-linux-musleabi/arm-poky-linux-musleabi-gcc'
runner = 'arm-eabi-gdb'
rustflags = [
    "-C", "link-arg=-flinker-output=exec",
    "-C", "link-arg=-Wl,--dynamic-linker=/lib/ld-musl-armhf.so.1",
    "-C", "link-arg=-v",
    "-C", "link-arg=--sysroot=./sysroots/7+Beta2010/",
    "-C", "link-arg=-L",
    "-C", "link-arg=./sysroots/7+Beta2010/usr/lib/",
    "-C", "link-arg=-L",
    "-C", "link-arg=./sysroots/7+Beta2010/lib/",
    "-C", "link-arg=-Wl,--no-undefined,--gc-sections",
    "-C", "link-arg=-nodefaultlibs",
    "-C", "link-arg=-march=armv7ve+neon",
    "-C", "link-arg=-mcpu=cortex-a7",
    "-C", "link-arg=-mthumb",
    "-C", "link-arg=-mfpu=neon",
    "-C", "link-arg=-mfloat-abi=hard",
    "-C", "link-arg=-Wl,-Bdynamic",
    "-C", "link-arg=-lapplibs",
    "-C", "link-arg=-lazureiot",
    "-C", "link-arg=-lpthread",
    "-C", "link-arg=-lcurl",
    "-C", "link-arg=-ltlsutils",
    "-C", "link-arg=-lgcc_s",
    "-C", "link-arg=-lc",
    "-C", "link-arg=-Os",
]

[build]
target = "arm-v7-none-eabi"
Here we configure which target to use by default (the arm-v7-none-eabi specification from above) and how the linking is done. We also configure which libraries to link against – those come from the Sphere SDK. Please note that I symlinked the sysroots folder into the project for simplicity and because we cannot use environment variables in the config file. At this point, we are already able to compile a binary suitable to run on Azure Sphere by running cargo xbuild.
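For reference, creating that symlink on Linux – where the SDK installs to /opt/azurespheresdk by default – could look like this (the exact path depends on your SDK version and install location):

ln -s /opt/azurespheresdk/Sysroots sysroots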
Now we are just a few commands away from having it running on the device. First let’s copy together the binary and the application manifest:
mkdir -p target/approot/
mkdir -p target/approot/bin
cp target/arm-v7-none-eabi/debug/sphere-app target/approot/bin/app
cp app_manifest.json target/approot
Next, we use the azsphere command-line utility to create the image and deploy it:
azsphere image-package pack-application --input target/approot --destination-file target/sphere-app.image --verbose
azsphere device sideload delete --component-id 00f3df71-a397-4a5e-89cb-7dde6486888d --verbose
azsphere device sideload deploy -p target/sphere-app.image --verbose
Here I’m using the Azure Sphere CLI v2 syntax; for v1 it looked similar but not exactly the same. That is all that’s needed to get a Rust binary onto the device. To make development easier, I decided to create and use some support crates:
sphere-sys: A typical sys crate. It mainly consists of a build.rs file that uses bindgen to generate Rust FFI bindings from the header files contained in the Azure Sphere SDK.
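As a rough sketch – the wrapper header name, the bindgen options, and the sysroot path are illustrative assumptions – such a build.rs could look like this:

use std::env;
use std::path::PathBuf;

fn main() {
    // wrapper.h just #includes the applibs headers we want bindings for.
    let bindings = bindgen::Builder::default()
        .header("wrapper.h")
        // We are no-std, so generate core-based types instead of std ones.
        .use_core()
        .ctypes_prefix("cty")
        // Let clang find the SDK headers via the symlinked sysroot.
        .clang_arg("--sysroot=sysroots/7+Beta2010")
        .generate()
        .expect("failed to generate bindings");

    let out_path = PathBuf::from(env::var("OUT_DIR").unwrap());
    bindings
        .write_to_file(out_path.join("bindings.rs"))
        .expect("failed to write bindings.rs");
}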
sphere-lib: Since directly using the generated FFI bindings is quite inconvenient and requires unsafe code in many places, it is a good idea to create some Rust-friendly wrappers that make the functionality feel idiomatic on the Rust side.
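To illustrate the pattern, a stripped-down version of a GPIO wrapper (as used in the blinky further below) might look like this. The raw binding signatures only roughly mirror applibs/gpio.h; the exact types depend on the bindgen output, and the constant values and error handling are simplified:

// Raw bindings, roughly as bindgen would generate them from applibs/gpio.h.
extern "C" {
    fn GPIO_OpenAsOutput(gpio_id: i32, output_mode: u32, initial_value: u32) -> i32;
    fn GPIO_SetValue(gpio_fd: i32, value: u32) -> i32;
}

/// Safe, idiomatic wrapper around a single GPIO output.
pub struct GpioPort {
    fd: i32,
}

impl GpioPort {
    /// Opens the given GPIO as a push-pull output, initially high.
    pub fn open(id: i32) -> GpioPort {
        // 0 = GPIO_OutputMode_PushPull, 1 = GPIO_Value_High (simplified).
        let fd = unsafe { GPIO_OpenAsOutput(id, 0, 1) };
        assert!(fd >= 0, "GPIO_OpenAsOutput failed");
        GpioPort { fd }
    }

    pub fn set_low(&self) {
        unsafe { GPIO_SetValue(self.fd, 0) }; // 0 = GPIO_Value_Low
    }

    pub fn set_high(&self) {
        unsafe { GPIO_SetValue(self.fd, 1) }; // 1 = GPIO_Value_High
    }
}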
sphere-rt: This is optional but very useful. It contains a custom allocator (which currently just uses malloc and free directly) as well as some re-exports of functionality from libcore. Additionally, it defines the print family of macros to log to Sphere’s debug output, a panic handler, and the start function of the application. The code sample below shows how this makes development easier.
And this is what a very simple multi-color blinky could look like with the approach explained above:
#![cfg_attr(not(test), no_std)]
#![cfg_attr(not(test), no_main)]

#[cfg(not(test))]
extern crate sphere_rt as std;
#[cfg(not(test))]
use std::prelude::v1::*;

extern crate sphere_lib;
use sphere_lib::mt3620_gpio::*;
use sphere_lib::util::sleep;

const MT3620_GPIO8: i32 = 8;
const MT3620_GPIO9: i32 = 9;
const MT3620_GPIO10: i32 = 10;

const MT3620_RDB_LED1_GREEN: i32 = MT3620_GPIO9;
const MT3620_RDB_LED1_BLUE: i32 = MT3620_GPIO10;
const MT3620_RDB_LED1_RED: i32 = MT3620_GPIO8;

#[no_mangle]
fn start() {
    println!("start");
    let red = GpioPort::open(MT3620_RDB_LED1_RED);
    let green = GpioPort::open(MT3620_RDB_LED1_GREEN);
    let blue = GpioPort::open(MT3620_RDB_LED1_BLUE);
    loop {
        red.set_low();
        green.set_low();
        blue.set_low();
        sleep(1);
        ...
    }
}
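For the blinky to be allowed to access those pins, the app_manifest.json we packaged earlier must also declare them as capabilities. A minimal manifest could look like this (Name is arbitrary; the ComponentId matches the one used in the sideload commands above):

{
  "SchemaVersion": 1,
  "Name": "sphere-app",
  "ComponentId": "00f3df71-a397-4a5e-89cb-7dde6486888d",
  "EntryPoint": "/bin/app",
  "CmdArgs": [],
  "Capabilities": {
    "Gpio": [8, 9, 10]
  },
  "ApplicationType": "Default"
}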
Real-Time Application
The real-time apps run bare-metal on the M4 cores, so we can use Rust’s built-in thumbv7em-none-eabihf target, and the Cargo configuration (.cargo/config) becomes a bit simpler:

[target.thumbv7em-none-eabihf]
rustflags = [
    "-C", "linker=arm-none-eabi-gcc",
    "-C", "link-arg=-mcpu=cortex-m4",
    "-C", "link-arg=-g",
    "-C", "link-arg=-nostartfiles",
    "-C", "link-arg=-Wl,--no-undefined",
    "-C", "link-arg=-Wl,-n",
    "-C", "link-arg=-Wl,-Tlinker.ld"
]

[build]
target = "thumbv7em-none-eabihf"
The custom linker script passed via -Tlinker.ld above is made available to the linker by a small build.rs:

use std::env;
use std::fs::File;
use std::io::Write;
use std::path::PathBuf;

fn main() {
    // Put the linker script somewhere the linker can find it
    let out = &PathBuf::from(env::var_os("OUT_DIR").unwrap());
    File::create(out.join("linker.ld"))
        .unwrap()
        .write_all(include_bytes!("linker.ld"))
        .unwrap();
    println!("cargo:rustc-link-search={}", out.display());

    // Only re-run the build script when linker.ld is changed,
    // instead of when any part of the source code changes.
    println!("cargo:rerun-if-changed=linker.ld");
}
Since there is no OS and no runtime startup code on the M4 cores (we link with -nostartfiles), we also have to provide the exception vector table and the entry point ourselves:

const SCB_BASE: usize = 0xE000ED00;

const INTERRUPT_COUNT: usize = 100;
const EXCEPTION_COUNT: usize = 16 + INTERRUPT_COUNT;

extern "C" {
    pub static StackTop: u32;
}

pub union Vector {
    handler: unsafe extern "C" fn(),
    reserved: usize,
    address: *const u32,
}

#[link_section = ".vector_table"]
#[no_mangle]
pub static mut ExceptionVectorTable: [Vector; EXCEPTION_COUNT] = [
    Vector {
        address: unsafe { &StackTop },
    },
    Vector { handler: main }, // RESET
    Vector {
        handler: defaultHandler,
    }, // NMI
    ...

#[no_mangle]
unsafe extern "C" fn main() {
    // SCB->VTOR = ExceptionVectorTable
    write_reg32(SCB_BASE, 0x08, ExceptionVectorTable.as_ptr() as u32);
    ...
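The write_reg32 helper is not shown in the excerpt; it is essentially just a volatile 32-bit store to a memory-mapped register, for example:

/// Volatile 32-bit write to the memory-mapped register at base + offset.
#[inline(always)]
unsafe fn write_reg32(base: usize, offset: usize, value: u32) {
    core::ptr::write_volatile((base + offset) as *mut u32, value);
}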
Conclusion
Even given the slightly higher initial effort needed to get started with Rust for Azure Sphere, this approach turned out to be easy and efficient to work with. Memory safety alone would be worth it, but there is so much more you get from using Rust.
A big part of how easy this implementation was is also due to Azure Sphere and Azure IoT. Device provisioning and communication with the cloud are almost no-brainers – not to forget the managed software updates.
While the high-level application support crates we implemented make things very easy to use, there is some work left on the real-time part of the story. But that shouldn’t take a big effort, as we already have a working solution here that just needs to be brought into shape.