The Staaker Drone
The Staaker Drone is the product of The Staaker Company, which I co-founded as CTO during my last year of university. We shared a common goal: a self-flying drone that could follow and film extreme sports athletes autonomously.
As CTO I focused on understanding our product, both technically and commercially, with the aim of finding a technically feasible product that would satisfy our business and customer constraints. I believe that one cannot lead work whose underlying dynamics one does not understand. I therefore spent a lot of time with each engineer to learn their specific field. This let me see where the different fields were in conflict and needed guidance to avoid ending up in a pinch, while the engineers could stay focused on their specializations.
I did a lot of the design and specification for the product. Examples include:
- Specification of all electrical submodules, how they interact and what main components they should consist of.
- Specification of all physical parameters of the drone (weight, range, speed) and their interaction with each sub-field.
- Specification of all software components: What they do, and how they interact with each other to form a hard-real-time autonomous control system.
We were a small company and everyone took on several roles.
In addition to the CTO role, I also did software development. I am a skilled C++/Rust/Haskell/Python programmer and I was responsible for the implementation of signal processing, estimation, tracking, navigation and control, implementing all of this as hard-real-time code in C++ on FreeRTOS on an STM32F4.
Here are some videos captured by our customers.
The Staaker Drone state estimator
For the Staaker and Nordic Unmanned drones, I designed and implemented the state estimator. The design is a flexible extended Kalman filter, running hard real-time on an STM32F4. It estimates all states of the drone at 500Hz.
The Staaker Drone uses an extended Kalman filter as its state estimator. The state contains position, velocity, acceleration, attitude modelled as a quaternion, and 3-axis gyro bias. The control input to the filter is the gyroscope and body-frame accelerometer measurements. The Kalman filter supports several different types of measurements (Kalman updates):
1. World-frame yaw heading
2. Body acceleration
3. Combined body acceleration and north-east heading
4. Altitude measurement (barometer or GPS height)
5. Combined GPS north-east position, altitude, NorthEastDown velocity, body acceleration and world-frame yaw heading
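Each of these update steps has the shape of a standard EKF measurement update; the steps differ only in their measurement function, its Jacobian and the noise model. A minimal sketch in Python with NumPy (names illustrative, not the actual firmware code):

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One EKF measurement update. Each measurement type plugs in its
    own measurement function h, Jacobian H and noise covariance R."""
    y = z - h(x)                     # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Tiny example: 1-state filter with a direct measurement.
H = np.array([[1.0]])
x2, P2 = ekf_update(np.array([0.0]), np.array([[1.0]]),
                    np.array([2.0]), lambda s: H @ s, H,
                    np.array([[1.0]]))
```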
The reason for so many measurement types is that data arrives at different rates. The sensors, ordered from slowest to fastest:
- GPS: 5-10Hz, not hard-real-time timing
- Barometer/compass: ~20Hz sampling
- Accelerometer/gyroscope: 500Hz sampling
At each update of the Kalman filter, the readiness of new sensor data is checked for each sensor.
- If GPS data is available, the full and most expensive update step (5) is run.
- Else, if magnetometer data is available, update 3 is run.
- If barometer data is available, update 4 is run.
- Last, if accelerometer data is available and it has not yet been "consumed" by any of the other update steps, update 2 is run.
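One hedged reading of this priority scheme as a sketch (function name and flags are illustrative; the text leaves open whether updates 3 and 4 can run in the same tick, so here they are treated as mutually exclusive):

```python
def select_update(gps_ready, mag_ready, baro_ready, accel_ready):
    """Pick the highest-priority Kalman update with fresh data.

    Returns the update number from the measurement list:
    5 (full GPS), 3 (heading), 4 (altitude), 2 (body acceleration),
    or None when nothing new has arrived.
    """
    if gps_ready:
        return 5
    if mag_ready:
        return 3
    if baro_ready:
        return 4
    if accel_ready:
        # Updates 3 and 5 also consume the accelerometer, so the
        # accel-only update runs only when neither of them did.
        return 2
    return None
```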
With this design, the Kalman filter can estimate and integrate data from all the different sensors, in a consistent manner, at the highest possible rate, 500Hz, without having to resort to hacks like an integrating observer running ahead of the Kalman filter in time.
A critical point in the design of any Kalman filter that contains attitude is how to parametrize it. In my work, the Kalman filter estimates a linearized 3-axis error angle about the current non-linear quaternion state. This is different from linearizing the quaternion equations and directly estimating the next 4 quaternion coordinates. The reason is that attitude quaternions require \(|q|=1\), a constraint the Kalman filter is not able to take into account. This means that any extended Kalman filter which directly estimates quaternions, DCM matrices or similar will estimate poorly, especially for very dynamic systems, as the extended Kalman filter ends up with an estimate where \(|q|\ne1\).
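A minimal sketch of this multiplicative error-state approach (Python with illustrative names; the actual firmware is generated C): the filter's 3-axis error angle is folded back into the quaternion and the result is renormalized, so the \(|q|=1\) constraint lives outside the linear filter.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def apply_error_angle(q, dtheta):
    """Fold the filter's estimated 3-axis error angle back into the
    quaternion: multiply on the small-angle error quaternion, then
    renormalize so |q| = 1 is maintained."""
    dq = np.concatenate(([1.0], 0.5 * np.asarray(dtheta, dtype=float)))
    q_new = quat_mul(q, dq)
    return q_new / np.linalg.norm(q_new)
```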
The Kalman filter is required to run at 500Hz on a tiny STM32F4 chip. A naive C++ implementation would have a hard time getting close to even a 50Hz update rate on such a processor. To achieve fast enough numerics on such a small computing budget, I developed an optimizing, symbolic-math-based numerical compiler in Python using SymPy. The compiler takes as input the symbolic equations of motion of a drone, and outputs a C file containing no dynamic memory, no loops, and no unbounded control logic. One of the key innovations was finding a general closed-form solution for the covariance update step of the Kalman equations and compressing this expression down to a code size that executes quickly on a microcontroller.
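A toy sketch of the SymPy-based approach on a 2-state model (the real compiler works on the full drone dynamics; the symbols and output format here are illustrative): build the covariance propagation symbolically, share common subexpressions, and emit straight-line C.

```python
import sympy as sp

# Toy 2-state model: covariance propagation P' = F P F^T + Q.
p00, p01, p11, dt, q = sp.symbols('p00 p01 p11 dt q')
F = sp.Matrix([[1, dt], [0, 1]])
P = sp.Matrix([[p00, p01], [p01, p11]])
Q = sp.Matrix([[0, 0], [0, q]])
Pn = F * P * F.T + Q

# Share common subexpressions and emit straight-line C:
# no loops, no branches, no dynamic memory.
subs, exprs = sp.cse(list(Pn))
c_lines = [f"const double {s} = {sp.ccode(e)};" for s, e in subs]
c_lines += [f"Pn[{i}] = {sp.ccode(e)};" for i, e in enumerate(exprs)]
c_code = "\n".join(c_lines)
print(c_code)
```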
The compiler is fully integrated into the drone build system, so changing any of the declarative model-input files triggers a full rebuild of the whole Kalman filter stack.
The Staaker Drone control system
For the Staaker and Nordic Unmanned drones, I designed and implemented the control system. The design is a classic cascaded PID loop, with a complex mapping between desired accelerations and desired attitude. The control stack runs hard real-time on an STM32F4. It consumes estimates and reference setpoints and controls the drone motors at 500Hz.
The Staaker drone control system is a classic cascaded PID design. For position, the implementation is straightforward. For velocity, if saturation is reached, a heuristic prioritizing altitude is used. This saves the drone from crashing into the ground if it is given velocity setpoints it cannot reach. Acceleration is mapped to orientation and thrust by solving the physical model of the forces on the craft for orientation. Orientation uses a nonlinear control law. It takes into account that roll and pitch moments are much easier to generate than yaw, as well as the existing angular momentum of the craft. The rate setpoint generated by the orientation control law is consumed by the rate PID controller.
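A hypothetical sketch of the altitude-prioritizing velocity saturation heuristic (Python, illustrative; the actual firmware logic may differ): when the requested speed exceeds the achievable envelope, the horizontal component is shrunk first so vertical authority survives.

```python
import math

def limit_velocity_setpoint(v_ne, v_d, max_speed):
    """Shrink a velocity setpoint into the achievable envelope,
    sacrificing horizontal speed before vertical (down) speed.

    v_ne: (north, east) setpoint, v_d: down setpoint.
    """
    if abs(v_d) >= max_speed:
        # All authority goes to the vertical axis.
        return (0.0, 0.0), math.copysign(max_speed, v_d)
    # Whatever speed remains after the vertical demand is met
    # is the budget for horizontal motion.
    horiz_budget = math.sqrt(max_speed**2 - v_d**2)
    n, e = v_ne
    horiz = math.hypot(n, e)
    if horiz <= horiz_budget:
        return (n, e), v_d
    scale = horiz_budget / horiz
    return (n * scale, e * scale), v_d
```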
Motor saturation handling
Quadcopter controller design would be easy if it wasn't for saturation. The reason is that in the control system, information flows from the slow dynamics to the fast dynamics: position -> velocity -> attitude -> rates -> motor setpoints. Only once the motor setpoints have been computed do you know whether any of the motors will saturate. If some motors do saturate, you then have the problem of propagating the saturation backwards through the control laws, changing the setpoints so that the saturation disappears.
In the Staaker drone, yaw rate is always sacrificed first when saturation is detected. This is because a quadrotor can fly safely as long as it is able to measure its yaw; it does not need to control it. A fascinating example of this is in this video:
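As an illustration, a sketch of yaw sacrifice in a standard X-quad mixer (Python, hypothetical sign convention; the real controller backpropagates through the full control stack): if any motor command leaves the valid range, the yaw term is scaled toward zero first.

```python
def mix_with_yaw_sacrifice(thrust, roll, pitch, yaw,
                           out_min=0.0, out_max=1.0):
    """X-quad mixer sketch: if any motor command would leave
    [out_min, out_max], reduce the yaw command first. If saturation
    persists even with zero yaw, a real controller must propagate
    further back, to the attitude/rate setpoints."""
    def motors(y):
        return [thrust + roll + pitch + y,
                thrust - roll + pitch - y,
                thrust + roll - pitch - y,
                thrust - roll - pitch + y]

    def ok(m):
        return min(m) >= out_min and max(m) <= out_max

    if ok(motors(yaw)):
        return motors(yaw)
    # Binary search for the largest yaw fraction that avoids
    # saturation; in the worst case yaw is dropped entirely.
    lo, hi = 0.0, 1.0
    for _ in range(30):
        mid = (lo + hi) / 2
        if ok(motors(yaw * mid)):
            lo = mid
        else:
            hi = mid
    return motors(yaw * lo)
```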
Nordic Unmanned RailRobot phase 1: Fly and drive frame
This project was an initial technical demonstrator. The goal was to create a drone platform that is physically capable of driving on rails and flying, as well as landing on them.
For the drone platform, our BG200 platform was adapted. My responsibility was ensuring the following properties of the platform:
- Flight characteristics capable of landing on the rails.
- Flight endurance to be able to jump between tracks many times.
- Rail driving endurance of >100km.
- Feasible placement for all avionics, computer vision and onboard computing systems
- Compromise between flight and rail-running performance
As the project had constraints, we could not change the BG200 frame design, only add to it. The end result is a careful compromise between all the performance factors, complexity and cost.
High-efficiency low-latency live video streaming
To control a drone, an operator depends on a live video stream. I designed and implemented the software stack that does video streaming in Nordic Unmanned's Staaker drones.
General design goals
The key design goals for the video streaming stack were the following:
- Low end-to-end latency. From the camera grabbing a frame until it is visible on the screen on the ground, there should be no more than 60-70ms for the operator to feel they still have control of the craft.
- Low bandwidth. Long range combined with maximum power requirements on radio equipment means that the received signal from the drone is weak. This leads to a low maximum bandwidth. We optimized our systems to carry 2 video streams within 1.5 Mbit/s of bandwidth.
- High tolerance to network packet loss
- High tolerance to network packet jitter
- High energy/compute efficiency
To get low end-to-end latency, there can be very few buffers in the pipeline. The only places where data is buffered in the solution are inside the video encoder, which requires at least a single frame of delay to be able to do delta encoding, and within the receiving jitter buffer, which handles network packet reordering. Another critical part of the latency work was to use hardware-accelerated encoding and tune the encoder for minimal latency.
To get the required energy/compute efficiency, VA-API was chosen. This offloads the hard work of H264 video encoding to the underlying graphics hardware.
To handle network packet jitter, a jitter buffer is used on the receiver side. To handle packet loss, additional forward error correction coding is applied on the sender side, with the accompanying FEC decoding on the receiving side. This lets the link handle X% packet loss, at the cost of X% extra bandwidth, without any loss of information on the receiving side.
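The principle can be illustrated with the simplest possible FEC, a single XOR parity packet over k data packets (a toy sketch; the real link likely uses a more capable code): any one lost packet can be rebuilt, at a cost of 1/k extra bandwidth.

```python
def xor_parity(packets):
    """Build one XOR parity packet over k equal-length data packets."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return parity

def recover_missing(received, parity):
    """Rebuild the single packet marked as None from the others plus
    the parity packet."""
    acc = parity
    for p in received:
        if p is not None:
            acc = bytes(a ^ b for a, b in zip(acc, p))
    return acc

data = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
parity = xor_parity(data)  # 1/k = 25% overhead for k = 4
rebuilt = recover_missing([data[0], None, data[2], data[3]], parity)
```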
The system was implemented in the Rust programming language, using the gstreamer library.