TinyML for Smart Sensor Data: Running Small Machine Learning Models on Microcontrollers for Environmental Monitoring

M-GEO
M-SE
GEM

Potential supervisors

M-SE Core knowledge areas
  • Spatial Information Science (SIS)
  • Spatial Planning for Governance (SPG)
  • Technical Engineering (TE)

Additional Remarks

The topic is also suitable for:

  • GEM students in track 1 – GEM for Urban-Rural Interactions
  • GEM students in track 3 – GEM for Ecosystems & Natural Resources
Topic description

Environmental data is often rich and complex. A camera pointed at a meadow records every pixel of every frame, even if all you need is a simple greenness index such as the Green Chromatic Coordinate. A river camera captures thousands of image features, while you may only want the water level. A microphone records the full soundscape when your real interest might be detecting a specific species or quantifying ambient noise. In many environmental monitoring applications, only a small fraction of the raw data is actually needed.
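The Green Chromatic Coordinate mentioned above is just the share of green in the total RGB brightness. A minimal sketch in Python (the function name and example values are illustrative):

```python
def green_chromatic_coordinate(r, g, b):
    """Green Chromatic Coordinate: G / (R + G + B).

    r, g, b are mean channel values, e.g. digital numbers averaged over
    a region of interest; the result lies in [0, 1].
    """
    total = r + g + b
    if total == 0:
        return 0.0  # avoid division by zero on a fully dark image
    return g / total

# Example: a fairly green meadow patch
print(green_chromatic_coordinate(80, 120, 60))  # 120/260 ≈ 0.4615
```

A sensor computing this on-device needs to transmit only one number per image instead of the image itself.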

This is where tiny machine learning can make a real difference. TinyML provides techniques for running very small machine learning models on low-power microcontrollers. These models can extract key information directly on the device, allowing a sensor to send only the essential output rather than transmitting large images, audio recordings, or other high-volume data streams.

TinyML is an emerging field with excellent career prospects, especially as demand for edge computing, environmental AI, and sustainable monitoring technologies continues to grow.

Low-power microcontrollers are already widely used to observe environmental variables such as temperature, humidity and soil moisture. Traditionally, these devices have been too limited to train or run standard machine learning models. Recent advances in TinyML, however, now make it possible to deploy small neural networks directly on microcontrollers. When combined with long-range, low-bandwidth communication technologies such as LoRaWAN, this enables “smart” environmental sensors that can perform onboard analysis before sending a small, meaningful message from the field.
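Fitting a neural network onto a microcontroller typically relies on post-training quantization: 32-bit floats are mapped to 8-bit integers via a scale and zero point. A minimal sketch of the affine quantization scheme used by int8 TinyML runtimes (the scale and zero-point values below are invented for illustration):

```python
def quantize(x, scale, zero_point):
    """Affine quantization: q = round(x / scale) + zero_point, clamped to int8."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q, scale, zero_point):
    """Inverse mapping: recover an approximate float from the int8 code."""
    return (q - zero_point) * scale

scale, zero_point = 0.05, 10  # example parameters, chosen for illustration
q = quantize(0.73, scale, zero_point)
print(q, dequantize(q, scale, zero_point))  # small rounding error vs. 0.73
```

The quantized model is roughly 4x smaller and runs with integer arithmetic only, which is what makes inference feasible on devices without a floating-point unit.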

This MSc topic invites you to develop and field-test a lightweight edge-computing model capable of interpreting RGB images or short image sequences on a microcontroller platform such as an ESP32 or STM32. Your model will reduce each captured image to a compact numerical output, such as a classification score, index value, or compressed environmental indicator, that can be transmitted via LoRaWAN. You will explore the full pipeline, from data collection and model design to embedded deployment and real-world testing.
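LoRaWAN uplinks allow only small payloads (on the order of tens of bytes at typical data rates), so the on-device result must be packed compactly. A sketch of one possible encoding, assuming a hypothetical message layout of an index value scaled to 16 bits plus an 8-bit class label:

```python
import struct

def encode_payload(index_value, class_id):
    """Pack one observation into 3 bytes for a LoRaWAN uplink.

    Layout (illustrative, not a standard): unsigned 16-bit index value
    scaled by 10000, big-endian, followed by an 8-bit class label.
    """
    scaled = max(0, min(65535, round(index_value * 10000)))
    return struct.pack(">HB", scaled, class_id)

def decode_payload(payload):
    """Reverse the packing on the receiving side."""
    scaled, class_id = struct.unpack(">HB", payload)
    return scaled / 10000, class_id

msg = encode_payload(0.4615, 2)
print(len(msg), decode_payload(msg))  # 3 bytes instead of a full image
```

The same idea extends to multiple indicators per message; the point is that a few bytes replace megabytes of raw imagery.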

Possible environmental applications include:

  • Optical detection of river discharge or turbidity, where camera views of a river channel are processed on-device.
  • Livestock counting or presence detection, using small edge cameras in rangelands or paddocks.
  • Computation of a vegetation index from RGB only, such as a simple greenness proxy (e.g., Green Leaf Index), supporting crop or rangeland monitoring.
  • Detection of anomalies, such as flooding, illegal dumping, or vegetation die-off.
  • As an alternative to image-based observations: deriving plot-level soil moisture from reflected GNSS signals.
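The Green Leaf Index named in the list above is another RGB-only greenness proxy, contrasting green against the red and blue channels. A minimal sketch (function name is illustrative):

```python
def green_leaf_index(r, g, b):
    """Green Leaf Index from RGB only: (2G - R - B) / (2G + R + B).

    The result lies in [-1, 1]; positive values indicate green vegetation.
    """
    denom = 2 * g + r + b
    if denom == 0:
        return 0.0  # guard against a fully dark image
    return (2 * g - r - b) / denom

print(green_leaf_index(80, 120, 60))  # (240-140)/(240+140) ≈ 0.263
```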

Students are encouraged to explore a domain that fits their interests. The project sits at the intersection of proximal sensing, IoT engineering, embedded AI, and environmental monitoring, making it highly interdisciplinary. The student will be guided on this exciting and challenging learning path, and is expected to commit to learning advanced IT skills, such as installing and running firmware development kits for microcontrollers, and to engage with C/C++ or MicroPython code where necessary.

Topic objectives and methodology

The objectives of the topic are as follows:
(1) Generate a training and test dataset for the chosen application.
(2) Design and train, offline, a machine learning model that can run on the chosen microcontroller platform.
(3) Build the firmware that runs the imported ML model on the raw observables on the microcontroller.
(4) Deploy the sensor in the field and evaluate the results, assessing the usability of the platform for real-time environmental monitoring.
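Objective (3) amounts to reproducing the trained model's arithmetic on the device: for a quantized model this is integer-only. A minimal sketch of one dense layer evaluated the way embedded runtimes such as TensorFlow Lite Micro do it, with the float output scale approximated by a power-of-two right shift (all weights, biases, and inputs below are invented for illustration):

```python
def dense_int8(inputs_q, weights_q, bias_q, in_zp, out_scale_shift):
    """Integer-only dense layer, as run on a microcontroller.

    inputs_q and weights_q are int8 values, bias_q is int32; the output
    rescaling is approximated by a right shift, then saturated to int8.
    """
    acc = bias_q
    for x_q, w_q in zip(inputs_q, weights_q):
        acc += (x_q - in_zp) * w_q  # int32 accumulator, no floats involved
    out = acc >> out_scale_shift    # rescale to the output range
    return max(-128, min(127, out))  # saturate to int8

# Invented example: three inputs, one output neuron
score = dense_int8([12, -7, 30], [40, -15, 22],
                   bias_q=100, in_zp=0, out_scale_shift=7)
print(score)
```

In practice the firmware would not be hand-written layer by layer; an embedded inference runtime interprets the exported model. The sketch only shows why such models fit on devices without a floating-point unit.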