Let’s talk about Artificial Intelligence (AI) wildfire detection accuracy.
We launched our AI wildfire detection software in July 2020. It was the first ready-to-use commercial product on the market. Our AI wildfire detection software now supports both on-premises and SaaS deployment, and is deployed in 10 countries to detect wildfires early and mitigate their risks and damages.
We have been helping our customers test and use AI wildfire detection since 2019. One common question from customers is about the accuracy of AI wildfire detection. Similar questions came up again from participants in the recent UNEP Modern Technologies for Disaster Management webinar and the Global Catastrophic Wildfire Prevention Day Clubhouse session.
It is a good time to talk about the topic in more detail.

There are three key performance measurements for machine learning object detection (see the short code sketch after the list):
- Precision = what proportion of positive identifications was actually correct = True Positive / (True Positive + False Positive)
- Recall = what proportion of actual positives was identified correctly = True Positive / (True Positive + False Negative)
- mean Average Precision (mAP) = the Average Precision (the arithmetic mean of the interpolated precision at each recall level), averaged over all object classes
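To make the first two formulas concrete, here is a minimal Python sketch (our own illustration, not code from any particular library) that computes precision and recall from raw detection counts:

```python
def precision(tp: int, fp: int) -> float:
    """Proportion of reported positives that were actually correct."""
    return tp / (tp + fp) if (tp + fp) else 0.0


def recall(tp: int, fn: int) -> float:
    """Proportion of actual positives that were correctly identified."""
    return tp / (tp + fn) if (tp + fn) else 0.0


# Example: 95 correct fire detections, 5 false alarms, 10 missed fires.
print(precision(tp=95, fp=5))   # 0.95
print(recall(tp=95, fn=10))     # ~0.905
```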
The precision concept is the easiest of the three to comprehend. It focuses on the positives identified by the Machine Learning (ML) object detection: the higher the value, the better the precision.
We can achieve higher precision in two ways:
- increase True Positives (TP) by continuously building better detection models with newer ML algorithms and improved datasets of early-stage forest fires in diverse environments.
- decrease False Positives (FP) by developing tools to isolate the edge cases that cause false positives.

Our LookOut wildfire detection SaaS provides the following false-positive reduction tools (a hypothetical code sketch follows the list):
- Active Detection Region (ADR): user can define the active detection region and add masks to deactivate detection in certain image areas
- Detection Sensitivity: user can raise the detection confidence threshold so that a positive is reported only when the ML algorithm is very confident about the object it detects.
- Double-Detection: user can avoid FP caused by moving objects, such as cars and swaying trees, by requiring LookOut to report a positive only when a similar object is detected in two consecutive detection cycles.
- Detection Boundary Size Range: the boundary is the bounding box that contains the detected object. There are several interesting use cases for this feature. For example, if a user doesn’t want to use ADR, they can cap the maximum boundary size to avoid false positives triggered by clouds in the sky.
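To make the roles of these four tools concrete, here is a hypothetical Python sketch of a post-detection filter. This is not LookOut’s actual code or API; the `Detection` structure, thresholds, and region values are illustrative assumptions only:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: int             # top-left corner of the bounding box (pixels)
    y: int
    w: int             # bounding-box width and height (pixels)
    h: int
    confidence: float  # ML confidence score in [0, 1]

# Illustrative settings, one per tool described above (not real LookOut values).
MIN_CONFIDENCE = 0.8                  # Detection Sensitivity
MIN_AREA, MAX_AREA = 100, 50_000      # Detection Boundary Size Range
ACTIVE_REGION = (0, 300, 1920, 1080)  # ADR: mask out the sky above y=300

def inside_active_region(d: Detection) -> bool:
    left, top, right, bottom = ACTIVE_REGION
    return (left <= d.x and top <= d.y
            and d.x + d.w <= right and d.y + d.h <= bottom)

def passes_filters(d: Detection) -> bool:
    area = d.w * d.h
    return (d.confidence >= MIN_CONFIDENCE
            and MIN_AREA <= area <= MAX_AREA
            and inside_active_region(d))

def boxes_overlap(a: Detection, b: Detection) -> bool:
    """Rough similarity test for Double-Detection: the two boxes intersect."""
    return not (a.x + a.w < b.x or b.x + b.w < a.x
                or a.y + a.h < b.y or b.y + b.h < a.y)

def confirmed(previous: list[Detection], current: list[Detection]) -> list[Detection]:
    """Double-Detection: report a positive only if a similar object
    also appeared in the previous detection cycle."""
    return [d for d in current
            if passes_filters(d) and any(boxes_overlap(d, p) for p in previous)]
```

In a real deployment, settings like these would be per-camera configuration rather than module constants, since each site has its own skyline and motion sources.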

With all these tools, LookOut users can customise the wildfire detection service to meet their unique environments and needs. According to our ground truth data, LookOut can achieve a precision of 95%+ in detecting early-stage wildfires with visible sizes as small as 0.03% of the image (about the size of a 25×25-pixel square in a 1920×1080 HD photo).
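The pixel arithmetic behind that figure is easy to verify; as a quick sanity check:

```python
frame_pixels = 1920 * 1080           # 2,073,600 pixels in an HD frame
min_visible = 0.0003 * frame_pixels  # 0.03% of the image
print(min_visible)                   # 622.08 pixels, close to 25 * 25 = 625
```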

We evaluate the performance of improved models and tools on hundreds of thousands of photos, and then verify them in selected production environments around the world before rolling out new models to all customers. Taking models from development to production is challenging, and models in use require constant monitoring and retraining as data shifts. According to a widely cited VentureBeat report, 87% of data science projects never make it into production. To us, ML model production deployment is as important as model development; we need both to provide good products.
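As one illustration of what such monitoring can look like (our own sketch, not the actual LookOut pipeline), a sliding-window precision check over verified detections can flag a deployed model for review when its precision drifts below a target:

```python
from collections import deque

class PrecisionMonitor:
    """Track precision over a sliding window of verified detections."""

    def __init__(self, window: int = 1000, alert_below: float = 0.95):
        self.outcomes = deque(maxlen=window)  # True = TP, False = FP
        self.alert_below = alert_below

    def record(self, is_true_positive: bool) -> None:
        self.outcomes.append(is_true_positive)

    def precision(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def needs_review(self) -> bool:
        """Flag the model once a full window shows degraded precision."""
        return (len(self.outcomes) == self.outcomes.maxlen
                and self.precision() < self.alert_below)
```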
Next, we will talk about Recall and mAP. Stay tuned~
Want to try our AI wildfire detection? Have an AXIS camera or wildfire photos? Please register for a free trial LookOut account at https://lookout.roboticscats.com
References
- Precision and recall, wikipedia.org
- Evaluating performance of an object detection model, towardsdatascience.com
- Edge cases: predicting the unpredictable, cognatic.com
- Why do 87% of data science projects never make it into production?, venturebeat.com