To review all the footage a drone captures while flying over the ocean for hours at a time, we needed something that could watch it all and spot and record the locations of marine animals. We’ll be the first to tell you that watching hundreds of hours of ocean footage from a drone is pretty darned boring, and really hard to do well.
After a lot of work collecting aerial drone footage and building and testing machine learning models, we completed our initial AI model, proving that what we wanted to achieve is possible. Below you can see a computer vision model, running on Darknet and YOLO, quickly identifying dolphins in the ocean from aerial footage.
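To make the pipeline a little more concrete, here is a minimal sketch of how a YOLO-style detector's raw output is typically handled. This is not MAUI63's actual code: the `Detection` type, the sample boxes, and the 0.5 confidence threshold are all illustrative assumptions. The detector emits one box plus a confidence score per candidate, and everything below the threshold is discarded.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    # Normalised box centre/size as YOLO-style detectors emit them,
    # plus the model's confidence that the box contains a dolphin.
    x: float
    y: float
    w: float
    h: float
    confidence: float

def filter_detections(detections, threshold=0.5):
    """Keep only boxes the model is reasonably confident about.

    Real pipelines also apply non-maximum suppression to merge
    overlapping boxes; that step is omitted here for brevity.
    """
    return [d for d in detections if d.confidence >= threshold]

# Example: three candidate boxes from one aerial frame (made-up values).
raw = [
    Detection(0.42, 0.31, 0.05, 0.04, 0.91),  # clear dolphin
    Detection(0.10, 0.80, 0.03, 0.03, 0.12),  # likely wave glint
    Detection(0.55, 0.47, 0.06, 0.05, 0.67),  # probable dolphin
]
kept = filter_detections(raw)
print(len(kept))  # 2 boxes survive the 0.5 cut-off
```

The threshold is the knob that trades missed dolphins against false alarms from whitecaps and sun glint.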
We are pretty proud of our ~96% mean average precision (mAP). Roughly speaking, this means that compared to a human manually tagging images of dolphins, the computer does the same job about 96% as accurately. It can also do it over 1000x faster: an efficient human takes around 10 seconds to tag an image, while our models can process hundreds of frames per second.
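The speedup claim is easy to sanity-check with back-of-the-envelope arithmetic. The 200 frames per second figure below is an illustrative assumption (the post only says hundreds per second):

```python
human_seconds_per_image = 10       # an efficient human tagger
machine_images_per_second = 200    # "hundreds per second" (assumed value)

# How much faster the machine is per image.
machine_seconds_per_image = 1 / machine_images_per_second
speedup = human_seconds_per_image / machine_seconds_per_image
print(speedup)  # 2000.0 -> comfortably "1000x+ faster"
```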
This speed, combined with frame-by-frame processing, means dolphin sightings can be pulled quickly from video footage where humans often miss them.
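Because every frame gets scored, even brief surfacings are easy to recover: any run of consecutive frames with detections can be collapsed into a single timestamped sighting. Here is a hypothetical sketch of that step; the frame rate and grouping gap are assumptions, not MAUI63 parameters.

```python
def frames_to_sightings(detected_frames, fps=25, max_gap=12):
    """Collapse frame indices that contain detections into sightings.

    detected_frames: sorted frame indices where the model found a dolphin.
    max_gap: detection-free frames tolerated inside one sighting
             (dolphins dip under the surface between breaths).
    Returns (start_seconds, end_seconds) pairs.
    """
    sightings = []
    start = prev = None
    for f in detected_frames:
        if start is None:
            start = prev = f
        elif f - prev <= max_gap:
            prev = f
        else:
            sightings.append((start / fps, prev / fps))
            start = prev = f
    if start is not None:
        sightings.append((start / fps, prev / fps))
    return sightings

# Detections at ~1 s and again at ~10 s of a 25 fps video.
print(frames_to_sightings([25, 26, 27, 250, 251]))
# [(1.0, 1.08), (10.0, 10.04)]
```

Each returned pair is a clip a human reviewer can jump straight to, instead of scrubbing through hours of empty ocean.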
Our first model is trained to detect Māui and Hector's dolphins, but the possibilities are vast. In theory we can develop AI detectors for almost anything that spends time on the surface of the water, and we've already started with a few new species. We want to store all the footage we capture so that other models, created by MAUI63 or by other researchers, can be run back over it in the future to create even more data.