Cameras hidden in wildlife habitats allow researchers to capture photos and videos of animals in their natural surroundings, without the disruption of humans or captivity. These “camera traps” are critical to understanding our wild ecosystems, providing insights into behaviours, migration, populations, and more. Their photos and videos are used in research projects around the world. But the vast amount of visual data from these projects creates a new challenge: How can we sift through millions of frames to find and label animals?
Zamba Cloud was created from a machine learning model that outperformed all others in an automated wildlife identification competition. The winning model achieved 96% accuracy in identifying the presence of wildlife and 99% average accuracy in identifying species. DrivenData incorporated this model into its open-source software, Zamba, where it is available for researchers and conservationists to help parse thousands of hours of valuable footage.

The workflow is straightforward: video camera traps capture footage of wild animals in their natural habitat; users upload the videos to the server; neural networks running on Microsoft Azure GPUs detect animals in the videos; users can help the algorithm improve by correcting any mistakes; and researchers can then more easily find videos that warrant further study.
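The two headline numbers measure slightly different things: overall accuracy on whether an animal is present at all, and accuracy averaged across species so that rare species count as much as common ones. As a rough sketch (hypothetical illustration only, not Zamba's actual evaluation code), those metrics could be computed like this:

```python
def presence_accuracy(true_present, pred_present):
    """Fraction of clips where predicted presence/absence matches the label."""
    correct = sum(t == p for t, p in zip(true_present, pred_present))
    return correct / len(true_present)

def mean_per_species_accuracy(true_species, pred_species):
    """Accuracy computed per species, then averaged, so each species
    contributes equally regardless of how many clips it appears in."""
    per_species = []
    for species in set(true_species):
        idx = [i for i, t in enumerate(true_species) if t == species]
        correct = sum(pred_species[i] == species for i in idx)
        per_species.append(correct / len(idx))
    return sum(per_species) / len(per_species)

# Toy data: five clips for presence, four labelled clips for species.
true_present = [True, True, False, True, False]
pred_present = [True, False, False, True, False]
print(presence_accuracy(true_present, pred_present))  # 4 of 5 correct -> 0.8

true_species = ["chimp", "chimp", "hog", "monkey"]
pred_species = ["chimp", "monkey", "hog", "monkey"]
print(mean_per_species_accuracy(true_species, pred_species))
```

Averaging per species matters for camera-trap data, where a handful of common animals can dominate the footage and mask poor performance on rare ones.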
Zamba Cloud’s machine learning model started with labelled images from Chimp&See to identify chimps, monkeys, and hogs. The project has now grown to support more than 20 species, including elephants, hippos, birds, lions, and hyenas. With more collaborators, DrivenData aims to expand the supported species and geographies.
Source: Microsoft
Image from pixahive