
An experimental satellite, PhiSat-1, about the size of a cereal box, was ejected from a rocket's dispenser along with 45 other similarly small satellites. It now soars at over 27,500 km/h in sun-synchronous orbit about 530 km overhead.

PhiSat-1 carries a new hyperspectral-thermal camera and performs onboard AI processing on an Intel Movidius™ Myriad™ 2 Vision Processing Unit (VPU), the same chip found in many smart cameras and even a $99 selfie drone here on Earth. PhiSat-1 is one of a pair of satellites on a mission to monitor polar ice and soil moisture, while also testing inter-satellite communication systems to create a future network of federated satellites.

Myriad 2 is helping to solve the challenge of handling the large amount of data generated by high-fidelity cameras like the one on PhiSat-1. “The capability that sensors have to produce data increases by a factor of 100 every generation, while our capabilities to download data are increasing, but only by a factor of three, four, five per generation,” says Gianluca Furano, data systems and onboard computing lead at the European Space Agency, which led the collaborative effort behind PhiSat-1.

At the same time, about two-thirds of our planet’s surface is covered in clouds at any given time. That means a whole lot of useless images of clouds are typically captured, saved, sent over precious down-link bandwidth to Earth, saved again, reviewed by a scientist (or an algorithm) on a computer hours or days later — only to be deleted.

“And artificial intelligence at the edge came to rescue us, the cavalry in the Western movie,” says Furano. The idea the team rallied around was to use onboard processing to identify and discard cloudy images — thus saving about 30% of bandwidth.
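The filtering idea above can be sketched in a few lines. This is a hypothetical toy, not the mission's actual software: PhiSat-1 runs a trained neural network on the Myriad 2, whereas this stand-in simply flags an image tile as cloudy when too many of its pixels are bright (clouds are bright in visible bands). The function names and the 0.7 cutoff are illustrative assumptions.

```python
# Toy sketch of onboard cloud filtering (hypothetical, not mission code).
# A tile is a flat list of pixel brightness values on a 0-255 scale.

def cloud_fraction(tile):
    """Fraction of pixels above a brightness threshold."""
    bright = sum(1 for px in tile if px > 200)
    return bright / len(tile)

def keep_for_downlink(tile, max_cloud=0.7):
    """Keep tiles that are not mostly cloud; discard the rest onboard."""
    return cloud_fraction(tile) <= max_cloud

clear_tile = [50, 60, 70, 80] * 4       # mostly dark land/sea pixels
cloudy_tile = [230, 240, 250, 210] * 4  # mostly bright cloud pixels

print(keep_for_downlink(clear_tile))    # True: send to Earth
print(keep_for_downlink(cloudy_tile))   # False: discard, save bandwidth
```

Discarding onboard is what converts the two-thirds cloud cover into the roughly 30% bandwidth saving the team targeted: cloudy tiles never consume downlink at all.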

Irish startup Ubotica built and tested PhiSat-1's AI technology, working in close partnership with cosine, maker of the camera, as well as the University of Pisa and Sinergise, to develop the complete solution. The Myriad was designed from the ground up to deliver substantial compute capability within a very low power envelope, which suits space applications.

The Myriad 2, however, was not intended for orbit. Spacecraft computers typically use highly specialized "radiation-hardened" chips that can be "up to two decades behind state-of-the-art commercial technology," and AI was not a design consideration. Ubotica's Dunne and his team performed "radiation characterization," putting the Myriad chip through a series of tests to determine how to handle any resulting errors and wear-and-tear. The first test, 36 straight hours of radiation-beam blasting at CERN in late 2018, was deemed successful. This low-power, high-performance computer vision chip was ready to venture beyond Earth's atmosphere. But then came another challenge.

Typically, AI algorithms are built, or "trained," using large quantities of data to "learn" — in this case, what is a cloud and what is not. But because the camera was so new, "we didn't have any data," says Furano. "We had to train our application on synthetic data extracted from existing missions." All this system and software integration and testing, involving a half-dozen organizations across Europe, took four months to complete.
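The synthetic-data workaround can be illustrated with a deliberately simplified sketch. Everything here is assumed for illustration: instead of a neural network trained on imagery adapted from an earlier mission's archive, we generate labeled brightness samples and "train" by choosing the decision threshold with the fewest mistakes on that synthetic set.

```python
# Toy sketch: training a cloud/clear classifier with no flight data.
# Synthetic brightness samples stand in for imagery adapted from an
# existing mission's archive (all distributions are assumptions).
import random

random.seed(0)

clear = [random.gauss(80, 15) for _ in range(500)]    # label 0: clear
cloudy = [random.gauss(220, 15) for _ in range(500)]  # label 1: cloudy

def error_rate(threshold):
    """Misclassification rate if we call everything above threshold cloudy."""
    missed = sum(1 for b in cloudy if b <= threshold)  # clouds called clear
    false = sum(1 for b in clear if b > threshold)     # clear called cloudy
    return (missed + false) / (len(clear) + len(cloudy))

# "Training": pick the brightness cutoff that minimizes errors.
best = min(range(256), key=error_rate)
print(best, error_rate(best))
```

The real risk the team faced is exactly what this sketch glosses over: a model tuned on synthetic data may not match the new sensor's actual response, which is why the months of integration and testing mattered.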
