In mid-April, Facebook AI announced the launch (https://ai.facebook.com/blog/flashlight-fast-and-flexible-machine-learning-in-c-plus-plus/) of Flashlight, an open-source machine-learning (ML) library that lets developers and researchers run Artificial Intelligence (AI) and ML applications seamlessly through a C++ API.
Talking about Flashlight on its blog, Facebook AI said the library is constructed from only the most basic building blocks essential for research. When core components are altered, the entire library and its training pipelines rebuild in just a few seconds. "We wrote Flashlight from the ground up in modern C++ because the language is a powerful tool for doing research in high-performance computing environments," reveals the blog.
Because Flashlight is written in modern C++, which enables both parallelism and speed, it carries very low framework overhead. In addition, it provides simple bridges for integrating code from low-level domain-specific languages and libraries.
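As a rough illustration of such a bridge, the sketch below mixes a raw ArrayFire computation with Flashlight's Variable wrapper. The header path and the exact constructor and accessor names are assumptions drawn from the ArrayFire-backed API around the time of the announcement and may differ between releases.

```cpp
// A minimal sketch of bridging raw ArrayFire code with Flashlight.
// Header path and Variable constructor/accessor are assumptions based on
// the ArrayFire-backed Flashlight API and may differ between versions.
#include <arrayfire.h>
#include "flashlight/fl/flashlight.h" // assumed header path

int main() {
  // Plain ArrayFire: a low-level library call made outside Flashlight.
  af::array a = af::randu(128, 128);
  af::array b = af::matmul(a, a.T());

  // Wrap the result in a Flashlight Variable so it can take part in
  // autograd-tracked computation.
  auto v = fl::Variable(b, /* calcGrad = */ true);

  // Drop back down to the underlying ArrayFire array at any point.
  af::array raw = v.array();
  af_print(raw(af::seq(3), af::seq(3))); // print the top-left 3x3 block

  return 0;
}
```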
Flashlight is built on top of a shallow stack of basic abstractions around the ArrayFire tensor library, which keeps it modular and easy to use. The library supports dynamic tensor shapes and types, shedding much of the rigidity usually associated with C++. "Building on these base components, Flashlight includes custom, tunable memory managers and APIs for distributed and mixed-precision training. Combined with a fast, lightweight autograd — a deep learning staple that automatically computes derivatives of chained operations common in deep neural networks — Flashlight also features modular abstractions for working with data and training at scale," reveals the blog.
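To make the autograd idea concrete, the toy C++ sketch below records each operation's gradient rule during the forward pass and replays those rules in reverse topological order. The Value, add, mul, and backward names are invented for illustration and bear no relation to Flashlight's actual implementation.

```cpp
// Toy tape-based automatic differentiation: each forward op stores how to
// propagate gradients to its inputs; backward() replays the rules in
// reverse topological order. Illustrative only, not Flashlight's code.
#include <functional>
#include <iostream>
#include <memory>
#include <unordered_set>
#include <vector>

struct Value {
  double data;
  double grad = 0.0;
  std::function<void()> backwardFn = [] {};    // gradient rule, set by ops
  std::vector<std::shared_ptr<Value>> parents; // inputs of the op
  explicit Value(double d) : data(d) {}
};

using ValuePtr = std::shared_ptr<Value>;

// z = x * y; dz/dx = y, dz/dy = x
ValuePtr mul(const ValuePtr& x, const ValuePtr& y) {
  auto out = std::make_shared<Value>(x->data * y->data);
  out->parents = {x, y};
  Value* o = out.get(); // raw pointer avoids a shared_ptr cycle
  out->backwardFn = [x, y, o] {
    x->grad += y->data * o->grad;
    y->grad += x->data * o->grad;
  };
  return out;
}

// z = x + y; both partial derivatives are 1
ValuePtr add(const ValuePtr& x, const ValuePtr& y) {
  auto out = std::make_shared<Value>(x->data + y->data);
  out->parents = {x, y};
  Value* o = out.get();
  out->backwardFn = [x, y, o] {
    x->grad += o->grad;
    y->grad += o->grad;
  };
  return out;
}

// Depth-first post-order traversal producing a topological ordering.
void topoSort(const ValuePtr& node, std::unordered_set<Value*>& seen,
              std::vector<ValuePtr>& order) {
  if (!seen.insert(node.get()).second) return;
  for (const auto& p : node->parents) topoSort(p, seen, order);
  order.push_back(node);
}

// Seed the output gradient with 1 and apply each node's gradient rule in
// reverse topological order, so gradients are complete before they are used.
void backward(const ValuePtr& root) {
  std::vector<ValuePtr> order;
  std::unordered_set<Value*> seen;
  topoSort(root, seen, order);
  root->grad = 1.0;
  for (auto it = order.rbegin(); it != order.rend(); ++it) {
    (*it)->backwardFn();
  }
}

int main() {
  auto x = std::make_shared<Value>(2.0);
  auto y = std::make_shared<Value>(3.0);
  auto z = add(mul(x, y), x); // z = x * y + x
  backward(z);
  // Expected: dz/dx = y + 1 = 4, dz/dy = x = 2
  std::cout << "dz/dx = " << x->grad << ", dz/dy = " << y->grad << "\n";
  return 0;
}
```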
Flashlight can support research in several areas, including speech recognition, language modelling, and image classification, with a single codebase, removing the need to create domain-specific libraries. This opens up endless possibilities for Facebook AI.
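To give a flavour of that single-codebase workflow, here is a hedged sketch of a small image-classification model built with Flashlight's module API. The module names follow Flashlight's published examples, while the header path, the LogSoftmax usage, and the exact signatures are assumptions that may vary across versions.

```cpp
// Sketch of a tiny multilayer perceptron defined with Flashlight modules.
// Header path, LogSoftmax usage, and exact signatures are assumptions.
#include <arrayfire.h>
#include "flashlight/fl/flashlight.h" // assumed header path

int main() {
  // A small classifier for 28x28 grayscale images over 10 classes.
  fl::Sequential model;
  model.add(fl::Linear(28 * 28, 128));
  model.add(fl::ReLU());
  model.add(fl::Linear(128, 10));
  model.add(fl::LogSoftmax());

  // A fake batch of 64 flattened images, wrapped as a non-differentiable
  // input Variable (the model's parameters carry the gradients).
  auto input = fl::Variable(af::randu(28 * 28, 64), false);
  auto output = model.forward(input); // 10 x 64 log-probabilities

  // Inspect the raw ArrayFire result for the first sample in the batch.
  af_print(output.array()(af::span, 0));

  return 0;
}
```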
"We’re already using Flashlight at Facebook in our research focused on developing a fast speech recognition pipeline, a threaded and customizable train-time relabeling pipeline for iterative pseudo-labelling, and a differentiable beam search decoder. Our ongoing research is further accelerated by the ability to integrate external platform APIs for new hardware or compiler toolchains and achieve instant interoperability with the rest of Flashlight," wrote the creators on the blog.
The team further hopes that, just as Flashlight has helped them accelerate their own research, others in the AI community can use it to iterate faster on their ideas.