
Fans of gardening will be pleased to learn that artificial intelligence scientists at the U.S. Department of Energy's Pacific Northwest National Laboratory (PNNL) have discovered that the art of pruning is quite useful for building better-performing machine learning algorithms. By 'pruning' away small branches of code in their algorithms, they can reduce unnecessary complexity in decision trees and improve the predictive performance of machine learning models.

While experimenting with binarised neural networks (BNNs), PNNL researchers applied these pruning principles successfully. BNNs are similar to deep neural networks (DNNs), which consume large amounts of computation. Unlike DNNs, however, BNNs use a single bit to encode each neuron and each parameter, which requires considerably less energy and power to compute.
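To see why single-bit encoding is so cheap, here is a minimal Python/NumPy sketch of the binarisation idea; the function names and the XNOR-style dot product are illustrative assumptions rather than PNNL's implementation.

```python
import numpy as np

# A minimal, illustrative sketch (not PNNL's code): in a BNN, each weight
# and activation is encoded as a single bit (+1 or -1).
def binarise(x):
    # Map real values to +1/-1 with the sign function.
    return np.where(x >= 0, 1, -1).astype(np.int8)

# With single-bit values, a dot product reduces to counting agreements
# (an XNOR-and-popcount pattern), far cheaper than floating-point math.
def binary_dot(a_bits, w_bits):
    matches = np.sum(a_bits == w_bits)   # positions where the bits agree
    return 2 * matches - a_bits.size     # agreements minus disagreements

weights = np.random.randn(8)
activations = np.random.randn(8)
print(binary_dot(binarise(activations), binarise(weights)))
```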

They discovered that when the algorithm is pruned optimally, BNNs are more energy efficient while remaining as accurate as DNNs. This makes BNNs especially appealing in resource-constrained environments such as mobile phones, smart devices, and the Internet of Things (IoT).

"Pruning is currently a hot topic in machine learning," said PNNL computer scientist Ang Li. "We can add software and architecture coding to push the trimming towards a direction that will have more benefits for the performance of computing devices. These benefits include lower energy needs and lower computing costs."

Li, along with fellow PNNL researchers, recently published the results of this selective pruning in the Institute of Electrical and Electronics Engineers (IEEE) Transactions on Parallel and Distributed Systems (TPDS). The research demonstrates that pruning redundant parts of the BNN architecture's algorithms yields a custom-built out-of-order BNN (O3BNN-R). The team's experiments show that an O3BNN-R can deliver high-performance supercomputing qualities without losing accuracy, while being considerably smaller in size.
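The intuition behind this kind of inference-time pruning can be sketched in a few lines. Because a binary neuron simply compares a running match count against a threshold, computation can stop as soon as the outcome is decided. The Python sketch below illustrates that early-termination idea under simplified assumptions of our own; it is not the O3BNN-R implementation from the paper.

```python
# Illustrative sketch of pruning redundant work in a binary neuron
# (our own simplification, not the O3BNN-R design). The neuron fires
# when its match count reaches a threshold, so accumulation can stop
# early once the outcome is already decided: the rest is pruned away.
def binary_neuron_early_exit(a_bits, w_bits, threshold):
    n = len(a_bits)
    acc = 0
    for i, (a, w) in enumerate(zip(a_bits, w_bits)):
        acc += int(a == w)                 # running XNOR match count
        remaining = n - (i + 1)
        if acc >= threshold:               # threshold reached: output is 1
            return 1
        if acc + remaining < threshold:    # threshold unreachable: output is 0
            return 0
    return 1 if acc >= threshold else 0
```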

"Binarised neural networks have the potential of making the processing time of neural networks around microseconds," said Tong "Tony" Geng, a Boston University doctoral candidate who, as a PNNL intern, assisted Li on the O3BNN-R project.

"BNN research is headed in a promising direction to make neural networks really useful and be readily adopted in the real-world," said Geng, who will rejoin the PNNL staff in January as a postdoctoral research fellow. "Our finding is an important step to realize this potential."

The O3BNN-R can prune around 30% of operations without any accuracy loss. The team states that, with refined training, this can be improved by an additional 15%.
