Machine learning can now categorize the stances of Bharatanatyam. A team of researchers from Anna University in Chennai has identified and classified the 108 fundamental Bharatanatyam dance stances using modern computer vision methods, demonstrating that AI can also help model and preserve other traditional performing art forms.

This study explores the specialised field of human action recognition, with an emphasis on identifying Indian classical dance poses, particularly in Bharatanatyam. In dance, a "Karana" refers to a coordinated, rhythmic movement of the body, hands, and feet, as described in the Natyashastra.

A karana combines nritta hastas (hand gestures), sthaanas (body postures), and chaaris (leg movements). The Natyashastra lists 108 karanas, which are illustrated in the elaborate stone carvings of the Nataraja temple in Chidambaram and depict Lord Shiva's connection to these movements. Automating pose detection in Bharatanatyam is difficult because of the wide variety of hand and body postures, mudras (hand gestures), facial expressions, and head movements involved.

The study uses automation and image-processing methods to simplify this complex task. The proposed approach has four steps: image acquisition and preprocessing with skeletonization and data augmentation, feature extraction from the images, classification of dance postures with a deep convolutional neural network (InceptionResNetV2), and mesh creation from point clouds for 3D model visualization.
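To make the preprocessing stage concrete, the sketch below binarizes a dance image and thins the dancer's silhouette to a one-pixel-wide skeleton with OpenCV and scikit-image. The file names and thresholding choices are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of the skeletonization step using OpenCV and scikit-image.
# File names and thresholding choices are illustrative assumptions.
import cv2
import numpy as np
from skimage.morphology import skeletonize

# Load a dance image in grayscale (hypothetical file name).
gray = cv2.imread("pose_sample.jpg", cv2.IMREAD_GRAYSCALE)

# Binarize with Otsu's threshold so the dancer's silhouette is foreground.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Thin the silhouette to a one-pixel-wide skeleton.
skeleton = skeletonize(binary > 0)

# Save an 8-bit skeleton image for later feature extraction.
cv2.imwrite("pose_skeleton.png", skeleton.astype(np.uint8) * 255)
```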

Cutting-edge tools, such as deep learning networks and the MediaPipe library for body key-point detection, simplify identification. Data augmentation, a crucial phase, improves the model's accuracy by expanding small datasets. The convolutional neural network's efficient recognition of complex dance motions eases analysis and interpretation. This approach simplifies Bharatanatyam pose recognition and sets a benchmark for efficiency and accessibility for Indian classical dance practitioners and researchers.
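The key-point step can be illustrated with MediaPipe's Pose solution, which returns 33 body landmarks per image. The image path and configuration below are assumptions for illustration; the paper's exact setup may differ.

```python
# A minimal sketch of body key-point extraction with MediaPipe Pose.
# The image path and configuration are illustrative assumptions.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

# Run the pose estimator in static-image mode on a single dance photograph.
with mp_pose.Pose(static_image_mode=True, model_complexity=2) as pose:
    image = cv2.imread("pose_sample.jpg")
    results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

# Collect the 33 normalized (x, y, z, visibility) landmarks as a feature vector.
if results.pose_landmarks:
    keypoints = [
        (lm.x, lm.y, lm.z, lm.visibility)
        for lm in results.pose_landmarks.landmark
    ]
    print(f"Extracted {len(keypoints)} key points")
```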

Human posture detection, given its many and varied applications in daily life, remains a challenging problem in computer vision. Posture identification in the context of Indian classical dance, especially Bharatanatyam, is therefore important because of its potential influence on human well-being.

The authors present a deep convolutional neural network model based on InceptionResNetV2. The model operates on key points identified with MediaPipe and correctly classifies the 108 dance poses. Their strategy was developed after a thorough analysis of the related published literature.
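A transfer-learning classifier of this kind might look like the sketch below, which places a 108-way softmax head on a pretrained InceptionResNetV2 backbone. The input size, augmentation transforms, and head layers are assumptions; the authors' actual architecture also incorporates the MediaPipe key points.

```python
# A minimal transfer-learning sketch: InceptionResNetV2 backbone with a
# 108-class softmax head. Input size, augmentation, and head layers are
# illustrative assumptions, not the authors' exact architecture.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionResNetV2

NUM_CLASSES = 108  # the 108 karanas

# Lightweight augmentation to expand a small dataset (assumed transforms).
augment = models.Sequential([
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
])

# ImageNet-pretrained backbone, frozen for initial training.
backbone = InceptionResNetV2(include_top=False, weights="imagenet",
                             input_shape=(299, 299, 3), pooling="avg")
backbone.trainable = False

inputs = layers.Input(shape=(299, 299, 3))
x = augment(inputs)
x = tf.keras.applications.inception_resnet_v2.preprocess_input(x)
x = backbone(x, training=False)
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```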

Their design independently extracts depth and spatial features from the images and combines both sets of information to recognize poses. This strategy allows the architecture to discriminate between poses more effectively, as first proposed in their methodology and later confirmed by the comparisons and result analysis in their study.
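One plausible reading of this description is a two-branch network whose spatial and depth features are fused before classification. The sketch below uses assumed inputs and layer sizes and is not the authors' published architecture.

```python
# A hedged sketch of a two-branch fusion model: one branch for spatial (RGB)
# features and one for depth/key-point features, concatenated before the
# classifier. Inputs and layer sizes are assumptions for illustration.
from tensorflow.keras import layers, models

NUM_CLASSES = 108

# Spatial branch: convolutional features from the RGB image.
rgb_in = layers.Input(shape=(299, 299, 3), name="rgb_image")
x = layers.Conv2D(32, 3, activation="relu")(rgb_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

# Depth branch: a small MLP over depth-related key-point features
# (a flattened 33 x 4 MediaPipe landmark vector is assumed here).
depth_in = layers.Input(shape=(132,), name="keypoint_features")
y = layers.Dense(128, activation="relu")(depth_in)
y = layers.Dense(64, activation="relu")(y)

# Fuse both feature sets and classify into the 108 karanas.
fused = layers.concatenate([x, y])
out = layers.Dense(NUM_CLASSES, activation="softmax")(fused)

model = models.Model(inputs=[rgb_in, depth_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```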

Moreover, their feature extraction approach allows the proposed design to handle a wide variety of poses. Future work will focus on improving performance through hyperparameter tuning.
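As one possible direction for that tuning, a small KerasTuner search over learning rate, dense-layer width, and dropout might look like the following. The search space and feature dimensions are assumed for illustration and are not taken from the paper.

```python
# A hedged sketch of hyperparameter tuning with KerasTuner; the search space
# and feature size are assumed examples, not the authors' plan.
import keras_tuner as kt
from tensorflow.keras import layers, models, optimizers

NUM_CLASSES = 108

def build_model(hp):
    model = models.Sequential([
        layers.Input(shape=(1536,)),  # e.g. pooled backbone features (assumed)
        layers.Dense(hp.Int("units", 128, 512, step=128), activation="relu"),
        layers.Dropout(hp.Float("dropout", 0.2, 0.5, step=0.1)),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(
        optimizer=optimizers.Adam(hp.Choice("lr", [1e-3, 5e-4, 1e-4])),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

tuner = kt.RandomSearch(build_model, objective="val_accuracy",
                        max_trials=10, directory="tuning",
                        project_name="karana")
# tuner.search(x_train, y_train, validation_data=(x_val, y_val), epochs=20)
```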

Finally, their work contributes substantially to ongoing efforts to recognize Indian classical dance stances, especially in Bharatanatyam. By applying advanced methods in human pose detection and 3D model reconstruction, the research improves the precision and robustness of posture recognition in this complex dance form and opens opportunities for broader applications.

The study advances computer vision and 3D modelling methods, with implications for fields such as healthcare, sports analysis, and animation, and it deepens our understanding of, and ability to preserve, the rich cultural legacy of Bharatanatyam. The authors hope their work will guide researchers in this field toward near-perfect performance metrics. The evaluation demonstrates the effectiveness of augmentation, preprocessing, and skeletonization, and subsequent work will concentrate on validation and optimization to improve the pipeline's robustness and speed.

Sources of Article

Source: https://www.nature.com/articles/s41598-024-58680-w

