This is a curated collection of noteworthy scientific publications covering recent advances in artificial intelligence and data science, organized chronologically, with each entry linking to a more in-depth article.

Is Sharing an Egocentric Video Giving Away Your Biometric Signature?

The widespread availability of wearable egocentric cameras, coupled with the perception of privacy that comes from the wearer never appearing in the footage, has contributed to a rapid increase in the public sharing of such videos. Unlike handheld cameras, egocentric cameras are mounted on the wearer's head, so the optical flow in an egocentric video directly reflects the wearer's head motion.

The researchers demonstrate a new type of privacy breach: extracting the wearer's gait profile, a well-established biometric signature, from the optical flow of egocentric videos. They show that the wearer can be identified from these walking patterns with striking accuracy, something not possible with handheld videos. The study concludes that uploading a personal egocentric video should be treated as disclosing one's biometric identity, and the authors call for stricter oversight before such recordings are shared.

The source code is available here.
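As a rough illustration of the general idea (not the authors' method), the sketch below estimates a simple gait-related signal from an egocentric video: it computes dense Farneback optical flow with OpenCV, averages the vertical flow per frame as a proxy for head bobbing, and reads off the dominant frequency with an FFT. The library choices, parameters, and the feature itself are assumptions made purely for illustration.

```python
# Minimal sketch (not the paper's method): derive a gait-frequency signal
# from per-frame optical flow in an egocentric video using OpenCV and NumPy.
import cv2
import numpy as np

def gait_signal(video_path, max_frames=300):
    """Return the mean vertical flow per frame, a rough proxy for head bobbing."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError(f"Cannot read {video_path}")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    signal = []
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense Farneback optical flow between consecutive frames.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        signal.append(float(flow[..., 1].mean()))  # vertical flow component
        prev_gray = gray
    cap.release()
    return np.array(signal)

def dominant_gait_frequency(signal, fps=30.0):
    """Dominant frequency (Hz) of the head-motion signal via FFT."""
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[spectrum[1:].argmax() + 1]  # skip the DC bin
```

The periodicity of such a signal is one simple cue a gait-based identifier could exploit; the actual descriptor used in the paper is found in the linked source code.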

Sketch-Guided Object Localization in Natural Images

The researchers introduce a novel problem, sketch-guided object localization: locating all instances of an object in a natural image given a hand-drawn sketch as the query. This differs substantially from classic sketch-based image retrieval, where the gallery typically consists of photographs containing a single object.

To address this challenge, the researchers propose a cross-modal attention scheme that guides the region proposal network (RPN) to generate object proposals relevant to the sketch query. The technique is effective even with a single sketch query, localizes multiple object instances in an image, and generalizes well to object categories not seen during training.

The framework is further extended to a multi-query setting through the feature fusion and attention fusion techniques introduced in the paper. Localization performance is evaluated on two public object detection benchmarks, MS-COCO and PASCAL-VOC, using sketch queries from 'Quick, Draw!'. The proposed approach considerably outperforms the related baselines on both single-query and multi-query localization tasks.
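To give a feel for what sketch-conditioned guidance can look like, here is a minimal PyTorch sketch, an illustration under assumptions (the module names, dimensions, and sigmoid gating are invented here), not the authors' implementation. Image features are re-weighted by their similarity to a global sketch embedding before being handed to the RPN.

```python
# Illustrative sketch (PyTorch), not the authors' implementation: a simple
# cross-modal attention layer that re-weights backbone image features by
# their similarity to a sketch embedding before proposal generation.
import torch
import torch.nn as nn

class SketchGuidedAttention(nn.Module):
    def __init__(self, img_channels=256, sketch_dim=256, attn_dim=128):
        super().__init__()
        self.img_proj = nn.Conv2d(img_channels, attn_dim, kernel_size=1)
        self.sketch_proj = nn.Linear(sketch_dim, attn_dim)

    def forward(self, img_feat, sketch_emb):
        """
        img_feat:   (B, C, H, W) backbone feature map
        sketch_emb: (B, D) global embedding of the sketch query
        returns:    attended feature map with the same shape as img_feat
        """
        q = self.sketch_proj(sketch_emb)              # (B, attn_dim)
        k = self.img_proj(img_feat)                   # (B, attn_dim, H, W)
        # Similarity of every spatial location to the sketch query.
        scores = torch.einsum('bd,bdhw->bhw', q, k)   # (B, H, W)
        attn = torch.sigmoid(scores).unsqueeze(1)     # (B, 1, H, W)
        # Re-weight the image features; an RPN would then score proposals
        # on these sketch-conditioned features.
        return img_feat * attn
```

The design choice illustrated here is that the sketch acts as a query over spatial locations, so regions resembling the sketched object are amplified before any proposals are scored.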

Recurrent Image Annotation With Explicit Inter-Label Dependencies

Building on the success of the CNN-RNN framework in image captioning, numerous studies have investigated its use for multi-label image annotation. The goal is to exploit the CNN-RNN combination to model inter-label dependencies and thereby outperform a CNN used alone.

However, since the ground truth is an unordered set of labels, forcing a predetermined, fixed sequence onto them sits poorly with this objective. Most of these approaches instead rely on the RNN to implicitly pick a sequence for the ground-truth labels of each sample during training, which introduces an inherent bias.

This study addresses that limitation with a new strategy that lets the RNN learn inter-label dependencies without requiring the ground-truth labels to be supplied in any specific order. Through comprehensive empirical comparisons, the authors show that their approach surpasses several state-of-the-art techniques on two widely used datasets. It also offers a fresh perspective: an unordered set of labels can be treated as equivalent to the collection of all its permutations (sequences), which naturally fits the image annotation task.

The source code is available here.
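To make the order-free idea concrete, here is a minimal PyTorch sketch (not the paper's exact formulation): a ResNet-18 encoder initializes an LSTM decoder that emits one label per step, and at each step the training target is drawn from the remaining ground-truth labels rather than from a predetermined sequence. The backbone, the greedy target selection, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch (PyTorch), not the paper's exact method: a CNN-RNN annotator
# whose per-step training target is chosen from the *remaining* ground-truth
# labels, so no fixed label order is imposed on the decoder.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class CnnRnnAnnotator(nn.Module):
    def __init__(self, num_labels, hidden=512):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        backbone.fc = nn.Identity()                 # expose the 512-d image feature
        self.cnn = backbone
        self.img_proj = nn.Linear(512, hidden)
        self.embed = nn.Embedding(num_labels, hidden)
        self.rnn = nn.LSTMCell(hidden, hidden)
        self.out = nn.Linear(hidden, num_labels)

    def training_loss(self, images, label_sets, steps=5):
        """images: (B, 3, H, W); label_sets: list of B sets of label indices.
        steps should not exceed the typical number of labels per image."""
        h = self.img_proj(self.cnn(images))         # image feature seeds the state
        c = torch.zeros_like(h)
        inp = torch.zeros_like(h)                   # zero "start" input
        remaining = [set(s) for s in label_sets]
        loss = 0.0
        for _ in range(steps):
            h, c = self.rnn(inp, (h, c))
            logits = self.out(h)                    # (B, num_labels)
            targets = []
            for b, rem in enumerate(remaining):
                if not rem:
                    targets.append(-100)            # ignored by cross_entropy
                    continue
                # Order-free target: whichever remaining ground-truth label
                # the model currently scores highest.
                best = max(rem, key=lambda l: logits[b, l].item())
                targets.append(best)
                rem.discard(best)
            targets = torch.tensor(targets, device=logits.device)
            loss = loss + F.cross_entropy(logits, targets, ignore_index=-100)
            inp = self.embed(targets.clamp(min=0))  # feed back the emitted label
        return loss / steps
```

This greedy target selection is only one way to avoid fixing a label order; the paper's actual formulation is available in the linked source code.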

