This is a roundup of the year's most intriguing AI research publications, covering recent advances in artificial intelligence (AI) and data science. The list is organized chronologically, and each entry links to a more detailed article.

LeSICiN: A Heterogeneous Graph-based Approach for Automatic Legal Statute Identification from Indian Legal Documents

The goal of Legal Statute Identification (LSI) is to determine which statutes apply to the facts or evidence of a legal case. Existing methods rely solely on the text of the facts and the legal articles, ignoring the citation network between case documents and statutes, which is a rich source of additional information.

In this work, the researchers take the first step toward exploiting both the text and the legal citation network for the LSI task. They curate a large novel dataset that pairs facts from several major Indian courts of law with statutes from the Indian Penal Code (IPC). The statutes and training documents are modelled as a heterogeneous graph, and their proposed model, LeSICiN, learns rich textual and graphical features and is trained to match the two against each other.
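
To make the graph construction concrete, here is a minimal sketch in PyTorch of fact and statute nodes connected by citation edges, with one round of mean-aggregation message passing. The node names, dimensions, and aggregation scheme are illustrative assumptions, not LeSICiN's actual architecture.

```python
import torch

# Toy dimensions; real feature sizes and text encoders are the paper's choice.
num_facts, num_statutes, dim = 1000, 100, 256

# Text-derived node features (e.g., from a sentence encoder).
fact_x = torch.randn(num_facts, dim)        # fact (case document) nodes
statute_x = torch.randn(num_statutes, dim)  # statute (IPC section) nodes

# Citation edges: fact i cites statute j (edge list, shape [2, num_edges]).
cites = torch.tensor([[0, 0, 1],
                      [3, 7, 7]])

# One round of simple message passing: each statute aggregates the mean
# of the features of the facts that cite it.
agg = torch.zeros(num_statutes, dim)
agg.index_add_(0, cites[1], fact_x[cites[0]])
deg = torch.bincount(cites[1], minlength=num_statutes).clamp(min=1).float()
statute_h = statute_x + agg / deg.unsqueeze(1)  # graph-enriched statute features
```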

The model can then infer links between test documents (new nodes whose graphical features are unavailable to the model) and statutes (existing nodes). Extensive experiments on the dataset show that the model comfortably outperforms several state-of-the-art baselines by exploiting both graphical and textual features.
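
A hedged sketch of that inference step: since a test document contributes no graph features, its text embedding alone is matched against the statute embeddings. The bilinear scorer below is an assumed stand-in for the paper's matching module, not its actual design.

```python
import torch
import torch.nn as nn

class LinkScorer(nn.Module):
    """Hypothetical bilinear scorer for P(document cites statute)."""
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Parameter(torch.empty(dim, dim))
        nn.init.xavier_uniform_(self.W)

    def forward(self, doc_emb, statute_emb):
        # doc_emb: [B, dim] text embeddings of unseen test documents
        # statute_emb: [S, dim] graph-and-text embeddings of statutes
        return torch.sigmoid(doc_emb @ self.W @ statute_emb.T)  # [B, S]

scorer = LinkScorer(dim=256)
docs = torch.randn(4, 256)        # 4 test documents, text features only
statutes = torch.randn(100, 256)  # all statute nodes in the graph
probs = scorer(docs, statutes)    # per-statute applicability scores
```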

On Causally Disentangled Representations

Representation learners that disentangle factors of variation have already been shown to help address real-world concerns such as fairness and interpretability. Initially, however, there were only unsupervised models built on independence assumptions; more recently, weak supervision and correlated features have been explored, but a causal view of the generative process is still missing.

In contrast, the researchers work with a causal generative process in which the generative factors are either independent or potentially confounded by a set of observed or unobserved confounders. Through the notion of a disentangled causal process, they present an analysis of disentangled representations. They motivate the need for new metrics and datasets to study causal disentanglement, propose two evaluation metrics and a dataset, and show that the metrics appropriately capture the desiderata of the disentangled causal process. Finally, they use the metrics and dataset to conduct an empirical study of state-of-the-art disentangled representation learners from a causal perspective.
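
As a rough illustration of what a causal disentanglement metric has to measure, the generic probe below intervenes on one generative factor while holding the others fixed and records which latent dimensions respond. This is an illustrative construction, not either of the paper's two proposed metrics.

```python
import torch

def intervention_sensitivity(generate, encode, factor_idx, n=256):
    """Generic probe (not the paper's metrics): intervene on a single
    generative factor, hold the others fixed, and measure which latent
    dimensions of the learned representation respond."""
    base = torch.rand(n, 5)                    # 5 hypothetical generative factors
    intervened = base.clone()
    intervened[:, factor_idx] = torch.rand(n)  # do(factor := new value)
    z0, z1 = encode(generate(base)), encode(generate(intervened))
    # A disentangled encoder shifts in few dimensions; an entangled one in many.
    return (z0 - z1).abs().mean(dim=0)

# Toy usage with linear stand-ins for the generator and encoder:
G = torch.randn(5, 10)   # factors -> observations
E = torch.randn(10, 8)   # observations -> latents
sens = intervention_sensitivity(lambda f: f @ G, lambda x: x @ E, factor_idx=2)
print(sens)              # per-latent sensitivity to intervening on factor 2
```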

Self-supervised Enhancement of Latent Discovery in GANs

Several methods have been proposed to discover interpretable directions in the latent space of pre-trained GANs. Because unsupervised methods do not use pre-trained attribute classifiers, the latent semantics they discover tend to be less disentangled than those found by supervised methods.

The researchers have developed the Scale Ranking Estimator (SRE), which is trained with self-supervision. SRE enhances the disentanglement of directions obtained by existing unsupervised disentanglement techniques: the directions are updated so as to preserve the ordering of change magnitudes within each latent direction. Qualitative and quantitative evaluations of the discovered directions show that the proposed method significantly improves disentanglement across several datasets. The researchers also show that the learned SRE can be used for attribute-based image retrieval without further training.
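
One plausible reading of the self-supervised objective, as a minimal sketch: shift a latent code along a candidate direction by two different magnitudes and train the estimator to rank which shift caused the larger change in the generated image. The pairwise margin-ranking loss and all names here are illustrative assumptions, not the paper's exact formulation, and in the actual method the directions themselves are also refined.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def sre_ranking_step(G, sre, direction, z, opt):
    """One hypothetical training step (illustrative, not the paper's loss)."""
    eps_small = torch.rand(z.size(0), 1) * 3.0               # smaller shift scale
    eps_large = eps_small + torch.rand(z.size(0), 1) * 3.0   # strictly larger
    x0 = G(z)
    s_small = sre(x0, G(z + eps_small * direction))  # predicted change magnitude
    s_large = sre(x0, G(z + eps_large * direction))
    # The larger latent shift should be ranked as the larger image change.
    loss = F.margin_ranking_loss(s_large, s_small,
                                 torch.ones_like(s_small), margin=0.1)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy stand-ins: the real G is a pre-trained GAN generator and the real
# SRE is a learned network over images; here both are tiny linear maps.
latent_dim, img_dim = 16, 32
W_g = torch.randn(latent_dim, img_dim)
G = lambda z: z @ W_g

class PairSRE(nn.Module):
    def __init__(self, img_dim):
        super().__init__()
        self.net = nn.Linear(2 * img_dim, 1)
    def forward(self, a, b):
        return self.net(torch.cat([a, b], dim=1))

sre = PairSRE(img_dim)
opt = torch.optim.Adam(sre.parameters(), lr=1e-3)
direction = torch.randn(latent_dim)
direction = direction / direction.norm()   # unit-norm candidate direction
z = torch.randn(8, latent_dim)
print(sre_ranking_step(G, sre, direction, z, opt))
```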
