These are some of the most intriguing AI research papers published this year. The list combines recent advances in artificial intelligence (AI) and data science, is arranged in chronological order, and each entry includes a link to a more comprehensive article.

Interpretable Neural Subgraph Matching for Graph Retrieval

A graph retrieval system takes a query graph and a database of corpus graphs and retrieves the corpus graphs most relevant to the query. Graph retrieval based on subgraph matching has many applications, including molecular fingerprint detection, circuit design, software analysis, and question answering. In these settings, a corpus graph is relevant to a query graph if the query graph is a subgraph of the corpus graph (exactly or approximately). Existing neural graph retrieval models compare node or graph embeddings of query-corpus pairs to estimate relevance. However, such models may fail to enforce edge consistency between the query graph and the corpus graph. They also mostly use symmetric relevance scores, which are unsuitable for subgraph matching, because relevance in subgraph search is governed by the partial order induced by the subgraph-supergraph relationship. As a result, their retrieval performance on subgraph matching tasks is poor.
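As a quick illustration of why this relevance relation is asymmetric, here is a minimal sketch using networkx (a classical exact matcher, not the neural models discussed here): a corpus graph is relevant to a query exactly when the query appears as a subgraph of it, and swapping the two graphs changes the answer.

```python
import networkx as nx
from networkx.algorithms import isomorphism

def is_relevant(query: nx.Graph, corpus: nx.Graph) -> bool:
    """True iff `query` is isomorphic to some subgraph of `corpus`."""
    matcher = isomorphism.GraphMatcher(corpus, query)
    return matcher.subgraph_is_isomorphic()

triangle = nx.cycle_graph(3)       # query: a 3-node cycle
k4 = nx.complete_graph(4)          # corpus: contains many triangles

print(is_relevant(triangle, k4))   # True: the triangle is a subgraph of K4
print(is_relevant(k4, triangle))   # False: a partial order, not a symmetric relation
```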

In response, the researchers propose ISONET, a new interpretable neural edge alignment formulation that is better at learning the edge-consistent mapping needed for subgraph matching. ISONET introduces a scoring layer built around an asymmetric relevance score designed specifically for subgraph matching. Moreover, by design, ISONET can directly identify the subgraph of a corpus graph that is relevant to a given query graph. Experiments on diverse datasets show that ISONET outperforms existing graph retrieval models and formulations.
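The exact architecture is described in the paper; the numpy sketch below only conveys the flavour of an asymmetric, edge-level relevance score. The alignment matrix P, which ISONET learns with a neural network, is taken as given here, and the hinge-style penalty is an assumption for illustration.

```python
import numpy as np

def asymmetric_relevance(query_edges: np.ndarray,
                         corpus_edges: np.ndarray,
                         P: np.ndarray) -> float:
    """Penalty that is ~0 when every query edge embedding is 'covered' by
    its aligned corpus edge embedding, and positive otherwise.
    query_edges: (m, d), corpus_edges: (n, d), P: (m, n) soft alignment."""
    aligned = P @ corpus_edges                    # corpus edges matched to query edges
    gap = np.maximum(query_edges - aligned, 0.0)  # hinge: penalize only missing mass
    return float(gap.sum())                       # lower penalty = more relevant

q = np.array([[1.0, 0.0], [0.0, 1.0]])              # 2 query edge embeddings
c = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])  # 3 corpus edge embeddings
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])                     # align query edge i -> corpus edge i
print(asymmetric_relevance(q, c, P))                # 0.0: query covered by the corpus
```

Note that exchanging the roles of query and corpus gives a different penalty, which is exactly the partial-order behaviour that symmetric scores cannot capture.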

Latent Time Neural Ordinary Differential Equations

Neural ordinary differential equations (NODEs) have been proposed as a continuous-depth extension of residual networks (ResNets) and other popular deep learning models. They make deep learning models more parameter-efficient and automate parts of the model selection process. However, they lack the uncertainty modelling and robustness properties needed for real-world applications such as autonomous driving and healthcare.
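To make the ResNet-to-NODE correspondence concrete, here is a minimal sketch; the toy dynamics function f and the fixed-step Euler loop are illustrative stand-ins, since real NODEs use a learned network f_theta and an adaptive ODE solver.

```python
import numpy as np

def f(h: np.ndarray, t: float) -> np.ndarray:
    """Toy dynamics; in a NODE this is a learned neural network f_theta."""
    return np.tanh(h) * (1.0 - t)

def resnet_block(h, t, dt=1.0):
    # Discrete residual update: h_{k+1} = h_k + f(h_k, t)
    return h + dt * f(h, t)

def node_forward(h0, T=1.0, steps=100):
    # Continuous-depth limit: dh/dt = f(h, t), solved from t=0 to t=T.
    h, dt = h0, T / steps
    for k in range(steps):
        h = h + dt * f(h, k * dt)   # Euler step (a real solver adapts step size)
    return h

h0 = np.array([0.5, -1.0])
print(resnet_block(h0, t=0.0))      # one discrete residual step
print(node_forward(h0))             # its continuous-depth counterpart
```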

The researchers suggest a novel way to model uncertainty in NODEs by considering a distribution over the ODE solver's end-time T. The proposed method, called latent time NODE (LT-NODE), treats T as a latent variable and uses Bayesian learning to obtain a posterior distribution over T from the data. In particular, the researchers use variational inference to learn the model parameters and an approximate posterior. Predictions are made by aggregating the NODE representations obtained at different posterior samples of T, which can be done efficiently with a single forward pass. Since T implicitly defines the depth of a NODE, the posterior distribution over T also helps with model selection in NODEs.
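A hypothetical sketch of that single-forward-pass trick follows, under simplifying assumptions (a Gaussian posterior over T, Euler integration, and an invented classify readout head): because all sampled end-times lie on one trajectory, integrating once up to the largest sample yields the states at every smaller sample along the way.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(h, t):                        # stand-in for the learned dynamics f_theta
    return np.tanh(h)

def classify(h):                    # hypothetical readout head: softmax over states
    e = np.exp(h - h.max())
    return e / e.sum()

def lt_node_predict(h0, mu=1.0, sigma=0.2, n_samples=8, dt=0.01):
    # Sample end-times from the (assumed Gaussian) posterior and sort them,
    # so a single trajectory can serve every sample.
    Ts = np.sort(np.abs(rng.normal(mu, sigma, n_samples)))
    preds, h, t = [], h0, 0.0
    for T in Ts:
        while t < T:                # continue integrating up to this end-time
            h = h + dt * f(h, t)
            t += dt
        preds.append(classify(h))   # representation at end-time T
    return np.mean(np.stack(preds), axis=0)   # posterior-averaged prediction

print(lt_node_predict(np.array([0.3, -0.7])))
```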

The researchers also propose adaptive latent time NODE (ALT-NODE), which lets each data point have its own posterior distribution over end-times. ALT-NODE uses amortized variational inference, learning an approximate posterior with inference networks, as sketched below. Experiments with synthetic and real-world image classification data show that the proposed approaches model uncertainty and robustness well.
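A minimal sketch of the amortization idea (the linear inference network and its weights W and b are hypothetical; the paper trains such a network with variational inference): each input is mapped to the parameters of its own distribution over T.

```python
import numpy as np

rng = np.random.default_rng(1)

def inference_net(x: np.ndarray, W: np.ndarray, b: np.ndarray):
    """Map a data point x to the parameters of its own posterior q(T | x)."""
    out = W @ x + b                      # (2,): [mu, log_sigma]
    return out[0], float(np.exp(out[1]))

W = 0.1 * rng.normal(size=(2, 4))        # hypothetical learned weights
b = np.zeros(2)
x = rng.normal(size=4)                   # one data point (e.g. an image feature vector)
mu, sigma = inference_net(x, W, b)
print(f"q(T|x) = N({mu:.3f}, {sigma:.3f}^2)")   # each input gets its own end-time
```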

Learning Temporal Point Processes for Efficient Retrieval of Continuous Time Event Sequences

Recent advances in predictive modelling with marked temporal point processes (MTPPs) have made it possible to accurately model several real-world applications involving continuous-time event sequences (CTESs). However, the retrieval problem for such sequences has received little attention in the literature. The researchers therefore propose NEUROSEQRET, which learns, from a large collection of sequences, to retrieve and rank a relevant set of continuous-time event sequences for a given query sequence.

More specifically, NEUROSEQRET first applies a trainable unwarping function to the query sequence, making it comparable with corpus sequences even when a relevant query-corpus pair differs in attributes. Next, it feeds the unwarped query and the corpus sequences into MTPP-guided neural relevance models. The researchers develop two variants of the relevance model that trade off accuracy against speed. They also devise an optimization framework for learning binary sequence embeddings from the relevance scores; these embeddings are amenable to locality-sensitive hashing, which dramatically speeds up retrieving the top-K results for a given query sequence. Experiments with several datasets show that NEUROSEQRET is considerably more accurate than several baselines and that the hashing mechanism is effective.
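A back-of-the-envelope sketch of that hashing-based retrieval step follows. Sign-thresholding real-valued embeddings into bits and ranking by Hamming distance stand in for the paper's full locality-sensitive hashing pipeline, and the random vectors play the role of learned sequence embeddings.

```python
import numpy as np

def to_binary(emb: np.ndarray) -> np.ndarray:
    return (emb > 0).astype(np.uint8)          # real embedding -> binary code

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))       # bit disagreements between codes

def top_k(query_code: np.ndarray, corpus_codes, k: int = 5):
    dists = [hamming(query_code, c) for c in corpus_codes]
    return np.argsort(dists)[:k]               # indices of the k nearest sequences

rng = np.random.default_rng(0)
corpus = [to_binary(rng.normal(size=64)) for _ in range(1000)]  # stand-in embeddings
query = to_binary(rng.normal(size=64))
print(top_k(query, corpus, k=3))
```

Comparing fixed-length binary codes with cheap bit operations is what makes top-K retrieval fast here, compared with scoring the query against every corpus sequence with the full neural relevance model.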
