This roundup collects some of the year's most intriguing AI research publications, spanning advances in artificial intelligence (AI) and data science. The entries are organized chronologically, and each summarizes a longer article.

Inductive Quantum Embedding

Recently, Quantum Embedding (QE), inspired by quantum logic, was proposed by Garg et al. (2019) for embedding a Knowledge-Base (KB). QE is designed to preserve the input KB's logical structure, expressed as a hierarchy of unary and binary predicates. Because the embedding vectors retain this structure, they can support Boolean-style inductive reasoning. However, the original QE theory applies only in a transductive (not inductive) setting.

The researchers begin by reformulating the QE problem so that induction becomes possible. Along the way, they highlight unique analytic and geometric properties of the solution, then exploit those properties to speed up training. The inductive QE method is applied to the common NLP task of fine-grained entity type classification, where it attains state-of-the-art performance, with training roughly nine times faster than the original QE approach on this task.
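The logic-preserving idea behind QE can be illustrated with quantum logic's core construction: predicates are subspaces of a vector space, and one predicate entails another exactly when its subspace lies inside the other's. The sketch below is illustrative only (the toy "person"/"artist" predicates and their subspaces are invented for the example, not taken from the paper):

```python
import numpy as np

def projector(basis):
    """Orthogonal projector onto the column span of `basis`."""
    q, _ = np.linalg.qr(basis)
    return q @ q.T

def entails(p_a, p_b, tol=1e-8):
    """In quantum logic, predicate A entails B iff A's subspace lies
    inside B's, which holds exactly when P_B @ P_A == P_A."""
    return np.allclose(p_b @ p_a, p_a, atol=tol)

# Toy 3-d example: "person" spans the x-y plane, "artist" spans the x-axis.
person = projector(np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]))
artist = projector(np.array([[1.0], [0.0], [0.0]]))

print(entails(artist, person))  # artist ⊑ person: True
print(entails(person, artist))  # person ⊑ artist: False
```

The actual QE training procedure learns such structured embeddings from KB data; this snippet only shows the subspace-inclusion check that makes Boolean-style reasoning over the embeddings possible.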

GCOMB: Learning Budget-constrained Combinatorial Algorithms over Billion-sized Graphs

There has been a rise in the use of machine learning to find heuristics for combinatorial problems on graphs. Existing methods have concentrated primarily on generating high-quality solutions but have yet to address scalability to billion-sized graphs effectively. Furthermore, the effect of a budget constraint has yet to be investigated despite its necessity in many real-world settings. The authors of this paper propose a framework they term GCOMB to address these issues. GCOMB trains a Graph Convolutional Network (GCN) with a novel probabilistic greedy approach to predict a node's quality. To better handle the combinatorial character of the problem, GCOMB uses a Q-learning framework that is optimized with importance sampling.
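The probabilistic greedy idea can be sketched in isolation: instead of always selecting the node with the highest predicted quality, each pick is sampled with probability proportional to its score, up to the budget. This is a minimal illustration, not GCOMB's implementation; the `scores` values stand in for a GCN's per-node quality predictions and are invented for the example:

```python
import random

def probabilistic_greedy(scores, budget, temperature=1.0, rng=None):
    """Select up to `budget` nodes, sampling each pick with probability
    proportional to its (positive) predicted quality score instead of
    deterministically taking the argmax."""
    rng = rng or random.Random(0)
    remaining = dict(scores)
    chosen = []
    for _ in range(min(budget, len(remaining))):
        nodes = list(remaining)
        weights = [remaining[n] ** (1.0 / temperature) for n in nodes]
        pick = rng.choices(nodes, weights=weights, k=1)[0]
        chosen.append(pick)
        del remaining[pick]
    return chosen

# Hypothetical per-node quality scores; budget of 2 seed nodes.
scores = {"a": 5.0, "b": 3.0, "c": 0.5, "d": 0.1}
print(probabilistic_greedy(scores, budget=2))
```

Randomizing the greedy step lets training see many good solutions rather than a single deterministic one, which is the intuition behind using it to generate supervision for the GCN.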

To evaluate GCOMB's performance, the researchers run comprehensive tests on real-world graphs. Their findings show that GCOMB is one hundred times faster and more accurate than the best existing algorithms for learning combinatorial heuristics. Case studies on the practical combinatorial problem of Influence Maximization (IM) show that GCOMB is 150 times faster than the dedicated IM algorithm IMM while delivering comparable quality.

Follow the Perturbed Leader: Optimism and Fast Parallel Algorithms for Smooth Minimax Games

The researchers examine the problem of online learning and its application to solving minimax games. Follow the Perturbed Leader (FTPL) is a well-studied algorithm for online learning, with an optimal O(√T) worst-case regret guarantee for both convex and nonconvex losses.

In this study, the authors demonstrate that when the sequence of loss functions is predictable, a simple modification of FTPL that incorporates optimism achieves improved regret guarantees, while the optimal worst-case regret guarantee is retained for unpredictable sequences. The algorithm's combination of stochasticity and optimism poses a significant obstacle in attaining these tighter regret bounds, necessitating analysis techniques distinct from those typically employed for FTPL.

The researchers' analysis is predicated on the dual perspective of perturbation as regularization. While their algorithm has multiple applications, the focus here is on the minimax game application. The only requirement for their algorithm to solve smooth convex-concave games is access to a linear optimization oracle. For Lipschitz and smooth nonconvex-nonconcave games, their algorithm requires access to an optimization oracle that computes the best response to a perturbed objective. In both cases, their algorithm solves the game to O(T^(-1/2)) accuracy using T calls to the optimization oracle. The algorithm is highly parallelizable, requiring only O(√T) iterations, with each iteration making O(√T) parallel calls to the optimization oracle.
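The optimistic FTPL step can be sketched for the simplest setting, online linear losses over a probability simplex, where the linear optimization oracle just returns a vertex. This is a hedged illustration under simplifying assumptions (the loss sequence, the choice of exponential perturbations, and the "predict the previous loss" hint are all invented for the example, not prescribed by the paper):

```python
import numpy as np

def linear_oracle(c):
    """Linear optimization oracle over the probability simplex:
    argmin_x <c, x> is the vertex at the smallest coordinate of c."""
    x = np.zeros_like(c)
    x[np.argmin(c)] = 1.0
    return x

def optimistic_ftpl(losses, eta=1.0, seed=0):
    """Sketch of optimistic Follow-the-Perturbed-Leader for online
    linear losses: play the oracle's answer on (cumulative losses +
    optimistic hint - random perturbation)."""
    rng = np.random.default_rng(seed)
    d = len(losses[0])
    cumulative = np.zeros(d)
    plays, hint = [], np.zeros(d)
    for loss in losses:
        perturbation = rng.exponential(eta, size=d)  # random perturbation
        plays.append(linear_oracle(cumulative + hint - perturbation))
        cumulative += loss
        hint = loss  # optimistic hint: assume the next loss repeats this one
    return plays

# Toy sequence of linear losses over a 2-point decision set.
losses = [np.array([1.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
for x in optimistic_ftpl(losses):
    print(x)
```

When the hint is accurate (a predictable loss sequence), the optimistic term steers the oracle toward the right vertex before the loss arrives, which is the mechanism behind the improved regret bounds.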
