The recently concluded 35th edition of NeurIPS, held from 6-14 December 2021, saw a total of 9,122 submissions, of which nearly 26 per cent, i.e., 2,344 papers, were accepted. The annual conference is one of the world's most prestigious gatherings of industry and academia, fostering the exchange of research advances in artificial intelligence and machine learning.
In this article, we have listed the winners of the outstanding paper awards. Let's understand the papers in the simplest form and the reasons that made them stand out.
Researchers from the Allen Institute for Artificial Intelligence, Stanford University and the University of Washington introduced MAUVE, a comparison measure for open-ended text generation that directly compares the learnt distribution of a text generation model to the distribution of human-written text using divergence frontiers.
Text generation, or natural language generation (NLG), is a subfield of natural language processing (NLP). Take, for example, the prompt “One day I will......” given to a text generation model; it may return a completed sentence such as “One day I will become the Prime Minister of India.” Going a step further, the measure presented by the team compares the text generated by such models, known as neural text, with text written by humans, i.e., natural text. By mapping the divergence between the two distributions, MAUVE shows where generated text drifts away from human writing, giving open-ended generation models a path towards course correction and quality improvement.
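To make the idea concrete, here is a rough sketch of how a divergence-frontier score could be computed once model-generated and human-written texts have been embedded and quantised into cluster histograms. This is not the authors' official MAUVE implementation; the histograms, mixture weights and scaling constant are illustrative assumptions.

```python
# Hypothetical divergence-frontier sketch, not the official MAUVE code.
import numpy as np
from scipy.stats import entropy  # entropy(p, q) computes KL(p || q)

def divergence_frontier_area(p, q, num_points=25, scale=5.0):
    """Approximate the area under a divergence frontier between two histograms.

    p: histogram of quantised model-generated text embeddings.
    q: histogram of quantised human-written text embeddings.
    """
    xs, ys = [], []
    for lam in np.linspace(1e-3, 1 - 1e-3, num_points):
        r = lam * p + (1 - lam) * q                 # mixture distribution
        xs.append(np.exp(-scale * entropy(q, r)))   # softened KL(Q || R)
        ys.append(np.exp(-scale * entropy(p, r)))   # softened KL(P || R)
    order = np.argsort(xs)
    # Area under the frontier; values closer to 1 mean the model's text
    # distribution is harder to tell apart from the human one.
    return np.trapz(np.array(ys)[order], np.array(xs)[order])

# Toy usage with made-up cluster histograms.
p = np.array([0.30, 0.40, 0.20, 0.10])   # model text
q = np.array([0.25, 0.35, 0.25, 0.15])   # human text
print(divergence_frontier_area(p, q))
```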
Researchers from the University of Montreal and the Google Research team came out with a paper that suggests practical approaches to improve the rigour of deep reinforcement learning algorithm comparisons. In particular, they argue that the evaluation of new algorithms should report stratified bootstrap confidence intervals, interquartile means, and performance profiles across tasks and runs.
The findings call for a change in the way the performance of deep RL algorithms is evaluated. What is deep RL? Reinforcement learning is learning dynamically, altering behaviour based on continuous feedback to optimise a reward, whereas deep learning is learning from a training set and then applying that learning to new data. When deep learning is used inside a reinforcement learning system, it is referred to as deep reinforcement learning, or deep RL.
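As a rough illustration of two of the recommended tools, here is a minimal sketch, not the authors' library, of the interquartile mean and a stratified bootstrap confidence interval computed over runs and tasks; the scores below are made up.

```python
# Minimal sketch, assuming normalized scores of shape (num_runs, num_tasks).
import numpy as np
from scipy.stats import trim_mean

def iqm(scores):
    """Interquartile mean: mean after discarding the lowest and highest 25% of scores."""
    return trim_mean(np.asarray(scores), proportiontocut=0.25, axis=None)

def stratified_bootstrap_ci(scores, num_resamples=2000, alpha=0.05, seed=0):
    """Bootstrap CI for the IQM, resampling runs with replacement within each task."""
    rng = np.random.default_rng(seed)
    num_runs, num_tasks = scores.shape
    stats = []
    for _ in range(num_resamples):
        resampled = np.stack(
            [scores[rng.integers(0, num_runs, size=num_runs), t] for t in range(num_tasks)],
            axis=1,
        )
        stats.append(iqm(resampled))
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Toy example: 5 runs of one algorithm on 3 tasks (made-up normalized scores).
scores = np.array([[0.9, 1.2, 0.7],
                   [1.1, 1.0, 0.6],
                   [0.8, 1.3, 0.9],
                   [1.0, 1.1, 0.8],
                   [0.7, 0.9, 1.0]])
print(iqm(scores), stratified_bootstrap_ci(scores))
```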
A group of researchers from the Weizmann Institute of Science, UCLA and the Facebook AI Research team introduced Moser Flow (MF), a generative model in the family of continuous normalising flows (CNFs) that represents the target density using the divergence operator applied to a vector-valued neural network. The main benefits of MF stem from the simplicity and locality of the divergence operator: MF circumvents the need to solve an ODE during training and is therefore applicable to a broad class of manifolds.
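To illustrate the core idea, the sketch below works in flat 2-D Euclidean space (the paper handles general manifolds) and models the density as a prior minus the divergence of a learned vector field. The network size, loss weights and single training step are our own illustrative choices, not the authors' settings.

```python
# Rough Moser Flow-style sketch: density = prior density - div(v_theta).
import torch
import torch.nn as nn

dim = 2
v = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

def prior_log_density(x):
    # Log-density of a standard Gaussian prior on R^2.
    return -0.5 * (x ** 2).sum(-1) - 0.5 * dim * torch.log(torch.tensor(2 * torch.pi))

def divergence(f, x):
    # Exact divergence of the vector field f at x via autograd: sum_i d f_i / d x_i.
    x = x.requires_grad_(True)
    out = f(x)
    div = torch.zeros(x.shape[0])
    for i in range(dim):
        grad_i = torch.autograd.grad(out[:, i].sum(), x, create_graph=True)[0][:, i]
        div = div + grad_i
    return div

def model_density(x):
    # Target density represented as the prior minus the divergence of the network.
    return prior_log_density(x).exp() - divergence(v, x)

# One illustrative training step: maximise the modelled density on the data while
# penalising negative values (the modelled density must stay positive).
opt = torch.optim.Adam(v.parameters(), lr=1e-3)
data = torch.randn(128, dim) * 0.5 + 1.0            # toy target samples
mu = model_density(data)
loss = -mu.clamp_min(1e-6).log().mean() + 10.0 * (-mu).clamp_min(0.0).mean()
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```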
Researchers from Microsoft Research and Stanford University came out with this paper. As per the paper, interpolating data with a parameterized model class is classically possible as long as the number of parameters exceeds the number of equations to be solved. Yet deep learning models are trained with far more parameters than this classical theory would suggest, a perplexing phenomenon. The team offers a theoretical explanation: they show that for a wide range of data distributions and model classes, overparameterization is necessary if one wants to interpolate the data smoothly.
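Informally, and in our own paraphrase with constants and technical conditions omitted, the trade-off the paper establishes can be written as follows.

```latex
% Informal paraphrase; noise assumptions and regularity conditions are omitted.
% Fitting $n$ noisy samples in dimension $d$ with a model of $p$ parameters
% forces the Lipschitz constant of the fitted function $f$ to satisfy
\[
  \operatorname{Lip}(f) \;\gtrsim\; \sqrt{\frac{nd}{p}},
\]
% so a smooth fit with $\operatorname{Lip}(f) = O(1)$ requires $p \gtrsim nd$
% parameters, far more than the $n$ needed merely to interpolate.
```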
The paper was presented by Mathieu Even, Raphaël Berthier, Francis Bach, Nicolas Flammarion, Pierre Gaillard, Hadrien Hendrikx, Laurent Massoulié and Adrien Taylor. It describes a “continuised” version of Nesterov's accelerated gradient method in which two separate vector variables evolve jointly in continuous time, much like previous approaches that use differential equations to understand acceleration, but with gradient updates that occur at random times determined by a Poisson point process.
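Here is a rough, hedged sketch of the timing idea on a toy quadratic: gradient updates are triggered at the arrival times of a rate-one Poisson process, while two coupled iterates are mixed Nesterov-style at each update. For simplicity the mixing weights and step sizes are indexed by the event count rather than derived from the elapsed continuous time, so they are illustrative stand-ins, not the coefficients from the paper.

```python
# Illustrative sketch only: accelerated-gradient-style updates triggered at the
# arrival times of a Poisson process.
import numpy as np

rng = np.random.default_rng(0)

A = np.diag([1.0, 10.0])          # toy ill-conditioned quadratic f(x) = 0.5 x^T A x
L = 10.0                          # smoothness constant of f

def grad(x):
    return A @ x

x = z = np.array([5.0, 5.0])
t = 0.0                           # physical time of the process

for k in range(200):
    t += rng.exponential(1.0)     # next gradient update arrives after an Exp(1) wait
    tau = 2.0 / (k + 2)           # Nesterov-style mixing weight (simplified)
    y = (1 - tau) * x + tau * z   # mix the two coupled iterates
    g = grad(y)
    x = y - g / L                 # short gradient step on the primal iterate
    z = z - (k + 2) / (2 * L) * g # longer, growing step on the second iterate

print(t, 0.5 * x @ A @ x)         # elapsed time and final objective value
```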
A group of researchers from the computer science departments of Princeton University and Brown University, together with DeepMind, presented this paper. As widely known, reward is the driving force for reinforcement-learning agents. The paper is dedicated to understanding the expressivity of reward as a way to capture the tasks we would want an agent to perform. The study is framed around three new abstract notions of “task” that might be desirable: a set of acceptable policies, a partial ordering over policies, and a partial ordering over trajectories.
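As a toy, hedged illustration of the first notion, the sketch below brute-forces a tiny two-state MDP to check whether a candidate Markov reward function makes exactly the “acceptable” policies optimal. The dynamics, reward and acceptable set are entirely made up for illustration.

```python
# Toy check: does a candidate Markov reward make exactly the acceptable policies optimal?
import itertools
import numpy as np

n_states, n_actions, gamma = 2, 2, 0.9
P = np.array([[0, 1],      # P[s, a] = next state under deterministic toy dynamics
              [1, 0]])

def policy_value(policy, R):
    """Exact value of a deterministic policy (tuple of actions) under reward R[s, a]."""
    P_pi = np.zeros((n_states, n_states))
    r_pi = np.zeros(n_states)
    for s in range(n_states):
        a = policy[s]
        P_pi[s, P[s, a]] = 1.0
        r_pi[s] = R[s, a]
    # Solve (I - gamma * P_pi) v = r_pi for the state values.
    return np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)

def optimal_policies(R, tol=1e-8):
    policies = list(itertools.product(range(n_actions), repeat=n_states))
    values = {pi: policy_value(pi, R) for pi in policies}
    best = np.max(np.stack(list(values.values())), axis=0)   # state-wise optimal value
    return {pi for pi, v in values.items() if np.allclose(v, best, atol=tol)}

# The "task": only the policy that always picks action 0 is acceptable.
acceptable = {(0, 0)}
R = np.array([[1.0, 0.0],          # reward 1 for action 0 in either state, else 0
              [1.0, 0.0]])
print(optimal_policies(R) == acceptable)   # True: this reward expresses the task
```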
The selection procedure was devised with the purpose of identifying an equivalence class of outstanding papers that represent a cross-section of the NeurIPS community's excellent research.
Source: NeurIPS
Image source: Unsplash