Our take on IJCAI 2022

The International Joint Conference on Artificial Intelligence (IJCAI) is a premier event for researchers working in all areas of AI.

Alongside specialized tracks on deep learning, reinforcement learning, game theory, logic and others, IJCAI also hosts a special track on AI for Good that explores how AI can contribute to the United Nations’ 17 Sustainable Development Goals. This year’s edition was the first held in person since the start of the Covid-19 pandemic, and our team was happy to use the opportunity to meet up with the research community after such a long time. We documented the contributions that caught our attention in a series of paper pills, which we summarize in this post.

Generative models

Towards Generative Modelling of Multivariate Extremes and Tail Dependence

Normalizing flows struggle to accurately represent the tails of multivariate distributions with heavy-tailed marginals and asymmetric tail dependence. COMET Flows combine normalizing flows with extreme value theory and copula theory to address these issues.
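The paper’s architecture is not reproduced here, but the core idea can be illustrated with a minimal PyTorch sketch (all names and parameters are ours): replacing the light-tailed Gaussian base distribution of a flow with a heavy-tailed Student-t base already changes how much density the model can place on extreme events.

```python
import torch
from torch.distributions import (AffineTransform, Independent, Normal,
                                 StudentT, TransformedDistribution)

dim = 2

# Light-tailed base: a standard Gaussian. Its density decays like exp(-x^2/2),
# so the flow must distort space heavily to capture extreme events.
gaussian_base = Independent(Normal(torch.zeros(dim), torch.ones(dim)), 1)

# Heavy-tailed base: a Student-t with few degrees of freedom. Its polynomial
# tails make extreme joint events far more likely under the model.
student_base = Independent(StudentT(df=torch.full((dim,), 3.0)), 1)

# A single, purely illustrative affine "flow" layer shared by both models.
flow = [AffineTransform(loc=torch.zeros(dim), scale=torch.ones(dim) * 2.0)]

light_tailed_model = TransformedDistribution(gaussian_base, flow)
heavy_tailed_model = TransformedDistribution(student_base, flow)

# The Student-t-based model assigns far more log-density to an extreme point.
extreme_point = torch.full((dim,), 10.0)
print("Gaussian base: ", light_tailed_model.log_prob(extreme_point).item())
print("Student-t base:", heavy_tailed_model.log_prob(extreme_point).item())
```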

Hybrid inference

Neuro-Symbolic Verification of Deep Neural Networks

The neuro-symbolic approach to neural network verification uses reference networks to represent high-level concepts. These then serve as proxies for verifying properties expressed in a high-level specification language, for example that a self-driving car must stop at a red traffic light.
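A verifier like the one in the paper proves such a property for all inputs in a region; the following toy sketch (with untrained placeholder networks and a purely illustrative property check) only conveys how a reference network acts as a proxy for the high-level concept “red traffic light”.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: a reference network that detects red traffic lights
# and the controller network under verification. Both are untrained toy models.
red_light_detector = nn.Sequential(nn.Linear(16, 8), nn.ReLU(),
                                   nn.Linear(8, 1), nn.Sigmoid())
controller = nn.Sequential(nn.Linear(16, 8), nn.ReLU(),
                           nn.Linear(8, 2))  # logits: [drive, brake]
BRAKE = 1

def property_holds(x: torch.Tensor) -> bool:
    """phi: if the reference network sees a red light, the controller must brake."""
    red = red_light_detector(x).item() > 0.5
    action = controller(x).argmax().item()
    return (not red) or action == BRAKE

# A real verifier would prove phi for *all* inputs in a region (e.g. via SMT or
# branch-and-bound); here we merely search for counterexamples by sampling.
counterexamples = [x for x in torch.randn(1000, 16) if not property_holds(x)]
print(f"found {len(counterexamples)} counterexamples out of 1000 samples")
```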

Probabilistic Inference With Algebra and Logic Constraints

Hybrid probabilistic inference combines probabilistic reasoning with algebraic and logical constraints, which appear regularly in numerous application areas. Its algorithmic core is the weighted model integration (WMI) problem, which is computationally hard in general. Current implementations rely on SMT solving and various integration engines. The paper reviews and compares recent advances with an emphasis on computational tractability.
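As a rough illustration of what a WMI problem looks like (a toy instance of ours, not one from the survey), consider two variables that are uniform on the unit square and a formula mixing logic and arithmetic; the probability of the formula is the integral of the weight function over the region it describes.

```python
from scipy.integrate import dblquad

# Toy WMI instance: x, y carry the weight (density) w(x, y) = 1 on the unit
# square, and the formula is  phi := (x > y) AND (x + y < 1).
def weight(x, y):
    return 1.0

# WMI(phi, w) is the integral of w over the region where phi holds. Here the
# constraints y < x and y < 1 - x translate directly into integration bounds;
# general solvers obtain such regions from an SMT solver instead.
wmi, _err = dblquad(lambda y, x: weight(x, y),
                    0.0, 1.0,                    # x ranges over [0, 1]
                    lambda x: 0.0,               # y from 0 ...
                    lambda x: min(x, 1.0 - x))   # ... up to min(x, 1 - x)

print(f"WMI(phi, w) = {wmi:.3f}")  # a triangle of area 1/4, so 0.25
```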

Algorithmic game theory

Robust Solutions for Multi-Defender Stackelberg Security Games

Multi-defender Stackelberg security games are used for strategic reasoning in scenarios where a group of defenders tries to protect a set of common goods. Despite their usefulness, they have the unpleasant property that their solutions (in terms of Nash equilibria or cores) are highly sensitive to small perturbations of the attackers’ valuations of the targets. Introducing a small amount of uncertainty about the other players’ valuations overcomes this problem.
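The sensitivity is easy to see in a deliberately simplified toy example (ours, not the paper’s model): with a fixed defender coverage, a tiny change in a single attacker valuation flips the attacked target and therefore the defenders’ outcome.

```python
import numpy as np

# Two targets, a fixed defender coverage, and an attacker who strikes the
# target with the highest expected value  v_t * (1 - coverage_t).
coverage = np.array([0.60, 0.40])

def attacked_target(valuations):
    return int(np.argmax(valuations * (1 - coverage)))

base_valuations = np.array([1.00, 0.66])
perturbed = base_valuations + np.array([0.0, 0.01])  # tiny bump on target 1

print(attacked_target(base_valuations))  # 0: target 0 is attacked
print(attacked_target(perturbed))        # 1: the best response flips,
                                         # and the defenders' payoffs jump
```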

Network Creation With Homophilic Agents

Segregation describes the tendency of a group with members of mixed types to separate into clusters of homogeneous types. To understand this phenomenon better, the authors consider heterogeneous network creation games with homophilic agents (who prefer same-type connections) or prejudiced agents (who are averse to multi-type connections). An important observation is that the initial conditions seem to play a decisive role when edge creation is costly. Since segregation is highly undesirable in many real-world scenarios, this hints at a possible escape route through initial investments in integration.
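A heavily simplified sketch of the cost/benefit trade-off that drives segregation in such games (a toy model of ours, not the paper’s exact utility function): when the edge cost exceeds the benefit of a cross-type connection, greedy agents only ever buy same-type edges.

```python
import random

# n agents of two types; an agent values a same-type neighbour at 1.0, a
# different-type neighbour at CROSS_BENEFIT, and pays EDGE_COST per bought edge.
random.seed(0)
N, EDGE_COST, CROSS_BENEFIT = 20, 0.6, 0.4
types = [i % 2 for i in range(N)]
edges = set()

def wants_edge(u, v):
    benefit = 1.0 if types[u] == types[v] else CROSS_BENEFIT
    return benefit > EDGE_COST

# Greedy edge creation: each agent buys every individually profitable edge.
for u in range(N):
    for v in range(N):
        if u != v and wants_edge(u, v):
            edges.add(frozenset((u, v)))

same_type = sum(1 for e in edges if len({types[x] for x in e}) == 1)
print(f"{same_type}/{len(edges)} edges connect agents of the same type")
```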

Complexity of Approved Winners

Approval-based multi-winner voting with a metric on the candidates is considered. The goal is to compute a collective choice that minimizes the unhappiness of the voters: the winner is a committee of $k$ candidates that minimizes its distance to the votes, where the distance is aggregated within the committee (maximum, minimum or sum over its members) and across the votes (maximum, i.e. egalitarian, or sum, i.e. utilitarian). The authors study the problem under the possible selection rules in terms of algorithmic and parameterized complexity as well as through the lens of approximability. Overall, they paint a rather complete picture of the landscape.
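The selection rules can be made concrete with a small brute-force sketch; the way a single committee member’s distance to a ballot is measured here (distance to the closest approved candidate) is our own simplifying assumption, not necessarily the paper’s definition.

```python
from itertools import combinations

# Candidates live on a line; the metric is the absolute distance between them.
candidates = ["a", "b", "c", "d"]
positions = {"a": 0.0, "b": 1.0, "c": 2.0, "d": 5.0}
votes = [{"a", "b"}, {"b", "c"}, {"d"}]   # approval ballots
k = 2

def dist(x, y):
    return abs(positions[x] - positions[y])

def committee_to_vote(committee, vote, inner=max):
    # inner aggregation among committee members: max, min or sum
    return inner(min(dist(w, a) for a in vote) for w in committee)

def score(committee, inner=max, outer=sum):
    # outer aggregation among votes: sum (utilitarian) or max (egalitarian)
    return outer(committee_to_vote(committee, v, inner) for v in votes)

best = min(combinations(candidates, k),
           key=lambda c: score(c, inner=max, outer=sum))
print("utilitarian winner with max-aggregation inside the committee:", best)
```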

Deep learning

Tessellation-Filtering ReLU Networks

ReLU networks define piecewise linear functions on tessellations of the input space. The authors analyse the shape complexity of the decision surface of a ReLU network. Their main contribution is a framework to decompose a network along a specific layer into a tessellation part and a shape-complexity part, the latter represented by a tessellation-filtering neural network, a special subclass of ReLU networks that is also identified in the paper.
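The tessellation view itself is easy to make concrete (this sketch is illustrative and does not implement the paper’s decomposition): each pattern of active ReLUs in the first layer corresponds to one cell of the input space on which the network is affine.

```python
import numpy as np

# A tiny random 2-layer ReLU network on the plane.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((5, 2)), rng.standard_normal(5)
W2, b2 = rng.standard_normal((1, 5)), rng.standard_normal(1)

def activation_pattern(x):
    pre = W1 @ x + b1
    return tuple(pre > 0)   # which ReLUs fire determines the cell

def affine_map_on_cell(pattern):
    # Restricted to one cell, the network x -> W2 relu(W1 x + b1) + b2 is affine.
    D = np.diag(np.array(pattern, dtype=float))
    return W2 @ D @ W1, W2 @ D @ b1 + b2   # slope and intercept on that cell

xs = np.linspace(-3, 3, 200)
patterns = {activation_pattern(np.array([x, y])) for x in xs for y in xs}
print(f"linear regions hit by the grid: {len(patterns)}")
print("affine map on one cell:", affine_map_on_cell(next(iter(patterns))))
```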

Optimization-Based Modelling

In decision-focused machine learning one wishes to make decisions that minimise a given cost function. Classically, this has often been done in two steps: first learn a predictor, then optimise the decision based on its predictions. In recent years an end-to-end alternative has emerged that uses differentiable optimization layers, making it possible to train the predictor directly on the downstream decision task and thus reduce the overall cost.
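A minimal end-to-end sketch (ours; it replaces a true differentiable optimization layer with a softmax relaxation of the argmin) shows the difference to the two-step approach: the predictor’s parameters receive gradients from the downstream decision cost itself.

```python
import torch

# Features predict item costs; the decision is "pick the cheapest item".
torch.manual_seed(0)
n_items, n_features = 5, 3
X = torch.randn(200, n_items, n_features)
true_w = torch.tensor([1.0, -2.0, 0.5])
true_costs = X @ true_w                     # ground-truth cost of each item

predictor = torch.nn.Linear(n_features, 1)
opt = torch.optim.Adam(predictor.parameters(), lr=0.05)

for _ in range(200):
    predicted_costs = predictor(X).squeeze(-1)
    # Soft argmin over items: a differentiable surrogate for the discrete choice.
    decision = torch.softmax(-predicted_costs / 0.1, dim=-1)
    downstream_cost = (decision * true_costs).sum(dim=-1).mean()
    opt.zero_grad()
    downstream_cost.backward()   # gradients flow through the decision
    opt.step()

print("average true cost of the learned decisions:", downstream_cost.item())
```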

Natural language processing

DictBERT: Dictionary Description Knowledge Enhanced Language Model Pre-training via Contrastive Learning

External knowledge can be injected into pretrained language models (here specifically BERT) by training an additional language model on various dictionary-related tasks. This strategy may result in better representations and should be especially useful for sentences with abundant jargon.
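One ingredient of this line of work, contrastively aligning terms with their dictionary definitions, can be sketched as follows; the model name, the example pairs and the pooling are placeholders, and this is not DictBERT’s actual training recipe.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

terms = ["perceptron", "overfitting"]
definitions = ["a simple linear binary classifier",
               "fitting noise in the training data instead of the signal"]

def embed(texts):
    batch = tokenizer(texts, padding=True, return_tensors="pt")
    return encoder(**batch).last_hidden_state[:, 0]   # [CLS] embeddings

term_emb, def_emb = embed(terms), embed(definitions)

# InfoNCE-style loss with in-batch negatives: each term should be closest
# to its own definition.
logits = term_emb @ def_emb.T / 0.05
loss = torch.nn.functional.cross_entropy(logits, torch.arange(len(terms)))
loss.backward()   # during pre-training these gradients would update the encoder
print("contrastive loss:", loss.item())
```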