At the TransferLab we are constantly surveying new developments in artificial intelligence. We write short summaries of publications, libraries, talks, etc. that we find interesting for our community and publish them as so-called paper pills. This blog collects what we found in September.
Algorithmic game theory
Robust Solutions for Multi-Defender Stackelberg Security Games
Multi-defender Stackelberg security games are used for strategic reasoning in scenarios where a group of defenders tries to protect a set of common goods. Despite their usefulness, they have the unpleasant property that their solutions (Nash equilibria or cores) are highly sensitive to small perturbations of the attackers' valuations of the targets. Introducing a small amount of uncertainty about the other players' valuations overcomes this problem.
Network Creation With Homophilic Agents
Segregation describes the tendency of a population of mixed types to separate into clusters of homogeneous type. To understand this phenomenon better, the authors study heterogeneous network creation games with homophilic agents (who prefer same-type connections) or prejudiced agents (who avoid cross-type connections). An important observation is that initial conditions play a decisive role when edge creation is expensive. Since segregation is highly undesirable in many real-world scenarios, this hints at a possible escape route via early investments in integration.
Complexity of Approved Winners
Approval-based multi-winner voting with a metric on the candidates is considered. The goal is to compute a collective choice that minimizes unhappiness among the voters: the winner is a committee of $k$ candidates minimizing the distance between committee and votes, where the distance is aggregated over the committee (maximum, minimum, or sum) and over the votes (maximum, i.e. egalitarian, or sum, i.e. utilitarian). The authors study the resulting selection rules in terms of algorithmic and parameterized complexity as well as approximability, and present a rather complete picture of the landscape.
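The objective described above can be made concrete with a tiny brute-force sketch. The instance, the candidate names, and the exact distance notion (distance of a committee member to a vote as the distance to the member's nearest approved candidate) are illustrative assumptions, not the paper's formal definitions:

```python
from itertools import combinations

# Toy instance: candidates are points on a line; the metric is |a - b|.
# Names and the distance convention below are illustrative only.
candidates = {"a": 0.0, "b": 1.0, "c": 2.0, "d": 5.0}
approvals = [{"a", "b"}, {"b", "c"}, {"d"}]  # one approval set per voter

def dist(c1, c2):
    return abs(candidates[c1] - candidates[c2])

def committee_cost(committee, vote, inner=max):
    # Distance of each committee member to a vote = distance to the member's
    # nearest approved candidate; aggregate over the committee with `inner`.
    return inner(min(dist(m, a) for a in vote) for m in committee)

def best_committee(k, inner=max, voter_agg=sum):
    # Brute force over all k-subsets: fine for toy sizes, but exponential in
    # general, which is why the paper studies (parameterized) complexity
    # and approximation algorithms.
    return min(
        combinations(candidates, k),
        key=lambda com: voter_agg(committee_cost(com, v, inner) for v in approvals),
    )

print(best_committee(2, inner=max, voter_agg=sum))  # utilitarian objective
print(best_committee(2, inner=max, voter_agg=max))  # egalitarian objective
```

Swapping `inner` and `voter_agg` between `max`, `min`, and `sum` reproduces the family of selection rules the paper classifies.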
A great entry-level review of coordinate-based neural networks (a.k.a. neural fields), a technique that has recently seen widespread success in tasks ranging from 3D reconstruction to generative modelling and robotics.
Normalizing flows struggle to accurately represent the tails of multivariate distributions with heavy-tailed marginal distributions and asymmetric tail dependence. Comet Flows combine normalizing flows with extreme value and copula theory to address these issues.
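The copula decomposition behind this approach can be sketched as follows; the modelling choices here (empirical probability integral transform, Student-t data) are simplifications for illustration, not the paper's implementation:

```python
import numpy as np

# 1. Model each heavy-tailed marginal separately, 2. map samples to uniforms
# via the probability integral transform, 3. model the dependence (the copula)
# on the uniform scale, where a flow no longer has to represent the tails.
rng = np.random.default_rng(1)

# Heavy-tailed data: Student-t marginals sharing a common factor z,
# which induces tail dependence between the two coordinates.
z = rng.standard_t(df=2, size=(5000, 1))
x = np.hstack([z + 0.5 * rng.standard_t(df=2, size=(5000, 1)) for _ in range(2)])

def to_uniform(col):
    # Empirical probability integral transform; a fitted extreme-value tail
    # model would replace this in practice, so that unseen tail values
    # extrapolate sensibly instead of saturating at the sample extremes.
    ranks = np.argsort(np.argsort(col))
    return (ranks + 0.5) / len(col)

u = np.column_stack([to_uniform(x[:, j]) for j in range(x.shape[1])])

# On the copula scale all values are bounded in (0, 1): the tail-heaviness is
# absorbed by the marginal models, and only the dependence structure remains.
print(u.min(), u.max())
```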
Hybrid inference
Neuro-Symbolic Verification of Deep Neural Networks
The neuro-symbolic approach to the verification of neural networks uses reference networks to represent high-level concepts. These serve as proxies for verifying properties expressed in a high-level specification language — for example, that a self-driving car must stop at a red traffic light.
Probabilistic Inference With Algebra and Logic Constraints
Hybrid probabilistic inference performs probabilistic inference under algebraic and logical constraints, which appear regularly in numerous application areas. Its algorithmic core is the weighted model integration problem, which is computationally hard in general; current implementations rely on SMT solving and various integration engines. The paper reviews and compares recent advances, with an emphasis on computational tractability.
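To make the weighted model integration problem concrete, here is a hand-rolled Monte Carlo sketch: integrate a weight function over the region where a formula mixing logic and arithmetic holds. Real WMI solvers use SMT solvers and exact integration engines instead of sampling; the formula and weight below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def wmi_mc(formula, weight, n=200_000):
    # Sample uniformly from the bounding box [-1, 1]^2 and average
    # weight * indicator(formula); rescale by the box volume (4).
    pts = rng.uniform(-1, 1, size=(n, 2))
    mask = formula(pts[:, 0], pts[:, 1])
    return 4.0 * np.mean(np.where(mask, weight(pts[:, 0], pts[:, 1]), 0.0))

# Formula: (x > 0) AND (x + y < 1); weight: w(x, y) = x.
# Analytically the answer is the integral of x * (2 - x) over x in [0, 1],
# i.e. 2/3; the estimate should land close to that.
est = wmi_mc(lambda x, y: (x > 0) & (x + y < 1), lambda x, y: x)
print(est)
```

The exponential cost of handling such constraint regions exactly is precisely what the tractability-focused advances in the survey target.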
Theory of deep learning
Tessellation-Filtering ReLU Networks
ReLU networks define piecewise linear functions on tessellations of the input space. The authors analyse the shape complexity of the decision surface of a ReLU neural network. Their main contribution is a framework for decomposing a network along a given layer into a tessellation part and a shape-complexity part, the latter represented by a tessellation-filtering neural network — a special subclass of ReLU networks that is also characterized in the paper.
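The tessellation view can be illustrated directly: a ReLU network is affine on each region of input space with a fixed activation pattern. This sketch (a tiny random two-layer net, not the paper's construction) enumerates the patterns realised on a grid; each distinct pattern is one cell of the tessellation:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(6, 2)), rng.normal(size=6)  # 6 hidden units, 2-D input
W2, b2 = rng.normal(size=(1, 6)), rng.normal(size=1)

def activation_pattern(x):
    # Which hidden units are active: this tuple is constant on each cell.
    return tuple((W1 @ x + b1 > 0).astype(int))

xs = np.linspace(-3, 3, 120)
patterns = {activation_pattern(np.array([a, b])) for a in xs for b in xs}
print(f"distinct linear regions found on the grid: {len(patterns)}")

# Within one cell, x -> W2 @ relu(W1 @ x + b1) + b2 is exactly affine:
# the ReLU just zeroes a fixed subset of rows of W1 there.
```

With 6 hyperplanes in 2-D the tessellation has at most 22 cells, of which the grid search finds those intersecting the sampled square.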
Two neural networks $A$ and $B$ with the same architecture, trained by SGD, are compared after finding a suitable permutation $\pi$ of hidden units that minimizes a certain distance between their weights. Experimentally, weights linearly interpolated between $A$ and $\pi(B)$ yield essentially the same loss as the endpoints. In the context of federated learning, merging $A$ and $\pi(B)$ performs much better than merging $A$ and $B$.
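A toy version of the alignment step can be sketched as follows. Here model $B$ is constructed as an exact unit-shuffle of $A$, and the permutation is found by brute force over a tiny hidden layer — the real method scales via assignment solvers over trained networks, so everything below is illustrative:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
h = 5  # hidden width, small enough for brute force over 5! permutations

# Model A: one hidden layer of h units on 3-D inputs.
W1_a = rng.normal(size=(h, 3))
W2_a = rng.normal(size=(1, h))

# Model B: functionally identical to A, but with shuffled hidden units
# (rows of the first layer and columns of the second permute together).
perm = rng.permutation(h)
W1_b, W2_b = W1_a[perm], W2_a[:, perm]

def align(Wa, Wb):
    # Weight matching: the permutation of B's units minimizing the
    # Frobenius distance between the first-layer weight matrices.
    return min(permutations(range(h)), key=lambda p: np.linalg.norm(Wa - Wb[list(p)]))

p = list(align(W1_a, W1_b))

# After aligning, interpolation mixes corresponding units; without it,
# naive averaging of A and B would blend unrelated units.
W1_mid = 0.5 * (W1_a + W1_b[p])
print(np.linalg.norm(W1_mid - W1_a))  # 0.0 in this noise-free toy
```

In the noise-free construction the shuffle is recovered exactly, so the midpoint coincides with $A$; with independently trained networks the aligned interpolation path merely stays in a low-loss region.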