Markov Logic Networks: Construction, Learning and Inference

Markov logic networks provide a simple way of combining two seemingly quite different descriptions of data: first-order logic and probability. While logical systems are a compact way to represent knowledge, they are too rigid for many real-world applications. When a dataset is described through logical relations between its features, a single data point that violates them is enough to render the description incorrect. At the same time, it is not obvious how logical relations could be incorporated into a more flexible probabilistic framework. The idea behind Markov logic networks is to use a first-order knowledge base as a template for constructing a probabilistic model in the form of a Markov network, thereby benefiting from both the flexibility of probabilistic descriptions and the expressive power of logical rules. We will begin the talk with a review of Markov networks and first-order logic. We will then discuss the motivation behind Markov logic networks and their construction, give an overview of different techniques for learning and inference, and conclude by highlighting possible applications of the theory.
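
As a brief sketch of the construction discussed in the talk, stated in the standard form: each first-order formula F_i in the knowledge base is assigned a weight w_i, and together with a finite set of constants these define a ground Markov network whose joint distribution over possible worlds x is

\[
P(X = x) = \frac{1}{Z} \exp\!\left( \sum_i w_i \, n_i(x) \right),
\]

where n_i(x) denotes the number of true groundings of formula F_i in x and Z is the normalization constant. A world that violates a formula is thus not ruled out, only made less probable in proportion to the formula's weight.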