The TransferLab continuously monitors advancements in artificial intelligence. We create concise summaries of articles, libraries, talks, and other events that we think our community would find interesting and publish them as so-called paper pills. What we discovered in October is summarized in this blog post.
Calibration
Local calibration: metrics and recalibration
A calibration method that takes sample similarity into account, automatically providing group calibration even when groups are unknown.
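As a rough illustration of the idea (not the paper's exact method), the sketch below recalibrates each test prediction with a similarity-weighted average of the residuals observed on nearby calibration samples; the RBF kernel and all names are our own assumptions.

```python
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    """Similarity between feature vectors; closer samples get higher weight."""
    d = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    return np.exp(-(d ** 2) / (2 * bandwidth ** 2))

def locally_recalibrate(test_feats, test_probs, cal_feats, cal_probs, cal_labels,
                        bandwidth=1.0):
    """Shift each test probability by the similarity-weighted average residual
    (label minus predicted probability) seen on the calibration set."""
    w = rbf_kernel(test_feats, cal_feats, bandwidth)   # (n_test, n_cal) weights
    w /= w.sum(axis=1, keepdims=True)
    residuals = cal_labels - cal_probs                 # per-sample miscalibration
    return np.clip(test_probs + w @ residuals, 0.0, 1.0)
```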
Large language models
DictBERT: Dictionary Description Knowledge Enhanced Language Model Pre-training via Contrastive Learning
External knowledge can be injected into pre-trained language models (here specifically BERT) by training an additional language model on various dictionary-related tasks. This strategy may result in better representations and should be especially useful for sentences that use a lot of jargon.
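The contrastive part can be pictured as an InfoNCE-style objective that pulls the representation of a dictionary entry towards the representation of its own definition and away from the other definitions in the batch. The snippet below is a minimal sketch of such an objective with random stand-in embeddings, not the paper's actual pre-training code.

```python
import torch
import torch.nn.functional as F

def dictionary_contrastive_loss(term_emb, defn_emb, temperature=0.05):
    """InfoNCE-style loss: each term embedding should be closest to the embedding
    of its own dictionary definition and far from other definitions in the batch."""
    term_emb = F.normalize(term_emb, dim=-1)
    defn_emb = F.normalize(defn_emb, dim=-1)
    logits = term_emb @ defn_emb.T / temperature   # (batch, batch) similarity matrix
    targets = torch.arange(logits.size(0))         # matching pairs lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: in DictBERT-style pre-training these would come from a BERT encoder
# run over entry words and their definitions; here they are random stand-ins.
terms = torch.randn(8, 768)
definitions = torch.randn(8, 768)
loss = dictionary_contrastive_loss(terms, definitions)
```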
Decision-focused learning
Optimization Based Modelling
Optimization layers allow incorporating downstream optimization problems into neural network training. The authors consider the performance gap between the first-learn-then-optimize approach and the end-to-end approach in the perfect-model setting and find that this gap can be arbitrarily large for non-linear cost functions. They also identify several classes of practically relevant optimization problems where the end-to-end approach yields optimal solutions in the perfect-model setting.
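A common way to build such an optimization layer in practice is a differentiable convex solver such as cvxpylayers. The sketch below embeds a toy resource-allocation problem into a PyTorch model so that the task loss is backpropagated through the solver into the cost predictor; the specific problem and loss are illustrative assumptions, not taken from the paper.

```python
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

# Downstream problem: allocate a unit budget over n items to minimize predicted cost.
n = 5
c = cp.Parameter(n)                       # predicted cost vector (network output)
x = cp.Variable(n)
problem = cp.Problem(cp.Minimize(c @ x + 0.1 * cp.sum_squares(x)),
                     [x >= 0, cp.sum(x) == 1])
opt_layer = CvxpyLayer(problem, parameters=[c], variables=[x])

# End-to-end: gradients of the decision cost flow through the solver into the predictor.
predictor = torch.nn.Linear(10, n)
features = torch.randn(4, 10)
true_costs = torch.randn(4, n)

pred_costs = predictor(features)
decisions, = opt_layer(pred_costs)                        # solve per batch element
task_loss = (decisions * true_costs).sum(dim=1).mean()    # realized decision cost
task_loss.backward()
```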

Image editing
Prompt-to-Prompt Image Editing with Cross Attention Control
Generating images from text is complicated by the fact that small changes in the prompt can give very different results. This new study shows how to make minimal changes to generated images by manipulating the neural network's cross-attention maps.
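To make the attention-map trick concrete, here is a toy, self-contained cross-attention function in which the maps computed for the source prompt are cached and injected when attending to the edited prompt, so the spatial layout is preserved while the content changes. It assumes both prompts have the same number of tokens and only illustrates the mechanism, not the paper's implementation.

```python
import torch

def cross_attention(queries, keys, values, injected_attn=None):
    """Cross-attention between image-patch queries and prompt-token keys/values.
    If `injected_attn` is given (maps cached from the source prompt), it replaces
    the freshly computed maps, fixing the spatial layout of the generated image
    while the values from the edited prompt change its content."""
    attn = torch.softmax(queries @ keys.transpose(-2, -1) / keys.shape[-1] ** 0.5, dim=-1)
    if injected_attn is not None:
        attn = injected_attn
    return attn @ values, attn

# Toy shapes: 64 image patches attending to 8 prompt tokens of dimension 32.
q = torch.randn(64, 32)
k_src, v_src = torch.randn(8, 32), torch.randn(8, 32)
k_edit, v_edit = torch.randn(8, 32), torch.randn(8, 32)

_, src_maps = cross_attention(q, k_src, v_src)                           # source prompt pass
edited, _ = cross_attention(q, k_edit, v_edit, injected_attn=src_maps)   # reuse source maps
```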