AI systems are increasingly deployed not as isolated tools, but as interconnected agents that interact with each other, with humans, and with their environment. We study these neural ecosystems at multiple scales: the internal structure of individual models, the dynamics of multi-agent interaction, and the collective behavior of agent populations.
Our research is organized around five themes:
Interpretability & Transparency. What do models know, and how do they represent it? We develop methods to understand the internal structure of neural networks, from localizing culture-specific neurons to mechanistic analysis of model behavior.
Control & Containment. How do we keep AI systems within safe boundaries? We investigate principled steering techniques, guardrails, and intervention methods that remain effective as systems grow in autonomy.
Agentic & Multi-Agent Safety. What happens when AI systems interact? We study coordination, cooperation, and failure modes in multi-agent systems, bridging the gap between multi-agent reinforcement learning and LLM-based agents.
Uncertainty & Risk Quantification. How confident should we be in a system’s outputs? We work on quantifying and communicating uncertainty across scales.
Sustainability & Resource Impact. What is the ecological footprint of neural ecosystems? We investigate the real-world resource costs of training and deploying AI systems at scale.
Interpretability & Transparency
Publications:
Isolating Culture Neurons in Multilingual Large Language Models
Danial Namazifard, Lukas Galke
IJCNLP-AACL, 2025
Abstract
Language and culture are deeply intertwined, yet it remains unclear how and where multilingual large language models encode culture. Here, we build on an established methodology for identifying language-specific neurons and extend it to localize and isolate culture-specific neurons, carefully disentangling their overlap and interaction with language-specific neurons. To facilitate our experiments, we introduce MUREL, a curated dataset of 85.2 million tokens spanning six different cultures. Our localization and intervention experiments show that LLMs encode different cultures in distinct neuron populations, predominantly in upper layers, and that these culture neurons can be modulated independently of language-specific neurons or those specific to other cultures. These findings suggest that cultural knowledge and propensities in multilingual language models can be selectively isolated and edited, promoting fairness, inclusivity, and alignment.
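The general recipe behind this kind of analysis can be sketched compactly. The snippet below is a minimal, illustrative sketch rather than the paper's exact procedure: it scores MLP neurons by how selectively they activate on text from one culture versus the others, and ablates a chosen set of neurons through a forward hook. The module path (a GPT-2-style HuggingFace layout) and the use of activation probability plus entropy as the selectivity criterion are assumptions made for illustration.

```python
import torch

def culture_neuron_scores(activations: dict[str, torch.Tensor], eps: float = 1e-8):
    """Score how selectively each neuron fires for one culture versus the others.

    activations maps a culture label to a tensor of shape (num_tokens, num_neurons)
    with post-activation MLP values collected while running culture-specific text
    through the model (hypothetical data layout, for illustration).
    """
    cultures = sorted(activations)
    # Probability that each neuron is active (> 0) on text from each culture.
    p_active = torch.stack([(activations[c] > 0).float().mean(dim=0) for c in cultures])
    # Entropy of that distribution across cultures: low entropy = culture-specific.
    dist = p_active / (p_active.sum(dim=0, keepdim=True) + eps)
    entropy = -(dist * (dist + eps).log()).sum(dim=0)
    return cultures, p_active, entropy  # pick neurons with high p_active and low entropy

def ablate_neurons(model, layer_idx: int, neuron_ids: torch.Tensor):
    """Zero out selected MLP neurons in one layer (the intervention step)."""
    def hook(_module, _inputs, output):
        output[..., neuron_ids] = 0.0
        return output
    # Assumes a GPT-2-style module path where the hooked activation emits one value
    # per MLP neuron; adjust the attribute path for your architecture.
    return model.transformer.h[layer_idx].mlp.act.register_forward_hook(hook)
```

Modulating instead of zeroing (for example, scaling the selected activations) follows the same hook pattern.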
Tokenization and Morphology in Multilingual Language Models: A Comparative Analysis of mT5 and ByT5
Thao Anh Dang, Limor Raviv, Lukas Galke
ICNLSP, 2025
Abstract
Morphology is a crucial factor for multilingual language modeling as it poses direct challenges for tokenization. Here, we seek to understand how tokenization influences the morphological knowledge encoded in multilingual language models. Specifically, we capture the impact of tokenization by contrasting a minimal pair of multilingual language models: mT5 and ByT5. The two models share the same architecture, training objective, and training data and only differ in their tokenization strategies: subword tokenization vs. character-level tokenization. Probing the morphological knowledge encoded in these models on four tasks and 17 languages, our analyses show that the models learn the morphological systems of some languages better than others and that morphological information is encoded in the middle and late layers. Finally, we show that languages with more irregularities benefit more from having a higher share of the pre-training data.
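Layer-wise probing of this kind can be illustrated with a few lines of scikit-learn: train one linear classifier per layer on frozen hidden states and compare cross-validated accuracies across layers. The array shapes and the assumption that representations are already pooled per word are ours, not the paper's exact protocol.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def probe_layers(hidden_states: np.ndarray, labels: np.ndarray) -> list[float]:
    """Train one linear probe per layer on frozen representations.

    hidden_states: (num_layers, num_examples, hidden_dim) pooled states per word;
    labels: (num_examples,) morphological tags. Returns mean cross-validated
    accuracy per layer, showing where morphological information is most
    linearly decodable.
    """
    scores = []
    for layer in hidden_states:
        probe = LogisticRegression(max_iter=1000)
        scores.append(cross_val_score(probe, layer, labels, cv=5).mean())
    return scores
```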
Learning and communication pressures in neural networks: Lessons from emergent communication
Lukas Galke, Limor Raviv
Language Development Research 5(1), 2025
Deep neural networks and humans both benefit from compositional language structure
Lukas Galke, Yoav Ram, Limor Raviv
Nature Communications 15:10816, 2024
Morphology Matters: Probing the Cross-linguistic Morphological Generalization Abilities of Large Language Models through a Wug Test
Anh Dang, Limor Raviv, Lukas Galke
Cognitive Modeling and Computational Linguistics Workshop at ACL, 2024
Control & Containment
Guarded Query Routing for Large Language Models
Richard Šléher, William Brach, Tibor Sloboda, Kristián Košťál, and Lukas Galke
ECAI, 2025
Agentic & Multi-Agent Safety
Super-additive Cooperation in Language Model Agents
Filippo Tonini, Lukas Galke
3rd International Conference on Frontiers of Artificial Intelligence, Ethics, and Multidisciplinary Applications (FAIEMA), 2025
Uncertainty & Risk Quantification
POWN: Prototypical Open-world Node Classification
Marcel Hoffmann, Lukas Galke, Ansgar Scherp
Conference on Lifelong Learning Agents (CoLLAs), 2024
Lifelong Learning on Evolving Graphs Under the Constraints of Imbalanced Classes and New Classes
Lukas Galke, Iacopo Vagliano, Benedikt Franke, Tobias Zielke, Marcel Hoffmann, and Ansgar Scherp
Neural Networks 164, 2023
Abstract
Lifelong graph learning deals with the problem of continually adapting graph neural network (GNN) models to changes in evolving graphs. We address two critical challenges of lifelong graph learning in this work: dealing with new classes and tackling imbalanced class distributions. The combination of these two challenges is particularly relevant since newly emerging classes typically resemble only a tiny fraction of the data, adding to the already skewed class distribution. We make several contributions: First, we show that the amount of unlabeled data does not influence the results, which is an essential prerequisite for lifelong learning on a sequence of tasks. Second, we experiment with different label rates and show that our methods can perform well with only a tiny fraction of annotated nodes. Third, we propose the gDOC method to detect new classes under the constraint of having an imbalanced class distribution. The critical ingredient is a weighted binary cross-entropy loss function to account for the class imbalance. Moreover, we demonstrate combinations of gDOC with various base GNN models such as GraphSAGE, Simplified Graph Convolution, and Graph Attention Networks. Lastly, our k-neighborhood time difference measure provably normalizes the temporal changes across different graph datasets. With extensive experimentation, we find that the proposed gDOC method is consistently better than a naive adaptation of DOC to graphs. Specifically, in experiments using the smallest history size, the out-of-distribution detection score of gDOC is 0.09 compared to 0.01 for DOC. Furthermore, gDOC achieves an Open-F1 score, a combined measure of in-distribution classification and out-of-distribution detection, of 0.33 compared to 0.25 of DOC (32% increase).
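The two ingredients named above, a class-weighted binary cross-entropy objective and rejection-based detection of unseen classes, can be sketched as follows. The inverse-frequency weighting and the fixed rejection threshold are illustrative choices, not necessarily gDOC's exact formulation.

```python
import torch
import torch.nn.functional as F

def weighted_bce_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """One-vs-rest sigmoid training with per-class positive weights.

    logits, targets: (num_nodes, num_known_classes); targets are one-hot floats.
    Rare classes receive proportionally larger positive weights to counter
    the imbalanced class distribution.
    """
    class_counts = targets.sum(dim=0).clamp(min=1.0)
    pos_weight = class_counts.sum() / (len(class_counts) * class_counts)
    return F.binary_cross_entropy_with_logits(logits, targets, pos_weight=pos_weight)

def detect_new_classes(logits: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Flag a node as out-of-distribution (new class) when no sigmoid score
    exceeds the rejection threshold, in the spirit of DOC-style detection."""
    return torch.sigmoid(logits).max(dim=1).values < threshold
```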
Sustainability & Resource Impact
Continual Quantization-Aware Pre-Training: When to transition from 16-bit to 1.58-bit pre-training for BitNet language models?
Jacob Nielsen, Peter Schneider-Kamp, and Lukas Galke
ACL Findings, 2025
Abstract
Large language models (LLMs) require immense resources for training and inference. Quantization, a technique that reduces the precision of model parameters, offers a promising solution for improving LLM efficiency and sustainability. While post-training quantization methods typically achieve 4-8 bits per parameter, recent research suggests that training LLMs with 1.58 bits per weight parameter from scratch can maintain model accuracy while greatly reducing memory requirements and energy consumption at inference time. Here, we investigate a training strategy for quantization-aware pre-training, where the models are first trained with 16-bit precision and then transition into 1.58-bit quantization-aware training. Our results on 11 downstream tasks show that this 16-to-1.58-bit training strategy is preferable over full 1.58-bit training and leaves models closer to those that have undergone 16-bit training. We further investigate the effects of retaining the optimizer state at the transition point and gradually phasing in quantization strength, finding that both techniques alleviate the magnitude of loss spikes, but also that these effects can be compensated through further training.
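One plausible way to realize the gradual phase-in of quantization strength described above is to blend full-precision and ternary weights with a scheduled coefficient and a straight-through estimator. This is a sketch under our own assumptions, not the paper's reference implementation; the absmean quantizer follows the general BitNet b1.58 recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def ternary_quantize(w: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Absmean quantization of weights to {-1, 0, +1} times a per-tensor scale."""
    scale = w.abs().mean().clamp(min=eps)
    return (w / scale).round().clamp(-1, 1) * scale

class PhasedBitLinear(nn.Linear):
    """Linear layer that blends full-precision and ternary weights.

    lam = 0.0 reproduces ordinary 16-bit training, lam = 1.0 is full
    quantization-aware training; ramping lam at the transition point is one way
    to phase in quantization strength. The straight-through estimator keeps
    gradients flowing to the latent full-precision weights.
    """

    def __init__(self, in_features: int, out_features: int, bias: bool = False):
        super().__init__(in_features, out_features, bias=bias)
        self.lam = 0.0  # quantization strength, scheduled externally

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w_q = ternary_quantize(self.weight)
        # Forward pass uses the blend; backward treats quantization as identity.
        w = self.weight + self.lam * (w_q - self.weight).detach()
        return F.linear(x, w, self.bias)
```

Ramping lam from 0 to 1 over a stretch of steps after the 16-bit phase is one possible schedule; whether to retain or reset the optimizer state at the same point is the other knob the abstract mentions.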
When are 1.58 bits enough? A Bottom-up Exploration of Quantization-aware Training with Ternary Weights
Jacob Nielsen, Lukas Galke, and Peter Schneider-Kamp
17th International Conference on Agents and Artificial Intelligence (ICAART), 2025
Abstract
Contemporary machine learning models, such as language models, are powerful, but come with immense resource requirements both at training and inference time. It has been shown that decoder-only language models can be trained to a competitive state with ternary weights (1.58 bits per weight), facilitating efficient inference. Here, we start our exploration with non-transformer model architectures, investigating 1.58-bit training for multi-layer perceptrons and graph neural networks. Then, we explore 1.58-bit training in other transformer-based language models, namely encoder-only and encoder-decoder models. Our results show that in all of these settings, 1.58-bit training is on par with or sometimes even better than the standard 32/16-bit models.
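For the non-transformer settings, a quantization-aware layer in the spirit of BitNet b1.58 can simply replace nn.Linear in an ordinary MLP. The sketch below pairs ternary absmean weight quantization with 8-bit per-token activation quantization; the exact quantizers and ranges are our assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TernaryLinear(nn.Linear):
    """Quantization-aware linear layer with ternary weights and 8-bit activations."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Absmean weight quantization to {-1, 0, +1} times a per-tensor scale.
        w_scale = self.weight.abs().mean().clamp(min=1e-5)
        w_q = (self.weight / w_scale).round().clamp(-1, 1) * w_scale
        # Per-token absmax activation quantization to the int8 range.
        x_scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-5) / 127.0
        x_q = (x / x_scale).round().clamp(-128, 127) * x_scale
        # Straight-through estimators: quantized values in the forward pass,
        # full-precision gradients in the backward pass.
        w = self.weight + (w_q - self.weight).detach()
        x = x + (x_q - x).detach()
        return F.linear(x, w, self.bias)

# A small 1.58-bit multi-layer perceptron, as in the bottom-up exploration.
mlp = nn.Sequential(TernaryLinear(784, 256), nn.ReLU(), TernaryLinear(256, 10))
```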