Papers
📚 Full List
For a complete and up-to-date publication list, see my Google Scholar profile.
Filter by Area
🤖 AI & Multi-agent Systems
Game theory provides a natural framework for understanding and designing AI agents that must coordinate, compete, or cooperate with others. We use evolutionary game theory and reinforcement learning to study how cooperation can emerge in decentralised multi-agent systems.
📄 Papers
Hybrid Human-Agent Social Dilemmas in Energy Markets 🔗
Isuri Perera, Frits de Nijs and Julian García. arXiv preprint, 2026.
What happens when humans and autonomous agents interact strategically in energy markets? We show that artificial agents using observable signals can improve coordination, and that early adopters face no structural disadvantage despite potential free-rider dynamics.
Learning to cooperate against ensembles of diverse opponents 🔗
Isuri Perera, Frits de Nijs and Julian García. Neural Computing and Applications, 2025.
How can an RL agent learn to cooperate when facing many different, anonymous opponents? Inspired by evolutionary game theory, we design agents that maintain cooperation across diverse partners without needing to model each one individually.
Picking strategies in games of cooperation 🔗
Julian García and Arne Traulsen. PNAS, 2025.
Which strategies modellers include in evolutionary game models can quietly shape the conclusions. We propose principles for more systematic strategy selection and argue that AI methods can help build richer models of cooperation.
Cooperation and Reputation Dynamics with Reinforcement Learning 🔗
Nicolas Anastassacos, Julian García, Stephen Hailes and Mirco Musolesi. AAMAS, 2021.
Can RL agents independently learn to use reputation to sustain cooperation? We show they can — but only with the right nudges. A combination of seeding and intrinsic rewards based on altruistic reciprocation stabilises cooperation even in fully decentralised settings.
No strategy can win in the repeated prisoner's dilemma 🔗
Julian García and Matthijs van Veelen. Frontiers in Robotics and AI, 2018.
For every Nash equilibrium in the repeated prisoner's dilemma, there are sequences of mutants that destabilise it. Populations cycle between equilibria with different levels of cooperation — and this instability is inescapable regardless of how strategies are represented.
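The full result covers every way of representing strategies, but the payoff structure behind the cycling is easy to see on a tiny subset. The sketch below is illustrative only (not code from the paper): it computes average per-round payoffs for always-cooperate, always-defect and tit-for-tat in a finitely repeated prisoner's dilemma, using conventional payoff values.

```python
# Illustrative sketch: average per-round payoffs in a finitely repeated
# prisoner's dilemma for three classic strategies. Payoff values R, S, T, P
# are the conventional choices (T > R > P > S), not taken from the paper.
R, S, T, P = 3.0, 0.0, 5.0, 1.0

def repeated_payoff(a, b, rounds=50):
    """Average payoff per round to strategy a when playing against b."""
    last_a, last_b = "C", "C"  # tit-for-tat opens by cooperating
    total = 0.0
    for _ in range(rounds):
        move_a = {"ALLC": "C", "ALLD": "D", "TFT": last_b}[a]
        move_b = {"ALLC": "C", "ALLD": "D", "TFT": last_a}[b]
        if move_a == "C" and move_b == "C":
            total += R          # mutual cooperation
        elif move_a == "C":
            total += S          # a cooperates, b defects
        elif move_b == "C":
            total += T          # a defects, b cooperates
        else:
            total += P          # mutual defection
        last_a, last_b = move_a, move_b
    return total / rounds

strategies = ["ALLC", "ALLD", "TFT"]
payoffs = {(a, b): repeated_payoff(a, b)
           for a in strategies for b in strategies}
```

Against a tit-for-tat resident, a defector earns less than residents earn against each other, so TFT resists ALLD; but ALLC and TFT score identically against cooperators, so unconditional cooperators can drift in, and once they are common enough ALLD invades. Chains of invasions like this underlie the cycling described above.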