The European Summer School on Artificial Intelligence
6 - 10 July 2026, Vienna, Austria

Courses

ESSAI, Europe's largest summer school spanning the breadth of AI, will offer courses across wide areas of Artificial Intelligence and from a wide range of perspectives. Its thematic scope is analogous to that of major AI conferences such as ECAI, IJCAI and AAAI, covering all AI subdisciplines and their interconnections. The courses and tutorials will be given by leading AI researchers, and we expect to welcome around 400 students with diverse backgrounds, who can choose courses from parallel tracks in order to deepen their understanding of familiar AI disciplines, dive into new ones, and discover new interdisciplinary research areas combining different AI approaches.

Courses of ESSAI 2026

The educational program of ESSAI 2026 will consist of courses (each comprising four 90-minute lectures) as well as tutorials (to be announced later). The courses will be held in 6 lecture halls of TU Wien during July 6-10, 2026. Registration for ESSAI 2026 will open in February 2026. The preliminary list of courses is as follows:

Course Descriptions

1. AI for Autonomous Robots: Bridging Theory and Practice
Lecturers: Timothy Wiley (The Royal Melbourne Institute of Technology)
Course Type: Introductory
Keywords: Robotics, Autonomous Agents and Multi-agent Systems
Abstract: Designing AI algorithms for real-world use with autonomous robots presents unique challenges beyond conventional AI development. These include meeting real-time operational requirements under limited onboard computational resources and managing the uncertainty and noise inherent in robotic sensors and actuators. AI techniques must also seamlessly integrate into multi-layered robot software architectures that bridge multiple levels of abstraction, from low-level hardware to high-level reasoning. This foundational course offers a comprehensive introduction to the practical design of AI algorithms for autonomous robots. Participants will learn essential concepts required to interface with robot hardware, such as kinematics, and will then explore a spectrum of AI algorithms adapted for autonomous robots, including localisation, mapping, reinforcement learning, task planning, and recent advances in the use of LLMs. This course combines theory with practical experiments on robot platforms, offering participants a holistic perspective on the unique challenges of AI-driven robotics.
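Concretely, the localisation problem mentioned above is often introduced through the Bayes filter. The sketch below is illustrative only (the corridor world, sensor model, and motion model are invented for the example): a 1-D histogram filter fuses noisy door/wall observations with motion.

```python
# A minimal 1-D histogram Bayes filter: the textbook starting point for
# robot localisation under sensor noise. All models here are toy assumptions.

def normalize(belief):
    s = sum(belief)
    return [b / s for b in belief]

def sense(belief, world, measurement, p_hit=0.8, p_miss=0.2):
    """Bayesian measurement update: weight each cell by sensor likelihood."""
    posterior = [b * (p_hit if world[i] == measurement else p_miss)
                 for i, b in enumerate(belief)]
    return normalize(posterior)

def move(belief, step):
    """Exact cyclic shift: the robot moves `step` cells to the right."""
    n = len(belief)
    return [belief[(i - step) % n] for i in range(n)]

# World: a corridor of door (1) / wall (0) cells; start with uniform belief.
world = [1, 0, 0, 1, 0]
belief = [1 / len(world)] * len(world)

belief = sense(belief, world, measurement=1)   # robot sees a door
belief = move(belief, step=1)                  # robot moves one cell right
belief = sense(belief, world, measurement=0)   # robot now sees a wall
```

After the door-move-wall sequence, the belief concentrates on the two cells consistent with that observation history, illustrating how the filter manages sensor uncertainty.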
2. Specification-Guided Reinforcement Learning
Lecturers: Suguman Bansal (Georgia Institute of Technology)
Course Type: Advanced
Keywords: Reinforcement Learning, Neuro-Symbolic Learning and Reasoning
Abstract: Reinforcement Learning (RL) has achieved remarkable success across diverse applications, from game playing to robotics. However, a critical bottleneck remains: the design of effective reward functions. Reward engineering is challenging because it conflates two distinct problems: task specification (what the agent should accomplish) and reward shaping (what rewards guide effective learning). This difficulty manifests in several ways: reward functions for complex tasks become cumbersome to write, require careful tuning throughout training, must balance short-term and long-term objectives, and are prone to misspecification leading to reward hacking. This advanced tutorial introduces specification-guided RL as a principled alternative, where tasks are expressed using formal logical specifications rather than numerical rewards. Temporal logics such as Linear Temporal Logic (LTL), LTLf, and SPECTRL provide intuitive, compositional syntax with rigorous semantics for describing agent behaviors. For example, instead of carefully balancing rewards and penalties for a warehouse robot, one can simply specify: "Eventually reach goal AND Always avoid obstacles." The tutorial provides a comprehensive treatment of specification-guided RL, carefully designed to be accessible while covering the full landscape of the field. We begin with foundational material, providing background on both RL fundamentals (MDPs, policies, and learning algorithms) and temporal specifications (syntax, semantics, and expressiveness), ensuring students from either the RL or formal methods communities can follow the content. Building on these foundations, we examine fundamental theoretical results including impossibility theorems for infinite-horizon specifications and PAC learning guarantees for finite-horizon cases.
We then present state-of-the-art practical algorithms, with particular emphasis on compositional approaches that scale to complex, long-horizon tasks in high-dimensional continuous domains. Finally, we explore advanced topics and future research directions including multi-task learning, generalization to unseen specifications, verification and safety guarantees, and open theoretical questions. By the end of this tutorial, students will have a complete understanding of when and how to apply specification-guided RL, the theoretical guarantees and fundamental limitations of different approaches, practical implementation techniques for real-world applications, and promising directions for future research in this rapidly evolving field.
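As an illustration of the specification-first viewpoint, the formula "Eventually reach goal AND Always avoid obstacles" can be tracked over a finite trace by a tiny monitor automaton instead of a hand-tuned reward. The sketch below is a hypothetical, minimal Python rendering of that idea; the class and method names are invented for the example.

```python
# Monitoring the LTLf-style specification "F goal AND G not-obstacle" over a
# finite trace with a 3-state automaton (illustrative, not course code).

class ReachAvoidMonitor:
    """States: 'trying' (goal not yet reached), 'done' (goal reached, no
    obstacle so far), 'failed' (an obstacle was hit at some point)."""

    def __init__(self):
        self.state = "trying"

    def step(self, at_goal, at_obstacle):
        if self.state == "failed":
            return self.state            # violations are irrecoverable
        if at_obstacle:
            self.state = "failed"        # violates "Always avoid obstacles"
        elif at_goal and self.state == "trying":
            self.state = "done"          # satisfies "Eventually reach goal"
        return self.state

    def satisfied(self):
        # The finite trace satisfies the formula iff the goal was reached
        # and no obstacle was ever visited.
        return self.state == "done"

m = ReachAvoidMonitor()
for at_goal, at_obstacle in [(False, False), (False, False), (True, False)]:
    m.step(at_goal, at_obstacle)
```

Such a monitor makes the task objective explicit and compositional; the learning algorithm then derives a training signal from satisfaction rather than from manually shaped rewards.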
3. Introduction to Constraint Satisfaction
Lecturers: Roman Barták (Charles University)
Course Type: Introductory
Keywords: Search and Optimization, Planning and Strategic Reasoning
Abstract: Constraint programming is a technology for declarative description and solving of hard combinatorial problems, such as scheduling. It represents one of the closest approaches to the Holy Grail of automated problem solving: the user states the constraints over the problem variables and the system finds an instantiation of variables satisfying the constraints and representing the solution of the problem. The course overviews major constraint satisfaction techniques and shows how they can be used to solve practical problems.
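The declarative workflow described above can be sketched in a few lines: the user only states variables, domains, and constraints, and a generic search procedure finds a satisfying assignment. The example below is an illustrative plain-backtracking solver, not a real CP system (which would add propagation, heuristics, and global constraints).

```python
# A tiny generic constraint solver: chronological backtracking over
# user-stated constraints (illustrative sketch only).

def solve(variables, domains, constraints, assignment=None):
    """Return an assignment satisfying all constraints, or None."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(check(assignment) for check in constraints):
            result = solve(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]          # backtrack
    return None

# Example: colour a small map so that neighbouring regions differ.
variables = ["AT", "DE", "CZ", "SK"]
domains = {v: ["red", "green", "blue"] for v in variables}
edges = [("AT", "DE"), ("AT", "CZ"), ("AT", "SK"), ("DE", "CZ"), ("CZ", "SK")]

def differ(a, b):
    # Constraint holds vacuously until both endpoints are assigned.
    return lambda asg: a not in asg or b not in asg or asg[a] != asg[b]

constraints = [differ(a, b) for a, b in edges]
solution = solve(variables, domains, constraints)
```

The point of the example is the separation of concerns: the problem statement (variables, domains, constraints) is entirely declarative, while the search machinery is problem-independent.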
4. Why Is Symbolic Reasoning Computationally Hard?
Lecturers: Andreas Pieris (University of Cyprus, University of Edinburgh)
Course Type: Introductory
Keywords: Knowledge Representation and Reasoning
Abstract: Symbolic reasoning is a process of manipulating symbols and abstract representations to draw conclusions. It involves using structured knowledge, such as rules and facts, to perform logical operations and derive new information. On the other hand, sub-symbolic reasoning is a method that uses patterns and statistical learning from data to make decisions, rather than explicit, human-defined logical rules. There is a consensus that the key advantage of symbolic reasoning compared to sub-symbolic reasoning is its transparency and explainability, as the reasoning process can be traced through the logical rules. However, this advantage comes at a high price, that is, symbolic reasoning is in general computationally expensive. The goal of this introductory course is to perform a thorough computational complexity analysis, using formal tools coming from complexity theory, with the aim of explaining the key reasons why symbolic reasoning with logical rules is computationally hard. In particular, we will show that reasoning with arbitrary logical rules is algorithmically an unsolvable problem, and we will further show that, even if we severely restrict the syntax of the logical rules, symbolic reasoning remains computationally very hard. A key characteristic of this course is its self-contained nature as all the technical tools that will be used to perform the aforementioned complexity analysis will be properly introduced.
5. Wikidata: A backbone for Hybrid/Bilateral AI
Lecturers: Axel Polleres (WU Wien), Diego Rincon-Yanez (WU Wien)
Course Type: Introductory
Keywords: Knowledge Representation and Reasoning, Knowledge Graphs, Bilateral AI
Abstract: The importance of structured, accessible knowledge in Knowledge Graphs like Wikidata is paramount in AI applications, as a basis for reliable, curated facts. This introductory course will show you how to use and leverage large-scale collaboratively edited Knowledge Graphs such as Wikidata, both for driving "bilateral" (aka "hybrid", i.e. combining symbolic and sub-symbolic) AI applications, and, conversely, for discussing how hybrid/bilateral AI can help to build and consolidate such large collections of structured knowledge. We will focus on a hands-on introduction and examples.
6. The Art of Compressing LLMs: Pruning, Distillation, and Quantization Demystified
Lecturers: Liana Mikaelyan (NVIDIA), Lavinia Ghita (NVIDIA), Harshita Seth (NVIDIA), Sergio Perez (NVIDIA)
Course Type: Advanced
Keywords: Machine Learning, Natural Language Processing
Abstract: As large language models dominate AI applications, their computational footprint has become a critical barrier to deployment. This advanced course demystifies model compression: the essential skill of reducing model size and inference cost while preserving accuracy. Across four intensive 90-minute lectures, participants will master three core techniques (pruning, knowledge distillation, and quantization), learn to navigate accuracy/latency/cost trade-offs, and apply joint compression strategies to real-world LLM optimization problems using the NVIDIA stack. Designed for graduate students, ML engineers, and HPC practitioners, this course bridges theory and hands-on practice, enabling participants to deploy resource-efficient LLMs in production environments.
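Of the three techniques, quantization is the easiest to sketch in isolation. The toy example below shows per-tensor symmetric int8 post-training quantization in pure Python; it is an illustrative assumption (real pipelines quantize whole tensors, often with per-channel scales and calibration data), not the NVIDIA workflow taught in the course.

```python
# Per-tensor symmetric int8 quantization of a list of float weights
# (toy sketch: one scale for the whole tensor, no calibration).

def quantize_int8(weights):
    """Map floats to int8 codes with a single symmetric scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

weights = [0.31, -1.27, 0.05, 0.84, -0.66]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

The round-trip error is bounded by half the quantization step, which is the basic accuracy/size trade-off the course examines in depth.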
7. Trustworthy AI
Lecturers: Indrė Žliobaitė (University of Helsinki)
Course Type: Introductory
Keywords: Safe, Explainable and Trustworthy AI, Ethical, Legal and Social Aspects of AI
Abstract: This course introduces the foundations of trustworthy artificial intelligence, focusing on technical ideas that help address ethical challenges in AI-based decision support for commerce and the public sector. Students will explore how trust and accountability differ between human and machine-learned decision making, including key concepts from explainable AI. The course also examines fairness in machine learning, highlighting the difficulties of defining and measuring fairness in computational systems and introducing algorithmic approaches for promoting fair outcomes. In addition, the course introduces principles from causal machine learning and discusses how AI can support knowledge discovery across scientific disciplines, including the humanities. Emphasis is placed on core design principles and conceptual understanding rather than mathematical detail. The course is designed for an interdisciplinary audience and has no formal prerequisites.
8. Data Driven Approaches in (Multi-objective) Bayesian Optimisation
Lecturers: Tinkle Chugh (University of Exeter)
Course Type: Introductory
Keywords: Search and Optimization, Uncertainty in AI
Abstract: The Data Driven Approaches in (Multi-objective) Bayesian Optimisation course focuses on solving complex optimisation problems using the latest data-driven and AI techniques. Participants will learn about probabilistic machine learning, specifically Gaussian processes, and their application in Bayesian optimisation. The course will emphasise methods for solving problems with multiple conflicting objectives, using real-world examples to show how these advanced techniques work in practice. The students in the course will gain a deep understanding of modern data-driven optimisation methodologies and knowledge of making efficient decisions when solving problems with conflicting objectives.
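A core ingredient of Bayesian optimisation is the acquisition function that decides which point to evaluate next. The sketch below implements the standard expected-improvement formula (minimisation convention) given a Gaussian-process posterior mean and standard deviation at a candidate point; the function names and example values are illustrative, not course material.

```python
# Expected improvement for Bayesian optimisation (minimisation convention):
# EI(x) = (f_best - mu - xi) * Phi(z) + sigma * phi(z), z = (f_best - mu - xi)/sigma

import math

def normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def normal_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def expected_improvement(mu, sigma, f_best, xi=0.0):
    """mu, sigma: GP posterior at a candidate point; f_best: best value so far."""
    if sigma == 0:
        return max(f_best - mu - xi, 0.0)
    z = (f_best - mu - xi) / sigma
    return (f_best - mu - xi) * normal_cdf(z) + sigma * normal_pdf(z)

# A highly uncertain point can outscore a slightly better mean: exploration.
ei_exploit = expected_improvement(mu=0.9, sigma=0.01, f_best=1.0)
ei_explore = expected_improvement(mu=1.0, sigma=0.5, f_best=1.0)
```

The comparison at the end illustrates the exploration/exploitation balance that makes Bayesian optimisation sample-efficient on expensive objectives.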
9. Multi-Perspective Reasoning in Knowledge Representation: An Introduction to Standpoint Logic
Lecturers: Timothy Lyon (TU Dresden), Lucía Gómez Álvarez (University Grenoble Alpes)
Course Type: Introductory
Keywords: Knowledge Representation and Reasoning, Autonomous Agents and Multi-agent Systems
Abstract: This course introduces standpoint logics, a novel family of lightweight multi-modal logics designed to represent and reason about knowledge arising from multiple, potentially conflicting perspectives. Many contemporary AI tasks, from ontology engineering to data integration and multi-agent reasoning, require handling context-dependent information without enforcing unification. Standpoint logic provides a principled and computationally well-behaved framework for this purpose, extending propositional, temporal, and description logics with explicit standpoint-indexed operators and refinement relations between standpoints. Across four sessions, students will learn the syntax and semantics of propositional standpoint logic, its temporal extension SLTL, and standpoint description logics for ontological modeling. They will also study core reasoning tasks, algorithms, and complexity results. The course culminates in a hands-on exploration of semantic interoperability challenges in domains such as biomedicine, demonstrating how standpoint-based formalisms support robust integration across heterogeneous ontologies.
10. Recommender Systems: Past, Present, and Future (Challenges)
Lecturers: Markus Reiter-Haas (Graz University of Technology), Elisabeth Lex (Graz University of Technology)
Course Type: Introductory
Keywords: Applications of AI, Ethical, Legal and Social Aspects of AI
Abstract: Recommender systems represent one of the most impactful applications of AI in the modern era, shaping everything from global commerce to information consumption. This course offers a comprehensive look into their evolution, navigating the path from classical matrix factorization to current deep learning and the emerging frontiers of neuro-symbolic and agentic approaches. Moving beyond simple accuracy, the curriculum adopts an interdisciplinary lens to tackle systemic challenges, including fairness, transparency, inclusion, and the growing divergence between offline metrics and real-world impact. In light of the EU AI Act, we will critically examine how to build trustworthy systems that align with societal values. Participants will move beyond algorithmic implementation to develop a sophisticated research mindset, equipping them with the conceptual toolbox necessary to navigate and shape the next generation of recommendation technologies.
11. Logic meets Learning
Lecturers: Vaishak Belle (University of Edinburgh)
Course Type: Introductory
Keywords: Neuro-Symbolic Learning and Reasoning, Knowledge Representation and Reasoning
Abstract: The tension between reasoning and learning remains fundamental to artificial intelligence. This tutorial surveys the intersection of logic and learning, exploring how these historically distinct paradigms can be unified. We examine three strands: logic versus learning (including weighted model counting and knowledge compilation), machine learning for logic (inductive logic programming, Bayesian scoring, PAC-semantics), and logic for machine learning (probabilistic programming, algebraic model counting, abstraction). The tutorial emphasizes both foundations and algorithmic techniques. Attendees will gain understanding of statistical relational learning, neuro-symbolic systems, and the mathematical ideas connecting symbolic reasoning with data-driven approaches. The material bridges classical AI and modern machine learning, preparing researchers for cross-over applications by unifying reasoning and learning.
12. Beyond Breakpoints: AI for Software Fault Localization
Lecturers: Birgit Hofer (Graz University of Technology), Franz Wotawa (Graz University of Technology)
Course Type: Introductory
Keywords: Knowledge Representation and Reasoning, Machine Learning
Abstract: Debugging is one of the most challenging and costly tasks in software development, making effective fault localization essential. This course introduces students to a broad spectrum of techniques used to identify faults in programs, spanning both symbolic and subsymbolic approaches. We begin with classical methods such as program slicing, model‑based diagnosis, spectrum‑based fault localization, and code smell analysis. Building on these foundations, the course explores how supervised machine learning and natural language processing can predict faulty code and link bug reports to relevant locations. Students also learn how to design rigorous, reproducible evaluations of debugging techniques. Finally, we examine emerging opportunities enabled by large language models and discuss how hybrid approaches can combine the strengths of multiple paradigms. The course is ideal for students interested in AI for software engineering, as well as those from machine learning, NLP, information retrieval, or formal methods seeking to apply their expertise to debugging.
13. Foundations of Concept-Based Interpretable Deep Learning
Lecturers: Giuseppe Marra (KU Leuven), Pietro Barbiero (IBM Research)
Course Type: Advanced
Keywords: Safe, Explainable and Trustworthy AI, Neuro-Symbolic Learning and Reasoning
Abstract: As notoriously opaque deep neural networks (DNNs) become commonplace in powerful Artificial Intelligence (AI) systems, Interpretable Deep Learning (IDL) has emerged as a promising direction for designing interpretable-by-construction neural architectures. At their core, IDL models learn a latent space where some of their representations are aligned with high-level units of information, or concepts, that domain experts are familiar with (e.g., "striped texture", "round object", etc.). By introducing inductive biases that encourage predictions to be made based on these interpretable representations, IDL models enable the construction of expressive yet highly transparent architectures that can be vetted, analysed, and intervened on. This course aims to capitalise on the surge of interest in IDL by exposing AI researchers and engineers to the core foundations necessary to understand the general principles behind existing IDL models. By doing so, we aim to equip attendees with the knowledge necessary to comprehend the current state of this extensive body of literature, enabling them to build upon it for their research. Specifically, this course will provide an overview of core principles, as well as seminal and recent works in IDL. Particular attention will be given to the formal interpretation of these models in terms of neurosymbolic AI and how such interpretation enables formal reasoning and verification of the resulting architectures.
14. Learning Deep Low-dimensional Models from High-Dimensional Data: From Theory to Practice
Lecturers: Qing Qu (University of Michigan), Sam Buchanan (UC Berkeley), Yi Ma (University of Hong Kong), Zhihui Zhu (Ohio State University)
Course Type: Introductory
Keywords: Machine Learning, Neural Networks
Abstract: Over the past decade, the advent of deep learning and large-scale computing has immeasurably changed the ways we process, interpret, and predict data in imaging and computer vision. The "traditional" approach to algorithm design, based around parametric models for specific structures of signals and measurements (such as sparse and low-rank models) and the associated optimization toolkit, is now significantly enriched with data-driven learning-based techniques, where large-scale networks are pre-trained and then adapted to a variety of specific tasks. Nevertheless, the successes of both modern data-driven and classic model-based paradigms rely crucially on correctly identifying the low-dimensional structures present in real-world data, to the extent that we view the roles of learning and the compression of data processing algorithms (whether explicit or implicit, as with deep networks) as inextricably linked. As such, this tutorial provides a timely resource that uniquely bridges low-dimensional models with deep learning. This tutorial will show how (i) these low-dimensional models and principles provide a valuable lens for formulating problems and understanding the behavior of modern deep models, and (ii) ideas from low-dimensional models can provide valuable guidance for designing new parameter-efficient, robust, and interpretable deep learning models. The tutorial will start by introducing fundamental low-dimensional models (e.g., basic sparse and low-rank models) with motivating engineering applications. Based on these developments, we will discuss strong conceptual, algorithmic, and theoretical connections between low-dimensional structures and deep models, providing new perspectives to understand state-of-the-art deep models in terms of learned representations and generative models.
Finally, we will demonstrate that these connections can lead to new principles for designing deep networks and learning low-dimensional structures, with both clear interpretability and practical benefits.
15. Decision trees: from efficient prediction to responsible AI
Lecturers: Hendrik Blockeel (KU Leuven)
Course Type: Introductory
Keywords: Machine Learning, Safe, Explainable and Trustworthy AI
Abstract: Although shadowed by the spectacular recent developments in deep learning, decision trees and their ensembles (decision forests) remain relevant in many contexts. Early research focused mostly on maximizing the predictive accuracy of trees and forests, as well as the efficiency with which they can be learned and deployed. But in the context of responsible AI, other aspects of learned models, such as fairness, robustness, explainability, and verifiability, have gained importance. From this point of view too, decision trees still have an important role to play. This course will present an overview of methods and algorithms related to decision trees and forests, from early learning methods to the most recent analysis and verification methods.
16. Trustworthy Machine Learning from Data to Models
Lecturers: Bo Han (Hong Kong Baptist University)
Course Type: Introductory
Keywords: Safe, Explainable and Trustworthy AI, Foundation Models
Abstract: Trustworthy machine learning addresses critical problems of robustness, privacy, security, reliability, and other desirable properties. This broad research area has achieved remarkable advances and continues to generate emerging topics as it progresses. This course provides a systematic overview of the research problems under trustworthy machine learning, covering perspectives from data to model. Starting with fundamental data-centric learning, the course reviews learning with noisy data, long-tailed distributions, out-of-distribution data, and adversarial examples to achieve robustness. Delving into private and secure learning, the course elaborates on core methodologies of differential privacy, different attack threats, and learning paradigms to realize privacy protection and enhance security. The course also introduces several trending issues related to foundation models, including jailbreak prompts, watermarking, and hallucination, as well as causal learning and reasoning. To sum up, this course integrates commonly isolated research problems in a unified manner, providing general problem setups, detailed sub-directions, and further discussion of their challenges and future developments.
17. Reward and Constraint Learning: Foundations for Human-AI Alignment
Lecturers: Sebastian Tschiatschek (University of Vienna)
Course Type: Advanced
Keywords: Reinforcement Learning, Safe, Explainable and Trustworthy AI
Abstract: Deploying AI agents in the real world requires ensuring agents pursue objectives aligned with human intent, which is often difficult to articulate precisely. Traditional reinforcement learning (RL) relies on hand-coded rewards that are frequently misspecified, potentially leading to unintended or unsafe behaviors in systems like chatbots or self-driving cars. This advanced course provides a technical foundation for bridging the gap between human values and machine objectives. Students will explore three core pillars: Inverse Reinforcement Learning (IRL) to infer rewards from expert demonstrations, Deep Reward Learning from human feedback, and Constraint Learning to identify implicit safety and feasibility boundaries. Designed for PhD students and researchers familiar with the fundamentals of RL, the course combines lectures with discussions of high-impact research, equipping attendees with the modern technical toolkit for translating human values into reward and constraint functions for reinforcement learning, and hence with the basis for human-AI alignment.
18. Recurrent GNNs: The Power of Iteration
Lecturers: Jonni Virtema (University of Glasgow), Floris Geerts (University of Antwerp)
Course Type: Introductory
Keywords: Neural Networks, Theory
Abstract: Recurrent Graph Neural Networks (GNNs) take the familiar message-passing idea and add one game-changing feature: they can iterate beyond a fixed bound—updating node representations again and again, until a stable “fixed point” or some other halting condition is reached. Why does this matter? Because many standard GNNs only see a fixed-radius neighbourhood, which raises a natural set of questions: What can a finite-depth GNN never compute uniformly? When do we truly need recursion? Can a neural model learn concepts like reachability (“is there a path between nodes?”) without hard-coding a graph algorithm? This course is a student-friendly, mathematically precise introduction to those questions. We start with labeled graphs and basic GNN definitions, then connect expressivity to colour refinement and graded modal logic. Next, we push past these limits, focusing on the following topics: Why do reachability and other global properties break the usual tools? What changes once we allow iteration? Finally, we show how recurrent GNNs line up with fixpoint logics and discuss what this tells us about their capabilities and limitations. If you are curious about the fundamentals of graph ML—what these models can express, what they cannot, and how these results are proven—this course is for you.
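The fixed-point idea can be made concrete with reachability itself: treat each node's feature as one bit ("can this node reach the target?") and iterate a Boolean-OR message-passing update until nothing changes. The sketch below is an illustrative hand-rolled analogue of that iteration, not a learned GNN.

```python
# Iterate a message-passing update to a fixed point. The stable state of the
# Boolean-OR update is exactly "can this node reach the target?" -- a global
# property that a fixed-depth GNN cannot compute uniformly over graph sizes.

def reaches_target(adjacency, target):
    """adjacency: dict node -> list of successor nodes."""
    state = {v: (v == target) for v in adjacency}
    while True:
        new_state = {v: state[v] or any(state[u] for u in adjacency[v])
                     for v in adjacency}
        if new_state == state:      # fixed point: no bit changed
            return new_state
        state = new_state

graph = {0: [1], 1: [2], 2: [], 3: [0], 4: []}
result = reaches_target(graph, target=2)
```

The number of iterations needed grows with the longest path, which is precisely why a bounded-depth network fails and recursion (or a fixpoint logic) is required.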
19. Knowledge Compilation: Theory, Practice, and Applications
Lecturers: Johannes Fichte (Linköping University), Jean-Marie Lagniez (CRIL, Université d'Artois)
Course Type: Introductory
Keywords: Knowledge Representation and Reasoning, Uncertainty in AI
Abstract: Symbolic AI provides the foundations for transparent and explainable reasoning by grounding decisions in explicit logical or probabilistic models. Unfortunately, many reasoning tasks in that realm are computationally hard. Knowledge compilation addresses this intractability by transforming propositional models into circuits that make otherwise intractable tasks efficiently solvable. This enables fast, dependable inference and explanation over complex reasoning tasks while preserving the expressive power of the underlying models. In this short course, we introduce propositional satisfiability (SAT), modern solving techniques, and how practical solvers can be turned into engines for knowledge compilation. We examine preprocessing techniques that influence the performance and size of the compiled output. We explore different types of circuits and discuss computational lower bounds, representational trade-offs, and theoretical properties that guide the choice of target languages. Finally, we demonstrate how compiled circuits enable exact model counting, uniform and weighted sampling, and direct access to structural features of the solution space.
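To see what compilation buys, it helps to look at the naive baseline. The brute-force model counter below (DIMACS-style literals; the function names are illustrative) is exponential in the number of variables, whereas the same count becomes a linear-time walk over a suitably compiled circuit such as a d-DNNF.

```python
# Exact model counting for a CNF formula by full enumeration -- the
# intractable task that knowledge compilation turns into a circuit walk.

from itertools import product

def count_models(n_vars, clauses):
    """clauses: CNF as lists of non-zero ints; positive k means variable k,
    negative k its negation (DIMACS-style literals)."""
    count = 0
    for bits in product([False, True], repeat=n_vars):
        # A clause is satisfied if any of its literals evaluates to True.
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            count += 1
    return count

# (x1 OR x2) AND (NOT x1 OR x3): 3 variables, 4 satisfying assignments.
models = count_models(3, [[1, 2], [-1, 3]])
```

The enumeration visits 2^n assignments; after compilation, the count is obtained by propagating numbers through the circuit once, without ever enumerating models.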
20. AI for Fair and Transparent Decision-Making from Legal and Technical Perspectives
Lecturers: Maria Flórez Rojas (Groningen University), Matias Valdenegro (Groningen University)
Course Type: Introductory
Keywords: AI for Social Good, Safe, Explainable and Trustworthy AI
Abstract: This course explores how Artificial Intelligence can be designed for social good, combining technical and socio-legal perspectives. The technical part, taught by a machine-learning scientist, introduces how ML models are actually built and deployed: data pipelines, model choices, uncertainty estimation, robustness, and explainability. The socio-legal part, led by a law-and-technology scholar, covers fundamental rights, data and consumer protection, and key obligations under the EU AI Act, focusing on accountability, oversight and redress. Throughout, we treat AI for Social Good as a problem of operationalisation: how principles such as non-discrimination, transparency, accountability and human oversight can be embedded in the AI life cycle through requirements for architecture, documentation, testing and organisational practice. The course targets advanced Bachelor and Master students in AI, computer science, data science, law and public policy who are eager to work across disciplines.
21. Uncertainty in Machine Learning: From Aleatoric to Epistemic
Lecturers: Willem Waegeman (Universiteit Gent), Eyke Huellermeier (LMU Munich)
Course Type: Advanced
Keywords: Machine Learning, Uncertainty in AI
Abstract: This tutorial aims to provide an overview of uncertainty representation and quantification in machine learning, a topic that has received increasing attention in the recent past. The main focus is on novel approaches for distinguishing and representing so-called aleatoric and epistemic uncertainty. By the end of the tutorial, attendees will have a comprehensive understanding of the fundamental concepts and recent advances in this field.
22. Tractable Circuits: A Common Language for Logic, Probability, and Neural Models
Lecturers: Robert Peharz (TU Graz), Adrián Javaloy (University of Edinburgh)
Course Type: Introductory
Keywords: Neuro-Symbolic Learning and Reasoning
Abstract: This course offers a unified perspective on symbolic, probabilistic, and neural approaches to AI through the lens of structured tractable circuits. We motivate the need for neuro-symbolic and Bilateral AI by contrasting the strengths and limitations of classical symbolic reasoning and modern data-driven learning. Algebraic circuits provide a unifying abstraction, enabling us to present three major circuit families: logical circuits (NNFs) within the knowledge-compilation landscape, probabilistic circuits (PCs) with their learning and inference algorithms, and advanced compositional and differentiable circuit architectures. A central theme is how structural properties such as smoothness, (structured) decomposability, and determinism guarantee tractable inference, allowing large symbolic and probabilistic models to support linear-time queries that would otherwise be intractable. We also highlight recent advances and applications, from semantic layers and cryptographic applications to causal inference. The course targets students from both symbolic and sub-symbolic backgrounds, equipping them with principled tools for building next-generation hybrid AI models.
23. Modern Constraint Programming
Lecturers: Emir Demirović (Delft University of Technology)
Course Type: Advanced
Keywords: Search and Optimization
Abstract: Automated decision‑making increasingly relies on combinatorial optimisation algorithms, which underpin a wide range of real‑world applications, including Industry 4.0 production processes, timetabling, scheduling, train shunting, and logistics. These algorithms are central to technological progress, yet the techniques behind them are often treated as black boxes. Constraint programming has emerged as a powerful paradigm for modelling and solving diverse combinatorial optimisation problems. This course aims to demystify the inner workings of constraint programming solvers from an algorithmic perspective. We will explore both foundational methods and state‑of‑the‑art techniques, including recent developments in proof and certificate generation. Core ideas are introduced through visual explanations and illustrative examples. Participants will also gain hands‑on experience through practical implementation assignments using Pumpkin, a constraint programming solver developed by the lecturer’s research group. The course is accompanied by dedicated lecture notes designed to support deeper understanding and practical application.
24. From In-Context Learning to Neuro-Symbolic Reasoning with Large Reasoning Models
Lecturers: Zied Bouraoui (CRIL CNRS and Univ Artois), Tanmoy Mukherjee (CRIL CNRS and Univ Artois)
Course Type: Introductory
Keywords: Foundation Models, Neuro-Symbolic Learning and Reasoning
Abstract: Large language models are increasingly used as Reasoning Models (LRMs) for tasks involving mathematics, logic, planning, and explanation. Yet their “reasoning” is typically elicited through in-context learning and prompting, which remains brittle, difficult to control, and poorly aligned with formal reasoning paradigms. This advanced course develops a coherent view of LRMs as components in reasoning systems. We first analyse in-context reasoning and its systematic limitations, then study how training objectives, steering methods, and activation-level interventions can render reasoning behaviour more systematic. Building on this, we show how LRMs can be used to generate structured representations and embedded logical formulas and how these can already support reasoning. Finally, we discuss neuro-symbolic architectures that combine LLMs with solvers and knowledge-based systems. Participants will gain practical and conceptual tools for designing, analysing, and deploying LLM-centric reasoning pipelines.