Autonomy and autonomous systems
Artificial intelligence (AI) is transitioning from relatively low-risk consumer applications to mission-critical industrial contexts such as autonomous vehicles, industrial robotics, smart grids, critical infrastructure control, and autonomous maritime systems. This transition introduces unprecedented challenges for safety, reliability, and trustworthiness. As autonomous technologies become embedded in critical infrastructure, transportation, energy systems, and industrial processes, the ability to rigorously quantify, manage, and mitigate AI-specific risks becomes essential to societal resilience and safety.
This call targets the fundamental scientific challenges associated with enabling safe and trustworthy AI-enabled autonomous systems and managing the risks they introduce. Specifically, it focuses on understanding and controlling emergent capabilities in alignment with intended objectives as complexity and autonomy increase, and on how to rigorously assess the risks of deploying such systems in complex real-world environments. Achieving this will require foundational advances in complexity science, risk science, and AI safety and alignment research, coupled with practical approaches that equip society to deploy autonomy with confidence.
Projects should be domain-agnostic and consider how the outcomes can contribute to ongoing international standards, policy and regulatory initiatives, such as the EU AI Act’s requirements for high-risk systems. Projects will provide the scientific foundation for risk-informed design, operation, and governance of autonomous systems, to ensure that these systems behave in ways that are safe, predictable, and beneficial for individuals and society, while recognising that what is deemed beneficial is itself a rich area of research within AI ethics. The project outcomes will transform how autonomy is designed, operated, regulated and governed across sectors.
Activities within the scope of the project:
Activities are expected to achieve TRL 1-3 by the end of the project, and projects are expected to address at least one of the following:
- Emergence in autonomous systems
What causes emergent capabilities in autonomous systems, and can emergent capabilities be predicted and controlled?
In autonomous and complex engineered systems, emergent behaviour refers to system-level capabilities or outcomes that arise from interactions among components and that cannot be attributed to any single element alone. These behaviours, such as sudden capability onset, collective adaptation, or unintended global effects, often appear only when systems operate at scale or under specific interaction patterns. Such emergence poses challenges for both design and operation, especially in safety-critical applications.
The scientific challenge is to advance the theory of emergence for autonomous systems that explains, predicts, and bounds collective behaviours. This involves developing a fundamental understanding of complexity, dynamical systems, multi-agent interaction, and information flows, and establishing approaches to handle emergence-driven risks in the development and operation of autonomous systems.
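The kind of system-level capability described above has classical toy illustrations; one is the Condorcet jury effect, in which majority voting among many individually unreliable components yields a reliability that no single component possesses. The sketch below is purely illustrative and assumes independent, identically accurate components; it is not a method this call prescribes:

```python
from math import comb

def majority_accuracy(n: int, p: float) -> float:
    """Probability that a majority vote of n independent components,
    each individually correct with probability p, is correct (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

# A single 60%-accurate component is unreliable, yet a committee of 101
# such components is correct well over 95% of the time: a system-level
# capability that belongs to no individual part.
```

Real autonomous systems violate the independence assumption, which is precisely why explaining, predicting, and bounding such collective effects remains an open scientific challenge.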
- Alignment of autonomous AI systems
How can autonomous AI systems be designed and governed so that their goals, decisions, and behaviours remain consistently aligned with their intended objectives and constraints, even as they become more capable and operate in complex real-world contexts?
The alignment problem refers to the core scientific and engineering challenge of ensuring that an AI system’s performance remains consistent with its intended objectives and constraints, rather than diverging in unexpected or harmful ways as complexity and capability grow. Misalignment occurs when an AI system optimises for proxies or unintended objectives that do not faithfully represent its intended objectives or operational constraints. Current AI systems already exhibit misalignment in narrow domains, and this challenge becomes increasingly acute as systems gain autonomy, adaptivity, and influence across socio-technical environments.
The scientific challenge is to advance foundational theories and methods that enable autonomous AI systems to faithfully represent and pursue intended objectives under real-world operational conditions.
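Proxy misalignment, as described above, can be made concrete with a deliberately simple toy: an agent whose measured reward (a sensor reading) can be gamed without achieving the intended objective (the actual room temperature). All names and numbers below are invented for illustration:

```python
# Intended objective: keep the room at 20 degrees C.
# Proxy objective: make the temperature *sensor* read 20 degrees C.
def true_reward(room_temp: float) -> float:
    return -abs(room_temp - 20)

def proxy_reward(sensor_temp: float) -> float:
    return -abs(sensor_temp - 20)

# Hypothetical action outcomes: (resulting room temp, resulting sensor temp).
ACTIONS = {
    "heat_room":   (19.0, 19.0),   # actually warms the room
    "do_nothing":  (10.0, 10.0),
    "heat_sensor": (10.0, 20.0),   # games the sensor; the room stays cold
}

best_by_proxy = max(ACTIONS, key=lambda a: proxy_reward(ACTIONS[a][1]))
best_by_true = max(ACTIONS, key=lambda a: true_reward(ACTIONS[a][0]))
# Optimising the proxy selects "heat_sensor", while the intended
# objective selects "heat_room": the two rankings diverge.
```

The scientific difficulty in realistic systems is that the analogue of `true_reward` is not available as code, so faithfulness to intended objectives must be established by other means.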
- Quantifying risk and uncertainty in autonomous AI systems
How can risk be estimated for autonomous AI systems when complexity, learning behaviour, and novel technologies create new types of failure and “unknown unknowns”?
In complex AI-enabled systems, risk cannot always be inferred from past data or known failure modes, as experience data is often scarce and the space of possible behaviours may itself change as systems learn and interact. Autonomous systems may operate in environments characterised by multiple sources of uncertainty, including incomplete data, changing operational contexts, model limitations, and complex interactions with human and technical systems.
The scientific challenge is to develop rigorous frameworks for quantifying uncertainty and representing system risk. This includes advancing methods that distinguish between different sources and types of uncertainty, and understanding how these uncertainties propagate through AI-enabled systems to influence decisions and outcomes, and ultimately the likelihood of system failure.
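One established way to make the distinction between uncertainty types operational is the law of total variance: averaging over an ensemble of models separates aleatoric uncertainty (noise no single model can remove) from epistemic uncertainty (disagreement between models, reducible with better knowledge). The Monte Carlo sketch below uses invented model biases and noise levels purely for illustration:

```python
import random
import statistics

def predict(model_bias: float, rng: random.Random) -> float:
    """One model's noisy prediction of a quantity whose true value is 10."""
    return 10.0 + model_bias + rng.gauss(0.0, 0.5)  # gaussian noise: aleatoric

def decompose(n: int = 10_000, seed: int = 1):
    """Split total predictive variance into aleatoric and epistemic parts."""
    rng = random.Random(seed)
    biases = [-1.0, 0.0, 1.0]  # three models that disagree: epistemic spread
    samples = [[predict(b, rng) for _ in range(n)] for b in biases]
    # Aleatoric: average within-model variance (irreducible noise).
    aleatoric = statistics.mean(statistics.variance(s) for s in samples)
    # Epistemic: variance between model means (shrinks as models improve).
    epistemic = statistics.variance(statistics.mean(s) for s in samples)
    return aleatoric, epistemic
```

For AI-enabled autonomous systems, the challenge named in this call is that neither the ensemble nor the noise model is known in advance, so such decompositions must themselves be estimated under model uncertainty.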
Activities outside the scope of the project:
The following activities are explicitly out-of-scope:
- Applied product development, commercialization, or market deployment
- Domain-specific applications without fundamental research contributions
- Data collection, labelling, or dataset creation as the primary result
- Incremental improvements to existing methods without paradigm shifts
- Purely empirical testing and validation without theoretical foundations
Expected outcome and impact
The successful proposal will contribute to:
- Fundamental scientific advances in understanding and managing autonomous AI-enabled systems, including theoretical and methodological breakthroughs related to emergent system behaviour, alignment of autonomous decision-making with intended objectives and operational constraints, and the rigorous quantification of uncertainty and risk in complex socio-technical systems. Research may focus either on developing methods that enable these capabilities by design or on approaches that demonstrate, verify, or provide formal guarantees about them.
- New frameworks, methods, and tools for the safe and trustworthy deployment of autonomous systems, framed such that these can eventually feed into the normative frameworks that govern design, operation, assurance and governance of AI-enabled systems in critical domains.
For the inaugural 2026 call for proposals, we welcome accredited universities in Denmark, Finland, Iceland, Norway and Sweden to apply as host institutions. This initial geographic anchoring provides a focused starting point for the research funding to increase impact and supports operational learning in the first funding cycle. This anchoring is an operational choice for 2026, not a long-term geographic definition. The Foundation’s long-term ambition is global, and the geographic scope of future calls is expected to be reviewed and may evolve over time as part of the Board’s regular oversight, learning, and strategic development of the programme.