The Amsterdam Business School’s (ABS) Amsterdam Digital Transformation Lab (ADTL) has launched Nine Ways to Cross. This is an interactive AI system governance lab that lets participants design, run, and test governance systems for autonomous vehicles. The lab then shows which philosophical tradition their design choices most closely follow.
Somendra Narayan

Nine Ways to Cross was developed by Dr Somendra Narayan, an assistant professor with the ABS Strategy and Innovation section and director of ADTL. The lab is used in the Managing Business with AI course in the Executive Programme in Management Studies (EPMS) at the ABS.

Making AI governance concrete and observable

In this context, governance means the rules, structures, and checks that guide how an AI system behaves. Nine Ways to Cross translates 9 philosophical traditions from Western, Chinese, Indian, and African thought into clear coordination systems. A coordination system is the set of rules that determines how many separate agents act together in a shared space. These agents can include autonomous cars, buses, cyclists, pedestrians, traffic lights, emergency vehicles, and central control systems that coordinate the overall flow.

Participants make 7 practical design choices. These concern rules, fairness, authority, agent identity, interdependence, enforcement, and decision speed. The lab turns these choices into a continuous simulation that runs the participants' own governance configuration. In other words, participants see what their governance design does in a realistic traffic setting with autonomous vehicles.
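The lab's actual configuration format is not public, but the idea of committing to the 7 design dimensions can be sketched as a simple data structure. Everything here beyond the dimension names taken from the text (the class name, the example option values) is an assumption for illustration:

```python
from dataclasses import dataclass

# Illustrative only: field names follow the seven design dimensions
# described in the text; the option values are assumed examples.
@dataclass
class GovernanceConfig:
    rules: str            # e.g. "strict" vs "adaptive"
    fairness: str         # e.g. "first-come-first-served" vs "need-weighted"
    authority: str        # e.g. "central-controller" vs "distributed"
    agent_identity: str   # e.g. "uniform" vs "role-differentiated"
    interdependence: str  # e.g. "independent" vs "networked"
    enforcement: str      # e.g. "hard-constraints" vs "penalties"
    decision_speed: str   # e.g. "deliberative" vs "reactive"

# A design session commits the participant to one such profile,
# which the simulation then runs:
my_design = GovernanceConfig(
    rules="adaptive",
    fairness="need-weighted",
    authority="distributed",
    agent_identity="role-differentiated",
    interdependence="networked",
    enforcement="penalties",
    decision_speed="reactive",
)
```

The point of the structure is that every field must be filled in: there is no neutral default, so every run embodies a complete set of governance commitments.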

Dr Narayan explains why this matters for leaders who work with AI systems: ‘Every protocol embeds a philosophy. Every optimisation function inherits an ethics. Most executives, engineers, and regulators make governance decisions without examining the philosophical commitments embedded in those decisions. We try to make those commitments visible. This is done through lectures and through simulations where you can fail and succeed.’

Stress testing AI under pressure

The lab includes interactive stress tests. Stress testing means putting a system under pressure to see how it behaves when something goes wrong. Participants can add different difficult situations to the simulation. Examples are emergency vehicles, traffic surges, sensor failures, and adversarial bad actors that deliberately break the rules. A real-time dashboard tracks 5 dimensions of performance: efficiency, fairness, resilience, adaptability, and safety.
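How such a dashboard aggregates performance can be sketched in a toy model. The 5 dimension names and the 4 stress-event types come from the text; the scoring scale and the penalty weights are purely assumed for illustration and bear no relation to the lab's real metrics:

```python
# Toy sketch of a stress-test dashboard: each stress event degrades
# some performance dimensions. All numeric weights are assumptions.
DIMENSIONS = ("efficiency", "fairness", "resilience", "adaptability", "safety")

# Assumed penalty weights per stress-event type (illustrative values).
PENALTIES = {
    "emergency_vehicle": {"efficiency": 0.1},
    "traffic_surge": {"efficiency": 0.2, "fairness": 0.1},
    "sensor_failure": {"safety": 0.2, "resilience": 0.1},
    "adversarial_agent": {"safety": 0.1, "fairness": 0.2, "resilience": 0.2},
}

def score_run(events, baseline=1.0):
    """Return a 0-1 score per dimension, degraded by stress events."""
    scores = {dim: baseline for dim in DIMENSIONS}
    for event in events:
        for dim, cost in PENALTIES.get(event, {}).items():
            scores[dim] = max(0.0, scores[dim] - cost)
    return scores

scores = score_run(["traffic_surge", "sensor_failure"])
# In this toy model, efficiency and safety drop while adaptability
# is untouched -- the kind of uneven profile the dashboard surfaces.
```

Even this crude model shows why a single run rarely degrades everything at once: different disruptions stress different dimensions, which is what makes redesign-and-retest informative.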

After each run, participants can redesign and test again. The system clearly shows the trade-offs. A trade-off is a situation where improving 1 dimension, such as speed, makes another dimension, such as safety, weaker.

Commit, observe, discover

The lab reverses the usual order of teaching governance and ethics. Participants do not start by learning philosophical theories. Instead, they first make concrete design decisions. Next, they observe what happens at system level under stress. Only then do they discover which philosophical tradition their design most closely follows.

This sequence is described as commit, observe, discover. It is designed to produce deeper insight. Participants see how abstract ideas about rules, outcomes, and character play out in real-time system behaviour.

Nine philosophical traditions in a 3×3 framework

The 9 traditions are arranged in a 3×3 framework along 2 analytical dimensions. The first dimension is governance logic. This spans rule-based approaches, outcome-based approaches, and approaches that focus on character and relationships. The second dimension is the model of the agent. This can be an autonomous individual, an agent that is strongly shaped by context, or an agent in an interdependent network. The traditions include:

  • Kantian deontology, utilitarianism, and Aristotelian virtue ethics from Western philosophy
  • Dharma–Svadharma from Indian Dharmasastra
  • Pratityasamutpada (dependent origination) from Madhyamaka Buddhist philosophy
  • Confucian Li and Daoist Wu Wei from Chinese traditions
  • Ubuntu from African Nguni/Bantu thought
  • American Pragmatism

Each tradition is implemented as a distinct coordination algorithm. An algorithm is a step-by-step procedure that defines how decisions are made. Each algorithm has its own typical failure mode. A failure mode is a specific way in which a system can go wrong.
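How two traditions can become two different coordination algorithms, each with its own failure mode, can be sketched with a toy crossing conflict. The lab's real algorithms are not public; the decision rules, agent fields, and failure-mode notes below are assumptions chosen to contrast a rule-based with an outcome-based logic:

```python
# Illustrative contrast between two of the nine traditions, applied
# to the same conflict at a crossing. All details are assumed.

def deontological_priority(agents):
    """Rule-based logic: yield by a fixed right-of-way rank, whatever the cost.
    An assumed failure mode: gridlock when rigid rules meet piled-up demand."""
    rank = {"emergency": 0, "bus": 1, "car": 2, "cyclist": 3, "pedestrian": 4}
    return min(agents, key=lambda a: rank[a["kind"]])

def utilitarian_priority(agents):
    """Outcome-based logic: let through whoever relieves the most total waiting.
    An assumed failure mode: chronic unfairness to low-weight agents."""
    return max(agents, key=lambda a: a["passengers"] * a["wait_time"])

queue = [
    {"kind": "bus", "passengers": 40, "wait_time": 10},
    {"kind": "car", "passengers": 1, "wait_time": 500},
]
deontological_priority(queue)  # the bus: buses outrank cars, full stop
utilitarian_priority(queue)    # the car: 1 * 500 outweighs 40 * 10
```

The same queue, two defensible answers: which one is "right" depends entirely on the philosophical commitment baked into the coordination rule, which is the commitment the lab makes visible.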

The system is not built so that any 1 governance design performs best on all dimensions. Instead, each design has strengths that are linked to particular vulnerabilities. Participants learn that there is no perfect governance system for AI. There are only choices, trade-offs, and responsibilities that must be made explicit.

Embedding AI governance in executive education

By placing Nine Ways to Cross at the heart of the Managing Business with AI course in the EPMS programme, Amsterdam Business School integrates technical, ethical, and philosophical perspectives in a single learning experience.

Participants explore how their own design instincts shape AI governance. They also see how these instincts are rooted in deeper traditions of thought. This supports more reflective and responsible use of AI in managerial practice.