
The Ludus Algorithm: Using AI to Decode Ancient Roman Board Games

The wind howls through the broken columns of the Roman Forum, whistling past the circles and grids incised into the marble pavers of the Basilica Julia. For centuries, these silent geometric ghosts—the gaming boards of idle senators, bored soldiers, and street urchins—have guarded their secrets. We knew the Romans played. We have found their dice, their glass counters, and their heavy stone boards from Hadrian’s Wall to the sands of Egypt. But the rules—the logic, the strategy, the thrill of victory—had evaporated into the ether of history, lost because no Roman scribe ever thought it necessary to write down instructions for games that everyone already knew.

For two thousand years, historians and archaeologists could only guess at how the legions passed their time. They projected modern rules like Chess or Checkers onto ancient grids, creating anachronistic hybrids that felt stiff and lifeless. But in the third decade of the 21st century, a new player entered the archaeological arena: Artificial Intelligence.

This is the story of the "Ludus Algorithm"—a revolutionary intersection of digital archaeology, evolutionary computation, and ludology (the study of games). It is the story of how the Digital Ludeme Project, led by researchers at Maastricht University, utilized the raw processing power of the Ludii general game system to reverse-engineer the lost pastimes of the Empire. By treating games not as static artifacts but as evolving organisms made of "ludemes"—genetic units of game logic—and by training AI agents to play millions of phantom matches, we have finally broken the silence of the stones. We can now play the games of Caesar and Augustus not as we imagine they were played, but as the mathematics of fun and the physical evidence of wear patterns dictate they must have been played.

Part I: The Silence of the Stones

To understand the magnitude of this technological breakthrough, one must first appreciate the archaeological void it fills. Gaming was not merely a trivial pastime in ancient Rome; it was a societal heartbeat. It took place in the smoke-filled popinae (taverns) of the Suburra, in the leather tents of the legions in Germania, and in the marble courtyards of the Palatine Hill.

The archaeological record is teeming with hardware. We have the tabulae (boards), often scratched as graffiti into public spaces. We have the calculi (counters), made of glass, bone, pottery, or polished stone. We have the tesserae (dice), identical to our modern six-sided cubes, though often weighted by cheats to favor the "Venus" throw.

However, the "software"—the rules—is almost entirely missing. The Romans did not write rulebooks. Why would they? One learns to play Latrunculi or Duodecim Scripta by watching one’s father, or by losing a week’s wages to a veteran in the barracks. The literary sources we do have are fragmentary and poetic, often frustratingly vague.

The poet Ovid, in his Tristia, mentions a game where "a piece is lost when caught between two enemies." This single line is the cornerstone of our understanding of Ludus Latrunculorum, the Game of Mercenaries. But does the captured piece vanish immediately? Does the captor move into the square? Can a piece commit suicide by moving between two enemies? Ovid does not say. He was writing a lament on exile, not a technical manual.

Martial, the epigrammatist, jokes about the "glass soldiers" and the "blocked path," but offers no grid dimensions. Varro, the scholar, mentions the grid but not the movement. For centuries, this left us with a puzzle: we had the chessboard and the pieces, but no idea how the knight moved.

Traditional historians attempted to fill these gaps with intuition. In the 19th and 20th centuries, scholars like H.J.R. Murray produced "reconstructions" that were often heavily biased by the games of their own time. They assumed Roman games were early versions of Chess or Backgammon. But history is rarely linear, and the evolution of play is a complex branching tree, not a straight line.

The result was that museum gift shops sold "Roman Board Games" with rules that were historically implausible and, worse, terribly boring to play. If a game is boring, it is unlikely to have captivated a civilization for a thousand years. This was the "Playability Gap." If a reconstructed rule set results in a game that always ends in a draw, or where the first player always wins, it is almost certainly incorrect. The Romans were sophisticated; they would not have wasted their time on broken mechanics.

This is where the Ludus Algorithm—the application of AI to the archaeological problem—changed everything.

Part II: The DNA of Play

The breakthrough began with a fundamental shift in how we define a game. In the eyes of the Digital Ludeme Project, a game is not a monolithic entity. It is a compound structure made of smaller, atomic units called ludemes.

Just as a biological organism is defined by its genes, a game is defined by its ludemes.

  • Ludeme A: The board is an 8x8 grid.
  • Ludeme B: Pieces move orthogonally (like a rook).
  • Ludeme C: Capture is custodial (sandwiching an enemy).
  • Ludeme D: The goal is to eliminate all enemy pieces.

By breaking games down into these functional units, researchers could create a "Game Description Language" (GDL) that a computer could understand. The Ludii system, developed by Cameron Browne and his team, serves as a universal game engine capable of modeling almost any game in history, provided it is fed the correct ludemes.
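
To make the ludeme idea concrete, here is a minimal sketch in Python (not the actual Ludii game description language) showing how one candidate rule set might be bundled from atomic ludemes; every field name and option string below is a placeholder rather than project terminology.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class RuleSet:
        """A hypothetical bundle of ludemes describing one candidate game."""
        board: tuple          # grid dimensions, e.g. (8, 8)
        movement: str         # "orthogonal", "diagonal", "knight", ...
        capture: str          # "custodial", "replacement", "leap", ...
        goal: str             # "eliminate_all", "immobilize", ...

    # One guess at Ludus Latrunculorum, assembled ludeme by ludeme:
    latrunculi_guess = RuleSet(board=(8, 8), movement="orthogonal",
                               capture="custodial", goal="eliminate_all")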

The "Ludus Algorithm" is essentially a brute-force evolutionary process. Since we don't know the exact combination of ludemes that made up the Roman game Ludus Latrunculorum, the AI generates thousands of potential variations.

  • Variant 1: 8x8 board, diagonal movement, jump capture.
  • Variant 2: 10x11 board, orthogonal movement, custodial capture.
  • Variant 3: 12x8 board, knight’s move, custodial capture.
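
As a rough illustration of that generation step, the sketch below enumerates candidate rule sets as a cross-product over the undecided ludemes; the option lists are invented for the example and are far smaller than the project's real search space.

    from itertools import product

    # Illustrative option lists for the "free" ludemes of a Latrunculi-like game.
    boards = [(7, 8), (8, 8), (10, 10)]
    movements = ["single_step", "rook_slide", "rook_limited"]
    captures = ["custodial", "replacement"]
    goals = ["eliminate_all", "immobilize"]

    variants = [
        {"board": b, "movement": m, "capture": c, "goal": g}
        for b, m, c, g in product(boards, movements, captures, goals)
    ]
    print(len(variants), "candidate rule sets to hand to the game engine")  # 36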

To a human researcher, testing these thousands of variants would take a lifetime. To the AI, utilizing Monte Carlo Tree Search (MCTS), it takes hours. The AI plays these variants against itself, millions of times, at a speed impossible for biological minds.

But the AI doesn't just play; it judges.

The core innovation of the Ludus methodology is the Heuristic of Quality. The AI evaluates each reconstructed rule set based on a series of metrics that define a "good" game:

  1. Balance: Does Player 1 win 99% of the time? If so, the rules are likely wrong (or the game wouldn't have survived).
  2. Depth: Does the game require strategic thinking, or is it pure luck?
  3. Drama: Does the game oscillate? Are there comebacks, or is the winner decided in the first three moves?
  4. Drawishness: Does the game end in a stalemate too often?
  5. Duration: Is the game over in 3 turns (too short) or 3,000 turns (too long)?

By filtering the thousands of generated rule sets through these "fun filters," the AI discards the broken, boring, and unbalanced versions. What remains is a small cluster of rule sets that are mathematically sound, historically plausible (based on the few literary clues we have), and genuinely engaging to play.
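
A toy version of such a "fun filter" might look like the following, assuming we already hold a batch of simulated results for one rule set (winner and game length per match); the thresholds are invented for illustration, not the project's published cut-offs.

    def quality_metrics(results):
        """results: list of (winner, turns); winner is 1, 2, or 0 for a draw."""
        n = len(results)
        p1_wins = sum(1 for w, _ in results if w == 1) / n
        draws = sum(1 for w, _ in results if w == 0) / n
        avg_turns = sum(t for _, t in results) / n
        return {"imbalance": abs(p1_wins - 0.5), "drawishness": draws,
                "duration": avg_turns}

    def passes_fun_filter(results):
        m = quality_metrics(results)
        return (m["imbalance"] < 0.10        # neither player dominates
                and m["drawishness"] < 0.30  # most games reach a decision
                and 20 <= m["duration"] <= 300)  # not trivial, not interminable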

The computer, in essence, hallucinates the lost rules of Rome, and then applies the Turing Test of "fun" to see which hallucinations are real.

Part III: Case Study — The Mercenary’s Dilemma (Ludus Latrunculorum)

Ludus Latrunculorum (The Game of Little Soldiers/Mercenaries) is perhaps the most famous of all Roman board games. It was the game of strategy, the intellectual equivalent of Greek Petteia, played by emperors and legionaries alike.

The Evidence:

Archaeology gives us grids of varying sizes: 7x8, 8x8, 10x10, and even massive 17x18 boards found in the Roman East. We have the glass counters (usually black and white/blue). We have Ovid’s line about custodial capture ("sandwiching"). We have a cryptic poem called the Laus Pisonis which describes a player sacrificing a piece to break an enemy blockade.

The Problem:

How do the pieces move? Do they move one step at a time (like a King in Chess) or any distance (like a Rook)? How does the game end?

The AI Reconstruction:

When the Digital Ludeme Project fed these variables into the system, the AI quickly identified a problem with the traditional "Rook-move" theory. If pieces can move unlimited distances on a small board with custodial capture, the game becomes incredibly volatile. Pieces are swept off the board too fast. It lacks the "grind" of a battlefield described in the Laus Pisonis.

However, when the AI tested a limited movement rule (pieces move like Rooks but cannot leap over other pieces, or perhaps move only a limited number of squares) combined with a specific victory condition, the "quality" metrics spiked.

The AI also solved the "Suicide Problem." In many basic reconstructions, a player could move their piece between two enemies to attack a third, but strictly speaking, they are now "sandwiched." Does this count as suicide? The AI simulations showed that allowing suicide moves creates a chaotic, illogical game flow. The most stable rule set—and therefore the most likely historical one—introduces a "passive vs. active" state. You are only captured if the enemy makes the move to sandwich you. If you walk into the sandwich yourself, you are safe.

This rule nuance, which is standard in many modern abstract games, was "discovered" by the AI as a necessity for the game to function at a high level.

Furthermore, the AI shed light on the Victory Condition. For years, scholars debated if the goal was to capture all pieces or merely to block the enemy. The Laus Pisonis speaks of "broken ranks" and pieces unable to move. The AI simulations favored the "immobilization" victory (stalemate/blockade) over total annihilation. On the larger boards found in Britain (like the Corbridge board), a game of total annihilation takes hours and often drags into a draw. A game of immobilization creates a tense, claustrophobic tightening of the noose—a feeling that resonates with the military tactics of the Roman maniple system.

The "Ludus Algorithm" suggests that Latrunculi was not a game of slaughter, but a game of maneuver and encirclement. It was a simulation of the acies (battle line), teaching the Roman mind the value of the cohesive unit over the individual hero.

Part IV: The Coriovallum Mystery — The Smoking Gun

If Latrunculi was the AI’s strategic triumph, the Coriovallum Stone was its forensic masterpiece.

In February 2026, the archaeological world was rocked by a study published in Antiquity concerning a peculiar stone artifact found in Heerlen (ancient Coriovallum) in the Netherlands. The stone, discovered a century earlier, bore a grid of incised lines—a rectangle with diagonals and crossing patterns. It looked vaguely like a game board, but it matched no known Roman configuration. It wasn't Latrunculi. It wasn't Duodecim Scripta.

For decades, it sat in a museum storage room, a silent enigma. Was it a game? A surveyor's tool? A decorative paver?

Researchers Walter Crist and his team applied the Ludus methodology. They didn't just ask the AI to invent rules; they asked it to simulate the physics of play.

They scanned the stone with high-resolution photogrammetry, creating a microscopic topographic map of the surface. They found that the grooves were not uniform. Some lines were deeply worn, smoothed by the friction of thousands of fingers and sliding glass pieces. Others were relatively pristine.

This was the physical data set.

The team then tasked the Ludii AI with playing hundreds of different game types—race games, hunt games, blocking games, war games—on this specific board geometry. For every game type, the AI generated a "Heat Map" of piece movement. If the AI played a "Hunt Game" (like Fox and Geese), the center of the board saw the most traffic. If it played a "Race Game," the perimeter lines lit up.

The AI ran thousands of iterations. Then, it compared its digital Heat Maps with the physical Wear Pattern on the 2,000-year-old stone.

The match was startling.

The wear patterns on the Coriovallum stone did not match a race game or a complex war game. They perfectly matched the traffic density of a Blocking Game—specifically, a game related to the modern "Tic-Tac-Toe" but far more complex, similar to the "Three Men's Morris" family or the bead games played in desert cultures.

The AI suggested that the game involved placing pieces to form lines, but with a movement phase that involved sliding pieces to block the opponent. The "deepest grooves" on the stone corresponded exactly to the "choke points" the AI agents fought over most fiercely in the simulation.
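
A minimal sketch of that comparison step, assuming a per-cell wear-depth map from the photogrammetry and a per-cell visit count from simulated play on the same grid; Pearson correlation is one plausible similarity score, not necessarily the statistic the study itself used.

    import numpy as np

    def heatmap_similarity(wear_depth, visit_counts):
        """Correlate measured wear with simulated traffic, cell by cell."""
        wear = (wear_depth - wear_depth.mean()) / wear_depth.std()
        traffic = (visit_counts - visit_counts.mean()) / visit_counts.std()
        return float((wear * traffic).mean())  # Pearson correlation, -1 to 1

    # The candidate game family whose simulated traffic correlates best with
    # the stone's wear pattern is the most plausible reconstruction.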

This was a paradigm shift. We weren't just guessing rules; we were forensically matching the ghosts of Roman gestures with digital agents. The AI had proven that the Romans played a specific type of blocking game in the Low Countries—a game previously undocumented in any text. It revealed a local gaming culture, perhaps a hybrid of Roman and Germanic traditions, that had been invisible to history.

Part V: The Psychology of the Algorithm

What does the Ludus Algorithm tell us about the Roman mind?

When we play these AI-reconstructed games, we are engaging in a form of cognitive archaeology. We are stepping into the neural pathways of an ancient Roman.

The AI reveals that Roman games were highly deterministic. Unlike the Egyptians, who loved games of pure chance (like Senet, where the dice represented the will of the gods), or the Greeks, who preferred pure calculation, the Romans occupied a middle ground, but with a heavy lean toward strategy.

Ludus Latrunculorum, as decoded by AI, is a game of constant risk management. The "Custodial Capture" mechanic means that every piece is potentially vulnerable from two sides. A formation that looks strong can be shattered by a single enemy insertion. This mirrors the Roman military obsession with the integrity of the line. In the Roman legions, if the line breaks, the battle is lost. The game reinforces this cultural imperative: stay together, support your neighbor, do not get isolated.

The AI also highlights the Roman tolerance for complexity. The reconstructed rules for Duodecim Scripta (the ancestor of Backgammon) show a game that is significantly more complex than modern Backgammon. It involved three dice (not two), and arguably a movement system that allowed for more aggressive blocking. The Romans were not playing simple diversions; they were playing mental athletics.

Furthermore, the "Drawishness" metrics in the AI simulations offer a glimpse into Roman social dynamics. Many reconstructed variants of Latrunculi have a high draw rate between expert players. In a modern context, we might find this frustrating. But in a Roman context—gambling in a tavern or killing time in a barrack—a game that allows for a "graceful exit" without a clear loser might have been socially advantageous. It prevents loss of face (and money) among armed soldiers. The AI allows us to test these sociological theories by showing us the frequency of such outcomes.

Part VI: The Phylomemetic Tree

The Digital Ludeme Project does not view these games in isolation. It utilizes Phylomemetics—the study of the evolution of information (memes) similar to phylogenetics in biology.

By mapping the ludemes of Roman games against games from Egypt, Greece, Persia, and the Germanic tribes, the AI constructs a family tree of play.

The algorithm has shown, for instance, the strong genetic link between the Greek game Petteia and Roman Latrunculi. But it also highlights where the mutation occurred. The Romans added the "King" or "Dux" piece (mentioned in some sources and confirmed by AI balance testing as a necessary mechanic for breaking stalemates). This addition of a "Hero Piece" shifts the game from a democratic phalanx battle to a more imperial, centered command structure—a reflection of the shift from Republic to Empire?

Even more fascinating is the link forward in time. The AI analysis shows striking mathematical similarities between the "Blocking" logic of the Coriovallum stone and the later Viking game Hnefatafl (The King’s Table). Did Roman mercenaries serving on the Rhine frontier teach these mechanics to Germanic tribes, who then carried them north to Scandinavia? The "Ludus Algorithm" provides the data to back up this diffusion theory, tracing the migration of rule-sets across the map of Europe like a virus.

Part VII: The Future of Digital Ludology

The success of the Ludus Algorithm in decoding Roman games is just the beginning. The methodology is now being applied to the "Holy Grails" of lost games.

  • The Royal Game of Ur: While we have a tablet describing the rules for this Babylonian game, the translation is debated. AI is currently testing the translations to see which one actually produces a playable game.
  • Senet: The game of the Pharaohs. Was it a race to the afterlife or a strategy game? The AI is currently simulating the movement of pieces on the 3x10 grid to determine the probability of different rule sets.
  • The Knossos Game: The magnificent inlaid board found in the Minoan palace of Knossos. No one has any idea how it was played. It is the Everest of ancient board games. The Ludus Algorithm is currently mapping the geometry of the board to find mathematical symmetries that might hint at the mechanics.

The implications extend beyond history. This technology is being used to design new games. By reversing the process—telling the AI "I want a game that is high drama, low draw rate, and uses a hexagonal board"—the system can generate rule sets that have never existed before. We are using the ghosts of the past to design the play of the future.

Part VIII: Conclusion — The Resurrection of Play

For the visitor to the Roman Forum today, the incised circles on the Basilica Julia are no longer silent. Thanks to the Ludus Algorithm, we can pull out a tablet or smartphone, load a Ludus Latrunculorum app powered by the Digital Ludeme Project, and play the game much as a centurion might have played it around 100 AD.

We can feel the frustration of the "Custodial Capture," the tension of the "Immobilization Victory," and the rhythm of the logic that sharpened the minds of the conquerors of the known world.

The AI has not just reconstructed rules; it has reconstructed an experience. It has proven that while stones erode and papyrus rots, the logic of a good game is immortal. It waits in the mathematical ether, patient and eternal, ready for a mind—biological or artificial—to discover it again.

The Romans often spoke of the Manes—the spirits of the dead who required remembrance. In a strange, digital way, the Ludus Algorithm has performed the ultimate rite of remembrance. It has invited the Romans back to the table, handed them the dice, and asked them, finally, to show us their moves.


Extended Analysis: The Mechanics of Reconstruction

To fully grasp the "Ludus Algorithm," we must look under the hood of the technology. The process is a cycle of Generation, Simulation, and Evaluation.

1. The Generation Phase (The Combinatorial Explosion)

The first challenge is the sheer number of possibilities. A game board of 8x8 squares with 16 pieces per side offers a staggering number of starting positions. Add in variables for movement (orthogonal, diagonal, L-shaped, leaping), capture (replacement, custodial, surrounding, leaping), and turn order, and the "search space" becomes astronomical—larger than the number of atoms in the universe.

The AI cannot test everything. It uses "Constraint Satisfaction." It locks in the "hard facts" (Ludemes that are non-negotiable).

  • Constraint 1: The board found at Corbridge is 7x8.
  • Constraint 2: The pieces are distinct from the board (it is not a marking game like Tic-Tac-Toe, but a dynamic piece game).
  • Constraint 3: Literary evidence confirms "Custodial Capture."

The AI then explores the "soft" variables.

  • Variable A: Can the "Dux" (Leader piece) be captured?
  • Variable B: Does the Dux move differently?
  • Variable C: Is the starting position fixed or do players take turns placing pieces (a "Setup Phase")?

The "Setup Phase" variable was a major discovery. Many modern abstract games start with fixed positions (Chess, Checkers). But the AI found that for Latrunculi, a "Placement Phase" (where players take turns putting pieces on the empty board) creates a much more balanced and strategic game than fixed starting lines. This aligns with the Laus Pisonis text which describes the "gathering of the forces." The AI showed that a fixed start often leads to a "First Player Advantage" of over 60%, whereas a placement phase drops this to a near-perfect 51-49% split.

2. The Simulation Phase (Monte Carlo Tree Search)

Once a rule set is generated, the AI must play it. It uses Monte Carlo Tree Search (MCTS), the same algorithm used by AlphaGo.

MCTS works by "playouts." From the current state of the game, the AI simulates thousands of random games to the end. It doesn't need to know strategy yet; it just needs to know the result. If Move A leads to a win in 60% of random simulations, and Move B leads to a win in 40%, the AI learns that Move A is better.
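
In code, the playout idea reduces to something like the sketch below. It assumes a generic game-state object with legal_moves(), apply(move), winner(), and a to_move attribute; full MCTS adds a selection, expansion, and backpropagation tree on top of these random rollouts.

    import random

    def estimate_move_value(state, move, playouts=1000):
        """Estimate how often a move leads to a win via purely random playouts."""
        mover = state.to_move
        wins = 0
        for _ in range(playouts):
            s = state.apply(move)                      # copy-and-apply the move
            while s.winner() is None:                  # play randomly to the end
                s = s.apply(random.choice(s.legal_moves()))
            if s.winner() == mover:
                wins += 1
        return wins / playouts  # e.g. 0.60 for "Move A", 0.40 for "Move B"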

Over millions of iterations, the AI teaches itself the strategy of the reconstructed game. It becomes a Grandmaster of a game that hasn't been played in 2,000 years.

This "Self-Play" is crucial. If the AI, playing at a Grandmaster level, cannot find a winning strategy for Player 2, the rule set is marked as "Unbalanced" and discarded. If the AI finds that the best strategy is to simply move one piece back and forth forever (a "Cycle"), the rule set is marked as "Broken."

3. The Evaluation Phase (The Phylomemetic Distance)

This is the most "archaeological" part of the algorithm. The AI doesn't just look for good games; it looks for related games.

It calculates the "Edit Distance" between the reconstructed game and known games of the period.

  • How different is this rule set from Greek Petteia?
  • How different is it from Egyptian Seega?

If a reconstructed rule set is a masterpiece of game design but relies on mechanics that do not appear in Europe until many centuries after the fall of Rome (like "castling" or "en passant" in Chess), the AI applies a penalty. The system favors rule sets that use "local" ludemes—mechanics that were culturally available to the Romans.

This prevents the AI from inventing a modern game on an ancient board. It forces the AI to "think Roman."
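
In code, that cultural filter might be as simple as treating each rule set as a set of ludeme tags and combining a Jaccard-style distance to known period games with a penalty for ludemes that have no ancient attestation; the tag names and weight below are purely illustrative.

    def ludeme_distance(candidate, reference):
        """Jaccard distance between two rule sets expressed as sets of ludeme tags."""
        union = candidate | reference
        return 1 - len(candidate & reference) / len(union) if union else 0.0

    def anachronism_penalty(candidate, attested, weight=0.5):
        """Penalise ludemes with no known ancient attestation (e.g. castling)."""
        return weight * len(candidate - attested)

    petteia = {"square_board", "orthogonal_move", "custodial_capture"}
    guess = {"square_board", "orthogonal_move", "custodial_capture", "dux_piece"}
    attested = petteia | {"dux_piece", "dice_race", "placement_phase"}
    score = ludeme_distance(guess, petteia) + anachronism_penalty(guess, attested)
    # Lower scores mean "closer to the games Romans could plausibly have known."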

The Cultural Context: What the AI Misses

While the Ludus Algorithm is a triumph of logic, archaeologists caution that it cannot capture the human element of the rules.

  • The Gambling Factor: Romans were inveterate gamblers. They bet on everything. It is very likely that Latrunculi, though a game of skill, involved betting. Did the rules change when money was on the table? Did they play faster? The AI assumes "Perfect Play"—that players always want to win. But in a tavern, a player might make a suboptimal move to prolong the game, or to humiliate an opponent.
  • The "House Rules": Just as Monopoly is played differently in every household today, Roman games likely had countless regional variations. The "Coriovallum Rules" might have been different from the "Rome Rules." The AI is searching for the Platonic Ideal of the game, but the historical reality was likely a messy spectrum of variants.
  • The Cheating: Roman dice are notoriously biased. Many found in excavations are shaved or weighted to favor the 6 (Venus) or the 1 (Dog). The AI simulates fair play. It cannot currently model the "Meta-Game" of spotting a cheater, which was surely a key part of the Roman gaming experience.

The Lasting Legacy of the Ludus Algorithm

The true beauty of the Ludus Algorithm is that it democratizes history. In the past, only a scholar with a PhD in Classics could access the obscure Latin texts describing these games. Now, the code is open-source. A high school student in Tokyo can download the Ludii system, generate a variant of Latrunculi, and perhaps discover a strategic nuance that the professional archaeologists missed.

We are entering the age of Participatory Archaeology. We are no longer just looking at the past behind glass cases. We are interacting with it. We are running the same algorithms in our brains that the Romans ran in theirs.

When you corner an opponent’s piece in the digital recreation of Ludus Latrunculorum, and you feel that spark of satisfaction, you are feeling an emotion that is 2,000 years old. You are sharing a moment with a ghost. And in that moment, the distance between the silicon chip and the marble stone vanishes completely.

The game is afoot. And the Romans are finally ready to play.
