Add Xiangqi (Chinese Chess) implementation to OpenSpiel #1512

Open
Arahan-kujur wants to merge 6 commits into google-deepmind:master from Arahan-kujur:add-xiangqi-game

Conversation

@Arahan-kujur

This PR adds an implementation of Xiangqi (Chinese Chess) to OpenSpiel.

Features:

  • 10x9 board with standard starting position
  • movement rules for all pieces
  • cannon capture mechanics
  • horse blocking rule
  • flying general rule
  • observation tensor support
  • undo functionality

Includes unit tests for move generation, rule enforcement, and state transitions.

@lanctot (Collaborator) commented Mar 30, 2026

Thanks!

@lanctot (Collaborator) left a comment

Can you add the game to docs/games.md?

@lanctot (Collaborator) left a comment

Can you:

  1. Add the game to pyspiel_test.py
  2. Add a playthrough

See steps 5 & 11 of the "Adding a Game" guidelines here: https://github.com/google-deepmind/open_spiel/blob/master/docs/developer_guide.md#adding-a-game

@lanctot lanctot added the waiting Waiting to hear back from contributor (tests failed or thread reply / code update required) label Mar 31, 2026
@Arahan-kujur (Author)

> Can you:
>
>   1. Add the game to pyspiel_test.py
>   2. Add a playthrough
>
> See steps 5 & 11 of the "Adding a Game" guidelines here: https://github.com/google-deepmind/open_spiel/blob/master/docs/developer_guide.md#adding-a-game

Added Xiangqi to docs/games.md, updated pyspiel_test.py, and added the playthrough file for integration testing. Let me know if anything else needs to be adjusted.

@lanctot lanctot removed the waiting Waiting to hear back from contributor (tests failed or thread reply / code update required) label Mar 31, 2026
@lanctot (Collaborator) commented Mar 31, 2026

> Added Xiangqi to docs/games.md, updated pyspiel_test.py, and added the playthrough file for integration testing. Let me know if anything else needs to be adjusted.

Great, thanks.

Did you see this comment? #1512 (comment)

@Arahan-kujur (Author)

> > Added Xiangqi to docs/games.md, updated pyspiel_test.py, and added the playthrough file for integration testing. Let me know if anything else needs to be adjusted.
>
> Great, thanks.
>
> Did you see this comment? #1512 (comment)

Yes, thanks — I’ve addressed that comment in the latest commit.

@lanctot (Collaborator) commented Apr 1, 2026

Something seems wrong with the usual tests, but the wheels tests passed. So it's probably fine.

Can you make a one-line change (maybe add a comment somewhere) and push it so it can trigger a new invocation of the tests?

@Arahan-kujur (Author)

> Something seems wrong with the usual tests, but the wheels tests passed. So it's probably fine.
>
> Can you make a one-line change (maybe add a comment somewhere) and push it so it can trigger a new invocation of the tests?

I’ve pushed a small change to trigger a new test run.

@lanctot (Collaborator) commented Apr 1, 2026

Ok, I ran into the same issue again, and each time I could not see a log. So this time I'm watching it.

https://github.com/google-deepmind/open_spiel/actions/runs/23845020407/job/69518259486

If it happens again, I suspect one of the simulation tests might be stuck in an infinite loop. In a game where a uniform random policy can go on forever, the test is likely just getting stuck in an endless game.

(To fix these cases, we usually put a limit on the number of moves and call it a draw after that number of moves have been applied without a winner. You'd have to add that to the implementation.)

@lanctot (Collaborator) commented Apr 1, 2026

Hypothesis confirmed. This is the last part of the output:

player 0
Rewards: 0 0
Returns: 0 0
Sum Rewards: 0 0
chose action: 6907 (G(8,4)-(7,4))
State: 
...g.....
.........
.........
.........
.........
.........
.........
....G....
.........
.........
player 1
Rewards: 0 0
Returns: 0 0
Sum Rewards: 0 0
chose action: 282 (g(0,3)-(1,3))
State: 
.........
...g.....
.........
.........
.........
.........
.........
....G....
.........
.........
player 0
Rewards: 0 0
Returns: 0 0
Sum Rewards: 0 0
chose action: 6106 (G(7,4)-(8,4))
State: 
.........
...g.....
.........
.........
.........
.........
.........
.........
....G....
.........
player 1
Rewards: 0 0
Returns: 0 0
Sum Rewards: 0 0
chose action: 1101 (g(1,3)-(2,3))
State: 
.........
.........
...g.....
.........
.........
.........
.........
.........
....G....
.........
player 0
Rewards: 0 0
Returns: 0 0
Sum Rewards: 0 0
chose action: 6925 (G(8,4)-(9,4))
State: 
.........
.........
...g.....
.........
.........
.........
.........
.........
.........
....G....
player 1
Rewards: 0 0
Returns: 0 0
Sum Rewards: 0 0
chose action: 1902 (g(2,3)-(1,3))
State: 
.........
...g.....
.........
.........
.........
.........
.........
.........
.........
....G....
player 0
Rewards: 0 0
Returns: 0 0
Sum Rewards: 0 0
chose action: 7726 (G(9,4)-(8,4))
State: 
.........
...g.....
.........
.........
.........
.........
.........
.........
....G....
.........
player 1
Rewards: 0 0
Returns: 0 0
Sum Rewards: 0 0
chose action: 1083 (g(1,3)-(0,3))
State: 
...g.....
.........
.........
.........
.........
.........
.........
.........
....G....
.........
player 0
Rewards: 0 0
Returns: 0 0
Sum Rewards: 0 0
chose action: 6925 (G(8,4)-(9,4))
State: 
...g.....
.........
.........
.........
.........
.........
.........
.........
.........
....G....
player 1
Rewards: 0 0
Returns: 0 0
Sum Rewards: 0 0
chose action: 282 (g(0,3)-(1,3))
State: 
.........
...g.....
.........
.........
.........
.........
.........
.........
.........
....G....
player 0
Rewards: 0 0
Returns: 0 0
Sum Rewards: 0 0
chose action: 7726 (G(9,4)-(8,4))
State: 
.........
...g.....
.........
.........
.........
.........
.........
.........
....G....
.........
player 1
Rewards: 0 0
Returns: 0 0
Sum Rewards: 0 0
chose action: 1101 (g(1,3)-(2,3))
State: 
.........
.........
...g.....
.........
.........
.........
.........
.........
....G....
.........
player 0
Rewards: 0 0
Returns: 0 0
Sum Rewards: 0 0
chose action: 6907 (G(8,4)-(7,4))
State: 
.........
.........
...g.....
.........
.........
.........
.........
....G....
.........
.........
player 1
Rewards: 0 0
Returns: 0 0
Sum Rewards: 0 0
chose action: 1902 (g(2,3)-(1,3))
State: 
.........
...g.....
.........
.........
.........
.........
.........
....G....
.........
.........
player 0
Rewards: 0 0
Returns: 0 0
Sum Rewards: 0 0
chose action: 6106 (G(7,4)-(8,4))
State: 
.........
...g.....
.........
.........
.........
.........
.........
.........
....G....
.........
player 1
Rewards: 0 0
Returns: 0 0
Sum Rewards: 0 0
chose action: 1101 (g(1,3)-(2,3))
State: 
.........
.........
...g.....
.........
.........
.........
.........
.........
....G....
.........
player 0
Rewards: 0 0
Returns: 0 0
Sum Rewards: 0 0
chose action: 6907 (G(8,4)-(7,4))
State: 
.........
.........
...g.....
.........
.........
.........
.........
....G....
.........
.........
player 1
Rewards: 0 0
Returns: 0 0
Sum Rewards: 0 0
chose action: 1902 (g(2,3)-(1,3))
State: 
.........
...g.....
.........
.........
.........
.........
.........
....G....
.........
.........
player 0
Rewards: 0 0
Returns: 0 0
Sum Rewards: 0 0
chose action: 6106 (G(7,4)-(8,4))
State: 
.........
...g.....
.........
.........
.........
.........
.........
.........
....G....
.........
player 1
Rewards: 0 0
Returns: 0 0
Sum Rewards: 0 0
chose action: 1083 (g(1,3)-(0,3))
State: 
...g.....
.........
.........
.........
.........
.........
.........
.........
....G....
.........
player 0
Rewards: 0 0
Returns: 0 0
Sum Rewards: 0 0
chose action: 6925 (G(8,4)-(9,4))
State: 
...g.....
.........
.........
.........
.........
.........
.........
.........
.........
....G....
player 1
Rewards: 0 0
Returns: 0 0
Sum Rewards: 0 0
chose action: 282 (g(0,3)-(1,3))
State: 
.........
312/312 Test #271: python/pytorch/deep_cfr_pytorch_test.py ....................   Passed  166.26 sec
99% tests passed, 1 tests failed out of 312
Total Test time (real) = 379.87 sec
The following tests FAILED:
	136 - xiangqi_test (Subprocess killed)
Errors while running CTest
+ print_tests_failed
+ echo -e '\033[31mAt least one test failed.\e[0m'
At least one test failed.\e[0m
+ echo 'If this is the first time you have run these tests, try:'
+ echo 'python3 -m pip install -r requirements.txt'
+ echo 'Note that outside a virtualenv, you will need to install the '
+ echo 'system-wide matplotlib: sudo apt-get install python-matplotlib'
+ exit 1
If this is the first time you have run these tests, try:
python3 -m pip install -r requirements.txt
Note that outside a virtualenv, you will need to install the 
system-wide matplotlib: sudo apt-get install python-matplotlib
+ cleanup
+ [[ false == \t\r\u\e ]]

@lanctot (Collaborator) commented Apr 1, 2026

Ok, so the fix is to keep a variable in XiangqiState that counts the total number of moves (see coop_box_pushing for an example: https://github.com/google-deepmind/open_spiel/tree/master/open_spiel/games/coop_box_pushing) and then end the game in a draw once the maximum number of moves has been played.

You don't need to make the maximum a parameter to the game; you can just put a sensible constant at the top of the header file. But then you also need to return that number in MaxGameLength.

@lanctot (Collaborator) commented Apr 1, 2026

move_number_ works too, nice - thanks.

@Arahan-kujur (Author)

> Ok, so the fix is to keep a variable in XiangqiState that counts the total number of moves (see coop_box_pushing for an example: https://github.com/google-deepmind/open_spiel/tree/master/open_spiel/games/coop_box_pushing) and then end the game in a draw once the maximum number of moves has been played.
>
> You don't need to make the maximum a parameter to the game; you can just put a sensible constant at the top of the header file. But then you also need to return that number in MaxGameLength.

Thanks for the clarification. I currently added the move limit check using the existing move_number_ from the base State class. Would you prefer that I instead add a separate move counter in XiangqiState, similar to coop_box_pushing, and return the same constant from MaxGameLength()?

@lanctot (Collaborator) commented Apr 1, 2026

Nope, no need -- this is perfect.

@Arahan-kujur (Author)

> Nope, no need -- this is perfect.

Great, thanks! Is everything good to merge now, or is there anything else you'd like me to adjust?

@lanctot (Collaborator) commented Apr 1, 2026

> Great, thanks! Is everything good to merge now, or is there anything else you'd like me to adjust?

I only took a quick look, but it seems mostly fine. I'm currently on vacation, so I'll import it when I get back to work (after April 14th). There may be 1-2 more things to do if there's something major in the import, but that's rare. So expect it to be merged mid to late April.

@Arahan-kujur (Author)

> Great, thanks! Is everything good to merge now, or is there anything else you'd like me to adjust?
>
> I only took a quick look, but it seems mostly fine. I'm currently on vacation, so I'll import it when I get back to work (after April 14th). There may be 1-2 more things to do if there's something major in the import, but that's rare. So expect it to be merged mid to late April.

Thanks for taking a look, and enjoy your vacation! Happy to make any adjustments if something comes up during the import.
