Add Chip Design environment#407

Merged
josephdviviano merged 33 commits into master from chip-design
Apr 4, 2026
Conversation

Collaborator
@younik younik commented Oct 7, 2025

This PR introduces a new discrete GFlowNet environment for chip macro placement (ChipDesign), along with supporting helper code, example training scripts, and tests.

Changes Made

  • New environment: ChipDesign environment + ChipDesignStates, backed by Circuit Training's plc_wrapper_main via a local client wrapper.
  • Helper package: Chip-design helper package with test netlists/placements, container/binary setup scripts, and container build helpers.
  • Training scripts & tests: Tutorial training scripts, smoke tests, and environment tests.
  • Bug fix: get_blockages_from_comments in utils.py now normalizes a single str input to [str] before iterating, preventing character-by-character iteration when a single filename is passed.
  • Resource management: Added close() and __del__() methods to ChipDesign to properly shut down the PlacementCost subprocess and prevent resource leaks across tests or repeated runs.
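
The resource-management pattern described above can be sketched as follows. This is a minimal illustration, not the actual implementation; the `plc` attribute name matches the PR, but the guard details are assumptions.

```python
# Sketch of the close()/__del__ pattern for shutting down the PlacementCost
# subprocess. Guards against double-close, since __del__ may run after an
# explicit close() (or before __init__ finished assigning self.plc).
class ChipDesignSketch:
    def __init__(self, plc):
        self.plc = plc  # wraps the PlacementCost subprocess

    def close(self):
        # getattr guard: safe even if __init__ failed before setting self.plc.
        plc = getattr(self, "plc", None)
        if plc is not None:
            plc.close()
            self.plc = None

    def __del__(self):
        # Delegate to close() so garbage collection also releases the subprocess.
        self.close()
```

Calling `close()` twice is a no-op the second time, which is what makes the `__del__` delegation safe.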

Updates (April 2026)

Merged with master

  • Resolved all conflicts with latest master (583 commits)
  • Adapted ChipDesign env to master's lazy mask caching API (DiscreteStates property-based masks, debug flag, conditions/device/debug kwargs)

Singularity/Apptainer support

  • singularity_image parameter on PlacementCost, ChipDesign, and training CLI (--singularity_image auto)
  • _resolve_plc_binary() auto-detection: env var > native binary > apptainer wrapper
  • build_container.py and plc_wrapper.def for building .sif images
  • Compatible with both Singularity and Apptainer
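
The auto-detection precedence (env var > native binary > apptainer wrapper) could look roughly like the sketch below. The environment variable name, wrapper path, and error message are illustrative assumptions, not the real `_resolve_plc_binary` implementation.

```python
import os
import shutil


def resolve_plc_binary_sketch(default_name="plc_wrapper_main"):
    """Illustrative precedence: env var > native binary > container wrapper."""
    # 1. Explicit override via environment variable (hypothetical name).
    env_path = os.environ.get("PLC_WRAPPER_MAIN")
    if env_path:
        return env_path
    # 2. Native binary discoverable on PATH.
    native = shutil.which(default_name)
    if native:
        return native
    # 3. Fall back to a Singularity/Apptainer wrapper if a runtime exists.
    if shutil.which("apptainer") or shutil.which("singularity"):
        return "apptainer-wrapper"  # placeholder for the .sif-based wrapper
    raise FileNotFoundError("plc_wrapper_main not found; run the setup script")
```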

Coordinate descent orientation optimization

  • optimize_orientations() sweeps all 8 DEF/LEF orientations per macro via greedy coordinate descent (matching Google's cd_finetune approach)
  • Called in log_reward after analytical placement, controlled by cd_finetune=True (default)
  • Consistently improves placement quality (~3-5% reward gain)
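
The greedy coordinate-descent sweep can be sketched as below. `cost_fn` stands in for the real plc cost evaluation and orientation update calls; the real `optimize_orientations()` operates on the plc, not a dict.

```python
# One greedy coordinate-descent sweep: for each macro in turn, try all 8
# DEF/LEF orientations and keep the one that minimizes the cost function.
ORIENTATIONS = ["N", "S", "E", "W", "FN", "FS", "FE", "FW"]


def optimize_orientations_sketch(macros, cost_fn):
    assignment = {m: "N" for m in macros}
    for m in macros:  # coordinate descent: optimize one macro at a time
        best = min(ORIENTATIONS, key=lambda o: cost_fn({**assignment, m: o}))
        assignment[m] = best  # commit the best orientation before moving on
    return assignment
```

Because each macro is committed before the next is swept, later macros see earlier choices, which is what distinguishes coordinate descent from an independent per-macro search.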

Reward normalization

  • reward_norm parameter: None (raw), "zscore", or "minmax" — estimated from random placement samples at init
  • reward_temper for temperature scaling (higher = sharper distribution)
  • CostStats.precompute() for batch pre-computation across multiple netlists — intended workflow for conditional GFlowNets
  • Without normalization reward ratio is ~1.3x; with zscore ~33x; with minmax ~2.7x
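
A minimal sketch of the normalization modes, assuming the stats dict holds mean/std/min/max estimated from random placement samples (the real parameter and attribute names may differ):

```python
def normalize_reward_sketch(cost, mode, stats):
    """Map a placement cost to a (log-)reward under the three reward_norm modes."""
    if mode is None:
        return -cost  # raw: reward is just negative cost
    if mode == "zscore":
        # Standardize against random-placement statistics; lower cost => higher reward.
        return -(cost - stats["mean"]) / stats["std"]
    if mode == "minmax":
        # Rescale so the best observed cost maps to 1 and the worst to 0.
        return (stats["max"] - cost) / (stats["max"] - stats["min"])
    raise ValueError(f"unknown reward_norm mode: {mode!r}")
```

A `reward_temper`-style temperature would then multiply this value before exponentiation, sharpening the distribution as it grows.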

Bug fixes

  • Fixed CUDA/CPU device mismatch in States._compare (.to() is a no-op on the same device, torch.compile safe)
  • Fixed sink state handling in update_masks
  • Fixed _apply_state_to_plc sentinel guard (loc >= 0 instead of loc != -1)
  • Fixed duplicate [tool.pytest] / [tool.pytest.ini_options] in pyproject.toml
  • Added graceful FileNotFoundError skip in tests when binary unavailable
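
The `_compare` device fix rests on a standard PyTorch property: `Tensor.to()` returns the tensor unchanged when the target device (and dtype) already match, so it can be called unconditionally. A simplified illustration (the real `_compare` lives in `src/gfn/states.py` and differs in detail):

```python
import torch


def compare_sketch(batch_tensor: torch.Tensor, other: torch.Tensor) -> torch.Tensor:
    # Move the ClassVar-style reference tensor (e.g. s0/sf) to the batch
    # tensor's device; this is a no-op when devices already match, so it is
    # safe and torch.compile-friendly to do unconditionally.
    other = other.to(batch_tensor.device)
    return (batch_tensor == other).all(dim=-1)
```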

younik and others added 15 commits October 7, 2025 16:45
- Remove 11MB plc_wrapper_main binary from git
- Add setup_plc.sh to download from Google Cloud Storage
- Add Apptainer/Singularity .def and wrapper for HPC environments
- Auto-detect binary in plc_client.py (env var > native > apptainer)
- Add medium synthetic netlist (15 macros, 8x8 grid, 1000x1000 canvas)
- Add train_chip_design_medium.py with replay buffer + cost logging
- Update train_chip_design.py with per-iteration cost/logZ logging
- Skip chip design tests gracefully when binary unavailable
- Update .gitignore to track setup scripts, ignore binary + .sif

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Resolves conflicts in gym/__init__.py, states.py, test_environments.py,
and test_scripts.py. Adapts ChipDesign env to master's lazy mask caching
API (DiscreteStates property-based masks, debug flag).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add singularity_image param to PlacementCost, create_placement_cost,
  ChipDesign, and train_chip_design.py (supports None, "auto", or path)
- Add plc_wrapper.def and build_container.py for building .sif images
- Adapt ChipDesign to master's lazy mask caching API (property-based
  forward_masks/backward_masks, debug flag, conditions/device kwargs)
- Override _step/_backward_step/reset to call update_masks explicitly
- Guard update_masks against sink states
- Fix _apply_state_to_plc to use loc >= 0 (handles sf sentinel -2)
- Fix duplicate pytest config sections in pyproject.toml
- Add *.sif to .gitignore

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Move `other` (s0/sf ClassVars) to the instance tensor's device in
_compare via .to(), which is a no-op when devices already match and
torch.compile-safe. Removes the now-unnecessary ensure_same_device
debug guards in is_initial_state/is_sink_state.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Merges fd720e7 which replaces the bundled binary with a download
script, adds apptainer wrapper, medium netlist, and graceful test
skips. Keeps our singularity_image param and _find_singularity helper
alongside the new _resolve_plc_binary auto-detection.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@josephdviviano josephdviviano marked this pull request as ready for review April 3, 2026 08:23
josephdviviano and others added 2 commits April 3, 2026 04:28
The Google Cloud Storage download URL is no longer accessible (403).
Re-add the binary from git history until a reliable download source
is available.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Contributor

Copilot AI left a comment
Pull request overview

This PR introduces a new discrete GFlowNet environment for chip macro placement (“ChipDesign”), along with supporting helper code, example training scripts, and smoke tests to exercise the environment.

Changes:

  • Add ChipDesign environment + ChipDesignStates, backed by Circuit Training’s plc_wrapper_main via a local client wrapper.
  • Add chip-design helper package (test netlists/placements, container/binary setup scripts, container build helpers).
  • Add tutorial training scripts and integrate new smoke tests + environment tests.

Reviewed changes

Copilot reviewed 20 out of 22 changed files in this pull request and generated 9 comments.

Show a summary per file
File Description
src/gfn/gym/chip_design.py New ChipDesign environment and state class.
src/gfn/gym/helpers/chip_design/* PlacementCost client + Circuit Training helper utilities, test data, and setup/container scripts.
src/gfn/gym/__init__.py Export ChipDesign from gfn.gym.
tutorials/examples/train_chip_design.py New minimal training example for ChipDesign.
tutorials/examples/train_chip_design_medium.py Medium problem example with replay buffer.
tutorials/examples/test_scripts.py Add ChipDesign smoke test wiring.
testing/test_environments.py Add environment-level test for ChipDesign.
src/gfn/states.py Adjust device handling in _compare() used by state comparisons.
pyproject.toml Move pytest config to tool.pytest.ini_options.
.gitignore Allow committing chip-design setup scripts; ignore downloaded binaries/images.


Comment on lines +197 to +218
        for i in range(len(states)):
            state_tensor = states.tensor[i]

            # Skip sink states — they have no valid actions.
            if states.is_sink_state[i]:
                continue

            current_node_idx = states.current_node_idx[i].item()

            if current_node_idx >= self.n_macros:  # All macros placed
                fwd[i, -1] = True  # Only exit is possible
            else:
                self._apply_state_to_plc(state_tensor)
                node_to_place = self._hard_macro_indices[int(current_node_idx)]
                mask = self.plc.get_node_mask(node_to_place)
                mask = torch.tensor(mask, dtype=torch.bool, device=self.device)
                fwd[i, : self.n_grid_cells] = mask

            if current_node_idx > 0:
                last_placed_loc = state_tensor[int(current_node_idx - 1)].item()
                assert last_placed_loc >= 0, "Last placed location should be >= 0"
                bwd[i, int(last_placed_loc)] = True

Copilot AI Apr 3, 2026
update_masks iterates for i in range(len(states)), but len(states) is prod(states.batch_shape). Indexing states.tensor[i] / fwd[i] / states.is_sink_state[i] only works for 1D batches; for multi-dimensional batch_shape this will raise IndexError or produce incorrect masks. A robust fix is to work on states.flatten() (and reshape back) or to iterate over states.tensor.view(-1, ...) and similarly view/reshape the masks.

Suggested change
        for i in range(len(states)):
            state_tensor = states.tensor[i]
            # Skip sink states — they have no valid actions.
            if states.is_sink_state[i]:
                continue
            current_node_idx = states.current_node_idx[i].item()
            if current_node_idx >= self.n_macros:  # All macros placed
                fwd[i, -1] = True  # Only exit is possible
            else:
                self._apply_state_to_plc(state_tensor)
                node_to_place = self._hard_macro_indices[int(current_node_idx)]
                mask = self.plc.get_node_mask(node_to_place)
                mask = torch.tensor(mask, dtype=torch.bool, device=self.device)
                fwd[i, : self.n_grid_cells] = mask
            if current_node_idx > 0:
                last_placed_loc = state_tensor[int(current_node_idx - 1)].item()
                assert last_placed_loc >= 0, "Last placed location should be >= 0"
                bwd[i, int(last_placed_loc)] = True

        flat_states_tensor = states.tensor.view(-1, self.n_macros)
        flat_is_sink_state = states.is_sink_state.view(-1)
        flat_current_node_idx = states.current_node_idx.view(-1)
        flat_fwd = fwd.view(-1, self.n_actions)
        flat_bwd = bwd.view(-1, self.n_actions - 1)
        for i in range(len(states)):
            state_tensor = flat_states_tensor[i]
            # Skip sink states — they have no valid actions.
            if flat_is_sink_state[i]:
                continue
            current_node_idx = flat_current_node_idx[i].item()
            if current_node_idx >= self.n_macros:  # All macros placed
                flat_fwd[i, -1] = True  # Only exit is possible
            else:
                self._apply_state_to_plc(state_tensor)
                node_to_place = self._hard_macro_indices[int(current_node_idx)]
                mask = self.plc.get_node_mask(node_to_place)
                mask = torch.tensor(mask, dtype=torch.bool, device=self.device)
                flat_fwd[i, : self.n_grid_cells] = mask
            if current_node_idx > 0:
                last_placed_loc = state_tensor[int(current_node_idx - 1)].item()
                assert last_placed_loc >= 0, "Last placed location should be >= 0"
                flat_bwd[i, int(last_placed_loc)] = True

Collaborator

on it!

Comment on lines +253 to +281
    def step(self, states: ChipDesignStates, actions: Actions) -> ChipDesignStates:
        """Performs a forward step in the environment."""
        new_tensor = states.tensor.clone()

        non_exit_mask = ~actions.is_exit
        if torch.any(non_exit_mask):
            rows = torch.arange(len(states), device=self.device)[non_exit_mask]
            cols = states.current_node_idx[non_exit_mask]
            new_tensor[rows, cols] = actions.tensor[non_exit_mask].squeeze(-1)

        if torch.any(actions.is_exit):
            new_tensor[actions.is_exit] = self.sf

        new_current_node_idx = states.current_node_idx.clone()
        new_current_node_idx[non_exit_mask] += 1

        return self.States(tensor=new_tensor, current_node_idx=new_current_node_idx)

    def backward_step(
        self, states: ChipDesignStates, actions: Actions
    ) -> ChipDesignStates:
        """Performs a backward step in the environment."""
        new_tensor = states.tensor.clone()
        rows = torch.arange(len(states), device=self.device)
        cols = states.current_node_idx - 1
        new_tensor[rows, cols] = -1

        new_current_node_idx = states.current_node_idx - 1
        return self.States(tensor=new_tensor, current_node_idx=new_current_node_idx)

Copilot AI Apr 3, 2026

step()/backward_step() assume states.tensor is 2D (batch, n_macros) and use rows = torch.arange(len(states)) plus new_tensor[rows, cols]. This breaks when batch_shape has more than one dimension (since len(states) is prod(batch_shape) and the tensor is higher-rank). Either explicitly restrict ChipDesign to 1D batches (and assert in reset/step) or rewrite these ops to operate on a flattened view and then reshape back.

Collaborator

on it!

Comment on lines +126 to +131
        self.plc = placement_util.create_placement_cost(
            netlist_file=netlist_file,
            init_placement=init_placement,
            singularity_image=singularity_image,
        )
        self.std_cell_placer_mode = std_cell_placer_mode

Copilot AI Apr 3, 2026

ChipDesign creates a long-lived PlacementCost subprocess (self.plc = ...) but the environment never closes it. This can leak processes/sockets across tests or repeated runs. Consider adding an explicit close()/__del__ (and optionally __enter__/__exit__) that calls self.plc.close() and using it in tests/scripts.

Collaborator

@copilot can you submit a PR to do the simpler case (add explicit close / del)

Contributor

Done in ef01743. Added close() (which guards against double-close with hasattr/None check) and __del__() that delegates to it, so the subprocess is cleaned up both on explicit teardown and on garbage collection.

Comment on lines +133 to +140
def get_blockages_from_comments(
    filenames: Union[str, List[str]],
) -> Optional[List[List[float]]]:
    """Returns list of blockages if they exist in the file's comments section."""
    for filename in filenames:
        if not filename:
            continue
        blockages = []

Copilot AI Apr 3, 2026

get_blockages_from_comments accepts Union[str, List[str]] but then iterates for filename in filenames:. If a single string is passed, this will iterate over characters instead of filenames. Normalize filenames to a list when it is a str to match the annotated API.

Collaborator

@copilot can you push the fix

Contributor

Fixed in b290752. Added if isinstance(filenames, str): filenames = [filenames] before the loop to normalize a single string to a list, preventing character-by-character iteration.
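
The normalization fix described above can be reduced to this toy version (only the input handling is shown; the blockage parsing itself is omitted):

```python
from typing import List, Union


def normalize_filenames_sketch(filenames: Union[str, List[str]]) -> List[str]:
    """Toy version of the fix: accept a single path or a list of paths."""
    if isinstance(filenames, str):
        # Without this guard, iterating a lone path would yield its characters.
        filenames = [filenames]
    return [f for f in filenames if f]  # drop empty entries, as the loop does
```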

Comment on lines +462 to +464
    return min(
        x, y, canvas_width - x - canvas_width, canvas_height - y - canvas_height
    )

Copilot AI Apr 3, 2026

distance_to_edge() computes canvas_width - x - canvas_width (and similarly for height), which simplifies to -x / -y and makes the distance negative/unrelated to the edge. This will misorder nodes in legalization_order. Likely intended: canvas_width - x and canvas_height - y (or similar).

Suggested change
    return min(
        x, y, canvas_width - x - canvas_width, canvas_height - y - canvas_height
    )
    return min(x, y, canvas_width - x, canvas_height - y)

Collaborator

@younik I agree with copilot here - what was the logic behind this math?

Comment on lines +851 to +873
    states = env.reset(batch_shape=BATCH_SIZE)
    assert states.tensor.shape == (BATCH_SIZE, env.n_macros)
    assert torch.all(states.tensor == -1)

    # Place macros
    for i in range(env.n_macros):
        actions = env.actions_from_tensor(format_tensor([i] * BATCH_SIZE))
        expected_tensor = states.tensor.clone()
        states = env._step(states, actions)
        expected_tensor[..., i] = i
        assert torch.equal(states.tensor, expected_tensor)

    # Exit action (valid)
    actions = env.actions_from_tensor(format_tensor([env.n_actions - 1] * BATCH_SIZE))
    final_states = env._step(states, actions)
    assert torch.all(final_states.is_sink_state)

    # Check rewards
    assert isinstance(final_states, ChipDesignStates)
    rewards = env.log_reward(final_states)
    assert torch.all(rewards == rewards[0])


Copilot AI Apr 3, 2026

test_chip_design creates ChipDesign (which spawns a PlacementCost subprocess) but never closes it. To avoid leaking processes across the test suite, add a teardown (e.g., try/finally calling env.plc.close() or env.close() if added).

Suggested change
    states = env.reset(batch_shape=BATCH_SIZE)
    assert states.tensor.shape == (BATCH_SIZE, env.n_macros)
    assert torch.all(states.tensor == -1)
    # Place macros
    for i in range(env.n_macros):
        actions = env.actions_from_tensor(format_tensor([i] * BATCH_SIZE))
        expected_tensor = states.tensor.clone()
        states = env._step(states, actions)
        expected_tensor[..., i] = i
        assert torch.equal(states.tensor, expected_tensor)
    # Exit action (valid)
    actions = env.actions_from_tensor(format_tensor([env.n_actions - 1] * BATCH_SIZE))
    final_states = env._step(states, actions)
    assert torch.all(final_states.is_sink_state)
    # Check rewards
    assert isinstance(final_states, ChipDesignStates)
    rewards = env.log_reward(final_states)
    assert torch.all(rewards == rewards[0])

    try:
        states = env.reset(batch_shape=BATCH_SIZE)
        assert states.tensor.shape == (BATCH_SIZE, env.n_macros)
        assert torch.all(states.tensor == -1)
        # Place macros
        for i in range(env.n_macros):
            actions = env.actions_from_tensor(format_tensor([i] * BATCH_SIZE))
            expected_tensor = states.tensor.clone()
            states = env._step(states, actions)
            expected_tensor[..., i] = i
            assert torch.equal(states.tensor, expected_tensor)
        # Exit action (valid)
        actions = env.actions_from_tensor(
            format_tensor([env.n_actions - 1] * BATCH_SIZE)
        )
        final_states = env._step(states, actions)
        assert torch.all(final_states.is_sink_state)
        # Check rewards
        assert isinstance(final_states, ChipDesignStates)
        rewards = env.log_reward(final_states)
        assert torch.all(rewards == rewards[0])
    finally:
        env.plc.close()

Comment on lines 178 to +194

[tool.pytest.ini_options]
pythonpath = [
"tutorials",
"testing",
]
reportOptionalMemberAccess = "error"
filterwarnings = [
"ignore::pytest.PytestConfigWarning",
]
reportUnusedImport = true
reportUnusedVariable = true
reportOptionalSubscript = true
reportOptionalCall = true
reportOptionalIterable = true
reportOptionalContextManager = true
reportOptionalOperand = true

Copilot AI Apr 3, 2026

The [tool.pytest.ini_options] section contains several report* settings that belong to Pyright, not Pytest, so they will be ignored (and may emit/require suppressing config warnings). Consider keeping only valid pytest ini options here (e.g., pythonpath, filterwarnings) and moving/removing the report* settings under [tool.pyright] where they take effect.

Suggested change
[tool.pytest.ini_options]
pythonpath = [
"tutorials",
"testing",
]
reportOptionalMemberAccess = "error"
filterwarnings = [
"ignore::pytest.PytestConfigWarning",
]
reportUnusedImport = true
reportUnusedVariable = true
reportOptionalSubscript = true
reportOptionalCall = true
reportOptionalIterable = true
reportOptionalContextManager = true
reportOptionalOperand = true
reportOptionalMemberAccess = "error"
reportUnusedImport = true
reportUnusedVariable = true
reportOptionalSubscript = true
reportOptionalCall = true
reportOptionalIterable = true
reportOptionalContextManager = true
reportOptionalOperand = true
[tool.pytest.ini_options]
pythonpath = [
"tutorials",
"testing",
]
filterwarnings = [
"ignore::pytest.PytestConfigWarning",
]

Collaborator

@copilot can you make a new PR that only changes pyproject.toml and submits it to master? This is a distinct issue that does not belong in this pr.

josephdviviano and others added 2 commits April 3, 2026 13:48
The smoke test was failing because ChipDesignArgs didn't include the
singularity_image field added to train_chip_design.py's argparse.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
…rocess leak

Agent-Logs-Url: https://github.com/GFNOrg/torchgfn/sessions/80dd5722-f393-4868-8932-83d335fb19ac

Co-authored-by: josephdviviano <4142570+josephdviviano@users.noreply.github.com>
Contributor

Copilot AI commented Apr 3, 2026

@josephdviviano I've opened a new pull request, #505, to work on those changes. Once the pull request is ready, I'll request review from you.

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
codecov bot commented Apr 3, 2026

Codecov Report

❌ Patch coverage is 54.48878% with 365 lines in your changes missing coverage. Please review.
✅ Project coverage is 70.92%. Comparing base (e6eddda) to head (ba6f878).
⚠️ Report is 46 commits behind head on master.

Files with missing lines Patch % Lines
src/gfn/gym/helpers/chip_design/utils.py 40.59% 233 Missing and 26 partials ⚠️
src/gfn/gym/chip_design.py 82.05% 38 Missing and 4 partials ⚠️
src/gfn/gym/helpers/chip_design/build_container.py 0.00% 32 Missing ⚠️
src/gfn/gym/helpers/chip_design/plc_client.py 67.67% 24 Missing and 8 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #407      +/-   ##
==========================================
- Coverage   72.49%   70.92%   -1.57%     
==========================================
  Files          55       59       +4     
  Lines        8522     9339     +817     
  Branches     1090     1270     +180     
==========================================
+ Hits         6178     6624     +446     
- Misses       1957     2287     +330     
- Partials      387      428      +41     
Files with missing lines Coverage Δ
src/gfn/states.py 81.13% <100.00%> (+0.13%) ⬆️
src/gfn/gym/helpers/chip_design/build_container.py 0.00% <0.00%> (ø)
src/gfn/gym/helpers/chip_design/plc_client.py 67.67% <67.67%> (ø)
src/gfn/gym/chip_design.py 82.05% <82.05%> (ø)
src/gfn/gym/helpers/chip_design/utils.py 40.59% <40.59%> (ø)

... and 5 files with indirect coverage changes


josephdviviano and others added 10 commits April 3, 2026 14:46
update_masks, step, and backward_step assumed 1D batch_shape, using
flat indexing (states.tensor[i]) that would fail for higher-rank
batches. Now flatten to (n, n_macros) before iterating/indexing and
reshape back to (*batch_shape, ...) on return. Also handle the empty
batch edge case in step().

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Implements optimize_orientations() which sweeps all 8 DEF/LEF
orientations (N/S/E/W/FN/FS/FE/FW) per macro via greedy coordinate
descent, matching Google's cd_finetune approach. Called in log_reward
after analytical placement. Enabled by default via cd_finetune=True.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds --cd_finetune / --no-cd_finetune CLI flag (default: enabled).
Final evaluation now compares placements with and without CD
orientation optimization, showing per-placement and mean reward gains.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds reward_norm, reward_temper, and cost_stats params to ChipDesign.
Normalization modes: None (raw -cost), "zscore", or "minmax", all
estimated from random placement samples at init.

CostStats.precompute() allows batch pre-computation across multiple
netlists, returning a dict of stats keyed by netlist path. These can
be passed to ChipDesign via cost_stats= to avoid recomputation, which
is the intended workflow for conditional GFlowNets across netlists.

Without normalization, reward ratio across placements is ~1.3x.
With zscore: ~33x. With minmax: ~2.7x.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…asks

New testing/test_chip_design.py with comprehensive coverage:
- ChipDesignStates: init, clone, getitem, setitem, extend, stack (9)
- Environment core: reset, step, backward, roundtrips (8)
- Masks: initial, all-placed, sink, mid-trajectory (4)
- Sentinel handling: unplaced, sf, partial (3)
- Reward & normalization: finite, identical, zscore, minmax, temper (7)
- PLC resolution: env var, not found, singularity detection (4)
- File parsing: attributes, sizes, blockages, cost_info (8)
- Integration: lifecycle, ordering, orientations roundtrip (5)

Module-scoped fixture reuses one PLC subprocess. Only one log_reward
call total. Pure Python tests (12) always run; binary tests (36) skip
gracefully. Full suite runs in ~58s.

Also fixes empty batch crash in update_masks (reshape(0, -1) ambiguity).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@josephdviviano josephdviviano merged commit ea2931a into master Apr 4, 2026
6 checks passed
@josephdviviano
Collaborator

@younik feel free to do a post hoc check

