
Updating Dimod Sampler #73

Open
anahitamansouri wants to merge 1 commit into dwavesystems:main from anahitamansouri:feature/dimod-conditional-sampling

Conversation

@anahitamansouri
Collaborator

This PR adds:

  • Conditional sampling feature for Dimod sampler.
  • Tests for this feature.
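For context, a torch-free sketch of the clamp-then-sample semantics this PR adds. All names here (`conditional_sample`, `draw_free`) are illustrative stand-ins, not the PR's actual API; plain lists stand in for tensors:

```python
def conditional_sample(nodes, values, mask, draw_free):
    """Clamp masked entries to their given values; sample only the rest."""
    # Variables selected by the mask are fixed ("conditioned on").
    clamped = {n: v for n, v, m in zip(nodes, values, mask) if m}
    # Only the unclamped variables go to the sampler.
    free = [n for n in nodes if n not in clamped]
    sampled = draw_free(free)
    # Merge clamped and sampled values back into the model's node order.
    return [clamped.get(n, sampled.get(n)) for n in nodes]
```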

@anahitamansouri anahitamansouri self-assigned this Apr 7, 2026
@anahitamansouri anahitamansouri added the enhancement New feature or request label Apr 7, 2026
Collaborator

@kevinchern kevinchern left a comment


Did a quick first pass over the main function.

    super().__init__()

def sample(self, x: torch.Tensor | None = None) -> torch.Tensor:
    """Sample from the dimod sampler and return the corresponding tensor.
Collaborator


Update the docstrings to reflect the behaviour of conditional sampling.

Comment on lines +118 to +124

for i in range(x.shape[0]):
    # Fresh BQM
    bqm = dimod.BinaryQuadraticModel.from_ising(h, J)

    # Build conditioning dict
    conditioned = {node: int(x[i, j].item())
                   for j, node in enumerate(nodes) if mask[i, j]}
Collaborator


Can we iterate over x instead of range(x.shape[0])? It seems more natural.
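To illustrate the reviewer's suggestion: torch tensors (like NumPy arrays) yield their rows when iterated, so pairing `zip(x, mask)` replaces indexing with `range(x.shape[0])`. Plain nested lists stand in for the tensors here, and `nodes` is an illustrative stand-in:

```python
# Stand-ins for the tensors x and mask, and for the model's node list.
x = [[1, -1, 1], [-1, -1, 1]]
mask = [[True, False, True], [False, True, False]]
nodes = ["a", "b", "c"]

conditioned_per_row = []
for row, row_mask in zip(x, mask):  # one (values, mask) pair per input row
    conditioned = {node: int(v)
                   for v, node, m in zip(row, nodes, row_mask) if m}
    conditioned_per_row.append(conditioned)
```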

Comment on lines +149 to +153

if bqm.num_variables == 0:
    full = torch.tensor([conditioned[node] for node in nodes],
                        device=device, dtype=torch.float)
    results.append(full)
    continue
Collaborator


I'd move this up in the for-loop to avoid unnecessary computation; continue as soon as possible.
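A minimal sketch of that restructuring: check the fully-conditioned case before building the BQM, so the loop continues before doing any sampler-related work. `nodes`, `rows`, and `masks` are illustrative stand-ins, and `None` stands in for a sampled configuration:

```python
nodes = ["a", "b"]
rows = [[1, -1], [1, -1]]
masks = [[True, True], [True, False]]

results = []
for row, mask in zip(rows, masks):
    conditioned = {n: v for n, v, m in zip(nodes, row, mask) if m}
    if len(conditioned) == len(nodes):
        # Every variable is clamped: nothing left to sample, so skip the
        # (relatively expensive) BQM construction and sampler call.
        results.append([conditioned[n] for n in nodes])
        continue
    # ... build the BQM and sample the remaining variables here ...
    results.append(None)  # placeholder for the sampled configuration
```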

    results.append(full)
    continue

# Sample one configuration per input
Collaborator


This needs to be documented in the docstring. It should also raise a warning if num_reads is supplied and overwritten.

# Sample one configuration per input
sample_kwargs = dict(self._sampler_params)
sample_kwargs["num_reads"] = 1
sample_set = AggregatedSamples.spread(
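The warning the reviewer asks for could be sketched as follows. `build_sample_kwargs` and `sampler_params` are illustrative names (standing in for the logic around `self._sampler_params`), not the PR's actual code:

```python
import warnings

def build_sample_kwargs(sampler_params):
    """Force num_reads=1, warning if a user-supplied value is overridden."""
    sample_kwargs = dict(sampler_params)
    if sample_kwargs.get("num_reads", 1) != 1:
        warnings.warn("num_reads is ignored during conditional sampling; "
                      "one read is drawn per input row.")
    sample_kwargs["num_reads"] = 1
    return sample_kwargs
```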
Collaborator


What's the motivation for this line?

Collaborator Author


Oh, you mean the call to spread? I was following the earlier use case. You're right, it's redundant; I'll remove it.

Comment on lines +163 to +169

sample = sample_set.first.sample

# Reconstruct full sample
full = torch.empty(n_nodes, device=device)
for j, node in enumerate(nodes):
    full[j] = conditioned[node] if node in conditioned else float(sample[node])
results.append(full)
Collaborator


The sample's variable ordering may not be identical to that of grbm.nodes. I'd add a test to verify correct ordering first.
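A sketch of the ordering check the reviewer asks for: because the reconstruction looks values up by node name (not by position), it should match the model's node order even when the sampler returns its dict in a different order. The concrete values below are illustrative:

```python
nodes = ["a", "b", "c"]              # the model's ordering
sample = {"c": 1, "a": -1, "b": 1}   # sampler may return any ordering
conditioned = {"b": 1}               # one variable was clamped

# Reconstruct by name: clamped value if present, sampled value otherwise.
full = [conditioned[n] if n in conditioned else sample[n] for n in nodes]
```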

@@ -22,7 +22,7 @@
from dwave.plugins.torch.samplers.dimod_sampler import DimodSampler

Labels: enhancement (New feature or request)
Projects: None yet
2 participants