Support scalar parameters in type inference#157

Closed
fabio-innatera wants to merge 1 commit into neuromorphs:main from fabio-innatera:snntorch-type-inference

Conversation

@fabio-innatera
Collaborator

fabio-innatera commented Jul 21, 2025

I am trying to import a NIR file exported by the latest snnTorch from GitHub, and I got an error in type inference.

It turns out that snnTorch uses numpy scalars like np.float32(4.0) to define parameters, which end up as input_type={'input': ()} (an empty tuple). This means the actual neuron count is not known on the neuron node.

Technically, snnTorch is wrong here: the type annotation for neurons calls for an np.array. But because numpy scalar values like np.float32 have a .shape property, this worked just fine before we introduced type inference.

Thankfully we are already doing type inference, and in most cases we can infer the number of neurons, and therefore input_type and output_type, from the preceding nodes. That is what I am doing here.
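To make the failure mode concrete, here is a minimal illustration (not code from this PR) of why numpy scalars slip through a .shape check:

```python
import numpy as np

# A numpy scalar has a .shape attribute, so code that only checks
# for .shape accepts it -- but the shape is the empty tuple, which
# carries no information about the number of neurons.
scalar = np.float32(4.0)
array = np.full(10, 4.0, dtype=np.float32)

print(scalar.shape)  # ()
print(array.shape)   # (10,)
```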

When node parameters are scalars (e.g., LIF(tau=np.float32(4.0))), the resulting input_type
contains empty tuples, since scalar parameters do not specify the layer size.
In these cases, we need to infer the input dimensions from the preceding nodes.
This is done by checking whether input_type is None, or whether any of its values are None or empty tuples.
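As a sketch, the check described above could look like this (needs_shape_inference is a hypothetical name chosen for illustration, not the actual function in this PR):

```python
def needs_shape_inference(input_type):
    """Return True when shapes must be inferred from preceding nodes.

    Inference is needed when input_type is missing entirely, or when
    any entry is None or an empty tuple (a scalar parameter carries
    no layer size).
    """
    if input_type is None:
        return True
    return any(t is None or tuple(t) == () for t in input_type.values())

print(needs_shape_inference({"input": ()}))     # True  (scalar parameter)
print(needs_shape_inference({"input": (10,)}))  # False (size is known)
```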
@fabio-innatera force-pushed the snntorch-type-inference branch from fd8dbdd to a59ae9a on July 31, 2025 at 11:33
@fabio-innatera changed the title from "Fix type inference for snnTorch: Support scalar parameters in type inference" to "Support scalar parameters in type inference" on Jul 31, 2025
@fabio-innatera marked this pull request as ready for review on July 31, 2025 at 11:51
@fabio-innatera requested a review from Jegp on July 31, 2025 at 12:01
@fabio-innatera
Collaborator Author

Maybe we should also change the type annotations?

import typing

import numpy as np
import numpy.typing as npt

typing.Union[np.floating, npt.NDArray[np.floating]]

@Jegp
Collaborator

Jegp commented Jul 31, 2025

I would argue against this. One of the core motivations when building NIR was to disambiguate as much as possible. We had long discussions on broadcasting when we designed the specs, and the general sentiment was to avoid it because it is, as you say, ambiguous. We wouldn't know how many neurons appear in the graph, which isn't kosher for evaluation purposes.
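The ambiguity can be demonstrated directly: a scalar parameter broadcasts against inputs of any size, so the parameter alone cannot pin down the neuron count (a small illustration, not NIR code):

```python
import numpy as np

# A scalar tau is compatible with 5 neurons and with 100 neurons
# alike under broadcasting, so the graph's neuron count stays unknown.
tau = np.float32(4.0)
assert (tau * np.zeros(5)).shape == (5,)
assert (tau * np.zeros(100)).shape == (100,)
```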

@fabio-innatera
Collaborator Author

I agree that we should enforce usage of np.array instead of loosening the constraints.

I opened a new PR that does just that: #162

Digging around in snntorch, it looks like at least the tests already use array parameters: https://github.com/jeshraghian/snntorch/blob/master/tests/test_nir.py#L60

