Support scalar parameters in type inference #157
Closed
fabio-innatera wants to merge 1 commit into neuromorphs:main
Conversation
When node parameters are scalars (e.g., LIF(tau=np.float32(4.0))), the resulting input_type contains tuples of length 0, since scalar parameters don't specify the layer size. In these cases, we need to infer the input dimensions from the preceding nodes. This is done by checking if the input_type is None or if any of the values are None or empty tuples.
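The check described above can be sketched as follows. This is a minimal illustration, not NIR's actual implementation; the function name and the dict-of-shapes representation of `input_type` are assumptions based on the description:

```python
import numpy as np

def needs_inference(input_type):
    """Return True when a node's input shape must be inferred from its
    predecessors: the input_type dict is missing entirely, or any of its
    shape values is None or an empty tuple (the scalar-parameter case)."""
    if input_type is None:
        return True
    return any(v is None or tuple(v) == () for v in input_type.values())

# A scalar parameter such as np.float32(4.0) has shape (), so the
# derived input_type carries no layer-size information:
assert np.float32(4.0).shape == ()
assert needs_inference({"input": ()})         # scalar-derived: must infer
assert not needs_inference({"input": (10,)})  # array-derived: size known
```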
Collaborator (Author):
Maybe we should also change the type annotations?

import typing
import numpy.typing as npt

typing.Union[np.floating, npt.NDArray[np.floating]]
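The suggested annotation could be used in a signature like the following. This is a sketch only; the `lif` function and its `tau` parameter are illustrative stand-ins, not NIR's actual API:

```python
import typing
import numpy as np
import numpy.typing as npt

# The proposed loosened annotation: accept a numpy scalar or an array
ScalarOrArray = typing.Union[np.floating, npt.NDArray[np.floating]]

def lif(tau: ScalarOrArray) -> tuple:
    """Illustrative only: both scalar and array satisfy the annotation,
    but only the array's shape encodes a neuron count."""
    return np.shape(tau)

assert lif(np.float32(4.0)) == ()                        # no size info
assert lif(np.array([4.0, 4.0], dtype=np.float32)) == (2,)  # two neurons
```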
Collaborator:
I would argue against this. One of the core motivations when building NIR was to disambiguate as much as possible. We had long discussions on broadcasting when we designed the specs, and the general sentiment was to avoid it because it is, as you say, ambiguous. We wouldn't know how many neurons appear in the graph, which isn't kosher for evaluation purposes.
Collaborator (Author):
I agree that we should enforce usage of np.array instead of loosening the constraints. I opened a new PR that does just that: #162. Digging around in snntorch, it looks like at least the tests already use array parameters: https://github.com/jeshraghian/snntorch/blob/master/tests/test_nir.py#L60
I am trying to import a NIR file exported by the latest snnTorch from GitHub, and got an error in type inference.
Turns out that snnTorch uses numpy scalars like np.float32(4.0) to define parameters, which end up as input_type={'input': ()} (an empty tuple). This implies that the actual neuron count is not known on the neuron node. Technically, snnTorch is wrong here: the type annotation for neurons calls for an np.array. But because numpy scalar values like np.float32 have a .shape property, this actually worked just fine before we introduced type inference.
Thankfully we are already doing type inference, and in most cases we can infer the number of neurons, and therefore input_type and output_type, from preceding nodes. That is what I am doing here.
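The .shape behavior that let scalars slip through can be checked directly with plain numpy (a minimal sketch of the quirk, not NIR code):

```python
import numpy as np

# numpy scalars expose .shape just like arrays, so code that only reads
# .shape accepted them silently before type inference was introduced.
scalar = np.float32(4.0)
array = np.array([4.0, 4.0], dtype=np.float32)

assert scalar.shape == ()    # zero-dimensional: carries no neuron count
assert array.shape == (2,)   # one-dimensional: two neurons
```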