Conversation
Codecov Report

✅ All modified and coverable lines are covered by tests.

@@           Coverage Diff           @@
##             main     #189   +/-   ##
=======================================
  Coverage   77.50%   77.50%
=======================================
  Files           3        3
  Lines          40       40
=======================================
  Hits           31       31
  Misses          9        9
Thanks @Andrea-gm, this is really great. This failed pre-commit. Please install pre-commit in your local env, then run it from the top-level directory of the package to get the checks to run locally. More info can be found at scikit-package.github.io/scikit-package. There is also info there about how to use the news feature. I don't mind if you don't add a news item, but if you are willing, it helps us build the changelog when we release. One final comment: I think this is a user-facing variable, and so I wonder if we want the trailing underscore. Why did you make that choice?
Hi Simon, thanks for the suggestion, will do. I have also added the news item. I used the trailing underscore to follow the standard scikit-learn convention for attributes that are estimated or determined during the fit process. Since converged_ is a result of the optimization (like components_, weights_, and stretch_, which are already in the class), I thought it best to maintain consistency with the existing attributes and the broader scientific Python ecosystem. I am happy to remove the underscore if you prefer a different style for status flags, but I aimed to match the current pattern of the codebase.
It was giving me errors due to the use of Python 3.11.
sbillinge left a comment
This is maybe a question for @john-halloran as well as @Andrea-gm but I am not loving that self.converged_ has a trailing underscore. I was wondering where that choice came from. I see that there are other variables with a similar pattern (I see self.components_ for example) so perhaps @Andrea-gm is following a pattern from @john-halloran but can we discuss this quickly? A trailing underscore is often reserved for making variables distinct from builtin quantities, but that doesn't seem to be an issue here.
Hi all. I originally added the underscores to follow the PEP 8 convention where an underscore indicates a private feature of a class. But this isn't consistently applied at the moment: there are things marked private that shouldn't be, and vice versa. I also got the standard a bit wrong, because the underscore is meant to be leading, not trailing.
Indeed, I tried to follow whatever convention was being used in the code. Please let me know if I should change my fork or if you will take care of this later.
I see. We normally prepend the underscore to make a function private, not append it. But I thought the point was for this to be public? Aren't we doing this so the user can know whether their process converged or not? So we want an attribute that stores whether it converged, and a user can interrogate the object to find this out?
I was able to look back through the full thread. Sorry, I was looking at it a bit quickly. It is an interesting discussion. It seems that maybe there is a scikit-learn convention that we could adopt if we wanted, which @Andrea-gm mentions, but it is not a convention that we were knowingly using. @john-halloran what do you think? Should we adopt that convention? It has some merits. But also, I would be ok removing the underscores for public attributes and prepending them for private attributes and methods, making everything consistent this way.
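To make the two conventions being discussed here concrete, a minimal sketch (the class and attribute names below are purely illustrative, not from this codebase): a leading underscore marks an internal attribute per PEP 8, while scikit-learn's trailing underscore marks a public attribute that is set during fit.

```python
class Estimator:
    """Illustrates the two underscore conventions under discussion.

    All names here are hypothetical and for illustration only.
    """

    def __init__(self, tol=1e-6):
        self.tol = tol        # public hyperparameter: no underscore
        self._n_calls = 0     # leading underscore: internal/private (PEP 8)

    def fit(self, data):
        self._n_calls += 1
        # Trailing underscore: a public attribute determined during fit
        # (the scikit-learn convention for estimated attributes).
        self.result_ = sum(data) / len(data)
        return self


est = Estimator().fit([1.0, 2.0, 3.0])
print(est.result_)  # 2.0
```

Under the scikit-learn reading, a user is expected to access est.result_ after fitting, while est._n_calls is an implementation detail they should not rely on.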
This PR adds a converged_ boolean attribute to the SNMFOptimizer class.
Previously, the .fit() method would stop optimization upon reaching the tolerance threshold, but there was no programmatic way for the user to verify whether the model actually converged or simply hit max_iter.
Changes:
Added self.converged_ flag (initialized to False, set to True upon successful convergence).
Follows standard scikit-learn conventions for estimated attributes (trailing underscore).
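The intended usage pattern described above can be sketched as follows. This is a stand-in, not the real SNMFOptimizer implementation: the constructor arguments and the loss-based stopping rule here are assumptions made for illustration, while the converged_ flag semantics (False until the tolerance is reached) are from this PR.

```python
# Stand-in sketch of the converged_ flag; constructor arguments and the
# stopping rule are hypothetical, only the flag semantics come from the PR.
class SNMFOptimizer:
    def __init__(self, max_iter=100, tol=1e-8):
        self.max_iter = max_iter
        self.tol = tol
        self.converged_ = False  # False until fit() reaches the tolerance

    def fit(self, losses):
        # Stand-in loop: stop when successive losses differ by less than tol.
        for i in range(1, min(self.max_iter, len(losses))):
            if abs(losses[i] - losses[i - 1]) < self.tol:
                self.converged_ = True
                break
        return self


opt = SNMFOptimizer(tol=0.5).fit([10.0, 5.0, 4.9, 4.89])
print(opt.converged_)  # True
```

After fitting, the user can interrogate opt.converged_ to distinguish genuine convergence from an early stop at max_iter.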