JIT compilation? #70
Open
Labels:
- improvement — Something which would improve current status, but not add anything new
- investigation — Something which might require a careful study
- medium priority — Not urgent but should be dealt with sooner rather than later
PyTorch has the ability to Just-In-Time (JIT) compile code to make it run quicker and use memory more efficiently. I tried this a while ago with the `@weak_script` and `@weak_module` decorators, but they didn't seem to do much and I had trouble automatically generating the docs. I then found that PyTorch recommends users not use these decorators. Since then, PyTorch has introduced the `@torch.jit.script` decorator, which is intended for user code and supposedly provides noticeable improvements in speed and memory usage. One example use would be compiling activation functions:


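As a minimal sketch of what that could look like (not LUMIN's actual code), Swish could be scripted like this, assuming the `@torch.jit.script` decorator available since PyTorch 1.0:

```python
import torch

# Hypothetical JIT-scripted Swish: TorchScript can fuse the pointwise
# multiply and sigmoid into a single kernel on supported backends.
@torch.jit.script
def swish_jit(x: torch.Tensor) -> torch.Tensor:
    return x * torch.sigmoid(x)
```

Scripting a small pointwise function like this keeps the same call signature, so it could be dropped in as a replacement and benchmarked against the eager version.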
Whereas LUMIN's implementation of Swish is simply `x*torch.sigmoid(x)`. Other possibilities could be LUMIN's loss functions (e.g. `WeightedMSE`). I'm not sure how far one can take this: should everything related to PyTorch be JIT compiled, or perhaps only operations on tensors? A starting point would be to test the JIT-compiled Swish against the current version, and then to find out more about what should be JITed and what shouldn't.
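The comparison suggested above could be sketched roughly as follows; the function names and timing harness here are illustrative, not part of LUMIN:

```python
import time
import torch

# Eager version, matching LUMIN's current x * sigmoid(x) formulation.
def swish_eager(x: torch.Tensor) -> torch.Tensor:
    return x * torch.sigmoid(x)

# Hypothetical scripted version for comparison.
@torch.jit.script
def swish_jit(x: torch.Tensor) -> torch.Tensor:
    return x * torch.sigmoid(x)

def bench(fn, x: torch.Tensor, n: int = 100) -> float:
    # Warm-up calls: the TorchScript fuser may optimise on the first runs.
    fn(x)
    fn(x)
    t0 = time.perf_counter()
    for _ in range(n):
        fn(x)
    return time.perf_counter() - t0

x = torch.randn(1_000_000)
print(f"eager:    {bench(swish_eager, x):.4f} s")
print(f"scripted: {bench(swish_jit, x):.4f} s")
```

Any real benchmark would also want to cover CUDA tensors and the backward pass, since the gains from fusion tend to show up most on GPU and in training.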
x*torch.sigmoid(x). Other possibilities could be in LUMIN's loss function (e.g.WeightedMSE). I'm not sure how far one can take this; should all things related to PyTorch be JIT complied, or perhaps only operations on tensors?A starting point would be test out the JIT compiled Swish against the current version, and then to try to find out more about what should be JITed, and what doesn't.