Releases: GilesStrong/lumin
v0.9.1
Full Changelog: v0.9.0...v0.9.1
v0.9.0
Major updates to dependencies and a move to Poetry for build and packaging
What's Changed
- Refactor use poetry by @GilesStrong in #116
- chore: python version by @GilesStrong in #118
- chore: m2r version by @GilesStrong in #119
Full Changelog: v0.8.1...v0.9.0
v0.8.1
N.B. Last version to use setuptools for building, subsequent versions use Poetry
What's Changed
- Remove lambdas by @GilesStrong in #106
- Matrix targets by @GilesStrong in #107
- Sparse targ fixes by @GilesStrong in #108
- No test shuffle by @GilesStrong in #109
- Fix ensemble loading by @GilesStrong in #110
- fix: Use 'req@url' syntax to install from remote VCS by @matthewfeickert in #111
- Feat by from fy by @GilesStrong in #112
- Feat add geo data by @GilesStrong in #113
- Refactor: PDPDBox is optional dep by @GilesStrong in #114
New Contributors
- @matthewfeickert made their first contribution in #111
Full Changelog: v0.8.0...v0.8.1
v0.8.0 - Mistake not...
Important changes
- GNN architectures generalised into feature extraction and graph collapse stages, see details below and updated tutorial
Breaking
Additions
- `GravNetGNN` head and `GravNetLayer` sub-block (Qasim, Kieseler, Iiyama, & Pierini, 2019)
  - Includes optional self-attention
- `SelfAttention` and `OffsetSelfAttention`
- Batchnorm:
  - `LCBatchNorm1d` to run batchnorm over length x channel data
  - Additional `bn_class` arguments to blocks, allowing the user to choose different batchnorm implementations
  - 1, 2, & 3D running batchnorm layers from fastai (https://github.com/fastai/course-v3)
- `GNNHead` encapsulating head for feature extraction, using `AbsGraphFeatExtractor` classes, and graph collapsing, using `GraphCollapser` classes
- New callbacks:
  - `AbsWeightData` to weight folds of data based on their inputs or targets
  - `EpochSaver` to save the model to a new file at the end of every epoch
  - `CycleStep` combines OneCycle and step-decay of optimiser hyper-parameters
- New CNN blocks:
  - `AdaptiveAvgMaxConcatPool1d`, `AdaptiveAvgMaxConcatPool2d`, `AdaptiveAvgMaxConcatPool3d` use average and maximum pooling to reduce data to a specified number of elements per channel
  - `SEBlock1d`, `SEBlock2d`, `SEBlock3d` apply squeeze-excitation to data channels
- `BackwardHook` for recording telemetric data during backwards passes
- New losses: `WeightedFractionalMSE`, `WeightedBinnedHuber`, `WeightedFractionalBinnedHuber`
- Options for log x & y axes in `plot_feat`
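The `CycleStep` idea above, combining a one-cycle schedule with subsequent step decay, can be sketched in plain Python. This is a hypothetical illustration, not LUMIN's implementation; the function name and parameters are invented for the sketch:

```python
import math

def cycle_step_lr(step, n_cycle_steps, lr_min, lr_max, decay_every, decay_by):
    """Hypothetical sketch of a CycleStep-style schedule: a cosine one-cycle
    ramp (lr_min -> lr_max -> lr_min) over the first n_cycle_steps, followed
    by step decay of the final LR every decay_every steps."""
    if step < n_cycle_steps:
        # Cosine interpolation: 0 -> 1 -> 0 over the cycle
        frac = step / n_cycle_steps
        return lr_min + (lr_max - lr_min) * 0.5 * (1 - math.cos(2 * math.pi * frac))
    # After the cycle: plain step decay from lr_min
    n_decays = (step - n_cycle_steps) // decay_every
    return lr_min * decay_by ** n_decays
```

For example, with `n_cycle_steps=100`, `lr_min=1e-4`, `lr_max=1e-2`, the LR peaks at step 50 and then halves every 50 steps after the cycle ends.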
Removals
- Scheduled removal of deprecated methods and functions from the old model and callback system: `OldAbsCallback`, `OldCallback`, `OldAbsCyclicCallback`, `OldCycleLR`, `OldCycleMom`, `OldOneCycle`, `OldBinaryLabelSmooth`, `OldSequentialReweight`, `SequentialReweightClasses`, `OldBootstrapResample`, `OldParametrisedPrediction`, `OldGradClip`, `OldLsuvInit`, `OldAbsModelCallback`, `OldSWA`, `OldLRFinder`, `OldEnsemble`, `OldAMS`, `OldMultiAMS`, `OldBinaryAccuracy`, `OldRocAucScore`, `OldEvalMetric`, `OldRegPull`, `OldRegAsProxyPull`, `OldAbsModel`, `OldModel`, `fold_train_ensemble`, `OldMetricLogger`, `fold_lr_find`, `old_plot_train_history`, `_get_folds`
- Unnecessary `pred_cb` argument in `train_models`
Fixes
- Bug when trying to use batchnorm in `InteractionNet`
- Bug in `FoldFile.save_fold_pred` when predictions change shape and try to overwrite existing predictions
Changes
- `padding` argument in conv 1D blocks renamed to `pad`
- Graph nets: generalised into feature extraction for features per vertex, and graph collapsing down to flat data (with optional self-attention)
- Renamed `FowardHook` to `ForwardHook`
- Abstract classes no longer inherit from ABC, but rather have `metaclass=ABCMeta`, in order to be compatible with py >= 3.7
- Updated the example of binary classification of signal & background to use the model and training resulting from https://iopscience.iop.org/article/10.1088/2632-2153/ab983a
- Also changed the multi-target regression example to use non-densely connected layers, and the multi-target classification example to use a cosine-annealed cyclical LR
- Updated the single-target regression example to use `WeightedBinnedHuber` as a loss
- Changed `from torch.tensor import Tensor` to `from torch import Tensor` for compatibility with the latest PyTorch
Deprecations
- `OldInteractionNet` replaced in favour of the `InteractionNet` feature extractor. Will be removed in v0.9
v0.7.2 - All your batch are belong to us - Micro Update
Important changes
- Fixed bug in `Model.set_mom` which resulted in momentum never being set (affects e.g. OneCycle and CyclicalMom)
- `Model.fit` now shuffles the fold indices for training folds prior to each epoch, rather than once per training; this removes the periodicity in training loss which was occasionally apparent
- Bugs found in `OneCycle`:
  - When training multiple models, the initial LR for subsequent models was the end LR of the previous model (the list in the partial was being mutated)
  - The model did not stop training at the end of the cycle
  - Momentum was never altered in the optimiser
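The first `OneCycle` bug is an instance of a classic Python pitfall: a mutable list bound into a `functools.partial` is mutated by the call, so the next model's schedule starts from the mutated state. A minimal reproduction, not LUMIN code:

```python
from functools import partial

def make_schedule(lrs):
    # BUG: mutates the caller's list, so later calls see the changed state
    lrs[0] = lrs[-1]
    return list(lrs)

shared = [1e-3, 1e-5]
sched = partial(make_schedule, shared)
first = sched()  # start LR already overwritten; `shared` is mutated too

def make_schedule_fixed(lrs):
    lrs = list(lrs)  # fix: work on a copy; the caller's list is untouched
    lrs[0] = lrs[-1]
    return lrs
```

The fix is simply to copy the list before mutating it, which is the shape of the repair described above.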
Breaking
Additions
- Mish activation function
- `Model.fit_params.val_requires_grad` to control whether to compute the validation epoch with gradient; defaults to zero, but some losses might require it in the future
- `ParameterisedPrediction` now stores copies of values for parametrised features in case they change, or need to be changed locally during prediction
- `freeze_layers` and `unfreeze_layers` methods for `Model`
- `PivotTraining` callback implementing Learning to Pivot (Louppe, Kagan, & Cranmer, 2016)
  - New example reimplementing the paper's jets example
- `TargReplace` callback for replacing target data in `BatchYielder` during training
- Support for loss functions being `fastcore` `partialler` objects
- `train_models` now has arguments to:
  - Exclude specific fold indices from training and validation
  - Train models on unique folds, e.g. when training 5 models on a file with 10 folds, each model would be trained on its own unique pair of folds
- Added discussion of core concepts in LUMIN to the docs
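The unique-folds option can be illustrated with a small sketch. The helper below is hypothetical (it is not the `train_models` API), but shows the arithmetic: 5 models on a 10-fold file each receive their own disjoint pair of folds:

```python
def assign_unique_folds(n_models, n_folds):
    """Split fold indices into disjoint, equally sized groups, one per model.
    Hypothetical helper illustrating the unique-folds training option."""
    if n_folds % n_models != 0:
        raise ValueError("n_folds must divide evenly between models")
    per_model = n_folds // n_models
    return [list(range(i * per_model, (i + 1) * per_model))
            for i in range(n_models)]

# 5 models on a 10-fold file -> each model gets its own pair of folds
assign_unique_folds(5, 10)  # [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]
```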
Removals
Fixes
- Cases in which a NaN in the metric during training could spoil plotting and `SaveBest`
- Bug in `Model.set_mom` which resulted in momentum never being set (affects e.g. OneCycle and CyclicalMom)
- Bug in `MetricLogger.get_results` where tracking metrics could be spoilt by NaN values
- Bug in `train` when not passing any metrics
- Bug in `FoldYielder` when loading an output pipe from a `Path`
- Bugs found in `OneCycle`:
  - When training multiple models, the initial LR for subsequent models was the end LR of the previous model (the list in the partial was being mutated)
  - The model did not stop training at the end of the cycle
  - Momentum was never altered in the optimiser
Changes
- `Model.fit` now shuffles the fold indices for training folds prior to each epoch, rather than once per training; this removes the periodicity in training loss which was occasionally apparent
- Validation and prediction forward passes are now performed without gradient tracking, to save memory and time
- `MetricLogger` now records loss values on batch end rather than on forwards end
- `on_batch_end` is now always called regardless of model state
Deprecations
Comments
v0.7.1 - All your batch are belong to us - Micro Update
Important changes
`EvalMetric`s revised to inherit from `Callback` and to be called on validation data after every epoch. User-written `EvalMetric`s will need to be adjusted to work with the new calling method: the `evaluate` method and the constructor may need to be adjusted; see existing metrics for examples.
Breaking
- `eval_metrics` argument in `train_models` renamed to `metric_partials` and now takes a list of partial `EvalMetric`s
- User-written `EvalMetric`s will need to be adjusted to work with the new calling method: the `evaluate` method and the constructor may need to be adjusted; see existing metrics for examples
Additions
- `OneCycle` now has a `cycle_ends_training` argument which allows training to continue at the final LR and momentum; keeping the default of `True` ends the training once the cycle is complete, as usual
- `to_np` now returns `None` when the input tensor is `None`
- `plot_train_history` now plots metric evolution for validation data
Removals
Fixes
- `Model` now creates `cb_savepath` if it didn't already exist
- Bug in `PredHandler` where predictions were kept on device, leading to increased memory usage
- Version issue in matplotlib affecting plot positioning
Changes
Deprecations
- V0.8:
  - All `EvalMetric`s deprecated with the new metric system. They have been copied and renamed to Old* for compatibility with the old model training system.
    - `OldEvalMetric`: Replaced by `EvalMetric`
    - `OldMultiAMS`: Replaced by `MultiAMS`
    - `OldAMS`: Replaced by `AMS`
    - `OldRegPull`: Replaced by `RegPull`
    - `OldRegAsProxyPull`: Replaced by `RegAsProxyPull`
    - `OldRocAucScore`: Replaced by `RocAucScore`
    - `OldBinaryAccuracy`: Replaced by `BinaryAccuracy`
Comments
v0.7.0 - All your batch are belong to us
Important changes
- Model training and callbacks have significantly changed:
  - `Model.fit` now expects to perform the entire training procedure, rather than just single epochs.
  - A lot of the functionality of the old training method `fold_train_ensemble` is now delegated to `Model.fit`.
  - A new ensemble training method `train_models` has replaced `fold_train_ensemble`. It provides a similar API, but aims to be more understandable to users.
  - `Model.fit` is now 'stateful': a `fit_params` class is created containing all the information and data relevant to training the model, and training methods change their actions according to `fit_params.state` ('train', 'valid', and 'test').
  - Callbacks now have greater potential: they have more action points during the training cycle, where they can affect training behaviour, and they have access to `fit_params`, allowing them to modify more aspects of the training and have indirect access to all other callbacks.
  - The "tick" for the training loop is now one epoch, i.e. validation loss is computed after the entire use of the training data (as opposed to after every sub-epoch), and cyclic callbacks now work on the scale of epochs, rather than sub-epochs. Due to the data being split into folds, the concept of a sub-epoch still exists, but the APIs are now simplified for the user (previously they were a mixture of sub-epoch and epoch arguments).
  - For users who do not wish to transition to the new model behaviour, the existing behaviour can still be achieved by using the `Old*` models and classes. See the deprecations section for the full list.
- Input masks (present if e.g. using feature subsampling in `ModelBuilder`):
  - `BatchYielder` now takes an `input_mask` argument to filter inputs
  - `Model` prediction methods no longer take input-mask arguments; instead the input mask (if present) is used automatically. If users have already filtered their data, they should manually remove the input mask from the model (i.e. set it to `None`)
- Callbacks which take arguments related to (sub-)epochs (e.g. cycle length, scale, time to renewal, etc. for `CycleLR`, `OneCycle`, etc. and `SWA`) now take these arguments in terms of epochs. I.e. a OneCycle schedule with 9 training folds, running for 15 epochs, would previously require e.g. `lengths=(45,90)` in order to complete the cycle in 15 epochs (135 sub-epochs). Now it is specified as simply `lengths=(5,10)`. Additionally, these arguments must be integers; floats will be coerced to integers with a warning.
- `lr_find` now runs over all training folds, instead of just 1
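The relationship between the old sub-epoch arguments and the new epoch-based ones is just a division by the number of training folds. A sketch of the conversion (a hypothetical helper, not part of LUMIN):

```python
import warnings

def subepochs_to_epochs(lengths, n_train_folds):
    """Convert old sub-epoch cycle lengths, e.g. (45, 90) with 9 training
    folds, to the new epoch-based lengths, e.g. (5, 10). Non-integer results
    are coerced to int with a warning, mirroring the behaviour above."""
    epochs = tuple(length / n_train_folds for length in lengths)
    if any(e != int(e) for e in epochs):
        warnings.warn("Non-integer epoch lengths coerced to int")
    return tuple(int(e) for e in epochs)

subepochs_to_epochs((45, 90), 9)  # (5, 10)
```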
Breaking
- Heavy renaming of methods and classes due to changes in model training and callbacks.
Additions
- `__del__` method to `FowardHook` class
- `BatchYielder`:
  - Now takes an `input_mask` argument to filter inputs
  - Now takes an argument allowing incomplete batches to be yielded
  - Target array can now be `None`
- `Model`:
  - Now takes a `bs` argument for `evaluate`
  - Predictions can now be modified by passing a `PredHandler` callback to `pred_cb`. The default one simply returns the model predictions; however, other actions could be defined by the user, e.g. performing argmax for multiclass classifiers.
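As an illustration of the kind of post-processing a user-defined `PredHandler` might perform, here is the argmax step in plain Python. This is a hedged sketch of the idea only; the real callback plugs into LUMIN's `pred_cb` API:

```python
def argmax_preds(preds):
    """Collapse rows of per-class scores to predicted class indices,
    as a multiclass prediction handler might do."""
    return [max(range(len(row)), key=row.__getitem__) for row in preds]

# Two samples, three classes each
argmax_preds([[0.1, 0.7, 0.2], [0.8, 0.1, 0.1]])  # [1, 0]
```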
Removals
- `Model`:
  - No longer takes `callbacks` and `mask_inputs` as arguments for `evaluate`
  - `evaluate_from_by` removed; just call `evaluate`
- Callbacks no longer take model and plot_settings arguments during initialisation. These should be added by calling the relevant setters. `Model` will call them when relevant.
Fixes
- Potential bug in convolutional models where checking the out size of the head would affect the batchnorm averaging
- Potential bug in `plot_sample_pred` to do with bin ranges
- `ForwardHook` not working with passed hook functions
Changes
- `BinaryLabelSmooth` now only applies smoothing during training and not in validation
- `Ensemble`:
  - `from_results` and `build_ensemble` now no longer take `location` as an argument. Instead, results should contain the savepath for the models
  - `_build_ensemble` is now private
- `Model`:
  - `predict_array` and `predict_folds` are now private
  - `fit` now expects to perform the entire fitting of the model, rather than just one sub-epoch. Additionally, validation loss is now computed only at the end of the epoch, rather than previously where it was computed after each fold.
- `SWA` `renewal_period` should now be `None` in order to prevent a second average being tracked (previously was negative)
- Some examples have been renamed, and copies using the old model-fitting procedure and old callbacks are available in `examples/old`
- `lr_find` now runs over all training folds, instead of just 1
Deprecations
- V0.8:
  - Many classes and methods are deprecated with the new model. They have been copied and renamed to Old*.
    - `OldAbsModel`: Replaced by `AbsModel`
    - `OldModel`: Replaced by `Model`
    - `OldAbsCallback`: Replaced by `AbsCallback`
    - `OldCallback`: Replaced by `Callback`
    - `OldBinaryLabelSmooth`: Replaced by `BinaryLabelSmooth`
    - `OldSequentialReweight`: Will not be replaced
    - `SequentialReweightClasses`: Will not be replaced
    - `OldBootstrapResample`: Replaced by `BootstrapResample`
    - `OldParametrisedPrediction`: Replaced by `ParametrisedPrediction`
    - `OldGradClip`: Replaced by `GradClip`
    - `OldLsuvInit`: Replaced by `LsuvInit`
    - `OldAbsCyclicCallback`: Replaced by `AbsCyclicCallback`
    - `OldCycleLR`: Replaced by `CycleLR`
    - `OldCycleMom`: Replaced by `CycleMom`
    - `OldOneCycle`: Replaced by `OneCycle`
    - `OldLRFinder`: Replaced by `LRFinder`
    - `fold_lr_find`: Replaced by `lr_find`
    - `fold_train_ensemble`: Replaced by `train_models`
    - `OldMetricLogger`: Replaced by `MetricLogger`
    - `AbsModelCallback`: Will not be replaced
    - `OldSWA`: Replaced by `SWA`
    - `old_plot_train_history`: Replaced by `plot_train_history`
    - `OldEnsemble`: Replaced by `Ensemble`
Comments
v0.6.0 - Train and Converge Until it is Done
Important changes
- `auto_filter_on_linear_correlation` now examines all features within correlated clusters, rather than just the most correlated pair. This means that the function now only needs to be run once, rather than the previously recommended multiple rerunnings
- Moved to Scikit-learn 0.22.2, and moved, where possible, to keyword-argument calls for sklearn methods in preparation for the 0.25 enforcement of keyword arguments
- Fixed error in patience when using cyclical LR callbacks: now specify the number of cycles to go without improvement (previously had to specify 1 + number)
- Matrix data is no longer passed through `np.nan_to_num` in `FoldYielder`. Users should ensure that all values in matrix data are not NaN or Inf
- Tensor data:
  - `df2foldfile`, `fold2foldfile`, and `add_meta_data` can now support the saving of arbitrary matrices as a matrix input
  - Pass a `numpy.array` whose first dimension matches the length of the DataFrame to the `tensor_data` argument of `df2foldfile` and a name to `tensor_name`. The array will be split along the first dimension and the sub-arrays will be saved as matrix inputs in the resulting foldfile
  - The matrices may also be passed in sparse format and be densified on loading by `FoldYielder`
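The `tensor_data` behaviour, splitting along the first dimension so each DataFrame row gets one matrix, can be sketched without numpy. This is a hypothetical illustration of the splitting step only; `df2foldfile` itself expects a `numpy.array`:

```python
def split_along_first_dim(tensor, n_rows):
    """Check the first dimension matches the DataFrame length, then return
    one sub-array per row, as the foldfile-saving described above does."""
    if len(tensor) != n_rows:
        raise ValueError("first dimension must match the DataFrame length")
    return [tensor[i] for i in range(n_rows)]

# Three 2x2 matrices for a three-row DataFrame
mats = [[[1, 2], [3, 4]], [[5, 6], [7, 8]], [[9, 0], [1, 2]]]
split_along_first_dim(mats, 3)[0]  # [[1, 2], [3, 4]]
```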
Breaking
- `plot_rank_order_dendrogram` now returns sets of all features in clusters with distance over the threshold, rather than just the closest features in each cluster
Additions
- Addition of a batch-size parameter to `Ensemble.predict*`
- Lorentz Boost Network (https://arxiv.org/abs/1812.09722):
  - `LorentzBoostNet`: basic implementation which learns boosted particles from existing particles and extracts features from them using fixed kernel functions
  - `AutoExtractLorentzBoostNet`: also learns the kernel functions during training
- Classification `Eval` classes:
  - `BinaryAccuracy`: computes and returns the accuracy of a single-output model for binary classification tasks
  - `RocAucScore`: computes and returns the area under the Receiver Operator Characteristic curve (ROC AUC) of a classifier model
- `plot_binary_sample_feat`: a version of `plot_sample_pred` designed for plotting feature histograms with stacked contributions by sample for background
- Added compression arguments to `df2foldfile`, `fold2foldfile`, and `save_to_grp`
- Tensor data:
  - `df2foldfile`, `fold2foldfile`, and `add_meta_data` can now support the saving of arbitrary matrices as a matrix input
  - Pass a `numpy.array` whose first dimension matches the length of the DataFrame to the `tensor_data` argument of `df2foldfile` and a name to `tensor_name`. The array will be split along the first dimension and the sub-arrays will be saved as matrix inputs in the resulting foldfile
  - The matrices may also be passed in sparse format and be densified on loading by `FoldYielder`
- `plot_lr_finders` now has a `log_y` argument for a logarithmic y-axis. The default `auto` sets `log_y` if the maximum fractional difference between losses is greater than 50
- Added new rescaling options to `ClassRegMulti` using linear outputs and scaling by the mean and std of the targets
- `LsuvInit` now applies scaling to `nn.Conv3d` layers
- `plot_lr_finders` and `fold_lr_find` now have options to save the resulting LR-finder plot (currently limited to png due to problems with pdf)
- Addition of the AdamW optimiser, thanks to @kiryteo
- Contribution guide, thanks to @kiryteo
- OneCycle `lr_range` now supports a non-zero final LR; just supply a three-tuple to the `lr_range` argument
- `Ensemble.from_models` classmethod for combining in-memory models into an `Ensemble`
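The `auto` behaviour for `log_y` amounts to a simple threshold check. A sketch of the decision rule, assuming "maximum fractional difference" means the spread relative to the smallest loss (the function name and exact definition are assumptions, not LUMIN's code):

```python
def should_use_log_y(losses, threshold=50):
    """True if the maximum fractional difference between losses exceeds the
    threshold, i.e. the loss values span orders of magnitude and a log
    y-axis would present them better."""
    lo, hi = min(losses), max(losses)
    return (hi - lo) / lo > threshold

should_use_log_y([0.1, 0.2, 0.5])   # False: spread well under the threshold
should_use_log_y([0.1, 0.2, 80.0])  # True: ~800x spread
```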
Removals
- `FeatureSubsample`
- `plots` keyword in `fold_train_ensemble`
Fixes
- Docs bug for nn.training due to missing ipython in requirements
- Bug in LSUV init when running on CUDA
- Bug in TF export based on searching for fullstops
- Bug in model_bar update during fold training
- Quiet bug in `MultiHead` when matrix feats were not listed first; map construction indexed `self.matrix_feats` not `self.feats`
- Slowdown in `Ensemble.predict_array` which caused the array to be sent to device during each model evaluation
- `Model.get_param_count` now includes non-trainable params when requested
- Fixed bug in `fold_lr_find` where LR finders would use different LR steps, leading to NaNs when plotting
- `plot_feat` used to coerce NaNs and Infs via `np.nan_to_num` prior to plotting, potentially impacting distributions, plotting scales, moments, etc. Fixed so that NaN and Inf values are removed rather than coerced
- Fixed early-stopping statement in `fold_train_ensemble` to state the number as "sub-epochs" (previously said "epochs")
- Fixed error in patience when using cyclical LR callbacks: now specify the number of cycles to go without improvement (previously had to specify 1 + number)
- Unnecessary warning in `df2foldfile` when no strat-key is passed
- Saved matrices in `fold2foldfile` are now in float32
- Fixed return type of `get_layers` methods in the `RNNs_CNNs_and_GNNs_for_matrix_data` example
- Bug in `model.predict_array` when predicting matrix data with a batch size
- Added missing indexing in `AbsMatrixHead` to use `torch.bool` if the PyTorch version is >= 1.2 (was `uint8`, now deprecated for indexing)
- Errors when running in a terminal due to trying to call `.show` on fastprogress bars
- Bug due to encoding of the readme when trying to install when the default encoder is ascii
- Bug when running `Model.predict` in batches when the data contains less than one batch
- Include missing files in sdist, thanks to @thatch
- Test path correction in example notebook, thanks to @kiryteo
- Doc links in `hep_proc`
- Error in `MultiHead._set_feats` when `matrix_head` does not contain 'vecs' or 'feats_per_vec' keywords
- Compatibility error in numpy >= 1.18 in `bin_binary_class_pred` due to float instead of int
- Unnecessary second loading of fold data in `fold_lr_find`
- Compatibility error when working in PyTorch 1.6 based on integer and true division
- SWA not evaluating in batches when running in non-bulk-move mode
- Moved from `normed` to `density` keywords for matplotlib
Changes
- `ParametrisedPrediction` now accepts lists of parameterisation features
- `plot_sample_pred` now ensures that signal and background have the same binning
- `PlotSettings` now coerces string arguments for `savepath` to `Path`
- Added default value for `targ_name` in `EvalMetric`
- `plot_rank_order_dendrogram`:
  - Now uses "optimal ordering" for improved presentation
  - Now returns sets of all features in clusters with distance over the threshold, rather than just the closest features in each cluster
- `auto_filter_on_linear_correlation` now examines all features within correlated clusters, rather than just the most correlated pair. This means that the function now only needs to be run once, rather than the previously recommended multiple rerunnings
- Improved data shuffling in `BatchYielder`; now runs much quicker
- Slight speedup when loading data from foldfiles
- Matrix data is no longer passed through `np.nan_to_num` in `FoldYielder`. Users should ensure that all values in matrix data are not NaN or Inf
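Since `FoldYielder` no longer sanitises matrix data, users may want to validate it themselves before writing foldfiles. A minimal stdlib-only check (a sketch, not a LUMIN utility):

```python
import math

def all_finite(matrix):
    """Return True only if every value in a 2D matrix is finite
    (no NaN or Inf), the condition matrix data must now satisfy."""
    return all(math.isfinite(v) for row in matrix for v in row)

all_finite([[1.0, 2.0], [3.0, 4.0]])           # True
all_finite([[1.0, float('nan')], [3.0, 4.0]])  # False
```

In practice one would use `np.isfinite(arr).all()` on the actual numpy array before calling `df2foldfile`.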
Deprecations
Comments
- RFPImp still imports from `sklearn.ensemble.forest`, which is deprecated and possibly part of the private API. Hopefully the package will remedy this before removal. For now, future warnings are displayed.
v0.5.1 - The Gradient Must Flow - Micro Update
Important changes
- New live plot for losses during training (`MetricLogger`):
  - Provides additional information
  - Only updates after every epoch (previously every sub-epoch), reducing training times
  - Nicer appearance and automatic log scale for the y-axis
Breaking
Additions
- New live plot for losses during training (`MetricLogger`):
  - Provides additional information
  - Only updates after every epoch (previously every sub-epoch), reducing training times
  - Nicer appearance and automatic log scale for the y-axis
Removals
Fixes
- Fixed error in documentation which removed the ToC for the nn module
Changes
Deprecations
- `plots` argument in `fold_train_ensemble`. The `plots` argument is now deprecated and ignored. Loss history will always be shown, LR history will no longer be shown separately, and live feedback is now controlled by the four `live_fdbk` arguments. This argument will be removed in V0.6.
Comments
v0.5 The Gradient Must Flow
Important changes
- Added support for processing and embedding of matrix data:
  - `MultiHead` to allow the use of multiple head blocks to handle input data containing flat and matrix inputs
  - `AbsMatrixHead`: abstract class for head blocks designed to process matrix data
  - `InteractionNet`: a new head block to apply interaction graph-nets to objects in matrix form
  - `RecurrentHead`: a new head block to apply recurrent layers (RNN, LSTM, GRU) to series objects in matrix form
  - `AbsConv1dHead`: a new abstract class for building convolutional networks from basic blocks to apply to objects in matrix form
- Meta data:
  - `FoldYielder` now checks its foldfile for a `meta_data` group which contains information about the features and inputs in the data
  - `cont_feats` and `cat_feats` now no longer need to be passed to `FoldYielder` during initialisation if the foldfile contains meta data
  - `add_meta_data` function added to write meta data to foldfiles; it is automatically called by `df2foldfile`
- Improved usage with large datasets:
  - Added `Model.evaluate_from_by` to allow batch-wise evaluation of loss
  - `bulk_move` in `fold_train_ensemble` now also affects the validation fold, i.e. `bulk_move=False` no longer preloads the validation fold, and validation loss is evaluated using `Model.evaluate_from_by`
  - `bulk_move` arguments added to `fold_lr_find`
  - Added batch-size argument to `Model` predict methods to run predictions in batches
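Batch-wise loss evaluation of the kind `Model.evaluate_from_by` enables can be sketched as chunked mean-loss accumulation. Plain Python with a stand-in loss function; the helper name and signature are hypothetical:

```python
def batched_mean_loss(preds, targs, loss_fn, bs):
    """Accumulate a size-weighted mean loss over batches of size bs, so the
    full fold never needs to be held on device at once. The last batch may
    be smaller, hence the weighting by len(p)."""
    total, n = 0.0, len(preds)
    for i in range(0, n, bs):
        p, t = preds[i:i + bs], targs[i:i + bs]
        total += loss_fn(p, t) * len(p)
    return total / n

# Stand-in MSE over plain lists
mse = lambda p, t: sum((a - b) ** 2 for a, b in zip(p, t)) / len(p)
batched_mean_loss([1.0, 2.0, 3.0], [1.0, 2.0, 5.0], mse, 2)  # 4/3
```

The size weighting is what makes the batched result match evaluating the whole fold at once.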
Potentially Breaking
- `FoldYielder.get_df()` now returns any NaNs present in data rather than zeros, unless `nan_to_num` is set to `True`
- Zero bias init for bottlenecks in `MultiBlock` body
Additions
- `__repr__` of `Model` now details information about input variables
- Added support for processing and embedding of matrix data:
  - `MultiHead` to allow the use of multiple head blocks to handle input data containing flat and matrix inputs
  - `AbsMatrixHead`: abstract class for head blocks designed to process matrix data
  - `InteractionNet`: a new head block to apply interaction graph-nets to objects in matrix form
  - `RecurrentHead`: a new head block to apply recurrent layers (RNN, LSTM, GRU) to series objects in matrix form
  - `AbsConv1dHead`: a new abstract class for building convolutional networks from basic blocks to apply to objects in matrix form
- Meta data:
  - `FoldYielder` now checks its foldfile for a `meta_data` group which contains information about the features and inputs in the data
  - `cont_feats` and `cat_feats` now no longer need to be passed to `FoldYielder` during initialisation if the foldfile contains meta data
  - `add_meta_data` function added to write meta data to foldfiles; it is automatically called by `df2foldfile`
- `get_inputs` method to `BatchYielder` to return the inputs, optionally on device
- Added LSUV initialisation, implemented by the `LsuvInit` callback
Removals
Fixes
- `FoldYielder.get_df()` now returns any NaNs present in data rather than zeros, unless `nan_to_num` is set to `True`
- Various typing fixes
- Body and tail modules not correctly freezing
- Made `Swish` not be inplace; seemed to cause problems sometimes
- Enforced fastprogress version; latest version renamed a parameter
- Added support to `df2foldfile` for missing `strat_key`
- Added support to `fold2foldfile` for missing features
- Zero bias init for bottlenecks in `MultiBlock` body
Changes
- Slight optimisation in `FullyConnected` when not using dense or residual networks
- `FoldYielder.set_foldfile` is now a private function `FoldYielder._set_foldfile`
- Improved usage with large datasets:
  - Added `Model.evaluate_from_by` to allow batch-wise evaluation of loss
  - `bulk_move` in `fold_train_ensemble` now also affects the validation fold, i.e. `bulk_move=False` no longer preloads the validation fold, and validation loss is evaluated using `Model.evaluate_from_by`
  - `bulk_move` arguments added to `fold_lr_find`
  - Added batch-size argument to `Model` predict methods to run predictions in batches