Releases: warner-benjamin/fastxtend
v0.0.18
Adds the Lion Optimizer
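Independent of fastxtend's own API, the Lion update rule itself (a sign update of interpolated momentum, with decoupled weight decay) can be sketched per scalar parameter; `lion_step` is a hypothetical name for illustration only:

```python
def lion_step(p, g, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """Illustrative single-scalar Lion update (Chen et al., 2023).

    p: parameter, g: gradient, m: momentum state.
    Returns the updated (parameter, momentum) pair.
    """
    # Interpolate momentum and gradient, then take only the sign
    c = beta1 * m + (1 - beta1) * g
    sign = 1.0 if c > 0 else (-1.0 if c < 0 else 0.0)
    # Apply the sign update with decoupled (AdamW-style) weight decay
    p_new = p - lr * (sign + wd * p)
    # Momentum is updated with a second coefficient, beta2
    m_new = beta2 * m + (1 - beta2) * g
    return p_new, m_new
```

Because the update magnitude is always `lr` (plus weight decay), Lion typically uses a smaller learning rate than AdamW.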
Full Changelog: v0.0.17...v0.0.18
v0.0.17
Bug fixes and compatibility with fastai 2.7.11
Full Changelog: v0.0.16...v0.0.17
v0.0.16
Decrease Adan memory usage and increase optimizer step speed
Fix EMA callbacks not applying to buffers
EMA callback accepts epochs or percent of total training
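The epochs-or-percent convention can be illustrated with a small sketch; the function names here are hypothetical, not fastxtend's API:

```python
def ema_start_iter(n_epochs, iters_per_epoch, start):
    """Illustrative: resolve an EMA start point given either an epoch
    count (>= 1) or a fraction of total training (< 1)."""
    total = n_epochs * iters_per_epoch
    if start < 1:
        return int(total * start)        # percent of total training
    return int(start * iters_per_epoch)  # epoch count

def ema_update(ema, param, decay=0.999):
    # Standard exponential moving average of model weights
    return decay * ema + (1 - decay) * param
```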
Full Changelog: v0.0.15...v0.0.16
v0.0.15
Drop support for Python 3.7. CI tests on Python 3.8, 3.9, & 3.10.
`ProgressiveResize` can be imported independently of other fastxtend features.
Bug fixes
Full Changelog: v0.0.14...v0.0.15
v0.0.14
0.2-0.25x speed improvement to `EMACallback`
v0.0.13
Updated `EMACallback` and `EMAWarmupCallback` with new features, including a fast fused implementation, EMA start delay, and EMA warmup.
Added two new schedulers: `fit_flat_warmup` and `fit_cos_anneal`.
Bug fixes.
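A warmup-then-flat-then-cosine-anneal learning-rate curve, in the spirit of a `fit_flat_warmup`-style schedule, can be sketched as follows (the function name and phase percentages are illustrative assumptions, not fastxtend's defaults):

```python
import math

def flat_warmup_lr(pct, lr, warmup_pct=0.1, decline_pct=0.75):
    """Illustrative LR schedule: linear warmup, flat plateau,
    then cosine annealing to zero. `pct` is training progress in [0, 1]."""
    if pct < warmup_pct:
        return lr * pct / warmup_pct              # linear warmup
    if pct < decline_pct:
        return lr                                 # flat phase
    t = (pct - decline_pct) / (1 - decline_pct)
    return lr * (1 + math.cos(math.pi * t)) / 2   # cosine anneal to 0
```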
Full Changelog: v0.0.12...v0.0.13
v0.0.12
Support PyTorch 1.13 via fastai 2.7.10
v0.0.11
What's Changed
- Add fused ForEach and TorchScript Optimizers
- Add the Adan Optimizer
- Port to nbdev2
- Improved callbacks & bug fixes: `EMA`, `ProgressiveResize`, `SimpleProfiler`
- XResNet compatibility with TorchScript and `vision_learner`
- `BCEWithLogitsLoss` with 'batchmean' reduction
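The 'batchmean' reduction sums the per-element losses and divides by the batch size rather than the total element count; a minimal sketch (the function name is hypothetical):

```python
import math

def bce_with_logits_batchmean(logits, targets):
    """Illustrative binary cross-entropy with logits, reduced by
    dividing the summed loss by batch size ('batchmean')."""
    batch = len(logits)
    total = 0.0
    for row_z, row_y in zip(logits, targets):
        for z, y in zip(row_z, row_y):
            # Numerically stable BCE-with-logits for one element
            total += max(z, 0) - z * y + math.log1p(math.exp(-abs(z)))
    return total / batch
```

With an ordinary 'mean' reduction the same example would divide by the number of elements instead of the number of rows.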
Full Changelog: v0.0.10...v0.0.11
v0.0.10
- `CutMix` and `CutMixAugment` support small batch sizes
- Bug fix in `CutMix` and `CutMixAugment`
v0.0.9
- Add `ProgressiveResize` callback to implement automatic progressive resizing in fastai
- Add support for element-wise MixUp, CutMix, and Augmentations in `CutMixUp` and `CutMixUpAugment`
- Add samples per second to simple profiler, logging and output improvements
- Add MixUp support to `MultiLoss`
- Add new batch augmentations
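The idea behind progressive resizing is to train on small images early and grow toward the final size on a schedule; a minimal sketch of such a schedule (function name, phase boundaries, and the multiple-of-`step` rounding are illustrative assumptions, not fastxtend's implementation):

```python
def resize_schedule(initial, final, start_pct, finish_pct, pct, step=32):
    """Illustrative progressive-resizing schedule: image size grows
    linearly from `initial` to `final` between start_pct and finish_pct
    of training, rounded to a multiple of `step` pixels."""
    if pct <= start_pct:
        return initial
    if pct >= finish_pct:
        return final
    frac = (pct - start_pct) / (finish_pct - start_pct)
    size = initial + frac * (final - initial)
    return int(round(size / step) * step)
```

Training at the final size for the last portion of training lets the model adapt to full-resolution inputs before evaluation.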
Thanks to @marii-moe for assistance in debugging and fixing `ProgressiveResize` memory allocation overflows.