Releases: keras-team/keras
Keras 3.2.1
Keras 3.2.0
What changed
- Introduce QLoRA-like technique for LoRA fine-tuning of `Dense` and `EinsumDense` layers (and thereby any LLM) in int8 precision.
- Extend `keras.ops.custom_gradient` support to PyTorch.
- Add `keras.layers.JaxLayer` and `keras.layers.FlaxLayer` to wrap JAX/Flax modules as Keras layers.
- Allow `save_model` & `load_model` to accept a file-like object.
- Add quantization support to the `Embedding` layer.
- Make it possible to update metrics inside a custom `compute_loss` method with all backends.
- Make it possible to access `self.losses` inside a custom `compute_loss` method with the JAX backend.
- Add `keras.losses.Dice` loss.
- Add `keras.ops.correlate`.
- Make it possible to use cuDNN LSTM & GRU with a mask with the TensorFlow backend.
- Better JAX support in `model.export()`: add support for aliases, finer control over `jax2tf` options, and dynamic batch shapes.
- Bug fixes and performance improvements.
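The QLoRA-like idea in the first bullet can be sketched in plain NumPy (an illustration of the math only, not the Keras implementation): the base kernel is frozen in int8 with a dequantization scale, while a small low-rank update `A @ B` stays trainable in float.

```python
import numpy as np

rng = np.random.default_rng(0)

in_dim, out_dim, rank = 8, 4, 2
w = rng.standard_normal((in_dim, out_dim)).astype("float32")

# Symmetric per-tensor int8 quantization of the frozen base kernel.
scale = np.abs(w).max() / 127.0
w_int8 = np.clip(np.round(w / scale), -127, 127).astype("int8")

# Trainable low-rank factors: the only weights LoRA updates.
# B starts at zero so the layer initially matches the base model.
lora_a = rng.standard_normal((in_dim, rank)).astype("float32") * 0.01
lora_b = np.zeros((rank, out_dim), dtype="float32")

def lora_dense(x):
    base = (x @ w_int8.astype("float32")) * scale  # dequantized base matmul
    return base + x @ lora_a @ lora_b              # plus low-rank correction

x = rng.standard_normal((3, in_dim)).astype("float32")
y = lora_dense(x)
print(y.shape)  # (3, 4)
```

Because only `lora_a` and `lora_b` receive gradients, fine-tuning touches `in_dim * rank + rank * out_dim` parameters instead of the full `in_dim * out_dim` kernel.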
New Contributors
- @abhaskumarsinha made their first contribution in #19302
- @qaqland made their first contribution in #19378
- @tvogel made their first contribution in #19310
- @lpizzinidev made their first contribution in #19409
- @Murhaf made their first contribution in #19444
Full Changelog: v3.1.1...v3.2.0
Keras 3.1.1
This is a minor bugfix release over 3.1.0.
What's Changed
- Unwrap variable values in all stateless calls by @hertschuh in #19287
- Fix `draw_seed` causing device discrepancy issue during `torch`'s symbolic execution by @KhawajaAbaid in #19289
- Fix `TestCase.run_layer_test` for multi-output layers by @shkarupa-alex in #19293
- Sine docstring by @grasskin in #19295
- Fix `keras.ops.softmax` for the TensorFlow backend by @tirthasheshpatel in #19300
- Fix mixed precision check in `TestCase.run_layer_test`: compare with output_spec dtype instead of hardcoded float16 by @shkarupa-alex in #19297
- `ArrayDataAdapter` no longer converts to NumPy and supports sparse tens… by @hertschuh in #19298
- Add token to codecov by @haifeng-jin in #19312
- Add TensorFlow support for variable `scatter_update` in optimizers by @hertschuh in #19313
- Replace `dm-tree` with `optree` by @james77777778 in #19306
- Downgrade codecov to v3 by @haifeng-jin in #19319
- Allow tensors in `tf.Dataset`s to have different dimensions by @hertschuh in #19318
- Update codecov setting by @haifeng-jin in #19320
- Set dtype policy for uint8 by @sampathweb in #19327
- Use Value dim shape for `Attention` compute_output_shape by @sampathweb in #19284
New Contributors
- @tirthasheshpatel made their first contribution in #19300
Full Changelog: v3.1.0...v3.1.1
Keras 3.1.0
New features
- Add support for `int8` inference. Just call `model.quantize("int8")` to do an in-place conversion of a bfloat16 or float32 model to an int8 model. Note that only `Dense` and `EinsumDense` layers will be converted (this covers LLMs and all Transformers in general). We may add more supported layers over time.
- Add `keras.config.set_backend(backend)` utility to reload a different backend.
- Add `keras.layers.MelSpectrogram` layer for turning raw audio data into Mel spectrogram representation.
- Add `keras.ops.custom_gradient` decorator (only for JAX and TensorFlow).
- Add `keras.ops.image.crop_images`.
- Add `pad_to_aspect_ratio` argument to `image_dataset_from_directory`.
- Add `keras.random.binomial` and `keras.random.beta` functions.
- Enable `keras.ops.einsum` to run with int8 x int8 inputs and int32 output.
- Add `verbose` argument in all dataset-creation utilities.
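The int8 einsum bullet can be illustrated in plain NumPy (a sketch of the numerics, not the Keras kernel): a single int8 × int8 product can reach 127 × 127 = 16129, so the contraction must accumulate in a wider dtype such as int32.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.integers(-127, 128, size=(4, 16), dtype=np.int8)
b = rng.integers(-127, 128, size=(16, 8), dtype=np.int8)

# Widen to int32 before contracting so the accumulator cannot overflow:
# summing int8 products in int8 would wrap around almost immediately.
out = np.einsum("ij,jk->ik", a.astype(np.int32), b.astype(np.int32))
print(out.dtype, out.shape)  # int32 (4, 8)
```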
Notable fixes
- Fix Functional model slicing
- Fix for TF XLA compilation error for `SpectralNormalization`
- Refactor `axis` logic across all backends and add support for multiple axes in `expand_dims` and `squeeze`
New Contributors
- @mykolaskrynnyk made their first contribution in #19190
- @chicham made their first contribution in #19201
- @joycebrum made their first contribution in #19214
- @EtiNL made their first contribution in #19228
Full Changelog: v3.0.5...v3.1.0
Keras 3.0.5
This release brings many bug fixes and performance improvements, new linear algebra ops, and sparse tensor support for the JAX backend.
Highlights
- Add support for sparse tensors with the JAX backend.
- Add support for saving/loading in bfloat16.
- Add linear algebra ops in `keras.ops.linalg`.
- Support nested structures in `while_loop` op.
- Add `erfinv` op.
- Add `normalize` op.
- Add support for `IterableDataset` to `TorchDataLoaderAdapter`.
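The new `keras.ops.linalg` namespace covers standard linear-algebra routines; the NumPy equivalents below illustrate the kind of operations involved (a sketch of the concepts, not the exact Keras API surface).

```python
import numpy as np

a = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

# Solve the linear system a @ x = b.
x = np.linalg.solve(a, b)
print(x)  # [2. 3.]

# QR decomposition: a = q @ r with q orthogonal, r upper-triangular.
q, r = np.linalg.qr(a)
print(np.allclose(q @ r, a))  # True
```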
New Contributors
- @frazane made their first contribution in #19107
- @SamanehSaadat made their first contribution in #19111
- @sitamgithub-MSIT made their first contribution in #19142
- @timotheeMM made their first contribution in #19169
Full Changelog: v3.0.4...v3.0.5
Keras 3.0.4
This is a minor release with improvements to the LoRA API required by the next release of KerasNLP.
Full Changelog: v3.0.3...v3.0.4
Keras 3.0.3
This is a minor Keras release.
What's Changed
- Add built-in LoRA (low-rank adaptation) API to all relevant layers (`Dense`, `EinsumDense`, `Embedding`).
- Add `SwapEMAWeights` callback to make it easier to evaluate model metrics using EMA weights during training.
- All `DataAdapters` now create a native iterator for each backend, improving performance.
- Add built-in prefetching for JAX, improving performance.
- The `bfloat16` dtype is now allowed in the global `set_dtype` configuration utility.
- Bug fixes and performance improvements.
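The idea behind `SwapEMAWeights` can be sketched in a few lines (an illustrative toy loop with made-up values, not the callback's implementation): an exponential moving average of the weights is maintained during training and temporarily swapped in at evaluation time, since EMA weights often generalize better.

```python
# momentum is the EMA decay rate; weight is a stand-in scalar parameter.
momentum = 0.9
weight = 1.0
ema_weight = weight

for new_value in [2.0, 3.0, 4.0]:   # stand-ins for optimizer updates
    weight = new_value
    ema_weight = momentum * ema_weight + (1 - momentum) * weight

# Swap: evaluate with the smoothed weights, then restore the raw ones.
backup, weight = weight, ema_weight
print(round(weight, 3))  # 1.561  <- evaluation sees the EMA value
weight = backup          # training resumes with the raw value
```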
New Contributors
- @kiraksi made their first contribution in #18977
- @dugujiujian1999 made their first contribution in #19010
- @neo-alex made their first contribution in #18997
- @anas-rz made their first contribution in #19057
Full Changelog: v3.0.2...v3.0.3
Keras 3.0.2
Breaking changes
There are no known breaking changes in this release compared to 3.0.1.
API changes
- Add `keras.random.binomial` and `keras.random.beta` RNG functions.
- Add masking support to `BatchNormalization`.
- Add `keras.losses.CTC` (loss function for sequence-to-sequence tasks) as well as the lower-level operation `keras.ops.ctc_loss`.
- Add `ops.random.alpha_dropout` and `layers.AlphaDropout`.
- Add gradient accumulation support for all backends, and enable optimizer EMA for JAX and torch.
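Gradient accumulation, mentioned in the last bullet, works as sketched below (a plain-NumPy illustration of the technique with hypothetical gradients, not the Keras optimizer code): gradients from several micro-batches are summed, and a single update is applied with their average, emulating a larger effective batch size.

```python
import numpy as np

learning_rate = 0.1
accum_steps = 4
weights = np.array([1.0, -2.0])

# Hypothetical per-micro-batch gradients (stand-ins for backprop output).
micro_batch_grads = [
    np.array([0.2, 0.4]),
    np.array([0.6, 0.0]),
    np.array([0.2, 0.4]),
    np.array([0.2, 0.0]),
]

accumulated = np.zeros_like(weights)
for grad in micro_batch_grads:
    accumulated += grad            # accumulate only; no optimizer step yet

# One step with the averaged gradient, as if it came from one big batch.
weights -= learning_rate * accumulated / accum_steps
print(weights)  # [ 0.97 -2.02]
```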
Full Changelog: v3.0.1...v3.0.2
Keras 3.0.1
This is a minor release focused on bug fixes and performance improvements.
What's Changed
- Bug fixes and performance improvements.
- Add `stop_evaluating` and `stop_predicting` model attributes for callbacks, similar to `stop_training`.
- Add `keras.device()` scope for managing device placement in a multi-backend way.
- Support dict items in `PyDataset`.
- Add `hard_swish` activation and op.
- Fix cuDNN LSTM performance on TensorFlow backend.
- Add a `force_download` arg to `get_file` to force cache invalidation.
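`hard_swish` follows the commonly used MobileNetV3 definition, `x * relu6(x + 3) / 6`; a small NumPy sketch of that formula (not the Keras source):

```python
import numpy as np

def hard_swish(x):
    # x * relu6(x + 3) / 6: a cheap piecewise-linear approximation of
    # swish(x) = x * sigmoid(x); it is 0 for x <= -3 and x for x >= 3.
    return x * np.clip(x + 3.0, 0.0, 6.0) / 6.0

y = hard_swish(np.array([-4.0, -1.0, 0.0, 4.0]))
# -4 and 0 map to 0, -1 maps to -1/3, and 4 passes through unchanged.
```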
Full Changelog: v3.0.0...v3.0.1
Keras 3.0.0
Major updates
See the release announcement for a detailed list of major changes. Main highlights compared to Keras 2 are:
- Keras can now be run on top of JAX, PyTorch, TensorFlow, and even NumPy (note that the NumPy backend is inference-only).
- New low-level `keras.ops` API for building cross-framework components.
- New large-scale model distribution `keras.distribution` based on JAX.
- New stateless API for layers, models, optimizers, and metrics.
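The cross-framework idea behind `keras.ops` can be illustrated with a toy dispatch table (a sketch of the concept in plain Python/NumPy; the names `_BACKENDS` and `op` are hypothetical, not Keras internals): components are written once against a neutral op interface, and each backend supplies the concrete implementations.

```python
import numpy as np

# One registered backend; a real system would also register JAX, torch, etc.
_BACKENDS = {
    "numpy": {
        "matmul": np.matmul,
        "relu": lambda x: np.maximum(x, 0),
    }
}
_active = "numpy"

def op(name, *args):
    # Dispatch the named op to whichever array library is active.
    return _BACKENDS[_active][name](*args)

x = np.array([[-1.0, 2.0]])
w = np.array([[3.0], [4.0]])
y = op("relu", op("matmul", x, w))  # same call works for any backend
print(y)  # [[5.]]
```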
Breaking changes
See this thread for a complete list of breaking changes, as well as the Keras 3 migration guide.