
Releases: keras-team/keras

Keras 2.1.5

06 Mar 22:07

Areas of improvement

  • Bug fixes.
  • New APIs: sequence generation API TimeseriesGenerator, and new layer DepthwiseConv2D.
  • Unit tests / CI improvements.
  • Documentation improvements.

API changes

  • Add new sequence generation API keras.preprocessing.sequence.TimeseriesGenerator.
  • Add new convolutional layer keras.layers.DepthwiseConv2D.
  • Allow weights from keras.layers.CuDNNLSTM to be loaded into a keras.layers.LSTM layer (e.g. for inference on CPU).
  • Add brightness_range data augmentation argument in keras.preprocessing.image.ImageDataGenerator.
  • Add validation_split argument in keras.preprocessing.image.ImageDataGenerator. Pass validation_split (a float) to the constructor, then select the training or validation subset by passing subset='training' or subset='validation' to the flow and flow_from_directory methods (see the sketch below).
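
A minimal sketch of the two new preprocessing APIs; the train_images/ directory of class subfolders is a hypothetical placeholder:

```python
import numpy as np
from keras.preprocessing.sequence import TimeseriesGenerator
from keras.preprocessing.image import ImageDataGenerator

# TimeseriesGenerator: slide a window of `length` timesteps over `data`
# and pair each window with the target that follows it.
data = np.arange(100, dtype='float32').reshape(-1, 1)
targets = data  # predict the series itself, one step ahead
gen = TimeseriesGenerator(data, targets, length=10, batch_size=8)
x_batch, y_batch = gen[0]  # x_batch: (8, 10, 1), y_batch: (8, 1)

# validation_split + subset: one generator, two disjoint subsets.
datagen = ImageDataGenerator(rescale=1. / 255, validation_split=0.2)
train_flow = datagen.flow_from_directory('train_images/', subset='training')
val_flow = datagen.flow_from_directory('train_images/', subset='validation')
```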

Breaking changes

  • As a side effect of a refactor of ConvLSTM2D to a modular implementation, recurrent dropout support in Theano has been dropped for this layer.

Credits

Thanks to our 28 contributors whose commits are featured in this release:

@DomHudson, @Dref360, @VitamintK, @abrad1212, @ahundt, @bojone, @brainnoise, @bzamecnik, @caisq, @cbensimon, @davinnovation, @farizrahman4u, @fchollet, @gabrieldemarmiesse, @khosravipasha, @ksindi, @lenjoy, @masstomato, @mewwts, @ozabluda, @paulpister, @sandpiturtle, @saralajew, @srjoglekar246, @stefangeneralao, @taehoonlee, @tiangolo, @treszkai

Keras 2.1.4

13 Feb 23:53

Areas of improvement

  • Bug fixes
  • Performance improvements
  • Improvements to example scripts

API changes

  • Allow for stateful metrics in model.compile(..., metrics=[...]). A stateful metric inherits from Layer, and implements __call__ and reset_states (see the sketch after this list).
  • Support constants argument in StackedRNNCells.
  • Enable some TensorBoard features in the TensorBoard callback (loss and metrics plotting) with non-TensorFlow backends.
  • Add reshape argument in model.load_weights(), to optionally reshape weights being loaded to match the shape of the corresponding target weights in the model.
  • Add tif to supported formats in ImageDataGenerator.
  • Allow auto-GPU selection in multi_gpu_model() (set gpus=None).
  • In LearningRateScheduler callback, the scheduling function now takes an argument: lr, the current learning rate.
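
A hedged sketch of a stateful metric, assuming a binary classification setup; BinaryTruePositives below is a hypothetical example class, not a library metric:

```python
import keras.backend as K
from keras.layers import Dense, Layer
from keras.models import Sequential

class BinaryTruePositives(Layer):
    """Counts true positives over an epoch instead of averaging per batch."""

    def __init__(self, name='true_positives', **kwargs):
        super(BinaryTruePositives, self).__init__(name=name, **kwargs)
        self.stateful = True  # tells Keras not to average this metric over batches
        self.true_positives = K.variable(0, dtype='int32')

    def reset_states(self):
        # Called at the start of each epoch and of each evaluation run.
        K.set_value(self.true_positives, 0)

    def __call__(self, y_true, y_pred):
        y_true = K.cast(K.round(y_true), 'int32')
        y_pred = K.cast(K.round(y_pred), 'int32')
        batch_tp = K.sum(y_true * y_pred)
        current_tp = self.true_positives * 1  # read the running count before the update
        self.add_update(K.update_add(self.true_positives, batch_tp))
        return current_tp + batch_tp

model = Sequential([Dense(1, activation='sigmoid', input_shape=(8,))])
model.compile(optimizer='sgd', loss='binary_crossentropy',
              metrics=['accuracy', BinaryTruePositives()])
```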

Breaking changes

  • In ImageDataGenerator, change default interpolation of image transforms from nearest to bilinear. This should probably not break any users, but it is a change of behavior.

Credits

Thanks to our 37 contributors whose commits are featured in this release:

@DalilaSal, @Dref360, @galaxydream, @GarrisonJ, @Max-Pol, @May4m, @MiliasV, @MrMYHuang, @N-McA, @Vijayabhaskar96, @abrad1212, @ahundt, @angeloskath, @bbabenko, @bojone, @brainnoise, @bzamecnik, @caisq, @cclauss, @dsadulla, @fchollet, @gabrieldemarmiesse, @ghostplant, @gorogoroyasu, @icyblade, @kapsl, @kevinbache, @mendesmiguel, @mikesol, @myutwo150, @ozabluda, @sadreamer, @simra, @taehoonlee, @veniversum, @yongtang, @zhangwj618

Keras 2.1.3

16 Jan 05:47

Areas of improvement

  • Performance improvements (esp. convnets with TensorFlow backend).
  • Usability improvements.
  • Docs & docstrings improvements.
  • New models in the applications module.
  • Bug fixes.

API changes

  • Setting the trainable attribute of BatchNormalization to False now also disables the updates of the batch statistics (i.e. the layer now runs entirely in inference mode).
  • Add amsgrad argument in Adam optimizer.
  • Add new applications: NASNetMobile, NASNetLarge, DenseNet121, DenseNet169, DenseNet201.
  • Add Softmax layer (removing the need to use a Lambda layer in order to specify the axis argument; see the sketch after this list).
  • Add SeparableConv1D layer.
  • In preprocessing.image.ImageDataGenerator, allow width_shift_range and height_shift_range to take integer values (absolute number of pixels).
  • Support return_state in Bidirectional applied to RNNs (return_state should be set on the child layer).
  • The string values "crossentropy" and "ce" are now allowed in the metrics argument (in model.compile()), and are routed to either categorical_crossentropy or binary_crossentropy as needed.
  • Allow steps argument in predict_* methods on the Sequential model.
  • Add oov_token argument in preprocessing.text.Tokenizer.
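
A small sketch of the new Softmax layer and the "ce" metric shorthand; the toy architecture is arbitrary and only for illustration:

```python
from keras.layers import Dense, Reshape, Softmax
from keras.models import Sequential

# Softmax over a chosen axis, with no Lambda layer needed; 'ce' is routed
# to categorical_crossentropy here given the categorical loss and output.
model = Sequential([
    Dense(12, input_shape=(8,)),
    Reshape((3, 4)),
    Softmax(axis=-1),  # normalize each row of the (3, 4) output
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['ce'])
```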

Breaking changes

  • In preprocessing.image.ImageDataGenerator, shear_range has been switched to use degrees rather than radians (for consistency). This should not actually break anything (neither training nor inference), but keep this change in mind in case you see any issues with regard to your image data augmentation process.

Credits

Thanks to our 45 contributors whose commits are featured in this release:

@Dref360, @OliPhilip, @TimZaman, @bbabenko, @bdwyer2, @berkatmaca, @caisq, @decrispell, @dmaniry, @fchollet, @fgaim, @gabrieldemarmiesse, @gklambauer, @hgaiser, @hlnull, @icyblade, @jgrnt, @kashif, @kouml, @lutzroeder, @m-mohsen, @mab4058, @manashty, @masstomato, @mihirparadkar, @myutwo150, @nickbabcock, @novotnj3, @obsproth, @ozabluda, @philferriere, @piperchester, @pstjohn, @roatienza, @souptc, @spiros, @srs70187, @sumitgouthaman, @taehoonlee, @tigerneil, @titu1994, @tobycheese, @vitaly-krumins, @yang-zhang, @ziky90

Keras 2.1.2

01 Dec 18:28

Areas of improvement

  • Bug fixes and performance improvements.
  • API improvements in Keras applications, generator methods.

API changes

  • Make preprocess_input in all Keras applications compatible with both Numpy arrays and symbolic tensors (previously only supported Numpy arrays).
  • Allow the weights argument in all Keras applications to accept the path to a custom weights file to load (previously only the built-in imagenet weights file was supported); see the sketch after this list.
  • steps_per_epoch behavior change in generator training/evaluation methods:
    • If specified, the value passed is used (previously, for a generator of type Sequence, the specified value was overridden by the Sequence length)
    • If unspecified and if the generator passed is a Sequence, we set it to the Sequence length.
  • Allow workers=0 in generator training/evaluation methods (will run the generator in the main process, in a blocking way).
  • Add interpolation argument in ImageDataGenerator.flow_from_directory, allowing a custom interpolation method for image resizing.
  • Allow gpus argument in multi_gpu_model to be a list of specific GPU ids.
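
A sketch combining two of these changes: running preprocess_input on a symbolic tensor so preprocessing becomes part of the graph, and (in the comment) loading application weights from a local file. The my_vgg16_weights.h5 path is a hypothetical example:

```python
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.layers import Input, Lambda
from keras.models import Model

inputs = Input(shape=(224, 224, 3))
x = Lambda(preprocess_input)(inputs)  # preprocess_input on a symbolic tensor

# The weights argument also accepts a path to a local weights file,
# e.g. VGG16(weights='my_vgg16_weights.h5')  -- hypothetical path.
base = VGG16(weights=None, include_top=False)
model = Model(inputs, base(x))
```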

Breaking changes

  • The change in steps_per_epoch behavior (described above) may affect some users.

Credits

Thanks to our 26 contributors whose commits are featured in this release:

@Alex1729, @alsrgv, @apisarek, @asos-saul, @athundt, @cherryunix, @dansbecker, @datumbox, @de-vri-es, @drauh, @evhub, @fchollet, @heath730, @hgaiser, @icyblade, @jjallaire, @knaveofdiamonds, @lance6716, @luoch, @mjacquem1, @myutwo150, @ozabluda, @raviksharma, @rh314, @yang-zhang, @zach-nervana

Keras 2.1.1

14 Nov 21:41

This release amends release 2.1.0 to include a fix for an erroneous breaking change introduced in #8419.

Keras 2.1.0

13 Nov 20:46

This is a small release that fixes outstanding bugs that were reported since the previous release.

Areas of improvement

  • Bug fixes (in particular, Keras no longer allocates devices at startup time with the TensorFlow backend. This was causing issues with Horovod.)
  • Documentation and docstring improvements.
  • Better CIFAR10 ResNet example script and improvements to example scripts code style.

API changes

  • Add go_backwards to cuDNN RNNs, which enables the Bidirectional wrapper on cuDNN RNNs (see the sketch after this list).
  • Add ability to pass fetches to K.Function() with the TensorFlow backend.
  • Add steps_per_epoch and validation_steps arguments in Sequential.fit() (to sync it with Model.fit()).
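
A minimal sketch of the Bidirectional wrapper around a cuDNN RNN; it requires a CUDA-enabled GPU and the TensorFlow backend, and the shapes are arbitrary:

```python
from keras.layers import Bidirectional, CuDNNLSTM, Dense
from keras.models import Sequential

# go_backwards support on the cuDNN layers is what makes the backward
# half of the Bidirectional wrapper possible here.
model = Sequential([
    Bidirectional(CuDNNLSTM(32), input_shape=(100, 16)),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
```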

Breaking changes

None.

Credits

Thanks to our 14 contributors whose commits are featured in this release:

@Dref360, @LawnboyMax, @anj-s, @bzamecnik, @datumbox, @diogoff, @farizrahman4u, @fchollet, @frexvahi, @jjallaire, @nsuh, @ozabluda, @roatienza, @yakigac

Keras 2.0.9

01 Nov 21:01

Areas of improvement

  • RNN improvements:
    • Refactor RNN layers to rely on atomic RNN cells. This makes the creation of custom RNNs very simple and user-friendly, via the RNN base class.
    • Add ability to create new RNN cells by stacking a list of cells, allowing for efficient stacked RNNs.
    • Add CuDNNLSTM and CuDNNGRU layers, backed by NVIDIA's cuDNN library, for fast GPU training & inference.
    • Add RNN Sequence-to-sequence example script.
    • Add constants argument in RNN's call method, making RNN attention easier to implement.
  • Easier multi-GPU data parallelism via keras.utils.multi_gpu_model.
  • Bug fixes & performance improvements (in particular, native support for NCHW data layout in TensorFlow).
  • Documentation improvements and examples improvements.

API changes

  • Add "fashion mnist" dataset as keras.datasets.fashion_mnist.load_data()
  • Add Minimum merge layer as keras.layers.Minimum (class) and keras.layers.minimum(inputs) (function)
  • Add InceptionResNetV2 to keras.applications.
  • Support bool variables in TensorFlow backend.
  • Add dilation to SeparableConv2D.
  • Add support for dynamic noise_shape in Dropout.
  • Add keras.layers.RNN() base class for batch-level RNNs (used to implement custom RNN layers from a cell class; see the sketch after this list).
  • Add keras.layers.StackedRNNCells() layer wrapper, used to stack a list of RNN cells into a single cell.
  • Add CuDNNLSTM and CuDNNGRU layers.
  • Deprecate implementation=0 for RNN layers.
  • The Keras progbar now reports time taken for each past epoch, and average time per step.
  • Add option to specify the resampling method in keras.preprocessing.image.load_img().
  • Add keras.utils.multi_gpu_model for easy multi-GPU data parallelism.
  • Add constants argument in RNN's call method, used to pass a list of constant tensors to the underlying RNN cell.
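
A minimal custom-cell sketch built on the new RNN base class; MinimalRNNCell below is a hypothetical example cell, not a library class:

```python
import keras.backend as K
from keras.layers import Dense, Layer, RNN
from keras.models import Sequential

class MinimalRNNCell(Layer):
    """Hypothetical cell: output_t = tanh(x_t . W + output_{t-1} . U)."""

    def __init__(self, units, **kwargs):
        self.units = units
        self.state_size = units  # required by the RNN wrapper
        super(MinimalRNNCell, self).__init__(**kwargs)

    def build(self, input_shape):
        self.kernel = self.add_weight(shape=(input_shape[-1], self.units),
                                      initializer='uniform', name='kernel')
        self.recurrent_kernel = self.add_weight(shape=(self.units, self.units),
                                                initializer='uniform',
                                                name='recurrent_kernel')
        self.built = True

    def call(self, inputs, states):
        prev_output = states[0]
        output = K.tanh(K.dot(inputs, self.kernel) +
                        K.dot(prev_output, self.recurrent_kernel))
        return output, [output]

# A list of cells could be passed instead, to build an efficient stacked RNN.
model = Sequential([RNN(MinimalRNNCell(32), input_shape=(None, 8)), Dense(1)])
model.compile(optimizer='adam', loss='mse')
```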

Breaking changes

  • Implementation change in keras.losses.cosine_proximity results in a different (correct) scaling behavior.
  • Implementation change for samplewise normalization in ImageDataGenerator results in a different normalization behavior.

Credits

Thanks to our 59 contributors whose commits are featured in this release!

@alok, @Danielhiversen, @Dref360, @HelgeS, @JakeBecker, @MPiecuch, @MartinXPN, @RitwikGupta, @TimZaman, @adammenges, @aeftimia, @ahojnnes, @akshaychawla, @alanyee, @aldenks, @andhus, @apbard, @aronj, @bangbangbear, @bchu, @bdwyer2, @bzamecnik, @cclauss, @colllin, @datumbox, @deltheil, @dhaval067, @durana, @ericwu09, @facaiy, @farizrahman4u, @fchollet, @flomlo, @fran6co, @grzesir, @hgaiser, @icyblade, @jsaporta, @julienr, @jussihuotari, @kashif, @lucashu1, @mangerlahn, @myutwo150, @nicolewhite, @noahstier, @nzw0301, @olalonde, @ozabluda, @patrikerdes, @podhrmic, @qin, @raelg, @roatienza, @shadiakiki1986, @smgt, @souptc, @taehoonlee, @y0z

Keras 2.0.8

25 Aug 19:21

The primary purpose of this release is to address an incompatibility between Keras 2.0.7 and the next version of TensorFlow (1.4). TensorFlow 1.4 isn't due for a while, but the sooner the fix is on PyPI, the fewer people will be affected when they upgrade to the next TensorFlow version once it is released.

No API changes for this release. A few bug fixes.

Keras 2.0.7

21 Aug 23:31

Areas of improvement

  • Bug fixes.
  • Performance improvements.
  • Documentation improvements.
  • Better support for training models from data tensors in TensorFlow (e.g. Datasets, TFRecords). Add a related example script.
  • Improve TensorBoard UX with better grouping of ops into name scopes.
  • Improve test coverage.

API changes

  • Add clone_model method, making it possible to construct a new model given an existing model to use as a template (see the sketch after this list). Works even in a TensorFlow graph different from that of the original model.
  • Add target_tensors argument in compile, making it possible to use custom tensors or placeholders as model targets.
  • Add steps_per_epoch argument in fit, making it possible to train a model from data tensors in a way that is consistent with training from Numpy arrays.
  • Similarly, add steps argument in predict and evaluate.
  • Add Subtract merge layer, and associated layer function subtract.
  • Add weighted_metrics argument in compile to specify metric functions meant to take into account sample_weight or class_weight.
  • Make the stop_gradients backend function consistent across backends.
  • Allow dynamic shapes in repeat_elements backend function.
  • Enable stateful RNNs with CNTK.
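
A short sketch of clone_model together with weighted_metrics; the toy architecture is arbitrary:

```python
from keras.layers import Dense
from keras.models import Sequential, clone_model

reference = Sequential([Dense(8, activation='relu', input_shape=(4,)),
                        Dense(1)])

# clone_model rebuilds the same architecture with freshly initialized
# weights; copy them over explicitly if the original values are needed.
replica = clone_model(reference)
replica.set_weights(reference.get_weights())
replica.compile(optimizer='sgd', loss='mse',
                weighted_metrics=['mae'])  # metrics weighted by sample_weight
```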

Breaking changes

  • The backend methods categorical_crossentropy, sparse_categorical_crossentropy, binary_crossentropy had the order of their positional arguments (y_true, y_pred) inverted. This change does not affect the losses API. This change was done to achieve API consistency between the losses API and the backend API.
  • Move constraint management to be based on variable attributes. Remove the now-unused constraints attribute on layers and models (not expected to affect any user).

Credits

Thanks to our 47 contributors whose commits are featured in this release!

@5ke, @alok, @Danielhiversen, @Dref360, @NeilRon, @abnerA, @acburigo, @airalcorn2, @angeloskath, @athundt, @brettkoonce, @cclauss, @denfromufa, @enkait, @erg, @ericwu09, @farizrahman4u, @fchollet, @georgwiese, @ghisvail, @gokceneraslan, @hgaiser, @inexxt, @joeyearsley, @jorgecarleitao, @kennyjacob, @keunwoochoi, @krizp, @lukedeo, @milani, @n17r4m, @nicolewhite, @nigeljyng, @nyghtowl, @nzw0301, @rapatel0, @souptc, @srinivasreddy, @staticfloat, @taehoonlee, @td2014, @titu1994, @tleeuwenburg, @udibr, @waleedka, @wassname, @yashk2810

Keras 2.0.6

07 Jul 20:59

Areas of improvement

  • Improve generator methods (predict_generator, fit_generator, evaluate_generator) and add data enqueuing utilities.
  • Bug fixes and performance improvements.
  • New features: new Conv3DTranspose layer, new MobileNet application, self-normalizing networks.

API changes

  • Self-normalizing networks: add selu activation function, AlphaDropout layer, lecun_normal initializer (see the sketch after this list).
  • Data enqueuing: add Sequence, SequenceEnqueuer, GeneratorEnqueuer to utils.
  • Generator methods: rename arguments pickle_safe (replaced with use_multiprocessing) and max_q_size (replaced with max_queue_size).
  • Add MobileNet to the applications module.
  • Add Conv3DTranspose layer.
  • Allow custom print functions for model's summary method (argument print_fn).
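
A minimal self-normalizing MLP sketch using the three new pieces together; layer sizes and the input shape are arbitrary:

```python
from keras.layers import AlphaDropout, Dense
from keras.models import Sequential

# selu activations + lecun_normal init keep activations roughly zero-mean,
# unit-variance; AlphaDropout preserves that property, unlike plain Dropout.
model = Sequential([
    Dense(64, activation='selu', kernel_initializer='lecun_normal',
          input_shape=(20,)),
    AlphaDropout(0.1),
    Dense(64, activation='selu', kernel_initializer='lecun_normal'),
    AlphaDropout(0.1),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
```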