Commit

Update
lucadellalib committed Nov 7, 2022
1 parent 0379b26 commit cdca066
Showing 47 changed files with 2,370 additions and 238 deletions.
3 changes: 3 additions & 0 deletions .gitignore
@@ -119,3 +119,6 @@ dmypy.json

# Visual Studio Code
.vscode/

# Data
*/data/*
6 changes: 4 additions & 2 deletions .pre-commit-config.yaml
@@ -6,12 +6,14 @@ repos:
language: system
entry: isort
types: [python]
exclude: (^examples/mnist|^examples/regression)

- id: black
name: black
language: system
entry: black
types: [python]
exclude: (^examples/mnist|^examples/regression)

- id: trailing-whitespace
name: trailing-whitespace
@@ -56,7 +58,7 @@ repos:
language: system
entry: flake8
types: [python]
exclude: /__init__\.py$
exclude: (^examples/mnist|^examples/regression|/__init__\.py)

- id: flake8
name: flake8 only __init__.py
@@ -65,7 +67,7 @@
types: [python]
# Ignore unused imports in __init__.py
args: ["--extend-ignore=F401"]
files: /__init__\.py$
files: /__init__\.py

- id: pytest
name: pytest
67 changes: 67 additions & 0 deletions NOTICE
@@ -0,0 +1,67 @@
This project incorporates components from the projects listed below. The original copyright notices are set forth below.

#############################################################################################################################################################

1. Code in bayestorch/optimizers/{sghmc, sgld}.py adapted from:
https://github.com/JavierAntoran/Bayesian-Neural-Networks/blob/1f867a5bcbd1abfecede99807eb0b5f97ed8be7c/src/Stochastic_Gradient_HMC_SA/optimizers.py#L1
https://github.com/JavierAntoran/Bayesian-Neural-Networks/blob/1f867a5bcbd1abfecede99807eb0b5f97ed8be7c/src/Stochastic_Gradient_Langevin_Dynamics/optimizers.py#L1

MIT License

Copyright (c) 2019 Javier Antoran

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

#############################################################################################################################################################

2. Code in examples/{mnist, regression}/{train_bbb, train_mcmc, train_svgd}.py adapted from:
https://github.com/pytorch/examples/blob/9aad148615b7519eadfa1a60356116a50561f192/mnist/main.py#L1
https://github.com/pytorch/examples/blob/9aad148615b7519eadfa1a60356116a50561f192/regression/main.py#L1

BSD 3-Clause License

Copyright (c) 2017,
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.

* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

#############################################################################################################################################################
16 changes: 10 additions & 6 deletions README.md
@@ -1,23 +1,25 @@
# Bayestorch
# BayesTorch

[![Python version: 3.8 | 3.9 | 3.10](https://img.shields.io/badge/python-3.8|%203.9%20|%203.10-blue)](https://www.python.org/downloads/)
[![Python version: 3.6 | 3.7 | 3.8 | 3.9 | 3.10](https://img.shields.io/badge/python-3.6%20|%203.7%20|%203.8%20|%203.9%20|%203.10-blue)](https://www.python.org/downloads/)
[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://github.com/lucadellalib/bayestorch/blob/main/LICENSE)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![Imports: isort](https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336)](https://github.com/PyCQA/isort)
[![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white)](https://github.com/pre-commit/pre-commit)

Welcome to `bayestorch`, a Bayesian deep learning library for fast prototyping based on
[PyTorch](https://pytorch.org). It provides the basic building blocks for the following
Bayesian inference algorithms:

- [Bayes by Backprop](https://arxiv.org/abs/1505.05424)
- [Stein variational gradient descent](https://arxiv.org/abs/1608.04471)

- [Bayes by Backprop (BBB)](https://arxiv.org/abs/1505.05424)
- [Markov Chain Monte Carlo (MCMC)](https://www.cs.toronto.edu/~radford/ftp/thesis.pdf)
- [Stein variational gradient descent (SVGD)](https://arxiv.org/abs/1608.04471)
---------------------------------------------------------------------------------------------------------

## 🛠️️ Installation

### From source

First of all, install [Python](https://www.python.org).
First of all, install [Python 3.6 or later](https://www.python.org).
Clone or download and extract the repository, navigate to `<path-to-repository>`, open a
terminal and run:

@@ -29,6 +31,8 @@ pip install -e .

## ▶️ Quickstart

See the examples in `examples/mnist` and `examples/regression`.

---------------------------------------------------------------------------------------------------------

## 📧 Contact
11 changes: 9 additions & 2 deletions bayestorch/__init__.py
@@ -16,5 +16,12 @@

"""Bayesian deep learning library for fast prototyping based on PyTorch."""

from . import distributions, kernels, losses, models, preconditioners
from .version import VERSION as __version__
from bayestorch import (
distributions,
kernels,
losses,
models,
optimizers,
preconditioners,
)
from bayestorch.version import VERSION as __version__
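As a quick check (assuming the package is installed), the switch to absolute imports also exposes the new `optimizers` subpackage from the top-level namespace:

```python
import bayestorch

# After this change, `optimizers` is importable alongside the existing subpackages.
print(bayestorch.__version__)
print(bayestorch.optimizers)
```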
3 changes: 2 additions & 1 deletion bayestorch/distributions/__init__.py
@@ -16,4 +16,5 @@

"""Distributions."""

from .log_scale_normal import *
from bayestorch.distributions.log_scale_normal import *
from bayestorch.distributions.softplus_inv_scale_normal import *
35 changes: 29 additions & 6 deletions bayestorch/distributions/log_scale_normal.py
@@ -18,8 +18,8 @@

from typing import Optional, Union

from torch import Tensor
from torch.distributions import Normal
from torch import Size, Tensor
from torch.distributions import Normal, constraints


__all__ = [
@@ -28,17 +28,27 @@


class LogScaleNormal(Normal):
"""Normal distribution parameterized by `loc`
and `log_scale` parameters.
"""Normal distribution parameterized by location
and log scale parameters.
Scale parameter is computed as `exp(log_scale)`.
Examples
--------
>>> from bayestorch.distributions import LogScaleNormal
>>>
>>>
>>> loc = 0.0
>>> log_scale = -1.0
>>> distribution = LogScaleNormal(loc, log_scale)
"""

arg_constraints = {
"loc": constraints.real,
"log_scale": constraints.real,
} # override

# override
def __init__(
self,
@@ -51,9 +61,22 @@ def __init__(
# override
@property
def scale(self) -> "Tensor":
return self._log_scale.exp()
return self.log_scale.exp()

# override
@scale.setter
def scale(self, value: "Tensor") -> "None":
self._log_scale = value
self.log_scale = value

# override
def expand(
self,
batch_shape: "Size" = Size(), # noqa: B008
_instance: "Optional[LogScaleNormal]" = None,
) -> "LogScaleNormal":
new = self._get_checked_instance(LogScaleNormal, _instance)
loc = self.loc.expand(batch_shape)
log_scale = self.log_scale.expand(batch_shape)
super(LogScaleNormal, new).__init__(loc, log_scale, validate_args=False)
new._validate_args = self._validate_args
return new
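The log-scale parameterization keeps both distribution parameters unconstrained, which is convenient for gradient-based variational inference: an optimizer can update `log_scale` freely while the scale `exp(log_scale)` stays positive by construction. A minimal usage sketch (tensor shapes are illustrative; `rsample` is inherited from `torch.distributions.Normal`):

```python
import torch
from torch import nn

from bayestorch.distributions import LogScaleNormal

# Unconstrained variational parameters: a plain gradient step can never
# push the scale out of its valid (positive) range.
loc = nn.Parameter(torch.zeros(10))
log_scale = nn.Parameter(torch.full((10,), -1.0))

posterior = LogScaleNormal(loc, log_scale)
sample = posterior.rsample()  # reparameterized sample, differentiable w.r.t. loc and log_scale
print(posterior.scale)        # exp(log_scale), always positive
```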
85 changes: 85 additions & 0 deletions bayestorch/distributions/softplus_inv_scale_normal.py
@@ -0,0 +1,85 @@
# ==============================================================================
# Copyright 2022 Luca Della Libera.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================

"""Inverse softplus scale normal distribution."""

from typing import Optional, Union

import torch.nn.functional as F
from torch import Size, Tensor
from torch.distributions import Normal, constraints


__all__ = [
"SoftplusInvScaleNormal",
]


class SoftplusInvScaleNormal(Normal):
"""Normal distribution parameterized by location
and inverse softplus scale parameters.
Scale parameter is computed as `softplus(softplus_inv_scale)`.
Examples
--------
>>> from bayestorch.distributions import SoftplusInvScaleNormal
>>>
>>>
>>> loc = 0.0
>>> softplus_inv_scale = -1.0
>>> distribution = SoftplusInvScaleNormal(loc, softplus_inv_scale)
"""

arg_constraints = {
"loc": constraints.real,
"softplus_inv_scale": constraints.real,
} # override

# override
def __init__(
self,
loc: "Union[int, float, Tensor]",
softplus_inv_scale: "Union[int, float, Tensor]",
validate_args: "Optional[bool]" = None,
) -> "None":
super().__init__(loc, softplus_inv_scale, validate_args)

# override
@property
def scale(self) -> "Tensor":
return F.softplus(self.softplus_inv_scale)

# override
@scale.setter
def scale(self, value: "Tensor") -> "None":
self.softplus_inv_scale = value

# override
def expand(
self,
batch_shape: "Size" = Size(), # noqa: B008
_instance: "Optional[SoftplusInvScaleNormal]" = None,
) -> "SoftplusInvScaleNormal":
new = self._get_checked_instance(SoftplusInvScaleNormal, _instance)
loc = self.loc.expand(batch_shape)
softplus_inv_scale = self.softplus_inv_scale.expand(batch_shape)
super(SoftplusInvScaleNormal, new).__init__(
loc, softplus_inv_scale, validate_args=False
)
new._validate_args = self._validate_args
return new
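Both new distributions map an unconstrained real parameter to a positive scale and differ only in the transform: `exp` shrinks and grows exponentially, while `softplus` is approximately linear for large inputs. A small numeric comparison (illustrative values only):

```python
import torch
import torch.nn.functional as F

x = torch.tensor([-4.0, -1.0, 0.0, 1.0, 4.0])
print(x.exp())        # tensor([ 0.0183,  0.3679,  1.0000,  2.7183, 54.5981])
print(F.softplus(x))  # log(1 + exp(x)) -> tensor([0.0181, 0.3133, 0.6931, 1.3133, 4.0181])
```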
2 changes: 1 addition & 1 deletion bayestorch/kernels/__init__.py
@@ -16,4 +16,4 @@

"""Kernels."""

from .rbf_stein_kernel import *
from bayestorch.kernels.rbf_stein_kernel import *
13 changes: 10 additions & 3 deletions bayestorch/kernels/rbf_stein_kernel.py
@@ -46,6 +46,8 @@ class RBFSteinKernel:
--------
>>> import torch
>>>
>>> from bayestorch.kernels import RBFSteinKernel
>>>
>>>
>>> num_particles = 5
>>> particle_size = 1000
@@ -70,11 +72,16 @@ def __call__(self, particles: "Tensor") -> "Tuple[Tensor, Tensor]":
Returns
-------
- The kernels, shape: ``[N, N]``;
- the kernel gradients w.r.t. to the
particles, shape: ``[N, D]``.
- the kernel gradients with respect
to the particles, shape: ``[N, D]``.
"""
num_particles = len(particles)
num_particles = particles.shape[0]
return self._forward(particles, num_particles)

@staticmethod
@torch.jit.script
def _forward(particles: "Tensor", num_particles: "int") -> "Tuple[Tensor, Tensor]":
deltas = torch.cdist(particles, particles)
squared_deltas = deltas**2
bandwidth = squared_deltas.median() / math.log(num_particles)
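The diff is cut off before the end of `_forward`, but the visible lines show the standard median heuristic for the kernel bandwidth. Below is a sketch of how an RBF kernel with this bandwidth choice is typically completed for SVGD; the exponentiation and the analytic gradient are assumptions, not part of the visible diff:

```python
import math

import torch


def rbf_kernel_median_heuristic(particles: torch.Tensor):
    # particles: [N, D] -> kernels: [N, N], kernel gradients: [N, D]
    num_particles = particles.shape[0]
    deltas = torch.cdist(particles, particles)  # pairwise Euclidean distances, [N, N]
    squared_deltas = deltas ** 2
    # Median heuristic (as in the visible diff lines): h = median(d^2) / log(N)
    bandwidth = squared_deltas.median() / math.log(num_particles)
    kernels = (-squared_deltas / bandwidth).exp()  # k_ij = exp(-||x_i - x_j||^2 / h)
    # Analytic gradient of sum_j k_ij with respect to x_i
    kernel_grads = (2 / bandwidth) * (
        kernels @ particles - kernels.sum(dim=-1, keepdim=True) * particles
    )
    return kernels, kernel_grads


kernels, kernel_grads = rbf_kernel_median_heuristic(torch.randn(5, 1000))
```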
4 changes: 2 additions & 2 deletions bayestorch/losses/__init__.py
@@ -16,5 +16,5 @@

"""Losses."""

from .elbo_loss import *
from .nlup_loss import *
from bayestorch.losses.elbo_loss import *
from bayestorch.losses.nlup_loss import *
13 changes: 9 additions & 4 deletions bayestorch/losses/elbo_loss.py
@@ -41,6 +41,8 @@ class ELBOLoss(loss._Loss):
--------
>>> import torch
>>>
>>> from bayestorch.losses import ELBOLoss
>>>
>>>
>>> num_mc_samples = 5
>>> num_train_batches = 100
@@ -96,10 +98,13 @@ def forward(
kl_divs:
The Kullback-Leibler divergences, shape: ``[N]``.
kl_div_weight:
The Kullback-Leibler divergence weight (`M` in the literature).
It counterbalances the bias deriving from summing the log
likelihood over a single batch of data instead of the entire
dataset. It is usually set equal to the number of training batches.
The Kullback-Leibler divergence weight (`1 / M` in the literature).
According to reference [1], it counterbalances the bias deriving
from summing the log likelihood over a single batch of data instead
of over the entire dataset. A common choice is `1 / M`, where `M` is
the number of training batches. More generally, it controls the
strength of the regularization provided by the Kullback-Leibler
divergence term, and its optimal value depends on factors such as
model and dataset size.
Returns
-------
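The diff is truncated here, but the documented parameters are enough to sketch how an ELBO-style loss combines them (hypothetical tensors and reduction; the actual `ELBOLoss` forward may differ):

```python
import torch

num_mc_samples = 5
num_train_batches = 100  # M

# One log likelihood and one KL divergence per Monte Carlo sample, shape [N]
log_likelihoods = torch.randn(num_mc_samples)
kl_divs = torch.rand(num_mc_samples)

# The 1 / M weighting spreads the KL cost evenly across the training batches
kl_div_weight = 1 / num_train_batches

# Negative ELBO, averaged over Monte Carlo samples
loss = (-log_likelihoods + kl_div_weight * kl_divs).mean()
```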
