SPyRiT’s documentation

SPyRiT is a PyTorch-based deep image reconstruction package primarily designed for single-pixel imaging.

SPyRiT simulates measurements and performs image reconstruction using a full network structure. A full network takes a normalized image as input and performs data simulation and image reconstruction, either in a single forward pass or in separate steps. It generally consists of a measurement operator, a noise operator, a preprocessing operator, a reconstruction operator, and a learnable neural network. All operators inherit from PyTorch’s nn.Module class, which allows them to be easily combined into a full network.

Full network architecture sketch

The complete network contains two main parts: a physics simulation part that simulates measurements from images, and a reconstruction part that estimates the unknown image from measurements.

The physics simulation part consists of a measurement operator (\(\mathcal{P}\)) and a noise operator (\(\mathcal{N}\)).

The reconstruction part consists of a preprocessing operator (\(\mathcal{B}\)) that produces the pre-processed measurements from the noisy measurements, a reconstruction operator (\(\mathcal{R}\)) that estimates the unknown image from the pre-processed measurements, and an optional neural network (\(\mathcal{G_{\theta}}\)) that can be trained to improve the reconstruction quality.
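
As an illustration, the following minimal sketch chains such operators into a full network, using classes that are detailed in the tutorials below (an identity measurement matrix, a noiseless acquisition, and no learnable denoiser are assumed for simplicity):

import torch
from spyrit.core.meas import Linear
from spyrit.core.noise import NoNoise
from spyrit.core.prep import DirectPoisson
from spyrit.core.recon import PinvNet

h = 8  # a small 8x8 image, for illustration only
meas_op = Linear(torch.eye(h * h), pinv=True)  # measurement operator (identity matrix)
noise_op = NoNoise(meas_op)  # noiseless acquisition
prep_op = DirectPoisson(1.0, meas_op)  # compensates for the normalization of NoNoise
full_net = PinvNet(noise_op, prep_op)  # full network without a learnable denoiser

x = 2 * torch.rand(1, 1, h, h) - 1  # a random image with values in [-1, 1]
x_rec = full_net(x)  # data simulation and image reconstruction in a single forward pass

The simulation and reconstruction steps can also be run separately (e.g., with the acquire() and reconstruct() methods of PinvNet), as shown in the tutorials.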

Installation

The spyrit package is available for Linux, macOS, and Windows:

pip install spyrit

Advanced installation guidelines are available on GitHub.
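
For example, a development (editable) install from source might look as follows, assuming the repository hosted at https://github.com/openspyrit/spyrit (refer to the GitHub guidelines for the authoritative instructions):

git clone https://github.com/openspyrit/spyrit.git
cd spyrit
pip install -e .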

Single-pixel imaging

Modelling of the measurements

Single-pixel imaging aims to recover an image \(x\in\mathbb{R}^N\) from a few noisy scalar products \(y\in\mathbb{R}^M\), where \(M\ll N\). We model the acquisition as

\(y = (\mathcal{N} \circ \mathcal{P})(x),\)

where \(\mathcal{P}\) is a linear operator, \(\mathcal{N}\) is a noise operator, and \(\circ\) denotes the composition of operators.

Image reconstruction

Learning-based reconstruction approaches estimate the unknown image as \(x^* = \mathcal{I}_\theta(y)\), where \(\theta\) represents the parameters that are learned during a training phase. In the case of supervised learning, the training phase solves

\(\min_{\theta}{\sum_i \mathcal{L}\left(x_i,\mathcal{I}_\theta(y_i)\right)},\)

where \(\mathcal{L}\) is the training loss between the true image \(x\) and its estimate, and \(\{x_i,y_i\}_i\) is a set of training pairs.

Consider a typical reconstruction operator \(\mathcal{I}_\theta\), which can be written as:

\(\mathcal{I}_\theta = \mathcal{G}_\theta \circ \mathcal{R} \circ \mathcal{B},\)

where \(\mathcal{B}\) is a preprocessing operator, \(\mathcal{R}\) is a (standard) linear reconstruction operator, and \(\mathcal{G}_\theta\) is a neural network that can be trained during the training phase. Alternatively, \(\mathcal{G}_\theta\) can simply be plugged in; in this case, it is trained beforehand.

Combining the physics simulation and reconstruction parts, a forward pass through the full network can be written as follows:

\(\mathcal{F}_{\theta}(x) = (\mathcal{G}_\theta \circ \mathcal{R} \circ \mathcal{B} \circ \mathcal{N} \circ \mathcal{P})(x).\)

The full network can be trained using a database containing only images:

\(\min_{\theta}{\sum_i \mathcal{L}\left(x_i,\mathcal{F}_\theta(x_i)\right)}.\)

This pipeline allows noisy data to be simulated on the fly, providing data augmentation while avoiding storing the measurements.
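
As a rough sketch, such a training loop could look as follows, assuming a dataloader that yields images normalized to [-1,1]; the classes used here (Linear, Poisson, DirectPoisson, PinvNet, ConvNet) are introduced in the tutorials below:

import torch
from spyrit.core.meas import Linear
from spyrit.core.noise import Poisson
from spyrit.core.prep import DirectPoisson
from spyrit.core.recon import PinvNet
from spyrit.core.nnet import ConvNet

h, alpha = 64, 100  # image size and photon budget (illustrative values)
meas_op = Linear(torch.eye(h * h), pinv=True)  # toy measurement operator
noise_op = Poisson(meas_op, alpha)  # Poisson noise simulation
prep_op = DirectPoisson(alpha, meas_op)  # compensates for alpha and the normalization
model = PinvNet(noise_op, prep_op, denoi=ConvNet())  # full network with a learnable denoiser

criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for x, _ in dataloader:  # dataloader of images in [-1, 1], assumed to exist
    x_rec = model(x)  # simulate noisy measurements and reconstruct in one pass
    loss = criterion(x_rec, x)  # compare the reconstruction to the ground-truth image
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()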

Package structure

The main functionalities of SPyRiT are implemented in the subpackage spyrit.core, which contains six submodules:

  1. Measurement operators (meas) compute linear measurements \(\mathcal{P}x\) from images \(x\), where \(\mathcal{P}\) is a linear operator (matrix) and \(x\) is a vectorized image (see spyrit.core.meas).

  2. Noise operators (noise) corrupt measurements \(y=(\mathcal{N}\circ\mathcal{P})(x)\) with noise (see spyrit.core.noise).

  3. Preprocessing operators (prep) are used to process noisy measurements, \(m=\mathcal{B}(y)\), prior to reconstruction. They typically compensate for the image normalization previously performed (see spyrit.core.prep).

  4. Reconstruction operators (recon) comprise both standard linear reconstruction operators \(\mathcal{R}\) and full network definitions \(\mathcal{F}_\theta\), which include both forward and reconstruction layers (see spyrit.core.recon).

  5. Neural networks (nnet) include well-known neural networks \(\mathcal{G_{\theta}}\), generally used as denoiser layers (see spyrit.core.nnet).

  6. Training functions (train) provide the functionality for training reconstruction networks (see spyrit.core.train).

Subpackages

spyrit.core

spyrit.misc

Tutorials

Here you can find a series of Tutorials that will guide you through the use of Spyrit. It is recommended to follow them in order.

For each tutorial, please download the corresponding Python Script (.py) or Jupyter notebook (.ipynb) file at the bottom of the page. The images used in these tutorials can be found on this page of the Spyrit GitHub.

Below is a diagram of the entire image processing pipeline. Each tutorial focuses on a specific part of the pipeline.

Tutorial 1 focuses on the measurement operators, with or without noise.

Tutorial 2 presents the pseudo-inverse reconstruction process from the (possibly noisy) measurements.

Tutorial 3 adds a CNN to denoise the image if necessary.

Tutorial 4 is used to train the CNN introduced in Tutorial 3.

Tutorial 5 introduces a new type of measurement operator (‘split’) that simulates positive and negative measurements.

Tutorial 6 uses a Denoised Completion Network with a trainable image denoiser to improve the results obtained in Tutorial 5.

The bonus tutorial is for you if you want to go deeper into Spyrit’s capabilities.

Principle of the image processing pipeline


01. Acquisition operators

02. Pseudoinverse solution from linear measurements

03. Pseudoinverse solution + CNN denoising

04. Train pseudoinverse solution + CNN denoising

05. Acquisition operators (advanced) - Split measurements and subsampling

06. DCNet solution for split measurements

Bonus. Advanced methods - Colab

01. Acquisition operators

This tutorial shows how to simulate measurements using the spyrit.core submodule. The simulation is based on three modules:

  1. Measurement operators compute linear measurements \(y = Hx\) from images \(x\), where \(H\) is a linear operator (matrix) and \(x\) is a vectorized image (see spyrit.core.meas)

  2. Noise operators corrupt measurements \(y\) with noise (see spyrit.core.noise)

  3. Preprocessing operators are typically used to process the noisy measurements prior to reconstruction (see spyrit.core.prep)

Measurement, noise, and preprocessing sketches

These tutorials load image samples from /images/.

Load a batch of images

Images \(x\) used for training neural networks are expected to have values in [-1,1]. The images are normalized using the transform_gray_norm() function.

import os

import torch
import torchvision
import matplotlib.pyplot as plt

from spyrit.misc.disp import imagesc
from spyrit.misc.statistics import transform_gray_norm

# sphinx_gallery_thumbnail_path = 'fig/tuto1.png'

h = 64  # image size hxh
i = 1  # Image index (modify to change the image)
spyritPath = os.getcwd()
imgs_path = os.path.join(spyritPath, "images/")


# Create a transform for natural images to normalized grayscale image tensors
transform = transform_gray_norm(img_size=h)

# Create dataset and loader (expects class folder 'images/test/')
dataset = torchvision.datasets.ImageFolder(root=imgs_path, transform=transform)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=7)

x, _ = next(iter(dataloader))
print(f"Shape of input images: {x.shape}")

# Select image
x = x[i : i + 1, :, :, :]
x = x.detach().clone()
b, c, h, w = x.shape

# plot
x_plot = x.view(-1, h, h).cpu().numpy()
imagesc(x_plot[0, :, :], r"$x$ in [-1, 1]")
$x$ in [-1, 1]
Shape of input images: torch.Size([7, 1, 64, 64])
The measurement and noise operators

Noise operators are defined in the noise module. A noise operator computes the following three steps sequentially:

  1. Normalization of the image \(x\) with values in [-1,1] to get an image \(\tilde{x}=\frac{x+1}{2}\) in [0,1], as required for measurement simulation

  2. Application of the measurement model, i.e., computation of \(H\tilde{x}\)

  3. Application of the noise model

\[y \sim \texttt{Noise}(H\tilde{x}) = \texttt{Noise}\left(\frac{H(x+1)}{2}\right).\]

The normalization is useful when considering distributions such as the Poisson distribution, which are defined on positive values.
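
The following small sketch (a toy 4-pixel image and an identity measurement matrix are assumed) reproduces steps 1 and 2 by hand in the noiseless case and compares the result with the NoNoise operator used below:

import torch
from spyrit.core.meas import Linear
from spyrit.core.noise import NoNoise

meas_op = Linear(torch.eye(4))  # toy measurement matrix (identity, 4 pixels)
noise_op = NoNoise(meas_op)  # noiseless case: only steps 1 and 2 apply

x = torch.tensor([[-1.0, -0.5, 0.5, 1.0]])  # a "vectorized image" in [-1, 1]
y = noise_op(x)  # step 1 (normalization) followed by step 2 (measurement)
y_by_hand = meas_op((x + 1) / 2)  # the same two steps written explicitly
print(torch.allclose(y, y_by_hand))  # expected: True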

Note

The noise operator is constructed from a measurement operator (see the meas submodule) in order to compute the measurements \(H\tilde{x}\), as given by step #2.

A simple example: identity measurement matrix and no noise
\[y = \tilde{x}\]

We start with a simple example where the measurement matrix \(H\) is the identity, which can be handled by the more general spyrit.core.meas.Linear class. We consider the noiseless case handled by the spyrit.core.noise.NoNoise class.

from spyrit.core.meas import Linear
from spyrit.core.noise import NoNoise

meas_op = Linear(torch.eye(h * h))
noise_op = NoNoise(meas_op)

We simulate the measurement vector \(y\) that we visualise as an image. Remember that the input image \(x\) is handled as a vector.

x = x.view(b * c, h * w)  # vectorized image
print(f"Shape of vectorized image: {x.shape}")
y_eye = noise_op(x)  # noisy measurement vector
print(f"Shape of simulated measurements y: {y_eye.shape}")

# plot
x_plot = y_eye.view(-1, h, h).cpu().numpy()
imagesc(x_plot[0, :, :], r"$\tilde{x}$ in [0, 1]")
$\tilde{x}$ in [0, 1]
Shape of vectorized image: torch.Size([1, 4096])
Shape of simulated measurements y: torch.Size([1, 4096])

Note

Note that the image is identical to the original one, except that it has been normalized to [0,1].

Same example with Poisson noise

We now consider Poisson noise, i.e., a noisy measurement vector given by

\[y \sim \mathcal{P}(\alpha H \tilde{x}),\]

where \(\alpha\) is a scalar value that represents the maximum image intensity (in photons). The larger \(\alpha\), the higher the signal-to-noise ratio.

We consider the spyrit.core.noise.Poisson class and set \(\alpha\) to 100 photons.

from spyrit.core.noise import Poisson
from spyrit.misc.disp import add_colorbar, noaxis

alpha = 100  # number of photons
noise_op = Poisson(meas_op, alpha)

We simulate two noisy measurement vectors

y1 = noise_op(x)  # a noisy measurement vector
y2 = noise_op(x)  # another noisy measurement vector

We now consider the case \(\alpha = 1000\) photons.

noise_op.alpha = 1000
y3 = noise_op(x)  # noisy measurement vector

We finally plot the measurement vectors as images

# plot
y1_plot = y1.view(b, h, h).detach().numpy()
y2_plot = y2.view(b, h, h).detach().numpy()
y3_plot = y3.view(b, h, h).detach().numpy()

f, axs = plt.subplots(1, 3, figsize=(10, 5))
axs[0].set_title("100 photons")
im = axs[0].imshow(y1_plot[0, :, :], cmap="gray")
add_colorbar(im, "bottom")

axs[1].set_title("100 photons")
im = axs[1].imshow(y2_plot[0, :, :], cmap="gray")
add_colorbar(im, "bottom")

axs[2].set_title("1000 photons")
im = axs[2].imshow(y3_plot[0, :, :], cmap="gray")
add_colorbar(im, "bottom")

noaxis(axs)
100 photons, 100 photons, 1000 photons

As expected, the signal-to-noise ratio of the measurement vector is higher for 1,000 photons than for 100 photons.

Note

Not only the signal-to-noise ratio but also the scale of the measurements depends on \(\alpha\), which motivates the introduction of the preprocessing operator.

The preprocessing operator

Preprocessing operators are defined in the spyrit.core.prep module. A preprocessing operator applies to the noisy measurements

\[m = \texttt{Prep}(y).\]

For instance, a preprocessing operator can be used to compensate for the scaling factors that appear in the measurement or noise operators. In this case, a preprocessing operator is closely linked to its measurement and/or noise operator counterpart. While scaling factors are required to simulate realistic measurements, they are not required for reconstruction.

Preprocessing measurements corrupted by Poisson noise

We consider the spyrit.core.prep.DirectPoisson class that intends to “undo” the spyrit.core.noise.Poisson class by compensating for:

  • the scaling that appears when computing Poisson-corrupted measurements

  • the affine transformation to get images in [0,1] from images in [-1,1]

For this, it computes

\[m = \frac{2}{\alpha} y - H\mathbb{1}.\]
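
As a quick sanity check of this formula, the following sketch compares it with the output of the class on a toy identity measurement operator (the exact implementation of the class may differ in its details):

import torch
from spyrit.core.meas import Linear
from spyrit.core.noise import Poisson
from spyrit.core.prep import DirectPoisson

alpha = 100.0  # number of photons
meas_op = Linear(torch.eye(4))  # toy measurement operator (identity, 4 pixels)
prep_op = DirectPoisson(alpha, meas_op)

x = torch.tensor([[-1.0, -0.5, 0.5, 1.0]])  # a "vectorized image" in [-1, 1]
y = Poisson(meas_op, alpha)(x)  # Poisson-corrupted measurements
m = prep_op(y)  # what the preprocessing class computes
m_by_hand = 2 * y / alpha - meas_op(torch.ones(1, 4))  # the formula above, written explicitly
print(torch.allclose(m, m_by_hand))  # should hold if the class follows the formula above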

We consider the spyrit.core.prep.DirectPoisson class and set \(\alpha\) to 100 photons.

from spyrit.core.prep import DirectPoisson

alpha = 100  # number of photons
prep_op = DirectPoisson(alpha, meas_op)

We preprocess the first two noisy measurement vectors

m1 = prep_op(y1)
m2 = prep_op(y2)

We now consider the case \(\alpha = 1000\) photons to preprocess the third measurement vector

prep_op.alpha = 1000
m3 = prep_op(y3)

We finally plot the preprocessed measurement vectors as images

# plot
m1 = m1.view(b, h, h).detach().numpy()
m2 = m2.view(b, h, h).detach().numpy()
m3 = m3.view(b, h, h).detach().numpy()

f, axs = plt.subplots(1, 3, figsize=(10, 5))
axs[0].set_title("100 photons")
im = axs[0].imshow(m1[0, :, :], cmap="gray")
add_colorbar(im, "bottom")

axs[1].set_title("100 photons")
im = axs[1].imshow(m2[0, :, :], cmap="gray")
add_colorbar(im, "bottom")

axs[2].set_title("1000 photons")
im = axs[2].imshow(m3[0, :, :], cmap="gray")
add_colorbar(im, "bottom")

noaxis(axs)
100 photons, 100 photons, 1000 photons

Note

The preprocessed measurements still have different signal-to-noise ratios depending on \(\alpha\); however, they all (approximately) lie within the same range (here, [-1, 1]).

We show one of the preprocessed measurement vectors again (for the tutorial thumbnail)

# Plot
imagesc(m2[0, :, :], "100 photons", title_fontsize=20)
100 photons

Total running time of the script: (0 minutes 1.217 seconds)


02. Pseudoinverse solution from linear measurements

This tutorial shows how to simulate measurements and perform image reconstruction. The measurement operator is chosen as a Hadamard matrix with positive coefficients. Note that this matrix can be replaced by any desired matrix.

Reconstruction architecture sketch

These tutorials load image samples from /images/.

Load a batch of images

Images \(x\) used for training are expected to have values in [-1,1]. The images are normalized using the transform_gray_norm() function.

import os

import torch
import torchvision
import numpy as np

from spyrit.misc.disp import imagesc
from spyrit.misc.statistics import transform_gray_norm

# sphinx_gallery_thumbnail_path = 'fig/tuto2.png'

h = 64  # image size hxh
i = 1  # Image index (modify to change the image)
spyritPath = os.getcwd()
imgs_path = os.path.join(spyritPath, "images/")


# Create a transform for natural images to normalized grayscale image tensors
transform = transform_gray_norm(img_size=h)

# Create dataset and loader (expects class folder 'images/test/')
dataset = torchvision.datasets.ImageFolder(root=imgs_path, transform=transform)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=7)

x, _ = next(iter(dataloader))
print(f"Shape of input images: {x.shape}")

# Select image
x = x[i : i + 1, :, :, :]
x = x.detach().clone()
b, c, h, w = x.shape

# plot
x_plot = x.view(-1, h, h).cpu().numpy()
imagesc(x_plot[0, :, :], r"$x$ in [-1, 1]")
$x$ in [-1, 1]
Shape of input images: torch.Size([7, 1, 64, 64])
Define a measurement operator

We consider the case where the measurement matrix is the positive component of a Hadamard matrix, which is often used in single-pixel imaging. First, we compute the full Hadamard matrix that performs the 2D transform of an image of size h, and we take its positive part.

from spyrit.misc.walsh_hadamard import walsh2_matrix

F = walsh2_matrix(h)
F = np.where(F > 0, F, 0)

Next, we subsample the rows of the measurement matrix to simulate an accelerated acquisition. For this, we use the spyrit.misc.sampling.sort_by_significance() function, which returns the input matrix with its rows reordered in decreasing order of significance according to a given array. The array is a sampling map that indicates the location of the most significant coefficients in the transformed domain.

To keep the low-frequency Hadamard coefficients, we choose a sampling map with ones in the top left corner and zeros elsewhere.

import math

und = 4  # undersampling factor
M = h**2 // und  # number of measurements (undersampling factor = 4)

Sampling_map = np.ones((h, h))
M_xy = math.ceil(M**0.5)
Sampling_map[:, M_xy:] = 0
Sampling_map[M_xy:, :] = 0

imagesc(Sampling_map, "low-frequency sampling map")
low-frequency sampling map

After permutation of the full Hadamard matrix, we keep only its first M rows

from spyrit.misc.sampling import sort_by_significance

F = sort_by_significance(F, Sampling_map, "rows", False)
H = F[:M, :]

print(f"Shape of the measurement matrix: {H.shape}")
Shape of the measurement matrix: (1024, 4096)

Then, we instantiate a spyrit.core.meas.Linear measurement operator

from spyrit.core.meas import Linear

meas_op = Linear(torch.from_numpy(H), pinv=True)
Noiseless case

In the noiseless case, we consider the spyrit.core.noise.NoNoise noise operator

from spyrit.core.noise import NoNoise

noise = NoNoise(meas_op)

# Simulate measurements
y = noise(x.view(b * c, h * w))
print(f"Shape of raw measurements: {y.shape}")
Shape of raw measurements: torch.Size([1, 1024])

To display the subsampled measurement vector as an image in the transformed domain, we use the spyrit.misc.sampling.meas2img() function

# plot
from spyrit.misc.sampling import meas2img

y_plot = y.detach().numpy().squeeze()
y_plot = meas2img(y_plot, Sampling_map)
print(f"Shape of the raw measurement image: {y_plot.shape}")

imagesc(y_plot, "Raw measurements (no noise)")
Raw measurements (no noise)
Shape of the raw measurement image: (64, 64)

We now compute and plot the preprocessed measurements corresponding to an image in [-1,1]. For details on the preprocessing, see Tutorial 1.

Note

Using spyrit.core.prep.DirectPoisson with \(\alpha = 1\) compensates for the image normalisation performed by spyrit.core.noise.NoNoise.

from spyrit.core.prep import DirectPoisson

prep = DirectPoisson(1.0, meas_op)  # "Undo" the NoNoise operator

m = prep(y)
print(f"Shape of the preprocessed measurements: {m.shape}")

# plot
m_plot = m.detach().numpy().squeeze()
m_plot = meas2img(m_plot, Sampling_map)
print(f"Shape of the preprocessed measurement image: {m_plot.shape}")

imagesc(m_plot, "Preprocessed measurements (no noise)")
Preprocessed measurements (no noise)
Shape of the preprocessed measurements: torch.Size([1, 1024])
Shape of the preprocessed measurement image: (64, 64)
Pseudo inverse

We can use the spyrit.core.recon.PseudoInverse class to perform the pseudo inverse reconstruction from the measurements y

from spyrit.core.recon import PseudoInverse

# Pseudo-inverse reconstruction operator
recon_op = PseudoInverse()

# Reconstruction
x_rec = recon_op(y, meas_op)

# plot
x_plot = x_rec.squeeze().view(h, h).cpu().numpy()
imagesc(x_plot, "Pseudoinverse reconstruction (no noise)", title_fontsize=20)
Pseudoinverse reconstruction (no noise)
PinvNet Network

Alternatively, we can consider the spyrit.core.recon.PinvNet class that reconstructs an image by computing the pseudoinverse solution, which is fed to a neural network denoiser. To compute the pseudoinverse solution only, the denoiser can be set to the identity operator

Sketch of the PinvNet architecture
from spyrit.core.recon import PinvNet

pinv_net = PinvNet(noise, prep, denoi=torch.nn.Identity())

or equivalently

pinv_net = PinvNet(noise, prep)

Then, we reconstruct the image from the measurement vector y using the reconstruct() method

x_rec = pinv_net.reconstruct(y)

# plot
x_plot = x_rec.squeeze().cpu().numpy()
imagesc(x_plot, "PinvNet reconstruction (no noise)", title_fontsize=20)
PinvNet reconstruction (no noise)

Alternatively, the measurement vector can be simulated using the acquire() method

y = pinv_net.acquire(x)
x_rec = pinv_net.reconstruct(y)

# plot
x_plot = x_rec.squeeze().cpu().numpy()
imagesc(x_plot, "Another pseudoinverse reconstruction (no noise)")
Another pseudoinverse reconstruction (no noise)

Note that the full module pinv_net both simulates noisy measurements and reconstructs them

x_rec = pinv_net(x)
print(f"Ground-truth image x: {x.shape}")
print(f"Reconstructed x_rec: {x_rec.shape}")

# plot
x_plot = x_rec.squeeze().cpu().numpy()
imagesc(x_plot, "One more pseudoinverse reconstruction (no noise)")
One more pseudoinverse reconstruction (no noise)
Ground-truth image x: torch.Size([1, 1, 64, 64])
Reconstructed x_rec: torch.Size([1, 1, 64, 64])
Poisson-corrupted measurement

Here, we consider the spyrit.core.noise.Poisson class together with a spyrit.core.prep.DirectPoisson preprocessing operator (see Tutorial 1).

alpha = 10  # maximum number of photons in the image

from spyrit.core.noise import Poisson
from spyrit.misc.disp import imagecomp

noise = Poisson(meas_op, alpha)
prep = DirectPoisson(alpha, meas_op)  # To undo the "Poisson" operator
pinv_net = PinvNet(noise, prep)

x_rec_1 = pinv_net(x)
x_rec_2 = pinv_net(x)
print(f"Ground-truth image x: {x.shape}")
print(f"Reconstructed x_rec: {x_rec.shape}")

# plot
x_plot_1 = x_rec_1.squeeze().cpu().numpy()
x_plot_1[:2, :2] = 0.0  # hide the top left "crazy pixel" that collects noise
x_plot_2 = x_rec_2.squeeze().cpu().numpy()
x_plot_2[:2, :2] = 0.0  # hide the top left "crazy pixel" that collects noise
imagecomp(x_plot_1, x_plot_2, "Pseudoinverse reconstruction", "Noise #1", "Noise #2")
Pseudoinverse reconstruction, Noise #1, Noise #2
Ground-truth image x: torch.Size([1, 1, 64, 64])
Reconstructed x_rec: torch.Size([1, 1, 64, 64])

As shown in the next tutorial, a denoising neural network can be trained to postprocess the pseudo inverse solution.

Total running time of the script: (0 minutes 3.627 seconds)


03. Pseudoinverse solution + CNN denoising

This tutorial shows how to simulate measurements and perform image reconstruction using PinvNet (pseudoinverse linear network) with CNN denoising as a last layer. This tutorial is a continuation of the Pseudoinverse solution tutorial but uses a CNN denoiser instead of the identity operator in order to remove artefacts.

The measurement operator is chosen as a Hadamard matrix with positive coefficients, which can be replaced by any matrix.

Reconstruction and neural network denoising architecture sketch

These tutorials load image samples from /images/.

Load a batch of images

Images \(x\) used for training are expected to have values in [-1,1]. The images are normalized using the transform_gray_norm() function.

import os

import torch
import torchvision
import numpy as np
import matplotlib.pyplot as plt

from spyrit.misc.disp import imagesc
from spyrit.misc.statistics import transform_gray_norm

# sphinx_gallery_thumbnail_path = 'fig/tuto3.png'

h = 64  # image size hxh
i = 1  # Image index (modify to change the image)
spyritPath = os.getcwd()
imgs_path = os.path.join(spyritPath, "images/")

# Create a transform for natural images to normalized grayscale image tensors
transform = transform_gray_norm(img_size=h)

# Create dataset and loader (expects class folder 'images/test/')
dataset = torchvision.datasets.ImageFolder(root=imgs_path, transform=transform)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=7)

x, _ = next(iter(dataloader))
print(f"Shape of input images: {x.shape}")

# Select image
x = x[i : i + 1, :, :, :]
x = x.detach().clone()
b, c, h, w = x.shape

# plot
x_plot = x.view(-1, h, h).cpu().numpy()
imagesc(x_plot[0, :, :], r"$x$ in [-1, 1]")
$x$ in [-1, 1]
Shape of input images: torch.Size([7, 1, 64, 64])
Define a measurement operator

We consider the case where the measurement matrix is the positive component of a Hadamard matrix and the sampling operator preserves only the first M low-frequency coefficients (see Positive Hadamard matrix for a full explanation).

import math
from spyrit.misc.sampling import sort_by_significance
from spyrit.misc.walsh_hadamard import walsh2_matrix

F = walsh2_matrix(h)
F = np.where(F > 0, F, 0)
und = 4  # undersampling factor
M = h**2 // und  # number of measurements (undersampling factor = 4)

Sampling_map = np.ones((h, h))
M_xy = math.ceil(M**0.5)
Sampling_map[:, M_xy:] = 0
Sampling_map[M_xy:, :] = 0

F = sort_by_significance(F, Sampling_map, "rows", False)
H = F[:M, :]
print(f"Shape of the measurement matrix: {H.shape}")

imagesc(Sampling_map, "low-frequency sampling map")
low-frequency sampling map
Shape of the measurement matrix: (1024, 4096)

Then, we instantiate a spyrit.core.meas.Linear measurement operator

from spyrit.core.meas import Linear

meas_op = Linear(torch.from_numpy(H), pinv=True)
Noiseless case

In the noiseless case, we consider the spyrit.core.noise.NoNoise noise operator

from spyrit.core.noise import NoNoise

N0 = 1.0  # Noise level (noiseless)
noise = NoNoise(meas_op)

# Simulate measurements
y = noise(x.view(b * c, h * w))
print(f"Shape of raw measurements: {y.shape}")
Shape of raw measurements: torch.Size([1, 1024])

We now compute and plot the preprocessed measurements corresponding to an image in [-1,1]

from spyrit.core.prep import DirectPoisson

prep = DirectPoisson(N0, meas_op)  # "Undo" the NoNoise operator

m = prep(y)
print(f"Shape of the preprocessed measurements: {m.shape}")
Shape of the preprocessed measurements: torch.Size([1, 1024])

To display the subsampled measurement vector as an image in the transformed domain, we use the spyrit.misc.sampling.meas2img() function

# plot
from spyrit.misc.sampling import meas2img

m_plot = m.detach().numpy().squeeze()
m_plot = meas2img(m_plot, Sampling_map)
print(f"Shape of the preprocessed measurement image: {m_plot.shape}")

imagesc(m_plot, "Preprocessed measurements (no noise)")
Preprocessed measurements (no noise)
Shape of the preprocessed measurement image: (64, 64)
PinvNet Network

We consider the spyrit.core.recon.PinvNet class that reconstructs an image by computing the pseudoinverse solution, which is fed to a neural network denoiser. To compute the pseudoinverse solution only, the denoiser can be set to the identity operator

from spyrit.core.recon import PinvNet

pinv_net = PinvNet(noise, prep, denoi=torch.nn.Identity())

or equivalently

pinv_net = PinvNet(noise, prep)

Then, we reconstruct the image from the measurement vector y using the reconstruct() method

x_rec = pinv_net.reconstruct(y)
Removing artefacts with a CNN

Artefacts can be removed by selecting a neural network denoiser (last layer of PinvNet). We select a simple CNN using the spyrit.core.nnet.ConvNet class, but this can be replaced by any neural network (e.g., a UNet from spyrit.core.nnet.Unet).

Sketch of the PinvNet with CNN architecture
from spyrit.core.nnet import ConvNet, Unet
from spyrit.core.train import load_net

# Define PInvNet with ConvNet denoising layer
denoi = ConvNet()
pinv_net_cnn = PinvNet(noise, prep, denoi)

# Send to GPU if available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
pinv_net_cnn = pinv_net_cnn.to(device)

As an example, we use a simple ConvNet that has been pretrained on the STL-10 dataset. We download the pretrained weights and load them into the network.

# Load pretrained model
model_path = "./model"
num_epochs = 1

pretrained_model_num = 3
if pretrained_model_num == 1:
    # 1 epoch
    url_cnn = "https://drive.google.com/file/d/1iGjxOk06nlB5hSm3caIfx0vy2byQd-ZC/view?usp=drive_link"
    name_cnn = "pinv-net_cnn_stl10_N0_1_N_64_M_1024_epo_1_lr_0.001_sss_10_sdr_0.5_bs_512_reg_1e-07.pth"
    num_epochs = 1
elif pretrained_model_num == 2:
    # 5 epochs
    url_cnn = "https://drive.google.com/file/d/1tzZg1lU3AxOi8-EVXFgnxdtqQCJPjQ9f/view?usp=drive_link"
    name_cnn = (
        "pinv-net_cnn_stl10_N0_1_N_64_M_1024_epo_5_lr_0.001_sss_10_sdr_0.5_bs_512.pth"
    )
    num_epochs = 5
elif pretrained_model_num == 3:
    # 30 epochs
    url_cnn = "https://drive.google.com/file/d/1IZYff1xQxJ3ckAnObqAWyOure6Bjkj4k/view?usp=drive_link"
    name_cnn = "pinv-net_cnn_stl10_N0_1_N_64_M_1024_epo_30_lr_0.001_sss_10_sdr_0.5_bs_512_reg_1e-07.pth"
    num_epochs = 30

# Create model folder
if os.path.exists(model_path) is False:
    os.mkdir(model_path)
    print(f"Created {model_path}")

# Download model weights
model_cnn_path = os.path.join(model_path, name_cnn)
print(model_cnn_path)
if os.path.exists(model_cnn_path) is False:
    try:
        import gdown

        gdown.download(url_cnn, f"{model_cnn_path}.pth", quiet=False, fuzzy=True)
    except:
        print(f"Model {model_cnn_path} not downloaded!")

# Load model weights
load_net(model_cnn_path, pinv_net_cnn, device, False)
print(f"Model {model_cnn_path} loaded.")
Created ./model
./model/pinv-net_cnn_stl10_N0_1_N_64_M_1024_epo_30_lr_0.001_sss_10_sdr_0.5_bs_512_reg_1e-07.pth
Downloading...
From: https://drive.google.com/uc?id=1IZYff1xQxJ3ckAnObqAWyOure6Bjkj4k
To: /home/docs/checkouts/readthedocs.org/user_builds/spyrit/checkouts/master/tutorial/model/pinv-net_cnn_stl10_N0_1_N_64_M_1024_epo_30_lr_0.001_sss_10_sdr_0.5_bs_512_reg_1e-07.pth.pth

Model Loaded: ./model/pinv-net_cnn_stl10_N0_1_N_64_M_1024_epo_30_lr_0.001_sss_10_sdr_0.5_bs_512_reg_1e-07.pth
Model ./model/pinv-net_cnn_stl10_N0_1_N_64_M_1024_epo_30_lr_0.001_sss_10_sdr_0.5_bs_512_reg_1e-07.pth loaded.

We now reconstruct the image using PinvNet with pretrained CNN denoising and plot results side by side with the PinvNet without denoising

with torch.no_grad():
    x_rec_cnn = pinv_net_cnn.reconstruct(y.to(device))
    x_rec_cnn = pinv_net_cnn(x.to(device))

# plot
x_plot = x.squeeze().cpu().numpy()
x_plot2 = x_rec.squeeze().cpu().numpy()
x_plot3 = x_rec_cnn.squeeze().cpu().numpy()

from spyrit.misc.disp import add_colorbar, noaxis

f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(15, 5))
im1 = ax1.imshow(x_plot, cmap="gray")
ax1.set_title("Ground-truth image", fontsize=20)
noaxis(ax1)
add_colorbar(im1, "bottom", size="20%")

im2 = ax2.imshow(x_plot2, cmap="gray")
ax2.set_title("Pinv reconstruction", fontsize=20)
noaxis(ax2)
add_colorbar(im2, "bottom", size="20%")

im3 = ax3.imshow(x_plot3, cmap="gray")
ax3.set_title(f"Pinv + CNN (trained {num_epochs} epochs", fontsize=20)
noaxis(ax3)
add_colorbar(im3, "bottom", size="20%")
Ground-truth image, Pinv reconstruction, Pinv + CNN (trained 30 epochs)

We show the best result again (for the tutorial thumbnail)

# Plot
imagesc(x_plot3, f"Pinv + CNN (trained {num_epochs} epochs", title_fontsize=20)

plt.show()
Pinv + CNN (trained 30 epochs)

In the next tutorial, we will show how to train PinvNet + CNN denoiser.

Total running time of the script: (0 minutes 4.474 seconds)


04. Train pseudoinverse solution + CNN denoising

This tutorial shows how to train PinvNet with a CNN denoiser for reconstruction of linear measurements (results shown in the previous tutorial). As an example, we use a small CNN, which can be replaced by any other network, for example Unet. Training is performed on the STL-10 dataset.

You can use TensorBoard for PyTorch for experiment tracking and for visualizing the training process: losses, network weights, and intermediate results (reconstructed images at different epochs).

The linear measurement operator is chosen as the positive part of a Hadamard matrix, but this matrix can be replaced by any desired matrix.

These tutorials load image samples from /images/.

Load a batch of images

First, we load an image \(x\) and normalize it to [-1,1], as in the previous examples.

import os

import torch
import torchvision
import numpy as np
import matplotlib.pyplot as plt

from spyrit.misc.disp import imagesc
from spyrit.misc.statistics import transform_gray_norm

h = 64  # image size hxh
i = 1  # Image index (modify to change the image)
spyritPath = os.getcwd()
imgs_path = os.path.join(spyritPath, "images/")


# Create a transform for natural images to normalized grayscale image tensors
transform = transform_gray_norm(img_size=h)

# Create dataset and loader (expects class folder 'images/test/')
dataset = torchvision.datasets.ImageFolder(root=imgs_path, transform=transform)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=7)

x, _ = next(iter(dataloader))
print(f"Shape of input images: {x.shape}")

# Select image
x = x[i : i + 1, :, :, :]
x = x.detach().clone()
b, c, h, w = x.shape

# plot
x_plot = x.view(-1, h, h).cpu().numpy()
imagesc(x_plot[0, :, :], r"$x$ in [-1, 1]")
$x$ in [-1, 1]
Shape of input images: torch.Size([7, 1, 64, 64])
Define a dataloader

We define a dataloader for the STL-10 dataset using spyrit.misc.statistics.data_loaders_stl10(). This downloads the dataset to the provided path if it is not already there. It is based on the PyTorch pre-loaded dataset torchvision.datasets.STL10 and on torch.utils.data.DataLoader, which creates a generator that iterates through the dataset, returning a batch of images and labels at each iteration.

Set mode_run to True in the script below to download the dataset and train the network; otherwise, pretrained weights and results will be downloaded for display.

from spyrit.misc.statistics import data_loaders_stl10
from pathlib import Path

# Parameters
h = 64  # image size hxh
data_root = Path("./data")  # path to data folder (where the dataset is stored)
batch_size = 512

# Dataloader for STL-10 dataset
mode_run = False
if mode_run:
    dataloaders = data_loaders_stl10(
        data_root,
        img_size=h,
        batch_size=batch_size,
        seed=7,
        shuffle=True,
        download=True,
    )
Define a measurement operator

We consider the case where the measurement matrix is the positive component of a Hadamard matrix, which is often used in single-pixel imaging (see Hadamard matrix). Then, we simulate an accelerated acquisition by keeping only the first M low-frequency coefficients (see low frequency sampling).

import math
from spyrit.misc.walsh_hadamard import walsh2_matrix
from spyrit.misc.sampling import sort_by_significance

und = 4  # undersampling factor
M = h**2 // und  # number of measurements (undersampling factor = 4)

F = walsh2_matrix(h)
F = np.where(F > 0, F, 0)

Sampling_map = np.ones((h, h))
M_xy = math.ceil(M**0.5)
Sampling_map[:, M_xy:] = 0
Sampling_map[M_xy:, :] = 0

# imagesc(Sampling_map, 'low-frequency sampling map')

F = sort_by_significance(F, Sampling_map, "rows", False)
H = F[:M, :]

print(f"Shape of the measurement matrix: {H.shape}")
Shape of the measurement matrix: (1024, 4096)

Then, we instantiate a spyrit.core.meas.Linear measurement operator, a spyrit.core.noise.NoNoise noise operator for the noiseless case, and a spyrit.core.prep.DirectPoisson preprocessing operator.

from spyrit.core.meas import Linear
from spyrit.core.noise import NoNoise
from spyrit.core.prep import DirectPoisson

meas_op = Linear(torch.from_numpy(H), pinv=True)
noise = NoNoise(meas_op)
N0 = 1.0  # Mean maximum total number of photons
prep = DirectPoisson(N0, meas_op)  # "Undo" the NoNoise operator
PinvNet Network

We consider the spyrit.core.recon.PinvNet class, which reconstructs an image by computing the pseudoinverse solution and then applying a neural network denoiser. First, we must define the denoiser. As an example, we choose a small CNN using the spyrit.core.nnet.ConvNet class. Then, we define the PinvNet network by passing it the noise and preprocessing operators and the denoiser.

from spyrit.core.nnet import ConvNet
from spyrit.core.recon import PinvNet

denoiser = ConvNet()
model = PinvNet(noise, prep, denoi=denoiser)

# Send to GPU if available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Use multiple GPUs if available
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    model = torch.nn.DataParallel(model)

model = model.to(device)

Note

In the example provided, we choose a small CNN using the spyrit.core.nnet.ConvNet class. This can be replaced by any denoiser, for example the spyrit.core.nnet.Unet class or a custom denoiser.

Define a loss function, optimizer, and scheduler

In order to train the network, we need to define a loss function, an optimizer, and a scheduler. We use the Mean Squared Error (MSE) loss function combined with a weight decay loss, and the Adam optimizer. The scheduler decreases the learning rate by a factor of gamma every step_size epochs.

import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
from spyrit.core.train import save_net, Weight_Decay_Loss

# Parameters
lr = 1e-3
step_size = 10
gamma = 0.5

loss = nn.MSELoss()
criterion = Weight_Decay_Loss(loss)
optimizer = optim.Adam(model.parameters(), lr=lr)
scheduler = lr_scheduler.StepLR(optimizer, step_size=step_size, gamma=gamma)
Train the network

To train the network, we use the train_model() function, which handles the training process. It iterates through the dataloader, feeds the inputs to the network and optimizes the solution (by computing the loss and its gradients and updating the network weights at each iteration). In addition, it computes the loss and desired metrics on the training and validation sets at each iteration. The training process can be monitored using Tensorboard.

Note

To launch Tensorboard type in a new console:

tensorboard --logdir runs

and open the provided link in a browser. The training process can be monitored in real time in the “Scalars” tab. The “Images” tab allows you to visualize the reconstructed images every tb_freq iterations.

In order to train, you must set mode_run to True for training. It is set to False by default to download the pretrained weights and results for display, as training takes around 40 min for 30 epochs.

# We train for a few epochs only to check that everything works fine.

from spyrit.core.train import train_model
from datetime import datetime

# Parameters
model_root = Path("./model")  # path to model saving files
num_epochs = 5  # number of training epochs (num_epochs = 30)
checkpoint_interval = 2  # interval between saving model checkpoints
tb_freq = (
    50  # interval between logging to Tensorboard (iterations through the dataloader)
)

# Path for Tensorboard experiment tracking logs
name_run = "stdl10_hadampos"
now = datetime.now().strftime("%Y-%m-%d_%H-%M")
tb_path = f"runs/runs_{name_run}_n{int(N0)}_m{M}/{now}"

# Train the network
if mode_run:
    model, train_info = train_model(
        model,
        criterion,
        optimizer,
        scheduler,
        dataloaders,
        device,
        model_root,
        num_epochs=num_epochs,
        disp=True,
        do_checkpoint=checkpoint_interval,
        tb_path=tb_path,
        tb_freq=tb_freq,
    )
else:
    train_info = {}
Save the network and training history

We save the model so that it can be used later. We save the network’s architecture, the training parameters, and the training history.

from spyrit.core.train import save_net

# Training parameters
train_type = "N0_{:g}".format(N0)
arch = "pinv-net"
denoi = "cnn"
data = "stl10"
reg = 1e-7  # Default value
suffix = "N_{}_M_{}_epo_{}_lr_{}_sss_{}_sdr_{}_bs_{}".format(
    h, M, num_epochs, lr, step_size, gamma, batch_size
)
title = model_root / f"{arch}_{denoi}_{data}_{train_type}_{suffix}"
print(title)

Path(model_root).mkdir(parents=True, exist_ok=True)

if checkpoint_interval:
    Path(title).mkdir(parents=True, exist_ok=True)

save_net(title, model)

# Save training history
import pickle

if mode_run:
    from spyrit.core.train import Train_par

    params = Train_par(batch_size, lr, h, reg=reg)
    params.set_loss(train_info)

    train_path = model_root / f"TRAIN_{arch}_{denoi}_{data}_{train_type}_{suffix}.pkl"

    with open(train_path, "wb") as param_file:
        pickle.dump(params, param_file)
    torch.cuda.empty_cache()
else:
    # Download training history
    import gdown

    train_path = os.path.join(
        model_root,
        "TRAIN_pinv-net_cnn_stl10_N0_1_N_64_M_1024_epo_30_lr_0.001_sss_10_sdr_0.5_bs_512_reg_1e-07.pkl",
    )
    url_train = "https://drive.google.com/file/d/13KIbSEigHBZ8ub_JxMUqwRDMHklnFz8A/view?usp=drive_link"
    gdown.download(url_train, train_path, quiet=False, fuzzy=True)

    with open(train_path, "rb") as param_file:
        params = pickle.load(param_file)
    train_info["train"] = params.train_loss
    train_info["val"] = params.val_loss
model/pinv-net_cnn_stl10_N0_1_N_64_M_1024_epo_5_lr_0.001_sss_10_sdr_0.5_bs_512
model/pinv-net_cnn_stl10_N0_1_N_64_M_1024_epo_5_lr_0.001_sss_10_sdr_0.5_bs_512.pth
Model Saved
Downloading...
From: https://drive.google.com/uc?id=13KIbSEigHBZ8ub_JxMUqwRDMHklnFz8A
To: /home/docs/checkouts/readthedocs.org/user_builds/spyrit/checkouts/master/tutorial/model/TRAIN_pinv-net_cnn_stl10_N0_1_N_64_M_1024_epo_30_lr_0.001_sss_10_sdr_0.5_bs_512_reg_1e-07.pkl


We plot the training loss and validation loss

# Plot
# sphinx_gallery_thumbnail_number = 2

fig = plt.figure()
plt.plot(train_info["train"], label="train")
plt.plot(train_info["val"], label="val")
plt.xlabel("Epochs", fontsize=20)
plt.ylabel("Loss", fontsize=20)
plt.legend(fontsize=20)
Training and validation loss

Note

See the Google Colab notebook spyrit-examples/tutorial/tuto_train_lin_meas_colab.ipynb for training a reconstruction network on a GPU. It shows how to train using different architectures, denoisers, and other hyperparameters of the train_model() function.

Total running time of the script: (0 minutes 13.703 seconds)


05. Acquisition operators (advanced) - Split measurements and subsampling

This tutorial is a continuation of the Acquisition operators tutorial for single-pixel imaging, which showed how to simulate linear measurements using the spyrit.core submodule (based on three classes spyrit.core.meas, spyrit.core.noise, and spyrit.core.prep). This tutorial extends the previous case: i) by introducing split measurements that can handle a Hadamard measurement matrix, and ii) by discussing the choice of the subsampling pattern for accelerated acquisitions.

These tutorials load image samples from /images/.

Load a batch of images

Images \(x\) used for training neural networks are expected to have values in [-1,1]. The images are normalized using the transform_gray_norm() function.

import os

import torch
import torchvision
import numpy as np
import matplotlib.pyplot as plt

from spyrit.misc.disp import imagesc
from spyrit.misc.statistics import transform_gray_norm


h = 64  # image size hxh
i = 1  # Image index (modify to change the image)
spyritPath = os.getcwd()
imgs_path = os.path.join(spyritPath, "images/")


# Create a transform for natural images to normalized grayscale image tensors
transform = transform_gray_norm(img_size=h)

# Create dataset and loader (expects class folder 'images/test/')
dataset = torchvision.datasets.ImageFolder(root=imgs_path, transform=transform)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=7)

x, _ = next(iter(dataloader))
print(f"Shape of input images: {x.shape}")

# Select image
x = x[i : i + 1, :, :, :]
x = x.detach().clone()
b, c, h, w = x.shape

# plot
x_plot = x.view(-1, h, h).cpu().numpy()
imagesc(x_plot[0, :, :], r"$x$ in [-1, 1]")
$x$ in [-1, 1]
Shape of input images: torch.Size([7, 1, 64, 64])
The measurement and noise operators

Noise operators are defined in the noise module. A noise operator computes the following three steps sequentially:

  1. Normalization of the image \(x\) with values in [-1,1] to get an image \(\tilde{x}=\frac{x+1}{2}\) in [0,1], as it is required for measurement simulation

  2. Application of the measurement model, i.e., computation of \(P\tilde{x}\)

  3. Application of the noise model

\[y \sim \texttt{Noise}(P\tilde{x}) = \texttt{Noise}\left(\frac{P(x+1)}{2}\right).\]

The normalization is useful when considering distributions such as the Poisson distribution that are defined on positive values.

Split measurement operator and no noise
\[\begin{split}y = P\tilde{x}= \begin{bmatrix} H_{+} \\ H_{-} \end{bmatrix} \tilde{x}.\end{split}\]

Hadamard split measurement operator is defined in the spyrit.core.meas.HadamSplit class. It computes linear measurements from incoming images, where \(P\) is a linear operator (matrix) with positive entries and \(\tilde{x}\) is a vectorized image. The class relies on a matrix \(H\) with shape \((M,N)\) where \(N\) represents the number of pixels in the image and \(M \le N\) the number of measurements. The matrix \(P\) is obtained by splitting the matrix \(H\) as \(H = H_{+}-H_{-}\) where \(H_{+} = \max(0,H)\) and \(H_{-} = \max(0,-H)\).
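
The splitting itself is straightforward; the following sketch illustrates it on a toy 2x2 Hadamard matrix (independently of the internal layout used by HadamSplit):

import torch

H = torch.tensor([[1.0, 1.0], [1.0, -1.0]])  # toy 2x2 Hadamard matrix
H_pos = torch.clamp(H, min=0)  # H+ = max(0, H)
H_neg = torch.clamp(-H, min=0)  # H- = max(0, -H)
P = torch.cat([H_pos, H_neg], dim=0)  # one possible split operator, with shape (2M, N)
print(torch.equal(H, H_pos - H_neg))  # True: H = H+ - H-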

Subsampling

We simulate an accelerated acquisition by subsampling the measurement matrix. We consider two subsampling strategies:

  • “Naive subsampling” by retaining only the first \(M\) rows of the measurement matrix.

  • “Variance subsampling” by retaining only the first \(M\) rows of a permuted measurement matrix where the first rows correspond to the coefficients with the largest variance and the last ones to the coefficients that are close to constant. The motivation is that almost constant coefficients are less informative than the others. This is supported by principal component analysis, which states that preserving the components with the largest variance leads to the best linear predictor.

Subsampling is done by retaining only the first \(M\) rows of a permuted Hadamard matrix \(\textrm{Perm} H\), where \(\textrm{Perm}\) is a permutation matrix with shape \((N,N)\) and \(H\) is a “full” Hadamard matrix with shape \((N,N)\) (see Hadamard matrix in the tutorial on the pseudoinverse solution). The permutation matrix \(\textrm{Perm}\) is obtained from the ordering matrix \(\textrm{Ord}\) with shape \((h,h)\). This is all handled internally by the spyrit.core.meas.HadamSplit class.

First, we download the covariance matrix from our warehouse and load it. The covariance matrix has been computed from ImageNet 2012 dataset.

import girder_client

# api Rest url of the warehouse
url = "https://pilot-warehouse.creatis.insa-lyon.fr/api/v1"

# Generate the warehouse client
gc = girder_client.GirderClient(apiUrl=url)

# Download the covariance matrix and mean image
data_folder = "./stat/"
dataId_list = [
    "63935b624d15dd536f0484a5",  # for reconstruction (imageNet, 64)
    "63935a224d15dd536f048496",  # for reconstruction (imageNet, 64)
]
cov_name = "./stat/Cov_64x64.npy"

try:
    for dataId in dataId_list:
        myfile = gc.getFile(dataId)
        gc.downloadFile(dataId, data_folder + myfile["name"])

    print(f"Created {data_folder}")

    # Load covariance matrix for "variance subsampling"
    Cov = np.load(cov_name)
    print(f"Cov matrix {cov_name} loaded")
except:
    # Set to the identity if not found for "naive subsampling"
    Cov = np.eye(h * h)
    print(f"Cov matrix {cov_name} not found! Set to the identity")
Created ./stat/
Cov matrix ./stat/Cov_64x64.npy loaded

We compute the order matrix \(Ord\) for the two sampling strategies, from the covariance matrix for the “variance subsampling”, and from the identity matrix for the “naive subsampling”. In the latter case, the order matrix is constant, as all coefficients are considered equally informative, and they are retained in the increasing ‘naive’ order. We also define the number of measurements \(M\) that will be used later.

from spyrit.misc.statistics import Cov2Var
from spyrit.misc.disp import add_colorbar, noaxis

# number of measurements (here, 1/4 of the pixels)
M = 64 * 64 // 4

# Compute the order matrix
# "Naive subsampling"
Cov_eye = np.eye(h * h)
Ord_nai = Cov2Var(Cov_eye)

# "Variance subsampling"
Ord_var = Cov2Var(Cov)

To provide further insight into the subsampling strategies, we plot an approximation of the masks that are used to subsample the measurement matrices.

# sphinx_gallery_thumbnail_number = 2

# Mask for "naive subsampling"
mask_nai = np.zeros((h, h))
mask_nai[0 : int(M / h), :] = 1

# Mask for "variance subsampling"
idx = np.argsort(Ord_var.ravel(), axis=None)[::-1]
mask_var = np.zeros_like(Ord_var)
mask_var.flat[idx[0:M]] = 1

# Plot the masks
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
im1 = ax1.imshow(mask_nai, vmin=0, vmax=1)
ax1.set_title("Mask \n'naive subsampling'", fontsize=20)
noaxis(ax1)
add_colorbar(im1, "bottom", size="20%")

im2 = ax2.imshow(mask_var, vmin=0, vmax=1)
ax2.set_title("Mask \n'variance subsampling'", fontsize=20)
noaxis(ax2)
add_colorbar(im2, "bottom", size="20%")
Mask  'naive subsampling', Mask  'variance subsampling'

Note

Note that in this tutorial the covariance matrix is used only for choosing the subsampling strategy. Although the covariance matrix can also be exploited to improve the reconstruction, this will be considered in a future tutorial.

Measurement and noise operators

We compute the measurement and noise operators and then simulate the measurement vector \(y\).

We consider Poisson noise, i.e., a noisy measurement vector given by

\[y \sim \mathcal{P}(\alpha P \tilde{x}),\]

where \(\alpha\) is a scalar value that represents the maximum image intensity (in photons). The larger \(\alpha\), the higher the signal-to-noise ratio.

We use the spyrit.core.noise.Poisson class, set \(\alpha\) to 100 photons, and simulate a noisy measurement vector for the two sampling strategies.

from spyrit.core.meas import HadamSplit
from spyrit.core.noise import Poisson

alpha = 100.0  # number of photons

# "Naive subsampling"
# Measurement and noise operators
meas_nai_op = HadamSplit(M, h, torch.from_numpy(Ord_nai))
noise_nai_op = Poisson(meas_nai_op, alpha)

# Simulate measurements
x = x.view(b * c, h * w)  # vectorized image
y_nai = noise_nai_op(x)  # a noisy measurement vector

# "Variance subsampling"
meas_var_op = HadamSplit(M, h, torch.from_numpy(Ord_var))
noise_var_op = Poisson(meas_var_op, alpha)
y_var = noise_var_op(x)  # a noisy measurement vector

x = x.view(b * c, h * w)  # vectorized image
print(f"Shape of vectorized image: {x.shape}")
print(f"Shape of simulated measurements y: {y_var.shape}")
Shape of vectorized image: torch.Size([1, 4096])
Shape of simulated measurements y: torch.Size([1, 2048])
The preprocessing operator for split measurements

We compute the preprocessing operators for the cases considered in this tutorial, using the spyrit.core.prep module. As previously introduced, a preprocessing operator is applied to the noisy measurements in order to compensate for the scaling factors that appear in the measurement or noise operators:

\[m = \texttt{Prep}(y).\]

We consider the spyrit.core.prep.SplitPoisson class that intends to “undo” the spyrit.core.noise.Poisson class, for split measurements, by compensating for

  • the scaling that appears when computing Poisson-corrupted measurements

  • the affine transformation to get images in [0,1] from images in [-1,1]

For this, it computes

\[m = \frac{2(y_+-y_-)}{\alpha} - H\mathbb{1},\]

where \(y_+=H_+\tilde{x}\) and \(y_-=H_-\tilde{x}\). This is handled internally by the spyrit.core.prep.SplitPoisson class.
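
To see why this recovers \(Hx\), here is a small noiseless sketch with an explicit 2x2 Hadamard matrix (the stacking of \(H_+\) and \(H_-\) is for illustration only; the actual ordering of the split measurements is handled internally by HadamSplit and SplitPoisson):

import torch

H = torch.tensor([[1.0, 1.0], [1.0, -1.0]])  # toy 2x2 Hadamard matrix
H_pos, H_neg = torch.clamp(H, min=0), torch.clamp(-H, min=0)

alpha = 100.0  # number of photons
x = torch.tensor([-0.3, 0.7])  # toy image in [-1, 1]
x_tilde = (x + 1) / 2  # image normalized to [0, 1]

y_pos = alpha * H_pos @ x_tilde  # noiseless split measurements
y_neg = alpha * H_neg @ x_tilde
m = 2 * (y_pos - y_neg) / alpha - H @ torch.ones(2)  # preprocessing formula above
print(torch.allclose(m, H @ x))  # True: the preprocessing recovers Hx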

We compute the preprocessing operator and the measurements vectors for the two sampling strategies.

from spyrit.core.prep import SplitPoisson

# "Naive subsampling"
#
# Preprocessing operator
prep_nai_op = SplitPoisson(alpha, meas_nai_op)

# Preprocessed measurements
m_nai = prep_nai_op(y_nai)

# "Variance subsampling"
prep_var_op = SplitPoisson(alpha, meas_var_op)
m_var = prep_var_op(y_var)
Noiseless measurements

We now consider noiseless measurements for the “naive subsampling” strategy. We compute the required operators and the noiseless measurement vector. For this, we use the spyrit.core.noise.NoNoise class, which normalizes the input vector to get an image in [0,1], as explained in the acquisition operators tutorial. For the preprocessing operator, we set the number of photons to one.

from spyrit.core.noise import NoNoise

nonoise_nai_op = NoNoise(meas_nai_op)
y_nai_nonoise = nonoise_nai_op(x)  # a noiseless measurement vector

prep_nonoise_op = SplitPoisson(1.0, meas_nai_op)
m_nai_nonoise = prep_nonoise_op(y_nai_nonoise)

We can now plot the three measurement vectors

from spyrit.misc.sampling import meas2img2

# Plot the three measurement vectors
m_plot = m_nai_nonoise.numpy()
m_plot = meas2img2(m_plot.T, Ord_nai)
m_plot = np.moveaxis(m_plot, -1, 0)
m_plot_max = np.max(m_plot[0, :, :])
m_plot_min = np.min(m_plot[0, :, :])

m_plot2 = m_nai.numpy()
m_plot2 = meas2img2(m_plot2.T, Ord_nai)
m_plot2 = np.moveaxis(m_plot2, -1, 0)

m_plot3 = m_var.numpy()
m_plot3 = meas2img2(m_plot3.T, Ord_var)
m_plot3 = np.moveaxis(m_plot3, -1, 0)

f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(20, 7))
im1 = ax1.imshow(m_plot[0, :, :], cmap="gray")
ax1.set_title("Noiseless measurements $m$ \n 'Naive' subsampling", fontsize=20)
noaxis(ax1)
add_colorbar(im1, "bottom", size="20%")

im2 = ax2.imshow(m_plot2[0, :, :], cmap="gray", vmin=m_plot_min, vmax=m_plot_max)
ax2.set_title("Measurements $m$ \n 'Naive' subsampling", fontsize=20)
noaxis(ax2)
add_colorbar(im2, "bottom", size="20%")

im3 = ax3.imshow(m_plot3[0, :, :], cmap="gray", vmin=m_plot_min, vmax=m_plot_max)
ax3.set_title("Measurements $m$ \n 'Variance' subsampling", fontsize=20)
noaxis(ax3)
add_colorbar(im3, "bottom", size="20%")
Noiseless measurements $m$   'Naive' subsampling, Measurements $m$   'Naive' subsampling, Measurements $m$   'Variance' subsampling
PinvNet network

We use the spyrit.core.recon.PinvNet class where the pseudo inverse reconstruction is performed by a neural network

from spyrit.core.recon import PinvNet

# PinvNet(noise_op, prep_op, denoi=torch.nn.Identity())
pinvnet_nai_nonoise = PinvNet(nonoise_nai_op, prep_nonoise_op)
pinvnet_nai = PinvNet(noise_nai_op, prep_nai_op)
pinvnet_var = PinvNet(noise_var_op, prep_var_op)

# Reconstruction
z_nai_nonoise = pinvnet_nai_nonoise.reconstruct(y_nai_nonoise)
z_nai = pinvnet_nai.reconstruct(y_nai)
z_var = pinvnet_var.reconstruct(y_var)

We can now plot the three reconstructed images

from spyrit.misc.disp import add_colorbar, noaxis

# Plot
x_plot = x.view(-1, h, h).numpy()
z_plot_nai_nonoise = z_nai_nonoise.view(-1, h, h).numpy()
z_plot_nai = z_nai.view(-1, h, h).numpy()
z_plot_var = z_var.view(-1, h, h).numpy()

f, axs = plt.subplots(2, 2, figsize=(10, 10))
im1 = axs[0, 0].imshow(x_plot[0, :, :], cmap="gray")
axs[0, 0].set_title("Ground-truth image")
noaxis(axs[0, 0])
add_colorbar(im1, "bottom")

im2 = axs[0, 1].imshow(z_plot_nai_nonoise[0, :, :], cmap="gray")
axs[0, 1].set_title("Reconstruction noiseless")
noaxis(axs[0, 1])
add_colorbar(im2, "bottom")

im3 = axs[1, 0].imshow(z_plot_nai[0, :, :], cmap="gray")
axs[1, 0].set_title("Reconstruction \n 'Naive' subsampling")
noaxis(axs[1, 0])
add_colorbar(im3, "bottom")

im4 = axs[1, 1].imshow(z_plot_var[0, :, :], cmap="gray")
axs[1, 1].set_title("Reconstruction \n 'Variance' subsampling")
noaxis(axs[1, 1])
add_colorbar(im4, "bottom")

plt.show()
Ground-truth image, Reconstruction noiseless, Reconstruction   'Naive' subsampling, Reconstruction   'Variance' subsampling

Note

Note that the reconstructed images are pixelated when using the “naive subsampling”, while they are smoother and more similar to the ground-truth image when using the “variance subsampling”.

Another way to further improve results is to include a nonlinear post-processing step, which we will consider in a future tutorial.
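
As a preview, and only as a sketch, such a post-processing step can be plugged into PinvNet through its denoi argument. The small convolutional network below is a hypothetical placeholder with untrained weights; it would need to be trained before it actually improves the reconstruction.

import torch.nn as nn

# Hypothetical image-domain denoiser: any nn.Module mapping an image to an image can be used
denoi = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
pinvnet_var_denoi = PinvNet(noise_var_op, prep_var_op, denoi)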

Total running time of the script: (0 minutes 26.324 seconds)

Gallery generated by Sphinx-Gallery

06. DCNet solution for split measurements

This tutorial shows how to perform image reconstruction using DCNet (denoised completion network) with and without a trainable image denoiser. In the previous tutorial Acquisition - split measurements we showed how to handle split measurements for a Hadamard operator and how to perform a pseudo-inverse reconstruction with PinvNet.

[Figure: sketch of the reconstruction and neural-network denoising architecture for split measurements]

These tutorials load image samples from /images/.

Load a batch of images

Images \(x\) used for training neural networks are expected to have values in [-1,1]. Here, the images are normalized using the transform_gray_norm() function.
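
Concretely (an assumption about the internals of transform_gray_norm(), consistent with the plot below), the images are converted to grayscale, resized to \(h \times h\), mapped to [0,1] by ToTensor(), and finally mapped to [-1,1] via \(x \mapsto 2x - 1\).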

import os

import torch
import torchvision
import numpy as np
import matplotlib.pyplot as plt

from spyrit.misc.disp import imagesc
from spyrit.misc.statistics import transform_gray_norm

# sphinx_gallery_thumbnail_path = 'fig/tuto6.png'

h = 64  # image size hxh
i = 1  # Image index (modify to change the image)
spyritPath = os.getcwd()
imgs_path = os.path.join(spyritPath, "images/")


# Create a transform for natural images to normalized grayscale image tensors
transform = transform_gray_norm(img_size=h)

# Create dataset and loader (expects class folder 'images/test/')
dataset = torchvision.datasets.ImageFolder(root=imgs_path, transform=transform)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=7)

x, _ = next(iter(dataloader))
print(f"Shape of input images: {x.shape}")

# Select image
x = x[i : i + 1, :, :, :]
x = x.detach().clone()
b, c, h, w = x.shape

# plot
x_plot = x.view(-1, h, h).cpu().numpy()
imagesc(x_plot[0, :, :], r"$x$ in [-1, 1]")
[Figure: $x$ in [-1, 1]]
Shape of input images: torch.Size([7, 1, 64, 64])
Forward operators for split measurements

We consider noisy split measurements for a Hadamard operator and a “variance subsampling” strategy that preserves the coefficients with the largest variance, obtained from a previously estimated covariance matrix (for more details, refer to Acquisition - split measurements).
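
As a reminder from the previous tutorial, “split” refers to the fact that only non-negative patterns can be displayed on the device: the Hadamard matrix is split into its positive and negative parts, \(H_+ = \max(H,0)\) and \(H_- = \max(-H,0)\), so that \(2M\) measurements \(y_\pm = H_\pm x\) are actually acquired. The preprocessing operator then recombines them, essentially computing \(m = (y_+ - y_-)/\alpha\), up to the normalization of the image range.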

First, we download the covariance matrix and load it.

import girder_client

# api Rest url of the warehouse
url = "https://pilot-warehouse.creatis.insa-lyon.fr/api/v1"

# Generate the warehouse client
gc = girder_client.GirderClient(apiUrl=url)

# Download the covariance matrix and mean image
data_folder = "./stat/"
dataId_list = [
    "63935b624d15dd536f0484a5",  # for reconstruction (imageNet, 64)
    "63935a224d15dd536f048496",  # for reconstruction (imageNet, 64)
]
cov_name = "./stat/Cov_64x64.npy"

try:
    Cov = np.load(cov_name)
    print(f"Cov matrix {cov_name} loaded")
except FileNotFoundError:
    # Download the covariance matrix if it is not on disk
    os.makedirs(data_folder, exist_ok=True)
    for dataId in dataId_list:
        myfile = gc.getFile(dataId)
        gc.downloadFile(dataId, data_folder + myfile["name"])

    print(f"Created {data_folder}")

    Cov = np.load(cov_name)
    print(f"Cov matrix {cov_name} loaded")
except Exception:
    Cov = np.eye(h * h)
    print(f"Cov matrix {cov_name} not found! Set to the identity")
Cov matrix ./stat/Cov_64x64.npy loaded

We define the measurement, noise, and preprocessing operators, and then simulate a noisy measurement vector \(y\). As in the previous tutorial, we simulate an accelerated acquisition by subsampling the measurement matrix, retaining only the first \(M\) rows of a permuted Hadamard matrix \(\textrm{Perm} H\).

from spyrit.core.meas import HadamSplit
from spyrit.core.noise import Poisson
from spyrit.misc.sampling import meas2img2
from spyrit.misc.statistics import Cov2Var
from spyrit.core.prep import SplitPoisson

# Measurement parameters
M = 64 * 64 // 4  # Number of measurements (here, 1/4 of the pixels)
alpha = 100.0  # number of photons

# Ordering matrix
Ord = Cov2Var(Cov)

# Measurement and noise operators
meas_op = HadamSplit(M, h, torch.from_numpy(Ord))
noise_op = Poisson(meas_op, alpha)
prep_op = SplitPoisson(alpha, meas_op)

# Vectorize image
x = x.view(b * c, h * w)
print(f"Shape of vectorized image: {x.shape}")

# Measurements
y = noise_op(x)  # a noisy measurement vector
m = prep_op(y)  # preprocessed measurement vector

m_plot = m.detach().numpy()
m_plot = meas2img2(m_plot.T, Ord)
imagesc(m_plot, r"Measurements $m$")
[Figure: measurements $m$]
Shape of vectorized image: torch.Size([1, 4096])
PinvNet network

We reconstruct with the pseudo-inverse using the spyrit.core.recon.PinvNet class, as in the previous tutorial: we define the network and then perform the reconstruction.

from spyrit.core.recon import PinvNet
from spyrit.misc.disp import add_colorbar, noaxis

# Pseudo-inverse reconstruction network (core module, no learnable denoiser)
pinvnet = PinvNet(noise_op, prep_op)

# use GPU, if available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Pseudo-inverse net
pinvnet = pinvnet.to(device)

# Reconstruction
with torch.no_grad():
    z_invnet = pinvnet.reconstruct(y.to(device))  # reconstruct from raw measurements
DCNet network

We can improve PinvNet results by using the denoised completion network DCNet with the spyrit.core.recon.DCNet class. It has four sequential steps:

  1. denoising of the acquired measurements,

  2. estimation of the missing measurements from the denoised ones,

  3. mapping them to the image domain, and

  4. denoising in the image domain.

Only the last step involves learnable parameters.
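
For step 2, the missing coefficients are estimated from the acquired (denoised) ones using the covariance matrix. In its simplest, noise-free form, this is the best linear estimate \(m_2^* = \Sigma_{21}\Sigma_{11}^{-1} m_1\), where \(m_1\) denotes the acquired coefficients and \(\Sigma_{11}\), \(\Sigma_{21}\) are the blocks of the covariance matrix associated with the acquired and missing coefficients. This is only a sketch of the principle; see the DCNet paper cited at the end of this documentation for the exact expressions implemented in spyrit.core.recon.DCNet.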

[Figure: sketch of the DCNet architecture]

For the image-domain denoiser, we compare the default identity (no denoising) with a UNet denoiser, implemented in the spyrit.core.nnet.Unet class. For the latter, we load pretrained model weights.

Without learnable image-domain denoising

from spyrit.core.recon import DCNet
from spyrit.core.nnet import Unet
from torch import nn

# DCNet with the default identity denoiser (no learnable image-domain denoising)
dcnet = DCNet(noise_op, prep_op, torch.from_numpy(Cov), denoi=nn.Identity())
dcnet = dcnet.to(device)

# Reconstruction
with torch.no_grad():
    z_dcnet = dcnet.reconstruct(y.to(device))  # reconstruct from raw measurements

With a UNet denoising layer, we define the denoising network and then load the pretrained weights.

from spyrit.core.train import load_net
import matplotlib.pyplot as plt
from spyrit.misc.disp import add_colorbar, noaxis

# Define UNet denoiser
denoi = Unet()

# Define DCNet (with UNet denoising)
dcnet_unet = DCNet(noise_op, prep_op, torch.from_numpy(Cov), denoi)
dcnet_unet = dcnet_unet.to(device)

# Load previously trained model
# Download weights
url_unet = "https://drive.google.com/file/d/15PRRZj5OxKpn1iJw78lGwUUBtTbFco1l/view?usp=drive_link"
model_path = "./model"
if not os.path.exists(model_path):
    os.mkdir(model_path)
    print(f"Created {model_path}")
model_unet_path = os.path.join(
    model_path,
    "dc-net_unet_stl10_N0_100_N_64_M_1024_epo_30_lr_0.001_sss_10_sdr_0.5_bs_512_reg_1e-07.pth",
)

load_unet = True
if not os.path.exists(model_unet_path):
    try:
        import gdown

        # Download the pretrained weights (the file may already be cached locally)
        gdown.download(url_unet, f"{model_unet_path}.pth", quiet=False, fuzzy=True)
    except Exception:
        print(f"Model {model_unet_path} not found!")
        load_unet = False

if load_unet:
    # Load pretrained model
    load_net(model_unet_path, dcnet_unet, device, False)
    # print(f"Model {model_unet_path} loaded.")


# Reconstruction
with torch.no_grad():
    z_dcnet_unet = dcnet_unet.reconstruct(
        y.to(device)
    )  # reconstruct from raw measurements
Model Loaded: ./model/dc-net_unet_stl10_N0_100_N_64_M_1024_epo_30_lr_0.001_sss_10_sdr_0.5_bs_512_reg_1e-07.pth

We now plot all the results.

# plot reconstruction side by side
x_plot = x.view(-1, h, h).cpu().numpy()
x_plot2 = z_invnet.view(-1, h, h).cpu().numpy()
x_plot3 = z_dcnet.view(-1, h, h).cpu().numpy()
x_plot4 = z_dcnet_unet.view(-1, h, h).cpu().numpy()
f, axs = plt.subplots(2, 2, figsize=(10, 10))
im1 = axs[0, 0].imshow(x_plot[0, :, :], cmap="gray")
axs[0, 0].set_title("Ground-truth image", fontsize=16)
noaxis(axs[0, 0])
add_colorbar(im1, "bottom")

im2 = axs[0, 1].imshow(x_plot2[0, :, :], cmap="gray")
axs[0, 1].set_title("PinvNet", fontsize=16)
noaxis(axs[0, 1])
add_colorbar(im2, "bottom")

im3 = axs[1, 0].imshow(x_plot3[0, :, :], cmap="gray")
axs[1, 0].set_title(f"DCNet (without denoising)", fontsize=16)
noaxis(axs[1, 0])
add_colorbar(im3, "bottom")

im4 = axs[1, 1].imshow(x_plot4[0, :, :], cmap="gray")
axs[1, 1].set_title(f"DCNet (UNet denoising)", fontsize=16)
noaxis(axs[1, 1])
add_colorbar(im4, "bottom")

plt.show()
[Figure: ground-truth image, PinvNet, DCNet (without denoising), DCNet (UNet denoising)]

Comparing the results: PinvNet provides a pixelated reconstruction; DCNet without denoising leads to a smoother reconstruction, as expected from its Tikhonov-type regularization; and DCNet with UNet denoising provides the best reconstruction.
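
To complement the visual comparison, the snippet below computes a rough PSNR for each reconstruction against the ground truth. It is not part of the original tutorial: psnr() is an ad-hoc NumPy helper assuming images normalized in [-1, 1] (i.e. a dynamic range of 2).

def psnr(ref, img):
    # Peak signal-to-noise ratio for a dynamic range of 2 (images in [-1, 1])
    mse = np.mean((ref - img) ** 2)
    return 10 * np.log10(4.0 / mse)

for name, rec in [("PinvNet", x_plot2), ("DCNet", x_plot3), ("DCNet + UNet", x_plot4)]:
    print(f"{name}: {psnr(x_plot[0], rec[0]):.2f} dB")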

Note

In this tutorial, we have used DCNet with a UNet denoising layer for split measurements. We refer to the spyrit-examples tutorials for a comparison of different solutions for split measurements (PinvNet, DCNet and DRUNet).

Total running time of the script: (0 minutes 6.103 seconds)

Gallery generated by Sphinx-Gallery

Bonus. Advanced methods - Colab

We refer to spyrit-examples/tutorial for a list of tutorials that can be run directly in Colab and that present more advanced cases than the main SPyRiT tutorials, such as a comparison of reconstruction methods for split measurements or a comparison of different denoising networks.

The spyrit-examples repository also includes research contributions based on the SPyRiT toolbox.

Total running time of the script: (0 minutes 0.000 seconds)

Gallery generated by Sphinx-Gallery

Cite us

When using SPyRiT in scientific publications, please cite the following paper:

  • G Beneti Martin, L Mahieu-Williame, T Baudier, N Ducros, “OpenSpyrit: an Ecosystem for Reproducible Single-Pixel Hyperspectral Imaging,” Optics Express, Vol. 31, No. 10, (2023). DOI.

When using SPyRiT specifically for the denoised completion network, please cite the following paper:

  • A Lorente Mur, P Leclerc, F Peyrin, and N Ducros, “Single-pixel image reconstruction from experimental data using neural networks,” Opt. Express 29, 17097-17110 (2021). DOI.

Join the project

Feel free to contact us by e-mail (nicolas.ducros@creatis.insa-lyon.fr) with any questions. Active developers currently include Nicolas Ducros, Thomas Baudier, Juan Abascal and Romain Phan. Direct contributions via pull requests (PRs) are welcome.

The full list of contributors can be found here.