Fourier#

class mpol.fourier.FourierCube(coords: GridCoords, persistent_vis: bool = False)[source]#

This layer performs the FFT of an ImageCube and stores the corresponding dense FFT output as a cube. If you are using this layer in a forward-modeling RML workflow, then because the FFT of the model is stored on a grid, the loss must be computed with a gridded loss function (e.g., mpol.losses.nll_gridded()) and a gridded dataset (e.g., mpol.datasets.GriddedDataset).

Parameters:
  • coords (GridCoords) – object containing image dimensions

  • persistent_vis (bool) – should the visibility cube be stored as part of the module's state_dict? If True, the state of the UV grid will be stored. It is recommended to use False for most applications, since the visibility cube will rarely be a direct parameter of the model.

forward(packed_cube: Tensor) Tensor[source]#

Perform the FFT of the image cube on each channel.

Parameters:

packed_cube (torch.Tensor of shape (nchan, npix, npix)) – A ‘packed’ tensor. For example, an image cube from mpol.images.ImageCube.forward()

Returns:

The FFT of the image cube, in packed format.

Return type:

torch.Tensor of torch.complex128 of shape (nchan, npix, npix)
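Conceptually, this forward pass is a channel-wise 2D FFT of the packed cube. A minimal numpy sketch of the equivalent operation follows; the cell_size**2 prefactor, which converts the discrete sum into an approximation of the continuous Fourier transform, is an assumption about the normalization and not a quote of MPoL's implementation.

```python
import numpy as np

def packed_fft(packed_cube: np.ndarray, cell_size: float) -> np.ndarray:
    """Channel-wise 2D FFT of a packed image cube.

    The cell_size**2 factor (pixel area) is an assumed normalization
    approximating the continuous Fourier transform.
    """
    # FFT over the two spatial axes; the channel axis (0) is untouched.
    return cell_size**2 * np.fft.fft2(packed_cube, axes=(1, 2))

nchan, npix = 3, 64
cube = np.random.default_rng(0).normal(size=(nchan, npix, npix))
vis = packed_fft(cube, cell_size=0.005)
print(vis.shape, vis.dtype)  # (3, 64, 64) complex128
```

Because the input is already in packed format, no pre- or post-shifting is needed before the FFT; the output stays in packed format.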

property ground_vis: Tensor#

The visibility cube in ‘ground’ format, i.e., fftshifted for plotting with imshow.

Returns:

complex-valued FFT of the image cube (i.e., the visibility cube), in ‘ground’ format.

Return type:

torch.Tensor of torch.complex128 of shape (nchan, npix, npix)

property ground_amp: Tensor#

The amplitude of the cube, arranged in unpacked format corresponding to the FFT of the sky_cube. Array dimensions for plotting given by self.coords.vis_ext.

Returns:

amplitude cube in ‘ground’ format.

Return type:

torch.Tensor of shape (nchan, npix, npix)

property ground_phase: Tensor#

The phase of the cube, arranged in unpacked format corresponding to the FFT of the sky_cube. Array dimensions for plotting given by self.coords.vis_ext.

Returns:

phase cube in ‘ground’ format (\([-\pi,\pi)\)).

Return type:

torch.Tensor of shape (nchan, npix, npix)
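The relationship between ‘packed’ and ‘ground’ format can be sketched with numpy: packed format keeps the zero spatial frequency at index [0, 0], and fftshift recenters it for plotting. The axis convention here is illustrative, not quoted from MPoL.

```python
import numpy as np

def ground_vis(packed_vis: np.ndarray) -> np.ndarray:
    """Shift a packed visibility cube to 'ground' format for imshow.

    In packed format the zero spatial frequency sits at index [0, 0];
    fftshift moves it to the center of the grid.
    """
    return np.fft.fftshift(packed_vis, axes=(1, 2))

rng = np.random.default_rng(1)
packed = np.fft.fft2(rng.normal(size=(1, 8, 8)), axes=(1, 2))
ground = ground_vis(packed)

amp = np.abs(ground)      # analogue of ground_amp
phase = np.angle(ground)  # analogue of ground_phase, in (-pi, pi]
```

After the shift, the zero-frequency visibility sits at the grid center (index npix // 2 along each spatial axis), which is what imshow with extent=self.coords.vis_ext expects.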

class mpol.fourier.NuFFT(coords: GridCoords, nchan: int = 1)[source]#

This layer translates input from an mpol.images.ImageCube to loose, ungridded samples of the Fourier plane, corresponding to the \(u,v\) locations provided. This layer is different from mpol.fourier.FourierCube in that, rather than producing the dense cube-like output from an FFT routine, it utilizes the non-uniform FFT, or ‘NuFFT’, to interpolate directly to discrete \(u,v\) locations. This is implemented using the KbNufft routines of the TorchKbNufft package.

Parameters:
  • coords (GridCoords) – an object already instantiated from the GridCoords class, specifying the image and Fourier grid dimensions.

  • nchan (int) – the number of channels in the mpol.images.ImageCube. Default = 1.

forward(packed_cube: Tensor, uu: Tensor, vv: Tensor, sparse_matrices: bool = False) Tensor[source]#

Perform the FFT of the image cube for each channel and interpolate to the uu and vv points. This call should automatically take the best parallelization option as indicated by the shape of the uu and vv points. In general, you probably do not want to provide baselines that include Hermitian pairs.

Parameters:
  • packed_cube (torch.Tensor) – shape (nchan, npix, npix). The cube should be a “prepacked” image cube, for example, from mpol.images.ImageCube.forward()

  • uu (torch.Tensor) – array of the u (East-West) spatial frequency coordinate [\(\lambda\)], of shape (nvis,) or (nchan, nvis)

  • vv (torch.Tensor) – array of the v (North-South) spatial frequency coordinate [\(\lambda\)] (must be the same shape as uu)

  • sparse_matrices (bool) – If False, use the default table-based interpolation of TorchKbNufft. If True, use TorchKbNufft sparse matrices (generally slower but more accurate). Note that sparse matrices are incompatible with multi-channel uu and vv arrays (see below).

Returns:

torch.Tensor of torch.complex128 – Fourier samples of shape (nchan, nvis), evaluated at the uu, vv points.

Dimensionality: you should consider the dimensionality of your image and your visibility samples when using this method. If your image has multiple channels (nchan > 1), the \(u,v\) sample locations corresponding to each channel may differ. In ALMA/VLA applications, this can arise when continuum observations are taken over significant bandwidth, since the spatial frequency sampled by any pair of antennas is wavelength-dependent,

\[u = \frac{D}{\lambda},\]

where \(D\) is the projected baseline (measured in, say, meters) and \(\lambda\) is the observing wavelength. In this application, the image-plane model could be the same for each channel, or it may vary with channel (necessary if the spectral slope of the source is significant).

On the other hand, with spectral line observations the total bandwidth is usually small enough that the \(u,v\) sample locations can be treated as the same for each channel, while the image-plane model usually varies substantially from channel to channel.

This routine determines whether the spatial frequencies are treated as constant based upon the dimensionality of the uu and vv input arguments:

  • If uu and vv have a shape of (nvis,), it is assumed that the spatial frequencies can be treated as constant with channel (and parallelization across the image cube nchan dimension uses the ‘coil’ dimension of the TorchKbNufft package).

  • If uu and vv have a shape of (nchan, nvis), it is assumed that the spatial frequencies are different for each channel, and the spatial frequencies provided for each channel are used (parallelization across the image cube nchan dimension uses the ‘batch’ dimension of the TorchKbNufft package).

Note that there is no straightforward, computationally efficient way to proceed if there are a different number of spatial frequencies for each channel. The best approach is likely to construct uu and vv arrays of shape (nchan, nvis), padding every channel with bogus \(u,v\) points so that all share the same length nvis, and to create a boolean mask that tracks which points are valid. Then, when this routine returns data of shape (nchan, nvis), you can use that mask to select only the valid \(u,v\) points.

Interpolation mode: you may choose the type of interpolation that KbNufft uses under the hood via the boolean sparse_matrices argument. If sparse_matrices=False, this routine will use the default table-based interpolation of TorchKbNufft. If sparse_matrices=True, the routine will calculate sparse matrices (which can be stored for later operations, as in mpol.fourier.NuFFTCached) and use them for the interpolation; this approach is likely to be more accurate but also slower. Note that as of TorchKbNufft version 1.4.0, sparse matrices are not yet available when parallelizing using the ‘batch’ dimension; this will result in a warning. For most applications, we anticipate the table-based interpolation to be sufficiently accurate, but this could change depending on your problem.
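The shape-based dispatch and the padding-plus-mask workaround described above can be sketched in numpy. The function and variable names here are illustrative, not part of the MPoL API.

```python
import numpy as np

def broadcast_mode(uu: np.ndarray) -> str:
    """Decide the parallelization strategy from the shape of uu.

    Mirrors the rule described above: 1D baselines are shared across
    channels ('coil'), 2D baselines are per-channel ('batch').
    """
    if uu.ndim == 1:
        return "coil"   # same (u, v) points for every channel
    return "batch"      # distinct (u, v) points per channel

# Padding channels with unequal numbers of points to a common nvis,
# tracking validity with a boolean mask (the padding value is arbitrary).
per_channel = [np.array([10.0, 20.0]), np.array([5.0, 15.0, 25.0])]
nvis = max(len(u) for u in per_channel)
uu = np.zeros((len(per_channel), nvis))
mask = np.zeros((len(per_channel), nvis), dtype=bool)
for i, u in enumerate(per_channel):
    uu[i, : len(u)] = u
    mask[i, : len(u)] = True

print(broadcast_mode(uu))  # batch
print(uu[mask])            # only the valid points survive the mask
```

After the NuFFT returns samples of shape (nchan, nvis), indexing with the same boolean mask discards the visibilities evaluated at the bogus padding points.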

class mpol.fourier.NuFFTCached(coords: GridCoords, uu: Tensor, vv: Tensor, nchan: int = 1, sparse_matrices: bool = True)[source]#

This layer is similar to the mpol.fourier.NuFFT, but provides extra functionality to cache the sparse matrices for a specific set of uu and vv points specified at initialization.

For repeated evaluations of this layer (as might exist within an optimization loop), sparse_matrices=True is likely to be the more accurate and faster choice. If sparse_matrices=False, this routine will use the default table-based interpolation of TorchKbNufft. Note that as of TorchKbNuFFT version 1.4.0, sparse matrices are not yet available when parallelizing using the ‘batch’ dimension — this will result in a warning.

Parameters:
  • coords (GridCoords) – an object already instantiated from the GridCoords class, specifying the image and Fourier grid dimensions.

  • nchan (int) – the number of channels in the mpol.images.ImageCube. Default = 1.

  • uu (array-like) – a length nvis array (not including Hermitian pairs) of the u (East-West) spatial frequency coordinate [k\(\lambda\)]

  • vv (array-like) – a length nvis array (not including Hermitian pairs) of the v (North-South) spatial frequency coordinate [k\(\lambda\)]

forward(packed_cube)[source]#

Perform the FFT of the image cube for each channel and interpolate to the uu and vv points set at layer initialization. This call should automatically take the best parallelization option as set by the shape of the uu and vv points.

Parameters:

packed_cube (torch.Tensor) – shape (nchan, npix, npix). The cube should be a “prepacked” image cube, for example, from mpol.images.ImageCube.forward()

Returns:

Fourier samples of shape (nchan, nvis), evaluated at the uu, vv points set at initialization.

Return type:

torch.Tensor of torch.complex128
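The caching pattern behind this layer, build the interpolation operator once at initialization and reuse it on every forward call, can be illustrated with a toy numpy stand-in. The nearest-neighbor matrix below substitutes for the Kaiser-Bessel sparse matrices TorchKbNufft would construct; CachedInterpolator is a hypothetical name, not MPoL API.

```python
import numpy as np

class CachedInterpolator:
    """Precompute-once, apply-many interpolation (a toy analogue of
    NuFFTCached). Nearest-neighbor weights stand in for the real
    Kaiser-Bessel sparse interpolation matrices."""

    def __init__(self, grid: np.ndarray, samples: np.ndarray):
        # Build the (nvis, ngrid) interpolation matrix once, at init.
        idx = np.abs(grid[None, :] - samples[:, None]).argmin(axis=1)
        self.matrix = np.zeros((len(samples), len(grid)))
        self.matrix[np.arange(len(samples)), idx] = 1.0

    def __call__(self, gridded_values: np.ndarray) -> np.ndarray:
        # Each forward pass is just a multiply by the cached matrix.
        return self.matrix @ gridded_values

grid = np.linspace(0.0, 1.0, 11)
interp = CachedInterpolator(grid, samples=np.array([0.12, 0.48]))
print(interp(grid**2))  # nearest grid values of f(x) = x**2
```

This is why the uu and vv points must be fixed at initialization: the cached operator is only valid for that specific set of sample locations, which suits repeated evaluations inside an optimization loop.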

mpol.fourier.generate_fake_data(packed_cube: Tensor, coords: GridCoords, uu: Tensor, vv: Tensor, weight: Tensor) tuple[Tensor, Tensor][source]#

Create a fake dataset from a supplied packed tensor cube using mpol.fourier.NuFFT. See the mock dataset tutorial for more details on how to prepare a generic image as a packed_cube.

The uu and vv baselines can either be 1D or 2D, depending on the desired broadcasting behavior from the mpol.fourier.NuFFT.

If the weight array is 1D, the routine assumes the weights will be broadcasted to all nchan. Otherwise, provide a 2D weight array.

Parameters:
  • packed_cube (torch.Tensor of torch.double) – the image in “packed” format with shape (nchan, npix, npix)

  • coords (mpol.coordinates.GridCoords)

  • uu (torch.Tensor of torch.double) – array of u spatial frequency coordinates, not including Hermitian pairs. Units of [\(\lambda\)]

  • vv (torch.Tensor of torch.double) – array of v spatial frequency coordinates, not including Hermitian pairs. Units of [\(\lambda\)]

  • weight (torch.Tensor of torch.double) – shape (nchan, nvis) array of thermal weights \(w_i = 1/\sigma_i^2\). Units of [\(1/\mathrm{Jy}^2\)]. Will broadcast from 1D to 2D if necessary.

Returns:

A 2-tuple of the fake data: the first array is the mock dataset including noise, the second is the mock dataset without added noise. Each array is of shape (nchan, nvis).

Return type:

tuple of torch.Tensor of torch.complex128
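The noise model implied by the weight parameter, \(\sigma_i = 1/\sqrt{w_i}\), can be sketched in numpy. Here add_thermal_noise is an illustrative name, and applying \(\sigma_i\) independently to the real and imaginary components is an assumption about the convention.

```python
import numpy as np

def add_thermal_noise(vis: np.ndarray, weight: np.ndarray,
                      rng: np.random.Generator) -> np.ndarray:
    """Add complex Gaussian noise with sigma_i = 1 / sqrt(w_i).

    A sketch of the noise model described above; sigma is assumed to
    apply per real and imaginary component.
    """
    weight = np.broadcast_to(weight, vis.shape)  # 1D weights -> all nchan
    sigma = 1.0 / np.sqrt(weight)
    noise = rng.normal(scale=sigma) + 1j * rng.normal(scale=sigma)
    return vis + noise

rng = np.random.default_rng(42)
vis = np.ones((2, 100), dtype=complex)  # noiseless model visibilities
weight = np.full(100, 4.0)              # sigma = 0.5 for every point
noisy = add_thermal_noise(vis, weight, rng)
print(noisy.shape)  # (2, 100)
```

The routine's 2-tuple return then corresponds to (noisy, vis): the same model visibilities with and without the added thermal noise.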