Changelog

v0.3.0

  • Moved many continuously-integrated tutorials to the MPoL-dev/examples repository.

  • Added mpol.images.GaussConvImage() layer to calculate a Gaussian tapering window in the visibility plane.

  • Removed explicit type declarations in base MPoL modules. Previously, core representations were set to be in float64 or complex128. Now, core MPoL representations (e.g., mpol.images.BaseCube) will follow the default tensor type, which is commonly torch.float32. If you want your model to run fully in float32 or complex64, then be sure that your data is also in these formats, since otherwise PyTorch will promote downstream tensors as needed.
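
For example, a minimal sketch of the new dtype behavior (torch.set_default_dtype is standard PyTorch, not an MPoL API):

```python
import torch

# Fresh tensors follow PyTorch's default dtype, commonly float32
assert torch.zeros(3).dtype == torch.float32

# To run a model fully in float64/complex128 (the old fixed behavior),
# set the default dtype before building the model, and convert your
# data to match so downstream tensors are not silently promoted:
torch.set_default_dtype(torch.float64)
assert torch.zeros(3).dtype == torch.float64
data = torch.tensor([1.0, 2.0])  # created as float64 under the new default
assert data.dtype == torch.float64
```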

  • Added mpol.utils.convolve_packed_cube() method to convolve a 3D packed image cube with a 2D Gaussian. You can specify major axis, minor axis, and rotation angle.

  • Added the vis_ext_Mlam instance attribute to mpol.coordinates.GridCoords for convenient plotting of visibility grids with axis labels in units of M\(\lambda\).

  • Updated MPoL-dev/examples with Stochastic Gradient Descent Example.

  • Standardized the nomenclature of mpol.coordinates.GridCoords and mpol.fourier.FourierCube to use sky_cube for a normal image and ground_cube for a normal visibility cube (rather than sky_-prefixed names for visibility quantities). Internally, routines use packed_cube instead of cube to make clear when the packed format is expected.

  • Modified the mpol.coordinates.GridCoords object to use cached properties (#187).
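
The cached-property pattern computes a derived quantity once on first access and memoizes it on the instance. A minimal standalone sketch (a hypothetical Coords class, not mpol.coordinates.GridCoords itself):

```python
from functools import cached_property

class Coords:
    """Toy stand-in for a coordinates object (illustrative only)."""

    def __init__(self, cell_size, npix):
        self.cell_size = cell_size  # [arcsec], hypothetical units
        self.npix = npix

    @cached_property
    def img_ext(self):
        # computed once on first access, then stored on the instance
        half = self.cell_size * self.npix / 2
        return (-half, half, -half, half)

coords = Coords(cell_size=0.5, npix=8)
assert coords.img_ext == (-2.0, 2.0, -2.0, 2.0)
# second access returns the memoized tuple without recomputing
assert "img_ext" in coords.__dict__
```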

  • Changed the base spatial frequency unit from k\(\lambda\) to \(\lambda\), addressing #223. This will affect most users' data-reading routines!
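
Concretely, baselines previously supplied in k\(\lambda\) must now be scaled by \(10^3\) (variable names here are illustrative, not MPoL API):

```python
# old convention: baselines in kilolambda; new convention: lambda
uu_klam = 1500.0        # baseline in kλ (old unit)
uu_lam = uu_klam * 1e3  # the same baseline in λ (new unit)
assert uu_lam == 1.5e6
```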

  • Added the mpol.gridding.DirtyImager.from_tensors() routine to cover the use case where one might want to use the mpol.gridding.DirtyImager() to image residual visibilities. Otherwise, mpol.gridding.DirtyImager() and mpol.gridding.DataAverager() are the only notable routines that expect np.ndarray input arrays. This is because they are designed to work with data arrays directly after loading (say from a MeasurementSet or .npy file) and are implemented internally in numpy. If a routine requires data separately as data_re and data_im, that is a tell-tale sign that the routine works with numpy histogram routines internally.

  • Changed name of mpol.precomposed.SimpleNet to mpol.precomposed.GriddedNet to more clearly indicate purpose. Updated documentation to make clear that this is just a convenience starter module, and users are encouraged to write their own nn.Modules.

  • Changed internal instance attribute of mpol.images.ImageCube from cube to packed_cube to more clearly indicate format.

  • Removed mpol.fourier.get_vis_residuals and added predict_loose_visibilities to mpol.precomposed.SimpleNet.

  • Standardized the treatment of numpy arrays vs. torch.Tensors, with a preference for torch.Tensor in many routines. This simplifies the internal logic of the routines and will make most operations run faster.

  • Standardized the input types of mpol.fourier.NuFFT and mpol.fourier.NuFFTCached to expect torch.Tensors (removed support for numpy arrays). This simplifies the internal logic of the routines and will make most operations run faster.

  • Changed mpol.fourier.make_fake_data -> mpol.fourier.generate_fake_data.

  • Changed the base spatial frequency unit from k\(\lambda\) to \(\lambda\), closing issue #223 and simplifying the internals of the codebase in numerous places. Routines that previously accepted spatial frequencies in k\(\lambda\) now expect inputs in units of \(\lambda\).

  • Major documentation edits to be more concise with the objective of making the core package easier to develop and maintain. Some tutorials moved to the MPoL-dev/examples repository.

  • Added the mpol.losses.neg_log_likelihood_avg() method to be used in point-estimate or optimization situations where data amplitudes or weights may be adjusted as part of the optimization (such as via self-calibration). Moved all documentation around loss functions into the Losses API.

  • Renamed mpol.losses.nll -> mpol.losses.r_chi_squared() and mpol.losses.nll_gridded -> mpol.losses.r_chi_squared_gridded(), because that is what those routines were calculating all along (#237). Tutorials have also been updated to reflect the change.

  • Fixed implementation and docstring of mpol.losses.log_likelihood() (#237).

  • Made some progress converting docstrings from “Google” style format to “NumPy” style format. Ian is now convinced that NumPy style format is more readable for the type of docstrings we write in MPoL. We usually require long type definitions and long argument descriptions, and the extra indentation required for Google makes these very scrunched.

  • Made the passthrough behaviour of mpol.images.ImageCube the default and removed the passthrough parameter entirely. Previously, it was possible to have mpol.images.ImageCube act as a layer with nn.Parameters. This functionality has effectively been superseded by mpol.images.BaseCube, which provides a more useful way to parameterize pixel values. If a one-to-one mapping (including negative pixels) from nn.Parameters to the output tensor is desired, specify pixel_mapping=lambda x: x when instantiating mpol.images.BaseCube. More details in #246.
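
A toy sketch of the pixel-mapping idea (a hypothetical stand-in module, not mpol.images.BaseCube itself): a softplus mapping enforces positivity, while an identity lambda gives the one-to-one mapping described above.

```python
import torch
from torch import nn

class TinyBaseCube(nn.Module):
    """Hypothetical stand-in for mpol.images.BaseCube: free nn.Parameter
    values are sent through pixel_mapping to produce the output image."""

    def __init__(self, npix=4, pixel_mapping=nn.functional.softplus):
        super().__init__()
        self.base_cube = nn.Parameter(torch.randn(1, npix, npix))
        self.pixel_mapping = pixel_mapping

    def forward(self):
        return self.pixel_mapping(self.base_cube)

positive = TinyBaseCube()  # softplus: output is always > 0
identity = TinyBaseCube(pixel_mapping=lambda x: x)  # one-to-one, negatives allowed
assert (positive() > 0).all()
```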

  • Removed the convenience classmethods from_image_properties from across the code base (#233). The recommended workflow is to create a mpol.coordinates.GridCoords object and pass it to instantiate these objects as needed, rather than passing cell_size and npix separately. For all but trivially short workflows, this reduces the number of variables the user needs to keep track of and pass around, revealing the central role of the mpol.coordinates.GridCoords object and its useful attributes for image extent, visibility extent, etc. Most importantly, it significantly reduces the size of the codebase and the burden to maintain, test, and document multiple entry points to key nn.Modules.

  • Removed unused routine mpol.utils.log_stretch.

  • Added type hints for core modules (#54). This should improve stability of core routines and help users when writing code using MPoL in an IDE.

  • Manually line wrapped many docstrings to conform to 88 characters per line or less. Ian thought black would do this by default, but actually that doesn’t seem to be the case.

  • Fully leaned into the pyproject.toml setup to modernize the build via hatch. This centralizes the project dependencies and derives the package version directly from git tags. Intermediate packages built from commits after the latest tag (e.g., 0.2.0) will have a longer version string, e.g., 0.2.1.dev178+g16cfc3e.d20231223, where the version is a guess at the next release and the hash references the commit. This means that developers bump versions entirely by tagging a new release with git (or, more likely, by drafting a new release on the GitHub releases page).
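
A typical pyproject.toml arrangement for tag-derived versioning looks like the following (a generic hatch-vcs sketch with a placeholder project name, not necessarily MPoL's exact configuration):

```toml
[build-system]
requires = ["hatchling", "hatch-vcs"]
build-backend = "hatchling.build"

[project]
name = "example-package"  # placeholder
dynamic = ["version"]     # version comes from git tags, not a literal

[tool.hatch.version]
source = "vcs"
```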

  • Removed setup.py.

  • TOML does not support composing entries from other keyed entries, so creating layered build environments of default, docs, test, and dev as we used to with setup.py is laborious and repetitive with pyproject.toml. We have simplified the list to default (key dependencies), test (the minimum necessary for the test suite), and dev (everything needed to build the docs and actively develop the package).

  • Removed the custom spheroidal_gridding routines, tests, and the UVDataset object that used them. These have been superseded by the TorchKbNuFFT package. For reference, the old routines (including the tricky corrfun math) are preserved in a GitHub Gist.

  • Changed API of NuFFT. Previous signature took uu and vv points at initialization (__init__), and the .forward method took only an image cube. This behaviour is preserved in a new class NuFFTCached. The updated signature of NuFFT does not take uu and vv at initialization. Rather, its forward method is modified to take an image cube and the uu and vv points. This allows an instance of this class to be used with new uu and vv points in each forward call. This follows the standard expectation of a layer (e.g., a linear regression function predicting at new x) and the pattern of the TorchKbNuFFT package itself. It is expected that the new NuFFT will be the default routine and NuFFTCached will only be used in specialized circumstances (and possibly deprecated/removed in future updates). Changes implemented by #232.
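
The difference in calling convention can be illustrated with a toy direct (slow) non-uniform Fourier transform, where the baseline points travel with each forward() call. This is an illustrative stand-in, not the TorchKbNuFFT-backed implementation:

```python
import math
import torch
from torch import nn

class DirectFT(nn.Module):
    """Toy non-uniform DFT with the *new* NuFFT-style signature:
    (uu, vv) are forward() arguments, so one instance serves many
    baseline sets. Illustrative only; not mpol.fourier.NuFFT."""

    def __init__(self, cell_size: float, npix: int):
        super().__init__()
        # pixel centers (cell_size taken in radians here for simplicity)
        centers = (torch.arange(npix) - npix // 2) * cell_size
        self.y, self.x = torch.meshgrid(centers, centers, indexing="ij")
        self.cell_area = cell_size**2

    def forward(self, image, uu, vv):
        # V(u_k, v_k) = sum_xy I(x, y) exp(-2πi (u_k x + v_k y)) ΔA
        phase = -2j * math.pi * (uu[:, None, None] * self.x
                                 + vv[:, None, None] * self.y)
        return (image * torch.exp(phase)).sum(dim=(-2, -1)) * self.cell_area

ft = DirectFT(cell_size=1e-6, npix=16)
image = torch.ones(16, 16)
# new uu, vv on every call -- no re-instantiation needed
vis = ft(image, torch.tensor([0.0]), torch.tensor([0.0]))
# at (u, v) = (0, 0) the visibility equals the total flux
assert torch.allclose(vis.real, torch.tensor(16 * 16 * 1e-12))
```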

  • Moved “Releasing a new version of MPoL” from the wiki to the Developer Documentation on the main docs.

v0.2.0

  • Moved docs build out of combined and into standalone test workflow

  • Updated package test workflow with new dependencies and caching

  • Added geometry tests

  • Reorganized some of the docs API

  • Expanded discussion and demonstration in the optimization.md tutorial

  • Localized the hardcoded Zenodo record reference to a single instance, and created a new external Zenodo record from which to draw

  • Added Parametric inference with Pyro tutorial

  • Updated some discussion and notation in rml_intro.md tutorial

  • Added mypy static type checks

  • Added frank as a ‘test’ and ‘analysis’ extras dependency

  • Added fast-histogram as a core dependency

  • Updated support to recent Python versions

  • Removed mpol.coordinates._setup_coords helper function from GridCoords

  • Added new program mpol.crossval with the new CrossValidate for running a cross-validation loop and the new RandomCellSplitGridded for splitting data into training and test sets

  • Moved and rescoped KFoldCrossValidatorGridded to DartboardSplitGridded with some syntax changes

  • Altered GriddedDataset to subclass from torch.nn.Module, altered its args, added PyTorch buffers to it, added mpol.datasets.GriddedDataset.forward() to it

  • Added class method from_image_properties to various classes including BaseCube and ImageCube

  • Altered UVDataset to subclass from torch.utils.data.Dataset, altered its initialization signature, added new properties

  • Altered FourierCube args and initialization signature, added PyTorch buffers to it

  • Added get_vis_residuals()

  • Added new program mpol.geometry with new flat_to_observer() and observer_to_flat()

  • Replaced Gridder with the rescoped GridderBase and two classes which subclass this, DirtyImager and DataAverager

  • Added property flux to ImageCube

  • Added new program mpol.onedim with new radialI() and radialV()

  • Added new program mpol.training with new TrainTest

  • Added new utility functions torch2npy(), check_baselines(), get_optimal_image_properties()

  • Added expected types and error checks in several places throughout the codebase, as well as new programs mpol.exceptions and mpol.protocols

  • Updated tests in several places and added many new tests

  • Added shell script GPU_SLURM.sh for future test implementations

  • Updated citations to include new contributors

v0.1.4

  • Removed the GriddedResidualConnector class and the src/connectors.py module. Moved index_vis to datasets.py.

  • Changed BaseCube, ImageCube, and FourierCube initialization signatures

v0.1.3

v0.1.2

  • Switched documentation backend to MyST-NB.

  • Switched documentation theme to Sphinx Book Theme.

  • Added NuFFT layer, allowing the direct forward modeling of un-gridded \(u,v\) data. Closes GitHub issue #17.

v0.1.1

v0.1.0

  • Updated citations to include Brianna Zawadzki

  • Added Gridder and GridCoords objects

  • Removed mpol.dirty_image module

  • Migrated prolate spheroidal wavefunctions to mpol.spheroidal_gridding module

  • Added Jupyter notebook tutorial build process using Jupytext

  • Added SimpleNet precomposed module

  • Added Mermaid.js charting ability (for flowcharts)

  • Moved docs to GitHub Pages instead of Read the Docs

  • Added \(\mathrm{Jy\;arcsec}^{-2}\) units to Gridder output

v0.0.5

  • Introduced this Changelog

  • Updated citations to include Ryan Loomis

  • Added dirty_image.get_dirty_image routine, which includes Briggs robust weighting.

  • Added assert statements to catch if the user chooses cell_size too coarsely relative to the spatial frequencies in the dataset.

  • Implemented preliminary power spectral density loss functions.

  • The image cube is now natively stored (and optimized) using the natural logarithm of the pixel values. This de facto enforces positivity on all pixel values.

  • Changed entropy function to follow EHT-IV.

v0.0.4

  • Made the package pip installable.