# Changelog

## v0.3.0
- Moved many continuously-integrated tutorials to MPoL/examples.
- Added the `mpol.images.GaussConvImage()` layer to calculate a Gaussian tapering window in the visibility plane.
- Removed explicit type declarations in base MPoL modules. Previously, core representations were set to be in `float64` or `complex128`. Now, core MPoL representations (e.g., `mpol.images.BaseCube`) will follow the default tensor type, which is commonly `torch.float32`. If you want your model to run fully in `float32` or `complex64`, be sure that your data is also in these formats, since otherwise PyTorch will promote downstream tensors as needed.
- Added the `mpol.utils.convolve_packed_cube()` method to convolve a 3D packed image cube with a 2D Gaussian. You can specify the major axis, minor axis, and rotation angle.
- Added the `vis_ext_Mlam` instance attribute to `mpol.coordinates.GridCoords` for convenient plotting of visibility grids with axis labels in units of M\(\lambda\).
- Updated MPoL-dev/examples with a Stochastic Gradient Descent example.
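A routine like `mpol.utils.convolve_packed_cube()` above can be sketched in plain NumPy. This is an illustrative analogue only — the function names and the FFT-based approach here are assumptions, not MPoL's implementation:

```python
import numpy as np

def gaussian_kernel(npix, cell_size, fwhm_maj, fwhm_min, theta):
    """Elliptical Gaussian kernel, normalized to unit sum.
    cell_size and FWHMs share units (e.g., arcsec); theta is in radians."""
    sigma_maj = fwhm_maj / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    sigma_min = fwhm_min / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    x = (np.arange(npix) - npix // 2) * cell_size
    xx, yy = np.meshgrid(x, x)
    # rotate coordinates by the position angle theta
    xp = xx * np.cos(theta) + yy * np.sin(theta)
    yp = -xx * np.sin(theta) + yy * np.cos(theta)
    g = np.exp(-0.5 * ((xp / sigma_maj) ** 2 + (yp / sigma_min) ** 2))
    return g / g.sum()

def convolve_cube(cube, kernel):
    """FFT-convolve each channel of a (nchan, npix, npix) cube with a 2D kernel."""
    # ifftshift moves the kernel center to pixel (0, 0), i.e., "packed" order
    K = np.fft.rfft2(np.fft.ifftshift(kernel))
    return np.fft.irfft2(np.fft.rfft2(cube, axes=(-2, -1)) * K, axes=(-2, -1))

cube = np.zeros((1, 64, 64))
cube[0, 32, 32] = 1.0  # unit point source
blurred = convolve_cube(cube, gaussian_kernel(64, 0.01, 0.05, 0.03, np.pi / 6))
```

Because the kernel sums to one, the convolution conserves the total flux of the cube.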
- Standardized the nomenclature of `mpol.coordinates.GridCoords` and `mpol.fourier.FourierCube` to use `sky_cube` for a normal image and `ground_cube` for a normal visibility cube (rather than `sky_` for visibility quantities). Routines use `packed_cube` instead of `cube` internally to be clear when the packed format is preferred.
- Modified the `mpol.coordinates.GridCoords` object to use cached properties (#187).
- Changed the base spatial frequency unit from k\(\lambda\) to \(\lambda\), addressing #223. This will affect most users' data-reading routines!
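Because of the unit change above, data-reading code that previously supplied baselines in k\(\lambda\) needs a one-time conversion; a minimal sketch (the array names are hypothetical):

```python
import numpy as np

# baselines previously stored in kilolambda
uu_klam = np.array([10.5, 250.0, 1800.0])
vv_klam = np.array([-35.2, 90.1, -2200.0])

# spatial frequencies are now expected in units of lambda
uu_lam = uu_klam * 1e3
vv_lam = vv_klam * 1e3
```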
- Added the `mpol.gridding.DirtyImager.from_tensors()` routine to cover the use case where one might want to use `mpol.gridding.DirtyImager()` to image residual visibilities. Otherwise, `mpol.gridding.DirtyImager()` and `mpol.gridding.DataAverager()` are the only notable routines that expect `np.ndarray` input arrays. This is because they are designed to work with data arrays directly after loading (say, from a MeasurementSet or `.npy` file) and are implemented internally in numpy. If a routine requires data separately as `data_re` and `data_im`, that is a tell-tale sign that the routine works with numpy histogram routines internally.
- Changed the name of `mpol.precomposed.SimpleNet` to `mpol.precomposed.GriddedNet` to more clearly indicate its purpose. Updated the documentation to make clear that this is just a convenience starter module, and users are encouraged to write their own `nn.Module`s.
- Changed the internal instance attribute of `mpol.images.ImageCube` from `cube` to `packed_cube` to more clearly indicate its format.
- Removed `mpol.fourier.get_vis_residuals` and added `predict_loose_visibilities` to `mpol.precomposed.SimpleNet`.
- Standardized the treatment of numpy arrays vs. `torch.Tensor`s, with a preference for `torch.Tensor` in many routines. This simplifies the internal logic of the routines and will make most operations run faster.
- Standardized the input types of `mpol.fourier.NuFFT` and `mpol.fourier.NuFFTCached` to expect `torch.Tensor`s (removed support for numpy arrays). This simplifies the internal logic of the routines and will make most operations run faster.
- Changed `mpol.fourier.make_fake_data` -> `mpol.fourier.generate_fake_data`.
- Changed the base spatial frequency unit from k\(\lambda\) to \(\lambda\), closing issue #223 and simplifying the internals of the codebase in numerous places. The following routines now expect inputs in units of \(\lambda\):
- Major documentation edits to be more concise, with the objective of making the core package easier to develop and maintain. Some tutorials moved to the MPoL-dev/examples repository.
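The `sky_cube`/`packed_cube` nomenclature standardized above comes down to FFT ordering: the packed format places the image center at pixel (0, 0), ready for `np.fft`-style routines. A sketch of the idea with NumPy shifts (MPoL's own conversion helpers may additionally flip axes):

```python
import numpy as np

sky_cube = np.random.default_rng(0).normal(size=(3, 8, 8))

# "packed" format: the image center moves to pixel (0, 0) of each channel
packed_cube = np.fft.ifftshift(sky_cube, axes=(-2, -1))

# the round trip recovers the normal, center-in-the-middle image
recovered = np.fft.fftshift(packed_cube, axes=(-2, -1))
```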
- Added the `mpol.losses.neg_log_likelihood_avg()` method to be used in point-estimate or optimization situations where data amplitudes or weights may be adjusted as part of the optimization (such as via self-calibration). Moved all documentation around loss functions into the Losses API.
- Renamed `mpol.losses.nll` -> `mpol.losses.r_chi_squared()` and `mpol.losses.nll_gridded` -> `mpol.losses.r_chi_squared_gridded()`, because that is what those routines were previously calculating (#237). Tutorials have also been updated to reflect the change.
- Fixed the implementation and docstring of `mpol.losses.log_likelihood()` (#237).
- Made some progress converting docstrings from "Google" style to "NumPy" style. Ian is now convinced that the NumPy style is more readable for the type of docstrings we write in MPoL. We usually require long type definitions and long argument descriptions, and the extra indentation required by the Google style makes these very scrunched.
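The renaming above reflects what the routine computes: a reduced chi-squared of model visibilities against data. A rough sketch of that statistic (the normalization convention here is an assumption, not necessarily MPoL's exact one):

```python
import numpy as np

def r_chi_squared(model_vis, data_vis, weight):
    """Reduced chi-squared between complex visibilities.
    weight = 1/sigma^2; the 2N in the denominator counts the real and
    imaginary parts of each visibility as separate data points."""
    chi2 = np.sum(weight * np.abs(data_vis - model_vis) ** 2)
    return chi2 / (2 * data_vis.size)

data = np.array([1 + 1j, 2 - 1j, 0.5 + 0j])
weight = np.ones(3)
perfect = r_chi_squared(data, data, weight)  # a perfect model gives 0.0
```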
- Made the `passthrough` behaviour of `mpol.images.ImageCube` the default and removed this parameter entirely. Previously, it was possible to have `mpol.images.ImageCube` act as a layer with `nn.Parameter`s. This functionality has effectively been replaced since the introduction of `mpol.images.BaseCube`, which provides a more useful way to parameterize pixel values. If a one-to-one mapping (including negative pixels) from `nn.Parameter`s to the output tensor is desired, one can specify `pixel_mapping=lambda x: x` when instantiating `mpol.images.BaseCube`. More details in #246.
- Removed the convenience classmethods `from_image_properties` from across the code base (#233). The recommended workflow is to create an `mpol.coordinates.GridCoords` object and pass that to instantiate these objects as needed, rather than passing `cell_size` and `npix` separately. For nearly all but trivially short workflows, this reduces the number of variables the user needs to keep track of and pass around, revealing the central role of the `mpol.coordinates.GridCoords` object and its useful attributes for image extent, visibility extent, etc. Most importantly, this significantly reduces the size of the codebase and the burden to maintain, test, and document multiple entry points to key `nn.Module`s. We removed `from_image_properties` from:
- Removed the unused routine `mpol.utils.log_stretch`.
- Added type hints for core modules (#54). This should improve the stability of core routines and help users writing code with MPoL in an IDE.
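The `pixel_mapping` idea mentioned above — mapping free parameters to pixel values — can be illustrated without torch. This toy class is hypothetical and only mimics the pattern; it is not MPoL's `BaseCube`:

```python
import numpy as np

class ToyBaseCube:
    """Toy analogue of a base cube: free parameters plus a pixel mapping."""

    def __init__(self, params, pixel_mapping=None):
        # default mapping (a softplus) keeps all pixel values strictly positive
        if pixel_mapping is None:
            pixel_mapping = lambda x: np.log1p(np.exp(x))
        self.params = params
        self.pixel_mapping = pixel_mapping

    def forward(self):
        return self.pixel_mapping(self.params)

params = np.array([-2.0, 0.0, 3.0])
positive = ToyBaseCube(params).forward()               # all values > 0
identity = ToyBaseCube(params, lambda x: x).forward()  # negatives pass through
```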
- Manually line-wrapped many docstrings to conform to 88 characters per line or less. Ian thought `black` would do this by default, but that doesn't seem to be the case.
- Fully leaned into the `pyproject.toml` setup to modernize the build via hatch. This centralizes the project dependencies and derives the package version directly from git tags. Intermediate packages built from commits after the latest tag (e.g., `0.2.0`) will have an extra-long version string, e.g., `0.2.1.dev178+g16cfc3e.d20231223`, where the version is a guess at the next version and the hash references the commit. This means that developers bump versions entirely by tagging a new version with git (or, more likely, by drafting a new release on the GitHub release page).
- Removed `setup.py`.
- TOML does not support adding keyed entries, so creating layered build environments of default, `docs`, `test`, and `dev` as we used to with `setup.py` is laborious and repetitive with `pyproject.toml`. We have simplified the list to be default (key dependencies), `test` (the minimum necessary for the test suite), and `dev` (covering everything needed to build the docs and actively develop the package).
- Removed the custom `spheroidal_gridding` routines, tests, and the `UVDataset` object that used them. These have been superseded by the TorchKbNuFFT package. For reference, the old routines (including the tricky `corrfun` math) are preserved in a Gist.
- Changed the API of `NuFFT`. The previous signature took the `uu` and `vv` points at initialization (`__init__`), and the `.forward` method took only an image cube. This behaviour is preserved in a new class, `NuFFTCached`. The updated signature of `NuFFT` does not take `uu` and `vv` at initialization. Rather, its `forward` method is modified to take an image cube and the `uu` and `vv` points. This allows an instance of the class to be used with new `uu` and `vv` points in each forward call. This follows the standard expectation of a layer (e.g., a linear regression function predicting at new `x`) and the pattern of the TorchKbNuFFT package itself. It is expected that the new `NuFFT` will be the default routine and `NuFFTCached` will only be used in specialized circumstances (and possibly deprecated/removed in future updates). Changes implemented by #232.
- Moved "Releasing a new version of MPoL" from the wiki to the Developer Documentation in the main docs.
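The `NuFFT` API change above — supplying `uu` and `vv` on every `forward` call rather than at initialization — can be illustrated with a tiny direct non-uniform Fourier sum. This is schematic only; the real layer uses TorchKbNuFFT, and the coordinate conventions here are assumptions:

```python
import numpy as np

def forward(image, xs, ys, uu, vv):
    """Direct Fourier transform of a 2D image sampled at arbitrary (u, v).
    xs, ys are pixel coordinates in radians; uu, vv in cycles (lambda)."""
    vis = np.empty(len(uu), dtype=complex)
    for k in range(len(uu)):
        # xs varies along columns, ys along rows of the image
        phase = np.exp(-2j * np.pi * (uu[k] * xs[None, :] + vv[k] * ys[:, None]))
        vis[k] = np.sum(image * phase)
    return vis

npix = 16
xs = ys = (np.arange(npix) - npix // 2) * 1e-6  # radians
image = np.zeros((npix, npix))
image[npix // 2, npix // 2] = 1.0  # unit point source at the phase center

# new (u, v) points can be supplied on every call, as in the updated layer
vis = forward(image, xs, ys, np.array([0.0, 5e4]), np.array([0.0, -5e4]))
```

A point source at the phase center has unit visibility amplitude everywhere, which makes this sketch easy to sanity-check.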
## v0.2.0
- Moved the docs build out of the combined workflow and into a standalone test workflow.
- Updated the package test workflow with new dependencies and caching.
- Added geometry tests.
- Reorganized some of the docs API.
- Expanded the discussion and demonstration in the `optimzation.md` tutorial.
- Localized the hardcoded Zenodo record reference to a single instance, and created a new external Zenodo record from which to draw.
- Added a Parametric Inference with Pyro tutorial.
- Updated some discussion and notation in the `rml_intro.md` tutorial.
- Added `mypy` static type checks.
- Added `frank` as a 'test' and 'analysis' extras dependency.
- Added `fast-histogram` as a core dependency.
- Updated support to recent Python versions.
- Removed the `mpol.coordinates._setup_coords` helper function from `GridCoords`.
- Added a new program `mpol.crossval` with the new `CrossValidate` for running a cross-validation loop and the new `RandomCellSplitGridded` for splitting data into training and test sets.
- Moved and rescoped `KFoldCrossValidatorGridded` to `DartboardSplitGridded` with some syntax changes.
- Altered `GriddedDataset` to subclass from `torch.nn.Module`, altered its args, added PyTorch buffers to it, and added `mpol.datasets.GriddedDataset.forward()` to it.
- Added the class method `from_image_properties` to various classes including `BaseCube` and `ImageCube`.
- Altered `UVDataset` to subclass from `torch.utils.data.Dataset`, altered its initialization signature, and added new properties.
- Altered `FourierCube` args and initialization signature, and added PyTorch buffers to it.
- Added `get_vis_residuals()`.
- Added a new program `mpol.geometry` with new `flat_to_observer()` and `observer_to_flat()`.
- Replaced `Gridder` with the rescoped `GridderBase` and two classes which subclass this, `DirtyImager` and `DataAverager`.
- Added the property `flux` to `ImageCube`.
- Added a new program `mpol.onedim` with new `radialI()` and `radialV()`.
- Added a new program `mpol.training` with new `TrainTest` and `radialV()`.
- Added new utility functions `torch2npy()`, `check_baselines()`, and `get_optimal_image_properties()`.
- Added expected types and error checks in several places throughout the codebase, as well as the new programs `mpol.exceptions` and `mpol.protocols`.
- Updated tests in several places and added many new tests.
- Added the shell script `GPU_SLURM.sh` for future test implementations.
- Updated citations to include new contributors.
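A transform in the spirit of `flat_to_observer()` above can be sketched as successive rotations. The angle convention below is a common one for inclined disks and may well differ from MPoL's; treat it as illustrative:

```python
import numpy as np

def flat_to_observer(x, y, omega=0.0, incl=0.0, Omega=0.0):
    """Rotate flat-disk frame coordinates into the observer frame.
    omega: argument of periastron, incl: inclination, Omega: position angle,
    all in radians. Convention assumed for illustration only."""
    # rotate by omega within the disk plane
    x1 = x * np.cos(omega) - y * np.sin(omega)
    y1 = x * np.sin(omega) + y * np.cos(omega)
    # incline the disk: foreshorten one axis
    y2 = y1 * np.cos(incl)
    # rotate by the position angle on the sky
    X = x1 * np.cos(Omega) - y2 * np.sin(Omega)
    Y = x1 * np.sin(Omega) + y2 * np.cos(Omega)
    return X, Y
```

A face-on disk (all angles zero) is unchanged, while an edge-on disk (`incl = np.pi / 2`) collapses the foreshortened axis.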
## v0.1.4
- Removed the `GriddedResidualConnector` class and the `src/connectors.py` module. Moved `index_vis` to `datasets.py`.
- Changed the `BaseCube`, `ImageCube`, and `FourierCube` initialization signatures.
## v0.1.3
- Added the `mpol.fourier.make_fake_data()` routine and the Mock Data tutorial.
- Fixed a bug in the Dirty Image Initialization tutorial so that the dirty image is delivered in units of Jy/arcsec^2.
## v0.1.2
- Switched the documentation backend to MyST-NB.
- Switched the documentation theme to Sphinx Book Theme.
- Added a `NuFFT` layer, allowing the direct forward modeling of un-gridded \(u,v\) data. Closes GitHub issue #17.
## v0.1.1
- Added `HannConvCube`, incorporating Hann-like pixels, and bundled it in the `SimpleNet` module.
- Added `Dartboard` and `KFoldCrossValidatorGridded` for cross validation.
- Added a cross validation tutorial.
- Removed `DatasetConnector` in favor of `nll_gridded()`.
- Added `ground_cube_to_packed_cube()`, `packed_cube_to_ground_cube()`, `sky_cube_to_packed_cube()`, and `packed_cube_to_sky_cube()`.
## v0.1.0
- Updated citations to include Brianna Zawadzki.
- Added the `Gridder` and `GridCoords` objects.
- Removed the `mpol.dirty_image` module.
- Migrated the prolate spheroidal wavefunctions to the `mpol.spheroidal_gridding` module.
- Added a Jupyter notebook tutorial build process using Jupytext.
- Added the `SimpleNet` precomposed module.
- Added Mermaid.js charting ability (for flowcharts).
- Moved the docs to github.io pages instead of Read the Docs.
- Added \(\mathrm{Jy\;arcsec}^{-2}\) units to the `Gridder` output.
## v0.0.5
- Introduced this Changelog.
- Updated citations to include Ryan Loomis.
- Added the `dirty_image.get_dirty_image` routine, which includes Briggs robust weighting.
- Added assert statements to catch if the user chooses `cell_size` too coarsely relative to the spatial frequencies in the dataset.
- Implemented preliminary power spectral density loss functions.
- The image cube is now natively stored (and optimized) using the natural logarithm of the pixel values. This de facto enforces positivity on all pixel values.
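The log-pixel storage above is a standard positivity trick: optimize the logarithm of the pixel values, so that exponentiating always yields positive pixels. A sketch:

```python
import numpy as np

# the free parameters are log-pixel values, unconstrained real numbers
log_cube = np.random.default_rng(42).normal(size=(16, 16))

# exponentiating guarantees every pixel is positive, whatever the parameters
cube = np.exp(log_cube)
```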
- Changed the entropy function to follow EHT-IV.
## v0.0.4
- Made the package `pip` installable.