AIPS HELP file for MAPPR in 31DEC24
As of Wed Oct 9 3:47:06 2024
MAPPR: Simplified access to IMAGR task
INPUTS
INNAME Input UV data (name)
INCLASS Input UV data (class)
INSEQ Input UV data (seq. #)
INDISK Input UV data disk drive #
TIMERANG Time range to use
BCHAN 0.0 8192.0 Low freq. channel 0 for cont.
ECHAN 0.0 8192.0 Highest freq channel
NCHAV Number of chan. to average.
CHINC Channel incr. between maps.
BIF First IF in average.
EIF Last IF in average.
OUTNAME Output image name (name)
OUTDISK Output image disk drive #
OUTSEQ -1.0 9999.0 Output seq. no.
CELLSIZE 1.E-12 (X,Y) size of grid in asec
IMSIZE 0.0 8192. Minimum image size
MAPSHIFT RA,DEC shift (asec)
UVTAPER 0. (U,V) Gaussian taper
units are kilo-lambda
ZEROSP 0-spacing fluxes and weights
SEE HELP!!
UVWTFN UV dist. weight function
blank => uniform
ROBUST Robustness power: -5 -> pure
uniform weights, 5 => natural
NITER * 0.0 Maximum # of Clean components
NBOXES 0.0 50.0 Number of boxes for Clean
NB: field 1 only.
CLBOX -2.0 8192.0 Four coordinates for each box
GAIN * 0.0 2.0 Clean loop gain
FLUX * Minimum Clean component (Jy)
BMAJ * -999.9 FWHM(asec) major axis Clean
* restoring beam.
BMIN * -999.9 FWHM(asec) minor axis Clean
* restoring beam.
BPA * -360.0 360.0 Clean beam position angle
FACTOR * -5.0 5.0 Speedup factor see HELP
DOTV * -1.0 512.0 Display residuals on TV ?
Start with field = DOTV
BADDISK -1.0 1000.0 Disks to avoid for scratch.
HELP SECTION
MAPPR
Type: Procedure
Use: MAPPR is a simplified interface to the main imaging task IMAGR.
It limits IMAGR to imaging a single field using pre-calibrated,
single-source data. It does not offer the multi-resolution and
SDI methods of IMAGR nor any of the wide-field options. It
retains all of the TV options of IMAGR.
See the help and explain files of IMAGR for details and for
options not available with this procedure.
Adverbs:
INNAME.....Input UV data file (name). Standard defaults.
INCLASS....Input UV data file (class). Standard defaults.
INSEQ......Input UV data file (seq. #). 0 => highest.
INDISK.....Input UV data file disk drive #. 0 => any.
TIMERANG...Time range of the data to be used. In order: start day,
hour, min, sec, end day, hour, min, sec. Days are relative
to the reference date.
BCHAN......First channel number to image, 0=>1. Channel numbers are 1
relative as defined in the input data file.
ECHAN......Highest channel number to include in the image,
0 => max. The actual number of output channels will be
(ECHAN-BCHAN+1-NCHAV)/CHINC + 1
For example, BCHAN=1, ECHAN=10, NCHAV=2, CHINC=2 gives
(10-1+1-2)/2 + 1 = 5 output channels. Thus, ECHAN is the
highest input channel averaged into the output and is the
highest output channel only if NCHAV and CHINC are 1.
NCHAV......NCHAV is the number of channels to be averaged together
in the gridding process. 0 => 1. If this value is less
than the total number of channels, then a multi-channel
image will result.
CHINC......Number of input channels to skip between images. 0 => 1
BIF........The lowest numbered IF to include. Multiple IFs can be
included in a bandwidth synthesis average. 0 => 1.
EIF........The highest numbered IF to include. 0 =>highest.
Note: not all data sets will have IFs.
OUTNAME....Output image name (name). Standard defaults.
OUTDISK....The disk drive # of output images. 0 => highest with space
Image and Beam go on same disk. Note: OUTCLASS='xCLnnn'
where x=Stokes, nnn=field number and 'xBMnnn' is the beam
CLASS. If NITER=0, OUTCLASS='xIMnnn'
OUTSEQ.....Output sequence number. 0 => highest to produce unique maps
and beam. Note that this will produce, potentially, maps
and beams with a variety of sequence numbers depending on
what files are already on disk. If OUTSEQ > 0, all images
and beams for all fields are assigned that sequence number
and pre-existing files are reused. An error will occur if
the pre-existing files are of different sizes than the ones
currently being requested.
CELLSIZE...(X,Y) pixel separation in asec.
IMSIZE.....(X,Y) The minimum desired size of the image.
MAPSHIFT...RA,DEC shift of the phase center of the image from the
tangent point of the uv data in asec. Map center = tangent
point + shift. A positive MAPSHIFT(1) shifts the map
center to the east. NOTE: MAPSHIFT(1) is a shift in RA
scaled by cos(Dec_0) as
RA_new = RA_0 + MAPSHIFT(1) / cos(Dec_0)
where _0 denotes the tangent point in the uv data. The
declination shift of the map center from the tangent point
is Dec_new = Dec_0 + MAPSHIFT(2); a positive MAPSHIFT(2)
shifts the map center to the north.
UVTAPER....(U,V) Gaussian taper (kilo-lambda) at 30 percent level
ZEROSP.....(1)= zero spacing Stokes I flux density. Zero spacing flux
is placed at the center of FIELD 1.
(2)= zero spacing Stokes Q flux density
(3)= zero spacing Stokes U flux density
(4)= zero spacing Stokes V flux density
(5)= weight for zero spacing flux.
Both ZEROSP(1) and ZEROSP(5) must be > 0 to apply this
option. The zero-spacing data sample is appended to the
end of the input data set and participates in any uniform
weighting, gridding, etc. in the same way any other sample
does. ******* NOTE THAT THIS IS NOT THE SAME AS OTHER
IMAGING TASKS' USAGE OF ZEROSP. ****************
UVWTFN.....Weighting function of (u-v) plane in 2 character code. If
the first character is N use "natural" weighting (no
variation due to local density), otherwise use "uniform"
weighting (weights are modified by local density of samples
- first letter C - or weights under control of UVSIZE,
UVBOX, UVBXFN, and ROBUST). If the second character is the
letter O, all weights are set to one, and if the second
letter is V, all weights are set to their fourth root,
before any use is made of them.
ROBUST.....Briggs' "robustness" parameter. "Uniform" weights are
tempered by a constant being added to the local density of
weights. ROBUST = -4 is nearly pure uniform weighting,
ROBUST = +4 is nearly pure natural weighting. Use of this
option requires a second array in the "AP" memory and may
therefore force the data to be sorted. The option is
turned off if ROBUST < -7 and uniform weighting is turned
off if ROBUST > 7. See HELP ROBUST - the AIPS ROBUST
differs numerically from that of Briggs.
NITER......Clean iteration limit. 0 => no Cleaning.
NBOXES.....Number (<=50) of rectangular search boxes to search on the
first field; the Clean window in all other fields is given
by FLDSIZE. 0 => use FLDSIZE to determine windows on
field 1.
CLBOX......A 4x50 array with the parameters of each box.
0 => use window specified in FLDSIZE.
CLBOX(1,i)=-1 indicates a circle of radius CLBOX(2,i)
pixels centered on (CLBOX(3,i), CLBOX(4,i)).
CLBOX(1,i) >= 0 indicates a rectangular box. NOTE: CLBOX
is not used to determine the size of the image to be made;
IMSIZE or FLDSIZE must set the size of the image.
GAIN.......The Clean loop gain. 0 => 0.10.
FLUX.......Stop Clean when abs(resid. image max) < FLUX (Jy). If
FLUX < 0, Clean stops at the first negative Clean component.
BMAJ.......The FWHM (asec) major axis of the restoring beam. If 0,
the value is obtained from fitting to the dirty beam. If
< 0, the output will contain the residual image. This is for the
point-source resolution. It will be corrected to the
other resolutions (if any).
BMIN.......The FWHM (asec) minor axis of the restoring beam.
BPA........The position angle in the unrotated image of BMAJ.
FACTOR.....FACTOR>0 causes deeper Clean in each major cycle, speeding
Clean, maybe "eating" extended structure.
FACTOR=0 => the normal Clark Clean. FACTOR=-0.3 is good for
deep Cleans of extended structure.
DOTV.......Display residuals on TV channel 1. > 0.5 => display field
number DOTV initially. Can be changed interactively and by
TELL. When using this option, you may interact with the
residual images, selecting which field is examined in what
window, resetting the Clean boxes, and stopping the Cleaning
of the current channel. IMAGR uses DOTV in the form of the
nearest integer; set it only to integer values.
BADDISK....This array contains the numbers of disks on which it is
desired that scratch files not be located. BADDISK has no
effect on input and output maps.
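Example:
   A typical continuum MAPPR setup might look like the following.
   All names and numerical values here are hypothetical and must be
   adjusted to your own data; see the adverb descriptions above.
      INNAME='MYSOURCE'; INCLASS='SPLIT'; INSEQ=0; INDISK=1
      OUTNAME='MYSOURCE'; OUTDISK=1; OUTSEQ=0
      CELLSIZE=1,1; IMSIZE=512,512
      UVWTFN=''; ROBUST=0; UVTAPER=0
      NITER=1000; GAIN=0.1; FLUX=0.001; DOTV=1
   Then run the procedure and examine the resulting Clean image and
   beam; the EXPLAIN section below describes the underlying IMAGR
   options in detail.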
EXPLAIN SECTION
***************************************************************
Note that not all of these capabilities are offered by
the MAPPR procedure.
***************************************************************
IMAGR: Wide-field or wide-band imaging/Cleaning task.
Documenter: W. D. Cotton, E. W. Greisen NRAO
Related Programs: OHGEO, UBAVG, SCMAP, SCIMG
This task does a visibility-based Clean and contains a number of
optional corrections and enhancements which are useful for imaging a
wide field of view and/or using multiple or wide frequency bands. In
addition, it has revised options to control the data weighting prior
to the imaging. The draft explain file below contains 5 sections: A.
the new data weighting options, B. the new TV display options, C. the
extended-source Cleaning options, D. the wide-field, wide-band
options, and E. the MX explain file with a few modifications.
Data input to IMAGR can be in any sort order and can even be in
need of calibration --- IMAGR will do the operations of SPLIT if
requested. IMAGR will sort the data to avoid having to do multiple
passes through it for weighting, gridding, or Clean component model
subtraction and will tell you when it does sort the data. If you are
going to use the same data set for multiple runs of IMAGR, it would be
more efficient to run SPLIT and UVSRT in advance to avoid repeated needs
to sort and to apply calibration to the data.
The history file records most input adverbs plus some result
parameters. Adverbs that have values of 0 or blank are usually omitted
for brevity.
****************************************************************
**************** NVSS WWW Server assistance ****************
****************************************************************
There is a new Web service for AIPS users making images from low
frequency radio interferometer data. This Web service
(http://www.cv.nrao.edu/~bcotton/NVSSAIPS.html)
uses the NRAO/VLA Sky Survey (NVSS) catalog to produce a list of
potentially confusing sources around a specified celestial position.
The result is in the form of an AIPS RUN file that may be cut and pasted
into an AIPS window that sets up the input ADVERBS for AIPS task IMAGR
to add extra fields around these confusing sources. Selection of sources
is by region of the sky and peak 20 cm flux density after correction for
the antenna gain at the frequency of observation.
The NVSS survey was made with the NRAO VLA array at 20 cm wavelength
(1.4 GHz) with a resolution of 45", full sky coverage north of -40
degrees declination, and a limiting source peak brightness of about 2.5
mJy (worse near the galactic plane). The NVSS is nearing completion and
the catalog contains more than 1.8 million sources.
Bill Cotton, Jim Condon
****************************************************************
****************** DATA WEIGHTING OPTIONS ******************
****************************************************************
In the following discussion, we will assume that the input data
have weights Wi(j) = K / (sigma(j)**2) for the j'th sample, where K is a
constant and sigma is the uncertainty in the visibility. If you have
multiple sub-arrays or have otherwise combined observations from
multiple interferometers, you should adjust the data weights in DBCON or
WTMOD to make this assumption be valid with the same value of K for each
sub-array and interferometer. In the weighting process, each data
weight is changed from Wi(j) to Wo(j). In so doing, the noise in the
output image (for point-source detection) is increased by a factor of
            sqrt { sum [Wo(j) * Wo(j) / Wi(j)] * sum [Wi(j)] }
      WTN = --------------------------------------------------
                             sum [Wo(j)]
IMAGR evaluates this factor, reports it and places it as keyword WTNOISE
in the output image headers. Note that weighting always increases the
noise, but is usually done to improve the shape of the point-source
response (dirty beam pattern).
Data re-weighting consists of three parts: (1) brute modifications
of the input weights, (2) modification of the input weights as a
function of the local density of data weights and/or samples, and (3)
modification of the data weights as a function of position in the
Fourier plane (i.e., taper). The first, and most brutal, of the "brute"
modifications is done by discarding portions of the data via the UVRANGE
and GUARD adverbs. Data samples lost in this way are not included in
the reported WTN. The brute modification that does appear in the WTN
reported is controlled by the second character in UVWTFN. The second
character causes the weights to be modified before the local density of
weights is measured (if it is measured) and again before they are used.
If the second character is the letter 'V', the fourth root of the
weights is taken; if it is the letter 'S' the square root of the weights
is taken; and if it is the letter 'O' all weights are taken as 1.0.
Thus,
W1(j) = Wi(j) ** p
where p = 0, .25, .5, 1.0 as the second character of UVWTFN is 'O', 'V',
'S', and anything else, resp.
The first character in UVWTFN controls how step (2) is applied. If
the first character is the letter 'N', then "natural" weighting is done
and
W2(j) = W1(j)
If the first character of UVWTFN is the letter 'C', then the weights are
modified by the local density of samples. If it is neither 'N' nor 'C',
then "uniform weighting" is done in which the weights are modified by
the "local density" of data weights. Thus, let P(j) = 1 for mode C and
P(j) = W1(j) for uniform weights. To estimate the local density of P we
must define both local and density. This is done by defining a summing
array of size UVSIZE(1) by UVSIZE(2) to encompass the relevant area of
the UV plane (i.e. 1/CELLSIZE in appropriate units). The default UVSIZE
is the numerical size of the largest field being imaged, but you can set
any value you like in the adverb. UVSIZE does not have to be a power of
two and has virtually no upper limit imposed by the software. Of
course, a really large value will cause the weighting to require a
really large disk file and to take a long time. So what value of UVSIZE
should you use? In the limit of very large UVSIZE, only one data sample
would fall into each tiny cell in the array. If single cells determine
the local "density", then, in this limit, uniform and natural weighting
are the same. The same result occurs in the opposite extreme, UVSIZE=1.
In between, different counts of P apply to different points. If you use
a small UVSIZE, then the counting cells are large and it is likely that
data samples which are fairly close to each other may get rather
different density weights depending on exactly which cell they fall
into.
One way to "have your cake and eat it too" - paying with additional
computations - is to define small cells in the array with a large
UVSIZE, but to count each data sample's P value over a box 2 * UVBOX + 1
cells on a side centered on the cell in which the sample occurs. The
counting of P can be weighted by a square or circular function so that a
sample counts less the further one gets away from it. A set of simple
functions is provided with parameter UVBXFN and includes pill box
(constant), linear, exponential, and Gaussian. (See HELP UVBXFN for the
exact form of these functions.) This combination of parameters (large
UVSIZE, UVBOX > 0, and UVBXFN) allows you to make a fairly subtle
estimate of the local density of weights or samples (i.e. of P).
Yet another adverb affects the output weights from uniform
weighting. The "robustness" parameter, devised by Daniel Briggs, is
used to temper the effects of uniform weighting. Thus, the weight
actually used for sample i is
                       W1(i)
      W2(i) = ----------------------
                  Sum[P](i) + S
where Sum[P](i) is the weighted sum of P(j) (either W1(j) or 1) over all
samples in the neighborhood of the cell occupied by the i'th sample and
S is a function of ROBUST. We use
             (10**(ROBUST)) * < Sum[P] >
      S   =  ---------------------------
                         5.0
where < > denotes the average of the local sums over all visibilities.
Note that if ROBUST > 4, the weight sums have little effect and the
weighting is close to natural weighting. If ROBUST < -4, then S is very
small and we have virtually pure uniform weighting with no tempering.
One obvious virtue of this temperance is to reduce the effects of the
exact locations of samples with respect to counting cell boundaries. If
a track just clips the corner of a counting cell, the one sample
falling in that cell will get much larger weight than nearby samples on
that track which fall multiply in adjacent counting cells. The noise in
the up-weighted sample will appear as a sine wave in the dirty image and
will not Clean away. A modest amount of robustness seems to reduce the
noise added by weighting significantly while causing only modest
increases in the undesirable near-in sidelobes and central lobe width of
the dirty beam.
Finally, in the third stage, we apply additional weighting based on
the position of the sample in the UV plane. At present, IMAGR only
applies a Gaussian taper in U and in V. Thus,
Wo(i) = W2(i) * exp (Cu U(i)**2 + Cv V(i)**2)
where
Cu = log (0.3) / (UVTAPER(1)*1000) ** 2
Cv = log (0.3) / (UVTAPER(2)*1000) ** 2
and U(i) and V(i) are the coordinates of the i'th data sample in the UV
plane in wavelengths. With normal VLA-like data sampling, uniform
weighting lowers the weights of the short spacings while tapering
lowers the weights of long spacings. Thus a combination of tapering and
uniform weighting can be used to reduce any severe clumping in the UV
plane (e.g. along the arms of the array) while leaving most of the other
weights virtually unchanged.
We recommend Daniel Briggs' thesis for a detailed discussion of
many of these aspects of weighting in imaging.
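   As a concrete illustration of the options above, a moderately
robust, lightly tapered weighting might be requested with adverb
settings such as the following (the numbers are only examples and
should be matched to your uv coverage):
      UVWTFN=''; ROBUST=0; UVTAPER=150,150
Here the blank UVWTFN requests uniform weighting, ROBUST=0 tempers it
part of the way toward natural weighting, and the taper lowers the
data weights to 30 percent at 150 kilo-lambda in U and V.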
****************************************************************
*********************** TV DISPLAY *************************
****************************************************************
The TV display may be used to follow the progress of your imaging
and Cleaning. If you set DOTV > 0, the dirty image of field DOTV will
be displayed before Clean begins and the residual image of the selected
field will be displayed after each major Clean cycle. When the image is
displayed, a menu of interactive options is also displayed. Move the TV
cursor to a desired option, and press TV button A, B, or C. To get help
on the message screen about an option, move the cursor to that option
and press TV button D. The selected option is highlighted in a
different color. If you select no options in 30 seconds, IMAGR will
continue without you until the end of the next major cycle. Interactive
options do not appear on the final display which is either of the dirty
image (NITER = 0) or the Clean image with components restored. If you
forget to turn on the TV display - or turn it off after starting IMAGR -
you may turn it back on with TELL IMAGR (say SHOW IMAGR in AIPS to see
all the TELL parameters that you may send to IMAGR after it has
started).
When OVERLAP is 2, the Clean process follows a somewhat different
path. At the beginning, the end, and whenever instructed by the user,
IMAGR makes an image of all fields using the current residual uv data.
Otherwise, it makes an image of one field, does one major cycle on that
field, subtracts the new Clean components from the uv residual data and
then repeats the process with another field. The TV display makes it
clear that multiple fields are accessible during that TV display session,
and it is important to set Clean windows while they are accessible, since
the selection of fields to image depends on the extrema and average
absolute value in the Clean windows only. When only one field is
accessible, the
field number is displayed in the menu and the option to re-image all
fields is offered. That option will cause the next TV display to have
all fields accessible and will cause the correct extrema and averages to
be known for all fields at the current stage of the Cleaning.
Otherwise, they become less than perfectly reliable estimates. When
IMAGR is run in an interactive mode (> 64 fields or DOWAIT=true in GO),
you may force IMAGR to use a field other than the one it selected using
the "FORCE A FIELD" option in the TV menu. This lets you reset boxes
and get correct extrema for one or more fields without having to
recompute all of the fields. If you have specified Clean component
filtering (IMAGRPRM(8 and 9)), then the FILTER COMPS menu item also
causes the images to be recomputed after the components have been
filtered.
The interactive options appear in two or three columns. The left
column offers:
| OFFZOOM | turn off any zoom magnification
| OFFTRAN | turn off any black & white enhancement
| OFFCOLOR | turn off any pseudo-coloring
| TVFIDDLE | as in AIPS, allows zoom, pseudo-color contours or
black and white enhancement
| TVTRAN | black and white enhancement as in AIPS
| TVPSEUDO | many pseudo-colorings as in AIPS
| TVFLAME | flame-like pseudo-colorings as in AIPS
| TVZOOM | interactive zoom magnification and center
| CURVALUE | display image intensity, pixel x/y at cursor
| SET WINDOW | set sub-image display window
| RESET WINDOW | re-set display window to full image
| TVBOX | set number of Clean boxes and their
parameters interactively
| REBOX | modify and add Clean boxes for this field
When a field is displayed, the pixel increments are chosen to load the
full field to the TV screen. The TV window is even forced to be bigger
to show the image if needed and small fields are interpolated by up to a
factor of 3. The Clean boxes for the field are displayed in a graphics
overlay. You may select a smaller sub-image of the field for display in
greater detail, but IMAGR does insist that the sub-image includes all
Clean boxes for the field. You may use TVBOX or REBOX to create and
modify the Clean boxes interactively. Note that these set the number as
well as the type and parameters of the boxes. This is currently one of
two interactive ways to set Clean boxes for fields numbered > 1. The
other is the verb FILEBOX in AIPS.
TVBOX: To set the number, type, and parameters of the Clean boxes for
the displayed field with a TV graphics display of the boxes as they are
being set. The terminal will issue instructions. While setting the
lower left corner of each box for the first time, buttons A and B will
mark the corner and switch to setting the upper right corner of the box.
Button C will change the rectangular box to a circular one and button D
will exit. Similarly, while first setting the center of a circular box,
buttons A and B switch to setting the radius, C switches back to a
rectangular box, and D exits deleting that incomplete box. While
setting or re-setting the upper right corner or radius of the box or
re-setting the lower left corner or center, button A marks the current
corner and switches to the other corner (or marks and switches between
radius and center), button B marks the current corner and switches to
the next (new) box, button C marks the current corner and switches to a
search mode leading to the resetting of a previous box, and button D
exits keeping the current and previous boxes with their current
settings. In search mode, move the cursor to any lower left or upper
right corner of any already set rectangular box or the center or any
point on the circumference of an already set circular box and press
button A or B to reset that corner or push button C to go on to the next
box. As usual button D exits. REBOX is TVBOX started with the current
boxes in a resetting mode.
The right-hand columns offer the options:
| CONTINUE CLEAN | Continue Cleaning now, not waiting for the timer
to expire
| STOP CLEANING | Declare the Clean of this channel done, restore
components and write the output
| TURN OFF DOTV | Resume Cleaning now and turn off future TV
displays
| ABORT TASK | Abort the task, ending things as rapidly as
possible.
| SELECT FIELD nn | Display field nn allowing its Clean boxes to be
altered
| SELECT NEW FIELD | Prompt for new field number to be displayed
allowing its Clean boxes to be altered
(> 64 fields)
| THIS IS FLD nn | The only field number accessible at this time is
nn.
| FORCE SDI CLEAN | Force all following Cleans to use SDI method
| FORCE BGC CLEAN | Force all following Cleans to use Clark method
| FILTER COMPS | Exit TV and filter all Clean components, then
re-compute all fields using new residual data
| FORCE A FIELD | Prompt for a field number, exit TV, re-compute
and display that field with current residual data
(if needed) and then Clean that field (when
OVERLAP=2 and DOWAIT true)
| REMAKE IMAGES | Exit TV and re-compute all fields using the
current residual uv data (when OVERLAP=2)
The SELECT FIELD nn options appear only if there is more than 1 field
and only NFIELD of them appear. If you turn off the TV display, you may
restart it with TELL from AIPS.
****************************************************************
****************** EXTENDED SOURCE OPTIONS *****************
****************************************************************
Clean normally uses a point-source component model and finds one
component at a time. For regions consisting of small-diameter objects
this has been found to work well. But, for extended objects, the
selection of a point at one pixel biases Clean against selecting
points at the adjacent pixels. In the end, a corrugation has been
found in which the image is lumpy on a rectangular pattern separated
by a few pixels (Schwarz, U. J. 1984, in Indirect Imaging, ed. J. A.
Roberts, p 255). In addition, extended emission over a large area may
contribute significantly to the total flux, but have low
signal-to-noise on a pixel-by-pixel basis at full resolution. This
poor S/N further confounds the usual Clean algorithm. There are two
solutions proposed for the first problem, one of which also addresses
the latter. These are the SDI method found in
Steer, D.G., Dewdney, P.E., Ito, M.R. 1984, "Enhancements to the
Deconvolution Algorithm 'CLEAN'", Astron.Ap.,137, 159.
and the multi-resolution Clean, one early variant of which is described
in
Wakker, B.P., Schwarz, U.J. 1988, "The Multi-Resolution Clean and
its application to the short-spacing problem in interferometry",
Astron.Ap., 200, 312.
IMAGR offers variants of both of these methods.
The SDI Clean proposes to avoid the bias against adjacent pixels by
taking every pixel above a certain level as a component. In the case
of a completely uniform "plateau" in the residual image, this would
uniformly lower the height of the plateau in each major cycle. The
difficulty comes around the edges and irregularities in the plateau.
The edge pixels have fewer strong neighbors than the central pixels
and so less is subtracted from them than from the central pixels. The
result is a bright edge around the plateau in the next residual image.
The AIPS task SDCLN uses an iterative method to estimate the different
loop gains to be applied to the pixels to avoid the bright edges.
IMAGR uses a less expensive and somewhat less effective method. The
intent in IMAGR is to Clean with a normal Clark variant of the Hogbom
Clean until a significant number of pixels in the residual image are
essentially "equal". This is expressed as a crowding of the uppermost
levels of the histogram so that the ratio of the peak residual flux to
the lowest level that can be loaded by the BGC Clean is near 1.0.
IMAGRPRM(4) > 0 specifies that SDI Clean will be used when this ratio
is less than (1 + GAIN * IMAGRPRM(4)); a typical value might be
IMAGRPRM(4) = 3. If the ratio becomes greater than this limit at a
later step, the BGC Clean will be used until the ratio again falls
below the limit. The TV allows you to force the Clean into either BGC
or SDI if you started by giving IMAGRPRM(4) > 0. Once forced, it
remains in the chosen method until forced into the other.
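   For example, to let IMAGR switch into the SDI Clean when the
residual histogram becomes crowded, one might set (the value 3 is
just the typical starting point suggested above):
      IMAGRPRM(4)=3
With GAIN=0.1 this switches to the SDI method whenever the ratio of
the peak residual to the lowest level loadable by the BGC Clean falls
below 1 + 0.1*3 = 1.3.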
The MR Clean attempts to avoid the corrugation and S/N issues by using
more than one component diameter in the modeling. In the early 1970's
I experimented with using a Gaussian as a component model. This
worked well so long as the image contained no sources smaller than the
Gaussian. Unfortunately, the sky always contains point sources and
the method found these and "painted bulls eyes around them". (The
bulls eyes are effectively a sin(x)/x function trying to represent a
small thing as a sum of large things.) The Wakker and Schwarz MR
Clean used a smoothed image and beam and a difference image and
difference beam, doing a fairly normal point-model Clean on each but
appropriately subtracting the components found in one image from the
other at every iteration. In IMAGR, the MR Clean is truly
multi-resolution, seeking components of intrinsically different
diameters. For component i of diameter Di, a dirty image is formed
by convolving a full-resolution dirty image with a Gaussian Gi of
diameter Di. This is actually implemented just by making a normal
image with an added taper. This dirty image has a point-spread
function (dirty point beam) which is the full resolution dirty beam
convolved with Gi. However, the response in the dirty image i to a
Gaussian component Gi is this dirty point spread function convolved a
second time by Gi. Thus
DirtyImage_i = DirtyImage_0 * Gi
BeamImage_i = BeamImage_0 * Gi * Gi
where Gi is a Gaussian of diameter Di, subscript 0 refers to the full
resolution image, and * is a convolution.
IMAGR uses its large number of fields to implement the MR Clean. If
you ask for NFIELD fields and NGAUSS Gaussian components, IMAGR will
make and Clean NFIELD*NGAUSS fields, the first NFIELD with width
WGAUSS(1), the next NFIELD with width WGAUSS(2), etc. The ordinary
parameters such as FLDSIZE, RASHIFT, DECSHIFT are entered only for
fields 1-NFIELD and are then replicated for the corresponding fields
at the other resolutions. The Clean boxes may be entered for fields
1-NFIELD and replicated. If BOXFILE is used to enter Clean boxes for
a field number > NFIELD then the boxes from the corresponding field <=
NFIELD are not replicated to that field. If you specify BMAJ and
BMIN, IMAGR computes the correct values to use for the various
component diameters. If you give BMAJ=0, IMAGR fits the beam
parameters after one convolution with Gi before recomputing the beam
with two convolutions. The MR Clean might attempt to work in other
modes, but OVERLAP=2 and DO3DIMAG=TRUE are strongly recommended. If
you set DOWAIT=TRUE before starting IMAGR, you will have the desirable
option of forcing the Clean to use a field of your choosing rather
than the one with the peak flux.
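   A minimal sketch of such a multi-resolution run with a single sky
field might be (the Gaussian widths, assumed here to be FWHM in asec,
are purely illustrative):
      NFIELD=1; NGAUSS=3; WGAUSS=2,8,30
      OVERLAP=2; DO3DIMAG=TRUE; DOWAIT=TRUE
Per the description above, IMAGR then makes and Cleans NFIELD*NGAUSS
fields, replicating FLDSIZE and the shifts across the three
resolutions.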
The MR Clean is an experimental algorithm and has been provided with a
number of "knobs" to adjust its behavior. All the knobs use the ratio
of the current beam area (BMAJ(field)*BMIN(field)) to the minimum beam
area (R). These knobs are:
1. Show some preference to select higher resolution images (lower
R) for the next image to Clean. Otherwise a strong point with
some extended emission will be over Cleaned at low resolution,
forcing higher resolutions to correct many pixels:
IMAGRPRM(11) select which field to Clean using peak fluxes
(in Jy/beam) weighted by 1 / R**IMAGRPRM(11).
2. Reduce this preference to zero as one Cleans more and more of
the R > 1 fields:
IMAGRPRM(12) decrement the initial value of IMAGRPRM(11)
used above by IMAGRPRM(12) each time an R > 1 resolution
field is Cleaned until it is 0.
3. The lower resolution fields may easily over Clean creating zero
net flux from a mix of negative and positive areas. These then
have to be corrected with numerous high resolution Clean
steps. To use a lower loop gain for lower resolution:
IMAGRPRM(13) use gain = GAIN / R ** IMAGRPRM(13)
4. To avoid over Cleaning with lower resolution, one may also
Clean each major cycle less deeply with the FACTOR parameter.
To control FACTOR:
IMAGRPRM(14,15) use
factor = FACTOR + IMAGRPRM(14) * (1 - R ** IMAGRPRM(15))
Multi-resolution experimental control limits:
0 <= IMAGRPRM(11) <= 1.0 not changed by TELL Try 0.5
0 <= IMAGRPRM(12) <= 0.1 changed by TELL Try 0.03
0 <= IMAGRPRM(13) <= 1.0 changed by TELL Try 0.5
0 <= IMAGRPRM(14) <= 1.0 changed by TELL Try 0.1
0 <= IMAGRPRM(15) <= 1.0 changed by TELL Try 0.5
Note that IMAGRPRM(11-15) = 0 causes each Clean to be done on the
field with the highest Jy/point-source-beam with the same GAIN and
FACTOR.
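   For instance, the suggested starting values could be entered as
follows (these are just the "Try" values from the table above and
will usually need tuning for your source):
      IMAGRPRM(11)=0.5; IMAGRPRM(12)=0.03; IMAGRPRM(13)=0.5
      IMAGRPRM(14)=0.1; IMAGRPRM(15)=0.5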
The author (Eric Greisen) of this option conceived this current
algorithm while listening, at the AIPS++ Imaging Workshop in July
1999, to Mark Holdaway describe his work done with Tim Cornwell.
Their "multi-scale" Cleans, according to Mark's vu-graphs, use images
at various smoothings to determine which scale currently has the most
flux. The "cross-beams" used for subtraction indicate that extended
components are used in the modeling, so long as the (i,i) cross beam
is used for image i on itself (a point on which I was confused).
Their algorithms, as described, are done all in the image plane which
avoids some of the over-Cleaning problems, while IMAGR's is done by
the Cotton-Schwab scheme of subtracting the model visibilities in the
visibility plane and re-imaging.
****************************************************************
************ WIDE-FIELD, WIDE-BAND CORRECTIONS *************
****************************************************************
Frequency Dependent Primary beam corrections:
If multiple, widely-spaced frequencies are used in making a
single image, then the primary beam size can vary
significantly with frequency. This causes problems in Cleaning objects
far from the pointing center as the primary beam differences cause the
subtraction of the combined model from the visibility data at the
different frequencies to be incorrect. There will be residuals in the
visibility data that cannot be Cleaned as the data does not correspond
to a possible sky brightness distribution.
If IMAGRPRM(1) is larger than 0 then a correction is made in the
subtraction to remove the effects of the frequency dependence of the
primary beam. The primary beam is assumed to be a uniformly illuminated
disk of diameter IMAGRPRM(1) meters. This correction is made out to
the 5 percent power point of the beam with a flat correction further out.
Note: this correction is only for the relative primary beam to
correct to a common frequency and DOES NOT correct for the primary beam
pattern at this frequency.
Simple Correction for Spectral Index:
If the sources observed do not have a flat spectrum then the source
spectrum will have effects on the Cleaning of a similar nature to the
frequency dependent primary beam problem described above. This problem
does not depend on position in the field, except in that the spectral
index can vary across the field. In general the spectral index does
vary across an image, but it is usually closer
to -.7 than to 0. To the degree that the structure in the field can be
characterized by a single spectral index the amplitudes of the data can
be scaled to the average frequency.
Before imaging the amplitudes of the uv data are scaled to the
average frequency using a spectral index of IMAGRPRM(2). For
optically thin synchrotron sources this spectral index is typically
between -0.6 and -1.0. This correction cannot remove the effects of
variable spectral index but allows a single correction.
Error in the Assumed Central Frequency:
If the frequency used to compute the u, v, and w terms is in error
then there will be a mis-scaling of the image due to this error.
Central frequencies are frequently computed on the basis of unrealistic
models of the bandpass shape.
If IMAGRPRM(3) is larger than 0 it is assumed to be a frequency
scaling factor for the u, v, and w that is to be applied before imaging.
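   As an illustration, for VLA data the first two corrections might
be enabled with the settings below (25 is the VLA antenna diameter in
meters; the spectral index of -0.7 is only a typical optically thin
value, and IMAGRPRM(3) is left at its default of no rescaling):
      IMAGRPRM(1)=25; IMAGRPRM(2)=-0.7; IMAGRPRM(3)=0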
Array Mis-orientation Effects
Images made with a coplanar array not oriented towards the
instrumental zenith will have a distortion of the geometry which
increases in severity away from the phase tracking center. For
non-coplanar arrays the image is distorted rather than just the
geometry. VLA snapshots are misaligned coplanar arrays whereas VLA
synthesis images cannot be considered to be made with a coplanar array.
Images made with misaligned coplanar arrays can be corrected using
task OHGEO to remove the effects of this misalignment. This correction
requires the knowledge of the observing geometry; in particular, the
average parallactic and zenith angles. IMAGR computes these values and
leaves them as header keywords where OHGEO can use them.
Non-coplanar Effects
WARNING: THIS OPTION MAY NOT EVEN WORK. TREAT IT WITH CARE.
***** For the moment, we have disallowed this option *****
If the observing array is not confined to a plane during the
observations then images produced from a 2-D FFT will be increasingly
distorted away from the phase center. If this is a serious problem then
using a direct Fourier transform (DFT) to image onto the celestial
sphere can eliminate this problem. However, THIS IS VERY SSSLLLOOOWWW.
In addition, the dirty beam is no longer a stationary function of
position in the field and a very conservative visibility based Clean is
needed making this EVEN SLOWER.
If IMAGRPRM(4) is greater than 0 then imaging and model
subtraction will be done using a DFT. Be prepared to wait. There is
also a limit of 1 field in this mode. There are several other changes
in the Cleaning for this mode; especially, the beam patch used is
fixed at a small value.
Since the speed of the DFT imaging is directly proportional to the
number of visibilities, time averaging is advised. Task UBAVG can be
used to time average to the maximum possible extent without distorting a
specified field of view.
Scaling residuals
In general the units of the residuals are different from those of
the restored components, each being Jy per its own beam area. If the dirty
beam area is similar to the restoring beam area then this effect is
negligible. Similarly, if the Clean has proceeded well into the noise
then this difference is of little consequence. However, if there is
significant un-Cleaned flux left in the image then this difference may
be important.
If IMAGRPRM(5) > 0 then IMAGR will attempt to scale the residuals
to the same units as the restored components. The principal
difficulty is determining the effective area of the dirty beam.
Operationally this is done inside a box centered on the peak in the
beam with half-width IMAGRPRM(6) in x and IMAGRPRM(7) in y.
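   A sketch of requesting this residual scaling, with a hypothetical
half-width of 10 pixels for the box used to measure the dirty beam
area in each direction:
      IMAGRPRM(5)=1; IMAGRPRM(6)=10; IMAGRPRM(7)=10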
****************************************************************
*********** BASIC MX EXPLAIN FILE ALTERED SOME *************
****************************************************************
IMAGR: Task which makes and Cleans a map from UV data on disk
using AP
DOCUMENTERS: W. Cotton NRAO (mostly from UVMAP and APCLN)
G. Langston NRAO E. W. Greisen, NRAO
RELATED PROGRAMS: UVLOD,UVSRT,APCLN,UVMAP,BLOAT,HORUS,RSTOR
PURPOSE
IMAGR combines the functions of mapping, Cleaning and
subtracting components from the un-gridded uv data (i.e. the
functions of UVMAP, APCLN and UVSUB). Because the Clean
components are subtracted from the un-gridded data, the entire
region of a map can be Cleaned; as opposed to the method used
in APCLN which can only Clean a quarter of the map area. Data
in an arbitrary sort order may be deconvolved.
IMAGR permits up to 512 independent fields in the antenna
beam and 46655 frequency channels to be mapped and Cleaned in
one execution. Multi-band data can be gridded together before
mapping. If the machine crashes before the end of the execution
IMAGR should be fairly easily restartable.
IMAGR is recommended over UVMAP and APCLN for the following
problems:
(1) Snap-shot observations with a small number of visibility
points run much faster with IMAGR than with UVMAP and APCLN when only
the region with the source is mapped. With IMAGR there is no
need to confine the source to the inner quarter area of the
mapped region, although the source should not extend to the
boundary of the field of view.
(2) For observations at low frequency and at high sensitivity.
Radio emission is often detected over the entire primary beam.
It is then prohibitively expensive to make and Clean one large
map. IMAGR will permit you to choose up to 512 rectangular fields
in the sky, map each and then simultaneously Clean the entire
set. It is often possible to choose a small number of fields
which contain virtually all of the radio emission so that the
total area processed by IMAGR will be relatively small. IMAGR
is particularly valuable for radio maps which contain a few
"confusing" sources at large angular distance from the source of
interest. In order to determine the appropriate parameters for
IMAGR, first make a very low resolution map in order to
determine the approximate location and size of each of the
rectangular fields. Insert the appropriate parameters of the
fields into IMAGR and run the task at the desired resolution.
(3) Only IMAGR can produce and Clean 8192x8192 maps at the
present time. Make sure you have enough disk space to make
these maps!
(4) Use IMAGR if you need >1000:1 dynamic range for a
relatively extended source. By subtracting components from the
un-gridded (u-v) data, aliasing of side-lobes inside the field
of view is avoided. Finally, components very far from the phase
center but near the field center will be subtracted from the
(u-v) data with the proper w-phase terms.
(5) IMAGR can average frequency channels in the gridding
process by gridding each channel independently onto the same
grid; this reduces the delay smearing problem in the maps to the
amount due to the individual channel rather than the total
bandwidth. This option can also be used to smooth line maps as
the number of channels to grid together and the channel
increment are independent.
(6) Only one Stokes' Parameter map can be made with one
execution. If you must have identical coverage for your I, Q,
U or V maps, use UVMAP. However, any differences among the
different Stokes' parameters are usually minimal.
IMAGR is superior to obsolete and now deleted tasks MX and WFCLN in its
(1) ability to calibrate input data,
(2) advanced data weighting options,
(3) truly interactive DOTV option,
(4) wide-field correction options,
(5) 3D imaging,
(6) SDI method Cleaning option,
(7) Clean component filtering, and
(8) Multi-resolution option.
ADVERBS PECULIAR TO IMAGR
Most of the adverbs used by IMAGR are duplicates of those
used by UVMAP and APCLN and need no further explanation. The
adverbs peculiar to IMAGR are:
IN2NAME, IN2CLASS, IN2SEQ, IN2DISK
IMAGR keeps a scratch file with the current uv data with the
current list of components subtracted. This file is cataloged and may
be examined after IMAGR is run to look for questionable data points.
Unlike MX, IMAGR cannot be restarted with the work file although the
disk file itself may be re-used for new data or for a restart in which
components 1-BCOMP are subtracted from the input (by IMAGR) before a new
image is made.
The IMAGR uv work file will, in general, be different in some ways
from the original input data and may give difficulties to some of the
existing uv data handling routines. The data will be in the form of a
single Stokes' (or circular) polarization with the selected frequency
channels and IFs being imaged ("summed") into one grid. The data will
have been selected by the criterion given explicitly or implicitly to
IMAGR. The weights will have the uniform weighting correction made.
CHANNEL
This adverb, if >0, is used to restart IMAGR. If CHANNEL=N, then
restart IMAGR at frequency channel N (N=1 for continuum). When
restarting a Clean, use BCOMP as described below.
NCHAV
The number of channels to be combined on the same grid before
mapping. Use this option to obtain one map from several channels of
uv data at slightly different frequencies. Up to 16K channels can be
combined in the gridding stage.
CHINC
The number of (u-v) planes to skip between maps. If NCHAV>1,
CHINC refers to the first plane going into the map.
STOKES
Only one Stokes' parameter can be made at one time. A
beam is made for each polarization and each frequency channel.
BIF,EIF
These define the first and last IFs to be included in a
bandwidth synthesis average. An IF consists of a set of
one or more equally spaced frequency channels;
multiple IFs with arbitrary frequency spacings are allowed.
IMSIZE
The minimum desired size for all of the fields. The limits
are 32x32 to 8192x8192 and each axis must be a power of 2. The adverb
FLDSIZE defines the region over which Clean components are
searched for.
NFIELD
The number of independent fields to map. Up to 512 are
permitted for each frequency channel specified by BCHAN, ECHAN
and CHINC. Each field comes out as a separate cataloged file.
Clean components subtracted from one field will not be restored
to other fields even if the images overlap.
FLDSIZE, RASHIFT, DECSHIFT
For each independent field, specify the center of the field
by its RA offset and DEC offset in arc-seconds from the phase
center. A positive RA and DEC offset means that the field
center is East and North of the phase center. The FLDSIZE is
the area in each image where Clean components will be searched
for. It is limited to the range 32x32 to 8192x8192 but need not
be a power of 2. The output image size will be increased to the
next power of 2 or to IMSIZE if it is larger. For maps smaller
than 256x256, the size may be doubled for more accurate
Cleaning.
NBOXES, CLBOX (TVBOX), BOXFILE
For the first field, up to 50 Clean windows can be specified via
CLBOX as an alternative to FLDSIZE. This allows more flexibility than a
single window centered on the phase center. If NBOXES is greater than 0,
then the contents of CLBOX are used to specify the input window. Since
these values are in pixels, care should be taken that they are determined
from an image made with the same cell size and shift.
NOTE: the values contained in CLBOX are not used to determine the
size of the image for field 1. IMSIZE and/or FLDSIZE must be used for
this. In the case that CLBOX and NBOXES are used, this is the only use
made of FLDSIZE for field 1. Its use for higher numbered fields is
unaffected. If CLBOX is all 0, the value of FLDSIZE (or its default) is
used for CLBOX.
NBOXES and CLBOX specify the size and location of the rectangular
or circular boxes comprising the "Clean Window" area A. You make the
best use of prior knowledge about the source to help IMAGR do its job
when you restrict A as closely as possible to the true boundaries of its
emission. Recall that Clean attempts to optimize F(n) as a
representation of the sky, with zeroes everywhere else. The more
information you provide about where the "real" emission is, the more
likely IMAGR is to converge on a unique, credible solution. The
interactive options of IMAGR provide a powerful means to adjust this
information as the Clean proceeds.
Rectangular boxes are given by 4 numbers representing the x and y
lower left corner and the x and y top right corner. Circular boxes are
specified by four numbers giving -1, the radius, and the x and y of the
center. The verb TVBOX may conveniently be used to set the number of
and parameters of the boxes. Following a prompt on your terminal,
position the TV cursor at the BLC of the first CLBOX and press any
track-ball button. Then position the cursor at the TRC of this box and
press button B. Repeat this for all desired boxes. This will fill the
CLBOX array for the IMAGR inputs. The terminal prompt will also give
instructions for resetting any previously set corners should you need to
do so. Note: since IMAGR will remake the image, be sure to run TVBOX on
an image made with the same cell size and shift as will be used for
IMAGR.
The BOXFILE adverb may be used to specify the name (and existence)
of an input text file of Clean box definitions. This is a text file
containing the coordinates of Clean boxes for all fields. Up to 256
(!!) boxes per field may be specified in the text file. Do so one box
per line, as field #, blc-x, blc-y, trc-x, trc-y (5 integers), e.g.
1 200 205 220 222
1 230 232 240 241
2 100 100 130 121
or, for circular areas, as field #, circular code (-1), radius,
center-x, center-y, (5 integers) e.g.
2 -1 14 80 75
3 -1 21 128 129
1 -1 5 175 175
...
Fields with no boxes specified default to the size specified by IMSIZE
and FLDSIZE (see above). If any boxes for field one appear in the file,
then NBOXES and CLBOX are completely overridden. Otherwise, those
adverbs are used for field 1. You are allowed to have blank and comment
lines in the file. The latter are indicated by putting a non-numeric
character in column one. E.g.,
BOXFILE 'FITS:BOXES'
Verb FILEBOX may be of help in preparing this text file.
The BOXFILE may also be used to specify the FLDSIZEs and center
coordinates or shifts for some or all fields. See the help file above.
If BOXFILE = ' ', NBOXES and CLBOX apply.
BCOMP
The default (BCOMP=0) restarts the Clean from scratch. Other
values of BCOMP are used when IMAGR is to be restarted from an
intermediate step in the process. When set >0, it specifies the maximum
number of components from each subfield of a previous Clean to be
subtracted from the uv data to obtain the residual map for the first new
iteration. Each value in BCOMP corresponds to a field.
Restarts are sometimes needed after the computer has crashed during
a long IMAGR. Under these circumstances, the iteration number at the
end of the last major cycle is stored in the AIPS Clean components file
headers. Provided that the crash has not destroyed relevant image files
(or the CC extension file) on disk, the Clean may be restarted by
setting BCOMP equal to the number of iterations shown in the image
header for the Clean map - if this disagrees with the number in the
internal file header (as may happen if the crash comes at an unfortunate
point in a cycle), AIPS will adjust BCOMP downwards in a "fail-safe"
way. (PRTCC might be run to check the state of the components list in
such cases). To restart IMAGR Cleaning from a set of fields set BCOMP
to the highest of the number of Clean components from the set.
When you set BCOMP>0, you must set the OUTNAME and OUTSEQ
parameters explicitly to those of the Clean Map(s) whose CC extension
file(s) contains the components which are to be subtracted. You must
also set OUTVER to the CC version number of those files. (Otherwise, it
will use a new CC file which starts with nothing in it.) This Clean Map
file will be overwritten by the new Clean Map, so if you wish to
preserve it you should either write it to tape using FITTP or create a
second copy of it on disk using SUBIM. NOTE: there is one CC file per
output frequency channel with version number = output channel number.
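   A sketch of such a restart, with hypothetical names and counts
(the earlier Clean map is called MYSOURCE with sequence number 1, CC
version 1, and at most 1500 components per field had been found):
      OUTNAME='MYSOURCE'; OUTSEQ=1; OUTVER=1
      BCOMP=1500; NITER=3000
IMAGR will subtract the first 1500 components of each field before
forming the first new residual image and then continue Cleaning
toward the new NITER.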
A components file F(N) can be re-convolved with a different Clean
Beam H by restarting IMAGR with NITER=BCOMP=n. This is an effective
tool for making SMALL changes in the resolution of a Clean Map. Do NOT
use it for making large (factors >2) changes in resolution, e.g. to
"find" extended components of a source. If a structure has been resolved
out over the longer baselines, these baselines contribute only noise,
not signal, to maps of that structure, and should be excluded from any
search for the structure. Cleaning a noisy map and then convolving it
to much lower resolution makes no sense. In such cases, you should
re-map with an appropriate taper and run IMAGR on a dirty map with a
more appropriate resolution.
BMAJ, BMIN, BPA
IF BMAJ is less than zero, the output image will be the residual
image without the Clean components restored. Examining this image for
waves or other artifacts is helpful for finding bad UV data. The task
RSTOR can quickly restore the Clean components.
CMETHOD
IMAGR has two different routines for computing the model visibility
from the Clean components. The first ('DFT ') method does a direct
Fourier transform of the Clean components for each visibility point.
This method probably gives slightly better accuracy but can be slow if
there are many components and/or visibilities. (See TIMING section for
more detail).
The second model computation method ('GRID') is to grid the Clean
components, transform them onto a uv grid with a hybrid DFT/FFT, and interpolate
each visibility measurement from the grid using (currently) ninth order
Lagrange interpolation and a uv grid with half the spacing of the
mapping grid. This method is called the gridded-FFT method and CAN be
MUCH faster than the DFT method for large data bases and large numbers
of components. Since the w correction must be done for each field
separately the length of time this routine takes is proportional to the
number of fields in which there are Clean components in any major cycle.
To increase the accuracy of the interpolation, the size of the model
grid used for the interpolation is twice the size of the data grid for
images up to 2048x2048. This means the output scratch files (three of
them) may be four times the size of the largest output file.
CMETHOD allows the user to specify the method desired or to allow
IMAGR to decide which one will be faster. CMETHOD equal to ' ' (or any
values but 'DFT ' or 'GRID') indicates that the decision is left to
IMAGR, CMETHOD = 'GRID' causes IMAGR to use the gridded-FFT method and
CMETHOD = 'DFT ' forces IMAGR to use the DFT method.
In cases where there are bright, localized regions far from the map
center (e.g. strong hot spots in double-lobed sources) the gridded
subtraction may be inadequate. The principal failure should be to
overestimate the brightness of the bright regions far from the map center
(should be under a percent) and slightly increase the noise elsewhere.
This problem will be greatly reduced if the first few major cycles use
the DFT subtraction method until the brightest regions are removed. If
CMETHOD is set to ' ' the first few major cycles will probably use
the DFT.
EXAMPLES OF COMMON IMAGR PARAMETERS
1) Emulate UVMAP and APCLN:
Set the appropriate parameters for normal UVMAP and APCLN
execution. To emulate a 1024x1024 map with a Clean of the
inner 512x512 (500x500), use the following adverbs:
IMSIZE=512; NFIELD=1
FLDSIZE=500; RASHIFT=0; DECSHIFT=0
NITER=200; CMETHOD=''; UVWTFN='UN'
IMAGR will effectively Clean nearly all of a 512x512 map with
200 iterations. IMAGR will decide (CMETHOD='') which of the two
component subtraction methods is most economical. For a data
base with less than about 1,000,000 data points, IMAGR will run
much faster than UVMAP and APCLN on a 1024x1024 map.
Furthermore, the Cleaning subtraction in IMAGR is more accurate
than that in APCLN. Uniform weight was chosen to give highest
resolution.
2) Piecemeal mapping and Cleaning using IMAGR
Set the usual input and output parameters, then
IMSIZE=128; NFIELD=3
FLDSIZE=32,32,256,256,20,20
RASHIFT=-40,5,200
DECSHIFT=120,2,400
UVWTFN='NA'
NITER=500; CMETHOD=''
IMAGR will produce the following set of maps.
a. 64x64 map centered on (-40",120") with Clean components
searched in the inner 32x32 area.
b. 256x256 map centered on (5",2") with Clean components
searched over the entire 256x256 area.
c. 64x64 map centered on (200",400") with Clean components
searched over the inner 20x20 area.
d. 256x256 beam centered on (0",0"). The beam size is taken
to be the size of the biggest map or 1024, whichever
is smaller.
It may be best to choose Natural weighting for the UV weight
function. In making relatively small fields over a wide area,
Natural weight emulates the resolution and signal to noise of
a single map made over the wide area. The uniform weighted
map for the 64x64 fields may be several times noisier than
the naturally weighted map.
3) IMAGR used with a multi-frequency u-v data base.
A range of spectral channels to image can be given by BCHAN
and ECHAN which are the low and high channel numbers to be
imaged in the input file. If CHINC is greater than 1 then
every CHINC'th channel is selected between BCHAN and ECHAN. If
NCHAV is greater than 1 then NCHAV channels will be averaged
starting with each channel selected by BCHAN, ECHAN and CHINC.
If IMAGR needs to be restarted then CHANNEL specifies the first
channel that needs to be processed. An image and a beam are
made for each channel. There is a limit of 46655 channels in an
output image.
The default value for BCHAN is 1, for ECHAN is BCHAN, CHINC
is 1 and NCHAV is 1.
Example:
BCHAN=1; ECHAN=6; NCHAV=2; CHINC=2; STOKES='I'
Will cause channels 1&2, 3&4, 5&6 to be combined and imaged
using the Stokes' I polarization data.
Prudence
If running IMAGR on a complicated source with low GAIN you
may need to work to large final values of NITER. As IMAGR is
the major consumer of CPU time in most map analyses, it is
prudent to preserve intermediate Clean maps by writing them to a
FITS tape with FITTP. This allows you to recover from disasters
such as crashes, over Cleaning, etc. with minimal impact on
total execution time, your time, and on disk space.
GENERAL COMMENTS
General comments concerning mapping and Cleaning follow.
Most of the comments have been taken from the EXPLAIN files for
UVMAP and APCLN.
MAPPING
IMAGR makes dirty maps and beams from (u,v) data using a
Fast Fourier Transform (FFT). The data may be in any sort
order. The data are convolved onto the regularly spaced grid
which is used for the Fourier transform. Maps of several
frequency channels, and a beam, can be made with one execution.
One polarization per execution.
A fairly complete description of the mapping functions
performed by IMAGR is given in Lecture 2 of the Proceedings of
the NRAO-VLA Workshop on Synthesis Mapping. Observers who are
unfamiliar with interferometry are encouraged to study this
volume.
OUTDISK :
If OUTDISK = 0, the map and beam will be put on the same
disk.
FLDSIZE,IMSIZE,CELLSIZE :
For effective Cleaning of the maps, the number of pixels
per beam should be such that the pixel value immediately north
or east of the beam center is no less than about 50 percent of the
peak. However, if tapering is used, the outlying (u,v) points
may not have any significant weight in the map.
Strong aliased sources should be Cleaned in separate fields
unless they are close to the object of interest.
IMAGR will make maps which have a power of two pixels on a
side; between 32 and 4096 on the X-axis and between 32 and 4096
on the Y-axis. FLDSIZE defines the region to be searched for
Clean components.
If for some reason it is desirable to map a region much
larger than the region being Cleaned, IMSIZE can specify the
minimum size of a map. Components will be Cleaned from the
region specified by FLDSIZE but the output image size will be as
specified by IMSIZE. Values in IMSIZE must be powers of 2.
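If you want to choose IMSIZE programmatically, the smallest allowed
dimension can be computed as in the fragment below; it is illustrative
only and the function name is invented.

    # Smallest allowed IMAGR map dimension (a power of two between 32
    # and 4096) that is at least as large as a requested size:
    def next_allowed_size(requested):
        size = 32
        while size < requested and size < 4096:
            size *= 2
        return size

    print(next_allowed_size(500))    # 512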
STOKES :
If you do not expect your source to show significant
circular polarization, as is normally the case with galactic
and extra-galactic continuum sources, making a V map can be a
useful diagnostic for calibration problems, correlator offsets,
etc. The V map should be a pure noise map close to the
theoretical sensitivity if your data base is well calibrated
and edited.
UVWTFN :
The default uniform weighting option gives higher
resolution than natural weighting. However, uniform weighting
gives a lower signal to noise ratio. Natural weighting is
therefore preferable for detection experiments. With uniform
weighting the dirty beam size decreases slightly with larger
maps, other parameters remaining unchanged. In cases of very
centrally condensed uv coverage, such as that resulting from the
VLA in the D array, uniform weighting with a UVBOX greater than
0.0 may be desirable. [SEE ALSO the long section above on
IMAGR's weighting options.]
ZEROSP :
To improve Cleaning of extended sources, the zero-spacing
flux should be included in IMAGR. The weight assigned should
normally be about the same as for other data samples - or better
yet = const/sigma**2, where the weights of the other samples are
the same constant divided by the square of their uncertainties.
Inclusion of the zero-spacing flux will allow Clean to
interpolate into the inner region of the (u,v) plane more
accurately, provided that this flux does not exceed the average
visibility at the short spacing by too much. You must also
Clean deeply to derive the full benefit of this (see the EXPLAIN
file for APCLN).
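As a rough guide to the "weight = const/sigma**2" suggestion, the
Python fragment below (illustrative only, not part of AIPS; all names
are invented) scales a zero-spacing weight so that it follows the same
convention as the weights already attached to the visibilities.

    # Illustrative only.  Assume the visibility weights in the data
    # base follow  weight = const / sigma**2  for some constant.  Given
    # one representative sample's weight and uncertainty, the same
    # convention gives a consistent weight for the zero-spacing flux.
    def zero_spacing_weight(sample_weight, sample_sigma_jy, zerosp_sigma_jy):
        const = sample_weight * sample_sigma_jy ** 2
        return const / zerosp_sigma_jy ** 2

    # e.g. a typical visibility with weight 1.0 and 0.1 Jy uncertainty,
    # and a single-dish flux density believed good to 0.5 Jy:
    print(zero_spacing_weight(1.0, 0.1, 0.5))     # 0.04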
XTYPE,YTYPE :
The default convolution function Spheroidal (5) is now
recommended for nearly all maps.
Cleaning
IMAGR de-convolves a dirty beam from a dirty map image
using the Clean algorithm [Hogbom 1974] as modified to take
advantage of the Array Processor [Clark 1980] and to do the
subtraction from the un-gridded uv data.
Clean iteratively constructs a discrete approximant F(n) to
a solution F of the convolution equation:
B^F = D (1)
where D denotes the discrete representation of the dirty map,
B of the dirty beam, the symbol ^ here denoting convolution.
The initial approximant F(0)=0 everywhere. At the n'th
iteration, Clean searches for the extremum of the residual map
R determined at the (n-1)'th iteration:
R(n-1) = D - B^F(n-1) (2)
A delta-function "Clean component", centered at this extremum,
and of amplitude g (the loop GAIN) times its value, is added to
F(n-1) to yield F(n). The search over R is restricted to an
area A called the "Clean window". A is specified as a number
NFIELD of rectangular sub-areas.
Iterations continue until either the number of iterations n
reaches a preset limit N (=NITER), or the absolute value of the
extremum of the residual map decreases to a preset value FLUX.
If FLUX is negative, the Clean stops at the first negative
Clean Component.
To diminish any spurious high spatial frequency features in
the solution, F(N) is normally convolved with a "hypothetical"
Gaussian "Clean Beam" H to construct a final "Clean Map" C:
C = H^F(N) + R(N) (3)
The Clean beam H may be specified by the user through the
parameters BMAJ, BMIN, BPA, or it may be defaulted to an
elliptical Gaussian fitted to the central region of the dirty
beam B. IMAGR writes the array of "Clean Components" F(N) to
the CC extension files of the Clean map image file.
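A minimal sketch of the basic Hogbom loop of equations (1)-(3) may
make the bookkeeping clearer. The Python/NumPy fragment below is
illustrative only and is NOT how IMAGR is implemented (IMAGR uses the
Clark major/minor cycles described next and subtracts components from
the un-gridded uv data). All names are invented for the example; GAIN,
NITER and FLUX mirror the adverbs, and the dirty beam is assumed to be
stored at twice the map size with its peak (=1.0) at the central pixel.

    import numpy as np

    def hogbom_clean(dirty_map, dirty_beam, gain=0.1, niter=200, flux=0.0):
        """dirty_map: (ny, nx).  dirty_beam: (2*ny, 2*nx), peak 1.0 at
        (ny, nx).  Returns the component image F(N) and residual R(N)."""
        residual = dirty_map.copy()
        components = np.zeros_like(dirty_map)
        ny, nx = residual.shape
        for _ in range(niter):
            iy, ix = np.unravel_index(np.argmax(np.abs(residual)),
                                      residual.shape)
            peak = residual[iy, ix]
            if abs(peak) < flux:                     # FLUX cutoff reached
                break
            components[iy, ix] += gain * peak        # new Clean component
            # subtract gain*peak times the dirty beam centred on (iy, ix)
            residual -= gain * peak * dirty_beam[ny - iy:2 * ny - iy,
                                                 nx - ix:2 * nx - ix]
        return components, residual

    # Step (3): convolve `components` with the Gaussian Clean beam H
    # (BMAJ, BMIN, BPA) and add `residual` to obtain the Clean map C.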
The Clark algorithm speeds up the deconvolution process by
splitting it into "major" and "minor" iteration cycles. At the
beginning of the m'th major cycle, it loads into the AP a
RESTRICTED residual map R'(m) containing only the LARGEST
(positive and negative) values in the current residual map R(m).
It then performs a "minor" cycle of iterations wherein new Clean
components are sought with (a) the restricted residual map R'(m)
replacing the full residual map R and (b) the dirty beam B being
approximated by its values inside a small area (the "beam
PATCH") with zeroes outside.
A minor cycle is terminated at iteration n' when the peak
in the restricted residual map R'(n') falls to a given multiple
[Clark 1980] of the largest value that was ignored in R(m) when
R'(m) was passed to the AP. At the end of the cycle of
minor iterations, the current Clean component list F(n') is
Fourier transformed, subtracted from the un-gridded uv data,
re-gridded and FFT-ed back to the map plane, thereby performing
step (2) EXACTLY with the components list F(n') obtained at
the end of the minor cycle.
Errors introduced in the minor cycle through use of the
restricted beam patch are corrected to some extent at this step.
This ends the m'th major cycle, the (m+1)th beginning when the
new restricted residual map R'(m+1) is loaded into the AP.
Cleaning ends (with the transform steps used at the end of a
major cycle) when either the total number of minor iterations
reaches NITER, or the residual value being Cleaned at a minor
iteration reaches FLUX.
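The control flow of the Clark scheme can be summarized in a schematic
Python sketch, given below. It is not IMAGR: the selection threshold
and the minor-cycle stopping rule are simplified stand-ins for the
actual Clark/FACTOR criteria, and the exact subtraction at the end of
each major cycle is done here in the image plane with the full beam
rather than against the un-gridded uv data. All names are invented,
and the double-size beam convention of the previous sketch is assumed.

    import numpy as np

    def clark_clean(dirty_map, dirty_beam, gain=0.1, niter=200, flux=0.0,
                    patch_halfwidth=51):
        """Schematic Clark-style Clean.  dirty_beam is (2*ny, 2*nx) with
        peak 1.0 at (ny, nx); patch_halfwidth is assumed < ny, nx."""
        ny, nx = dirty_map.shape
        hw = patch_halfwidth
        residual = dirty_map.copy()
        components = np.zeros_like(dirty_map)
        patch = dirty_beam[ny - hw:ny + hw + 1, nx - hw:nx + hw + 1]
        # largest beam sidelobe outside the patch sets how deep each
        # minor cycle may go (a stand-in for the Clark criterion)
        outside = dirty_beam.copy()
        outside[ny - hw:ny + hw + 1, nx - hw:nx + hw + 1] = 0.0
        worst_outside = np.max(np.abs(outside))
        n_done = 0
        while n_done < niter and np.max(np.abs(residual)) > flux:
            # --- start of a major cycle: keep only the largest residuals
            ignored = np.max(np.abs(residual)) * worst_outside
            mask = np.abs(residual) > ignored
            sub_res = residual[mask].copy()          # restricted residuals
            pos = np.argwhere(mask)                  # their (y, x) positions
            new_comps = np.zeros_like(residual)
            # --- minor cycles: Hogbom search on the restricted list,
            #     beam approximated by its central patch
            while n_done < niter:
                k = np.argmax(np.abs(sub_res))
                if abs(sub_res[k]) <= ignored:       # minor-cycle stop
                    break
                iy, ix = pos[k]
                c = gain * sub_res[k]
                new_comps[iy, ix] += c
                n_done += 1
                dy = pos[:, 0] - iy + hw             # offsets into the patch
                dx = pos[:, 1] - ix + hw
                inside = (dy >= 0) & (dy <= 2 * hw) & (dx >= 0) & (dx <= 2 * hw)
                sub_res[inside] -= c * patch[dy[inside], dx[inside]]
            # --- end of major cycle: exact subtraction with the full beam
            for iy, ix in np.argwhere(new_comps != 0.0):
                residual -= new_comps[iy, ix] * dirty_beam[ny - iy:2 * ny - iy,
                                                           nx - ix:2 * nx - ix]
            components += new_comps
        return components, residual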
Prussian Hats
When dealing with two-dimensional extended structures,
Clean can produce artifacts in the form of low-level high
frequency stripes running through the brighter structure. These
stripes derive from poor interpolations into un-sampled or poorly
sampled regions of the (u,v) plane. [When dealing with quasi
one-dimensional sources (jets), the artifacts resemble knots
(which may not be so readily recognized as spurious)].
IMAGR invokes a modification of Clean that is intended to
bias it towards generating smoother solutions to the
deconvolution problem while preserving the requirement that the
transform of the Clean components list fits the data. The
mechanism for introducing this bias is the addition to the dirty
beam of a delta-function (or "spike") of small amplitude
(PHAT) while searching for the Clean components. [The beam
used for the deconvolution thereby resembles the helmet worn by
German military officers in World War I, hence the name
"Prussian Helmet Clean"]. The theory underlying the algorithm
is given by Cornwell (1982, 1983), where it is described as the
Smoothness Stabilized Clean (SSC).
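In code terms the modification is tiny. The illustrative fragment
below (names invented, continuing the beam conventions of the sketches
above, with the beam held in a NumPy array whose peak is at the
central pixel) simply adds the PHAT spike to the copy of the dirty
beam that is used while searching for Clean components.

    def prussian_helmet_beam(dirty_beam, phat=0.1):
        """Illustrative only: return a copy of the dirty beam with a
        delta-function spike of amplitude PHAT added at the beam
        centre.  The spiked beam is the one used while searching for
        Clean components."""
        ny2, nx2 = dirty_beam.shape
        spiked = dirty_beam.copy()
        spiked[ny2 // 2, nx2 // 2] += phat
        return spiked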
Don't Clean
===================
If there is so little signal in your map that no side-lobes
of any source in it exceed the thermal noise, then no side-lobe
deconvolution is necessary, and Cleaning is a waste of your
time and of CPU cycles.
General - You can help Clean when you map
===============================================
Other things being equal, the accuracy of the deconvolution
process is greatest when the shape of the dirty beam is well
sampled. When mapping complicated fields, it is often necessary
to compromise between cell size and field of view; if you are
going to Clean a map image, you should set up your mapping
parameters in IMAGR so that there will be at least three or four
cells across the main lobe of the dirty beam.
It is also important to ensure that any strong sources whose
side-lobes affect your map are imaged and Cleaned rather than
aliased by the FFT. This can be done by placing a small map
field around each confusing source.
Consider making a strongly tapered map of a wide
field around your source at low resolution to diagnose confusion
before running IMAGR on a high resolution map(s) (especially
when processing snapshot data from the lower VLA frequencies).
It is helpful to regard Clean as an attempt to interpolate
missing samples in the (u,v) plane. The accuracy of the
interpolation is greatest where the original sampling is dense
or where the visibility function varies slowly. The accuracy is
least where you ask Clean to extrapolate into poorly sampled or
un-sampled regions of the (u,v) plane where the visibility
function changes rapidly.
One such region is the center of the (u,v) plane in any map
made from data where all of the fringe visibilities were less
than the integrated flux density of the source. You can help
Clean to guess what may have happened in the center of the (u,v)
plane (and thus to guess what the more extended structure on
your map should look like) by including a zero-spacing flux
density when you make your map. This gives Clean a datum to
"aim at" in the center of the (u,v) plane. Extended structure
can often be reconstructed well by deep Cleaning when the
zero-spacing flux density is between 100 percent and 125 percent of the
average visibility amplitude at the shortest spacings. If your
data do not meet this criterion, there may be no RELIABLE way
for you to reconstruct the more extended structure. (Some cases
with higher ratios of zero-spacing flux density to maximum
visibility amplitude can be successfully Cleaned, but success is
difficult to predict). If you see an increase in the visibility
amplitudes on the few shortest baselines in your data, but not
to near the integrated flux density, you may get better maps of
the FINE structure by excluding these innermost baselines.
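With invented numbers, the check described above is just a ratio:

    # Illustrative only: compare the zero-spacing flux density with the
    # mean visibility amplitude on the shortest spacings.
    zero_spacing_flux_jy = 4.1
    mean_shortest_spacing_amp_jy = 3.5
    ratio = zero_spacing_flux_jy / mean_shortest_spacing_amp_jy
    # deep Cleaning usually recovers extended structure well when this
    # ratio lies between about 1.0 and 1.25
    print(round(ratio, 2))           # 1.17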
Another un-sampled region lurks in the outer (u,v) plane in
many VLA maps of sources at declinations south of +50 degrees, if the
source has complicated fine structure. To see why, consult the
plots of (u,v) coverage for the VLA in Section 4 of the "Green
Book" [Hjellming 1982]. At lower declinations, some sectors of
the outer (u,v) plane are left poorly sampled, or un-sampled,
even by "full synthesis" mapping. (There are missing sectors in
the outer (u,v) plane in ANY snapshot map). If the visibility
function of your source has much structure in the un-sampled
sectors, Clean may work poorly on a high resolution map unless
it gets good "clues" about the source structure from the
well-sampled domain. If the clues are weak, badly extrapolated
visibilities in the un-sampled regions can cause high frequency
ripples on the Clean map. In such cases, Clean may give maps
with better dynamic range if you are not too resolution-greedy,
and restrict your data to the well-sampled "core" of the (u,v)
plane.
Before applying Clean, examine your (u,v) coverage and
think whether you will be asking the algorithm to guess what
happened in such un-sampled regions.
Frailties, Foibles and Follies
==============================
There are excellent discussions of Clean's built-in
idiosyncrasies by Schwarz (1978, 1979), by Clark (1982) and by
Cornwell (1982).
Another way of looking at Clean is to think of it as
attempting to answer the question "What is the distribution of
amplitudes at the Clean component positions [F(N)] which best
fits the visibility data, if we define the sky to be blank
everywhere else ?" The algorithm can then be thought of as
a "search" for places where F should be non-zero, and an
adjustment of the intensities in F(N) to obtain the "best"
agreement with the data.
The re-convolution of F(N) with the hypothetical "Clean
beam" H produces a "Clean map" C whose transform is no longer a
"best fit" to the data (due to differences between the
transforms of H and of the dirty beam B). The merit of the
re-convolution is that it produces maps whose noise properties
are pleasing to the eye. It may also be used to "cover up"
instabilities in Clean stemming from poor extrapolation into the
un-sampled regions of the (u,v) plane, by making H significantly
wider than the main lobe of B.
Note also that step (3) of the standard Clean combines this
re-convolution with the residual map, which contains faint sky
features convolved with the DIRTY beam B. If there is
significant signal remaining in the residual map, the effective
resolution of the Clean Map C varies with brightness. You must
therefore be particularly careful when comparing Clean maps made
at different frequencies or in different polarizations; you
should Clean all such maps sufficiently deeply that the
integrated flux density in the Clean components F(N) is much
greater than that in the residual map R(N).
A recurrent question about Clean concerns the uniqueness of
the Clean Map. In the absence of noise, Clean could adjust the
amplitudes of the components in F(N) to minimize the rms
difference between the observed visibility function and the
transform of F(N) [Schwarz 1978, 1979]. If the number of
degrees of freedom in F(N) is less than the number in the data,
Clean can (and in many practical cases does) converge on a
solution that is sensibly independent of your input parameters.
Noise and approximations in the algorithms complicate this
[Cornwell 1982], but realize that the solution CANNOT be unique
if the number of positions at which you end up with Clean
components exceeds the number of independent (u,v) data points.
Be suspicious if your Clean Map contains structures which
resemble those of the dirty beam. This may mean either that you
have not Cleaned deeply enough, or that Clean has had difficulty
in some un-sampled sector of the (u,v) plane in your case. This
test is particularly important in the case of snapshot maps, for
which the side-lobes of the dirty beam have a pronounced star
(snowflake) structure.
GAIN and NITER
The depth to which Clean carries out its deconvolution is
approximately measured by the product NITER*GAIN. The first
CC extension file version corresponds to the first output
frequency channel.
A value of 0 for NITER is recognized to indicate that no
Cleaning is desired. In this case the dirty beam is always the
size of the largest field and the CLASS of the output images
is "IIMnnn" rather than "ICLnnn".
The value of NITER and the execution time needed
to reach a given Cleaning depth are minimized by setting GAIN =
1.0, but setting GAIN > 0.5 is recommended only when removing
the side-lobes of a single bright unresolved component from
surrounding fainter structure. Note that TELL may be used to
lower the GAIN after the first major cycles have removed the
bright point objects.
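For a single isolated point source the trade-off is easy to quantify:
ignoring noise and sidelobes, each iteration removes the fraction GAIN
of whatever peak remains, so a fraction (1-GAIN)**NITER of the
original peak is left after NITER iterations. The Python fragment
below (illustrative only, not part of AIPS) counts the iterations
needed to reach an arbitrary depth of 0.1 percent of the peak.

    import math

    # Iterations needed to Clean an isolated point source down to
    # 0.1 percent of its peak, ignoring noise and sidelobes:
    target = 0.001
    for gain in (0.5, 0.25, 0.1, 0.05):
        niter_needed = math.ceil(math.log(target) / math.log(1.0 - gain))
        print("GAIN =", gain, " NITER ~", niter_needed)

At this (arbitrary) depth the counts come out near 10 for GAIN = 0.5
and near 66 for GAIN = 0.1, which is the sense in which a high GAIN
minimizes NITER for a bright point source.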
When Cleaning diffuse emission, GAIN = 0.1 (the
default) will be much better, for the following reason. The
search step of the algorithm begins its work at the highest peak
(which in an extended source may be a random noise "spike").
After one iteration, the spike is replaced in R by a negative
beam shape, so the next highest peaks are more likely to be found
where the spike would have had its biggest negative side-lobes
[see the diagram on p.11 of Clark (1982)]. If GAIN is high,
subsequent iterations tend to populate F(n) around the negative
sidelobe regions of the highest peaks. This "feedback" can be
turned off by making GAIN small enough that the negative
sidelobes of the first peaks examined in an extended structure
are lost in the noise, i.e. GAIN * (worst negative sidelobe
level) < 1/(signal-to-noise on the extended structure). In practice
setting GAIN << 0.1 makes Clean unacceptably slow (NITER too
large for a given Cleaning depth) so a compromise is needed.
GAINs in the range 0.1 to 0.25 are most commonly used.
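As a rough numerical illustration of that criterion (all numbers below
are invented for the example):

    # Illustrative numbers only: largest advisable GAIN if the new
    # negative sidelobes are to stay below the noise
    worst_negative_sidelobe = 0.3    # 30 percent of the dirty-beam peak
    snr_on_structure = 30.0          # peak signal-to-noise on the structure
    max_gain = 1.0 / (worst_negative_sidelobe * snr_on_structure)
    print(max_gain)                  # about 0.11, i.e. GAIN near 0.1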
If the source has some very bright compact features
embedded in weaker diffuse emission, it is illuminating to
examine the Clean Map when only the brightest structure has been
Cleaned, to check whether subsequent Cleaning of weaker diffuse
emission causes it to "go lumpy" via the sidelobe feedback
effect. This could be done with GAIN = 0.3-0.5, using either
NITER or the FIELD selection to ensure that the search does not
stray into the extended emission. Then IMAGR can be restarted
with lower GAIN, higher NITER and wider fields to tackle the
diffuse structure. TELL may be used to lower the gain during
execution of IMAGR. If the weak emission "goes lumpy" you may
need to rerun IMAGR with different combinations of GAIN and
NITER to find the most effective one for your particular case.
Ultimately you will stop IMAGR when the new Clean component
intensities approach the noise level on your map. On a map of
Stokes I, the appropriate stopping point will be indicated by
comparable numbers of positive and negative components appearing
in the CC list. On maps of Stokes Q and U, which can and will
be legitimately negative, you need to know the expected
sensitivity level of your observation to judge how deep to go.
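Once the component fluxes are in hand (however you choose to extract
them from the CC table), a quick tally of signs among the most
recently added components gives the indicator described above. The
fragment below is purely illustrative and the flux list is invented.

    # Illustrative only: compare the numbers of positive and negative
    # fluxes among the most recently added Clean components.
    def sign_tally(recent_cc_fluxes):
        pos = sum(1 for f in recent_cc_fluxes if f > 0.0)
        neg = sum(1 for f in recent_cc_fluxes if f < 0.0)
        return pos, neg

    # comparable counts on a Stokes I map suggest it is time to stop
    print(sign_tally([0.012, 0.008, -0.005, 0.004, -0.004, 0.003, -0.003]))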
It is NEVER worth increasing NITER and restarting IMAGR
once many negative Clean components appear in the CC list of an
I map. When this occurs, you are using Clean to shuffle the
noise in the residual map, which is about as sensible as
rearranging the deck chairs on the Titanic after it hit the
iceberg.
FLUX
This provides an alternative to NITER for terminating the
iterations in a given run of Clean. In practice, most users
prefer to control Clean by limiting each execution with NITER.
FLUX should then be set to your expected rms noise level times
the dynamic range of the dirty beam (peak/worst sidelobe), to
ensure that you do not inadvertently waste time iterating to a
high value of NITER while in fact Cleaning emission whose
sidelobes are lost in the noise.
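Applied with invented numbers, the rule of thumb above looks like
this:

    # FLUX ~ expected rms noise times the dirty-beam dynamic range
    # (peak over worst sidelobe).  Numbers invented for illustration:
    expected_rms_jy = 0.0005         # 0.5 mJy expected map noise
    worst_sidelobe_fraction = 0.1    # worst sidelobe is 10 percent of peak
    flux_cutoff_jy = expected_rms_jy / worst_sidelobe_fraction
    print(flux_cutoff_jy)            # 0.005 Jy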
If FLUX is between -99 and -1, then Clean stops on the first
negative Clean component. If FLUX < -99, then FLUX is the
milli-percent change in total flux density between major cycles
(i.e. FLUX=-1000 => stop Clean if there is < 1 percent change).
A new adverb will be added to replace the convoluted FLUX logic.
BMAJ, BMIN, BPA
The default values of 0 for these parameters invoke an
algorithm whereby the central portion of the dirty beam B is
fitted with an elliptical Gaussian function whose parameters are
then used to specify the Clean Beam H. The algorithm can be
"fooled" by positive or negative sidelobes near the main
lobe of B, and has been known to prescribe unsatisfactory forms
for H, particularly for snapshot maps.
It is normally preferable to specify BMAJ, BMIN and BPA
explicitly from your own examination of the dirty beam, or after
a trial fit using the default. The Clean Map C may be easier to
interpret if BMIN is set equal to BMAJ, so that H is a circular
Gaussian, and any elongated structures are therefore seen in
their correct orientation. The frailties of Clean's
deconvolution will be least apparent if both are set equal to
the LONGEST dimension of the dirty beam.
Attempts to "super resolve" the source by setting BMAJ and
BMIN to the SHORTEST dimension of the dirty beam (or shorter)
skate on the proverbial thin ice, the more so if the number of
Clean components in F(N) is comparable to, or larger than, the
number of independent visibility data used to make the dirty
map.
Note that if BMAJ, BMIN and BPA differ greatly from those
of the main lobe of the dirty beam, the parts of the Clean Map
derived from F(N) and from R(N) at step (3) will have greatly
different resolutions. This is very dangerous if R(N) contains
significant residual emission.
If BMAJ is set <0, then the output map contains the
residual map R(N) instead of the Clean map C. This option
allows you to display, take hard copy of, or back up, the
residual map while deciding further strategy, retaining the
ability to regenerate the Clean Map later.
NFIELD, FLDSIZE
A practical detail: when NFIELD=1 and FLDSIZE specifies an
area <= 127 by 127, the entire residual map can be loaded into
an AP with 64k, and Clean can proceed very efficiently. This
speeds up execution enormously. If NFIELD>1, even if the area
in the fields adds up to less than 127 by 127, this economy is
lost.
A particularly large economy in run time is achieved when the
FLDSIZE default (Clean window covering the whole map) is used
with 128 by 128 (or smaller) maps. In the
128 by 128 case not only will the Clean Window default to 128
by 128, but the necessary FFTs can be done entirely within the
AP; under these circumstances IMAGR proceeds at a headlong gallop.
FACTOR
This knob protrudes from the inner workings of the Clark
algorithm, enabling the user to vary the criterion [Clark 1980]
by which minor iteration cycles are ended.
[For those with an interest in the gory details - IMAGR
first notes the ratio between the brightest point which was
ignored when the residual map R'(m) was passed to the AP and the
maximum residual R'(n') at some later iteration n'; it then
uses the Clark criterion with this ratio raised to the power
FACTOR replacing unity in Clark's summation].
FACTOR = 0 (the default) recovers the Clark criterion.
FACTOR > 0 allows the minor cycles to go on longer,
speeding up the Clean slightly (about 20 percent for FACTOR = 1), but
allowing you to get closer to the situation where residuals R'
in the AP become smaller than values which were ignored when the
AP was loaded. The search for new components becomes less and
less accurate (compared with a Hogbom algorithm using all of R
in step (2)), and the representation of extended structure in
the final F(N) deteriorates.
FACTOR < 0 makes the termination criterion more
conservative and thus improves the accuracy of the Cleaning at
the expense of forcing more major cycles with a corresponding
overhead in the FFTs done at the end of each one.
It is recommended that experiments with FACTOR normally be
confined to the range -0.3 < FACTOR < +0.3, negative values
being used when Cleaning complex extended structures, and
positive values when Cleaning very simple compact structures.
MINPATCH
This parameter specifies the minimum half-width of the beam
patch that will be allowed in a minor cycle. A smaller beam
patch allows the Clean to go faster at the expense of less
accurate subtraction at each iteration. The inaccuracy can lead
to errors which will not be recovered completely at the end of
the major cycle, especially if high GAINs are used or the
source has complex structure. MINPATCH=51 is recommended for
Cleaning complicated sources. If the BEAM has large sidelobes
far from the beam center, the MINPATCH should be as large
as possible (<= 1024). (The BEAM often has large sidelobes for
VLA snap-shot images.)
The beam patch and the residuals to be Cleaned in each
minor iteration must fit in the (pseudo) AP memory. This may
limit the number of pixels Cleaned when MINPATCH is large. This
is not a consideration on most modern computers where we set the
AP size to 1 Megaword or more.
DOTV
Use of DOTV > 0 is STRONGLY recommended when Cleaning a
source for the first time. It causes the residual maps to be
displayed on the TV after each major cycle and allows you to
interact with them. See the initial section of this explain for
the new IMAGR TV options.
When Cleaning a very complicated source for the first time,
it is often worth going beyond this interactive approach, by
taking hard copy of various stages of the Clean to compare
carefully later. Consider setting NITER to a value in the range
50-200 at first. Then take hard copy of the lightly Cleaned map
for later reference, and restart IMAGR with a higher value of
NITER. Doing this, increasing NITER by factors of order 2 each
time, can be very instructive in showing you what Clean is doing
to the extended structures in your source. Spectral line users
may wish to Clean a typical channel in this way before deciding
how best to parameterize IMAGR for their production runs. (Note
that the trial channel should be reanalyzed using the final
choice of parameters, for consistency; IMAGR's final Clean maps
depend in detail on the relative numbers of major and minor
cycles which have been performed).
Sorting of UV data:
IMAGR will sort data into XY order if needed to fit the
weighting, data gridding (for FFT), or model gridding (for
gridded model subtractions in Clean) into the available "AP"
memory without multiple passes through the uv data.
REFERENCES
Proceedings of the NRAO-VLA Workshop on Synthesis Mapping,
    1982, ed. A. R. Thompson and L. R. D'Addario.
Clark, B. G. (1980). "An Efficient Implementation of the
    Algorithm 'Clean'", Astron. Ap., 89, 377-378.
Clark, B. G. (1982). "Large Field Mapping", Lecture #10 in the
    NRAO-VLA Workshop on "Synthesis Mapping".
Cornwell, T. J. (1982). "Image Restoration (and the Clean
    Technique)", Lecture #9 in the NRAO-VLA Workshop on
    "Synthesis Mapping".
Hjellming, R. M. (1982). "An Introduction to the NRAO Very Large
    Array", Section 4.
Hogbom, J. A. (1974). "Aperture Synthesis with a Non-Regular
    Distribution of Interferometer Baselines", Astron. Ap. Suppl.,
    15, 417-426.
Schwarz, U. J. (1978). "Mathematical-statistical Description of
    the Iterative Beam Removing Technique", Astron. Ap., 65, 345.
Schwarz, U. J. (1979). "The Method 'Clean' - Use, Misuse and
    Variations", in Proc. IAU Colloq. No. 49, "Image Formation
    from Coherence Functions in Astronomy", ed. C. van
    Schooneveld (Dordrecht: Reidel), p. 261-275.
Cornwell, T. J. (1982). "Can Clean be Improved?", VLA Scientific
    Memorandum No. 141.
Cornwell, T. J. (1983). "A Method of Stabilizing the Clean
    Algorithm", preprint.