9.6 After initial calibration

9.6.1 Applying calibration

Having obtained your best possible calibration CL table (and BP and BL tables if bandpass or baseline-dependent errors were found), you finally get to make a calibrated data set. This is done with SPLIT, which applies the calibration and splits the database into separate files, one for each source observed.


> TASK ’SPLIT’ ; INP  C R

to review the inputs.

> INDISK n ; GETN ctn  C R

to specify the input file.

> SOURCE ’ ’  C R

to write all sources.


> DOCALIB 1 ; GAINUSE cl  C R

to specify which CL table to use.


> DOBAND 1 ; BPVER 1  C R

to apply BP table 1 if present.

> BLVER blin  C R

to apply BL table blin if present.

> APARM(1) 1 ; NCHAV 0  C R

to have all spectral channels within each IF averaged; read the help file closely, since other useful averaging options are available.

> APARM(2) 2  C R

to have an amplitude correction made for the correlator integration time (in seconds).

> GO  C R

to run the program.

The options for SPLIT given above will apply calibration and then average the spectral channels within each IF, but not average IF channels together. To average over IF channels as well, set APARM(1) = 3 in SPLIT. (The task AVSPC no longer averages IFs, although it is useful in averaging spectral channels in calibrated data sets.)
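The two averaging modes can be pictured with a toy array of visibilities (a NumPy illustration only, not AIPS internals): APARM(1) = 1 averages the spectral channels within each IF while keeping the IFs separate, whereas APARM(1) = 3 collapses channels and IFs to a single value.

```python
import numpy as np

# Toy visibilities: 2 IFs x 4 spectral channels (complex values).
# Illustration of the averaging geometry only, not AIPS code.
vis = np.array([[1 + 0j, 1 + 0j, 1 + 0j, 1 + 0j],
                [2 + 0j, 2 + 0j, 2 + 0j, 2 + 0j]])

# APARM(1) = 1 in SPLIT: average channels within each IF -> one value per IF.
per_if = vis.mean(axis=1)   # shape (2,): the two IFs stay separate

# APARM(1) = 3 in SPLIT: average channels and IFs -> a single value.
all_avg = vis.mean()        # one number for the whole spectrum
```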

The task SPLAT can be used instead of SPLIT to perform time averaging and offers different options for spectral averaging. SPLAT can also be used to assemble the selected sources into a multi-source file after applying the specified calibration and averaging. This option allows the user to continue calibration on a smaller data set.

At this point, it is well worth spending time to examine your output visibility data carefully. You may plot the data against time with VPLOT, IBLED, EDITR, or UVPLT, and list them with LISTR, PRTUV, or UVPRT. POSSM is no longer useful since you have averaged your data in frequency.

9.6.2 Time averaging

It is now convenient to average the data in time using UVAVG both to reduce the bulk of the data and to increase the signal-to-noise for subsequent iterations of the self-calibration/imaging cycle. However, it is important to realize that the fringe-fitting process to this point has only removed gradients of phase over the fringe-fitting solution interval. There will still be stochastic atmospheric (and clock) phase errors affecting the data on short time scales. These phase errors can be significant over minutes at frequencies of 22 GHz and above (and possibly even at 15 GHz) and a reduction in amplitude can occur if data are directly averaged. The ionosphere can cause similar problems at lower frequencies. Self-calibration should remove such phase errors.
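The amplitude reduction mentioned here is the usual coherence factor: for Gaussian phase errors of rms σ_φ, vector-averaging reduces the amplitude by roughly exp(−σ_φ²/2). A short Python sketch (an illustration, not an AIPS tool) comparing the formula with a direct vector average:

```python
import numpy as np

def coherence_loss(sigma_phi_deg):
    """Analytic amplitude retained after vector-averaging unit visibilities
    with Gaussian random phase errors of rms sigma_phi (degrees)."""
    s = np.radians(sigma_phi_deg)
    return np.exp(-s**2 / 2.0)

# Direct check: vector-average many unit-amplitude visibilities whose
# phases have the given rms scatter.
rng = np.random.default_rng(1)
sigma = 30.0   # degrees rms; a plausible residual at 22 GHz over minutes
phases = np.radians(rng.normal(0.0, sigma, 100000))
measured = abs(np.exp(1j * phases).mean())

print(round(coherence_loss(sigma), 3))   # 0.872
print(round(measured, 3))                # close to the analytic value
```

So a 30° rms residual phase already costs about 13% in amplitude if the data are averaged blindly, which is why the phases should be aligned (or the averaging time kept short) first.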

For data at frequencies below 15 GHz (and integration times of order a minute), it should be safe to proceed with UVAVG (see below). For higher frequency data, it may be worth your while to examine the phase coherence of the data first. VPLOT can be used to examine your target or calibrator data to see directly the level of residual phase error over your chosen averaging time. Alternatively, the task IBLED allows you to view the degree of coherence of data averaged over different averaging times. If there are coherence problems (and the target data has enough SNR), CALIB can be run to align the phases prior to coherent averaging. Try:


> TASK ’CALIB’ ; INP  C R

to review the inputs.

> INDISK n ; GETN ctn  C R

to specify the single-source input file.


> OUTNAME ’ ’ ; OUTDISK n  C R

to specify the output file.

> CALSOUR ’ ’ ; SMODEL 1, 0  C R

to use the source with a point-source model.


> DOCALIB -1 ; DOBAND -1  C R

to not apply any tables to the input data.

> SOLTYPE ’ ’  C R

to use normal least squares.


> SOLMODE ’P’  C R

to solve for phase.

> SOLINT (10.0/60.0)  C R

to solve for phase in 10-second intervals. This should probably be set as low as the strength of the source will allow. The limit is the integration time that gives a SNR > 2 on most baselines.

> ANTWT 1  C R

to use weights from calibration with no additional weights applied to the antennas. For the purposes of phase alignment, it is appropriate to use the data weights; this allows the noise in the solution to be distributed over the noisiest baselines. This may not be the case when using CALIB for self-calibration in the hybrid mapping sense (see 9.7).

> APARM(1) 3  C R

to require 3 antennas present for solution.

> APARM(6) 0  C R

to skip diagnostic printout.

> APARM(7) 1  C R

to set the minimum allowed SNR. This limit should be low since the SNR is calculated from the phase difference with respect to the model, which can be large. Start with a value of 1.

> GO  C R

to run the program.

Note that CALIB will only give valid solutions if the signal-to-noise over the solution interval on most baselines is greater than 2 (and preferably much higher). At high frequencies on weak sources, it may not be possible to select a solution interval long enough that the signal-to-noise satisfies this criterion, yet short enough to follow the atmospheric phase variations. In such cases, it is probably best not to attempt to self-calibrate the data, but instead to use a short averaging time and to live with any coherence losses in the data.
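Since thermal noise integrates down as the square root of time, the baseline SNR over a solution interval t scales as √(t/t₀) relative to a measurement over t₀. A small arithmetic sketch of the trade-off just described (the numbers are hypothetical):

```python
import math

def snr(snr0, t0, t):
    """SNR over solution interval t, given SNR0 measured over interval t0
    (same units), assuming thermal noise only (integrates down as sqrt(t))."""
    return snr0 * math.sqrt(t / t0)

def min_solint(snr0, t0, snr_required=2.0):
    """Shortest solution interval reaching the required SNR."""
    return t0 * (snr_required / snr0) ** 2

# Hypothetical example: SNR of 0.5 on a typical baseline in a 2-second
# integration requires a 32-second solution interval to reach SNR = 2.
t = min_solint(0.5, 2.0)
print(t)   # 32.0
```

If the interval required this way exceeds the atmospheric coherence time, the criterion in the text applies: do not self-calibrate, average over a short time, and accept the coherence loss.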

When the data are sufficiently phase coherent, they should be averaged over time down to a reasonable size using UVAVG:


> TASK ’UVAVG’ ; INP  C R

to review the inputs.

> INDISK n ; GETN ctn  C R

to specify the CALIB output file as the UVAVG input file.


> OUTNAME ’ ’ ; OUTDISK n  C R

to specify the output file.

> YINC 30.0  C R

to set the time-averaging interval to 30 seconds.

> OPCODE ’ ’  C R

to enable the averaging operation. There are several options controlling the averaging interval selection and reported times.

> GO  C R

to run UVAVG.

If SPLAT was used to assemble the selected sources into a multi-source file (while applying the preliminary calibration), CALIB will write an SN table which must be converted to a CL table by CLCAL. This new CL table can be applied by SPLAT with time averaging.

9.6.3 Verifying calibration

Before proceeding to image your data, it’s worth checking that the calibration performed in 9.5 is sensible. For each of your sources, produce a plot of the correlated flux density against uv distance using UVPLT. As well as identifying bad data, which can then be deleted with IBLED, EDITR, WIPER, or UVFLG, these amplitude versus uv-distance plots (especially those of your calibrator sources) can be used to identify stations where the amplitudes are too high or too low. Furthermore, by fitting simple models to the calibrator data, constant correction factors can be determined for each station and used to correct the amplitude calibration. It is often the case that the amplitude calibration for a certain station (particularly a non-VLBA station) is out by a constant factor, due either to uncertainties in the antenna gain or in the noise calibration signal.
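The idea behind those per-station correction factors can be sketched numerically: a visibility amplitude on baseline (i, j) scales as the product of the two station gain factors, so the observed/model amplitude ratios decompose by least squares in the logarithm, log r_ij = log g_i + log g_j. A minimal Python illustration of that decomposition (my sketch, not the AIPS implementation):

```python
import numpy as np

def station_gains(baselines, ratios, nant):
    """Least-squares station gain factors from baseline amplitude ratios.

    baselines: list of (i, j) antenna index pairs
    ratios:    observed/model amplitude ratio on each baseline
    Solves log r_ij = log g_i + log g_j in the least-squares sense.
    """
    A = np.zeros((len(baselines), nant))
    for row, (i, j) in enumerate(baselines):
        A[row, i] = 1.0
        A[row, j] = 1.0
    logg, *_ = np.linalg.lstsq(A, np.log(np.asarray(ratios)), rcond=None)
    return np.exp(logg)

# Consistent toy data: antenna 1 is 10% high, the others are correct.
bl = [(0, 1), (0, 2), (1, 2)]
r = [1.1, 1.0, 1.1]
print(station_gains(bl, r, 3))   # approximately [1.0, 1.1, 1.0]
```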

In 31DEC20, two tasks allow you to examine selected portions of a uv data set statistically. UVRMS averages the selected data in SOLINT time intervals and can then plot the average values as a function of time and as a histogram. A variety of statistical parameters are computed and printed. SPRMS determines and plots the mean and standard deviation of the selected data as a function of spectral channel. Note that both tasks will combine data from multiple sources, baselines, etc. if so instructed. In 31DEC21, RIRMS computes the mean and rms of the real and imaginary parts of the visibilities. It prints these in a baseline matrix and can plot them by baseline as functions of time and as histograms. These tasks have a full set of calibration and data selection adverbs, so they may be run before you run SPLIT.

Most VLBI calibrator sources can be adequately described by one or two Gaussian components. The task UVFIT can be used to fit such a model while finding constant correction factors for the antenna gains. Note that UVFIT can handle no more than 2,500,000 visibilities, so further averaging with UVAVG may be required. The following example shows how to fit antenna gains and a single elliptical Gaussian model of known position and flux to one of the single-source data sets produced by SPLIT or SPLAT (with spectral averaging).
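For orientation, the model being fitted has a simple closed form in the uv plane: a circular Gaussian of total flux S and FWHM θ (radians) has visibility amplitude S·exp(−(πθq)²/(4 ln 2)) at uv radius q (wavelengths). A small Python sketch of this textbook relation (an illustration only, not the UVFIT code):

```python
import math

MAS = math.pi / 180.0 / 3600.0 / 1000.0   # milliarcseconds -> radians

def gauss_vis(flux_jy, fwhm_mas, uvdist_wavelengths):
    """Visibility amplitude of a circular Gaussian component
    (standard Fourier-transform result, not AIPS code)."""
    theta = fwhm_mas * MAS
    q = uvdist_wavelengths
    return flux_jy * math.exp(-(math.pi * theta * q) ** 2 / (4.0 * math.log(2.0)))

# A 1.2 Jy, 1 mas component is barely resolved on a 50 Mlambda baseline:
# about 0.97 Jy of the 1.2 Jy remains correlated.
print(round(gauss_vis(1.2, 1.0, 50e6), 3))
```

Plotting this curve over your UVPLT amplitude-versus-uv-distance plot gives a quick visual check of the fitted size before trusting the derived antenna gains.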


> TASK ’UVFIT’ ; INP  C R

to review the inputs.

> INDISK n ; GETN ctn  C R

to specify the input uv data set.


to specify 1 Gaussian component.


to fix the flux at 1.2 Jy.


to hold the position fixed at the origin.

> GWIDTH 0.002 , 0.001 , 45.  C R

to provide an initial guess of the Gaussian widths as 2×1 mas at a position angle of 45°.


to fit for size.


to fit for all antenna gains with initial guess = 1.

> NITER 50  C R

to limit the fitting to 50 iterations.

> IMSIZE 0.0005 , 0.01  C R

to limit sizes to be in the range 0.5 to 10 mas.


to save the solution in a CC file of version number 1.

> INP  C R

to check the inputs.

> GO  C R

to run UVFIT.

UVFIT can also be applied to the multi-source file, but SRCNAME, DOCAL, GAINUSE, etc. must then be set. It can fit up to 20 sources, using a channel-dependent input text file, and can write the results in a compact text form.

Another way to test your amplitude calibration is to use the task UVCRS for bright sources with long tracks. This task calculates correction factors for the amplitudes of the stations using regions of the uv plane where uv tracks cross. UVCRS can write the correction factors into an SN table.

Once the scale factors are determined, there are a number of options for correcting the data. The simplest option is to apply the correction factors to the single-source data sets using task VBCAL. Alternatively, the correction factors can be incorporated into the highest version CL table of the multi-source data file and task SPLIT run again to make new calibrated data files. The corrections to the CL table are done with CLCOR. Unlike VBCAL, CLCOR must be run separately for each antenna whose calibration you wish to alter; ANTENNA must be set to the antenna number you wish to change, OPCODE = ’GAIN’, and CLCORPRM(1) set to the amplitude scale factor found in UVFIT; note that these are voltage gains. All higher values in the array CLCORPRM should be zero. The effect of the altered calibration can be viewed using UVPLT with DOCAL = 1. If it is satisfactory, SPLIT can be re-applied to the data. We recommend using CLCOR to perform such amplitude corrections. It now produces a new CL table each time it is used unless you specify both GAINVER and GAINUSE as having the same, non-zero version number.

Another way of incorporating amplitude corrections is to edit the calibration text files used by ANTAB. This can be accomplished by setting the FT parameters for affected stations. For instance, if the Bonn scale factor is 1.043, set the FT parameter on the BONN TSYS card to FT=(1.043*1.043). If amplitude calibration was carried out after fringe-fitting (not recommended!), then it is only necessary to rerun ANTAB, delete the latest CL table containing amplitude calibration, and rerun APCAL using the highest CL table produced in the fringe-fitting step. If, however, as we have described in this chapter, the amplitude calibration was done prior to fringe-fitting, then correcting the amplitudes is more involved. It is probably best to delete all CL tables except the first one and start again at 9.5. However, it may not be necessary to carry out the time-consuming FRING solutions again. If the amplitude changes are small, the phase, rate, and delay solutions will be essentially unchanged. Therefore, with care, the existing SN tables can be used in lieu of re-running FRING.
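The squaring in the FT example above reflects the fact that the scale factors are voltage gains: a station with voltage gain g scales each baseline amplitude to it by g (the other station contributes its own factor), while Tsys, a power quantity, scales as g². A one-line arithmetic check (illustration only):

```python
# Voltage gain vs. power factor for amplitude corrections (illustration only).
# A station voltage gain g scales a baseline amplitude involving that station
# by g, while Tsys-based quantities are powers and therefore scale as g**2.
g = 1.043            # amplitude (voltage) scale factor from UVFIT/CLCOR
ft = g * g           # the FT power factor to put on the TSYS card
print(round(ft, 6))  # 1.087849
```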