Python autoProcessing wrapper for SAVU


Table of contents


For information about how to create a json config file to use with the autoProcessing wrapper, please visit this site:

Python autoProcessing configurator

If you have any questions, please do not hesitate to ask directly

Introduction

The autoProcessing wrapper is a python class included in the I13analysis module. It aims to provide an easy-to-use interface between the users and SAVU. Changing parameters in SAVU config files is automated by the interface, and several methods allow tailoring individual reconstructions as required.

autoProcessing uses a combination of SAVU calls (for most data processes) and local reconstruction for the centering and entropy metrics calculation (used in automatic COR determination). Please make sure not to start any processes on the data acquisition machine. In addition, autoProcessing also searches for intermediate SAVU data to be reused to minimize the processing time. All intermediate data is written to the <visitID>/tmp folder and only center-finding and final reconstructions are written to the <visitID>/processing folder.
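The entropy metric mentioned above scores each trial reconstruction by the entropy of its grey-value histogram; a well-centred slice tends to score lower than a blurred, mis-centred one. A minimal sketch of such a metric (an illustrative implementation, not the module's actual code):

```python
import numpy as np

def histogram_entropy(image, bins=256):
    """Shannon entropy of an image's grey-value histogram.

    A sharp, well-centred reconstruction concentrates grey values into
    fewer histogram bins and therefore tends to have a lower entropy
    than a blurred one.
    """
    counts, _ = np.histogram(image, bins=bins)
    p = counts / counts.sum()   # normalise counts to probabilities
    p = p[p > 0]                # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log2(p)))
```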

(go to top / table of contents)


Calling parameters

The following calling parameters are used to configure autoProcessing:

  • help (alias: -h, --help): Display all available options and help text on the command line.
  • json file name (alias: -jsonfile, -j; datatype: string, file path): Path of the json config file which includes all the beamtime meta data. This is a mandatory argument. However, if omitted the script will look in the current working directory and, if there is exactly one file with a .json suffix, this will be loaded.
  • objective number (alias: -objective, -o; datatype: integer): The objective number to be used for the distortion correction. The default is <8>, which corresponds to no distortion correction.
  • centre of reconstruction (alias: -cor, -c; datatype: float): The centre of reconstruction, if a manual input is required. This value will be replaced by the automatic centering if the latter is called.
  • slice number (alias: -slice, -s; datatype: integer): The slice number to be used for testing. If not specified, the value from the original SAVU configuration file will be used.
  • stepping width (alias: -dx; datatype: float): Step width for the automatic centering. Defaults to 0.1.
  • range of centre finding (alias: -range; datatype: float): The range to be scanned for the final COR determination. The search range is [x0 - range, x0 + range]. The default range is 2.0.
  • SAVU big (alias: --big): Optional flag to call SAVU with the "BIG" keyword (running on com14 instead of com10). This allows you to start more than two parallel processes.
  • TXM file structure (alias: --txm): Flag to use the TXM directory structure.
  • redo all calculations (alias: -redo, -r): Flag to force re-running all preliminary SAVU steps. Required, for example, after changing the slice number.
  • scan numbers (positional; datatype: integers): The numbers of the scans to be processed. Add as many scans as required; they will be worked on in a queue. Please note that an exception during runtime will stop the whole queue.
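The fallback used when the json file argument is omitted (load the single .json file found in the current working directory) could be sketched as follows; find_json_config is a hypothetical helper written for illustration, not part of the wrapper:

```python
import glob
import os

def find_json_config(path="."):
    """Return the single .json file found in `path`.

    Mimics the documented fallback: the config is only auto-loaded when
    exactly one .json file is present; otherwise raise an error.
    """
    candidates = glob.glob(os.path.join(path, "*.json"))
    if len(candidates) != 1:
        raise FileNotFoundError(
            "Expected exactly one .json config in %s, found %d"
            % (path, len(candidates)))
    return candidates[0]
```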


(go to top / table of contents)


Configuration and starting up

Preparation of environment (perform only once for each beamtime)

The usage of the predefined scripts is the most convenient way. It is good practice to copy the script files and the json file into the <visitID>/processing/recoMicroCT folder and work in there (please replace the YEAR and VISITID placeholders):

[fedID@machine ~]$ cp -r /dls_sw/i13/scripts/Malte/recoMicroCT /dls/i13/data/YEAR/VISITID/processing/recoMicroCT
[fedID@machine ~]$ cd /dls/i13/data/YEAR/VISITID/processing/recoMicroCT
[fedID@machine recoMicroCT]$ chmod a+rx *

The "chmod" allows you (and everyone else, i.e. also other team members) to read and execute these script files.


(go to top / table of contents)


Non-standard SAVU plugins

If you use any non-standard SAVU plugins, please make sure that each team member has all required plugins copied to their savu_plugins folder:

[fedID@machine ~]$ mkdir /home/fedID/savu_plugins
[fedID@machine ~]$ cp /dls_sw/i13/Scripts/Malte/savu_plugins/SAVUplugin1.py /home/fedID/savu_plugins
[fedID@machine ~]$ cp /dls/i13/data/YEAR/VISITID/processing/auto_processing/SAVUplugin2.py /home/fedID/savu_plugins
...


(go to top / table of contents)


Usage of predefined scripts

To prepare the terminal, go to your visit directory and load the required python libraries:

[fedID@machine ~]$ cd /dls/i13/data/YEAR/VISITID/processing/recoMicroCT
[fedID@machine recoMicroCT]$ source loadPython

This step only needs to be performed once (but in each new console or shell).

To start a process, use for example:

python ap_full.py -j VISITID.json -o 1 -dx 0.5 -range 5 XXXXXX YYYYYY

This will start a full processing of scans XXXXXX and YYYYYY:

  1. Vo centering,
  2. COR refinement using metrics within +/- 5 pixels of the Vo-center and using a step width of 0.5
  3. full reconstruction
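Step 2 scans candidate CORs in the interval [x0 - range, x0 + range] with step width dx. The tested values can be sketched as follows (cor_candidates is a hypothetical helper, not the wrapper's own function):

```python
import numpy as np

def cor_candidates(x0, search_range=2.0, dx=0.1):
    """Candidate centres of rotation in [x0 - range, x0 + range],
    stepped by dx, as scanned during the metric-based refinement.
    Defaults match the documented -range (2.0) and -dx (0.1) values."""
    n = int(round(2 * search_range / dx)) + 1
    return np.linspace(x0 - search_range, x0 + search_range, n)
```

With -range 5 and -dx 0.5 around a Vo-centre of 1280.0, this yields 21 candidates from 1275.0 to 1285.0.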


(go to top / table of contents)


On-the-fly scripting

To use autoProcessing from within a script, you need to specify the calling parameters (all as strings). You also need to load the I13analysis module, which can be found at /dls_sw/i13/scripts/Malte:

import sys
sys.path.append('/dls_sw/i13/scripts/Malte')   # location of the I13analysis module
import I13analysis

# calling parameters, all passed as strings; the scan number goes last
args = ['-j', '/dls/i13/data/2018/cm19664-3/processing/auto_processing/cm19664-3.json', '-o', '1', 'XXXXXX']
ap = I13analysis.autoProcessing(args)
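Since an exception during one scan stops the whole queue, scripted runs can isolate failures per scan with a small wrapper. run_queue below is a hypothetical helper written for illustration, not part of the I13analysis module:

```python
def run_queue(process, scans):
    """Run `process(scan)` for each scan, continuing past failures.

    Returns a dict mapping each scan to True on success or to the
    exception raised, so failed scans can be identified and re-run.
    """
    results = {}
    for scan in scans:
        try:
            process(scan)
            results[scan] = True
        except Exception as exc:
            # record the failure and move on to the next scan
            results[scan] = exc
    return results
```

Here, process could for example be lambda scan: I13analysis.autoProcessing(base_args + [scan]), where base_args holds the shared options.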


(go to top / table of contents)



Script examples:

All these examples can be found at /dls_sw/i13/scripts/Malte/recoMicroCT


Full autoprocessing: ap_full.py

This script will first run Vo centering to find a good starting point for the COR refinement, then refine the COR using Donath's entropy metric before starting the full reconstruction.

ap_full.py
1. python ap_full.py 123456 123457

2. python ap_full.py -o 4 -s 1200 -range 4 123456 123457

3. python ap_full.py -o 3 -s 1200 123456 

Explanation of examples:

  1. Do a full processing of scans 123456 and 123457 with all the standard parameters and no distortion correction (if the "objective" parameter is not given, it defaults to no distortion correction)
  2. Do a full processing of scans 123456 and 123457 using slice 1200 for the centre finding and objective #4 for the distortion correction. The fine centre finding is performed in a range of +/- 4 pixels around the Vo-center.
  3. Do a full processing of scan 123456 using slice 1200 for the centre finding and objective #3 for the distortion correction.


(go to top / table of contents)


COR refinement and final reconstruction around a given COR start value: ap_guessCOR.py

This script will skip the Vo centering and use the value specified with "-cor XXX.XX" as the starting point for refinement.

ap_guessCOR.py
1. python ap_guessCOR.py -s 1200 -o 3 -cor 1320.0 123456

2. python ap_guessCOR.py -o 3 -cor 1270.0 123456

Explanation of examples:

  1. Do a centre-refinement and final reconstruction of scan 123456 using slice 1200 and objective #3 for distortion correction. The cor is refined around pixel 1320.0
  2. Do a centre-refinement and final reconstruction of scan 123456 using objective #3 for distortion correction. The cor is refined around pixel 1270.0


(go to top / table of contents)


Manual center finding: ap_manualCORfinding.py

This script will take the manual input "-cor XXX.XX" and reconstruct in the vicinity (determined by the "-range X.X" and "-dx Y.Y" parameters) without starting a full reconstruction.

ap_manualCORfinding.py
1. python ap_manualCORfinding.py -o 4 -s 1200 -range 40 -cor 1280 -dx 5 123456

2. python ap_manualCORfinding.py -o 2 -s 200 -range 4 -cor 1276 -dx 0.5 123456

Explanation of examples:

  1. Do a manual COR-finding for scan 123456 using the distortion correction for objective #4 and slice 1200. The COR is tested in the range 1240 to 1320 (cor - range to cor + range) in steps of 5 pixels.
  2. Do a manual COR-finding for scan 123456 using the distortion correction for objective #2 and slice 200. The COR is tested in the range 1272 to 1280 (cor - range to cor + range) in steps of 0.5 pixels.


(go to top / table of contents)


Only perform reconstruction: ap_reco.py

The "-cor XXX.XX" value will be used to perform the final reconstruction of the whole volume.

ap_reco.py
python ap_reco.py -o 3 -cor 1272.5 123456

Explanation of example:

Start the final reconstruction of scan 123456 using the distortion correction for objective #3 and the COR of 1272.5.


Output and results

Folder and file structure

All results will be written to the <visitID>/processing/reconstruction/<scanNo> folder. The contents of this folder can be:

  • scanXXXX_SavuRecon.nxs: the SAVU config file used for the final reconstruction. Useful to keep to check the processing steps again at a later stage.
  • tomopy-centering folder: The results of Donath's entropy metrics test will be in this folder. There are several files:
    • scanXXXXXX_cors.npy: the tested CORs in numpy's native file format (binary with header)
    • scanXXXXXX_metrics.npy: the metrics' results for the tested CORs in numpy's native file format (binary with header)
    • scanXXXXXX_metrics.png: A visualisation of the fitting data. The orange dots correspond to computed data points while the blue curve represents the fit around the minimum. The position of the vertical gray line shows the minimum value used for the final reconstruction.
    • scanXXXXXX_slice0000_RecTest_Center.tif: A multi-tif image with the reconstructed centre slice for the different CORs tested. Please note that the "slice" in the filename is in cropped coordinates, i.e. it does not correspond to slice 0 in the full dataset.
    • scanXXXXXX_slice0000_RecTest_Center.txt: A text file to go with the tif file. It includes the image numbers and centres of reconstruction to allow identification.
  • tomopy-manualcentering folder: This folder will be created if the manualTomopyCenterSearch method is called. The content is the same as for the tomopy-centering folder except around the manual COR.
  • YYYYMMDDHHMMSS_XXXXXX folder: The folder with the final SAVU reconstruction. Depending on the process list, this folder can include an hdf file with the reconstructed data and potentially a folder with individual tiff slices. (YYYY - year, MM - month, DD - day, HH - hour, MM - minute, SS - second, XXXXXX - scan number)
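The saved cors/metrics .npy pairs can be re-loaded with numpy to inspect or redo the fit shown in the png. A minimal sketch of a quadratic fit around the metric minimum (the wrapper's actual fitting routine may differ):

```python
import numpy as np

def refine_cor(cors, metrics, window=5):
    """Fit a parabola through the points around the metric minimum and
    return the fitted centre; fall back to the raw minimum if the fit
    does not open upwards."""
    i = int(np.argmin(metrics))
    lo, hi = max(0, i - window), min(len(cors), i + window + 1)
    a, b, _c = np.polyfit(cors[lo:hi], metrics[lo:hi], 2)
    if a <= 0:                       # no minimum found by the fit
        return float(cors[i])
    return float(-b / (2 * a))       # vertex of the fitted parabola

# e.g. after loading the saved results from the tomopy-centering folder:
# cors    = np.load('scanXXXXXX_cors.npy')
# metrics = np.load('scanXXXXXX_metrics.npy')
# print(refine_cor(cors, metrics))
```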



scanXXXX_metrics.png example

Status control

To check the status of your jobs on the cluster, use the following commands:

[fedID@machine ~]$ module load hamilton
[fedID@machine ~]$ watch qstat


To check the status of all SAVU jobs running on the cluster:

[fedID@machine ~]$ module load hamilton
[fedID@machine ~]$ qstat -u \* | grep savu


To check the status of all jobs on the com10 node (regular SAVU) or com14 node (SAVU big):

[fedID@machine ~]$ module load hamilton
[fedID@machine ~]$ qstat -u \* | grep com10
[fedID@machine ~]$ qstat -u \* | grep com14


FAQ

  1. Q: The SAVU process has finished on the cluster but the python script hangs and no output appears.
    A: The python script is waiting for the SAVU processing to finish. If SAVU aborts with an error, the script will not recognize this. The most probable cause is that you are using non-standard SAVU plugins which have not been copied to your personal savu_plugins folder. Please check this section: Non-standard SAVU plugins