
Table of Contents

Info

For information about how to create a json config file to use with the autoProcessing wrapper, please visit this site:

Python autoProcessing configurator

Info

If you have any questions, please do not hesitate to ask directly.

Introduction

The autoProcessing wrapper is a python class included in the I13analysis module. It aims to provide an easy-to-use interface between the users and SAVU. Changing parameters in SAVU config files is automated by the interface, and several methods allow individual reconstructions to be tailored as required.

autoProcessing uses a combination of SAVU calls (for most data processing) and local reconstruction for the centering and entropy-metrics calculation (used in the automatic COR determination). Please make sure not to start any processes on the data acquisition machine. In addition, autoProcessing searches for intermediate SAVU data that can be reused to minimize the processing time. All intermediate data is written to the <visitID>/tmp folder; only centre-finding results and final reconstructions are written to the <visitID>/processing folder.

(go to top / table of contents)


Calling parameters

The following calling parameters are used to configure autoProcessing; the example call after the table illustrates how they can be combined.

parameter | alias | datatype | description
help | -h, --help | flag | Use this to display all available options and help text on the command line.
json file name | -jsonfile, -j | string (file path) | Path to the json config file which includes all the beamtime metadata. This is a mandatory argument; however, if it is omitted, the script will look in the current working directory and, if there is exactly one file with a .json suffix, load that file.
objective number | -objective, -o | integer | The objective number to be used for the distortion correction. The default is <8>, which corresponds to no distortion correction.
centre of reconstruction | -cor, -c | float | The centre of reconstruction, if a manual input is required. This value will be replaced by the automatic centering if the latter is called.
slice number | -slice, -s | integer | The slice number to be used for testing. If not specified, the value from the original SAVU configuration file will be used.
stepping width | -dx | float | Step width for the automatic centering. Defaults to 0.1.
range of center finding | -range | float | The range to be scanned for the final COR determination; the search range is [x0 - range, x0 + range]. The default range is 2.0.
SAVU big | --big | flag | Use this optional argument to call SAVU with the "BIG" keyword (running on com14 instead of com10). This allows you to start more than two parallel processes.
TXM file structure | --txm | flag | Use this flag to use the TXM directory structure.
redo all calculations | -redo, -r | flag | Use this flag to force re-running all preliminary SAVU steps. This is required, for example, after changing the slice number.
scan numbers | (positional) | integers | The numbers of the scans to be processed. Add as many scans as required; they will be worked through as a queue. Please note that an exception during runtime will stop the whole queue.
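
As an illustration of how these parameters combine, a call using the ap_full.py script (described in the script examples below) might look like the following; the numeric values and the scan numbers XXXXXX/YYYYYY are placeholders chosen purely for illustration:

Code Block
languagebash
python ap_full.py -j VISITID.json -o 1 -s 1000 -dx 0.25 -range 3.0 --big XXXXXX YYYYYY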


(go to top / table of contents)


Configuration and starting up

Preparation of environment (perform only once for each beamtime)

Using the predefined scripts is the most convenient way to work. It is good practice to copy the script files and the json file into the <visitID>/processing/recoMicroCT folder and to work in there (please replace the YEAR and VISITID placeholders):

Code Block
languagebash
[fedID@machine ~]$ cp -r /dls_sw/i13/scripts/Malte/recoMicroCT /dls/i13/data/YEAR/VISITID/processing/recoMicroCT
[fedID@machine ~]$ cd /dls/i13/data/YEAR/VISITID/processing/recoMicroCT
[fedID@machine recoMicroCT]$ chmod a+rx *

The "chmod" allows you (and everyone else, i.e. also other team members) to read and call these script files.


(go to top / table of contents)


Anchor
nonstandard-savu-plugins
nonstandard-savu-plugins

Non-standard SAVU plugins

If you use any non-standard SAVU plugins, please make sure that each team member has all required plugins copied to his/her savu_plugins folder:

Code Block
languagebash
[fedID@machine ~]$ mkdir /home/fedID/savu_plugins
[fedID@machine ~]$ cp /dls_sw/i13/scripts/Malte/savu_plugins/SAVUplugin1.py /home/fedID/savu_plugins
[fedID@machine ~]$ cp /dls/i13/data/YEAR/VISITID/processing/auto_processing/SAVUplugin2.py /home/fedID/savu_plugins
...
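
If all the required plugins are collected in a single folder, they can also be copied in one go; this is just a convenience variant of the commands above, and the source path is a placeholder that has to be adapted to wherever the plugins are actually stored:

Code Block
languagebash
[fedID@machine ~]$ cp /dls/i13/data/YEAR/VISITID/processing/auto_processing/*.py /home/fedID/savu_plugins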


(go to top / table of contents)


Usage of predefined scripts

To prepare the terminal, go to the script folder in your visit directory and load the required python libraries:

Code Block
languagebash
[fedID@machine ~]$ cd /dls/i13/data/YEAR/VISITID/processing/recoMicroCT
[fedID@machine recoMicroCT]$ source loadPython

This step only needs to be performed once per console or shell session, i.e. repeat it in each new terminal you open.

To start a process, use for example:

Code Block
languagebash
python ap_full.py -j VISITID.json -o 1 -dx 0.5 -range 5 XXXXXX YYYYYY

This will start a full processing of scans XXXXXX and YYYYYY:

  1. Vo centering,
  2. COR refinement using metrics within +/- 5 pixels of the Vo centre with a step width of 0.5,
  3. full reconstruction.


(go to top / table of contents)


On-the-fly scripting

To use autoProcessing from within a script, you need to specify the calling parameters yourself (all as strings). You also need to load the I13analysis module, which can be found at /dls_sw/i13/scripts/Malte. After constructing the object, the individual processing steps are started by calling its methods, as in the example scripts further below.

Code Block
languagepy
import sys
sys.path.append('/dls_sw/i13/scripts/Malte')  # location of the I13analysis module
import I13analysis

# calling parameters, all given as strings; 'XXXXXX' stands for a scan number
args = ['-j', '/dls/i13/data/2018/cm19664-3/processing/auto_processing/cm19664-3.json', '-o', '1', 'XXXXXX']
ap = I13analysis.autoProcessing(args)


(go to top / table of contents)



Script examples

All these examples can be found at /dls_sw/i13/scripts/Malte/auto_processing/exampleNew



Full autoprocessing: ap_full.py

This script will start a Vo centering to find a good starting point for the COR refinement, then refine the COR using Donath's entropy metric before starting the full reconstruction.

Code Block
titleap_full.py
collapsetrue
import sys
sys.path.append('/dls_sw/i13/scripts/Malte/')
import I13analysis

ap = I13analysis.autoProcessing(sys.argv[1:])
for scan in ap.scans:
    print('\n\n%s: Processing scan #%i.' % (I13analysis.GetTimeString(), scan))
    ap.setScan(scan)
    ap.runVoCentering()                          # Vo centering as starting point
    ap.autoTomopyRecAndMetrics(maskData = True)  # COR refinement via entropy metrics
    ap.finalReconstruction()                     # full SAVU reconstruction
    print('\n%s: Processing of scan %i complete.' % (I13analysis.GetTimeString(), scan))


(go to top / table of contents)


COR refinement and final reconstruction around a given COR start value: ap_guessCOR.py

This script will skip the Vo centering and use the value specified with "-cor XXX.XX" as the starting point for the refinement.

Code Block
titleap_guessCOR.py
collapsetrue
import sys
sys.path.append('/dls_sw/i13/scripts/Malte/')
import I13analysis

ap = I13analysis.autoProcessing(sys.argv[1:])
for scan in ap.scans:
    print('\n\n%s: Processing scan #%i.' % (I13analysis.GetTimeString(), scan))
    ap.setScan(scan)
    ap.runSinoCreation()                         # sinogram creation only; the -cor value is the starting point
    ap.autoTomopyRecAndMetrics(maskData = True)  # COR refinement via entropy metrics
    ap.finalReconstruction()                     # full SAVU reconstruction
    print('\n%s: Processing of scan %i complete.' % (I13analysis.GetTimeString(), scan))
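
A typical call of this script might look as follows; the -dx and -range values are example choices, XXX.XX stands for the guessed centre of reconstruction and XXXXXX for a scan number:

Code Block
languagebash
python ap_guessCOR.py -j VISITID.json -cor XXX.XX -dx 0.25 -range 3.0 XXXXXX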


(go to top / table of contents)


Manual center finding: ap_manualCORfinding.py

This script will take the manual input "-cor XXX.XX" and reconstruct test slices in its vicinity (determined by the "-range X.X" and "-dx Y.Y" parameters) without starting a full reconstruction.

Code Block
titleap_manualCORfinding.py
collapsetrue
import sys
sys.path.append('/dls_sw/i13/scripts/Malte/')
import I13analysis

ap = I13analysis.autoProcessing(sys.argv[1:])
for scan in ap.scans:
    print('\n\n%s: Processing scan #%i.' % (I13analysis.GetTimeString(), scan))
    ap.setScan(scan)
    ap.manualTomopyCenterSearch()  # reconstruct test slices around the manually supplied COR
    print('\n%s: Processing of scan %i complete.' % (I13analysis.GetTimeString(), scan))
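
For example, to reconstruct test slices at CORs between XXX.XX - 5 and XXX.XX + 5 in steps of 0.5 (the values are chosen purely for illustration), a call might look like this:

Code Block
languagebash
python ap_manualCORfinding.py -j VISITID.json -cor XXX.XX -range 5 -dx 0.5 XXXXXX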



(go to top / table of contents)


Only perform reconstruction: ap_reco.py

The "-cor XXX.XX" value will be used to perform the final reconstruction of the whole volume.

Code Block
titleap_reco.py
collapsetrue
import sys
sys.path.append('/dls_sw/i13/scripts/Malte/')
import I13analysis

ap = I13analysis.autoProcessing(sys.argv[1:])
for scan in ap.scans:
    print('\n\n%s: Processing scan #%i.' % (I13analysis.GetTimeString(), scan))
    ap.setScan(scan)
    ap.finalReconstruction()  # final reconstruction at the COR given via -cor
    print('\n%s: Processing of scan %i complete.' % (I13analysis.GetTimeString(), scan))
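
A call might look as follows, where XXX.XX is the centre of reconstruction determined previously and XXXXXX the scan number:

Code Block
languagebash
python ap_reco.py -j VISITID.json -cor XXX.XX XXXXXX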


Output and results

Folder and file structure

All results will be written to the <visitID>/processing/reconstruction/<scanNo> folder. The contents of this folder can be:

  • scanXXXX_SavuRecon.nxs: the SAVU config file used for the final reconstruction. Useful to keep so that the processing steps can be checked again at a later stage.
  • tomopy-centering folder: The results of Donath's entropy metrics test will be in this folder. There are several files:
    • scanXXXXXX_cors.npy: the tested CORs in numpy's native file format (binary with header)
    • scanXXXXXX_metrics.npy: the metrics' results for the tested CORs in numpy's native file format (binary with header)
    • scanXXXXXX_metrics.png: A visualisation of the fitting data. The orange dots correspond to computed data points while the blue curve represents the fit around the minimum. The position of the vertical gray line shows the minimum value used for the final reconstruction.
    • scanXXXXXX_slice0000_RecTest_Center.tif: A multi-tif image with the reconstructed centre slice for the different CORs tested. Please note that the "slice" in the filename is in cropped coordinates, i.e. it does not correspond to slice 0 in the full dataset.
    • scanXXXXXX_slice0000_RecTest_Center.txt: A text file to go with the tif file. It includes the image numbers and centres of reconstruction to allow identification.
  • tomopy-manualcentering folder: This folder will be created if the manualTomopyCenterSearch method is called. The content is the same as for the tomopy-centering folder, but computed around the manually supplied COR.
  • YYYYMMDDHHMMSS_XXXXXX folder: The folder with the final SAVU reconstruction. Depending on the process list, this folder can include an hdf file with the reconstructed data and potentially a folder with individual tiff slices. (YYYY - year, MM - month, DD - day, HH - hour, MM - minute, SS - second, XXXXXX - scan number)
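
To get a quick overview of the results for a scan from the command line, the corresponding reconstruction folder can simply be listed; the path below follows the structure described above, with YEAR, VISITID and the scan number XXXXXX as placeholders:

Code Block
languagebash
[fedID@machine ~]$ ls -l /dls/i13/data/YEAR/VISITID/processing/reconstruction/XXXXXX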



scanXXXX_metrics.png example

Status control

To check the status of your jobs on the cluster, use the following commands:

Code Block
languagebash
[fedID@machine ~]$ module load hamilton
[fedID@machine ~]$ watch qstat


To check the status of all SAVU jobs running on the cluster:

Code Block
languagebash
[fedID@machine ~]$ module load hamilton
[fedID@machine ~]$ qstat -u \* | grep savu


To check the status of all jobs on the com10 node (regular SAVU) or com14 node (SAVU big):

Code Block
languagebash
[fedID@machine ~]$ module load hamilton
[fedID@machine ~]$ qstat -u \* | grep com10
[fedID@machine ~]$ qstat -u \* | grep com14
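
If you want one of these overviews to refresh automatically, the qstat call can be wrapped in watch (after loading the cluster module as above), for example:

Code Block
languagebash
[fedID@machine ~]$ watch "qstat -u '*' | grep savu"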


FAQ

  1. Q: The SAVU process has finished on the cluster, but the python script appears to hang and no output appears.
    A: The python script is waiting for the SAVU processing to finish. If SAVU aborts with an error, the script will not recognize this. The most probable cause is that you are using non-standard SAVU plugins which have not been copied to your personal folder. Please check this section: Non-standard SAVU plugins