Users' Software Guide

New: Remote Access Guide

To connect remotely to I18's workstations you need NoMachine; instructions for download and setup are here (where you can also find out how to download your data after the end of the experiment):

https://www.diamond.ac.uk/Users/Experiment-at-Diamond/IT-User-Guide/Not-at-DLS/Nomachine.html

Ignore the instructions on how to open GDA using the Launchers icon.

GDA can only be run from one person's virtual session; this should be the first team member to log in. Everyone else follows the above instructions but has to acquire permission from the person running GDA (a pop-up window will appear at the top right of that user's view). Once permission is given, they will also be able to operate GDA. If the NX connection is terminated (or your PC goes to sleep, etc.) then permission needs to be given again.

Independently of running GDA you can view and process the data by opening Dawn in an alternative workspace in your virtual desktop. To add workspaces click on Activities and in the search bar (top centre) type Tweaks. In the menu of the window that opens click on Workspaces and change the number to e.g. 3. To navigate between workspaces click on Activities; on the right-hand side you will see the 3 different windows, which you can click to select.

 

The menus that allow you to minimise the NX session, change display settings or disconnect can be found by taking the cursor to the top right-hand corner and clicking on the turned-corner icon that appears. From the options displayed at the bottom you want "full screen on all screens" to have access to all GDA windows. To minimise NX you can click on "Iconize". If you want to leave completely, close the iconized window.

IMPORTANT: closing the NX remote connection window will keep the NX session running. There is a limit of 10 days for open sessions, after which the session is automatically terminated, so when you are done with your data collection it is better to terminate the session by logging out of it (top right: power button → username → logout). This is particularly important if you have several beamtime periods close to each other.

 

To start GDA: in Activities choose a terminal from the proposed icons and type "gdalog", then open another terminal where you type "gdaservers". Once you see the message that the servers have started, type "gdaclient" in the same terminal.

NB: in recent GDA versions there are two clients; one is for running measurements and the other for microscope and stage control and for setting the energy. When exiting GDA you need to exit both clients, and even then the command window where you would re-type gdaclient will be very slow in returning the command prompt. To force GDA to exit, press Ctrl+C in the command window and the prompt will become available.

NEW: since GDA 9.18, running the command 'gdaclient' on I18 opens two clients: the first with the Scripts, Experiment, Plot and Mapping perspectives, and the second with the VMA stream and live controls. These can also be opened individually by typing 'mainclient' or 'synopticclient' in a terminal window.

 

To open the Synoptic: it will be easier to have the two systems (GDA and EPICS Synoptic) on separate workspaces. Click on Activities and you will see on the far right-hand side multiple views, from which you can select an empty one. Click on the Diamond logo at the top right and find beamline I18.

 

To start Dawn: from Activities open a terminal and type "module load dawn", press Enter, then type "dawn" and press Enter.

To start PyMca: from Activities open a terminal and type "module load pymca", press Enter, then type "pymca" and press Enter.

 

  Retrieving data after the session. Instructions can be found here:

https://www.diamond.ac.uk/Users/Experiment-at-Diamond/IT-User-Guide/Not-at-DLS/Retrieve-data.html

You can continue using NoMachine to process your data using Diamond's environment and tools. The desktop you used during the experiment will expire after the end of the session, but you can always create a new one by clicking Create New Virtual Desktop and choosing "Automatically select a node".

 

You can also monitor the progress of maps on your smartphone using ISPyB:

With your browser navigate to https://ispyb.diamond.ac.uk/ and log in. You will see your experiment session and, going in, you can see the maps that have run.

The maps you see are unprocessed (so not elemental) but at least you know they have completed.

While a map is being collected you will not see an image, but that is also the case when a map has failed. So a few tips to guess if this is the case: the number of Images will be empty, and if you expand the Auto Processing field you should be able to tell whether processing is running (meaning the map is still being collected); if it says "successful" but you see no image, the map has failed at some point. The part that has been collected will still be available to view and work on (Dawn, PyMca) but it will not be displayed in ISPyB.

 

For XRD/Excalibur live view of the detector: to monitor the data being collected by the Excalibur XRD detector remotely, for setup or during data collection, go to the Synoptic (EPICS) menu (see To open Synoptic above or EPICS control system below for how to access it). In the main window (Fig. 1, the main beamline synoptic window shown below), near the bottom there is a button called 'Excalibur Live View' (above 'Fill Mode/Times'). Click this to open a new box and type BL18I-EA-EXCBR-01:PVA:TX into the channelName box to display the current detector image.

EPICS control system

The EPICS control system controls the motors directly and is currently used for opening shutters and looking at the state of the beamline. It can be used to control most of the motors on the beamline. The EPICS synoptic can be started by:

  • Clicking on the Diamond Light Source symbol (launcher) in the top right-hand part of the Linux workstation.

  • From the drop-down menu select ‘Beamlines’, then ‘BL18I Microfocus Spectroscopy’. After a few seconds the main I18 synoptic will open, which looks like:

Fig. 1 Main beamline synoptic window.

Opening beam shutters and interlocks

From here you can open all the other EPICS windows. There are three shutters on the beamline. First is the port shutter (SHTR0), which is controlled by the control room (x8899). Next is the experimental hutch shutter (SHTR1), allowing the beam from the optics hutch into the experimental hutch. (The third, the sample shutter, is described in the note below.)

  • To open the experimental hutch shutter (SHTR1), interlock the hutch through the normal search procedure.

  • Under “Experiment Shutter” in the main window synoptic (ringed in red in Fig. 1), click the blue ‘Close’ button to get the drop-down menu. Then select “Reset”, then “Open” (Reset must be clicked first in order to reset the interlock).

  • If the shutter does not open, be sure you have searched the hutch correctly. You can also check the interlocks by:

  • Clicking on the Diamond Light Source symbol (launcher) in the top right-hand part of the Linux workstation.

  • From the dropdown menu select ‘Beamlines’, then ‘BL PSS’, then ‘BL18I PSS’, then ‘LOP Overview’.

  • This opens the ‘CS-Studio’ software and then, automatically, a window displaying the interlock status (this takes a couple of minutes; be patient). If the hutch is interlocked correctly the indicators should be green; red indicates an area not currently interlocked.

  • If an interlock remains red after a proper hutch search, call the control room (x8899).

Note: there is also the ‘Sample shutter’, which can be opened and closed directly without the need for the ‘Reset’ selection.

Attenuators & Diagnostics (D6 & D7)

There are two main diagnostics and attenuators along the beamline. Clicking the blue boxes labelled ‘D6’ and ‘D7’ in Fig. 1 gives the available options: different foils for attenuating the beam (mainly various Al thicknesses) or cameras to monitor it. D6A and D7A are the two filter sticks containing the foils. On stick B there is a diode, a screen for a diagnostic camera and a few more Al foils; on D7B there are also the drain current screens.

Fig. 2 Diagnostic and filter options in positions D6 and D7.

  • To put in an attenuator using a foil, from the main synoptic window (Fig.1) press the relevant blue box with ‘D6’ or ‘D7’ tag.

  • In the diagnostic window (Fig. 2), select the desired filter from the box labelled ‘Filter A’ or ‘Filter B’, seeing the options in a drop-down menu (achieved in the above screenshot by clicking either ‘15 µm Al’, ‘0.05 mm Al’ or ‘gap’).

  • The filter is in place only when the green ‘InPos’ button is lit.

  • Looking at the ion chamber value (discussed below) it is usually possible to see the effect of the different filters.

  • D6 filter A should always have 15 µm Al in place unless specifically directed otherwise by the beamline scientist.

Details of the various foils can be found here:

Ion Chamber Readings

To get a general reading of the incoming and transmitted X-ray beam just before and after your sample, relative values can be observed in the ‘Scalar’ window. Make sure the beam is on. From the synoptic click on IONC1 for I0 and IONC2 for It. These open the following screens:

Fig. 3 Ion chamber readings (normal operating mode ‘Ion 1’ = I0 (incoming), ‘Ion 2’ = It (transmitted)).

  • To bring up this window (Fig. 3), either click on ‘User Screens’ from the top left-hand button in the main synoptic window (ringed in blue in Fig. 1), which brings up several screens including the Scalar window (Fig. 3), or click on the ‘Scaler’ button to the right of the ‘Sample Shutter’ label and then select ‘Keithley V to F’.

  • In normal operating modes ‘Ion 1’ (called I0) measures the relative intensity of the incoming X-ray beam before the sample and Ion 2 (called It) measures the relative intensity of the transmitted X-ray beam after the sample (Note: these values depend on many factors and should not be used as absolute values of the beam flux).

  • Monitoring the change in the ‘Ion 1’ reading with the shutter open and closed can indicate whether the X-ray beam is reaching the sample position.

  • Monitoring the change in the ‘Ion 2’ reading can indicate the absorption of your sample (i.e. watch the reading change when moving the energy above and below an elemental absorption edge).

  • If there is little to no change in the readings, this could be due to a wrong gain setting on the ion chambers; the gains can be adjusted as described below.

Setting the gain on the ion chambers (Stanford amp)

  • Be sure the beam is on (the hutch is correctly interlocked and all shutters are open; the synoptic screen shows a blue line to indicate where the beam is reaching (Fig. 1)).

  • From the synoptic click on ‘IONC1’ for ‘Ion 1’ reading (I0) or ‘IONC2’ for ‘Ion 2’ reading (It). These open the following screen:

Fig. 4 Window to change gain settings for ion chamber readings (Showing only ‘Ion 1’(I0) window, It is the same).

  • The ‘Ion 1’ reading should stay between 0.1 (preferably above 1 if possible) and 5 over your entire scan range. To check this, move in energy from the start to the end of your scan (if running in transmission, the same is required of ‘Ion 2’).

  • If this is not the case, the gain can be amended using the two buttons under the ‘Sensitivity’ panel. This consists of a numerical factor between 1 and 500 alongside its units (µA/V, nA/V, etc.).

  • Adjust these freely to find the settings that keep the readings within the 0.1–5 range as well as possible over your energy range. If this is not possible, consult the beamline scientist.

  • A typical sensitivity is anything between 500 nA/V and 2 µA/V, but this will largely depend on your experimental set-up and sample.

  • In some cases you may also need to adjust the ‘Input Offset Current’ settings to access certain sensitivity values:

  • This is done by closing the Experimental Shutter (SHTR2).

  • Increase or decrease the offset value until the ‘Ion 1’/‘Ion 2’ reading lies in the range 0.001 to 0.006. For example, an offset value of 2 to 5 nA should be sufficient with a sensitivity of 500 nA/V.

Feedback: OLD. New instructions coming in the near future

In some cases, despite the beam being on and all shutters open, the feedback may have been lost. The feedback ensures that during changes of the Bragg angle of the monochromator crystals (i.e. changes in energy) the beam remains stable and in the same position. Where a large change in energy, and hence a large change in Bragg angle, is required, this feedback can be lost.

To diagnose, click on ‘User Screens’ in the main synoptic window (Fig. 1, blue ring) to see the feedback readings:

Fig. 5 Window for feedback diagnosis (the state shown in the image is the beam off state).

Under working conditions with the beam on, the top two current readings are similar (at around 1) and the bottom two current readings are likewise similar (around 1). If feedback is lost the readings are far from equal, with one often red with a value of ~-9.99. To rectify this follow the instructions below:

  • Click on ‘User Screens’ in the main synoptic window (Fig. 1, blue ring) to bring up the DCM window (this can also be done clicking ‘DCM’ in the blue box in the main synoptic window (Fig. 1)).

  • In the screen below (Fig. 6), if feedback has been lost, either or both of the ‘2nd Crystal Fine Pitch’ and ‘2nd Crystal Fine Roll’ will be flashing with a value around ±250.

Fig. 6 Status of the monochromator.

  • In order to reset, click on ‘User Screens’ in the main synoptic window (Fig. 1, blue ring) to bring up the fine motor screen (shown below is the ‘Fine Roll’ only but exactly the same for ‘Fine Pitch’ if it needs resetting):

Fig. 7 Fast Roll feedback settings (Exactly same for Fast Pitch).

  • If the Fine Pitch is flashing this must be done in the Fast Pitch window; if the Fine Roll is flashing, in the Fast Roll window.

  • Usually the Feedback settings (Fig. 7, red ring) are set as ‘Auto’. This needs to be changed to the status shown in the image, by clicking ‘Auto’, selecting ‘Manual’ then selecting ‘Off’.

  • At this point, back on the DCM window, the flashing fine motor must be set to 0, by clicking the ‘To 0’ blue button. (Fig. 6, ringed red).

  • Next adjust the ‘2nd Crystal Pitch/Roll’ by hitting the positive or negative buttons until feedback is picked up on the feedback window (Fig. 5) (if the fine crystal motor maxed out at positive ~250, adjust the main motor positive, and vice versa).

  • Once the numbers on the feedback window start changing (Fig. 5), change the settings on the Fast feedback window (Fig. 7) back to ‘on’, then ‘Auto’ and the beam should recover. If feedback is still not present, repeat the above steps.

Restarting IOCs

In some cases, if a process is misbehaving or failing, it can be due to a server issue. In this case it is worth restarting the server or the IOC. Below are steps to restart the IOC for certain processes, all of which can be reached by clicking ‘Hardware Status’ on the bottom left-hand side of the main beamline synoptic window (Fig. 1). If you have issues with the processes below, they can be restarted as follows:

  • Mapping issues – click ‘RESTART IOCS’, ringed in red in figure 8 below (when the green button next to it is lit, the restart is complete).

  • Processing files are not appearing – click ‘procServControl’, ringed in black, on ‘Processing consumer’ in the Description column. This brings up a new menu; click the ‘Restart’ button (or ‘Stop’ then ‘Start’). It takes a little time, so be sure the IOC has fully restarted before you continue.

  • XANES/EXAFS measurements stop/drop out – click ‘RESTART IOCS’, ringed in red (when the green button next to it is lit, the restart is complete).

  • Issues with EXAFS/XANES scans starting – click ‘procServControl’ (below the button ringed in orange) on ‘TFG2.da.server’ in the Description column. This brings up a new menu; click the ‘Restart’ button (or ‘Stop’ then ‘Start’). It takes a little time, so be sure the IOC has fully restarted before you continue. After this reset you have to restart the GDA software.

 

 

Fig. 8 Showing the Hardware status screen displaying status and controls of the IOCs.

I18 Webcams

Pan-Tilt-Zoom cameras are available in I18 for viewing the sample alignment and the overall arrangement of the experimental table. They can be repositioned on request: Experimental Hutch 1 (negative x side), Experimental Hutch 2 (ceiling), Experimental Hutch 3 (positive x side). When you are finished viewing a camera it is generally advised to close the window, as connections to these cameras can generate a lot of network traffic.

Moving the microscope cross (aligning to the beam)


To move the microscope use computer 2 on the KVM switch (lower monitor of two).
The program is APT USER and it shows screens for 3 motors, one of which is irrelevant (it controls an auxiliary rotational stage; you can tell which screen it is because the "travel" field reads 360. You can close this screen if you want). The remaining two controls are for the X and Y movement of the microscope; the step size of each move is adjusted in the settings window.


Using picomotors via EPICS (for tomography)

  • Setup picomotors on stage

  • Connect picomotors to controller

  • Connect controller to ethernet cable (cable can be plugged in to 4527E or 4524E on BL 18I-EA-RACK-01) and connect power supply

  • The order of setup appears to be:

    •  Make sure IOC is off (in hardware status Newport picomotors - BL18I-MO-IOC-14).

    • Turn on the picomotor controllers

    • Start the IOC

  • The controls are located, from the main synoptic, 'Equipment>Picomotors>Motorised Mirrors'

  • The soft limits are always reset to zero, so this can be adjusted or ignored.

  • The above order seems to work (although it has not been tested repeatedly); otherwise the EPICS motors may remain red after changing the soft limits and no movement is seen. If this happens, try repeating the setup order above for starting the controller and IOC.

Using syringe pump via EPICS

  • Connect syringe pump to ethernet cable through the RS232 to ethernet cable adapter, connect the other end of ethernet cable to Serial port 6 on BL 18I-EA-RACK-01 (below Xspress 3A connections) and connect power supply.

  • Start the IOC, in synoptic > hardware status > BL18I-EA-IOC-12 > procservcontrol > start

  • The controls for the syringe pump are located from the main synoptic, ‘Equipment > Fusion4000’.

Using Eurotherm (and hot air blower) via EPICS

  • Connect the hot air blower to the Eurotherm as given in the manual below and connect the power supply.

  • Ensure air is flowing before turning on heater (all other precautions, safety info for hot air blower are given in manual below)

  • Connect RS232 to ethernet adapter to back of Eurotherm and connect the ethernet cable to the port 4525E on BL 18I-EA-RACK-01.

  • Start the IOC, in synoptic >hardware status > BL 18I-EA-IOC-07 > procservControl > start

  • The controls for the Eurotherm are located from the main synoptic, ‘Equipment > Eurotherm’

  • Right now automatic mode is working (need to test manual mode).

  • Follow shutdown procedure for hot air blower as given in the manual.

GDA Mapping perspective : Setup and display of maps

This is a quick and non-comprehensive guide.

Issues not covered here may be found in the general Troubleshooting section.



Setting up a map:

-In Configure Beamline you enter Z position (t1z or t3z) and Energy.

-In Detectors select T1 X v Y if you are using stage T1 (T3 for stage 3). Next to this you enter the exposure time per point. Snake acquires data in both directions of X; ticking/unticking Continuous is irrelevant. If you are in Grid you enter the number of data points for the fast axis (X) and the slow axis (Y) (see the calculation of pixel size at the top of the tab), while in Raster you enter the pixel size (the reverse calculation of number of points gives the total number of points for the map rather than per row/column).

-In Region shape enter coordinates or draw region on camera image (see further below).

-When ready, Queue the scan. The queue starts automatically, so if you want to explore the sample and queue a number of scans before starting, you have to Pause the queue first, add your scans and then Unpause. You can change the order of priority of scans; note that the queue starts from the bottom. It is helpful to clear the queue of expired scans: right-click on any of them and Clear Queue.

-You can load back the configuration of a previously run scan as long as it still appears in the queue. If you right-click on the selected scan and click Open, it will populate all the parameters. This does not work 100% of the time...

-New: You can now Save the current map parameters to reload them later. This should update most of your previous map parameters (but do check) and, more importantly, recover the Processing template (where the elemental windows have been defined) if it has been lost after restarting GDA. You can also Load an existing Processing template: click on Add Processing and then Use a pre-existing file. Navigate to the tmp folder of your sp directory and you will find the files, named xrf_window2-Xspress3A*.

If, however, everything fails and after you restart GDA you do not have the correct Processing template, you can create it:

In Processing select template “xrf_windows” and “Xspress3A”...Add... In the screen that appears click Acquire...Next... Select elements from the periodic table (and Ka or La...). Change the width to 40 channels...Finish. This template has to be ticked for all maps.

 

- Mapped Data tree: whatever is in bold will be displayed. To remove a map from view, double-click it; to remove it completely, right-click and Remove.

-The Sample name field does not appear in the filename but in the file itself; however, you will see the name you enter in the queue. The file name is a number that is automatically incremented.

-If you are using the EXAFS Selection panel in Mapping and it gets lost after a GDA restart, you can recover it by going to the Window menu and then Show View. In the search panel of the window that pops up type EXAFS and the correct panel will be found, which you can load.

-Occasionally when you start the Client the Mapping Experiment Setup screen is not loaded correctly, so you have no way of selecting the correct "Detector". If that's the case, close this view and reload it by going to the Window menu and then Show View. In the search panel of the window that pops up type Mapping Experiment Setup and the correct panel will be loaded.

-Old bug: going from XAS to Maps you have to restart a number of servers: Malcolm, Panda and Xspress3. Refer to Troubleshooting page 2 (Map problems); this shows you the process for Malcolm and Panda; repeat it for the Xspress3 server, which is also in Hardware Status.

-Old bug: if you restart the GDA servers the Queue in the Mapping perspective may not be the correct one. You will know because it will be empty, and it will remain empty even if you queue new scans. Close this queue and load the correct one: click Window: Show View: Other and in 'type filter text' type Command Queue. Select it from the menu. If you then save this perspective it should reopen with the correct queue in future.

GDA Tutorial Videos:

Running an XRF scan

Setting up a XANES Measurement in Fluorescence mode

 

Maps: Off-line Dawn Processing



Reprocessing is done in Dawn.

(As long as a processed file exists, maps can still be windowed or fitted in PyMca; on a Windows machine you need to move the raw data file into the same folder as the processed data for the processed file to open.)

On-site (or offsite but on NoMachine): In a command window on a linux workstation type:

module load dawn

Press Enter, then type dawn and press Enter.

Off-site: Download the most recent version you find at : http://opengda.org/DawnDiamond/master/downloads/builds-snapshot/

When you start Dawn you will need to load the correct perspective for the data you want to load. To do this click on the icon circled in red:



Summary of files produced and how to use them (and which Dawn perspective) :



-Raw nxs: in the top directory. Can be viewed in DataVis (or Data Browsing). The maps are not useful for much more than a quick look, but this file contains all the experimental parameters (e.g. energy, time, scan range). More on this later.

-Processed: basically the signal from all detector elements has been summed. These appear in the “Processed” directory (unless you created them manually, in which case in Processing). They are called “***-xrf_window-***”. They contain two kinds of entries, “result/data” and XRF maps. The first is what AllElementSum used to be, while the second is similar to the old-style RGB files. Can be viewed in Mapping.



NB: at various places in this guide it is mentioned “If the Processing falls over...” and what to do about it. If it does stop working you will know because a raw nxs file will not have an equivalent “xrf_window” file in the Processed directory. If this happens you can reconstruct the missing processed maps as in section 3 of Dawn processing. But also restart the Processing server to get automatic processing working again:

In EPICS find and click on Hardware Status on the bottom left of the Synoptic screen.



All the way at the bottom, click on the procServControl of the Processing consumer. On the screen that comes up click on Restart.
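To spot such gaps in bulk, a small Python script along these lines can list raw scans with no matching window file. It is only a sketch: the visit path and the exact file naming are assumptions to adapt to your own directories.

from pathlib import Path

visit = Path("/dls/i18/data/2019/sp12345-1")       # hypothetical visit directory
processed = {p.name.split("-")[1]
             for p in (visit / "processed").glob("i18-*xrf_window*")}

for raw in sorted(visit.glob("i18-*.nxs")):
    if raw.stem.split("-")[1] not in processed:    # scan number missing from processed/
        print("no processed file for", raw.name)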



File viewing/processing in detail

Viewing raw files for checking experimental parameters. Files from top directory, DataVis perspective.



You can drop the map nxs file from a file browser into Data Files in the DataVis perspective so that you can look at the various entries recorded. Select the file and right-click; on the menu that appears select View Tree.

Expand the entry and from there you can find:

Acquisition time in “instrument/Xspress3A/count_time”

Energy in “instrument/DCM”

Map parameters, e.g. the step, in “solstice_scan/scan_cmd”: in the info appearing on the right-hand side you have the map coordinates and step size
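If you prefer to pull these values out in a script rather than through View Tree, an h5py sketch follows; the file path is hypothetical and the top-level entry name and exact sub-paths are assumptions, so check them against the tree first.

import h5py

fname = "/dls/i18/data/2019/sp12345-1/i18-160838.nxs"   # hypothetical raw map file
with h5py.File(fname, "r") as f:
    entry = f["entry"]                                  # top-level entry name may differ
    print(entry["instrument/Xspress3A/count_time"][()]) # acquisition time
    print(list(entry["instrument/DCM"].keys()))         # the energy lives under here
    print(entry["solstice_scan/scan_cmd"][()])          # map coordinates and step size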



Viewing XRF maps in Mapping perspective

Files from the Processed directory contain the total map (/entry/result/data) and the windowed elemental maps. Mapping perspective.

This view is very similar to GDA’s live view.

Drag and drop a Processed file into Mapped Data. Select any of the elemental XRF maps and click on the map to see the pixel MCA.

If you select more than one XRF map and right-click, you get a Compare view and an RGB mixer (select from the drop-down menus).

You can also drag in a saved microscope image for superimposing as in GDA’s Mapping Perspective.

There are various ways to save images, all accessible from the drop-down menu next to the Printer icon.



-“Save screenshot as” will produce a png (and this works for camera & map superimposed)

-“Export data to tif/dat” will produce a tif that may not be usable as a plain image but carries the pixel elemental intensity values, so it can be loaded into e.g. ImageJ for further analysis.



Manually creating Processed files

If something has gone wrong and no processed maps have been produced then you can create them.

Files from the top directory; use /entry/Xspress3A_sum/sum in the Mapping perspective to produce a processed file in the Processing directory.

 

You need the raw nxs file (top directory) and the Mapping perspective.

-Drop the nxs file in Mapped data

-Double click on Xspress3A_sum and click anywhere on the loaded map image.

-Below, in Detector Data, click on the spanner (Image Tools) and select Processing Image. This opens a new tab next to Detector Data.

-In Processing image tab click under Name and you should be able to type XRF Elemental maps from ROIs.

-Click on Live setup (2nd icon after Run )

-Choose the elements and change the ROI width to 40 channels. Finish.

-Press Run to process. It will ask you for a file path to save. You want to be saving in the Processing directory so type “processing” after your experiment number in the path suggested (eg /dls/i18/data/2016/cm14473-5/processing).

NB: if you are reloading maps into GDA for selection of XANES points from maps, you need to select Link Original Data in the dialogue that comes up. If you are only interested in maps then you can accept the default option (processed only).

-The resulting file will contain the elemental maps and the result/data entry which is what you load into Pymca ROI imaging.

 

If you want to process a number of files you can use the Processing perspective instead: read in the files (or drag and drop) in the Data Slice View, select the xspress3a/data array, and then set 1 to range, 2 to range, 3 to x and 4 to y.

In Processing tab under name type XRF and select XRF Elemental Maps from ROIs. In the Model tab click on the Live Setup value and from the Periodic Table select the elements you want. In Output you should now see the energy spectrum. Direct the output to the processing directory.



Normalising Maps to I0



In Dawn's Processing perspective drop the raw nxs file (from the top directory) into the Data Slice View. In Select Dataset choose the /entry/xspress3a/data array (or it could be just xspress3a/data) and you should see the following for Dawn 2.11 onwards.

 

 



For earlier versions of Dawn you have to click under “Type” to set up the axes as in the following screenshot:







In Processing tab under name type Divide Internal Data. In the Model tab type / and select /entry/I0/data.

Back in Processing click on Insert Operation icon and under Divide start typing XRF and select XRF Elemental Maps from ROIs. In the Model tab click on the Live Setup value and from the Periodic Table select the elements you want. In Output you should now see the energy spectrum.

Depending on which operation is highlighted you should see the following screenshots (if you don't see the expected Output for the selected operation, flick between the two operations and only proceed to the next step once it looks as expected).



 



Press Play (the green button in the Data Slice View) and direct the output to the processing directory. If you drop all the files you want to process into the Data Slice View, you can batch-process them by clicking the Play button.
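Numerically, the Divide step just divides every MCA channel by the I0 value of its pixel. A rough numpy equivalent is below; the dataset names and shapes are assumptions (check them in the Data Slice View), so treat it as a sketch of the operation, not a replacement for the pipeline.

import h5py
import numpy as np

with h5py.File("i18-160838.nxs", "r") as f:        # hypothetical raw map file
    xrf = f["entry/xspress3a/data"][()]            # assumed shape (ny, nx, channels)
    i0 = f["entry/I0/data"][()]                    # assumed shape (ny, nx)

normalised = xrf / i0[..., np.newaxis]             # divide every channel by its pixel's I0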



 

XRD Processing in Dawn

Creating a calibration file

Select the Dawn “Powder Calibration” perspective and load (drag/drop) in your reference data e.g. LaB6

 

  1. Tick the “Show calibrant and beam centre” option on the left of the page. This will display the rings used to set the calibration and will also display d-spacings if rotated to the correct angle (ca. -140°). At both 0 and 180° a horizontal red line is also displayed.

  2. Next, set the “Rings to use (from inner)” option to 9 for LaB6 or 3 for Si.

  3. Set the energy of the beam in the “Powder Diffraction” data window on the right-hand side of the screen, e.g. 13 keV (also available as eV or Å). Next set the detector type, e.g. PS_cmos_i18 (this can be configured if not listed, based on pixel size/number etc.). Enter the (approximate) detector distance; this will vary as you move the calibration rings but it is good to set a rough value initially.

  4. Set to manual not automatic. This will cause a small array of buttons to appear that can modify the ring position and shape:

     

    All of the settings can also be changed by entering the numbers in the “Powder Diffraction” data window. It is very important to ensure the correct energy before starting this process.

  5. Next click on “Match rings to image”. This will match a series of yellow points between your image and the calibration rings you have aligned over it.

     

     

  6. Make sure “Point calibration” is ticked. This will use the data from the rings you have just matched. Select “Fix Energy” to ensure that the energy of your incident beam is not used as a variable (NB take care not to select it under “Ellipse Parameters”). When all looks good, Run Calibration, which will refine the detector distance, centre of beam, yaw/pitch/roll etc. and display your calibrated pattern at the bottom of the window.

     

  7. It is important (and no doubt obvious) to note here that the quality of the image you collect will affect your calibration. If your reference pattern contains several diffraction spots (from single large-crystal diffraction vs. powder diffraction), as is the case with the LaB6 sample here, these will skew the position of the integrated ring and could result in a poorly calibrated pattern. In this instance it is worth moving the sample around in the beam when collecting the pattern, to collect as complete rings as possible.

     

    To save your calibration click “Export metadata to file” at the top right of the “Diffraction Calibration View” window. This will save it in a Nexus file format.

Applying Calibration and azimuthal integration

Go to the Processing perspective and load your file in the Data Slice View.

In the Processing window next to it you need to add two processes: one is the calibration and the other is the azimuthal integration. To do this click the Insert Operation icon and type Import Detector Calibration. In the window to its right, navigate to the calibration output file you saved at the end of the previous section.

Back in the Insert Operation menu enter a second process, so type Azimuthal Integration. In the dialogue on its right you have the option to choose the X axis as 2-theta, q or “resolution”, which means d-spacing.
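For reference, the three axis choices are related by the standard diffraction formulas; a small Python helper is below (the 12.398 keV·Å conversion constant is the usual approximation).

import numpy as np

def axis_values(two_theta_deg, energy_keV):
    wavelength = 12.398 / energy_keV               # X-ray wavelength in Angstrom
    theta = np.radians(two_theta_deg) / 2.0
    q = 4.0 * np.pi * np.sin(theta) / wavelength   # momentum transfer, 1/Angstrom
    d = wavelength / (2.0 * np.sin(theta))         # d-spacing ("resolution"), Angstrom
    return q, d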

In the Output window you will see the product of the azimuthal integration. If you only have one image in the file then this is the final product that you can then export in various formats. Press the little arrow next to the Printer icon to see your options.

If you have multiple images in the file then in the Data Slice View you have a Play button; this will run through the above operations for all slices contained in the file. It will produce a nexus file that you can then view in the DE Explore perspective. It's the Data field in Result that you want, and you can scroll through the different images in the Data Slicing view below. From the Dataset Plot you can choose how to export the data as explained earlier.



Example of a pipeline for XRD processing

The specific files that will work for your data will be set up during your experiment, but the following is an example of a typical pipeline.

The last operation subtracts a background pattern that exists throughout your data, e.g. from the substrate. To do this you need to record single-frame data from the substrate (or a small map from which you create a new file with a single averaged entry). This needs to be processed by the pipeline (up to and including the baseline correction) and the output (a 1D plot) saved as a nexus file. This is the file that is pointed to in the model. The “dataset” path needs to reflect the name of this file.
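If you prefer to build the single averaged entry in a script rather than in Dawn, a rough sketch is below. The dataset path (/entry/Excalibur/data, as used in the XRD_ROI section) and the axis order are assumptions to check first, and the averaged frame still has to go through the pipeline up to the baseline correction as described above.

import h5py
import numpy as np

with h5py.File("i18-160900.nxs", "r") as f:        # hypothetical substrate map
    frames = f["entry/Excalibur/data"][()]         # assumed (ny, nx, det_y, det_x)

# average all scan points into one detector frame
mean_frame = frames.reshape(-1, *frames.shape[-2:]).mean(axis=0)

with h5py.File("substrate_average.nxs", "w") as out:
    out.create_dataset("entry/data/data", data=mean_frame)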

 

OLD : Converting a series of tiffs to hdf5 (including calibration and azimuthal integration)

  1. make a copy of the template.dat file found in Y:\i18\scripts\userexpts\XRDmapping

  2. Change file path to correct location and edit numbers to match all of the tiffs you want to process. Ensure there are no blank lines after you input the numbers. Each number must be on a separate line. Save the file.

     

  3. The ".dat" file can be drag/dropped into the “Data Slice View” of the “Processing” perspective. If the file has been written correctly, a window should pop up showing the 1st 2D diffraction image in your list of files to process (check that the number of files are correct – the other dimension numbers are the detector pixel dimensions 2083*4150). You can then add in the processing steps you want (typically import detector calibration, azimuthal integration…) and then run the script.

  4. More instructions on using the processing and other perspectives can be found at DawnScience.org, on the YouTube tutorials, or by speaking to one of the scientific software group.

  5. Open the “Python” perspective and load in the “Reshape_batch_reduced_2theta.py” file. Save the file in your Dawn workspace (window on left of screen).

    Run the script and follow the instructions

    The file (currently) will be saved to the directory where the script is.

Tomography Reconstruction in Savu


Running the Reconstruction with automated Python Script

  • Login to linux workstation or remote login to linux desktop.

  • Copy Tomo_recon_setup folder and entire contents from /dls/science/groups/i18/software/Savu_reconstruction_software/savu_sab_batch to your processing directory of your experiment folder.

  • Go into Tomo_recon_setup, then script_files and open user_input_file.py in a text file editor.

  • Here you fill in the first and last scan (or slice) numbers. The first number goes next to slice_start, the last next to slice_end. Write in your experiment number (eNum) and the year the data was collected (yR) where directed in the script, only if you are not running the analysis in your processing folder.

  • Choose the number related to the type of data to be analysed from the given list. This selects a standard config file to reconstruct that data type: XRF, Trans and XRF_ROI, XRD or a mix. Other options are below in the file, including producing Avizo (.raw) files (True or False) if you use the XRF_ROI.nxs, XRD_ROI.nxs or XRF_pymca.nxs config file options.

  • Save file.

     

  • Open terminal in the script_files folder.

  • Type ‘module load python/3’, enter.

  • Type ‘python run_tomo_reconstruction.py’ (or type ‘python r’ then tab to complete).

  • The script is set up to generate the commands to run the reconstructions, automatically starting them on the cluster. Once the reconstructions are complete it will automatically copy the relevant Dawn files into a separate folder with the scan name in the file name. These can then be viewed in Dawn.

  • When you run it, a terminal will appear that submits the reconstructions to the cluster. It will then update you, printing that it is waiting for the last submitted file to start reconstruction. Once this starts it will update to say it is waiting for it to finish. Then it will inform you it is copying the files to the stack folder. Once completed it will say File Completed.

  • The slices are reconstructed into a new folder located in the Tomo_recon_setup folder, called ‘Tomo_reconstructed_files’. In here is a time-stamped folder named ‘Recon_files_xxx’; every time you run the reconstruction the results are put into a separate time-stamped folder.

  • In the ‘Recon_files_xxx’ folder is a folder called ‘Stack_file’ which contains the slice files which can be loaded into Dawn. If Avizo files have been produced there is a further folder named avizo containing the files.

  • If the reconstruction failed it will print ‘Reconstruction failed’. This usually means the config file setup is not correct. The ones provided are for general cases and may need modification for specific purposes.

  • See below for config file setups

Config_file changes

  • The config files supplied are typical pipelines for data reconstruction. The file can be loaded, viewed and modified in a linux terminal; for example, if you need to add in the file path for your pymca file or set the centre of rotation manually, you must first amend the config file.

  • To amend the config file, in the same directory as the config file, open a terminal and type “module load savu” in the terminal command line and ‘enter’.

  • Then open the config file by typing “savu_config” and ‘enter’.

  • After “*** Press Enter for a list of available commands. *** “  appears type “open <config_filename>.nxs”, ‘enter’.

  • Once this loads, there is a list of the current steps (known as plugins) that will be executed in order as the given number. Those that are in white text will be executed and those that are in red text are turned off and will be skipped.

  • The config file contains several operations required for different data sets; you add, delete or turn steps on or off depending on what needs to be done to the data.

  • To view the possible commands to modify the plugins, hit ‘enter’, or type the command (as shown after ‘enter’ is hit) followed by –h to show options for that command.

  • To display all current plugins type ‘disp’ or for information on a specific plugin type ‘disp <plugin_number>’.

  • Autocentering has been included but may not give correct centering. In this case, turn off the VoCentering plugin (‘set X off’ where X is the plugin number), find the reconstruction plugin number (called ‘AstraRecon’) and type ‘<plugin number>.7 <centre of rotation pixel number>’, ‘enter’ (to find the pixel number see Finding Centre of Rotation below). On how to enter values read: https://savu.readthedocs.io/en/latest/user_guides/user_training/ and the info under special features.

  • For adding the pymca step, you must type ‘set 3 on’ for the XRF config files, then type ‘mod 1.2 <pymca_configfile_path>’. This will then run the pymca fitting analysis on your data.

  • Once you are happy with the config file, type ‘save <config_filename>.nxs’.

  • YOU MUST KEEP THE SAME FILE NAME IF YOU WANT TO USE THE AUTOMATED METHOD TO RUN THE RECONSTRUCTION USING THE PYTHON SCRIPT.

XRF_ROI reconstructions

  • When collecting data, if you have added ‘processing’ and selected elements of interest, the XRF_ROI config can selectively reconstruct the data related to only those elements that have been chosen.

  • You can run it the same way as above, choosing option 9 as the config file. If you want Avizo files also produced, set avizo = True.

  • In the terminal, when running the script, it prints out which axis number in Dawn relates to which element; this is also saved as a text file in the stack file folder.

  • When running the analysis this creates new files separating out the relevant sinograms and then selectively reconstructs that data. These initial files are found in a time-stamped folder called ROI in the Tomo_recon_setup folder. In here is the text file telling you which axis in Dawn relates to which element.

XRD_ROI reconstructions

  • A method to fit XRD data to give a reconstruction based on parameters such as fwhm, giving a positional dimension to fitted aspects of the XRD data.

  • First process ONE XRD map/scan to give the integrated data in the Dawn software via the dawn pipeline processing:

  • Go to the processing tab in Dawn and load in the data file; choose /entry/Excalibur/data as the dataset. This should show the raw detector data, with Y and X on dimensions 2 and 3.

     

  • Once the scan is loaded, load a templated/configured pipeline in the processing tab (typical pipeline is in the Tomo_recon_setup folder). To this add the calibration file in the calibration step and optimise the other steps to get the integrated XRD pattern. Save the optimised pipeline.

     

  • The green button on the scan list starts processing the whole map data; it first asks you for a save location.

  • Next load the processed scan into the mapping perspective in Dawn and click ‘integrated’ to show the map; click on the data in the map to show the processed detector data.

     

  • Next to the detector data tab is a button ‘XY plotting tool’; click the drop-down arrow, go to ‘Maths and Fitting’, then ‘Function fitting’.

    This opens a tab which allows you to fit peak types to the data; the baseline is added automatically and you need only add the peaks (position/fwhm/area). The more peaks, the longer the processing will take. Once happy, save this by clicking Export Function Data to HDF5 file and save the fit.


  • Next return to the processing tab in Dawn, load the earlier configured pipeline and add the saved fit by adding a final step to the pipeline called Import Fit Function; in this step the fit function can be added.

     

  • Load the scans in the same manner as previously; it is possible to line up several files to run. The green play button on the scan list starts processing the whole map data; it first asks you for a save location.

  • Move the processed files from the Dawn pipeline to the folder XRD_fitted_files in the Tomo_recon_setup folder.

  • Then go to the script_files folder and open the user input file. Choose the scan numbers of the processed data you made from Dawn; for the fitted XRD data, only config_file 10 can deal with this data. In the options below, choose the data type to reconstruct that you got from the processing (area, fwhm, position). There is also an option to do the strain conversion; if you want to calculate that (using the position data), posn must be True, calcStrain must be True and a number given for d0 (if written, the equation after the hashtag will be applied). Once the changes are made, save the file.

  • This can be run by the same process as described above.

  • Talk to an I18 scientist if you encounter problems or if you need any additional processing types.

Finding Centre of Rotation

  • In the config files autocentering is on (the VoCentering plugin) but it may sometimes not be correct. Here is a reliable method to find the centre of rotation manually if autocentering does not work.

  • In dawn load the sinogram in DataVis (quicker to use sum data).

  • Here find the pixel number of tx that needs to be input into the config file (not the actual value in mm). This is the middle point of the 180 degree rotation.

  • i.e. find that value in mm of t1x, then plot t1x only (one of the options in the processed data file). Find the point on the line at that value in mm, then read off the corresponding pixel number (a Python sketch of this lookup follows this list).

  • This can usually be done on one slice as the centre of rotation should not vary between slices.

  • It’s then best to do a few reconstructions across a range of centres of rotation, as this is still slightly inaccurate. The range can be typed into the config file as ‘start:stop:step;’ in terms of pixel number, into the AstraRecon plugin centre of rotation in the savu config file (see Config_file changes above).

    o   If doing transmission or XRD reconstruction use a wide range, i.e. 40 or 50 reconstructions, as this is a fast reconstruction

    o   If doing XRF reconstruction it is best to use a lower range, i.e. 10 reconstructions, as this is slow: each reconstruction consists of 4095 reconstructions (hence 10 x 4095 reconstructions), unless you are using ROI or pymca.

  • Once the reconstructions are done in the normal way, the reconstructed slices can be loaded into Dawn in the DataVis perspective, which will now have an extra axis displaying the different centres of rotation (the last axis).

  • Rather than producing separate config files and inputting the centre of rotation for each slice (as these can change slightly between slices) then redoing the reconstruction, it is easier to do the following method:

    o   Find the best centre of rotation for each slice by going through the numbers on the new axis in Dawn.

    o   Note down the axis number for each scan number, then edit and type them into the get_cor.py script (follow the instructions in the file) to produce a new set of files that take only the chosen axis number. (Type ‘module load python/3’ in a terminal in the same place as the script, enter. Type ‘python get_cor.py’ to run the script.) Only the last three digits of the scan number are needed.

    o   The new files will be saved in the Stack_file folder of your reconstructions in a folder called cor_slice. See Running the Reconstruction via Python Script to see how and where the stack_file is generated.
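The sketch referred to in the list above: a rough Python version of the manual lookup. The t1x dataset path is an assumption (check it with View Tree), and it simply takes the midpoint of the t1x range and finds the nearest pixel index, which is one way of reading off the value described above.

import h5py
import numpy as np

with h5py.File("i18-160838.nxs", "r") as f:                  # hypothetical scan file
    t1x = np.asarray(f["entry/instrument/t1x/value"][()])    # stage positions in mm

centre_mm = 0.5 * (t1x.min() + t1x.max())                    # midpoint of the traverse
centre_pixel = int(np.argmin(np.abs(t1x - centre_mm)))       # nearest pixel index
print(centre_mm, centre_pixel)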

Running from Terminal

  • In order to reconstruct the data in a terminal, write into the terminal the command “savu_mpi <file_to_be_reconstructed> <config_file> <reconstructed_file_location>”, then ‘enter’.

  • For example, if your file is in your processed folder and you want to save to the current directory where you opened the terminal, the command would be :

  • “savu_mpi /dls/i18/data/2019/cm22957-5/processed/i18-160838-xrf_window_40-Xspress3A3.nxs config_file.nxs . “ (this final part is a space, full stop and another space )

    o   <file_to_be_reconstructed> = “/dls/i18/data/2019/cm22957-5/processed/i18-160838-xrf_window_40-Xspress3A3.nxs”

    o   <config_file> = “config_file.nxs” (if the terminal is open in the same location as the config file; otherwise it should be ‘path/config_file.nxs’).

    o   <reconstructed_file_location> “ . ” (this final part is a space, full stop and another space – this is short hand for current working directory, if you want to change the location, specify the directory instead of the full stop)

  • The resultant data is in a newly created folder in the specified output directory, with a long name full of numbers containing the same name as the end of your input file. In this folder the reconstructed data contains “astra_recon” in the file name and can be loaded into the DataVis perspective in the Dawn software.

Older methods and scripts. Files/scripts/templates for reconstructions: the files below are saved in groups/i18/software and need copying into your sp processing directory:

Pymca config file (if peak fitting for XRF)

Folder: “templates” : you may need to edit the paths for fine-theta, table_x and energy.

Recon_Setup.nxs is the configuration setup that you edit for your files. You can rename it.

Script: generate_savu_proc_list.py (generates list of files for batch fit and reconstruction)

Script: savu_batch_proc.sh

Script : gen_fluo_avizo.py

For all of the following steps you want to navigate to the processing directory of your experiment.

It will help if you put each set of processed files in a separate folder.

First you want to look at some sinograms in Dawn to estimate which range of centroids is a good starting guess for the reconstruction.

You can view sinograms in Dawn, DataVis perspective as in the screenshot:

 

Savu configuration setup

The setup includes many parameters some of which will be "on" or "off" depending on your experiment.

To edit:

module load savu

savu_config

open Recon_Setup.nxs

disp -v -a shows you more details of the current setup

set 3 off (or on) turns steps such as monitors on or off

mod 6.7 N1:N2:step; where N1-N2 is the range of centroids; you can start with a wide range and large steps and incrementally reduce it to settle on the best choice.

save Recon_Setup.nxs

To run (on a different terminal) :

module load savu

savu_mpi /dls/i18/data/2017/sp****/processing/particle1-processed-files/i18-88616-xrf_window-Xspress3A.nxs Recon_Setup.nxs .

NB: there is a space before the . at the end of the command line...

you can check progress with the tail command eg : tail -f Ir72_recon/20181114164519_Xspress3A/user.log (copy command line from terminal)

You will see when it is finished; to get back to the command prompt type Ctrl+C.

A successful run should result in a folder with a long name and 7 items. The reconstructed map is named fluo_p2_astra_recon_gpu.h5 and you can view it in Dawn, DataVis perspective. As in the screenshot below, change dimensions 0 and 1 to be X and Y. If you specified 1 centroid you will only have one extra dimension (as in the screenshot), which represents the elements fitted in PyMca. By sliding across the range you can visualise the particle for different elements (see below how to find which one is which).

If for the reconstruction you chose a range of centroids you will have one extra dimension below the XRF slider. By sliding across the centroids you can see how the artefacts change and choose the best centroid (check for different elements). You can then edit Recon_Setup and put in the centroid you have chosen.



 

 

Numbering of the XRF elements starts from 0; to find which is which, drop into DataVis the file with the long name (i18-******_Xspress3A). In Data files select the file, right-click and choose View Tree. In the window that opens, expand as shown in the following screenshot; in the PeakElements string you can see (and count) the XRF windows you have chosen.

Batch Reconstruction

With a text editor edit generate_savu_proc_list.py and give it:

the path to the input files (the processed ones that you have put into a folder in the processing directory); there are 2 places where you need to enter this

the path to the output script savu_batch_proc.sh (should be the processing directory)

the path to the output files: the directory you have created in processing

To run :

module load python/ana

python generate_savu_proc_list.py

This will create savu_batch_proc.sh for which you need to change permissions :

chmod 777 savu_batch_proc.sh

To run the batch script :

./savu_batch_proc.sh



For Multimodal Tomography reconstruction:

module load savu/pre-release

The process list that needs to be used is in /dls/science/groups/i18/software/savu_fit_reconstruction/multimodal1_template.nxs

The templates folder (/dls/science/groups/i18/software/savu_fit_reconstruction/templates/) needs to be in the same folder as the process list.

The following changes need to be made in the template files for each dataset being reconstructed:

  • For fluorescence data reconstruction, the raw nexus file needs first to be manually processed using Dawn in order to get a processed file that has the summed signal from all the detector elements (see the attached doc on how to do this), and the path of the resulting nexus file in the processing folder needs to be entered in the fluo.yml template file.

  • For XRD data reconstruction, the path of the folder containing the XRD Tiff images for the dataset being reconstructed as well as the path for the XRD calibration file needs to be entered in the xrd_tiff.yml template file.

  • More detailed instructions, as well as descriptions of what each parameter in the plug-ins of the process list means, are given in:





Creating a volume

Edit the script gen_fluo_avizo.py and give it the folder of the reconstructed slices, the output folder and the name of the output volume file. On line 24 you give it the number representing the element you want to create a volume from (starting from 0).



To run :

module load python/ana

python gen_fluo_avizo.py



The volume created is **.raw and can be visualised in Avizo:

If loading avizo via NoMachine create a virtual desktop on one of the visualisation nodes with GPU.

You then type :

module add avizo/vis

avizo

Open data: the type is 32-bit float, and you give it the x and y dimensions plus the number of slices.
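The same volume can also be inspected in Python; a minimal sketch with hypothetical dimensions, assuming the slices are the slowest-varying axis (swap the order if the image looks transposed).

import numpy as np

nx, ny, nz = 200, 200, 150                         # hypothetical dimensions
volume = np.fromfile("particle1.raw", dtype=np.float32).reshape(nz, ny, nx)
print(volume.shape)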

A good guide for starting off with visualizing 3D images is given here: Amira-Avizo Software | Visualizing 3D images - YouTube

Other tutorials for avizo can be found at the following link: https://xtras.amira-avizo.com/

 

Tomography Reconstruction in concentrations instead of counts

This is an interim and very manual process and assumes that sinograms have been quantified with Pymca.

The idea is that for each sinogram you compare two versions, the one in counts (raw) and the one in ppm (pymca). You carefully pick the same pixel (for a given element) and work out a “counts to ppm” normalisation factor. This factor can then be applied later in the processing chain so that the reconstructed volumes are in ppm.

This normalisation factor is not the same for all elements, as there are different sensitivities, absorption coefficients etc., so the process needs to be repeated for each element you want to quantify. The factor also depends on the thickness of the particle, so the element-specific factors can only be applied to more than one particle if they are of the same thickness.

As a sanity check, first quantify a 2D XRF map to get an idea of what ballpark concentrations to expect. These are “bulk” values, so the projected voxel ppm should be approximately the bulk divided by the number of projections.

When you get to the final stage (Creating a volume) edit the script gen_fluo_avizo_ppm.py and enter the normalisation factor for the element you are creating.
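In code the factor is just a pixel ratio; a sketch with stand-in arrays (in practice you would load the raw and PyMca sinograms, e.g. with h5py, and pick your carefully matched pixel):

import numpy as np

counts_sino = np.random.rand(181, 200) * 1e4       # stand-in raw sinogram (counts)
ppm_sino = counts_sino * 0.35                      # stand-in quantified sinogram (ppm)

iy, ix = 90, 100                                   # the carefully picked matching pixel
factor = ppm_sino[iy, ix] / counts_sino[iy, ix]    # counts -> ppm normalisation factor
print(factor)                                      # element- and thickness-specific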



PyMca Map processing

You can download PyMCA at http://pymca.sourceforge.net/download.html

Loading files in PyMca: Files post-Dec2016 : You need Pymca 5.1.4 or newer

To use this on a Linux box type in a Terminal: module load pymca and then : pymca

To download for your use outside Diamond go to http://pymca.sourceforge.net

Currently there is an issue with Windows PyMca loading the nxs maps: it needs the processed file and the original raw nxs file (top directory) to be in the same folder. So if you want to do PyMca ROI imaging work after leaving Diamond you need to copy the two groups of files into the same place.
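If there are many files to pair up, a small script can do the copying; the paths and the naming pattern here are assumptions to adapt to your own download.

import shutil
from pathlib import Path

raw_dir = Path(r"C:\data\sp12345-1")               # hypothetical download location
proc_dir = raw_dir / "processed"

for proc in proc_dir.glob("i18-*xrf_window*.nxs"):
    scan = proc.name.split("-")[1]                 # scan number from the processed name
    raw = raw_dir / f"i18-{scan}.nxs"
    if raw.exists():
        shutil.copy2(raw, proc_dir)                # put the raw file next to its processed partner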

Pymca: ROI Imaging : loading of nxs files


The files you need are the ones in the Processed directory. Correct entry is processed/result/data.

Tools : ROI Imaging : go to the folder "processed", File types: HDF5… Select the .nxs map file.

You then expand the file (click on +), expand “processed”…“result”, and double-click “data”; tick Signal (below, under User) and then click “Finish”.
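If a map fails to load, you can check that the expected entry is present with h5py; a minimal sketch, with a placeholder file name:

import h5py

with h5py.File("i18-123456_processed.nxs", "r") as f:  # placeholder file name
    data = f["processed/result/data"]  # the entry PyMca expects
    print(data.shape, data.dtype)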



PyMCA fitting in Dawn

This can be done in the Processing perspective. It requires a configuration file created and tested in PyMCA. The resulting fitted map can be loaded in the Mapping perspective, giving you real stage coordinates and the ability to easily inspect the raw energy spectrum of a given pixel. You can also overlay it on the optical image of the sample acquired with the online microscope. BUT it is very slow compared to the PyMca fast-fit option.

In the Data Slice View you load the processed map file configuring it as in the following screenshot:

In Processing, use Insert operations to choose XRF Pymca fit, and in Model give the path to your configuration file.

In Data Slice press Play and give it a path for the output file (in the Processing folder).

An in-house guide to PYMCA fitting and quantification can be found here.

 



PyMca : Creating/ Saving RGB files

Some instructions can be found here:



Detector characteristics for quantification

Vortex : Si thickness = 1 mm, area = 1.2 cm², Be window = 12.5 µm

Retired :

Vortex (SII) : Si thickness = 0.35 mm, area = 1.7 cm², Be window = 12.5 µm

Xspress : Ge thickness = 7.5 mm, area = 4.0 cm², Be window = 125 µm

SGX : Si thickness = 0.45 mm, area = 5.3 cm², Be window = 30 µm
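If you need these values programmatically (e.g. when setting up a quantification config), the table above can be kept as a simple lookup; a sketch for convenience only, not a beamline API:

# Detector characteristics from the table above.
DETECTORS = {
    "Vortex":       {"material": "Si", "thickness_mm": 1.0,  "area_cm2": 1.2, "be_window_um": 12.5},
    "Vortex (SII)": {"material": "Si", "thickness_mm": 0.35, "area_cm2": 1.7, "be_window_um": 12.5},   # retired
    "Xspress":      {"material": "Ge", "thickness_mm": 7.5,  "area_cm2": 4.0, "be_window_um": 125.0},  # retired
    "SGX":          {"material": "Si", "thickness_mm": 0.45, "area_cm2": 5.3, "be_window_um": 30.0},   # retired
}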



 

XANES map processing for Mantis

Processing of the files is now done on any of the Linux workstations.

Running the conversion script

Your local contact will advise which script fits the mode of collection of your stack and will put a copy of it in your processing directory.

You then open a command window, cd to the processing directory, and type:

module load python/ana

python <the name of the script>

A pop-up window called ‘tk’ will appear, allowing you to navigate to where your raw data files are stored. Select your folder and click OK. There is also an option to ignore files.

You will then be asked where you would like to save the created files; the output file is saved in the Processing folder.

Type in the bottom limit of your fluorescence energy window in eV/10. This is where you enter the lower limit of the fluorescence window you think is of interest for the element you are observing in your sample. The value in eV should be divided by 10 before entry, e.g. if your window is from 3200–3600 eV, your lower limit would be entered as 320 and your upper limit as 360.

Type in the upper limit of your fluorescence energy window in eV/10. As above, for the upper limit of your fluorescence window.

The script will also normalise the fluorescence to I0, and you have the choice to normalise everything to the point before the last (which sometimes goes wrong, so it’s worth running with both options and comparing the results).
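As a sketch of what the script does with those inputs (array names and shapes here are assumptions, not the script’s actual variables):

import numpy as np

lo, hi = 320, 360  # window limits entered in eV/10, i.e. 3200-3600 eV

# Stand-in data: counts per (energy point, MCA channel) and I0 per point.
mca = np.random.poisson(5.0, (101, 4096)).astype(float)
i0 = np.full(101, 1.0e6)

fluo = mca[:, lo:hi + 1].sum(axis=1)  # integrate the fluorescence window
fluo_norm = fluo / i0                 # normalise to I0
fluo_norm /= fluo_norm[-2]            # optional: normalise to the point before the last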


Running Mantis

There is a Windows version of Mantis that your local contact can give you, but it is an older version; the newest one is currently available only in Diamond’s environment via NoMachine. To run it on Linux, open a terminal and type module load mantis and then mantis.

When you start Mantis :

  • Load Xanes Stack and select the hdf file that you produced with the conversion script.

  • In Reprocess Data click “Use pre-normalised data” to correct the spectra as in the screenshot :

 

 

NB: The X and Y axes of the map are swapped around. The newest version of Mantis has features to deal with this.

  • The black bar at the bottom of the map is meant to be a scale bar; you can untick it.

  • Sliding the energy bar shows the distribution of potentially different species in the map, while clicking on a pixel in the map shows that pixel’s spectrum.


  • PCA (Principal Component Analysis):

You then move to the PCA tab and click Calculate. By scrolling down the principal components you can estimate how many describe the real variance of the sample. Mantis is able to statistically determine how many distinct components it thinks are present in your sample. If you wish to go on and use the cluster analysis function to display the spectra for these components and show their distribution graphically, you will need to do PCA first.

To get the best results in this section, first limit the energy range to the region where variation in the spectrum is likely to denote a change in component for your sample. This is done via the ‘Limit energies’ button on the Preprocess panel of the Image stack tab; click and drag until the green region covers the energy region of interest.

Once the energies have been limited, swap to the PCA tab and click ‘Calculate PCA’. The program will then return the number of significant components it believes to be present. Check that these appear correct by looking at both the images and spectra displayed below. The first image will be completely one colour, but you should then check the images for higher-numbered components, making sure the significant ones show clear structure in both image and spectrum, as opposed to a ‘salt and pepper’ or random-noise appearance.

 

 

  • Cluster analysis:

You then move to the Cluster analysis tab and click Calculate. The number of clusters to compute has to be adjusted based on some knowledge of the variance of the sample and inspection of the principal components in the previous step. Mantis can show a map of where it thinks clusters of a particular component are present on your sample, and show the spectra associated with the composition of each cluster. You need to tell Mantis how many components you think are present, as the number of significant components identified in PCA might be large and due only to intensity variations. The cluster error map gives an indication of the reliability of the data: spots marked in white are less well associated with any particular cluster than dark areas. You can display scatter plots showing the correlation of each component with another in terms of the areas occupied.

Exporting spectra: by clicking on Save CA results you can export the average cluster spectra in different formats. The csv option allows you to load them into Athena.

 

  • Alignment of image files: Mantis allows manual alignment of your image files. This is good for aligning a set of scans where the sample stage may have wobbled slightly during the measurements. To access this function, choose ‘Align images’ from the Preprocess panel on the Image stack tab; this will bring up the alignment tab.

You first need to select a reference image, which should be the image collected at the first energy. You can then automatically align your images by selecting ‘Register Images’; a readout will appear at the bottom of the screen displaying what x and y shifts have been applied to each image. Ticking the cross-correlation box will give a moving display of this process but will slow the program considerably.

Once this has been done your images should be roughly in line, but any further manual alignment is done by choosing a point on your reference image and a corresponding point on any of the files that you can see need further alignment, then clicking ‘Apply manual shifts’. This will activate the ‘Accept changes’ button. If you are already happy with the alignment, you will need to click at the same point on another image as on your reference image, hence not changing the alignment but activating the ‘Accept changes’ button. Clicking this button will then save your changes and return you to the main screen.

You may also decide to ‘Crop Aligned Images’, which will reduce your map down to only the points that have been measured in all scans. After aligning your images you may wish to save the file so that text files for use in Athena can be created from it later. When saving, you will need to type your file name followed by .hdf5 to ensure it saves in the correct format.

 

 

EXAFS Analysis

The I18 EXAFS file data format is as follows (xanes and xas files are in the Experiment_1/ascii directory):
For fluorescence collected with the Xspress 9-element detector, in Athena you need to select Energy: column 1, and the numerator is FFI0: column 8.
For fluorescence collected with the Vortex 4-element detector: Energy: column 1, and the numerator is FFI0_vortex: column 9.
For transmission: Energy: column 1, and the numerator is lnI0It: column 5.
For basic averaging/processing of these files you can use any standard package (Excel, Igor, Origin); we also recommend Athena.
http://cars9.uchicago.edu/~ravel/software/
For background subtraction we found that the package PySpline performs better than AUTOBK.
http://sourceforge.net/projects/pyspline/ http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-12219.pdf
A customised version is available on the downloads page. This outputs the data in a format suitable for the EXCURV fitting package; use columns 2 and 9 for EXCURV.
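For example, to pull those two columns into numpy yourself (a minimal sketch; the file name is a placeholder, and the 1-indexed columns 2 and 9 become 0-indexed 1 and 8):

import numpy as np

# Columns 2 and 9 of the PySpline output, assumed here to be k and chi(k).
k, chi = np.loadtxt("sample_pyspline.dat", usecols=(1, 8), unpack=True)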
EXAFS Analysis/Fitting

FEFF
http://cars9.uchicago.edu/iffwiki/About (free version, which is 1 or 2 releases behind the licensed version)
http://leonardo.phys.washington.edu/feff/ (latest version, licence fee)

EXCURV
http://www.cse.scitech.ac.uk/cmg/EXCURV/ (Free to UK scientific community, registration required)
A version that works under Cygwin and gives graphics on a Windows PC is available if you ask for it, though it currently does not work with the new version of Cygwin.
To run these programs on the Linux systems at Diamond, for EXCURV and PySpline type:
module load spectroscopy
run_excurv &
run_pyspline &


For the new version of athena/artemis on the Linux workstations :
module load demeter

and then eg : dathena or dartemis

But if that doesn't work you may have to remove a folder related to a previous Demeter installation from your home directory:

cd /home/<your fedid> and then rm -r .horae

After that the new Demeter should work.

Working on data in the ascii directory often fails due to permissions issues, so files need to be copied elsewhere (/tmp or /processing) before they can be read into Athena.



Using Slackbot to monitor GDA

Slack can be used to remotely monitor the status of scans without having to open a client. You need a Slack account (sign up at https://slack.com/intl/en-gb/ ), then ask the beamline scientist to invite you to the i18-users_channel. GDA can then be monitored using the following commands:

/gda-status i18 (gives the current status of GDA, whether scans are running, how much they have completed etc)

/gda-scan-end (notifies when current scan is completed)

/gda-all-done (notifies when queue of scans is completed)

/gda-monitor x minutes (monitors GDA status for x minutes)

Reprocessing Data

If any raw file doesn't get processed live and needs to be reprocessed, the corresponding notebook can be found under …./processed/notebooks. The notebook can be copied into a new reprocessing folder, the file names and in/out paths changed in the top lines of the notebook, and then run using jupyter notebook. The notebooks need to be opened using the following Python modules: go to the location containing the notebook, open a terminal, and:

For tomo reconstruction use:

module load python/recast3d

jupyter notebook

 

For XRF and XRD reduction, open the jupyter notebook using the python/3.9 and python/3.10 modules respectively.
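The edits at the top of a notebook are typically just names and paths, along these lines (all variable names and paths below are hypothetical placeholders, not the actual notebook variables):

# Hypothetical top-of-notebook cell: point the notebook at the file to reprocess.
raw_file = "/dls/i18/data/2024/sp12345-1/i18-123456.nxs"     # placeholder input file
out_dir = "/dls/i18/data/2024/sp12345-1/processing/reproc"   # placeholder output path
# The remaining cells are then run unchanged (Cell -> Run All).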

 

Checking for radiation induced redox on specific energies

 

On the command line of the Jython Console type :

scan test 0 100 1 xspress3 counterTimer01

In the plot you can display the FF counts. The file is saved in the “ascii” folder.
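For reference, the same command annotated (the reading of the arguments is an assumption based on standard GDA scan syntax; 'test' is presumably a dummy scannable, so the scan simply repeats the measurement at a fixed energy):

# scan <scannable> <start> <stop> <step> <detector> <detector>
# Steps 'test' from 0 to 100 in steps of 1 (101 points), collecting the
# xspress3 fluorescence detector and counterTimer01 at each point.
scan test 0 100 1 xspress3 counterTimer01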