Visualizing the magnetic field of nanoparticles with Python
Note: The following is a continuation of this post on electron holography, which I suggest reading first to get a sense of what is going on.
As you have probably seen before, magnets and their magnetic fields are often visualized as field lines, like in the figure below1, which represents how the magnetic field extends outside a rectangular magnet.
This is a common illustration used for magnetic fields, where magnetic field lines are drawn along lines of constant field strength – a sort of contour lines for the magnetic field strength. If we were to place a tiny compass at any position outside the magnet, the compass needle would align itself with the direction of the field lines at that position.
This is usually presented for magnets large enough to hold in your hand, but it is actually possible to directly measure the magnetic field of nanoparticles in a Transmission Electron Microscope (TEM), just like this:
The TEM uses electrons as a probe to image particles, and electrons moving through a magnetic field are affected by it. Information about the magnetic field of a sample can be saved using a technique known as Electron Holography, which preserves the phase \(\phi\) of the electrons in the acquired image.
Reconstructing the above image from raw holograms involves several image processing techniques, including Fourier analysis, frequency filtering and image matching to find the translational shift between two almost identical images.
In this post I walk through all the Python code used to go from a raw hologram to a magnetic phase image such as the above. You can see the full code in this Jupyter notebook, which you can download along with the holograms if you want to play along yourself.
All the data shown was acquired as part of my bachelor's thesis2 at DTU Nanolab (previously Center for Electron Nanoscopy) several years back and follows the procedure detailed in this book chapter3. I recently went back to revisit the subject, redoing it in Python out of curiosity, and decided to do this write-up to share with anyone who might be interested.
Calculating phase image from a hologram
Importing microscope data
We start out with some common packages for scientific Python, namely `numpy` and `matplotlib`4. We use pyplot for plotting and will set some default parameters with `plt.rcParams` so we don't have to do it manually for each plot.
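The original code cells live in the notebook; here is a minimal sketch of what the setup might look like, where the specific `rcParams` values are my own example defaults rather than the ones actually used.

```python
# Example setup; the rcParams values below are illustrative defaults.
import numpy as np
import matplotlib.pyplot as plt

plt.rcParams["figure.figsize"] = (8, 8)  # larger default figure size
plt.rcParams["image.cmap"] = "gray"      # show micrographs in grayscale
```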
The raw data from the microscope is saved in `dm3` files, a common format used in electron microscopy. They are usually opened with the proprietary Gatan Digital Micrograph software or a program like ImageJ, which can be difficult to work with for more complex calculations, but luckily the OpenNCEM project allows us to open them in Python (look here if using Matlab).
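Loading the hologram might look something like this; the filename is a placeholder, and the dictionary keys are those returned by `ncempy`'s `dmReader` in recent versions.

```python
# Read the .dm3 file into a numpy array (filename is hypothetical).
from ncempy.io import dm

dm_file = dm.dmReader("hologram_1.dm3")
im1 = dm_file["data"]          # the raw hologram as a 2D array
plt.imshow(im1)
plt.show()
```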
The raw image in `im1`, our hologram, looks like this:
Note that the numbered axes show pixel numbers instead of nanometers. We keep it this way for now as it helps illustrate the numerical calculations we will be doing. However, it is still important to provide a scale when showing microscope images, and it is readily available from the metadata:
So 1 px = 0.0037 µm, but the scale is off by a factor of 10 in these specific images. This is because the objective lens (the one that forms the image) is the special Lorentz lens, and the correction has not been included correctly in the metadata. We will correct for this and convert to nanometers while we're at it:
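A sketch of the correction, assuming the pixel size is reported in µm by `dmReader` as above:

```python
# Metadata pixel size is in µm and off by a factor of 10 (Lorentz lens),
# so divide by 10 and convert µm -> nm.
scale_um = dm_file["pixelSize"][0]   # 0.0037 µm/px according to the metadata
scale_nm = scale_um * 1000 / 10      # -> 0.37 nm/px
print(f"1 px = {scale_nm:.2f} nm")
print(f"Image width = {scale_nm * im1.shape[1]:.0f} nm")
```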
The image is thus 760 nm from one side to the other, and the particle is about 90 nm wide.
There is a wealth of other information in the metadata, such as acceleration voltage, magnification etc., which can be accessed with the `fileDM` class from `ncempy` if needed.
Fourier Transformed hologram with sidebands
We can now calculate the Fast Fourier Transform (FFT) of our hologram of the particle using `fft2` from `numpy`. We also use `fftshift`, which centers the main peak in the image, as it is otherwise split into the corners for numerical reasons.
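In code, something along these lines, where the `vmin`/`vmax` values are picked by eye (see footnote 5):

```python
# 2D FFT of the hologram, with the zero-frequency peak shifted to the center.
im1_fft = np.fft.fftshift(np.fft.fft2(im1))

# Log of the amplitude is only for display, to keep the sidebands visible.
plt.imshow(np.log(np.abs(im1_fft)), vmin=5, vmax=15)
plt.show()
```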
Here we see the central peak with the two “sidebands” off to the sides5. Each sideband contains the amplitudes of the wave that passed through the sample and of the reference wave, \(A_i(\vr)\) and \(A_r(\vr)\) respectively, as well as the electron phase \(\phi(\vr)\):
\[\hat{I}_\mathrm{s.b.}(\vq) = \fourier{A_i(\vr)A_r(\vr)e^{i\phi(\vr)}}\]

Here \(\vr = (x,y)\) refers to a given point (or pixel) in the image. The next step in isolating the phase image \(\phi(\vr)\) is then to extract one of the sidebands.
Extracting a sideband
The FFT image contains three significant peaks, namely the central one (by far the most intense) and the two sidebands (of similar intensity). The two sidebands are almost identical, with the important difference being that they are complex conjugates of each other; this means a phase image calculated from one sideband will correspond exactly to the phase image from the other sideband, just with opposite sign.
To select a sideband, we will find the coordinates of the peaks in the image using `peak_local_max` from the `scikit-image` package, exactly like in this example. As the central peak is always the strongest, we simply find the second-strongest peak and check whether it gives a positive or negative phase change; if not the one we want, we take the third-strongest peak instead. We write this into a function as we'll do it a few times.
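A sketch of such a function; the name, the `min_distance` value, and the simplification of always returning the second-strongest peak (instead of checking its sign and possibly taking the third) are my assumptions.

```python
from skimage.feature import peak_local_max

def find_sideband(fft_image, num_peaks=3, min_distance=30):
    """Return the (row, col) position of a sideband in a shifted FFT image."""
    amplitude = np.abs(fft_image)
    peaks = peak_local_max(amplitude, min_distance=min_distance,
                           num_peaks=num_peaks)
    # Sort by amplitude so index 0 is the central peak and 1, 2 the sidebands.
    peaks = sorted(peaks, key=lambda p: amplitude[p[0], p[1]], reverse=True)
    return peaks[1]
```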
We’ll wrap this into a function to automatically extract a sideband from a given FFT image. Note that the image is stored as a matrix where rows (up and down) are indexed before columns (left and right), meaning \(x\) and \(y\) must be reversed since we want a horizontal x-axis and vertical y-axis.
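A sketch of that extraction function, where the crop half-width of 200 px is an assumed value:

```python
def extract_sideband(fft_image, size=200):
    """Crop a square of side 2*size centered on a sideband.
    Note that indexing is (row, col), i.e. (y, x)."""
    row, col = find_sideband(fft_image)
    return fft_image[row - size:row + size, col - size:col + size]

im1_sideband = extract_sideband(im1_fft)
```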
We can now apply our new function and take a closer look at the sideband.
The vertical and horizontal streaks are artifacts which appear because our original image is finite at the edges. When we decompose the image into its frequency components, the sharp, abrupt change at the image edges has to be represented by many different frequencies, giving rise to these streaks. Later you will be able to see the result of these artifacts in the reconstructed image along the very edges of the images, but this will not affect our analysis otherwise.
Phase and amplitude from sideband
We are now ready to calculate the amplitude and phase of our image!
When performing a Fourier transform of an image (real values), the FFT will contain complex values. Usually, when the inverse Fourier transform is applied, we would get back the original image containing only real values. However, by cropping out the sideband we have redefined the center of the FFT image. Now, when applying the inverse Fourier transform to the sideband, the resulting image will contain complex values.
This is actually intended! We can find the “usual” intensity image from the absolute value of the complex number in each pixel as \(I = \sqrt{\mathrm{Re}^2+\mathrm{Im}^2}\) with `np.abs` (like when visualizing the FFT). The phase image is found from the angle of the complex number \(\varphi=\arctan\left(\frac{\mathrm{Im}}{\mathrm{Re}}\right)\) using `np.angle`.
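In code (variable names are placeholders; whether an extra `ifftshift` is needed depends on how the sideband crop is centered):

```python
# Inverse transform of the cropped sideband gives a complex image.
im1_complex = np.fft.ifft2(np.fft.ifftshift(im1_sideband))

im1_amplitude = np.abs(im1_complex)   # "usual" intensity-like image
im1_phase = np.angle(im1_complex)     # wrapped phase, in (-pi, pi]
```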
Here’s the amplitude image
Note how the amplitude image is very similar to the original hologram, but at reduced resolution, since we only cropped out a small part of the full FFT image. The phase image looks like this
We immediately notice some odd artifacts in the form of sudden jumps in value. We calculated the phase (or angle) of the complex numbers with `np.angle`, which is an implementation of atan2, a numerical version of the \(\arctan\) function that outputs an angle between \(-\pi\) and \(\pi\); the phase is thus bound between these values.
This means that if a region of the phase image has a phase value extending above \(\pi\), for example, the value returned by `atan2` will “wrap around” and return \(-\pi\), as it does not know that it should be continuing the values of the neighbouring pixels.
Fortunately, “unwrapping” phase images is a common procedure in image processing, and `scikit-image` has it built in as the `unwrap_phase` function:
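Applied to our phase image it is a one-liner (variable names as in the earlier sketches):

```python
# Remove the 2*pi jumps from the wrapped phase image.
from skimage.restoration import unwrap_phase

im1_phase_unwrapped = unwrap_phase(im1_phase)
```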
Our phase image now looks like this
So far so good, we have successfully recovered the full phase shift of the electron beam after passing through a nanoparticle! We can improve it a bit by subtracting a background reference.
Correcting phase image with background reference
First we load in a background image, acquired in an area without any sample present, to serve as a clean reference. Using the functions defined earlier makes quick work of extracting the sideband.
Now we calculate the phase of the reference image
Finally we subtract the reference image which leaves us with a much more even phase image
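A sketch combining the three steps above, with a placeholder filename and a small helper that reuses the functions defined earlier:

```python
def hologram_to_phase(image):
    """Raw hologram -> unwrapped phase image, reusing the helpers above."""
    sideband = extract_sideband(np.fft.fftshift(np.fft.fft2(image)))
    return unwrap_phase(np.angle(np.fft.ifft2(np.fft.ifftshift(sideband))))

ref1 = dm.dmReader("reference_1.dm3")["data"]    # hypothetical filename
im1_phase_corrected = im1_phase_unwrapped - hologram_to_phase(ref1)
```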
The magnetic phase shift is still not easily visible, as the electric contribution dominates the phase image.
Hologram of particle with reversed magnetization
In order to isolate just the magnetic contribution \(\phi_m\) to the phase shift, we need an image of the exact same particle where the particle’s magnetization, and thus \(\phi_m\), is reversed. Since \(\phi_m\) changes sign while \(\phi_e\) is unchanged, it is a matter of simple subtraction:
\[\phi = (\phi_e+\phi_m) - (\phi_e - \phi_m) = 2\phi_m\]

Then we just divide by 2, giving us the pure magnetic phase image \(\phi_m\).
The magnetic reversal is done by applying a strong magnetic field inside the microscope, using one of the magnetic lenses, in the opposite direction of the one previously used to magnetize the particle in the first place.
We load in the image and subtract a reference phase image just like before:
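Reusing the helper from before; the filenames are placeholders, and I assume a separate reference was acquired for this image:

```python
im2 = dm.dmReader("hologram_2.dm3")["data"]
ref2 = dm.dmReader("reference_2.dm3")["data"]

im2_phase_corrected = hologram_to_phase(im2) - hologram_to_phase(ref2)
```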
This phase image is almost identical to the first, but with two important differences: \(\phi_m\) is opposite of before, and the particle is in a slightly different position in the image. This means we cannot directly subtract the two images until we have found out exactly how much the particle position changed.
Matching two similar images to adjust for translation
There is a significant shift in x and y positions between the images, but only negligible rotation. This means we can use phase cross-correlation to get a good estimate of the translation between the images, as implemented in `phase_cross_correlation` in `scikit-image`6. There is a decent example with details here. Knowing this shift, we can correct for it and ensure the images have a common center so we can subtract them properly.
We start out by applying a bandpass with Gaussian blur to remove some high- and low-frequency details which can detract from the image matching.
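One way to build such a band-pass is to subtract a heavily blurred copy of the image from a lightly blurred one; the sigma values here are guesses.

```python
from skimage.filters import gaussian

def bandpass(image, low_sigma=2, high_sigma=30):
    """Gaussian band-pass: keep features between the two blur scales."""
    return gaussian(image, low_sigma) - gaussian(image, high_sigma)

im1_bp = bandpass(im1_phase_corrected)
im2_bp = bandpass(im2_phase_corrected)
```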
As noted when extracting the first sideband, there are significant artifacts along the edges of the image which can also cause issues – image edges are generally troublesome when dealing with frequency decompositions. For this reason we will construct a “window” and multiply with our images, which ensures all values go smoothly towards \(0\) at the edges.
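A 2D Hann window, built as the outer product of two 1D windows, does the job; this is one common choice, not necessarily the one used in the original notebook.

```python
def hann_window(shape):
    """2D window that goes smoothly to zero at the image edges."""
    return np.outer(np.hanning(shape[0]), np.hanning(shape[1]))

window = hann_window(im1_bp.shape)
im1_windowed = im1_bp * window
im2_windowed = im2_bp * window
```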
With the bandpass filter and window applied to both images, we are now ready to calculate the shift in x and y
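With `phase_cross_correlation` this is a single call; `upsample_factor` gives sub-pixel precision, and its value here is just an example.

```python
from skimage.registration import phase_cross_correlation

shift, error, phasediff = phase_cross_correlation(
    im1_windowed, im2_windowed, upsample_factor=10
)
print(f"Detected shift (rows, cols): {shift}")
```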
The calculated translation is now applied as a geometrical transform to correct one of the unfiltered phase images. After this we can subtract one phase image from the other to isolate the magnetic phase shift (and remember to divide by 2).
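Here I use `scipy.ndimage.shift` to apply the translation for brevity; the original notebook may well use a geometric transform from `scikit-image` instead, but the idea is the same.

```python
from scipy.ndimage import shift as nd_shift

# Align the second phase image to the first, then isolate the magnetic part.
im2_phase_aligned = nd_shift(im2_phase_corrected, shift)
phase_magnetic = (im1_phase_corrected - im2_phase_aligned) / 2
```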
The magnetic phase shift has revealed itself! Note how there is a lighter region on one side of the particle and dark on the other. This is the magnetic contribution to the phase shift.
Magnetic field lines from phase image
We can enhance the magnetic phase image by cutting out the relevant part where the images overlap and filtering some noise away.
First we make a function to cut off the parts of the image that do not overlap after the subtraction.
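A sketch of such a crop function; the extra margin is an assumed value.

```python
def crop_overlap(image, shift, margin=20):
    """Cut away the borders that do not overlap after shifting by `shift`."""
    dr = int(np.ceil(abs(shift[0]))) + margin
    dc = int(np.ceil(abs(shift[1]))) + margin
    return image[dr:-dr, dc:-dc]

phase_magnetic_cropped = crop_overlap(phase_magnetic, shift)
```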
We can also filter the image a bit with a Gaussian blur to reduce high-frequency noise, since the magnetic field (and phase shift) only varies slowly across the image. We will also apply a scalebar to show the size of the features.
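For example (the sigma is a guess, and the pixel size of the phase image is larger than that of the hologram, since it covers the same field of view with fewer pixels):

```python
from matplotlib_scalebar.scalebar import ScaleBar

phase_magnetic_smooth = gaussian(phase_magnetic_cropped, sigma=5)

# nm per pixel in the reduced-resolution phase image
scale_phase = scale_nm * im1.shape[0] / phase_magnetic.shape[0]

fig, ax = plt.subplots()
ax.imshow(phase_magnetic_smooth)
ax.add_artist(ScaleBar(scale_phase, units="nm"))
ax.axis("off")
plt.show()
```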
The phase image now looks like this
Contour lines representing the magnetic field lines can also be overlaid directly on top of the bright field image, letting us see the field lines emanating from the particle itself. Using a divergent colormap, which shows the central values as white while displaying values above and below as blue and red respectively, we can enhance the readability of the phase image.
We just need to make sure we crop the bright field image in the same way so the centering is right.
By plotting the contour lines we are in essence displaying the magnetic field lines as they emanate from the particle:
Now that is looking like proper magnetic field lines from a magnetic nanoparticle!
To enhance the image a bit, we will overlay the contour lines on top of the original hologram image so we get much better contrast. We do this by rescaling the magnetic phase image used for the contour lines and again cropping parts of the original hologram to get the same center.
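A sketch of that final overlay; the resolution handling here is approximate (the magnetic phase image covers the same field of view as the hologram but with fewer pixels), and the number of contour levels, colormap and transparency are aesthetic choices.

```python
from skimage.transform import resize

# Crop the hologram by the equivalent number of (smaller) hologram pixels,
# then rescale the smoothed magnetic phase image to match it.
ratio = im1.shape[0] / phase_magnetic.shape[0]
im1_cropped = crop_overlap(im1, np.asarray(shift) * ratio,
                           margin=int(20 * ratio))
phase_rescaled = resize(phase_magnetic_smooth, im1_cropped.shape)

fig, ax = plt.subplots()
ax.imshow(im1_cropped, cmap="gray")
ax.imshow(phase_rescaled, cmap="RdBu", alpha=0.4)   # divergent colormap
ax.contour(phase_rescaled, levels=15, colors="black", linewidths=0.8)
ax.add_artist(ScaleBar(scale_nm, units="nm"))
ax.axis("off")
plt.show()
```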
Final thoughts
Congratulations for making it all the way through!
We have worked all the way from loading raw holograms taken in a Transmission Electron Microscope into Python to extracting and visualizing a magnetic phase image, using various image processing techniques and a bit of Fourier transform magic.
There is still much that can be done with this data, such as calculations on the exact magnetization of the particle and how it compares to expected values from other sources, which I have skipped as this post is already running long. If you want to try the code yourself, make sure to download the associated Jupyter Notebook with data, where you can make your own calculations (preview of notebook).
While I have omitted many details on this complex topic, I hope it can serve as an interesting appetizer for some of the things that can be done in Transmission Electron Microscopes, and how information that is usually not obtainable can be brought forward with some clever techniques.
1. Adapted from diagrams of the magnetic field around a bar magnet from Wikimedia Commons.
2. Supervisors: Takeshi Kasama, Marco Beleggia, Cathrine Frandsen.
3. Takeshi Kasama, Rafal E. Dunin-Borkowski and Marco Beleggia (2011). Electron Holography of Magnetic Materials, Holography - Different Fields of Application, F. A. Ramirez (Ed.), IntechOpen, DOI: 10.5772/22366.
4. All packages used in the notebook are: `numpy`, `matplotlib`, `ncempy`, `scikit-image` (imports as `skimage`) and `matplotlib_scalebar`.
5. Remember that each pixel in the Fourier transformed image `im1_fft` consists of complex values with a real value \(\mathrm{Re}\) and imaginary value \(\mathrm{Im}\). What we display as “the FFT” is the amplitude as \(r = \sqrt{\mathrm{Re}^2+\mathrm{Im}^2}\) using `np.abs`. Further, to avoid the central peak dominating the image I also calculate the logarithm of each pixel value with `np.log` to enhance visibility of the smaller values in the sidebands relative to the peak, as well as adjusting `vmin` and `vmax`.
6. Even if the image is rotated and scaled this method can still be used, as demonstrated in this example – essentially the matching is performed on the FFT of the image to obtain rotation, followed by the same method as here after correcting for rotation.