Integrating revolutionary sensor technology with Magneto-optic Kerr effect microscopy
As one of the best technologies to observe magnetic domains and other magnetic material microstructures, Magneto-optic Kerr effect (MOKE) microscopy boasts many assets. Speed, however, is not among them.
Slow temporal resolution, limited by the data transfer speeds of conventional cameras, restricts the technique when it comes to observing fast magnetic microstructure dynamics. High-speed cameras can capture these events, but they are typically cost-prohibitive for research groups and extremely data intensive, making real-time, vision-based feedback control impossible.
To supplement traditional MOKE microscopy with improved temporal resolution, Zhang et al. introduced a prototype sensor that mimics the neural structure of the human eye.
“We came up with the idea to integrate a new tool in computer vision, the Dynamic Vision Sensor (DVS), with the MOKE microscope to improve time resolution when observing the dynamics of magnetic structures in spintronic devices,” said author Yan Zhou. “This is the first exploration into a brand-new interdisciplinary area involving computer vision, optics and microscopy, artificial magnetic materials, and spintronics.”
With the DVS, pixels only activate when they detect a change in light intensity, or an event; the pixels with unchanged intensity are treated as negligible and don’t produce redundant data. By fine-tuning the MOKE microscope and optimizing the event camera settings, the scientists reconstructed images and slow-motion videos with comparable visual effects to those taken with a standard frame-based camera.
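The event-generation principle described above can be sketched in a few lines: each pixel compares its current log-intensity to its last value and emits a signed event only when the change crosses a threshold. This is a minimal illustration of how a DVS works in general, not the authors' implementation; the threshold value and the toy frames are assumptions for demonstration.

```python
import numpy as np

def generate_events(prev_frame, curr_frame, threshold=0.15):
    """Emit DVS-style events where the log-intensity change exceeds a threshold.

    Pixels with unchanged brightness produce no data; only changed pixels
    yield +1 (brightness up, "ON") or -1 (brightness down, "OFF") events.
    The threshold is illustrative, not a value from the paper.
    """
    eps = 1e-6  # avoid log(0)
    diff = np.log(curr_frame + eps) - np.log(prev_frame + eps)
    events = np.zeros_like(diff, dtype=np.int8)
    events[diff > threshold] = 1    # ON event
    events[diff < -threshold] = -1  # OFF event
    return events

# Toy example: a single bright spot (think of a moving magnetic domain)
# shifts one pixel to the right between consecutive frames.
prev = np.zeros((4, 4)); prev[1, 1] = 1.0
curr = np.zeros((4, 4)); curr[1, 2] = 1.0
events = generate_events(prev, curr)
# Only two pixels report anything: an OFF event where the spot left
# and an ON event where it arrived; the other 14 pixels stay silent.
```

Because the static background generates no events, the data rate scales with scene activity rather than frame rate, which is what makes high temporal resolution affordable.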
In addition to time resolution benefits, the DVS event camera is low latency, which may enable faster vision-based feedback control of the spintronic devices being observed.
“While further technical improvements are required, this is a significant new step in an important interdisciplinary area,” said Zhou.
Source: “Event-based vision in magneto-optic Kerr effect microscopy,” by Kai Zhang, Yuelei Zhao, Zhiqin Chu, and Yan Zhou, AIP Advances (2022). The article can be accessed at http://doi.org/10.1063/5.0090714.