Image cytometry is a measurement technique that combines microfluidics and imaging to rapidly characterize individual cells in a population (see Figure 1). 1, 2 Its advantage is that the traits of individual cells can be rapidly measured, quantified, and analyzed, which is impossible with traditional macroscale techniques that rely on cell-population averaging.

In image cytometry, each cell traverses a linear, transparent microchannel. As the cell passes a specified region within the channel, an image or series of images is recorded. The images are processed in real time or offline, and a list of biomarkers is extracted for each cell. Bright-field imaging yields markers such as cell size, shape, opacity, color, elasticity, and granularity, along with insights from stained cellular components. 3 Fluorescence imaging characterizes the amount and position of key biomolecular species within the cell, such as DNA and actin. Because fluorescence emits relatively little light over the short camera integration times required here, this study is limited to bright-field imaging, which is common among researchers pushing the limits of high-throughput image cytometry.
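The per-cell marker extraction described above can be illustrated with a minimal sketch. The thresholding approach, the synthetic 64 × 64 frame, and the specific marker set below are illustrative assumptions, not the software used in this article; real pipelines segment multiple particles per frame and compute many more descriptors.

```python
import math

def extract_markers(frame, threshold=0.5):
    """Extract simple bright-field markers for one dark particle on a
    bright (back-lit) background. frame: 2-D list of floats in [0, 1]."""
    pixels = [(y, x) for y, row in enumerate(frame)
              for x, v in enumerate(row) if v < threshold]
    area = len(pixels)
    cy = sum(y for y, _ in pixels) / area          # centroid row
    cx = sum(x for _, x in pixels) / area          # centroid column
    equiv_diameter = 2.0 * math.sqrt(area / math.pi)
    # darker interior -> more opaque particle
    opacity = 1.0 - sum(frame[y][x] for y, x in pixels) / area
    return {"area": area, "centroid": (cy, cx),
            "equiv_diameter": equiv_diameter, "opacity": opacity}

# synthetic frame: bright background, one dark bead centered at (32, 40), radius 5
frame = [[0.2 if (y - 32) ** 2 + (x - 40) ** 2 <= 25 else 1.0
          for x in range(64)] for y in range(64)]
markers = extract_markers(frame)
```

Running the same extraction on every frame, and tracking centroids across frames, also yields the kinematic data (position and speed) referenced in Figure 2.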

It might be assumed that this inability to characterize fluorescence would be detrimental to image cytometry. However, prior research indicates that label-free bright-field and dark-field imaging alone may provide enough cellular information to “eliminate the need for the specific biomarkers” required for conventional flow cytometry. 4 Researchers have shown that bright-field imaging, while application-dependent, can elucidate key phenotypes and states, including cancer malignancy and invasiveness, stem cell pluripotency and differentiation, activated versus naïve leukocytes, and cytoskeletal and nuclear changes. 5 This is welcome news because fluorescence microscopy is extremely difficult to perform at high framing rates: relatively low light emission is compounded by ultrashort exposure times (e.g., ~1–10 μs). 6 Also, by avoiding a labeling step, this characterization technique may be extended to future in vivo applications where staining and labeling steps are unfavorable or simply not viable. A short discussion on high-speed fluorescence imaging is included later in this article.

Fig. 1 - Schematic illustration showing the image cytometry workflow, where a sample of cells is sent through a microchannel, imaged, and then processed. That extracted data can then be used to perform an action on an individual cell.

Several commercially available image cytometers can perform such characterizations at throughputs of thousands of cells per second. However, these technologies cannot process data at high speeds, which rules out the real-time feedback loops that are critical for future artificial intelligence (AI) and machine learning applications. This article illustrates how off-the-shelf hardware (camera, frame grabbers, processors) and software can be integrated to process individual cells in real time with microsecond latencies, at high framing rates (10–100 kfps) and ultra-short exposure times. The system can make image-derived inferences on the order of microseconds, providing a basis for executing microsecond, microscale decisions on individual cells.

Image Cytometry and Feedback

Fig. 2 - A sample time series of colored polystyrene beads flowing through a microchannel. The system can collect key image data and kinematic data in real time.

Experimental System. In this model, red, blue, and clear polystyrene beads were flowed through a Cole-Parmer 200-μm-wide microchannel. A Nikon SMZ18 stereomicroscope was set to 13.5× magnification, and an AMETEK Vision Research Phantom S710 camera was configured to record at 52,000 fps with an 18-μs exposure time. A Photonic 5100 LED fiber light back-lit the sample. The data was captured and pushed through a SPICAtek DHS-RT station, which can capture up to 16 CXP channels on two Euresys frame grabbers. The station can process the data in real time and store up to 45 minutes of high-speed, uncompressed video at these speeds. Typical image data is shown in Figure 2, where red, blue, and clear particles can be seen traversing the microchannel at high speed.

Technical Challenges. Real-time imaging at high speeds (10–100 kfps) faces three primary challenges: selection of the proper camera sensor and architecture, configuration of the correct camera settings, and the use of a backend PC capable of handling the high throughputs.

Camera Sensor and Architecture. The scientific high-speed camera market is dominated by onboard RAM-based camera architectures with gigabit Ethernet (GbE) communication protocols. In this architecture, the camera records all the image data to a fixed-size RAM buffer that is later off-loaded to external memory storage. These buffers can range from a few gigabytes to fractions of a terabyte, permitting high-speed recordings that are generally a few seconds in length. This strategy is highly effective for capturing individual, relatively short high-speed events, but it is poorly suited to continuous recording and processing over minutes or hours.

In addition, the fastest communication is generally over 10 GbE, which limits save speeds from high-speed cameras to ~400 MB•s-1. In contrast, CoaXPress (CXP)-based machine vision cameras can stream upwards of 7 GB•s-1 and were specifically developed for high-speed image transmission and machine vision applications. They integrate directly with commercially available CXP frame grabber cards, which provide the interface between the camera and the host computer. Each card connects to a PCIe slot of the proper size and lane count, which depends on the required throughput.
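The bandwidth arithmetic behind these architecture choices is worth making explicit. A short calculation, assuming 8-bit monochrome pixels, shows why a 10 GbE link cannot sustain the data rate used later in this article and why a fixed RAM buffer fills in seconds:

```python
def required_bandwidth_mb_s(width, height, fps, bit_depth=8):
    """Sustained link bandwidth (MB/s) needed to stream frames continuously."""
    return width * height * (bit_depth // 8) * fps / 1e6

def recording_seconds(buffer_gb, width, height, fps, bit_depth=8):
    """How long a fixed onboard RAM buffer lasts at a given rate/resolution."""
    bytes_per_frame = width * height * (bit_depth // 8)
    return buffer_gb * 1e9 / (bytes_per_frame * fps)

# settings used in this article: 640 x 200 resolution, 52,000 fps, 8-bit
bandwidth = required_bandwidth_mb_s(640, 200, 52_000)   # ~6,656 MB/s (~6.7 GB/s)
# a hypothetical 32 GB onboard buffer at the same settings lasts under 5 s
duration = recording_seconds(32, 640, 200, 52_000)
```

The ~6.7 GB•s-1 result sits within CXP streaming capability but far beyond the ~400 MB•s-1 practical for 10 GbE, which is why a CXP camera plus frame grabbers was chosen here.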

Fig. 3 - Sample of the cytoTracker GUI that is used to control the camera, visualize real-time results, and process image data.

Camera Settings. The image sensor must be fast enough, in terms of both frame rate and exposure time, to image cells properly while mitigating temporal aliasing and motion blur. Resolution (pixel areal density) is selected based on particle size relative to the field of view; in this case, the smallest particle had a diameter of ~10 pixels. At elevated particle speeds, it is not uncommon to use ultra-short exposure times ranging from ~0.1 to 20 μs. Here, the Phantom S710 was configured to record at 52,000 fps with an exposure time of 18 μs. The camera was configured with cytoTracker software, which generated a list of cellular analytics for each particle that traversed the field of view (see Figure 3). This data also detailed cell size, shape, and color, along with position and speed.
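Both motion blur and temporal aliasing reduce to the same displacement calculation: how far a particle travels during one exposure, and how far it travels between consecutive frames. The particle speed (0.5 m/s) and pixel scale (2 μm/pixel) below are illustrative assumptions for a back-of-the-envelope check, not values measured in this article:

```python
def displacement_px(speed_m_s, duration_s, um_per_px):
    """Particle displacement in pixels over a given duration."""
    return speed_m_s * duration_s * 1e6 / um_per_px

SPEED = 0.5        # m/s  (assumed particle speed)
PX_SCALE = 2.0     # um per pixel at the sensor (assumed)

# blur smeared across the image during one 18-us exposure
blur_px = displacement_px(SPEED, 18e-6, PX_SCALE)
# frame-to-frame travel at 52,000 fps (governs temporal aliasing/tracking)
step_px = displacement_px(SPEED, 1 / 52_000, PX_SCALE)
```

Under these assumptions, blur and inter-frame travel are both a few pixels, i.e., roughly half the ~10-pixel diameter of the smallest particle, which is why shorter exposures or higher frame rates become necessary as flow speed rises.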

Backend PC and Results. The system processed each frame in an average of 6.2 μs, which permits speeds of ~100 kfps at the selected resolution of 640 × 200. Most calculations were performed on unstitched images, and image classification was performed using selective ROI stitching. Because the main processing cycle runs on a single core, the remaining cores were used to perform classification and calculations and to display results. See Figure 4 for an overview of the hardware and software architecture.
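The division of labor described above, with a single-core main loop that never blocks and other cores handling the slower classification work, can be sketched with a standard producer/consumer pattern. This is an assumed structure for illustration, not the vendor's implementation; the queue sizes, the three-worker pool, and the toy intensity-based classifier are all placeholders:

```python
import queue
import threading

frames = queue.Queue(maxsize=1024)   # detections handed off by the main loop
results = []
results_lock = threading.Lock()

def classify(roi_mean):
    # stand-in for the real classifier: label a bead by mean ROI intensity
    return "clear" if roi_mean > 0.5 else "colored"

def worker():
    while True:
        item = frames.get()
        if item is None:             # sentinel: shut the worker down
            break
        frame_id, roi_mean = item
        label = classify(roi_mean)
        with results_lock:
            results.append((frame_id, label))
        frames.task_done()

pool = [threading.Thread(target=worker) for _ in range(3)]
for t in pool:
    t.start()

# "main loop": detection is cheap, so it only enqueues frames with a particle
for frame_id, roi_mean in [(0, 0.9), (1, 0.2), (2, 0.8)]:
    frames.put((frame_id, roi_mean))

frames.join()                        # wait for classification to drain
for _ in pool:
    frames.put(None)
for t in pool:
    t.join()
```

The key property is that the enqueue in the main loop costs far less than the 6.2-μs per-frame budget, so classification latency never backs up acquisition.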

Fig. 4 - Block diagram showing the hardware and software architecture in the real-time image cytometry setup. The machine vision camera streams images out from the microscope/cytometry setup and into the backend hardware. Note: FG in the schematic denotes “frame grabber.”

In this system, it was possible to store data in a 30-minute cyclic buffer while performing calculations, allowing the user to review portions of the content in real time. Image patches of the detected cells were stored in 8-bit HSV format in up to 256 GB of RAM to support the classification algorithms. Based on preliminary results, each particle and/or cell consumed an average of 2 KB in HSV format, so more than 500,000 samples fit in 1 GB. Such high sample counts enable very detailed classification and allow for parallel statistical calculations. The system can also be configured to save the image data as raw pixel values that can be read directly into software such as CellProfiler. 7
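The storage figures quoted above follow directly from the ~2 KB-per-sample footprint. A small calculation (interpreting "1 GB" as a binary gigabyte, which is an assumption on my part) reproduces the "more than 500,000 samples" claim and scales it to the full 256 GB of RAM:

```python
BYTES_PER_SAMPLE = 2048              # ~2 KB per detected particle, 8-bit HSV

def samples_per_gib(bytes_per_sample=BYTES_PER_SAMPLE):
    """Detected-particle image patches that fit in one binary gigabyte."""
    return (1 << 30) // bytes_per_sample

per_gib = samples_per_gib()          # 524,288 -> "more than 500,000 in 1 GB"
full_ram = 256 * per_gib             # capacity if all 256 GB held samples
```

Even reserving most of the RAM for the cyclic video buffer, tens of millions of per-cell patches remain addressable for the classification algorithms.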

With this architecture, it is possible to send event signals based on cell presence, cell count, and cell image/phenotype, in addition to many other identification triggers. Calibrated triggers that match an event with a time and position are also an option. This is ideal for researchers who would like to filter, destroy, and/or sort specific microparticles in real time. The maximum latency in the system is 300 μs, or ~15 frames at 52,000 fps. Since a particle or cell takes ~2,400 frames to cross the entire field of view, there is more than enough time to perform a real-time action on the cell and observe the result within the channel for validation and troubleshooting.
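The timing margin for acting on a cell is simply the transit time minus the detection latency, both expressed in frames. Using the figures from this article:

```python
def action_window_frames(latency_s, fps, transit_frames):
    """Frames still available to act on a cell after the system's latency."""
    latency_frames = latency_s * fps
    return transit_frames - latency_frames

# 300-us maximum latency, 52,000 fps, ~2,400-frame transit across the field
window = action_window_frames(300e-6, 52_000, 2_400)
```

With roughly 2,384 of the 2,400 transit frames left after the ~15-frame latency, a trigger fired at detection time can both act on the cell and record the outcome while the cell is still in view.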

Conclusion

The aim of this study was to set a benchmark for what is possible in real-time image cytometry. It is hoped that researchers will better understand what is possible for the next generation of biological applications that demand single-cell precision and rapid real-time results. This model system demonstrates a level of flexibility that may accommodate diverse cytometry applications. It is important to note that this technology is not limited to image cytometry but can be expanded to other applications that demand real-time analysis on the order of Gpixels•s-1.

Discussion of Fluorescence. In fluorescence applications, frame rates are typically restricted to below ~1,500 fps. As mentioned above, this is due to the scarcity of photons from the fluorescence event, which places a hard limit on acceptable exposure times. If such an experiment were attempted at the speeds used here, the signal-to-noise ratio (SNR) would likely be too low to generate an image of usable quality and clarity. And if the exposure time were increased to collect more photons and raise the SNR, the cells would exhibit severe motion blur. To overcome these issues, researchers may use image intensifiers or pair slower flow speeds with time-delayed integration (TDI).
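The exposure/SNR trade-off can be quantified with the shot-noise limit, where SNR scales as the square root of the collected photon count and the photon count scales linearly with exposure time. The absolute photon count below is an illustrative assumption; only the ratio matters:

```python
import math

def shot_noise_snr(photons):
    """Shot-noise-limited SNR: sqrt of the collected photon count."""
    return math.sqrt(photons)

# assumed photon yield at a fluorescence-friendly exposure (~1/1,500 s ~ 667 us)
photons_667us = 10_000
# shrinking the exposure to the 18 us used here scales photons linearly
photons_18us = photons_667us * 18 / 667

snr_penalty = shot_noise_snr(photons_667us) / shot_noise_snr(photons_18us)
```

Cutting the exposure from ~667 μs to 18 μs therefore costs roughly a factor of six in SNR before read noise is even considered, which is why intensifiers or TDI are needed to recover fluorescence signal at these frame rates.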

Machine Learning. Given that the software system provides acquisition and recording, real-time detection, and classification, software functions can be executed in parallel. As a result, this architecture is amenable to machine learning for determining trigger-based patterns or more complex conditions, and it can also be used to refine blob analysis and detection algorithms. The software allows external access, so it can be controlled by separate programs built in Python or similar languages, making it easier to integrate into other AI systems.
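An external controller typically drives such a system by exchanging serialized command messages. The sketch below is entirely hypothetical: the command name, field names, and newline-delimited JSON framing are illustrative assumptions, not the actual cytoTracker interface.

```python
import json

def encode_command(cmd, **params):
    """Serialize one control command as a newline-delimited JSON message.
    (Hypothetical protocol -- the real external interface may differ.)"""
    return json.dumps({"cmd": cmd, **params}, sort_keys=True) + "\n"

# e.g., an AI layer arming a color-based sort trigger with a latency budget
msg = encode_command("set_trigger", condition="color == 'red'", latency_us=300)
```

A newline-delimited, self-describing format like this is a common choice for cross-language control channels because a Python, MATLAB, or LabVIEW client can all produce and parse it without shared binaries.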

References

  1. Y. Han, et al., “Review: Imaging Technologies for Flow Cytometry,” Lab Chip, 2016, 16, 4639–4647.
  2. H. Mikami, et al., “High-Speed Imaging Meets Single-Cell Analysis,” Chem, 2018, 4 (11), 2278–2300.
  3. D. R. Gossett, et al., “Hydrodynamic Stretching of Single Cells for Large Population Mechanical Phenotyping,” PNAS, 2012, 7630–7635.
  4. M. Doan and A. E. Carpenter, “Leveraging Machine Vision in Cell-Based Diagnostics to Do More with Less,” Nature Materials, 2019, 18, 410–427.
  5. D. Di Carlo, “A Mechanical Biomarker of Cell State in Medicine,” Journal of Laboratory Automation, 2012, 17, 32–42.
  6. Y. J. Heo, et al., “Real-Time Image Processing for Microscopy-Based Label-Free Imaging Flow Cytometry in a Microfluidic Chip,” Scientific Reports, 2017, 7, 11651.
  7. T. R. Jones, et al., “CellProfiler Analyst: Data Exploration and Analysis Software for Complex Image-Based Screens,” BMC Bioinformatics, 2008, 9, 482.

This article was written by Kyle D. Gilroy, PhD, Field Applications Engineer for Vision Research, Wayne, NJ.