Scalable platforms and trends in medical imaging algorithms

This article discusses current trends in medical imaging algorithms, the fusion of imaging modes, and the scalable platforms used to implement these algorithms. Field programmable gate arrays provide data acquisition and co-processing support for scalable CPU platforms, making more complex imaging possible.

Medical imaging

The role of medical imaging technology in healthcare is becoming more and more important. The healthcare industry is striving to detect, and even predict, diseases while they are still at an early stage and to promote non-invasive treatments, while at the same time reducing the cost of diagnosis and treatment. The fusion of diagnostic imaging modes, together with continued progress in imaging algorithms, is the main factor driving the development of new instruments that can achieve these goals.

To provide the functionality needed to meet these healthcare industry goals, device developers are turning to scalable, commercial off-the-shelf (COTS) central processing unit (CPU) platforms that support field programmable gate arrays (FPGAs) for data acquisition and co-processing. To develop flexible and scalable medical imaging equipment efficiently, equipment developers must consider several factors. These factors include the development of imaging algorithms, the coordinated use of multiple imaging technologies (fusion of imaging modes), and the scalability of the platform.

The development of imaging algorithms requires advanced, intuitive modeling tools for the continuous improvement of digital signal processing (DSP) algorithms. These advanced algorithms in turn require a scalable system platform that can significantly improve image processing performance. Such scalable platforms should also allow smaller, more portable devices to be realized.

To achieve near-real-time analysis, the system platform must balance software (CPU cycles) and hardware (the number of configurable logic gates). These processing platforms must meet different performance and price points and must be able to cope with the differing requirements of multiple imaging technologies. FPGAs can be integrated easily into multi-core CPU platforms, providing DSP processing power for highly flexible systems and achieving the highest performance.

System architects and design engineers must quickly partition the algorithms across these platforms and then use advanced development tools and intellectual property (IP) libraries to debug them. This process accelerates platform deployment, thereby maximizing manufacturer profits.

Algorithm development

A good starting point is a review of algorithm trends for each imaging mode, including how FPGAs and IP can be used.

Magnetic resonance imaging (MRI) generates cross-sectional images of the human body. Three functions implemented in the FPGA are used to reconstruct a three-dimensional volume from these cross-sections. First, the fast Fourier transform (FFT) generates grayscale 2D slices, typically stored as matrices, from data in the frequency domain. Second, reconstruction of the three-dimensional volume involves interpolating between slices so that the slice spacing approximates the spacing between pixels, allowing the image to be viewed from any 2D plane. Third, iterative resolution sharpening is performed. This function uses a spatial de-blurring technique based on an iterative inverse filtering process to reduce noise while refocusing the image structure, greatly improving the overall visual diagnostic resolution of the cross-sections.
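
A minimal sketch of the first two stages, written in Python with NumPy and SciPy purely for illustration (the k-space array shape, zoom factor, and function names are assumptions; the iterative sharpening stage and the fixed-point FPGA implementation are omitted):

    # Illustrative MRI reconstruction sketch: FFT per slice, then inter-slice
    # interpolation so slice spacing approximates in-plane pixel spacing.
    import numpy as np
    from scipy.ndimage import zoom

    def reconstruct_volume(kspace_slices, slice_zoom=4.0):
        """kspace_slices: complex array of shape (n_slices, ny, nx)."""
        # 1) Inverse 2D FFT turns each frequency-domain slice into a
        #    grayscale image (magnitude of the complex result).
        slices = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace_slices,
                                                      axes=(-2, -1)),
                                     axes=(-2, -1)))
        # 2) Interpolate along the through-plane axis so the volume can be
        #    resliced in any 2D plane.
        volume = zoom(slices, (slice_zoom, 1.0, 1.0), order=3)
        return volume

    # Example with synthetic data: 16 slices of 256x256 k-space samples.
    kspace = (np.random.randn(16, 256, 256)
              + 1j * np.random.randn(16, 256, 256))
    vol = reconstruct_volume(kspace)
    print(vol.shape)          # (64, 256, 256) with slice_zoom=4.0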

Ultrasound. The grainy appearance of ultrasound images is a phenomenon called speckle. Speckle is caused by the interaction of many independent scatterers (similar to multipath radio-frequency reflections in the wireless field) and is multiplicative in nature. Speckle can be reduced as part of lossy compression. First, the logarithm of the image is taken, so that the speckle noise becomes additive with respect to the useful signal. Then lossy wavelet compression, as performed by a JPEG2000 encoder, minimizes the noise.
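
The log-then-wavelet idea can be sketched as follows in Python with PyWavelets; the wavelet family, decomposition level, and threshold value are illustrative assumptions, and a real JPEG2000 encoder would apply its own quantization rather than this explicit soft threshold:

    # Illustrative speckle-reduction sketch: the log transform makes the
    # multiplicative speckle additive, then wavelet thresholding suppresses it.
    import numpy as np
    import pywt

    def despeckle(image, wavelet="db4", level=3, thresh=0.1):
        log_img = np.log1p(image.astype(float))      # multiplicative -> additive
        coeffs = pywt.wavedec2(log_img, wavelet, level=level)
        # Keep the approximation band, soft-threshold the detail bands.
        new_coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(c, thresh, mode="soft") for c in band)
            for band in coeffs[1:]
        ]
        denoised_log = pywt.waverec2(new_coeffs, wavelet)
        return np.expm1(denoised_log)                 # back to linear scale

    speckled = np.random.gamma(shape=4.0, scale=0.25, size=(256, 256)) * 100.0
    clean = despeckle(speckled)
    print(clean.shape)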

X-ray. Motion correction for X-ray imaging of the coronary arteries is an algorithm that minimizes the impact of the cardiac and respiratory cycles, beating and breathing, on imaging. The motion of a 3D-plus-time coronary artery model is projected onto the 2D X-ray image, which drives a de-warping function (translation and scaling) that corrects for this motion and yields a sharper computed image.
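
A hedged sketch of the de-warping step, assuming the motion model has already produced per-frame translation and zoom parameters (the parameter names and the use of an affine resampling routine are illustrative; the article does not specify the implementation):

    # Illustrative de-warp: undo a known translation and zoom for one frame.
    import numpy as np
    from scipy.ndimage import affine_transform

    def dewarp(frame, zoom, shift_yx):
        """Invert a zoom-about-centre plus translation predicted by the
        3D+time coronary motion model for this projection."""
        centre = (np.array(frame.shape) - 1) / 2.0
        matrix = np.eye(2) * zoom                 # output->input scaling
        # affine_transform maps output coords o to input coords matrix @ o + offset
        offset = centre - matrix @ centre + np.asarray(shift_yx)
        return affine_transform(frame, matrix, offset=offset, order=1)

    frame = np.random.rand(512, 512)
    corrected = dewarp(frame, zoom=1.02, shift_yx=(3.5, -2.0))
    print(corrected.shape)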

Molecular imaging. Molecular imaging is the characterization and measurement of biological processes at the cellular and molecular level. Its purpose is to detect, image, and monitor diseased cells and molecules. For example, X-ray imaging, positron emission tomography (PET), and single-photon emission computed tomography (SPECT) can be combined so that low-resolution images of organ, cell, and molecular function are matched to anatomical features at resolutions as fine as 0.5 mm. The trend toward more miniaturized equipment and the exploration of new algorithms push performance requirements beyond what multi-core CPUs alone can deliver, so these compact systems must use FPGA technology.

Fusion of imaging modes. The drive toward early disease diagnosis and non-invasive treatment is promoting the combination of imaging technologies. This can be seen, for example, in PET/computed tomography (CT) systems and X-ray therapy/CT equipment. Meeting current performance requirements calls for higher-resolution images, which require detector microarrays with sophisticated geometries plus FPGAs to preprocess the photon and electronic signals. After preprocessing, these signals are combined and processed by the CPU and FPGA coprocessor to generate detailed body images.

Non-real-time (NRT) image fusion, or image registration, is often used to align and compare organ-function images and anatomical images acquired at different times. However, NRT image registration is problematic because of changes in the patient's position, differences in the underlying scan contours, and the natural movement of the patient's internal organs. FPGA processing enables real-time fusion of PET and CT, allowing organ-function images and anatomical images to be collected and fused during a single imaging session rather than superimposed afterward, as in the past. The fused image provides better clarity and positioning accuracy for surgical treatment.
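
A common similarity measure for registering two different modalities is mutual information. The sketch below, in Python with NumPy, illustrates the idea with an exhaustive search over small integer translations (a simplifying assumption); it is not any particular vendor's implementation:

    # Illustrative rigid registration sketch: find the integer translation of a
    # PET image that maximizes mutual information with a CT image.
    import numpy as np

    def mutual_information(a, b, bins=32):
        hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = hist / hist.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    def register_translation(ct, pet, max_shift=8):
        best, best_score = (0, 0), -np.inf
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                shifted = np.roll(np.roll(pet, dy, axis=0), dx, axis=1)
                score = mutual_information(ct, shifted)
                if score > best_score:
                    best, best_score = (dy, dx), score
        return best

    ct = np.random.rand(128, 128)
    pet = np.roll(ct, (3, -5), axis=(0, 1)) + 0.05 * np.random.rand(128, 128)
    print(register_translation(ct, pet))   # expected near (-3, 5)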

Image processing used to guide physicians during surgery involves registering preoperative CT or MRI images with real-time 3D ultrasound or X-ray images, promoting non-invasive treatments such as ultrasound therapy, magnetic resonance-guided intervention, and X-ray therapy. In this area, various algorithms have been developed to provide optimized registration results for particular combinations of imaging modalities and treatments.

In this type of fused-mode system, FPGAs with high-speed serial interconnects reduce the interconnect requirements for connecting the data-acquisition functions to the post-processing portion of the system. By eliminating the need for additional circuit boards and cables, this greatly reduces the cost of the overall system.

Imaging algorithms

Several different imaging algorithms are commonly used in FPGAs. These algorithms include enhancement, stabilization, wavelet analysis and distributed vector processing.

Image enhancement algorithms usually use convolution, or linear filtering. A high-pass-filtered image and a low-pass-filtered image are weighted and linearly combined, via matrix multiplication, to produce an image with enhanced detail and reduced noise.
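
A minimal sketch of this kind of enhancement, in the style of unsharp masking, is shown below in Python with SciPy; the Gaussian kernel width and the weights are illustrative assumptions:

    # Illustrative enhancement: weighted combination of low-pass and high-pass images.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def enhance(image, sigma=2.0, w_low=1.0, w_high=1.5):
        low = gaussian_filter(image, sigma)   # low-pass: smooths noise
        high = image - low                    # high-pass: detail and edges
        return w_low * low + w_high * high    # boost detail, keep smoothed base

    img = np.random.rand(256, 256)
    print(enhance(img).shape)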

Video image stabilization normalizes rotation and scaling effects across the video sequence so that noise is ultimately balanced between consecutive frames. In addition, the algorithm smooths the jagged edges of still images extracted from the video and can correct image shake to within about one tenth of a pixel.
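
One way to illustrate the shake-correction part is phase-correlation motion estimation followed by a smoothed, sub-pixel corrective shift. The Python/NumPy sketch below handles translation only (not the rotation and scaling normalization described above) and uses assumed smoothing parameters:

    # Illustrative stabilization sketch: estimate per-frame translation with
    # phase correlation, smooth the trajectory, and apply sub-pixel correction.
    import numpy as np
    from scipy.ndimage import shift as subpixel_shift, uniform_filter1d

    def phase_correlation(ref, cur):
        R = np.fft.fft2(ref) * np.conj(np.fft.fft2(cur))
        corr = np.abs(np.fft.ifft2(R / (np.abs(R) + 1e-12)))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Wrap peak indices to signed shifts.
        dy = dy - corr.shape[0] if dy > corr.shape[0] // 2 else dy
        dx = dx - corr.shape[1] if dx > corr.shape[1] // 2 else dx
        return np.array([dy, dx], dtype=float)

    def stabilize(frames, smooth=5):
        motions = np.array([phase_correlation(frames[i - 1], frames[i])
                            for i in range(1, len(frames))])
        trajectory = np.cumsum(motions, axis=0)           # accumulated motion path
        smoothed = uniform_filter1d(trajectory, smooth, axis=0)
        corrections = smoothed - trajectory               # jitter to remove
        out = [frames[0]]
        for frame, corr in zip(frames[1:], corrections):
            out.append(subpixel_shift(frame, corr, order=3))  # sub-pixel shift
        return out

    frames = [np.random.rand(128, 128) for _ in range(10)]
    print(len(stabilize(frames)))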

Wavelet analysis algorithms are designed to help extract event information from a signal. Wavelet analysis uses a windowing technique, varying the size of the window, to analyze short sections of the signal. To obtain higher accuracy, wavelet analysis uses longer time intervals for low-frequency information and shorter time intervals for high-frequency content. Applications of wavelet analysis include detection of discontinuities and breakpoints, self-similarity checking, signal suppression, signal or image noise reduction, image compression, and fast multiplication of large matrices.
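
As one example of the event-detection use, the sketch below (Python with PyWavelets; the test signal, wavelet family, and decomposition level are assumptions) locates a breakpoint in a signal from the finest-scale detail coefficients:

    # Illustrative wavelet analysis sketch: detect a discontinuity from the
    # first-level detail coefficients of a discrete wavelet transform.
    import numpy as np
    import pywt

    t = np.linspace(0, 1, 1024)
    signal = np.sin(2 * np.pi * 5 * t)
    signal[600:] += 0.5                      # step discontinuity at sample 600

    coeffs = pywt.wavedec(signal, "db2", level=4)
    detail1 = coeffs[-1]                     # finest-scale detail coefficients
    # The discontinuity shows up as a spike in the fine-scale details.
    estimate = int(round(np.argmax(np.abs(detail1)) * len(signal) / len(detail1)))
    print(estimate)                          # approximately 600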

The more recent S-transform algorithm combines the advantages of the FFT and the wavelet transform: it reveals frequency changes over space and time. Applications include texture analysis and noise filtering. The S-transform is computationally intensive and runs very slowly on a traditional CPU. Distributed vector processing solves this problem: by combining vector and parallel computing inside the FPGA, processing time can be shortened by a factor of 25.
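
A compact frequency-domain formulation of the discrete S-transform can be sketched as below in Python with NumPy; this is the textbook FFT-based version, written for clarity rather than speed, and the per-frequency loop is exactly the kind of work a vectorized, parallel FPGA implementation accelerates:

    # Illustrative discrete S-transform: each row n is the inverse FFT of the
    # spectrum shifted by n and weighted by a frequency-scaled Gaussian window.
    import numpy as np

    def s_transform(x):
        N = len(x)
        X = np.fft.fft(x)
        m = np.fft.fftfreq(N) * N                      # symmetric frequency index
        S = np.zeros((N // 2 + 1, N), dtype=complex)
        S[0, :] = np.mean(x)                           # DC row: signal mean
        for n in range(1, N // 2 + 1):
            window = np.exp(-2.0 * np.pi**2 * m**2 / n**2)
            S[n, :] = np.fft.ifft(np.roll(X, -n) * window)
        return S                                       # rows: frequency, cols: time

    fs = 256
    t = np.arange(fs) / fs
    x = np.where(t < 0.5, np.sin(2 * np.pi * 20 * t), np.sin(2 * np.pi * 60 * t))
    S = s_transform(x)
    print(S.shape)                                     # (129, 256)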

One method of early cancer detection exploits the ability of malignant tumors to recruit a new blood supply. A digital sensor detects the infrared energy released by the patient's body and can therefore detect the slight difference between the increase in blood flow caused by cancer and normal conditions. A typical implementation of this function is based on a programmable pulse array, realized with a general-purpose workstation and a dedicated FPGA-based hardware engine. The FPGA engine can accelerate the core algorithm to nearly 1,000 times the speed achievable on a current workstation.

These complex imaging algorithms require a set of FPGA building-block functions. For example, CT reconstruction requires functions such as interpolation, fast Fourier transform, and convolution. In ultrasound imaging, processing includes color-flow processing, convolution, beamforming, and elasticity estimation. General imaging algorithms share many similar functions, such as color space conversion, graphics overlay, 2D median filtering, scaling, frame and field conversion, contrast enhancement, sharpening, edge detection, thresholding, translation, polar and Cartesian conversion, non-uniformity correction, and pixel replacement.
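
As an example of one such building block, a polar-to-Cartesian scan conversion of the kind used for ultrasound display can be sketched in Python as below; the sector geometry and output size are assumptions for illustration:

    # Illustrative polar-to-Cartesian scan conversion for a sector ultrasound image.
    import numpy as np
    from scipy.ndimage import map_coordinates

    def scan_convert(polar, max_angle=np.pi / 4, out_size=512):
        n_r, n_theta = polar.shape                      # rows: range, cols: beam angle
        y, x = np.mgrid[0:out_size, 0:out_size]
        cx = (out_size - 1) / 2.0
        r = np.hypot(x - cx, y)                         # apex at top centre
        theta = np.arctan2(x - cx, y)                   # angle from the vertical
        # Map Cartesian pixels back to (range, angle) sample coordinates.
        r_idx = r / r.max() * (n_r - 1)
        t_idx = (theta + max_angle) / (2 * max_angle) * (n_theta - 1)
        out = map_coordinates(polar, [r_idx, t_idx], order=1, cval=0.0)
        out[np.abs(theta) > max_angle] = 0.0            # blank outside the sector
        return out

    polar_frame = np.random.rand(256, 128)              # 256 range samples, 128 beams
    print(scan_convert(polar_frame).shape)              # (512, 512)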

Scalable platform

In the past, many imaging systems were built as proprietary computer systems. With the advent of today's high-performance commercial off-the-shelf (COTS) CPU boards, system engineers can implement designs in more creative ways. Although NRT processing of many algorithms is acceptable in software alone, real-time image processing still requires hardware assistance. Current FPGAs include built-in DSP blocks, high-bandwidth memory blocks, and large programmable logic arrays, making them well suited to provide this hardware assistance.

Altera Corporation (San Jose) has been working closely with its partners to provide combined FPGA co-processing and COTS CPU solutions. For single-board computers (SBCs) based on Intel and AMD processors, Altera's Stratix II GX FPGAs, with built-in serializer/deserializer circuitry, can directly support PCI Express-compatible coprocessor boards for algorithm offload. For AMD single-board computers with dual sockets, XtremeData (Schaumburg, Illinois, USA) provides a coprocessor daughtercard that plugs directly into an AMD Opteron processor socket, giving a first-class CPU + FPGA processing solution (see Figure 1). A four-socket AMD single-board computer can provide combinations of multiple CPUs and FPGA coprocessors (1+3, 2+2, or 3+1) to improve the performance of algorithm-intensive applications. Ultimate platform scalability, however, is achieved by using multiple 1U server blades, each executing a CPU + FPGA coprocessor solution.

The application speed-up achieved on these platforms depends on the algorithm: the more of an algorithm's parallel calculations that can be loaded into the FPGA, the faster the overall execution. For example, when FPGA-based hardware acceleration is applied to a CT imaging algorithm, adding an FPGA coprocessor to each 3 GHz CPU makes the entire application run roughly 10 times faster. As a result, the power consumption, size, and cost of the system are significantly reduced.
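
The scaling intuition follows Amdahl's law. The short calculation below uses illustrative numbers (assumptions, not measurements from the article) to show how a large offloadable fraction combined with a fast coprocessor yields roughly the order of speed-up quoted above:

    # Amdahl's-law style estimate of whole-application speedup when part of an
    # algorithm is offloaded to an FPGA coprocessor.
    def overall_speedup(offload_fraction, coprocessor_speedup):
        return 1.0 / ((1.0 - offload_fraction)
                      + offload_fraction / coprocessor_speedup)

    # Example (illustrative numbers): 95% of the computation offloaded and run
    # 20x faster on the FPGA gives roughly a 10x whole-application speedup.
    print(round(overall_speedup(0.95, 20.0), 1))   # ~10.3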

Development methodology

Such a discussion naturally includes the methodology of algorithm development and the corresponding implementation tools.

Algorithm tools. Imaging system architects use advanced software tools to model different algorithms and evaluate the results. Widely used general-purpose tools for digital signal processing are the MATLAB processing engine and the Simulink graphical simulation environment from MathWorks (Natick, Massachusetts, USA). Most original equipment manufacturers (OEMs) and medical design houses use MATLAB to develop algorithms quickly and precisely for tasks such as digital image processing, quantitative image analysis, pattern recognition, digital image encoding and compression, forensic image processing, and 2D wavelet transforms. In addition to algorithm development, MATLAB can be used to simulate the fixed-point arithmetic commonly used in FPGAs, and an optional toolkit can generate C code that runs on a general-purpose CPU or an FPGA.
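
The article names MATLAB for fixed-point modeling; purely to illustrate the underlying idea (and not MATLAB's API), the Python sketch below quantizes a small filter computation to a signed 16-bit fixed-point format to preview the precision an FPGA datapath would see:

    # Illustrative bit-true sketch: quantize data and coefficients to a signed
    # 16-bit fixed-point format and compare against the floating-point result.
    import numpy as np

    FRAC_BITS = 15                                   # Q1.15-style format

    def to_fixed(x):
        scaled = np.round(np.asarray(x) * (1 << FRAC_BITS))
        return np.clip(scaled, -(1 << 15), (1 << 15) - 1).astype(np.int64)

    def fir_fixed(samples, taps):
        acc = np.convolve(to_fixed(samples), to_fixed(taps))   # wide accumulator
        return acc / float(1 << (2 * FRAC_BITS))               # back to real units

    samples = 0.5 * np.sin(2 * np.pi * 0.01 * np.arange(256))
    taps = np.array([0.25, 0.5, 0.25])
    error = fir_fixed(samples, taps) - np.convolve(samples, taps)
    print(np.max(np.abs(error)))                     # small quantization error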

Algorithm partitioning and debugging. Once algorithm development is complete, the system architect must decide how to partition functions between the CPU and the FPGA to provide the best overall solution, one that balances performance, cost, reliability, and product life. Equipment architects report that partitioning and debugging the many units of a high-performance hardware system is a challenge. In the past, many designs used a pipelined approach in the FPGA; that is, the algorithm was divided into functions that execute in a sequential pipeline. Debugging the pipeline can account for 90 percent of the integration work, because the execution time of each function must be balanced for maximum computational throughput, and visibility into local memory, together with its latency, is limited.

The solution is a more software-centric system design based on a distributed coprocessor computing model. In this model, each coprocessor function is an executor (that is, a functional sub-processor) with the ability to transfer message-based control and data between units. Full switching among all memory, the CPU, and the sub-processors provides complete observability, making debugging easier. Message passing occurs internally between FPGA sub-processors and externally among the other CPUs and coprocessors in the system.
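
The distributed-coprocessor idea can be mimicked at a high level in software. The sketch below, using plain Python queues, is purely a conceptual analogy (not Altera's Avalon fabric or any FPGA tool flow) showing sub-processors exchanging control and data messages through an observable central switch:

    # Conceptual sketch of the message-passing model: functional sub-processors
    # exchange control/data messages through a central, observable "switch".
    import queue

    class Switch:
        def __init__(self):
            self.ports = {}
            self.trace = []                       # every message is observable here

        def attach(self, name):
            self.ports[name] = queue.Queue()

        def send(self, src, dst, kind, payload):
            self.trace.append((src, dst, kind))
            self.ports[dst].put((src, kind, payload))

    def filter_unit(switch):
        src, kind, data = switch.ports["filter"].get()
        result = [x * 0.5 for x in data]          # stand-in for a DSP function
        switch.send("filter", "host", "data", result)

    sw = Switch()
    for name in ("host", "filter"):
        sw.attach(name)
    sw.send("host", "filter", "data", [1.0, 2.0, 3.0])
    filter_unit(sw)
    print(sw.ports["host"].get())                 # ('filter', 'data', [0.5, 1.0, 1.5])
    print(sw.trace)                               # full message history for debugging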

Altera's Avalon switch fabric inside the FPGA and its system-on-a-programmable-chip (SOPC) integration tools automatically generate a flexible crossbar switch structure between all functional units. Pre-tested IP provides interfaces from the FPGA to the host CPU and from the FPGA to dual in-line memory module (DIMM) memory. A pre-tested message-network infrastructure supports communication control among the host CPU, the FPGA sub-processors, and the FPGA memory controller. Combining message passing with the full switch simplifies debugging and gives the development process maximum flexibility. Finally, data channels can be defined (and redefined) in software during execution, so that data can be intercepted or redirected, improving observability during system integration and debugging.

Design tools and IP. Although tools such as MATLAB are well suited to algorithm development in software, they do not yet directly support implementation in FPGAs. Designers can accelerate FPGA implementation by using electronic design automation (EDA) tools and IP.

Video and image processing suites and DSP libraries provide IP building blocks that accelerate the development and implementation of complex imaging algorithms. These video and image processing blocks, together with other IP modules and reference designs (including an in-phase/quadrature (IQ) modem, JPEG2000 compression, forward and inverse fast Fourier transforms, and edge detection), give designers IP they can use to quickly complete FPGA implementations of computationally intensive tasks.

Conclusion

As the baby-boom generation ages, efforts are under way to find new methods of diagnosing and treating very common diseases such as heart disease and cancer, including early detection and minimally invasive surgery. The combination of diagnostic imaging technologies and the development of the associated algorithms are driving the development of new equipment to meet these patients' needs. Advanced algorithms require a scalable system platform that can significantly improve image processing performance.

FPGAs integrated into COTS multi-core CPU platforms provide powerful digital signal processing for highly flexible, high-performance systems. Advanced development tools and IP libraries are needed to speed the implementation of these complex imaging algorithms on such platforms, and suitable software tools and IP libraries are now available.
