Ultrasound imaging accompanies each of us, from several months before we are born, throughout our lives. To monitor our development, 17 million ultrasound examinations are performed each year in France in the private sector, and around twice that number if we add those done in the public sector. Thus, ultrasound imaging is the most widely used type of imaging for diagnostics after radiography. The world market for ultrasonography is still growing, and is worth an estimated 4.9 billion dollars (data from 2009). Ultrasonography plays a central role, both in hospitals and in doctors’ surgeries. The reasons for its ever-growing success are mainly its portability, its reasonable cost in comparison with other methods, its real-time performance and the fact that it uses non-ionizing waves. Historically, ultrasonography has been associated with obstetrics, for pregnancy monitoring, and with cardiology. Today, ultrasound medical imaging covers a range of specialties unmatched by any other type of imaging. Imaging of the digestive system, breasts, liver (with elastography), thyroid or prostate are examples of the most commonly performed examinations [HTT 09].
In the future, significant progress is expected which will enable a doctor to reach a more certain diagnosis in a shorter period of time. For this to happen, technological innovations will be accompanied by methodological developments in signal and image processing. 3D or 4D imaging techniques, particularly in the area of cardiovascular care, are likely to emerge in the near future, with the development of new probes and new modes of acquisition, e.g. using sparse sampling techniques. At the same time, progress in modeling, simulation and image processing will be at the heart of new quantitative analysis software built into ultrasound machines. The diagnosis of myocardial infarction, heart valve replacement and the detection of atherosclerosis are examples of medical procedures for which these innovations will be essential (Figure 1.1).
Other means of imaging based on estimation of the physical properties of healthy and diseased tissues will also supplement conventional imaging tools. Different modes of quasi-static, dynamic or transient elastography should be able to quantify the elasticity of tissues for diagnosing liver disorders or quantify the development of cancer, for instance.
In this chapter, we are going to present both the physical basics of ultrasound (US) imaging and the main advances expected of the echography of tomorrow. The chapter begins with a presentation of the physical principles upon which ultrasound imaging is based. Then, we shall detail the different modes of imaging in ultrasound systems: B-mode, M-mode, Doppler modes, contrast and harmonic imaging. The hardware aspects will also be touched upon, with a discussion of the different types of linear or sectorial probes used in clinical practice, depending on the compromise between resolution and penetration. We shall then turn to the statistical analysis of the US image, using the properties of ultrasound speckle. This knowledge helps to simulate realistic images and sequences, which are useful for validating image formation models and processing methods. The final part of this chapter will be given over to the advances in ultrasound image acquisition and processing which will serve as a diagnostic aid to doctors. We shall present the most recent probe technologies, which are based on new materials to transmit and receive ultrasound in 2D and 3D with sensor matrices. These probes can rely on innovative methods of image formation such as synthetic aperture, “tagging” or sparse sampling. We shall also illustrate the contribution made by new techniques such as elastography, nonlinear imaging or parametric imaging, and the performance of real-time tracking methods and, more generally, motion estimation. The end of the chapter will be devoted to multimodality imaging. Using the example of bi-modal US/optical imaging, we shall show that combining the anatomical information provided by the US image with the functional or metabolic information provided by other modes of imaging facilitates a more effective aid to diagnosis and to monitoring the evolution of diseases.
US waves are pressure waves whose frequency is greater than the maximum audible frequency of 20 kHz. These US waves are mechanical vibrations. They need a source to give rise to them, and a support medium (a solid, liquid or gas) in order to propagate. If the vibration generated by the source is oscillating, the particles of the medium initially at rest will oscillate around their equilibrium position when the US wave passes through. There are two modes of oscillation: a longitudinal mode where the particles oscillate along the direction of the wave’s propagation, forming a longitudinal or compression wave, and a transversal mode, where the particles oscillate in the direction transversal to the direction of the wave’s propagation, forming a transversal or shear wave [SZA 04].
In most diagnostic echogram exams, it is soft tissues which are explored, and the range of frequencies of oscillation of the wave is between 2 and 20 MHz for the most common external applications and 30 to 50 MHz for internal explorations, such as intra-vascular imaging. At the frequencies used for diagnostic imaging, the shear waves are greatly attenuated and can therefore be neglected. In addition, as soft tissues are composed primarily of water, the propagation of the US wave in these tissues is very similar to its propagation in liquids. In a liquid, the particles oscillate along the direction of propagation of the wave, forming a longitudinal wave. In water, for the applications in question here, the movement of the particles is roughly a few tens of nanometers and their velocity is a few cm/s, whereas the phase velocity c at which the wave propagates in the medium is around 1500 m/s.
When a sinusoidal disturbance propagates in a liquid medium, regions of compression and dilatation form in the medium when the wave passes through. This phenomenon of compression and dilatation is periodic and is observable in two different ways: at a given time or at a given position. It is due to the displacement of the particles. Particle displacement is greater in the dilatation zone than in the compression zone. The periodicity of the displacement is called the wavelength λ if we observe that displacement as a function of the position at a given time, and the period T if we observe it as a function of time at a given position. These two values are linked by the phase velocity: λ = cT, or indeed c = λf, where f = 1/T is the frequency of the oscillations emitted by the source.
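As a quick numerical check of the relation λ = cT = c/f, a short sketch (assuming the nominal phase velocity of 1500 m/s quoted above):

```python
C_WATER = 1500.0  # assumed phase velocity in water/soft tissue (m/s), as in the text

def wavelength(frequency_hz: float, c: float = C_WATER) -> float:
    """Wavelength: lambda = c / f = c * T."""
    return c / frequency_hz

# At 5 MHz, one wavelength in soft tissue is about 0.3 mm
lam = wavelength(5e6)
print(f"{lam * 1e3:.2f} mm")  # -> 0.30 mm
```

This sub-millimetric wavelength is what makes millimetric resolution possible at diagnostic frequencies.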
In order to study the propagation of a US wave, we need to distinguish two types of media: isotropic and anisotropic media. An isotropic medium is one which exhibits no single prevailing direction of propagation. This means that it is the source generating the initial disturbance which imposes the direction of propagation, rather than the medium. Conversely, with an anisotropic medium it is the medium which imposes the direction of propagation. Such is the case, for instance, in most tissues with an oriented fibrous structure.
As we have just seen, there are two types of waves which are distinguished by their mode of propagation. These waves are also differentiated by the shape of what is called their wavefront, i.e. the set of points of a medium which simultaneously experience the same change in pressure as the wave passes. Thus, waves can be classified into three wave shapes: plane waves, whose wavefronts are parallel planes perpendicular to the direction of propagation; cylindrical waves, whose wavefronts are coaxial cylinders; and spherical waves, emitted by a point source, whose wavefronts are concentric spheres.
In reality, any given wave is a combination of the three forms described above. For reasons of simplicity, we shall only examine the interaction of a plane wave with the medium. By way of example, consider the configuration represented in Figure 1.2. A point source generates a spherical wave. Target 1, which is “near” to the source, is then subjected to the influence of that spherical wave. That is, if the dimension of that target is larger than the wavelength, not all the points on the anterior face are subject to the same pressure at the same time. On the other hand, as regards Target 2, which is a long way from the source, the wavefront which reaches this target can be considered to be plane.
We consider a perfectly elastic medium, i.e. a medium which, when subjected to a stress, deforms with no internal friction (with no losses) and regains its original form exactly when the stress is removed. As the wave propagates in this elastic and isotropic medium, it locally imposes a stress or pressure causing a displacement of the particles as the wave passes, and a deformation of the medium. In echography, the pressure generated by the source is approximately 1 MPa, causing a local displacement of the particles of a few tens of nanometers. Hence, the displacements are sufficiently small for us to consider the stress/strain relation to be linear. That is, for a wave propagating in direction z, it is written as follows:
where Kzz is the longitudinal stress in direction z and Kyz and Kxz are the shear stresses in directions y and x. εzz is the longitudinal strain in direction z, and εyz and εxz the shear strains in directions y and x.
The strains are calculated on the basis of the particle displacements (U, V, W) in the three directions x, y and z.
The constants v and μ are the Lamé parameters. They depend on the propagation medium and are linked to usual values such as Young’s modulus (E in N/m2 or Pa), the bulk modulus (K in N/m2 or Pa) or the compressibility coefficient (β in m2/N or Pa-1).
The wave equation in direction z is obtained by applying Newton’s second law; the sum of the forces is equal to the product of the mass and the acceleration caused by that force. Thus, by summing the volumetric forces in direction z, we obtain:
where t is the time and ρ the density of the medium.
If we take into account the same initial remark as in the case of the application of US waves to soft tissues – that the shear waves are greatly attenuated and are therefore negligible – the wave equation is reduced to the equation of the compression wave:
where c = √((v + 2μ)/ρ) ≈ √(K/ρ) = 1/√(βρ) is the phase velocity of the compression wave.
In soft tissues, K ≈ 2.2 GPa and μ ≈ 0.1 MPa. Note that the phase velocity depends only on the compressibility (1/K) and density of the medium.
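With the values just quoted for soft tissues, the relation c ≈ √(K/ρ) can be checked numerically (the density ρ ≈ 1000 kg/m3 is an assumption, close to that of water):

```python
import math

K = 2.2e9     # bulk modulus of soft tissue (Pa), from the text
rho = 1000.0  # assumed density (kg/m^3), close to water

c = math.sqrt(K / rho)  # compression-wave phase velocity
print(f"c = {c:.0f} m/s")  # close to the ~1500 m/s quoted earlier for water
```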
In the case of a sinusoidal particle displacement of amplitude W0 with angular frequency ω = 2πf, the solution to the wave equation is: w(z, t) = W0 exp[j(ωt ∓ kz)]
with k = ω/c being the wavenumber. The sign (−) indicates a progressive plane wave, moving in the direction + z, and vice versa.
Here, we shall retain the hypothesis of a sinusoidal excitation source giving rise to a plane wave, propagating in direction z. The particle velocity is obtained from the time-derivative of the particle displacement: uz = ∂w/∂t = jωw. The velocity is therefore out of phase by 90° in relation to the displacement. The stress Kzz expressed above is linked to the pressure p by the relation p = −Kzz.
If we differentiate the displacement of the particles generated by a compression wave, we have: p = ±jk(v + 2μ)W = ±jωρcW = ±ρcuz. Note that the pressure is linked to the particle velocity by the term Z = ρc, which represents the acoustic impedance expressed in kg.m-2.s-1 or “rayl”, in tribute to Lord Rayleigh for his work on acoustic waves. Acoustic impedance is an important parameter in ultrasound imaging because it determines the amplitude of the echoes registering on the echogram.
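For instance, the impedance of water and the pressure amplitude associated with a given particle velocity via p = Z·u can be sketched as follows (the 3 cm/s particle velocity is an illustrative value):

```python
rho = 1000.0  # density of water (kg/m^3)
c = 1500.0    # phase velocity in water (m/s)

Z = rho * c   # acoustic impedance (kg.m-2.s-1, i.e. rayl)
print(f"Z = {Z:.1e} rayl")  # 1.5e6 rayl for water

# Plane-wave pressure amplitude for a particle velocity u: p = Z * u
u = 0.03                         # 3 cm/s, an illustrative particle velocity
p = Z * u
print(f"p = {p / 1e3:.0f} kPa")  # -> 45 kPa
```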
The acoustic intensity is the total energy, per unit time, of a US wave traversing the unit of surface perpendicular to the direction of propagation of the wave. The intensity is expressed in W/m2 or in mW/cm2 for medical applications. In the case of a sinusoidal plane wave, we express the average intensity over a period as a function of the maximum pressure p0: I = p0²/(2ρc)
In echography, the waves are usually brief pulses comprising a few oscillations, and the pressure field varies greatly depending on the spatial position, i.e. the distance between the emitting source and the point of observation. In general, the pressure field is emitted by a focused source, and the maximum intensity is experienced at a distance corresponding to the focal point. For these reasons, other definitions of intensity are used.
By defining an instantaneous intensity, dependent on time and on the position in the pressure field, i(t) = p(t)u(t) = p(t)²/(ρc), the intensity can be averaged over the pulse repetition period TPRF – this is Ispta (spatial peak temporal averaged intensity): Ispta = (1/TPRF) ∫[0, TPRF] i(t) dt
The intensity can also be averaged over the duration τ of the emission – this is Isppa (spatial peak pulse averaged intensity): Isppa = (1/τ) ∫[0, τ] i(t) dt
The maximum intensity Im can be calculated between two times around the maximum pressure. Thus, for a sinusoidal pulse, Im is calculated over a half-period around the maximum pressure.
EXAMPLE.– For five periods of oscillation at 5 MHz, the pulse duration is τ = 1 μs. The pulse repetition period (TPRF) is limited by the time taken for the wave to propagate there and back. For a there-and-back distance of 15 cm, the propagation time is 100 μs, considering a velocity c = 1500 m/s. The pulse repetition period can therefore be no less than 100 μs in this example. For a pulse repetition frequency of 5 kHz (TPRF = 200 μs) and a probe emitting a sinusoidal plane wave with a rectangular envelope, a pulse duration τ = 3 μs at 5 MHz and a maximum pressure of 500 kPa, the intensities are
and
with the pulse envelope being rectangular in the example: Im = Isppa. In practice, the envelope is often Gaussian in form, which yields Im > Isppa.
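The figures of this example can be reproduced with a few lines of code (a sketch assuming a rectangular envelope and an impedance ρc = 1.5 MRayl):

```python
rho_c = 1.5e6    # assumed acoustic impedance (rayl)
p0 = 500e3       # maximum pressure (Pa)
tau = 3e-6       # pulse duration (s)
t_prf = 200e-6   # pulse repetition period (s)

# Average intensity of a sinusoid over one period: p0^2 / (2*rho*c)
i_pulse = p0**2 / (2 * rho_c)   # W/m^2, constant over the rectangular pulse

i_sppa = i_pulse                # averaged over the pulse duration only
i_spta = i_pulse * tau / t_prf  # averaged over the repetition period

print(f"Isppa = {i_sppa / 1e4:.2f} W/cm^2")         # ~8.33 W/cm^2
print(f"Ispta = {i_spta / 1e4 * 1e3:.0f} mW/cm^2")  # ~125 mW/cm^2
```

The duty cycle τ/TPRF = 1.5% explains why Ispta is so much smaller than Isppa.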
Approval of an ultrasound system by the “United States Food and Drug Administration” (F.D.A.) is based on a series of intensity values (Table 1.1 and Table 1.2).
The acoustic power P is linked to the intensity by the relation i(t) = dP/dA = p(t)u(t), where A represents the unit of surface area.
Ultrasound systems refer to the Mechanical Index (MI) to quantify the level of emission. This parameter is defined as the ratio of the maximum amplitude of the pressure pulse emitted (in MPa) to the square root of the frequency (in MHz): MI = p/√f. It usually varies between 0 and 2, with a value of 2 corresponding to an intensity Ispta of 720 mW/cm2.
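As an illustration, the MI of the 500 kPa, 5 MHz pulse from the earlier example would be (a direct sketch of the stated definition):

```python
import math

def mechanical_index(p_peak_mpa: float, f_mhz: float) -> float:
    """MI = peak pressure amplitude (MPa) / sqrt(frequency (MHz))."""
    return p_peak_mpa / math.sqrt(f_mhz)

mi = mechanical_index(0.5, 5.0)
print(f"MI = {mi:.2f}")  # ~0.22, well below the upper limit of 2
```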
A US imaging system requires a sensor – usually piezoelectric in nature – to convert electrical signals into US waves and vice versa. Thus, the sensor generates a US beam, and converts the US pressure field it receives into electrical signals. To acquire 2D or 3D ultrasound data, the beam needs to be moved in one or more directions. In echography, the displacement of the beam is controlled electronically, meaning that the piezoelectric elements are grouped into subsets, with each active subset able to emit a US beam and receive waves.
For instance, with a probe comprising 128 elements (channels), the array is commonly grouped into active sets of 32, with a sweep of 1 element to acquire each new scan line, giving 128 − 32 + 1 = 97 lines. This means we need to acquire 32 × 97 = 3104 raw signals to form an image comprising 97 radio-frequency (RF) signals.
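The arithmetic of this acquisition scheme can be sketched generically:

```python
def scan_line_count(n_elements: int, aperture: int, step: int = 1) -> int:
    """Number of scan lines when sweeping an active aperture across the array."""
    return (n_elements - aperture) // step + 1

n_lines = scan_line_count(128, 32)  # 97 scan lines
n_raw = 32 * n_lines                # 3104 raw signals
print(n_lines, n_raw)  # -> 97 3104
```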
The choice of a probe is determined by its resonant frequency and its bandwidth. A US pulse has a broad spectrum which will shift toward low frequencies during the course of propagation because the acoustic medium behaves like a low-pass filter. In view of the medium being observed, we need to use a probe with an appropriate resonant frequency and bandwidth. For instance, when exploring the breast or the heart, the resonant frequency of the transducer is respectively 7.5 MHz and 3.5 MHz.
Three other parameters are very important for appreciating the quality of an echographic probe: the axial, lateral and azimuth resolutions of the US beam.
– The axial resolution is defined as the minimum distance (∆z) perceptible by the probe between two sufficiently close reflective structures in planes perpendicular to the direction of propagation of the ultrasound wave (Figure 1.3).
Let n be the number of periods of the sinusoidal pulse emitted, and λ its wavelength; in this case, we have: ∆z = nλ/2. This axial resolution is sub-millimetric in the field of echography. It depends on the shape and duration of the US wave, and therefore on the impedance matching of the probe; on the wavelength of the acoustic signal but also on the bandwidth of the probe for probes with broad frequency bands. For instance, with a probe emitting 4 sinusoidal periods at 5 MHz, the axial resolution is ∆z = 0.6 mm. One way of increasing the axial resolution is to increase the frequency of the emitted wave but, as we shall see later on, the attenuation of the wave is greater when the frequency increases.
– The lateral and azimuth resolutions are among the most significant factors affecting the quality of an ultrasound image. They are defined by the capacity of the probe and of the ultrasound system to distinguish between two nearby structures situated in the same plane perpendicular to the axis of the beam. Their values depend on the width of the US beam. Therefore, one way of improving the lateral resolution is to work with a focused beam. The width of the beam at the focal point Wb is linearly dependent on the wavelength of the signal emitted: Wb = f#λ, where f# is the aperture number (also commonly called the “f-number”), which is the ratio of the focal length to the aperture of the beam. In the case of a linear array, where the transducers are rectangular, the lateral resolution expresses the resolution in a direction parallel to the width of the transducers. The azimuth resolution (or elevation resolution) expresses the resolution in a direction parallel to their length – it expresses the thickness of the imaging plane (Figure 1.4). The lateral and azimuth resolutions are on the order of a millimeter.
The lateral resolution depends on the geometry of the transducer, the frequency used and the focusing of the beam. For instance, with a linear probe at 5 MHz focused at 4 cm with an aperture comprising 32 active elements and a pitch (distance between two elements) of λ, we have Wb = 1.3 mm.
These three resolutions determine an elementary resolution volume.
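The two numerical examples above (Δz = nλ/2 and Wb = f#λ) can be checked with a short sketch, assuming c = 1500 m/s:

```python
c = 1500.0   # assumed phase velocity (m/s)
f = 5e6      # probe frequency (Hz)
lam = c / f  # wavelength: 0.3 mm

# Axial resolution for a pulse of n periods: dz = n * lambda / 2
n_periods = 4
dz = n_periods * lam / 2
print(f"dz = {dz * 1e3:.1f} mm")   # -> 0.6 mm

# Lateral resolution at the focus: Wb = f# * lambda
pitch = lam                  # element pitch of one wavelength, from the example
n_active = 32
aperture = n_active * pitch  # 9.6 mm active aperture
focal = 40e-3                # focal depth of 4 cm
f_number = focal / aperture  # ~4.17
w_b = f_number * lam
print(f"Wb = {w_b * 1e3:.2f} mm")  # ~1.25 mm, i.e. the ~1.3 mm quoted
```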
When an incident plane compression wave encounters a plane interface separating two isotropic elastic media (1 and 2) with different acoustic impedances (Z1 ≠ Z2), some of the energy is reflected, propagating at the same velocity (c1) as the incident wave, and the rest of the energy is transmitted into medium 2, within which the wave will propagate with a different velocity (c2).
In writing the following equations, we consider a longitudinal incident wave. If we mark the incident, reflected and transmitted waves as I, R and T respectively, the boundary conditions at the interface S are written thus:
We can show that the reflected and transmitted waves have the same frequency as the incident wave, and that they are situated in the plane of incidence. We can also deduce the Snell–Descartes law, which can be used to determine the directions of propagation of the reflected and transmitted waves, i.e. the reflection and transmission angles as a function of the angle of incidence and of the wave velocities in the two media. These angles are defined between the direction of propagation and the normal to the interface. By definition, they vary between 0 and π/2: sin θI / c1 = sin θR / c1 = sin θT / c2, whence θR = θI and sin θT = (c2/c1) sin θI.
We therefore define the coefficients of amplitude reflection R and transmission T, obtained from the continuity of the pressure and particle velocity across the interface; at normal incidence, R = (Z2 − Z1)/(Z2 + Z1) and T = 2Z2/(Z1 + Z2),
or indeed the coefficients of energy reflection αR and transmission αT, obtained at normal incidence by αR = R² = ((Z2 − Z1)/(Z2 + Z1))² and αT = 1 − αR = 4Z1Z2/(Z1 + Z2)².
Note that if the impedance of medium 2 is negligible in comparison to that of medium 1, which is the case when medium 2 is air, αR ≈ 1 and αT ≈ 0. That is to say that all the energy is reflected; there are no transmitted waves. Table 1.3 gives a few values for the reflection coefficients for a wave at normal incidence.
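These normal-incidence energy coefficients are easy to compute; the sketch below uses indicative impedances (~1.5 MRayl for soft tissue, ~430 rayl for air):

```python
def energy_coefficients(z1: float, z2: float):
    """Energy reflection and transmission coefficients at normal incidence."""
    alpha_r = ((z2 - z1) / (z2 + z1)) ** 2
    alpha_t = 1.0 - alpha_r
    return alpha_r, alpha_t

# Soft tissue against air: almost total reflection, which is why a
# coupling gel is needed between the probe and the skin.
a_r, a_t = energy_coefficients(1.5e6, 430.0)
print(f"alpha_R = {a_r:.4f}, alpha_T = {a_t:.4f}")  # ~0.9989 and ~0.0011
```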
In a homogenous, non-absorbent medium, a plane wave maintains constant amplitude and direction. In reality, in biological tissues, propagation no longer obeys these principles. For example, the intensity of the wave decreases as it penetrates deeper into the medium. Similarly, the plane wave is not preserved (the direction and/or shape of the wave is altered). This attenuation of the US wave has various causes, which we are going to study separately although they are interrelated. It may result from the absorbent nature of the medium. The phenomenon of absorption transforms the incident ultrasound energy into heat. This energy transformed into heat constitutes a bona fide loss. Attenuation may also be caused by the inhomogeneities of the medium.
Diffusion (scattering), in the broadest sense, occurs when a wave propagates through a non-uniform medium. Some of the energy is redirected and appears separately from the initial wave. Either it is merely delayed or its actual direction is altered. The simplest case is that of a plane interface at a normal incidence angle, discussed in the previous section. This is an example of reflection and transmission of the wave, which can be easily resolved by looking at the specific impedances of the two media. Remember that the plane interface theory applies when the dimensions of the object are larger than the wavelength. In the field of medicine, such interfaces are rare, and the discontinuities are very variable in shape, size, position and orientation. Diffusion exists when the size of the inhomogeneity is small in comparison to the wavelength of the incident wave. The inhomogeneity then behaves like a point source, and the energy is re-emitted throughout the whole of the space in the form of a spherical wave.
Attenuation covers all losses, i.e. energy which is not transmitted through the medium and would not be picked up by a receiver facing the emitting probe. Thus, reflection, refraction, diffusion, diffraction and absorption all contribute to attenuation. The pressure of a monochromatic plane wave propagating in direction z decreases exponentially as a function of the distance covered: p(z) = p(z = 0)e−αz, where p(z = 0) is the pressure at z = 0 and α is the coefficient of attenuation of the pressure, expressed in nepers per centimeter or, more usually in medical applications, in decibels per centimeter (1 Np/cm ≈ 8.686 dB/cm).
This attenuation coefficient is itself proportional to the frequency. Consequently, attenuation is much greater at high frequencies. In biological tissues, attenuation is roughly 1 dB cm-1 MHz-1. In order to illustrate the effect of this attenuation, let us take a probe working at 3 MHz and compare the amplitudes of the echoes of two identical targets whose depths differ by 10 cm (a round-trip difference of 20 cm). In these conditions, the attenuation is 1 × 20 × 3 = 60 dB, i.e. there is a ratio of 1000 between the amplitudes of the two echoes. If we consider a probe working at 6 MHz, the attenuation is 120 dB (a ratio of 1,000,000). It is for this reason that high frequencies are only used to image organs that are close to the probe. In echography, we are led to strike a balance between a high frequency – which is to be desired if we want good image definition – and a lower frequency to obtain good penetration. Table 1.4 shows the acoustic characteristics of biological tissues in comparison with those of other materials.
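The comparison above can be written as a small helper (assuming the ~1 dB cm-1 MHz-1 rule of thumb for soft tissues):

```python
def attenuation_db(alpha_db_per_cm_mhz: float, f_mhz: float, path_cm: float) -> float:
    """Total attenuation in dB over a given propagation path."""
    return alpha_db_per_cm_mhz * f_mhz * path_cm

att_3mhz = attenuation_db(1.0, 3.0, 20.0)    # 20 cm round trip at 3 MHz
amplitude_ratio = 10 ** (att_3mhz / 20)      # dB -> amplitude ratio
print(f"{att_3mhz:.0f} dB, amplitude ratio {amplitude_ratio:.0f}")  # -> 60 dB, amplitude ratio 1000
```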
An elementary resolution cell, whose dimensions are defined by the characteristics of the transducer (axial, lateral and azimuth resolutions), corresponds to the smallest volume within which point targets cannot be individually resolved, but whose presence contributes to the creation of an echo. The dimension of these reflectors is smaller than the wavelength emitted (Figure 1.5). The US wave reaches the point scatterers at different times, depending on how far they are from the transducer; they in turn emit spherical waves, and the pressure at each point of the acoustic medium traversed by the echoes is the result of the summation of the wave emitted by each reflector.
Also, as the piezoelectric element is not a point source, the signal which it delivers is the result of the summation of the pressure waves at each point on its surface. If the difference between the arrival times of the echoes is less than the duration of the emitted pulse, it is not possible to distinguish the contribution of each reflector, and the high-frequency signal gives us no indication of the number or position of the targets. Interference, resulting from the summation of the reflected waves, then occurs. The point corresponding to the projection of the elementary resolution cell in the image plane will appear brighter if the dominant interferences are constructive; if not, the point will be darker.
This set of points lends a granular appearance to the echogram image – this characteristic is known as texture noise or speckle. For certain forms of diagnosis, speckle is considered to be parasitic noise, and image processing methods have been developed to reduce it (filtering, spatial compounding, frequency compounding, etc.). On the other hand, with other forms of diagnosis (e.g. with liver disease), the texture of the image is a piece of information which can be used to characterize different tissues. Speckle depends on parameters relating to the structure of the tissues (density, size, composition, distribution of the reflectors), so healthy and diseased tissues may well exhibit different textures.
The phenomena discussed hitherto have been limited to US waves of low intensity, for which the variation in density of the medium is slight. Ultrasound systems use focused pressure fields to achieve local pressures of several MPa. In such cases, the acoustic pressure is no longer negligible in comparison to the pressure in the medium at rest. Thus, high levels of acoustic pressure generate waves whose propagation is nonlinear. The shape of these waves depends on the amplitude of the acoustic pressure, the medium and the distance covered. Therefore, it is necessary to attempt to take account of the exact shape of the wave, rather than simply its linear form. Another motivation is to study the nonlinear behavior of biological tissues for tissue characterization: such is the case, for example, when using the parameter B/A. This parameter stems from the equation of state expressing the pressure as a function of the density, expanded as a Taylor series,
where ρ0 and s0 are the density and entropy at the equilibrium point, with A = ρ0(∂p/∂ρ)ρ0,s0 and B = ρ0²(∂²p/∂ρ²)ρ0,s0. Table 1.5 gives the values of the B/A ratio for different media.
Medium | B/A
Water (30°C, 1 atm) | 5.2
Blood | 6.3
Liver | 7.6
Spleen | 7.8
Fat | 11.1
While the use of contrast agents in other modes of imaging has been common practice for many years, the development and commercialization of contrast agents designed specifically for ultrasound imaging only occurred very recently. As with other types of imaging, the injection of contrast agents during an ultrasound exam is intended to facilitate the detection and diagnosis of specific diseases. Contrast-enhanced ultrasound imaging is based on the backscattering of ultrasound waves by microbubbles. These microparticles are injected into the bloodstream intravenously, as a bolus or by infusion. These particles cannot pass through the endothelial wall, so they highlight perfused areas in contrast to non-perfused areas. Many different solid, liquid or gaseous particles have been tested as ultrasound contrast agents (UCAs), but gaseous particles are the most effective because the liquid/air interface presents a significant discontinuity in terms of acoustic impedance. UCAs comprise microbubbles – either unprotected or encapsulated to increase their lifetime – of a few microns in diameter.
The microbubbles do not only behave as simple scatterers; they vibrate in a nonlinear fashion. An acoustic wave is composed of an alternating pattern of high and low pressures. When this acoustic wave interacts with a microbubble, the microbubble is alternately compressed during the pressurization phase and expanded during the depressurization phase. As the bubble can expand more easily than it can contract, the change in its diameter is not symmetrical. Instead of producing a sinusoidal wave, it generates a non-symmetrical wave. This asymmetry causes harmonics, meaning that the scattered waves are broadband: in addition to the emitted frequency (the fundamental frequency) they contain harmonics of that frequency (mainly the second harmonic, at double the emitted frequency). In addition, these microbubbles are resonating systems, meaning that the amplitude of the backscattered wave is greater when the incident wave is at a frequency near to the bubble’s own resonant frequency. The product of the resonant frequency of the bubble (in MHz) and its radius (in µm) is around 3. Therefore, by happy coincidence, the resonant frequency of these bubbles, which have a radius of a few microns, is within the frequency range used in diagnostic echography.
It is this nonlinear behavior of the microbubbles and that property of resonance that are exploited by new imaging techniques developed for contrast-enhanced imaging.
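Taking the quoted rule of thumb at face value – resonant frequency (in MHz) times radius (in µm) ≈ 3 – a hypothetical helper illustrates the coincidence:

```python
def bubble_resonance_mhz(radius_um: float, product: float = 3.0) -> float:
    """Rule of thumb from the text: f0 (MHz) x radius (um) ~ 3."""
    return product / radius_um

# Bubbles of 1 to 3 um radius resonate at roughly 1-3 MHz,
# squarely within the diagnostic frequency range.
print(f"{bubble_resonance_mhz(1.5):.1f} MHz")  # -> 2.0 MHz
```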
Pulse echo imaging techniques are based on the determination of the amplitude and delay of an ultrasound signal reflected by a medium, and thereby generate a representation of its structure. Generally, a pulsed echo system can be represented by a block diagram of different functions around a main element, which is the transducer. The transmitting transducer is periodically excited by an electrical pulse. An acoustic wave is then generated, which interacts with the medium within which it is propagating. Reflected and scattered waves are formed. The portion of the wave which is reflected and backscattered in the direction of the receiving transducer is converted into an electrical signal: the RF signal. The RF signal is then processed to extract the data necessary to create an image. An RF signal provides one-dimensional (1D) spatial information which indicates the depth of the interfaces. To construct the complete image of a cross-section of an acoustic medium, it is necessary to move the US beam within the plane of the cross-section. This imaging scheme assumes that the velocity of ultrasound is constant within the acoustic medium being sounded (c = 1540 m/s in soft tissues) and that each transducer element is unidirectional.
The controller of the beamformer is a crucial element. It synchronizes the emitted and received signals, knowing the width and depth of the area being explored. This region is transposed into a number of rows to be scanned and a number of focal points per row. The controller determines which piezoelectric elements to activate for the row in question, and the delay to apply to each element depending on the depth and orientation of the desired focal point. The controller begins with the first row, feeding voltage (±100 V) and current (±2 A) to the piezoelectric elements involved in the active part of the antenna. The high-voltage electrical signal emitted passes through a transmit/receive switch to protect the receiver stage, wherein the voltage levels are of the order of a micro- or millivolt. Before being transmitted, this electrical pulse is appropriately delayed at each active piezoelectric element, to produce a focused beam, directed at the desired focal point, for the row under investigation.
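The per-element delays mentioned here follow from simple geometry: an element at lateral position x sees a path √(F² + x²) to a focal point at depth F on the axis, and the outermost elements, having the longest path, must fire first. A sketch (the pitch, aperture size and focal depth are illustrative values, not from the text):

```python
import math

c = 1540.0      # assumed speed of sound (m/s)
pitch = 0.3e-3  # element pitch (m), illustrative
n = 32          # active elements
focal = 40e-3   # focal depth on the array axis (m), illustrative

# Lateral positions of the active elements, centered on the aperture
xs = [(i - (n - 1) / 2) * pitch for i in range(n)]
# Path length from each element to the focal point
paths = [math.sqrt(focal**2 + x**2) for x in xs]
# Delay of each element relative to the outermost (longest-path) element
d_max = max(paths)
delays_ns = [(d_max - d) / c * 1e9 for d in paths]

# The central elements fire last, roughly 175 ns after the edge elements here
print(f"max delay = {max(delays_ns):.0f} ns")
```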
After propagation and interaction with the medium being explored, the return wave is received by the elements and converted into a low-voltage signal. This signal is then amplified by one or more VCAs (voltage controlled amplifiers) and then digitized by an ADC (analog-to-digital converter) whose sampling frequency is set at around 4 times the central frequency of the probe. The VCA is configured in such a way that the received signal is amplified as a function of the time (i.e. of the depth) so as to compensate for the attenuation of the wave within the tissues. The number of VCAs and ADCs depends on the number of active channels used for beamforming.
When they are received, the signals are passed through the beamformer. Each signal is delayed and weighted, and then all the signals are summed in phase (“coherent signal summing”) to form the RF signal corresponding to one row of the image. All the above operations, from emission to reception, are repeated to form each row of the image. The number of rows may vary greatly, but an image will generally contain around a hundred rows. The beamforming operations are performed by a specialized device – an FPGA (field-programmable gate array), a DSP (digital signal processor) or a combination of the two. The choice of hardware architecture depends on the number of channels used for digital beamforming.
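The coherent summing step can be sketched in a few lines. This minimal delay-and-sum model uses integer-sample delays for simplicity (real beamformers interpolate to fractional delays); all values are illustrative:

```python
import numpy as np

# Minimal delay-and-sum sketch: align each channel's signal by its delay
# (in integer samples here), apply an apodization weight, and sum coherently
# to form one RF line.
def delay_and_sum(channels, sample_delays, weights):
    """channels: (n_ch, n_samples) array of received signals.
    sample_delays: integer delay, in samples, per channel.
    weights: apodization weight per channel."""
    n_ch, n_samples = channels.shape
    rf = np.zeros(n_samples)
    for ch in range(n_ch):
        d = sample_delays[ch]
        # Shift channel ch by its delay, then weight and accumulate.
        rf[d:] += weights[ch] * channels[ch, :n_samples - d]
    return rf

# Example: the same echo seen on 8 channels with known inter-channel delays
# adds coherently, so its amplitude grows with the number of channels.
n_ch, n_samples = 8, 100
sig = np.zeros((n_ch, n_samples))
delays = np.arange(n_ch)            # hypothetical geometric delays (samples)
for ch in range(n_ch):
    sig[ch, 50 - delays[ch]] = 1.0  # one echo, arriving earlier on far channels
rf = delay_and_sum(sig, delays, np.ones(n_ch))
print(rf[50])  # 8.0: the eight aligned samples sum coherently
```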
As the number of channels increases, the computation time per beam must decrease so as not to slow down the frame rate. This frame rate, which can range from 30 to 300 frames per second, is a major asset of echography.
After receive beamforming, the RF signal thus constituted is filtered to reduce the noise outside of the bandwidth. The processing performed after beamforming depends on the imaging mode or modes selected. For instance, the classic display of ultrasound images in grayscale corresponds to B-mode ultrasound, with B standing for Brightness. B-mode uses demodulation, envelope detection and logarithmic compression, so that weakly echogenic structures remain visible alongside highly echogenic ones. These operations may be followed by 2D noise-reduction or contrast-enhancement processing. The last operation is scan conversion, which enables the image to be displayed in a geometry consistent with that of the acquisition. For instance, with an abdominal probe acquiring data with a sectorial geometry (as is done in fetal ultrasound), the scan converter ensures the final image is displayed in this same format.
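The B-mode chain (envelope detection followed by logarithmic compression) can be sketched as follows. The 60 dB dynamic range and the synthetic two-echo RF line are assumptions for illustration only:

```python
import numpy as np
from scipy.signal import hilbert

# B-mode sketch: envelope detection of an RF line via the Hilbert transform,
# then logarithmic compression so that weak echoes remain visible next to
# strong reflectors.
def bmode_line(rf, dyn_range_db=60.0):
    env = np.abs(hilbert(rf))                    # envelope of the RF signal
    env = env / env.max()                        # normalize to strongest echo
    db = 20.0 * np.log10(np.maximum(env, 1e-6))  # log compression
    # Map [-dyn_range_db, 0] dB onto [0, 1] grayscale.
    return np.clip(1.0 + db / dyn_range_db, 0.0, 1.0)

# Synthetic RF line: a strong and a 100x weaker echo, modulated at 5 MHz.
fs, f0 = 20e6, 5e6
t = np.arange(400) / fs
rf = np.sin(2 * np.pi * f0 * t)
rf *= np.exp(-((t - 5e-6) ** 2) / (0.5e-6) ** 2) \
    + 0.01 * np.exp(-((t - 15e-6) ** 2) / (0.5e-6) ** 2)
line = bmode_line(rf)
# After compression the weak (-40 dB) echo still appears at roughly
# one-third brightness instead of vanishing entirely.
```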
Doppler mode can be used to study movement – mainly the movement of the bloodstream – in a region which is covered by B-mode. In this mode, various types of display are possible: a spatial map of velocities with color Doppler, or the computation of a sonogram to visualize the change in the speed of the flow over time. The sonogram is computed by taking a local Fourier transform of the demodulated RF signal as a function of depth for a given direction in the image, and repeating this process over multiple emissions and receptions.
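The local Fourier transform underlying the sonogram amounts to a short-time spectral analysis of the slow-time Doppler signal. A minimal sketch, with an assumed pulse repetition frequency and a synthetic signal mimicking accelerating flow:

```python
import numpy as np

# Sonogram sketch: short-time Fourier transforms of the demodulated Doppler
# signal show how the velocity distribution evolves over time.  PRF and
# window length are illustrative values.
prf = 4000.0                     # Hz, pulse repetition frequency (assumed)
n = 4096
t = np.arange(n) / prf
# Synthetic Doppler signal whose frequency ramps from 200 Hz to 1000 Hz.
f_inst = 200.0 + 800.0 * t / t[-1]
phase = 2 * np.pi * np.cumsum(f_inst) / prf
sig = np.exp(1j * phase)

win = 256                        # samples per short-time window
frames = [sig[i:i + win] * np.hanning(win) for i in range(0, n - win, win // 2)]
sonogram = np.abs(np.fft.fft(frames, axis=1)) ** 2   # one spectrum per instant
peak_bins = sonogram[:, :win // 2].argmax(axis=1)    # dominant Doppler bin
peak_freqs = peak_bins * prf / win                   # in Hz
# The dominant frequency increases from frame to frame, tracing the ramp.
print(peak_freqs[0] < peak_freqs[-1])  # True
```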
All the post-processing performed after beamforming is done in real time. The choice of components such as FPGAs, DSPs or GPUs (graphics processing units) is guided by the constraints of portability and low power consumption, or of computation power for video display with superposition of images in the case of color Doppler.
There is one more mode of emission/reception – that of a continuous wave (CW mode), used to measure velocities by the Doppler effect. This mode offers a more accurate measurement of velocities but no longer allows their spatial localization. In this specific mode, a separate emitter and receiver are necessary, and the acquisition chain is analog: an analog beamformer followed by a demodulator. Indeed, given the amplitude dynamic range of the continuous signals received, modern ADCs are not, at present, able to digitize these signals with a sufficient signal-to-noise ratio (SNR) to estimate the velocity. After beamforming and demodulation, the signal can then be sampled at lower frequencies (around a kHz) and digitized over a greater dynamic range. Figure 1.7 illustrates all the functions, from the acquisition of the signals to the display of the ultrasound image.
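The velocity estimate in Doppler mode rests on the classic relation f_d = 2 f0 v cos(θ) / c between the measured frequency shift and the flow velocity. A quick inversion of that relation, with illustrative values:

```python
import numpy as np

# Blood velocity from the Doppler shift, inverting f_d = 2 f0 v cos(theta) / c.
c = 1540.0   # m/s, speed of sound in soft tissue

def velocity_from_doppler(f_d, f0, theta_deg):
    """Velocity (m/s) from the Doppler shift f_d (Hz), the emission
    frequency f0 (Hz) and the beam-to-flow angle theta (degrees)."""
    return f_d * c / (2.0 * f0 * np.cos(np.radians(theta_deg)))

# A 2.6 kHz shift at 4 MHz with a 60 degree angle corresponds to ~1 m/s.
print(round(velocity_from_doppler(2600.0, 4e6, 60.0), 3))  # 1.001
```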
The probe is the crucial part of an ultrasound imaging system, because it is the component which generates the US wave and receives the echoes. Initially, probes comprised one or more piezoelectric elements performing mechanical scanning of the medium; today, however, the majority of ultrasound probes comprise a set of piezoelectric elements performing electronic scanning of the medium.
Electronic scanning consists either of successively activating a subset of elements so as to translate the beam thus created, or of reorienting the beam in different directions using delay laws applied to the elements, on emission and on reception.
The choice of central frequency for the elements used, of their shape, geometric dimensions and ordering, is strongly linked to the intended application.
When we speak of the frequency of a probe, we are in fact speaking of the frequency of the elements which make it up. In general, an indication will also be given of the bandwidth of the elements.
The frequency and bandwidth have a direct impact on the physical resolution of the system. Indeed, in order to have good axial resolution, we need to be able to transmit signals of very short duration. This implies rapid variations in the signal, and, consequently, high frequencies. A good axial resolution therefore implies a high imaging frequency. Yet a high central frequency is not enough; we also need to have a large bandwidth. Indeed, the narrower the band of a signal, the longer that signal is. In addition, the lateral and azimuth resolutions are improved at higher frequency.
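The link between bandwidth and axial resolution can be made quantitative with the common approximation Δz ≈ c / (2B), where B is the pulse bandwidth; the values below are illustrative:

```python
# Axial resolution is set by the pulse length, hence by the bandwidth:
# a common approximation is delta_z ~ c / (2 B).
c = 1540.0  # m/s

def axial_resolution(bandwidth_hz, c=c):
    """Approximate axial resolution (m) for a given pulse bandwidth."""
    return c / (2.0 * bandwidth_hz)

# A 5 MHz probe with 60% fractional bandwidth (B = 3 MHz) resolves ~0.26 mm;
# doubling frequency and bandwidth halves this figure.
print(round(axial_resolution(3e6) * 1e3, 3))  # 0.257 (mm)
```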
However, there are drawbacks to increasing the frequency of the probe. For instance, the attenuation of the US waves is greater when the frequency is higher. Hence, the choice of working frequency will always be a compromise between resolution and depth of exploration. For example, for external cardiac imaging, we tend to work at a few MHz so as to be able to see all of the cardiac muscle, whose largest dimension is between 10 and 15 cm. At a few MHz the spatial resolution is much poorer than at 10 MHz, but at 10 MHz it would not be possible to see the heart in its entirety. Conversely, for peripheral vascular imaging (e.g. of the carotid), it is possible to work at 10 MHz because the depth of the vessel in which we are interested is only a few centimeters. It is therefore possible to see the vascular wall with good spatial resolution. If good resolution is needed for a deep internal organ, we need to get the probe as close to it as possible. This is what is done, for instance, in intravascular, transvaginal or transrectal imaging.
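The resolution-versus-depth compromise can be illustrated numerically. The attenuation coefficient of 0.5 dB/cm/MHz used below is a typical soft-tissue figure assumed for illustration, not taken from the text:

```python
# Round-trip attenuation grows with both frequency and depth, which is why
# deep organs are imaged at a few MHz and superficial vessels at ~10 MHz.
ALPHA = 0.5  # dB / (cm * MHz), assumed typical soft-tissue value

def round_trip_loss_db(f_mhz, depth_cm, alpha=ALPHA):
    """Two-way attenuation in dB at frequency f_mhz for a target at depth_cm."""
    return 2.0 * alpha * f_mhz * depth_cm

# Heart at 12 cm, 3 MHz: manageable loss.  Same depth at 10 MHz: prohibitive.
print(round_trip_loss_db(3.0, 12.0))   # 36.0 dB
print(round_trip_loss_db(10.0, 12.0))  # 120.0 dB
# Carotid at 3 cm, 10 MHz: acceptable loss with good resolution.
print(round_trip_loss_db(10.0, 3.0))   # 30.0 dB
```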
Another issue of which we need to be wary when increasing the frequency is controlling the position of the grating lobes. These are secondary lobes which exist because the probe comprises regularly spaced elements rather than a single continuous element. In array processing, grating lobes are the equivalent of the aliases which appear when we take the Fourier transform of a sampled signal. In order to keep grating lobes outside the field of view, the element spacing must not exceed half the wavelength. Increasing the frequency decreases the wavelength, so we must decrease the distance between the elements if we wish to limit the influence of the grating lobes.
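The λ/2 spacing criterion directly ties the admissible element pitch to the working frequency; a small numerical check with illustrative frequencies:

```python
# Grating lobes stay outside the field of view when the element pitch does
# not exceed half a wavelength (the lambda/2 criterion for a steered array).
c = 1540.0  # m/s

def max_pitch(f0_hz, c=c):
    """Largest element spacing (m) satisfying the lambda/2 criterion."""
    return (c / f0_hz) / 2.0

# Doubling the frequency halves the admissible pitch:
print(round(max_pitch(5e6) * 1e6))   # 154 micrometers
print(round(max_pitch(10e6) * 1e6))  # 77 micrometers
```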
The most intuitive way of arranging the elements is to line them up to form what we call a linear probe. In this case, if the translational motion of the active part is occurring in a direction perpendicular to the axis of the US beam, we obtain a rectangular image. This is the type of probe that is used, for instance, in peripheral vascular imaging. Unfortunately, the usefulness of this approach is limited if we want to obtain an image with lateral dimensions greater than a few centimeters.
In this case, we arrange the elements across a convex surface. The beam is perpendicular to the anterior face of the probe, so translational motion of the active part yields a sectorial scan of the medium. With this scanning technique, we obtain an image whose dimensions are greater than those obtained with linear probes. This type of probe is used for abdominal imaging, and particularly for monitoring fetal development.
Finally, there is an even more restrictive geometric configuration. If we wish to image the heart, it is important to have a fairly broad image sector, but the acoustic window is reduced because the heart lies behind the ribs. In this case, the probe used is a linear alignment of elements across a reduced surface area, and all of the elements are used simultaneously. Delays are introduced in order to steer the ultrasound beam electronically.
Pulse echo imaging techniques represent the structure of a medium on the basis of the amplitude and delay of the ultrasound signals it reflects.
In order to meet the requirements of the different domains of application, this signal is exploited in different ways, giving rise to a set of ultrasound imaging modes made available to the doctor. The modes commonly used in echography are: B-mode, M-mode, harmonic mode and pulse inversion mode.
The oldest and simplest mode of the pulse echo method is “A-mode” (or “A-scan”). This technique uses the emission of an ultrasound wave and the reception of the echo along a single line of propagation. The vertical deflection corresponds to the amplitude of the RF signal (Figure 1.9). The horizontal deflection is a linear timebase which can be converted into a distance scale if the speed of propagation of the acoustic waves is known.
M-mode or TM (Time Motion) mode is an offshoot of A-mode. It is used to visualize the motion of structures whose position or shape varies over time. M-mode shows the evolution over time of an A-type signal. The transducer is stationary. Vertical scanning of the screen shows the position of the echoes in terms of depth. The amplitude of these echoes is represented by a modulation of the spot intensity. A slow horizontal timebase can be used to juxtapose successive A-type signals. In particular, this mode is used to study the motions of the heart. In M-mode, an immobile echogenic structure is represented by a straight line. If the structure is moving, we obtain a curve of displacement over time which, in the case of the heart, has the period of the cardiac cycle (see Figure 1.10).
B-mode is at present the most widely used mode in medical acoustic imaging. It too is derived from A-mode. The two-dimensional (2D) image is constructed by juxtaposing a large number of rows, each of which expresses an A-mode echogram. The different rows are obtained either by moving the transducer so that the propagation paths of the ultrasounds always remain in the same plane, or by using an array of transducers, which enables us to explore several rows without moving the array. On the viewing screen, each row corresponds in position and direction to the trajectory of the ultrasound waves for each position of the transducer. The amplitude of the signals (after processing) is represented on screen by modulation of the intensity of the trace. The resulting image is therefore a 2D representation of the distribution of discontinuities in acoustic impedance over a cross-section of the object being sounded. The imaging plane is formed by the direction of propagation of the ultrasound wave and the direction of displacement of the transducer (or, with an array, the axis of the array) (Figure 1.10).
C-mode provides 2D images at “constant depth”, i.e. in a plane perpendicular to the direction of propagation of the waves. The sweeping motion of the transducer is such that the points located at a constant distance along the propagation path (the axis of the beam) describe a plane. In practical terms, a time gate selects the signal corresponding to the reflection from this plane (a constant delay relative to the emission). The luminosity of the spot is proportional to the amplitude of the echo in question.
Harmonic ultrasound imaging is based on selection, by filtering, of the component at double the transmission frequency. The technique was first developed in the 1990s to exploit the nonlinear vibration of microbubbles acting as contrast agents, under the hypothesis that propagation in tissues was linear and that harmonics were generated only by the bubbles. In reality, the tissues also exhibit nonlinear behavior which deforms the transmitted wave over the course of propagation, causing the appearance of harmonics. As harmonic imaging uses double the transmission frequency, the resolution is improved by the same ratio.
Harmonic imaging is restricted by the bandwidth of the echographic probes. Indeed, it is crucial to prevent any overlap between the bandwidth of the transmission signal and the bandwidth of its harmonics (Figure 1.11). This is done by limiting the transmission band and choosing it to be in the lower part of the bandwidth of the probe, which results in a decreased spatial resolution. A compromise needs to be found in terms of choosing the emission frequency.
The limitation of second-harmonic imaging is the frequency overlap between the bandwidth of the transmission signal and that of the second harmonic. Pulse inversion imaging helps combat this effect. In pulse inversion imaging, a sequence of two pulses in phase opposition is transmitted successively into the medium. The principle of pulse inversion exploits the asymmetry of the scatterers' response to these two pulses transmitted in phase opposition. Indeed, in the case of a linear scatterer, the sum of the two echoes is zero, whereas in the case of a nonlinear scatterer, the result of the summation is non-zero (Figure 1.12). Pulse inversion uses the whole bandwidth of the echoes received and provides a higher-resolution image. The drawback of this mode is the halving of the frame rate, since each image line requires two transmissions.
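The cancellation at the heart of pulse inversion can be sketched with a toy scatterer model. The quadratic nonlinearity (r = x + 0.2 x²) is a crude assumption used only to show that the linear part cancels while the even-harmonic part survives:

```python
import numpy as np

# Pulse inversion sketch: echoes from a pulse and its inverted copy are summed.
# A linear scatterer returns inverted copies that cancel; a nonlinear scatterer
# (toy model with an assumed quadratic term) leaves a 2nd-harmonic residual.
fs, f0 = 40e6, 2e6
t = np.arange(512) / fs
pulse = np.sin(2 * np.pi * f0 * t) * np.hanning(t.size)

def scatter(x, nonlinear):
    # Toy scatterer: purely linear, or with a quadratic term (assumed model).
    return x + (0.2 * x ** 2 if nonlinear else 0.0)

lin = scatter(pulse, False) + scatter(-pulse, False)
nonlin = scatter(pulse, True) + scatter(-pulse, True)
print(np.abs(lin).max())         # 0.0 : linear echoes cancel exactly
print(np.abs(nonlin).max() > 0)  # True: the quadratic term survives the sum
```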