
1 Introduction

Optical coherence tomography (OCT) is a non-contact imaging technique which generates cross-sectional images of tissue with high resolution. It is therefore especially valuable in organs where traditional microscopic tissue diagnosis by means of biopsy is not available—such as the human eye.

Since OCT is completely noninvasive, it provides in vivo images without impacting the tissue that is imaged. Fast scanning rates and quick signal processing allow for image visualization in real time and at video rate. As shown in Fig. 3.1, the resolution of OCT is much higher than that of other medical imaging methods like ultrasound or magnetic resonance imaging (MRI). It combines an axial resolution that can reach that of confocal microscopy with a lateral resolution comparable to confocal scanning laser ophthalmoscopy. Typically, OCT systems have a resolution of 5–20 μm. Due to the interferometric measurement method, the axial resolution is defined by the light source, not the focusing optics. OCT therefore overcomes the limitations of optical focusing imposed by the limited pupil size of the eye. The extended focus and the operation with light in the near-infrared maintain a penetration depth of a few hundred microns, covering the whole retina.

Fig. 3.1

Comparison of resolution in axial and lateral direction between some medical imaging techniques for different body parts. Skin/cornea: reflectance confocal microscopy (RCM). Retina: confocal scanning laser ophthalmoscopy (cSLO), adaptive optics scanning laser ophthalmoscopy (AOSLO), optical coherence tomography (OCT), adaptive optics optical coherence tomography (AO-OCT). General: magnetic resonance imaging (MRI), computed tomography (CT), medical ultrasound

With a lack of alternative diagnostic tools for depth-resolved assessment of the retina, and given the distinct characteristics of OCT, it is no surprise that the first commercially available OCT was an ophthalmic imaging device. It entered the market in 1996, only 5 years after the invention of OCT. Despite the technological promise that OCT offered, only a total of ~180 units were sold until 1999 [1]. This can be understood by examining the technology that was initially introduced to the market. Time-domain OCT technology (TD-OCT, see subsequent section for the working principles) requires acquisition of a depth scan for every location and consequently offers very slow imaging speed and poor image quality. Usability and the impact of the noisy images on clinical diagnosis limited adoption of this new technology.

The introduction of spectral domain OCT (SD-OCT) was able to overcome the limitations of TD-OCT. Image quality and imaging speed were significantly improved by SD-OCT, which is able to capture the whole depth information simultaneously.

In 2006 Heidelberg Engineering introduced SPECTRALIS—the first imaging platform that combined SD-OCT technology with a scanning laser ophthalmoscope (SLO). Use of the SLO facilitates co-localization of the fundus scan with the cross-sectional OCT images and has opened up previously unknown diagnostic possibilities. Through this technological combination the instrument is capable of precise motion tracking, allowing for re-scanning the same location at a later point in time for follow-up assessment and therapy control.

Incorporating functions that build upon this OCT technology created a clinical need for OCT and it has become the standard tool for imaging in macula diseases, diabetic retinopathy and glaucoma, to name a few examples from a wide range of retinal applications. The ability to segment retinal layers allows for thickness measurement, which improves glaucoma diagnosis, because thinning of the nerve fiber layer marks the onset and progression of the disease. The anterior segment of the eye also benefits from OCT imaging. Biometric measurements of the eye’s anatomy including the axial eye length allow for precise choice of intraocular lenses.

This chapter of the book will concentrate on the technical implementation of general OCT technology and on the SPECTRALIS instrument. This information will support the clinical chapters within this book and offers context on how technology impacts the various applications of OCT in the eye. First, the working principle will be explained, followed by technical parameters like resolution, sensitivity and roll-off, which are important measures to rate and select OCT systems. The chapter will continue with the implementation of OCT together with confocal scanning laser ophthalmoscopy for motion tracking during OCT acquisition and follow-up functionality in SPECTRALIS. Further analysis of the OCT signal allows for functional extensions of OCT imaging to detect blood flow and tissue properties like birefringence and elasticity. The last section of this chapter gives a summary of functional OCT methods.

2 Technique and Theory of OCT

2.1 Principal Idea of OCT

OCT is often compared to medical ultrasound because of the similar working principles. Both medical imaging techniques direct waves to the tissue under examination, where the waves echo off the tissue structure. The back-reflected waves are analyzed and their delay is measured to reveal the depth in which the reflection occurred. OCT uses light in the near-infrared, which travels much faster than ultrasound, so the delays of the back-reflected waves cannot be measured directly; instead, a reference measurement is used. Through the use of an interferometer, part of the light is directed to the sample and another portion is sent to a reference arm with a well-known length.

The idea of low-coherence interferometry is the underlying principle for all OCT implementations. Temporal coherence is a property of a light source and characterizes the temporal continuity of a wave train sent out by the source and measured at a given point in space. Wave trains emerging from a light source of low temporal coherence maintain a fixed phase relation only over a very limited time interval corresponding to a confined travel range, the coherence length or coherence gate. A light source with a broad spectral bandwidth is composed of a range of wavelengths. Such a broadband source has low coherence, while monochromatic laser light has a narrow spectral line and features a coherence length of at least several meters. An interferometer splits light coming from a source into two separate paths and combines the light coming back from the two paths at the interferometer output. There, under certain conditions, interference can be observed: coherent waves superimpose and their electromagnetic field amplitudes add constructively (i.e. they reinforce each other) or destructively (i.e. they cancel each other out) or meet any condition in between. The associated light intensity can be measured as an electrical signal using a photo detector. This signal is a function of the difference in optical path length between both arms. For a low-coherence light source (like an SLD or a pulsed laser source) interference is only possible if the optical paths are matched to be equal in length within the short coherence length of the source, which usually is on the order of micrometers.

2.2 Technical Realizations of OCT

In the first implementation of OCT [2], the reference length was modulated for each depth scan and the record of the intensity of the combined light at the sensor gave the reflectance profile of the sample. This variant is called time-domain OCT (TD-OCT) and the main setup is shown in Fig. 3.2.

Fig. 3.2

Working principle of TD-OCT: light from the light source is split into the reference beam and the sample beam. Back-reflected light from both arms is combined again and recorded by the detector. To record one depth profile of the sample (A-scan) the reference arm needs to be scanned. This has to be repeated for each lateral scan position. Figure reprinted from [3]

As depicted, the light of a low-coherence source is guided to the interferometer, which in this example is a fiber-based implementation. In a system using bulk optics the fiber coupler is replaced by a beam splitter. The input beam is split into the sample beam and into the reference beam travelling to a mirror on a translational stage. The back-reflected light from each arm is combined and only interferes if the optical path lengths match and therefore the time travelled by the light is nearly equal in both arms. Modulations in intensity, also called interference fringe bursts, are detected by the photodiode. The amount of back-reflection or back-scattering from the sample is derived directly from the envelope of this signal (see Fig. 3.2, lower row).

For each sample point, the reference mirror is scanned in depth (z) direction and the light intensity is recorded on the photo detector. Thereby a complete depth profile of the sample reflectivity at the beam position is generated, which—in analogy to ultrasound imaging—is called A-scan (amplitude scan).

To create a cross-sectional image (or B-Scan ), the sample beam is scanned laterally across the sample. This abbreviation originated in ultrasound imaging, where B-Scan means brightness scan.

Fourier domain OCT (FD-OCT , also frequency domain OCT) is the second generation of OCT technology and provides a more efficient implementation of the principle of low-coherence interferometry. In contrast to TD-OCT, FD-OCT uses spectral information to generate A-scans without the need for mechanical scanning of the optical path length.

Two methods were established to acquire the spectral information of the interferometric signal. Both record an interference spectrum, also called spectral interferogram , from which the A-scan is computed via Fourier transformation . The unique properties of this interferogram are given in more detail later together with a simplified mathematical description.

Spectrometer based FD-OCT, which is commonly referred to as spectral domain OCT (SD-OCT), was first proposed by Fercher et al. in 1995 [4]. The principal optical setup is depicted in Fig. 3.3 (top left): it is similar to TD-OCT, but the point detector is replaced by a spectrometer. The spectrometer uses a diffractive element to spatially separate the different wavelength contributions into a line image which is recorded by a high speed line scan camera. Each read-out of the camera constitutes a spectral interferogram with a superposition of fringe patterns, as will be explained below. A superluminescent diode (SLD) is commonly chosen as a broadband light source, because it features a large bandwidth and a relatively high power output.

Fig. 3.3

Optical setup of spectrometer based OCT (SD-OCT) in the upper left inset and swept source OCT (SS-OCT) in the upper right inset. While SD-OCT uses a spectrometer for wavelength separation, SS-OCT features a light source which sweeps the wavelength in time. Both implementations record an interference spectrum which carries the depth information of the sample. FFT is used to transform the interference signal into the A-scan. (Figure taken from Drexler et al. [5])

The principle of swept-source OCT (SS-OCT) was first demonstrated in 1997, 2 years after SD-OCT [6], and was immediately applied in ophthalmology for measurement of intraocular distances [7]. The optical setup is similar to TD-OCT, but the broadband light source is replaced by an optical source which rapidly sweeps a narrow line-width over a broad range of wavelengths, see top right inset of Fig. 3.3. During one sweep, each wavelength component of the interferometric signal is detected sequentially by a high speed photo-detector. Commercially available sources can realize high sweep rates (>100 kHz), which require ultrafast detection and analog-digital (AD) conversion in the GHz range. One wavelength sweep constitutes a spectral interferogram with fringe patterns, as in SD-OCT.

For each sample point, this spectral interferogram is recorded as shown exemplarily in the lower left inset of Fig. 3.3. The original source spectrum (black solid line) is modulated with numerous rapid oscillations. In contrast to TD-OCT, the interferogram contains information for all depth layers of the sample simultaneously. To extract their individual contributions as a function of depth position, a Fourier transformation is required. The amplitude of the complex-valued Fourier transform is squared to yield power values. The resulting A-scan (see bottom right inset of Fig. 3.3) includes a mirror term, which is rejected in the final image and is attributed to inherent properties of the Fourier transform.

Comparing the two implementations of FD-OCT , equivalent parameters are used to describe and quantify the system’s performance. For example, in SD-OCT, the acquisition speed is limited by the linescan rate of the camera, whereas in SS-OCT it is given by the sweep rate of the swept-source and subsequent AD conversion. Additional measures of performance are described in more detail in the section on the image properties of OCT.

Compared to TD-OCT , the spectral OCT techniques have allowed for a dramatic increase in signal-to-noise ratio (SNR) and imaging speed [8,9,10]. They have paved the way for volumetric and real-time imaging in ophthalmology, a field that is highly impacted by sample motion.

2.3 Signal Formation in OCT

To fully appreciate the working principle of FD-OCT and to understand the formation of the spectral interferogram, a closer look at the signal formation is given in the following section.

First, a sample consisting of one discrete layer at depth position z is considered. z is defined as half of the optical path length difference between the reference mirror and the sample layer. For a given z, conditions of constructive and destructive interference alternate as a function of wavelength across the broadband source spectrum, resulting in a periodic modulation of the source spectrum. An example of such a fringe pattern is shown for two reflective layers at different depths in Fig. 3.4a, b. The modulation frequency, or fringe spacing, is uniquely linked to the depth position z: the fringe period in wavenumber is Δk = π/z. The larger z, the narrower are the fringes and the higher is the corresponding modulation frequency.

Fig. 3.4

Spectral OCT interferograms: (a) Interference fringes caused by a single reflector at 50 μm with a reflectivity of 10%. (b) Same as (a) but reflector at 300 μm with 5% reflectivity. The comparison of (a) and (b) reflects the fact that the z-depth of the signal is encoded in the frequency k of the interference modulation, whereas the reflectivity of the reference arm (R_r) and of the backscattering sample surface (R_s) determines the amplitude of the modulation signal. (c) Interferogram with both reflectors and (d) the OCT A-scan, which is calculated from (c) by Fourier transformation. The smaller autocorrelation signal is caused by interference of light reflected at R1 and R2 (red arrow)

The associated modulation amplitude is proportional to \( \sqrt{R} \), where R denotes the power reflectivity of the sample layer. The total interferogram consists of a superposition of both single interferograms (Fig. 3.4c). Fourier Transformation and conversion to power values generates the reflectivity profile of the sample.

In the more general case, a sample of extended depth and multiple reflective layers gives rise to a superposition of many different modulation patterns, each with a specific frequency and amplitude. In the following, a simplified mathematical description is given for this case.

Each of the N layers is characterized by its depth position z_n and its ability to reflect or backscatter light, given by R_n. z_n is defined as half of the optical path length difference between the reference mirror and the nth layer of the sample.

The optical power density S(k) of the light source is described as a function of wavenumber k = 2π/λ, as is standard practice in OCT literature. The spectral interferogram I_D(k) is then given by

$$ {I}_D(k)\propto S(k)\sum \limits_{n=1}^N\sqrt{R_n{R}_R}\ \left(\cos 2k{z}_n\right), $$
(3.1)

where R_R denotes the reflectivity of the reference mirror. For the sake of simplicity, only the term which encodes the sample properties is shown, which represents the cross-correlation of the electric field amplitudes of the sample and the reference arm. Also, the refractive index of the sample is omitted and absorption is neglected. Generally, a constant (DC) term and an auto-correlation term, which accounts for self-interference within the sample, contribute to the final spectral interferogram as well. The interested reader may refer to [11] for a comprehensive derivation.

Eq. (3.1) is now considered for TD-OCT, where two details are essential: Firstly, the photodetector used in TD-OCT cannot resolve the individual spectral contributions k of the source to the measured signal I_D. Mathematically, the detection corresponds to an integration of I_D(k) over the bandwidth of the source. Secondly, the reference arm is scanned, and the detected signal thereby becomes a function of the reference arm position z_R. The photodetector signal is then given by:

$$ {I}_D\left({z}_R\right)\propto {S}_0\sum \limits_{n=1}^N\ \gamma \left({z}_n\right)\sqrt{R_n{R}_R}\ \left(\cos 2{k}_0{z}_n\right) $$
(3.2)

S_0 is the spectrally integrated power of the source and the coherence envelope γ(z_n) is the inverse Fourier transform of the normalized power spectrum S(k). For a Gaussian shaped spectrum, the coherence function, also sometimes referred to as fringe visibility, is given by:

$$ \gamma \left({z}_n\right)=\exp \left[-\ln 2{\left(\frac{2{z}_n}{l_c}\right)}^2\right] $$
(3.3)

The coherence envelope quickly drops to zero if 2z_n > l_c, i.e. if the optical path length difference exceeds the coherence length l_c of the light source, thereby acting as a depth selector. While scanning the reference mirror, this coherence gate is shifted through the sample. The resulting sample reflectivity profile is convolved with the coherence function and is modulated by a cosinusoidal carrier. Eq. (3.2) is a mathematical description of the interference fringe bursts depicted in Fig. 3.2.

In contrast to TD-OCT, FD-OCT acquires the spectral interferogram I_D(k) as described by Eq. (3.1), i.e. spectrally resolved and containing signal components from the whole depth range simultaneously. To access the sample reflectivity profile, an inverse Fourier transformation has to be applied to Eq. (3.1), which finally yields the A-scan:

$$ {I}_D(z)\propto \sum \limits_{n=1}^N\sqrt{R_n{R}_R}\ \left[\gamma \left(2{z}_n\right)+\gamma \left(-2{z}_n\right)\right] $$
(3.4)

with γ(z_n) as defined by Eq. (3.3). Again, as in TD-OCT, for each reflector site the detected signal \( \sqrt{R_n} \) is convolved with the coherence function, which therefore defines the axial point-spread function of the system. Because I_D(k) is a real-valued function, its complex-valued Fourier transform has an ambiguity between positive and negative frequencies, which gives rise to the mirror terms in Eq. (3.4). From the spectral interferogram it is not possible to decide whether the optical path difference between sample arm and reference arm is positive or negative. Note that only the Fourier amplitude is shown in Eq. (3.4); the phase is omitted. The Fourier amplitudes are squared to obtain power values, which represent the OCT signal in structural OCT images.
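
As an illustration of Eqs. (3.1) and (3.4), the following minimal numerical sketch builds the cross-correlation term for two reflectors, similar to Fig. 3.4, and recovers the A-scan with an inverse FFT; all parameter values are illustrative assumptions.

```python
# Minimal numerical sketch of Eqs. (3.1) and (3.4): two reflectors are encoded as cosine
# fringes on a Gaussian source spectrum; an inverse FFT recovers the A-scan with its
# mirror terms. All parameter values are illustrative assumptions.
import numpy as np

N = 2048                                    # spectral sample points
lam0, dlam = 880e-9, 40e-9                  # assumed center wavelength and FWHM bandwidth (m)
k0 = 2 * np.pi / lam0                       # center wavenumber (rad/m)
dk_fwhm = np.pi * dlam / lam0**2            # approximate FWHM bandwidth in k-space
k = np.linspace(k0 - 4 * dk_fwhm, k0 + 4 * dk_fwhm, N)

S = np.exp(-4 * np.log(2) * ((k - k0) / dk_fwhm) ** 2)   # Gaussian source spectrum S(k)

z_n = np.array([50e-6, 300e-6])             # reflector depths (m), cf. Fig. 3.4
R_n = np.array([0.10, 0.05])                # power reflectivities
R_R = 1.0                                   # reference mirror reflectivity

# Cross-correlation term of the spectral interferogram, Eq. (3.1)
I_k = S * sum(np.sqrt(Rn * R_R) * np.cos(2 * k * zn) for Rn, zn in zip(R_n, z_n))

# Inverse FFT along k yields the A-scan, Eq. (3.4); squaring gives power values.
a_scan = np.abs(np.fft.fftshift(np.fft.ifft(I_k))) ** 2
dk_sample = k[1] - k[0]
z_axis = np.fft.fftshift(np.fft.fftfreq(N, d=dk_sample / np.pi))   # depth axis (m)

# Peaks appear symmetrically at +/- z_n (mirror terms), broadened by the coherence envelope.
print(f"strongest peak at ±{abs(z_axis[np.argmax(a_scan)]) * 1e6:.0f} µm (expected 50 µm)")
```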

In the following, the main characteristic properties of OCT images are presented. If not stated otherwise, these properties are equal for all three OCT variants.

2.4 Lateral and Axial Resolution and Image Dimensions

In OCT, the axial and lateral properties are decoupled from each other. The lateral resolution is defined by the objective and the focusing media in front of the sample, while all axial properties of the interferometric technique are defined by the coherence properties of the light source and the sampling of the signal at the detector. This unique property of OCT can be used in retinal imaging to achieve high axial resolution despite the limited pupil diameter of the eye.

As described in the previous section, the image information in axial direction along the A-scan is reconstructed from an interferometric measurement of the delays of light which is backscattered or reflected from the sample. Therefore the properties of the light source and the sampling of the interferometric signal define the axial properties of the OCT system. The axial resolution in air δz of an OCT system equals the round-trip coherence length of the source and is defined by its wavelength λ_0 and its spectral bandwidth Δλ [3]:

$$ \delta z={l}_c=\frac{2\ \ln (2)}{\pi}\cdot \frac{{\lambda_0}^2}{\varDelta {\lambda}_{FWHM}}. $$
(3.5)

The spectral bandwidth Δλ_FWHM is the wavelength range of the source, defined as the width at the intensity level equal to half the maximum intensity (FWHM, full width at half maximum). In the lower left inset of Fig. 3.3 the bandwidth is labeled by the wavenumber equivalent Δk. Wavenumber k can be converted to wavelength by λ = 2π/k.
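
As a worked example of Eq. (3.5), the short sketch below computes the axial resolution for source parameters similar to those quoted for the SPECTRALIS later in this chapter (880 nm center wavelength, 40 nm bandwidth); the average tissue refractive index n = 1.36 is an assumption.

```python
# Worked example of Eq. (3.5): axial resolution from center wavelength and FWHM bandwidth.
# The 880 nm / 40 nm values follow the SPECTRALIS figures quoted later in this chapter;
# the average tissue refractive index n = 1.36 is an assumption.
import numpy as np

def axial_resolution(lambda0, delta_lambda_fwhm, n=1.0):
    """Round-trip coherence length, Eq. (3.5), scaled to a medium with refractive index n."""
    return (2 * np.log(2) / np.pi) * lambda0**2 / delta_lambda_fwhm / n

dz_air = axial_resolution(880e-9, 40e-9)            # ~8.5 µm in air
dz_eye = axial_resolution(880e-9, 40e-9, n=1.36)    # ~6.3 µm in tissue (≈7 µm, as stated later)
print(f"axial resolution: {dz_air*1e6:.1f} µm (air), {dz_eye*1e6:.1f} µm (eye)")
```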

The central wavelength of OCT systems is chosen to achieve maximal penetration depth into the tissue under examination. For ophthalmic systems, the wavelength is usually around 850 nm or around 1050 nm, to allow light penetration through the retinal pigment epithelium (RPE) and thereby enable imaging of the choroid. Another important consideration is absorption by the ocular media, as it attenuates the light which reaches the retina and further reduces the signal light on its way back towards the detector. The absorption of the ocular media is very similar to that of water, which is depicted in Fig. 3.5.

Fig. 3.5

OCT axial resolution depends on the spectral bandwidth of the light source and on the center wavelength. The exemplary plots of identical axial resolution in the eye show that the bandwidth needs to be increased for longer center wavelengths to maintain the same resolution. As indicated by the dotted curve of the absorption coefficient of water [12], not all wavelengths are equally suitable. For greater wavelengths, the eye is considerably less transparent

Figure 3.5 also shows a family of curves based on Eq. (3.5), on which the axial resolution is constant. It is evident that, for a longer wavelength, the bandwidth of the light source needs to be increased to achieve the same axial resolution. The water absorption curve (dashed red line) shows that absorption is increased for 1050 nm compared to 850 nm. The spectral width of the absorption dip limits the maximum achievable resolution, e.g. for an axial resolution of 2 μm (green solid line) a bandwidth of 175 nm would be needed, exceeding the width of the spectral window.

Axial imaging depth defines the axial range which is covered in a B-Scan. It is limited by the maximum fringe frequency which can be detected, because the maximum frequency of the interference spectrum encodes the maximum depth (see exemplary interferogram in Fig. 3.4). Therefore the imaging depth z_max is defined by the number of sample points N on the full recorded spectral width Δλ:

$$ {z}_{max}=\frac{1}{4}\ \frac{{\lambda_0}^2}{\varDelta \lambda}\ N. $$
(3.6)

In SD-OCT systems, N is given by the number of pixels of the line detector onto which the spectrum is imaged. For SS-OCT it is given by the number of read-outs of the photodiode during one sweep of the light source. The maximum imaging range divided by N/2 gives the axial sampling of the B-Scan. This number characterizes how many micrometers per pixel are imaged and provides the axial scaling of the scan. It is often mistaken for the axial resolution, which defines the minimal distance between structures which can still be distinguished in the OCT B-Scan.
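
A brief sketch of Eq. (3.6) and of the axial sampling that follows from it is given below; the spectrometer parameters (80 nm recorded spectral width, 2048 pixels) are illustrative assumptions rather than specifications of any particular device.

```python
# Sketch of Eq. (3.6): maximum imaging depth and axial sampling of an SD-OCT spectrometer.
# The numbers (880 nm center wavelength, 80 nm recorded spectral width, 2048 pixels) are
# illustrative assumptions, not specifications of a particular device.
lambda0 = 880e-9        # center wavelength (m)
full_span = 80e-9       # full recorded spectral width (m), typically larger than the FWHM
N = 2048                # number of spectrometer pixels (sample points per spectrum)

z_max = 0.25 * lambda0**2 / full_span * N       # Eq. (3.6): optical imaging depth
axial_sampling = z_max / (0.5 * N)              # depth covered per pixel of the A-scan
print(f"z_max = {z_max*1e3:.2f} mm, axial sampling = {axial_sampling*1e6:.2f} µm/pixel")
```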

All lateral system parameters of an OCT system depend on the focusing optics and in particular on the numerical aperture (NA, see Chap. 2), as well as on the sampling density and scanning amplitude of the scan system. The same equations describing lateral image parameters hold for cSLO and OCT imaging. A schematic overview is shown in the left part of Fig. 3.6.

Fig. 3.6

Left: Lateral image parameters of retinal OCT depend on the focusing of the probing beam by the human eye. Right: Schematic of the sampling of an OCT volume

The lateral resolution is given by the spot size of the probing beam. For a Gaussian beam profile, the spot size is defined as the radius w_0 of the beam waist, at which the intensity drops to 1/e². In contrast to that, the lateral resolution is defined by the beam diameter at half maximum (FWHM). This is taken into account by multiplying the beam waist radius by \( \sqrt{2\ \ln 2} \), leading to the expression:

$$ \delta x=\sqrt{2\ \ln 2}\ {w}_0=\sqrt{2\ \ln 2}\ \frac{2{\lambda}_0}{\pi }\ \frac{f_{sys}}{n\cdot d}=\sqrt{2\ln 2}\ \frac{\lambda_0}{\pi\ NA} $$
(3.7)

Here, f_sys denotes the focal length of the optical system, n the refractive index of the medium and d the diameter of the beam (decay to 1/e²) entering the focusing lens. Tighter focusing would result in a higher lateral resolution, but at the same time it reduces the depth of focus. The depth of focus b (sometimes also called the confocal parameter) determines the axial range where the beam waist \( \omega (z)\le \sqrt{2}\cdot {\omega}_0 \) and is defined as:

$$ b=\frac{2\pi \cdot n}{\lambda_0}\ {w_0}^2=\frac{2\cdot n\cdot {\lambda}_0}{\pi \cdot N{A}^2} $$
(3.8)

Thus, the focal volume is defined by its width δx and its axial extension b. Outside the focal volume the intensity coming back from the sample is reduced considerably. Therefore a compromise between depth of focus and lateral resolution needs to be found in the optical design of the OCT system. In retinal OCT, with a focal length of the eye of f_eye = 16.7 mm and the refractive index n_vitreous = 1.336, the lateral resolution is typically about 10 μm, resulting in a depth of focus of approximately 700 μm.
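
The following sketch evaluates Eqs. (3.7) and (3.8) for the retinal case described above; the 1/e² beam diameter at the pupil is an assumed value chosen so that the result reproduces the quoted ~10 μm lateral resolution and ~700 μm depth of focus.

```python
# Sketch of Eqs. (3.7) and (3.8) for retinal OCT. The 1/e² beam diameter at the pupil is
# an assumed value chosen to reproduce the ~10 µm lateral resolution and ~700 µm depth of
# focus quoted above; the other numbers are taken from the text.
import numpy as np

lambda0 = 880e-9          # center wavelength (m)
f_eye = 16.7e-3           # focal length of the eye (m)
n = 1.336                 # refractive index of the vitreous
d = 0.82e-3               # assumed 1/e² beam diameter at the pupil (m)

NA = n * d / (2 * f_eye)                               # numerical aperture (small-angle approximation)
dx = np.sqrt(2 * np.log(2)) * lambda0 / (np.pi * NA)   # Eq. (3.7): FWHM spot size
b = 2 * n * lambda0 / (np.pi * NA**2)                  # Eq. (3.8): depth of focus
print(f"NA = {NA:.3f}, lateral resolution = {dx*1e6:.1f} µm, depth of focus = {b*1e6:.0f} µm")
```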

As OCT measures optical delays, all axial distances are optical distances. To obtain geometrical distances, for instance for thickness measurements, the refractive index n of the medium needs to be known, and axial distances measured in OCT scans are divided by n.

To cover a lateral field of view (FOV), the incident OCT beam is scanned. The maximum scan angle Θ max defines the maximum field of view. To record a 3D data set, the sample beam is stepped in the second lateral direction after each B-Scan, as shown in the right part of Fig. 3.6. The recorded B-Scan series is stacked together. From this volume, a transversal image can be calculated, referred to as enface OCT image. The step width of the scanner defines the lateral sampling in both directions. Usually a B-Scan is sampled more densely than the slow direction y of a volume.

2.5 Sensitivity and Roll-Off

The OCT A-scan presents a profile of backscattered light intensity over tissue depth. The height of a signal compared to the image noise floor is called signal to noise ratio (SNR). The SNR is different for each individual structure, because the signal strength is determined by the backscattering properties, often referred to as reflectivity. Backscattering originates from local changes in refractive index within the tissue due to alterations in the microscopic structure or in the density of scattering particles. The detection of reflectivity enables OCT to reveal the internal structure of an object and is particularly useful to visualize its layer architecture. However, without elaborate modelling, the OCT signal does not provide an absolute quantitative measure of local reflectivity. Due to absorption and scattering in the upper layers less light will reach the lower layers and backscattered light from lower layers is attenuated on its return path again.

Sensitivity has been established as a useful figure of merit to characterize and compare the performance of OCT systems. It is defined by the minimum sample reflectance the system can detect, i.e. the reflectance for which an SNR of 1 is achieved. In OCT, the SNR is calculated as the ratio of the OCT power value to the standard deviation of the background power and therefore is proportional to the sample reflectance R.

An OCT signal which is generated by specular reflection of an ideal mirror (i.e. R = 1) generates an SNR equal to the sensitivity of the OCT system. SNR and sensitivity are commonly specified in units of power decibel (dB), denoting a logarithmic scaling of the OCT power values. FD-OCT systems can achieve a sensitivity of 100 dB and more, which corresponds to the ability to detect even very weakly reflecting structures with a reflectivity as low as R = 10⁻¹⁰.
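
The decibel scaling used here can be made explicit with a small helper; this is only a sketch, and the 100 dB figure is the example from the text.

```python
# Sketch of the decibel scaling used for SNR and sensitivity: power ratios are expressed
# as 10*log10, so a sensitivity of 100 dB corresponds to a minimum detectable reflectivity
# of 10^-10 (the example given in the text).
import math

def to_db(power_ratio):
    return 10 * math.log10(power_ratio)

def min_reflectivity(sensitivity_db):
    # minimum detectable sample reflectivity for a system with the given sensitivity
    return 10 ** (-sensitivity_db / 10)

print(to_db(1e10))             # 100.0
print(min_reflectivity(100))   # 1e-10
```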

In the retina, the retinal pigment epithelium (RPE) and the internal limiting membrane (ILM) yield high OCT signals. Single A-scans sometimes can be affected by specular reflection on the ILM or the center of the macula. The maximum signal level and the noise floor span a range of about 40 dB for healthy retinal tissue and clear media.

In a linear scale, the OCT power values exceed the limited number of distinct grey values of common display devices and of human visual perception. Therefore, the power signal needs to be mapped to grey scale in a meaningful way. Usually, a logarithmic transformation or a comparable mathematical operation is first applied to the data, compressing the distribution of power values to approach a more Gaussian-like shape. The resulting data is then mapped to 8 bit grey values. The mapping can be further adapted by applying different curves for gamma-correction. This allows a range of signal power levels within an OCT B-Scan to be variably assigned to a range of grey values and thus increases the contrast for distinct regions of interest.
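
A minimal sketch of such a mapping is shown below, assuming a simple logarithmic compression, a display window in dB and an optional gamma curve; the actual mapping used by commercial software is device specific and not reproduced here.

```python
# Minimal sketch of mapping linear OCT power values to 8-bit grey values: logarithmic
# compression, windowing between two display thresholds (in dB) and an optional gamma
# curve. The thresholds and gamma are illustrative assumptions.
import numpy as np

def to_grey(power, low_db=0.0, high_db=40.0, gamma=1.0):
    """Map linear OCT power values to uint8 grey values."""
    db_img = 10 * np.log10(np.maximum(power, 1e-12))            # avoid log(0)
    norm = np.clip((db_img - low_db) / (high_db - low_db), 0, 1)
    return (255 * norm**gamma).astype(np.uint8)

# usage sketch: grey = to_grey(power_image, low_db=0, high_db=40, gamma=0.8)
```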

The highest attainable sensitivity in FD-OCT is limited only by shot noise. This means that compared to the inherent and unavoidable characteristic noise of photons, other noise sources can be neglected. The sensitivity is then given by the number of photons which can be detected from a sample with reflectivity R = 1. Therefore, it depends linearly on the incident optical power, the efficiency of photon detection and the sensor integration time. Consequently, there is a principal tradeoff between acquisition speed and system sensitivity.

Every FD-OCT system has a characteristic decrease in sensitivity with imaging depth, also called roll-off. It is related to the finite spectral resolution of the system component providing spectral separation. As shown in Fig. 3.4a, b, deeper layers are encoded in fringes with higher frequency and therefore require higher spectral resolution than more superficial layers.

For SD-OCT, the spectral bandwidth focused onto one pixel of the line sensor needs to be resolved. Two main contributions are therefore responsible for the characteristic decrease in sensitivity: the finite pixel size of the line detector and the finite spot size created by the spectrometer optics. For SS-OCT, the spectral interferogram is sampled sequentially. Its spectral resolution is determined by the instantaneous line width of the swept laser source and may be impacted by the bandwidth of the analog-to-digital conversion. SD-OCT is therefore often assumed to show a more pronounced roll-off than SS-OCT. This is not generally true, because the swept laser sources used in commercial SS-OCT systems typically have a finite coherence length of several millimeters, resulting in a roll-off of about 2–3 dB/mm [13].

2.6 Signal Averaging and Speckle

The interferometric principle of OCT gives rise to a granular intensity pattern called speckle, which inherently exists due to the coherent detection scheme of OCT.

Within the coherence volume or resolution element, which is given essentially by the optical lateral and axial resolution of the system, mutual interference from multiple scattering events can occur. As a result, the OCT signal from a single resolution element can vary considerably and is sensitive to variations in scan geometry. Homogeneously scattering tissue manifests as a speckle pattern with a typical speckle size corresponding to the size of the resolution element and a spatial average brightness reflecting the backscattering properties of the tissue.

Structural OCT images suffer from speckle noise because it might obscure small image features or hamper the recognition of layer boundaries. A common way to reduce speckle and thereby improve the visibility of structures is signal averaging. The tissue is scanned multiple times and the OCT power values are averaged to generate the final OCT B-Scan. The intrinsic variation in scan geometry together with patient movement induces the necessary variation in the speckle pattern. Averaging not only reduces the speckle noise but also reduces fluctuations in background noise. The SNR increases with the square root of the number of acquisitions.
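
The square-root behaviour can be illustrated with a toy simulation, assuming uncorrelated additive noise between acquisitions, which is a simplification of real speckle statistics.

```python
# Toy illustration of signal averaging: averaging N uncorrelated acquisitions of the same
# structure raises the SNR roughly with sqrt(N). The additive, frame-to-frame uncorrelated
# noise model is a simplification of real OCT speckle statistics.
import numpy as np

rng = np.random.default_rng(0)
truth = np.ones(1000)                                     # idealized constant reflectivity profile
frames = truth + rng.normal(0.0, 1.0, size=(64, 1000))    # 64 noisy acquisitions

for n in (1, 4, 16, 64):
    avg = frames[:n].mean(axis=0)
    print(f"N = {n:2d}: SNR ≈ {avg.mean() / avg.std():.1f}")   # grows roughly as sqrt(N)
```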

However, changes in speckle pattern reflect changes in the distribution of scattering particles within the resolution element. This is used to distinguish steady tissue from moving particles for blood flow imaging (see Chap. 6).

3 SPECTRALIS OCT

The SPECTRALIS device (see Fig. 3.7) was introduced by Heidelberg Engineering in 2006 based on the Heidelberg Retina Angiograph 2 (HRA2) . It incorporates two complementary imaging techniques: confocal scanning laser ophthalmoscopy (cSLO) and optical coherence tomography (OCT) . It is a modular ophthalmic imaging platform which allows clinicians and researchers to configure their individual device by combining different imaging modalities. Depending on the integrated modalities, the device is marketed as SPECTRALIS HRA , SPECTRALIS OCT or SPECTRALIS HRA+OCT .

Fig. 3.7

The SPECTRALIS HRA+OCT combines confocal (cSLO) imaging with OCT and offers numerous different imaging modalities including MultiColor, Fluorescein angiography and OCTA

The cSLO part of the SPECTRALIS device offers a variety of laser sources providing different illumination wavelengths and detection schemes. These include cSLO reflectance imaging in the near infrared (IR), in the green and blue wavelength range, as well as fluorescence imaging modes for angiography (fluorescein angiography FA, indocyanine green angiography ICGA) and for autofluorescence (blue and IR). Some selected applications are presented in Chap. 2.

OCT is usually combined with IR confocal imaging , though other combinations are possible as well. Confocal imaging creates a transversal image of the retina corresponding to the en-face plane of OCT. It allows the operator to adjust the SPECTRALIS camera to target the region on the retina. Live images are presented throughout the imaging procedure to control image acquisition and quality. Furthermore, the SPECTRALIS system utilizes the IR cSLO scans for automatic motion tracking.

The SPECTRALIS OCT is based on spectral domain OCT technology, implementing a broadband superluminescent diode (SLD) for illumination and a spectrometer as detection unit. The SLD has a center wavelength of 880 nm and a spectral bandwidth of 40 nm (full-width-half-maximum, FWHM), resulting in an axial resolution of approximately 7 μm in the eye. Based on laser safety guidelines, the optical output power is limited to 1.2 mW.

The SLD, the interferometer and the scanning unit are mounted in the SPECTRALIS camera head. The interferometric OCT signal is coupled into a fiber and directed to the detection unit of the SPECTRALIS, which is located in the housing of the power supply.

The SPECTRALIS features two independent scanning units to support simultaneous cSLO and OCT imaging. The scan pupil of each unit is relayed by imaging optics including the SPECTRALIS objective onto the entrance pupil of the patient’s eye. Essentially, the scan angle determines the field of view (FOV) of the imaging area on the retina, the diameter of the scan pupil (aperture) defines the diffraction limited optical lateral resolution.

The OCT scanning unit comprises two linear scanners, which are driven synchronously with the read-out of the line scan camera in the spectrometer. The OCT frame rate is therefore determined by the scan density (i.e. the number of A-scans within one B-Scan) and the camera’s read-out time. The OCT2 module supports a line rate of 85 kHz, resulting in a frame rate of about 110 Hz for the fastest scan pattern.

As discussed in the technical section, the spectral resolution of the spectrometer determines the characteristic roll-off in sensitivity with imaging depth. Compared to the first generation OCT device (40 kHz A-scan rate), the roll-off of SPECTRALIS with OCT2 Module has been improved considerably to less than 5 dB over an imaging depth of 1.9 mm.

There is a tradeoff between acquisition speed and sensitivity: the higher the line rate, the faster the image acquisition, but the fewer photons can be detected per A-scan. Acquisition speed is therefore inherently coupled to the sensitivity of the system. For retinal imaging, the maximum laser power is set by the exposure limit according to the laser safety guidelines; the power can therefore only be increased up to this limit to compensate for the shorter integration time. At the same time, eye motion, heart beat and any motion in general call for accelerated acquisition.

Some eye motion occurs at frequencies faster than the OCT frame rate and requires software algorithms to ensure precise and reliable positioning of the OCT scan pattern. Some of the most important software functionalities of the SPECTRALIS rely on software-based motion compensation: image registration, automatic real time (ART) for noise reduction, auto-rescan ability and fovea-to-disc-alignment.

The cSLO and the OCT image are simultaneously recorded and displayed side-by-side in the acquisition window of the SPECTRALIS software, as shown in Fig. 3.8. The cSLO image is used to position the selected OCT scan pattern, which is displayed superimposed on the fundus image (green line in Fig. 3.8). Active eye tracking (TruTrack™) then locks the scan to this position during the acquisition. This is accomplished by an algorithm that repeatedly detects motion in the SLO image and repositions the OCT beam accordingly.

Fig. 3.8

The acquisition window of the SPECTRALIS software displays the cSLO image and the OCT image side-by-side: cSLO image (left) of the optic nerve head. The green line marks the selected position of the OCT B-Scan (right)

As a result, the OCT image is precisely aligned even in cases with eye movement during image acquisition. In addition, the co-registration of OCT and cSLO images allows for follow-up examinations at exactly the same position and at any later point in time.

The algorithm to combine multiple images which have been captured at the same location is called ART mean (automatic real time mean). Single OCT images are averaged in real time to decrease noise and enhance contrast within the final OCT image. While ART is active, the SNR of the image continuously increases with approximately the square root of the number of averaged single B-Scans, up to a maximum selected by the user. As a result, faint signals rise above the noise floor and the contrast between single retinal layers is increased.

Moreover, the inherent variability of scanning due to eye and patient motion reduces the granular pattern of speckle because of slight variations in the optical path of the OCT. Speckle reduction allows for detection of tissue structures that would otherwise be obscured by large speckle spots. The result of ART processing can be appreciated by comparing the OCT images shown in Fig. 3.9.

Fig. 3.9

Signal averaging using ART significantly reduces the speckle pattern and increases contrast and SNR

The ability to scan the same position repeatedly over any period of time is of great value for disease detection, progression analysis and treatment control. Follow-up scans (FUP) are co-registered to baseline images, which allows for reliably identifying even small changes. As an example, Fig. 3.10 presents a FUP series for treatment control of wet age-related macular degeneration (AMD), showing the initial clinical finding (inset 1) and two follow-up scans several weeks after treatment (insets 2 and 3). In an exemplary manner, the red lines indicate the correspondence of scan locations within the series. The precision of the placement of the follow-up scans has been evaluated by means of retinal thickness measurements on FUP examinations. A measurement reproducibility of 1 μm was confirmed [14].

Fig. 3.10

Follow-up series for treatment control of wet AMD: two follow-up images (2, 3) are co-registered to the baseline image (1). The exact same scan position allows for identifying changes. The red lines indicate identical scan locations

For glaucoma diagnosis, thickness maps of retinal nerve fiber layer (RNFL) are derived from a combination of circle and radial OCT scans on the optic disc. Sectorial and global RNFL thickness measurements require a reliable point-to-point comparison to assess progression and to accurately compare with reference data. It is essential to remove the influence of head tilt and eye rotation for each individual scan. Moreover, it has to be taken into account that the anatomy can significantly vary among individuals. The Anatomic Positioning System (APS) creates an anatomic map of each patient’s eye using two anatomic landmarks: the center of the fovea and the center of Bruch’s membrane opening. All scans are aligned along this fovea-to-disc axis, and the sectors are defined relative to this axis, as is depicted in Fig. 3.11 for two individuals (Fig. 3.11a, b). As a result, sectorial analysis is less affected by anatomical diversity. This improves the classification based on the reference database and increases diagnostic precision.

Fig. 3.11

(a–c) The Anatomic Positioning System (APS) ensures that the circle scans of the ONH scan pattern are aligned along the fovea-to-disc axis for each patient individually (upper row). Without APS, the influence of head tilt and eye rotation can impede the sectorial analysis of RNFL thickness and assessment of progression (c, lower row). The white lines through the ONH indicate the six sectors according to the Garway-Heath regions for classification of RNFL thickness

APS can also reduce the influence of head tilt and eye rotation on RNFL analysis. Without APS, differences in patient alignment may impede the sectorial analysis of RNFL thickness and thereby impact the assessment of progression, which is presented in Fig. 3.11 (for the same eye, Fig. 3.11c).

The segmentation of retinal layers is a basic prerequisite for many subsequent visualization and analysis features, such as the display of retinal thickness profiles or the definition and visualization of retinal slabs between any retinal boundaries. As part of the SPECTRALIS viewing software, the segmentation editor allows for a user-defined evaluation adapted to the specific pathology.

Per default, the internal limiting membrane (ILM) and Bruch’s membrane (BM) are segmented. In circle scans, the segmentation of the RNFL is displayed. Additionally, a multi-layer segmentation can be initiated, which allows all visible layers of the retina to be separated. In Fig. 3.12, a multi-layer segmentation is shown together with the naming convention used throughout the software.

Fig. 3.12

Multi-layer segmentation of a retinal OCT scan with the naming convention used throughout the software

If the retina is affected by pathology or the image quality is poor, the automatic segmentation may fail. The segmentation editor tool therefore supports manual segmentation of individual scans and corrects the rest of the volume dataset accordingly.

Volumetric OCT datasets are generally many gigabytes in size and must be visualized in a suitable way to support the clinician in their diagnosis. Transverse section analysis, which is available for volume scans with a minimum density, offers an intuitive view of 3D OCT data. Interpolated B-Scans that are orthogonal to the acquired B-Scans and transversal (or enface) images are generated. After multi-layer segmentation, 2D projection images of various retinal slabs are available. Average, maximum or minimum intensity projections are standard procedures in image processing to visualize 3D data. For a pre-defined volume or slab, the average, maximum or minimum OCT signal along the z (depth) direction is selected and mapped to the projection image. Enface images of the vitreoretinal border region, the RPE and the choroid, generated by maximum intensity projection, are depicted in Fig. 3.13.
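
A sketch of such a projection is given below; the volume layout (B-scan, depth, A-scan) and the per-A-scan boundary arrays from a prior segmentation are assumptions made for illustration.

```python
# Sketch of an en-face projection from a segmented OCT volume. The volume layout
# (B-scan, depth, A-scan) and the boundary index arrays `top` and `bottom` (from a prior
# layer segmentation) are assumptions made for this illustration.
import numpy as np

def enface_projection(volume, top, bottom, mode="max"):
    """Project the OCT signal of a slab between two segmented boundaries along depth."""
    n_b, n_z, n_a = volume.shape
    reducer = {"max": np.max, "min": np.min, "mean": np.mean}[mode]
    out = np.zeros((n_b, n_a), dtype=float)
    for b in range(n_b):
        for a in range(n_a):
            slab = volume[b, top[b, a]:bottom[b, a], a]
            if slab.size:
                out[b, a] = reducer(slab)
    return out

# usage sketch: enface_rpe = enface_projection(volume, top=rpe_upper, bottom=rpe_lower, mode="max")
```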

Fig. 3.13

Enface OCT images can be calculated from OCT volume scans, which have been segmented accordingly: For each slab—vitreoretinal, RPE and choroid—maximum intensity projection along the depth direction is used to generate transversal images

OCT imaging below the RPE may be impacted by the system’s specific roll-off and the individual pigmentation of the RPE due to enhanced scattering. The contrast of choroidal vascular detail and the visibility of the choroidal-scleral interface (CSI) may be important in assessment of choroidal pathologies, e.g. in pachychoroid disease. To increase sensitivity in depth and thereby enhance visualization of the choroidal vascular plexus and the CSI, the SPECTRALIS allows for Enhanced Depth Imaging (EDI). Imaging of the lamina cribrosa benefits from the setting as well. For EDI, the characteristic roll-off is reversed in depth. The optimum imaging position, also called sweet spot, is moved to the lower part of the displayed OCT image. Technically, the EDI mode is realized by shifting the position of the reference mirror. Deeper layers then have smaller differences in optical path length and are therefore encoded in interference fringes of lower spatial frequency: their OCT signal gains an additional SNR of 2–3 dB as it is not affected by the roll-off anymore. However, EDI cannot account for the losses induced by scattering, which may be enhanced for several pathologies and affects all layers below.

An emerging area of interest is widefield OCT. While widefield technology is already widely used in other imaging modalities, such as angiography and autofluorescence, widefield OCT is still being adopted into clinical practice. Widefield OCT imaging may provide significant benefit in the visualization of multifocal macular disorders or in the understanding of peripheral vitreoretinal diseases.

Widefield OCT imaging is feasible using the SPECTRALIS wide field objective (WFO), which provides a 55° field of view, i.e. a scan length of about 12 mm. The macula, the optic nerve head and areas beyond the vessel arcades can be captured in a single B-Scan. Like standard OCT, widefield OCT can be combined with a variety of cSLO imaging modalities. Figure 3.14 presents an example for the combination of widefield MultiColor imaging with widefield OCT.

Fig. 3.14

Wide-field MultiColor cSLO (left) combined with wide-field OCT provides a 55° field of view

A variety of OCT scan patterns and comprehensive scan protocols allow for a systematic and workflow optimized examination. For glaucoma diagnosis and treatment control, the detection of slight changes in the RNFL thickness is essential. The RNFL of healthy eyes is visualized on OCT images as a highly reflective layer that becomes increasingly thick as it approaches the optic disc. The thickness of the peripapillary nerve fiber layer can be determined from three peripapillary circular scans (Fig. 3.15, top left), which are defined by the ONH-RC scan protocol. The RNFL of each circle is automatically segmented (Fig. 3.15, top right) and the thickness values are compared with a reference database. The results are analyzed within predefined sectors (called Garway-Heath sectors) as well as globally (Fig. 3.15, lower row).

Fig. 3.15

Nerve fiber layer thickness analysis: Three peripapillary circular scans are placed at the optic nerve head with a fixed starting point relative to the macula position (top left inset) and in each circle scan the RNFL and ILM are segmented (top right inset). Standardized measurements include thickness in predefined segments (bottom left) and comparison of the thickness with a normative database (bottom right inset). The black line indicates the measurement of the individual patient in comparison with the average thickness for this age and population (green line) and in comparison with the margins of the normative database (green—normal, yellow—borderline and red—out of normal range)

A radial line-scan pattern is part of the ONH-RC scan protocol and allows for assessing the thickness of the neuro-retinal rim based on the detection of the disk margin. From each B-Scan, the shortest distance from Bruch’s membrane opening (BMO) to the ILM is determined and indicated by a cyan arrow in the B-Scan (see Fig. 3.16, top right). The analysis is therefore called BMO-based minimum rim width (BMO-MRW) . It takes into account the variable geometry of the neural tissue as it exits the eye via the optic nerve head. BMO-MRW data can be classified based on a reference database according to Garway-Heath sectors as well as globally (Fig. 3.16, lower row).

Fig. 3.16

BMO-MRW analysis : A radial line-scan pattern is placed at the ONH (top left inset) and in each B-Scan the shortest distance of the BMO endpoints to the ILM are found and indicated by the cyan arrow (top right inset). Standardized measurements include thickness in predefined segments (bottom left) and the BMO minimal rim width according to the previously found landmarks in the OCT B-Scans (bottom right inset). The black line indicates the measurement of the individual patient in comparison to margins of normative data base (green—normal, yellow—borderline and red—out of normal range)

Glaucoma may also involve the loss of retinal ganglion cells around the macular region. From a dense volume scan pattern of the macula (Posterior Pole scan protocol), the ganglion cell layer (GCL) can be analyzed and followed up. For each B-Scan, the GCL is segmented. The resulting thickness map is color-coded and allows GCL thickness to be compared on a region-based approach, see Fig. 3.17.

Fig. 3.17

Segmentation of the ganglion cell layer (GCL ) and resulting color-coded thickness maps

4 Additional OCT Contrast Mechanisms and New Technologies

For more than a decade, structural OCT measurements have been used very successfully in clinical routine for the diagnosis of retinal and neurodegenerative diseases (see Chaps. 4 and 5). OCT technology was also established for measuring and assessing structural parameters within the eye bulbus, e.g. the chamber angle or the corneal thickness (see Chap. 12), and for planning cataract and refractive surgeries (Chap. 14). Furthermore, during the last years research investigating additional or complementary contrast mechanisms based on OCT technology has been published continuously. In the following section, a short overview of the most important contrast mechanisms is given.

4.1 OCT Angiography (OCTA)

The OCT signal originating from blood vessels shows a much larger variance than the OCT signal of stationary tissue. The signal alterations in the vessels are due to the flow of the back-scattering particles (mainly erythrocytes). Increased variance is observed both for the intensity and for the phase of the complex-valued OCT signal and is used to compute OCTA images.

For OCTA images, B-Scans at the same position are acquired repeatedly, and sophisticated mathematical and statistical algorithms have been developed to discriminate vascular structures from stationary tissue based on the variance of the OCT signal. These algorithms face several challenges: fast eye movements (bulk motion) cause a signal variance also for stationary tissue, which needs to be separated from the variance caused by the retinal blood flow. Also, blood flow in larger vessels within the inner retina can cause so-called projection artefacts in the deeper vascular plexus. See also reference [15] for an overview and Chap. 6 as well as the literature referenced therein for further details and ophthalmic applications of OCTA technology.
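
As a highly simplified illustration of an intensity-based OCTA contrast, the sketch below computes the inter-B-scan variance of repeated, co-located scans; registration, bulk-motion correction and projection-artefact removal, which any practical algorithm requires, are omitted.

```python
# Highly simplified sketch of an intensity-based OCTA contrast: the variance of the OCT
# signal across repeated B-scans at the same position is high for perfused vessels and
# low for static tissue. Real OCTA pipelines additionally register the repeats and
# suppress bulk-motion and projection artefacts, which is omitted here.
import numpy as np

def octa_variance(repeats):
    """repeats: array of shape (n_repeats, depth, width) holding co-located OCT B-scans."""
    return np.var(repeats, axis=0)   # large values indicate moving scatterers (flow)
```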

4.2 Quantitative Measurement of Retinal Blood Flow

Whereas in OCTA the blood flow and thus the geometry of the different vascular plexuses are visualized, no quantification of the blood flow, e.g. in units of microliters per second, can be made. For such a quantitative assessment of the ocular blood flow, knowledge of the velocity profile within the vessel is also required. In addition, the vessel diameter and geometry need to be known, which can be derived from structural OCT and OCTA data. If the blood flow contains a velocity component in z-direction—as is the case for the large vessels at the rim of the optic nerve head—this velocity component can be extracted from OCT phase measurements of consecutive A-scans. In addition to the angle dependency, these phase-shift measurements are sensitive to bulk motion and to phase instabilities of the OCT system. However, for vessels which are predominantly oriented in lateral direction, more sophisticated decorrelation methods and/or extensions of the phase-based methods are required to still be able to measure all velocity components. This is a very active field of research and promising approaches including literature references are presented in Chap. 7.
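
For the phase-based case, the relation between the measured phase shift of consecutive A-scans and the axial velocity component can be sketched as follows; the wavelength, tissue index and A-scan interval are assumed example values.

```python
# Sketch of phase-resolved Doppler OCT: the phase shift between consecutive A-scans at
# the same depth encodes the axial velocity component. The wavelength, tissue index and
# A-scan interval below are assumed example values; bulk motion and phase noise are ignored.
import numpy as np

def axial_velocity(delta_phi, lambda0=880e-9, n=1.38, t_ascan=1 / 85e3):
    """Axial velocity component from the inter-A-scan phase difference (rad)."""
    return lambda0 * delta_phi / (4 * np.pi * n * t_ascan)

print(f"{axial_velocity(np.pi / 2) * 1e3:.1f} mm/s")   # 90° phase shift -> a few mm/s
```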

4.3 OCT with Visible Light (Vis-OCT)

Up to now, all commercially available OCT systems used in ophthalmology operate in the near-infrared wavelength range between 0.8 and 1.3 μm. Shifting the OCT light source to the visible wavelength range would imply several challenges and disadvantages, but would also have two major advantages, presented in the following:

4.3.1 Resolution

With vis-OCT, both the lateral and especially the axial resolution of retinal OCT images could be significantly improved. The dependency of the axial resolution on the OCT wavelength was already discussed above, and from Eq. (3.5) it is obvious that the use of a broad visible spectrum at shorter wavelengths (e.g. 450–700 nm) pushes the achievable axial resolution into the submicron range. It is about 8× higher compared to a standard infrared OCT centered at 880 nm with 80 nm bandwidth.

The lateral resolution of OCT en-face images is given by the Rayleigh criterion, where r defines the minimum distance between two resolvable structures. Therefore, for a given numerical aperture it is proportional to the center wavelength:

$$ r=\frac{0.61\cdot \lambda }{NA}\to r=6.8\cdot \lambda \kern0.875em \mathrm{for}\ NA\approx 0.09,\left({\mathrm{d}}_{\mathrm{pupil}}\approx 3\ \mathrm{mm}\right). $$
(3.9)

Due to the presence of optical aberrations, dilation of the pupil in general does not result in an improvement of resolution. Thus, the transition to visible light, e.g. centered at 500 nm, could improve the lateral resolution by a factor of 1.6 or 2 compared to OCT images acquired at 800 nm or 1000 nm center wavelength, respectively.

4.3.2 Spectral Imaging, Oximetry

In addition, the spectral information of the back-scattered visible light could be used for spectroscopic analysis. Since the absorption curves for oxygenated (HbO2) and deoxygenated hemoglobin (Hb) show characteristic differences in the visible range, the spectral data acquired in vis-OCT could be used as well to determine the oxygen saturation of the arterial and venous blood flow [16]. The oxygen saturation is defined as the percentage of oxygen-saturated hemoglobin (HbO2) with regard to the total amount of oxygenated and deoxygenated hemoglobin. Such a measurement could give complementary information to the quantitative blood flow data, since the knowledge of arterial and venous oxygen saturation together with reliable blood flow data would allow to estimate the total oxygen supply to the retina.

Recently, promising results obtained with experimental vis-OCT systems have been published [16, 17]. But this technique also comes with limitations and technical challenges: the optical design needs to be carefully corrected for chromatic aberrations, and cost-efficient broadband light sources are not commercially available. In addition, there are fundamental problems: due to the potential photo-chemical action of blue light, the laser exposure limits are very strict, resulting in reduced sensitivity of the OCT system. Visible light also causes bleaching of the photo pigments and appears very bright to the patient, leading to considerable discomfort. The most important limitation of vis-OCT is most likely the inaccessibility of structures below the intact RPE, due to its strong absorption of visible light. Therefore, the vascular plexuses within the choroid (choriocapillaris and larger choroidal arteries and veins) cannot be imaged and assessed with vis-OCT or vis-OCTA.

4.4 OCT Elastography (OCE)

Many ocular diseases are associated with a change of mechanical tissue properties. Examples are keratoconus (mechanical properties of the cornea), presbyopia (stiffening of the lens), glaucoma (often associated with a stiffening of the sclera and/or lamina cribrosa), arteriosclerosis (reduction of vessel elasticity) and others. Therefore, a reliable, non-contact method to measure the elasticity of different ocular tissues in vivo could have a major impact on the early diagnosis of these diseases, possibly even before structural changes can be detected with conventional OCT.

Brillouin scattering [18] is one approach to measure the Young's (or shear) modulus, which characterizes the tissue elasticity. Unfortunately, due to the extremely small wavelength shift, a laser with a very narrow spectral band is needed, and considerable technical effort is required to separate the Brillouin-shifted backscattered photons from the elastically backscattered background and to measure the small wavelength shift.

A different, very promising approach is OCE [19]. The review article by Kirby et al. gives a comprehensive introduction and overview of the technology [20]. For OCE measurements, a mechanical load is applied to the tissue under examination. The tissue responds to the applied stress, and its displacement is measured by an appropriate OCT system. Different methods to provide the mechanical load have been proposed and tested. The load can be applied as a static force, as a sinusoidal vibration, or as a fast transient, resulting in different requirements for the imaging system. For clinical use, a non-contact technique would be highly preferable; the most promising methods are the application of an air puff [21, 22] or excitation by focused ultrasound [20, 23].
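
As a simplified illustration of compression-type OCE, the following sketch derives the local strain from the slope of the phase-derived axial displacement over depth and converts it into a Young's modulus for an assumed applied stress; all numbers are illustrative, and phase wrapping, which occurs in practice, is ignored.

```python
import numpy as np

# Compression OCE sketch: axial displacement from interframe phase differences,
# strain from its slope over depth, and E = stress / strain for an assumed stress.
lambda0 = 1300e-9                               # center wavelength (m), illustrative
n_tissue = 1.38                                 # refractive index, illustrative
depth_um = np.linspace(0, 500, 100)             # depth axis (um)
true_strain = 2e-3                              # 0.2 % compression (synthetic truth)
displacement_um = -true_strain * depth_um       # ideal linear displacement profile
phase_diff = 4 * np.pi * n_tissue * (displacement_um * 1e-6) / lambda0  # "measured"

# Recover displacement from phase and estimate strain as the slope over depth.
disp_est_um = phase_diff * lambda0 / (4 * np.pi * n_tissue) * 1e6
strain_est = -np.polyfit(depth_um, disp_est_um, 1)[0]

applied_stress_kpa = 1.0                        # assumed uniform stress (kPa)
youngs_modulus_kpa = applied_stress_kpa / strain_est
print(f"strain ~ {strain_est:.4f}, E ~ {youngs_modulus_kpa:.0f} kPa")
```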

4.5 Polarization Sensitive OCT (PS-OCT)

Polarization-sensitive OCT is an extension of standard spectral domain or swept source OCT. Additional contrast is provided by measuring and evaluating the change of the polarization state of the backscattered probe light due to its interaction with the tissue under examination. PS-OCT can also be considered an improvement over the earlier SLO-based approach of scanning laser polarimetry (SLP). Since in PS-OCT the depth of the backscattered light is exactly known, the effects of different polarization-changing tissue layers along the beam path can be separated properly. This has always been a problem of SLP technology, where on the one hand the birefringence of the cornea needs to be corrected, and on the other hand the RNFL birefringence data can be corrupted by the birefringent contribution of deeper layers such as the sclera, causing so-called atypical RNFL patterns [24].

Mathematically, the polarization of the light and the polarization-changing properties of a tissue sample can be described with the Jones formalism, which is discussed in detail in the review paper by de Boer, Hitzenberger and Yasuno [25]. This formalism uses the electric field vector \( \overrightarrow{\mathrm{E}} \) to describe the polarization state of the electromagnetic light wave propagating in z-direction, with \( E_x \) and \( E_y \) as its complex components. The polarization-changing properties of a medium are described by the Jones matrix J, a 2 × 2 matrix with complex elements.

In the case of n consecutively transmitted tissue layers, the resulting Jones matrix J can be written as the product of the n individual Jones matrices, \( J = J_n \cdot J_{n-1} \cdot \ldots \cdot J_1 \). The polarization state \( {\overrightarrow{E}}^{\prime } \) of the transmitted light wave is then given by:

$$ \overrightarrow{E^{\prime }}=\left(\begin{array}{c}{E}_x^{\prime}\\ {}{E}_y^{\prime}\end{array}\right)=J\cdot \overrightarrow{E}=\left(\begin{array}{cc}{J}_{11}& {J}_{12}\\ {}{J}_{21}& {J}_{22}\end{array}\right)\cdot \left(\begin{array}{c}{E}_x\\ {}{E}_y\end{array}\right)={J}_n\cdot {J}_{n-1}\cdot \dots \cdot {J}_2\cdot {J}_1\cdot \overrightarrow{E} $$
(3.10)

The amplitude and the relative phase of the two components \( E_x^{\prime} \) and \( E_y^{\prime} \) after passing through the medium are measured for two different initial polarization vectors \( \overrightarrow{E} \). The equation above then yields four complex equations, which allow the complex components of the Jones matrix J to be calculated. The retardation, the orientation of the prevalence axis, and the dichroism of the transmitted layer can all be derived from the Jones matrix (see [25]).
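
A small numerical example may help to make the formalism concrete: the sketch below composes two linear retarders according to Eq. (3.10) and recovers the total retardation and axis orientation from the eigen-decomposition of the resulting Jones matrix. It only illustrates the algebra; it is not the processing chain of any particular PS-OCT system.

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def linear_retarder(delta, theta):
    # Jones matrix of a linear retarder with retardation delta and axis angle theta.
    d = np.diag([np.exp(1j * delta / 2), np.exp(-1j * delta / 2)])
    return rot(theta) @ d @ rot(-theta)

# Two stacked layers with the same axis orientation (Eq. (3.10): J = J2 . J1).
J = linear_retarder(np.deg2rad(30), np.deg2rad(10)) @ linear_retarder(np.deg2rad(20), np.deg2rad(10))

# Recover retardation and axis orientation from the eigen-decomposition of J.
eigvals, eigvecs = np.linalg.eig(J)
idx_fast = int(np.argmax(np.angle(eigvals)))   # sign convention of the retarder above
idx_slow = 1 - idx_fast
delta_total = np.angle(eigvals[idx_fast]) - np.angle(eigvals[idx_slow])
v = eigvecs[:, idx_fast]
axis = np.arctan2(np.abs(v[1]), np.abs(v[0]))  # axis orientation (modulo sign)
print(f"retardation ~ {np.rad2deg(delta_total):.1f} deg, axis ~ {np.rad2deg(axis):.1f} deg")
# Expected: ~50 deg total retardation at ~10 deg axis orientation.
```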

In general, four measurements (amplitude and phase) are required to determine the complex components of the Jones matrix: two linearly independent polarization vectors are applied to the sample, and two detection units measure the components of two orthogonal (or at least linearly independent) polarization states. In most of the recently published papers on PS-OCT systems, swept source technology has been used, for two reasons (see e.g. ref. [26]). Firstly, SS-OCT systems usually have an axial imaging range of several millimeters. Therefore, the two required input polarization states can be elegantly encoded in a single A-scan: the two orthogonal polarization states are split into separate beam paths, one component is delayed (typically by half of the z-range, i.e. 2–3 mm), and the two polarization states are then recombined. The two delayed wavelength sweeps with different polarization vectors are then scanned over the retina. Secondly, the detection unit can easily be duplicated in SS-OCT systems. The optical set-up is usually arranged such that in total four PIN photodiodes form two balanced detection units, each detecting the signal for a different (linearly independent) polarization vector.

Thus, the four independent measurements required in the Jones formalism can be extracted very efficiently from one A-scan, which is recorded in two polarization-dependent detection channels. Both channels yield information about the two depth-encoded input polarization states by separately evaluating the upper and lower half of the total OCT z-range.

Several effects can cause a change of the polarization. In biological tissue, especially the so-called form birefringence plays an important role. In birefringent tissue the refractive index depends on the orientation of the polarization of the incident light. Thus, the component polarized along the slow axis experiences a phase retardation with respect to the component polarized parallel to the fast axis. If the refractive index difference ∆n is known, measuring the retardation δ allows the thickness of the birefringent layer to be determined. In addition, the angle θ of the prevalence axis yields information about the orientation of the anisotropic tissue.
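
For reference, the single-pass retardation accumulated in a birefringent layer of thickness d follows the standard relation below; retinal PS-OCT measurements, as in Fig. 3.18, usually report the double-pass retardation, for which the factor 2π is replaced by 4π:

$$ \delta =\frac{2\pi }{\lambda }\cdot \Delta n\cdot d\kern1em \Rightarrow \kern1em d=\frac{\delta \cdot \lambda }{2\pi \cdot \Delta n}. $$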

Form birefringence is caused by rod-like structures whose spacing is smaller than the wavelength. This is the case for the axon bundles within the RNFL, which are visualized in the retardation map of Fig. 3.18. Another example of form birefringence in the retina is the Henle fiber layer around the macula. This birefringence contrast can also be used for the differentiation of tissue types and for improved segmentation of tissue layers.

Fig. 3.18
figure 18

The left image (a) shows an OCT en-face image of a young healthy volunteer. In (b) the double-pass phase retardation map calculated from the PS-OCT dataset is displayed. It shows a strong retardation signal for the superior and inferior RNFL bundles. Note that there is also some birefringence around the fovea, which is caused by the radially oriented Henle fibers (adapted from reference [26])

Another polarization-changing effect is dichroism, which describes the polarization-dependent attenuation of light due to polarization-dependent absorption within the tissue. This effect is of minor importance in the living human eye [25] and will therefore not be discussed further in this chapter.

Finally, strongly scattering tissue has a depolarizing effect, i.e. the polarization is scrambled. Since, within the healthy human retina, it is mainly the retinal pigment epithelium (RPE) that shows such strong depolarizing properties, this effect has been used to improve the imaging contrast and segmentation of the RPE layer (see Fig. 3.19), as well as to detect the absence of the RPE in patients with geographic atrophy [28].

Fig. 3.19
figure 19

PS-OCT B-scan through the fovea of a human retina: intensity image (a) and DOPU (degree of polarization uniformity) contrast (b), corresponding to the color scale from black (DOPU = 0) to red (DOPU = 1). The inner retina shows no or only very little depolarization, and therefore the DOPU value is close to 1 (orange to red pixels). In deeper layers the situation is different: light backscattered from the end tips of the photoreceptors (ETPR), the RPE and Bruch's membrane is to a large extent randomly polarized, resulting in significantly lower DOPU values (yellow, green and blue pixels). To avoid erroneous polarization data, areas below a certain intensity threshold are displayed in gray. Image size: 15° (horizontal) × 0.75 mm (vertical, optical distance) (adapted from reference [27])

In addition, the migration of RPE cells into the inner retinal layers can be visualized by displaying B-scans with depolarization contrast (see Miura et al. [29]). However, it should be noted that the degree of depolarization cannot be calculated directly for a single pixel: due to the coherent detection scheme, the OCT signal of each pixel is fully polarized, and thus the depolarization, i.e. the degree of polarization, cannot be measured per pixel. In the case of strongly depolarizing tissue, however, the polarization of adjacent pixels is completely uncorrelated and therefore varies statistically. Thus, when the polarization is averaged over a sliding, localized area (kernel), the mean polarization is close to 1 for polarization-preserving structures, whereas it is considerably reduced for polarization-scrambling tissue. The Hitzenberger group introduced the term “DOPU”, which stands for degree of polarization uniformity, for this mean polarization parameter [27].
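
A minimal sketch of such a DOPU computation is given below; it assumes that per-pixel normalized Stokes component images (Q, U, V) and an intensity image of one B-scan are already available, and the kernel size and intensity threshold are illustrative choices rather than values from the cited work.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# DOPU sketch: per-pixel normalized Stokes components are averaged over a small
# sliding kernel; the length of the averaged Stokes vector is the DOPU value.

def dopu_map(Q, U, V, intensity, kernel=(5, 5), min_intensity=0.05):
    Qm = uniform_filter(Q, size=kernel)
    Um = uniform_filter(U, size=kernel)
    Vm = uniform_filter(V, size=kernel)
    dopu = np.sqrt(Qm**2 + Um**2 + Vm**2)          # ~1 for polarization-preserving tissue
    dopu[intensity < min_intensity] = np.nan       # mask low-signal pixels (gray in Fig. 3.19)
    return dopu
```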

4.6 Adaptive Optics OCT (AO-OCT)

Adaptive optics is a concept to improve the resolution of an optical imaging instrument by actively compensating the static and dynamic aberrations of the optical system (see also Chap. 17). In retinal imaging, the diffraction-limited optical resolution is determined by the numerical aperture of the human eye. The fixed focal length of the eye and the pupil diameter, which can be dilated to a maximum of about 8 mm, restrict the NA to approximately 0.24. Thus, theoretically a resolution of about 2 μm (Rayleigh criterion) can be achieved at 840 nm. In practice, however, the optical resolution is reduced for dilated pupils, since the optical aberrations of the human eye increase rapidly with the pupil diameter; the deterioration caused by these aberrations outweighs the theoretical benefit of a higher NA. Therefore, in commercial OCT systems, typically a beam of about 2 mm diameter enters the eye, resulting in a lateral resolution of about 9 μm.
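
These numbers follow from the Rayleigh criterion of Eq. (3.9) together with a small-angle estimate of the numerical aperture, assuming an air-equivalent focal length of the eye of roughly 17 mm:

$$ NA\approx \frac{{d}_{\mathrm{pupil}}/2}{{f}_{\mathrm{eye}}}:\kern1em NA\left(8\ \mathrm{mm}\right)\approx \frac{4\ \mathrm{mm}}{17\ \mathrm{mm}}\approx 0.24\ \Rightarrow\ r\approx 2\ \mu \mathrm{m};\kern1em NA\left(2\ \mathrm{mm}\right)\approx 0.06\ \Rightarrow\ r\approx 9\ \mu \mathrm{m}\kern1em \left(\lambda =840\ \mathrm{nm}\right). $$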

The goal of adaptive optics is to actively compensate the optical aberrations by means of an adaptive optical component, such as a deformable mirror, and thus to realize the diffraction-limited resolution also for fully dilated pupils.

In contrast to the lateral resolution, the axial resolution of OCT systems is independent of the numerical aperture and can be improved to about 3 μm by increasing the bandwidth of the light source, as demonstrated in the past [30, 31]. Thus, with pupils dilated to more than 6 mm, aberration-compensating adaptive optics, and an improved axial resolution, a nearly isotropic point spread function with a width of less than 3 μm in all dimensions could be achieved, which would enable measurements on a cellular level. Such a resolution enhancement has been demonstrated in complex laboratory set-ups, including adaptive optics with a wavefront measurement that provides an online feedback mechanism to control the adaptive correction element. In several studies this technique has been used for different applications in the retina, for example for the in-vivo investigation of photoreceptor disc shedding [32] and for the visualization of micro-capillaries [33].

Since the technical effort and costs for the realization of adaptive optics OCT systems are considerable, computational approaches have recently been proposed to numerically correct OCT data for optical aberrations. The basis for this approach is the acquisition of phase-stable volumetric OCT data. From the en-face OCT data the complex pupil function, which contains the information about the wavefront distortions, can be calculated and numerically corrected. Finally, the corrected pupil function is used to recalculate aberration-corrected en-face images of the retina. With this method, in principle also the defocus caused by the Gaussian beam profile along the z-axis can be numerically compensated. For more details the reader may refer to references [34,35,36].
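
The following sketch illustrates the basic idea of such a numerical correction for the simplest case of pure defocus: the complex en-face plane is transformed to the pupil plane, multiplied by the conjugate of an assumed aberration phase, and transformed back. The estimation of the aberration itself (e.g. by optimizing image sharpness) is omitted, and the defocus coefficient is purely illustrative.

```python
import numpy as np

# Minimal sketch of computational aberration correction on a phase-stable complex
# en-face OCT plane, here for a pure defocus term only.

def correct_defocus(enface_complex, defocus_coeff):
    ny, nx = enface_complex.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    rho2 = fx**2 + fy**2                            # normalized pupil radius squared
    pupil = np.fft.fft2(enface_complex)             # field in the pupil plane
    phase_correction = np.exp(-1j * defocus_coeff * rho2)
    return np.fft.ifft2(pupil * phase_correction)   # aberration-corrected en-face field

# Usage (with synthetic or measured data):
# corrected = correct_defocus(complex_enface_plane, defocus_coeff=200.0)
```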

4.7 High Speed OCT

Since the beginning of the clinical use of OCT imaging devices in ophthalmology, the acquisition speed has been constantly increased with the goal of minimizing the time required to record a complete, dense 3D volume stack of the retina. The reasons for the demand for high-speed OCT systems are manifold: eye movements such as microsaccades and ocular drifts are present even in fixating eyes; they interfere with the scan pattern and cause artefacts. Sophisticated eye tracking algorithms can eliminate these artefacts by detecting the eye movement, rejecting the corrupted data, repositioning the scan system, and reacquiring the rejected scans. However, this usually results in prolonged acquisition times, especially for patients with poor fixation ability. Higher acquisition speeds can be used either to increase the A-scan density, with the benefit of an improved digital lateral resolution, or to extend the field of view further into the periphery without increasing the acquisition time. The downside of high-speed scanning OCT is the reduced illumination time per A-scan, which in general results in a decrease of sensitivity.

In order to achieve A-scan rates in the MHz range, the following two approaches have been investigated:

4.7.1 Fourier Domain Mode Locked (FDML) Lasers with MHz Sweep Rate

The use of so-called Fourier domain mode-locked lasers as OCT light sources was first proposed by Huber, Wojtkowski and Fujimoto in 2006 [37]. FDML lasers usually consist of a fiber-based ring resonator, a semiconductor optical amplifier (SOA), a tunable bandpass filter, and fiber-based components such as a polarization controller, an isolator, and an output coupler. Klein et al. used such a swept source FDML laser at 1050 nm center wavelength with an A-scan rate of 6.7 MHz to acquire a dense ultra-wide-field fundus OCT volume within 0.3 s [38].

4.7.2 Parallelization of OCT Data Acquisition

Another approach to reduce the imaging time for dense OCT volumes is based on lateral parallelization of the acquisition, by simultaneously capturing A-scans at multiple locations on the sample. The most common parallelization technique is the line-field approach, which has been demonstrated for SD-OCT as well as for SS-OCT systems. Here, instead of a scanned focus spot, a complete line is projected onto the sample. Therefore, only one scanner is required to cover a 2D area. In SD-based line-field OCT systems, a two-dimensional detector (CCD or CMOS chip) is used: one dimension (line of pixels) samples the illuminated line on the tissue, and the other dimension (column of pixels) measures the spectrally resolved interference fringe pattern for the corresponding line pixel. Thus, the read-out of the 2D detector yields the complete information for an entire B-scan. Experimental line-field SD-OCT systems have been demonstrated for retinal [39] and corneal [40] imaging.
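
Conceptually, the reconstruction of one B-scan from a single camera frame then reduces to a Fourier transform along the spectral axis, as the simplified sketch below shows; wavelength-to-wavenumber resampling and dispersion compensation, which real systems require, are omitted.

```python
import numpy as np

# Simplified line-field SD-OCT reconstruction: one 2D camera frame
# (line pixel x spectral pixel) is converted into a B-scan by an inverse
# Fourier transform along the spectral axis.

def reconstruct_bscan(camera_frame, reference_spectrum):
    # camera_frame: (n_line_pixels, n_spectral_pixels), assumed linear in wavenumber
    fringes = camera_frame - reference_spectrum[None, :]   # remove DC/reference term
    depth_profiles = np.fft.ifft(fringes, axis=1)          # one A-scan per line pixel
    n_depth = depth_profiles.shape[1] // 2
    return np.abs(depth_profiles[:, :n_depth])             # keep positive depths only

# Usage: bscan = reconstruct_bscan(frame, frame.mean(axis=0))
```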

In line-field SS-OCT systems, only a 1D line camera is used to image the illuminated line on the sample. The line camera is read out after each step of the wavelength sweep, i.e. after one complete sweep the data for a full B-scan have been acquired. In ref. [41] a line-field SS-OCT system is described that enabled the acquisition of volumetric OCT data at an effective A-scan rate of up to 1 MHz.

Finally, the line-field SS-OCT technique can be extended to full-field SS-OCT simply by replacing the line detector with a 2D image sensor. One wavelength sweep then provides the OCT data for a complete 3D volume stack (see Chap. 8 and [42, 43]).

5 Summary and Conclusion

OCT is an extremely valuable imaging technique for generating cross-sectional images with high axial resolution for tissue diagnosis. It is especially useful in ophthalmology, as the transparency of the ocular media allows imaging of the retina even at the back of the eye. Therefore, not only was the first laboratory demonstration in 1991 performed in the eye, but the first commercial device was also an ophthalmic device, entering the market only five years later.

In the last 25 years, a tremendous development of OCT technology has taken place. New OCT variants, moving from time-domain acquisition to frequency-domain measurement of the spectral interference, allowed for an enormous increase in acquisition speed and, at the same time, an increase of the tissue contrast in the images. This was the starting point for the use of OCT in daily clinical practice in ophthalmology. OCT was combined with confocal scanning laser ophthalmoscopes, featuring various fluorescence imaging techniques, into multimodal imaging platforms such as the SPECTRALIS. The insights gained into the course of retinal diseases and glaucoma could be incorporated into numerous diagnostic tools.

Beyond structural imaging, the OCT signal can be further analyzed to enable functional imaging of tissue. One such technique is OCT angiography, which visualizes the blood vessel network. As it works contactless and does not require any dye, it was accepted as a clinical imaging tool very quickly. Other techniques, such as PS-OCT for detecting tissue birefringence or OCT elastography for measuring mechanical tissue properties, have also shown great potential in experimental settings. OCT with visible light carries the potential of a significant increase in axial resolution and the additional information of an oxygenation measurement as a metabolic biomarker. However, the considerable increase in hardware complexity on the one hand, and drawbacks such as trade-offs regarding penetration depth or imaging speed on the other, have so far hindered the development of commercial devices.

Nevertheless, it can be assumed that OCT development will continue, and that either the availability of new components or the demonstration of a clear clinical benefit will lead to the breakthrough of the methods described here that have not yet been commercially implemented, as well as of those still unknown today.