Respiratory infections

Acute exacerbations of chronic obstructive pulmonary disease

Cameron et al. [1] investigated the possible role of viruses in 107 episodes of acute exacerbations of chronic obstructive pulmonary disease (AECOPD). They used modern diagnostic tests, including immunofluorescence assay and polymerase chain reaction, on nasopharyngeal aspirates. They identified infections in 64% of cases, with viruses as the probable responsible pathogens in 43%: influenza type A (13%) and rhinovirus (8%) were the most common organisms. Bacteria were found in 23% of cases, with Haemophilus influenzae the most common pathogen (11%). The authors found no significant differences in clinical characteristics and outcome between virus-infected and noninfected patients. In his related editorial, Luyt [2] comments more generally on viruses in intensive care unit (ICU) patients, emphasizing (1) the questionable relationship between viral diseases and outcome, (2) the doubtful reliability of modern diagnostic tests, with a risk of false-positive results, and (3) as a consequence, the uncertain relevance of viral detection in respiratory tract secretions: detection of a virus does not mean infection.

Parapneumonic effusions and empyemas

Based on chest radiography and/or ultrasound, Tu et al. [3] studied 175 febrile medical ICU patients with pleural effusion. After thoracentesis, they found that 45% of these patients had complicated parapneumonic effusions or empyemas. The presumptive causes of these complicated infected pleural effusions were hospital-acquired pneumonia (47%), community-acquired pneumonia (46%) or primary empyema (5%). Cultures were positive in 75% of cases; anaerobes or Candida species were isolated in less than 10% of cases. Among the 55 patients with positive cultures and nontuberculous effusions, 20 received initially inadequate antibiotic treatment, with a mortality rate of 75% vs 46% in the 35 patients whose treatment had been adequate. These data confirm that complicated parapneumonic effusions and empyemas are severe diseases in medical ICU patients, and that inadequate initial antimicrobial therapy is an important prognostic factor.

Severe acute respiratory syndrome

In a retrospective study conducted in one ICU in Hong Kong, receiving only patients with confirmed or suspected severe acute respiratory syndrome (SARS), Gomersall et al. [4] described SARS transmission among healthcare workers (35 doctors and 152 nurses). During the study period, 67 patients were admitted to the ICU with a median length of ICU stay of 13 days. With rigorously applied infection control procedures, only five ICU staff members (four nurses and one healthcare assistant) developed SARS, despite long staff exposure times (284 h for doctors and 119 h for nurses) and a substandard physical and architectural environment. Three of the five cases occurred early in the outbreak (within 10 days of admission of the first patient with SARS), suggesting that infection control procedures must be applied rigorously from the start of an epidemic. With such precautions, the risk of healthcare workers acquiring SARS appears low.

Community-acquired pneumonia

A rapid immunochromatographic membrane test (ICT), the new Streptococcus pneumoniae urinary antigen assay, was studied in a population of 140 ICU patients: 32 without community-acquired pneumonia (CAP), 32 with pneumococcal CAP and 76 with nonpneumococcal CAP. The ICT was positive in 23 of the 32 patients with pneumococcal CAP, 11 of the 76 patients with nonpneumococcal CAP and none of the 32 patients without CAP. Sensitivity, specificity, and positive and negative predictive values were 72%, 90%, 68% and 92%, respectively. Among patients who had received prior antibiotic therapy, the operating characteristics of the ICT were not modified. These results reported by Lasocki et al. [5], comparable with those observed in outpatients and non-ICU inpatients, do not permit definite conclusions regarding the clinical usefulness and impact of such diagnostic techniques on the management of patients with severe CAP.
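
The reported operating characteristics can be reproduced from the counts given above; a minimal sketch in Python, with the 2 × 2 cell counts reconstructed from the stated figures (an assumption, since only summary numbers are quoted here):

```python
# 2x2 cell counts reconstructed from the figures above:
# TP = 23, FN = 32 - 23, FP = 11, TN = (76 - 11) + 32.
tp, fn, fp, tn = 23, 32 - 23, 11, (76 - 11) + 32

sensitivity = tp / (tp + fn)  # 23/32  ~ 72%
specificity = tn / (tn + fp)  # 97/108 ~ 90%
ppv = tp / (tp + fp)          # 23/34  ~ 68%
npv = tn / (tn + fn)          # 97/106 ~ 92%
print(f"Se {sensitivity:.0%}, Sp {specificity:.0%}, PPV {ppv:.0%}, NPV {npv:.0%}")
```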

Nosocomial maxillary sinusitis

Maxillary sinusitis is a frequent complication in intubated patients, often underestimated and underdiagnosed. Systematic detection and treatment of nosocomial maxillary sinusitis (NMS) have been advocated as a means of preventing ventilator-associated pneumonia (VAP).

Prevention

A prospective, open-label randomized study conducted in 79 multiple-trauma patients expected to be mechanically ventilated for more than 3 days investigated the efficacy of a locally applied nasal decongestant (xylometazoline 0.1%) and corticosteroid (budesonide 100 μg) for preventing NMS. Compared with the placebo group, patients receiving topical treatment less frequently developed radiological maxillary sinusitis (54% vs 82%; p < 0.01); in contrast, the incidences of infectious maxillary sinusitis (8% vs 20%; p = 0.11) and of VAP (15% vs 27%; NS) were not significantly different. This first clinical evaluation suggests that pharmacological agents that reduce inflammation may help prevent maxillary sinusitis; however, from the data reported by Pneumatikos et al. [6], it is difficult to recommend this practice as a routine prevention strategy in mechanically ventilated patients.

Diagnosis

The diagnosis of NMS in ICU patients treated with mechanical ventilation (MV) is usually based on computerized tomography (CT) and isolation of organisms from material obtained by transnasal puncture. To avoid transporting patients outside the ICU, transnasal puncture guided by sinus echography merits evaluation. To this end, Vargas et al. [7] conducted a prospective study in 60 patients with a clinical suspicion of maxillary sinusitis (120 sinuses were examined). Echography was defined as positive when the hyperechogenic posterior wall of the sinus was visualized, with or without extension to the internal and external walls of the sinus. When sinus ultrasound was positive, a transnasal puncture was performed and considered positive if fluid was obtained on sinus aspiration. The study was designed to evaluate the value of echography in predicting a positive transnasal puncture. Sinus ultrasound was positive in 84 cases, and 78 of these 84 transnasal punctures were positive. Conversely, when sinus ultrasound was negative, no sinusitis was observed on CT. This study suggests that transnasal puncture could be performed directly on the basis of sinus echography. However, as with any echographic examination, sinus ultrasound may be operator-dependent; before this approach is generalized, the procedure must therefore be precisely described and standardized in the specific population of mechanically ventilated patients.

Nosocomial pneumonia

There is considerable evidence to suggest that specific interventions can be effectively employed to prevent VAP. Measures to prevent VAP extend into all aspects of daily intensive care practice, including oral care and the suction of respiratory secretions. Mori et al. [8] studied whether oral care contributes to preventing VAP by comparing 1,252 patients who received oral care with 414 historical controls. The oral care nursing plan included the following protocol three times daily: (1) suction of oropharyngeal secretions after increasing cuff pressure; (2) position the patient's head to the side, assess the condition of soft and hard tissues in the oral cavity, and cleanse the oral cavity using a swab soaked in 20-fold diluted povidone–iodine gargle; (3) cleanse the oral cavity using a toothbrush and rinse with 30 ml weakly acidic water; (4) repeat cleansing with the povidone–iodine-soaked swab; (5) suction the oral cavity and the portion of the trachea above the cuff, followed by restoration of the cuff pressure. With this protocol, the incidence of VAP decreased from 10.4% in the control group to 3.9% in the oral care group; in particular, the incidence of early-onset VAP (VAP occurring within 48–96 h after the start of MV) was one-tenth that observed in the non-oral care group. No differences were observed in the outcome variables. Unfortunately, this study evaluating simple nonpharmacological measures was conducted over an 8-year period during which new methods for preventing VAP were introduced (semi-recumbent positioning, subglottic secretion drainage, closed tracheal suction devices, non-use of H2-blockers), creating confounding effects over such a long study period. The results of this study confirm the important impact of oral care on the occurrence of VAP, suggesting that this variable has to be controlled in every study evaluating pharmacological and nonpharmacological prophylactic measures, particularly in the case of multicentre trials.

Three articles were dedicated to modalities of endotracheal suction for preventing VAP: one cost–efficacy analysis, a related editorial, and one meta-analysis.

Lorente et al. [9] compared tracheal suction costs and the incidence of VAP using a closed system without daily change versus an open tracheal suction system in a randomized study of 457 patients. There were no differences in (1) the incidence (13.9% vs 14.1%) and incidence density (14.1 vs 14.6 per 1000 days of MV) of VAP, and (2) tracheal suction costs per patient per day (2.3 ± 3.7 vs 2.4 ± 0.5 euros). However, in patients treated with MV for more than 4 days, use of the closed system without daily change was associated with lower cost. In his editorial, Maggiore [10] stressed the limitations of this study (a large proportion of patients treated with MV for less than 2 days, and costs of open and closed systems that vary with the manufacturer and the country) and the absence of effect of the type of suction system on the incidence of VAP.

Vonberg et al. [11] confirmed these data in a meta-analysis of randomized controlled trials. They selected nine trials including 648 patients in the open suction group and 644 in the closed suction group. Twenty per cent of patients treated with an open system and 19% of patients treated with a closed system developed VAP. The pooled relative risk for the closed suction system was thus 0.95 (95% CI 0.76–1.18). Today, the potential risks and benefits of the closed suction system remain more theoretical than grounded in controlled evaluation: on one side, suction efficacy, with a possible impact on tracheal tube obstruction, atelectasis and increased work of breathing; on the other, a possible decrease in bacterial contamination of caregivers, which is potentially associated with cross-transmission and the occurrence of “exogenous” pneumonia.
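
For illustration, the pooled estimate can be approximated by collapsing the trial arms into a single 2 × 2 table; the sketch below back-calculates event counts from the percentages and uses a crude single-table pooling rather than the stratified Mantel–Haenszel method such a meta-analysis would normally use:

```python
import math

# Approximate event counts back-calculated from the reported percentages.
events_closed, n_closed = round(0.19 * 644), 644   # ~122 VAP cases
events_open, n_open = round(0.20 * 648), 648       # ~130 VAP cases

rr = (events_closed / n_closed) / (events_open / n_open)
# Standard error of log(RR) for a single 2x2 table.
se = math.sqrt(1/events_closed - 1/n_closed + 1/events_open - 1/n_open)
lo, hi = math.exp(math.log(rr) - 1.96 * se), math.exp(math.log(rr) + 1.96 * se)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # ~0.94 (0.76-1.18), close to the reported 0.95 (0.76-1.18)
```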

Venstein et al. [12] proposed an algorithm based on direct examination of endotracheal aspirate and/or protected telescoping catheter specimens as an alternative to the well-known “clinical” and “bacteriological” approaches. In an observational study of 76 patients suspected of having VAP, they retrospectively found an overall rate of 80% appropriate management (based on comparison with the results of protected specimens) and considered this result good. Consequently, they suggest that a clinico-bacteriological strategy may be one simple way to correctly diagnose VAP and improve therapeutic decisions.

However, the true immediate and long-term impacts of the algorithm described in this article merit evaluation: is this strategy appropriate to prevent resorting to broad-spectrum drug coverage in the large majority of patients who develop signs and symptoms suggestive of pneumonia, thus minimizing the emergence of resistant flora and redirecting the search for another infection site?

Two studies were conducted in specific populations of ICU patients who developed VAP.

Orlikowski et al. [13] analysed the prognosis of pneumonia in a cohort of 63 patients with Guillain–Barré syndrome treated with MV. Forty-eight had early-onset (< 5 days) and 15 late-onset (> 5 days) pneumonia. Antibiotics were selected empirically and adjusted according to the microbiological results. The empirical regimen proved adequate in 48 cases (76%). A time > 2 days from admission to ventilation was the only variable independently associated with the occurrence of early-onset pneumonia, which was, moreover, chiefly related to aspiration. The microbiological characteristics of early-onset pneumonia, i.e. the high frequency of polymicrobial pneumonia (49%) and the responsible pathogens identified, confirm the high risk of aspiration pneumonia in this population. The observed mortality rate was 14%.

Combes et al. [14] conducted a study to establish more accurately the impact of piperacillin resistance on the outcome of Pseudomonas VAP in patients who had all received appropriate empirical antibiotics. Based on 115 patients with P. aeruginosa pneumonia included in a randomized trial comparing two durations of treatment, the authors showed that patients with piperacillin-resistant Pseudomonas strains had higher 28-day mortality than those with susceptible strains (37% vs 19%; p = 0.04). However, only age, female gender and severity of illness (SOFA score and severe underlying comorbidities) were identified by multivariable analysis as associated with 28-day mortality; piperacillin resistance was not. Morbidity variables such as durations of MV and ICU stay, as well as VAP recurrence rates, did not differ between resistant and susceptible strains. This study confirms the difficulty of correctly evaluating the mortality and morbidity directly attributable to the development of VAP; however, it strongly underlines that patients with VAP due to resistant strains benefit from appropriate empirical therapy.

Early identification and treatment of patients with VAP due to antibiotic-resistant strains by means of an appropriate diagnostic strategy, identification of risk factors for resistance and knowledge of the bacterial ecology of the ICU in which the patient is hospitalized are confirmed as of utmost importance.

Sepsis and infection

Sepsis: definition, prevention and costs

Several studies from various European countries have documented the rates of systemic inflammatory response syndrome (SIRS) and sepsis in ICU patients and the associated outcomes. However, the SIRS criteria have been criticised because of their frequency and lack of specificity. The SOAP study evaluated these criteria in a 2-week prospective study conducted in 2002 in 198 ICUs from 24 European countries, including 3,147 admissions [15]. A very high proportion of patients (93%) had SIRS at some time during their ICU stay; the presence of > 2 SIRS criteria was associated with a higher risk of development of severe sepsis and shock, and correlated with organ failure and mortality. The study confirms the prognostic importance of SIRS, especially when > 2 criteria are present.

The addition of biochemical markers to the clinical signs of SIRS is being considered to enhance the diagnostic specificity of the syndrome, as suggested in the PIRO system. C-reactive protein (CRP) is one such marker. However, patients with liver dysfunction may not mount an adequate CRP response to infection, as shown by Mackenzie and Woodhouse [16], who found that bacteraemic patients with liver dysfunction had lower serum CRP levels (median 103 mg/l) than their counterparts without such dysfunction (146 mg/l). Serum CRP concentrations may thus be unreliable for the diagnosis and monitoring of bacterial sepsis in patients with liver disease.

Statins appear to have broad immunomodulatory and anti-inflammatory effects, and several recent retrospective cohort studies suggest that reduced rates and severity of infection may be associated with statin therapy prior to the occurrence of infection. Kruger et al. analysed a cohort of 438 patients with bacteraemia, 66 of whom were receiving statin therapy [17]. All-cause hospital mortality (10.6% vs 23.1%, p = 0.022) and death attributable to bacteraemia (6.1% vs 18.3%, p = 0.014) were lower in patients receiving statin therapy. This reduction persisted after adjustment for imbalances and was most apparent in patients who continued to receive statin therapy after the diagnosis of bacteraemia (n = 56). As commented upon in an accompanying editorial [18], this study adds to a growing list of papers suggesting a benefit of statin therapy in sepsis, calling for a randomized trial of statin therapy in sepsis.

In a relatively large study from the Italian GiViTI database, Rossi et al. examined variations in costs (from the hospital viewpoint) and outcome across various diagnostic groups of ICU patients [19]. Costs covered the length of ICU stay and the cost in euros of all diagnostic and therapeutic procedures, drugs and equipment used, and consultations by physicians from other units. Cost per surviving patient, mostly determined by length of stay, was higher for acute lung injury/acute respiratory distress syndrome (ALI/ARDS), nontraumatic intracranial haemorrhage, multiple trauma and emergency abdominal surgery, whereas losses (expenditure for patients who died/total number of patients) were higher for ALI/ARDS and lower for multiple trauma. Planned coronary and major abdominal surgery and short-stay patients appeared most cost-efficient. The authors conclude that the cost of ICU treatment varies widely for different types of patients, and that further studies should address the major determinants of high costs and low cost-efficiency.

Antimicrobial resistance and ICU-acquired infections

Simplified methods for accurate surveillance of infection are being sought. To determine whether the dipstick method could help predict urinary tract infections in catheterized ICU patients, Schwartz and Barone collected urine samples from 106 surgical ICU patients and compared dipstick urinalysis findings with subsequent quantitative culture results [20]. The presence of nitrite on dipstick urinalysis was the best indicator of infection (91.8% specificity) but had poor sensitivity (29.5%), with positive and negative likelihood ratios of 3.52 and 0.56, respectively. Dipstick urinalysis therefore cannot be relied on to screen for potential catheter-associated urinary tract infections.
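
As a reminder of how such likelihood ratios are derived (standard definitions; with the rounded sensitivity and specificity quoted above the formulas give slightly different values, so the published figures presumably come from the exact 2 × 2 counts):

```python
# Standard definitions of likelihood ratios from sensitivity and specificity.
sens, spec = 0.295, 0.918
lr_positive = sens / (1 - spec)   # how much a positive nitrite result raises the odds of infection
lr_negative = (1 - sens) / spec   # how much a negative result lowers the odds
print(f"LR+ {lr_positive:.2f}, LR- {lr_negative:.2f}")  # ~3.60 and ~0.77 with these rounded inputs
```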

Optimizing antimicrobial dosing, especially avoiding underdosing, is an important goal to achieve effectiveness of therapy and prevent the emergence of resistance. This may be especially difficult to achieve during continuous venovenous haemofiltration (CVVH).

To predict doses and dosing intervals, Bouman et al. compared the observed CVVH clearance of various antibiotics with values predicted from the fraction unbound to protein (taken as the sieving coefficient, SC) and the ultrafiltration rate in 45 patients on CVVH [21]. They found that amoxicillin, ceftazidime, ciprofloxacin, fluconazole, metronidazole and vancomycin were easily filtered (mean SC > 0.7), but not flucloxacillin (mean SC 0.3); the difference between observed and predicted maintenance doses was small for all drugs except ceftazidime and vancomycin. The authors conclude that predicted CVVH removal provides a reliable estimate for antibiotic dosing, except for drugs having both a narrow therapeutic index and predominantly renal clearance (e.g. vancomycin).
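
The prediction tested here rests on the standard approximation that a drug's CVVH clearance equals its sieving coefficient (approximated by the unbound fraction) times the ultrafiltration rate; a minimal sketch, with illustrative numbers not taken from the paper:

```python
# Predicted extracorporeal drug clearance during CVVH:
# clearance ~ sieving coefficient (~ fraction unbound to protein) x ultrafiltrate flow.
def predicted_cvvh_clearance(unbound_fraction: float, uf_rate_l_per_h: float) -> float:
    return unbound_fraction * uf_rate_l_per_h  # L/h

# Illustrative example: a drug that is 90% unbound at 2 L/h ultrafiltration
# has a predicted extracorporeal clearance of ~1.8 L/h (30 ml/min).
print(predicted_cvvh_clearance(0.9, 2.0))
```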

“Cycling” or “rotating” antibiotics has been advocated to reduce the risk of emergence and selection of bacterial resistance, but the optimal cycle duration is unclear. In a 2-year prospective study in one ICU, Damas et al. examined the effect of rotating antibiotics within three subunits, using three different empirical regimens for 8 months each [22]. No significant change in overall antibiotic susceptibility was observed during the 2-year period. However, a concomitant decrease in the susceptibility of several species was observed for antibiotics used as first-line therapy in the unit. The authors conclude that homogeneous antibiotic use over periods of several months induces bacterial resistance in common pathogens.

To assess the distribution of bacterial species and the potential risk of antimicrobial resistance during long-term use of selective digestive decontamination (SDD), Heineger et al. examined these data over a 5-year period in an ICU routinely using SDD together with the “search and destroy” strategy for methicillin-resistant Staphylococcus aureus (MRSA) [23], and compared them with the averages recorded within the German ICU surveillance system (SARI). Overall, 1,913 patients (26% of admissions) received SDD (colistin, tobramycin, amphotericin B). Resistance rates remained low during the 5-year period, whether for MRSA (2.76 and 2.58 isolates per 1000 patient-days) or for aminoglycoside resistance in Pseudomonas aeruginosa (0.24 per 1000 patient-days), both well below the SARI averages (4.26 and 0.52 per 1000 patient-days, respectively). However, the relative frequencies of enterococci and coagulase-negative staphylococci (CNS) were higher than in SARI (23.2% vs 17.3% and 25.0% vs 20.6%, respectively). The authors conclude that routine use of SDD was not associated with increased antimicrobial resistance in an ICU with low baseline resistance rates and a vigorous control programme, although the relative increase in enterococci and CNS is of concern.

Hemodynamics

Oxygenation, hemodynamic and coagulation monitoring

Mixed venous oxygen saturation (SvO2) reflects the relationship between whole-body oxygen consumption and cardiac output under conditions of constant arterial oxygen content. Since the physically dissolved oxygen reflected by the venous oxygen partial pressure can be neglected and haemoglobin usually remains constant over the measurement period, SvO2 can be used as a parameter describing peripheral oxygenation. SvO2 measurement requires the insertion of a pulmonary artery catheter, now considered a costly and invasive procedure. Alternatively, measurement of central venous oxygen saturation (ScvO2) with commercially available fibreoptic catheters has been suggested. In healthy subjects the oxygen saturation in the inferior vena cava is higher than in the superior vena cava, and thus SvO2 is higher than ScvO2. By contrast, in critically ill patients ScvO2 is higher than SvO2, since inferior vena cava saturation falls below ScvO2 during cardio-circulatory shock and/or anaesthesia; the latter reduces cerebral oxygen extraction, increases cerebral blood flow and thus increases ScvO2.
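
The underlying physiology can be made explicit by rearranging the Fick equation (a standard derivation given here for context, not taken from the papers discussed):

```python
# Rearranged Fick principle: SvO2 ~ SaO2 - VO2 / (CO x Hb x 1.34 x 10),
# with VO2 in ml/min, CO in l/min and Hb in g/dl (dissolved O2 neglected).
def svo2(sao2: float, vo2_ml_min: float, co_l_min: float, hb_g_dl: float) -> float:
    return sao2 - vo2_ml_min / (co_l_min * hb_g_dl * 1.34 * 10)

# Illustrative values: SaO2 0.98, VO2 250 ml/min, CO 5 l/min, Hb 12 g/dl -> SvO2 ~ 0.67.
# A falling cardiac output or rising oxygen consumption lowers SvO2, which is why
# it tracks the balance between oxygen delivery and consumption.
print(svo2(0.98, 250.0, 5.0, 12.0))
```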

Despite these discrepancies, the use of ScvO2 has been suggested as a surrogate for SvO2, and contradictory results have been reported [24]. Some studies indicated high correlation coefficients between ScvO2 and SvO2 under various conditions, while others reported a lack of equivalence between the two. Even if, on physiological grounds, precise determination of SvO2 from ScvO2 is unlikely, many authors have reported parallel tracking of SvO2 and ScvO2 in different pathological conditions. Lastly, early goal-oriented therapy to maintain ScvO2 above 70% during the first 6 h has been shown to reduce mortality among patients with severe sepsis and septic shock [25].

The aim of the study performed by Varpula et al. was to assess, after initial resuscitation, the correlation and agreement between SvO2 and ScvO2, and to compare the ScvO2–SvO2 difference with lactate, oxygen-derived and haemodynamic parameters in early septic shock in the ICU [26]. In 16 septic shock patients receiving norepinephrine, the mean SvO2 was always below the mean ScvO2. The bias of the difference (ScvO2–SvO2) was 4.2%, and the 95% limits of agreement ranged from –8.1% to 16.5%. The ScvO2–SvO2 difference correlated closely and significantly with cardiac index and oxygen transport. Varpula et al. also showed that changes in ScvO2 and SvO2 were parallel in only 55% of pairs of successive measurements. This study suggests that ScvO2 cannot be used instead of SvO2 to guide treatment during septic shock in ICU patients, in contrast with the results reported in the emergency department during the very early, hypovolaemic phase of severe sepsis and septic shock [25]. The authors concluded that the usefulness of ScvO2 to guide septic shock treatment in ICU patients needs to be evaluated in a prospective multicentre randomized trial.
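
The bias and limits of agreement quoted above come from a Bland–Altman analysis; a minimal sketch of the computation, using synthetic paired saturations rather than the study's data:

```python
import numpy as np

# Bland-Altman bias and 95% limits of agreement for paired ScvO2/SvO2 readings.
# The arrays below are synthetic illustrations, not the study's measurements.
scvo2 = np.array([72.0, 68.0, 75.0, 70.0, 66.0, 74.0])
svo2 = np.array([65.0, 66.0, 69.0, 67.0, 60.0, 71.0])

diff = scvo2 - svo2
bias = diff.mean()
sd = diff.std(ddof=1)
print(f"bias {bias:.1f}%, limits of agreement {bias - 1.96*sd:.1f}% to {bias + 1.96*sd:.1f}%")
```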

Several recent studies have emphasized the value of respiration-induced variation in surrogates of stroke volume (SV) for predicting preload responsiveness in mechanically ventilated patients [27]. To confirm that MV-induced pulse-pressure variation is the consequence of MV-induced SV variation, technologies allowing beat-by-beat acquisition of SV values were needed. Two modalities were developed and validated for this purpose, i.e. oesophageal Doppler and pulse contour cardiac output (PCCO). Although both techniques report beat-to-beat changes in SV, neither has been shown to accurately measure beat-to-beat changes in SV rather than steady-state cardiac output averaged over time. Gunn et al., in an experimental study of a very small population of five purpose-bred research hounds, tested the ability of these two techniques to predict absolute changes in SV determined by one of two gold standards: an aortic root flow probe or a conductance catheter, which estimated left ventricular SV as the difference between measured maximal and minimal left ventricular volumes [28]. For every haemodynamic state studied (baseline, norepinephrine, nitroprusside and dobutamine), both the oesophageal Doppler-derived stroke distance and the PCCO-derived SV accurately followed the directional changes in aortic flow probe-derived SV. However, neither technique consistently estimated absolute SV values or tracked the proportional changes in SV compared with the aortic flow probe determination.
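
For context, the pulse-pressure variation mentioned above is conventionally computed over one ventilatory cycle as follows (standard definition, with illustrative values):

```python
# Pulse-pressure variation (PPV) over one mechanical breath (standard definition).
def pulse_pressure_variation(pp_max_mmhg: float, pp_min_mmhg: float) -> float:
    """Return PPV as a percentage of the mean pulse pressure."""
    mean_pp = (pp_max_mmhg + pp_min_mmhg) / 2.0
    return 100.0 * (pp_max_mmhg - pp_min_mmhg) / mean_pp

# Illustrative: max PP 48 mmHg, min PP 38 mmHg -> PPV ~23%, a value usually
# taken to indicate preload responsiveness in mechanically ventilated patients.
print(pulse_pressure_variation(48.0, 38.0))
```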

Gunn et al. [28] suggested that these two devices change their relationship to SV as the cardiovascular state varies. For oesophageal Doppler monitoring these changes could be linked to changes in the distribution of flow between the upper and lower arterial circuits and/or to the onset of non-fully developed laminar flow. For the PCCO device, the three-element Windkessel model used to determine SV may become erroneous when arterial tone or myocardial contractility varies. This paper suffered from several methodological limitations, and further studies with a more appropriate and well-validated design are needed to confirm or refute these potential limitations [29]. However, this experimental study does not preclude the use of these two techniques to assess fluid responsiveness in critically ill patients [30], since volume expansion has no significant influence on the arterial tone, cardiac contractility or blood flow redistribution that could introduce measurement error in the determination of absolute SV values. Thus these two beat-by-beat SV determination techniques remain pertinent for evaluating fluid responsiveness in mechanically ventilated critically ill patients.

Prediction of the onset of cardiogenic pulmonary oedema may also require sensitive monitoring techniques. Schochat and colleagues [31] assessed the value of an internal thoracic impedance monitor to predict cardiogenic pulmonary oedema in patients at risk. They examined 265 consecutive patients admitted for cardiac conditions who had no clinical signs of pulmonary oedema; patients with extracardiac respiratory failure or pacemakers were excluded. Monitoring of the lung's electrical impedance was used to predict cardiogenic pulmonary oedema, since accumulation of blood and fluid decreases impedance values. Thirty-seven patients developed cardiogenic pulmonary oedema while being monitored; internal thoracic impedance decreased by more than 12% from baseline in all of them. The authors suggest in this preliminary study that such monitoring may be suitable for early prediction of cardiogenic pulmonary oedema, before the appearance of clinical signs.

Another important monitoring aspect concerns coagulation after cardiac surgery. Merlani and colleagues [32] evaluated point-of-care (POC) determination of the activated partial thromboplastin time (aPTT) for coagulation management after heart surgery. They randomized 42 patients scheduled for valve surgery and 84 for coronary artery bypass grafting with cardiopulmonary bypass to postoperative coagulation management monitored either by central laboratory aPTT or by POC aPTT. Heparin was administered according to guidelines. In the valve surgery patients, the POC group reached the desired coagulation state sooner, had lower thoracic blood loss and included fewer patients receiving transfusions. This improvement was not observed in the coronary artery bypass group. Side effects were similar in the two groups.

Colloid–crystalloid controversy

The colloid–crystalloid controversy refers to the debate on the respective merits and demerits of infusing colloid or crystalloid solutions in patients suffering from hypovolaemic hypotension. Even though the recent SAFE study of albumin 4% versus saline fluid resuscitation did not report a significant superiority of albumin with regard to outcome in critically ill patients [33], and increasing data suggest renal toxicity and severe adverse effects of colloids [34], the respective amplitude of the cardiac response following colloid or saline administration remains a subject of controversy.

Verheij et al. studied the effects on volume expansion and myocardial function of colloids and crystalloids in the treatment of hypovolaemia after cardiac and major vascular surgery [35]. Using a 90-min fluid challenge protocol modified from Weil et al., the authors compared, in 67 randomly assigned patients, the cardiac response following a fluid challenge with saline 0.9% or a colloid (gelatine 4%, hydroxyethyl starch 6% or albumin 5%). To reach the same objective, a significantly larger infusion of saline (1,800 ml) than of colloid (1,600 ml) was required, associated with a significant decrease in colloid osmotic pressure in the saline group and a significantly larger increase in plasma volume in the colloid groups (3% vs 19%). Colloid infusion induced a larger, significant increase in cardiac index and stroke work index than saline infusion, without differences between the colloid groups. Verheij et al. [35] suggested that the difference in fluid response between crystalloid and colloid fluids relates only to differences in cardiac filling rather than in cardiac function. In their study the lack of change in the slope of the relationship of filling pressure to global end-diastolic volume index suggested no difference in myocardial compliance among the groups, even though colloid osmotic pressure was higher in the colloid groups. Lastly, it should be emphasized that, whatever the group, the increase in mean arterial pressure was similar. Since the observed differences concerned cardiac filling rather than cardiac function, the difference between colloid and saline could presumably be offset by increasing the amount of infused saline.

It must also be kept in mind that, whatever the choice between colloid and crystalloid, three main principles should guide the clinician: start volume expansion as early as possible; control its efficacy with predefined goal-oriented therapy; and remain alert to the potential onset of severe adverse effects.

Cardiac arrest: mechanical ventilation, emergency teams

Adequate ventilation is essential for successful resuscitation after cardiac arrest, but its management remains debated. Constant-flow insufflation of oxygen (CFIO) at a rate of 15 l/min via capillaries inserted in the wall of an endotracheal Boussignac tube has previously been reported for this purpose, since chest compression then generates sufficient ventilation to achieve adequate gas exchange [36].

Bertrand et al., in a prospective multicentre randomized clinical trial, assigned 944 patients resuscitated for out-of-hospital cardiac arrest to either standard endotracheal intubation with MV or CFIO at a flow rate of 15 l/min [37]. Owing to methodological problems encountered during the study and to the very poor prognosis of this cohort of patients, selected as nonresponders to initial defibrillation, this trial must be seen primarily as an effectiveness study, since it suffered from a lack of power. Bertrand et al. reported no significant difference in outcome regarding return to spontaneous ventilation (21% with CFIO vs 20% with MV), hospital admission (17% vs 16%) or ICU discharge (2.4% vs 2.3%). The main finding of their study concerned the improved detection of SpO2 in patients ventilated with CFIO. The exact mechanism of this observation needs clarification, and its relationship with a significant increase in survival was not demonstrated; it nevertheless suggests that peripheral circulation and oxygenation were better maintained with CFIO than with conventional MV. Lastly, a lower incidence of rib cage fracture was observed in the CFIO group; by increasing pulmonary volume and intrathoracic pressure, CFIO could have attenuated chest wall injury secondary to chest massage. CFIO appears to be a safe method of providing ventilation during cardiopulmonary resuscitation and could be used instead of conventional MV if an MV device is not available.

Lellouche et al. examined the problem of airway humidification during induced hypothermia after cardiac arrest. In a prospective, cross-over randomized study of nine adult patients hospitalized for cardiac arrest, Lellouche and colleagues [38] assessed the efficacy of different humidification devices after the induction of moderate hypothermia (33 °C for 24 h). During hypothermia, patients were randomly ventilated with a heat and moisture exchanger, a heated humidifier and an active heat and moisture exchanger. Each system demonstrated limitations under moderate hypothermia. Heat and moisture exchangers led to major under-humidification, with absolute humidity below 25 mg H2O/l, and should be used with caution. Heated humidifiers were mostly adequate but led to over-humidification in some cases with the currently recommended settings.

In a retrospective observational study of 279 cardiac arrests in ward patients, the authors described the timing of cardiac arrest detection in relation to episodes of Medical Emergency Team (MET) review and routine nursing observations [39]. Peak levels of cardiac arrest detection occurred during times of routine overnight nursing observations, between 02:00 and 03:00 (OR 3.06) and between 06:00 and 07:00 (OR 1.95). After the introduction of the MET there was an inverse relationship between detection of cardiac arrests and levels of MET activation over the 24-h period. The authors concluded in favour of increased overnight utilization and earlier MET activation, in order to reduce the incidence of cardiac arrests.

Severe ECG adverse effects of psychotropic drug overdose

The Brugada syndrome, characterized by ST-segment elevation in the right precordial leads (V1–V3) associated with the presence of a right bundle branch block, is a rare familial entity with autosomal dominant inheritance and variable expression. Such an ECG pattern is occasionally observed in patients with myocardial infarction or pulmonary embolism, or after cocaine and psychotropic drug overdose.

Monteban-Kooistra et al. retrospectively studied electrocardiographic indicators of toxicity in 35 patients admitted for tricyclic antidepressant overdose over a 4-year period [40]. They observed a Brugada-like pattern in six of these patients (17%, two deaths), which resolved quickly after sodium bicarbonate administration. Given the retrospective design of the study and the small number of patients, it was not possible to determine whether the Brugada pattern was explained by the severity of the intoxication, a synergistic effect with other ingested medications or an increased sensitivity of the patient.

As with the classical and frequently observed intraventricular conduction disturbances, administration of sodium bicarbonate is required and rapidly effective. The diagnosis of a Brugada pattern in the setting of poisoning with tricyclic antidepressants is not a reason to perform the flecainide test to diagnose a true covert Brugada syndrome.

QT prolongation after drug overdose may expose patients to life-threatening arrhythmias and may require specific therapy. For this reason, Isbister et al. developed guidelines for the management of QT prolongation after citalopram overdose, including decontamination with activated charcoal and cardiac monitoring, based on a simulation study using a previously developed pharmacokinetic–pharmacodynamic model that predicted the time-course of QT prolongation and the effects of citalopram dose and of therapy [41]. Such a model may help clinicians decide which patients to decontaminate and monitor. This article was later discussed in the journal [42, 43].

Arrhythmias

Among the arrhythmias recorded in the ICU, atrial fibrillation (AF) is the most widely observed. The prevalence of AF in ICUs is much higher than in the general population, even though its incidence is heterogeneous and highly dependent on the underlying condition of the patient (i.e. cardiac disease, age, severity of illness assessed by SAPS II, SIRS, etc.). Many other factors can contribute to the onset of AF in ICU patients, such as hypovolaemia, hypoxia, electrolyte disorders (especially dyskalaemia), stress, pain and anxiety.

Seguin et al. [44] reported a prospective observational study evaluating the incidence and risk factors of AF in trauma patients requiring surgical intensive care. In this population, they observed a 5.5% occurrence of AF (16/293). At the onset of AF (median 3 days), potassium and troponin levels did not differ between patients with and without AF. Patients who developed AF had received more fluids and catecholamines and suffered more frequently from septic shock. A multivariate analysis, which should be interpreted with caution because of the low number of outcome events, suggested five independent risk factors for developing AF: catecholamine use, SAPS > 30, age > 40 years, presence of SIRS and three or more traumatized regions. Although hospital mortality was twice as high in the AF group (31% vs 15%), the difference was not significant.

When sustained AF develops in ICU patients, whatever its aetiology, anticoagulation with unfractionated heparin is mandatory unless there are contraindications. By contrast, antiarrhythmic agents and/or cardioversion should not be considered before underlying disorders such as hypovolaemia have been corrected or inotropic and vasopressor agents have been switched, and before echocardiographic evaluation.

Critically ill elderly patients

The elderly population is growing in developed countries, in hospitals and in ICUs. In the early years of the twenty-first century, patients > 75 years old represent 20–25% of ICU patients (vs 12% in the late 1990s), and 5% of ICU patients are ≥ 85 years old, with strong variability between studies in admission policies and demographic contexts. In a group of 578 patients aged 80 years or older, De Rooij et al. distinguished unplanned surgical, unplanned medical and planned surgical admissions [45]. They found ICU mortality rates of 34.0%, 37.7% and 10.6%, respectively; 12 months after hospital discharge, the mortality rates were 62.1%, 69.2% and 21.6%. In addition, the median survival of planned surgical patients did not differ from that observed in the same age group in the general population. Independent risk factors for ICU mortality were variables reflecting the severity of illness; only altered renal function was an independent risk factor for 1-year mortality.

Torres et al. [46] evaluated short- and long-term outcomes of patients ≥ 65 years old treated in an intermediate care unit. Patients were admitted from the emergency department, acute hospital wards, the ICU or directly from other hospitals, and required high-dependency care (monitoring, inotropic agents, noninvasive ventilation, arterial or venous catheter) but were not severely ill, as confirmed by a mean APACHE II score of 15 points (9 points without the age points). Hospital stay was longer and 2-year mortality higher in patients ≥ 65 years old (34% vs 10.6% in patients < 65 years). In contrast, no statistically significant differences were observed in hospital mortality, discharge to a long-term facility or 2-year readmission. The authors confirmed that severity of illness and therapeutic intervention were predictors of short-term mortality, whereas comorbidity was the strongest predictor of long-term mortality.

Garrouste-Orgeas et al. [47] report the results of a study evaluating triage decisions after ICU admission requests, and subsequent outcomes, in 180 patients aged 80 years or over. ICU admission was refused in 73.3% of cases: the patients were considered too sick to benefit in 44% and too well to benefit in 29%. Hospital mortality was 62.5% in admitted patients, 70.8% in the “too sick to benefit” group and 17.6% in the “too well to benefit” group; 1-year mortality was 70.8%, 87.3% and 47%, respectively. Self-sufficiency 1 year after hospital discharge was not modified by the ICU stay; in contrast, quality of life was poorer in ICU patients than in the same-age general population.

In their editorial commenting on these three studies, Boumendil and Guidet [48] emphasize several points: (1) evaluation of outcomes of critically ill elderly patients is influenced by a selection bias, with only older patients in good condition being admitted to the ICU; (2) older age is associated with lower resource intensity and lower hospital costs; (3) ICU mortality and hospital mortality are mainly related to the diagnosis, the type of admission and the initial severity; (4) long-term survivors globally regain their previous health status.

Elderly patients in ICUs are a major concern and a matter of debate in developed countries. In general, age by itself need not be a barrier to ICU admission; physicians probably tend to overestimate the importance of age in survival and to underestimate the quality of life of elderly survivors. To avoid underutilization of ICUs for the elderly, admission policies must be better defined. We specifically need to identify elderly patients who may not benefit from intensive care. Finally, studies evaluating long-term outcomes, particularly quality of life and costs, are needed so that patients' families and physicians can make decisions based on expected outcomes and patient/family wishes.

Acute respiratory distress syndrome

Biological markers

Biological markers of collagen deposition and degradation and of surfactant inactivation are present in the bronchoalveolar lavage fluid (BALF) of patients with acute respiratory distress syndrome (ARDS). The relationships between these markers and the elastic behaviour of the respiratory system in ARDS patients could give pathophysiological insights into this syndrome. Demoule et al. [49] assessed the relationships between compliance, measured on pressure–volume (PV) curves (at zero and 10 cmH2O), and biological markers of collagen turnover and surfactant degradation in BALF in 17 patients with early ARDS. Compliance was significantly correlated with markers of collagen turnover and surfactant degradation, these correlations being stronger when the PV curves were traced at 10 cmH2O. The authors concluded that respiratory system compliance may be influenced by collagen turnover and surfactant degradation.

Predicting progression of aspiration pneumonitis to full-blown ARDS is difficult. El Solh et al. [50] hypothesized that alveolar plasminogen activator inhibitor-1 (PAI-1) could identify patients with witnessed aspiration of gastric contents who are at risk of progression to ARDS. In 51 patients with witnessed aspiration, alveolar fluid sampling was performed within 8 h of intubation. Alveolar PAI-1 antigen levels were 2687 ± 1498 ng/ml in those who progressed to ARDS (n = 17) and 587 ± 535 ng/ml in those with uncomplicated aspiration pneumonitis (n = 34). A cut-off alveolar PAI-1 level of > 1518 ng/ml had an 82% sensitivity and 97% specificity in predicting progression to ARDS. Plasma levels of PAI-1 antigen did not differ between the two groups. The authors concluded that alveolar PAI-1 antigen levels can be used as a valid biomarker of progression to ARDS in patients with documented aspiration.

Karagiorga and colleagues [51] presented an interesting paper investigating the levels of neutral lipids in the BALF of 13 patients with fat embolism syndrome (FES), 11 with ALI/ARDS of diverse origin and 5 without cardiopulmonary disease. Total BALF protein in the FES group was significantly higher than in ALI/ARDS, while the alterations in individual phospholipid classes were similar. Total cholesterol, lipid esters and monoglycerides were significantly higher in the FES group than in the other groups, as was the level of platelet-activating factor. Patients with FES and ALI/ARDS had significantly higher phospholipase A2 activity than controls. The authors suggested that the levels of neutral lipids in BALF can be used to distinguish FES from ALI/ARDS.

In a prospective multicentre observational study of 68 ARDS patients requiring MV, the Quebec Critical Care Network [52] found that a higher initial serum concentration of Clara cell protein (CC-16), a lung epithelium-specific small protein, was associated with an increased risk of death, fewer ventilator-free days and an increased frequency of nonpulmonary multiple organ failure (ρ = 0.36). The median serum levels of CC-16 were significantly higher in nonsurvivors than in survivors on days 0–2 (19.93 μg/l, IQR 11.8–44.32, vs 8.9 μg/l, IQR 5.66–26.38) and remained so up to day 14, allowing the conclusion that CC-16 is a valuable biomarker of ARDS that may help predict outcome among ARDS patients at high risk of death. This interesting new predictor needs, however, prospective validation in a large ARDS population.

Monitoring

Pulmonary arterial hypertension (PAH) is a frequent finding in ARDS patients and may provide prognostic information. Direct measurement of pulmonary artery pressure by means of a pulmonary artery catheter is invasive. In contrast, CT can measure pulmonary artery trunk diameter, which may provide noninvasive estimates of pulmonary artery pressure. Beiderlinden et al. [53] carried out a study in 103 mechanically ventilated ARDS patients admitted to a referral centre. Patients underwent pulmonary artery catheterization and chest CT on admission: 95 patients had PAH as assessed by right heart catheterization. A pulmonary artery trunk diameter ≥ 29 mm on chest CT had poor sensitivity and specificity (0.54 and 0.63, respectively) for predicting moderate/severe PAH; the positive predictive value was 0.83 and the negative predictive value 0.28. The diagnosis of PAH by CT measurement of pulmonary artery trunk diameter was incorrect in 43.7% of patients. The authors concluded that chest CT is an unreliable clinical tool for predicting pulmonary artery pressure in ARDS patients.
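
The apparently paradoxical combination of poor sensitivity and specificity with a high positive and very low negative predictive value follows from the high pre-test probability; a sketch using Bayes' rule (the ~77% prevalence of moderate/severe PAH is back-calculated from the published predictive values, not stated explicitly here):

```python
# Predictive values from sensitivity, specificity and prevalence (Bayes' rule).
def predictive_values(sens: float, spec: float, prev: float) -> tuple[float, float]:
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# With sens 0.54, spec 0.63 and an assumed ~77% prevalence of moderate/severe PAH:
ppv, npv = predictive_values(0.54, 0.63, 0.77)
print(f"PPV {ppv:.2f}, NPV {npv:.2f}")  # ~0.83 and ~0.29, close to the reported values
```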

ALI/ARDS is characterized by permeability oedema, which may be aggravated by fluid loading. Groeneveld and Verheij [54] studied 22 consecutive mechanically ventilated patients with sepsis-induced ALI/ARDS (12 from direct injury, i.e. pneumonia, and 10 from extrapulmonary sources) before and after fluid loading with crystalloids or colloids. Lung protein permeability (the pulmonary leak index) was assessed with 67Ga-labeled transferrin and 99mTc-labeled red blood cells. Extravascular lung water (EVLW) and intrathoracic or pulmonary blood volume (ITBV/PBV) were also measured by thermal-dye dilution as an index of permeability. The EVLW/ITBV and EVLW/PBV ratios correlated poorly with the pulmonary leak index, and it was concluded that these ratios are an imperfect measure of increased protein permeability, independently of fluid status and colloid osmotic pressure.

Recording PV curves of the respiratory system in ARDS patients provides important physiological insight. A major drawback is that muscle paralysis is required, limiting the clinical use of this measurement. Decailliot and colleagues [55] studied the feasibility of recording PV curves without muscle paralysis. In 19 haemodynamically stable ARDS patients, PV curves were obtained from zero PEEP and from 10 cmH2O PEEP, with FiO2 1, under apnoeic sedation and under neuromuscular blockade applied in random order. In two patients, fluid resuscitation was given during apnoeic sedation. A high level of agreement was obtained for the linear part of the PV curve, the upper inflection point and the lower inflection point. In the five patients in whom an oesophageal balloon was inserted, high agreement was also obtained for chest wall compliance. The authors concluded that PV curves obtained under apnoeic sedation are an alternative to muscle paralysis, but the level of intraindividual variability indicates that PV curves without paralysis should be used with caution in the clinical setting.

Gas exchange

Patients with ARDS have a high dead space to tidal volume (Vd/Vt) ratio. When these patients are mechanically ventilated, conditioning of inspired gas is commonly performed with heat and moisture exchangers (HME) or heated humidifiers (HH). HMEs, however, add instrumental dead space and further compromise alveolar ventilation. Moran and co-workers [56] analysed the effects of replacing HMEs with HHs in ARDS patients. This intervention, without changes in the clinically adjusted ventilator parameters, induced the expected PaCO2 decrease from 46 ± 9 mmHg to 40 ± 8 mmHg (p < 0.001). The authors also studied the effects of a progressive reduction in Vt during HH ventilation so as to keep PaCO2 unchanged (46 ± 9 vs 45 ± 9 mmHg). This strategy allowed a significant reduction in Vt (8.3 ± 1.6 vs 6.9 ± 1.8 ml/kg predicted body weight, p < 0.001), Pplat (25 ± 6 vs 21 ± 6 cmH2O, p < 0.001) and physiological Vd (279 ± 74 vs 243 ± 79 ml, p < 0.001), and a significant increase in respiratory system compliance (35 ± 12 vs 42 ± 15 ml/cmH2O, p = 0.001). The authors concluded that substituting an HH for an HME while maintaining isocapnia by reducing Vt improves respiratory system compliance, suggesting less overdistension.
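
The physiological dead-space fraction mentioned here is conventionally obtained from the Enghoff modification of the Bohr equation (standard physiology, given for context rather than taken from the paper):

```python
# Enghoff-Bohr dead-space fraction: Vd/Vt = (PaCO2 - PeCO2) / PaCO2,
# where PeCO2 is the mixed-expired CO2 tension.
def dead_space_fraction(paco2_mmhg: float, peco2_mmhg: float) -> float:
    return (paco2_mmhg - peco2_mmhg) / paco2_mmhg

# Illustrative: PaCO2 46 mmHg, mixed-expired CO2 23 mmHg -> Vd/Vt = 0.5.
# Removing an HME's instrumental dead space reduces the rebreathed volume,
# allowing the same alveolar ventilation at a smaller tidal volume.
print(dead_space_fraction(46.0, 23.0))
```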

Breathing pure oxygen may cause resorption atelectasis, loss of aerated lung volume and increased intrapulmonary shunting of blood. Aboab et al. [57] examined, in a group of 14 ALI/ARDS patients ventilated with an average Vt of 6 ml/kg, the impact of 100% oxygen breathing on derecruitment. Four combinations of PEEP and FiO2 were studied in random order, each for 30 min: PEEP 5 ± 1 or 14 ± 3 cmH2O, with FiO2 0.6 or 1. A significant decrease in recruited volume and in PaO2/FiO2 ratio was observed only at the end of the ventilation period with PEEP 5 and FiO2 1 (68 ± 53 vs 39 ± 43 ml, p = 0.02, and 196 ± 104 vs 153 ± 83 mmHg, p = 0.03, respectively). The authors concluded that derecruitment occurs in ALI/ARDS patients ventilated at low PEEP with FiO2 1, and that it is prevented by high PEEP levels. This article was accompanied by an editorial comment from Hedenstierna [58].

Mechanisms explaining the improvement in gas exchange when patients with hypoxaemic respiratory failure are turned from the supine to the prone position include alveolar recruitment and/or redistribution of inspired gas. Whether such mechanisms influence outcome is unknown. Lemasson and co-workers [59] performed a retrospective analysis to study the relationship between the gas exchange response during the first prone-position session and outcome. They found that lack of gas exchange improvement (defined as an increase in PaO2/FiO2 of ≥ 20%) was a significant outcome predictor, associated with an 82.5% increase in the risk of death. Nevertheless, the association disappeared when the change in oxygenation from day 1 to day 2 was taken into account. The authors concluded that, in this population, the most important predictor of mortality is the underlying illness.

Mechanical ventilation

In 2006 several manuscripts were published in this field, dealing with a broad spectrum of topics such as noninvasive ventilation, the physiology of patient–ventilator interaction and weaning.

Noninvasive mechanical ventilation

Thanks to a large number of recent publications that have further documented its benefits and expanded its indications, noninvasive mechanical ventilation (NIV) has gained considerable interest in the medical world over the past few years; however, its actual use in everyday practice remains poorly explored.

In a prospective survey in 70 French ICUs, Demoule et al. [60] examined the change in NIV utilization compared with a previous study performed 5 years earlier. A total of 1,076 patients received ventilatory support during the 3-week study period. Overall NIV use increased from 16% in 1997 to 24% in 2002 among all ICU patients requiring ventilatory assistance, and from 35% to 52% among those not intubated before or at ICU admission, showing a clear rise in the use of NIV over this 5-year period. In particular, significant increases were observed for acute-on-chronic respiratory failure and de novo respiratory failure, while NIV was essentially never used in patients with coma. The success rate remained unchanged (62%) compared with 1997, but since a larger number of patients were treated with NIV in 2002, the proportion of patients successfully treated with NIV in French ICUs increased significantly over the 5-year period, from 9% to 13% of all patients receiving ventilatory support.

When applying NIV, great attention should be paid to the risks of the technique, and in particular to the risk of unduly delaying endotracheal intubation. Demoule et al. [61], in a companion paper, investigated whether the risks and benefits of NIV differ between patients with de novo respiratory failure and those with cardiogenic pulmonary oedema (CPE) or acute-on-chronic respiratory failure (AOC) after adjustment for disease severity. Data were obtained from the above-mentioned survey, and outcomes were compared with those of patients given endotracheal intubation without a trial of NIV. NIV success was independently associated with survival in both the de novo (OR 0.05) and the CPE-AOC (OR 0.03) groups. A decreased risk of nosocomial pneumonia was observed only in the latter group, while a shorter ICU stay than with conventional ventilation as first-line treatment was demonstrated in both groups. NIV failure was associated with ICU mortality in the de novo group, leading the authors to raise a note of caution when applying NIV in this indication. This study confirmed, on the other hand, that NIV should be considered the first-line ventilatory treatment for AOC and CPE-related acute respiratory failure.

One of the key issues when applying NIV is to obtain good tolerance of this mode of ventilation without frequent intervention by the operators. Battisti et al. [62] evaluated the feasibility of a knowledge-based system designed to automatically titrate pressure support (PS) in order to maintain patients in a “comfort zone”. The algorithm adapts the level of PS to continuously monitored patient data, pursuing the goal of keeping the patient in a zone defined by a respiratory rate < 30 breaths/min, a tidal volume above a minimum threshold set by the operator according to the patient's body weight, and an end-tidal CO2 < 55 mmHg. In this physiological study the automatic system was applied for 45 min after initial start-up in conventional PS. During this brief period the closed-loop mode was well tolerated, minute ventilation was kept constant and the respiratory rate dropped significantly from its pre-NIV value, so that PaCO2 also decreased significantly in the subgroup of hypercapnic patients. These preliminary data suggest that this automatic adjustment system may be used during NIV, but it should be validated in future clinical studies.
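
A minimal sketch of a rule-based titration loop of this kind is given below; the rules, thresholds and step sizes are hypothetical illustrations of the idea, not the authors' published algorithm:

```python
# Hypothetical rule-based pressure-support (PS) titration, sketching the idea of
# keeping the patient in a "comfort zone" (RR < 30/min, Vt above a weight-based
# minimum, end-tidal CO2 < 55 mmHg). Rules and step sizes are illustrative only.
def adjust_ps(ps_cmh2o: float, rr: float, vt_ml: float, etco2: float,
              vt_min_ml: float, step: float = 2.0) -> float:
    if rr >= 30 or vt_ml < vt_min_ml or etco2 >= 55:
        return ps_cmh2o + step                 # outside the zone: increase support
    if rr < 20 and vt_ml > 1.5 * vt_min_ml and etco2 < 45:
        return max(ps_cmh2o - step, 5.0)       # comfortably inside: try to wean
    return ps_cmh2o                            # in the comfort zone: no change

print(adjust_ps(12.0, rr=34, vt_ml=300, etco2=48, vt_min_ml=350))  # -> 14.0
```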

Conventional mechanical ventilation

During both NIV and conventional MV it is important to optimize patient comfort, reduce the work of breathing and, ultimately, achieve a good match between patient respiratory efforts and ventilator-delivered breaths. The first three of the following papers deal with these issues, while the last addresses the problem of secretion removal and rehabilitation.

Thille et al. [63] assessed for the first time the “real-life” incidence of patient–ventilator asynchrony in intubated patients. Sixty-two patients were prospectively studied as soon as they were able to trigger all ventilator breaths, either during PS or during assist-control ventilation (ACV). Various types of asynchrony were defined (i.e. double triggering, ineffective triggering, autotriggering, short and prolonged cycles). Asynchrony detection was based on flow and airway pressure signals analysed by two blinded investigators, and the asynchrony index was defined as the number of asynchrony events divided by the total respiratory rate, computed as the sum of the number of ventilator cycles and of wasted efforts. Fifteen of the 62 patients (24%) had significant asynchrony (i.e. > 10% of respiratory efforts), with double triggering being more common during ACV, while no difference between ACV and PS was found for ineffective triggering. The latter was associated with a less sensitive inspiratory trigger, a higher level of PS, a higher tidal volume and a higher pH. A high incidence of asynchrony was also associated with a longer duration of MV. It was therefore concluded that patient–ventilator asynchrony is quite common during assisted modes of ventilation and that optimization of ventilator settings may reduce the occurrence of mismatching.
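
The asynchrony index as defined in this study is straightforward to compute; a short sketch with illustrative counts:

```python
# Asynchrony index as defined by Thille et al.: asynchrony events divided by the
# total respiratory rate (ventilator cycles + wasted efforts). Counts are illustrative.
def asynchrony_index(events: int, ventilator_cycles: int, wasted_efforts: int) -> float:
    return events / (ventilator_cycles + wasted_efforts)

# 60 events over 400 delivered cycles and 100 wasted efforts -> 0.12,
# above the > 10% threshold used to define significant asynchrony.
print(asynchrony_index(60, 400, 100))
```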

It is well known that in mechanically ventilated patients both the resistance and the elastance of the respiratory system may vary considerably over time. Such variation may change the level of assistance during PS and proportional assist ventilation (PAV), resulting in over- or under-assistance, patient-ventilator asynchrony and/or increased inspiratory muscle effort. Kondili et al. [64] studied the short-term physiological effect on respiratory motor output of newly developed software used during PAV (PAV+), which automatically adjusts the flow and volume gain factors so that they always represent constant fractions of the measured values of resistance and elastance. In 10 sedated patients the respiratory workload was artificially increased during PAV+ and PS. Although minute ventilation was preserved after load application with both modes, during PAV+ the patients were able to maintain constant minute ventilation with significantly less inspiratory effort than during PS. In addition, with PAV+ the index of neuroventilatory coupling remained relatively independent of load, and the load-induced tidal volume reduction and breathing frequency increase were significantly smaller than with PS. Clearly, further studies are needed to examine the response to sustained load changes and to establish whether PAV+ may influence clinical outcome.
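The PAV+ principle can be illustrated with a short sketch: the delivered pressure is proportional to instantaneous flow and volume, and the gain factors are recomputed as fixed fractions of the measured resistance and elastance. The 50% assist fraction, the PEEP value and the function names are assumptions for illustration.

```python
def pav_plus_gains(resistance, elastance, assist_fraction=0.5):
    """PAV+ principle: flow assist (FA) and volume assist (VA) are kept at a
    constant fraction of the currently measured resistance and elastance.
    The 50% assist fraction is an illustrative assumption."""
    fa = assist_fraction * resistance   # cmH2O/(l/s)
    va = assist_fraction * elastance    # cmH2O/l
    return fa, va

def airway_pressure(flow_l_s, volume_l, fa, va, peep=5.0):
    """Instantaneous PAV pressure: Paw = PEEP + FA*flow + VA*volume."""
    return peep + fa * flow_l_s + va * volume_l

# If elastance doubles (e.g. 25 -> 50 cmH2O/l), VA doubles with it, so the
# ventilator keeps unloading the same fraction of the elastic work.
fa, va = pav_plus_gains(resistance=10.0, elastance=25.0)
print(airway_pressure(flow_l_s=0.5, volume_l=0.4, fa=fa, va=va))  # 12.5 cmH2O
```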

Volta and co-workers [65] tested, in a short-term observational study, the hypothesis that different levels of FiO2 may modulate respiratory drive and dyspnea in 13 patients ventilated with PS for acute respiratory failure. They found that respiratory drive (P0.1) can indeed be modulated by varying the FiO2, since hyperoxaemia (FiO2 = 60%) was associated with a decrease in P0.1, dyspnea, breathing frequency and minute ventilation. The reduction in respiratory drive was statistically related to an improvement in dyspnea, even for PaO2 > 60 mmHg and SaO2 higher than 98%, suggesting a direct influence of PaO2 on respiratory drive. Since there is scientific evidence that prolonged exposure to hyperoxaemia does not result in demonstrable pulmonary toxicity if PaO2 is < 255 mmHg, the authors suggested that a moderate level of hyperoxaemia may reduce the need for sedation in patients with dyspnea and discomfort.

MV may be used not only to deliver “true” ventilatory support but also, in certain instances, as a rehabilitative tool. In their multicentre controlled study, Clini and co-workers [66] randomized 46 tracheotomized patients weaned from MV to receive usual chest physiotherapy (control) or intrapulmonary percussive ventilation (IPV) via the tracheotomy tube in addition to the usual rehabilitative treatment. IPV is a ventilatory technique that delivers bursts of high-flow respiratory gas into the lungs at high respiratory rates. During this ventilation a continuous positive pressure is maintained, while a high-velocity percussive inflow opens the airways and might enhance mobilization of intrabronchial secretions. After 15 days of treatment the patients in the intervention group had a significantly greater improvement in oxygenation and a higher maximal expiratory pressure, while at the 1-month follow-up they had a lower incidence of pneumonia. No major side effects were observed with the use of IPV. The physiological mechanisms leading to these positive results were not directly investigated, and further studies are therefore needed to assess the efficacy of IPV, especially in critically ill patients.

Respiratory distress in trauma

Trauma patients with respiratory distress need to be identified early. In a multicentre cohort of 1,481 patients with blunt or penetrating trauma cared for by a prehospital mobile ICU, Raux et al. [67] assessed the value of respiratory rate (RR) and SpO2 in predicting death. The Injury Severity Score (ISS) and the Trauma Related Injury Severity Score (TRISS) were calculated. Systolic arterial blood pressure, heart rate and Glasgow coma scale were recorded in 99% of the patients, but RR and SpO2 were recorded in only 63% and 67%, respectively. Whatever the manner of expressing RR and SpO2 (continuous, in five classes, or dichotomous), neither added significant predictive value to TRISS. The authors concluded that RR and SpO2 do not add significant value to the other variables when predicting mortality in severe trauma patients. This paper prompts us to reflect on how often RR and SpO2 are disregarded during the prehospital management of acute trauma.
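For readers unfamiliar with TRISS, the sketch below shows its general logistic form; the coefficients are the widely cited Major Trauma Outcome Study values for blunt trauma and are given for illustration only, since published coefficient sets vary between revisions. Note that coded RR enters the Revised Trauma Score directly, which is why its frequent absence from prehospital records matters.

```python
import math

# Illustrative TRISS calculation. Coefficients are the widely cited MTOS
# values for blunt trauma; treat them as an example, not a reference
# implementation, since coefficient sets vary between revisions.

def revised_trauma_score(gcs_code, sbp_code, rr_code):
    """RTS from coded (0-4) Glasgow coma scale, systolic blood pressure and
    respiratory rate. RR enters directly, which is why missing RR (recorded
    in only 63% of patients in Raux et al. [67]) matters."""
    return 0.9368 * gcs_code + 0.7326 * sbp_code + 0.2908 * rr_code

def triss_survival_probability(rts, iss, age_ge_55):
    """Probability of survival Ps = 1 / (1 + e^-b), blunt-trauma coefficients."""
    b = -0.4499 + 0.8085 * rts - 0.0835 * iss - 1.7430 * (1 if age_ge_55 else 0)
    return 1.0 / (1.0 + math.exp(-b))

# Example: fully coded physiology (RTS = 7.84), ISS 25, age < 55
rts = revised_trauma_score(gcs_code=4, sbp_code=4, rr_code=4)
print(round(triss_survival_probability(rts, iss=25, age_ge_55=False), 2))  # ~0.98
```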

Weaning from mechanical ventilation

Weaning from MV remains an important aspect of ICU care, accounting for a large proportion of human and economic resources.

The adoption of a standardized approach consisting, among other elements, of measuring weaning indices has become standard practice in most ICUs in order to expedite the process. The ratio of frequency to tidal volume (f/VT) has been considered the most useful index in clinical practice, but it does not assess ventilatory endurance, an important prerequisite for successful weaning. Vassilakopoulos et al. [68] explored the usefulness of a derived index of endurance (the mean inspiratory airway pressure during controlled ventilation over the maximum inspiratory pressure, PI/MIP) in 120 mechanically ventilated patients, 75 of whom were used to test the performance of various indices, while 45 were used to prospectively validate the previously derived threshold values. The combination of the two indices, f/VT and PI/MIP, accurately predicted weaning outcome, providing prognostic information not offered by either index alone. In particular, the prospective validation showed that the chosen cut-off point had 89% sensitivity, 67% specificity and 85% correct classifications, and the combination was superior to either index alone.
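A minimal sketch of the combined decision rule is given below; the classic f/VT cut-off of 105 breaths/min per litre comes from the 1991 study, whereas the PI/MIP cut-off shown is a hypothetical placeholder, since the thresholds derived by the authors are not reproduced here.

```python
# Sketch of the combined weaning prediction described by Vassilakopoulos
# et al. [68]: success is predicted only when BOTH indices pass.

FVT_CUTOFF = 105.0     # breaths/min per litre: classic 1991 threshold
PI_MIP_CUTOFF = 0.3    # dimensionless: hypothetical placeholder

def f_vt(frequency_bpm, tidal_volume_l):
    """Rapid shallow breathing index: breathing frequency over tidal volume."""
    return frequency_bpm / tidal_volume_l

def pi_mip(mean_insp_pressure, max_insp_pressure):
    """Endurance index: mean inspiratory airway pressure during controlled
    ventilation over maximum inspiratory pressure."""
    return mean_insp_pressure / max_insp_pressure

def predict_weaning_success(frequency_bpm, tidal_volume_l,
                            mean_insp_pressure, max_insp_pressure):
    return (f_vt(frequency_bpm, tidal_volume_l) < FVT_CUTOFF
            and pi_mip(mean_insp_pressure, max_insp_pressure) < PI_MIP_CUTOFF)

# Example: f/VT = 25/0.35 ~ 71 (passes) but PI/MIP = 12/30 = 0.4 (fails)
print(predict_weaning_success(25, 0.35, 12.0, 30.0))  # False
```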

The pathophysiology underlying weaning failure is complex, and the relative weight of the different factors involved is not completely understood. In particular, most of the proposed indices assess the function of the respiratory pump and/or gas exchange, while little emphasis has so far been placed on cardiac function and, in particular, volume status. Mekontso-Dessap et al. [69] investigated the role of B-type natriuretic peptide (BNP), a powerful marker of cardiac dysfunction, in the weaning process. One hundred and two patients were prospectively recruited when considered ready to undergo a 1-h weaning trial, with BNP measured just before and at the end of the trial. Logistic regression analysis identified a high BNP level before the trial and the product of airway pressure and breathing frequency as independent risk factors for weaning failure. Interestingly, the subset of patients in whom weaning initially failed but later succeeded after diuretic therapy showed lower BNP levels than at the time of the first attempt. Given that BNP is a predictor of left ventricular dysfunction and that its plasma level correlates with left ventricular filling pressure, the authors concluded that this hormone could be useful to guide therapy when managing acute left ventricular failure or fluid overload during weaning.

Among the weaning indices, f/VT has been the most frequently used in clinical practice in recent years, based on a study published in 1991. The usefulness of this index has recently been challenged by the ACCP Task Force, which concluded in its meta-analysis that f/VT is not reliable in predicting weaning outcome. Tobin and Jubran [70] extracted data from all the studies using this index as a weaning predictor, in an attempt to determine whether variation in the reliability of f/VT was explained by spectrum and test-referral bias, as reflected by variation in the pre-test probability of success; a meta-analysis is statistically valid only when it is free of significant heterogeneity. The authors found marked heterogeneity in the pre-test probability of success among the studies in the meta-analysis. They also entered the data from the ACCP Task Force into a Bayesian model, using the pre-test probability (prevalence of success) as the operating point, and found that post-test probabilities of success were closely correlated with the values predicted by the original 1991 study (r = 0.86 for positive predictive value and r = 0.82 for negative predictive value). Average sensitivity, considered the most precise measure of screening-test reliability, was 0.87, and average specificity was 0.52. Thus, contrary to the conclusion reached by the ACCP Task Force, the f/VT index was shown to be a reliable screening test for successful weaning.
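The Bayesian recalculation is straightforward to reproduce: for a given screening result, the post-test probability of success follows directly from Bayes' theorem. The sketch below uses the average sensitivity and specificity reported in the paper and shows how the same test yields different post-test probabilities across populations with different pre-test probabilities, which is the heterogeneity argument in a nutshell.

```python
def post_test_probability(pre_test, sensitivity, specificity, test_positive=True):
    """Bayes' theorem for a dichotomous test: probability of weaning success
    given the f/VT screening result."""
    if test_positive:   # f/VT below threshold (test predicts success)
        return (sensitivity * pre_test) / (
            sensitivity * pre_test + (1 - specificity) * (1 - pre_test))
    else:               # f/VT above threshold
        return ((1 - sensitivity) * pre_test) / (
            (1 - sensitivity) * pre_test + specificity * (1 - pre_test))

# Average operating characteristics reported by Tobin and Jubran [70]
SENS, SPEC = 0.87, 0.52

# The same test behaves differently across populations with different
# prevalences of success -- the source of heterogeneity among studies.
for prior in (0.50, 0.75, 0.90):
    print(prior, round(post_test_probability(prior, SENS, SPEC), 2))
# 0.5 -> 0.64 ; 0.75 -> 0.84 ; 0.9 -> 0.94
```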

This paper was accompanied by an editorial by Connors [71]. The complex mathematical procedures of the paper might have obscured its main point for the average reader. It is true that Tobin and Jubran re-evaluated the value of f/VT, but according to the editorial the paper also asks us to change the way we think about the process of weaning. The f/VT index can be thought of as a screening test with high sensitivity but relatively low specificity, and should therefore be employed very early in the course of MV to identify, as soon as possible, patients who can breathe on their own. Applying a confirmatory test thereafter increases the specificity of the combined tests and reduces the number of false positives. This model may prove very useful for minimizing time on the ventilator and reducing the discomfort and risk of continued intubation in a patient who is ready to wean.
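The editorial's serial-testing argument can be quantified: when a confirmatory test is applied only after a positive screen and the combined result is positive only if both tests are, combined specificity rises while combined sensitivity falls. In the sketch below the confirmatory test's 90%/90% operating characteristics are purely hypothetical.

```python
def serial_combination(sens1, spec1, sens2, spec2):
    """Operating characteristics of two tests applied in series: the second
    test is run only when the first is positive, and the combined result is
    positive only if both are."""
    sens = sens1 * sens2                     # both tests must detect success
    spec = spec1 + (1 - spec1) * spec2       # either test can rule it out
    return sens, spec

# Screening with f/VT (average values from [70]), then a hypothetical
# confirmatory test (e.g. a spontaneous breathing trial) assumed to have
# 90% sensitivity and 90% specificity -- illustrative numbers only.
sens, spec = serial_combination(0.87, 0.52, 0.90, 0.90)
print(round(sens, 2), round(spec, 2))   # 0.78 0.95: fewer false positives
```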