922 results for Central pulse pressure
Abstract:
BACKGROUND: Several conversion tables and formulas have been suggested to correct applanation intraocular pressure (IOP) for central corneal thickness (CCT). CCT is also thought to represent an independent glaucoma risk factor. In an attempt to integrate IOP and CCT into a unified risk factor and avoid uncertain correction for tonometric inaccuracy, a new pressure-to-cornea index (PCI) is proposed. METHODS: PCI (IOP/CCT³) was defined as the ratio between untreated IOP and CCT³ in mm (ultrasound pachymetry). PCI distribution in 220 normal controls, 53 patients with normal-tension glaucoma (NTG), 76 with ocular hypertension (OHT), and 89 with primary open-angle glaucoma (POAG) was investigated. PCI's ability to discriminate between glaucoma (NTG+POAG) and non-glaucoma (controls+OHT) was compared with that of three published formulae for correcting IOP for CCT. Receiver operating characteristic (ROC) curves were built. RESULTS: Mean PCI values were: Controls 92.0 (SD 24.8), NTG 129.1 (SD 25.8), OHT 134.0 (SD 26.5), POAG 173.6 (SD 40.9). To minimise IOP bias, eyes within the same 2 mm Hg range between 16 and 29 mm Hg (16-17, 18-19, etc) were separately compared: control and NTG eyes as well as OHT and POAG eyes differed significantly. PCI demonstrated a larger area under the ROC curve (AUC) and significantly higher sensitivity at fixed 80% and 90% specificities compared with each of the correction formulas; optimum PCI cut-off value 133.8. CONCLUSIONS: A PCI range of 120-140 is proposed as the upper limit of "normality", 120 being the cut-off value for eyes with untreated pressures
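For readers who want to reproduce the index defined in this abstract, a minimal Python sketch of PCI = IOP/CCT³ (with CCT expressed in mm) follows; the function name and example values are illustrative, not taken from the study.

```python
# Hypothetical sketch of the pressure-to-cornea index (PCI) as defined above:
# PCI = untreated IOP (mm Hg) divided by the cube of CCT expressed in mm.

def pressure_to_cornea_index(iop_mmhg: float, cct_um: float) -> float:
    """Return PCI = IOP / CCT^3, with CCT converted from micrometres to mm."""
    cct_mm = cct_um / 1000.0
    return iop_mmhg / cct_mm ** 3

# Example (illustrative values): an eye with IOP 21 mm Hg and CCT 545 um
pci = pressure_to_cornea_index(21.0, 545.0)
print(f"PCI = {pci:.1f}")  # ~129.7; the abstract reports 133.8 as the optimum cut-off
```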
Abstract:
BACKGROUND: Noninvasive intraocular pressure (IOP) measurement in mice is critically important for understanding the pathophysiology of glaucoma. Rebound tonometry is one of the methods that can be used for obtaining such measurements. We evaluated the ability of the rebound tonometer (RT) to determine IOP differences among various mouse strains and whether differences in corneal thickness may affect IOP measurements in these animals. MATERIALS AND METHODS: Five different commonly used mouse strains (BALB/C, CBA/CAHN, AKR/J, CBA/J, and 129P3/J) were used. IOP was measured in eyes from 12 nonsedated animals (6 male and 6 female) from each strain at 2 to 3 months of age using the RT. IOPs were measured in all animals on 2 different days between 10 AM and 12 PM. Subsequently, a number of eyes from each strain were cannulated to provide a calibration curve specific for that strain. Tonometer readings for all strains were converted to apparent IOP values using the calibration data obtained from the calibration curve of the respective strain. For comparison purposes, IOP values were also obtained using the C57BL/6 calibration data previously reported. IOP for the 5 strains, male and female animals, and the different measurement occasions was compared using repeated-measures analysis of variance. The central corneal thickness (CCT) of another group of 8 male animals from each of the 5 strains was also measured using an optical low coherence reflectometry (OLCR) pachymeter modified for use with mice. CCT values were correlated to mean IOPs of male animals and to the slopes and intercepts of individual strain calibration curves. RESULTS: Noninvasive IOP measurements confirm that the BALB/C strain has lower and the CBA/CAHN has higher relative IOPs than other mouse strains, while the AKR/J, the CBA/J, and the 129P3/J strains have intermediate IOPs. There was a very good correlation of apparent IOP values obtained by RT with previously reported true IOPs obtained by cannulation. There was a small but statistically significant difference in IOP between male and female animals in 2 strains (129P3/J and AKR/J), with female mice having higher relative IOPs. No correlation between CCT and IOP was detected. CCT did not correlate with any of the constants describing the calibration curves in the various strains. CONCLUSIONS: Noninvasive IOP measurement in mice using the RT can be used to help elucidate IOP phenotype, after prior calibration of the tonometer. CCT has no effect on mouse IOP measurements using the RT.
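The conversion of raw tonometer readings to apparent IOP via a strain-specific calibration curve can be sketched as below, assuming a linear calibration of the form reading = slope × true IOP + intercept (a common choice for cannulation calibrations); the slope/intercept values are placeholders, not the constants measured in the study.

```python
# Minimal sketch of applying a strain-specific linear calibration curve.
# Slope/intercept pairs are hypothetical placeholders.

CALIBRATION = {  # strain -> (slope, intercept)
    "BALB/C":   (0.80, 1.5),
    "CBA/CAHN": (0.85, 1.0),
}

def apparent_iop(reading: float, strain: str) -> float:
    """Convert a raw rebound-tonometer reading to apparent IOP (mm Hg)."""
    slope, intercept = CALIBRATION[strain]
    return (reading - intercept) / slope

print(apparent_iop(12.0, "BALB/C"))  # ~13.1 mm Hg with these placeholder constants
```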
Abstract:
BACKGROUND: Ibopamine is an alpha-adrenergic agent and causes an elevation of intraocular pressure in eyes with increased outflow resistance. It has been proposed as a test substance for the detection of early ocular hydrodynamic disorders. PATIENTS AND METHODS: A total of 64 normal-tension glaucoma suspect eyes without anti-hypertensive treatment were enrolled. A daily pressure curve was recorded with measurements at 7:00 am, 8:00 am, 12:00 noon and 5:00 pm using an applanation tonometer and a contour tonometer, followed by instillation of ibopamine 2% in both eyes. Tonometry was performed every 15 minutes during the following hour. An IOP increase of > 2.0 mmHg was considered positive. RESULTS: The positive test group showed a significant pressure increase from 18.04 to 22.06 mmHg. Ocular pulse amplitude increased from 2.96 to 3.97 mmHg and was positively correlated with the pressure. Intraocular pressure was unchanged in the negative test group. Central corneal thickness was not significantly different between the two groups (p = 0.32). CONCLUSIONS: Ibopamine 2% eye drops have a positive pressure effect in 50% of suspected normal-tension glaucoma eyes and may differentiate between eyes with normal trabecular outflow capacity and eyes with increased resistance in the trabecular meshwork that are prone to pressure peaks and deterioration to glaucoma.
Abstract:
An epidural puncture was performed using the lumbosacral approach in 18 dogs, and the lack of resistance to an injection of saline was used to determine that the needle was positioned correctly. The dogs' arterial blood pressure and epidural pressure were recorded. They were randomly assigned to two groups: in one group an injection of a mixture of local anaesthetic agents was made slowly over 90 seconds and in the other it was made over 30 seconds. After 10 minutes contrast radiography was used to confirm the correct placement of the needle. The mean (sd) initial pressure in the epidural space was 0.1 (0.7) kPa. After the injection the mean maximum epidural pressure in the group injected slowly was 5.5 (2.1) kPa and in the group injected more quickly it was 6.0 (1.9) kPa. At the end of the period of measurement, the epidural pressure in the slow group was 0.8 (0.5) kPa and in the rapid group it was 0.7 (0.5) kPa. Waves synchronous with the arterial pulse wave were observed in 15 of the dogs before the epidural injection, and in all the dogs after the epidural injection.
Abstract:
INTRODUCTION: Low blood pressure, inadequate tissue oxygen delivery and mitochondrial dysfunction have all been implicated in the development of sepsis-induced organ failure. This study evaluated the effect on liver mitochondrial function of using norepinephrine to increase blood pressure in experimental sepsis. METHODS: Thirteen anaesthetized pigs received endotoxin (Escherichia coli lipopolysaccharide O111:B4; 0.4 μg/kg per hour) and were subsequently randomly assigned to norepinephrine treatment or placebo for 10 hours. Norepinephrine dose was adjusted at 2-hour intervals to achieve 15 mmHg increases in mean arterial blood pressure up to 95 mmHg. Systemic (thermodilution) and hepatosplanchnic (ultrasound Doppler) blood flow were measured at each step. At the end of the experiment, hepatic mitochondrial oxygen consumption (high-resolution respirometry) and citrate synthase activity (spectrophotometry) were assessed. RESULTS: Mean arterial pressure (mmHg) increased only in norepinephrine-treated animals (from 73 [median; range 69 to 81] to 63 [60 to 68] in controls [P = 0.09] and from 83 [69 to 93] to 96 [86 to 108] in norepinephrine-treated animals [P = 0.019]). Cardiac index and systemic oxygen delivery (DO2) increased in both groups, but significantly more in the norepinephrine group (P < 0.03 for both). Cardiac index (ml/min per kg) increased from 99 (range: 72 to 112) to 117 (110 to 232) in controls (P = 0.002), and from 107 (84 to 132) to 161 (147 to 340) in norepinephrine-treated animals (P = 0.001). DO2 (ml/min per kg) increased from 13 (range: 11 to 15) to 16 (15 to 24) in controls (P = 0.028), and from 16 (12 to 19) to 29 (25 to 52) in norepinephrine-treated animals (P = 0.018). Systemic oxygen consumption (systemic VO2) increased in both groups (P < 0.05), whereas hepatosplanchnic flows, DO2 and VO2 remained stable. The hepatic lactate extraction ratio decreased in both groups (P = 0.05). Liver mitochondria complex I-dependent and II-dependent respiratory control ratios were increased in the norepinephrine group (complex I: 3.5 [range: 2.1 to 5.7] in controls versus 5.8 [4.8 to 6.4] in norepinephrine-treated animals [P = 0.015]; complex II: 3.1 [2.3 to 3.8] in controls versus 3.7 [3.3 to 4.6] in norepinephrine-treated animals [P = 0.09]). No differences were observed in citrate synthase activity. CONCLUSION: Norepinephrine treatment during endotoxaemia does not increase hepatosplanchnic flow, oxygen delivery or consumption, and does not improve the hepatic lactate extraction ratio. However, norepinephrine increases the liver mitochondria complex I-dependent and II-dependent respiratory control ratios. This effect was probably mediated by a direct effect of norepinephrine on liver cells.
Abstract:
Strain rate significantly affects the strength of a material. The Split-Hopkinson Pressure Bar (SHPB) was initially used to study the effects of high strain rate (~10³ 1/s) testing of metals. Later modifications to the original technique allowed for the study of brittle materials such as ceramics, concrete, and rock. While material properties of wood for static and creep strain rates are readily available, data on the dynamic properties of wood are sparse. Previous work using the SHPB technique with wood has been limited in scope to variability of only a few conditions, and tests of the applicability of SHPB theory to wood have not been performed. Tests were conducted using a large-diameter (3.0 inch (75 mm)) SHPB. The strain rate and total strain applied to a specimen depend on the striker bar length and velocity at impact. Pulse shapers are used to further modify the strain rate and change the shape of the strain pulse. A series of tests was used to determine the test conditions necessary to produce a strain rate, total strain, and pulse shape appropriate for testing wood specimens. Hard maple, consisting of sugar maple (Acer saccharum) and black maple (Acer nigrum), and eastern white pine (Pinus strobus) specimens were used to represent a dense hardwood and a low-density softwood. Specimens were machined to diameters of 2.5 and 3.0 inches, and an assortment of lengths was tested to determine the appropriate specimen dimensions. Longitudinal specimens of 1.5 inch length and radial and tangential specimens of 0.5 inch length were found to be most applicable to SHPB testing. Stress/strain curves were generated from the SHPB data and validated with 6061-T6 aluminum and wood specimens. Stress was indirectly corroborated with gaged aluminum specimens. Specimen strain was assessed with strain gages, digital image analysis, and measurement of residual strain to confirm the strain calculated from SHPB data. The SHPB was found to be a useful tool for accurately assessing the material properties of wood under high strain rates (70 to 340 1/s) and short load durations (70 to 150 μs to compressive failure).
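The stress/strain curves mentioned above are conventionally obtained with the classical one-wave SHPB reduction; the sketch below shows that standard reduction only as an illustration (it is not claimed to be the exact procedure used in this work, and the signal arrays and bar properties are placeholders).

```python
# Classical one-wave SHPB data reduction: specimen strain rate and strain come
# from the reflected strain pulse, specimen stress from the transmitted pulse.
import numpy as np

def shpb_one_wave(eps_r, eps_t, dt, bar_E, bar_area, bar_c0, spec_area, spec_len):
    """Return specimen strain-rate, strain, and stress histories."""
    eps_r = np.asarray(eps_r)                        # reflected pulse at gauge
    eps_t = np.asarray(eps_t)                        # transmitted pulse at gauge
    strain_rate = -2.0 * bar_c0 * eps_r / spec_len   # 1/s
    strain = np.cumsum(strain_rate) * dt             # engineering strain
    stress = bar_E * (bar_area / spec_area) * eps_t  # Pa
    return strain_rate, strain, stress
```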
Abstract:
Pulse wave velocity (PWV) is a surrogate of arterial stiffness and represents a non-invasive marker of cardiovascular risk. The non-invasive measurement of PWV requires tracking the arrival time of pressure pulses recorded in vivo, commonly referred to as pulse arrival time (PAT). In the state of the art, PAT is estimated by identifying a characteristic point of the pressure pulse waveform. This paper demonstrates that for ambulatory scenarios, where signal-to-noise ratios are below 10 dB, the repeatability of PAT measurements obtained through characteristic-point identification degrades drastically. Hence, we introduce a novel family of PAT estimators based on parametric modeling of the anacrotic phase of a pressure pulse. In particular, we propose a parametric PAT estimator (TANH) that shows high correlation with the Complior® characteristic point D1 (CC = 0.99), increases noise robustness, and reduces the number of heartbeats required to obtain reliable PAT measurements by a factor of five.
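A minimal sketch of the general idea behind such a parametric estimator — fitting a hyperbolic tangent to the anacrotic (rising) phase of a pulse and taking the fitted inflection time as PAT — is given below; the exact TANH model and conventions of the cited work may differ, and the helper names are illustrative.

```python
# Illustrative tanh-based parametric PAT estimate for one pressure pulse.
import numpy as np
from scipy.optimize import curve_fit

def tanh_edge(t, baseline, amplitude, t0, tau):
    # Sigmoid-like model of the anacrotic (rising) edge of the pulse
    return baseline + 0.5 * amplitude * (1.0 + np.tanh((t - t0) / tau))

def estimate_pat(t, pulse):
    """Fit the rising edge and return the inflection time t0 as the PAT estimate."""
    t = np.asarray(t, dtype=float)
    pulse = np.asarray(pulse, dtype=float)
    p0 = [pulse.min(), np.ptp(pulse), t[np.argmax(np.gradient(pulse))], 0.01]
    popt, _ = curve_fit(tanh_edge, t, pulse, p0=p0, maxfev=5000)
    return popt[2]  # t0, relative to the chosen proximal timing reference
```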
Abstract:
BACKGROUND: A concentrate for bicarbonate haemodialysis acidified with citrate instead of acetate has been marketed in recent years. The small amount of citrate used (one-fifth of the concentration adopted in regional anticoagulation) protects against intradialyser clotting while minimally affecting the calcium concentration. The aim of this study was to compare the impact of citrate- and acetate-based dialysates on systemic haemodynamics, coagulation, acid-base status, calcium balance and dialysis efficiency. METHODS: In 25 patients who underwent a total of 375 dialysis sessions, an acetate dialysate (A) was compared with a citrate dialysate with (C+) or without (C) calcium supplementation (0.25 mmol/L) in a randomised single-blind cross-over study. Systemic haemodynamics were evaluated using pulse-wave analysis. Coagulation, acid-base status, calcium balance and dialysis efficiency were assessed using standard biochemical markers. RESULTS: Patients receiving the citrate dialysate had significantly lower systolic blood pressure (BP) (-4.3 mmHg, p < 0.01) and peripheral resistances (PR) (-51 dyne·s·cm⁻⁵, p < 0.001), while stroke volume was not increased. In hypertensive patients there was a substantial reduction in BP (-7.8 mmHg, p < 0.01). With the C+ dialysate the BP gap was less pronounced, but the reduction in PR was even greater (-226 dyne·s·cm⁻⁵, p < 0.001). Analyses of the fluctuations in PR and of subjective tolerance suggested improved haemodynamic stability with the citrate dialysate. Furthermore, an increase in pre-dialysis bicarbonate and a decrease in pre-dialysis BUN, post-dialysis phosphate and ionised calcium were noted. Systemic coagulation activation was not influenced by citrate. CONCLUSION: The positive impact on dialysis efficiency, acid-base status and haemodynamics, as well as the subjective tolerance, together indicate that citrate dialysate can significantly contribute to improving haemodialysis in selected patients.
Abstract:
PURPOSE: To evaluate a widely used nontunneled triple-lumen central venous catheter in order to determine whether the largest of the three lumina (16 gauge) can tolerate high flow rates, such as those required for computed tomographic angiography. MATERIALS AND METHODS: Forty-two catheters were tested in vitro, including 10 new and 32 used catheters (median indwelling time, 5 days). Injection pressures were continuously monitored at the site of the 16-gauge central venous catheter hub. Catheters were injected with 300 and 370 mg of iodine per milliliter of iopamidol by using a mechanical injector at increasing flow rates until the catheter failed. The infusion rate, hub pressure, and location were documented for each failure event. The catheter pressures generated during hand injection by five operators were also analyzed. Mean flow rates and pressures at failure were compared by means of two-tailed Student t test, with differences considered significant at P < .05. RESULTS: Injections of iopamidol with 370 mg of iodine per milliliter generate more pressure than injections of iopamidol with 300 mg of iodine per milliliter at the same injection rate. All catheters failed in the tubing external to the patient. The lowest flow rate at which catheter failure occurred was 9 mL/sec. The lowest hub pressure at failure was 262 pounds per square inch gauge (psig) for new and 213 psig for used catheters. Hand injection of iopamidol with 300 mg of iodine per milliliter generated peak hub pressures ranging from 35 to 72 psig, corresponding to flow rates ranging from 2.5 to 5.0 mL/sec. CONCLUSION: Indwelling use has an effect on catheter material property, but even for used catheters there is a substantial safety margin for power injection with the particular triple-lumen central venous catheter tested in this study, as the manufacturer's recommendation for maximum pressure is 15 psig.
Abstract:
INTRODUCTION: It is unclear to which level mean arterial blood pressure (MAP) should be increased during septic shock in order to improve outcome. In this study we investigated the association between MAP values of 70 mmHg or higher, vasopressor load, 28-day mortality and disease-related events in septic shock. METHODS: This is a post hoc analysis of data from the control group of a multicenter trial and includes 290 septic shock patients in whom a mean MAP ≥ 70 mmHg could be maintained during shock. Demographic and clinical data, MAP, vasopressor requirements during the shock period, disease-related events and 28-day mortality were documented. Logistic regression models, adjusted for the geographic region of the study center, age, presence of chronic arterial hypertension, simplified acute physiology score (SAPS) II and the mean vasopressor load during the shock period, were calculated to investigate the association between MAP or MAP quartiles ≥ 70 mmHg and mortality or the frequency and occurrence of disease-related events. RESULTS: There was no association between MAP or MAP quartiles and mortality or the occurrence of disease-related events. These associations were not influenced by age or pre-existent arterial hypertension (all P > 0.05). The mean vasopressor load was associated with mortality (relative risk (RR), 1.83; 95% confidence interval (CI), 1.4-2.38; P < 0.001), the number of disease-related events (P < 0.001) and the occurrence of acute circulatory failure (RR, 1.64; 95% CI, 1.28-2.11; P < 0.001), metabolic acidosis (RR, 1.79; 95% CI, 1.38-2.32; P < 0.001), renal failure (RR, 1.49; 95% CI, 1.17-1.89; P = 0.001) and thrombocytopenia (RR, 1.33; 95% CI, 1.06-1.68; P = 0.01). CONCLUSIONS: MAP levels of 70 mmHg or higher do not appear to be associated with improved survival in septic shock. Elevating MAP above 70 mmHg by augmenting vasopressor dosages may increase mortality. Future trials are needed to identify the lowest acceptable MAP level that ensures tissue perfusion and avoids unnecessarily high catecholamine infusions.
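An illustrative specification of such an adjusted logistic regression model — not the study's actual code — might look like the following Python/statsmodels sketch; the file name and column names are hypothetical assumptions about how the dataset could be laid out.

```python
# Hypothetical sketch: 28-day mortality regressed on MAP quartiles, adjusted for
# region, age, chronic arterial hypertension, SAPS II and mean vasopressor load.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("septic_shock_cohort.csv")                 # placeholder file
df["map_quartile"] = pd.qcut(df["mean_map_mmhg"], 4, labels=False)

model = smf.logit(
    "died_28d ~ C(map_quartile) + C(region) + age + C(chronic_htn)"
    " + saps2 + vasopressor_load",
    data=df,
).fit()
print(model.summary())
```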
Abstract:
BACKGROUND: Chronic pain is associated with generalized hypersensitivity and impaired endogenous pain modulation (conditioned pain modulation; CPM). Despite extensive research, their prevalence in chronic pain patients is unknown. This study investigated the prevalence and potential determinants of widespread central hypersensitivity and described the distribution of CPM in chronic pain patients. METHODS: We examined 464 consecutive chronic pain patients for generalized hypersensitivity and CPM using pressure algometry at the second toe and the cold pressor test. Potential determinants of generalized central hypersensitivity were studied using uni- and multivariate regression analyses. Prevalence of generalized central hypersensitivity was calculated for the 5th, 10th and 25th percentiles of normative values for pressure algometry obtained in a previous large study of healthy volunteers. CPM was addressed on a descriptive basis, since normative values are not available. RESULTS: Depending on the percentile of normative values considered, generalized central hypersensitivity affected 17.5-35.3% of patients. Overall, 23.7% of patients showed no increase in pressure pain threshold after the cold pressor test. Generalized central hypersensitivity was more frequent, and CPM less effective, in women than in men. Unclearly classifiable pain syndromes showed higher frequencies of generalized central hypersensitivity than other pain syndromes. CONCLUSIONS: Although prevalent in chronic pain, generalized central hypersensitivity is not present in every patient. An individual assessment is therefore required in order to detect altered pain processing. The broad basic knowledge about central hypersensitivity now needs to be translated into concrete clinical consequences, so that patients can be offered an individually tailored, mechanism-based treatment.
Abstract:
Fluvial cut-and-fill sequences have frequently been reported from various sites on Earth. Nevertheless, the information about the past erosional regime and hydrological conditions has not yet been adequately deciphered from these archives. The Quaternary terrace sequences in the Pisco valley, located at ca. 13°S, offer a manifestation of an orbitally driven cyclicity in terrace construction, where phases of sediment accumulation have been related to the Minchin (48–36 ka) and Tauca (26–15 ka) lake level highstands on the Altiplano. Here, we present a 10Be-based sediment budget for the cut-and-fill terrace sequences in this valley to quantify the orbitally forced changes in precipitation and erosion. We find that the Minchin period was characterized by an erosional pulse along the Pacific coast, where denudation rates reached values as high as 600±80 mm/ka for a relatively short time span lasting a few thousand years. This contrasts with the younger pluvial periods and the modern situation, when 10Be-based sediment budgets register nearly zero erosion at the Pacific coast. We relate these contrasts to different erosional conditions between the modern and the Minchin time. First, the sediment budget infers a precipitation pattern that matches the modern climate ca. 1000 km farther north, where highly erratic and extreme El Niño-related precipitation results in fast erosion and flooding along the coast. Second, the formation of a thick terrace sequence requires sufficient material on catchment hillslopes to be stripped off by erosion. This was most likely the case immediately before the start of the Minchin period, because this erosional epoch was preceded by a >50 ka-long time span with poorly erosive climate conditions, allowing for sufficient regolith to build up on the hillslopes. Finally, this study suggests a strong control of orbitally and ice-sheet forced latitudinal shifts of the ITCZ on the erosional gradients and sediment production on the western escarpment of the Peruvian Andes at 13°S during the Minchin period.
Abstract:
The bedrock topography beneath the Quaternary cover provides an important archive for the identification of erosional processes during past glaciations. Here, we combined stratigraphic investigations of more than 40,000 boreholes with published data to generate a bedrock topography model for the entire plateau north of the Swiss Alps, including the valleys within the mountain belt. We compared the bedrock map with data on the pattern of the erosional resistance of Alpine rocks to identify the controls of the lithologic architecture on the location of overdeepenings. We additionally used the bedrock topography map as a basis to calculate the erosional potential of the Alpine glaciers, which was related to the thickness of the LGM ice. We used these calculations to interpret how glaciers, with support from subglacial meltwater under pressure, might have shaped the bedrock topography of the Alps. We found that the erosional resistance of the bedrock lithology mainly explains where overdeepenings in the Alpine valleys and the plateau occur. In particular, in the Alpine valleys, the locations of overdeepenings largely overlap with areas where the underlying bedrock has a low erosional resistance, or where it was shattered by faults. We also found that the assignment of two end-member scenarios of erosion, related to glacial abrasion/plucking in the Alpine valleys and dissection by subglacial meltwater on the plateau, may be adequate to explain the pattern of overdeepenings in the Alpine realm. This most likely points to the topographic controls on glacial scouring. In the Alps, the flow of the LGM and previous glaciers was constrained by valley flanks, while ice flow was mostly divergent on the plateau, where valley borders are absent. We suggest that these differences in landscape conditioning might have contributed to the contrasts in the formation of overdeepenings in the Alpine valleys and the plateau.
Abstract:
The multiple high-pressure (HP), low-temperature (LT) metamorphic units of Western and Central Anatolia offer a great opportunity to investigate the subduction- and continental accretion-related evolution of the eastern limb of the long-lived Aegean subduction system. Recent reports of the HP–LT index mineral Fe-Mg-carpholite in three metasedimentary units of the Gondwana-derived Anatolide–Tauride continental block (namely the Afyon Zone, the Ören Unit and the southern Menderes Massif) suggest a more complicated scenario than the single-continental-accretion model generally put forward in previous studies. This study presents the first isotopic dates (white mica 40Ar–39Ar geochronology) for carpholite-bearing rocks from these three HP–LT metasedimentary units and, where possible, combines them with P–T estimates (chlorite thermometry, phengite barometry, multi-equilibrium thermobarometry). It is shown that, in the Afyon Zone, carpholite-bearing assemblages were retrogressed through greenschist-facies conditions at c. 67–62 Ma. Early retrograde stages in the Ören Unit are dated to 63–59 Ma. In the Kurudere–Nebiler Unit (HP Mesozoic cover of the southern Menderes Massif), HP retrograde stages are dated to c. 45 Ma, and post-collisional cooling to c. 26 Ma. These new results support that the Ören Unit represents the westernmost continuation of the Afyon Zone, whereas the Kurudere–Nebiler Unit correlates with the Cycladic Blueschist Unit of the Aegean Domain. In Western Anatolia, three successive HP–LT metamorphic belts thus formed: the northernmost Tavşanlı Zone (c. 88–82 Ma), the Ören–Afyon Zone (between 70 and 65 Ma), and the Kurudere–Nebiler Unit (c. 52–45 Ma). The southward younging of the HP–LT metamorphism from the upper and internal to the deeper and more external structural units, as in the Aegean Domain, points to the persistence of subduction in Western Anatolia between 93–90 and c. 35 Ma. After the accretion of the Menderes–Tauride terrane in Eocene times, subduction stopped, leading to continental collision and associated Barrovian-type metamorphism. Because, by contrast, the Aegean subduction remained active due to slab roll-back and trench migration, the eastern limb (below Southwestern Anatolia) of the Hellenic slab was dramatically curved and consequently torn. It is therefore suggested that the possibility for subduction to continue after the accretion of buoyant (e.g. continental) terranes depends largely on palaeogeography.
Abstract:
Purpose: The sedimentation sign (SedSign) has been shown to discriminate well between selected patients with and without lumbar spinal stenosis (LSS). The purpose of this study was to compare the pressure values associated with LSS versus non-LSS and discuss whether a positive SedSign may be related to increased epidural pressure at the level of the stenosis. Methods: We measured the intraoperative epidural pressure in five patients without LSS and a negative SedSign, and in five patients with LSS and a positive SedSign, using a Codman™ catheter in the prone position under radioscopy. Results: Patients with a negative SedSign had a median epidural pressure of 9 mmHg independent of the measurement location. Breath- and pulse-synchronous waves accounted for 1–3 mmHg. In patients with monosegmental LSS and a positive SedSign, the epidural pressure above and below the stenosis was similar (median 8–9 mmHg). At the level of the stenosis the median epidural pressure was 22 mmHg. A breath- and pulse-synchronous wave was present cranial to the stenosis, but absent below. These findings were independent of the cross-sectional area of the spinal canal at the level of the stenosis. Conclusions: Patients with LSS have an increased epidural pressure at the level of the stenosis and altered pressure-wave characteristics below. We argue that the absence of sedimentation of lumbar nerve roots to the dorsal part of the dural sac in the supine position may be due to tethering of affected nerve roots at the level of the stenosis.