897 results for "Transfusion threshold"
Abstract:
Background: It is known that 20-30% of fresh frozen plasma (FFP) is used in intensive care units (ICUs), but little is known about variations in decision making between clinicians in relation to coagulopathy management. Our aim was to describe ICU clinicians' beliefs and practice in relation to FFP treatment of non-bleeding coagulopathic critically ill patients.
Abstract:
Transfusion has been associated with significant morbidity and mortality in megaloblastic anaemia (MA). This retrospective study was undertaken to examine the usefulness of transfusion in the management of MA. Fifty-two patients with MA were identified. Of the 20 transfused patients, 13 were treated with diuretics and six with potassium supplements. The mean haemoglobin (Hb) of the transfused group was 6.5 g/dl (range 4.8-10.4 g/dl), and that of the 32 non-transfused patients was 10.5 g/dl (range 5.6-17.0 g/dl). The Hb and packed cell volume (PCV) were significantly lower in the transfused group. Only two of the 32 non-transfused patients were given potassium supplements. In this small group of patients with MA, transfusion appeared to be safe and no complications of transfusion were identified. However, existing advice was not being followed. We would suggest that, although transfusion has a minor role in the management of MA, consideration must be given to the general hazards of transfusion.
Abstract:
The term RBC-transfusion-dependence is widely used by hematologists to describe a condition of severe anemia, typically arising when erythropoiesis is reduced, in which a person continuously requires ≥1 RBC transfusions over a specified interval. Defining a person as RBC-transfusion-dependent has important implications in diverse hematological disorders, especially because it strongly correlates with decreased survival. Conversely, becoming RBC-transfusion-independent, or receiving fewer RBC transfusions over a specified interval, is defined as improvement or response in many disease- and/or therapy-settings. Whether this correlates with improved survival is controversial. We used a structured expert-panel consensus process to define RBC-transfusion-dependence and -independence, and improvement therein. We suggest these definitions may prove useful to persons studying or treating these diseases.
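The interval-based definition above can be sketched in code. This is a hypothetical illustration only: the panel's actual criteria (number of units, length of the observation window) are not given here, so the names and the default threshold below are assumptions.

```python
from dataclasses import dataclass

@dataclass
class TransfusionRecord:
    units: int          # RBC units transfused in the observation window (assumed field)
    window_weeks: int   # length of the observation window in weeks (assumed field)

def classify(record: TransfusionRecord, min_units_per_window: int = 1) -> str:
    """Label a patient RBC-transfusion-dependent if they required at least
    `min_units_per_window` transfusions over the specified interval
    (threshold value is an assumption for this sketch)."""
    if record.units >= min_units_per_window:
        return "dependent"
    return "independent"
```

A patient with any transfusion in the window would be labelled "dependent" under this sketch; a real definition would also pin down the window length and minimum units.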
Abstract:
A criterion is derived for delamination onset in transversely isotropic laminated plates under small mass, high velocity impact. The resulting delamination threshold load is about 21% higher than the corresponding quasi-static threshold load. A closed form approximation for the peak impact load is then used to predict the delamination threshold velocity. The theory is validated for a range of test cases by comparison with 3D finite element simulation using LS-DYNA and a newly developed interface element to model delamination onset and growth. The predicted delamination threshold loads and velocities are in very good agreement with the finite element simulations. Good agreement is also shown in a comparison with published experimental results. In contrast to quasi-static impacts, delamination growth occurs under a rapidly decreasing load. Inclusion of finite thickness effects and a proper description of the contact stiffness are found to be vital for accurate prediction of the delamination threshold velocity.
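The ~21% uplift over the quasi-static threshold can be sketched numerically. The quasi-static form used below is the widely cited isotropic-plate mode II threshold (Davies-Zhang type), which may differ from the criterion actually derived in the paper; all material values are illustrative assumptions.

```python
import math

def quasi_static_threshold(E: float, h: float, G_IIc: float, nu: float) -> float:
    """Quasi-static delamination threshold load for an isotropic plate,
    P_c = sqrt(8 * pi^2 * E * h^3 * G_IIc / (9 * (1 - nu^2))).
    This closed form is an assumption for the sketch, not the paper's criterion."""
    return math.sqrt(8 * math.pi**2 * E * h**3 * G_IIc / (9 * (1 - nu**2)))

# Illustrative material and geometry values (assumed, not from the study)
E = 70e9        # effective modulus, Pa
h = 2e-3        # plate thickness, m
G_IIc = 800.0   # mode II fracture toughness, J/m^2
nu = 0.3        # Poisson's ratio

P_static = quasi_static_threshold(E, h, G_IIc, nu)
P_dynamic = 1.21 * P_static   # small-mass impact threshold, ~21% higher per the abstract
```

The dynamic threshold here simply applies the reported 21% factor; the paper derives it from the impact mechanics rather than imposing it.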
Abstract:
BACKGROUND: To compare the ability of Glaucoma Progression Analysis (GPA) and Threshold Noiseless Trend (TNT) programs to detect visual-field deterioration.
METHODS: Patients with open-angle glaucoma followed for a minimum of 2 years and a minimum of seven reliable visual fields were included. Progression was assessed subjectively by four masked glaucoma experts, and compared with GPA and TNT results. Each case was judged to be stable, deteriorated or suspicious of deterioration.
RESULTS: A total of 56 eyes of 42 patients were followed with a mean of 7.8 (SD 1.0) tests over an average of 5.5 (1.04) years. Interobserver agreement to detect progression was good (mean kappa = 0.57). Progression was detected in 10-19 eyes by the experts, in six by GPA and in 24 by TNT. Using the consensus expert opinion as the gold standard (four clinicians detected progression), the GPA sensitivity and specificity were 75% and 83%, respectively, while the TNT sensitivity and specificity were 100% and 77%, respectively.
CONCLUSION: TNT showed greater concordance with the experts than GPA in the detection of visual-field deterioration. GPA showed a high specificity but lower sensitivity, mainly detecting cases of high focality and pronounced mean defect slopes.
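The sensitivity and specificity figures above are computed against the expert consensus as the gold standard. A minimal sketch of that computation, with illustrative labels rather than the study's data:

```python
def sensitivity_specificity(truth, pred):
    """Sensitivity and specificity of predictions `pred` against a
    gold-standard `truth` (both sequences of booleans)."""
    tp = sum(t and p for t, p in zip(truth, pred))          # true positives
    tn = sum((not t) and (not p) for t, p in zip(truth, pred))  # true negatives
    fp = sum((not t) and p for t, p in zip(truth, pred))    # false positives
    fn = sum(t and (not p) for t, p in zip(truth, pred))    # false negatives
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative example: experts flag two eyes, the program catches one of them.
truth = [True, True, False, False]
pred  = [True, False, False, False]
sens, spec = sensitivity_specificity(truth, pred)   # 0.5, 1.0
```

With the study's counts per eye in place of the toy lists, the same function would reproduce the reported 75%/83% (GPA) and 100%/77% (TNT).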
Abstract:
Purpose: The authors sought to quantify neighboring and distant interpoint correlations of threshold values within the visual field in patients with glaucoma. Methods: Visual fields of patients with confirmed or suspected glaucoma were analyzed (n = 255). One eye per patient was included. Patients were examined using the 32 program of the Octopus 1-2-3. Linear regression analysis among each of the locations and the rest of the points of the visual field was performed, and the correlation coefficient was calculated. The degree of correlation was categorized as high (r > 0.66), moderate (0.66 ≥ r > 0.33), or low (r ≤ 0.33). The standard error of threshold estimation was calculated. Results: Most locations of the visual field had high and moderate correlations with neighboring points and with distant locations corresponding to the same nerve fiber bundle. Locations of the visual field had low correlations with those of the opposite hemifield, with the exception of locations temporal to the blind spot. The standard error of threshold estimation increased from 0.6 to 0.9 dB with an r reduction of 0.1. Conclusion: Locations of the visual field have highest interpoint correlation with neighboring points and with distant points in areas corresponding to the distribution of the retinal nerve fiber layer. The quantification of interpoint correlations may be useful in the design and interpretation of visual field tests in patients with glaucoma.
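The correlation-and-categorization step described above can be sketched as follows: a Pearson correlation between the threshold values of two field locations across patients, bucketed with the cut-offs used in the study (high r > 0.66, moderate 0.66 ≥ r > 0.33, low r ≤ 0.33). The sample values are illustrative, not study data.

```python
import math
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return num / den

def category(r):
    """Bucket a correlation with the study's cut-offs."""
    if r > 0.66:
        return "high"
    if r > 0.33:
        return "moderate"
    return "low"

# Illustrative threshold values (dB) at two locations across five patients
loc_a = [28.0, 25.0, 30.0, 22.0, 27.0]
loc_b = [27.0, 24.0, 31.0, 21.0, 26.0]
r = pearson(loc_a, loc_b)
```

In the study this pairwise computation was repeated between every location and all other points of the 32-program grid.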
Abstract:
PURPOSE: To evaluate the sensitivity and specificity of the screening mode of the Humphrey-Welch Allyn frequency-doubling technology (FDT), Octopus tendency-oriented perimetry (TOP), and the Humphrey Swedish Interactive Threshold Algorithm (SITA)-fast (HSF) in patients with glaucoma. DESIGN: A comparative consecutive case series. METHODS: This was a prospective study which took place in the glaucoma unit of an academic department of ophthalmology. One eye of 70 consecutive glaucoma patients and 28 age-matched normal subjects was studied. Eyes were examined with the program C-20 of FDT, G1-TOP, and 24-2 HSF in one visit and in random order. The gold standard for glaucoma was presence of a typical glaucomatous optic disk appearance on stereoscopic examination, which was judged by a glaucoma expert. The sensitivity and specificity, positive and negative predictive value, and receiver operating characteristic (ROC) curves of two algorithms for the FDT screening test, two algorithms for TOP, and three algorithms for HSF, as defined before the start of this study, were evaluated. The time required for each test was also analyzed. RESULTS: Values for area under the ROC curve ranged from 82.5%-93.9%. The largest area (93.9%) under the ROC curve was obtained with the FDT criteria, defining abnormality as presence of at least one abnormal location. Mean test time was 1.08 ± 0.28 minutes, 2.31 ± 0.28 minutes, and 4.14 ± 0.57 minutes for the FDT, TOP, and HSF, respectively. The difference in testing time was statistically significant (P <.0001). CONCLUSIONS: The C-20 FDT, G1-TOP, and 24-2 HSF appear to be useful tools to diagnose glaucoma. The test C-20 FDT and G1-TOP take approximately 1/4 and 1/2 of the time taken by 24 to 2 HSF. © 2002 by Elsevier Science Inc. All rights reserved.
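The area-under-the-ROC-curve figures above can be computed directly from test scores and gold-standard labels via the rank (Mann-Whitney) formulation, without tracing the curve. A minimal sketch with illustrative data:

```python
def auc(labels, scores):
    """Area under the ROC curve: the probability that a randomly chosen
    positive case scores higher than a randomly chosen negative case
    (ties count as half)."""
    pos = [s for l, s in zip(labels, scores) if l]
    neg = [s for l, s in zip(labels, scores) if not l]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative data: 1 = glaucomatous disk on the gold standard, scores from a test
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.7, 0.4, 0.6, 0.3, 0.1]
area = auc(labels, scores)
```

In the study, the labels come from the expert's stereoscopic disk judgement and the scores from each screening algorithm's abnormality criterion.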
Abstract:
We propose a low-complexity closed-loop spatial multiplexing method with limited feedback over multiple-input multiple-output (MIMO) fading channels. The transmit adaptation is simply performed by selecting transmit antennas (or substreams) by comparing their signal-to-noise ratios to a given threshold with a fixed nonadaptive constellation and fixed transmit power per substream. We analyze the performance of the proposed system by deriving closed-form expressions for spectral efficiency, average transmit power, and bit error rate (BER). Depending on practical system design constraints, the threshold is chosen to maximize the spectral efficiency (or minimize the average BER) subject to average transmit power and average BER (or spectral efficiency) constraints, respectively. We present numerical and Monte Carlo simulation results that validate our analysis. Compared to open-loop spatial multiplexing and other approaches that select the best antenna subset in spatial multiplexing, the numerical results illustrate that the proposed technique obtains significant power gains for the same BER and spectral efficiency. We also provide numerical results that show improvement over rate-adaptive orthogonal space-time block coding, which requires highly complex constellation adaptation. We analyze the impact of feedback delay using analytical and Monte Carlo approaches. The proposed approach is arguably the simplest possible adaptive spatial multiplexing system from an implementation point of view. However, our approach and analysis can be extended to other systems using multiple constellations and power levels.
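The threshold rule at the heart of the scheme can be sketched in a few lines: each substream is activated only when its instantaneous SNR clears the threshold. The fading model (Gaussian-distributed SNR in dB) and all numeric values below are assumptions for illustration, not the paper's channel model.

```python
import random

def select_substreams(snrs_db, threshold_db):
    """Return the indices of substreams whose SNR meets the threshold;
    only these indices need to be fed back to the transmitter."""
    return [i for i, s in enumerate(snrs_db) if s >= threshold_db]

random.seed(0)
n_tx = 4            # transmit antennas / substreams (assumed)
threshold_db = 5.0  # activation threshold (assumed; the paper optimizes this)

# Monte Carlo sketch: average number of active substreams over fading realisations
trials = [
    select_substreams([random.gauss(8.0, 4.0) for _ in range(n_tx)], threshold_db)
    for _ in range(10_000)
]
avg_active = sum(len(t) for t in trials) / len(trials)
```

The paper's contribution is choosing `threshold_db` to maximize spectral efficiency (or minimize BER) under the stated constraints; the selection step itself stays this simple, which is why the feedback load is so small.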
Abstract:
We employ the time-dependent R-matrix (TDRM) method to calculate anisotropy parameters for positive and negative sidebands of selected harmonics generated by two-color two-photon above-threshold ionization of argon. We consider odd harmonics of an 800-nm field ranging from the 13th to 19th harmonic, overlapped by a fundamental 800-nm IR field. The anisotropy parameters obtained using the TDRM method are compared with those obtained using a second-order perturbation theory with a model potential approach and a soft photon approximation approach. Where available, a comparison is also made to published experimental results. All three theoretical approaches provide similar values for anisotropy parameters. The TDRM approach obtains values that are closest to published experimental values. At high photon energies, the differences between each of the theoretical methods become less significant.
Abstract:
The environmental quality of land can be assessed by calculating relevant threshold values, which differentiate between concentrations of elements resulting from geogenic and diffuse anthropogenic sources and concentrations generated by point sources of elements. A simple process allowing the calculation of these typical threshold values (TTVs) was applied across a region of highly complex geology (Northern Ireland) to six elements of interest: arsenic, chromium, copper, lead, nickel and vanadium. Three methods for identifying domains (areas where a readily identifiable factor can be shown to control the concentration of an element) were used: k-means cluster analysis, boxplots and empirical cumulative distribution functions (ECDF). The ECDF method was most efficient at determining areas of both elevated and reduced concentrations and was used to identify domains in this investigation. Two statistical methods for calculating normal background concentrations (NBCs) and upper limits of geochemical baseline variation (ULBLs), currently used in conjunction with legislative regimes in the UK and Finland respectively, were applied within each domain. The NBC methodology was constructed to run within a specific legislative framework, and its use on this soil geochemical data set was influenced by the presence of skewed distributions and outliers. In contrast, the ULBL methodology was found to calculate more appropriate TTVs that were generally more conservative than the NBCs. TTVs indicate what a "typical" concentration of an element would be within a defined geographical area and should be considered alongside the risk that each of the elements poses in these areas to determine potential risk to receptors.
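The ECDF step used for domain identification is straightforward to sketch: the empirical cumulative distribution function of an element's concentrations within a candidate domain gives, for any concentration, the fraction of samples at or below it. The concentration values below are illustrative, not from the survey.

```python
def ecdf(sample):
    """Return the empirical cumulative distribution function of a sample:
    F(x) = (number of observations <= x) / n."""
    xs = sorted(sample)
    n = len(xs)
    def F(x):
        return sum(v <= x for v in xs) / n
    return F

# Illustrative soil concentrations (mg/kg) for one element in one candidate domain
F = ecdf([12.0, 15.0, 18.0, 22.0, 30.0])
fraction_below_20 = F(20.0)
```

Comparing ECDF shapes between candidate domains is what lets areas of elevated or reduced concentration stand out, which is why the abstract reports this method as the most efficient of the three.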