Abstract:
We describe a method for evaluating an ensemble of predictive models given a sample of observations comprising the model predictions and the outcome event measured with error. Our formulation allows us to simultaneously estimate measurement error parameters, true outcome — aka the gold standard — and a relative weighting of the predictive scores. We describe conditions necessary to estimate the gold standard and for these estimates to be calibrated and detail how our approach is related to, but distinct from, standard model combination techniques. We apply our approach to data from a study to evaluate a collection of BRCA1/BRCA2 gene mutation prediction scores. In this example, genotype is measured with error by one or more genetic assays. We estimate true genotype for each individual in the dataset, operating characteristics of the commonly used genotyping procedures and a relative weighting of the scores. Finally, we compare the scores against the gold standard genotype and find that Mendelian scores are, on average, the more refined and better calibrated of those considered and that the comparison is sensitive to measurement error in the gold standard.
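The latent gold-standard idea above can be illustrated with a minimal latent-class EM sketch, assuming a binary true genotype and conditionally independent binary assays; this is an illustration in the spirit of the abstract, not the authors' exact model, and the function and variable names are ours.

```python
import numpy as np

# Minimal sketch (not the authors' model): EM for a binary latent "gold
# standard" observed through several imperfect binary assays, assuming
# conditional independence given the true genotype.  y[i, j] is the result
# of assay j on individual i (0/1, np.nan if missing).
def latent_gold_standard_em(y, n_iter=200, tol=1e-8, eps=1e-6):
    n, m = y.shape
    obs = ~np.isnan(y)
    prev = 0.5                                 # prevalence of true genotype = 1
    sens = np.full(m, 0.9)                     # assay sensitivities (initial guess)
    spec = np.full(m, 0.9)                     # assay specificities (initial guess)
    post = np.full(n, prev)
    for _ in range(n_iter):
        # E-step: posterior P(true genotype = 1 | observed assay results)
        log_p1 = np.full(n, np.log(prev))
        log_p0 = np.full(n, np.log(1 - prev))
        for j in range(m):
            o = obs[:, j]
            yj = y[o, j]
            log_p1[o] += yj * np.log(sens[j]) + (1 - yj) * np.log(1 - sens[j])
            log_p0[o] += yj * np.log(1 - spec[j]) + (1 - yj) * np.log(spec[j])
        new_post = 1.0 / (1.0 + np.exp(log_p0 - log_p1))
        # M-step: update prevalence and assay operating characteristics
        prev = np.clip(new_post.mean(), eps, 1 - eps)
        for j in range(m):
            o = obs[:, j]
            sens[j] = np.clip((new_post[o] * y[o, j]).sum() / new_post[o].sum(), eps, 1 - eps)
            spec[j] = np.clip(((1 - new_post[o]) * (1 - y[o, j])).sum() / (1 - new_post[o]).sum(), eps, 1 - eps)
        if np.max(np.abs(new_post - post)) < tol:
            post = new_post
            break
        post = new_post
    return post, sens, spec, prev
```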
Abstract:
Diabetic nephropathy and end-stage renal failure remain a major cause of mortality among patients with diabetes mellitus (DM). In this study, we evaluated the Clinitek-Microalbumin (CM) screening test strip for the detection of microalbuminuria (MA) in a random morning spot urine in comparison with the quantitative assessment of albuminuria in the timed overnight urine collection ("gold standard"). One hundred thirty-four children, adolescents, and young adults with insulin-dependent DM type 1 were studied at 222 outpatient visits. Because of urinary tract infection and/or haematuria, the data of 13 visits were excluded. In the remaining 209 visits, 165 timed overnight urine collections were obtained (79% sample-per-visit rate). Ten patients (6.1%) presented MA of ≥15 µg/min. By comparison, 200 spot urine samples could be screened (96% sample-per-visit rate), a significant increase in compliance and screening rate (P<.001, McNemar test). Furthermore, on 156 occasions the gold standard and CM could be compared directly. The sensitivity and specificity of CM in the spot urine (cut-off ≥30 mg albumin/l) were 0.89 [95% confidence interval (CI) 0.56-0.99] and 0.73 (CI 0.66-0.80), respectively. The positive and negative predictive values were 0.17 (CI 0.08-0.30) and 0.99 (CI 0.95-1.00), respectively. Considering the CM albumin-to-creatinine ratio, the results were poorer than with the albumin concentration alone. Using CM instead of quantitative assessment of albuminuria is not cost-effective (US$35 versus US$60 per patient per year). In conclusion, to exclude MA, CM used in the random spot urine is reliable and easy to handle, but positive screening results of ≥30 mg albumin/l must be confirmed by analysis of the timed overnight urine collection. Although screening compliance is improved, we cannot recommend CM for MA screening in random morning spot urine in a paediatric diabetic outpatient setting because the specificity is far too low.
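For context, the sensitivity, specificity and predictive values quoted above are simple functions of a 2x2 table; a minimal sketch follows, with illustrative counts chosen only to roughly match the reported figures (the study's actual table is not given in the abstract).

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion k/n."""
    if n == 0:
        return (float("nan"), float("nan"))
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (max(0.0, centre - half), min(1.0, centre + half))

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV with Wilson 95% CIs."""
    return {
        "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
        "ppv": (tp / (tp + fp), wilson_ci(tp, tp + fp)),
        "npv": (tn / (tn + fn), wilson_ci(tn, tn + fn)),
    }

# Illustrative counts only; the study's actual 2x2 table is not reported here.
print(diagnostic_metrics(tp=8, fp=40, fn=1, tn=107))
```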
Abstract:
There is no accepted way of measuring prothrombin time without delay in patients undergoing major surgery who are at risk of intraoperative dilutional and consumption coagulopathy due to bleeding and volume replacement with crystalloids or colloids. Decisions to administer fresh frozen plasma and procoagulatory drugs have to rely on clinical judgment in these situations. Point-of-care devices are considerably faster than the standard laboratory methods. In this study we assessed the accuracy of a point-of-care (PoC) device measuring prothrombin time compared with the standard laboratory method. Patients undergoing major surgery and intensive care unit patients were included. PoC prothrombin time was measured with the CoaguChek XS Plus (Roche Diagnostics, Switzerland). PoC and reference tests were performed independently and interpreted under blinded conditions. Using a cut-off prothrombin time of 50%, we calculated diagnostic accuracy measures, plotted a receiver operating characteristic (ROC) curve and tested for equivalence between the two methods. PoC sensitivity and specificity were 95% (95% CI 77%, 100%) and 95% (95% CI 91%, 98%), respectively. The negative likelihood ratio was 0.05 (95% CI 0.01, 0.32). The positive likelihood ratio was 19.57 (95% CI 10.62, 36.06). The area under the ROC curve was 0.988. Equivalence between the two methods was confirmed. The CoaguChek XS Plus is a rapid and highly accurate test compared with the reference test. These findings suggest that PoC testing will be useful for monitoring intraoperative prothrombin time when coagulopathy is suspected. It could lead to a more rational use of expensive and limited blood bank resources.
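The likelihood ratios and ROC area quoted above follow directly from the operating point and the paired scores; a minimal sketch, using the rounded 0.95/0.95 operating point rather than the study's exact counts (which are not given in the abstract):

```python
import numpy as np

def likelihood_ratios(sens, spec):
    """Positive and negative likelihood ratios from sensitivity and specificity."""
    return sens / (1 - spec), (1 - sens) / spec

def auc_mann_whitney(scores_pos, scores_neg):
    """ROC area via the Mann-Whitney statistic: probability that a random
    positive case scores higher than a random negative case (ties count half)."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Rounded operating point; the exact counts behind 19.57 and 0.05 are not reported here.
print(likelihood_ratios(0.95, 0.95))   # approximately (19.0, 0.053)
```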
Abstract:
QUESTION UNDER STUDY: The purpose was to prospectively validate the accuracy and reliability of automated oscillometric ankle-brachial index (ABI) measurement against the current gold standard of Doppler-assisted ABI determination. METHODS: Oscillometric ABI was measured in 50 consecutive patients with peripheral arterial disease (n = 100 limbs, mean age 65 ± 6 years, 31 men, 19 diabetics) after both high and low ABI had been determined conventionally by Doppler under standardised conditions. Correlation was assessed by linear regression and the Pearson product-moment correlation. The degree of inter-modality agreement was quantified using the Bland and Altman method. RESULTS: Oscillometry was performed significantly faster than Doppler-assisted ABI (3.9 ± 1.3 vs 11.4 ± 3.8 minutes, P <0.001). Mean readings were 0.62 ± 0.25, 0.70 ± 0.22 and 0.63 ± 0.39 for low, high and oscillometric ABI, respectively. Correlation between oscillometry and Doppler ABI was good overall (r = 0.76 for both low and high ABI) and excellent in oligo-symptomatic, non-diabetic patients (r = 0.81; 0.07 ± 0.23); it was, however, limited in diabetic patients and in patients with critical limb ischaemia. In general, oscillometric ABI readings were slightly higher (+0.06), but linear regression analysis showed that the correlation was sustained over the whole range of measurements. CONCLUSIONS: Results of automated oscillometric ABI determination correlated well with Doppler-assisted measurements and could be obtained in a shorter time. Agreement was particularly high in oligo-symptomatic non-diabetic patients.
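The Bland and Altman agreement analysis mentioned above reduces to the bias and 95% limits of agreement of the paired differences; a minimal sketch with invented paired readings:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired ABI readings (oscillometric vs Doppler), for illustration only:
osc = [0.65, 0.80, 0.55, 0.92, 0.70]
doppler = [0.60, 0.75, 0.52, 0.90, 0.62]
bias, loa = bland_altman(osc, doppler)
print(f"bias = {bias:+.2f}, limits of agreement = {loa[0]:.2f} to {loa[1]:.2f}")
```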
Abstract:
NAFLD (non-alcoholic fatty liver disease) and NASH (non-alcoholic steatohepatitis) are of increasing importance, both in connection with insulin resistance and with the development of liver cirrhosis. Histological samples are still the 'gold standard' for diagnosis; however, because of the risks of a liver biopsy, non-invasive methods are needed. MAS (magic angle spinning) is a special type of NMR which allows characterization of intact excised tissue without need for additional extraction steps. Because clinical MRI (magnetic resonance imaging) and MRS (magnetic resonance spectroscopy) are based on the same physical principle as NMR, translational research is feasible from excised tissue to non-invasive examinations in humans. In the present issue of Clinical Science, Cobbold and co-workers report a study in three animal strains suffering from different degrees of NAFLD showing that MAS results are able to distinguish controls, fatty infiltration and steatohepatitis in cohorts. In vivo MRS methods in humans are not obtainable at the same spectral resolution; however, know-how from MAS studies may help to identify characteristic changes in crowded regions of the magnetic resonance spectrum.
Abstract:
OBJECTIVE: In ictal scalp electroencephalography (EEG), the presence of artefacts and the wide-ranging patterns of discharges are hurdles to good diagnostic accuracy. Quantitative EEG aids the lateralization and/or localization of epileptiform activity. METHODS: Twelve patients achieving Engel Class I/IIa outcome at 1 year following temporal lobe surgery were selected, with approximately 1-3 ictal EEGs analyzed per patient. The EEG signals were denoised with the discrete wavelet transform (DWT), followed by computation of the normalized absolute slopes and spatial interpolation of the scalp topography with detection of local maxima. For localization, the region with the highest normalized absolute slopes at the time when epileptiform activities were registered (>2.5 times the standard deviation) was designated as the region of onset. For lateralization, the cerebral hemisphere registering the first appearance of normalized absolute slopes >2.5 times the standard deviation was designated as the side of onset. As a comparison, all EEG episodes were reviewed by two neurologists blinded to clinical information to determine the localization and lateralization of seizure onset by visual analysis. RESULTS: 16/25 seizures (64%) were correctly localized by the visual method and 21/25 seizures (84%) by the quantitative EEG method. 12/25 seizures (48%) were correctly lateralized by the visual method and 23/25 seizures (92%) by the quantitative EEG method. The McNemar test gave p=0.15 for localization and p=0.0026 for lateralization when comparing the two methods. CONCLUSIONS: The quantitative EEG method yielded significantly more correctly lateralized seizure episodes, and there was a trend towards more correctly localized seizures. SIGNIFICANCE: Coupling DWT with the absolute slope method helps clinicians achieve better EEG diagnostic accuracy.
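A minimal sketch of the two computational ingredients named above: generic DWT soft-threshold denoising (the paper's exact wavelet and threshold settings are not stated in the abstract) and an exact McNemar test on hypothetical discordant-pair counts.

```python
import numpy as np
import pywt
from scipy.stats import binomtest

def dwt_denoise(x, wavelet="db4", level=4):
    """Soft-threshold DWT denoising with the universal threshold, a common
    generic choice; not necessarily the settings used in the study."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from finest level
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def mcnemar_exact(b, c):
    """Exact (binomial) McNemar test for paired binary outcomes.
    b = pairs where only method A was correct, c = pairs where only method B was correct."""
    return binomtest(b, b + c, 0.5).pvalue

# Hypothetical discordant counts, for illustration only (not the study's data):
print(mcnemar_exact(b=1, c=12))
```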
Abstract:
This study describes the development and validation of a gas chromatography-mass spectrometry (GC-MS) method to identify and quantitate phenytoin in brain microdialysate, saliva and blood from human samples. A solid-phase extraction (SPE) was performed with a nonpolar C8-SCX column. The eluate was evaporated with nitrogen (50°C) and derivatized with trimethylsulfonium hydroxide before GC-MS analysis. As the internal standard, 5-(p-methylphenyl)-5-phenylhydantoin was used. The MS was run in scan mode and the identification was made with three ion fragment masses. All peaks were identified with MassLib. Spiked phenytoin samples showed recovery after SPE of ≥94%. The calibration curve (phenytoin 50 to 1,200 ng/mL, n = 6, at six concentration levels) showed good linearity and correlation (r² > 0.998). The limit of detection was 15 ng/mL; the limit of quantification was 50 ng/mL. Dried extracted samples were stable within a 15% deviation range for ≥4 weeks at room temperature. The method met International Organization for Standardization standards and was able to detect and quantify phenytoin in different biological matrices and patient samples. The GC-MS method with SPE is specific, sensitive, robust and well reproducible, and is therefore an appropriate candidate for the pharmacokinetic assessment of phenytoin concentrations in different human biological samples.
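A minimal sketch of the calibration step described above, fitting a straight line to analyte/internal-standard response ratios and back-calculating an unknown; the concentrations and ratios shown are invented for illustration.

```python
import numpy as np

# Illustrative calibration points over the validated 50-1200 ng/mL range
# (invented response ratios, not the paper's data).
conc = np.array([50, 200, 400, 600, 900, 1200], dtype=float)   # ng/mL
ratio = np.array([0.11, 0.42, 0.85, 1.27, 1.90, 2.55])         # analyte / internal-standard area ratio

slope, intercept = np.polyfit(conc, ratio, 1)                  # linear calibration fit
pred = slope * conc + intercept
r2 = 1 - np.sum((ratio - pred) ** 2) / np.sum((ratio - ratio.mean()) ** 2)

def back_calculate(sample_ratio):
    """Concentration of an unknown from its analyte/internal-standard response ratio."""
    return (sample_ratio - intercept) / slope

print(f"r^2 = {r2:.4f}, unknown ~ {back_calculate(0.95):.0f} ng/mL")
```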
Abstract:
Background: Finite element models of augmented vertebral bodies require a realistic modelling of the cement-infiltrated region. Most methods published so far used idealized cement shapes or oversimplified material models for the augmented region. In this study, an improved, anatomy-specific, homogenized finite element method was developed and validated to predict the apparent as well as the local mechanical behavior of augmented vertebral bodies. Methods: Forty-nine human vertebral body sections were prepared by removing the cortical endplates and scanned with high-resolution peripheral quantitative CT before and after injection of a standard and a low-modulus bone cement. Forty-one specimens were tested in compression to measure stiffness, strength and contact pressure distributions between specimens and loading plates. From the remaining eight, fourteen cylindrical specimens were extracted from the augmented region and tested in compression to obtain material properties. Anatomy-specific finite element models were generated from the CT data. The models featured element-specific, density-fabric-based material properties, damage accumulation, real cement distributions and experimentally determined material properties for the augmented region. Apparent stiffness and strength as well as contact pressure distributions at the loading plates were compared between simulations and experiments. Findings: The finite element models were able to predict apparent stiffness (R² > 0.86) and apparent strength (R² > 0.92) very well. The numerically obtained pressure distributions were also in reasonable quantitative (R² > 0.48) and qualitative agreement with the experiments. Interpretation: The proposed finite element models have proven to be an accurate tool for studying the apparent as well as the local mechanical behavior of augmented vertebral bodies.
Abstract:
PURPOSE: Positron emission tomography (PET)/computed tomography (CT) measurements on small lesions are impaired by the partial volume effect, which is intrinsically tied to the point spread function of the actual imaging system, including the reconstruction algorithms. The variability resulting from different point spread functions hinders the assessment of quantitative measurements in clinical routine and especially degrades comparability within multicenter trials. To improve quantitative comparability there is a need for methods to match different PET/CT systems through elimination of this systemic variability. Consequently, a new method was developed and tested that transforms the image of an object as produced by one tomograph into another image of the same object as it would have been seen by a different tomograph. The proposed new method, termed Transconvolution, compensates for differing imaging properties of different tomographs and particularly aims at quantitative comparability of PET/CT in the context of multicenter trials. METHODS: To solve the problem of image normalization, the theory of Transconvolution was mathematically established together with new methods to handle point spread functions of different PET/CT systems. Knowing the point spread functions of two different imaging systems allows determining a Transconvolution function to convert one image into the other. This function is calculated by convolving one point spread function with the inverse of the other point spread function, which, when adhering to certain boundary conditions such as the use of linear acquisition and image reconstruction methods, is a numerically accessible operation. For reliable measurement of such point spread functions characterizing different PET/CT systems, a dedicated solid-state phantom incorporating (68)Ge/(68)Ga filled spheres was developed. To iteratively determine and represent such point spread functions, exponential density functions in combination with a Gaussian distribution were introduced. Furthermore, simulation of a virtual PET system provided a standard imaging system with clearly defined properties to which the real PET systems were to be matched. A Hann window served as the modulation transfer function for the virtual PET. The Hann window's apodization properties suppressed high spatial frequencies above a certain critical frequency, thereby fulfilling the above-mentioned boundary conditions. The determined point spread functions were subsequently used by the novel Transconvolution algorithm to match different PET/CT systems onto the virtual PET system. Finally, the theoretically elaborated Transconvolution method was validated by transforming phantom images acquired on two different PET systems into nearly identical data sets, as they would be imaged by the virtual PET system. RESULTS: The proposed Transconvolution method matched different PET/CT systems for an improved and reproducible determination of a normalized activity concentration. The highest difference in measured activity concentration between the two different PET systems, 18.2%, was found in spheres of 2 ml volume. Transconvolution reduced this difference to 1.6%. In addition to reestablishing comparability, the new method, with its parameterization of point spread functions, allowed a full characterization of the imaging properties of the examined tomographs.
CONCLUSIONS: By matching different tomographs to a virtual standardized imaging system, Transconvolution opens a new comprehensive method for cross calibration in quantitative PET imaging. The use of a virtual PET system restores comparability between data sets from different PET systems by exerting a common, reproducible, and defined partial volume effect.
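A one-dimensional sketch of the transconvolution idea, dividing out the source system's MTF and imposing a Hann-window MTF for a virtual system; the PSF width and cut-off frequency are illustrative, not the calibrated values from the study.

```python
import numpy as np

def gaussian_psf(n, sigma):
    """Normalized 1-D Gaussian point spread function of length n."""
    x = np.arange(n) - n // 2
    p = np.exp(-0.5 * (x / sigma) ** 2)
    return p / p.sum()

def transconvolve(image, source_psf, target_mtf, eps=1e-6):
    """Divide out the source MTF and impose the target MTF in Fourier space."""
    src_mtf = np.abs(np.fft.rfft(np.fft.ifftshift(source_psf)))
    spec = np.fft.rfft(image)
    spec *= target_mtf / np.maximum(src_mtf, eps)
    return np.fft.irfft(spec, n=len(image))

n = 256
image = np.zeros(n)
image[100:130] = 1.0                                       # idealized "sphere" profile
blurred = np.convolve(image, gaussian_psf(n, sigma=4.0), mode="same")

freqs = np.fft.rfftfreq(n)
cutoff = 0.15                                              # critical frequency (illustrative)
hann_mtf = np.where(freqs < cutoff, 0.5 * (1 + np.cos(np.pi * freqs / cutoff)), 0.0)

virtual_image = transconvolve(blurred, gaussian_psf(n, sigma=4.0), hann_mtf)
```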
Abstract:
Phosphorus (P) is an essential macronutrient for all living organisms. Phosphorus is often present in nature as the soluble phosphate ion PO₄³⁻ and has biological, terrestrial, and marine emission sources. Thus, PO₄³⁻ detected in ice cores has the potential to be an important tracer for biological activity in the past. In this study a continuous and highly sensitive absorption method for detection of dissolved reactive phosphorus (DRP) in ice cores has been developed using a molybdate reagent and a 2-m liquid waveguide capillary cell (LWCC). DRP is the soluble form of the nutrient phosphorus, which reacts with molybdate. The method was optimized to meet the low concentrations of DRP in Greenland ice, with a depth resolution of approximately 2 cm and an analytical uncertainty of 1.1 nM (0.1 ppb) PO₄³⁻. The method has been applied to segments of a shallow firn core from Northeast Greenland, indicating a mean concentration level of 2.74 nM (0.26 ppb) PO₄³⁻ for the period 1930–2005 with a standard deviation of 1.37 nM (0.13 ppb) PO₄³⁻ and values reaching as high as 10.52 nM (1 ppb) PO₄³⁻. Similar levels were detected for the period 1771–1823. Based on impurity abundances, dust and biogenic particles were found to be the most likely sources of DRP deposited in Northeast Greenland.
Abstract:
Environmental data sets of pollutant concentrations in air, water, and soil frequently include unquantified sample values reported only as being below the analytical method detection limit. These values, referred to as censored values, should be considered in the estimation of distribution parameters, as each represents some value of pollutant concentration between zero and the detection limit. Most of the currently accepted methods for estimating the population parameters of environmental data sets containing censored values rely upon the assumption of an underlying normal (or transformed normal) distribution. This assumption can result in unacceptable levels of error in parameter estimation due to the unbounded left tail of the normal distribution. With the beta distribution, which is bounded like a distribution of concentrations, over the range 0 ≤ x ≤ 1, parameter estimation errors resulting from improper distribution bounds are avoided. This work developed a method that uses the beta distribution to estimate population parameters from censored environmental data sets and evaluated its performance in comparison to currently accepted methods that rely upon an underlying normal (or transformed normal) distribution. Data sets were generated assuming typical values encountered in environmental pollutant evaluation for the mean, standard deviation, and number of variates. For each set of model values, data sets were generated assuming that the data were distributed normally, lognormally, or according to a beta distribution. For varying levels of censoring, two established methods of parameter estimation, regression on normal order statistics and regression on lognormal order statistics, were used to estimate the known mean and standard deviation of each data set. The method developed for this study, employing a beta distribution assumption, was also used to estimate parameters, and the relative accuracy of all three methods was compared. For data sets of all three distribution types, and for censoring levels up to 50%, the performance of the new method equaled, if not exceeded, the performance of the two established methods. Because of its robustness in parameter estimation regardless of distribution type or censoring level, the method employing the beta distribution should be considered for full development in estimating parameters for censored environmental data sets.
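A minimal sketch of the underlying idea: a maximum-likelihood fit of a beta distribution to left-censored data in which non-detects contribute the CDF at the detection limit. The study's own estimator may differ in detail; the data below are simulated for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta

def censored_beta_mle(observed, n_censored, detection_limit):
    """MLE of beta shape parameters from left-censored data scaled to [0, 1]."""
    def neg_loglik(params):
        a, b = np.exp(params)                        # keep shape parameters positive
        ll = beta.logpdf(observed, a, b).sum()       # detected values: density terms
        ll += n_censored * beta.logcdf(detection_limit, a, b)  # non-detects: F(DL) terms
        return -ll
    res = minimize(neg_loglik, x0=np.log([2.0, 5.0]), method="Nelder-Mead")
    return np.exp(res.x)                             # fitted (alpha, beta)

# Simulated example: true beta(2, 8) concentrations censored below DL = 0.05.
rng = np.random.default_rng(0)
sample = rng.beta(2.0, 8.0, size=200)
dl = 0.05
observed = sample[sample >= dl]
a_hat, b_hat = censored_beta_mle(observed, n_censored=(sample < dl).sum(), detection_limit=dl)
mean_hat = a_hat / (a_hat + b_hat)                   # recovered population mean
```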
Abstract:
An easily implemented extension of the standard response method of tidal analysis is outlined. The modification improves the extraction of both the steady and the tidal components from problematic time series by calculating tidal response weights uncontaminated by missing or anomalous data. Examples of time series containing data gaps and anomalous events are analyzed to demonstrate the applicability and advantage of the proposed method.
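A simplified sketch of the idea: estimate a steady component and response weights by regressing the observed series on lagged copies of a reference series while skipping missing or anomalous samples, so the weights are not contaminated by bad data. Function and variable names are ours.

```python
import numpy as np

def response_weights(obs, reference, lags):
    """Least-squares response weights; obs may contain NaNs (gaps or flagged anomalies)."""
    n = len(obs)
    cols = []
    for lag in lags:                                   # build lagged design matrix
        col = np.full(n, np.nan)
        if lag >= 0:
            col[lag:] = reference[: n - lag]
        else:
            col[:lag] = reference[-lag:]
        cols.append(col)
    X = np.column_stack([np.ones(n)] + cols)           # steady component + lagged reference
    valid = ~np.isnan(obs) & ~np.isnan(X).any(axis=1)  # drop gaps and anomalies
    w, *_ = np.linalg.lstsq(X[valid], obs[valid], rcond=None)
    steady, tidal_weights = w[0], w[1:]
    return steady, tidal_weights, valid
```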
Abstract:
Due to the ongoing trend towards increased product variety, fast-moving consumer goods such as food and beverages, pharmaceuticals, and chemicals are typically manufactured through so-called make-and-pack processes. These processes consist of a make stage, a pack stage, and intermediate storage facilities that decouple these two stages. In operations scheduling, complex technological constraints must be considered, e.g., non-identical parallel processing units, sequence-dependent changeovers, batch splitting, no-wait restrictions, material transfer times, minimum storage times, and finite storage capacity. The short-term scheduling problem is to compute a production schedule such that a given demand for products is fulfilled, all technological constraints are met, and the production makespan is minimised. A production schedule typically comprises 500–1500 operations. Due to the problem size and complexity of the technological constraints, the performance of known mixed-integer linear programming (MILP) formulations and heuristic approaches is often insufficient. We present a hybrid method consisting of three phases. First, the set of operations is divided into several subsets. Second, these subsets are iteratively scheduled using a generic and flexible MILP formulation. Third, a novel critical path-based improvement procedure is applied to the resulting schedule. We develop several strategies for the integration of the MILP model into this heuristic framework. Using these strategies, high-quality feasible solutions to large-scale instances can be obtained within reasonable CPU times using standard optimisation software. We have applied the proposed hybrid method to a set of industrial problem instances and found that the method outperforms state-of-the-art methods.
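A deliberately tiny MILP sketch of the kind of sequencing subproblem the hybrid method schedules subset by subset: a single unit with sequence-dependent changeovers and a makespan objective, written with PuLP. The data are invented and the model is far simpler than the paper's formulation.

```python
from itertools import combinations
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, value

proc = {"A": 4, "B": 3, "C": 5}                                  # processing times (invented)
chg = {(i, j): 1 if i != j else 0 for i in proc for j in proc}   # sequence-dependent changeovers
M = sum(proc.values()) + sum(chg.values())                       # big-M constant

prob = LpProblem("toy_make_and_pack", LpMinimize)
start = {i: LpVariable(f"start_{i}", lowBound=0) for i in proc}
cmax = LpVariable("makespan", lowBound=0)
prob += cmax                                                     # objective: minimise makespan

for i, j in combinations(proc, 2):                               # disjunctive sequencing constraints
    y = LpVariable(f"before_{i}_{j}", cat=LpBinary)              # 1 if i precedes j on the unit
    prob += start[i] + proc[i] + chg[i, j] <= start[j] + M * (1 - y)
    prob += start[j] + proc[j] + chg[j, i] <= start[i] + M * y

for i in proc:
    prob += cmax >= start[i] + proc[i]

prob.solve()
print({i: value(start[i]) for i in proc}, "makespan =", value(cmax))
```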
Abstract:
A new online method to analyse water isotopes of speleothem fluid inclusions using a wavelength-scanned cavity ring-down spectroscopy (WS-CRDS) instrument is presented. This novel technique allows us to measure hydrogen and oxygen isotopes simultaneously for a released aliquot of water. To do so, we designed a new, simple line that allows the online water extraction and isotope analysis of speleothem samples. The specificity of the method lies in the fact that fluid inclusion release takes place on a standard water background, which mainly improves the δD robustness. To saturate the line, a peristaltic pump continuously injects standard water into the line, which is permanently heated to 140 °C and flushed with dry nitrogen gas. This permits instantaneous and complete vaporisation of the standard water, resulting in an artificial water background with well-known δD and δ18O values. The speleothem sample is placed in a copper tube attached to the line and, after system stabilisation, it is crushed using a simple hydraulic device to liberate the water from the speleothem fluid inclusions. The released water is carried by the nitrogen/standard water gas stream directly to a Picarro L1102-i for isotope determination. To test the accuracy and reproducibility of the line and to measure standard water during speleothem measurements, a syringe injection unit was added to the line. Peak evaluation is done similarly to gas chromatography to obtain the δD and δ18O isotopic compositions of the measured water aliquots. Precision is better than 1.5 ‰ for δD and 0.4 ‰ for δ18O for water measurements over an extended range (−210 to 0 ‰ for δD and −27 to 0 ‰ for δ18O), depending primarily on the amount of water released from the speleothem fluid inclusions and secondarily on the isotopic composition of the sample. The results show that WS-CRDS technology is suitable for speleothem fluid inclusion measurements and gives results that are comparable to the isotope ratio mass spectrometry (IRMS) technique.
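A minimal sketch of the GC-style peak evaluation mentioned above: baseline-correct against the constant standard-water background and integrate the crushing transient. The signal values and timings are invented for illustration.

```python
import numpy as np

def peak_area(t, signal, baseline_window=20):
    """Baseline-corrected trapezoidal peak area.

    The baseline is estimated from the first `baseline_window` points, where
    only the continuously injected standard water contributes to the signal."""
    baseline = np.mean(signal[:baseline_window])
    return np.trapz(signal - baseline, t), baseline

t = np.linspace(0, 120, 600)                        # seconds (invented)
background = 2.0                                    # constant standard-water level (arbitrary units)
peak = 1.5 * np.exp(-0.5 * ((t - 60) / 6.0) ** 2)   # transient from water released on crushing
area, baseline = peak_area(t, background + peak)
```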
Abstract:
In this study, the development of a new sensitive method for the analysis of the alpha-dicarbonyls glyoxal (G) and methylglyoxal (MG) in environmental ice and snow is presented. Stir bar sorptive extraction with in situ derivatization and liquid desorption (SBSE-LD) was used for sample extraction, enrichment, and derivatization. Measurements were carried out using high-performance liquid chromatography coupled to electrospray ionization tandem mass spectrometry (HPLC-ESI-MS/MS). As part of the method development, SBSE-LD parameters such as extraction time, derivatization reagent, desorption time and solvent, and the effect of NaCl addition on the SBSE efficiency, as well as the HPLC-ESI-MS/MS measurement parameters, were evaluated. Calibration was performed in the range of 1–60 ng/mL using spiked ultrapure water samples, thus incorporating the complete SBSE and derivatization process. 4-Fluorobenzaldehyde was applied as the internal standard. Inter-batch precision was <12 % RSD. Recoveries were determined by means of spiked snow samples and were 78.9 ± 5.6 % for G and 82.7 ± 7.5 % for MG. Instrumental detection limits of 0.242 and 0.213 ng/mL for G and MG, respectively, were achieved using the multiple reaction monitoring mode. Relative detection limits, referred to a sample volume of 15 mL, were 0.016 ng/mL for G and 0.014 ng/mL for MG. The optimized method was applied to the analysis of snow samples from Mount Hohenpeissenberg (close to the Meteorological Observatory Hohenpeissenberg, Germany) and samples from an ice core from Upper Grenzgletscher (Monte Rosa massif, Switzerland). Resulting concentrations were 0.085–16.3 ng/mL for G and 0.126–3.6 ng/mL for MG. Concentrations of G and MG in snow were 1–2 orders of magnitude higher than in ice core samples. The described method represents a simple, green, and sensitive analytical approach for measuring G and MG in aqueous environmental samples.
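A small sketch of the spike-recovery and precision figures of merit quoted above; the spiked and measured values are invented for illustration.

```python
import numpy as np

def recovery_stats(measured, spiked):
    """Percent recovery (mean, sd) and relative standard deviation in percent."""
    rec = 100.0 * np.asarray(measured, float) / np.asarray(spiked, float)
    return rec.mean(), rec.std(ddof=1), 100.0 * rec.std(ddof=1) / rec.mean()

measured = [7.8, 8.1, 7.6, 8.3, 7.9]   # ng/mL found in spiked snow (invented)
spiked = [10.0] * 5                    # ng/mL added (invented)
mean_rec, sd_rec, rsd = recovery_stats(measured, spiked)
print(f"recovery = {mean_rec:.1f} +/- {sd_rec:.1f} %, RSD = {rsd:.1f} %")
```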