31 results for Standard method
Abstract:
QUESTION UNDER STUDY: The purpose was to prospectively validate the accuracy and reliability of automated oscillometric ankle-brachial index (ABI) measurement against the current gold standard of Doppler-assisted ABI determination. METHODS: Oscillometric ABI was measured in 50 consecutive patients with peripheral arterial disease (n = 100 limbs, mean age 65 +/- 6 years, 31 men, 19 diabetics) after both high and low ABI had been determined conventionally by Doppler under standardised conditions. Correlation was assessed by linear regression and the Pearson product-moment correlation. The degree of inter-modality agreement was quantified using the Bland-Altman method. RESULTS: Oscillometry was performed significantly faster than Doppler-assisted ABI (3.9 +/- 1.3 vs 11.4 +/- 3.8 minutes, P < 0.001). Mean readings were 0.62 +/- 0.25, 0.70 +/- 0.22 and 0.63 +/- 0.39 for low, high and oscillometric ABI, respectively. Correlation between oscillometry and Doppler ABI was good overall (r = 0.76 for both low and high ABI) and excellent in oligo-symptomatic, non-diabetic patients (r = 0.81; 0.07 +/- 0.23); it was, however, limited in diabetic patients and in patients with critical limb ischaemia. In general, oscillometric ABI readings were slightly higher (+0.06), but linear regression analysis showed that correlation was sustained over the whole range of measurements. CONCLUSIONS: Automated oscillometric ABI determination correlated well with Doppler-assisted measurements and could be obtained in a shorter time. Agreement was particularly high in oligo-symptomatic non-diabetic patients.
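The Bland-Altman analysis used above boils down to the mean of the paired differences (the bias) and its 95% limits of agreement. A minimal sketch, with invented ABI readings rather than the study's data:

```python
from statistics import mean, stdev

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement for paired readings."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired ABI readings (illustrative only, not the study's data)
oscillometric = [0.55, 0.72, 0.61, 0.80, 0.66]
doppler = [0.50, 0.70, 0.58, 0.71, 0.62]
bias, (lo, hi) = bland_altman(oscillometric, doppler)
```

A systematic offset such as the +0.06 reported above would show up as the bias; the limits of agreement quantify how far individual paired readings scatter around it.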
Abstract:
NAFLD (non-alcoholic fatty liver disease) and NASH (non-alcoholic steatohepatitis) are of increasing importance, both in connection with insulin resistance and with the development of liver cirrhosis. Histological samples are still the 'gold standard' for diagnosis; however, because of the risks of a liver biopsy, non-invasive methods are needed. MAS (magic angle spinning) is a special type of NMR that allows characterization of intact excised tissue without the need for additional extraction steps. Because clinical MRI (magnetic resonance imaging) and MRS (magnetic resonance spectroscopy) are based on the same physical principle as NMR, translational research is feasible from excised tissue to non-invasive examinations in humans. In the present issue of Clinical Science, Cobbold and co-workers report a study in three animal strains with different degrees of NAFLD, showing that MAS is able to distinguish controls, fatty infiltration and steatohepatitis at the cohort level. In vivo MRS methods in humans cannot attain the same spectral resolution; however, know-how from MAS studies may help to identify characteristic changes in crowded regions of the magnetic resonance spectrum.
Abstract:
OBJECTIVE: In ictal scalp electroencephalography (EEG), the presence of artefacts and the wide-ranging patterns of discharges are hurdles to good diagnostic accuracy. Quantitative EEG aids the lateralization and/or localization of epileptiform activity. METHODS: Twelve patients who achieved an Engel Class I/IIa outcome one year after temporal lobe surgery were selected, with approximately 1-3 ictal EEGs analyzed per patient. The EEG signals were denoised with the discrete wavelet transform (DWT), followed by computation of the normalized absolute slopes and spatial interpolation of the scalp topography with detection of local maxima. For localization, the region with the highest normalized absolute slopes at the time when epileptiform activities were registered (>2.5 times the standard deviation) was designated as the region of onset. For lateralization, the cerebral hemisphere registering the first appearance of normalized absolute slopes >2.5 times the standard deviation was designated as the side of onset. As a comparison, all EEG episodes were reviewed by two neurologists blinded to clinical information to determine the localization and lateralization of seizure onset by visual analysis. RESULTS: 16/25 seizures (64%) were correctly localized by the visual method and 21/25 seizures (84%) by the quantitative EEG method. 12/25 seizures (48%) were correctly lateralized by the visual method and 23/25 seizures (92%) by the quantitative EEG method. The McNemar test gave p = 0.15 for localization and p = 0.0026 for lateralization when comparing the two methods. CONCLUSIONS: The quantitative EEG method correctly lateralized significantly more seizure episodes, and there was a trend towards more correctly localized seizures. SIGNIFICANCE: Coupling the DWT with the absolute slope method helps clinicians achieve better EEG diagnostic accuracy.
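The >2.5-standard-deviation slope criterion described above can be sketched as a simple threshold detector. This is an illustrative reconstruction on synthetic data, not the authors' pipeline, and it omits the DWT denoising and spatial interpolation steps:

```python
from statistics import mean, stdev

def abs_slopes(signal):
    """Absolute first differences as a crude slope estimate."""
    return [abs(b - a) for a, b in zip(signal, signal[1:])]

def first_crossing(signal, baseline_len, k=2.5):
    """Index of the first slope exceeding mean + k*SD of a baseline period."""
    slopes = abs_slopes(signal)
    base = slopes[:baseline_len]
    threshold = mean(base) + k * stdev(base)
    for i in range(baseline_len, len(slopes)):
        if slopes[i] > threshold:
            return i
    return None

# Synthetic trace: low-amplitude background, then an abrupt high-slope event
trace = [0.0, 0.1, 0.0, 0.1, 0.0, 0.1, 0.0, 0.1, 5.0]
onset = first_crossing(trace, baseline_len=6)
```

Running the same detector per channel and taking the earliest crossing mirrors the lateralization rule above (first hemisphere to exceed the threshold).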
Abstract:
This study describes the development and validation of a gas chromatography-mass spectrometry (GC-MS) method to identify and quantify phenytoin in brain microdialysate, saliva and blood from human samples. Solid-phase extraction (SPE) was performed with a nonpolar C8-SCX column. The eluate was evaporated with nitrogen (50°C) and derivatized with trimethylsulfonium hydroxide before GC-MS analysis. 5-(p-Methylphenyl)-5-phenylhydantoin was used as the internal standard. The MS was run in scan mode and identification was based on three ion fragment masses. All peaks were identified with MassLib. Spiked phenytoin samples showed a recovery after SPE of ≥94%. The calibration curve (phenytoin 50 to 1,200 ng/mL, n = 6, at six concentration levels) showed good linearity and correlation (r² > 0.998). The limit of detection was 15 ng/mL; the limit of quantification was 50 ng/mL. Dried extracted samples were stable within a 15% deviation range for ≥4 weeks at room temperature. The method met International Organization for Standardization standards and was able to detect and quantify phenytoin in different biological matrices and patient samples. The GC-MS method with SPE is specific, sensitive, robust and reproducible, and is therefore an appropriate candidate for the pharmacokinetic assessment of phenytoin concentrations in different human biological samples.
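The linearity criterion above (r² > 0.998 over six calibration levels) corresponds to an ordinary least-squares fit of detector response against spiked concentration. A sketch with hypothetical response values (the real responses are not given in the abstract):

```python
def linfit(xs, ys):
    """Least-squares slope, intercept and coefficient of determination r^2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    slope = sxy / sxx
    intercept = my - slope * mx
    r2 = sxy ** 2 / (sxx * syy)
    return slope, intercept, r2

# Six calibration levels spanning 50-1200 ng/mL, as in the abstract;
# the peak-area ratios are invented for illustration.
conc = [50, 100, 200, 400, 800, 1200]
resp = [0.051, 0.099, 0.202, 0.401, 0.799, 1.203]
slope, intercept, r2 = linfit(conc, resp)
```

A calibration passing the stated acceptance criterion would show r² above 0.998, as this synthetic set does.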
Abstract:
Background: Finite element models of augmented vertebral bodies require realistic modelling of the cement-infiltrated region. Most methods published so far used idealized cement shapes or oversimplified material models for the augmented region. In this study, an improved, anatomy-specific, homogenized finite element method was developed and validated to predict the apparent as well as the local mechanical behavior of augmented vertebral bodies. Methods: Forty-nine human vertebral body sections were prepared by removing the cortical endplates and scanned with high-resolution peripheral quantitative CT before and after injection of a standard and a low-modulus bone cement. Forty-one specimens were tested in compression to measure stiffness, strength and the contact pressure distributions between specimens and loading plates. From the remaining eight, fourteen cylindrical specimens were extracted from the augmented region and tested in compression to obtain material properties. Anatomy-specific finite element models were generated from the CT data. The models featured element-specific, density-fabric-based material properties, damage accumulation, real cement distributions and experimentally determined material properties for the augmented region. Apparent stiffness and strength as well as contact pressure distributions at the loading plates were compared between simulations and experiments. Findings: The finite element models predicted apparent stiffness (R² > 0.86) and apparent strength (R² > 0.92) very well. The numerically obtained pressure distributions were also in reasonable quantitative (R² > 0.48) and qualitative agreement with the experiments. Interpretation: The proposed finite element models have proven to be an accurate tool for studying the apparent as well as the local mechanical behavior of augmented vertebral bodies.
Abstract:
PURPOSE: Positron emission tomography (PET)/computed tomography (CT) measurements on small lesions are impaired by the partial volume effect, which is intrinsically tied to the point spread function of the actual imaging system, including the reconstruction algorithms. The variability resulting from different point spread functions hinders the assessment of quantitative measurements in clinical routine and especially degrades comparability within multicenter trials. To improve quantitative comparability, there is a need for methods to match different PET/CT systems through elimination of this systemic variability. Consequently, a new method was developed and tested that transforms the image of an object as produced by one tomograph into the image of the same object as it would have been seen by a different tomograph. The proposed method, termed Transconvolution, compensates for the differing imaging properties of different tomographs and particularly aims at quantitative comparability of PET/CT in the context of multicenter trials. METHODS: To solve the problem of image normalization, the theory of Transconvolution was mathematically established together with new methods to handle the point spread functions of different PET/CT systems. Knowing the point spread functions of two different imaging systems allows a Transconvolution function to be determined that converts one image into the other. This function is calculated by convolving one point spread function with the inverse of the other, which, when certain boundary conditions are adhered to, such as the use of linear acquisition and image reconstruction methods, is a numerically accessible operation. For reliable measurement of the point spread functions characterizing different PET/CT systems, a dedicated solid-state phantom incorporating (68)Ge/(68)Ga-filled spheres was developed.
To iteratively determine and represent these point spread functions, exponential density functions in combination with a Gaussian distribution were introduced. Furthermore, simulation of a virtual PET system provided a standard imaging system with clearly defined properties to which the real PET systems were to be matched. A Hann window served as the modulation transfer function of the virtual PET. The Hann window's apodization properties suppressed spatial frequencies above a certain critical frequency, thereby fulfilling the above-mentioned boundary conditions. The determined point spread functions were subsequently used by the novel Transconvolution algorithm to match different PET/CT systems onto the virtual PET system. Finally, the theoretically elaborated Transconvolution method was validated by transforming phantom images acquired on two different PET systems into nearly identical data sets, as they would be imaged by the virtual PET system. RESULTS: The proposed Transconvolution method matched different PET/CT systems for an improved and reproducible determination of a normalized activity concentration. The highest difference in measured activity concentration between the two PET systems, 18.2%, was found in spheres of 2 ml volume; Transconvolution reduced this difference to 1.6%. In addition to re-establishing comparability, the new method, with its parameterization of point spread functions, allowed a full characterization of the imaging properties of the examined tomographs. CONCLUSIONS: By matching different tomographs to a virtual standardized imaging system, Transconvolution provides a new, comprehensive method for cross-calibration in quantitative PET imaging. The use of a virtual PET system restores comparability between data sets from different PET systems by exerting a common, reproducible, and defined partial volume effect.
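The core of Transconvolution, as described above, is filtering with the ratio of the two systems' transfer functions: convolving one point spread function with the inverse of the other. A toy 1-D circular-convolution illustration (my own construction, not the paper's implementation):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a real or complex sequence."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    """Inverse DFT, returning the real part."""
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)).real / n
            for k in range(n)]

def cconv(a, b):
    """Circular convolution via the DFT."""
    return idft([p * q for p, q in zip(dft(a), dft(b))])

def transconvolve(img, h_src, h_dst):
    """Map an image acquired with PSF h_src onto the system with PSF h_dst,
    i.e. filter with the spectrum ratio H_dst / H_src."""
    I, Hs, Hd = dft(img), dft(h_src), dft(h_dst)
    return idft([i * (d / s) if abs(s) > 1e-12 else 0.0
                 for i, s, d in zip(I, Hs, Hd)])

# Sharp source system vs. broader virtual target system (toy circular PSFs)
h_src = [0.8, 0.1, 0, 0, 0, 0, 0, 0.1]
h_dst = [0.5, 0.2, 0.05, 0, 0, 0, 0.05, 0.2]
obj = [0, 0, 1, 0, 0, 0, 0, 0]

img_src = cconv(obj, h_src)                     # what the source scanner sees
img_dst = transconvolve(img_src, h_src, h_dst)  # mapped onto the target system
target = cconv(obj, h_dst)                      # what the target scanner would see
```

In the paper the target is the virtual PET system whose Hann-window transfer function suppresses the high frequencies at which the spectrum ratio would otherwise blow up; the toy kernels here are chosen so that the source spectrum has no zeros.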
Abstract:
Phosphorus (P) is an essential macronutrient for all living organisms. Phosphorus is often present in nature as the soluble phosphate ion PO₄³⁻ and has biological, terrestrial, and marine emission sources. Thus, PO₄³⁻ detected in ice cores has the potential to be an important tracer for biological activity in the past. In this study, a continuous and highly sensitive absorption method for the detection of dissolved reactive phosphorus (DRP) in ice cores has been developed using a molybdate reagent and a 2-m liquid waveguide capillary cell (LWCC). DRP is the soluble form of the nutrient phosphorus that reacts with molybdate. The method was optimized for the low concentrations of DRP in Greenland ice, with a depth resolution of approximately 2 cm and an analytical uncertainty of 1.1 nM (0.1 ppb) PO₄³⁻. The method has been applied to segments of a shallow firn core from Northeast Greenland, indicating a mean concentration of 2.74 nM (0.26 ppb) PO₄³⁻ for the period 1930–2005, with a standard deviation of 1.37 nM (0.13 ppb) PO₄³⁻ and values reaching as high as 10.52 nM (1 ppb) PO₄³⁻. Similar levels were detected for the period 1771–1823. Based on impurity abundances, dust and biogenic particles were found to be the most likely sources of the DRP deposited in Northeast Greenland.
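The paired nM and ppb figures quoted above are linked by the molar mass of phosphate (about 94.97 g/mol for PO₄³⁻, so 1 nmol/L corresponds to roughly 95 ng/L, i.e. about 0.095 ppb by mass in water). A quick consistency check:

```python
def nM_to_ppb(conc_nM, molar_mass_g_mol=94.97):
    """Convert nmol/L to ug/L, which is ~ppb by mass for water.
    conc_nM [nmol/L] * M [g/mol] gives ng/L; dividing by 1000 gives ug/L."""
    return conc_nM * molar_mass_g_mol / 1000.0

mean_ppb = nM_to_ppb(2.74)   # mean concentration from the abstract
lod_ppb = nM_to_ppb(1.1)     # analytical uncertainty from the abstract
```

Both reproduce the abstract's rounded values (0.26 ppb and 0.1 ppb), and 10.52 nM likewise comes out at ~1 ppb.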
Abstract:
Due to the ongoing trend towards increased product variety, fast-moving consumer goods such as food and beverages, pharmaceuticals, and chemicals are typically manufactured through so-called make-and-pack processes. These processes consist of a make stage, a pack stage, and intermediate storage facilities that decouple these two stages. In operations scheduling, complex technological constraints must be considered, e.g., non-identical parallel processing units, sequence-dependent changeovers, batch splitting, no-wait restrictions, material transfer times, minimum storage times, and finite storage capacity. The short-term scheduling problem is to compute a production schedule such that a given demand for products is fulfilled, all technological constraints are met, and the production makespan is minimised. A production schedule typically comprises 500–1500 operations. Due to the problem size and complexity of the technological constraints, the performance of known mixed-integer linear programming (MILP) formulations and heuristic approaches is often insufficient. We present a hybrid method consisting of three phases. First, the set of operations is divided into several subsets. Second, these subsets are iteratively scheduled using a generic and flexible MILP formulation. Third, a novel critical path-based improvement procedure is applied to the resulting schedule. We develop several strategies for the integration of the MILP model into this heuristic framework. Using these strategies, high-quality feasible solutions to large-scale instances can be obtained within reasonable CPU times using standard optimisation software. We have applied the proposed hybrid method to a set of industrial problem instances and found that the method outperforms state-of-the-art methods.
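The critical path-based improvement step mentioned above relies on finding the makespan-determining chain of operations in the precedence graph. A minimal longest-path sketch on a toy make-and-pack instance (operation names and durations invented; the paper's procedure is far richer):

```python
def critical_path(durations, preds):
    """Makespan and one longest chain in an operation precedence DAG."""
    finish = {}
    def f(op):
        # earliest finish time = own duration + latest predecessor finish
        if op not in finish:
            finish[op] = durations[op] + max((f(p) for p in preds.get(op, [])),
                                             default=0)
        return finish[op]
    for op in durations:
        f(op)
    op = max(finish, key=finish.get)
    makespan = finish[op]
    # walk back along tight predecessors to recover one critical chain
    path = [op]
    while preds.get(op):
        op = max(preds[op], key=lambda p: finish[p])
        path.append(op)
    return makespan, path[::-1]

# Toy instance: two make operations feeding two pack operations
durations = {"make_A": 3, "make_B": 2, "pack_A": 4, "pack_B": 1}
preds = {"pack_A": ["make_A"], "pack_B": ["make_B", "pack_A"]}
makespan, path = critical_path(durations, preds)
```

Operations on the returned chain are the ones whose rescheduling can actually shorten the makespan, which is why an improvement heuristic focuses on them.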
Abstract:
A new online method to analyse the water isotopes of speleothem fluid inclusions using a wavelength-scanned cavity ring-down spectroscopy (WS-CRDS) instrument is presented. This novel technique allows us to measure hydrogen and oxygen isotopes simultaneously for a released aliquot of water. To do so, we designed a simple new line that allows the online water extraction and isotope analysis of speleothem samples. The specificity of the method lies in the fact that fluid inclusion release takes place against a standard water background, which mainly improves the δD robustness. To saturate the line, a peristaltic pump continuously injects standard water into the line, which is permanently heated to 140 °C and flushed with dry nitrogen gas. This permits instantaneous and complete vaporisation of the standard water, resulting in an artificial water background with well-known δD and δ18O values. The speleothem sample is placed in a copper tube attached to the line and, after system stabilisation, is crushed using a simple hydraulic device to liberate the speleothem fluid inclusion water. The released water is carried by the nitrogen/standard-water gas stream directly to a Picarro L1102-i for isotope determination. To test the accuracy and reproducibility of the line and to measure standard water during speleothem measurements, a syringe injection unit was added to the line. Peak evaluation is done as in gas chromatography to obtain the δD and δ18O isotopic compositions of the measured water aliquots. Precision is better than 1.5 ‰ for δD and 0.4 ‰ for δ18O over an extended range (−210 to 0 ‰ for δD and −27 to 0 ‰ for δ18O), depending primarily on the amount of water released from the speleothem fluid inclusions and secondarily on the isotopic composition of the sample.
The results show that WS-CRDS technology is suitable for speleothem fluid inclusion measurements and gives results that are comparable to the isotope ratio mass spectrometry (IRMS) technique.
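For reference, the δD and δ18O values reported above are per-mil deviations of an isotope ratio from a reference standard (for water, VSMOW):

```python
def delta_permil(r_sample, r_standard):
    """Per-mil deviation of an isotope ratio from the reference standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

# e.g. a D/H ratio 2% below the reference corresponds to δD ≈ -20 ‰
dD = delta_permil(0.98, 1.00)
```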
Abstract:
In this study, the development of a new sensitive method for the analysis of the alpha-dicarbonyls glyoxal (G) and methylglyoxal (MG) in environmental ice and snow is presented. Stir bar sorptive extraction with in situ derivatization and liquid desorption (SBSE-LD) was used for sample extraction, enrichment, and derivatization. Measurements were carried out using high-performance liquid chromatography coupled to electrospray ionization tandem mass spectrometry (HPLC-ESI-MS/MS). As part of the method development, SBSE-LD parameters such as extraction time, derivatization reagent, desorption time and solvent, and the effect of NaCl addition on the SBSE efficiency, as well as the measurement parameters of HPLC-ESI-MS/MS, were evaluated. Calibration was performed in the range of 1–60 ng/mL using spiked ultrapure water samples, thus incorporating the complete SBSE and derivatization process. 4-Fluorobenzaldehyde was used as the internal standard. Inter-batch precision was <12 % RSD. Recoveries were determined by means of spiked snow samples and were 78.9 ± 5.6 % for G and 82.7 ± 7.5 % for MG. Instrumental detection limits of 0.242 and 0.213 ng/mL for G and MG, respectively, were achieved using the multiple reaction monitoring mode. Relative detection limits, referred to a sample volume of 15 mL, were 0.016 ng/mL for G and 0.014 ng/mL for MG. The optimized method was applied to the analysis of snow samples from Mount Hohenpeissenberg (close to the Meteorological Observatory Hohenpeissenberg, Germany) and to samples from an ice core from Upper Grenzgletscher (Monte Rosa massif, Switzerland). The resulting concentrations were 0.085–16.3 ng/mL for G and 0.126–3.6 ng/mL for MG. Concentrations of G and MG in snow were 1–2 orders of magnitude higher than in the ice core samples. The described method represents a simple, green, and sensitive analytical approach to measuring G and MG in aqueous environmental samples.
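The gap between the instrumental and relative detection limits above reflects the enrichment from the 15 mL sample into the final extract. Assuming desorption into roughly 1 mL of solvent (my assumption; the desorption volume is not stated in the abstract), the quoted numbers are consistent:

```python
def relative_lod(instrumental_lod, v_sample_ml, v_desorb_ml=1.0):
    """Relative LOD after pre-concentration from v_sample_ml into v_desorb_ml.
    The 1 mL desorption volume is an assumed, illustrative value."""
    return instrumental_lod * v_desorb_ml / v_sample_ml

rel_g = relative_lod(0.242, 15.0)   # glyoxal
rel_mg = relative_lod(0.213, 15.0)  # methylglyoxal
```

Both round to the reported relative limits (0.016 and 0.014 ng/mL), i.e. a ~15-fold enrichment.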
Abstract:
BACKGROUND: Dimethyl sulfoxide (DMSO) is essential for the preservation of liquid nitrogen-frozen stem cells, but is associated with toxicity in the transplant recipient. STUDY DESIGN AND METHODS: In this prospective noninterventional study, we describe the use of DMSO in 64 European Blood and Marrow Transplant Group centers undertaking autologous transplantation in patients with myeloma and lymphoma and analyze the side effects after return of DMSO-preserved stem cells. RESULTS: While the majority of centers continue to use 10% DMSO, a significant proportion either use lower concentrations, mostly 5 or 7.5%, or wash cells before infusion (some for selected patients only). In contrast, the median dose of DMSO given (20 mL) was much less than the upper limit set by the same institutions (70 mL). In an accompanying statistical analysis of the side effects noted after return of DMSO-preserved stem cells, we show that patients in the highest quartile of DMSO received (mL and mL/kg body weight) had significantly more side effects attributed to DMSO, although this effect was not observed if DMSO was calculated as mL/min. Dividing the myeloma and lymphoma patients each into two equal groups by age, we were able to confirm this result in all but young myeloma patients, in whom an inversion of the odds ratio was seen, possibly related to the higher dose of melphalan received by young myeloma patients. CONCLUSION: We suggest that better standardization of the preservation method, with reduced DMSO concentrations and attention to the dose of DMSO received by patients, could help reduce the toxicity and morbidity of the transplant procedure.
Abstract:
In three steps, 2-deoxy-D-ribose has been converted into a phosphoramidite building block bearing a (t-Bu)Me2Si protecting group at the OH function of the anomeric centre of the furanose ring. This building block was subsequently incorporated into DNA oligomers of various base sequences using the standard phosphoramidite protocol for automated DNA synthesis. The resulting silyl oligomers were purified by HPLC and selectively desilylated to the corresponding free apurinic DNA sequences. The hexamer d(A-A-A-A-X-A) (X representing the apurinic site) prepared in this way was characterized by 1H- and 31P-NMR spectroscopy. The other sequences, as well as their fragments formed upon treatment with alkali, were analyzed by polyacrylamide gel electrophoresis.
Abstract:
BACKGROUND: Record linkage of existing individual health care data is an efficient way to answer important epidemiological research questions. Reuse of individual health-related data faces several problems: either a unique personal identifier, like a social security number, is not available, or non-unique person-identifiable information, like names, is privacy protected and cannot be accessed. A solution that protects privacy in probabilistic record linkage is to encrypt this sensitive information. Unfortunately, the encrypted hash codes of two names differ completely if the plain names differ by only a single character, so standard encryption methods cannot be applied. To overcome these challenges, we developed the Privacy Preserving Probabilistic Record Linkage (P3RL) method. METHODS: The P3RL method applies a three-party protocol, with two sites collecting individual data and an independent trusted linkage center as the third partner. It consists of three main steps: pre-processing, encryption and probabilistic record linkage. Data pre-processing and encryption are done at the sites by local personnel. To guarantee similar quality and format of the variables and an identical encryption procedure at each site, the linkage center generates semi-automated pre-processing and encryption templates. To retrieve the information (i.e. the data structure) needed to create the templates without ever accessing plain person-identifiable information, we introduced a novel method of data masking. Sensitive string variables are encrypted using Bloom filters, which enables the calculation of similarity coefficients. For date variables, we developed special encryption procedures to handle the most common date errors. The linkage center performs the probabilistic record linkage with the encrypted person-identifiable information and the plain non-sensitive variables.
RESULTS: In this paper we describe, step by step, how to link existing health-related data using encryption methods that preserve the privacy of the persons in the study. CONCLUSION: Privacy Preserving Probabilistic Record Linkage expands record linkage facilities in settings where a unique identifier is unavailable and/or regulations restrict access to the non-unique person-identifiable information needed to link existing health-related data sets. Automated pre-processing and encryption fully protect sensitive information, ensuring participant confidentiality. The method is suitable not just for epidemiological research but for any setting with similar challenges.
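The Bloom-filter encoding mentioned in the METHODS can be sketched as follows, in the style of Schnell et al.: each name is split into bigrams, each bigram sets several bit positions via keyed hashes, and the similarity of two encrypted names is the Dice coefficient of their filters. The filter length, number of hash functions and key below are illustrative, not the P3RL settings:

```python
import hashlib

def bigrams(name):
    """Padded character bigrams of a name, e.g. 'ann' -> {_a, an, nn, n_}."""
    padded = f"_{name.lower()}_"
    return {padded[i:i + 2] for i in range(len(padded) - 1)}

def bloom(name, m=100, k=4, secret=b"shared-key"):
    """Set of bit positions a name switches on in an m-bit Bloom filter."""
    bits = set()
    for gram in bigrams(name):
        for i in range(k):
            h = hashlib.sha256(secret + gram.encode() + bytes([i])).digest()
            bits.add(int.from_bytes(h[:4], "big") % m)
    return bits

def dice(a, b):
    """Dice similarity of two filters: 1.0 for identical, ~0 for disjoint."""
    return 2 * len(a & b) / (len(a) + len(b))

sim_close = dice(bloom("Miller"), bloom("Miler"))  # one-character typo
sim_far = dice(bloom("Miller"), bloom("Smith"))    # unrelated name
```

Unlike a plain hash, a one-character typo changes only a few bigrams, so the similarity degrades gracefully instead of collapsing to zero, which is exactly what probabilistic linkage needs.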
Abstract:
A fast and automatic method for the radiocarbon analysis of aerosol samples is presented. This type of analysis requires a high number of measurements on low carbon masses, but accepts lower precisions than radiocarbon dating. The method is based on trapping CO2 online and coupling an elemental analyzer to a MICADAS accelerator mass spectrometer (AMS) by means of a gas interface. It gives results similar to a previously validated reference method for the same set of samples. The method is fast and automatic, typically provides uncertainties of 1.5–5% for representative aerosol samples, proves to be robust and reliable, and allows for overnight, unattended measurements. A constant and cross contamination correction is included, which indicates a constant contamination of 1.4 ± 0.2 μg C with 70 ± 7 pMC and a cross contamination of (0.2 ± 0.1)% from the previous sample. A real-time online coupling version of the method was also investigated; it shows promising results for standard materials, with slightly higher uncertainties than the online trapping approach.
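A constant and cross contamination correction of the kind quoted above can be written as a two-stage mass-balance inversion: first undo the carry-over from the previous sample, then subtract the constant contamination. This is a hedged sketch of the general scheme using the abstract's numbers; the paper's exact formulation may differ in detail:

```python
def correct(f_obs, m_meas, f_const=70.0, m_const=1.4, r_cross=0.002, f_prev=0.0):
    """Recover the true pMC of a sample from the observed value f_obs:
    undo the cross contamination (fraction r_cross from the previous
    sample at f_prev pMC), then the constant contamination
    (m_const ug C at f_const pMC) by isotopic mass balance."""
    f_mix = (f_obs - r_cross * f_prev) / (1 - r_cross)
    return (f_mix * m_meas - f_const * m_const) / (m_meas - m_const)

# Round-trip check: simulate both biases on a known 'true' value, then invert
f_true, m_sample = 100.0, 48.6
m_meas = m_sample + 1.4                              # constant contamination adds mass
f_mix = (f_true * m_sample + 70.0 * 1.4) / m_meas    # ...and shifts the pMC
f_obs = (1 - 0.002) * f_mix + 0.002 * 50.0           # 0.2% carry-over from a 50 pMC sample
recovered = correct(f_obs, m_meas, f_prev=50.0)
```

The inversion reproduces the assumed true value exactly, confirming the two corrections compose cleanly when applied in the right order.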
Abstract:
Behavioural tests to assess affective states are widely used in human research and have recently been extended to animals. These tests assume that affective state influences cognitive processing and that animals in a negative affective state interpret ambiguous information as predicting a negative outcome (displaying a negative cognitive bias). Most of these tests, however, require long discrimination training. The aim of this study was to validate an exploration-based cognitive bias test using two different handling methods, as previous studies have shown that standard tail handling of mice increases physiological and behavioural measures of anxiety compared to cupped handling. We therefore hypothesised that tail-handled mice would display a negative cognitive bias. We handled 28 female CD-1 mice for 16 weeks using either tail handling or cupped handling. The mice were then trained in an eight-arm radial maze, where two adjacent arms predicted a positive outcome (darkness and food), while the two opposite arms predicted a negative outcome (no food, white noise and light). After six days of training, the mice were also given access to the four previously unavailable, intermediate ambiguous arms of the radial maze and tested for cognitive bias. We were unable to validate this test, as mice from both handling groups displayed a similar pattern of exploration. Furthermore, we examined whether maze exploration is affected by the expression of stereotypic behaviour in the home cage. Mice with higher levels of stereotypic behaviour spent more time in the positive arms and avoided the ambiguous arms, displaying a negative cognitive bias. While this test needs further validation, our results indicate that it may allow the assessment of affective state in mice with minimal training, a major confound in current cognitive bias paradigms.