951 results for Reproducibility
Abstract:
BACKGROUND: The renal enzyme renin cleaves the decapeptide angiotensin-(1-10) [Ang-(1-10)] from the hepatic alpha(2)-globulin angiotensinogen; Ang-(1-10) is further metabolized to smaller peptides that help maintain cardiovascular homeostasis. The Ang-(1-7) heptapeptide has been reported to have several physiological effects, including natriuresis, diuresis, vasodilation, and release of vasopressin and prostaglandins. METHODS: To investigate Ang-(1-7) in clinical settings, we developed a method to measure immunoreactive (ir-) Ang-(1-7) in 2 mL of human blood and to estimate plasma concentrations by correcting for the hematocrit. A sensitive and specific antiserum against Ang-(1-7) was raised in a rabbit. Human blood was collected in the presence of an inhibitor mixture, including a renin inhibitor, to prevent peptide generation in vitro. Ang-(1-7) was extracted into ethanol and purified on phenylsilylsilica. The peptide was quantified by radioimmunoassay. Increasing doses of Ang-(1-7) were infused into volunteers, and plasma concentrations of the peptide were measured. RESULTS: The detection limit for plasma ir-Ang-(1-7) was 1 pmol/L. Within-assay CVs for high and low blood concentrations were 4% and 20%, respectively, and between-assay CVs were 8% and 13%, respectively. Reference values for human plasma concentrations of ir-Ang-(1-7) were 1.0-9.5 pmol/L (median, 4.7 pmol/L), and concentrations increased linearly during infusion of increasing doses of Ang-(1-7). CONCLUSIONS: Reliable measurement of plasma ir-Ang-(1-7) is achieved by efficient inhibition of the enzymes that generate or metabolize Ang-(1-7) after blood sampling, extraction into ethanol, purification on phenylsilylsilica, and use of a specific antiserum.
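The hematocrit correction mentioned above can be sketched as follows. This is a minimal illustration assuming the peptide is confined to the plasma fraction; it is not the authors' actual computation.

```python
def plasma_concentration(blood_conc_pmol_l, hematocrit):
    """Estimate the plasma concentration from a whole-blood measurement,
    assuming the analyte is confined to the plasma fraction of the blood."""
    if not 0 <= hematocrit < 1:
        raise ValueError("hematocrit must be a fraction in [0, 1)")
    return blood_conc_pmol_l / (1.0 - hematocrit)

# e.g. 3.0 pmol/L measured in whole blood at a hematocrit of 0.40
print(round(plasma_concentration(3.0, 0.40), 2))  # 5.0
```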
Abstract:
Determination of the sub-chondral bone density, or more precisely of the internal density spot, can be used to evaluate the capability of a knee to sustain normal kinematics. To use this technique as a means of monitoring knee kinematics, the position of the internal density spot must be determined in a reproducible way. This paper presents the definition of an intrinsic polar coordinate system that allows the position of the internal density spot of the tibial plateau to be measured. Reproducibility tests gave good results and justify the use of this coordinate system for comparing the internal density spot position between left and right paired knees.
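A coordinate transform of this kind can be sketched as below. The origin and reference axis of the intrinsic frame are placeholders for whichever anatomical landmarks define it; this is an illustration, not the paper's definition.

```python
import math

def to_polar(spot_xy, origin_xy, axis_angle_rad):
    """Express a density-spot position in an intrinsic polar frame:
    radius from a chosen origin, angle measured from a reference axis."""
    dx = spot_xy[0] - origin_xy[0]
    dy = spot_xy[1] - origin_xy[1]
    radius = math.hypot(dx, dy)
    theta = math.atan2(dy, dx) - axis_angle_rad
    theta = (theta + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    return radius, theta

# spot 3 mm lateral and 4 mm anterior of the origin, axis along x
r, theta = to_polar((3.0, 4.0), (0.0, 0.0), 0.0)
print(round(r, 2), round(theta, 3))  # 5.0 0.927
```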
Abstract:
The research reported in this series of articles aimed (1) to automate the search of questioned ink specimens in ink reference collections and (2) to evaluate the strength of ink evidence in a transparent and balanced manner. These aims require that ink samples be analysed in an accurate and reproducible way and compared in an objective and automated way. The latter requirement stems from the large number of comparisons that are necessary in both scenarios. A research programme was designed to (a) develop a standard methodology for analysing ink samples in a reproducible way, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in forensic contexts. This report focuses on the last of the three stages of the research programme. The calibration and acquisition process and the mathematical comparison algorithms were described in previous papers [C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part I: Development of a quality assurance process for forensic ink analysis by HPTLC, Forensic Sci. Int. 185 (2009) 29-37; C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part II: Development and testing of mathematical algorithms for the automatic comparison of ink samples analysed by HPTLC, Forensic Sci. Int. 185 (2009) 38-50]. In this paper, the benefits and challenges of the proposed concepts are tested in two forensic contexts: (1) ink identification and (2) ink evidential value assessment. The results show that different algorithms are better suited for different tasks. This research shows that it is possible to build digital ink libraries using the most commonly used ink analytical technique, i.e. high-performance thin layer chromatography, despite its reputation for lacking reproducibility. More importantly, it is possible to assign evidential value to ink evidence in a transparent way using a probabilistic model.
It is therefore possible to move away from the traditional subjective approach, which is based entirely on experts' opinions and is usually not very informative. While there is room for improvement, this report demonstrates the significant gains of the proposed approach over the traditional subjective one, both for the search of ink specimens in ink databases and for the interpretation of their evidential value.
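As a toy illustration of automated, objective ink comparison (not one of the algorithms tested in Parts I and II), a similarity score between two HPTLC intensity profiles of equal length might be computed as a Pearson correlation:

```python
import math

def pearson_similarity(profile_a, profile_b):
    """Compare two HPTLC intensity profiles of equal length using the
    Pearson correlation coefficient; 1.0 means identical profile shape."""
    n = len(profile_a)
    mean_a = sum(profile_a) / n
    mean_b = sum(profile_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(profile_a, profile_b))
    var_a = sum((a - mean_a) ** 2 for a in profile_a)
    var_b = sum((b - mean_b) ** 2 for b in profile_b)
    return cov / math.sqrt(var_a * var_b)

# identical profiles score 1.0; dissimilar profiles score lower
ink1 = [0.1, 0.5, 0.9, 0.4, 0.2]
ink2 = [0.8, 0.2, 0.1, 0.3, 0.7]
print(round(pearson_similarity(ink1, ink1), 3))  # 1.0
```

A library search would then rank reference inks by this score; the published work evaluated several such comparison metrics against each other.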
Abstract:
PURPOSE: To improve tag persistence throughout the whole cardiac cycle by providing constant tag contrast across all cardiac phases when using balanced steady-state free precession (bSSFP) imaging. MATERIALS AND METHODS: The flip angles of the imaging radiofrequency pulses were optimized to compensate for the fading of the tagging contrast-to-noise ratio (Tag-CNR) at later cardiac phases in bSSFP imaging. Complementary spatial modulation of magnetization (CSPAMM) tagging was implemented to improve the Tag-CNR. Numerical simulations were performed to examine the behavior of the Tag-CNR with the proposed method and to compare the resulting Tag-CNR with that obtained from the more commonly used spoiled gradient echo (SPGR) imaging. A gel phantom and five healthy human volunteers were scanned on a 1.5T scanner using bSSFP imaging with and without the proposed technique. The phantom was also scanned with SPGR imaging. RESULTS: With the proposed technique, the Tag-CNR remained almost constant during the whole cardiac cycle. Using bSSFP imaging, the Tag-CNR was about double that of SPGR. CONCLUSION: Tag persistence was significantly improved when the proposed method was applied, with better Tag-CNR during the diastolic cardiac phases. The improved Tag-CNR will support automated tagging analysis and quantification methods.
Abstract:
High performance liquid chromatography (HPLC) is the reference method for measuring concentrations of antimicrobials in blood. This technique requires careful sample preparation. Protocols using organic solvents and/or solid extraction phases are time-consuming and entail several manipulations, which can lead to partial loss of the determined compound and increased analytical variability. Moreover, to obtain sufficient material for analysis, at least 1 ml of plasma is required. This constraint makes it difficult to determine drug levels when blood sample volumes are limited. However, drugs with low plasma-protein binding can be reliably extracted from plasma by ultra-filtration, with minimal loss due to the protein-bound fraction. This study validated a single-step ultra-filtration method for extracting fluconazole (FLC), a first-line antifungal agent with weak plasma-protein binding, from plasma to determine its concentration by HPLC. Spiked FLC standards and unknowns were prepared in human and rat plasma. Samples (240 microl) were transferred into disposable microtube filtration units containing cellulose or polysulfone filters with a 5 kDa cut-off. After centrifugation for 60 min at 15,000 g, FLC concentrations were measured by direct injection of the filtrate into the HPLC system. Using cellulose filters, low molecular weight proteins eluted early in the chromatogram and were well separated from FLC, which eluted at 8.40 min as a sharp single peak. In contrast, with polysulfone filters several additional peaks interfering with the FLC peak were observed. Moreover, FLC recovery was higher and more reproducible with cellulose filters than with polysulfone filters. Cellulose filters were therefore used for the subsequent validation procedure. The quantification limit was 0.195 mg/L. Standard curves with a quadratic regression coefficient > or = 0.9999 were obtained over the concentration range 0.195-100 mg/L.
The inter- and intra-run accuracies and precisions over the clinically relevant concentration range (1.875-60 mg/L) fell well within the +/-15% variation recommended by current guidelines for the validation of analytical methods. Furthermore, no analytical interference was observed with commonly used antibiotics, antifungals, antivirals, and immunosuppressive agents. Ultra-filtration of plasma with cellulose filters permits the extraction of FLC from small volumes (240 microl). The determination of FLC concentrations by HPLC after this single-step procedure is selective, precise, and accurate.
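The standard-curve step can be sketched as follows. The calibration points and detector response below are synthetic placeholders, not the study's measurements; the sketch only shows the shape of the workflow: fit a quadratic curve to the standards, then invert it to read an unknown's concentration from its peak area.

```python
import numpy as np

# Hypothetical calibration data: FLC standards (mg/L) and peak areas
concs = np.array([0.195, 1.875, 7.5, 15.0, 30.0, 60.0, 100.0])
areas = 12.0 * concs - 0.01 * concs**2 + 3.0   # synthetic detector response

# Fit a quadratic standard curve (peak area as a function of concentration)
coeffs = np.polyfit(concs, areas, 2)

def conc_from_area(area, coeffs):
    """Invert the quadratic curve: smallest non-negative real root."""
    a, b, c = coeffs
    roots = np.roots([a, b, c - area])
    real = roots[np.isreal(roots)].real
    return float(real[real >= 0].min())

# Read back a hypothetical unknown whose true concentration is 20 mg/L
unknown_area = 12.0 * 20.0 - 0.01 * 20.0**2 + 3.0
est = conc_from_area(unknown_area, coeffs)
print(round(est, 2))  # 20.0
```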
Abstract:
Intensity-modulated radiotherapy (IMRT) treatment plan verification by comparison with measured data requires access to the linear accelerator and is time-consuming. In this paper, we propose a method for monitor unit (MU) calculation and plan comparison for step-and-shoot IMRT based on the Monte Carlo code EGSnrc/BEAMnrc. The beamlets of an IMRT treatment plan are individually simulated using Monte Carlo and converted into absorbed dose to water per MU. The dose of the whole treatment can then be expressed through a linear matrix equation in the MU values and the dose per MU of every beamlet. Because the absorbed dose and the MU values must both be positive, this equation is solved for the MU values using a non-negative least-squares (NNLS) optimization algorithm. The Monte Carlo plan is formed by multiplying the Monte Carlo absorbed dose to water per MU by the Monte Carlo/NNLS MU values. Treatment plans for several localizations calculated with a commercial treatment planning system (TPS) are compared with the proposed method for validation. The Monte Carlo/NNLS MUs are close to those calculated by the TPS and lead to a treatment dose distribution that is clinically equivalent to the one calculated by the TPS. This procedure can be used as an IMRT QA tool, and further development could allow the technique to be applied to other radiotherapy techniques such as tomotherapy or volumetric modulated arc therapy.
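The NNLS step can be sketched with SciPy. The beamlet dose matrix below is a small toy example, not simulated Monte Carlo data; it only demonstrates solving the linear system d = D · mu under the non-negativity constraint.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical beamlet dose matrix D (dose per MU, 4 voxels x 3 beamlets)
D = np.array([[0.02, 0.00, 0.01],
              [0.01, 0.03, 0.00],
              [0.00, 0.02, 0.02],
              [0.01, 0.01, 0.01]])

# Target dose d built from known MU values, so recovery can be checked
mu_true = np.array([50.0, 30.0, 40.0])
d = D @ mu_true

# Solve d = D @ mu for non-negative MU values
mu, residual = nnls(D, d)
print(np.allclose(mu, mu_true))  # True for this consistent system
```

Because this toy system is consistent and its exact solution is non-negative, NNLS recovers it with zero residual; with real Monte Carlo doses the fit is approximate and the residual measures the plan discrepancy.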
Abstract:
PURPOSE: To compare different techniques for positive contrast imaging of susceptibility markers with MRI for three-dimensional visualization. As several different techniques have been reported, the choice of a suitable method depends on its properties with regard to the amount of positive contrast and the desired background suppression, as well as other imaging constraints needed for a specific application. MATERIALS AND METHODS: Six different positive contrast techniques were investigated for their ability to image a single susceptibility marker in vitro at 3 Tesla. The white marker method (WM), susceptibility gradient mapping (SGM), inversion recovery with on-resonant water suppression (IRON), frequency selective excitation (FSX), fast low flip-angle positive contrast SSFP (FLAPS), and iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL) were implemented and investigated. RESULTS: The methods were compared with respect to the volume of positive contrast, the product of volume and signal intensity, imaging time, and the level of background suppression. Quantitative results are provided, and the strengths and weaknesses of the different approaches are discussed. CONCLUSION: The appropriate choice of positive contrast imaging technique depends on the desired level of background suppression, acquisition speed, and robustness against artifacts, for which in vitro comparative data are now available.
Abstract:
In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators select a threshold p value below which they reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine a critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are distinct theories that are often misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to interpret the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the observed difference, or one more extreme, given that the null hypothesis is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
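As a small worked example of the definition above, the two-sided p value for a standard normal test statistic follows directly from the identity 2·(1 − Φ(|z|)) = erfc(|z|/√2):

```python
import math

def two_sided_p_from_z(z):
    """p value: probability, under the null hypothesis, of a standard
    normal test statistic at least as extreme as the one observed."""
    return math.erfc(abs(z) / math.sqrt(2))

# z = 1.96 sits exactly at the conventional two-sided alpha = 0.05 boundary
p = two_sided_p_from_z(1.96)
print(round(p, 3))  # 0.05
```

Note that p here is a statement about the data given the null, not the probability that the null hypothesis itself is true.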
Abstract:
BACKGROUND: Legionella species cause severe forms of pneumonia with high mortality and complication rates. Accurate clinical predictors to assess the likelihood of Legionella community-acquired pneumonia (CAP) in patients presenting to the emergency department are lacking. METHODS: We retrospectively compared clinical and laboratory data of 82 consecutive patients with Legionella CAP with those of 368 consecutive patients with non-Legionella CAP included in two studies at the same institution. RESULTS: In multivariate logistic regression analysis we identified six parameters, namely high body temperature (OR 1.67, p < 0.0001), absence of sputum production (OR 3.67, p < 0.0001), low serum sodium concentration (OR 0.89, p = 0.011), high levels of lactate dehydrogenase (OR 1.003, p = 0.007) and C-reactive protein (OR 1.006, p < 0.0001), and low platelet count (OR 0.991, p < 0.0001), as independent predictors of Legionella CAP. Using optimal cut-off values for these six parameters, we calculated a diagnostic score for Legionella CAP. The median score was significantly higher in patients with Legionella CAP than in patients without Legionella (4 (IQR 3-4) vs 2 (IQR 1-2), p < 0.0001), with a corresponding odds ratio of 3.34 (95%CI 2.57-4.33, p < 0.0001). Receiver operating characteristic analysis showed a high diagnostic accuracy for this score (AUC 0.86, 95%CI 0.81-0.90), better than that of each parameter alone. Of the 191 patients (42%) with a score of 0 or 1 point, only 3% had Legionella pneumonia. Conversely, of the 73 patients (16%) with > or = 4 points, 66% had Legionella CAP. CONCLUSION: Six clinical and laboratory parameters embedded in a simple diagnostic score accurately identified patients with Legionella CAP. If validated in future studies, this score might aid in the management of suspected Legionella CAP.
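A cut-off-based point score of this kind can be sketched as follows. The six criteria mirror the parameters named above, but the numeric cut-off values are illustrative placeholders, not the ones derived in the study.

```python
# One point per fulfilled criterion; cut-off values below are
# hypothetical placeholders, not the study's optimal cut-offs.
CUTOFFS = {
    "temperature_high": lambda pt: pt["temp_c"] > 39.0,
    "no_sputum":        lambda pt: not pt["sputum"],
    "sodium_low":       lambda pt: pt["sodium_mmol_l"] < 133,
    "ldh_high":         lambda pt: pt["ldh_u_l"] > 225,
    "crp_high":         lambda pt: pt["crp_mg_l"] > 187,
    "platelets_low":    lambda pt: pt["platelets_g_l"] < 171,
}

def legionella_score(pt):
    """Sum the fulfilled criteria: 0-6 points in total."""
    return sum(1 for test in CUTOFFS.values() if test(pt))

patient = {"temp_c": 39.5, "sputum": False, "sodium_mmol_l": 130,
           "ldh_u_l": 300, "crp_mg_l": 250, "platelets_g_l": 150}
print(legionella_score(patient))  # 6
```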
Abstract:
OBJECTIVES: This study aimed to assess the validity of COOP charts in a general population sample, to examine whether the illustrations contribute to instrument validity, and to establish general population norms. METHODS: A general population mail survey was conducted among residents of the Swiss canton of Vaud aged 20-79 years. Participants were invited to complete the COOP charts and the SF-36 Health Survey, and also provided data on health service use in the previous month. Two-thirds of the respondents received standard COOP charts; the rest received charts without illustrations. RESULTS: Overall, 1250 persons responded (54%). The presence of illustrations did not affect score distributions, except that the illustrated 'physical fitness' chart drew greater non-response (10 vs. 3%, p < 0.001). Validity tests gave similar results for illustrated and picture-less charts. Factor analysis yielded two principal components, corresponding to physical and mental health. Six COOP charts showed strong and nearly linear relationships with the corresponding SF-36 scores (all p < 0.001), demonstrating concurrent validity. Similarly, most COOP charts were associated with the use of medical services in the past month. Only the 'social support' chart partly deviated from the construct validity hypotheses. Population norms revealed a generally lower health status in women and an age-related decline in physical health. CONCLUSIONS: COOP charts can be used to assess the health status of a general population. Their validity is good, with the possible exception of the 'social support' chart. The illustrations do not affect the properties of this instrument.
Abstract:
BACKGROUND: In patients with Kawasaki disease, serial evaluation of the distribution and size of coronary artery aneurysms (CAA) is necessary for risk stratification and therapeutic management. Although transthoracic echocardiography is often sufficient for this purpose initially, visualization of the coronary arteries becomes progressively more difficult as children grow. We sought to prospectively compare coronary magnetic resonance angiography (MRA) and x-ray coronary angiography findings in patients with CAA caused by Kawasaki disease. METHODS AND RESULTS: Six subjects (age 10 to 25 years) with known CAA from Kawasaki disease underwent coronary MRA using a free-breathing T2-prepared 3D bright blood segmented k-space gradient echo sequence with navigator gating and tracking. All patients underwent x-ray coronary angiography within a median of 75 days (range, 1 to 359 days) of coronary MRA. There was complete agreement between MRA and x-ray angiography in the detection of CAA (n=11), coronary artery stenoses (n=2), and coronary occlusions (n=2). Excellent agreement was found between the 2 techniques for detection of CAA maximal diameter (mean difference=0.4 +/- 0.6 mm) and length (mean difference=1.4 +/- 1.6 mm). The 2 methods showed very similar results for proximal coronary artery diameter (mean difference=0.2 +/- 0.5 mm) and CAA distance from the ostia (mean difference=0.1 +/- 1.5 mm). CONCLUSION: Free-breathing 3D coronary MRA accurately defines CAA in patients with Kawasaki disease. This technique may provide a non-invasive alternative when transthoracic echocardiography image quality is insufficient, thereby reducing the need for serial x-ray coronary angiography in this patient group.
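The mean difference +/- SD agreement statistics quoted above can be computed on any paired measurements. The values below are invented for illustration only; this is not the study's data.

```python
import math

# Hypothetical paired CAA maximal diameters (mm) from MRA and x-ray angiography
mra  = [5.1, 7.8, 4.2, 9.0, 6.3]
xray = [4.8, 7.5, 3.9, 8.5, 6.0]

# Agreement summarized as mean difference +/- sample SD of the differences
diffs = [a - b for a, b in zip(mra, xray)]
mean_diff = sum(diffs) / len(diffs)
sd_diff = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (len(diffs) - 1))
print(f"{mean_diff:.2f} +/- {sd_diff:.2f} mm")  # 0.34 +/- 0.09 mm
```

Plotting each difference against the pairwise mean of the two methods turns this summary into a standard Bland-Altman agreement analysis.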
Abstract:
In the vast majority of bottom-up proteomics studies, protein digestion is performed using only mammalian trypsin. Although it is clearly the best enzyme available, the sole use of trypsin rarely leads to complete sequence coverage, even for abundant proteins. It is commonly assumed that this is because many tryptic peptides are either too short or too long to be identified by RPLC-MS/MS. We show through in silico analysis that 20-30% of the total sequence of three proteomes (Schizosaccharomyces pombe, Saccharomyces cerevisiae, and Homo sapiens) is expected to be covered by Large post-Trypsin Peptides (LpTPs) with M(r) above 3000 Da. We then established size exclusion chromatography to fractionate complex yeast tryptic digests into pools of peptides based on size. We found that secondary digestion of LpTPs followed by LC-MS/MS analysis leads to a significant increase in identified proteins and a 32-50% relative increase in average sequence coverage compared to trypsin digestion alone. Application of the developed strategy to analyze the phosphoproteomes of S. pombe and of a human cell line identified a significant fraction of novel phosphosites. Overall our data indicate that specific targeting of LpTPs can complement standard bottom-up workflows to reveal a largely neglected portion of the proteome.
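The in silico digestion step can be sketched as follows. The cleavage rule is the standard trypsin rule (after K or R, not before P); the ~111 Da average residue mass is a crude approximation used here only to flag LpTP candidates, whereas the published analysis used actual peptide masses.

```python
import re

AVG_RESIDUE_MASS = 111.1  # rough average amino-acid residue mass (Da)
WATER = 18.0              # mass of water added on hydrolysis (Da)

def tryptic_digest(sequence):
    """Cleave after K or R, except when the next residue is P."""
    return [p for p in re.split(r"(?<=[KR])(?!P)", sequence) if p]

def is_lptp(peptide, threshold_da=3000.0):
    """Flag Large post-Trypsin Peptides (LpTPs) above ~3000 Da,
    using an average-mass approximation of the peptide mass."""
    return len(peptide) * AVG_RESIDUE_MASS + WATER > threshold_da

seq = "MK" + "A" * 40 + "R" + "G" * 5 + "K"
peptides = tryptic_digest(seq)
print([(len(p), is_lptp(p)) for p in peptides])  # [(2, False), (41, True), (6, False)]
```

Running such a digest over a whole proteome gives the fraction of sequence carried by LpTPs, which is the quantity estimated in silico above.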
Abstract:
The use of laparoscopic surgery has increased rapidly. However, a technically feasible procedure is not automatically a recommendable one. Thus, while cholecystectomy and fundoplication are now fully validated laparoscopic techniques, the same does not hold true for gastroplasty and kidney harvesting for transplantation: these operations are indeed feasible, but their efficacy remains to be proved. Laparoscopic oncology has also been shown to be feasible, but its efficacy has not yet been documented.
Abstract:
The paper analyses and compares infrasonic and seismic data from snow avalanches monitored at the Vallée de la Sionne test site in Switzerland from 2009 to 2010. Using a combination of seismic and infrasound sensors, it is possible not only to detect a snow avalanche but also to distinguish between the different flow regimes and to analyse duration, average speed (for sections of the avalanche path), and avalanche size. The seismic and infrasound sensors are shown to have different sensitivities to the avalanche flow regimes. Furthermore, the high amplitudes observed in the infrasound signal of one avalanche were modelled by assuming that the suspension layer of the avalanche acts as a moving turbulent sound source. Our results show reproducibility for similar avalanches on the same avalanche path.