92 results for Calibration phantom


Relevance: 10.00%

Publisher:

Abstract:

The flourishing number of publications on the use of isotope ratio mass spectrometry (IRMS) in forensic science denotes the enthusiasm and the attraction generated by this technology. IRMS has demonstrated its potential to distinguish chemically identical compounds coming from different sources. Despite the numerous applications of IRMS to a wide range of forensic materials, its implementation in a forensic framework is less straightforward than it appears. In addition, each laboratory has developed its own strategy of analysis on calibration, sequence design, standards utilisation and data treatment, without a clear consensus. Through the experience acquired from research undertaken in different forensic fields, we propose a methodological framework of the whole process using IRMS methods. We emphasize the importance of considering isotopic results as part of a whole approach when applying this technology to a particular forensic issue. The process is divided into six different steps, which should be considered for a thoughtful and relevant application. The dissection of this process into fundamental steps, further detailed, enables a better understanding of the essential, though not exhaustive, factors that have to be considered in order to obtain results of quality and sufficiently robust to proceed to retrospective analyses or interlaboratory comparisons.


The reported prevalence of late-life depressive symptoms varies widely between studies, a finding that might be attributed to cultural as well as methodological factors. The EURO-D scale was developed to allow valid comparison of prevalence and risk associations between European countries. This study used Confirmatory Factor Analysis (CFA) and Rasch models to assess whether the goal of measurement invariance had been achieved, using EURO-D scale data collected in 10 European countries as part of the Survey of Health, Ageing and Retirement in Europe (SHARE) (n = 22,777). The results suggested a two-factor solution (Affective Suffering and Motivation) after Principal Component Analysis (PCA) in 9 of the 10 countries. With CFA, in all countries, the two-factor solution had better overall goodness-of-fit than the one-factor solution. However, only the Affective Suffering subscale was equivalent across countries, while the Motivation subscale was not. The Rasch model indicated that the EURO-D was a hierarchical scale. While the calibration pattern was similar across countries, between-country agreement in item calibrations was stronger for the items loading on the Affective Suffering factor than for those loading on the Motivation factor. In conclusion, there is evidence to support the EURO-D as either a uni-dimensional or a bi-dimensional measure of depressive symptoms in late-life across European countries. The Affective Suffering sub-component had more robust cross-cultural validity than the Motivation sub-component.


PURPOSE: Activity monitoring is considered a highly relevant outcome measure of respiratory rehabilitation. This study aimed to assess the usefulness of a new accelerometric method for characterization of walking activity during a 3-week inpatient rehabilitation program. METHODS: After individual calibration of the accelerometer at different walking speeds, whole-day physical activity was recorded for 15 patients with chronic obstructive pulmonary disease on the first and the last days of the program, and for 10 healthy subjects. Data were expressed as the percentage of time spent in inactivity, low-level activity, and medium-level activity, with the latter corresponding to usual walking speed. RESULTS: The patients spent more time being inactive and less time walking than healthy subjects. At the end of the rehabilitation program, medium-level activity had increased from 4% to 7% of total recording time. However, the change was not significant after periods of imposed exercise training were excluded. Walking activity increased to a greater degree among the patients with preserved limb muscle strength at entry to the program. Although health status scores improved, the changes did not correlate with the changes in walking activity. CONCLUSION: This new accelerometric method provides a detailed analysis of walking activity during respiratory rehabilitation and may represent an additional useful outcome measure.
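The epoch classification underlying such an analysis can be sketched as follows; the count values and thresholds are hypothetical placeholders for the individually calibrated walking-speed cut-offs, not data from the study:

```python
# Sketch: classify accelerometer epochs into activity levels and report the
# percentage of recording time spent in each, as in the study's analysis.
# The thresholds (counts per epoch) are illustrative placeholders that would
# come from each subject's individual walking-speed calibration.

def activity_profile(counts, low_threshold, medium_threshold):
    """Return % of epochs spent inactive, in low-level and in medium-level activity."""
    n = len(counts)
    inactive = sum(1 for c in counts if c < low_threshold)
    low = sum(1 for c in counts if low_threshold <= c < medium_threshold)
    medium = n - inactive - low
    return tuple(round(100.0 * k / n, 1) for k in (inactive, low, medium))

profile = activity_profile([0, 5, 120, 300, 2, 450, 80, 10],
                           low_threshold=50, medium_threshold=250)
# profile -> (50.0, 25.0, 25.0): half the epochs inactive, a quarter each
# in low- and medium-level activity for this toy recording.
```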


Among the various determinants of treatment response, the achievement of sufficient blood levels is essential for curing malaria. To help improve our current understanding of the pharmacokinetics, efficacy and toxicity of antimalarial drugs, we have developed a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method requiring 200 µL of plasma for the simultaneous determination of 14 antimalarial drugs and their metabolites, the components of the current first-line combination treatments for malaria (artemether, artesunate, dihydroartemisinin, amodiaquine, N-desethyl-amodiaquine, lumefantrine, desbutyl-lumefantrine, piperaquine, pyronaridine, mefloquine, chloroquine, quinine, pyrimethamine and sulfadoxine). Plasma is purified by a combination of protein precipitation, evaporation and reconstitution in methanol/20 mM ammonium formate (pH 4.0) 1:1. Reverse-phase chromatographic separation of the antimalarial drugs is obtained using a gradient elution of 20 mM ammonium formate and acetonitrile, both containing 0.5% formic acid, followed by rinsing and re-equilibration to the initial solvent composition up to 21 min. Analyte quantification, using matrix-matched calibration samples, is performed by electrospray ionization-triple quadrupole mass spectrometry with selected reaction monitoring detection in the positive mode. The method was validated according to FDA recommendations, including assessment of extraction yield, matrix effect variability, overall process efficiency, standard addition experiments, as well as the short- and long-term stability of antimalarials in plasma. The reactivity of endoperoxide-containing antimalarials in the presence of hemolysis was tested both in vitro and on samples from malaria patients. With this method, the signal intensity of artemisinin decreased by about 20% in the presence of 0.2% hemolysed red blood cells in plasma, whereas its derivatives were essentially not affected.
The method is precise (inter-day CV: 3.1-12.6%) and sensitive (lower limits of quantification 0.15-3.0 ng/mL for basic/neutral antimalarials and 0.75-5 ng/mL for artemisinin derivatives). This is the first broad-range LC-MS/MS assay covering the antimalarials currently in use. It is an improvement over previous methods in terms of convenience (a single extraction procedure for 14 major antimalarials and metabolites, significantly reducing analytical time), sensitivity, selectivity and throughput. While its main limitation is the investment cost of the equipment, plasma samples can be collected in the field and kept at 4 °C for up to 48 h before storage at -80 °C. It is suited to detecting the presence of drug in subjects for screening purposes and to quantifying drug exposure after treatment. It may contribute to filling the current knowledge gaps in the pharmacokinetic/pharmacodynamic relationships of antimalarials and to better defining the therapeutic dose ranges in different patient populations.
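The quantification against matrix-matched calibration samples mentioned above boils down to fitting a calibration line and inverting it for unknowns. A minimal sketch, with concentrations and peak-area ratios that are purely illustrative, not data from the assay:

```python
# Sketch: quantification against a matrix-matched calibration line. Peak-area
# ratios (analyte / internal standard) and concentrations are invented values
# chosen to lie exactly on a line for illustration.

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

conc = [1.0, 5.0, 10.0, 50.0, 100.0]    # ng/mL, calibration standards
ratio = [0.02, 0.10, 0.20, 1.00, 2.00]  # perfectly linear for illustration
slope, intercept = fit_line(conc, ratio)

# Back-calculate an unknown sample from its measured peak-area ratio.
unknown = (0.5 - intercept) / slope     # -> 25.0 ng/mL for this toy line
```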


Coronary artery calcification (CAC) is quantified based on a computed tomography (CT) scan image. A calcified region is identified. Modified expectation maximization (MEM) of a statistical model for the calcified and background material is used to estimate the partial calcium content of the voxels. The algorithm limits the region over which MEM is performed. By using MEM, the statistical properties of the model are iteratively updated based on the calculated resultant calcium distribution from the previous iteration. The estimated statistical properties are used to generate a map of the partial calcium content in the calcified region. The volume of calcium in the calcified region is determined based on the map. The experimental results on a cardiac phantom, scanned 90 times using 15 different protocols, demonstrate that the proposed method is less sensitive to the partial volume effect and noise, with an average error of 9.5% (standard deviation (SD) of 5-7 mm(3)) compared with 67% (SD of 3-20 mm(3)) for conventional techniques. The high reproducibility of the proposed method for 35 patients, scanned twice using the same protocol at a minimum interval of 10 min, shows that the method provides 2-3 times lower interscan variation than conventional techniques.
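The iterative estimation step can be illustrated with a plain two-class Gaussian EM on toy voxel intensities, where the posterior membership of the high-intensity class stands in for a voxel's partial calcium content. This is only a generic EM sketch; the paper's modified EM (MEM) adds the region limiting and model details described above, which are not reproduced here:

```python
# Sketch: plain two-class Gaussian EM over voxel intensities. The posterior
# probability of the "calcium" class serves as a stand-in for partial calcium
# content; intensities and parameters below are toy values.
import math

def em_two_class(values, mu0, mu1, sigma, iters=50):
    """Return per-voxel posterior of class 1 and the updated class means."""
    def pdf(v, mu):
        return math.exp(-0.5 * ((v - mu) / sigma) ** 2)
    post = []
    for _ in range(iters):
        # E-step: posterior membership of the high-intensity (calcium) class.
        post = [pdf(v, mu1) / (pdf(v, mu0) + pdf(v, mu1)) for v in values]
        # M-step: re-estimate both class means from the soft assignments.
        w1 = sum(post)
        w0 = len(values) - w1
        mu1 = sum(p * v for p, v in zip(post, values)) / w1
        mu0 = sum((1 - p) * v for p, v in zip(post, values)) / w0
    return post, mu0, mu1

voxels = [10, 12, 11, 90, 95, 50]   # toy intensities: background, calcium, mixed
post, mu0, mu1 = em_two_class(voxels, mu0=0.0, mu1=100.0, sigma=15.0)
partial_volume = sum(post)          # soft (partial-content) count of calcium voxels
```

The mixed voxel (intensity 50) ends up with an intermediate posterior, which is exactly the partial-content behavior a hard threshold cannot provide.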


The effect of copper (Cu) filtration on image quality and dose in different digital X-ray systems was investigated. Two computed radiography systems and one digital radiography detector were used. Three different polymethylmethacrylate blocks simulated the pediatric body. The effect of Cu filters of 0.1, 0.2, and 0.3 mm thickness on the entrance surface dose (ESD) and the corresponding effective doses (EDs) was measured at tube voltages of 60, 66, and 73 kV. Image quality was evaluated in a contrast-detail phantom with automated analyzer software. Cu filters of 0.1, 0.2, and 0.3 mm thickness decreased the ESD by 25-32%, 32-39%, and 40-44%, respectively, the ranges depending on the respective tube voltages. There was no consistent decline in image quality due to increasing Cu filtration. The estimated ED of anterior-posterior (AP) chest projections was reduced by up to 23%. No relevant reduction in the ED was noted in AP radiographs of the abdomen and pelvis or in posterior-anterior radiographs of the chest. Cu filtration reduces the ESD, but generally does not reduce the effective dose. Cu filters can help protect radiosensitive superficial organs, such as the mammary glands in AP chest projections.


Forensic examinations of ink have been performed since the beginning of the 20th century. Since the 1960s, the International Ink Library, maintained by the United States Secret Service, has supported those analyses. Until 2009, the search and identification of inks were essentially performed manually. This paper describes the results of a project designed to improve the analytical and search processes for ink samples. The project focused on the development of improved standardization procedures to ensure the best possible reproducibility between analyses run on different HPTLC plates. The successful implementation of this new calibration method enabled the development of mathematical algorithms and a software package to complement the existing ink library.


We use numerical simulations to investigate how the chain length and topology of freely fluctuating knotted polymer rings affect their various spatial characteristics, such as the radius of the smallest sphere enclosing momentary configurations of simulated polymer chains. We describe how the average value of a characteristic changes with the chain size and how this change depends on the topology of the modeled polymers. Although the scaling profiles of a spatial characteristic for distinct knot types do not intersect (at least in the range of our data), the profiles for nontrivial knots intersect the corresponding profile obtained for phantom polymers, i.e., those that are free to explore all available topological states. For each knot type, this point of intersection defines its equilibrium length with respect to the spatial characteristic. At this chain length, a polymer forming a given knot type will not tend to increase or decrease, on average, the value of the spatial characteristic when the polymer is released from its topological constraint. We show interrelations between equilibrium lengths defined with respect to spatial characteristics of different character and observe that they are related to the lengths of ideal geometric configurations of the corresponding knot types.
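As an illustration of one such spatial characteristic, the radius of a sphere enclosing a momentary chain configuration can be approximated with Ritter's classic bounding-sphere algorithm. This is a standard approximation (it may slightly overestimate the minimal radius), not necessarily the method used in the paper:

```python
# Sketch: approximate radius of the smallest sphere enclosing a set of 3D
# points (e.g. the beads of a simulated polymer ring), via Ritter's algorithm.
import math

def enclosing_radius(points):
    dist = math.dist
    # Start from the two roughly most distant points.
    p = max(points, key=lambda q: dist(q, points[0]))
    q = max(points, key=lambda r: dist(r, p))
    center = [(a + b) / 2 for a, b in zip(p, q)]
    radius = dist(p, q) / 2
    # Grow the sphere to take in any point still left outside.
    for pt in points:
        d = dist(pt, center)
        if d > radius:
            radius = (radius + d) / 2
            shift = (d - radius) / d
            center = [c + (x - c) * shift for c, x in zip(center, pt)]
    return radius

# A unit square's corners in 3D: the smallest enclosing sphere has radius sqrt(2)/2.
r = enclosing_radius([(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)])
```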


Whole-body (WB) planar imaging has long been one of the staple methods of dosimetry, and its quantification has been formalized by the MIRD Committee in Pamphlet No. 16. One issue not specifically addressed in the formalism occurs when the count rates reaching the detector are sufficiently high to result in camera count saturation. Camera dead-time effects have been extensively studied, but all of the developed correction methods assume static acquisitions. However, during WB planar (sweep) imaging, a variable amount of imaged activity exists in the detector's field of view as a function of time, and therefore the camera saturation is time dependent. A new time-dependent algorithm was developed to correct for dead-time effects during WB planar acquisitions that accounts for relative motion between the detector heads and the imaged object. Static camera dead-time parameters were acquired by imaging decaying activity in a phantom and obtaining a saturation curve. Using these parameters, an iterative algorithm akin to Newton's method was developed, which takes into account the variable count rate seen by the detector as a function of time. The algorithm was tested on simulated data as well as on a whole-body scan of high-activity Samarium-153 in an ellipsoid phantom. A complete set of parameters necessary for count-rate-to-activity conversion, including build-up and attenuation coefficients, was also obtained from unsaturated phantom data in order to convert corrected count rate values to activity. The algorithm proved successful in accounting for motion- and time-dependent saturation effects in both the simulated and measured data and converged to any desired degree of precision. The clearance half-life calculated from the ellipsoid phantom data was 45.1 h after dead-time correction and 51.4 h with no correction; the physical decay half-life of Samarium-153 is 46.3 h.
Accurate WB planar dosimetry of high activities relies on successfully compensating for camera saturation in a way that takes into account the variable activity in the field of view, i.e. time-dependent dead-time effects. The algorithm presented here accomplishes this task.
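The static building block of such a correction can be sketched as follows, assuming a paralyzable dead-time model m = n·exp(-n·tau) inverted with Newton's method. The model choice and the value of tau are illustrative assumptions, not taken from the paper, whose algorithm additionally makes the rate time dependent as the camera sweeps over the patient:

```python
# Sketch: recover the true count rate n from an observed rate m under a
# paralyzable dead-time model m = n * exp(-n * tau), using Newton's method.
# tau is an illustrative dead-time value.
import math

def true_rate(m, tau, tol=1e-10):
    """Solve m = n*exp(-n*tau) for the physical (lower) branch, n*tau < 1."""
    n = m                                   # observed rate is a good first guess
    for _ in range(100):
        f = n * math.exp(-n * tau) - m      # residual of the model equation
        fp = (1 - n * tau) * math.exp(-n * tau)  # its derivative w.r.t. n
        step = f / fp
        n -= step
        if abs(step) < tol:
            break
    return n

tau = 5e-6                                  # 5 microsecond dead time (illustrative)
n_true = 80_000.0                           # true rate, counts per second
m_obs = n_true * math.exp(-n_true * tau)    # what a saturating camera would report
recovered = true_rate(m_obs, tau)           # recovers ~80,000 counts per second
```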


RATIONALE AND OBJECTIVES: To determine the optimum spatial resolution for imaging peripheral arteries with magnetic resonance angiography (MRA). MATERIALS AND METHODS: Eight vessel diameters ranging from 1.0 to 8.0 mm were simulated in a vascular phantom. A total of 40 three-dimensional FLASH MRA sequences were acquired with incremental variations of field of view, matrix size, and slice thickness. The eight accurately known diameters were combined pairwise to generate 22 "exact" degrees of stenosis ranging from 42% to 87%. Then, the diameters were measured in the MRA images by three independent observers and with quantitative angiography (QA) software and used to compute the degrees of stenosis corresponding to the 22 "exact" ones. The accuracy and reproducibility of vessel diameter measurements and stenosis calculations were assessed for vessel sizes ranging from 6 to 8 mm (iliac artery), 4 to 5 mm (femoro-popliteal arteries), and 1 to 3 mm (infrapopliteal arteries). The maximum pixel dimension and slice thickness needed to obtain a mean error in stenosis evaluation of less than 10% were determined by linear regression analysis. RESULTS: Mean errors on stenosis quantification were 8.8% +/- 6.3% for 6- to 8-mm vessels, 15.5% +/- 8.2% for 4- to 5-mm vessels, and 18.9% +/- 7.5% for 1- to 3-mm vessels. Mean errors on stenosis calculation were 12.3% +/- 8.2% for observers and 11.4% +/- 15.1% for QA software (P = .0342). To evaluate stenosis with a mean error of less than 10%, the maximum pixel surface, pixel size in the phase direction, and slice thickness should be less than 1.56 mm2, 1.34 mm, and 1.70 mm, respectively (voxel size 2.65 mm3) for 6- to 8-mm vessels; 1.31 mm2, 1.10 mm, and 1.34 mm (voxel size 1.76 mm3) for 4- to 5-mm vessels; and 1.17 mm2, 0.90 mm, and 0.9 mm (voxel size 1.05 mm3) for 1- to 3-mm vessels. CONCLUSION: Higher spatial resolution than currently used should be selected for imaging peripheral vessels.
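The pairwise construction of "exact" stenoses can be sketched with the conventional diameter-reduction formula. Note that eight diameters yield 28 pairs in total, whereas the study retained 22 of them; the sketch simply enumerates them all:

```python
# Sketch: "exact" degrees of stenosis from pairwise combinations of the
# phantom's known vessel diameters, using the usual diameter-reduction
# formula 100 * (1 - d_residual / d_reference).
from itertools import combinations

def stenosis(d_residual, d_reference):
    """Percent diameter reduction of the residual lumen vs. the reference vessel."""
    return 100.0 * (1.0 - d_residual / d_reference)

diameters = [1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0]  # mm, illustrative set
pairs = [stenosis(min(a, b), max(a, b)) for a, b in combinations(diameters, 2)]
worst = max(pairs)   # 1.0 mm residual lumen inside an 8.0 mm vessel -> 87.5%
```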


The large spatial inhomogeneity in the transmit B1 field (B1+) observable in human MR images at high static magnetic fields (B0) severely impairs image quality. To overcome this effect in brain T1-weighted images, the MPRAGE sequence was modified to generate two different images at two different inversion times (MP2RAGE). By combining the two images in a novel fashion, it was possible to create T1-weighted images free of proton density contrast, T2* contrast, reception bias field and, to first order, transmit field inhomogeneity. MP2RAGE sequence parameters were optimized using Bloch equations to maximize the contrast-to-noise ratio per unit of time between brain tissues and to minimize the effect of B1+ variations through space. Images of high anatomical quality and excellent brain tissue differentiation, suitable for applications such as segmentation and voxel-based morphometry, were obtained at 3 and 7 T. From such T1-weighted images, acquired within 12 min, high-resolution 3D T1 maps were routinely calculated at 7 T with sub-millimeter voxel resolution (0.65-0.85 mm isotropic). T1 maps were validated in phantom experiments. In humans, the T1 values obtained at 7 T were 1.15 +/- 0.06 s for white matter (WM) and 1.92 +/- 0.16 s for grey matter (GM), in good agreement with literature values obtained at lower spatial resolution. At 3 T, where whole-brain acquisitions with 1 mm isotropic voxels were acquired in 8 min, the T1 values obtained (0.81 +/- 0.03 s for WM and 1.35 +/- 0.05 s for GM) were once again found to be in very good agreement with values in the literature.
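The combination of the two inversion-time images can be sketched per voxel as follows. The formula Re(conj(S1)·S2)/(|S1|² + |S2|²) is the widely reported MP2RAGE combination, bounded in [-0.5, 0.5]; the complex signal values below are illustrative only:

```python
# Sketch: MP2RAGE-style combination of the two inversion-time signals into a
# "uniform" value. A common multiplicative factor (proton density, T2*
# weighting, coil sensitivity) cancels in the ratio, which is the core idea.

def mp2rage_combine(s1, s2, eps=1e-12):
    """Combine two complex voxel signals into the bounded uniform MP2RAGE value."""
    num = (s1.conjugate() * s2).real
    den = abs(s1) ** 2 + abs(s2) ** 2
    return num / (den + eps)            # eps guards against empty voxels

# A shared factor (e.g. reception bias) cancels out: scaling both signals by 10
# leaves the combined value unchanged.
a = mp2rage_combine(2.0 + 1.0j, -1.0 + 0.5j)
b = mp2rage_combine(10 * (2.0 + 1.0j), 10 * (-1.0 + 0.5j))
```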


The treatment of some cancer patients has shifted from traditional, non-specific cytotoxic chemotherapy to chronic treatment with molecular targeted therapies. Imatinib mesylate, a selective tyrosine kinase inhibitor (TKI), is the most prominent example of this new era and has opened the way to the development of several additional TKIs, including sunitinib, nilotinib, dasatinib, sorafenib and lapatinib, in the treatment of various hematological malignancies and solid tumors. All these agents are characterized by important inter-individual pharmacokinetic variability, are at risk for drug interactions, and are not devoid of toxicity. Additionally, they are administered for prolonged periods, so careful monitoring of their plasma exposure via Therapeutic Drug Monitoring (TDM) is anticipated to be an important component of patients' follow-up. We have developed a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method requiring 100 µL of plasma for the simultaneous determination of the six major TKIs currently in use. Plasma is purified by protein precipitation and the supernatant is diluted in 20 mM ammonium formate (pH 4.0) 1:2. Reverse-phase chromatographic separation of TKIs is obtained using a gradient elution of 20 mM ammonium formate pH 2.2 and acetonitrile containing 1% formic acid, followed by rinsing and re-equilibration to the initial solvent composition up to 20 min. Analyte quantification, using matrix-matched calibration samples, is performed by electrospray ionization-triple quadrupole mass spectrometry with selected reaction monitoring detection in the positive mode. The method was validated according to FDA recommendations, including assessment of extraction yield, matrix effect variability (<9.6%), overall process efficiency (87.1-104.2%), as well as the short- and long-term stability of TKIs in plasma.
The method is precise (inter-day CV: 1.3-9.4%), accurate (-9.2 to +9.9%) and sensitive (lower limits of quantification between 1 and 10 ng/mL). This is the first broad-range LC-MS/MS assay covering the major TKIs currently in use. It is an improvement over previous methods in terms of convenience (a single extraction procedure for six major TKIs, significantly reducing analytical time), sensitivity, selectivity and throughput. It may contribute to filling the current knowledge gaps in the pharmacokinetic/pharmacodynamic relationships of the TKIs developed after imatinib and to better defining their therapeutic ranges in different patient populations, in order to evaluate whether systematic TDM-guided dose adjustment of these anticancer drugs could minimize the risk of major adverse reactions and increase the probability of an efficient, long-lasting therapeutic response.


Tractography is a class of algorithms aiming to map in vivo the major neuronal pathways in the white matter from diffusion magnetic resonance imaging (MRI) data. These techniques offer a powerful tool to noninvasively investigate, at the macroscopic scale, the architecture of the neuronal connections of the brain. Unfortunately, the reconstructions recovered with existing tractography algorithms are not really quantitative, even though diffusion MRI is a quantitative modality by nature. In fact, several techniques have been proposed in recent years to estimate, at the voxel level, intrinsic microstructural features of the tissue, such as axonal density and diameter, by using multicompartment models. In this paper, we present a novel framework to reestablish the link between tractography and tissue microstructure. Starting from an input set of candidate fiber tracts, which are estimated from the data using standard fiber-tracking techniques, we model the diffusion MRI signal in each voxel of the image as a linear combination of the restricted and hindered contributions generated in every location of the brain by these candidate tracts. Then, we seek the global weight of each of them, i.e., the effective contribution or volume, such that they best fit the measured signal globally. We demonstrate that these weights can be easily recovered by solving a global convex optimization problem using efficient algorithms. The effectiveness of our approach has been evaluated both on a realistic phantom with known ground truth and on in vivo brain data. Results clearly demonstrate the benefits of the proposed formulation, opening new perspectives for a more quantitative and biologically plausible assessment of the structural connectivity of the brain.
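The global fitting step can be illustrated with a toy example: the measured signal y is modeled as a nonnegative combination of per-tract contributions (the columns of A), and the weights are recovered by a small nonnegative coordinate-descent solver. The matrix and signal are invented for illustration and the solver merely stands in for the efficient convex solvers the paper relies on:

```python
# Sketch: recover nonnegative tract contributions w from a measured signal
# y ~ A @ w, where column j of A holds the signal candidate tract j alone
# would generate in each voxel. Solved by cyclic nonnegative coordinate descent.

def nn_coordinate_descent(A, y, iters=500):
    cols = len(A[0])
    w = [0.0] * cols
    for _ in range(iters):
        for j in range(cols):
            # Residual with tract j removed, then the optimal nonnegative w[j].
            r = [yi - sum(A[i][k] * w[k] for k in range(cols) if k != j)
                 for i, yi in enumerate(y)]
            num = sum(A[i][j] * r[i] for i in range(len(y)))
            den = sum(A[i][j] ** 2 for i in range(len(y)))
            w[j] = max(0.0, num / den)
    return w

A = [[1.0, 0.0],   # voxel 1: crossed only by tract 0
     [1.0, 1.0],   # voxel 2: crossed by both tracts
     [0.0, 1.0]]   # voxel 3: crossed only by tract 1
y = [2.0, 5.0, 3.0]                # exactly 2*tract0 + 3*tract1
w = nn_coordinate_descent(A, y)    # converges to weights [2.0, 3.0]
```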


The quantity of interest for high-energy photon beam therapy recommended by most dosimetric protocols is the absorbed dose to water. Thus, ionization chambers are calibrated in absorbed dose to water, which is the same quantity as that calculated by most treatment planning systems (TPS). However, when measurements are performed in a low-density medium, the presence of the ionization chamber generates a perturbation at the level of the secondary particle range. Therefore, the measured quantity is close to the absorbed dose to a volume of water equivalent to the chamber volume. This quantity is not equivalent to the dose calculated by a TPS, which is the absorbed dose to an infinitesimally small volume of water. This phenomenon can lead to an overestimation of the absorbed dose measured with an ionization chamber of up to 40% in extreme cases. In this paper, we propose a method to calculate correction factors based on Monte Carlo simulations. These correction factors are obtained as the ratio of D̄(w,Q,V1)^low, the absorbed dose to water in a low-density medium averaged over a scoring volume V1 for a geometry where V1 is filled with the low-density medium, to D̄(w,Q,V2)^low, the absorbed dose to water averaged over a volume V2 for a geometry where V2 is filled with water. In the Monte Carlo simulations, D̄(w,Q,V2)^low is obtained by replacing the volume of the ionization chamber with an equivalent volume of water, in keeping with the definition of the absorbed dose to water. The method is validated in two different configurations, which allowed us to study the behavior of this correction factor as a function of depth in the phantom, photon beam energy, phantom density and field size.
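Numerically, forming such a correction factor amounts to taking the ratio of the two Monte Carlo dose tallies, with the statistical uncertainties of the (uncorrelated) tallies combined in quadrature. A minimal sketch with illustrative dose values, not data from the paper:

```python
# Sketch: correction factor as the ratio of two Monte Carlo dose tallies,
# with its relative statistical uncertainty propagated in quadrature.
# d_v1: dose averaged over V1 (V1 filled with the low-density medium);
# d_v2: dose averaged over V2 (V2 filled with water). Values are illustrative.

def correction_factor(d_v1, u_v1, d_v2, u_v2):
    """Ratio of averaged doses and its relative uncertainty (uncorrelated tallies)."""
    k = d_v1 / d_v2
    rel = (u_v1 ** 2 + u_v2 ** 2) ** 0.5   # relative uncertainties in quadrature
    return k, rel

k, rel = correction_factor(d_v1=0.72, u_v1=0.004, d_v2=1.00, u_v2=0.003)
# k -> 0.72 with a 0.5% relative statistical uncertainty for these toy tallies
```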