955 results for METHOD-R


Relevance: 30.00%

Abstract:

Background: The b-value is the parameter characterizing the intensity of the diffusion weighting during image acquisition. Data acquisition is usually performed with a low b-value (b ~ 1000 s/mm2). Evidence shows that high b-values (b > 2000 s/mm2) are more sensitive to the slow diffusion compartment (SDC) and may be more sensitive in detecting white matter (WM) anomalies in schizophrenia. Methods: 12 male patients with schizophrenia (mean age 35 +/- 3 years) and 16 healthy male controls matched for age were scanned with a low b-value (1000 s/mm2) and a high b-value (4000 s/mm2) protocol. The apparent diffusion coefficient (ADC) is a measure of the average diffusion distance of water molecules per unit time (mm2/s). ADC maps were generated for all individuals. 8 regions of interest (frontal and parietal regions bilaterally, centrum semiovale bilaterally, and anterior and posterior corpus callosum) were manually traced blind to diagnosis. Results: ADC measures acquired with high b-value imaging were more sensitive in detecting differences between schizophrenia patients and healthy controls than low b-value imaging, with a gain in significance by a factor of 20-100 despite the lower image signal-to-noise ratio (SNR). Increased ADC was identified in patients' WM (p = 0.00015), with major contributions from the left and right centrum semiovale and, to a lesser extent, the right parietal region. Conclusions: Our results may be related to the sensitivity of high b-value imaging to the SDC, believed to reflect mainly the intra-axonal and myelin-bound water pool. High b-value imaging might be more sensitive and specific to WM anomalies in schizophrenia than low b-value imaging.
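The ADC maps described above follow from the standard monoexponential diffusion model, S_b = S_0 · exp(-b · ADC). A minimal sketch of the per-voxel calculation (the voxel signal values below are hypothetical, for illustration only):

```python
import numpy as np

def adc_map(s0, sb, b):
    """Apparent diffusion coefficient (mm^2/s) from the standard
    monoexponential DWI model: S_b = S_0 * exp(-b * ADC)."""
    s0 = np.asarray(s0, dtype=float)
    sb = np.asarray(sb, dtype=float)
    return -np.log(sb / s0) / b

# Hypothetical signals for two voxels, b = 1000 s/mm^2:
s0 = np.array([1000.0, 1200.0])  # unweighted (b = 0) signal
sb = np.array([450.0, 500.0])    # diffusion-weighted signal
print(adc_map(s0, sb, 1000.0))   # both ~0.8e-3 mm^2/s, typical for WM
```

With higher b-values the exponential attenuation is stronger, so the surviving signal is dominated by the slowly diffusing compartment, at the cost of SNR.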

Relevance: 30.00%

Abstract:

Purpose: Although several approaches have already been used to reduce radiation dose, CT doses remain among the highest in diagnostic radiology. Recently, General Electric introduced a new image reconstruction technique, adaptive statistical iterative reconstruction (ASIR), which takes into account the statistical fluctuation of noise. The benefits of the ASIR method were assessed through classic metrics and through radiologists' evaluations of cardiac structures. Methods and materials: A 64-row CT (MDCT) was employed. Catphan600 phantom acquisitions and 10 routine-dose CT examinations performed at 80 kVp were reconstructed with FBP and with 50% ASIR. Six radiologists then assessed the visibility of the main cardiac structures using the visual grading analysis (VGA) method. Results: On phantoms, for a constant SD value (25 HU), CTDIvol was halved (8 mGy to 4 mGy) when 50% ASIR was used. At constant CTDIvol, MTF medium frequencies were also significantly improved. First results indicated that clinical images reconstructed with ASIR had better overall image quality than conventional reconstruction. This means that at constant image quality the radiation dose can be strongly reduced. Conclusion: The first results of this study show that the ASIR method improves image quality on phantoms by decreasing noise and improving resolution with respect to classical FBP reconstruction. Moreover, the benefit obtained is higher at lower doses. In the clinical environment, a further dose reduction can be expected on 80 kVp low-dose pediatric protocols using 50% iterative reconstruction. The best ASIR percentage as a function of cardiac structure, together with detailed protocols, will be presented for cardiac examinations.

Relevance: 30.00%

Abstract:

The durability of concrete is one of the most important aspects of pavement life. Deterioration of the interstate portland cement concrete pavement has prompted various studies of factors which may contribute to the durability problem. Studies of cores taken from deteriorated areas indicated that the larger particles of coarse aggregate may contribute greatly to the problem. This indication was based mainly on analysis of the cracking pattern, which showed that most of the cracks passed through the larger aggregates and that the larger aggregate particles were more cracked than the smaller particles. The purpose of this project is to determine whether the size of the coarse aggregate has a bearing on the durability of freeze-and-thaw beams. A secondary purpose is to determine what effect the method of curing and the mix proportions have on the durability of freeze-and-thaw beams.

Relevance: 30.00%

Abstract:

Many times during the past four years we have seen ranges in the durability factor for a single coarse aggregate source that were too great to be explained by variations in the coarse aggregate alone. The durability test (ASTM C 666, Method B) as presently used is a test of the concrete system rather than of a particular coarse aggregate. An informal study of current durability factor data indicates that w/c ratio and/or air percentage may be critical to beam growth and durability factor. The purpose of this project, R-258, is to determine the extent to which variations in w/c ratio and air content affect beam growth and durability factor when other factors, including coarse aggregate gradation, are held constant.
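For reference, the durability factor reported by ASTM C 666 is DF = P·N/M, where P is the relative dynamic modulus of elasticity (%) at N cycles, N is the cycle at which exposure ended, and M is the specified test length (commonly 300 cycles). A small sketch, with illustrative numbers:

```python
def durability_factor(p_percent, n_cycles, m_cycles=300):
    """ASTM C 666 durability factor DF = P * N / M.
    p_percent: relative dynamic modulus (%) when exposure ended;
    n_cycles:  cycle count when P hit the failure limit or the test
               was terminated; m_cycles: specified length (often 300)."""
    return p_percent * n_cycles / m_cycles

# Beam surviving all 300 cycles at 85% relative dynamic modulus:
print(durability_factor(85.0, 300))  # 85.0
# Beam dropping to a 60% limit after 210 cycles:
print(durability_factor(60.0, 210))  # 42.0
```

Because DF folds together modulus loss and survival time, changes in w/c ratio or air content shift it even when the coarse aggregate is unchanged, which is exactly the confounding the project investigates.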

Relevance: 30.00%

Abstract:

One of the main problems of bridge maintenance in Iowa is the spalling and scaling of the decks. This problem stems from the continued use of deicing salts during the winter months. Since bridges frost or freeze more often than roadways, deicing salts are used on bridges more frequently. The salt spread onto the bridge dissolves in water and permeates into the concrete deck. When the salt reaches the depth of the reinforcing steel and the concentration at that depth reaches the threshold concentration for corrosion (1.5 lb/yd³), the steel begins to oxidize. The oxidizing steel then expands within the concrete. This expansion eventually forces undersurface fractures and spalls in the concrete. The spalling increases maintenance problems on bridges and in some cases has forced resurfacing after only a few years of service. There are two possible solutions to this problem: discontinuing the use of salts as the deicing agent on bridges, or preventing the salt from reaching or attacking the reinforcing steel. This report deals with one method that stops the salt from reaching the reinforcing steel: a waterproof membrane on the surface of the bridge deck. The waterproof membrane stops the water-salt solution from entering the concrete so the salt cannot reach the reinforcing steel.

Relevance: 30.00%

Abstract:

R commands to calculate secondary production estimates using the size-frequency method, after Hynes and Coleman (1968), Benke (1979), and Huryn (1996).
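The commands themselves are in R; as a rough illustration of the core size-frequency calculation they implement, here is a Python sketch under simplifying assumptions (geometric-mean mass between successive classes, no terminal-class loss term, hypothetical data). Conventions for negative losses and the terminal class vary among the cited papers:

```python
import math

def size_frequency_production(densities, masses, cpi_days=365.0):
    """Secondary production by the size-frequency method.
    densities: mean annual density per size class (ind/m^2), ascending size.
    masses:    mean individual mass per size class (mg).
    The loss between successive classes is weighted by the geometric mean
    of their individual masses, summed, multiplied by the number of size
    classes (Hynes & Coleman 1968), and corrected by 365/CPI when the
    cohort production interval differs from one year (Benke 1979)."""
    n = len(densities)
    loss_sum = 0.0
    for j in range(n - 1):
        dn = densities[j] - densities[j + 1]          # loss between classes
        w_bar = math.sqrt(masses[j] * masses[j + 1])  # geometric mean mass
        loss_sum += dn * w_bar
    return n * loss_sum * (365.0 / cpi_days)

# Hypothetical data for four size classes:
print(size_frequency_production([120.0, 80.0, 45.0, 20.0],
                                [0.1, 0.5, 1.5, 4.0]))  # mg/m^2/yr
```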

Relevance: 30.00%

Abstract:

A simple method for determining airborne monoethanolamine has been developed. Monoethanolamine determination has traditionally been difficult due to analytical separation problems. Even in recent sophisticated methods, this difficulty remains the major issue, often resulting in time-consuming sample preparation. Impregnated glass fiber filters were used for sampling. Desorption of monoethanolamine was followed by capillary GC analysis with nitrogen-phosphorus selective detection. Separation was achieved using a column suited to monoethanolamine (35% diphenyl and 65% dimethyl polysiloxane). The internal standard was quinoline. No derivatization steps were needed. The calibration range was 0.5-80 μg/mL with good correlation (R² = 0.996). Averaged overall precisions and accuracies were 4.8% and -7.8% for intraday (n = 30), and 10.5% and -5.9% for interday (n = 72). Mean recovery from spiked filters was 92.8% for the intraday variation and 94.1% for the interday variation. Monoethanolamine on stored spiked filters was stable for at least 4 weeks at 5°C. The newly developed method was applied among professional cleaners, and air concentrations (n = 4) were 0.42 and 0.17 mg/m³ for personal and 0.23 and 0.43 mg/m³ for stationary measurements. The method described here was simple, sensitive, and convenient in terms of both sampling and analysis.
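A calibration figure such as R² = 0.996 comes from an ordinary least-squares fit of detector response against standard concentration. A generic sketch (the standard concentrations and responses below are hypothetical, not the paper's data):

```python
import numpy as np

def calibration_fit(conc, response):
    """Least-squares linear calibration and its coefficient of
    determination R^2 = 1 - SS_res / SS_tot."""
    conc = np.asarray(conc, dtype=float)
    response = np.asarray(response, dtype=float)
    slope, intercept = np.polyfit(conc, response, 1)
    pred = slope * conc + intercept
    ss_res = np.sum((response - pred) ** 2)
    ss_tot = np.sum((response - response.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Hypothetical standards across a 0.5-80 ug/mL range:
conc = [0.5, 5, 10, 20, 40, 80]
resp = [0.02, 0.21, 0.40, 0.83, 1.61, 3.30]  # area ratio vs internal std
slope, intercept, r2 = calibration_fit(conc, resp)
print(slope, r2)
```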

Relevance: 30.00%

Abstract:

BACKGROUND: The activity of the renin-angiotensin system is usually evaluated as plasma renin activity (PRA, ng AI/ml per h), but the reproducibility of this enzymatic assay is notoriously poor. We compared the inter- and intralaboratory reproducibility of PRA with that of a new automated chemiluminescent assay, which allows direct quantification of immunoreactive renin [chemiluminescent immunoreactive renin (CLIR), microU/ml]. METHODS: Aliquots from six pooled plasmas of patients with very low to very high PRA levels were measured in 12 centres with both the enzymatic and the direct assay. The same methods were applied to three control plasma preparations with known renin content. RESULTS: In pooled plasmas, mean PRA values ranged from 0.14 +/- 0.08 to 18.9 +/- 4.1 ng AI/ml per h, whereas those of CLIR ranged from 4.2 +/- 1.7 to 436 +/- 47 microU/ml. In control plasmas, mean values of PRA and CLIR were always within the expected range. Overall, there was a significant correlation between the two methods (r = 0.73, P < 0.01). Similar correlations were found in plasmas subdivided into those with low, intermediate and high PRA. However, the coefficients of variation among laboratories were always higher for PRA than for CLIR, ranging from 59.4 to 17.1% for PRA and from 41.0 to 10.7% for CLIR (P < 0.01). The mean intralaboratory variability was also higher for PRA than for CLIR (8.5 versus 4.5%, P < 0.01). CONCLUSION: Measurement of renin with the chemiluminescent method is a reliable alternative to PRA, with the advantage of superior inter- and intralaboratory reproducibility.
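The reproducibility metric compared here is the coefficient of variation, CV = SD / mean × 100. A minimal sketch, with hypothetical per-laboratory results for one plasma pool:

```python
import statistics

def cv_percent(values):
    """Coefficient of variation (%) = sample SD / mean * 100, the metric
    used to compare inter- and intralaboratory reproducibility."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

# Hypothetical renin results (microU/ml) for one pool across six labs:
print(round(cv_percent([4.2, 3.1, 5.0, 4.8, 3.5, 4.6]), 1))  # 18.0
```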

Relevance: 30.00%

Abstract:

Ethyl glucuronide (EtG) is a minor, direct metabolite of ethanol. EtG is incorporated into the growing hair, allowing retrospective investigation of chronic alcohol abuse. In this study, we report the development and validation of a method using gas chromatography-negative chemical ionization tandem mass spectrometry (GC-NCI-MS/MS) for the quantification of EtG in hair. EtG was extracted from about 30 mg of hair by aqueous incubation and purified by solid-phase extraction (SPE) using mixed-mode extraction cartridges, followed by derivatization with perfluoropentanoic anhydride (PFPA). The analysis was performed in selected reaction monitoring (SRM) mode using the transitions m/z 347→163 (quantification) and m/z 347→119 (identification) for EtG, and m/z 352→163 for the internal standard EtG-d5. For validation, we prepared quality controls (QC) using hair samples taken post mortem from 2 subjects with a known history of alcoholism. These samples were confirmed by a proficiency test with 7 participating laboratories. The assay linearity of EtG was confirmed over the range from 8.4 to 259.4 pg/mg hair, with a coefficient of determination (r²) above 0.999. The limit of detection (LOD) was estimated at 3.0 pg/mg. The lower limit of quantification (LLOQ) of the method was fixed at 8.4 pg/mg. Repeatability and intermediate precision (relative standard deviation, RSD%), tested at 4 QC levels, were less than 13.2%. The analytical method was applied to several hair samples obtained from autopsy cases with a history of alcoholism and/or lesions caused by alcohol. EtG concentrations in hair ranged from 60 to 820 pg/mg hair.

Relevance: 30.00%

Abstract:

Background: The ratio of the rates of non-synonymous and synonymous substitution (dN/dS) is commonly used to estimate selection in coding sequences. It is often suggested that, all else being equal, dN/dS should be lower in populations with large effective size (Ne) due to increased efficacy of purifying selection. As Ne is difficult to measure directly, life history traits such as body mass, which is typically negatively associated with population size, have commonly been used as proxies in empirical tests of this hypothesis. However, evidence of whether the expected positive correlation between body mass and dN/dS is consistently observed is conflicting. Results: Employing whole-genome sequence data from 48 avian species, we assess the relationship between rates of molecular evolution and life history in birds. We find a negative correlation between dN/dS and body mass, contrary to the nearly neutral expectation. This raises the question of whether the correlation might be a methodological artefact. We therefore consider non-stationary base composition, divergence time and saturation in turn as possible explanations, but find no clear patterns. However, in striking contrast to dN/dS, the ratio of radical to conservative amino acid substitutions (Kr/Kc) correlates positively with body mass. Conclusions: Our results in principle accord with the notion that non-synonymous substitutions causing radical amino acid changes are more efficiently removed by selection in large populations, consistent with nearly neutral theory. These findings have implications for the use of dN/dS and suggest that caution is warranted when drawing conclusions about lineage-specific modes of protein evolution using this metric.
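To make the metric concrete: in the simplest counting (Nei-Gojobori-style) estimators, dN/dS compares substitutions per non-synonymous site with substitutions per synonymous site, each corrected for multiple hits. This toy sketch (hypothetical counts, not the paper's genome-scale pipeline) illustrates the calculation:

```python
import math

def dnds(nonsyn_subs, nonsyn_sites, syn_subs, syn_sites):
    """Crude counting-method dN/dS: proportion of substitutions per site
    in each class, Jukes-Cantor corrected, d = -3/4 * ln(1 - 4p/3)."""
    def jc(p):
        return -0.75 * math.log(1.0 - 4.0 * p / 3.0)
    dn = jc(nonsyn_subs / nonsyn_sites)
    ds = jc(syn_subs / syn_sites)
    return dn / ds

# Hypothetical counts from a pairwise alignment:
print(dnds(30, 700, 60, 300))  # well below 1 => purifying selection
```

Under the nearly neutral expectation, larger Ne purges slightly deleterious non-synonymous changes more efficiently, pushing this ratio down.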

Relevance: 30.00%

Abstract:

PURPOSE: Effective cancer treatment generally requires combination therapy. The combination of external beam therapy (XRT) with radiopharmaceutical therapy (RPT) requires accurate three-dimensional dose calculations to avoid toxicity and evaluate efficacy. We have developed and tested a treatment planning method, using the patient-specific three-dimensional dosimetry package 3D-RD, for sequentially combined RPT/XRT therapy designed to limit toxicity to organs at risk. METHODS AND MATERIALS: The biologic effective dose (BED) was used to translate voxelized RPT absorbed dose (D_RPT) values into a normalized total dose (or equivalent 2-Gy-fraction XRT absorbed dose) map, NTD_RPT. The BED was calculated numerically using an algorithmic approach, which enabled a more accurate calculation of BED and NTD_RPT. A treatment plan combining samarium-153 RPT and external beam therapy was designed to deliver a tumoricidal dose while delivering no more than 50 Gy of NTD_sum to the spinal cord of a patient with a paraspinal tumor. RESULTS: The average voxel NTD_RPT to tumor from RPT was 22.6 Gy (range, 1-85 Gy); the maximum spinal cord voxel NTD_RPT from RPT was 6.8 Gy. The combined-therapy NTD_sum to tumor was 71.5 Gy (range, 40-135 Gy) for a maximum voxel spinal cord NTD_sum equal to the maximum tolerated dose of 50 Gy. CONCLUSIONS: A method that enables real-time treatment planning of combined RPT/XRT has been developed. By implementing a more generalized conversion between the dose values from the two modalities and an activity-based treatment of partial-volume effects, the reliability of combination-therapy treatment planning has been expanded.
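The paper computes BED numerically per voxel with 3D-RD; for orientation, the textbook linear-quadratic conversion of a fractionated dose to its 2-Gy-fraction equivalent is NTD = D · (d + α/β) / (2 + α/β). A sketch with an assumed α/β (the numbers are illustrative, not the patient plan above):

```python
def ntd_2gy(total_dose_gy, dose_per_fraction_gy, alpha_beta_gy):
    """Normalized total dose in 2-Gy fractions (EQD2) from the
    linear-quadratic model: NTD = D * (d + a/b) / (2 + a/b).
    This is the simple fractionated-XRT form; RPT dose rates require
    the numerical BED treatment described in the abstract."""
    return total_dose_gy * ((dose_per_fraction_gy + alpha_beta_gy)
                            / (2.0 + alpha_beta_gy))

# 60 Gy delivered in 3-Gy fractions, late-responding tissue a/b = 3 Gy:
print(ntd_2gy(60.0, 3.0, 3.0))  # 72.0 Gy equivalent in 2-Gy fractions
```

Expressing both modalities on this common NTD scale is what allows the RPT and XRT contributions to be summed against a single organ-at-risk limit such as 50 Gy to cord.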

Relevance: 30.00%

Abstract:

The diagnosis of chronic inflammatory demyelinating polyneuropathy (CIDP) is based on a set of clinical and neurophysiological parameters. However, in clinical practice, CIDP remains difficult to diagnose in atypical cases. In the present study, 32 experts from 22 centers (the French CIDP study group) were individually asked to score four typical and seven atypical CIDP observations (TOs and AOs, respectively), reported by other physicians, according to the Delphi method. The diagnosis of CIDP was confirmed by the group in 96.9% of the TOs and 60.1% of the AOs (p < 0.0001). There was a positive correlation between consensus on the CIDP diagnosis and the demyelinating features (r = 0.82, p < 0.004). The European CIDP classification was used in 28.3% of the TOs and 18.2% of the AOs (p < 0.002). The French CIDP study group diagnostic strategy was used in 90% of the TOs and 61% of the AOs (p < 0.0001). In 3% of the TOs and 21.6% of the AOs, the experts had difficulty determining a final diagnosis due to a lack of information. This study shows that a set of criteria and a diagnostic strategy are not sufficient to reach a consensus on the diagnosis of atypical CIDP in clinical practice.

Relevance: 30.00%

Abstract:

This study aimed to compare O2 consumption (VO2) determined by the gas-exchange (VO2GE) and Fick (VO2F) methods in cardiac surgical patients. A total of 10 mechanically ventilated postoperative patients were studied prospectively. Thermodilution was performed using three randomly applied techniques: room-temperature saline injected at end expiration, room-temperature saline injected at random points in the respiratory cycle, and iced saline injected at end expiration. The influence of the number of thermodilution determinations was assessed by comparing results from 2 and 10 injections. The variability of VO2F was greater than that of VO2GE. There was no bias between VO2GE and VO2F values when using injectate at room temperature. Accuracy and precision were not improved by increasing the number of cardiac output determinations from 2 to 10. A significant bias was observed using ice-cold injectate, VO2F being 18.0 +/- 15.4 ml/min/m2 lower than VO2GE (p = 0.001). Published results comparing VO2F and VO2GE are discrepant; however, a significant bias was found in all studies using cold injectate, with lower VO2F values. We conclude that iced injectate should not be used to assess VO2 in critically ill patients.
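The Fick calculation combines thermodilution cardiac output with the arteriovenous O2 content difference, VO2 = CO × (CaO2 − CvO2) × 10, which is why any bias in the cardiac output measurement propagates directly into VO2F. A sketch with illustrative (hypothetical) patient values:

```python
def o2_content(hb_g_dl, sat_frac, po2_mmhg):
    """Blood O2 content (ml O2/dl): hemoglobin-bound (1.34 ml/g) plus
    dissolved (0.003 ml/dl per mmHg)."""
    return 1.34 * hb_g_dl * sat_frac + 0.003 * po2_mmhg

def vo2_fick(co_l_min, cao2_ml_dl, cvo2_ml_dl):
    """Fick O2 consumption (ml/min): cardiac output (L/min) times the
    arteriovenous O2 content difference; the factor 10 converts dl to L."""
    return co_l_min * (cao2_ml_dl - cvo2_ml_dl) * 10.0

cao2 = o2_content(14.0, 0.98, 95.0)  # arterial, ~18.7 ml/dl
cvo2 = o2_content(14.0, 0.73, 40.0)  # mixed venous, ~13.8 ml/dl
print(vo2_fick(5.0, cao2, cvo2))     # ~243 ml/min
```

An ice-cold injectate that overestimates cardiac output would proportionally distort the VO2F estimate, consistent with the bias reported above.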

Relevance: 30.00%

Abstract:

The objective of this work was to propose a way of using Tocher's method of clustering to obtain a matrix similar to the cophenetic matrix obtained for hierarchical methods, which would allow the calculation of a cophenetic correlation. To illustrate how the proposed cophenetic matrix is obtained, we used two dissimilarity matrices - one based on the generalized squared Mahalanobis distance and the other on the Euclidean distance - between 17 garlic cultivars, based on six morphological characters. Basically, the proposal for obtaining the cophenetic matrix was to use the average distances within and between clusters, after performing the clustering. A function in the R language was proposed to compute the cophenetic matrix for Tocher's method, and the empirical distribution of this correlation coefficient was briefly studied. For both dissimilarity measures, the values of cophenetic correlation obtained for Tocher's method were higher than those obtained with the hierarchical methods (Ward's algorithm and average linkage - UPGMA). Comparisons between clusterings made with agglomerative hierarchical methods and with Tocher's method can thus be performed using a common criterion: the correlation between the matrices of original and cophenetic distances.
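The proposed function is in R; the core idea can be sketched in Python as follows (a toy 4-item distance matrix and cluster assignment, assumed for illustration): each pair's cophenetic entry is the average original distance within its cluster, or between its two clusters, and the cophenetic correlation is the Pearson correlation between the upper triangles of the two matrices.

```python
import numpy as np

def tocher_cophenetic(d, clusters):
    """Cophenetic-like matrix for a Tocher clustering: the entry for a
    pair is the mean original distance within their shared cluster, or
    between their two clusters. d: symmetric distance matrix;
    clusters: one label per item."""
    clusters = np.asarray(clusters)
    labels = np.unique(clusters)
    avg = {}  # mean distance within (a == b) or between (a != b) clusters
    for a in labels:
        ia = np.where(clusters == a)[0]
        for b in labels:
            ib = np.where(clusters == b)[0]
            block = d[np.ix_(ia, ib)]
            if a == b:
                m = len(ia)  # off-diagonal mean; singleton clusters get 0
                avg[(a, b)] = block.sum() / (m * (m - 1)) if m > 1 else 0.0
            else:
                avg[(a, b)] = block.mean()
    n = d.shape[0]
    coph = np.zeros_like(d, dtype=float)
    for i in range(n):
        for j in range(n):
            if i != j:
                coph[i, j] = avg[(clusters[i], clusters[j])]
    return coph

def cophenetic_correlation(d, coph):
    """Pearson correlation between upper triangles of the original and
    cophenetic distance matrices."""
    iu = np.triu_indices_from(d, k=1)
    return np.corrcoef(d[iu], coph[iu])[0, 1]

d = np.array([[0, 1, 5, 6],
              [1, 0, 5, 5],
              [5, 5, 0, 2],
              [6, 5, 2, 0]], dtype=float)
coph = tocher_cophenetic(d, [0, 0, 1, 1])
print(cophenetic_correlation(d, coph))  # ~0.98 for this toy example
```

For hierarchical methods the analogous matrix comes from the merge heights of the dendrogram; using average within/between-cluster distances gives Tocher's non-hierarchical partition a comparable quantity.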