998 results for ADDITIONAL MEASUREMENTS


Relevance:

40.00%

Abstract:

We present further %CaCO3 data from Site U1313 across the Pliocene-Pleistocene intensification of Northern Hemisphere glaciation. These data were measured on the U1313 secondary splice. We also present tie points between the primary and secondary splices for this interval, based on graphical tuning of L* (sediment lightness).

Relevance:

30.00%

Abstract:

We present the first simultaneous measurements with the Thomson scattering and electron cyclotron emission radiometer diagnostics performed at the TCABR tokamak with Alfvén wave heating. The Thomson scattering diagnostic is an upgraded version of the one previously installed at the ISTTOK tokamak, while the electron cyclotron emission diagnostic employs a heterodyne sweeping radiometer. For purely Ohmic discharges, the electron temperature measurements from both diagnostics are in good agreement. Additional Alfvén wave heating does not affect the capability of the Thomson scattering diagnostic to measure the instantaneous electron temperature, whereas measurements from the electron cyclotron emission radiometer become underestimates of the actual temperature values. (C) 2010 American Institute of Physics. [doi:10.1063/1.3494379]

Relevance:

30.00%

Abstract:

We present a new analysis of J/psi production yields in deuteron-gold collisions at √s_NN = 200 GeV using data taken with the PHENIX experiment in 2003 and previously published in S. S. Adler et al. [Phys. Rev. Lett. 96, 012304 (2006)]. The high-statistics proton-proton J/psi data taken in 2005 are used to improve the baseline measurement and thus construct updated cold nuclear matter modification factors (R_dAu). A suppression of J/psi in cold nuclear matter is observed as one goes forward in rapidity (in the deuteron-going direction), corresponding to a region more sensitive to initial-state low-x gluons in the gold nucleus. The measured nuclear modification factors are compared to theoretical calculations of nuclear shadowing to which a J/psi (or precursor) breakup cross section is added. Breakup cross sections of sigma_breakup = 2.8 (-1.4/+1.7) mb and 2.2 (-1.5/+1.6) mb are obtained by fitting these calculations to the data using two different models of nuclear shadowing. These breakup cross-section values are consistent, within large uncertainties, with the 4.2 ± 0.5 mb determined at lower collision energies. Projecting this range of cold nuclear matter effects to copper-copper and gold-gold collisions reveals that the current constraints are not sufficient to firmly quantify the additional hot nuclear matter effect.
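The nuclear modification factor quoted above is, at its core, a ratio of the measured d+Au yield to the binary-collision-scaled p+p yield. A minimal sketch in Python, with invented per-event yields and an assumed ⟨N_coll⟩ (none of these numbers are PHENIX results):

```python
def nuclear_modification_factor(yield_dau, yield_pp, n_coll):
    """R_dAu = (d+Au yield) / (<N_coll> x p+p yield); values below 1
    indicate suppression relative to scaled p+p production."""
    return yield_dau / (n_coll * yield_pp)

# Invented per-event yields and an assumed <N_coll>, for
# illustration only.
r_dau = nuclear_modification_factor(4.5e-6, 6.0e-7, 7.6)
```

With these toy inputs R_dAu comes out just below 1, the kind of mild forward-rapidity suppression the abstract describes.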

Relevance:

30.00%

Abstract:

In this work, kriging with covariates is used to model and map the spatial distribution of salinity measurements gathered by an autonomous underwater vehicle in a sea outfall monitoring campaign, with the aim of distinguishing the effluent plume from the receiving waters and characterizing its spatial variability in the vicinity of the discharge. Four different geostatistical linear models for salinity were assumed, where the distance to the diffuser, the west-east positioning, and the south-north positioning were used as covariates. Sample variograms were fitted by Matérn models using weighted least squares and maximum likelihood estimation methods as a way to detect eventual discrepancies. Typically, the maximum likelihood method estimated very low ranges, which limited the kriging process. So, at least for these data sets, weighted least squares proved to be the more appropriate estimation method for variogram fitting. The kriged maps clearly show the spatial variation of salinity, and it is possible to identify the effluent plume in the area studied. The results obtained suggest some guidelines for sewage monitoring when a geostatistical analysis of the data is intended: it is important to treat properly the existence of anomalous values and to adopt a sampling strategy that includes transects parallel and perpendicular to the effluent dispersion.
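Variogram fitting by weighted least squares, the approach the study found more reliable, can be sketched with a small grid search over the parameters of an exponential model (a Matérn model with smoothness 0.5). Everything below — data, parameter grids, pair counts — is a synthetic illustration, not the study's pipeline:

```python
import numpy as np

def exp_variogram(h, nugget, sill, rng):
    """Exponential variogram (Matérn model with smoothness 0.5)."""
    return nugget + sill * (1.0 - np.exp(-h / rng))

def fit_wls(h, gamma_hat, n_pairs, nuggets, sills, ranges):
    """Grid-search WLS fit: each lag's squared residual is weighted
    by its number of point pairs."""
    best, best_sse = None, np.inf
    for nug in nuggets:
        for sill in sills:
            for rng in ranges:
                resid = gamma_hat - exp_variogram(h, nug, sill, rng)
                sse = float(np.sum(n_pairs * resid ** 2))
                if sse < best_sse:
                    best, best_sse = (nug, sill, rng), sse
    return best

# Synthetic sample variogram generated from a known model
# (nugget 0, sill 1, range 50 m) with equal pair counts per lag.
h = np.linspace(5.0, 200.0, 20)
gamma_hat = exp_variogram(h, 0.0, 1.0, 50.0)
n_pairs = np.full_like(h, 30.0)
nugget, sill, rng = fit_wls(h, gamma_hat, n_pairs,
                            np.linspace(0.0, 0.5, 11),
                            np.linspace(0.1, 2.0, 20),
                            np.linspace(10.0, 100.0, 19))
```

On this clean synthetic variogram the grid search recovers the generating parameters; with real, noisy sample variograms one would use a proper optimizer rather than a coarse grid.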

Relevance:

30.00%

Abstract:

INTRODUCTION: Hip fractures are responsible for excessive mortality, decreasing the 5-year survival rate by about 20%. From an economic perspective, they represent a major source of expense, with direct costs in hospitalization, rehabilitation, and institutionalization. The incidence rate sharply increases after the age of 70, but it can be reduced in women aged 70-80 years by therapeutic interventions. Recent analyses suggest that the most efficient strategy is to implement such interventions in women at the age of 70 years. As several guidelines recommend bone mineral density (BMD) screening of postmenopausal women with clinical risk factors, our objective was to assess the cost-effectiveness of two screening strategies applied to elderly women aged 70 years and older. METHODS: A cost-effectiveness analysis was performed using decision-tree analysis and a Markov model. Two alternative strategies, one measuring BMD of all women, and one measuring BMD only of those having at least one risk factor, were compared with the reference strategy "no screening". Cost-effectiveness ratios were measured as cost per year gained without hip fracture. Most probabilities were based on data observed in EPIDOS, SEMOF and OFELY cohorts. RESULTS: In this model, which is mostly based on observed data, the strategy "screen all" was more cost effective than "screen women at risk." For one woman screened at the age of 70 and followed for 10 years, the incremental (additional) cost-effectiveness ratio of these two strategies compared with the reference was 4,235 euros and 8,290 euros, respectively. CONCLUSION: The results of this model, under the assumptions described in the paper, suggest that in women aged 70-80 years, screening all women with dual-energy X-ray absorptiometry (DXA) would be more effective than no screening or screening only women with at least one risk factor. 
Cost-effectiveness studies based on decision-analysis trees may be useful tools for helping decision makers, and further models based on different assumptions should be performed to improve the level of evidence on the cost-effectiveness ratios of the usual screening strategies for osteoporosis.
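The incremental cost-effectiveness ratios reported above follow the standard definition: extra cost divided by extra effect relative to a reference strategy. A minimal sketch, with hypothetical inputs rather than the EPIDOS/SEMOF/OFELY-derived values:

```python
def icer(cost_new, cost_ref, effect_new, effect_ref):
    """Incremental cost-effectiveness ratio: additional cost per
    additional unit of effect (here, a year without hip fracture)."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)

# Hypothetical numbers for illustration only: a screening strategy
# costing 500 euros more per woman and yielding 0.118 extra
# fracture-free years versus "no screening".
ratio = icer(500.0, 0.0, 0.118, 0.0)
```

A strategy "dominates" another when it is both cheaper and more effective, in which case the ICER is not reported at all.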

Relevance:

30.00%

Abstract:

Given the adverse impact of image noise on the perception of important clinical details in digital mammography, routine quality control measurements should include an evaluation of noise. The European Guidelines, for example, employ a second-order polynomial fit of pixel variance as a function of detector air kerma (DAK) to decompose noise into quantum, electronic and fixed pattern (FP) components and assess the DAK range where quantum noise dominates. This work examines the robustness of the polynomial method against an explicit noise decomposition method. The two methods were applied to variance and noise power spectrum (NPS) data from six digital mammography units. Twenty homogeneously exposed images were acquired with PMMA blocks for target DAKs ranging from 6.25 to 1600 µGy. Both methods were explored for the effects of data weighting and squared fit coefficients during the curve fitting, the influence of the additional filter material (2 mm Al versus 40 mm PMMA) and noise de-trending. Finally, spatial stationarity of noise was assessed.

Data weighting improved noise model fitting over large DAK ranges, especially at low detector exposures. The polynomial and explicit decompositions generally agreed for quantum and electronic noise, but the FP noise fraction was consistently underestimated by the polynomial method. Noise decomposition as a function of position in the image showed limited noise stationarity, especially for FP noise; thus the position of the region of interest (ROI) used for noise decomposition may influence fractional noise composition. The ROI area and position used in the Guidelines offer an acceptable estimation of noise components. While there are limitations to the polynomial model, when used with care and with appropriate data weighting, the method offers a simple and robust means of examining the detector noise components as a function of detector exposure.
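The second-order polynomial decomposition described above can be illustrated in a few lines: pixel variance is modeled as σ² = e + qK + fK², where the constant, linear and quadratic terms correspond to electronic, quantum and fixed-pattern noise. The data and coefficients below are synthetic, and 1/variance weighting stands in for the data weighting discussed in the abstract:

```python
import numpy as np

# Synthetic variance-vs-DAK data following the three-component model
# sigma^2(K) = e + q*K + f*K^2; the coefficients are made up.
dak = np.array([6.25, 12.5, 25.0, 50.0, 100.0, 200.0, 400.0, 800.0, 1600.0])
e_true, q_true, f_true = 4.0, 0.8, 2e-4
var = e_true + q_true * dak + f_true * dak ** 2

# Weighting by 1/variance keeps the low-exposure points from being
# swamped by the much larger variances at high DAK.
f_fit, q_fit, e_fit = np.polyfit(dak, var, deg=2, w=1.0 / var)

def noise_fractions(k):
    """Fractional electronic, quantum and fixed-pattern noise at DAK k."""
    total = e_fit + q_fit * k + f_fit * k ** 2
    return e_fit / total, q_fit * k / total, f_fit * k ** 2 / total
```

Note that `np.polyfit` returns coefficients highest power first, so the quadratic (FP) term comes out of the fit first; quantum noise dominates wherever the linear term is the largest of the three fractions.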

Relevance:

30.00%

Abstract:

Glioma cell lines are an important tool for research in basic and translational neuro-oncology. Documentation of their genetic identity has become a requirement for scientific journals and grant applications to exclude cross-contamination and misidentification that lead to misinterpretation of results. Here, we report the standard 16 marker short tandem repeat (STR) DNA fingerprints for a panel of 39 widely used glioma cell lines as reference. Comparison of the fingerprints among themselves and with the large DSMZ database comprising 9 marker STRs for 2278 cell lines uncovered 3 misidentified cell lines and confirmed previously known cross-contaminations. Furthermore, 2 glioma cell lines exhibited identity scores of 0.8, which is proposed as the cutoff for detecting cross-contamination. Additional characteristics, comprising lack of a B-raf mutation in one line and a similarity score of 1 with the original tumor tissue in the other, excluded a cross-contamination. Subsequent simulation procedures suggested that, when using DNA fingerprints comprising only 9 STR markers, the commonly used similarity score of 0.8 is not sufficiently stringent to unambiguously differentiate the origin. DNA fingerprints are confounded by frequent genetic alterations in cancer cell lines, particularly loss of heterozygosity, that reduce the informativeness of STR markers and, thereby, the overall power for distinction. The similarity score depends on the number of markers measured; thus, more markers or additional cell line characteristics, such as information on specific mutations, may be necessary to clarify the origin.
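The similarity scores discussed above are allele-sharing ratios computed over the STR markers two profiles have in common. A sketch, assuming the commonly used 2 × shared / (alleles in A + alleles in B) form; the marker names below are real STR loci, but the allele calls are invented for illustration:

```python
def str_similarity(profile_a, profile_b):
    """Allele-sharing score over common markers:
    2 x shared alleles / (alleles in A + alleles in B)."""
    shared = total = 0
    for marker in set(profile_a) & set(profile_b):
        alleles_a, alleles_b = profile_a[marker], profile_b[marker]
        shared += len(alleles_a & alleles_b)
        total += len(alleles_a) + len(alleles_b)
    return 2 * shared / total

# Toy three-marker profiles (allele calls invented).
a = {"TH01": {6, 9.3}, "D5S818": {11, 12}, "TPOX": {8}}
b = {"TH01": {6, 9.3}, "D5S818": {11}, "TPOX": {8, 11}}
score = str_similarity(a, b)  # lands exactly at the 0.8 cutoff
```

As the abstract argues, a score like this computed over only a few markers is easily distorted by loss of heterozygosity, which is why more markers or orthogonal evidence (e.g. mutation status) may be needed near the cutoff.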

Relevance:

30.00%

Abstract:

We analyze the constraints on the mass and mixing of a superstring-inspired E6 Z' neutral gauge boson that follow from the recent precise Z mass measurements and show that they depend very sensitively on the assumed value of the W mass and also, to a lesser extent, on the top-quark mass.

Relevance:

30.00%

Abstract:

Objectives: Acetate brain metabolism has the particularity of occurring specifically in glial cells. Labeling studies, using acetate labeled either with 13C (NMR) or 11C (PET), are governed by the same biochemical reactions and thus follow the same mathematical principles. In this study, the objective was to adapt an NMR acetate brain metabolism model to analyse [1-11C]acetate infusion in rats.

Methods: Brain acetate infusion experiments were modeled using a two-compartment model approach used in NMR [1-3]. The [1-11C]acetate labeling study was done using a beta scintillator [4]. The measured radioactive signal represents the time evolution of the sum of all labeled metabolites in the brain. Using a coincidence counter in parallel, an arterial input curve was measured. The 11C at position C-1 of acetate is metabolized in the first turn of the TCA cycle to position 5 of glutamate (Figure 1A). Through the neurotransmission process, it is further transported to position 5 of glutamine and position 5 of neuronal glutamate. After the second turn of the TCA cycle, tracer from [1-11C]acetate (and also a part from glial [5-11C]glutamate) is transferred to glial [1-11C]glutamate and further to [1-11C]glutamine and neuronal glutamate through the neurotransmission cycle.

Results: The standard acetate two-pool PET model describes the system by a plasma pool and a tissue pool linked by rate constants. Experimental data are not fully described with only one tissue compartment (Figure 1B). The modified NMR model was fitted successfully to tissue time-activity curves from 6 single animals, by varying the glial mitochondrial fluxes and the neurotransmission flux Vnt. A glial composite rate constant Kgtg = Vgtg/[Ace]plasma was extracted. Considering an average acetate concentration in plasma of 1 mmol/g [5] and the negligible additional amount injected, we found an average Vgtg = 0.08±0.02 (n = 6), in agreement with previous NMR measurements [1]. The tissue time-activity curve is dominated by glial glutamate and later by glutamine (Figure 1B). Labeling of neuronal pools has a low influence, at least for the 20 min of beta-probe acquisition. Given the high diffusivity of CO2 across the blood-brain barrier, 11CO2 is not predominant in the total tissue curve, even though the brain CO2 pool is large compared with other metabolites, owing to its strong dilution by unlabeled CO2 from neuronal metabolism and diffusion from plasma.

Conclusion: The two-compartment model presented here is also able to fit data from positron emission experiments and to extract specific glial metabolic fluxes. 11C-labeled acetate presents an alternative for faster measurements of glial oxidative metabolism compared to NMR, potentially applicable to human PET imaging. However, to quantify the relative value of the TCA cycle flux compared to the transmitochondrial flux, the chemical sensitivity of NMR is required. PET and NMR are thus complementary.

Brain poster session: oxidative mechanisms. Journal of Cerebral Blood Flow & Metabolism (2009) 29, S455-S466.
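The study's model is a multi-pool adaptation of an NMR acetate model; the basic PET building block such models extend is the one-tissue compartment, dCt/dt = K1·Cp(t) − k2·Ct(t). A minimal Euler-integration sketch with arbitrary rate constants (not the fitted fluxes above):

```python
import numpy as np

def one_tissue_model(t, c_plasma, k1, k2):
    """Euler integration of dCt/dt = k1*Cp(t) - k2*Ct(t), the
    standard one-tissue kinetic building block."""
    ct = np.zeros_like(t)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        ct[i] = ct[i - 1] + dt * (k1 * c_plasma[i - 1] - k2 * ct[i - 1])
    return ct

# Arbitrary rate constants and a constant plasma input; the tissue
# curve rises toward the equilibrium value k1/k2.
t = np.linspace(0.0, 50.0, 5001)
ct = one_tissue_model(t, np.ones_like(t), 0.1, 0.5)
```

The abstract's point is precisely that this single tissue compartment cannot reproduce the measured curves, so additional glial and neuronal pools (with their own rate constants) are chained onto it.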

Relevance:

30.00%

Abstract:

Sediment-water exchanges of oxygen, ammonium, nitrate, total dissolved nitrogen, phosphate and total dissolved phosphorus were measured by means of an in situ incubator of 7 l volume and 700 cm2 base area. The incubations lasted three hours and were done over a whole season on different kinds of sediments in Alfaques Bay. We present some preliminary results on: i) methodological aspects, ii) spatial and temporal variability of fluxes, and iii) estimates of the contribution of benthic nutrient regeneration relative to the total nutrient loading of the Bay. Oxygen uptake averaged 1700 mmol m-2 h-1 (range 200-3500); no differences were found between sandy and muddy sediments. The release of ammonium from the sediment averaged 70 mmol m-2 h-1 and was higher in muddy sediments than in sandy ones. Very low to null nitrate and nitrite fluxes and only small fluxes of organic nitrogen were detected. We conclude that ammonium release from the sediment is the major path of nitrogen regeneration. Some sediments removed dissolved reactive phosphorus (DRP) from the water and released dissolved organic phosphorus (DOP). Additional manipulative experiments revealed DRP release under particular conditions (turbulence, anoxia). From these data, we estimate that at least 50% of the nitrogen requirements of phytoplankton in the area may be supplied by benthic remineralization.
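Chamber fluxes of the kind reported above are computed from the concentration change over the incubation, scaled by chamber volume and base area. A sketch using the incubator geometry given in the abstract (7 l, 700 cm², 3 h) but an invented concentration change:

```python
def benthic_flux(c_start, c_end, hours, volume_l, area_cm2):
    """Flux in µmol m-2 h-1 from a concentration change in µmol/l
    inside a closed chamber of known volume and base area."""
    exchanged = (c_end - c_start) * volume_l   # µmol gained or lost
    area_m2 = area_cm2 / 1e4                   # cm2 -> m2
    return exchanged / area_m2 / hours

# Invented example: ammonium rising by 0.21 µmol/l over a 3 h
# incubation in the 7 l, 700 cm2 chamber described above.
flux = benthic_flux(10.00, 10.21, 3.0, 7.0, 700.0)
```

This evaluates to 7 µmol m⁻² h⁻¹; a negative result would indicate uptake by the sediment, as observed for DRP at some stations.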

Relevance:

30.00%

Abstract:

Two goals were pursued in this research: first, to evaluate statistically some effects of sample preparation and instrument geometry on the reproducibility of X-ray diffraction intensity data; and second, to develop a procedure for finding the minimum peak and background counting times for a desired level of accuracy. The ratio of calcite to dolomite in limestones was determined in trials. Ultra-fine wet grinding of the limestone in a porcelain impact-type ball mill gave the most consistent X-ray results, but caused considerable line broadening, and peaks were best measured on an area-count basis. Sample spinning reduced variance by about one third, and a coarse-beam, medium-detector-slit arrangement was found to be best. An equation is developed relating the coefficient of variation of a count ratio to peak and background counts. By use of the equation or graphs, the minimum coefficient of variation can be predicted from one fast scan, and the number and optimum arrangement of additional counting periods needed to reduce variation to a desired limit may be obtained. The calculated coefficient is the maximum which may be attributed to the counting statistic but does not include experimental deviations.
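The counting-statistics reasoning above rests on Poisson errors: the variance of a background-corrected count is the sum of peak and background counts, and the relative variances of two independent net counts add in quadrature for their ratio. A sketch of these textbook relations (not necessarily the exact equation developed in the paper):

```python
import math

def cv_net_count(peak, background):
    """Coefficient of variation of a background-corrected count.
    Poisson statistics: var(net) = peak + background."""
    net = peak - background
    return math.sqrt(peak + background) / net

def cv_count_ratio(p1, b1, p2, b2):
    """CV of the ratio of two independent net counts: relative
    variances add in quadrature."""
    return math.sqrt(cv_net_count(p1, b1) ** 2 + cv_net_count(p2, b2) ** 2)

# Example: 10000 peak counts over a 400-count background give a
# net-count CV of about 1%; a ratio of two such measurements is
# about sqrt(2) times noisier.
cv_single = cv_net_count(10_000, 400)
cv_ratio = cv_count_ratio(10_000, 400, 10_000, 400)
```

Inverting these formulas for a target CV is what fixes the minimum counting times: counts scale linearly with time, so halving the CV requires roughly four times the counts.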

Relevance:

30.00%

Abstract:

OBJECTIVE: Most studies assess the prevalence of hypertension in pediatric populations based on blood pressure (BP) readings taken on a single visit. We determined the prevalence of hypertension measured on up to three visits in a Swiss pediatric population and examined the association between hypertension and overweight and selected other factors. METHODS: Anthropometric data and BP were measured in all children of the sixth school grade of the Vaud canton (Switzerland) in 2005-2006. 'Elevated BP' was defined according to sex-specific, age-specific and height-specific US reference data. BP was measured on up to two additional visits in children with elevated BP. 'Hypertension' was defined as 'elevated BP' on all three visits. RESULTS: Out of 6873 children, 5207 (76%) participated [2621 boys, 2586 girls; mean (SD) age, 12.3 (0.5) years]. The prevalence of elevated BP was 11.4, 3.8 and 2.2% on the first, second and third visits, respectively; hence 2.2% had hypertension. Among hypertensive children, 81% had isolated systolic hypertension. Hypertension was associated with excess body weight, elevated heart rate and a parental history of hypertension. Of the children, 16.1% of boys and 12.4% of girls were overweight or obese (CDC criteria, body mass index ≥ 85th percentile). Thirty-seven percent of cases of hypertension could be attributed to overweight or obesity. CONCLUSIONS: The proportion of children with elevated BP based on one visit was five times higher than that based on three measurements taken at intervals of a few weeks. Our data re-emphasize the need for prevention and control of overweight in children to curb the global hypertension burden.
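The statement that 37% of hypertension cases could be attributed to overweight or obesity is an attributable-fraction calculation. A sketch using Levin's population attributable fraction with invented prevalence and relative-risk values (not the study's):

```python
def attributable_fraction(prevalence_exposed, relative_risk):
    """Levin's population attributable fraction:
    PAF = p(RR - 1) / (1 + p(RR - 1))."""
    excess = prevalence_exposed * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Invented illustration: 20% exposure prevalence and a relative
# risk of 4 give a PAF of 37.5%.
paf = attributable_fraction(0.20, 4.0)
```

The PAF is read as the share of cases that would disappear if the exposure (here, excess weight) were removed, under the assumption that the association is causal.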

Relevance:

30.00%

Abstract:

Intercontinental Transport of Ozone and Precursors (ITOP) (part of the International Consortium for Atmospheric Research on Transport and Transformation (ICARTT)) was an intense research effort to measure long-range transport of pollution across the North Atlantic and its impact on O3 production. During the aircraft campaign, plumes were encountered containing large concentrations of CO plus other tracers and aerosols from forest fires in Alaska and Canada. A chemical transport model, p-TOMCAT, and new biomass burning emissions inventories are used to study the emissions' long-range transport and their impact on the tropospheric O3 budget. The fire plume structure is modeled well over long distances until it encounters convection over Europe. The CO values within the simulated plumes closely match aircraft measurements near North America and over the Atlantic and agree well with MOPITT CO data. O3 and NOx values were initially too high in the model plumes. However, by including additional vertical mixing of O3 above the fires, and using a lower NO2/CO emission ratio (0.008) for boreal fires, O3 concentrations are brought closer to aircraft measurements, with NO2 closer to SCIAMACHY data. Too little PAN is produced within the simulated plumes, and the simplicity of our VOC scheme may be another reason for the O3 and NOx model-data discrepancies. In the p-TOMCAT simulations the fire emissions lead to increased tropospheric O3 over North America, the North Atlantic and western Europe from photochemical production and transport. The increased O3 over the Northern Hemisphere in the simulations reaches a peak in July 2004 in the range 2.0 to 6.2 Tg over a baseline of about 150 Tg.

Relevance:

30.00%

Abstract:

The success of matrix-assisted laser desorption/ionisation (MALDI) in fields such as proteomics has been due partly, but not exclusively, to the development of improved data acquisition and sample preparation techniques. This has been required to overcome some of the shortcomings of the commonly used solid-state MALDI matrices such as α-cyano-4-hydroxycinnamic acid (CHCA) and 2,5-dihydroxybenzoic acid (DHB). Solid-state matrices form crystalline samples with highly inhomogeneous topography and morphology, which results in large fluctuations in analyte signal intensity from spot to spot and between positions within a spot. This means that efficient tuning of the mass spectrometer can be impeded and the use of MALDI MS for quantitative measurements is severely limited. Recently, new MALDI liquid matrices have been introduced which promise to be an effective alternative to crystalline matrices. Generally, the liquid matrices comprise either ionic liquid matrices (ILMs) or a usually viscous liquid matrix doped with a UV light-absorbing chromophore [1-3]. The advantages are that the droplet surface is smooth and relatively uniform, with the analyte homogeneously distributed within it. Liquid matrices can replenish a sampling position between shots, negating the need to search for sample hot-spots. Also, the liquid nature of the matrix allows the use of additional additives to change the environment to which the analyte is added.