993 results for Measurements models


Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: In contrast to mammalian erythrocytes, which lose their nucleus and mitochondria during maturation, the erythrocytes of almost all other vertebrate species remain nucleated throughout their lifespan. However, little research has tested for the presence and functionality of mitochondria in these cells, especially in birds. Here, we investigated both points in erythrocytes of a common avian model, the zebra finch (Taeniopygia guttata). RESULTS: Transmission electron microscopy showed the presence of mitochondria in erythrocytes of this small passerine, especially after removal of haemoglobin interference. High-resolution respirometry revealed increased or decreased rates of oxygen consumption by erythrocytes in response to the addition of respiratory-chain substrates or inhibitors, respectively. Fluorometric assays confirmed the production of mitochondrial superoxide by avian erythrocytes. Interestingly, measurements of plasma oxidative markers indicated lower oxidative stress in zebra finch blood than in a size-matched mammalian model, the mouse. CONCLUSIONS: Altogether, these findings demonstrate that avian erythrocytes possess functional mitochondria in terms of respiratory activity and reactive oxygen species (ROS) production. Since blood oxidative stress was lower in our avian model than in a size-matched mammal, our results also challenge the idea that mitochondrial ROS production was one factor driving the loss of mitochondria in mammalian erythrocytes over the course of evolution. The ability to assess mitochondrial functioning in avian erythrocytes opens new perspectives for using birds as models in longitudinal studies of ageing via lifelong blood sampling of the same subjects.


BACKGROUND: We sought to improve upon previously published statistical modeling strategies for binary classification of dyslipidemia for general population screening purposes based on the waist-to-hip circumference ratio and body mass index anthropometric measurements. METHODS: Study subjects were participants in WHO-MONICA population-based surveys conducted in two Swiss regions. Outcome variables were based on the total serum cholesterol to high density lipoprotein cholesterol ratio. The other potential predictor variables were gender, age, current cigarette smoking, and hypertension. The models investigated were: (i) linear regression; (ii) logistic classification; (iii) regression trees; (iv) classification trees (iii and iv are collectively known as "CART"). Binary classification performance of the region-specific models was externally validated by classifying the subjects from the other region. RESULTS: Waist-to-hip circumference ratio and body mass index remained modest predictors of dyslipidemia. Correct classification rates for all models were 60-80%, with marked gender differences. Gender-specific models provided only small gains in classification. The external validations provided assurance about the stability of the models. CONCLUSIONS: There were no striking differences between either the algebraic (i, ii) vs. non-algebraic (iii, iv), or the regression (i, iii) vs. classification (ii, iv) modeling approaches. Anticipated advantages of the CART vs. simple additive linear and logistic models were less than expected in this particular application with a relatively small set of predictor variables. CART models may be more useful when considering main effects and interactions between larger sets of predictor variables.
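The external validation strategy described above (fit in one region, classify subjects from the other) can be sketched in a few lines. The sketch below uses a simple waist-to-hip-ratio cut-off classifier on synthetic subjects as a stand-in for the four models; every number and the data-generating rule are invented for illustration:

```python
import math
import random

random.seed(1)

def make_region(n, whr_shift):
    # synthetic subjects: (waist-to-hip ratio, dyslipidemia yes/no); invented model
    data = []
    for _ in range(n):
        whr = random.gauss(0.90 + whr_shift, 0.07)
        p = 1 / (1 + math.exp(-12 * (whr - 0.92)))  # hypothetical latent risk
        data.append((whr, 1 if random.random() < p else 0))
    return data

def fit_threshold(train):
    # pick the WHR cut-off maximising the correct classification rate
    best_t, best_acc = None, -1.0
    for t in [0.80 + 0.005 * k for k in range(60)]:
        acc = sum((whr >= t) == bool(y) for whr, y in train) / len(train)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

region_a = make_region(400, 0.0)
region_b = make_region(400, 0.01)  # slightly different population
cutoff = fit_threshold(region_a)
# external validation: classify the other region's subjects with the fitted cut-off
external_acc = sum((whr >= cutoff) == bool(y) for whr, y in region_b) / len(region_b)
```

As in the study, the externally validated rate stays well below perfect classification because the anthropometric predictor carries only a modest signal.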


A survey was undertaken among Swiss occupational health and safety specialists in 2004 to identify uses, difficulties, and possible developments of exposure models. Occupational hygienists (121), occupational physicians (169), and safety specialists (95) were surveyed with an in-depth questionnaire. The results indicate that models are not used much in practice in Switzerland and remain largely confined to research groups focusing on specific topics. However, various determinants of exposure are often considered important by professionals (emission rate, work activity), and are in some cases recorded and used (room parameters, operator activity). These parameters cannot be directly included in present models. Nevertheless, more than half of the occupational hygienists think it is important to develop quantitative exposure models. Among research institutions, however, there is considerable interest in using models to solve problems that are difficult to address with direct measurements, i.e., retrospective exposure assessment for specific clinical cases and prospective evaluation for new situations, or estimation of the effect of selected parameters. In a recent study of acute pulmonary toxicity following waterproofing-spray exposure, exposure models were used to reconstruct the exposure of a group of patients. Finally, in the context of exposure prediction, it is also worth noting that a measurement database has existed in Switzerland since 1991. [Authors]


Levels of circulating cardiac troponin I (cTnI) or T correlate with the extent of myocardial destruction after an acute myocardial infarction. Few studies analyzing this relation have employed a second-generation cTnI assay or cardiac magnetic resonance (CMR) as the imaging end point. In this post hoc study of the Efficacy of FX06 in the Prevention of Myocardial Reperfusion Injury (F.I.R.E.) trial, we aimed to determine the correlation between single-point cTnI measurements and CMR-estimated infarct size at 5 to 7 days and 4 months after a first-time ST-elevation myocardial infarction (STEMI), and to investigate whether cTnI provides independent prognostic information on infarct size at 4 months even after taking early infarct size into account. Two hundred twenty-seven patients with a first-time STEMI were included in F.I.R.E. All patients received primary percutaneous coronary intervention within 6 hours from onset of symptoms. cTnI was measured at 24 and 48 hours after admission. CMR was conducted within 1 week of the index event (5 to 7 days) and at 4 months. Pearson correlations (r) for infarct size and cTnI at 24 hours were r = 0.66 (5 days) and r = 0.63 (4 months); those for cTnI at 48 hours were r = 0.67 (5 days) and r = 0.65 (4 months). In a multiple regression analysis for predicting infarct size at 4 months (n = 141), cTnI and infarct location retained an independent prognostic role even after taking early infarct size into account. In conclusion, a single-point cTnI measurement taken early after a first-time STEMI is a useful marker for infarct size and might also supplement early CMR evaluation in predicting infarct size at 4 months.
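The correlations reported above are plain Pearson coefficients between a biomarker level and an imaging-derived infarct size. A minimal sketch, with made-up measurement pairs standing in for the trial data:

```python
import math

def pearson_r(xs, ys):
    # Pearson product-moment correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# illustrative pairs: (cTnI at 24 h, CMR infarct size in % of LV) -- invented values
ctni = [25, 60, 110, 180, 240, 310, 400]
infarct = [4, 9, 14, 22, 25, 33, 38]
r = pearson_r(ctni, infarct)
```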


Cultural variation in a population is affected by the rate at which cultural innovations occur, whether such innovations are preferred or eschewed, how they are transmitted between individuals in the population, and the size of the population. An innovation, such as a modification in an attribute of a handaxe, may be lost or may become a property of all handaxes, which we call "fixation of the innovation." Alternatively, several innovations may attain appreciable frequencies, in which case properties of the frequency distribution (for example, of handaxe measurements) become important. Here we apply the Moran model from the stochastic theory of population genetics to study the evolution of cultural innovations. We obtain the probability that an initially rare innovation becomes fixed, and the expected time this takes. When variation in cultural traits is due to recurrent innovation, copy error, and sampling from generation to generation, we describe properties of this variation, such as the level of heterogeneity expected in the population. For all of these, we determine the effect of the mode of social transmission: conformist, where each naïve newborn tends to copy the most popular variant; pro-novelty bias, where the newborn prefers a specific variant if it exists among those it samples; and one-to-many transmission, where the variant one individual carries is copied by all newborns while that individual remains alive. We compare our findings with those predicted by prevailing theories for rates of cultural change and the distribution of cultural variation.
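The fixation probability discussed above can be illustrated by direct simulation of the neutral Moran model, where standard theory gives a fixation probability of i/N for an innovation initially carried by i of N individuals. The sketch below (population size and trial count chosen arbitrarily) checks this numerically for i = 1, N = 20:

```python
import random

random.seed(0)

def moran_fixation(n_pop, n_innovators, trials):
    """Neutral Moran model: at each event one randomly chosen individual is
    copied and one randomly chosen individual is replaced by the copy.
    Returns the fraction of runs in which the innovation reaches fixation."""
    fixed = 0
    for _ in range(trials):
        i = n_innovators
        while 0 < i < n_pop:
            model_has_it = random.random() < i / n_pop   # variant of the copied individual
            dies_has_it = random.random() < i / n_pop    # variant of the replaced individual
            if model_has_it and not dies_has_it:
                i += 1
            elif dies_has_it and not model_has_it:
                i -= 1
        fixed += (i == n_pop)
    return fixed / trials

p_fix = moran_fixation(20, 1, 2000)  # neutral theory predicts 1/N = 0.05
```

The simulated value scatters around the 1/N prediction; biased transmission modes such as conformism would modify the per-event copying probabilities.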


We consider two fundamental properties in the analysis of two-way tables of positive data: the principle of distributional equivalence, one of the cornerstones of correspondence analysis of contingency tables, and the principle of subcompositional coherence, which forms the basis of compositional data analysis. For an analysis to be subcompositionally coherent, it suffices to analyse the ratios of the data values. The usual approach to dimension reduction in compositional data analysis is to perform principal component analysis on the logarithms of ratios, but this method does not obey the principle of distributional equivalence. We show that by introducing weights for the rows and columns, the method achieves this desirable property. This weighted log-ratio analysis is theoretically equivalent to spectral mapping, a multivariate method developed almost 30 years ago for displaying ratio-scale data from biological activity spectra. The close relationship between spectral mapping and correspondence analysis is also explained, as well as their connection with association modelling. The weighted log-ratio methodology is applied here to frequency data in linguistics and to chemical compositional data in archaeology.
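Subcompositional coherence, as invoked above, means that conclusions drawn from ratios are unchanged when the analysis is restricted to a subcomposition. A toy numeric check (the composition values are invented):

```python
import math

# toy composition for one sample: four parts closed to sum 1
x = [0.50, 0.25, 0.15, 0.10]

def closure(parts):
    # rescale a vector of positive parts so they sum to 1
    s = sum(parts)
    return [p / s for p in parts]

# log-ratio of parts 0 and 1 in the full composition
full_lr = math.log(x[0] / x[1])

# the same log-ratio after dropping parts 2 and 3 and re-closing
sub = closure(x[:2])
sub_lr = math.log(sub[0] / sub[1])
```

The two log-ratios coincide exactly, which is why analysing ratios (or their logarithms) guarantees coherence under subsetting of parts.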


The detection of Parkinson's disease (PD) in its preclinical stages, prior to outright neurodegeneration, is essential to the development of neuroprotective therapies and could reduce the number of misdiagnosed patients. However, early diagnosis is currently hampered by a lack of reliable biomarkers. (1)H magnetic resonance spectroscopy (MRS) offers a noninvasive measure of brain metabolite levels that allows the identification of such potential biomarkers. This study used MRS on an ultrahigh-field 14.1 T magnet to explore the striatal metabolic changes occurring in two different rat models of the disease. Rats lesioned by injection of 6-hydroxydopamine (6-OHDA) into the medial forebrain bundle were used to model a complete nigrostriatal lesion, while a genetic model based on nigral injection of an adeno-associated viral (AAV) vector coding for human α-synuclein was used to model progressive neurodegeneration and dopaminergic neuron dysfunction, thereby replicating conditions closer to the early pathological stages of PD. MRS measurements in the striatum of the 6-OHDA rats revealed significant decreases in glutamate and N-acetyl-aspartate levels and a significant increase in the GABA level in the ipsilateral hemisphere compared with the contralateral one, while the αSyn-overexpressing rats showed a significant increase in the striatal GABA level only. We therefore conclude that MRS measurements of striatal GABA levels could allow the detection of early nigrostriatal defects prior to outright neurodegeneration and, as such, offer great potential as a sensitive biomarker of presymptomatic PD.


Relationships between porosity and hydraulic conductivity tend to be strongly scale- and site-dependent and are thus very difficult to establish. As a result, hydraulic conductivity distributions inferred from geophysically derived porosity models must be calibrated using some measurement of aquifer response. This type of calibration is potentially very valuable as it may allow for transport predictions within the considered hydrological unit at locations where only geophysical measurements are available, thus reducing the number of well tests required and thereby the costs of management and remediation. Here, we explore this concept through a series of numerical experiments. Considering the case of porosity characterization in saturated heterogeneous aquifers using crosshole ground-penetrating radar and borehole porosity log data, we use tracer test measurements to calibrate a relationship between porosity and hydraulic conductivity that allows the best prediction of the observed hydrological behavior. To examine the validity and effectiveness of the obtained relationship, we examine its performance at alternate locations not used in the calibration procedure. Our results indicate that this methodology allows us to obtain remarkably reliable hydrological predictions throughout the considered hydrological unit based on the geophysical data only. This was also found to be the case when significant uncertainty was considered in the underlying relationship between porosity and hydraulic conductivity.
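The calibration idea described here can be caricatured with a hypothetical power-law relationship K = a·φ^b fitted to a handful of made-up tracer-derived conductivities; the calibrated coefficients are then used to predict K at a location where only geophysically derived porosity is available. All numbers below are invented, and the power-law form is an illustrative assumption, not the paper's relationship:

```python
import math

# hypothetical porosity values from geophysics and hydraulic conductivities
# inferred from tracer tests at calibration locations (invented numbers)
porosity = [0.18, 0.22, 0.25, 0.30, 0.34]
k_obs = [2.1e-4, 4.9e-4, 8.0e-4, 1.9e-3, 3.4e-3]  # m/s

# calibrate log K = log a + b * log(phi) by ordinary least squares
xs = [math.log(p) for p in porosity]
ys = [math.log(k) for k in k_obs]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
log_a = my - b * mx

def predict_k(phi):
    # hydraulic conductivity predicted from geophysical porosity alone
    return math.exp(log_a) * phi ** b

k_validation = predict_k(0.28)  # a location not used in the calibration
```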


Modern agricultural techniques have a great impact on crops and soil quality, especially through increased machinery traffic and weight. Several devices have been developed for determining soil properties in the field, aimed at managing compacted areas. Penetrometry is a widely used technique; however, there are several types of penetrometers, whose different modes of action can affect the soil resistance measurement. The objective of this study was to compare the functionality of two penetrometry methods (manual and automated mode) in the field identification of compacted, highly mechanized sugarcane areas, considering the influence of soil volumetric water content (θ) on soil penetration resistance (PR). Three sugarcane fields on a Rhodic Eutrudox were chosen, under a sequence of harvest systems: one manual harvest (1ManH), one mechanized harvest (1MH), and three mechanized harvests (3MH). The different degrees of mechanization were associated with cumulative compaction processes. An electronic penetrometer was used for the PR measurements, with the rod introduced into the soil either by hand (Manual) or by an electromechanical motor (Auto). θ was measured in the field with a soil moisture sensor. The results showed an effect of θ on PR measurements, so regression models must be used to correct the data before comparing harvesting systems. The two rod-introduction modes yielded different mean PR values, with the "Manual" mode overestimating PR compared to the "Auto" mode at low θ.
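The moisture correction implied by these results can be sketched as a simple linear regression of PR on θ, with each reading shifted along the fitted line to a common reference moisture before harvest systems are compared. All field values below are invented, and the linear form is only a stand-in for whatever regression model is fitted in practice:

```python
# invented paired field readings for one harvest system:
# volumetric water content theta (m3/m3) and penetration resistance PR (MPa)
theta = [0.18, 0.21, 0.24, 0.27, 0.30, 0.33]
pr = [3.9, 3.4, 3.0, 2.5, 2.1, 1.7]

# ordinary least squares fit of PR = a + b * theta
n = len(theta)
mt, mp = sum(theta) / n, sum(pr) / n
b = sum((t - mt) * (p - mp) for t, p in zip(theta, pr)) / sum((t - mt) ** 2 for t in theta)
a = mp - b * mt

def pr_at_reference(pr_obs, theta_obs, theta_ref=0.25):
    # shift a reading along the fitted line to a common reference moisture
    return pr_obs + b * (theta_ref - theta_obs)

corrected = pr_at_reference(3.9, 0.18)  # a dry-soil reading adjusted downward
```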


Radioactive soil-contamination mapping and risk assessment are vital issues for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction accompanied (in some cases) by an estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty; unlike regression models, they provide multiple realizations of a particular spatial pattern, which allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping based on machine learning and stochastic simulations, in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models to prediction and classification problems. This fallout is a unique case study that poses the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
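The contrast drawn above between a single-value regression prediction and multiple stochastic realizations can be illustrated with a toy example: from an ensemble of equally probable realizations one can estimate the probability of exceeding a decision threshold, which a single prediction cannot supply. The noise model, units, and threshold below are arbitrary stand-ins, not the methods reviewed in the paper:

```python
import random

random.seed(42)

# single-value regression prediction at one location (invented, e.g. kBq/m2)
regression_estimate = 38.0

# stochastic simulation stand-in: many equally probable realizations drawn
# from an assumed local conditional distribution around the estimate
realizations = [max(0.0, regression_estimate * (1 + random.gauss(0, 0.25)))
                for _ in range(5000)]

threshold = 40.0  # decision-relevant contamination level (invented)
p_exceed = sum(v > threshold for v in realizations) / len(realizations)
```

The regression estimate alone says only "below threshold"; the ensemble quantifies how likely an exceedance actually is.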


Recent advances in remote sensing technologies have facilitated the generation of very high resolution (VHR) environmental data. Exploratory studies suggested that, if used in species distribution models (SDMs), these data should enable modelling of species' micro-habitats and improve predictions for fine-scale biodiversity management. In the present study, we tested the influence, in SDMs, of predictors derived from a VHR digital elevation model (DEM) by comparing the predictive power of models for 239 plant species and their assemblages fitted at six different resolutions in the Swiss Alps. We also tested whether the change in model quality for a species is related to its functional and ecological characteristics. Refining the resolution contributed only slight improvement to the models for more than half of the examined species, with the best results obtained at 5 m, but no significant improvement was observed, on average, across all species. Contrary to our expectations, we could not consistently correlate the changes in model performance with species characteristics such as vegetation height. Temperature, the most important variable in the SDMs across the different resolutions, did not contribute any substantial improvement. Our results suggest that improving the resolution of topographic data alone is not sufficient to improve SDM predictions - and therefore local management - compared to previously used resolutions (here 25 and 100 m). More effort should now be dedicated to finer-scale in-situ environmental measurements (e.g. of temperature, moisture, and snow) to obtain improved predictors for fine-scale species mapping and management.


The asphalt concrete (AC) dynamic modulus (|E*|) is a key design parameter in mechanistic-based pavement design methodologies such as the American Association of State Highway and Transportation Officials (AASHTO) MEPDG/Pavement-ME Design. The objective of this feasibility study was to develop frameworks for predicting the AC |E*| master curve from falling weight deflectometer (FWD) deflection-time history data collected by the Iowa Department of Transportation (Iowa DOT). A neural networks (NN) methodology was developed based on a synthetically generated viscoelastic forward solutions database to predict AC relaxation modulus (E(t)) master curve coefficients from FWD deflection-time history data. According to the theory of viscoelasticity, if AC relaxation modulus, E(t), is known, |E*| can be calculated (and vice versa) through numerical inter-conversion procedures. Several case studies focusing on full-depth AC pavements were conducted to isolate potential backcalculation issues that are only related to the modulus master curve of the AC layer. For the proof-of-concept demonstration, a comprehensive full-depth AC analysis was carried out through 10,000 batch simulations using a viscoelastic forward analysis program. Anomalies were detected in the comprehensive raw synthetic database and were eliminated through imposition of certain constraints involving the sigmoid master curve coefficients. The surrogate forward modeling results showed that NNs are able to predict deflection-time histories from E(t) master curve coefficients and other layer properties very well. The NN inverse modeling results demonstrated the potential of NNs to backcalculate the E(t) master curve coefficients from single-drop FWD deflection-time history data, although the current prediction accuracies are not sufficient to recommend these models for practical implementation. 
Considering the complex nature of the problem investigated and the many uncertainties involved, including the possible presence of dynamics during FWD testing (related to the presence and depth of a stiff layer, inertial and wave-propagation effects, etc.) and the limitations of current FWD technology (integration errors, truncation issues, etc.), as well as the need for a rapid and simplified approach suitable for routine implementation, future research recommendations are provided that make a strong case for an expanded research study.
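The sigmoid master curve coefficients mentioned above refer to the standard four-parameter form log|E*| = δ + α / (1 + exp(β + γ·log t_r)), where t_r is reduced loading time. A minimal sketch with invented coefficients, showing the expected stiff-at-short-loading-time behaviour of asphalt concrete:

```python
import math

def log_e_star(log_tr, delta, alpha, beta, gamma):
    # sigmoidal master curve: log|E*| = delta + alpha / (1 + exp(beta + gamma * log_tr))
    return delta + alpha / (1 + math.exp(beta + gamma * log_tr))

# illustrative coefficients (invented; |E*| in MPa, t_r in seconds)
delta, alpha, beta, gamma = 1.0, 3.5, -1.0, 0.5

e_fast = 10 ** log_e_star(-6.0, delta, alpha, beta, gamma)  # short loading time: stiff
e_slow = 10 ** log_e_star(6.0, delta, alpha, beta, gamma)   # long loading time: soft
```

Constraints on these four coefficients (e.g. α > 0, plausible asymptotes δ and δ + α) are the kind of restriction used in the study to purge anomalous cases from the synthetic database.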


Two portable Radio Frequency IDentification (RFID) systems (made by Texas Instruments and HiTAG) were developed and tested for bridge scour monitoring by the Department of Civil and Environmental Engineering at the University of Iowa (UI). Both systems consist of three similar components: 1) a passive cylindrical transponder (transmitter/responder), 2.2 cm in length; 2) a low-frequency reader (~134.2 kHz); and 3) an antenna (rectangular or hexagonal loop). The Texas Instruments system can read only one smart particle at a time, while the HiTAG system was successfully modified at UI by adding an anti-collision feature. The HiTAG system was equipped with four antennas and could simultaneously detect thousands of smart particles located in close proximity. Computer code was written in C++ at UI for the HiTAG system to allow simultaneous, multiple readouts of smart particles under different flow conditions. The code, written for the Windows XP operating system, has a user-friendly Windows interface that provides detailed information on each smart particle: identification number, location (orientation in x, y, z), and the instant the particle was detected. These systems were examined within the context of this research in order to identify the RFID system best suited to autonomous bridge scour monitoring. A comprehensive laboratory study comprising 142 experimental runs, plus limited field testing, was performed to test the code and determine the performance of each system in terms of transponder orientation, transponder housing material, maximum antenna-transponder detection distance, minimum inter-particle distance, and antenna sweep angle. The two RFID systems' capabilities to predict scour depth were also examined using pier models.
The findings can be summarized as follows: 1) The first system (Texas Instruments) read one smart particle at a time, with an effective read range of about 3 ft (~1 m). The second system (HiTAG) had a similar detection range but permitted the addition of an anti-collision system to facilitate the simultaneous identification of multiple smart particles (transponders placed into marbles). It was therefore concluded that the HiTAG system with the anti-collision feature (or a system with similar features) would be preferable to a single-readout system for bridge scour monitoring, as it can provide repetitive readings at multiple locations, helping to predict the scour-hole bathymetry along with the maximum scour depth. 2) The HiTAG system provided reliable measures of the scour depth (z-direction) and the locations of the smart particles on the x-y plane within a distance of about 3 ft (~1 m) from the four antennas. A Multiplexer HTM4-I allowed the simultaneous use of four antennas with the HiTAG system. The four hexagonal-loop antennas permitted the complete identification of the smart particles in an x, y, z orthogonal system as a function of time. The HiTAG system can also be used to measure the rate of sediment movement (in kg/s or tonnes/hr). 3) The maximum detection distance of the antenna did not change significantly for buried particles compared to particles tested in air. Thus, low-frequency RFID systems (~134.2 kHz) are appropriate for monitoring bridge scour because their waves can penetrate water and sand bodies without significant loss of signal strength. 4) The pier model experiments in a flume with the first RFID system showed that the system successfully predicted the maximum scour depth when used with a single particle placed in the vicinity of the pier model where the scour hole was expected.
The pier model experiments with the second RFID system, performed in a sandbox, showed that the system successfully predicted the maximum scour depth when two scour balls were used in the vicinity of the pier model where the scour hole developed. 5) The preliminary field experiments with the second RFID system, at the Raccoon River, IA near the Railroad Bridge (located upstream of the 360th Street Bridge, near Booneville), showed that the RFID technology is transferable to the field. A practical method should be developed for facilitating the placement of the smart particles within the river bed. This method needs to be straightforward enough for Department of Transportation (DOT) and county road crews to implement easily at different locations. 6) Since the inception of this project, further research has shown significant progress in RFID technology, including the availability of waterproof RFID systems with passive or active transponders with detection ranges up to 60 ft (~20 m) within the water-sediment column. These systems include anti-collision features and can accommodate up to 8 powerful antennas, which can significantly increase the detection range. Such systems should be further considered and modified for automatic bridge scour monitoring. The knowledge gained from the two systems, including the software, needs to be adapted to the new systems.


PAH (p-aminohippuric acid, N-(4-aminobenzoyl)glycine) clearance measurements have been used for 50 years in clinical research for the determination of renal plasma flow. The quantitation of PAH in plasma or urine is generally performed by a colorimetric method after a diazotization reaction, but the measurements must be corrected for the unspecific residual response observed in blank plasma. We have developed an HPLC method to specifically determine PAH and its metabolite NAc-PAH using gradient-elution ion-pair reversed-phase chromatography with UV detection at 273 and 265 nm, respectively. The separations were performed at room temperature on a ChromCart (125 mm x 4 mm I.D.) Nucleosil 100-5 µm C18AB cartridge column, using gradient elution with MeOH-buffer (pH 3.9) from 1:99 to 15:85 over 15 min. The pH 3.9 aqueous buffer consisted of 375 ml of sodium citrate-citric acid solution (21.01 g citric acid and 8.0 g NaOH per liter), supplemented with 2.7 ml of 85% H3PO4 and 1.0 g of sodium heptanesulfonate, and made up to 1000 ml with ultrapure water. N-acetyltransferase activity does not seem to notably affect PAH clearances, although NAc-PAH represents 10.2+/-2.7% of the PAH excreted unchanged in 12 healthy subjects. The performances of the HPLC and colorimetric methods were compared using urine and plasma samples collected from healthy volunteers. Good correlations (r=0.94 and 0.97 for plasma and urine, respectively) were found between the results obtained with the two techniques. However, the colorimetric method gives higher concentrations of PAH in urine and lower concentrations in plasma than those determined by HPLC. Hence, both renal (ClR) and systemic (ClS) clearances are systematically higher (by 35.1 and 17.8%, respectively) with the colorimetric method.
The fraction of PAH excreted by the kidney, ClR/ClS, calculated from HPLC data (n=143) is, as expected, always <1 (mean=0.73+/-0.11), whereas the colorimetric method gives a mean extraction ratio of 0.87+/-0.13, implying some unphysiological values (>1). In conclusion, HPLC not only enables the simultaneous quantitation of PAH and NAc-PAH, but may also provide more accurate and precise PAH clearance measurements.
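The clearance quantities compared above follow from two textbook definitions: renal clearance ClR = U·V/P from urine data, and systemic clearance ClS = infusion rate / plasma concentration at steady state. A toy calculation with invented values, showing a physiologically plausible extraction ratio below 1:

```python
# illustrative steady-state values (all invented)
urine_conc = 250.0     # PAH concentration in urine, mg/L
urine_flow = 1.0       # urine flow, mL/min
plasma_conc = 0.5      # PAH concentration in plasma, mg/L
infusion_rate = 0.34   # PAH infusion rate at steady state, mg/min

# renal clearance ClR = U * V / P (mL/min)
cl_renal = urine_conc * urine_flow / plasma_conc

# systemic clearance ClS = infusion rate / plasma concentration
# (mg/min) / (mg/L) = L/min, converted to mL/min
cl_systemic = infusion_rate / plasma_conc * 1000

extraction_ratio = cl_renal / cl_systemic  # physiologically expected to be < 1
```

An assay bias that inflates urine concentrations and deflates plasma concentrations raises cl_renal relative to cl_systemic, which is how the colorimetric method produces the unphysiological ratios above 1 noted in the abstract.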


Mechanistic soil-crop models have become indispensable tools for investigating the effects of management practices on the productivity or environmental impacts of arable crops. Ideally these models may claim to be universally applicable because they simulate the major processes governing the fate of inputs such as fertiliser nitrogen or pesticides. However, because they deal with complex systems and uncertain phenomena, site-specific calibration is usually a prerequisite to ensure that their predictions are realistic. This implies that some experimental knowledge of the system to be simulated should be available prior to any modelling attempt, which poses a major limitation to practical applications of models. Because the demand for more general simulation results is high, modellers have nevertheless taken the bold step of extrapolating a model tested within a limited sample of real conditions to a much larger domain. While methodological questions are often disregarded in this extrapolation process, they are specifically addressed in this paper, in particular the issue of models' a priori parameterisation. We thus implemented and tested a standard procedure to parameterise the soil components of a modified version of the CERES models. The procedure converts routinely available soil properties into functional characteristics by means of pedo-transfer functions. The resulting predictions of soil water and nitrogen dynamics, as well as crop biomass, nitrogen content, and leaf area index, were compared to observations from trials conducted in five locations across Europe (southern Italy, northern Spain, northern France and northern Germany). In three cases, the model's performance was judged acceptable when compared to experimental errors on the measurements, based on a test of the model's root mean squared error (RMSE). Significant deviations between observations and model outputs were nevertheless noted in all sites, and could be ascribed to various model routines.
In decreasing order of importance, these were: the water balance, the turnover of soil organic matter, and crop N uptake. A better match to field observations could therefore be achieved by visually adjusting the related parameters, such as the field-capacity water content or the size of the soil microbial biomass. As a result, model predictions fell within the measurement errors in all sites for most variables, and the model's RMSE was within the range of published values for similar tests. We conclude that the proposed a priori method yields acceptable simulations with only a 50% probability, a figure which may be greatly increased through a posteriori calibration. Modellers should thus exercise caution when extrapolating their models to a large sample of pedo-climatic conditions for which they have only limited information.
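The RMSE criterion used above to judge model performance against experimental error can be written in a few lines; the observations, simulated values, and error bound below are invented for illustration:

```python
import math

def rmse(observed, simulated):
    # root mean squared error between paired observations and model outputs
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated))
                     / len(observed))

# invented soil water contents (mm) at one trial site
observed = [210.0, 195.0, 180.0, 172.0, 160.0]
simulated = [205.0, 200.0, 172.0, 168.0, 165.0]

model_rmse = rmse(observed, simulated)
measurement_error = 10.0  # assumed experimental error on the observations (mm)

# the model is judged acceptable when its RMSE does not exceed the
# experimental error on the measurements
acceptable = model_rmse <= measurement_error
```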