994 results for Intensity parameters


Relevance:

20.00%

Publisher:

Abstract:

This project was undertaken to study the relationships between the performance of locally available asphalts and their physicochemical properties under Iowa conditions, with the ultimate objective of developing a local, performance-based asphalt specification for durable pavements. Physical and physicochemical tests were performed on three sets of asphalt samples: (a) twelve samples from local asphalt suppliers and their TFOT residues, (b) six core samples with known service records, and (c) a total of 79 asphalts from 10 pavement projects, including original, lab-aged, and recovered asphalts from field mixes, as well as from lab-aged mixes. Tests included standard rheological tests, HP-GPC, and TMA. Specific viscoelastic tests (at 5 °C) were run on set (b) samples and on some set (a) samples. DSC and X-ray diffraction studies were performed on set (a) and (b) samples. Furthermore, NMR techniques were applied to some samples from sets (a), (b), and (c). Efforts were made to identify physicochemical properties that are correlated with physical properties known to affect field performance. The significant physicochemical parameters were used as the basis for an improved, performance-based trial specification for Iowa to ensure more durable pavements.
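
The screening for physicochemical parameters correlated with performance-related physical properties could be sketched along the following lines; the file name and column names are hypothetical placeholders, not the study's actual variables.

```python
# Hypothetical illustration: rank physicochemical parameters by the strength of their
# correlation with a performance-related physical property.  File and column names are
# placeholders, not the study's data.
import pandas as pd

df = pd.read_csv("asphalt_samples.csv")   # assumed: one row per asphalt sample
physicochemical = ["gpc_large_molecular_size", "tma_coefficient", "dsc_wax_content"]
performance = "viscosity_5C"

correlations = (df[physicochemical]
                .corrwith(df[performance])
                .sort_values(key=abs, ascending=False))
print(correlations)   # parameters most strongly associated with the performance measure
```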

Relevance:

20.00%

Publisher:

Abstract:

Disasters are often perceived as fast and random events. While the triggers may be sudden, disasters are the result of an accumulation of consequences of inappropriate actions and decisions, and of global change. To modify this perception of risk, advocacy tools are needed. Quantitative methods have been developed to identify the distribution and the underlying factors of risk.

Disaster risk results from the intersection of hazards, exposure and vulnerability. The frequency and intensity of hazards can be influenced by climate change or by the decline of ecosystems; population growth increases exposure, while changes in the level of development affect vulnerability. Given that each of these components may change, risk is dynamic and should be reviewed periodically by governments, insurance companies or development agencies. At the global level, these analyses are often performed using databases of reported losses. Our results show that these are likely to be biased, in particular by improvements in access to information. International loss databases are not exhaustive and give no information on exposure, intensity or vulnerability. A new approach, independent of reported losses, is therefore necessary.

The research presented here was mandated by the United Nations and by agencies working in development and the environment (UNDP, UNISDR, GTZ, UNEP and IUCN). These organizations needed a quantitative assessment of the underlying factors of risk, to raise awareness among policymakers and to prioritize disaster risk reduction projects.

The method is based on geographic information systems, remote sensing, databases and statistical analysis. It required a large amount of data (1.7 Tb, covering both the physical environment and socio-economic parameters) and several thousand hours of processing. A comprehensive risk model was developed to reveal the distribution of hazards, exposure and risk, and to identify underlying risk factors for several hazards (floods, tropical cyclones, earthquakes and landslides). Two multiple-risk indexes were generated to compare countries. The results include an evaluation of the role of hazard intensity, exposure, poverty and governance in the patterns and trends of risk. It appears that the vulnerability factors change depending on the type of hazard and that, unlike exposure, their weight decreases as intensity increases.

At the local level, the method was tested to highlight the influence of climate change and ecosystem decline on the hazard. In northern Pakistan, deforestation increases landslide susceptibility. Research in Peru (based on satellite imagery and ground data collection) revealed rapid glacier retreat and provides an assessment of the remaining ice volume as well as scenarios of its possible evolution.

These results were presented to different audiences, including 160 governments. The results and the data generated are available online through an open-source SDI (http://preview.grid.unep.ch). The method is flexible and easily transferable to different scales and issues, with good prospects for adaptation to other research areas. The characterization of risk at the global level and the identification of the role of ecosystems in disaster risk are developing rapidly. This research revealed many challenges; some were resolved, while others remain limitations. However, it is clear that the level of development, and moreover unsustainable development, shapes a large part of disaster risk, and that the dynamics of risk are primarily governed by global change.
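
As a back-of-the-envelope illustration of the hazard-exposure-vulnerability framing described above (the actual model runs on global raster layers and 1.7 Tb of data), a per-cell multiplicative risk index might be computed as follows; all values are invented.

```python
import numpy as np

# Toy grids (hypothetical values); the real model uses global raster layers.
hazard_frequency = np.array([[0.1, 0.4], [0.8, 0.2]])    # expected events per year per cell
exposure = np.array([[1000, 50], [20000, 300]])           # people in the cell
vulnerability = np.array([[0.02, 0.05], [0.01, 0.10]])    # expected loss fraction per event

# Risk as the intersection (here: product) of hazard, exposure and vulnerability.
risk = hazard_frequency * exposure * vulnerability         # expected people affected per year
print(risk)
```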

Relevance:

20.00%

Publisher:

Abstract:

As modern molecular biology moves towards the analysis of biological systems as opposed to their individual components, the need for appropriate mathematical and computational techniques for understanding the dynamics and structure of such systems is becoming more pressing. For example, the modeling of biochemical systems using ordinary differential equations (ODEs) based on high-throughput, time-dense profiles is becoming more commonplace, necessitating the development of improved techniques to estimate model parameters from such data. Due to the high dimensionality of this estimation problem, straightforward optimization strategies rarely produce correct parameter values, and hence current methods tend to use genetic/evolutionary algorithms to perform nonlinear parameter fitting. Here, we describe a completely deterministic approach based on interval analysis. This allows us to examine entire sets of parameters, and thus to exhaust the global search within a finite number of steps. In particular, we show how our method may be applied to a generic class of ODEs used for modeling biochemical systems, called Generalized Mass Action (GMA) models. In addition, we show that for GMA models our method is amenable to constraint propagation, a technique from interval arithmetic that greatly improves its efficiency. To illustrate the applicability of our method, we apply it to several networks of biochemical reactions from the literature, showing in particular that, in addition to estimating system parameters in the absence of noise, our method may also be used to recover the topology of these networks.
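
A minimal sketch of the interval branch-and-prune idea, assuming a single-term GMA rate v(X) = a·X^g with a positive rate constant and noise-free observed rates. It illustrates the deterministic, exhaustive character of the search and a crude form of pruning, not the paper's actual implementation or its constraint-propagation machinery.

```python
# Sketch (assumptions: one-term GMA rate v(X) = a * X**g, a > 0, X > 0, noise-free rates).
# Boxes in (a, g) space are bisected; a box is discarded as soon as the interval
# evaluation of v(X) cannot reproduce an observed rate, so the surviving boxes
# exhaust the global search in a finite number of steps.
import numpy as np

X_obs = np.array([0.5, 1.0, 2.0, 4.0])
true_a, true_g = 1.5, 0.8
v_obs = true_a * X_obs**true_g              # "measured" rates
tol_width, tol_fit = 1e-3, 1e-6

def rate_bounds(a_lo, a_hi, g_lo, g_hi, x):
    """Exact bounds of a * x**g over the box (monotone in a and in g for x > 0)."""
    p_lo, p_hi = min(x**g_lo, x**g_hi), max(x**g_lo, x**g_hi)
    return a_lo * p_lo, a_hi * p_hi

def consistent(box):
    a_lo, a_hi, g_lo, g_hi = box
    return all(lo - tol_fit <= v <= hi + tol_fit
               for x, v in zip(X_obs, v_obs)
               for lo, hi in [rate_bounds(a_lo, a_hi, g_lo, g_hi, x)])

boxes, solutions = [(0.1, 5.0, -2.0, 2.0)], []
while boxes:
    box = boxes.pop()
    if not consistent(box):
        continue                                   # prune: box cannot contain a solution
    a_lo, a_hi, g_lo, g_hi = box
    if max(a_hi - a_lo, g_hi - g_lo) < tol_width:
        solutions.append(box)                      # small enough: keep as a solution enclosure
    elif a_hi - a_lo >= g_hi - g_lo:               # otherwise bisect the widest dimension
        mid = 0.5 * (a_lo + a_hi)
        boxes += [(a_lo, mid, g_lo, g_hi), (mid, a_hi, g_lo, g_hi)]
    else:
        mid = 0.5 * (g_lo + g_hi)
        boxes += [(a_lo, a_hi, g_lo, mid), (a_lo, a_hi, mid, g_hi)]

print(solutions[:3])   # surviving enclosures bracket (a, g) = (1.5, 0.8)
```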

Relevance:

20.00%

Publisher:

Abstract:

This study compares the effects of two short multiple-sprint exercise (MSE) sessions (6 × 6 s) with two different recovery durations (30 s or 180 s) on the slow component of oxygen uptake (V̇O2) during subsequent high-intensity exercise. Ten male subjects performed a 6-min cycling test at 50% of the difference between the gas exchange threshold and V̇O2peak (Δ50). The subjects then performed two MSEs of 6 × 6 s, one with intersprint recoveries of 30 s (MSE30) and one with 180 s (MSE180), each followed 10 min later by the Δ50 test (Δ5030 and Δ50180, respectively). Electromyographic (EMG) activity of the vastus medialis and vastus lateralis was measured throughout each exercise bout. During MSE30, muscle activity (root mean square) increased significantly (p ≤ 0.04), with a significant leftward shift of the median frequency of the power density spectrum (MDF; p ≤ 0.01), whereas MDF was significantly rightward-shifted during MSE180 (p = 0.02). The mean V̇O2 value was significantly higher in MSE30 than in MSE180 (p < 0.001). During Δ5030, the V̇O2 and deoxygenated hemoglobin ([HHb]) slow components were significantly reduced (-27%, p = 0.02, and -34%, p = 0.003, respectively) compared with Δ50. There was no significant modification of the V̇O2 slow component in Δ50180 compared with Δ50 (p = 0.32). The neuromuscular and metabolic adaptations during MSE30 (preferential activation of type I muscle fibers, evidenced by the decreased MDF, and a greater aerobic contribution to the required energy demand), but not during MSE180, may lead to reduced V̇O2 and [HHb] slow components, suggesting an alteration in the motor unit recruitment profile (i.e., a change in the type of muscle fibers recruited) and/or improved muscle O2 delivery during subsequent exercise.
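
The median frequency (MDF) of the EMG power density spectrum mentioned above is the frequency that splits the spectrum into two halves of equal power. A small sketch on a synthetic band-limited signal follows; the signal and sampling rate are assumptions, not study data.

```python
# Illustrative MDF computation on a synthetic, band-limited "EMG-like" signal.
import numpy as np
from scipy.signal import welch, butter, filtfilt

fs = 2000.0                                   # sampling rate (Hz), typical for surface EMG
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(0)
b, a = butter(4, [20, 450], btype="bandpass", fs=fs)
emg = filtfilt(b, a, rng.normal(size=t.size))  # toy signal, not real EMG

f, pxx = welch(emg, fs=fs, nperseg=1024)       # power density spectrum
cum_power = np.cumsum(pxx)
mdf = f[np.searchsorted(cum_power, cum_power[-1] / 2)]
print(f"median frequency: {mdf:.1f} Hz")       # a leftward MDF shift is read as greater type I fiber contribution
```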

Relevance:

20.00%

Publisher:

Abstract:

Hsp70-Hsp40-NEF and possibly Hsp100 are the only known molecular chaperones that can use the energy of ATP to convert stably pre-aggregated polypeptides into natively refolded proteins. However, the kinetic parameters and ATP costs have remained elusive because refolding reactions have only been successful with a molar excess of chaperones over their polypeptide substrates. Here we describe a stable, misfolded luciferase species that can be efficiently renatured by substoichiometric amounts of bacterial Hsp70-Hsp40-NEF. The reactivation rates increased with substrate concentration and followed saturation kinetics, thus allowing the determination of apparent V(max)' and K(m)' values for a chaperone-mediated renaturation reaction for the first time. Under the in vitro conditions used, one Hsp70 molecule consumed five ATPs to effectively unfold a single misfolded protein into an intermediate that, upon chaperone dissociation, spontaneously refolded to the native state, a process with an ATP cost a thousand times lower than expected for protein degradation and resynthesis.
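
Apparent Vmax' and Km' values from saturation kinetics are typically obtained by fitting a hyperbolic rate law; a hedged sketch with made-up substrate concentrations and rates (not the paper's data) follows.

```python
# Fitting apparent Vmax' and Km' to saturation kinetics; numbers are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def saturation(S, vmax, km):
    return vmax * S / (km + S)

S = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0])   # misfolded substrate (arbitrary units)
v = np.array([0.8, 1.7, 2.6, 3.6, 4.3, 4.7])    # refolding rate (arbitrary units)

(vmax_fit, km_fit), _ = curve_fit(saturation, S, v, p0=(5.0, 1.0))
print(f"apparent Vmax' = {vmax_fit:.2f}, Km' = {km_fit:.2f}")
```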

Relevance:

20.00%

Publisher:

Abstract:

The objective of this work was to develop an alternative methodology to study and characterize the crystalline properties of phosphates, which are directly associated with solubility and plant availability, in biochar from swine bones. Phosphate symmetry properties of pyrolyzed swine bones were established using solid-state nuclear magnetic resonance spectroscopy, principal component analysis, and multivariate curve resolution analysis on four samples pyrolyzed at different carbonization intensities. Increasing the carbonization parameters (temperature or residence time) generates diverse phosphate structures, increasing their symmetry and decreasing the cross polarizability of the ¹H–³¹P pair, and producing phosphates with probably lower solubility than those produced at lower carbonization intensity. Additionally, a new methodology is being developed to study and characterize phosphate crystalline properties directly associated with phosphate solubility and availability to plants.
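
The PCA step named above could look roughly like the sketch below, assuming a matrix of ³¹P NMR spectra with one row per pyrolyzed sample; the data here are random placeholders, and the multivariate curve resolution step is not shown.

```python
# PCA sketch on hypothetical NMR spectra (rows = samples, columns = spectral points).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
spectra = rng.normal(size=(4, 500))        # four samples at different carbonization intensities (placeholder data)

pca = PCA(n_components=2)
scores = pca.fit_transform(spectra)        # sample scores: how samples separate with carbonization intensity
print(pca.explained_variance_ratio_, scores)
```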

Relevance:

20.00%

Publisher:

Abstract:

INTRODUCTION: To report acute and late toxicities in patients with intermediate- and high-risk prostate cancer treated with combined high-dose-rate brachytherapy (HDR-B) and intensity-modulated radiation therapy (IMRT). MATERIALS AND METHODS: From March 2003 to September 2005, 64 men were treated with single-implant HDR-B of 21 Gy given in three fractions, followed by 50 Gy IMRT with organ tracking. Median age was 66.1 years, and the risk of recurrence was intermediate in 47% and high in 53% of the patients. Androgen deprivation therapy was received by 69% of the patients. Toxicity was scored according to the CTCAE version 3.0. Median follow-up was 3.1 years. RESULTS: Acute grade 3 genitourinary (GU) toxicity was observed in 7.8% of the patients, and late grade 3 and grade 4 GU toxicity in 10.9% and 1.6%, respectively. Acute grade 3 gastrointestinal (GI) toxicity was experienced by 1.6% of the patients, and no late grade 3 GI toxicity was observed. The urethral V120 (urethral volume receiving ≥120% of the prescribed HDR-B dose) was associated with acute (P=.047) and late ≥ grade 2 GU toxicities (P=.049). CONCLUSIONS: Late grade 3 and grade 4 GU toxicity occurred in 10.9% and 1.6% of the patients after HDR-B followed by IMRT, in association with the irradiated urethral volume. The impact of V120 on GU toxicity should be validated in further studies.
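
The dose-volume metric V120 reported above is the fraction of the urethral volume receiving at least 120% of the prescribed HDR-B dose. A toy computation follows; the per-fraction prescription (21 Gy / 3 fractions) and the voxel doses are assumptions, not patient data.

```python
# Toy V120 computation from hypothetical urethral voxel doses.
import numpy as np

prescribed_dose = 7.0                                              # Gy per HDR-B fraction (21 Gy / 3), assumed reference
urethra_voxel_doses = np.array([6.5, 7.2, 8.6, 9.1, 7.9, 8.5])      # Gy, invented values

v120 = np.mean(urethra_voxel_doses >= 1.2 * prescribed_dose) * 100   # % of voxels (volume) at >= 120% of prescription
print(f"V120 = {v120:.0f}% of the urethral volume")
```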

Relevance:

20.00%

Publisher:

Abstract:

In this thesis, I develop analytical models to price the value of supply chain investments under demand uncertainty. This thesis includes three self-contained papers. In the first paper, we investigate the value of lead-time reduction under the risk of sudden and abnormal changes in demand forecasts. We first consider the risk of a complete and permanent loss of demand. We then provide a more general jump-diffusion model, where we add a compound Poisson process to a constant-volatility demand process to explore the impact of sudden changes in demand forecasts on the value of lead-time reduction. We use an Edgeworth series expansion to divide the lead-time cost into that arising from constant instantaneous volatility, and that arising from the risk of jumps. We show that the value of lead-time reduction increases substantially in the intensity and/or the magnitude of jumps. In the second paper, we analyze the value of quantity flexibility in the presence of supply-chain disintermediation problems. We use the multiplicative martingale model and the "contracts as reference points" theory to capture both positive and negative effects of quantity flexibility for the downstream level in a supply chain. We show that lead-time reduction reduces both supply-chain disintermediation problems and supply-demand mismatches. We furthermore analyze the impact of the supplier's cost structure on the profitability of quantity-flexibility contracts. When the supplier's initial investment cost is relatively low, supply-chain disintermediation risk becomes less important, and hence the contract becomes more profitable for the retailer. We also find that the supply-chain efficiency increases substantially with the supplier's ability to disintermediate the chain when the initial investment cost is relatively high. In the third paper, we investigate the value of dual sourcing for the products with heavy-tailed demand distributions. We apply extreme-value theory and analyze the effects of tail heaviness of demand distribution on the optimal dual-sourcing strategy. We find that the effects of tail heaviness depend on the characteristics of demand and profit parameters. When both the profit margin of the product and the cost differential between the suppliers are relatively high, it is optimal to buffer the mismatch risk by increasing both the inventory level and the responsive capacity as demand uncertainty increases. In that case, however, both the optimal inventory level and the optimal responsive capacity decrease as the tail of demand becomes heavier. When the profit margin of the product is relatively high, and the cost differential between the suppliers is relatively low, it is optimal to buffer the mismatch risk by increasing the responsive capacity and reducing the inventory level as the demand uncertainty increases. In that case, however, it is optimal to buffer with more inventory and less capacity as the tail of demand becomes heavier. We also show that the optimal responsive capacity is higher for the products with heavier tails when the fill rate is extremely high.
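
A toy simulation of the jump-diffusion demand-forecast process described for the first paper: a constant-volatility (geometric Brownian motion) component plus a compound Poisson jump term capturing sudden, abnormal forecast changes. All parameter values are illustrative assumptions, not the thesis calibration.

```python
# Toy jump-diffusion demand-forecast path; all parameters are assumptions.
import numpy as np

rng = np.random.default_rng(42)
T, n_steps = 1.0, 250
dt = T / n_steps
sigma = 0.2                     # constant instantaneous volatility
lam = 2.0                       # jump intensity (expected jumps per unit time)
jump_mu, jump_sd = -0.3, 0.1    # jumps model sudden, abnormal forecast drops

forecast = np.empty(n_steps + 1)
forecast[0] = 100.0
for i in range(n_steps):
    diffusion = -0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * rng.normal()
    n_jumps = rng.poisson(lam * dt)
    jumps = rng.normal(jump_mu, jump_sd, size=n_jumps).sum()
    forecast[i + 1] = forecast[i] * np.exp(diffusion + jumps)

print(forecast[-1])   # a longer lead time leaves more time for such jumps to hit the forecast
```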

Relevance:

20.00%

Publisher:

Abstract:

Training has been shown to induce cardioprotection, but the mechanisms involved are still poorly understood. The aims of the study were to examine the relevance of training intensity to myocardial protection against ischemia/reperfusion (I/R) injury in rats, and to what extent the beneficial effects persist after training cessation. Sprague-Dawley rats trained at either low (60% [Formula: see text]) or high (80% [Formula: see text]) intensity for 10 weeks. An additional group of highly trained rats was detrained for 4 weeks. Untrained rats served as controls. At the end of treatment, the rats of all groups were split into two subgroups. In the first, rats underwent left anterior descending artery (LAD) ligation for 30 min, followed by 90-min reperfusion, with subsequent measurement of infarct size. In the second, biopsies were taken to measure heat-shock protein (HSP) 70/72 and vascular endothelial growth factor (VEGF) protein levels, and superoxide dismutase (SOD) activity. Training reduced infarct size in proportion to training intensity. With detraining, infarct size increased compared with highly trained rats, while some cardioprotection was maintained with respect to controls. Cardioprotection was proportional to training intensity and related to HSP70/72 upregulation and Mn-SOD activity. The relationship with Mn-SOD was lost with detraining. VEGF protein expression was not affected by either training or detraining. Stress proteins and antioxidant defenses might be involved in the beneficial effects of long-term training as a function of training intensity, while HSP70 may be one of the factors accounting for the partial persistence of myocardial protection against I/R injury in detrained rats.

Relevance:

20.00%

Publisher:

Abstract:

The objective of this work was to assess the genetic parameters and to estimate genetic gains in young rubber tree progenies. The experiments were carried out over three years, in a randomized block design with six replicates and ten plants per plot, in three representative Hevea crop regions of the state of São Paulo, Brazil. Twenty-two progenies were evaluated, from three to five years of age, for rubber yield and annual girth growth. Genetic gain was estimated with the multi-effect index (MEI). Selection by progeny means provided greater estimated genetic gain than selection based on individuals, since heritability values of progeny means were greater than individual heritability values for both evaluated variables in all the assessment years. The selection of the three best progenies for rubber yield provided a selection gain of 1.28 g per plant. The genetic gains estimated with the MEI using data from early assessments (3- to 5-year-old trees) were generally high for annual girth growth and rubber yield. The high genetic gains for annual girth growth in the first year of assessment indicate that progenies can be selected at the beginning of the breeding program. The effective population size was consistent with the three selected progenies, showing that they were not related and that the population's genetic variability is ensured. Early selection, with genetic gains estimated by the MEI, can be performed on rubber tree progenies.
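
The argument that selection on progeny means outperforms individual selection rests on the higher heritability of means. A hedged numeric illustration with made-up variance components (and a simplified model that ignores block and plot effects) is given below.

```python
# Illustrative only: averaging over n plants per progeny shrinks the residual variance
# entering the denominator of heritability, so progeny-mean heritability exceeds
# individual heritability.  Variance components are assumptions, not the study's estimates.
sigma2_g, sigma2_e = 4.0, 12.0       # genetic and residual variances (made up)
n_plants = 60                         # plants behind each progeny mean (6 replicates x 10 plants per plot)

h2_individual = sigma2_g / (sigma2_g + sigma2_e)
h2_progeny_mean = sigma2_g / (sigma2_g + sigma2_e / n_plants)
print(h2_individual, h2_progeny_mean)   # 0.25 vs ~0.95: selection on means is more reliable
```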

Relevance:

20.00%

Publisher:

Abstract:

The objective of this work was to estimate the mating system parameters of an andiroba (Carapa guianensis) population using microsatellite markers and the mixed and correlated mating models. Twelve open-pollinated progeny arrays of 15 individuals were sampled in an area with an estimated C. guianensis density of 25.7 trees per hectare. Overall, the species has a mixed reproductive system, with a predominance of outcrossing. The multilocus outcrossing rate (tm = 0.862) was significantly lower than unity, indicating that self-pollination occurred. The rate of biparental inbreeding was substantial (tm − ts = 0.134) and significantly different from zero. The correlation of selfing within progenies was high (rs = 0.635), indicating variation in the individual outcrossing rate. Consistent with this result, the estimates of the individual outcrossing rate ranged from 0.598 to 0.978. The multilocus correlation of paternity was low (rp(m) = 0.081), but significantly different from zero, suggesting that the progenies contain full-sibs. The coancestry within progenies (Θ = 0.185) was higher and the variance effective size (Ne(v) = 2.7) lower than expected for true half-sib progenies (Θ = 0.125; Ne(v) = 4). These results suggest that, in order to maintain a minimum effective size of 150 individuals for breeding, genetic conservation, and environmental reforestation programs, seeds from at least 56 trees must be collected.
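
The recommendation of 56 seed trees follows from simple arithmetic on the variance effective size reported above; the rounding convention used here is the only assumption.

```python
# Number of seed trees needed for the collected progeny arrays to reach a target effective size.
import math

Ne_target = 150          # minimum effective size for conservation/breeding programs
Ne_per_progeny = 2.7     # variance effective size estimated per progeny in this population

n_trees = math.ceil(Ne_target / Ne_per_progeny)
print(n_trees)           # 56 trees, matching the recommendation in the abstract
```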

Relevance:

20.00%

Publisher:

Abstract:

New methods and devices for pursuing performance enhancement through altitude training were developed in Scandinavia and the USA in the early 1990s. At present, several forms of hypoxic training and/or altitude exposure exist: traditional 'live high-train high' (LHTH), contemporary 'live high-train low' (LHTL), intermittent hypoxic exposure during rest (IHE) and intermittent hypoxic exposure during continuous sessions (IHT). Although substantial differences exist between these methods of hypoxic training and/or exposure, all have the same goal: to induce an improvement in athletic performance at sea level. They are also used in preparation for competition at altitude and/or for the acclimatization of mountaineers. The underlying mechanisms behind the effects of hypoxic training are widely debated. Although the popular view is that altitude training may lead to an increase in haematological capacity, this may not be the main, or the only, factor involved in the improvement of performance. Other central (such as ventilatory, haemodynamic or neural adaptation) or peripheral (such as muscle buffering capacity or economy) factors play an important role. LHTL was shown to be an efficient method. The optimal altitude for living high has been defined as 2200-2500 m to provide an optimal erythropoietic effect, and up to 3100 m for non-haematological parameters. The optimal duration at altitude appears to be 4 weeks for inducing accelerated erythropoiesis, whereas <3 weeks (i.e. 18 days) is long enough for beneficial changes in economy, muscle buffering capacity, the hypoxic ventilatory response or Na+/K+-ATPase activity. One critical point is the daily dose of altitude. A natural altitude of 2500 m for 20-22 h/day (in fact, travelling down to the valley only for training) appears sufficient to increase erythropoiesis and improve sea-level performance. 'Longer is better' as regards haematological changes, since additional benefits have been shown as hypoxic exposure increases beyond 16 h/day. The minimum daily dose for stimulating erythropoiesis seems to be 12 h/day. For non-haematological changes, a much shorter duration of exposure seems possible. Athletes could take advantage of IHT, which seems more beneficial than IHE for performance enhancement. The intensity of hypoxic exercise might play a role in adaptations at the molecular level in skeletal muscle tissue. There is clear evidence that intense exercise at high altitude stimulates muscle adaptations to a greater extent, for both aerobic and anaerobic exercise, and limits the decrease in power. So although IHT induces no increase in VO2max owing to the low 'altitude dose', improvement in athletic performance is likely with high-intensity exercise (i.e. above the ventilatory threshold), owing to an increase in mitochondrial efficiency and pH/lactate regulation. We propose a new combination of hypoxic methods (which we suggest naming Living High-Training Low and High, interspersed; LHTLHi), combining LHTL (five nights at 3000 m and two nights at sea level) with training at sea level, except for a few (2.3 per week) IHT sessions of supra-threshold training. This review also provides a rationale for how to combine the different hypoxic methods, and suggests advances in both their implementation and their periodization during the yearly training programme of athletes competing in endurance, glycolytic or intermittent sports.

Relevance:

20.00%

Publisher:

Abstract:

The objective of this work was to estimate the repeatability of the adaptability and stability parameters of common bean between years, within each biennium from 2003 to 2012, in Minas Gerais state, Brazil. Grain yield data from common bean value-for-cultivation-and-use trials were analyzed. Grain yield, ecovalence, regression coefficient, and coefficient of determination were estimated considering location and sowing season per year, within each biennium. Subsequently, an analysis of variance of these estimates was carried out, and repeatability was estimated in the biennia. The repeatability estimate for grain yield was relatively high in most of the biennia, but for ecovalence and the regression coefficient it was null or of small magnitude, which indicates that confidence in identifying common bean lines for recommendation is greater when using yield means rather than stability parameters.
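
The stability statistics named above can be sketched as follows: Wricke's ecovalence (each line's share of the line × environment interaction sum of squares) and the regression coefficient of each line on the environmental index. The yield matrix below is a random placeholder, not the trial data.

```python
# Sketch of the stability statistics: ecovalence and regression on the environmental index.
import numpy as np

rng = np.random.default_rng(7)
yield_ = rng.normal(2500, 300, size=(5, 8))       # 5 lines x 8 location/season environments (placeholder)

line_mean = yield_.mean(axis=1, keepdims=True)
env_mean = yield_.mean(axis=0, keepdims=True)
grand_mean = yield_.mean()

# Ecovalence: each line's contribution to the line x environment interaction sum of squares.
ecovalence = ((yield_ - line_mean - env_mean + grand_mean) ** 2).sum(axis=1)

# Regression coefficient: slope of each line's yields on the environmental index.
env_index = (env_mean - grand_mean).ravel()
b = ((yield_ - line_mean) * env_index).sum(axis=1) / (env_index ** 2).sum()

print(ecovalence, b)   # repeatability across biennia would then be assessed on these estimates
```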