964 results for Data Migration Processes Modeling
Abstract:
Self-potential (SP) data are of interest to vadose zone hydrology because of their direct sensitivity to water flow and ionic transport. There is unfortunately little consensus in the literature about how best to model SP data under partially saturated conditions, and different approaches (often supported by a single laboratory data set) have been proposed. We argue that this lack of agreement can largely be traced to electrode effects that have not been properly taken into account. In a series of drainage and imbibition experiments, we found that previously proposed approaches to remove electrode effects were unlikely to provide adequate corrections. Instead, we explicitly modeled the electrode effects together with classical SP contributions using a flow and transport model. The simulated data agreed overall with the observed SP signals and allowed the different signal contributions to be decomposed and analyzed separately. After reviewing other published experimental data, we suggest that most of them include electrode effects that have not been properly accounted for. Our results suggest that previously presented SP theory works well when considering the modeling uncertainties presently associated with electrode effects. Additional work is warranted not only to develop suitable electrodes for laboratory experiments, but also to ensure that the electrode effects that appear inevitable in longer-term experiments are predictable, so that they can be incorporated into the modeling framework.
Abstract:
Experimental and theoretical investigations of the growth of silicon nanoparticles (4 to 14 nm) in a radio-frequency discharge were carried out. Growth processes were performed with gas mixtures of SiH4 and Ar in a plasma chemical reactor at low pressure. A distinctive feature of the presented kinetic model of nanoparticle generation and growth (compared to our earlier model) is its ability to investigate small "critical" cluster dimensions, which determine the rate of particle production, while taking into account the influence of SiH2 and Si2Hm dimer radicals. The experiments in the present study were extended to higher pressure (≥20 Pa) and discharge power (≥40 W). Model calculations were compared to experimental measurements of silicon nanoparticle dimensions as a function of time, discharge power, gas mixture, total pressure, and gas flow.
Abstract:
Automatic environmental monitoring networks, supported by wireless communication technologies, now provide large and ever-increasing volumes of data. The use of this information in natural hazard research is an important issue. Spatial maps of hazard-related parameters, produced from point observations and available auxiliary information, are particularly useful for risk assessment and decision making. The purpose of this article is to present and explore appropriate tools for processing large amounts of available data and producing predictions at fine spatial scales. These are machine learning algorithms, which are aimed at robust non-parametric modelling of non-linear dependencies from empirical data. The computational efficiency of these data-driven methods allows prediction maps to be produced in real time, which makes them superior to physical models for operational use in risk assessment and mitigation. This situation arises particularly in the spatial prediction of climatic variables (topo-climatic mapping). In the complex topographies of mountainous regions, meteorological processes are strongly influenced by the relief. The article shows how these relations, possibly regionalized and non-linear, can be modelled from data using information from digital elevation models. The methodology is illustrated by mapping temperatures (including situations of Föhn and temperature inversion) from measurements taken by the Swiss meteorological monitoring network. The methods used in the study include data-driven feature selection, support vector algorithms and artificial neural networks.
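As a toy illustration of the data-driven spatial prediction described above, the sketch below estimates a temperature at an unmeasured location from its nearest neighbours in an (x, y, elevation) feature space. It uses simple k-NN regression as a stand-in for the support vector and neural network models named in the abstract; the station data and the `knn_predict` helper are invented for illustration, not taken from the study.

```python
import math

# Hypothetical stations: ((x_km, y_km, elevation_m), temperature_C)
stations = [
    ((0.0, 0.0, 400.0), 12.1),
    ((2.0, 1.0, 450.0), 11.8),
    ((1.0, 3.0, 500.0), 11.2),
    ((8.0, 9.0, 1600.0), 3.4),
]

def knn_predict(train, query, k=3):
    """Average the target over the k nearest neighbours of `query`
    in (x, y, elevation) feature space (Euclidean distance)."""
    by_distance = sorted((math.dist(feat, query), temp) for feat, temp in train)
    nearest = by_distance[:k]
    return sum(temp for _, temp in nearest) / len(nearest)
```

In practice the coordinates and elevation would be rescaled before computing distances, since elevation in metres would otherwise dominate the horizontal coordinates.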
Abstract:
Past and current climate change has already induced drastic biological changes. We need projections of how future climate change will further impact biological systems. Modeling is one approach to forecast future ecological impacts, but requires data for model parameterization. As collecting new data is costly, an alternative is to use the increasingly available georeferenced species occurrence and natural history databases. Here, we illustrate the use of such databases to assess climate change impacts on mountain flora. We show that these data can be used effectively to derive dynamic impact scenarios, suggesting upward migration of many species and possible extinctions when no suitable habitat is available at higher elevations. Systematically georeferencing all existing natural history collections data in mountain regions could allow a larger assessment of climate change impact on mountain ecosystems in Europe and elsewhere.
Abstract:
BACKGROUND: Metals are known endocrine disruptors and have been linked to cardiometabolic diseases via multiple potential mechanisms, yet few human studies have both the exposure variability and biologically relevant phenotype data available. We sought to examine the distribution of metals exposure and potential associations with cardiometabolic risk factors in the "Modeling the Epidemiologic Transition Study" (METS), a prospective cohort study designed to assess energy balance and change in body weight, diabetes and cardiovascular disease risk in five countries at different stages of social and economic development. METHODS: Young adults (25-45 years) of African descent were enrolled (N = 500 from each site) in Ghana, South Africa, Seychelles, Jamaica and the U.S.A. We randomly selected 150 blood samples (N = 30 from each site) to determine concentrations of selected metals (arsenic, cadmium, lead, mercury) in a subset of participants at baseline and to examine associations with cardiometabolic risk factors. RESULTS: Median (interquartile range) metal concentrations (μg/L) were: arsenic 8.5 (7.7); cadmium 0.01 (0.8); lead 16.6 (16.1); and mercury 1.5 (5.0). There were significant differences in metal concentrations by site location, paid employment status, education, marital status, smoking, alcohol use, and fish intake. After adjusting for these covariates plus age and sex, arsenic (OR 4.1, 95% C.I. 1.2, 14.6) and lead (OR 4.0, 95% C.I. 1.6, 9.6) above the median values were significantly associated with elevated fasting glucose. These associations strengthened when models were further adjusted for percent body fat: arsenic (OR 5.6, 95% C.I. 1.5, 21.2) and lead (OR 5.0, 95% C.I. 2.0, 12.7). Cadmium and mercury were also associated with increased odds of elevated fasting glucose, but the associations were not statistically significant. Arsenic was significantly associated with increased odds of low HDL cholesterol both with (OR 8.0, 95% C.I. 1.8, 35.0) and without (OR 5.9, 95% C.I. 1.5, 23.1) adjustment for percent body fat. CONCLUSIONS: While not consistent for all cardiometabolic disease markers, these results are suggestive of potentially important associations between metals exposure and cardiometabolic risk. Future studies will examine these associations in the larger cohort over time.
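To make the odds-ratio reporting above concrete, the sketch below computes an unadjusted odds ratio and its 95% Wald confidence interval from a 2×2 table. Note this is a simplification: the study's ORs came from covariate-adjusted models, and the counts and the `odds_ratio_ci` helper here are invented for illustration.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald confidence interval from a 2x2 table:
        a = exposed with outcome,    b = exposed without outcome
        c = unexposed with outcome,  d = unexposed without outcome
    Assumes all cells are nonzero."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

For example, `odds_ratio_ci(20, 10, 10, 20)` gives an OR of 4.0 with a confidence interval that excludes 1, which is the pattern reported for arsenic and lead above.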
Abstract:
Introduction This dissertation consists of three essays in equilibrium asset pricing. The first chapter studies the asset pricing implications of a general equilibrium model in which real investment is reversible at a cost. Firms face higher costs in contracting than in expanding their capital stock and decide to invest when their productive capital is scarce relative to the overall capital of the economy. Positive shocks to the capital of the firm increase the size of the firm and reduce the value of growth options. As a result, the firm is burdened with more unproductive capital and its value declines relative to its accumulated capital. The optimal consumption policy alters the optimal allocation of resources and affects the firm's value, generating mean-reverting dynamics for M/B ratios. The model (1) captures the convergence of price-to-book ratios (negative for growth stocks and positive for value stocks; firm migration), (2) generates deviations from the classic CAPM in line with the cross-sectional variation in expected stock returns and (3) generates a non-monotone relationship between Tobin's q and conditional volatility consistent with the empirical evidence. The second chapter studies a standard portfolio-choice problem with transaction costs and mean reversion in expected returns. In the presence of transaction costs, no matter how small, arbitrage activity does not necessarily render all riskless rates of return equal. When two such rates follow stochastic processes, it is not optimal to immediately arbitrage out any discrepancy that arises between them. The reason is that immediate arbitrage would induce a definite expenditure of transaction costs whereas, without arbitrage intervention, there exists some, perhaps sufficient, probability that the two interest rates will come back together without any costs having been incurred.
Hence, one can surmise that in equilibrium the financial market will permit the coexistence of two riskless rates that are not equal to each other. For analogous reasons, randomly fluctuating expected rates of return on risky assets will be allowed to differ even after correction for risk, leading to important violations of the Capital Asset Pricing Model. The combination of randomness in expected rates of return and proportional transaction costs is a serious blow to existing frictionless pricing models. Finally, in the last chapter I propose a two-country, two-good general equilibrium economy with uncertainty about the fundamentals' growth rates to study the joint behavior of equity volatilities and correlation at the business cycle frequency. I assume that dividend growth rates jump from one state to another, while the countries' switches are possibly correlated. The model is solved in closed form and analytical expressions for stock prices are reported. When calibrated to empirical data for the United States and the United Kingdom, the results show that, given the existing degree of synchronization across these business cycles, the model captures the historical patterns of stock return volatilities quite well. Moreover, I can explain the time behavior of the correlation, but only under the assumption of a global business cycle.
Abstract:
Particle fluxes (including major components and grain size) and oceanographic parameters (near-bottom water temperature, current speed and suspended sediment concentration) were measured along the Cap de Creus submarine canyon in the Gulf of Lions (GoL; NW Mediterranean Sea) during two consecutive winter-spring periods (2009–2010 and 2010–2011). Comparison of these data with measurements of meteorological and hydrological parameters (wind speed, turbulent heat flux, river discharge) has shown the important role of atmospheric forcings in transporting particulate matter through the submarine canyon and towards the deep sea. Indeed, atmospheric forcing during the 2009–2010 and 2010–2011 winter months showed differences in both intensity and persistence that led to distinct oceanographic responses. Persistent dry northern winds caused strong heat losses (14.2 × 10³ W m⁻²) in winter 2009–2010 that triggered a pronounced sea surface cooling compared to winter 2010–2011 (1.6 × 10³ W m⁻² lower). As a consequence, a large volume of dense shelf water formed in winter 2009–2010, which cascaded at high speed (up to ∼1 m s⁻¹) down Cap de Creus Canyon as measured by a current meter at the head of the canyon. The lower heat losses recorded in winter 2010–2011, together with an increased river discharge, resulted in lowered density waters over the shelf, thus preventing the formation and downslope transport of dense shelf water. High total mass fluxes (up to 84.9 g m⁻² d⁻¹) recorded in winter-spring 2009–2010 indicate that dense shelf water cascading resuspended and transported sediments at least down to the middle canyon. Sediment fluxes were lower (28.9 g m⁻² d⁻¹) under the quieter conditions of winter 2010–2011. The dominance of the lithogenic fraction in mass fluxes during the two winter-spring periods points to a resuspension origin for most of the particles transported down canyon. 
The variability in organic matter and opal contents relates to seasonally controlled inputs associated with the plankton spring bloom during March and April of both years.
Abstract:
The work described in this report documents the activities performed for the evaluation, development, and enhancement of the Iowa Department of Transportation (DOT) pavement condition information as part of its pavement management system operation. The study covers all of the Iowa DOT's Interstate and primary highways, both National Highway System (NHS) and non-NHS. A new pavement condition rating system that provides a consistent, unified approach to rating pavements in Iowa is proposed. The proposed 100-point scale is based on five individual indices derived from specific distress data and pavement properties, and an overall pavement condition index, PCI-2, that combines the individual indices using weighting factors. The indices cover cracking, ride, rutting, faulting, and friction. The Cracking Index is formed by combining cracking data (transverse, longitudinal, wheel-path, and alligator cracking indices). The ride, rutting, and faulting indices utilize the International Roughness Index (IRI), rut depth, and fault height, respectively.
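The overall PCI-2 described above is a weighted combination of the individual indices. A minimal sketch of that combination is shown below; the weighting factors and the `pci2` helper are hypothetical, since the report derives its own weights.

```python
def pci2(indices, weights):
    """Overall condition index as a weighted average of the
    individual 0-100 indices; weights are normalized to sum to 1."""
    total = sum(weights.values())
    return sum(indices[name] * weights[name] for name in weights) / total

# Hypothetical weighting factors (not the report's actual values).
weights = {"cracking": 0.40, "ride": 0.30, "rutting": 0.15,
           "faulting": 0.10, "friction": 0.05}
```

With equal individual indices the composite equals that common value, and a low score on a heavily weighted index (such as cracking here) pulls the composite down fastest, which is the intent of weighting.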
Abstract:
Hydrologic analysis is a critical part of transportation design because it helps ensure that hydraulic structures can accommodate the flow regimes they are likely to see. This analysis is currently conducted using computer simulations of water flow patterns, and continuing developments in elevation survey techniques yield higher and higher resolution surveys. Current survey techniques now resolve many natural and anthropogenic features that were previously impractical to map and thus require new methods for dealing with depressions and flow discontinuities. A method for depressional analysis is proposed that exploits the fact that most anthropogenically constructed embankments tend to be more symmetrical and more steeply sloped than natural depressions. An enforcement method for draining depressions is then applied to those depressions that should be drained. This procedure was evaluated on a small watershed in central Iowa, Walnut Creek of the South Skunk River (HUC12 # 070801050901), and was found to accurately identify 88 of 92 drained depressions and place enforcements within two pixels, although the method often tries to drain prairie pothole depressions that are bisected by anthropogenic features.
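A standard building block for depressional analysis of a gridded DEM is the priority-flood algorithm, which raises every cell to the lowest spill elevation reachable from the grid edge. The sketch below shows that step only; it is not the report's specific procedure (which additionally classifies embankments by symmetry and slope), and the `fill_depressions` helper is invented for illustration.

```python
import heapq

def fill_depressions(dem):
    """Priority-flood: process cells from the grid edge inward in order of
    elevation, raising each interior cell to its spill elevation."""
    rows, cols = len(dem), len(dem[0])
    filled = [row[:] for row in dem]
    seen = [[False] * cols for _ in range(rows)]
    heap = []
    # Seed the priority queue with all boundary cells.
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):
                heapq.heappush(heap, (filled[r][c], r, c))
                seen[r][c] = True
    while heap:
        z, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not seen[nr][nc]:
                seen[nr][nc] = True
                # A cell cannot drain below the lowest cell on its spill path.
                filled[nr][nc] = max(filled[nr][nc], z)
                heapq.heappush(heap, (filled[nr][nc], nr, nc))
    return filled
```

On a small grid with a pit surrounded by a rim, the interior rises to the elevation of the lowest point on the rim (the spill point), while cells that already drain are left unchanged.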
Abstract:
Mississippi Valley-type zinc-lead deposits and ore occurrences in the San Vicente belt are hosted in dolostones of the eastern Upper Triassic to Lower Jurassic Pucara basin, central Peru. Combined inorganic and organic geochemical data from 22 sites, including the main San Vicente deposit, minor ore occurrences, and barren localities, provide a better understanding of fluid pathways and composition, ore precipitation mechanisms, Eh-pH changes during mineralization, and relationships between organic matter and ore formation. Ore-stage dark replacement dolomite and white sparry dolomite are Fe and rare earth element (REE) depleted, and Mn enriched, compared to the host dolomite. In the main deposit, they display significant negative Ce and probably Eu anomalies. Mixing of an incoming hot, slightly oxidizing, acidic brine (H2CO3 being the dominant dissolved carbon species), probably poor in REE and Fe, with local intraformational, alkaline, reducing waters explains the overall carbon and oxygen isotope variation and the distributions of REE and other trace elements in the different hydrothermal carbonate generations. The incoming ore fluid flowed through major aquifers, probably basal basin detrital units, with limited interaction with the carbonate host rocks. The hydrothermal carbonates show a strong regional chemical homogeneity, indicating access of the ore fluids by interconnected channelways near the ore occurrences. Negative Ce anomalies in the main deposit, which are absent at the district scale, indicate local ore-fluid chemical differences. Oxidation of both migrated and indigenous hydrocarbons by the incoming fluid provided the local reducing conditions necessary for sulfate reduction to H2S, pyrobitumen precipitation, and reduction of Eu³⁺ to Eu²⁺. 
Fe-Mn covariations, combined with the REE contents of the hydrothermal carbonates, are consistent with the mineralizing system shifting from reducing/rock-dominated to oxidizing/fluid-dominated conditions following ore deposition. Sulfate and sulfide sulfur isotopes support a sulfide origin from evaporite-derived sulfate by thermochemical organic reduction; further evidence includes the presence of ¹³C-depleted calcite cements (∼ −12‰ δ¹³C) as sulfate pseudomorphs, elemental sulfur, altered organic matter in the host dolomite, and isotopically heavier, late, solid bitumen. Significant alteration of the indigenous and extrinsic hydrocarbons, with bacterial membrane biomarkers (hopanes) absent, is observed. The lighter δ³⁴S of sulfides from small mines and occurrences, compared to the main deposit, reflects a local contribution of isotopically light sulfur, evidence of local differences in the ore-fluid chemistry.
Abstract:
In work-zone configurations where lane drops are present, merging of traffic at the taper presents an operational concern. In addition, as flow through the work zone is reduced, the relative traffic safety of the work zone is also reduced. Improving work-zone flow through merge points depends on the behavior of individual drivers. By better understanding driver behavior, traffic control plans, work-zone policies, and countermeasures can be better targeted to reinforce desirable lane-closure merging behavior, leading to both improved safety and increased work-zone capacity. The researchers collected data for two work-zone scenarios that included lane drops, one on an Interstate and the other on an urban arterial roadway. The researchers then modeled and calibrated these scenarios in VISSIM using real-world speeds, travel times, queue lengths, and merging behaviors (percentage of vehicles merging upstream and near the merge point). Once the models were built and calibrated, the researchers modeled strategies for various countermeasures in the two work zones. The models were then used to test and evaluate how various merging strategies affect safety and operations at the merge areas in these two work zones.
Abstract:
Background: The aim of this study was to evaluate how hospital capacity was managed, focusing on standardizing the admission and discharge processes. Methods: The study was set in a 900-bed university-affiliated hospital of the National Health Service, near Barcelona (Spain). This is a cross-sectional study of a set of interventions which were gradually implemented between April and December 2008, focused mainly on standardizing the admission and discharge processes to improve patient flow. Primary administrative data were obtained from the 2007 and 2009 Hospital Database. Main outcome measures were median length of stay, percentage of planned discharges, number of surgery cancellations and median number of delayed emergency admissions at 8:00 am. For statistical bivariate analysis, we used a Chi-squared test for linear trend for qualitative variables, and a Wilcoxon signed-rank test and a Mann–Whitney test for non-normal continuous variables. Results: The median global length of stay was 8.56 days in 2007 and 7.93 days in 2009 (p<0.051). The percentage of patients admitted the same day as surgery increased from 64.87% in 2007 to 86.01% in 2009 (p<0.05). The number of interventions cancelled due to lack of beds was 216 in 2007 and 42 in 2009. The percentage of planned discharges went from 43.05% in 2007 to 86.01% in 2009 (p<0.01). The median number of emergency patients waiting for an in-hospital bed at 8:00 am was 5 in 2007 and 3 in 2009 (p<0.01). Conclusions: Standardization of the admission and discharge processes is largely within our control, and offers a significant opportunity to increase bed capacity and hospital throughput.
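The Mann–Whitney test mentioned in the methods compares two independent samples via their ranks. The sketch below computes just the U statistic in pure Python, with midranks for ties; a real analysis would use a statistics package, and the data in the test are invented, not the study's.

```python
from itertools import chain

def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample `a` versus sample `b`,
    using midranks for tied values."""
    combined = sorted(chain(((x, 0) for x in a), ((y, 1) for y in b)))
    n = len(combined)
    rank_of = [0.0] * n
    i = 0
    while i < n:
        j = i
        # Extend j over the run of tied values starting at i.
        while j + 1 < n and combined[j + 1][0] == combined[i][0]:
            j += 1
        midrank = (i + j) / 2 + 1  # 1-based midrank shared by the tie group
        for k in range(i, j + 1):
            rank_of[k] = midrank
        i = j + 1
    rank_sum_a = sum(r for r, (_, grp) in zip(rank_of, combined) if grp == 0)
    n_a = len(a)
    return rank_sum_a - n_a * (n_a + 1) / 2
```

U ranges from 0 (every value in `a` below every value in `b`) to `len(a) * len(b)` (the reverse); values near the middle of that range indicate overlapping distributions.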
Abstract:
INTRODUCTION: Handwriting is a modality of language production whose cerebral substrates remain poorly known although the existence of specific regions is postulated. The description of brain damaged patients with agraphia and, more recently, several neuroimaging studies suggest the involvement of different brain regions. However, results vary with the methodological choices made and may not always discriminate between "writing-specific" and motor or linguistic processes shared with other abilities. METHODS: We used the "Activation Likelihood Estimate" (ALE) meta-analytical method to identify the cerebral network of areas commonly activated during handwriting in 18 neuroimaging studies published in the literature. Included contrasts were also classified according to the control tasks used, whether non-specific motor/output-control or linguistic/input-control. These data were included in two secondary meta-analyses in order to reveal the functional role of the different areas of this network. RESULTS: An extensive, mainly left-hemisphere network of 12 cortical and sub-cortical areas was obtained; three of which were considered as primarily writing-specific (left superior frontal sulcus/middle frontal gyrus area, left intraparietal sulcus/superior parietal area, right cerebellum) while others related rather to non-specific motor (primary motor and sensorimotor cortex, supplementary motor area, thalamus and putamen) or linguistic processes (ventral premotor cortex, posterior/inferior temporal cortex). CONCLUSIONS: This meta-analysis provides a description of the cerebral network of handwriting as revealed by various types of neuroimaging experiments and confirms the crucial involvement of the left frontal and superior parietal regions. These findings provide new insights into cognitive processes involved in handwriting and their cerebral substrates.
Abstract:
BACKGROUND: Macrophage migration inhibitory factor (MIF) is a proinflammatory cytokine produced by many tissues, including pancreatic beta-cells. METHODS: This study investigates the impact of MIF on islet transplantation using MIF knock-out (MIFko) mice. RESULTS: Early islet function, assessed with a syngeneic marginal islet mass transplant model, was enhanced when using MIFko islets (P<0.05 compared with wild-type [WT] controls). This result was supported by the increased in vitro resistance of MIFko islets to apoptosis (terminal deoxynucleotidyl transferase-mediated dUTP nick-end labeling assay) and by improved glucose metabolism (lower blood glucose levels, reduced glucose areas under the curve and higher insulin release during intraperitoneal glucose challenges, and in vitro in the absence of MIF, P<0.01). The beneficial impact of MIFko islets was insufficient to delay allogeneic islet rejection. However, the rejection of WT islet allografts was marginally delayed, by 6 days, in MIFko recipients compared with WT recipients (P<0.05). This effect is supported by the lower activity of MIF-deficient macrophages, assessed in vitro and in vivo by islet/macrophage cotransplantation. Leukocyte infiltration of the graft and donor-specific lymphocyte activity (mixed lymphocyte reaction, interferon gamma ELISPOT) were similar in both groups. CONCLUSION: These data indicate that targeting MIF has the potential to improve early function after syngeneic islet transplantation, but has only a marginal impact on allogeneic rejection.