903 results for Generalization of Ehrenfest’s urn Model
Abstract:
On-time delivery under high cost pressure can be ensured by optimising the in-house tool supply. Within Transfer Project 13 of the Collaborative Research Centre 489, using the forging industry as an example, a mathematical model was developed that determines the minimum inventory of forging tools required for production, taking the tool appropriation delay into account.
Abstract:
Plant species richness of permanent grasslands has often been found to be significantly associated with productivity. Concentrations of nutrients in biomass can give further insight into these productivity-plant species richness relationships, e.g. by reflecting land use or soil characteristics. However, the consistency of such relationships across different regions has rarely been taken into account, which might significantly compromise our potential for generalization. We recorded plant species richness and measured above-ground biomass and concentrations of nutrients in biomass in 295 grasslands in three regions of Germany that differ in soil and climatic conditions. Structural equation modelling revealed that nutrient concentrations were mostly indirectly associated with plant species richness via biomass production. However, the negative associations between the concentrations of different nutrients, biomass and plant species richness differed considerably among regions. While in two regions more than 40% of the variation in plant species richness could be attributed to variation in biomass, K, P, and to some degree also N concentrations, in the third region only 15% of the variation could be explained in this way. Generally, the highest plant species richness was recorded in grasslands where N and P were co-limiting plant growth, in contrast to N or K (co-)limitation. But again, this pattern was not recorded in the third region. While for two regions land-use intensity, and especially the application of fertilizers, are suggested to be the main drivers of the observed negative associations with productivity, in the third region the small amount of variance accounted for, the low species richness and the weak relationships implied that former intensive grassland management, ongoing mineralization of peat and fluctuating water levels in fen grasslands have overruled the effects of current land-use intensity and productivity.
Finally, we conclude that regional replication is of major importance for studies seeking general insights into productivity-diversity relationships.
Abstract:
IMPORTANCE Because effective interventions to reduce hospital readmissions are often expensive to implement, a score to predict potentially avoidable readmissions may help target the patients most likely to benefit. OBJECTIVE To derive and internally validate a prediction model for potentially avoidable 30-day hospital readmissions in medical patients using administrative and clinical data readily available prior to discharge. DESIGN Retrospective cohort study. SETTING Academic medical center in Boston, Massachusetts. PARTICIPANTS All patient discharges from any medical services between July 1, 2009, and June 30, 2010. MAIN OUTCOME MEASURES Potentially avoidable 30-day readmissions to 3 hospitals of the Partners HealthCare network were identified using a validated computerized algorithm based on administrative data (SQLape). A simple score was developed using multivariable logistic regression, with two-thirds of the sample randomly selected as the derivation cohort and one-third as the validation cohort. RESULTS Among 10 731 eligible discharges, 2398 discharges (22.3%) were followed by a 30-day readmission, of which 879 (8.5% of all discharges) were identified as potentially avoidable. The prediction score identified 7 independent factors, referred to as the HOSPITAL score: Hemoglobin at discharge, discharge from an Oncology service, Sodium level at discharge, Procedure during the index admission, Index Type of admission, number of Admissions during the last 12 months, and Length of stay. In the validation set, 26.7% of the patients were classified as high risk, with an estimated potentially avoidable readmission risk of 18.0% (observed, 18.2%). The HOSPITAL score had fair discriminatory power (C statistic, 0.71) and had good calibration. CONCLUSIONS AND RELEVANCE This simple prediction model identifies before discharge the risk of potentially avoidable 30-day readmission in medical patients.
This score has potential to easily identify patients who may need more intensive transitional care interventions.
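The additive structure of such a score can be sketched as follows. This is a minimal illustration of a HOSPITAL-style point score; the thresholds and point values below are illustrative placeholders, not the published coefficients from the derivation study.

```python
# Hedged sketch of a HOSPITAL-style additive risk score.
# Thresholds and point values are ILLUSTRATIVE, not the published ones.

def hospital_style_score(hemoglobin_g_dl, oncology_discharge, sodium_mmol_l,
                         had_procedure, nonelective_admission,
                         admissions_last_12m, length_of_stay_days):
    """Sum integer points over the seven HOSPITAL factors."""
    score = 0
    if hemoglobin_g_dl < 12.0:        # low hemoglobin at discharge
        score += 1
    if oncology_discharge:            # discharged from an oncology service
        score += 2
    if sodium_mmol_l < 135.0:         # low sodium at discharge
        score += 1
    if had_procedure:                 # procedure during the index admission
        score += 1
    if nonelective_admission:         # index admission type
        score += 1
    if admissions_last_12m >= 2:      # prior admissions in the last 12 months
        score += 2
    if length_of_stay_days >= 5:      # long index stay
        score += 2
    return score
```

In use, patients above a chosen cut-off would be flagged as high risk and targeted for transitional care interventions.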
Abstract:
Radiocarbon production, solar activity, total solar irradiance (TSI) and solar-induced climate change are reconstructed for the Holocene (10 to 0 kyr BP), and TSI is predicted for the next centuries. The IntCal09/SHCal04 radiocarbon and ice core CO2 records, reconstructions of the geomagnetic dipole, and instrumental data of solar activity are applied in the Bern3D-LPJ, a fully featured Earth system model of intermediate complexity including a 3-D dynamic ocean, ocean sediments, and a dynamic vegetation model, and in formulations linking radiocarbon production, the solar modulation potential, and TSI. Uncertainties are assessed using Monte Carlo simulations and bounding scenarios. Transient climate simulations span the past 21 thousand years, thereby considering the time lags and uncertainties associated with the last glacial termination. Our carbon-cycle-based modern estimate of radiocarbon production of 1.7 atoms cm−2 s−1 is lower than previously reported for the cosmogenic nuclide production model by Masarik and Beer (2009) and is more in line with Kovaltsov et al. (2012). In contrast to earlier studies, periods of high solar activity were quite common not only in recent millennia, but throughout the Holocene. Notable deviations compared to earlier reconstructions are also found on decadal to centennial timescales. We show that earlier Holocene reconstructions, not accounting for the interhemispheric gradients in radiocarbon, are biased low. Solar activity is higher than the modern average (650 MeV) during 28% of the time, but the absolute values remain weakly constrained due to uncertainties in the normalisation of the solar modulation to instrumental data. A recently published solar activity–TSI relationship yields small changes in Holocene TSI of the order of 1 W m−2, with a Maunder Minimum irradiance reduction of 0.85 ± 0.16 W m−2. Related solar-induced variations in global mean surface air temperature are simulated to be within 0.1 K.
Autoregressive modelling suggests a declining trend of solar activity in the 21st century towards average Holocene conditions.
Abstract:
The ability of the one-dimensional lake model FLake to represent the mixolimnion temperatures under tropical conditions was tested for three locations in East Africa: Lake Kivu and Lake Tanganyika's northern and southern basins. Meteorological observations from surrounding automatic weather stations were corrected and used to drive FLake, whereas a comprehensive set of water temperature profiles served to evaluate the model at each site. Careful forcing data correction and model configuration made it possible to reproduce the observed mixed layer seasonality at Lake Kivu and Lake Tanganyika (northern and southern basins), with correct representation of both the mixed layer depth and water temperatures. At Lake Kivu, mixolimnion temperatures predicted by FLake were found to be sensitive both to minimal variations in the external parameters and to small changes in the meteorological driving data, in particular wind velocity. In each case, small modifications may lead to a regime switch, from the correctly represented seasonal mixed layer deepening to either completely mixed or permanently stratified conditions from ~10 m downwards. In contrast, model temperatures were found to be robust close to the surface, with acceptable predictions of near-surface water temperatures even when the seasonal mixing regime is not reproduced. FLake can thus be a suitable tool to parameterise tropical lake water surface temperatures within atmospheric prediction models. Finally, FLake was used to attribute the seasonal mixing cycle at Lake Kivu to variations in the near-surface meteorological conditions. It was found that the annual mixing down to 60 m during the main dry season is primarily due to enhanced lake evaporation and secondarily to the decreased incoming longwave radiation, both causing a significant heat loss from the lake surface and associated mixolimnion cooling.
Abstract:
The potential and adaptive flexibility of population dynamic P-systems (PDP) to study population dynamics suggests that they may be suitable for modelling complex fluvial ecosystems, characterized by a composition of dynamic habitats with many variables that interact simultaneously. Using as a model a reservoir occupied by the zebra mussel Dreissena polymorpha, we designed a computational model based on P systems to study the population dynamics of larvae, in order to evaluate management actions to control or eradicate this invasive species. The population dynamics of this species was simulated under different scenarios ranging from the absence of water flow change to a weekly variation with different flow rates, to the actual hydrodynamic situation of an intermediate flow rate. Our results show that PDP models can be very useful tools to model complex, partially desynchronized, processes that work in parallel. This allows the study of complex hydroecological processes such as the one presented, where reproductive cycles, temperature and water dynamics are involved in the desynchronization of the population dynamics both, within areas and among them. The results obtained may be useful in the management of other reservoirs with similar hydrodynamic situations in which the presence of this invasive species has been documented.
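The kind of scenario comparison described above can be illustrated with a deliberately simplified discrete-time sketch. This is not the PDP (P-system) formalism used in the study; it only shows how a flow-dependent removal term lets different flow scenarios be compared against a no-flow baseline, with made-up rates.

```python
# Simplified discrete-time sketch of larval population dynamics under
# different water-flow scenarios. NOT the PDP formalism of the study;
# growth and flushing rates below are illustrative assumptions.

def simulate_larvae(weeks, growth_rate, flush_fraction_per_week, n0=1000.0):
    """Each week larvae reproduce, then a flow-dependent fraction is flushed out."""
    n = n0
    for _ in range(weeks):
        n *= (1.0 + growth_rate)              # reproduction
        n *= (1.0 - flush_fraction_per_week)  # flow-driven removal
    return n

no_flow   = simulate_larvae(26, growth_rate=0.10, flush_fraction_per_week=0.00)
high_flow = simulate_larvae(26, growth_rate=0.10, flush_fraction_per_week=0.15)
```

Under these assumed rates, the higher-flow scenario suppresses the larval population relative to the no-flow baseline, which is the qualitative comparison the management scenarios aim at.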
Abstract:
Simulating surface wind over complex terrain is a challenge in regional climate modelling. Therefore, this study aims at identifying a set-up of the Weather Research and Forecasting (WRF) model that minimises systematic errors of surface winds in hindcast simulations. Major factors of the model configuration are tested to find a suitable set-up: the horizontal resolution, the planetary boundary layer (PBL) parameterisation scheme and the way WRF is nested into the driving data set. Hence, a number of sensitivity simulations at a spatial resolution of 2 km are carried out and compared to observations. Given the importance of wind storms, the analysis is based on case studies of 24 historical wind storms that caused great economic damage in Switzerland. Each of these events is downscaled using eight different model set-ups, all sharing the same driving data set. The results show that the lack of representation of the unresolved topography leads to a general overestimation of wind speed in WRF. However, this bias can be substantially reduced by using a PBL scheme that explicitly considers the effects of non-resolved topography, which also improves the spatial structure of wind speed over Switzerland. The wind direction, although generally well reproduced, is not very sensitive to the PBL scheme. Further sensitivity tests include four types of nesting methods: nesting only at the boundaries of the outermost domain, analysis nudging, spectral nudging, and the so-called re-forecast method, where the simulation is frequently restarted. These simulations show that restricting the freedom of the model to develop large-scale disturbances slightly increases the temporal agreement with the observations, while further reducing the overestimation of wind speed, especially for maximum wind peaks. The model performance is also evaluated in the outermost domains, where the resolution is coarser.
The results demonstrate the important role of horizontal resolution: the step from 6 to 2 km significantly improves model performance. In summary, the combination of a 2 km grid size, the non-local PBL scheme modified to explicitly account for non-resolved orography, and analysis or spectral nudging is a superior set-up when dynamical downscaling aims at reproducing real wind fields.
Abstract:
Numerous damage models have been developed in order to analyze seismic behavior. Among the different possibilities existing in the literature, it is clear that models developed along the lines of continuum damage mechanics are more consistent with the definition of damage as a phenomenon with mechanical consequences, because they explicitly include the coupling between damage and mechanical behavior. On the other hand, for seismic processes, phenomena such as low-cycle fatigue may have a pronounced effect on the overall behavior of the frames, and their consideration is therefore very important. However, most existing models evaluate the damage only as a function of the maximum amplitude of cyclic deformation, without considering the number of cycles. In this paper, a generalization of the simplified model proposed by Cipollina et al. [Cipollina A, López-Hinojosa A, Flórez-López J. Comput Struct 1995;54:1113–26] is made in order to include low-cycle fatigue. Such a model employs in its formulation irreversible thermodynamics and internal state variable theory.
Abstract:
Numerous damage models have been developed in order to analyse seismic behavior. Among the different possibilities existing in the literature, it is clear that models developed along the lines of Continuum Damage Mechanics are more consistent with the definition of damage as a phenomenon with mechanical consequences, as they explicitly include the coupling between damage and mechanical behavior. On the other hand, for seismic processes, phenomena such as low-cycle fatigue may have a pronounced effect on the overall behavior of the frames, and their consideration is therefore very important. However, many existing models evaluate the damage only as a function of the maximum amplitude of cyclic deformation, without considering the number of cycles. In this paper, a generalization of the simplified model proposed by Flórez is made in order to include low-cycle fatigue. Such a model employs in its formulation irreversible thermodynamics and internal state variable theory.
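The point these two abstracts make, that damage depends on the number of cycles and not only on the maximum cyclic amplitude, can be illustrated with a generic Miner-rule accumulation over a Coffin-Manson life curve. This is a textbook sketch, not the thermodynamic model of Cipollina/Flórez-López, and the material constants below are illustrative assumptions.

```python
# Hedged illustration: Miner-rule damage accumulation with a Coffin-Manson
# life curve. Shows that damage grows with the NUMBER of cycles at a given
# amplitude. Constants eps_f and c are illustrative, not from the papers.

def cycles_to_failure(plastic_strain_amplitude, eps_f=0.5, c=-0.6):
    """Coffin-Manson: eps_a = eps_f * (2*N_f)**c, solved for N_f."""
    return 0.5 * (plastic_strain_amplitude / eps_f) ** (1.0 / c)

def miner_damage(load_blocks):
    """Sum n_i / N_f_i over (amplitude, cycle count) blocks; failure near D = 1."""
    return sum(n / cycles_to_failure(amp) for amp, n in load_blocks)

# Same maximum amplitude, different cycle counts -> very different damage:
d_few  = miner_damage([(0.02, 10)])
d_many = miner_damage([(0.02, 1000)])
```

An amplitude-only criterion would assign the same damage to both histories; the cycle-counting formulation distinguishes them, which is exactly the gap the generalized model addresses.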
Abstract:
We report the material properties of 26 granular analogue materials used in 14 analogue modelling laboratories. We determined physical characteristics such as bulk density, grain size distribution, and grain shape, and performed ring shear tests to determine friction angles and cohesion, and uniaxial compression tests to evaluate the compaction behaviour. Mean grain size of the materials varied between c. 100 and 400 μm. Analysis of grain shape factors shows that the four different classes of granular materials (14 quartz sands, 5 dyed quartz sands, 4 heavy mineral sands and 3 size fractions of glass beads) can be broadly divided into two groups consisting of 12 angular and 14 rounded materials. Grain shape has an influence on friction angles, with most angular materials having higher internal friction angles (between c. 35° and 40°) than rounded materials, whereas well-rounded glass beads have the lowest internal friction angles (between c. 25° and 30°). We interpret this as an effect of intergranular sliding versus rolling. Most angular materials also have higher basal friction angles (tested for a specific foil) than more rounded materials, suggesting that angular grains scratch and wear the foil. Most materials have an internal cohesion on the order of 20–100 Pa, except for well-rounded glass beads, which show a trend towards a quasi-cohesionless (C < 20 Pa) Coulomb-type material. The uniaxial confined compression tests reveal that rounded grains generally show less compaction than angular grains. We interpret this to be related to the initial packing density after sifting, which is higher for rounded grains than for angular grains. Ring-shear test data show that angular grains undergo a longer strain-hardening phase than more rounded materials. This might explain why analogue models consisting of angular grains accommodate deformation in a more distributed manner prior to strain localisation than models consisting of rounded grains.
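Friction angle and cohesion are typically extracted from ring-shear data by fitting the Mohr-Coulomb failure criterion, tau = C + sigma * tan(phi), to pairs of normal and peak shear stress. The sketch below shows that fit with made-up stress values; it is an illustration of the standard procedure, not the authors' processing pipeline.

```python
import numpy as np

# Hedged sketch: estimating internal friction angle (phi) and cohesion (C)
# from ring-shear measurements via the Mohr-Coulomb criterion
# tau = C + sigma * tan(phi). The stress values are illustrative, not
# measured data from the study.

normal_stress = np.array([500.0, 1000.0, 1500.0, 2000.0])  # sigma, Pa
shear_stress  = np.array([470.0, 890.0, 1310.0, 1730.0])   # peak tau, Pa

# Linear fit: slope = tan(phi), intercept = cohesion C
slope, cohesion = np.polyfit(normal_stress, shear_stress, 1)
phi_deg = np.degrees(np.arctan(slope))
```

With these illustrative numbers the fit returns a cohesion of about 50 Pa and an internal friction angle of about 40°, i.e. in the range the abstract reports for angular materials.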
Abstract:
A generalization of the Gram-Schmidt procedure is achieved by providing equations for updating and downdating oblique projectors. The work is motivated by the problem of adaptive signal representation outside the orthogonal basis setting. The proposed techniques are shown to be relevant to the problem of discriminating signals produced by different phenomena when the order of the signal model needs to be adjusted. © 2007 IOP Publishing Ltd.
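For reference, the classical (orthogonal) Gram-Schmidt procedure that the paper generalises can be sketched as follows; the oblique-projector updating and downdating equations of the paper extend this logic beyond the orthogonal-basis setting.

```python
import numpy as np

# Classical Gram-Schmidt orthonormalisation, shown for reference only:
# the paper generalises this kind of update to OBLIQUE projectors, which
# this sketch does not implement.

def gram_schmidt(vectors):
    """Orthonormalise a list of 1-D arrays; near-dependent vectors are dropped."""
    basis = []
    for v in vectors:
        w = v.astype(float)
        for q in basis:
            w = w - np.dot(q, w) * q      # remove the component along q
        norm = np.linalg.norm(w)
        if norm > 1e-12:                  # skip (numerically) dependent vectors
            basis.append(w / norm)
    return basis

Q = gram_schmidt([np.array([1.0, 1.0, 0.0]),
                  np.array([1.0, 0.0, 1.0])])
```

Updating (adding a vector) appends one more projection step; downdating (removing a vector) is where the oblique-projector formulation of the paper becomes necessary, since the remaining basis must be repaired rather than recomputed from scratch.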
Abstract:
Primary glioblastoma (GB), the most common and aggressive adult brain tumour, is refractory to conventional therapies and characterised by poor prognosis. GB displays striking cellular heterogeneity, with a sub-population, called Glioblastoma Stem Cells (GSCs), intrinsically resistant to therapy, hence the high rate of recurrence. Alterations of the tumour suppressor gene PTEN are prevalent in primary GB, resulting in the inhibition of the polarity protein Lgl1 due to aPKC hyperactivation. Dysregulation of this molecular axis is one of the mechanisms involved in GSC maintenance. After demonstrating that the PTEN/aPKC/Lgl axis is conserved in Drosophila, I deregulated it in different cell populations of the nervous system in order to identify the cells at the root of neurogenic brain cancers. This analysis identified the type II neuroblasts (NBs) as the most sensitive to alterations of this molecular axis. Type II NBs are a sub-population of Drosophila stem cells displaying a lineage similar to that of the mammalian neural stem cells. Following aPKC activation in these stem cells, I obtained an adult brain cancer model in Drosophila that recapitulates many phenotypic traits of human brain tumours. Fly tumours are indeed characterised by accumulation of highly proliferative immature cells and keep growing in the adult, leading the affected animals to premature death. With the aim of understanding the role of cell polarity disruption in this tumorigenic process, I carried out a molecular characterisation and transcriptome analysis of brain cancers from our fly model. In summary, the model I built and partially characterised in this thesis work may help deepen our knowledge of human brain cancers by enabling investigation of many different aspects of this complicated disease.
Abstract:
Biobanks are key infrastructures in data-driven biomedical research. The counterpoint of this optimistic vision is the reality of biobank governance, which must address various ethical, legal and social issues, especially in terms of open consent, privacy and secondary uses which, if not sufficiently resolved, may undermine participants’ and society’s trust in biobanking. The effect of the digital paradigm on biomedical research has only accentuated these issues by adding new pressure for the data protection of biobank participants against the risks of covert discrimination, abuse of power against individuals and groups, and critical commercial uses. Moreover, the traditional research-ethics framework has been unable to keep pace with the transformative developments of the digital era, and has proven inadequate in protecting biobank participants and providing guidance for ethical practices. To this must be added the challenge of an increased tendency towards exploitation and the commercialisation of personal data in the field of biomedical research, which may undermine the altruistic and solidaristic values associated with biobank participation and risk losing alignment with societal interests in biobanking. My research critically analyses, from a bioethical perspective, the challenges and the goals of biobank governance in data-driven biomedical research in order to understand the conditions for the implementation of a governance model that can foster biomedical research and innovation, while ensuring adequate protection for biobank participants and an alignment of biobank procedures and policies with society’s interests and expectations. The main outcome is a conceptualisation of a socially-oriented and participatory model of biobanks by proposing a new ethical framework that relies on the principles of transparency, data protection and participation to tackle the key challenges of biobanks in the digital age and that is well-suited to foster these goals.
Abstract:
The quantification of the available energy in the environment is important because it determines photosynthesis, evapotranspiration and, therefore, the final yield of crops. Instruments for measuring the energy balance are costly, and indirect estimation alternatives are desirable. This study assessed the performance of Deardorff's model during a cycle of a sugarcane crop in Piracicaba, State of São Paulo, Brazil, in comparison to the aerodynamic method. This mechanistic model simulates the energy fluxes (sensible heat, latent heat and net radiation) at three levels (atmosphere, canopy and soil) using only air temperature, relative humidity and wind speed measured at a reference level above the canopy, crop leaf area index, and some pre-calibrated parameters (canopy albedo, soil emissivity, atmospheric transmissivity and hydrological characteristics of the soil). The analysis was made for different time scales, insolation conditions and seasons (spring, summer and autumn). Analyzing all data at 15-minute intervals, the model presented good performance for net radiation simulation under different insolations and seasons. The latent heat flux in the atmosphere and the sensible heat flux in the atmosphere did not present differences in comparison to data from the aerodynamic method during the autumn. The sensible heat flux in the soil was poorly simulated by the model due to the poor performance of the soil water balance method. Overall, Deardorff's model improved the flux simulations in comparison to the aerodynamic method when more insolation was available in the environment.
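The net-radiation component that the abstract highlights can be sketched with a bulk radiation budget: absorbed shortwave plus incoming minus outgoing Stefan-Boltzmann longwave. This is a generic illustration, not Deardorff's three-level scheme; the emissivities and forcing values are illustrative assumptions.

```python
# Hedged sketch (NOT Deardorff's full three-level scheme): a bulk net-radiation
# estimate, Rn = (1 - albedo)*Rs + eps_air*sigma*Ta^4 - eps_surf*sigma*Ts^4.
# Emissivities and the example forcing values are illustrative assumptions.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m-2 K-4

def net_radiation(rs_in, albedo, t_air_k, t_surf_k,
                  emis_air=0.85, emis_surf=0.98):
    """Net radiation (W m-2): absorbed shortwave + incoming LW - outgoing LW."""
    shortwave = (1.0 - albedo) * rs_in
    lw_in  = emis_air  * SIGMA * t_air_k ** 4
    lw_out = emis_surf * SIGMA * t_surf_k ** 4
    return shortwave + lw_in - lw_out

# Midday-like example: strong insolation, canopy slightly warmer than the air.
rn = net_radiation(rs_in=800.0, albedo=0.23, t_air_k=300.0, t_surf_k=303.0)
```

In an energy-balance scheme, Rn computed this way is then partitioned among sensible heat, latent heat and the soil flux (Rn = H + LE + G), which are the fluxes compared against the aerodynamic method in the study.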