944 results for Hazard-Based Models


Relevance: 90.00%

Abstract:

Species distribution models (SDMs) are widely used to explain and predict species ranges and environmental niches. They are most commonly constructed by inferring species' occurrence-environment relationships using statistical and machine-learning methods. The variety of methods that can be used to construct SDMs (e.g. generalized linear/additive models, tree-based models, maximum entropy, etc.), and the variety of ways that such models can be implemented, permits substantial flexibility in SDM complexity. Building models with an appropriate amount of complexity for the study objectives is critical for robust inference. We characterize complexity as the shape of the inferred occurrence-environment relationships and the number of parameters used to describe them, and search for insights into whether additional complexity is informative or superfluous. By building 'underfit' models, with insufficient flexibility to describe observed occurrence-environment relationships, we risk misunderstanding the factors shaping species distributions. By building 'overfit' models, with excessive flexibility, we risk inadvertently ascribing pattern to noise or building opaque models. However, model selection can be challenging, especially when comparing models constructed under different modeling approaches. Here we argue for a more pragmatic approach: researchers should constrain the complexity of their models based on study objective, attributes of the data, and an understanding of how these interact with the underlying biological processes. We discuss guidelines for balancing underfitting against overfitting and, consequently, how complexity affects decisions made during model building. Although some generalities are possible, our discussion reflects differences in opinions that favor simpler versus more complex models. We conclude that combining insights from both simple and complex SDM building approaches best advances our knowledge of current and future species ranges.
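To make the underfitting-overfitting trade-off concrete, here is a minimal, hypothetical sketch: synthetic presence/absence data along one environmental gradient, with scikit-learn logistic models of increasing polynomial complexity scored by cross-validation (all names and values are illustrative, not from the paper).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(0)
temp = rng.uniform(0.0, 30.0, 500)               # one environmental gradient
p_true = np.exp(-((temp - 18.0) ** 2) / 20.0)    # unimodal 'true' response
occ = rng.binomial(1, p_true)                    # presence/absence observations

X = temp.reshape(-1, 1)
for degree in (1, 2, 12):                        # underfit, adequate, overfit
    sdm = make_pipeline(StandardScaler(), PolynomialFeatures(degree),
                        LogisticRegression(max_iter=1000))
    loss = -cross_val_score(sdm, X, occ, cv=5, scoring="neg_log_loss").mean()
    print(f"degree {degree:2d}: cross-validated log-loss = {loss:.3f}")
```

A degree-1 model cannot express the unimodal response (underfitting), while a very high degree starts chasing noise; cross-validated loss is one simple way to see both failure modes.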

Relevance: 90.00%

Abstract:

Background: There is growing evidence that traffic-related air pollution reduces birth weight. Improving exposure assessment is a key issue for advancing this research area. Objective: We investigated the effect of prenatal exposure to traffic-related air pollution, estimated via geographic information system (GIS) models, on birth weight in 570 newborns from the INMA (Environment and Childhood) Sabadell cohort. Methods: We estimated pregnancy-long and trimester-specific exposures to nitrogen dioxide and aromatic hydrocarbons [benzene, toluene, ethylbenzene, m/p-xylene, and o-xylene (BTEX)] using temporally adjusted land-use regression (LUR) models. We built models for NO2 and BTEX using four and three 1-week measurement campaigns, respectively, at 57 locations. We assessed the relationship between prenatal air pollution exposure and birth weight with linear regression models. We performed sensitivity analyses considering time spent at home and time spent in nonresidential outdoor environments during pregnancy. Results: In the overall cohort, neither NO2 nor BTEX exposure was significantly associated with birth weight in any of the exposure periods. When considering only women who spent < 2 hr/day in nonresidential outdoor environments, the estimated reductions in birth weight associated with an interquartile-range increase in BTEX exposure were 77 g [95% confidence interval (CI), 7–146 g] and 102 g (95% CI, 28–176 g) for exposures during the whole pregnancy and the second trimester, respectively. The effects of NO2 exposure were less clear in this subset. Conclusions: The association of BTEX with reduced birth weight underscores the negative role of vehicle-exhaust pollutants in reproductive health. Time-activity patterns during pregnancy complement GIS-based models in exposure assessment.
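As a hedged illustration of the per-IQR effect estimates reported above (synthetic numbers; statsmodels OLS standing in for the paper's linear regression models):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
btex = rng.lognormal(1.5, 0.4, 570)                    # hypothetical exposure
iqr = np.subtract(*np.percentile(btex, [75, 25]))      # interquartile range
bw = 3300.0 - 90.0 * (btex / iqr) + rng.normal(0, 400, 570)  # birth weight (g)

fit = sm.OLS(bw, sm.add_constant(btex / iqr)).fit()    # exposure scaled per IQR
beta, (lo, hi) = fit.params[1], fit.conf_int()[1]
print(f"birth-weight change per IQR: {beta:.0f} g (95% CI {lo:.0f} to {hi:.0f})")
```

Scaling the exposure by its IQR makes the regression coefficient directly readable as "grams of birth weight per interquartile-range increase", the unit used in the abstract.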

Relevance: 90.00%

Abstract:

The pace of on-going climate change calls for reliable plant biodiversity scenarios. Traditional dynamic vegetation models use plant functional types that are summarized to such an extent that they become meaningless for biodiversity scenarios. Hybrid dynamic vegetation models of intermediate complexity (hybrid-DVMs) have recently been developed to address this issue. These models, at the crossroads between phenomenological and process-based models, are able to involve an intermediate number of well-chosen plant functional groups (PFGs). The challenge is to build meaningful PFGs that are representative of plant biodiversity and consistent with the parameters and processes of hybrid-DVMs. Here, we propose and test a framework based on a few selected traits to define a limited number of PFGs that are both representative of the diversity (functional and taxonomic) of the flora in the Ecrins National Park and adapted to hybrid-DVMs. This new classification scheme, together with recent advances in vegetation modeling, constitutes a step forward for mechanistic biodiversity modeling.
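A minimal sketch of the trait-based grouping idea behind PFGs: cluster species on a few selected traits (trait values and the number of groups are invented for illustration; the paper's actual PFG construction is more involved):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# hypothetical traits for 300 species: height (m), specific leaf area, log seed mass
traits = np.column_stack([rng.lognormal(0.0, 1.0, 300),
                          rng.normal(15.0, 5.0, 300),
                          rng.normal(0.0, 1.5, 300)])
Z = StandardScaler().fit_transform(traits)      # put traits on comparable scales
pfg = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(Z)
print(np.bincount(pfg))                         # species count per functional group
```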

Relevance: 90.00%

Abstract:

Debris flows are among the most dangerous processes in mountainous areas due to their rapid rate of movement and long runout zones. Sudden and rather unexpected impacts not only damage buildings and infrastructure but also threaten human lives. Medium- to regional-scale susceptibility analyses allow the identification of the most endangered areas and suggest where further detailed studies have to be carried out. Since data availability for larger regions is mostly the key limiting factor, empirical models with low data requirements are suitable for first overviews. In this study a susceptibility analysis was carried out for the Barcelonnette Basin, situated in the southern French Alps. By means of a methodology based on empirical rules for source identification and the empirical angle-of-reach concept for the 2-D runout computation, a worst-case scenario was first modelled. In a second step, scenarios for high-, medium- and low-frequency events were developed. A comparison with the footprints of a few mapped events indicates reasonable results but suggests a high dependency on the quality of the digital elevation model. This fact emphasises the need for careful interpretation of the results while remaining conscious of the inherent assumptions of the model used and the quality of the input data.
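The angle-of-reach concept used for the runout computation reduces to one line of trigonometry: a flow descending a height H stops where the line inclined at the angle of reach from the source meets the terrain. A sketch with hypothetical numbers (not values from the Barcelonnette study):

```python
import math

def runout_length(drop_height_m: float, reach_angle_deg: float) -> float:
    """Horizontal runout distance L from tan(alpha) = H / L."""
    return drop_height_m / math.tan(math.radians(reach_angle_deg))

# hypothetical: 600 m of relief; a low angle of reach means a more mobile event
for alpha in (11.0, 20.0):
    print(f"alpha = {alpha:4.1f} deg -> L = {runout_length(600.0, alpha):6.0f} m")
```

Lower angles of reach yield longer runouts, which is why a low assumed angle serves the worst-case scenario.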

Relevance: 90.00%

Abstract:

PURPOSE: Mutations within the KRAS proto-oncogene have predictive value but are of uncertain prognostic value in the treatment of advanced colorectal cancer. We took advantage of PETACC-3, an adjuvant trial with 3,278 patients with stage II to III colon cancer, to evaluate the prognostic value of KRAS and BRAF tumor mutation status in this setting. PATIENTS AND METHODS: Formalin-fixed paraffin-embedded tissue blocks (n = 1,564) were prospectively collected and DNA was extracted from tissue sections from 1,404 cases. Planned analysis of KRAS exon 2 and BRAF exon 15 mutations was performed by allele-specific real-time polymerase chain reaction. Survival analyses were based on univariate and multivariate proportional hazards regression models. RESULTS: KRAS and BRAF tumor mutation rates were 37.0% and 7.9%, respectively, and did not differ significantly by tumor stage. In a multivariate analysis containing stage, tumor site, nodal status, sex, age, grade, and microsatellite instability (MSI) status, KRAS mutation was associated with grade (P = .0016), while BRAF mutation was significantly associated with female sex (P = .017), and highly significantly associated with right-sided tumors, older age, high grade, and MSI-high tumors (all P < 10^-4). In univariate and multivariate analyses, KRAS mutations did not have a major prognostic value regarding relapse-free survival (RFS) or overall survival (OS). BRAF mutation was not prognostic for RFS, but was for OS, particularly in patients with MSI-low (MSI-L) and microsatellite-stable (MSI-S) tumors (hazard ratio, 2.2; 95% CI, 1.4 to 3.4; P = .0003). CONCLUSION: In stage II to III colon cancer, KRAS mutation status does not have major prognostic value. BRAF mutation is prognostic for OS in MSI-L/S tumors.
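A hedged sketch of the survival-analysis setup: a multivariate proportional hazards (Cox) regression on synthetic data using the lifelines library (column names and data-generating numbers are illustrative, not PETACC-3 variables):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 1404
braf = rng.binomial(1, 0.08, n)                       # mutation indicator (~8%)
age = rng.normal(62.0, 10.0, n)
hazard = 0.02 * np.exp(0.8 * braf + 0.02 * (age - 62.0))  # true HR for braf ~ 2.2
time = rng.exponential(1.0 / hazard)
df = pd.DataFrame({"T": np.minimum(time, 8.0),        # administrative censoring at 8 y
                   "E": (time < 8.0).astype(int),
                   "braf": braf, "age": age})

cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
cph.print_summary()                                   # exp(coef) column = hazard ratio
```

The exp(coef) for the mutation indicator is the adjusted hazard ratio, the quantity reported above as 2.2 (95% CI, 1.4 to 3.4).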

Relevance: 90.00%

Abstract:

Radioactive soil-contamination mapping and risk assessment are vital issues for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction accompanied (in some cases) by an estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model, yielding more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern, which allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping based on machine learning and stochastic simulations, in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models to prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
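A toy sketch of the data-driven prediction idea: a small neural network learns a contamination surface from scattered synthetic 'measurements' (stand-ins only; not the Chernobyl data or the review's full methodology):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
xy = rng.uniform(0.0, 100.0, (800, 2))                     # sampling locations
z = (np.exp(-((xy[:, 0] - 40.0) ** 2 + (xy[:, 1] - 60.0) ** 2) / 500.0)
     + 0.05 * rng.normal(size=800))                        # noisy 'hot spot'

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                   random_state=0).fit(xy, z)
grid = np.array([[x, y] for x in range(0, 101, 10) for y in range(0, 101, 10)])
pred = net.predict(grid)                                   # prediction-map values
print("peak predicted near:", grid[pred.argmax()])         # expected near (40, 60)
```

Unlike this single-surface prediction, the stochastic simulations discussed above would generate many equally plausible surfaces, whose spread quantifies uncertainty.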

Relevance: 90.00%

Abstract:

This thesis surveys temporal and stochastic software reliability models and examines a few of the models in practice. The theoretical part covers the key definitions and metrics used in describing and assessing software reliability, as well as the descriptions of the models themselves. Two groups of software reliability models are presented. The first group consists of hazard-based models. The second group comprises models based on the 'seeding' and significance of faults. The empirical part of the thesis contains the descriptions and results of the experiments. The experiments were carried out using three models belonging to the first group: the Jelinski-Moranda model, the first geometric model, and a simple exponential model. The purpose of the experiments was to study how the distribution of the input data affects the performance of the models and how sensitive the models are to changes in the amount of input data. The Jelinski-Moranda model proved the most sensitive to the distribution because of convergence problems, and the first geometric model the most sensitive to changes in the amount of data.
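For illustration, a minimal maximum-likelihood fit of the Jelinski-Moranda model on simulated inter-failure times (an assumption-laden sketch, not the thesis code). The profile likelihood over the number of initial faults N can be nearly flat, which is one face of the convergence problems mentioned above.

```python
import numpy as np

def jm_profile_loglik(N, t):
    """Profile log-likelihood of N with the rate phi concentrated out.

    In the Jelinski-Moranda model the i-th inter-failure time is
    exponential with rate phi * (N - i + 1)."""
    i = np.arange(1, len(t) + 1)
    remaining = N - i + 1
    phi = len(t) / np.sum(remaining * t)        # MLE of phi for a given N
    return np.sum(np.log(phi * remaining) - phi * remaining * t), phi

rng = np.random.default_rng(5)
N_true, phi_true, n = 60, 0.02, 40
t = rng.exponential(1.0 / (phi_true * (N_true - np.arange(n))))  # simulated data

(best_ll, phi_hat), N_hat = max(
    ((jm_profile_loglik(N, t), N) for N in range(n, 400)),
    key=lambda r: r[0][0])
print(f"estimated N = {N_hat}, phi = {phi_hat:.4f}")
```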

Relevance: 90.00%

Abstract:

The main objective of this Master's thesis was to improve the features of a previously developed cost-based transfer-pricing tool for use in the department-level cost-estimation process. The work was complicated by the recent poor response rate to price inquiries. The main problem was to extract reliable cost data from the production control system out of partly outdated machining and material data for standard valves. The main research methods can be divided into a literature review of transfer-pricing and cost-estimation processes, field analysis, and further development of the current Microsoft Excel transfer-pricing tool at the interface between the different departments. Transfer-pricing methods are commonly divided into cost-based, market-based, and negotiation-based models, which as such rarely meet the objectives set for transfer pricing. This can lead to situations in which two separate methods merge into one. In addition, the transfer-pricing system itself is usually affected by several internal and external factors. The final transfer-pricing method should also clearly support the company's vision and the other strategies set for the business. The result of the work is an extended Microsoft Excel application, which requires both annual and monthly updating of the price and delivery-time data of special valve materials. This approach clearly improves the cost-estimation process, because subcontractor data must also be examined systematically. After that, the whole transfer-pricing process can be developed further by converting the cost structure of the assembly and testing work phases to follow an activity-based costing model.
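A hypothetical sketch of the cost-plus logic underlying a cost-based transfer price (all rates and figures are invented for illustration and are not from the thesis):

```python
def cost_based_transfer_price(material: float, machining: float,
                              assembly: float, testing: float,
                              overhead_rate: float = 0.15,
                              markup: float = 0.10) -> float:
    """Full cost plus markup; the overhead and markup rates are assumptions."""
    full_cost = (material + machining + assembly + testing) * (1.0 + overhead_rate)
    return full_cost * (1.0 + markup)

# e.g. a valve with 420 EUR of material and 180/90/60 EUR of work phases
print(f"{cost_based_transfer_price(420.0, 180.0, 90.0, 60.0):.2f} EUR")
```

An activity-based costing refinement, as proposed above, would replace the flat overhead_rate with per-activity cost drivers for the assembly and testing phases.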

Relevance: 90.00%

Abstract:

Alpine tree-line ecotones are characterized by marked changes at small spatial scales that may result in a variety of physiognomies. A set of alternative individual-based models was tested with data from four contrasting Pinus uncinata ecotones in the central Spanish Pyrenees to reveal the minimal subset of processes required for tree-line formation. A Bayesian approach combined with Markov chain Monte Carlo methods was employed to obtain the posterior distribution of model parameters, allowing the use of model selection procedures. The main features of real tree lines emerged only in models considering nonlinear responses in individual rates of growth or mortality with respect to the altitudinal gradient. Variation in tree-line physiognomy reflected mainly changes in the relative importance of these nonlinear responses, while other processes, such as dispersal limitation and facilitation, played a secondary role. Different nonlinear responses also determined the presence or absence of krummholz, in agreement with recent findings highlighting a different response of diffuse and abrupt or krummholz tree lines to climate change. The method presented here can be widely applied in individual-based simulation models and will turn model selection and evaluation for such models into a more transparent, effective, and efficient exercise.
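A schematic random-walk Metropolis sampler of the kind used to obtain posterior parameter distributions; the data, likelihood, and priors below are simplified stand-ins (a single nonlinear altitudinal response), not the paper's individual-based models:

```python
import numpy as np

rng = np.random.default_rng(6)
alt = np.linspace(1600.0, 2400.0, 200)              # altitudinal gradient (m)
growth = (1.0 / (1.0 + np.exp((alt - 2100.0) / 60.0))
          + 0.05 * rng.normal(size=alt.size))       # synthetic growth response

def log_post(theta):
    mid, scale = theta
    if not (1500.0 < mid < 2500.0 and 1.0 < scale < 500.0):  # flat priors
        return -np.inf
    mu = 1.0 / (1.0 + np.exp((alt - mid) / scale))  # nonlinear response curve
    return -0.5 * np.sum((growth - mu) ** 2) / 0.05 ** 2

theta, chain = np.array([2000.0, 100.0]), []
lp = log_post(theta)
for _ in range(20000):
    prop = theta + rng.normal(0.0, [10.0, 5.0])     # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:        # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
print(np.mean(chain[5000:], axis=0))                # posterior means, ~(2100, 60)
```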

Relevance: 90.00%

Abstract:

Biotic interactions are known to affect the composition of species assemblages via several mechanisms, such as competition and facilitation. However, most spatial models of species richness do not explicitly consider inter-specific interactions. Here, we test whether incorporating biotic interactions into high-resolution models alters predictions of species richness as hypothesised. We included key biotic variables (the cover of three dominant arctic-alpine plant species) in two methodologically divergent species richness modelling frameworks (stacked species distribution models, SSDM, and macroecological models, MEM) for three ecologically and evolutionarily distinct taxonomic groups (vascular plants, bryophytes, and lichens). Predictions from models including biotic interactions were compared to the predictions of models based on climatic and abiotic data only. Including plant-plant interactions consistently and significantly lowered bias in species richness predictions and increased predictive power for independent evaluation data when compared to the conventional models based on climatic and abiotic data alone. Improvements in predictions were consistent irrespective of the modelling framework or taxonomic group used. The global biodiversity crisis necessitates accurate predictions of how changes in biotic and abiotic conditions may affect species richness patterns. Here, we demonstrate that models of the spatial distribution of species richness can be improved by incorporating biotic interactions, and thus that these key predictors must be accounted for in biodiversity forecasts.
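A minimal sketch of the comparison described above: species-richness models fitted with abiotic predictors only versus abiotic plus a biotic covariate (synthetic data and Poisson GLMs standing in for the SSDM and MEM frameworks):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 400
temp = rng.normal(5.0, 2.0, n)                   # abiotic predictor
cover = rng.uniform(0.0, 1.0, n)                 # dominant-species cover (biotic)
richness = rng.poisson(np.exp(1.5 + 0.15 * temp - 0.8 * cover))

X_abiotic = sm.add_constant(np.column_stack([temp]))
X_full = sm.add_constant(np.column_stack([temp, cover]))
for name, X in (("abiotic only", X_abiotic), ("abiotic + biotic", X_full)):
    fit = sm.GLM(richness, X, family=sm.families.Poisson()).fit()
    print(f"{name:17s} AIC = {fit.aic:.1f}")     # lower AIC for the biotic model
```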

Relevance: 90.00%

Abstract:

The objective of this study was to select allometric models to estimate total and pooled aboveground biomass of 4.5-year-old capixingui trees established in an agrisilvicultural system. Aboveground biomass distribution of capixingui was also evaluated. Single-entry (diameter at breast height [DBH], crown diameter, or stem diameter as the independent variable) and double-entry (DBH, crown diameter, or stem diameter together with total height as independent variables) models were studied. The estimated total biomass was 17.3 t ha⁻¹, corresponding to 86.6 kg per tree. All models showed a good fit to the data (R²adj > 0.85) for bole, branches, and total biomass. DBH-based models presented the best residual distribution. The model ln W = b0 + b1 ln DBH can be recommended for aboveground biomass estimation. Lower coefficients were obtained for leaves (R²adj > 82%). Biomass distribution followed the order bole > branches > leaves. Bole biomass percentage decreased with increasing tree DBH, whereas branch biomass increased.
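A short sketch of fitting the recommended model ln W = b0 + b1 ln DBH by least squares on log-transformed data (synthetic DBH-biomass pairs; the log-bias correction factor is common practice, not necessarily the paper's procedure):

```python
import numpy as np

rng = np.random.default_rng(8)
dbh = rng.uniform(5.0, 30.0, 200)                            # DBH in cm
w = np.exp(-2.0 + 2.4 * np.log(dbh)) * rng.lognormal(0.0, 0.15, 200)  # biomass, kg

b1, b0 = np.polyfit(np.log(dbh), np.log(w), 1)               # slope, intercept
resid_var = np.var(np.log(w) - (b0 + b1 * np.log(dbh)))
cf = np.exp(resid_var / 2.0)                                 # log-bias correction

def biomass_kg(d_cm: float) -> float:
    """Back-transformed prediction W = cf * exp(b0) * DBH**b1."""
    return cf * np.exp(b0 + b1 * np.log(d_cm))

print(f"b0 = {b0:.2f}, b1 = {b1:.2f}, W(20 cm) = {biomass_kg(20.0):.1f} kg")
```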

Relevance: 90.00%

Abstract:

In the field of molecular biology, scientists adopted for decades a reductionist perspective in their inquiries, being predominantly concerned with the intricate mechanistic details of subcellular regulatory systems. However, integrative thinking had been applied on a smaller scale in molecular biology to understand the underlying processes of cellular behaviour for at least half a century. It was not until the genomic revolution at the end of the previous century that model building was required to account for systemic properties of cellular activity. Our system-level understanding of cellular function is to this day hindered by drastic limitations in our capability to predict cellular behaviour in a way that reflects system dynamics and system structures. To this end, systems biology aims for a system-level understanding of functional intra- and inter-cellular activity. Modern biology produces a high volume of data, whose comprehension cannot even be attempted in the absence of computational support. Computational modelling thus bridges modern biology to computer science, providing a number of assets that prove invaluable in the analysis of complex biological systems, such as a rigorous characterization of the system structure, simulation techniques, and perturbation analysis. Computational biomodels have grown considerably in size in recent years, with major contributions made towards the simulation and analysis of large-scale models, starting with signalling pathways and culminating with whole-cell models, tissue-level models, organ models, and full-scale patient models. The simulation and analysis of models of such complexity very often requires the integration of various sub-models, entwined at different levels of resolution, whose organization spans several levels of hierarchy. This thesis revolves around the concept of quantitative model refinement in relation to the process of model building in computational systems biology. The thesis proposes a sound computational framework for the stepwise augmentation of a biomodel. One starts with an abstract, high-level representation of a biological phenomenon, which is materialised into an initial model that is validated against a set of existing data. Subsequently, the model is refined to include more details regarding its species and/or reactions. The framework is employed in the development of two models, one for the heat shock response in eukaryotes and the second for the ErbB signalling pathway. The thesis spans several inherently quantitative formalisms used in computational systems biology (reaction-network models, rule-based models, and Petri net models), as well as a recent, intrinsically qualitative formalism: reaction systems. The choice of modelling formalism is, however, determined by the nature of the question the modeller aims to answer. Quantitative model refinement turns out to be not only essential in the model development cycle, but also beneficial for the compilation of large-scale models, whose development requires the integration of several sub-models across various levels of resolution and underlying formal representations.
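A toy illustration of fit-preserving quantitative model refinement on a one-reaction network: species A in the initial model A -> B is refined into subtypes A1 and A2 that inherit its kinetics, and the refined model reproduces the original behaviour for A1 + A2 (invented kinetics; the thesis's heat shock and ErbB models are far larger):

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 0.3  # assumed mass-action rate constant

def initial(t, y):          # y = [A, B]
    return [-k * y[0], k * y[0]]

def refined(t, y):          # y = [A1, A2, B]; rate constant inherited from A
    return [-k * y[0], -k * y[1], k * (y[0] + y[1])]

t_eval = np.linspace(0.0, 10.0, 50)
s0 = solve_ivp(initial, (0.0, 10.0), [1.0, 0.0], t_eval=t_eval)
s1 = solve_ivp(refined, (0.0, 10.0), [0.6, 0.4, 0.0], t_eval=t_eval)  # A split 60/40

# the refinement preserves the fit: A(t) = A1(t) + A2(t) up to solver error
print(np.max(np.abs(s0.y[0] - (s1.y[0] + s1.y[1]))))
```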

Relevance: 90.00%

Abstract:

The advancement of science and technology makes it clear that no single perspective is any longer sufficient to describe the true nature of any phenomenon. That is why interdisciplinary research is gaining more attention over time. An excellent example of this type of research is natural computing, which stands on the borderline between biology and computer science. The contribution of research done in natural computing is twofold: on the one hand, it sheds light on how nature works and how it processes information and, on the other hand, it provides some guidelines on how to design bio-inspired technologies. The first direction in this thesis focuses on a nature-inspired process called gene assembly in ciliates. The second one studies reaction systems, a modelling framework whose rationale is built upon the biochemical interactions happening within a cell. The process of gene assembly in ciliates has attracted a lot of attention as a research topic in the past 15 years. Two main modelling frameworks were initially proposed at the end of the 1990s to capture the ciliates' gene assembly process, namely the intermolecular model and the intramolecular model. They were followed by other model proposals, such as template-based assembly and DNA rearrangement pathway recombination models. In this thesis we are interested in a variation of the intramolecular model called the simple gene assembly model, which focuses on the simplest possible folds in the assembly process. We propose a new framework called directed overlap-inclusion (DOI) graphs to overcome the limitations that previously introduced models faced in capturing all the combinatorial details of the simple gene assembly process. We investigate a number of combinatorial properties of these graphs, including a necessary property in terms of forbidden induced subgraphs. We also introduce DOI graph-based rewriting rules that capture all the operations of the simple gene assembly model and prove that they are equivalent to the string-based formalization of the model. Reaction systems (RS) is another nature-inspired modelling framework studied in this thesis. Its rationale is based upon two main regulation mechanisms, facilitation and inhibition, which control the interactions between biochemical reactions. Reaction systems is a complementary modelling framework to traditional quantitative frameworks, focusing on explicit cause-effect relationships between reactions. The explicit formulation of the facilitation and inhibition mechanisms behind reactions, as well as the focus on interactions between reactions (rather than on the dynamics of concentrations), makes their applicability potentially wide and useful beyond biological case studies. In this thesis, we construct a reaction system model corresponding to the heat shock response mechanism, based on a novel concept of dominance graph that captures the competition for resources in the ODE model. We also introduce for RS various concepts inspired by biology, e.g., mass conservation, steady state, and periodicity, in order to carry out model checking of reaction-system-based models. We prove that the complexity of the decision problems related to these properties ranges from P through NP-completeness and coNP-completeness to PSPACE-completeness. We further focus on the mass conservation relation in an RS, introduce the conservation dependency graph to capture the relation between the species, and propose an algorithm for listing the conserved sets of a given reaction system.
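The reaction systems formalism is compact enough to sketch directly: a reaction is a triple (reactants, inhibitors, products), it is enabled on a state when all its reactants are present and none of its inhibitors are, and the successor state is the union of the products of all enabled reactions. Below, a minimal interpreter with a two-reaction example whose species names are loosely inspired by, but not taken from, the heat shock response model:

```python
from typing import FrozenSet, List, Tuple

Reaction = Tuple[FrozenSet[str], FrozenSet[str], FrozenSet[str]]

def step(state: FrozenSet[str], reactions: List[Reaction]) -> FrozenSet[str]:
    """One reaction-system step: union of products of all enabled reactions."""
    out: set = set()
    for reactants, inhibitors, products in reactions:
        if reactants <= state and not (inhibitors & state):  # enabled?
            out |= products
    return frozenset(out)

rs: List[Reaction] = [
    (frozenset({"hsf"}), frozenset({"hsp"}), frozenset({"hsf", "resp"})),
    (frozenset({"resp"}), frozenset(), frozenset({"hsp"})),
]
state = frozenset({"hsf"})
for _ in range(4):                       # inhibition by 'hsp' shuts the loop down
    state = step(state, rs)
    print(sorted(state))
```

Note the qualitative character: there are no concentrations or rates, only presence/absence of species and the facilitation/inhibition structure of the reactions.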

Relevance: 90.00%

Abstract:

Routine activity theory, introduced by Cohen & Felson in 1979, states that criminal acts occur when criminals and victims converge in time and place in the absence of guardians. As the number of collisions of these elements in place and time increases, criminal acts will also increase, even if the number of criminals or civilians remains the same within the vicinity of a city. Street robbery is a typical example, and its occurrence can be predicted using routine activity theory. Agent-based models allow simulation of diversity among individuals; agent-based simulation of street robbery can therefore be used to visualize how chronological aspects of human activity influence the incidence of street robbery. The conceptual model identifies three classes of people (criminals, civilians, and police) with certain activity areas for each. Police exist only as agents of formal guardianship. Criminals with a tendency for crime search for their victims. Civilians without criminal tendency can be either victims or guardians. In addition to criminal tendency, each civilian in the model has a unique set of characteristics such as wealth, employment status, and ability for guardianship. These agents are subjected to a random walk through a street environment guided by a Q-learning module, and the possible outcomes are analyzed.
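A schematic of the Q-learning rule that could guide such agents through the street environment (states, actions, rewards, and parameter values are placeholders, not the thesis's implementation):

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2          # assumed learning parameters
ACTIONS = ["north", "south", "east", "west"]
Q = defaultdict(float)                          # Q[(state, action)] -> value

def choose_action(state):
    """Epsilon-greedy choice over the four movement directions."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One-step Q-learning: Q += alpha * (r + gamma * max_a' Q' - Q)."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# e.g. a criminal agent rewarded for reaching a cell with an unguarded victim
update(state=(3, 4), action="east", reward=1.0, next_state=(4, 4))
print(Q[((3, 4), "east")])                      # 0.1 after a single update
```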

Relevance: 90.00%

Abstract:

In this paper, the residual Kullback–Leibler discrimination information measure is extended to conditionally specified models. The extension is used to characterize some bivariate distributions. These distributions are also characterized in terms of proportional hazard rate models and weighted distributions. Moreover, we obtain some bounds for this dynamic discrimination function by using the likelihood ratio order and some preceding results.
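A numerical sketch of the residual (dynamic) Kullback-Leibler discrimination between two lifetime distributions, i.e. the KL divergence of the densities truncated at age t and renormalized by the survival functions. Exponentials are used because memorylessness makes the result constant in t, which gives a built-in check (an illustration only, not the paper's derivations):

```python
import numpy as np
from scipy import integrate, stats

def residual_kl(f, g, t, width=50.0):
    """KL divergence between the residual lifetimes of f and g beyond age t."""
    Fbar, Gbar = f.sf(t), g.sf(t)
    def integrand(x):
        ft, gt = f.pdf(x) / Fbar, g.pdf(x) / Gbar   # residual densities at age t
        return ft * np.log(ft / gt)
    val, _ = integrate.quad(integrand, t, t + width)
    return val

f, g = stats.expon(scale=1.0), stats.expon(scale=2.0)  # hazard rates 1 and 0.5
for t in (0.0, 1.0, 5.0):
    print(f"t = {t}: {residual_kl(f, g, t):.4f}")      # ln 2 - 0.5 for every t
```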