941 results for Intention-based models
Abstract:
Radioactive soil-contamination mapping and risk assessment is a vital issue for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
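The core idea behind stochastic simulation, as opposed to a single-value regression prediction, can be sketched as follows (a minimal toy example, not the paper's geostatistical workflow; the covariance model, parameters, and threshold are illustrative assumptions):

```python
import numpy as np

# Minimal sketch: generate multiple realizations of a Gaussian random field on
# a 1-D transect via Cholesky factorization of an exponential covariance, then
# estimate the probability of exceeding a contamination threshold at each site.
def exceedance_probability(x, sill=1.0, corr_range=10.0, threshold=1.0,
                           n_real=500, seed=0):
    rng = np.random.default_rng(seed)
    # Exponential covariance between all pairs of locations
    d = np.abs(x[:, None] - x[None, :])
    cov = sill * np.exp(-d / corr_range)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(x)))
    # Each realization is one equally probable spatial pattern
    realizations = L @ rng.standard_normal((len(x), n_real))
    # Probabilistic map: fraction of realizations above the threshold per site
    return (realizations > threshold).mean(axis=1)

probs = exceedance_probability(np.linspace(0.0, 100.0, 50))
```

Unlike a kriged mean map, the ensemble of realizations directly supports risk statements such as "the probability of exceeding the regulatory level at this location".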
Abstract:
Ignition and the propagation of combustion in a particle bed are studied both to improve fire safety and to understand and develop the operation of combustion devices using solid fuels. The aim of this study is to compile experimental and theoretical results on ignition and flame-front propagation that support the development and design of fixed-bed combustion and gasification equipment. The work is a preliminary study for subsequent experimental and theoretical parts. The treatment concentrates in particular on wood-derived fuels. Targets for reducing carbon dioxide emissions, together with the increased use of solid wastes for energy and reduced landfilling, will increase fixed-bed combustion in the near future. Because transport distances must be optimised, fairly small combustion plants will have to be built, for which fixed-bed technology is the most economical option. According to Semenov's definition, the ignition point is the state and moment at which the net energy released per unit time by the reactions of fuel and oxygen equals the net energy flow transferred to the surroundings. Autoignition means ignition caused by an increase in ambient temperature or pressure. Forced ignition occurs when, for example, a flame or a glowing solid body near the ignition point causes local ignition and the spread of an ignition front to the rest of the fuel. Experimental research has identified the most important factors affecting ignition and ignition-front propagation as fuel moisture, volatile content and heating value, bed porosity, particle size and shape, the radiative heat flux density on the fuel surface, gas flow velocity in the bed, the oxygen fraction of the surroundings, and preheating of the combustion air. Increasing moisture raises the ignition energy and ignition temperature and lengthens the ignition time.
The more volatiles a fuel contains, the lower the temperature at which it ignites. Ignition and ignition-front propagation are faster the higher the fuel's heating value. Increased bed porosity has been observed to increase the propagation rate of combustion. Small particles generally ignite faster and at lower temperatures than large ones. Ignition-front propagation accelerates as the surface-area-to-volume ratio of the particles grows. In many combustion applications the radiative heat flux density is the dominant heat-transfer factor, and its increase naturally accelerates ignition. The flow velocity of air and combustion gases in the bed affects convective heat transfer and the oxygen concentration in the ignition zone. An air flow can cool the bed, while a hot-gas flow heats it. An increasing oxygen fraction accelerates ignition and flame-front propagation until a state is reached beyond which larger flows cool and dilute the reaction zone. Preheating the combustion air speeds up ignition-front propagation. Ignition and flame-front propagation are usually described with empirical models or with models based on conservation equations. Empirical models rest on correlations derived from measurements together with some known physical laws. In conservation-equation models, balance equations for mass, energy, momentum, and the chemical elements are written for the system, and the transfer equations describing their rates are built from relations obtained through theoretical and experimental research. These model classes partly overlap. Surface ignition is often described with conservation-equation models, whereas the modelling of particle beds relies mostly on empirical equations.
Among models describing particle beds, Xie and Liang's study of the ignition of a coal-particle bed and Gort's study of reaction-front propagation in wood and waste combustion come closest to conservation-equation modelling. All models, however, must simplify the real case, for example by reducing the number of dimensions, reactions, and species, and by eliminating less significant transfer mechanisms. There are few ignition and combustion-propagation studies that directly serve fixed-bed combustion and gasification. The fuels, beds, and ambient conditions of studies made for other purposes usually differ clearly from the corresponding conditions in combustion devices. Co-firing of fuel particles of different sizes, or of fuels with different properties, has hardly been studied at all. There is little research on the effect of fuel-particle shape, and no studies were found on the effects of air channelling.
Abstract:
This thesis surveys temporal and stochastic software reliability models and examines a few of them in practice. The theoretical part covers the key definitions and metrics used to describe and assess software reliability, together with the descriptions of the models themselves. Two groups of software reliability models are presented. The first group consists of models based on failure risk; the second comprises models based on fault seeding and marking. The empirical part contains the descriptions and results of the experiments, which were carried out with three models from the first group: the Jelinski-Moranda model, the first geometric model, and a simple exponential model. The aim of the experiments was to examine how the distribution of the input data affects the performance of the models and how sensitive the models are to changes in the amount of input data. The Jelinski-Moranda model proved to be the most sensitive to the distribution, owing to convergence problems, and the first geometric model the most sensitive to changes in the amount of data.
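The Jelinski-Moranda model assumes a failure hazard proportional to the number of remaining faults. A hypothetical maximum-likelihood fit (not the thesis' implementation; the integer fault count N is profiled over a grid, which also illustrates where convergence problems arise) might look like:

```python
import numpy as np

# Jelinski-Moranda sketch: hazard before failure i is phi*(N - i + 1), so
# inter-failure times are exponential with that rate. For a fixed N the MLE of
# phi is closed-form; we profile the log-likelihood over candidate values of N.
def jelinski_moranda_fit(times, n_max=200):
    times = np.asarray(times, dtype=float)
    n = len(times)
    i = np.arange(1, n + 1)
    best = None
    for N in range(n, n_max + 1):
        k = N - i + 1                      # remaining faults before each failure
        denom = np.sum(k * times)
        phi = n / denom                    # MLE of phi given N
        loglik = np.sum(np.log(phi * k)) - phi * denom
        if best is None or loglik > best[2]:
            best = (N, phi, loglik)
    return best                            # (N_hat, phi_hat, loglik)

# Simulated failure log with 50 true faults, phi = 0.02, 30 observed failures
rng = np.random.default_rng(1)
t = rng.exponential(1.0 / (0.02 * (50 - np.arange(30))))
N_hat, phi_hat, _ = jelinski_moranda_fit(t)
```

When the likelihood is flat in N (for example with few failures or an unfavourable data distribution), the profiled maximum drifts to the grid boundary, which is the practical face of the convergence problems the experiments report.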
Abstract:
The main objective of this Master's thesis was to improve the features of a cost-based transfer pricing tool developed for the departmental cost-estimation process. The work was made more difficult by the recent poor response rate to price inquiries. The main problem was to collect reliable cost data for the production planning system out of partly outdated machining and material data for standard valves. The principal research methods can be divided into a literature review of transfer pricing and cost-estimation processes, a field analysis, and the further development of the existing Microsoft Excel transfer pricing tool at the interface between departments. Transfer pricing methods are commonly divided into cost-based, market-based, and negotiation-based models, which on their own rarely meet the objectives set for transfer pricing. This can lead to situations in which two separate methods merge into one. In addition, the actual transfer pricing system is usually affected by several internal and external factors. The final transfer pricing method should also clearly support the company's vision and the other strategies set for the business. The result of the work is an extended Microsoft Excel application that requires both annual and monthly updating of price and delivery-time data for special valve materials. This approach clearly improves the cost-estimation process, because subcontractor data must also be examined systematically. The whole transfer pricing process can then be developed further by converting the cost structure of the assembly and testing work phases to follow an activity-based costing model.
Abstract:
Alpine tree-line ecotones are characterized by marked changes at small spatial scales that may result in a variety of physiognomies. A set of alternative individual-based models was tested with data from four contrasting Pinus uncinata ecotones in the central Spanish Pyrenees to reveal the minimal subset of processes required for tree-line formation. A Bayesian approach combined with Markov chain Monte Carlo methods was employed to obtain the posterior distribution of model parameters, allowing the use of model selection procedures. The main features of real tree lines emerged only in models considering nonlinear responses in individual rates of growth or mortality with respect to the altitudinal gradient. Variation in tree-line physiognomy reflected mainly changes in the relative importance of these nonlinear responses, while other processes, such as dispersal limitation and facilitation, played a secondary role. Different nonlinear responses also determined the presence or absence of krummholz, in agreement with recent findings highlighting a different response of diffuse and abrupt or krummholz tree lines to climate change. The method presented here can be widely applied in individual-based simulation models and will turn model selection and evaluation in this type of models into a more transparent, effective, and efficient exercise.
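The Bayesian posterior sampling described above can be illustrated with a generic Metropolis sampler (a toy one-parameter mortality model, not the authors' individual-based tree-line models):

```python
import numpy as np

# Generic Metropolis sketch: draw the posterior of a single mortality-rate
# parameter given binomial survival counts, the kind of parameter inference
# used inside individual-based models.
def metropolis(log_post, x0, n_iter=5000, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    chain = np.empty(n_iter)
    x, lp = x0, log_post(x0)
    for t in range(n_iter):
        prop = x + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            x, lp = prop, lp_prop
        chain[t] = x
    return chain

# Toy data: 30 deaths out of 100 trees; flat prior on the logit scale
deaths, n = 30, 100
def log_post(theta):                  # theta = logit(mortality rate)
    p = 1.0 / (1.0 + np.exp(-theta))
    return deaths * np.log(p) + (n - deaths) * np.log(1.0 - p)

chain = metropolis(log_post, x0=0.0)
p_hat = 1.0 / (1.0 + np.exp(-chain[1000:].mean()))   # posterior-mean estimate
```

The posterior samples obtained this way are exactly what model-selection criteria then operate on when comparing alternative process subsets.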
Abstract:
Biotic interactions are known to affect the composition of species assemblages via several mechanisms, such as competition and facilitation. However, most spatial models of species richness do not explicitly consider inter-specific interactions. Here, we test whether incorporating biotic interactions into high-resolution models alters predictions of species richness as hypothesised. We included key biotic variables (cover of three dominant arctic-alpine plant species) into two methodologically divergent species richness modelling frameworks - stacked species distribution models (SSDM) and macroecological models (MEM) - for three ecologically and evolutionarily distinct taxonomic groups (vascular plants, bryophytes and lichens). Predictions from models including biotic interactions were compared to the predictions of models based on climatic and abiotic data only. Including plant-plant interactions consistently and significantly lowered bias in species richness predictions and increased predictive power for independent evaluation data when compared to the conventional climatic and abiotic data-based models. Improvements in predictions were constant irrespective of the modelling framework or taxonomic group used. The global biodiversity crisis necessitates accurate predictions of how changes in biotic and abiotic conditions will potentially affect species richness patterns. Here, we demonstrate that models of the spatial distribution of species richness can be improved by incorporating biotic interactions, and thus that these key predictor factors must be accounted for in biodiversity forecasts.
Abstract:
The objective of this study was to select allometric models to estimate total and pooled aboveground biomass of 4.5-year-old capixingui trees established in an agrisilvicultural system. Aboveground biomass distribution of capixingui was also evaluated. Single-entry (diameter at breast height [DBH], crown diameter, or stem diameter as the independent variable) and double-entry (DBH, crown diameter, or stem diameter plus total height as independent variables) models were studied. The estimated total biomass was 17.3 t ha-1, corresponding to 86.6 kg per tree. All models showed a good fit to the data (R2adj > 0.85) for bole, branches, and total biomass. DBH-based models presented the best residual distribution. The model lnW = b0 + b1*lnDBH can be recommended for aboveground biomass estimation. Lower coefficients were obtained for leaves (R2adj > 0.82). Biomass distribution followed the order bole > branches > leaves. Bole biomass percentage decreased with increasing DBH of the trees, whereas branch biomass increased.
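Fitting the recommended model lnW = b0 + b1*lnDBH amounts to ordinary least squares on log-transformed data. A sketch with synthetic data (the paper's fitted coefficients are not reproduced here; the generating parameters below are illustrative):

```python
import numpy as np

# Fit the log-linear allometric model lnW = b0 + b1*lnDBH by least squares.
def fit_allometric(dbh, biomass):
    x, y = np.log(dbh), np.log(biomass)
    b1, b0 = np.polyfit(x, y, 1)          # slope, intercept
    return b0, b1

def predict_biomass(dbh, b0, b1):
    # Back-transform to the original scale: W = exp(b0) * DBH^b1
    return np.exp(b0 + b1 * np.log(dbh))

# Synthetic example: W = 0.1 * DBH^2.4 with small lognormal noise
rng = np.random.default_rng(0)
dbh = rng.uniform(5.0, 30.0, 40)
w = 0.1 * dbh ** 2.4 * np.exp(rng.normal(0.0, 0.05, 40))
b0, b1 = fit_allometric(dbh, w)
```

Note that back-transforming from the log scale slightly underestimates the mean biomass; a correction factor (e.g. Baskerville's) is often applied in practice.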
Abstract:
In the field of molecular biology, scientists adopted for decades a reductionist perspective in their inquiries, being predominantly concerned with the intricate mechanistic details of subcellular regulatory systems. However, integrative thinking had been applied at a smaller scale in molecular biology to understand the underlying processes of cellular behaviour for at least half a century. It was not until the genomic revolution at the end of the previous century that model building was required to account for systemic properties of cellular activity. Our system-level understanding of cellular function is to this day hindered by drastic limitations in our capability of predicting cellular behaviour to reflect system dynamics and system structures. To this end, systems biology aims for a system-level understanding of functional intra- and inter-cellular activity. Modern biology brings about a high volume of data, whose comprehension we cannot even aim for in the absence of computational support. Computational modelling hence bridges modern biology to computer science, enabling a number of assets that prove invaluable in the analysis of complex biological systems: a rigorous characterization of the system structure, simulation techniques, perturbation analysis, etc. Computational biomodels have grown considerably in size in the past years, with major contributions made towards the simulation and analysis of large-scale models, starting with signalling pathways and culminating with whole-cell models, tissue-level models, organ models and full-scale patient models. The simulation and analysis of models of such complexity very often require, in fact, the integration of various sub-models, entwined at different levels of resolution and whose organization spans several levels of hierarchy. This thesis revolves around the concept of quantitative model refinement in relation to the process of model building in computational systems biology.
The thesis proposes a sound computational framework for the stepwise augmentation of a biomodel. One starts with an abstract, high-level representation of a biological phenomenon, which is materialised into an initial model that is validated against a set of existing data. The model is then refined to include more details regarding its species and/or reactions. The framework is employed in the development of two models, one for the heat shock response in eukaryotes and the second for the ErbB signalling pathway. The thesis spans several formalisms used in computational systems biology that are inherently quantitative - reaction-network models, rule-based models and Petri net models - as well as a recent, intrinsically qualitative formalism: reaction systems. The choice of modelling formalism is, however, determined by the nature of the question the modeller aims to answer. Quantitative model refinement turns out to be not only essential in the model development cycle, but also beneficial for the compilation of large-scale models, whose development requires the integration of several sub-models across various levels of resolution and underlying formal representations.
Abstract:
The advancement of science and technology makes it clear that no single perspective is any longer sufficient to describe the true nature of any phenomenon. That is why interdisciplinary research is gaining more attention over time. An excellent example of this type of research is natural computing, which stands on the borderline between biology and computer science. The contribution of research done in natural computing is twofold: on one hand, it sheds light on how nature works and how it processes information and, on the other hand, it provides some guidelines on how to design bio-inspired technologies. The first direction in this thesis focuses on a nature-inspired process called gene assembly in ciliates. The second studies reaction systems, a modelling framework whose rationale is built upon the biochemical interactions happening within a cell. The process of gene assembly in ciliates has attracted a lot of attention as a research topic in the past 15 years. Two main modelling frameworks were initially proposed at the end of the 1990s to capture ciliates' gene assembly process, namely the intermolecular model and the intramolecular model. They were followed by other model proposals such as template-based assembly and DNA rearrangement pathways recombination models. In this thesis we are interested in a variation of the intramolecular model called the simple gene assembly model, which focuses on the simplest possible folds in the assembly process. We propose a new framework called directed overlap-inclusion (DOI) graphs to overcome the limitations that previously introduced models faced in capturing all the combinatorial details of the simple gene assembly process. We investigate a number of combinatorial properties of these graphs, including a necessary property in terms of forbidden induced subgraphs.
We also introduce DOI graph-based rewriting rules that capture all the operations of the simple gene assembly model and prove that they are equivalent to the string-based formalization of the model. Reaction systems (RS) is another nature-inspired modeling framework that is studied in this thesis. Reaction systems’ rationale is based upon two main regulation mechanisms, facilitation and inhibition, which control the interactions between biochemical reactions. Reaction systems is a complementary modeling framework to traditional quantitative frameworks, focusing on explicit cause-effect relationships between reactions. The explicit formulation of facilitation and inhibition mechanisms behind reactions, as well as the focus on interactions between reactions (rather than dynamics of concentrations) makes their applicability potentially wide and useful beyond biological case studies. In this thesis, we construct a reaction system model corresponding to the heat shock response mechanism based on a novel concept of dominance graph that captures the competition on resources in the ODE model. We also introduce for RS various concepts inspired by biology, e.g., mass conservation, steady state, periodicity, etc., to do model checking of the reaction systems based models. We prove that the complexity of the decision problems related to these properties varies from P to NP- and coNP-complete to PSPACE-complete. We further focus on the mass conservation relation in an RS and introduce the conservation dependency graph to capture the relation between the species and also propose an algorithm to list the conserved sets of a given reaction system.
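The basic result function of a reaction system, on which such model checking builds, can be stated compactly. A minimal implementation (the species names in the toy example are hypothetical, not the thesis' heat shock model):

```python
# Reaction-systems result function: a reaction (R, I, P) is enabled on a state
# T when all reactants R are present and no inhibitor in I is; the successor
# state is the union of the products of all enabled reactions. Note there is
# no permanency: species not produced simply vanish.
def result(reactions, state):
    state = frozenset(state)
    nxt = set()
    for reactants, inhibitors, products in reactions:
        if reactants <= state and not (inhibitors & state):
            nxt |= products
    return frozenset(nxt)

# Toy model: heat activates a sensor unless a repressor is present
rs = [
    ({"heat"}, {"repressor"}, {"sensor"}),
    ({"sensor"}, set(), {"response"}),
]
s1 = result(rs, {"heat"})     # the sensor is produced
s2 = result(rs, s1)           # the sensor, in turn, triggers the response
```

Properties such as steady states or periodicity are then statements about the orbits of this result function, which is why their decision problems fall into the complexity classes listed above.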
Abstract:
Routine activity theory, introduced by Cohen and Felson in 1979, states that criminal acts occur when criminals and victims converge in time and place in the absence of guardians. As the number of such convergences increases, criminal acts will also increase, even if the number of criminals or civilians within the vicinity of a city remains the same. Street robbery is a typical example of routine activity theory, and its occurrence can be predicted using the theory. Agent-based models allow the simulation of diversity among individuals; therefore, agent-based simulation of street robbery can be used to visualize how chronological aspects of human activity influence the incidence of street robbery. The conceptual model identifies three classes of people (criminals, civilians and police), with certain activity areas for each. Police exist only as agents of formal guardianship. Criminals with a tendency for crime search for their victims. Civilians without criminal tendency can be either victims or guardians. In addition to criminal tendency, each civilian in the model has a unique set of characteristics such as wealth, employment status and ability for guardianship. These agents are subjected to a random walk through a street environment guided by a Q-learning module, and the possible outcomes are analyzed.
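The tabular Q-learning update that could drive such an agent's walk can be sketched as follows (a hypothetical one-dimensional street grid; the paper's environment, rewards and parameters are not specified, so all values below are illustrative):

```python
import numpy as np

# Tabular Q-learning on a toy street: states are cells on a line, actions are
# move left (0) or right (1), and the agent learns to reach a goal cell.
def q_learn(n_cells=10, goal=9, episodes=300, alpha=0.5, gamma=0.9,
            eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    q = np.zeros((n_cells, 2))                 # Q[state, action]
    for _ in range(episodes):
        s = 0
        while s != goal:
            # epsilon-greedy action selection
            a = rng.integers(2) if rng.uniform() < eps else int(np.argmax(q[s]))
            s2 = max(0, s - 1) if a == 0 else min(n_cells - 1, s + 1)
            r = 1.0 if s2 == goal else -0.01   # goal reward, small step cost
            # Q-learning update: bootstrap from the best next-state value
            q[s, a] += alpha * (r + gamma * np.max(q[s2]) - q[s, a])
            s = s2
    return q

q = q_learn()
policy = np.argmax(q, axis=1)                  # greedy action per cell
```

In the robbery simulation the reward signal would instead encode the agent class's objectives (e.g. encountering a victim without a guardian present), but the update rule is the same.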
Abstract:
Understanding how biological visual systems perform object recognition is one of the ultimate goals in computational neuroscience. Among the biological models of recognition the main distinctions are between feedforward and feedback and between object-centered and view-centered. From a computational viewpoint the different recognition tasks - for instance categorization and identification - are very similar, representing different trade-offs between specificity and invariance. Thus the different tasks do not strictly require different classes of models. The focus of the review is on feedforward, view-based models that are supported by psychophysical and physiological data.
Abstract:
Baylis & Driver (Nature Neuroscience, 2001) have recently presented data on the response of neurons in macaque inferotemporal cortex (IT) to various stimulus transformations. They report that neurons can generalize over contrast and mirror reversal, but not over figure-ground reversal. This finding is taken to demonstrate that "the selectivity of IT neurons is not determined simply by the distinctive contours in a display, contrary to simple edge-based models of shape recognition", citing our recently presented model of object recognition in cortex (Riesenhuber & Poggio, Nature Neuroscience, 1999). In this memo, I show that the main effects of the experiment can be obtained by performing the appropriate simulations in our simple feedforward model. This suggests for IT cell tuning that the possible contributions of explicit edge assignment processes postulated in (Baylis & Driver, 2001) might be smaller than expected.
Abstract:
Starting from the identification of the different actors involved in the environment in which the restaurant El Molino operates, this work seeks to determine which marketing strategies would be most effective for making the restaurant's image, concept and service (the brand in general) as attractive as possible to the company's target segments. Since the business is recent, no historical data exist on the image the brand projects to its customers; the information from which alternatives will be generated to make the brand influence customers in the desired way will therefore be obtained from a simulation produced by an agent-based model. The aim is to parametrise in which aspects, and in what way, the company should invest so that the way customers perceive the brand is the one the restaurant desires.
Abstract:
Despite the many models developed for phosphorus concentration prediction at differing spatial and temporal scales, there has been little effort to quantify uncertainty in their predictions. Quantifying model prediction uncertainty is desirable for informed decision-making in river-systems management. An uncertainty analysis of a process-based model, the integrated catchment model of phosphorus (INCA-P), within the generalised likelihood uncertainty estimation (GLUE) framework is presented. The framework is applied to the Lugg catchment (1,077 km2), a River Wye tributary on the England-Wales border. Daily discharge and monthly phosphorus (total reactive and total), for a limited number of reaches, are used to initially assess the uncertainty and sensitivity of 44 model parameters identified as being most important for discharge and phosphorus predictions. This study demonstrates that parameter homogeneity assumptions (spatial heterogeneity is treated as fractional areas of land-use type) can achieve higher model fits than a previous expertly calibrated parameter set. The model is capable of reproducing the hydrology, but a threshold Nash-Sutcliffe coefficient of determination (E or R2) of 0.3 is not achieved when simulating observed total phosphorus (TP) data in the upland reaches or total reactive phosphorus (TRP) in any reach. Despite this, the model reproduces the general dynamics of TP and TRP in the point-source-dominated lower reaches. This paper discusses why this application of INCA-P fails to find any parameter sets which simultaneously describe all observed data acceptably. The discussion focuses on the uncertainty of readily available input data, and on whether such process-based models should be used when there is insufficient data to support their many parameters.
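The GLUE procedure applied above can be illustrated in miniature (a toy model stands in for INCA-P; the prior range, the 0.3 behavioural threshold and the likelihood weighting are illustrative assumptions in the spirit of the framework):

```python
import numpy as np

# GLUE sketch: Monte Carlo parameter sets are retained as "behavioural" when
# their Nash-Sutcliffe efficiency exceeds a threshold, and predictions are
# weighted by that likelihood measure.
def nash_sutcliffe(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def glue(obs, model, n_samples=2000, threshold=0.3, seed=0):
    rng = np.random.default_rng(seed)
    params = rng.uniform(0.0, 2.0, n_samples)     # prior: U(0, 2)
    sims = np.array([model(p) for p in params])
    ns = np.array([nash_sutcliffe(obs, s) for s in sims])
    keep = ns > threshold                          # behavioural parameter sets
    w = ns[keep] / ns[keep].sum()                  # likelihood weights
    pred = w @ sims[keep]                          # weighted prediction
    return params[keep], pred

# Toy "catchment": observations generated with true parameter 1.2 plus noise
t = np.linspace(0.0, 10.0, 50)
model = lambda p: p * np.sin(t) + p
rng = np.random.default_rng(1)
obs = model(1.2) + rng.normal(0.0, 0.1, 50)
behavioural, pred = glue(obs, model)
```

When no parameter set clears the threshold for all observed series simultaneously, as reported above for TP and TRP, the behavioural set is empty for the joint criterion, which is precisely the diagnostic GLUE makes visible.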
Abstract:
We assessed the vulnerability of blanket peat to climate change in Great Britain using an ensemble of 8 bioclimatic envelope models. We used 4 published models that ranged from simple threshold models, based on total annual precipitation, to Generalised Linear Models (GLMs, based on mean annual temperature). In addition, 4 new models were developed which included measures of water deficit as threshold, classification tree, GLM and generalised additive models (GAM). Models that included measures of both hydrological conditions and maximum temperature provided a better fit to the mapped peat area than models based on hydrological variables alone. Under UKCIP02 projections for high (A1F1) and low (B1) greenhouse gas emission scenarios, 7 out of the 8 models showed a decline in the bioclimatic space associated with blanket peat. Eastern regions (Northumbria, North York Moors, Orkney) were shown to be more vulnerable than higher-altitude, western areas (Highlands, Western Isles and Argyle, Bute and The Trossachs). These results suggest a long-term decline in the distribution of actively growing blanket peat, especially under the high emissions scenario, although it is emphasised that existing peatlands may well persist for decades under a changing climate. Observational data from long-term monitoring and manipulation experiments in combination with process-based models are required to explore the nature and magnitude of climate change impacts on these vulnerable areas more fully.
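A threshold-style bioclimatic envelope of the kind described can be sketched as follows (the thresholds and the simple accuracy score below are illustrative assumptions, not the published models' values):

```python
import numpy as np

# Threshold envelope: a cell is classed as suitable for blanket peat when
# precipitation is high enough and warmest-period temperature low enough.
def envelope(precip, tmax, p_min=1000.0, t_lim=15.0):
    return (precip >= p_min) & (tmax <= t_lim)

def accuracy(pred, observed):
    # Simple proportion of cells classified correctly (not Cohen's kappa)
    return np.mean(pred == observed)

# Synthetic climate grid and a toy "mapped" peat area matching the thresholds
rng = np.random.default_rng(0)
precip = rng.uniform(600.0, 2000.0, 1000)
tmax = rng.uniform(10.0, 20.0, 1000)
observed = (precip >= 1000.0) & (tmax <= 15.0)
baseline = envelope(precip, tmax)
warmer = envelope(precip, tmax + 2.0)   # a +2 degC scenario shrinks the envelope
```

Under the warming scenario the number of suitable cells falls, mirroring the decline in bioclimatic space that 7 of the 8 models project; classification trees, GLMs and GAMs replace the hard thresholds with fitted response surfaces but are applied the same way.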