998 results for Multidimensional modelling
Abstract:
This paper aims at developing a simulation framework to jointly assess agricultural and water issues. While the strong linkages between water, food, and the environment call for an integrated and multidisciplinary modelling approach, a complete and consistent modelling system to evaluate food-water relationships in Europe has so far been missing. The CAPRI spatial economic simulation model for agriculture, which comprises a set of environmental indicators to assess food-environment interrelations within European regions, has been extended to account for food-water links. This modelling framework makes it possible to simulate the potential impact of climate change and water availability on agricultural production at the EU regional level, as well as to examine the sustainable use of water, the implementation of water policies, and the integration of water issues into the Common Agricultural Policy.
Abstract:
Sustainability and the food-water-environment nexus. Food-water linkages in global agro-economic models. The CAPRI water module. Potential to jointly assess food and water policies. Pilot case study. Further development.
Abstract:
Anaerobic digestion is a multistep process, mediated by a functionally and phylogenetically diverse microbial population. One of the crucial steps is the oxidation of organic acids, with electron transfer via hydrogen or formate from acetogenic bacteria to methanogens. This syntrophic microbiological process is strongly restricted by a thermodynamic limitation on the allowable hydrogen or formate concentration. To study this process in more detail, we developed an individual-based biofilm model that describes the processes at microbial resolution. The biochemical model is the ADM1, implemented in a multidimensional domain. With this model, we evaluated three important questions for the syntrophic relationship: (i) is there a fundamental difference between using hydrogen or formate as the electron carrier? (ii) does a thermodynamics-based inhibition function produce substantially different results from an empirical function? and (iii) does the physical co-location of acetogens and methanogens follow directly from a general model? The choice of hydrogen or formate as electron carrier had no substantial impact on model results. The standard and thermodynamics-based inhibition functions gave similar results at larger substrate field grid sizes (> 10 µm), but at smaller grid sizes the thermodynamics-based function reduced the number of cells with long interspecies distances (> 2.5 µm). A very fine grid resolution is therefore needed to reflect differences between the thermodynamic function and a more generic inhibition form. The co-location of syntrophic bacteria was well predicted without the need to assume a microbiologically based mechanism (e.g., chemotaxis) of biofilm formation.
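As a rough illustration of the two inhibition forms compared above, the sketch below contrasts an ADM1-style empirical non-competitive term with a thermodynamic switch based on the actual Gibbs free energy change; all constants, the temperature and the linear ramp are assumptions for illustration, not the paper's calibrated model.

```python
import numpy as np

R = 8.314e-3    # kJ/(mol*K)
T = 308.15      # K, mesophilic digestion temperature (assumed)

def gibbs_free_energy(dg0, reaction_quotient):
    """dG = dG0 + R*T*ln(Q); dg0 in kJ/mol, Q from the actual concentrations."""
    return dg0 + R * T * np.log(reaction_quotient)

def empirical_inhibition(h2, ki_h2=3.5e-6):
    """ADM1-style non-competitive term I = 1 / (1 + S_I / K_I);
    ki_h2 is a hypothetical hydrogen inhibition constant (kgCOD/m^3)."""
    return 1.0 / (1.0 + h2 / ki_h2)

def thermodynamic_inhibition(dg, dg_min=-15.0):
    """Scale growth down as dG approaches zero; the linear ramp used here is
    purely illustrative, not the paper's exact thermodynamic function."""
    if dg >= 0.0:
        return 0.0          # reaction not thermodynamically feasible
    return min(1.0, dg / dg_min)

# Hypothetical values: an endergonic standard free energy that becomes
# exergonic at a low hydrogen level (small reaction quotient).
dg = gibbs_free_energy(76.1, 1e-14)
print(empirical_inhibition(1e-6), thermodynamic_inhibition(dg))
```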
Abstract:
Nowadays, data mining is based on low-level specifications of the employed techniques, typically bound to a specific analysis platform. Therefore, data mining lacks a modelling architecture that allows analysts to treat it as a true software-engineering process. Here, we propose a model-driven approach based on (i) a conceptual modelling framework for data mining, and (ii) a set of model transformations to automatically generate both the data under analysis (via data-warehousing technology) and the analysis models for data mining (tailored to a specific platform). Thus, analysts can concentrate on the analysis problem via conceptual data-mining models instead of low-level programming tasks related to the technical details of the underlying platform. These tasks are now entrusted to the model-transformation scaffolding.
Abstract:
Data mining is one of the most important analysis techniques for automatically extracting knowledge from large amounts of data. Nowadays, data mining is based on low-level specifications of the employed techniques, typically bound to a specific analysis platform. Therefore, data mining lacks a modelling architecture that allows analysts to treat it as a true software-engineering process. With this situation in mind, we propose a model-driven approach based on (i) a conceptual modelling framework for data mining, and (ii) a set of model transformations to automatically generate both the data under analysis (deployed via data-warehousing technology) and the analysis models for data mining (tailored to a specific platform). Thus, analysts can concentrate on understanding the analysis problem via conceptual data-mining models instead of wasting effort on low-level programming tasks related to the technical details of the underlying platform. These time-consuming tasks are now entrusted to the model-transformation scaffolding. The feasibility of our approach is shown by means of a hypothetical data-mining scenario in which a time series analysis is required.
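As a loose illustration of the model-driven idea (conceptual mining model in, platform-specific code out), the toy transformation below maps a hypothetical platform-independent specification to scikit-learn code; the metamodel, attribute names and target platform are assumptions, not the authors' framework.

```python
# Hypothetical platform-independent model of a data-mining task.
conceptual_model = {
    "analysis": "time_series",
    "target": "monthly_sales",
    "inputs": ["month"],
    "algorithm": {"family": "clustering", "k": 3},
}

def to_platform_model(model):
    """Toy model-to-text transformation: emits scikit-learn code for the
    conceptual model above. Real MDA tool chains would use e.g. QVT or ATL."""
    if model["algorithm"]["family"] == "clustering":
        columns = model["inputs"] + [model["target"]]
        return (
            "from sklearn.cluster import KMeans\n"
            f"estimator = KMeans(n_clusters={model['algorithm']['k']})\n"
            f"estimator.fit(data[{columns!r}])"
        )
    raise NotImplementedError(model["algorithm"]["family"])

print(to_platform_model(conceptual_model))
```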
Abstract:
Modern societies depend increasingly on computer systems, and development teams therefore face growing pressure to produce high-quality software. Many companies use quality models, suites of programs that analyse and assess the quality of other programs, but building quality models is difficult because several questions remain unanswered in the literature. We studied quality-modelling practices at a large company and identified three dimensions where further research is desirable: supporting the subjectivity of quality, techniques for tracking quality as software evolves, and the composition of quality across levels of abstraction. Regarding subjectivity, we proposed the use of Bayesian models because they can handle ambiguous data. We applied our models to the problem of detecting design defects. In a study of two open-source systems, we found that our approach outperforms the rule-based techniques described in the state of the art. To support software evolution, we treated the scores produced by a quality model as signals that can be analysed with data-mining techniques to identify patterns of quality evolution. We studied how design defects appear in and disappear from software. Software is typically designed as a hierarchy of components, but quality models do not take this organisation into account. In the last part of the dissertation, we present a two-level quality model. These models have three parts: a model at the component level, a model that assesses the importance of each component, and another that assesses the quality of a composite by combining the quality of its components. The approach was tested on predicting change-prone classes from the quality of their methods. We found that our two-level models identify change-prone classes better. Finally, we applied our two-level models to assessing website navigability from the quality of individual pages. Our models were able to distinguish between very high-quality sites and randomly chosen sites. Throughout the dissertation, we not only present theoretical problems and their solutions, but also report experiments that demonstrate the advantages and limitations of our solutions. Our results indicate that the state of the art can be improved along the three dimensions presented. In particular, our work on quality composition and importance modelling is the first to target this problem. We believe that our two-level models are an interesting starting point for further research.
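A minimal sketch of how metric-based symptoms could be combined into a posterior probability of a design defect, in the Bayesian spirit described above; the symptoms, prior and likelihoods are invented for illustration and are not the thesis's actual models.

```python
def posterior_blob(symptoms, prior=0.05):
    """Combine noisy metric 'symptoms' into a posterior probability that a
    class is a Blob, by a naive Bayes update. All numbers are hypothetical."""
    # (P(symptom | Blob), P(symptom | not Blob)) for each observed symptom
    likelihoods = {
        "high_loc":       (0.8, 0.2),
        "low_cohesion":   (0.7, 0.3),
        "many_data_deps": (0.6, 0.25),
    }
    p_blob, p_not = prior, 1.0 - prior
    for s in symptoms:
        l_blob, l_not = likelihoods[s]
        p_blob *= l_blob
        p_not *= l_not
    return p_blob / (p_blob + p_not)

print(posterior_blob(["high_loc", "low_cohesion"]))
```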
Abstract:
Sensitivity of output of a linear operator to its input can be quantified in various ways. In Control Theory, the input is usually interpreted as disturbance and the output is to be minimized in some sense. In stochastic worst-case design settings, the disturbance is considered random with imprecisely known probability distribution. The prior set of probability measures can be chosen so as to quantify how far the disturbance deviates from the white-noise hypothesis of Linear Quadratic Gaussian control. Such deviation can be measured by the minimal Kullback-Leibler informational divergence from the Gaussian distributions with zero mean and scalar covariance matrices. The resulting anisotropy functional is defined for finite power random vectors. Originally, anisotropy was introduced for directionally generic random vectors as the relative entropy of the normalized vector with respect to the uniform distribution on the unit sphere. The associated a-anisotropic norm of a matrix is then its maximum root mean square or average energy gain with respect to finite power or directionally generic inputs whose anisotropy is bounded above by a≥0. We give a systematic comparison of the anisotropy functionals and the associated norms. These are considered for unboundedly growing fragments of homogeneous Gaussian random fields on multidimensional integer lattice to yield mean anisotropy. Correspondingly, the anisotropic norms of finite matrices are extended to bounded linear translation invariant operators over such fields.
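For orientation, the definitions below restate the anisotropy functional and the a-anisotropic norm as they are usually given in the anisotropy-based control literature; the notation here is illustrative and may differ from the paper's.

```latex
% Anisotropy of a finite-power random vector w taking values in R^m:
% the minimal Kullback-Leibler divergence from the Gaussian laws with
% zero mean and scalar covariance matrices.
\[
  \mathbf{A}(w) \;=\; \min_{\lambda > 0}\,
  D\!\left( \mathrm{Law}(w) \,\middle\|\, \mathcal{N}(0, \lambda I_m) \right).
\]
% The a-anisotropic norm of a matrix F is its worst-case root-mean-square
% gain over inputs whose anisotropy does not exceed a >= 0:
\[
  \|F\|_{a}
  \;=\;
  \sup\Bigl\{ \tfrac{\|Fw\|_{\mathrm{P}}}{\|w\|_{\mathrm{P}}}
              \;:\; \mathbf{A}(w) \le a \Bigr\},
  \qquad
  \|w\|_{\mathrm{P}} = \sqrt{\mathbf{E}\,|w|^{2}}.
\]
```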
Abstract:
The ecotoxicological response of the living organisms in an aquatic system depends on the physical, chemical and bacteriological variables, as well as the interactions between them. An important challenge to scientists is to understand the interaction and behaviour of factors involved in a multidimensional process such as the ecotoxicological response. With this aim, multiple linear regression (MLR) and principal component regression were applied to the ecotoxicity bioassay response of Chlorella vulgaris and Vibrio fischeri in water collected at seven sites of the Leça River during five monitoring campaigns (February, May, June, August and September of 2006). The river water characterization included the analysis of 22 physicochemical and 3 microbiological parameters. The model that best fitted the data was MLR, which shows: (i) a negative correlation with dissolved organic carbon, zinc and manganese, and a positive one with turbidity and arsenic, regarding C. vulgaris toxic response; (ii) a negative correlation with conductivity and turbidity and a positive one with phosphorus, hardness, iron, mercury, arsenic and faecal coliforms, concerning V. fischeri toxic response. This integrated assessment may allow the evaluation of the effect of future pollution abatement measures over the water quality of the Leça River.
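A brief sketch of the two regression approaches mentioned above, on synthetic data with a few of the named predictors; column names, sample size and values are placeholders, not the Leça River dataset.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Hypothetical water-quality table: rows = site/campaign samples (7 sites x 5
# campaigns), columns = a few of the physicochemical predictors in the abstract.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(35, 5)),
                 columns=["DOC", "turbidity", "Zn", "Mn", "As"])
y = rng.normal(size=35)      # e.g. a C. vulgaris growth-inhibition response

mlr = LinearRegression().fit(X, y)          # multiple linear regression
pcr = make_pipeline(PCA(n_components=3),    # principal component regression
                    LinearRegression()).fit(X, y)

print(dict(zip(X.columns, mlr.coef_.round(3))))  # sign = direction of correlation
print("PCR R^2:", round(pcr.score(X, y), 3))
```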
Abstract:
Our work focuses on alleviating the workload of designers of adaptive courses in the complex task of authoring adaptive learning designs adjusted to specific user characteristics and the user context. We propose an adaptation platform that consists of a set of intelligent agents, where each agent carries out an independent adaptation task. The agents apply machine learning techniques to support user modelling for the adaptation process.
Abstract:
1. The ecological niche is a fundamental biological concept. Modelling species' niches is central to numerous ecological applications, including predicting species invasions, identifying reservoirs for disease, nature reserve design and forecasting the effects of anthropogenic and natural climate change on species' ranges. 2. A computational analogue of Hutchinson's ecological niche concept (the multidimensional hyperspace of species' environmental requirements) is the support of the distribution of environments in which the species persist. Recently developed machine-learning algorithms can estimate the support of such high-dimensional distributions. We show how support vector machines can be used to map ecological niches using only observations of species presence to train distribution models for 106 species of woody plants and trees in a montane environment using up to nine environmental covariates. 3. We compared the accuracy of three methods that differ in their approaches to reducing model complexity. We tested models with independent observations of both species presence and species absence. We found that the simplest procedure, which uses all available variables and no pre-processing to reduce correlation, was best overall. Ecological niche models based on support vector machines are theoretically superior to models that rely on simulating pseudo-absence data and are comparable in empirical tests. 4. Synthesis and applications. Accurate species distribution models are crucial for effective environmental planning, management and conservation, and for unravelling the role of the environment in human health and welfare. Models based on distribution estimation rather than classification overcome theoretical and practical obstacles that pervade species distribution modelling. In particular, ecological niche models based on machine-learning algorithms for estimating the support of a statistical distribution provide a promising new approach to identifying species' potential distributions and to project changes in these distributions as a result of climate change, land use and landscape alteration.
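A minimal presence-only sketch of the support-estimation idea above, using a one-class support vector machine on synthetic environmental covariates; the covariates, kernel settings and data are assumptions, not the study's models.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM
from sklearn.pipeline import make_pipeline

# Presence-only niche modelling sketch: rows = locations where the species was
# observed, columns = environmental covariates (values are synthetic).
rng = np.random.default_rng(1)
presences = rng.normal(loc=[15.0, 800.0, 0.3],        # e.g. temperature,
                       scale=[2.0, 150.0, 0.05],      # precipitation, NDVI
                       size=(200, 3))

# The one-class SVM estimates the support of the distribution of environments
# in which the species was recorded (the "support" view of the niche).
model = make_pipeline(StandardScaler(),
                      OneClassSVM(kernel="rbf", nu=0.1, gamma="scale"))
model.fit(presences)

candidate_sites = rng.normal(loc=[14.0, 900.0, 0.25],
                             scale=[4.0, 300.0, 0.1], size=(5, 3))
print(model.predict(candidate_sites))   # +1 = inside estimated niche, -1 = outside
```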
Abstract:
Nuclear power plants are designed and built to withstand various operational disturbances and accidents without damage to the plant or risk to the population and the environment. It is highly unlikely that a nuclear power plant accident would progress as far as damage to the reactor core, in which case oxidation of the core materials can produce hydrogen. If the cooling circuit breaks, the hydrogen may be transported into the containment building, where it can form a flammable mixture with the oxygen in the air and burn or even explode. The temperature and pressure loads caused by a hydrogen fire threaten the integrity of the containment and the operability of the safety systems inside it, so an effective and reliable hydrogen management system is necessary. Passive autocatalytic hydrogen recombiners are used for hydrogen management in an increasing number of European nuclear power plants. These recombiners remove hydrogen through a catalytic reaction in which hydrogen reacts with oxygen on the catalyst surface to form water vapour. The recombiners are completely passive and require no external power or operator action to start up or operate. Research on recombiner behaviour aims to establish their performance in all possible accident scenarios, to optimise their design, and to determine their optimal number and location in the containment. Containment modelling is carried out with lumped parameter (LP) codes, multidimensional computational fluid dynamics (CFD) codes, or combinations of the two. Recombiner modelling in these codes is implemented with either an experimental, a theoretical, or a global-approach model. This master's thesis presents the results of verification calculations of the hydrogen consumption of the Siemens FR90/1-150 recombiner model included in the TONUS 0D code, as well as the results of TONUS 0D calculations of the interactions between Siemens recombiners. TONUS is an LP (0D) and CFD hydrogen analysis code developed by CEA (Commissariat à l'Énergie Atomique) and used for modelling hydrogen distribution, combustion and detonation. TONUS is also used to model hydrogen removal by passive autocatalytic recombiners. The factors affecting hydrogen consumption were separated and studied one at a time. To study recombiner interactions, recombiners of different sizes and in different numbers were placed in the same volume. The Siemens recombiner model in TONUS 0D computes the hydrogen consumption as expected, and the results confirm the reliability of the physical calculations in TONUS 0D. Possible local distributions within the studied volume could not be detected with the LP code, because it works with volume-averaged quantities; a CFD code is needed to study local distributions.
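A toy lumped-parameter (0D) sketch of what such a code computes: depletion of hydrogen in a single well-mixed volume by a recombiner with a simple first-order rate law; the rate law and all values are placeholders and are not the Siemens FR90/1-150 correlation implemented in TONUS.

```python
# Toy 0D balance: hydrogen depletion by a passive autocatalytic recombiner in
# one well-mixed volume. The first-order rate below is a placeholder, NOT the
# Siemens FR90/1-150 correlation in TONUS.
c_h2 = 0.04            # initial hydrogen molar fraction (assumed)
k = 0.12               # effective depletion rate constant, 1/min (placeholder)
dt, t_end = 0.5, 60.0  # time step and duration, minutes

t = 0.0
while t < t_end:
    c_h2 = max(0.0, c_h2 - k * c_h2 * dt)   # recombination consumes H2 (O2 in excess)
    t += dt

print(f"H2 molar fraction after {t_end:.0f} min: {c_h2:.4f}")
```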
Abstract:
We describe ncWMS, an implementation of the Open Geospatial Consortium’s Web Map Service (WMS) specification for multidimensional gridded environmental data. ncWMS can read data in a large number of common scientific data formats – notably the NetCDF format with the Climate and Forecast conventions – then efficiently generate map imagery in thousands of different coordinate reference systems. It is designed to require minimal configuration from the system administrator and, when used in conjunction with a suitable client tool, provides end users with an interactive means for visualizing data without the need to download large files or interpret complex metadata. It is also used as a “bridging” tool providing interoperability between the environmental science community and users of geographic information systems. ncWMS implements a number of extensions to the WMS standard in order to fulfil some common scientific requirements, including the ability to generate plots representing timeseries and vertical sections. We discuss these extensions and their impact upon present and future interoperability. We discuss the conceptual mapping between the WMS data model and the data models used by gridded data formats, highlighting areas in which the mapping is incomplete or ambiguous. We discuss the architecture of the system and particular technical innovations of note, including the algorithms used for fast data reading and image generation. ncWMS has been widely adopted within the environmental data community and we discuss some of the ways in which the software is integrated within data infrastructures and portals.
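As an illustration of the interface being described, the snippet below assembles a standard WMS 1.3.0 GetMap request with the optional TIME and ELEVATION dimensions that servers of this kind typically honour; the host, layer name and dimension values are placeholders.

```python
from urllib.parse import urlencode

# Sketch of a WMS 1.3.0 GetMap request of the kind an ncWMS-style server
# answers; host, layer name and dimension values are placeholders.
params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "ocean/sea_water_temperature",   # hypothetical layer name
    "CRS": "EPSG:4326",
    "BBOX": "-60,-180,60,180",                 # lat/lon axis order for EPSG:4326 in WMS 1.3.0
    "WIDTH": "1024",
    "HEIGHT": "512",
    "FORMAT": "image/png",
    "TIME": "2010-01-01T00:00:00Z",            # optional dimension (CF time axis)
    "ELEVATION": "-5.0",                       # optional dimension (depth level)
}
print("https://example.org/ncWMS/wms?" + urlencode(params))
```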
Abstract:
Purpose – This paper aims to address the gaps in service recovery strategy assessment. An effective service recovery strategy that prevents customer defection after a service failure is a powerful managerial instrument. The literature to date does not present a comprehensive assessment of service recovery strategy. It also lacks a clear picture of the service recovery actions at managers’ disposal in case of failure and the effectiveness of individual strategies on customer outcomes.
Design/methodology/approach – Based on service recovery theory, this paper proposes a formative index of service recovery strategy and empirically validates this measure using partial least-squares path modelling with survey data from 437 complainants in the telecommunications industry in Egypt.
Findings – The CURE scale (CUstomer REcovery scale) presents evidence of reliability as well as convergent, discriminant and nomological validity. Findings also reveal that problem-solving, speed of response, effort, facilitation and apology are the actions that have an impact on the customer’s satisfaction with service recovery.
Practical implications – This new formative index is of potential value in investigating links between strategy and customer evaluations of service by helping managers identify which actions contribute most to changes in the overall service recovery strategy as well as satisfaction with service recovery. Ultimately, the CURE scale facilitates the long-term planning of effective complaint management.
Originality/value – This is the first study in the service marketing literature to propose a comprehensive assessment of service recovery strategy and clearly identify the service recovery actions that contribute most to changes in the overall service recovery strategy.
Abstract:
Popular dimension reduction and visualisation algorithms, for instance metric Multidimensional Scaling, t-distributed Stochastic Neighbour Embedding and the Gaussian Process Latent Variable Model, typically rely on the assumption that the input dissimilarities are Euclidean. It is well known that this assumption does not hold for most datasets, and high-dimensional data often sits on a manifold of unknown global geometry. We present a method for improving the manifold charting process, coupled with Elastic MDS, such that we no longer assume that the manifold is Euclidean or of any particular structure. We draw on the benefits of different dissimilarity measures, allowing their relative responsibilities, under a linear combination, to drive the visualisation process.
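A small sketch of the linear-combination idea: several dissimilarity measures are blended with fixed weights and the result drives a metric MDS embedding; in the paper the relative responsibilities are learned rather than hand-picked, and Elastic MDS rather than plain metric MDS is used.

```python
import numpy as np
from sklearn.metrics import pairwise_distances
from sklearn.manifold import MDS

# Let a linear combination of several dissimilarity measures drive a metric
# MDS embedding. Data and weights are synthetic placeholders.
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 20))                 # synthetic high-dimensional data

D_euclid = pairwise_distances(X, metric="euclidean")
D_cosine = pairwise_distances(X, metric="cosine")
D_manhat = pairwise_distances(X, metric="manhattan")

weights = [0.5, 0.3, 0.2]                      # hand-picked "responsibilities"
D = weights[0] * D_euclid + weights[1] * D_cosine + weights[2] * D_manhat

embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(D)
print(embedding.shape)
```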