18 results for uncertainty modelling
at Universitat de Girona, Spain
Abstract:
It can be assumed that the composition of Mercury's thin gas envelope (exosphere) is related to the composition of the planet's crustal materials. If this relationship holds, then inferences about the bulk chemistry of the planet might be drawn from a thorough exospheric study. The most vexing of all unsolved problems is the uncertainty in the source of each component. Historically, it has been believed that H and He come primarily from the solar wind, while Na and K originate from volatilized materials partitioned between Mercury's crust and meteoritic impactors. The processes that eject atoms and molecules into the exosphere of Mercury are generally considered to be thermal vaporization, photon-stimulated desorption (PSD), impact vaporization, and ion sputtering. Each of these processes has its own temporal and spatial dependence. The exosphere is strongly influenced by Mercury's highly elliptical orbit and rapid orbital speed. As a consequence, the surface undergoes large fluctuations in temperature and experiences differences of insolation with longitude. We will discuss these processes, focusing on the expected surface composition and on solar wind particle sputtering, which releases material such as Ca and other elements from the surface minerals, and discuss the relevance of composition modelling.
Abstract:
Uncertainties not accounted for in the analytical model of a plant dramatically degrade the performance of fault detection in practice. To cope with this prevalent problem, this paper develops a methodology using Modal Interval Analysis which takes those uncertainties in the plant model into account. A fault detection method based on this model is quite robust to uncertainty and produces no false alarms. As soon as a fault is detected, an ANFIS model is trained online to capture the major behaviour of the fault that has occurred, which can then be used for fault accommodation. The simulation results clearly demonstrate the capability of the proposed method to accomplish both tasks.
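The detection principle can be sketched with plain interval arithmetic standing in for Modal Interval Analysis: if the measured output falls outside the envelope predicted from the uncertain model parameters, a fault is flagged. The static plant and gain interval below are hypothetical, chosen only for illustration.

```python
def interval_mul(a, b):
    """Product of two intervals given as (lo, hi) pairs."""
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def predict_envelope(u, gain=(0.9, 1.1)):
    """Output envelope of a toy static plant y = gain*u with uncertain gain."""
    return interval_mul(gain, (u, u))

def fault_detected(u, y_measured):
    """Flag a fault when the measurement is inconsistent with the envelope."""
    lo, hi = predict_envelope(u)
    return not (lo <= y_measured <= hi)

print(fault_detected(2.0, 2.05))  # False: measurement inside the envelope
print(fault_detected(2.0, 2.60))  # True: inconsistent with the uncertain model
```

Because the alarm fires only when no parameter value in the interval can explain the measurement, this scheme produces no false alarms as long as the true parameters lie inside the assumed intervals.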
Abstract:
This paper analyzes the optimal behavior of farmers in the presence of direct payments and uncertainty. In an empirical analysis for Switzerland, it confirms previously obtained theoretical results and determines the magnitude of the theoretically predicted effects. The results show that direct payments increase agricultural production by between 3.7% and 4.8%. As an alternative to direct payments, the production effect of tax reductions is evaluated in order to determine its magnitude. The empirical analysis corroborates the theoretical results in the literature and demonstrates that tax reductions are also distorting, but to a substantially lesser degree if losses are not offset. However, tax reductions, regardless of whether losses are offset, lead to higher government spending than pure direct payments.
Abstract:
This analysis was stimulated by the real data analysis problem of household expenditure data. The full dataset contains expenditure data for a sample of 1224 households. The expenditure is broken down at 2 hierarchical levels: 9 major levels (e.g. housing, food, utilities etc.) and 92 minor levels. There are also 5 factors and 5 covariates at the household level. Not surprisingly, there are a small number of zeros at the major level, but many zeros at the minor level. The question is how best to model the zeros. Clearly, models that try to add a small amount to the zero terms are not appropriate in general as at least some of the zeros are clearly structural, e.g. alcohol/tobacco for households that are teetotal. The key question then is how to build suitable conditional models. For example, is the sub-composition of spending excluding alcohol/tobacco similar for teetotal and non-teetotal households? In other words, we are looking for sub-compositional independence. Also, what determines whether a household is teetotal? Can we assume that it is independent of the composition? In general, whether teetotal will clearly depend on the household level variables, so we need to be able to model this dependence. The other tricky question is that with zeros on more than one component, we need to be able to model dependence and independence of zeros on the different components. Lastly, while some zeros are structural, others may not be, for example, for expenditure on durables, it may be chance as to whether a particular household spends money on durables within the sample period. This would clearly be distinguishable if we had longitudinal data, but may still be distinguishable by looking at the distribution, on the assumption that random zeros will usually be for situations where any non-zero expenditure is not small. 
While this analysis is based on economic data, the ideas carry over to many other situations, including geological data, where minerals may be missing for structural reasons (similar to alcohol), or missing because they occur only in random regions which may be missed in a sample (similar to the durables).
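The conditional-modelling idea behind the structural zeros can be sketched as a two-part (hurdle) model: a structural zero occurs with some probability, and otherwise a positive share is drawn. The probability and share distribution below are hypothetical, not estimates from the household dataset.

```python
import random

random.seed(0)

def alcohol_share(p_teetotal=0.3):
    """Two-part (hurdle) model: a structural zero with probability p_teetotal,
    otherwise a positive expenditure share (hypothetical distribution)."""
    if random.random() < p_teetotal:
        return 0.0
    return random.uniform(0.01, 0.15)

shares = [alcohol_share() for _ in range(10000)]
zero_rate = sum(s == 0.0 for s in shares) / len(shares)
print(round(zero_rate, 2))  # should be close to p_teetotal = 0.3
```

In a full model, p_teetotal would itself depend on the household-level factors and covariates, which is exactly the dependence the abstract argues must be modelled.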
Abstract:
In any discipline where uncertainty and variability are present, it is important to have principles which are accepted as inviolate and which should therefore drive statistical modelling, statistical analysis of data and any inferences from such an analysis. Despite the fact that two such principles have existed over the last two decades, and that from these a sensible, meaningful methodology has been developed for the statistical analysis of compositional data, the application of inappropriate and/or meaningless methods persists in many areas of application. This paper identifies at least ten common fallacies and confusions in compositional data analysis with illustrative examples, and provides readers with necessary, and hopefully sufficient, arguments to persuade the culprits why and how they should amend their ways.
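The methodology alluded to is the log-ratio approach to compositional data; as a minimal illustration, the centred log-ratio (clr) transform maps a composition into coordinates in which standard multivariate tools become meaningful:

```python
import math

def clr(composition):
    """Centred log-ratio transform: log of each part over the geometric mean."""
    g = math.exp(sum(math.log(p) for p in composition) / len(composition))
    return [math.log(p / g) for p in composition]

coords = clr([0.2, 0.3, 0.5])
print(coords)  # clr coordinates sum to zero (up to rounding)
```

Working with log-ratios rather than raw proportions respects scale invariance, one of the principles the paper treats as inviolate.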
Abstract:
The identification of compositional changes in fumarolic gases of active and quiescent volcanoes is one of the most important targets in monitoring programs. From a general point of view, many systematic (often cyclic) and random processes control the chemistry of gas discharges, making it difficult to produce a convincing mathematical-statistical model. Changes in the chemical composition of volcanic gases sampled at Vulcano Island (Aeolian Arc, Sicily, Italy) from eight different fumaroles located in the northern sector of the summit crater (La Fossa) have been analysed by considering their dependence on time over the period 2000-2007. Each intermediate chemical composition has been considered as potentially derived from the contributions of the two temporal extremes, represented by the 2000 and 2007 samples respectively, using inverse modelling methodologies for compositional data. Data pertaining to fumaroles F5 and F27, located on the rim and in the inner part of La Fossa crater respectively, have been used to achieve the proposed aim. The statistical approach has allowed us to highlight the presence of random and non-random fluctuations, features useful for understanding how the volcanic system works, opening new perspectives in sampling strategies and in the evaluation of the natural risk related to a quiescent volcano.
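In its simplest (non-compositional) form, the inverse-modelling idea amounts to estimating the fraction of one temporal endmember in an intermediate sample. The compositions below are hypothetical, and the paper's actual method works with compositional (log-ratio) techniques rather than this raw least-squares sketch.

```python
def mixing_fraction(x, end_a, end_b):
    """Least-squares fraction t such that x ≈ t*end_a + (1-t)*end_b."""
    num = sum((xi - bi) * (ai - bi) for xi, ai, bi in zip(x, end_a, end_b))
    den = sum((ai - bi) ** 2 for ai, bi in zip(end_a, end_b))
    return num / den

a2000  = [0.70, 0.20, 0.10]  # hypothetical 2000 endmember composition
b2007  = [0.50, 0.30, 0.20]  # hypothetical 2007 endmember composition
sample = [0.60, 0.25, 0.15]  # intermediate composition to be explained
print(mixing_fraction(sample, a2000, b2007))  # ≈ 0.5: halfway between endmembers
```

Deviations of a sample from the best-fitting mixture are what separate random fluctuations from systematic, non-random ones.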
Abstract:
In this article, the results of a modified SERVQUAL questionnaire (Parasuraman et al., 1991) are reported. The modifications consisted in substituting questionnaire items particularly suited to a specific service (banking) and context (county of Girona, Spain) for the original, rather general and abstract, items. These modifications led to more interpretable factors which accounted for a higher percentage of item variance. The data were submitted to various structural equation models, which made it possible to conclude that the questionnaire contains items with a high measurement quality with respect to five identified dimensions of service quality, which differ from those specified by Parasuraman et al. and are specific to the banking service. The two dimensions relating to the behaviour of employees have the greatest predictive power on overall quality and satisfaction ratings, which enables managers to use a low-cost reduced version of the questionnaire to monitor quality on a regular basis. It was also found that satisfaction and overall quality were perfectly correlated, showing that customers do not perceive these concepts as distinct.
Abstract:
This article summarizes the results published in a December 2006 report by the ISS (Istituto Superiore di Sanità) on a mathematical model developed by a working group that includes researchers from the Universities of Trento, Pisa and Rome and the National Institutes of Health (Istituto Superiore di Sanità, ISS), to assess and measure the impact of the transmission and control of pandemic influenza.
Abstract:
The practical performance of analytical redundancy for fault detection and diagnosis is often degraded by uncertainties prevailing not only in the system model but also in the measurements. In this paper, the fault detection problem is stated as a constraint satisfaction problem over continuous domains with a large number of variables and constraints. This problem can be solved using modal interval analysis and consistency techniques. Consistency techniques are then shown to be particularly efficient for checking the consistency of the analytical redundancy relations (ARRs) while dealing with uncertain measurements and parameters. Through the work presented in this paper, it can be observed that consistency techniques can be used to increase the performance of a robust fault detection tool based on interval arithmetic. The proposed method is illustrated using a nonlinear dynamic model of a hydraulic system.
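The consistency test on an ARR can be sketched as follows: the residual between an uncertain measurement and an uncertain model prediction is itself an interval, and the relation is consistent only if that interval contains zero. The numbers are illustrative, not from the hydraulic system of the paper.

```python
def interval_sub(a, b):
    """Difference of two intervals given as (lo, hi) pairs."""
    return (a[0] - b[1], a[1] - b[0])

def arr_consistent(y_meas, y_model):
    """An ARR r = y_meas - y_model is consistent if the residual
    interval contains zero; otherwise a fault is signalled."""
    lo, hi = interval_sub(y_meas, y_model)
    return lo <= 0.0 <= hi

print(arr_consistent((4.9, 5.1), (4.8, 5.2)))  # True: no fault signalled
print(arr_consistent((6.0, 6.2), (4.8, 5.2)))  # False: inconsistency, fault
```

Consistency techniques accelerate exactly this kind of check when many ARRs, variables and constraints must be pruned jointly.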
Abstract:
This paper deals with fault detection and isolation problems for nonlinear dynamic systems. Both problems are stated as constraint satisfaction problems (CSPs) and solved using consistency techniques. The main contribution is an isolation method based on consistency techniques and the refinement of the uncertainty space of interval parameters. The major advantage of this method is that isolation is fast even when taking into account uncertainty in parameters, measurements, and model errors. Interval calculations bring independence from the monotonicity assumptions made by several observer-based approaches to fault isolation. An application to a well-known alcoholic fermentation process model is presented.
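The idea of refining the uncertainty space of an interval parameter can be sketched with a toy static model y = p·x: halves of the parameter interval that are inconsistent with the measured output interval are discarded. This is only an illustration under that toy model, not the paper's algorithm.

```python
def refine(param, x, y_meas, tol=1e-3):
    """Bisect the interval of p in y = p*x (x > 0), discarding halves
    that are inconsistent with the measured output interval y_meas."""
    lo, hi = param
    while hi - lo > tol:
        mid = (lo + hi) / 2
        left_ok = lo * x <= y_meas[1] and mid * x >= y_meas[0]
        right_ok = mid * x <= y_meas[1] and hi * x >= y_meas[0]
        if left_ok and not right_ok:
            hi = mid
        elif right_ok and not left_ok:
            lo = mid
        else:
            break  # both halves remain consistent; stop refining
    return (lo, hi)

print(refine((0.5, 2.0), x=3.0, y_meas=(2.9, 3.1)))  # narrows around p ≈ 1
```

Which fault hypothesis survives this pruning, and how quickly, is what drives the isolation speed reported in the abstract.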
Abstract:
The main objective of this paper is to develop a methodology that takes into account the human factor extracted from the database used by recommender systems, and which allows the specific problems of prediction and recommendation to be addressed. In this work, we propose to extract the users' scale of human values from the user database in order to improve their suitability in open environments such as recommender systems. For this purpose, the methodology is applied to the user's data after interaction with the system. The methodology is illustrated with a case study.
Abstract:
In this thesis I propose a novel method to estimate the dose and injection-to-meal time for low-risk intensive insulin therapy. This dosage-aid system uses an optimization algorithm to determine the insulin dose and injection-to-meal time that minimizes the risk of postprandial hyper- and hypoglycaemia in type 1 diabetic patients. To this end, the algorithm applies a methodology that quantifies the risk of experiencing different grades of hypo- or hyperglycaemia in the postprandial state induced by insulin therapy according to an individual patient’s parameters. This methodology is based on modal interval analysis (MIA). Applying MIA, the postprandial glucose level is predicted with consideration of intra-patient variability and other sources of uncertainty. A worst-case approach is then used to calculate the risk index. In this way, a safer prediction of possible hyper- and hypoglycaemic episodes induced by the insulin therapy tested can be calculated in terms of these uncertainties.
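The worst-case approach can be sketched with a deliberately simplified linear glucose model and hypothetical numbers: the dose minimizing the maximum penalty over the predicted postprandial glucose envelope is chosen. None of the constants below come from the thesis.

```python
def glucose_envelope(dose, g0=180.0, sens=(2.0, 4.0)):
    """Postprandial glucose interval for a toy model g = g0 - sens*dose*10,
    where the insulin sensitivity interval reflects intra-patient variability."""
    drops = [s * dose * 10 for s in sens]
    return (g0 - max(drops), g0 - min(drops))

def risk(g):
    """Penalty grows with distance from a 70-140 mg/dL target band."""
    if g < 70:
        return (70 - g) * 2.0  # hypoglycaemia penalised twice as hard
    if g > 140:
        return g - 140
    return 0.0

def worst_case_risk(dose):
    """Worst-case risk over the glucose envelope (penalty is worst at the ends)."""
    lo, hi = glucose_envelope(dose)
    return max(risk(lo), risk(hi))

best = min(range(9), key=worst_case_risk)
print(best, worst_case_risk(best))  # dose 2 keeps the whole envelope in range
```

The asymmetric penalty encodes the clinical preference of the thesis: a therapy whose worst case is hypoglycaemia is riskier than one whose worst case is mild hyperglycaemia.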
Abstract:
The activated sludge and anaerobic digestion processes have been described in widely accepted models. Nevertheless, these models still have limitations when describing operational problems of microbiological origin. The aim of this thesis is to develop a knowledge-based model to simulate the risk of plant-wide operational problems of microbiological origin. For the risk model, heuristic knowledge from experts and the literature was implemented in a rule-based system. Using fuzzy logic, the system can infer a risk index for the main operational problems of microbiological origin (i.e. filamentous bulking, biological foaming, rising sludge and deflocculation). To show the results of the risk model, it was implemented in the Benchmark Simulation Models, which allowed the risk model's response to be studied under different scenarios and control strategies. The risk model has proven very useful, providing a third criterion for evaluating control strategies in addition to the economic and environmental criteria.
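A fuzzy rule of the kind described can be sketched as follows; the rule linking a low F/M ratio and low dissolved oxygen to filamentous bulking risk, and its membership functions, are hypothetical stand-ins for the expert knowledge base.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def bulking_risk(f_m_ratio, do_mg_l):
    """Toy rule: IF F/M is low AND dissolved oxygen is low
    THEN filamentous bulking risk is high (Mamdani AND = minimum)."""
    low_fm = tri(f_m_ratio, -0.1, 0.0, 0.3)
    low_do = tri(do_mg_l, -0.5, 0.0, 2.0)
    return min(low_fm, low_do)

print(bulking_risk(0.1, 0.5))  # a risk index in [0, 1]
```

Running such an index alongside effluent-quality and cost indices is what yields the third evaluation criterion mentioned above.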
Abstract:
Urban landfill leachates are highly polluted wastewaters, characterized by high ammonium concentrations and a low content of biodegradable organic matter. Treating leachates through conventional nitrification-denitrification processes is costly because of their high oxygen demand and the need to add an external carbon source. In recent years, the feasibility of treating this type of influent by a combined partial nitritation-anammox process has been demonstrated. This thesis focuses on the treatment of landfill leachate through a partial nitritation process in an SBR, as a preparatory step for an anammox reactor. The results of the study have demonstrated the feasibility of this technology for the treatment of landfill leachate. The work evolved from an initial laboratory scale, where the process was first tested, to successful long-term operation experiments at pilot scale. Finally, the thesis also includes the development, calibration and validation of a mathematical model of the process, aimed at increasing process understanding.
Abstract:
In recent decades, the increase in the levels of solar ultraviolet radiation (UVR) reaching the Earth (mainly due to the depletion of stratospheric ozone), together with the detected rise in diseases related to UVR exposure, has led to a large volume of research on solar radiation in this band and its effects on humans. The ultraviolet index (UVI), which has been adopted internationally, was defined with the purpose of informing the general public about the risks of exposing bare skin to UVR and of conveying preventive messages. The UVI was initially defined as the daily maximum value. However, its current use has broadened, and it makes sense to refer to an instantaneous value or a daily evolution of the measured, modelled or predicted UVI. The specific UVI value is affected by the Sun-Earth geometry, clouds, ozone, aerosols, altitude and surface albedo. High-quality UVI measurements are essential as a reference and for studying long-term trends; accurate modelling techniques are also needed in order to understand the factors affecting UVR, to predict the UVI and as quality control for the measurements. The most accurate UVI measurements are expected to come from spectroradiometers. However, since the cost of these devices is high, UVI data from erythemal radiometers are more common (in fact, most UVI networks are equipped with this type of sensor). The best modelling results are obtained with multiple-scattering radiative transfer models when the input information is well known. However, input information such as the optical properties of aerosols is often unknown, which can lead to significant modelling uncertainties.

Simpler models are often used for applications such as UVI forecasting or UVI mapping, since they are faster and require fewer input parameters. Within this framework, the general objective of this study is to analyze the agreement that can be reached between UVI measurement and modelling under cloudless-sky conditions. To this end, this study presents model-measurement comparisons for different modelling techniques, different input options, and UVI measurements from both erythemal radiometers and spectroradiometers. As a general conclusion, model-measurement comparison proves very useful for detecting limitations and estimating uncertainties in both modelling and measurement. Regarding modelling, the main limitation found is the lack of knowledge of the aerosol information used as model input. Significant differences have also been found between ozone measured from satellites and from the Earth's surface, which can lead to important differences in the modelled UVI. PTUV, a new and simple parametrization for the fast calculation of UVI under cloudless conditions, has been developed on the basis of radiative transfer calculations. The parametrization performs well both with respect to the base model and in comparison with several UVI measurements. PTUV has demonstrated its usefulness for particular applications such as studying the annual evolution of the UVI at a given site (Girona) and composing high-resolution maps of typical UVI values for a specific territory (Catalonia). Regarding measurements, it is found to be very important to know the spectral response of erythemal radiometers in order to avoid large uncertainties in the UVI measurement. These instruments, when well characterized, compare well with high-quality spectroradiometers in UVI measurement.

The most important issues concerning the measurements are calibration and long-term stability. A temperature effect has also been observed in PTFE, a material used in the diffusers of some instruments, which could potentially have important implications in the experimental field. Finally, regarding model-measurement comparisons, the best agreement is found when UVI measurements from high-quality spectroradiometers are considered and radiative transfer models are used with the best available data for the ozone and aerosol optical parameters and their changes over time. In this way, the agreement can be as close as 0.1% in UVI, and is typically better than 3%. This agreement deteriorates greatly if aerosol information is ignored, and depends strongly on the aerosol single-scattering albedo. Other model inputs, such as surface albedo and the ozone and temperature profiles, introduce smaller uncertainties into the modelling results.
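For reference, the UVI itself is defined from the erythemally weighted irradiance: one UVI unit corresponds to 25 mW/m² of erythemal irradiance, which is what both the radiometers and the models above ultimately report.

```python
def uv_index(erythemal_irradiance_w_m2):
    """UV index: erythemally weighted irradiance divided by 0.025 W/m^2
    (one UVI unit = 25 mW/m^2)."""
    return erythemal_irradiance_w_m2 / 0.025

print(uv_index(0.175))  # ≈ 7, a "high" exposure category
```

A 3% model-measurement agreement in UVI therefore corresponds to roughly 0.2 UVI units at this level, well below the resolution at which public advisories are issued.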