89 results for Analytical modelling
Abstract:
This analysis was stimulated by the real data analysis problem of household expenditure data. The full dataset contains expenditure data for a sample of 1224 households. The expenditure is broken down at 2 hierarchical levels: 9 major levels (e.g. housing, food, utilities, etc.) and 92 minor levels. There are also 5 factors and 5 covariates at the household level. Not surprisingly, there are a small number of zeros at the major level, but many zeros at the minor level. The question is how best to model the zeros. Clearly, models that try to add a small amount to the zero terms are not appropriate in general, as at least some of the zeros are clearly structural, e.g. alcohol/tobacco for households that are teetotal. The key question then is how to build suitable conditional models. For example, is the sub-composition of spending excluding alcohol/tobacco similar for teetotal and non-teetotal households? In other words, we are looking for sub-compositional independence. Also, what determines whether a household is teetotal? Can we assume that it is independent of the composition? In general, whether a household is teetotal will clearly depend on the household-level variables, so we need to be able to model this dependence. The other tricky question is that, with zeros on more than one component, we need to be able to model dependence and independence of zeros on the different components. Lastly, while some zeros are structural, others may not be; for example, for expenditure on durables, it may be chance as to whether a particular household spends money on durables within the sample period.
This would clearly be distinguishable if we had longitudinal data, but may still be distinguishable by looking at the distribution, on the assumption that random zeros will usually occur in situations where any non-zero expenditure is not small. While this analysis is based around economic data, the ideas carry over to many other situations, including geological data, where minerals may be missing for structural reasons (similar to alcohol), or missing because they occur only in random regions which may be missed in a sample (similar to the durables).
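The sub-compositional comparison described above can be sketched numerically: close the expenditure shares excluding alcohol/tobacco and compare log-ratio means across the teetotal split. All data and component names below are hypothetical, and the group comparison is deliberately simplified (no formal test).

```python
import numpy as np

def close(x):
    """Close each row so it sums to 1 (the compositional closure operation)."""
    x = np.asarray(x, dtype=float)
    return x / x.sum(axis=1, keepdims=True)

# Hypothetical expenditure shares over four parts: [alcohol, food, housing, other]
spend = np.array([
    [0.00, 0.50, 0.30, 0.20],  # teetotal household: structural zero on alcohol
    [0.00, 0.48, 0.32, 0.20],
    [0.10, 0.45, 0.27, 0.18],
    [0.20, 0.40, 0.24, 0.16],
])
teetotal = spend[:, 0] == 0.0

# Subcomposition excluding alcohol/tobacco, re-closed to sum to 1
sub = close(spend[:, 1:])

# Compare group means of additive log-ratios (taken w.r.t. the last part)
alr = np.log(sub[:, :-1] / sub[:, -1:])
gap = alr[teetotal].mean(axis=0) - alr[~teetotal].mean(axis=0)
# A gap near zero is consistent with sub-compositional independence
```

In a real analysis the gap would be judged against its sampling variability; here it only illustrates the quantity being compared.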
Abstract:
In this work we develop a viscoelastic bar element that can handle multiple rheological laws with non-linear elastic and non-linear viscous material models. The bar element is built by joining in series an elastic and a viscous bar, constraining the middle node position to the bar axis with a reduction method, and statically condensing the internal degrees of freedom. We apply the methodology to the modelling of reversible softening with stiffness recovery both in 2D and 3D, a phenomenology also experimentally observed during stretching cycles on epithelial lung cell monolayers.
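As a minimal illustration of the series construction, the sketch below condenses the internal node of a linear Maxwell-type (elastic plus viscous) bar under a held end displacement; the parameters are invented and the materials are kept linear, unlike the non-linear laws the element is designed for.

```python
# Minimal 1D sketch of a series (elastic + viscous) bar with the internal
# node statically condensed. Illustrative parameters, linear materials only.
k, c, dt = 2.0, 1.0, 0.01   # elastic stiffness, viscosity, time step

# Stress relaxation: end node held at unit displacement, node 0 fixed.
# Force balance at the condensed internal node with backward Euler:
#   k*u1 = c*(u1_prev - u1)/dt   =>   u1 = c*u1_prev / (c + k*dt)
u1 = 1.0                     # at t=0+ the spring carries all the stretch
forces = []
for _ in range(200):
    u1 = c * u1 / (c + k * dt)
    forces.append(k * u1)    # axial force transmitted by the element
# The force relaxes roughly as exp(-k*t/c), i.e. with time constant c/k
```

The same condensation step would become a small Newton solve per increment once the elastic and viscous laws are non-linear.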
Abstract:
In this paper we propose a parsimonious regime-switching approach to model the correlations between assets, the threshold conditional correlation (TCC) model. This method allows the dynamics of the correlations to change from one state (or regime) to another as a function of observable transition variables. Our model is similar in spirit to Silvennoinen and Teräsvirta (2009) and Pelletier (2006), but with the appealing feature that it does not suffer from the curse of dimensionality. In particular, estimation of the parameters of the TCC involves a simple grid search procedure. In addition, it is easy to guarantee a positive definite correlation matrix, because the TCC estimator is given by the sample correlation matrix, which is positive definite by construction. The methodology is illustrated by evaluating the behaviour of international equities, government bonds and major exchange rates, first separately and then jointly. We also test and allow for different parts of the correlation matrix to be governed by different transition variables. For this, we estimate a multi-threshold TCC specification. Further, we evaluate the economic performance of the TCC model against a constant conditional correlation (CCC) estimator using a Diebold-Mariano type test. We conclude that threshold correlation modelling gives rise to a significant reduction in the portfolio's variance.
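The grid-search estimation the abstract describes can be sketched on simulated data: for each candidate threshold of the transition variable, take the per-regime sample correlations and score the split by a bivariate Gaussian log-likelihood. The data-generating process and the grid below are illustrative assumptions, not the paper's empirical setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
z = rng.standard_normal(n)          # hypothetical observable transition variable
# Simulate two regimes: high correlation when z > 0, low otherwise
eps = rng.standard_normal((n, 2))
rho_true = np.where(z > 0, 0.8, 0.1)
x = eps[:, 0]
y = rho_true * x + np.sqrt(1 - rho_true**2) * eps[:, 1]

def regime_loglik(mask):
    """Bivariate-normal log-likelihood of one regime, plugging in the
    regime's sample correlation (the TCC-style estimator)."""
    r = np.corrcoef(x[mask], y[mask])[0, 1]
    q = (x[mask]**2 - 2*r*x[mask]*y[mask] + y[mask]**2) / (1 - r**2)
    return -0.5 * (mask.sum() * np.log(1 - r**2) + q.sum())

# Grid search over candidate thresholds (inner quantiles keep regimes populated)
grid = np.quantile(z, np.linspace(0.15, 0.85, 71))
best = max(grid, key=lambda c: regime_loglik(z <= c) + regime_loglik(z > c))
print(best)   # should lie near the true threshold 0.0
```

Positive definiteness comes for free here: each regime's correlation is a sample correlation matrix.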
Abstract:
It can be assumed that the composition of Mercury's thin gas envelope (exosphere) is related to the composition of the planet's crustal materials. If this relationship is true, then inferences regarding the bulk chemistry of the planet might be made from a thorough exospheric study. The most vexing of all unsolved problems is the uncertainty in the source of each component. Historically, it has been believed that H and He come primarily from the solar wind, while Na and K originate from volatilized materials partitioned between Mercury's crust and meteoritic impactors. The processes that eject atoms and molecules into the exosphere of Mercury are generally considered to be thermal vaporization, photon-stimulated desorption (PSD), impact vaporization, and ion sputtering. Each of these processes has its own temporal and spatial dependence. The exosphere is strongly influenced by Mercury's highly elliptical orbit and rapid orbital speed. As a consequence, the surface undergoes large fluctuations in temperature and experiences differences of insolation with longitude. We will discuss these processes but focus more on the expected surface composition and solar wind particle sputtering, which releases material like Ca and other elements from the surface minerals, and discuss the relevance of composition modelling.
Abstract:
The main objective of this paper is to develop a methodology that takes into account the human factor extracted from the database used by recommender systems, and which allows the specific problems of prediction and recommendation to be resolved. In this work, we propose to extract the user's human values scale from the user database, to improve their suitability in open environments such as recommender systems. For this purpose, the methodology is applied to the data of the user after interacting with the system. The methodology is exemplified with a case study.
Abstract:
This article summarises the results published in a December 2006 report by the ISS (Istituto Superiore di Sanità) on a mathematical model developed by a working group that includes researchers from the Universities of Trento, Pisa and Rome and the National Institutes of Health (Istituto Superiore di Sanità, ISS), to assess and measure the impact of the transmission and control of the influenza pandemic.
Abstract:
The identification of compositional changes in fumarolic gases of active and quiescent volcanoes is one of the most important targets in monitoring programs. From a general point of view, many systematic (often cyclic) and random processes control the chemistry of gas discharges, making it difficult to produce a convincing mathematical-statistical modelling. Changes in the chemical composition of volcanic gases sampled at Vulcano Island (Aeolian Arc, Sicily, Italy) from eight different fumaroles located in the northern sector of the summit crater (La Fossa) have been analysed by considering their dependence on time in the period 2000-2007. Each intermediate chemical composition has been considered as potentially derived from the contribution of the two temporal extremes represented by the 2000 and 2007 samples, respectively, by using inverse modelling methodologies for compositional data. Data pertaining to fumaroles F5 and F27, located on the rim and in the inner part of La Fossa crater, respectively, have been used to achieve the proposed aim. The statistical approach has allowed us to highlight the presence of random and non-random fluctuations, features useful for understanding how the volcanic system works, opening new perspectives in sampling strategies and in the evaluation of the natural risk related to a quiescent volcano.
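The inverse-modelling idea of expressing each intermediate composition as a contribution of the two temporal end-members can be sketched with a simple linear mixing estimate; the gas species and end-member values below are invented for illustration.

```python
import numpy as np

def close(x):
    """Close a composition so its parts sum to 1."""
    return np.asarray(x, dtype=float) / np.sum(x)

# Hypothetical end-member gas compositions (e.g. H2O, CO2, SO2 fractions)
em_2000 = close([0.90, 0.08, 0.02])
em_2007 = close([0.80, 0.15, 0.05])

def mixing_fraction(sample):
    """Least-squares estimate of the 2000 end-member fraction a in
    sample ~= a*em_2000 + (1-a)*em_2007 (a simple linear mixing sketch)."""
    s = close(sample)
    d = em_2000 - em_2007
    a = np.dot(s - em_2007, d) / np.dot(d, d)
    return float(np.clip(a, 0.0, 1.0))

# Recover a known mixing fraction from a synthetic intermediate sample
mix = close(0.3 * em_2000 + 0.7 * em_2007)
print(round(mixing_fraction(mix), 6))  # → 0.3
```

A compositional treatment proper would work in log-ratio coordinates; raw-proportion mixing is kept here only to keep the sketch short.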
Abstract:
A four compartment model of the cardiovascular system is developed. To allow for easy interpretation and to minimise the number of parameters, an effort was made to keep the model as simple as possible. A sensitivity analysis is first carried out to determine which are the most important model parameters to characterise the blood pressure signal. A four stage process is then described which accurately determines all parameter values. This process is applied to data from three patients and good agreement is shown in all cases.
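A sensitivity analysis of the kind described can be sketched with finite differences on a much simpler stand-in signal; the exponential pressure-decay model and parameter values below are hypothetical and are not the paper's four-compartment model.

```python
import numpy as np

# Hypothetical stand-in for a blood-pressure signal (NOT the paper's model):
# diastolic decay towards an asymptotic pressure,
#   p(t) = (p0 - p_inf) * exp(-t / tau) + p_inf
p0 = 80.0
t = np.linspace(0.0, 0.8, 50)
base = np.array([1.5, 50.0])      # nominal [tau, p_inf] (illustrative values)

def pressure(params):
    tau, p_inf = params
    return (p0 - p_inf) * np.exp(-t / tau) + p_inf

def log_sensitivity(i, h=1e-4):
    """Norm of the centred finite-difference sensitivity dp/dlog(theta_i),
    a simple way to rank which parameters shape the signal most."""
    up, dn = base.copy(), base.copy()
    up[i] *= 1.0 + h
    dn[i] *= 1.0 - h
    return np.linalg.norm((pressure(up) - pressure(dn)) / (2.0 * h))

ranking = sorted(range(len(base)), key=log_sensitivity, reverse=True)
# ranking[0] is the parameter the pressure signal is most sensitive to
```

The same ranking step, applied to a four-compartment model, is what singles out the parameters worth fitting in the staged identification the abstract describes.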
Abstract:
Nessie is an Autonomous Underwater Vehicle (AUV) created by a team of students at Heriot-Watt University to compete in the Student Autonomous Underwater Challenge Europe (SAUC-E) in August 2006. The main objective of the project is to find the dynamic equations of the robot, i.e. its dynamic model. With it, the behaviour of the robot is easier to understand, and movement tests can be run on a computer without the robot itself, which saves time, batteries and money, and protects the robot from water ingress. The objective of the second part of this project is to design a control system for Nessie using the model.
Abstract:
We derive analytical expressions for the propagation speed of downward combustion fronts of thin solid fuels with a background flow initially at rest. The classical combustion model for thin solid fuels, which consists of five coupled reaction-convection-diffusion equations, is here reduced to a single equation with the gas temperature as the single variable. To do so, we apply a two-zone combustion model that divides the system into a preheating region and a pyrolyzing region. The speed of the combustion front is obtained after matching the temperature and its derivative at the location that separates both regions. We also derive a simplified version of this analytical expression expected to be valid for a wide range of cases. Flame front velocities predicted by our analytical expressions agree well with experimental data found in the literature for a large variety of cases and substantially improve the results obtained from a previous well-known analytical expression.
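The matching step can be illustrated on a toy version of the two-zone idea: equate the interface temperature gradient of the preheating zone with an assumed pyrolysis-side flux and solve for the front speed. The pyrolysis-side flux law Q/v and all parameter values are invented for illustration, not taken from the paper.

```python
# Toy two-zone matching sketch (illustrative, not the paper's five-equation
# model). Preheat zone (x < 0) in the front-attached frame:
#   alpha*T'' - v*T' = 0, T(-inf) = T_u, T(0) = T_ig,
# whose interface slope is (T_ig - T_u)*v/alpha. For the pyrolysis side we
# ASSUME a residence-time-limited flux Q/v.
alpha, T_u, T_ig, Q = 2.0e-5, 300.0, 600.0, 1.5   # hypothetical values

def mismatch(v):
    """Difference between the interface temperature slopes of the two zones."""
    return (T_ig - T_u) * v / alpha - Q / v

# Bisection: mismatch is negative for tiny v and positive for large v
lo, hi = 1e-6, 1.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mismatch(lo) * mismatch(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
v_front = 0.5 * (lo + hi)   # analytically sqrt(Q*alpha/(T_ig - T_u))
```

For these made-up numbers the matched speed comes out at a few tenths of a millimetre per second, the right order of magnitude for downward flame spread.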
Abstract:
A model-based approach for fault diagnosis is proposed, where the fault detection is based on checking the consistency of the Analytical Redundancy Relations (ARRs) using an interval tool. The tool takes into account the uncertainty in the parameters and the measurements using intervals. Faults are explicitly included in the model, which allows for the exploitation of additional information. This information is obtained from partial derivatives computed from the ARRs. The signs in the residuals are used to prune the candidate space when performing the fault diagnosis task. The method is illustrated using a two-tank example, in which these aspects are shown to have an impact on the diagnosis and fault discrimination, since the proposed method goes beyond the structural methods.
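A minimal sketch of interval-based ARR consistency checking, using a single-tank outflow relation with an uncertain coefficient; the numbers and the interpretation comments are illustrative assumptions, not the paper's two-tank example.

```python
import math

# ARR for one tank (illustrative): residual r = q_in - k*sqrt(h),
# with the coefficient k known only as an interval.

def interval_mul(a, b):
    """Product of two intervals given as (lo, hi) pairs."""
    products = [x * y for x in a for y in b]
    return (min(products), max(products))

k = (0.9, 1.1)            # uncertain outflow coefficient (interval)
h = 4.0                   # measured tank level
q_in_measured = 2.8       # measured inflow

sqrt_h = (math.sqrt(h), math.sqrt(h))
q_out = interval_mul(k, sqrt_h)                                  # [1.8, 2.2]
residual = (q_in_measured - q_out[1], q_in_measured - q_out[0])  # [0.6, 1.0]

# Consistency check: a fault is detected when 0 lies outside the residual interval
fault_detected = not (residual[0] <= 0.0 <= residual[1])
# The residual's sign (strictly positive here) prunes the candidate space:
# only faults whose modelled effect drives the residual positive remain.
```

With per-fault partial derivatives of the ARRs, the same sign information discriminates between remaining candidates, as the abstract describes.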
Abstract:
Research project carried out during a stay at the University of British Columbia, Canada, between 2010 and 2012. Alzheimer's disease (AD) is today the most common form of dementia in the ageing population. Although it was discovered 100 years ago, there is still no preventive and/or curative treatment, nor any diagnostic agent that allows the progression of this disease to be assessed quantitatively. The aim of this work is to contribute solutions to the lack of unambiguous and rigorous therapeutic and diagnostic agents for AD. From the field of bioinorganic chemistry, it is natural to focus on the excessive concentration of Zn(II) and Cu(II) ions in the brains of AD patients, to consider them as therapeutic targets and, consequently, to search for chelating agents that prevent the formation of senile plaques or contribute to their dissolution. Although this was the starting point of this project, the multiple factors involved in the pathogenesis of AD mean that the classical "one molecule, one target" paradigm limits a molecule's capacity to fight such a complex disease. A considerable effort has therefore been devoted to the design of multifunctional agents that address the multiple factors characterising the development of AD. In the present work, multifunctional agents have been designed inspired by two molecular scaffolds well established in the field of medicinal chemistry: thioflavin-T (ThT) and deferiprone (DFP). The use of in silico techniques, including pharmacokinetic calculations and molecular modelling, has been a key step in evaluating the best candidates against the following requirements: (a) compliance with the pharmacokinetic properties that support their possible use as a drug, (b) adequate hydrophobicity to cross the blood-brain barrier (BBB), and (c) interaction with the Aβ peptide in solution.
Abstract:
The Drivers Scheduling Problem (DSP) consists of selecting a set of duties for vehicle drivers, for example bus, train, plane or boat drivers or pilots, for the transportation of passengers or goods. This is a complex problem because it involves several constraints related to labour and company rules and can also present different evaluation criteria and objectives. Developing an adequate model for this problem, one that represents the real problem as closely as possible, is an important research area. The main objective of this research work is to present new mathematical models for the DSP that represent all the complexity of the drivers scheduling problem, and also to demonstrate that the solutions of these models can be easily implemented in real situations. This issue has been recognized by several authors as an important problem in Public Transportation. The most well-known and general formulation for the DSP is a Set Partitioning/Set Covering model (SPP/SCP). However, to a large extent these models simplify some of the specific business aspects and issues of real problems. This makes it difficult to use these models in automatic planning systems, because the schedules obtained must be modified manually to be implemented in real situations. Based on extensive passenger transportation experience in bus companies in Portugal, we propose new alternative models to formulate the DSP. These models are also based on Set Partitioning/Covering models; however, they take into account the bus operators' issues and the perspective, opinions and environment of the user. We follow the steps of the Operations Research methodology, which consist of: identify the problem; understand the system; formulate a mathematical model; verify the model; select the best alternative; present the results of the analysis; and implement and evaluate. All the processes are carried out with the close participation and involvement of the final users from different transportation companies.
The planners' opinions and main criticisms are used to improve the proposed model in a continuous enrichment process. The final objective is to have a model that can be incorporated into an information system and used as an automatic tool to produce driver schedules. Therefore, the criteria for evaluating the models are the capacity to generate real and useful schedules that can be implemented without many manual adjustments or modifications. We have considered the following as measures of the quality of the model: simplicity, solution quality and applicability. We tested the alternative models with a set of real data obtained from several different transportation companies and analyzed the optimal schedules obtained with respect to the applicability of the solution to the real situation. To do this, the schedules were analyzed by the planners to determine their quality and applicability. The main result of this work is the proposal of new mathematical models for the DSP that better represent the realities of passenger transportation operators and lead to better schedules that can be implemented directly in real situations.
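The Set Partitioning formulation at the heart of these models can be illustrated on a toy instance: choose a minimum-cost set of duties so that every trip is covered exactly once. The duties and costs below are invented, and brute force stands in for the integer-programming methods real instances require.

```python
from itertools import combinations

# Toy set-partitioning (SPP) instance: each duty is (set of trips, cost).
trips = {0, 1, 2, 3}
duties = [({0, 1}, 8), ({2, 3}, 7), ({0, 2}, 6), ({1, 3}, 6), ({0, 1, 2, 3}, 16)]

def best_partition():
    """Brute-force the SPP: choose duties so every trip is in exactly one."""
    best_cost, best_sel = float("inf"), None
    for r in range(1, len(duties) + 1):
        for sel in combinations(range(len(duties)), r):
            covered = [t for i in sel for t in duties[i][0]]
            if sorted(covered) == sorted(trips):   # exact cover: no gaps, no overlaps
                cost = sum(duties[i][1] for i in sel)
                if cost < best_cost:
                    best_cost, best_sel = cost, sel
    return best_cost, best_sel

print(best_partition())  # → (12, (2, 3)): duties {0, 2} and {1, 3}
```

Relaxing `==` to "covers at least once" turns the same model into the Set Covering variant; the alternative models discussed above layer operator-specific rules on top of this core.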