939 results for measurement error models
Abstract:
Dynamic simulation models are needed in order to examine the behaviour of a system as a function of time. With a simulation model, the output of the system can be simulated for different input excitations. The model also provides a more detailed understanding of the system and its subsystems, because quantities that are difficult to measure in the real system can be examined in the simulation model. In this work, a dynamic heat transfer model approximating the medium-sized calorimeter of the LUT Energia efficiency measurement facility is developed using the Matlab® Simulink software. The accuracy of the developed heat transfer model is assessed against measurements made on the real system. Coarse approximations are used in the work; these should be replaced with more accurate mathematical models in further development. The dynamic heat transfer model developed in this work approximates the response of the real system during the heating phase with a mean error of ±0.19 °C.
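As a rough illustration of the kind of dynamic heat-transfer approximation described above, the sketch below integrates a single lumped-capacitance energy balance for a heated calorimeter body. It is only a conceptual stand-in for the Simulink model; the capacitance, loss coefficient and heating power are assumed values, not parameters from the thesis.

# Lumped-capacitance sketch of a dynamic heat-transfer model (illustrative
# only; the parameter values below are assumptions, not the thesis model).
from scipy.integrate import solve_ivp

m_c = 5.0e4      # thermal capacitance of the calorimeter body [J/K] (assumed)
UA = 25.0        # overall heat-loss coefficient to ambient [W/K] (assumed)
T_amb = 20.0     # ambient temperature [degC]
P_heat = 500.0   # heating power during the heating phase [W] (assumed)

def dT_dt(t, T):
    # First-order energy balance: stored heat = heating power - losses.
    return (P_heat - UA * (T - T_amb)) / m_c

sol = solve_ivp(dT_dt, (0.0, 3600.0), [T_amb], max_step=10.0)
print(f"Temperature after 1 h of heating: {sol.y[0, -1]:.2f} degC")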
Abstract:
This master’s thesis is devoted to the study of different heat flux measurement techniques, such as differential temperature sensors, semi-infinite surface temperature methods, calorimetric sensors and gradient heat flux sensors. The possibility of using Gradient Heat Flux Sensors (GHFS) to measure heat flux in the combustion chamber of compression-ignited reciprocating internal combustion engines was considered in more detail. A. Mityakov conducted an experiment in which a Gradient Heat Flux Sensor was placed in a four-stroke Indenor XL4D diesel engine to measure heat flux in the combustion chamber. The results obtained from the experiment were compared with the model’s numerical output. This model (a one-dimensional single-zone model) was implemented with the help of MathCAD, and the result of this implementation is a graph of heat flux in the combustion chamber as a function of crank angle. The heat flux values throughout the cycle obtained with the aid of the heat flux sensor and those obtained theoretically were broadly similar, but not identical. Such deviation is rather common for this type of experiment.
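For context, a single-zone model of in-cylinder heat transfer typically combines a convective correlation with the instantaneous gas state to give the wall heat flux at each crank angle. The sketch below uses a standard Woschni-type correlation rather than the thesis's MathCAD implementation, and the operating state is assumed for illustration only.

# Woschni-type convective wall heat-flux estimate, the kind of quantity a
# single-zone model yields as a function of crank angle (not the thesis's
# MathCAD model; the operating state below is assumed).
def woschni_h(p_kpa, T_gas_k, bore_m, w_ms):
    # Convective heat-transfer coefficient [W/(m^2 K)] (Woschni correlation).
    return 3.26 * bore_m**-0.2 * p_kpa**0.8 * T_gas_k**-0.55 * w_ms**0.8

def wall_heat_flux(p_kpa, T_gas_k, T_wall_k, bore_m, w_ms):
    # Instantaneous heat flux from the gas to the combustion-chamber wall.
    return woschni_h(p_kpa, T_gas_k, bore_m, w_ms) * (T_gas_k - T_wall_k)

# Example state near top dead centre (illustrative numbers only).
q = wall_heat_flux(p_kpa=6000.0, T_gas_k=1800.0, T_wall_k=450.0,
                   bore_m=0.084, w_ms=15.0)
print(f"Estimated wall heat flux: {q / 1e6:.2f} MW/m^2")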
Abstract:
One of the problems that slows the development of off-line programming is the low static and dynamic positioning accuracy of robots. Robot calibration improves the positioning accuracy and can also be used as a diagnostic tool in robot production and maintenance. A large number of robot measurement systems are now available commercially, yet there is a dearth of systems that are portable, accurate and low cost. In this work, a measurement system that can fill this gap in local calibration is presented. The measurement system consists of a single CCD camera with a wide-angle lens mounted on the robot tool flange, and uses space resection models to measure the end-effector pose relative to a world coordinate system, taking radial distortion into account. Scale factors and the image center are obtained with innovative techniques making use of a multiview approach. The target plate consists of a grid of white dots printed on black photographic paper and mounted on the sides of a 90-degree angle plate. Results show that the achieved average accuracy varies from 0.2 mm to 0.4 mm at distances from the target of 600 mm to 1000 mm respectively, with different camera orientations.
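A minimal sketch of the space-resection step is shown below, using OpenCV's generic solvePnP with radial distortion as an off-the-shelf stand-in for the thesis's own resection model; the dot grid, intrinsics, distortion coefficients and camera pose are synthetic assumptions.

# Space-resection sketch with OpenCV (synthetic target grid, intrinsics,
# distortion and pose; not the thesis's own resection model).
import numpy as np
import cv2

# 3D target: a planar grid of dots with a 10 mm pitch (object frame, in mm).
grid = np.array([[x, y, 0.0] for y in range(5) for x in range(7)],
                dtype=np.float64) * 10.0

K = np.array([[800.0, 0.0, 320.0],   # assumed focal lengths and image centre
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.2, 0.05, 0.0, 0.0, 0.0])  # radial distortion (assumed)

# Simulate an observation from a known pose, then recover the pose from it.
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([-30.0, -20.0, 700.0])   # camera ~700 mm from the target
img_pts, _ = cv2.projectPoints(grid, rvec_true, tvec_true, K, dist)

ok, rvec, tvec = cv2.solvePnP(grid, img_pts, K, dist)
print("recovered translation [mm]:", tvec.ravel().round(2))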
Abstract:
Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis. These challenges include survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility of using combined longitudinal survey and register data. The Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at the person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey and register data can be used to analyse and compare the non-response and attrition processes, test the type of missingness mechanism and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, overreporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data. Neither the Missing At Random (MAR) assumption about the non-response and attrition mechanisms, nor the classical assumptions about measurement errors, turned out to be valid. Measurement errors in both spell durations and spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of the baseline hazard most. The design-based estimates based on data from respondents to all waves of interest and weighted by the last-wave weights displayed the largest bias. Using all the available data, including the spells of attriters until the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to the design weights reduces bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazards model estimators. The study discusses the implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers who develop methods to correct for non-sampling biases in event history data.
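As a simplified illustration of the IPCW idea evaluated in the simulation study, the sketch below estimates each subject's probability of remaining uncensored from a covariate, inverts it into a weight, and passes the weights to a standard Kaplan-Meier fit. It uses synthetic data and a time-fixed censoring model, so it is only a schematic of the method, not the FI ECHP analysis itself.

# Schematic IPCW illustration on synthetic data: estimate the probability of
# remaining uncensored, invert it into a weight, and fit a weighted
# Kaplan-Meier curve (time-fixed censoring model for simplicity).
import numpy as np
from lifelines import KaplanMeierFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                          # covariate driving attrition
durations = rng.exponential(12.0, size=n)       # true unemployment spell length
censor_time = rng.exponential(np.exp(1.5 + 0.8 * x))   # attrition depends on x
observed = (durations <= censor_time).astype(int)      # 1 = spell end observed
time = np.minimum(durations, censor_time)

# Inverse probability of (not) being censored, estimated from the covariate.
model = LogisticRegression().fit(x.reshape(-1, 1), observed)
p_uncensored = model.predict_proba(x.reshape(-1, 1))[:, 1]
weights = 1.0 / np.clip(p_uncensored, 0.05, 1.0)

kmf = KaplanMeierFitter()
kmf.fit(time, event_observed=observed, weights=weights, label="IPCW-weighted KM")
print("median spell length (weighted):", round(kmf.median_survival_time_, 2))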
Abstract:
Wind energy has raised great expectations due to the risks of global warming and nuclear power plant accidents. Nowadays, wind farms are often constructed in areas of complex terrain. A potential wind farm location must have its site thoroughly surveyed and the wind climatology analyzed before installing any hardware. Therefore, modeling of Atmospheric Boundary Layer (ABL) flows over complex terrains containing, e.g., hills, forest, and lakes is of great interest in wind energy applications, as it can help in locating and optimizing the wind farms. Numerical modeling of wind flows using Computational Fluid Dynamics (CFD) has become a popular technique during the last few decades. Due to the inherent flow variability and large-scale unsteadiness typical of ABL flows in general, and especially over complex terrains, the flow can be difficult to predict accurately enough using the Reynolds-Averaged Navier-Stokes equations (RANS). Large-Eddy Simulation (LES) resolves the largest and thus most important turbulent eddies and models only the small-scale motions, which are more universal than the large eddies and thus easier to model. Therefore, LES is expected to be more suitable for this kind of simulation, although it is computationally more expensive than the RANS approach. With the fast development of computers and open-source CFD software during recent years, the application of LES to atmospheric flows is becoming increasingly common. The aim of the work is to simulate atmospheric flows over realistic and complex terrains by means of LES. Evaluation of potential in-land wind park locations will be the main application for these simulations. Development of the LES methodology to simulate atmospheric flows over realistic terrains is reported in the thesis. The work also aims at validating the LES methodology at a real scale. In the thesis, LES are carried out for flow problems ranging from basic channel flows to real atmospheric flows over one of the most recent real-life complex terrain problems, the Bolund hill. All the simulations reported in the thesis are carried out using a new OpenFOAM®-based LES solver. The solver uses the 4th-order time-accurate Runge-Kutta scheme and a fractional step method. Moreover, development of the LES methodology includes special attention to two boundary conditions: the upstream (inflow) and wall boundary conditions. The upstream boundary condition is generated by using the so-called recycling technique, in which the instantaneous flow properties are sampled on a plane downstream of the inlet and mapped back to the inlet at each time step. This technique develops the upstream boundary-layer flow together with the inflow turbulence without using any precursor simulation and thus within a single computational domain. The roughness of the terrain surface is modeled by implementing a new wall function into OpenFOAM® during the thesis work. Both the recycling method and the newly implemented wall function are validated for channel flows at a relatively high Reynolds number before applying them to the atmospheric flow applications. After validating the LES model on simple flows, the simulations are carried out for atmospheric boundary-layer flows over two types of hills: first, two-dimensional wind-tunnel hill profiles and, second, the Bolund hill located in Roskilde Fjord, Denmark. For the two-dimensional wind-tunnel hills, the study focuses on the overall flow behavior as a function of the hill slope.
Moreover, the simulations are repeated using another wall function suitable for smooth surfaces, which already existed in OpenFOAM®, in order to study the sensitivity of the flow to the surface roughness in ABL flows. The simulated results obtained using the two wall functions are compared against the wind-tunnel measurements. It is shown that LES using the implemented wall function produces overall satisfactory results for the turbulent flow over the two-dimensional hills. The prediction of the flow separation and reattachment length for the steeper hill is closer to the measurements than the other numerical studies reported in the past for the same hill geometry. The field measurement campaign performed over the Bolund hill provides the most recent field-experiment dataset for the mean flow and the turbulence properties. A number of research groups have simulated the wind flows over the Bolund hill. Due to the challenging features of the hill, such as the almost vertical hill slope, it is considered an ideal experimental test case for validating micro-scale CFD models for wind energy applications. In this work, the simulated results obtained for two wind directions are compared against the field measurements. It is shown that the present LES can reproduce the complex turbulent wind flow structures over a complicated terrain such as the Bolund hill. In particular, the present LES results show the best prediction of the turbulent kinetic energy, with an average error of 24.1%, which is 43% smaller than any other model results reported in the past for the Bolund case. Finally, the validated LES methodology is demonstrated by simulating the wind flow over the existing Muukko wind farm located in south-eastern Finland. The simulation is carried out for only one wind direction, and the results for the instantaneous and time-averaged wind speeds are briefly reported. The demonstration case is followed by discussions on the practical aspects of LES for wind resource assessment over a realistic inland wind farm.
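The recycling inflow technique mentioned above can be illustrated schematically: at every time step the instantaneous velocity on a plane downstream of the inlet is sampled, its fluctuations are retained, and the result is mapped back onto the inlet around a target mean profile. The toy loop below shows only this bookkeeping on a 2D array; it is not the OpenFOAM® implementation, and the field, profile and plane location are assumed.

# Toy illustration of the recycling inflow bookkeeping on a 2D array: the
# instantaneous plane at i_recycle is sampled, its fluctuations are kept, and
# the inlet plane is rebuilt around a target mean profile each step.
import numpy as np

ny, nx = 64, 256
rng = np.random.default_rng(1)
u = 1.0 + 0.1 * rng.standard_normal((ny, nx))   # toy streamwise velocity field
target_mean = np.linspace(0.5, 1.5, ny)         # desired inlet mean profile (assumed)
i_recycle = 40                                  # recycling plane index (assumed)

for step in range(100):
    # ... here the interior field would be advanced by the LES solver ...
    sampled = u[:, i_recycle]                   # instantaneous downstream sample
    fluct = sampled - sampled.mean()            # keep the resolved fluctuations
    u[:, 0] = target_mean + fluct               # rescaled inlet plane

print("inlet-plane mean after recycling:", round(float(u[:, 0].mean()), 3))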
Abstract:
Coronary artery disease is an atherosclerotic disease, which leads to narrowing of the coronary arteries, deteriorated myocardial blood flow and myocardial ischaemia. In acute myocardial infarction, a prolonged period of myocardial ischaemia leads to myocardial necrosis. Necrotic myocardium is replaced with scar tissue. Myocardial infarction results in various changes in cardiac structure and function over time that result in “adverse remodelling”. This remodelling may result in a progressive worsening of cardiac function and the development of chronic heart failure. In this thesis, we developed and validated three different large animal models of coronary artery disease, myocardial ischaemia and infarction for translational studies. In the first study, the coronary artery disease model had both induced diabetes and hypercholesterolemia. In the second study, myocardial ischaemia and infarction were caused by a surgical method and in the third study by catheterisation. For model characterisation, we used non-invasive positron emission tomography (PET) methods for the measurement of myocardial perfusion, oxidative metabolism and glucose utilisation. Additionally, cardiac function was measured by echocardiography and computed tomography. To study the metabolic changes that occur during atherosclerosis, a hypercholesterolemic and diabetic model was used with [18F]fluorodeoxyglucose ([18F]FDG) PET imaging. Coronary occlusion models were used to evaluate metabolic and structural changes in the heart and the cardioprotective effects of levosimendan during post-infarction cardiac remodelling. The large animal models were used in the testing of novel radiopharmaceuticals for myocardial perfusion imaging. In the coronary artery disease model, we observed atherosclerotic lesions that were associated with focally increased [18F]FDG uptake. In the heart failure models, chronic myocardial infarction led to a worsening of systolic function, cardiac remodelling and decreased efficiency of cardiac pumping function. Levosimendan therapy reduced post-infarction myocardial infarct size and improved cardiac function. The novel 68Ga-labeled radiopharmaceuticals tested in this study were not successful for the determination of myocardial blood flow. In conclusion, diabetes and hypercholesterolemia lead to the development of early-phase atherosclerotic lesions. Coronary artery occlusion produced considerable myocardial ischaemia and, later, infarction followed by myocardial remodelling. The experimental models evaluated in these studies will enable further studies of disease mechanisms, new radiopharmaceuticals and interventions in coronary artery disease and heart failure.
Abstract:
Several methods have been described to measure intraocular pressure (IOP) in clinical and research settings. However, the measurement of time-varying IOP with high accuracy, particularly in situations that alter corneal properties, has not been reported until now. The present report describes a computerized system capable of recording the transitory variability of IOP, which is sufficiently sensitive to reliably measure ocular pulse peak-to-peak values. We also describe its characteristics and discuss its applicability to research and clinical studies. The device consists of a pressure transducer, a signal conditioning unit and an analog-to-digital converter coupled to a video acquisition board. A modified Cairns trabeculectomy was performed in 9 Oryctolagus cuniculus rabbits to obtain changes in IOP decay parameters and to evaluate the utility and sensitivity of the recording system. The device was effective for the study of kinetic parameters of IOP, such as the decay pattern and the ocular pulse waves due to the cardiac and respiratory rhythms. In addition, there was a significant increase in the derivative of the IOP-versus-time curve when pre- and post-trabeculectomy recordings were compared. The present procedure excludes corneal thickness and error related to individual operator ability. Clinical complications due to saline infusion and pressure overload were not observed during biomicroscopic evaluation. Among the disadvantages of the procedure are the requirement for anesthesia and its suitability for acute recordings rather than chronic protocols. Finally, the method described may provide a reliable alternative for the study of dynamic alterations of ocular pressure in man and may facilitate the investigation of the pathogenesis of glaucoma.
Abstract:
The Caco-2 cell line has been used as a model to predict the in vitro permeability of the human intestinal barrier. The predictive potential of the assay relies on an appropriate in-house validation of the method. The objective of the present study was to develop a single HPLC-UV method for the identification and quantitation of marker drugs and to determine the suitability of the Caco-2 cell permeability assay. A simple chromatographic method was developed for the simultaneous determination of both passively (propranolol, carbamazepine, acyclovir, and hydrochlorothiazide) and actively transported drugs (vinblastine and verapamil). Separation was achieved on a C18 column with step-gradient elution (acetonitrile and aqueous solution of ammonium acetate, pH 3.0) at a flow rate of 1.0 mL/min and UV detection at 275 nm during the total run time of 35 min. The method was validated and found to be specific, linear, precise, and accurate. This chromatographic system can be readily used on a routine basis and its utilization can be extended to other permeability models. The results obtained in the Caco-2 bi-directional transport experiments confirmed the validity of the assay, given that high and low permeability profiles were identified, and P-glycoprotein functionality was established.
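For reference, bidirectional Caco-2 transport data of the kind described above are usually summarised through the apparent permeability P_app = (dQ/dt)/(A·C0) and the efflux ratio P_app(B→A)/P_app(A→B); an efflux ratio well above 1 points to P-glycoprotein involvement. The sketch below computes these quantities with purely illustrative numbers, not values from the study.

# Apparent permeability and efflux ratio for a bidirectional Caco-2 assay
# (illustrative numbers only; 1 mL = 1 cm^3, so C0 in ug/mL is ug/cm^3).
def p_app(dq_dt_ug_per_s, area_cm2, c0_ug_per_ml):
    # P_app [cm/s] = (dQ/dt) / (A * C0)
    return dq_dt_ug_per_s / (area_cm2 * c0_ug_per_ml)

papp_ab = p_app(2.0e-4, 1.12, 100.0)   # apical -> basolateral (assumed values)
papp_ba = p_app(8.0e-4, 1.12, 100.0)   # basolateral -> apical (assumed values)
print(f"P_app A->B = {papp_ab:.2e} cm/s, efflux ratio = {papp_ba / papp_ab:.1f}")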
Abstract:
This thesis concerns the analysis of epidemic models. We adopt the Bayesian paradigm and develop suitable Markov Chain Monte Carlo (MCMC) algorithms. This is done by considering the 1995 Ebola outbreak in the Democratic Republic of Congo, former Zaïre, as a case study for SEIR epidemic models. We model the Ebola epidemic deterministically using ODEs and stochastically through SDEs to take into account a possible bias in each compartment. Since the model has unknown parameters, we use different methods to estimate them, such as least squares, maximum likelihood and MCMC. The motivation for choosing MCMC over the other existing methods in this thesis is its ability to tackle complicated nonlinear problems with a large number of parameters. First, in a deterministic Ebola model, we compute the likelihood function by the sum-of-squared-residuals method and estimate parameters using the LSQ and MCMC methods. We sample parameters and then use them to calculate the basic reproduction number and to study the disease-free equilibrium. From the sampled posterior chain, we run convergence diagnostics and confirm the viability of the model. The results show that the Ebola model fits the observed onset data with high precision, and all the unknown model parameters are well identified. Second, we convert the ODE model into an SDE Ebola model. We compute the likelihood function using the extended Kalman filter (EKF) and estimate the parameters again. The motivation for using the SDE formulation here is to consider the impact of modelling errors. Moreover, the EKF approach allows us to formulate a filtered likelihood for the parameters of such a stochastic model. We use the MCMC procedure to obtain the posterior distributions of the parameters of the drift and diffusion parts of the SDE Ebola model. In this thesis, we analyse two cases: (1) the model error covariance matrix of the dynamic noise is close to zero, i.e. only a small amount of stochasticity is added to the model; the results are then similar to those obtained from the deterministic Ebola model, even though the methods of computing the likelihood function are different; and (2) the model error covariance matrix is different from zero, i.e. considerable stochasticity is introduced into the Ebola model. This accounts for the situation where we know that the model is not exact. As a result, we obtain parameter posteriors with larger variances. Consequently, the model predictions then show larger uncertainties, in accordance with the assumption of an incomplete model.
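A bare-bones sketch of the deterministic part of this workflow is given below: an SEIR ODE system, a sum-of-squared-residuals objective against synthetic onset-type data, and a random-walk Metropolis sampler from which the basic reproduction number can be summarised. The rates, population size, error variance and step sizes are simplifying assumptions for illustration, not the thesis's actual code or data.

# SEIR model, sum-of-squared-residuals objective and a bare-bones random-walk
# Metropolis sampler (synthetic data; all numerical values are assumptions).
import numpy as np
from scipy.integrate import solve_ivp

def seir(t, y, beta, sigma, gamma, N):
    S, E, I, R = y
    new_inf = beta * S * I / N
    return [-new_inf, new_inf - sigma * E, sigma * E - gamma * I, gamma * I]

def simulate(beta, gamma, t_obs, N=5.0e6, sigma=1 / 9.3, I0=3.0):
    y0 = [N - I0, 0.0, I0, 0.0]
    sol = solve_ivp(seir, (t_obs[0], t_obs[-1]), y0, t_eval=t_obs,
                    args=(beta, sigma, gamma, N))
    return sol.y[2]                                   # infectious compartment

rng = np.random.default_rng(2)
t_obs = np.arange(0.0, 150.0, 1.0)
data = simulate(0.33, 1 / 7.0, t_obs) + rng.normal(0.0, 20.0, t_obs.size)

def ssq(theta):                                       # sum of squared residuals
    return np.sum((data - simulate(theta[0], theta[1], t_obs)) ** 2)

theta, s2, chain = np.array([0.30, 0.15]), 400.0, []  # (beta, gamma), error var
cur = ssq(theta)
for _ in range(1000):                                 # random-walk Metropolis
    prop = theta + rng.normal(0.0, [0.01, 0.005])
    if prop.min() > 0.0:
        cand = ssq(prop)
        if np.log(rng.random()) < (cur - cand) / (2.0 * s2):
            theta, cur = prop, cand
    chain.append(theta.copy())

post = np.array(chain[200:])
print("posterior means (beta, gamma):", post.mean(axis=0).round(3))
print("implied basic reproduction number:", (post[:, 0] / post[:, 1]).mean().round(2))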
Abstract:
The objective of this research is to create a current-state analysis of pulp supply chain processes from production planning to deliveries to customers. A cross-functional flowchart is used to model these processes. These models help identify key performance indicators (KPIs), which enable examination of supply chain efficiency. Supply chain measures in different processes reveal the changes needed in processes that affect the whole supply chain and its efficiency and competitiveness. The structure of the pulp supply chain differs from most other supply chains: volumes of bulk products are large, product variation is small, and supply forecasts are made for the year ahead. These factors bring benefits but also challenges when developing the supply chain. This thesis divides the pulp supply chain into three main categories: production planning, warehousing and transportation. It provides tools for assessing the functionality of the supply chain as well as for improving the efficiency of its different functions. With a better understanding of supply chain processes and measurement, the whole supply chain structure can be developed significantly.
Abstract:
In this paper, we propose several finite-sample specification tests for multivariate linear regressions (MLR) with applications to asset pricing models. We focus on departures from the assumption of i.i.d. errors, at both the univariate and multivariate levels, with Gaussian and non-Gaussian (including Student t) errors. The univariate tests studied extend existing exact procedures by allowing for unspecified parameters in the error distributions (e.g., the degrees of freedom in the case of the Student t distribution). The multivariate tests are based on properly standardized multivariate residuals to ensure invariance to MLR coefficients and error covariances. We consider tests for serial correlation, tests for multivariate GARCH and sign-type tests against general dependencies and asymmetries. The procedures proposed provide exact versions of those applied in Shanken (1990), which consist in combining univariate specification tests. Specifically, we combine tests across equations using the MC test procedure to avoid Bonferroni-type bounds. Since the non-Gaussian-based tests are not pivotal, we apply the “maximized MC” (MMC) test method [Dufour (2002)], where the MC p-value for the tested hypothesis (which depends on nuisance parameters) is maximized with respect to these nuisance parameters to control the test’s significance level. The tests proposed are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995. Our empirical results reveal the following. Whereas univariate exact tests indicate significant serial correlation, asymmetries and GARCH in some equations, such effects are much less prevalent once error cross-equation covariances are accounted for. In addition, significant departures from the i.i.d. hypothesis are less evident once we allow for non-Gaussian errors.
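The Monte Carlo test principle underlying these procedures can be sketched compactly: simulate the test statistic under the null hypothesis N times and compute the p-value p = (1 + #{S_i ≥ S_0})/(N + 1). The toy below uses a lag-1 residual autocorrelation statistic with a fully specified Student-t null as a stand-in for the paper's statistics; the MMC variant would then maximize this p-value over the nuisance parameters (e.g. the degrees of freedom).

# Monte Carlo test sketch: an exact p-value from simulating the statistic
# under a fully specified null (here i.i.d. Student-t residuals, as a
# stand-in for the paper's test statistics).
import numpy as np

def stat(resid):
    r = resid - resid.mean()
    return abs(np.sum(r[1:] * r[:-1]) / np.sum(r ** 2))   # lag-1 autocorrelation

rng = np.random.default_rng(0)
T, N = 60, 999
observed = rng.standard_t(df=5, size=T)     # residuals from one equation (toy)
s0 = stat(observed)

# Under H0 the residuals are i.i.d. t(5), so the statistic can be simulated.
sims = np.array([stat(rng.standard_t(df=5, size=T)) for _ in range(N)])
p_mc = (1 + np.sum(sims >= s0)) / (N + 1)
print(f"Monte Carlo p-value: {p_mc:.3f}")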
Abstract:
This paper employs the one-sector Real Business Cycle model as a testing ground for four different procedures to estimate Dynamic Stochastic General Equilibrium (DSGE) models. The procedures are: 1) Maximum Likelihood, with and without measurement errors and incorporating Bayesian priors, 2) Generalized Method of Moments, 3) Simulated Method of Moments, and 4) Indirect Inference. Monte Carlo analysis indicates that all procedures deliver reasonably good estimates under the null hypothesis. However, there are substantial differences in statistical and computational efficiency in the small samples currently available to estimate DSGE models. GMM and SMM appear to be more robust to misspecification than the alternative procedures. The implications of the stochastic singularity of DSGE models for each estimation method are fully discussed.
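To make the Simulated Method of Moments logic concrete, the toy below estimates a single parameter by matching simulated moments (variance and first-order autocorrelation of an AR(1) process standing in for the RBC model) to the data moments, with fixed simulation draws and an identity weighting matrix; this is an illustration of the principle, not the paper's Monte Carlo design.

# Simulated Method of Moments toy: match the variance and lag-1 autocorrelation
# of a simulated AR(1) series to those of the data, using fixed simulation
# draws and an identity weighting matrix.
import numpy as np
from scipy.optimize import minimize_scalar

def simulate_ar1(rho, shocks):
    y = np.zeros(shocks.size)
    for t in range(1, shocks.size):
        y[t] = rho * y[t - 1] + shocks[t]
    return y

def moments(y):
    return np.array([y.var(), np.corrcoef(y[1:], y[:-1])[0, 1]])

rng = np.random.default_rng(0)
data = simulate_ar1(0.9, rng.standard_normal(500))   # "observed" series
e_sim = rng.standard_normal(5000)                    # fixed simulation shocks
m_data = moments(data)

def smm_loss(rho):
    return np.sum((m_data - moments(simulate_ar1(rho, e_sim))) ** 2)

est = minimize_scalar(smm_loss, bounds=(0.0, 0.99), method="bounded")
print(f"SMM estimate of the AR(1) coefficient: {est.x:.3f}")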
Abstract:
In this paper, we propose exact inference procedures for asset pricing models that can be formulated in the framework of a multivariate linear regression (CAPM), allowing for stable error distributions. The normality assumption on the distribution of stock returns is usually rejected in empirical studies, due to excess kurtosis and asymmetry. To model such data, we propose a comprehensive statistical approach which allows for alternative, possibly asymmetric, heavy-tailed distributions without the use of large-sample approximations. The methods suggested are based on Monte Carlo test techniques. Goodness-of-fit tests are formally incorporated to ensure that the error distributions considered are empirically sustainable, from which exact confidence sets for the unknown tail area and asymmetry parameters of the stable error distribution are derived. Tests for the efficiency of the market portfolio (zero intercepts), which explicitly allow for the presence of (unknown) nuisance parameters in the stable error distribution, are derived. The methods proposed are applied to monthly returns on 12 portfolios of the New York Stock Exchange over the period 1926–1995 (five-year subperiods). We find that stable, possibly skewed, distributions provide a statistically significant improvement in goodness-of-fit and lead to fewer rejections of the efficiency hypothesis.
Abstract:
Introduction: The aim of the study was to examine the effect of impression materials on the accuracy and reliability of digital study models. Methods: Twenty-five pairs of plaster models were randomly selected from the records of the orthodontic clinic of the Université de Montréal. An alginate impression (Kromopan 100), an alginate-substitute impression (Alginot), and a PVS impression (Aquasil) were taken of each arch for all patients. The impressions were sent to Orthobyte for pouring of the plaster models and scanning of the digital models. The Bolton 6 and 12 analyses, their constituent measurements, overbite, overjet and arch length were used for comparisons. Results: The correlation between repeated measurements was good to excellent for both the plaster models and the digital models. The trend was that repeated measurements on plaster models were more reliable. Statistically significant differences were found for the Bolton 12 analysis, for mandibular arch length, and for mandibular crowding, and this for all impression materials. The observed trend was that measurements on plaster models were smaller for the Bolton 12 analysis but larger for arch length and mandibular crowding. Despite the statistically significant differences found, these differences had no clinical significance. Conclusions: The accuracy and reliability of the software for the complete analysis of digital models are clinically acceptable when compared with the results of traditional analysis on plaster models.
Abstract:
My thesis consists of three chapters related to the estimation of state-space and stochastic volatility models. In the first article, we develop a computationally efficient state-smoothing procedure for a linear Gaussian state-space model. We show how to exploit the particular structure of state-space models to draw the latent states efficiently. We analyse the computational efficiency of methods based on the Kalman filter, the Cholesky factor algorithm and our new method, using operation counts and computational experiments. We show that, for many important cases, our method is more efficient. The gains are particularly large when the dimension of the observed variables is large or when repeated draws of the states are needed for the same parameter values. As an application, we consider a multivariate Poisson model with time-varying intensities, which is used to analyse transaction count data from financial markets. In the second chapter, we propose a new technique for analysing multivariate stochastic volatility models. The proposed method is based on efficient draws of the volatility from its conditional density given the parameters and the data. Our methodology applies to models with several types of cross-sectional dependence. We can model time-varying conditional correlation matrices by incorporating factors into the return equation, where the factors are independent stochastic volatility processes. We can incorporate copulas to allow conditional dependence of the returns given the volatility, permitting different Student-t marginals with specific degrees of freedom to capture the heterogeneity of the returns. The volatility is drawn as a block in the time dimension and one at a time in the cross-sectional dimension. We apply the method introduced by McCausland (2012) to obtain a good approximation of the conditional posterior distribution of the volatility of one return given the volatilities of the other returns, the parameters and the dynamic correlations. The model is evaluated using real data for ten exchange rates. We report results for univariate stochastic volatility models and two multivariate models. In the third chapter, we assess the information contributed by realized volatility variations to the estimation and forecasting of volatility when prices are measured with and without error. We use stochastic volatility models. We take the point of view of an investor for whom volatility is an unknown latent variable and realized volatility is a sample quantity that contains information about it. We use Bayesian Markov chain Monte Carlo methods to estimate the models, which allow the formulation not only of the posterior densities of the volatility but also of the predictive densities of future volatility. We compare volatility forecasts and the hit rates of forecasts that do and do not use the information contained in realized volatility.
This approach differs from those in the existing empirical literature in that the latter are usually limited to documenting the ability of realized volatility to forecast itself. We present empirical applications using daily returns on indices and exchange rates. The different competing models are applied to the second half of 2008, a notable period in the recent financial crisis.
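As a minimal reference point for the state-space machinery discussed in the first chapter, the sketch below runs a Kalman filter on a univariate local-level model, the simplest linear Gaussian state-space model; the precision-based simulation smoother and the multivariate Poisson application of the thesis are not reproduced, and all variances are assumed.

# Kalman filter for a univariate local-level model (the simplest linear
# Gaussian state-space model); all variances below are assumed.
import numpy as np

rng = np.random.default_rng(0)
T, q, r = 200, 0.1, 1.0                     # horizon, state and observation variances
alpha = np.cumsum(np.sqrt(q) * rng.standard_normal(T))   # latent random-walk state
y = alpha + np.sqrt(r) * rng.standard_normal(T)          # observations

a, p = 0.0, 10.0                            # prior mean and variance of the state
filtered = np.empty(T)
for t in range(T):
    p = p + q                               # prediction step (random-walk state)
    k = p / (p + r)                         # Kalman gain
    a = a + k * (y[t] - a)                  # update with observation y[t]
    p = (1.0 - k) * p
    filtered[t] = a

rmse = float(np.sqrt(np.mean((filtered - alpha) ** 2)))
print("RMSE of the filtered state:", round(rmse, 3))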