45 results for Classical measurement error model

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance:

100.00%

Publisher:

Abstract:

To obtain the desired accuracy of a robot, two techniques are available. The first option is to make the robot match the nominal mathematical model; in other words, the manufacturing and assembly tolerances of every part would be extremely tight so that all of the various parameters match the “design” or “nominal” values as closely as possible. This method can satisfy most accuracy requirements, but the cost increases dramatically as the accuracy requirement increases. Alternatively, a more cost-effective solution is to build a manipulator with relaxed manufacturing and assembly tolerances and to compensate for the actual errors of the robot by modifying the mathematical model in the controller. This is the essence of robot calibration. Simply put, robot calibration is the process of defining an appropriate error model and then identifying the various parameter errors that make the error model match the robot as closely as possible. This work focuses on kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial-parallel hybrid robot. The robot consists of a 4-DOF serial mechanism and a 6-DOF hexapod parallel manipulator. The redundant 4-DOF serial structure is used to enlarge the workspace, and the 6-DOF hexapod manipulator provides high load capacity and stiffness for the whole structure. The main objective of the study is to develop a suitable calibration method to improve the accuracy of the redundant serial-parallel hybrid robot. To this end, a Denavit–Hartenberg (DH) hybrid error model and a Product-of-Exponentials (POE) error model are developed for error modeling of the proposed robot. Furthermore, two global optimization methods, the differential evolution (DE) algorithm and the Markov chain Monte Carlo (MCMC) algorithm, are employed to identify the parameter errors of the derived error models. A measurement method based on a 3-2-1 wire-based pose estimation system is proposed and implemented in a SolidWorks environment to simulate real experimental validation. Numerical simulations and SolidWorks prototype-model validations are carried out on the hybrid robot to verify the effectiveness, accuracy and robustness of the calibration algorithms.
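The identification step described above can be illustrated with a minimal sketch: given calibration poses and measured end-effector positions, a global optimizer searches for the parameter errors that best explain the measurements. The sketch below uses a hypothetical planar 2R arm in place of the 10-DOF hybrid robot, with invented link lengths and noise levels; only the DE-based identification idea comes from the abstract.

```python
# Minimal sketch of calibration by parameter-error identification, assuming a
# hypothetical 2-link planar arm instead of the 10-DOF hybrid robot; the DE
# search mirrors the idea of fitting error parameters to measured poses.
import numpy as np
from scipy.optimize import differential_evolution

NOMINAL = np.array([0.50, 0.40])        # nominal link lengths (m), illustrative
TRUE_ERR = np.array([0.004, -0.003])    # "unknown" manufacturing errors (m)

def fk(lengths, q):
    """Forward kinematics: end-effector (x, y) of a planar 2R arm."""
    x = lengths[0] * np.cos(q[:, 0]) + lengths[1] * np.cos(q[:, 0] + q[:, 1])
    y = lengths[0] * np.sin(q[:, 0]) + lengths[1] * np.sin(q[:, 0] + q[:, 1])
    return np.column_stack([x, y])

rng = np.random.default_rng(0)
q_meas = rng.uniform(-np.pi, np.pi, size=(30, 2))     # calibration poses
p_meas = fk(NOMINAL + TRUE_ERR, q_meas)               # "measured" positions
p_meas += rng.normal(0.0, 1e-4, p_meas.shape)         # measurement noise

def cost(err):
    # Sum of squared residuals between model prediction and measurement.
    return np.sum((fk(NOMINAL + err, q_meas) - p_meas) ** 2)

res = differential_evolution(cost, bounds=[(-0.01, 0.01)] * 2, seed=1)
print("identified errors:", res.x)    # should approach TRUE_ERR
```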

Relevance:

100.00%

Publisher:

Abstract:

Electric motors driven by adjustable-frequency converters may produce periodic excitation forces that cause torque and speed ripple. Interaction with the driven mechanical system may cause undesirable vibrations that affect the system's performance and lifetime. Direct drives in sensitive applications, such as elevators or paper machines, emphasize the importance of smooth torque production. This thesis analyses the non-idealities of frequency converters that produce speed and torque ripple in electric drives. The origin of low-order harmonics in speed and torque is examined, and it is shown how different types of current measurement error affect the torque. As the application environment, the direct torque control (DTC) method is applied to permanent magnet synchronous machines (PMSM). A simulation model is created to analyse the effect of the frequency converter non-idealities on the performance of electric drives. The model makes it possible to identify potential problems causing torque vibrations and possibly damaging oscillations in electrically driven machine systems. The model can be coupled with separate simulation software for complex mechanical loads. Furthermore, the simulation model of the frequency converter's control algorithm can be applied to control a real frequency converter. A commercial frequency converter with standard software, a permanent magnet axial-flux synchronous motor and a DC motor as the load are used to detect the effect of current measurement errors on load torque. A method to reduce the speed and torque ripple by compensating the current measurement errors is introduced. The method is based on analysing the amplitude of a selected harmonic component of the speed as a function of time and selecting a suitable compensation alternative for the current error. The speed can be either measured or estimated, so the compensation method is applicable also to speed-sensorless drives. The proposed compensation method is tested with a laboratory drive, which consists of commercial frequency converter hardware with self-made software and a prototype PMSM. The speed and torque ripple of the test drive are reduced by applying the compensation method. In addition to direct torque controlled PMSM drives, the compensation method can also be applied to other motor types and control methods.
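The core of the compensation method is tracking the amplitude of a selected speed harmonic over time. The sketch below is not the thesis implementation; it shows one plausible way to estimate such an amplitude with an FFT on a simulated speed signal. The sampling rate, electrical frequency and ripple model are all assumed for illustration.

```python
# A minimal sketch: estimate the amplitude of a selected speed harmonic over
# time, the quantity the compensation method keys on. Signal, frequencies and
# the offset-error ripple model are illustrative assumptions.
import numpy as np

FS = 10_000.0          # sampling frequency (Hz), assumed
F_EL = 50.0            # fundamental electrical frequency (Hz), assumed

t = np.arange(0, 1.0, 1.0 / FS)
# Simulated speed signal: mean speed plus a ripple component at the
# fundamental frequency and broadband noise.
speed = 100.0 + 0.8 * np.sin(2 * np.pi * F_EL * t) + 0.05 * np.random.randn(t.size)

def harmonic_amplitude(signal, fs, f_target):
    """Amplitude of the spectral component nearest f_target (rectangular window)."""
    spectrum = np.fft.rfft(signal - signal.mean())
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - f_target))
    return 2.0 * np.abs(spectrum[k]) / signal.size

# Track the harmonic amplitude in sliding windows; a compensation term would
# be adjusted until this amplitude is minimised.
window = int(0.2 * FS)
for start in range(0, t.size - window, window):
    amp = harmonic_amplitude(speed[start:start + window], FS, F_EL)
    print(f"t = {start / FS:4.1f} s  harmonic ripple amplitude ≈ {amp:.3f}")
```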

Relevance:

100.00%

Publisher:

Abstract:

In mathematical modeling, the estimation of the model parameters is one of the most common problems. The goal is to find parameters that fit the measurements as well as possible. There is always error in the measurements, which implies uncertainty in the model estimates. In Bayesian statistics, all unknown quantities are represented as probability distributions. If there is prior knowledge about the parameters, it can be formulated as a prior distribution. Bayes' rule combines the prior and the measurements into a posterior distribution. Mathematical models are typically nonlinear, so producing statistics for them requires efficient sampling algorithms. In this thesis, the Metropolis-Hastings (MH) and Adaptive Metropolis (AM) algorithms as well as Gibbs sampling are introduced, along with different ways to specify prior distributions. The main issue is measurement error estimation and how to obtain prior knowledge of the variance or covariance. Variance and covariance sampling is combined with the algorithms above. As examples, the hyperprior models are applied to the estimation of model parameters and error in an outlier case.
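A minimal sketch of the sampling machinery described above, combining a Metropolis-Hastings step for the model parameters with a Gibbs step that samples the noise variance from its inverse-gamma conditional. The toy linear model, hyperprior values and proposal width are assumptions, not the thesis setup.

```python
# A minimal sketch of Metropolis-Hastings with a Gibbs step for the noise
# variance (inverse-gamma hyperprior), assuming a toy linear model y = a*x + b.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.3, x.size)    # synthetic data

def ssq(theta):
    a, b = theta
    return np.sum((y - (a * x + b)) ** 2)

alpha0, beta0 = 2.0, 0.1        # inverse-gamma hyperprior for sigma^2 (assumed)
theta = np.array([0.0, 0.0])    # initial (a, b)
sigma2 = 1.0
chain = []
for i in range(5000):
    # Gibbs step: sigma^2 | theta, data ~ InvGamma(alpha0 + n/2, beta0 + SS/2).
    sigma2 = 1.0 / rng.gamma(alpha0 + x.size / 2, 1.0 / (beta0 + ssq(theta) / 2))
    # MH step: symmetric Gaussian random-walk proposal, flat prior on theta.
    prop = theta + rng.normal(0, 0.05, 2)
    if np.log(rng.uniform()) < (ssq(theta) - ssq(prop)) / (2 * sigma2):
        theta = prop
    chain.append([*theta, sigma2])

chain = np.array(chain)[1000:]   # discard burn-in
print("posterior means (a, b, sigma^2):", chain.mean(axis=0))
```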

Relevance:

100.00%

Publisher:

Abstract:

In this thesis, X-ray tomography is discussed from the Bayesian statistical viewpoint. The unknown parameters are assumed to be random variables and, in contrast to traditional methods, the solution is obtained as a large sample from the distribution of all possible solutions. As an introduction to tomography, an inversion formula for the Radon transform on the plane is presented, and the widely used filtered backprojection algorithm is derived. The traditional regularization methods are presented in sufficient detail to ground the Bayesian approach. The measurements are photon counts at the detector pixels; thus the assumption of a Poisson-distributed measurement error is justified. Often the error is assumed Gaussian, although the electronic noise caused by the measurement device can change the error structure; the assumption of a Gaussian measurement error is discussed. The thesis also examines the use of different prior distributions in X-ray tomography. Especially in severely ill-posed problems, the choice of a suitable prior is the main part of the whole solution process. In the empirical part, the presented prior distributions are tested using simulated measurements, and the effects that different prior distributions produce are shown. The use of a prior is shown to be obligatory in the case of a severely ill-posed problem.
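To make the Poisson-likelihood-plus-prior structure concrete, here is a small sketch (not the thesis code) of an unnormalised log-posterior for a tiny discretised attenuation problem with a Gaussian smoothness prior. The system matrix, dimensions and prior strength are all illustrative.

```python
# A minimal sketch of a Poisson-likelihood log-posterior for a tiny
# discretised tomography problem with a Gaussian smoothness prior.
import numpy as np

rng = np.random.default_rng(1)
n = 20                                   # unknown attenuation values
m = 15                                   # measurement rays
A = rng.uniform(0, 1, (m, n)) * 0.1      # ray/pixel intersection lengths (assumed)
x_true = np.exp(-((np.arange(n) - 10) ** 2) / 8.0)   # smooth phantom

I0 = 1e4                                 # source photon count
counts = rng.poisson(I0 * np.exp(-A @ x_true))       # Beer-Lambert + Poisson noise

D = np.diff(np.eye(n), axis=0)           # first-difference operator (smoothness prior)

def log_posterior(x, gamma=50.0):
    """Unnormalised log posterior: Poisson log-likelihood + smoothness prior."""
    if np.any(x < 0):
        return -np.inf                   # positivity constraint on attenuation
    mean = I0 * np.exp(-A @ x)
    loglik = np.sum(counts * np.log(mean) - mean)    # Poisson terms sans log(k!)
    logprior = -0.5 * gamma * np.sum((D @ x) ** 2)
    return loglik + logprior

# An MCMC sampler (as in the previous abstract) would explore this density.
print(log_posterior(x_true), log_posterior(np.ones(n)))
```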

Relevance:

100.00%

Publisher:

Abstract:

The main objective of the study was to develop a performance analysis system for a small and medium-sized enterprise performing subcontracting in the metal industry. In addition, practices that contribute to building a successful analysis system were studied. The study also addresses the benefits and drawbacks of a measurement system for an SME. The theoretical part of the study discusses performance in general, presents different performance analysis systems and examines the differences between them. In addition, different process models are presented with which a company can build a performance analysis system. The empirical part of the study presents the process model carried out in the company, with which a performance analysis system was built. The process carried out in the company was based on the SAKE process model, but ideas were also drawn from Toivanen's model. The results of the study are a theoretical package on performance analysis and a model of a performance analysis system. The theoretical package served as a good basis and provided background information for the people involved in the project. The model resulting from the study is best suited for a company performing mechanical machining of metal, but other companies can also use it as an example. The most useful aspect is the process itself, which makes it possible to examine the factors underlying the company's success.

Relevance:

100.00%

Publisher:

Abstract:

The objective of the study was to develop a model set of metrics for monitoring operational-level performance and for operations management in a company operating in the logistics service industry, to support daily management. The study was conducted mainly as an action-analytic, single-company empirical case study. The case company's performance measurement is currently based mainly on financial metrics and a few surveys. To support operations management, development and decision-making, metrics are needed in addition to the financial ones, with which the development of the factors underlying performance can be monitored. In designing the model set of metrics for the operational-level performance of the case company, the aims were to ensure that in the future measurement would address the causal factors in addition to the consequence factors, and to clarify the business objectives from the operational-level perspective as well as the objective of the measurement. The model set of metrics presented in the study was designed using the balanced scorecard framework. The chosen perspectives were: finance, stakeholder (customer), process and personnel. The aim of the information produced by the scorecard is to support operations management, development and decision-making; when the measurement results and the trend are in one place, information retrieval and utilization are easier. The model set of metrics was not tested or deployed in the study.

Relevance:

40.00%

Publisher:

Abstract:

This work is devoted to the problem of reconstructing the basis weight structure of a paper web with black-box techniques. The analyzed data comes from a real paper machine and is collected by an off-line scanner. The principal mathematical tool used in this work is Autoregressive Moving Average (ARMA) modelling. When coupled with the Discrete Fourier Transform (DFT), it gives a very flexible and interesting tool for analyzing properties of the paper web. Both ARMA and DFT are independently used to represent the given signal in a simplified version of our algorithm, but the final goal is to combine the two. The Ljung-Box Q-statistic lack-of-fit test, combined with the root mean squared error (RMSE) coefficient, gives a tool to separate significant signals from noise.
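A minimal sketch of the ARMA-plus-Ljung-Box workflow on a synthetic signal, using statsmodels. The model orders, lag choice and signal construction are assumptions for illustration, not the thesis's actual data or settings.

```python
# A minimal sketch of the ARMA + Ljung-Box workflow described above, using
# statsmodels on a synthetic signal; model orders and lag choices are assumed.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(0)
n = 500
t = np.arange(n)
# Synthetic "basis weight" profile: a periodic component plus AR(1) noise.
signal = 0.5 * np.sin(2 * np.pi * t / 40)
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.7 * noise[i - 1] + rng.normal(0, 0.2)
y = signal + noise

# Fit an ARMA(2, 1) model (ARIMA with d = 0); the order here is illustrative.
fit = ARIMA(y, order=(2, 0, 1)).fit()
rmse = np.sqrt(np.mean(fit.resid ** 2))

# Ljung-Box lack-of-fit test on the residuals: a small p-value means the
# residuals still carry structure the model failed to capture.
lb = acorr_ljungbox(fit.resid, lags=[20], return_df=True)
print(f"RMSE = {rmse:.4f}")
print(lb)
```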

Relevance:

40.00%

Publisher:

Abstract:

Over the last decades, calibration techniques have been widely used to improve the accuracy of robots and machine tools, since they only involve software modification instead of changing the design and manufacture of the hardware. Traditionally, four steps are required for a calibration: error modeling, measurement, parameter identification and compensation. The objective of this thesis is to propose a method for the kinematics analysis and error modeling of a newly developed hybrid redundant robot, the IWR (Intersector Welding Robot), which possesses ten degrees of freedom (DOF): 6 DOF in parallel and an additional 4 DOF in serial. The problems of kinematics modeling and error modeling of the proposed IWR robot are discussed. Based on the vector arithmetic method, the kinematics model and the sensitivity model of the end-effector with respect to the structure parameters are derived and analyzed. The relations between the pose (position and orientation) accuracy and manufacturing tolerances, actuation errors, and connection errors are formulated. Computer simulation is performed to examine the validity and effectiveness of the proposed method.
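The sensitivity model relates small structure-parameter errors to end-effector pose errors through a Jacobian. The thesis derives this analytically for the IWR; the sketch below shows the same idea numerically, by central differences, on a stand-in planar 3R chain with invented link lengths and tolerances.

```python
# A minimal sketch of a sensitivity model: a finite-difference Jacobian of the
# end-effector pose with respect to the structure parameters. The 3-link planar
# chain stands in for the IWR kinematics, which the thesis derives analytically.
import numpy as np

def pose(params, q=(0.3, -0.5, 0.8)):
    """End-effector pose (x, y, phi) of a planar 3R chain with link lengths params."""
    angles = np.cumsum(q)
    x = np.sum(params * np.cos(angles))
    y = np.sum(params * np.sin(angles))
    return np.array([x, y, angles[-1]])

def sensitivity(params, h=1e-7):
    """Jacobian d(pose)/d(params) by central differences."""
    J = np.zeros((3, params.size))
    for j in range(params.size):
        dp = np.zeros_like(params)
        dp[j] = h
        J[:, j] = (pose(params + dp) - pose(params - dp)) / (2 * h)
    return J

params = np.array([0.4, 0.3, 0.2])       # nominal link lengths (m), illustrative
J = sensitivity(params)
print(J)                                  # column j: pose shift per unit error in link j
# First-order pose error for given manufacturing tolerances:
print("pose error ≈", J @ np.array([1e-3, -0.5e-3, 2e-3]))
```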

Relevance:

40.00%

Publisher:

Abstract:

In this Master’s thesis, agent-based modeling is used to analyze maintenance strategy related phenomena. The main research question was: what does the agent-based model made for this study tell us about how different maintenance strategy decisions affect the profitability of equipment owners and maintenance service providers? Thus, the main outcome of this study is an analysis of how profitability can be increased in an industrial maintenance context. To answer the question, a literature review of maintenance strategy, agent-based modeling, and maintenance modeling and optimization was first conducted. This review provided the basis for building the agent-based model, which followed a standard simulation modeling procedure. The simulation results from the agent-based model answer the research question. Specifically, the results of the modeling and this study are: (1) optimizing the point at which a machine is maintained increases profitability for the owner of the machine, and under certain conditions also for the maintainer; (2) time-based pricing of maintenance services leads to a zero-sum game between the parties; (3) value-based pricing of maintenance services leads to a win-win game between the parties, if the owners of the machines share a substantial amount of the value with the maintainers; and (4) error in machine condition measurement is a critical parameter in optimizing maintenance strategy, and there is real systemic value in having more accurate machine condition measurement systems.
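Result (4) can be illustrated with a far simpler simulation than the thesis model: a machine degrades stochastically, a noisy sensor reports its condition, and maintenance is triggered at a threshold. All parameters below (costs, degradation rates, thresholds) are invented for the sketch.

```python
# A minimal sketch (not the thesis model) of result (4): how measurement error
# in machine condition affects owner profit under a threshold-based policy.
import numpy as np

rng = np.random.default_rng(0)

def simulate(noise_sd, threshold=0.7, steps=10_000):
    condition = 0.0          # 0 = good, 1 = failed
    profit = 0.0
    for _ in range(steps):
        condition += rng.uniform(0.005, 0.02)           # stochastic degradation
        measured = condition + rng.normal(0, noise_sd)  # noisy condition sensor
        if condition >= 1.0:
            profit -= 50.0                              # breakdown: costly repair
            condition = 0.0
        elif measured >= threshold:
            profit -= 5.0                               # planned maintenance
            condition = 0.0
        else:
            profit += 1.0                               # revenue from running
    return profit / steps

for sd in (0.0, 0.05, 0.1, 0.2):
    print(f"measurement noise sd={sd:4.2f}  mean profit/step = {simulate(sd):.3f}")
```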

Relevance:

40.00%

Publisher:

Abstract:

Longitudinal surveys are increasingly used to collect event history data on person-specific processes, such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis, including survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of the data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility of using combined longitudinal survey-register data: the Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at the person level with longitudinal register data, and unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey-register data can be used to analyse and compare the non-response and attrition processes, test the type of missingness mechanism and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, overreporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data. Neither the Missing At Random (MAR) assumption about the non-response and attrition mechanisms nor the classical assumptions about measurement errors turned out to be valid. Measurement errors in both spell durations and spell outcomes were found to cause bias in estimates from event history models; low measurement accuracy affected the estimates of the baseline hazard most. The design-based estimates based on data from respondents to all waves of interest and weighted by the last-wave weights displayed the largest bias. Using all the available data, including the spells of attriters until the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to design weights reduces bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazards model estimators. The study discusses the implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers who develop methods to correct for non-sampling biases in event history data.
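A minimal sketch of an IPCW-weighted Kaplan-Meier estimator, the kind of design-based correction the simulation study examines. The data-generating process is illustrative: a shared covariate drives both spell durations and censoring (attrition), producing the dependent censoring that IPCW is meant to correct; in a real analysis, the censoring probabilities would be estimated from a model rather than known by construction.

```python
# IPCW-weighted Kaplan-Meier on synthetic data with dependent censoring.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
z = rng.uniform(0, 1, n)                         # covariate driving both processes
true_t = rng.exponential(5.0 + 10.0 * z, n)      # spell durations
cens_t = rng.exponential(4.0 + 20.0 * z, n)      # dependent censoring times
time = np.minimum(true_t, cens_t)
event = true_t <= cens_t

# IPCW weights: inverse probability of remaining uncensored at the observed
# time, here known by construction; extreme weights are truncated.
w = 1.0 / np.clip(np.exp(-time / (4.0 + 20.0 * z)), 0.05, None)

def km(time, event, w, grid):
    """Discretised Kaplan-Meier survival curve with observation weights."""
    s, out = 1.0, []
    for t0, t1 in zip(grid[:-1], grid[1:]):
        at_risk = w[time >= t0].sum()
        died = w[event & (time >= t0) & (time < t1)].sum()
        s *= (1.0 - died / at_risk) if at_risk > 0 else 1.0
        out.append(s)
    return np.array(out)

grid = np.linspace(0, 30, 121)
i10 = 39                                          # interval ending at t = 10
print("S(10) IPCW      :", km(time, event, w, grid)[i10])
print("S(10) unweighted:", km(time, event, np.ones(n), grid)[i10])
print("S(10) true      :", np.mean(np.exp(-10.0 / (5.0 + 10.0 * z))))
```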

Relevance:

40.00%

Publisher:

Abstract:

This thesis was carried out as a case study of the company YIT in order to clarify the severest risks for the company and to build a method for project portfolio evaluation. The target organization creates new living environments by constructing residential buildings, business premises, infrastructure and entire areas, worth EUR 1.9 billion in the year 2013. The company has noted that project portfolio management needs more information about the structure of the project portfolio and the possible influences of a market shock situation. The major risks for the company were evaluated by interviewing the executive staff, and at the same time the most appropriate risk metrics were considered. At the moment, sales risk was estimated to have the biggest impact on the company's business. Therefore a project portfolio evaluation model was created, and three different scenarios for the company's future were drawn up in order to identify the scale of a possible market shock situation. The created model was tested with public and descriptive figures of YIT in a one-year-long market shock, and the impact on different metrics was evaluated. The study was conducted using constructive research methodology. The results indicate that the company has a notable sales risk in certain sections of its business portfolio.
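The scenario logic of such an evaluation model can be sketched in a few lines: apply per-segment sales-shock factors to segment revenues and compare the outcomes. The segment names, revenues and shock factors below are invented for illustration, not YIT's figures or the thesis's actual model.

```python
# An illustrative sketch of a scenario-based portfolio evaluation: apply
# sales-shock scenarios to business segments and compare revenue impact.

# Annual segment revenues (EUR millions), invented for the example.
segments = {"residential": 900.0, "business premises": 600.0, "infrastructure": 400.0}

# Sales-shock scenarios: fraction of sales retained per segment over one year.
scenarios = {
    "baseline":     {"residential": 1.00, "business premises": 1.00, "infrastructure": 1.00},
    "mild shock":   {"residential": 0.85, "business premises": 0.90, "infrastructure": 0.95},
    "severe shock": {"residential": 0.60, "business premises": 0.75, "infrastructure": 0.90},
}

base_total = sum(segments.values())
for name, factors in scenarios.items():
    revenue = sum(rev * factors[seg] for seg, rev in segments.items())
    print(f"{name:12s}: revenue = {revenue:7.1f} MEUR  (impact {revenue - base_total:+7.1f})")
```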

Relevance:

40.00%

Publisher:

Abstract:

Power consumption is still an issue in wearable computing applications today. The aim of the present paper is to raise awareness of the power consumption of wearable computing devices in specific scenarios, in order to enable the future design of energy-efficient wireless sensors for context recognition in wearable computing applications. The approach is based on a hardware study. The objective of this paper is to analyze and compare the total power consumption of three representative wearable computing devices in realistic scenarios: Display, Speaker, Camera and microphone, Transfer by Wi-Fi, Monitoring outdoor physical activity, and Pedometer. A scenario-based energy model is also developed. The Samsung Galaxy Nexus I9250 smartphone, the Vuzix M100 Smart Glasses and the SimValley Smartwatch AW-420.RX are the three devices representative of their form factors. The power consumption is measured using PowerTutor, an Android energy profiler application with a logging option; because some of its parameters are unknown, the readings are adjusted with a USB meter. The results show that the screen size is the main parameter influencing power consumption. The power consumption for an identical scenario varies depending on the wearable device, meaning that other components, parameters or processes might impact the power consumption, and further study is needed to explain these variations. This paper also shows that different inputs (a touchscreen is more efficient than button controls) and outputs (a speaker is more efficient than a display) impact the energy consumption in different ways. The paper gives recommendations for reducing the energy consumption of healthcare wearable computing applications using the energy model.
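A scenario-based energy model of the kind described can be as simple as summing per-component power draws over their active times in each scenario. The sketch below follows that spirit; the component power figures and phase durations are invented examples, not the paper's measurements.

```python
# A minimal sketch of a scenario-based energy model: total energy as the sum
# of per-component power draws multiplied by their active time per scenario.
from dataclasses import dataclass

# Average component power draws in milliwatts (illustrative values only).
POWER_MW = {"display": 400.0, "cpu": 250.0, "wifi": 300.0, "gps": 350.0, "speaker": 120.0}

@dataclass
class Phase:
    component: str
    seconds: float

def scenario_energy_mj(phases):
    """Total scenario energy in millijoules: sum of P (mW) * t (s)."""
    return sum(POWER_MW[p.component] * p.seconds for p in phases)

# "Monitoring outdoor physical activity" style scenario: GPS + CPU + brief display use.
outdoor = [Phase("gps", 600), Phase("cpu", 600), Phase("display", 60)]
# "Transfer by Wi-Fi" style scenario: CPU + radio for a shorter burst.
wifi = [Phase("wifi", 120), Phase("cpu", 120)]

for name, phases in (("outdoor activity", outdoor), ("Wi-Fi transfer", wifi)):
    print(f"{name}: {scenario_energy_mj(phases) / 1000:.1f} J")
```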