858 results for Practical Error Estimator
Abstract:
Contrast enhancement is an image processing technique whose objective is to preprocess the image so that relevant information can be seen or further processed more reliably. These techniques are typically applied when the image itself, or the device used for image reproduction, provides poor visibility and distinguishability of the different regions of interest in the image. In most studies the emphasis is on the visualization of image data, but this human-observer-biased goal often results in images that are not optimal for automated processing. The main contribution of this study is to express contrast enhancement as a mapping from N-channel image data to a 1-channel gray-level image, and to devise a projection method that yields an image with minimal error with respect to the correct contrast image. The projection, the minimum-error contrast image, possesses the optimal contrast between the regions of interest in the image. The method is based on estimating the probability density distributions of the region values, and it employs Bayesian inference to establish the minimum-error projection.
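The idea of projecting N-channel data onto the single gray-level axis that best separates the region densities can be sketched as follows. This is not the thesis's exact Bayesian estimator; as a stand-in it uses Fisher's linear discriminant, which for Gaussian region densities with shared covariance coincides with the minimum-error direction. All data below are synthetic.

```python
import numpy as np

def min_error_projection(class_a, class_b):
    """Direction projecting N-channel pixels to one gray level.

    Fisher-discriminant sketch (an assumption, not the author's exact
    method): the direction separating the two region-value densities
    the most also minimizes their overlap, i.e. the projection error.
    """
    mu_a, mu_b = class_a.mean(axis=0), class_b.mean(axis=0)
    # Pooled within-class scatter of the two regions of interest.
    scatter = np.cov(class_a, rowvar=False) + np.cov(class_b, rowvar=False)
    w = np.linalg.solve(scatter, mu_b - mu_a)   # discriminant direction
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
region_a = rng.normal([0.2, 0.5], 0.05, size=(500, 2))  # N = 2 channels
region_b = rng.normal([0.6, 0.5], 0.05, size=(500, 2))
w = min_error_projection(region_a, region_b)
# 1-channel gray levels with maximal contrast between the two regions.
gray_a, gray_b = region_a @ w, region_b @ w
```

Applying `w` to every pixel of an N-channel image would produce the gray-level image whose contrast between the two regions is maximal under these assumptions.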
Abstract:
The ultimate goal of any research in the mechanism/kinematic/design area may be called predictive design, i.e. the optimisation of mechanism proportions in the design stage without requiring extensive life and wear testing. This is an ambitious goal and can be realised through the development and refinement of numerical (computational) technology to facilitate the design analysis and optimisation of complex mechanisms, mechanical components and systems. As part of a systematic design methodology, this thesis concentrates on kinematic synthesis (kinematic design and analysis) methods in the mechanism synthesis process. The main task of kinematic design is to find all possible solutions, in the form of structural parameters, that accomplish the desired motion requirements. The main formulations of kinematic design can be broadly divided into exact synthesis and approximate synthesis formulations. The exact synthesis formulation is based on solving n linear or nonlinear equations in n variables, and solutions are obtained by adopting closed-form classical or modern algebraic solution methods, or by using numerical solution methods based on polynomial continuation or homotopy. The approximate synthesis formulations are based on minimising the approximation error by direct optimisation. The main drawbacks of the exact synthesis formulation are: (ia) limitations on the number of design specifications, and (iia) failure in handling design constraints, especially inequality constraints. The main drawbacks of the approximate synthesis formulations are: (ib) it is difficult to choose a proper initial linkage, and (iib) it is hard to find more than one solution. Recent formulations for solving the approximate synthesis problem adopt polynomial continuation, providing several solutions, but they cannot handle inequality constraints.
Based on practical design needs, a mixed exact-approximate position synthesis with two exact and an unlimited number of approximate positions has also been developed. The solution space is presented as a ground pivot map, but the pole between the exact positions cannot be selected as a ground pivot. In this thesis the exact synthesis problem of planar mechanisms is solved by generating all possible solutions for the optimisation process, including solutions in positive-dimensional solution sets, within inequality constraints on the structural parameters. Through a literature survey it is first shown that the algebraic and numerical solution methods used in the research area of computational kinematics are capable of solving non-parametric algebraic systems of n equations in n variables, but cannot handle the singularities associated with positive-dimensional solution sets. In this thesis the problem of positive-dimensional solution sets is solved by adopting the main principles of the mathematical research area of algebraic geometry for solving parametric algebraic systems of n equations in at least n+1 variables (parametric in the mathematical sense that all parameter values for which the system is solvable are considered, including the degenerate cases). Adopting the developed solution method to solve the dyadic equations in direct polynomial form with two to three precision points, it has been algebraically proved and numerically demonstrated that the map of the ground pivots is ambiguous and that the singularities associated with positive-dimensional solution sets can be resolved. The positive-dimensional solution sets associated with the poles may contain physically meaningful solutions in the form of optimal defect-free mechanisms. Traditionally the mechanism optimisation of hydraulically driven boom mechanisms is done at an early stage of the design process. This results in optimal component design rather than optimal system-level design.
Modern mechanism optimisation at system level demands the integration of kinematic design methods with mechanical system simulation techniques. In this thesis a new kinematic design method for hydraulically driven boom mechanisms is developed and integrated with mechanical system simulation techniques. The developed kinematic design method is based on combining the two-precision-point formulation with the optimisation (using mathematical programming techniques or optimisation methods based on probability and statistics) of substructures, using criteria calculated from the system-level response of multidegree-of-freedom mechanisms. For example, by adopting the mixed exact-approximate position synthesis in direct optimisation (using mathematical programming techniques) with two exact positions and an unlimited number of approximate positions, drawbacks (ia)-(iib) have been eliminated. The design principles of the developed method are based on the design-tree approach to mechanical systems, and the design method is, in principle, capable of capturing the interrelationship between kinematic and dynamic synthesis simultaneously when the developed kinematic design method is integrated with the mechanical system simulation techniques.
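The contrast between exact synthesis (n equations in n unknowns) and approximate synthesis (minimising approximation error over more specifications than unknowns) can be illustrated with Freudenstein's four-bar function-generator equation, which is linear in the three link ratios. This is a textbook sketch, not the thesis's dyadic formulation, and the precision-point angles below are made up.

```python
import numpy as np

# Freudenstein's equation for a four-bar function generator:
#   K1*cos(phi) - K2*cos(psi) + K3 = cos(phi - psi)
# Exact synthesis: 3 precision points -> 3 linear equations in K1..K3.
# Approximate synthesis: more points than unknowns -> least squares.
def synthesize(phi, psi):
    A = np.column_stack([np.cos(phi), -np.cos(psi), np.ones_like(phi)])
    b = np.cos(phi - psi)
    K, *_ = np.linalg.lstsq(A, b, rcond=None)
    return K, A @ K - b          # link ratios and residual (approx. error)

phi3 = np.radians([40.0, 60.0, 80.0])        # input precision points
psi3 = np.radians([55.0, 70.0, 95.0])        # desired output angles
K_exact, res_exact = synthesize(phi3, psi3)  # residual ~ 0: exact fit

phi7 = np.radians(np.linspace(30, 100, 7))   # over-determined case
psi7 = np.radians(np.linspace(50, 110, 7))
K_ls, res_ls = synthesize(phi7, psi7)        # minimises squared error
```

The exact case interpolates the three precision points with zero residual, while the seven-point case distributes a nonzero approximation error over all positions, which is exactly the trade-off the exact and approximate formulations above describe.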
Abstract:
The present dissertation is devoted to a systematic approach to the development of abatement methods for toxic and refractory organic pollutants by chemical decomposition in aqueous and gaseous phases. The systematic approach outlines the basic scenario of chemical decomposition process applications, with a step-by-step approximation to the most effective result and a predictable outcome for the full-scale application, confirmed by successful experience. The strategy includes the following steps: chemistry studies; reaction kinetics studies in interaction with mass transfer processes under different control parameters; contact equipment design and studies; mathematical description of the process for its modelling and simulation; integration of the processes into a treatment technology and its optimisation; and the treatment plant design. The main idea of the systematic approach to introducing an oxidation process is the search for the most effective combination of the chemical reaction and the treatment device in which the reaction is supposed to take place. Under this strategy, knowledge of the reaction pathways, products, stoichiometry and kinetics is fundamental and, unfortunately, often unavailable beforehand. Therefore, chemistry research on novel treatment methods nowadays comprises a substantial part of the effort. Chemical decomposition methods in the aqueous phase include oxidation by ozonation, ozone-associated methods (O3/H2O2, O3/UV, O3/TiO2), the Fenton reagent (H2O2/Fe2+/3+) and photocatalytic oxidation (PCO). In the gaseous phase, PCO and catalytic hydrolysis over zero-valent iron are developed.
The experimental studies within the described methodology involve aqueous-phase oxidation of the natural organic matter (NOM) of potable water, phenolic and aromatic amino compounds, ethylene glycol and its derivatives as de-icing agents, and the oxygenated motor fuel additive methyl tert-butyl ether (MTBE) in leachates and polluted groundwater. Gas-phase chemical decomposition includes PCO of volatile organic compounds and dechlorination of chlorinated methane derivatives. The results of the research summarised here are presented in fifteen attachments (publications and papers submitted for publication or under preparation).
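A common step in the kinetics studies described above is reducing pollutant decay under excess oxidant to a pseudo-first-order rate constant. The sketch below uses synthetic concentrations, not the dissertation's measurements, and assumes the model C(t) = C0·exp(-k_obs·t).

```python
import numpy as np

# Pseudo-first-order decomposition kinetics sketch (illustrative data):
# with oxidant in excess, pollutant decay follows C(t) = C0*exp(-k*t).
t = np.array([0.0, 5.0, 10.0, 20.0, 40.0])   # reaction time, min (assumed)
C = 1.0 * np.exp(-0.12 * t)                  # concentration, k_obs = 0.12 1/min

# Linearise: ln C = ln C0 - k_obs * t, then fit the line by least squares.
slope, lnC0 = np.polyfit(t, np.log(C), 1)
k_fit = -slope                               # observed rate constant
half_life = np.log(2) / k_fit                # t_1/2 = ln 2 / k_obs
```

The fitted rate constant and half-life are the quantities that feed the mass-transfer and reactor-design steps of the strategy.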
Abstract:
The present study was carried out on two different servo systems. In the first, a servo-hydraulic system was identified and then controlled by a fuzzy gain-scheduling controller. In the second, an electromagnetic linear motor, the suppression of mechanical vibration and the position tracking of a reference model were studied using a neural network and an adaptive backstepping controller, respectively. The research methods are as follows. Electro-hydraulic servo systems (EHSS) are commonly used in industry. Such systems are nonlinear in nature and their dynamic equations contain several unknown parameters. System identification is a prerequisite for the analysis of a dynamic system. One of the most promising novel evolutionary algorithms for solving global optimization problems is Differential Evolution (DE). In this study, the DE algorithm is proposed for handling nonlinear constraint functions with boundary limits on the variables, in order to find the best parameters of a servo-hydraulic system with a flexible load. DE offers fast convergence and accurate solutions regardless of the initial parameter values. The control of hydraulic servo systems has been the focus of intense research over the past decades. Such systems are nonlinear in nature and generally difficult to control, since changing system parameters while using the same gains will cause overshoot or even loss of system stability. The highly nonlinear behaviour of these devices makes them ideal subjects for applying different types of sophisticated controllers. The study is concerned with a second-order model reference for the positioning control of a flexible-load servo-hydraulic system using fuzzy gain scheduling. In the present research, acceleration feedback was used to compensate for the lack of damping in the hydraulic system. For comparison, a P controller with feed-forward acceleration and different gains in extension and retraction was used.
The design procedure for the controller and the experimental results are discussed. The results suggest that using the fuzzy gain-scheduling controller decreases the position reference tracking error. The second part of the research concerned a Permanent Magnet Linear Synchronous Motor (PMLSM). Here, a recurrent neural network compensator for suppressing mechanical vibration in a PMLSM with a flexible load is studied. The linear motor is controlled by a conventional PI velocity controller, and the vibration of the flexible mechanism is suppressed by using a hybrid recurrent neural network. The differential evolution strategy and the Kalman filter are used to avoid the local minimum problem and to estimate the states of the system, respectively. The proposed control method is first designed using a nonlinear simulation model built in Matlab Simulink and then implemented on a practical test rig. The proposed method works satisfactorily and suppresses the vibration successfully. In the last part of the research, a nonlinear load control method is developed and implemented for a PMLSM with a flexible load. The purpose of the controller is to track the flexible load to the desired position reference as fast as possible and without awkward oscillation. The control method is based on an adaptive backstepping algorithm whose stability is ensured by the Lyapunov stability theorem. The states of the system needed by the controller are estimated using the Kalman filter. The proposed controller is implemented and tested on a linear motor test drive, and the responses are presented.
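The DE-based identification step can be sketched as follows. This is a minimal DE/rand/1/bin loop fitting a stand-in grey-box model (gain `a`, pole `b`) to a simulated step response within box constraints; the actual EHSS model and data of the study are not reproduced here.

```python
import numpy as np

# Minimal Differential Evolution sketch for parameter identification
# with boundary limits on the variables (stand-in model, not the EHSS).
rng = np.random.default_rng(1)
t = np.linspace(0, 2, 50)
y_meas = 1.5 * (1 - np.exp(-3.0 * t))        # "measured" step response

def cost(p):                                  # sum-of-squares output error
    a, b = p
    return np.sum((a * (1 - np.exp(-b * t)) - y_meas) ** 2)

def de(cost, bounds, npop=20, gens=100, F=0.7, CR=0.9):
    lo, hi = np.array(bounds).T
    pop = lo + rng.random((npop, len(lo))) * (hi - lo)
    fit = np.array([cost(p) for p in pop])
    for _ in range(gens):
        for i in range(npop):
            r1, r2, r3 = rng.choice(npop, 3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            cross = rng.random(len(lo)) < CR
            trial = np.where(cross, mutant, pop[i])
            f = cost(trial)
            if f < fit[i]:                    # greedy one-to-one selection
                pop[i], fit[i] = trial, f
    return pop[np.argmin(fit)]

a_hat, b_hat = de(cost, bounds=[(0.1, 5.0), (0.1, 10.0)])
```

Because selection is greedy and the mutant is clipped to the bounds, every accepted candidate respects the boundary limits, which is the property the study relies on for constrained identification.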
Abstract:
The marketplace of the twenty-first century will demand that manufacturing assume a crucial role in a new competitive field. Two potential resources in the area of manufacturing are advanced manufacturing technology (AMT) and empowered employees. Surveys in Finland have shown the need to invest in new AMT in the Finnish sheet metal industry in the 1990s. In this drive the focus has been on hard technology, and less attention has been paid to the utilization of human resources. In many manufacturing companies an appreciable portion of the attainable profit is wasted due to poor quality of planning and workmanship. This thesis examines the distribution of production errors in the production flow of sheet-metal-part-based constructions. The objective of the thesis is to analyze the origins of production errors in the production flow of sheet metal based constructions. Employee empowerment is also investigated in theory, and its significance in reducing the overall number of production errors is discussed. This study is most relevant to the sheet metal part fabricating industry, which produces sheet-metal-part-based constructions for the electronics and telecommunication industries. The study concentrates on the manufacturing function of a company and is based on a field study carried out in five Finnish case factories. In each case factory the work phases most prone to production errors were identified. It can be assumed that most production errors arise in manually operated work phases and in mass production work phases. However, no common pattern of production error distribution in the production flow could be found in the collected data. The most important finding was nevertheless that, in each case factory studied, most production errors belong to the category of human-activity-based errors. This result indicates that most of the problems in the production flow are related to employees or work organization.
Development activities must therefore be focused on developing employee skills or the work organization. Employee empowerment provides the right tools and methods to achieve this.
A priori parameterisation of the CERES soil-crop models and tests against several European data sets
Abstract:
Mechanistic soil-crop models have become indispensable tools to investigate the effect of management practices on the productivity or environmental impacts of arable crops. Ideally these models may claim to be universally applicable because they simulate the major processes governing the fate of inputs such as fertiliser nitrogen or pesticides. However, because they deal with complex systems and uncertain phenomena, site-specific calibration is usually a prerequisite to ensure their predictions are realistic. This statement implies that some experimental knowledge on the system to be simulated should be available prior to any modelling attempt, and poses a major limitation to practical applications of models. Because the demand for more general simulation results is high, modellers have nevertheless taken the bold step of extrapolating a model tested within a limited sample of real conditions to a much larger domain. While methodological questions are often disregarded in this extrapolation process, they are specifically addressed in this paper, in particular the issue of a priori model parameterisation. We thus implemented and tested a standard procedure to parameterise the soil components of a modified version of the CERES models. The procedure converts routinely available soil properties into functional characteristics by means of pedo-transfer functions. The resulting predictions of soil water and nitrogen dynamics, as well as crop biomass, nitrogen content and leaf area index, were compared to observations from trials conducted in five locations across Europe (southern Italy, northern Spain, northern France and northern Germany). In three cases, the model’s performance was judged acceptable when compared to experimental errors on the measurements, based on a test of the model’s root mean squared error (RMSE). Significant deviations between observations and model outputs were however noted in all sites, and could be ascribed to various model routines.
In decreasing importance, these were: water balance, the turnover of soil organic matter, and crop N uptake. A better match to field observations could therefore be achieved by visually adjusting related parameters, such as field-capacity water content or the size of soil microbial biomass. As a result, model predictions fell within the measurement errors in all sites for most variables, and the model’s RMSE was within the range of published values for similar tests. We conclude that the proposed a priori method yields acceptable simulations with only a 50% probability, a figure which may be greatly increased through a posteriori calibration. Modellers should thus exercise caution when extrapolating their models to a large sample of pedo-climatic conditions for which they have only limited information.
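The acceptance test above compares the model's RMSE against the experimental error on the measurements. A minimal sketch of that criterion, with made-up soil water values and an assumed measurement-error bound (not the paper's data):

```python
import numpy as np

def rmse(obs, sim):
    """Root mean squared error used to judge model performance."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return np.sqrt(np.mean((sim - obs) ** 2))

# Illustrative values only: simulated vs observed soil water content,
# judged acceptable when RMSE is within the measurement error.
observed  = np.array([0.24, 0.27, 0.31, 0.29])   # m3/m3
simulated = np.array([0.25, 0.25, 0.33, 0.30])
MEASUREMENT_ERROR = 0.03                          # assumed bound, m3/m3
acceptable = rmse(observed, simulated) <= MEASUREMENT_ERROR
```

The same comparison, repeated per site and per variable, yields the "acceptable in three of five cases" type of verdict reported above.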
Abstract:
Location information is becoming increasingly necessary, as every new smartphone incorporates a GPS (Global Positioning System) receiver, which allows the development of various applications based on it. However, the GPS signal cannot be properly received in indoor environments. For this reason, new indoor positioning systems are being developed. As the indoor environment is a very challenging scenario, it is necessary to study the precision of the obtained location information in order to determine whether these new positioning techniques are suitable for indoor positioning.
Abstract:
Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
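The proxy-error-model idea can be sketched on toy data: run exact and approximate solvers on a learning set, reduce both curve sets by principal component analysis (a stand-in here for FPCA), and learn a linear map from proxy scores to exact scores. Everything below (the sinusoidal "responses", the biased proxy, the linear score map) is an assumed illustration, not the paper's reservoir curves or its exact regression model.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 60)
n_learn = 40
amp = rng.uniform(0.5, 2.0, n_learn)
exact = amp[:, None] * np.sin(2 * np.pi * t)                  # exact solver
proxy = 0.8 * exact + 0.05 * rng.standard_normal(exact.shape) # biased proxy

def pca(X, k):
    """Mean, first k modes, and scores of a set of curves."""
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k], (X - mean) @ Vt[:k].T

mu_e, V_e, S_e = pca(exact, k=2)
mu_p, V_p, S_p = pca(proxy, k=2)

# Error model in reduced score space: exact scores ~ proxy scores.
W, *_ = np.linalg.lstsq(S_p, S_e, rcond=None)

def predict_exact(proxy_curve):
    """Predict the exact response from the sole proxy response."""
    score = (proxy_curve - mu_p) @ V_p.T
    return mu_e + (score @ W) @ V_e

new_exact = 1.3 * np.sin(2 * np.pi * t)       # unseen realization
pred = predict_exact(0.8 * new_exact)         # proxy run only
```

Because the map is learned on scores rather than full curves, the dimensionality reduction both regularises the error model and, as the abstract notes, gives a handle for diagnosing the informativeness of the learning set.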
Abstract:
This thesis surveys time-based and stochastic software reliability models and examines some of the models in practice. The theoretical part of the work contains the key definitions and metrics used in describing and assessing software reliability, as well as the descriptions of the models themselves. Two groups of software reliability models are presented. The first group consists of risk-based models. The second group comprises models based on fault seeding and tagging. The empirical part of the work contains the descriptions and results of the experiments. The experiments were performed using three models from the first group: the Jelinski-Moranda model, the first geometric model and a simple exponential model. The purpose of the experiments was to investigate how the distribution of the input data affects the behaviour of the models and how sensitive the models are to changes in the amount of input data. The Jelinski-Moranda model proved the most sensitive to the distribution, owing to convergence problems, and the first geometric model the most sensitive to changes in the amount of data.
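The Jelinski-Moranda model assumes that after i-1 faults have been removed, the next interfailure time is exponential with rate proportional to the number of remaining faults. A profile maximum-likelihood sketch on synthetic data (not the thesis's measurements) looks like this; the grid search over N also shows why convergence problems arise when the likelihood is flat or maximised at the grid boundary.

```python
import numpy as np

# Jelinski-Moranda: interfailure time t_i ~ Exp(phi * (N - i + 1)).
# Estimate total faults N and per-fault rate phi by profile MLE.
rng = np.random.default_rng(3)
N_true, phi_true, n = 30, 0.02, 25
t = np.array([rng.exponential(1.0 / (phi_true * (N_true - i)))
              for i in range(n)])          # i = 0..n-1 faults already found

def profile_loglik(N):
    remaining = N - np.arange(n)           # N, N-1, ..., N-n+1 faults left
    phi = n / np.sum(remaining * t)        # closed-form MLE of phi given N
    ll = np.sum(np.log(phi * remaining)) - phi * np.sum(remaining * t)
    return ll, phi

grid = np.arange(n, 200)                   # N must be at least n
lls, phis = zip(*(profile_loglik(N) for N in grid))
best = int(np.argmax(lls))
N_hat, phi_hat = grid[best], phis[best]    # estimated faults and rate
```

When the data carry little information about N, the profile likelihood keeps increasing toward the upper end of the grid, which corresponds to the convergence failures reported for the model in the experiments.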
Abstract:
This Master's thesis was carried out for an international company that supplies machinery, production systems and complete mills for the mechanical wood-processing industry. The purpose of the thesis was to identify the causes of the vibration problems encountered in the positioning of the veneer lathe's knife carriage, to investigate solutions for overcoming them, and to determine the peeling forces. The theoretical part examines the structure and operation of the veneer lathe, in particular its hydraulic servo systems, and the veneer peeling process. The knife-carriage feed servo system was studied theoretically by deriving the closed-loop transfer functions "position/command" and "error/force" and by plotting their frequency responses, from which the effects of the parameters on the system's behaviour were examined. The results were confirmed by simulation. The vibration problems of the present system were found to stem mainly from the compressibility of the hydraulic oil in the cylinder. A larger cylinder diameter and a higher viscous friction coefficient were proposed as improvements. In the experimental part, the forces occurring in the actuators of the veneer lathe's servo systems were measured, and the actual peeling forces were calculated from them. In addition, the positioning accuracy of the knife-carriage feed and of the other servo systems during peeling was studied. A portable measurement system was designed and procured for the measurements.
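The effect of damping on the "position/command" frequency response can be sketched with a standard valve-controlled-cylinder model, in which the hydraulic natural frequency is set by oil compressibility and the damping ratio rises with viscous friction. The parameter values below are illustrative assumptions, not the lathe's actual data.

```python
import numpy as np

# Open-loop position servo model (illustrative parameters):
#   G(s) = Kv*wh**2 / (s*(s**2 + 2*z*wh*s + wh**2))
# wh: hydraulic natural frequency (oil spring), z: damping ratio.
def closed_loop_mag(w, Kv=10.0, wh=60.0, z=0.1):
    s = 1j * w
    G = Kv * wh**2 / (s * (s**2 + 2 * z * wh * s + wh**2))
    return np.abs(G / (1 + G))            # "position/command" magnitude

w = np.linspace(1, 120, 500)              # rad/s
peak_low_damping  = closed_loop_mag(w, z=0.1).max()   # oil spring only
peak_more_viscous = closed_loop_mag(w, z=0.4).max()   # added friction
```

The poorly damped case shows a pronounced resonance peak near the hydraulic natural frequency, and raising the viscous friction coefficient flattens it, which is the mechanism behind the improvements proposed in the thesis.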
Abstract:
This Master's thesis deals with the occupational health and safety (OHS) management and environmental protection problems and risks that a mill-site operator faces when outsourcing mill operations and moving to round-the-clock (24 h) external maintenance services. The theoretical part clarifies the statutory regulations and requirements related to outsourcing that concern the management of health, safety and environmental issues in pulp, paper and board mills in Europe, the United States and Finland. The problems of measuring the level of OHS performance and of environmental protection are brought out. Existing international standards for OHS management systems and environmental management systems, risk management tools and programmes are briefly presented. The practical part was carried out as a case study of the Äänekoski mill complex and a chemical industry plant, Noviant CMC Oy. The problems of OHS management and environmental protection are examined in the context of outsourcing mill operations. The audit procedures of the integrated management system, the target areas of outsourcing, risk management in small and medium-sized enterprises, and the safety training of external workers receive particular attention. Using the collected OHS and environmental material, a model and content proposal were designed for a new web-browser-based tool to support the management of OHS and environmental matters. The tool is intended to serve the needs of the various stakeholders of Noviant CMC Oy. The practical part of the thesis forms the basis for the JP MILLSAFE pilot project, which was launched to develop a web-browser-based safety service application serving the needs of the various stakeholders of the Äänekoski mill complex.
Abstract:
The purpose of the research is to determine the practical profit that can be achieved using neural network methods as a prediction instrument. The thesis investigates the ability of neural networks to forecast future events. This capability is tested on the example of price prediction during intraday trading on the stock market. The experiments predict average 1-, 2-, 5- and 10-minute prices based on one day of data, using two different types of forecasting systems: one based on recurrent neural networks and the other on backpropagation neural nets. The precision of the predictions is controlled by the absolute error and the error of market direction. The economic effectiveness is estimated by a special trading system. In conclusion, the best neural net structures are tested with data from a 31-day interval. The best average percentages of profit from one transaction (buying + selling) are 0.06668654, 0.188299453, 0.349854787 and 0.453178626, achieved for prediction periods of 1, 2, 5 and 10 minutes respectively. The investigation may be of interest to investors who have access to a fast information channel with the possibility of minute-by-minute data refreshment.
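The two control metrics named above can be sketched on made-up price series (not the thesis's data): the mean absolute error of the predicted price, and the fraction of intervals in which the predicted market direction (up/down) disagrees with the realised one.

```python
import numpy as np

def abs_error(actual, predicted):
    """Mean absolute error of the price forecast."""
    return np.mean(np.abs(np.asarray(actual) - np.asarray(predicted)))

def direction_error(actual, predicted):
    """Share of intervals where the forecast up/down move was wrong."""
    da = np.sign(np.diff(actual))      # realised market direction
    dp = np.sign(np.diff(predicted))   # forecast market direction
    return np.mean(da != dp)

actual    = [10.0, 10.2, 10.1, 10.4, 10.3]   # made-up minute prices
predicted = [10.0, 10.1, 10.2, 10.5, 10.2]
mae = abs_error(actual, predicted)
dir_err = direction_error(actual, predicted)
```

A forecaster can have a small absolute error yet a large direction error (or vice versa), which is why the thesis tracks both before estimating trading profit.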
Abstract:
This Master's thesis examines voice services implemented in an operator's IP network. The study was motivated by a practical need. Calls carried over VoIP have quickly become a serious challenger to traditional circuit-switched telephony. IP technology makes it possible to integrate data, voice and video services into a single network. In addition, an IP network is inexpensive, widespread and efficient. These properties make it an attractive platform for voice services. The convergence of networks enables a new kind of communication environment in which a wide range of applications and tools can be used to facilitate communication between people. The work included procuring and installing test equipment. The equipment had to be capable of implementing an operator's VoIP system providing IP PBX services for several companies. The equipment was tested with deliberately induced fault situations and with trial users. The testing assessed the system's suitability for the operator's production use.