33 results for adaptive fusion

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance:

30.00%

Abstract:

A quadcopter is a helicopter with four rotors; it is a mechanically simple device, but it requires complex electrical control of each motor. The control system needs accurate information about the quadcopter's attitude in order to achieve stable flight. The goal of this bachelor's thesis was to research how this information could be obtained. A literature review revealed that most quadcopters whose source code is available use a complementary filter, or some derivative of it, to fuse data from a gyroscope, an accelerometer and often also a magnetometer. Combined, these sensors are called an Inertial Measurement Unit. This thesis focuses on calculating angles from each sensor's data and fusing these with a complementary filter. On the basis of the literature review and measurements made with a quadcopter, the proposed filter provides sufficiently accurate attitude data for the flight control system. However, a simple complementary filter has one significant drawback: it works reliably only when the quadcopter is hovering or moving at a constant speed. The reason is that an accelerometer cannot be used to measure angles accurately when linear acceleration is present. This problem can be fixed using some derivative of a complementary filter, such as an adaptive complementary filter or a Kalman filter, which are not covered in this thesis.
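
As an illustration of the fusion described above, here is a minimal sketch of a complementary filter for one attitude angle, assuming a generic textbook formulation rather than the exact filter of the thesis; the blend factor and sample period are assumed values:

```python
import math

ALPHA = 0.98  # blend factor: trust the gyro short-term, the accelerometer long-term (assumed value)
DT = 0.01     # sample period in seconds, assuming a 100 Hz IMU

def complementary_filter(pitch_prev, gyro_rate, accel_x, accel_z):
    """Fuse gyroscope and accelerometer readings into one pitch estimate.

    pitch_prev : previous pitch estimate in degrees
    gyro_rate  : angular rate about the pitch axis in deg/s
    accel_x/z  : accelerometer readings along the body axes
    """
    # Integrating the gyro rate is accurate short-term but drifts over time.
    gyro_pitch = pitch_prev + gyro_rate * DT
    # The gravity direction from the accelerometer is drift-free but noisy,
    # and wrong whenever linear acceleration is present (the drawback noted above).
    accel_pitch = math.degrees(math.atan2(accel_x, accel_z))
    # High-pass the gyro estimate, low-pass the accelerometer estimate.
    return ALPHA * gyro_pitch + (1.0 - ALPHA) * accel_pitch
```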

Relevance:

20.00%

Abstract:

Superheater corrosion causes vast annual losses for power companies. With a reliable corrosion prediction method, plants can be designed accordingly, and knowledge of fuel selection and determination of process conditions may be utilized to minimize superheater corrosion. Growing interest in using recycled fuels creates additional demands for the prediction of corrosion potential. Models depending on corrosion theories will fail if the relations between the inputs and the output are poorly known. A prediction model based on fuzzy logic and an artificial neural network is able to improve its performance as the amount of data increases. The corrosion rate of a superheater material can most reliably be detected with a test done in a test combustor or in a commercial boiler. The steel samples can be located in a special, temperature-controlled probe and exposed to the corrosive environment for a desired time. These tests give information about the average corrosion potential in that environment. Samples may also be cut from superheaters during shutdowns. The analysis of samples taken from probes or superheaters after exposure to a corrosive environment is a demanding task: if the corrosive contaminants can be reliably analyzed, the corrosion chemistry can be determined and an estimate of the material lifetime can be given. In cases where the reason for corrosion is not clear, determining the corrosion chemistry and estimating the lifetime is more demanding. In order to provide a laboratory tool for the analysis and prediction, a new approach was chosen. During this study, the following tools were generated:

- A model for the prediction of superheater fireside corrosion, based on fuzzy logic and an artificial neural network, built upon a corrosion database developed from fuel and bed material analyses and measured corrosion data. The developed model predicts superheater corrosion with high accuracy at the early stages of a project.
- An adaptive corrosion analysis tool based on image analysis, constructed as an expert system. This system utilizes implementations of user-defined algorithms, which allows the development of an artificially intelligent system for the task.

According to the results of the analyses, several new rules were developed for determining the degree and type of corrosion. By combining these two tools, a user-friendly expert system for the prediction and analysis of superheater fireside corrosion was developed. This tool may also be used to minimize corrosion risks in the design of fluidized bed boilers.
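
As a rough sketch of how fuzzy logic and a neural network can be combined in such a predictor, the snippet below fuzzifies one hypothetical fuel-analysis input (chlorine content) and feeds the membership degrees to a small network; the membership breakpoints, network size and weights are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzify_chlorine(cl_wt_pct):
    """Map a crisp fuel chlorine content (wt-%) to fuzzy low/medium/high degrees.
    Breakpoints are illustrative assumptions."""
    return np.array([
        triangular(cl_wt_pct, -0.1, 0.0, 0.3),   # low
        triangular(cl_wt_pct, 0.1, 0.4, 0.7),    # medium
        triangular(cl_wt_pct, 0.5, 1.0, 1.5),    # high
    ])

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # hidden layer (would be trained on the database)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # output layer

def predict_corrosion_rate(cl_wt_pct):
    """One forward pass: fuzzified input -> small neural network -> corrosion rate."""
    h = np.tanh(W1 @ fuzzify_chlorine(cl_wt_pct) + b1)
    return (W2 @ h + b2)[0]
```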

Relevance:

20.00%

Abstract:

In the future, mobile devices such as mobile phones and PDAs will be able to establish network connections using different connection methods in different situations. These connection methods have differing communication characteristics with respect to latency, bandwidth, error rate and so on. Wireless connection methods are also characterized by strong variation in the properties of the communication link depending on the environment. To achieve the best performance and usability, a mobile device must be able to adapt to the communication method in use and to changes in the communication environment. An essential part of telecommunication are protocol stacks, which enable communication between systems, offering network services to the user applications of the terminal device. For protocol stacks to adapt to the characteristics of a particular communication environment, it must be possible to change the behavior of the protocol stack at runtime. Traditionally, however, protocol stacks have been built to be static, so that adaptation on this scale is very difficult, if not impossible, to implement. This master's thesis deals with building adaptive protocol stacks using a component-based software framework that enables runtime modification of the protocol stacks. By implementing an example system and measuring its performance in a varying communication environment, we show that adaptive protocol stacks are feasible to build and that they offer significant advantages, especially in future mobile devices.
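
The following is a minimal sketch of the component-based idea, assuming hypothetical layer names; it shows how a stack built from exchangeable components can be modified at runtime when the communication environment changes:

```python
import zlib
from typing import Protocol

class Layer(Protocol):
    def send(self, data: bytes) -> bytes: ...

class Compression:
    """Worthwhile on a slow, low-bandwidth link."""
    def send(self, data: bytes) -> bytes:
        return zlib.compress(data)

class PassThrough:
    """Cheaper on a fast, reliable link."""
    def send(self, data: bytes) -> bytes:
        return data

class ProtocolStack:
    """A protocol stack whose layer composition can be changed while running."""
    def __init__(self, layers: list[Layer]):
        self.layers = list(layers)

    def send(self, data: bytes) -> bytes:
        for layer in self.layers:   # pass the payload down through the layers
            data = layer.send(data)
        return data

    def replace_layer(self, index: int, new_layer: Layer) -> None:
        """Runtime adaptation: swap one component without rebuilding the stack."""
        self.layers[index] = new_layer

stack = ProtocolStack([Compression()])    # slow wireless link: compress
stack.send(b"example payload" * 100)
stack.replace_layer(0, PassThrough())     # link improved: stop compressing
stack.send(b"example payload" * 100)
```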

Relevance:

20.00%

Abstract:

Oral mucosa is a frequent site of primary herpes simplex virus type 1 (HSV-1) infection, whereas intraoral recurrent disease is very rare. Instead, reactivation from latency predominantly results in asymptomatic HSV shedding into saliva or in recurrent labial herpes (RLH), with highly individual frequency. The current study aimed to elucidate the role of human oral innate and acquired immune mechanisms in the modulation of HSV infection in the orolabial region. Saliva was found to neutralize HSV-1 and to protect cells from infection independently of salivary antibodies. Neutralization capacity was higher in saliva from asymptomatic HSV-seropositive individuals than in saliva from subjects with a history of RLH or from seronegative controls. Neutralization was at least partially associated with salivary lactoferrin content. Further, lactoferrin and peroxidase-generated hypothiocyanite were found to either neutralize HSV-1 or interfere with HSV-1 replication, whereas lysozyme displayed no anti-HSV-1 activity. Lactoferrin was also shown to modulate HSV-1 infection by inhibiting keratinocyte proliferation. RLH susceptibility was further found to be associated with Th2-biased cytokine responses against HSV and a higher level of anti-HSV IgG with Th2 polarization, indicating a lack of efficiency of the humoral response in the control of HSV disease. In a three-dimensional cell culture, keratinocytes were found to support both lytic and nonproductive infection, suggesting HSV persistence in epithelial cells and further emphasizing the importance of peripheral immune control of HSV. These results suggest that certain innate salivary antimicrobial compounds and Th1-type cellular responses are critically important in protecting the host against HSV disease, implying possible applications in drug, vaccine and gene therapy design.

Relevance:

20.00%

Abstract:

We present a brief résumé of the history of solidification research and of the key factors affecting the solidification of fusion welds. There is general agreement on the basic solidification theory, although differing, even confusing, nomenclatures do exist, and Cases 2 and 3 (Chalmers' basic boundary conditions for solidification, categorized by Savage as Cases) are variably emphasized. Model Frame, a tool that helps to model the continuum of fusion weld solidification from start to end, is proposed. It incorporates the general solidification models, of which the pertinent ones are selected for the actual modeling. The basic models are the main solidification Cases 1…4. These discrete Cases are joined by Sub-Cases: models of Pfann, Flemings and others, which bring the needed Sub-Case variables into the model. Model Frame depicts a grain growing from the weld interface to the weld centerline. Besides modeling, Model Frame supports education and academic debate. New mathematical modeling techniques will extend its use into multi-dimensional modeling, introducing new variables and increasing the modeling accuracy. We propose a melting/solidification model (M/S-model) predicting the solute profile at the start of the solidification of a fusion weld. This Case 3-based Sub-Case takes into account the melting stage, the solute back-diffusion in the solid, and the growth rate acceleration typical of fusion welds. We propose, based on the works of Rutter & Chalmers and David & Vitek and on our experimental results on copper, that the NEGS-EGS transition is not associated only with the cellular-dendritic transition. Solidification was studied experimentally on pure and doped copper over a welding speed range from 0 to 200 cm/min, with one test at 3000 cm/min. Only planar and cellular structures were found; no dendrites, columnar or equiaxed. Cell substructures (rows of cubic elements we call "cubelettes", "cell-bands" and "micro-cells") and an anomalous crack morphology we call "crack-eye" were detected, as well as microscopic hot crack nuclei we call "grain-lag cracks", caused by a grain slightly lagging behind its neighbors in arriving at the weld centerline. The Varestraint test and R-test revealed a change of crack morphologies from centerline cracks to grain- and cell-boundary cracks with increasing welding speed. High speed made the cracks invisible to the naked eye and hardly detectable with a light microscope, while an electron microscope often revealed networks of fine micro-cracks.
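
For orientation, the classical baseline that such Sub-Case models refine is the Scheil-Gulliver solute redistribution equation, which assumes complete mixing in the liquid and no diffusion in the solid; the M/S-model described above relaxes exactly these assumptions by adding the melting stage, back-diffusion and growth rate acceleration:

```latex
% Scheil-Gulliver equation: composition C_s of the solid forming at solid
% fraction f_s, for partition coefficient k and nominal alloy composition C_0.
C_s = k \, C_0 \, (1 - f_s)^{\,k-1}
```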

Relevance:

20.00%

Abstract:

This work presents new, efficient Markov chain Monte Carlo (MCMC) simulation methods for statistical analysis in various modelling applications. When using MCMC methods, the model is simulated repeatedly to explore the probability distribution describing the uncertainties in model parameters and predictions. In adaptive MCMC methods based on the Metropolis-Hastings algorithm, the proposal distribution needed by the algorithm learns from the target distribution as the simulation proceeds. Adaptive MCMC methods have been the subject of intensive research lately, as they open a way for an essentially easier use of the methodology; the lack of user-friendly computer programs has been a main obstacle to wider acceptance of the methods. This work provides two new adaptive MCMC methods: DRAM and AARJ. The DRAM method has been built especially to work in high-dimensional and non-linear problems. The AARJ method is an extension of DRAM for model selection problems, where the mathematical formulation of the model is uncertain and we want to fit several different models to the same observations simultaneously. The methods were developed with the needs of modelling applications typical of the environmental sciences in mind, and the development work was pursued while working on several application projects. The applications presented in this work are: a wintertime oxygen concentration model for Lake Tuusulanjärvi and adaptive control of the aerator; a nutrient model for Lake Pyhäjärvi and lake management planning; and validation of the algorithms of the GOMOS ozone remote sensing instrument on board the Envisat satellite of the European Space Agency, together with a study of the effects of aerosol model selection on the GOMOS algorithm.
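
A minimal sketch of the adaptive Metropolis idea on which DRAM builds (the proposal covariance is learned from the chain history); the delayed rejection stage of DRAM is omitted here, and the target density is a stand-in:

```python
import numpy as np

def log_target(x):
    """Stand-in log-density (standard 2-D Gaussian); replace with the model's posterior."""
    return -0.5 * np.dot(x, x)

def adaptive_metropolis(x0, n_iter=10000, adapt_start=500, eps=1e-8):
    rng = np.random.default_rng(1)
    d = len(x0)
    sd = 2.4**2 / d                    # scaling from the adaptive Metropolis literature
    chain = np.empty((n_iter, d))
    x = np.asarray(x0, float)
    logp = log_target(x)
    cov = np.eye(d)
    for i in range(n_iter):
        if i >= adapt_start:
            # Adaptation: the proposal covariance learns from the chain so far.
            cov = sd * (np.cov(chain[:i].T) + eps * np.eye(d))
        prop = rng.multivariate_normal(x, cov)
        logp_prop = log_target(prop)
        if np.log(rng.random()) < logp_prop - logp:   # Metropolis accept/reject
            x, logp = prop, logp_prop
        chain[i] = x
    return chain

samples = adaptive_metropolis(np.zeros(2))
```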

Relevance:

20.00%

Abstract:

In this diploma work, the advantages of coherent anti-Stokes Raman scattering (CARS) spectrometry and various methods for the quantitative analysis of substance structure with it are considered. The basic methods and concepts of adaptive analysis are presented. On the basis of these methods, an algorithm for the automatic measurement of the scattering band size of a target component in a CARS spectrum is developed. The algorithm uses the known full spectrum of the target substance and compares it with the CARS spectrum. The form of the differential spectrum is used as feedback to control the accuracy of the matching. To exclude the influence of the background in CARS spectra, the differential spectrum is analysed by means of its second derivative. The algorithm is verified on simple simulated spectra and on experimentally obtained spectra of organic compounds.
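
A sketch of the matching step, under the assumption that the band size enters as a scale factor on the known reference spectrum; fitting in the second-derivative domain suppresses the smooth non-resonant background, as described above. This is an illustrative reconstruction, not the thesis's exact algorithm:

```python
import numpy as np

def second_derivative(y):
    """Discrete second derivative; a slowly varying background nearly vanishes here."""
    return np.diff(y, n=2)

def estimate_band_size(cars_spectrum, reference_spectrum):
    """Least-squares scale factor a minimizing the second derivative of the
    differential spectrum (cars - a * reference); the residual norm serves
    as the feedback on matching accuracy."""
    d_cars = second_derivative(cars_spectrum)
    d_ref = second_derivative(reference_spectrum)
    a = np.dot(d_ref, d_cars) / np.dot(d_ref, d_ref)
    residual = d_cars - a * d_ref
    return a, np.linalg.norm(residual)
```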

Relevance:

20.00%

Abstract:

In this thesis, programmatic, application-layer means for better energy efficiency in the VoIP application domain are studied. The work concentrates on optimizations suitable for VoIP implementations utilizing SIP and IEEE 802.11 technologies. Energy-saving optimizations can have an impact on perceived call quality, and thus energy-saving means are studied together with the factors affecting perceived call quality. The thesis first gives a general view of the topic. Based on theory, adaptive optimization schemes for dynamically controlling the application's operation are proposed. A runtime quality model, capable of being integrated into the optimization schemes, is developed for VoIP call quality estimation. Based on the proposed optimization schemes, power consumption measurements are made to determine the achievable advantages. The measurement results show that a reduction in power consumption can be achieved with the help of adaptive optimization schemes.
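
A minimal sketch of one adaptive scheme of this kind: the application chooses the longest packetization interval (fewer 802.11 transmissions, longer sleep, less energy) that the runtime quality model still accepts. The quality function and thresholds below are placeholders, not the model developed in the thesis:

```python
PTIME_OPTIONS_MS = [20, 40, 60]            # candidate packetization intervals (assumed)

def estimated_quality(ptime_ms, loss_pct, delay_ms):
    """Placeholder runtime quality score (higher is better); a real system
    would use an E-model-style rating computed from measured loss and delay."""
    return 100.0 - 2.0 * loss_pct - 0.05 * (delay_ms + ptime_ms)

def choose_ptime(loss_pct, delay_ms, min_quality=80.0):
    """Pick the longest (most energy-efficient) interval that still keeps
    the estimated call quality above the target."""
    for ptime in sorted(PTIME_OPTIONS_MS, reverse=True):
        if estimated_quality(ptime, loss_pct, delay_ms) >= min_quality:
            return ptime
    return min(PTIME_OPTIONS_MS)           # quality first when the link is bad
```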

Relevance:

20.00%

Abstract:

Rosin is a natural product from pine forests and is used as a raw material in resinate syntheses. Resinates are polyvalent metal salts of rosin acids; Ca and Ca/Mg resinates in particular find wide application in the printing ink industry. In this thesis, analytical methods were applied to increase general knowledge of resinate chemistry, and the reaction kinetics was studied in order to model the non-linear solution viscosity increase during resinate syntheses by the fusion method. Solution viscosity in toluene is an important quality factor for resinates to be used in printing inks. The concept of a critical resinate concentration, c_crit, was introduced to define an abrupt change in the dependence of viscosity on resinate concentration in the solution. The concept was then used to explain the non-linear solution viscosity increase during resinate syntheses. A semi-empirical model with two estimated parameters was derived for the viscosity increase on the basis of apparent reaction kinetics. The model was used to control the viscosity and to predict the total reaction time of the resinate process. The kinetic data from the complex reaction media was obtained by acid value titration and by FTIR spectroscopic analyses using a conventional calibration method to measure the resinate concentration and the concentration of free rosin acids. A multivariate calibration method was successfully applied to build partial least squares (PLS) models for monitoring acid value and solution viscosity in both the mid-infrared (MIR) and near-infrared (NIR) regions during the syntheses. The calibration models can be used for on-line resinate process monitoring. In the kinetic studies, two main reaction steps were observed during the syntheses: first, a fast irreversible resination reaction occurs at 235 °C, and then a slow thermal decarboxylation of rosin acids starts to take place at 265 °C. Rosin oil is formed during the decarboxylation reaction step, causing significant mass loss as the rosin oil evaporates from the system while the viscosity increases to the target level. The mass balance of the syntheses was determined based on the resinate concentration increase during the decarboxylation reaction step. A mechanistic study of the decarboxylation reaction was based on the observation that resinate molecules are partly solvated by rosin acids during the syntheses. Different decarboxylation mechanisms were proposed for the free and the solvating rosin acids. The deduced kinetic model supported the analytical data of the syntheses over a wide resinate concentration region, over a wide range of viscosity values and at different reaction temperatures. In addition, the application of the kinetic model to the modified resinate syntheses gave a good fit. A novel synthesis method, with the addition of decarboxylated rosin (i.e. rosin oil) to the reaction mixture, was introduced. The conversion of rosin acid to resinate was increased to the level necessary to obtain the target viscosity for the product at 235 °C. Because the reaction temperature is lower than in the traditional fusion synthesis at 265 °C, thermal decarboxylation is avoided. As a consequence, the mass yield of the resinate syntheses can be increased from ca. 70% to almost 100% by recycling the added rosin oil.
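
To illustrate how a two-parameter viscosity model can be fitted and then inverted to predict the total reaction time, here is a sketch with an assumed exponential form and hypothetical measurement data; the actual functional form of the thesis's semi-empirical model is not reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

def viscosity_model(c, a, b):
    """A generic two-parameter form for solution viscosity rising steeply with
    resinate concentration c; the form and parameters are assumptions."""
    return a * np.exp(b * c)

# Hypothetical (resinate concentration, viscosity) pairs from titration/FTIR monitoring.
c_data = np.array([0.30, 0.40, 0.50, 0.60, 0.70])
eta_data = np.array([0.05, 0.12, 0.35, 1.10, 3.60])

(a_fit, b_fit), _ = curve_fit(viscosity_model, c_data, eta_data, p0=(0.01, 8.0))

def predict_remaining_time(eta_target, c_now, dc_dt):
    """Invert the fitted model to estimate when the target viscosity is reached,
    assuming (for illustration) a constant rate of concentration increase dc_dt."""
    c_target = np.log(eta_target / a_fit) / b_fit
    return (c_target - c_now) / dc_dt
```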

Relevance:

20.00%

Abstract:

Sensor-based robot control allows manipulation in dynamic environments with uncertainties. Vision is a versatile, low-cost sensory modality, but a low sample rate, high sensor delay and uncertain measurements limit its usability, especially in strongly dynamic environments. Force is a complementary sensory modality allowing accurate measurement of local object shape when a tooltip is in contact with the object. In multimodal sensor fusion, several sensors measuring different modalities are combined to give a more accurate estimate of the environment. As force and vision are fundamentally different sensory modalities that do not share a common representation, combining the information from these sensors is not straightforward. In this thesis, methods for fusing proprioception, force and vision together are proposed. By making assumptions about object shape and modeling the uncertainties of the sensors, the measurements can be fused together in an extended Kalman filter. The fusion of force and visual measurements makes it possible to estimate the pose of a moving target with an end-effector-mounted moving camera at a high rate and with high accuracy. The proposed approach takes the latency of the vision system into account explicitly in order to provide high-sample-rate estimates. The estimates also allow a smooth transition from vision-based motion control to force control. The velocity of the end-effector can be controlled by estimating the distance to the target by vision and determining a velocity profile that gives a rapid approach with minimal force overshoot. Experiments with a 5-degree-of-freedom parallel hydraulic manipulator and a 6-degree-of-freedom serial manipulator show that the integration of several sensor modalities can increase the accuracy of the measurements significantly.
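
A one-dimensional, linear illustration of the multirate fusion idea, with invented noise values: a high-rate prediction step driven by proprioception (the commanded velocity), a low-rate noisy vision update, and a precise force-based update available only in contact. The thesis's filter is an extended Kalman filter over the full pose; this sketch only shows the weighting mechanics:

```python
class Fusion1D:
    """1-D linear Kalman filter fusing proprioception, vision and force."""

    def __init__(self):
        self.x, self.P = 1.0, 1.0      # distance-to-target estimate and its variance

    def predict(self, v, dt, q=1e-4):
        """Propagate with the commanded approach velocity; q is process noise."""
        self.x -= v * dt
        self.P += q

    def update(self, z, r):
        """Standard Kalman update; r is the measurement variance."""
        k = self.P / (self.P + r)      # gain weights measurement against prediction
        self.x += k * (z - self.x)
        self.P *= 1.0 - k

f = Fusion1D()
for step in range(1, 101):
    f.predict(v=0.01, dt=0.001)        # proprioception at every control cycle
    if step % 40 == 0:
        f.update(z=0.95, r=1e-2)       # vision: low rate, high variance
f.update(z=0.998, r=1e-5)              # force on contact: precise local measurement
```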

Relevance:

20.00%

Abstract:

Forest inventories are used to estimate forest characteristics and the condition of forests for many different applications: operational tree logging for the forest industry, forest health estimation, carbon balance estimation, land-cover and land-use analysis for avoiding forest degradation, and so on. Recent inventory methods are strongly based on remote sensing data combined with field sample measurements, which are used to produce estimates covering the whole area of interest. Remote sensing data from satellites, aerial photographs or aerial laser scanning are used, depending on the scale of the inventory. To be applicable in operational use, forest inventory methods need to be easily adjustable to the local conditions of the study area at hand. All data handling and parameter tuning should be objective and automated as much as possible, and the methods need to be robust when applied to different forest types. Since there generally are no extensive direct physical models connecting the remote sensing data from different sources to the forest parameters being estimated, the mathematical estimation models are of the "black-box" type, connecting the independent auxiliary data to the dependent response data with arbitrary linear or nonlinear models. To avoid redundant complexity and over-fitting of the model, which is based on up to hundreds of possibly collinear variables extracted from the auxiliary data, variable selection is needed. To connect the auxiliary data to the inventory parameters being estimated, field work must be performed. In larger study areas with dense forests, field work is expensive and should therefore be minimized. To obtain cost-efficient inventories, field work could partly be replaced with information from previously measured sites stored in databases. The work in this thesis is devoted to the development of automated, adaptive computation methods for aerial forest inventory. The mathematical model parameter definition steps are automated, and cost-efficiency is improved by setting up a procedure that utilizes databases in the estimation of the characteristics of new areas.
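
As an example of the kind of automated variable selection such a pipeline needs, here is a sketch of greedy forward selection with cross-validated RMSE over an ordinary least squares model; this is one standard approach, not necessarily the method developed in the thesis:

```python
import numpy as np

def cv_rmse(X, y, folds, n_folds):
    """Cross-validated RMSE of an ordinary least squares fit with intercept."""
    errs = []
    for k in range(n_folds):
        tr, te = folds != k, folds == k
        Xtr = np.column_stack([np.ones(tr.sum()), X[tr]])
        beta, *_ = np.linalg.lstsq(Xtr, y[tr], rcond=None)
        Xte = np.column_stack([np.ones(te.sum()), X[te]])
        errs.append(np.mean((Xte @ beta - y[te]) ** 2))
    return float(np.sqrt(np.mean(errs)))

def forward_selection(X, y, max_vars=10, n_folds=5):
    """Greedily add the feature that most improves cross-validated RMSE;
    stop when no remaining candidate helps. This prunes redundant,
    collinear remote sensing features before the final model fit."""
    rng = np.random.default_rng(0)
    folds = rng.integers(0, n_folds, size=len(y))
    selected, best_rmse = [], np.inf
    while len(selected) < max_vars:
        scores = {j: cv_rmse(X[:, selected + [j]], y, folds, n_folds)
                  for j in range(X.shape[1]) if j not in selected}
        j_best = min(scores, key=scores.get)
        if scores[j_best] >= best_rmse:   # no candidate improves the model
            break
        selected.append(j_best)
        best_rmse = scores[j_best]
    return selected
```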