50 results for PARAMETERS CALIBRATION
Abstract:
Forest inventories are used to estimate forest characteristics and the condition of forests for many different applications: operational tree logging for the forest industry, forest health assessment, carbon balance estimation, land-cover and land-use analysis to avoid forest degradation, and so on. Recent inventory methods rely strongly on remote sensing data combined with field sample measurements, which are used to produce estimates covering the whole area of interest. Remote sensing data from satellites, aerial photographs or airborne laser scanning are used, depending on the scale of the inventory. To be applicable in operational use, forest inventory methods need to be easily adjusted to the local conditions of the study area at hand. All data handling and parameter tuning should be objective and automated as much as possible. The methods also need to be robust when applied to different forest types. Since there generally are no comprehensive direct physical models connecting the remote sensing data from different sources to the forest parameters being estimated, the mathematical estimation models are of "black-box" type, connecting the independent auxiliary data to the dependent response data with arbitrary linear or nonlinear models. To avoid redundant complexity and over-fitting of the model, which is based on up to hundreds of possibly collinear variables extracted from the auxiliary data, variable selection is needed. To connect the auxiliary data to the inventory parameters being estimated, field work must be performed. In larger study areas with dense forests, field work is expensive and should therefore be minimized. To obtain cost-efficient inventories, field work could partly be replaced with information from previously measured sites stored in databases. The work in this thesis is devoted to the development of automated, adaptive computation methods for aerial forest inventory. The parameter definition steps of the mathematical models are automated, and cost-efficiency is improved by setting up a procedure that utilizes databases in the estimation of the characteristics of a new area.
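As a hedged illustration of the variable-selection step described above, the sketch below uses L1-penalized (lasso) regression to prune a large set of collinear predictors; the abstract does not name the thesis's actual selection algorithm, and all data, dimensions and variable names here are invented for the example.

```python
# Illustrative sketch only: the abstract does not specify the selection
# algorithm, so L1-penalized (lasso) regression is used here as one common
# way to prune hundreds of possibly collinear remote-sensing features.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_plots, n_features = 200, 300               # field plots x extracted features
X = rng.normal(size=(n_plots, n_features))   # stand-in for image/LiDAR metrics
true_coef = np.zeros(n_features)
true_coef[:5] = [3.0, -2.0, 1.5, 1.0, -0.5]  # only a few features actually matter
y = X @ true_coef + rng.normal(scale=0.5, size=n_plots)  # e.g. stem volume per plot

model = make_pipeline(StandardScaler(), LassoCV(cv=5))
model.fit(X, y)
selected = np.flatnonzero(model[-1].coef_ != 0.0)
print(f"{selected.size} features kept out of {n_features}")
```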
Abstract:
The goal of this thesis was to develop a dimensioning tool for determining the plastic capacity of a boiler supporting header. The capacity of the header is traditionally determined with the finite element (FE) method during the project phase; with the dimensioning tool, the aim is to ensure sufficient capacity already in the proposal phase. The study began by analyzing the headers of ongoing projects with the FE method. For the analytical solution, a plain header was analyzed without the effects of branches or lugs, and the parameters of the analytical solution were calibrated against these results. In the analytical solution, the plastic capacity of the plastic hinges in the header was defined. The stresses caused by the internal pressure, as well as the normal and shear forces caused by the external loading, reduce the plastic moment. The final capacity was determined using the principle of virtual work. The weakening effect of the branches was taken into account by using pressure areas, and the punching shear capacity was also determined. The results from the FE analyses and the analytical solution agree with each other. The results from the analytical solution are conservative but sufficiently accurate considering the accuracy of the method used.
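For readers unfamiliar with the virtual-work step mentioned above, a generic form of the balance is sketched below; the specific hinge pattern, loads and reduction formulas used in the thesis are not given in the abstract, so the axial-force reduction shown is only one textbook example.

```latex
% Generic virtual-work balance for a plastic collapse mechanism; the
% thesis's exact hinge pattern and reduction factors are not stated in
% the abstract, so this is an illustrative textbook form only.
\begin{align}
  \sum_j F_j \,\delta_j &= \sum_i M_{\mathrm{p,red},i}\,\theta_i, \\
  M_{\mathrm{p,red}} &= M_{\mathrm{p}}
      \left[1 - \left(\frac{N}{N_{\mathrm{p}}}\right)^{2}\right]
      \quad \text{(e.g. axial-force reduction for a rectangular section),}
\end{align}
where $F_j$ are the external loads, $\delta_j$ the virtual displacements of
their points of application, $\theta_i$ the virtual rotations of the plastic
hinges, and $M_{\mathrm{p,red},i}$ the plastic moments reduced for internal
pressure and for the normal and shear forces.
```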
Abstract:
In this dissertation, active galactic nuclei (AGN) are discussed as they are seen with the high-resolution radio-astronomical technique called Very Long Baseline Interferometry (VLBI). This observational technique provides very high angular resolution (∼10⁻³ arcseconds = 1 milliarcsecond). VLBI observations performed at different radio frequencies (multi-frequency VLBI) allow us to penetrate deep into the core of an AGN and reveal the otherwise obscured inner part of the jet and the vicinity of the AGN's central engine. Multi-frequency VLBI data are used to scrutinize the structure and evolution of the jet, as well as the distribution of the polarized emission. These data can help to derive the properties of the plasma and the magnetic field, and to provide constraints on the jet composition and the parameters of the emission mechanisms. VLBI data can also be used to test possible physical processes in the jet by comparing observational results with numerical simulations. The work presented in this thesis contributes to different aspects of AGN physics, as well as to the methodology of VLBI data reduction. In particular, Paper I reports evidence that the optical and radio emission of AGN comes from the same region in the inner jet. This result was obtained via simultaneous observations of linear polarization in the optical and in the radio (with the VLBI technique) for a sample of AGN. Papers II and III describe in detail the jet kinematics of the blazar 0716+714, based on multi-frequency data, and reveal a peculiar kinematic pattern: plasma in the inner jet appears to move substantially faster than that in the large-scale jet. This peculiarity is explained by jet bending in Paper III. Paper III also presents a test of a new imaging technique for VLBI data, the Generalized Maximum Entropy Method (GMEM), with observed (not simulated) data, and compares its results with conventional imaging. Papers IV and V report the results of observations of circularly polarized (CP) emission in AGN at small spatial scales. In particular, Paper IV presents values of the core CP for 41 AGN at 15, 22 and 43 GHz, obtained with the standard gain transfer (GT) method, which was previously developed by D. Homan and J. Wardle for the calibration of multi-source VLBI observations. This method was developed for long multi-source observations, in which many AGN are observed in a single VLBI run. In contrast, in Paper V an attempt is made to apply the GT method to single-source VLBI observations. In such observations the object list includes only a few sources, a target source and two or three calibrators, and the run is much shorter than a multi-source experiment. For the CP calibration of a single-source observation, it is necessary to have a source with zero or known CP as one of the calibrators. If archival observations included such a source in the list of calibrators, the GT method could also be applied to the archival data, extending the list of AGN with known CP at small spatial scales. Paper V also contains a calculation of the contributions of different sources of error to the uncertainty of the final result, and presents the first results for the blazar 0716+714.
Abstract:
The main focus of this thesis is to define the field-weakening point of a permanent magnet synchronous machine with embedded magnets in traction applications. As part of the thesis, a modelling program was written to help the designer define the field-weakening point in practical applications. The thesis utilizes equations based on the current angle, which can be derived from the vector diagram of the permanent magnet synchronous machine. The design parameters of the machine are the maximum rotational speed, the saliency ratio, the maximum induced voltage and the characteristic current. The main result of the thesis is the determination of the rated rotational speed at which field weakening starts. The behaviour of the machine is evaluated over a wide speed range, and the changes in the machine parameters are examined.
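A minimal sketch of how the field-weakening (rated) speed can be located from the voltage limit, assuming a standard steady-state dq model with stator resistance neglected; all symbols and numerical values are illustrative and not taken from the thesis.

```python
# Sketch under standard assumptions (steady state, stator resistance neglected):
# the field-weakening point is the speed at which the voltage limit is reached
# with the chosen current angle. All parameter values are illustrative.
import numpy as np

psi_pm = 0.11                 # permanent-magnet flux linkage [Vs]
L_d, L_q = 1.0e-3, 2.5e-3     # d- and q-axis inductances [H] (saliency ratio 2.5)
i_s = 150.0                   # stator current amplitude [A]
u_max = 230.0                 # maximum phase voltage amplitude [V]
gamma = np.radians(110.0)     # current angle measured from the d-axis

i_d, i_q = i_s * np.cos(gamma), i_s * np.sin(gamma)
psi_d = psi_pm + L_d * i_d    # d-axis flux linkage
psi_q = L_q * i_q             # q-axis flux linkage
psi_s = np.hypot(psi_d, psi_q)           # stator flux linkage amplitude

omega_base = u_max / psi_s               # electrical speed at the voltage limit
print(f"field weakening starts at {omega_base / (2 * np.pi):.1f} Hz (electrical)")
```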
Abstract:
In this thesis, a classification problem in predicting the creditworthiness of a customer is tackled. This is done by proposing a reliable classification procedure on a given data set. The aim of this thesis is to design a model that gives the best classification accuracy to effectively predict bankruptcy. The FRPCA techniques proposed by Yang and Wang have been preferred since they are tolerant to certain types of noise in the data. These include FRPCA1, FRPCA2 and FRPCA3, from which the best method is chosen. Two different approaches are used at the classification stage: a similarity classifier and an FKNN classifier. The algorithms are tested with the Australian credit card screening data set. The results obtained indicate a mean classification accuracy of 83.22% using FRPCA1 with the similarity classifier. The FKNN approach yields a mean classification accuracy of 85.93% when used with FRPCA2, making it the better method for suitable choices of the number of nearest neighbors and the fuzziness parameters. Details on the calibration of the fuzziness parameter and the other parameters associated with the similarity classifier are discussed.
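The sketch below illustrates the classification stage with a simple fuzzy k-nearest-neighbour (FKNN) rule after a PCA projection; ordinary PCA stands in for the FRPCA1-FRPCA3 variants of Yang and Wang, and synthetic data stand in for the Australian credit card screening set, so the printed accuracy is not comparable to the figures quoted above.

```python
# Sketch only: ordinary PCA stands in for the FRPCA variants of Yang and Wang,
# and synthetic data stand in for the Australian credit card screening set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

def fknn_predict(X_tr, y_tr, X_te, k=5, m=2.0):
    """Fuzzy k-NN: vote for each class with inverse-distance fuzzy weights."""
    classes = np.unique(y_tr)
    preds = []
    for x in X_te:
        d = np.linalg.norm(X_tr - x, axis=1) + 1e-12   # avoid division by zero
        idx = np.argsort(d)[:k]                        # k nearest training points
        w = 1.0 / d[idx] ** (2.0 / (m - 1.0))          # fuzzy membership weights
        scores = [w[y_tr[idx] == c].sum() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

X, y = make_classification(n_samples=690, n_features=14, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pca = PCA(n_components=8).fit(X_tr)
y_hat = fknn_predict(pca.transform(X_tr), y_tr, pca.transform(X_te), k=7, m=2.0)
print("accuracy:", (y_hat == y_te).mean())
```

The fuzziness parameter m and the number of neighbors k play the roles of the tuning parameters whose calibration the abstract mentions.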
Abstract:
To obtain the desired accuracy of a robot, two techniques are available. The first option would be to make the robot match the nominal mathematical model. In other words, the manufacturing and assembly tolerances of every part would be extremely tight so that all of the various parameters match the "design" or "nominal" values as closely as possible. This method can satisfy most accuracy requirements, but the cost increases dramatically as the accuracy requirement increases. Alternatively, a more cost-effective solution is to build a manipulator with relaxed manufacturing and assembly tolerances. By modifying the mathematical model in the controller, the actual errors of the robot can be compensated. This is the essence of robot calibration. Simply put, robot calibration is the process of defining an appropriate error model and then identifying the various parameter errors that make the error model match the robot as closely as possible. This work focuses on the kinematic calibration of a 10-degree-of-freedom (DOF) redundant serial-parallel hybrid robot. The robot consists of a 4-DOF serial mechanism and a 6-DOF hexapod parallel manipulator. The redundant 4-DOF serial structure is used to enlarge the workspace, and the 6-DOF hexapod manipulator is used to provide high load capability and stiffness for the whole structure. The main objective of the study is to develop a suitable calibration method to improve the accuracy of the redundant serial-parallel hybrid robot. To this end, a Denavit–Hartenberg (DH) hybrid error model and a Product-of-Exponentials (POE) error model are developed for error modeling of the proposed robot. Furthermore, two kinds of global optimization methods, the differential evolution (DE) algorithm and the Markov chain Monte Carlo (MCMC) algorithm, are employed to identify the parameter errors of the derived error model. A measurement method based on a 3-2-1 wire-based pose estimation system is proposed and implemented in a Solidworks environment to simulate real experimental validation. Numerical simulations and Solidworks prototype-model validations are carried out on the hybrid robot to verify the effectiveness, accuracy and robustness of the calibration algorithms.
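As a hedged illustration of the identification step, the toy example below recovers link-length errors of a planar two-link arm from simulated pose measurements using differential evolution; the thesis's 10-DOF hybrid robot, its DH/POE error models and the 3-2-1 wire-based measurement system are not reproduced here.

```python
# Toy sketch of error-parameter identification: a planar two-link arm with
# unknown link-length errors stands in for the thesis's 10-DOF hybrid robot
# and its DH/POE error models; the identification idea is the same.
import numpy as np
from scipy.optimize import differential_evolution

L_nom = np.array([0.50, 0.40])         # nominal link lengths [m]
dL_true = np.array([0.004, -0.003])    # "real" manufacturing errors [m]

def fk(lengths, q):
    """Forward kinematics of the planar 2R arm for joint angles q (N x 2)."""
    x = lengths[0] * np.cos(q[:, 0]) + lengths[1] * np.cos(q[:, 0] + q[:, 1])
    y = lengths[0] * np.sin(q[:, 0]) + lengths[1] * np.sin(q[:, 0] + q[:, 1])
    return np.column_stack([x, y])

rng = np.random.default_rng(1)
q_meas = rng.uniform(-np.pi / 2, np.pi / 2, size=(30, 2))   # calibration poses
p_meas = fk(L_nom + dL_true, q_meas) + rng.normal(scale=5e-5, size=(30, 2))

def cost(dL):
    """Sum of squared position residuals between the error model and measurements."""
    return np.sum((fk(L_nom + dL, q_meas) - p_meas) ** 2)

res = differential_evolution(cost, bounds=[(-0.01, 0.01)] * 2, seed=1, tol=1e-12)
print("identified parameter errors [m]:", res.x)
```

An MCMC-based identification would replace the optimizer with a sampler over the same residual-based likelihood, which additionally yields uncertainty estimates for the identified errors.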
Abstract:
The Arctic region is becoming a very active area of industrial development, since it may contain approximately 15-25% of the hydrocarbon and other valuable natural resources that are in great demand nowadays. Harsh operating conditions, with temperatures that can drop below -50 °C in winter and various additional loads, make the Arctic region difficult to access. As a result, newer and modified metallic materials are being introduced, which can be problematic to weld properly. Steel is still the most widely used material in Arctic regions because of its high mechanical properties, low cost and manufacturability. Moreover, with recent developments in steel manufacturing it is possible to produce microalloyed high-strength steel with a yield strength of up to 1100 MPa that can be operated at temperatures down to -60 °C while retaining reasonable weldability, ductility and suitable impact toughness, which is the most crucial property for Arctic usability. For many years, arc welding was the dominant joining method for metallic materials. Recently, other joining methods have been successfully introduced into welding manufacturing because of growing industrial demands, and one of them is laser-arc hybrid welding. Laser-arc hybrid welding combines the advantages and eliminates the disadvantages of both joining methods: it produces less distortion, reduces the need for edge preparation, generates a narrower heat-affected zone and increases welding speed and productivity significantly. Moreover, because a filler wire is easily added, the mechanical properties of the joints can be tailored to produce suitable quality. With laser-arc hybrid welding it is also possible to achieve weld metal matching the base material, even with low-alloy welding wires, without excessive softening of the HAZ in high-strength steels. As a result, laser-arc hybrid welding can become the most desired and dominant welding technology; it is already used in the automotive and shipbuilding industries with great success, and in the future it can be extended to the offshore, pipe-laying and heavy-equipment industries for Arctic environments. CO2 and Nd:YAG laser sources in combination with a gas metal arc source have been used widely in the past two decades. Recently, fiber laser sources have offered high power output with excellent beam quality, very high electrical efficiency, low maintenance expense and higher mobility thanks to fiber optics. As a result, the fiber laser-arc hybrid process offers even more extensive advantages and applications. However, the information available on fiber or disk laser-arc hybrid welding is very limited. The objectives of this Master's thesis concentrate on the study of fiber laser-MAG hybrid welding parameters in order to understand the resulting mechanical properties and quality of the welds. In this work only ferrous materials are reviewed. A qualitative methodological approach has been used to achieve the objectives. This study demonstrates that laser-arc hybrid welding is suitable for welding many types, thicknesses and strengths of steel with acceptable mechanical properties and very high productivity. New developments in the fiber laser-arc hybrid process offer extended capabilities over CO2 lasers combined with the arc. This work can be used as a guideline to hybrid welding technology, with a comprehensive study of the effect of welding parameters on joint quality.
Abstract:
Tool center point calibration is a known problem in industrial robotics. The major focus of academic research is to enhance the accuracy and repeatability of next-generation robots. However, operators of currently available robots work within the limits of the robot's repeatability and require calibration methods suitable for these basic applications. This study was conducted in association with Stresstech Oy, which provides solutions for manufacturing quality control. Their sensor, based on the Barkhausen noise effect, requires accurate positioning, and this accuracy requirement leads to a tool center point calibration problem when measurements are executed with an industrial robot. Multiple options for automatic tool center point calibration are available on the market: manufacturers provide customized calibrators for most robot types and tools. With the handmade sensors and multiple robot types that Stresstech uses, this would require a great deal of labor. This thesis introduces a calibration method that is suitable for all robots which have two free digital input ports. It builds on the traditional method of using a light barrier to detect the tool in the robot coordinate system; however, this method utilizes two parallel light barriers to simultaneously measure and detect the center axis of the tool. Rotations about two axes are defined with the center axis, and the last rotation, about the Z-axis, is calculated for tools that have different widths along the X- and Y-axes. The results indicate that this method is suitable for calibrating the geometric tool center point of a Barkhausen noise sensor. In the repeatability tests, a standard deviation within the robot's repeatability was obtained. The Barkhausen noise signal was also evaluated after recalibration, and the results indicate correct calibration. However, future studies should be conducted using a more accurate manipulator, since the method employs the robot itself as a measuring device.
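For contrast with the two-light-barrier method developed in the thesis, the sketch below shows the classic multi-pose tool center point identification, in which a fixed point is approached from several orientations and the tool offset is solved by least squares; all poses and noise levels are invented for the example.

```python
# Hedged sketch of the classic multi-pose TCP identification (touching one
# fixed point from several orientations); this is NOT the two-light-barrier
# method of the thesis, only a common baseline shown for comparison.
# For flange poses (R_i, p_i) touching the same point: R_i t + p_i = p_ref.
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(2)
t_true = np.array([0.010, -0.005, 0.120])   # unknown tool offset in flange frame [m]
p_ref = np.array([0.80, 0.10, 0.40])        # fixed point in robot base frame [m]

R_list = Rotation.from_euler("xyz", rng.uniform(-np.pi, np.pi, (6, 3))).as_matrix()
p_list = [p_ref - R @ t_true + rng.normal(scale=2e-5, size=3) for R in R_list]

# Stack the linear system [R_i  -I] [t; p_ref] = -p_i and solve in one shot.
A = np.vstack([np.hstack([R, -np.eye(3)]) for R in R_list])
b = np.concatenate([-p for p in p_list])
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated tool offset [m]:", sol[:3])
```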
Abstract:
The Sun is a crucial benchmark for how we see the universe. Especially in the visible range of the spectrum, stars are commonly compared to the Sun, as it is the most thoroughly studied star. In this work I have focused on two aspects of the Sun and how it is used in modern astronomy. Firstly, I try to answer the question of how similar to the Sun another star can be. Given the limits of observations, we call a solar twin a star that has the same observed parameters as the Sun within the errors. These stars can be used as stand-in suns when doing observations, as normal night-time telescopes are not built to be pointed at the Sun. There have been many searches for these twins, and every one of them provided not only information on how close to the Sun another star can be, but also helped us to understand the Sun itself. In my work I have selected ∼300 stars that are both photometrically and spectroscopically close to the Sun and found 22 solar twins, of which 17 were previously unknown and can therefore help the emerging picture of solar twins. In my second research project I have used my full sample of 300 solar analogue stars to check the temperature and metallicity scale of stellar catalogue calibrations. My photometric sample was originally drawn from the Geneva-Copenhagen Survey (Nordström et al. 2004; Holmberg et al. 2007, 2009), for which two alternative calibrations exist, i.e. GCS-III (Holmberg et al. 2009) and C11 (Casagrande et al. 2011). I used very high resolution spectra of solar analogues and a new approach to test the two calibrations. I found a zero-point shift of the order of +75 K and +0.10 dex in effective temperature and metallicity, respectively, in the GCS-III, and therefore favour the C11 calibration, which found similar offsets. I then performed a spectroscopic analysis of the stars to derive effective temperatures and metallicities, and verified that they are well centred around the solar values.
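A minimal sketch of the kind of zero-point comparison described above: median offsets between spectroscopic and catalogue values for a sample of solar analogues. The numbers below are synthetic and merely mimic the quoted +75 K and +0.10 dex shifts; they are not the thesis data.

```python
# Illustrative sketch of a zero-point check: compare spectroscopic values with
# a catalogue calibration for a set of solar analogues and report the median
# offsets. All numbers are synthetic, not the thesis sample.
import numpy as np

rng = np.random.default_rng(3)
n = 300
teff_spec = rng.normal(5777.0, 60.0, n)      # spectroscopic Teff [K]
feh_spec = rng.normal(0.0, 0.05, n)          # spectroscopic [Fe/H] [dex]

# Catalogue values with an assumed zero-point offset plus random scatter
teff_cat = teff_spec - 75.0 + rng.normal(0.0, 40.0, n)
feh_cat = feh_spec - 0.10 + rng.normal(0.0, 0.04, n)

dT = np.median(teff_spec - teff_cat)
dFe = np.median(feh_spec - feh_cat)
print(f"zero-point offsets: {dT:+.0f} K, {dFe:+.2f} dex")
```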
Abstract:
In this work, the feasibility of floating-gate technology for analog computing platforms in a scaled-down general-purpose CMOS technology is considered. When the technology is scaled down, the performance of analog circuits tends to get worse because the process parameters are optimized for digital transistors and the scaling involves the reduction of supply voltages. Generally, the challenge in analog circuit design is that all salient design metrics, such as power, area, bandwidth and accuracy, are interrelated. Furthermore, poor flexibility, i.e. lack of reconfigurability, reuse of IP etc., can be considered the most severe weakness of analog hardware. On this account, digital calibration schemes are often required for improved performance or yield enhancement, whereas high flexibility/reconfigurability cannot easily be achieved. Here, it is discussed whether it is possible to work around these obstacles by using floating-gate transistors (FGTs), and the problems associated with their practical implementation are analyzed. FGT technology is attractive because it is electrically programmable and also features a charge-based built-in non-volatile memory. Apart from being ideal for canceling circuit non-idealities due to process variations, FGTs can also be used as computational or adaptive elements in analog circuits. The nominal gate oxide thickness in deep sub-micron (DSM) processes is too thin to support robust charge retention, and consequently the FGT becomes leaky. In principle, non-leaky FGTs can be implemented in a scaled-down process without any special masks by using "double"-oxide transistors, intended to provide devices that operate with higher supply voltages than general-purpose devices. However, in practice the technology scaling poses several challenges, which are addressed in this thesis. To provide a sufficiently wide-ranging survey, six prototype chips with varying complexity were implemented in four different DSM process nodes and investigated from this perspective. The focus is on non-leaky FGTs, but the presented autozeroing floating-gate amplifier (AFGA) demonstrates that leaky FGTs may also find a use. The simplest test structures contain only a few transistors, whereas the most complex experimental chip is an implementation of a spiking neural network (SNN) comprising thousands of active and passive devices. More precisely, it is a fully connected (256 FGT synapses) two-layer spiking neural network in which the adaptive properties of the FGT are taken advantage of. A compact realization of spike-timing-dependent plasticity (STDP) within the SNN is one of the key contributions of this thesis. Finally, the considerations in this thesis extend beyond CMOS to emerging nanodevices. To this end, one promising emerging nanoscale circuit element, the memristor, is reviewed and its applicability to analog processing is considered. Furthermore, it is discussed how FGT technology can be used to prototype computation paradigms compatible with these emerging two-terminal nanoscale devices in a mature and widely available CMOS technology.
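Since the abstract highlights a compact STDP realization, the sketch below shows the standard pair-based STDP update rule that such circuits approximate; the amplitudes and time constants are illustrative, and the analog FGT implementation itself is not modelled here.

```python
# Minimal sketch of the standard pair-based STDP rule that hardware synapses
# (such as the FGT synapses mentioned above) approximate; constants are
# illustrative and the thesis's analog realization is not reproduced here.
import numpy as np

A_plus, A_minus = 0.01, 0.012        # potentiation / depression amplitudes
tau_plus, tau_minus = 20e-3, 20e-3   # time constants [s]

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair, dt = t_post - t_pre."""
    dt = t_post - t_pre
    if dt >= 0:                                  # pre before post -> potentiate
        return A_plus * np.exp(-dt / tau_plus)
    return -A_minus * np.exp(dt / tau_minus)     # post before pre -> depress

w = 0.5                                          # initial synaptic weight
for t_pre, t_post in [(0.000, 0.005), (0.050, 0.048), (0.100, 0.130)]:
    w = np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0)   # keep weight bounded
print("final synaptic weight:", round(w, 4))
```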
Abstract:
The aim of this work is to apply approximate Bayesian computation in combination with Markov chain Monte Carlo methods in order to estimate the parameters of tuberculosis transmission. The methods are applied to the San Francisco data, and the results are compared with the outcomes of previous works. Moreover, a methodological idea aimed at reducing computational time is described. Although this approach is shown to work appropriately, further analysis is needed to understand and test its behaviour in different cases. Some suggestions for its further enhancement are described in the corresponding chapter.
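A minimal sketch of the ABC-MCMC idea referred to above: propose a parameter, simulate data under the model, and accept the proposal only if a summary statistic of the simulated data falls within a tolerance of the observed one. A toy Poisson model stands in for the tuberculosis transmission model, and all numbers are illustrative.

```python
# Sketch of ABC-MCMC with a flat prior and a symmetric random-walk proposal,
# so the Metropolis ratio reduces to the tolerance check (plus prior support).
# A toy Poisson "cluster size" model stands in for the TB transmission model.
import numpy as np

rng = np.random.default_rng(4)
obs_mean = 3.2                       # observed summary statistic (illustrative)
eps = 0.1                            # ABC tolerance
n_iter, n_sim = 5000, 200

def simulate_summary(theta):
    """Simulate data under the toy model and return its summary statistic."""
    return rng.poisson(theta, size=n_sim).mean()

theta = 1.0                          # initial value of the parameter
chain = []
for _ in range(n_iter):
    prop = theta + rng.normal(scale=0.2)        # symmetric random-walk proposal
    if prop > 0 and abs(simulate_summary(prop) - obs_mean) <= eps:
        theta = prop                            # accept: within tolerance and prior support
    chain.append(theta)

burned = np.array(chain[1000:])                 # discard burn-in
print(f"posterior mean ~ {burned.mean():.2f}, 95% interval ~ "
      f"({np.percentile(burned, 2.5):.2f}, {np.percentile(burned, 97.5):.2f})")
```

The computational cost is dominated by the repeated simulation step, which is exactly the part that the time-saving idea mentioned in the abstract would target.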