811 results for Algorithm Calibration


Relevance: 20.00%

Publisher:

Abstract:

The present study shows the development, simulation and actual implementation of a closed-loop controller based on fuzzy logic that is able to regulate and standardize the mass flow of a helical fertilizer applicator. The control algorithm was developed using MATLAB's Fuzzy Logic Toolbox. Both open- and closed-loop simulations of the controller were performed in MATLAB's Simulink environment. The instantaneous deviation of the mass flow from the set point (SP), its derivative, and the equipment's translation velocity and acceleration were used as input signals for the controller, while the controller's output signal drove the voltage of the applicator's DC electric motor (DCEM). Calibration and validation of the rules and membership functions of the fuzzy logic were accomplished in the computer simulation phase, taking into account the system's response to SP changes. The mass flow variation coefficient, measured in experimental tests, ranged from 6.32% to 13.18%. The steady-state error fell between -0.72 and 0.13 g s⁻¹ and the recorded average rise time of the system was 0.38 s. The implemented controller was able both to damp the oscillations in mass flow that are characteristic of helical fertilizer applicators and to respond effectively to SP variations.
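The rule-based structure of such a controller can be illustrated with a minimal zero-order Sugeno-style sketch in Python. This is not the authors' MATLAB/Simulink implementation: the membership ranges, rule table and output levels below are invented for illustration, and the velocity/acceleration inputs are omitted.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Linguistic terms over a normalized universe (ranges are assumptions).
TERMS = {"NEG": (-2.0, -1.0, 0.0), "ZER": (-1.0, 0.0, 1.0), "POS": (0.0, 1.0, 2.0)}

# Rule table: (error term, error-derivative term) -> crisp voltage change.
RULES = {
    ("NEG", "NEG"): -1.0, ("NEG", "ZER"): -0.5, ("NEG", "POS"): 0.0,
    ("ZER", "NEG"): -0.5, ("ZER", "ZER"): 0.0,  ("ZER", "POS"): 0.5,
    ("POS", "NEG"): 0.0,  ("POS", "ZER"): 0.5,  ("POS", "POS"): 1.0,
}

def fuzzy_step(error, d_error):
    """One control step: min for AND, weighted average for defuzzification."""
    num = den = 0.0
    for (et, dt), out in RULES.items():
        w = min(tri(error, *TERMS[et]), tri(d_error, *TERMS[dt]))
        num += w * out
        den += w
    return num / den if den else 0.0
```

With error defined as SP minus measured flow, a positive error (flow too low) yields a positive voltage correction, which is the qualitative behaviour the abstract describes.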


To obtain the desired accuracy of a robot, two techniques are available. The first is to make the robot match the nominal mathematical model: the manufacturing and assembly tolerances of every part are made extremely tight so that all of the various parameters match the "design" or "nominal" values as closely as possible. This method can satisfy most accuracy requirements, but the cost increases dramatically as the accuracy requirement rises. A more cost-effective alternative is to build a manipulator with relaxed manufacturing and assembly tolerances and compensate for the actual errors of the robot by modifying the mathematical model in the controller. This is the essence of robot calibration. Simply put, robot calibration is the process of defining an appropriate error model and then identifying the various parameter errors that make the error model match the robot as closely as possible. This work focuses on the kinematic calibration of a 10-degree-of-freedom (DOF) redundant serial-parallel hybrid robot, consisting of a 4-DOF serial mechanism and a 6-DOF hexapod parallel manipulator. The redundant 4-DOF serial structure enlarges the workspace, while the 6-DOF hexapod manipulator provides high load capacity and stiffness for the whole structure. The main objective of the study is to develop a suitable calibration method to improve the accuracy of this redundant serial-parallel hybrid robot. To this end, a Denavit-Hartenberg (DH) hybrid error model and a Product-of-Exponentials (POE) error model are developed for error modeling of the proposed robot. Furthermore, two global optimization methods, the differential evolution (DE) algorithm and the Markov chain Monte Carlo (MCMC) algorithm, are employed to identify the parameter errors of the derived error models.
A measurement method based on a 3-2-1 wire-based pose estimation system is proposed and implemented in a SolidWorks environment to simulate the real experimental validation. Numerical simulations and SolidWorks prototype-model validations are carried out on the hybrid robot to verify the effectiveness, accuracy and robustness of the calibration algorithms.
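To make the DE-based identification step concrete, here is a minimal DE/rand/1/bin sketch applied to a toy error model (a pure translational offset of the tool position). The thesis identifies the parameters of full DH/POE error models; the cost function, bounds and "measured" data below are invented stand-ins.

```python
import random

def de_minimize(cost, bounds, pop_size=20, f_weight=0.7, cr=0.9, gens=200, seed=1):
    """Minimal DE/rand/1/bin minimizer (illustration, not a tuned library)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [cost(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < cr or j == j_rand:
                    v = pop[a][j] + f_weight * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)       # clamp to bounds
                else:
                    v = pop[i][j]
                trial.append(v)
            tc = cost(trial)
            if tc <= costs[i]:                    # greedy selection
                pop[i], costs[i] = trial, tc
    best = min(range(pop_size), key=costs.__getitem__)
    return pop[best], costs[best]

# Toy error model: measured tool positions = nominal + unknown offset (dx, dy).
true_err = (0.12, -0.05)
nominal = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
measured = [(x + true_err[0], y + true_err[1]) for x, y in nominal]

def cost(p):
    dx, dy = p
    return sum((x + dx - mx) ** 2 + (y + dy - my) ** 2
               for (x, y), (mx, my) in zip(nominal, measured))

params, residual = de_minimize(cost, [(-1.0, 1.0), (-1.0, 1.0)])
```

In the real calibration problem the cost would be the pose residual of the DH or POE error model over all measured configurations, with one bound pair per kinematic error parameter.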


This work describes techniques for modeling, optimizing and simulating calibration processes of robots using off-line programming. The identification of the geometric parameters of the nominal kinematic model is optimized using numerical optimization of the mathematical model. The actual robot and the measurement system are simulated by introducing random errors representing their physical behavior and statistical repeatability. An evaluation of the corrected nominal kinematic model gives a clear picture of the influence of the distinct variables involved in the process, supporting suitable planning, and indicates a considerable accuracy improvement when the optimized model is compared to the non-optimized one.
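The simulate-then-identify loop can be sketched on a toy model: a planar two-link arm whose link lengths deviate slightly from nominal, simulated measurements with random noise, and identification by least squares. The document's process works with a full kinematic model and a real measurement system; the arm, noise level and solver below are invented for illustration (the two-link position happens to be linear in the link lengths, so the normal equations can be solved by hand).

```python
import math, random

random.seed(0)
l1_true, l2_true = 0.5012, 0.2991           # "actual" lengths (nominal 0.5, 0.3)
angles = [(0.1 * i, 0.07 * i + 0.3) for i in range(20)]   # joint configs (rad)

# Simulated noisy end-effector measurements (0.1 mm noise std, an assumption).
meas = []
for t1, t2 in angles:
    x = l1_true * math.cos(t1) + l2_true * math.cos(t1 + t2)
    y = l1_true * math.sin(t1) + l2_true * math.sin(t1 + t2)
    meas.append((x + random.gauss(0, 1e-4), y + random.gauss(0, 1e-4)))

# Rows: [cos t1, cos(t1+t2)] . [l1, l2]^T = x, and likewise with sin for y.
A, b = [], []
for (t1, t2), (mx, my) in zip(angles, meas):
    A.append((math.cos(t1), math.cos(t1 + t2))); b.append(mx)
    A.append((math.sin(t1), math.sin(t1 + t2))); b.append(my)

# Normal equations A^T A p = A^T b for the 2-parameter case, solved directly.
s11 = sum(r[0] * r[0] for r in A); s12 = sum(r[0] * r[1] for r in A)
s22 = sum(r[1] * r[1] for r in A)
c1 = sum(r[0] * v for r, v in zip(A, b)); c2 = sum(r[1] * v for r, v in zip(A, b))
det = s11 * s22 - s12 * s12
l1_est = (s22 * c1 - s12 * c2) / det
l2_est = (s11 * c2 - s12 * c1) / det
```

Repeating such runs over many random error draws is what lets the kind of study described above quantify how measurement repeatability propagates into the corrected model.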


The determination of the intersection curve between Bézier surfaces may be seen as the composition of two separate problems: determining initial points and tracing the intersection curve from these points. A Bézier surface is represented by a parametric function (a polynomial in two variables) that maps the two-dimensional parametric space into three-dimensional space. In this article, an algorithm is proposed to determine the initial points of the intersection curve of Bézier surfaces, based on solving polynomial systems with the Projected Polyhedral Method, followed by a method for tracing the intersection curves (a Marching Method with differential equations). To allow the use of the Projected Polyhedral Method, the equations of the system must be expressed in the Bernstein basis; toward this goal, a robust and reliable algorithm is proposed to exactly convert a multivariate polynomial from the power basis to the Bernstein basis.
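The univariate core of the power-to-Bernstein conversion is the classical identity b_j = Σ_{i≤j} C(j,i)/C(n,i)·a_i for p(t) = Σ a_i t^i; the multivariate case applies it variable by variable in tensor-product form. Below is a small sketch of the univariate conversion (not the article's exact algorithm), checked by evaluating the result with de Casteljau's scheme.

```python
from math import comb

def power_to_bernstein(a):
    """Convert p(t) = sum a[i] t^i to Bernstein coefficients b[j] with
    p(t) = sum b[j] C(n,j) (1-t)^(n-j) t^j, via b[j] = sum_{i<=j} C(j,i)/C(n,i) a[i]."""
    n = len(a) - 1
    return [sum(comb(j, i) * a[i] / comb(n, i) for i in range(j + 1))
            for j in range(n + 1)]

def de_casteljau(b, t):
    """Evaluate a Bernstein-form polynomial (used here as a sanity check)."""
    pts = list(b)
    for _ in range(1, len(pts)):
        pts = [(1 - t) * p + t * q for p, q in zip(pts, pts[1:])]
    return pts[0]

# Example: p(t) = (1 + t)^2 = 1 + 2t + t^2 has Bernstein coefficients [1, 2, 4].
b = power_to_bernstein([1.0, 2.0, 1.0])
```

Note that b[0] = p(0) and b[n] = p(1), a quick consistency check on any conversion of this kind.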


In this paper we present an algorithm for the numerical simulation of cavitation in the hydrodynamic lubrication of journal bearings. Although this physical process is usually modelled as a free boundary problem, we adopt the equivalent variational inequality formulation. We propose a two-level iterative algorithm in which the outer iteration is associated with the penalty method, used to transform the variational inequality into a variational equation, and the inner iteration is associated with the conjugate gradient method, used to solve the linear system generated by applying the finite element method to the variational equation. The inner part was implemented using the element-by-element strategy, which is easily parallelized. We analyse the behaviour of two physical parameters, discuss some numerical results, and analyse results related to the performance of a parallel implementation of the algorithm.
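The two-level structure (outer penalty iteration, inner conjugate gradient solve) can be sketched on a 1D obstacle-type toy problem: find u ≥ 0 minimizing ½uᵀAu − fᵀu, with A a 1D finite-element stiffness matrix. This is not the paper's Reynolds-equation system, and the assembled-matrix CG below replaces the paper's element-by-element strategy; the grid, load and penalty parameter are invented.

```python
import numpy as np

def cg(A, b, x0, tol=1e-10, maxit=500):
    """Plain conjugate gradient for symmetric positive definite A."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    if rs < tol:
        return x
    for _ in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

n = 50
h = 1.0 / (n + 1)
A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h                    # 1D stiffness matrix
f = np.where(np.linspace(0, 1, n) < 0.5, 1.0, -1.0) * h    # sign-changing load

u = np.zeros(n)
eps = 1e-6
for _ in range(30):                       # outer penalty iterations
    active = (u < 0).astype(float)        # penalize only where u violates u >= 0
    M = A + np.diag(active / eps)
    u_new = cg(M, f, u)                   # inner CG solve
    if np.linalg.norm(u_new - u) < 1e-8:
        u = u_new
        break
    u = u_new
```

The penalty term (1/eps on the active set) pushes the constrained region to within O(eps) of zero, while the unconstrained region keeps the usual elliptic solution, mimicking the cavitation/full-film split of the lubrication problem.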


One of the problems that slows the development of off-line programming is the low static and dynamic positioning accuracy of robots. Robot calibration improves positioning accuracy and can also be used as a diagnostic tool in robot production and maintenance. A large number of robot measurement systems are now commercially available, yet there is a dearth of systems that are portable, accurate and low-cost. This work presents a measurement system that can fill this gap in local calibration. The system consists of a single CCD camera with a wide-angle lens mounted on the robot tool flange, and uses space resection models, accounting for radial distortion, to measure the end-effector pose relative to a world coordinate system. Scale factors and the image center are obtained with innovative techniques that make use of a multiview approach. The target plate consists of a grid of white dots printed on black photographic paper and mounted on the sides of a 90-degree angle plate. Results show that the achieved average accuracy varies from 0.2 mm to 0.4 mm at target distances of 600 mm to 1000 mm, respectively, with different camera orientations.
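The radial-distortion correction that the space-resection step must account for is commonly written as a polynomial in the squared radius about the principal point. The sketch below is a generic model of that kind; the work's exact model and coefficients are not reproduced here.

```python
def undistort(xd, yd, cx, cy, k1, k2):
    """Map distorted image coordinates toward undistorted ones about the
    principal point (cx, cy): x_u = cx + (xd - cx) * (1 + k1 r^2 + k2 r^4)."""
    dx, dy = xd - cx, yd - cy
    r2 = dx * dx + dy * dy
    s = 1.0 + k1 * r2 + k2 * r2 * r2
    return cx + dx * s, cy + dy * s
```

With zero coefficients the mapping is the identity; a positive k1 moves points radially outward, which is why the grid-of-dots target lets the coefficients be estimated from straight-line or known-grid constraints.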


The formal calibration procedure for a phase fraction meter is based on registering the outputs resulting from imposed phase fractions at known flow regimes. This can be done straightforwardly under laboratory conditions, but rarely under industrial conditions, particularly for on-site applications. There is thus a clear need for calibration methods that are less restrictive regarding prior knowledge of the complete set of inlet conditions. A new procedure is proposed in this work for the on-site construction of the calibration curve from the total flowed mass of the homogeneously dispersed phase. The solution is obtained by minimizing a suitable error functional, assembled with data from redundant tests to handle the intrinsically ill-conditioned nature of the problem. Numerical simulations performed for increasing error levels demonstrate that acceptable calibration curves can be reconstructed even from total mass values measured with a precision of up to 2%. Consequently, the method can readily be applied, especially to on-site calibration problems in which classical procedures fail due to the impossibility of strictly controlling all of the input/output parameters.
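The structure of such an inverse problem can be sketched as follows: the calibration curve is parametrized (here, arbitrarily, as a quadratic), each redundant test contributes one total-mass equation that is linear in the curve coefficients, and a small Tikhonov term stands in for the regularizing effect the work obtains from redundancy. The curve, the test data and the regularization weight are all invented; the data here are noiseless so that exact recovery can be checked, whereas the work reconstructs acceptable curves even with total-mass errors up to the 2% level.

```python
import numpy as np

rng = np.random.default_rng(3)
c_true = np.array([0.05, 0.8, 0.1])   # invented "true" curve g(s) = c0 + c1 s + c2 s^2

# Each redundant test k records a series of meter outputs s; the only measured
# quantity is the total mass, a sum of g(s) over the series (units absorbed).
tests = [rng.uniform(0.2 + 0.05 * k, 0.8, size=100) for k in range(12)]
rows = np.array([[np.sum(s ** p) for p in range(3)] for s in tests])
m = rows @ c_true                      # synthetic total-mass measurements

# Tikhonov-regularized normal equations to tame the ill-conditioning.
lam = 1e-9
c_est = np.linalg.solve(rows.T @ rows + lam * np.eye(3), rows.T @ m)
```

The rows are nearly parallel (every test averages over a similar output range), which is exactly the ill-conditioning the abstract refers to; spreading the test conditions apart, as the varying lower bounds do here, is what makes the reconstruction stable.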


Tool center point calibration is a known problem in industrial robotics. The major focus of academic research is to enhance the accuracy and repeatability of next-generation robots, but operators of currently available robots work within the limits of the robot's repeatability and require calibration methods suited to these basic applications. This study was conducted in association with Stresstech Oy, which provides solutions for manufacturing quality control. Their sensor, based on the Barkhausen noise effect, requires accurate positioning; this accuracy requirement poses a tool center point calibration problem when measurements are executed with an industrial robot. Multiple options for automatic tool center point calibration are available on the market, and manufacturers provide customized calibrators for most robot types and tools. With the handmade sensors and multiple robot types that Stresstech uses, however, this would require a great deal of labor. This thesis introduces a calibration method suitable for any robot with two free digital input ports. It builds on the traditional method of using a light barrier to detect the tool in the robot coordinate system, but utilizes two parallel light barriers to simultaneously measure and detect the center axis of the tool. The rotations about two axes are defined by the center axis; the last rotation, about the Z-axis, is calculated for tools whose widths along the X- and Y-axes differ. The results indicate that this method is suitable for calibrating the geometric tool center point of a Barkhausen noise sensor. In the repeatability tests, a standard deviation within the robot's repeatability was obtained. The Barkhausen noise signal was also evaluated after recalibration, and the results indicate correct calibration. However, future studies should use a more accurate manipulator, since the method employs the robot itself as a measuring device.


Successful management of rivers requires an understanding of the fluvial processes that govern them. This, in turn, cannot be achieved without a means of quantifying their geomorphology and hydrology and the spatio-temporal interactions between them, that is, their hydromorphology. For a long time, it has been laborious and time-consuming to measure river topography, especially in the submerged part of the channel. The measurement of the flow field has been challenging as well, and hence such measurements have long been sparse in natural environments. Technological advancements in remote sensing in recent years have opened up new possibilities for capturing synoptic information on river environments. This thesis presents new developments in fluvial remote sensing of both topography and water flow. A set of close-range remote sensing methods is employed to construct a high-resolution unified empirical hydromorphological model, that is, river channel and floodplain topography together with the three-dimensional areal flow field. Empirical as well as hydraulic theory-based optical remote sensing methods are tested and evaluated using normal colour aerial photographs and sonar calibration and reference measurements on a rocky-bed sub-Arctic river. The empirical optical bathymetry model is developed further by introducing a deep-water radiance parameter estimation algorithm that extends the model's field of application to shallow streams. The effect of this parameter on the model is also assessed in a study of a sandy-bed sub-Arctic river using close-range high-resolution aerial photography, presenting one of the first examples of fluvial bathymetry modelling from unmanned aerial vehicles (UAVs). Further close-range remote sensing methods are added to integrate the river bed with the floodplain and create a seamless high-resolution topography.
Boat-, cart- and backpack-based mobile laser scanning (MLS) is used to measure the topography of the dry part of the channel at high resolution and accuracy. Multitemporal MLS is evaluated, along with UAV-based photogrammetry, against terrestrial laser scanning reference data and merged with UAV-based bathymetry to create a two-year series of seamless digital terrain models, which allow the methodology to be evaluated for high-resolution change analysis of the entire channel. The remote sensing based model of hydromorphology is completed by a new methodology for mapping the flow field in 3D. An acoustic Doppler current profiler (ADCP) is deployed on a remote-controlled boat with a survey-grade global navigation satellite system (GNSS) receiver, allowing the areally sampled 3D flow vectors to be positioned in 3D space as a point cloud; interpolating this point cloud into a 3D matrix enables quantitative volumetric flow analysis. Multitemporal areal 3D flow field data show the evolution of the flow field during a snow-melt flood event. The combination of the underwater and dry topography with the flow field yields a complete model of river hydromorphology at the reach scale.


The aim of this work was to calibrate material properties, including strength and strain values, for the different material zones of ultra-high strength steel (UHSS) welded joints under monotonic static loading. UHSS is heat sensitive and softens where welding heat affects it; this region is the heat-affected zone (HAZ). Cylindrical specimens were cut from welded joints of Strenx® 960 MC and Strenx® Tube 960 MH and examined by tensile testing, and the hardness values across the specimens' cross sections were measured. Using correlations between hardness and strength, initial material properties were obtained. Specimens of the same size, with the same material zones as the real specimens, were created and defined in the finite element method (FEM) software Abaqus 6.14-1, with loading and boundary conditions defined according to the tensile tests. Using the initial material properties derived from the hardness-strength correlations (true stress-strain values) as the main Abaqus input, the FEM was used to simulate the tensile test process. By comparing the Abaqus FEM results with the measured tensile test results, the initial material properties were revised and reused as software input until they were fully calibrated, i.e. until the deviation between the FEM and tensile test results was minimal. Two different types of S960 were used: 960 MC plates and a structural hollow section 960 MH X-joint, welded with Böhler™ X96 filler material. In welded joints, the following zones typically appear: weld (WEL), coarse-grained (HCG) and fine-grained (HFG) heat-affected zones, annealed zone, and base material (BaM). The results showed that the HAZ is softened by the heat input during welding; for all specimens, the softened zone's strength decreased, making it the weakest zone, where fracture occurred under loading. The stress concentration of a notched specimen can represent the properties of the notched zone.
The load-displacement diagrams from the FEM model match the experiments when the material properties calibrated by combining the two hardness-strength correlations are used.
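The hardness-to-strength seeding step can be sketched with linear correlations of the kind commonly cited for steels (Pavlina and Van Tyne-type fits). The coefficients below are those widely quoted approximations, used here purely as an illustrative assumption; they are not the thesis' calibrated values.

```python
def strength_from_hardness(hv):
    """Approximate yield and ultimate tensile strength [MPa] from Vickers
    hardness HV, using commonly cited linear fits for steels (assumed here)."""
    ys = 2.876 * hv - 90.7     # approximate yield-strength correlation
    uts = 3.734 * hv - 99.8    # approximate ultimate-strength correlation
    return ys, uts

# Example: a softened HAZ at ~300 HV versus base material at ~360 HV
haz = strength_from_hardness(300.0)
base = strength_from_hardness(360.0)
```

Seeding each material zone this way and then iterating the FEM input until the simulated load-displacement curve matches the tensile test is exactly the calibration loop the abstract describes.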


This work presents a synopsis of efficient strategies used in power management to achieve the most economical power and energy consumption in multicore systems, FPGAs and NoC platforms. A practical approach was taken to validate the significance of the Adaptive Power Management Algorithm (APMA) proposed for the system developed for this thesis project. The system comprises an arithmetic logic unit, up and down counters, an adder, a state machine and a multiplexer. The project had three aims: first, to develop the system used for this power management work; second, to perform area and power analyses of the system on several scalable technology platforms (UMC 90 nm nanotechnology at 1.2 V, UMC 90 nm nanotechnology at 1.32 V and UMC 0.18 µm nanotechnology at 1.80 V) in order to examine the differences in the system's area and power consumption across platforms; and third, to explore strategies for reducing the system's power consumption and to propose an adaptive power management algorithm for doing so. The strategies introduced in this work comprise dynamic voltage and frequency scaling (DVFS) and task parallelism. After development, the system was run on an FPGA board, on NoC platforms, and on the technology platforms listed above; synthesis was accomplished successfully, the simulated results show that the system meets all functional requirements, and the power consumption and area utilization were recorded and analyzed in Chapter 7 of this work.
This work also extensively reviews strategies for managing power consumption, drawing on quantitative research by many researchers and companies. It is a mixture of literature analysis and experimental laboratory work, and it condenses and presents the basic concepts of power management strategy from high-quality technical papers.
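The leverage DVFS offers follows from the standard dynamic-power relation P ≈ α·C·V²·f: scaling frequency alone saves power linearly, but lowering voltage along with frequency saves roughly cubically. The sketch below uses that textbook formula with illustrative parameter values, not the thesis' measured UMC figures.

```python
def dynamic_power(v, f, c=1e-9, alpha=0.2):
    """Dynamic switching power [W]: activity factor * switched capacitance
    * V^2 * f (parameter values here are illustrative assumptions)."""
    return alpha * c * v * v * f

p_full = dynamic_power(1.32, 200e6)   # high-voltage corner at full clock
p_dvfs = dynamic_power(1.20, 100e6)   # halve f and drop to the 1.2 V corner
```

Halving the frequency alone would halve the dynamic power; combining it with the voltage reduction brings the ratio well below one half, which is why DVFS is the first strategy such adaptive power managers reach for.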


A new ultra-high vacuum (UHV) reflectance spectrometer was successfully designed, making use of a Janis Industries ST-400 sample cryostat, an IR Labs bolometer, and a Bruker IFS 66v/S spectrometer. Two noteworthy features are an in situ gold evaporator and an internal reference path, both of which allow the experiment to proceed with a completely undisturbed sample position. As tested, the system operates between 4.2 K and 325 K over a frequency range of 60-670 cm⁻¹; this range can easily be extended through the addition of applicable detectors. Tests were performed on SrTiO3, a highly ionic incipient ferroelectric insulator with a well-known reflectance. The presence and temperature dependence of the lowest-frequency "soft" mode were measured, as was the presence of the other two infrared modes. During the structural phase transition from cubic to tetragonal perovskite, the splitting of the second phonon mode was also observed. All of the collected data are in good agreement with previous measurements, with a minor discrepancy between the actual and recorded sample temperatures.


This thesis introduces the Salmon Algorithm, a search meta-heuristic that can be used for a variety of combinatorial optimization problems. The algorithm is loosely based on the path-finding behaviour of salmon swimming upstream to spawn. It has a number of tunable parameters, so experiments were conducted to find the optimal parameter settings for different search spaces. The algorithm was tested on one instance of the Traveling Salesman Problem and found to outperform an Ant Colony Algorithm and a Genetic Algorithm. It was then tested on three coding theory problems: optimal edit codes, optimal Hamming distance codes, and optimal covering codes. The algorithm improved on the best known values for five of the six edit-code test cases, and matched the best known results on four of the seven Hamming codes and on all three covering codes. These results suggest that the Salmon Algorithm is competitive with established guided random search techniques, and may be superior in some search spaces.


Understanding the machinery of gene regulation in order to control gene expression has been one of the main focuses of bioinformaticians for years. We use a multi-objective genetic algorithm to evolve a specialized version of side effect machines for degenerate motif discovery. We compare several suggested objectives for the motifs they find, test different multi-objective scoring schemes and probabilistic background sequence models, and report our results on a synthetic dataset and on some biological benchmarking suites. We conclude with a comparison of our algorithm against some widely used motif discovery algorithms from the literature and suggest future directions for research in this area.


DNA assembly is among the most fundamental and difficult problems in bioinformatics. Near-optimal assembly solutions are available for bacterial and small genomes; however, assembling large and complex genomes, especially the human genome, using Next-Generation Sequencing (NGS) technologies has proven very difficult because of the highly repetitive and complex nature of the human genome, short read lengths, uneven data coverage, and tools that are not specifically built for human genomes. Moreover, many algorithms are not even scalable to human-genome datasets containing hundreds of millions of short reads. The DNA assembly problem is usually divided into several subproblems, including error detection and correction, contig creation, scaffolding and contig orientation, each of which can be seen as a distinct research area. This thesis focuses specifically on creating contigs from the short reads and combining them with the outputs of other tools in order to obtain better results. Three assemblers, SOAPdenovo [Li09], Velvet [ZB08] and Meraculous [CHS+11], were selected for comparison. The results show that this thesis's approach produces results comparable to the other assemblers, and that combining our contigs with the outputs of other tools produces the best results, outperforming all of the other investigated assemblers.
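To make the contig-creation subproblem concrete, here is a minimal greedy overlap-merge sketch on error-free toy reads. This is an illustration only: the assemblers compared above use de Bruijn graph methods and must cope with sequencing errors, repeats and uneven coverage, none of which this sketch handles; the reads and overlap threshold are invented.

```python
def overlap(a, b, min_len):
    """Length of the longest suffix of a that is a prefix of b (>= min_len, else 0)."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(reads, min_overlap=3):
    """Repeatedly merge the pair of contigs with the largest overlap."""
    contigs = list(dict.fromkeys(reads))          # dedupe, keep input order
    while len(contigs) > 1:
        best = (0, None, None)
        for i, a in enumerate(contigs):
            for j, b in enumerate(contigs):
                if i != j:
                    n = overlap(a, b, min_overlap)
                    if n > best[0]:
                        best = (n, i, j)
        n, i, j = best
        if n == 0:
            break                                  # no overlap left: stop
        merged = contigs[i] + contigs[j][n:]
        contigs = [c for k, c in enumerate(contigs) if k not in (i, j)] + [merged]
    return contigs

# Error-free reads sampled from the toy genome "ATTAGACCTGCCGGAATAC".
reads = ["ATTAGACCTG", "CCTGCCGGAA", "AGACCTGCCG", "GCCGGAATAC"]
contigs = greedy_assemble(reads, min_overlap=4)
```

The quadratic all-pairs scan is what makes this approach unusable at the hundreds-of-millions-of-reads scale mentioned above, motivating the graph-based formulations the compared assemblers use.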