53 results for Front Tracking Methods
Abstract:
A square-wave voltammetric (SWV) method and a flow injection analysis system with amperometric detection were developed for the determination of tramadol hydrochloride. The SWV method enables the determination of tramadol over the concentration range of 15-75 µM with a detection limit of 2.2 µM. Tramadol could be determined in concentrations between 9 and 50 µM at a sampling rate of 90 h⁻¹, with a detection limit of 1.7 µM using the flow injection system. The electrochemical methods developed were successfully applied to the determination of tramadol in pharmaceutical dosage forms, without any pre-treatment of the samples. Recovery trials were performed to assess the accuracy of the results; the values were between 97 and 102% for both methods.
Abstract:
Pesticides are widely used on fruits to combat a variety of pests. Several extraction procedures (liquid extraction, single-drop microextraction, microwave-assisted extraction, pressurized liquid extraction, supercritical fluid extraction, solid-phase extraction, solid-phase microextraction, matrix solid-phase dispersion, and stir bar sorptive extraction) have been reported for the determination of pesticide residues in fruits and fruit juices. The most significant change in recent years has been the introduction of the Quick, Easy, Cheap, Effective, Rugged, and Safe (QuEChERS) methods for the analysis of these matrices. Combining new extraction methods with chromatography has been reported to provide better quantitative recoveries at low levels. The use of mass spectrometric detectors in combination with liquid and gas chromatography has played a vital role in solving many problems related to food safety. This review focuses on the achievements made possible by progress in extraction methods and by the latest advances and novelties in mass spectrometry, and on how this progress has improved the control of food, raising food safety and quality standards.
Abstract:
The state of the art of voltammetric and amperometric methods used in the study and determination of pesticides in crops, food, phytopharmaceutical products, and environmental samples is reviewed. The main structural groups of pesticides, i.e., triazines, organophosphates, organochlorines, nitro compounds, carbamates, thiocarbamates, sulfonylureas, and bipyridinium compounds, are considered, together with some of their degradation products. The advantages, drawbacks, and trends in the development of voltammetric and amperometric methods for the study and determination of pesticides in these samples are discussed.
Abstract:
This paper focuses on evaluating the usability of an Intelligent Wheelchair (IW) in both real and simulated environments. The wheelchair is controlled at a high level by a flexible multimodal interface, using voice commands, facial expressions, head movements, and a joystick as its main inputs. A quasi-experimental design was applied, including a deterministic sample and a questionnaire that enabled application of the System Usability Scale. The subjects were divided into two independent samples: 46 individuals performed the experiment with an Intelligent Wheelchair in a simulated environment (28 using the different commands in a sequential way and 18 free to choose the command), and 12 individuals performed the experiment with a real IW. The main conclusion of this study is that the usability of the Intelligent Wheelchair is higher in a real environment than in the simulated environment. However, there was no statistical evidence that the real and simulated wheelchairs differ in terms of safety and control. In addition, most users considered the multimodal way of driving the wheelchair very practical and satisfactory. Thus, it may be concluded that the multimodal interface enables very easy and safe control of the IW in both simulated and real environments.
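As an illustration of how questionnaire responses are turned into a usability figure, the sketch below computes a standard System Usability Scale (SUS) score from ten 1-5 responses; the function name and the example answers are hypothetical and not taken from the study.

```python
def sus_score(responses):
    """Compute the System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items are positively worded: contribution = response - 1.
    Even-numbered items are negatively worded: contribution = 5 - response.
    The summed contributions (0-40) are scaled by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical example: one participant's answers to the ten SUS items.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
```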
Abstract:
We perform a comparison between the fractional iteration and decomposition methods applied to the wave equation on a Cantor set. The operators are taken in the local sense. The results illustrate the significant features of the two methods, both of which are very effective and straightforward for solving differential equations with local fractional derivatives.
Abstract:
To avoid additional hardware deployment, indoor localization systems have to be designed so that they rely only on existing infrastructure. Besides processing measurements between nodes, the localization procedure can incorporate all available information about the environment. In order to enhance the performance of Wi-Fi based localization systems, the solution presented in this paper also considers negative information. An indoor tracking method inspired by Kalman filtering is also proposed.
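The abstract does not give the filter equations, but a minimal sketch of the kind of Kalman-style predict/update cycle such a tracking method builds on is shown below; the constant-velocity model, noise values, and all variable names are illustrative assumptions, not the authors' design.

```python
import numpy as np

# Assumed 2-D constant-velocity model: state = [x, y, vx, vy].
dt = 1.0                                      # sampling interval (s), assumed
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)     # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)     # only the position is observed
Q = 0.05 * np.eye(4)                          # process noise (assumed)
R = 2.0 * np.eye(2)                           # Wi-Fi position noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle given the previous state x, covariance P,
    and a new Wi-Fi position fix z = [x_meas, y_meas]."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    y = z - H @ x_pred                        # innovation
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

# Hypothetical usage: start at the origin and fuse two noisy Wi-Fi fixes.
x, P = np.zeros(4), np.eye(4)
for z in (np.array([1.2, 0.4]), np.array([2.1, 0.9])):
    x, P = kalman_step(x, P, z)
print(x[:2])   # smoothed position estimate
```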
Abstract:
Knowing exactly where a mobile entity is and monitoring its trajectory in real time has recently attracted a lot of interest from both academia and industry, due to the large number of applications it enables; nevertheless, it remains one of the most challenging problems from both scientific and technological standpoints. In this work we propose a tracking system based on the fusion of position estimates provided by different sources, which are combined to obtain a final estimate with improved accuracy with respect to that produced by each system individually. In particular, exploiting the availability of a Wireless Sensor Network as infrastructure, a mobile entity equipped with an inertial system first obtains position estimates using both a Kalman Filter and a fully distributed positioning algorithm (the Enhanced Steepest Descent, which we recently proposed), and then combines the results using the Simple Convex Combination algorithm. Simulation results clearly show good performance in terms of the final accuracy achieved. Finally, the proposed technique is validated against real data taken from an inertial sensor provided by THALES ITALIA.
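The abstract does not spell out how the two estimates are merged; below is a minimal sketch of one common form of convex combination, weighting each source inversely to its error variance. The weighting rule and all names are assumptions for illustration, not the published Simple Convex Combination algorithm.

```python
import numpy as np

def convex_combination(est_a, var_a, est_b, var_b):
    """Fuse two 2-D position estimates by a convex combination whose
    weights are inversely proportional to each source's variance."""
    w_a = var_b / (var_a + var_b)      # more weight to the less noisy source
    w_b = var_a / (var_a + var_b)      # weights sum to one (convexity)
    fused = w_a * np.asarray(est_a) + w_b * np.asarray(est_b)
    fused_var = (var_a * var_b) / (var_a + var_b)
    return fused, fused_var

# Hypothetical usage: Kalman-filter estimate vs. distributed-algorithm estimate.
pos, var = convex_combination([3.1, 4.2], 0.5, [2.8, 4.6], 1.5)
print(pos, var)
```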
Abstract:
Wireless Sensor Networks (WSNs) are highly distributed systems in which resource allocation (bandwidth, memory) must be performed efficiently to provide a minimum acceptable Quality of Service (QoS) to the regions where critical events occur. In fact, if resources are statically assigned, independently of the location and time of the events, these resources will definitely be misused. In other words, it is more efficient to dynamically grant more resources to the sensor nodes affected by critical events, thus providing better network resource management and reducing the end-to-end delays of event notification and tracking. In this paper, we discuss the use of a WSN management architecture based on the active network management paradigm to provide real-time tracking and reporting of dynamic events while ensuring efficient resource utilization. The active network management paradigm allows packets to transport not only data but also program scripts that are executed in the nodes to dynamically modify the operation of the network. This presumes the use of a runtime execution environment (middleware) in each node to interpret the script. We consider hierarchical WSN topologies (e.g. cluster-tree, two-tiered architectures), since they have been used to improve the timing performance of WSNs, as they support deterministic medium access control protocols.
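The active-packet idea can be pictured with the toy sketch below, in which a node's runtime environment interprets a small script carried by a packet to change local behaviour (here, a reporting period); the packet format, the allowed commands, and every name are hypothetical, not the architecture used in the paper.

```python
# Toy model of an "active" packet that carries both data and a script.
class Node:
    def __init__(self):
        self.report_period_ms = 1000     # default reporting period
        self.buffer = []

    def execute_script(self, script):
        """Minimal runtime environment: interpret a restricted set of
        commands instead of executing arbitrary code on the node."""
        for line in script.splitlines():
            cmd, _, arg = line.partition(" ")
            if cmd == "SET_PERIOD":
                self.report_period_ms = int(arg)
            elif cmd == "FLUSH":
                self.buffer.clear()

    def receive(self, packet):
        self.buffer.append(packet["data"])
        if packet.get("script"):
            self.execute_script(packet["script"])

# Hypothetical usage: a critical event triggers faster reporting on the node.
node = Node()
node.receive({"data": 21.5, "script": "SET_PERIOD 100"})
print(node.report_period_ms)   # -> 100
```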
Abstract:
Optimization problems arise in science, engineering, economics, and other fields, and we need to find the best solution for each case. The methods used to solve these problems depend on several factors, including the amount and type of accessible information, the algorithms available for solving them, and, obviously, the intrinsic characteristics of the problem. There are many kinds of optimization problems and, consequently, many kinds of methods to solve them. When the functions involved are nonlinear and their derivatives are unknown or very difficult to calculate, suitable methods are rarer. Functions of this kind are frequently called black-box functions. To solve such problems without constraints (unconstrained optimization), we can use direct search methods, which require neither derivatives nor approximations of them. But when the problem has constraints (nonlinear programming problems) and, additionally, the constraint functions are black-box functions, it is much more difficult to find the most appropriate method. Penalty methods can then be used. They transform the original problem into a sequence of unconstrained problems derived from the initial one. This sequence of unconstrained problems can then be solved with the methods available for unconstrained optimization. In this chapter, we present a classification of some of the existing penalty methods and describe some of their assumptions and limitations. These methods allow the solution of optimization problems with continuous, discrete, and mixed constraints, without requiring continuity, differentiability, or convexity. Thus, penalty methods can be used as the first step in the resolution of constrained problems, by means of methods typically used for unconstrained problems. We also discuss a new class of penalty methods for nonlinear optimization, which adjust the penalty parameter dynamically.
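To make the penalty transformation concrete, here is a minimal sketch of a classical quadratic-penalty loop in which each unconstrained subproblem is solved with a derivative-free method (SciPy's Nelder-Mead is used as a stand-in inner solver); the test problem, penalty update rule, and tolerances are illustrative assumptions, not the classification proposed in the chapter.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative problem: minimize f(x) subject to g(x) <= 0 and h(x) = 0.
f = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2
g = lambda x: x[0] + x[1] - 2          # inequality constraint g(x) <= 0
h = lambda x: x[0] - 2 * x[1]          # equality constraint h(x) = 0

def quadratic_penalty(x, mu):
    """Penalized objective: constraint violations are squared and weighted by mu."""
    return f(x) + mu * (max(0.0, g(x)) ** 2 + h(x) ** 2)

x = np.array([0.0, 0.0])
mu = 1.0
for _ in range(10):
    # Each subproblem is unconstrained, so a derivative-free (direct search)
    # solver such as Nelder-Mead can be used even for black-box functions.
    res = minimize(lambda x: quadratic_penalty(x, mu), x, method="Nelder-Mead")
    x = res.x
    if max(0.0, g(x)) ** 2 + h(x) ** 2 < 1e-10:   # feasible enough: stop
        break
    mu *= 10.0                                    # tighten the penalty
print(x)
```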
Abstract:
The characteristics of carbon fibre reinforced laminates have widened their use, from aerospace to domestic appliances. A common requirement is drilling for assembly purposes. It is known that a drilling process that reduces the drill thrust force can decrease the risk of delamination. In this work, delamination assessment methods based on radiographic data are compared and correlated with mechanical test results (bearing test).
Abstract:
Constrained and unconstrained nonlinear optimization problems often appear in many engineering areas. In some of these cases it is not possible to use derivative-based optimization methods, because the objective function is not known, is too complex, or is non-smooth. In these cases derivative-based methods cannot be used, and direct search methods may be the most suitable optimization methods. An Application Programming Interface (API) including some of these methods was implemented using Java technology. This API can be accessed either by applications running on the same computer where it is installed or remotely, through a LAN or the Internet, using web services. From the engineering point of view, the information needed from the API is the solution to the provided problem. From the point of view of researchers in optimization methods, however, the solution alone is not enough: additional information about the iterative process is also useful, such as the number of iterations, the value of the solution at each iteration, and the stopping criteria. This paper presents the features added to the API to allow users to access the iterative process data.
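As a rough illustration of the kind of iterative-process data such an API can expose, the sketch below runs a simple coordinate (compass) search and records a per-iteration history; the method, the record format, and all names are assumptions made for this example, not the API described in the paper.

```python
def coordinate_search(f, x0, step=1.0, tol=1e-6, max_iter=200):
    """Derivative-free coordinate (compass) search that also returns a
    per-iteration history, mimicking the extra data an optimization API
    might expose to researchers."""
    x = list(x0)
    history = []
    for it in range(max_iter):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = x[:]
                trial[i] += delta
                if f(trial) < f(x):
                    x, improved = trial, True
        history.append({"iteration": it, "x": x[:], "f": f(x), "step": step})
        if not improved:
            step /= 2.0                    # shrink the mesh when stuck
            if step < tol:                 # stopping criterion
                break
    return x, history

# Hypothetical usage on a smooth test function.
best, hist = coordinate_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
                               [0.0, 0.0])
print(best, len(hist))
```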
Abstract:
In nonlinear optimization, penalty and barrier methods are normally used to solve constrained problems. There are several penalty/barrier methods, and they are used in many areas, from engineering to economics, through biology, chemistry, and physics, among others. In these areas, optimization problems often appear in which the functions involved (objective and constraints) are non-smooth and/or their derivatives are not known. In this work some penalty/barrier functions are tested and compared, using derivative-free methods, namely direct search methods, in the internal process. This work is part of a larger project involving the development of an Application Programming Interface that implements several optimization methods, to be used in applications that need to solve constrained and/or unconstrained nonlinear optimization problems. Besides its use in applied mathematics research, it is also intended for use in engineering software packages.
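Complementing the quadratic-penalty sketch given earlier, the fragment below shows the other family mentioned here, a logarithmic barrier, again with a derivative-free inner solver; the problem, the barrier-parameter schedule, and the names are illustrative assumptions, not the functions compared in the paper.

```python
import math
import numpy as np
from scipy.optimize import minimize

# Illustrative problem: minimize f(x) subject to g(x) <= 0, starting feasible.
f = lambda x: (x[0] - 3) ** 2
g = lambda x: x[0] - 1                     # constraint x <= 1

def log_barrier(x, mu):
    """Barrier objective: grows without bound near the constraint boundary."""
    if g(x) >= 0:                          # infeasible points are rejected
        return float("inf")
    return f(x) + mu * (-math.log(-g(x)))

x = np.array([0.0])                        # strictly feasible starting point
mu = 1.0
for _ in range(8):
    res = minimize(lambda x: log_barrier(x, mu), x, method="Nelder-Mead")
    x = res.x
    mu *= 0.1                              # shrink the barrier parameter
print(x)                                   # approaches the constrained optimum x = 1
```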
Abstract:
In the last twenty years, genetic algorithms (GAs) have been applied in a plethora of fields, such as control, system identification, robotics, planning and scheduling, image processing, and pattern and speech recognition (Bäck et al., 1997). In robotics, the problems of trajectory planning, collision avoidance, and manipulator structure design considering a single criterion have been solved using several techniques (Alander, 2003). Most engineering applications, however, require the optimization of several criteria simultaneously. The problems are often complex, include discrete and continuous variables, and offer no prior knowledge about the search space. Such problems are much more complex, since they consider multiple design criteria simultaneously within the optimization procedure. This is known as multi-criteria (or multi-objective) optimization, and it has been addressed successfully through GAs (Deb, 2001). The overall aim of multi-criteria evolutionary algorithms is to achieve a set of non-dominated optimal solutions known as the Pareto front. At the end of the optimization procedure, instead of a single optimal (or near-optimal) solution, the decision maker can select a solution from the Pareto front. Some of the key issues in multi-criteria GAs are: i) the number of objectives, ii) obtaining a Pareto front as wide as possible, and iii) achieving a uniformly spread Pareto front. Indeed, multi-objective techniques using GAs have been growing in relevance as a research area. In 1989, Goldberg suggested the use of a GA to solve multi-objective problems, and since then other researchers have developed new methods, such as the multi-objective genetic algorithm (MOGA) (Fonseca & Fleming, 1995), the non-dominated sorting genetic algorithm (NSGA) (Deb, 2001), and the niched Pareto genetic algorithm (NPGA) (Horn et al., 1994), among several other variants (Coello, 1998). In this work the trajectory planning problem considers: i) robots with 2 and 3 degrees of freedom (dof), ii) the inclusion of obstacles in the workspace, and iii) up to five criteria used to qualify the evolving trajectory, namely joint traveling distance, joint velocity, end-effector (Cartesian) distance, end-effector (Cartesian) velocity, and the energy involved. These criteria are used to minimize the joint and end-effector traveled distance, the trajectory ripple, and the energy required by the manipulator to reach the destination point. Bearing these ideas in mind, the chapter addresses the planning of robot trajectories, meaning the development of an algorithm to find a continuous motion that takes the manipulator from a given starting configuration to a desired end position without colliding with any obstacle in the workspace. The chapter is organized as follows. Section 2 describes trajectory planning and several approaches proposed in the literature. Section 3 formulates the problem, namely the representation adopted to solve the trajectory planning and the objectives considered in the optimization. Section 4 studies the algorithm's convergence. Section 5 studies a 2R manipulator (i.e., a robot with two rotational joints/links) when the trajectory optimization considers two and five objectives. Sections 6 and 7 present the results for the 3R redundant manipulator with five objectives and for other complementary experiments, respectively. Finally, Section 8 draws the main conclusions.
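The central notion used above, the Pareto front of non-dominated solutions, can be made concrete with the short sketch below, which filters a set of candidate objective vectors (assuming all objectives are to be minimized); the data and function names are hypothetical, not the algorithm of the chapter.

```python
def dominates(a, b):
    """True if objective vector a dominates b (all objectives minimized):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical candidate trajectories scored on (joint distance, energy).
candidates = [(3.0, 10.0), (2.5, 12.0), (4.0, 9.0), (3.5, 11.5)]
print(pareto_front(candidates))   # -> [(3.0, 10.0), (2.5, 12.0), (4.0, 9.0)]
```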
Abstract:
Master's degree in Electrical and Computer Engineering. Area of specialization: Autonomous Systems.
Abstract:
On-chip debug (OCD) features are frequently available in modern microprocessors. Their contribution to shortening time-to-market justifies the industry investment in this area, where a number of competing or complementary proposals are available or under development, e.g. NEXUS, CJTAG, IJTAG. The controllability and observability features provided by OCD infrastructures offer a valuable toolbox that can be used well beyond the debugging arena, improving the return on investment by diluting its cost across a wider spectrum of application areas. This paper discusses the use of OCD features for validating fault tolerant architectures, and in particular the efficiency of various fault injection methods provided by enhanced OCD infrastructures. The reference data for our comparative study was captured on a workbench comprising the 32-bit Freescale MPC-565 microprocessor, an iSYSTEM IC3000 debugger (iTracePro version) and the Winidea 2005 debugging package. All enhanced OCD infrastructures were implemented in VHDL and the results were obtained by simulation within the same fault injection environment. The focus of this paper is on the comparative analysis of the experimental results obtained for various OCD configurations and debugging scenarios.
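As a rough picture of what a debugger-based fault injection campaign does, the sketch below simulates injecting single-bit flips into a register value and checking whether a simple detector catches the fault; the word width, the parity "detector", and all names are hypothetical and unrelated to the MPC-565 workbench described here.

```python
import random

WORD_BITS = 32                                 # assumed register width

def inject_bit_flip(value, bit=None):
    """Flip one bit of a register/memory word, emulating a transient fault
    that an OCD-based injector would write back through the debug port."""
    if bit is None:
        bit = random.randrange(WORD_BITS)
    return (value ^ (1 << bit)) & ((1 << WORD_BITS) - 1), bit

def parity_detects(original, faulty):
    """Toy fault-tolerance mechanism: a single parity bit catches any
    odd number of flipped bits."""
    return (bin(original).count("1") % 2) != (bin(faulty).count("1") % 2)

# Hypothetical campaign: inject 1000 single-bit faults into one register value.
detected = 0
for _ in range(1000):
    faulty, _ = inject_bit_flip(0x1234ABCD)
    if parity_detects(0x1234ABCD, faulty):
        detected += 1
print(f"{detected}/1000 faults detected")      # parity catches every single-bit flip
```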