191 results for event detection algorithm
Abstract:
The simultaneous use of different sensor technologies is an efficient way to increase the performance of chemical sensor systems. Among the available technologies, mass and capacitance transducers are particularly interesting because they can also take advantage of non-conductive sensing layers, such as many of the most interesting molecular recognition systems. In this paper, an array of quartz microbalance sensors is complemented by an array of capacitors obtained from a commercial biometric fingerprint detector. The two sets of transducers, properly functionalized with sensitive molecular and polymeric films, are used to estimate adulteration in gasoline, and in particular to quantify the ethanol content of gasoline, an application of importance for the Brazilian market. Results indicate that the hybrid system outperforms the individual sensor arrays, even though the quantification of ethanol in gasoline is affected by a barely acceptable error due to the variability of gasoline formulations. (C) 2009 Elsevier B.V. All rights reserved.
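As an illustration of how the two transducer families might be fused for quantification, the sketch below concatenates the responses of both arrays and fits a plain least-squares calibration against the known ethanol content. The regression model, the bias term, and all variable names are assumptions made for the example; the multivariate method actually employed in the paper is not reproduced here.

```python
import numpy as np

def fit_fused_calibration(X_qmb, X_cap, ethanol):
    """Least-squares calibration on the fused sensor responses: a sketch.

    X_qmb   : responses of the quartz microbalance array (n_samples, n_qmb)
    X_cap   : responses of the capacitive array (n_samples, n_cap)
    ethanol : known ethanol content of the calibration gasolines (n_samples,)
    Returns the regression coefficients (including a bias term).
    """
    X = np.hstack([np.asarray(X_qmb), np.asarray(X_cap),
                   np.ones((len(ethanol), 1))])      # fused features plus bias column
    coef, *_ = np.linalg.lstsq(X, np.asarray(ethanol, dtype=float), rcond=None)
    return coef

def predict_ethanol(coef, x_qmb, x_cap):
    """Predict the ethanol content of one new sample from both arrays."""
    x = np.concatenate([np.asarray(x_qmb), np.asarray(x_cap), [1.0]])
    return float(x @ coef)
```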
Abstract:
We derive an easy-to-compute approximate bound on the range of step sizes for which the constant-modulus algorithm (CMA) remains stable if initialized close to a minimum of the CM cost function. Our model highlights the influence of the signal constellation used in the transmission system: for smaller variation in the modulus of the transmitted symbols, the algorithm is more robust and the steady-state misadjustment is smaller. The theoretical results are validated through several simulations, for long and short filters and channels.
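For reference, a minimal sketch of the CMA update that such a stability bound refers to is given below; the step size mu and the dispersion constant R2 are illustrative values, not the bound derived in the paper.

```python
import numpy as np

def cma_equalize(x, num_taps=11, mu=1e-3, R2=1.0):
    """Constant-modulus algorithm (CMA) blind equalizer: a minimal sketch.

    x        : received baseband samples (numpy array, real or complex)
    num_taps : equalizer length
    mu       : step size (illustrative; the paper bounds the admissible range)
    R2       : dispersion constant E[|a|^4] / E[|a|^2] of the constellation
    """
    x = np.asarray(x)
    w = np.zeros(num_taps, dtype=complex)
    w[num_taps // 2] = 1.0                      # center-spike initialization
    y = np.zeros(len(x), dtype=complex)
    for n in range(num_taps, len(x)):
        u = x[n - num_taps:n][::-1]             # regressor, most recent sample first
        y[n] = np.vdot(w, u)                    # equalizer output w^H u
        e = (np.abs(y[n]) ** 2 - R2) * y[n]     # constant-modulus error term
        w = w - mu * np.conj(e) * u             # stochastic-gradient update
    return w, y
```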
Abstract:
Higher-order (2,4) FDTD schemes used for the numerical solution of Maxwell's equations focus on diminishing the truncation errors caused by the Taylor series expansion of the spatial derivatives. These schemes use a larger computational stencil, which generally makes use of two constant coefficients, C1 and C2, for the four-point central-difference operators. In this paper we propose a novel way to diminish these truncation errors and thereby obtain more accurate numerical solutions of Maxwell's equations. To this end, we present a method to individually optimize the pair of coefficients, C1 and C2, for any desired grid resolution and time-step size. In particular, we are interested in using coarser grid discretizations so that electrically large domains can be simulated. The results of our optimization algorithm show a significant reduction in dispersion error and numerical anisotropy for all modeled grid resolutions. Numerical simulations of free-space propagation verify the very promising theoretical results. The model is also shown to perform well in more complex, realistic scenarios.
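To make the role of the two coefficients concrete, the sketch below applies the four-point central-difference operator to a field sampled on a staggered grid. The default values c1 = 9/8 and c2 = -1/24 are the standard Taylor coefficients of the (2,4) scheme; the optimized values proposed in the paper are not reproduced here.

```python
import numpy as np

def central_diff_24(f, dx, c1=9/8, c2=-1/24):
    """Four-point central-difference operator of the (2,4) FDTD scheme: a sketch.

    f      : field samples on a staggered grid, f[i] located at (i + 1/2) * dx
    dx     : spatial grid step
    c1, c2 : operator coefficients; 9/8 and -1/24 are the standard Taylor
             values, whereas the paper optimizes them per grid resolution
             and time-step size (optimized values not reproduced here).
    Returns the approximate derivative at the interior integer grid points.
    """
    f = np.asarray(f, dtype=float)
    # d/dx at point i uses samples at i +/- 1/2 (weight c1) and i +/- 3/2 (weight c2)
    return (c1 * (f[2:-1] - f[1:-2]) + c2 * (f[3:] - f[:-3])) / dx
```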
Abstract:
Starting from the Durbin algorithm in a polynomial space with an inner product defined by the signal autocorrelation matrix, an isometric transformation is defined that maps this vector space into another one in which the Levinson algorithm is performed. Alternatively, for iterative algorithms such as discrete all-pole (DAP) modeling, an efficient implementation of a Gohberg-Semencul (GS) relation is developed for the inversion of the autocorrelation matrix that exploits its centrosymmetry. In the solution of the autocorrelation equations, the Levinson algorithm is found to be operationally less complex than the GS-inversion-based procedures for up to a minimum of five iterations at various linear prediction (LP) orders.
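For context, a minimal sketch of the textbook Levinson-Durbin recursion on which the comparison is based follows; it is not the isometric-transformation formulation nor the GS-based inversion developed in the paper.

```python
import numpy as np

def levinson_durbin(r, order):
    """Textbook Levinson-Durbin recursion: a minimal sketch.

    r     : autocorrelation sequence r[0], ..., r[order]
    order : linear prediction order
    Returns (a, e): prediction polynomial a[0..order] with a[0] = 1, and the
    final prediction error power e.
    """
    r = np.asarray(r, dtype=float)
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = r[0]
    for m in range(1, order + 1):
        acc = r[m] + np.dot(a[1:m], r[m - 1:0:-1])   # correlation of the current residual
        k = -acc / e                                  # reflection coefficient
        a[1:m] = a[1:m] + k * a[m - 1:0:-1]           # order-update of the coefficients
        a[m] = k
        e *= 1.0 - k * k                              # updated prediction error power
    return a, e
```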
Abstract:
In this paper the continuous Verhulst dynamic model is used to synthesize a new distributed power control algorithm (DPCA) for use in direct-sequence code division multiple access (DS-CDMA) systems. The Verhulst model was originally designed to describe the population growth of biological species under food and physical space restrictions. The discretization of the corresponding differential equation is accomplished via the Euler numerical integration (ENI) method. Analytical convergence conditions for the proposed DPCA are also established. Several properties of the proposed recursive algorithm, such as the Euclidean distance from the optimum vector after convergence, convergence speed, normalized mean squared error (NSE), average power consumption per user, performance under dynamic channels, and implementation complexity, are analyzed through simulations. The simulation results are compared with two other DPCAs: the classic algorithm derived by Foschini and Miljanic and the sigmoidal algorithm of Uykan and Koivo. Under estimation error conditions, the proposed DPCA exhibits a smaller discrepancy from the optimum power vector solution and better convergence (under both fixed and adaptive convergence factors) than the classic and sigmoidal DPCAs. (C) 2010 Elsevier GmbH. All rights reserved.
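The sketch below shows one Euler-discretized iteration of a Verhulst-type power update, with the logistic equilibrium placed at the target SINR. It is an illustration of the idea only; the exact recursion, variable names, and convergence factor alpha used in the paper may differ.

```python
import numpy as np

def verhulst_power_update(p, sinr, sinr_target, alpha=0.5):
    """One iteration of a Verhulst-inspired distributed power update: a sketch.

    Euler discretization of dp/dt = alpha * p * (1 - sinr / sinr_target),
    whose equilibrium is reached exactly when each user's measured SINR
    equals the target.

    p           : current transmit powers of the users (array)
    sinr        : measured SINR of each user at the current iteration
    sinr_target : target SINR (scalar or per-user array)
    alpha       : illustrative convergence factor
    """
    p = np.asarray(p, dtype=float)
    sinr = np.asarray(sinr, dtype=float)
    return p + alpha * p * (1.0 - sinr / sinr_target)
```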
Abstract:
The main goal of this paper is to apply the so-called policy iteration algorithm (PIA) to the long-run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space, with a compact action space depending on the state variable. To do so, we first derive some important properties of a pseudo-Poisson equation associated with the problem. We then show that convergence of the PIA to a solution satisfying the optimality equation holds under some classical hypotheses, and that this optimal solution yields an optimal control strategy for the average control problem for the continuous-time PDMP in feedback form.
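The paper works with continuous-time PDMPs on a Borel state space, but the two-step structure of the PIA (policy evaluation through a Poisson equation, followed by policy improvement) can be illustrated on a finite, unichain Markov decision process, as in the sketch below; the finite-state setting is purely an assumption made for the example.

```python
import numpy as np

def average_cost_policy_iteration(P, c, max_iter=100):
    """Policy iteration for the long-run average cost of a finite MDP: a sketch.

    P : transition tensor of shape (A, S, S), P[a, s, s'] = Pr(s' | s, a)
    c : one-stage costs of shape (A, S)
    Assumes a unichain model so that the evaluation system is solvable.
    Returns (policy, gain, bias).
    """
    A, S, _ = P.shape
    policy = np.zeros(S, dtype=int)
    gain, h = 0.0, np.zeros(S)
    for _ in range(max_iter):
        # --- policy evaluation: solve the Poisson equation ---
        #   gain + h(s) = c(s, d(s)) + sum_s' P[d(s), s, s'] h(s'),  with h(0) = 0
        Pd = P[policy, np.arange(S), :]
        cd = c[policy, np.arange(S)]
        M = np.zeros((S, S))
        M[:, 0] = 1.0                          # column multiplying the unknown gain
        M[:, 1:] = np.eye(S)[:, 1:] - Pd[:, 1:]
        sol = np.linalg.solve(M, cd)
        gain, h = sol[0], np.concatenate(([0.0], sol[1:]))
        # --- policy improvement ---
        q = c + P @ h                          # q[a, s] = c(a, s) + E[h(next state)]
        new_policy = np.argmin(q, axis=0)
        if np.array_equal(new_policy, policy): # no improvement: policy is optimal
            break
        policy = new_policy
    return policy, gain, h
```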
Abstract:
An algorithm inspired by ant behavior is developed to find the topology of an electric energy distribution network with minimum power loss. The algorithm's performance is investigated in hypothetical and actual circuits. When applied to an actual distribution system in a region of the State of Sao Paulo (Brazil), the solution found by the algorithm presents lower losses than the topology built by the utility company holding the concession.
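A generic ant-colony pheromone update that rewards low-loss topologies is sketched below purely as an illustration; the construction rules and the specific update used in the paper for network reconfiguration are not reproduced.

```python
import numpy as np

def update_pheromone(tau, solutions, losses, rho=0.5, Q=1.0):
    """Generic ant-colony pheromone update rewarding low-loss topologies: a sketch.

    tau       : pheromone level per candidate branch (numpy array)
    solutions : list of topologies; each is a list of indices of closed branches
    losses    : total power loss of each topology (lower is better)
    rho, Q    : evaporation rate and deposit scale (illustrative values)
    """
    tau = np.asarray(tau, dtype=float)
    tau *= 1.0 - rho                         # evaporation
    for branches, loss in zip(solutions, losses):
        for b in branches:
            tau[b] += Q / loss               # deposit more pheromone on better topologies
    return tau
```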
Abstract:
The most popular algorithms for blind equalization are the constant-modulus algorithm (CMA) and the Shalvi-Weinstein algorithm (SWA). It is well known that SWA exhibits a higher convergence rate than CMA, at the expense of higher computational complexity. If the forgetting factor is not sufficiently close to one, if the initialization is far from the optimal solution, or if the signal-to-noise ratio is low, SWA can converge to undesirable local minima or even diverge. In this paper, we show that divergence can be caused by an inconsistency in the nonlinear estimate of the transmitted signal, or (when the algorithm is implemented in finite precision) by the loss of positive definiteness of the estimate of the autocorrelation matrix, or by a combination of both. To avoid the first cause of divergence, we propose a dual-mode SWA. In the first mode of operation, the new algorithm works as SWA; in the second mode, it rejects inconsistent estimates of the transmitted signal. Assuming the persistence-of-excitation condition, we present a deterministic stability analysis of the new algorithm. To avoid the second cause of divergence, we propose a dual-mode lattice SWA, which is stable even in finite-precision arithmetic and has a computational complexity that increases linearly with the number of adjustable equalizer coefficients. The good performance of the proposed algorithms is confirmed through numerical simulations.
Abstract:
This paper presents the design and implementation of an embedded soft sensor, i.e., a generic and autonomous hardware module that can be applied to many complex plants in which a certain variable cannot be directly measured. It is implemented using a fuzzy identification algorithm called "Limited Rules", employed to model continuous nonlinear processes. The fuzzy model has a Takagi-Sugeno-Kang structure, and the premise parameters are defined using the Fuzzy C-Means (FCM) clustering algorithm. The firmware contains the soft sensor and runs online, estimating the target variable from the other available variables. Tests have been performed using a simulated pH neutralization plant, and the results of the embedded soft sensor have been considered satisfactory. A complete embedded inferential control system is also presented, including a soft sensor and a PID controller. (c) 2007, ISA. Published by Elsevier Ltd. All rights reserved.
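A minimal sketch of the Fuzzy C-Means algorithm used to place the premise parameters is shown below; the fuzzification exponent m = 2 and the stopping rule are common defaults assumed for the example, not necessarily the settings used in the paper.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Fuzzy C-Means clustering: a minimal sketch.

    X : data matrix (n_samples, n_features); m : fuzzification exponent.
    Returns (centers, U), where U[i, k] is the membership of sample i in cluster k.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)                 # memberships sum to one per sample
    for _ in range(max_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # membership-weighted cluster centers
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
        inv = d2 ** (-1.0 / (m - 1))
        U_new = inv / inv.sum(axis=1, keepdims=True)  # standard FCM membership update
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U
```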
Abstract:
This paper analyzes the complexity-performance trade-off of several heuristic near-optimum multiuser detection (MuD) approaches applied to the uplink of synchronous single/multiple-input multiple-output multicarrier code division multiple access (S/MIMO MC-CDMA) systems. Genetic algorithm (GA), short-term tabu search (STTS) and reactive tabu search (RTS), simulated annealing (SA), particle swarm optimization (PSO), and 1-opt local search (1-LS) heuristic multiuser detection algorithms (Heur-MuDs) are analyzed in detail, using a single-objective antenna-diversity-aided optimization approach. Monte Carlo simulations show that, after convergence, the performances reached by all near-optimum Heur-MuDs are similar. However, their computational complexities may differ substantially, depending on the system operating conditions. These complexities are carefully analyzed in order to obtain a general complexity-performance comparison framework and to show that unitary-Hamming-distance-search MuD (uH-ds) approaches (1-LS, SA, RTS and STTS) reach the best convergence rates, and that among them the 1-LS-MuD provides the best trade-off between implementation complexity and bit error rate (BER) performance.
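The 1-LS-MuD idea, restricting the search to unitary-Hamming-distance moves, can be sketched as follows; the scoring function llf is a placeholder for the actual MC-CDMA log-likelihood, which is not reproduced here.

```python
import numpy as np

def one_ls_mud(llf, b_init):
    """1-opt local search multiuser detector (1-LS-MuD): a sketch.

    llf    : callable scoring a candidate bit vector b in {-1, +1}^K
             (placeholder for the actual MC-CDMA log-likelihood)
    b_init : initial hard decisions, e.g. matched-filter outputs
    Repeatedly tries unitary-Hamming-distance moves (single-bit flips) and
    keeps any flip that improves the objective, until none does.
    """
    b = np.array(b_init, dtype=int)
    best = llf(b)
    improved = True
    while improved:
        improved = False
        for k in range(len(b)):
            b[k] = -b[k]                       # tentative single-bit flip
            val = llf(b)
            if val > best:
                best, improved = val, True     # keep the improving flip
            else:
                b[k] = -b[k]                   # revert the flip
    return b, best
```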
Abstract:
This paper addresses the single-machine scheduling problem with a common due date, aiming to minimize earliness and tardiness penalties. Owing to its complexity, most previous studies in the literature deal with this problem using heuristic and metaheuristic approaches. With the intention of contributing to the study of this problem, a branch-and-bound algorithm is proposed. Lower bounds and pruning rules that exploit properties of the problem are introduced. The proposed approach is examined through a comparative computational study with 280 problems involving different due-date scenarios. In addition, the values of optimal solutions for small problems from a known benchmark are provided.
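The objective being minimized can be sketched as below; the no-idle-time assumption is a simplification made for the example, since inserted idle time before the first job can matter in common-due-date problems.

```python
def earliness_tardiness_cost(sequence, p, d, alpha, beta):
    """Earliness/tardiness cost of a job sequence with a common due date: a sketch.

    sequence    : job indices in processing order
    p           : processing times, indexed by job
    d           : common due date
    alpha, beta : per-job earliness and tardiness penalty weights
    Jobs are assumed to start at time zero with no inserted idle time
    (a simplification made for this example).
    """
    t, cost = 0, 0.0
    for j in sequence:
        t += p[j]                                            # completion time of job j
        cost += alpha[j] * max(0, d - t) + beta[j] * max(0, t - d)
    return cost
```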
Abstract:
Interval-censored survival data, in which the event of interest is not observed exactly but is only known to occur within some time interval, arise very frequently. In some situations, event times might be censored into different, possibly overlapping intervals of variable widths; in other situations, information is available for all units at the same observed visit time. In the latter case, interval-censored data are termed grouped survival data. Here we present alternative approaches for analyzing interval-censored data. We illustrate these techniques using a survival data set involving mango tree lifetimes; this study is an example of grouped survival data.
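As a minimal illustration of how interval-censored observations enter a likelihood, the sketch below evaluates the log-likelihood under a simple exponential lifetime model; the parametric choice is an assumption made purely for the example.

```python
import numpy as np

def interval_censored_loglik(rate, left, right):
    """Log-likelihood of interval-censored lifetimes under an exponential model: a sketch.

    Each unit is only known to fail in the interval (left_i, right_i];
    use right_i = np.inf for right-censored units.
    rate : exponential hazard, so the survival function is S(t) = exp(-rate * t).
    """
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    S = lambda t: np.exp(-rate * t)          # exponential survival function
    # each unit contributes the probability mass assigned to its interval
    return np.sum(np.log(S(left) - S(right)))
```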
Abstract:
Neozygites tanajoae is an entomopathogenic fungus that has been used for biocontrol of the cassava green mite (Mononychellus tanajoa, CGM) in Africa. The establishment and dispersal of Brazilian isolates, which have been introduced into some African countries in recent years to improve CGM control, were followed with specific PCR assays. Two primer pairs, NEOSSU_F/NEOSSU_R and 8DDC_F/8DDC_R, were used to differentiate isolates collected from several locations in Brazil and from three countries in Africa: Benin, Ghana and Tanzania. The first primer pair enabled the species-specific detection of N. tanajoae, while the second differentiated the Brazilian isolates from those of other geographical origins. The PCR assays were designed to detect fungal DNA in the matrix of dead infested mites, since N. tanajoae is difficult to isolate and culture on selective artificial media. Our results show that all isolates (Brazilian and African) that sporulated on mummified mites were amplified with the first primer pair, confirming their identity as N. tanajoae. The second pair amplified DNA from all the Brazilian isolates but did not amplify any DNA samples from the African isolates. Neither primer pair showed amplification from any of the non-sporulating mite extracts or from the dead uninfected mites used as negative controls. We confirmed that the two primer pairs tested are suitable for the detection and differential identification of N. tanajoae isolates from Brazil and Africa, and that they are useful for monitoring the establishment and spread of the Brazilian isolates of N. tanajoae introduced into Benin or into other African countries to improve CGM biocontrol.
Abstract:
Xanthomonas axonopodis pv. passiflorae causes bacterial spot in passion fruit, attacking the purple and yellow passion fruit as well as the sweet passion fruit. The diversity of 87 isolates of pv. passiflorae collected across 22 fruit orchards in Brazil was evaluated using molecular profiles and statistical procedures, including a dendrogram based on the unweighted pair-group method with arithmetic mean (UPGMA), analysis of molecular variance (AMOVA), and an assignment test that provides information on genetic structure at the population level. Isolates from another eight pathovars were included in the molecular analyses, and all were shown to have distinct repetitive-sequence-based polymerase chain reaction profiles. The amplified fragment length polymorphism (AFLP) technique revealed considerable diversity among isolates of pv. passiflorae, and AMOVA showed that most of the variance (49.4%) was due to differences between localities. Cluster analysis revealed that most genotypic clusters were homogeneous and that variance was associated primarily with geographic origin. The disease adversely affects fruit production and may kill infected plants, so a method for rapid diagnosis of the pathogen, even before disease symptoms become evident, is valuable for producers. Here, a set of primers (Xapas) was designed by exploiting a single-nucleotide polymorphism between the sequences of the intergenic 16S-23S rRNA spacer region of the pathovars. Xapas was shown to effectively detect all pv. passiflorae isolates and is recommended for disease diagnosis in passion fruit orchards.
Abstract:
Diagnosing herbicide-resistant weed populations is the first step in herbicide resistance management. Monitoring the nature, distribution, and abundance of resistant plants in fields demands efficient and effective screening tests. Glyphosate-resistant populations of Lolium multiflorum (VA) and L. rigidum (C) were used in assays to test the effectiveness of different methods in detecting herbicide resistance. According to a Petri dish bioassay 7 days after treatment (DAT), the VA and C populations were 27 and 31 times more resistant to glyphosate than the susceptible populations, L. multiflorum (SM) and L. rigidum (SR), respectively. In a whole-plant bioassay (21 DAT), the VA and C populations were 6 and 11 times more resistant to glyphosate than their respective susceptible populations. The susceptible populations accumulated 2.5- and 1.4-fold more shikimic acid 48 hours after treatment (HAT) than the resistant VA and C populations. Glyphosate gradually inhibited net photosynthesis in all populations, but at 48-72 HAT the resistant plants recovered, whereas no recovery was detected in the susceptible populations. All assays were capable of detecting the resistant populations, and this may be useful for farmers and consultants as an effective tool to reduce the spread of resistant populations through quicker implementation of alternative weed-management practices. However, the assays differed in the time, costs, and equipment necessary to carry them out successfully. Regarding costs, the cheapest were the Petri dish and whole-plant bioassays, but they are time-consuming, as the major constraints are the collection of seeds from the field and the several weeks needed to evaluate resistance. The shikimic acid and net-photosynthesis assays were the quickest, but they require sophisticated equipment, which could restrict their use.