44 results for efficient algorithms


Relevance: 20.00%

Abstract:

Direct borohydride fuel cells are promising high-energy-density portable generators. However, their development remains limited by the complexity of the anodic reaction: the kinetics of the borohydride oxidation reaction (BOR) are slow, the reaction occurs at high overvoltages, and it may compete with the heterogeneous hydrolysis of BH₄⁻. Nevertheless, gold is usually considered rather inactive toward the heterogeneous hydrolysis of BH₄⁻ while showing some activity for the BOR, and is therefore assumed to yield the complete eight-electron BOR. In the present paper, by coupling online mass spectrometry to electrochemistry, we monitored the H₂ yield in situ during BOR experiments on sputtered gold electrodes. Our results show non-negligible H₂ generation on Au over the whole BOR potential range (0-0.8 V vs the reversible hydrogen electrode), revealing that gold cannot be considered a faradaic-efficient BOR electrocatalyst. We further propose a reaction pathway for the BOR on gold that accounts for these findings.

Relevance: 20.00%

Abstract:

The 'blue copper' enzyme bilirubin oxidase from Myrothecium verrucaria shows significantly enhanced adsorption on a pyrolytic graphite 'edge' (PGE) electrode that has been covalently modified with naphthyl-2-carboxylate functionalities by diazonium coupling. Modified electrodes coated with bilirubin oxidase show electrocatalytic voltammograms for the direct four-electron reduction of O₂ with up to four times the current density of an unmodified PGE electrode. Electrocatalytic voltammograms measured with a rapidly rotating electrode (to remove the effects of O₂ diffusion limitation) have a complex shape (an almost linear dependence of current on potential below pH 6) that is similar regardless of how the PGE is chemically modified. Importantly, the same waveform is observed when bilirubin oxidase is adsorbed on Au(111) or Pt(111) single-crystal electrodes (on which activity is short-lived). The electrocatalytic behavior of bilirubin oxidase, including its enhanced response on chemically modified PGE, therefore reflects inherent properties that do not depend on the electrode material. The variation of voltammetric waveshapes and potential-dependent (O₂) Michaelis constants with pH, analyzed in terms of the dispersion model, is consistent with a change in rate-determining step over the pH range 5-8: at pH 5, the high activity is limited by the rate of interfacial redox cycling of the Type 1 copper, whereas at pH 8 activity is much lower and a sigmoidal shape is approached, showing that interfacial electron transfer is no longer a limiting factor. The electrocatalytic activity of bilirubin oxidase on Pt(111) appears as a prominent pre-wave to electrocatalysis by Pt surface atoms, substantiating in a single, direct experiment that the minimum overpotential required for O₂ reduction by the enzyme is substantially smaller than that required at Pt. At pH 8, the onset of O₂ reduction lies within 0.14 V of the four-electron O₂/2H₂O potential.

Relevance: 20.00%

Abstract:

Today, several unsupervised classification algorithms are commonly used to cluster similar patterns in a data set based only on its statistical properties. Especially in image applications, self-organizing methods for unsupervised classification have been successfully applied to cluster pixels or groups of pixels for segmentation tasks. The first important contribution of this paper is the development of a self-organizing method for data classification, named the Enhanced Independent Component Analysis Mixture Model (EICAMM), built by modifying the Independent Component Analysis Mixture Model (ICAMM). The modifications address some of the model's limitations and make it more efficient. Moreover, a pre-processing methodology is also proposed, based on combining Sparse Code Shrinkage (SCS) for image denoising with the Sobel edge detector. In the experiments of this work, EICAMM and other self-organizing models were applied to segment images in their original and pre-processed versions. A comparative analysis showed satisfactory and competitive segmentation results for the proposed methods. (C) 2008 Published by Elsevier B.V.
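The pre-processing stage described in this abstract combines SCS denoising with Sobel edge detection. As a rough illustration of the second ingredient only (SCS is not shown, and this is a generic sketch rather than the paper's implementation), a minimal Sobel gradient-magnitude filter might look like:

```python
import numpy as np

def sobel_edges(img: np.ndarray) -> np.ndarray:
    """Gradient magnitude via the Sobel operator (valid region only,
    so the output is 2 pixels smaller in each dimension)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                      # vertical-gradient kernel
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)        # edge strength per pixel
```

Applied to an image with a sharp vertical step, the response is large at the step and zero in flat regions, which is the edge map a segmentation stage would consume.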

Relevance: 20.00%

Abstract:

Background: Leptin-deficient mice (Lep(ob)/Lep(ob), also known as ob/ob) are of great importance for studies of obesity, diabetes and other related pathologies. Generation of animals carrying the Lep(ob) mutation together with additional genomic modifications has therefore been used to associate genes with metabolic diseases. However, the infertility of Lep(ob)/Lep(ob) mice impairs this kind of breeding experiment. Objective: To propose a new method for producing Lep(ob)/Lep(ob) animals and Lep(ob)/Lep(ob)-derived models by stably restoring the fertility of Lep(ob)/Lep(ob) mice through white adipose tissue transplantation. Methods: 1 g of peri-gonadal adipose tissue from lean donors was transplanted subcutaneously into Lep(ob)/Lep(ob) animals, and a crossing strategy was established to generate Lep(ob)/Lep(ob)-derived mice. Results: The presented method reduced the number of animals used to generate double-transgenic models fourfold (from about 20 to 5 animals per double mutant produced) and minimized the number of genotyping steps (from 3 to 1, reducing the number of Lep gene genotyping assays from 83 to 6). Conclusion: Adipose tissue transplantation drastically improves both the production of Lep(ob)/Lep(ob) animals and the generation of Lep(ob)/Lep(ob)-derived animal models. International Journal of Obesity (2009) 33, 938-944; doi: 10.1038/ijo.2009.95; published online 16 June 2009

Relevance: 20.00%

Abstract:

Voltage and current waveforms of a distribution or transmission power system are not pure sinusoids: their distortions can be represented as a combination of the fundamental frequency, harmonics and high-frequency transients. This paper presents a novel approach to identifying harmonics in distorted power system waveforms. The proposed method is based on Genetic Algorithms, an optimization technique inspired by genetics and natural evolution. GOOAL, an intelligent algorithm specially designed for optimization problems, was successfully implemented and tested. Two chromosome representations are used: binary and real. The results show that the proposed method is more precise than the traditional Fourier transform, especially with the real chromosome representation.
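The GOOAL algorithm itself is not specified in this abstract, so the following is only a generic sketch of the underlying idea: a real-coded GA that estimates harmonic amplitudes of a distorted waveform by minimizing reconstruction error. Phases are ignored for brevity, and the function name and parameters are illustrative:

```python
import numpy as np

def ga_harmonics(signal, t, freqs, pop=40, gens=200, seed=0):
    """Real-coded GA estimating harmonic amplitudes of a sampled waveform."""
    rng = np.random.default_rng(seed)
    basis = np.sin(2 * np.pi * np.outer(freqs, t))     # one row per harmonic

    def mse(c):                                        # fitness: reconstruction error
        return np.mean((signal - c @ basis) ** 2)

    P = rng.uniform(0.0, 2.0, size=(pop, len(freqs)))  # initial amplitudes
    for _ in range(gens):
        errs = np.array([mse(c) for c in P])
        # binary tournament selection
        a, b = rng.integers(pop, size=(2, pop))
        parents = np.where((errs[a] < errs[b])[:, None], P[a], P[b])
        # arithmetic crossover + Gaussian mutation
        w = rng.random((pop, 1))
        children = w * parents + (1 - w) * parents[::-1]
        children += rng.normal(0.0, 0.05, children.shape)
        # elitism: carry the best individual forward unmutated
        children[0] = P[np.argmin(errs)]
        P = children
    errs = np.array([mse(c) for c in P])
    return P[np.argmin(errs)], errs.min()
```

On a synthetic 60 Hz waveform with a third harmonic, the best chromosome converges toward the true amplitudes; a Fourier-based comparison would use the same sampled window.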

Relevance: 20.00%

Abstract:

This paper presents a strategy for WDM optical network planning, specifically the Routing and Wavelength Allocation (RWA) problem with the objective of minimizing the number of wavelengths used, known in this form as Min-RWA. Two meta-heuristics (Tabu Search and Simulated Annealing) are applied to obtain solutions of good quality with high performance. The key point is trading off the maximum load on the virtual links against the minimization of the number of wavelengths used; the objective is a good compromise between the virtual-topology metric (load in Gb/s) and the physical-topology metric (number of wavelengths). The simulations show good results compared to existing approaches in the literature.
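As a toy illustration of one of the two meta-heuristics named above, a simulated annealing sketch for wavelength assignment might look like the following. It is not the paper's algorithm: lightpaths are given simply as sets of fiber-link ids, paths sharing a link must receive distinct wavelengths, and the number of distinct wavelengths is minimized.

```python
import math
import random

def sa_min_rwa(paths, max_w, iters=20000, seed=1):
    """Simulated annealing for a toy Min-RWA instance.
    `paths` is a list of sets of link ids; returns (assignment, cost)."""
    rng = random.Random(seed)

    def cost(a):
        conflicts = sum(
            1
            for i in range(len(paths))
            for j in range(i + 1, len(paths))
            if a[i] == a[j] and paths[i] & paths[j]
        )
        return 1000 * conflicts + len(set(a))   # conflicts dominate the count

    cur = [rng.randrange(max_w) for _ in paths]
    cur_c = cost(cur)
    best, best_c = list(cur), cur_c
    for k in range(iters):
        T = 5.0 * 0.999 ** k                    # geometric cooling schedule
        cand = list(cur)
        cand[rng.randrange(len(paths))] = rng.randrange(max_w)
        cand_c = cost(cand)
        d = cand_c - cur_c
        # always accept improvements; accept worsening moves with prob e^(-d/T)
        if d <= 0 or rng.random() < math.exp(-d / max(T, 1e-12)):
            cur, cur_c = cand, cand_c
            if cur_c < best_c:
                best, best_c = list(cur), cur_c
    return best, best_c
```

On an instance where three lightpaths share one link and a fourth is disjoint, the search finds a conflict-free assignment that reuses a wavelength on the disjoint path.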

Relevance: 20.00%

Abstract:

This technical note develops information filter and array algorithms for a linear minimum mean square error estimator of discrete-time Markovian jump linear systems. A numerical example for a two-mode Markovian jump linear system is provided to show the advantage of using array algorithms to filter this class of systems.
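For background: an information filter propagates the information matrix Y = P⁻¹ and information vector y = P⁻¹x instead of the covariance P and state x. A minimal sketch of the standard measurement update in this form (the Markovian-jump and array machinery of the note is not reproduced here):

```python
import numpy as np

def info_filter_update(Y, y, H, R, z):
    """Information-form measurement update: Y = P^{-1}, y = P^{-1} x.
    Adding H' R^{-1} H and H' R^{-1} z is equivalent to the Kalman update."""
    Ri = np.linalg.inv(R)
    return Y + H.T @ Ri @ H, y + H.T @ Ri @ z
```

The equivalence to the covariance-form Kalman update can be checked directly: recovering x and P from the updated (Y, y) reproduces the gain-based result.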

Relevance: 20.00%

Abstract:

The continuous growth of peer-to-peer networks has made them responsible for a considerable portion of current Internet traffic, so improvements in the usage of P2P network resources are of central importance. One effective approach is the deployment of locality algorithms, which allow the system to optimize the peer selection policy for different network situations and thus maximize performance. Several locality algorithms have been proposed for P2P networks to date; however, they usually adopt heterogeneous criteria for measuring the proximity between peers, which hinders a coherent comparison between the different solutions. In this paper, we develop a thorough review of popular locality algorithms based on three main characteristics: the adopted network architecture, the distance metric, and the resulting peer selection algorithm. As a result of this study, we propose a novel and generic taxonomy for locality algorithms in peer-to-peer networks, aiming to enable a better and more coherent evaluation of any individual locality algorithm.

Relevance: 20.00%

Abstract:

This paper describes a computational implementation of an evolutionary algorithm (EA) for reconfiguring radial distribution systems. The developed module considers power quality indices such as long-duration interruptions and customer process disruptions due to voltage sags, using the Monte Carlo simulation method. Power quality costs are modeled in the mathematical problem formulation and added to the cost of network losses. The proposed EA codification uses a decimal representation. The EA operators considered for the reconfiguration algorithm, namely selection, recombination and mutation, are analyzed herein. Several selection procedures are examined: tournament, elitism, and a mixed technique using both elitism and tournament. The recombination operator was developed considering a chromosome structure that maps the network branches and system radiality, and another structure that takes the network topology and the feasibility of network operation into account when exchanging genetic material. The initial population topologies are produced randomly, with radial configurations generated through the Prim and Kruskal algorithms, which rapidly build minimum spanning trees. (C) 2009 Elsevier B.V. All rights reserved.
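The spanning-tree construction mentioned at the end can be sketched with Prim's algorithm; feeding it random edge weights yields random radial (spanning-tree) configurations for an initial population. This is the textbook algorithm, not the paper's implementation:

```python
import heapq

def prim_mst(n, edges):
    """Prim's algorithm: returns a spanning tree (a radial configuration)
    of an undirected graph given as {(u, v): weight}, starting at node 0."""
    adj = {u: [] for u in range(n)}
    for (u, v), w in edges.items():
        adj[u].append((w, u, v))
        adj[v].append((w, v, u))
    visited = {0}
    heap = list(adj[0])
    heapq.heapify(heap)
    tree = []
    while heap and len(tree) < n - 1:
        w, u, v = heapq.heappop(heap)      # cheapest edge leaving the tree
        if v in visited:
            continue
        visited.add(v)
        tree.append((u, v, w))
        for e in adj[v]:
            if e[2] not in visited:
                heapq.heappush(heap, e)
    return tree
```

A spanning tree over the network branches guarantees radiality by construction, which is why it is a natural generator for feasible initial individuals.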

Relevance: 20.00%

Abstract:

A procedure is proposed to accurately model thin wires in lossy media by finite element analysis. It is based on the determination of a suitable element width in the vicinity of the wire, which strongly depends on the wire radius, in order to yield accurate results. The approach is well suited to the analysis of grounding systems. The numerical results of finite element analysis with the suitably chosen element width are compared with both analytical results and those computed by a commercial package for the analysis of grounding systems, showing very good agreement.

Relevance: 20.00%

Abstract:

Electrical impedance tomography (EIT) captures images of internal features of a body. Electrodes are attached to the boundary of the body, low-intensity alternating currents are applied, and the resulting electric potentials are measured. Based on the measurements, an estimation algorithm then obtains the three-dimensional internal admittivity distribution that corresponds to the image. One of the main goals of medical EIT is to achieve high resolution and accurate results at low computational cost. However, when the finite element method (FEM) is employed and the corresponding mesh is refined to increase resolution and accuracy, the computational cost increases substantially, especially in the estimation of absolute admittivity distributions. In this work we therefore consider a fast iterative solver for the forward problem, previously reported in the context of structural optimization, and propose several improvements to increase its performance in the EIT context. The solver is based on the recycling of approximate invariant subspaces, and it is applied to reduce the EIT computation time for a constant, high-resolution finite element mesh. In addition, we consider a powerful preconditioner and provide detailed pseudocode for the improved iterative solver. The numerical results show the effectiveness of our approach: the proposed algorithm is faster than the preconditioned conjugate gradient (CG) algorithm. The results also show that even on a standard PC without parallelization, a high mesh resolution (more than 150,000 degrees of freedom) can be used for image estimation at a relatively low computational cost. (C) 2010 Elsevier B.V. All rights reserved.
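The baseline against which the recycling solver is compared, preconditioned CG, can be sketched with a simple Jacobi (diagonal) preconditioner. This is the generic textbook method, not the paper's subspace-recycling solver:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, maxit=500):
    """Jacobi-preconditioned conjugate gradient for an SPD system A x = b.
    `M_inv_diag` holds the reciprocals of A's diagonal."""
    x = np.zeros_like(b)
    r = b - A @ x                        # residual
    z = M_inv_diag * r                   # preconditioned residual
    p = z.copy()                         # search direction
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p        # conjugate direction update
        rz = rz_new
    return x
```

In an FEM forward problem the matrix-vector products dominate the cost, which is exactly what subspace recycling across repeated solves is designed to amortize.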

Relevance: 20.00%

Abstract:

This paper presents a family of algorithms for approximate inference in credal networks (that is, models based on directed acyclic graphs and set-valued probabilities) that contain only binary variables. Such networks can represent incomplete or vague beliefs, lack of data, and disagreements among experts; they can also encode models based on belief functions and possibilistic measures. All algorithms for approximate inference in this paper rely on exact inferences in credal networks based on polytrees with binary variables, as these inferences have polynomial complexity. We are inspired by approximate algorithms for Bayesian networks; thus the Loopy 2U algorithm resembles Loopy Belief Propagation, while the Iterated Partial Evaluation and Structured Variational 2U algorithms are, respectively, based on Localized Partial Evaluation and variational techniques. (C) 2007 Elsevier Inc. All rights reserved.
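The notion of set-valued probabilities can be illustrated on a tiny two-node network with interval-valued tables: exact lower and upper bounds on P(B) follow from enumerating the vertices of the intervals. This brute-force enumeration is only a toy stand-in for the polynomial 2U-style algorithms the paper builds on, and the intervals below are made up for illustration:

```python
import itertools

def credal_bounds(p_a, p_b_given_a, p_b_given_not_a):
    """Lower/upper bounds on P(B) in a two-node credal network A -> B,
    obtained by enumerating the extreme points of the interval-valued tables."""
    vals = []
    for pa, pba, pbna in itertools.product(p_a, p_b_given_a, p_b_given_not_a):
        vals.append(pa * pba + (1 - pa) * pbna)   # total probability of B
    return min(vals), max(vals)
```

With P(A) in [0.2, 0.4], P(B|A) in [0.7, 0.9] and P(B|not A) in [0.1, 0.3], the posterior probability of B is bounded by [0.22, 0.54]; exact inference on binary polytrees computes such bounds without exhaustive enumeration.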

Relevance: 20.00%

Abstract:

Modern integrated circuit (IC) design is characterized by a strong trend toward integrating Intellectual Property (IP) cores into complex system-on-chip (SoC) architectures. These cores require thorough verification of their functionality to avoid erroneous behavior in the final device. Formal verification methods are capable of detecting any design bug, but due to state explosion their use remains limited to small circuits. Alternatively, simulation-based verification can explore hardware descriptions of any size, although the corresponding stimulus generation, as well as the functional coverage definition, must be carefully planned to guarantee its efficacy. In general, static input-space optimization methodologies have shown better efficiency and results than, for instance, Coverage Directed Verification (CDV) techniques, although the two act on different facets of the monitored system and are not mutually exclusive. This work presents a constrained-random simulation-based functional verification methodology in which, on the basis of the Parameter Domains (PD) formalism, irrelevant and invalid test case scenarios are removed from the input space. To this end, a tool was developed to automatically generate PD-based stimuli sources, along with a second tool that generates functional coverage models fitting exactly the PD-based input space. Both enhancements resulted in a notable increase in testbench efficiency compared to testbenches with traditional stimulation and coverage scenarios: a 22% simulation time reduction when generating stimuli with the PD-based stimuli sources (still with a conventional coverage model), and a 56% simulation time reduction when combining the stimuli sources with their corresponding, automatically generated coverage models.
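The core idea of removing invalid scenarios from the input space before stimulus generation, and building the coverage model on exactly the same reduced space, can be sketched as follows. This is a toy stand-in for the PD formalism; the domain names and constraint are illustrative, not from the paper:

```python
import itertools
import random

def pd_stimuli(domains, constraint, n, seed=0):
    """Constrained-random stimulus generation over explicit parameter domains.
    Only combinations accepted by `constraint` are ever drawn, and the
    coverage model is built from exactly that valid space."""
    valid = [c for c in itertools.product(*domains.values())
             if constraint(dict(zip(domains, c)))]
    rng = random.Random(seed)
    coverage = {c: 0 for c in valid}          # one coverage bin per valid scenario
    stimuli = []
    for _ in range(n):
        c = rng.choice(valid)                 # no simulation time wasted on invalid cases
        coverage[c] += 1
        stimuli.append(dict(zip(domains, c)))
    return stimuli, coverage
```

Because invalid combinations never reach the simulator and never appear as coverage bins, every simulated cycle contributes to closing coverage, which is the mechanism behind the reported simulation time reductions.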

Relevance: 20.00%

Abstract:

Starting from the Durbin algorithm in a polynomial space with an inner product defined by the signal autocorrelation matrix, an isometric transformation is defined that maps this vector space into another in which the Levinson algorithm is performed. Alternatively, for iterative algorithms such as discrete all-pole (DAP) modeling, an efficient implementation of a Gohberg-Semencul (GS) relation is developed for inverting the autocorrelation matrix, exploiting its centrosymmetry. In the solution of the autocorrelation equations, the Levinson algorithm is found to be operationally less complex than the GS-inversion-based procedures for up to a minimum of five iterations at various linear prediction (LP) orders.
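The Levinson(-Durbin) recursion referred to above solves the Toeplitz autocorrelation normal equations in O(p²) operations instead of O(p³) for a general solver; a minimal sketch:

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson-Durbin recursion: solves the Toeplitz normal equations
    for LP coefficients a (with a[0] = 1) from autocorrelation lags r.
    Returns (a, final prediction error)."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        # reflection coefficient from the current prediction residual
        k = -(r[m] + a[1:m] @ r[m - 1:0:-1]) / err
        a[1:m] += k * a[m - 1:0:-1]     # update interior coefficients
        a[m] = k
        err *= (1.0 - k * k)            # prediction error shrinks each order
    return a, err
```

Each order-m step reuses the order-(m-1) solution, which is where the quadratic operation count of the complexity comparison in the abstract comes from.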