63 results for Interface algorithms


Relevance:

20.00%

Publisher:

Abstract:

This technical note develops information filter and array algorithms for a linear minimum mean square error estimator of discrete-time Markovian jump linear systems. A numerical example for a two-mode Markovian jump linear system is provided to show the advantage of using array algorithms to filter this class of systems.
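
As background for the information-filter formulation mentioned above, the following is a minimal sketch of the standard information-form measurement update for a single operating mode; it is not the authors' MJLS-specific array algorithm, and the matrix names and single-mode setting are assumptions for illustration only.

import numpy as np

def information_update(Y, y, H, R, z):
    """Standard information-filter measurement update (single mode).

    Y : information matrix (inverse covariance), shape (n, n)
    y : information vector, Y @ x_hat, shape (n,)
    H : measurement matrix, shape (m, n)
    R : measurement noise covariance, shape (m, m)
    z : measurement, shape (m,)
    """
    Rinv = np.linalg.inv(R)
    Y_new = Y + H.T @ Rinv @ H          # information matrix update
    y_new = y + H.T @ Rinv @ z          # information vector update
    return Y_new, y_new

# Recover the state estimate and covariance when needed:
# x_hat = np.linalg.solve(Y_new, y_new); P = np.linalg.inv(Y_new)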

Relevance:

20.00%

Publisher:

Abstract:

The continuous growth of peer-to-peer networks has made them responsible for a considerable portion of current Internet traffic. For this reason, improvements in P2P network resource usage are of central importance. One effective approach for addressing this issue is the deployment of locality algorithms, which allow the system to optimize the peer selection policy for different network situations and thus maximize performance. To date, several locality algorithms have been proposed for use in P2P networks. However, they usually adopt heterogeneous criteria for measuring the proximity between peers, which hinders a coherent comparison between the different solutions. In this paper, we develop a thorough review of popular locality algorithms, based on three main characteristics: the adopted network architecture, the distance metric, and the resulting peer selection algorithm. As a result of this study, we propose a novel and generic taxonomy for locality algorithms in peer-to-peer networks, aiming to enable a better and more coherent evaluation of any individual locality algorithm.
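
The three characteristics used in the review (network architecture, distance metric, peer selection policy) can be made concrete with a small sketch. The example below is hypothetical and not drawn from any surveyed algorithm: it simply ranks candidate peers by a pluggable distance metric (here an invented RTT table) and keeps the closest ones.

from typing import Callable, Dict, List

def select_closest_peers(
    candidates: List[str],
    distance: Callable[[str], float],   # pluggable metric: RTT, AS hops, geographic distance...
    k: int,
) -> List[str]:
    """Generic locality-aware peer selection: keep the k nearest candidates."""
    return sorted(candidates, key=distance)[:k]

# Usage with a hypothetical RTT table (milliseconds).
rtt_ms: Dict[str, float] = {"peerA": 12.0, "peerB": 85.0, "peerC": 40.0}
print(select_closest_peers(list(rtt_ms), distance=lambda peer: rtt_ms[peer], k=2))  # ['peerA', 'peerC']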

Relevance:

20.00%

Publisher:

Abstract:

Ecological niche modelling combines species occurrence points with environmental raster layers in order to obtain models for describing the probabilistic distribution of species. The process to generate an ecological niche model is complex. It requires dealing with a large amount of data, use of different software packages for data conversion, for model generation and for different types of processing and analyses, among other functionalities. A software platform that integrates all requirements under a single and seamless interface would be very helpful for users. Furthermore, since biodiversity modelling is constantly evolving, new requirements are continually being added in terms of functions, algorithms and data formats. Any software intended for use in this area must keep pace with this evolution. In this scenario, a Service-Oriented Architecture (SOA) is an appropriate choice for designing such systems. According to SOA best practices and methodologies, the design of a reference business process must be performed prior to the architecture definition. The purpose is to understand the complexities of the process (business process in this context refers to the ecological niche modelling problem) and to design an architecture able to offer a comprehensive solution, called a reference architecture, that can be further detailed when implementing specific systems. This paper presents a reference business process for ecological niche modelling, as part of a larger work focused on the definition of a reference architecture based on SOA concepts that will be used to evolve the openModeller software package for species modelling. The basic steps performed while developing a model are described, highlighting important aspects based on the knowledge of modelling experts. In order to illustrate the steps defined for the process, an experiment was developed, modelling the distribution of Ouratea spectabilis (Mart.) Engl. (Ochnaceae) using openModeller. As a consequence of the knowledge gained with this work, many desirable improvements to the modelling software packages have been identified and are presented. Also, a discussion on the potential for large-scale experimentation in ecological niche modelling is provided, highlighting opportunities for research. The results obtained are very important for those involved in the development of modelling tools and systems, for requirements analysis and for providing insight into new features and trends for this category of systems. They can also be very helpful for beginners in modelling research, who can use the process and the experiment example as a guide to this complex activity. (c) 2008 Elsevier B.V. All rights reserved.
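
To make the occurrence-points-plus-environmental-layers workflow concrete, here is a minimal, self-contained sketch of one of the simplest niche modelling approaches, a BIOCLIM-style environmental envelope. It illustrates only the general fit-and-project steps of the process; it is not openModeller's implementation, and the toy arrays are invented.

import numpy as np

def fit_envelope(occ_env: np.ndarray) -> tuple:
    """Fit a simple environmental-envelope (BIOCLIM-style) model.

    occ_env: array of shape (n_points, n_vars) with environmental values
    sampled at the species occurrence points.
    Returns per-variable (min, max) bounds.
    """
    return occ_env.min(axis=0), occ_env.max(axis=0)

def project_envelope(bounds, env_stack: np.ndarray) -> np.ndarray:
    """Project the model onto a raster stack of shape (n_vars, rows, cols);
    returns a boolean suitability map (inside the envelope on every variable)."""
    lo, hi = bounds
    inside = [(env_stack[i] >= lo[i]) & (env_stack[i] <= hi[i])
              for i in range(env_stack.shape[0])]
    return np.logical_and.reduce(inside)

# Toy usage: 2 environmental variables on a 3x3 grid, 4 occurrence samples.
env_stack = np.random.rand(2, 3, 3)
occ_env = np.array([[0.2, 0.4], [0.3, 0.5], [0.25, 0.45], [0.35, 0.55]])
print(project_envelope(fit_envelope(occ_env), env_stack))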

Relevance:

20.00%

Publisher:

Abstract:

In this paper, a computational implementation of an evolutionary algorithm (EA) is presented to tackle the problem of reconfiguring radial distribution systems. The developed module considers power quality indices, such as long-duration interruptions and customer process disruptions due to voltage sags, by using the Monte Carlo simulation method. Power quality costs are modeled into the mathematical problem formulation and added to the cost of network losses. In the proposed EA codification, a decimal representation is used. The EA operators considered for the reconfiguration algorithm, namely selection, recombination and mutation, are analyzed herein. Several selection procedures are examined, namely tournament, elitism and a mixed technique using both elitism and tournament. The recombination operator was developed by considering a chromosome structure representation that maps the network branches and system radiality, and another structure that takes into account the network topology and the feasibility of network operation to exchange genetic material. The topologies of the initial population are randomly generated, with radial configurations produced through the Prim and Kruskal algorithms, which rapidly build minimum spanning trees. (C) 2009 Elsevier B.V. All rights reserved.
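
As an illustration of how radial initial topologies can be generated from spanning trees, the sketch below draws a random spanning tree of a small network with a randomized Kruskal procedure and union-find. The 5-bus branch list is invented, and this is not the paper's codification or its EA operators.

import random

def find(parent, v):
    while parent[v] != v:
        parent[v] = parent[parent[v]]   # path compression
        v = parent[v]
    return v

def random_radial_configuration(n_buses, branches):
    """Pick a random spanning tree (radial configuration) from the branch list
    using a randomized Kruskal procedure with union-find."""
    parent = list(range(n_buses))
    edges = branches[:]
    random.shuffle(edges)               # random order stands in for random weights
    tree = []
    for u, v in edges:
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:                    # adding this branch keeps the network radial
            parent[ru] = rv
            tree.append((u, v))
    return tree                         # n_buses - 1 branches if the graph is connected

# Toy 5-bus network with 7 candidate branches.
branches = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2), (1, 3), (2, 4)]
print(random_radial_configuration(5, branches))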

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a family of algorithms for approximate inference in credal networks (that is, models based on directed acyclic graphs and set-valued probabilities) that contain only binary variables. Such networks can represent incomplete or vague beliefs, lack of data, and disagreements among experts; they can also encode models based on belief functions and possibilistic measures. All algorithms for approximate inference in this paper rely on exact inferences in credal networks based on polytrees with binary variables, as these inferences have polynomial complexity. We are inspired by approximate algorithms for Bayesian networks; thus the Loopy 2U algorithm resembles Loopy Belief Propagation, while the Iterated Partial Evaluation and Structured Variational 2U algorithms are, respectively, based on Localized Partial Evaluation and variational techniques. (C) 2007 Elsevier Inc. All rights reserved.
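
Since Loopy 2U is described as resembling Loopy Belief Propagation, a compact sketch of classical loopy belief propagation on a pairwise model with binary variables is given below as a point of reference only; it works with precise (not set-valued) probabilities, uses an invented three-node toy model, and is not the 2U algorithm itself.

import numpy as np

def loopy_bp(unaries, pairwise, edges, n_iters=50):
    """Classical loopy belief propagation for binary variables on a pairwise model.

    unaries:  dict node -> array of shape (2,)    (node potentials)
    pairwise: dict edge -> array of shape (2, 2)  (edge potentials, indexed [x_i, x_j])
    edges:    list of (i, j) pairs
    Returns approximate marginals per node.
    """
    msgs = {(i, j): np.ones(2) for (a, b) in edges for (i, j) in [(a, b), (b, a)]}
    neighbors = {n: [] for n in unaries}
    for i, j in edges:
        neighbors[i].append(j)
        neighbors[j].append(i)
    for _ in range(n_iters):
        new = {}
        for (i, j) in msgs:
            psi = pairwise[(i, j)] if (i, j) in pairwise else pairwise[(j, i)].T
            inc = np.ones(2)
            for k in neighbors[i]:
                if k != j:
                    inc = inc * msgs[(k, i)]      # product of incoming messages
            m = psi.T @ (unaries[i] * inc)        # marginalize over x_i
            new[(i, j)] = m / m.sum()
        msgs = new
    beliefs = {}
    for n in unaries:
        b = unaries[n].copy()
        for k in neighbors[n]:
            b = b * msgs[(k, n)]
        beliefs[n] = b / b.sum()
    return beliefs

# Toy 3-node cycle with binary variables and a mildly attractive coupling.
unaries = {0: np.array([0.7, 0.3]), 1: np.array([0.4, 0.6]), 2: np.array([0.5, 0.5])}
attract = np.array([[1.2, 0.8], [0.8, 1.2]])
edges = [(0, 1), (1, 2), (0, 2)]
pairwise = {e: attract for e in edges}
print(loopy_bp(unaries, pairwise, edges))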

Relevance:

20.00%

Publisher:

Abstract:

Nanomaterials have triggered excitement in both fundamental science and technological applications in several fields. However, the same characteristic high interface area that is responsible for their unique properties causes unconventional instability, often leading to local collapse during application. Thermodynamically, this can be attributed to an increased contribution of the interface to the free energy, activating phenomena such as sintering and grain growth. The lack of reliable interface energy data has restricted the development of conceptual models to allow the control of nanoparticle stability on a thermodynamic basis. Here we introduce a novel and accessible methodology to measure the interface energy of nanoparticles, exploiting the heat released during sintering to establish a quantitative relation between the solid-solid and solid-vapor interface energies. We applied this method to MgO and ZnO nanoparticles and determined that the ratio between the solid-solid and solid-vapor interface energies is 1.1 for MgO and 0.7 for ZnO. We then discuss how this ratio is responsible for a thermodynamically metastable state that may prevent the collapse of nanoparticles and, therefore, may be used as a tool to design long-term stable nanoparticles.
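
The quantitative relation itself is not reproduced in the abstract; as a hedged sketch, sintering calorimetry is usually interpreted with an energy balance of the general form below, which links the measured heat to the interface areas destroyed and created (an illustrative form, not necessarily the authors' exact expression):

\[
  Q_{\text{released}} \;\approx\; \gamma_{sv}\,\Delta A_{sv} \;-\; \gamma_{ss}\,\Delta A_{ss},
\]

so that measuring the released heat together with the eliminated solid-vapor area and the created solid-solid area constrains the ratio \(\gamma_{ss}/\gamma_{sv}\).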

Relevance:

20.00%

Publisher:

Abstract:

Controlling the phase stability of ZrO2 nanoparticles is of major importance in the development of new ZrO2-based nanotechnologies. Because in nanoparticles the surface accounts for a larger fraction of the total atoms, the relative phase stability can be controlled through the surface composition, which can be tuned by the surface excess of one of the components of the system. The objective of this work is to delineate a relationship between the surface excess (or solid solution) of MgO relative to ZrO2 and the polymorphic stability of (ZrO2)(1-x)-(MgO)(x) nanopowders, where 0.0 <= x <= 0.6. The nanopowders were prepared by a liquid precursor method at 500 degrees C and characterized by N-2 adsorption (BET), X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), and Raman spectroscopy. For pure ZrO2 samples, both tetragonal and monoclinic polymorphs were detected, as expected from the literature. For MgO molar fractions varying from 0.05 to 0.10, extensive solid solution could not be detected, and a ZrO2 surface energy reduction, caused by the Mg surface excess detected by XPS, promoted thermodynamic stabilization of the tetragonal polymorph relative to the monoclinic one. For MgO molar fractions higher than 0.10 and up to 0.40, Mg solid solution could be detected and induced cubic phase stabilization. MgO periclase was observed only at x = 0.6. A discussion based on the relationship between surface excess, surface energy, and polymorph stability is presented.

Relevance:

20.00%

Publisher:

Abstract:

The design, construction, and characterization of a portable opto-coupled potentiostat are presented. The potentiostat is battery-powered and managed by a microcontroller, and it implements cyclic voltammetry (CV) using suitable sensor electrodes. Its opto-coupling permits a wide range of current measurements, from mA down to nA. Two software interfaces were developed to perform the CV measurements: a virtual instrument for a personal computer (PC) and a C-based interface for a personal digital assistant (PDA). The potentiostat has been evaluated by detection of potassium ferrocyanide in KCl medium, with both macro- and microelectrodes. There was good agreement between the instrumental results and those obtained with commercial equipment.
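
To illustrate the kind of staircase potential waveform a microcontroller-driven potentiostat applies during cyclic voltammetry, here is a small sketch; the potential limits, step size and scan rate are invented values, and the code does not reflect the actual firmware or the PC/PDA interfaces described.

def cyclic_voltammetry_waveform(e_start, e_vertex, step, scan_rate):
    """Generate one CV cycle as (time_s, potential_V) pairs: a staircase ramp
    from e_start up to e_vertex and back, at the given scan rate (V/s).
    Assumes e_vertex > e_start and step > 0."""
    dt = step / scan_rate                            # dwell time per potential step
    n_up = int(round((e_vertex - e_start) / step))
    up = [e_start + i * step for i in range(n_up + 1)]
    down = up[-2::-1]                                # reverse sweep, vertex not repeated
    potentials = up + down
    return [(i * dt, e) for i, e in enumerate(potentials)]

# Hypothetical sweep: 0 V to +0.6 V and back, 10 mV steps, 50 mV/s.
for t, e in cyclic_voltammetry_waveform(0.0, 0.6, 0.01, 0.05)[:5]:
    print(f"t = {t:.2f} s, E = {e:.3f} V")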

Relevance:

20.00%

Publisher:

Abstract:

The flowshop scheduling problem with blocking in-process is addressed in this paper. In this environment, there are no buffers between successive machines; therefore, intermediate queues of jobs waiting in the system for their next operations are not allowed. Heuristic approaches are proposed to minimize the total tardiness criterion. A constructive heuristic that explores specific characteristics of the problem is presented. Moreover, a GRASP-based heuristic is proposed and coupled with a path relinking strategy to search for better outcomes. Computational tests are presented, and the comparisons made with an adaptation of the NEH algorithm and with a branch-and-bound algorithm indicate that the new approaches are promising. (c) 2007 Elsevier Ltd. All rights reserved.
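
The objective minimized by these heuristics can be evaluated with the standard departure-time recursion for blocking flowshops. The sketch below computes the total tardiness of a given job permutation under blocking; the processing times and due dates are invented, and this is the objective evaluation only, not the proposed constructive or GRASP heuristics.

def total_tardiness_blocking(perm, p, due):
    """Total tardiness of permutation `perm` in a blocking flowshop.

    p[i][j] : processing time of job j on machine i (machines 0..m-1)
    due[j]  : due date of job j
    Uses the departure-time recursion: with no buffers, a job cannot leave a
    machine until the next machine becomes free.
    """
    m = len(p)
    prev = [0.0] * (m + 1)                 # departure times of the previous job
    tardiness = 0.0
    for j in perm:
        cur = [0.0] * (m + 1)
        cur[0] = prev[1]                   # job starts on machine 0 once it is freed
        for i in range(m - 1):
            cur[i + 1] = max(cur[i] + p[i][j], prev[i + 2])   # blocked until next machine frees
        cur[m] = cur[m - 1] + p[m - 1][j]  # last machine never blocks
        tardiness += max(0.0, cur[m] - due[j])
        prev = cur
    return tardiness

# Toy instance: 2 machines, 3 jobs.
p = [[3, 2, 4],      # machine 0
     [2, 5, 1]]      # machine 1
print(total_tardiness_blocking([0, 1, 2], p, due=[5, 8, 9]))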

Relevance:

20.00%

Publisher:

Abstract:

When building genetic maps, it is necessary to choose from several marker ordering algorithms and criteria, and the choice is not always simple. In this study, we evaluate the efficiency of the algorithms try (TRY), seriation (SER), rapid chain delineation (RCD), recombination counting and ordering (RECORD) and unidirectional growth (UG), as well as the criteria PARF (product of adjacent recombination fractions), SARF (sum of adjacent recombination fractions), SALOD (sum of adjacent LOD scores) and LHMC (likelihood through hidden Markov chains), used with the RIPPLE algorithm for error verification, in the construction of genetic linkage maps. A linkage map of a hypothetical diploid and monoecious plant species was simulated, containing one linkage group and 21 markers with a fixed distance of 3 cM between them. In all, 700 F(2) populations were randomly simulated with 100 and 400 individuals and with different combinations of dominant and co-dominant markers, as well as 10 and 20% of missing data. The simulations showed that, in the presence of co-dominant markers only, any combination of algorithm and criteria may be used, even for a reduced population size. In the case of a smaller proportion of dominant markers, any of the algorithms and criteria investigated (except SALOD) may be used. In the presence of high proportions of dominant markers and smaller samples (around 100), the probability of linkage in repulsion between them increases and, in this case, the use of the algorithms TRY and SER associated with RIPPLE under the LHMC criterion provides better results. Heredity (2009) 103, 494-502; doi:10.1038/hdy.2009.96; published online 29 July 2009
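
Among the ordering criteria compared, SARF is the simplest to state: the sum of recombination fractions between adjacent markers in a candidate order, with lower values preferred. The sketch below evaluates it from a pairwise recombination-fraction matrix; the 4-marker matrix is a small invented example, not the simulated map data.

def sarf(order, rf):
    """Sum of adjacent recombination fractions (SARF) for a marker order.

    order : sequence of marker indices
    rf    : symmetric matrix, rf[a][b] = pairwise recombination fraction
    Lower SARF values indicate better candidate orders.
    """
    return sum(rf[a][b] for a, b in zip(order, order[1:]))

# Invented 4-marker example.
rf = [[0.00, 0.05, 0.12, 0.20],
      [0.05, 0.00, 0.06, 0.15],
      [0.12, 0.06, 0.00, 0.08],
      [0.20, 0.15, 0.08, 0.00]]
print(sarf([0, 1, 2, 3], rf))   # 0.05 + 0.06 + 0.08 = 0.19
print(sarf([0, 2, 1, 3], rf))   # 0.12 + 0.06 + 0.15 = 0.33 (worse order)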

Relevance:

20.00%

Publisher:

Abstract:

A glycosylphosphatidylinositol (GPI)-anchored enzyme (rat osseous plate alkaline phosphatase, OAP) was studied as a monolayer (pure and mixed with lipids) at the air-water interface. Surface pressure and surface potential-area isotherms showed that the enzyme forms a stable monolayer and exhibits a liquid-expanded state even at surface pressures as high as 30 mN m(-1). Isotherms for the mixed dimyristoylphosphatidic acid (DMPA)-OAP monolayer showed the absence of the liquid-expanded/liquid-condensed phase transition observed for the pure DMPA monolayer. In both cases, pure or mixed monolayer, the enzyme preserves its native conformation under compression at the air-water interface, as observed from in situ p-polarized Fourier transform infrared reflection-absorption spectroscopy (FT-IRRAS) measurements. Changes in the orientation and conformation of the enzyme due to the presence or absence of DMPA, as well as to surface compression, are discussed. (C) 2008 Published by Elsevier Inc.

Relevance:

20.00%

Publisher:

Abstract:

The Jacobsen catalyst, Mn(salen), was immobilized in a chitosan membrane. The obtained Mn(salen)-Chit was characterized by thermogravimetric analysis (TG), differential thermal analysis (DTA), differential scanning calorimetry (DSC), infrared spectroscopy (FT-IR), degree of N-acetylation by (1)H NMR, and UV-vis spectroscopy. The UV-vis absorption spectrum of the encapsulated catalyst displayed the typical bands of the Jacobsen catalyst, and the FT-IR spectrum presented an absorption band characteristic of the imines present in the Jacobsen catalyst. The chitosan membranes were used, in a biphasic system, as a catalytic barrier between two different phases: an organic substrate phase (cyclooctene or styrene) and an aqueous solution of either m-CPBA, t-BuOOH or H(2)O(2), dispensing with the need for phase transfer agents and leading to better product yields compared with the catalyst in homogeneous medium. This new catalyst did not leach from the support and was reused many times, leading to high turnover frequencies. (C) 2009 Elsevier B.V. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

This paper proposes the use of the q-Gaussian mutation with self-adaptation of the shape of the mutation distribution in evolutionary algorithms. The shape of the q-Gaussian mutation distribution is controlled by a real parameter q. In the proposed method, the real parameter q of the q-Gaussian mutation is encoded in the chromosome of individuals and hence is allowed to evolve during the evolutionary process. In order to test the new mutation operator, evolution strategy and evolutionary programming algorithms with self-adapted q-Gaussian mutation generated from anisotropic and isotropic distributions are presented. The theoretical analysis of the q-Gaussian mutation is also provided. In the experimental study, the q-Gaussian mutation is compared to Gaussian and Cauchy mutations in the optimization of a set of test functions. Experimental results show the efficiency of the proposed method of self-adapting the mutation distribution in evolutionary algorithms.
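
A minimal sketch of the core idea follows: the shape parameter q travels with the individual and is mutated alongside the object variables and the step size. The q-Gaussian sampler uses the generalized Box-Muller transform (valid for q < 3); the update rule and bounds for q and the log-normal step-size rule are assumptions for illustration, not the paper's exact self-adaptation scheme.

import numpy as np

def q_log(x, q):
    """q-logarithm; reduces to the natural log as q -> 1."""
    return np.log(x) if abs(q - 1.0) < 1e-9 else (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_gaussian(q, size):
    """Standard q-Gaussian deviates via the generalized Box-Muller method (q < 3)."""
    q_prime = (1.0 + q) / (3.0 - q)
    u1, u2 = np.random.rand(size), np.random.rand(size)
    return np.sqrt(-2.0 * q_log(u1, q_prime)) * np.cos(2.0 * np.pi * u2)

def mutate(x, sigma, q, tau=0.2):
    """Self-adaptive q-Gaussian mutation: q (and sigma) travel with the individual."""
    q_new = np.clip(q + tau * np.random.randn(), 0.5, 2.5)   # assumed update rule and bounds for q
    sigma_new = sigma * np.exp(tau * np.random.randn())      # assumed log-normal step-size update
    x_new = x + sigma_new * q_gaussian(q_new, x.size)        # isotropic q-Gaussian perturbation
    return x_new, sigma_new, q_new

# Toy usage on a 5-dimensional individual.
x, sigma, q = np.zeros(5), 0.1, 1.0
print(mutate(x, sigma, q))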

Relevance:

20.00%

Publisher:

Abstract:

Objective: In this study, we assessed how often patients manifesting a myocardial infarction (MI) would not be considered candidates for intensive lipid-lowering therapy based on the current guidelines. Methods: In 355 consecutive patients manifesting ST-elevation MI (STEMI), admission plasma C-reactive protein (CRP) was measured, and the Framingham risk score (FRS), PROCAM risk score, Reynolds risk score, ASSIGN risk score, QRISK, and SCORE algorithms were applied. Cardiac computed tomography and carotid ultrasound were performed to assess the coronary artery calcium score (CAC), carotid intima-media thickness (cIMT) and the presence of carotid plaques. Results: Less than 50% of STEMI patients would be identified as being at high risk before the event by any of these algorithms. With the exception of FRS (9%), all other algorithms would assign low risk to about half of the enrolled patients. Plasma CRP was <1.0 mg/L in 70% and >2 mg/L in 14% of the patients. The average cIMT was 0.8 +/- 0.2 mm, and it was >= 1.0 mm in only 24% of patients. Carotid plaques were found in 74% of patients. CAC > 100 was found in 66% of patients. By adding CAC > 100 plus the presence of carotid plaque, a high-risk condition would be identified in 100% of the patients using any of the above-mentioned algorithms. Conclusion: More than half of patients manifesting STEMI would not be considered candidates for intensive preventive therapy by the current clinical algorithms. The addition of anatomical parameters such as CAC and the presence of carotid plaques can substantially reduce the underestimation of CVD risk. (C) 2010 Elsevier Ireland Ltd. All rights reserved.