777 results for Silver Pohlig Hellman algorithm
Abstract:
Conventional procedures employed in the modeling of viscoelastic properties of polymers rely on the determination of the polymer's discrete relaxation spectrum from experimentally obtained data. In the past decades, several analytical regression techniques have been proposed to determine an explicit equation which describes the measured spectra. Taking a different approach, the procedure introduced herein constitutes a simulation-based computational optimization technique based on a non-deterministic search method arising from the field of evolutionary computation. Instead of comparing numerical results, the purpose of this paper is to highlight some subtle differences between both strategies and focus on which properties of the exploited technique emerge as new possibilities for the field. To illustrate this, essayed cases show how the employed technique can outperform conventional approaches in terms of fitting quality. Moreover, in some instances, it produces equivalent results with much fewer fitting parameters, which is convenient for computational simulation applications. The problem formulation and the rationale of the highlighted method are discussed herein and constitute the main intended contribution. (C) 2009 Wiley Periodicals, Inc. J Appl Polym Sci 113: 122-135, 2009
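The evolutionary idea described above can be sketched as follows: a hypothetical (1+1) evolution strategy fits a discrete relaxation spectrum (Prony series) G(t) = sum_i g_i exp(-t/tau_i) to measured data. The mode count, mutation settings and synthetic data below are illustrative assumptions, not the paper's actual formulation.

```python
import math
import random

def prony(t, params):
    # discrete relaxation spectrum: params is a list of (g_i, tau_i) modes
    return sum(g * math.exp(-t / tau) for g, tau in params)

def fitness(params, data):
    # sum of squared errors against the measured relaxation curve
    return sum((prony(t, params) - G) ** 2 for t, G in data)

def evolve(data, n_modes=2, iters=2000, seed=0):
    # greedy (1+1) evolution strategy: mutate, keep the candidate if better
    rng = random.Random(seed)
    best = [(rng.uniform(0.1, 2.0), rng.uniform(0.1, 10.0)) for _ in range(n_modes)]
    best_err = fitness(best, data)
    for _ in range(iters):
        cand = [(abs(g + rng.gauss(0, 0.05)), abs(tau + rng.gauss(0, 0.2)) or 1e-6)
                for g, tau in best]
        err = fitness(cand, data)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err

# synthetic "measurement" generated from two known relaxation modes
true_modes = [(1.0, 1.0), (0.5, 5.0)]
data = [(t / 4, prony(t / 4, true_modes)) for t in range(1, 41)]
fit, err = evolve(data)
```

Unlike analytical regression, nothing here requires the cost function to be differentiable, which is the flexibility the abstract emphasizes.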
Abstract:
In 2006 the Route load balancing algorithm was proposed and compared to other techniques aiming at optimizing process allocation in grid environments. This algorithm schedules tasks of parallel applications considering computer neighborhoods (where the distance is defined by the network latency). Route presents good results for large environments, although there are cases where neighbors have neither enough computational capacity nor a communication system capable of serving the application. In those situations Route migrates tasks until they stabilize in a grid area with enough resources. This migration may take a long time, which reduces the overall performance. In order to improve this stabilization time, this paper proposes RouteGA (Route with Genetic Algorithm support), which considers historical information on parallel application behavior as well as computer capacities and load to optimize the scheduling. This information is extracted by using monitors and summarized in a knowledge base used to quantify the occupation of tasks. Afterwards, this information is used to parameterize a genetic algorithm responsible for optimizing the task allocation. Results confirm that RouteGA outperforms the load balancing carried out by the original Route, which had previously outperformed other scheduling algorithms from the literature.
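A minimal sketch of the genetic-algorithm component, assuming a simplified model where each task has a fixed cost and each machine a fixed speed; the historical-behavior and latency terms described in the abstract are abstracted away, and all names are illustrative, not the actual RouteGA implementation.

```python
import random

def makespan(assign, task_cost, machine_speed):
    # completion time of the most loaded machine under this assignment
    load = [0.0] * len(machine_speed)
    for task, m in enumerate(assign):
        load[m] += task_cost[task] / machine_speed[m]
    return max(load)

def ga_schedule(task_cost, machine_speed, pop_size=30, gens=200, seed=1):
    rng = random.Random(seed)
    n, m = len(task_cost), len(machine_speed)
    # each individual maps task index -> machine index
    pop = [[rng.randrange(m) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda a: makespan(a, task_cost, machine_speed))
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]            # one-point crossover
            if rng.random() < 0.2:                 # mutation: reassign one task
                child[rng.randrange(n)] = rng.randrange(m)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda a: makespan(a, task_cost, machine_speed))
```

In the real system the fitness would also penalize migrations and network latency between neighborhoods.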
Abstract:
Purpose - The purpose of this paper is to develop a novel unstructured simulation approach for injection molding processes described by the Hele-Shaw model. Design/methodology/approach - The scheme involves dual dynamic meshes with active and inactive cells determined from an initial background pointset. The quasi-static pressure solution in each timestep for this evolving unstructured mesh system is approximated using a control volume finite element method formulation coupled to a corresponding modified volume of fluid method. The flow is considered to be isothermal and non-Newtonian. Findings - Supporting numerical tests and performance studies for polystyrene described by Carreau, Cross, Ellis and Power-law fluid models are conducted. Results for the present method are shown to be comparable to those from other methods for both Newtonian fluid and polystyrene fluid injected in different mold geometries. Research limitations/implications - With respect to the methodology, the background pointset infers a mesh that is dynamically reconstructed here, and there are a number of efficiency issues and improvements that would be relevant to industrial applications. For instance, one can use the pointset to construct special bases and invoke a so-called "meshless" scheme using the basis. This would require some interesting strategies to deal with the dynamic point enrichment of the moving front that could benefit from the present front treatment strategy. There are also issues related to mass conservation and fill-time errors that might be addressed by introducing suitable projections. The general question of "rate of convergence" of these schemes requires analysis. Numerical results here suggest first-order accuracy and are consistent with the approximations made, but theoretical results are not available yet for these methods.
Originality/value - This novel unstructured simulation approach involves dual meshes with active and inactive cells determined from an initial background pointset: local active dual patches are constructed "on-the-fly" for each "active point" to form a dynamic virtual mesh of active elements that evolves with the moving interface.
Abstract:
In this work a systematic study of the dependence of the structural, electronic, and vibrational properties on nanoparticle size is performed. Based on our total energy calculations we identified three characteristic regimes associated with the nanoparticle's dimensions: (i) below 1.5 nm (100 atoms), where remarkable molecular aspects are observed; (ii) between 1.5 and 2.0 nm (100 and 300 atoms), where the molecular behavior is influenced by the inner core crystal properties; and (iii) above 2.0 nm (more than 300 atoms), where the crystal properties are preponderant. In all considered regimes the nanoparticle's surface modulates its properties. This modulation decreases as the nanoparticle's size increases.
Abstract:
A novel cryptography method based on the Lorenz attractor chaotic system is presented. The proposed algorithm is secure and fast, making it practical for general use. We introduce the chaotic operation mode, which provides an interaction among the password, the message and a chaotic system. It ensures that the algorithm yields a secure codification, even if the nature of the chaotic system is known. The algorithm has been implemented in two versions: one sequential and slow and the other parallel and fast. Our algorithm assures the integrity of the ciphertext (we know if it has been altered, which is not assured by traditional algorithms) and consequently its authenticity. Numerical experiments are presented and discussed, showing the behavior of the method in terms of security and performance. The fast version of the algorithm has a performance comparable to AES, a popular cryptographic algorithm in widespread commercial use, but it is more secure, which makes it immediately suitable for general-purpose cryptography applications. An internet page has been set up, which enables readers to test the algorithm and also to try to break the cipher.
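The general idea of deriving a keystream from a chaotic system can be sketched as below: a password seeds the initial conditions of the Lorenz system, the system is integrated, and the trajectory is quantized into bytes XORed with the message. This is a toy illustration, not the paper's actual cipher (which additionally provides integrity), and it offers no real security.

```python
import hashlib

def lorenz_keystream(password, n, dt=0.01):
    # seed initial conditions from the password digest (illustrative choice)
    h = hashlib.sha256(password.encode()).digest()
    x = 1.0 + h[0] / 256.0
    y = 1.0 + h[1] / 256.0
    z = 20.0 + h[2] / 256.0
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    def step(x, y, z):
        # one forward-Euler step of the Lorenz equations
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        return x + dt * dx, y + dt * dy, z + dt * dz
    for _ in range(200):                  # discard the transient
        x, y, z = step(x, y, z)
    out = []
    while len(out) < n:
        x, y, z = step(x, y, z)
        out.append(int(abs(x) * 1e6) % 256)   # quantize one byte per step
    return bytes(out)

def encrypt(password, message):
    ks = lorenz_keystream(password, len(message))
    return bytes(m ^ k for m, k in zip(message, ks))

decrypt = encrypt  # an XOR stream cipher is its own inverse
```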
Abstract:
This paper presents a study on wavelets and their characteristics for the specific purpose of serving as a feature extraction tool for speaker verification (SV), considering a Radial Basis Function (RBF) classifier, which is a particular type of Artificial Neural Network (ANN). Examining characteristics such as support-size, frequency and phase responses, amongst others, we show how Discrete Wavelet Transforms (DWTs), particularly the ones which derive from Finite Impulse Response (FIR) filters, can be used to extract important features from a speech signal which are useful for SV. Lastly, an SV algorithm based on the concepts presented is described.
Abstract:
This paper proposes an improved voice activity detection (VAD) algorithm using wavelets and a support vector machine (SVM) for European Telecommunications Standards Institute (ETSI) adaptive multi-rate (AMR) narrow-band (NB) and wide-band (WB) speech codecs. First, based on the wavelet transform, the original IIR filter bank and pitch/tone detector are implemented, respectively, via the wavelet filter bank and the wavelet-based pitch/tone detection algorithm. The wavelet filter bank can divide the input speech signal into several frequency bands so that the signal power level at each sub-band can be calculated. In addition, the background noise level can be estimated in each sub-band by using the wavelet de-noising method. The wavelet filter bank is also derived to detect correlated complex signals like music. The proposed algorithm then applies the SVM to train an optimized non-linear VAD decision rule involving the sub-band power, noise level, pitch period, tone flag, and complex signals warning flag of input speech signals. By the use of the trained SVM, the proposed VAD algorithm can produce more accurate detection results. Various experimental results on the Aurora speech database with different noise conditions show that the proposed algorithm gives VAD performance considerably superior to that of the AMR-NB VAD Options 1 and 2 and the AMR-WB VAD. (C) 2009 Elsevier Ltd. All rights reserved.
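The sub-band decomposition step can be illustrated with a one-level Haar wavelet split of a frame into an approximation (low) band and a detail (high) band; the plain energy threshold below is a hypothetical stand-in for the trained SVM decision rule, and all names are illustrative.

```python
import math

def haar_split(frame):
    # pairwise scaled sums (low band) and differences (high band)
    approx = [(frame[i] + frame[i + 1]) / math.sqrt(2)
              for i in range(0, len(frame) - 1, 2)]
    detail = [(frame[i] - frame[i + 1]) / math.sqrt(2)
              for i in range(0, len(frame) - 1, 2)]
    return approx, detail

def band_energy(band):
    # mean power of one sub-band
    return sum(v * v for v in band) / max(len(band), 1)

def is_voice(frame, threshold=0.01):
    approx, detail = haar_split(frame)
    # a trained SVM would combine sub-band powers, noise level, pitch and
    # tone flags; here a single energy threshold stands in for that rule
    return band_energy(approx) + band_energy(detail) > threshold
```

Repeating the split on the approximation band yields the multi-level filter bank the abstract describes.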
Abstract:
This paper presents the formulation of a combinatorial optimization problem with the following characteristics: (i) the search space is the power set of a finite set structured as a Boolean lattice; (ii) the cost function forms a U-shaped curve when applied to any lattice chain. This formulation applies to feature selection in the context of pattern recognition. The known approaches for this problem are branch-and-bound algorithms and heuristics that explore the search space only partially. Branch-and-bound algorithms are equivalent to the full search, while heuristics are not. This paper presents a branch-and-bound algorithm that differs from the known ones by exploring the lattice structure and the U-shaped chain curves of the search space. The main contribution of this paper is the architecture of this algorithm, which is based on the representation and exploration of the search space by new lattice properties proven here. Several experiments, with well-known public data, indicate the superiority of the proposed method over sequential floating forward selection (SFFS), a popular heuristic that gives good results in very short computational time. In all experiments, the proposed method achieved better or equal results in similar or even smaller computational time. (C) 2009 Elsevier Ltd. All rights reserved.
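The U-shaped chain property can be exploited roughly as follows: a depth-first subset search stops extending a chain as soon as the cost starts to rise. This is a simplified sketch assuming the cost really is U-shaped along every chain; it is not the paper's full branch-and-bound architecture, and the example cost function is illustrative.

```python
def u_curve_search(elements, cost):
    # search the power set of `elements` for the minimum-cost subset,
    # pruning each chain once the cost stops decreasing
    best_set, best_cost = frozenset(), cost(frozenset())

    def extend(current, current_cost, start):
        nonlocal best_set, best_cost
        for i in range(start, len(elements)):
            nxt = current | {elements[i]}
            c = cost(nxt)
            if c < best_cost:
                best_set, best_cost = nxt, c
            if c <= current_cost:
                # still on the descending side of the U: keep extending
                extend(nxt, c, i + 1)
            # else: along this chain the cost can only keep rising, so prune

    extend(frozenset(), best_cost, 0)
    return best_set, best_cost
```

For feature selection, `cost` would be an error estimate of a classifier trained on the candidate feature subset.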
Abstract:
Large-scale simulations of parts of the brain using detailed neuronal models to improve our understanding of brain functions are becoming a reality with the usage of supercomputers and large clusters. However, the high acquisition and maintenance cost of these computers, including the physical space, air conditioning, and electrical power, limits the number of simulations of this kind that scientists can perform. Modern commodity graphical cards, based on the CUDA platform, contain graphical processing units (GPUs) composed of hundreds of processors that can simultaneously execute thousands of threads and thus constitute a low-cost solution for many high-performance computing applications. In this work, we present a CUDA algorithm that enables the execution, on multiple GPUs, of simulations of large-scale networks composed of biologically realistic Hodgkin-Huxley neurons. The algorithm represents each neuron as a CUDA thread, which solves the set of coupled differential equations that model each neuron. Communication among neurons located in different GPUs is coordinated by the CPU. We obtained speedups of 40 for the simulation of 200k neurons that received random external input and speedups of 9 for a network with 200k neurons and 20M neuronal connections, in a single computer with two graphics boards with two GPUs each, when compared with a modern quad-core CPU. Copyright (C) 2010 John Wiley & Sons, Ltd.
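The per-thread computation (one neuron's set of coupled ODEs) can be sketched in pure Python with a forward-Euler Hodgkin-Huxley step; classic textbook parameters (in mV, ms, mS/cm^2) are assumed, and the CUDA threading and inter-GPU communication are omitted.

```python
import math

def hh_step(V, m, h, n, I, dt=0.01):
    # gating-variable rate functions (classic Hodgkin-Huxley formulation)
    am = 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
    bm = 4.0 * math.exp(-(V + 65.0) / 18.0)
    ah = 0.07 * math.exp(-(V + 65.0) / 20.0)
    bh = 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
    an = 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
    bn = 0.125 * math.exp(-(V + 65.0) / 80.0)
    # membrane currents: gNa=120, gK=36, gL=0.3; C = 1 uF/cm^2
    INa = 120.0 * m ** 3 * h * (V - 50.0)
    IK = 36.0 * n ** 4 * (V + 77.0)
    IL = 0.3 * (V + 54.387)
    dV = I - INa - IK - IL
    # one forward-Euler step for the membrane potential and gates
    return (V + dt * dV,
            m + dt * (am * (1 - m) - bm * m),
            h + dt * (ah * (1 - h) - bh * h),
            n + dt * (an * (1 - n) - bn * n))
```

In the paper's design, one CUDA thread runs this loop for one neuron, and the CPU exchanges synaptic events between GPUs at each communication step.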
Abstract:
One of the key issues in e-learning environments is the possibility of creating and evaluating exercises. However, the lack of tools supporting the authoring and automatic checking of exercises for specific topics (e.g., geometry) drastically reduces the advantages of using e-learning environments on a larger scale, as usually happens in Brazil. This paper describes an algorithm, and a tool based on it, designed for the authoring and automatic checking of geometry exercises. The algorithm dynamically compares the distances between the geometric objects of the student's solution and the template's solution, provided by the author of the exercise. Each solution is a geometric construction which is considered a function receiving geometric objects (input) and returning other geometric objects (output). Thus, for a given problem, if we know one function (construction) that solves the problem, we can compare it to any other function to check whether they are equivalent or not. Two functions are equivalent if, and only if, they have the same output when the same input is applied. If the student's solution is equivalent to the template's solution, then we consider the student's solution as a correct solution. Our software utility provides both authoring and checking tools to work directly on the Internet, together with learning management systems. These tools are implemented using the dynamic geometry software, iGeom, which has been used in a geometry course since 2004 and has a successful track record in the classroom. Empowered with these new features, iGeom simplifies teachers' tasks, solves non-trivial problems in student solutions and helps to increase student motivation by providing feedback in real time. (c) 2008 Elsevier Ltd. All rights reserved.
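The equivalence test described above, treating each construction as a function and comparing outputs on the same inputs, can be sketched as below; the midpoint constructions, sample points and tolerance are illustrative assumptions, not iGeom's actual interface.

```python
def equivalent(f, g, sample_inputs, tol=1e-9):
    # two constructions are considered equivalent when they produce the
    # same output object (here, a 2D point) for every sampled input
    for inp in sample_inputs:
        fa, ga = f(*inp), g(*inp)
        if abs(fa[0] - ga[0]) > tol or abs(fa[1] - ga[1]) > tol:
            return False
    return True

# template construction: midpoint of two points
def template_midpoint(a, b):
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

# a student's differently-expressed but equivalent construction
def student_midpoint(a, b):
    return (a[0] + (b[0] - a[0]) / 2, a[1] + (b[1] - a[1]) / 2)
```

Sampling inputs only approximates the "same output for every input" criterion; a production checker would sample enough configurations to make coincidental agreement unlikely.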
Abstract:
Given two strings A and B of lengths n(a) and n(b), n(a) <= n(b), respectively, the all-substrings longest common subsequence (ALCS) problem obtains, for every substring B' of B, the length of the longest string that is a subsequence of both A and B'. The ALCS problem has many applications, such as finding approximate tandem repeats in strings, solving the circular alignment of two strings and finding the alignment of one string with several others that have a common substring. We present an algorithm to prepare the basic data structure for ALCS queries that takes O(n(a)n(b)) time and O(n(a) + n(b)) space. After this preparation, it is possible to build a matrix of size O(n(b)^2) that allows any LCS length to be retrieved in constant time. Some trade-offs between the space required and the querying time are discussed. To our knowledge, this is the first algorithm in the literature for the ALCS problem. (C) 2007 Elsevier B.V. All rights reserved.
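For reference, the classic O(n(a)n(b)) dynamic program for a single LCS length, which the ALCS algorithm generalizes to every substring B' of B, can be written as:

```python
def lcs_length(A, B):
    # row-by-row LCS dynamic program using O(n(b)) space
    prev = [0] * (len(B) + 1)
    for a in A:
        cur = [0]
        for j, b in enumerate(B, 1):
            # extend a match, or carry the best of the two neighbors
            cur.append(prev[j - 1] + 1 if a == b else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]
```

Running this separately for each of the O(n(b)^2) substrings of B would cost O(n(a)n(b)^3), which is what the single O(n(a)n(b)) ALCS preparation avoids.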
Abstract:
The adsorption of pyridine (py) on Fe, Co, Ni and Ag electrodes was studied using surface-enhanced Raman scattering (SERS) to gain insight into the nature of the adsorbed species. The wavenumber values and relative intensities of the SERS bands were compared to the normal Raman spectrum of the chemically prepared transition metal complexes. Raman spectra of model clusters M(4)(py) (four metal atoms bonded to one py moiety) and M(4)(alpha-pyridil), where M = Ag, Fe, Co or Ni, were calculated by density functional theory (DFT) and used to interpret the experimental SERS results. The similarity of the calculated M(4)(py) spectra with the experimental SERS spectra confirms the molecular adsorption of py on the surface of the metallic electrodes. All these results exclude the formation of adsorbed alpha-pyridil species, as suggested previously. Copyright (C) 2009 John Wiley & Sons, Ltd.
Abstract:
In this work, the surface-enhanced Raman scattering (SERS) spectra of pyridine (py) on thin films of Co and Ni electrodeposited on an Ag electrode activated by oxidation-reduction cycles (ORC) are presented. The SERS spectra from the thin films were compared to those of py on activated bare transition metal electrodes. It was verified that the SERS spectra of py on 3 monolayers (ML)-thick films of Ni and Co presented only bands assignable to the py adsorbed on transition metal surfaces. It was also observed that even for 50 ML-thick transition metal films, the py SERS intensity was ca. 40% of the intensity from the 3 ML-thick films. The relative intensities of the SERS bands depended on the thickness of the films, and for films thicker than 7 ML for Co and 9 ML for Ni they were very similar to those of the bare transition metal electrodes. The transition metal thin films over Ag activated electrodes presented SERS intensities 3 orders of magnitude higher than the ones from bare transition metal electrodes. These films are more suitable to study the adsorption of low Raman cross-section molecules than are ORC-activated transition metal electrodes.
Abstract:
This paper reports the preparation and characterization of poly-{trans-[RuCl(2)(vpy)(4)]-styrene-divinylbenzene} and styrene-divinylbenzene-vinylpyridine filled with nanosilver. These materials were synthesized by non-aqueous polymerization through a chemical reaction using benzoyl peroxide as the initiator. The nanosilver was obtained by chemical reduction using NaBH(4) as the reducing agent and sodium citrate as the stabilizer. The nanometric dimension of the nanosilver was monitored by UV-visible spectroscopy and confirmed through TEM. The morphology was characterized by SEM, and the thermal properties were determined by TGA and DSC. The antimicrobial action of the polymers impregnated with nanosilver was evaluated using two microorganisms, Staphylococcus aureus and Escherichia coli. The antimicrobial activity of the poly-{trans-[RuCl(2)(vpy)(4)]-styrene-divinylbenzene} filled with nanosilver was confirmed by the presence of an inhibition halo of bacterial growth in seeded culture media, but it was not confirmed for the styrene-divinylbenzene-vinylpyridine. The present work suggests that the trans-[RuCl(2)(vpy)(4)] complex facilitates the release of silver ions from the media.
Abstract:
A new electrocatalytically active porphyrin nanocomposite material was obtained by electropolymerization of meso-tetra(4-sulphonatephenyl) porphyrinate manganese(III) complex (MnTPPS) in alkaline solutions containing sub-micromolar concentrations of silver chloride. The modified glassy carbon electrodes efficiently oxidize hydrazine at 10 mV versus Ag/AgCl, dramatically decreasing the overpotential of conventional carbon electrodes. The analytical characteristics of this amperometric sensor coupled with the batch injection analysis (BIA) technique were explored. A wide linear dynamic range (2.5 x 10(-7) to 2.5 x 10(-4) mol L-1), good repeatability (R.S.D. = 0.84%, n = 30), low detection (3.1 x 10(-8) mol L-1) and quantification (1.0 x 10(-7) mol L-1) limits, as well as a very fast sampling frequency (60 determinations per hour), were achieved. (c) 2007 Elsevier B.V. All rights reserved.