759 results for Algorithm fusion
Abstract:
Realistic coupled-channel calculation results for the (18)O + (58,60,64)Ni systems in the bombarding energy range 34.5 <= E(Lab) <= 65 MeV are presented. The overall agreement with existing experimental data is quite good. Our calculations predict an unexpected fusion suppression at above-barrier energies, with an important contribution from the two-neutron ((18)O, (16)O) transfer channel couplings. The sub-barrier fusion enhancement and the above-barrier suppression predicted by the calculations are consistent with the nuclear structure of the Ni region. Comparisons with recently reported similar effects in reactions induced by the (6)He projectile are discussed. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
The traditional reduction methods used to represent the fusion cross sections of different systems fail to completely eliminate the geometrical aspects, such as the heights and radii of the barriers, and the static effects associated with the excess neutrons or protons in weakly bound nuclei. We remedy this by introducing a new dimensionless universal function, which allows the static and dynamic aspects of the breakup coupling effects connected with the excess nucleons to be disentangled. Applying this new reduction procedure to fusion data of several weakly bound systems, we find a systematic suppression of complete fusion above the Coulomb barrier and an enhancement below it. Different behaviors are found for the total fusion cross sections: they are appreciably suppressed in collisions of neutron-halo nuclei, while they are practically unaffected by the breakup coupling in the case of stable weakly bound nuclei. (C) 2009 Elsevier B.V. All rights reserved.
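One common reduction of this kind is based on Wong's barrier-penetration formula; a minimal sketch is given below under the assumption of the standard Wong-based definitions (the paper's exact prescription may differ). Here E is the collision energy and V_B, R_B and hbar*omega are the height, radius and curvature parameter of the Coulomb barrier.

```latex
\[
  x = \frac{E - V_B}{\hbar\omega}, \qquad
  F(x) = \frac{2E}{\hbar\omega R_B^{2}}\,\sigma_F(E),
\]
\[
  F_0(x) = \ln\!\left[1 + e^{2\pi x}\right]
  \quad \text{(universal function obtained from Wong's formula)}.
\]
```

In a scheme of this type, data from strongly bound systems collapse onto F_0(x), and deviations from it isolate the dynamic breakup effects.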
Abstract:
We study the effects of several approximations commonly used in coupled-channel analyses of fusion and elastic scattering cross sections. Our calculations are performed considering couplings to inelastic states in the context of the frozen approximation, which is equivalent to the coupled-channel formalism when dealing with small excitation energies. Our findings indicate that, in some cases, the effect of the approximations on the theoretical cross sections can be larger than the precision of the experimental data.
Abstract:
A new technique to analyze fusion data is developed. From experimental cross sections and results of coupled-channel calculations a dimensionless function is constructed. In collisions of strongly bound nuclei this quantity is very close to a universal function of a variable related to the collision energy, whereas for weakly bound projectiles the effects of breakup coupling are measured by the deviations with respect to this universal function. This technique is applied to collisions of stable and unstable weakly bound isotopes.
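As a rough illustration of how such a dimensionless function can be built from cross-section data, here is a Python sketch assuming the Wong-based definitions given earlier; the barrier parameters and data points are hypothetical, and in the actual method the coupled-channel results also enter the construction.

```python
import numpy as np

# Illustrative sketch, assuming Wong-based definitions (the paper's exact
# prescription may differ). Barrier parameters and data are hypothetical.
V_B, R_B, HW = 30.0, 10.0, 4.0    # MeV, fm, MeV

def fusion_function(E, sigma_mb):
    """Map (energy in MeV, cross section in mb) to the dimensionless (x, F)."""
    sigma_fm2 = 0.1 * sigma_mb            # 1 mb = 0.1 fm^2
    x = (E - V_B) / HW
    F = 2.0 * E * sigma_fm2 / (HW * R_B**2)
    return x, F

def universal(x):
    """Universal fusion function F0(x) obtained from Wong's formula."""
    return np.log1p(np.exp(2.0 * np.pi * x))

E = np.array([26.0, 28.0, 30.0, 32.0, 36.0])      # MeV (hypothetical)
sigma = np.array([0.5, 8.0, 60.0, 180.0, 420.0])  # mb  (hypothetical)
x, F = fusion_function(E, sigma)
print(np.round(F / universal(x), 2))  # deviations from 1 measure breakup effects
```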
Abstract:
We describe how the method of detection of delayed K x-rays produced by the electron capture decay of the residual nuclei can be a powerful tool in the investigation of the effect of the breakup process on the complete fusion (CF) cross-section of weakly bound nuclei at energies close to the Coulomb barrier. This is presently one of the most interesting subjects under investigation in the field of low-energy nuclear reactions, and the difficult experimental task of separating CF from the incomplete fusion (ICF) of one of the breakup fragments can be achieved by the x-ray spectrometry method. We present results for the fusion of the (9)Be + (144)Sm system. Copyright (c) 2008 John Wiley & Sons, Ltd.
Abstract:
The bare nucleus S(E) factors for the (2)H(d, p)(3)H and (2)H(d, n)(3)He reactions have been measured for the first time via the Trojan Horse Method off the proton in (3)He, from 1.5 MeV down to 2 keV. This range overlaps with the relevant region for Standard Big Bang Nucleosynthesis, as well as with the thermal energies of future fusion reactors and of deuterium burning in the pre-main-sequence phase of stellar evolution. This is the first pioneering experiment in the quasi-free regime where the charged spectator is detected. Both the energy dependence and the absolute value of the S(E) factors deviate by more than 15% from available direct data, with new S(0) values of 57.4 +/- 1.8 MeV b for (3)H + p and 60.1 +/- 1.9 MeV b for (3)He + n. None of the existing fitting curves is able to provide the correct slope of the new data over the full range, thus calling for a revision of the theoretical description. This has consequences for the calculation of the reaction rates, with an increase of more than 25% at the temperatures of future fusion reactors. (C) 2011 Elsevier B.V. All rights reserved.
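For context, the astrophysical S(E) factor referred to above has the standard definition below (a background formula, not specific to this paper), with eta the Sommerfeld parameter and v the relative velocity of the colliding nuclei:

```latex
\[
  S(E) = E\,\sigma(E)\,e^{2\pi\eta},
  \qquad
  \eta = \frac{Z_1 Z_2 e^{2}}{\hbar v}.
\]
```

Dividing out the exponential Coulomb-penetration factor leaves a slowly varying S(E), which is why small changes in its slope propagate directly into the low-energy reaction rates.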
Abstract:
A novel cryptography method based on the Lorenz attractor chaotic system is presented. The proposed algorithm is secure and fast, making it practical for general use. We introduce the chaotic operation mode, which provides an interaction among the password, the message and a chaotic system. It ensures that the algorithm yields a secure codification even if the nature of the chaotic system is known. The algorithm has been implemented in two versions: one sequential and slow, the other parallel and fast. Our algorithm assures the integrity of the ciphertext (we know if it has been altered, which is not assured by traditional algorithms) and consequently its authenticity. Numerical experiments are presented and discussed, and they show the behavior of the method in terms of security and performance. The fast version of the algorithm has performance comparable to AES, a widely used commercial encryption standard, but it is more secure, which makes it immediately suitable for general-purpose cryptography applications. An internet page has been set up, which enables readers to test the algorithm and also to try to break the cipher.
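To illustrate the general idea of deriving a keystream from a chaotic trajectory, here is a minimal Python sketch. It is a hypothetical construction for illustration only, not the paper's cipher (whose chaotic operation mode couples password, message and system state in a specific way), and it makes no security claims.

```python
# Illustrative keystream generation from the Lorenz system (hypothetical
# construction, not the paper's actual algorithm).
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz_step(state, dt=1e-3):
    """One forward-Euler step of the Lorenz equations."""
    x, y, z = state
    return (x + dt * SIGMA * (y - x),
            y + dt * (x * (RHO - z) - y),
            z + dt * (x * y - BETA * z))

def keystream(seed_state, nbytes, warmup=10_000):
    """Derive nbytes of keystream from a trajectory seeded by the password."""
    s = seed_state
    for _ in range(warmup):            # discard the initial transient
        s = lorenz_step(s)
    out = bytearray()
    for _ in range(nbytes):
        for _ in range(100):           # decimate to decorrelate samples
            s = lorenz_step(s)
        out.append(int(abs(s[0]) * 1e6) % 256)
    return bytes(out)

msg = b"attack at dawn"
ks = keystream((1.0, 1.0, 1.0), len(msg))      # seed would come from the password
cipher = bytes(m ^ k for m, k in zip(msg, ks)) # XOR stream encryption
plain = bytes(c ^ k for c, k in zip(cipher, ks))
assert plain == msg
```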
Abstract:
This paper presents a study on wavelets and their characteristics for the specific purpose of serving as a feature-extraction tool for speaker verification (SV), considering a Radial Basis Function (RBF) classifier, which is a particular type of Artificial Neural Network (ANN). Examining characteristics such as support size and frequency and phase responses, among others, we show how Discrete Wavelet Transforms (DWTs), particularly those derived from Finite Impulse Response (FIR) filters, can be used to extract important features from a speech signal that are useful for SV. Lastly, an SV algorithm based on the concepts presented is described.
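As a concrete illustration of DWT-based feature extraction of this kind, here is a minimal Python sketch using PyWavelets; the wavelet choice (Daubechies-8, an FIR-derived wavelet), frame length and subband log-energy features are illustrative assumptions, not the paper's exact front end.

```python
import numpy as np
import pywt  # PyWavelets

def dwt_features(frame, wavelet="db8", levels=4):
    """Return the log-energy of each DWT subband of one speech frame."""
    coeffs = pywt.wavedec(frame, wavelet, level=levels)
    return np.array([np.log(np.sum(c**2) + 1e-12) for c in coeffs])

fs = 8000
t = np.arange(0, 0.032, 1 / fs)        # one 32 ms frame (256 samples)
frame = np.sin(2 * np.pi * 180 * t)    # stand-in for a real speech frame
print(dwt_features(frame))             # 5 features: A4, D4..D1 subband energies
```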
Abstract:
This paper proposes an improved voice activity detection (VAD) algorithm using wavelets and a support vector machine (SVM) for the European Telecommunications Standards Institute (ETSI) adaptive multi-rate (AMR) narrow-band (NB) and wide-band (WB) speech codecs. First, based on the wavelet transform, the original IIR filter bank and pitch/tone detector are implemented, respectively, via a wavelet filter bank and a wavelet-based pitch/tone detection algorithm. The wavelet filter bank divides the input speech signal into several frequency bands so that the signal power level at each sub-band can be calculated. In addition, the background noise level can be estimated in each sub-band by using the wavelet de-noising method. The wavelet filter bank is also used to detect correlated complex signals such as music. The proposed algorithm then applies an SVM to train an optimized non-linear VAD decision rule involving the sub-band power, noise level, pitch period, tone flag, and complex-signals warning flag of the input speech signal. By the use of the trained SVM, the proposed VAD algorithm produces more accurate detection results. Various experimental results obtained with the Aurora speech database under different noise conditions show that the proposed algorithm yields VAD performance considerably superior to the AMR-NB VAD Options 1 and 2 and the AMR-WB VAD. (C) 2009 Elsevier Ltd. All rights reserved.
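A minimal Python sketch of such an SVM decision stage follows (scikit-learn), with random stand-in feature vectors in place of the sub-band power, noise level, pitch and tone features described above; everything here is illustrative rather than the paper's trained system.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 4))                   # stand-in per-frame feature vectors
y = (X[:, 0] - X[:, 1] > 0).astype(int)       # stand-in speech/non-speech labels

clf = SVC(kernel="rbf", C=1.0, gamma="scale") # non-linear VAD decision rule
clf.fit(X, y)
print(clf.predict(X[:5]))                     # 1 = speech, 0 = noise
```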
Abstract:
This paper presents the formulation of a combinatorial optimization problem with the following characteristics: (i) the search space is the power set of a finite set, structured as a Boolean lattice; (ii) the cost function forms a U-shaped curve when applied to any lattice chain. This formulation applies to feature selection in the context of pattern recognition. The known approaches to this problem are branch-and-bound algorithms and heuristics that explore the search space only partially. Branch-and-bound algorithms are equivalent to the full search, while heuristics are not. This paper presents a branch-and-bound algorithm that differs from the known ones by exploiting the lattice structure and the U-shaped chain curves of the search space. The main contribution of this paper is the architecture of this algorithm, which is based on the representation and exploration of the search space through new lattice properties proven here. Several experiments with well-known public data indicate the superiority of the proposed method over sequential floating forward selection (SFFS), a popular heuristic that gives good results in very short computational time. In all experiments, the proposed method obtained better or equal results in similar or even smaller computational time. (C) 2009 Elsevier Ltd. All rights reserved.
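To make the pruning idea concrete, here is a minimal Python sketch (not the paper's algorithm, whose branch-and-bound architecture is considerably more elaborate): along any chain of the Boolean lattice the cost is U-shaped, so once the cost starts rising while features are added along a chain, that chain can be abandoned. The toy cost function below is a hypothetical U-shaped example.

```python
def cost(subset):
    """Hypothetical U-shaped cost: penalizes missing and extra features."""
    ideal = {1, 3}
    return len(ideal ^ set(subset))   # symmetric difference

def u_curve_search(n_features):
    best = (float("inf"), frozenset())

    def extend(subset, last_cost, candidates):
        nonlocal best
        c = cost(subset)
        if c < best[0]:
            best = (c, frozenset(subset))
        if c > last_cost:             # U-shape: this chain is now rising, prune it
            return
        for i in list(candidates):    # extend the chain one feature at a time
            extend(subset | {i}, c, {j for j in candidates if j > i})

    extend(frozenset(), float("inf"), set(range(n_features)))
    return best

print(u_curve_search(5))   # -> (0, frozenset({1, 3}))
```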
Abstract:
Large-scale simulations of parts of the brain using detailed neuronal models, aimed at improving our understanding of brain functions, are becoming a reality with the use of supercomputers and large clusters. However, the high acquisition and maintenance cost of these computers, including the physical space, air conditioning, and electrical power, limits the number of simulations of this kind that scientists can perform. Modern commodity graphics cards based on the CUDA platform contain graphics processing units (GPUs) composed of hundreds of processors that can simultaneously execute thousands of threads and thus constitute a low-cost solution for many high-performance computing applications. In this work, we present a CUDA algorithm that enables the execution, on multiple GPUs, of simulations of large-scale networks composed of biologically realistic Hodgkin-Huxley neurons. The algorithm represents each neuron as a CUDA thread, which solves the set of coupled differential equations that model the neuron. Communication among neurons located on different GPUs is coordinated by the CPU. Compared with a modern quad-core CPU, on a single computer with two graphics boards carrying two GPUs each, we obtained speedups of 40 for the simulation of 200k neurons receiving random external input, and speedups of 9 for a network with 200k neurons and 20M neuronal connections. Copyright (C) 2010 John Wiley & Sons, Ltd.
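The per-neuron work that each CUDA thread performs is the numerical integration of the Hodgkin-Huxley equations. Below is a minimal NumPy sketch of that update applied to a whole population at once, using the standard squid-axon parameters and forward Euler for brevity; the paper's integration scheme and parameters may differ.

```python
import numpy as np

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3   # uF/cm^2 and mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4         # reversal potentials in mV

def hh_step(V, m, h, n, I_ext, dt=0.01):
    """Advance a population of Hodgkin-Huxley neurons by one Euler step (dt in ms)."""
    am = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    bm = 4.0 * np.exp(-(V + 65.0) / 18.0)
    ah = 0.07 * np.exp(-(V + 65.0) / 20.0)
    bh = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    an = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    bn = 0.125 * np.exp(-(V + 65.0) / 80.0)
    I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
    V = V + dt * (I_ext - I_ion) / C
    m = m + dt * (am * (1.0 - m) - bm * m)
    h = h + dt * (ah * (1.0 - h) - bh * h)
    n = n + dt * (an * (1.0 - n) - bn * n)
    return V, m, h, n

N = 1000                                  # toy population size
V = np.full(N, -65.0); m = np.full(N, 0.05)
h = np.full(N, 0.6);   n = np.full(N, 0.32)
for _ in range(2000):                     # 20 ms of simulated time
    V, m, h, n = hh_step(V, m, h, n, I_ext=10.0)
print(V[:3])                              # membrane potentials in mV
```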
Abstract:
One of the key issues in e-learning environments is the possibility of creating and evaluating exercises. However, the lack of tools supporting the authoring and automatic checking of exercises for specific topics (e.g., geometry) drastically reduces the advantages of using e-learning environments on a larger scale, as usually happens in Brazil. This paper describes an algorithm, and a tool based on it, designed for the authoring and automatic checking of geometry exercises. The algorithm dynamically compares the distances between the geometric objects of the student's solution and those of the template solution provided by the author of the exercise. Each solution is a geometric construction, which is considered a function receiving geometric objects (input) and returning other geometric objects (output). Thus, for a given problem, if we know one function (construction) that solves the problem, we can compare it to any other function to check whether they are equivalent. Two functions are equivalent if, and only if, they have the same output when the same input is applied. If the student's solution is equivalent to the template solution, then we consider the student's solution correct. Our software provides both authoring and checking tools that work directly on the Internet, together with learning management systems. These tools are implemented using the dynamic geometry software iGeom, which has been used in a geometry course since 2004 and has a successful track record in the classroom. Empowered with these new features, iGeom simplifies teachers' tasks, solves non-trivial problems in student solutions and helps to increase student motivation by providing feedback in real time. (c) 2008 Elsevier Ltd. All rights reserved.
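A minimal Python sketch of this functional equivalence test follows; the two constructions are hypothetical, and iGeom's actual checker operates on dynamic geometric objects rather than plain coordinate functions.

```python
import math

def template(A, B):
    """Template solution: midpoint of segment AB."""
    return ((A[0] + B[0]) / 2.0, (A[1] + B[1]) / 2.0)

def student(A, B):
    """Student solution: the same point computed a different way."""
    return (A[0] + (B[0] - A[0]) / 2.0, A[1] + (B[1] - A[1]) / 2.0)

def equivalent(f, g, inputs, tol=1e-9):
    """f and g are equivalent iff they agree (within tol) on every test input."""
    return all(math.dist(f(*p), g(*p)) <= tol for p in inputs)

tests = [((0.0, 0.0), (4.0, 2.0)), ((-1.0, 5.0), (3.0, 3.0))]
print(equivalent(template, student, tests))   # True -> accept as correct
```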
Abstract:
Given two strings A and B of lengths n(a) and n(b), n(a) <= n(b), respectively, the all-substrings longest common subsequence (ALCS) problem obtains, for every substring B' of B, the length of the longest string that is a subsequence of both A and B'. The ALCS problem has many applications, such as finding approximate tandem repeats in strings, solving the circular alignment of two strings and finding the alignment of one string with several others that share a common substring. We present an algorithm to prepare the basic data structure for ALCS queries that takes O(n(a)n(b)) time and O(n(a) + n(b)) space. After this preparation, it is possible to build a matrix of size O(n(b)(2)) that allows any LCS length to be retrieved in constant time. Some trade-offs between the space required and the querying time are discussed. To our knowledge, this is the first algorithm in the literature for the ALCS problem. (C) 2007 Elsevier B.V. All rights reserved.
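For background, here is the classical LCS dynamic program that the ALCS structure generalizes, as a Python sketch; this is not the paper's algorithm, which answers LCS queries for every substring B' of B after a single preparation pass.

```python
def lcs_length(A, B):
    """Length of the longest common subsequence of A and B (O(na*nb) time)."""
    na, nb = len(A), len(B)
    prev = [0] * (nb + 1)                 # one row of the DP table
    for i in range(1, na + 1):
        cur = [0] * (nb + 1)
        for j in range(1, nb + 1):
            if A[i - 1] == B[j - 1]:
                cur[j] = prev[j - 1] + 1  # extend a common subsequence
            else:
                cur[j] = max(prev[j], cur[j - 1])
        prev = cur
    return prev[nb]

print(lcs_length("acgtacgt", "gattaca"))  # -> 4 (e.g. "atac")
```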
Abstract:
In situ fusion on a boat-type graphite platform has been used as a sample pretreatment for the direct determination of Co, Cr and Mn in Portland cement by solid sampling graphite furnace atomic absorption spectrometry (SS-GF AAS). The three-field Zeeman technique was adopted for background correction and to decrease the sensitivity during the measurements. This strategy allowed working with up to 200 mu g of sample. The in situ fusion was accomplished using 10 mu L of a flux mixture of 4.0% m/v Na(2)CO(3) + 4.0% m/v ZnO + 0.1% m/v Triton (R) X-100, added over the cement sample and heated at 800 degrees C for 20 s. The resulting melt was completely dissolved with 10 mu L of 0.1% m/v HNO(3). Limits of detection were 0.11 mu g g(-1) for Co, 1.1 mu g g(-1) for Cr and 1.9 mu g g(-1) for Mn. The accuracy of the proposed method was evaluated by the analysis of certified reference materials; the values found showed no statistically significant differences from the certified values (Student's t-test, p < 0.05). In general, the relative standard deviation was lower than 12% (n = 5). (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
A dosing algorithm including genetic (VKORC1 and CYP2C9 genotypes) and nongenetic factors (age, weight, therapeutic indication, and cotreatment with amiodarone or simvastatin) explained 51% of the variance in stable weekly warfarin doses in 390 patients attending an anticoagulant clinic in a Brazilian public hospital. The VKORC1 3673G>A genotype was the most important predictor of warfarin dose, with a partial R(2) value of 23.9%. Replacing the VKORC1 3673G>A genotype with the VKORC1 diplotype did not increase the algorithm's predictive power. We suggest that three other single-nucleotide polymorphisms (SNPs) (5808T>G, 6853G>C, and 9041G>A) that are in strong linkage disequilibrium (LD) with 3673G>A would be equally good predictors of the warfarin dose requirement. The algorithm's predictive power was similar across the self-identified "race/color" subsets. "Race/color" was not associated with stable warfarin dose in the multiple regression model, although the required warfarin dose was significantly lower (P = 0.006) in white patients (29 +/- 13 mg/week, n = 196) than in black patients (35 +/- 15 mg/week, n = 76).
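In generic form, such a dosing algorithm is a multiple linear regression of the stable dose on the genetic and clinical covariates; the sketch below only illustrates the model structure, since the fitted coefficients and the exact covariate coding belong to the paper and are not reproduced here.

```latex
\[
  \text{dose}_i = \beta_0
  + \beta_1\,\mathrm{VKORC1}_i
  + \beta_2\,\mathrm{CYP2C9}_i
  + \beta_3\,\text{age}_i
  + \beta_4\,\text{weight}_i
  + \cdots + \varepsilon_i,
\]
```

with an overall R(2) of 0.51 for the full model and a partial R(2) of 0.239 for the VKORC1 3673G>A term, as reported above.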