98 results for Fast algorithm
Abstract:
Purpose - The purpose of this paper is to develop a novel unstructured simulation approach for injection molding processes described by the Hele-Shaw model. Design/methodology/approach - The scheme involves dual dynamic meshes with active and inactive cells determined from an initial background pointset. The quasi-static pressure solution in each timestep for this evolving unstructured mesh system is approximated using a control volume finite element method formulation coupled to a corresponding modified volume of fluid method. The flow is considered to be isothermal and non-Newtonian. Findings - Supporting numerical tests and performance studies for polystyrene described by Carreau, Cross, Ellis and Power-law fluid models are conducted. Results for the present method are shown to be comparable to those from other methods for both Newtonian fluid and polystyrene fluid injected in different mold geometries. Research limitations/implications - With respect to the methodology, the background pointset implies a mesh that is dynamically reconstructed here, and there are a number of efficiency issues and improvements that would be relevant to industrial applications. For instance, one can use the pointset to construct special bases and invoke a so-called "meshless" scheme using the basis. This would require some interesting strategies to deal with the dynamic point enrichment of the moving front that could benefit from the present front treatment strategy. There are also issues related to mass conservation and fill-time errors that might be addressed by introducing suitable projections. The general question of "rate of convergence" of these schemes requires analysis. Numerical results here suggest first-order accuracy and are consistent with the approximations made, but theoretical results are not available yet for these methods. Originality/value - This novel unstructured simulation approach involves dual meshes with active and inactive cells determined from an initial background pointset: local active dual patches are constructed "on-the-fly" for each "active point" to form a dynamic virtual mesh of active elements that evolves with the moving interface.
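For reference, the generalized-Newtonian viscosity laws named in the abstract have standard textbook forms; a minimal LaTeX statement of them follows (conventional parameter symbols, not values taken from the paper):

```latex
% Standard shear-rate-dependent viscosity models (conventional notation):
\eta(\dot\gamma) = K\,\dot\gamma^{\,n-1}                              % Power-law
\eta(\dot\gamma) = \eta_\infty + (\eta_0-\eta_\infty)
                   \bigl[1+(\lambda\dot\gamma)^2\bigr]^{(n-1)/2}      % Carreau
\eta(\dot\gamma) = \eta_\infty + \frac{\eta_0-\eta_\infty}
                   {1+(\lambda\dot\gamma)^m}                          % Cross
\frac{\eta_0}{\eta} = 1 + \left(\frac{\tau}{\tau_{1/2}}\right)^{\alpha-1}  % Ellis
```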
Abstract:
Multidimensional scaling is applied in order to visualize an analogue of the small-world effect implied by edges having different displacement velocities in transportation networks. Our findings are illustrated for two real-world systems, namely the London urban network (streets and underground) and the US highway network enhanced by some of the main US airline routes. We also show that the travel time in these two networks is drastically changed by attacks targeting the edges with large displacement velocities. (C) 2011 Elsevier B.V. All rights reserved.
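As a rough illustration of the embedding idea (a sketch on a toy graph, not the authors' code or data), one can compute all-pairs travel times on a weighted graph and feed them to metric MDS, so that nodes linked by high-velocity edges are pulled close together:

```python
# Toy example: embed a small transport network by travel time with MDS.
# Edge "speed" stands in for the different displacement velocities.
import networkx as nx
import numpy as np
from sklearn.manifold import MDS

G = nx.Graph()
# street edges (slow) and one "underground" shortcut (fast)
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 4, 1.0), (0, 4, 5.0)]
for u, v, speed in edges:
    G.add_edge(u, v, time=1.0 / speed)   # travel time = unit length / speed

# all-pairs shortest travel times -> dissimilarity matrix
nodes = sorted(G.nodes)
T = np.array([[nx.shortest_path_length(G, i, j, weight="time")
               for j in nodes] for i in nodes])

# 2-D MDS embedding of the travel-time metric
coords = MDS(n_components=2, dissimilarity="precomputed").fit_transform(T)
print(coords)
```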
Abstract:
This article reports a relaxation study in an oriented system containing spin-3/2 nuclei using quantum state tomography (QST). The use of QST allowed the time evolution of all density matrix elements to be evaluated starting from several initial states. Using an appropriate treatment based on Redfield theory, the relaxation rate of each density matrix element was measured and the reduced spectral densities that describe the system relaxation were determined. All the experimental data could be well described assuming pure quadrupolar relaxation and reduced spectral densities corresponding to a superposition of slow and fast motions. The data were also analyzed in the context of quantum information processing, where the coherence loss of each qubit of the system was determined using the partial trace operation. (C) 2008 Elsevier Inc. All rights reserved.
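The partial-trace step mentioned at the end has a compact numerical form; a minimal sketch for a two-qubit density matrix (illustrative only, not the paper's analysis code):

```python
# Reduced state of one qubit of a two-qubit system via the partial trace.
import numpy as np

def partial_trace(rho, keep):
    """Trace out one qubit of a 4x4 two-qubit density matrix.
    keep=0 returns the first qubit's 2x2 reduced state, keep=1 the second's."""
    r = rho.reshape(2, 2, 2, 2)   # indices: (a, b, a', b')
    return (np.trace(r, axis1=1, axis2=3) if keep == 0
            else np.trace(r, axis1=0, axis2=2))

# Bell state |00> + |11>: each reduced qubit is maximally mixed (no coherence)
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
print(partial_trace(rho, keep=0))   # -> 0.5 * identity
```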
Abstract:
This paper presents a study on wavelets and their characteristics for the specific purpose of serving as a feature extraction tool for speaker verification (SV), considering a Radial Basis Function (RBF) classifier, which is a particular type of Artificial Neural Network (ANN). Examining characteristics such as support-size, frequency and phase responses, amongst others, we show how Discrete Wavelet Transforms (DWTs), particularly the ones which derive from Finite Impulse Response (FIR) filters, can be used to extract important features from a speech signal which are useful for SV. Lastly, an SV algorithm based on the concepts presented is described.
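A minimal sketch of the kind of DWT-based feature extraction described (the wavelet choice, frame length and log-energy features are assumptions for illustration, not the paper's exact pipeline):

```python
# DWT sub-band log-energies as a per-frame feature vector, using an FIR
# (Daubechies) wavelet from PyWavelets.
import numpy as np
import pywt

def dwt_features(frame, wavelet="db4", levels=4):
    """Log-energy of each DWT sub-band of one speech frame."""
    coeffs = pywt.wavedec(frame, wavelet, level=levels)
    return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

frame = np.random.randn(512)   # stand-in for a windowed speech frame
print(dwt_features(frame))     # (levels + 1)-dimensional feature vector
```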
Abstract:
This paper proposes an improved voice activity detection (VAD) algorithm using wavelets and a support vector machine (SVM) for the European Telecommunications Standards Institute (ETSI) adaptive multi-rate (AMR) narrow-band (NB) and wide-band (WB) speech codecs. First, based on the wavelet transform, the original IIR filter bank and pitch/tone detector are implemented, respectively, via a wavelet filter bank and a wavelet-based pitch/tone detection algorithm. The wavelet filter bank divides the input speech signal into several frequency bands so that the signal power level at each sub-band can be calculated. In addition, the background noise level can be estimated in each sub-band by using the wavelet de-noising method. The wavelet filter bank is also used to detect correlated complex signals like music. The proposed algorithm then applies an SVM to train an optimized non-linear VAD decision rule involving the sub-band power, noise level, pitch period, tone flag, and complex-signal warning flag of the input speech signal. By using the trained SVM, the proposed VAD algorithm produces more accurate detection results. Various experimental results on the Aurora speech database with different noise conditions show that the proposed algorithm achieves VAD performance considerably superior to that of the AMR-NB VAD Options 1 and 2 and the AMR-WB VAD. (C) 2009 Elsevier Ltd. All rights reserved.
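The SVM training step reduces to fitting a non-linear classifier on per-frame feature vectors; a minimal sketch with stand-in features and labels (the feature layout is assumed for illustration, not taken from the codec):

```python
# Training a non-linear SVM VAD decision rule on per-frame features such as
# sub-band power, noise level, pitch period, tone flag and complex-signal flag.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # stand-in 5-feature frame vectors
y = (X[:, 0] - X[:, 1] > 0).astype(int)   # stand-in speech/non-speech labels

vad = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print(vad.predict(X[:5]))                 # 1 = voice active, 0 = inactive
```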
Abstract:
Purpose: We present an iterative framework for CT reconstruction from transmission ultrasound data which accurately and efficiently models the strong refraction effects that occur in our target application: imaging the female breast. Methods: Our refractive ray tracing framework has its foundation in the fast marching method (FMM), and it allows accurate as well as efficient modeling of curved rays. We also describe a novel regularization scheme that yields further significant reconstruction quality improvements. A final contribution is the development of a realistic anthropomorphic digital breast phantom based on the NIH Visible Female data set. Results: Our system is able to resolve very fine details even in the presence of significant noise, and it reconstructs both sound speed and attenuation data. Excellent correspondence with a traditional, but significantly more computationally expensive, wave equation solver is achieved. Conclusions: Apart from the accurate modeling of curved rays, decisive factors have also been our regularization scheme and the high-quality interpolation filter we have used. An added benefit of our framework is that it accelerates well on GPUs, where we have shown that clinical 3D reconstruction speeds on the order of minutes are possible.
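The fast marching core can be sketched with off-the-shelf tools; the following toy example (illustrative, not the authors' GPU implementation) computes first-arrival travel times through a heterogeneous sound-speed map with scikit-fmm, where refracted ray behaviour is implicit in the travel-time field:

```python
# First-arrival travel times in a 2-D sound-speed map via fast marching.
import numpy as np
import skfmm

n = 128
phi = np.ones((n, n))
phi[0, n // 2] = -1.0                        # point source on the top edge

speed = np.full((n, n), 1500.0)              # background sound speed (m/s)
speed[40:90, 40:90] = 1450.0                 # slower region bends the rays

t = skfmm.travel_time(phi, speed, dx=1e-3)   # first-arrival times (s)
print(t.shape, t.max())
```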
Abstract:
This paper presents the formulation of a combinatorial optimization problem with the following characteristics: (i) the search space is the power set of a finite set structured as a Boolean lattice; (ii) the cost function forms a U-shaped curve when applied to any lattice chain. This formulation applies to feature selection in the context of pattern recognition. The known approaches to this problem are branch-and-bound algorithms and heuristics that explore the search space only partially. Branch-and-bound algorithms are equivalent to the full search, while heuristics are not. This paper presents a branch-and-bound algorithm that differs from the known ones by exploiting the lattice structure and the U-shaped chain curves of the search space. The main contribution of this paper is the architecture of this algorithm, which is based on the representation and exploration of the search space by new lattice properties proven here. Several experiments, with well-known public data, indicate the superiority of the proposed method over sequential floating forward selection (SFFS), a popular heuristic that gives good results in very short computational time. In all experiments, the proposed method obtained better or equal results in similar or even smaller computational time. (C) 2009 Elsevier Ltd. All rights reserved.
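To make the U-shaped-chain idea concrete, here is a heavily simplified depth-first sketch. It is a toy with none of the paper's lattice properties or completeness guarantees: it merely stops descending a chain once the cost starts rising, which is safe along that chain when the cost is U-shaped on chains:

```python
# Toy sketch only: unlike the paper's branch-and-bound, this greedy chain
# pruning is NOT equivalent to the full search.
def u_curve_search(n_features, cost):
    best_cost, best_set = cost(frozenset()), frozenset()

    def explore(subset, last_cost, next_feature):
        nonlocal best_cost, best_set
        for f in range(next_feature, n_features):
            s = subset | {f}
            c = cost(s)
            if c < best_cost:
                best_cost, best_set = c, s
            if c <= last_cost:        # still descending on this chain: recurse
                explore(s, c, f + 1)
            # else: by the U-shape assumption, supersets on this chain cost more

    explore(frozenset(), best_cost, 0)
    return best_cost, best_set

# toy U-shaped cost: the best subsets have about 3 elements
cost = lambda s: (len(s) - 3) ** 2 + 0.1 * sum(s)
print(u_curve_search(6, cost))
```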
Abstract:
Large-scale simulations of parts of the brain using detailed neuronal models to improve our understanding of brain functions are becoming a reality with the use of supercomputers and large clusters. However, the high acquisition and maintenance cost of these computers, including the physical space, air conditioning, and electrical power, limits the number of simulations of this kind that scientists can perform. Modern commodity graphics cards, based on the CUDA platform, contain graphics processing units (GPUs) composed of hundreds of processors that can simultaneously execute thousands of threads and thus constitute a low-cost solution for many high-performance computing applications. In this work, we present a CUDA algorithm that enables the execution, on multiple GPUs, of simulations of large-scale networks composed of biologically realistic Hodgkin-Huxley neurons. The algorithm represents each neuron as a CUDA thread, which solves the set of coupled differential equations that model each neuron. Communication among neurons located in different GPUs is coordinated by the CPU. We obtained speedups of 40 for the simulation of 200k neurons that received random external input, and speedups of 9 for a network with 200k neurons and 20M neuronal connections, on a single computer with two graphics boards with two GPUs each, when compared with a modern quad-core CPU. Copyright (C) 2010 John Wiley & Sons, Ltd.
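The per-neuron work assigned to each CUDA thread is the integration of the classic Hodgkin-Huxley system; a minimal numpy sketch of that update (standard HH parameters, forward Euler, no inter-neuron connections), in which each array element plays the role of one thread:

```python
# One forward-Euler step of the classic HH point-neuron model (mV, ms),
# vectorized so each array element corresponds to one neuron/thread.
import numpy as np

def hh_step(V, m, h, n, I_ext, dt=0.01):
    a_m = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(V + 65.0) / 18.0)
    a_h = 0.07 * np.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    a_n = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(V + 65.0) / 80.0)

    I_Na = 120.0 * m**3 * h * (V - 50.0)   # sodium current (E_Na = 50 mV)
    I_K = 36.0 * n**4 * (V + 77.0)         # potassium current (E_K = -77 mV)
    I_L = 0.3 * (V + 54.387)               # leak current (E_L = -54.387 mV)

    V = V + dt * (I_ext - I_Na - I_K - I_L)   # C_m = 1 uF/cm^2
    m = m + dt * (a_m * (1 - m) - b_m * m)
    h = h + dt * (a_h * (1 - h) - b_h * h)
    n = n + dt * (a_n * (1 - n) - b_n * n)
    return V, m, h, n

N = 1000                                   # "threads" = neurons
V = np.full(N, -65.0); m = np.full(N, 0.05)
h = np.full(N, 0.6); n = np.full(N, 0.32)
for _ in range(1000):                      # 10 ms of simulated time
    V, m, h, n = hh_step(V, m, h, n, I_ext=10.0)
print(V[:5])
```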
Abstract:
One of the key issues in e-learning environments is the possibility of creating and evaluating exercises. However, the lack of tools supporting the authoring and automatic checking of exercises for specific topics (e.g., geometry) drastically reduces the advantages of using e-learning environments on a larger scale, as usually happens in Brazil. This paper describes an algorithm, and a tool based on it, designed for the authoring and automatic checking of geometry exercises. The algorithm dynamically compares the distances between the geometric objects of the student's solution and those of the template solution provided by the author of the exercise. Each solution is a geometric construction, which is considered a function receiving geometric objects (input) and returning other geometric objects (output). Thus, for a given problem, if we know one function (construction) that solves the problem, we can compare it to any other function to check whether they are equivalent. Two functions are equivalent if, and only if, they have the same output when the same input is applied. If the student's solution is equivalent to the template solution, then we consider the student's solution correct. Our software utility provides both authoring and checking tools that work directly on the Internet, together with learning management systems. These tools are implemented using the dynamic geometry software iGeom, which has been used in a geometry course since 2004 and has a successful track record in the classroom. Empowered with these new features, iGeom simplifies teachers' tasks, solves non-trivial problems in student solutions and helps to increase student motivation by providing feedback in real time. (c) 2008 Elsevier Ltd. All rights reserved.
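The input/output equivalence test has a very small core; a minimal sketch with hypothetical constructions and a numeric tolerance (not iGeom's actual distance-based comparison):

```python
# Two constructions are treated as functions from input objects to output
# objects and compared on the same inputs, within a numeric tolerance.
import numpy as np

def student_midpoint(a, b):
    return (a + b) / 2.0               # hypothetical student construction

def template_midpoint(a, b):
    return a + 0.5 * (b - a)           # hypothetical template construction

def equivalent(f, g, trials=100, tol=1e-9):
    rng = np.random.default_rng(0)
    for _ in range(trials):
        a, b = rng.normal(size=(2, 2))  # random input points in the plane
        if np.linalg.norm(f(a, b) - g(a, b)) > tol:
            return False
    return True                         # same outputs on all tested inputs

print(equivalent(student_midpoint, template_midpoint))   # True
```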
Abstract:
Given two strings A and B of lengths n_a and n_b, n_a <= n_b, respectively, the all-substrings longest common subsequence (ALCS) problem obtains, for every substring B' of B, the length of the longest string that is a subsequence of both A and B'. The ALCS problem has many applications, such as finding approximate tandem repeats in strings, solving the circular alignment of two strings and finding the alignment of one string with several others that have a common substring. We present an algorithm to prepare the basic data structure for ALCS queries that takes O(n_a n_b) time and O(n_a + n_b) space. After this preparation, it is possible to build a matrix of size O(n_b^2) that allows any LCS length to be retrieved in constant time. Some trade-offs between the space required and the querying time are discussed. To our knowledge, this is the first algorithm in the literature for the ALCS problem. (C) 2007 Elsevier B.V. All rights reserved.
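For orientation, the classic recurrence underlying the problem can be written in a few lines; this is the plain single-pair LCS dynamic programme, not the paper's all-substrings data structure:

```python
# O(na*nb)-time, O(nb)-space LCS length via the standard two-row recurrence.
def lcs_length(A, B):
    na, nb = len(A), len(B)
    prev = [0] * (nb + 1)
    for i in range(1, na + 1):
        cur = [0] * (nb + 1)
        for j in range(1, nb + 1):
            cur[j] = (prev[j - 1] + 1 if A[i - 1] == B[j - 1]
                      else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[nb]

print(lcs_length("GATTACA", "TAGACCA"))   # -> 4
```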
Abstract:
We propose a likelihood ratio test (LRT) with Bartlett correction in order to identify Granger causality between sets of time-series gene expression data. The performance of the proposed test is compared to a previously published bootstrap-based approach. The LRT is shown to be significantly faster and statistically powerful even under non-normal distributions. An R package named gGranger containing an implementation of both Granger causality identification tests is also provided.
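A minimal sketch of a likelihood-ratio Granger test between two series (a plain LRT with a chi-squared reference and no Bartlett correction, so it illustrates the idea rather than reproducing gGranger):

```python
# LRT for "x Granger-causes y": compare restricted (y on its own lags) and
# unrestricted (y on its own lags plus lags of x) autoregressions.
import numpy as np
from scipy import stats

def granger_lrt(y, x, p=2):
    n = len(y)
    Y = y[p:]
    lag = lambda s, k: s[p - k:n - k]
    X_r = np.column_stack([np.ones(n - p)] +
                          [lag(y, k) for k in range(1, p + 1)])
    X_u = np.column_stack([X_r] + [lag(x, k) for k in range(1, p + 1)])
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0])**2)
    lr = (n - p) * np.log(rss(X_r) / rss(X_u))   # Gaussian LRT statistic
    return 1.0 - stats.chi2.cdf(lr, df=p)        # p restricted coefficients

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.roll(x, 1) + 0.5 * rng.normal(size=500)   # y driven by lagged x
print(granger_lrt(y, x))                         # small p-value expected
```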
Abstract:
A reliable and fast sensor for in vitro evaluation of the solar protection factors (SPFs) of cosmetic products, based on the photobleaching kinetics of a nanocrystalline TiO2/dye UV dosimeter, has been devised. The accuracy, robustness and suitability of the new device were demonstrated by the excellent match between the predicted and the in vivo results up to SPF 70 for four standard samples analyzed blind. These results strongly suggest that our device can be useful for routine SPF evaluation in laboratories devoted to the development or production of cosmetic formulations, since the conventional in vitro methods tend to exhibit unacceptably high errors above SPF ~30 and the conventional in vivo methods tend to be expensive and exceedingly time consuming. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
A new compact system encompassing an in-flow gas diffusion unit and a wall-jet amperometric FIA detector coated with a supramolecular porphyrin film was specially designed as an alternative to the time-consuming Monier-Williams method, allowing fast, reproducible and accurate analyses of free sulphite species in fruit juices. In fact, a linear response between 0.64 and 6.4 ppm of sodium sulphite, an LOD of 0.043 ppm, a relative standard deviation of +/- 1.5% (n = 10) and an analytical frequency of 85 analyses/h were obtained under optimised conditions. That superior analytical performance allows the precise evaluation of the amount of free sulphite present in foods, providing an important comparison between the standard addition and the standard injection methods. Although the former is more frequently used, it was strongly influenced by matrix effects because of the unexpected reactivity of sulphite ions with the juice matrixes, leading to their partial consumption soon after addition. In contrast, the latter method was not susceptible to matrix effects, yielding accurate results and being more reliable for analytical purposes. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
A fast and robust analytical method for the amperometric determination of hydrogen peroxide (H2O2) based on batch injection analysis (BIA) on an array of gold microelectrodes modified with platinum is proposed. The gold microelectrode array (n = 14) was obtained from electronic chips developed for surface-mounted device (SMD) technology, whose size offers advantages for adapting them to batch cells. The effects of the dispensing rate, the volume injected, the distance between the platinum microelectrodes and the pipette tip, as well as the volume of solution in the cell, on the analytical response were evaluated. The method allows the amperometric determination of H2O2 in the concentration range from 0.8 umol L-1 to 100 umol L-1. The analytical frequency can attain 300 determinations per hour, and the detection limit was estimated at 0.34 umol L-1 (3 sigma). The anodic current peaks obtained after a series of 23 successive injections of 50 uL of 25 umol L-1 H2O2 showed an RSD < 0.9%. To ensure good selectivity in detecting H2O2, its determination was performed in a differential mode, with selective destruction of the H2O2 with catalase in 10 mmol L-1 phosphate buffer solution. A practical application of the analytical procedure involved H2O2 determination in rainwater of São Paulo City. A comparison of the results obtained by the proposed amperometric method with those from a method combining flow injection analysis (FIA) with spectrophotometric detection showed good agreement. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
A simple, fast, accurate, and sensitive spectrophotometric method was developed to determine zinc(II). This method is based on the reaction of Zn(II) with di-2-pyridyl ketone benzoylhydrazone (DPKBH) at pH = 5.5 in 50% (v/v) ethanol. Beer's law was obeyed in the range 0.020-1.82 ug mL-1, with a molar absorptivity of 3.64 x 10^4 L mol-1 cm-1 and a detection limit (3 sigma) of 2.29 ug L-1. The effect of some interfering ions was verified, and the developed method was applied to pharmaceutical and biological samples. The results were then compared with those obtained by using a flame atomic absorption technique.
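For reference, the standard relations behind the calibration figures quoted above (conventional forms, not paper-specific):

```latex
% Beer-Lambert law: absorbance A, molar absorptivity \varepsilon,
% optical path length \ell, analyte concentration c
A = \varepsilon\,\ell\,c
% conventional 3-sigma detection limit, with \sigma_b the standard deviation
% of the blank signal and s the calibration slope
c_{\mathrm{LOD}} = \frac{3\,\sigma_b}{s}
```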