Abstract:
This paper presents a study on wavelets and their characteristics for the specific purpose of serving as a feature extraction tool for speaker verification (SV), considering a Radial Basis Function (RBF) classifier, which is a particular type of Artificial Neural Network (ANN). Examining characteristics such as support-size, frequency and phase responses, amongst others, we show how Discrete Wavelet Transforms (DWTs), particularly the ones which derive from Finite Impulse Response (FIR) filters, can be used to extract important features from a speech signal which are useful for SV. Lastly, an SV algorithm based on the concepts presented is described.
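The abstract does not specify the feature set, only that FIR-derived DWTs extract speech features. As a rough illustration, the sketch below computes log sub-band energies with the Haar wavelet, the shortest-support FIR filter pair; the function names, the three-level setup, and the energy features themselves are assumptions for illustration, not the paper's method.

```python
import numpy as np

def haar_dwt_step(x):
    """One level of the Haar DWT via its two-tap FIR filters."""
    x = x[: len(x) // 2 * 2]                 # truncate to even length
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)    # approximation (low-pass)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)    # detail (high-pass)
    return lo, hi

def dwt_energy_features(signal, levels=3):
    """Log sub-band energies: a simple wavelet feature vector that
    could feed an RBF classifier (illustrative, not the paper's set)."""
    feats = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt_step(approx)
        feats.append(np.log(np.sum(detail ** 2) + 1e-12))
    feats.append(np.log(np.sum(approx ** 2) + 1e-12))
    return np.array(feats)

# toy speech-like frame: a tone plus noise
rng = np.random.default_rng(0)
frame = np.sin(2 * np.pi * 0.05 * np.arange(256)) + 0.1 * rng.standard_normal(256)
features = dwt_energy_features(frame, levels=3)
```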
Abstract:
This paper proposes an improved voice activity detection (VAD) algorithm using wavelets and a support vector machine (SVM) for the European Telecommunications Standards Institute (ETSI) adaptive multi-rate (AMR) narrow-band (NB) and wide-band (WB) speech codecs. First, based on the wavelet transform, the original IIR filter bank and pitch/tone detector are implemented, respectively, via a wavelet filter bank and a wavelet-based pitch/tone detection algorithm. The wavelet filter bank divides the input speech signal into several frequency bands so that the signal power level in each sub-band can be calculated. In addition, the background noise level can be estimated in each sub-band using the wavelet de-noising method. The wavelet filter bank is also used to detect correlated complex signals such as music. The proposed algorithm then applies the SVM to train an optimized non-linear VAD decision rule involving the sub-band power, noise level, pitch period, tone flag, and complex-signal warning flag of the input speech signal. Using the trained SVM, the proposed VAD algorithm produces more accurate detection results. Various experiments carried out on the Aurora speech database under different noise conditions show that the proposed algorithm achieves VAD performance considerably superior to that of the AMR-NB VAD Options 1 and 2 and the AMR-WB VAD. (C) 2009 Elsevier Ltd. All rights reserved.
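The paper trains an SVM decision rule on per-frame features (sub-band power, noise level, pitch, and flags). As a minimal stand-in for that idea, the sketch below trains a tiny linear SVM by hinge-loss sub-gradient descent on invented three-component frame features; the feature values, the linear kernel, and all hyperparameters are assumptions, not the paper's configuration.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Tiny linear SVM trained with sub-gradient descent on the hinge
    loss; stands in for the (possibly kernelized) SVM of the paper."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in np.random.default_rng(1).permutation(n):
            if y[i] * (X[i] @ w + b) < 1:        # margin violated
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:                                 # only regularize
                w = (1 - lr * lam) * w
    return w, b

# toy frames: [sub-band power, noise level, pitch flag] -> speech/silence
rng = np.random.default_rng(0)
speech = rng.normal([3.0, 1.0, 1.0], 0.3, size=(50, 3))
silence = rng.normal([0.5, 1.0, 0.0], 0.3, size=(50, 3))
X = np.vstack([speech, silence])
y = np.array([1] * 50 + [-1] * 50)
w, b = train_linear_svm(X, y)
accuracy = float((np.sign(X @ w + b) == y).mean())
```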
Abstract:
This paper presents the formulation of a combinatorial optimization problem with the following characteristics: (i) the search space is the power set of a finite set structured as a Boolean lattice; (ii) the cost function forms a U-shaped curve when applied to any lattice chain. This formulation applies to feature selection in the context of pattern recognition. The known approaches to this problem are branch-and-bound algorithms and heuristics that explore the search space only partially. Branch-and-bound algorithms are equivalent to a full search, while heuristics are not. This paper presents a branch-and-bound algorithm that differs from the known ones by exploiting the lattice structure and the U-shaped chain curves of the search space. The main contribution of this paper is the architecture of this algorithm, which is based on the representation and exploration of the search space through new lattice properties proven here. Several experiments with well-known public data indicate the superiority of the proposed method over sequential floating forward selection (SFFS), a popular heuristic that gives good results in very short computational time. In all experiments, the proposed method obtained better or equal results in similar or even smaller computational time. (C) 2009 Elsevier Ltd. All rights reserved.
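The paper's own algorithm and lattice properties are not reproduced here; the sketch below shows only the generic pruning idea the U-shape enables: along any chain of nested subsets, once the cost starts to rise, every larger subset on that chain can be discarded. The toy cost function and the depth-first enumeration are assumptions for illustration.

```python
def u_cost(subset, target=frozenset({0, 2})):
    """Toy U-shaped cost: penalizes both missing and spurious features,
    so it decreases then increases along any chain through the target."""
    return len(target - subset) + len(subset - target)

def branch_and_bound(n_features):
    """Depth-first search of the Boolean lattice that prunes a chain
    as soon as the cost rises (the U-shape guarantees no better
    subset lies further up that chain)."""
    best = [frozenset(), u_cost(frozenset())]

    def grow(subset, next_feature, cost):
        if cost < best[1]:
            best[0], best[1] = subset, cost
        for f in range(next_feature, n_features):
            child = subset | {f}
            child_cost = u_cost(child)
            if child_cost <= cost:      # still descending on this chain
                grow(child, f + 1, child_cost)
            # else: U-shape => all supersets on this chain cost more; prune

    grow(frozenset(), 0, u_cost(frozenset()))
    return best[0], best[1]

best_subset, best_cost = branch_and_bound(5)
```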
Abstract:
Large-scale simulations of parts of the brain using detailed neuronal models to improve our understanding of brain function are becoming a reality with the use of supercomputers and large clusters. However, the high acquisition and maintenance costs of these computers, including physical space, air conditioning, and electrical power, limit the number of simulations of this kind that scientists can perform. Modern commodity graphics cards based on the CUDA platform contain graphics processing units (GPUs) composed of hundreds of processors that can simultaneously execute thousands of threads and thus constitute a low-cost solution for many high-performance computing applications. In this work, we present a CUDA algorithm that enables the execution, on multiple GPUs, of simulations of large-scale networks composed of biologically realistic Hodgkin-Huxley neurons. The algorithm represents each neuron as a CUDA thread, which solves the set of coupled differential equations that model each neuron. Communication among neurons located on different GPUs is coordinated by the CPU. We obtained speedups of 40 for the simulation of 200k neurons receiving random external input, and speedups of 9 for a network with 200k neurons and 20M neuronal connections, on a single computer with two graphics boards with two GPUs each, compared with a modern quad-core CPU. Copyright (C) 2010 John Wiley & Sons, Ltd.
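The one-thread-per-neuron mapping can be mimicked on a CPU by vectorizing the state update across neurons, with each array lane playing the role of a CUDA thread. The sketch below integrates independent Hodgkin-Huxley cells with forward Euler, using the standard squid-axon parameters; synaptic connectivity, multi-GPU partitioning, and CPU-coordinated communication are omitted, and the time step and input current are illustrative choices.

```python
import numpy as np

def simulate_hh(n_neurons=64, t_ms=50.0, dt=0.01, i_ext=10.0):
    """Forward-Euler integration of n_neurons independent Hodgkin-Huxley
    cells, vectorized across neurons the way the CUDA kernel assigns
    one neuron per thread (standard squid-axon parameters)."""
    c_m, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.4
    v = np.full(n_neurons, -65.0)                 # membrane potential, mV
    m = np.full(n_neurons, 0.05)                  # Na activation
    h = np.full(n_neurons, 0.6)                   # Na inactivation
    n = np.full(n_neurons, 0.32)                  # K activation
    v_peak = v.copy()
    for _ in range(int(t_ms / dt)):
        a_m = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
        b_m = 4.0 * np.exp(-(v + 65.0) / 18.0)
        a_h = 0.07 * np.exp(-(v + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
        a_n = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
        b_n = 0.125 * np.exp(-(v + 65.0) / 80.0)
        i_ion = (g_na * m**3 * h * (v - e_na)
                 + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        v = v + dt * (i_ext - i_ion) / c_m
        m = m + dt * (a_m * (1.0 - m) - b_m * m)
        h = h + dt * (a_h * (1.0 - h) - b_h * h)
        n = n + dt * (a_n * (1.0 - n) - b_n * n)
        v_peak = np.maximum(v_peak, v)
    return v, v_peak

v_final, v_peak = simulate_hh()
```

With a suprathreshold input current the cells fire, so the recorded peak voltage overshoots 0 mV.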
Abstract:
One of the key issues in e-learning environments is the possibility of creating and evaluating exercises. However, the lack of tools supporting the authoring and automatic checking of exercises for specific topics (e.g., geometry) drastically reduces the advantages of using e-learning environments on a larger scale, as is common in Brazil. This paper describes an algorithm, and a tool based on it, designed for the authoring and automatic checking of geometry exercises. The algorithm dynamically compares the distances between the geometric objects of the student's solution and the template solution provided by the author of the exercise. Each solution is a geometric construction, which is considered a function receiving geometric objects (input) and returning other geometric objects (output). Thus, for a given problem, if we know one function (construction) that solves the problem, we can compare it with any other function to check whether they are equivalent. Two functions are equivalent if, and only if, they have the same output when the same input is applied. If the student's solution is equivalent to the template solution, then we consider the student's solution correct. Our software provides both authoring and checking tools that work directly on the Internet, together with learning management systems. These tools are implemented using the dynamic geometry software iGeom, which has been used in a geometry course since 2004 and has a successful track record in the classroom. Empowered with these new features, iGeom simplifies teachers' tasks, solves non-trivial problems in student solutions, and helps increase student motivation by providing feedback in real time. (c) 2008 Elsevier Ltd. All rights reserved.
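The equivalence test described above, comparing the outputs of two constructions on the same inputs, can be sketched as a randomized check. Both constructions below (a template and a "student" recipe for the midpoint of a segment) and the tolerance are invented for illustration; iGeom's actual object comparison is richer than this.

```python
import math
import random

def template_midpoint(ax, ay, bx, by):
    """Template construction: midpoint of segment AB."""
    return ((ax + bx) / 2.0, (ay + by) / 2.0)

def student_midpoint(ax, ay, bx, by):
    """Student construction: A plus half the vector AB
    (a different recipe that should be equivalent)."""
    return (ax + (bx - ax) / 2.0, ay + (by - ay) / 2.0)

def constructions_equivalent(f, g, trials=100, tol=1e-9):
    """Probabilistic equivalence test in the spirit of the paper:
    feed both constructions the same random inputs and compare the
    distances between their output objects."""
    rng = random.Random(42)
    for _ in range(trials):
        args = [rng.uniform(-10, 10) for _ in range(4)]
        (x1, y1), (x2, y2) = f(*args), g(*args)
        if math.hypot(x1 - x2, y1 - y2) > tol:
            return False
    return True

same = constructions_equivalent(template_midpoint, student_midpoint)
```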
Abstract:
Given two strings A and B of lengths n_a and n_b, n_a <= n_b, respectively, the all-substrings longest common subsequence (ALCS) problem obtains, for every substring B' of B, the length of the longest string that is a subsequence of both A and B'. The ALCS problem has many applications, such as finding approximate tandem repeats in strings, solving the circular alignment of two strings, and aligning one string with several others that share a common substring. We present an algorithm that prepares the basic data structure for ALCS queries in O(n_a n_b) time and O(n_a + n_b) space. After this preparation, a matrix of size O(n_b^2) can be built that allows any LCS length to be retrieved in constant time. Some trade-offs between the space required and the querying time are discussed. To our knowledge, this is the first algorithm in the literature for the ALCS problem. (C) 2007 Elsevier B.V. All rights reserved.
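For reference, the single-pair LCS length that the ALCS structure generalizes is computed by the classic dynamic program, sketched below in O(n_a n_b) time and O(n_a) space (this is the textbook recurrence, not the paper's ALCS algorithm):

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of a and b via the
    classic DP, keeping only the previous row of the table."""
    prev = [0] * (len(a) + 1)
    for ch_b in b:
        curr = [0]
        for i, ch_a in enumerate(a, 1):
            if ch_a == ch_b:
                curr.append(prev[i - 1] + 1)      # extend a match
            else:
                curr.append(max(prev[i], curr[i - 1]))
        prev = curr
    return prev[len(a)]

length = lcs_length("ABCBDAB", "BDCABA")  # classic example, LCS length 4
```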
Abstract:
In chemical analyses performed by laboratories, one faces the problem of determining the concentration of a chemical element in a sample. In practice, the problem is handled with the so-called linear calibration model, which assumes that the errors associated with the independent variables are negligible compared with those in the dependent variable. In this work, a new linear calibration model is proposed, assuming that the independent variables are subject to heteroscedastic measurement errors. A simulation study is carried out to verify some properties of the estimators derived for the new model, and the usual calibration model is also considered for comparison with the new approach. Three applications are presented to assess the performance of the new approach. Copyright (C) 2010 John Wiley & Sons, Ltd.
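The paper's estimator is not reproduced in the abstract; as one simple way to accommodate heteroscedastic errors in a calibration line, the sketch below fits y = a + b*x by weighted least squares with per-point weights 1/sigma^2 and then inverts the line to estimate a concentration. The data and weights are invented, and this is a generic sketch, not the model proposed in the paper.

```python
import numpy as np

def weighted_calibration(x, y, sigma):
    """Weighted least-squares fit y = a + b*x with per-point standard
    deviations sigma (heteroscedastic weights w = 1/sigma^2)."""
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    X = np.column_stack([np.ones_like(x, dtype=float), x])
    xtw = X.T * w                         # scales each observation by w
    # solve the weighted normal equations (X^T W X) beta = X^T W y
    beta = np.linalg.solve(xtw @ X, xtw @ y)
    return beta                           # [intercept, slope]

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # known concentrations
y = 0.5 + 2.0 * x                          # noise-free responses, for clarity
sigma = np.array([0.1, 0.1, 0.2, 0.4, 0.8])  # heteroscedastic errors
intercept, slope = weighted_calibration(x, y, sigma)

# inverse prediction (the calibration step): concentration for a new reading
y0 = 6.5
x0 = (y0 - intercept) / slope
```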
Abstract:
We report an effective approach for the construction of a biomimetic sensor of multicopper oxidases, obtained by immobilizing a cyclic-tetrameric copper(II) species containing the ligand (4-imidazolyl)ethylene-2-amino-1-ethylpyridine (apyhist) in a Nafion (R) membrane on a vitreous carbon electrode surface. This complex provides a tetranuclear arrangement of copper ions that allows an effective reduction of oxygen to water in a catalytic cycle involving four electrons. The electrochemical reduction of oxygen was studied in pH 9.0 buffer solution using cyclic voltammetry, chronoamperometry, rotating disk electrode voltammetry, and scanning electrochemical microscopy. The mediator shows good electrocatalytic ability for the reduction of O(2) at pH 9.0, with a 350 mV reduction in overpotential and an increased current response in comparison with the results obtained at a bare glassy carbon electrode. The heterogeneous rate constant (k'(ME)) for the reduction of O(2) at the modified electrode was determined using a Koutecky-Levich plot. In addition, the charge transport rate through the coating and the apparent diffusion coefficient of O(2) in the modifier film were also evaluated. The overall process was found to be governed by charge transport through the coating, occurring at the interface or in a finite layer at the electrode/coating interface. This study opens the way for the development of bioelectronic devices based on molecular recognition and self-organization. (C) 2010 Elsevier Ltd. All rights reserved.
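The Koutecky-Levich analysis mentioned above fits 1/i = 1/i_k + (1/B) * omega^(-1/2) from rotating-disk currents at several rotation rates; the intercept yields the kinetic current i_k, from which a heterogeneous rate constant can be extracted. The sketch below performs the generic fit on synthetic data generated from known parameters; all numbers are illustrative, not the paper's measurements.

```python
import numpy as np

def koutecky_levich(omega, current):
    """Generic Koutecky-Levich fit: regress 1/i on omega^(-1/2) and
    return the kinetic current i_k (from the intercept) and the
    Levich slope B (from the slope)."""
    x = 1.0 / np.sqrt(omega)             # omega in rad/s
    y = 1.0 / current
    slope, intercept = np.polyfit(x, y, 1)
    return 1.0 / intercept, 1.0 / slope  # i_k, B

# synthetic rotating-disk data generated from known i_k = 2.0, B = 0.5
omega = np.array([100.0, 400.0, 900.0, 1600.0, 2500.0])
i_obs = 1.0 / (1.0 / 2.0 + 1.0 / (0.5 * np.sqrt(omega)))
i_k, B = koutecky_levich(omega, i_obs)
```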
Abstract:
A dosing algorithm including genetic (VKORC1 and CYP2C9 genotypes) and nongenetic factors (age, weight, therapeutic indication, and cotreatment with amiodarone or simvastatin) explained 51% of the variance in stable weekly warfarin doses in 390 patients attending an anticoagulant clinic in a Brazilian public hospital. The VKORC1 3673G>A genotype was the most important predictor of warfarin dose, with a partial R(2) value of 23.9%. Replacing the VKORC1 3673G>A genotype with the VKORC1 diplotype did not increase the algorithm's predictive power. We suggest that three other single-nucleotide polymorphisms (SNPs) (5808T>G, 6853G>C, and 9041G>A) that are in strong linkage disequilibrium (LD) with 3673G>A would be equally good predictors of the warfarin dose requirement. The algorithm's predictive power was similar across the self-identified "race/color" subsets. "Race/color" was not associated with stable warfarin dose in the multiple regression model, although the required warfarin dose was significantly lower (P = 0.006) in white (29 +/- 13 mg/week, n = 196) than in black patients (35 +/- 15 mg/week, n = 76).
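Dosing algorithms of this kind are typically multiple linear regressions of stable dose on genotype and clinical covariates. The sketch below fits such a model on synthetic data whose coefficient signs merely mirror the qualitative findings (variant alleles and amiodarone lower the dose); every number, predictor coding, and effect size is an invented illustration, not the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 390
# hypothetical predictors mirroring the abstract's covariates
vkorc1_a_alleles = rng.integers(0, 3, n)   # 0/1/2 copies of 3673A
cyp2c9_variant = rng.integers(0, 2, n)     # variant-carrier flag
age = rng.uniform(20, 85, n)
weight = rng.uniform(45, 110, n)
amiodarone = rng.integers(0, 2, n)

# synthetic weekly dose (mg/week) with illustrative effect sizes
dose = (40.0 - 7.0 * vkorc1_a_alleles - 5.0 * cyp2c9_variant
        - 0.15 * age + 0.10 * weight - 6.0 * amiodarone
        + rng.normal(0.0, 5.0, n))

# ordinary least squares fit and explained variance (R^2)
X = np.column_stack([np.ones(n), vkorc1_a_alleles, cyp2c9_variant,
                     age, weight, amiodarone])
beta, *_ = np.linalg.lstsq(X, dose, rcond=None)
resid = dose - X @ beta
r2 = 1.0 - resid.var() / dose.var()
```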
Abstract:
Many factors can affect the quality of diesel oil, in particular degradation processes that are directly related to some organosulfur compounds. During degradation, these compounds are oxidized into their corresponding sulfonic acids, generating a strongly acidic content. p-Toluenesulfonic acid analysis was performed using linear sweep voltammetry with a platinum ultramicroelectrode in aqueous solution containing 3 mol L(-1) potassium chloride. An extraction step was introduced prior to the voltammetric detection in order to avoid the adsorption of organic molecules, which inhibits the electrochemical response. The extraction step promoted the transfer of sulfonic acid from the diesel oil to an aqueous phase. The method was accurate and reproducible, with detection and quantification limits of 5 ppm and 15 ppm, respectively. Recovery of sulfonic acid was around 90%.
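The abstract reports detection and quantification limits but does not state how they were derived; a common convention is 3*sigma and 10*sigma of replicate blank signals divided by the calibration slope. The sketch below applies that standard convention to invented blank data, purely as an illustration of the calculation.

```python
import numpy as np

def detection_limits(blank_signals, slope):
    """Detection and quantification limits by the common 3*sigma and
    10*sigma convention, from replicate blank measurements and the
    calibration slope (signal per concentration unit)."""
    sigma = np.std(blank_signals, ddof=1)   # sample std of the blanks
    lod = 3.0 * sigma / slope
    loq = 10.0 * sigma / slope
    return lod, loq

blanks = np.array([0.101, 0.098, 0.103, 0.100, 0.099, 0.102])  # toy data
lod, loq = detection_limits(blanks, slope=0.0012)  # slope: signal per ppm
```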
Abstract:
Composite electrodes were prepared using graphite powder and silicone rubber in different compositions. The use of such hydrophobic materials was intended to diminish the swelling observed in other cases when electrodes are used in aqueous solutions for long periods. The composite was characterized with respect to response reproducibility, ohmic resistance, thermal behavior, and active area. The voltammetric response toward analytes with known voltammetric behavior was also evaluated, always in comparison with glassy carbon. The 70% (graphite, w/w) composite electrode was used in the quantitative determination of hydroquinone (HQ) in a DPV procedure, in which a detection limit of 5.1 x 10(-8) mol L-1 was observed. HQ was determined in a photographic developer sample with errors lower than 1% relative to the label value. (c) 2007 Elsevier B.V. All rights reserved.
Abstract:
A new composite electrode based on multiwall carbon nanotubes (MWCNT) and silicone rubber (SR) was developed and applied to the determination of propranolol in pharmaceutical formulations. The effect of using MWCNT/graphite mixtures in different proportions was also investigated. Cyclic voltammetry and electrochemical impedance spectroscopy were used for the electrochemical characterization of different electrode compositions. Propranolol was determined using MWCNT/SR 70% (m/m) electrodes, with linear dynamic ranges up to 7.0 mu mol L(-1) by differential pulse voltammetry and up to 5.4 mu mol L(-1) by square wave voltammetry, with LODs of 0.12 and 0.078 mu mol L(-1), respectively. Analysis of commercial samples agreed with results obtained by the official spectrophotometric method. The electrode is mechanically robust, gave reproducible results, and has a long useful life.
Abstract:
Microfluidic paper-based analytical devices (mu PADs) are a new class of point-of-care diagnostic devices that are inexpensive, easy to use, and designed specifically for use in developing countries. (To listen to a podcast about this feature, please go to the Analytical Chemistry multimedia page at pubs.acs.org/page/ancham/audio/index.html.)
Abstract:
The giant extracellular hemoglobin of Glossoscolex paulistus (HbGp) is composed of subunits containing heme groups with molecular masses (M) in the range of 15 to 19 kDa, monomers of 16 kDa (d), and trimers of 51 to 52 kDa (abc), linked by nonheme structures named linkers of 24 to 32 kDa (L). HbGp is homologous to Lumbricus terrestris hemoglobin (HbLt). Several reports propose an M for HbLt in the range of 3.6 to 4.4 MDa. Based on subunit M values determined by mass spectrometry and assuming an HbGp stoichiometry of 12(abcd)(3)L(3) (Vinogradov model) plus 144 heme groups, an M value for the HbGp oligomer of 3560 kDa can be predicted. This value is nearly 500 kDa higher than the only HbGp M value reported in the literature. In the current work, sedimentation velocity analytical ultracentrifugation (AUC) experiments were performed to obtain M for HbGp in the oxy and cyano-met forms. s(20,w)(0) values of 58.1 +/- 0.2 S and 59.6 +/- 0.2 S, respectively, were obtained for the two oxidation forms. The ratio between the sedimentation and diffusion coefficients supplied M values of approximately 3600 +/- 100 and 3700 +/- 100 kDa for the oxy and cyano-met HbGp forms, respectively. An independent determination of the partial specific volume, V(bar), of HbGp was performed based on density measurements, providing a value of 0.764 +/- 0.008, in excellent agreement with the estimates from the SEDFIT software. Our results show full consistency between the M obtained by AUC and a recent partial characterization by mass spectrometry. Therefore, HbGp possesses an M very close to that of HbLt, suggesting an oligomeric assembly in agreement with the Vinogradov model. (c) 2008 Elsevier Inc. All rights reserved.
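The molar mass obtained from the ratio of sedimentation and diffusion coefficients follows from the Svedberg equation, M = s R T / (D (1 - vbar * rho)). The sketch below plugs in the abstract's s and vbar; the diffusion coefficient D, solvent density rho, and temperature are assumed values chosen only to show that numbers of this magnitude reproduce an M near 3600 kDa, not measurements from the paper.

```python
# Svedberg equation: M = s * R * T / (D * (1 - vbar * rho))
R = 8.314e7      # gas constant, erg / (mol K)
T = 293.15       # 20 degrees C, the reference condition for s(20,w)
s = 58.1e-13     # sedimentation coefficient, 58.1 S (oxy-HbGp, from abstract)
D = 1.66e-7      # assumed diffusion coefficient, cm^2/s (illustrative)
vbar = 0.764     # partial specific volume from the abstract, cm^3/g assumed
rho = 0.998      # water density at 20 C, g/cm^3 (assumed solvent)

M = s * R * T / (D * (1.0 - vbar * rho))   # g/mol
M_kda = M / 1000.0                         # ~3.6 MDa, matching the AUC result
```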