871 results for Rejection-sampling Algorithm
Abstract:
We present the first results of a study investigating the processes that control the concentrations and sources of Pb and particulate matter in the atmosphere of Sao Paulo City, Brazil. Aerosols were collected with high temporal resolution (3 hours) during a four-day period in July 2005. The highest Pb concentrations measured coincided with large fireworks during celebration events and were associated with heavy traffic. Our high-resolution data highlight the impact that a single transient event can have on air quality, even in a megacity. Under meteorological conditions non-conducive to pollutant dispersion, Pb and particulate matter concentrations accumulated during the night, leading to the highest concentrations in aerosols collected early in the morning of the following day. The stable isotopes of Pb suggest that emissions from traffic remain an important source of Pb in Sao Paulo City due to the large vehicle fleet, despite the low Pb concentrations in fuels. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
This paper presents the formulation of a combinatorial optimization problem with the following characteristics: (i) the search space is the power set of a finite set, structured as a Boolean lattice; (ii) the cost function forms a U-shaped curve when applied to any lattice chain. This formulation applies to feature selection in the context of pattern recognition. The known approaches to this problem are branch-and-bound algorithms and heuristics that explore the search space only partially. Branch-and-bound algorithms are equivalent to a full search, while heuristics are not. This paper presents a branch-and-bound algorithm that differs from the known ones by exploiting the lattice structure and the U-shaped chain curves of the search space. The main contribution of this paper is the architecture of this algorithm, which is based on the representation and exploration of the search space through new lattice properties proven here. Several experiments with well-known public data indicate the superiority of the proposed method over sequential floating forward selection (SFFS), a popular heuristic that gives good results in very short computational time. In all experiments, the proposed method obtained better or equal results in similar or even smaller computational time. (C) 2009 Elsevier Ltd. All rights reserved.
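As an illustration of how the U-shaped chain property can be exploited (a minimal sketch, not the authors' branch-and-bound algorithm; the cost function here is a hypothetical stand-in for a feature-selection criterion), a walk down a single chain of the Boolean lattice can stop as soon as the cost increases, because a U-shaped curve cannot decrease again further along that chain:

def chain_minimum(features, cost):
    """Follow one greedy chain of the subset lattice, stopping when the
    U-shaped cost starts to rise (illustrative pruning idea only)."""
    current = frozenset()
    best_set, best_cost = current, cost(current)
    while len(current) < len(features):
        # extend the chain by the single feature that lowers the cost the most
        candidates = [current | {f} for f in features if f not in current]
        nxt = min(candidates, key=cost)
        if cost(nxt) > cost(current):
            break  # U-shape: the cost along this chain will not decrease again
        current = nxt
        if cost(current) < best_cost:
            best_set, best_cost = current, cost(current)
    return best_set, best_cost

if __name__ == "__main__":
    feats = list(range(6))
    # toy U-shaped cost: penalizes subsets that are too small or too large
    toy_cost = lambda s: (len(s) - 3) ** 2 + 0.01 * sum(s)
    print(chain_minimum(feats, toy_cost))

A full branch-and-bound would repeat this kind of pruning over many chains while keeping a global bound; the sketch only shows why the U-shape assumption licenses cutting a chain early.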
Abstract:
Large-scale simulations of parts of the brain using detailed neuronal models to improve our understanding of brain functions are becoming a reality with the use of supercomputers and large clusters. However, the high acquisition and maintenance cost of these computers, including physical space, air conditioning, and electrical power, limits the number of simulations of this kind that scientists can perform. Modern commodity graphics cards based on the CUDA platform contain graphics processing units (GPUs) composed of hundreds of processors that can simultaneously execute thousands of threads and thus constitute a low-cost solution for many high-performance computing applications. In this work, we present a CUDA algorithm that enables the execution, on multiple GPUs, of simulations of large-scale networks composed of biologically realistic Hodgkin-Huxley neurons. The algorithm represents each neuron as a CUDA thread, which solves the set of coupled differential equations that model that neuron. Communication among neurons located on different GPUs is coordinated by the CPU. We obtained speedups of 40 for the simulation of 200k neurons receiving random external input, and speedups of 9 for a network with 200k neurons and 20M neuronal connections, on a single computer with two graphics boards holding two GPUs each, compared with a modern quad-core CPU. Copyright (C) 2010 John Wiley & Sons, Ltd.
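The per-neuron computation can be sketched as follows. This is a minimal NumPy sketch under my own assumptions (standard Hodgkin-Huxley constants, forward-Euler integration, no synaptic coupling); in the paper each element of these state arrays corresponds to one CUDA thread, and the inter-GPU communication handled by the CPU is omitted here.

import numpy as np

def hh_step(V, m, h, n, I_ext, dt=0.01):
    """One forward-Euler step of the standard Hodgkin-Huxley equations,
    vectorized over a population of neurons."""
    gNa, gK, gL = 120.0, 36.0, 0.3      # mS/cm^2
    ENa, EK, EL = 50.0, -77.0, -54.4    # mV
    Cm = 1.0                            # uF/cm^2
    a_m = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(V + 65.0) / 18.0)
    a_h = 0.07 * np.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    a_n = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(V + 65.0) / 80.0)
    I_ion = (gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL))
    V = V + dt * (I_ext - I_ion) / Cm
    m = m + dt * (a_m * (1.0 - m) - b_m * m)
    h = h + dt * (a_h * (1.0 - h) - b_h * h)
    n = n + dt * (a_n * (1.0 - n) - b_n * n)
    return V, m, h, n

# toy run: 1000 uncoupled neurons driven by random external current
N = 1000
V = np.full(N, -65.0); m = np.full(N, 0.05); h = np.full(N, 0.6); n = np.full(N, 0.32)
rng = np.random.default_rng(0)
for _ in range(1000):
    V, m, h, n = hh_step(V, m, h, n, I_ext=rng.uniform(0.0, 10.0, N))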
Abstract:
One of the key issues in e-learning environments is the possibility of creating and evaluating exercises. However, the lack of tools supporting the authoring and automatic checking of exercises for specific topics (e.g., geometry) drastically reduces the advantages of using e-learning environments on a larger scale, as usually happens in Brazil. This paper describes an algorithm, and a tool based on it, designed for the authoring and automatic checking of geometry exercises. The algorithm dynamically compares the distances between the geometric objects of the student's solution and the template solution provided by the author of the exercise. Each solution is a geometric construction, which is considered a function receiving geometric objects (input) and returning other geometric objects (output). Thus, for a given problem, if we know one function (construction) that solves the problem, we can compare it to any other function to check whether they are equivalent. Two functions are equivalent if, and only if, they have the same output when the same input is applied. If the student's solution is equivalent to the template solution, then we consider the student's solution correct. Our software utility provides both authoring and checking tools that work directly on the Internet, together with learning management systems. These tools are implemented using the dynamic geometry software iGeom, which has been used in a geometry course since 2004 and has a successful track record in the classroom. Empowered with these new features, iGeom simplifies teachers' tasks, solves non-trivial problems in student solutions and helps to increase student motivation by providing feedback in real time. (c) 2008 Elsevier Ltd. All rights reserved.
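The equivalence test can be sketched as follows (a hypothetical illustration, not iGeom's implementation): apply the same randomly generated inputs to both constructions and compare the resulting objects within a numerical tolerance.

import math, random

def equivalent(construction_a, construction_b, sample_inputs, tol=1e-6):
    """Treat two constructions as equivalent if they return (numerically)
    the same output points for the same inputs."""
    for pts in sample_inputs:
        out_a, out_b = construction_a(*pts), construction_b(*pts)
        if len(out_a) != len(out_b):
            return False
        for (xa, ya), (xb, yb) in zip(out_a, out_b):
            if math.hypot(xa - xb, ya - yb) > tol:
                return False
    return True

# toy example: two different constructions of the midpoint of a segment
template = lambda p, q: [((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)]
student  = lambda p, q: [(p[0] + (q[0] - p[0]) / 2.0, p[1] + (q[1] - p[1]) / 2.0)]

rng = random.Random(42)
inputs = [((rng.uniform(-10, 10), rng.uniform(-10, 10)),
           (rng.uniform(-10, 10), rng.uniform(-10, 10))) for _ in range(100)]
print(equivalent(template, student, inputs))  # True: both compute the midpoint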
Abstract:
Given two strings A and B of lengths n(a) and n(b), n(a) <= n(b), respectively, the all-substrings longest common subsequence (ALCS) problem obtains, for every substring B' of B, the length of the longest string that is a subsequence of both A and B'. The ALCS problem has many applications, such as finding approximate tandem repeats in strings, solving the circular alignment of two strings and finding the alignment of one string with several others that share a common substring. We present an algorithm to prepare the basic data structure for ALCS queries that takes O(n(a)n(b)) time and O(n(a) + n(b)) space. After this preparation, it is possible to build a matrix of size O(n(b)(2)) that allows any LCS length to be retrieved in constant time. Some trade-offs between the space required and the querying time are discussed. To our knowledge, this is the first algorithm in the literature for the ALCS problem. (C) 2007 Elsevier B.V. All rights reserved.
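For reference, a brute-force baseline (my own sketch, not the paper's algorithm) answers every ALCS query with the classic LCS dynamic program; it makes the O(n(a)n(b)) preprocessing result easy to appreciate, since this naive approach costs O(n(a)n(b)^3).

def lcs_length(a, b):
    """Classic O(|a|*|b|) dynamic program for the LCS length of two strings."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def alcs_bruteforce(a, b):
    """Answer every ALCS query directly: LCS(a, b[i:j]) for all substrings of b."""
    return {(i, j): lcs_length(a, b[i:j])
            for i in range(len(b)) for j in range(i + 1, len(b) + 1)}

print(alcs_bruteforce("gac", "agcat")[(0, 5)])  # 2, e.g. the common subsequence "ga"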
Abstract:
Mixed models may be defined with or without reference to sampling, and can be used to predict realized random effects, as when estimating the latent values of study subjects measured with response error. When the model is specified without reference to sampling, a simple mixed model includes two random variables, one stemming from an exchangeable distribution of latent values of study subjects and the other, from the study subjects' response error distributions. Positive probabilities are assigned to both potentially realizable responses and artificial responses that are not potentially realizable, resulting in artificial latent values. In contrast, finite population mixed models represent the two-stage process of sampling subjects and measuring their responses, where positive probabilities are only assigned to potentially realizable responses. A comparison of the estimators over the same potentially realizable responses indicates that the optimal linear mixed model estimator (the usual best linear unbiased predictor, BLUP) is often (but not always) more accurate than the comparable finite population mixed model estimator (the FPMM BLUP). We examine a simple example and provide the basis for a broader discussion of the role of conditioning, sampling, and model assumptions in developing inference.
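As a reminder of what the "usual BLUP" is in the simplest setting (a one-way balanced model is my own assumption; the paper's framework is more general), the realized latent value of subject i is predicted by shrinking that subject's mean toward the overall mean:

y_{ij} = \mu + b_i + e_{ij}, \qquad b_i \sim N(0, \sigma_b^2), \qquad e_{ij} \sim N(0, \sigma_e^2),

\hat{b}_i = \frac{\sigma_b^2}{\sigma_b^2 + \sigma_e^2 / n_i}\,(\bar{y}_{i\cdot} - \hat{\mu}), \qquad \text{predicted latent value} = \hat{\mu} + \hat{b}_i.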
Abstract:
Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is interest in studying latent variables. These latent variables are directly considered in the item response models (IRM) and are usually called latent traits. A usual assumption for parameter estimation of the IRM, considering one group of examinees, is that the latent traits are random variables following a standard normal distribution. However, many works suggest that this assumption does not hold in many cases. Furthermore, when this assumption is violated, the parameter estimates tend to be biased and misleading inferences can be drawn. Therefore, it is important to model the distribution of the latent traits properly. In this paper we present an alternative latent trait model based on the so-called skew-normal distribution; see Genton (2004). We used the centred parameterization proposed by Azzalini (1985). This approach ensures model identifiability, as pointed out by Azevedo et al. (2009b). Also, a Metropolis-Hastings within Gibbs sampling (MHWGS) algorithm was built for parameter estimation using an augmented data approach. A simulation study was performed in order to assess parameter recovery under the proposed model and estimation method, and the effect of the asymmetry level of the latent trait distribution on parameter estimation. In addition, our approach was compared with other estimation methods that assume a symmetric normal distribution for the latent traits. The results indicated that the proposed algorithm recovers all parameters properly. Specifically, the greater the asymmetry level, the better the performance of our approach compared with the other approaches, mainly in the presence of small sample sizes (numbers of examinees). Furthermore, we analyzed a real data set which shows indications of asymmetry in the latent trait distribution. The results obtained using our approach confirmed the presence of strong negative asymmetry in the latent trait distribution. (C) 2010 Elsevier B.V. All rights reserved.
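To illustrate the general MHWGS structure only (not the authors' IRT sampler; the toy model, priors, and proposal scale below are my assumptions), the following sketch alternates a conjugate Gibbs draw with a random-walk Metropolis-Hastings update, which is the pattern used when one full conditional has no closed form.

import math
import numpy as np

def log_skew_normal(x, loc=0.0, scale=1.0, shape=3.0):
    """Log-density of the skew-normal distribution (Azzalini 1985), used here
    only as a toy non-conjugate prior with fixed asymmetry."""
    z = (x - loc) / scale
    return (math.log(2.0) - math.log(scale) - 0.5 * z * z
            - 0.5 * math.log(2.0 * math.pi)
            + math.log(0.5 * math.erfc(-shape * z / math.sqrt(2.0))))

def mh_within_gibbs(y, n_iter=5000, seed=1):
    """MH-within-Gibbs for the toy model y_i ~ N(theta, sigma2), with a
    skew-normal prior on theta and an InvGamma(2, 1) prior on sigma2."""
    rng = np.random.default_rng(seed)
    n = len(y)
    theta, sigma2 = float(np.mean(y)), float(np.var(y) + 1e-6)
    draws = []
    for _ in range(n_iter):
        # Gibbs step: sigma2 | theta, y is inverse-gamma (conjugate update)
        a_post = 2.0 + n / 2.0
        b_post = 1.0 + 0.5 * float(np.sum((y - theta) ** 2))
        sigma2 = b_post / rng.gamma(a_post)  # draw from InvGamma(a_post, b_post)
        # MH step: theta | sigma2, y (skew-normal prior, no closed form)
        prop = theta + rng.normal(0.0, 0.3)
        def log_post(t):
            return -0.5 * float(np.sum((y - t) ** 2)) / sigma2 + log_skew_normal(t)
        if math.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop
        draws.append((theta, sigma2))
    return np.array(draws)

if __name__ == "__main__":
    y = np.random.default_rng(0).normal(1.5, 1.0, size=50)
    samples = mh_within_gibbs(y)
    print(samples[1000:].mean(axis=0))  # posterior means of (theta, sigma2)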
Abstract:
The analytical determination of atmospheric pollutants still presents challenges due to the low-level concentrations (frequently in the mu g m(-3) range) and their variations with sampling site and time. In this work, a capillary membrane diffusion scrubber (CMDS) was scaled down to match capillary electrophoresis (CE), a quick separation technique that requires nothing more than some nanoliters of sample and, when combined with capacitively coupled contactless conductometric detection (C(4)D), is particularly favorable for ionic species that do not absorb in the UV-vis region, like the target analytes formaldehyde, formic acid, acetic acid and ammonium. The CMDS was coaxially assembled inside a PTFE tube and fed with acceptor phase (deionized water for species with a high Henry's constant, such as formaldehyde and carboxylic acids, or acidic solution for ammonia sampling, with equilibrium displacement to the non-volatile ammonium ion) at a low flow rate (8.3 nL s(-1)), while the sample was aspirated through the annular gap of the concentric tubes at 25 mL s(-1). A second unit, in all respects similar to the CMDS, was operated as a capillary membrane diffusion emitter (CMDE), generating a gas flow with known concentrations of ammonia for the evaluation of the CMDS. The fluids of the system were driven with inexpensive aquarium air pumps, and the collected samples were stored in vials cooled by a Peltier element. Complete protocols were developed for the analysis in air of NH(3), CH(3)COOH, HCOOH and, with a derivatization setup, CH(2)O, by associating the CMDS collection with determination by CE-C(4)D. The ammonia concentrations obtained by electrophoresis were checked against the reference spectrophotometric method based on Berthelot's reaction. Sensitivity enhancements of this reference method were achieved by using a modified Berthelot reaction, solenoid micro-pumps for liquid propulsion and a long optical path cell based on a liquid core waveguide (LCW). All techniques and methods of this work are in line with green analytical chemistry trends. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
This paper reports a method for the direct and simultaneous determination of Cr and Mn in alumina by slurry sampling graphite furnace atomic absorption spectrometry (SiS-SIMAAS) using niobium carbide (NbC) as a graphite platform modifier and sodium fluoride (NaF) as a matrix modifier. Nb (350 mu g) was thermally deposited on the platform surface, allowing the formation of NbC (m.p. 3500 degrees C), which minimizes the reaction between aluminium and the carbon of the pyrolytic platform and extends the graphite tube lifetime to up to 150 heating cycles. A 0.2 mol L(-1) NaF solution was used as matrix modifier to dissolve the alumina as a cryolite-based melt, allowing volatilization during the pyrolysis step. Sample masses (ca. 50 mg) were suspended in 30 mL of 2.0% (v/v) HNO(3). The slurry was manually homogenized before sampling. Aliquots of 20 mu L of analytical solutions and slurry samples were co-injected into the graphite tube with 20 mu L of the matrix modifier. Under the optimized heating program, the pyrolysis and atomization temperatures were 1300 degrees C and 2400 degrees C, respectively, and a step at 1000 degrees C was optimized to allow dissolution of the alumina as cryolite. The accuracy of the proposed method was evaluated by the analysis of standard reference materials. The concentrations found showed no statistical differences from the certified values at the 95% confidence level. Limits of detection were 66 ng g(-1) for Cr and 102 ng g(-1) for Mn, and the characteristic masses were 10 and 13 pg for Cr and Mn, respectively.
Abstract:
In situ fusion on a boat-type graphite platform has been used as sample pretreatment for the direct determination of Co, Cr and Mn in Portland cement by solid sampling graphite furnace atomic absorption spectrometry (SS-GF AAS). The 3-field Zeeman technique was adopted for background correction and to decrease the sensitivity during measurements. This strategy allowed working with up to 200 mu g of sample. The in situ fusion was accomplished using 10 mu L of a flux mixture of 4.0% m/v Na(2)CO(3) + 4.0% m/v ZnO + 0.1% m/v Triton (R) X-100 added over the cement sample and heated at 800 degrees C for 20 s. The resulting melt was completely dissolved with 10 mu L of 0.1% m/v HNO(3). Limits of detection were 0.11 mu g g(-1) for Co, 1.1 mu g g(-1) for Cr and 1.9 mu g g(-1) for Mn. The accuracy of the proposed method was evaluated by the analysis of certified reference materials. The values found presented no statistically significant differences compared to the certified values (Student's t-test, p<0.05). In general, the relative standard deviation was lower than 12% (n = 5). (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
Compared to other volatile carbonyl compounds present in outdoor air, formaldehyde (CH2O) is the most toxic, deserving more attention in terms of indoor and outdoor air quality legislation and control. The analytical determination of CH2O in air still presents challenges due to the low-level concentrations (in the sub-ppb range) and their variation with sampling site and time. Of the many available analytical methods for carbonyl compounds, the most widespread is the time-consuming collection in cartridges impregnated with 2,4-dinitrophenylhydrazine followed by analysis of the formed hydrazones by HPLC. The present work proposes the use of polypropylene hollow porous capillary fibers to achieve efficient CH2O collection. The Oxyphan (R) fiber (designed for blood oxygenation) was chosen for this purpose because it presents good mechanical resistance, a high density of very fine pores and a high ratio of collection area to volume of the acceptor fluid in the tube, all favorable for the development of an air sampling apparatus. The collector device consists of a Teflon pipe inside which a bundle of polypropylene microporous capillary membranes was introduced. While the acceptor passes at a low flow rate through the capillaries, the sampled air circulates around the fibers, impelled by a low-flow membrane pump (of the type used for aquarium aeration). The coupling of this sampling technique with the selective and quantitative determination of CH2O, in the form of hydroxymethanesulfonate (HMS) after derivatization with HSO3-, by capillary electrophoresis with capacitively coupled contactless conductivity detection (CE-C(4)D) enabled the development of a complete analytical protocol for CH2O evaluation in air. (C) 2008 Published by Elsevier B.V.
Abstract:
A fast and reliable method for the direct determination of iron in sand by solid sampling graphite furnace atomic absorption spectrometry was developed. A Zeeman-effect 3-field background corrector was used to decrease the sensitivity of the spectrometer measurements. This strategy allowed working with up to 200 mu g of sample, thus improving representativeness. Using samples with small particle sizes (1-50 mu m) and adding 5 mu g of Pd as chemical modifier, it was possible to obtain suitable calibration curves with aqueous reference solutions. The pyrolysis and atomization temperatures for the optimized heating program were 1400 and 2500 degrees C, respectively. The characteristic mass, based on integrated absorbance, was 56 pg, and the detection limit, calculated from the variability of 20 consecutive measurements of the platform inserted without sample, was 32 pg. The accuracy of the procedure was checked by the analysis of two reference materials (IPT 62 and 63). The determined concentrations were in agreement with the recommended values (95% confidence level). Five sand samples were analyzed, and good agreement (95% confidence level) was observed between the proposed method and conventional flame atomic absorption spectrometry. The relative standard deviations were lower than 25% (n = 5). The tube and boat platform lifetimes were around 1000 and 250 heating cycles, respectively.
Abstract:
A method using a solid sampling device for the direct determination of Cr and Ni in fresh and used lubricating oils by graphite furnace atomic absorption spectrometry is proposed. The high organic content of the samples was minimized by an in situ digestion step at 400 degrees C with an oxidant mixture of 1.0% (v v(-1)) HNO3 + 15% (v v(-1)) H2O2 + 0.1% (m v(-1)) Triton X-100. The 3-field mode Zeeman effect allowed spectrometer calibration up to 5 ng of Cr and Ni. The quantification limits were 0.86 mu g g(-1) for Cr and 0.82 mg g(-1) for Ni. The analysis of reference materials showed no statistically significant difference between the recommended values and those obtained by the proposed method.
Abstract:
A dosing algorithm including genetic (VKORC1 and CYP2C9 genotypes) and nongenetic factors (age, weight, therapeutic indication, and cotreatment with amiodarone or simvastatin) explained 51% of the variance in stable weekly warfarin doses in 390 patients attending an anticoagulant clinic in a Brazilian public hospital. The VKORC1 3673G>A genotype was the most important predictor of warfarin dose, with a partial R(2) value of 23.9%. Replacing the VKORC1 3673G>A genotype with the VKORC1 diplotype did not increase the algorithm's predictive power. We suggest that three other single-nucleotide polymorphisms (SNPs) (5808T>G, 6853G>C, and 9041G>A) that are in strong linkage disequilibrium (LD) with 3673G>A would be equally good predictors of the warfarin dose requirement. The algorithm's predictive power was similar across the self-identified "race/color" subsets. "Race/color" was not associated with stable warfarin dose in the multiple regression model, although the required warfarin dose was significantly lower (P = 0.006) in white patients (29 +/- 13 mg/week, n = 196) than in black patients (35 +/- 15 mg/week, n = 76).
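For illustration only, here is a sketch of the kind of regression-based dosing algorithm described; the covariate names, their coding, and the synthetic data below are hypothetical placeholders, not the authors' fitted model or coefficients.

import numpy as np

def fit_dose_model(X, dose):
    """Fit dose ~ X by ordinary least squares and return the coefficients."""
    X1 = np.column_stack([np.ones(len(X)), X])       # add an intercept column
    beta, *_ = np.linalg.lstsq(X1, dose, rcond=None)
    return beta

def predict_dose(beta, x):
    """Predicted stable weekly dose for one patient's covariate vector."""
    return float(beta[0] + np.dot(beta[1:], x))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 200
    # hypothetical covariates: VKORC1 3673G>A variant alleles (0/1/2),
    # CYP2C9 variant alleles (0/1/2), age (years), weight (kg),
    # amiodarone co-treatment (0/1)
    X = np.column_stack([
        rng.integers(0, 3, n),
        rng.integers(0, 3, n),
        rng.uniform(30, 80, n),
        rng.uniform(50, 100, n),
        rng.integers(0, 2, n),
    ])
    # synthetic doses, used only so the example runs end to end
    dose = (35 - 6 * X[:, 0] - 4 * X[:, 1] - 0.1 * X[:, 2]
            + 0.05 * X[:, 3] - 5 * X[:, 4] + rng.normal(0, 5, n))
    beta = fit_dose_model(X, dose)
    print(predict_dose(beta, [2, 0, 55, 70, 0]))     # predicted weekly dose (mg)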