Abstract:
Carbon monoxide, the chief killer in fires, and other species are modelled for a series of enclosure fires. The conditions emulate building fires where CO is formed in the rich, turbulent, nonpremixed flame and is transported frozen to lean mixtures by the ceiling jet, which is cooled by radiation and dilution. Conditional moment closure modelling is used, and computational domain minimisation criteria are developed which reduce the computational cost of this method. The predictions give good agreement for CO and other species in the lean, quenched-gas stream, holding promise that this method may provide a practical means of modelling real, three-dimensional fire situations. (c) 2005 The Combustion Institute. Published by Elsevier Inc. All rights reserved.
Abstract:
Binning and truncation of data are common in data analysis and machine learning. This paper addresses the problem of fitting mixture densities to multivariate binned and truncated data. The EM approach proposed by McLachlan and Jones (Biometrics, 44(2): 571-578, 1988) for the univariate case is generalized to multivariate measurements. The multivariate solution requires the evaluation of multidimensional integrals over each bin at each iteration of the EM procedure. A naive implementation of the procedure can be computationally inefficient. To reduce the computational cost, a number of straightforward numerical techniques are proposed. Results on simulated data indicate that the proposed methods can achieve significant computational gains with no loss in the accuracy of the final parameter estimates. Furthermore, experimental results suggest that with a sufficient number of bins and data points it is possible to estimate the true underlying density almost as well as if the data were not binned. The paper concludes with a brief description of an application of this approach to the diagnosis of iron deficiency anemia, in the context of binned and truncated bivariate measurements of volume and hemoglobin concentration from an individual's red blood cells.
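As a concrete illustration of the per-bin computation this entails, here is a minimal sketch for a one-dimensional, two-component Gaussian mixture using SciPy. Per-bin probability masses replace per-point densities in the E-step; as a deliberate simplification of the exact truncated-moment M-step of McLachlan and Jones, bin midpoints stand in for within-bin conditional means. The function name and initialisation are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def em_binned_gmm(edges, counts, n_iter=200):
    """EM for a 2-component 1-D Gaussian mixture on binned data.

    edges  : bin edges, shape (B+1,)
    counts : observed count per bin, shape (B,)
    Within-bin moments are approximated by bin midpoints, a
    simplification of the exact truncated-moment M-step.
    """
    mids = 0.5 * (edges[:-1] + edges[1:])
    reps = np.repeat(mids, counts.astype(int))     # crude "unbinned" proxy for init
    mu = np.array([np.quantile(reps, 0.25), np.quantile(reps, 0.75)])
    sig = np.array([reps.std(), reps.std()])
    pi = np.array([0.5, 0.5])

    for _ in range(n_iter):
        # E-step: probability mass of each bin under each component, shape (2, B)
        P = np.stack([norm.cdf(edges[1:], m, s) - norm.cdf(edges[:-1], m, s)
                      for m, s in zip(mu, sig)])
        R = pi[:, None] * P
        R = R / (R.sum(axis=0, keepdims=True) + 1e-300)   # responsibilities per bin
        Nk = (R * counts).sum(axis=1)                     # expected counts per component
        # M-step (midpoint approximation)
        pi = Nk / counts.sum()
        mu = (R * counts * mids).sum(axis=1) / Nk
        sig = np.sqrt((R * counts * (mids - mu[:, None]) ** 2).sum(axis=1) / Nk)
    return pi, mu, sig

# Toy usage: histogram of a known 2-component mixture
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 1, 3000), rng.normal(3, 0.5, 1000)])
counts, edges = np.histogram(data, bins=40)
print(em_binned_gmm(edges, counts))
```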
Abstract:
In this work, we consider the numerical solution of a large eigenvalue problem resulting from a finite rank discretization of an integral operator. We are interested in computing a few eigenpairs, with an iterative method, so a matrix representation that allows for fast matrix-vector products is required. Hierarchical matrices are appropriate for this setting, and also provide cheap LU decompositions required in the spectral transformation technique. We illustrate the use of freely available software tools to address the problem, in particular SLEPc for the eigensolvers and HLib for the construction of H-matrices. The numerical tests are performed using an astrophysics application. Results show the benefits of the data-sparse representation compared to standard storage schemes, in terms of computational cost as well as memory requirements.
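For readers unfamiliar with the spectral transformation, the idea is that eigenvalues of A near a shift sigma become the extreme eigenvalues of the shifted-and-inverted operator, where a single LU factorisation serves every iteration of the eigensolver. A minimal SciPy analogue (an ordinary sparse matrix standing in for an H-matrix, and scipy.sparse.linalg.eigs standing in for SLEPc/HLib) might look like this:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

# Stand-in discretised operator; the real application would build an
# H-matrix approximation of the integral kernel instead.
n = 2000
A = sp.diags([np.full(n - 1, -1.0), np.linspace(1, 4, n), np.full(n - 1, -1.0)],
             offsets=[-1, 0, 1], format="csc")

# Shift-and-invert: eigenvalues closest to sigma become the largest-magnitude
# eigenvalues of inv(A - sigma*I). eigs factorises (A - sigma*I) once and
# reuses the LU solves in every Arnoldi step, which is the role the cheap
# H-matrix LU plays in the paper's setting.
vals, vecs = eigs(A, k=6, sigma=2.5, which="LM")
print(np.sort(vals.real))
```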
Abstract:
Low noise surfaces have increasingly been considered a viable and cost-effective alternative to acoustical barriers. However, road planners and administrators frequently lack information on the correlation between the type of road surface and the resulting noise emission profile. To address this problem, a method to identify and classify different types of road pavements was developed, whereby near-field road noise is analyzed using statistical learning methods. The vehicle rolling sound signal near the tires and close to the road surface was acquired by two microphones in a special arrangement implementing the Close-Proximity method. A set of features characterizing the properties of the road pavement was extracted from the corresponding sound profiles. A feature selection method was used to automatically select the features most relevant in predicting the type of pavement, while reducing the computational cost. Different types of road pavement segments were tested and the performance of the classifier was evaluated. Results of pavement classification performed during a road journey are presented on a map, together with geographical data. This procedure leads to a considerable improvement in the quality of road pavement noise data, thereby increasing the accuracy of road traffic noise prediction models.
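A hedged sketch of such a pipeline, with scikit-learn standing in for the paper's unspecified learning machinery; the synthetic feature matrix, the number of selected features, and the classifier choice are all assumptions for illustration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# X: one row of spectral/temporal features per recorded sound segment (placeholder),
# y: pavement-type label per segment (placeholder for 4 pavement classes).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))
y = rng.integers(0, 4, size=300)

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=10),   # keep the 10 most informative features
    SVC(kernel="rbf", C=1.0),                 # any off-the-shelf classifier would do
)
print(cross_val_score(clf, X, y, cv=5).mean())
```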
Abstract:
Dissertation submitted to obtain the Master's Degree in Electrical and Computer Engineering
Abstract:
Dissertation submitted to obtain the Master's Degree in Electrical and Computer Engineering
Abstract:
Nowadays, several sensors and mechanisms are available to estimate a mobile robot's trajectory and location with respect to its surroundings. Absolute positioning mechanisms are usually the most accurate, but they are also the most expensive and require pre-installed equipment in the environment. Therefore, a system capable of measuring its own motion and location within the environment (relative positioning) has been a research goal since the beginning of autonomous vehicles. With increasing computational performance, computer vision has become faster, making it possible to incorporate it in a mobile robot. In feature-based visual odometry approaches, model estimation requires the absence of feature-association outliers for accurate motion estimation. Outlier rejection is a delicate process, since there is always a trade-off between the speed and the reliability of the system. This dissertation proposes an indoor 2D positioning system using visual odometry. The mobile robot has a camera pointed at the ceiling for image analysis. As requirements, the ceiling and the floor (where the robot moves) must be planar. In the literature, RANSAC is a widely used method for outlier rejection; however, it can be slow in critical circumstances. Therefore, a new algorithm is proposed that accelerates RANSAC while maintaining its reliability. The algorithm, called FMBF, consists of comparing image texture patterns between pictures and preserving the most similar ones. There are several types of comparisons, with different computational cost and reliability; FMBF manages those comparisons in order to optimize the trade-off between speed and reliability.
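FMBF itself is not detailed in the abstract, so the sketch below shows only the baseline it accelerates: RANSAC estimation of a 2-D rigid motion (rotation plus translation) from matched image features, with a least-squares (Kabsch) fit on each minimal sample. The names, inlier threshold, and iteration count are illustrative.

```python
import numpy as np

def rigid_from_pairs(p, q):
    """Least-squares 2-D rotation R and translation t with q ~ R @ p + t (Kabsch)."""
    pc, qc = p - p.mean(0), q - q.mean(0)
    U, _, Vt = np.linalg.svd(pc.T @ qc)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:            # enforce a proper rotation (no reflection)
        Vt[-1] *= -1
        R = (U @ Vt).T
    return R, q.mean(0) - R @ p.mean(0)

def ransac_rigid(p, q, iters=500, tol=2.0, seed=0):
    """RANSAC over feature matches (p[i], q[i]); returns the motion with most inliers."""
    rng = np.random.default_rng(seed)
    best, best_inl = None, 0
    for _ in range(iters):
        idx = rng.choice(len(p), size=2, replace=False)   # 2 pairs fix a rigid motion
        R, t = rigid_from_pairs(p[idx], q[idx])
        err = np.linalg.norm((p @ R.T + t) - q, axis=1)   # reprojection error per match
        inl = (err < tol).sum()
        if inl > best_inl:
            best, best_inl = (R, t), inl
    return best, best_inl
```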
Abstract:
In the Amazon, river navigation is very important due to the length of navigable rivers and the lack of alternative road networks. Boats usually operate in unfavorable conditions, since propellers are not hydrodynamically matched to the geometry and dimensions of the boat hull. Currently, there is no methodology for propeller hydrodynamic optimization with low computational cost and easy implementation available in the region. The aim of this work was to develop a mathematical approach for marine propeller design applied to boats typically found on Amazon rivers. We developed an optimized formulation for the chord and pitch angle distributions, taking into account the classical model of Glauert. A theoretical analysis of the thrust and torque relationships on an annular control volume was performed. The mathematical model was based on the Blade Element Momentum Theory (BEMT). We conclude that the new methodology proposed in this work demonstrates good physical behavior when compared with the theory of Glauert and with experimental data for the Wageningen B3-50 propeller.
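A minimal sketch of the fixed-point iteration BEMT performs on a single annulus, in the spirit of the Glauert formulation the paper builds on; the lift/drag polars, relaxation scheme, and sign conventions below are generic placeholder assumptions, not the paper's optimised chord and pitch distributions.

```python
import numpy as np

def bemt_annulus(r, c, beta, V, omega, B=3, rho=1025.0, tol=1e-8, max_iter=200):
    """Fixed-point BEMT solve for one annulus of a marine propeller.

    r, c, beta : local radius [m], chord [m], geometric pitch angle [rad]
    V, omega   : advance speed [m/s], rotational speed [rad/s]
    Returns (a, ap, dT_dr, dQ_dr). The polars are crude placeholders.
    """
    a, ap = 0.1, 0.01
    sigma = B * c / (2 * np.pi * r)                           # local solidity
    for _ in range(max_iter):
        phi = np.arctan2(V * (1 + a), omega * r * (1 - ap))   # inflow angle
        alpha = beta - phi                                    # angle of attack
        cl = 2 * np.pi * alpha                                # thin-airfoil placeholder
        cd = 0.008 + 0.01 * alpha ** 2                        # placeholder drag polar
        cn = cl * np.cos(phi) - cd * np.sin(phi)              # axial force coefficient
        ct = cl * np.sin(phi) + cd * np.cos(phi)              # tangential force coefficient
        k = sigma * cn / (4 * np.sin(phi) ** 2)
        kp = sigma * ct / (4 * np.sin(phi) * np.cos(phi))
        a_new = k / (1 - k)        # from a/(1+a) = k  (propeller convention)
        ap_new = kp / (1 + kp)     # from a'/(1-a') = k'
        if abs(a_new - a) < tol and abs(ap_new - ap) < tol:
            a, ap = a_new, ap_new
            break
        a, ap = 0.5 * (a + a_new), 0.5 * (ap + ap_new)        # relaxed update
    W2 = (V * (1 + a)) ** 2 + (omega * r * (1 - ap)) ** 2     # local speed squared
    dT_dr = 0.5 * rho * W2 * B * c * cn                       # thrust per unit radius
    dQ_dr = 0.5 * rho * W2 * B * c * ct * r                   # torque per unit radius
    return a, ap, dT_dr, dQ_dr
```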
Abstract:
Based on Lucas functions, improved versions of the Diffie-Hellman key distribution scheme and of the ElGamal public key cryptosystem are proposed, together with an implementation and an analysis of their computational cost. The security relies on the difficulty of factoring an RSA integer and on the difficulty of computing the discrete logarithm.
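A toy sketch of the Lucas-function analogue of Diffie-Hellman: with Q = 1, Lucas values satisfy the composition law V_a(V_b(P)) = V_ab(P) (mod N), so V_k can play the role of modular exponentiation. The ladder below is a standard construction, the parameters are far too small for real use, and none of this is the paper's exact scheme.

```python
def lucas_v(P, k, N):
    """Compute V_k(P, 1) mod N with a Montgomery-style ladder.

    V_0 = 2, V_1 = P, V_{m+1} = P*V_m - V_{m-1}  (Q = 1), all mod N.
    """
    lo, hi = 2, P % N                    # invariant: (V_j, V_{j+1})
    for bit in bin(k)[2:]:               # scan k's bits, most significant first
        if bit == '1':
            lo, hi = (lo * hi - P) % N, (hi * hi - 2) % N   # j -> 2j+1
        else:
            lo, hi = (lo * lo - 2) % N, (lo * hi - P) % N   # j -> 2j
    return lo

# Toy Diffie-Hellman exchange (parameters far too small for real security):
N = 2**61 - 1          # stand-in public modulus (an RSA integer in the paper)
P = 123456789          # public seed
a, b = 962443, 557383  # Alice's and Bob's secret exponents
A, B = lucas_v(P, a, N), lucas_v(P, b, N)
assert lucas_v(B, a, N) == lucas_v(A, b, N)   # shared key: V_{ab}(P) mod N
```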
Abstract:
Based on third-order linear sequences, improved versions of the Diffie-Hellman key distribution scheme and the ElGamal public key cryptosystem are proposed, together with an implementation and an analysis of their computational cost. The security relies on the difficulty of factoring an RSA integer and on the difficulty of computing the discrete logarithm.
Abstract:
In the current environment, several branches of science need to rely on high-performance computing to obtain results within a relatively short time. This is mainly due to the high volume of information that needs to be processed, and also to the computational cost demanded by these calculations. Performing this processing in a distributed and parallel manner shortens the waiting time for results and thus enables decisions to be made earlier. To support this, there are essentially two widely adopted programming models: the message-passing model, through libraries based on the MPI standard, and the shared-memory model, using OpenMP. Hybrid applications are those that combine both models in order to exploit, in each case, the specific strengths of each form of parallelism. Unfortunately, practice has shown that combining these models does not necessarily guarantee an improvement in application performance. Therefore, an analysis of the factors that influence their performance would be beneficial when implementing them, and would also be a first step towards predicting their behaviour. Additionally, it would provide a way to determine which application parameters to modify in order to improve performance. In the present work we propose to define a methodology for identifying performance factors in hybrid applications and, accordingly, to identify some of the factors that influence their performance.
Abstract:
In computer graphics, global illumination algorithms take into account not only the light that comes directly from the sources but also light interreflections. Algorithms of this kind produce very realistic images, but at a high computational cost, especially when dealing with complex environments. Parallel computation has been successfully applied to such algorithms in order to make it possible to compute highly realistic images in a reasonable time. Here we introduce a speculation-based parallel solution for a global illumination algorithm in the context of radiosity, which takes advantage of the hierarchical nature of such an algorithm.
Abstract:
In networks with small buffers, such as networks based on optical packet switching (OPS), the convolution approach (CA) is one of the most accurate methods for connection admission control. Admission control and resource management have been addressed in other works oriented to bursty traffic and ATM. This paper focuses on heterogeneous traffic in OPS-based networks. For heterogeneous traffic and bufferless networks, the enhanced convolution approach (ECA) is a good solution. However, both methods (CA and ECA) have a high computational cost for a large number of connections. Two new mechanisms (UMCA and ISCA), based on the Monte Carlo method, are proposed to overcome this drawback. Simulation results show that our proposals achieve a lower computational cost than the enhanced convolution approach, with a small stochastic error in the probability estimation.
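The internals of UMCA and ISCA are not given in the abstract, so the following is only a generic Monte Carlo stand-in for the quantity the convolution computes exactly: the probability that the aggregate rate of heterogeneous on-off connections exceeds the link capacity. The traffic classes and capacity are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Heterogeneous on-off connections: (peak rate in Mb/s, on-probability, count).
classes = [(2.0, 0.3, 120), (10.0, 0.1, 40), (34.0, 0.05, 8)]
capacity = 200.0                      # link capacity in Mb/s
n_samples = 200_000

# The convolution approach builds the exact distribution of the aggregate
# rate; Monte Carlo just samples it, trading a small stochastic error for a
# cost that no longer grows with the size of the connections' state space.
total = np.zeros(n_samples)
for peak, p_on, count in classes:
    total += peak * rng.binomial(count, p_on, size=n_samples)

p_overflow = (total > capacity).mean()
print(f"estimated overflow probability: {p_overflow:.2e}")
```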
Abstract:
The prediction of binding modes (BMs) occurring between a small molecule and a target protein of biological interest has become of great importance for drug development. The overwhelming diversity of needs leaves room for docking approaches addressing specific problems. Nowadays, the universe of docking software ranges from fast and user-friendly programs to algorithmically flexible and accurate approaches. EADock2 is an example of the latter. Its multiobjective scoring function was designed around the CHARMM22 force field and the FACTS solvation model. However, the major drawback of such a software design lies in its computational cost. EADock dihedral space sampling (DSS) is built on the most efficient features of EADock2, namely its hybrid sampling engine and multiobjective scoring function. Its performance is equivalent to that of EADock2 for drug-like ligands, while the CPU time required has been reduced by several orders of magnitude. This huge improvement was achieved through a combination of several innovative features, including an automatic bias of the sampling toward putative binding sites and a very efficient tree-based DSS algorithm. When the top-scoring prediction is considered, 57% of the BMs of a test set of 251 complexes were reproduced within 2 Å RMSD of the crystal structure. Up to 70% were reproduced when considering the five top-scoring predictions. The success rate is lower in cross-docking assays but remains comparable with that of the latest version of AutoDock, which accounts for protein flexibility. © 2011 Wiley Periodicals, Inc. J Comput Chem, 2011.
Abstract:
Black-box optimization problems (BBOPs) are defined as optimization problems in which the objective function does not have an algebraic expression but is the output of a system (usually a computer program). This paper focuses on BBOPs that arise in the field of insurance, and more specifically in reinsurance problems. In this area, the complexity of the models and assumptions used to define the reinsurance rules and conditions produces hard black-box optimization problems that must be solved in order to obtain the optimal output of the reinsurance arrangement. Traditional optimization approaches cannot be applied to BBOPs, so new computational paradigms must be applied to solve these problems. In this paper we show the performance of two evolutionary techniques (Evolutionary Programming and Particle Swarm Optimization). We provide an analysis of three BBOPs in reinsurance, where the evolutionary approaches exhibit excellent behaviour, finding the optimal solution within a fraction of the computational cost used by inspection or enumeration methods.
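A compact particle swarm optimisation loop of the kind applied to such black-box objectives; in the real setting the objective would be an external reinsurance simulator invoked once per candidate, and the quadratic toy function, bounds, and hyperparameters below are assumptions.

```python
import numpy as np

def pso(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise a black-box function f over box constraints `bounds` (shape (d, 2))."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()].copy()                # global best so far
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # inertia + memory + social
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)                      # one black-box call each
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy black-box objective standing in for a reinsurance simulator:
best_x, best_f = pso(lambda z: np.sum((z - 1.0) ** 2), np.array([[-5.0, 5.0]] * 4))
print(best_x, best_f)
```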