942 results for Stochastic simulation algorithm
Abstract:
A parallel formulation for the simulation of a branch prediction algorithm is presented. This parallel formulation identifies independent tasks in the algorithm which can be executed concurrently. The parallel implementation is based on the multithreading model and two parallel programming platforms: pthreads and Cilk++. Improvement in execution performance by up to 7 times is observed for a generic 2-bit predictor in a 12-core multiprocessor system.
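For illustration only (this is not the paper's code), a generic 2-bit saturating-counter predictor of the kind being simulated can be sketched as follows; the table size, the modulo indexing of the program counter and the toy trace are assumptions. Updates to distinct table entries are independent of each other, which is plausibly the kind of concurrent task the parallel formulation identifies.

```python
# Illustrative sketch of a generic 2-bit saturating-counter branch predictor.
class TwoBitPredictor:
    def __init__(self, table_size=1024):
        # Each entry is a 2-bit counter: 0-1 predict not taken, 2-3 predict taken.
        self.table = [1] * table_size
        self.size = table_size

    def predict(self, pc):
        return self.table[pc % self.size] >= 2

    def update(self, pc, taken):
        i = pc % self.size
        # Saturating increment/decrement toward the observed outcome.
        self.table[i] = min(3, self.table[i] + 1) if taken else max(0, self.table[i] - 1)

# Replay a toy branch trace of (address, outcome) pairs and measure accuracy.
trace = [(0x400, True), (0x400, True), (0x408, False), (0x400, True)]
p, hits = TwoBitPredictor(), 0
for pc, taken in trace:
    hits += (p.predict(pc) == taken)
    p.update(pc, taken)
print(f"prediction accuracy: {hits / len(trace):.2f}")
```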
Abstract:
As satellite technology develops, satellite rainfall estimates are likely to become ever more important in the world of food security. It is therefore vital to be able to identify the uncertainty of such estimates and for end users to be able to use this information in a meaningful way. This paper presents new developments in the methodology of simulating satellite rainfall ensembles from thermal infrared satellite data. Although the basic sequential simulation methodology has been developed in previous studies, it was not suitable for use in regions with more complex terrain and limited calibration data. Developments in this work include the creation of a multithreshold, multizone calibration procedure, plus investigations into the causes of an overestimation of low rainfall amounts and the best way to take into account clustered calibration data. A case study of the Ethiopian highlands has been used as an illustration.
Abstract:
With the fast development of wireless communications, ZigBee and semiconductor devices, home automation networks have recently become very popular. Since typical consumer products deployed in home automation networks are often powered by tiny and limited batteries, one of the most challenging research issues concerns energy reduction and the balancing of energy consumption across the network in order to prolong the home network lifetime for consumer devices. The introduction of clustering and sink mobility techniques into home automation networks has been shown to be an efficient way to improve network performance and has received significant research attention. Taking inspiration from nature, this paper proposes an Ant Colony Optimization (ACO) based clustering algorithm with mobile sink support for home automation networks. In this work, the network is divided into several clusters and cluster heads are selected within each cluster. Then, a mobile sink communicates with each cluster head to collect data directly through short range communications. The ACO algorithm is utilized in this work to find the optimal mobility trajectory for the mobile sink. Extensive simulation results from this research show that the proposed algorithm significantly improves home network performance, in terms of energy consumption and network lifetime, when using mobile sinks, as compared to other routing algorithms currently deployed for home automation networks.
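As a hedged sketch of the general technique (not the paper's algorithm), ACO can plan the mobile sink's visiting order over the cluster heads as a TSP-like tour. All parameter values and the `aco_sink_tour` helper itself are illustrative assumptions.

```python
import random, math

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def aco_sink_tour(pts, n_ants=20, n_iter=100, alpha=1.0, beta=2.0, rho=0.5, q=1.0):
    n = len(pts)
    tau = [[1.0] * n for _ in range(n)]            # pheromone levels
    eta = [[0.0 if i == j else 1.0 / max(math.dist(pts[i], pts[j]), 1e-9)
            for j in range(n)] for i in range(n)]  # heuristic visibility
    best, best_len = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour = [random.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                w = [tau[i][j] ** alpha * eta[i][j] ** beta for j in cand]
                tour.append(random.choices(cand, weights=w)[0])
            tours.append(tour)
        for i in range(n):                          # evaporation
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for tour in tours:                          # deposit by tour quality
            L = tour_length(tour, pts)
            if L < best_len:
                best, best_len = tour, L
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += q / L
                tau[b][a] += q / L
    return best, best_len

heads = [(0, 0), (4, 1), (5, 5), (1, 6), (8, 3)]   # toy cluster-head positions
print(aco_sink_tour(heads))
```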
Abstract:
Conventional procedures employed in the modeling of the viscoelastic properties of polymers rely on the determination of the polymer's discrete relaxation spectrum from experimentally obtained data. In the past decades, several analytical regression techniques have been proposed to determine an explicit equation which describes the measured spectra. Taking a different approach, the procedure introduced here constitutes a simulation-based computational optimization technique based on a non-deterministic search method arising from the field of evolutionary computation. Instead of comparing numerical results, the purpose of this paper is to highlight some subtle differences between both strategies and to focus on which properties of the exploited technique emerge as new possibilities for the field. To illustrate this, the cases essayed show how the employed technique can outperform conventional approaches in terms of fitting quality. Moreover, in some instances, it produces equivalent results with much fewer fitting parameters, which is convenient for computational simulation applications. The problem formulation and the rationale of the highlighted method are discussed herein and constitute the main intended contribution. (C) 2009 Wiley Periodicals, Inc. J Appl Polym Sci 113: 122-135, 2009
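A minimal sketch of the general idea, assuming a Prony-series relaxation spectrum G(t) = Σ g_i exp(-t/τ_i) and a simple (μ+λ) evolution strategy rather than the paper's specific algorithm; the synthetic data, mode count and ES settings are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.logspace(-2, 2, 50)
true_modes = [(2.0, 0.1), (1.0, 10.0)]               # (g_i, tau_i) pairs
G_data = sum(g * np.exp(-t / tau) for g, tau in true_modes)

def model(params):
    g, log_tau = params[:2], params[2:]
    return sum(gi * np.exp(-t / 10.0 ** lt) for gi, lt in zip(g, log_tau))

def fitness(params):
    return np.mean((model(params) - G_data) ** 2)     # squared fitting error

pop = rng.uniform([-1, -1, -3, -3], [3, 3, 3, 3], size=(30, 4))
for gen in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[:10]]            # select the 10 best
    children = parents[rng.integers(0, 10, 20)] + rng.normal(0, 0.1, (20, 4))
    pop = np.vstack([parents, children])              # (mu + lambda) survival

best = pop[np.argmin([fitness(p) for p in pop])]
print("fitted (g_1, g_2, log10 tau_1, log10 tau_2):", best.round(3))
```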
Abstract:
Purpose - The purpose of this paper is to develop a novel unstructured simulation approach for injection molding processes described by the Hele-Shaw model. Design/methodology/approach - The scheme involves dual dynamic meshes with active and inactive cells determined from an initial background pointset. The quasi-static pressure solution in each timestep for this evolving unstructured mesh system is approximated using a control volume finite element method formulation coupled to a corresponding modified volume of fluid method. The flow is considered to be isothermal and non-Newtonian. Findings - Supporting numerical tests and performance studies for polystyrene described by Carreau, Cross, Ellis and Power-law fluid models are conducted. Results for the present method are shown to be comparable to those from other methods for both Newtonian fluid and polystyrene fluid injected in different mold geometries. Research limitations/implications - With respect to the methodology, the background pointset infers a mesh that is dynamically reconstructed here, and there are a number of efficiency issues and improvements that would be relevant to industrial applications. For instance, one can use the pointset to construct special bases and invoke a so-called "meshless" scheme using the basis. This would require some interesting strategies to deal with the dynamic point enrichment of the moving front that could benefit from the present front treatment strategy. There are also issues related to mass conservation and fill-time errors that might be addressed by introducing suitable projections. The general question of "rate of convergence" of these schemes requires analysis. Numerical results here suggest first-order accuracy and are consistent with the approximations made, but theoretical results are not available yet for these methods. Originality/value - This novel unstructured simulation approach involves dual meshes with active and inactive cells determined from an initial background pointset: local active dual patches are constructed "on-the-fly" for each "active point" to form a dynamic virtual mesh of active elements that evolves with the moving interface.
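For reference, the Carreau and Cross generalized-Newtonian viscosity models named in the abstract can be written out directly (the Ellis and Power-law models are analogous); the parameter values below are illustrative, not the study's.

```python
import numpy as np

def carreau(gamma_dot, eta0, eta_inf, lam, n):
    # Carreau model: eta = eta_inf + (eta0 - eta_inf) * (1 + (lam*g)^2)^((n-1)/2)
    return eta_inf + (eta0 - eta_inf) * (1 + (lam * gamma_dot) ** 2) ** ((n - 1) / 2)

def cross(gamma_dot, eta0, eta_inf, lam, m):
    # Cross model: eta = eta_inf + (eta0 - eta_inf) / (1 + (lam*g)^m)
    return eta_inf + (eta0 - eta_inf) / (1 + (lam * gamma_dot) ** m)

shear = np.logspace(-2, 4, 7)       # shear rates, 1/s (illustrative range)
print(carreau(shear, eta0=1e4, eta_inf=1.0, lam=0.5, n=0.3))
print(cross(shear, eta0=1e4, eta_inf=1.0, lam=0.5, m=0.7))
```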
Abstract:
Large-scale simulations of parts of the brain using detailed neuronal models to improve our understanding of brain functions are becoming a reality with the usage of supercomputers and large clusters. However, the high acquisition and maintenance cost of these computers, including the physical space, air conditioning, and electrical power, limits the number of simulations of this kind that scientists can perform. Modern commodity graphics cards, based on the CUDA platform, contain graphical processing units (GPUs) composed of hundreds of processors that can simultaneously execute thousands of threads and thus constitute a low-cost solution for many high-performance computing applications. In this work, we present a CUDA algorithm that enables the execution, on multiple GPUs, of simulations of large-scale networks composed of biologically realistic Hodgkin-Huxley neurons. The algorithm represents each neuron as a CUDA thread, which solves the set of coupled differential equations that model each neuron. Communication among neurons located in different GPUs is coordinated by the CPU. We obtained speedups of 40 for the simulation of 200k neurons that received random external input and speedups of 9 for a network with 200k neurons and 20M neuronal connections, in a single computer with two graphics cards with two GPUs each, when compared with a modern quad-core CPU. Copyright (C) 2010 John Wiley & Sons, Ltd.
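As a sketch of the per-neuron work that the abstract maps to one CUDA thread, a forward-Euler step of the standard Hodgkin-Huxley equations is shown below in plain NumPy; the time step, input current and initial conditions are illustrative assumptions, and the paper itself integrates on the GPU. Updating a whole vector of states at once mirrors the one-thread-per-neuron mapping.

```python
import numpy as np

def hh_step(V, m, h, n, I_ext, dt=0.01):
    # Standard Hodgkin-Huxley gating rates (voltages in mV, time in ms).
    a_m = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    b_m = 4.0 * np.exp(-(V + 65) / 18)
    a_h = 0.07 * np.exp(-(V + 65) / 20)
    b_h = 1.0 / (1 + np.exp(-(V + 35) / 10))
    a_n = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    b_n = 0.125 * np.exp(-(V + 65) / 80)
    I_Na = 120.0 * m ** 3 * h * (V - 50.0)      # sodium current
    I_K = 36.0 * n ** 4 * (V + 77.0)            # potassium current
    I_L = 0.3 * (V + 54.4)                      # leak current
    V = V + dt * (I_ext - I_Na - I_K - I_L)     # membrane capacitance C = 1
    m = m + dt * (a_m * (1 - m) - b_m * m)
    h = h + dt * (a_h * (1 - h) - b_h * h)
    n = n + dt * (a_n * (1 - n) - b_n * n)
    return V, m, h, n

N = 1000                                        # one state vector per neuron
V = np.full(N, -65.0); m = np.full(N, 0.05)
h = np.full(N, 0.6); n = np.full(N, 0.32)
for _ in range(1000):
    V, m, h, n = hh_step(V, m, h, n, I_ext=10.0)
print(V[:5])
```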
Abstract:
In this work, two different docking programs were used, AutoDock and FlexX, which use different types of scoring functions and searching methods. The docking poses of all quinone compounds studied stayed in the same region of trypanothione reductase. This region is a hydrophobic pocket near the Phe396, Pro398 and Leu399 amino acid residues. The compounds studied display a higher affinity for trypanothione reductase (TR) than for glutathione reductase (GR), since only two out of 28 quinone compounds presented more favorable docking energy in the site of the human enzyme. The interaction of the quinone compounds with the TR enzyme is in agreement with other studies, which showed binding sites different from the ones formed by cysteines 52 and 58. To verify the results obtained by docking, we carried out a molecular dynamics simulation with the compounds that presented the highest and lowest docking energies. The results showed that the root mean square deviation (RMSD) between the initial and final poses was very small. In addition, the hydrogen bond pattern was conserved along the simulation. In the parasite enzyme, the amino acid residues Leu399, Met400 and Lys402 are replaced in the human enzyme by Met406, Tyr407 and Ala409, respectively. Given that Leu399 is an amino acid of the Z site, this difference could be explored to design selective inhibitors of TR.
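The RMSD comparison mentioned in the abstract is a standard computation; a minimal version, assuming the two poses share the same reference frame and atom ordering (no superposition step), with random coordinates standing in for real conformations:

```python
import numpy as np

def rmsd(a, b):
    # a, b: (N_atoms, 3) arrays of matched atomic coordinates.
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

rng = np.random.default_rng(1)
pose0 = rng.normal(size=(28, 3))                      # toy initial pose
pose1 = pose0 + rng.normal(scale=0.1, size=(28, 3))   # slightly perturbed pose
print(f"RMSD = {rmsd(pose0, pose1):.3f} (same units as the coordinates)")
```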
Abstract:
This thesis work concentrates on a very interesting problem, the Vehicle Routing Problem (VRP). In this problem, customers or cities have to be visited and packages have to be transported to each of them, starting from a base point on the map. The goal is to solve the transportation problem: to deliver the packages on time to the customers, with enough packages for each customer, using the available resources and, of course, to be as effective as possible. Although this problem seems very easy to solve with a small number of cities or customers, it is not. The algorithm has to deal with several constraints, for example opening hours, package delivery times, truck capacities, etc. This makes it a so-called Multi Constraint Optimization Problem (MCOP). What is more, the problem is intractable with the amount of computational power available to most of us. As the number of customers grows, the amount of calculation grows exponentially fast, because all constraints have to be satisfied for each customer, and it should not be forgotten that the goal is to find a solution that is good enough before the time allowed for the calculation is up.

The problem is introduced in the first chapter: starting from its basis, the Traveling Salesman Problem, theoretical and mathematical background is used to show why the problem is so hard to optimize and why, even though it is so hard and no best algorithm is known for huge numbers of customers, it is worth dealing with. Just think of a huge transportation company with tens of thousands of trucks and millions of customers: how much money could be saved if the optimal path for all packages were known.

Although no best algorithm is known for this kind of optimization problem, an acceptable solution is attempted in the second and third chapters, where two algorithms are described: the Genetic Algorithm and Simulated Annealing. Both are based on imitating processes of nature and materials science. These algorithms will hardly ever find the best solution to the problem, but they can give a very good solution in special cases within acceptable calculation time. In these chapters (2nd and 3rd), the Genetic Algorithm and Simulated Annealing are described in detail, from their basis in the real world through their terminology to a basic implementation. The work puts stress on the limits of these algorithms, their advantages and disadvantages, and a comparison between them.

Finally, after this theory is presented, a simulation is executed in an artificial VRP environment with both Simulated Annealing and the Genetic Algorithm. Both solve the same problem in the same environment and are compared to each other. The environment and the implementation are also described, as are the test results obtained. Possible improvements of these algorithms are then discussed, and the work tries to answer the big question, "Which algorithm is better?", if this question even exists.
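As a hedged sketch of the simulated annealing side of the comparison, applied to the single-vehicle (TSP) core of the VRP: the 2-opt move and geometric cooling schedule below are common choices, not necessarily the thesis's.

```python
import math, random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal(pts, T=100.0, cooling=0.995, n_steps=20000):
    tour = list(range(len(pts)))
    random.shuffle(tour)
    cost = tour_length(tour, pts)
    for _ in range(n_steps):
        i, j = sorted(random.sample(range(len(pts)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt reversal
        c = tour_length(cand, pts)
        # Accept improvements always, worse tours with Boltzmann probability.
        if c < cost or random.random() < math.exp((cost - c) / T):
            tour, cost = cand, c
        T *= cooling                                           # geometric cooling
    return tour, cost

customers = [(random.random(), random.random()) for _ in range(30)]
print(anneal(customers)[1])
```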
Abstract:
The intelligent algorithm is designed for a hybrid photovoltaic/fuel cell system using a battery source. Its main function is to automate the hybrid system through an intelligent algorithm so that it takes decisions according to the environmental conditions for utilizing photovoltaic/solar energy and, in its absence, fuel cell energy. To enhance the performance of the fuel cell and the photovoltaic cell, a battery bank is used, which acts as a buffer and supplies continuous current to the load. A fuzzy logic based controller was used to develop the main system, because fuzzy controllers are feasible both for controlling the decision process and for predicting the availability of energy on the basis of the current photovoltaic and battery conditions. The intelligent algorithm is designed to optimize the performance of the system and to select the best available energy source(s) with regard to the input parameters. A further function of this intelligent controller is to predict the use of the available energy resources and to turn on the particular source that gives efficient energy utilization. A fuzzy controller was chosen to take the decisions for efficient energy utilization from the given resources. The fuzzy logic based controller was designed in the Matlab-Simulink environment. Initially, the fuzzy rules were built; then a MATLAB based simulation system was designed and implemented. This whole proposed model was then simulated and tested for the accuracy of its design and the performance of the system.
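A minimal sketch of the kind of fuzzy decision logic described (the actual controller was built in Matlab-Simulink); the membership breakpoints, the rule set and the `choose_source` helper are illustrative assumptions.

```python
def tri(x, a, b, c):
    # Triangular membership function rising on [a, b], falling on [b, c].
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def choose_source(irradiance, soc):
    # Fuzzify solar irradiance (W/m^2) and battery state of charge (%).
    sun_low, sun_high = tri(irradiance, -1, 0, 500), tri(irradiance, 300, 1000, 2000)
    soc_low, soc_high = tri(soc, -1, 0, 50), tri(soc, 30, 100, 200)
    # Rule strengths, with min as the fuzzy AND.
    rules = {
        "pv": sun_high,                      # enough sun -> use PV
        "battery": min(sun_low, soc_high),   # no sun, charged battery -> battery
        "fuel_cell": min(sun_low, soc_low),  # no sun, empty battery -> fuel cell
    }
    return max(rules, key=rules.get), rules

print(choose_source(irradiance=800, soc=70))
print(choose_source(irradiance=50, soc=15))
```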
Abstract:
Bin planning (arrangement) is a key factor in the timber industry. Improper planning of the storage bins may lead to inefficient transportation of resources, which threatens the overall efficiency and thereby limits the profit margins of sawmills. To address this challenge, a simulation model has been developed. However, as numerous alternatives are available for arranging bins, simulating all possibilities would take an enormous amount of time and is computationally infeasible. A discrete-event simulation model incorporating meta-heuristic algorithms has therefore been investigated in this study. Preliminary investigations indicate that the results achieved by the GA-based simulation model are promising and better than those of the other meta-heuristic algorithms. Further, a sensitivity analysis has been carried out on the GA-based optimal arrangement, which contributes to gaining insights and knowledge about the real system and ultimately leads to improved efficiency in sawmill yards. It is expected that the results achieved in this work will support timber industries in making optimal decisions with respect to the arrangement of storage bins in a sawmill yard.
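An illustrative GA sketch over bin arrangements, with a toy cost standing in for the discrete-event simulation run that evaluates each arrangement in the study; the permutation encoding, order crossover and swap mutation are common GA choices, not necessarily those of the paper.

```python
import random

def cost(arrangement, demand):
    # Stand-in for one discrete-event simulation run: distance of each bin
    # from its preferred slot.
    return sum(abs(slot - demand[bin_id])
               for slot, bin_id in enumerate(arrangement))

def order_crossover(p1, p2):
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j] = p1[i:j]
    fill = [g for g in p2 if g not in child]        # preserve p2's order
    for k in range(len(child)):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def ga(n_bins=20, pop_size=40, n_gen=200):
    demand = {b: random.randrange(n_bins) for b in range(n_bins)}
    pop = [random.sample(range(n_bins), n_bins) for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=lambda a: cost(a, demand))
        elite = pop[:pop_size // 2]                 # elitist survival
        children = []
        while len(children) < pop_size - len(elite):
            c = order_crossover(*random.sample(elite, 2))
            if random.random() < 0.2:               # swap mutation
                i, j = random.sample(range(n_bins), 2)
                c[i], c[j] = c[j], c[i]
            children.append(c)
        pop = elite + children
    return pop[0], cost(pop[0], demand)

print(ga())
```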
Abstract:
The regimen of environmental flows (EF) must be included as a term of environmental demand in the management of water resources. Even though there are numerous methods for the computation of EF, the criteria applied at different steps in the calculation process are quite subjective, whereas the results are fixed values that must be met by water planners. This study presents a user-friendly tool for assessing the probability that a certain EF scenario complies with the natural regimen in a semiarid area in southern Spain. 250 replications of a 25-yr period of different hydrological variables (rainfall, minimum and maximum flows, ...) were obtained at the study site from the combination of the Monte Carlo technique and local hydrological relationships. Several assumptions are made, such as the independence of annual rainfall from year to year and the variability of occurrence of the meteorological agents, with precipitation as the main source of uncertainty. Inputs to the tool are easily selected from a first menu and comprise measured rainfall data, EF values and the hydrological relationships for at least a 20-yr period. The outputs are the probabilities of compliance of the different components of the EF for the study period. From this, local optimization can be applied to establish EF components with a certain level of compliance in the study period. Different options for graphic output and analysis of results are included in terms of graphs and tables in several formats. This methodology turned out to be a useful tool for the implementation of an uncertainty analysis within the scope of environmental flows in water management, and it allowed the simulation of the impacts of several water resource development scenarios at the study site.
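A hedged sketch of the Monte Carlo core the abstract describes: 250 replications of a 25-yr synthetic series, with annual rainfall drawn independently year to year, and a toy rainfall-to-flow relation standing in for the locally calibrated hydrological relationships; the EF threshold and distribution parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n_rep, n_years = 250, 25
ef_threshold = 12.0                     # required minimum flow (toy units)

# Annual rainfall, independent year to year as the abstract assumes.
rain = rng.gamma(shape=4.0, scale=120.0, size=(n_rep, n_years))   # mm/yr
flow = 0.03 * rain + rng.normal(0, 2.0, size=rain.shape)          # toy relation

# Probability of compliance: fraction of replications meeting the EF in
# every year, plus the overall year-by-year compliance rate.
full_compliance = np.mean((flow >= ef_threshold).all(axis=1))
yearly_rate = (flow >= ef_threshold).mean()
print(f"P(all {n_years} years comply) = {full_compliance:.2f}")
print(f"mean yearly compliance = {yearly_rate:.2f}")
```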
Abstract:
In this work, emphasis is placed on the inclusion of uncertainties in the evaluation of structural behavior, aiming at a better representation of the system's characteristics and a quantification of the significance of these uncertainties in design. Comparisons are made between existing classical reliability analysis techniques, such as FORM, direct Monte Carlo simulation (MC) and Monte Carlo simulation with adaptive importance sampling (MCIS), and the approximate methods of the response surface (RS) and artificial neural networks (ANN). Where possible, the comparisons highlight the advantages and drawbacks of using one technique or the other in problems of increasing complexity. Formulations ranging from explicit limit state functions to implicit formulations with spatial variability of loading and material properties, including stochastic fields, are analyzed. Particular attention is given to the reliability analysis of reinforced concrete structures including the effect of the spatial variability of their properties. To this end, a finite element model is proposed for the representation of reinforced concrete that incorporates the main characteristics observed in this material. A model was also developed for the generation of multidimensional non-Gaussian stochastic fields for the material properties, independent of the finite element mesh, and techniques were implemented to accelerate the structural evaluations present in any of the techniques employed. For the treatment of reliability through the response surface technique, the algorithm developed by Rajashekhar et al. (1993) was implemented. For the treatment through artificial neural networks, codes were developed for the simulation of multilayer perceptron networks and radial basis function networks, which were then implemented in the reliability evaluation algorithm developed by Shao et al. (1997). In general, it was observed that the simulation techniques perform rather poorly in more complex problems, with the first-order technique FORM and the approximate response surface and artificial neural network techniques standing out, although with accuracy impaired by the approximations involved.
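As a minimal sketch of the simplest technique compared in the work, direct Monte Carlo simulation (MC) estimates the failure probability P[g(X) < 0] for an explicit limit state function; the limit state and distributions below are illustrative, not the thesis's examples.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000

R = rng.normal(loc=200.0, scale=20.0, size=n)   # resistance (toy values)
S = rng.normal(loc=150.0, scale=15.0, size=n)   # load effect (toy values)
g = R - S                                       # limit state: failure if g < 0

pf = np.mean(g < 0)                             # MC estimate of P_f
cov = np.sqrt((1 - pf) / (pf * n))              # coefficient of variation of the estimate
print(f"P_f ~ {pf:.2e} (C.o.V. ~ {cov:.2f})")
```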
Abstract:
In this work we present a new numerical method with adaptive step size, based on the local linearization approach, for the integration of stochastic differential equations with additive noise. We also propose a computational scheme that allows an efficient implementation of this method, suitably adapting the Padé algorithm with the scaling-and-squaring strategy for computing the matrix exponentials involved. Before introducing the construction of this method, we briefly present what stochastic differential equations are, the mathematics that underlies them, their relevance for the modeling of a wide variety of phenomena, and the importance of using numerical methods to evaluate such equations. A brief study of numerical stability is also carried out. With this, we intend to lay the foundations necessary for the construction of the new method/scheme. Finally, several numerical experiments are performed to show, in a practical way, the effectiveness of the proposed method and to compare it with other commonly used methods.
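A sketch in the spirit of the method, under simplifying assumptions: for additive noise and an already-linear drift, dX = A X dt + σ dW, each step propagates the drift with a matrix exponential computed by SciPy's expm, which implements the Padé scaling-and-squaring strategy the abstract adapts. The plain sqrt(h) noise increment is a simplification of the full local linearization scheme, used only for illustration.

```python
import numpy as np
from scipy.linalg import expm   # Padé approximation with scaling and squaring

rng = np.random.default_rng(3)
A = np.array([[0.0, 1.0], [-4.0, -0.5]])   # damped-oscillator drift (toy)
sigma = np.array([0.0, 0.3])               # additive noise on the velocity
h, n_steps = 0.01, 1000

Phi = expm(A * h)                           # one-step drift propagator
x = np.array([1.0, 0.0])
for _ in range(n_steps):
    x = Phi @ x + sigma * np.sqrt(h) * rng.normal(size=2)
print(x)
```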
Abstract:
Work presented at the XXXV CNMAC, Natal-RN, 2014.
Abstract:
We consider risk-averse convex stochastic programs expressed in terms of extended polyhedral risk measures. We derive computable confidence intervals on the optimal value of such stochastic programs using the Robust Stochastic Approximation and the Stochastic Mirror Descent (SMD) algorithms. When the objective functions are uniformly convex, we also propose a multistep extension of the Stochastic Mirror Descent algorithm and obtain confidence intervals on both the optimal values and optimal solutions. Numerical simulations show that our confidence intervals are much less conservative and are quicker to compute than previously obtained confidence intervals for SMD and that the multistep Stochastic Mirror Descent algorithm can obtain a good approximate solution much quicker than its nonmultistep counterpart. Our confidence intervals are also more reliable than asymptotic confidence intervals when the sample size is not much larger than the problem size.
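For orientation, a hedged sketch of plain Stochastic Mirror Descent with the entropy mirror map on the probability simplex, applied to a toy stochastic convex objective; the paper's risk-averse setting, multistep variant and confidence intervals are not reproduced here, and the step sizes and objective are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
d, n_iter = 10, 5000
c = rng.normal(size=d)                    # hidden linear term of the objective

def stoch_grad(x):
    # Unbiased gradient sample of f(x) = c.x + 0.5*||x||^2, with noise.
    return c + x + rng.normal(scale=0.5, size=d)

x = np.full(d, 1.0 / d)                   # start at the simplex center
avg = np.zeros(d)
for t in range(1, n_iter + 1):
    step = 1.0 / np.sqrt(t)
    # Entropic mirror step: multiplicative update, then renormalize.
    x = x * np.exp(-step * stoch_grad(x))
    x /= x.sum()
    avg += (x - avg) / t                  # averaged iterate as the SMD output
print(avg.round(3))
```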