917 results for Simulation in robotics
Abstract:
Due to the growth of design size and complexity, design verification is an important aspect of the logic circuit development process. The purpose of verification is to validate that the design meets the system requirements and specification. This is done by either functional or formal verification. The most popular approach to functional verification is the use of simulation-based techniques. Using models to replicate the behaviour of an actual system is called simulation. In this thesis, a software/data structure architecture without explicit locks is proposed to accelerate logic gate circuit simulation. We call this system ZSIM. The ZSIM software architecture simulator targets low-cost SIMD multi-core machines. Its performance is evaluated on the Intel Xeon Phi and two other machines (Intel Xeon and AMD Opteron). The aim of these experiments is to: • verify that the data structure used allows SIMD acceleration, particularly on machines with gather instructions (Section 5.3.1); • verify that, on sufficiently large circuits, substantial gains can be made from multicore parallelism (Section 5.3.2); • show that a simulator using this approach outperforms an existing commercial simulator on a standard workstation (Section 5.3.3); • show that the performance on a cheap Xeon Phi card is competitive with results reported elsewhere on much more expensive supercomputers (Section 5.3.5). To evaluate ZSIM, two types of test circuits were used: 1. circuits from the IWLS benchmark suite [1], which allow direct comparison with other published studies of parallel simulators; 2. circuits generated by a parametrised circuit synthesizer. The synthesizer used an algorithm that has been shown to generate circuits that are statistically representative of real logic circuits, and it allowed testing of a range of very large circuits, larger than those for which open source files could be obtained. The experimental results show that with SIMD acceleration and multicore parallelism, ZSIM achieved a peak parallelisation factor of 300 on the Intel Xeon Phi and 11 on the Intel Xeon. With only SIMD enabled, ZSIM achieved a maximum parallelisation gain of 10 on the Intel Xeon Phi and 4 on the Intel Xeon. Furthermore, it was shown that this software architecture simulator running on a SIMD machine is much faster than, and can handle much bigger circuits than, a widely used commercial simulator (Xilinx) running on a workstation. The performance achieved by ZSIM was also compared with similar pre-existing work on logic simulation targeting GPUs and supercomputers. It was shown that the ZSIM simulator running on a Xeon Phi machine gives simulation performance comparable to the IBM Blue Gene supercomputer at very much lower cost. The experimental results have shown that the Xeon Phi is competitive with simulation on GPUs and allows the handling of much larger circuits than have been reported for GPU simulation. When targeting the Xeon Phi architecture, the automatic cache management of the Xeon Phi handles the on-chip local store without any explicit mention of the local store in the architecture of the simulator itself. When targeting GPUs, by contrast, explicit cache management in the program increases the complexity of the software architecture. Furthermore, one of the strongest points of the ZSIM simulator is its portability: the same code was tested on both AMD and Xeon Phi machines, and the same architecture that performs efficiently on the Xeon Phi was ported to a 64-core NUMA AMD Opteron.
To conclude, the two main achievements are restated as follows. The primary achievement of this work was demonstrating that the ZSIM architecture is faster than previously published logic simulators on low-cost platforms. The secondary achievement was the development of a synthetic testing suite that went beyond the scale range previously publicly available, based on prior work showing that the synthesis technique is valid.
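As a hedged illustration (not the actual ZSIM implementation, which this abstract does not include), the gather-friendly, lock-free layout described above can be sketched as a levelized gate evaluator: gates within a level share no data dependencies, so a level can be evaluated without locks, and the fan-in lookups become vectorised gathers. All names and the three-operation gate set below are assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch of gather-based, lock-free gate evaluation: gates are
# pre-sorted into levels, and gates within a level share no dependencies,
# so a whole level can be evaluated in parallel with no locking.

def simulate_level(values, in_a, in_b, op):
    """Evaluate one level of 2-input gates.

    values     -- bool array of all net values computed so far
    in_a, in_b -- int arrays with each gate's two fan-in net indices
    op         -- int array: 0 = AND, 1 = OR, 2 = XOR
    """
    a = values[in_a]  # gather: vectorised indexed load (SIMD gather target)
    b = values[in_b]
    return np.where(op == 0, a & b,
           np.where(op == 1, a | b, a ^ b))

# Tiny example: two gates fed by four primary inputs.
values = np.array([True, False, True, True])      # nets 0..3
in_a = np.array([0, 2]); in_b = np.array([1, 3])  # fan-in indices
op = np.array([0, 2])                             # AND, XOR
print(simulate_level(values, in_a, in_b, op))     # [False False]
```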
Abstract:
When designing systems that are complex, dynamic and stochastic in nature, simulation is generally recognised as one of the best design support technologies, and a valuable aid in strategic and tactical decision making. A simulation model consists of a set of rules that define how a system changes over time, given its current state. Unlike analytical models, a simulation model is not solved but is run, and the changes of system state can be observed at any point in time. This provides insight into system dynamics rather than just predicting the output of a system based on specific inputs. Simulation is not a decision-making tool but a decision-support tool, allowing better informed decisions to be made. Due to the complexity of the real world, a simulation model can only be an approximation of the target system. The essence of the art of simulation modelling is abstraction and simplification: only those characteristics that are important for the study and analysis of the target system should be included in the simulation model. The purpose of simulation is either to better understand the operation of a target system, or to make predictions about a target system's performance. It can be viewed as an artificial white room which allows one to gain insight and to test new theories and practices without disrupting the daily routine of the focal organisation. What one can expect to gain from a simulation study is well summarised by FIRMA (2000): if the theory that has been framed about the target system holds, and if this theory has been adequately translated into a computer model, then one can answer some of the following questions: · What kind of behaviour can be expected under arbitrarily given parameter combinations and initial conditions? · What kind of behaviour will a given target system display in the future? · What state will the target system reach in the future? The required accuracy of the simulation model very much depends on the type of question one is trying to answer. To respond to the first question, the simulation model needs to be an explanatory model; this requires less data accuracy. In comparison, the simulation model required to answer the latter two questions has to be predictive in nature and therefore needs highly accurate input data to achieve credible outputs. Such predictions involve showing trends rather than giving precise and absolute predictions of target system performance. The numerical results of a simulation experiment on their own are most often not very useful and need to be rigorously analysed with statistical methods. These results then need to be considered in the context of the real system and interpreted in a qualitative way to make meaningful recommendations or compile best-practice guidelines. One needs a good working knowledge of the behaviour of the real system to fully exploit the understanding gained from simulation experiments. The goal of this chapter is to prepare the newcomer for what we think is a valuable addition to the toolset of analysts and decision makers. We give a summary of information gathered from the literature and of the first-hand experience we have gained over the last five years while developing a better understanding of this exciting technology. We hope that this will help you to avoid some of the pitfalls that we unwittingly encountered.
Section 2 is an introduction to the different types of simulation used in Operational Research and Management Science, with a clear focus on agent-based simulation. In Section 3 we outline the theoretical background of multi-agent systems and their elements, to prepare you for Section 4, where we discuss how to develop a multi-agent simulation model. Section 5 outlines a simple example of a multi-agent system. Section 6 provides a collection of resources for further studies, and finally in Section 7 we conclude the chapter with a short summary.
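To make "a set of rules that define how a system changes over time, given its current state" concrete, here is a minimal hypothetical agent-based sketch (ours, not the chapter's): each agent updates its state every time step from a simple local rule, and the model is run and observed rather than solved.

```python
import random

# Minimal hypothetical agent-based model: each agent holds an opinion
# (0 or 1) and, every time step, copies the opinion of a randomly chosen
# member of the population. The "rules" are the step() method; running
# the model means applying them repeatedly while observing the state.

class Agent:
    def __init__(self):
        self.opinion = random.randint(0, 1)

    def step(self, population):
        other = random.choice(population)
        self.opinion = other.opinion  # imitation rule

def run(n_agents=50, n_steps=100, seed=1):
    random.seed(seed)
    population = [Agent() for _ in range(n_agents)]
    history = []
    for _ in range(n_steps):
        for agent in population:
            agent.step(population)
        history.append(sum(a.opinion for a in population))
    return history  # system state observed at every time step

print(run()[-5:])  # number of opinion-1 agents in the last five steps
```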
Abstract:
In order to estimate depth through supervised deep-learning-based stereo methods, it is necessary to have access to precise ground-truth depth data. While the gathering of precise labels is commonly tackled by deploying depth sensors, this is not always a viable solution. For instance, in many applications in the biomedical domain, the choice of sensors capable of sensing depth at small distances with high precision on difficult surfaces (which present non-Lambertian properties) is very limited. It is therefore necessary to find alternative techniques to gather ground-truth data without relying on external sensors. In this thesis, two different approaches were tested to produce supervision data for biomedical images. The first aims to obtain input stereo image pairs and disparities through simulation in a virtual environment, while the second relies on a non-learned disparity estimation algorithm to produce noisy disparities, which are then filtered by means of hand-crafted confidence measures to create noisy labels for a subset of pixels. Of the two, the second approach, referred to in the literature as proxy labeling, has shown the best results and has even outperformed the non-learned disparity estimation algorithm used for supervision.
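The abstract names hand-crafted confidence measures without specifying them; one common choice is the left-right consistency check, so a hedged sketch of proxy labeling might look like the following (the function name, threshold, and toy data are assumptions, not the thesis's pipeline).

```python
import numpy as np

# Hypothetical proxy-labeling sketch: filter a noisy disparity map with a
# left-right consistency check and keep only confident pixels as labels.

def proxy_labels(disp_left, disp_right, max_diff=1.0):
    """disp_left/disp_right: HxW disparity maps from a non-learned stereo
    algorithm (e.g. block matching). Returns sparse labels and a mask."""
    h, w = disp_left.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    # Position of each left-image pixel in the right image.
    xr = np.clip((xs - disp_left).round().astype(int), 0, w - 1)
    disp_reprojected = np.take_along_axis(disp_right, xr, axis=1)
    # Confident where the two views agree on the disparity.
    mask = np.abs(disp_left - disp_reprojected) <= max_diff
    labels = np.where(mask, disp_left, np.nan)  # noisy labels, subset only
    return labels, mask

dl = np.full((4, 6), 2.0); dr = np.full((4, 6), 2.0)
dr[:, :2] = 5.0  # inconsistent region
labels, mask = proxy_labels(dl, dr)
print(mask.mean())  # fraction of pixels kept as proxy labels
```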
Abstract:
We report the STAR measurements of dielectron (e⁺e⁻) production at midrapidity (|y_ee| < 1) in Au+Au collisions at √s_NN = 200 GeV. The measurements are evaluated in different invariant-mass regions, with a focus on 0.30–0.76 (ρ-like), 0.76–0.80 (ω-like), and 0.98–1.05 (ϕ-like) GeV/c². The spectrum in the ω-like and ϕ-like regions can be well described by the hadronic cocktail simulation. In the ρ-like region, however, the vacuum ρ spectral function cannot describe the shape of the dielectron excess. In this range, an enhancement of 1.77 ± 0.11(stat) ± 0.24(syst) ± 0.33(cocktail) is determined with respect to the hadronic cocktail simulation that excludes the ρ meson. The excess yield in the ρ-like region increases with the number of collision participants faster than the ω and ϕ yields. Theoretical models with broadened ρ contributions through interactions with constituents in the hot QCD medium provide a consistent description of the dilepton mass spectra for the measurement presented here and for earlier data at Super Proton Synchrotron energies.
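The abstract quotes the enhancement factor without its definition; conventionally it is the ratio of the measured yield to the cocktail yield in the ρ-like mass window, which can be written as follows (a hedged rendering, not the paper's exact formula):

```latex
% Hedged rendering: enhancement in the rho-like window 0.30 < M_ee < 0.76 GeV/c^2,
% relative to the hadronic cocktail with the rho meson excluded.
\[
  R = \frac{\displaystyle\int_{0.30}^{0.76}
        \left(\frac{dN}{dM_{ee}}\right)_{\text{data}} dM_{ee}}
       {\displaystyle\int_{0.30}^{0.76}
        \left(\frac{dN}{dM_{ee}}\right)_{\text{cocktail},\,\rho\text{ excluded}} dM_{ee}}
  = 1.77 \pm 0.11(\text{stat}) \pm 0.24(\text{syst}) \pm 0.33(\text{cocktail})
\]
```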
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
The quantification of the available energy in the environment is important because it determines photosynthesis, evapotranspiration and, therefore, the final yield of crops. Instruments for measuring the energy balance are costly, and indirect estimation alternatives are desirable. This study assessed the performance of Deardorff's model during a cycle of a sugarcane crop in Piracicaba, State of São Paulo, Brazil, in comparison to the aerodynamic method. This mechanistic model simulates the energy fluxes (sensible heat, latent heat and net radiation) at three levels (atmosphere, canopy and soil) using only air temperature, relative humidity and wind speed measured at a reference level above the canopy, the crop leaf area index, and some pre-calibrated parameters (canopy albedo, soil emissivity, atmospheric transmissivity and hydrological characteristics of the soil). The analysis was made for different time scales, insolation conditions and seasons (spring, summer and autumn). Analyzing all data at 15-minute intervals, the model presented good performance for net radiation simulation under different insolation conditions and seasons. The latent and sensible heat fluxes in the atmosphere did not present differences in comparison to data from the aerodynamic method during the autumn. The sensible heat flux in the soil was poorly simulated by the model, due to the poor performance of the soil water balance method. In general, Deardorff's model improved the flux simulations in comparison to the aerodynamic method when more insolation was available in the environment.
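Deardorff's scheme itself is more involved than this abstract can show; as a hedged orientation to the quantities being compared, the simulated surface fluxes are commonly linked by the energy balance closure Rn = H + LE + G, and a minimal sketch of checking that closure over 15-minute intervals (with hypothetical values) is:

```python
import numpy as np

# Hedged sketch (not Deardorff's model itself, which this abstract does
# not detail): surface fluxes should satisfy the energy balance closure
#   Rn = H + LE + G
# where Rn = net radiation, H = sensible heat flux, LE = latent heat
# flux, and G = soil heat flux, all in W/m^2.

def closure_residual(Rn, H, LE, G):
    """Energy-balance residual for each 15-minute interval."""
    return Rn - (H + LE + G)

# Hypothetical 15-minute flux samples (W/m^2).
Rn = np.array([520.0, 480.0, 150.0])
H  = np.array([180.0, 160.0,  40.0])
LE = np.array([300.0, 280.0,  95.0])
G  = np.array([ 40.0,  35.0,  10.0])
print(closure_residual(Rn, H, LE, G))  # ideally near zero
```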
Abstract:
In this paper, the Galerkin method and the Askey-Wiener scheme are used to obtain approximate solutions to the stochastic displacement response of Kirchhoff plates with uncertain parameters. Theoretical and numerical results are presented. The Lax-Milgram lemma is used to express the conditions for existence and uniqueness of the solution. Uncertainties in plate and foundation stiffness are modeled respecting these conditions, hence using Legendre polynomials indexed in uniform random variables. The space of approximate solutions is built using results on density between the space of continuous functions and Sobolev spaces. Approximate Galerkin solutions are compared with results of Monte Carlo simulation, in terms of first- and second-order moments and histograms of the displacement response. Numerical results for two example problems show very fast convergence to the exact solution, with excellent accuracy. The Askey-Wiener Galerkin scheme developed herein is able to reproduce the histogram of the displacement response, and is shown to be a theoretically sound and efficient method for the solution of stochastic problems in engineering.
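The plate problem requires the full machinery described above; as a hedged one-dimensional analogue (ours, not the paper's), a Galerkin projection onto Legendre polynomials in a uniform random variable can be compared with Monte Carlo on a scalar problem with uncertain stiffness:

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

# Hedged toy analogue (not the Kirchhoff plate problem): solve
#   k(xi) * u(xi) = f,  k(xi) = k0 * (1 + eps*xi),  xi ~ Uniform(-1, 1),
# by Galerkin projection onto Legendre polynomials (the Askey-Wiener
# chaos for uniform random variables), then compare with Monte Carlo.

k0, eps, f, P = 10.0, 0.3, 1.0, 6          # P = chaos order
x, w = leggauss(32)                         # Gauss-Legendre nodes/weights
w = w / 2.0                                 # uniform density on [-1, 1]

def legendre(i, x):
    c = np.zeros(i + 1); c[i] = 1.0
    return legval(x, c)

k = k0 * (1.0 + eps * x)
A = np.array([[np.sum(w * k * legendre(i, x) * legendre(j, x))
               for j in range(P + 1)] for i in range(P + 1)])
b = np.array([np.sum(w * f * legendre(i, x)) for i in range(P + 1)])
u = np.linalg.solve(A, b)                   # chaos coefficients

mean_g = u[0]                               # E[P_0 term]
var_g = sum(u[i]**2 / (2*i + 1) for i in range(1, P + 1))

rng = np.random.default_rng(0)
xi = rng.uniform(-1, 1, 200_000)
umc = f / (k0 * (1.0 + eps * xi))
print(mean_g, umc.mean())                   # Galerkin vs Monte Carlo mean
print(np.sqrt(var_g), umc.std())            # and standard deviation
```

The variance formula uses E[P_i P_j] = δ_ij/(2i+1) under the uniform density on [-1, 1], mirroring the first- and second-order moment comparison the abstract describes.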
Abstract:
In this work, not only the steady-state diagnosis but also the study, simulation and analysis of the dynamic behaviour of the electrical grid of the island of São Vicente, Cape Verde, is carried out. Transient stability studies play an important role both in the planning and in the operation of power systems. Such studies are largely performed through digital time-domain simulation, using numerical integration to solve the non-linear equations that model the system dynamics, and they depend on the existence of real disturbance records (e.g., oscillographic disturbance recordings). A further objective of this work is to verify the applicability of the technical requirements that generating units must meet with respect to voltage control, as established in the future European regulations developed by ENTSO-E (European Network of Transmission System Operators for Electricity). Among these requirements, the capability of the existing machines to withstand voltage dips caused by symmetrical three-phase short circuits (Fault Ride Through) at the grid connection point was analysed. To this end, the factors that influence the stability of this grid under disturbed conditions were identified, namely: (i) fault duration, and (ii) load characterisation, with and without an automatic voltage regulator (AVR) on the synchronous generating units. In the absence of real records of system behaviour, it is concluded that the system is sensitive to load elasticity, in particular for constant-power loads, with a risk of loss of stability, in this case, for faults longer than 5 ms without AVR. The presence of AVRs in this grid appears indispensable to guarantee voltage stability, although correct parametrisation is required.
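The thesis's network model is not reproduced in this abstract; as a hedged, single-machine illustration of the kind of time-domain study it describes, the classical swing equation can be integrated through a fault of configurable duration to see whether the rotor angle remains bounded (all parameter values below are hypothetical):

```python
import numpy as np

# Hedged single-machine-infinite-bus sketch (not the São Vicente network
# model): integrate the classical swing equation
#   M * d2(delta)/dt2 = Pm - Pe(delta) - D * d(delta)/dt
# with the transferable power collapsed to near zero while a three-phase
# fault is applied, then restored when the fault clears.

def swing(fault_duration, Pm=0.8, Pmax=1.8, M=0.1, D=0.05,
          dt=1e-4, t_end=5.0):
    delta = np.arcsin(Pm / Pmax)   # pre-fault equilibrium angle (rad)
    omega = 0.0
    t, angles = 0.0, []
    while t < t_end:
        pmax_now = 0.05 if t < fault_duration else Pmax  # fault on/off
        domega = (Pm - pmax_now * np.sin(delta) - D * omega) / M
        omega += domega * dt
        delta += omega * dt
        angles.append(delta)
        t += dt
    return max(angles)

for tf in (0.1, 0.4, 0.8):         # fault durations in seconds
    print(tf, swing(tf))           # very large max angle => synchronism lost
```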
Abstract:
Master's degree in Radiation Applied to Health Technologies. Specialization area: Radiation Protection
Abstract:
Final Master's project for the degree of Master in Chemical and Biological Engineering
Abstract:
Aim - To use Monte Carlo (MC) simulation together with voxel phantoms to analyze the tissue heterogeneity effect on the dose distributions and equivalent uniform dose (EUD) for ¹²⁵I prostate implants. Background - Dose distribution calculations in low-dose-rate brachytherapy are based on the dose deposition around a single source in a water phantom. This formalism does not take into account tissue heterogeneities, interseed attenuation, or finite patient dimensions. Tissue composition is especially important due to the photoelectric effect. Materials and Methods - The computed tomography (CT) scans of two patients with prostate cancer were used to create voxel phantoms for the MC simulations. An elemental composition and a density were assigned to each structure. Densities of the prostate, vesicles, rectum and bladder were determined from the CT electronic densities of 100 patients. The same simulations were performed considering the same phantom as pure water. Results were compared via dose-volume histograms and EUD for the prostate and rectum. Results - The mean absorbed doses presented deviations of 3.3-4.0% for the prostate and of 2.3-4.9% for the rectum when comparing calculations in water with calculations in the heterogeneous phantom. In the calculations in water, the prostate D90 was overestimated by 2.8-3.9%, and the rectum D0.1cc showed dose differences of 6-8%. The EUD was overestimated by 3.5-3.7% for the prostate and by 7.7-8.3% for the rectum. Conclusions - The deposited dose was consistently overestimated in the simulation in water. In order to increase the accuracy of the dose distributions, especially around the rectum, the introduction of model-based algorithms is recommended.
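The abstract does not state which EUD formulation was used; a widely used choice is the generalized EUD (Niemierko), sketched here over hypothetical voxel doses:

```python
import numpy as np

# Hedged sketch: the generalized EUD over the voxel doses of a structure,
#   EUD = ( (1/N) * sum_i D_i**a )**(1/a)
# where a is a tissue-specific parameter (a < 0 for targets such as the
# prostate, a > 0 for serial organs at risk such as the rectum). The
# abstract does not confirm this exact formulation; values are invented.

def generalized_eud(doses, a):
    doses = np.asarray(doses, dtype=float)
    return (np.mean(doses ** a)) ** (1.0 / a)

# Hypothetical voxel doses (Gy) for two structures.
prostate = np.random.default_rng(0).normal(145.0, 10.0, 10_000)
rectum = np.random.default_rng(1).normal(40.0, 15.0, 10_000).clip(min=0.1)
print(generalized_eud(prostate, a=-10))  # cold spots dominate the target
print(generalized_eud(rectum, a=8))      # hot spots dominate the rectum
```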
Abstract:
Final Master's project for the degree of Master in Chemical Engineering
Abstract:
Final Master's project for the degree of Master in Mechanical Engineering
Abstract:
Dissertation for the degree of Master in Electrical Engineering, branch of Industrial Automation and Electronics
Abstract:
The way we learn depends on the technological and sociocultural context that surrounds us; nowadays, the inclusion of recent technology in the classroom is no longer considered optional but a necessity, since the way students learn is in constant evolution. With this need in mind, a virtual reality simulator using haptic controllers/interfaces was developed in the course of this thesis. The goal of this simulator is to teach physics concepts interactively. Haptic devices add the sense of touch to human-machine interaction, giving access to new sensations that can be used, in particular, for learning purposes. The simulator developed, called "Forces of Physics", addresses three types of forces in physics: friction forces, gravitational forces and aerodynamic forces. Each type of force corresponds to a module of the simulator containing an individual simulation in which specific concepts of that force are explained, in a stimulating visual environment and with a more realistic interaction due to the inclusion of the Novint Falcon haptic device. The simulator was presented to several users as well as to the scientific community through presentations at conferences. The evaluation was carried out using a questionnaire with ten questions, five about learning and five about usability, completed by 14 users. The simulator was well received, and several users expressed their opinions about its current state, its future, and its validity for classroom use.
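The simulator's own code is not part of this abstract; as a hedged sketch of the three force modules it names, each force can be computed per frame and sent to the haptic device as a feedback vector (all function names and parameter values are hypothetical):

```python
import numpy as np

# Hedged sketch (not the "Forces of Physics" code itself): the three
# force modules named in the abstract, each returning a 3-D feedback
# force that a haptic loop would send to the device every frame.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def friction_force(normal_force, mu, velocity):
    """Kinetic friction opposing the direction of motion."""
    speed = np.linalg.norm(velocity)
    if speed == 0.0:
        return np.zeros(3)
    return -mu * normal_force * velocity / speed

def gravitational_force(m1, m2, r_vec):
    """Newtonian attraction on body 1 toward body 2 (r_vec points 1->2)."""
    r = np.linalg.norm(r_vec)
    return G * m1 * m2 * r_vec / r**3

def drag_force(rho, cd, area, velocity):
    """Quadratic aerodynamic drag opposing the motion."""
    speed = np.linalg.norm(velocity)
    return -0.5 * rho * cd * area * speed * velocity

v = np.array([2.0, 0.0, 0.0])
print(friction_force(9.8, 0.4, v))    # ~[-3.92, 0, 0] N
print(drag_force(1.2, 0.5, 0.01, v))  # ~[-0.012, 0, 0] N
```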