917 results for Simulation in robotics
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Graduate Program in Agronomy (Soil Science) - FCAV
Abstract:
The use of computational grid simulators is particularly important for studying task-scheduling algorithms: through simulators it is possible to assess and compare the performance of different algorithms in various scenarios. Although the available simulation tools provide the basic features for simulating distributed environments, they do not offer built-in task-scheduling policies, so users must implement the algorithms themselves. This study therefore presents LIBTS (LIBrary of Tasks Scheduling), a task-scheduling library developed for and integrated with the SimGrid simulator, which provides users with a tool for analyzing task-scheduling algorithms in computational grids.
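The abstract does not detail LIBTS's interfaces, so the following is only a minimal, self-contained sketch of the kind of comparison such a library enables, not the LIBTS/SimGrid API: two common policies, round-robin FCFS and Min-Min, map independent tasks onto heterogeneous hosts and their makespans are compared. All task sizes and host speeds are invented.

```python
import random

def simulate(tasks, host_speeds, schedule):
    """Replay a schedule (a list of (task_size, host) pairs) and
    return the makespan."""
    finish = [0.0] * len(host_speeds)        # next-free time per host
    for task, host in schedule(tasks, host_speeds):
        finish[host] += task / host_speeds[host]
    return max(finish)

def fcfs(tasks, host_speeds):
    """First-come-first-served: tasks go round-robin over the hosts."""
    return [(t, i % len(host_speeds)) for i, t in enumerate(tasks)]

def min_min(tasks, host_speeds):
    """Min-Min: repeatedly map the (task, host) pair with the smallest
    earliest completion time."""
    ready = [0.0] * len(host_speeds)
    pending = list(tasks)
    plan = []
    while pending:
        task, host, done = min(
            ((t, h, ready[h] + t / host_speeds[h])
             for t in pending for h in range(len(host_speeds))),
            key=lambda x: x[2])
        pending.remove(task)
        ready[host] = done
        plan.append((task, host))
    return plan

random.seed(1)
tasks = [random.uniform(10, 100) for _ in range(50)]  # task sizes (flop)
hosts = [1.0, 2.0, 4.0]                               # host speeds (flop/s)
for policy in (fcfs, min_min):
    print(policy.__name__, "makespan:",
          round(simulate(tasks, hosts, policy), 1))
```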
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Water management is highly important to the success of many businesses, and indeed to life itself, and understanding its relationship with the environment allows water demand to be better controlled. Hydrogeological studies are therefore needed to better understand the behavior of an aquifer, so that it can be managed without being depleted or harmed. The objective of this work is the transient-regime numerical modeling of a portion of the Rio Claro aquifer formation, in order to obtain answers about its hydrogeological parameters, its main flow direction, and its most sensitive parameters. A literature review and a conceptual characterization of the aquifer, combined with field campaigns and monitoring of the local water level (NA), enabled the subsequent construction of a mathematical model by the finite element method, using the FEFLOW 6.1® software. The study site includes the UNESP campus and residential and industrial areas of the city of Rio Claro. Its area of 9.73 km² was divided into 318,040 triangular elements spread over six layers, totaling a volume of 0.25 km³. The local topography and geological contacts were obtained from previous geological and geophysical studies, as well as from profiles of campus wells and the SIAGAS/CPRM system. The seven monitoring wells on campus were set up as observation points for calibrating and checking the simulation results. Sampling and characterization of the Rio Claro sandstones revealed high hydrological and lithological heterogeneity in the aquifer formation. The simulation results indicate hydraulic conductivity values between 10⁻⁶ and 10⁻⁴ m/s, with a recharge/rainfall ratio of 13% in the transient simulation. Even with the simplifications imposed on it, the model was able to represent the fluctuations of the local NA over a year of monitoring. The result was an outflow of 3,774,770 m³ of water and a consequent fall of the NA. The model is considered representative for the...
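As a rough illustration of transient flow modeling, here is a 1-D finite-difference analogue of the governing equation (S ∂h/∂t = T ∂²h/∂x² + R), not the study's 3-D finite element FEFLOW model; all parameter values below are invented, with K chosen inside the 10⁻⁶..10⁻⁴ m/s range reported.

```python
import numpy as np

# Toy 1-D transient groundwater-flow solver (explicit finite differences).
L, nx = 1000.0, 101                 # domain length (m), number of nodes
dx = L / (nx - 1)
K, b = 1e-4, 30.0                   # hydraulic conductivity (m/s), thickness (m)
T, S = K * b, 0.1                   # transmissivity (m^2/s), storativity (-)
R = 1e-8                            # recharge (m/s), ~315 mm/yr
dt = 0.25 * S * dx**2 / T           # well inside the explicit stability limit
h = np.full(nx, 50.0)               # initial head (m)

t, t_end = 0.0, 30 * 86400.0        # simulate 30 days
while t < t_end:
    h[0] = h[-1] = 45.0             # fixed-head boundaries (e.g. streams)
    lap = (h[:-2] - 2.0 * h[1:-1] + h[2:]) / dx**2
    h[1:-1] += dt * (T * lap + R) / S
    t += dt
print("head at domain centre after 30 days: %.2f m" % h[nx // 2])
```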
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
The study of proportions is a common topic in many fields. The standard beta distribution or the inflated beta distribution may be a reasonable choice for fitting a proportion in most situations. However, they fit poorly variables that never assume values in the open interval (0, c), 0 < c < 1. For these variables, the authors introduce the truncated inflated beta distribution (TBEINF). The proposed distribution is a mixture of a beta distribution bounded on the open interval (c, 1) and a trinomial distribution. The authors present the moments of the distribution, its score vector, and its Fisher information matrix, and discuss estimation of its parameters. The properties of the suggested estimators are studied using Monte Carlo simulation. In addition, the authors present an application of the TBEINF distribution to unemployment insurance data.
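The abstract does not give the paper's exact parameterization, so the sketch below assumes a plausible TBEINF-style construction: point masses at 0, c, and 1 (the trinomial part) mixed with a beta density rescaled to (c, 1), plus a Monte Carlo check of the mean in the spirit of the simulation study mentioned.

```python
import numpy as np

rng = np.random.default_rng(0)

def rtbeinf(n, c, p0, pc, p1, alpha, beta):
    """Draw n samples from a TBEINF-style mixture (illustrative
    parameterization, not necessarily the paper's): point masses at
    0, c and 1 with probabilities p0, pc, p1, and with probability
    1 - p0 - pc - p1 a Beta(alpha, beta) draw rescaled to (c, 1)."""
    u = rng.random(n)
    out = np.empty(n)
    out[u < p0] = 0.0
    out[(u >= p0) & (u < p0 + pc)] = c
    out[(u >= p0 + pc) & (u < p0 + pc + p1)] = 1.0
    cont = u >= p0 + pc + p1                      # continuous component
    out[cont] = c + (1 - c) * rng.beta(alpha, beta, cont.sum())
    return out

x = rtbeinf(100_000, c=0.3, p0=0.10, pc=0.05, p1=0.05, alpha=2.0, beta=5.0)
# Monte Carlo check of the mean against the closed form of this mixture:
exact = 0.05 * 0.3 + 0.05 * 1.0 + 0.80 * (0.3 + 0.7 * 2 / (2 + 5))
print(f"sample mean {x.mean():.4f} vs analytic {exact:.4f}")
```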
Abstract:
The use of numerical simulation in the design and performance evaluation of products is ever increasing. To a great extent, such estimates are needed at an early design stage, when physical prototypes are not available. When dealing with vibro-acoustic models, known to be computationally expensive, a question remains concerning the accuracy of such models in view of the well-known variability inherent to mass-production manufacturing techniques. In addition, both academia and industry have recently realized the importance of actually listening to a product's sound, either through measurements or through virtual sound synthesis, in order to assess its performance. In this work, the effect of significant parameter variations in a simplified vehicle vibro-acoustic model on loudness metrics is quantified using Monte Carlo analysis. The mapping from the system parameters to the sound quality metrics is performed by a fully coupled vibro-acoustic finite element model. Different loudness metrics are used, including the overall sound pressure level expressed in dB and the Specific Loudness in Sones. Sound-quality equivalent sources are used to excite the model, and the sound pressure level at the driver's head position is acquired and evaluated according to the sound quality metrics. No significant variation is perceived when the system is evaluated using the regular sound pressure level expressed in dB and dB(A); this happens because the third-octave filters average the results within frequency bands. Zwicker Loudness, on the other hand, presents important variations, arguably due to masking effects.
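A hedged sketch of the Monte Carlo idea, using a toy one-degree-of-freedom surrogate instead of the paper's fully coupled finite element model (all parameter values and the ±5% scatter are invented): it shows how the same response scatter can look different when expressed in dB versus on a Sone-like loudness scale (Stevens' rule: +10 dB doubles loudness).

```python
import numpy as np

rng = np.random.default_rng(42)

# Monte Carlo scatter propagation through a toy 1-DOF surrogate.
n = 10_000
f = 120.0                                          # excitation frequency (Hz)
w = 2 * np.pi * f
m = 1.0 * (1 + 0.05 * rng.standard_normal(n))      # mass (kg)
k = 8.0e5 * (1 + 0.05 * rng.standard_normal(n))    # stiffness (N/m)
c = 40.0 * (1 + 0.05 * rng.standard_normal(n))     # damping (N*s/m)
H = 1.0 / np.abs(k - m * w**2 + 1j * c * w)        # receptance magnitude

p = H / H.mean()                                   # normalised "pressure"
level_db = 20 * np.log10(p) + 60.0                 # arbitrary 60 dB reference
sones = 2 ** ((level_db - 40.0) / 10.0)            # Stevens-style loudness

for name, x in [("dB", level_db), ("Sones", sones)]:
    print(f"{name:6s} mean {x.mean():7.2f}  rel. std {x.std()/x.mean():.2%}")
```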
Abstract:
The thesis deals with channel coding theory applied to the upper layers of the protocol stack of a communication link and is the outcome of four years of research activity. A specific aspect of this activity has been the continuous interaction between the natural curiosity of academic blue-sky research and the system-oriented design deriving from collaboration with European industry in the framework of European funded research projects.

In this dissertation, classical channel coding techniques, traditionally applied at the physical layer, find their application at the upper layers, where the encoding units (symbols) are packets of bits rather than single bits; such upper-layer coding techniques are therefore usually referred to as packet-layer coding. The rationale behind their adoption is that physical-layer channel coding is a suitable countermeasure against small-scale fading but is less efficient against large-scale fading. This is mainly due to the limited time diversity that follows from keeping the physical-layer interleaver to a reasonable size, so as to avoid increasing modem complexity and the latency of all services. Packet-layer techniques, thanks to their longer codeword duration (each codeword is composed of several packets of bits), offer intrinsically longer protection against long fading events. Furthermore, because they are implemented at the upper layers, packet-layer techniques have the indisputable advantages of simpler implementation (very close to a software implementation) and of selective applicability to different services, enabling a better match with service requirements (e.g. latency constraints). Packet-layer coding has been widely recognized in recent communication standards as a viable and efficient solution: Digital Video Broadcasting standards such as DVB-H, DVB-SH, and DVB-RCS mobile, and 3GPP standards (MBMS), employ packet coding techniques working at layers higher than the physical one.

In this framework, the aim of the research work has been the study of state-of-the-art coding techniques working at the upper layer (UL), the evaluation of their performance in realistic propagation scenarios, and the design of new coding schemes for upper-layer applications. After a review of the most important packet-layer codes, i.e. Reed-Solomon, LDPC, and Fountain codes, the thesis focuses on the performance evaluation of ideal codes (i.e. Maximum Distance Separable codes) working at the UL. In particular, we analyze the performance of UL-FEC techniques in Land Mobile Satellite channels. We derive an analytical framework, a useful tool for system design, that allows the performance of the upper-layer decoder to be predicted. We also analyze a system in which upper-layer and physical-layer codes work together, and we derive the optimal splitting of redundancy when a frequency non-selective, slowly varying fading channel is taken into account. The whole analysis is supported and validated through computer simulation.

In the last part of the dissertation, we propose LDPC Convolutional Codes (LDPCCC) as a possible coding scheme for future UL-FEC applications. Since one of the main drawbacks of adopting packet-layer codes is the large decoding latency, we introduce a latency-constrained decoder for LDPCCC (called the windowed erasure decoder) and analyze the performance of state-of-the-art LDPCCC when our decoder is adopted. Finally, we propose a design rule that allows performance to be traded off against latency.
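For the ideal (MDS) upper-layer codes analyzed in the thesis, the decoding failure probability over a memoryless packet-erasure channel has a standard closed form: an (n, k) MDS code fails iff more than n - k of its n packets are erased. A minimal sketch, assuming i.i.d. erasures (unlike the correlated Land Mobile Satellite channels considered in the thesis); it also illustrates why longer codewords protect better at the same redundancy.

```python
from math import comb

def mds_failure_prob(n: int, k: int, p: float) -> float:
    """Probability that an (n, k) MDS packet code cannot decode a codeword
    over an i.i.d. erasure channel: more than n - k erasures out of n."""
    return sum(comb(n, e) * p**e * (1 - p)**(n - e)
               for e in range(n - k + 1, n + 1))

# Same ~25% overhead, growing block length, packet erasure rate 10%:
for n, k in [(12, 9), (60, 45), (255, 191)]:
    print(f"(n={n:3d}, k={k:3d})  P(decoding failure) = "
          f"{mds_failure_prob(n, k, 0.1):.3e}")
```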
Abstract:
The phase behavior and interfacial properties of polymers in supercritical solution are investigated using a coarse-grained bead-spring model for the reference system hexadecane-CO2. The potential parameters are determined by equating the critical points of simulation and experiment. Interactions between the two components are modeled by a modified Lorentz-Berthelot rule. The agreement with experiment is very good; in particular, the phase diagram of the mixed system, including the critical lines, can be reproduced. A comparison with numerical perturbation calculations (TPT1) yields qualitative agreement and hints for improving the equation of state used. Building on these considerations, the early stages of nucleation are investigated. For the Lennard-Jones system, the transition from a homogeneous gas to a single droplet in a finite volume is directly demonstrated and quantified for the first time. The free energy of small clusters is determined with a simple classical nucleation model and bounded from above. The investigations presented were made possible by a further development of the umbrella-sampling algorithm, in which the simulation is divided into several simulation windows that are processed one after another. The method permits the free-energy landscape to be determined at an arbitrary point of the phase diagram; the error is controllable and independent of how the windows are subdivided.
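A minimal sketch of the windowed umbrella-sampling idea on a toy 1-D nucleation landscape (the landscape and all numbers are invented; the thesis applies the method to full particle simulations): each window confines the order parameter to two adjacent values, and the histogram ratios are chained to reconstruct the free-energy curve.

```python
import math, random

random.seed(0)

def F(n):
    """Toy classical-nucleation free energy in kT (invented numbers):
    surface term ~ n^(2/3) against a bulk term ~ n."""
    return 1.5 * n ** (2.0 / 3.0) - 0.3 * n

def window_ratio(n, sweeps=50_000):
    """Confine the order parameter to the window {n, n+1}, run Metropolis
    moves between the two states, and return the histogram ratio."""
    s, hist = n, [1, 1]                   # start counts at 1 to avoid 0/0
    for _ in range(sweeps):
        t = n + 1 if s == n else n        # propose the other state
        if random.random() < math.exp(-(F(t) - F(s))):
            s = t
        hist[s - n] += 1
    return hist[1] / hist[0]

# Chain the windows: F_est(n+1) = F_est(n) - ln[H(n+1)/H(n)].
F_est, f = [0.0], 0.0
for n in range(60):
    f -= math.log(window_ratio(n))
    F_est.append(f)

barrier = max(F_est)
print("estimated barrier: %.2f kT at n = %d" % (barrier, F_est.index(barrier)))
print("true barrier:      %.2f kT" % max(F(n) for n in range(61)))
```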
Abstract:
Markov chain Monte Carlo (MCMC) is a method of producing a correlated sample in order to estimate features of a complicated target distribution via simple ergodic averages. A fundamental question in MCMC applications is: when should the sampling stop? That is, when are the ergodic averages good estimates of the desired quantities? We consider a method that stops the MCMC sampling the first time the width of a confidence interval based on the ergodic averages is less than a user-specified value. Hence calculating Monte Carlo standard errors is a critical step in assessing the output of the simulation. In particular, we consider the regenerative simulation and batch means methods of estimating the variance of the asymptotic normal distribution. We describe sufficient conditions for the strong consistency and asymptotic normality of both methods and investigate their finite sample properties in a variety of examples.
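A minimal sketch of the batch-means estimator and the fixed-width stopping rule described above, run on an AR(1) chain as a stand-in for MCMC output (the autocorrelation 0.8 and the 0.05 half-width tolerance are invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(7)

def batch_means_se(x):
    """Batch-means estimate of the Monte Carlo standard error of the mean
    of a correlated chain x, using ~sqrt(n) batches of size ~sqrt(n)."""
    n = len(x)
    b = int(np.floor(np.sqrt(n)))             # batch size
    a = n // b                                # number of batches
    means = x[:a * b].reshape(a, b).mean(axis=1)
    var_hat = b * means.var(ddof=1)           # estimates the asymptotic variance
    return np.sqrt(var_hat / n)

# Fixed-width stopping rule: stop when the CI half-width drops below eps.
rho, eps, z = 0.8, 0.05, 1.96
x, chain = 0.0, []
while True:
    x = rho * x + rng.standard_normal()       # AR(1) step, stationary mean 0
    chain.append(x)
    n = len(chain)
    if n >= 1000 and n % 1000 == 0:
        se = batch_means_se(np.array(chain))
        if z * se < eps:
            break
print(f"stopped at n = {n}, estimate = {np.mean(chain):.4f} +/- {z*se:.4f}")
```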