928 results for Computer Simulation, Adaptive Simulations
Abstract:
Graduate Program in Physics - IGCE
Abstract:
Graduate Program in Materials Science and Technology - FC
Abstract:
Graduate Program in Science Education - FC
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
The objective of this project was to study the epidemiology of bovine tuberculosis in the presence of a wildlife reservoir species. Cross-sectional and longitudinal studies of possum populations with endemic bovine tuberculosis infection were analyzed. The results were used to develop a computer simulation model of the dynamics of bovine tuberculosis infection in possum populations. A case-control study of breakdowns to tuberculosis infection in cattle herds in the Central North Island of New Zealand was conducted to identify risk factors other than exposure to tuberculosis in local possum populations.
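The kind of infection-dynamics model described above can be sketched as a toy discrete-time compartmental simulation. The compartment structure, transmission term, and every rate value below are illustrative assumptions, not the model actually developed in the project:

```python
def simulate_tb(n0=500, beta=0.002, mortality=0.1, birth=0.12,
                tb_death=0.5, years=20):
    """Toy discrete-time susceptible/infected model of bovine TB in a
    possum population. All rates are illustrative assumptions, not
    estimates from the study."""
    s, i = n0 - 5, 5  # start with 5 infected possums
    history = []
    for _ in range(years):
        new_inf = min(s, int(beta * s * i))   # density-dependent transmission
        births = int(birth * (s + i))         # newborns are susceptible
        s = max(s - new_inf + births - int(mortality * s), 0)
        i = max(i + new_inf - int((mortality + tb_death) * i), 0)
        history.append((s, i))
    return history

for year, (s, i) in enumerate(simulate_tb()):
    if year % 5 == 0:
        print(year, s, i)
```

A real model of this kind would add latent and clinical stages and spatial structure; the sketch only shows the bookkeeping of a compartmental update.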
Abstract:
This article describes the design, implementation, and experiences with AcMus, an open and integrated software platform for room acoustics research, which comprises tools for measurement, analysis, and simulation of rooms for music listening and production. Through use of affordable hardware, such as laptops, consumer audio interfaces and microphones, the software allows evaluation of relevant acoustical parameters with stable and consistent results, thus providing valuable information in the diagnosis of acoustical problems, as well as the possibility of simulating modifications in the room through analytical models. The system is open-source and based on a flexible and extensible Java plug-in framework, allowing for cross-platform portability, accessibility and experimentation, thus fostering collaboration of users, developers and researchers in the field of room acoustics.
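One of the standard room-acoustics analyses such a platform performs is estimating the reverberation time from a measured impulse response. Below is a minimal sketch of the classical Schroeder backward-integration approach; this is a generic textbook method, not AcMus code, and the T20 fitting range is an assumption:

```python
import numpy as np

def schroeder_rt60(impulse_response, fs):
    """Estimate RT60 from a room impulse response via Schroeder backward
    integration, fitting the decay between -5 dB and -25 dB (T20) and
    extrapolating to -60 dB."""
    energy = impulse_response ** 2
    edc = np.cumsum(energy[::-1])[::-1]          # energy decay curve
    edc_db = 10 * np.log10(edc / edc[0])
    i5 = np.argmax(edc_db <= -5)                 # first sample below -5 dB
    i25 = np.argmax(edc_db <= -25)               # first sample below -25 dB
    t = np.arange(len(edc_db)) / fs
    slope, _ = np.polyfit(t[i5:i25], edc_db[i5:i25], 1)
    return -60.0 / slope

# Synthetic impulse response with a constructed RT60 of 0.5 s
fs = 8000
t = np.arange(0, 1.0, 1 / fs)
ir = np.exp(-6 * np.log(10) * t)   # energy drops 60 dB over 0.5 s
print(round(schroeder_rt60(ir, fs), 2))
```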
Abstract:
The photons scattered by the Compton effect can be used to characterize the physical properties of a given sample due to the influence that the electron density exerts on the number of scattered photons. However, scattering measurements involve experimental and physical factors that must be carefully analyzed to predict uncertainty in the detection of Compton photons. This paper presents a method for the optimization of the geometrical parameters of an experimental arrangement for Compton scattering analysis, based on its relations with the energy and incident flux of the X-ray photons. In addition, the tool enables the statistical analysis of the information displayed and includes the coefficient of variation (CV) measurement for a comparative evaluation of the physical parameters of the model established for the simulation. (C) 2012 Elsevier B.V. All rights reserved.
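The geometrical optimization rests on the standard Compton relation between incident energy, scattering angle, and scattered-photon energy, together with the coefficient of variation used for the comparative evaluation. A small sketch of both follows; it is illustrative only and does not reproduce the paper's actual tool:

```python
import math

def compton_scattered_energy(e_kev, theta_deg):
    """Energy of a Compton-scattered photon (standard Compton formula):
    E' = E / (1 + (E / m_e c^2) * (1 - cos(theta)))."""
    mec2 = 511.0  # electron rest energy in keV
    theta = math.radians(theta_deg)
    return e_kev / (1 + (e_kev / mec2) * (1 - math.cos(theta)))

def coefficient_of_variation(samples):
    """CV = sample standard deviation / mean."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return math.sqrt(var) / mean

# A 60 keV photon scattered at 90 degrees keeps most of its energy
print(round(compton_scattered_energy(60.0, 90.0), 1))
```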
Abstract:
There is special interest in the incorporation of metallic nanoparticles into a surrounding dielectric matrix to obtain composites with desirable characteristics, such as surface plasmon resonance, which can be used in photonics and sensing, and controlled surface electrical conductivity. We investigated nanocomposites produced through metallic ion implantation into an insulating substrate, where the implanted metal self-assembles into nanoparticles. During the implantation, the excess metal atom concentration above the solubility limit leads to nucleation and growth of metal nanoparticles, driven by the temperature and temperature gradients within the implanted sample, including the beam-induced thermal characteristics. The nanoparticles nucleate near the maximum of the implantation depth profile (projected range), which can be estimated by computer simulation using TRIDYN. This is a Monte Carlo simulation program based on the TRIM (Transport and Range of Ions in Matter) code that takes into account compositional changes in the substrate due to two factors: previously implanted dopant atoms, and sputtering of the substrate surface. Our study suggests that the nanoparticles form a two-dimensional array buried a few nanometers below the substrate surface. More specifically, we have studied Au/PMMA (polymethylmethacrylate), Pt/PMMA, Ti/alumina and Au/alumina systems. Transmission electron microscopy of the implanted samples showed the metallic nanoparticles formed in the insulating matrix. The nanocomposites were characterized by measuring the resistivity of the composite layer as a function of the implanted dose. These experimental results were compared with a model based on percolation theory, in which electron transport through the composite is explained by conduction through a random resistor network formed by the metallic nanoparticles. Excellent agreement was found between the experimental results and the predictions of the theory.
It was possible to conclude, in all cases, that the conduction process is due only to percolation (when the conducting elements are in geometric contact) and that the contribution from tunneling conduction is negligible.
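The percolation picture invoked above, with conduction appearing only once the conducting elements come into geometric contact, can be illustrated with a toy Monte Carlo estimate of the spanning probability on a square lattice. This is a generic textbook model, not the random resistor network fitted in the study:

```python
import random

def percolates(grid):
    """True if occupied sites connect the left and right edges (DFS)."""
    n = len(grid)
    seen = set()
    stack = [(r, 0) for r in range(n) if grid[r][0]]
    seen.update(stack)
    while stack:
        r, c = stack.pop()
        if c == n - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                stack.append((nr, nc))
    return False

def spanning_probability(p, n=30, trials=200, seed=0):
    """Fraction of random n x n lattices at occupancy p that percolate."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
        hits += percolates(grid)
    return hits / trials

# Below and above the 2D site-percolation threshold (~0.593) the lattice
# switches sharply from insulating to conducting.
print(spanning_probability(0.4), spanning_probability(0.7))
```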
Abstract:
In this work, we have used a combination of atomistic simulation methods to explore the effects of confinement on water molecules between silica surfaces. First, the mechanical properties of water severely confined (~3 Å) between two silica alpha-quartz surfaces were determined with first-principles calculations within density functional theory (DFT). Simulated annealing methods were employed because of the complex potential energy surface and the difficulty of avoiding local minima. Our results suggest that much of the stiffness of the material (46%) remains even after the insertion of a water monolayer into the silica. Second, in order to access typical time scales for confined systems, classical molecular dynamics was used to determine the dynamical properties of water confined in cylindrical silica pores with diameters varying from 10 to 40 Å. In this case we varied the passivation of the silica surface from 13% to 100% SiOH, with the other terminations being SiOH2 and SiOH3; the distribution of the different terminations was obtained with a Monte Carlo simulation. The simulations indicate a lowering of the diffusion coefficient as the diameter decreases, due to the structuring of the hydrogen bonds of the water molecules; we have also obtained the density profiles of the confined water and the interfacial tension.
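Diffusion coefficients like the ones discussed above are conventionally extracted from molecular dynamics trajectories via the Einstein relation MSD(t) = 6Dt. Below is a toy stand-in that uses lattice random walkers instead of an MD trajectory; all parameters are arbitrary and only the extraction procedure is illustrated:

```python
import random

def msd_diffusion(n_steps=1000, n_walkers=300, dt=1.0, seed=2):
    """Estimate the diffusion coefficient D from the mean-squared
    displacement of 3D lattice random walkers via the Einstein relation
    MSD(t) = 6 D t (a toy stand-in for analyzing an MD trajectory)."""
    rng = random.Random(seed)
    disp = [[0.0, 0.0, 0.0] for _ in range(n_walkers)]
    for _ in range(n_steps):
        for d3 in disp:
            axis = rng.randrange(3)                       # pick x, y or z
            d3[axis] += 1.0 if rng.random() < 0.5 else -1.0
    msd = sum(x * x + y * y + z * z for x, y, z in disp) / n_walkers
    return msd / (6 * n_steps * dt)                       # D = MSD / (6 t)

print(round(msd_diffusion(), 3))  # near 1/6 for unit steps and unit dt
```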
Abstract:
The thesis deals with channel coding theory applied to the upper layers of the protocol stack of a communication link and is the outcome of four years of research activity. A specific aspect of this activity has been the continuous interaction between the natural curiosity of academic blue-sky research and the system-oriented design deriving from collaboration with European industry in the framework of European funded research projects. In this dissertation, classical channel coding techniques, traditionally applied at the physical layer, find their application at upper layers, where the encoding units (symbols) are packets of bits rather than single bits; this is why such upper layer coding techniques are usually referred to as packet layer coding. The rationale behind the adoption of packet layer techniques is that physical layer channel coding is a suitable countermeasure against small-scale fading, while it is less efficient against large-scale fading. This is mainly due to the limited time diversity that follows from the need to keep the physical layer interleaver to a reasonable size, so as to avoid increasing the modem complexity and the latency of all services. Packet layer techniques, thanks to their longer codeword duration (each codeword is composed of several packets of bits), provide intrinsically longer protection against long fading events. Furthermore, being implemented at upper layers, packet layer techniques have the indisputable advantages of simpler implementation (very close to a software implementation) and of selective applicability to different services, thus enabling a better match with the service requirements (e.g. latency constraints).
Packet layer coding has been widely recognized in recent communication standards as a viable and efficient coding solution: Digital Video Broadcasting standards, like DVB-H, DVB-SH, and DVB-RCS mobile, and 3GPP standards (MBMS) employ packet coding techniques working at layers higher than the physical one. In this framework, the aim of the research work has been the study of state-of-the-art coding techniques working at the upper layer, the performance evaluation of these techniques in realistic propagation scenarios, and the design of new coding schemes for upper layer applications. After a review of the most important packet layer codes, i.e. Reed-Solomon, LDPC and Fountain codes, the thesis focuses on the performance evaluation of ideal codes (i.e. Maximum Distance Separable codes) working at the upper layer. In particular, we analyze the performance of UL-FEC techniques in Land Mobile Satellite channels. We derive an analytical framework which is a useful tool for system design, allowing one to foresee the performance of the upper layer decoder. We also analyze a system in which upper layer and physical layer codes work together, and we derive the optimal splitting of redundancy when a frequency non-selective, slowly varying fading channel is taken into account. The whole analysis is supported and validated through computer simulation. In the last part of the dissertation, we propose LDPC Convolutional Codes (LDPCCC) as a possible coding scheme for future UL-FEC applications. Since one of the main drawbacks of adopting packet layer codes is the large decoding latency, we introduce a latency-constrained decoder for LDPCCC (called the windowed erasure decoder). We analyze the performance of state-of-the-art LDPCCC when our decoder is adopted. Finally, we propose a design rule which allows performance and latency to be traded off.
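For an ideal (n, k) MDS code at the packet layer, decoding succeeds whenever at least k of the n transmitted packets arrive, which gives a closed-form failure probability on a memoryless erasure channel. The sketch below is that textbook baseline only; the thesis analyzes correlated Land Mobile Satellite channels, where the memoryless assumption does not hold:

```python
from math import comb

def mds_failure_probability(n, k, p):
    """Decoding-failure probability of an ideal (n, k) MDS packet code on a
    memoryless packet-erasure channel with loss probability p: decoding
    fails iff more than n - k of the n packets are erased."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n - k + 1, n + 1))

# Uncoded transmission vs. rate-1/2 MDS protection at 10% packet loss
print(mds_failure_probability(1, 1, 0.1))    # 0.1: no protection
print(mds_failure_probability(20, 10, 0.1))  # orders of magnitude lower
```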
Abstract:
The aim of this work is to characterize the influence of spatial confinement on the dynamics of a supercooled liquid. In particular, we want to clarify the role played by the cooperativity of particle motion at low temperatures. To this end, we use molecular dynamics computer simulations to investigate the dynamic properties of a simple model glass former, a binary Lennard-Jones liquid, for systems with different geometries and types of walls. By a suitable choice of the wall potentials, the structure of the liquid could be made nearly identical to that in the bulk. In films with smooth walls, one observes that the dynamics of the liquid is strongly accelerated near the wall and that this modified dynamics extends far into the film. The opposite effect is obtained with a structured, rough wall, near which the dynamics is strongly slowed down. We can describe phenomenologically the continuous slowing down or acceleration of the dynamics from the behavior at the surface to the bulk behavior at sufficiently large distances from the wall. From this, one can read off characteristic dynamic length scales that grow continuously with decreasing temperature; that is, the region in which the presence of the wall has an (indirect) influence on the dynamics of a liquid particle spreads further and further. One can therefore speak of regions of cooperative motion that grow with decreasing temperature. Our investigations of tubes show that, owing to the stronger influence of the walls, the observed effects are larger than in the film geometry. As the system size is reduced, ever larger deviations from the bulk behavior appear.
Abstract:
This work contains several applications of the mode-coupling theory (MCT) and is separated into three parts. In the first part we investigate the liquid-glass transition of hard spheres for dimensions d→∞ analytically and numerically up to d=800 in the framework of MCT. We find that the critical packing fraction ϕc(d) scales as d²2^(-d), which is larger than the Kauzmann packing fraction ϕK(d) found by a small-cage expansion by Parisi and Zamponi [J. Stat. Mech.: Theory Exp. 2006, P03017 (2006)]. The scaling of the critical packing fraction is different from the relation ϕc(d)∼d2^(-d) found earlier by Kirkpatrick and Wolynes [Phys. Rev. A 35, 3072 (1987)]. This is due to the fact that the k dependence of the critical collective and self nonergodicity parameters fc(k;d) and fcs(k;d) was assumed to be Gaussian in the previous theories. We show that in MCT this is not the case. Instead fc(k;d) and fcs(k;d), which become identical in the limit d→∞, converge to a non-Gaussian master function on the scale k∼d^(3/2). We find that the numerically determined value for the exponent parameter λ, and therefore also the critical exponents a and b, depends on the dimension d, even at the largest evaluated dimension d=800. In the second part we compare the results of a molecular-dynamics simulation of liquid Lennard-Jones argon far away from the glass transition [D. Levesque, L. Verlet, and J. Kurkijärvi, Phys. Rev. A 7, 1690 (1973)] with MCT. We show that the agreement between theory and computer simulation can be improved by taking binary collisions into account [L. Sjögren, Phys. Rev. A 22, 2866 (1980)]. We find that an empirical prefactor of the memory function of the original MCT equations leads to similar results. In the third part we derive the equations for a mode-coupling theory for the spherical components of the stress tensor. Unfortunately it turns out that they are too complex to be solved numerically.
Abstract:
In general, adaptive mesh refinement allows the efficiency of numerical simulations to be increased without significantly degrading the accuracy of the result. It has not yet been established, however, in which regions of the computational domain the spatial resolution can actually be coarsened without significantly affecting the accuracy of the result. This question is investigated here for a concrete example of dry atmospheric convection, namely the simulation of warm air bubbles. For this purpose, a novel numerical model is developed that is tailored to this specific application. The compressible Euler equations are solved with a discontinuous Galerkin method. Time integration is done with a semi-implicit method, and the dynamic adaptivity uses space-filling curves via the function library AMATOS. The numerical model is validated with a convergence study and five standard test cases. A method for comparing the accuracy of simulations with different refinement regions is introduced that does not require an exact solution to be available. Essentially, this is done by comparing properties of the solution that depend strongly on the spatial resolution used. In the case of a rising warm air bubble, the additional numerical error introduced by the adaptivity is smaller than 1% of the total numerical error as long as the adaptive simulation uses more than 50% of the elements of a uniform high-resolution simulation. At the same time, the adaptive simulation is almost twice as fast as the uniform one.
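The refine-where-needed idea behind such adaptive simulations can be sketched in one dimension with a simple jump-indicator criterion. This is a generic illustration only; the actual model uses a discontinuous Galerkin discretization with space-filling-curve adaptivity via AMATOS, none of which is reproduced here:

```python
def refine_1d(xs, f, indicator_tol):
    """Insert a midpoint between neighbouring nodes wherever the jump in f
    exceeds indicator_tol: a minimal gradient-indicator refinement sweep."""
    out = [xs[0]]
    for a, b in zip(xs, xs[1:]):
        if abs(f(b) - f(a)) > indicator_tol:
            out.append((a + b) / 2)   # refine this cell
        out.append(b)
    return out

# A "warm bubble"-like profile that varies rapidly near x = 0.5
bubble = lambda x: max(0.0, 1 - 40 * (x - 0.5) ** 2)

grid = [i / 10 for i in range(11)]    # uniform coarse start
for _ in range(3):
    grid = refine_1d(grid, bubble, 0.05)
print(len(grid))  # nodes cluster where the profile varies, stay coarse elsewhere
```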
Abstract:
Coarse graining is a popular technique used in physics to speed up the computer simulation of molecular fluids. An essential part of this technique is a method that solves the inverse problem of determining the interaction potential, or its parameters, from given structural data. Due to discrepancies between model and reality, the potential is not unique, so that the stability of such a method and its convergence to a meaningful solution are issues.

In this work, we investigate empirically whether coarse graining can be improved by applying the theory of inverse problems from applied mathematics. In particular, we use singular value analysis to reveal the weak interaction parameters, which have a negligible influence on the structure of the fluid and which cause the non-uniqueness of the solution. Further, we apply a regularizing Levenberg-Marquardt method, which is stable against the mentioned discrepancies. We then compare it to the existing physical methods, the Iterative Boltzmann Inversion and the Inverse Monte Carlo method, which are fast and well adapted to the problem but sometimes have convergence problems.

From an analysis of the Iterative Boltzmann Inversion, we derive a meaningful approximation of the structure and use it to construct a modification of the Levenberg-Marquardt method. We apply the latter to reconstruct the interaction parameters from experimental data for liquid argon and nitrogen. We show that the modified method is stable, convergent and fast. Further, the singular value analysis of the structure and its approximation allows one to determine the crucial interaction parameters, that is, to simplify the modeling of the interactions. Our results therefore build a rigorous bridge between the inverse problem from physics and the powerful solution tools from mathematics.
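A minimal sketch of the two ingredients named above, a damped Levenberg-Marquardt iteration and a singular value analysis of the Jacobian, applied to a toy exponential-fit inverse problem. The thesis' structural approximation and modified method are not reproduced; the damping value and the model are arbitrary assumptions:

```python
import numpy as np

def levenberg_marquardt(residual, jac, x0, lam=1e-2, n_iter=50):
    """Basic Levenberg-Marquardt iteration with a fixed damping parameter:
    solve the damped normal equations (J^T J + lam I) dx = -J^T r."""
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        r, J = residual(x), jac(x)
        dx = np.linalg.solve(J.T @ J + lam * np.eye(len(x)), -J.T @ r)
        x = x + dx
    return x

# Toy inverse problem: recover (a, b) in g(t) = a * exp(-b * t) from data.
t = np.linspace(0, 2, 20)
true = np.array([2.0, 1.5])
data = true[0] * np.exp(-true[1] * t)
residual = lambda x: x[0] * np.exp(-x[1] * t) - data
jac = lambda x: np.column_stack([np.exp(-x[1] * t),
                                 -x[0] * t * np.exp(-x[1] * t)])

x = levenberg_marquardt(residual, jac, [1.0, 1.0])
print(np.round(x, 3))

# The singular values of J separate well-determined parameter directions
# (large values) from weak, poorly determined ones (small values).
print(np.linalg.svd(jac(x), compute_uv=False))
```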
Abstract:
IEF protein binary separations were performed in a 12-μL drop suspended between two palladium electrodes, using pH gradients created by electrolysis of simple buffers at low voltages (1.5-5 V). The dynamics of pH gradient formation and protein separation were investigated by computer simulation and experimentally via digital video microscope imaging, in the presence and absence of a pH indicator solution. Albumin, ferritin, myoglobin, and cytochrome c were used as model proteins. A drop containing 2.4 μg of each protein was applied, electrophoresed, and allowed to evaporate until it split, producing two fractions that were recovered by rinsing the electrodes with a few microliters of buffer. Analysis by gel electrophoresis revealed that the anode and cathode fractions were depleted of high-pI and low-pI proteins, respectively, whereas proteins with intermediate pI values were recovered in both fractions. Comparable data were obtained with diluted bovine serum fortified with myoglobin and cytochrome c.