976 results for Monte-carlo Simulations
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of the hardware platform with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and we present new methodologies for each of them. Techniques based on interval extensions have made it possible to obtain accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% with respect to the simulation-based reference values. A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise terms in each group independently and then combines the results at the end. In this way, the number of noise sources in the system at any given time is controlled and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This Ph.D. Thesis also covers the development of methodologies for word-length optimization based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that approach the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a given confidence level in the simulations for the final results of the optimization process, we can use more relaxed levels, which in turn require considerably fewer samples, in the initial stages of the process, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search techniques can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this book introduces HOPLITE, an automated, flexible and modular framework for quantization that implements the previous techniques and is publicly available. The aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its execution. We also show, through an example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
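As a rough illustration of the incremental idea sketched in the abstract (relaxed confidence levels, and therefore fewer Monte-Carlo samples, during the early iterations of a greedy word-length search), the following Python fragment optimizes a toy quantized polynomial datapath. It is only a minimal sketch under invented assumptions: the datapath, error metric, sample schedule and all names (quantize, mc_error, greedy_wordlength_search) are made up for the example and do not correspond to the thesis methodology or to the HOPLITE API.

```python
# Minimal sketch (invented example, not the thesis flow or the HOPLITE API):
# greedy word-length reduction for a toy fixed-point datapath, evaluated by
# Monte-Carlo simulation whose sample count grows as the search converges.
import random

def quantize(value, frac_bits):
    """Round value onto a fixed-point grid with frac_bits fractional bits."""
    scale = 1 << frac_bits
    return round(value * scale) / scale

def mc_error(bits, samples):
    """Monte-Carlo estimate of the mean absolute error of y = a*x^2 + b*x + c
    when the signals x, a*x^2, b*x and y are quantized independently."""
    a, b, c = 0.37, -1.21, 0.55
    total = 0.0
    for _ in range(samples):
        x = random.uniform(-1.0, 1.0)
        exact = a * x * x + b * x + c
        xq = quantize(x, bits["x"])
        yq = quantize(quantize(a * xq * xq, bits["sq"])
                      + quantize(b * xq, bits["lin"]) + c, bits["y"])
        total += abs(exact - yq)
    return total / samples

def greedy_wordlength_search(max_error, start_bits=16, min_bits=2):
    bits = {s: start_bits for s in ("x", "sq", "lin", "y")}
    samples = 200                          # relaxed accuracy far from the optimum
    improved = True
    while improved:
        improved = False
        for sig in bits:
            if bits[sig] <= min_bits:
                continue
            bits[sig] -= 1                 # tentative one-bit reduction
            if mc_error(bits, samples) <= max_error:
                improved = True            # keep the cheaper configuration
            else:
                bits[sig] += 1             # revert: error budget exceeded
        samples = min(samples * 4, 20000)  # tighten confidence as we converge
    return bits, mc_error(bits, 20000)     # verify at the strict sample budget

if __name__ == "__main__":
    random.seed(1)
    bits, final_err = greedy_wordlength_search(max_error=1e-3)
    print("fractional bits:", bits, " verified mean |error|:", final_err)
```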
Abstract:
The present study explores a "hydrophobic" energy function for folding simulations of the protein lattice model. The contribution of each monomer to the conformational energy is the product of its "hydrophobicity" and the number of contacts it makes, i.e., $E(\vec{h}, \vec{c}) = -\sum_{i=1}^{N} c_i h_i = -\vec{h} \cdot \vec{c}$ is the negative scalar product between two vectors in N-dimensional Cartesian space: $\vec{h} = (h_1, \ldots, h_N)$, which represents monomer hydrophobicities and is sequence-dependent, and $\vec{c} = (c_1, \ldots, c_N)$, which represents the number of contacts made by each monomer and is conformation-dependent. A simple theoretical analysis shows that restrictions are imposed concomitantly on both sequences and native structures if the stability criterion for protein-like behavior is to be satisfied. Given a conformation with vector $\vec{c}$, the best sequence is a vector $\vec{h}$ along the direction onto which the projection of $\vec{c} - \bar{\vec{c}}$ is maximal, where $\bar{\vec{c}}$ is the diagonal vector with components equal to $\bar{c}$, the average number of contacts per monomer in the unfolded state. The best native conformations are suggested to be not the maximally compact ones, as assumed in many studies, but those with the largest variance of contacts among their monomers, i.e., with monomers tending to occupy either completely buried or completely exposed positions. This inside/outside segregation is reflected in an apolar/polar distribution in the corresponding sequence. Monte Carlo simulations in two dimensions corroborate this general scheme. Sequences targeted to conformations with large contact variance folded cooperatively with the thermodynamics of a two-state transition. Sequences targeted to maximally compact conformations, which have lower contact variance, were found either to have a degenerate ground state or to fold with much lower cooperativity.
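As a concrete, purely illustrative reading of the formula above, the short Python fragment below evaluates E = -(h . c) for a toy contact vector and builds a "best" sequence pointing along c - c_bar, as the projection argument in the abstract suggests. The contact numbers, the assumed value c_bar = 2, and all variable names are invented for the example and are not taken from the paper.

```python
# Minimal numeric illustration (values and names invented for this example, not
# taken from the paper): the conformational energy E = -(h . c) and the "best
# sequence" direction h parallel to (c - c_bar) suggested by the projection
# argument in the abstract.
import numpy as np

contacts = np.array([4, 4, 3, 0, 0, 1, 4, 0, 2, 3, 0, 4, 1, 0, 3, 1], float)  # c, toy conformation
N = contacts.size

def energy(h, c):
    """E(h, c) = -sum_i c_i h_i = -(h . c)."""
    return -np.dot(h, c)

c_bar = np.full(N, 2.0)              # average contacts per monomer in the
                                     # unfolded state (assumed value)
best_h = contacts - c_bar            # best sequence points along c - c_bar ...
best_h /= np.linalg.norm(best_h)     # ... normalized so sequences are comparable
                                     # (negative components would read as polar monomers)

rng = np.random.default_rng(0)
trial_h = rng.random(N)
trial_h /= np.linalg.norm(trial_h)   # an arbitrary sequence of the same norm

print("trial sequence energy :", energy(trial_h, contacts))
print("best  sequence energy :", energy(best_h, contacts))
print("contact variance      :", contacts.var())
```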
Abstract:
Protein aggregation is studied by following the simultaneous folding of two designed identical 20-letter amino acid chains within the framework of a lattice model and using Monte Carlo simulations. It is found that protein aggregation is determined by elementary structures (partially folded intermediates) controlled by local contacts among some of the most strongly interacting amino acids and formed at an early stage in the folding process.
Abstract:
A physical theory of protein secondary structure is proposed and tested by performing exceedingly simple Monte Carlo simulations. In essence, secondary structure propensities are predominantly a consequence of two competing local effects, one favoring hydrogen bond formation in helices and turns, the other opposing the attendant reduction in sidechain conformational entropy on helix and turn formation. These sequence specific biases are densely dispersed throughout the unfolded polypeptide chain, where they serve to preorganize the folding process and largely, but imperfectly, anticipate the native secondary structure.
Abstract:
How colloidal particles interact with each other is one of the key issues that determines our ability to interpret experimental results for phase transitions in colloidal dispersions and our ability to apply colloid science to various industrial processes. The long-accepted theories for answering this question have been challenged by results from recent experiments. Herein we show from Monte-Carlo simulations that there is a short-range attractive force between identical macroions in electrolyte solutions containing divalent counterions. Complementing some recent and related results by others, we present strong evidence of attraction between a pair of spherical macroions in the presence of added salt ions for the conditions where the interacting macroion pair is not affected by any other macroions that may be in the solution. This attractive force follows from the internal-energy contribution of counterion mediation. Contrary to conventional expectations, for charged macroions in an electrolyte solution, the entropic force is repulsive at most solution conditions because of localization of small ions in the vicinity of macroions. Both Derjaguin–Landau–Verwey–Overbeek theory and Sogami–Ise theory fail to describe the attractive interactions found in our simulations; the former predicts only repulsive interaction and the latter predicts a long-range attraction that is too weak and occurs at macroion separations that are too large. Our simulations provide fundamental “data” toward an improved theory for the potential of mean force as required for optimum design of new materials including those containing nanoparticles.
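To make the kind of simulation mentioned above more concrete, here is a deliberately minimal Metropolis Monte-Carlo sketch of a primitive-model system: two fixed, like-charged macroions with explicit divalent counterions treated as charged hard spheres in a dielectric continuum, confined to a spherical cell. Every parameter (charges, radii, cell size, separation) and every function name is an assumption chosen for illustration; the sketch uses no Ewald summation, computes no potential of mean force, and is not the authors' simulation protocol.

```python
# Very small sketch of a primitive-model Monte Carlo (assumed setup, not the
# authors'): two fixed, negatively charged macroions plus neutralizing divalent
# counterions, sampled with the Metropolis algorithm inside a hard spherical cell.
import math, random

LB = 0.71                       # Bjerrum length in water at room temperature, nm
Z_MACRO, Z_ION = -10, +2        # macroion and counterion charges (in e), assumed
R_MACRO, R_ION = 1.0, 0.2       # hard-sphere radii, nm (assumed)
CELL_R = 6.0                    # radius of the confining spherical cell, nm
SEP = 3.0                       # fixed macroion centre-to-centre separation, nm

macroions = [(-SEP / 2, 0.0, 0.0), (SEP / 2, 0.0, 0.0)]
n_ions = 2 * abs(Z_MACRO) // Z_ION          # overall electroneutrality

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ion_energy(i, ions):
    """Coulomb energy (in kT) of counterion i with both macroions and all other ions."""
    e = 0.0
    for m in macroions:
        d = dist(ions[i], m)
        if d < R_MACRO + R_ION:
            return float("inf")             # hard-sphere overlap
        e += LB * Z_ION * Z_MACRO / d
    for j, p in enumerate(ions):
        if j == i:
            continue
        d = dist(ions[i], p)
        if d < 2 * R_ION:
            return float("inf")
        e += LB * Z_ION * Z_ION / d
    return e

def total_coulomb_energy(ions):
    """Counterion-mediated electrostatic energy (kT), each pair counted once."""
    e = 0.0
    for i in range(len(ions)):
        for m in macroions:
            e += LB * Z_ION * Z_MACRO / dist(ions[i], m)
        for j in range(i + 1, len(ions)):
            e += LB * Z_ION * Z_ION / dist(ions[i], ions[j])
    return e

random.seed(2)
ions = []
while len(ions) < n_ions:                   # random non-overlapping start
    p = tuple(random.uniform(-CELL_R, CELL_R) for _ in range(3))
    if (dist(p, (0.0, 0.0, 0.0)) < CELL_R - R_ION
            and all(dist(p, m) > R_MACRO + R_ION for m in macroions)
            and all(dist(p, q) > 2 * R_ION for q in ions)):
        ions.append(p)

steps, e_sum, n_samp = 100000, 0.0, 0
for step in range(steps):
    i = random.randrange(n_ions)
    old, e_old = ions[i], ion_energy(i, ions)
    new = tuple(c + random.uniform(-0.3, 0.3) for c in old)
    if dist(new, (0.0, 0.0, 0.0)) > CELL_R - R_ION:
        continue                            # reject moves leaving the cell
    ions[i] = new
    e_new = ion_energy(i, ions)
    if not (e_new <= e_old or random.random() < math.exp(e_old - e_new)):
        ions[i] = old                       # Metropolis rejection
    if step > 20000 and step % 100 == 0:    # crude equilibration and thinning
        e_sum += total_coulomb_energy(ions)
        n_samp += 1

print(f"mean counterion-mediated energy at {SEP} nm separation: {e_sum / n_samp:.1f} kT")
```

A study such as the one summarized above would repeat this sampling over a range of macroion separations and analyze energy and entropy contributions; the sketch only shows the elementary sampling step.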
Abstract:
Spatial structure of genetic variation within populations, an important interacting influence on evolutionary and ecological processes, can be analyzed in detail by using spatial autocorrelation statistics. This paper characterizes the statistical properties of spatial autocorrelation statistics in this context and develops estimators of gene dispersal based on data on standing patterns of genetic variation. Large numbers of Monte Carlo simulations and a wide variety of sampling strategies are utilized. The results show that spatial autocorrelation statistics are highly predictable and informative. Thus, strong hypothesis tests for neutral theory can be formulated. Most strikingly, robust estimators of gene dispersal can be obtained with practical sample sizes. Details about optimal sampling strategies are also described.
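As an illustration of the kind of statistic such analyses rest on, the following Python sketch computes Moran's I for simulated genotype scores on a grid, binned into distance classes (a spatial correlogram). Moran's I is assumed here as a representative spatial autocorrelation statistic; the grid, the simulated genotypes and the function names are invented for the example, and the paper's dispersal estimators are not reproduced.

```python
# Sketch of a spatial autocorrelation correlogram: Moran's I of individual
# genotype scores binned by distance class. Statistic choice, grid layout and
# values are assumptions for illustration, not the paper's data or estimators.
import numpy as np

rng = np.random.default_rng(3)

# Individuals on a 20x20 grid; genotype score = number of copies of one allele (0, 1, 2).
coords = np.array([(x, y) for x in range(20) for y in range(20)], float)
genotype = rng.binomial(2, 0.5, size=len(coords)).astype(float)

def morans_i(values, coords, d_low, d_high):
    """Moran's I with binary weights w_ij = 1 if d_low < dist(i, j) <= d_high."""
    z = values - values.mean()
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = ((d > d_low) & (d <= d_high)).astype(float)
    np.fill_diagonal(w, 0.0)
    n, s0 = len(values), w.sum()
    return (n / s0) * (z @ w @ z) / (z @ z)

# Correlogram over successive distance classes; for this spatially random
# genotype surface, values should scatter around E[I] = -1/(n-1).
for d in range(1, 6):
    print(f"distance class ({d-1}, {d}]: I = {morans_i(genotype, coords, d - 1, d):.4f}")
```

Under isolation by distance, genotypes simulated with limited gene dispersal would instead give positive I at short distances decaying with distance, which is the signal the paper's estimators exploit.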
Abstract:
The folding mechanism of a 125-bead heteropolymer model for proteins is investigated with Monte Carlo simulations on a cubic lattice. Sequences that do and do not fold in a reasonable time are compared. The overall folding behavior is found to be more complex than that of models for smaller proteins. Folding begins with a rapid collapse followed by a slow search through the semi-compact globule for a sequence-dependent stable core with about 30 out of 176 native contacts which serves as the transition state for folding to a near-native structure. Efficient search for the core is dependent on structural features of the native state. Sequences that fold have large amounts of stable, cooperative structure that is accessible through short-range initiation sites, such as those in anti-parallel sheets connected by turns. Before folding is completed, the system can encounter a second bottleneck, involving the condensation and rearrangement of surface residues. Overly stable local structure of the surface residues slows this stage of the folding process. The relation of the results from the 125-mer model studies to the folding of real proteins is discussed.
Abstract:
We recorded miniature endplate currents (mEPCs) using simultaneous voltage clamp and extracellular methods, allowing correction for time course measurement errors. We obtained a 20-80% rise time (tr) of approximately 80 µs at 22 °C, shorter than any previously reported values, and tr variability (SD) with an upper limit of 25-30 µs. Extracellular electrode pressure can increase tr and its variability by 2- to 3-fold. Using Monte Carlo simulations, we modeled passive acetylcholine diffusion through a vesicle fusion pore expanding radially at 25 nm/ms (rapid, from endplate omega-figure appearance) or 0.275 nm/ms (slow, from mast cell exocytosis). Simulated mEPCs obtained with rapid expansion reproduced tr and the overall shape of our experimental mEPCs, and were similar to simulated mEPCs obtained with instant acetylcholine release. We conclude that passive transmitter diffusion, coupled with rapid expansion of the fusion pore, is sufficient to explain the time course of experimentally measured synaptic currents with tr values of less than 100 µs.
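To give a flavour of the simulation described above, the sketch below runs a crude random-walk Monte Carlo of transmitter molecules escaping a vesicle through a fusion pore whose radius grows linearly in time. The geometry, diffusion coefficient, molecule count, pore parameters and time step are assumptions for illustration (receptor binding and the synaptic cleft are ignored); it is not the authors' model.

```python
# Crude Brownian Monte Carlo of transmitter escape through an expanding fusion
# pore (all parameters assumed for illustration, not the authors' values).
import numpy as np

rng = np.random.default_rng(4)

D = 400.0            # ACh diffusion coefficient, nm^2/us (assumed ~4e-6 cm^2/s)
R_VES = 22.0         # vesicle radius, nm (assumed)
PORE_R0 = 1.0        # initial pore radius, nm (assumed)
PORE_RATE = 0.025    # "rapid" expansion: 25 nm/ms = 0.025 nm/us
DT = 0.005           # time step, us (coarse; this is only a sketch)
T_END = 100.0        # simulate 100 us, the rise-time scale of the mEPC
N_MOL = 5000

# Uniform initial positions inside the vesicle.
pos = rng.normal(size=(N_MOL, 3))
pos *= (R_VES * rng.random(N_MOL) ** (1 / 3) / np.linalg.norm(pos, axis=1))[:, None]
inside = np.ones(N_MOL, dtype=bool)
sigma = np.sqrt(2.0 * D * DT)            # rms displacement per axis per step

released = []
for step in range(int(T_END / DT)):
    pore_r = PORE_R0 + PORE_RATE * step * DT      # pore radius grows linearly
    idx = np.where(inside)[0]
    trial = pos[idx] + rng.normal(scale=sigma, size=(len(idx), 3))
    out = np.linalg.norm(trial, axis=1) > R_VES
    # Escape only if the membrane is crossed within the pore cap around the -z axis.
    in_pore = (trial[:, 2] < 0) & (np.hypot(trial[:, 0], trial[:, 1]) < pore_r)
    escaped = out & in_pore
    blocked = out & ~in_pore                       # hit the membrane elsewhere: reject step
    trial[blocked] = pos[idx][blocked]
    pos[idx] = trial
    inside[idx[escaped]] = False
    released.append(1.0 - inside.mean())

for t_us in (10, 20, 40, 80, 100):
    print(f"released fraction at {t_us:3d} us: {released[int(t_us / DT) - 1]:.2f}")
```

The released-fraction curve is the quantity that, convolved with receptor kinetics in the full model, would shape the simulated mEPC rise time.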
Abstract:
The concentration of protein in a solution has been found to have a significant effect on ion binding affinity. It is well known that an increase in ionic strength of the solvent medium by addition of salt modulates the ion-binding affinity of a charged protein due to electrostatic screening. In recent Monte Carlo simulations, a similar screening has been detected to arise from an increase in the concentration of the protein itself. Experimental results are presented here that verify the theoretical predictions; high concentrations of the negatively charged proteins calbindin D9k and calmodulin are found to reduce their affinity for divalent cations. The Ca²⁺-binding constant of the C-terminal site in the Asn-56 → Ala mutant of calbindin D9k has been measured at seven different protein concentrations ranging from 27 µM to 7.35 mM by using ¹H NMR. A 94% reduction in affinity is observed when going from the lowest to the highest protein concentration. For calmodulin, we have measured the average Mg²⁺-binding constant of sites I and II at 0.325, 1.08, and 3.25 mM protein and find a 13-fold difference between the two extremes. Monte Carlo calculations have been performed for the two cases described above to provide a direct comparison of the experimental and simulated effects of protein concentration on metal ion affinities. The overall agreement between theory and experiment is good. The results have important implications for all biological systems involving interactions between charged species.
Abstract:
We study a polydisperse soft-spheres model for colloids by means of microcanonical Monte Carlo simulations. We consider a polydispersity as high as 24%. Although solidification occurs, neither a crystal nor an amorphous state is thermodynamically stable. A finite-size scaling analysis reveals that in the thermodynamic limit: (a) the fluid-solid transition is in fact a crystal-amorphous phase separation, (b) this phase separation is preceded by the dynamic glass transition, and (c) small and big particles arrange themselves in the two phases according to a complex pattern not predicted by any fractionation scenario.
Abstract:
This paper describes JANUS, a modular, massively parallel and reconfigurable FPGA-based computing system. Each JANUS module has a computational core and a host. The computational core is a 4x4 array of FPGA-based processing elements with nearest-neighbor data links. Processors are also directly connected to an I/O node attached to the JANUS host, a conventional PC. JANUS is tailored for, but not limited to, the requirements of a class of hard scientific applications characterized by regular code structure, unconventional data manipulation instructions and a not too large database size. We discuss the architecture of this configurable machine and focus on its use for Monte Carlo simulations of statistical mechanics. On this class of applications JANUS achieves impressive performance: in some cases one JANUS processing element outperforms high-end PCs by a factor of ≈1000. We also discuss the role of JANUS in other classes of scientific applications.
Abstract:
The cold climate anomaly about 8200 years ago is investigated with CLIMBER-2, a coupled atmosphere-ocean-biosphere model of intermediate complexity. This climate model simulates a cooling of about 3.6 K over the North Atlantic induced by a meltwater pulse from Lake Agassiz routed through the Hudson Strait. The meltwater pulse is assumed to have a volume of 1.6 x 10^14 m^3 and a discharge period of 2 years on the basis of glaciological modeling of the decay of the Laurentide Ice Sheet (LIS). We present a possible mechanism which can explain the centennial duration of the 8.2 ka cold event. The mechanism is related to the existence of an additional equilibrium climate state with reduced North Atlantic Deep Water (NADW) formation and a southward shift of the NADW formation area. Hints at the additional climate state were obtained from the widely varying duration of the pulse-induced cold episode in response to overlaid random freshwater fluctuations in Monte Carlo simulations. The model equilibrium state was attained by releasing a weak multi-century freshwater flux through the St. Lawrence pathway, completed by the meltwater pulse. The existence of such a climate mode appears essential for reproducing climate anomalies in close agreement with paleoclimatic reconstructions of the 8.2 ka event. The results furthermore suggest that the temporal evolution of the cold event was partly a matter of chance.
Abstract:
We investigate the critical properties of the four-state commutative random permutation glassy Potts model in three and four dimensions by means of Monte Carlo simulations and a finite-size scaling analysis. By using a field programmable gate array, we have been able to thermalize a large number of samples of systems with large volume. This has allowed us to observe a spin-glass ordered phase in d=4 and to study the critical properties of the transition. In d=3, our results are consistent with the presence of a Kosterlitz-Thouless transition, but also with different scenarios: transient effects due to a value of the lower critical dimension slightly below 3 could be very important.
Abstract:
The energy spectrum of ultra-high energy cosmic rays above 10^18 eV is measured using the hybrid events collected by the Pierre Auger Observatory between November 2005 and September 2010. The large exposure of the Observatory allows the measurement of the main features of the energy spectrum with high statistics. Full Monte Carlo simulations of the extensive air showers (based on the CORSIKA code) and of the hybrid detector response are adopted here as an independent cross check of the standard analysis (Phys. Lett. B 685, 239 (2010)). The dependence on mass composition and other systematic uncertainties are discussed in detail and, in the full Monte Carlo approach, a region of confidence for flux measurements is defined when all the uncertainties are taken into account. An update is also reported of the energy spectrum obtained by combining the hybrid spectrum and that measured using the surface detector array.
Abstract:
We describe the hardwired implementation of algorithms for Monte Carlo simulations of a large class of spin models. We have implemented these algorithms as VHDL codes and mapped them onto a dedicated processor based on a large FPGA device. The measured performance of one such processor is comparable to that of O(100) carefully programmed high-end PCs, and turns out to be even better for some selected spin models. We describe here the codes that we are currently executing on the IANUS massively parallel FPGA-based system.
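The abstract does not say which spin models were hardwired; as a plain software reference for the kind of update such FPGA engines accelerate, here is a minimal Metropolis sweep for the two-dimensional Ising model, a canonical member of that class. This is only an illustrative Python sketch of the algorithm, not the VHDL implementation.

```python
# Software reference for the kind of algorithm such FPGA engines hardwire:
# single-spin-flip Metropolis updates for the 2D Ising model (chosen as a
# canonical spin model; the abstract does not name the implemented models).
import math, random

L = 32                      # lattice side
BETA = 0.44                 # inverse temperature (near the 2D Ising critical point)
random.seed(5)
spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def metropolis_sweep(spins, beta):
    """One sweep of single-spin-flip Metropolis updates with periodic boundaries."""
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        d_e = 2 * spins[i][j] * nn          # energy change if spin (i, j) is flipped
        if d_e <= 0 or random.random() < math.exp(-beta * d_e):
            spins[i][j] = -spins[i][j]

for sweep in range(200):
    metropolis_sweep(spins, BETA)
mag = abs(sum(sum(row) for row in spins)) / (L * L)
print(f"|magnetization| per spin after 200 sweeps at beta={BETA}: {mag:.3f}")
```

Hardware engines of the kind described above exploit the locality of this update (each spin depends only on its nearest neighbours) to run very many such updates in parallel on each FPGA.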