943 results for Statistical physics
Abstract:
This thesis includes analysis of disordered spin ensembles corresponding to Exact Cover, a multi-access channel problem, and composite models combining sparse and dense interactions. The satisfiability problem in Exact Cover is addressed using a statistical analysis of a simple branch and bound algorithm. The algorithm can be formulated in the large system limit as a branching process, for which critical properties can be analysed. Far from the critical point a set of differential equations may be used to model the process, and these are solved by numerical integration and exact bounding methods. The multi-access channel problem is formulated as an equilibrium statistical physics problem for the case of bit transmission on a channel with power control and synchronisation. A sparse code division multiple access method is considered; its optimal detection properties are examined in the typical case by use of the replica method and compared to the detection performance achieved by iterative decoding methods. These codes are found to exhibit phenomena closely resembling those of the well-understood dense codes. The composite model is introduced as an abstraction of canonical sparse and dense disordered spin models. The model includes couplings due to both dense and sparse topologies simultaneously. This new type of code is shown to outperform sparse and dense codes in some regimes, both in optimal performance and in the performance achieved by iterative detection methods in finite systems.
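For orientation (textbook relations, not the thesis's specific formulation), the criticality of a branching process is conventionally characterised through the offspring distribution $p_k$, the probability that a node of the search tree spawns $k$ children:

\[
\mu = \sum_{k\ge 0} k\,p_k, \qquad
\mu < 1 \ \text{(subcritical)}, \quad \mu = 1 \ \text{(critical)}, \quad \mu > 1 \ \text{(supercritical)},
\qquad q = f(q), \quad f(s) = \sum_{k\ge 0} p_k s^k,
\]

where the extinction probability $q$ is the smallest root of $q = f(q)$ in $[0,1]$; near $\mu = 1$ the process develops the critical properties referred to above.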
Abstract:
The problem of learning by examples in ultrametric committee machines (UCMs) is studied within the framework of statistical mechanics. Using the replica formalism we calculate the average generalization error in UCMs with L hidden layers and for a large enough number of units. In most of the regimes studied we find that the generalization error, as a function of the number of examples presented, develops a discontinuous drop at a critical value of the load parameter. We also find that when L>1 a number of teacher networks with the same number of hidden layers and different overlaps induce learning processes with the same critical points.
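For background, in such replica calculations the load parameter and the generalization error are usually defined as follows (standard notation, assumed here rather than taken from the paper):

\[
\alpha = \frac{P}{N}, \qquad
\epsilon_g(\alpha) = \Big\langle \Theta\!\big(-\sigma_{\mathrm{teacher}}(\boldsymbol{\xi})\,\sigma_{\mathrm{student}}(\boldsymbol{\xi})\big) \Big\rangle_{\boldsymbol{\xi}},
\]

i.e. the probability that student and teacher disagree on a random input $\boldsymbol{\xi}$, with $P$ the number of examples and $N$ the input dimension; the discontinuous drop reported above occurs at a critical value $\alpha_c$.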
Abstract:
Measurement of lung ventilation is one of the most reliable techniques in diagnosing pulmonary diseases. The time-consuming and bias-prone traditional methods using hyperpolarized 3He and 1H magnetic resonance imaging have recently been improved by an automated technique based on 'multiple active contour evolution'. This method involves a simultaneous evolution of multiple initial conditions, called 'snakes', eventually leading to their 'merging', and is entirely independent of the shapes and sizes of the snakes or other parametric details. The objective of this paper is to show, through a theoretical analysis, that the functional dynamics of merging as depicted in the active contour method has a direct analogue in statistical physics, and that this explains its 'universality'. We show that the multiple active contour method has a universal scaling behaviour akin to that of classical nucleation in two spatial dimensions. We prove our point by comparing the numerically evaluated exponents with an equivalent thermodynamic model.
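To make the analogy concrete, classical nucleation in two spatial dimensions (quoted here in its textbook form, only to indicate the scaling the comparison refers to) assigns a circular nucleus of radius $r$ the free energy

\[
\Delta F(r) = 2\pi r\,\sigma - \pi r^{2}\,\Delta f, \qquad
r^{*} = \frac{\sigma}{\Delta f}, \qquad
\Delta F^{*} = \frac{\pi\sigma^{2}}{\Delta f},
\]

with $\sigma$ the line tension and $\Delta f$ the bulk free-energy gain per unit area; growth beyond the critical radius $r^{*}$ is the nucleation event whose scaling the merging dynamics is compared against.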
Abstract:
We determine the critical noise level for decoding low-density parity check error-correcting codes based on the magnetization enumerator (M), rather than on the weight enumerator (W) employed in the information theory literature. The interpretation of our method is appealingly simple, and the relation between the different decoding schemes such as typical pairs decoding, MAP, and finite temperature decoding (MPM) becomes clear. In addition, our analysis provides an explanation for the difference in performance between MN and Gallager codes. Our results are more optimistic than those derived using the methods of information theory and are in excellent agreement with recent results from another statistical physics approach.
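Schematically (the precise averaging conventions and channel-dependent constraints are those of the paper, not reproduced here), the two enumerators can be written as exponents of the number of admissible codewords at a given normalised weight $w$ or magnetisation (overlap with the transmitted codeword) $m$:

\[
\mathcal{W}(w) = \lim_{N\to\infty}\frac{1}{N}\log \mathcal{N}(w), \qquad
\mathcal{M}(m) = \lim_{N\to\infty}\frac{1}{N}\log \mathcal{N}(m),
\]

so that $\mathcal{M}$ tracks the quantity directly relevant to the decoding schemes listed above.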
Abstract:
The increase in renewable energy generators introduced into the electricity grid is putting pressure on its stability and management, as renewable energy sources cannot be accurately predicted or fully controlled. This, together with the additional pressure of fluctuations in demand, presents a problem more complex than the one the current methods of controlling electricity distribution were designed for. A global, approximate and distributed optimisation method for power allocation that accommodates uncertainties and volatility is suggested and analysed. It is based on a probabilistic method known as message passing [1], which has deep links to statistical physics methodology. This principled method of optimisation is based on local calculations and inherently accommodates uncertainties; it is of modest computational complexity and provides good approximate solutions. We consider uncertainty and fluctuations drawn from a Gaussian distribution and incorporate them into the message-passing algorithm. We see the effect that increasing uncertainty has on the transmission cost and how the placement of volatile nodes within a grid, such as renewable generators or consumers, affects it.
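For reference, the generic min-sum message-passing updates on a factor graph read as follows (the power-allocation algorithm specialises this form to continuous variables with Gaussian-parameterised messages; the notation here is generic, not taken from [1]):

\[
m_{a\to i}(x_i) = \min_{\{x_j\}_{j\in\partial a\setminus i}} \Big[ E_a(x_{\partial a}) + \sum_{j\in\partial a\setminus i} m_{j\to a}(x_j) \Big],
\qquad
m_{i\to a}(x_i) = \sum_{b\in\partial i\setminus a} m_{b\to i}(x_i),
\]

where $E_a$ is the local cost attached to factor $a$ (e.g. a node or line constraint in the grid setting) and $\partial a$, $\partial i$ denote graph neighbourhoods; every update involves only local information, which is what makes the method distributed.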
Abstract:
We investigate a class of simple models for Langevin dynamics of turbulent flows, including the one-layer quasi-geostrophic equation and the two-dimensional Euler equations. Starting from a path integral representation of the transition probability, we compute the most probable fluctuation paths from one attractor to any state within its basin of attraction. We prove that such fluctuation paths are the time reversed trajectories of the relaxation paths for a corresponding dual dynamics, which are also within the framework of quasi-geostrophic Langevin dynamics. Cases with or without detailed balance are studied. We discuss a specific example for which the stationary measure displays either a second order (continuous) or a first order (discontinuous) phase transition and a tricritical point. In situations where a first order phase transition is observed, the dynamics are bistable. Then, the transition paths between two coexisting attractors are instantons (fluctuation paths from an attractor to a saddle), which are related to the relaxation paths of the corresponding dual dynamics. For this example, we show how one can analytically determine the instantons and compute the transition probabilities for rare transitions between two attractors.
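As a finite-dimensional caricature of the path-integral representation (the paper itself works with quasi-geostrophic and two-dimensional Euler fields), for Langevin dynamics $\dot{x} = F(x) + \sqrt{2\gamma}\,\eta(t)$ with unit white noise the probability of a path concentrates in the weak-noise limit as

\[
P[x(\cdot)] \asymp \exp\!\left(-\frac{1}{4\gamma}\int_0^T \big|\dot{x} - F(x)\big|^{2}\,\mathrm{d}t\right),
\]

so the most probable fluctuation path out of an attractor minimises this action, and the instantons discussed above are the minimisers that terminate at a saddle.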
Abstract:
It was recently shown [Phys. Rev. Lett. 110, 227201 (2013)] that the critical behavior of the random-field Ising model in three dimensions is ruled by a single universality class. This conclusion was reached only after a proper taming of the large scaling corrections of the model by applying a combined approach of various techniques, coming from the zero- and positive-temperature toolboxes of statistical physics. In the present contribution we provide a detailed description of this combined scheme, explaining in detail the zero-temperature numerical scheme and developing the generalized fluctuation-dissipation formula that allowed us to compute connected and disconnected correlation functions of the model. We discuss the error evolution of our method and we illustrate the infinite-size extrapolation of several observables within phenomenological renormalization. We present an extension of the quotients method that allows us to obtain estimates of the critical exponent α of the specific heat of the model via the scaling of the bond energy, and we discuss the self-averaging properties of the system and the algorithmic aspects of the maximum-flow algorithm used.
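For convenience, the two correlation functions referred to here are, in standard conventions (which may differ in detail from the paper's),

\[
C_{\mathrm{con}}(r) = \overline{\langle s_0 s_r\rangle - \langle s_0\rangle\langle s_r\rangle}, \qquad
C_{\mathrm{dis}}(r) = \overline{\langle s_0\rangle\langle s_r\rangle} - \overline{\langle s_0\rangle}\,\overline{\langle s_r\rangle},
\]

where $\langle\cdot\rangle$ denotes a thermal average and the overline an average over random-field realisations; at zero temperature the thermal average collapses onto the ground state, which is why a generalised fluctuation-dissipation formula is needed to access the connected part.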
Abstract:
Review paper, to appear in the Springer Lecture Notes in Physics volume "Thermal transport in low dimensions: from statistical physics to nanoscale heat transfer" (S. Lepri ed.)
Abstract:
As a device, the laser is an elegant conglomerate of elementary physical theories and state-of-the-art techniques, ranging from quantum mechanics, thermal and statistical physics and material growth to non-linear mathematics. The laser has been a commercial success in medicine and telecommunications, driving the development of highly optimised devices specifically designed for a plethora of uses. Due to their low cost and large-scale predictability, many aspects of modern life would not function without lasers. However, the laser is also a window into a system that is strongly emulated by non-linear mathematical systems; it is an exceptional apparatus for the development of non-linear dynamics and is often used in the teaching of non-trivial mathematics. While single-mode semiconductor lasers have been well studied, a unified comparison of single- and two-mode lasers is still needed to extend the knowledge of semiconductor lasers, as well as to test the limits of current models. Secondly, this work aims to utilise the optically injected semiconductor laser as a tool to study non-linear phenomena in other fields of study, namely 'rogue waves', which have previously been witnessed in oceanography and are suspected of having non-linear origins. The first half of this thesis includes a reliable and fast technique to categorise the dynamical state of optically injected two-mode and single-mode lasers. Analysis of the experimentally obtained time traces revealed regions of various dynamics and allowed the automatic identification of their respective stability. The method is also extended to the detection of regions containing bistabilities. The second half of the thesis presents an investigation into the origins of rogue waves in single-mode lasers. After confirming their existence in single-mode lasers, their distribution in time and sudden appearance in the time series are studied to justify their name. An examination is also performed of the paths that make rogue waves possible, and the impact of noise on their distribution is also studied.
Abstract:
This dissertation covers two separate topics in statistical physics. The first part of the dissertation focuses on computational methods of obtaining the free energies (or partition functions) of crystalline solids. We describe a method to compute the Helmholtz free energy of a crystalline solid by direct evaluation of the partition function. In the many-dimensional conformation space of all possible arrangements of N particles inside a periodic box, the energy landscape consists of localized islands corresponding to different solid phases. Calculating the partition function for a specific phase involves integrating over the corresponding island. Introducing a natural order parameter that quantifies the net displacement of particles from lattice sites, we write the partition function in terms of a one-dimensional integral along the order parameter, and evaluate this integral using umbrella sampling. We validate the method by computing free energies of both face-centered cubic (FCC) and hexagonal close-packed (HCP) hard sphere crystals with a precision of $10^{-5}k_BT$ per particle. In developing the numerical method, we find several scaling properties of crystalline solids in the thermodynamic limit. Using these scaling properties, we derive an explicit asymptotic formula for the free energy per particle in the thermodynamic limit. In addition, we describe several changes of coordinates that can be used to separate internal degrees of freedom from external, translational degrees of freedom. The second part of the dissertation focuses on engineering idealized physical devices that work as Maxwell's demon. We describe two autonomous mechanical devices that extract energy from a single heat bath and convert it into work, while writing information onto memory registers. Additionally, both devices can operate as Landauer's eraser, namely they can erase information from a memory register, while energy is dissipated into the heat bath. The phase diagrams and the efficiencies of the two models are solved and analyzed. These two models provide concrete physical illustrations of the thermodynamic consequences of information processing.
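In schematic form (a condensed restatement of the construction described above, with notation chosen here for illustration), the free energy of a given phase is obtained from

\[
F = -k_B T \ln Z_{\mathrm{phase}}, \qquad
Z_{\mathrm{phase}} = \int_{\mathrm{island}} e^{-\beta U(\mathbf{q})}\,\mathrm{d}\mathbf{q}
= \int \mathrm{d}Q\; e^{-\beta \mathcal{F}(Q)},
\]

where $Q$ is the order parameter measuring the net displacement of particles from their lattice sites and $e^{-\beta\mathcal{F}(Q)}$, the restricted partition function at fixed $Q$, is the one-dimensional density estimated by umbrella sampling.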
Abstract:
Finding equilibration times is a major unsolved problem in physics with few analytical results. Here we look at equilibration times for quantum gases of bosons and fermions in the regime of negligibly weak interactions, a setting which not only includes paradigmatic systems such as gases confined to boxes, but also Luttinger liquids and the free superfluid Hubbard model. To do this, we focus on two classes of measurements: (i) coarse-grained observables, such as the number of particles in a region of space, and (ii) few-mode measurements, such as phase correlators. We show that, in this setting, equilibration occurs quite generally despite the fact that the particles are not interacting. Furthermore, for coarse-grained measurements the timescale is generally at most polynomial in the number of particles N, which is much faster than previous general upper bounds, which were exponential in N. For local measurements on lattice systems, the timescale is typically linear in the number of lattice sites. In fact, for one-dimensional lattices, the scaling is generally linear in the length of the lattice, which is optimal. Additionally, we look at a few specific examples, one of which consists of N fermions initially confined on one side of a partition in a box. The partition is removed and the fermions equilibrate extremely quickly, in a time O(1/N).
Abstract:
Water has been called the “most studied and least understood” of all liquids, and upon supercooling its behavior becomes even more anomalous. One particularly fruitful hypothesis posits a liquid-liquid critical point terminating a line of liquid-liquid phase transitions that lies just beyond the reach of experiment. Underlying this hypothesis is the conjecture that there is a competition between two distinct hydrogen-bonding structures of liquid water, one associated with high density and entropy and the other with low density and entropy. The competition between these structures is hypothesized to lead at very low temperatures to a phase transition between a phase rich in the high-density structure and one rich in the low-density structure. Equations of state based on this conjecture have given an excellent account of the thermodynamic properties of supercooled water. In this thesis, I extend that line of research. I treat supercooled aqueous solutions and anomalous behavior of the thermal conductivity of supercooled water. I also address supercooled water at negative pressures, leading to a framework for a coherent understanding of the thermodynamics of water at low temperatures. I supplement analysis of experimental results with data from the TIP4P/2005 model of water, and include an extensive analysis of the thermodynamics of this model.
Abstract:
Single-cell functional proteomics assays can connect genomic information to biological function through quantitative and multiplex protein measurements. Tools for single-cell proteomics have developed rapidly over the past 5 years and are providing unique opportunities. This thesis describes an emerging microfluidics-based toolkit for single cell functional proteomics, focusing on the development of the single cell barcode chips (SCBCs) with applications in fundamental and translational cancer research.
The microchip designed to simultaneously quantify a panel of secreted, cytoplasmic and membrane proteins from single cells is discussed first; it is the prototype for subsequent proteomic microchips with more sophisticated designs for preclinical cancer research or clinical applications. The SCBCs are a highly versatile and information-rich tool for single-cell functional proteomics. They are based upon isolating individual cells, or defined numbers of cells, within microchambers, each of which is equipped with a large antibody microarray (the barcode), with between a few hundred and ten thousand microchambers included within a single microchip. Functional proteomics assays at single-cell resolution yield unique pieces of information that significantly shape thinking in cancer research. An in-depth discussion of the analysis and interpretation of this unique information, such as functional protein fluctuations and protein-protein correlative interactions, follows.
The SCBC is a powerful tool to resolve the functional heterogeneity of cancer cells. It has the capacity to extract a comprehensive picture of the signal transduction network from single tumor cells and thus provides insight into the effect of targeted therapies on protein signaling networks. We will demonstrate this point through applying the SCBCs to investigate three isogenic cell lines of glioblastoma multiforme (GBM).
The cancer cell population is highly heterogeneous, with high-amplitude fluctuations at the single-cell level, which in turn confer robustness on the entire population. The picture of a stable population existing in the presence of random fluctuations is reminiscent of many physical systems that are successfully understood using statistical physics. Thus, tools derived from that field can plausibly be applied to use fluctuations to determine the nature of signaling networks. In the second part of the thesis, we focus on such a case, using thermodynamics-motivated principles to understand cancer cell hypoxia: single-cell proteomics assays coupled with a quantitative version of Le Chatelier's principle derived from statistical mechanics yield detailed and surprising predictions, which were found to be correct in both cell lines and a primary tumor model.
The third part of the thesis demonstrates the application of this technology in preclinical cancer research to study the resistance of GBM cancer cells to molecular targeted therapy. Physical approaches to anticipate therapy resistance and to identify effective therapy combinations will be discussed in detail. Our approach is based upon elucidating the signaling coordination within the phosphoprotein signaling pathways that are hyperactivated in human GBMs, and interrogating how that coordination responds to perturbation by a targeted inhibitor. Strongly coupled protein-protein interactions constitute most signaling cascades. A physical analogy of such a system is the strongly coupled atom-atom interactions in a crystal lattice. Just as the atomic interactions can be decomposed into a series of independent normal vibrational modes, a simplified picture of signaling network coordination can be achieved by diagonalizing protein-protein correlation or covariance matrices, decomposing the pairwise correlative interactions into a set of distinct linear combinations of signaling proteins (i.e. independent signaling modes). By doing so, two independent signaling modes, one associated with mTOR signaling and a second associated with ERK/Src signaling, have been resolved; these in turn allow us to anticipate resistance and to design combination therapies that are effective, as well as to identify those therapies and therapy combinations that will be ineffective. We validated our predictions in mouse tumor models and all predictions were borne out.
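As a minimal, purely illustrative sketch of the diagonalisation step described above (essentially a principal-component analysis in Python/NumPy; the function name and the synthetic data are ours, not part of the thesis pipeline):

```python
import numpy as np

def signaling_modes(data):
    """Diagonalise the protein-protein covariance of a cells-by-proteins matrix,
    returning orthogonal 'signaling modes' ordered by the variance they carry."""
    centered = data - data.mean(axis=0)       # remove per-protein means
    cov = np.cov(centered, rowvar=False)      # protein-protein covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)    # symmetric matrix -> real spectrum
    order = np.argsort(eigvals)[::-1]         # strongest modes first
    return eigvals[order], eigvecs[:, order]  # each column is one mode

# Toy usage with synthetic data (values are illustrative only).
rng = np.random.default_rng(0)
cells = rng.normal(size=(200, 6))             # 200 cells, 6 measured proteins
variances, modes = signaling_modes(cells)
print(variances)                              # variance carried by each mode
```

Each returned column plays the role of one 'independent signaling mode', e.g. an mTOR-like or ERK/Src-like combination of proteins in the analysis described above.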
In the last part, some preliminary results on the clinical translation of single-cell proteomics chips are presented. The successful demonstration of our work on human-derived xenografts provides the rationale to extend our current work into the clinic. It will enable us to interrogate GBM tumor samples in a way that could potentially yield a straightforward, rapid interpretation, so that we can give therapeutic guidance to the attending physicians within a clinically relevant time scale. The technical challenges of the clinical translation are presented and our solutions to address these challenges are discussed as well. A clinical case study then follows, in which some preliminary data collected from a pediatric GBM patient bearing an EGFR-amplified tumor are presented to demonstrate the general protocol and the workflow of the proposed clinical studies.
Abstract:
The study of complex systems has become a prominent area of science, although it is still relatively young. Its importance is demonstrated by the diversity of applications that such studies have already provided to fields as varied as biology, economics and climatology. In physics, the complex-systems approach is creating paradigms that markedly influence new methods, bringing to statistical physics macroscopic-level problems no longer restricted to classical studies such as those of thermodynamics. The present work aims to compare and statistically verify the clustering of the sonic (DT), gamma ray (GR), induction (ILD), neutron (NPHI) and density (RHOB) well-log profiles, physical quantities measured during exploratory drilling that are of fundamental importance for locating, identifying and characterizing oil reservoirs. The software Statistica, Matlab R2006a, Origin 6.1 and Fortran were used for the comparison and verification of the well-log data from the Namorado School Field, provided by the ANP (National Petroleum Agency). The work demonstrates the importance of the DFA (detrended fluctuation analysis) method, which proved quite satisfactory here, leading to the conclusion that the Hurst exponent H characterizes the spatial clustering of the data. We therefore find that it is possible to identify spatial patterns using the Hurst coefficient. The profiles of 56 wells confirmed the existence of spatial patterns in the Hurst exponents. The profiles do not directly assess the geological lithology, but they do reveal a non-random spatial distribution.
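Below is a minimal, illustrative detrended fluctuation analysis (DFA) sketch in Python/NumPy for estimating the scaling (Hurst-type) exponent of a one-dimensional series such as a well-log curve; it is a generic textbook implementation under our own naming, not the Statistica/Matlab/Origin/Fortran workflow used in the thesis.

```python
import numpy as np

def dfa_exponent(signal, scales=None, order=1):
    """Estimate the DFA scaling exponent (related to the Hurst exponent H)
    of a one-dimensional series by detrended fluctuation analysis."""
    x = np.asarray(signal, dtype=float)
    y = np.cumsum(x - x.mean())                       # integrated profile
    if scales is None:                                # window sizes, log-spaced
        scales = np.unique(np.logspace(1, np.log10(len(x) // 4), 15).astype(int))
    flucts = []
    for n in scales:
        n_seg = len(y) // n                           # non-overlapping windows
        segments = y[: n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        rms = []
        for seg in segments:
            coef = np.polyfit(t, seg, order)          # local polynomial trend
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        flucts.append(np.mean(rms))                   # fluctuation F(n)
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope                                      # F(n) ~ n**slope

# Toy check: uncorrelated noise should give an exponent close to 0.5.
rng = np.random.default_rng(1)
print(dfa_exponent(rng.normal(size=4000)))
```

For a stationary signal the DFA exponent is close to the Hurst exponent H, with H near 0.5 for uncorrelated data and above 0.5 for persistent, spatially clustered data of the kind reported above.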
Abstract:
An important aspect of constructing discrete velocity models (DVMs) for the Boltzmann equation is to obtain the right number of collision invariants. It is a well-known fact that DVMs can also have extra, so-called spurious, collision invariants in addition to the physical ones. A DVM with only physical collision invariants, and thus without spurious ones, is called normal. For binary mixtures the concept of supernormal DVMs was also introduced, meaning that in addition to the DVM being normal, its restriction to any single species is also normal. Here we introduce generalizations of this concept to DVMs for multicomponent mixtures. We also present some general algorithms for constructing such models and give some concrete examples of such constructions. One of our main results is that for any given number of species and any given rational mass ratios we can construct a supernormal DVM. The DVMs are constructed in such a way that for half-space problems, such as the Milne and Kramers problems, but also nonlinear ones, we obtain structures similar to those of the classical discrete Boltzmann equation for a single species, and therefore we can apply results obtained for the classical Boltzmann equation.
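For context, a collision invariant is a function $\phi(\xi)$ satisfying $\phi(\xi) + \phi(\xi_*) = \phi(\xi') + \phi(\xi'_*)$ for every admissible collision $(\xi,\xi_*) \to (\xi',\xi'_*)$; for a single species in $d$ velocity dimensions the physical collision invariants are spanned by

\[
\{\,1,\ \xi_1,\dots,\xi_d,\ |\xi|^2\,\},
\]

i.e. mass, momentum and energy, giving $d+2$ of them, and a normal DVM is one whose collision invariants are exactly these, with no spurious ones.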