943 results for 010506 Statistical Mechanics, Physical Combinatorics and Mathematical Aspects of Condensed Matter
Abstract:
A quantum random walk on the integers exhibits pseudo memory effects, in that its probability distribution after N steps is determined by reshuffling the first N distributions that arise in a classical random walk with the same initial distribution. In a classical walk, entropy increase can be regarded as a consequence of the majorization ordering of successive distributions. The Lorenz curves of successive distributions for a symmetric quantum walk reveal no majorization ordering in general. Nevertheless, entropy can increase, and computer experiments show that it does so on average. Varying the stages at which the quantum coin system is traced out leads to new quantum walks, including a symmetric walk for which majorization ordering is valid but the spreading rate exceeds that of the usual symmetric quantum walk.
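The walk described above is easy to reproduce numerically. The sketch below simulates a symmetric (Hadamard) discrete-time quantum walk on the line with a balanced initial coin state and records the Shannon entropy of the position distribution after each step; function names and sizes are illustrative, not taken from the paper.

```python
import numpy as np

def hadamard_walk(steps):
    """Symmetric discrete-time quantum walk on the integers.

    State: amplitudes psi[x, c] for position index x and coin state c in {0, 1}.
    Returns the position probability distribution after each of `steps` steps.
    """
    n = 2 * steps + 1                        # positions -steps .. +steps
    psi = np.zeros((n, 2), dtype=complex)
    # Balanced initial coin state (|0> + i|1>)/sqrt(2), walker at the origin.
    psi[steps, 0] = 1 / np.sqrt(2)
    psi[steps, 1] = 1j / np.sqrt(2)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin
    dists = []
    for _ in range(steps):
        psi = psi @ H                        # coin toss (H is symmetric)
        shifted = np.zeros_like(psi)
        shifted[1:, 0] = psi[:-1, 0]         # coin 0 moves right
        shifted[:-1, 1] = psi[1:, 1]         # coin 1 moves left
        psi = shifted
        dists.append((np.abs(psi) ** 2).sum(axis=1))
    return dists

def entropy(p):
    """Shannon entropy (bits) of a probability distribution."""
    p = p[p > 1e-12]
    return -(p * np.log2(p)).sum()
```

Plotting `entropy(d)` over the returned distributions shows the on-average entropy increase the abstract describes, even though successive distributions are not majorization-ordered.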
Abstract:
This study examined how floc characteristics affect the dewaterability of activated sludge. The floc properties were characterized by morphological parameters (floc size distribution, fractal dimension and filament index), physical properties (flocculating ability, surface charge, relative hydrophobicity and viscosity), and chemical constituents in the sludge and in extracted extracellular polymeric substances (EPS), including the polymeric compounds protein, humic substances and carbohydrates, and the ions Ca2+, Mg2+, Fe3+ and Al3+. Dewaterability was defined in terms of the bound water content and the capillary suction time (CST). The bound water and the CST gave similar indications of the dewaterability of activated sludge. The floc physical parameters were the most important factors, affecting the water-binding ability of the sludge flocs significantly, whereas the morphological characteristics had a relatively weak impact on dewaterability. The polymeric components protein and carbohydrate contributed significantly to the water-binding ability of the sludge flocs; the effect of humic substances in the sludge on dewaterability was, however, insignificant. The CST correlated well with the polymeric constituents measured in both the sludge and the extracted EPS, whereas the bound water correlated well only with the individual polymers measured in the sludge. High concentrations of Ca2+, Mg2+, Fe3+ and Al3+ significantly improved dewaterability. (C) 2003 Elsevier B.V. All rights reserved.
Abstract:
In this article, we review the current state of knowledge concerning the physical and chemical properties of the eumelanin pigment. We examine properties related to its photoprotective functionality, and draw the crucial link between fundamental molecular structure and observable macroscopic behaviour. Where necessary, we also briefly review certain aspects of the pheomelanin literature to draw relevant comparisons. A full understanding of melanin function, and indeed its role in retarding or promoting the disease state, can only be obtained through a full mapping of key structure-property relationships in the main pigment types. We are engaged in such an endeavour for the case of eumelanin.
Abstract:
A formalism for modelling the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics, originally due to Prugel-Bennett and Shapiro, is reviewed, generalized and improved upon. This formalism can be used to predict the averaged trajectory of macroscopic statistics describing the GA's population. These macroscopics are chosen to average well between runs, so that fluctuations from mean behaviour can often be neglected. Where necessary, non-trivial terms are determined by assuming maximum entropy with constraints on known macroscopics. Problems of realistic size are described in compact form and finite population effects are included, often proving to be of fundamental importance. The macroscopics used here are cumulants of an appropriate quantity within the population and the mean correlation (Hamming distance) within the population. Including the correlation as an explicit macroscopic provides a significant improvement over the original formulation. The formalism is applied to a number of simple optimization problems in order to determine its predictive power and to gain insight into GA dynamics. Problems which are most amenable to analysis come from the class where alleles within the genotype contribute additively to the phenotype. This class can be treated with some generality, including problems with inhomogeneous contributions from each site, non-linear or noisy fitness measures, simple diploid representations and temporally varying fitness. The results can also be applied to a simple learning problem, generalization in a binary perceptron, and a limit is identified for which the optimal training batch size can be determined for this problem. The theory is compared to averaged results from a real GA in each case, showing excellent agreement if the maximum entropy principle holds. Some situations where this approximation breaks down are identified.
In order to fully test the formalism, an attempt is made on the strongly NP-hard problem of storing random patterns in a binary perceptron. Here, the relationship between the genotype and phenotype (training error) is strongly non-linear. Mutation is modelled under the assumption that perceptron configurations are typical of perceptrons with a given training error. Unfortunately, this assumption does not provide a good approximation in general. It is conjectured that perceptron configurations would have to be constrained by other statistics in order to accurately model mutation for this problem. Issues arising from this study are discussed in conclusion and some possible areas of further research are outlined.
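The additive class of problems that the formalism handles best is easy to illustrate. The toy GA below runs on the onemax problem (alleles contribute additively to fitness) and records the first two fitness cumulants, mean and variance, each generation: the kind of macroscopic trajectory the theory predicts. The selection scheme and all parameter values here are simplifying assumptions for the sketch, not the setup used in the thesis.

```python
import random
import statistics

def onemax_ga(pop_size=100, genome_len=50, generations=30, mut_rate=0.01, seed=0):
    """Toy GA on the additive onemax problem.

    Returns, for each generation, the first two fitness cumulants
    (mean, variance) of the population -- the macroscopics whose
    averaged trajectory the statistical-mechanics formalism predicts.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    history = []
    for _ in range(generations):
        fits = [sum(g) for g in pop]                 # additive phenotype
        history.append((statistics.mean(fits), statistics.pvariance(fits)))
        new_pop = []
        for _ in range(pop_size):
            # Fitness-proportional selection (a simplification of the
            # Boltzmann selection used in the formalism).
            parent = rng.choices(pop, weights=fits)[0]
            # Per-site mutation: flip each bit with probability mut_rate.
            child = [b ^ (rng.random() < mut_rate) for b in parent]
            new_pop.append(child)
        pop = new_pop
    return history
```

Averaging `history` over many seeds gives the between-run averaged cumulant trajectories that the theory is compared against.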
Abstract:
We investigate the performance of error-correcting codes, where the code word comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index. Shannon's bound for the channel capacity is recovered for large K and zero temperature when the code rate K/C is finite. Close to optimal error-correcting capability is obtained for finite K and C. We examine the finite-temperature case to assess the use of simulated annealing for decoding and extend the analysis to accommodate other types of noisy channels.
Abstract:
Using techniques from Statistical Physics, the annealed VC entropy for hyperplanes in high dimensional spaces is calculated as a function of the margin for a spherical Gaussian distribution of inputs.
Abstract:
An unsupervised learning procedure based on maximizing the mutual information between the outputs of two networks receiving different but statistically dependent inputs is analyzed (Becker S. and Hinton G., Nature, 355 (1992) 161). By exploiting a formal analogy to supervised learning in parity machines, the theory of zero-temperature Gibbs learning for the unsupervised procedure is presented for the case that the networks are perceptrons and for the case of fully connected committees.
Abstract:
A variation of low-density parity check (LDPC) error-correcting codes defined over Galois fields (GF(q)) is investigated using statistical physics. A code of this type is characterised by a sparse random parity check matrix composed of C non-zero elements per column. We examine the dependence of the code performance on the value of q, for finite and infinite C values, both in terms of the thermodynamical transition point and the practical decoding phase characterised by the existence of a unique (ferromagnetic) solution. We find different q-dependence in the cases of C = 2 and C ≥ 3; the analytical solutions are in agreement with simulation results, providing a quantitative measure to the improvement in performance obtained using non-binary alphabets.
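The sparse structure these codes rest on is straightforward to construct. The sketch below builds a column-regular parity-check matrix over GF(2) (the q = 2 case, with exactly C non-zero entries per column) and computes syndromes, which is the starting point for any bit-flipping or belief-propagation decoder. The sizes are illustrative and this shows only the code structure, not the ensemble analysed in the paper.

```python
import numpy as np

def regular_parity_check(n, c, rows, seed=0):
    """Random sparse parity-check matrix over GF(2) with exactly `c`
    non-zero entries per column (a column-regular LDPC construction)."""
    rng = np.random.default_rng(seed)
    H = np.zeros((rows, n), dtype=np.uint8)
    for j in range(n):
        # Place c ones in distinct random rows of column j.
        H[rng.choice(rows, size=c, replace=False), j] = 1
    return H

def syndrome(H, word):
    """Syndrome over GF(2); it is zero iff `word` is a codeword of H."""
    return (H @ word) % 2
```

A decoder would iteratively adjust bits until the syndrome vanishes; a single flipped bit shows up as a non-zero syndrome equal to that bit's column of H.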
Abstract:
This thesis is focused on the role differentiation hypothesis as it relates to small groups (Bales, 1958). The hypothesis is systematically examined, both conceptually and empirically, in the light of the Equilibrium Hypothesis (Bales, 1953) and the Negotiated Order Theory of leadership (e.g. Hosking, 1988). Chapter 1 sketches in a context for the research, which was stimulated by attempts during the 60s and 70s to organise small groups without leaders (the leaderless group, based on isocratic principles). Chapter 2 gives a conceptual and developmental overview of Bales' work, concentrating on the Equilibrium Hypothesis. It is argued that Bales' conceptual approach, if developed, can potentially integrate the disparate small groups and leadership literatures. Chapters 3 and 4 examine the concepts `group', `leader' and `leadership' in terms of the Negotiated Order perspective. In chapter 3 it is argued that two aspects of the concept group need to be taken separately into account: physical attributes and social psychological aspects (the metaphysical glue). It is further argued that a collection of people becomes a group only when they begin to establish a shared sense of social order. In chapter 4 it is argued that leadership is best viewed as a process of negotiation between those who influence and those who are influenced, in the context of shared values about means and ends. It is further argued that leadership is the process by which a shared sense of social order is established and maintained, thus linking the concepts `leadership' and `group' in a single formulation. The correspondences with Bales' approach are discussed at the end of the chapter. Chapters 5 to 8 present a detailed critical description and evaluation of the empirical work which claims to show role differentiation or to test the hypothesis, covering both Bales' original work and subsequent studies.
It is argued here that the measurement and analytical procedures adopted by Bales and others, in particular the use of simple means as summaries of group structures, are fundamentally flawed, and that role differentiation in relation to particular identifiable groups has not been demonstrated clearly anywhere in the literature. Chapters 9 to 13 present the empirical work conducted for the thesis. Eighteen small groups are examined systematically for evidence of role differentiation, using an approach based on early sociometry (Moreno, 1934). The results suggest that role differentiation, as described by Bales, does not occur as often as the literature implies, and not unequivocally in any case. In particular, structures derived from Liking are typically distributed or weak. This suggests that one of Bales' principal findings, that Liking varies independently of his other main dimensions, is the product of a statistical artifact. Chapter 14 presents a general summary of results and some considerations about future research.
Abstract:
Deep hole drilling is one of the most complicated metal cutting processes and one of the most difficult to perform on CNC machine tools or machining centres under conditions of limited manpower or unmanned operation. This research work investigates aspects of the deep hole drilling process with small-diameter twist drills and presents a prototype system for real-time process monitoring and adaptive control; two main research objectives are fulfilled. The first objective is the experimental investigation of the mechanics of the deep hole drilling process, using twist drills without internal coolant supply, in the range of diameters Ø2.4 to Ø4.5 mm and working lengths up to 40 diameters; the definition of the problems associated with the low strength of these tools; and the study of the mechanisms of catastrophic failure, which manifest themselves well before and along with the classic mechanism of tool wear. The relationships of drilling thrust and torque with the depth of penetration and the various machining conditions are also investigated, and the experimental evidence suggests that the process is inherently unstable at depths beyond a few diameters. The second objective is the design and implementation of a system for intelligent CNC deep hole drilling, the main task of which is to ensure the integrity of the process and the safety of the tool and the workpiece. This task is achieved by interfacing the CNC system of the machine tool to an external computer which performs the following functions: on-line monitoring of the drilling thrust and torque; adaptive control of feed rate, spindle speed and tool penetration (Z-axis); indirect monitoring of tool wear by pattern recognition of variations of the drilling thrust with cumulative cutting time and drilled depth; operation as a database for tools and workpieces; and finally the issuing of alarms and diagnostic messages.
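The adaptive feed-rate control described above can be caricatured in a few lines: a proportional override that backs the feed off as the measured thrust approaches a safety limit and lets it creep back up when there is margin. The function name, thresholds and gain below are illustrative assumptions, not values or algorithms from the thesis.

```python
def adaptive_feed(thrust, feed, thrust_limit, feed_min, feed_max, gain=0.1):
    """One control step of a proportional feed-rate override.

    margin > 0: thrust is below the limit, so the feed may increase;
    margin < 0: thrust exceeds the limit, so the feed is reduced.
    All parameters are illustrative, not values from the thesis.
    """
    margin = (thrust_limit - thrust) / thrust_limit
    new_feed = feed * (1 + gain * margin)
    # Clamp to the machine's admissible feed-rate range.
    return max(feed_min, min(feed_max, new_feed))
```

In a real system this step would run in the monitoring loop on the external computer, alongside the pattern-recognition checks on the thrust signal.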
Abstract:
Sodium formate, potassium acetate and a mixture of calcium and magnesium acetate (CMA) have all been identified as effective de-icing agents. In this project an attempt has been made to elucidate potentially deleterious effects of these substances on the durability of reinforced concrete. Aspects involving the corrosion behaviour of embedded steel, along with the chemical and physical degradation of the cementitious matrix, were studied. Ionic diffusion characteristics of de-icer/pore solution systems in hardened cement paste were also studied, since rates of ingress of deleterious agents into cement paste are commonly diffusion-controlled. It was found that all the compounds tested were generally non-corrosive to embedded steel; however, in a small number of cases potassium acetate did cause corrosion. Potassium acetate was also found to cause cracking in concrete and cement paste samples. CMA appeared to degrade hydrated cement paste, although this was apparently less of a problem when commercial-grade CMA was used in place of the reagent-grade chemical. This was thought to be due to the insoluble material present in the commercial formulation forming a physical barrier between the concrete and the de-icing solution. With the test regimes used, sodium formate was not seen to have any deleterious effect on the integrity of reinforced concrete. As a means of restoring the corrosion-protective character of chloride-contaminated concrete, the process of electrochemical chloride removal has previously been developed. Potential side-effects of this method and the effect of external electrolyte composition on chloride removal efficiency were investigated. It was seen that the composition of the external electrolyte has a significant effect on the amount of chloride removed. It was also found that, owing to alterations to the composition of the C3A hydration reaction products, it was possible to remove bound chloride as well as that in the pore solution.
The use of an external electrolyte containing lithium ions was also tried as a means of preventing cathodically-induced alkali-silica reaction in concretes containing potentially reactive aggregates. The results obtained were inconclusive and further practical development of this approach is needed.
Abstract:
This thesis presents an analysis of the stability of complex distribution networks. We present a stability analysis against cascading failures. We propose a spin (binary) model based on concepts of statistical mechanics. We test macroscopic properties of distribution networks with respect to various topological structures and distributions of microparameters. The equilibrium properties of the systems are obtained in a statistical mechanics framework by application of the replica method. We demonstrate the validity of our approach by comparing it with Monte Carlo simulations. We analyse the network properties in terms of phase diagrams and find both qualitative and quantitative dependence of the network properties on the network structure and macroparameters. The structure of the phase diagrams points to the existence of phase transitions and the presence of stable and metastable states in the system. We also present an analysis of robustness against overloading in distribution networks. We propose a model that describes a distribution process in a network. The model incorporates the currents between any connected hubs in the network, local constraints in the form of Kirchhoff's law, and a global optimization criterion. The flow of currents in the system is driven by the consumption. We study two principal types of model: infinite and finite link capacity. The key properties are the distributions of currents in the system. We again use a statistical mechanics framework to describe the currents in the system in terms of macroscopic parameters. In order to obtain observable properties we apply the replica method. We are able to assess the criticality of the level of demand with respect to the available resources and the architecture of the network. Furthermore, the parts of the system where critical currents may emerge can be identified. This, in turn, provides us with a characteristic description of the spread of overloading in the systems.
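The cascading-failure side of such an analysis can be caricatured with a small Monte Carlo experiment: nodes on a random graph carry load, a failed node sheds its load onto surviving neighbours, and the overload may propagate through the network. The toy dynamics and all parameter values below are illustrative assumptions, not the replica-method calculation of the thesis.

```python
import random

def overload_cascade(n=50, degree=4, capacity=1.5, initial_load=1.0, seed=1):
    """Monte Carlo sketch of an overload cascade on a random graph.

    Node 0 is failed as a trigger; each failed node sheds its load
    equally onto surviving neighbours, which fail in turn if pushed
    over `capacity`. Returns the final cascade size.
    """
    rng = random.Random(seed)
    # Random graph: keep linking each node to random partners until it
    # has at least `degree` neighbours.
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        while len(nbrs[i]) < degree:
            j = rng.randrange(n)
            if j != i:
                nbrs[i].add(j)
                nbrs[j].add(i)
    load = {i: float(initial_load) for i in range(n)}
    failed = set()
    queue = [0]                              # trigger: fail node 0
    while queue:
        v = queue.pop()
        if v in failed:
            continue
        failed.add(v)
        alive = [u for u in nbrs[v] if u not in failed]
        if alive:
            share = load[v] / len(alive)     # shed load equally
            for u in alive:
                load[u] += share
                if load[u] > capacity:
                    queue.append(u)
        load[v] = 0.0
    return len(failed)
```

Sweeping `capacity` (the available resource per node) against `initial_load` (the demand) reveals the kind of critical threshold behaviour that the thesis computes analytically.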