999 results for Statistical Thermodynamics
Abstract:
The core aim of machine learning is to make a computer program learn from experience. Learning from data is usually defined as the task of learning regularities or patterns in data in order to extract useful information, or to learn the underlying concept. An important sub-field of machine learning is called multi-view learning, where the task is to learn from multiple data sets or views describing the same underlying concept. A typical example of such a scenario would be to study a biological concept using several biological measurements like gene expression, protein expression and metabolic profiles, or to classify web pages based on their content and the contents of their hyperlinks. In this thesis, novel problem formulations and methods for multi-view learning are presented. The contributions include a linear data fusion approach during exploratory data analysis, a new measure to evaluate different kinds of representations for textual data, and an extension of multi-view learning to novel scenarios where the correspondence of samples in the different views or data sets is not known in advance. In order to infer the one-to-one correspondence of samples between two views, a novel concept of multi-view matching is proposed. The matching algorithm is completely data-driven and is demonstrated in several applications such as matching of metabolites between humans and mice, and matching of sentences between documents in two languages.
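As a purely illustrative sketch (not necessarily the algorithm proposed in the thesis), one data-driven way to infer a one-to-one correspondence between samples of two views is to alternate between projecting both views into a shared correlated subspace (here with CCA) and solving an assignment problem in that subspace. All data sizes and the CCA/Hungarian choices below are assumptions made for illustration.

```python
# Illustrative sketch of data-driven multi-view matching: alternate between
# (1) fitting CCA on the current pairing and (2) re-solving a one-to-one
# assignment in the shared subspace. Not necessarily the thesis's method.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n, d1, d2, k = 60, 20, 15, 5
z = rng.normal(size=(n, k))                                    # shared latent concept
X = z @ rng.normal(size=(k, d1)) + 0.05 * rng.normal(size=(n, d1))   # view 1
Y0 = z @ rng.normal(size=(k, d2)) + 0.05 * rng.normal(size=(n, d2))  # view 2
true_perm = rng.permutation(n)
Y = Y0[true_perm]                                              # correspondence is unknown

match = np.arange(n)                                           # initial guess: identity pairing
for _ in range(15):
    cca = CCA(n_components=k, max_iter=2000)
    cca.fit(X, Y[match])                                       # fit on the current pairing
    Xc, Yc = cca.transform(X, Y)                               # project all samples of both views
    rows, cols = linear_sum_assignment(cdist(Xc, Yc))          # one-to-one assignment
    if np.array_equal(cols, match):
        break
    match = cols

# X[i] is correctly matched when true_perm[match[i]] == i.
print("fraction correctly matched:", np.mean(true_perm[match] == np.arange(n)))
```

With a random initial pairing this alternation can get stuck in poor local optima, so a practical method would need a more careful initialization or multiple restarts; the sketch only illustrates the overall match-then-project idea.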
Abstract:
Using solid oxide galvanic cells of the type MnO + Gd2O3 + GdMnO3 | O2− | Ni + NiO and Mn3O4 + GdMnO3 + GdMn2O5 | O2− | air, the equilibrium oxygen pressure for the following reactions:

MnO + 1/2 Gd2O3 + 1/4 O2 = GdMnO3
1/3 Mn3O4 + GdMnO3 + 1/3 O2 = GdMn2O5

was determined in the temperature range from 1073 to 1450 K. From the measured equilibrium oxygen partial pressures, the corresponding Gibbs free energy changes for these reactions were derived:

ΔG°f(GdMnO3) (±425 J) = −132721 (±2240) + 51.91 (±0.81) T J mol−1
ΔG°f(GdMn2O5) (±670 J) = −121858 (±6176) + 79.52 (±4.83) T J mol−1

From these data, standard Gibbs energies, enthalpies and entropies of formation of GdMnO3 and GdMn2O5 from the component oxides and from the elements are derived. Thermodynamic data tables for the two ternary phases are compiled from 298.15 to 1400 K.
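As a small worked check of the first expression (with the negative sign of the enthalpy term reconstructed here, which is an assumption about the garbled source), the equilibrium oxygen pressure follows from ΔG° = −RT ln K with K = p(O2)^(−1/4) for pure condensed phases, so ln p(O2) = 4ΔG°/(RT):

```python
# Worked check (illustrative): equilibrium p(O2) for MnO + 1/2 Gd2O3 + 1/4 O2 = GdMnO3.
# With unit activities for the condensed phases, K = p(O2)**(-1/4), so
# ln p(O2) = 4*dG/(R*T). The dG(T) coefficients are those quoted in the abstract,
# with the negative sign of the constant term reconstructed (an assumption).
import math

R = 8.314  # J mol-1 K-1

def dG_GdMnO3(T):
    """Gibbs energy change of the formation reaction, J per mol GdMnO3."""
    return -132721.0 + 51.91 * T

for T in (1073.0, 1273.0, 1450.0):
    ln_p = 4.0 * dG_GdMnO3(T) / (R * T)
    # p(O2) is expressed relative to the standard-state pressure.
    print(f"T = {T:6.0f} K   ln p(O2) = {ln_p:7.2f}   p(O2) ~ {math.exp(ln_p):.2e}")
```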
Abstract:
The Gibbs' energy change for the reaction 3CoO (r.s.) + 1/2 O2(g) → Co3O4 (sp) has been measured between 730 and 1250 K using a solid state galvanic cell: Pt, CuO + Cu2O | (CaO)ZrO2 | CoO + Co3O4, Pt. The emf of this cell varies nonlinearly with temperature between 1075 and 1150 K, indicating a second or higher order phase transition in Co3O4 around 1120 (±20) K, associated with an entropy change of ~43 J mol−1 K−1. The phase transition is accompanied by an anomalous increase in lattice parameter and electrical conductivity. The cubic spinel structure is retained during the transition, which is caused by the change of Co3+ ions from the low spin to the high spin state. The octahedral site preference energy of the Co3+ ion in the high spin state has been evaluated as −24.8 kJ mol−1. This is more positive than the value for the Co2+ ion (−32.9 kJ mol−1). The cation distribution therefore changes from the normal towards the inverse side during the phase transition. The transformation is unique in coupling spin unpairing of the Co3+ ion with cation rearrangement on the spinel lattice. DTA in pure oxygen revealed a small peak corresponding to the transition, which could be differentiated from the large peak due to decomposition. TGA showed that the stoichiometry of the oxide is not significantly altered during the transition. The Gibbs' energy of formation of Co3O4 from CoO and O2 below and above the phase transition can be represented by the equations: ΔG° = −205,685 + 170.79T (±200) J mol−1 (730-1080 K) and ΔG° = −157,235 + 127.53T (±200) J mol−1 (1150-1250 K).
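A quick consistency check on the two quoted expressions: the change in their temperature coefficients should reproduce the ~43 J mol−1 K−1 entropy of transition reported above. A minimal sketch, using only the numbers from the abstract:

```python
# Consistency check (illustrative) on the two Gibbs-energy expressions quoted above.
# Since dG = A + B*T with B = -dS, the reaction entropy increases by
# (170.79 - 127.53) J mol-1 K-1 across the Co3O4 transition near 1120 K.
def dG_low(T):   # 730-1080 K, J per mol Co3O4
    return -205_685 + 170.79 * T

def dG_high(T):  # 1150-1250 K, J per mol Co3O4
    return -157_235 + 127.53 * T

dS_transition = 170.79 - 127.53   # J mol-1 K-1, entropy of the Co3O4 transition
print(f"entropy change at transition ~ {dS_transition:.1f} J mol-1 K-1")  # ~43
print(f"dG just below the transition (1080 K): {dG_low(1080):,.0f} J mol-1")
print(f"dG just above the transition (1150 K): {dG_high(1150):,.0f} J mol-1")
```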
Abstract:
Artificial neural networks (ANNs) have shown great promise in modeling circuit parameters for computer aided design applications. Leakage currents, which depend on process parameters, supply voltage, and temperature, can be modeled accurately with ANNs. However, the complex nature of the ANN model, with the standard sigmoidal activation functions, does not allow analytical expressions for its mean and variance. We propose the use of a new activation function that allows us to derive an analytical expression for the mean and a semi-analytical expression for the variance of the ANN-based leakage model. To the best of our knowledge, this is the first result in this direction. Our neural network model also includes the voltage and temperature as input parameters, thereby enabling voltage- and temperature-aware statistical leakage analysis (SLA). All existing SLA frameworks are closely tied to the exponential polynomial leakage model and hence fail to work with sophisticated ANN models. In this paper, we also set up an SLA framework that can efficiently work with these ANN models. Results show that the cumulative distribution function of leakage current of ISCAS'85 circuits can be predicted accurately, with the error in mean and standard deviation, compared to Monte Carlo-based simulations, being less than 1% and 2% respectively across a range of voltage and temperature values.
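The abstract does not specify the new activation function. As an illustrative sketch only, the Gaussian CDF (probit) activation is one choice for which the mean of a neuron's output under Gaussian inputs has a closed form, E[Φ(a)] = Φ(m/√(1 + s²)) for a pre-activation a ~ N(m, s²); the input names and sizes below are assumptions.

```python
# Illustrative sketch only: the paper's activation function is not given in the abstract.
# With a Gaussian-CDF (probit) activation, the mean output of a single neuron under
# Gaussian inputs has a closed form:  E[Phi(a)] = Phi(m / sqrt(1 + s^2)),
# where a = w.x + b ~ N(m, s^2). We check the analytic mean against Monte Carlo.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
d = 4                                              # hypothetical inputs: process params, Vdd, T
w = rng.normal(size=d)                             # neuron weights
b = 0.3                                            # neuron bias
mu = rng.normal(size=d)                            # mean of the Gaussian input parameters
Sigma = np.diag(rng.uniform(0.05, 0.2, size=d))    # input covariance (assumed diagonal)

m = w @ mu + b                                     # pre-activation mean
s2 = w @ Sigma @ w                                 # pre-activation variance
analytic_mean = norm.cdf(m / np.sqrt(1.0 + s2))    # closed-form mean of the neuron output

x = rng.multivariate_normal(mu, Sigma, size=200_000)
mc_mean = norm.cdf(x @ w + b).mean()               # Monte Carlo reference
print(f"analytic mean: {analytic_mean:.5f}   Monte Carlo mean: {mc_mean:.5f}")
```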
Abstract:
In this paper, we propose a novel and efficient algorithm for modelling sub-65 nm clock interconnect networks in the presence of process variation. We develop a method for delay analysis of interconnects that considers the impact of Gaussian metal process variations. The resistance and capacitance of a distributed RC line are expressed as correlated Gaussian random variables, which are then used to compute the standard deviation of the delay probability distribution function (PDF) at all nodes in the interconnect network. The main objective is to obtain the delay PDF at a lower computational cost. The approach converges in probability distribution, but not in the mean of the delay. We validate our approach against SPICE-based Monte Carlo simulations; the proposed method entails significantly lower computational cost.
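A minimal sketch of the general idea, not the paper's algorithm: propagate correlated Gaussian variations in the total resistance and capacitance of a single distributed RC segment through a first-order (linearized) Elmore delay model, T = 0.5·R·C, and compare with Monte Carlo. The nominal values, sigmas, and correlation below are made-up illustrative numbers.

```python
# Illustrative sketch, not the paper's method: first-order variance propagation
# for the Elmore delay of one distributed RC line, T = 0.5 * R * C, where the
# total resistance R and capacitance C are correlated Gaussian random variables.
import numpy as np

R0, C0 = 120.0, 0.8e-12                 # hypothetical nominal values: ohms, farads
sigma_R, sigma_C = 0.10 * R0, 0.08 * C0
rho = 0.6                               # assumed R-C correlation from shared metal variation

# Linearized mean and standard deviation of T = 0.5*R*C.
dT_dR, dT_dC = 0.5 * C0, 0.5 * R0
var_T = (dT_dR * sigma_R) ** 2 + (dT_dC * sigma_C) ** 2 \
        + 2.0 * rho * dT_dR * dT_dC * sigma_R * sigma_C
print(f"linearized : mean = {0.5 * R0 * C0:.3e} s, std = {np.sqrt(var_T):.3e} s")

# Monte Carlo reference with the same correlated Gaussian model.
rng = np.random.default_rng(2)
cov = [[sigma_R**2, rho * sigma_R * sigma_C],
       [rho * sigma_R * sigma_C, sigma_C**2]]
R, C = rng.multivariate_normal([R0, C0], cov, size=500_000).T
T = 0.5 * R * C
print(f"Monte Carlo: mean = {T.mean():.3e} s, std = {T.std():.3e} s")
```

Note that the Monte Carlo mean exceeds the linearized estimate by roughly 0.5·ρ·σR·σC, a small bias that echoes the abstract's remark about convergence in distribution but not in the mean.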
Abstract:
Modern sample surveys started to spread after statisticians at the U.S. Bureau of the Census in the 1940s had developed a sampling design for the Current Population Survey (CPS). A significant factor was also that digital computers became available to statisticians. In the beginning of the 1950s, the theory was documented in textbooks on survey sampling. This thesis is about the development of statistical inference for sample surveys. The idea of statistical inference was first enunciated by a French scientist, P. S. Laplace. In 1781, he published a plan for a partial investigation in which he determined the sample size needed to reach the desired accuracy in estimation. The plan was based on Laplace's Principle of Inverse Probability and on his derivation of the Central Limit Theorem. They were published in a memoir in 1774 which is one of the origins of statistical inference. Laplace's inference model was based on Bernoulli trials and binomial probabilities. He assumed that populations were changing constantly. This was depicted by assuming a priori distributions for the parameters. Laplace's inference model dominated statistical thinking for a century. Sample selection in Laplace's investigations was purposive. In 1894, at the International Statistical Institute meeting, the Norwegian Anders Kiaer presented the idea of the Representative Method for drawing samples. Its idea was that the sample would be a miniature of the population. It still prevails. The virtues of random sampling were known, but practical problems of sample selection and data collection hindered its use. Arthur Bowley realized the potential of Kiaer's method and, in the beginning of the 20th century, carried out several surveys in the UK. He also developed the theory of statistical inference for finite populations. It was based on Laplace's inference model. R. A. Fisher's contributions in the 1920s constitute a watershed in statistical science. He revolutionized the theory of statistics. In addition, he introduced a new statistical inference model which is still the prevailing paradigm. The essential idea is to draw samples repeatedly from the same population, under the assumption that population parameters are constants. Fisher's theory did not include a priori probabilities. Jerzy Neyman adopted Fisher's inference model and applied it to finite populations, with the difference that Neyman's inference model does not include any assumptions about the distributions of the study variables. Applying Fisher's fiducial argument, he developed the theory of confidence intervals. Neyman's last contribution to survey sampling presented a theory for double sampling. This gave the central idea to statisticians at the U.S. Census Bureau for developing the complex survey design for the CPS. An important criterion was to have a method in which the costs of data collection were acceptable and which provided approximately equal interviewer workloads, besides sufficient accuracy in estimation.
Abstract:
A systematic structure analysis of the correlation functions of statistical quantum optics is carried out. From a suitably defined auxiliary two-point function we are able to identify the excited modes in the wave field. The relative simplicity of the higher order correlation functions emerges as a byproduct, and the conditions under which these are made pure are derived. These results depend in a crucial manner on the notion of coherence indices and of unimodular coherence indices. A new class of approximate expressions for the density operator of a statistical wave field is worked out, based on discrete characteristic sets. These are even more economical than the diagonal coherent state representations. An appreciation of the subtleties of quantum theory is obtained. Certain implications for the physics of light beams are cited.
Abstract:
The absorption produced by the audience in concert halls is considered a random variable. Beranek's proposal [L. L. Beranek, Music, Acoustics and Architecture (Wiley, New York, 1962), p. 543] that audience absorption is proportional to the area the audience occupies and not to their number is subjected to a statistical hypothesis test. A two-variable linear regression model of the absorption, with audience area and residual area as regressor variables, is postulated for concert halls without added absorptive materials. Since Beranek's contention amounts to the statement that audience absorption is independent of the seating density, the test of the hypothesis lies in categorizing halls by seating density and examining for significant differences among the slopes of the regression planes of the different categories. Such a test shows that Beranek's hypothesis can be accepted. It is also shown that the audience area is a better predictor of the absorption than the audience number. The absorption coefficients and their 95% confidence limits are given for the audience and residual areas. A critique of the regression model is presented.
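A minimal sketch of the kind of regression and slope comparison described above, on hypothetical hall data (the coefficients, areas, and noise level are made up; the paper's actual model specification, e.g. the treatment of an intercept, is not given in the abstract):

```python
# Illustrative sketch, hypothetical data: regress total absorption A on audience
# area Sa and residual area Sr within each seating-density category, then compare
# the fitted audience-absorption coefficients across categories.
import numpy as np

rng = np.random.default_rng(3)

def fit_coeffs(Sa, Sr, A):
    """Least-squares fit of A = alpha*Sa + beta*Sr (no intercept, for simplicity)."""
    X = np.column_stack([Sa, Sr])
    coeffs, *_ = np.linalg.lstsq(X, A, rcond=None)
    return coeffs                                  # [alpha, beta]

alpha_true, beta_true = 0.85, 0.10                 # assumed absorption coefficients
for density in ("low-density halls", "high-density halls"):
    Sa = rng.uniform(400, 1500, size=20)           # audience area, m^2
    Sr = rng.uniform(1000, 4000, size=20)          # residual area, m^2
    A = alpha_true * Sa + beta_true * Sr + rng.normal(0, 30, size=20)
    alpha, beta = fit_coeffs(Sa, Sr, A)
    print(f"{density}: audience coeff = {alpha:.2f}, residual coeff = {beta:.2f}")

# If the audience coefficients do not differ significantly between density categories,
# absorption is proportional to audience area rather than to audience number, which is
# the sense in which Beranek's hypothesis is accepted.
```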
Abstract:
The activity of strontium in liquid Al-Sr alloys (x(Sr) ≤ 0.17) at 1323 K has been determined using the Knudsen effusion mass-loss technique. At higher concentrations (x(Sr) ≥ 0.28), the activity of strontium has been determined by the pseudo-isopiestic technique. The activity of aluminium has been derived by Gibbs-Duhem integration. The concentration-concentration structure factor of Bhatia and Thornton at zero wave vector has been computed from the thermodynamic data. The behaviour of the mean square thermal fluctuation in composition and of the thermodynamic mixing functions suggests association tendencies in the liquid state. The associated solution model with Al2Sr as the predominant complex can account for the properties of the liquid alloy. Thermodynamic data for the intermetallic compounds in the Al-Sr system have been derived using the phase diagram and the Gibbs' energy and enthalpy of mixing of the liquid alloys. The data indicate the need for a redetermination of the phase diagram near the strontium-rich corner.
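To make the structure-factor diagnostic concrete, here is a minimal sketch of S_cc(0) = RT/(d²G_M/dx²) for a simple regular solution, showing how a negative interaction parameter (ordering, i.e. association) pushes S_cc(0) below the ideal value x(1 − x). The interaction parameter is a made-up illustrative number, not the measured Al-Sr thermodynamics.

```python
# Illustrative sketch: Bhatia-Thornton concentration-concentration structure factor
# at zero wave vector, S_cc(0) = R*T / (d^2 G_M / dx^2), for a regular solution
# G_M = R*T*[x ln x + (1-x) ln(1-x)] + Omega*x*(1-x).
# Omega is a hypothetical negative (ordering) value; S_cc(0) below the ideal x(1-x)
# signals association tendencies, as reported for liquid Al-Sr in the abstract.
import numpy as np

T = 1323.0                   # K, measurement temperature quoted in the abstract
R = 8.314                    # J mol-1 K-1
Omega = -40_000.0            # J mol-1, hypothetical ordering interaction parameter

x = np.linspace(0.05, 0.95, 10)                    # mole fraction of Sr
Scc_ideal = x * (1.0 - x)
Scc_regular = x * (1.0 - x) / (1.0 - 2.0 * Omega * x * (1.0 - x) / (R * T))

for xi, si, sr in zip(x, Scc_ideal, Scc_regular):
    print(f"x_Sr = {xi:.2f}   ideal S_cc(0) = {si:.3f}   regular-solution S_cc(0) = {sr:.3f}")
```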
Abstract:
Removal of impurity elements from hot metal is essential in basic oxygen steelmaking. Oxidation of phosphorus from hot metal has been studied by several authors since the early days of steelmaking. The influence of different parameters on the distribution of phosphorus observed in the authors' recent work differs somewhat from that reported earlier. On the other hand, removal of sulphur during steelmaking has drawn much less attention. This may be because the magnitude of desulphurisation in oxygen steelmaking is relatively low, and because desulphurisation during hot metal pre-treatment or in the ladle furnace offers better commercial viability. Further, it is normally accepted that sulphur is removed to the steelmaking slag in the form of sulphide only. However, recent investigations have indicated that a significant amount of the sulphur removed during basic oxygen steelmaking can exist in the form of sulphate in the slag under oxidising conditions. The distribution of sulphur during steelmaking becomes more important in the event of carry-over of sulphur-rich blast-furnace slag, which increases the sulphur load in the BOF. The chemical nature of sulphur in this slag undergoes a gradual transition from sulphide to sulphate as the oxidative refining progresses.