239 results for CRITICALITY
Abstract:
A stochastic nonlinear partial differential equation is constructed for two different models exhibiting self-organized criticality: the Bak-Tang-Wiesenfeld (BTW) sandpile model [Phys. Rev. Lett. 59, 381 (1987); Phys. Rev. A 38, 364 (1988)] and the Zhang model [Phys. Rev. Lett. 63, 470 (1989)]. The dynamic renormalization group (DRG) enables one to compute the critical exponents. However, the nontrivial stable fixed point of the DRG transformation is unreachable for the original parameters of the models. We introduce an alternative regularization of the step function involved in the threshold condition, which breaks the symmetry of the BTW model. Although the symmetry properties of the two models are different, it is shown that they both belong to the same universality class. In this case the DRG procedure leads to symmetric behavior for both models, restoring the broken symmetry, and makes the nontrivial fixed point accessible. This technique could also be applied to other problems with threshold dynamics.
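For concreteness, the threshold dynamics underlying both models can be illustrated with a minimal BTW sandpile sketch in Python; this is not the paper's DRG construction, and the lattice size, driving protocol and threshold zc = 4 are conventional choices, not taken from the abstract:

    import numpy as np

    def btw_relax(z, zc=4):
        # Topple every site at or above the threshold zc: the site loses
        # zc grains, each of its four neighbours gains one, and grains
        # crossing the open boundary are lost.
        L = z.shape[0]
        unstable = np.argwhere(z >= zc)
        while unstable.size:
            for i, j in unstable:
                z[i, j] -= zc
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < L and 0 <= nj < L:
                        z[ni, nj] += 1
            unstable = np.argwhere(z >= zc)
        return z

    # Slow driving: add one grain at a random site, then relax fully.
    rng = np.random.default_rng(0)
    z = np.zeros((32, 32), dtype=int)
    for _ in range(10_000):
        i, j = rng.integers(0, 32, size=2)
        z[i, j] += 1
        btw_relax(z)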
Abstract:
Critical exponents of the infinitely slowly driven Zhang model of self-organized criticality are computed for d=2 and 3, with particular emphasis on the various roughening exponents. Besides confirming recent estimates of some exponents, new quantities are monitored and their critical exponents computed. Among other results, it is shown that the three-dimensional exponents do not coincide with those of the Bak-Tang-Wiesenfeld [Phys. Rev. Lett. 59, 381 (1987); Phys. Rev. A 38, 364 (1988)] (Abelian) model, and that the dynamical exponents computed from the correlation length and from the roughness of the energy profile do not necessarily coincide, as is usually implicitly assumed. An explanation for this is provided. The possibility of comparing these results with those obtained from renormalization group arguments is also briefly addressed.
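The Zhang rule differs from BTW in redistributing a continuous energy variable. A minimal sketch follows, together with the width of the energy profile whose scaling defines the roughening exponents; the sequential update and the threshold Ec = 1 are illustrative choices (the model is often stated with parallel updates):

    import numpy as np

    def zhang_relax(E, Ec=1.0):
        # A site at or above the threshold Ec sheds all of its energy in
        # equal shares to its four neighbours and resets to zero; energy
        # leaving through the open boundary is lost.
        L = E.shape[0]
        active = np.argwhere(E >= Ec)
        while active.size:
            for i, j in active:
                share = E[i, j] / 4.0
                E[i, j] = 0.0
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < L and 0 <= nj < L:
                        E[ni, nj] += share
            active = np.argwhere(E >= Ec)
        return E

    def roughness(E):
        # Width of the energy profile, W = sqrt(<(E - <E>)^2>).
        return float(np.sqrt(np.mean((E - E.mean()) ** 2)))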
Abstract:
The purpose of this work was to collect dependability data on the flue gas line at two Finnish pulp mills, from their commissioning up to the present day. Dependability data consists of reliability data and maintenance data. With the collected data it is possible to describe the plant's dependability accurately through the following indicators: the number of unplanned failures and their repair times, equipment downtime, failure probability, and corrective maintenance costs relative to the total corrective maintenance costs of the flue gas line. The method for collecting the dependability data is presented. The method used to determine the critical devices of the flue gas line is a combination of a questionnaire survey and a modified failure mode, effects and criticality analysis. The criteria for selecting devices for the final criticality analysis were decided on the basis of the dependability data and the questionnaire survey. The purpose of determining the critical devices is to find those devices in the flue gas line whose unexpected failure causes the most severe consequences for the reliability, production, safety, emissions and costs of the flue gas line. With this information, limited maintenance resources can be allocated correctly. As a result of the criticality determination, it is found that the three most critical devices in the flue gas line, common to both pulp mills, are the flue gas fans, the drag conveyors and the chain conveyors. The dependability data shows that equipment reliability is mill-specific, but essentially the same main trends can be seen in the figures depicting the probability of unplanned failures. The costs, expressed as the ratio of a device's unplanned maintenance costs to the total costs of the flue gas line, follow the reliability curve, calculated as the ratio of the device's downtime to its operating hours, very closely. Collecting dependability data, combined with the determination of critical devices, makes it possible to target and schedule preventive maintenance correctly over the equipment's lifetime so that the reliability and cost-efficiency requirements are met.
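The indicators named above can be made concrete with a small sketch; the record layout and field names are assumptions for illustration, not the thesis's data model:

    from dataclasses import dataclass

    @dataclass
    class FailureRecord:
        device: str          # e.g. "flue gas fan"
        repair_hours: float  # duration of the unplanned repair
        repair_cost: float   # corrective-maintenance cost of the repair

    def availability_figures(records, operating_hours):
        # Aggregate, per device: number of unplanned failures, mean time
        # to repair, downtime ratio, and share of the line's total
        # corrective-maintenance cost.
        total_cost = sum(r.repair_cost for r in records) or 1.0
        by_device = {}
        for r in records:
            d = by_device.setdefault(r.device,
                                     {"failures": 0, "downtime": 0.0, "cost": 0.0})
            d["failures"] += 1
            d["downtime"] += r.repair_hours
            d["cost"] += r.repair_cost
        for d in by_device.values():
            d["mttr"] = d["downtime"] / d["failures"]
            d["downtime_ratio"] = d["downtime"] / operating_hours
            d["cost_share"] = d["cost"] / total_cost
        return by_device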
Abstract:
The aim of this Master's thesis is to find a method for classifying spare part criticality in the case company. Several approaches exist for the criticality classification of spare parts. The practical problem addressed in this thesis is the lack of a generic analysis method for classifying spare parts of the case company's proprietary equipment. In order to find a classification method, a literature review of various analysis methods is required, and the requirements of the case company also have to be recognized; this is achieved by consulting professionals in the company. The literature review shows that the analytic hierarchy process (AHP) combined with decision tree models is a common method for classifying spare parts in the academic literature. Most of the literature discusses spare part criticality from a stock-holding perspective. This perspective is also relevant for a customer-oriented original equipment manufacturer (OEM) such as the case company. A decision tree model is developed that classifies spare parts into five criticality classes according to five criteria: safety risk, availability risk, functional criticality, predictability of failure and probability of failure. The criticality classes describe the level of criticality from non-critical to highly critical. The method is verified by classifying the spare parts of a full deposit stripping machine. The classification can be used as a generic model for recognizing critical spare parts of other similar equipment, from which spare part recommendations can be created. The purchase price of an item and equipment criticality were found to have no effect on spare part criticality in this context. The decision tree is recognized as the most suitable method for classifying spare part criticality in the company.
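A hedged sketch of such a decision tree follows; the branch ordering, thresholds and class boundaries are illustrative assumptions, since the abstract does not give the tree's exact structure:

    def classify_spare_part(safety_risk, availability_risk,
                            functional_criticality, failure_predictable,
                            failure_probability):
        # Walk the branches from the most to the least severe criterion
        # and return a class from 1 (non-critical) to 5 (highly critical).
        # Branch order and thresholds are illustrative, not the thesis's.
        if safety_risk:
            return 5
        if availability_risk and functional_criticality == "high":
            return 4
        if functional_criticality == "high":
            return 3
        if not failure_predictable and failure_probability > 0.1:
            return 2
        return 1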
Abstract:
Quantitatively assessing the importance or criticality of each link in a network is of practical value to operators, as it can help them to increase the network's resilience, provide more efficient services, or improve some other aspect of the service. Betweenness is a graph-theoretical measure of centrality that can be applied to communication networks to evaluate link importance. However, as we illustrate in this paper, the basic definition of betweenness centrality produces inaccurate estimates because it does not take into account aspects relevant to networking, such as heterogeneity in link capacity or the differing contributions of node pairs to the total traffic. A new algorithm for discovering link centrality in transport networks is proposed in this paper. It requires only static or semi-static network and topology attributes, yet produces estimates of good accuracy, as verified through extensive simulations. Its potential value is demonstrated by an example application, in which the simple shortest-path routing algorithm is improved in such a way that it outperforms other, more advanced algorithms in terms of blocking ratio.
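A sketch of the general idea, assuming a NetworkX graph with a "capacity" edge attribute and a traffic matrix given as a dict keyed by node pair; this illustrates capacity- and traffic-aware edge betweenness under those assumptions, not the paper's exact algorithm:

    import networkx as nx

    def link_criticality(G, traffic, capacity_attr="capacity"):
        # Accumulate each node pair's demand along its shortest path,
        # then normalise the resulting load by the link's capacity.
        load = {tuple(sorted(e)): 0.0 for e in G.edges()}
        for (s, t), demand in traffic.items():
            path = nx.shortest_path(G, s, t)
            for u, v in zip(path, path[1:]):
                load[tuple(sorted((u, v)))] += demand
        return {e: load[e] / G.edges[e][capacity_attr] for e in load}

Unlike plain betweenness, two links on equally many shortest paths score differently here if they differ in capacity or in the traffic of the pairs they serve.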
Abstract:
Some 50,000 Win Studies in Chess challenge White to find an effectively unique route to a win. Judging the impact of less than absolute uniqueness requires both technical analysis and artistic judgment. Here, for the first time, an algorithm is defined to help analyse uniqueness in endgame positions objectively. The key idea is to examine how critical certain positions are to White in achieving the win. The algorithm uses sub-n-man endgame tables (EGTs) for both Chess and relevant, adjacent variants of Chess. It challenges authors of EGT generators to generalise them to create EGTs for these chess variants. It has already proved efficient and effective in an implementation for Starchess, itself a variant of chess. The approach also addresses a number of similar questions arising in endgame theory, games and compositions.
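The core test can be sketched as follows; legal_moves, apply_move and the EGT probe egt_value are hypothetical interfaces standing in for a real endgame-table implementation:

    def effectively_unique(position, legal_moves, apply_move, egt_value):
        # A position is 'critical' for White when exactly one move
        # preserves the win. After White's move it is Black to move, so
        # a position won for White probes as a loss for the side to move.
        winning = [m for m in legal_moves(position)
                   if egt_value(apply_move(position, m)) == "loss"]
        return len(winning) == 1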
Abstract:
The problem of technology obsolescence in information-intensive businesses (software and hardware no longer being supported and being replaced by improved and different solutions), combined with a cost-constrained market, can severely increase costs, operational risk and, ultimately, reputational risk. Although many businesses recognise technological obsolescence, the pervasive nature of technology often means they have little information with which to identify the risk and location of pending obsolescence, and little money to apply to the solution. This paper presents a low-cost structured method to identify obsolete software and the risk of its obsolescence: the structure of a business and its supporting IT resources are captured, modelled and analysed, and the risk to the business of technology obsolescence is identified, enabling remedial action based on qualified obsolescence information. The technique uses a structured modelling approach built on enterprise architecture models and a heatmap algorithm to highlight high-risk obsolescent elements. The method has been tested and applied in practice in three consulting studies carried out by Capgemini involving four UK police forces. The generic technique could, however, be applied to any industry, and there are plans to improve it using ontology framework methods. This paper contains details of the enterprise architecture meta-models and related modelling.
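A minimal sketch of the heatmap idea; the scoring rule, thresholds and element attributes are invented for illustration and are not the paper's meta-model:

    def obsolescence_heatmap(elements):
        # elements maps name -> (years of vendor support left,
        # business impact on a 1-5 scale). Score each element and
        # bucket it into a red/amber/green heat level.
        heat = {}
        for name, (support_left, impact) in elements.items():
            risk = impact * max(0.0, 1.0 - support_left / 5.0)
            heat[name] = ("red" if risk >= 3.0
                          else "amber" if risk >= 1.5
                          else "green")
        return heat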
Abstract:
We consider bipartitions of one-dimensional extended systems whose probability distribution functions describe stationary states of stochastic models. We define estimators of the information shared between the two subsystems. If the correlation length is finite, the estimators stay finite for large system sizes. If the correlation length diverges, so do the estimators. The definition of the estimators is inspired by information theory. We look at several models and compare the behaviors of the estimators in the finite-size scaling limit. Analytical and numerical methods as well as Monte Carlo simulations are used. We show how the finite-size scaling functions change for various phase transitions, including the case where one has conformal invariance.
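One natural estimator of this kind is the mutual information between the two halves, I = S_A + S_B - S_AB, estimated from samples of the stationary state; a sketch assuming configurations are given as tuples of local states:

    import numpy as np
    from collections import Counter

    def entropy(samples):
        # Empirical Shannon entropy of a list of hashable configurations.
        counts = Counter(samples)
        n = sum(counts.values())
        p = np.array([c / n for c in counts.values()])
        return float(-(p * np.log(p)).sum())

    def mutual_information(configs, cut):
        # I = S_A + S_B - S_AB for the bipartition at `cut`, estimated
        # from Monte Carlo samples of full configurations (tuples).
        a = [c[:cut] for c in configs]
        b = [c[cut:] for c in configs]
        return entropy(a) + entropy(b) - entropy(configs)

With a finite correlation length such an estimator saturates as the system grows; at a critical point it keeps growing, which is the diagnostic the abstract describes.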
Abstract:
In certain Mott-insulating dimerized antiferromagnets, triplet excitations of the paramagnetic phase display both three-particle and four-particle interactions. When such a magnet undergoes a quantum phase transition into a magnetically ordered state, the three-particle interaction becomes part of the critical theory provided that the lattice ordering wave vector is zero. One microscopic example is the staggered-dimer antiferromagnet on the square lattice, for which deviations from O(3) universality have been reported in numerical studies. Using both symmetry arguments and microscopic calculations, we show that a nontrivial cubic term arises in the relevant order-parameter quantum field theory, and we assess its consequences using a combination of analytical and numerical methods. We also present finite-temperature quantum Monte Carlo data for the staggered-dimer antiferromagnet which complement recently published results. The data can be consistently interpreted in terms of critical exponents identical to those of the standard O(3) universality class, but with anomalously large corrections to scaling. We argue that the cubic interaction of critical triplons, although irrelevant in two spatial dimensions, is responsible for the leading corrections to scaling due to its small scaling dimension.
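The statement about anomalously large corrections to scaling can be phrased through the standard finite-size-scaling form with a correction exponent; schematically, for an observable O with bulk exponent kappa,

    O(L)\big|_{T=T_c} \simeq L^{\kappa/\nu}\,\bigl(f_0 + f_1\,L^{-\omega} + \dots\bigr)

A small scaling dimension of the irrelevant cubic term translates into a small omega, so the f_1 correction decays slowly and contaminates fits of kappa/nu at accessible system sizes.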
Abstract:
We comment on off-critical perturbations of WZNW models by a mass term as well as by another descendant operator, in cases where the results can be compared with the additional algebra obtained from the Dirac quantization of the model, so that a more general class of models is included. We discover, in both cases, hidden Kac-Moody algebras obeyed by some of the currents in the off-critical case; in several cases these are enough to completely fix the correlation functions.
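For reference, the Kac-Moody structure in question is the standard level-k current algebra; in a common normalization its modes obey

    [J^a_m, J^b_n] = i f^{abc}\,J^c_{m+n} + k\,m\,\delta^{ab}\,\delta_{m+n,0}

and the nontrivial claim is that relations of this form survive for certain currents away from the critical (conformal) point.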
Abstract:
We present analytical and numerical results for the specific heat and susceptibility amplitude ratios in parallel plate geometries. The results are derived using field-theoretic techniques suitable to describe the system in the bulk limit, i.e., (L/ξ±)≫ 1, where L is the distance between the plates and ξ± is the correlation length above (+) and below (-) the bulk critical temperature. Advantages and drawbacks of our method are discussed in the light of other approaches previously reported in the literature.
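The bulk amplitudes entering these ratios are defined through the standard critical singularities, with reduced temperature t = (T - T_c)/T_c:

    C_\pm \simeq \frac{A_\pm}{\alpha}\,|t|^{-\alpha}, \qquad
    \chi_\pm \simeq \Gamma_\pm\,|t|^{-\gamma}, \qquad
    \xi_\pm \simeq \xi_0^\pm\,|t|^{-\nu}

so that the universal combinations studied are A_+/A_- and \Gamma_+/\Gamma_-.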
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
We analyze long-range time correlations and self-similar characteristics of the electrostatic turbulence at the plasma edge and scrape-off layer in the Tokamak Chauffage Alfvén Brésilien (TCABR), with low and high magnetohydrodynamic (MHD) activity. We find evidence of self-organized criticality (SOC), mainly in the region near the tokamak limiter. Comparative analysis of data before and during MHD activity reveals that during high MHD activity the Hurst parameter decreases. Finally, we present a cellular automaton whose parameters are adjusted to simulate the observed change of the turbulence's SOC behavior with the variation of MHD activity.
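The Hurst parameter mentioned above is commonly estimated by rescaled-range (R/S) analysis; a compact sketch of that standard estimator (the window sizes are an arbitrary choice, and this is not necessarily the estimator used in the paper):

    import numpy as np

    def hurst_rs(x, windows=(16, 32, 64, 128, 256)):
        # Rescaled-range (R/S) analysis: for each window size, average
        # the range of the cumulative mean-adjusted series divided by
        # its standard deviation; the Hurst exponent is the slope of
        # log(R/S) against log(window size).
        x = np.asarray(x, dtype=float)
        rs = []
        for w in windows:
            segs = x[: (len(x) // w) * w].reshape(-1, w)
            dev = np.cumsum(segs - segs.mean(axis=1, keepdims=True), axis=1)
            R = dev.max(axis=1) - dev.min(axis=1)
            S = segs.std(axis=1)
            rs.append(np.mean(R[S > 0] / S[S > 0]))
        slope, _ = np.polyfit(np.log(windows), np.log(rs), 1)
        return slope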