78 results for generalized multiscale entropy
Abstract:
A systematic, goal-driven, top-down modelling methodology is proposed that is capable of developing a multiscale model of a process system for given diagnostic purposes. The diagnostic goal-set and the symptoms are extracted from HAZOP analysis results, where the possible actions to be performed in a fault situation are also described. The multiscale dynamic model is realized in the form of a hierarchical coloured Petri net by using a novel substitution place-transition pair. Multiscale simulation that focuses automatically on the fault areas is used to predict the effect of the proposed preventive actions. The notions and procedures are illustrated through simple case studies, including a heat exchanger network and a more complex wet granulation process.
Abstract:
The cross-entropy (CE) method is a new generic approach to combinatorial and multi-extremal optimization and rare-event simulation. The purpose of this tutorial is to give a gentle introduction to the CE method. We present the CE methodology, the basic algorithm and its modifications, and discuss applications in combinatorial optimization and machine learning.
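The basic CE iteration the tutorial describes, sampling candidates from a parametric family and refitting the parameters to the best samples, can be sketched in a few lines. The following is a minimal illustration (not the tutorial's own code) for binary optimization with a Bernoulli sampling family; the objective, population size, and smoothing constant are arbitrary demonstration choices:

```python
import numpy as np

def cross_entropy_maximize(score, n_bits=10, pop=200, elite_frac=0.1,
                           iters=50, smoothing=0.7, seed=0):
    """Minimal CE sketch for maximizing `score` over binary vectors:
    sample candidates from independent Bernoulli(p) draws, then move p
    toward the empirical mean of the elite samples (the cross-entropy
    minimization step for the Bernoulli family)."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                              # initial sampler
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        x = (rng.random((pop, n_bits)) < p).astype(int)   # generate candidates
        scores = np.array([score(row) for row in x])
        elite = x[np.argsort(scores)[-n_elite:]]          # best candidates
        p = smoothing * elite.mean(axis=0) + (1 - smoothing) * p  # refit p
    return (p > 0.5).astype(int)

# Toy objective ("one-max"): the optimum is the all-ones vector.
best = cross_entropy_maximize(lambda x: x.sum())
```

The smoothing step keeps the Bernoulli parameters from collapsing to 0 or 1 too early, a standard modification discussed in the CE literature.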
Abstract:
Consider a network of unreliable links, modelling for example a communication network. Estimating the reliability of the network, expressed as the probability that certain nodes in the network are connected, is a computationally difficult task. In this paper we study how the Cross-Entropy method can be used to obtain more efficient network reliability estimation procedures. Three techniques of estimation are considered: Crude Monte Carlo and the more sophisticated Permutation Monte Carlo and Merge Process. We show that the Cross-Entropy method yields a speed-up over all three techniques.
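Of the three baselines, Crude Monte Carlo is the simplest to make concrete: sample edge failures directly and count how often the terminals stay connected. A minimal sketch (not the paper's code), using a toy triangle network as the example:

```python
import numpy as np

def crude_mc_reliability(edges, n_nodes, p_up, s, t, samples=20000, seed=1):
    """Crude Monte Carlo sketch of two-terminal network reliability:
    each edge is independently 'up' with probability p_up; estimate the
    probability that nodes s and t remain connected."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(samples):
        parent = list(range(n_nodes))          # union-find over surviving edges
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]  # path halving
                a = parent[a]
            return a
        for u, v in edges:
            if rng.random() < p_up:            # edge survives this replication
                parent[find(u)] = find(v)
        hits += find(s) == find(t)
    return hits / samples

# Toy triangle network: s-t disconnect requires the direct edge down AND the
# two-hop path broken, so the exact reliability is 1 - 0.1*(1 - 0.81) = 0.981.
est = crude_mc_reliability([(0, 1), (1, 2), (0, 2)], n_nodes=3,
                           p_up=0.9, s=0, t=1)
```

For highly reliable networks the failure event becomes rare and this estimator's relative error blows up, which is exactly the inefficiency the more sophisticated methods and the CE speed-up target.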
Abstract:
The buffer allocation problem (BAP) is a well-known difficult problem in the design of production lines. We present a stochastic algorithm for solving the BAP, based on the cross-entropy method, a new paradigm for stochastic optimization. The algorithm involves the following iterative steps: (a) the generation of buffer allocations according to a certain random mechanism, followed by (b) the modification of this mechanism on the basis of cross-entropy minimization. Through various numerical experiments we demonstrate the efficiency of the proposed algorithm and show that the method can quickly generate (near-)optimal buffer allocations for fairly large production lines.
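Steps (a) and (b) can be illustrated concretely. The sketch below is a toy stand-in, not the paper's algorithm: each buffer is dropped into a slot drawn from a categorical distribution q, and q is then refit to the elite allocations (the cross-entropy minimization step for categorical families). The throughput function is a hypothetical placeholder that rewards a balanced allocation; a real BAP objective would come from a production-line simulator:

```python
import numpy as np

def ce_buffer_allocation(throughput, n_buffers, n_slots, pop=300,
                         elite_frac=0.1, iters=40, smoothing=0.7, seed=0):
    """CE sketch for allocating n_buffers units among n_slots locations.

    (a) generate allocations by dropping each buffer into a slot drawn
        from the categorical distribution q;
    (b) refit q to the elite (highest-throughput) allocations.
    """
    rng = np.random.default_rng(seed)
    q = np.full(n_slots, 1.0 / n_slots)
    n_elite = max(1, int(pop * elite_frac))
    best_alloc, best_score = None, -np.inf
    for _ in range(iters):
        picks = rng.choice(n_slots, size=(pop, n_buffers), p=q)
        allocs = np.array([np.bincount(row, minlength=n_slots) for row in picks])
        scores = np.array([throughput(a) for a in allocs])
        order = np.argsort(scores)
        if scores[order[-1]] > best_score:          # track incumbent solution
            best_score = scores[order[-1]]
            best_alloc = allocs[order[-1]]
        elite = allocs[order[-n_elite:]]
        q = smoothing * elite.mean(axis=0) / n_buffers + (1 - smoothing) * q
    return best_alloc

# Hypothetical throughput surrogate: reward balanced allocations, so the
# optimum for 12 buffers over 3 slots is (4, 4, 4).
best = ce_buffer_allocation(lambda a: -((a - 4) ** 2).sum(),
                            n_buffers=12, n_slots=3)
```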
Abstract:
We explore both the rheology and complex flow behavior of monodisperse polymer melts. Adequate quantities of monodisperse polymer were synthesized so that both the material's rheology and microprocessing behavior could be established. In parallel, we employ a molecular theory for the polymer rheology that is suitable for comparison with experimental rheometric data and numerical simulation for microprocessing flows. The model is capable of matching both shear and extensional data with minimal parameter fitting. Experimental data for the processing behavior of monodisperse polymers are presented for the first time as flow birefringence and pressure difference data obtained using a Multipass Rheometer with an 11:1 constriction entry and exit flow. Matching of experimental processing data was obtained using the constitutive equation with the Lagrangian numerical solver, FLOWSOLVE. The results show the direct coupling between molecular constitutive response and macroscopic processing behavior, and differentiate flow effects that arise separately from orientation and stretch. (c) 2005 The Society of Rheology.
Abstract:
We consider the problem of estimating P(Y_1 + ... + Y_n > x) by importance sampling when the Y_i are i.i.d. and heavy-tailed. The idea is to exploit the cross-entropy method as a tool for choosing good parameters in the importance sampling distribution; in doing so, we use the asymptotic description that, given Y_1 + ... + Y_n > x, n - 1 of the Y_i have distribution F and one has the conditional distribution of Y given Y > x. We show in some specific parametric examples (Pareto and Weibull) how this leads to precise answers which, as demonstrated numerically, are close to being variance minimal within the parametric class under consideration. Related problems for M/G/1 and GI/G/1 queues are also discussed.
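The basic scheme can be sketched for the Pareto case: draw the summands from a heavier-tailed Pareto proposal and reweight by the likelihood ratio. This is a hand-parametrized illustration, not the paper's method; in the paper's approach the proposal parameter would itself be chosen by cross-entropy minimization within the parametric class:

```python
import numpy as np

def pareto_sum_tail_is(alpha, alpha_tilde, n, x, samples=200000, seed=0):
    """Importance-sampling sketch for P(Y_1 + ... + Y_n > x) with i.i.d.
    Pareto(alpha) summands on [1, inf).  The proposal is a heavier-tailed
    Pareto(alpha_tilde), alpha_tilde < alpha, fixed here by hand."""
    rng = np.random.default_rng(seed)
    # Inverse-CDF sampling: if U ~ Uniform(0,1), then U**(-1/a) ~ Pareto(a).
    y = rng.random((samples, n)) ** (-1.0 / alpha_tilde)
    # Likelihood ratio f/g = (alpha/alpha_tilde) * y**(alpha_tilde - alpha),
    # taken as a product over the n summands of each sample.
    w = np.prod((alpha / alpha_tilde) * y ** (alpha_tilde - alpha), axis=1)
    return float(np.mean(w * (y.sum(axis=1) > x)))

# Toy case: two Pareto(2) summands, threshold x = 10.
est = pareto_sum_tail_is(alpha=2.0, alpha_tilde=1.0, n=2, x=10.0)
```

Because the proposal has a heavier tail than the target, the weights here are bounded (by (alpha/alpha_tilde)**n), which keeps the estimator's variance under control on the rare event, in contrast to crude Monte Carlo.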
Abstract:
The use of sirolimus as an alternative to calcineurin antagonists has enabled the continuation of immunosuppression in patients with renal impairment with preservation of kidney function. Sirolimus is generally well tolerated, with the main causes of cessation of therapy related to its effect on blood lipid profile as well as leukopenia and thrombocytopenia. We report a case of a debilitating ulcerating maculopapular rash necessitating cessation of the drug in a liver transplantation patient. A 56-year-old Caucasian liver transplantation patient presented with a diffuse, debilitating rash attributed to sirolimus use. This ultimately necessitated cessation of the immunosuppressant with subsequent resolution of her symptoms. From a review of the current literature, this is a highly unusual adverse reaction to sirolimus.
Abstract:
The generalized secant hyperbolic distribution (GSHD) proposed in Vaughan (2002) includes a wide range of unimodal symmetric distributions, with the Cauchy and uniform distributions being the limiting cases, and the logistic and hyperbolic secant distributions being special cases. The current article derives an asymptotically efficient rank estimator of the location parameter of the GSHD and suggests the corresponding one- and two-sample optimal rank tests. The rank estimator derived is compared to the modified MLE of location proposed in Vaughan (2002). By combining these two estimators, a computationally attractive method for constructing an exact confidence interval of the location parameter is developed. The statistical procedures introduced in the current article are illustrated by examples.
Abstract:
Equilibrium adsorption and desorption in mesoporous adsorbents is considered on the basis of rigorous thermodynamic analysis, in which the curvature-dependent solid-fluid potential and the compressibility of the adsorbed phase are accounted for. The compressibility of the adsorbed phase is considered for the first time in the literature in the framework of a rigorous thermodynamic approach. Our model is a further development of continuum thermodynamic approaches proposed by Derjaguin and Broekhoff and de Boer, and it is based on a reference isotherm of a non-porous material having the same chemical structure as that of the pore wall. In this improved thermodynamic model, we incorporated a prescription for transforming the solid-fluid potential exerted by the flat reference surface to the potential inside cylindrical and spherical pores. We relax the assumption that the adsorbed film density is constant and equal to that of the saturated liquid. Instead, the density of the adsorbed fluid is allowed to vary over the adsorbed film thickness and is calculated by an equation of state. As a result, the model is capable of describing the adsorption-desorption reversibility in cylindrical pores having diameter less than 2 nm. The generalized thermodynamic model may be applied to the pore size characterization of mesoporous materials instead of much more time-consuming molecular approaches. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
We prove that the elation groups of a certain infinite family of Roman generalized quadrangles are not isomorphic to those of associated flock generalized quadrangles.
Abstract:
Standard factorial designs sometimes may be inadequate for experiments that aim to estimate a generalized linear model, for example, for describing a binary response in terms of several variables. A method is proposed for finding exact designs for such experiments that uses a criterion allowing for uncertainty in the link function, the linear predictor, or the model parameters, together with a design search. Designs are assessed and compared by simulation of the distribution of efficiencies relative to locally optimal designs over a space of possible models. Exact designs are investigated for two applications, and their advantages over factorial and central composite designs are demonstrated.
Abstract:
We study a generalized Hubbard model on the two-leg ladder at zero temperature, focusing on a parameter region with staggered flux (SF)/d-density wave (DDW) order. To guide our numerical calculations, we first investigate the location of a SF/DDW phase in the phase diagram of the half-filled weakly interacting ladder using a perturbative renormalization group (RG) and bosonization approach. For hole doping delta away from half-filling, finite-system density-matrix renormalization group (DMRG) calculations are used to study ladders with up to 200 rungs for intermediate-strength interactions. In the doped SF/DDW phase, the staggered rung current and the rung electron density both show periodic spatial oscillations, with characteristic wavelengths 2/delta and 1/delta, respectively, corresponding to ordering wavevectors 2k_F and 4k_F for the currents and densities, where 2k_F = pi(1 - delta). The density minima are located at the anti-phase domain walls of the staggered current. For sufficiently large dopings, SF/DDW order is suppressed. The rung density modulation also exists in neighboring phases where currents decay exponentially. We show that most of the DMRG results can be qualitatively understood from weak-coupling RG/bosonization arguments. However, while these arguments seem to suggest a crossover from non-decaying correlations to power-law decay at a length scale of order 1/delta, the DMRG results are consistent with a true long-range order scenario for the currents and densities. (c) 2005 Elsevier Inc. All rights reserved.
Abstract:
Eukaryotic genomes display segmental patterns of variation in various properties, including GC content and degree of evolutionary conservation. DNA segmentation algorithms are aimed at identifying statistically significant boundaries between such segments. Such algorithms may provide a means of discovering new classes of functional elements in eukaryotic genomes. This paper presents a model and an algorithm for Bayesian DNA segmentation and considers the feasibility of using it to segment whole eukaryotic genomes. The algorithm is tested on a range of simulated and real DNA sequences, and the following conclusions are drawn. Firstly, the algorithm correctly identifies non-segmented sequence, and can thus be used to reject the null hypothesis of uniformity in the property of interest. Secondly, estimates of the number and locations of change-points produced by the algorithm are robust to variations in algorithm parameters and initial starting conditions and correspond to real features in the data. Thirdly, the algorithm is successfully used to segment human chromosome 1 according to GC content, thus demonstrating the feasibility of Bayesian segmentation of eukaryotic genomes. The software described in this paper is available from the author's website (www.uq.edu.au/~uqjkeith/) or upon request to the author.
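As a minimal illustration of the change-point idea behind Bayesian segmentation (a sketch using conjugate Beta-Bernoulli marginal likelihoods, not the paper's algorithm or software), the posterior over a single change-point location in a 0/1 sequence, such as a thresholded GC indicator, can be computed in closed form:

```python
import numpy as np
from math import lgamma

def log_marginal(heads, tails, a=1.0, b=1.0):
    """Log Beta-Binomial marginal likelihood of a Bernoulli segment
    under a Beta(a, b) prior on its success probability."""
    return (lgamma(a + b) - lgamma(a) - lgamma(b)
            + lgamma(a + heads) + lgamma(b + tails)
            - lgamma(a + b + heads + tails))

def single_changepoint_posterior(x):
    """Posterior over the location of one change point in a 0/1 sequence,
    with a uniform prior on location; entry i is the probability that the
    segments are x[:i+1] and x[i+1:]."""
    x = np.asarray(x)
    n = len(x)
    logs = []
    for k in range(1, n):                      # split after position k
        left, right = x[:k], x[k:]
        logs.append(log_marginal(left.sum(), k - left.sum())
                    + log_marginal(right.sum(), (n - k) - right.sum()))
    logs = np.array(logs)
    post = np.exp(logs - logs.max())           # stable normalization
    return post / post.sum()

# Toy sequence: a low-GC run then a high-GC run, change point at index 30.
rng = np.random.default_rng(0)
seq = np.concatenate([rng.random(30) < 0.2, rng.random(30) < 0.8]).astype(int)
post = single_changepoint_posterior(seq)
```

The paper's algorithm handles an unknown number of change points over whole chromosomes, which requires sampling rather than this exhaustive enumeration, but the segment-marginal-likelihood building block is the same in spirit.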
Abstract:
We develop criteria sufficient to enable detection of macroscopic coherence where there are not just two macroscopically distinct outcomes for a pointer measurement, but rather a spread of outcomes over a macroscopic range. The criteria provide a means to distinguish a macroscopic quantum description from a microscopic one based on mixtures of microscopic superpositions of pointer-measurement eigenstates. The criteria are applied to Gaussian-squeezed and spin-entangled states.