934 results for Minimum Entropy Deconvolution
Abstract:
The hypothesis of minimum entropy production is applied to a simple one-dimensional energy balance model and is analysed for different values of the radiative forcing due to greenhouse gases. The extremum principle is used to determine the planetary “conductivity” and to avoid the “diffusive” approximation, which is commonly assumed in this type of model. For present conditions the result at minimum radiative entropy production is similar to that obtained with the classical model. Other climatic scenarios show visible differences, with the extremal case behaving better.
Abstract:
Non-Equilibrium Statistical Mechanics is a broad subject. Roughly speaking, it deals with systems which have not yet relaxed to an equilibrium state, with systems which are in a steady non-equilibrium state, or with more general situations. Such systems are characterized by external forcing and internal fluxes, resulting in a net production of entropy which quantifies dissipation and the extent to which, by the Second Law of Thermodynamics, time-reversal invariance is broken. In this thesis we discuss some of the mathematical structures involved with generic discrete-state-space non-equilibrium systems, which we depict as networks entirely analogous to electrical networks. We define suitable observables and derive their linear-regime relationships; we discuss a duality between external and internal observables that reverses the roles of the system and of the environment; and we show that network observables serve as constraints for a derivation of the minimum entropy production principle. We dwell on deep combinatorial aspects of linear response determinants, which are related to spanning-tree polynomials in graph theory, and we give a geometrical interpretation of observables in terms of Wilson loops of a connection and gauge degrees of freedom. We specialize the formalism to continuous-time Markov chains, give a physical interpretation of observables in terms of locally detailed balanced rates, prove many variants of the fluctuation theorem, and show that a well-known expression for the entropy production, due to Schnakenberg, descends from considerations of gauge invariance, where the gauge symmetry is related to the freedom in the choice of a prior probability distribution. As an additional topic of geometrical flavor related to continuous-time Markov chains, we discuss the Fisher-Rao geometry of non-equilibrium decay modes, showing that the Fisher matrix contains information about many aspects of non-equilibrium behavior, including non-equilibrium phase transitions and superposition of modes. We establish a sort of statistical equivalence principle and discuss the behavior of the Fisher matrix under time reversal. To conclude, we propose that geometry and combinatorics might greatly increase our understanding of non-equilibrium phenomena.
Abstract:
2000 Mathematics Subject Classification: 49L20, 60J60, 93E20
Abstract:
We answer the following question: given any $n \in \mathbb{N}$, what is the minimum number of endpoints $e_n$ of a tree admitting a zero-entropy map $f$ with a periodic orbit of period $n$? We prove that $e_n = s_1 s_2 \cdots s_k - \sum_{i=2}^{k} s_i s_{i+1} \cdots s_k$, where $n = s_1 s_2 \cdots s_k$ is the decomposition of $n$ into a product of primes such that $s_i \le s_{i+1}$ for $1 \le i < k$.
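As a quick check of the formula, a short script can evaluate $e_n$ directly from the prime factorization. This is a minimal sketch written for illustration (it assumes sympy for the factorization, which is not part of the paper); for example, $n = 12 = 2 \cdot 2 \cdot 3$ gives $e_{12} = 12 - (2 \cdot 3 + 3) = 3$.

```python
from math import prod
from sympy import factorint

def min_endpoints(n: int) -> int:
    """e_n from the abstract above: the minimum number of endpoints of a
    tree admitting a zero-entropy map with a periodic orbit of period n."""
    # n = s_1 s_2 ... s_k, prime factors listed in nondecreasing order
    s = [p for p, m in sorted(factorint(n).items()) for _ in range(m)]
    # e_n = s_1 s_2 ... s_k - sum_{i=2}^{k} s_i s_{i+1} ... s_k
    return prod(s) - sum(prod(s[i:]) for i in range(1, len(s)))

print(min_endpoints(12))  # 12 = 2*2*3 -> 12 - (2*3 + 3) = 3
```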
Abstract:
This paper presents a novel segmentation method for cuboidal cell nuclei in images of prostate tissue stained with hematoxylin and eosin. The proposed method allows segmenting normal, hyperplastic and cancerous prostate images in three steps: pre-processing, segmentation of cuboidal cell nuclei and post-processing. The pre-processing step consists of applying contrast stretching to the red (R) channel to highlight the contrast of cuboidal cell nuclei. The aim of the second step is to apply global thresholding based on minimum cross entropy to generate a binary image with candidate regions for cuboidal cell nuclei. In the post-processing step, false positives are removed using the connected component method. The proposed segmentation method was applied to an image bank with 105 samples and measures of sensitivity, specificity and accuracy were compared with those provided by other segmentation approaches available in the specialized literature. The results are promising and demonstrate that the proposed method allows the segmentation of cuboidal cell nuclei with a mean accuracy of 97%.
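The three steps map naturally onto standard scikit-image calls. The sketch below is an illustration under assumptions, not the authors' code: `threshold_li` implements Li's minimum cross entropy thresholding, nuclei are assumed darker than background after staining, and `min_area` is an invented parameter for the connected-component filtering.

```python
import numpy as np
from skimage import exposure, filters, io, morphology

def segment_nuclei(path, min_area=50):
    rgb = io.imread(path)
    red = rgb[..., 0]                        # work on the R channel
    # 1) Pre-processing: contrast stretching to highlight nuclei
    p2, p98 = np.percentile(red, (2, 98))
    stretched = exposure.rescale_intensity(red, in_range=(p2, p98))
    # 2) Global thresholding by minimum cross entropy (Li's method);
    #    hematoxylin-stained nuclei are assumed darker than background
    binary = stretched < filters.threshold_li(stretched)
    # 3) Post-processing: drop small connected components (false positives)
    return morphology.remove_small_objects(binary, min_size=min_area)
```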
Abstract:
In this paper, we propose a fast adaptive importance sampling method for the efficient simulation of buffer overflow probabilities in queueing networks. The method comprises three stages. First, we estimate the minimum cross-entropy tilting parameter for a small buffer level; next, we use this as a starting value for the estimation of the optimal tilting parameter for the actual (large) buffer level. Finally, the tilting parameter just found is used to estimate the overflow probability of interest. We study various properties of the method in more detail for the M/M/1 queue and conjecture that similar properties also hold for quite general queueing networks. Numerical results support this conjecture and demonstrate the high efficiency of the proposed algorithm.
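The final stage admits a compact illustration for the M/M/1 queue. In this sketch the tilting parameter is fixed to the classical interchange of arrival and service rates (the known asymptotically optimal change of measure for M/M/1) instead of being estimated adaptively by minimum cross entropy as in the paper, so it shows only the importance-sampling estimator itself; all parameter values are illustrative.

```python
import random

def overflow_prob_is(lam, mu, L, n_samples=200_000, seed=0):
    """Estimate P(queue reaches L before emptying, starting from 1 customer)
    in a stable M/M/1 queue (lam < mu), by importance sampling under the
    interchanged rates lam <-> mu (a fixed tilt, unlike the paper's
    adaptively estimated minimum cross-entropy tilt)."""
    rng = random.Random(seed)
    p = lam / (lam + mu)      # original probability of an arrival (up step)
    pt = mu / (lam + mu)      # tilted probability: rates interchanged
    total = 0.0
    for _ in range(n_samples):
        x, w = 1, 1.0         # queue length, accumulated likelihood ratio
        while 0 < x < L:
            if rng.random() < pt:            # up step under the tilted law
                w *= p / pt
                x += 1
            else:                            # down step (a departure)
                w *= (1 - p) / (1 - pt)
                x -= 1
        if x == L:
            total += w        # unbiased: E_tilt[w * 1{hit L}] = P_orig
    return total / n_samples

# gambler's-ruin exact value for comparison: (mu/lam - 1)/((mu/lam)**L - 1)
print(overflow_prob_is(1.0, 2.0, 20))   # ~9.5e-7 for lam=1, mu=2, L=20
```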
Abstract:
A detailed mathematical analysis of Tsallis' q = 1/2 non-extensive maximum entropy distribution is undertaken. The analysis is based upon the splitting of such a distribution into two orthogonal components. One of the components corresponds to the minimum norm solution of the problem posed by the fulfillment of the a priori conditions on the given expectation values. The remaining component takes care of the normalization constraint and is the projection of a constant onto the null space of the "expectation-values transformation".
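The splitting has a direct finite-dimensional analogue that is easy to verify numerically. The sketch below is illustrative only (random constraint matrix, invented dimensions): the minimum-norm component solves the expectation-value constraints, and the normalization component is the projection of a constant onto the null space of the constraint operator.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 8))     # 3 expectation-value constraints, 8 states
b = rng.normal(size=3)          # prescribed expectation values

x_min = np.linalg.pinv(A) @ b   # minimum-norm solution of the constraints
P_null = np.eye(8) - np.linalg.pinv(A) @ A    # projector onto null(A)
x_norm = P_null @ np.ones(8)    # a constant projected onto the null space

print(np.allclose(A @ x_min, b))          # constraints are met
print(np.allclose(x_min @ x_norm, 0.0))   # the two components are orthogonal
```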
Abstract:
A maximum entropy statistical treatment of an inverse problem concerning frame theory is presented. The problem arises from the fact that a frame is an overcomplete set of vectors that defines a mapping with no unique inverse. Although any vector in the concomitant space can be expressed as a linear combination of frame elements, the coefficients of the expansion are not unique. Frame theory guarantees the existence of a set of coefficients which is “optimal” in a minimum norm sense. We show here that these coefficients are also “optimal” from a maximum entropy viewpoint.
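In finite dimensions the minimum-norm statement is readily checked: the Moore-Penrose pseudoinverse of the synthesis operator (equivalently, analysis with the canonical dual frame) produces the minimum-norm coefficients. A small numerical sketch, with an invented random frame:

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.normal(size=(3, 5))     # synthesis operator: 5 frame vectors in R^3
v = rng.normal(size=3)

c = np.linalg.pinv(F) @ v       # minimum-norm expansion coefficients
print(np.allclose(F @ c, v))    # v is reconstructed from the frame

# adding any null-space direction can only increase the coefficient norm,
# since c is orthogonal to null(F)
z = (np.eye(5) - np.linalg.pinv(F) @ F) @ np.eye(5)[0]
print(np.linalg.norm(c) <= np.linalg.norm(c + z))
```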
Abstract:
Neuronal dynamics are fundamentally constrained by the underlying structural network architecture, yet many of the details of this synaptic connectivity are still unknown, even in neuronal cultures in vitro. Here we extend a previous approach based on information theory, the Generalized Transfer Entropy, to the reconstruction of connectivity in simulated neuronal networks of both excitatory and inhibitory neurons. We show that, due to the model-free nature of the developed measure, both kinds of connections can be reliably inferred if the average firing rate between synchronous burst events exceeds a small minimum frequency. Furthermore, we suggest, based on systematic simulations, that even lower spontaneous inter-burst rates could be raised to meet the requirements of our reconstruction algorithm by applying a weak, spatially homogeneous stimulation to the entire network. By combining multiple recordings of the same in silico network before and after pharmacologically blocking inhibitory synaptic transmission, we then show how it becomes possible to infer with high confidence the excitatory or inhibitory nature of each individual neuron.
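To fix ideas about the underlying quantity, here is a plain order-1 transfer entropy estimator for binary time series, far simpler than the Generalized Transfer Entropy developed in the study; the driving example at the end is invented.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Order-1 transfer entropy TE(X -> Y), in bits, for binary series:
    TE = sum p(y1,y0,x0) * log2[ p(y1|y0,x0) / p(y1|y0) ]."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n                                          # p(y1,y0,x0)
        p_y0x0 = sum(v for (a, b, d), v in triples.items()
                     if b == y0 and d == x0) / n                 # p(y0,x0)
        p_y0 = sum(v for (a, b, d), v in triples.items()
                   if b == y0) / n                               # p(y0)
        p_y1y0 = sum(v for (a, b, d), v in triples.items()
                     if a == y1 and b == y0) / n                 # p(y1,y0)
        te += p_joint * np.log2((p_joint / p_y0x0) / (p_y1y0 / p_y0))
    return te

rng = np.random.default_rng(3)
x = rng.integers(0, 2, 10_000)
y = np.roll(x, 1) ^ (rng.random(10_000) < 0.1).astype(int)  # y follows x
print(transfer_entropy(x, y))   # clearly positive: x drives y
print(transfer_entropy(y, x))   # near zero: x is i.i.d.
```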
Abstract:
An alternative blind deconvolution algorithm for white-noise driven minimum phase systems is presented and verified by computer simulation. This algorithm uses a cost function based on a novel idea: variance approximation and series decoupling (VASD), and suggests that not all autocorrelation function values are necessary to implement blind deconvolution.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
In this paper we study the problem of blind deconvolution. Our analysis is based on the algorithm of Chan and Wong [2], which popularized the use of sparse gradient priors via total variation. We use this algorithm because many methods in the literature are essentially adaptations of this framework. The algorithm is an iterative alternating energy minimization in which, at each step, either the sharp image or the blur function is reconstructed. Recent work of Levin et al. [14] showed that any algorithm that tries to minimize that same energy would fail, as the desired solution has a higher energy than the no-blur solution, in which the sharp image is the blurry input and the blur is a Dirac delta. However, experimentally one can observe that Chan and Wong's algorithm converges to the desired solution even when initialized with the no-blur one. We provide both analysis and experiments to resolve this apparent paradox. We find that both claims are right. The key to understanding how this is possible lies in the details of Chan and Wong's implementation and in how seemingly harmless choices have dramatic effects. Our analysis reveals that the delayed scaling (normalization) in the iterative step of the blur kernel is fundamental to the convergence of the algorithm. This results in a procedure that eludes the no-blur solution, despite it being a global minimum of the original energy. We introduce an adaptation of this algorithm and show that, in spite of its extreme simplicity, it is very robust and achieves a performance comparable to the state of the art.
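The update structure under discussion can be sketched in one dimension. The toy below is not the authors' method: it omits the total-variation prior, uses invented step sizes, and makes no claim about recovering the true kernel; it only shows where the delayed normalization of the blur kernel sits in the alternating iteration.

```python
import numpy as np

rng = np.random.default_rng(2)
u_true = np.zeros(64)
u_true[[10, 30, 45]] = [1.0, 2.0, 1.5]          # sparse "sharp" signal
k_true = np.array([0.1, 0.25, 0.3, 0.25, 0.1])  # unknown blur kernel
f = np.convolve(u_true, k_true)                 # observed blurry data

u = f[:64].copy()             # initialize with the (cropped) blurry input
k = np.ones(5) / 5            # flat initial kernel
step_u, step_k = 0.05, 0.01   # invented step sizes
for _ in range(500):
    r = np.convolve(u, k) - f
    u -= step_u * np.correlate(r, k, mode='valid')   # descend in the image
    r = np.convolve(u, k) - f
    k -= step_k * np.correlate(r, u, mode='valid')   # descend in the kernel
    k = np.clip(k, 0.0, None)
    k /= k.sum()              # the *delayed* normalization: applied after
                              # the gradient step, not enforced within it
print(np.round(k, 3))         # inspect the kernel estimate
```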
Abstract:
In this paper we propose a fast adaptive Importance Sampling method for the efficient simulation of buffer overflow probabilities in queueing networks. The method comprises three stages. First we estimate the minimum Cross-Entropy tilting parameter for a small buffer level; next, we use this as a starting value for the estimation of the optimal tilting parameter for the actual (large) buffer level; finally, the tilting parameter just found is used to estimate the overflow probability of interest. We recognize three distinct properties of the method which together explain why the method works well; we conjecture that they hold for quite general queueing networks. Numerical results support this conjecture and demonstrate the high efficiency of the proposed algorithm.
Abstract:
In this paper we apply a new approach to determining a priority vector for the pairwise comparison matrix, which plays an important role in Decision Theory. The divergence between the pairwise comparison matrix A and the consistent matrix B defined by the priority vector is measured with the Kullback-Leibler relative entropy function. Minimizing this divergence leads to a convex program in the case of a complete matrix, and to a fixed-point problem in the case of an incomplete matrix. The priority vector minimizing the divergence also has the property that the difference between the sum of the elements of A and the sum of the elements of B is exactly n times the minimum of the divergence function, where n is the dimension of the problem. Thus, for two reasons, the value of the minimum of the divergence function may serve as a measure of the inconsistency of the matrix A.
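A hedged sketch of the complete-matrix case: in the log variables y_i = log w_i the generalized Kullback-Leibler divergence between A and the consistent matrix b_ij = w_i/w_j is convex, so a generic solver suffices. The example matrix is invented; a consistent input recovers its priority vector exactly.

```python
import numpy as np
from scipy.optimize import minimize

def priority_vector(A):
    """Minimize D(w) = sum_ij [ a_ij*log(a_ij/b_ij) - a_ij + b_ij ],
    with b_ij = w_i/w_j, over the priority vector w (complete matrix)."""
    n = A.shape[0]

    def divergence(y):                       # y_i = log w_i
        B = np.exp(y[:, None] - y[None, :])  # b_ij = w_i / w_j
        return np.sum(A * np.log(A / B) - A + B)

    w = np.exp(minimize(divergence, np.zeros(n)).x)
    return w / w.sum()

# invented, fully consistent 3x3 comparison matrix with w = (4/7, 2/7, 1/7)
A = np.array([[1.0, 2.0, 4.0],
              [0.5, 1.0, 2.0],
              [0.25, 0.5, 1.0]])
print(priority_vector(A))   # -> approx [0.571, 0.286, 0.143], divergence 0
```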
Abstract:
In a recent paper [1], Reis showed that both extremum principles of entropy production rate, which are often used in the study of complex systems, are corollaries of the Constructal Law. In fact, both follow from the maximization of overall system conductivities under appropriate constraints. In this way, the maximum rate of entropy production (MEP) occurs when all the forces in the system are kept constant, while the minimum rate of entropy production (mEP) occurs when all the currents that cross the system are kept constant. In this paper it is shown that the so-called principle of "minimum energy expenditure", which is often invoked to explain many morphologic features of biological systems, and of inanimate systems as well, is also a corollary of Bejan's Constructal Law [2]. Following the general proof, two cases, namely the scaling laws of human vascular systems and of river basins, are discussed as illustrations from the living and the inanimate sides, respectively.