31 results for APPROXIMATE ENTROPY
Abstract:
Local computation in join trees or acyclic hypertrees has been shown to be linked to a particular algebraic structure, called valuation algebra. There are many models of this algebraic structure, ranging from probability theory to numerical analysis, relational databases and various classical and non-classical logics. It turns out that many interesting models of valuation algebras may be derived from semiring-valued mappings. In this paper we study how valuation algebras are induced by semirings and how the structure of the valuation algebra is related to the algebraic structure of the semiring. In particular, c-semirings with idempotent multiplication induce idempotent valuation algebras and therefore permit particularly efficient architectures for local computation. Also important are semirings whose multiplicative semigroup is embedded in a union of groups. They induce valuation algebras with a partially defined division. For these valuation algebras, the well-known architectures for Bayesian networks apply. We also extend the general computational framework to allow derivation of bounds and approximations when exact computation is not feasible.
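The core construction, valuations as semiring-valued mappings closed under combination and marginalization, can be illustrated with a small sketch. The choice of semiring (here the tropical (min, +) semiring), the variable names and the tables below are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of a semiring-induced valuation algebra, assuming a
# (min, +) "tropical" semiring; the paper's construction is more general.
from itertools import product

class Semiring:
    def __init__(self, add, mul, zero, one):
        self.add, self.mul, self.zero, self.one = add, mul, zero, one

# Tropical semiring: "addition" is min, "multiplication" is +.
tropical = Semiring(add=min, mul=lambda a, b: a + b, zero=float("inf"), one=0.0)

class Valuation:
    """A semiring-valued mapping phi: configurations of `domain` -> semiring values."""
    def __init__(self, semiring, domain, frames, table):
        self.sr, self.domain, self.frames, self.table = semiring, tuple(domain), frames, table

    def combine(self, other):
        """Combination: pointwise semiring multiplication on the joined domain."""
        dom = tuple(dict.fromkeys(self.domain + other.domain))
        table = {}
        for cfg in product(*(self.frames[v] for v in dom)):
            assignment = dict(zip(dom, cfg))
            a = self.table[tuple(assignment[v] for v in self.domain)]
            b = other.table[tuple(assignment[v] for v in other.domain)]
            table[cfg] = self.sr.mul(a, b)
        return Valuation(self.sr, dom, self.frames, table)

    def marginalize(self, keep):
        """Marginalization: semiring addition over the eliminated variables."""
        kept = tuple(v for v in self.domain if v in keep)
        table = {}
        for cfg, val in self.table.items():
            key = tuple(c for v, c in zip(self.domain, cfg) if v in kept)
            table[key] = self.sr.add(table.get(key, self.sr.zero), val)
        return Valuation(self.sr, kept, self.frames, table)

frames = {"X": (0, 1), "Y": (0, 1)}
phi = Valuation(tropical, ("X", "Y"), frames, {(0, 0): 1.0, (0, 1): 3.0, (1, 0): 2.0, (1, 1): 0.5})
psi = Valuation(tropical, ("Y",), frames, {(0,): 0.2, (1,): 4.0})
# Combine, then marginalize to {X}: minimum cost over the eliminated variable Y.
print(phi.combine(psi).marginalize({"X"}).table)
```

Choosing (+, x) over probabilities instead of (min, +) would reproduce the familiar sum-product setting, which is the sense in which different semirings induce different valuation algebras.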
Abstract:
A silicon implementation of the Approximate Rotations algorithm capable of carrying the computational load of algorithms such as QRD and SVD, within the real-time realisation of applications such as Adaptive Beamforming, is described. A modification to the original Approximate Rotations algorithm to simplify the method of optimal angle selection is proposed. Analysis shows that fewer iterations of the Approximate Rotations algorithm are required compared with the conventional CORDIC algorithm to achieve similar degrees of accuracy. The silicon design studies undertaken provide direct practical evidence of superior performance with the Approximate Rotations algorithm, requiring approximately 40% of the total computation time of the conventional CORDIC algorithm, for a similar silicon area cost. © 2004 IEEE.
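As a rough illustration of the underlying idea of restricting rotation angles to cheaply implementable values, the sketch below applies Givens-style rotations whose tangents are powers of two. The angle-selection rule and the scaling used here are simplifying assumptions for illustration, not the paper's exact Approximate Rotations algorithm or its hardware realisation.

```python
# Sketch: a Givens-style rotation whose tangent is restricted to powers of two,
# in the spirit of "approximate rotations" (illustrative only).
import math

def approximate_givens(a, b, k_max=8):
    """Return (c, s) with tan(theta) = +/- 2^-k chosen to best annihilate b against a."""
    if b == 0:
        return 1.0, 0.0
    target = math.atan2(b, a)                     # exact angle that would zero b
    sign = 1.0 if target >= 0 else -1.0
    k = min(range(k_max + 1), key=lambda i: abs(abs(target) - math.atan(2.0 ** -i)))
    t = sign * 2.0 ** -k                          # restricted tangent
    c = 1.0 / math.sqrt(1.0 + t * t)
    return c, c * t

# One approximate rotation applied to a 2-vector; iterating drives b toward 0.
a, b = 3.0, 1.0
for _ in range(4):
    c, s = approximate_givens(a, b)
    a, b = c * a + s * b, -s * a + c * b
    print(round(a, 4), round(b, 4))
```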
Abstract:
The standard local density approximation and generalized gradient approximations fail to properly describe the dissociation of an electron pair bond, yielding large errors (on the order of 50 kcal/mol) at long bond distances. To remedy this failure, a self-consistent Kohn-Sham (KS) method is proposed with the exchange-correlation (xc) energy and potential depending on both occupied and virtual KS orbitals. The xc energy functional of Buijse and Baerends [Mol. Phys. 100, 401 (2002); Phys. Rev. Lett. 87, 133004 (2001)] is employed, which, based on an ansatz for the xc-hole amplitude, is able to reproduce the important dynamical and nondynamical effects of Coulomb correlation through the efficient use of virtual orbitals. Self-consistent calculations require the corresponding xc potential to be obtained, to which end the optimized effective potential (OEP) method is used within the common energy denominator approximation for the static orbital Green's function. The problem of the asymptotic divergence of the xc potential of the OEP when a finite number of virtual orbitals is used is addressed. The self-consistent calculations reproduce very well the entire H₂ potential curve, describing correctly the gradual buildup of strong left-right correlation in stretched H₂. (C) 2003 American Institute of Physics.
Abstract:
Life science research aims to continuously improve the quality and standard of human life. One of the major challenges in this area is to maintain food safety and security. A number of image processing techniques have been used to investigate the quality of food products. In this paper, we propose a new algorithm to effectively segment connected grains so that each of them can be inspected in a later processing stage. One family of existing segmentation methods is based on the idea of watersheding, and it has shown promising results in practice. However, due to over-segmentation, this technique performs poorly in situations such as inhomogeneous backgrounds and connected targets. To address this problem, we present a combination of two classical techniques. In the first step, a mean shift filter is used to eliminate the inhomogeneous background, with entropy serving as the convergence criterion. Secondly, a color gradient algorithm is used to detect the most significant edges, and a marker-based watershed transform is applied to segment the cluttered objects remaining after the previous processing stages. The proposed framework balances execution time, usability, efficiency and segmentation outcome in analyzing ring die pellets. The experimental results demonstrate that the proposed approach is effective and robust.
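A rough OpenCV/NumPy sketch of a pipeline of this kind is given below: mean shift smoothing monitored by an entropy-based stopping check, a color gradient, and a marker-controlled watershed. The parameter values and the marker-extraction heuristic are illustrative assumptions, not the authors' settings.

```python
# Sketch of the described pipeline: mean shift filtering monitored by image
# entropy, a color gradient, and a marker-controlled watershed (OpenCV/NumPy).
import cv2
import numpy as np

def image_entropy(gray):
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def segment(bgr, sp=10, sr=20, max_iters=5, eps=1e-2):
    # 1) Repeated mean shift filtering until the entropy change is small.
    prev_h = image_entropy(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY))
    smoothed = bgr
    for _ in range(max_iters):
        smoothed = cv2.pyrMeanShiftFiltering(smoothed, sp, sr)
        h = image_entropy(cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY))
        if abs(prev_h - h) < eps:
            break
        prev_h = h

    # 2) Color gradient: maximum per-channel Sobel magnitude.
    grads = [np.hypot(cv2.Sobel(c, cv2.CV_64F, 1, 0), cv2.Sobel(c, cv2.CV_64F, 0, 1))
             for c in cv2.split(smoothed)]
    gradient = np.max(np.stack(grads), axis=0)
    gradient = cv2.normalize(gradient, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # 3) Markers from the distance transform of a foreground mask, then watershed.
    gray = cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    _, peaks = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
    _, markers = cv2.connectedComponents(peaks.astype(np.uint8))
    labels = cv2.watershed(cv2.cvtColor(gradient, cv2.COLOR_GRAY2BGR), markers)
    return labels  # label image; watershed boundaries are marked with -1

# labels = segment(cv2.imread("pellets.png"))
```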
Abstract:
Approximate execution is a viable technique for energy-constrained environments, provided that applications have the mechanisms to produce outputs of the highest possible quality within the given energy budget. We introduce a framework for energy-constrained execution with controlled and graceful quality loss. A simple programming model allows users to express the relative importance of computations for the quality of the end result, as well as minimum quality requirements. The significance-aware runtime system uses an application-specific analytical energy model to identify the degree of concurrency and approximation that maximizes quality while meeting user-specified energy constraints. Evaluation on a dual-socket 8-core server shows that the proposed framework predicts the optimal configuration with high accuracy, enabling energy-constrained executions that result in significantly higher quality compared to loop perforation, a compiler approximation technique.
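A toy illustration of the runtime's decision step is sketched below: enumerate candidate configurations (degree of concurrency, approximation level), keep those whose predicted energy fits the budget, and pick the one with the highest predicted quality. The analytical energy and quality models here are invented placeholders, not the paper's models.

```python
# Toy sketch of significance-aware configuration selection under an energy budget.
# Both models below are invented placeholders, not the paper's models.

def predicted_energy(threads, approx_ratio, work=1e9, p_static=10.0, e_per_op=2e-8):
    ops = work * (1.0 - 0.6 * approx_ratio)          # approximation skips work
    runtime = ops / (threads * 1e8)                   # seconds, assuming ideal scaling
    return p_static * runtime + e_per_op * ops        # Joules

def predicted_quality(approx_ratio):
    return 1.0 - 0.3 * approx_ratio ** 2              # quality loss grows with approximation

def choose_configuration(budget_joules, max_threads=16):
    best = None
    for threads in range(1, max_threads + 1):
        for approx in (i / 10.0 for i in range(11)):  # 0.0 .. 1.0 approximation
            if predicted_energy(threads, approx) <= budget_joules:
                q = predicted_quality(approx)
                if best is None or q > best[0]:
                    best = (q, threads, approx)
    return best  # (quality, threads, approximation ratio) or None if infeasible

print(choose_configuration(budget_joules=60.0))
```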
Abstract:
Credal nets are probabilistic graphical models which extend Bayesian nets to cope with sets of distributions. An algorithm for approximate credal network updating is presented. The problem in its general formulation is a multilinear optimization task, which can be linearized by an appropriate rule for fixing all the local models apart from those of a single variable. This simple idea can be iterated and quickly leads to accurate inferences. A transformation is also derived to reduce decision making in credal networks based on the maximality criterion to updating. The decision task is proved to have the same complexity as standard inference, being NP^PP-complete for general credal nets and NP-complete for polytrees. Similar results are derived for the E-admissibility criterion. Numerical experiments confirm the good performance of the method.
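For intuition about why credal updating is an optimization over local models, here is a tiny brute-force example that enumerates the extreme points of the local credal sets of a two-node net to bound a posterior. The numbers are made up, and exhaustive enumeration is used only for illustration; it is not the paper's iterative linearization.

```python
# Tiny credal net A -> B with interval-valued local models. Bounds on
# P(A=1 | B=1) are obtained by brute-force enumeration of the extreme points of
# each local credal set (illustrative numbers; not the paper's algorithm).
from itertools import product

p_a1_extremes = (0.3, 0.5)              # credal set for P(A=1)
p_b1_given_a0 = (0.1, 0.2)              # credal set for P(B=1 | A=0)
p_b1_given_a1 = (0.6, 0.9)              # credal set for P(B=1 | A=1)

posteriors = []
for pa1, pb1_a0, pb1_a1 in product(p_a1_extremes, p_b1_given_a0, p_b1_given_a1):
    joint_a1_b1 = pa1 * pb1_a1
    joint_a0_b1 = (1.0 - pa1) * pb1_a0
    posteriors.append(joint_a1_b1 / (joint_a1_b1 + joint_a0_b1))

print(f"P(A=1 | B=1) in [{min(posteriors):.3f}, {max(posteriors):.3f}]")
```

On a net this small the handful of vertex combinations can be enumerated exactly; the point of approximate algorithms such as the one above is to avoid this combinatorial enumeration on realistic networks.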
Abstract:
Credal networks generalize Bayesian networks by relaxing the requirement of precision of probabilities. Credal networks are considerably more expressive than Bayesian networks, but this makes belief updating NP-hard even on polytrees. We develop a new efficient algorithm for approximate belief updating in credal networks. The algorithm is based on an important representation result we prove for general credal networks: that any credal network can be equivalently reformulated as a credal network with binary variables; moreover, the transformation, which is considerably more complex than in the Bayesian case, can be implemented in polynomial time. The equivalent binary credal network is then updated by L2U, a loopy approximate algorithm for binary credal networks. Overall, we generalize L2U to non-binary credal networks, obtaining a scalable algorithm for the general case, which is approximate only because of its loopy nature. The accuracy of the inferences with respect to other state-of-the-art algorithms is evaluated by extensive numerical tests.
Abstract:
Credal nets generalize Bayesian nets by relaxing the requirement of precision of probabilities. Credal nets are considerably more expressive than Bayesian nets, but this makes belief updating NP-hard even on polytrees. We develop a new efficient algorithm for approximate belief updating in credal nets. The algorithm is based on an important representation result we prove for general credal nets: that any credal net can be equivalently reformulated as a credal net with binary variables; moreover, the transformation, which is considerably more complex than in the Bayesian case, can be implemented in polynomial time. The equivalent binary credal net is updated by L2U, a loopy approximate algorithm for binary credal nets. Thus, we generalize L2U to non-binary credal nets, obtaining an accurate and scalable algorithm for the general case, which is approximate only because of its loopy nature. The accuracy of the inferences is evaluated by empirical tests.
Abstract:
Background: The identification of pre-clinical microvascular damage in hypertension by non-invasive techniques has proved frustrating for clinicians. This proof of concept study investigated whether entropy, a novel summary measure for characterizing blood velocity waveforms, is altered in participants with hypertension and may therefore be useful in risk stratification.
Methods: Doppler ultrasound waveforms were obtained from the carotid and retrobulbar circulation in 42 participants with uncomplicated grade 1 hypertension (mean systolic/diastolic blood pressure (BP) 142/92 mmHg), and 26 healthy controls (mean systolic/diastolic BP 116/69 mmHg). Mean wavelet entropy was derived from flow-velocity data and compared with traditional haemodynamic measures of microvascular function, namely the resistive and pulsatility indices.
Results: Entropy was significantly higher in control participants in the central retinal artery (CRA) (differential mean 0.11 (standard error 0.05) cm s⁻¹, CI 0.009 to 0.219, p = 0.017) and ophthalmic artery (0.12 (0.05), CI 0.004 to 0.215, p = 0.04). In comparison, the resistive index (0.12 (0.05), CI 0.005 to 0.226, p = 0.029) and pulsatility index (0.96 (0.38), CI 0.19 to 1.72, p = 0.015) showed significant differences between groups in the CRA alone. Regression analysis indicated that entropy was significantly influenced by age and systolic blood pressure (r values 0.4-0.6). None of the measures were significantly altered in the larger conduit vessel.
Conclusion: This is the first application of entropy to human blood velocity waveform analysis and shows that this new technique has the ability to discriminate health from early hypertensive disease, thereby promoting the early identification of cardiovascular disease in a young hypertensive population.
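A rough sketch of how a wavelet entropy summary can be computed from a velocity waveform follows, using a discrete wavelet decomposition (PyWavelets). The wavelet family, decomposition depth and the synthetic test signal are assumptions for illustration, not the study's processing choices.

```python
# Sketch: wavelet entropy of a velocity waveform as the Shannon entropy of the
# relative energy distribution across wavelet decomposition levels (PyWavelets).
# Wavelet family, level count and the synthetic signal are illustrative choices.
import numpy as np
import pywt

def wavelet_entropy(signal, wavelet="db4", level=5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()                     # relative wavelet energy per level
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Synthetic "velocity waveform": a pulsatile base frequency plus noise.
t = np.linspace(0, 4, 2048)
velocity = 30 + 20 * np.sin(2 * np.pi * 1.2 * t) + 2 * np.random.randn(t.size)
print(f"wavelet entropy: {wavelet_entropy(velocity):.3f}")
```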
Abstract:
Embedded memories account for a large fraction of the overall silicon area and power consumption in modern SoCs. While embedded memories are typically realized with SRAM, alternative solutions, such as embedded dynamic memories (eDRAM), can provide higher density and/or reduced power consumption. One major challenge that impedes the widespread adoption of eDRAM is the need for frequent refreshes, which potentially reduce the availability of the memory in periods of high activity and also consume a significant amount of power. Reducing the refresh rate can reduce this power overhead, but if refreshes are not performed in a timely manner, some cells may lose their content, potentially resulting in memory errors. In this paper, we consider extending the refresh period of gain-cell based dynamic memories beyond the worst-case point of failure, assuming that the resulting errors can be tolerated when the use-cases are in the domain of inherently error-resilient applications. For example, we observe that for various data mining applications, a large number of memory failures can be accepted with tolerable imprecision in output quality. In particular, our results indicate that by allowing as many as 177 errors in a 16 kB memory, the maximum loss in output quality is 11%. We use this failure limit to study the impact of relaxing reliability constraints on memory availability and retention power for different technologies.
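The trade-off can be sketched numerically: given an assumed per-cell retention-time distribution, find the longest refresh period whose expected failure count in a 16 kB array stays within an error budget such as the 177 errors cited above. The lognormal distribution and its parameters below are assumptions for illustration only, not measured device data.

```python
# Toy model: expected retention failures in a 16 kB (131072-bit) gain-cell array
# as a function of the refresh period, under an ASSUMED lognormal retention-time
# distribution (parameters are illustrative, not measured silicon data).
import math

N_BITS = 16 * 1024 * 8
MU, SIGMA = math.log(200e-6), 0.35        # assumed: median retention time 200 us

def failure_probability(refresh_period_s):
    """P(retention time < refresh period) for one cell, lognormal CDF."""
    z = (math.log(refresh_period_s) - MU) / (SIGMA * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))

def expected_failures(refresh_period_s):
    return N_BITS * failure_probability(refresh_period_s)

def longest_period_within(budget_errors, lo=1e-6, hi=1e-3, iters=60):
    """Bisect for the longest refresh period keeping expected failures <= budget."""
    for _ in range(iters):
        mid = math.sqrt(lo * hi)
        if expected_failures(mid) <= budget_errors:
            lo = mid
        else:
            hi = mid
    return lo

print(f"refresh period for <=177 expected errors: {longest_period_within(177) * 1e6:.1f} us")
```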
Abstract:
Measurements of explosive nucleosynthesis yields in core-collapse supernovae provide tests for explosion models. We investigate constraints on explosive conditions derivable from measured amounts of nickel and iron after radioactive decays using nucleosynthesis networks with parameterized thermodynamic trajectories. The Ni/Fe ratio is for most regimes dominated by the production ratio of ⁵⁸Ni/(⁵⁴Fe + ⁵⁶Ni), which tends to grow with higher neutron excess and with higher entropy. For SN 2012ec, a supernova (SN) that produced a Ni/Fe ratio of 3.4 ± 1.2 times solar, we find that burning of a fuel with neutron excess η ≈ 6 × 10⁻³ is required. Unless the progenitor metallicity is over five times solar, the only layer in the progenitor with such a neutron excess is the silicon shell. SNe producing large amounts of stable nickel thus suggest that this deep-lying layer can be, at least partially, ejected in the explosion. We find that common spherically symmetric models of M_ZAMS ≲ 13 M_⊙ stars exploding with a delay time of less than one second (M_cut < 1.5 M_⊙) are able to achieve such silicon-shell ejection. SNe that produce solar or subsolar Ni/Fe ratios, such as SN 1987A, must instead have burnt and ejected only oxygen-shell material, which allows a lower limit to the mass cut to be set. Finally, we find that the extreme Ni/Fe value of 60-75 times solar derived for the Crab cannot be reproduced by any realistic entropy burning outside the iron core, and neutrino-neutronization obtained in electron capture models remains the only viable explanation.
Abstract:
Cascade control is one of the routinely used control strategies in industrial processes because it can dramatically improve the performance of single-loop control, reducing both the maximum deviation and the integral error of the disturbance response. Currently, many control performance assessment methods for cascade control loops are developed based on the assumption that all the disturbances follow a Gaussian distribution. In practice, however, several disturbance sources act on the manipulated variable, or the upstream process exhibits nonlinear behavior. In this paper, a general and effective index for the performance assessment of cascade control systems subject to disturbances of unknown distribution is proposed. As in minimum variance control (MVC) design, the output variances of the primary and the secondary loops are decomposed into a cascade-invariant and a cascade-dependent term, but the ARMA model for the cascade control loop is estimated based on minimum entropy, instead of the minimum mean squared error, to handle non-Gaussian disturbances. Unlike the MVC index, an innovative control performance index is given based on information theory and the minimum entropy criterion. The index is informative and in agreement with the expected control knowledge. To demonstrate the wide applicability and effectiveness of the minimum entropy cascade control index, a simulation problem and a cascade control case from an oil refinery are presented. A comparison with MVC-based cascade control is also included.
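To make an entropy-based index concrete, the NumPy-only sketch below compares the entropy of a loop's measured output with the entropy of the delay-invariant part reconstructed from a fitted AR model (the analogue of the feedback-invariant term in the MVC benchmark). The AR fit, the histogram entropy estimator and the index definition are simplifications assumed for illustration, not the paper's method.

```python
# Rough sketch of a minimum-entropy-style performance index from routine
# closed-loop output data: fit an AR model, recover innovations, form the
# delay-invariant part from the first b impulse-response terms, and compare
# (differential) entropies. All estimators here are illustrative simplifications.
import numpy as np

def fit_ar(y, p=10):
    """Least-squares AR(p) fit; returns coefficients a and the innovation series e."""
    Y = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
    target = y[p:]
    a, *_ = np.linalg.lstsq(Y, target, rcond=None)
    return a, target - Y @ a

def impulse_response(a, n):
    """First n MA (psi) weights of the AR model y_t = sum_k a_k y_{t-k} + e_t."""
    psi = np.zeros(n)
    psi[0] = 1.0
    for i in range(1, n):
        psi[i] = sum(a[k] * psi[i - k - 1] for k in range(min(len(a), i)))
    return psi

def hist_entropy(x, bins=50):
    """Histogram estimate of differential entropy (nats)."""
    counts, edges = np.histogram(x, bins=bins, density=True)
    widths = np.diff(edges)
    p, w = counts[counts > 0], widths[counts > 0]
    return float(-np.sum(p * np.log(p) * w))

def entropy_index(y, delay_b=3, p=10):
    a, e = fit_ar(y, p)
    psi = impulse_response(a, delay_b)
    invariant = np.convolve(e, psi)[:len(e)]      # estimate of the invariant term
    return np.exp(hist_entropy(invariant) - hist_entropy(y))  # ~1 means near-optimal

rng = np.random.default_rng(0)
y = np.convolve(rng.standard_normal(5000), [1.0, 0.8, 0.5, 0.3, 0.2])[:5000]
print(f"entropy-based index: {entropy_index(y):.2f}")
```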
Abstract:
Approximate execution is a viable technique for environments with energy constraints, provided that applications are given the mechanisms to produce outputs of the highest possible quality within the available energy budget. This paper introduces a framework for energy-constrained execution with controlled and graceful quality loss. A simple programming model allows developers to structure the computation in different tasks, and to express the relative importance of these tasks for the quality of the end result. For non-significant tasks, the developer can also supply less costly, approximate versions. The target energy consumption for a given execution is specified when the application is launched. A significance-aware runtime system employs an application-specific analytical energy model to decide how many cores to use for the execution, the operating frequency for these cores, as well as the degree of task approximation, so as to maximize the quality of the output while meeting the user-specified energy constraints. Evaluation on a dual-socket 16-core Intel platform using 9 benchmark kernels shows that the proposed framework picks the optimal configuration with high accuracy. Also, a comparison with loop perforation (a well-known compile-time approximation technique), shows that the proposed framework results in significantly higher quality for the same energy budget.
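A minimal sketch of the kind of programming model described is given below: tasks tagged with a significance flag and optional cheaper approximate variants, with a toy "runtime" falling back to approximate versions of non-significant tasks when the remaining energy budget is tight. The API names, costs and fallback policy are invented for illustration, not the framework's actual interface.

```python
# Minimal sketch of a significance-aware task model: each task carries a
# significance flag and, optionally, a cheaper approximate variant; a toy
# runtime runs approximate variants of non-significant tasks when the
# remaining energy budget would otherwise be exceeded. All names and costs
# are invented for illustration.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Task:
    accurate: Callable[[], float]
    cost: float                               # assumed energy cost of the accurate version
    significant: bool = True
    approximate: Optional[Callable[[], float]] = None
    approx_cost: float = 0.0

def run_with_budget(tasks, budget):
    results, spent = [], 0.0
    for t in tasks:
        remaining_accurate_cost = t.cost + sum(x.cost for x in tasks[len(results) + 1:])
        if (not t.significant and t.approximate is not None
                and spent + remaining_accurate_cost > budget):
            results.append(t.approximate())   # degrade gracefully
            spent += t.approx_cost
        else:
            results.append(t.accurate())
            spent += t.cost
    return results, spent

tasks = [
    Task(accurate=lambda: sum(i * i for i in range(10_000)) / 1e7, cost=5.0),
    Task(accurate=lambda: sum(i * i for i in range(10_000)) / 1e7, cost=5.0,
         significant=False,
         approximate=lambda: (10_000 ** 3 / 3) / 1e7, approx_cost=0.5),
]
print(run_with_budget(tasks, budget=6.0))
```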
Abstract:
Wavelet entropy assesses the degree of order or disorder in signals and presents this complex information in a simple metric. Relative wavelet entropy assesses the similarity between the spectral distributions of two signals, again in a simple metric. Wavelet entropy is therefore potentially a very attractive tool for waveform analysis. The ability of this method to track the effects of pharmacologic modulation of vascular function on Doppler blood velocity waveforms was assessed. Waveforms were captured from ophthalmic arteries of 10 healthy subjects at baseline, after the administration of glyceryl trinitrate (GTN) and after two doses of N(G)-nitro-L-arginine methyl ester (L-NAME) to produce vasodilation and vasoconstriction, respectively. Wavelet entropy had a tendency to decrease from baseline in response to GTN, but significantly increased after the administration of L-NAME (mean: 1.60 ± 0.07 after 0.25 mg/kg and 1.72 ± 0.13 after 0.5 mg/kg vs. 1.50 ± 0.10 at baseline, p < 0.05). Relative wavelet entropy showed that the spectral distributions after increasing doses of L-NAME were comparable to baseline (0.07 ± 0.04 and 0.08 ± 0.03, respectively), whereas GTN produced the most dissimilar spectral distribution compared with baseline (0.17 ± 0.08, p = 0.002). Wavelet entropy can detect subtle changes in Doppler blood velocity waveform structure in response to nitric-oxide-mediated changes in arteriolar smooth muscle tone.
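Relative wavelet entropy can be sketched as a Kullback-Leibler-style divergence between the per-level relative wavelet energy distributions of two signals (PyWavelets). The wavelet, level count and synthetic signals below are illustrative assumptions, not the study's processing choices.

```python
# Sketch: relative wavelet entropy between two waveforms, computed as the
# KL divergence between their per-level relative wavelet energy distributions.
# Wavelet choice, level count and the synthetic signals are illustrative.
import numpy as np
import pywt

def relative_energy(signal, wavelet="db4", level=5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

def relative_wavelet_entropy(signal, reference, **kwargs):
    p = relative_energy(signal, **kwargs)
    q = relative_energy(reference, **kwargs)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

t = np.linspace(0, 4, 2048)
baseline = 30 + 20 * np.sin(2 * np.pi * 1.2 * t) + 2 * np.random.randn(t.size)
after_drug = 30 + 15 * np.sin(2 * np.pi * 1.2 * t) + 5 * np.random.randn(t.size)
print(f"relative wavelet entropy vs. baseline: {relative_wavelet_entropy(after_drug, baseline):.3f}")
```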