159 results for Entropy diagrams
Abstract:
Within the framework of the collaboration agreement between the Graphics Group of the Universitat de Girona (Grup de Gràfics de Girona) and the Neuroradiology Group of the Institut de Diagnòstic per la Imatge at the Hospital Universitari Dr. Josep Trueta in Girona, we set out to develop StarViewer, a platform that incorporates the basic techniques of scientific visualization, complementing traditional 2D visualization with 3D visualization that allows patient data to be inspected more efficiently, thereby facilitating diagnosis. This project implements two techniques that will form part of the StarViewer platform. The first objective is to implement a method that eases the visualization and interpretation of single voxel models and fused voxel models; the second is to implement a method based on measures from Information Theory to help the user find the optimal viewpoint for a given model. For the first objective we focus on the Magic Mirrors technique, which allows simultaneous visualization of the voxel model from different viewpoints; for the second, on the concept of excess entropy, an information measure, to determine which viewpoint provides the most information to the user.
Abstract:
There is hardly a case in exploration geology where the studied data do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these "zero data" pose a mathematical challenge for interpretation. We need to start by recognizing that there are zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot coexist with nepheline. Another common essential zero is a North azimuth; however, we can always replace that zero with the value 360°. These are known as "essential zeros", but what can we do with "rounded zeros", which result from values below the detection limit of the equipment? Amalgamation, e.g. adding Na2O and K2O as total alkalis, is one solution, but sometimes we need to differentiate between a sodic and a potassic alteration. Pre-classification into groups requires good knowledge of the distribution of the data and of the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the detection limit of the equipment used will generate spurious distributions, especially in ternary diagrams. The same occurs if we replace the zero values by a small amount using non-parametric or parametric techniques (imputation). The method we propose takes into consideration the well-known relationships between some elements. For example, in copper porphyry deposits there is always a good direct correlation between copper and molybdenum values, but while copper will always be above the detection limit, many of the molybdenum values will be "rounded zeros".
So, we take the lower quartile of the real molybdenum values, establish a regression equation with copper, and then estimate the "rounded" zero values of molybdenum from their corresponding copper values. The method can be applied to any type of data, provided we first establish the correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the "rounded zeros", but one that depends on the value of the other variable.
Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency
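The regression-based imputation described in this abstract can be sketched as follows. This is a minimal illustration only: the variable names, the use of a log-log fit (motivated by the lognormal distributions mentioned above), and the exact quartile rule are assumptions, not the authors' published procedure.

```python
import numpy as np

def impute_rounded_zeros(cu, mo, detection_limit):
    """Estimate below-detection ('rounded zero') Mo values from Cu.

    Fits a regression in log space on the lower quartile of the measured
    Mo values (paired with their Cu values), then predicts Mo for samples
    reported below the detection limit. Each imputed value depends on the
    sample's Cu value rather than being a single fixed replacement.
    """
    cu = np.asarray(cu, dtype=float)
    mo = np.asarray(mo, dtype=float)
    measured = mo >= detection_limit
    # Lower quartile of the real (measured) Mo values
    q1 = np.quantile(mo[measured], 0.25)
    fit_mask = measured & (mo <= q1)
    # Linear regression in log-log space (lognormal-like data)
    slope, intercept = np.polyfit(np.log(cu[fit_mask]), np.log(mo[fit_mask]), 1)
    imputed = mo.copy()
    imputed[~measured] = np.exp(intercept + slope * np.log(cu[~measured]))
    return imputed
```

Note that two samples with different Cu values receive different imputed Mo values, which is the main advantage claimed over fixed-value replacement.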
Abstract:
Differential scanning calorimetry (DSC) was used to study the dehydrogenation processes that take place in three hydrogenated amorphous silicon materials: nanoparticles, polymorphous silicon, and conventional device-quality amorphous silicon. Comparison of DSC thermograms with evolved gas analysis (EGA) has led to the identification of four dehydrogenation processes arising from polymeric chains (A), SiH groups at the surfaces of internal voids (A'), SiH groups at interfaces (B), and in the bulk (C). All of them are slightly exothermic, with enthalpies below 50 meV/H atom, indicating that, after dissociation of any SiH group, most dangling bonds recombine. The kinetics of the three low-temperature processes [with DSC peak temperatures at around 320 (A), 360 (A'), and 430°C (B)] exhibit a kinetic-compensation effect characterized by a linear relationship between the activation entropy and enthalpy, which constitutes their signature. Their Si-H bond-dissociation energies have been determined to be E0(Si-H) = 3.14 (A), 3.19 (A'), and 3.28 eV (B). In these cases it was possible to extract the formation energy E(DB) of the dangling bonds that recombine after Si-H bond breaking [0.97 (A), 1.05 (A'), and 1.12 eV (B)]. It is concluded that E(DB) increases with the degree of confinement and that E(DB) > 1.10 eV for the isolated dangling bond in the bulk. After Si-H dissociation, and for the low-temperature processes, hydrogen is transported in molecular form and only a slight relaxation of the silicon network is promoted. This contrasts with the high-temperature process, for which the diffusion of H in atomic form induces a substantial lattice relaxation that, for the conventional amorphous sample, releases around 600 meV per H atom. It is argued that the density of H-trapping sites in the Si network diminishes during atomic diffusion.
Abstract:
Throughout the history of Electrical Engineering education, vectorial and phasorial diagrams have been used as a fundamental learning tool. At present, computational power has replaced them with long data lists, the result of solving equation systems by means of numerical methods. Diagrams have thus been pushed into the academic background and, although explained theoretically, are not used in a practical way within specific examples. This may work against students' understanding of the complex behavior of electrical power systems. This article proposes a modification of the classical Perrine-Baum diagram construction that allows both a more practical representation and a better understanding of the behavior of a high-voltage electric line under different levels of load. This modification also allows forecasting the line's loading capacity and its eventual obsolescence. In addition, we evaluate the impact of this tool on the learning process, showing comparative undergraduate results over three academic years.
Abstract:
The computational approach to the Hirshfeld [Theor. Chim. Acta 44, 129 (1977)] atom in a molecule is critically investigated, and several difficulties are highlighted. It is shown that these difficulties are mitigated by an alternative, iterative version of the Hirshfeld partitioning procedure. The iterative scheme ensures that the Hirshfeld definition represents a mathematically proper information entropy, allows the Hirshfeld approach to be used for charged molecules, eliminates arbitrariness in the choice of the promolecule, and increases the magnitudes of the charges. The resulting "Hirshfeld-I charges" correlate well with electrostatic potential derived atomic charges.
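The self-consistent iteration at the heart of such a scheme can be illustrated with a deliberately simplified 1-D toy: pro-atom densities are modelled as Gaussians scaled to the current population estimate, and the stockholder weights are iterated until the populations stop changing. Real Hirshfeld-I interpolates between densities of integer-charged atomic ions; the Gaussian shapes and all numbers below are illustrative assumptions.

```python
import numpy as np

def gaussian(x, center, width, dx):
    """Gaussian pro-atom shape, normalized to unit population on the grid."""
    g = np.exp(-((x - center) / width) ** 2)
    return g / (g.sum() * dx)

def hirshfeld_iterative(x, rho, centers, widths, n_init, tol=1e-10, max_iter=500):
    """Toy iterative stockholder partitioning of a 1-D density `rho`.

    Each iteration rebuilds the promolecule from pro-atoms scaled by the
    current populations, then re-integrates the weighted density.
    """
    dx = x[1] - x[0]
    pops = np.array(n_init, dtype=float)
    shapes = [gaussian(x, c, w, dx) for c, w in zip(centers, widths)]
    for _ in range(max_iter):
        proatoms = [p * s for p, s in zip(pops, shapes)]
        promol = np.sum(proatoms, axis=0) + 1e-300  # avoid division by zero
        # Stockholder weights pa/promol partition the molecular density
        new_pops = np.array([(rho * pa / promol).sum() * dx for pa in proatoms])
        if np.max(np.abs(new_pops - pops)) < tol:
            break
        pops = new_pops
    return pops
```

The iteration conserves the total electron count and removes the arbitrariness of a fixed promolecule: the converged populations no longer depend on the neutral-atom starting guess.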
Abstract:
A Carnatic music concert is made up of a sequence of pieces, where each piece corresponds to a particular genre and rāga (melody). Unlike in a western music concert, the artist may be applauded both intra-performance and inter-performance. Most Carnatic music that is archived today corresponds to single audio recordings of entire concerts. The purpose of this paper is to segment such recordings into a sequence of pieces using the characteristic features of applause and music. Spectral flux and spectral entropy change quite significantly from music to applause and vice versa. The characteristics of these features were studied for a subset of concerts. A threshold-based approach was used to segment the recordings into music fragments and applauses. Preliminary results on recordings of 19 concerts from matched microphones show that the EER is about 17% for a resolution of 0.25 seconds. Further, a parameter called CUSUM is estimated for the applause regions. The CUSUM values determine the strength of the applause and are used to characterise the highlights of a concert.
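The spectral-entropy part of such a threshold-based detector can be sketched as below. Applause has a flat, noise-like spectrum (normalized entropy near 1), while music concentrates energy in harmonics (lower entropy). The frame size, hop, and threshold are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def spectral_entropy_frames(signal, frame_len=1024, hop=512):
    """Per-frame spectral entropy of a mono signal, normalized to [0, 1]."""
    ent = []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len] * np.hanning(frame_len)
        psd = np.abs(np.fft.rfft(frame)) ** 2
        p = psd / (psd.sum() + 1e-12)          # spectrum as a distribution
        h = -(p * np.log2(p + 1e-12)).sum()    # Shannon entropy in bits
        ent.append(h / np.log2(len(p)))        # normalize by max entropy
    return np.array(ent)

def label_applause(entropy, threshold=0.8):
    """True for frames whose spectrum is flat enough to look like applause."""
    return entropy > threshold
```

A full detector along the lines of the abstract would combine this with spectral flux and smooth the frame labels before segmenting.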
Abstract:
We investigate on-line prediction of individual sequences. Given a class of predictors, the goal is to predict as well as the best predictor in the class, where the loss is measured by the self information (logarithmic) loss function. The excess loss (regret) is closely related to the redundancy of the associated lossless universal code. Using Shtarkov's theorem and tools from empirical process theory, we prove a general upper bound on the best possible (minimax) regret. The bound depends on certain metric properties of the class of predictors. We apply the bound to both parametric and nonparametric classes of predictors. Finally, we point out a suboptimal behavior of the popular Bayesian weighted average algorithm.
Abstract:
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical {\sc vc} dimension, empirical {\sc vc} entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
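The maximal-discrepancy quantity and its flipped-label equivalence can be made concrete for a small finite class. The sketch below uses 1-D threshold classifiers as an illustrative class (an assumption, not the paper's setting) and checks the identity err_split = 2 * err_flipped - 1 numerically.

```python
import numpy as np

def max_discrepancy(x, y, thresholds):
    """Maximal discrepancy for the class of 1-D threshold classifiers.

    Splits the sample into two halves and returns
        max_f [ err(f, first half) - err(f, second half) ],
    which, up to an affine rescaling, equals the maximal training error
    over the sample with the second-half labels flipped.
    """
    n = len(x) // 2
    best = -np.inf
    for t in thresholds:
        pred = (x >= t).astype(int)
        e1 = np.mean(pred[:n] != y[:n])
        e2 = np.mean(pred[n:2 * n] != y[n:2 * n])
        best = max(best, e1 - e2)
    return best
```

Since a prediction disagrees with a flipped binary label exactly when it agrees with the true one, the error on the flipped sample is (e1 + 1 - e2)/2, so maximizing e1 - e2 is the same optimization as the flipped-label empirical risk computation the abstract mentions.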
Abstract:
The criterion, based on thermodynamic theory, that the climatic system tends to extremize some function has motivated several studies. In particular, special attention has been devoted to the possibility that the climate reaches an extremal rate of planetary entropy production. Since both radiative and material effects contribute to total planetary entropy production, climatic simulations obtained at the extremal rates of total, radiative, or material entropy production are of interest in order to elucidate which of the three extremal assumptions behaves most similarly to current data. In the present paper, these results have been obtained by applying a 2-dimensional (2-Dim) horizontal energy balance box-model with a few independent variables (surface temperature, cloud cover, and material heat fluxes). In addition, climatic simulations for current conditions assuming a fixed cloud cover have been obtained. Finally, sensitivity analyses for both the variable and fixed cloud models have been carried out.
Abstract:
Different climatic simulations have been obtained by using a 2-Dim horizontal energy balance model (EBM), which has been constrained to satisfy several extremal principles on dissipation and convection. Moreover, two versions of the model, with fixed and with variable cloud cover, have been used. The assumption of an extremal type of behaviour for the climatic system can acquire additional support depending on the similarities found with measured data for past conditions as well as with usual projections for possible future scenarios.
Abstract:
Johnson CCD photometry was performed in the two subgroups of the association Cepheus OB3, for selected fields each containing at least one star with previous UBV photoelectric photometry. Photometry for about 1000 stars down to visual magnitude 21 is provided, although the completeness tests show that the sample is complete down to V = 19 mag. Individual errors were assigned to the magnitude and colours for each star. Colour-colour and colour-magnitude diagrams are shown. Astrometric positions of the stars are also given. The reduction procedure is described in full detail.
Abstract:
Our procedure to detect moving groups in the solar neighbourhood (Chen et al., 1997) in the four-dimensional space of the stellar velocity components and age has been improved. The method, which takes advantage of non-parametric estimators of the density distribution to avoid any a priori knowledge of the kinematic properties of these stellar groups, now includes the effect of observational errors in the process of selecting moving group stars, uses a better estimation of the density distribution of the total sample and field stars, and classifies moving group stars using all the available information. It is applied here to an accurately selected sample of early-type stars with known radial velocities and Strömgren photometry. Astrometric data are taken from the HIPPARCOS catalogue (ESA, 1997), which results in an important decrease in the observational errors with respect to ground-based data, and ensures the uniformity of the observed data. Both the improvement of our method and the use of precise astrometric data have allowed us not only to confirm the existence of classical moving groups, but also to detect finer structures that in several cases can be related to kinematic properties of nearby open clusters or associations.
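The non-parametric density estimation underlying this kind of moving-group search can be sketched with a Gaussian-kernel density estimator over the velocity-age space. The fixed isotropic bandwidth below is an illustrative simplification; adaptive kernels and error convolution, as described in the abstract, would be layered on top.

```python
import numpy as np

def gaussian_kde_density(points, query, bandwidth):
    """Gaussian-kernel density estimate in d dimensions.

    `points` is an (n_samples, n_dims) array of, e.g., U, V, W velocity
    components plus age; the density is evaluated at each row of `query`.
    Overdense regions relative to a smooth field-star model are moving-
    group candidates.
    """
    points = np.asarray(points, dtype=float)
    query = np.asarray(query, dtype=float)
    n, d = points.shape
    # Pairwise squared distances, scaled by the bandwidth
    diff = query[:, None, :] - points[None, :, :]
    sq = (diff ** 2).sum(axis=2) / bandwidth ** 2
    # Normalization of an isotropic d-dimensional Gaussian kernel
    norm = n * (bandwidth * np.sqrt(2 * np.pi)) ** d
    return np.exp(-0.5 * sq).sum(axis=1) / norm
```

In practice one would evaluate this on both the total sample and a field-star model and flag query points where the ratio is significantly above unity.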
Abstract:
In this paper we present a Bayesian image reconstruction algorithm with entropy prior (FMAPE) that uses a space-variant hyperparameter. The spatial variation of the hyperparameter allows different degrees of resolution in areas of different statistical characteristics, thus avoiding the large residuals resulting from algorithms that use a constant hyperparameter. In the first implementation of the algorithm, we begin by segmenting a Maximum Likelihood Estimator (MLE) reconstruction. The segmentation method is based on a wavelet decomposition and a self-organizing neural network. The result is a predetermined number of extended regions plus a small region for each star or bright object. To assign a different value of the hyperparameter to each extended region and star, we use either feasibility tests or cross-validation methods. Once the set of hyperparameters is obtained, we carry out the final Bayesian reconstruction, leading to a reconstruction with decreased bias and excellent visual characteristics. The method has been applied to data from the non-refurbished Hubble Space Telescope. It can also be applied to ground-based images.
Abstract:
We consider systems that can be described in terms of two kinds of degrees of freedom. The corresponding ordering modes may, under certain conditions, be coupled to each other. We may thus assume that the primary ordering mode gives rise to a diffusionless first-order phase transition. The change of its thermodynamic properties as a function of the secondary-ordering-mode state is then analyzed. Two specific examples are discussed. First, we study a three-state Potts model in a binary system. Using mean-field techniques, we obtain the phase diagram and different properties of the system as a function of the distribution of atoms on the different lattice sites. In the second case, the properties of a displacive structural phase transition of martensitic type in a binary alloy are studied as a function of atomic order. Because of the directional character of the martensitic-transition mechanism, we find only a very weak dependence of the entropy on atomic order. Experimental results are found to be in quite good agreement with theoretical predictions.