125 results for Asymmetric loss functions
Abstract:
Compositional data analysis motivated the introduction of a complete Euclidean structure on the simplex of D parts. This structure builds on the early work of J. Aitchison (1986) and was completed recently when the Aitchison distance in the simplex was associated with an inner product and orthonormal bases were identified (Aitchison and others, 2002; Egozcue and others, 2003). A partition of the support of a random variable generates a composition by assigning the probability of each interval to a part of the composition. One can imagine that the partition is refined so that the probability density represents a kind of continuous composition of probabilities in a simplex of infinitely many parts. This intuitive idea leads to a Hilbert space of probability densities, obtained by generalizing the Aitchison geometry for compositions in the simplex to the set of probability densities.
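As background (a standard formulation, not part of the abstract), the Aitchison geometry referred to here is commonly written via the centered log-ratio (clr) transform; for compositions $\mathbf{x}, \mathbf{y}$ in the simplex $\mathcal{S}^D$ with geometric mean $g(\mathbf{x})$,
\[
\operatorname{clr}(\mathbf{x})_i = \ln\frac{x_i}{g(\mathbf{x})}, \qquad
\langle \mathbf{x}, \mathbf{y} \rangle_A = \sum_{i=1}^{D} \operatorname{clr}(\mathbf{x})_i \, \operatorname{clr}(\mathbf{y})_i ,
\]
and the Aitchison distance is the norm induced by this inner product.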
Abstract:
The aim of this study is to define the patterns of hearing loss in women with Turner syndrome and the possible factors that may favor the development of sensorineural hearing loss in adult women with Turner syndrome. It was found that more than half of the women with Turner syndrome show hearing loss on audiometry, confirmed by brainstem auditory evoked potentials; sensorineural hearing loss is the most frequent type of hearing loss among middle-aged women with Turner syndrome, and age, karyotype, and a previous history of recurrent otitis media are possible risk factors for the development of hearing loss in these patients.
Abstract:
The occurrence of negative values for Fukui functions was studied through the electronegativity equalization method (EEM). Using algebraic relations between Fukui functions and various other conceptual DFT quantities on the one hand and the hardness matrix on the other, expressions were obtained for the Fukui functions of several archetypal small molecules. Based on EEM calculations for large molecular sets, no negative Fukui functions were found.
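For context (a standard EEM result, not quoted from the abstract), the vector of condensed Fukui functions follows from the hardness matrix $\boldsymbol{\eta}$ alone:
\[
\mathbf{f} = \frac{\boldsymbol{\eta}^{-1}\mathbf{1}}{\mathbf{1}^{\mathsf{T}}\boldsymbol{\eta}^{-1}\mathbf{1}},
\]
so the sign pattern of $\mathbf{f}$ is fixed by the inverse hardness matrix, which is what allows the sign question to be examined algebraically.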
Abstract:
Different procedures to obtain atom-condensed Fukui functions are described. It is shown how the resulting values may differ depending on the exact approach to atom-condensed Fukui functions. The condensed Fukui function can be computed using either the fragment-of-molecular-response approach or the response-of-molecular-fragment approach. The two approaches are not equivalent; only the latter corresponds, in general, to a population-difference expression. When Mulliken populations are used, the result does not depend on the approach taken, but this scheme has some computational drawbacks. The different resulting expressions are tested for a wide set of molecules. In practice one must make seemingly arbitrary choices about how to compute condensed Fukui functions, which suggests questioning the role of these indicators in conceptual density functional theory.
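For reference, the population-difference expression mentioned above is the usual finite-difference condensation; writing $p_A(N)$ for the electron population of atom $A$ in the $N$-electron molecule,
\[
f_A^{+} = p_A(N+1) - p_A(N), \qquad f_A^{-} = p_A(N) - p_A(N-1),
\]
for nucleophilic and electrophilic attack, respectively.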
Abstract:
Linear response functions are implemented for a vibrational configuration interaction state, allowing accurate analytical calculations of pure vibrational contributions to dynamical polarizabilities. Sample calculations are presented for the pure vibrational contributions to the polarizabilities of water and formaldehyde. We discuss the convergence of the results with respect to various details of the vibrational wave function description as well as the potential and property surfaces. We also analyze the frequency dependence of the linear response function and the effect of accounting phenomenologically for the finite lifetime of the excited vibrational states. Finally, we compare the analytical response approach to a sum-over-states approach.
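As a point of reference (the generic textbook form, not taken from the abstract), the sum-over-states expression for a dynamic polarizability reads, in atomic units,
\[
\alpha_{\alpha\beta}(-\omega;\omega) = \sum_{n \neq 0}
\left(
\frac{\langle 0|\mu_\alpha|n\rangle\langle n|\mu_\beta|0\rangle}{\omega_n - \omega}
+
\frac{\langle 0|\mu_\beta|n\rangle\langle n|\mu_\alpha|0\rangle}{\omega_n + \omega}
\right),
\]
and a finite lifetime of the excited states can be modeled phenomenologically by damping the excitation frequencies, for instance $\omega_n \to \omega_n - i\gamma_n/2$.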
Abstract:
Report for the scientific sojourn carried out at the University of Aarhus, Denmark, from 2010 to 2012. Reprogramming of cellular metabolism is a key process during tumorigenesis. This metabolic adaptation is required to sustain the energetic and anabolic demands of highly proliferative cancer cells. Despite having been known for decades (the Warburg effect), the precise molecular mechanisms regulating this switch remained unexplored. We have identified SIRT6 as a novel tumor suppressor that regulates aerobic glycolysis in cancer cells. Importantly, loss of this sirtuin in non-transformed cells leads to tumor formation without activation of known oncogenes, indicating that SIRT6 functions as a first-hit tumor suppressor. Furthermore, transformed SIRT6-deficient cells display increased glycolysis and tumor growth in vivo, suggesting that SIRT6 plays a role in both the establishment and the maintenance of cancer. We provide data demonstrating that the switch towards aerobic glycolysis is the main driving force for tumorigenesis in SIRT6-deficient cells, since inhibition of glycolysis in these cells abrogates their tumorigenic potential. By using a conditional SIRT6-targeted allele, we show that deletion of SIRT6 in vivo increases the number, size and aggressiveness of tumors, thereby confirming a role of SIRT6 as a tumor suppressor in vivo. In addition, we describe a new role for SIRT6 as a regulator of ribosome biogenesis by co-repressing MYC transcriptional activity. Therefore, by repressing glycolysis and ribosomal gene expression, SIRT6 inhibits tumor establishment and progression. Further validating these data, SIRT6 is selectively downregulated in several human cancers, and expression levels of SIRT6 predict both prognosis and tumor-free survival rates, highlighting SIRT6 as a critical modulator of cancer metabolism. Our results point to a potential Achilles' heel for tackling cancer metabolism.
Abstract:
BACKGROUND: Selenoproteins are a diverse family of proteins notable for the presence of the 21st amino acid, selenocysteine. Until very recently, all metazoan genomes investigated encoded selenoproteins, and these proteins had therefore been believed to be essential for animal life. Challenging this assumption, recent comparative analyses of insect genomes have revealed that some insect genomes appear to have lost selenoprotein genes. METHODOLOGY/PRINCIPAL FINDINGS: In this paper we investigate in detail the fate of selenoproteins, and that of selenoprotein factors, in all available arthropod genomes. We use a variety of in silico comparative genomics approaches to look for known selenoprotein genes and factors involved in selenoprotein biosynthesis. We have found that five insect species have completely lost the ability to encode selenoproteins and that selenoprotein loss in these species, although so far confined to the Endopterygota infraclass, cannot be attributed to a single evolutionary event, but rather to multiple, independent events. Loss of selenoproteins and selenoprotein factors is usually coupled to the deletion of the entire no-longer functional genomic region, rather than to sequence degradation and consequent pseudogenisation. Such dynamics of gene extinction are consistent with the high rate of genome rearrangements observed in Drosophila. We have also found that, while many selenoprotein factors are concomitantly lost with the selenoproteins, others are present and conserved in all investigated genomes, irrespective of whether they code for selenoproteins or not, suggesting that they are involved in additional, non-selenoprotein related functions. CONCLUSIONS/SIGNIFICANCE: Selenoproteins have been independently lost in several insect species, possibly as a consequence of the relaxation in insects of the selective constraints acting across metazoans to maintain selenoproteins. The dispensability of selenoproteins in insects may be related to the fundamental differences in antioxidant defense between these animals and other metazoans.
Abstract:
Background: Recent advances in high-throughput technologies have produced a vast number of protein sequences, while the number of high-resolution structures has seen only a limited increase. This has impelled the development of many strategies to build protein structures from their sequences, generating a considerable number of alternative models. The selection of the model closest to the native conformation has thus become crucial for structure prediction. Several methods have been developed to score protein models by energies, by knowledge-based potentials, or by a combination of both. Results: Here, we present and demonstrate a theory to split knowledge-based potentials into biologically meaningful scoring terms and to combine them into new scores to predict near-native structures. Our strategy circumvents the problem of defining the reference state. In this approach we give the proof for a simple, linear application that can be further improved by optimizing the combination of Z-scores. Using the simplest composite score, we obtained predictions similar to those of state-of-the-art methods. Besides, our approach has the advantage of identifying the most relevant terms involved in the stability of the protein structure. Finally, we also use the composite Z-scores to assess the conformation of models and to detect local errors. Conclusion: We have introduced a method to split knowledge-based potentials and to solve the problem of defining a reference state. The new scores detect near-native structures as accurately as state-of-the-art methods and have been successful in identifying wrongly modeled regions of many near-native conformations.
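A minimal sketch of combining split potential terms into a composite Z-score (variable names and equal weights are assumptions for illustration; the abstract does not give the actual terms or weights):

import numpy as np

def composite_zscore(term_scores, weights=None):
    """Combine per-model scoring terms into one composite Z-score.

    term_scores : (n_models, n_terms) array, one knowledge-based
                  scoring term per column (e.g. pairwise, solvation).
    weights     : optional (n_terms,) array; equal weights reproduce
                  the simplest linear combination described above.
    """
    # Standardize each scoring term over the set of candidate models.
    z = (term_scores - term_scores.mean(axis=0)) / term_scores.std(axis=0)
    if weights is None:
        weights = np.ones(z.shape[1]) / z.shape[1]
    return z @ weights

# Usage: rank candidate models and pick the best-scoring one, e.g.
# best_model = np.argmin(composite_zscore(scores)).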
Abstract:
This paper studies two important reasons why people violate procedure invariance: loss aversion and scale compatibility. The paper extends previous research on loss aversion and scale compatibility by studying the two biases simultaneously, by looking at a new decision domain, medical decision analysis, and by examining their effect on "well-contemplated preferences." We find significant evidence of both loss aversion and scale compatibility. However, the sizes of the biases due to loss aversion and scale compatibility vary over trade-offs, and most participants do not behave consistently according to loss aversion or scale compatibility. In particular, the effect of loss aversion in medical trade-offs decreases with duration. These findings are encouraging for utility measurement and prescriptive decision analysis. There appear to exist decision contexts in which the effects of loss aversion and scale compatibility can be minimized and utilities can be measured that do not suffer from these distorting factors.
Abstract:
In 1952, F. Riesz and B. Sz.-Nagy published an example of a monotonic continuous function whose derivative is zero almost everywhere, that is to say, a singular function. Moreover, the function was strictly increasing. Their example was built as the limit of a sequence of deformations of the identity function. As an easy consequence of the definition, the derivative, when it existed and was finite, was found to be zero. In this paper we revisit the Riesz-Nagy family of functions and relate it to a system for real number representation which we call (t, t-1) expansions. With the help of these real number expansions we generalize the family. The singularity of the functions is proved through some metrical properties of the expansions used in their definition, which also allow us to determine more precisely when the derivative is 0 or infinity.
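One common presentation of the Riesz-Nagy construction (a sketch for orientation; the paper's (t, t-1) expansions generalize it): fix $t \in (0,1)$, $t \neq \tfrac12$, set $f_0(x) = x$ on $[0,1]$, and at each step replace the linear piece of $f_n$ over an interval $[a,b]$ by two linear pieces through the point
\[
\left(\frac{a+b}{2},\; f_n(a) + t\,\bigl(f_n(b) - f_n(a)\bigr)\right).
\]
The limit $f = \lim_n f_n$ is continuous and strictly increasing, yet $f'(x) = 0$ almost everywhere.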
Abstract:
We investigate on-line prediction of individual sequences. Given a class of predictors, the goal is to predict as well as the best predictor in the class, where the loss is measured by the self-information (logarithmic) loss function. The excess loss (regret) is closely related to the redundancy of the associated lossless universal code. Using Shtarkov's theorem and tools from empirical process theory, we prove a general upper bound on the best possible (minimax) regret. The bound depends on certain metric properties of the class of predictors. We apply the bound to both parametric and nonparametric classes of predictors. Finally, we point out a suboptimal behavior of the popular Bayesian weighted-average algorithm.
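For orientation (Shtarkov's classical result, not a claim of this abstract): under the logarithmic loss the minimax regret over a class $\mathcal{P}$ on sequences of length $n$ is attained by the normalized maximum-likelihood distribution and equals
\[
R_n^*(\mathcal{P}) = \min_{q} \max_{x^n} \log \frac{\sup_{p \in \mathcal{P}} p(x^n)}{q(x^n)}
= \log \sum_{x^n} \sup_{p \in \mathcal{P}} p(x^n).
\]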
Abstract:
We propose a model, and solution methods, for locating a fixed number of multiple-server, congestible common service centers or congestible public facilities. Locations are chosen so as to minimize consumers' congestion (or queuing) and travel costs, considering that all the demand must be served. Customers choose the facilities to which they travel in order to receive service at minimum travel and congestion cost. As a proxy for this criterion, total travel and waiting costs are minimized. The travel cost is a general function of the origin and destination of the demand, while the congestion cost is a general function of the number of customers in queue at the facilities.
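A schematic formulation consistent with this description (all notation is assumed for illustration, not taken from the abstract): with $y_j = 1$ if a facility is opened at site $j$, $x_{ij}$ the demand of origin $i$ served at $j$, $c_{ij}$ the travel cost, $d_i$ the demand at $i$, $m$ the number of facilities to open, and $W_j(\cdot)$ the congestion (waiting) cost as a function of the load at $j$,
\[
\min \; \sum_{i,j} c_{ij} x_{ij} + \sum_j W_j\Bigl(\sum_i x_{ij}\Bigr)
\quad \text{s.t.} \quad \sum_j x_{ij} = d_i \ \forall i, \qquad
\sum_j y_j = m, \qquad x_{ij} \le d_i\, y_j .
\]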
Abstract:
This paper proposes an exploration of the methodology of utility functions that distinguishes interpretation from representation. While representation univocally assigns numbers to the entities of the domain of utility functions, interpretation relates these entities with empirically observable objects of choice. This allows us to make explicit the standard interpretation of utility functions, which assumes that two objects have the same utility if and only if the individual is indifferent between them. We explore the underlying assumptions of such a hypothesis and propose a non-standard interpretation according to which objects of choice have a well-defined utility although individuals may vary in the way they treat these objects in a specific context. We provide examples of such a methodological approach that may explain some reversals of preferences, and we suggest possible mathematical formulations for further research.
Abstract:
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
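A minimal sketch of that last observation (the fit_erm argument is a placeholder for whatever empirical risk minimizer is available; this is not code from the paper): flipping the labels of one half of the sample and running ERM once yields the maximal discrepancy.

import numpy as np

def maximal_discrepancy(X, y, fit_erm):
    """Maximal-discrepancy penalty via ERM with flipped labels.

    X : (n, d) inputs; y : (n,) labels in {-1, +1}.
    fit_erm : routine returning a classifier h (a callable X -> labels)
              that minimizes empirical 0-1 loss over the class.
    """
    n = len(y) // 2
    y_flipped = y.copy()
    y_flipped[:n] = -y_flipped[:n]      # flip labels on the first half
    h = fit_erm(X, y_flipped)           # one ERM run on the relabeled data
    pred = h(X)
    err1 = np.mean(pred[:n] != y[:n])   # error on first half, true labels
    err2 = np.mean(pred[n:] != y[n:])   # error on second half
    # Minimizing error on the flipped sample maximizes err1 - err2,
    # so h attains the maximal discrepancy between the two halves.
    return err1 - err2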