921 results for non-negative matrix factorization


Relevance: 100.00%

Abstract:

Starting with logratio biplots for compositional data, which are based on the principle of subcompositional coherence, and then adding weights, as in correspondence analysis, we rediscover Lewi's spectral map and many connections to analyses of two-way tables of non-negative data. Thanks to the weighting, the method also achieves the property of distributional equivalence.
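A minimal numerical sketch of this weighted log-ratio analysis, following one common formulation (margins used as weights, as in correspondence analysis); the data matrix and its dimensions are made up for illustration:

```python
import numpy as np

# Weighted log-ratio analysis (spectral map) on a random positive table.
rng = np.random.default_rng(1)
N = rng.random((30, 8)) + 0.1            # 30 samples x 8 compositional parts

P = N / N.sum()                          # normalize as in correspondence analysis
r, c = P.sum(axis=1), P.sum(axis=0)      # row/column weights from the margins

L = np.log(P)
# Weighted double-centering removes row and column "size" effects.
Lc = L - np.outer(L @ c, np.ones_like(c)) - np.outer(np.ones_like(r), r @ L) + (r @ L @ c)
S = np.sqrt(r)[:, None] * Lc * np.sqrt(c)[None, :]
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

rows = (U[:, :2] * sv[:2]) / np.sqrt(r)[:, None]  # principal row coordinates
cols = Vt[:2].T / np.sqrt(c)[:, None]             # standard column coordinates
```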

Relevance: 100.00%

Abstract:

Epidemiological screening combined with serological tests has become an important tool at blood banks for the characterization of donors with or without Trypanosoma cruzi infection. Thus, the objective of the present study was to describe the sociodemographic and epidemiological characteristics of blood donors with non-negative serology for T. cruzi to determine possible risk factors associated with serological ineligibility. Sociodemographic and epidemiological data were collected by analysis of patient histories and interviews. The data were analyzed descriptively using absolute and relative frequencies and odds ratio (OR) evaluation. The frequency of serological ineligibility was 0.28%, with a predominance of inconclusive reactions (52%), and seropositivity was associated with first-time donors (OR = 607), donors older than 30 years (OR = 3.7), females (OR = 1.9), donors from risk areas (OR = 4) and subjects living in rural areas (OR = 1.7). The risk of seropositivity was higher among donors who had contact with the triatomine vector (OR = 11.7) and those with a family history of Chagas disease (OR = 4.8). The results demonstrate the value of detailed clinical-epidemiological screening as an auxiliary tool for serological definition that, together with more specific and more sensitive laboratory methods, will guarantee higher efficacy in the selection of donors at blood centres.
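As a side note on the method, an odds ratio from a 2x2 table is simply the cross-product ratio. The counts below are hypothetical, chosen only to reproduce the reported vector-contact OR of 11.7, and are not the study's data:

```python
# Hypothetical 2x2 table (made-up counts):
# rows: contact with the triatomine vector (yes / no)
# columns: serologically ineligible / eligible donors
a, b = 35, 15     # exposed:   ineligible, eligible
c, d = 50, 250    # unexposed: ineligible, eligible

odds_ratio = (a * d) / (b * c)   # cross-product ratio
print(round(odds_ratio, 1))      # 11.7
```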

Relevance: 100.00%

Abstract:

We show that any cooperative TU game is the maximum of a finite collection of convex games. This max-convex decomposition can be refined by using convex games with non-negative dividends for all coalitions of at least two players. As a consequence of the above results we show that the class of modular games is a set of generators of the distributive lattice of all cooperative TU games. Finally, we characterize zero-monotonic games using a strong max-convex decomposition.
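For a concrete handle on the building blocks of the refinement, the sketch below computes Harsanyi dividends of a small made-up three-player TU game by Möbius inversion; a game whose dividends are non-negative on every coalition of size at least two is convex:

```python
from itertools import combinations

# Harsanyi dividends by Möbius inversion:
# d(S) = sum over T subset of S of (-1)^(|S|-|T|) * v(T).

players = (1, 2, 3)

def coalitions(ps):
    for k in range(len(ps) + 1):
        yield from combinations(ps, k)

# A made-up convex game v (v(empty) = 0).
v = {(): 0, (1,): 1, (2,): 1, (3,): 0,
     (1, 2): 3, (1, 3): 2, (2, 3): 2, (1, 2, 3): 6}

def dividend(S):
    return sum((-1) ** (len(S) - len(T)) * v[T] for T in coalitions(S))

for S in coalitions(players):
    if len(S) >= 2:
        print(S, dividend(S))   # all non-negative here, so v is convex
```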


Relevance: 100.00%

Abstract:

In this paper we study the equity core (Selten, 1978) and compare it with the core. A payoff vector is in the equity core if no coalition can divide its value among its members proportionally to a given weight system and, in this way, give more to each member than the amount he or she receives in the payoff vector. We show that the equity core is a compact extension of the core and that, for non-negative games, the intersection of all equity cores with respect to all weights coincides with the core of the game. Keywords: cooperative game, equity core, equal division core, core. JEL classification: C71.
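A direct, brute-force translation of that definition into a membership test; the three-player game, weights and payoff vector below are made up:

```python
from itertools import combinations

def in_equity_core(x, v, w, players):
    """x is in the equity core if no coalition S can split v(S) in
    proportion to the weights w and strictly improve every member."""
    for k in range(1, len(players) + 1):
        for S in combinations(players, k):
            wS = sum(w[i] for i in S)
            if all(v[S] * w[i] / wS > x[i] for i in S):
                return False   # S blocks via weighted proportional division
    return True

players = (1, 2, 3)
v = {(1,): 1, (2,): 1, (3,): 0, (1, 2): 3, (1, 3): 2, (2, 3): 2, (1, 2, 3): 6}
print(in_equity_core({1: 2, 2: 2, 3: 2}, v, {1: 1, 2: 1, 3: 1}, players))  # True
```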

Relevance: 100.00%

Abstract:

Many European states apply score systems to evaluate the disability severity of victims of non-fatal motor accidents under third-party liability law. The score is a non-negative integer, bounded above by 100, that increases with severity. It may be converted automatically into financial terms and thus also reflects the compensation cost of the disability. In this paper, discrete regression models are applied to analyze the factors that influence the disability severity score of victims. Standard and zero-altered regression models are compared from two perspectives: the interpretation of the data-generating process and the level of statistical fit. The results have implications for traffic safety policy decisions aimed at reducing accident severity. An application using data from Spain is provided.
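Zero-altered (hurdle) models handle the spike of zero scores with a separate process. As a rough illustration, here is a close relative, a zero-inflated Poisson, fitted on synthetic data with statsmodels; the covariate, sample size and parameter values are all invented:

```python
import numpy as np
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(0)
n = 1000
age = rng.normal(size=n)                 # invented covariate
X = np.column_stack([np.ones(n), age])
y = rng.poisson(np.exp(0.8 + 0.4 * age))
y[rng.random(n) < 0.3] = 0               # extra zeros from a second process

res = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(maxiter=500, disp=0)
print(res.params)   # inflation intercept near logit(0.3), then Poisson coefficients
```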

Relevance: 100.00%

Abstract:

Conservation laws in physics are numerical invariants of the dynamics of a system. In cellular automata (CA), a similar concept has already been defined and studied. To each local pattern of cell states a real value is associated, interpreted as the "energy" (or "mass", etc.) of that pattern. The overall "energy" of a configuration is simply the sum of the energies of the local patterns appearing at different positions in the configuration. We have a conservation law for that energy if the total energy of each configuration remains constant during the evolution of the CA. For a given conservation law, it is desirable to find microscopic explanations for the dynamics of the conserved energy in terms of flows of energy from one region toward another. Often the energy values are non-negative integers, interpreted as the number of "particles" distributed on a configuration. In such cases, it is conjectured that one can always provide a microscopic explanation for the conservation law by prescribing rules for the local movement of the particles. The one-dimensional case has already been solved by Fukś and Pivato. We extend this to two-dimensional cellular automata with radius-0.5 neighborhood on the square lattice.

We then consider conservation laws in which the energy values are chosen from a commutative group or semigroup. In this case, the class of all conservation laws for a CA forms a partially ordered hierarchy. We study the structure of this hierarchy and prove some basic facts about it. Although the local properties of this hierarchy (at least in the group-valued case) are tractable, its global properties turn out to be algorithmically inaccessible. In particular, we prove that it is undecidable whether this hierarchy is trivial (i.e., whether the CA has any non-trivial conservation law at all) or unbounded. We point out some interconnections between the structure of this hierarchy and the dynamical properties of the CA, and show that positively expansive CA do not have non-trivial conservation laws.

We also investigate a curious relationship between conservation laws and invariant Gibbs measures in reversible and surjective CA. Gibbs measures are known to coincide with the equilibrium states of a lattice system defined in terms of a Hamiltonian. For reversible cellular automata, each conserved quantity may play the role of a Hamiltonian, and provides a Gibbs measure (or a set of Gibbs measures, in case of phase multiplicity) that is invariant. Conversely, every invariant Gibbs measure provides a conservation law for the CA. For surjective CA, the former statement also follows (in a slightly different form) from the variational characterization of Gibbs measures. For one-dimensional surjective CA, we show that each invariant Gibbs measure provides a conservation law. We also prove that surjective CA almost surely preserve the average information content per cell with respect to any probability measure.
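A minimal sketch of the particle picture in the simplest setting, the one-dimensional "traffic" CA (elementary rule 184), where the conserved non-negative energy is just the number of 1s and each 1 behaves as a car moving right into empty cells. This illustrates the flavour of the conjecture, not the Fukś-Pivato construction itself:

```python
import numpy as np

def step(s):
    """One step of elementary CA rule 184 on a periodic lattice."""
    left, right = np.roll(s, 1), np.roll(s, -1)
    idx = 4 * left + 2 * s + right                   # neighborhood pattern 0..7
    return np.array([0, 0, 0, 1, 1, 1, 0, 1])[idx]   # rule 184 = 10111000 in binary

rng = np.random.default_rng(0)
state = rng.integers(0, 2, size=64)
particles = state.sum()
for _ in range(200):
    state = step(state)
    assert state.sum() == particles   # the particle number is conserved
```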

Relevance: 100.00%

Abstract:

Polycyclic aromatic hydrocarbons (PAHs) are of great environmental concern mainly because of their toxic, mutagenic and carcinogenic potential. This paper reports the use of the solid-phase extraction (SPE) technique to determine PAHs in environmental aqueous matrices. The recovery from environmental aqueous matrices fortified with PAHs varied from 63.7 to 93.1% for atmospheric liquid precipitation, from 38.3 to 95.1% for surface river water, and from 71.0 to 95.5% for marine water. No negative matrix effect was observed for the recovery of PAHs from atmospheric liquid precipitation or marine water, but one was observed for surface river water, particularly for PAHs with 5 and 6 aromatic rings.

Relevance: 100.00%

Abstract:

The processes and sources that regulate the elemental composition of aerosol particles were investigated in both the fine and coarse modes during the dry and wet seasons. One hundred and nine samples were collected at the Cuieiras biological reserve near Manaus from February to October 2008 and analyzed together with 668 samples previously collected at Balbina from 1998 to 2002. The particle-induced X-ray emission technique was used to determine the elemental composition, while the concentration of black carbon was obtained from optical reflectance measurements. Absolute principal factor analysis and positive matrix factorization were performed for source apportionment, complemented by back-trajectory analysis. A regional identity for the natural biogenic aerosol was found for the Central Amazon Basin and can be used in regional chemical-dynamical models.
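Positive matrix factorization is usually fitted with dedicated tools that weight residuals by per-measurement uncertainty. As a loose stand-in that shares only the non-negativity constraints on profiles and contributions, here is a plain non-negative matrix factorization with scikit-learn on synthetic data (all dimensions and values invented):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
profiles = rng.random((3, 12))                  # 3 sources x 12 elements (made up)
contrib = rng.random((200, 3))                  # 200 samples x 3 sources
X = contrib @ profiles + 0.01 * rng.random((200, 12))   # noisy mixture

model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)   # estimated source contributions per sample
F = model.components_        # estimated elemental source profiles
```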

Relevance: 100.00%

Abstract:

How can we think the destinal place of language in the essentially historical condition of our existence if such historicity cannot be understood on the basis of the labor of negativity alone? The attempt is made here to think language in a more originary manner, as a non-negative finitude that affirms what is outside dialectical-speculative closure, what is to come. The notion of the 'destinal' is thus itself transformed. No longer merely a categorical grasp of "entities presently given", language is an originary exposure to the event of arrival in its lightning flash. Destiny appears as that of the messianic arrival of the 'not yet', which is not a telos that the immanent movement of historical reason reaches by an irresistible force of the negative. This essay reads Schelling, Heidegger and Kierkegaard to think language as a "place" of exposure to the non-teleological destiny that may erupt even today, here and now, without any given conditionality.

Relevance: 100.00%

Abstract:

We associate some graphs to a ring R and investigate the interplay between the ring-theoretic properties of R and the graph-theoretic properties of the associated graphs. Let Z(R) be the set of zero-divisors of R. We define an undirected graph Γ(R) whose vertices are the nonzero zero-divisors, with distinct vertices x and y adjacent if xy = 0 or yx = 0. We investigate the Isomorphism Problem for zero-divisor graphs of group rings RG. Let S_k denote the sphere with k handles, where k is a non-negative integer; that is, S_k is an orientable surface of genus k. The genus of a graph is the minimal integer n such that the graph can be embedded in S_n. The annihilating-ideal graph of R is defined as the graph AG(R) whose vertices are the ideals with nonzero annihilators, with two distinct vertices I and J adjacent if IJ = (0). We characterize Artinian rings whose annihilating-ideal graphs have finite genus. Finally, we extend the definition of the annihilating-ideal graph to non-commutative rings.
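For intuition, the sketch below builds Γ(Z_n) for the commutative ring Z_n, where the "xy = 0 or yx = 0" condition collapses to a single test:

```python
from itertools import combinations

def zero_divisor_graph(n):
    """Vertices: nonzero zero-divisors of Z_n; edges: x*y = 0 (mod n)."""
    verts = [x for x in range(1, n)
             if any((x * y) % n == 0 for y in range(1, n))]
    edges = [(x, y) for x, y in combinations(verts, 2) if (x * y) % n == 0]
    return verts, edges

print(zero_divisor_graph(12))
# ([2, 3, 4, 6, 8, 9, 10],
#  [(2, 6), (3, 4), (3, 8), (4, 6), (4, 9), (6, 8), (6, 10), (8, 9)])
```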

Relevance: 100.00%

Abstract:

Work carried out under joint supervision (cotutelle) with Université Paris-Diderot and the Commissariat à l'Énergie Atomique, under the direction of John Harnad and Bertrand Eynard.

Relevance: 100.00%

Abstract:

We consider the problem of regulating an economy with environmental pollution. We examine the distributional impact of the polluter-pays principle, which requires that any agent compensate all other agents for the damages caused by his or her (pollution) emissions. With constant marginal damages, we show that regulation via the polluter-pays principle leads to the unique welfare distribution that assigns non-negative individual welfare and renders each agent responsible for his or her pollution impact. We extend both the polluter-pays principle and this result to increasing marginal damages due to pollution. We also discuss the acceptability of the polluter-pays principle and compare it with the Vickrey-Clarke-Groves mechanism.
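A toy numerical sketch, not the paper's model, of the constant-marginal-damage case: assuming every unit of agent i's emissions imposes damage d on each other agent, the polluter-pays transfers make damage payments and receipts cancel, so each agent ends up bearing exactly the social cost of his or her own emissions:

```python
d = 1.0                                    # constant marginal damage (assumed)
e = {"A": 3.0, "B": 1.0, "C": 0.0}         # emissions (made up)
benefit = {"A": 10.0, "B": 5.0, "C": 4.0}  # private benefit from emitting (made up)

def welfare(i):
    paid = d * e[i] * (len(e) - 1)                  # compensation i pays out
    received = sum(d * e[j] for j in e if j != i)   # compensation i receives
    suffered = sum(d * e[j] for j in e if j != i)   # damage i endures
    return benefit[i] - suffered + received - paid  # = benefit[i] - paid

for i in e:
    print(i, welfare(i))   # A: 4.0, B: 3.0, C: 4.0 -- all non-negative
```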

Relevance: 100.00%

Abstract:

This thesis presents methods for handling count data in particular, and discrete data in general. It is part of an NSERC (CRSNG) strategic project, named CC-Bio, whose objective is to assess the impact of climate change on the distribution of plant and animal species. After a brief introduction to the notions of biogeography and to generalized linear mixed models in Chapters 1 and 2 respectively, the thesis is organized around three main ideas.

First, Chapter 3 introduces a new family of distributions whose components have Poisson or Skellam marginal distributions. This new specification makes it possible to incorporate relevant information about the nature of the correlations between all the components, and we present some properties of the distribution. Unlike the multivariate Poisson distribution it generalizes, it can handle variables with positive and/or negative correlations. A simulation illustrates the estimation methods in the bivariate case. The results obtained by Bayesian methods via Markov chain Monte Carlo (MCMC) indicate a fairly small relative bias, below 5%, for the regression coefficients of the means, whereas the covariance-term estimates appear somewhat more volatile.

Second, Chapter 4 presents an extension of multivariate Poisson regression with gamma-distributed random effects. Since species-abundance data exhibit strong overdispersion, which would render the resulting estimators and standard errors misleading, we favour an approach based on Monte Carlo integration with importance sampling. The approach remains the same as in the previous chapter: the idea is to simulate independent latent variables so as to work within a conventional generalized linear mixed model (GLMM) with gamma random effects. Even though the assumption of a priori knowledge of the dispersion parameters may seem too strong, a sensitivity analysis based on goodness of fit demonstrates the robustness of the method.

Third, the last chapter addresses the definition and construction of a concordance (and hence correlation) measure for zero-inflated data through Gaussian copula modelling. Unlike Kendall's tau, whose values lie in an interval whose bounds depend on the frequency of ties between pairs, this measure has the advantage of taking its values in (-1, 1). Initially introduced to model correlations between continuous variables, its extension to the discrete case entails certain restrictions: the new measure can be interpreted as the correlation between the continuous random variables whose discretization constitutes our non-negative discrete observations. Two estimation methods for the zero-inflated models are presented, in the frequentist and Bayesian settings, based respectively on maximum likelihood and Gauss-Hermite quadrature. Finally, a simulation study shows the robustness and the limits of the approach.
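A quick simulation of the classical common-component (trivariate reduction) construction that the chapter's Poisson/Skellam family generalizes. Note that this basic version only produces non-negative correlations, which is precisely the limitation the Skellam components lift; the rates below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 100_000
z0 = rng.poisson(1.0, m)          # shared component drives the dependence
x1 = z0 + rng.poisson(2.0, m)     # Poisson(3) marginal
x2 = z0 + rng.poisson(3.0, m)     # Poisson(4) marginal
print(np.corrcoef(x1, x2)[0, 1])  # about 1/sqrt(3*4), i.e. roughly 0.289
```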

Relevance: 100.00%

Abstract:

The factorization method is applied to the initial data of an already-solved quantum-mechanics problem. The solutions (eigenstates and eigenfunctions) are almost all recovered.
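As a hedged illustration of the factorization method on the textbook harmonic oscillator (not the problem treated in the thesis): writing H = A†A + 1/2 with A = (x + d/dx)/sqrt(2) (in units hbar = m = omega = 1) reproduces the spectrum n + 1/2. Below, both operators are crudely discretized with forward differences, so the higher eigenvalues are only approximate:

```python
import numpy as np

n, L = 600, 16.0
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]

S = np.diag(np.ones(n - 1), 1)       # shift matrix: (S f)_j = f_{j+1}
D = (S - np.eye(n)) / h              # forward-difference d/dx
A = (np.diag(x) + D) / np.sqrt(2.0)  # lowering operator (x + d/dx)/sqrt(2)

H = A.T @ A + 0.5 * np.eye(n)        # factorized Hamiltonian, H = A†A + 1/2
print(np.sort(np.linalg.eigvalsh(H))[:4])   # approximately [0.5, 1.5, 2.5, 3.5]
```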