145 results for Interval discrete log problem
Abstract:
A family of nonempty closed convex sets is built from the data of the Generalized Nash equilibrium problem (GNEP). The sets are selected iteratively so that the intersection of the selected sets contains solutions of the GNEP. The algorithm introduced by Iusem-Sosa (2003) is adapted to obtain solutions of the GNEP. Finally, some numerical experiments are given to illustrate the numerical behavior of the algorithm.
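Since the key computational step is locating a point in the intersection of closed convex sets, a classical successive-projection scheme illustrates the idea. This is a generic sketch in Python, not the Iusem-Sosa adaptation itself, and the half-space constraints are made up for illustration:

import numpy as np

def project_halfspace(x, a, b):
    """Project x onto the half-space {y : a.y <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def pocs(x, halfspaces, iters=500):
    """Cyclic projections onto convex sets: converges to a point
    in the intersection when that intersection is nonempty."""
    for _ in range(iters):
        for a, b in halfspaces:
            x = project_halfspace(x, a, b)
    return x

# Illustrative half-spaces; any closed convex sets with known
# projection operators would work the same way.
H = [(np.array([1.0, 1.0]), 1.0), (np.array([-1.0, 2.0]), 2.0)]
print(pocs(np.array([5.0, -3.0]), H))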
Abstract:
The objective of this paper is to re-examine risk and effort attitudes in the context of strategic dynamic interactions stated as a discrete-time finite-horizon Nash game. The analysis is based on the assumption that players are endogenously risk- and effort-averse. Each player is characterized by distinct risk- and effort-aversion types that are unknown to his opponent. The goal of the game is the optimal risk- and effort-sharing between the players. It generally depends on the individual strategies adopted and, implicitly, on the players' types or characteristics.
Abstract:
This paper studies a dynamic principal-monitor-agent relation in which a strategic principal delegates the task of monitoring the effort of a strategic agent to a third party, the monitor, whose type is initially unknown. Through repeated interaction the agent might learn the monitor's type. We show that this process damages the principal's payoffs. Compensation is assumed exogenous, limiting to a great extent the provision of incentives. We get around this difficulty by introducing costly replacement strategies: the principal replaces the monitor, thus disrupting the agent's learning. We find that even when replacement costs are null, if the revealed monitor is strictly preferred by both parties, there is a loss in efficiency due to the impossibility of benefitting from it. Nonetheless, these strategies can partially recover the principal's losses. Additionally, we establish upper and lower bounds on the payoffs that the principal and the agent can achieve. Finally, we characterize the equilibrium strategies under public and private monitoring (with communication) for different cost and impatience levels.
Abstract:
In this study I try to explain the systemic problem of the low economic competitiveness of nuclear energy for the production of electricity by carrying out a biophysical analysis of its production process. Given the fact that neither econometric approaches nor one-dimensional methods of energy analysis are effective, I introduce the concept of biophysical explanation as a quantitative analysis capable of handling the inherent ambiguity associated with the concept of energy. In particular, the quantities of energy considered as relevant for the assessment can only be measured and aggregated after having agreed on a pre-analytical definition of a grammar characterizing a given set of finite transformations. Using this grammar it becomes possible to provide a biophysical explanation for the low economic competitiveness of nuclear energy in the production of electricity. When comparing the various unit operations of the process of production of electricity with nuclear energy to the analogous unit operations of the process of production of fossil energy, we see that the various phases of the process are the same. The only difference is related to the characteristics of the process associated with the generation of heat, which are completely different in the two systems. Since the cost of production of fossil energy provides the baseline of economic competitiveness of electricity, the (lack of) economic competitiveness of the production of electricity from nuclear energy can be studied by comparing the biophysical costs associated with the different unit operations taking place in nuclear and fossil power plants when generating process heat or net electricity. In particular, the analysis focuses on fossil-fuel requirements and labor requirements for those phases that both nuclear plants and fossil energy plants have in common: (i) mining; (ii) refining/enriching; (iii) generating heat/electricity; (iv) handling the pollution/radioactive wastes. By adopting this approach, it becomes possible to explain the systemic low economic competitiveness of nuclear energy in the production of electricity, because of: (i) its dependence on oil, limiting its possible role as a carbon-free alternative; (ii) the choices made in relation to its fuel cycle, especially whether it includes reprocessing operations or not; (iii) the unavoidable uncertainty in the definition of the characteristics of its process; (iv) its large inertia (lack of flexibility) due to issues of time scale; and (v) its low power level.
Abstract:
Research project carried out during a stay at the Università degli studi di Siena, Italy, between 2007 and 2009. The project consisted of a study of the logical formalization of reasoning in the presence of vagueness using the methods of Algebraic Logic and Proof Theory. The work proceeded along four complementary directions. First, a new approach, more abstract than the hitherto dominant paradigm, was proposed for the study of fuzzy logic systems. Until now, the study of these systems had focused essentially on obtaining semantics based on continuous (or at least left-continuous) t-norms. At a first level of greater abstraction, we studied the completeness properties of fuzzy logics (both propositional and first-order) with respect to semantics defined over arbitrary chains of truth values, not necessarily only over the unit interval of the real numbers. Then, at an even more abstract level, the so-called Leibniz hierarchy of Abstract Algebraic Logic, which classifies all logical systems with good algebraic behavior, was expanded into a new hierarchy (which we call implicational) that makes it possible to define new classes of fuzzy logics containing almost all those known so far. Second, we continued a line of research started in recent years on the study of partial truth as a syntactic notion (that is, as explicit truth constants in the proof systems of fuzzy logics). For the first time, rational semantics were considered for the propositional logics, and real and rational semantics for the first-order logics expanded with constants. Third, the more fundamental problem of the meaning and usefulness of fuzzy logics as models of (part of) the phenomena of vagueness was addressed in a final article of a more philosophical and expository nature, and in a more technical one in which we argue for the need to study the algebraic structures associated with fuzzy logics and present the state of the art of that study. Finally, the last part of the project was devoted to the study of the arithmetical complexity of first-order fuzzy logics.
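As standard background for the semantics mentioned above (these are textbook definitions, not results of the project): a left-continuous t-norm * on [0,1] admits a residuum, written here in LaTeX.

% Left-continuity of a t-norm * (commutative, associative,
% monotone, with unit 1) guarantees that its residuum exists:
x \Rightarrow y \;=\; \max\{\, z \in [0,1] : x * z \le y \,\}
% e.g. the Lukasiewicz t-norm and its residuum:
x * y = \max\{0,\, x + y - 1\}, \qquad
x \Rightarrow y = \min\{1,\, 1 - x + y\}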
Abstract:
The main purpose of the Log2XML application is to transform log files in field-separated text format into a standardized XML format. To allow the application to work with logs from different systems or applications, it provides a template system (specifying the field order and the separator character) that defines the minimal structure needed to extract the information from any type of field-separated log. Finally, the application allows the extracted information to be processed to generate reports and statistics. The project also takes an in-depth look at the Grails technology.
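A minimal sketch of the template idea (an ordered list of field names plus a separator character) in Python; this is an illustration under assumed field names, not the actual Grails implementation:

import xml.etree.ElementTree as ET

def log_to_xml(lines, fields, sep=";"):
    """Convert separator-delimited log lines to XML using a
    template: an ordered list of field names and a separator."""
    root = ET.Element("log")
    for line in lines:
        values = line.rstrip("\n").split(sep)
        entry = ET.SubElement(root, "entry")
        for name, value in zip(fields, values):
            ET.SubElement(entry, name).text = value
    return ET.tostring(root, encoding="unicode")

# Hypothetical template: three fields separated by ';'.
sample = ["2009-05-01;INFO;started", "2009-05-01;ERROR;timeout"]
print(log_to_xml(sample, ["date", "level", "message"]))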
Abstract:
This project will deepen knowledge of how Oracle's PL/SQL works (procedure calls and, especially, exception handling), of the use of JDBC as the communication mechanism between Java and Oracle, and of the use of the GUI-building classes (Swing). It will also put into practice Oracle features that the author had not previously had the opportunity to use, such as generic data types, persistent objects, and autonomous transactions.
Abstract:
In this paper we examine the problem of compositional data from a different starting point. Chemical compositional data, as used in provenance studies on archaeological materials, will be approached from measurement theory. The results will show, in a very intuitive way, that chemical data can only be treated by using the approach developed for compositional data. It will be shown that compositional data analysis is a particular case in projective geometry, when the projective coordinates are in the positive orthant and have the properties of logarithmic interval metrics. Moreover, it will be shown that this approach can be extended to a very large number of applications, including shape analysis. This will be exemplified with a case study on the architecture of Early Christian churches dated back to the 5th-7th centuries AD.
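For reference, the "logarithmic interval metric" alluded to is usually formalized through the centred log-ratio transform and the Aitchison distance; in standard LaTeX notation (textbook definitions, not taken from the paper):

% Centred log-ratio transform of a composition x = (x_1,...,x_D):
\operatorname{clr}(x) = \Bigl(\ln\frac{x_1}{g(x)},\dots,\ln\frac{x_D}{g(x)}\Bigr),
\qquad g(x) = \Bigl(\prod_{i=1}^{D} x_i\Bigr)^{1/D},
% and the Aitchison distance is the Euclidean distance of clr images:
d_a(x,y) = \lVert \operatorname{clr}(x) - \operatorname{clr}(y) \rVert_2 .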
Abstract:
This paper shows how instructors can use the problem-based learning method to introduce producer theory and market structure in intermediate microeconomics courses. The paper proposes a framework where different decision problems are presented to students, who are asked to imagine that they are the managers of a firm who need to solve a problem in a particular business setting. In this setting, the instructors' role is to provide both guidance to facilitate student learning and content knowledge on a just-in-time basis.
Abstract:
We compare correspondence analysis to the logratio approach based on compositional data. We also compare correspondence analysis to an alternative approach using the Hellinger distance for representing categorical data in a contingency table. We propose a coefficient which globally measures the similarity between these approaches. This coefficient can be decomposed into several components, one for each principal dimension, indicating the contribution of the dimensions to the difference between the two representations. These three methods of representation can produce quite similar results. One illustrative example is given.
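The two representations differ only in how the row profiles of the table are transformed before a Euclidean analysis; a minimal numpy sketch with a made-up contingency table (this illustrates the two transforms, not the authors' similarity coefficient):

import numpy as np

N = np.array([[20., 10., 5.], [8., 16., 12.]])  # made-up contingency table
P = N / N.sum()                                  # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)              # row / column masses
profiles = P / r[:, None]                        # row profiles

# Correspondence analysis: chi-square metric, columns weighted by 1/sqrt(c).
chi2_coords = profiles / np.sqrt(c)

# Hellinger alternative: square-root transform, no column weighting.
hellinger_coords = np.sqrt(profiles)

print(chi2_coords)
print(hellinger_coords)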
Abstract:
The application of compositional data analysis through log-ratio transformations corresponds to a multinomial logit model for the shares themselves. This model is characterized by the property of Independence of Irrelevant Alternatives (IIA). IIA states that the odds ratio, in this case the ratio of shares, is invariant to the addition or deletion of outcomes to the problem. It is exactly this invariance of the ratio that underlies the commonly used zero replacement procedure in compositional data analysis. In this paper we investigate using the nested logit model, which does not embody IIA, and an associated zero replacement procedure, and compare its performance with that of the more usual approach of using the multinomial logit model. Our comparisons exploit a data set that combines voting data by electoral division with corresponding census data for each division for the 2001 Federal election in Australia.
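The IIA property can be checked numerically: under a multinomial logit the shares are a softmax of utilities, so the ratio of any two shares is unchanged when an outcome is deleted. A small Python sketch with made-up utilities:

import numpy as np

def logit_shares(u):
    e = np.exp(u - u.max())   # softmax with overflow guard
    return e / e.sum()

u = np.array([1.0, 0.2, -0.5])      # made-up utilities
full = logit_shares(u)
reduced = logit_shares(u[:2])       # delete the third outcome

# Under IIA the odds of outcome 1 vs outcome 2 are unchanged:
print(full[0] / full[1], reduced[0] / reduced[1])  # equal ratios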
Abstract:
The estimation of camera egomotion is a well-established problem in computer vision. Many approaches have been proposed based on both the discrete and the differential epipolar constraint. The discrete case is mainly used in self-calibrated stereoscopic systems, whereas the differential case deals with a single moving camera. The article surveys several methods for mobile robot egomotion estimation, covering more than 0.5 million samples using synthetic data. Results from real data are also given.
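For reference, the two constraints the surveyed methods build on, in one common textbook form (standard results, not formulas quoted from this article):

% Discrete case: corresponding normalized image points x_1, x_2,
% essential matrix E = \hat{t}\,R  (\hat{\cdot} = skew-symmetric):
x_2^{\top} E\, x_1 = 0 .
% Differential case: image point x with velocity \dot{x},
% camera linear/angular velocities v, \omega:
\dot{x}^{\top} \hat{v}\, x + x^{\top} \hat{\omega}\, \hat{v}\, x = 0 .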
Abstract:
At CoDaWork'03 we presented work on the analysis of archaeological glass compositional data. Such data typically consist of geochemical compositions involving 10-12 variables and approximate completely compositional data if the main component, silica, is included. We suggested that what has been termed 'crude' principal component analysis (PCA) of standardized data often identified interpretable pattern in the data more readily than analyses based on log-ratio transformed data (LRA). The fundamental problem is that, in LRA, minor oxides with high relative variation, which may not be structure carrying, can dominate an analysis and obscure pattern associated with variables present at higher absolute levels. We investigate this further using subcompositional data relating to archaeological glasses found on Israeli sites. A simple model for glass-making is that it is based on a 'recipe' consisting of two 'ingredients', sand and a source of soda. Our analysis focuses on the subcomposition of components associated with the sand source. A 'crude' PCA of standardized data shows two clear compositional groups that can be interpreted in terms of different recipes being used at different periods, reflected in absolute differences in the composition. LRA can be undertaken either by normalizing the data or by defining a 'residual'. In either case, after some 'tuning', these groups are recovered. The results from the normalized LRA are differently interpreted as showing that the source of sand used to make the glass differed. These results are complementary. One relates to the recipe used. The other relates to the composition (and presumed sources) of one of the ingredients. It seems to be axiomatic in some expositions of LRA that statistical analysis of compositional data should focus on relative variation via the use of ratios. Our analysis suggests that absolute differences can also be informative.
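The comparison at issue, 'crude' PCA of standardized data versus PCA after a log-ratio (clr) transform, can be set up in a few lines; a minimal numpy sketch on simulated compositions (illustrative only, not the paper's glass data):

import numpy as np

def pca_scores(X, k=2):
    """Scores on the first k principal components of X."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
X = rng.dirichlet([8., 3., 1., 0.5], size=50)   # made-up compositions

standardized = (X - X.mean(0)) / X.std(0)       # 'crude' PCA input
clr = np.log(X) - np.log(X).mean(axis=1, keepdims=True)  # log-ratio input

print(pca_scores(standardized)[:3])
print(pca_scores(clr)[:3])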
Abstract:
In human population genetics, routine applications of principal component techniques are often required. Population biologists make widespread use of certain discrete classifications of human samples into haplotypes, the monophyletic units of phylogenetic trees constructed from several hierarchically ordered single nucleotide bimorphisms. Compositional frequencies of the haplotypes are recorded within the different samples. Principal component techniques are then required as a dimension-reducing strategy to bring the dimension of the problem down to a manageable level, say two, to allow for graphical analysis. Population biologists at large are not aware of the special features of compositional data and normally make use of the crude covariance of compositional relative frequencies to construct principal components. In this short note we present our experience with using traditional linear principal components or compositional principal components based on logratios, with reference to a specific dataset.
Abstract:
The statistical analysis of literary style is the part of stylometry that compares measurable characteristics in a text that are rarely controlled by the author with those in other texts. When the goal is to settle authorship questions, these characteristics should relate to the author's style and not to the genre, epoch or editor, and they should be such that their variation between authors is larger than the variation within comparable texts from the same author. For an overview of the literature on stylometry and some of the techniques involved, see for example Mosteller and Wallace (1964, 82), Herdan (1964), Morton (1978), Holmes (1985), Oakes (1998) or Lebart, Salem and Berry (1998).

Tirant lo Blanc, a chivalry book, is the main work in Catalan literature and was hailed as "the best book of its kind in the world" by Cervantes in Don Quixote. Considered by writers like Vargas Llosa or Damaso Alonso to be the first modern novel in Europe, it has been translated several times into Spanish, Italian and French, with modern English translations by Rosenthal (1996) and La Fontaine (1993). The main body of the book was written between 1460 and 1465, but it was not printed until 1490.

There is an intense and long-lasting debate around its authorship, sprouting from its first edition, whose introduction states that the whole book is the work of Martorell (1413?-1468), while at the end it is stated that the last one fourth of the book is by Galba (?-1490), written after the death of Martorell. Some of the authors that support the theory of single authorship are Riquer (1990), Chiner (1993) and Badia (1993), while some of those supporting the double authorship are Riquer (1947), Coromines (1956) and Ferrando (1995). For an overview of this debate, see Riquer (1990). Neither of the two candidate authors left any text comparable to the one under study, and therefore discriminant analysis cannot be used to help classify chapters by author. By using sample texts encompassing about ten percent of the book, and looking at word length and at the use of 44 conjunctions, prepositions and articles, Ginebra and Cabos (1998) detect heterogeneities that might indicate the existence of two authors. By analyzing the diversity of the vocabulary, Riba and Ginebra (2000) estimate that stylistic boundary to be near chapter 383.

Following the lead of the extensive literature, this paper looks into word length, the use of the most frequent words and the use of vowels in each chapter of the book. Given that the features selected are categorical, this leads to three contingency tables of ordered rows and therefore to three sequences of multinomial observations. Section 2 explores these sequences graphically, observing a clear shift in their distribution. Section 3 describes the problem of the estimation of a sudden change-point in those sequences. In the following sections we propose various ways to estimate change-points in multinomial sequences: the method in Section 4 involves fitting models for polytomous data; the one in Section 5 fits gamma models onto the sequence of chi-square distances between each row profile and the average profile; the one in Section 6 fits models onto the sequence of values taken by the first component of the correspondence analysis, as well as onto sequences of other summary measures like the average word length. In Section 7 we fit models onto the marginal binomial sequences to identify the features that distinguish the chapters before and after that boundary. Most methods rely heavily on the use of generalized linear models.
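One simple version of the change-point estimate discussed above is maximum-likelihood segmentation of a multinomial sequence; a minimal Python sketch on simulated counts (illustrative only, not the gamma or correspondence-analysis fits of Sections 5-6):

import numpy as np

def seg_loglik(counts):
    """Multinomial log-likelihood of a block of count rows at the pooled MLE."""
    p = counts.sum(axis=0) / counts.sum()
    P = np.broadcast_to(p, counts.shape)
    m = counts > 0                          # treat 0 * log(0) as 0
    return float((counts[m] * np.log(P[m])).sum())

def change_point(counts):
    """Split maximizing the two-segment multinomial likelihood."""
    n = len(counts)
    return max(range(1, n),
               key=lambda t: seg_loglik(counts[:t]) + seg_loglik(counts[t:]))

rng = np.random.default_rng(1)
a = rng.multinomial(100, [0.5, 0.3, 0.2], size=30)   # 'first author' chapters
b = rng.multinomial(100, [0.3, 0.3, 0.4], size=20)   # 'second author' chapters
print(change_point(np.vstack([a, b])))               # expected near 30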