128 results for Zero-one laws
Abstract:
It has been argued that by truncating the sample space of the negative binomial and of the inverse Gaussian-Poisson mixture models at zero, one is allowed to extend the parameter space of the model. Here that is proved to be the case for the more general three-parameter Tweedie-Poisson mixture model. It is also proved that the distributions in the extended part of the parameter space are not the zero truncation of mixed Poisson distributions and that, other than for the negative binomial, they are not mixtures of zero-truncated Poisson distributions either. By extending the parameter space one can improve the fit when the frequency of ones is larger and the right tail is heavier than the unextended model allows. Considering the extended model also allows one to use the basic maximum likelihood based inference tools when parameter estimates fall in the extended part of the parameter space, and hence when the m.l.e. does not exist under the unextended model. This extended truncated Tweedie-Poisson model is shown to be useful in the analysis of word and species frequency count data.
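As an illustration of the zero-truncation idea only (not of the paper's three-parameter Tweedie-Poisson model or of its extended parameter region), the sketch below fits a zero-truncated negative binomial, the two-parameter special case mentioned above, by maximum likelihood; the data and parameterisation are made up.

```python
# Minimal sketch: maximum-likelihood fit of a zero-truncated negative binomial,
# the two-parameter special case of the mixed-Poisson families discussed above.
# Data and parameterisation are illustrative; the extended parameter region
# described in the abstract is not implemented here.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom

def ztnb_negloglik(params, counts):
    """Negative log-likelihood of a zero-truncated negative binomial,
    with params = (log r, logit p) and counts >= 1."""
    r = np.exp(params[0])
    p = 1.0 / (1.0 + np.exp(-params[1]))
    # P(X = k | X > 0) = P(X = k) / (1 - P(X = 0))
    logpmf = nbinom.logpmf(counts, r, p) - np.log1p(-nbinom.pmf(0, r, p))
    return -np.sum(logpmf)

# Toy word-frequency style counts: many ones and a heavy right tail.
counts = np.array([1] * 60 + [2] * 20 + [3] * 8 + [4] * 5 + [7] * 3 + [15] * 2 + [40])

fit = minimize(ztnb_negloglik, x0=[0.0, 0.0], args=(counts,), method="Nelder-Mead")
r_hat = np.exp(fit.x[0])
p_hat = 1.0 / (1.0 + np.exp(-fit.x[1]))
print(r_hat, p_hat, fit.fun)
```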
Abstract:
The purpose of this TFC (final degree project) is to present a lightweight methodology to be applied to this so-called
Abstract:
One of the tantalising remaining problems in compositional data analysis lies in how to deal with data sets in which there are components which are essential zeros. By an essential zero we mean a component which is truly zero, not something recorded as zero simply because the experimental design or the measuring instrument has not been sufficiently sensitive to detect a trace of the part. Such essential zeros occur in many compositional situations, such as household budget patterns, time budgets, palaeontological zonation studies, ecological abundance studies. Devices such as nonzero replacement and amalgamation are almost invariably ad hoc and unsuccessful in such situations. From consideration of such examples it seems sensible to build up a model in two stages, the first determining where the zeros will occur and the second how the unit available is distributed among the non-zero parts. In this paper we suggest two such models, an independent binomial conditional logistic normal model and a hierarchical dependent binomial conditional logistic normal model. The compositional data in such modelling consist of an incidence matrix and a conditional compositional matrix. Interesting statistical problems arise, such as the question of estimability of parameters, the nature of the computational process for the estimation of both the incidence and compositional parameters caused by the complexity of the subcompositional structure, the formation of meaningful hypotheses, and the devising of suitable testing methodology within a lattice of such essential zero-compositional hypotheses. The methodology is illustrated by application to both simulated and real compositional data.
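A minimal simulation sketch of the two-stage construction described above, assuming an independent-binomial incidence stage and an additive logistic-normal stage that is simply renormalised over the non-zero parts (a simplification of the conditional model in the paper); all dimensions and parameter values are illustrative.

```python
# Minimal sketch of the two-stage essential-zero construction: stage 1 draws the
# incidence pattern (which parts are zero), stage 2 distributes the unit over the
# non-zero parts via an additive logistic-normal draw that is renormalised over
# the present parts (a simplification of the conditional model in the paper).
import numpy as np

rng = np.random.default_rng(0)
D = 4                                               # number of compositional parts
p_present = np.array([0.9, 0.7, 0.5, 0.95])         # independent-binomial incidence probabilities
mu, Sigma = np.zeros(D - 1), 0.25 * np.eye(D - 1)   # logistic-normal parameters

def one_composition():
    z = rng.random(D) < p_present                   # stage 1: incidence vector
    while not z.any():                              # insist on at least one non-zero part
        z = rng.random(D) < p_present
    y = rng.multivariate_normal(mu, Sigma)          # stage 2: additive log-ratio coordinates
    x = np.append(np.exp(y), 1.0)
    x /= x.sum()                                    # point on the full simplex
    comp = np.where(z, x, 0.0)
    return comp / comp.sum()                        # renormalise over the present parts

sample = np.array([one_composition() for _ in range(5)])
print(sample)                                       # rows sum to 1, with structural zeros
```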
Abstract:
There is almost no case in exploration geology where the studied data do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these “zero data” represent a mathematical challenge for the interpretation. We need to start by recognizing that there are zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot co-exist with nepheline. Another common essential zero is a North azimuth; however, we can always replace that zero with the value 360°. These are known as “essential zeros”, but what can we do with “rounded zeros” that result from values below the detection limit of the equipment? Amalgamation, e.g. adding Na2O and K2O as total alkalis, is a solution, but sometimes we need to differentiate between a sodic and a potassic alteration. Pre-classification into groups requires a good knowledge of the distribution of the data and of the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the detection limit of the equipment used will generate spurious distributions, especially in ternary diagrams. The same will occur if we replace the zero values by a small amount using non-parametric or parametric techniques (imputation). The method that we propose takes into consideration the well-known relationships between some elements. For example, in copper porphyry deposits there is always a good direct correlation between the copper values and the molybdenum ones, but while copper will always be above the detection limit, many of the molybdenum values will be “rounded zeros”. So we take the lower quartile of the real molybdenum values and establish a regression equation with copper, and then we estimate the “rounded” zero values of molybdenum from their corresponding copper values. The method can be applied to any type of data, provided we first establish their correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the “rounded zeros”, but one that depends on the value of the other variable. Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency
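The sketch below illustrates the regression step described in the abstract: fit log Mo on log Cu over the lower quartile of the detected molybdenum values and predict the below-detection samples from their copper values. The synthetic data, the log-log form of the regression and the variable names are assumptions made for illustration.

```python
# Minimal sketch of the correlation-based replacement of "rounded zeros":
# regress Mo on Cu over the lower quartile of the *detected* Mo values, then
# predict Mo for the censored (below-detection-limit) samples from their Cu.
# Synthetic data and the log-log regression form are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
cu = rng.lognormal(mean=0.5, sigma=0.6, size=200)            # Cu (ppm), always detected
mo_true = 0.05 * cu ** 0.8 * rng.lognormal(0.0, 0.3, 200)    # Mo (ppm), correlated with Cu
dl = 0.04                                                    # Mo detection limit
mo = np.where(mo_true >= dl, mo_true, np.nan)                # "rounded zeros" stored as NaN

detected = ~np.isnan(mo)
q1 = np.nanquantile(mo, 0.25)                                # lower quartile of detected Mo
low = detected & (mo <= q1)

slope, intercept = np.polyfit(np.log(cu[low]), np.log(mo[low]), 1)   # OLS on logs

mo_filled = mo.copy()
mo_filled[~detected] = np.exp(intercept + slope * np.log(cu[~detected]))
print(mo_filled[~detected][:5])   # each former zero now depends on its paired Cu value
```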
Abstract:
We show that if performance measures in a stochastic scheduling problem satisfy a set of so-called partial conservation laws (PCL), which extend previously studied generalized conservation laws (GCL), then the problem is solved optimally by a priority-index policy for an appropriate range of linear performance objectives, where the optimal indices are computed by a one-pass adaptive-greedy algorithm, based on Klimov's. We further apply this framework to investigate the indexability property of restless bandits introduced by Whittle, obtaining the following results: (1) we identify a class of restless bandits (PCL-indexable) which are indexable; membership in this class is tested through a single run of the adaptive-greedy algorithm, which also computes the Whittle indices when the test is positive; this provides a tractable sufficient condition for indexability; (2) we further identify the class of GCL-indexable bandits, which includes classical bandits, having the property that they are indexable under any linear reward objective. The analysis is based on the so-called achievable region method, as the results follow from new linear programming formulations for the problems investigated.
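The paper's PCL framework and adaptive-greedy algorithm are not reproduced here; as a much simpler illustration of what a priority-index policy is, the sketch below computes the classical cμ indices for a few job classes and orders the classes accordingly (all numbers are made up).

```python
# Not the paper's PCL/adaptive-greedy machinery: just the classical c-mu rule,
# the simplest priority-index policy, to illustrate the notion of an index.
# Each class gets a scalar index; the scheduler serves the waiting class with
# the largest index. Costs and rates are illustrative.
holding_cost = {"A": 3.0, "B": 1.0, "C": 5.0}    # cost per unit time spent waiting
service_rate = {"A": 0.5, "B": 2.0, "C": 0.6}    # service completions per unit time

index = {k: holding_cost[k] * service_rate[k] for k in holding_cost}
priority_order = sorted(index, key=index.get, reverse=True)
print(index)             # {'A': 1.5, 'B': 2.0, 'C': 3.0}
print(priority_order)    # ['C', 'B', 'A'] -- serve the highest-index class first
```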
Abstract:
We present a complete calculation of the structure of liquid 4He confined to a concave nanoscopic wedge, as a function of the opening angle of the walls. This is achieved within a finite-range density functional formalism. The results here presented, restricted to alkali metal substrates, illustrate the change in meniscus shape from rather broad to narrow wedges on weak and strong alkali adsorbers, and we relate this change to the wetting behavior of helium on the corresponding planar substrate. As the wedge angle is varied, we find a sequence of stable states that, in the case of cesium, undergo one filling and one emptying transition at large and small openings, respectively. A computationally unambiguous criterion to determine the contact angle of 4He on cesium is also proposed.
Abstract:
Within local-spin-density functional theory, we have investigated the “dissociation” of few-electron circular vertical semiconductor double quantum ring artificial molecules at zero magnetic field as a function of interring distance. In a first step, the molecules are constituted by two identical quantum rings. When the rings are quantum mechanically strongly coupled, the electronic states are substantially delocalized, and the addition energy spectra of the artificial molecule resemble those of a single quantum ring in the few-electron limit. When the rings are quantum mechanically weakly coupled, the electronic states in the molecule are substantially localized in one ring or the other, although the rings can be electrostatically coupled. The effect of a slight mismatch introduced in the molecules from nominally identical quantum wells, or from changes in the inner radius of the constituent rings, induces localization by offsetting the energy levels in the quantum rings. This plays a crucial role in the appearance of the addition spectra as a function of coupling strength, particularly in the weak coupling limit.
Abstract:
A generic prediction of inflation is that the thermalized region we inhabit is spatially infinite. Thus, it contains an infinite number of regions of the same size as our observable universe, which we shall denote as O regions. We argue that the number of possible histories which may take place inside of an O region, from the time of recombination up to the present time, is finite. Hence, there are an infinite number of O regions with identical histories up to the present, but which need not be identical in the future. Moreover, all histories which are not forbidden by conservation laws will occur in a finite fraction of all O regions. The ensemble of O regions is reminiscent of the ensemble of universes in the many-world picture of quantum mechanics. An important difference, however, is that other O regions are unquestionably real.
Abstract:
We study energy relaxation in thermalized one-dimensional nonlinear arrays of the Fermi-Pasta-Ulam type. The ends of the thermalized systems are placed in contact with a zero-temperature reservoir via damping forces. Harmonic arrays relax by sequential phonon decay into the cold reservoir, the lower-frequency modes relaxing first. The relaxation pathway for purely anharmonic arrays involves the degradation of higher-energy nonlinear modes into lower-energy ones. The lowest-energy modes are absorbed by the cold reservoir, but a small amount of energy is persistently left behind in the array in the form of almost stationary low-frequency localized modes. Arrays with interactions that contain both a harmonic and an anharmonic contribution exhibit behavior that involves the interplay of phonon modes and breather modes. At long times relaxation is extremely slow due to the spontaneous appearance and persistence of energetic high-frequency stationary breathers. Breather behavior is further ascertained by explicitly injecting a localized excitation into the thermalized arrays and observing the relaxation behavior.
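A minimal simulation sketch of the setup described above: a one-dimensional FPU-β chain whose two end particles feel a linear damping force standing in for the zero-temperature reservoir. The chain length, coupling constants, integrator and initial conditions are illustrative choices, not the paper's.

```python
# Minimal sketch of energy relaxation in an FPU-beta chain whose end particles
# are damped (the T = 0 reservoir). Parameters and integrator are illustrative.
import numpy as np

N, beta, gamma, dt, steps = 64, 1.0, 0.1, 0.02, 20000
rng = np.random.default_rng(2)

x = np.zeros(N)                       # displacements (fixed walls at both ends)
v = rng.normal(0.0, 1.0, N)           # crude stand-in for a thermalized initial state

def forces(x):
    d = np.diff(np.concatenate(([0.0], x, [0.0])))   # N + 1 bond stretches
    g = d + beta * d ** 3                            # harmonic + quartic bond force
    return g[1:] - g[:-1]                            # net force on each particle

damping = np.zeros(N)
damping[0] = damping[-1] = gamma      # only the chain ends are coupled to the bath

energy = []
for t in range(steps):
    v += dt * (forces(x) - damping * v)   # semi-implicit Euler step
    x += dt * v
    if t % 2000 == 0:
        d = np.diff(np.concatenate(([0.0], x, [0.0])))
        energy.append(0.5 * np.sum(v ** 2) + np.sum(0.5 * d ** 2 + 0.25 * beta * d ** 4))

print(energy[0], energy[-1])   # total energy decays as the damped ends drain it
```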
Abstract:
The paper is devoted to the study of a type of differential system that usually appears in the study of certain Hamiltonian systems with 2 degrees of freedom. We prove the existence of infinitely many periodic orbits on each negative energy level. All these periodic orbits pass near the total collision. Finally, we apply these results to study the existence of periodic orbits in the charged collinear 3-body problem.
Abstract:
This paper analyzes the linkages between the credibility of a target zone regime, the volatility of the exchange rate, and the width of the band where the exchange rate is allowed to fluctuate. These three concepts should be related since the band width induces a trade-off between credibility and volatility. Narrower bands should give less scope for the exchange rate to fluctuate but may make agents perceive a larger probability of realignment, which by itself should increase the volatility of the exchange rate. We build a model where this trade-off is made explicit. The model is used to understand the reduction in volatility experienced by most EMS countries after their target zones were widened in August 1993. As a natural extension, the model also rationalizes the existence of non-official, implicit target zones (or fear of floating), suggested by some authors.
Abstract:
This paper provides empirical evidence that continuous-time models with one volatility factor are, under some conditions, able to fit the main characteristics of financial data. It also reports the importance of the feedback factor in capturing the strong volatility clustering of the data, caused by a possible change in the pattern of volatility in the last part of the sample. We use the Efficient Method of Moments (EMM) of Gallant and Tauchen (1996) to estimate logarithmic models with one and two stochastic volatility factors (with and without feedback) and to select among them.
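As a sketch of the kind of model being estimated (not of the EMM procedure itself), the code below simulates a discrete-time one-factor logarithmic stochastic-volatility model without the feedback term; parameter values are illustrative.

```python
# Minimal discrete-time sketch of a one-factor logarithmic stochastic-volatility
# model (no feedback term, EMM estimation not shown). Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(3)
T, mu, phi, sigma_v = 2000, -0.2, 0.97, 0.15

h = np.empty(T)                       # h_t = log variance
h[0] = mu / (1 - phi)                 # start at the unconditional mean
for t in range(1, T):
    h[t] = mu + phi * h[t - 1] + sigma_v * rng.normal()

returns = np.exp(h / 2) * rng.normal(size=T)       # r_t = sigma_t * z_t

a = np.abs(returns)
print(np.corrcoef(a[:-1], a[1:])[0, 1])   # positive autocorrelation of |r_t|: volatility clustering
```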
Abstract:
For the many-to-one matching model in which firms have substitutable and quota q-separable preferences over subsets of workers we show that the workers-optimal stable mechanism is group strategy-proof for the workers. In order to prove this result, we also show that under this domain of preferences (which contains the domain of responsive preferences of the college admissions problem) the workers-optimal stable matching is weakly Pareto optimal for the workers and the Blocking Lemma holds as well. We exhibit an example showing that none of these three results remain true if the preferences of firms are substitutable but not quota q-separable.
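For the special case of responsive preferences (the college-admissions domain mentioned above), the workers-optimal stable matching can be computed by worker-proposing deferred acceptance; the sketch below implements only that special case, with made-up names, quotas and preference lists, and does not cover general substitutable, quota q-separable preferences.

```python
# Minimal sketch of the workers-optimal stable mechanism for the responsive-
# preference (college-admissions) special case mentioned above: worker-proposing
# deferred acceptance. Names, quotas and preferences are illustrative; general
# substitutable, quota q-separable preferences are not handled.
def worker_proposing_da(worker_prefs, firm_rank, quota):
    """worker_prefs: worker -> ordered list of acceptable firms.
    firm_rank: firm -> {worker: rank}, lower rank = more preferred.
    quota: firm -> capacity q."""
    next_choice = {w: 0 for w in worker_prefs}
    held = {f: [] for f in firm_rank}
    free = list(worker_prefs)
    while free:
        w = free.pop()
        prefs = worker_prefs[w]
        if next_choice[w] >= len(prefs):
            continue                              # w has exhausted acceptable firms
        f = prefs[next_choice[w]]
        next_choice[w] += 1
        if w not in firm_rank[f]:
            free.append(w)                        # w is unacceptable to f: keep proposing
            continue
        held[f].append(w)
        held[f].sort(key=firm_rank[f].get)        # firm keeps its best proposers so far
        if len(held[f]) > quota[f]:
            free.append(held[f].pop())            # reject the worst one beyond the quota
    return held

workers = {"w1": ["f1", "f2"], "w2": ["f1", "f2"], "w3": ["f2", "f1"]}
firms = {"f1": {"w1": 1, "w2": 2, "w3": 3}, "f2": {"w3": 1, "w1": 2, "w2": 3}}
print(worker_proposing_da(workers, firms, {"f1": 1, "f2": 2}))
# {'f1': ['w1'], 'f2': ['w3', 'w2']} -- the workers-optimal stable matching here
```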
Abstract:
This paper examines competition in a spatial model of two-candidate elections, where one candidate enjoys a quality advantage over the other candidate. The candidates care about winning and also have policy preferences. There is two-dimensional private information. Candidate ideal points as well as their tradeoffs between policy preferences and winning are private information. The distribution of this two-dimensional type is common knowledge. The location of the median voter's ideal point is uncertain, with a distribution that is commonly known by both candidates. Pure strategy equilibria always exist in this model. We characterize the effects of increased uncertainty about the median voter, the effect of candidate policy preferences, and the effects of changes in the distribution of private information. We prove that the distribution of candidate policies approaches the mixed equilibrium of Aragones and Palfrey (2002a), when both candidates' weights on policy preferences go to zero.
Abstract:
The presence of subcentres cannot be captured by an exponential function. Cubic spline functions seem more appropriate to depict the polycentricity pattern of modern urban systems. Using data from the Barcelona Metropolitan Region, two possible population subcentre delimitation procedures are discussed: one takes an estimated derivative equal to zero, the other a density gradient equal to zero. It is argued that, when using a cubic spline function, a delimitation strategy based on derivatives is more appropriate than one based on gradients, because the estimated density can be negative in sections with very low densities and few observations, leading to sudden changes in estimated gradients. It is also argued that using a second derivative equal to zero as the criterion for subcentre delimitation allows us to capture a more restricted subcentre area than using a first derivative equal to zero. This methodology can also be used for intermediate ring delimitation.
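A minimal sketch of the derivative-based delimitation idea: fit a cubic spline to density as a function of distance from the centre and locate the zeros of its first and second derivatives. The synthetic density profile and the use of scipy's interpolating CubicSpline (rather than the regression spline estimated in the paper) are assumptions for illustration.

```python
# Minimal sketch of the derivative-based subcentre delimitation: fit a cubic
# spline to density versus distance from the centre and find where its first and
# second derivatives vanish. Synthetic data; an interpolating spline stands in
# for the estimated cubic spline density function of the paper.
import numpy as np
from scipy.interpolate import CubicSpline

distance = np.linspace(0.0, 30.0, 31)                     # km from the CBD
density = (12000 * np.exp(-0.25 * distance)               # monocentric component
           + 3000 * np.exp(-0.5 * (distance - 18) ** 2))  # one subcentre near 18 km

spline = CubicSpline(distance, density)
d1, d2 = spline.derivative(1), spline.derivative(2)

crit1 = d1.roots(extrapolate=False)                       # zeros of the first derivative
peaks = [r for r in crit1 if d2(r) < 0]                   # local maxima = subcentre candidates
crit2 = d2.roots(extrapolate=False)                       # inflection points: tighter delimitation
print(peaks)
print(crit2)
```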