44 results for HETEROGENEOUS VARIANCE

in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain


Relevance:

30.00%

Publisher:

Abstract:

Was the increase in income inequality in the US due to permanent shocks or merely to an increase in the variance of transitory shocks? The implications for consumption and welfare depend crucially on the answer to this question. We use CEX repeated cross-section data on consumption and income to decompose idiosyncratic changes in income into predictable life-cycle changes, transitory shocks, and permanent shocks, and we estimate the contribution of each to total inequality. Our model fits the joint evolution of consumption and income inequality well and delivers two main results. First, we find that permanent changes in income explain all of the increase in inequality in the 1980s and 1990s. Second, we reconcile this finding with the fact that consumption inequality did not increase much over this period. Our results support the view that many permanent changes in income are predictable for consumers, even if they look unpredictable to the econometrician, consistent with models of heterogeneous income profiles.
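
The permanent/transitory decomposition can be illustrated with a toy simulation (not the paper's CEX estimation; all parameter values here are made up): log income is a permanent random-walk component plus an i.i.d. transitory shock, so cross-sectional inequality grows with the accumulated permanent variance while transitory shocks only add a roughly constant layer.

```python
import random
import statistics

random.seed(0)

N, T = 5000, 20
var_perm, var_trans = 0.01, 0.05  # hypothetical shock variances

perm = [0.0] * N          # permanent component: a random walk per individual
income_var, perm_var = [], []
for _ in range(T):
    for i in range(N):
        perm[i] += random.gauss(0.0, var_perm ** 0.5)
    # observed income = permanent component + fresh transitory shock
    income = [p + random.gauss(0.0, var_trans ** 0.5) for p in perm]
    income_var.append(statistics.pvariance(income))
    perm_var.append(statistics.pvariance(perm))

# Income inequality rises because the permanent component accumulates;
# the gap between the two series stays near var_trans throughout.
print("initial vs final income variance:", income_var[0], income_var[-1])
```

Under this process, income inequality at date t is approximately `t * var_perm + var_trans`, which is why a rise in inequality with flat transitory variance points to permanent shocks.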

Relevance:

30.00%

Publisher:

Abstract:

The concept of conditional stability constant is extended to the competitive binding of small molecules to heterogeneous surfaces or macromolecules via the introduction of the conditional affinity spectrum (CAS). The CAS describes the distribution of effective binding energies experienced by one complexing agent at a fixed concentration of the rest. We show that, when the multicomponent system can be described in terms of an underlying affinity spectrum [integral equation (IE) approach], the system can always be characterized by means of a CAS. The thermodynamic properties of the CAS and its dependence on the concentration of the rest of the components are discussed. In the context of metal/proton competition, analytical expressions for the mean (conditional average affinity) and the variance (conditional heterogeneity) of the CAS as functions of pH are reported and their physical interpretation discussed. Furthermore, we show that the dependence of the CAS variance on pH allows for the analytical determination of the correlation coefficient between the binding energies of the metal and the proton. The nonideal competitive adsorption (NICA) and Frumkin isotherms are used to illustrate the results of this work. Finally, the possibility of using CAS when the IE approach does not apply (for instance, when multidentate binding is present) is explored. © 2006 American Institute of Physics.
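
For 1:1 metal/proton competition on a single site, the conditional stability constant at fixed pH is K' = K_M / (1 + K_H [H+]). A minimal sketch of a CAS for a discrete, hypothetical site distribution (the binding constants below are invented for illustration, not taken from the paper):

```python
import math

# Hypothetical heterogeneous surface: three site types with equal fractions,
# each with a metal and a proton binding constant (log10 values).
sites = [(5.0, 4.0), (6.0, 6.0), (7.0, 8.0)]  # (logK_M, logK_H)

def conditional_spectrum(pH):
    """Conditional affinities logK' = logK_M - log10(1 + K_H*[H+]) at fixed pH."""
    h = 10.0 ** (-pH)
    return [lm - math.log10(1.0 + 10.0 ** lh * h) for lm, lh in sites]

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)

for pH in (3.0, 5.0, 7.0, 9.0):
    m, v = mean_var(conditional_spectrum(pH))
    print(f"pH={pH}: conditional mean affinity={m:.2f}, conditional heterogeneity={v:.3f}")
```

At high pH the proton no longer competes and the CAS collapses onto the bare metal affinity distribution; how the conditional variance moves with pH between the two limits reflects the correlation between metal and proton binding energies.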

Relevance:

20.00%

Publisher:

Abstract:

We analyze the incentives for cooperation of three players differing in their efficiency of effort in a contest game. We concentrate on the non-cooperative bargaining foundation of coalition formation, and therefore, we adopt a two-stage model. In the first stage, individuals form coalitions following a bargaining protocol similar to the one proposed by Gul (1989). Afterwards, coalitions play the contest game of Esteban and Ray (1999) within the resulting coalition structure of the first stage. We find that the grand coalition forms whenever the distribution of the bargaining power in the coalition formation game is equal to the distribution of the relative efficiency of effort. Finally, we use the case of equal bargaining power for all individuals to show that other types of coalition structures may be observed as well.

Relevance:

20.00%

Publisher:

Abstract:

Ever since the appearance of the ARCH model [Engle (1982a)], an impressive array of variance specifications belonging to the same class of models has emerged [e.g. Bollerslev's (1986) GARCH; Nelson's (1990) EGARCH]. This field has developed very successfully. Nevertheless, several empirical studies suggest that the performance of such models is not always appropriate [Boulier (1992)]. In this paper we propose a new specification: the Quadratic Moving Average Conditional Heteroskedasticity (QMACH) model. Its statistical properties, such as the kurtosis and the symmetry, as well as two estimators (Method of Moments and Maximum Likelihood), are studied. Two statistical tests are presented: the first tests for homoskedasticity, and the second discriminates between the ARCH and QMACH specifications. A Monte Carlo study is presented in order to illustrate some of the theoretical results. An empirical study is undertaken for the DM-US exchange rate.
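
The QMACH specification itself is not reproduced here; as a hedged sketch of the baseline it departs from, the following simulates a standard ARCH(1) process and checks two of the moment properties the paper studies (excess kurtosis, and a Method-of-Moments estimate, using the fact that for ARCH(1) the lag-1 autocorrelation of the squared process equals alpha):

```python
import random

random.seed(1)

# ARCH(1): eps_t = sigma_t * z_t,  sigma_t^2 = omega + alpha * eps_{t-1}^2
omega, alpha = 0.5, 0.3   # illustrative values; alpha < 1/sqrt(3) so kurtosis exists
n = 200_000
eps, eps_prev = [], 0.0
for _ in range(n):
    sigma2 = omega + alpha * eps_prev ** 2
    eps_prev = sigma2 ** 0.5 * random.gauss(0.0, 1.0)
    eps.append(eps_prev)

m2 = sum(e * e for e in eps) / n
m4 = sum(e ** 4 for e in eps) / n
kurt = m4 / m2 ** 2                      # theory: 3(1-alpha^2)/(1-3 alpha^2) > 3

# Method-of-moments estimate of alpha from the squared process
sq = [e * e for e in eps]
num = sum((sq[t] - m2) * (sq[t - 1] - m2) for t in range(1, n))
den = sum((s - m2) ** 2 for s in sq)
alpha_hat = num / den

print(f"variance {m2:.3f} (theory {omega / (1 - alpha):.3f}), "
      f"kurtosis {kurt:.2f} (theory {3 * (1 - alpha ** 2) / (1 - 3 * alpha ** 2):.2f}), "
      f"alpha_hat {alpha_hat:.3f}")
```

The fat tails (kurtosis above 3) despite conditionally Gaussian innovations are exactly the kind of property the paper derives analytically for its QMACH variant.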

Relevance:

20.00%

Publisher:

Abstract:

Public opinion surveys have become progressively incorporated into systems of official statistics. Surveys of the economic climate are usually qualitative because they collect the opinions of businesspeople and/or experts about the long-term indicators described by a number of variables. In such cases the responses are expressed in ordinal terms: the respondents verbally report, for example, whether during a given quarter the sales or the new orders have increased, decreased or remained the same as in the previous quarter. These data make it possible to calculate the percentage of respondents in the total population (results are extrapolated) who select each of the three options. Data are often presented in the form of an index calculated as the difference between the percentage of those who claim that a given variable has improved in value and the percentage of those who claim that it has deteriorated. As in any survey conducted on a sample, the question of the measurement of the sampling error of the results has to be addressed, since the error influences both the reliability of the results and the calculation of the sample size adequate for a desired confidence interval. The results presented here are based on data from the Survey of the Business Climate (Encuesta de Clima Empresarial) developed through the collaboration of the Statistical Institute of Catalonia (Institut d’Estadística de Catalunya) with the Chambers of Commerce (Cámaras de Comercio) of Sabadell and Terrassa.
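
The index described (percentage of "increased" minus percentage of "decreased") is a function of one multinomial draw, so its sampling error follows from the multinomial variance: Var(B) = [p_up + p_down - (p_up - p_down)^2] / n. A small sketch with invented counts:

```python
import math

def balance_index(n_up, n_same, n_down):
    """Balance index B = p_up - p_down and its standard error,
    treating the three answer counts as one multinomial sample of size n."""
    n = n_up + n_same + n_down
    p_up, p_down = n_up / n, n_down / n
    b = p_up - p_down
    se = math.sqrt((p_up + p_down - b * b) / n)
    return b, se

b, se = balance_index(180, 150, 70)   # hypothetical survey counts
lo, hi = b - 1.96 * se, b + 1.96 * se
print(f"B = {b:+.3f}, 95% CI [{lo:+.3f}, {hi:+.3f}]")

# Sample size so the 95% half-width is at most `margin`, in the worst
# case p_up + p_down = 1 and B = 0 (variance bounded by 1/n):
margin = 0.05
n_needed = math.ceil((1.96 / margin) ** 2)
print("n for a ±0.05 margin:", n_needed)
```

This is the confidence-interval/sample-size calculation the abstract alludes to, in its simplest form; a real design would also account for extrapolation weights and finite-population corrections.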

Relevance:

20.00%

Publisher:

Abstract:

Various methodologies in the economic literature have been used to analyse the international hydrocarbon retail sector. Nevertheless, at the Spanish level such studies are much more recent, and most conclude that, regardless of the approach used, there is generally no effective competition in this market. In this paper, in order to analyse price levels in the Spanish petrol market, our starting hypothesis is that in uncompetitive markets prices are higher and their standard deviation is lower. We use weekly retail petrol price data from the ten biggest Spanish cities, apply Markov chains to fill the missing values for petrol 95 and diesel, and also employ a variance filter. We conclude that this market exhibits reduced price dispersion, regardless of brand or city.

Relevance:

20.00%

Publisher:

Abstract:

It has recently been emphasized that, if individuals have heterogeneous dynamics, estimates of shock persistence based on aggregate data are significantly higher than those derived from their disaggregate counterparts. However, a careful examination of the implications of this statement for the various tools routinely employed to measure persistence is missing in the literature. This paper formally examines this issue. We consider a disaggregate linear model with heterogeneous dynamics and compare the values of several measures of persistence across aggregation levels. Interestingly, we show that the average persistence of aggregate shocks, as measured by the impulse response function (IRF) of the aggregate model or by the average of the individual IRFs, is identical at all horizons. This result remains true even in situations where the units are (short-memory) stationary but the aggregate process is long-memory or even nonstationary. In contrast, other popular persistence measures, such as the sum of the autoregressive coefficients or the largest autoregressive root, tend to be higher the higher the aggregation level. We argue, however, that this should be seen more as an undesirable property of these measures than as evidence of different average persistence across aggregation levels. The results are illustrated in an application using U.S. inflation data.
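
The point can be made with two heterogeneous AR(1) units (illustrative coefficients of my choosing): with equal weights, the aggregate response to a common shock is the average of the individual IRFs, while a scalar summary such as the largest autoregressive root describes only the slowest unit and overstates the average short-run response.

```python
# Two stationary AR(1) units with heterogeneous dynamics
rhos = [0.2, 0.95]

def avg_irf(h):
    """Average of the individual impulse responses rho^h at horizon h."""
    return sum(r ** h for r in rhos) / len(rhos)

for h in (1, 4, 8, 20):
    print(f"h={h}: average IRF {avg_irf(h):.4f} vs largest-root IRF {max(rhos) ** h:.4f}")

# At long horizons the average IRF is dominated by the slowest unit (its
# decay rate approaches 0.95 per period), but at h=1 the largest root (0.95)
# clearly exceeds the average response (0.575).
```

This is the abstract's contrast in miniature: the average-IRF notion of persistence is horizon-by-horizon well defined, whereas single-number summaries inherit the dynamics of the most persistent unit.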

Relevance:

20.00%

Publisher:

Abstract:

A multiple-partners assignment game with heterogeneous sellers and multiunit demands consists of a set of sellers that own a given number of indivisible units of (potentially many different) goods and a set of buyers who value those units and want to buy at most an exogenously fixed number of units. We define a competitive equilibrium for this generalized assignment game and prove its existence using only linear programming. In particular, we show how to compute equilibrium price vectors from the solutions of the dual linear program associated with the primal linear program defined to find optimal assignments. Using only linear programming tools, we also show (i) that the set of competitive equilibria (pairs of price vectors and assignments) has a Cartesian product structure: each equilibrium price vector is part of a competitive equilibrium with every optimal assignment, and vice versa; (ii) that the set of (restricted) equilibrium price vectors has a natural lattice structure; and (iii) how this structure is translated into the set of agents' utilities that are attainable at equilibrium.

Relevance:

20.00%

Publisher:

Abstract:

In this paper we consider ATM networks in which the virtual path (VP) concept is implemented. The question of how to multiplex two or more diverse traffic classes while providing different quality of service (QOS) requirements is a very complicated open problem. Two distinct options are available: integration and segregation. In an integration approach, all the traffic from different connections is multiplexed onto one VP, which implies that the most restrictive QOS requirements must be applied to all services. Link utilization is therefore decreased, because unnecessarily stringent QOS is provided to all connections. With the segregation approach the problem can be much simplified if the different types of traffic are separated by assigning to each a VP with dedicated resources (buffers and links). Resources may then not be efficiently utilized, because no sharing of bandwidth can take place across VPs. The probability that the bandwidth required by the accepted connections exceeds the capacity of the link is evaluated by the probability of congestion (PC). Since the PC can be expressed as the cell loss probability (CLP), we simply carry out bandwidth allocation using the PC. We first focus on the influence of some parameters (CLP, bit rate and burstiness) on the capacity required by a VP supporting a single traffic class using the new convolution approach. Numerical results are presented both to compare the required capacity and to observe under which conditions each approach is preferred.
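
A minimal sketch of a convolution approach of this kind (a generic on/off version, assuming parameters of my own, not the paper's exact traffic model): each accepted connection demands its peak rate with its activity probability, the per-connection demand distributions are convolved into the aggregate distribution, and the PC is the tail mass above the link capacity.

```python
def convolve(dist_a, dist_b):
    """Convolve two demand distributions given as {bandwidth: probability}."""
    out = {}
    for ba, pa in dist_a.items():
        for bb, pb in dist_b.items():
            out[ba + bb] = out.get(ba + bb, 0.0) + pa * pb
    return out

def congestion_probability(sources, capacity):
    """PC = P(aggregate demand > capacity) for independent on/off sources."""
    agg = {0: 1.0}
    for peak_rate, p_on in sources:
        agg = convolve(agg, {0: 1.0 - p_on, peak_rate: p_on})
    return sum(p for bw, p in agg.items() if bw > capacity)

# Hypothetical VP: 20 sources of peak 2 Mb/s active 30% of the time,
# plus 5 burstier sources of peak 10 Mb/s active 10% of the time.
sources = [(2, 0.3)] * 20 + [(10, 0.1)] * 5
for capacity in (20, 30, 40):
    print(f"capacity {capacity}: PC = {congestion_probability(sources, capacity):.6f}")
```

Dimensioning then amounts to choosing the smallest capacity whose PC falls below the target CLP; the burstier the sources, the more capacity this tail condition demands relative to the mean load.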

Relevance:

20.00%

Publisher:

Abstract:

We present a study of the continuous-time equations governing the dynamics of a susceptible-infected-susceptible (SIS) model on heterogeneous metapopulations. These equations have recently been proposed as an alternative formulation for the spread of infectious diseases in metapopulations in a continuous-time framework. Individual-based Monte Carlo simulations of epidemic spread in uncorrelated networks are also performed, revealing good agreement with analytical predictions under the assumption of simultaneous transmission/recovery and migration processes.
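
A toy stand-in for such continuous-time equations (a symmetric two-patch metapopulation rather than a heterogeneous network; rates are illustrative): within-patch infection at rate beta·S·I/N and recovery at rate mu, with migration acting simultaneously with the reaction terms, Euler-integrated to its endemic state.

```python
# Continuous-time SIS on two coupled patches, integrated with Euler steps
beta, mu, d = 0.8, 0.2, 0.1       # infection, recovery, migration rates
S, I = [990.0, 1000.0], [10.0, 0.0]
dt, steps = 0.01, 20000            # integrate to t = 200

for _ in range(steps):
    dS, dI = [0.0, 0.0], [0.0, 0.0]
    for k in (0, 1):
        n = S[k] + I[k]
        inf = beta * S[k] * I[k] / n       # within-patch transmission
        dS[k] += -inf + mu * I[k]
        dI[k] += inf - mu * I[k]
    for k, j in ((0, 1), (1, 0)):          # migration, simultaneous with reaction
        dS[k] += d * (S[j] - S[k])
        dI[k] += d * (I[j] - I[k])
    for k in (0, 1):
        S[k] += dt * dS[k]
        I[k] += dt * dI[k]

prevalence = [I[k] / (S[k] + I[k]) for k in (0, 1)]
print("endemic prevalence per patch:", [round(p, 3) for p in prevalence])
```

With beta/mu above 1, both patches converge to the endemic prevalence 1 − mu/beta even though the epidemic is seeded in only one of them; migration conserves the total population exactly.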

Relevance:

20.00%

Publisher:

Abstract:

We present the derivation of the continuous-time equations governing the limit dynamics of discrete-time reaction-diffusion processes defined on heterogeneous metapopulations. We show that, when a rigorous time limit is performed, the lack of an epidemic threshold in the spread of infections is not limited to metapopulations with a scale-free architecture, as had been predicted from dynamical equations in which reaction and diffusion occur sequentially in time.

Relevance:

20.00%

Publisher:

Abstract:

The front speed problem for nonuniform reaction rate and diffusion coefficient is studied by using singular perturbation analysis, the geometric approach of Hamilton-Jacobi dynamics, and the local speed approach. Exact and perturbative expressions for the front speed are obtained in the limit of large times. For linear and fractal heterogeneities, the analytical results are compared with numerical results, exhibiting good agreement. Finally, we reach a general expression for the speed of the front in the case of smooth and weak heterogeneities.
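
In the homogeneous limit that these perturbative results reduce to, the Fisher-KPP front speed is v = 2√(Dr). A rough numerical check of that baseline (explicit finite differences, my own grid parameters; the heterogeneous cases of the paper are not reproduced here):

```python
# Fisher-KPP equation u_t = D u_xx + r u (1 - u); asymptotic speed 2*sqrt(D*r)
D, r = 1.0, 1.0
dx, dt = 0.5, 0.05          # dt < dx^2 / (2 D) for stability
n = 700
u = [1.0 if i < 20 else 0.0 for i in range(n)]

def front_position(u):
    """Linearly interpolated position of the u = 0.5 level."""
    for i in range(len(u) - 1):
        if u[i] >= 0.5 > u[i + 1]:
            return (i + (u[i] - 0.5) / (u[i] - u[i + 1])) * dx
    return None

def step(u):
    new = u[:]
    for i in range(1, len(u) - 1):
        lap = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx ** 2
        new[i] = u[i] + dt * (D * lap + r * u[i] * (1 - u[i]))
    return new

for _ in range(1000):       # let the front relax toward its asymptotic shape
    u = step(u)
x1 = front_position(u)
for _ in range(1000):
    u = step(u)
x2 = front_position(u)
speed = (x2 - x1) / (1000 * dt)
print(f"measured speed {speed:.3f} vs 2*sqrt(D*r) = {2 * (D * r) ** 0.5:.3f}")
```

The measured speed approaches 2√(Dr) only slowly (logarithmically in time, for pulled fronts), which is why the paper's expressions are stated in the limit of large times.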

Relevance:

20.00%

Publisher:

Abstract:

Background: Systematic approaches for identifying proteins involved in different types of cancer are needed. Experimental techniques such as microarrays are being used to characterize cancer, but validating their results can be a laborious task. Computational approaches are used to prioritize between genes putatively involved in cancer, usually based on further analyzing experimental data. Results: We implemented a systematic method using the PIANA software that predicts cancer involvement of genes by integrating heterogeneous datasets. Specifically, we produced lists of genes likely to be involved in cancer by relying on: (i) protein-protein interactions; (ii) differential expression data; and (iii) structural and functional properties of cancer genes. The integrative approach that combines multiple sources of data obtained positive predictive values ranging from 23% (on a list of 811 genes) to 73% (on a list of 22 genes), outperforming the use of any of the data sources alone. We analyze a list of 20 cancer gene predictions, finding that most of them have recently been linked to cancer in the literature. Conclusion: Our approach to identifying and prioritizing candidate cancer genes can be used to produce lists of genes likely to be involved in cancer. Our results suggest that differential expression studies yielding high numbers of candidate cancer genes can be filtered using protein interaction networks.
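
A toy version of this kind of evidence integration (not the PIANA pipeline itself; gene names and evidence flags below are invented): each candidate carries boolean support from the three heterogeneous sources, candidates are ranked by how many sources agree, and a long differential-expression list is filtered by requiring interaction-network support.

```python
candidates = {
    # gene: (interacts_with_known_cancer_protein,
    #        differentially_expressed,
    #        has_cancer_gene_like_properties)
    "GENE_A": (True, True, True),
    "GENE_B": (True, False, True),
    "GENE_C": (False, True, False),
    "GENE_D": (False, False, False),
}

def score(evidence):
    """Number of independent data sources supporting the gene."""
    return sum(evidence)

ranked = sorted(candidates, key=lambda g: score(candidates[g]), reverse=True)
print("ranked candidates:", ranked)

# Filter expression hits by requiring protein-interaction support,
# mirroring the paper's conclusion about combining the two sources.
filtered = [g for g, e in candidates.items() if e[1] and e[0]]
print("expression hits with interaction support:", filtered)
```

Requiring agreement across sources shrinks the candidate list, which is the mechanism behind the higher positive predictive values reported for the smaller lists.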

Relevance:

20.00%

Publisher:

Abstract:

The increasing volume of data describing human disease processes and the growing complexity of understanding, managing, and sharing such data presents a huge challenge for clinicians and medical researchers. This paper presents the @neurIST system, which provides an infrastructure for biomedical research while aiding clinical care, by bringing together heterogeneous data and complex processing and computing services. Although @neurIST targets the investigation and treatment of cerebral aneurysms, the system’s architecture is generic enough that it could be adapted to the treatment of other diseases. Innovations in @neurIST include confining the patient data pertaining to aneurysms inside a single environment that offers clinicians the tools to analyze and interpret patient data and make use of knowledge-based guidance in planning their treatment. Medical researchers gain access to a critical mass of aneurysm-related data due to the system’s ability to federate distributed information sources. A semantically mediated grid infrastructure ensures that both clinicians and researchers are able to seamlessly access and work on data that is distributed across multiple sites in a secure way, in addition to providing computing resources on demand for performing computationally intensive simulations for treatment planning and research.