17 results for variance and coherence
at the Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
This paper investigates the relationship between monetary policy and the changes experienced by the US economy using a small-scale New-Keynesian model. The model is estimated with Bayesian techniques, and the stability of the policy parameter estimates and of the transmission of policy shocks is examined. The model fits the data well and produces forecasts comparable or superior to those of alternative specifications. The parameters of the policy rule, the variance of policy shocks and their transmission have been remarkably stable. The parameters of the Phillips curve and of the Euler equations, by contrast, vary over time.
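For orientation, small-scale New-Keynesian models of this kind are usually written as the standard three-equation system below; this is a generic textbook specification, assumed here rather than taken from the paper:

\begin{aligned}
x_t   &= \mathbb{E}_t x_{t+1} - \sigma^{-1}\big(i_t - \mathbb{E}_t \pi_{t+1}\big) + u_t^{x} && \text{(Euler/IS equation)} \\
\pi_t &= \beta\,\mathbb{E}_t \pi_{t+1} + \kappa\,x_t + u_t^{\pi} && \text{(Phillips curve)} \\
i_t   &= \rho\,i_{t-1} + (1-\rho)\big(\phi_{\pi}\pi_t + \phi_{x}x_t\big) + u_t^{i} && \text{(policy rule)}
\end{aligned}

where x_t is the output gap, \pi_t inflation and i_t the policy rate. In this notation, the stability findings concern \rho, \phi_{\pi}, \phi_{x} and the variance of u_t^{i}, while the instability concerns the Phillips-curve and Euler-equation parameters \kappa, \beta and \sigma.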
Abstract:
Principal curves were defined by Hastie and Stuetzle (JASA, 1989) as smooth curves passing through the middle of a multidimensional data set. They are nonlinear generalizations of the first principal component, a characterization of which is the basis for the principal curves definition. In this paper we propose an alternative approach based on a different property of principal components. Consider a point in the space where a multivariate normal is defined and, for each hyperplane containing that point, compute the total variance of the normal distribution conditioned to belong to that hyperplane. Choose the hyperplane minimizing this conditional total variance and look for the corresponding conditional mean. The first principal component of the original distribution passes through this conditional mean and is orthogonal to that hyperplane. This property is easily generalized to data sets with nonlinear structure. Repeating the search from different starting points, many points analogous to conditional means are found. We call them principal oriented points. A one-dimensional curve running through the set of these special points is called a principal curve of oriented points. Successive principal curves are recursively defined from a generalization of the total variance.
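A crude two-dimensional sketch of the search for a principal oriented point (the helper below is hypothetical and only approximates the conditional quantities with the sample points lying near each candidate hyperplane; the paper's actual algorithm may differ):

import numpy as np

def principal_oriented_point(X, x0, n_dirs=180, band=0.2, n_iter=20):
    # For each line through the current point (a hyperplane in 2-D), take the
    # sample points within a band around it as a stand-in for the conditional
    # distribution, and move to the mean of the band whose total variance
    # (sum of coordinate variances) is smallest.
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        best_var, best_mean = np.inf, x
        for theta in np.linspace(0, np.pi, n_dirs, endpoint=False):
            v = np.array([np.cos(theta), np.sin(theta)])  # hyperplane normal
            d = (X - x) @ v                                # signed distances
            S = X[np.abs(d) < band]                        # points near the hyperplane
            if len(S) < 5:
                continue
            var = S.var(axis=0).sum()                      # conditional total variance
            if var < best_var:
                best_var, best_mean = var, S.mean(axis=0)
        if np.allclose(best_mean, x):
            break
        x = best_mean
    return x

X = np.random.default_rng(0).multivariate_normal([0, 0], [[3, 1], [1, 1]], 500)
print(principal_oriented_point(X, X[0]))

Repeating the call from many starting points traces out the set of principal oriented points through which the principal curve runs.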
Abstract:
In this paper, we document the fact that countries that have experienced occasional financial crises have on average grown faster than countries with stable financial conditions. We measure the incidence of crisis with the skewness of credit growth, and find that it has a robust negative effect on GDP growth. This link coexists with the negative link between variance and growth typically found in the literature. To explain the link between crises and growth we present a model where weak institutions lead to severe financial constraints and low growth. Financial liberalization policies that facilitate risk-taking increase leverage and investment. This leads to higher growth, but also to a greater incidence of crises. Conditions are established under which the costs of crises are outweighed by the benefits of higher growth.
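The measurement step can be sketched in a few lines; the data below are simulated placeholders, and the regression is only the shape of the exercise, not the paper's actual estimation:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
credit_growth = rng.normal(0.05, 0.03, size=(40, 30))  # T years x N countries (placeholder)
gdp_growth = rng.normal(0.02, 0.01, size=30)           # average GDP growth per country

skew = stats.skew(credit_growth, axis=0)   # crisis-incidence proxy: negative skewness
var = credit_growth.var(axis=0)            # the usual volatility measure
X = np.column_stack([np.ones(30), skew, var])
beta, *_ = np.linalg.lstsq(X, gdp_growth, rcond=None)
# The paper's finding corresponds to beta[1] < 0 (more negatively skewed credit
# growth, i.e. occasional crises, going with faster growth) coexisting with the
# familiar negative variance coefficient beta[2] < 0.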
Abstract:
We study the motion of an unbound particle under the influence of a random force modeled as Gaussian colored noise with an arbitrary correlation function. We derive exact equations for the joint and marginal probability density functions and find the associated solutions. We analyze in detail anomalous diffusion behaviors along with the fractal structure of the trajectories of the particle and explore possible connections between dynamical exponents of the variance and the fractal dimension of the trajectories.
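A simulation sketch for one assumed special case, exponentially correlated (Ornstein-Uhlenbeck) noise; the paper treats arbitrary correlation functions:

import numpy as np

rng = np.random.default_rng(1)
dt, n_steps, n_paths, tau = 0.01, 5000, 200, 1.0
F = np.zeros(n_paths)               # colored Gaussian force, OU process
v = np.zeros(n_paths)
x = np.zeros((n_steps, n_paths))    # unbound particle: dv/dt = F(t)
for t in range(1, n_steps):
    F += (-F / tau) * dt + np.sqrt(2 * dt / tau) * rng.standard_normal(n_paths)
    v += F * dt
    x[t] = x[t - 1] + v * dt
msd = x.var(axis=1)                 # variance over realizations at each time
t_grid = dt * np.arange(n_steps)
late = slice(n_steps // 2, None)    # fit the late-time power law Var[x] ~ t^alpha
alpha = np.polyfit(np.log(t_grid[late]), np.log(msd[late]), 1)[0]
print(alpha)                        # superdiffusive dynamical exponent for this driving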
Abstract:
The objective of this research was to analyse the potential of Normalized Difference Vegetation Index (NDVI) maps from satellite images, yield maps and grapevine fertility and load variables to delineate zones with different wine grape properties for selective harvesting. Two vineyard blocks located in NE Spain (Cabernet Sauvignon and Syrah) were analysed. The NDVI was computed from a Quickbird-2 multi-spectral image at veraison (July 2005). Yield data were acquired by means of a yield monitor during September 2005. Other variables, such as the number of buds, number of shoots, number of wine grape clusters and weight of 100 berries, were sampled in a 10 rows × 5 vines pattern and used as input variables, in combination with the NDVI, to define the clusters as an alternative to yield maps. Two days prior to harvesting, grape samples were taken. The analysed variables were probable alcoholic degree, pH of the juice, total acidity, total phenolics, colour, anthocyanins and tannins. The input variables, alone or in combination, were clustered (2 and 3 clusters) using the ISODATA algorithm, and an analysis of variance and a multiple range test were performed. The results show that the zones derived from the NDVI maps are more effective at differentiating grape maturity and quality variables than the zones derived from the yield maps. The inclusion of other grapevine fertility and load variables did not improve the results.
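The zoning step, in outline (scikit-learn has no ISODATA implementation, so k-means with a fixed cluster count is used here as a rough stand-in; the bands are simulated placeholders):

import numpy as np
from sklearn.cluster import KMeans

def ndvi(nir, red):
    # Normalized Difference Vegetation Index from NIR and red reflectances.
    return (nir - red) / (nir + red + 1e-9)

rng = np.random.default_rng(2)
nir = rng.uniform(0.3, 0.6, size=(100, 100))   # placeholder NIR band
red = rng.uniform(0.05, 0.2, size=(100, 100))  # placeholder red band
v = ndvi(nir, red).ravel()

zones = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(v.reshape(-1, 1))
# Zone labels are then compared against the grape quality variables with an
# analysis of variance and a multiple range test, as in the study.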
Abstract:
This study focuses on physical activity programmes aimed at older people in situations of dependency, delivered in groups and using active movement as the main working tool. On the one hand, the research studies and analyses the theoretical foundations that support the importance and necessity of applying this type of programme. On the other hand, it justifies and defines, in detail, the guidelines that should steer their development and application within institutions for the care of older people (residential homes, day centres and socio-health centres). The conceptual framework (chapter II) is built from an extensive literature review of the four key dimensions of analysis (ageing, dependency, care of older people and physical activity), which grounds and justifies the programme proposal made in the second part (chapter III). This part defines the points of reference, the aims, the objectives, the resources that can be used, the basic indications for organizing the work, the essential methodological aspects for applying the programme and the conditions needed to implement it. The outcome of the whole process of research and study supports the conclusion that, at a theoretical level, physical activity is a useful and effective tool with great potential for the care of older people in situations of dependency; that such programmes must be the product of a planning process, must consider the different dimensions of the person in constant interaction, and must be applied giving more importance to the process than to the product; and that the available resources must be used in accordance with these principles. As future lines for this research, its continuation is proposed through the application of the VAFiD programme to different groups and its evaluation (following Robert Stake's responsive evaluation model) to determine its coherence and quality.
Abstract:
We present a new method for constructing exact distribution-free tests (and confidence intervals) for variables that can generate more than two possible outcomes. This method separates the search for an exact test from the goal of creating a non-randomized test. Randomization is used to extend any exact test relating to means of variables with finitely many outcomes to variables with outcomes belonging to a given bounded set. Tests in terms of variance and covariance are reduced to tests relating to means. Randomness is then eliminated in a separate step. This method is used to create confidence intervals for the difference between two means (or variances) and tests of stochastic inequality and correlation.
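The randomization step admits a compact illustration with a standard device (whether the paper uses exactly this construction is an assumption): an outcome known to lie in [0, 1] is replaced by a Bernoulli draw with success probability equal to the outcome, which preserves the mean and reduces the problem to one where an exact binomial test exists.

import numpy as np
from scipy import stats

def randomized_exact_mean_test(x, mu0, rng):
    # Exact randomized test of H0: E[X] <= mu0 for outcomes x in [0, 1].
    # Binarize: replace each x_i by Bernoulli(x_i), preserving the mean,
    # then apply the exact binomial test to the resulting 0/1 sample.
    b = (rng.random(len(x)) < x).astype(int)
    return stats.binomtest(b.sum(), len(x), p=mu0, alternative="greater").pvalue

rng = np.random.default_rng(3)
x = rng.beta(4, 2, size=50)   # bounded outcomes in [0, 1]
print(randomized_exact_mean_test(x, mu0=0.5, rng=rng))

The de-randomization mentioned in the abstract is the separate, second step; it is not sketched here.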
Abstract:
Asymptotic chi-squared test statistics for testing the equality of moment vectors are developed. The test statistics proposed are generalized Wald test statistics that specialize for different settings by inserting an appropriate asymptotic variance matrix of sample moments. Scaled test statistics are also considered for dealing with situations of non-iid sampling. The specialization is carried out for testing the equality of multinomial populations, and the equality of variance and correlation matrices for both normal and non-normal data. When testing the equality of correlation matrices, a scaled version of the normal theory chi-squared statistic is proven to be an asymptotically exact chi-squared statistic in the case of elliptical data.
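The generic form of such a statistic is standard; a minimal sketch (the paper's specializations consist in the choice of the variance matrix V):

import numpy as np
from scipy import stats

def wald_equality_test(m1, m2, V):
    # Wald test of H0: mu1 = mu2. m1, m2 are estimated moment vectors and V
    # an estimated asymptotic covariance matrix of their difference:
    # W = (m1 - m2)' V^{-1} (m1 - m2), asymptotically chi-squared, df = len(m1).
    d = np.asarray(m1) - np.asarray(m2)
    W = float(d @ np.linalg.solve(V, d))
    return W, stats.chi2.sf(W, df=len(d))

m1, m2 = np.array([0.1, 1.2]), np.array([0.0, 1.0])
V = np.diag([0.02, 0.05])
print(wald_equality_test(m1, m2, V))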
Abstract:
Most methods for small-area estimation are based on composite estimators derived from design- or model-based methods. A composite estimator is a linear combination of a direct and an indirect estimator, with weights that usually depend on unknown parameters which need to be estimated. Although model-based small-area estimators are usually based on random-effects models, the assumption of fixed effects is at face value more appropriate. Model-based estimators are justified by the assumption of random (interchangeable) area effects; in practice, however, areas are not interchangeable. In the present paper we empirically assess the quality of several small-area estimators in the setting in which the area effects are treated as fixed. We consider two settings: one that draws samples from a theoretical population, and another that draws samples from an empirical population of a labour force register maintained by the National Institute of Social Security (NISS) of Catalonia. We distinguish two types of composite estimators: a) those that use weights involving area-specific estimates of bias and variance; and b) those that use weights involving a common variance and a common squared-bias estimate for all the areas. We assess their precision and discuss alternatives for optimizing composite estimation in applications.
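The area-specific composite estimator (type a above) can be sketched with the usual mean-squared-error-minimizing weight; argument names are hypothetical:

import numpy as np

def composite_estimate(direct, indirect, var_direct, sq_bias_indirect):
    # Convex combination of a direct and an indirect small-area estimator.
    # Weight on the direct estimator: w = msb / (msb + var), where msb is the
    # indirect estimator's estimated squared bias and var the direct
    # estimator's variance (this minimizes MSE if the direct estimator is
    # unbiased and the indirect one has negligible variance).
    w = sq_bias_indirect / (sq_bias_indirect + var_direct)
    return w * direct + (1 - w) * indirect

The homogeneous variant (type b) plugs a common variance and a common squared-bias estimate into the same formula for every area.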
Abstract:
This paper investigates the comparative performance of five small-area estimators. We use Monte Carlo simulation in the context of both theoretical and empirical populations. In addition to the direct and indirect estimators, we consider the optimal composite estimator with population weights, and two composite estimators with estimated weights: one that assumes homogeneity of within-area variance and squared bias, and another that uses area-specific estimates of variance and squared bias. It is found that among the feasible estimators, the best choice is the one that uses area-specific estimates of variance and squared bias.
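A toy Monte Carlo in the spirit of that comparison (the population settings and the bias proxy below are placeholders, not the paper's design):

import numpy as np

rng = np.random.default_rng(4)
true_means = rng.normal(10, 2, size=20)             # 20 small areas
mse = {"direct": 0.0, "composite": 0.0}
for _ in range(1000):
    direct = true_means + rng.normal(0, 1.5, 20)    # noisy direct estimates
    indirect = np.full(20, direct.mean())           # synthetic (indirect) estimate
    sq_bias = (indirect - direct) ** 2              # crude area-specific squared bias
    w = sq_bias / (sq_bias + 1.5 ** 2)              # area-specific weights
    composite = w * direct + (1 - w) * indirect
    mse["direct"] += ((direct - true_means) ** 2).mean() / 1000
    mse["composite"] += ((composite - true_means) ** 2).mean() / 1000
print(mse)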
Abstract:
The European Union's Neighbourhood Policy is usually interpreted as an instrument of forced Europeanization: thanks to its bargaining power, the European Union would impose its economic, and even its political and social, model on its neighbours. This, however, is not the evidence obtained in the field of trade. In line with the theoretical model of external relations developed by several researchers under the direction of Esther Barbé, we observe that, in the trade field, the model of relations between the European Union and four countries of the Neighbourhood Policy can be one of Europeanization, but also one of internationalization or of coordination. The type of model applied is conditioned, as the theoretical framework asserts, by the fulfilment of the conditions required for Europe to impose its rules: legitimacy, incentives and internal coherence. These conditions vary according to both the issue at stake and the neighbouring country.
Abstract:
Objective: Health status measures usually have an asymmetric distribution and present a high percentage of respondents with the best possible score (ceiling effect), especially when they are assessed in the overall population. Different methods that take the ceiling effect into account have been proposed to model this type of variable: the tobit models, the Censored Least Absolute Deviations (CLAD) models or the two-part models, among others. The objective of this work was to describe the tobit model and compare it with the Ordinary Least Squares (OLS) model, which ignores the ceiling effect.
Methods: Two different data sets were used to compare both models: a) real data coming from the European Study of Mental Disorders (ESEMeD), in order to model the EQ5D index, one of the utility measures most commonly used for the evaluation of health status; and b) data obtained from simulation. Cross-validation was used to compare the predicted values of the tobit and OLS models. The following estimators were compared: the percentage of absolute error (R1), the percentage of squared error (R2), the Mean Squared Error (MSE) and the Mean Absolute Prediction Error (MAPE). Different data sets were created for different values of the error variance and different percentages of individuals with ceiling effect. The estimates of the coefficients, the percentage of explained variance and the plots of residuals versus predicted values obtained under each model were compared.
Results: With regard to the results of the ESEMeD study, the predicted values obtained with the OLS model and those obtained with the tobit model were very similar. The regression coefficients of the linear model were consistently smaller than those of the tobit model. In the simulation study, we observed that when the error variance was small (s=1), the tobit model presented unbiased estimates of the coefficients and accurate predicted values, especially when the percentage of individuals with the highest possible score was small. However, when the error variance was greater (s=10 or s=20), the percentage of explained variance for the tobit model and the predicted values were more similar to those obtained with an OLS model.
Conclusions: The proportion of variability accounted for by the models and the percentage of individuals with the highest possible score have an important effect on the performance of the tobit model in comparison with the linear model.
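Neither numpy nor scipy ships a tobit routine, so a minimal sketch of a right-censored normal (tobit) fit by maximum likelihood is given below, assuming a ceiling at 1 as with an EQ5D-type index; the data are simulated placeholders:

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_negloglik(params, X, y, ceiling=1.0):
    # Negative log-likelihood of a right-censored (tobit) regression: points at
    # the ceiling contribute P(latent >= ceiling), the rest a normal density.
    beta, sigma = params[:-1], np.exp(params[-1])
    mu = X @ beta
    ll = np.where(
        y >= ceiling,
        norm.logsf((ceiling - mu) / sigma),
        norm.logpdf((y - mu) / sigma) - np.log(sigma),
    )
    return -ll.sum()

rng = np.random.default_rng(5)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = np.minimum(X @ np.array([0.8, 0.3]) + 0.2 * rng.normal(size=500), 1.0)
res = minimize(tobit_negloglik, x0=np.zeros(3), args=(X, y))
print(res.x[:2], np.exp(res.x[-1]))   # coefficient estimates and sigma

OLS on the same data would shrink the slope toward zero, which is the attenuation of the linear model's coefficients that the abstract reports.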
Abstract:
We show that the statistics of an edge type variable in natural images exhibits self-similarity properties which resemble those of local energy dissipation in turbulent flows. Our results show that self-similarity and extended self-similarity hold remarkably for the statistics of the local edge variance, and that the very same models can be used to predict all of the associated exponents. These results suggest using natural images as a laboratory for testing more elaborate scaling models of interest for the statistical description of turbulent flows. The properties we have exhibited are relevant for the modeling of the early visual system: They should be included in models designed for the prediction of receptive fields.
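A sketch of the kind of multiscale statistic involved; the edge variable defined here (local variance of the gradient magnitude) is a generic stand-in, not necessarily the paper's definition:

import numpy as np
from scipy import ndimage

def local_edge_variance(img, scale):
    # Local variance of the gradient magnitude over windows of a given scale.
    gx, gy = np.gradient(img.astype(float))
    e = np.hypot(gx, gy)
    m = ndimage.uniform_filter(e, size=scale)
    return ndimage.uniform_filter(e ** 2, size=scale) - m ** 2

rng = np.random.default_rng(6)
img = rng.random((256, 256))   # placeholder; a natural image would go here
scales = [2, 4, 8, 16, 32]
moments = [np.mean(local_edge_variance(img, s) ** 2) for s in scales]
# Self-similarity shows up as log(moments) linear in log(scales); extended
# self-similarity as linearity of one moment's log against another moment's.
print(np.polyfit(np.log(scales), np.log(moments), 1)[0])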
Abstract:
A general criterion for the design of adaptive systems in digital communications, called the statistical reference criterion, is proposed. The criterion is based on imposing the probability density function of the signal of interest at the output of the adaptive system, with its application to the scenario of highly powerful interferers being the main focus of this paper. Knowledge of the pdf of the wanted signal is used as a discriminator between signals, so that interferers with differing distributions are rejected by the algorithm. Its performance is studied over a range of scenarios. Equations for gradient-based coefficient updates are derived, and the relationship with other existing algorithms, like the minimum variance and Wiener criteria, is examined.
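A toy stochastic-gradient sketch of the idea; the cost used here (ascending E[log p_target(y)]) and the steering vectors are assumptions of this illustration, not the paper's derivation:

import numpy as np

rng = np.random.default_rng(7)
a = np.array([1.0, 0.8, 0.6, 0.4])    # hypothetical signal steering vector
b = np.array([0.5, -1.0, 0.7, -0.3])  # hypothetical interferer steering vector

def p_target(y, s2=0.2):
    # pdf of the wanted signal: BPSK in noise, a Gaussian mixture at +/-1.
    g = lambda m: np.exp(-(y - m) ** 2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)
    return 0.5 * (g(1.0) + g(-1.0))

def dlogp(y, eps=1e-5):               # numerical score of the target pdf
    return (np.log(p_target(y + eps)) - np.log(p_target(y - eps))) / (2 * eps)

w, mu = rng.normal(0, 0.1, 4), 0.05
for _ in range(50000):
    s = rng.choice([-1.0, 1.0])                       # wanted BPSK symbol
    i = 5.0 * rng.standard_normal()                   # highly powerful interferer
    x = s * a + i * b + 0.1 * rng.standard_normal(4)  # received snapshot
    y = w @ x
    w += mu * dlogp(y) * x / (1.0 + x @ x)            # normalized gradient step
# After adaptation the output distribution concentrates near +/-1: the
# interferer, whose distribution differs from the target pdf, is rejected.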