80 results for Variety scale
Abstract:
Background: Although labour market flexibility has resulted in an expansion of precarious employment in industrialized countries, to date there is limited empirical evidence about its health consequences. The Employment Precariousness Scale (EPRES) is a newly developed, theory-based, multidimensional questionnaire specifically devised for epidemiological studies among waged and salaried workers. Objective: To assess the acceptability, reliability and construct validity of EPRES in a sample of waged and salaried workers in Spain. Methods: Cross-sectional study using a sub-sample of 6,968 temporary and permanent workers from a population-based survey carried out in 2004-2005. The survey questionnaire was interviewer-administered and included the six EPRES subscales, measures of the psychosocial work environment (COPSOQ ISTAS21), and perceived general and mental health (SF-36). Results: A high response rate to all EPRES items indicated good acceptability; Cronbach's alpha coefficients, over 0.70 for all subscales and the global score, demonstrated good internal consistency reliability; exploratory factor analysis using principal axis analysis and varimax rotation confirmed the six-subscale structure and the theoretical allocation of all items. Patterns across known groups and correlation coefficients with psychosocial work environment measures and perceived health showed the expected relations, providing evidence of construct validity. Conclusions: Our results provide evidence in support of the psychometric properties of EPRES, which appears to be a promising tool for measuring employment precariousness in public health research.
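The internal-consistency check reported in this abstract can be made concrete with a short computation of Cronbach's alpha. This is a minimal sketch on hypothetical Likert-scale item scores, not the EPRES data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 4-item subscale answered by 6 respondents (1-5 scores)
scores = np.array([[4, 4, 5, 4],
                   [2, 1, 2, 2],
                   [3, 3, 3, 4],
                   [5, 5, 4, 5],
                   [1, 2, 1, 1],
                   [4, 3, 4, 4]])
print(cronbach_alpha(scores))
```

Values above 0.70, as reported for all EPRES subscales, are conventionally read as acceptable internal consistency.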
Abstract:
Biplots are graphical displays of data matrices based on the decomposition of a matrix as the product of two matrices. Elements of these two matrices are used as coordinates for the rows and columns of the data matrix, with an interpretation of the joint presentation that relies on the properties of the scalar product. Because the decomposition is not unique, there are several alternative ways to scale the row and column points of the biplot, which can cause confusion amongst users, especially when software packages are not united in their approach to this issue. We propose a new scaling of the solution, called the standard biplot, which applies equally well to a wide variety of analyses, such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA; in fact, to any method based on the singular-value decomposition. The standard biplot also handles data matrices with widely different levels of inherent variance. Two concepts taken from correspondence analysis are important to this idea: the weighting of row and column points, and the contributions made by the points to the solution. In the standard biplot one set of points, usually the rows of the data matrix, optimally represents the positions of the cases or sample units, which are weighted and usually standardized in some way unless the matrix contains values that are comparable in their raw form. The other set of points, usually the columns, is represented in accordance with their contributions to the low-dimensional solution. As for any biplot, the projections of the row points onto vectors defined by the column points approximate the centred and (optionally) standardized data.
The method is illustrated with several examples to demonstrate how the standard biplot copes in different situations to give a joint map which needs only one common scale on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot readable. The proposal also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important.
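The scalar-product mechanics underlying any such display can be sketched with a plain SVD. This is a generic PCA-style biplot on a hypothetical data matrix, not the authors' specific standard scaling:

```python
import numpy as np

def biplot_coords(X, ndim=2):
    """Row and column coordinates from the SVD of the centred matrix."""
    Xc = X - X.mean(axis=0)                 # centre each column
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    rows = U * s                            # cases: principal coordinates
    cols = Vt.T                             # variables: standard coordinates
    # scalar products rows @ cols.T reproduce the centred data
    return rows[:, :ndim], cols[:, :ndim]
```

Projecting a row point onto the vector of a column point approximates that case's centred value on that variable, which is exactly the scalar-product interpretation described above.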
Abstract:
In this paper we propose a simple and general model for computing the Ramsey optimal inflation tax, which includes several models from the previous literature as special cases. We show that it cannot be claimed that the Friedman rule is always optimal (or always non-optimal) on theoretical grounds. The Friedman rule is optimal or not depending on conditions related to the shape of various relevant functions. One contribution of this paper is to relate these conditions to measurable variables such as the interest rate or the consumption elasticity of money demand. We find that it tends to be optimal to tax money when there are economies of scale in the demand for money (the scale elasticity is smaller than one) and/or when money is required for the payment of consumption or wage taxes. We find that it tends to be optimal to tax money more heavily when the interest elasticity of money demand is small. We present empirical evidence on the parameters that determine the optimal inflation tax. Calibrating the model to a variety of empirical studies yields an optimal nominal interest rate of less than 1% per year, although that finding is sensitive to the calibration.
Abstract:
In order to interpret the biplot it is necessary to know which points (usually the variables) are the important contributors to the solution, and this information is available separately as part of the biplot's numerical results. We propose a new scaling of the display, called the contribution biplot, which incorporates this diagnostic directly into the graphical display, showing the important contributors visually and thus facilitating the interpretation of the biplot and often simplifying the graphical representation considerably. The contribution biplot can be applied to a wide variety of analyses, such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA; in fact, to any method based on the singular-value decomposition. In the contribution biplot one set of points, usually the rows of the data matrix, optimally represents the spatial positions of the cases or sample units, according to some distance measure that usually incorporates some form of standardization unless all data are comparable in scale. The other set of points, usually the columns, is represented by vectors that are related to their contributions to the low-dimensional solution. A fringe benefit is that usually only one common scale for row and column points is needed on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot legible. Furthermore, this version of the biplot also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important when they are in fact contributing minimally to the solution.
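How the contribution diagnostic can be computed is sketched below for the simplest equal-weight SVD case; in weighted methods such as correspondence analysis the row and column masses enter the formulas. This is an illustrative sketch, not the authors' exact scaling:

```python
import numpy as np

def contribution_biplot(X, ndim=2):
    """Cases in principal coordinates; column contributions per axis
    (equal-weight SVD sketch on a column-centred matrix)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    rows = U * s            # cases: principal coordinates
    cols = Vt.T             # for equally weighted columns, each squared
    ctr = cols**2           # entry of V is that column's contribution to
                            # the axis; contributions sum to 1 per axis
    return rows[:, :ndim], cols[:, :ndim], ctr[:, :ndim]
```

Columns with small contributions plot near the origin, so the display itself flags which variables matter for the solution.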
Abstract:
This paper studies two important reasons why people violate procedure invariance, loss aversion and scale compatibility. The paper extends previous research on loss aversion and scale compatibility by studying loss aversion and scale compatibility simultaneously, by looking at a new decision domain, medical decision analysis, and by examining the effect of loss aversion and scale compatibility on "well-contemplated preferences." We find significant evidence both of loss aversion and scale compatibility. However, the sizes of the biases due to loss aversion and scale compatibility vary over trade-offs and most participants do not behave consistently according to loss aversion or scale compatibility. In particular, the effect of loss aversion in medical trade-offs decreases with duration. These findings are encouraging for utility measurement and prescriptive decision analysis. There appear to exist decision contexts in which the effects of loss aversion and scale compatibility can be minimized and utilities can be measured that do not suffer from these distorting factors.
Abstract:
We consider two fundamental properties in the analysis of two-way tables of positive data: the principle of distributional equivalence, one of the cornerstones of correspondence analysis of contingency tables, and the principle of subcompositional coherence, which forms the basis of compositional data analysis. For an analysis to be subcompositionally coherent, it suffices to analyse the ratios of the data values. The usual approach to dimension reduction in compositional data analysis is to perform principal component analysis on the logarithms of ratios, but this method does not obey the principle of distributional equivalence. We show that by introducing weights for the rows and columns, the method achieves this desirable property. This weighted log-ratio analysis is theoretically equivalent to spectral mapping, a multivariate method developed almost 30 years ago for displaying ratio-scale data from biological activity spectra. The close relationship between spectral mapping and correspondence analysis is also explained, as well as their connection with association modelling. The weighted log-ratio methodology is applied here to frequency data in linguistics and to chemical compositional data in archaeology.
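The weighted log-ratio analysis described above can be sketched as a weighted SVD of the double-centred log matrix, with row and column masses taken from the table margins as in correspondence analysis. The frequency table N below is hypothetical, and the coordinate scaling is one common convention rather than the paper's definitive choice:

```python
import numpy as np

def weighted_logratio(N):
    """Weighted log-ratio (spectral-map-style) analysis of a positive table."""
    P = N / N.sum()
    r = P.sum(axis=1)                      # row masses
    c = P.sum(axis=0)                      # column masses
    L = np.log(P)
    # weighted double-centering: the result depends only on ratios of the data
    L = L - (L @ c)[:, None] - (r @ L)[None, :] + r @ L @ c
    S = np.sqrt(r)[:, None] * L * np.sqrt(c)[None, :]
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    rows = U * s / np.sqrt(r)[:, None]     # principal row coordinates
    cols = Vt.T / np.sqrt(c)[:, None]      # standard column coordinates
    return rows, cols, s
```

Because the log matrix is double-centred with the masses, the weighted centroid of the row points sits at the origin, as in correspondence analysis.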
Abstract:
We estimate an open economy dynamic stochastic general equilibrium (DSGE) model of Australia with a number of shocks, frictions and rigidities, matching a large number of observable time series. We find that both foreign and domestic shocks are important drivers of the Australian business cycle. We also find that the initial impact on inflation of an increase in demand for Australian commodities is negative, due to an improvement in the real exchange rate, though there is a persistent positive effect on inflation that dominates at longer horizons.
Abstract:
The classical binary classification problem is investigated when it is known in advance that the posterior probability function (or regression function) belongs to some class of functions. We introduce and analyze a method which effectively exploits this knowledge. The method is based on minimizing the empirical risk over a carefully selected "skeleton" of the class of regression functions. The skeleton is a covering of the class based on a data-dependent metric, especially fitted for classification. A new scale-sensitive dimension is introduced which is more useful for the studied classification problem than other, previously defined, dimension measures. This fact is demonstrated by performance bounds for the skeleton estimate in terms of the new dimension.
Abstract:
Geological and geomorphological mapping at 1:10,000 scale, besides being an important source of scientific information, is also a necessary tool for municipal bodies to make proper decisions when dealing with geo-environmental problems concerning integral territorial development. In this work, detailed information is given on the contents of such maps, their social and economic applications, and a balance of the investment and gains that derive from them.
Abstract:
In this paper, we present a method to deal with the constraints of the underwater medium when finding changes between sequences of underwater images. One of the main problems of the underwater medium for automatically detecting changes is the low altitude of the camera when taking pictures. This emphasizes the parallax effect between the images, as they are not taken at exactly the same position. To solve this problem, we geometrically register the images together, taking into account the relief of the scene.
Abstract:
We show that the motive of the quotient of a scheme by a finite group coincides with the invariant submotive.
Abstract:
Multiexponential decays may contain time constants differing by several orders of magnitude. In such cases, uniform sampling results in very long records featuring a high degree of oversampling in the final part of the transient. Here, we analyze a nonlinear time-scale transformation to reduce the total number of samples with minimum signal distortion, achieving an important reduction in the computational cost of subsequent analyses. We propose a time-varying filter whose length is optimized for minimum mean square error.
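One simple nonlinear time-scale transformation in this spirit, sketched below as an assumption rather than the authors' exact scheme, is to resample the uniform record onto a logarithmic grid: sampling stays dense where the fast components live and thins out over the oversampled tail:

```python
import numpy as np

def log_resample(t, y, n_out):
    """Map a uniformly sampled transient onto a logarithmic time grid."""
    t_log = np.geomspace(t[1], t[-1], n_out)  # start after t=0 for log spacing
    return t_log, np.interp(t_log, t, y)

# Hypothetical bi-exponential decay with time constants ~3 orders apart
t = np.linspace(0.0, 10.0, 100_001)
y = np.exp(-t / 0.01) + 0.5 * np.exp(-t / 5.0)
t2, y2 = log_resample(t, y, 200)              # 100,001 samples down to 200
```

Linear interpolation is the crudest choice here; the time-varying filter proposed in the abstract would replace it to keep the distortion at a minimum.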
Abstract:
A sequential weakly efficient two-auction game with entry costs, interdependence between objects, two potential bidders and the IPV assumption is presented here in order to give some theoretical predictions on the effects of geographical scale economies on local service privatization performance. It is shown that the seller of the first object profits from this interdependence. The interdependence externality raises effective competition for the first object, expressed as the probability of having more than one final bidder. Besides, if there is more than one final bidder in the first auction, the seller extracts the entire expected future surplus differential for a bidder between having won the first auction and having lost it. Consequences for the seller of the second object are less clear, reflecting the contradictory nature of the two main effects of object interdependence. On the one hand, the winner of the first auction becomes "stronger", so that expected payments rise in a competitive environment. On the other hand, the loser of the first auction becomes relatively "weaker", hence (probably) reducing effective competition for the second object. Additionally, some contributions to static auction theory with entry costs and asymmetric bidders are presented in the appendix.