110 results for WEIGHTED EARLINESS


Relevance:

10.00%

Publisher:

Abstract:

Contextual effects on child health have been investigated extensively in previous research. However, few studies have considered the interplay between community characteristics and individual-level variables. This study examines the influence of community education and family socioeconomic characteristics on child health (as measured by height-for-age and weight-for-age Z-scores), as well as their interactions. We adapted the Commission on Social Determinants of Health (CSDH) framework to the context of child health. Using data from the 2010 Colombian Demographic and Health Survey (DHS), we fit weighted multilevel models, since the data are not self-weighting. The results show a positive impact of the level of education of other women in the community on child health, even after controlling for individual and family socioeconomic characteristics. Different pathways through which community education can substitute for the effect of family characteristics on child nutrition are found. The interaction terms highlight the importance of community education as a moderator of the impact of the mother's own education and autonomy on child health. In addition, the results reveal differences between the height-for-age and weight-for-age indicators in their responsiveness to individual and contextual factors. Our findings suggest that community intervention programmes may have differential effects on child health; identifying these effects can therefore contribute to better targeting of child care policies.
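
As a concrete point of reference for the weighted multilevel models mentioned above, a two-level random-intercept specification with a cross-level interaction can be sketched as follows; the notation and covariate structure are illustrative assumptions, not taken from the study:

```latex
% Child i in community j; y_{ij} is a height- or weight-for-age Z-score.
% x_{ij}: family/individual covariates, z_j: community education,
% x_{ij} z_j: cross-level interaction, u_j: community random effect.
y_{ij} = \beta_0 + \beta_1^{\top} x_{ij} + \beta_2 z_j
         + \beta_3^{\top} (x_{ij} z_j) + u_j + \varepsilon_{ij},
\qquad u_j \sim N(0, \sigma_u^2), \quad \varepsilon_{ij} \sim N(0, \sigma_e^2)
```

In weighted (pseudo-likelihood) estimation, the child-level and community-level contributions to the likelihood are scaled by the corresponding sampling weights, which is why DHS data, not being self-weighting, require this adjustment.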

Relevance:

10.00%

Publisher:

Abstract:

Background: The COSMIN checklist is a tool for evaluating the methodological quality of studies on measurement properties of health-related patient-reported outcomes. The aim of this study is to determine the inter-rater agreement and reliability of each item score of the COSMIN checklist (n = 114). Methods: 75 articles evaluating measurement properties were randomly selected from the bibliographic database compiled by the Patient-Reported Outcome Measurement Group, Oxford, UK. Raters were asked to assess the methodological quality of three articles, using the COSMIN checklist. In a one-way design, percentage agreement and intraclass kappa coefficients or quadratic-weighted kappa coefficients were calculated for each item. Results: 88 raters participated. Of the 75 selected articles, 26 articles were rated by four to six participants, and 49 by two or three participants. Overall, percentage agreement was appropriate (68% of items were above 80% agreement), but the kappa coefficients for the COSMIN items were low (61% were below 0.40, 6% were above 0.75). Reasons for low inter-rater agreement were the need for subjective judgement and raters being accustomed to different standards, terminology and definitions. Conclusions: The results indicate that raters often chose the same response option, but that at the item level it is difficult to distinguish between articles. When using the COSMIN checklist in a systematic review, we recommend obtaining some training and experience, having the checklist completed by two independent raters, and reaching consensus on one final rating. The instructions for using the checklist have been improved.
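
For readers unfamiliar with the agreement statistics mentioned above, the sketch below shows how percentage agreement and a quadratic-weighted kappa could be computed for one checklist item scored by two raters; the ratings and the 4-point scale are invented for illustration and are not data from the study.

```python
# Illustrative only: agreement between two raters on one ordinal checklist item.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rater_a = np.array(["poor", "fair", "good", "good", "excellent", "fair", "good"])
rater_b = np.array(["poor", "good", "good", "good", "excellent", "fair", "fair"])

# Percentage agreement: share of articles on which both raters chose the same option.
pct_agreement = np.mean(rater_a == rater_b)

# Quadratic-weighted kappa treats the response options as ordered,
# penalising large disagreements more than adjacent ones.
ordered = {"poor": 0, "fair": 1, "good": 2, "excellent": 3}
kappa = cohen_kappa_score([ordered[r] for r in rater_a],
                          [ordered[r] for r in rater_b],
                          weights="quadratic")

print(f"agreement = {pct_agreement:.2f}, weighted kappa = {kappa:.2f}")
```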

Relevance:

10.00%

Publisher:

Abstract:

Biplots are graphical displays of data matrices based on the decomposition of a matrix as the product of two matrices. Elements of these two matrices are used as coordinates for the rows and columns of the data matrix, with an interpretation of the joint presentation that relies on the properties of the scalar product. Because the decomposition is not unique, there are several alternative ways to scale the row and column points of the biplot, which can cause confusion amongst users, especially when software packages are not united in their approach to this issue. We propose a new scaling of the solution, called the standard biplot, which applies equally well to a wide variety of analyses such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, in fact to any method based on the singular-value decomposition. The standard biplot also handles data matrices with widely different levels of inherent variance. Two concepts taken from correspondence analysis are important to this idea: the weighting of row and column points, and the contributions made by the points to the solution. In the standard biplot one set of points, usually the rows of the data matrix, optimally represent the positions of the cases or sample units, which are weighted and usually standardized in some way unless the matrix contains values that are comparable in their raw form. The other set of points, usually the columns, is represented in accordance with their contributions to the low-dimensional solution. As for any biplot, the projections of the row points onto vectors defined by the column points approximate the centred and (optionally) standardized data. The method is illustrated with several examples to demonstrate how the standard biplot copes in different situations to give a joint map which needs only one common scale on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot readable. The proposal also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important.
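
The defining property described above (row and column coordinates whose scalar products approximate the data) can be illustrated with a generic SVD-based biplot; this is a schematic sketch on toy data, not the paper's specific "standard biplot" contribution-based scaling.

```python
# Generic SVD-based biplot coordinates (illustration only).
import numpy as np

X = np.random.default_rng(0).normal(size=(20, 5))   # toy data matrix
Xc = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # centre and standardise columns

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

rows = U[:, :2] * s[:2]        # row (case) points: principal coordinates
cols = Vt.T[:, :2]             # column (variable) points: standard coordinates

# The scalar products rows @ cols.T give the rank-2 approximation of the
# centred, standardised data -- the defining property of any biplot.
approx = rows @ cols.T
```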

Relevance:

10.00%

Publisher:

Abstract:

The view of an expanding European economy providing increasing welfare to everybody in 1870-1913 has been challenged by many, then and now. We focus on the remarkable growth that was experienced, its diffusion and its sources, in the context of the permanent competition among European nation states. During 1870-1913 the globalized European economy reached a "silver age". GDP growth was quite rapid (2.15% per annum) and diffused all over Europe. Even discounting the high rates of population growth (1.06%), per capita growth remained a respectable 1.08%. Income per capita was rising in every country, and the rates of improvement were quite similar. This was a major achievement after two generations of highly localized growth, both geographically and socially. Growth was based on the increased use of labour and capital, but a good part of it (73 per cent for the weighted average of the best documented European countries) came from total factor productivity: efficiency gains resulting from ultimate sources of growth that are not well specified. This proportion suggests that the European economy was growing at full capacity, at its production frontier, and it would have been very difficult to improve its performance. Within Europe, convergence was limited, and it was only in motion after 1900. What happened was more the end of the era of big divergence than the start of an era of convergence.
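
The per capita figure quoted above follows from a standard growth-accounting identity; a minimal numerical check using the abstract's own numbers, taking the 73% figure as a share of aggregate GDP growth purely for illustration (the abstract does not spell out the exact accounting basis):

```python
# Growth-accounting arithmetic with the figures quoted in the abstract.
gdp_growth = 0.0215        # GDP growth per annum, 1870-1913
pop_growth = 0.0106        # population growth per annum

# Per capita growth from compound rates: (1 + g_Y) / (1 + g_N) - 1
per_capita = (1 + gdp_growth) / (1 + pop_growth) - 1
print(f"per capita growth ≈ {per_capita:.2%}")   # ≈ 1.08%

# Illustrative TFP contribution, if 73% is read as a share of GDP growth:
tfp_share = 0.73
print(f"TFP contribution ≈ {tfp_share * gdp_growth:.2%} of output growth per annum")
```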

Relevance:

10.00%

Publisher:

Abstract:

"Negativity effect" refers to the psychological phenomenon that people tend to attach greater weight to negative information than to equally extreme and equally likely positive information in a variety of information-processing tasks. Numerous studies of impression formation have found that negative information is weighted more heavily than positive information as impressions of others are formed. There is empirical evidence in political science that shows the importance of the negativity effect in the information processing of voters. This effect can explain the observed decrease in a president's popularity the longer he is in office. We construct a dynamic model of political competition, incorporating the negativity effect in the decision rule of the voters and allowing their preferences to change over time, according to the past performance of the candidates while in office. Our model may explain the emergence of ideologies out of the competition for votes among myopic candidates freely choosing policy positions. This result gives rise to the formation of political parties, as infinitely-lived agents with a certain ideology. Furthermore, in this model some voters may start out by switching among parties associated with different policies, but find themselves supporting one of the parties from some point on. Thus, the model describes a process by which some voters become identified with a "right" or "left" bloc, while others "swing" between the two parties.
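
The role the negativity effect plays in the voters' decision rule can be pictured as an asymmetric retrospective evaluation in which negative performance is weighted more heavily than positive performance; the functional form, weights and decay below are a hypothetical illustration, not the paper's model.

```python
# Hypothetical illustration of a negativity-weighted retrospective evaluation.
def update_evaluation(current: float, performance: float,
                      w_neg: float = 1.5, w_pos: float = 1.0,
                      decay: float = 0.9) -> float:
    """Update a voter's evaluation of the incumbent.

    Negative performance (performance < 0) receives a larger weight
    (w_neg > w_pos), so equally sized good and bad outcomes drag the
    evaluation down over time -- consistent with declining popularity.
    """
    weight = w_neg if performance < 0 else w_pos
    return decay * current + weight * performance

evaluation = 0.0
for shock in [0.5, -0.5, 0.5, -0.5]:       # symmetric stream of outcomes
    evaluation = update_evaluation(evaluation, shock)
print(evaluation)   # negative: losses outweigh equally sized gains
```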

Relevance:

10.00%

Publisher:

Abstract:

Power transformations of positive data tables, prior to applying the correspondence analysis algorithm, are shown to open up a family of methods with direct connections to the analysis of log-ratios. Two variations of this idea are illustrated. The first approach is simply to power the original data and perform a correspondence analysis; this method is shown to converge to unweighted log-ratio analysis as the power parameter tends to zero. The second approach is to apply the power transformation to the contingency ratios, that is, the values in the table relative to expected values based on the marginals; this method converges to weighted log-ratio analysis, or the spectral map. Two applications are described: first, a matrix of population genetic data which is inherently two-dimensional, and second, a larger cross-tabulation with higher dimensionality, from a linguistic analysis of several books.
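
The limiting behaviour of the second variant (power-transforming the contingency ratios) can be checked numerically; this is a schematic sketch on an invented table, using the Box-Cox form of the power transform, not the paper's implementation.

```python
# Schematic: as alpha -> 0, the Box-Cox power transform of the contingency
# ratios approaches their logarithms, so power-transformed correspondence
# analysis approaches weighted log-ratio analysis (the spectral map).
import numpy as np

N = np.array([[40., 10.,  5.],
              [10., 30., 20.],
              [ 5., 20., 60.]])
P = N / N.sum()
r = P.sum(axis=1)                      # row masses
c = P.sum(axis=0)                      # column masses
ratios = P / np.outer(r, c)            # contingency ratios

alpha = 0.01
powered = (ratios**alpha - 1) / alpha  # Box-Cox form of the power transform
log_form = np.log(ratios)

print(np.max(np.abs(powered - log_form)))  # small: the two nearly coincide
```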

Relevance:

10.00%

Publisher:

Abstract:

This paper proposes to estimate the covariance matrix of stock returns by an optimally weighted average of two existing estimators: the sample covariance matrix and the single-index covariance matrix. This method is generally known as shrinkage, and it is standard in decision theory and in empirical Bayesian statistics. Our shrinkage estimator can be seen as a way to account for extra-market covariance without having to specify an arbitrary multi-factor structure. For NYSE and AMEX stock returns from 1972 to 1995, it can be used to select portfolios with significantly lower out-of-sample variance than a set of existing estimators, including multi-factor models.
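
A minimal sketch of the shrinkage idea described above, as a convex combination of the sample covariance matrix and a single-index (market model) target; the shrinkage intensity `delta` is taken as given here, whereas the paper derives an optimal, data-driven value, and the market proxy is invented for the example.

```python
# Sketch of shrinkage toward a single-index covariance target.
import numpy as np

def single_index_target(returns: np.ndarray, market: np.ndarray) -> np.ndarray:
    """Covariance implied by a one-factor (market) model."""
    betas = np.cov(returns, market, rowvar=False)[:-1, -1] / market.var(ddof=1)
    resid = returns - np.outer(market, betas)
    target = np.outer(betas, betas) * market.var(ddof=1)
    np.fill_diagonal(target, np.diag(target) + resid.var(axis=0, ddof=1))
    return target

def shrunk_covariance(returns: np.ndarray, market: np.ndarray, delta: float) -> np.ndarray:
    sample = np.cov(returns, rowvar=False)
    return delta * single_index_target(returns, market) + (1 - delta) * sample

rng = np.random.default_rng(1)
R = rng.normal(size=(120, 10))      # 120 months of returns on 10 stocks (toy data)
m = R.mean(axis=1)                  # crude equal-weighted market proxy for the sketch
S = shrunk_covariance(R, m, delta=0.5)
```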

Relevance:

10.00%

Publisher:

Abstract:

In this paper we address a problem arising in risk management; namely, the study of price variations of different contingent claims in the Black-Scholes model due to anticipating future events. The method we propose to use is an extension of the classical Vega index, i.e. the price derivative with respect to the constant volatility, in the sense that we perturb the volatility in different directions. This directional derivative, which we denote the local Vega index, will serve as the main object in the paper, and one of the purposes is to relate it to the classical Vega index. We show that for all contingent claims studied in this paper the local Vega index can be expressed as a weighted average of the perturbation in volatility. In the particular case where the interest rate and the volatility are constant and the perturbation is deterministic, the local Vega index is an average of this perturbation multiplied by the classical Vega index. We also study the well-known goal problem of maximizing the probability of a perfect hedge and show that the speed of convergence is in fact dependent on the local Vega index.
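
To fix notation for the index discussed above: writing P(sigma) for the price of the claim as a functional of the volatility, the classical Vega and a directional ("local") version along a perturbation direction gamma can be written schematically as below; the symbols P, sigma, gamma and epsilon are illustrative notation, not the paper's exact definitions.

```latex
% Classical Vega: sensitivity to a parallel shift of a constant volatility sigma.
\mathcal{V} = \frac{\partial P}{\partial \sigma}

% Directional ("local") version: derivative of the price when the volatility
% path is perturbed in the direction gamma, evaluated at epsilon = 0.
\mathcal{V}_{\gamma} = \left.\frac{d}{d\varepsilon}\,
      P\bigl(\sigma_{\cdot} + \varepsilon\,\gamma_{\cdot}\bigr)\right|_{\varepsilon = 0}
```

The abstract's statement that the local index is "a weighted average of the perturbation in volatility" then corresponds to representing this directional derivative as an integral of the perturbation against a claim-dependent weight.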

Relevance:

10.00%

Publisher:

Abstract:

We consider two fundamental properties in the analysis of two-way tables of positive data: the principle of distributional equivalence, one of the cornerstones of correspondence analysis of contingency tables, and the principle of subcompositional coherence, which forms the basis of compositional data analysis. For an analysis to be subcompositionally coherent, it suffices to analyse the ratios of the data values. The usual approach to dimension reduction in compositional data analysis is to perform principal component analysis on the logarithms of ratios, but this method does not obey the principle of distributional equivalence. We show that by introducing weights for the rows and columns, the method achieves this desirable property. This weighted log-ratio analysis is theoretically equivalent to spectral mapping, a multivariate method developed almost 30 years ago for displaying ratio-scale data from biological activity spectra. The close relationship between spectral mapping and correspondence analysis is also explained, as well as their connection with association modelling. The weighted log-ratio methodology is applied here to frequency data in linguistics and to chemical compositional data in archaeology.
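
A minimal numerical sketch of the weighted log-ratio analysis described above: row and column weights applied to a double-centred log matrix, followed by a weighted SVD. The table is invented and the weights are taken proportional to the margins, which is a common but here assumed choice.

```python
# Sketch of weighted log-ratio analysis (spectral mapping) on a small positive table.
import numpy as np

N = np.array([[20.,  5., 10.],
              [ 8., 12.,  6.],
              [ 4.,  9., 15.]])
P = N / N.sum()
r = P.sum(axis=1)          # row weights (masses)
c = P.sum(axis=0)          # column weights (masses)

L = np.log(P)
# Weighted double-centring: remove weighted row means, column means, grand mean.
L = L - (L @ c)[:, None] - (r @ L)[None, :] + r @ L @ c

# Weighted SVD: scale by the square roots of the masses before decomposing.
S = np.sqrt(r)[:, None] * L * np.sqrt(c)[None, :]
U, s, Vt = np.linalg.svd(S, full_matrices=False)

rows = (U / np.sqrt(r)[:, None]) * s     # row principal coordinates
cols = Vt.T / np.sqrt(c)[:, None]        # column standard coordinates
```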

Relevance:

10.00%

Publisher:

Abstract:

Agent-based computational economics is becoming widely used in practice. This paper explores the consistency of some of its standard techniques. We focus in particular on prevailing wholesale electricity trading simulation methods. We include different supply and demand representations and propose the Experience-Weighted Attractions method to include several behavioural algorithms. We compare the results across assumptions and to economic theory predictions. The match is good under best-response and reinforcement learning but not under fictitious play. The simulations perform well under flat and upward-sloping supply bidding, and also for plausible demand elasticity assumptions. Learning is influenced by the number of bids per plant and by the initial conditions. The overall conclusion is that agent-based simulation assumptions are far from innocuous. We link their performance to underlying features, and identify those that are better suited to model wholesale electricity markets.
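
For readers unfamiliar with the learning rule named above, the sketch below shows the core Experience-Weighted Attraction update in a generic Camerer-Ho style form; the parameter values, payoffs and logit response are invented for illustration, and the paper's electricity-market implementation is not reproduced.

```python
# Generic Experience-Weighted Attraction (EWA) update for one agent.
# A[j]: attraction of strategy j; N: experience weight.
# phi: decay of past attractions, rho: decay of experience,
# delta: weight on forgone payoffs, chosen: strategy actually played,
# payoff(j): payoff strategy j would have earned given the others' actions.
import numpy as np

def ewa_update(A, N, payoff, chosen, phi=0.9, rho=0.9, delta=0.5):
    N_new = rho * N + 1.0
    A_new = np.empty_like(A)
    for j in range(len(A)):
        reinforcement = (delta + (1 - delta) * (j == chosen)) * payoff(j)
        A_new[j] = (phi * N * A[j] + reinforcement) / N_new
    return A_new, N_new

def choice_probs(A, lam=2.0):
    """Logit (softmax) response to the attractions."""
    e = np.exp(lam * (A - A.max()))
    return e / e.sum()

# One illustrative step: two strategies, strategy 0 was played.
A, N = np.zeros(2), 1.0
A, N = ewa_update(A, N, payoff=lambda j: [3.0, 5.0][j], chosen=0)
print(A, choice_probs(A))
```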

Relevance:

10.00%

Publisher:

Abstract:

Many multivariate methods that are apparently distinct can be linked by introducing one or more parameters in their definition. Methods that can be linked in this way are correspondence analysis, unweighted or weighted log-ratio analysis (the latter also known as "spectral mapping"), nonsymmetric correspondence analysis, principal component analysis (with and without logarithmic transformation of the data) and multidimensional scaling. In this presentation I will show how several of these methods, which are frequently used in compositional data analysis, may be linked through parametrizations such as power transformations, linear transformations and convex linear combinations. Since the methods of interest here all lead to visual maps of data, a "movie" can be made where the linking parameter is allowed to vary in small steps: the results are recalculated "frame by frame" and one can see the smooth change from one method to another. Several of these "movies" will be shown, giving a deeper insight into the similarities and differences between these methods.
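
One way to picture the "movie" idea described above is to recompute a map frame by frame while a power parameter slides from 1 (contingency ratios, correspondence-analysis-like) toward 0 (log-ratios); the toy sketch below is illustrative and is not the presenter's code (sign alignment of the SVD between frames is omitted for brevity).

```python
# Toy "movie": recompute row coordinates as the power parameter alpha varies.
import numpy as np

def frame(P, alpha):
    r, c = P.sum(axis=1), P.sum(axis=0)
    ratios = P / np.outer(r, c)
    T = (ratios**alpha - 1) / alpha if alpha > 0 else np.log(ratios)
    S = np.sqrt(r)[:, None] * T * np.sqrt(c)[None, :]
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    return (U / np.sqrt(r)[:, None]) * s      # row coordinates for this frame

P = np.array([[0.20, 0.05, 0.10],
              [0.08, 0.12, 0.06],
              [0.04, 0.09, 0.26]])
P = P / P.sum()
frames = [frame(P, a) for a in np.linspace(1.0, 0.01, 25)]
```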

Relevance:

10.00%

Publisher:

Abstract:

Conventional financial accounting information is slanted in favour of certain economic interests. This paper argues in favour of accounting information that captures and shows relevant aspects of the economic-social situation, so that decision-making based on it allows decisions to be taken with economic-social, and not purely economic-weighted, awareness.

Relevance:

10.00%

Publisher:

Abstract:

We investigate on-line prediction of individual sequences. Given a class of predictors, the goal is to predict as well as the best predictor in the class, where the loss is measured by the self-information (logarithmic) loss function. The excess loss (regret) is closely related to the redundancy of the associated lossless universal code. Using Shtarkov's theorem and tools from empirical process theory, we prove a general upper bound on the best possible (minimax) regret. The bound depends on certain metric properties of the class of predictors. We apply the bound to both parametric and nonparametric classes of predictors. Finally, we point out a suboptimal behavior of the popular Bayesian weighted average algorithm.
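
To make the connection mentioned above concrete: for a class F of predictors (probability distributions) over sequences x^n on a finite alphabet and the logarithmic loss, the minimax regret equals the logarithm of the Shtarkov (normalised maximum likelihood) sum. This is the standard statement of Shtarkov's theorem, not the paper's refined metric-based bound, and the symbols are illustrative notation.

```latex
% Minimax regret under log loss over a class F of predictors,
% equal to the logarithm of the Shtarkov sum (Shtarkov's theorem).
R_n^*(\mathcal{F})
  \;=\; \min_{q}\;\max_{x^n}\;
        \Bigl[\log \frac{1}{q(x^n)} \;-\; \inf_{f \in \mathcal{F}} \log \frac{1}{f(x^n)}\Bigr]
  \;=\; \log \sum_{x^n} \sup_{f \in \mathcal{F}} f(x^n)
```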

Relevance:

10.00%

Publisher:

Abstract:

We study induced aggregation operators. The analysis begins with a review of some basic concepts such as the induced ordered weighted averaging (IOWA) operator and the induced ordered weighted geometric (IOWG) operator. We then analyze the problem of decision making with the Dempster-Shafer theory of evidence. We suggest the use of induced aggregation operators in decision making with Dempster-Shafer theory. We focus on the aggregation step and examine some of its main properties, including the distinction between descending and ascending orders and different families of induced operators. Finally, we present an illustrative example in which the results obtained using different types of aggregation operators can be compared.
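
A minimal sketch of the induced ordered weighted averaging (IOWA) step discussed above: the arguments are reordered by a separate order-inducing variable rather than by their own magnitude, and then combined with the positional OWA weights. All numerical values are invented for illustration.

```python
# Induced OWA (IOWA): reorder the arguments by the order-inducing variable,
# then apply the positional OWA weights to the reordered arguments.
def iowa(arguments, inducing, weights):
    assert abs(sum(weights) - 1.0) < 1e-9
    # Sort argument values by decreasing order-inducing value.
    reordered = [a for _, a in sorted(zip(inducing, arguments),
                                      key=lambda p: p[0], reverse=True)]
    return sum(w * a for w, a in zip(weights, reordered))

# Example: three experts' payoff estimates, induced by a confidence score.
estimates  = [60.0, 80.0, 50.0]
confidence = [0.9, 0.4, 0.7]        # order-inducing variable
weights    = [0.5, 0.3, 0.2]        # OWA weights for 1st, 2nd, 3rd position

print(iowa(estimates, confidence, weights))   # 60*0.5 + 50*0.3 + 80*0.2 = 61.0
```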

Relevance:

10.00%

Publisher:

Abstract:

We present the induced generalized OWA (IGOWA) operator. It is a new aggregation operator that generalizes the OWA operator by using the main characteristics of two well-known operators: the generalized OWA operator and the induced OWA operator. This operator therefore uses generalized means and order-inducing variables in the reordering process. With this formulation, a wide range of aggregation operators is obtained that includes all the particular cases of the IOWA and GOWA operators, as well as other particular cases. The IGOWA operator is then generalized further by using quasi-arithmetic means. Finally, a numerical example of the new model is developed for a problem of financial decision making.
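
Following the translated abstract, the IGOWA operator combines the induced reordering of the IOWA operator with the generalized (power) mean of the GOWA operator; the sketch below uses illustrative values, and the quasi-arithmetic extension would replace the power function by an arbitrary strictly monotone generator.

```python
# IGOWA sketch: induced reordering + generalized (power) mean aggregation.
def igowa(arguments, inducing, weights, lam):
    # Reorder by decreasing order-inducing variable (as in IOWA).
    reordered = [a for _, a in sorted(zip(inducing, arguments),
                                      key=lambda p: p[0], reverse=True)]
    # Weighted power mean of order lam (lam = 1 recovers the IOWA result).
    return sum(w * a**lam for w, a in zip(weights, reordered)) ** (1.0 / lam)

estimates  = [60.0, 80.0, 50.0]
confidence = [0.9, 0.4, 0.7]
weights    = [0.5, 0.3, 0.2]

print(igowa(estimates, confidence, weights, lam=1))   # 61.0, the IOWA case
print(igowa(estimates, confidence, weights, lam=2))   # quadratic IGOWA, ≈ 61.9
```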