Abstract:
There is growing evidence that nonlinear time series analysis techniques can successfully characterize, classify, or process signals derived from real-world dynamics, even though these are not necessarily deterministic or stationary. In the present study we proceed in this direction by addressing an important problem our modern society is facing: the automatic classification of digital information. In particular, we address the automatic identification of cover songs, i.e. alternative renditions of a previously recorded musical piece. For this purpose we propose a recurrence quantification analysis measure that allows tracking potentially curved and disrupted traces in cross recurrence plots. We apply this measure to cross recurrence plots constructed from the state space representation of musical descriptor time series extracted from the raw audio signal. We show that our method identifies cover songs with higher accuracy than previously published techniques. Beyond the particular application proposed here, we discuss how our approach can be useful for characterizing a variety of signals from different scientific disciplines. As one concrete example to illustrate this point, we study coupled Rössler dynamics with stochastically modulated mean frequencies.
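As an illustration of the kind of construction the abstract describes, a minimal sketch of a cross recurrence plot is given below, assuming delay-embedded series and a fixed distance threshold; the embedding parameters, threshold, and sine-wave inputs are illustrative stand-ins, not the paper's actual audio descriptors:

```python
import numpy as np

def embed(x, dim=3, tau=2):
    """Delay-embed a 1-D series into dim-dimensional state vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def cross_recurrence_plot(x, y, dim=3, tau=2, eps=0.5):
    """Binary CRP: entry (i, j) is 1 where the embedded states of x and y
    are closer than eps in Euclidean distance."""
    X, Y = embed(x, dim, tau), embed(y, dim, tau)
    dists = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    return (dists < eps).astype(int)

# two "renditions" of the same signal: one slightly shifted in phase
t = np.linspace(0, 8 * np.pi, 200)
crp = cross_recurrence_plot(np.sin(t), np.sin(t + 0.1))
# similar underlying dynamics show up as long diagonal traces in crp
```

A cover-song pair played at a different tempo would bend or interrupt these traces, which is what the proposed measure is designed to track.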
Abstract:
We offer new evidence on the multi-level determinants of the gender division of housework. Using data from the 2004 European Social Survey (ESS) for 26 European countries, we study the micro- and macro-level factors that increase the likelihood of men doing an equal or greater share of housework than their female partners. A sample of 11,915 young men and women is analysed with multi-level logistic regression in order to test, at the individual level, the classic relative-income, time-availability and gender-role-values hypotheses, together with a new couple-conflict hypothesis. At the individual level we find significant relationships between relative resources, values, couple disagreement, and the division of housework, lending more support to economic-dependency than to "doing gender" perspectives. At the macro level, we find important composition effects as well as support for gender-empowerment, family-model and social-stratification explanations of cross-country differences.
Abstract:
This paper explores the determinants of female activity from a dynamic perspective. An event-history analysis of the transition from employment to housework was carried out using data from the European Household Panel Survey. Four countries representing different welfare regimes and, more specifically, different family policies were selected for the analysis: Britain, Denmark, Germany and Spain. The results confirm the importance of individual-level factors, which is consistent with an economic approach to female labour supply. Nonetheless, there are significant cross-national differences in how these factors affect the risk of leaving the labour market. The number of transitions is much lower among Danish working women than among British, German or Spanish ones, revealing the relative importance of the universal provision of childcare services vis-à-vis other elements of family policy, such as time or money.
Abstract:
This paper empirically explores the link between quality and concentration in a cross-section of manufactured goods. Using concentration data and product quality indicators, an ordered probit estimation explores the impact of concentration on quality, defined as an index of quality characteristics. The results demonstrate that market concentration and quality are positively correlated across different industries: when industry concentration increases, the likelihood of observing a higher-quality product increases and the likelihood of observing a lower-quality product decreases.
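The ordered probit setup described above can be sketched on synthetic data as follows; the variable names, cut points, and coefficient are hypothetical, and this hand-rolled maximum-likelihood fit stands in for whatever estimation package the paper actually used:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000
conc = rng.normal(size=n)                 # hypothetical concentration index
latent = 1.0 * conc + rng.normal(size=n)  # latent quality, true beta = 1.0
y = np.digitize(latent, [-0.5, 0.7])      # observed quality category 0/1/2

def neg_loglik(theta):
    """Ordered probit: P(y=k) = Phi(c_k - x*b) - Phi(c_{k-1} - x*b)."""
    beta, c1, gap = theta
    cuts = np.array([c1, c1 + np.exp(gap)])  # exp(gap) keeps cuts ordered
    xb = beta * conc
    upper = np.append(cuts, np.inf)[y] - xb
    lower = np.insert(cuts, 0, -np.inf)[y] - xb
    p = np.clip(norm.cdf(upper) - norm.cdf(lower), 1e-12, None)
    return -np.log(p).sum()

res = minimize(neg_loglik, x0=[0.0, -1.0, 0.0])
beta_hat = res.x[0]  # recovers a positive "concentration raises quality" effect
```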
Abstract:
General Introduction This thesis can be divided into two main parts: the first, corresponding to the first three chapters, studies Rules of Origin (RoOs) in Preferential Trade Agreements (PTAs); the second part, the fourth chapter, is concerned with Anti-Dumping (AD) measures. Despite the wide-ranging preferential access granted to developing countries by industrial ones under North-South Trade Agreements, whether reciprocal, like the Europe Agreements (EAs) or NAFTA, or not, such as the GSP, AGOA, or EBA, it has been claimed that the benefits from improved market access keep falling short of the full potential benefits. RoOs are largely regarded as a primary cause of the under-utilization of the improved market access of PTAs. RoOs are the rules that determine the eligibility of goods for preferential treatment. Their economic justification is to prevent trade deflection, i.e. to prevent non-preferred exporters from using the tariff preferences. However, they are complex, cost-raising and cumbersome, and can be manipulated by organised special interest groups. As a result, RoOs can restrain trade beyond what is needed to prevent trade deflection, and hence restrict market access in a statistically significant and quantitatively large proportion. Part I In order to further our understanding of the effects of RoOs in PTAs, the first chapter, written with Pr. Olivier Cadot, Céline Carrère and Pr. Jaime de Melo, describes and evaluates the RoOs governing EU and US PTAs. It draws on utilization-rate data for Mexican exports to the US in 2001 and on similar data for ACP exports to the EU in 2002. The paper makes two contributions. First, we construct an R-index of the restrictiveness of RoOs along the lines first proposed by Estevadeordal (2000) for NAFTA, modifying it and extending it to the EU's single list (SL). This synthetic R-index is then used to compare RoOs under NAFTA and PANEURO. The two main findings of the chapter are as follows.
First, it shows, in the case of PANEURO, that the R-index is useful for summarizing how countries are differently affected by the same set of RoOs because of their different export baskets to the EU. Second, it is shown that the R-index is a relatively reliable statistic in the sense that, subject to caveats, after controlling for the extent of tariff preference at the tariff-line level, it accounts for differences in utilization rates at the tariff-line level. Finally, together with utilization rates, the index can be used to estimate the total compliance costs of RoOs. The second chapter proposes a reform of preferential RoOs with the aim of making them more transparent and less discriminatory. Such a reform would make preferential blocs more "cross-compatible" and would therefore facilitate cumulation. It would also help move regionalism toward more openness and hence make it more compatible with the multilateral trading system. It focuses on NAFTA, one of the most restrictive FTAs (see Estevadeordal and Suominen 2006), and proposes a way forward that is close in spirit to what the EU Commission is considering for the PANEURO system. In a nutshell, the idea is to replace the current array of RoOs with a single instrument: a Maximum Foreign Content (MFC). An MFC is a conceptually clear and transparent instrument, like a tariff. Therefore, changing all instruments into an MFC would bring improved transparency, much like the "tariffication" of NTBs. The methodology for this exercise is as follows. In step 1, I estimate the relationship between utilization rates, tariff preferences and RoOs. In step 2, I retrieve the estimates and invert the relationship to obtain a simulated MFC that gives, line by line, the same utilization rate as the old array of RoOs.
In step 3, I calculate the trade-weighted average of the simulated MFC across all lines to get an overall equivalent of the current system, and explore the possibility of setting this unique instrument at a uniform rate across lines. This would have two advantages. First, like a uniform tariff, a uniform MFC would make it difficult for lobbies to manipulate the instrument at the margin. This argument is standard in the political-economy literature and has been used time and again in support of reductions in the variance of tariffs (together with standard welfare considerations). Second, uniformity across lines is the only way to eliminate the indirect source of discrimination alluded to earlier: only if two countries face uniform RoOs and tariff preferences will they face uniform incentives irrespective of their initial export structure. The result of this exercise is striking: the average simulated MFC is 25% of the good's value, a very low (i.e. restrictive) level, confirming Estevadeordal and Suominen's critical assessment of NAFTA's RoOs. Adopting a uniform MFC would imply a relaxation from the benchmark level for sectors like chemicals or textiles & apparel, and a stiffening for wood products, paper and base metals. Overall, however, the changes are not drastic, suggesting perhaps only moderate resistance to change from special interests. The third chapter of the thesis considers whether the Europe Agreements of the EU, with the current sets of RoOs, could be a model for future EU-centered PTAs. First, I studied and coded, at the six-digit level of the Harmonised System (HS), both the old RoOs (used before 1997) and the "single list" RoOs (used since 1997). Second, using a constant elasticity of transformation function in which CEEC exporters smoothly allocate sales between the EU and the rest of the world by comparing producer prices on each market, I estimated the trade effects of the EU's RoOs.
The estimates suggest that much of the market access conferred by the EAs, outside sensitive sectors, was undone by the cost-raising effects of RoOs. The chapter also contains an analysis of the evolution of the CEECs' trade with the EU from post-communism to accession. Part II The last chapter of the thesis is concerned with anti-dumping, another trade-policy instrument having the effect of reducing market access. In 1995, the Uruguay Round introduced into the Anti-Dumping Agreement (ADA) a mandatory "sunset review" clause (Article 11.3 ADA) under which anti-dumping measures should be reviewed no later than five years after their imposition and terminated unless there is a serious risk of resumption of injurious dumping. The last chapter, written with Pr. Olivier Cadot and Pr. Jaime de Melo, uses a new database on Anti-Dumping (AD) measures worldwide to assess whether the sunset-review agreement had any effect. The question we address is whether the WTO Agreement succeeded in imposing the discipline of a five-year cycle on AD measures and, ultimately, in curbing their length. Two methods are used: count-data analysis and survival analysis. First, using Poisson and negative binomial regressions, the count of revocations of AD measures is regressed on (inter alia) the count of initiations lagged five years. The analysis yields a coefficient on initiations lagged five years that is larger and more precisely estimated after the agreement than before, suggesting some effect. However, the coefficient estimate is nowhere near the value that would give a one-for-one relationship between initiations and revocations after five years. We also find that (i) if the agreement affected EU AD practices, the effect went the wrong way, the five-year cycle being quantitatively weaker after the agreement than before; and (ii) the agreement had no visible effect on the United States except for a one-time peak in 2000, suggesting a mopping-up of old cases.
Second, survival analysis of AD measures around the world suggests a shortening of their expected lifetime after the agreement, and this shortening effect (a downward shift in the survival function post-agreement) was larger and more significant for measures targeted at WTO members than for those targeted at non-members (for which WTO disciplines do not bind), suggesting de jure compliance. A difference-in-differences Cox regression confirms this diagnosis: controlling for the countries imposing the measures, for the investigated countries and for the products' sectors, we find a larger increase in the hazard rate of AD measures covered by the Agreement than of other measures.
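The survival-analysis logic, a downward post-agreement shift in the survival function of AD measure lifetimes, can be illustrated with a minimal Kaplan-Meier sketch on synthetic, uncensored durations; the exponential lifetimes and their parameters are invented for illustration, not taken from the thesis data:

```python
import numpy as np

def kaplan_meier(durations):
    """Kaplan-Meier survival curve for fully observed (uncensored) durations."""
    times = np.sort(np.unique(durations))
    at_risk, s, surv = len(durations), 1.0, []
    for t in times:
        d = np.sum(durations == t)     # terminations at time t
        s *= (at_risk - d) / at_risk   # conditional survival update
        at_risk -= d
        surv.append(s)
    return times, np.array(surv)

rng = np.random.default_rng(1)
pre = rng.exponential(8.0, 300)    # invented pre-agreement AD lifetimes (years)
post = rng.exponential(5.0, 300)   # invented post-agreement lifetimes
t_pre, s_pre = kaplan_meier(pre)
t_post, s_post = kaplan_meier(post)
# s_post sits below s_pre: the post-agreement survival function shifts down
```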
Abstract:
When continuous data are coded as categorical variables, two types of coding are possible: crisp coding in the form of indicator, or dummy, variables with values either 0 or 1; or fuzzy coding, where each observation is transformed into a set of "degrees of membership" between 0 and 1, using so-called membership functions. It is well known that the correspondence analysis of crisp coded data, namely multiple correspondence analysis, yields principal inertias (eigenvalues) that considerably underestimate the quality of the solution in a low-dimensional space. Since the crisp data only code the categories to which each individual case belongs, an alternative measure of fit is simply to count how well these categories are predicted by the solution. Another approach is to consider multiple correspondence analysis equivalently as the analysis of the Burt matrix (i.e., the matrix of all two-way cross-tabulations of the categorical variables), and then perform a joint correspondence analysis to fit just the off-diagonal tables of the Burt matrix; the measure of fit is then computed as the quality of explaining these tables only. The correspondence analysis of fuzzy coded data, called "fuzzy multiple correspondence analysis", suffers from the same problem, albeit attenuated. Again, one can count how many correct predictions are made of the categories with the highest degree of membership. But here one can also defuzzify the results of the analysis to obtain estimated values of the original data, and then calculate a measure of fit in the familiar percentage form, thanks to the resultant orthogonal decomposition of variance. Furthermore, if one thinks of fuzzy multiple correspondence analysis as explaining the two-way associations between variables, a fuzzy Burt matrix can be computed and the same strategy as in the crisp case can be applied to analyse the off-diagonal part of this matrix.
In this paper these alternative measures of fit are defined and applied to a data set of continuous meteorological variables, which are coded crisply and fuzzily into three categories. Measuring the fit is further discussed when the data set consists of a mixture of discrete and continuous variables.
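A minimal sketch of the fuzzy coding step described above, assuming piecewise-linear (triangular) membership functions with three hinge values, which is one common choice; the temperature values and hinge points are invented. Rows of the resulting matrix sum to 1, crisp coding falls out as the argmax, and defuzzifying recovers the original values within the hinge range:

```python
import numpy as np

def fuzzy_code(x, v_low, v_mid, v_high):
    """Fuzzy-code a continuous variable into 3 categories using
    piecewise-linear membership functions hinged at the 3 given values."""
    x = np.asarray(x, dtype=float)
    m_low = np.clip((v_mid - x) / (v_mid - v_low), 0.0, 1.0)
    m_high = np.clip((x - v_mid) / (v_high - v_mid), 0.0, 1.0)
    m_mid = 1.0 - m_low - m_high        # memberships sum to 1 by construction
    return np.column_stack([m_low, m_mid, m_high])

temps = np.array([2.0, 10.0, 25.0])      # invented "meteorological" values
M = fuzzy_code(temps, 0.0, 12.0, 30.0)   # hinge values also invented
crisp = (M == M.max(axis=1, keepdims=True)).astype(int)  # crisp 0/1 coding
recovered = M @ np.array([0.0, 12.0, 30.0])              # defuzzification
```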
Abstract:
This was a descriptive, retrospective study with a quantitative method, aimed at analyzing the nursing diagnoses contained in the records of children 0 to 36 months of age who attended infant health nursing consultations. Documentary analysis and the cross-mapping technique were used. A total of 188 nursing diagnoses were encountered, comprising 56 different diagnoses, of which 33 (58.9%) corresponded to diagnoses contained in the Nomenclature of Nursing Diagnoses and Interventions and 23 (41.1%) were derived from ICNP® Version 1.0. Of the 56 nursing diagnoses, 43 (76.8%) were considered deviations from normalcy. It was concluded that infant health nursing consultations enabled the identification of situations of normalcy and abnormality, with an emphasis on diagnoses of deviations from normalcy. Standardized language favors nursing documentation, contributing to patient care and facilitating communication between nurses and other health professionals.
Abstract:
Objective: To assess the accuracy of the defining characteristics (DC) of the nursing diagnosis Sedentary Lifestyle (SL) in people with hypertension. Method: A cross-sectional study carried out in a referral center for the outpatient care of people with hypertension and diabetes, with a sample of 285 individuals. The form used in the study was designed from operational definitions constructed for each DC of the diagnosis. Four nurses trained to make diagnostic inferences performed the clinical assessment for the presence of SL. Results: The prevalence of SL was 55.8%. Regarding measures of accuracy, the main DC for SL was "chooses a daily routine lacking physical exercise", with a sensitivity of 100% and a specificity of 84.13%. Two DC stood out in the logistic regression, namely: "reports preference for activities low in physical activity" and "poor performance in instrumental activities of daily living (IADL)". Conclusion: The results allowed the identification of the best clinical indicators of SL in hypertensive adults.
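The accuracy measures reported above (sensitivity and specificity of a defining characteristic against the nurses' diagnostic inference taken as the reference standard) can be computed as in this small sketch; the example vectors are invented for illustration:

```python
import numpy as np

def accuracy_measures(sl_present, dc_observed):
    """Sensitivity and specificity of one defining characteristic (DC)
    against the diagnostic inference taken as the reference standard."""
    sl = np.asarray(sl_present, dtype=bool)
    dc = np.asarray(dc_observed, dtype=bool)
    tp = np.sum(dc & sl)    # DC observed, diagnosis present
    fn = np.sum(~dc & sl)   # DC missed, diagnosis present
    tn = np.sum(~dc & ~sl)  # DC absent, diagnosis absent
    fp = np.sum(dc & ~sl)   # DC observed, diagnosis absent
    return tp / (tp + fn), tn / (tn + fp)

# invented assessments: 1 = SL diagnosed / DC observed, 0 = absent
sl = [1, 1, 1, 0, 0, 0]
dc = [1, 1, 0, 0, 0, 1]
sens, spec = accuracy_measures(sl, dc)
```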
Abstract:
OBJECTIVE: To evaluate the effect of two antihypertensive drug classes, calcium channel antagonists and angiotensin-converting enzyme inhibitors, on plasma concentrations of hydrogen sulfide and nitric oxide in patients with hypertension. METHODS: Cross-sectional study with a quantitative approach conducted with hypertensive patients using one of two antihypertensive drug classes: angiotensin-converting enzyme inhibitors or calcium channel antagonists. RESULTS: The plasma nitric oxide concentration was significantly higher in hypertensive patients using angiotensin-converting enzyme inhibitors (p<0.03), and the plasma hydrogen sulfide concentration was significantly higher in hypertensive patients using calcium channel antagonists (p<0.002). CONCLUSION: The findings suggest that these medications have, as an additional mechanism of action, the attenuation of endothelial dysfunction through elevated plasma levels of vasodilatory substances.
Abstract:
Power transformations of positive data tables, prior to applying the correspondence analysis algorithm, are shown to open up a family of methods with direct connections to the analysis of log-ratios. Two variations of this idea are illustrated. The first approach is simply to power the original data and perform a correspondence analysis; this method is shown to converge to unweighted log-ratio analysis as the power parameter tends to zero. The second approach is to apply the power transformation to the contingency ratios, that is, the values in the table relative to expected values based on the marginals; this method converges to weighted log-ratio analysis, or the spectral map. Two applications are described: first, a matrix of population genetic data which is inherently two-dimensional, and second, a larger cross-tabulation with higher dimensionality, from a linguistic analysis of several books.
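The limiting behaviour claimed for this family of methods rests on the standard fact that the centered power transform (x^α - 1)/α tends to log x as α tends to 0 (the Box-Cox limit), which a few lines of code can verify numerically on arbitrary positive values:

```python
import numpy as np

def power_transform(x, alpha):
    """Box-Cox-style centered power transform; tends to log(x) as alpha -> 0."""
    return (x ** alpha - 1.0) / alpha

x = np.array([0.5, 1.0, 2.0, 5.0])  # arbitrary positive values
alphas = [1.0, 0.1, 0.01, 0.001]
gaps = [np.max(np.abs(power_transform(x, a) - np.log(x))) for a in alphas]
# gaps shrink toward 0 as the power parameter alpha tends to 0
```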
Abstract:
BACKGROUND: Circulating 25-hydroxyvitamin D [25(OH)D] concentration is inversely associated with peripheral arterial disease and hypertension. Vascular remodeling may play a role in this association; however, data relating vitamin D levels to specific remodeling biomarkers among ESRD patients are sparse. We tested whether 25(OH)D concentration is associated with markers of vascular remodeling and inflammation in African American ESRD patients. METHODS: We conducted a cross-sectional study among ESRD patients receiving maintenance hemodialysis within Emory University-affiliated outpatient hemodialysis units. Demographic, clinical and dialysis treatment data were collected via direct patient interview and review of patients' records at the time of enrollment, and each patient gave blood samples. Associations between 25(OH)D and biomarker concentrations were estimated in univariate analyses using Pearson's correlation coefficients and in multivariate analyses using linear regression models. 25(OH)D concentration was entered into multivariate linear regression models as a continuous variable and as a binary variable (<15 ng/ml and ≥15 ng/ml). Adjusted estimated concentrations of biomarkers were compared between 25(OH)D groups using analysis of variance (ANOVA). Finally, results were stratified by vascular access type. RESULTS: Among 91 patients, the mean (standard deviation) 25(OH)D concentration was 18.8 (9.6) ng/ml, and was low (<15 ng/ml) in 43% of patients. In univariate analyses, low 25(OH)D was associated with lower serum calcium, higher serum phosphorus, and higher LDL concentrations. 25(OH)D concentration was inversely correlated with MMP-9 concentration (r = -0.29, p = 0.004).
In multivariate analyses, MMP-9 concentration remained negatively associated with 25(OH)D concentration (P = 0.03), and anti-inflammatory IL-10 concentration was positively correlated with 25(OH)D concentration (P = 0.04). CONCLUSIONS: Plasma MMP-9 and circulating 25(OH)D concentrations are significantly and inversely associated among ESRD patients. This finding may suggest a potential mechanism by which low circulating 25(OH)D functions as a cardiovascular risk factor.
Abstract:
We propose a new econometric estimation method for analyzing the probability of leaving unemployment using uncompleted spells from repeated cross-section data, which can be especially useful when panel data are not available. The proposed method-of-moments-based estimator has two important features: (1) it estimates the exit probability at the individual level, and (2) it does not rely on the stationarity assumption of the inflow composition. We illustrate and gauge the performance of the proposed estimator using the Spanish Labor Force Survey data, and analyze the changes in the distribution of unemployment between the 1980s and 1990s during a period of labor market reform. We find that the relative probability of leaving unemployment of the short-term unemployed versus the long-term unemployed became significantly higher in the 1990s.
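The underlying idea, comparing duration-specific counts across successive cross-sections to recover an exit probability without panel data, can be caricatured as follows; this synthetic-cohort sketch is far simpler than the authors' method-of-moments estimator, and the exit probabilities and sample sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
# invented "true" monthly exit probabilities by unemployment-duration group
p_exit = {"short_term": 0.30, "long_term": 0.10}
n_first = {"short_term": 5000, "long_term": 5000}  # counts in cross-section t

# counts of the same duration cohorts observed in an independent
# cross-section one period later (survivors only)
n_second = {g: int(rng.binomial(n_first[g], 1.0 - p_exit[g])) for g in n_first}

# synthetic-cohort estimate: exit probability = 1 - survivors / initial stock
est = {g: 1.0 - n_second[g] / n_first[g] for g in n_first}
```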
Abstract:
Aim This study used data from temperate forest communities to assess: (1) five different stepwise selection methods with generalized additive models, (2) the effect of weighting absences to ensure a prevalence of 0.5, (3) the effect of limiting absences beyond the environmental envelope defined by presences, (4) four different methods for incorporating spatial autocorrelation, and (5) the effect of integrating an interaction factor defined by a regression tree on the residuals of an initial environmental model. Location State of Vaud, western Switzerland. Methods Generalized additive models (GAMs) were fitted using the grasp package (generalized regression analysis and spatial predictions, http://www.cscf.ch/grasp). Results Model selection based on cross-validation appeared to be the best compromise between model stability and performance (parsimony) among the five methods tested. Weighting absences returned models that performed better than models fitted with the original sample prevalence. This appeared to be mainly due to the impact of very low prevalence values on evaluation statistics. Removing zeroes beyond the range of presences on the main environmental gradients changed the set of selected predictors, and potentially their response curve shapes. Moreover, removing zeroes slightly improved model performance and stability when compared with the baseline model on the same data set. Incorporating a spatial trend predictor improved model performance and stability significantly. Even better models were obtained when local spatial autocorrelation was included. A novel approach to including interactions proved to be an efficient way to account for interactions between all predictors at once.
Main conclusions Models and spatial predictions of 18 forest communities were significantly improved by using either: (1) cross-validation as a model selection method, (2) weighted absences, (3) limited absences, (4) predictors accounting for spatial autocorrelation, or (5) a factor variable accounting for interactions between all predictors. The final choice of model strategy should depend on the nature of the available data and the specific study aims. Statistical evaluation is useful in searching for the best modelling practice. However, one should not neglect to consider the shapes and interpretability of response curves, as well as the resulting spatial predictions in the final assessment.
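As a generic illustration of cross-validation as a model selection criterion, here selecting a polynomial degree rather than GAM terms, and in Python rather than the grasp package used in the study (the data-generating process and degrees are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-2.0, 2.0, 120)
y = 1.0 + 0.5 * x - 0.8 * x**2 + rng.normal(0.0, 0.3, 120)  # true degree is 2

def cv_mse(x, y, degree, k=5):
    """Mean squared error of a polynomial fit, estimated by k-fold CV."""
    idx = rng.permutation(len(x))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        coef = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((np.polyval(coef, x[fold]) - y[fold]) ** 2))
    return float(np.mean(errs))

scores = {d: cv_mse(x, y, d) for d in range(1, 7)}
best = min(scores, key=scores.get)  # CV penalizes both under- and over-fitting
```

The same logic applies to stepwise GAM selection: held-out error rewards parsimony, which is why cross-validation emerged as the best compromise between stability and performance.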
Abstract:
The generalization of simple correspondence analysis, for two categorical variables, to multiple correspondence analysis, where there may be three or more variables, is not straightforward, from both a mathematical and a computational point of view. In this paper we detail the exact computational steps involved in performing a multiple correspondence analysis, including the special aspects of adjusting the principal inertias to correct the percentages of inertia, supplementary points, and subset analysis. Furthermore, we give the algorithm for joint correspondence analysis, where the cross-tabulations of all unique pairs of variables are analysed jointly. The code in the R language for every step of the computations is given, as well as the results of each computation.
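The core correspondence-analysis computation such an algorithm builds on, the weighted SVD of standardized residuals, can be sketched as below. The paper gives R code, so this Python version with a toy two-variable indicator matrix is only an illustration; for an indicator matrix with Q variables and J categories, the total inertia is (J - Q)/Q, here (4 - 2)/2 = 1:

```python
import numpy as np

def correspondence_analysis(N):
    """Core CA step: weighted SVD of the standardized residuals of a
    nonnegative table N (here an indicator matrix; a Burt matrix also works)."""
    P = N / N.sum()                        # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)    # row and column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    inertias = sv ** 2                     # principal inertias
    row_coords = (U * sv) / np.sqrt(r)[:, None]  # principal row coordinates
    return inertias, row_coords

# 5 cases, 2 categorical variables with 2 categories each, dummy-coded
Z = np.array([[1, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0],
              [1, 0, 1, 0]], dtype=float)
inertias, coords = correspondence_analysis(Z)
```

The inertia adjustments and joint correspondence analysis described in the paper are refinements applied on top of exactly this decomposition.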