796 results for Empirical Algorithm Analysis
Abstract:
Power transformations of positive data tables, applied prior to the correspondence analysis algorithm, are shown to open up a family of methods with direct connections to the analysis of log-ratios. Two variations of this idea are illustrated. The first approach is simply to power the original data and perform a correspondence analysis; this method is shown to converge to unweighted log-ratio analysis as the power parameter tends to zero. The second approach is to apply the power transformation to the contingency ratios, that is, the values in the table relative to expected values based on the marginals; this method converges to weighted log-ratio analysis, or the spectral map. Two applications are described: first, a matrix of population genetic data which is inherently two-dimensional, and second, a larger cross-tabulation with higher dimensionality, from a linguistic analysis of several books.
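As an illustrative sketch of the first approach (not the authors' code), the snippet below powers a positive table element-wise, runs a standard correspondence analysis via the SVD of standardized residuals, and contrasts the leading coordinates with unweighted log-ratio analysis of the same table. The table `N`, the power values and the rescaling by 1/alpha are assumptions chosen to illustrate the limiting behaviour; the paper's exact scaling conventions may differ.

```python
import numpy as np

def correspondence_analysis(N):
    """Plain CA: SVD of the standardized residual matrix."""
    P = N / N.sum()                      # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)  # row and column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    return (U * sv) / np.sqrt(r)[:, None]  # principal row coordinates

def unweighted_lra(N):
    """Unweighted log-ratio analysis: SVD of the double-centred log table."""
    L = np.log(N)
    L = L - L.mean(axis=1, keepdims=True) - L.mean(axis=0, keepdims=True) + L.mean()
    U, sv, Vt = np.linalg.svd(L, full_matrices=False)
    return U * sv

# hypothetical positive table
rng = np.random.default_rng(0)
N = rng.gamma(shape=2.0, scale=3.0, size=(8, 5)) + 0.5

for alpha in (1.0, 0.5, 0.1, 0.01):
    # CA of the powered table; coordinates are rescaled by 1/alpha so that
    # results for different powers can be compared as alpha tends to zero
    F = correspondence_analysis(N ** alpha) / alpha
    print(alpha, np.round(F[:3, 0], 3))

print("LRA", np.round(unweighted_lra(N)[:3, 0], 3))
```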
Abstract:
The generalization of simple correspondence analysis, for two categorical variables, to multiple correspondence analysis, where there may be three or more variables, is not straightforward, both from a mathematical and a computational point of view. In this paper we detail the exact computational steps involved in performing a multiple correspondence analysis, including the special aspects of adjusting the principal inertias to correct the percentages of inertia, supplementary points and subset analysis. Furthermore, we give the algorithm for joint correspondence analysis, where the cross-tabulations of all unique pairs of variables are analysed jointly. The code in the R language for every step of the computations is given, as well as the results of each computation.
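The paper itself provides R code for every step. Purely as an illustrative sketch of the indicator-matrix route (not the authors' code), the fragment below builds an indicator matrix from categorical responses, runs correspondence analysis on it, and applies one common adjustment of the principal inertias (a Greenacre-style rescaling, given here as an assumption about the kind of adjustment meant). The data frame and column names are invented for illustration.

```python
import numpy as np
import pandas as pd

# hypothetical questionnaire with Q = 3 categorical variables
df = pd.DataFrame({
    "q1": ["a", "b", "a", "c", "b", "a"],
    "q2": ["x", "x", "y", "y", "x", "y"],
    "q3": ["lo", "hi", "hi", "lo", "lo", "hi"],
})
Q = df.shape[1]

Z = pd.get_dummies(df).to_numpy(dtype=float)   # indicator matrix
P = Z / Z.sum()
r, c = P.sum(axis=1), P.sum(axis=0)
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

inertias = sv ** 2                             # principal inertias, indicator analysis

# one common adjustment: only singular values above 1/Q are rescaled;
# the paper details the exact denominator used to express these as percentages
adj = np.array([(Q / (Q - 1) * (s - 1 / Q)) ** 2 for s in sv if s > 1 / Q])
print("indicator inertias:", np.round(inertias[:3], 4))
print("adjusted inertias :", np.round(adj, 4))
```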
Abstract:
In the analysis of multivariate categorical data, typically the analysis of questionnaire data, it is often advantageous, for substantive and technical reasons, to analyse a subset of response categories. In multiple correspondence analysis, where each category is coded as a column of an indicator matrix or as a row and column of the Burt matrix, it is not correct to simply analyse the corresponding submatrix of data, since the whole geometric structure is different for the submatrix. A simple modification of the correspondence analysis algorithm allows the overall geometric structure of the complete data set to be retained while calculating the solution for the selected subset of points. This strategy is useful for analysing patterns of response amongst any subset of categories and relating these patterns to demographic factors, especially for studying patterns of particular responses such as missing and neutral responses. The methodology is illustrated using data from the International Social Survey Program on Family and Changing Gender Roles in 1994.
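A minimal sketch of the idea as commonly described for subset correspondence analysis (not necessarily the exact algorithm of the paper): margins and centring come from the complete table, and only then are the columns of interest selected from the standardized residual matrix for the SVD. The table and the selected column indices are assumptions.

```python
import numpy as np

def subset_ca(N, keep_cols):
    """Subset CA sketch: full-table margins, SVD restricted to chosen columns."""
    P = N / N.sum()
    r, c = P.sum(axis=1), P.sum(axis=0)          # masses from the FULL table
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    S_sub = S[:, keep_cols]                      # restrict only AFTER centring
    U, sv, Vt = np.linalg.svd(S_sub, full_matrices=False)
    row_coords = (U * sv) / np.sqrt(r)[:, None]
    col_coords = (Vt.T * sv) / np.sqrt(c[keep_cols])[:, None]
    return row_coords, col_coords, sv ** 2       # coordinates and principal inertias

# hypothetical cross-tabulation; keep only columns 1 and 3 (e.g. "neutral", "missing")
N = np.array([[12, 5, 30, 3],
              [20, 8, 25, 6],
              [15, 2, 40, 1]], dtype=float)
rows, cols, inertias = subset_ca(N, keep_cols=[1, 3])
print(np.round(inertias, 4))
```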
Abstract:
This paper provides updated empirical evidence about the real and nominal effects of monetary policy in Italy, using structural VAR analysis. We discuss different empirical approaches that have been used to identify exogenous monetary policy shocks. We argue that the data support the view that the Bank of Italy, at least in the recent past, has been targeting the rate on overnight interbank loans. Therefore, we interpret shocks to the overnight rate as purely exogenous monetary policy shocks and study how different macroeconomic variables react to such shocks.
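As a hedged illustration of this type of exercise (not the authors' specification), the sketch below fits a reduced-form VAR with statsmodels and recovers orthogonalized impulse responses to an overnight-rate shock under a recursive (Cholesky) ordering. The variable names, the ordering with the policy rate last and the lag choice are all assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# hypothetical monthly data: output, prices and the overnight rate (ordered last so
# that, under a Cholesky identification, the rate innovation is the policy shock)
rng = np.random.default_rng(1)
T = 200
df = pd.DataFrame(
    rng.normal(size=(T, 3)).cumsum(axis=0) * 0.1,
    columns=["log_output", "log_prices", "overnight_rate"],
)
df = df.diff().dropna()                    # work with changes / growth rates

model = VAR(df)
res = model.fit(maxlags=12, ic="aic")      # lag length chosen by AIC

irf = res.irf(24)                          # impulse responses, 24 periods ahead
# orthogonalized responses of all variables to a one-s.d. overnight-rate shock
shock_idx = df.columns.get_loc("overnight_rate")
print(np.round(irf.orth_irfs[:6, :, shock_idx], 3))
```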
Abstract:
A method is offered that makes it possible to apply generalized canonical correlation analysis (CANCOR) to two or more matrices of different row and column order. The new method optimizes the generalized canonical correlation analysis objective by considering only the observed values. This is achieved by employing selection matrices. We present and discuss fit measures to assess the quality of the solutions. In a simulation study we assess the performance of our new method and compare it to an existing procedure called GENCOM, proposed by Green and Carroll. We find that our new method outperforms the GENCOM algorithm both with respect to model fit and recovery of the true structure. Moreover, as our new method does not require any type of iteration, it is easier to implement and requires less computation. We illustrate the method by means of an example concerning the relative positions of the political parties in the Netherlands based on provincial data.
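For context only, and explicitly not the method proposed in the paper, the sketch below shows a Carroll-style generalized canonical correlation analysis for the complete-data case: the group configuration is taken from the leading eigenvectors of the sum of the column-space projectors of the individual matrices. The paper's contribution, handling matrices of different row order through selection matrices, is not reproduced here; the blocks and dimensions are assumptions.

```python
import numpy as np

def carroll_gcca(blocks, ndim=2):
    """Complete-data Carroll-type GCCA: group scores from the sum of projectors."""
    n = blocks[0].shape[0]
    P_sum = np.zeros((n, n))
    for X in blocks:
        Xc = X - X.mean(axis=0)                            # column-centre each block
        P_sum += Xc @ np.linalg.pinv(Xc.T @ Xc) @ Xc.T     # projector onto col(Xc)
    vals, vecs = np.linalg.eigh(P_sum)                     # symmetric, so eigh
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:ndim]], vals[order[:ndim]]

# three hypothetical blocks observed on the same 30 rows
rng = np.random.default_rng(2)
blocks = [rng.normal(size=(30, k)) for k in (4, 6, 3)]
G, ev = carroll_gcca(blocks)
print(G.shape, np.round(ev, 3))
```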
Abstract:
We obtain minimax lower and upper bounds for the expected distortion redundancy of empirically designed vector quantizers. We show that the mean squared distortion of a vector quantizer designed from $n$ i.i.d. data points using any design algorithm is at least $\Omega(n^{-1/2})$ away from the optimal distortion for some distribution on a bounded subset of ${\cal R}^d$. Together with existing upper bounds, this result shows that the minimax distortion redundancy for empirical quantizer design, as a function of the size of the training data, is asymptotically on the order of $n^{-1/2}$. We also derive a new upper bound for the performance of the empirically optimal quantizer.
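Restating the quantities in the abstract, with notation chosen here rather than taken from the paper: if $D(\mu, q)$ denotes the mean squared distortion of a quantizer $q$ under source distribution $\mu$, $D^*(\mu)$ the optimal distortion over quantizers of the given codebook size, and $q_n$ a quantizer designed from $n$ i.i.d. training points, then the bounds concern the minimax distortion redundancy:

```latex
% distortion redundancy of an empirically designed quantizer q_n
J(\mu, q_n) = \mathbb{E}\, D(\mu, q_n) - D^*(\mu),

% minimax statement implied by the abstract: the lower bound \Omega(n^{-1/2}),
% valid for some \mu on a bounded subset of \mathbb{R}^d, combined with
% matching upper bounds, gives the rate
\inf_{q_n} \sup_{\mu} \bigl( \mathbb{E}\, D(\mu, q_n) - D^*(\mu) \bigr) \asymp n^{-1/2}.
```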
Abstract:
This paper introduces the approach of using Total Unduplicated Reach and Frequency (TURF) analysis to design a product line through a binary linear programming model. This improves the efficiency of the search for the solution to the problem compared to the algorithms that have been used to date. The results obtained with our exact algorithm are presented, and the method proves extremely efficient both in obtaining optimal solutions and in computing time, even for very large instances of the problem at hand. Furthermore, the proposed technique enables the model to be improved so as to overcome the main drawbacks that TURF analysis presents in practice.
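A minimal sketch of a reach-maximizing TURF formulation as a binary linear program, given as an illustration of the general approach rather than the exact model in the paper, using the PuLP modelling library. The acceptance matrix, the line size `k` and the variable names are assumptions.

```python
import numpy as np
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum

# hypothetical acceptance data: a[i, j] = 1 if respondent i would buy product j
rng = np.random.default_rng(3)
a = (rng.random((50, 8)) < 0.3).astype(int)
n_resp, n_prod = a.shape
k = 3                                    # number of products in the line

prob = LpProblem("turf_reach", LpMaximize)
x = [LpVariable(f"x_{j}", cat=LpBinary) for j in range(n_prod)]   # product chosen
y = [LpVariable(f"y_{i}", cat=LpBinary) for i in range(n_resp)]   # respondent reached

prob += lpSum(y)                         # objective: maximize unduplicated reach
prob += lpSum(x) == k                    # exactly k products in the line
for i in range(n_resp):
    # respondent i counts as reached only if some chosen product is acceptable
    prob += y[i] <= lpSum(x[j] for j in range(n_prod) if a[i, j])

prob.solve()
print("reach:", int(sum(v.value() for v in y)))
print("line :", [j for j in range(n_prod) if x[j].value() > 0.5])
```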
Abstract:
This paper provides some first empirical evidence on the relationship between R&D spillovers and R&D cooperation. The results suggest disentangling different aspects of know-how flows. Firms which rate incoming spillovers as more important, and which can limit outgoing spillovers through more effective protection of know-how, are more likely to cooperate in R&D. Our analysis also finds that cooperating firms have higher incoming spillovers and higher protection of know-how, indicating that cooperation may serve as a vehicle to manage information flows. Our results thus suggest that, on the one hand, the information sharing and coordination aspects of incoming spillovers are crucial in understanding cooperation, while on the other hand, protection against outgoing spillovers is important for firms to engage in stable cooperative agreements by reducing free-rider problems. Distinguishing different types of cooperative partners reveals that, while managing outgoing spillovers is less critical in alliances with non-commercial research partners than between vertically related partners, incoming spillovers seem to be more critical in understanding the former type of R&D cooperation.
Abstract:
An attendance equation is estimated using data on individual games played in the Spanish First Division Football League. The specification includes as explanatory factors: economic variables, quality, uncertainty and opportunity costs. We concentrate the analysis on some specification issues, such as controlling for the effect of unobservables given the panel structure of the data set, the type of functional form and the potential endogeneity of prices. We obtain the expected effects on attendance for all the variables. The estimated price elasticities are smaller than one in absolute value, as usually occurs in this literature, but are sensitive to the specification issues.
Endogenous matching in university-industry collaboration: Theory and empirical evidence from the UK
Abstract:
We develop a two-sided matching model to analyze collaboration between heterogeneous academics and firms. We predict a positive assortative matching in terms of both scientific ability and affinity for type of research, but negative assortative matching in terms of ability on one side and affinity on the other. In addition, the most able and most applied academics and the most able and most basic firms should collaborate rather than stay independent. Our predictions receive strong support from the analysis of the teams of academics and firms that propose research projects to the UK's Engineering and Physical Sciences Research Council.
Abstract:
Climate science indicates that climate stabilization requires low GHG emissions. Is this consistent with nondecreasing human welfare? Our welfare or utility index emphasizes education, knowledge, and the environment. We construct and calibrate a multigenerational model with intertemporal links provided by education, physical capital, knowledge and the environment. We reject discounted utilitarianism and adopt, first, the Pure Sustainability Optimization (or Intergenerational Maximin) criterion and, second, the Sustainable Growth Optimization criterion, which maximizes the utility of the first generation subject to a given future rate of growth. We apply these criteria to our calibrated model via a novel algorithm inspired by the turnpike property. The computed paths yield levels of utility higher than the level at the reference year 2000 for all generations. They require doubling the fraction of labor resources devoted to the creation of knowledge relative to the reference level, whereas the fractions of labor allocated to consumption and leisure are similar to the reference ones. On the other hand, higher growth rates require substantial increases in the fraction of labor devoted to education, together with moderate increases in the fractions of labor devoted to knowledge and to investment in physical capital.
Abstract:
Until recently, the hard X-ray, phase-sensitive imaging technique called grating interferometry was thought to provide information only in real space. However, by utilizing an alternative approach to data analysis we demonstrated that the angular resolved ultra-small angle X-ray scattering distribution can be retrieved from experimental data. Thus, reciprocal space information is accessible by grating interferometry in addition to real space. Naturally, the quality of the retrieved data strongly depends on the performance of the employed analysis procedure, which in this context involves deconvolution of periodic and noisy data. The aim of this article is to compare several deconvolution algorithms for retrieving the ultra-small angle X-ray scattering distribution in grating interferometry. We quantitatively compare the performance of three deconvolution procedures (i.e., Wiener, iterative Wiener and Lucy-Richardson) in the case of realistically modeled, noisy and periodic input data. The simulations showed that the Lucy-Richardson algorithm is the most reliable and most efficient for signals with the characteristics considered in this context. The availability of a reliable data analysis procedure is essential for future developments in grating interferometry.
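As an illustrative sketch only, and not the paper's implementation (which deals with periodic grating-interferometry signals), the snippet below runs a plain Richardson-Lucy deconvolution on a noisy 1D signal blurred with a known kernel. The test signal, the Gaussian kernel and the iteration count are assumptions.

```python
import numpy as np

def richardson_lucy_1d(data, psf, n_iter=50):
    """Plain Richardson-Lucy iterations for a non-negative 1D signal."""
    psf_mirror = psf[::-1]
    estimate = np.full_like(data, data.mean())       # flat initial estimate
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = data / np.maximum(blurred, 1e-12)    # guard against division by zero
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# hypothetical test signal: two peaks blurred by a Gaussian kernel plus noise
rng = np.random.default_rng(4)
x = np.arange(200)
truth = np.exp(-0.5 * ((x - 60) / 4.0) ** 2) + 0.6 * np.exp(-0.5 * ((x - 130) / 6.0) ** 2)
kernel = np.exp(-0.5 * (np.arange(-15, 16) / 5.0) ** 2)
kernel /= kernel.sum()
observed = np.convolve(truth, kernel, mode="same") + 0.01 * rng.normal(size=x.size)
observed = np.clip(observed, 0, None)                # RL assumes non-negative data

restored = richardson_lucy_1d(observed, kernel, n_iter=80)
print("main peak position (restored vs true):", restored.argmax(), truth.argmax())
```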
Abstract:
We review methods to estimate the average crystal (grain) size and the crystal (grain) size distribution in solid rocks. Average grain sizes often provide the basis for stress estimates or rheological calculations that require the quantification of grain sizes in a rock's microstructure. The primary data for grain size analysis are either 1D (i.e. line intercept methods), 2D (area analysis) or 3D (e.g., computed tomography, serial sectioning). These data have been subjected to different data treatments over the years, and several studies assume a certain probability function (e.g., logarithmic, square root) to calculate statistical parameters such as the mean, median, mode or the skewness of a crystal size distribution. The calculated average grain sizes have to be compatible across the different grain size estimation approaches in order to be properly applied, for example, in paleo-piezometers or grain size sensitive flow laws. Such compatibility is tested for different data treatments using one- and two-dimensional measurements. We propose an empirical conversion matrix for different datasets. These conversion factors provide the option to make different datasets compatible with each other, even though the primary calculations were obtained in different ways. In order to present an average grain size, we propose to use the area-weighted and volume-weighted mean in the case of unimodal grain size distributions, for 2D and 3D measurements respectively. The shape of the crystal size distribution is important for studies of nucleation and growth of minerals. The shape of the crystal size distribution of garnet populations is compared between different 2D and 3D measurements, namely serial sectioning and computed tomography. The comparison of directly measured 3D data, stereological data and directly presented 2D data shows the problems of the quality of the smallest grain sizes and the overestimation of small grain sizes in stereological tools, depending on the type of CSD. (C) 2011 Published by Elsevier Ltd.
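As a small illustration of the kind of quantity recommended here for 2D data, the sketch below converts measured 2D grain areas to equivalent circular diameters and compares the arithmetic mean with the area-weighted mean. The weighting formula is the standard definition, not taken from the paper, and the grain areas are hypothetical.

```python
import numpy as np

# hypothetical grain areas from a 2D section, in square micrometres
rng = np.random.default_rng(5)
areas = rng.lognormal(mean=3.0, sigma=0.6, size=500)

# equivalent circular diameter of each grain section
diameters = 2.0 * np.sqrt(areas / np.pi)

arithmetic_mean = diameters.mean()
# area-weighted mean: each grain's diameter weighted by its sectional area
area_weighted_mean = np.sum(areas * diameters) / np.sum(areas)

print(f"arithmetic mean diameter   : {arithmetic_mean:.2f} um")
print(f"area-weighted mean diameter: {area_weighted_mean:.2f} um")
```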
Abstract:
According to most political scientists and commentators, direct democracy seems to weaken political parties. Our empirical analysis in the 26 Swiss cantons shows that this thesis in its general form cannot be maintained. Political parties in cantons with extensive use of referendums and initiatives are not in all respects weaker than parties in cantons with little use of direct democratic means of participation. On the contrary, direct democracy goes together with more professional and formalized party organizations. Use of direct democracy is associated with more fragmented and volatile party systems, and with greater support for small parties, but causal interpretations of these relationships are difficult.
Abstract:
Background: Infection after total or partial hip arthroplasty (HA) leads to significant long-term morbidity and high healthcare cost. We evaluated reasons for treatment failure of different surgical modalities in a 12-year prosthetic hip joint infection cohort study.
Method: All patients hospitalized at our institution with infected HA were included either retrospectively (1999-2007) or prospectively (2008-2010). HA infection was defined as growth of the same microorganism in ≥2 tissue or synovial fluid cultures, visible purulence, a sinus tract, or acute inflammation on tissue histopathology. Outcome analysis was performed at outpatient visits, followed by contacting patients, their relatives and/or treating physicians afterwards.
Results: During the study period, 117 patients with infected HA were identified. We excluded 2 patients due to missing data. The average age was 69 years (range, 33-102 years); 42% were female. HA was mainly performed for osteoarthritis (n=84), followed by trauma (n=22), necrosis (n=4), dysplasia (n=2), rheumatoid arthritis (n=1), osteosarcoma (n=1) and tuberculosis (n=1). 28 infections occurred early (≤3 months), 25 delayed (3-24 months) and 63 late (≥24 months after surgery). Infected HA were treated with (i) two-stage exchange in 59 patients (51%, cure rate: 93%), (ii) one-stage exchange in 5 (4.3%, cure rate: 100%), (iii) debridement with change of mobile parts in 18 (17%, cure rate: 83%), (iv) debridement without change of mobile parts in 17 (14%, cure rate: 53%), (v) Girdlestone in 13 (11%, cure rate: 100%), and (vi) two-stage exchange followed by removal in 3 (2.6%). Patients were followed for an average of 3.9 years (range, 0.1 to 9 years); 7 patients died of causes unrelated to the infected HA. 15 patients (13%) needed additional operations, 1 for mechanical reasons (dislocation of a spacer) and 14 for persistent infection: 11 were treated with debridement and retention (8 without and 3 with change of mobile parts) and 3 with two-stage exchange. The average number of surgeries was 2.2 (range, 1 to 5). The infection was finally eradicated in all patients, but the functional outcome remained unsatisfactory in 20% (persistent pain or impaired mobility due to a spacer or Girdlestone situation).
Conclusions: Non-adherence to the current treatment concept leads to treatment failure and subsequent operations. Precise analysis of each treatment failure can be used to improve the treatment algorithm, leading to better results.