12 results for common method variance
in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Boar taint is the off-odour or off-flavour of cooked pork. Currently, the most common method of controlling boar taint is surgical castration. However, immunocastration has been used in some parts of the world as an alternative to surgical castration. The aim of this study was to evaluate the sensory acceptability of meat from immunocastrated pigs (IM) compared with meat from females (FE), surgically castrated males (CM) and entire males (EM). Twenty animals of each type were evaluated by 201 consumers in 20 sessions. Longissimus thoracis muscle of the different animals was cooked in an oven at 180 °C for 10 min. Consumers scored the odour and the flavour of the meat on a 9-point category scale without an intermediate level. There were no significant differences in consumers' evaluation of meat from IM, CM, and FE. In contrast, EM meat presented a higher percentage of dissatisfied scores and was significantly (P < 0.05) less accepted than meat from CM, IM and FE. Consumers' acceptability of EM meat was always lower, independent of its androstenone levels. However, meat with low levels of androstenone was more accepted than meat with medium or high levels of this substance. It can be concluded that immunocastration produced pork that was accepted by consumers and was indistinguishable from pork from CM or FE.
Abstract:
A new drift compensation method based on Common Principal Component Analysis (CPCA) is proposed. The drift variance in the data is found as the principal components computed by CPCA; the method finds components that are common to all gases in feature space. It is compared, on a classification task, with previously published approaches in which the drift direction is estimated through a Principal Component Analysis (PCA) of a reference gas. The proposed method, which employs no specific reference gas but uses information from all gases, has shown the same performance as the traditional approach with the best-fitted reference gas. Results are shown for a dataset spanning 7 months, comprising three gases at different concentrations measured with an array of 17 polymeric sensors.
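The traditional baseline mentioned above, estimating the drift direction as the first principal component of a reference gas and removing that component from all measurements, can be sketched in a few lines of NumPy. This is a minimal illustration, not the CPCA method itself; function names and array shapes are assumptions.

```python
import numpy as np

def drift_direction(reference_data):
    """Estimate the drift direction as the first principal component
    of mean-centred reference-gas measurements (rows = samples)."""
    X = reference_data - reference_data.mean(axis=0)
    # Right singular vectors of the centred data are the principal axes.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[0]  # unit vector along the largest-variance direction

def deflate(data, direction):
    """Remove each sample's component along the estimated drift direction."""
    proj = data @ direction
    return data - np.outer(proj, direction)
```

After deflation, every sample has zero projection onto the estimated drift axis, which is the sense in which the drift component is "removed" before classification.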
Abstract:
Ever since the appearance of the ARCH model [Engle (1982a)], an impressive array of variance specifications belonging to the same class of models has emerged [e.g. Bollerslev's (1986) GARCH; Nelson's (1990) EGARCH]. This field has developed very successfully. Nevertheless, several empirical studies seem to show that the performance of such models is not always appropriate [Boulier (1992)]. In this paper we propose a new specification: the Quadratic Moving Average Conditional Heteroskedasticity (QMACH) model. Its statistical properties, such as kurtosis and symmetry, as well as two estimators (Method of Moments and Maximum Likelihood), are studied. Two statistical tests are presented: the first tests for homoskedasticity and the second discriminates between the ARCH and QMACH specifications. A Monte Carlo study is presented in order to illustrate some of the theoretical results. An empirical study is undertaken for the DM-US exchange rate.
Abstract:
Aortic stiffness is an independent predictor of cardiovascular risk. Different methods for determining pulse wave velocity (PWV) are used, among which the most common are mechanical methods such as SphygmoCor or Complior, which require specific devices and are limited by the technical difficulty of obtaining measurements. Doppler guided by 2D ultrasound is a good alternative to these methods. We studied 40 patients (29 male, aged 21 to 82 years), comparing the Complior method with Doppler. Agreement between the two devices was high (R = 0.91; 95% CI 0.84-0.95). The reproducibility analysis revealed no intra- nor inter-observer differences. Based on these results, we conclude that Doppler ultrasound is a reliable and reproducible alternative to other established methods for the measurement of aortic PWV.
Abstract:
In this paper a method for extracting semantic information from online music discussion forums is proposed. The semantic relations are inferred from the co-occurrence of musical concepts in forum posts, using network analysis. The method starts by defining a dictionary of common music terms in an art music tradition. Then, it creates a complex network representation of the online forum by matching such a dictionary against the forum posts. Once the complex network is built we can study different network measures, including node relevance, node co-occurrence and term relations via semantically connecting words. Moreover, we can detect communities of concepts inside the forum posts. The rationale is that some music terms are more related to each other than to other terms. All in all, this methodology allows us to obtain meaningful and relevant information from forum discussions.
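The dictionary-matching and co-occurrence step can be sketched as follows: count, for every pair of dictionary terms, how many posts mention both. This is a minimal sketch; the term list, tokenization and data structures are illustrative, and the actual method builds a richer complex network.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(posts, dictionary):
    """Weighted co-occurrence graph: nodes are dictionary terms, and the
    weight of edge (a, b) counts the posts mentioning both a and b."""
    vocab = {term.lower() for term in dictionary}
    edges = Counter()
    for post in posts:
        # Terms from the dictionary that appear in this post.
        present = sorted(vocab & set(post.lower().split()))
        for a, b in combinations(present, 2):
            edges[(a, b)] += 1
    return edges
```

The resulting edge weights can then feed any network-analysis library to compute node relevance or detect communities of related concepts.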
Abstract:
Lexical Resources are a critical component of Natural Language Processing applications. However, the high cost of comparing and merging different resources has been a bottleneck to obtaining richer resources and a broader range of potential uses for a significant number of languages. With the objective of reducing cost by eliminating human intervention, we present a new method for the automatic merging of resources. This method includes both the automatic mapping of the resources involved to a common format and their merging once in this format. This paper presents how we have addressed the merging of two verb subcategorization frame lexica for Spanish, but our method will be extended to cover other types of Lexical Resources. The results achieved, which almost replicate human work, demonstrate the feasibility of the approach.
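The second step, merging two lexica once they share a common format, reduces in its simplest form to a union of entries. The representation below (a lemma mapped to a set of subcategorization frame labels) is an assumption for illustration, not the paper's actual format.

```python
def merge_lexica(lex_a, lex_b):
    """Merge two lexica in a common format: each maps a lemma to a set
    of subcategorization frame labels; frames of shared lemmas are unioned."""
    merged = {}
    for lexicon in (lex_a, lex_b):
        for lemma, frames in lexicon.items():
            merged.setdefault(lemma, set()).update(frames)
    return merged
```

In practice the hard part is the mapping step that precedes this union, i.e. reconciling the two resources' frame inventories so that equivalent frames receive the same label.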
Abstract:
Biplots are graphical displays of data matrices based on the decomposition of a matrix as the product of two matrices. Elements of these two matrices are used as coordinates for the rows and columns of the data matrix, with an interpretation of the joint presentation that relies on the properties of the scalar product. Because the decomposition is not unique, there are several alternative ways to scale the row and column points of the biplot, which can cause confusion amongst users, especially when software packages are not united in their approach to this issue. We propose a new scaling of the solution, called the standard biplot, which applies equally well to a wide variety of analyses such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, in fact to any method based on the singular-value decomposition. The standard biplot also handles data matrices with widely different levels of inherent variance. Two concepts taken from correspondence analysis are important to this idea: the weighting of row and column points, and the contributions made by the points to the solution. In the standard biplot one set of points, usually the rows of the data matrix, optimally represents the positions of the cases or sample units, which are weighted and usually standardized in some way unless the matrix contains values that are comparable in their raw form. The other set of points, usually the columns, is represented according to the contributions the points make to the low-dimensional solution. As for any biplot, the projections of the row points onto vectors defined by the column points approximate the centred and (optionally) standardized data.
The method is illustrated with several examples to demonstrate how the standard biplot copes in different situations to give a joint map which needs only one common scale on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot readable. The proposal also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important.
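A generic SVD biplot, of which the standard biplot is a particular scaling, can be sketched from the decomposition X = USV': row coordinates take S to a power alpha and column coordinates take the remaining power, so their scalar products reconstruct the centred data. The sketch below omits the point weighting and contribution-based scaling that define the standard biplot; names and the centring choice are assumptions.

```python
import numpy as np

def biplot_coordinates(X, n_dims=2, alpha=1.0):
    """Biplot coordinates from the SVD of the column-centred matrix.
    alpha=1 gives a row-principal biplot (rows in principal coordinates,
    columns as unit-scaled axes); rows @ cols.T approximates the data."""
    Xc = X - X.mean(axis=0)                         # centre each column
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    rows = U[:, :n_dims] * s[:n_dims] ** alpha
    cols = Vt[:n_dims].T * s[:n_dims] ** (1.0 - alpha)
    return rows, cols
```

Any alpha in [0, 1] yields a valid decomposition; the choice only redistributes the singular values between the two sets of points, which is exactly the scaling ambiguity the standard biplot is designed to resolve.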
Abstract:
We present a new method for constructing exact distribution-free tests (and confidence intervals) for variables that can generate more than two possible outcomes. This method separates the search for an exact test from the goal of creating a non-randomized test. Randomization is used to extend any exact test relating to means of variables with finitely many outcomes to variables with outcomes belonging to a given bounded set. Tests in terms of variance and covariance are reduced to tests relating to means. Randomness is then eliminated in a separate step. This method is used to create confidence intervals for the difference between two means (or variances) and tests of stochastic inequality and correlation.
Abstract:
WO3 nanocrystalline powders were obtained from tungstic acid following a sol-gel process. The evolution of structural properties with annealing temperature was studied by X-ray diffraction and Raman spectroscopy. These structural properties were compared with those of WO3 nanopowders obtained by the most common process, pyrolysis of ammonium paratungstate, usually used in gas sensor applications. Sol-gel WO3 showed a high sensor response to NO2 and low responses to CO and CH4. The response of these sensor devices was compared with that of WO3 obtained by pyrolysis, with the latter showing a worse sensor response to NO2. The influence of operating temperature, humidity, and film thickness on NO2 detection was studied in order to improve the sensing conditions for this gas.
Abstract:
Dual-trap optical tweezers are often used in high-resolution measurements in single-molecule biophysics. Such measurements can be hindered by the presence of extraneous noise sources, the most prominent of which is the coupling of fluctuations along different spatial directions, which may affect any optical tweezers setup. In this article, we analyze, from both the theoretical and the experimental points of view, the most common source of these couplings in dual-trap optical-tweezers setups: the misalignment of traps and tether. We give criteria to distinguish different kinds of misalignment, to estimate their quantitative relevance and to include them in the data analysis. The experimental data are obtained in a (to our knowledge) novel dual-trap optical-tweezers setup that directly measures forces. In the case in which misalignment is negligible, we provide a method to measure the stiffness of traps and tether based on variance analysis. This method can be seen as a calibration technique valid beyond the linear trap region. Our analysis is then employed to measure the persistence length of dsDNA tethers of three different lengths spanning two orders of magnitude. The effective persistence length of such tethers is shown to decrease with the contour length, in accordance with previous studies.
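A variance-based stiffness calibration in its simplest form rests on equipartition: in a harmonic trap at temperature T, the position variance satisfies var(x) = kB*T/k. The sketch below shows only this textbook relation, valid in the linear trap region, and not the authors' extended method; units (metres for positions, N/m for stiffness) are assumptions.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def stiffness_from_variance(positions, temperature=298.0):
    """Equipartition estimate of trap stiffness: var(x) = kB*T/k,
    so k = kB*T / var(x). Positions in metres give k in N/m."""
    return KB * temperature / np.var(positions)
```

The appeal of this calibration is that it needs no force sensor, only a calibrated position signal; its limitation, which motivates methods valid beyond the linear region, is the harmonic-trap assumption itself.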
Abstract:
The epiphytic macroinvertebrate communities associated with the Common Reed, Phragmites australis (Cav.) Trin. ex Steudel, were examined seasonally from summer 2004 to spring 2005 in eleven coastal lagoons of the Llobregat Delta (NE Spain), following the method proposed by Kornijów & Kairesalo (1994). The aims of the study were to: 1) characterise and quantify changes in epiphytic macroinvertebrate communities along environmental gradients; 2) assess the contribution of elements of the epiphytic compartment to structuring the community; 3) define the optima and tolerances of selected epiphytic macroinvertebrate taxa for the most relevant ecological factors responsible for assemblage composition; and 4) identify possible epiphytic species assemblages that would allow a lagoon typology to be established, as well as their representative indicator species. Communities showed statistically significant seasonal variation, with two faunal peaks: one in summer, with high chironomid densities, and the other in winter, with high naidid densities. These peaks showed a clear response to the influence of environmental factors. Salinity explained the highest percentage of total variance (36%), while trophic variables (nutrients, phytoplanktonic chlorophyll-a, and total organic carbon) and epiphyton biomass (19.2% and 4% of total variance explained, respectively) were secondary. Three different epiphytic macroinvertebrate species assemblages could be defined. These assemblages were directly linked to conductivity conditions, which determined the rate of survival of certain taxa, and to the existence of a direct connection with the sea, which permitted the establishment of 'brackish-water' species.
In spite of the existence of these species assemblages, the species composition and biomass of epiphytic macroinvertebrates and epiphyton differed substantially between lagoons; both elements were subject to changes in the environment, which ultimately determined the site-to-site variation in the density and composition of the macroinvertebrate populations.
Abstract:
Monte Carlo simulations were used to generate data for ABAB designs of different lengths. The points of change in phase are randomly determined before gathering the behaviour measurements, which allows the use of a randomization test as an analytic technique. Data simulation and analysis can be based either on data-division-specific distributions or on a common distribution. Following one method or the other affects the results obtained when the randomization test is applied; therefore, the goal of the study was to examine these effects in more detail. The discrepancies between these approaches are evident when data with zero treatment effect are considered, and they have implications for statistical power studies. Data-division-specific distributions provide more detailed information about the performance of the statistical technique.
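The randomization-test logic for an ABAB design can be sketched as follows: compute a phase-difference statistic at the observed change points, then compare it against the same statistic recomputed at randomly drawn change points. The statistic, the way change points are drawn and all names below are illustrative assumptions, not the specific procedure of the study.

```python
import numpy as np

def randomization_test(data, change_points, n_perm=999, seed=0):
    """Randomization test for an ABAB design: the statistic is the absolute
    difference between phase-B and phase-A means, and the null distribution
    comes from re-drawing the three change points at random."""
    data = np.asarray(data, dtype=float)
    n = len(data)
    rng = np.random.default_rng(seed)

    def stat(cps):
        a, b, c = cps
        labels = np.zeros(n, dtype=bool)
        labels[a:b] = True   # first B phase
        labels[c:] = True    # second B phase
        return abs(data[labels].mean() - data[~labels].mean())

    observed = stat(change_points)
    hits = sum(
        stat(sorted(rng.choice(np.arange(1, n), size=3, replace=False))) >= observed
        for _ in range(n_perm)
    )
    return (hits + 1) / (n_perm + 1)  # p-value including the observed split
```

Because the change points were genuinely randomized before data collection, this reference distribution is valid without any distributional assumption on the behaviour measurements.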