928 results for automation of fit analysis
Abstract:
We consider the application of normal theory methods to the estimation and testing of a general type of multivariate regression model with errors-in-variables, in the case where various data sets are merged into a single analysis and the observable variables possibly deviate from normality. The various samples to be merged can differ in the set of observable variables available. We show that there is a convenient way to parameterize the model so that, despite the possible non-normality of the data, normal-theory methods yield correct inferences for the parameters of interest and for the goodness-of-fit test. The theory described encompasses both the functional and structural model cases, and can be implemented using standard software for structural equation models, such as LISREL, EQS, and LISCOMP, among others. An illustration with Monte Carlo data is presented.
Abstract:
γ-Hydroxybutyric acid (GHB) is an endogenous short-chain fatty acid popular as a recreational drug due to its sedative and euphoric effects, but also often implicated in drug-facilitated sexual assaults owing to its disinhibiting and amnesic properties. Whilst discrimination between endogenous and exogenous GHB, as required in intoxication cases, may be achieved by determining the carbon isotope content, such information has not yet been exploited to answer the source inference questions of forensic investigation and intelligence interest. However, potential isotopic fractionation effects occurring throughout the metabolism of GHB may be a major concern in this regard. Thus, urine specimens from six healthy male volunteers who ingested prescription GHB sodium salt, marketed as Xyrem®, were analysed by means of gas chromatography/combustion/isotope ratio mass spectrometry to assess this question. A very narrow range of δ13C values, from −24.81‰ to −25.06‰, was observed, whilst the mean δ13C value of Xyrem® was −24.99‰. Since the urine samples and the prescription drug could not be distinguished by means of statistical analysis, carbon isotopic effects, and their subsequent influence on δ13C values through GHB metabolism as a whole, could be ruled out. Thus, a link may be established between GHB as a raw matrix and GHB found in a biological fluid, bringing relevant information regarding source inference evaluation. Therefore, this study supports a diversified scope of exploitation for stable isotopes characterized in biological matrices, from investigations of intoxication cases to drug intelligence programmes.
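As a minimal illustration of the measure this abstract relies on (not code or data from the study), a δ13C value expresses a sample's 13C/12C ratio relative to the VPDB standard, in per mil; the ratios below are assumed textbook-style values:

```python
# Illustrative sketch only; the sample ratio is a made-up value chosen to
# fall in the same per-mil range as the abstract's figures.
VPDB_RATIO = 0.0112372  # commonly cited 13C/12C ratio of the VPDB standard

def delta13c(sample_ratio, standard_ratio=VPDB_RATIO):
    """delta-13C in per mil (parts per thousand) relative to the standard."""
    return (sample_ratio / standard_ratio - 1.0) * 1000.0

# A sample depleted in 13C relative to VPDB yields a negative delta value
print(round(delta13c(0.0109565), 2))
```

Source inference then compares such δ13C values between the seized or prescribed material and the biological specimen.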
Abstract:
We consider two fundamental properties in the analysis of two-way tables of positive data: the principle of distributional equivalence, one of the cornerstones of correspondence analysis of contingency tables, and the principle of subcompositional coherence, which forms the basis of compositional data analysis. For an analysis to be subcompositionally coherent, it suffices to analyse the ratios of the data values. The usual approach to dimension reduction in compositional data analysis is to perform principal component analysis on the logarithms of ratios, but this method does not obey the principle of distributional equivalence. We show that by introducing weights for the rows and columns, the method achieves this desirable property. This weighted log-ratio analysis is theoretically equivalent to spectral mapping, a multivariate method developed almost 30 years ago for displaying ratio-scale data from biological activity spectra. The close relationship between spectral mapping and correspondence analysis is also explained, as well as their connection with association modelling. The weighted log-ratio methodology is applied here to frequency data in linguistics and to chemical compositional data in archaeology.
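The "usual approach" this abstract contrasts with, principal component analysis on logarithms of ratios, can be sketched as follows; this is a minimal unweighted version via the centred log-ratio transform, with function and variable names that are mine, not the authors':

```python
import numpy as np

def logratio_pca(X):
    """Unweighted log-ratio PCA of a (n, p) matrix of positive data."""
    L = np.log(X)
    # Centre each row by its own log-mean (clr transform): only ratios
    # within a row matter, which gives subcompositional coherence.
    clr = L - L.mean(axis=1, keepdims=True)
    clr -= clr.mean(axis=0, keepdims=True)   # column-centre before SVD
    U, s, Vt = np.linalg.svd(clr, full_matrices=False)
    scores = U * s                       # row (sample) coordinates
    explained = s**2 / np.sum(s**2)      # variance share per axis
    return scores, Vt, explained

rng = np.random.default_rng(0)
X = rng.uniform(1.0, 10.0, size=(20, 5))
scores, loadings, explained = logratio_pca(X)
print(explained.round(3))
```

Rescaling every row by a constant leaves the result unchanged, which is the coherence property; the paper's contribution is to add row and column weights so that distributional equivalence holds as well.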
Abstract:
A biplot, which is the multivariate generalization of the two-variable scatterplot, can be used to visualize the results of many multivariate techniques, especially those that are based on the singular value decomposition. We consider data sets consisting of continuous-scale measurements, their fuzzy coding and the biplots that visualize them, using a fuzzy version of multiple correspondence analysis. Of special interest is the way quality of fit of the biplot is measured, since it is well-known that regular (i.e., crisp) multiple correspondence analysis seriously under-estimates this measure. We show how the results of fuzzy multiple correspondence analysis can be defuzzified to obtain estimated values of the original data, and prove that this implies an orthogonal decomposition of variance. This permits a measure of fit to be calculated in the familiar form of a percentage of explained variance, which is directly comparable to the corresponding fit measure used in principal component analysis of the original data. The approach is motivated initially by its application to a simulated data set, showing how the fuzzy approach can lead to diagnosing nonlinear relationships, and finally it is applied to a real set of meteorological data.
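A minimal sketch of the kind of fuzzy coding this abstract applies to continuous measurements: triangular membership functions over three categories. The three-category choice, hinge points, and names are my assumptions, not the paper's exact scheme:

```python
import numpy as np

def fuzzy_code(x, lo, mid, hi):
    """Recode values into 3 fuzzy categories (low, mid, high) summing to 1."""
    x = np.clip(np.asarray(x, dtype=float), lo, hi)
    out = np.zeros((x.size, 3))
    below = x <= mid
    t = (x[below] - lo) / (mid - lo)      # position between lo and mid
    out[below, 0], out[below, 1] = 1 - t, t
    t = (x[~below] - mid) / (hi - mid)    # position between mid and hi
    out[~below, 1], out[~below, 2] = 1 - t, t
    return out
```

Unlike crisp coding, which assigns each value wholly to one category, each row here is a composition over categories; the defuzzification step the abstract describes inverts this coding to recover estimated data values.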
Abstract:
A family of scaling corrections aimed at improving the chi-square approximation of goodness-of-fit test statistics in small samples, large models, and nonnormal data was proposed in Satorra and Bentler (1994). For structural equation models, Satorra-Bentler's (SB) scaling corrections are available in standard computer software. Often, however, the interest is not in the overall fit of a model, but in a test of the restrictions that a null model, say ${\cal M}_0$, implies on a less restricted one, ${\cal M}_1$. If $T_0$ and $T_1$ denote the goodness-of-fit test statistics associated with ${\cal M}_0$ and ${\cal M}_1$, respectively, then typically the difference $T_d = T_0 - T_1$ is used as a chi-square test statistic with degrees of freedom equal to the difference in the number of independent parameters estimated under ${\cal M}_0$ and ${\cal M}_1$. As in the case of the goodness-of-fit test, it is of interest to scale the statistic $T_d$ in order to improve its chi-square approximation in realistic, i.e., nonasymptotic and nonnormal, applications. In a recent paper, Satorra (1999) shows that the difference between two Satorra-Bentler scaled test statistics for overall model fit does not yield the correct SB scaled difference test statistic. Satorra developed an expression that permits scaling the difference test statistic, but his formula has some practical limitations, since it requires heavy computations that are not available in standard computer software. The purpose of the present paper is to provide an easy way to compute the scaled difference chi-square statistic from the scaled goodness-of-fit test statistics of models ${\cal M}_0$ and ${\cal M}_1$. A Monte Carlo study is provided to illustrate the performance of the competing statistics.
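The "easy way" amounts to recovering each model's scaling correction from its reported output and pooling the corrections by degrees of freedom. A sketch of that kind of computation, with names and example numbers of my choosing:

```python
def scaled_diff_chisq(T0, T1, Tbar0, Tbar1, d0, d1):
    """Scaled difference chi-square test.

    T0, T1       : normal-theory (unscaled) chi-squares of M0 and M1
    Tbar0, Tbar1 : their SB-scaled counterparts, as reported by software
    d0, d1       : degrees of freedom of M0 and M1 (d0 > d1)
    Returns the scaled difference statistic and its degrees of freedom.
    """
    c0 = T0 / Tbar0                        # scaling correction of M0
    c1 = T1 / Tbar1                        # scaling correction of M1
    cd = (d0 * c0 - d1 * c1) / (d0 - d1)   # pooled correction for T_d
    return (T0 - T1) / cd, d0 - d1
```

With normal data both corrections equal 1 and the statistic reduces to the plain difference $T_0 - T_1$; note that naively differencing the two scaled statistics, $\bar T_0 - \bar T_1$, is exactly what the abstract warns against.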
Abstract:
This study aimed to evaluate the effects (g a.i. L-1) of abamectin (0.02), carbaryl (1.73), sulphur (4.8), fenitrothion (0.75), methidathion (0.4), and trichlorfon (1.5) on the survival of larvae and pupae, on the oviposition of adults, and on the hatching of eggs from treated Chrysoperla externa third-instar larvae from two different populations (Bento Gonçalves and Vacaria, Rio Grande do Sul State, Brazil). Morphological changes caused by abamectin to eggs laid by C. externa from the Vacaria population were evaluated by means of ultrastructural analysis. The pesticides were applied on glass plates; distilled water was used as the control. For the evaluation of larval mortality, a fully randomized experimental design in a 2 x 7 (two populations x seven treatments) factorial scheme was used, whereas for the effects of the compounds on oviposition capacity and egg viability, a 2 x 4 factorial scheme was used. Carbaryl, fenitrothion, and methidathion caused 100% mortality of larvae. Abamectin reduced the hatching of eggs from treated third-instar larvae of both populations; however, this pesticide showed the highest toxicity to insects from Vacaria. The ultrastructural analysis showed that abamectin caused malformations in the micropyle and in the external chorion surface of C. externa eggs. Based on the total effect (E), carbaryl, fenitrothion, and methidathion are harmful to C. externa; trichlorfon is harmless to third-instar larvae, while abamectin and sulphur are harmless and slightly harmful to third-instar larvae from Bento Gonçalves and Vacaria, respectively.
Abstract:
Structural equation models (SEM) are commonly used to analyze the relationship between variables, some of which may be latent, such as individual ``attitude'' to and ``behavior'' concerning specific issues. A number of difficulties arise when we want to compare a large number of groups, each with a large sample size, and the manifest variables are distinctly non-normally distributed. Using a specific data set, we evaluate the appropriateness of the following alternative SEM approaches: multiple group versus MIMIC models, continuous versus ordinal variables estimation methods, and normal theory versus non-normal estimation methods. The approaches are applied to the ISSP-1993 Environmental data set, with the purpose of exploring variation in the mean level of variables of ``attitude'' to and ``behavior'' concerning environmental issues and their mutual relationship across countries. Issues of both theoretical and practical relevance arise in the course of this application.
Abstract:
Prediction of species' distributions is central to diverse applications in ecology, evolution and conservation science. There is increasing electronic access to vast sets of occurrence records in museums and herbaria, yet little effective guidance on how best to use this information in the context of numerous approaches for modelling distributions. To meet this need, we compared 16 modelling methods over 226 species from 6 regions of the world, creating the most comprehensive set of model comparisons to date. We used presence-only data to fit models, and independent presence-absence data to evaluate the predictions. Along with well-established modelling methods such as generalised additive models, GARP, and BIOCLIM, we explored methods that either have been developed recently or have rarely been applied to modelling species' distributions. These include machine-learning methods and community models, both of which have features that may make them particularly well suited to noisy or sparse information, as is typical of species' occurrence data. Presence-only data were effective for modelling species' distributions for many species and regions. The novel methods consistently outperformed more established methods. The results of our analysis are promising for the use of data from museums and herbaria, especially as methods suited to the noise inherent in such data improve.
Abstract:
With the trend in molecular epidemiology towards both genome-wide association studies and complex modelling, the need for large sample sizes to detect small effects and to allow for the estimation of many parameters within a model continues to increase. Unfortunately, most methods of association analysis have been restricted to either a family-based or a case-control design, resulting in the lack of synthesis of data from multiple studies. Transmission disequilibrium-type methods for detecting linkage disequilibrium from family data were developed as an effective way of preventing the detection of association due to population stratification. Because these methods condition on parental genotype, however, they have precluded the joint analysis of family and case-control data, although methods for case-control data may not protect against population stratification and do not allow for familial correlations. We present here an extension of a family-based association analysis method for continuous traits that will simultaneously test for, and if necessary control for, population stratification. We further extend this method to analyse binary traits (and therefore family and case-control data together) and to accurately estimate genetic effects in the population, even when using an ascertained family sample. Finally, we present the power of this binary extension for both family-only and joint family and case-control data, and demonstrate the accuracy of the association parameter and variance components in an ascertained family sample.
Abstract:
This study was designed to check the equivalence of the ZKPQ-50-CC (Spanish and French versions) across Internet on-line (OL) and paper-and-pencil (PP) answer formats. Differences in means and deviations were significant for some scales, but effect sizes were minimal except for Sociability in the Spanish sample. Alpha reliabilities were also very similar in both versions, with no significant differences between formats. A robust factorial structure was found for the two formats, and the average congruency coefficients were 0.98. The goodness-of-fit indexes obtained by confirmatory factor analysis are very similar to those obtained in the ZKPQ-50-CC validation study, and they do not differ between the two formats. The multi-group analysis confirms the equivalence of the OL and PP formats in both countries. These results generally support the validity and reliability of the Internet as a method in investigations using the ZKPQ-50-CC.
Abstract:
The aim of this study was to determine the effect of using video analysis software on the interrater reliability of visual assessments of gait videos in children with cerebral palsy. Two clinicians viewed the same random selection of 20 sagittal and frontal video recordings of 12 children with cerebral palsy routinely acquired during outpatient rehabilitation clinics. Both observers rated these videos in a random sequence for each lower limb using the Observational Gait Scale, once with standard video software and once with video analysis software (Dartfish®), which can perform angle and timing measurements. The video analysis software improved interrater agreement, measured by weighted Cohen's kappas, for the total score (κ 0.778→0.809) and for all of the items that required angle and/or timing measurements (knee position mid-stance κ 0.344→0.591; hindfoot position mid-stance κ 0.160→0.346; foot contact mid-stance κ 0.700→0.854; timing of heel rise κ 0.769→0.835). The use of video analysis software is an efficient approach to improving the reliability of visual video assessments.
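For illustration (not the study's code), a linearly weighted Cohen's kappa of the sort reported above can be computed from two raters' ordinal scores as follows; the function and its arguments are my naming:

```python
import numpy as np

def weighted_kappa(r1, r2, n_cats):
    """Linearly weighted Cohen's kappa for integer ratings in 0..n_cats-1."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    O = np.zeros((n_cats, n_cats))
    for a, b in zip(r1, r2):
        O[a, b] += 1                       # observed joint rating counts
    O /= O.sum()
    p1 = O.sum(axis=1)                     # marginal of rater 1
    p2 = O.sum(axis=0)                     # marginal of rater 2
    E = np.outer(p1, p2)                   # expected table under independence
    i, j = np.indices((n_cats, n_cats))
    W = np.abs(i - j) / (n_cats - 1)       # linear disagreement weights
    return 1 - (W * O).sum() / (W * E).sum()
```

Unlike unweighted kappa, near-miss ratings (adjacent categories) are penalized less than large disagreements, which suits ordinal scales such as the Observational Gait Scale items.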
Abstract:
This research reviews the analysis and modeling of Swiss franc interest rate curves (IRC) using unsupervised (SOM, Gaussian mixtures) and supervised (MLP) machine learning algorithms. IRC are considered as objects embedded into different feature spaces: maturities; maturity-date; and the parameters of the Nelson-Siegel model (NSM). Analysis of the NSM parameters and their temporal and clustering structures helps in understanding the relevance of the model and its potential use for forecasting. A mapping of IRC in the maturity-date feature space is presented and analyzed for visualization and forecasting purposes.
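The NSM feature space mentioned here can be made concrete: a Nelson-Siegel curve summarizes a whole IRC with three betas (level, slope, curvature) and a decay parameter. A sketch under one common parameterization; the parameterization choice and the sample values are mine, not the paper's:

```python
import numpy as np

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel yield at maturity tau (years), decay scale lam."""
    x = tau / lam
    slope = (1 - np.exp(-x)) / x          # loads the short end
    return beta0 + beta1 * slope + beta2 * (slope - np.exp(-x))

# Assumed example: level 3%, short rate 1% (beta0 + beta1), mild hump
maturities = np.array([0.25, 1.0, 2.0, 5.0, 10.0, 30.0])
print(nelson_siegel(maturities, 0.03, -0.02, 0.01, 1.5).round(4))
```

Fitting these four parameters per date reduces each curve to a point in a 4-dimensional feature space, which is what makes the clustering and SOM analyses described above feasible.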
Abstract:
This paper focused on four alternatives for the analysis of experiments in square lattices as far as the estimation of variance components and some genetic parameters is concerned: 1) intra-block analysis with adjusted treatments and blocks within unadjusted replications; 2) lattice analysis as complete randomized blocks; 3) intra-block analysis with unadjusted treatments and blocks within adjusted replications; 4) lattice analysis as complete randomized blocks, utilizing the adjusted treatment means obtained from the analysis with recovery of inter-block information, with the mean effective variance of that same analysis as the mean square of the error. For the four alternatives, the estimators and estimates were obtained for the variance components and heritability coefficients. The classification of material was also studied. The present study suggests that, for each experiment and depending on the objectives of the analysis, one should observe which alternative is preferable, mainly in cases where a negative estimate is obtained for the variance component due to the effects of blocks within adjusted replications.
Abstract:
This research project investigated the use of image analysis to measure the air void parameters of concrete specimens produced under standard laboratory conditions. The results obtained from the image analysis technique were compared to results obtained from plastic air content tests, Danish air meter tests (also referred to as Air Void Analyzer tests), high-pressure air content tests on hardened concrete, and linear traverse tests (as per ASTM C-457). Hardened concrete specimens were sent to three different laboratories for the linear traverse tests. The samples that were circulated to the three labs consisted of specimens that needed different levels of surface preparation. The first set consisted of approximately 18 specimens that had been sectioned from a 4 in. by 4 in. by 18 in. (10 cm by 10 cm by 46 cm) beam using a saw equipped with a diamond blade. These specimens were subjected to the normal sample preparation techniques that were commonly employed by the three different labs (each lab practiced slightly different specimen preparation techniques). The second set of samples consisted of eight specimens that had been ground and polished at a single laboratory. The companion labs were only supposed to retouch the sample surfaces if they exhibited major flaws. In general, the study indicated that the image analysis test results for entrained air content exhibited good to strong correlation with the average values determined via the linear traverse technique. Specimens ground and polished in a single laboratory and then circulated to the other participating laboratories for the air content determinations exhibited the strongest correlation between the image analysis and linear traverse techniques (coefficient of determination, r-squared = 0.96, for n=8). Specimens ground and polished at each of the individual laboratories exhibited considerably more scatter (coefficient of determination, r-squared = 0.78, for n=16).
The image analysis technique tended to produce low estimates of the specific surface of the voids when compared to the results from the linear traverse method. This caused the image analysis spacing factor calculations to produce larger values than those obtained from the linear traverse tests. The image analysis spacing factors were still successful at distinguishing between the frost-prone test specimens and the other (more durable) test specimens that were studied in this research project.
Abstract:
BACKGROUND: The purpose of this study was to explore the potential use of image analysis of tissue section preparations as a predictive marker of early malignant changes during squamous cell (SC) carcinogenesis in the esophagus. Results of DNA ploidy quantification on formalin-fixed, paraffin-embedded tissue using two different techniques were compared: imprint-cytospin and 6 μm-thick tissue section preparations. METHODS: This retrospective study included 26 surgical specimens of squamous cell carcinoma (SCC) from patients who underwent surgery alone at the Department of Surgery of the CHUV Hospital in Lausanne between January 1993 and December 2000. We analyzed 53 samples of healthy tissue, 43 tumors, and 7 lymph node metastases. RESULTS: Diploid DNA histogram patterns were observed in all histologically healthy tissues, whether distant from or proximal to the lesion. Aneuploidy was observed in 34 (79%) of 43 carcinomas, namely 24 (75%) of 32 early squamous cell carcinomas and 10 (91%) of 11 advanced carcinomas. DNA content was similar across tumor stages, whether patients presented with single or multiple synchronous tumors. All lymph node metastases had a DNA content similar to that of their primary tumor. CONCLUSIONS: Early malignant changes in the esophagus are associated with alterations in DNA content, and aneuploidy tends to correlate with progression of invasive SCC. A very good correlation between imprint-cytospin and tissue section analysis was observed. Although each method showed advantages and disadvantages, tissue section preparation provided useful information on aberrant cell-cycle regulation and helped select the optimal treatment for the individual patient, along with consideration of other clinical parameters.