932 results for data reduction by factor analysis
Abstract:
The present investigation aimed to critically examine the factor structure and psychometric properties of the Anxiety Sensitivity Index - Revised (ASI-R). Confirmatory factor analysis using a clinical sample of adults (N = 248) revealed that the ASI-R could be improved substantially by removing 15 problematic items so as to capture the most robust dimensions of anxiety sensitivity. This modified scale was renamed the 21-item Anxiety Sensitivity Index (21-item ASI) and reanalyzed with a large sample of normative adults (N = 435), revealing configural and metric invariance across groups. Further comparisons with alternative models, using multi-sample analysis, indicated the 21-item ASI to be the best-fitting model for both groups. There was also evidence of internal consistency, test-retest reliability, and construct validity in both samples, suggesting that the 21-item ASI is a useful assessment instrument for investigating the construct of anxiety sensitivity in both clinical and normative populations.
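As a hedged illustration of the kind of confirmatory factor model evaluated above, the sketch below fits a lavaan-style measurement model with the Python semopy package and prints global fit statistics for model comparison. The factor and item names are placeholders and semopy is an assumption; the abstract does not name the software or the exact factor structure.

```python
# Minimal CFA sketch with semopy (hypothetical factor/item names; not the study's
# actual model specification or software).
import pandas as pd
import semopy

model_desc = """
factor1 =~ item1 + item2 + item3 + item4 + item5 + item6 + item7
factor2 =~ item8 + item9 + item10 + item11 + item12 + item13 + item14
factor3 =~ item15 + item16 + item17 + item18 + item19 + item20 + item21
"""

data = pd.read_csv("asi_items.csv")   # placeholder: 21 item-response columns

model = semopy.Model(model_desc)
model.fit(data)

# Chi-square, CFI, RMSEA, etc., for comparing this model against alternative structures.
print(semopy.calc_stats(model).T)
```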
Abstract:
The self-rating Dysexecutive Questionnaire (DEX-S) is a recently developed standardized self-report measure of behavioral difficulties associated with executive functioning, such as impulsivity, inhibition, control, monitoring, and planning. Few studies have examined its construct validity, particularly with regard to its potential wider use across a variety of clinical and nonclinical populations. This study examines the factor structure of the DEX-S using a sample of nonclinical (N = 293) and clinical (N = 49) participants. A series of factor analyses was evaluated to determine the best factor solution for the scale; this was found to be a 4-factor solution, with factors best described as inhibition, intention, social regulation, and abstract problem solving. The first 2 factors replicate factors from the 5-factor solutions found in previous studies that examined specific subpopulations. Although further research is needed to evaluate the factor structure within a range of subpopulations, this study supports the view that the DEX has a factor structure sufficient to justify its use in a wider context than only neurological or head-injured patients. Overall, a 4-factor solution is recommended as the most stable and parsimonious solution in this wider context.
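A minimal sketch of how competing exploratory factor solutions can be compared, assuming the factor_analyzer package and a placeholder CSV of DEX-S item scores; this is a generic illustration of choosing among 3-, 4- and 5-factor solutions, not the authors' exact procedure.

```python
# Generic comparison of exploratory factor solutions (placeholder data file).
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("dex_items.csv")          # one column per questionnaire item

# Eigenvalues from an unrotated solution help decide how many factors to retain.
fa = FactorAnalyzer(rotation=None)
fa.fit(items)
eigenvalues, _ = fa.get_eigenvalues()
print("Eigenvalues:", eigenvalues.round(2))

# Compare candidate solutions with an oblique rotation and inspect the loadings.
for k in (3, 4, 5):
    fa_k = FactorAnalyzer(n_factors=k, rotation="promax")
    fa_k.fit(items)
    loadings = pd.DataFrame(fa_k.loadings_, index=items.columns)
    print(f"\n{k}-factor solution loadings:\n", loadings.round(2))
```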
Abstract:
This paper considers a model-based approach to the clustering of tissue samples on the basis of a very large number of genes from microarray experiments. It is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. Frequently in practice, clinical data are also available on the cases from which the tissue samples were obtained. Here we investigate how to use the clinical data in conjunction with the microarray gene expression data to cluster the tissue samples. We propose two mixture model-based approaches in which the number of components in the mixture model corresponds to the number of clusters to be imposed on the tissue samples. One approach specifies the components of the mixture model to be the conditional distributions of the microarray data given the clinical data, with the mixing proportions also conditioned on the latter. The other takes the components of the mixture model to represent the joint distributions of the clinical and microarray data. The approaches are demonstrated on breast cancer data studied recently in van't Veer et al. (2002).
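The second of the two approaches, clustering on a joint clinical-plus-expression feature vector, can be sketched with an ordinary Gaussian mixture from scikit-learn. The data below are synthetic, and the PCA step for coping with many more genes than tissues is an added assumption rather than the authors' method.

```python
# Simplified sketch: mixture-model clustering of tissues on joint clinical + expression features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
expression = rng.normal(size=(80, 5000))   # 80 tissue samples x 5000 genes (synthetic)
clinical = rng.normal(size=(80, 4))        # 4 clinical covariates per sample (synthetic)

# Reduce the gene dimension before clustering, since the number of genes >> number of tissues.
expr_reduced = PCA(n_components=10).fit_transform(expression)

# One joint feature vector per tissue sample: clinical data plus reduced expression profile.
features = np.hstack([clinical, expr_reduced])

gmm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0)
cluster_labels = gmm.fit_predict(features)
print(cluster_labels)
```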
Abstract:
Purpose – To investigate the impact of performance measurement on the strategic planning process. Design/methodology/approach – A large-scale online survey was conducted with Warwick Business School alumni. The questionnaire was based on the Strategic Development Process model by Dyson and was designed to map current strategic planning practice and to determine the factors that most influence the effectiveness of the process. All questions were closed-ended and used a seven-point Likert scale. The independent variables were grouped into four meaningful factors by factor analysis (Varimax, coefficient of rotation 0.4), and the resulting factors were used to build stepwise regression models for the five assessments of the strategic planning process. Regression models were developed for the responses as a whole, comparing SMEs with large organizations and comparing organizations operating in slowly and rapidly changing environments. Findings – The results indicate that performance measurement is one of the four main factors characterising current strategic planning practice. The research shows that complexity arising from organizational size and from the rate of change in the sector creates variation in the impact of performance measurement on strategic planning: large organizations and organizations operating in rapidly changing environments make greater use of performance measurement. Research limitations/implications – The research is based on subjective data, so the conclusions concern the success/effectiveness of the strategic planning process itself rather than the impact of its elements on organizational performance. Practical implications – The research raises a series of questions about the use and potential impact of performance measurement, especially in the categories of organizations that are not significantly influenced by its utilisation, and contributes to the field of performance measurement impact. Originality/value – The research fills a gap in the literature concerning the lack of large-scale surveys on strategic development processes and performance measurement, and provides empirical evidence on the impact of performance measurement on the strategic planning process.
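A hedged sketch of the data-reduction step described above: varimax-rotated factor analysis of the survey items with loadings below 0.4 suppressed, followed by a regression of one effectiveness assessment on the factor scores. File and column names are placeholders, and a plain statsmodels OLS stands in for the stepwise procedure.

```python
# Varimax factor analysis followed by regression on factor scores (placeholder data).
import pandas as pd
import statsmodels.api as sm
from factor_analyzer import FactorAnalyzer

survey = pd.read_csv("survey.csv")                      # hypothetical survey responses
independents = survey.drop(columns=["effectiveness"])   # hypothetical outcome column

fa = FactorAnalyzer(n_factors=4, rotation="varimax")
fa.fit(independents)

# Suppress loadings below 0.4 when interpreting and naming the four factors.
loadings = pd.DataFrame(fa.loadings_, index=independents.columns)
print(loadings.where(loadings.abs() >= 0.4).round(2))

# Regress an assessment of planning effectiveness on the four factor scores.
scores = pd.DataFrame(fa.transform(independents),
                      columns=[f"factor_{i+1}" for i in range(4)])
ols = sm.OLS(survey["effectiveness"], sm.add_constant(scores)).fit()
print(ols.summary())
```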
Abstract:
Pulse compression techniques originated in radar. The present work is concerned with the utilization of these techniques in general, and the linear FM (LFM) technique in particular, for communications. It introduces these techniques from an optimum-communications viewpoint and outlines their capabilities. It also considers the candidacy of the class of LFM signals for digital data transmission and the LFM spectrum. Work related to the utilization of LFM signals for digital data transmission has been mostly experimental and mainly concerned with employing two rectangular LFM pulses (or chirps) with reversed slopes to convey the bits 1 and 0 in an incoherent mode. No systematic theory for LFM signal design and system performance has been available. Accordingly, the present work establishes such a theory, taking into account coherent and noncoherent single-link and multiplex signalling modes. Some new results concerning the slope-reversal chirp pair are obtained. The LFM technique combines the typical capabilities of pulse compression with relative ease of implementation. However, these merits are often hampered by the difficulty of handling the LFM spectrum, which cannot in general be expressed in closed form. The common practice is to obtain a plot of this spectrum with a digital computer for every single set of LFM pulse parameters. Moreover, reported work has justifiably been confined to the spectrum of an ideally rectangular chirp pulse with no rise or fall times. Accordingly, the present work comprises a systematic study of the LFM spectrum which takes the rise and fall times of the chirp pulse into account and can accommodate any LFM pulse with any parameters. It formulates rather simple and accurate criteria for predicting the behaviour of this spectrum in the different frequency regions. These criteria should facilitate the handling of the LFM technique in theory and practice.
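The numerical handling of the LFM spectrum mentioned above can be illustrated with a short sketch: generate a chirp pulse with finite rise and fall times and compute its spectrum with an FFT. The pulse parameters below are arbitrary example values, not ones taken from the thesis.

```python
# Numerical spectrum of a linear-FM (chirp) pulse with a trapezoidal envelope.
import numpy as np
from scipy.signal import chirp

fs = 1e6           # sample rate (Hz)
T = 1e-3           # pulse duration (s)
t = np.arange(0, T, 1 / fs)

# LFM pulse sweeping 50 kHz to 150 kHz; rise and fall times are 10% of the pulse each.
pulse = chirp(t, f0=50e3, t1=T, f1=150e3)
rise = int(0.1 * len(t))
envelope = np.ones_like(t)
envelope[:rise] = np.linspace(0, 1, rise)
envelope[-rise:] = np.linspace(1, 0, rise)
pulse *= envelope

spectrum = np.fft.rfft(pulse)
freqs = np.fft.rfftfreq(len(pulse), 1 / fs)
power_db = 20 * np.log10(np.abs(spectrum) / np.abs(spectrum).max() + 1e-12)
print("spectral peak at", freqs[np.argmax(power_db)], "Hz")
```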
Abstract:
Spread spectrum systems make use of radio frequency bandwidths which far exceed the minimum bandwidth necessary to transmit the basic message information. These systems are designed to provide satisfactory communication of the message information under difficult transmission conditions. Frequency-hopped multilevel frequency shift keying (FH-MFSK) is one of the many techniques used in spread spectrum systems; it is a combination of frequency hopping and time hopping. In this system many users share a common frequency band using code-division multiplexing. Each user is assigned an address and the message is modulated onto the address. The receiver, knowing the address, decodes the received signal and extracts the message. This technique has been suggested for digital mobile telephony. This thesis is concerned with an investigation of the possibility of utilising FH-MFSK for data transmission corrupted by additive white Gaussian noise (AWGN). Work related to FH-MFSK has so far been mostly confined to establishing its validity, and its performance in the presence of AWGN has not been reported before. An experimental system was therefore constructed which combined hardware and software and operated under the supervision of a microprocessor system. The experimental system was used to develop an error-rate model for the system under investigation. The performance of FH-MFSK for data transmission was established in the presence of AWGN and with deleted- and delayed-sample effects. Its capability for multiuser applications was determined theoretically. The results show that FH-MFSK is a suitable technique for data transmission in the presence of AWGN.
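As a toy illustration of MFSK detection in AWGN (a software Monte Carlo sketch, not the thesis's microprocessor-supervised experimental system), the code below transmits one of M orthogonal tones per symbol and lets a noncoherent receiver pick the tone bin with the largest envelope.

```python
# Monte Carlo estimate of the noncoherent MFSK symbol error rate in AWGN.
import numpy as np

rng = np.random.default_rng(1)
M = 8                      # number of tones (3 bits per symbol)
n_symbols = 100_000
es_n0_db = 10.0            # symbol energy to noise spectral density ratio, in dB
es_n0 = 10 ** (es_n0_db / 10)

symbols = rng.integers(0, M, n_symbols)

# Matched-filter outputs per tone bin: unit-variance circular complex Gaussian noise,
# plus a signal of amplitude sqrt(Es/N0) with unknown phase in the transmitted bin.
received = (rng.normal(size=(n_symbols, M))
            + 1j * rng.normal(size=(n_symbols, M))) / np.sqrt(2)
received[np.arange(n_symbols), symbols] += (
    np.sqrt(es_n0) * np.exp(1j * rng.uniform(0, 2 * np.pi, n_symbols)))

# Noncoherent detection: choose the bin with the largest envelope.
detected = np.abs(received).argmax(axis=1)
print("symbol error rate:", np.mean(detected != symbols))
```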
Abstract:
Purpose: To develop a questionnaire that subjectively assesses near visual function in patients with 'accommodating' intraocular lenses (IOLs). Methods: A literature search of existing vision-related quality-of-life instruments identified all questions relating to near visual tasks. Questions were combined if repeated in multiple instruments. Further relevant questions were added and item interpretation was confirmed through multidisciplinary consultation and focus groups. A preliminary 19-item questionnaire was presented to 22 subjects at their 4-week visit after first-eye phacoemulsification with 'accommodative' IOL implantation, and again 6 and 12 weeks post-operatively. Rasch analysis, frequency of endorsement, and tests of normality (skew and kurtosis) were used to reduce the instrument. Cronbach's alpha and test-retest reliability (intraclass correlation coefficient, ICC) were determined for the final questionnaire. Construct validity was assessed by Pearson's product moment correlation (PPMC) of questionnaire scores with reading acuity (RA) and with critical print size (CPS) reading speed. Criterion validity was assessed by receiver operating characteristic (ROC) curve analysis, and the dimensionality of the questionnaire was assessed by factor analysis. Results: Rasch analysis eliminated nine items due to poor fit statistics. The final items showed good separation (2.55), internal consistency (Cronbach's α = 0.97) and test-retest reliability (ICC = 0.66). PPMC of questionnaire scores with RA was 0.33, and with CPS reading speed was 0.08. The area under the ROC curve was 0.88, and factor analysis revealed one principal factor. Conclusion: The pilot data indicate that the questionnaire is an internally consistent, reliable and valid instrument that could be useful for assessing near visual function in patients with 'accommodating' IOLs. The questionnaire will now be expanded to include other types of presbyopic correction. © 2007 British Contact Lens Association.
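One of the reliability statistics reported above, Cronbach's alpha, is simple enough to compute directly from an items-by-respondents score matrix. The sketch below uses synthetic responses rather than the questionnaire's actual items.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = respondents, columns = questionnaire items."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
ability = rng.normal(size=(22, 1))                          # 22 respondents, shared trait
responses = ability + rng.normal(scale=0.5, size=(22, 10))  # 10 correlated items
print(round(cronbach_alpha(responses), 2))
```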
Abstract:
Experiments combining different groups or factors are a powerful method of investigation in applied microbiology. ANOVA enables not only the effects of individual factors to be estimated but also their interactions, information which cannot be obtained readily when factors are investigated separately. In addition, combining different treatments or factors in a single experiment is more efficient and often reduces the number of replications required to estimate treatment effects adequately. Because of the treatment combinations used in a factorial experiment, the degrees of freedom (DF) of the error term in the ANOVA are a more important indicator of the 'power' of the experiment than simply the number of replicates. A good method is to ensure, where possible, that sufficient replication is present to achieve 15 DF for each error term of the ANOVA. Finally, in a factorial experiment, it is important to define the design of the experiment in detail because this determines the appropriate type of ANOVA. We will discuss some of the common variations of factorial ANOVA in future Statnotes. If there is doubt about which ANOVA to use, the researcher should seek advice from a statistician with experience of research in applied microbiology.
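A hedged illustration of a two-factor (factorial) ANOVA and its error degrees of freedom, using statsmodels on a synthetic 2 x 3 design with 4 replicates per combination, so the error term has 24 - 6 = 18 DF, above the suggested minimum of about 15.

```python
# Two-way factorial ANOVA on synthetic microbiology-style data (illustrative only).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
# 2 incubation temperatures x 3 growth media, 4 replicates per combination.
design = [(temp, medium, rep)
          for temp in ("25C", "37C")
          for medium in ("A", "B", "C")
          for rep in range(4)]
df = pd.DataFrame(design, columns=["temperature", "medium", "replicate"])
df["count"] = rng.normal(loc=100, scale=10, size=len(df))

model = ols("count ~ C(temperature) * C(medium)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))       # main effects and interaction
print("error DF:", model.df_resid)           # residual (error) degrees of freedom = 18
```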
Abstract:
The design and implementation of data bases involve, firstly, the formulation of a conceptual data model by systematic analysis of the structure and information requirements of the organisation for which the system is being designed; secondly, the logical mapping of this conceptual model onto the data structure of the target data base management system (DBMS); and thirdly, the physical mapping of this structured model onto the storage structures of the target DBMS. The accuracy of both the logical and the physical mapping determines the performance of the resulting system. This thesis describes research which develops software tools to facilitate the implementation of data bases. A conceptual model describing the information structure of a hospital is derived using the Entity-Relationship (E-R) approach, and this model forms the basis for the mapping onto the logical model. Rules are derived for automatically mapping the conceptual model onto relational and CODASYL types of data structures, and further algorithms are developed for partly automating the implementation of these models on the INGRES, MIMER and VAX-11 DBMSs.
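A toy sketch of the kind of automatic mapping rule described above: each entity becomes a relation, and each one-to-many relationship posts the parent's key into the child relation as a foreign key. The entities and the key-naming convention are illustrative assumptions, not the thesis's hospital model or its actual algorithms.

```python
# Toy E-R to relational mapping: entities become tables, 1:N relationships become foreign keys.
entities = {
    "ward":    ["ward_id", "name", "beds"],
    "patient": ["patient_id", "surname", "date_of_birth"],
}
one_to_many = [("ward", "patient")]   # one ward accommodates many patients

def to_relational(entities, one_to_many):
    tables = {name: list(attrs) for name, attrs in entities.items()}
    for parent, child in one_to_many:
        # Post the parent's key into the child relation as a foreign key
        # (assumes the parent's key is named "<parent>_id").
        tables[child].append(f"{parent}_id")
    statements = []
    for name, attrs in tables.items():
        cols = ",\n  ".join(attrs)
        statements.append(f"CREATE TABLE {name} (\n  {cols}\n);")
    return "\n\n".join(statements)

print(to_relational(entities, one_to_many))
```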
Abstract:
Information extraction or knowledge discovery from large data sets should be linked to a data aggregation process. Data aggregation can produce a new representation of a given data set with a decreased number of objects. A deterministic approach to separable data aggregation yields a smaller number of objects without mixing objects from different categories; a statistical approach is less restrictive and allows almost-separable data aggregation with a low level of mixing of objects from different categories. Layers of formal neurons can be designed for the purpose of data aggregation in both the deterministic and the statistical case. The proposed design method is based on minimization of convex and piecewise-linear (CPL) criterion functions.
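A generic sketch of minimizing one convex, piecewise-linear (perceptron-type) criterion by subgradient descent to obtain a single separating hyperplane; the paper's layer-design procedure for formal neurons is more elaborate than this, so the code only illustrates the CPL-minimization idea on synthetic two-category data.

```python
# Subgradient descent on a perceptron-type CPL criterion: J(w) = sum over
# misclassified objects of -y_i * (w . x_i), which is convex and piecewise linear.
import numpy as np

rng = np.random.default_rng(0)
x_pos = rng.normal(loc=[2, 2], size=(50, 2))       # category +1 (synthetic)
x_neg = rng.normal(loc=[-2, -2], size=(50, 2))     # category -1 (synthetic)
x = np.vstack([x_pos, x_neg])
x = np.hstack([x, np.ones((len(x), 1))])           # append a bias component
y = np.concatenate([np.ones(50), -np.ones(50)])

w = np.zeros(3)
lr = 0.05
for _ in range(200):
    margins = y * (x @ w)
    misclassified = margins <= 0
    if not misclassified.any():
        break                                      # separable aggregation achieved
    # Subgradient of J(w) over the currently misclassified objects.
    grad = -(y[misclassified, None] * x[misclassified]).sum(axis=0)
    w -= lr * grad

print("weights:", w.round(2), "remaining errors:", int(misclassified.sum()))
```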
Abstract:
2010 Mathematics Subject Classification: 94A17.