37 results for discriminant analysis and cluster analysis
Abstract:
On 14 January 2001, the four Cluster spacecraft passed through the northern magnetospheric mantle in close conjunction with the EISCAT Svalbard Radar (ESR) and approached the post-noon dayside magnetopause over Greenland between 13:00 and 14:00 UT. During that interval, a sudden reorganisation of the high-latitude dayside convection pattern occurred after 13:20 UT, most likely caused by a direction change of the solar wind magnetic field. The result was an eastward and poleward directed flow-channel, as monitored by the SuperDARN radar network and also by arrays of ground-based magnetometers in Canada, Greenland and Scandinavia. After an initial eastward and later poleward expansion of the flow-channel between 13:20 and 13:40 UT, the four Cluster spacecraft, and the field-line footprints covered by the eastward-looking scan cycle of the Sondre Stromfjord incoherent scatter radar, were engulfed by cusp-like precipitation with transient magnetic and electric field signatures. In addition, the EISCAT Svalbard Radar detected strong transient effects of the convection reorganisation, a poleward-moving precipitation, and a fast ion flow-channel in association with the auroral structures that suddenly formed to the west and north of the radar. From a detailed analysis of the coordinated Cluster and ground-based data, it was found that this extraordinary transient convection pattern had indeed moved the cusp precipitation from its former pre-noon position into the late post-noon sector, allowing for the first and quite unexpected encounter of the cusp by the Cluster spacecraft. Our findings illustrate the large amplitude of cusp dynamics even in response to moderate solar wind forcing. The global ground-based data prove to be an invaluable tool for monitoring the dynamics and width of the affected magnetospheric regions.
Abstract:
The four Cluster spacecraft offer a unique opportunity to study structure and dynamics in the magnetosphere and we discuss four general ways in which ground-based remote-sensing observations of the ionosphere can be used to support the in-situ measurements. The ionosphere over the Svalbard islands will be studied in particular detail, not only by the ESR and EISCAT incoherent scatter radars, but also by optical instruments, magnetometers, imaging riometers and the CUTLASS bistatic HF radar. We present an on-line procedure to plan coordinated measurements by the Cluster spacecraft with these combined ground-based systems. We illustrate the philosophy of the method, using two important examples of the many possible configurations between the Cluster satellites and the ground-based instruments.
Abstract:
Much prior research on the structure and performance of UK real estate portfolios has relied on aggregated measures for sector and region. For these groupings to have validity, the performance of individual properties within each group should be similar. This paper analyses a sample of 1,200 properties using multiple discriminant analysis and cluster analysis techniques. It is shown that conventional property type and spatial classifications do not capture the variation in return behaviour at the individual building level. The major feature is heterogeneity, but there may be distinctions between growth and income properties and between single- and multi-let properties that could help refine portfolio structures.
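A minimal sketch of the kind of analysis this abstract describes, using scikit-learn's LinearDiscriminantAnalysis and KMeans on synthetic data; the feature set, group labels and cluster count are illustrative assumptions, not the paper's actual sample or method:

    # Illustrative sketch only: discriminant analysis and clustering of
    # property-level return features (hypothetical data, not the paper's sample).
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    n_properties, n_features = 1200, 10               # e.g. annual total returns
    X = rng.normal(size=(n_properties, n_features))
    sector = rng.integers(0, 3, size=n_properties)    # conventional sector labels

    # How well do conventional sector labels separate return behaviour?
    lda = LinearDiscriminantAnalysis()
    print("LDA accuracy on sector labels:", lda.fit(X, sector).score(X, sector))

    # Data-driven alternative: cluster properties directly on return behaviour.
    clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
    print("Cluster sizes:", np.bincount(clusters))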
Abstract:
Gene chips are finding extensive use in animal and plant science. Microarrays are generally of two kinds, cDNA or oligonucleotide; cDNA microarrays were developed at Stanford University, whereas oligonucleotide arrays were developed by Affymetrix. Spotting cDNA or oligonucleotides on a glass slide makes it possible to compare the gene expression levels of treated and control samples by labelling mRNA with green (Cy3) and red (Cy5) dyes. The hybridized gene chip emits fluorescence whose intensity and colour can be measured. RNA labelling can be done directly or indirectly; the indirect method uses aminoallyl-modified dUTP instead of a pre-labelled nucleotide. Hybridization of a gene chip is generally carried out in the minimum possible volume and, to ensure heteroduplex formation, about ten-fold more DNA is spotted on the slide than is present in the solution. Confocal or semi-confocal laser technologies coupled with a CCD camera are used for image acquisition. For standardization, housekeeping genes are used, or cDNAs that are not present in the treated or control samples are spotted on the gene chip. Moreover, statistical (image analysis) and cluster analysis software have been developed at Stanford University. Gene-chip technology has many applications, such as expression analysis, gene expression signatures (molecular phenotypes) and promoter regulatory element co-expression.
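A hedged sketch of the log-ratio normalisation and cluster analysis workflow the abstract alludes to, on synthetic two-channel intensities; the median normalisation and cluster count are assumptions, not the actual procedure of the Stanford software:

    # Hedged sketch: log-ratio (Cy5/Cy3) computation and hierarchical clustering
    # of gene expression profiles on synthetic intensities, not real chip data.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(1)
    n_genes, n_arrays = 500, 6
    cy3 = rng.lognormal(mean=8, sigma=1, size=(n_genes, n_arrays))  # control channel
    cy5 = rng.lognormal(mean=8, sigma=1, size=(n_genes, n_arrays))  # treated channel

    log_ratio = np.log2(cy5 / cy3)
    # Simple per-array median centring as a stand-in for normalisation.
    log_ratio -= np.median(log_ratio, axis=0, keepdims=True)

    # Average-linkage hierarchical clustering of genes by expression profile.
    tree = linkage(log_ratio, method="average", metric="correlation")
    gene_clusters = fcluster(tree, t=5, criterion="maxclust")
    print("Genes per cluster:", np.bincount(gene_clusters)[1:])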
Abstract:
The application of metabolomics in multi-centre studies is increasing. The aim of the present study was to assess the effects of geographical location on the metabolic profiles of individuals with the metabolic syndrome. Blood and urine samples were collected from 219 adults from seven European centres participating in the LIPGENE project (Diet, genomics and the metabolic syndrome: an integrated nutrition, agro-food, social and economic analysis). Nutrient intakes, BMI, waist:hip ratio, blood pressure, and plasma glucose, insulin and blood lipid levels were assessed. Plasma fatty acid levels and urine were assessed using a metabolomic technique. The separation of three European geographical groups (NW, northwest; NE, northeast; SW, southwest) was identified using partial least-squares discriminant analysis models for urine (R²X = 0.33, Q² = 0.39) and plasma fatty acid (R²X = 0.32, Q² = 0.60) data. The NW group was characterised by higher levels of urinary hippurate and N-methylnicotinate. The NE group was characterised by higher levels of urinary creatine and citrate and plasma EPA (20:5n-3). The SW group was characterised by higher levels of urinary trimethylamine oxide and lower levels of plasma EPA. The indicators of metabolic health appeared to be consistent across the groups. The SW group had higher intakes of total fat and MUFA compared with both the NW and NE groups (P ≤ 0.001). The NE group had higher intakes of fibre and n-3 and n-6 fatty acids compared with both the NW and SW groups (all P < 0.001). It is likely that differences in dietary intakes contributed to the separation of the three groups. Evaluation of geographical factors including diet should be considered in the interpretation of metabolomic data from multi-centre studies.
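A hedged sketch of a PLS-DA model of the kind reported here, implemented as PLS regression against one-hot group membership with scikit-learn; the data are synthetic stand-ins, not LIPGENE measurements, and the two-component choice is an assumption:

    # Hedged sketch of PLS-DA: PLS regression against one-hot group membership,
    # as commonly used in metabolomics. Synthetic data, not LIPGENE measurements.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(2)
    n_subjects, n_metabolites = 219, 40
    X = rng.normal(size=(n_subjects, n_metabolites))          # e.g. urinary metabolite signals
    groups = rng.choice(["NW", "NE", "SW"], size=n_subjects)  # geographical groups

    labels, true = np.unique(groups, return_inverse=True)
    Y = np.eye(len(labels))[true]                             # one-hot class membership

    plsda = PLSRegression(n_components=2).fit(X, Y)
    pred = plsda.predict(X).argmax(axis=1)                    # class with largest response
    print("In-sample classification rate:", (pred == true).mean())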
Abstract:
Objective. Assimilating the diagnosis of complete spinal cord injury (SCI) takes time and is not easy, as patients know that there is no ‘cure’ at the present time. Brain–computer interfaces (BCIs) can facilitate daily living. However, inter-subject variability demands measurements with potential user groups and an understanding of how they differ from the healthy users with whom BCIs are more commonly tested. Thus, a three-class motor imagery (MI) screening (left hand, right hand, feet) was performed with a group of 10 able-bodied and 16 complete spinal-cord-injured people (paraplegics, tetraplegics) with the objective of determining what differences were present between the user groups and how they would impact upon the ability of these user groups to interact with a BCI. Approach. Electrophysiological differences between patient groups and healthy users are measured in terms of sensorimotor rhythm deflections from baseline during MI, electroencephalogram microstate scalp maps and strengths of inter-channel phase synchronization. Additionally, using a common spatial pattern algorithm and a linear discriminant analysis classifier, the classification accuracy was calculated and compared between groups. Main results. It is seen that both patient groups (tetraplegic and paraplegic) have some significant differences in event-related desynchronization strengths, exhibit significant increases in synchronization and reach significantly lower accuracies (mean (M) = 66.1%) than the group of healthy subjects (M = 85.1%). Significance. The results demonstrate significant differences in electrophysiological correlates of motor control between healthy individuals and those individuals who stand to benefit most from BCI technology (individuals with SCI). They highlight the difficulty in directly translating results from healthy subjects to participants with SCI and the challenges that, therefore, arise in providing BCIs to such individuals.
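A minimal sketch of a common spatial pattern plus linear discriminant analysis pipeline of the kind described, assuming MNE-Python's CSP implementation and synthetic epoched EEG; none of this reproduces the study's actual data, preprocessing or parameter choices:

    # Hedged sketch: common spatial patterns + LDA on synthetic epoched EEG.
    # Uses mne.decoding.CSP as a stand-in for the study's own implementation.
    import numpy as np
    from mne.decoding import CSP
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(3)
    n_epochs, n_channels, n_times = 120, 22, 250
    epochs = rng.normal(size=(n_epochs, n_channels, n_times))  # band-passed EEG epochs
    labels = rng.integers(0, 2, size=n_epochs)                  # e.g. left vs right hand MI

    clf = make_pipeline(CSP(n_components=4, log=True), LinearDiscriminantAnalysis())
    scores = cross_val_score(clf, epochs, labels, cv=5)
    print("Mean cross-validated accuracy:", scores.mean())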
Abstract:
Effective public policy to mitigate climate change footprints should build on data-driven analysis of firm-level strategies. This article’s conceptual approach augments the resource-based view (RBV) of the firm and identifies investments in four firm-level resource domains (Governance, Information management, Systems, and Technology [GISTe]) to develop capabilities in climate change impact mitigation. The authors denote the resulting framework as the GISTe model, which frames their analysis and public policy recommendations. This research uses the 2008 Carbon Disclosure Project (CDP) database, with high-quality information on firm-level climate change strategies for 552 companies from North America and Europe. In contrast to the widely accepted myth that European firms are performing better than North American ones, the authors find a different result. Many firms, whether European or North American, do not just “talk” about climate change impact mitigation, but actually do “walk the talk.” European firms appear to be better than their North American counterparts in “walk I,” denoting attention to governance, information management, and systems. But when it comes down to “walk II,” meaning actual Technology-related investments, North American firms’ performance is equal or superior to that of the European companies. The authors formulate public policy recommendations to accelerate firm-level, sector-level, and cluster-level implementation of climate change strategies.
An LDA and probability-based classifier for the diagnosis of Alzheimer's Disease from structural MRI
Abstract:
In this paper a custom classification algorithm based on linear discriminant analysis and probability-based weights is implemented and applied to hippocampus measurements from structural magnetic resonance images of healthy subjects and Alzheimer’s Disease sufferers, with the aim of diagnosing them as accurately as possible. The classifier works by classifying each hippocampal volume measurement as healthy-control-sized or Alzheimer’s-Disease-sized; these new features are then weighted and used to classify the subject as a healthy control or as suffering from Alzheimer’s Disease. The preliminary results reach an accuracy of 85.8%, which is similar to state-of-the-art methods such as a Naive Bayes classifier and a Support Vector Machine. An advantage of the method proposed in this paper over the aforementioned state-of-the-art classifiers is the descriptive ability of the classifications it produces. The descriptive model can be of great help in aiding a doctor in the diagnosis of Alzheimer’s Disease, or even in furthering the understanding of how Alzheimer’s Disease affects the hippocampus.
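A hedged sketch of the general idea (per-measurement LDA decisions combined through accuracy-based weights) on synthetic data; the feature count, weighting rule and decision threshold are illustrative assumptions, not the authors' exact algorithm:

    # Hedged sketch: classify each hippocampal measurement separately with LDA,
    # then combine the per-feature votes with accuracy-based weights.
    # Synthetic data, not the paper's measurements or method.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(4)
    n_subjects, n_measures = 200, 6                    # e.g. hippocampal volume measures
    X = rng.normal(size=(n_subjects, n_measures))
    y = rng.integers(0, 2, size=n_subjects)            # 0 = healthy control, 1 = AD

    votes, weights = [], []
    for j in range(n_measures):
        lda = LinearDiscriminantAnalysis().fit(X[:, [j]], y)
        votes.append(lda.predict(X[:, [j]]))           # per-measurement label
        weights.append(lda.score(X[:, [j]], y))        # accuracy-based weight

    votes = np.array(votes, dtype=float)               # shape (n_measures, n_subjects)
    weights = np.array(weights)[:, None]
    prediction = (votes * weights).sum(axis=0) / weights.sum() > 0.5
    print("In-sample accuracy:", (prediction == y).mean())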
Abstract:
Thirty-one new sodium heterosulfamates, RNHSO3Na, where the R portion contains mainly thiazole, benzothiazole, thiadiazole and pyridine ring structures, have been synthesized and their taste portfolios have been assessed. A database of 132 heterosulfamates (both open-chain and cyclic) has been formed by combining these new compounds with an existing set of 101 heterosulfamates which were previously synthesized and for which taste data are available. Simple descriptors have been obtained using (i) measurements with Corey-Pauling-Koltun (CPK) space-filling models, giving x, y and z dimensions and a volume V_CPK, (ii) calculated first-order molecular connectivities (¹χᵛ) and (iii) calculated Spartan program parameters, giving the HOMO and LUMO energies, the solvation energy E_solv and V_SPARTAN. The techniques of linear (LDA) and quadratic (QDA) discriminant analysis and Tree analysis have then been employed to develop structure-taste relationships (SARs) that classify the sweet (S) and non-sweet (N) compounds into separate categories. In the LDA analysis 70% of the compounds were correctly classified (this compares with 65% when the smaller data set of 101 compounds was used) and in the QDA analysis 68% were correctly classified (compared to 80% previously). The Tree analysis correctly classified 81% (compared to 86% previously). An alternative Tree analysis, derived using the Cerius2 program and a set of physicochemical descriptors, correctly classified only 54% of the compounds.
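A minimal sketch comparing LDA, QDA and a classification tree on descriptor data with scikit-learn; the descriptors here are random stand-ins for x, y, z, V_CPK, ¹χᵛ, HOMO/LUMO and E_solv, and the tree depth is an assumption:

    # Hedged sketch: LDA, QDA and a classification tree on molecular descriptors.
    # Synthetic descriptors and taste labels, not the paper's 132-compound database.
    import numpy as np
    from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                               QuadraticDiscriminantAnalysis)
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(5)
    n_compounds, n_descriptors = 132, 8
    X = rng.normal(size=(n_compounds, n_descriptors))
    taste = rng.integers(0, 2, size=n_compounds)        # 0 = non-sweet (N), 1 = sweet (S)

    for name, model in [("LDA", LinearDiscriminantAnalysis()),
                        ("QDA", QuadraticDiscriminantAnalysis()),
                        ("Tree", DecisionTreeClassifier(max_depth=4, random_state=0))]:
        acc = model.fit(X, taste).score(X, taste)
        print(f"{name}: {acc:.0%} correctly classified (in-sample)")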
Abstract:
This work compares and contrasts results of classifying time-domain ECG signals with pathological conditions taken from the MIT-BIH arrhythmia database. Linear discriminant analysis and a multi-layer perceptron were used as classifiers. The neural network was trained by two different methods, namely back-propagation and a genetic algorithm. Converting the time-domain signal into the wavelet domain reduced the dimensionality of the problem at least 10-fold. This was achieved using wavelets from the db6 family as well as adaptive wavelets generated using two different strategies. The wavelet transforms used in this study were limited to two decomposition levels. A neural network with evolved weights proved to be the best classifier, with a maximum of 99.6% accuracy when optimised wavelet-transform ECG data was presented to its input and 95.9% accuracy when the signals presented to its input were decomposed using db6 wavelets. The linear discriminant analysis achieved a maximum classification accuracy of 95.7% when presented with optimised and 95.5% with db6 wavelet coefficients. It is shown that the much simpler signal representation of a few wavelet coefficients obtained through an optimised discrete wavelet transform considerably facilitates the task of classifying non-stationary, time-variant signals. In addition, the results indicate that wavelet optimisation may improve the classification ability of a neural network. (c) 2005 Elsevier B.V. All rights reserved.
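A hedged sketch of a two-level db6 discrete wavelet transform used for dimensionality reduction ahead of LDA classification, assuming the PyWavelets package and synthetic signals rather than MIT-BIH records:

    # Hedged sketch: two-level db6 wavelet decomposition of ECG segments followed
    # by linear discriminant analysis. Synthetic signals, not MIT-BIH data.
    import numpy as np
    import pywt
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(6)
    n_beats, n_samples = 300, 256
    signals = rng.normal(size=(n_beats, n_samples))     # time-domain ECG segments
    labels = rng.integers(0, 2, size=n_beats)           # e.g. normal vs pathological beat

    def wavelet_features(x, wavelet="db6", level=2):
        # Keep only the level-2 approximation coefficients (coarse representation).
        coeffs = pywt.wavedec(x, wavelet, level=level)
        return coeffs[0]

    X = np.array([wavelet_features(s) for s in signals])
    print("Dimensionality reduced from", n_samples, "to", X.shape[1])

    scores = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5)
    print("Mean cross-validated accuracy:", scores.mean())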
Abstract:
Accurate single-trial P300 classification lends itself to fast and accurate control of Brain-Computer Interfaces (BCIs). Highly accurate classification of single-trial P300 ERPs is achieved by characterizing the EEG via corresponding stationary and time-varying Wackermann parameters. Subsets of maximally discriminating parameters are then selected using the Network Clustering feature selection algorithm and classified with Naive Bayes and Linear Discriminant Analysis classifiers. The method is assessed on two different data sets from BCI competitions and is shown to produce accuracies of between approximately 70% and 85%. This is promising for the use of Wackermann parameters as features in the classification of single-trial ERP responses.
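A minimal sketch of the final classification stage only (Naive Bayes versus LDA on an already-selected feature subset); the features are random stand-ins for Wackermann parameters, and the Network Clustering selection step is not reproduced:

    # Hedged sketch: Naive Bayes and LDA applied to an already-selected feature
    # subset (stand-ins for Wackermann parameters). Synthetic trials, not BCI
    # competition data.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)
    n_trials, n_selected_features = 400, 12
    X = rng.normal(size=(n_trials, n_selected_features))   # selected feature subset
    y = rng.integers(0, 2, size=n_trials)                  # target vs non-target trial

    for name, clf in [("Naive Bayes", GaussianNB()),
                      ("LDA", LinearDiscriminantAnalysis())]:
        print(name, "cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())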
Abstract:
Using NCANDS data on US child maltreatment reports for 2009, logistic regression, probit analysis, discriminant analysis and an artificial neural network are used to determine the factors that explain the decision to place a child in out-of-home care. A new model is developed for 2009, and a previous study using 2005 data is replicated. While there are many small differences, the four estimation techniques give broadly the same results, demonstrating the robustness of the findings. Similarly, apart from age and sexual abuse, the 2005 and 2009 results are roughly similar. For 2009, child characteristics (particularly child emotional problems) are more important than the nature of the abuse and the situation of the household, while caregiver characteristics are the least important. All these models have low explanatory power.
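A hedged sketch comparing the four estimation techniques on a synthetic binary placement outcome; probit is fitted via statsmodels, the rest via scikit-learn, with hypothetical factor counts and none of the NCANDS data:

    # Hedged sketch: logistic regression, probit, discriminant analysis and a
    # neural network compared on a synthetic binary outcome (not NCANDS data).
    import numpy as np
    import statsmodels.api as sm
    from sklearn.linear_model import LogisticRegression
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(8)
    n_cases, n_factors = 1000, 6
    X = rng.normal(size=(n_cases, n_factors))       # child, caregiver and abuse factors
    y = rng.integers(0, 2, size=n_cases)            # 1 = placed in out-of-home care

    probit = sm.Probit(y, sm.add_constant(X)).fit(disp=False)
    print("Probit in-sample accuracy:", ((probit.predict() > 0.5) == y).mean())

    for name, model in [("Logistic regression", LogisticRegression(max_iter=1000)),
                        ("Discriminant analysis", LinearDiscriminantAnalysis()),
                        ("Neural network", MLPClassifier(hidden_layer_sizes=(10,),
                                                         max_iter=2000, random_state=0))]:
        print(name, "in-sample accuracy:", model.fit(X, y).score(X, y))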