919 results for feature analysis
Abstract:
Naming impairments in aphasia are typically targeted using semantic and/or phonologically based tasks. However, it is not known whether these treatments have different neural mechanisms. Eight participants with aphasia received twelve treatment sessions using an alternating treatment design, with fMRI scans pre- and post-treatment. Half the sessions employed Phonological Components Analysis (PCA), and half employed Semantic Feature Analysis (SFA). Pre-treatment activity in the left caudate correlated with greater immediate treatment success for items treated with SFA, whereas recruitment of the left supramarginal gyrus and right precuneus post-treatment correlated with greater immediate treatment success for items treated with PCA. The results support previous studies that have found greater treatment outcome to be associated with activity in predominantly left-hemisphere regions, and suggest that different mechanisms may be engaged depending on the type of treatment employed.
Abstract:
Objective Death certificates provide an invaluable source for cancer mortality statistics; however, this value can only be realised if accurate, quantitative data can be extracted from certificates – an aim hampered by both the volume and variable nature of certificates written in natural language. This paper proposes an automatic classification system for identifying cancer-related causes of death from death certificates. Methods Detailed features, including terms, n-grams and SNOMED CT concepts, were extracted from a collection of 447,336 death certificates. These features were used to train Support Vector Machine classifiers (one classifier for each cancer type). The classifiers were deployed in a cascaded architecture: the first level identified the presence of cancer (i.e., binary cancer/no-cancer) and the second level identified the type of cancer (according to the ICD-10 classification system). A held-out test set was used to evaluate the effectiveness of the classifiers according to precision, recall and F-measure. In addition, detailed feature analysis was performed to reveal the characteristics of a successful cancer classification model. Results The system was highly effective at identifying cancer as the underlying cause of death (F-measure 0.94). The system was also effective at determining the type of cancer for common cancers (F-measure 0.7). Rare cancers, for which there was little training data, were difficult to classify accurately (F-measure 0.12). Factors influencing performance were the amount of training data and certain ambiguous cancers (e.g., those in the stomach region). The feature analysis revealed that a combination of features was important for cancer type classification, with SNOMED CT concept and oncology-specific morphology features proving the most valuable. Conclusion The system proposed in this study provides automatic identification and characterisation of cancers from large collections of free-text death certificates.
This allows organisations such as Cancer Registries to monitor and report on cancer mortality in a timely and accurate manner. In addition, the methods and findings are generally applicable beyond cancer classification and to other sources of medical text besides death certificates.
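The cascaded architecture described above can be sketched in a few lines. This is a hedged illustration, not the paper's system: the toy certificates, the ICD-10 codes, and the TF-IDF term/bigram features below are invented stand-ins for the paper's term, n-gram and SNOMED CT features.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy free-text causes of death (invented for illustration).
certificates = [
    "metastatic carcinoma of the lung",
    "malignant neoplasm of stomach",
    "acute myocardial infarction",
    "chronic obstructive pulmonary disease",
    "adenocarcinoma of the colon",
    "cerebrovascular accident",
]
is_cancer = [1, 1, 0, 0, 1, 0]
cancer_type = ["C34", "C16", None, None, "C18", None]  # ICD-10 codes

# Level 1: binary cancer / no-cancer over term and bigram features.
level1 = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
level1.fit(certificates, is_cancer)

# Level 2: cancer type, trained only on the cancer-positive certificates.
cancer_docs = [d for d, y in zip(certificates, is_cancer) if y]
cancer_labels = [t for t in cancer_type if t is not None]
level2 = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
level2.fit(cancer_docs, cancer_labels)

def classify(text):
    """Cascade: only certificates flagged as cancer reach level 2."""
    if level1.predict([text])[0] == 1:
        return level2.predict([text])[0]
    return "no-cancer"
```

The cascade keeps the two decisions separate, so the type classifier never has to learn what non-cancer deaths look like.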
Abstract:
By establishing a dynamic model of a space electrodynamic tether system, the oscillation and deformation characteristics of a current-carrying conductive tether moving at high speed across the Earth's magnetic field lines were studied. Through numerical computation, the effects of different tether lengths and different main-to-sub-satellite mass ratios on the dynamic characteristics of the tether are presented, and some regular patterns and results were obtained.
Abstract:
The recent release of the domestic dog genome provides us with an ideal opportunity to investigate dog-specific genomic features. In this study, we performed a systematic analysis of CpG islands (CGIs), which are often considered gene markers, in the dog
Abstract:
Understanding the guiding principles of sensory coding strategies is a main goal in computational neuroscience. Among others, the principles of predictive coding and slowness appear to capture aspects of sensory processing. Predictive coding postulates that sensory systems are adapted to the structure of their input signals such that information about future inputs is encoded. Slow feature analysis (SFA) is a method for extracting slowly varying components from quickly varying input signals, thereby learning temporally invariant features. Here, we use the information bottleneck method to state an information-theoretic objective function for temporally local predictive coding. We then show that the linear case of SFA can be interpreted as a variant of predictive coding that maximizes the mutual information between the current output of the system and the input signal in the next time step. This demonstrates that the slowness principle and predictive coding are intimately related.
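The linear SFA case discussed above can be sketched directly: whiten the input, then find the unit-variance direction whose temporal derivative has minimal variance. This is a minimal numpy illustration with an invented slow/fast source mixture, not the paper's formal information-bottleneck derivation.

```python
import numpy as np

def linear_sfa(x):
    """Linear slow feature analysis on a (T x N) signal: return
    unit-variance projections ordered from slowest to fastest."""
    x = x - x.mean(axis=0)
    # Whiten the input so all directions have unit variance.
    d, E = np.linalg.eigh(np.cov(x, rowvar=False))
    z = x @ (E / np.sqrt(d))
    # Minimise the variance of the temporal derivative in whitened space.
    dz = np.diff(z, axis=0)
    _, V = np.linalg.eigh(np.cov(dz, rowvar=False))
    return z @ V  # eigh is ascending, so column 0 is the slowest feature

t = np.linspace(0, 2 * np.pi, 1000)
slow, fast = np.sin(t), np.sin(37 * t)
# Observe two linear mixtures of a slow and a fast source.
x = np.column_stack([slow + 0.5 * fast, 0.5 * slow - fast])
y = linear_sfa(x)[:, 0]  # should recover the slow source (up to sign)
```

The slowest output is exactly the direction a temporally local predictive coder would favour, since it carries the most information about the next time step.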
Abstract:
Our nervous system can efficiently recognize objects in spite of changes in contextual variables such as perspective or lighting conditions. Several lines of research have proposed that this ability for invariant recognition is learned by exploiting the fact that object identities typically vary more slowly in time than contextual variables or noise. Here, we study the question of how this "temporal stability" or "slowness" approach can be implemented within the limits of biologically realistic spike-based learning rules. We first show that slow feature analysis, an algorithm that is based on slowness, can be implemented in linear continuous model neurons by means of a modified Hebbian learning rule. This approach provides a link to the trace rule, which is another implementation of slowness learning. Then, we show analytically that for linear Poisson neurons, slowness learning can be implemented by spike-timing-dependent plasticity (STDP) with a specific learning window. By studying the learning dynamics of STDP, we show that for functional interpretations of STDP, it is not the learning window alone that is relevant but rather the convolution of the learning window with the postsynaptic potential. We then derive STDP learning windows that implement slow feature analysis and the "trace rule." The resulting learning windows are compatible with physiological data both in shape and timescale. Moreover, our analysis shows that the learning window can be split into two functionally different components that are sensitive to reversible and irreversible aspects of the input statistics, respectively. The theory indicates that irreversible input statistics are not in favor of stable weight distributions but may generate oscillatory weight dynamics. Our analysis offers a novel interpretation for the functional role of STDP in physiological neurons.
Abstract:
We present a model for the self-organized formation of place cells, head-direction cells, and spatial-view cells in the hippocampal formation based on unsupervised learning on quasi-natural visual stimuli. The model comprises a hierarchy of Slow Feature Analysis (SFA) nodes, which were recently shown to reproduce many properties of complex cells in the early visual system. The system extracts a distributed grid-like representation of position and orientation, which is transcoded into a localized place-field, head-direction, or view representation by sparse coding. The type of cells that develops depends solely on the relevant input statistics, i.e., the movement pattern of the simulated animal. The numerical simulations are complemented by a mathematical analysis that allows us to accurately predict the output of the top SFA layer.
Abstract:
© 2015 John P. Cunningham and Zoubin Ghahramani. Linear dimensionality reduction methods are a cornerstone of analyzing high dimensional data, due to their simple geometric interpretations and typically attractive computational properties. These methods capture many data features of interest, such as covariance, dynamical structure, correlation between data sets, input-output relationships, and margin between data classes. Methods have been developed with a variety of names and motivations in many fields, and perhaps as a result the connections between all these methods have not been highlighted. Here we survey methods from this disparate literature as optimization programs over matrix manifolds. We discuss principal component analysis, factor analysis, linear multidimensional scaling, Fisher's linear discriminant analysis, canonical correlations analysis, maximum autocorrelation factors, slow feature analysis, sufficient dimensionality reduction, undercomplete independent component analysis, linear regression, distance metric learning, and more. This optimization framework gives insight to some rarely discussed shortcomings of well-known methods, such as the suboptimality of certain eigenvector solutions. Modern techniques for optimization over matrix manifolds enable a generic linear dimensionality reduction solver, which accepts as input data and an objective to be optimized, and returns, as output, an optimal low-dimensional projection of the data. This simple optimization framework further allows straightforward generalizations and novel variants of classical methods, which we demonstrate here by creating an orthogonal-projection canonical correlations analysis. More broadly, this survey and generic solver suggest that linear dimensionality reduction can move toward becoming a blackbox, objective-agnostic numerical technology.
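The survey's optimization-program view can be illustrated with the simplest member of the family: PCA maximizes trace(M^T C M) over orthonormal matrices M, and the leading eigenvectors attain the optimum. A small numpy sketch with invented synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic data with anisotropic covariance (dimensions invented).
X = rng.normal(size=(500, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.1])
C = np.cov(X, rowvar=False)

def captured_variance(M):
    """The PCA program's objective: trace(M^T C M) for orthonormal M."""
    return np.trace(M.T @ C @ M)

# Closed-form optimum on the manifold of orthonormal 5x2 matrices:
# the two leading eigenvectors of C.
w, V = np.linalg.eigh(C)          # eigenvalues in ascending order
M_pca = V[:, ::-1][:, :2]
v_pca = captured_variance(M_pca)

# Any other point on that manifold captures no more variance.
Q, _ = np.linalg.qr(rng.normal(size=(5, 2)))
v_rand = captured_variance(Q)
```

Swapping the objective (autocorrelation, class separation, slowness, etc.) while keeping the same manifold constraint is exactly the unification the survey describes.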
Abstract:
The Yan'an region is one of the areas of the Loess Plateau most severely affected by soil erosion. Using daily rainfall data from the Yan'an meteorological station for 1951-2005, rainfall erosivity in the Yan'an region was estimated with a daily-rainfall erosivity model. The results show that rainfall erosivity in this region is concentrated in June-September, which accounts for 85.6% of the annual total. The mean annual rainfall erosivity is 1765.73 MJ·mm/(hm²·h). Over the 55 years, the interannual variability of rainfall erosivity was moderate and the overall trend remained stable, with a coefficient of variation Cv of 0.41 and a trend coefficient r of -0.071.
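Daily-rainfall erosivity models of this kind typically sum a power of daily rainfall over erosive days. The sketch below shows the general form only; the coefficients and the 12 mm erosive-day threshold are illustrative assumptions, not the study's calibrated values.

```python
def daily_erosivity(daily_rain_mm, alpha=0.184, beta=1.642, threshold=12.0):
    """Illustrative daily-rainfall erosivity: R = alpha * sum(P_d ** beta)
    over days whose rainfall P_d reaches the erosive threshold (mm).
    alpha, beta and threshold are assumed values, not the paper's."""
    return alpha * sum(p ** beta for p in daily_rain_mm if p >= threshold)

# Usage: summing per-day contributions over a period yields the period's
# erosivity; aggregating by month reproduces a seasonal concentration
# like the June-September share reported above.
june_like = daily_erosivity([25.0, 40.0, 8.0, 60.0])
winter_like = daily_erosivity([3.0, 7.0, 11.0])
```

Because only days above the threshold contribute, dry months contribute essentially nothing, which is why erosivity concentrates in the rainy season.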
Abstract:
Petroleum exploration has shown that karst-fissure reservoirs in carbonate rocks are significant for discovering large-scale oil and gas fields. They comprise four reservoir types: karst-cave, karst-crack, crack-cave and fracture-pore-cave. Each reservoir space and each reservoir bed shows different reservoir heterogeneity and small-scale pore-crack-cave features. Fracture-cave reservoirs in carbonate rocks are characterized by multiple types and long oil-bearing intervals, and reservoir geometry is controlled by irregular pore-crack-cave systems. The degree of fracture and karst-cave development is the key factor in hydrocarbon enrichment, high productivity and stable production. However, most carbonate formations are deeply buried and the signal-to-noise ratio of their seismic reflections is very low, which is why fracture-cave reservoirs are difficult to predict effectively. After surveying a large body of previous research, the author applied integrated geophysical reservoir prediction methods at both macroscopic and microscopic scales, in light of the reservoir-cap conditions, the geophysical and geological features, and the difficulty of prediction in carbonate rocks, guided by new ideas from stratigraphy, sedimentology, reservoir geology and karst geology, with geophysical technology as the key technique. For the macroscopic studies, starting from the three factors controlling reservoir distribution (sedimentary facies, karst and fracturing) and comprehensively using geological, geophysical, drilling and well-log data, the reservoir features and internal karst structure were studied from single-well and multi-well data.
By establishing carbonate depositional, karst and fracture models, the macro-scale distribution of carbonate reservoirs was established through coherence analysis, seismic reflection feature analysis and palaeotectonic analysis. For the microscopic studies, starting from analysis of the geophysical response features of fracture and karst-cave models under the guidance of the macroscopic geological model of the carbonate reservoir, carbonate reservoir prediction methods were developed by comprehensively using seismic multi-attribute cross-plot analysis, log-constrained seismic inversion, seismic discontinuity analysis, seismic spectral attenuation gradient, beaded (moniliform) reflection feature analysis and multi-parameter karst reservoir appraisal. Applying this integrated geophysical prediction of carbonate reservoirs, the author successfully delineated the favourable reservoir distribution in the Ordovician of the Katake block 1 in the central Tarim Basin. The distribution of fracture-cave reservoirs was mapped out, and the exploration direction and favourable targets were demonstrated. The result is a set of carbonate reservoir prediction methods for the central Tarim Basin, and a sound basic technique for predicting the Ordovician carbonate reservoirs there. Exploration drilling has confirmed that the favourable regions of beaded-reflection fracture, pore-cave and cave-fracture reservoirs in the Lower-Middle Ordovician coincide with the regions of hydrocarbon shows, indicating that the reservoir prediction methods described in this study of the Ordovician carbonate formation are practically feasible.
Abstract:
The time-courses of orthographic, phonological and semantic processing of Chinese characters were investigated systematically with multi-channel event-related potentials (ERPs). New evidence concerning whether phonology or semantics is processed first and whether phonology mediates semantic access was obtained, supporting and developing the new concept of repeated, overlapping, and alternating processing in Chinese character recognition. Statistical parametric mapping based on physiological double dissociation was developed. Seven experiments were conducted: 1) deciding which type of structure, left-right or non-left-right, the character displayed on the screen had; 2) deciding whether or not there was a vowel /a/ in the pronunciation of the character; 3) deciding which classification, natural object or non-natural object, the character belonged to; 4) deciding which color, red or green, the character was; 5) deciding which color, red or green, the non-character was; 6) fixating on the non-character; 7) fixating on the crosslet. The main results are: 1. N240 and P240: N240 and P240, localized at occipital and prefrontal sites respectively, were found in experiments 1, 2, 3, and 4, but not in experiments 5, 6, or 7. The only difference between the former four and the latter three experiments was their stimuli: the former used true Chinese characters while the latter used non-characters or the crosslet. Thus Chinese characters were related to these two components, which reflect processing unique to Chinese characters peaking at about 240 msec. 2. Basic visual feature analysis: In comparison with experiment 7, there was a common cognitive process in experiments 1, 2, 4, and 6 - basic visual feature analysis. The corresponding ERP amplitude increase at most sites started from about 60 msec. 3. Orthography: The ERP differences located at the main processing area of orthography (occipital) between experiments 1, 2, 3, 4 and experiment 5 started from about 130 msec.
This was the category difference between Chinese characters and non-characters, which revealed that orthographic processing started from about 130 msec. The ERP differences between experiments 1, 2, 3 and experiment 4 occurred in 210-250, 230-240, and 190-250 msec respectively, suggesting that orthography was processed again. These were the differences between language and non-language tasks, which revealed a higher-level processing than the one at about 130 msec mentioned above. All these phenomena imply that orthographic processing is not finished in a single pass; the second pass is not a simple repetition, but a higher-level one. 4. Phonology: The ERPs of experiment 2 (phonological task) were significantly stronger than those of experiment 3 (semantic task) at the main processing areas of phonology (temporal and left prefrontal) starting from about 270 msec, which revealed phonological processing. The ERP differences at left frontal sites between experiment 2 and experiment 1 (orthographic task) started from about 250 msec. When comparing the phonological task with experiment 4 (character color decision), the ERP differences at left temporal and prefrontal sites started from about 220 msec. Thus phonological processing may start before 220 msec. 5. Semantics: The ERPs of experiment 3 (semantic task) were significantly stronger than those of experiment 2 (phonological task) at the main processing areas of semantics (parietal and occipital) starting from about 290 msec, which revealed semantic processing. The ERP differences at these areas between experiment 3 and experiment 4 (character color decision) started from about 270 msec. The ERP differences between experiment 3 and experiment 1 (orthographic task) started from about 260 msec. Thus semantic processing may start before 260 msec. 6.
Overlapping of phonological and semantic processing: From about 270 to 350 msec, the ERPs of experiment 2 (phonological task) were significantly larger than those of experiment 3 (semantic task) at the main processing areas of phonology (temporal and left prefrontal); while from about 290-360 msec, the ERPs of experiment 3 were significantly larger than those of experiment 2 at the main processing areas of semantics (frontal, parietal, and occipital). Thus phonological processing may start earlier than semantic processing and their time-courses may alternate, which reveals parallel processing. 7. Semantic processing needs partial phonology: When experiment 1 (orthographic task) served as the baseline, the ERPs of experiments 2 and 3 (phonological and semantic tasks) increased significantly at the main processing areas of phonology (left temporal and frontal) starting from about 250 msec. The ERPs of experiment 3, in addition, increased significantly at the main processing areas of semantics (parietal and frontal) starting from about 260 msec. When experiment 4 (character color decision) served as the baseline, the ERPs of experiments 2 and 3 increased significantly at phonological areas (left temporal and frontal) starting from about 220 msec. The ERPs of experiment 3, similarly, increased significantly at semantic areas (parietal and frontal) starting from about 270 msec. Hence, before semantic processing, a part of the phonological information may be required. The following conclusions can be drawn from the above results under the present experimental conditions: 1. Basic visual feature processing starts from about 60 msec; 2. Orthographic processing starts from about 130 msec and repeats at about 240 msec; the second pass is not a simple repetition of the first, but a higher-level processing; 3. Phonological processing begins earlier than semantic processing, and their time-courses overlap; 4. Before semantic processing, a part of the phonological information may be required; 5.
Repeated, overlapping, and alternating orthographic, phonological and semantic processing of Chinese characters may all exist in cognition. Thus the question of whether phonology mediates semantic access is not a simple but a complicated issue.
Abstract:
A novel hybrid data-driven approach is developed for forecasting power system parameters, with the goal of increasing the efficiency of short-term forecasting studies for non-stationary time series. The proposed approach is based on mode decomposition and a feature analysis of initial retrospective data using the Hilbert-Huang transform and machine learning algorithms. The random forests and gradient boosting trees learning techniques were examined. The decision tree techniques were used to rank the importance of the variables employed in the forecasting models, with the Mean Decrease Gini index employed as the impurity function. The resulting hybrid forecasting models employ the radial basis function neural network and support vector regression. Apart from the introduction and references, the paper is organized as follows. The second section presents the background and a review of several approaches for short-term forecasting of power system parameters. In the third section, a hybrid machine learning-based algorithm using the Hilbert-Huang transform is developed for short-term forecasting of power system parameters. The fourth section describes the decision tree learning algorithms used for ranking variable importance. Finally, experimental results are presented for the following electric power problems: active power flow forecasting, electricity price forecasting, and wind speed and direction forecasting.
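The variable-ranking step can be sketched with scikit-learn, whose `feature_importances_` attribute implements the mean decrease in impurity (Mean Decrease Gini for classifiers). The toy data below are invented stand-ins for the decomposed retrospective modes; they are not the paper's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# One informative column plus three pure-noise columns (all invented).
n = 500
informative = rng.normal(size=n)
X = np.column_stack([informative, rng.normal(size=(n, 3))])
y = (informative > 0).astype(int)

# feature_importances_ aggregates each variable's impurity decrease
# (Gini) across all trees - the Mean Decrease Gini ranking criterion.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
ranking = np.argsort(forest.feature_importances_)[::-1]
```

Variables at the top of `ranking` would then be kept as inputs to the downstream RBF-network or SVR forecasting model.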
Abstract:
This paper presents a novel method that leverages reasoning capabilities in a computer vision system dedicated to human action recognition. The proposed methodology is decomposed into two stages. First, a machine learning based algorithm - known as bag of words - gives a first estimate of action classification from video sequences, by performing an image feature analysis. Those results are afterward passed to a common-sense reasoning system, which analyses, selects and corrects the initial estimation yielded by the machine learning algorithm. This second stage resorts to the knowledge implicit in the rationality that motivates human behaviour. Experiments are performed in realistic conditions, where poor recognition rates by the machine learning techniques are significantly improved by the second stage in which common-sense knowledge and reasoning capabilities have been leveraged. This demonstrates the value of integrating common-sense capabilities into a computer vision pipeline. © 2012 Elsevier B.V. All rights reserved.
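The first-stage bag-of-words step can be sketched as k-means vocabulary construction plus histogram encoding. This is a generic illustration, not the paper's pipeline: the random vectors below stand in for real local video descriptors (e.g. gradient-based features around interest points).

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in local descriptors for two "videos" with different statistics.
desc_a = rng.normal(loc=0.0, size=(200, 8))
desc_b = rng.normal(loc=3.0, size=(200, 8))

# Build the visual vocabulary by clustering all training descriptors.
vocab = KMeans(n_clusters=4, n_init=10, random_state=0).fit(
    np.vstack([desc_a, desc_b]))

def bag_of_words(descriptors):
    """Histogram of visual-word assignments, normalised to sum to 1."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / hist.sum()

h_a, h_b = bag_of_words(desc_a), bag_of_words(desc_b)
```

A classifier over these histograms gives the first action estimate, which the common-sense reasoning stage then analyses and, where needed, corrects.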
Abstract:
Aphasia is an acquired language disorder causing communication problems that can affect comprehension and/or expression. When aphasia follows a stroke, the communication deficits initially recede, but they can remain severe for some individuals and are considered chronic after one year. Aphasia can also be observed in primary progressive aphasia, a degenerative disease affecting only language in its first years. A growing number of studies have examined the impact of therapy in chronic aphasia and have demonstrated language improvements after several years. The left hemisphere appears to play a crucial role and is associated with better language improvements, but the understanding of the mechanisms of brain plasticity is still embryonic. Moreover, the efficacy of therapy in primary progressive aphasia has been little studied. Using functional magnetic resonance imaging, the goal of the present studies is to examine the mechanisms of brain plasticity induced by Semantic Feature Analysis therapy in ten people with chronic aphasia and one person with primary progressive aphasia. The results suggest that the brain can reorganize itself several years after a brain lesion, as well as in a degenerative disease. At the individual level, better language improvement is associated with recruitment of the left hemisphere and with a concentration of activations. Group analyses highlight recruitment of the left inferior parietal lobule, while activation of the left precentral gyrus predicts improvement following therapy. Furthermore, functional connectivity analyses identified, for the first time, the default-mode network in aphasia.
Following therapy, the integration of this well-known network is comparable to that of controls, and correlation analyses suggest that the integration of the default-mode network has predictive value for improvement. The results of these studies thus support the idea that the left hemisphere plays a predominant role in recovery from aphasia and provide evidence of the neuroplasticity induced by a specific language therapy in aphasia. Moreover, the identification of key areas and networks will guide future research aimed at eventually maximizing recovery from aphasia and better predicting prognosis.
Abstract:
In recent years there has been an apparent shift in research from content-based image retrieval (CBIR) to automatic image annotation, in order to bridge the gap between low-level features and high-level semantics of images. Automatic Image Annotation (AIA) techniques facilitate the extraction of high-level semantic concepts from images by machine learning techniques. Many AIA techniques use feature analysis as the first step to identify the objects in the image. However, high-dimensional image features degrade the performance of the system. This paper describes and evaluates an automatic image annotation framework which uses SURF descriptors to select the right number of features, and the right features, for annotation. The proposed framework uses a hybrid approach in which k-means clustering is used in the training phase and fuzzy k-NN classification in the annotation phase. The performance of the system is evaluated using standard metrics.
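The annotation-phase classifier can be sketched as a generic Keller-style fuzzy k-NN: class memberships are weighted by inverse distance to the k nearest neighbours, so each label comes with a graded confidence. The 2-D feature vectors and labels below are invented stand-ins for SURF-derived cluster features, not the paper's configuration.

```python
import numpy as np

def fuzzy_knn(train_X, train_y, query, k=3, m=2.0):
    """Fuzzy k-NN: class memberships weighted by inverse distance to the
    k nearest training points; m is the fuzzifier (m=2 gives 1/d^2)."""
    d = np.linalg.norm(train_X - query, axis=1)
    nn = np.argsort(d)[:k]
    w = 1.0 / (d[nn] ** (2.0 / (m - 1.0)) + 1e-12)
    classes = np.unique(train_y)
    member = np.array([w[train_y[nn] == c].sum() for c in classes])
    member = member / member.sum()
    return classes[np.argmax(member)], member

rng = np.random.default_rng(0)
# Two invented annotation classes in a toy 2-D feature space.
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(4, 0.5, (20, 2))])
y = np.array(["sky"] * 20 + ["grass"] * 20)
label, memberships = fuzzy_knn(X, y, np.array([4.1, 3.9]))
```

The graded membership vector is what makes the fuzzy variant useful for annotation: low-confidence labels can be dropped or ranked rather than asserted outright.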