70 results for Milling machines


Relevance:

10.00%

Publisher:

Abstract:

Recently, kernel-based machine learning methods have gained great popularity in many data analysis and data mining fields: pattern recognition, biocomputing, speech and vision, engineering, remote sensing, etc. The paper describes the use of kernel methods to approach the processing of large datasets from environmental monitoring networks. Several typical problems of the environmental sciences and their solutions provided by kernel-based methods are considered: classification of categorical data (soil-type classification), mapping of continuous environmental and pollution information (pollution of soil by radionuclides), and mapping with auxiliary information (climatic data from the Aral Sea region). Promising developments, such as automatic emergency hot-spot detection and monitoring-network optimization, are discussed as well.
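As an illustration of the kernel approach to categorical environmental data, here is a minimal sketch of soil-type classification with an RBF-kernel SVM; the two-dimensional Gaussian clusters are synthetic stand-ins, not the monitoring-network data discussed above:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# two synthetic spatial clusters standing in for two soil types
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(3.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# RBF kernel: decision boundary is nonlinear in the input coordinates
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
```

In practice the inputs would be spatial coordinates plus auxiliary covariates, and `C`/`gamma` would be tuned by cross-validation.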

Relevance:

10.00%

Publisher:

Abstract:

To be diagnostically useful, structural MRI must reliably distinguish Alzheimer's disease (AD) from normal ageing in individual scans. Recent advances in statistical learning theory have led to the application of support vector machines to MRI for the detection of a variety of disease states. The aims of this study were to assess how successfully support vector machines assigned individual diagnoses and to determine whether data sets combined from multiple scanners and different centres could be used to obtain effective classification of scans. We used linear support vector machines to classify the grey matter segment of T1-weighted MR scans from pathologically proven AD patients and cognitively normal elderly individuals obtained from two centres with different scanning equipment. Because the clinical diagnosis of mild AD is difficult, we also tested the ability of support vector machines to differentiate control scans from those of patients without post-mortem confirmation. Finally, we sought to use these methods to differentiate scans of patients suffering from AD from those of patients with frontotemporal lobar degeneration. Up to 96% of pathologically verified AD patients were correctly classified using whole brain images. Data from different centres were successfully combined, achieving results comparable to those of the separate analyses. Importantly, data from one centre could be used to train a support vector machine to accurately differentiate AD and normal ageing scans obtained from another centre with different subjects and different scanner equipment. Patients with mild, clinically probable AD and age- and sex-matched controls were correctly separated in 89% of cases, which is compatible with published diagnosis rates in the best clinical centres. This method correctly assigned 89% of patients with a post-mortem confirmed diagnosis of either AD or frontotemporal lobar degeneration to their respective group.
Our study leads to three conclusions. Firstly, support vector machines successfully separate patients with AD from healthy ageing subjects. Secondly, they perform well in the differential diagnosis of two different forms of dementia. Thirdly, the method is robust and can be generalized across different centres. This suggests an important role for computer-based diagnostic image analysis in clinical practice.
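The whole-brain setting above is a typical p >> n problem: far more voxel features than scans. A minimal sketch with a linear SVM on synthetic data, assuming a hypothetical "atrophy" signal in a handful of features (no clinical data are reproduced here):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n, p = 40, 500                      # few scans, many voxel features (p >> n)
X = rng.normal(0.0, 1.0, (n, p))
y = np.repeat([0, 1], n // 2)       # 0 = control, 1 = AD (synthetic labels)
X[y == 1, :10] += 1.5               # hypothetical "atrophy" signal in 10 features

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25,
                                      stratify=y, random_state=0)
clf = SVC(kernel="linear", C=1.0).fit(Xtr, ytr)
train_acc = clf.score(Xtr, ytr)
test_acc = clf.score(Xte, yte)
```

With so few samples and so many features, the linear SVM separates the training set almost perfectly; the honest performance figure is the held-out accuracy, which is why the study's cross-centre validation matters.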

Relevance:

10.00%

Publisher:

Abstract:

In recent years there has been an explosive growth in the development of adaptive and data-driven methods. One efficient data-driven approach is based on statistical learning theory (SLT) (Vapnik 1998). The theory is based on the Structural Risk Minimisation (SRM) principle and has a solid statistical background. When applying SRM we try not only to reduce the training error, i.e. to fit the available data with a model, but also to reduce the complexity of the model and the generalisation error. Many nonlinear learning procedures recently developed in neural networks and statistics can be understood and interpreted in terms of the structural risk minimisation inductive principle. A recent methodology based on SRM is called Support Vector Machines (SVM). At present SLT is still under intensive development and SVM are finding new areas of application (www.kernel-machines.org). SVM develop robust and nonlinear data models with excellent generalisation abilities, which is very important both for monitoring and forecasting. SVM are extremely good when the input space is high dimensional and the training data set is not big enough to develop a corresponding nonlinear model. Moreover, SVM use only support vectors to derive decision boundaries. This opens a way to sampling optimisation, estimation of noise in data, quantification of data redundancy, etc. A presentation of SVM for spatially distributed data is given in (Kanevski and Maignan 2004).
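The SRM trade-off described above is usually stated through Vapnik's VC generalisation bound, quoted here from standard statistical learning theory references (not from this abstract): with probability at least 1 − η,

```latex
R(\alpha) \;\le\; R_{\mathrm{emp}}(\alpha)
  \;+\; \sqrt{\frac{h\left(\ln\frac{2\ell}{h} + 1\right) - \ln\frac{\eta}{4}}{\ell}}
```

where R(α) is the expected risk, R_emp(α) the empirical (training) risk, h the VC dimension of the model class, and ℓ the number of training samples. SRM minimises the sum of both terms, rather than the training error alone, which is exactly the "fit the data but control complexity" principle the paragraph describes.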

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we propose two active learning algorithms for the semiautomatic definition of training samples in remote sensing image classification. Based on predefined heuristics, the classifier ranks the unlabeled pixels and automatically chooses those considered the most valuable for its improvement. Once the pixels have been selected, the analyst labels them manually and the process is iterated. Starting with a small and nonoptimal training set, the model itself builds the optimal set of samples which minimizes the classification error. We have applied the proposed algorithms to a variety of remote sensing data, including very high resolution and hyperspectral images, using support vector machines. Experimental results confirm the consistency of the methods. The required number of training samples can be reduced to 10% using the proposed methods, while reaching the same level of accuracy as larger data sets. A comparison with a state-of-the-art active learning method, margin sampling, is provided, highlighting the advantages of the proposed methods. The effect of spatial resolution and class separability on the quality of the pixel selection is also discussed.
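The query loop described above can be sketched as follows. This is a generic margin-sampling implementation on synthetic two-class data, given here as the baseline the paper compares against, not the authors' own heuristics:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# synthetic two-class "pixels" (illustrative stand-in for remote-sensing data)
X = np.vstack([rng.normal(-1.5, 1.0, (200, 2)), rng.normal(1.5, 1.0, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

labeled = list(range(5)) + list(range(200, 205))  # small initial training set
pool = [i for i in range(len(X)) if i not in labeled]

for _ in range(5):                                # five query iterations
    clf = SVC(kernel="rbf", gamma="scale").fit(X[labeled], y[labeled])
    # margin sampling: query the unlabeled pixel closest to the decision boundary
    margins = np.abs(clf.decision_function(X[pool]))
    q = pool.pop(int(np.argmin(margins)))
    labeled.append(q)                             # the "analyst" supplies y[q]

final = SVC(kernel="rbf", gamma="scale").fit(X[labeled], y[labeled])
```

The point of active learning is that the fifteen labels used here are chosen where the classifier is least certain, rather than at random.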

Relevance:

10.00%

Publisher:

Abstract:

The development of statistical models for forensic fingerprint identification purposes has been the subject of increasing research attention in recent years. This can be seen partly as a response to a number of commentators who claim that the scientific basis for fingerprint identification has not been adequately demonstrated. In addition, key forensic identification bodies such as ENFSI [1] and IAI [2] have recently endorsed and acknowledged the potential benefits of using statistical models as an important tool in support of the fingerprint identification process within the ACE-V framework. In this paper, we introduce a new Likelihood Ratio (LR) model based on Support Vector Machines (SVMs) trained with features discovered via morphometric and spatial analyses of corresponding minutiae configurations for both match and close non-match populations often found in AFIS candidate lists. Computed LR values are derived from a probabilistic framework based on SVMs that discover the intrinsic spatial differences of match and close non-match populations. Lastly, experimentation performed on a set of over 120,000 publicly available fingerprint images (mostly sourced from the National Institute of Standards and Technology (NIST) datasets) and a distortion set of approximately 40,000 images is presented, illustrating that the proposed LR model reliably guides towards the correct proposition in the identification assessment of match and close non-match populations. Results further indicate that the proposed model is a promising tool for fingerprint practitioners to use for analysing the spatial consistency of corresponding minutiae configurations.
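One common way to turn classifier scores into likelihood ratios, sketched here on synthetic data, is to model the score distributions of the two populations and take their density ratio. The Gaussian score model below is an illustrative assumption, not necessarily the probabilistic framework used in the paper:

```python
import numpy as np
from scipy.stats import norm
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# synthetic "similarity features" for match vs. close non-match pairs
X = np.vstack([rng.normal(1.0, 1.0, (300, 4)), rng.normal(-1.0, 1.0, (300, 4))])
y = np.array([1] * 300 + [0] * 300)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
s = clf.decision_function(X)

# model the SVM score distribution of each population, then LR = f_match / f_non-match
m_mu, m_sd = s[y == 1].mean(), s[y == 1].std()
n_mu, n_sd = s[y == 0].mean(), s[y == 0].std()

def likelihood_ratio(score):
    """Density ratio of the two fitted score models at a new comparison's score."""
    return norm.pdf(score, m_mu, m_sd) / norm.pdf(score, n_mu, n_sd)
```

An LR well above 1 supports the match proposition; well below 1, the non-match proposition. In casework the score models would be fitted on held-out data, and the Gaussian assumption would be checked or replaced by a calibration method.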

Relevance:

10.00%

Publisher:

Abstract:

Context and aim of the study: Fractures of the triquetrum are the second most frequent carpal bone fractures, after those of the scaphoid. They account for about 3.5% of all traumatic wrist injuries and most often result from a fall from standing height onto the wrist in hyperextension. Their pathophysiological mechanism remains debated. The first theory was that of ligamentous avulsion of a dorsal bone fragment. Later, Levy et al. and then Garcia-Elias suggested that these fractures instead result from ulnocarpal impaction. Numerous ligaments (intrinsic and extrinsic carpal ligaments) insert on the palmar and dorsal aspects of the triquetrum. These ligaments play an essential role in maintaining carpal stability. Although wrist MR arthrography is the reference examination for assessing these ligaments, Shahabpour et al. recently demonstrated their visibility on three-dimensional (volumetric) MRI after intravenous injection of a contrast agent (gadolinium). The ligament injuries associated with dorsal triquetral fractures have never been evaluated until now. These lesions could have an impact on the course and management of these fractures. The objectives of the study were therefore the following: first, to determine the full MRI characteristics of dorsal triquetral fractures, with emphasis on the associated extrinsic ligament injuries; second, to discuss the different pathophysiological mechanisms (i.e. ligamentous avulsion or ulnocarpal impaction) of these fractures in light of our MRI findings. Patients and methods: This is a retrospective multicentre study (CHUV, Lausanne; Hôpital Cochin, AP-HP, Paris) of wrist MRI examinations and conventional radiographs.
From January 2008 onwards, we searched the institutional databases for patients presenting a triquetral fracture who had undergone volumetric wrist MRI within six weeks of the trauma. The MRI examinations were performed on two high-field (3 Tesla) machines with a dedicated coil and an acquisition protocol including an isotropic three-dimensional sequence ("3D VIBE") after intravenous injection of a contrast agent (gadolinium). These examinations were analysed by two experienced musculoskeletal radiologists. Measurements were performed by a third musculoskeletal radiologist. For the qualitative analysis, the type of triquetral fracture (according to the Garcia-Elias classification), the distribution of post-traumatic bone marrow oedema, and the number and distribution of the associated extrinsic ligament injuries were assessed. For the quantitative analysis, the ulnar styloid process index (according to the Garcia-Elias formula), the volume of the bone fragment detached from the triquetrum, and the distance separating this fragment from the triquetrum were measured.

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: With the large amount of biological data that is currently publicly available, many investigators combine multiple data sets to increase the sample size and potentially also the power of their analyses. However, technical differences ("batch effects") as well as differences in sample composition between the data sets may significantly affect the ability to draw generalizable conclusions from such studies. FOCUS: The current study focuses on the construction of classifiers and the use of cross-validation to estimate their performance. In particular, we investigate the impact of batch effects and differences in sample composition between batches on the accuracy of the classification performance estimate obtained via cross-validation. The focus on estimation bias is a main difference compared with previous studies, which have mostly focused on the predictive performance and how it relates to the presence of batch effects. DATA: We work on simulated data sets. To have realistic intensity distributions, we use real gene expression data as the basis for our simulation. Random samples from this expression matrix are selected and assigned to group 1 (e.g., 'control') or group 2 (e.g., 'treated'). We introduce batch effects and select some features to be differentially expressed between the two groups. We consider several scenarios for our study, most importantly different levels of confounding between groups and batch effects. METHODS: We focus on well-known classifiers: logistic regression, Support Vector Machines (SVM), k-nearest neighbors (kNN) and Random Forests (RF). Feature selection is performed with the Wilcoxon test or the lasso. Parameter tuning and feature selection, as well as the estimation of the prediction performance of each classifier, are performed within a nested cross-validation scheme. The estimated classification performance is then compared to what is obtained when applying the classifier to independent data.
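The nested cross-validation scheme described under METHODS can be sketched as follows, assuming scikit-learn and a simulated data set with an additive batch shift (a simplification of the simulation design above):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (120, 30))
y = rng.integers(0, 2, 120)
X[y == 1, :5] += 1.0   # weak "treated vs. control" signal in 5 features
X[:60] += 0.5          # additive shift mimicking a batch effect on batch 1

# inner loop tunes C; the outer loop estimates prediction performance,
# so tuning never sees the outer test folds
inner = GridSearchCV(SVC(kernel="linear"), {"C": [0.01, 0.1, 1.0]}, cv=3)
scores = cross_val_score(inner, X, y, cv=5)
```

Note that plain cross-validation here splits samples without regard to batch; when batch and group are confounded, the study's point is precisely that these fold scores can be biased relative to performance on truly independent data.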

Relevance:

10.00%

Publisher:

Abstract:

Development and environmental issues of small cities in developing countries have largely been overlooked although these settlements are of global demographic importance and often face a "triple challenge"; that is, they have limited financial and human resources to address growing environmental problems that are related to both development (e.g., pollution) and under-development (e.g., inadequate water supply). Neoliberal policy has arguably aggravated this challenge as public investments in infrastructure generally declined while the focus shifted to the metropolitan "economic growth machines". This paper develops a conceptual framework and agenda for the study of small cities in the global south, their environmental dynamics, governance and politics in the current neoliberal context. While small cities are governed in a neoliberal policy context, they are not central to neoliberalism, and their (environmental) governance therefore seems to differ from that of global cities. Furthermore, "actually existing" neoliberal governance of small cities is shaped by the interplay of regional and local politics and environmental situations. The approach of urban political ecology and the concept of rural-urban linkages are used to consider these socio-ecological processes. The conceptual framework and research agenda are illustrated in the case of India, where the agency of small cities in regard to environmental governance seems to remain limited despite formal political decentralization.

Relevance:

10.00%

Publisher:

Abstract:

The research considers the problem of spatial data classification using machine learning algorithms: probabilistic neural networks (PNN) and support vector machines (SVM). As a benchmark model, the simple k-nearest neighbor algorithm is considered. PNN are a neural-network reformulation of the well-known nonparametric principles of probability density modeling using kernel density estimators and Bayesian optimal or maximum a posteriori decision rules. PNN are well suited to problems where not only predictions but also quantification of accuracy and integration of prior information are necessary. An important property of PNN is that they can easily be used in decision support systems dealing with problems of automatic classification. The support vector machine is an implementation of the principles of statistical learning theory for classification tasks. Recently they have been successfully applied to different environmental topics: classification of soil types and hydro-geological units, optimization of monitoring networks, and susceptibility mapping of natural hazards. In the present paper both simulated and real data case studies (low and high dimensional) are considered. The main attention is paid to the detection and learning of spatial patterns by the algorithms applied.
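A PNN as characterised above (a per-class Parzen density estimate followed by a Bayes decision) can be sketched in a few lines; equal class priors and a single global kernel width sigma are simplifying assumptions:

```python
import numpy as np

def pnn_classify(X_train, y_train, x, sigma=0.5):
    """Probabilistic neural network: Gaussian kernel density per class + Bayes rule."""
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        d2 = ((Xc - x) ** 2).sum(axis=1)
        # Parzen estimate of p(x | class c), up to a constant; priors assumed equal
        scores[c] = np.exp(-d2 / (2.0 * sigma ** 2)).mean()
    return max(scores, key=scores.get), scores

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)), rng.normal(2.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
label, _ = pnn_classify(X, y, np.array([2.0, 2.0]))
```

The per-class scores are (unnormalised) class-conditional densities, which is what makes PNN convenient when a quantified degree of confidence, not just a label, is needed.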

Relevance:

10.00%

Publisher:

Abstract:

Machine learning has been widely applied to analyze data in various domains, but it is still new to personalized medicine, especially dose individualization. In this paper, we focus on the prediction of drug concentrations using Support Vector Machines (SVM) and the analysis of the influence of each feature on the prediction results. Our study shows that SVM-based approaches achieve prediction results similar to those of a pharmacokinetic model. The two proposed example-based SVM methods demonstrate that the individual features help to increase the accuracy of drug concentration predictions with a reduced library of training data.
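A concentration-prediction model along these lines can be sketched with support vector regression. The covariates (dose, body weight, time since dose) and the one-compartment decay used to simulate concentrations are hypothetical, not the study's actual features or data:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 200
# hypothetical covariates -- not the features used in the paper
dose = rng.uniform(100, 400, n)   # mg
weight = rng.uniform(50, 100, n)  # kg
t = rng.uniform(1, 12, n)         # hours since dose
# toy one-compartment decay plus measurement noise
conc = dose / weight * np.exp(-0.1 * t) + rng.normal(0.0, 0.2, n)

X = np.column_stack([dose, weight, t])
# scaling matters: the covariates live on very different ranges
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, conc)
pred = model.predict(X)
```

Unlike a pharmacokinetic model, the SVR imposes no compartmental structure; it simply learns the dose–weight–time surface from examples, which is the trade-off the paper examines.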

Relevance:

10.00%

Publisher:

Abstract:

This article presents an experimental study of the classification ability of several classifiers for multi-class classification of cannabis seedlings. As the cultivation of drug-type cannabis is forbidden in Switzerland, law enforcement authorities regularly ask forensic laboratories to determine the chemotype of a seized cannabis plant and then to conclude whether the plantation is legal or not. This classification is mainly performed when the plant is mature, as required by the EU official protocol, and the classification of cannabis seedlings is therefore a time-consuming and costly procedure. A previous study by the authors investigated this problem [1] and showed that it is possible to differentiate between drug-type (illegal) and fibre-type (legal) cannabis at an early stage of growth using gas chromatography interfaced with mass spectrometry (GC-MS), based on the relative proportions of eight major leaf compounds. The aims of the present work are, on the one hand, to continue the former work and to optimise the methodology for the discrimination of drug- and fibre-type cannabis developed in the previous study and, on the other hand, to investigate the possibility of predicting illegal cannabis varieties. Seven classifiers for differentiating between cannabis seedlings are evaluated in this paper, namely Linear Discriminant Analysis (LDA), Partial Least Squares Discriminant Analysis (PLS-DA), Nearest Neighbour Classification (NNC), Learning Vector Quantization (LVQ), Radial Basis Function Support Vector Machines (RBF SVMs), Random Forest (RF) and Artificial Neural Networks (ANN). The performance of each method was assessed using the same analytical dataset, which consists of 861 samples split into drug- and fibre-type cannabis, with drug-type cannabis being made up of 12 varieties (i.e. 12 classes). The results show that linear classifiers are not able to manage the distribution of classes, in which some overlap areas exist, for both classification problems.
Unlike linear classifiers, NNC and RBF SVMs best differentiate cannabis samples for both 2-class and 12-class classification, with average classification results up to 99% and 98%, respectively. Furthermore, RBF SVMs correctly classified as drug-type cannabis the independent validation set, which consists of cannabis plants coming from police seizures. For forensic casework, this study shows that the discrimination between cannabis samples at an early stage of growth is possible with fairly high classification performance, whether discriminating between cannabis chemotypes or between drug-type cannabis varieties.
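The failure of linear classifiers on overlapping class distributions, versus the success of local nonlinear ones, can be illustrated on a synthetic XOR-style layout; LDA and k-NN stand in here for the linear and nonlinear families compared in the paper:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# XOR-style class layout: the classes overlap in every linear projection
centres = [(0, 0), (1, 1), (0, 1), (1, 0)]
X = np.vstack([rng.normal(c, 0.25, (50, 2)) for c in centres])
y = np.array([0] * 100 + [1] * 100)   # first two blobs vs. last two

lda = LinearDiscriminantAnalysis().fit(X, y)  # linear boundary
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)  # local, nonlinear
lda_acc, knn_acc = lda.score(X, y), knn.score(X, y)
```

No straight line separates this layout, so LDA stays near chance while the nearest-neighbour rule recovers the structure, mirroring the linear-vs-nonlinear gap reported above.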

Relevance:

10.00%

Publisher:

Abstract:

THESIS ABSTRACT

This thesis project was aimed at studying the molecular mechanisms underlying learning and memory formation, in particular as they relate to the metabolic coupling between astrocytes and neurons. To that end, changes in the metabolic activity of different mouse brain regions after 1 or 9 days of training in an eight-arm radial maze were assessed by (14C) 2-deoxyglucose (2DG) autoradiography. Significant differences in the areas engaged during the behavioral task at day 1 (when the animals are confronted with the learning task for the first time) and at day 9 (when the animals are performing well) were identified. These areas include the hippocampus, the fornix, the parietal cortex, the laterodorsal thalamic nucleus and the mammillary bodies at day 1, and the anterior cingulate, the retrosplenial cortex and the dorsal striatum at day 9. Two of these cerebral regions (those presenting the greatest changes at day 1 and day 9: the hippocampus and the retrosplenial cortex, respectively) were isolated by laser capture microdissection, and selected genes related to neuron-glia metabolic coupling, glucose metabolism and synaptic plasticity were analyzed by RT-PCR. The 2DG and gene expression analyses were performed at three different times: 1) immediately after the end of the behavioral paradigm, 2) 45 minutes and 3) 6 hours after training. The main goal of this study was the identification of the metabolic adaptations following the learning task. The gene expression results demonstrate that the learning task profoundly modulates the pattern of gene expression over time, meaning that these two cerebral regions with a high 2DG signal (hippocampus and retrosplenial cortex) have adapted their metabolic molecular machinery accordingly. Almost all studied genes show a higher expression in the hippocampus at day 1 compared with day 9, while an increased expression was found in the retrosplenial cortex at day 9.
We can observe these molecular adaptations with a short delay of 45 minutes after the end of the task. However, 6 hours after training, high gene expression was found at day 9 (compared with day 1) in both regions, suggesting that a single day of training is not sufficient to detect transcriptional modifications several hours after the task. Thus, the gene expression data match the 2DG results, indicating a transfer of information in time (from day 1 to day 9) and in space (from the hippocampus to the retrosplenial cortex), at both the cellular and the molecular level. Moreover, learning seems to modify neuron-glia metabolic coupling, since several genes involved in this coupling are induced. These results also suggest an important role for glia in neuronal plasticity.

Relevance:

10.00%

Publisher:

Abstract:

Because of the increase in workplace automation and the diversification of industrial processes, workplaces have become more and more complex. The classical approaches used to address workplace hazard concerns, such as checklists or sequence models, are therefore of limited use in such complex systems. Moreover, because of the multifaceted nature of workplaces, the use of single-oriented methods, such as AEA (man-oriented), FMEA (system-oriented), or HAZOP (process-oriented), is not satisfactory. The use of a dynamic modeling approach allowing multiple-oriented analyses may constitute an alternative to overcome this limitation. The qualitative modeling aspects of the MORM (man-machine occupational risk modeling) model are discussed in this article. The model, realized with an object-oriented Petri net tool (CO-OPN), has been developed to simulate and analyze industrial processes from an OH&S (occupational health and safety) perspective. The industrial process is modeled as a set of interconnected subnets (state spaces) which describe its constitutive machines. Process-related factors are introduced explicitly through machine interconnections and flow properties. Man-machine interactions are modeled as triggering events for the state spaces of the machines, and the CREAM cognitive behavior model is used to establish the relevant triggering events. In the CO-OPN formalism, the model is expressed as a set of interconnected CO-OPN objects defined over data types expressing the measure attached to the flow of entities transiting through the machines. Constraints on the measures assigned to these entities are used to determine the state changes in each machine. Interconnecting machines implies the composition of such flows and consequently the interconnection of the measure constraints. This is reflected in the construction of constraint-enrichment hierarchies, which can be used for simulation and analysis optimization in a clear mathematical framework.
The use of Petri nets to perform multiple-oriented analysis opens perspectives in the field of industrial risk management. It may significantly reduce the duration of the assessment process. Above all, however, it opens perspectives in the fields of risk comparison and integrated risk management. Moreover, because of the generic nature of the model and tool used, the same concepts and patterns may be used to model a wide range of systems and application fields.
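The token-flow mechanics underlying such a model can be illustrated with a minimal place-transition Petri net; this sketch ignores the CO-OPN object-oriented extensions, data types and measure constraints described above:

```python
class PetriNet:
    """A minimal place-transition net: places hold tokens, transitions move them."""

    def __init__(self, marking):
        self.marking = dict(marking)  # place -> token count
        self.transitions = {}         # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        # a transition is enabled if every input place holds enough tokens
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        # firing consumes input tokens and produces output tokens atomically
        if not self.enabled(name):
            raise RuntimeError(f"transition {name} not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# a machine that starts processing a part when an operator acts (the
# operator token playing the role of a man-machine triggering event)
net = PetriNet({"idle": 1, "operator": 1, "processing": 0})
net.add_transition("start", {"idle": 1, "operator": 1}, {"processing": 1})
net.fire("start")
```

Interconnecting several such subnets, so that the output places of one machine feed the input places of the next, gives exactly the composition of flows the article describes.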