866 results for Discriminant Analysis, Network Theory, Cross-Validation, Validation.
Abstract:
The early detection of subjects with probable Alzheimer's disease (AD) is crucial for the effective application of treatment strategies. Here we explored the ability of a multitude of linear and non-linear classification algorithms to discriminate between the electroencephalograms (EEGs) of patients with varying degrees of AD and their age-matched control subjects. Absolute and relative spectral power, distribution of spectral power, and measures of spatial synchronization were calculated from recordings of resting eyes-closed continuous EEGs of 45 healthy controls, 116 patients with mild AD and 81 patients with moderate AD, recruited in two different centers (Stockholm, New York). The applied classification algorithms were: principal component linear discriminant analysis (PC LDA), partial least squares LDA (PLS LDA), principal component logistic regression (PC LR), partial least squares logistic regression (PLS LR), bagging, random forest, support vector machines (SVM) and feed-forward neural network. Based on 10-fold cross-validation runs it could be demonstrated that even though modern computer-intensive classification algorithms such as random forests, SVM and neural networks show a slight superiority, more classical classification algorithms performed nearly equally well. Using random forest classification, a sensitivity of up to 85% and a specificity of 78% were reached even for the detection of only mildly affected AD patients, whereas for the comparison of moderate AD vs. controls, SVM and neural networks achieved sensitivity and specificity values of 89% and 88%, respectively. Such performance demonstrates the value of these classification algorithms for clinical diagnostics.
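The comparison described above can be sketched with scikit-learn; this is a hedged illustration (the authors' implementation is not specified), with synthetic features standing in for the spectral-power and synchronization measures:

```python
# Hedged sketch (not the authors' pipeline): comparing a classical linear
# classifier with a modern ensemble via 10-fold cross-validation.
# Synthetic features stand in for the spectral-power and synchronization measures.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# e.g. 116 mild-AD patients vs. 45 controls -> 161 synthetic samples
X, y = make_classification(n_samples=161, n_features=30, n_informative=12,
                           random_state=0)
results = {}
for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("random forest", RandomForestClassifier(n_estimators=200,
                                                           random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
    results[name] = scores.mean()
    print(f"{name}: mean accuracy {results[name]:.2f}")
```

On real EEG features the margin between the classical and the modern method was small, which is consistent with the abstract's conclusion.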
Abstract:
Dynamic changes in ERP topographies can be conveniently analyzed by means of microstates, the so-called "atoms of thought," which represent brief periods of quasi-stable synchronized network activation. Comparing temporal microstate features such as onset, offset or duration between groups and conditions therefore allows a precise assessment of the timing of cognitive processes. So far, this has been achieved by assigning the individual time-varying ERP maps to spatially defined microstate templates obtained from clustering the grand mean data into predetermined numbers of topographies (microstate prototypes). Features obtained from these individual assignments were then statistically compared. This approach has the problem that individual noise dilutes the match between individual topographies and templates, leading to lower statistical power. We therefore propose a randomization-based procedure that works without assigning grand-mean microstate prototypes to individual data. In addition, we propose a new criterion to select the optimal number of microstate prototypes based on cross-validation across subjects. After a formal introduction, the method is applied to a sample data set of an N400 experiment and to simulated data with varying signal-to-noise ratios, and the results are compared to existing methods. In a first comparison with previously employed statistical procedures, the new method showed an increased robustness to noise and a higher sensitivity for more subtle effects of microstate timing. We conclude that the proposed method is well-suited for the assessment of timing differences in cognitive processes. The increased statistical power allows the identification of more subtle effects, which is particularly important in small and scarce patient populations.
Abstract:
Accurate quantitative estimation of exposure using retrospective data has been one of the most challenging tasks in the exposure assessment field. To improve these estimates, some models have been developed using published exposure databases with their corresponding exposure determinants. These models are designed to be applied to reported exposure determinants obtained from study subjects or exposure levels assigned by an industrial hygienist, so that quantitative exposure estimates can be obtained. In an effort to improve the prediction accuracy and generalizability of these models, and taking into account that the limitations encountered in previous studies might be due to limitations in the applicability of traditional statistical methods and concepts, the use of computer science-derived data analysis methods, predominantly machine learning approaches, was proposed and explored in this study. The goal of this study was to develop a set of models using decision tree/ensemble and neural network methods to predict occupational outcomes based on literature-derived databases, and to compare, using cross-validation and data splitting techniques, the resulting prediction capacity with that of traditional regression models. Two cases were addressed: the categorical case, where the exposure level was measured as an exposure rating following the American Industrial Hygiene Association guidelines, and the continuous case, where the result of the exposure is expressed as a concentration value. Previously developed literature-based exposure databases for 1,1,1-trichloroethane, methylene dichloride, and trichloroethylene were used. When compared to regression estimations, results showed better accuracy of decision tree/ensemble techniques for the categorical case, while neural networks were better for estimation of continuous exposure values. Overrepresentation of classes and overfitting were the main causes of poor neural network performance and accuracy.
Estimations based on literature-derived databases using machine learning techniques might provide an advantage when applied to other methodologies that combine 'expert inputs' with current exposure measurements, such as the Bayesian Decision Analysis tool. The use of machine learning techniques to more accurately estimate exposures from literature-based exposure databases might represent a starting point toward independence from expert judgment.
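The regression-vs.-tree comparison above can be sketched as follows; this is illustrative only, with synthetic data standing in for the study's literature-derived databases and exposure determinants:

```python
# Illustrative only: regression vs. decision tree for a continuous exposure
# value, evaluated by data splitting. Synthetic data, not the study's databases.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                     # stand-ins for exposure determinants
y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.3, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
r2_lin = LinearRegression().fit(X_tr, y_tr).score(X_te, y_te)
r2_tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_tr, y_tr).score(X_te, y_te)
print(f"linear R^2 {r2_lin:.2f}, tree R^2 {r2_tree:.2f}")
```

Which family wins depends on the structure of the data, mirroring the study's finding that trees did better for ratings while neural networks did better for concentrations.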
Abstract:
Acquired brain injury (ABI) is one of the leading causes of death and disability in the world and is associated with high health care costs as a result of the acute treatment and long-term rehabilitation involved. Different algorithms and methods have been proposed to predict the effectiveness of rehabilitation programs. In general, research has focused on predicting the overall improvement of patients with ABI. The purpose of this study is the novel application of data mining (DM) techniques to predict the outcomes of cognitive rehabilitation in patients with ABI. We generate three predictive models that allow us to obtain new knowledge to evaluate and improve the effectiveness of the cognitive rehabilitation process. Decision tree (DT), multilayer perceptron (MLP) and general regression neural network (GRNN) models have been used to construct the predictors. 10-fold cross-validation was carried out in order to test the algorithms, using the Institut Guttmann Neurorehabilitation Hospital (IG) patient database. Performance of the models was assessed through specificity, sensitivity and accuracy analysis and confusion matrix analysis. The experimental results obtained by DT are clearly superior, with an average prediction accuracy of 90.38%, while MLP and GRNN obtained 78.7% and 75.96%, respectively. This study increases knowledge about the factors contributing to the recovery of ABI patients and makes it possible to estimate treatment efficacy in individual patients.
Abstract:
Objective: The main purpose of this research is the novel use of artificial metaplasticity on a multilayer perceptron (AMMLP) as a data mining tool for predicting the outcome of patients with acquired brain injury (ABI) after cognitive rehabilitation. The final goal is to increase knowledge in the field of rehabilitation theory based on cognitive affectation. Methods and materials: The data set used in this study contains records belonging to 123 ABI patients with moderate to severe cognitive affectation (according to the Glasgow Coma Scale) who underwent rehabilitation at Institut Guttmann Neurorehabilitation Hospital (IG) using the tele-rehabilitation platform PREVIRNEC©. The variables included in the analysis comprise the initial neuropsychological evaluation of the patient (cognitive affectation profile), the results of the rehabilitation tasks performed by the patient in PREVIRNEC© and the outcome of the patient after a 3–5 month treatment. To achieve treatment outcome prediction, we apply and compare three different data mining techniques: the AMMLP model, a backpropagation neural network (BPNN) and a C4.5 decision tree. Results: The prediction performance of the models was measured by ten-fold cross-validation, and several architectures were tested. The results obtained by the AMMLP model are clearly superior, with an average predictive performance of 91.56%; the BPNN and C4.5 models have average prediction accuracies of 80.18% and 89.91%, respectively. The best single AMMLP model provided a specificity of 92.38%, a sensitivity of 91.76% and a prediction accuracy of 92.07%. Conclusions: The prediction model proposed in this study increases knowledge about the factors contributing to the recovery of ABI patients and makes it possible to estimate treatment efficacy in individual patients. The ability to predict treatment outcomes may provide new insights toward improving effectiveness and creating personalized therapeutic interventions based on clinical evidence.
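The reported specificity, sensitivity and accuracy follow from the standard confusion-matrix definitions; the counts below are hypothetical, chosen only to illustrate the computation (they are not the study's actual confusion matrix):

```python
# Hypothetical confusion-matrix counts (not from the study) illustrating how
# sensitivity, specificity and accuracy are computed for a binary outcome.
def classification_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)                # true positive rate
    specificity = tn / (tn + fp)                # true negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)  # overall correct fraction
    return sensitivity, specificity, accuracy

# 123 patients split into made-up counts for illustration
sens, spec, acc = classification_metrics(tp=56, fn=5, tn=57, fp=5)
print(f"sensitivity {sens:.2%}, specificity {spec:.2%}, accuracy {acc:.2%}")
```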
Abstract:
The main objective of this PhD thesis is to go more deeply into the analysis and design of an intelligent system for surface roughness prediction and control in the high-speed end-milling process, based fundamentally on Bayesian network classifiers, with the aim of developing a methodology that eases the design of this type of system. The system, whose purpose is to make surface roughness prediction and control possible, consists of a model learnt from experimental data with the aid of Bayesian networks, which helps to understand the dynamic processes involved in machining and the interactions among the relevant variables. Since artificial neural networks are models widely used in material cutting processes, we also include an end-milling model using them, in which the geometry and hardness of the workpiece are introduced as novel variables not studied so far in this context. Thus, an important contribution of this thesis is these two models for surface roughness prediction, which are then compared with respect to different aspects: the influence of the new variables, performance evaluation metrics, and interpretability. One of the main problems with Bayesian classifier-based modelling is the understanding of the enormous posterior probability tables produced. We introduce an explanation method that generates a set of rules obtained from decision trees. Such trees are induced from a simulated data set generated from the posterior probabilities of the class variable, calculated with the Bayesian network learned from a training data set. Finally, we contribute to the multi-objective field in the case where some of the objectives cannot be quantified as real numbers but only as interval-valued functions. This often occurs in machine learning applications, especially those based on supervised classification. Specifically, the dominance and Pareto front ideas are extended to this setting. These ideas are applied to the surface roughness prediction study, maximizing simultaneously the sensitivity and specificity of the induced Bayesian network classifier rather than only the correct classification rate. The intervals for these two objectives come from an honest estimation method of both objectives, such as k-fold cross-validation or the bootstrap.
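One natural way to extend Pareto dominance to interval-valued objectives is to require that each of a candidate's intervals lies entirely at or above the other's; this is a hedged sketch of that idea, and the thesis's precise definition may differ:

```python
# Hedged sketch: strong interval dominance for maximized objectives such as
# (sensitivity, specificity), each estimated as an interval, e.g. from
# k-fold cross-validation. The thesis's exact definition may differ.
def dominates(a, b):
    """a, b: tuples of (low, high) intervals, one per maximized objective.
    a dominates b if every interval of a lies entirely at or above the
    corresponding interval of b, strictly above in at least one objective."""
    at_least = all(alo >= bhi for (alo, _), (_, bhi) in zip(a, b))
    strictly = any(alo > bhi for (alo, _), (_, bhi) in zip(a, b))
    return at_least and strictly

clf1 = ((0.85, 0.90), (0.80, 0.86))   # (sensitivity, specificity) intervals
clf2 = ((0.70, 0.80), (0.65, 0.78))
print(dominates(clf1, clf2))          # clf1's intervals lie above clf2's
```

Classifiers whose intervals overlap are mutually non-dominated and remain on the Pareto front, which is exactly the situation interval-valued objectives are meant to capture.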
Abstract:
Recently, the target function for crystallographic refinement has been improved through a maximum likelihood analysis, which makes proper allowance for the effects of data quality, model errors, and incompleteness. The maximum likelihood target reduces the significance of false local minima during the refinement process, but it does not completely eliminate them, necessitating the use of stochastic optimization methods such as simulated annealing for poor initial models. It is shown that the combination of maximum likelihood with cross-validation, which reduces overfitting, and simulated annealing by torsion angle molecular dynamics, which simplifies the conformational search problem, results in a major improvement of the radius of convergence of refinement and the accuracy of the refined structure. Torsion angle molecular dynamics and the maximum likelihood target function interact synergistically, the combination of both methods being significantly more powerful than each method individually. This is demonstrated in realistic test cases at two typical minimum Bragg spacings (dmin = 2.0 and 2.8 Å, respectively), illustrating the broad applicability of the combined method. In an application to the refinement of a new crystal structure, the combined method automatically corrected a mistraced loop in a poor initial model, moving the backbone by 4 Å.
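In crystallographic refinement, cross-validation is typically monitored through an R factor evaluated on reflections held out of refinement (the "free" set); a minimal sketch of the statistic, with made-up structure-factor amplitudes, is:

```python
# Sketch of the crystallographic R factor; the cross-validated ("free")
# variant is computed on a held-out set of reflections to detect overfitting.
# Amplitudes below are made up for illustration.
import numpy as np

def r_factor(f_obs, f_calc):
    """R = sum(|F_obs - F_calc|) / sum(|F_obs|) over a set of reflections."""
    return np.sum(np.abs(f_obs - f_calc)) / np.sum(np.abs(f_obs))

rng = np.random.default_rng(1)
f_obs = rng.uniform(10, 100, size=1000)                   # observed amplitudes
f_calc = f_obs * (1 + rng.normal(scale=0.2, size=1000))   # imperfect model
print(f"R = {r_factor(f_obs, f_calc):.3f}")
```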
Abstract:
We present a method for predicting protein folding class based on global protein chain description and a voting process. Selection of the best descriptors was achieved by a computer-simulated neural network trained on a database consisting of 83 folding classes. Protein-chain descriptors include overall composition, transition, and distribution of amino acid attributes, such as relative hydrophobicity, predicted secondary structure, and predicted solvent exposure. Cross-validation testing was performed on 15 of the largest classes. The test shows that proteins were assigned to the correct class (correct positive prediction) with an average accuracy of 71.7%, whereas the inverse prediction of proteins as not belonging to a particular class (correct negative prediction) was 90-95% accurate. When tested on the 254 structures used in this study, the top two predictions contained the correct class in 91% of the cases.
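The voting step can be sketched as a simple majority vote across per-descriptor class predictions; the class labels below are illustrative, not the 83 folding classes of the study:

```python
# Hedged sketch of the voting process: each protein-chain descriptor set
# yields a class prediction, and the majority label wins.
from collections import Counter

def majority_vote(predictions):
    """Return the most common predicted class among the descriptor-based votes."""
    return Counter(predictions).most_common(1)[0][0]

# Illustrative per-descriptor votes for one protein chain
votes = ["alpha/beta", "all-alpha", "alpha/beta", "alpha/beta", "all-beta"]
print(majority_vote(votes))  # -> "alpha/beta"
```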
Abstract:
Purpose. To assess, in a sample of normal, keratoconic, and keratoconus (KC) suspect eyes, the performance of a set of new topographic indices computed directly from the digitized images of the Placido rings. Methods. This comparative study comprised a total of 124 eyes of 106 patients from the ophthalmic clinics Vissum Alicante and Vissum Almería (Spain), divided into three groups: control group (50 eyes), KC group (50 eyes), and KC suspect group (24 eyes). In all cases, a comprehensive examination was performed, including corneal topography with a Placido-based CSO topography system. Clinical outcomes were compared among groups, along with the discriminating performance of the proposed irregularity indices. Results. Significant differences at the 0.05 level were found in the values of the indices among groups by means of the Mann-Whitney-Wilcoxon nonparametric test and Fisher's exact test. Additional statistical methods, such as receiver operating characteristic analysis and K-fold cross-validation, confirmed the capability of the indices to discriminate between the three groups. Conclusions. Direct analysis of the digitized images of the Placido mires projected on the cornea is a valid and effective tool for the detection of corneal irregularities. Although based only on data from the anterior surface of the cornea, the new indices performed well even when applied to the KC suspect eyes. They have the advantage of simplicity of calculation combined with high sensitivity in corneal irregularity detection and thus can be used as supplementary criteria for diagnosing and grading KC that can be added to the current keratometric classifications.
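The discriminating performance of an index is commonly summarized by the area under the ROC curve; this is a hedged sketch with made-up index values, not the study's data:

```python
# Illustrative only: ROC analysis of a hypothetical irregularity index for
# separating keratoconic (1) from normal (0) eyes. All values are made up.
from sklearn.metrics import roc_auc_score

labels = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]                       # 0 = control, 1 = KC
index  = [0.11, 0.15, 0.13, 0.22, 0.18, 0.35, 0.41, 0.19, 0.44, 0.38]
auc = roc_auc_score(labels, index)
print(f"AUC = {auc:.2f}")   # fraction of (control, KC) pairs ranked correctly
```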
Abstract:
This final dissertation presents a proposed translation of part of the website of the famous Jewish Museum Berlin designed by Daniel Libeskind, an American architect of Polish origin. The dissertation is organized into the following sections: the first chapter focuses on the theory of websites and their organization, with particular attention to how websites are written; the second chapter presents the life and works of Daniel Libeskind, with a brief introduction to the Jewish Museum. A textual analysis follows, and the fourth chapter contains the proposed translation. The fifth chapter analyzes the strategies adopted in the translation. The dissertation closes with some final remarks and with the bibliography and list of websites consulted.
Abstract:
The aim of this thesis is to define a method for terminology research and documentation, as well as computer-assisted translation, supported by the modern technologies available in this field (AntConc, BootCaT, Trados, etc.), valid for the translation of this type of document, i.e. standards, but also usable in other areas of technical-scientific translation, allowing the translator, and consequently the client, to obtain an "acceptable" and qualitatively suitable document in the target language. The path traced in this work starts with a presentation of the general historical background and then moves on to the classification of food additives by type and use in the food industry. The methods of analysis of additives and the criteria for validating the methods employed are illustrated in general terms, with reference to the relevant international standards, paying particular attention to the regulatory framework and the bodies involved in the regulation and control of these substances, in Italy, in Russia and in the rest of the world. All this is set against geopolitical and cultural events: on the one hand the economic sanctions between the EU and Russia, on the other EXPO 2015, an opportunity for many translators and terminologists to deepen and enrich their knowledge of such an important field, food and food safety, in connection with the VOCA9 terminology management project. The final part of the thesis is devoted to the presentation of the Russian GOST R standards and their translation into Italian, together with the documentation and terminology research necessary for translation with CAT tools and indispensable for the creation of glossaries.
Abstract:
A brief analysis of waste management and a simulation of the thermal cycle of the Frullo waste-to-energy plant (Hera group, Bologna), with modelling of two possible operational variations (specified by the plant operator) to assess their feasibility in terms of improved electrical and thermal efficiency. The results obtained are confirmed by an economic analysis based on the seasonal sale of electricity and heat.
Abstract:
Background: Published birthweight references in Australia do not fully take into account constitutional factors that influence birthweight and therefore may not provide an accurate reference to identify the infant with abnormal growth. Furthermore, studies in other regions that have derived adjusted (customised) birthweight references have applied untested assumptions in the statistical modelling. Aims: To validate the customised birthweight model and to produce a reference set of coefficients for estimating a customised birthweight that may be useful for maternity care in Australia and for future research. Methods: De-identified data were extracted from the clinical database for all births at the Mater Mother's Hospital, Brisbane, Australia, between January 1997 and June 2005. Births with missing data for the variables under study were excluded. In addition, the following were excluded: multiple pregnancies, births at less than 37 completed weeks' gestation, stillbirths, and major congenital abnormalities. Multivariate analysis was undertaken. A double cross-validation procedure was used to validate the model. Results: The study of 42 206 births demonstrated that, for statistical purposes, birthweight is normally distributed. Coefficients for the derivation of customised birthweight in an Australian population were developed, and the statistical model is demonstrably robust. Conclusions: This study provides empirical data as to the robustness of the model to determine customised birthweight. Further research is required to define where normal physiology ends and pathology begins, and which segments of the population should be included in the construction of a customised birthweight standard.
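A double (split-half) cross-validation of a customised-birthweight-style regression can be sketched as follows; the covariates and data are synthetic stand-ins, not the hospital's records:

```python
# Hedged sketch of double cross-validation: split the data into two halves,
# fit the regression on each half, and predict the other half. Synthetic
# stand-ins for constitutional variables (e.g. maternal height, parity).
import numpy as np

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])  # intercept + 3 covariates
beta_true = np.array([3400.0, 150.0, 60.0, -40.0])          # grams
y = X @ beta_true + rng.normal(scale=300.0, size=n)         # birthweight-like outcome

half = n // 2
r2 = []
for train, test in [(slice(0, half), slice(half, n)), (slice(half, n), slice(0, half))]:
    beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)  # fit on one half
    resid = y[test] - X[test] @ beta                            # predict the other
    r2.append(1 - resid.var() / y[test].var())
print(f"cross-predicted R^2: {r2[0]:.2f}, {r2[1]:.2f}")
```

Similar cross-predicted fit in both directions is the kind of evidence of model robustness the study reports.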
Abstract:
This Letter addresses image segmentation via a generative model approach. A Bayesian network (BNT) in the space of dyadic wavelet transform coefficients is introduced to model texture images. The model is similar to a hidden Markov model (HMM), but with non-stationary transitive conditional probability distributions. It is composed of discrete hidden variables and observable Gaussian outputs for wavelet coefficients. In particular, the Gabor wavelet transform is considered. The introduced model is compared with the simplest joint Gaussian probabilistic model for Gabor wavelet coefficients for several textures from the Brodatz album [1]. The comparison is based on cross-validation and includes probabilistic model ensembles instead of single models. In addition, the robustness of the models in coping with additive Gaussian noise is investigated. We further study the feasibility of the introduced generative model for image segmentation in the novelty detection framework [2]. Two examples are considered: (i) sea surface pollution detection from intensity images and (ii) segmentation of still images with varying illumination across the scene.
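Cross-validated comparison of generative models typically scores each candidate by its held-out log-likelihood; this hedged sketch contrasts a full-covariance and a diagonal Gaussian on synthetic correlated "coefficients" (it does not reproduce the Letter's Gabor-wavelet models):

```python
# Hedged sketch: comparing two Gaussian density models by held-out
# log-likelihood, as in cross-validated generative model selection.
# Synthetic correlated data, not Gabor wavelet coefficients.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.8], [0.8, 1.0]])            # correlated coefficients
data = rng.multivariate_normal([0.0, 0.0], cov, size=1000)
train, test = data[:500], data[500:]

mu = train.mean(axis=0)
full = multivariate_normal(mu, np.cov(train.T))      # models the correlation
diag = multivariate_normal(mu, np.diag(train.var(axis=0)))  # ignores it
ll_full = full.logpdf(test).sum()
ll_diag = diag.logpdf(test).sum()
print(f"held-out log-likelihood: full {ll_full:.1f}, diagonal {ll_diag:.1f}")
```

The model that captures the dependence structure scores higher on held-out data, which is the criterion such comparisons rest on.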
Abstract:
This research establishes new optimization methods for pattern recognition and classification of different white blood cells in actual patient data to enhance the process of diagnosis. Beckman-Coulter Corporation supplied flow cytometry data of numerous patients, used as training sets to exploit the different physiological characteristics of the samples provided. Support Vector Machines (SVM) and Artificial Neural Networks (ANN) were used as promising pattern classification techniques to identify different white blood cell samples and provide information to medical doctors in the form of diagnostic references for specific disease states such as leukemia. The obtained results show that when a neural network classifier is well configured and trained with cross-validation, it can perform better than support vector classifiers alone on this type of data. Furthermore, a new unsupervised learning algorithm, the Density-based Adaptive Window Clustering (DAWC) algorithm, was designed to process large volumes of data and find the locations of high-density data clusters in real time. It reduces the computational load to ~O(N) computations, making the algorithm more attractive and faster than current hierarchical algorithms.
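The details of DAWC are specific to the dissertation, but the idea of locating a high-density region in a single O(N) pass can be sketched with a fixed-bin histogram; this is a generic stand-in, not the DAWC algorithm itself:

```python
# Generic O(N) density-mode sketch (NOT the DAWC algorithm): one pass bins
# the data, then the center of the densest bin approximates the location of
# the dominant cluster.
import numpy as np

def density_peak(x, bins=64):
    counts, edges = np.histogram(x, bins=bins)   # single O(N) pass over the data
    i = int(np.argmax(counts))
    return 0.5 * (edges[i] + edges[i + 1])       # center of the densest bin

rng = np.random.default_rng(0)
# A dense cluster near 5.0 plus uniform background noise
x = np.concatenate([rng.normal(5.0, 0.2, 5000), rng.uniform(0, 10, 1000)])
peak = density_peak(x)
print(f"estimated cluster location: {peak:.2f}")
```

Unlike hierarchical clustering, whose pairwise merges cost far more than O(N), a binning pass scales linearly, which is the speed advantage the abstract claims for DAWC.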