790 results for Learning techniques


Relevance:

60.00%

Publisher:

Abstract:

Practical use of machine learning is gaining strategic importance in enterprises looking for business intelligence. However, most enterprise data is distributed across multiple relational databases with expert-designed schemas. Applying traditional single-table machine learning techniques to such data not only incurs a computational penalty for converting it to a flat form (mega-join), but also discards the human-specified semantic information present in the relations. In this paper, we present a practical, two-phase hierarchical meta-classification algorithm for relational databases that follows a semantic divide-and-conquer approach. We propose a recursive prediction-aggregation technique over heterogeneous classifiers applied to individual database tables. The proposed algorithm was evaluated on three diverse datasets, namely the TPCH, PKDD and UCI benchmarks, and showed a considerable reduction in classification time without any loss of prediction accuracy. (C) 2012 Elsevier Ltd. All rights reserved.
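
As a rough illustration of the idea (not the paper's exact algorithm), the sketch below trains one simple classifier per relational table and aggregates their per-class scores at a meta level, avoiding the flattening mega-join; the nearest-centroid classifiers and the toy tables are invented for the example:

```python
# Illustrative sketch: one simple classifier per relational table, with a
# meta-level aggregation of their per-class scores. Nearest-centroid models
# and the 'orders'/'customers' tables are invented stand-ins.

def train_centroids(rows, labels):
    """Per-table 'classifier': mean feature vector per class."""
    sums, counts = {}, {}
    for x, y in zip(rows, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [s / counts[y] for s in acc] for y, acc in sums.items()}

def scores(centroids, x):
    """Per-class similarity scores (negative squared distance)."""
    return {y: -sum((a - b) ** 2 for a, b in zip(c, x))
            for y, c in centroids.items()}

def aggregate(per_table_scores):
    """Meta-classification: sum the per-table class scores, pick the best."""
    total = {}
    for s in per_table_scores:
        for y, v in s.items():
            total[y] = total.get(y, 0.0) + v
    return max(total, key=total.get)

# Toy data: the same entities described by two separate tables.
orders    = [[1.0], [1.2], [5.0], [5.2]]
customers = [[0.9], [1.1], [4.8], [5.1]]
labels = ["low", "low", "high", "high"]

m_orders = train_centroids(orders, labels)
m_customers = train_centroids(customers, labels)

pred = aggregate([scores(m_orders, [1.1]), scores(m_customers, [1.0])])
print(pred)  # → "low"
```

Each table keeps its own feature space, and only the per-class scores cross table boundaries, which is what avoids materializing the joined flat table.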

Relevance:

60.00%

Publisher:

Abstract:

Biomolecular recognition underlying drug-target interactions is determined by both binding affinity and specificity. While quantification of binding efficacy is possible, determining specificity remains a challenge, as it requires affinity data for multiple targets with the same ligand dataset. Thus, understanding the interaction space by mapping the target space to model its complementary chemical space through computational techniques is desirable. In this study, the active site architecture of the FabD drug target in two apicomplexan parasites, viz. Plasmodium falciparum (PfFabD) and Toxoplasma gondii (TgFabD), is explored, followed by consensus docking calculations and identification of the fifteen best hit compounds, most of which are found to be derivatives of natural products. Subsequently, machine learning techniques were applied to molecular descriptors of six FabD homologs and sixty ligands to induce distinct multivariate partial-least-squares models. The biological space of FabD mapped by the various chemical entities explains their interaction space in general. It also highlights the selective variations between FabD of the apicomplexan parasites and that of the host. Furthermore, the chemometric models revealed the principal chemical scaffolds in PfFabD and TgFabD to be pyrrolidines and imidazoles, respectively, which confer target specificity and improve binding affinity in combination with other functional descriptors, conducive to the design and optimization of the leads.
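
To make the chemometric modelling concrete, here is a minimal one-component partial least squares (PLS1) regression in plain Python; the descriptor matrix and response values are invented toy numbers, not FabD/ligand data:

```python
# Minimal one-component PLS1 regression sketch: find the weight direction
# of maximal covariance between descriptors X and response y, then regress
# y on the resulting scores. Toy numbers only.
import math

def pls1_fit(X, y):
    """Fit a single-component PLS1 model; returns (x_means, y_mean, w, q)."""
    n, p = len(X), len(X[0])
    xm = [sum(row[j] for row in X) / n for j in range(p)]
    ym = sum(y) / n
    Xc = [[row[j] - xm[j] for j in range(p)] for row in X]
    yc = [v - ym for v in y]
    # Weight vector: X-y covariance direction, normalised.
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]
    nw = math.sqrt(sum(v * v for v in w))
    w = [v / nw for v in w]
    t = [sum(Xc[i][j] * w[j] for j in range(p)) for i in range(n)]  # scores
    q = sum(yc[i] * t[i] for i in range(n)) / sum(v * v for v in t)
    return xm, ym, w, q

def pls1_predict(model, x):
    xm, ym, w, q = model
    t = sum((x[j] - xm[j]) * w[j] for j in range(len(x)))
    return ym + q * t

# Toy descriptors: y depends on the single latent direction (1, 1).
X = [[1, 1], [2, 2], [3, 3], [4, 4]]
y = [2, 4, 6, 8]
model = pls1_fit(X, y)
print(pls1_predict(model, [5, 5]))  # → 10.0 (up to float rounding)
```

Real chemometric models extract several such components; this sketch shows only the core covariance-direction step.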

Relevance:

60.00%

Publisher:

Abstract:

As part of its award-winning staff development programme, Burnley College has introduced an end-of-year 'Feast' of staff development activities. The Feast comprises two weeks of Continuing Professional Development (CPD) workshops for all staff, designed to promote e-Learning techniques, develop innovative teaching and learning methods, and encourage staff to share hobbies and interests while always looking to expand the learning element.

Relevance:

60.00%

Publisher:

Abstract:

[SPA] The artistic disciplines have always been a field as beneficial as it is underused in education. This proposal for a musical theatre workshop in primary education explores the world of theatre pedagogy and its relationship with music and play as techniques for transmitting values and developing learning. Its implementation, through the design and delivery of a workshop by the facilitator, led the pupils to show significant progress in several areas, such as bodily self-awareness and the discovery of new musical genres, as well as benefiting group dynamics both inside and outside the workshop. Looking ahead, this process has opened the door to new proposals in the field of children's drama, as well as to the educator's own interest in further training.

Relevance:

60.00%

Publisher:

Abstract:

Earthquake early warning (EEW) systems have developed rapidly over the past decade. The Japan Meteorological Agency (JMA) operates an EEW system that was running during the 2011 M9 Tohoku earthquake in Japan, which increased awareness of EEW systems around the world. While longer-term earthquake prediction still faces many challenges before it becomes practical, the availability of short-term EEW opens a new door for earthquake loss mitigation. After an earthquake fault begins rupturing, an EEW system uses the first few seconds of recorded seismic waveform data to quickly predict the hypocenter location, magnitude, origin time and the expected shaking intensity level around the region. This early warning information is broadcast to different sites before the strong shaking arrives. The warning lead time of such a system is short, typically a few seconds to a minute or so, and the information is uncertain. These factors limit the scope for human intervention to activate mitigation actions, and this must be addressed for engineering applications of EEW. This study applies a Bayesian probabilistic approach, together with machine learning techniques and decision theory from economics, to improve different aspects of EEW operation, including extending it to engineering applications.

Existing EEW systems are often based on a deterministic approach. They frequently assume that only a single event occurs within a short period of time, an assumption that led to many false alarms after the Tohoku earthquake in Japan. This study develops a probability-based EEW algorithm, built on an existing deterministic model, that extends the EEW system to the case of concurrent events, which are often observed during the aftershock sequence following a large earthquake.

To overcome the challenges of uncertain information and short lead times, this study also develops an earthquake probability-based automated decision-making (ePAD) framework to make robust decisions in EEW mitigation applications. A cost-benefit model that captures the uncertainties in the EEW information and in the decision process is used. This approach, called Performance-Based Earthquake Early Warning, is based on the PEER Performance-Based Earthquake Engineering method. Surrogate models are suggested to improve computational efficiency, and new models are proposed to incorporate the influence of lead time into the cost-benefit analysis. For example, a value-of-information model is used to quantify the potential value of delaying the activation of a mitigation action for a possible reduction in the uncertainty of the EEW information at the next update. Two practical examples, evacuation alerts and elevator control, are studied to illustrate the ePAD framework. Potential advanced EEW applications, such as multiple-action decisions and the synergy of EEW and structural health monitoring systems, are also discussed.
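
The cost-benefit logic behind such a framework can be sketched as a simple expected-cost comparison; all probabilities, losses and the mitigation effectiveness below are invented illustrative numbers, not the study's calibrated models:

```python
# Sketch of the cost-benefit decision at the heart of an ePAD-style system:
# given a probabilistic shaking prediction from EEW, activate a mitigation
# action only if it lowers the expected cost. All numbers are illustrative.

def expected_cost(p_strong, loss_strong, action_cost, mitigated_fraction):
    """Expected cost without and with activating the mitigation action."""
    no_action = p_strong * loss_strong
    with_action = action_cost + p_strong * loss_strong * (1 - mitigated_fraction)
    return no_action, with_action

def decide(p_strong, loss_strong=100.0, action_cost=5.0, mitigated_fraction=0.8):
    no_act, act = expected_cost(p_strong, loss_strong,
                                action_cost, mitigated_fraction)
    return "activate" if act < no_act else "wait"

print(decide(0.02))  # low probability of strong shaking → "wait"
print(decide(0.30))  # high probability → "activate"
```

The value-of-information idea mentioned above amounts to repeating this comparison with the sharper probability expected from the next EEW update, and weighing that gain against the lead time lost by waiting.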

Relevance:

60.00%

Publisher:

Abstract:

The main contribution of this work is to analyze and describe the state-of-the-art performance of answer scoring systems from the SemEval-2013 task, as well as to continue the development of an answer scoring system (EHU-ALM) created at the University of the Basque Country. Overall, this master's thesis focuses on finding configurations that improve results on the SemEval dataset by using attribute engineering techniques to find optimal feature subsets, and on trying different hierarchical configurations to analyze their performance against the traditional one-versus-all approach. We propose two alternative strategies: on the one hand, to improve the EHU-ALM system without changing its architecture, and, on the other, to improve the system by adapting it to a hierarchical configuration. To build the new models we describe and use distinct attribute engineering, data preprocessing, and machine learning techniques.
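
One common attribute engineering technique of the kind described, greedy forward feature-subset selection, can be sketched as follows; the leave-one-out 1-NN scorer and the toy data are illustrative assumptions, not the thesis's actual setup:

```python
# Sketch of greedy forward feature-subset selection: add one feature at a
# time, keeping it only if it improves a validation score. The 1-NN
# leave-one-out scorer and the data are invented for illustration.

def accuracy(train, labels, feats):
    """1-NN leave-one-out accuracy restricted to the chosen features."""
    def dist(a, b):
        return sum((a[f] - b[f]) ** 2 for f in feats)
    correct = 0
    for i, x in enumerate(train):
        j = min((k for k in range(len(train)) if k != i),
                key=lambda k: dist(x, train[k]))
        correct += labels[j] == labels[i]
    return correct / len(train)

def forward_select(train, labels, n_feats):
    chosen, remaining, best = [], set(range(n_feats)), 0.0
    while remaining:
        cand = max(remaining,
                   key=lambda f: accuracy(train, labels, chosen + [f]))
        score = accuracy(train, labels, chosen + [cand])
        if score <= best:           # stop when no feature helps any more
            break
        best, chosen = score, chosen + [cand]
        remaining.remove(cand)
    return chosen

# Feature 0 separates the classes; feature 1 is pure noise.
X = [[0.0, 9.1], [0.2, 0.3], [0.1, 5.7], [5.0, 5.2], [5.1, 0.1], [4.9, 9.0]]
y = [0, 0, 0, 1, 1, 1]
print(forward_select(X, y, 2))  # → [0]
```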

Relevance:

60.00%

Publisher:

Abstract:

Several research studies have recently been initiated to investigate the use of construction site images for automated infrastructure inspection, progress monitoring, etc. In these studies, it is always necessary to extract material regions (concrete or steel) from the images. Existing methods make use of a material's characteristic color/texture ranges for material information retrieval, but they do not sufficiently discuss how to find appropriate ranges. As a result, users have to define them by themselves, which is difficult for those without an image processing background. This paper presents a novel method of identifying concrete material regions using machine learning techniques. Under the method, each construction site image is first divided into regions through image segmentation. Then, the visual features of each region are calculated and classified with a pre-trained classifier. The output value determines whether the region is composed of concrete or not. The method was implemented in C++ and tested on hundreds of construction site images. The results were compared with manual classification results to demonstrate the method's validity.
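
The described pipeline (segment, compute visual features, classify with a pre-trained classifier) can be sketched as follows; the nearest-centroid classifier and the RGB values stand in for the paper's trained model and real imagery:

```python
# Sketch of the region-classification pipeline: per-region visual features
# (here just mean colour) fed to a pre-trained classifier (here a
# nearest-centroid stand-in). Centroids and pixel values are invented.

def mean_colour(region):
    """Visual feature: average (R, G, B) over a region's pixels."""
    n = len(region)
    return tuple(sum(p[c] for p in region) / n for c in range(3))

def classify(feature, centroids):
    """Label of the nearest class centroid in feature space."""
    return min(centroids, key=lambda lbl: sum(
        (a - b) ** 2 for a, b in zip(centroids[lbl], feature)))

# 'Trained' centroids: concrete is greyish, the other class darker/bluish.
centroids = {"concrete": (150.0, 150.0, 145.0), "other": (60.0, 70.0, 110.0)}

region = [(148, 152, 143), (155, 149, 147), (151, 150, 144)]  # greyish pixels
print(classify(mean_colour(region), centroids))  # → "concrete"
```

A real system would use richer texture/colour features and a trained classifier, but the division of labour (segmentation → features → classifier) is the same.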

Relevance:

60.00%

Publisher:

Abstract:

Many visual datasets are traditionally used to analyze the performance of different learning techniques. The evaluation is usually done within each dataset, so it is questionable whether such results are a reliable indicator of true generalization ability. We propose here an algorithm to exploit existing data resources when learning on a new multiclass problem. Our main idea is to identify an image representation that decomposes orthogonally into two subspaces: a part specific to each dataset, and a part generic to, and therefore shared between, all the considered source sets. This allows us to use the generic representation as unbiased reference knowledge for a novel classification task. By casting the method in the multi-view setting, we also make it possible to use different features for different databases. We call the algorithm MUST, Multitask Unaligned Shared knowledge Transfer. Through extensive experiments on five public datasets, we show that MUST consistently improves cross-dataset generalization performance. © 2013 Springer-Verlag.
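
The flavour of the orthogonal decomposition can be shown with a much simplified sketch: approximate a dataset's specific component by the direction of its mean offset from the global mean and project it out, keeping a shared remainder. This illustrates the idea only, not the paper's actual optimisation:

```python
# Simplified illustration of splitting features into a dataset-specific
# part and a shared remainder: project out the dataset's mean-offset
# direction. Toy 2-D data; not MUST's real objective.

def vsub(a, b): return [x - y for x, y in zip(a, b)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def mean(rows):
    n = len(rows)
    return [sum(r[j] for r in rows) / n for j in range(len(rows[0]))]

def remove_specific(x, bias_dir):
    """Project out the dataset-specific direction from x."""
    nb = dot(bias_dir, bias_dir)
    if nb == 0:
        return list(x)
    c = dot(x, bias_dir) / nb
    return [xi - c * bi for xi, bi in zip(x, bias_dir)]

# Two toy datasets measuring the same concept; dataset B is shifted along
# direction (0, 1) by an acquisition bias.
A = [[1.0, 0.0], [2.0, 0.0]]
B = [[1.0, 5.0], [2.0, 5.0]]
g = mean(A + B)
bias_B = vsub(mean(B), g)            # B's specific offset direction
shared = [remove_specific(x, bias_B) for x in B]
print(shared)  # → [[1.0, 0.0], [2.0, 0.0]], matching dataset A
```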

Relevance:

60.00%

Publisher:

Abstract:

A learning-based framework is proposed for estimating human body pose from a single image. Given a differentiable function that maps from pose space to image feature space, the goal is to invert the process: estimate the pose given only image features. The inversion is an ill-posed problem, as the inverse mapping is one-to-many. Hence multiple solutions exist, and it is desirable to restrict the solution space to a smaller subset of feasible solutions. For example, not all human body poses are feasible, due to anthropometric constraints. Since the space of feasible solutions may not admit a closed-form description, the proposed framework exploits machine learning techniques to learn an approximation that is smoothly parameterized over such a space. One such technique is Gaussian Process Latent Variable Modelling. Scaled conjugate gradient is then used to find the best matching pose in the space of feasible solutions for a given input image. The formulation allows easy incorporation of various constraints, e.g. temporal consistency and anthropometric constraints. The performance of the proposed approach is evaluated on the task of upper-body pose estimation from silhouettes and compared with the Specialized Mapping Architecture. The estimation accuracy of the Specialized Mapping Architecture is at least one standard deviation worse than that of the proposed approach in experiments with synthetic data. In experiments with real video of humans performing gestures, the proposed approach produces qualitatively better estimation results.
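
The inversion step can be illustrated with a toy example: search a constrained latent range for the pose whose predicted features best match the observation. A coarse grid search over one angle stands in for scaled conjugate gradient and the GPLVM, and the forward map is invented:

```python
# Toy illustration of inverting a pose→feature map: minimise the feature
# mismatch over a feasible pose range (an anthropometric-style constraint).
# Grid search substitutes for scaled conjugate gradient; the forward map
# is an invented stand-in for learned silhouette features.
import math

def forward(theta):
    """Toy pose→feature map: 2-D 'silhouette features' of an arm angle."""
    return (math.cos(theta), math.sin(theta))

def estimate_pose(features, lo=0.0, hi=math.pi / 2, steps=1000):
    """Search the feasible range [lo, hi] for the best-matching pose."""
    def err(theta):
        f = forward(theta)
        return sum((a - b) ** 2 for a, b in zip(f, features))
    return min((lo + (hi - lo) * i / steps for i in range(steps + 1)),
               key=err)

observed = forward(0.7)                   # features of true pose, 0.7 rad
print(round(estimate_pose(observed), 2))  # → 0.7
```

Restricting the search interval is the toy analogue of confining solutions to the learned space of feasible poses.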

Relevance:

60.00%

Publisher:

Abstract:

The electroencephalogram (EEG) is a medical technology used in the monitoring of the brain and in the diagnosis of many neurological illnesses. Although coarse in its precision, the EEG is a non-invasive tool that requires minimal set-up time, and is suitably unobtrusive and mobile to allow continuous monitoring of the patient, either in clinical or domestic environments. Consequently, the EEG is the current tool of choice for continuous brain monitoring where temporal resolution, ease of use and mobility are important. Traditionally, EEG data are examined by a trained clinician who identifies neurological events of interest. However, recent advances in signal processing and machine learning techniques have allowed the automated detection of neurological events for many medical applications. In doing so, the burden of work on the clinician has been significantly reduced, improving the response time to illness and allowing the relevant medical treatment to be administered within minutes rather than hours. However, as typical EEG signals are of the order of microvolts (μV), contamination by signals arising from sources other than the brain is frequent. These extra-cerebral sources, known as artefacts, can significantly distort the EEG signal, making its interpretation difficult, and can dramatically degrade automatic neurological event detection performance. This thesis therefore contributes to the further improvement of automated neurological event detection systems by identifying some of the major obstacles to deploying these EEG systems in ambulatory and clinical environments, so that EEG technologies can emerge from the laboratory towards real-world settings, where they can have a real impact on the lives of patients.
In this context, the thesis tackles three major problems in EEG monitoring, namely: (i) head-movement artefacts in ambulatory EEG, (ii) the high number of false detections in state-of-the-art automated epileptiform activity detection systems and (iii) false detections in state-of-the-art automated neonatal seizure detection systems. To accomplish this, the thesis employs a wide range of statistical, signal processing and machine learning techniques drawn from mathematics, engineering and computer science. The first body of work outlined in this thesis proposes a system to automatically detect head-movement artefacts in ambulatory EEG, using supervised machine learning classifiers. The resulting head-movement artefact detection system is the first of its kind and offers accurate detection of head-movement artefacts in ambulatory EEG. Subsequently, additional physiological signals, in the form of gyroscopes, are used to detect head movements and thereby bring additional information to the head-movement artefact detection task. A framework for combining EEG and gyroscope signals is then developed, offering improved head-movement artefact detection. The artefact detection methods developed for ambulatory EEG are subsequently adapted for use in an automated epileptiform activity detection system. Information from support vector machine classifiers used to detect epileptiform activity is fused with information from artefact-specific detection classifiers in order to significantly reduce the number of false detections in the epileptiform activity detection system. By this means, epileptiform activity detection that compares favourably with other state-of-the-art systems is achieved.
Finally, the problem of false detections in automated neonatal seizure detection is approached in an alternative manner: blind source separation techniques, complemented with information from additional physiological signals, are used to remove respiration artefacts from the EEG. Using these methods, some encouraging advances have been made in detecting and removing respiration artefacts from the neonatal EEG, and in doing so the performance of the underlying diagnostic technology is improved, bringing its deployment in the real-world clinical domain one step closer.
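
The fusion of EEG-derived and gyroscope-derived evidence can be sketched with a simple probabilistic combination; the noisy-OR rule and thresholds are illustrative choices, not the thesis's trained fusion framework:

```python
# Illustrative fusion of two artefact detectors: an EEG-based probability
# and a gyroscope-based movement probability combined with a noisy-OR
# rule, so the extra physiological channel can confirm weak EEG evidence.
# Numbers and threshold are invented.

def fuse(p_eeg, p_gyro):
    """Noisy-OR fusion: artefact if either source gives strong evidence."""
    return 1.0 - (1.0 - p_eeg) * (1.0 - p_gyro)

def is_artefact(p_eeg, p_gyro, threshold=0.5):
    return fuse(p_eeg, p_gyro) >= threshold

print(is_artefact(0.30, 0.10))  # weak evidence from both → False
print(is_artefact(0.30, 0.60))  # gyroscope confirms movement → True
```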

Relevance:

60.00%

Publisher:

Abstract:

As more diagnostic testing options become available to physicians, it becomes more difficult to combine various types of medical information in order to optimize the overall diagnosis. To improve diagnostic performance, here we introduce an approach for optimizing a decision-fusion technique to combine heterogeneous information, such as from different modalities, feature categories, or institutions. For classifier comparison we used two performance metrics: the area under the receiver operating characteristic (ROC) curve (AUC) and the normalized partial area under the curve (pAUC). This study used four classifiers: linear discriminant analysis (LDA), an artificial neural network (ANN), and two variants of our decision-fusion technique, AUC-optimized (DF-A) and pAUC-optimized (DF-P) decision fusion. We applied each of these classifiers with 100-fold cross-validation to two heterogeneous breast cancer data sets: one of mass lesion features and a much more challenging one of microcalcification lesion features. For the calcification data set, DF-A outperformed the other classifiers in terms of AUC (p < 0.02) and achieved AUC = 0.85 +/- 0.01. DF-P surpassed the other classifiers in terms of pAUC (p < 0.01) and reached pAUC = 0.38 +/- 0.02. For the mass data set, DF-A outperformed both the ANN and the LDA (p < 0.04) and achieved AUC = 0.94 +/- 0.01. Although for this data set there were no statistically significant differences among the classifiers' pAUC values (pAUC = 0.57 +/- 0.07 to 0.67 +/- 0.05, p > 0.10), DF-P did significantly improve specificity versus the LDA at both 98% and 100% sensitivity (p < 0.04). In conclusion, decision fusion directly optimized clinically significant performance measures, such as AUC and pAUC, and sometimes outperformed two well-known machine-learning techniques when applied to two different breast cancer data sets.
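
The AUC-optimised decision-fusion idea can be sketched as choosing a weight for combining two classifiers' scores so as to maximise the Mann–Whitney AUC; the scores below are invented:

```python
# Sketch of AUC-optimised decision fusion: pick the weight for a convex
# combination of two classifiers' scores that maximises AUC on training
# data. AUC is computed as the Mann-Whitney statistic; scores are invented.

def auc(scores, labels):
    """AUC as the probability a positive outranks a negative (ties = 0.5)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def fuse(a, b, w):
    return [w * x + (1 - w) * y for x, y in zip(a, b)]

def best_weight(a, b, labels, steps=100):
    return max((i / steps for i in range(steps + 1)),
               key=lambda w: auc(fuse(a, b, w), labels))

# Two imperfect classifiers whose errors differ from case to case.
s1 = [0.9, 0.2, 0.8, 0.3, 0.4, 0.7]
s2 = [0.6, 0.7, 0.9, 0.1, 0.2, 0.3]
y  = [1,   1,   1,   0,   0,   0]
w = best_weight(s1, s2, y)
print(auc(fuse(s1, s2, w), y) >= max(auc(s1, y), auc(s2, y)))  # → True
```

Because the weight search includes w = 0 and w = 1, the fused AUC can never fall below the better single classifier on the data used for fitting; pAUC optimisation works the same way with the partial-area metric as the objective.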

Relevance:

60.00%

Publisher:

Abstract:

[GER] This article reflects on the Integrated Action Model developed by Thomas Martens and Jürgen Rost, transfers it to learning processes, extends it theoretically and tests it empirically by way of example. First, the theoretical potential of the model is demonstrated by applying it to learning processes. Then the typical processes of the three action phases are presented, in particular the functional-logical integration of conscious and unconscious as well as cognitive and affective processes. Finally, an empirical study is presented that examines the learning of statistics in higher education. Using mixture distribution models, seven learner types are identified, corresponding to the respective motivational, intentional and volitional patterns of the genesis of learning and action. (DIPF/Orig.)
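
The mixture-distribution approach can be illustrated with a minimal EM fit of a two-component 1-D Gaussian mixture (two components standing in for the study's seven learner types); the 'motivation scores' are invented:

```python
# Minimal EM for a two-component 1-D Gaussian mixture with shared variance,
# as a stand-in for the mixture-distribution models used to identify
# learner types. Data are invented illustrative scores.
import math

def em_gmm2(xs, iters=50):
    m1, m2, s, w = min(xs), max(xs), 1.0, 0.5   # crude initialisation
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point.
        r = []
        for x in xs:
            p1 = w * math.exp(-((x - m1) ** 2) / (2 * s * s))
            p2 = (1 - w) * math.exp(-((x - m2) ** 2) / (2 * s * s))
            r.append(p1 / (p1 + p2))
        # M-step: update means, shared variance and mixing weight.
        n1 = sum(r)
        m1 = sum(ri * x for ri, x in zip(r, xs)) / n1
        m2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / (len(xs) - n1)
        var = sum(ri * (x - m1) ** 2 + (1 - ri) * (x - m2) ** 2
                  for ri, x in zip(r, xs)) / len(xs)
        s = math.sqrt(max(var, 1e-6))
        w = n1 / len(xs)
    return m1, m2

# Two well-separated 'learner types' in a motivation score.
data = [1.0, 1.2, 0.9, 1.1, 5.0, 5.2, 4.9, 5.1]
m1, m2 = em_gmm2(data)
print(m1, m2)  # means near 1.05 and 5.05: the two recovered types
```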

Relevance:

60.00%

Publisher:

Abstract:

Ecosystems consist of complex dynamic interactions among species and the environment, the understanding of which has implications for predicting the environmental response to changes in climate and biodiversity. With the recent adoption of more explorative tools, such as Bayesian networks, in predictive ecology, few assumptions need to be made about the data, and complex, spatially varying interactions can be recovered from collected field data. In this study, we compare Bayesian network modelling approaches that account for latent effects to reveal species dynamics for 7 geographically and temporally varied areas within the North Sea. We also apply structure learning techniques to identify functional relationships, such as prey–predator relationships, between trophic groups of species that vary across space and time. We examine whether a general hidden variable can reflect overall changes in the trophic dynamics of each spatial system and whether the inclusion of a specific hidden variable can model an unmeasured group of species. The general hidden variable appears to capture changes in the variance of the biomass of different groups of species. Models that include both general and specific hidden variables identified similarities with the underlying food web dynamics and modelled spatially unmeasured effects. We predict the biomass of the trophic groups and find that predictive accuracy varies with the models' features and across the different spatial areas; we therefore propose a model that allows for spatial autocorrelation and two hidden variables. Our proposed model produced novel insights into this ecosystem's dynamics and ecological interactions, mainly because we account for the heterogeneous nature of the driving factors within each area and their changes over time.
Our findings demonstrate that accounting for additional sources of variation, by combining structure learning from data with experts' knowledge of the model architecture, has the potential to yield deeper insights into the structure and stability of ecosystems. Finally, we were able to discover meaningful functional networks that were spatially and temporally differentiated, with the particular mechanisms varying from trophic associations to interactions with climate and commercial fisheries.
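
A much simplified stand-in for data-driven structure learning is to score dependence between discretised biomass series with pairwise mutual information and keep the strongest links as candidate functional relationships; the series below are invented:

```python
# Simplified illustration of structure discovery: rank candidate links
# between discretised biomass series by pairwise mutual information.
# Real Bayesian-network structure learning scores whole graphs; the
# series here are invented toy data.
import math
from collections import Counter

def mutual_info(a, b):
    """Mutual information (nats) between two discrete series."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum((c / n) * math.log((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in pab.items())

# Discretised biomass (0 = low, 1 = high): 'prey' tracks 'predator',
# while 'unrelated' varies independently.
prey      = [0, 0, 1, 1, 0, 1, 0, 1]
predator  = [0, 0, 1, 1, 0, 1, 1, 1]
unrelated = [1, 0, 0, 1, 1, 0, 1, 0]

print(mutual_info(prey, predator) > mutual_info(prey, unrelated))  # → True
```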

Relevance:

60.00%

Publisher:

Abstract:

We present novel topological mappings between graphs, trees and generalized trees, that is, between structured objects with different properties. The two major contributions of this paper are, first, to clarify the relation between graphs, trees and generalized trees, a graph class recently introduced; second, these transformations provide a unique opportunity to transform structured objects into a representation that may be beneficial for processing, e.g., by machine learning techniques for graph classification. (c) 2006 Elsevier Inc. All rights reserved.
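
One simple graph-to-tree mapping of the general kind such transformations enable is the extraction of a breadth-first spanning tree, which turns a graph into a tree representation that tree-based learning methods can process; the example graph is invented:

```python
# Sketch of one elementary graph→tree mapping: a BFS spanning tree rooted
# at a chosen vertex, turning a cyclic graph into a tree structure.
# (The paper's mappings to generalized trees are richer; this is only an
# illustration of the transformation idea.)
from collections import deque

def bfs_tree(adj, root):
    """Return the BFS spanning tree as a parent map {child: parent}."""
    parent, queue, seen = {}, deque([root]), {root}
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                parent[v] = u
                queue.append(v)
    return parent

# A graph with a cycle a-b-c-a plus a pendant vertex d.
graph = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
print(bfs_tree(graph, "a"))  # → {'b': 'a', 'c': 'a', 'd': 'c'}
```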