65 results for Machine learning, Keras, TensorFlow, Data parallelism, Model parallelism, Container, Docker


Relevance: 100.00%

Abstract:

In recent years, kernel methods have proven to be very powerful tools in many application domains in general and in remote sensing image classification in particular. The special characteristics of remote sensing images (high dimension, few labeled samples and different noise sources) are dealt with efficiently by kernel machines. In this paper, we propose the use of structured output learning to improve kernel-based remote sensing image classification. Structured output learning is concerned with the design of machine learning algorithms that not only implement an input-output mapping, but also take into account the relations between output labels, thus generalizing unstructured kernel methods. We analyze the framework and introduce it to the remote sensing community. Output similarity is here encoded into SVM classifiers by modifying the model loss function and the kernel function, either independently or jointly. Experiments on a very high resolution (VHR) image classification problem show promising results and open a wide field of research with structured output kernel methods.

Relevance: 100.00%

Abstract:

Neuroimaging techniques provide valuable tools for diagnosing Alzheimer's disease (AD), monitoring disease progression and evaluating responses to treatment. There is currently a wide array of techniques available including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and, for recording electrical brain activity, electroencephalography (EEG). The choice of technique depends on the contrast between tissues of interest, spatial resolution, temporal resolution, requirements for functional data and the probable number of scans required. For example, while PET, CT and MRI can be used to differentiate between AD and other dementias, MRI is safer and provides better contrast of soft tissues. Neuroimaging is a technique spanning many disciplines and requires effective communication between doctors requesting a scan of a patient or group of patients and those with technical expertise. Consideration and discussion of the most suitable type of scan and the necessary settings to achieve the best results will help ensure appropriate techniques are chosen and used effectively. Neuroimaging techniques are currently expanding understanding of the structural and functional changes that occur in dementia. Further research may allow identification of early neurological signs of AD, before clinical symptoms are evident, providing the opportunity to test preventative therapies. Combining MRI and machine learning techniques may be a powerful approach to improve diagnosis of AD and to predict clinical outcomes.

Relevance: 100.00%

Abstract:

Among various advantages, their small size makes model organisms preferred subjects of investigation. Yet, even in model systems, detailed analysis of numerous developmental processes at the cellular level is severely hampered by their scale. For instance, secondary growth of Arabidopsis hypocotyls creates a radial pattern of highly specialized tissues comprising several thousand cells, starting from a few dozen. This dynamic process is difficult to follow because of its scale and because it can only be investigated invasively, precluding comprehensive understanding of the cell proliferation, differentiation, and patterning events involved. To overcome this limitation, we established an automated quantitative histology approach. We acquired hypocotyl cross-sections from tiled high-resolution images and extracted their information content using custom high-throughput image processing and segmentation. Coupled with automated cell type recognition through machine learning, we could establish a cellular-resolution atlas that reveals vascular morphodynamics during secondary growth, for example, equidistant phloem pole formation. DOI: http://dx.doi.org/10.7554/eLife.01567.001.

Relevance: 100.00%

Abstract:

Background: Current guidelines underline the limitations of existing instruments to assess fitness to drive and the poor adaptability of batteries of neuropsychological tests to primary care settings. Aims: To provide a free, reliable, transparent, computer-based instrument capable of detecting effects of age or drugs on visual processing and cognitive functions. Methods: Relying on systematic reviews of neuropsychological tests and driving performance, we conceived four new computed tasks measuring: visual processing (Task 1), movement attention shift (Task 2), executive response, alerting and orientation gain (Task 3), and spatial memory (Task 4). We then planned five studies to test MedDrive's reliability and validity. Study 1 defined instructions and learning functions, collecting data from 105 senior drivers attending an automobile club course. Study 2 assessed concurrent validity for detecting minor cognitive impairment (MCI) against the useful field of view (UFOV) on 120 new senior drivers. Study 3 collected data from 200 healthy drivers aged 20-90 to model age-related normal cognitive decline. Study 4 measured MedDrive's reliability by having 21 healthy volunteers repeat tests five times. Study 5 tested MedDrive's responsiveness to alcohol in a randomised, double-blinded, placebo-controlled, crossover, dose-response validation trial including 20 young healthy volunteers. Results: Instructions were well understood and accepted by all senior drivers. Measures of visual processing (Task 1) showed better performance than the UFOV in detecting MCI (ROC 0.770 vs. 0.620; p=0.048). MedDrive was capable of explaining 43.4% of changes occurring with natural cognitive decline. In young healthy drivers, learning effects became negligible from the third session onwards for all tasks except dual tasking (ICC=0.769). All measures except alerting and orientation gain were affected by blood alcohol concentrations. Finally, MedDrive was able to explain 29.3% of potential causes of swerving on the driving simulator. Discussion and conclusions: MedDrive demonstrates improved performance compared to existing computed neuropsychological tasks. It shows promising results both for clinical and research purposes.

Relevance: 100.00%

Abstract:

This paper presents and discusses the use of Bayesian procedures - introduced through the use of Bayesian networks in Part I of this series of papers - for 'learning' probabilities from data. The discussion will relate to a set of real data on characteristics of black toners commonly used in printing and copying devices. Particular attention is drawn to the incorporation of the proposed procedures as an integral part in probabilistic inference schemes (notably in the form of Bayesian networks) that are intended to address uncertainties related to particular propositions of interest (e.g., whether or not a sample originates from a particular source). The conceptual tenets of the proposed methodologies are presented along with aspects of their practical implementation using currently available Bayesian network software.
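
As a toy illustration of 'learning' probabilities from data in the conjugate Bayesian setting that Bayesian network software typically implements, the sketch below updates a Dirichlet prior over a three-state node with observed counts; the counts and the uniform prior are hypothetical, not drawn from the paper's toner data.

```python
import numpy as np

# Hypothetical example: learn P(toner type) for a three-state node from
# observed counts, with a conjugate (uniform) Dirichlet prior.
alpha_prior = np.ones(3)                 # alpha = 1 per category
counts = np.array([12, 30, 8])           # observed cases per toner type

alpha_post = alpha_prior + counts        # Dirichlet posterior parameters
p_mean = alpha_post / alpha_post.sum()   # posterior-mean probabilities
```

The posterior-mean probabilities are what would be plugged into the node's conditional probability table in a Bayesian network tool.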

Relevance: 100.00%

Abstract:

We present a novel filtering method for multispectral satellite image classification. The proposed method learns a set of spatial filters that maximize the class separability of a binary support vector machine (SVM) through a gradient descent approach. Regularization issues are discussed in detail, and a Frobenius-norm regularization is proposed to efficiently exclude uninformative filter coefficients. Experiments carried out on multiclass one-against-all classification and target detection show the capabilities of the learned spatial filters.
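
A heavily simplified sketch of the idea of learning filter coefficients by gradient descent under a Frobenius-style norm penalty that shrinks uninformative coefficients. The SVM criterion of the paper is replaced here by a logistic surrogate, and the 3-pixel "texture" patches are synthetic; everything in this block is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two hypothetical "texture" classes of 3-pixel patches:
# class 0 ~ flat patches, class 1 ~ high-frequency (+, -, +) patches.
X = np.vstack([
    rng.normal([1.0, 1.0, 1.0], 0.2, size=(100, 3)),
    rng.normal([1.0, -1.0, 1.0], 0.2, size=(100, 3)),
])
y = np.repeat([0.0, 1.0], 100)

# Filter coefficients f learned by gradient descent on a logistic surrogate
# of class separability, with an l2 (Frobenius-style) penalty that shrinks
# uninformative coefficients (here: the two flat, non-discriminative taps).
f, b, lam, lr = np.zeros(3), 0.0, 0.1, 0.1
for step in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ f + b)))       # filter response -> probability
    grad_f = X.T @ (p - y) / len(y) + lam * f    # logistic gradient + penalty
    grad_b = float(np.mean(p - y))
    f -= lr * grad_f
    b -= lr * grad_b

pred = (1.0 / (1.0 + np.exp(-(X @ f + b))) > 0.5).astype(float)
accuracy = float((pred == y).mean())
```

After training, the middle tap (the only discriminative one in this toy setup) dominates the filter, while the penalized flat taps stay near zero.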

Relevance: 100.00%

Abstract:

Nowadays, the joint exploitation of images acquired daily by remote sensing instruments and of images available from archives allows a detailed monitoring of the transitions occurring at the surface of the Earth. These modifications of the land cover generate spectral discrepancies that can be detected via the analysis of remote sensing images. Independently of the origin of the images and of the type of surface change, a correct processing of such data implies the adoption of flexible, robust and possibly nonlinear methods, to correctly account for the complex statistical relationships characterizing the pixels of the images. This Thesis deals with the development and the application of advanced statistical methods for multi-temporal optical remote sensing image processing tasks. Three different families of machine learning models have been explored and fundamental solutions for change detection problems are provided. In the first part, change detection with user supervision has been considered. In a first application, a nonlinear classifier has been applied with the intent of precisely delineating flooded regions from a pair of images. In a second case study, the spatial context of each pixel has been injected into another nonlinear classifier to obtain a precise mapping of new urban structures. In both cases, the user provides the classifier with examples of what he believes has changed or not. In the second part, a completely automatic and unsupervised method for precise binary detection of changes has been proposed. The technique allows a very accurate mapping without any user intervention, resulting particularly useful when readiness and reaction times of the system are a crucial constraint. In the third part, the problem of statistical distributions shifting between acquisitions is studied. Two approaches to transforming the pair of bi-temporal images and reducing their differences unrelated to changes in land cover are studied. The methods align the distributions of the images, so that the pixel-wise comparison can be carried out with higher accuracy. Furthermore, the second method can deal with images from different sensors, regardless of the dimensionality of the data or the spectral information content. This opens the doors to possible solutions for a crucial problem in the field: detecting changes when the images have been acquired by two different sensors.
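
The distribution-alignment step this thesis abstract describes can be illustrated with a classic, generic technique, histogram matching between two single-band acquisitions; this is a stand-in for illustration, not necessarily the method developed in the thesis, and the synthetic "images" below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical single-band acquisitions of the same scene: the new image has
# a gain/offset radiometric shift with respect to the reference image.
img_ref = rng.gamma(shape=2.0, scale=30.0, size=10_000)
img_new = 1.4 * img_ref + 25.0 + rng.normal(0.0, 2.0, size=img_ref.size)

def match_histogram(src, ref):
    """Remap src so its empirical distribution matches ref's (quantile map)."""
    out = np.empty_like(src)
    out[np.argsort(src)] = np.sort(ref)   # rank-to-rank value assignment
    return out

aligned = match_histogram(img_new, img_ref)
```

After matching, pixel-wise comparison between `aligned` and `img_ref` is no longer dominated by the global radiometric shift, which is the point of the alignment step.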

Relevance: 100.00%

Abstract:

Machine learning and pattern recognition methods have been used to diagnose Alzheimer's disease (AD) and mild cognitive impairment (MCI) from individual MRI scans. Another application of such methods is to predict clinical scores from individual scans. Using relevance vector regression (RVR), we predicted individuals' performances on established tests from their MRI T1 weighted image in two independent data sets. From Mayo Clinic, 73 probable AD patients and 91 cognitively normal (CN) controls completed the Mini-Mental State Examination (MMSE), Dementia Rating Scale (DRS), and Auditory Verbal Learning Test (AVLT) within 3 months of their scan. Baseline MRIs from the Alzheimer's Disease Neuroimaging Initiative (ADNI) comprised the other data set; 113 AD, 351 MCI, and 122 CN subjects completed the MMSE and Alzheimer's Disease Assessment Scale-Cognitive subtest (ADAS-Cog) and 39 AD, 92 MCI, and 32 CN ADNI subjects completed MMSE, ADAS-Cog, and AVLT. Predicted and actual clinical scores were highly correlated for the MMSE, DRS, and ADAS-Cog tests (P<0.0001). Training with one data set and testing with another demonstrated stability between data sets. DRS, MMSE, and ADAS-Cog correlated better than AVLT with whole brain grey matter changes associated with AD. This result underscores their utility for screening and tracking disease. RVR offers a novel way to measure interactions between structural changes and neuropsychological tests beyond that of univariate methods. In clinical practice, we envision using RVR to aid in diagnosis and predict clinical outcome.
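
Relevance vector regression itself is not in scikit-learn, but the ARD (automatic relevance determination) prior behind it is; the sketch below uses `ARDRegression` as an illustrative stand-in, predicting a synthetic "clinical score" from synthetic "voxel" features. All data, dimensions and the sparse ground-truth weights are made up.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(3)
# Hypothetical stand-in data: 200 "scans" x 50 voxel features; the clinical
# score depends on only a few voxels, the sparse setting where ARD/RVM-style
# relevance priors are at home.
X = rng.normal(size=(200, 50))
true_w = np.zeros(50)
true_w[:3] = [2.0, -1.5, 1.0]
score = X @ true_w + rng.normal(0.0, 0.5, size=200)

model = ARDRegression().fit(X[:150], score[:150])   # train on 150 scans
pred = model.predict(X[150:])                        # predict held-out scores
r = float(np.corrcoef(pred, score[150:])[0, 1])      # predicted vs. actual
```

The correlation `r` between predicted and actual held-out scores mirrors the evaluation reported in the abstract.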

Relevance: 100.00%

Abstract:

Among the types of remote sensing acquisitions, optical images are certainly one of the most widely relied upon data sources for Earth observation. They provide detailed measurements of the electromagnetic radiation reflected or emitted by each pixel in the scene. Through a process termed supervised land-cover classification, this makes it possible to distinguish objects at the surface of our planet automatically yet accurately. In this respect, when producing a land-cover map of the surveyed area, the availability of training examples representative of each thematic class is crucial for the success of the classification procedure. However, in real applications, due to several constraints on the sample collection process, labeled pixels are usually scarce. When analyzing an image for which those key samples are unavailable, a viable solution consists in resorting to the ground truth data of other previously acquired images. This option is attractive, but several factors such as atmospheric, ground and acquisition conditions can cause radiometric differences between the images, therefore hindering the transfer of knowledge from one image to another. The goal of this Thesis is to supply remote sensing image analysts with suitable processing techniques to ensure a robust portability of the classification models across different images. The ultimate purpose is to map the land-cover classes over large spatial and temporal extents with minimal ground information. To overcome, or simply quantify, the observed shifts in the statistical distribution of the spectra of the materials, we study four approaches drawn from the field of machine learning. First, we propose a strategy to intelligently sample the image of interest so as to collect labels only for the most useful pixels. This iterative routine is based on a continual evaluation of the pertinence to the new image of the initial training data, which actually belong to a different image. Second, an approach to reduce the radiometric differences among the images by projecting the respective pixels into a common new data space is presented. We analyze a kernel-based feature extraction framework suited for such problems, showing that, after this relative normalization, the cross-image generalization abilities of a classifier are highly increased. Third, we test a new data-driven measure of distance between probability distributions to assess the distortions caused by differences in the acquisition geometry affecting series of multi-angle images. We also gauge the portability of classification models through the sequences. In both exercises, the efficacy of classic physically- and statistically-based normalization methods is discussed. Finally, we explore a new family of approaches based on sparse representations of the samples to reciprocally convert the data spaces of two images. The projection function bridging the images allows a synthesis of new pixels with more similar characteristics, ultimately facilitating land-cover mapping across images.
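
The first approach in this abstract (iteratively sampling the new image for the most useful pixels) resembles an active-learning loop. Below is a generic uncertainty-sampling sketch with a logistic classifier standing in for the actual model; the shifted synthetic "source" and "target" images and all parameters are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

def make_image(n, shift):
    # Hypothetical two-class "image": pixels as 2-band spectra; the target
    # image is radiometrically shifted with respect to the source image.
    X0 = rng.normal([0.0, 0.0], 1.0, size=(n, 2)) + shift
    X1 = rng.normal([3.0, 3.0], 1.0, size=(n, 2)) + shift
    return np.vstack([X0, X1]), np.repeat([0, 1], n)

Xs, ys = make_image(100, shift=0.0)   # source image: labels available
Xt, yt = make_image(500, shift=1.5)   # target image: labels queried on demand

X_train, y_train = Xs.copy(), ys.copy()
pool = np.arange(len(Xt))
for it in range(10):                  # query 5 pixels per iteration
    clf = LogisticRegression().fit(X_train, y_train)
    proba = clf.predict_proba(Xt[pool])
    margin = np.abs(proba[:, 0] - proba[:, 1])
    pick = pool[np.argsort(margin)[:5]]            # most uncertain pixels
    X_train = np.vstack([X_train, Xt[pick]])
    y_train = np.concatenate([y_train, yt[pick]])  # "ask the user" for labels
    pool = np.setdiff1d(pool, pick)

clf = LogisticRegression().fit(X_train, y_train)
acc = clf.score(Xt, yt)
```

Each iteration re-evaluates how pertinent the current training set is to the new image and queries only the pixels the model is least sure about, keeping the labeling effort minimal.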

Relevance: 100.00%

Abstract:

We show how nonlinear embedding algorithms popular for use with shallow semi-supervised learning techniques such as kernel methods can be applied to deep multilayer architectures, either as a regularizer at the output layer or on each layer of the architecture. This provides a simple alternative to existing approaches to deep learning, whilst yielding competitive error rates compared to those methods and to existing shallow semi-supervised techniques.
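
A minimal numeric sketch of an embedding penalty applied to a hidden layer, in the spirit of the regularizer described above: a Laplacian-eigenmap-style loss sum_ij A_ij ||h_i - h_j||^2 computed on a toy network. The data, the neighborhood graph and the single hidden layer are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(6, 4))              # mini-batch of 6 inputs
W1 = rng.normal(scale=0.5, size=(4, 3))  # one hidden layer of a toy network
H = np.tanh(X @ W1)                      # hidden representations h_i

# Neighborhood graph: A[i, j] = 1 if x_i and x_j should embed nearby
# (here, hypothetically, consecutive samples are neighbors).
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1.0

# Laplacian-eigenmap-style penalty on the hidden layer:
#   L_emb = sum_ij A_ij * ||h_i - h_j||^2
diff = H[:, None, :] - H[None, :, :]
L_emb = float(np.sum(A * np.sum(diff ** 2, axis=-1)))

# During training, this term would be added to the supervised loss:
#   total_loss = supervised_loss + lam * L_emb   (lam hypothetical)
```

The same quantity equals `2 * trace(H.T @ L @ H)` with graph Laplacian `L = D - A`, which is the form usually differentiated when the penalty is attached to a layer.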

Relevance: 100.00%

Abstract:

We present a new framework for large-scale data clustering. The main idea is to modify functional dimensionality reduction techniques to directly optimize over discrete labels using stochastic gradient descent. Compared to methods like spectral clustering, our approach solves a single optimization problem rather than an ad hoc two-stage optimization approach, does not require a matrix inversion, can easily encode prior knowledge in the set of implementable functions, and does not have an 'out-of-sample' problem. Experimental results on both artificial and real-world datasets show the usefulness of our approach.
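
The paper's discrete-label SGD objective is not reproduced here, but the flavor of stochastic-gradient clustering can be illustrated with Bottou-style online k-means on synthetic blobs; this is an illustrative stand-in, not the authors' method, and all data and settings are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)
# Three hypothetical well-separated blobs (600 points total).
X = np.vstack([rng.normal(c, 0.4, size=(200, 2)) for c in [(0, 0), (5, 0), (0, 5)]])
rng.shuffle(X)

# Farthest-point initialization, then Bottou-style online (stochastic-gradient)
# k-means: each sample pulls its nearest centroid toward it with a decaying,
# per-centroid step size.
k = 3
centroids = [X[0].copy()]
for _ in range(k - 1):
    d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centroids], axis=0)
    centroids.append(X[np.argmax(d2)].copy())
centroids = np.array(centroids)

counts = np.ones(k)
for epoch in range(5):
    for x in X:
        j = int(np.argmin(np.sum((centroids - x) ** 2, axis=1)))
        counts[j] += 1
        centroids[j] += (x - centroids[j]) / counts[j]   # SGD step, lr = 1/count

labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
inertia = float(np.min(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1).sum())
```

Like the paper's approach, this processes one sample at a time and never forms (let alone inverts) an affinity matrix, which is what makes stochastic updates attractive at large scale.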

Relevance: 100.00%

Abstract:

Remote sensing image processing is nowadays a mature research area. The techniques developed in the field allow many real-life applications with great societal value. For instance, urban monitoring, fire detection or flood prediction can have a great impact on economic and environmental issues. To attain such objectives, remote sensing has turned into a multidisciplinary field of science that embraces physics, signal theory, computer science, electronics, and communications. From a machine learning and signal/image processing point of view, all the applications are tackled under specific formalisms, such as classification and clustering, regression and function approximation, image coding, restoration and enhancement, source unmixing, data fusion or feature selection and extraction. This paper serves as a survey of methods and applications, and reviews the latest methodological advances in remote sensing image processing.

Relevance: 100.00%

Abstract:

Computational anatomy with magnetic resonance imaging (MRI) is well established as a noninvasive biomarker of Alzheimer's disease (AD); however, there is less certainty about its dependency on the staging of AD. We use classical group analyses and automated machine learning classification of standard structural MRI scans to investigate AD diagnostic accuracy from the preclinical phase to clinical dementia. Longitudinal data from the Alzheimer's Disease Neuroimaging Initiative were stratified into 4 groups according to clinical status: (1) AD patients; (2) mild cognitive impairment (MCI) converters; (3) MCI nonconverters; and (4) healthy controls. These groups were submitted to a support vector machine. The obtained classifier was significantly above chance level (62%) in detecting AD as early as 4 years before conversion from MCI. Voxel-based univariate tests confirmed the plausibility of our findings, detecting a distributed network of hippocampal-temporoparietal atrophy in AD patients. We also identified a subgroup of control subjects with brain structure and cognitive changes highly similar to those observed in AD. Our results indicate that computational anatomy can detect AD substantially earlier than suggested by current models. The demonstrated differential spatial pattern of atrophy between correctly and incorrectly classified AD patients challenges the assumption of a uniform pathophysiological process underlying clinically identified AD.
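
A schematic of the group-classification protocol described above (support vector machine, cross-validated, accuracy compared to chance level) on synthetic data; the "gray-matter" features, effect size and group sizes are hypothetical, not the ADNI data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(7)
# Hypothetical gray-matter features: 80 controls vs. 80 patients, with an
# atrophy-like mean shift in 10 of 100 "voxels"; chance level is 50%.
n, d = 80, 100
X_hc = rng.normal(0.0, 1.0, size=(n, d))
X_ad = rng.normal(0.0, 1.0, size=(n, d))
X_ad[:, :10] += 0.8
X = np.vstack([X_hc, X_ad])
y = np.repeat([0, 1], n)

# 5-fold cross-validated linear SVM, the typical computational-anatomy
# classification setup (parameters here are illustrative defaults).
acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
```

Comparing the cross-validated accuracy against the 50% chance level is the same logic as the abstract's claim of classification "significantly above the chance level".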

Relevance: 100.00%

Abstract:

Modeling the mechanisms that determine how humans and other agents choose among different behavioral and cognitive processes (be they strategies, routines, actions, or operators) represents a paramount theoretical stumbling block across disciplines, ranging from the cognitive and decision sciences to economics, biology, and machine learning. By using the cognitive and decision sciences as a case study, we provide an introduction to what is also known as the strategy selection problem. First, we explain why many researchers assume humans and other animals to come equipped with a repertoire of behavioral and cognitive processes. Second, we expose three challenges (descriptive, predictive, and prescriptive) that are common to all disciplines that aim to model the choice among these processes. Third, we give an overview of different approaches to strategy selection. These include cost-benefit, ecological, learning, memory, unified, connectionist, sequential sampling, and maximization approaches. We conclude by pointing to opportunities for future research and by stressing that the selection problem is far from being resolved.