818 results for Machine learning, Keras, TensorFlow, Data parallelism, Model parallelism, Container, Docker
Abstract:
The project consists of implementing an application that automatically generates ANSI C code from a state-machine diagram, specifically from the model produced in XMI (XML Metadata Interchange) format by the graphical software-design application ArgoUML 0.24, which is platform-independent, open-source, and free.
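No implementation details are given in the abstract, but the core idea can be sketched as follows. This is a minimal Python illustration; the XMI tag and attribute names are hypothetical simplifications (real ArgoUML XMI uses UML-namespaced elements):

```python
# Hypothetical sketch: emit an ANSI C state machine from a simplified XMI file.
# The "State"/"Transition" tags and their attributes are illustrative only,
# not ArgoUML's actual schema.
import xml.etree.ElementTree as ET

def generate_c(xmi_path):
    root = ET.parse(xmi_path).getroot()
    states = [s.get("name") for s in root.iter("State")]
    transitions = [(t.get("source"), t.get("trigger"), t.get("target"))
                   for t in root.iter("Transition")]

    lines = ["typedef enum { %s } state_t;" %
             ", ".join("ST_" + s.upper() for s in states),
             "",
             "state_t step(state_t current, const char *event) {",
             "    switch (current) {"]
    for src in states:
        lines.append("    case ST_%s:" % src.upper())
        for s, trig, dst in transitions:
            if s == src:
                lines.append('        if (strcmp(event, "%s") == 0) return ST_%s;'
                             % (trig, dst.upper()))
        lines.append("        break;")
    lines += ["    }", "    return current;", "}"]
    return "\n".join(lines)
```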
Diagnostic improvement of diffuse liver diseases using artificial intelligence techniques
Abstract:
Automatic diagnostic discrimination is an application of artificial intelligence techniques to solving clinical cases from imaging. Diffuse liver diseases are highly prevalent in the population and follow an insidious course, silent early in their progression. Early and effective diagnosis is necessary because many of these diseases progress to cirrhosis and liver cancer. The usual technique of choice for an accurate diagnosis is liver biopsy, an invasive procedure that is not free of contraindications. This project proposes an alternative non-invasive, contraindication-free method based on liver ultrasonography. The images are digitized and then analyzed using statistical and texture-analysis techniques. The results are validated against the pathology report. Finally, we apply artificial intelligence techniques such as Fuzzy k-Means and Support Vector Machines and compare their significance with the statistical analysis and the clinician's report. The results show that this technique is significantly valid and a promising non-invasive alternative for diagnosing chronic diffuse liver disease. Artificial intelligence classification techniques significantly improve diagnostic discrimination compared with the other statistical methods.
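As a rough illustration of the described pipeline (texture features from ultrasound images fed to an SVM classifier), a minimal sketch using scikit-image and scikit-learn might look like this; the feature set, patch extraction, and parameters are assumptions, not the study's actual protocol:

```python
# A minimal sketch (not the paper's pipeline): grey-level co-occurrence
# texture features computed on ultrasound patches, classified with an SVM.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

PROPS = ["contrast", "homogeneity", "energy", "correlation"]

def texture_features(patch_u8):
    """patch_u8: 2-D uint8 array cut from the liver region of a scan."""
    glcm = graycomatrix(patch_u8, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel() for p in PROPS])

def train_classifier(patches, labels):
    """labels: 0 = healthy, 1 = diffuse disease, from the pathology reports."""
    X = np.array([texture_features(p) for p in patches])
    return SVC(kernel="rbf", C=1.0).fit(X, labels)
```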
Abstract:
Reinforcement learning (RL) is a very suitable technique for robot learning, as it can learn in unknown environments and in real time. The main difficulties in adapting classic RL algorithms to robotic systems are the generalization problem and the correct observation of the Markovian state. This paper attempts to solve the generalization problem by proposing the semi-online neural-Q_learning algorithm (SONQL). The algorithm uses the classic Q_learning technique with two modifications. First, a neural network (NN) approximates the Q_function, allowing the use of continuous states and actions. Second, a database of the most representative learning samples accelerates and stabilizes the convergence. The term semi-online refers to the fact that the algorithm uses not only the current but also past learning samples. Nevertheless, the algorithm is able to learn in real time while the robot is interacting with the environment. The paper shows simulated results with the "mountain-car" benchmark and real results with an underwater robot in a target-following behavior.
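A minimal sketch of the two modifications, under the assumption of a discrete action set and with a linear approximator standing in for the paper's neural network; SONQL's actual database-management rules are more elaborate than this:

```python
# Minimal sketch of Q-learning with (i) a function approximator and
# (ii) a replay database, the two ingredients the SONQL abstract describes.
import random
import numpy as np

class ApproxQLearner:
    def __init__(self, n_features, n_actions, gamma=0.95, lr=0.01, buf_size=5000):
        self.W = np.zeros((n_actions, n_features))   # one weight row per action
        self.gamma, self.lr = gamma, lr
        self.buffer, self.buf_size = [], buf_size    # "database" of samples

    def q_values(self, state):
        return self.W @ state

    def act(self, state, eps=0.1):
        if random.random() < eps:
            return random.randrange(self.W.shape[0])
        return int(np.argmax(self.q_values(state)))

    def store(self, s, a, r, s_next):
        self.buffer.append((s, a, r, s_next))
        if len(self.buffer) > self.buf_size:
            self.buffer.pop(0)

    def replay(self, batch=32):
        # "semi-online": update from current *and* past samples.
        for s, a, r, s_next in random.sample(self.buffer,
                                             min(batch, len(self.buffer))):
            target = r + self.gamma * np.max(self.q_values(s_next))
            td_error = target - self.q_values(s)[a]
            self.W[a] += self.lr * td_error * s   # gradient step for action a
```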
Abstract:
Among various advantages, their small size makes model organisms preferred subjects of investigation. Yet even in model systems, detailed analysis of numerous developmental processes at the cellular level is severely hampered by their scale. For instance, secondary growth of Arabidopsis hypocotyls creates a radial pattern of highly specialized tissues comprising several thousand cells, starting from a few dozen. This dynamic process is difficult to follow because of its scale and because it can only be investigated invasively, precluding comprehensive understanding of the cell proliferation, differentiation, and patterning events involved. To overcome this limitation, we established an automated quantitative histology approach. We acquired hypocotyl cross-sections from tiled high-resolution images and extracted their information content using custom high-throughput image processing and segmentation. Coupled with automated cell type recognition through machine learning, we could establish a cellular-resolution atlas that reveals vascular morphodynamics during secondary growth, for example, equidistant phloem pole formation. DOI: http://dx.doi.org/10.7554/eLife.01567.001.
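The cell-type recognition step could, in outline, look like the following sketch; the geometric features and the random-forest learner are illustrative assumptions, not necessarily the study's actual choices:

```python
# Illustrative sketch of the "automated cell type recognition" step:
# per-cell geometric features from a segmented cross-section, fed to a
# supervised classifier trained on expert-annotated cells.
import numpy as np
from skimage.measure import regionprops
from sklearn.ensemble import RandomForestClassifier

def cell_features(label_image):
    """label_image: integer-labeled segmentation of one cross-section."""
    feats = [(r.area, r.eccentricity, r.solidity, r.perimeter)
             for r in regionprops(label_image)]
    return np.array(feats)

def train_cell_classifier(label_images, annotations):
    """annotations: per-image arrays of cell-type ids (e.g. phloem, xylem)."""
    X = np.vstack([cell_features(im) for im in label_images])
    y = np.concatenate(annotations)
    return RandomForestClassifier(n_estimators=200).fit(X, y)
```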
Abstract:
Background: Current guidelines underline the limitations of existing instruments for assessing fitness to drive and the poor adaptability of batteries of neuropsychological tests to primary care settings. Aims: To provide a free, reliable, transparent, computer-based instrument capable of detecting effects of age or drugs on visual processing and cognitive functions. Methods: Relying on systematic reviews of neuropsychological tests and driving performances, we conceived four new computerised tasks measuring: visual processing (Task1), movement attention shift (Task2), executive response, alerting and orientation gain (Task3), and spatial memory (Task4). We then planned five studies to test MedDrive's reliability and validity. Study-1 defined instructions and learning functions, collecting data from 105 senior drivers attending an automobile club course. Study-2 assessed concurrent validity for detecting mild cognitive impairment (MCI) against the useful field of view (UFOV) on 120 new senior drivers. Study-3 collected data from 200 healthy drivers aged 20-90 to model age-related normal cognitive decline. Study-4 measured MedDrive's reliability by having 21 healthy volunteers repeat the tests five times. Study-5 tested MedDrive's responsiveness to alcohol in a randomised, double-blind, placebo-controlled, crossover, dose-response validation trial including 20 young healthy volunteers. Results: Instructions were well understood and accepted by all senior drivers. Measures of visual processing (Task1) performed better than the UFOV in detecting MCI (ROC 0.770 vs. 0.620; p=0.048). MedDrive was capable of explaining 43.4% of the changes occurring with natural cognitive decline. In young healthy drivers, learning effects became negligible from the third session onwards for all tasks except dual tasking (ICC=0.769). All measures except alerting and orientation gain were affected by blood alcohol concentrations. Finally, MedDrive was able to explain 29.3% of the potential causes of swerving on the driving simulator. Discussion and conclusions: MedDrive shows improved performance compared to existing computerised neuropsychological tasks. It shows promising results for both clinical and research purposes.
Abstract:
This paper presents and discusses the use of Bayesian procedures - introduced through the use of Bayesian networks in Part I of this series of papers - for 'learning' probabilities from data. The discussion will relate to a set of real data on characteristics of black toners commonly used in printing and copying devices. Particular attention is drawn to the incorporation of the proposed procedures as an integral part in probabilistic inference schemes (notably in the form of Bayesian networks) that are intended to address uncertainties related to particular propositions of interest (e.g., whether or not a sample originates from a particular source). The conceptual tenets of the proposed methodologies are presented along with aspects of their practical implementation using currently available Bayesian network software.
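For the simplest case, a binary variable with a conjugate Beta prior, "learning" a probability from data reduces to posterior updating; a minimal sketch with hypothetical counts (the paper's toner data and network structures are richer than this):

```python
# Minimal sketch of Bayesian probability learning with a conjugate
# Beta prior: the posterior mean is the "learned" probability that a
# Bayesian network node would use. The counts below are hypothetical.
def learn_probability(successes, failures, alpha=1.0, beta=1.0):
    """Posterior mean of a Bernoulli parameter under a Beta(alpha, beta) prior."""
    return (alpha + successes) / (alpha + beta + successes + failures)

# e.g. 18 of 25 examined toners show a given characteristic:
p = learn_probability(successes=18, failures=7)   # -> ~0.704
```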
Abstract:
We present a novel filtering method for multispectral satellite image classification. The proposed method learns a set of spatial filters that maximize the class separability of a binary support vector machine (SVM) through a gradient descent approach. Regularization issues are discussed in detail, and a Frobenius-norm regularization is proposed to efficiently exclude uninformative filter coefficients. Experiments carried out on multiclass one-against-all classification and target detection show the capabilities of the learned spatial filters.
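A simplified sketch of the idea: learning one spatial filter by gradient descent on a hinge (SVM-style) loss with a Frobenius-norm penalty. In the actual method the filters feed a full SVM rather than this plain linear score:

```python
# Simplified sketch: one spatial filter learned by gradient descent on a
# hinge loss, with a Frobenius-norm regularizer shrinking uninformative
# coefficients, as the abstract outlines.
import numpy as np

def learn_filter(patches, labels, lam=0.1, lr=0.01, epochs=100):
    """patches: (n, h, w) image windows; labels: +1 / -1."""
    n, h, w = patches.shape
    W = np.zeros((h, w))                      # the spatial filter
    for _ in range(epochs):
        for x, y in zip(patches, labels):
            margin = y * np.sum(W * x)        # filter response as the score
            grad = lam * W                    # Frobenius-norm regularizer
            if margin < 1:                    # hinge loss is active
                grad -= y * x
            W -= lr * grad
    return W
```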
Abstract:
Nowadays, the joint exploitation of images acquired daily by remote sensing instruments and of images available from archives allows a detailed monitoring of the transitions occurring at the surface of the Earth. These modifications of the land cover generate spectral discrepancies that can be detected via the analysis of remote sensing images. Independently of the origin of the images and of the type of surface change, correct processing of such data implies the adoption of flexible, robust, and possibly nonlinear methods, to correctly account for the complex statistical relationships characterizing the pixels of the images. This Thesis deals with the development and application of advanced statistical methods for multi-temporal optical remote sensing image processing tasks. Three different families of machine learning models have been explored, and fundamental solutions for change detection problems are provided. In the first part, change detection with user supervision has been considered. In a first application, a nonlinear classifier has been applied with the intent of precisely delineating flooded regions from a pair of images. In a second case study, the spatial context of each pixel has been injected into another nonlinear classifier to obtain a precise mapping of new urban structures. In both cases, the user provides the classifier with examples of what he believes has changed or not. In the second part, a completely automatic and unsupervised method for precise binary detection of changes has been proposed. The technique allows a very accurate mapping without any user intervention, which is particularly useful when the readiness and reaction time of the system are a crucial constraint. In the third part, the problem of statistical distributions shifting between acquisitions is studied. Two approaches to transforming the pair of bi-temporal images and reducing their differences unrelated to changes in land cover are studied. The methods align the distributions of the images, so that the pixel-wise comparison can be carried out with higher accuracy. Furthermore, the second method can deal with images from different sensors, regardless of the dimensionality of the data or the spectral information content. This opens the door to possible solutions for a crucial problem in the field: detecting changes when the images have been acquired by two different sensors.
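As a generic baseline for the unsupervised binary case (not the Thesis's specific technique), a change map can be obtained by clustering the magnitude of the spectral difference image into two classes:

```python
# Sketch of a fully automatic binary change detector: cluster the
# magnitude of the spectral difference image into "change" / "no change".
import numpy as np
from sklearn.cluster import KMeans

def detect_changes(img_t1, img_t2):
    """img_t1, img_t2: (rows, cols, bands) co-registered acquisitions."""
    diff = np.linalg.norm(img_t2.astype(float) - img_t1.astype(float), axis=2)
    km = KMeans(n_clusters=2, n_init=10).fit(diff.reshape(-1, 1))
    # label the cluster with the larger mean difference as "changed"
    changed = np.argmax(km.cluster_centers_.ravel())
    return (km.labels_ == changed).reshape(diff.shape)
```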
Abstract:
Student guidance is an always-desired characteristic in any educational system, but it is especially difficult to provide in an automated way in a computer-supported educational tool. In this paper we explore possible avenues relying on machine learning techniques, to be included in the near future -in the form of a tutoring navigational tool- in a tele-education platform -InterMediActor- currently under development. Since no data from that platform is available yet, the preliminary experiments presented in this paper are built by interpreting every subject in the Telecommunications Degree at Universidad Carlos III de Madrid as an aggregated macro-competence (following the methodological considerations in InterMediActor), such that the marks achieved by students can be used as data for the models, to be replaced in the near future by real data measured directly inside InterMediActor. We evaluate the predictability of students' qualifications and deploy a preventive early-detection system -a failure alert- to identify the students most prone to fail a given subject, such that corrective means can be deployed sufficiently in advance.
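The failure-alert component could be prototyped along these lines; the feature layout (one grade per earlier subject) and the logistic-regression learner are assumptions for illustration:

```python
# Sketch of the "failure alert" idea: predict whether a student will fail
# a subject from marks in earlier subjects. The feature layout is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_failure_alert(prior_marks, failed):
    """prior_marks: (n_students, n_subjects) grades; failed: 0/1 outcomes."""
    return LogisticRegression(max_iter=1000).fit(prior_marks, failed)

def students_at_risk(model, prior_marks, threshold=0.5):
    risk = model.predict_proba(prior_marks)[:, 1]
    return np.where(risk >= threshold)[0]    # indices to flag for tutoring
```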
Abstract:
Machine learning and pattern recognition methods have been used to diagnose Alzheimer's disease (AD) and mild cognitive impairment (MCI) from individual MRI scans. Another application of such methods is to predict clinical scores from individual scans. Using relevance vector regression (RVR), we predicted individuals' performances on established tests from their MRI T1-weighted images in two independent data sets. From the Mayo Clinic, 73 probable AD patients and 91 cognitively normal (CN) controls completed the Mini-Mental State Examination (MMSE), Dementia Rating Scale (DRS), and Auditory Verbal Learning Test (AVLT) within 3 months of their scan. Baseline MRIs from the Alzheimer's Disease Neuroimaging Initiative (ADNI) comprised the other data set; 113 AD, 351 MCI, and 122 CN subjects completed the MMSE and Alzheimer's Disease Assessment Scale-Cognitive subtest (ADAS-cog), and 39 AD, 92 MCI, and 32 CN ADNI subjects completed the MMSE, ADAS-cog, and AVLT. Predicted and actual clinical scores were highly correlated for the MMSE, DRS, and ADAS-cog tests (P<0.0001). Training with one data set and testing with another demonstrated stability between data sets. DRS, MMSE, and ADAS-cog correlated better than AVLT with the whole-brain grey matter changes associated with AD. This result underscores their utility for screening and tracking the disease. RVR offers a novel way to measure interactions between structural changes and neuropsychological tests beyond that of univariate methods. In clinical practice, we envision using RVR to aid in diagnosis and predict clinical outcome.
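In outline, the prediction step could be sketched as below. scikit-learn has no relevance vector regression, so ARDRegression, a related sparse Bayesian linear model, stands in here; the grey-matter feature vectors are hypothetical inputs:

```python
# Sketch of clinical-score prediction from image-derived features.
# ARDRegression (a sparse Bayesian linear model) is a stand-in for RVR,
# which scikit-learn does not provide. gm_features would be one
# grey-matter feature vector per subject's T1-weighted scan.
from sklearn.linear_model import ARDRegression
from sklearn.model_selection import cross_val_predict

def predict_scores(gm_features, mmse_scores):
    model = ARDRegression()
    predicted = cross_val_predict(model, gm_features, mmse_scores, cv=10)
    return predicted   # compare with actual scores, e.g. via correlation
```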
Abstract:
Among the types of remote sensing acquisitions, optical images are certainly one of the most widely relied upon data sources for Earth observation. They provide detailed measurements of the electromagnetic radiation reflected or emitted by each pixel in the scene. Through a process termed supervised land-cover classification, this makes it possible to automatically yet accurately distinguish objects at the surface of our planet. In this respect, when producing a land-cover map of the surveyed area, the availability of training examples representative of each thematic class is crucial for the success of the classification procedure. However, in real applications, due to several constraints on the sample collection process, labeled pixels are usually scarce. When analyzing an image for which those key samples are unavailable, a viable solution consists in resorting to the ground truth data of other previously acquired images. This option is attractive, but several factors such as atmospheric, ground, and acquisition conditions can cause radiometric differences between the images, hindering the transfer of knowledge from one image to another. The goal of this Thesis is to supply remote sensing image analysts with suitable processing techniques to ensure a robust portability of classification models across different images. The ultimate purpose is to map the land-cover classes over large spatial and temporal extents with minimal ground information. To overcome, or simply quantify, the observed shifts in the statistical distribution of the spectra of the materials, we study four approaches drawn from the field of machine learning. First, we propose a strategy to intelligently sample the image of interest so as to collect labels only for the most useful pixels. This iterative routine is based on a constant evaluation of the pertinence to the new image of the initial training data, which actually belong to a different image. Second, an approach to reduce the radiometric differences among the images by projecting the respective pixels into a common new data space is presented. We analyze a kernel-based feature extraction framework suited for such problems, showing that, after this relative normalization, the cross-image generalization abilities of a classifier are greatly increased. Third, we test a new data-driven measure of distance between probability distributions to assess the distortions caused by differences in the acquisition geometry affecting series of multi-angle images. We also gauge the portability of classification models through the sequences. In both exercises, the efficacy of classic physically and statistically based normalization methods is discussed. Finally, we explore a new family of approaches based on sparse representations of the samples to reciprocally convert the data space of two images. The projection function bridging the images allows a synthesis of new pixels with more similar characteristics, ultimately facilitating land-cover mapping across images.
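The first approach could be sketched as a generic active-learning loop; the least-confident query rule and the `oracle` labeling callback are illustrative assumptions, not the Thesis's exact pertinence criterion:

```python
# Sketch: start from another image's ground truth, then iteratively query
# labels for the target image's most uncertain pixels and retrain.
# `oracle(indices)` is a hypothetical labeling callback (e.g., an analyst).
import numpy as np
from sklearn.svm import SVC

def adapt_by_active_learning(X_src, y_src, X_tgt, oracle, n_rounds=10, batch=20):
    X_train, y_train = X_src.copy(), y_src.copy()
    remaining = np.arange(len(X_tgt))       # target pixels not yet labeled
    clf = None
    for _ in range(n_rounds):
        clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)
        proba = clf.predict_proba(X_tgt[remaining])
        uncertainty = 1.0 - proba.max(axis=1)      # least-confident rule
        pick = np.argsort(uncertainty)[-batch:]    # most uncertain pixels
        query = remaining[pick]
        X_train = np.vstack([X_train, X_tgt[query]])
        y_train = np.concatenate([y_train, oracle(query)])
        remaining = np.delete(remaining, pick)
    return clf
```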
Abstract:
A simple holographic model is presented and analyzed that describes chiral symmetry breaking and the physics of the meson sector in QCD. This is a bottom-up model that incorporates string theory ingredients like tachyon condensation, which is expected to be the main manifestation of chiral symmetry breaking in the holographic context. As a model for glue, the Kuperstein-Sonnenschein background is used. The structure of the flavor vacuum is analyzed in the quenched approximation. Chiral symmetry breaking is shown at zero temperature. Above the deconfinement transition, chiral symmetry is restored. A complete holographic renormalization is performed, and the chiral condensate is calculated for different quark masses both at zero and non-zero temperatures. The 0++, 0-+, 1++, 1-- meson trajectories are analyzed and their masses and decay constants are computed. The asymptotic trajectories are linear. The model has one phenomenological parameter beyond those of QCD that affects the 1++ and 0-+ sectors. Fitting this parameter, we obtain very good agreement with data. The model improves in several ways on the popular hard-wall and soft-wall bottom-up models.
Abstract:
This report documents an extensive field program carried out to identify the relationships between soil engineering properties, as measured by various in situ devices, and the results of machine compaction monitoring using prototype compaction monitoring technology developed by Caterpillar Inc. Primary research tasks for this study include the following: (1) experimental testing and statistical analyses to evaluate machine power in terms of the engineering properties of the compacted soil (e.g., density, strength, stiffness) and (2) recommendations for using the compaction monitoring technology in practice. The compaction monitoring technology includes sensors that monitor the power consumed to move the compaction machine, an on-board computer and display screen, and a GPS system to map the spatial location of the machine. In situ soil density, strength, and stiffness data characterized the soil at various stages of compaction. For each test strip or test area, in situ soil properties were compared directly to machine power values to establish statistical relationships. Statistical models were developed to predict soil density, strength, and stiffness from the machine power values. Field data for multiple test strips were evaluated. The R2 value (coefficient of determination) was generally used to assess the quality of the regressions. Strong correlations were observed between averaged machine power and field measurement data. The relationships are based on the compaction model derived from laboratory data. R2 values were consistently higher for thicker lifts than for thin lifts, indicating that the depth influencing the machine power response exceeds the representative lift thickness encountered under field conditions. Caterpillar Inc. compaction monitoring technology also identified localized areas of an earthwork project with weak or poorly compacted soil. The soil properties at these locations were verified using in situ test devices. This report also documents the steps required to implement the compaction monitoring technology evaluated.
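The statistical step, regressing an in situ soil property on averaged machine power and scoring the fit with R2, can be sketched as follows; the data arrays are placeholders for the field measurements:

```python
# Sketch of the statistical modeling step: regress an in-situ soil property
# (e.g., dry density) on averaged machine power and report R2. The arrays
# stand in for per-strip field measurements.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def fit_power_model(machine_power, soil_property):
    X = np.asarray(machine_power).reshape(-1, 1)   # one power value per strip
    model = LinearRegression().fit(X, soil_property)
    r2 = r2_score(soil_property, model.predict(X))
    return model, r2
```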
Abstract:
We show how nonlinear embedding algorithms, popular with shallow semi-supervised learning techniques such as kernel methods, can be applied to deep multilayer architectures, either as a regularizer at the output layer or on each layer of the architecture. This provides a simple alternative to existing approaches to deep learning, whilst yielding error rates competitive with those methods and with existing shallow semi-supervised techniques.
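A minimal sketch of the output-layer variant: a supervised loss on labeled points combined with a Siamese-style embedding loss that pulls neighboring points together and pushes non-neighbors apart. The margin, weighting, and pairing scheme are illustrative assumptions:

```python
# Sketch of embedding regularization at the output layer: supervised
# cross-entropy on labeled points plus a contrastive neighbor loss on
# (possibly unlabeled) pairs, as the abstract describes in outline.
import tensorflow as tf

def embedding_loss(emb_a, emb_b, is_neighbor, margin=1.0):
    """emb_a, emb_b: network outputs for paired inputs; is_neighbor: 1.0/0.0."""
    d2 = tf.reduce_sum(tf.square(emb_a - emb_b), axis=1)
    pull = is_neighbor * d2                      # neighbors: shrink distance
    push = (1.0 - is_neighbor) * tf.square(      # non-neighbors: enforce margin
        tf.maximum(0.0, margin - tf.sqrt(d2 + 1e-12)))
    return tf.reduce_mean(pull + push)

def total_loss(model, x_lab, y_lab, x_a, x_b, is_neighbor, lam=0.1):
    sup = tf.keras.losses.sparse_categorical_crossentropy(
        y_lab, model(x_lab), from_logits=False)
    reg = embedding_loss(model(x_a), model(x_b), is_neighbor)
    return tf.reduce_mean(sup) + lam * reg
```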