984 results for Single machine
Abstract:
High-content analysis has revolutionized cancer drug discovery by identifying substances that alter the phenotype of a cell in ways that prevent tumor growth and metastasis. The high-resolution biofluorescence images from these assays allow precise quantitative measurements, enabling the effects of small molecules on host cells to be distinguished from those on tumor cells. In this work, we are particularly interested in applying deep neural networks (DNNs), a cutting-edge machine learning method, to the classification of compounds into chemical mechanisms of action (MOAs). Compound classification has previously been performed using image-based profiling methods, sometimes combined with feature-reduction methods such as principal component analysis or factor analysis. In this article, we map the input features of each cell to a particular MOA class without using any treatment-level profiles or feature-reduction methods. To the best of our knowledge, this is the first application of DNNs in this domain that leverages single-cell information. Furthermore, we use deep transfer learning (DTL) to alleviate the computationally demanding effort of searching the huge parameter space of a DNN. Results show that with this approach we obtain a 30% speedup and a 2% accuracy improvement.
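To make the transfer-learning step concrete, here is a minimal sketch in PyTorch of the general idea: pretrain a feed-forward DNN on a source assay, reuse its feature layers, and fine-tune only a new classification head. The model name, layer sizes, and feature dimension are illustrative assumptions, not details from the paper.

```python
# Minimal deep-transfer-learning sketch, assuming PyTorch; all names and
# dimensions are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class SingleCellNet(nn.Module):
    """Feed-forward DNN mapping per-cell features to MOA classes."""
    def __init__(self, n_features: int, n_classes: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(n_features, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):
        return self.head(self.backbone(x))

# A network pretrained on a source assay (training loop not shown):
source = SingleCellNet(n_features=500, n_classes=12)
# source.load_state_dict(torch.load("source_assay.pt"))  # hypothetical checkpoint

# Transfer: reuse the learned feature layers, train only a fresh head.
target = SingleCellNet(n_features=500, n_classes=12)
target.backbone.load_state_dict(source.backbone.state_dict())
for p in target.backbone.parameters():
    p.requires_grad = False  # freeze the transferred layers

optimizer = torch.optim.Adam(target.head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
```

Freezing the backbone shrinks the searched parameter space to the head alone, which is one plausible source of the reported speedup.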
Abstract:
This document presents a detailed description of the work done during the student's internship at the PR Metal company, carried out as an ERASMUS project at ISEP. It contains all relevant information, including a description of the company and its structure, an overview of the problems and analyzed cases, and all stages of the projects from concept to conclusion. The description of the work done during the internship is divided into two parts. The first part concerns one of the company's activities, the robotic chef (kitchen robot) production line. The work done for the development of this line involved several tasks, among them: creating a single-worker assembly station for screwing together the robot housing parts; improving the safety system of the laser welding chamber, which in particular consisted in designing an automatically closing door system with a special surface that protects against the destructive action of the laser beam; building a test station for examining the durability of heating connectors; and solving a problem with rotor vibrations. The second part covers the main task, carried out in the second half of the internship, and constitutes a complete description of the development and design of a machine. The machine is part of a car-handle latch cable production line, and its tasks are cutting the cable to the required length and hot-forming a plastic cover for further assembly needs.
Abstract:
Dissertation for obtaining the degree of Doctor in Statistics and Risk Management
Abstract:
Nowadays, many manufacturing and industrial systems have a diagnosis system on top of them, responsible for ensuring the lifetime of the system itself. This is achieved by performing both diagnosis and error-recovery procedures during real production time, on each of the individual parts of the system. There are many paradigms currently in use for diagnosis. However, they still fail to meet all the requirements imposed by enterprises, making a different approach necessary. This is mostly the case for error-recovery paradigms, since the great diversity present in today's industrial environment makes it highly unlikely that every single error can be fixed in real time, without stopping production. This work proposes a paradigm still relatively unknown in manufacturing: Artificial Immune Systems (AIS), which rely on bio-inspired algorithms and come as a valid alternative to the paradigms currently in use. The proposed work is a multi-agent architecture that implements an Artificial Immune System based on bio-inspired algorithms. The main goal of this architecture is to find a resolution for the error currently detected by the system. The proposed architecture was tested using two different simulation environments, each meant to prove different points, using different tests. These tests determine whether, as the research suggests, this paradigm is a promising alternative for the industrial environment. They also define what should be done to improve the current architecture and whether it should be applied in a decentralised system.
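As an illustration of the kind of bio-inspired algorithm an AIS builds on, the following toy Python sketch implements clonal selection: a detected error (the antigen) is matched against a repertoire of encoded recovery strategies (antibodies), and the best matches are cloned and mutated. All encodings and parameters are invented for illustration; this is not the thesis architecture.

```python
# Toy clonal-selection sketch: match an encoded error signature against a
# repertoire of recovery strategies, then clone and mutate the best matches.
# Purely illustrative; not the multi-agent architecture from the thesis.
import random

def affinity(antibody, antigen):
    """Fraction of matching bits between strategy and error signatures."""
    return sum(a == b for a, b in zip(antibody, antigen)) / len(antigen)

def mutate(antibody, rate=0.1):
    """Flip each bit with probability `rate`."""
    return [bit ^ (random.random() < rate) for bit in antibody]

def clonal_selection(repertoire, antigen, n_best=3, n_clones=5, generations=10):
    for _ in range(generations):
        repertoire.sort(key=lambda ab: affinity(ab, antigen), reverse=True)
        clones = [mutate(ab) for ab in repertoire[:n_best] for _ in range(n_clones)]
        repertoire = repertoire[:n_best] + clones
    return max(repertoire, key=lambda ab: affinity(ab, antigen))

antigen = [1, 0, 1, 1, 0, 0, 1, 0]  # encoded error signature (invented)
repertoire = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
best = clonal_selection(repertoire, antigen)
print("best affinity:", affinity(best, antigen))
```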
Abstract:
We report on experiments aimed at comparing the hysteretic response of a Cu-Zn-Al single crystal undergoing a martensitic transition under strain-driven and stress-driven conditions. Strain-driven experiments were performed using a conventional tensile machine, while a special device was designed to perform stress-driven experiments. Significant differences in the hysteresis loops were found. The strain-driven curves show a reentrant yield-point behavior which is not observed in the stress-driven case. The dissipated energy in the stress-driven curves is larger than in the strain-driven ones. Results from recently proposed models qualitatively agree with the experiments.
Abstract:
Radioactive soil-contamination mapping and risk assessment is a vital issue for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
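To make the contrast concrete, here is a minimal Python sketch of the two approaches the paper compares: a regression-style single-value prediction map with an error estimate versus multiple stochastic realizations from which exceedance probabilities can be mapped. A Gaussian process stands in for the geostatistical simulators reviewed, and the data, threshold, and grid are synthetic assumptions.

```python
# Single-value prediction vs. stochastic realizations, sketched with a
# Gaussian process as a stand-in for the reviewed simulators. Synthetic data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(40, 2))                  # measurement locations
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)   # contamination proxy

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2).fit(X, y)

grid = rng.uniform(0, 10, size=(200, 2))              # prediction grid
mean, std = gp.predict(grid, return_std=True)         # one map + error estimate
realizations = gp.sample_y(grid, n_samples=50, random_state=1)  # 50 equiprobable maps

# Probabilistic mapping: probability that contamination exceeds a threshold,
# something a single-value prediction cannot provide directly.
p_exceed = (realizations > 0.5).mean(axis=1)
```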
Abstract:
Although cross-sectional diffusion tensor imaging (DTI) studies revealed significant white-matter changes in mild cognitive impairment (MCI), the utility of this technique in predicting further cognitive decline is debated. Thirty-five healthy controls (HC) and 67 MCI subjects with DTI baseline data were neuropsychologically assessed at one year. Among them, 40 were stable (sMCI; 9 single-domain amnestic, 7 single-domain frontal, 24 multiple-domain) and 27 were progressive (pMCI; 7 single-domain amnestic, 4 single-domain frontal, 16 multiple-domain). Fractional anisotropy (FA) and longitudinal, radial, and mean diffusivity were measured using Tract-Based Spatial Statistics. Statistics included group comparisons and individual classification of MCI cases using support vector machines (SVM). FA was significantly higher in HC compared to MCI in a distributed network including the ventral part of the corpus callosum and right temporal and frontal pathways. There were no significant group-level differences between sMCI and pMCI or between MCI subtypes after correction for multiple comparisons. However, SVM analysis allowed for individual classification with accuracies of up to 91.4% (HC versus MCI) and 98.4% (sMCI versus pMCI). When considering the MCI subgroups separately, the minimum SVM classification accuracy for stable versus progressive cognitive decline was 97.5%, in the multiple-domain MCI group. SVM analysis of DTI data provided highly accurate individual classification of stable versus progressive MCI regardless of MCI subtype, indicating that this method may become an easily applicable tool for the early individual detection of MCI subjects evolving to dementia.
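The individual-classification step can be sketched as follows: a linear SVM on per-subject DTI features (e.g., skeletonized FA values) with leave-one-out cross-validation. The feature matrix, its dimensions, and the labels are placeholders, not the study's data or exact pipeline.

```python
# Hedged sketch of SVM-based individual classification of sMCI vs. pMCI.
# Features and labels are random placeholders, not the study's data.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, LeaveOneOut

rng = np.random.default_rng(0)
X = rng.random((67, 5000))             # 67 MCI subjects x FA-voxel features
y = np.r_[np.zeros(40), np.ones(27)]   # 0 = stable MCI, 1 = progressive MCI

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.3f}")
```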
Abstract:
This work had two primary objectives: 1) to produce a working prototype for automated printability assessment and 2) to perform a study of available machine vision and other necessary hardware solutions. The three printability testing methods considered in this work, IGT Picking, Heliotest, and mottling, have several different requirements, and the task was to produce a single automated testing system suitable for all methods. A system was designed and built, and its performance was tested using the Heliotest. Working prototypes are important tools for implementing theoretical methods in practical systems and for testing and demonstrating the methods in real-life conditions. The system was found to be sufficient for the Heliotest method. Further testing and possible modifications related to the other two test methods were left for future work. A short study of available systems and solutions concerning image acquisition in machine vision was performed. The theoretical part of this study includes lighting systems, optical systems, and image acquisition tools, mainly cameras, and the underlying physical aspects of each portion.
Abstract:
BACKGROUND: Recent neuroimaging studies suggest that value-based decision-making may rely on mechanisms of evidence accumulation. However, no studies have explicitly investigated the time at which single decisions are taken based on such an accumulation process. NEW METHOD: Here, we outline a novel electroencephalography (EEG) decoding technique which is based on accumulating the probability of appearance of prototypical voltage topographies and can be used for predicting subjects' decisions. We use this approach to study the time-course of single decisions during a task where subjects were asked to compare reward vs. loss points for accepting or rejecting offers. RESULTS: We show that, based on this new method, we can accurately decode decisions for the majority of the subjects. The typical time period for accurate decoding was modulated by task difficulty on a trial-by-trial basis. Typical latencies of when decisions are made were detected at ∼500 ms for 'easy' vs. ∼700 ms for 'hard' decisions, well before subjects' response (∼340 ms). Importantly, this decision time correlated with the drift rates of a diffusion model, evaluated independently at the behavioral level. COMPARISON WITH EXISTING METHOD(S): We compare the performance of our algorithm with logistic regression and support vector machines, and show that we obtain significant results for a higher number of subjects than with these two approaches. We also carry out analyses at the average event-related potential level, for comparison with previous studies on decision-making. CONCLUSIONS: We present a novel approach for studying the timing of value-based decision-making, by accumulating patterns of topographic EEG activity at the single-trial level.
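An illustrative sketch of the decoding idea: cluster single-time-point voltage topographies into prototypical maps, estimate how often each map occurs under each choice, and accumulate log-evidence over time until a threshold is crossed. The data are synthetic and all parameter choices (number of maps, threshold) are assumptions, not the authors' code.

```python
# Sketch: decode single-trial decisions by accumulating the probability of
# prototypical voltage topographies. Synthetic EEG; illustrative parameters.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_trials, n_times, n_channels = 100, 350, 64
eeg = rng.standard_normal((n_trials, n_times, n_channels))  # placeholder EEG
choice = rng.integers(0, 2, n_trials)                       # accept / reject

# 1) Prototypical topographies from all time points (one row per topography).
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(eeg.reshape(-1, n_channels))
labels = km.labels_.reshape(n_trials, n_times)

# 2) Map-conditional evidence: how often each map appears under each choice
#    (Laplace-smoothed counts, normalized to probabilities).
p_map = np.stack([np.bincount(labels[choice == c].ravel(), minlength=8) + 1.0
                  for c in (0, 1)])
p_map /= p_map.sum(axis=1, keepdims=True)

# 3) Accumulate the log-likelihood ratio over time for one trial and read off
#    the first threshold crossing as the putative decision time.
llr = np.cumsum(np.log(p_map[1][labels[0]]) - np.log(p_map[0][labels[0]]))
decision_time = np.argmax(np.abs(llr) > 5.0)  # sample index of first crossing
```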
Abstract:
During the last decade, high-speed motor technology has been increasingly applied in the medium and large power range. In particular, applications involving gas movement and compression seem to be the most important area in which high-speed machines are used. By manufacturing the induction motor rotor core from one single piece of steel, it is possible to achieve an extremely rigid rotor construction for the high-speed motor. In a mechanical sense, the solid rotor may be the best possible rotor construction. Unfortunately, the electromagnetic properties of a solid rotor are poorer than those of the traditional laminated rotor of an induction motor. This thesis analyses methods for improving the electromagnetic properties of a solid-rotor induction machine. The slip of the solid rotor is reduced notably if the rotor is axially slitted. The slitting patterns of the solid rotor are examined, and it is shown how the slitting parameters affect the produced torque. Methods for decreasing the harmonic eddy currents on the surface of the rotor are also examined; the motivation is to improve the efficiency of the motor so that it reaches the efficiency standard of a laminated-rotor induction motor. To carry out these research tasks, finite element analysis is used. An analytical calculation of solid rotors based on the multi-layer transfer-matrix method is developed, especially for the calculation of axially slitted solid rotors equipped with well-conducting end rings. The calculation results are verified by finite element analysis and laboratory measurements. Prototype motors of 250–300 kW at 140 Hz were tested to verify the results. Utilization factor data are given for several other prototypes, the largest of which delivers 1000 kW at 12000 min⁻¹.
Abstract:
Axial-flux machines tend to have cooling difficulties, since it is difficult to arrange a continuous heat path between the stator stack and the frame. One important reason for this is that no shrink fitting of the stator is possible in an axial-flux machine. Using liquid-cooled end shields alone does not solve this issue. Cooling of the rotor and the end windings may also be difficult, at least in the case of a two-stator-single-rotor construction, where air circulation in the rotor and end-winding areas may be difficult to arrange. If the rotor has significant losses, air circulation via the rotor and behind the stator yokes should be arranged, which, again, weakens the stator cooling. In this paper we study a novel way of using copper bars as extra heat-transfer paths between the stator teeth and liquid-cooling pools in the end shields. Even with these bars, the end windings still suffer from low thermal conductivity, and means of improving this with a high-heat-conductance material were also studied. The design principle of each cooling system is presented in detail. Thermal models based on Computational Fluid Dynamics (CFD) are used to analyse the temperature distribution in the machine, and measurement results are provided for different versions of the machine. The results show that significant improvements in cooling can be gained by these steps.
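A back-of-the-envelope sketch of the extra heat path helps fix the orders of magnitude: the conduction resistance R = L/(kA) of one copper bar from a stator tooth to a liquid-cooled end shield, and the resulting temperature rise. All dimensions and loss figures below are invented for illustration, not taken from the paper.

```python
# Conduction resistance R = L / (k * A) of one copper bar from a stator tooth
# to the liquid-cooled end shield. All figures are invented for illustration.
k_cu = 390.0    # W/(m K), copper thermal conductivity
L = 0.15        # m, bar length (tooth to end-shield cooling pool)
A = 2e-4        # m^2, bar cross-section (e.g. 20 mm x 10 mm)
n_bars = 24     # one bar per stator tooth
P_tooth = 20.0  # W, loss conducted away per tooth

R_bar = L / (k_cu * A)   # thermal resistance of one bar, K/W
dT = P_tooth * R_bar     # temperature rise along one bar, K
print(f"R_bar = {R_bar:.2f} K/W, dT across bar = {dT:.1f} K, "
      f"total heat removed = {n_bars * P_tooth:.0f} W")
```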
Abstract:
Despite constant progress in computing power, memory, and the quantity of available data, machine learning algorithms must be efficient in their use of these resources. Minimizing costs is obviously an important factor, but another motivation is the search for learning mechanisms capable of reproducing the behavior of intelligent beings. This thesis addresses the problem of efficiency through several articles dealing with a variety of learning algorithms: the problem is viewed not only from the point of view of computational efficiency (computation time and memory used), but also from that of statistical efficiency (the number of examples required to accomplish a given task). A first contribution of this thesis is to bring to light statistical inefficiencies in existing algorithms. We show that decision trees generalize poorly on certain kinds of tasks (chapter 3), as do classical graph-based semi-supervised learning algorithms (chapter 5), each being affected by a particular form of the curse of dimensionality. For a certain class of neural networks, called sum-product networks, we show that representing certain functions with single-hidden-layer networks can be exponentially less efficient than with deep networks (chapter 4). Our analyses lead to a better understanding of certain intrinsic problems of these algorithms and point research in directions that might resolve them. We also identify computational inefficiencies in graph-based semi-supervised learning algorithms (chapter 5) and in the learning of Gaussian mixtures in the presence of missing values (chapter 6). In both cases, we propose new algorithms able to handle significantly larger data sets. The last two chapters deal with computational efficiency from a different angle. In chapter 7, we theoretically analyze an existing algorithm for efficient learning in restricted Boltzmann machines (contrastive divergence), in order to better understand the reasons behind its success. Finally, in chapter 8 we present an application of machine learning to video games, in which the problem of computational efficiency is tied to software and hardware engineering considerations that are often ignored in research but are enormously important in practice.
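For reference, here is a minimal NumPy sketch of one CD-1 (contrastive divergence) update for a binary restricted Boltzmann machine, the algorithm analyzed theoretically in chapter 7. It uses random data and textbook notation; it is not the thesis implementation.

```python
# One CD-1 update for a binary RBM: one Gibbs step approximates the negative
# phase of the log-likelihood gradient. NumPy sketch with random data.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 6, 4, 0.1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b, c = np.zeros(n_visible), np.zeros(n_hidden)  # visible / hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0):
    # Positive phase: hidden probabilities and a sample, given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(n_hidden) < ph0).astype(float)
    # Negative phase: one Gibbs step back to visibles, then to hiddens.
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(n_visible) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    # Approximate gradient: data statistics minus reconstruction statistics.
    return np.outer(v0, ph0) - np.outer(v1, ph1), v0 - v1, ph0 - ph1

v = (rng.random(n_visible) < 0.5).astype(float)  # one random training vector
dW, db, dc = cd1_step(v)
W += lr * dW; b += lr * db; c += lr * dc
```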
Abstract:
Accurate single-trial P300 classification lends itself to fast and accurate control of Brain-Computer Interfaces (BCIs). Highly accurate classification of single-trial P300 ERPs is achieved by characterizing the EEG via the corresponding stationary and time-varying Wackermann parameters. Subsets of maximally discriminating parameters are then selected using the Network Clustering feature-selection algorithm and classified with Naive Bayes and Linear Discriminant Analysis classifiers. The method is assessed on two different data sets from BCI competitions and is shown to produce accuracies between approximately 70% and 85%. This is promising for the use of Wackermann parameters as features in the classification of single-trial ERP responses.
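A hedged sketch of the feature-extraction idea: two of Wackermann's global descriptors, Sigma (global field strength) and Omega (spatial complexity), computed per trial and fed to an LDA classifier. The EEG is synthetic, and the formulas follow the standard published definitions rather than the paper's exact pipeline.

```python
# Sketch: per-trial Sigma (global field strength) and Omega (spatial
# complexity) descriptors, then LDA classification. Synthetic data; formulas
# follow the standard Wackermann definitions, not this paper's exact code.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 120, 32, 200
trials = rng.standard_normal((n_trials, n_channels, n_times))  # placeholder ERPs
y = rng.integers(0, 2, n_trials)                               # P300 vs. non-P300

def sigma_omega(x):
    x = x - x.mean(axis=0)                # average-reference each time point
    sigma = np.sqrt((x ** 2).mean())      # global field strength
    lam = np.linalg.eigvalsh(np.cov(x))   # spatial covariance eigenvalues
    lam = np.clip(lam, 1e-12, None) / lam.sum()
    omega = np.exp(-(lam * np.log(lam)).sum())  # spatial complexity
    return sigma, omega

features = np.array([sigma_omega(t) for t in trials])
lda = LinearDiscriminantAnalysis().fit(features, y)
print("training accuracy:", lda.score(features, y))
```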
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)