954 results for MACHINE-TOOL
Abstract:
In the first part of the thesis we explore three fundamental questions that arise naturally when we conceive a machine learning scenario where the training and test distributions can differ. Contrary to conventional wisdom, we show that mismatched training and test distributions can in fact yield better out-of-sample performance. This optimal performance is obtained by training with the dual distribution. The optimal training distribution depends on the test distribution set by the problem, but not on the target function that we want to learn. We show how to obtain this distribution in both discrete and continuous input spaces, as well as how to approximate it in a practical scenario. The benefits of using this distribution are exemplified on both synthetic and real data sets.
In order to apply the dual distribution in the supervised learning scenario where the training data set is fixed, it is necessary to use weights to make the sample appear as if it came from the dual distribution. We explore the negative effect that weighting a sample can have. The theoretical decomposition of the effect of weights on the out-of-sample error is easy to understand but not actionable in practice, as the quantities involved cannot be computed. Hence, we propose the Targeted Weighting algorithm, which determines, for a given set of weights, whether out-of-sample performance will improve in a practical setting. This is necessary because the setting assumes there are no labeled points distributed according to the test distribution, only unlabeled samples.
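As an illustration of the weighting step this abstract alludes to (and not of the Targeted Weighting algorithm itself), the sketch below shows one common way to make a fixed training sample look as if it came from another distribution: estimate importance weights w(x) ≈ p_other(x)/p_train(x) from unlabeled samples via a probabilistic classifier and pass them to a weighted learner. The estimator choice, model names and variables are assumptions, not the thesis's method.

```python
# Illustrative sketch only: density-ratio importance weighting for covariate shift.
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

def importance_weights(X_train, X_other_unlabeled):
    """Estimate w(x) ~ p_other(x) / p_train(x) by discriminating the two samples."""
    X = np.vstack([X_train, X_other_unlabeled])
    z = np.r_[np.zeros(len(X_train)), np.ones(len(X_other_unlabeled))]  # 0 = train, 1 = other
    clf = LogisticRegression(max_iter=1000).fit(X, z)
    p = clf.predict_proba(X_train)[:, 1]
    w = p / np.clip(1.0 - p, 1e-6, None)   # classifier odds approximate the density ratio
    return w * len(w) / w.sum()            # normalize to mean 1

# Weighted training (placeholder arrays X_tr, y_tr, X_unlabeled assumed):
# w = importance_weights(X_tr, X_unlabeled)
# model = Ridge().fit(X_tr, y_tr, sample_weight=w)
```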
Finally, we propose a new class of matching algorithms that can be used to match the training set to a desired distribution, such as the dual distribution (or the test distribution). These algorithms can be applied to very large datasets, and we show how they lead to improved performance on a large real dataset, the Netflix dataset. Their low computational complexity is their main advantage over previous algorithms proposed in the covariate shift literature.
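A minimal sketch of what a distribution-matching step can look like (an assumed, generic nearest-neighbour scheme, not the thesis's algorithms): subsample a large training pool so that its empirical distribution mimics a target sample such as the dual or test distribution.

```python
# Assumed, generic sketch: greedy nearest-neighbour matching of a training pool to a target sample.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def greedy_match(X_pool, X_target, k_candidates=10):
    """Return indices into X_pool whose empirical distribution approximates X_target."""
    nn = NearestNeighbors(n_neighbors=k_candidates).fit(X_pool)
    _, candidates = nn.kneighbors(X_target)      # candidate pool points for each target point
    used, matched = set(), []
    for row in candidates:
        idx = next((j for j in row if j not in used), row[0])  # first unused candidate
        used.add(idx)
        matched.append(idx)
    return np.array(matched)
```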
In the second part of the thesis we apply Machine Learning to the problem of behavior recognition. We develop a specific behavior classifier to study fly aggression, and we develop a system that allows behavior in videos of animals to be analyzed with minimal supervision. The system, which we call CUBA (Caltech Unsupervised Behavior Analysis), detects movemes, actions, and stories from time series describing the position of animals in videos. The method both summarizes the data and provides biologists with a mathematical tool to test new hypotheses. Other benefits of CUBA include finding classifiers for specific behaviors without the need for annotation, as well as providing means to discriminate groups of animals, for example according to their genetic line.
Abstract:
Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (a millimeter or less in human tissue). While OCT uses infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but it also provides the underlying ground truth of the simulated images, because we specify it at the beginning of the simulation. This is one of the key contributions of this thesis. Building such a powerful simulation tool required a thorough understanding of the signal formation process, a careful implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which are explained in detail later in the thesis.
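As a toy illustration of the kind of stochastic sampling such a simulator is built on (an assumed, heavily simplified random walk, not the thesis's simulator, which adds importance sampling, photon splitting, a voxel mesh and parallel A-scans), free path lengths between scattering events can be drawn from the Beer-Lambert law and each photon carries a survival weight:

```python
# Toy sketch of a Monte Carlo photon random walk in a scattering medium.
# Parameter values (mu_s, mu_a, n_photons, max_depth) are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
mu_s, mu_a = 10.0, 0.1          # scattering / absorption coefficients [1/mm], assumed values
mu_t = mu_s + mu_a

def propagate(n_photons=10_000, max_depth=2.0):
    """Return per-photon termination depth and residual weight."""
    depths, weights = [], []
    for _ in range(n_photons):
        z, w = 0.0, 1.0
        while 0.0 <= z < max_depth and w > 1e-4:
            step = -np.log(rng.random()) / mu_t      # Beer-Lambert free path length
            z += step * rng.uniform(-1.0, 1.0)       # crude isotropic direction (cosine of angle)
            w *= mu_s / mu_t                         # survival weight after each scattering event
        depths.append(z)
        weights.append(w)
    return np.array(depths), np.array(weights)
```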
Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without the help of a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we reach 93%. We achieved this through extensive use of Machine Learning. The success of the Monte Carlo simulation already puts us in a great position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model that determines its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that particular structure, which predicts the length of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from Deep Learning can be useful to further improve the performance.
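A minimal sketch of the classify-then-regress idea described here (model choices, array types and shapes are assumptions, not the thesis's architecture): one classifier decides which structure an image belongs to, and a structure-specific multi-output regressor then predicts its layer lengths.

```python
# Assumed sketch of a "committee of experts": structure classification followed by
# structure-specific multi-output regression. Inputs are assumed to be NumPy arrays.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

class HierarchicalReconstructor:
    def __init__(self):
        self.structure_clf = RandomForestClassifier(n_estimators=200)
        self.regressors = {}                          # one multi-output regressor per structure

    def fit(self, X, structure_labels, layer_targets):
        self.structure_clf.fit(X, structure_labels)
        for s in np.unique(structure_labels):
            mask = structure_labels == s
            self.regressors[s] = RandomForestRegressor(n_estimators=200).fit(
                X[mask], layer_targets[mask])
        return self

    def predict(self, X):
        structures = self.structure_clf.predict(X)    # first decide the structure type
        return [self.regressors[s].predict(x[None, :])[0]   # then use its expert regressor
                for s, x in zip(structures, X)]
```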
It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depth), which previously could hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that a well-trained machine learning model, when fed with them, outputs precisely the true structure of the object being imaged. This is another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a successful attempt but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting such a task would require fully annotated OCT images, and a lot of them (hundreds or even thousands), which is clearly impossible without a powerful simulation tool like the one developed in this thesis.
Abstract:
Silicon carbide (SiC) is a material of great technological interest for engineering applications in hostile environments where silicon-based components cannot work (beyond 623 K). Single point diamond turning (SPDT) remains a superior and viable method for achieving process efficiency and freeform shapes on this hard material. However, it is extremely difficult to machine this ceramic consistently in the ductile regime due to sudden and rapid tool wear. It thus becomes non-trivial to develop an accurate understanding of the tool wear mechanism during SPDT of SiC in order to identify measures that suppress wear and minimize operational cost.
In this paper, molecular dynamics (MD) simulation has been deployed with a realistic potential energy function based on the analytical bond order potential (ABOP) formalism to understand the tool wear mechanism during single point diamond turning of SiC. The most significant result was obtained from the radial distribution function, which suggests graphitization of the diamond tool during the machining process. This phenomenon occurs due to the abrasive processes between these two ultra-hard materials. The abrasive action results in locally high temperatures which, compounded with the massive cutting forces, lead to an sp3–sp2 order–disorder transition of the diamond tool. This represents the root cause of tool wear during SPDT of cubic SiC. Further testing led to the development of a novel method for quantitative assessment of the progression of diamond tool wear from MD simulations.
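For readers unfamiliar with the diagnostic used here, the sketch below shows a generic way to compute a radial distribution function g(r) from MD atomic positions (the cubic periodic box, bin width and box size are assumptions, and this is not the paper's analysis code); a shift of the first carbon-carbon peak from roughly 1.54 Å (diamond) toward 1.42 Å (graphite) is the kind of signature that indicates graphitization.

```python
# Generic sketch: radial distribution function g(r) for atoms in a cubic periodic box.
import numpy as np

def radial_distribution(positions, box_length, r_max=10.0, n_bins=200):
    """positions: (n_atoms, 3) array. Returns bin centres and g(r)."""
    n = len(positions)
    rho = n / box_length**3                              # number density
    edges = np.linspace(0.0, r_max, n_bins + 1)
    hist = np.zeros(n_bins)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box_length * np.round(d / box_length)       # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r[r < r_max], bins=edges)[0]
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = rho * shell_vol * n / 2.0                    # expected pair counts for an ideal gas
    return 0.5 * (edges[:-1] + edges[1:]), hist / ideal
```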
Abstract:
Cloud data centres are critical business infrastructures and the fastest growing service providers. Detecting anomalies in Cloud data centre operation is vital. Given the vast complexity of the data centre system software stack, applications and workloads, anomaly detection is a challenging endeavour. Current tools for detecting anomalies often use machine learning techniques, application instance behaviours or system metrics distribution, which are complex to implement in Cloud computing environments as they require training, access to application-level data and complex processing. This paper presents LADT, a lightweight anomaly detection tool for Cloud data centres that uses rigorous correlation of system metrics, implemented by an efficient correlation algorithm without the need for training or complex infrastructure setup. LADT is based on the hypothesis that, in an anomaly-free system, metrics from data centre host nodes and virtual machines (VMs) are strongly correlated. An anomaly is detected whenever the correlation drops below a threshold value. We demonstrate and evaluate LADT in a Cloud environment, showing that the hosting node's I/O operations per second (IOPS) are strongly correlated with the aggregated virtual machine IOPS, but that this correlation vanishes when an application stresses the disk, indicating a node-level anomaly.
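A minimal sketch of the correlation test this hypothesis implies (window size, threshold and variable names are assumptions, not LADT's implementation): compute the Pearson correlation between host IOPS and aggregated VM IOPS over a sliding window and flag windows in which it drops below a threshold.

```python
# Assumed sketch: sliding-window correlation between host and aggregated VM IOPS.
import numpy as np

def detect_anomalies(host_iops, vm_iops_per_vm, window=60, threshold=0.5):
    """host_iops: 1-D array; vm_iops_per_vm: 2-D array (n_vms x time). Returns anomaly flags."""
    agg_vm = vm_iops_per_vm.sum(axis=0)                  # aggregate IOPS across all VMs
    flags = np.zeros(len(host_iops), dtype=bool)
    for t in range(window, len(host_iops)):
        h = host_iops[t - window:t]
        v = agg_vm[t - window:t]
        r = np.corrcoef(h, v)[0, 1]                      # Pearson correlation over the window
        flags[t] = (not np.isnan(r)) and r < threshold   # low correlation => suspected anomaly
    return flags
```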
Abstract:
Existing benchmarking methods are time-consuming processes as they typically benchmark the entire Virtual Machine (VM) in order to generate accurate performance data, making them less suitable for real-time analytics. The research in this paper aims to surmount this challenge by presenting DocLite, a Docker container-based lightweight benchmarking tool. DocLite explores lightweight cloud benchmarking methods for rapidly executing benchmarks in near real time. DocLite is built on Docker container technology, which allows a user-defined memory size and number of CPU cores of the VM to be benchmarked. The tool incorporates two benchmarking methods: the first, referred to as the native method, employs containers to benchmark a small portion of the VM and generate performance ranks; the second uses historic benchmark data along with the native method, as a hybrid, to generate VM ranks. The proposed methods are evaluated on three use-cases and are observed to be up to 91 times faster than benchmarking the entire VM. In both methods, small containers provide the same quality of rankings as a large container. The native method generates ranks with over 90% and 86% accuracy for sequential and parallel execution of an application, respectively, compared against benchmarking the whole VM. The hybrid method did not improve the quality of the rankings significantly.
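As a sketch of the container-level resource capping this relies on (the image name, benchmark command and limits below are placeholders, and this is not DocLite itself), the Docker SDK for Python can run a benchmark confined to a user-defined memory size and CPU set:

```python
# Assumed sketch: run a benchmark inside a container restricted to a slice of the VM.
import docker  # Docker SDK for Python

def run_capped_benchmark(image="benchmark-image",        # placeholder image with the benchmark installed
                         command="sysbench cpu run",     # placeholder benchmark command
                         mem_limit="512m", cpuset_cpus="0"):
    client = docker.from_env()
    output = client.containers.run(image, command,
                                   mem_limit=mem_limit,      # cap the container's memory
                                   cpuset_cpus=cpuset_cpus,  # pin the container to specific cores
                                   remove=True)
    return output.decode()   # raw benchmark output, to be parsed into a performance rank
```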
Abstract:
The study of electricity market operation has been gaining importance in recent years, as a result of the new challenges that electricity market restructuring has produced. This restructuring increased the competitiveness of the market, but also its complexity. The growing complexity and unpredictability of the market's evolution consequently increase the difficulty of decision making. Therefore, the intervening entities are forced to rethink their behaviour and market strategies. Currently, a large amount of information concerning electricity markets is available. These data, covering numerous aspects of electricity market operation, are accessible free of charge and are essential for understanding and suitably modelling electricity markets. This paper proposes a tool which is able to handle, store and dynamically update such data. The development of the proposed tool is expected to be of great importance in improving the comprehension of electricity markets and the interactions among the involved entities.
Abstract:
More and more research on Human-Machine Interaction (HMI) attempts to perform fine-grained analyses of the interaction in order to bring out what influences user behaviour. Both in the evaluation of performance and of user experience, particular attention is now being paid to emotional and cognitive reactions during the interaction. Standard qualitative approaches are limited, because they rely on observation and on post-interaction interviews, which limits the precision of the diagnosis. Since user experience and emotional reactions are highly dynamic and contextualized in nature, evaluation approaches must be equally so in order to allow a precise diagnosis of the interaction. This thesis presents a quantitative and dynamic evaluation approach that contextualizes user reactions in order to identify their antecedents in the interaction with a system. To do so, this work is organized around three axes: 1) automatic recognition of the user's goals and task structure, using eye-tracking measures and activity in the environment, by machine learning; 2) inference of psychological constructs (arousal, emotional valence and cognitive load) through the analysis of physiological signals; 3) diagnosis of the interaction based on the dynamic coupling of the two preceding operations. The ideas and the development of our approach are illustrated by their application in two experimental contexts: e-commerce and simulation-based learning. We also present the complete software tool that was implemented to allow evaluation professionals (e.g. ergonomists, game designers, trainers) to use the proposed approach for HMI evaluation. It is designed to facilitate the triangulation of the measurement devices involved in this work and to integrate with classical interaction evaluation methods (e.g. questionnaires and coding of observations).
Abstract:
Embedded systems are usually designed for a single or a specified set of tasks. This specificity means the system design as well as its hardware/software development can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This necessitates the adoption of static machine-code analysis tools running on a host machine for the validation and optimization of embedded system code, which can help meet all of these goals. Such analysis can significantly improve software quality and is still a challenging field. This dissertation contributes an architecture-oriented code validation, error localization and optimization technique that assists the embedded system designer in software debugging, making it more effective at early detection of software bugs that are otherwise hard to detect, using static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, and thus improve both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae and their compliance is tested individually in all possible execution paths of the application programs.
An incorrect sequence of machine-code patterns is identified using slicing techniques on the control flow graph generated from the machine code. An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and deciding on the optimal data allocation to banked memory, resulting in a minimum number of bank-switching instructions in embedded system software. A relation matrix and a state transition diagram, formed for the active memory bank state transitions corresponding to each bank selection instruction, are used for the detection of redundant code. Instances of code redundancy based on the stipulated rules for the target processor are identified. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler/assembler, applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine-code patterns, which drastically reduces state-space creation, contributing to an improvement over state-of-the-art model checking. Though the technique described is general, the implementation is architecture-oriented, and hence the feasibility study is conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards correct use of difficult microcontroller features in developing embedded systems.
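A toy sketch of the redundant bank-switching detection idea (the mnemonics, data layout and the straight-line simplification are assumptions; the dissertation's approach uses a relation matrix and state transition diagram over the control flow graph): track the active-bank state along an instruction sequence and flag bank-select instructions that re-select the already-active bank.

```python
# Toy sketch (assumed mnemonics and instruction encoding): flag redundant bank selects
# by tracking the active-bank state along a straight-line code sequence.
def find_redundant_bank_selects(instructions):
    """instructions: list of (mnemonic, operand) pairs, e.g. ('BANKSEL', 1) or ('MOVWF', 'TRISB')."""
    active_bank = None
    redundant = []
    for idx, (op, arg) in enumerate(instructions):
        if op == 'BANKSEL':                       # bank-select pseudo-instruction (PIC-style)
            if arg == active_bank:
                redundant.append(idx)             # bank already active: instruction is redundant
            active_bank = arg
        elif op in ('CALL', 'GOTO', 'RETURN'):    # control transfer: bank state no longer known
            active_bank = None
    return redundant

# Example: the second ('BANKSEL', 1) below would be flagged as redundant.
# prog = [('BANKSEL', 1), ('MOVWF', 'TRISB'), ('BANKSEL', 1), ('MOVWF', 'TRISC')]
```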
Abstract:
Learning Disability (LD) is a general term that describes specific kinds of learning problems. It is a neurological condition that affects a child's brain and impairs the ability to carry out one or many specific tasks. Learning-disabled children are neither slow learners nor intellectually impaired. The disorder can make it difficult for a child to learn as quickly, or in the same way, as a child who is not affected by a learning disability. An affected child can have normal or above-average intelligence. Such children may have difficulty paying attention, with reading or letter recognition, or with mathematics. This does not mean that children who have learning disabilities are less intelligent; in fact, many are more intelligent than the average child. Learning disabilities vary from child to child: one child with LD may not have the same kind of learning problems as another. There is no cure for learning disabilities and they are life-long; however, children with LD can be high achievers and can be taught ways to work around the disability. In this research work, data mining using machine learning techniques is used to analyze the symptoms of LD, establish interrelationships between them and evaluate the relative importance of these symptoms. To increase the diagnostic accuracy of learning disability prediction, a knowledge-based tool built on statistical machine learning and data mining techniques, with high accuracy according to the knowledge obtained from clinical information, is proposed. The basic idea of the developed knowledge-based tool is to increase the accuracy of the learning disability assessment and reduce the time required for it. Different statistical machine learning techniques in data mining are used in the study. Identifying the important parameters of LD prediction using data mining techniques, identifying hidden relationships between the symptoms of LD and estimating the relative significance of each symptom of LD are also among the objectives of this research work. The developed tool has many advantages compared to the traditional method of using checklists to determine learning disabilities. To improve the performance of various classifiers, we developed some preprocessing methods for the LD prediction system. A new system based on fuzzy and rough set models is also developed for LD prediction, and the importance of pre-processing is studied here as well. A Graphical User Interface (GUI) is designed, yielding an integrated knowledge-based tool for predicting LD as well as its degree. The tool stores the details of the children in a student database and retrieves their LD reports as and when required. The present study demonstrates the effectiveness of the tool developed based on various machine learning techniques. It also identifies the important parameters of LD and accurately predicts learning disability in school-age children. This thesis makes several major contributions in technical, general and social areas. The results are found to be very beneficial to parents, teachers and institutions, who are able to diagnose a child's problem at an early stage and arrange proper treatment or counselling at the right time, avoiding academic and social losses.
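Purely as an illustration of the kind of classifier comparison such a study involves (the file name, column names and model choices below are hypothetical, not the thesis's data or tool), several standard classifiers can be cross-validated on a checklist-style symptom table:

```python
# Hypothetical sketch: comparing classifiers on a symptom-checklist data set.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

def compare_ld_classifiers(csv_path="ld_checklist.csv", target="has_ld"):
    df = pd.read_csv(csv_path)                    # hypothetical checklist data (numeric features)
    X, y = df.drop(columns=[target]), df[target]
    models = {"tree": DecisionTreeClassifier(max_depth=5),
              "naive_bayes": GaussianNB(),
              "svm": SVC()}
    # mean 5-fold cross-validation accuracy per model
    return {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
```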
Abstract:
The paper reports an interactive tool for calibrating a camera, suitable for use in outdoor scenes. The motivation for the tool was the need to obtain an approximate calibration for images taken with no explicit calibration data. Such images are frequently presented to research laboratories, especially in surveillance applications, with a request to demonstrate algorithms. The method decomposes the calibration parameters into intuitively simple components, and relies on the operator interactively adjusting the parameter settings to achieve a visually acceptable agreement between a rectilinear calibration model and his own perception of the scene. Using the tool, we have been able to calibrate images of unknown scenes, taken with unknown cameras, in a matter of minutes. The standard of calibration has proved to be sufficient for model-based pose recovery and tracking of vehicles.
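As a rough illustration of the intuition behind such interactive calibration (all parameter values and the simplified pinhole model below are assumptions, not the paper's method), one can project a rectilinear ground-plane grid for a given focal length, camera height and tilt, and let an operator adjust those parameters until the overlay agrees with the scene:

```python
# Assumed sketch: project a ground-plane grid with a simple pinhole camera model.
import numpy as np

def project_ground_grid(f=800.0, height=5.0, tilt_deg=10.0,
                        image_size=(640, 480), grid_extent=20, step=1.0):
    """Return pixel coordinates of a ground-plane grid (camera at origin, Y down, Z forward)."""
    tilt = np.deg2rad(tilt_deg)
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    pts = []
    for x in np.arange(-grid_extent, grid_extent + step, step):
        for z in np.arange(1.0, grid_extent + step, step):
            X, Y, Z = x, height, z                      # ground point 'height' below the camera
            Yc = Y * np.cos(tilt) - Z * np.sin(tilt)    # rotate into the tilted camera frame
            Zc = Y * np.sin(tilt) + Z * np.cos(tilt)
            if Zc > 0:
                pts.append((cx + f * X / Zc, cy + f * Yc / Zc))
    return np.array(pts)
```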
Abstract:
Digital imaging technologies enable a mastery of the visual that in recent mainstream cinema frequently manifests as certain kinds of spatial reach, orientation and motion. In such a context Michael Bay's Transformers franchise can be framed as a digital re-tooling of a familiar fantasy of vehicular propulsion, US car culture writ large in digitally crafted spectacles of diegetic speed, the vehicular chase film '2.0'. Movement is central to these films, calling up Scott Bukatman's observation that in spectacular visual media 'movement has become more than a tool of bodily knowledge; it has become an end in itself' (2003: 125). Not all movements and not all instances of vehicular propulsion are the same, however. How might we evaluate what is at stake in a film's assertion of movement as an end in itself, and the form that assertion takes, its articulations of diegetic velocity, corporeality, and spatial penetration? Deploying an attentiveness towards the specificity of aesthetic detail and affective impact in Bay's delineation of movement, this essay suggests that the franchise poses questions about the relationship of human movement to machine movement that exceed their narrative basis. Identifying a persistent rotational trope in the franchise that in its audio-visual articulation combines oddly anachronistic elements (evoking the mechanical rather than the digital), the article argues that the films prioritise certain fantasies of transformation and spatial penetration, and certain modes of corporeality, as one response to contemporary debates about digital technologisation, sustainable energy, and cinematic spectacle. In this way the franchise also represents a particular moment in a more widely discernible preoccupation in contemporary cinema with what we might call a 'rotational aesthetics' of action, a machine movement made possible by the digital, but which invokes earlier histories and fantasies of animation, propulsion and mechanization to particular ends.
Abstract:
Due to their high hardness and wear resistance, Si3N4-based ceramics are one of the most suitable cutting tool materials for machining cast iron, nickel alloys and hardened steels. However, their high degree of brittleness usually leads to inconsistent results and sudden catastrophic failures. This necessitates process optimization when machining superalloys with Si3N4-based ceramic cutting tools. The tools are expected to withstand the heat and pressure developed when machining at higher cutting conditions because of their high hardness and melting point. This paper evaluates the performance of an α-SiAlON tool in turning Ti-6Al-4V alloy at high cutting conditions, up to 250 m min-1, without coolant. Tool wear, failure modes and temperature were monitored to assess the performance of the cutting tool. Test results showed that the performance of the α-SiAlON tool, in terms of tool life, at the cutting conditions investigated is relatively poor, probably due to rapid notching and excessive chipping of the cutting edge. These failures are associated with adhesion and diffusion wear, which tend to weaken the bond strength of the cutting tool.
Abstract:
To simplify computer management, many system administrators are adopting advanced techniques to manage software configuration on grids, but the tight coupling between hardware and software makes every PC an individually managed entity, lowering scalability and increasing the cost of managing hundreds or thousands of PCs. This paper discusses the feasibility of a distributed virtual machine environment, named FlexLab: a new approach to computer management that combines virtualization and distributed system architectures as the basis of a management system. FlexLab is able to extend the coverage of a computer management solution beyond client operating system limitations and also offers a convenient hardware abstraction, decoupling software and hardware and simplifying computer management. The results obtained in this work indicate that FlexLab is able to overcome the limitations imposed by the coupling between software and hardware, simplifying the management of homogeneous and heterogeneous grids. © 2009 IEEE.
Abstract:
Silicon nitride cutting tools have been used successfully for machining hard materials such as cast irons and nickel-based alloys. However, little information is available on the use of these cutting tools with a diamond coating in dry turning operations on gray cast iron. In the present work, Si3N4 square inserts were developed, characterized and subsequently coated with diamond for dry machining operations on gray cast iron. All experiments were conducted with replicates. Cutting lengths of 1500, 3000 and 4500 m were used, with a feed rate of 0.33 mm/rev and a constant depth of cut of 1 mm. The results show that wear at the tool tips of the Si3N4 inserts, in all cutting conditions, was caused by both mechanical and chemical processes. To understand the tool wear mechanisms, a morphological analysis of the inserts after the experiments was performed by SEM and optical microscopy. The diamond-coated PVD inserts proved capable of reaching large cutting lengths when machining gray cast iron. © (2010) Trans Tech Publications.