998 results for Machine Project
Abstract:
This document is a summary of the Bachelor thesis titled "VHDL-Based System Design of a Cognitive Sensorimotor Loop (CSL) for Haptic Human-Machine Interaction (HMI)", written by Pablo de Miguel Morales, an Electronics Engineering student at the Universidad Politécnica de Madrid (UPM, Madrid, Spain), during an Erasmus+ exchange program at the Beuth Hochschule für Technik (BHT, Berlin, Germany). The tutor of this project was Prof. Dr. Hild. The project was developed in the Neurorobotics Research Laboratory (NRL) in close collaboration with Benjamin Panreck, a member of the NRL, and with Pablo Gabriel Lezcano, another exchange student from the UPM. For a full understanding of the thesis, a careful reading of the document is needed, together with the accompanying videos and the VHDL design. In the growing field of automation, a large amount of effort is dedicated to improving, adapting and designing motor controllers for a wide variety of applications. In the specific field of robotics and other machinery designed to interact with humans or their environment, new needs and technological solutions keep emerging, since this is still a relatively unexplored scenario. The project consisted of three main parts: two VHDL-based systems and a short experiment on haptic perception. Both VHDL systems are based on the Cognitive Sensorimotor Loop (CSL), a control loop designed by the NRL and mainly developed by Prof. Dr. Hild. The main characteristic of the CSL is that it uses no external sensor to measure the speed or position of the motor: the motor itself acts as the sensor, since it always generates a voltage proportional to its angular speed, so no calibration is needed. This method is energy efficient and simplifies control loops in complex systems. The first system, named CSL Stay In Touch (SIT), consists of a single-DC-motor system controlled by an FPGA board (Zynq ZYBO 7000) whose aim is to keep contact, in both directions, with any external object that touches its Sensing Platform. Beyond this main behavior, three features (Search Mode, Inertia Mode and Return Mode) were designed to enhance the haptic interaction experience. Additionally, the FPGA board also drives a VGA screen for monitoring the whole system. This system has been completely developed, tested and improved, and its timing and power-consumption properties analyzed. The second system, named CSL Fingerlike Mechanism (FM), consists of a finger-like mechanism controlled by two DC motors, each driving one phalanx of the finger. Its behavior is similar to that of the first system, but in a more complex structure. This system was optional, not part of the original objectives of the thesis, and could not be properly finished and tested for lack of time. The haptic perception experiment was conducted to gain insight into the complexity of human haptic perception, with the aim of applying this knowledge in technological applications. The experiment tested the ability of subjects to recognize different objects and shapes while blindfolded and with their ears covered. Two groups were formed: one had full haptic perception, while the other had to explore the environment with a plastic piece attached to the finger, creating a haptic handicap.
The conclusion of the thesis was that a haptic system based only on a CSL is not enough to retrieve valuable information from the environment, so other sensors (temperature, pressure, etc.) are needed; a CSL-based system is, however, very useful for controlling the force the system applies when interacting with haptically sensitive surfaces such as skin or touch screens.
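As an illustration of the sensorless principle described in this summary, here is a minimal, hypothetical sketch (in Python rather than the thesis's VHDL) of one iteration of a CSL-style loop: the drive is briefly disconnected, the back-EMF is sampled as a speed estimate, and the next drive voltage is derived from that reading. The motor interface and the gain are invented placeholders; the actual behavior (stay-in-touch, search, inertia, return) depends on how the feedback is shaped.

```python
# Hypothetical sketch of one CSL iteration; the real system is VHDL on a
# Zynq ZYBO 7000. `motor` stands for an invented hardware interface.

def csl_step(motor, gain=0.8):
    motor.disconnect_drive()         # float the terminals for a moment
    back_emf = motor.read_voltage()  # proportional to angular speed,
                                     # so no external sensor or calibration
    # Feed the sensed voltage back as the next drive command; the sign
    # and magnitude of `gain` select the loop's qualitative behavior
    # (e.g. resisting or following an external push on the platform).
    motor.apply_voltage(gain * back_emf)
    return back_emf
```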
Abstract:
We describe Janus, a massively parallel FPGA-based computer optimized for the simulation of spin glasses, theoretical models for the behavior of glassy materials. FPGAs (as compared to GPUs or many-core processors) provide a complementary approach to massively parallel computing. In particular, our model problem is formulated in terms of binary variables, and floating-point operations can be (almost) completely avoided. The FPGA architecture allows us to run many independent threads with almost no latency in memory access, thus updating up to 1024 spins per cycle. We describe Janus in detail and summarize the physics results obtained in four years of operation of this machine; we discuss two types of physics applications: long simulations of very large systems (which try to mimic, and provide understanding of, the experimentally observed non-equilibrium dynamics), and low-temperature equilibrium simulations using an artificial parallel tempering dynamics. The time scale of our non-equilibrium simulations spans eleven orders of magnitude (from picoseconds to a tenth of a second). Our equilibrium simulations, on the other hand, are unprecedented both for the low temperatures reached and for the large systems brought to equilibrium. A finite-time scaling ansatz emerges from the detailed comparison of the two sets of simulations. Janus has made it possible to perform spin-glass simulations that would take several decades on more conventional architectures. The paper ends with an assessment of the potential of possible future versions of the Janus architecture, based on state-of-the-art technology.
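As a toy illustration of the binary-variable updates that Janus parallelizes (in software, not the Janus FPGA logic): with ±1 spins and ±1 couplings, as in the Edwards-Anderson model, the local field is a small integer, so the Metropolis acceptance factors can be precomputed in a tiny lookup table instead of evaluated in floating point. A minimal 1D sketch:

```python
import math
import random

def metropolis_sweep(spins, J, beta):
    """One Metropolis sweep of a 1D +/-1 spin chain with +/-1 couplings.

    J[i] couples spins[i] and spins[i+1] (open boundaries). Because spins
    and couplings are binary, the energy change dE takes only a few integer
    values, so exp(-beta * dE) can be tabulated -- the property that lets
    hardware like Janus avoid floating point almost entirely.
    """
    n = len(spins)
    boltzmann = {dE: math.exp(-beta * dE) for dE in (2, 4)}  # dE <= 0 always accepted
    for i in range(n):
        h = 0                                  # integer local field
        if i > 0:
            h += J[i - 1] * spins[i - 1]
        if i < n - 1:
            h += J[i] * spins[i + 1]
        dE = 2 * spins[i] * h                  # energy cost of flipping spin i
        if dE <= 0 or random.random() < boltzmann[dE]:
            spins[i] = -spins[i]
```

Janus updates up to 1024 such spins per clock cycle by exploiting the independence of non-neighboring sites; the sequential loop above is only the logical content of one update.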
Abstract:
Invasive vertebrate pests, together with overabundant native species, cause significant economic and environmental damage in the Australian rangelands. Access to artificial watering points, created for the pastoral industry, has been a major factor in the spread and survival of these pests. Existing methods of controlling watering points are mechanical and cannot discriminate between target species. This paper describes an intelligent system for controlling watering points based on machine vision technology. Initial test results clearly demonstrate proof of concept for machine vision in this application. These initial experiments were carried out as part of a 3-year project using machine vision software to manage all large vertebrates in the Australian rangelands. Concurrent work is testing the use of automated gates and innovative laneway and enclosure designs. The system will have application in any habitat in the world where a resource is limited and can be enclosed for the management of livestock or wildlife.
Abstract:
This thesis introduces and develops a novel real-time predictive maintenance system that estimates machine system parameters from the motion current signature. Recently, motion current signature analysis has been proposed as an alternative to sensors for monitoring internal faults of a motor. A maintenance system based on analysis of the motion current signature avoids the need to implement and maintain expensive motion sensing technology. By developing nonlinear dynamical analysis of the motion current signature, the research described in this thesis implements a novel real-time predictive maintenance system for current and future manufacturing machine systems. A crucial concept underpinning this project is that the motion current signature contains information about the machine system parameters, and that this information can be extracted using nonlinear mapping techniques, such as neural networks. Towards this end, a proof-of-concept procedure is performed, which substantiates this concept. A simulation model, TuneLearn, is developed to simulate the large amount of training data required by the neural network approach. Statistical validation and verification of the model are performed to establish confidence in the simulated motion current signature. The validation experiment concludes that, although the simulation model generates a good macro-dynamical mapping of the motion current signature, it fails to accurately map the micro-dynamical structure, due to lack of knowledge about higher-order and nonlinear factors such as backlash and compliance. The failure of the simulation model to reproduce the micro-dynamical structure suggests the presence of nonlinearity in the motion current signature. This motivated us to perform surrogate data testing for nonlinearity in the motion current signature. The results confirm the presence of nonlinearity in the motion current signature, thereby motivating the use of nonlinear techniques for further analysis. The outcomes of the experiment show that nonlinear noise reduction combined with a linear reverse algorithm offers precise machine system parameter estimation from the motion current signature for the implementation of the real-time predictive maintenance system. Finally, a linear reverse algorithm, BJEST, is developed and applied to the motion current signature to estimate the machine system parameters.
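The surrogate data test for nonlinearity mentioned above has a standard textbook recipe, sketched below under generic assumptions: phase-randomized surrogates preserve the signal's linear spectrum, and a discriminating statistic (left abstract here as `stat`) is compared between the original signal and the surrogate ensemble. This is a generic version, not the thesis's exact procedure.

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Surrogate with the same power spectrum as x but randomized phases."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = 0.0                      # keep the mean (DC bin) untouched
    if len(x) % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist bin real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

def nonlinearity_test(x, stat, n_surrogates=99, seed=0):
    """Surrogate test: if stat(x) lies outside the surrogate distribution,
    the linear-stochastic null hypothesis is rejected."""
    rng = np.random.default_rng(seed)
    surrogate_stats = np.array(
        [stat(phase_randomized_surrogate(x, rng)) for _ in range(n_surrogates)]
    )
    return stat(x), surrogate_stats
```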
Abstract:
The project was carried out during the Erasmus+ program at the Instituto Superior de Engenharia do Porto, Portugal. I had the pleasure of doing it at Gislotica Mechanical Solutions, Lda. This document presents the process of designing a vertical inspection station for truck tires. The first part is an introduction, with information about the Gislotica company and a first analysis of the problem. The next part presents how the task was worked out and describes all the issues connected with the designed machine. The last part draws conclusions about the problems and results; it sums up not only the design process but also my own development during the project. I repeatedly point out which issues were new to me, and I often focus on the experience and knowledge of the design process that I gained.
Abstract:
The aim of the project was to design, in SolidWorks, and improve an existing tire inspection machine. The project was developed at Gislotica - Mechanical Solutions, guided by Eng. Rui Manuel Fazenda Silva, a professor at ISEP. The designed device relates to the inspection of automobile tires for holes and weak spots caused by punctures and usage. Such inspection includes careful examination of the inside surface of the tire, which is difficult because of its cylindrical shape and the stiff, resistant nature of the material the tire is made of. The idea is to provide a machine by which the walls of the tire may be spread and held apart, presenting the inner surface to the worker for inspection. The device must also perform rotational and vertical movement of the tire. It is meant to allow inspection in which the inspector does not need to use force, making the work easier and more efficient.
Abstract:
The aim of this thesis project is to automatically localize HCC tumors in the human liver and subsequently predict whether the tumor will undergo microvascular infiltration (MVI), the initial stage of metastasis development. The input data for the work have been partially supplied by Sant'Orsola Hospital and partially downloaded from online medical databases. Two U-Net models have been implemented for the automatic segmentation of the liver and of the HCC malignancies within it. The segmentation models have been evaluated with the Intersection-over-Union (IoU) and Dice Coefficient (DC) metrics. The outcomes obtained for the automatic liver segmentation are quite good (IoU = 0.82; DC = 0.35); the outcomes obtained for the automatic tumor segmentation (IoU = 0.35; DC = 0.46) are, instead, affected by some limitations: the algorithm is almost always able to detect the location of the tumor, but it tends to underestimate its dimensions. The purpose is to obtain the CT images of the HCC tumors, which are necessary for feature extraction. The 14 Haralick features calculated from the 3D-GLCM, the 120 radiomic features and the patients' clinical information are collected to build a dataset of 153 features. The goal is then to build a model able to discriminate, based on the given features, the tumors that will undergo MVI from those that will not. This task can be seen as a classification problem: each tumor needs to be classified either as "MVI positive" or "MVI negative". Feature selection techniques are implemented to identify the most descriptive features for the problem at hand, and then a set of classification models are trained and compared. Among all, the models with the best performance (around 80-84% ± 8-15%) turn out to be the XGBoost classifier, the SGD classifier and the Logistic Regression models (without penalization and with Lasso, Ridge or Elastic Net penalization).
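For reference, the two segmentation metrics quoted above have simple closed forms over binary masks; here is a minimal numpy sketch (the helper names are my own, not from the thesis):

```python
import numpy as np

def iou(pred, target):
    """Intersection-over-Union for binary masks (arrays of 0/1)."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    return np.logical_and(pred, target).sum() / union if union else 1.0

def dice(pred, target):
    """Dice coefficient: 2|A n B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    total = pred.sum() + target.sum()
    return 2 * np.logical_and(pred, target).sum() / total if total else 1.0
```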
Abstract:
Biology is now a "Big Data Science" thanks to technological advancements that allow the characterization of the whole macromolecular content of a cell or a collection of cells. This opens interesting perspectives, but only a small portion of this data can be experimentally characterized. Hence the demand for accurate and efficient computational tools for the automatic annotation of biological molecules. This is even more true for membrane proteins, on which my research project is focused, leading to the development of two machine-learning-based methods: BetAware-Deep and SVMyr. BetAware-Deep is a tool for the detection and topology prediction of transmembrane beta-barrel proteins found in Gram-negative bacteria. These proteins are involved in many biological processes and are primary candidates as drug targets. BetAware-Deep exploits the combination of a deep learning framework (a bidirectional long short-term memory network) and a probabilistic graphical model (a grammatical-restrained hidden conditional random field). Moreover, it introduces a modified formulation of the hydrophobic moment, designed to include evolutionary information. BetAware-Deep outperformed all available methods in topology prediction and reported high scores in the detection task. Glycine myristoylation in eukaryotes is the attachment of a myristic acid to an N-terminal glycine. SVMyr is a fast method based on support vector machines designed to predict this modification in datasets of proteomic scale. It takes octapeptides as input and exploits computational scores derived from experimental examples together with mean physicochemical features. SVMyr outperformed all available methods for co-translational myristoylation prediction and, as a unique feature, also allows the prediction of post-translational myristoylation. Both tools are designed with the best practices for the development of machine-learning-based tools outlined by the bioinformatics community in mind, and they are made available via user-friendly web servers. All this makes them valuable tools for filling the gap between sequence data and annotated data.
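In the spirit of SVMyr's setup, the sketch below shows SVM classification over N-terminal octapeptides. The one-hot featurization and the toy peptides are illustrative placeholders, not the published feature set or training data.

```python
# Minimal sketch of SVM-based octapeptide classification; toy data only.
import numpy as np
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def one_hot(octapeptide):
    """Encode an 8-residue peptide as a flat 8x20 one-hot vector."""
    vec = np.zeros((8, len(AMINO_ACIDS)))
    for i, aa in enumerate(octapeptide):
        vec[i, AMINO_ACIDS.index(aa)] = 1.0
    return vec.ravel()

# Label 1 if the N-terminal glycine is myristoylated (toy examples).
X = np.array([one_hot(p) for p in ["GNAASAHG", "GQTLSRRG"]])
y = np.array([1, 0])
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([one_hot("GAAASKLG")]))  # predict a new toy peptide
```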
Abstract:
With the CERN LHC program underway, there has been an acceleration of data growth in the High Energy Physics (HEP) field, and the use of Machine Learning (ML) in HEP will be critical during the HL-LHC program, when the data produced will reach the exascale. ML techniques have been used successfully in many areas of HEP; nevertheless, the development of a ML project and its implementation for production use is a highly time-consuming task that requires specific skills. Complicating this scenario is the fact that HEP data are stored in the ROOT data format, which is mostly unknown outside the HEP community. The work presented in this thesis is focused on the development of a Machine Learning as a Service (MLaaS) solution for HEP, aiming to provide a cloud service that allows HEP users to run ML pipelines via HTTP calls. These pipelines are executed by the MLaaS4HEP framework, which allows reading data, processing data, and training ML models directly on ROOT files of arbitrary size from local or distributed data sources. Such a solution provides HEP users who are not ML experts with a tool that allows them to apply ML techniques in their analyses in a streamlined manner. Over the years the MLaaS4HEP framework has been developed, validated and tested, and new features have been added. A first MLaaS solution was developed by automating the deployment of a platform equipped with the MLaaS4HEP framework. Then a service with APIs was developed, so that a user, after being authenticated and authorized, can submit MLaaS4HEP workflows, producing trained ML models ready for the inference phase. A working prototype of this service is currently running on a virtual machine of INFN-Cloud and meets the requirements to be added to the INFN Cloud portfolio of services.
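To make the "ML pipelines via HTTP calls" idea concrete, here is a sketch of how a client might submit a workflow to such a service. The endpoint URL, payload fields, and token handling are hypothetical placeholders, not the actual MLaaS4HEP service API.

```python
# Hypothetical client call to an MLaaS-style service; all names below
# (URL, payload keys, model identifier) are illustrative placeholders.
import requests

payload = {
    "data_files": ["root://eos.example.org//store/user/sample.root"],
    "labels": "target",
    "model": "keras_sequential",   # hypothetical model identifier
    "epochs": 5,
}
resp = requests.post(
    "https://mlaas.example.org/workflows",        # placeholder URL
    json=payload,
    headers={"Authorization": "Bearer <token>"},  # user must be authorized
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # e.g. a workflow id to poll for the trained model
```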
Abstract:
The rapid progression of biomedical research, coupled with the explosion of scientific literature, has generated an exigent need for efficient and reliable systems of knowledge extraction. This dissertation contends with this challenge through a concentrated investigation of digital health and Artificial Intelligence, and specifically of the potential of Machine Learning and Natural Language Processing (NLP) to expedite systematic literature reviews and refine the knowledge extraction process. The surge of COVID-19 complicated the efforts of scientists, policymakers, and medical professionals in identifying pertinent articles and assessing their scientific validity. This thesis presents a substantial solution in the form of the COKE Project, an initiative that interlaces machine reading with the rigorous protocols of Evidence-Based Medicine to streamline knowledge extraction. In the framework of the COKE ("COVID-19 Knowledge Extraction framework for next-generation discovery science") Project, this thesis aims to underscore the capacity of machine reading to create knowledge graphs from scientific texts. The project is remarkable for its innovative use of NLP techniques, such as a BERT + bi-LSTM language model, employed to detect and categorize elements within medical abstracts and thereby enhance the systematic literature review process. The COKE project's outcomes show that NLP, when used in a judiciously structured manner, can significantly reduce the time and effort required to produce medical guidelines. These findings are particularly salient in times of medical emergency, like the COVID-19 pandemic, when quick and accurate research results are critical.
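A minimal PyTorch sketch of the BERT + bi-LSTM architecture named above, written to show the general shape of such a tagger rather than the COKE project's actual model; the checkpoint name, hidden size, and label set are placeholders.

```python
# Sketch of a BERT + bi-LSTM per-token tagger; hyperparameters are
# placeholders, not the COKE project's published configuration.
import torch.nn as nn
from transformers import AutoModel

class BertBiLSTMTagger(nn.Module):
    def __init__(self, n_labels, bert_name="bert-base-uncased", hidden=256):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, n_labels)

    def forward(self, input_ids, attention_mask):
        # contextual token embeddings from BERT
        h = self.bert(input_ids=input_ids,
                      attention_mask=attention_mask).last_hidden_state
        h, _ = self.lstm(h)            # bi-LSTM over the token sequence
        return self.classifier(h)      # per-token label logits
```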
Abstract:
The scientific success of the LHC experiments at CERN highly depends on the availability of computing resources that efficiently store, process, and analyse the amount of data collected every year. This is ensured by the Worldwide LHC Computing Grid infrastructure, which connects computing centres distributed all over the world with a high-performance network. The LHC has an ambitious experimental program for the coming years, which includes large investments and improvements both in the detector hardware and in the software and computing systems, in order to deal with the huge increase in the event rate expected from the High Luminosity LHC (HL-LHC) phase, and consequently with the huge amount of data that will be produced. In recent years the role of Artificial Intelligence has become relevant in the High Energy Physics (HEP) world. Machine Learning (ML) and Deep Learning algorithms have been used successfully in many areas of HEP, such as online and offline reconstruction programs, detector simulation, object reconstruction and identification, and Monte Carlo generation, and they will surely be crucial in the HL-LHC phase. This thesis aims at contributing to a CMS R&D project regarding a ML "as a Service" solution for HEP needs (MLaaS4HEP). It consists of a data service able to perform an entire ML pipeline (reading data, processing data, training ML models, serving predictions) in a completely model-agnostic fashion, directly using ROOT files of arbitrary size from local or distributed data sources. This framework has been updated with new features in the data preprocessing phase, allowing more flexibility for the user. Since the MLaaS4HEP framework is experiment-agnostic, the ATLAS Higgs Boson ML challenge was chosen as the physics use case, with the aim of testing MLaaS4HEP and the contributions made in this work.
Abstract:
The estimation of emissions, both during homologation and in standard driving, is one of the new challenges that the automotive industry has to face. The new European and American regulations will allow lower and lower quantities of carbon monoxide emissions and will require all vehicles to be able to monitor their own pollutant production. Since numerical models are too computationally expensive and too approximate, new solutions based on Machine Learning are replacing standard techniques. In this project we considered a real V12 internal combustion engine and propose a novel approach that pushes Random Forests to generate meaningful predictions even in extreme cases (extrapolation, very-high-frequency peaks, noisy instrumentation, etc.). The present work also proposes a data preprocessing pipeline for strongly unbalanced datasets and a reinterpretation of the regression problem as a classification problem in a logarithmically quantized domain. Results have been evaluated on two different models representing a pure interpolation scenario (more standard) and an extrapolation scenario, to test the out-of-bounds robustness of the model. The employed metrics take into account the different aspects that can affect the homologation procedure, so the final analysis focuses on combining all the specific performances to obtain overall conclusions.
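The reinterpretation of regression as classification in a logarithmically quantized domain can be sketched as follows; the bin edges, feature matrix, and targets are synthetic placeholders, not the project's data or exact pipeline.

```python
# Sketch: quantize positive regression targets in log space, train a
# Random Forest classifier on the bins, and map predictions back to
# numeric values. All numbers below are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def log_quantize(y, n_bins=16, y_min=1e-3, y_max=1e3):
    """Map positive targets to integer bins spaced uniformly in log space."""
    edges = np.logspace(np.log10(y_min), np.log10(y_max), n_bins + 1)
    return np.clip(np.digitize(y, edges) - 1, 0, n_bins - 1), edges

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))          # toy engine features
y = np.exp(rng.normal(size=500))       # toy positive emission values
bins, edges = log_quantize(y)
clf = RandomForestClassifier(n_estimators=200).fit(X, bins)

# Recover a numeric prediction as the geometric midpoint of the bin:
pred_bins = clf.predict(X[:5])
pred_values = np.sqrt(edges[pred_bins] * edges[pred_bins + 1])
```

One motivation for this design is that equal-width bins in log space give roughly constant relative (rather than absolute) resolution, which suits targets spanning several orders of magnitude.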
Abstract:
The 2005 National Institutes of Health (NIH) Consensus Conference proposed new criteria for diagnosing and scoring the severity of chronic graft-versus-host disease (GVHD). The 2014 NIH consensus maintains the framework of the prior consensus with further refinement based on new evidence. Revisions have been made to address areas of controversy or confusion, such as the overlap chronic GVHD subcategory and the distinction between active disease and past tissue damage. Diagnostic criteria for involvement of mouth, eyes, genitalia, and lungs have been revised. Categories of chronic GVHD should be defined in ways that indicate prognosis, guide treatment, and define eligibility for clinical trials. Revisions have been made to focus attention on the causes of organ-specific abnormalities. Attribution of organ-specific abnormalities to chronic GVHD has been addressed. This paradigm shift provides greater specificity and more accurately measures the global burden of disease attributed to GVHD, and it will facilitate biomarker association studies.
Abstract:
This paper analyses some aspects of the trajectory of the Argentinian physician and sociologist Juan César García (1932-1984) in the field of Latin American Social Medicine. Three dimensions constituting his basic orientations are highlighted: the elaboration of systematic and reflective social thought; a critical attitude in questioning teaching and professional practices; a commitment to the institutionalization and dissemination of health knowledge.