985 results for Process machine
Abstract:
The current level of demand from customers in the electronics industry requires the production of parts with an extremely high level of reliability and quality, to ensure the full confidence of the end customer. Automatic Optical Inspection (AOI) machines play an important role in monitoring and detecting errors during the manufacturing process for printed circuit boards. These machines present images of products with probable assembly mistakes to an operator, who decides whether the product has a real defect or whether it is an automated false detection. Operator training is an important factor in achieving a lower rate of evaluation failure by the operator and, consequently, a lower rate of actual defects that slip through to the following processes. The Gage R&R methodology for attributes is part of a Six Sigma strategy for examining the repeatability and reproducibility of an evaluation system, thus giving important feedback on the suitability of each operator for classifying defects. This methodology has already been applied to different processes in several industry and service sectors, with excellent results in the evaluation of subjective parameters. An application for training operators of AOI machines was developed in order to check their fitness and improve future evaluation performance. This application provides a better understanding of the specific training needs of each operator, and also makes it possible to follow the evolution of the training program as new components, which present additional difficulties for operator evaluation, are introduced. Its use will help reduce the number of defects misclassified by operators and passed on to the following steps of the production process. This defect reduction will also contribute to the continuous improvement of operator evaluation performance, which is seen as a quality management goal.
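As a rough illustration of the attribute agreement checks that underlie such a study, the following Python sketch computes per-operator repeatability and agreement with an expert reference; the data, operator names and column labels are invented for illustration and are not taken from the study.

# Minimal attribute Gage R&R style agreement check (hypothetical AOI verdicts).
import pandas as pd

# Each row: one evaluation of one inspected image by one operator on one trial.
data = pd.DataFrame({
    "part":     [1, 1, 2, 2, 3, 3, 1, 1, 2, 2, 3, 3],
    "operator": ["A"] * 6 + ["B"] * 6,
    "trial":    [1, 2] * 6,
    "verdict":  ["defect", "defect", "ok", "ok", "ok", "defect",
                 "defect", "defect", "ok", "defect", "ok", "ok"],
})
reference = {1: "defect", 2: "ok", 3: "ok"}  # expert classification of each part

def within_appraiser_agreement(df):
    # Share of parts for which an operator gave the same verdict on every trial.
    per_part = df.groupby(["operator", "part"])["verdict"].nunique()
    return (per_part == 1).groupby(level="operator").mean()

def effectiveness(df, ref):
    # Share of parts for which all of an operator's trials match the expert reference.
    matched = df.assign(match=df.apply(lambda r: r["verdict"] == ref[r["part"]], axis=1))
    per_part = matched.groupby(["operator", "part"])["match"].all()
    return per_part.groupby(level="operator").mean()

print(within_appraiser_agreement(data))  # repeatability per operator
print(effectiveness(data, reference))    # agreement with the expert reference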
Abstract:
Introduction: A major focus of the data mining process - especially of machine learning research - is to learn automatically to recognize complex patterns and support adequate decisions based strictly on the acquired data. Since imaging techniques such as MPI (Myocardial Perfusion Imaging in Nuclear Cardiology) can take up a large part of the daily workflow and generate gigabytes of data, computerized analysis of the data could offer advantages over human analysis: shorter time, homogeneity and consistency, automatic recording of analysis results, relatively low cost, etc. Objectives: The aim of this study is to evaluate the efficacy of this methodology in the evaluation of MPI Stress studies and in the decision-making process concerning whether or not to continue the evaluation of each patient. The objective pursued was to automatically classify a patient's test into one of three groups: "Positive", "Negative" and "Indeterminate". "Positive" cases would proceed directly to the Rest part of the exam, "Negative" cases would be directly exempted from continuation, and only the "Indeterminate" group would require clinician analysis, thus saving clinician effort, increasing workflow fluidity at the technologist level and probably saving time for patients. Methods: The WEKA v3.6.2 open-source software was used to make a comparative analysis of three WEKA algorithms ("OneR", "J48" and "Naïve Bayes") in a retrospective study on the "SPECT Heart Dataset", available from the University of California, Irvine, Machine Learning Repository, using the corresponding clinical results, signed off by expert nuclear cardiologists, as the reference. For evaluation purposes, criteria such as "Precision", "Incorrectly Classified Instances" and "Receiver Operating Characteristic (ROC) Areas" were considered. Results: The interpretation of the data suggests that the Naïve Bayes algorithm has the best performance among the three selected algorithms. Conclusions: It is believed - and apparently supported by the findings - that machine learning algorithms could significantly assist, at an intermediary level, in the analysis of the scintigraphic data obtained in MPI, namely after the Stress acquisition, thereby increasing the efficiency of the entire system and potentially easing the roles of both Technologists and Nuclear Cardiologists. In the continuation of this study, it is planned to use more patient information and to significantly increase the population under study, in order to improve the system's accuracy.
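For illustration only, the sketch below reproduces this kind of comparison with scikit-learn analogues of the WEKA algorithms (Naive Bayes is represented by BernoulliNB for the binary SPECT features, OneR is approximated by a depth-one decision tree and J48 by an unpruned tree); the file name and column layout of the UCI SPECT data are assumptions.

# Comparative sketch of three classifiers on the SPECT Heart Dataset (assumed layout:
# first column is the class label, followed by 22 binary partial-diagnosis features).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import BernoulliNB
from sklearn.tree import DecisionTreeClassifier

data = np.loadtxt("SPECT.train", delimiter=",")
y, X = data[:, 0], data[:, 1:]

models = {
    "OneR (approx.)": DecisionTreeClassifier(max_depth=1),  # single-rule stand-in
    "J48 (approx.)":  DecisionTreeClassifier(),              # C4.5-like decision tree
    "Naive Bayes":    BernoulliNB(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name:15s} accuracy={acc.mean():.3f}  ROC AUC={auc.mean():.3f}")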
Abstract:
This study builds on previous experimental work in which embedded cylindrical heaters were applied to a pultrusion machine die and the resulting energy performance was compared with that achieved with the former heating system based on planar resistances. The previous work showed that the use of embedded resistances significantly enhances the energy performance of the pultrusion process, leading to a 57% decrease in energy consumption. However, that study was based on an existing pultrusion die, which allowed only a single relative position for the heaters. In the present work, new relative positions for the heaters were investigated in order to optimise the heat distribution and the energy consumption. Finite Element Analysis was applied as an efficient tool to identify the best relative position of the heaters within the die, taking into account the usual process parameters and the control system already tested in the previous study. The analysis was first carried out with eight cylindrical heaters arranged in four different layouts. In a second phase, in order to refine the results, a new approach was adopted using sixteen heaters with the same total power. The final results show that correct positioning of the heaters can reduce energy consumption by about 10%, decreasing production costs and leading to better eco-efficiency of the pultrusion process.
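As a rough illustration of how candidate heater layouts can be screened numerically, the sketch below uses a coarse two-dimensional finite-difference model rather than the finite-element analysis applied in the study; the grid, set-point temperature, boundary conditions and layouts are invented for illustration.

# Coarse 2D steady-state conduction model (Jacobi iteration) comparing two
# hypothetical heater layouts in a die cross-section.
import numpy as np

def steady_temperature(heater_cells, nx=40, ny=20, iters=5000):
    T = np.full((ny, nx), 20.0)          # start at ambient temperature (deg C)
    for _ in range(iters):
        # Jacobi update of interior cells from the four neighbours.
        T[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] + T[1:-1, :-2] + T[1:-1, 2:])
        for (j, i) in heater_cells:
            T[j, i] = 200.0              # heaters held at their set-point temperature
        T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 20.0   # die surfaces kept at ambient
    return T

# Two candidate layouts: heaters clustered near the centre vs. spread along the die.
layout_a = [(10, i) for i in (10, 15, 20, 25)]
layout_b = [(10, i) for i in (5, 15, 25, 35)]

for name, layout in [("clustered", layout_a), ("spread", layout_b)]:
    T = steady_temperature(layout)
    core = T[8:13, 5:35]                 # region occupied by the profile being cured
    print(f"{name}: mean core T = {core.mean():.1f} C, spread (std) = {core.std():.1f} C")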
Abstract:
Final Master's project submitted to obtain the degree of Master in Mechanical Engineering
Abstract:
Research project submitted in partial fulfilment of the Master's degree in Statistics and Information Management
Abstract:
Dissertation submitted to obtain the degree of Doctor in Statistics and Risk Management
Abstract:
Doctoral Program in Computer Science
Abstract:
A laboratory study has been conducted with two aims in mind. The first goal was to develop a description of how a cutting edge scrapes ice from the road surface. The second goal was to investigate the extent, if any, to which serrated blades are better than un-serrated or "classical" blades at ice removal. The tests were conducted in the Ice Research Laboratory at the Iowa Institute of Hydraulic Research of the University of Iowa. A specialized testing machine, with a hydraulic ram capable of attaining scraping velocities of up to 30 m.p.h., was used in the testing. To characterize the ice scraping process, the effects of scraping velocity, ice thickness, and blade geometry on the ice scraping forces were determined. Greater ice thickness led to more ice chipping (as opposed to the pulverization seen at lower thicknesses) and thus lower loads. Similar behavior was observed at higher velocities. The study of blade geometry included the effects of rake angle, clearance angle, and flat width. The latter two were found to be particularly important in developing a clear picture of the scraping process. As the clearance angle decreases and the flat width increases, the scraping loads show a marked increase, due to the need to re-compress pulverized ice fragments. The effect of serrations was to decrease the scraping forces. However, for the coarsest serrated blades (with the widest teeth and gaps) the quantity of ice removed was significantly less than for a classical blade. Finer serrations appear to be able to match the ice removal of classical blades at lower scraping loads. Thus, one of the recommendations of this study is to examine the use of serrated blades in the field. Preliminary work (by Nixon and Potter, 1996) suggests such work will be fruitful. A second and perhaps more challenging result of the study is that chipping of the ice is preferable to pulverization of the ice. How such chipping can be forced to occur is at present an open question.
Abstract:
Avalanche forecasting is a complex process involving the assimilation of multiple data sources to make predictions over varying spatial and temporal resolutions. Numerically assisted forecasting often uses nearest neighbour methods (NN), which are known to have limitations when dealing with high dimensional data. We apply Support Vector Machines to a dataset from Lochaber, Scotland to assess their applicability in avalanche forecasting. Support Vector Machines (SVMs) belong to a family of theoretically based techniques from machine learning and are designed to deal with high dimensional data. Initial experiments showed that SVMs gave results which were comparable with NN for categorical and probabilistic forecasts. Experiments utilising the ability of SVMs to deal with high dimensionality in producing a spatial forecast show promise, but require further work.
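A minimal sketch of such a comparison appears below, with an SVM and a nearest-neighbour baseline evaluated on synthetic stand-in data; the features and data are invented and do not reflect the Lochaber dataset.

# SVM vs. nearest-neighbour baseline for a binary avalanche / no-avalanche forecast.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical daily records: e.g. new snow, wind speed, air temperature, snow depth, ...
X = rng.normal(size=(400, 8))
y = (X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 2]
     + rng.normal(scale=0.5, size=400) > 0.8).astype(int)   # synthetic avalanche label

nn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=10))
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, probability=True))

for name, model in [("nearest neighbours", nn), ("SVM", svm)]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean ROC AUC = {auc.mean():.3f}")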
Abstract:
Automatic environmental monitoring networks, reinforced by wireless communication technologies, nowadays provide large and ever-increasing volumes of data. The use of this information in natural hazard research is an important issue. Particularly useful for risk assessment and decision making are spatial maps of hazard-related parameters produced from point observations and the available auxiliary information. The purpose of this article is to present and explore appropriate tools to process large amounts of available data and produce predictions at fine spatial scales. These are the algorithms of machine learning, which are aimed at non-parametric, robust modelling of non-linear dependencies from empirical data. The computational efficiency of these data-driven methods allows the prediction maps to be produced in real time, which makes them superior to physical models for operational use in risk assessment and mitigation. This situation is encountered, in particular, in the spatial prediction of climatic variables (topo-climatic mapping). In the complex topographies of mountainous regions, meteorological processes are strongly influenced by the relief. The article shows how these relations, possibly regionalized and non-linear, can be modelled from data using the information from digital elevation models. The particular illustration of the developed methodology concerns the mapping of temperatures (including Föhn and temperature inversion situations) given measurements taken from the Swiss meteorological monitoring network. The range of methods used in the study includes data-driven feature selection, support vector algorithms and artificial neural networks.
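A minimal sketch of this data-driven approach is given below: station temperature is predicted from terrain features that would typically be derived from a digital elevation model, with support vector regression and a small neural network; the feature set and the synthetic data are illustrative assumptions.

# Topo-climatic regression sketch: temperature from DEM-derived features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
n = 300
elevation = rng.uniform(300, 3000, n)      # m a.s.l.
slope     = rng.uniform(0, 40, n)          # degrees
x_coord   = rng.uniform(0, 100_000, n)     # m
y_coord   = rng.uniform(0, 100_000, n)     # m
# Synthetic temperature: a lapse rate plus a mild regional trend and noise,
# standing in for measurements from a monitoring network.
temp = 15 - 0.0065 * elevation + 2e-5 * x_coord + rng.normal(0, 1.0, n)

X = np.column_stack([x_coord, y_coord, elevation, slope])
models = {
    "SVR": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)),
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)),
}
for name, model in models.items():
    rmse = -cross_val_score(model, X, temp, cv=5, scoring="neg_root_mean_squared_error")
    print(f"{name}: mean RMSE = {rmse.mean():.2f} degC")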
Abstract:
In this work we present a simulation of a recognition process for simple plant leaves, using perimeter characterization as the single discriminating parameter. Data coding that makes the description independent of leaf size and orientation may penalize recognition performance for some varieties. Border description sequences are then used, and Principal Component Analysis (PCA) is applied in order to study the best number of components for the classification task, which is implemented by means of a Support Vector Machine (SVM) system. The results obtained are satisfactory and, compared with [4], our system improves the recognition success while reducing the variance.
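A minimal sketch of such a pipeline is shown below, sweeping the number of principal components retained before an SVM classifier; the synthetic contour signatures stand in for real border-description sequences and are invented for illustration.

# PCA component sweep followed by an SVM classifier on synthetic leaf contour signatures.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_per_class, n_points, n_classes = 60, 64, 3   # 64-point resampled contour per leaf
X, y = [], []
for c in range(n_classes):
    base = np.sin(np.linspace(0, 2 * np.pi * (c + 2), n_points))   # class-specific shape
    X.append(base + rng.normal(0, 0.3, size=(n_per_class, n_points)))
    y.append(np.full(n_per_class, c))
X, y = np.vstack(X), np.concatenate(y)

for k in (2, 5, 10, 20):
    clf = make_pipeline(StandardScaler(), PCA(n_components=k), SVC(kernel="rbf", C=1.0))
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{k:2d} components: cross-validated accuracy = {score:.3f}")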
Abstract:
The paper presents some contemporary approaches to spatial environmental data analysis. The main topics concern decision-oriented problems of environmental spatial data mining and modeling: valorization and representativity of data with the help of exploratory data analysis, spatial predictions, probabilistic and risk mapping, and the development and application of conditional stochastic simulation models. The innovative part of the paper presents an integrated/hybrid model: machine learning (ML) residuals sequential simulations (MLRSS). The models are based on multilayer perceptron and support vector regression ML algorithms used for modeling long-range spatial trends, combined with sequential simulations of the residuals. ML algorithms deliver non-linear solutions for spatially non-stationary problems, which are difficult for the geostatistical approach. Geostatistical tools (variography) are used to characterize the performance of the ML algorithms by analyzing the quality and quantity of the spatially structured information extracted from the data. Sequential simulations provide an efficient assessment of uncertainty and spatial variability. A case study on the Chernobyl fallout illustrates the performance of the proposed model. It is shown that probability mapping, provided by the combination of ML data-driven and geostatistical model-based approaches, can be used efficiently in the decision-making process. (C) 2003 Elsevier Ltd. All rights reserved.
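A minimal sketch of the hybrid idea is given below: a multilayer perceptron models the long-range trend, and the residuals are then inspected with a simple empirical variogram, the step that would precede sequential simulation of the residuals; the synthetic deposition field is an illustrative assumption, not Chernobyl data.

# ML trend modeling plus residual variography on a synthetic spatial field.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 500
coords = rng.uniform(0, 100, size=(n, 2))                  # km
trend = 50 + 0.4 * coords[:, 0] - 0.2 * coords[:, 1]       # long-range spatial trend
z = trend + rng.normal(0, 5, n)                            # observed "deposition" values

mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0))
mlp.fit(coords, z)
residuals = z - mlp.predict(coords)

def empirical_variogram(xy, r, lags):
    # Average semivariance of r for point pairs whose separation falls in each lag bin.
    d = np.sqrt(((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1))
    sq = 0.5 * (r[:, None] - r[None, :]) ** 2
    return [sq[(d > lo) & (d <= hi)].mean() for lo, hi in zip(lags[:-1], lags[1:])]

lags = np.linspace(0, 30, 7)
for (lo, hi), g in zip(zip(lags[:-1], lags[1:]), empirical_variogram(coords, residuals, lags)):
    print(f"lag {lo:4.1f}-{hi:4.1f} km: semivariance = {g:.2f}")
# A flat residual variogram indicates the ML trend model has extracted most of the
# spatially structured information; any remaining structure is what sequential simulation targets.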
Abstract:
The purpose of this Master's thesis was to find a means of controlling high manganese content in ECF bleaching. The literature part discussed the interactions between different metals and the fibre, as well as their effects on the process, and reviewed the most common metal-management methods in pulp production. In the experimental part, laboratory experiments were carried out as preliminary tests in order to find the right chelation strategies for the mill-scale trial runs. The laboratory bleaching runs were performed with six different chemicals, using pulp taken after the DD3 washer and parameters similar to those of the mill bleaching. Of the three bleaching sequences tested, the best result was achieved with the D0-QEP sequence. The goal of the mill-scale trial runs was to achieve a residual manganese content below 1 mg/kg in the bleached pulp and a higher brightness in the EOP stage with lower chlorine dioxide consumption. DTPA and EDTA were used in eight different trial points. The lowest residual contents were reached in the trial points where the chelating agent was dosed before or after the last washing stage of bleaching. Similar results were achieved in the trial points where the chelating agent was added directly to the EOP stage; in that case, the use of the chelating agent also led to a higher brightness in the EOP stage with a lower kappa factor than in the reference. However, the savings in chlorine dioxide consumption were not large enough to cover the cost of the chelating agents. The most cost-effective way to control the residual manganese content was to dose EDTA after the D2 DD washer; the drawback of this chelation approach was the return of metal complexes to bleaching with the circulation water of the drying machine. The most important parameters affecting successful chelation were the dose of sulphuric acid used in screening, the pH of the D0 stage, and the washing efficiency of the D0 DD washer.