18 results for Drilling process monitoring
in Aston University Research Archive
Abstract:
Deep hole drilling is one of the most complicated metal cutting processes and one of the most difficult to perform on CNC machine-tools or machining centres under conditions of limited manpower or unmanned operation. This research work investigates aspects of the deep hole drilling process with small diameter twist drills and presents a prototype system for real-time process monitoring and adaptive control; two main research objectives are fulfilled. The first objective is the experimental investigation of the mechanics of the deep hole drilling process, using twist drills without internal coolant supply, in the range of diameters Ø2.4 to Ø4.5 mm and working length up to 40 diameters. This includes the definition of the problems associated with the low strength of these tools and the study of the mechanisms of catastrophic failure, which manifest themselves well before, and along with, the classic mechanism of tool wear. The relationships of drilling thrust and torque with the depth of penetration and the various machining conditions are also investigated, and the experimental evidence suggests that the process is inherently unstable at depths beyond a few diameters. The second objective is the design and implementation of a system for intelligent CNC deep hole drilling, the main task of which is to ensure the integrity of the process and the safety of the tool and the workpiece. This task is achieved by interfacing the CNC system of the machine tool to an external computer which performs the following functions: on-line monitoring of the drilling thrust and torque; adaptive control of feed rate, spindle speed and tool penetration (Z-axis); indirect monitoring of tool wear by pattern recognition of variations of the drilling thrust with cumulative cutting time and drilled depth; operation as a database for tools and workpieces; and finally the issuing of alarms and diagnostic messages.
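The supervisory functions listed above lend themselves to a simple monitoring loop. The sketch below is only an illustration of the general idea; the thrust and torque limits, the feed-override steps and the read_sensors()/send_override()/retract() interfaces are hypothetical placeholders, not the thesis implementation.

```python
# Hypothetical limits and interfaces; not the thesis implementation.
THRUST_LIMIT_N = 250.0       # assumed safe thrust for a small twist drill
TORQUE_LIMIT_NCM = 60.0      # assumed safe torque

def supervise_drilling(read_sensors, send_override, retract, target_depth_mm):
    """Monitor thrust/torque and adapt the feed; retract if limits are approached."""
    depth = 0.0
    feed_override = 100                       # percent of the programmed feed rate
    while depth < target_depth_mm:
        thrust, torque, depth = read_sensors()
        if thrust > THRUST_LIMIT_N or torque > TORQUE_LIMIT_NCM:
            retract()                         # peck out to clear chips and protect the tool
            feed_override = max(feed_override - 10, 40)
        elif thrust > 0.8 * THRUST_LIMIT_N:
            feed_override = max(feed_override - 5, 40)   # ease off as the load rises
        else:
            feed_override = min(feed_override + 5, 100)  # recover towards full feed
        send_override(feed_override)          # pass the new override to the CNC
```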
Abstract:
This thesis describes the design and development of an autonomous micro-drilling system capable of accurately controlling the penetration of compliant tissues, and its application to the drilling of the cochleostomy, a key stage in the cochlear implant procedure. The drilling of the cochleostomy is a precision micro-surgical task in which the control of the burr penetration through the outer bone tissue of the cochlea is vital to prevent damage to the structures within, and it requires a high degree of skill to perform successfully. The micro-drilling system demonstrates that the penetration of the cochlea can be achieved consistently and accurately. Breakthrough can be detected and controlled to within 20 µm of the distal surface and the hole completed without perforation of the underlying endosteal membrane, leaving the membranous cochlea intact. This device is the first autonomous surgical tool successfully deployed in the operating theatre. The system is unique in the way it uses real-time data from the cutting tool to derive the state of the tool-tissue interaction. Being a smart tool, it uses this state information to actively control the way in which the drilling process progresses. This sensor-guided strategy enables the tool to self-reference to the deforming tissue and navigate without the need for pre-operative scan data. It is this capability that enables the system to operate in circumstances where the tissue properties and boundary conditions are unknown, without the need to restrain the patient.
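As a small illustration of detecting breakthrough from real-time cutting data, the following sketch flags the point at which a smoothed drilling-force signal drops to a fraction of its running peak. The window length and drop ratio are assumed values, and this is not the system's actual discrimination algorithm.

```python
import numpy as np

def detect_breakthrough(force, window=20, drop_ratio=0.6):
    """Return the first sample at which the smoothed force falls below a
    fraction of its running peak, used here as a crude breakthrough cue."""
    smoothed = np.convolve(force, np.ones(window) / window, mode="valid")
    running_peak = np.maximum.accumulate(smoothed)
    below = np.where(smoothed < drop_ratio * running_peak)[0]
    return int(below[0]) if below.size else None
```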
Abstract:
This paper considers the problem of extracting the relationships between two time series in a non-linear non-stationary environment with Hidden Markov Models (HMMs). We describe an algorithm which is capable of identifying associations between variables. The method is applied both to synthetic data and real data. We show that HMMs are capable of modelling the oil drilling process and that they outperform existing methods.
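A minimal sketch of the general approach: fit a Gaussian HMM jointly to two coupled series so that the hidden state captures their association. The hmmlearn package and the synthetic data are stand-ins, not the paper's own implementation or dataset.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # stand-in library, not used in the paper

# Synthetic example: y is a delayed, noisy response to x.
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=500))                          # "input" series
y = 0.5 * np.roll(x, 10) + rng.normal(scale=0.3, size=500)   # delayed response

X = np.column_stack([x, y])             # one observation row per time step
model = GaussianHMM(n_components=3, covariance_type="full", n_iter=100,
                    random_state=0)
model.fit(X)

states = model.predict(X)               # most likely hidden-state sequence
print("log-likelihood:", model.score(X))
```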
Abstract:
Most traditional methods for extracting the relationships between two time series are based on cross-correlation. In a non-linear non-stationary environment, these techniques are not sufficient. We show in this paper how to use hidden Markov models (HMMs) to identify the lag (or delay) between different variables for such data. We first present a method using maximum likelihood estimation and propose a simple algorithm which is capable of identifying associations between variables. We also adopt an information-theoretic approach and develop a novel procedure for training HMMs to maximise the mutual information between delayed time series. Both methods are successfully applied to real data. We model the oil drilling process with HMMs and estimate a crucial parameter, namely the lag for return.
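The maximum-likelihood idea can be sketched as a scan over candidate lags: shift one series, fit a joint HMM, and keep the lag with the highest per-sample log-likelihood. The lag range, state count and the use of hmmlearn are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # stand-in library, not the paper's code

def estimate_lag(x, y, max_lag=30, n_states=3):
    """Pick the delay of y relative to x that maximises the per-sample
    log-likelihood of a joint Gaussian HMM (illustrative settings)."""
    best_lag, best_score = 0, -np.inf
    for lag in range(max_lag + 1):
        n = len(x) - lag
        X = np.column_stack([x[:n], y[lag:lag + n]])   # align y shifted by `lag`
        model = GaussianHMM(n_components=n_states, covariance_type="full",
                            n_iter=50, random_state=0)
        model.fit(X)
        score = model.score(X) / n                     # normalise for fair comparison
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```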
Abstract:
Most traditional methods for extracting the relationships between two time series are based on cross-correlation. In a non-linear non-stationary environment, these techniques are not sufficient. We show in this paper how to use hidden Markov models to identify the lag (or delay) between different variables for such data. Adopting an information-theoretic approach, we develop a procedure for training HMMs to maximise the mutual information (MMI) between delayed time series. The method is used to model the oil drilling process. We show that cross-correlation gives no information and that the MMI approach outperforms maximum likelihood.
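A rough sketch of the mutual-information criterion: estimate the MI between x(t) and y(t + lag) over a range of lags and keep the maximiser. The histogram estimator below is only a stand-in for illustration; the paper instead trains HMMs with an MMI objective.

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Histogram estimate of I(a; b) in nats (illustrative only)."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)       # marginal of a
    py = p.sum(axis=0, keepdims=True)       # marginal of b
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

def lag_by_mmi(x, y, max_lag=30):
    """Return the delay of y relative to x that maximises the estimated MI."""
    scores = [mutual_information(x[:len(x) - lag], y[lag:])
              for lag in range(max_lag + 1)]
    return int(np.argmax(scores))
```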
Abstract:
This work is undertaken in an attempt to understand the processes at work at the cutting edge of the twist drill. Extensive drill life testing performed by the University has reinforced a survey of previously published information. This work demonstrated that there are two specific aspects of drilling which have not previously been explained comprehensively. The first concerns the interrelating of process data between differing drilling situations. There is no method currently available which allows the cutting geometry of drilling to be defined numerically, so such comparisons, where made, are purely subjective. Section one examines this problem by taking as an example a 4.5mm drill suitable for use with aluminium. This drill is examined using a prototype solid modelling program to explore how the required numerical information may be generated. The second aspect is the analysis of drill stiffness. Which aspects of drill stiffness produce the very great difference in performance between short, medium and long flute length drills? These differences exist between drills of identical point geometry, and the practical superiority of short drills has been known to shop floor drilling operatives since drilling was first introduced. This problem has been dismissed repeatedly as over-complicated, but section two provides a first approximation and shows that, at least for smaller drills of 4.5mm, the effects are highly significant. Once the cutting action of the twist drill is defined geometrically, there is a huge body of machinability data that becomes applicable to the drilling process. Work remains to interpret the very high inclination angles of the drill cutting process in terms of cutting forces and tool wear, but aspects of drill design may already be looked at in new ways, with the prospect of a more analytical approach rather than the present mix of experience and trial and error. Other problems are specific to the twist drill, such as the behaviour of the chips in the flute. It is now possible to predict the initial direction of chip flow leaving the drill cutting edge. In future, the parameters of further chip behaviour may also be explored within this geometric model.
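To give a feel for the stiffness question raised in section two, the back-of-envelope sketch below treats the fluted section of a 4.5 mm drill as a plain cylinder and shows how quickly torsional and bending stiffness fall with flute length. The material constants and the effective-diameter factor are assumed illustrative values, not the thesis's model.

```python
import numpy as np

G = 80e9                 # shear modulus of HSS, Pa (approximate)
E = 210e9                # Young's modulus of HSS, Pa (approximate)
d_eff = 0.8 * 4.5e-3     # assumed effective diameter of the fluted section, m

J = np.pi * d_eff**4 / 32        # polar second moment of area
I = np.pi * d_eff**4 / 64        # second moment of area

for L_mm in (30, 60, 120):       # short, medium and long flute lengths
    L = L_mm * 1e-3
    k_torsion = G * J / L        # N·m per radian of twist
    k_bending = 3 * E * I / L**3 # cantilever tip stiffness, N/m
    print(f"L={L_mm:4d} mm  torsional {k_torsion:8.2f} N·m/rad  "
          f"bending {k_bending:10.1f} N/m")
```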
Abstract:
This article demonstrates the use of embedded fibre Bragg gratings as a vector bending sensor to monitor two-dimensional shape deformation of a shape memory polymer plate. The shape memory polymer plate was made using a thermal-responsive epoxy-based shape memory polymer material, and the two fibre Bragg grating sensors were orthogonally embedded, one on the top and the other on the bottom layer of the plate, in order to measure the strain distribution in the longitudinal and transverse directions separately and also to provide a temperature reference. When the shape memory polymer plate was bent at different angles, the Bragg wavelengths of the embedded fibre Bragg gratings showed a red-shift of 50 pm/° caused by the bend-induced tensile strain on the plate surface. The finite element method was used to analyse the stress distribution for the whole shape recovery process. The strain transfer rate between the shape memory polymer and the optical fibre was also calculated from the finite element method and determined by experimental results, and was found to be around 0.25. During the experiment, the embedded fibre Bragg gratings showed very high temperature sensitivity due to the high thermal expansion coefficient of the shape memory polymer: around 108.24 pm/°C below the glass transition temperature (Tg) and 47.29 pm/°C above Tg. Therefore, the orthogonal arrangement of the two fibre Bragg grating sensors can provide temperature compensation, as one of the fibre Bragg gratings measures only the temperature while the other is subjected to the directional deformation. © The Author(s) 2013.
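The temperature-compensation arrangement can be sketched with the sensitivities quoted above: one grating reports temperature alone, the other reports temperature plus the bend-induced shift, so subtracting the two isolates the bending. Treating both gratings as having identical temperature response is an assumption made for illustration.

```python
# Sensitivities taken from the abstract (below Tg); assumed equal for both gratings.
K_TEMP_PM_PER_C = 108.24    # temperature sensitivity below Tg, pm/°C
K_BEND_PM_PER_DEG = 50.0    # wavelength red-shift per degree of bending, pm/°

def bend_angle(shift_bend_fbg_pm, shift_temp_fbg_pm):
    """Recover the bend angle after removing the common temperature shift."""
    temperature_change = shift_temp_fbg_pm / K_TEMP_PM_PER_C        # °C
    strain_only_shift = shift_bend_fbg_pm - shift_temp_fbg_pm       # pm
    return strain_only_shift / K_BEND_PM_PER_DEG, temperature_change

# Example: 541.2 pm on the temperature-reference grating and 1041.2 pm on the
# bend-sensing grating corresponds to roughly a 10° bend and a 5 °C rise.
print(bend_angle(1041.2, 541.2))   # ≈ (10.0, 5.0)
```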
Abstract:
Automatically generating maps of a measured variable of interest can be problematic. In this work we focus on the monitoring network context where observations are collected and reported by a network of sensors, and are then transformed into interpolated maps for use in decision making. Using traditional geostatistical methods, estimating the covariance structure of data collected in an emergency situation can be difficult. Variogram determination, whether by method-of-moment estimators or by maximum likelihood, is very sensitive to extreme values. Even when a monitoring network is in a routine mode of operation, sensors can sporadically malfunction and report extreme values. If this extreme data destabilises the model, causing the covariance structure of the observed data to be incorrectly estimated, the generated maps will be of little value, and the uncertainty estimates in particular will be misleading. Marchant and Lark [2007] propose a REML estimator for the covariance, which is shown to work on small data sets with a manual selection of the damping parameter in the robust likelihood. We show how this can be extended to allow treatment of large data sets together with an automated approach to all parameter estimation. The projected process kriging framework of Ingram et al. [2007] is extended to allow the use of robust likelihood functions, including the two component Gaussian and the Huber function. We show how our algorithm is further refined to reduce the computational complexity while at the same time minimising any loss of information. To show the benefits of this method, we use data collected from radiation monitoring networks across Europe. We compare our results to those obtained from traditional kriging methodologies and include comparisons with Box-Cox transformations of the data. We discuss the issue of whether to treat or ignore extreme values, making the distinction between the robust methods which ignore outliers and transformation methods which treat them as part of the (transformed) process. Using a case study based on an extreme radiological event over a large area, we show how radiation data collected from monitoring networks can be analysed automatically and then used to generate reliable maps to inform decision making. We show the limitations of the methods and discuss potential extensions to remedy these.
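As a small illustration of how a robust likelihood down-weights extreme sensor readings (not the REML or projected process kriging machinery itself), the sketch below uses Huber weights in an iteratively reweighted estimate of a location parameter.

```python
import numpy as np

def huber_mean(values, c=1.345, n_iter=50):
    """Huber-weighted location estimate via iteratively reweighted least squares."""
    values = np.asarray(values, dtype=float)
    mu = np.median(values)
    for _ in range(n_iter):
        scale = np.median(np.abs(values - mu)) / 0.6745 + 1e-12   # robust scale (MAD)
        r = np.abs(values - mu) / scale
        w = np.minimum(1.0, c / np.maximum(r, 1e-12))             # Huber weights
        mu = np.sum(w * values) / np.sum(w)
    return mu

rng = np.random.default_rng(0)
readings = np.concatenate([rng.normal(1.0, 0.1, 200), [50.0, 80.0]])  # two rogue sensors
print(huber_mean(readings))   # stays close to 1.0 despite the outliers
```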
Abstract:
The work presents a new method that combines plasma etching with extrinsic techniques to simultaneously measure matrix and surface protein and lipid deposits. The acronym for this technique is PEEMS - Plasma Etching and Emission Monitoring System. Previous work has identified the presence of proteinaceous and lipoidal deposition on the surface of contact lenses and highlighted the probability that penetration of these spoilants will occur. The technique developed here allows unambiguous identification of the depth of penetration of spoilants for various material types. It is for this reason that the technique has been employed in this thesis. The technique is applied as a 'molecular' scalpel, removing known amounts of material from the target, in this case from both the anterior and posterior surfaces of a 'soft' contact lens. The residual material is then characterised by other analytical techniques such as UV/visible and fluorescence spectroscopy. Several studies have been carried out on both in vivo and in vitro spoilt materials. The analysis and identification of absorbed protein and lipid in the substrate revealed the importance of many factors in the absorption and adsorption process. The effects of the material structure, protein nature (in terms of size, shape and charge) and environmental conditions were examined in order to determine the relative uptake of tear proteins. The studies were extended to real cases in order to study the patient-dependent factors and lipoidal penetration.
Abstract:
Large monitoring networks are becoming increasingly common and can generate large datasets, from thousands to millions of observations in size, often with high temporal resolution. Processing large datasets using traditional geostatistical methods is prohibitively slow, and in real-world applications different types of sensor can be found across a monitoring network. Heterogeneity in the error characteristics of different sensors, both in terms of distribution and magnitude, presents problems for generating coherent maps. An assumption in traditional geostatistics is that observations are made directly of the underlying process being studied and that the observations are contaminated with Gaussian errors. Under this assumption, sub-optimal predictions will be obtained if the error characteristics of the sensor are effectively non-Gaussian. One method, model-based geostatistics, assumes that a Gaussian process prior is imposed over the (latent) process being studied and that the sensor model forms part of the likelihood term. One problem with this type of approach is that the corresponding posterior distribution will be non-Gaussian and computationally demanding, as Monte Carlo methods have to be used. An extension of a sequential, approximate Bayesian inference method enables observations with arbitrary likelihoods to be treated in a projected process kriging framework which is less computationally intensive. The approach is illustrated using a simulated dataset with a range of sensor models and error characteristics.
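The kind of simulated dataset described above might look like the sketch below: a single latent process observed through sensors with different, partly non-Gaussian error characteristics. The specific noise models are illustrative assumptions, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 200)
latent = np.sin(x) + 0.3 * np.sin(3 * x)          # stand-in latent process

gaussian_sensor = latent + rng.normal(0.0, 0.05, x.size)                 # well-behaved
heavy_tailed_sensor = latent + rng.standard_t(df=2, size=x.size) * 0.05  # occasional outliers
quantised_sensor = np.round(latent / 0.25) * 0.25                        # coarse ADC

# A model-based approach keeps `latent` as a Gaussian-process prior and plugs each
# sensor's own likelihood into the inference, rather than assuming one Gaussian
# error model for all observations.
for name, obs in [("gaussian", gaussian_sensor),
                  ("heavy-tailed", heavy_tailed_sensor),
                  ("quantised", quantised_sensor)]:
    print(name, "RMS error:", float(np.sqrt(np.mean((obs - latent) ** 2))))
```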
Abstract:
An intelligent agent, operating in an external world which cannot be fully described in its internal world model, must be able to monitor the success of a previously generated plan and to respond to any errors which may have occurred. The process of error analysis requires the ability to reason in an expert fashion about time and about processes occurring in the world. Reasoning about time is needed to deal with causality. Reasoning about processes is needed since the direct effects of a plan action can be completely specified when the plan is generated, but the indirect effects cannot. For example, the action `open tap' leads with certainty to `tap open', whereas whether there will be a fluid flow and how long it might last is more difficult to predict. The majority of existing planning systems cannot handle these kinds of reasoning, thus limiting their usefulness. This thesis argues that both kinds of reasoning require a complex internal representation of the world. The use of Qualitative Process Theory and an interval-based representation of time are proposed as a representation scheme for such a world model. The planning system which was constructed has been tested on a set of realistic planning scenarios. It is shown that even simple planning problems, such as making a cup of coffee, require extensive reasoning if they are to be carried out successfully. The final Chapter concludes that the planning system described does allow the correct solution of planning problems involving complex side effects, which planners up to now have been unable to solve.
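The tap example can be written down in a Qualitative Process Theory flavour as a small data structure: a process is active only while its preconditions hold, and its influences describe the indirect effects the planner must monitor. The representation below is a toy sketch, not the thesis's actual formalism.

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    name: str
    preconditions: set = field(default_factory=set)    # facts that must hold
    influences: dict = field(default_factory=dict)     # quantity -> qualitative trend

fluid_flow = Process(
    name="fluid-flow",
    preconditions={"tap open", "supply pressure present"},
    influences={"tank level": "+", "supply volume": "-"},
)

world = {"tap open"}                        # direct, certain effect of `open tap'
active = fluid_flow.preconditions <= world  # indirect effect only if all conditions hold
print("fluid flow active:", active)         # False until the pressure is also asserted
```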
Abstract:
Computer-integrated monitoring is a very large area in engineering where on-line, real-time data acquisition with the aid of sensors is the solution to many problems in the manufacturing industry, as opposed to the old method of data logging and graphical analysis. The raw data which is collected this way, however, is useless in the absence of a proper computerized management system. The transfer of data between the management and the shop floor processes has been impossible in the past unless all the computers in the system were totally compatible with each other. This limits the efficiency of the systems because they become governed by the limitations of the computers. General Motors of the U.S.A. have recently started research on a new standard called the Manufacturing Automation Protocol (MAP), which is expected to allow data transfer between different types of computers. This is still in the early stages of development and is also currently very expensive. This research programme shows how such a shop floor data acquisition system and a complete management system on entirely different computers can be integrated to form a single system, achieving data transfer communications using a cheaper but superior alternative to MAP. Standard communication character sets and hardware, such as ASCII and UARTs, have been used in this method, but the technique is so powerful that totally incompatible computers are shown to run different programs (in different languages) simultaneously and yet receive data from each other and process it in their own CPUs with no human intervention.
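The character-based approach can be illustrated with a small framing/parsing sketch: each reading travels as printable ASCII delimited by STX/ETX with a simple checksum, so any machine with a UART can exchange it. The frame layout and field names are assumptions for illustration, not the actual protocol developed in the research.

```python
STX, ETX = "\x02", "\x03"

def frame(station: str, value: float) -> str:
    """Build an ASCII frame: STX, payload, checksum, ETX (illustrative layout)."""
    payload = f"{station},{value:.3f}"
    checksum = sum(payload.encode("ascii")) % 256
    return f"{STX}{payload},{checksum:03d}{ETX}"

def parse(message: str):
    """Strip delimiters, verify the checksum and return (station, value)."""
    payload, checksum = message.strip(STX + ETX).rsplit(",", 1)
    if sum(payload.encode("ascii")) % 256 != int(checksum):
        raise ValueError("checksum mismatch")
    station, value = payload.split(",")
    return station, float(value)

print(parse(frame("LATHE-07", 12.345)))   # -> ('LATHE-07', 12.345)
```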
Abstract:
Research in the present thesis is focused on the norms, strategies, and approaches which translators employ when translating humour in Children's Literature from English into Greek. It is based on process-oriented descriptive translation studies, since the focus is on investigating the process of translation. Viewing translation as a cognitive process and a problem-solving activity, this thesis employs Think-aloud protocols (TAPs) in order to investigate translators' minds. As it is not possible to directly observe the human mind at work, an attempt is made to ask the translators themselves to reveal their mental processes in real time by verbalising their thoughts while carrying out a translation task involving humour. In this study, thirty participants at three different levels of expertise in translation competence, i.e. ten beginner, ten competent, and ten expert translators, were requested to translate two humorous extracts from the fictional diary novel The Secret Diary of Adrian Mole, Aged 13 ¾ by Sue Townsend (1982) from English into Greek. As they translated, they were asked to verbalise their thoughts and reason them, whenever possible, so that their strategies and approaches could be detected and, subsequently, the norms that govern these strategies and approaches could be revealed. The thesis consists of four parts: the introduction, the literature review, the study, and the conclusion, and is developed in eleven chapters. The introduction contextualises the study within translation studies (TS) and presents its rationale, research questions, aims, and significance. Chapters 1 to 7 present an extensive and inclusive literature review identifying the principles and axioms that guide and inform the study. In these seven chapters the following areas are critically introduced: Children's Literature (Chapter 1), Children's Literature Translation (Chapter 2), Norms in Children's Literature (Chapter 3), Strategies in Children's Literature (Chapter 4), Humour in Children's Literature Translation (Chapter 5), Development of Translation Competence (Chapter 6), and Translation Process Research (Chapter 7). In Chapters 8 to 11 the fieldwork is described in detail. The pilot and the main study are described with reference to the environments and setting, the participants, the researcher-observer, the data and its analysis, and the limitations of the study. The findings of the study are presented and analysed in Chapter 9. Three models are then suggested for systematising translators' norms, strategies, and approaches, thus filling the existing gap in the field. Pedagogical norms (e.g. appropriateness/correctness, familiarity, simplicity, comprehensibility, and toning down), literary norms (e.g. sound of language and fluency), and source-text norms (e.g. equivalence) were revealed to be the most prominent general and specific norms governing the translators' strategies and approaches in the process of translating humour in ChL. The data also revealed that monitoring and communication strategies (e.g. additions, omissions, and exoticism) were the prevalent strategies employed by translators. In Chapter 10 the main findings and outcomes of potential secondary benefit (beneficial outcomes) are discussed on the basis of the research questions and aims of the study, and the implications of the study are addressed in Chapter 11. In the conclusion, suggestions for future directions are given and final remarks noted.
Abstract:
The Product Service Systems, servitization, and Service Science literatures continue to grow as organisations seek to protect and improve their competitive position. The potential of technology applications to deliver service delivery systems, facilitated by the ability to make real-time decisions based upon ‘in the field’ performance, is also significant. Research identifies four key questions to be addressed, namely: how far along the servitization continuum should the organisation go in a single strategic step? Does the organisation have the structure and infrastructure to support this transition? What level of condition monitoring should it employ? Is the product positioned correctly in the value chain to adopt condition monitoring technology? Strategy consists of three dimensions, namely content, context, and process. The literature relating to PSS, servitization, and strategy discusses the concepts relative to content and context, but none of it offers a process for delivering an aligned strategy for a service delivery system enabled by condition-based management. This paper presents a tested iterative strategy formulation methodology which is the result of a structured development programme.