13 results for maintenance computerized system
in Aston University Research Archive
Abstract:
Purpose - To develop a systems strategy for supply chain management in aerospace maintenance, repair and overhaul (MRO). Design/methodology/approach - A standard systems development methodology has been followed to produce a process model (i.e. the AMSCR model), an information model (i.e. business rules), and a computerised information management capability (i.e. automated optimisation). Findings - The proof of concept for this web-based MRO supply chain system has been established through collaboration with a sample of the different types of supply chain members. The proven benefits comprise new potential to minimise the stock holding costs of the whole supply chain whilst also minimising non-flying time of the aircraft that the supply chain supports. Research limitations/implications - The scale of change needed to successfully model and automate the supply chain is vast. This research is a limited-scale experiment intended to show the power of process analysis and automation, coupled with strategic use of management science techniques, to derive tangible business benefit. Practical implications - This type of system is now vital in an industry that has continuously decreasing profit margins, which in turn means pressure to reduce servicing times and increase the mean time between them. Originality/value - Original work has been conducted at several levels: process, information and automation. The proof-of-concept system has been applied to an aircraft MRO supply chain. This is an area of research that has been neglected, and as a result is not well served by current systems solutions. © Emerald Group Publishing Limited.
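The automated optimisation capability is only named in the abstract, not specified. As a minimal sketch of the trade-off it targets, the toy linear programme below minimises stock-holding cost across a small set of rotable parts subject to an aircraft-availability (non-flying time) constraint; all figures and the use of scipy's linprog are illustrative assumptions, not the AMSCR system's actual formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Toy MRO stocking problem: choose stock levels for three rotable parts
# held across the supply chain. All figures are illustrative.
holding_cost = np.array([120.0, 80.0, 200.0])   # cost per unit held per period

# Service constraint: expected aircraft-on-ground (AOG) hours fall as stock
# rises; require total expected AOG hours <= 50 per period.
aog_hours_avoided_per_unit = np.array([6.0, 3.0, 9.0])
baseline_aog_hours = 140.0

# linprog minimises c @ x subject to A_ub @ x <= b_ub.
res = linprog(
    c=holding_cost,
    A_ub=[-aog_hours_avoided_per_unit],    # avoided hours >= 140 - 50
    b_ub=[-(baseline_aog_hours - 50.0)],
    bounds=[(0, 20)] * 3,                  # physical stock limits per part
)
print("stock levels:", res.x.round(1), "holding cost:", round(res.fun))
```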
Abstract:
The existing method of pipeline health monitoring, which requires an entire pipeline to be inspected periodically, is unproductive. A risk-based decision support system (DSS) that reduces the amount of time spent on inspection is presented. The risk-based DSS uses the analytic hierarchy process (AHP), a multiple attribute decision-making technique, to identify the factors that influence failure on specific segments and analyzes their effects by determining the probability of occurrence of these risk factors. The severity of failure is determined through consequence analysis. From this, the effect of a failure caused by each risk factor can be established in terms of cost, and the cumulative effect of failure is determined through probability analysis. The model optimizes the cost of pipeline operations by reducing subjectivity in selecting a specific inspection method, identifying and prioritizing the right pipeline segments for inspection and maintenance, deriving budget allocations, providing guidance on deploying the right mix of labor for inspection and maintenance, planning emergency preparedness, and deriving a logical insurance plan. The proposed methodology also helps derive an inspection and maintenance policy for the entire pipeline system and suggests design, operational philosophy, and construction methodology for new pipelines.
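The calculation chain the abstract describes (AHP weights for risk factors, probability of occurrence, consequence cost, cumulative effect) can be sketched compactly. Below is a minimal illustration, assuming an invented pairwise comparison matrix, probabilities, and costs; the paper's actual factors and figures are not given in the abstract.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three risk factors
# (e.g. corrosion, third-party damage, construction defects) on one segment.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# AHP weights: principal eigenvector of A, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()

# Consistency ratio check (random index RI = 0.58 for a 3x3 matrix).
lam_max = np.real(eigvals).max()
ci = (lam_max - len(A)) / (len(A) - 1)
print("weights:", w.round(3), "CR:", round(ci / 0.58, 3))

# Illustrative probabilities of occurrence and consequence costs per factor.
p = np.array([0.04, 0.02, 0.01])        # annual probability per factor
cost = np.array([5e6, 8e6, 2e6])        # failure consequence in currency units

# Cumulative expected failure cost, weighted by AHP factor importance.
expected_cost = float(np.sum(w * p * cost))
print("risk-weighted expected cost:", expected_cost)
```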
Abstract:
Knowledge maintenance is a major challenge for both knowledge management and the Semantic Web. Operating over the Semantic Web will be a network of collaborating agents, each with its own ontologies or knowledge bases. A change in the knowledge state of one agent may need to be propagated across a number of agents and their associated ontologies. The challenge is to decide how to propagate a change of knowledge state. The effects of a change in knowledge state cannot be known in advance, and so an agent cannot know who should be informed unless it adopts a simple ‘tell everyone – everything’ strategy. This situation is highly reminiscent of the classic Frame Problem in AI. We argue that for agent-based technologies to succeed, far greater attention must be given to creating an appropriate model for knowledge update. In a closed system, simple strategies are possible (e.g. ‘sleeping dog’, ‘cheap test’, or even complete checking). However, in an open system where cause and effect are unpredictable, a coherent cost-benefit based model of agent interaction is essential. Otherwise, the effectiveness of every act of knowledge update/maintenance is brought into question.
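A toy sketch of the cost-benefit model of agent interaction argued for here, with invented relevance, benefit, and cost figures: an agent propagates an update to a peer only when the expected benefit exceeds the communication and re-validation cost, with the ‘tell everyone – everything’ strategy as the degenerate case that skips the test.

```python
from dataclasses import dataclass

@dataclass
class Peer:
    name: str
    relevance: float   # estimated probability the update affects this peer's KB
    benefit: float     # value of the peer holding the corrected knowledge
    cost: float        # cost of messaging plus the peer's re-validation work

def peers_to_notify(peers):
    """Cost-benefit strategy: notify a peer only if expected benefit exceeds cost.

    'Tell everyone - everything' is the special case where every peer passes
    this test regardless of relevance.
    """
    return [p.name for p in peers if p.relevance * p.benefit > p.cost]

peers = [
    Peer("ontology-server", relevance=0.9, benefit=10.0, cost=1.0),
    Peer("archival-agent", relevance=0.05, benefit=2.0, cost=1.0),
]
print(peers_to_notify(peers))  # -> ['ontology-server']
```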
Abstract:
Offshore oil and gas pipelines are environmentally vulnerable, as any leak or burst causes an oil/gas spill with huge negative impacts on marine life. Breakdown maintenance of these pipelines is also cost-intensive and time-consuming, resulting in huge tangible and intangible losses to pipeline operators. Pipeline health monitoring and integrity analysis have been researched extensively in support of successful pipeline operations, and the risk-based maintenance model is one outcome of that research. This study develops a risk-based maintenance model using a combined multiple-criteria decision-making and weight method for offshore oil and gas pipelines in Thailand, with the active participation of experienced executives. The model's effectiveness has been demonstrated through real-life application to oil and gas pipelines in the Gulf of Thailand. Practical implications - A risk-based inspection and maintenance methodology is particularly important for oil pipeline systems, as any failure in the system will not only affect productivity negatively but also have a tremendous negative environmental impact. The proposed model helps pipeline operators to analyse the health of pipelines dynamically and to select a specific inspection and maintenance method for a specific section in line with its probability and severity of failure.
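The combined multiple-criteria decision-making and weight method is not detailed in the abstract; a plain weighted-sum ranking of segments, shown below with invented scores and weights, illustrates the general shape of such a model.

```python
import numpy as np

# Hypothetical criteria scores (rows: pipeline segments, columns: criteria
# such as corrosion rate, age, pressure, proximity to sensitive areas),
# each scored 1-5 by experts, with weights from a weighting method.
scores = np.array([
    [4, 5, 3, 2],   # segment A
    [2, 2, 4, 5],   # segment B
    [3, 1, 2, 1],   # segment C
])
weights = np.array([0.4, 0.3, 0.2, 0.1])  # expert-derived, sum to 1

risk_index = scores @ weights
ranking = np.argsort(-risk_index)  # highest-risk segment first
for i in ranking:
    print(f"segment {'ABC'[i]}: risk index {risk_index[i]:.2f}")
```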
Abstract:
Proper maintenance of plant items is crucial for the safe and profitable operation of process plants. The relevant maintenance policies fall into the following four categories: (i) preventive/opportunistic/breakdown replacement policies, (ii) inspection/inspection-repair-replacement policies, (iii) restorative maintenance policies, and (iv) condition-based maintenance policies. For correlating failure times of component equipment and complete systems, the Weibull failure distribution has been used. A new powerful method, SEQLIM, has been proposed for the estimation of the Weibull parameters, particularly when maintenance records contain very few failures and many successful operation times. When a system consists of a number of replaceable, ageing components, an opportunistic replacement policy has been found to be cost-effective, and a simple opportunistic model has been developed. Inspection models with various objective functions have been investigated. It was found that, on the assumption of a negative exponential failure distribution, all models converge to the same optimal inspection interval, provided the safety components are very reliable and the demand rate is low. When deterioration becomes a contributory factor to some failures, periodic inspections calculated from the above models are too frequent; a case of safety trip systems has been studied. A highly effective restorative maintenance policy can be developed if the performance of the equipment in this category can be related to some predictive modelling. A novel fouling model has been proposed to determine cleaning strategies for condensers. Condition-based maintenance policies have been investigated, and a simple gauge has been designed for condition monitoring of relief valve springs. A typical case of an exothermic inert gas generation plant has been studied to demonstrate how the various policies can be applied to devise overall maintenance actions.
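SEQLIM itself is not described in the abstract, so the sketch below shows the conventional baseline it is proposed to improve on: maximum-likelihood estimation of the Weibull shape and scale from a maintenance record containing a few failures and many right-censored (successful operation) times. The data and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_neg_log_lik(params, times, failed):
    """Negative log-likelihood of Weibull(shape, scale) with right-censoring.

    Failures contribute the density term; censored (still-running) units
    contribute only the survival term exp(-(t/scale)**shape).
    """
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    z = times / scale
    log_f = np.log(shape / scale) + (shape - 1) * np.log(z) - z**shape
    log_s = -z**shape
    return -np.sum(np.where(failed, log_f, log_s))

# Illustrative record: 3 failures and 7 censored operating times (hours).
times = np.array([800., 1200., 1500., 2000., 2000., 2000.,
                  2500., 2500., 3000., 3000.])
failed = np.array([True, True, True] + [False] * 7)

res = minimize(weibull_neg_log_lik, x0=[1.0, 2000.0],
               args=(times, failed), method="Nelder-Mead")
shape_hat, scale_hat = res.x
print(f"shape = {shape_hat:.2f}, scale = {scale_hat:.0f} h")
```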
Abstract:
This thesis introduces and develops a novel real-time predictive maintenance system to estimate machine system parameters using the motion current signature. Recently, motion current signature analysis has been addressed as an alternative to the use of sensors for monitoring internal faults of a motor. A maintenance system based upon the analysis of the motion current signature avoids the need for the implementation and maintenance of expensive motion sensing technology. By developing nonlinear dynamical analysis for the motion current signature, the research described in this thesis implements a novel real-time predictive maintenance system for current and future manufacturing machine systems. A crucial concept underpinning this project is that the motion current signature contains information relating to the machine system parameters and that this information can be extracted using nonlinear mapping techniques, such as neural networks. Towards this end, a proof-of-concept procedure is performed, which substantiates this concept. A simulation model, TuneLearn, is developed to simulate the large amount of training data required by the neural network approach. Statistical validation and verification of the model are performed to ascertain confidence in the simulated motion current signature. The validation experiment concludes that, although the simulation model generates a good macro-dynamical mapping of the motion current signature, it fails to accurately map the micro-dynamical structure due to the lack of knowledge regarding the performance of higher-order and nonlinear factors, such as backlash and compliance. Failure of the simulation model to determine the micro-dynamical structure suggests the presence of nonlinearity in the motion current signature. This motivated us to perform surrogate data testing for nonlinearity in the motion current signature. Results confirm the presence of nonlinearity in the motion current signature, thereby motivating the use of nonlinear techniques for further analysis. Outcomes of the experiment show that nonlinear noise reduction combined with the linear reverse algorithm offers precise machine system parameter estimation using the motion current signature for the implementation of the real-time predictive maintenance system. Finally, a linear reverse algorithm, BJEST, is developed and applied to the motion current signature to estimate the machine system parameters.
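The surrogate data test mentioned above has a standard form that can be sketched briefly: generate phase-randomised surrogates that preserve the linear correlations of the signal, then compare a nonlinear discriminating statistic. The statistic below (time-asymmetry of differences) and the stand-in signal are assumptions; the thesis' actual statistic and the BJEST algorithm are not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def phase_randomised_surrogate(x):
    """Surrogate with the same power spectrum but randomised Fourier phases,
    destroying nonlinear structure while preserving linear correlations."""
    spectrum = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(spectrum))
    phases[0] = 0.0  # keep the mean component real
    return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=len(x))

def nonlinear_stat(x):
    """Time-asymmetry statistic; near zero in expectation for linear
    Gaussian processes, so large values hint at nonlinearity."""
    d = x[1:] - x[:-1]
    return np.mean(d**3) / np.mean(d**2) ** 1.5

# Illustrative signal standing in for a motion current signature.
t = np.linspace(0, 100, 4000)
x = np.sin(t) + 0.4 * np.sin(2.3 * t) ** 2 + 0.1 * rng.standard_normal(len(t))

observed = abs(nonlinear_stat(x))
surrogates = [abs(nonlinear_stat(phase_randomised_surrogate(x)))
              for _ in range(99)]
rank = sum(s >= observed for s in surrogates)
print(f"surrogates beating the data: {rank}/99 (0 suggests nonlinearity)")
```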
Abstract:
Faced with a future of rising energy costs, industry needs to manage energy more carefully in order to meet its economic objectives. A problem besetting the growth of energy conservation in the UK is that a large proportion of energy consumption is used in a low-intensive manner in organisations where the responsibility for energy efficiency is spread over a large number of personnel who each see only small energy costs. In relation to this problem in the non-energy-intensive industrial sector, an application of an energy management technique known as monitoring and targeting (M & T) has been installed at the Whetstone site of the General Electric Company Limited in an attempt to prove it as a means for motivating line management and personnel to save energy. The objective energy saving for which the M & T was devised is very specific. During early energy conservation work at the site there had been a change from continuous to intermittent heating, but the maintenance of the strategy was receiving a poor level of commitment from line management and performance was some 5% - 10% less than expected. The M & T is therefore concerned with heat for space heating, for which a heat metering system was required. Metering of the site's high-pressure hot water system posed technical difficulties and expenditure was also limited. This led to an 'in-house' design being installed for a price less than the commercial equivalent. The timespan of work to achieve an operational heat metering system was three years, which meant that energy saving results from the scheme were not observed during the study. If successful, the replication potential lies in the larger non-energy-intensive sites, from which some 30 PT savings could be expected in the UK.
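The abstract does not detail the M & T calculations; the sketch below shows the textbook form of the technique, regressing metered heat against heating degree days to set a target and accumulating a CUSUM of deviations, which is the kind of trace that would reveal a lapse in the intermittent-heating strategy. All figures are invented.

```python
import numpy as np

# Illustrative weekly data: heating degree days and metered heat (GJ).
degree_days = np.array([60, 55, 70, 40, 30, 65, 50, 45])
heat_used = np.array([310, 290, 355, 235, 200, 360, 300, 285])

# Target line from a baseline regression: heat = base_load + slope * degree_days.
slope, base_load = np.polyfit(degree_days, heat_used, 1)
target = base_load + slope * degree_days

# CUSUM of actual minus target; a persistent upward trend flags a lapse
# in the heating strategy rather than random scatter.
cusum = np.cumsum(heat_used - target)
for week, c in enumerate(cusum, 1):
    print(f"week {week}: cumulative excess {c:+.1f} GJ")
```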
Abstract:
This thesis presents the results of an investigation into the merits of analysing magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of the methods for measuring minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities that could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis is to prove that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are necessarily extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings; however, this has a number of notable drawbacks. In particular, it is difficult to synchronise high-frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are the high costs and low portability of state-of-the-art multichannel machines. The result is that the use of MEG has hitherto been restricted to large institutions able to afford the high costs associated with the procurement and maintenance of these machines.
In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
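As one concrete illustration of applying such tools to single-channel data, the sketch below delay-embeds one channel into lagged copies and runs independent component analysis over them; the stand-in signal, embedding depth, and use of scikit-learn's FastICA are assumptions for illustration, not the thesis' own pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)

# Illustrative single-channel signal: two latent rhythms plus noise,
# standing in for an unaveraged MEG recording.
t = np.linspace(0, 10, 5000)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sign(np.sin(2 * np.pi * 3 * t))
x += 0.2 * rng.standard_normal(len(t))

# Delay-embed the single channel into a matrix of lagged copies so that
# ICA has multiple 'virtual channels' to separate.
lags = 20
embedded = np.column_stack([x[i:len(x) - lags + i] for i in range(lags)])

ica = FastICA(n_components=4, random_state=0)
components = ica.fit_transform(embedded)
print(components.shape)  # (n_samples, 4) candidate source activations
```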
Abstract:
Classification of metamorphic rocks is normally carried out using a poorly defined, subjective classification scheme, making this an area in which many undergraduate geologists experience difficulties. An expert system to assist in such classification is presented which is capable of classifying rocks and also giving further details about a particular rock type. A mixed knowledge representation is used, with frame, semantic and production rule systems available. Classification in the domain requires that different facets of a rock be classified. To implement this, rocks are represented by 'context' frames with slots representing each facet. Slots are satisfied by calling a pre-defined ruleset to carry out the necessary inference. The inference is handled by an interpreter which uses a dependency graph representation for the propagation of evidence. Uncertainty is handled by the system using a combination of the MYCIN certainty factor system and the Dempster-Shafer range mechanism. This allows for positive and negative reasoning, with rules capable of representing necessity and sufficiency of evidence, whilst also allowing the implementation of an alpha-beta pruning algorithm to guide question selection during inference. The system also utilizes a semantic net type structure to allow the expert to encode simple relationships between terms, enabling rules to be written with a sensible level of abstraction. Using frames to represent rock types where subclassification is possible allows the knowledge base to be built in a modular fashion, with subclassification frames only defined once the higher level of classification is functioning. Rulesets can similarly be added in modular fashion, with the individual rules being essentially declarative, allowing for simple updating and maintenance. The knowledge base so far developed for metamorphic classification serves to demonstrate the performance of the interpreter design whilst also moving some way towards providing a useful assistant to the non-expert metamorphic petrologist. The system demonstrates the possibilities for a fully developed knowledge base to handle the classification of igneous, sedimentary and metamorphic rocks. The current knowledge base and interpreter have been evaluated by potential users and experts. The results of the evaluation show that the system performs to an acceptable level and should be of use as a tool for both undergraduates and researchers from outside the metamorphic petrography field.
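The MYCIN certainty-factor combination used by the interpreter is standard and easily sketched; the rule values below are invented, and the Dempster-Shafer range mechanism is omitted for brevity.

```python
def combine_cf(a, b):
    """MYCIN combination of two certainty factors in [-1, 1].

    Positive evidence reinforces upward, negative evidence reinforces
    downward, and mixed evidence is attenuated by the weaker factor.
    """
    if a >= 0 and b >= 0:
        return a + b * (1 - a)
    if a < 0 and b < 0:
        return a + b * (1 + a)
    return (a + b) / (1 - min(abs(a), abs(b)))

# Invented rule conclusions about one facet of a rock's classification:
# two supporting rules and one weakly contradicting rule.
cf = 0.0
for rule_cf in (0.6, 0.4, -0.2):
    cf = combine_cf(cf, rule_cf)
print(f"combined certainty: {cf:.3f}")  # -> 0.700
```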
Abstract:
The objective of this thesis is to investigate, through an empirical study, the different functions of highways maintenance departments and to suggest methods by which road maintenance work could be carried out more efficiently, utilising its resources of men, materials and plant to the utmost advantage. This is particularly important under the present circumstances of national financial difficulty, which has resulted in continuous cuts in public expenditure. In order to achieve this objective, the researcher carried out a survey among several Highways Authorities by means of questionnaire and interview. The information so collected was analysed in order to understand the actual, practical situation within highways maintenance departments, to highlight any existing problems, and to try to answer the question of how they could become more efficient. According to the results obtained from the questionnaire and the interviews, and the analysis of these results, the researcher concludes that it is in the management system that least has been done, and where problems exist and are most complex. The management of highways maintenance departments argue that the reasons for their problems include both financial and organisational difficulties, apart from the political aspect and nature of the activities undertaken. The researcher believes that this ought to necessitate improving management's analytical tools and techniques in order to achieve the most effective way of performing each activity. To this end the researcher recommends several related procedures to be adopted by the management of highways maintenance departments. These recommendations, arising from the study, involve technical, practical and human aspects. These are essential factors of which management should be aware - and certainly should not neglect - in order to achieve its objectives of improved productivity in highways maintenance departments.
Abstract:
Aims/hypothesis - Loss of the trophic support provided by surrounding non-endocrine pancreatic cell populations underlies the decline in beta cell mass and insulin secretory function observed in human islets following isolation and culture. This study sought to determine whether restoration of regulatory influences mediated by ductal epithelial cells promotes sustained beta cell function in vitro. Methods - Human islets were isolated according to existing protocols. Ductal epithelial cells were harvested from the exocrine tissue remaining after islet isolation, expanded in monolayer culture and characterised using fluorescence immunocytochemistry. The two cell types were co-cultured under conventional static culture conditions or within a rotational cell culture system. The effect of co-culture on islet structural integrity, beta cell mass and insulin secretory capacity was observed for 10 days following isolation. Results - Human islets maintained under conventional culture conditions exhibited a characteristic loss in structural integrity and functional viability as indicated by a diminution of glucose responsiveness. By contrast, co-culture of islets with ductal epithelial cells led to preserved islet morphology and sustained beta cell function, most evident in co-cultures held within the rotational cell culture system, which showed a significantly (p<0.05) greater insulin secretory response to elevated glucose compared with control islets. Similarly, insulin/protein ratio data suggested that the presence of ductal epithelial cells is beneficial for the maintenance of beta cell mass. Conclusions/interpretation - The data indicate a supportive role for ductal epithelial cells in islet viability. Further characterisation of the regulatory influences may lead to novel strategies to improve long-term beta cell function both in vitro and following islet transplantation.
Abstract:
Astrocytes are essential for neuronal function and survival, so both cell types were included in a human neurotoxicity test-system to assess the protective effects of astrocytes on neurons, compared with a culture of neurons alone. The human NT2.D1 cell line was differentiated to form either a co-culture of post-mitotic NT2.N neuronal (TUJ1, NF68 and NSE positive) and NT2.A astrocytic (GFAP positive) cells (∼2:1 NT2.A:NT2.N), or an NT2.N mono-culture. Cultures were exposed to human toxins, for 4 h at sub-cytotoxic concentrations, in order to compare levels of compromised cell function and thus evidence of an astrocytic protective effect. Functional endpoints examined included assays for cellular energy (ATP) and glutathione (GSH) levels, generation of hydrogen peroxide (H2O2) and caspase-3 activation. Generally, the NT2.N/A co-culture was more resistant to toxicity, maintaining superior ATP and GSH levels and sustaining smaller significant increases in H2O2 levels compared with neurons alone. However, the pure neuronal culture showed a significantly lower level of caspase activation. These data suggest that besides their support for neurons through maintenance of ATP and GSH and control of H2O2 levels, following exposure to some substances, astrocytes may promote an apoptotic mode of cell death. Thus, it appears the use of astrocytes in an in vitro predictive neurotoxicity test-system may be more relevant to human CNS structure and function than neuronal cells alone. © 2007 Elsevier Ltd. All rights reserved.