978 results for system maintenance
Abstract:
"29 August 1989."
Abstract:
"Mounting kit M241E5 1450-00-078-1217 accessory kit M219E1 1450-00-179-5317 accessory kit M218E1 1450-00-179-5318 mounting kit M184 1450-00-179-6095 Pershing 1a Field Artillery Missile System.
Abstract:
"HRP-0906516."
Abstract:
The blood-borne renin-angiotensin system (RAS) is known best for its role in the maintenance of blood pressure and electrolyte and fluid homeostasis. However, numerous tissues show intrinsic angiotensin-generating systems that cater for specific local needs through actions that add to, or differ from, the circulating RAS. The male reproductive system has several sites of intrinsic RAS activity. Recent focus on the epididymis, by our laboratories and by others, has contributed important details about the local RAS in this tissue. The RAS components have been localized morphologically and topographically; they have been shown to be responsive to androgens and to hypoxia; and angiotensin has been shown to influence tubular, and consequently, fluid secretion. Components of the RAS have also been found in the testis, vas deferens, prostate and semen. Angiotensin II receptors, type 1 and, to a lesser extent, type 2 are widespread, and angiotensin IV receptors have been localized in the prostate. The roles of the RAS in local processes at these sites are still uncertain and have yet to be fully elucidated, although there is evidence for involvement in tubular contractility, spermatogenesis, sperm maturation, capacitation, acrosomal exocytosis and fertilization. Notwithstanding this evidence for the involvement of the RAS in various important aspects of male reproduction, there has so far been a lack of clinical evidence, demonstrable by changes in fertility, for a crucial role of the RAS in male reproduction. However, it is clear that there are several potential targets for manipulating the activity of the male reproductive system by interfering with the locally generated angiotensin systems.
Abstract:
Schistosomes are parasitic blood flukes responsible for significant human disease in tropical and developing nations. Here we review information on the organization of the cytoskeleton and associated motor proteins of schistosomes, with particular reference to the organization of the syncytial tegument, a unique cellular adaptation of these and other neodermatan flatworms. Extensive EST databases show that the molecular constituents of the cytoskeleton and associated molecular systems are likely to be similar to those of other eukaryotes, although there are potentially some molecules unique to schistosomes and platyhelminths. The biology of some components, particularly those contributing to host-parasite interactions as well as to chemotherapy and immunotherapy, is discussed. Unresolved questions in relation to the structure and function of the tegument concern the dynamic organization of the syncytial layer. (C) 2004 Wiley Periodicals, Inc.
Abstract:
The response of an aerobic upflow sludge blanket (AUSB) reactor system to changes in operating conditions was investigated by varying two principal operating variables: the oxygenation pressure and the flow recirculation rate. The oxygenation pressure was varied between 0 and 25 psig (relative), while flow recirculation rates were varied correspondingly between 1,300 and 600%. The AUSB reactor system was able to handle a volumetric loading as high as 3.8 kg total organic carbon (TOC)/m³·day, with a removal efficiency of 92%. The rate of TOC removal by the AUSB was highest at a pressure of 20 psig and decreased when the pressure was increased to 25 psig and the flow recirculation rate was reduced to 600%. The TOC removal rate also decreased when the operating pressure was reduced to 0 and 15 psig, with corresponding increases in flow recirculation rate to 1,300 and 1,000%, respectively. Maintenance of a high dissolved oxygen level and a high flow recirculation rate was found to improve the substrate removal capacity of the AUSB system. The AUSB system was extremely effective in retaining the produced biomass despite a high upflow velocity, and the overall sludge yield was only 0.24-0.32 g VSS/g TOC removed. However, the effluent TOC was relatively high due to the system's operation at a high organic loading.
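As a rough illustration of how the loading and removal figures quoted above relate to each other, the short Python sketch below recomputes them from an assumed influent TOC, flow rate and reactor volume; every input value is invented purely to make the arithmetic concrete and is not taken from the study.

```python
# Back-of-the-envelope check of volumetric loading and removal efficiency.
influent_toc = 1900.0      # mg TOC / L (assumed)
effluent_toc = 152.0       # mg TOC / L (assumed)
flow = 2.0                 # m^3 / day (assumed)
reactor_volume = 1.0       # m^3 (assumed)

volumetric_loading = influent_toc * flow / reactor_volume / 1000.0    # kg TOC / m^3 / day
removal_efficiency = (influent_toc - effluent_toc) / influent_toc * 100.0
removal_rate = volumetric_loading * removal_efficiency / 100.0        # kg TOC removed / m^3 / day

print(f"loading {volumetric_loading:.1f} kg TOC/m3/day, "
      f"removal {removal_efficiency:.0f}%, rate {removal_rate:.2f} kg TOC/m3/day")
```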
Abstract:
This thesis deals with the challenging problem of designing systems able to perceive objects in underwater environments. In the last few decades, research in robotics has advanced the state of the art in the intervention capabilities of autonomous systems. In fields such as localization and navigation, real-time perception and cognition, and safe action and manipulation, the state of the art for ground environments (both indoor and outdoor) has reached a readiness level that allows high-level autonomous operations. By contrast, the underwater environment remains a very difficult one for autonomous robots. Water influences the mechanical and electrical design of systems, interferes with sensors by limiting their capabilities, heavily impacts data transmission, and generally requires systems with low power consumption to enable reasonable mission durations. Interest in underwater applications is driven by the need to explore and intervene in environments where human capabilities are very limited. Nowadays, most underwater field operations are carried out by manned or remotely operated vehicles deployed for exploration and limited intervention missions. Manned vehicles, controlled directly on board, expose human operators to the risks of remaining in the field for the duration of the mission, within a hostile environment. Remotely Operated Vehicles (ROVs) currently represent the most advanced technology for underwater intervention services available on the market. These vehicles can be remotely operated for long periods, but they need the support of an oceanographic vessel with multiple teams of highly specialized pilots. Vehicles equipped with multiple state-of-the-art sensors and capable of autonomously planning missions have been deployed over the last ten years and exploited as observers of underwater fauna, the seabed, shipwrecks, and so on. On the other hand, underwater operations such as object recovery and equipment maintenance remain challenging tasks to conduct without human supervision, since they require object perception and localization with much higher accuracy and robustness, to a degree seldom available in Autonomous Underwater Vehicles (AUVs). This thesis reports the study, from design to deployment and evaluation, of a general-purpose and configurable platform dedicated to stereo-vision perception in underwater environments. Several aspects related to the peculiar characteristics of this environment have been taken into account during all stages of system design and evaluation: depth of operation and light conditions, together with water turbidity and external weather, heavily impact perception capabilities. The vision platform proposed in this work is a modular system comprising off-the-shelf components for both the imaging sensors and the computational unit, linked by a high-performance Ethernet network bus. The adopted design philosophy aims at achieving high flexibility in terms of feasible perception applications, which should not be limited as they would be with special-purpose, dedicated hardware. Flexibility is required by the variability of underwater environments, with water conditions ranging from clear to turbid, light backscattering varying with daylight and depth, strong color distortion, and other environmental factors. Furthermore, the proposed modular design ensures easier maintenance and updating of the system over time.
The performance of the proposed system, in terms of perception capabilities, has been evaluated in several underwater contexts, taking advantage of the opportunity offered by the MARIS national project. Design issues such as power consumption, heat dissipation and network capabilities have been evaluated in different scenarios. Finally, real-world experiments conducted in multiple and variable underwater contexts, including open sea waters, have led to the collection of several datasets that have been publicly released to the scientific community. The vision system has been integrated into a state-of-the-art AUV equipped with a robotic arm and gripper, and has been exploited in the robot control loop to successfully perform underwater grasping operations.
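For readers unfamiliar with stereo perception, the sketch below shows, in Python with OpenCV, how depth is recovered from a rectified image pair by block matching and triangulation. It is a generic illustration, not the MARIS pipeline: the image file names, focal length and baseline are placeholder assumptions.

```python
import numpy as np
import cv2

# Placeholder file names; any rectified left/right underwater image pair would do.
left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching on rectified images yields a disparity map.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point

# Depth from disparity: Z = f * B / d, with assumed focal length f (pixels) and baseline B (metres).
f_px, baseline_m = 1200.0, 0.30
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = f_px * baseline_m / disparity[valid]
print("median scene depth (m):", float(np.median(depth_m[valid])))
```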
Abstract:
Knowledge maintenance is a major challenge for both knowledge management and the Semantic Web. Operating over the Semantic Web will be a network of collaborating agents, each with its own ontologies or knowledge bases. A change in the knowledge state of one agent may need to be propagated across a number of agents and their associated ontologies. The challenge is to decide how to propagate a change of knowledge state. The effects of a change in knowledge state cannot be known in advance, so an agent cannot know who should be informed unless it adopts a simple ‘tell everyone everything’ strategy. This situation is highly reminiscent of the classic Frame Problem in AI. We argue that for agent-based technologies to succeed, far greater attention must be given to creating an appropriate model for knowledge update. In a closed system, simple strategies are possible (e.g. ‘sleeping dog’, ‘cheap test’, or even complete checking). However, in an open system where cause and effect are unpredictable, a coherent cost-benefit-based model of agent interaction is essential. Otherwise, the effectiveness of every act of knowledge update/maintenance is brought into question.
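A toy version of the kind of cost-benefit update rule the abstract argues for might look as follows; the peer attributes, costs and probabilities are purely illustrative assumptions, not the model proposed in the work.

```python
from dataclasses import dataclass

@dataclass
class Peer:
    """A collaborating agent that might need to hear about a knowledge update."""
    name: str
    notify_cost: float   # assumed cost of sending and processing the update
    p_affected: float    # estimated probability the peer's ontology is affected
    stale_penalty: float # assumed cost of the peer acting on stale knowledge

def peers_to_notify(peers, threshold=0.0):
    """Notify a peer only when the expected benefit of updating it exceeds the cost."""
    selected = []
    for p in peers:
        expected_benefit = p.p_affected * p.stale_penalty
        if expected_benefit - p.notify_cost > threshold:
            selected.append(p.name)
    return selected

if __name__ == "__main__":
    peers = [
        Peer("scheduler-agent", notify_cost=1.0, p_affected=0.8, stale_penalty=10.0),
        Peer("archive-agent", notify_cost=1.0, p_affected=0.05, stale_penalty=2.0),
    ]
    print(peers_to_notify(peers))  # only the peer likely to be affected is told
```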
Abstract:
Offshore oil and gas pipelines are environmentally sensitive, as any leak or burst in a pipeline causes an oil or gas spill with severe negative impacts on marine life. Breakdown maintenance of these pipelines is also cost-intensive and time-consuming, resulting in huge tangible and intangible losses to pipeline operators. Pipeline health monitoring and integrity analysis have been researched extensively to support successful pipeline operations, and risk-based maintenance modelling is one outcome of that research. This study develops a risk-based maintenance model using a combined multiple-criteria decision-making and weighting method for offshore oil and gas pipelines in Thailand, with the active participation of experienced executives. The model's effectiveness has been demonstrated through real-life application to oil and gas pipelines in the Gulf of Thailand. Practical implications: a risk-based inspection and maintenance methodology is particularly important for oil pipeline systems, as any failure in the system not only affects productivity negatively but also has a tremendous negative environmental impact. The proposed model helps pipeline operators to analyze the health of pipelines dynamically and to select a specific inspection and maintenance method for a specific section in line with its probability and severity of failure.
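A minimal sketch of a weighted multiple-criteria risk ranking of pipeline sections is given below; the criteria, weights and 1-5 scores are invented placeholders rather than those elicited from the experts in the study.

```python
# Toy weighted-sum risk score for pipeline sections; criteria and weights are illustrative.
criteria_weights = {"corrosion": 0.4, "third_party_damage": 0.3, "age": 0.2, "operating_pressure": 0.1}

sections = {
    "section_A": {"corrosion": 4, "third_party_damage": 2, "age": 3, "operating_pressure": 5},
    "section_B": {"corrosion": 2, "third_party_damage": 5, "age": 2, "operating_pressure": 3},
}

def risk_score(scores, weights):
    """Combine 1-5 criterion scores into one risk figure as a weighted sum."""
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(sections, key=lambda s: risk_score(sections[s], criteria_weights), reverse=True)
print(ranking)  # inspect and maintain the highest-scoring sections first
```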
Abstract:
Proper maintenance of plant items is crucial for the safe and profitable operation of process plants. The relevant maintenance policies fall into four categories: (i) preventive/opportunistic/breakdown replacement policies, (ii) inspection and inspection-repair-replacement policies, (iii) restorative maintenance policies, and (iv) condition-based maintenance policies. For correlating failure times of component equipment and complete systems, the Weibull failure distribution has been used. A new, powerful method, SEQLIM, has been proposed for the estimation of the Weibull parameters, particularly when maintenance records contain very few failures and many successful operation times. When a system consists of a number of replaceable, ageing components, an opportunistic replacement policy has been found to be cost-effective, and a simple opportunistic model has been developed. Inspection models with various objective functions have been investigated. It was found that, on the assumption of a negative exponential failure distribution, all models converge to the same optimal inspection interval, provided the safety components are very reliable and the demand rate is low. When deterioration becomes a contributory factor to some failures, periodic inspections calculated from the above models are too frequent; a case of safety trip systems has been studied. A highly effective restorative maintenance policy can be developed if the performance of the equipment in this category can be related to some predictive modelling, and a novel fouling model has been proposed to determine cleaning strategies for condensers. Condition-based maintenance policies have been investigated, and a simple gauge has been designed for condition monitoring of relief valve springs. A typical case of an exothermic inert gas generation plant has been studied to demonstrate how the various policies can be applied to devise overall maintenance actions.
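To make the censored-data situation concrete, the sketch below fits a two-parameter Weibull by ordinary maximum likelihood to a handful of failure times and many right-censored (still-running) operation times. It is a standard censored MLE, not the SEQLIM method itself, and the hour values are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative data (hours): few observed failures, many right-censored operation times.
failures = np.array([1200.0, 1850.0, 2300.0])
censored = np.array([900.0, 1500.0, 1600.0, 2000.0, 2100.0, 2500.0, 2600.0, 3000.0])

def neg_log_likelihood(params):
    """Negative log-likelihood of a two-parameter Weibull with right censoring."""
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    z_f = failures / scale
    z_c = censored / scale
    log_pdf = np.log(shape / scale) + (shape - 1) * np.log(z_f) - z_f**shape
    log_surv = -(z_c**shape)
    return -(log_pdf.sum() + log_surv.sum())

result = minimize(neg_log_likelihood, x0=[1.0, 2000.0], method="Nelder-Mead")
shape_hat, scale_hat = result.x
print(f"shape (beta) ~ {shape_hat:.2f}, scale (eta) ~ {scale_hat:.0f} h")
```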
Abstract:
This thesis introduces and develops a novel real-time predictive maintenance system that estimates machine system parameters using the motion current signature. Recently, motion current signature analysis has been proposed as an alternative to the use of sensors for monitoring internal faults of a motor. A maintenance system based upon the analysis of the motion current signature avoids the need to implement and maintain expensive motion sensing technology. By developing nonlinear dynamical analysis for the motion current signature, the research described in this thesis implements a novel real-time predictive maintenance system for current and future manufacturing machine systems. A crucial concept underpinning this project is that the motion current signature contains information relating to the machine system parameters, and that this information can be extracted using nonlinear mapping techniques, such as neural networks. Towards this end, a proof-of-concept procedure is performed, which substantiates this concept. A simulation model, TuneLearn, is developed to simulate the large amount of training data required by the neural network approach. Statistical validation and verification of the model are performed to ascertain confidence in the simulated motion current signature. The validation experiment concludes that, although the simulation model generates a good macro-dynamical mapping of the motion current signature, it fails to accurately map the micro-dynamical structure, owing to a lack of knowledge regarding the performance of higher-order and nonlinear factors such as backlash and compliance. The failure of the simulation model to determine the micro-dynamical structure suggests the presence of nonlinearity in the motion current signature. This motivated us to perform surrogate data testing for nonlinearity in the motion current signature. The results confirm the presence of nonlinearity in the motion current signature, thereby motivating the use of nonlinear techniques for further analysis. The outcomes of the experiment show that nonlinear noise reduction combined with the linear reverse algorithm offers precise machine system parameter estimation using the motion current signature for the implementation of the real-time predictive maintenance system. Finally, a linear reverse algorithm, BJEST, is developed and applied to the motion current signature to estimate the machine system parameters.
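The sketch below illustrates the core idea of a nonlinear mapping from a current-signature window to a machine parameter, using a small scikit-learn neural network on synthetic data; it stands in for neither TuneLearn nor BJEST, and every signal and parameter value is fabricated for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for simulated training data: windows of a motor current signal
# mapped to a hypothetical machine parameter (e.g. an effective gain).
rng = np.random.default_rng(0)
n_samples, window = 2000, 64
params = rng.uniform(0.5, 2.0, size=n_samples)
t = np.linspace(0, 1, window)
signals = params[:, None] * np.sin(2 * np.pi * 5 * t) + 0.05 * rng.standard_normal((n_samples, window))

X_train, X_test, y_train, y_test = train_test_split(signals, params, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out windows:", round(model.score(X_test, y_test), 3))
```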
Abstract:
Faced with a future of rising energy costs, industry needs to manage energy more carefully in order to meet its economic objectives. A problem besetting the growth of energy conservation in the UK is that a large proportion of energy consumption is used in a low-intensity manner in organisations where the responsibility for energy efficiency is spread over a large number of personnel, each of whom sees only small energy costs. In relation to this problem in the non-energy-intensive industrial sector, an application of an energy management technique known as monitoring and targeting (M & T) has been installed at the Whetstone site of the General Electric Company Limited in an attempt to prove it as a means of motivating line management and personnel to save energy. The objective energy saving for which the M & T was devised is very specific. During early energy conservation work at the site there had been a change from continuous to intermittent heating, but the maintenance of the strategy was receiving a poor level of commitment from line management and performance was some 5% - 10% less than expected. The M & T is therefore concerned with heat for space heating, for which a heat metering system was required. Metering of the site's high-pressure hot water system posed technical difficulties and expenditure was also limited. This led to an 'in-house' design being installed for a price less than the commercial equivalent. The timespan of the work to achieve an operational heat metering system was 3 years, which meant that energy saving results from the scheme were not observed during the study. If successful, the replication potential lies in the larger non-energy-intensive sites, from which some 30 PT of savings could be expected in the UK.
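Monitoring and targeting is often implemented as a regression of consumption against heating degree days followed by cumulative-variance (CUSUM) tracking; the sketch below assumes that formulation, which the abstract does not spell out, and all figures are invented.

```python
import numpy as np

# Weekly heating degree days and heat consumption (GJ); values are illustrative only.
degree_days = np.array([10, 25, 40, 55, 60, 52, 38, 22])
consumption = np.array([120, 210, 330, 410, 455, 400, 310, 205])

slope, intercept = np.polyfit(degree_days, consumption, 1)   # target line: GJ = a*DD + b
target = slope * degree_days + intercept
cusum = np.cumsum(consumption - target)                      # positive drift = worse than target

for week, (actual, expected, c) in enumerate(zip(consumption, target, cusum), start=1):
    print(f"week {week}: actual {actual:.0f} GJ, target {expected:.0f} GJ, CUSUM {c:+.0f} GJ")
```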
Abstract:
This thesis presents the results of an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of the methods for measuring minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis is to demonstrate that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are, necessarily, extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high-frequency activity which might be of interest, and often these signals are cancelled out by the averaging process. Other problems that have been encountered are the high cost and low portability of state-of-the-art multichannel machines. The result is that the use of MEG has hitherto been restricted to large institutions able to afford the high costs associated with the procurement and maintenance of these machines.
In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
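As one concrete example of the dynamical-systems treatment of a single-channel recording, the sketch below builds a delay-embedded (Takens) state-space trajectory from a scalar time series; the signal is synthetic rather than MEG data, and the embedding dimension and lag are illustrative choices.

```python
import numpy as np

# Synthetic single-channel signal standing in for an unaveraged recording.
rng = np.random.default_rng(1)
t = np.arange(5000) * 0.001
signal = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 23 * t) + 0.1 * rng.standard_normal(t.size)

def delay_embed(x, dim=5, lag=10):
    """Reconstruct a state-space trajectory from a scalar time series (Takens embedding)."""
    n = x.size - (dim - 1) * lag
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

trajectory = delay_embed(signal)
print(trajectory.shape)  # (n_points, embedding_dimension)
```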
Abstract:
Classification of metamorphic rocks is normally carried out using a poorly defined, subjective classification scheme, making this an area in which many undergraduate geologists experience difficulties. An expert system to assist in such classification is presented which is capable of classifying rocks and also giving further details about a particular rock type. A mixed knowledge representation is used, with frame, semantic and production rule systems available. Classification in the domain requires that different facets of a rock be classified. To implement this, rocks are represented by 'context' frames with slots representing each facet. Slots are satisfied by calling a pre-defined ruleset to carry out the necessary inference. The inference is handled by an interpreter which uses a dependency graph representation for the propagation of evidence. Uncertainty is handled by the system using a combination of the MYCIN certainty factor system and the Dempster-Shafer range mechanism. This allows for positive and negative reasoning, with rules capable of representing necessity and sufficiency of evidence, whilst also allowing the implementation of an alpha-beta pruning algorithm to guide question selection during inference. The system also utilizes a semantic-net-type structure to allow the expert to encode simple relationships between terms, enabling rules to be written with a sensible level of abstraction. Using frames to represent rock types where subclassification is possible allows the knowledge base to be built in a modular fashion, with subclassification frames only defined once the higher level of classification is functioning. Rulesets can similarly be added in modular fashion, with the individual rules being essentially declarative, allowing for simple updating and maintenance. The knowledge base so far developed for metamorphic classification serves to demonstrate the performance of the interpreter design whilst also moving some way towards providing a useful assistant to the non-expert metamorphic petrologist. The system demonstrates the possibilities for a fully developed knowledge base to handle the classification of igneous, sedimentary and metamorphic rocks. The current knowledge base and interpreter have been evaluated by potential users and experts. The results of the evaluation show that the system performs to an acceptable level and should be of use as a tool for both undergraduates and researchers from outside the metamorphic petrography field.
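The MYCIN certainty-factor combination mentioned above can be illustrated in a few lines; this toy function shows only the evidence-pooling rule and omits the Dempster-Shafer range mechanism and the dependency-graph interpreter described in the abstract.

```python
# Toy MYCIN-style certainty-factor combination; illustrative only, not the system's code.
def combine_cf(cf1, cf2):
    """Combine two certainty factors in [-1, 1] for the same hypothesis (MYCIN rule)."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Two rules support a hypothetical classification "rock is a schist" (0.6, 0.4);
# one rule weakly disconfirms it (-0.2).
cf = combine_cf(combine_cf(0.6, 0.4), -0.2)
print(round(cf, 3))  # pooled belief in the hypothesis
```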
Abstract:
The objective of this thesis is to investigate, through an empirical study, the different functions of highways maintenance departments and to suggest methods by which road maintenance work could be carried out more efficiently, utilising resources of men, materials and plant to the utmost advantage. This is particularly important under the present circumstances of national financial difficulties, which have resulted in continuous cuts in public expenditure. In order to achieve this objective, the researcher carried out a survey among several Highways Authorities by means of questionnaire and interview. The information so collected was analysed in order to understand the actual, practical situation within highways maintenance departments, to highlight any existing problems, and to try to answer the question of how they could become more efficient. According to the results obtained from the questionnaire and the interviews, and the analysis of these results, the researcher concludes that it is the management system where least has been done, and where the problems that exist are most complex. The management of highways maintenance departments argue that the reasons for their problems include both financial and organisational difficulties, apart from the political aspect and the nature of the activities undertaken. The researcher believes that this ought to necessitate improving management's analytical tools and techniques in order to achieve the most effective way of performing each activity. To this end the researcher recommends several related procedures to be adopted by the management of highways maintenance departments. These recommendations, arising from the study, involve technical, practical and human aspects. These are essential factors of which management should be aware - and certainly should not neglect - in order to achieve its objectives of improved productivity in the highways maintenance departments.