871 results for Predictive Maintenance
Abstract:
In the Electrical Submersible Pump (ESP) artificial lift method, energy is transmitted to the bottom of the well through a flat electric cable and converted into mechanical energy by a subsurface motor, which is connected to a centrifugal pump. The pump transmits this energy to the fluid in the form of pressure, bringing it to the surface. In this method the subsurface equipment is basically divided into pump, seal section and motor. The main function of the seal section is to protect the motor, preventing the motor oil from being contaminated by the produced fluid and the consequent burning of the motor. Over time the seal wears, initiating a contamination of the motor oil that causes it to lose its insulating characteristics. This work presents the design of a magnetic sensor capable of detecting contamination of the insulating oil used in the ESP artificial lift method. The objective of this sensor is to generate an alarm signal at the moment contamination of the insulating oil occurs, enabling the implementation of predictive maintenance. The prototype was designed to work in harsh conditions, at depths of up to 2000 m and temperatures up to 150 °C. Simulation software was used to define the mechanical and electromagnetic variables. Field experiments were performed to validate the prototype. The final tests, performed in an ESP system with a 62 HP motor, showed good reliability and fast response of the prototype.
Abstract:
Organotypic models may provide mechanistic insight into colorectal cancer (CRC) morphology. Three-dimensional (3D) colorectal gland formation is regulated by phosphatase and tensin homologue deleted on chromosome 10 (PTEN) coupling of cell division cycle 42 (cdc42) to atypical protein kinase C (aPKC). This study investigated PTEN phosphatase-dependent and phosphatase-independent morphogenic functions in 3D models and assessed translational relevance in human studies. Isogenic PTEN-expressing or PTEN-deficient 3D colorectal cultures were used. In translational studies, apical aPKC activity readout was assessed against apical membrane (AM) orientation and gland morphology in 3D models and human CRC. We found that catalytically active or inactive PTEN constructs containing an intact C2 domain enhanced cdc42 activity, whereas mutants of the C2 domain calcium binding region 3 membrane-binding loop (M-CBR3) were ineffective. The isolated PTEN C2 domain (C2) accumulated in membrane fractions, but C2 M-CBR3 remained in cytosol. Transfection of C2 but not C2 M-CBR3 rescued defective AM orientation and 3D morphogenesis of PTEN-deficient Caco-2 cultures. The signal intensity of apical phospho-aPKC correlated with that of Na/H exchanger regulatory factor-1 (NHERF-1) in the 3D model. Apical NHERF-1 intensity thus provided readout of apical aPKC activity and associated with glandular morphology in the model system and human colon. Low apical NHERF-1 intensity in CRC associated with disruption of glandular architecture, high cancer grade, and metastatic dissemination. We conclude that the membrane-binding function of the catalytically inert PTEN C2 domain influences cdc42/aPKC-dependent AM dynamics and gland formation in a highly relevant 3D CRC morphogenesis model system.
Abstract:
Case-Based Reasoning (CBR) uses past experiences to solve new problems. The quality of the past experiences, which are stored as cases in a case base, is a big factor in the performance of a CBR system. The system's competence may be improved by adding problems to the case base after they have been solved and their solutions verified to be correct. However, from time to time, the case base may have to be refined to reduce redundancy and to get rid of any noisy cases that may have been introduced. Many case base maintenance algorithms have been developed to delete noisy and redundant cases. However, different algorithms work well in different situations and it may be difficult for a knowledge engineer to know which one is the best to use for a particular case base. In this thesis, we investigate ways to combine algorithms to produce better deletion decisions than the decisions made by individual algorithms, and ways to choose which algorithm is best for a given case base at a given time. We analyse five of the most commonly-used maintenance algorithms in detail and show how the different algorithms perform better on different datasets. This motivates us to develop a new approach: maintenance by a committee of experts (MACE). MACE allows us to combine maintenance algorithms to produce a composite algorithm which exploits the merits of each of the algorithms that it contains. By combining different algorithms in different ways we can also define algorithms that have different trade-offs between accuracy and deletion. While MACE allows us to define an infinite number of new composite algorithms, we still face the problem of choosing which algorithm to use. To make this choice, we need to be able to identify properties of a case base that are predictive of which maintenance algorithm is best. We examine a number of measures of dataset complexity for this purpose. These provide a numerical way to describe a case base at a given time. 
We use the numerical description to develop a meta-case-based classification system. This system uses previous experience about which maintenance algorithm was best to use for other case bases to predict which algorithm to use for a new case base. Finally, we give the knowledge engineer more control over the deletion process by creating incremental versions of the maintenance algorithms. These incremental algorithms suggest one case at a time for deletion rather than a group of cases, which allows the knowledge engineer to decide whether or not each case in turn should be deleted or kept. We also develop incremental versions of the complexity measures, allowing us to create an incremental version of our meta-case-based classification system. Since the case base changes after each deletion, the best algorithm to use may also change. The incremental system allows us to choose which algorithm is the best to use at each point in the deletion process.
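The committee idea can be sketched in a few lines. The voting rule and the stand-in algorithms below are assumptions for illustration, since the abstract does not specify how MACE combines the decisions of its member algorithms:

```python
# Hypothetical sketch of a committee-of-experts deletion decision: each
# maintenance algorithm votes on whether a case should be deleted, and the
# committee deletes it only if enough of them agree.

def committee_delete(case, algorithms, threshold=0.5):
    """Return True if at least `threshold` of the algorithms vote to delete."""
    votes = [alg(case) for alg in algorithms]  # each algorithm returns True/False
    return sum(votes) / len(votes) >= threshold

# Illustrative stand-ins for real maintenance algorithms (invented, not from
# the thesis): one flags redundancy, one flags noise, one never deletes.
flag_redundant = lambda c: c["nearest_same_class"]
flag_noisy     = lambda c: not c["nearest_same_class"] and c["misclassified"]
conservative   = lambda c: False

case = {"nearest_same_class": True, "misclassified": False}
print(committee_delete(case, [flag_redundant, flag_noisy, conservative]))  # 1/3 votes -> False
```

Varying the threshold gives composite algorithms with different trade-offs between accuracy and deletion, in the spirit of the trade-offs the thesis describes.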
Abstract:
Future analysis tools that predict the behavior of electronic components, both during qualification testing and in in-service lifetime assessment, will be very important for estimating product reliability and identifying when to undertake maintenance. This paper discusses some of these techniques, illustrates them with examples, and also discusses future challenges for these techniques.
Abstract:
This paper describes a framework being developed for the prediction and analysis of power module reliability in electronics, both for qualification testing and for in-service lifetime prediction. A physics-of-failure (PoF) reliability methodology using multi-physics high-fidelity and reduced-order computer modelling, together with numerical optimization techniques, is integrated in a dedicated computer modelling environment to meet the needs of power module designers and manufacturers, as well as end-users, for both design and maintenance purposes. An example of lifetime prediction for a power module solder interconnect structure is described; another example is the lifetime prediction of a power module for a railway traction control application. The paper also discusses a combined physics-of-failure and data-trending prognostic methodology for the health monitoring of power modules.
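The abstract does not give the paper's PoF models, but a widely used physics-of-failure lifetime relation for solder interconnects is the Coffin-Manson law, which links cycles-to-failure to the plastic strain range per thermal cycle. The constants below are illustrative assumptions only, not values from the paper:

```python
# Illustrative Coffin-Manson calculation (constants C and n are assumed for
# illustration): estimates cycles-to-failure of a solder joint from the
# plastic strain range experienced in each thermal cycle.

def coffin_manson_cycles(delta_eps_p, C=0.5, n=2.0):
    """N_f = C * (delta_eps_p)^(-n): cycles to failure for a plastic strain range."""
    return C * delta_eps_p ** (-n)

# With n = 2, halving the strain range quadruples the predicted life:
n1 = coffin_manson_cycles(0.01)   # 0.5 * 0.01^-2  = 5000 cycles
n2 = coffin_manson_cycles(0.005)  # 0.5 * 0.005^-2 = 20000 cycles
print(n1, n2)
```

A lifetime prediction framework of the kind described would feed the strain range from thermo-mechanical simulation into such a damage model, rather than assuming it as done here.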
Abstract:
This talk addresses the problem of controlling a heating, ventilating and air conditioning system with the purpose of achieving a desired thermal comfort level and energy savings. The formulation uses thermal comfort, assessed by the predicted mean vote (PMV) index, as a restriction and minimises the energy spent to comply with it. This results in the maintenance of thermal comfort and in the minimisation of energy, which in most operating conditions are conflicting goals, requiring some sort of optimisation method to find appropriate solutions over time. In this work a discrete model-based predictive control methodology is applied to the problem. It consists of three major components: the predictive models, implemented by radial basis function neural networks identified by means of a multi-objective genetic algorithm [1]; the cost function that will be optimised to minimise energy consumption and provide adequate thermal comfort; and finally the optimisation method, in this case a discrete branch-and-bound approach. Each component will be described, with special emphasis on a fast and accurate computation of the PMV indices [2]. Experimental results obtained within different rooms in a building of the University of Algarve will be presented, both in summer [3] and winter [4] conditions, demonstrating the feasibility and performance of the approach. Energy savings resulting from the application of the method are estimated to be greater than 50%.
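The constrained optimisation step can be illustrated with a toy sketch: pick the least-energy setpoint among those satisfying the comfort restriction. The real system uses RBF neural-network predictors, Fanger's PMV equations and branch and bound; the linear PMV surrogate, energy model and exhaustive search below are invented stand-ins:

```python
# Toy sketch of "minimise energy subject to |PMV| <= 0.5": the models here are
# assumptions for illustration, not the talk's RBF predictors or Fanger's PMV.

def pmv_surrogate(setpoint_c):
    """Hypothetical linear comfort model: PMV = 0 at 24 degC, 0.25 per degC."""
    return 0.25 * (setpoint_c - 24.0)

def energy_cost(setpoint_c, outdoor_c=32.0):
    """Toy cooling energy: proportional to the indoor/outdoor temperature gap."""
    return max(0.0, outdoor_c - setpoint_c)

def best_setpoint(candidates, pmv_limit=0.5):
    """Exhaustive search standing in for branch and bound over discrete inputs."""
    feasible = [s for s in candidates if abs(pmv_surrogate(s)) <= pmv_limit]
    return min(feasible, key=energy_cost)

setpoints = [22.0, 23.0, 24.0, 25.0, 26.0]
print(best_setpoint(setpoints))  # prints 26.0: least cooling energy while still comfortable
```

In the real controller this selection is repeated at each control step over predicted trajectories, which is where branch and bound pays off compared with exhaustive enumeration.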
Abstract:
Feed samples received by commercial analytical laboratories are often undefined or mixed varieties of forages, originate from various agronomic or geographical areas of the world, are mixtures (e.g., total mixed rations) and are often described incompletely or not at all. Six unified single-equation approaches to predict the metabolizable energy (ME) value of feeds determined in sheep fed at maintenance ME intake were evaluated utilizing 78 individual feeds representing 17 different forages, grains, protein meals and by-product feedstuffs. The predictive approaches evaluated were two each from the National Research Council [National Research Council (NRC), Nutrient Requirements of Dairy Cattle, seventh revised ed. National Academy Press, Washington, DC, USA, 2001], the University of California at Davis (UC Davis) and ADAS (Stratford, UK). Slopes and intercepts for the two ADAS approaches, which utilized in vitro digestibility of organic matter and either measured gross energy (GE) or a prediction of GE from component assays, and for one UC Davis approach, based upon in vitro gas production and some component assays, differed from unity and zero, respectively, while this was not the case for the two NRC approaches and the other UC Davis approach. However, within these latter three approaches, the goodness of fit (r²) increased from the NRC approach utilizing lignin (0.61) to the NRC approach utilizing a 48 h in vitro digestion of neutral detergent fibre (NDF; 0.72) and to the UC Davis approach utilizing a 30 h in vitro digestion of NDF (0.84). The reason for the difference in precision between the NRC procedures was the failure of assayed lignin values to accurately predict the 48 h in vitro digestion of NDF.
However, differences among the six predictive approaches in the number of supporting assays and their costs, as well as the fact that the NRC approach is actually three related equations requiring categorical description of feeds (making them unsuitable for mixed feeds) while the ADAS and UC Davis approaches are single equations, suggest that the procedure of choice will vary dependent upon local conditions, specific objectives and the feedstuffs to be evaluated. In contrast to the evaluation of the procedures among feedstuffs, no procedure was able to consistently discriminate the ME values of individual feeds within feedstuffs determined in vivo, suggesting that an accurate and precise ME predictive approach among and within feeds remains to be identified.
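The evaluation described above amounts to an ordinary least-squares regression of measured against predicted ME, checking whether the slope is near unity, the intercept near zero, and r² high. A minimal sketch, with invented data points standing in for the study's 78 feeds, might look like:

```python
# Sketch of the slope/intercept/r^2 evaluation: fit measured ME against
# predicted ME by least squares (the data points are invented for illustration).

def linfit(x, y):
    """Return (slope, intercept, r_squared) of the least-squares line y ~ x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    syy = sum((yi - my) ** 2 for yi in y)
    r2 = sxy ** 2 / (sxx * syy)  # fraction of variance explained
    return slope, intercept, r2

predicted = [8.0, 9.5, 10.2, 11.1, 12.4]   # hypothetical ME, MJ/kg DM
measured  = [8.2, 9.4, 10.5, 11.0, 12.6]
slope, intercept, r2 = linfit(predicted, measured)
```

An unbiased predictor would give a slope statistically indistinguishable from 1 and an intercept indistinguishable from 0, which is the test the NRC and UC Davis approaches passed and the ADAS approaches failed.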
Abstract:
BACKGROUND: Limited evidence exists on the significance of residual probing pocket depth (PPD) as a predictive parameter for periodontal disease progression and tooth loss. AIM: The aim of this study was to investigate the influence of residual PPD ≥ 5 mm and bleeding on probing (BOP) after active periodontal therapy (APT) on the progression of periodontitis and tooth loss. MATERIAL AND METHODS: In this retrospective cohort, 172 patients were examined after APT and supportive periodontal therapy (SPT) for 3-27 years (mean 11.3 years). Analyses were conducted using information at site, tooth and patient levels. The association of risk factors with tooth loss and progression of periodontitis was investigated using multilevel logistic regression analysis. RESULTS: The number of residual PPD increased during SPT. Compared with PPD
Abstract:
OBJECT A main concern with regard to surgery for low-grade glioma (LGG, WHO Grade II) is maintenance of the patient's functional integrity. This concern is particularly relevant for gliomas in the central region, where damage can have grave repercussions. The authors evaluated postsurgical outcomes with regard to neurological deficits, seizures, and quality of life. METHODS Outcomes were compared for 33 patients with central LGG (central cohort) and a control cohort of 31 patients with frontal LGG (frontal cohort), all of whom had had medically intractable seizures before undergoing surgery with mapping while awake. All surgeries were performed in the period from February 2007 through April 2010 at the same institution. RESULTS For the central cohort, the median extent of resection was 92% (range 80%-97%), and for the frontal cohort, the median extent of resection was 93% (range 83%-98%; p = 1.0). Although the rate of mild neurological deficits was similar for both groups, seizure freedom (Engel Class I) was achieved for only 4 (12.1%) of 33 patients in the central cohort compared with 26 (83.9%) of 31 patients in the frontal cohort (p < 0.0001). The rate of return to work was lower for patients in the central cohort (4 [12.1%] of 33) than for the patients in the frontal cohort (28 [90.3%] of 31; p < 0.0001). CONCLUSIONS Resection of central LGG is feasible and safe when appropriate intraoperative mapping is used. However, seizure control for these patients remains poor, a finding that contrasts markedly with seizure control for patients in the frontal cohort and with that reported in the literature. For patients with central LGG, poor seizure control ultimately determines quality of life because most will not be able to return to work.
Abstract:
Background Protein-energy malnutrition (PEM) is common in people with end-stage kidney disease (ESKD) undergoing maintenance haemodialysis (MHD) and correlates strongly with mortality. To this day, there is no gold standard for detecting PEM in patients on MHD. Aim of Study The aim of this study was to evaluate whether Nutritional Risk Screening 2002 (NRS-2002), handgrip strength measurement, mid-upper arm muscle area (MUAMA), triceps skin fold measurement (TSF), serum albumin, normalised protein catabolic rate (nPCR), Kt/V and eKt/V, dry body weight, body mass index (BMI), age and time since start of MHD are relevant for assessing PEM in patients on MHD. Methods The predictive value of the selected parameters on mortality, and on mortality or weight loss of more than 5%, was assessed. Quantitative data analysis of the 12 parameters in the same patients on MHD in autumn 2009 (n = 64) and spring 2011 (n = 40), with paired statistical analysis and multivariate logistic regression analysis, was performed. Results Paired data analysis showed a significant reduction of dry body weight, BMI and nPCR. Kt/Vtot did not change; eKt/V and handgrip strength measurements were significantly higher in spring 2011. No changes were detected in TSF, serum albumin, NRS-2002 and MUAMA. Serum albumin was shown to be the only predictor of death and of the combined endpoint “death or weight loss of more than 5%”. Conclusion We now screen patients biannually for serum albumin, nPCR, Kt/V, handgrip measurement of the shunt-free arm, dry body weight, age and time since initiation of MHD.
Abstract:
The research project is an extension of a series of administrative science and health care research projects evaluating the influence of external context, organizational strategy, and organizational structure upon organizational success or performance. The research relies on the assumption that there is no single best approach to the management of organizations (contingency theory). As organizational effectiveness is dependent on an appropriate mix of factors, organizations may be equally effective based on differing combinations of factors. The external context of the organization is expected to influence internal organizational strategy and structure, and in turn the internal measures affect performance (discriminant theory). The research considers the relationship of external context and organizational performance. The unit of study for the research is the health maintenance organization (HMO): an organization that accepts, in exchange for a fixed advance capitation payment, contractual responsibility to assure the delivery of a stated range of health services to a voluntarily enrolled population. With the current Federal resurgence of interest in the HMO as a major component of the health care system, attention must be directed at maximizing the development of HMOs from the limited resources available. Increased skills are needed in both Federal and private evaluation of HMO feasibility, in order to prevent resource investment in projects that will fail while concurrently identifying potentially successful projects that would not be considered using current standards. The research considers 192 factors measuring the contextual milieu (social, educational, economic, legal, demographic, health and technological factors). Through intercorrelation and principal components data reduction techniques this set was reduced to 12 variables.
Two measures of HMO performance were identified: (1) HMO status (operational or defunct), and (2) a principal components factor score considering eight measures of performance. The relationship between HMO context and performance was analysed using correlation and stepwise multiple regression methods. In each case it was concluded that the external contextual variables are not predictive of the success or failure of the study Health Maintenance Organizations. This suggests that the performance of an HMO may rely on internal organizational factors. These findings have policy implications, as contextual measures are used as a major determinant in HMO feasibility analysis and as a factor in the allocation of limited Federal funds.
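The data-reduction step described above can be sketched with a standard principal components analysis. The random matrix below stands in for the study's 192 contextual factors, which are not available from the abstract:

```python
# Sketch of principal components data reduction: many correlated contextual
# measures are projected onto a few components that capture most of the
# variance (random data stands in for the actual HMO factors).

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))            # 100 observations, 20 measures
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardise each measure

cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)    # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]         # reorder to descending variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 12                                    # keep the leading components
scores = X @ eigvecs[:, :k]               # reduced representation of the data
explained = eigvals[:k].sum() / eigvals.sum()
print(scores.shape, float(explained))
```

The reduced scores could then feed the correlation and stepwise regression analyses against the performance measures, as in the study.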
Abstract:
Proper maintenance of plant items is crucial for the safe and profitable operation of process plants. The relevant maintenance policies fall into the following four categories: (i) preventive/opportunistic/breakdown replacement policies; (ii) inspection/inspection-repair-replacement policies; (iii) restorative maintenance policies; and (iv) condition-based maintenance policies. For correlating failure times of component equipment and complete systems, the Weibull failure distribution has been used. A new, powerful method, SEQLIM, has been proposed for the estimation of the Weibull parameters, particularly when maintenance records contain very few failures and many successful operation times. When a system consists of a number of replaceable, ageing components, an opportunistic replacement policy has been found to be cost-effective, and a simple opportunistic model has been developed. Inspection models with various objective functions have been investigated. It was found that, on the assumption of a negative exponential failure distribution, all models converge to the same optimal inspection interval, provided the safety components are very reliable and the demand rate is low. When deterioration becomes a contributory factor to some failures, periodic inspections calculated from the above models are too frequent; a case of safety trip systems has been studied. A highly effective restorative maintenance policy can be developed if the performance of the equipment in this category can be related to some predictive modelling, and a novel fouling model has been proposed to determine cleaning strategies for condensers. Condition-based maintenance policies have also been investigated, and a simple gauge has been designed for condition monitoring of relief valve springs. A typical case of an exothermic inert gas generation plant has been studied to demonstrate how the various policies can be applied to devise overall maintenance actions.
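The abstract does not describe SEQLIM itself, but the estimation problem it targets, fitting Weibull parameters from records that mix a few failure times with many censored (successful-operation) times, can be sketched with a standard maximum-likelihood approach. The failure and censoring times below are invented for illustration:

```python
# Hedged sketch of Weibull parameter estimation with right-censored data.
# This is ordinary profile maximum likelihood, not the thesis's SEQLIM method,
# whose details are not given in the abstract.

import math

def weibull_mle(failures, censored, lo=0.1, hi=10.0, iters=60):
    """Return (shape beta, scale eta) by profile MLE with right censoring."""
    times = failures + censored
    r = len(failures)

    def g(beta):
        # The MLE shape parameter is the zero of this (increasing) function.
        s0 = sum(t ** beta for t in times)
        s1 = sum(t ** beta * math.log(t) for t in times)
        return s1 / s0 - 1.0 / beta - sum(math.log(t) for t in failures) / r

    for _ in range(iters):                 # bisection on g
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if g(mid) > 0 else (mid, hi)
    beta = 0.5 * (lo + hi)
    # With beta fixed, the scale MLE has a closed form over all times.
    eta = (sum(t ** beta for t in times) / r) ** (1.0 / beta)
    return beta, eta

# Three failures plus two units still running at 600 h (censored):
beta, eta = weibull_mle(failures=[120.0, 340.0, 510.0], censored=[600.0, 600.0])
```

A shape parameter above 1 would indicate wear-out, the regime in which the opportunistic replacement policies discussed above become attractive.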