913 results for Drilling process monitoring
Abstract:
Accurate reliability prediction for large-scale, long-lived engineering assets is a crucial foundation for effective asset risk management and optimal maintenance decision making. However, a lack of failure data for assets that fail infrequently, and changing operational conditions over long periods of time, make accurate reliability prediction for such assets very challenging. To address this issue, we present a Bayesian-Markov based approach to reliability prediction using prior knowledge and condition monitoring data. In this approach, Bayesian theory is used to incorporate prior information about failure probabilities and current information about asset health to make statistical inferences, while Markov chains are used to update and predict the health of assets based on condition monitoring data. The prior information can be supplied by domain experts, extracted from previous comparable cases or derived from basic engineering principles. Our approach differs from existing hybrid Bayesian models, which are normally used to update the parameter estimation of a given distribution, such as the Weibull-Bayesian distribution, or the transition probabilities of a Markov chain. Instead, our new approach can be used to update predictions of failure probabilities when failure data are sparse or nonexistent, as is often the case for large-scale, long-lived engineering assets.
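The combination of a Bayesian belief update with Markov-chain propagation can be sketched in a few lines. The health states, transition matrix, and observation likelihoods below are illustrative assumptions for the sketch, not values from the paper:

```python
import numpy as np

# Hypothetical sketch: a Markov chain over discrete health states, with a
# Bayesian update of the state belief from one condition-monitoring reading.
STATES = ["good", "degraded", "failed"]

# Row-stochastic transition matrix: P[i, j] = P(next state j | current state i).
# "failed" is absorbing. Values are assumed, e.g. from expert elicitation.
P = np.array([
    [0.95, 0.04, 0.01],
    [0.00, 0.90, 0.10],
    [0.00, 0.00, 1.00],
])

# Assumed likelihood of a "high vibration" alarm in each health state.
LIK_ALARM = np.array([0.05, 0.60, 0.90])

def bayes_update(prior, alarm):
    """Combine the prior belief with one condition-monitoring observation."""
    lik = LIK_ALARM if alarm else 1.0 - LIK_ALARM
    post = prior * lik
    return post / post.sum()

def predict(belief, steps):
    """Propagate the belief through the Markov chain for `steps` periods."""
    return belief @ np.linalg.matrix_power(P, steps)

prior = np.array([0.8, 0.15, 0.05])          # prior from domain knowledge
belief = bayes_update(prior, alarm=True)     # incorporate current CM data
failure_prob = predict(belief, steps=12)[2]  # P(failed) 12 periods ahead
```

Because no failure data enter the calculation, the sketch mirrors the paper's point that predictions can be made even when failure records are sparse or absent.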
Abstract:
With the large diffusion of Business Process Management (BPM) automation suites, the possibility of managing process-related risks arises. This paper introduces an innovative framework for process-related risk management and describes a working implementation realized by extending the YAWL system. The framework covers three aspects of risk management: risk monitoring, risk prevention, and risk mitigation. Risk monitoring functionality is provided using a sensor-based architecture, where sensors are defined at design time and used at run-time for monitoring purposes. Risk prevention functionality is provided in the form of suggestions about what should be executed, by whom, and how, through the use of decision trees. Finally, risk mitigation functionality is provided as a sequence of remedial actions (e.g. reallocating, skipping, or rolling back a work item) that should be executed to restore the process to a normal situation.
Abstract:
The modern structural diagnosis process relies on vibration characteristics to assess the serviceability level of a structure. This paper examines the potential of the change in flexibility method for use in the damage detection process, together with two main practical constraints associated with it. The first constraint addressed in this paper is the reduction in the number of data acquisition points due to a limited number of sensors. The results show that the accuracy of the change in flexibility method is influenced by the number of data acquisition points/sensor locations in real structures. Secondly, the effect of higher modes on the damage detection process has been studied. This addresses the difficulty of extracting higher order modal data with available sensors. Four damage indices have been presented to identify their potential for damage detection with respect to different locations and severities of damage. A simply supported beam with two degrees of freedom at each node is considered, and only single damage cases are treated throughout the paper.
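The core of the change in flexibility method is that the modal flexibility matrix can be assembled from only the first few mass-normalised modes, since higher modes are suppressed by the 1/ω² weighting. A minimal sketch, using fabricated modal data (the real method works on measured intact/damaged mode shapes):

```python
import numpy as np

def flexibility(freqs_hz, modes):
    """Modal flexibility F = sum_i phi_i phi_i^T / omega_i^2.

    `modes` has one mass-normalised mode shape per column.
    """
    omega = 2.0 * np.pi * np.asarray(freqs_hz)
    return (modes / omega**2) @ modes.T  # scale each column, then outer-sum

def damage_index(F_intact, F_damaged):
    """Largest absolute column-wise change of flexibility at each DOF."""
    return np.abs(F_damaged - F_intact).max(axis=0)

rng = np.random.default_rng(0)
modes = rng.normal(size=(6, 3))             # 6 sensor DOFs, 3 measured modes
F0 = flexibility([10.0, 25.0, 47.0], modes) # intact state
# Simulated damage: the first natural frequency drops (local stiffness loss)
F1 = flexibility([9.2, 25.0, 47.0], modes)
idx = damage_index(F0, F1)                  # peak suggests the damage zone
```

The 1/ω² factor is also why the paper's second constraint matters less than it might seem: truncating the modal series after a few modes changes F only slightly.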
Abstract:
The health effects of environmental hazards are often examined using time series of the association between a daily response variable (e.g., death) and a daily level of exposure (e.g., temperature). Exposures are usually the average from a network of stations. This gives each station equal importance, and negates the opportunity for some stations to be better measures of exposure. We used a Bayesian hierarchical model that weighted stations using random variables between zero and one. We compared the weighted estimates to the standard model using data on health outcomes (deaths and hospital admissions) and exposures (air pollution and temperature) in Brisbane, Australia. The improvements in model fit were relatively small, and the estimated health effects of pollution were similar using either the standard or weighted estimates. Spatially weighted exposures would probably be more worthwhile when there is either greater spatial detail in the health outcome or greater spatial variation in exposure.
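The contrast between the standard network average and station weighting is simple to state. In the paper the weights are random variables in a hierarchical model; in the sketch below they are fixed numbers purely to show the calculation, and all values are invented:

```python
import numpy as np

def weighted_exposure(station_series, weights):
    """Daily exposure as a weighted average over stations (weights in [0, 1])."""
    w = np.asarray(weights, dtype=float)
    if np.any(w < 0) or np.any(w > 1):
        raise ValueError("weights must lie between zero and one")
    return (station_series * w).sum(axis=1) / w.sum()

# 5 days x 3 stations of daily mean temperature (degrees C, fabricated)
temps = np.array([
    [24.1, 25.0, 23.7],
    [26.3, 27.1, 25.9],
    [22.8, 23.5, 22.1],
    [28.0, 29.2, 27.4],
    [25.5, 26.0, 24.9],
])

equal = weighted_exposure(temps, [1.0, 1.0, 1.0])     # the standard model
weighted = weighted_exposure(temps, [0.9, 0.3, 0.8])  # down-weight station 2
```

With all weights equal to one, the calculation reduces exactly to the usual network mean, which is the baseline the paper compares against.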
Abstract:
While the emission rate of ultrafine particles has been measured and quantified, there is very little information on the emission rates of ions and charged particles from laser printers. This paper describes a methodology that can be adopted for measuring the surface charge density on printed paper and the ion and charged particle emissions during operation of a high-emitting laser printer and shows how emission rates of ultrafine particles, ions and charged particles may be quantified using a controlled experiment within a closed chamber.
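For a well-mixed closed chamber, the emission rate during printing can be recovered from the measured concentration rise and a first-order loss term. This is a generic chamber mass-balance sketch, not the paper's calibrated procedure, and every number below is invented:

```python
# Closed-chamber mass balance: E = V * (dC/dt + k*C), where k lumps
# deposition and any ventilation losses. All values are assumed.
V = 1.0        # chamber volume, m^3
k = 0.3        # first-order particle loss rate, 1/h
C = 5.0e4      # particle concentration during printing, particles/cm^3
dCdt = 2.0e4   # observed concentration rise, particles/cm^3 per hour

# Emission rate in particles per hour (1 m^3 = 1e6 cm^3)
E = V * 1.0e6 * (dCdt + k * C)
```

The same balance applies whether the measured quantity is ultrafine particle number, ion count, or charged particle concentration, which is why a single controlled-chamber experiment can quantify all three.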
Abstract:
Organisations are constantly seeking efficiency improvements for their business processes in terms of time and cost. Management accounting enables reporting of detailed costs of operations for decision making purposes, although significant effort is required to gather accurate operational data. Business process management is concerned with systematically documenting, managing, automating, and optimising processes. Process mining gives valuable insight into processes through analysis of events recorded by an IT system in the form of an event log, with an emphasis on efficient utilisation of time and resources, although its primary focus is not on cost implications. In this paper, we propose a framework to support management accounting decisions on cost control by automatically incorporating cost data with historical data from event logs for monitoring, predicting and reporting process-related costs. We also illustrate how accurate, relevant and timely management accounting style cost reports can be produced on demand by extending the open-source process mining framework ProM.
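The central join of the framework, attaching cost rates to event log entries, can be sketched as follows. The log, rates, and field names are invented for illustration; the actual implementation extends ProM rather than using Python:

```python
from collections import defaultdict

# Hypothetical event log: one record per executed activity instance.
event_log = [
    {"case": "C1", "activity": "Register", "minutes": 10},
    {"case": "C1", "activity": "Assess",   "minutes": 45},
    {"case": "C2", "activity": "Register", "minutes": 12},
    {"case": "C2", "activity": "Assess",   "minutes": 50},
    {"case": "C2", "activity": "Approve",  "minutes": 5},
]

# Assumed management-accounting cost rates per activity (currency/minute).
cost_per_minute = {"Register": 0.8, "Assess": 1.5, "Approve": 2.0}

def cost_report(log, rates):
    """Total cost per case = sum over events of duration x activity rate."""
    totals = defaultdict(float)
    for ev in log:
        totals[ev["case"]] += ev["minutes"] * rates[ev["activity"]]
    return dict(totals)

report = cost_report(event_log, cost_per_minute)
```

Once costs are attached per event rather than per case, the same data supports monitoring (running totals), prediction (costs of partial traces), and on-demand reporting.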
Abstract:
A simple and effective down-sample algorithm, the Peak-Hold-Down-Sample (PHDS) algorithm, is developed in this paper to enable rapid and efficient data transfer in remote condition monitoring applications. The algorithm is particularly useful for high frequency Condition Monitoring (CM) techniques and for low speed machine applications, since the combination of a high sampling frequency and a low rotating speed generally leads to unwieldy data sizes. The effectiveness of the algorithm was evaluated and tested on four sets of data in the study. One set of data was extracted from the condition monitoring signal of a practical industry application. Another set was acquired from a low speed machine test rig in the laboratory. The other two sets were computer simulated bearing defect signals having either a single or multiple bearing defects. The results show that the PHDS algorithm can substantially reduce the size of data while preserving the critical bearing defect information for all the data sets used in this work, even when a large down-sample ratio was used (i.e., 500 times down-sampled). In contrast, the existing normal down-sample technique in signal processing eliminates useful and critical information, such as bearing defect frequencies, when the same down-sample ratio is employed. Noise and artificial frequency components are also induced by the normal down-sample technique, thus limiting its usefulness for machine condition monitoring applications.
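The peak-hold idea can be sketched directly: within each block of R consecutive samples, keep only the sample with the largest absolute value, so short impulsive bearing-defect events survive even a large down-sample ratio. The signal below is synthetic and the sampling parameters are assumed, not taken from the paper:

```python
import numpy as np

def peak_hold_downsample(x, ratio):
    """Keep the peak (by absolute value) of every `ratio` consecutive samples."""
    n = len(x) // ratio * ratio                 # drop the incomplete tail block
    blocks = np.asarray(x[:n]).reshape(-1, ratio)
    idx = np.abs(blocks).argmax(axis=1)         # location of each block's peak
    return blocks[np.arange(blocks.shape[0]), idx]

fs = 50_000                                 # assumed high CM sampling rate, Hz
t = np.arange(fs) / fs                      # one second of signal
x = 0.1 * np.sin(2 * np.pi * 30 * t)        # low-speed shaft component
x[137::5_000] += 1.0                        # sparse bearing-defect impulses

y = peak_hold_downsample(x, ratio=500)      # 500x reduction, as in the paper
```

Ordinary decimation (`x[::500]`) would retain an impulse only if it happened to fall exactly on a kept sample, which is why it destroys defect signatures at large ratios, whereas the peak-hold variant keeps every impulse that lands anywhere inside a block.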
Abstract:
New substation technology, such as non-conventional instrument transformers, and a need to reduce design and construction costs are driving the adoption of Ethernet-based digital process bus networks for high voltage substations. Protection and control applications can share a process bus, making more efficient use of the network infrastructure. This paper classifies and defines performance requirements for the protocols used in a process bus on the basis of application. These include GOOSE, SNMP and IEC 61850-9-2 sampled values. A method, based on the Multiple Spanning Tree Protocol (MSTP) and virtual local area networks, is presented that separates management and monitoring traffic from the rest of the process bus. A quantitative investigation of the interaction between various protocols used in a process bus is described. These tests also validate the effectiveness of the MSTP based traffic segregation method. While this paper focuses on a substation automation network, the results are applicable to other real-time industrial networks that implement multiple protocols. High volume sampled value data and time-critical circuit breaker tripping commands do not interact on a full duplex switched Ethernet network, even under very high network load conditions. This enables an efficient digital network to replace a large number of conventional analog connections between control rooms and high voltage switchyards.
Abstract:
The management and improvement of business processes are a core topic of the information systems discipline. The persistent demand in corporations within all industry sectors for increased operational efficiency and innovation, an emerging set of established and evaluated methods, tools, and techniques, as well as the quickly growing body of academic and professional knowledge, are indicative of the standing that Business Process Management (BPM) has nowadays. During the last decades, intensive research has been conducted with respect to the design, implementation, execution, and monitoring of business processes. Comparatively little attention, however, has been paid to questions related to organizational issues such as the adoption, usage, implications, and overall success of BPM approaches, technologies, and initiatives. This research gap motivated us to edit a corresponding special focus issue for the journal BISE/WIRTSCHAFTSINFORMATIK. We are happy that we are able to present a selection of three research papers and a state-of-the-art paper in the scientific section of the issue at hand. As these papers differ in the topics they investigate, the research methods they apply, and the theoretical foundations they build on, the diversity within the BPM field becomes evident. The academic papers are complemented by an interview with Phil Gilbert, IBM’s Vice President for Business Process and Decision Management, who reflects on the relationship between business processes and the data flowing through them, the need to establish a process context for decision making, and the calibration of BPM efforts toward executives who see processes as a means to an end, rather than a first-order concept in its own right.
Abstract:
This article proposes an approach for real-time monitoring of risks in executable business process models. The approach considers risks in all phases of the business process management lifecycle, from process design, where risks are defined on top of process models, through to process diagnosis, where risks are detected during process execution. The approach has been realized via a distributed, sensor-based architecture. At design-time, sensors are defined to specify risk conditions which, when fulfilled, are a likely indicator that negative process states (faults) will eventuate. Both historical and current process execution data can be used to compose such conditions. At run-time, each sensor independently notifies a sensor manager when a risk is detected. In turn, the sensor manager interacts with the monitoring component of a business process management system to report the results to process administrators, who may take remedial actions. The proposed architecture has been implemented on top of the YAWL system, and evaluated through performance measurements and usability tests with students. The results show that risk conditions can be computed efficiently and that the approach is perceived as useful by the participants in the tests.
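The sensor/sensor-manager split described above can be sketched as follows. The actual implementation extends the YAWL system (Java); the class names, condition, and threshold below are invented for illustration:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class RiskSensor:
    """A design-time risk condition evaluated over process execution data."""
    name: str
    condition: Callable[[Dict], bool]

class SensorManager:
    """Collects notifications from sensors; in the real architecture it
    prompts the BPMS monitoring component so administrators can act."""
    def __init__(self) -> None:
        self.alerts: List[str] = []

    def notify(self, sensor: RiskSensor) -> None:
        self.alerts.append(sensor.name)

def poll(sensors: List[RiskSensor], manager: SensorManager, data: Dict) -> None:
    for s in sensors:                # each sensor checks independently
        if s.condition(data):
            manager.notify(s)

# Assumed risk condition: 80% of the case deadline already consumed.
overdue = RiskSensor(
    "order-overdue",
    lambda d: d["elapsed_h"] > d["deadline_h"] * 0.8,
)
mgr = SensorManager()
poll([overdue], mgr, {"elapsed_h": 20, "deadline_h": 24})
```

Keeping each sensor an independent predicate over current and historical data is what lets the conditions be evaluated cheaply and in parallel at run-time.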
Abstract:
Restoring a large-scale power system has always been a complicated and important issue. A lot of research work has been done on different aspects of the whole power system restoration procedure. However, more time will be required to complete the power system restoration process in an actual situation if accurate and real-time system data cannot be obtained. With the development of the wide area monitoring system (WAMS), power system operators are capable of accessing more accurate data in the restoration stage after a major outage. The ultimate goal of system restoration is to restore as much load as possible in the shortest period of time after a blackout, and the restorable load can be estimated by employing WAMS. Moreover, discrete restorable loads are employed considering the limited number of circuit-breaker operations and the practical topology of distribution systems. In this work, a restorable load estimation method is proposed employing WAMS data after the network frame has been reenergized, and WAMS is also employed to monitor the system parameters in case the newly recovered system becomes unstable again. The proposed method has been validated with the New England 39-Bus system and an actual power system in Guangzhou, China.
Abstract:
Aims: This paper describes the development of a risk adjustment (RA) model predictive of individual lesion treatment failure in percutaneous coronary interventions (PCI) for use in a quality monitoring and improvement program. Methods and results: Prospectively collected data for 3972 consecutive revascularisation procedures (5601 lesions) performed between January 2003 and September 2011 were studied. Data on procedures to September 2009 (n = 3100) were used to identify factors predictive of lesion treatment failure. Factors identified included lesion risk class (p < 0.001), occlusion type (p < 0.001), patient age (p = 0.001), vessel system (p < 0.04), vessel diameter (p < 0.001), unstable angina (p = 0.003) and presence of major cardiac risk factors (p = 0.01). A Bayesian RA model was built using these factors, with the predictive performance of the model tested on the remaining procedures (area under the receiver operating characteristic curve: 0.765, Hosmer–Lemeshow p value: 0.11). Cumulative sum, exponentially weighted moving average and funnel plots were constructed using the RA model and subjectively evaluated. Conclusion: An RA model was developed and applied to statistical process control (SPC) monitoring for lesion failure in a PCI database. If linked to appropriate quality improvement governance response protocols, SPC using this RA tool might improve quality control and risk management by identifying variation in performance based on a comparison of observed and expected outcomes.
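One of the SPC charts mentioned, the risk-adjusted cumulative sum (CUSUM), can be sketched generically: each procedure contributes a log-likelihood-ratio weight based on its model-predicted failure probability, here testing for a doubling of the odds of failure. The probabilities and outcomes below are invented, not the paper's data:

```python
import math

def ra_cusum(outcomes, probs, odds_ratio=2.0):
    """Risk-adjusted CUSUM path; a rising path signals worse-than-expected
    results and would be compared against a control limit h."""
    s, path = 0.0, []
    for y, p in zip(outcomes, probs):
        # Log-likelihood-ratio weight under H1: odds of failure scaled by OR.
        denom = 1.0 - p + odds_ratio * p
        w = math.log(odds_ratio / denom) if y else math.log(1.0 / denom)
        s = max(0.0, s + w)          # reset at zero, standard CUSUM form
        path.append(s)
    return path

probs = [0.05, 0.10, 0.02, 0.20, 0.08]   # RA-model predicted failure risks
outcomes = [0, 1, 0, 1, 0]               # observed lesion failures (invented)
path = ra_cusum(outcomes, probs)
```

The risk adjustment is what makes the chart fair across operators: a failure on a high-risk lesion moves the path up less than a failure on a lesion the model predicted to be easy.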
Abstract:
Organisations are constantly seeking efficiency gains for their business processes in terms of time and cost. Management accounting enables detailed cost reporting of business operations for decision making purposes, although significant effort is required to gather accurate operational data. Process mining, on the other hand, may provide valuable insight into processes through analysis of events recorded in logs by IT systems, but its primary focus is not on cost implications. In this paper, a framework is proposed which aims to exploit the strengths of both fields in order to better support management decisions on cost control. This is achieved by automatically merging cost data with historical data from event logs for the purposes of monitoring, predicting, and reporting process-related costs. The on-demand generation of accurate, relevant and timely cost reports, in a style akin to reports in the area of management accounting, will also be illustrated. This is achieved through extending the open-source process mining framework ProM.
Abstract:
Scanning electrochemical microscopy (SECM), in the substrate generation–tip collection (SG-TC) mode, has been used to detect the cuprous ion intermediate formed during the course of electrodeposition of Cu metal from aqueous solution. Addition of chloride is confirmed to strongly stabilize the ion in aqueous solution and enhance the rate of Cu electrodeposition. This SECM method in the SG-TC mode offers an alternative to the rotating ring disk electrode (RRDE) technique for in situ studies on the effect of plating bath additives in metal electrodeposition. An attractive feature of the SECM relative to the RRDE method is that it allows qualitative aspects of the electrodeposition process to be studied in close proximity to the substrate in a simple and direct fashion using an inexpensive probe, and without the need for forced convection.
Abstract:
Non-communicable diseases (NCDs) dominate disease burdens globally and poor nutrition increasingly contributes to this global burden. Comprehensive monitoring of food environments, and evaluation of the impact of public and private sector policies on food environments is needed to strengthen accountability systems to reduce NCDs. The International Network for Food and Obesity/NCDs Research, Monitoring and Action Support (INFORMAS) is a global network of public-interest organizations and researchers that aims to monitor, benchmark and support public and private sector actions to create healthy food environments and reduce obesity, NCDs and their related inequalities. The INFORMAS framework includes two ‘process’ modules, that monitor the policies and actions of the public and private sectors, seven ‘impact’ modules that monitor the key characteristics of food environments and three ‘outcome’ modules that monitor dietary quality, risk factors and NCD morbidity and mortality. Monitoring frameworks and indicators have been developed for 10 modules to provide consistency, but allowing for stepwise approaches (‘minimal’, ‘expanded’, ‘optimal’) to data collection and analysis. INFORMAS data will enable benchmarking of food environments between countries, and monitoring of progress over time within countries. Through monitoring and benchmarking, INFORMAS will strengthen the accountability systems needed to help reduce the burden of obesity, NCDs and their related inequalities.