194 results for "Fault tolerant computing"


Relevance: 20.00%

Abstract:

Continuous monitoring of diesel engine performance is critical for detecting fault developments early, before they materialize into a functional failure. Instantaneous crank angular speed (IAS) analysis is one of the few non-intrusive condition monitoring techniques that can be utilized for such a task. Furthermore, the technique is more suitable for mass industry deployment than other non-intrusive methods such as vibration and acoustic emission techniques, due to its low instrumentation cost, smaller data size and robust signal quality, since IAS is not affected by engine operating noise or noise from the surrounding environment. A combination of IAS and order analysis was employed in this experimental study, and the major order component of the IAS spectrum was used for engine loading estimation and fault diagnosis of a four-stroke four-cylinder diesel engine. It was shown that IAS analysis can provide useful information about engine speed variation caused by changing piston momentum and crankshaft acceleration during the engine combustion process. It was also found that the major order component of the IAS spectrum directly associated with the engine firing frequency (at twice the mean shaft rotating speed) can be utilized to estimate the engine loading condition regardless of whether the engine is operating in a healthy condition or with faults. The amplitude of this order component follows a distinctive exponential curve as the loading condition changes. A mathematical relationship was then established in the paper to estimate the engine power output from the amplitude of this order component of the IAS spectrum. It was further illustrated that the IAS technique can be employed to detect a simulated exhaust valve fault.
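As a concrete illustration of the signal-processing chain described above, the sketch below computes an order spectrum from an IAS record, reads off the firing-order amplitude at twice the shaft speed, and calibrates a two-parameter exponential amplitude-load curve. The sampling convention, function names and the exact exponential form are assumptions for illustration, not the paper's formulation.

```python
# Sketch of IAS order analysis for engine load estimation. Assumes the IAS
# signal has been resampled at uniform crank-angle increments so that the
# FFT bins line up with shaft orders. All names and the A = a*exp(b*P)
# calibration form are illustrative assumptions.
import numpy as np

def order_spectrum(ias, samples_per_rev):
    """Single-sided amplitude spectrum of an IAS record, indexed by shaft order."""
    n = len(ias)
    amp = np.abs(np.fft.rfft(ias - np.mean(ias))) * 2.0 / n
    orders = np.fft.rfftfreq(n, d=1.0 / samples_per_rev)  # cycles per revolution
    return orders, amp

def firing_order_amplitude(ias, samples_per_rev, firing_order=2.0):
    """Amplitude of the major order component at the engine firing frequency
    (twice the mean shaft speed for a four-stroke four-cylinder engine)."""
    orders, amp = order_spectrum(ias, samples_per_rev)
    return amp[np.argmin(np.abs(orders - firing_order))]

def fit_load_curve(power_outputs, amplitudes):
    """Calibrate A ~ a*exp(b*P) from reference runs; ln(A) is linear in P."""
    b, log_a = np.polyfit(power_outputs, np.log(amplitudes), 1)
    return np.exp(log_a), b

def estimate_power(amplitude, a, b):
    """Invert the calibrated exponential curve to estimate engine power output."""
    return (np.log(amplitude) - np.log(a)) / b
```

Given a handful of reference runs at known loads, `fit_load_curve` pins down the exponential trend the abstract describes, and `estimate_power` inverts it for an unseen operating point.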

Relevance: 20.00%

Abstract:

This research suggests information technology (IT) governance structures for managing cloud computing services. Interest in acquiring IT resources as a utility from the cloud computing environment is gaining momentum. Cloud computing services present organizations with opportunities to manage their IT expenditure on an ongoing basis and to access modern IT resources with which to innovate and manage their continuity. However, cloud computing services are no silver bullet. Organizations need appropriate governance structures and policies in place to manage these services. The decisions made by these governance structures will ensure the effective management of cloud computing services, facilitating a better fit of the services into organizations' existing processes so as to achieve business (process-level) and financial (firm-level) objectives. Using a triangulation approach, we suggest four governance structures for managing cloud computing services: a chief cloud officer, a cloud management committee, a cloud service facilitation centre, and a cloud relationship centre. We also propose that these governance structures relate directly to organizations' cloud computing services-related business objectives, and indirectly to their cloud computing services-related financial objectives. Field survey data on the perceptions of actual and prospective cloud computing service adopters suggest that the proposed governance structures would indeed contribute directly to cloud computing-related business objectives and indirectly to cloud computing-related financial objectives.

Relevance: 20.00%

Abstract:

A novel gray-box neural network model (GBNNM), comprising a multi-layer perceptron (MLP) neural network (NN) and integrators, is proposed for a model identification and fault estimation (MIFE) scheme. With the GBNNM, both the nonlinearity and the dynamics of a class of nonlinear dynamic systems can be approximated. Unlike previous NN-based model identification methods, the GBNNM directly inherits the system dynamics and separately models the system nonlinearities. The model therefore corresponds well with the object system and is easy to build. The GBNNM is embedded online as a normal model reference to obtain the quantitative residual between the object system output and the GBNNM output. This residual accurately indicates the fault offset value, so it is suitable for differing fault severities. To further estimate the fault parameters (FPs), an improved extended state observer using the same NNs from the GBNNM (IESONN) is proposed to avoid requiring knowledge of the extended state observer's nonlinearity. The proposed MIFE scheme is then applied to reaction wheels (RW) in a satellite attitude control system (SACS). The scheme using the GBNNM is compared with other NNs in the same fault scenario, and several partial loss of effectiveness (LOE) faults with different severities are considered to validate the effectiveness of the FP estimation and its superiority.
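The gray-box structure (an MLP that learns only the nonlinearity while an explicit integrator carries the known dynamics, with the residual against the plant output serving as the fault indicator) can be sketched as below. The toy one-hidden-layer MLP, the Euler integrator and all names are illustrative assumptions, not the paper's implementation.

```python
# Minimal gray-box model sketch: integrator supplies the dynamics x' = f(x, u),
# MLP approximates the unknown nonlinearity f. Illustrative, untrained weights.
import numpy as np

class TinyMLP:
    """One-hidden-layer perceptron standing in for the learned nonlinearity."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_out, n_hidden))
        self.b2 = np.zeros(n_out)

    def __call__(self, z):
        return self.W2 @ np.tanh(self.W1 @ z + self.b1) + self.b2

class GrayBoxModel:
    """MLP nonlinearity wrapped in an explicit (Euler) integrator, so the
    model inherits the system dynamics and only learns the nonlinear part."""
    def __init__(self, mlp, x0, dt):
        self.mlp, self.x, self.dt = mlp, np.asarray(x0, dtype=float), dt

    def step(self, u):
        # Euler integration of x' = f_nn(x, u)
        self.x = self.x + self.dt * self.mlp(np.concatenate([self.x, u]))
        return self.x

def fault_residual(y_system, y_model):
    """Quantitative residual between plant and model output; per the abstract,
    this indicates the fault offset value across differing severities."""
    return np.asarray(y_system) - np.asarray(y_model)
```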

Relevance: 20.00%

Abstract:

Mobile technologies are enabling access to information in diverse environments, and are exposing a wider group of individuals to such technology. This paper therefore proposes that a wider view of user relations is required than is usually considered in information systems research. Specifically, we examine the potential effects of emerging mobile technologies on end-user relations, with a focus on the 'secondary user': those who are not intended to interact directly with the technology but are the intended consumers of its output. For illustration, we draw on a study of a U.K. regional Fire and Rescue Service and deconstruct mobile technology use at Fire Service incidents. Our findings suggest that, because of the nature of mobile technologies and their contexts of use, secondary user relations in such emerging mobile environments are important and need further exploration.

Relevance: 20.00%

Abstract:

Supervisory Control and Data Acquisition (SCADA) systems are widely used to control critical infrastructure automatically. Capturing and analyzing the packet-level traffic flowing through such a network is an essential requirement for problems such as legacy network mapping and fault detection. Working from captured network traffic, we present a simple modeling technique that supports mapping of the SCADA network topology via traffic monitoring. By characterizing atomic network components in terms of their input-output topology and the relationship between their data traffic logs, we show that these modeling primitives have good compositional behaviour, which allows complex networks to be modeled. Finally, the predictions generated by our model are found to be in good agreement with experimentally obtained traffic.
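One way to make the log-relationship idea concrete: treat each monitored component's packet-count time series as its traffic log and posit a directed link wherever one component's series strongly predicts another's at a short lag. The correlation threshold, lag window and log format below are hypothetical choices, not the paper's model.

```python
# Toy topology inference from per-component traffic logs: a directed edge is
# inferred when the upstream packet counts are strongly lag-correlated with
# the downstream ones. Threshold and lag window are illustrative parameters.
import numpy as np
from itertools import permutations

def lagged_correlation(a, b, max_lag):
    """Best normalized correlation of b against a over lags 1..max_lag."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return max(np.mean(a[:-lag] * b[lag:]) for lag in range(1, max_lag + 1))

def infer_edges(logs, max_lag=5, threshold=0.8):
    """logs: dict mapping component name -> packet-count time series.
    Returns inferred directed edges (upstream, downstream)."""
    return [(src, dst)
            for src, dst in permutations(logs, 2)
            if lagged_correlation(np.asarray(logs[src], dtype=float),
                                  np.asarray(logs[dst], dtype=float),
                                  max_lag) >= threshold]
```

Because each component is characterized only by its own log relationship, the same test can be rerun over the union of several components' logs, mirroring the compositional behaviour the abstract highlights.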

Relevance: 20.00%

Abstract:

Distributed computation and storage have been widely used for processing big data sets. For many big data problems, with data sizes growing rapidly, the distribution of computing tasks and the related data can greatly affect the performance of the computing system. In this paper, a distributed computing framework is presented for high-performance computing of all-to-all comparison problems. A data distribution strategy is embedded in the framework for reduced storage space and a balanced computing load. Experiments are conducted to demonstrate the effectiveness of the developed approach. They show that about 88% of the ideal performance capacity is achieved across multiple machines using the approach presented in this paper.
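To make the distribution problem concrete, the toy sketch below assigns every unordered pair (i, j) to one of M machines, greedily preferring the machine where the pair adds the fewest new data items (to reduce storage) and breaking ties by current load (to balance computation). The greedy rule is an illustrative stand-in for the paper's strategy, not the strategy itself.

```python
# Toy distribution of an all-to-all comparison: each pair is computed exactly
# once; each machine stores only the items its assigned pairs touch.
from itertools import combinations

def distribute_pairs(n_items, n_machines):
    pairs = [[] for _ in range(n_machines)]     # comparison tasks per machine
    items = [set() for _ in range(n_machines)]  # data items stored per machine
    for i, j in combinations(range(n_items), 2):
        # fewest new items first (storage), then lightest load (balance)
        m = min(range(n_machines),
                key=lambda m: (len({i, j} - items[m]), len(pairs[m])))
        pairs[m].append((i, j))
        items[m].update((i, j))
    return pairs, items

pairs, items = distribute_pairs(n_items=100, n_machines=8)
print("load per machine:   ", [len(p) for p in pairs])
print("storage per machine:", [len(s) for s in items])
```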

Relevance: 20.00%

Abstract:

This paper uses transaction cost theory to study cloud computing adoption. A model is developed and tested with data from an Australian survey. According to the results, perceived vendor opportunism and perceived legislative uncertainty around cloud computing were significantly associated with perceived cloud computing security risk. There was also a significant negative relationship between perceived cloud computing security risk and the intention to adopt cloud services. This study also reports on adoption rates of cloud computing in terms of applications, as well as the types of services used.

Relevance: 20.00%

Abstract:

Introduction: Research on the ability of self-report assessment tools to predict crash outcomes has produced mixed results. As a result, researchers are beginning to explore whether examining culpability for crash involvement can improve this predictive efficacy. This study reports on the application of the Manchester Driver Behaviour Questionnaire (DBQ) to predict crash involvement among a sample of general Queensland motorists, and in particular on whether including a crash culpability variable improves predictive outcomes. Surveys were completed by 249 general motorists online or in a pen-and-paper format. Results: Consistent with previous research, a factor analysis revealed a three-factor solution for the DBQ accounting for 40.5% of the overall variance. However, multivariate analysis revealed that the DBQ had little ability to predict crash involvement; rather, exposure to the road was found to be predictive of crashes. An analysis of culpability revealed that 88 participants reported being “at fault” for their most recent crash. Corresponding bivariate and multivariate analyses that included the culpability variable did not improve the identification of those involved in crashes. Conclusions: While preliminary, the results suggest that including crash culpability may not necessarily improve predictive outcomes in self-report methodologies, although it is noted that the current small sample size may also have had a deleterious effect on this endeavour. This paper also outlines the need for future research (which also includes official crash and offence outcomes) to better understand the actual contribution of self-report assessment tools, and culpability variables, to understanding and improving road safety.
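For readers who want to replicate the modeling step, a hedged sketch of the kind of analysis described (predicting crash involvement from DBQ factor scores and exposure, with and without a culpability covariate) follows. The column names, file name and coding choices are hypothetical placeholders, not the study's data.

```python
# Hypothetical re-creation of the predictive comparison: does adding a
# culpability flag improve crash-involvement prediction? Column names are
# placeholders; at_fault_last_crash is assumed coded 0 for non-crash cases.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

df = pd.read_csv("dbq_survey.csv")  # hypothetical survey data file
base = ["dbq_errors", "dbq_violations", "dbq_lapses", "exposure_km"]
y = df["crash_involved"]

for label, feats in [("without culpability", base),
                     ("with culpability", base + ["at_fault_last_crash"])]:
    probs = cross_val_predict(LogisticRegression(max_iter=1000),
                              df[feats], y, cv=5,
                              method="predict_proba")[:, 1]
    print(label, "AUC =", round(roc_auc_score(y, probs), 3))
```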

Relevance: 20.00%

Abstract:

Increased focus on energy cost savings and carbon footprint reduction efforts has improved the visibility of building energy simulation, which has become a mandatory requirement of several building rating systems. Despite developments in building energy simulation algorithms and user interfaces, some major challenges remain; an important one is the computational demand and processing time. In this paper, we analyze the opportunities and challenges associated with this topic while executing a set of 275 parametric energy models simultaneously in EnergyPlus using a High Performance Computing (HPC) cluster. Successful parallel computing implementation of building energy simulations will not only reduce the time needed to obtain results and enable scenario development for different design considerations, but might also enable dynamic Building Information Modeling (BIM) integration and near real-time decision-making. This paper concludes with a discussion of future directions and opportunities associated with building energy modeling simulations.
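A minimal sketch of the parallel execution setup, assuming the standard EnergyPlus command-line interface (`energyplus -w <weather> -d <output dir> <idf>`) and an illustrative directory layout; worker counts, paths and file names are assumptions, not details from the paper.

```python
# Fan out parametric EnergyPlus runs across local cores; on a cluster the
# same per-model function could be dispatched by a job array instead.
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

WEATHER = "site.epw"                            # hypothetical weather file
MODELS = sorted(Path("models").glob("*.idf"))   # e.g. 275 parametric variants

def run_model(idf):
    out = Path("results") / idf.stem
    out.mkdir(parents=True, exist_ok=True)
    proc = subprocess.run(["energyplus", "-w", WEATHER, "-d", str(out), str(idf)],
                          capture_output=True)
    return proc.returncode

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=16) as pool:
        codes = list(pool.map(run_model, MODELS))
    print(sum(c == 0 for c in codes), "of", len(MODELS), "simulations succeeded")
```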

Relevance: 20.00%

Abstract:

This paper proposes a highly reliable fault diagnosis approach for low-speed bearings. The proposed approach first extracts wavelet-based fault features that represent diverse symptoms of multiple low-speed bearing defects. The most useful fault features for diagnosis are then selected by a genetic algorithm (GA)-based kernel discriminative feature analysis cooperating with one-against-all multicategory support vector machines (OAA MCSVMs). Finally, each support vector machine is individually trained with its own feature vector comprising the most discriminative fault features, offering the highest classification performance. In this study, the effectiveness of the proposed GA-based kernel discriminative feature analysis and the classification ability of the individually trained OAA MCSVMs are assessed in terms of average classification accuracy. In addition, the proposed GA-based kernel discriminative feature analysis is compared with four other state-of-the-art feature analysis approaches. Experimental results indicate that the proposed approach is superior to the other feature analysis methodologies, yielding average classification accuracies of 98.06% and 94.49% under rotational speeds of 50 revolutions per minute (RPM) and 80 RPM, respectively. Furthermore, the individually trained MCSVMs, each with its own optimal fault features selected by the proposed GA-based kernel discriminative feature analysis, outperform the standard OAA MCSVMs, showing average accuracies of 98.66% and 95.01% for bearings under rotational speeds of 50 RPM and 80 RPM, respectively.
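The wrapper loop at the heart of such a method (a genetic algorithm searching over boolean feature masks, scored by the cross-validated accuracy of one-against-all SVMs on the selected features) can be sketched as follows. The GA operators, parameters and the RBF kernel are generic illustrations, not the paper's settings.

```python
# GA feature selection wrapped around one-against-all (OAA) multicategory SVMs.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

def fitness(mask, X, y):
    """Cross-validated accuracy of an OAA SVM on the masked feature subset."""
    if not mask.any():
        return 0.0
    clf = OneVsRestClassifier(SVC(kernel="rbf"))
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def ga_select(X, y, pop=20, gens=30, p_mut=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    population = rng.random((pop, n)) < 0.5        # random boolean masks
    for _ in range(gens):
        scores = np.array([fitness(ind, X, y) for ind in population])
        parents = population[np.argsort(scores)[-pop // 2:]]   # keep top half
        # uniform crossover: each gene of each child from a random parent
        children = parents[rng.integers(0, len(parents), (pop, n)), np.arange(n)]
        children ^= rng.random((pop, n)) < p_mut   # bit-flip mutation
        population = children
    scores = np.array([fitness(ind, X, y) for ind in population])
    return population[int(np.argmax(scores))]      # best feature mask found
```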

Relevance: 20.00%

Abstract:

This paper proposes a recommendation system that supports process participants in taking risk-informed decisions, with the goal of reducing the risks that may arise during process execution. Risk reduction involves decreasing the likelihood of a process fault occurring as well as its severity. Given a business process exposed to risks, e.g. a financial process exposed to a risk of reputation loss, we enact this process, and whenever a process participant needs to provide input, e.g. by selecting the next task to execute or by filling out a form, we suggest the action that minimizes the predicted process risk. Risks are predicted by traversing decision trees generated from the logs of past process executions, which consider process data, involved resources, task durations and other information elements such as task frequencies. When multiple process instances run concurrently, a second technique uses integer linear programming to compute the optimal assignment of resources to the tasks to be performed, in order to deal with the interplay between the risks of different instances. The recommendation system has been implemented as a set of components on top of the YAWL BPM system, and its effectiveness has been evaluated in a real-life scenario, in collaboration with risk analysts of a large insurance company. The results, based on a simulation of the real-life scenario and its comparison with the event data provided by the company, show that concurrently executed process instances complete with significantly fewer faults and lower fault severities when the recommendations provided by our recommendation system are taken into account.
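The per-participant recommendation step can be sketched as below: a decision tree learned from past execution logs predicts the fault probability of each candidate action, and the lowest-risk action is suggested. (The concurrent-instance resource assignment via integer linear programming is omitted here.) The log format, feature names and tree settings are hypothetical.

```python
# Hedged sketch: recommend the next action that minimizes predicted fault risk,
# using a decision tree trained on logs of past process executions.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

log = pd.read_csv("past_executions.csv")   # hypothetical: one row per instance
features = ["next_task", "resource", "claim_amount", "task_frequency"]
X = pd.get_dummies(log[features])          # one-hot encode categorical columns
tree = DecisionTreeClassifier(max_depth=5).fit(X, log["faulted"])

def recommend(current_state, candidate_actions):
    """Traverse the tree for each candidate action; return the least risky."""
    rows = [dict(current_state, next_task=a) for a in candidate_actions]
    cand = (pd.get_dummies(pd.DataFrame(rows))
              .reindex(columns=X.columns, fill_value=0))
    risk = tree.predict_proba(cand)[:, 1]  # predicted fault probability
    return candidate_actions[int(risk.argmin())]
```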