221 results for fault-tolerant quantum computation


Relevance: 20.00%

Abstract:

Anisotropic damage distribution and evolution have a profound effect on borehole stress concentrations. Damage evolution is an irreversible process that is not adequately described within classical equilibrium thermodynamics. Therefore, we propose a constitutive model, based on non-equilibrium thermodynamics, that accounts for anisotropic damage distribution, anisotropic damage threshold and anisotropic damage evolution. We implemented this constitutive model numerically, using the finite element method, to calculate stress–strain curves and borehole stresses. The resulting stress–strain curves are distinctively different from linear elastic-brittle and linear elastic-ideal plastic constitutive models and realistically model experimental responses of brittle rocks. We show that the onset of damage evolution leads to an inhomogeneous redistribution of material properties and stresses along the borehole wall. The classical linear elastic-brittle approach to borehole stability analysis systematically overestimates the stress concentrations on the borehole wall, because dissipative strain-softening is underestimated. The proposed damage mechanics approach explicitly models dissipative behaviour and leads to non-conservative mud window estimations. Furthermore, anisotropic rocks with preferential planes of failure, like shales, can be addressed with our model.
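The abstract does not reproduce the constitutive equations. As orientation only, the Python sketch below shows a generic one-dimensional strain-softening damage law; it is a textbook illustration, not the authors' anisotropic, non-equilibrium formulation, and all material parameters are invented for the example.

import numpy as np

def damaged_stress(eps, E=30e9, eps0=1e-3, eps_f=5e-3):
    # Generic 1-D strain-softening law (NOT the authors' model):
    # sigma = (1 - D) * E * eps, with damage D growing linearly from the
    # threshold strain eps0 to full failure at eps_f. A real implementation
    # would track the historical maximum strain to enforce irreversibility.
    D = np.clip((eps - eps0) / (eps_f - eps0), 0.0, 1.0)
    return (1.0 - D) * E * eps

eps = np.linspace(0, 6e-3, 7)
print(damaged_stress(eps))  # stress rises, softens past eps0, drops to zero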

Relevance: 20.00%

Abstract:

Wind power has become one of the most popular renewable energy resources worldwide and is anticipated to account for 12% of total global electricity generation capacity by 2020. Given the harsh environments in which wind turbines operate, fault diagnosis and condition monitoring are important for wind turbine safety and reliability. This paper employs a systematic literature review to report the most recent developments in wind turbine fault diagnosis from 2005 to 2012. The frequent faults and failures in wind turbines are considered, and the techniques that researchers have used are introduced, classified and discussed.

Relevance: 20.00%

Abstract:

The production of adequate agricultural outputs to support the growing human population places great demands on agriculture, especially in light of ever-greater restrictions on input resources. Sorghum is a drought-adapted cereal capable of reliable production where other cereals fail, and thus represents a good candidate for addressing food security as agricultural inputs of water and arable land grow scarce. A long-standing issue with sorghum grain is its inherently low digestibility. Here we show that a low-frequency allele type in the starch metabolic gene pullulanase is associated with increased digestibility, regardless of genotypic background. We also provide evidence that the beneficial allele type is not associated with deleterious pleiotropic effects in the modern field environment. We argue that increasing the digestibility of an adapted crop is a viable way to address food security while maximizing water- and land-use efficiency.

Relevance: 20.00%

Abstract:

Cloud computing is an emerging computing paradigm in which IT resources are provided over the Internet as a service to users. One such service offered through the Cloud is Software as a Service, or SaaS. SaaS can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. SaaS is receiving substantial attention today from both software providers and users, and analyst firms predict a strong future market for it. This raises new challenges for SaaS providers managing SaaS, especially in large-scale data centres such as Clouds. One of these challenges is managing Cloud resources for SaaS so as to maintain SaaS performance while optimising resource use. Extensive research on the resource optimisation of Cloud services has not yet addressed the challenges of managing resources for composite SaaS. This research addresses that gap by focusing on three new problems of composite SaaS: placement, clustering and scalability. The overall aim is to develop efficient and scalable mechanisms that facilitate the delivery of high-performance composite SaaS to users while optimising the resources used. All three problems are highly constrained, large-scale and complex combinatorial optimisation problems, so evolutionary algorithms are adopted as the main solution technique.

The first research problem concerns how a composite SaaS is placed onto Cloud servers to optimise its performance while satisfying the SaaS resource and response-time constraints. Existing research on this problem often ignores the dependencies between components and considers placement of a homogeneous type of component only. A precise formulation of the composite SaaS placement problem is presented. A classical genetic algorithm and two versions of cooperative co-evolutionary algorithms are designed to manage the placement of heterogeneous types of SaaS components together with their dependencies, requirements and constraints. Experimental results demonstrate the efficiency and scalability of these new algorithms.

In the second problem, SaaS components are assumed to be already running on Cloud virtual machines (VMs); however, the dynamic nature of the Cloud environment may require the current placement to be modified. Existing techniques have focused mostly on the infrastructure level rather than the application level. This research addresses the problem at the application level by clustering suitable components onto VMs to optimise the resources used while maintaining SaaS performance. Two versions of grouping genetic algorithms (GGAs) are designed to cater for the structural grouping of a composite SaaS: the first uses a repair-based method and the second a penalty-based method to handle the problem constraints. The experimental results confirm that the GGAs consistently produce a better reconfiguration placement plan than a common heuristic for clustering problems.

The third research problem deals with the replication or deletion of SaaS instances to cope with the SaaS workload. Determining a scaling plan that minimises the resources used while maintaining SaaS performance is a critical task, and the constraints and interdependencies between components make solutions even harder to find. A hybrid genetic algorithm (HGA) was developed to solve this problem, exploring the search space through its genetic operators and fitness function to determine the SaaS scaling plan. The HGA also uses the problem's domain knowledge to ensure that solutions meet the constraints and achieve the objectives. The experimental results demonstrate that the HGA consistently outperforms a heuristic algorithm, achieving low-cost scaling and placement plans. This research has identified three significant new problems for composite SaaS in the Cloud, and the various evolutionary algorithms developed to address them contribute to the evolutionary computation field. The algorithms provide efficient resource management of composite SaaS in the Cloud, resulting in a low total cost of ownership for users while guaranteeing SaaS performance.
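The thesis abstract does not reproduce the algorithms themselves. The following Python sketch illustrates only the penalty-based grouping idea mentioned for the second problem: components are clustered onto VMs, and capacity violations are penalised in the fitness function. Component counts, capacities and GA settings are all hypothetical.

import random

# Hypothetical sizes; the thesis's actual component/VM models are not given here.
N_COMPONENTS = 12          # SaaS components to cluster
N_VMS = 4                  # available virtual machines
VM_CAPACITY = 100          # per-VM resource capacity (arbitrary units)
demand = [random.randint(10, 40) for _ in range(N_COMPONENTS)]

def fitness(assignment):
    # Fewer VMs used is better; capacity violations incur a penalty
    # (the penalty-based constraint handling mentioned in the abstract).
    load = [0] * N_VMS
    for comp, vm in enumerate(assignment):
        load[vm] += demand[comp]
    used = sum(1 for l in load if l > 0)
    overload = sum(max(0, l - VM_CAPACITY) for l in load)
    return used + 10.0 * overload      # penalise infeasible plans

def evolve(pop_size=50, generations=200, p_mut=0.1):
    pop = [[random.randrange(N_VMS) for _ in range(N_COMPONENTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N_COMPONENTS)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < p_mut:               # mutation: move one component
                child[random.randrange(N_COMPONENTS)] = random.randrange(N_VMS)
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

print(evolve())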

Relevance: 20.00%

Abstract:

In this paper, a framework for isolating unprecedented faults in an exhaust gas recirculation (EGR) valve system is presented. Using normal behavior data generated by a high-fidelity engine simulation, the recently introduced Growing Structure Multiple Model System (GSMMS) is used to construct models of normal behavior for the EGR valve system and its various subsystems. Using the GSMMS models as a foundation, anomalous behavior of the entire system is then detected as statistically significant departures of the most recent modeling residuals from those observed during normal behavior. By reconnecting anomaly detectors to the constituent subsystems, the anomaly can be isolated without the need for prior training on faulty data. Furthermore, faults that were previously encountered (and modeled) are recognized using the same approach as the anomaly detectors.
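As a rough illustration of the residual-based detection idea (not the GSMMS model itself, whose structure the abstract does not detail), the Python sketch below fits a simple linear model on normal-behaviour data and scores new data by how far its residuals drift from the normal residual distribution. All signals and noise levels are synthetic.

import numpy as np

def fit_normal_model(u, y):
    # Fit a simple linear input/output model on normal-behaviour data
    # (a stand-in for the GSMMS model).
    A = np.column_stack([u, np.ones_like(u)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return coef, resid.mean(), resid.std()

def anomaly_score(coef, mu, sigma, u_new, y_new):
    # Statistically significant departure of the recent residuals from the
    # normal-behaviour residual distribution (z-score of the recent mean).
    resid = y_new - (coef[0] * u_new + coef[1])
    return abs(resid.mean() - mu) / (sigma / np.sqrt(len(resid)))

# Normal behaviour: y tracks u; faulty behaviour: an offset appears.
rng = np.random.default_rng(0)
u = rng.uniform(0, 1, 500)
y = 2.0 * u + rng.normal(0, 0.05, 500)
model = fit_normal_model(u, y)
u_f = rng.uniform(0, 1, 50)
y_f = 2.0 * u_f + 0.2 + rng.normal(0, 0.05, 50)   # simulated fault
print(anomaly_score(*model, u_f, y_f))            # large value => anomaly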

Relevance: 20.00%

Abstract:

In this paper, a recently introduced model-based method for precedent-free fault detection and isolation (FDI) is modified to deal with multiple-input, multiple-output (MIMO) systems and is applied to an automotive engine with an exhaust gas recirculation (EGR) system. Using normal behavior data generated by a high-fidelity engine simulation, the growing structure multiple model system (GSMMS) approach is used to construct dynamic models of normal behavior for the EGR system and its constituent subsystems. Using the GSMMS models as a foundation, anomalous behavior is detected whenever the most recent modeling residuals depart in a statistically significant way from the residuals displayed during normal behavior. By reconnecting the anomaly detectors (ADs) to the constituent subsystems, EGR valve, cooler, and valve controller faults are isolated without the need for prior training on data corresponding to particular faulty system behaviors.
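Given per-subsystem anomaly scores like those sketched above, isolation reduces to checking which subsystem's detector fires. A minimal sketch, with an assumed significance threshold and made-up scores:

# Hypothetical isolation logic: one anomaly detector (AD) per subsystem; the
# subsystem(s) whose AD fires localise the fault without fault-trained data.
THRESHOLD = 3.0   # z-score significance threshold (assumed value)

def isolate(subsystem_scores):
    # subsystem_scores: {name: anomaly z-score from that subsystem's AD}
    return [name for name, z in subsystem_scores.items() if z > THRESHOLD]

scores = {"EGR valve": 5.2, "cooler": 0.8, "valve controller": 1.1}  # example
print(isolate(scores))   # -> ['EGR valve']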

Relevance: 20.00%

Abstract:

Diagnostics of rolling element bearings has traditionally been developed for constant operating conditions, and sophisticated techniques, like Spectral Kurtosis or Envelope Analysis, have proven their effectiveness by means of experimental tests, mainly conducted on small-scale laboratory test-rigs. Algorithms have been developed for the digital signal processing of data collected at constant speed and bearing load, with a few exceptions allowing only small fluctuations of these quantities. Owing to the spread of condition-based maintenance in many industrial fields, a need for more flexible algorithms has emerged in recent years, requiring compatibility with highly variable operating conditions such as acceleration/deceleration transients. This paper analyzes the problems related to significant speed and load variability, discusses in detail the effect they have on bearing damage symptoms, and proposes solutions to adapt existing algorithms to cope with this new challenge. In particular, the paper will (i) discuss the implications of variable speed for the applicability of diagnostic techniques, (ii) address quantitatively the effects of load on the characteristic frequencies of damaged bearings and (iii) present a new approach for bearing diagnostics in variable conditions, based on envelope analysis. The research is based on experimental data obtained using artificially damaged bearings installed on a full-scale test-rig, equipped with an actual train traction system and reproducing operation on a real track, including all the environmental noise due to track irregularity and electrical disturbances of such a harsh application.
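For reference, a minimal Python sketch of the classical envelope analysis pipeline on which the proposed approach builds: band-pass filtering around a resonance, Hilbert envelope, and a spectrum inspected for characteristic fault frequencies. The band edges, sampling rate and the 87 Hz fault frequency are placeholder values, not the paper's; for variable-speed operation the signal would first be resampled to the angle domain (order tracking), which this sketch omits.

import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def envelope_spectrum(x, fs, band=(2000.0, 8000.0)):
    # Band-pass around an assumed resonance, take the Hilbert envelope,
    # and return the envelope spectrum.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    env = np.abs(hilbert(filtfilt(b, a, x)))
    env -= env.mean()
    spec = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
    return freqs, spec

# Synthetic test: short gated tone bursts repeating at 87 Hz (a plausible
# outer-race fault frequency) buried in noise.
fs = 25600
t = np.arange(0, 1.0, 1.0 / fs)
bursts = np.sin(2 * np.pi * 5000 * t) * (np.mod(t, 1 / 87.0) < 0.001)
x = bursts + 0.2 * np.random.randn(len(t))
freqs, spec = envelope_spectrum(x, fs)
print(freqs[np.argmax(spec[1:]) + 1])   # expect a peak near 87 Hz or a harmonic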

Relevance: 20.00%

Abstract:

Diagnostics of rolling element bearings is usually performed by means of vibration signals measured by accelerometers placed in the proximity of the bearing under investigation. The aim is to monitor the integrity of the bearing components, in order to avoid catastrophic failures or to implement condition-based maintenance strategies. In particular, the trend in this field is to combine different signal-enhancement and signal-analysis techniques in a single algorithm. Among the former, Minimum Entropy Deconvolution (MED) has been identified as a key tool able to highlight, within the vibration signal, the effect of possible damage in one of the bearing components. This paper presents the application of this technique to signals collected on a simple test-rig able to test damaged industrial roller bearings in different working conditions. The effectiveness of the technique has been tested by comparing the results of one undamaged bearing with three bearings artificially damaged in different locations, namely on the inner race, outer race and rollers. Since MED performance depends on the filter length, the most suitable value of this parameter is defined on the basis of both the application and the measured signals. This represents an original contribution of the paper.
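For readers who want to see the mechanics, below is a minimal Python sketch of a Wiggins-style MED iteration: an FIR filter is iteratively updated to maximise the kurtosis of its output, which sharpens impulsive fault signatures. The filter length L is exactly the parameter whose selection the paper investigates; the default below is arbitrary, and in practice one would sweep L and compare the kurtosis of the filtered outputs.

import numpy as np
from scipy.linalg import toeplitz, solve
from scipy.signal import lfilter

def med(x, L=30, iters=30):
    # Minimum Entropy Deconvolution (objective-function iteration): find an
    # FIR filter f of length L maximising the kurtosis of y = f * x.
    r = np.correlate(x, x, mode="full")
    R = toeplitz(r[len(x) - 1 : len(x) - 1 + L])   # input autocorrelation matrix
    f = np.zeros(L)
    f[L // 2] = 1.0                                # start from a delayed impulse
    for _ in range(iters):
        y = lfilter(f, 1.0, x)
        c = np.correlate(y ** 3, x, mode="full")   # cross-correlation <y^3, x>
        b = c[len(x) - 1 : len(x) - 1 + L]
        f = solve(R, b)
        f /= np.linalg.norm(f)                     # keep the filter normalised
    return lfilter(f, 1.0, x), f

# usage: y_sharp, f = med(vibration_signal, L=30)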

Relevance: 20.00%

Abstract:

In the field of diagnostics of rolling element bearings, the development of sophisticated techniques such as Spectral Kurtosis and 2nd Order Cyclostationarity has extended the capability of expert users to identify not only the presence, but also the location of the damage in the bearing. Most signal-analysis methods, like those mentioned above, result in a spectrum-like diagram that, in the case of damage, presents line frequencies or peaks in the neighbourhood of certain theoretical characteristic frequencies. These frequencies depend only on the damage position, the bearing geometry and the rotational speed. The major improvement in this field would be the development of algorithms with a high degree of automation. This paper aims at this important objective by discussing, for the first time, how these peaks can drift away from the theoretically expected frequencies as a function of different working conditions, i.e. speed, torque and lubrication. After providing a brief description of the peak patterns associated with each type of damage, the paper shows the typical magnitudes of the deviations from the theoretically expected frequencies. The last part of the study presents some remarks on increasing the reliability of the automatic algorithm. The research is based on experimental data obtained using artificially damaged bearings installed in a gearbox.
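The theoretical characteristic frequencies referred to here follow standard bearing kinematics. A small Python helper (with example geometry, not the paper's gearbox bearings) makes the dependence on damage position, geometry and speed explicit:

import numpy as np

def bearing_frequencies(fr, n, d, D, phi_deg=0.0):
    # Theoretical fault frequencies for a rolling element bearing:
    # shaft speed fr [Hz], n rolling elements, element diameter d,
    # pitch diameter D, contact angle phi. Measured peaks can deviate from
    # these values with speed, torque and lubrication, as the paper discusses.
    c = (d / D) * np.cos(np.radians(phi_deg))
    return {
        "BPFO": 0.5 * n * fr * (1 - c),           # outer-race defect
        "BPFI": 0.5 * n * fr * (1 + c),           # inner-race defect
        "BSF":  0.5 * (D / d) * fr * (1 - c**2),  # rolling-element (ball spin)
        "FTF":  0.5 * fr * (1 - c),               # cage
    }

print(bearing_frequencies(fr=25.0, n=9, d=7.9, D=34.5))  # example geometry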

Relevance: 20.00%

Abstract:

The detection and correction of defects remains among the most time-consuming and expensive aspects of software development. Extensive automated testing and code inspections may mitigate their effect, but some code fragments are necessarily more likely to be faulty than others, and automated identification of fault-prone modules helps to focus testing and inspections, thus limiting wasted effort and potentially improving detection rates. However, software metrics data are often extremely noisy, with enormous imbalances between the sizes of the positive and negative classes. In this work, we present a new approach to predictive modelling of fault proneness in software modules, introducing a new feature representation to overcome some of these issues. This rank sum representation offers improved, or at worst comparable, performance relative to earlier approaches on standard data sets, and readily allows the user to choose an appropriate trade-off between precision and recall to optimise inspection effort for different testing environments. The method is evaluated using the NASA Metrics Data Program (MDP) data sets, and performance is compared with existing studies based on the Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers, and with our own comprehensive evaluation of these methods.
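The paper's exact feature construction is not reproduced here, but the following Python sketch conveys the rank-sum idea as described: rank each metric across modules, sum the ranks, and slide a threshold on the resulting score to trade precision against recall. The data and the 90th-percentile threshold are purely illustrative.

import numpy as np
from scipy.stats import rankdata

def rank_sum_scores(X):
    # Rank-based representation (in the spirit of the paper's rank sum
    # feature): rank each metric column across modules, then sum the ranks,
    # so modules that are extreme on many noisy metrics score highest.
    ranks = np.apply_along_axis(rankdata, 0, X)
    return ranks.sum(axis=1)

def predict(scores, threshold):
    # Sliding the threshold trades precision against recall, letting the
    # user match inspection effort to the testing environment.
    return scores > threshold

# Toy data: 100 modules x 5 software metrics (e.g. LOC, complexity, ...)
rng = np.random.default_rng(1)
X = rng.lognormal(size=(100, 5))
scores = rank_sum_scores(X)
print(predict(scores, np.percentile(scores, 90)).sum())  # flag the top 10%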

Relevance: 20.00%

Abstract:

Low-speed rotating machines, which are among the most critical components in wind turbine drive trains, are threatened by a range of technical and environmental defects. These factors strengthen the economic case for health monitoring and condition monitoring of such systems. A defect in such a system releases energy at a very low rate, so condition monitoring techniques that rely on detecting energy loss are difficult, if not impossible, to apply. The Acoustic Emission (AE) technique largely overcomes this issue and is well suited to detecting very small energy release rates. AE is more than 50 years old as a technique; it detects the sounds associated with the failure of materials. AE signals are non-stationary, capture the elastic stress waves emitted by a failing component, support online monitoring, and are highly sensitive for fault diagnosis. This paper first reviews the history and development of AE across three eras: the Age of Enlightenment (1950-1967), the Golden Age of AE (1967-1980) and the Period of Transition (1980-present). It then discusses the application of AE condition monitoring to machinery processes and the various systems in which the AE technique has been applied for health monitoring. Finally, experimental results from a QUT test rig, in which an outer-race bearing fault was simulated, demonstrate the sensitivity of AE for detecting incipient faults in low-speed machines.
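As a concrete illustration of the most basic AE processing step, the Python sketch below implements simple threshold-crossing hit detection on a synthetic burst. The sampling rate, threshold and dead time are assumptions for the example, not the QUT rig's acquisition settings.

import numpy as np

def ae_hits(x, fs, threshold, dead_time=1e-4):
    # Simple threshold-crossing hit detector, a basic building block of AE
    # monitoring: record bursts whose amplitude exceeds a fixed threshold,
    # ignoring re-crossings within a dead time.
    dead = int(dead_time * fs)
    hits, last = [], -dead
    above = np.flatnonzero(np.abs(x) > threshold)
    for i in above:
        if i - last >= dead:
            hits.append(i / fs)   # hit time in seconds
        last = i
    return hits

fs = 1_000_000                    # AE is typically sampled around the MHz range
t = np.arange(0, 0.01, 1 / fs)
x = 0.01 * np.random.randn(len(t))
x[2500:2600] += 0.5 * np.exp(-np.arange(100) / 20.0)   # one simulated AE burst
print(ae_hits(x, fs, threshold=0.1))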

Relevance: 20.00%

Abstract:

In this paper we model a quantum dot in close proximity to a gap plasmon waveguide to study quantum dot-plasmon interactions. Assuming that the waveguide is single mode, the paper examines the dependence of the quantum dot's spontaneous emission rate on waveguide dimensions such as width and height. We compare the coupling efficiency of a gap waveguide in symmetric and asymmetric configurations, showing that the symmetric waveguide has a better coupling efficiency to the quantum dot. We also demonstrate that an optimally placed quantum dot near a symmetric waveguide with a 50 nm x 50 nm cross section can capture 80% of the spontaneous emission into a guided plasmon mode.
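The 80% capture figure is conventionally expressed as a spontaneous-emission coupling efficiency, or beta-factor. Assuming the usual decomposition of the total decay rate (not spelled out in the abstract), in LaTeX:

% Coupling efficiency of an emitter near a plasmonic waveguide:
% Gamma_pl is the decay rate into the guided plasmon mode, Gamma_rad into
% free-space radiation, Gamma_nr into non-radiative channels.
\[
  \beta \;=\; \frac{\Gamma_{\mathrm{pl}}}{\Gamma_{\mathrm{pl}} + \Gamma_{\mathrm{rad}} + \Gamma_{\mathrm{nr}}}
\]
% The reported 80% capture corresponds to beta = 0.8 for the optimally placed
% dot near the symmetric 50 nm x 50 nm waveguide.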

Relevance: 20.00%

Abstract:

This is a discussion of the journal article "Constructing summary statistics for approximate Bayesian computation: semi-automatic approximate Bayesian computation". The article and discussion appeared in the Journal of the Royal Statistical Society: Series B (Statistical Methodology).

Relevance: 20.00%

Abstract:

We present a novel approach for developing summary statistics for use in approximate Bayesian computation (ABC) algorithms using indirect inference. We embed this approach within a sequential Monte Carlo algorithm that is completely adaptive. This methodological development was motivated by an application involving data on macroparasite population evolution modelled with a trivariate Markov process. The main objective of the analysis is to compare inferences on the Markov process when considering two different indirect models. The two indirect models are based on a Beta-Binomial model and a three-component mixture of Binomials, with the former providing a better fit to the observed data.
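A minimal Python sketch of the indirect-inference idea, simplified to rejection ABC rather than the paper's adaptive sequential Monte Carlo: an auxiliary model is fitted to each simulated data set, and its parameter estimates serve as the summary statistics. The generative model, the moment-based auxiliary fit and the tolerance are toy stand-ins, not the macroparasite application.

import numpy as np

rng = np.random.default_rng(42)

def simulate(theta, n=200):
    # Toy generative model standing in for the trivariate Markov process
    # (which is not specified in enough detail here to reproduce).
    return rng.binomial(10, theta, size=n)

def auxiliary_stats(data):
    # Indirect-inference summaries: fit a simple auxiliary model (here a
    # crude moment fit) and use its estimates as the statistics.
    return np.array([data.mean() / 10.0, data.var()])

def abc_rejection(observed, n_sims=5000, quantile=0.01):
    s_obs = auxiliary_stats(observed)
    thetas = rng.uniform(0, 1, n_sims)            # prior draws
    dists = np.array([np.linalg.norm(auxiliary_stats(simulate(t)) - s_obs)
                      for t in thetas])
    keep = dists <= np.quantile(dists, quantile)  # accept the closest simulations
    return thetas[keep]

observed = simulate(0.3)
posterior = abc_rejection(observed)
print(posterior.mean(), posterior.std())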