984 results for Reliability (engineering)
Abstract:
We present a scheme for quasiperfect transfer of polariton states from a sender to a spatially separated receiver, both composed of high-quality cavities filled with atomic samples. The sender and the receiver are connected by a nonideal transmission channel (the data bus) modelled by a network of lossy empty cavities. In particular, we analyze the influence of a large class of data-bus topologies on the fidelity and transfer time of the polariton state. Moreover, we assume dispersive couplings between the polariton fields and the data-bus normal modes in order to achieve a tunneling-like state transfer. Such a tunneling transfer mechanism, by which the excitation energy of the polariton effectively does not populate the data-bus cavities, appreciably attenuates the dissipative effects of those cavities. After deriving a Hamiltonian for the effective coupling between the sender and the receiver, we show that the decay rate of the fidelity is proportional to a cooperativity parameter that weighs the cost of the dissipation rate against the benefit of the effective coupling strength. The fidelity of the transfer process can be increased at the expense of longer transfer times. The dependence of both the fidelity and the transfer time on the network topology is analyzed in detail for distinct parameter regimes. It follows that the data-bus topology can be exploited to control the duration of the state-transfer process.
Abstract:
The mapping, exact or approximate, of a many-body problem onto an effective single-body problem is one of the most widely used conceptual and computational tools of physics. Here, we propose and investigate the inverse map of effective approximate single-particle equations onto the corresponding many-particle system. This approach allows us to understand which interacting system a given single-particle approximation is actually describing, and how far this is from the original physical many-body system. We illustrate the resulting reverse engineering process by means of the Kohn-Sham equations of density-functional theory. In this application, our procedure sheds light on the nonlocality of the density-potential mapping of density-functional theory, and on the self-interaction error inherent in approximate density functionals.
Abstract:
We evaluated the reliability and validity of a Brazilian-Portuguese version of the Epilepsy Medication Treatment Complexity Index (EMTCI). Interrater reliability was evaluated with the intraclass correlation coefficient (ICC), and validity was evaluated by correlation of mean EMTCI scores with the following variables: number of antiepileptic drugs (AEDs), seizure control, patients' perception of seizure control, and adherence to the therapeutic regimen as measured with the Morisky scale. We studied patients with epilepsy followed in a tertiary university-based hospital outpatient clinic setting, aged 18 years or older, independent in activities of daily living, and without cognitive impairment or active psychiatric disease. ICCs ranged from 0.721 to 0.999. Mean EMTCI scores were significantly correlated with the variables assessed. Higher EMTCI scores were associated with an increasing number of AEDs, uncontrolled seizures, patients' perception of lack of seizure control, and poorer adherence to the therapeutic regimen. The results indicate that the Brazilian-Portuguese EMTCI is reliable and valid for clinical application in Brazil. It may be a useful tool in developing strategies to minimize treatment complexity, possibly improving seizure control and quality of life in people with epilepsy in our setting. (C) 2011 Elsevier Inc. All rights reserved.
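As a rough illustration of the interrater reliability statistic used above, the sketch below computes a one-way random-effects ICC(1). The ratings are hypothetical and the specific ICC variant is an assumption for illustration; the study may have used a different (e.g. two-way) form.

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1): (MSB - MSW) / (MSB + (k-1)*MSW).

    ratings: n_subjects x k_raters array (hypothetical data, not the study's).
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    # Between-subjects and within-subjects mean squares
    msb = k * np.sum((row_means - grand) ** 2) / (n - 1)
    msw = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical EMTCI scores from two raters for five patients
scores = [[4, 5], [10, 11], [7, 7], [15, 14], [9, 10]]
icc = icc_oneway(scores)
```

With raters that agree closely, the ICC lands near the upper end of the 0.721-0.999 range reported above.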
Abstract:
Aims. The aims of this study were to assess the internal reliability (internal consistency), construct validity, sensitivity and ceiling and floor effects of the Brazilian-Portuguese version of the Impact of Event Scale (IES). Design. Methodological research design. Method. The Brazilian-Portuguese version of the IES was applied to a group of 91 burned patients at three times: the first week after the burn injury (time one), between the fourth and the sixth months (time two) and between the ninth and the 12th months (time three). The internal consistency, construct validity (convergent and dimensionality), sensitivity and ceiling and floor effects were tested. Results. Cronbach's alpha coefficients showed high internal consistency for the total scale (0.87) and for the domains intrusive thoughts (0.87) and avoidance responses (0.76). During hospitalisation (time one), the scale showed low, positive correlations with pain measures immediately before (r = 0.22; p < 0.05) and immediately after baths and dressings (r = 0.21; p < 0.05). After discharge, we found strong, negative correlations with self-esteem (r = -0.52; p < 0.01), strong, positive correlations with depression (r = 0.63; p < 0.01) and low, negative correlations with the Bodily pain (r = -0.24; p < 0.05), Social functioning (r = -0.34; p < 0.01) and Mental health (r = -0.27; p < 0.05) domains of the SF-36 at time two. Regarding sensitivity, no statistically significant differences were observed between mean scale scores according to burned body surface (p = 0.21). A floor effect was observed in most of the IES items. Conclusion. The adapted version of the scale proved reliable and valid for assessing the impact of the event in the group of patients under analysis.
Relevance to clinical practice. The Impact of Event Scale can be used in research and clinical practice to assess nursing interventions aimed at decreasing stress during rehabilitation.
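The internal-consistency coefficient reported above is Cronbach's alpha; a minimal sketch of its computation, using hypothetical item responses rather than the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum(item variances)/var(total score)).

    items: n_respondents x k_items array (hypothetical, not the study's data).
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five hypothetical respondents answering three items on a 0-4 scale
responses = [[1, 2, 1], [3, 3, 4], [4, 4, 4], [2, 1, 2], [0, 1, 0]]
alpha = cronbach_alpha(responses)
```

Highly correlated items drive the total-score variance well above the sum of item variances, pushing alpha toward 1.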
Abstract:
This paper proposes a three-stage offline approach to detect, identify, and correct series and shunt branch parameter errors. In Stage 1, the branches suspected of having parameter errors are identified through an Identification Index (II). The II of a branch is the ratio between the number of measurements adjacent to that branch whose normalized residuals are higher than a specified threshold value and the total number of measurements adjacent to that branch. Using several measurement snapshots, Stage 2 estimates the suspicious parameters in a simultaneous multiple-state-and-parameter estimation, via an augmented state and parameter estimator that extends the V-theta state vector to include the suspicious parameters. Stage 3 validates the estimates obtained in Stage 2 and is performed via a conventional weighted least squares estimator. Several simulation results (with IEEE bus systems) have demonstrated the reliability of the proposed approach in dealing with single and multiple parameter errors in adjacent and non-adjacent branches, as well as in parallel transmission lines with series compensation. Finally, the proposed approach is confirmed by tests performed on the Hydro-Quebec TransEnergie network.
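The Stage-1 Identification Index described above is a simple ratio; the sketch below computes it for one branch, with hypothetical residual values and an illustrative threshold:

```python
def identification_index(normalized_residuals, threshold=3.0):
    """Fraction of measurements adjacent to a branch whose normalized
    residuals exceed the threshold (Stage-1 II). Residuals and threshold
    here are illustrative, not values from the paper."""
    flagged = sum(1 for r in normalized_residuals if abs(r) > threshold)
    return flagged / len(normalized_residuals)

# Normalized residuals of six measurements adjacent to a suspect branch
ii = identification_index([4.2, -3.5, 1.1, 0.4, 5.0, -2.8], threshold=3.0)
```

A branch with a high II is then passed to the Stage-2 augmented estimator.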
Abstract:
This work presents the study and development of a combined fault-location scheme for three-terminal transmission lines using wavelet transforms (WTs). The methodology is based on the low- and high-frequency components of the transient signals originating from fault conditions, registered at the terminals of a system. By processing these signals with the WT, it is possible to determine the arrival times of travelling waves of voltages and/or currents propagating from the fault point to the terminals, as well as to estimate the fundamental frequency components. The new approach combines several different solutions into a reliable and accurate fault-location scheme. The main idea is to use a decision routine to select which method should be applied to each situation presented to the algorithm. The combined algorithm was tested for different fault conditions by simulations using the ATP (Alternative Transients Program) software. The results obtained are promising and demonstrate a highly satisfactory degree of accuracy and reliability of the proposed method.
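For the travelling-wave part of such schemes, the classic two-terminal arrival-time formula gives the flavor of the computation; the three-terminal combination in the paper is more involved, and all values below are illustrative:

```python
def fault_distance(line_length, wave_speed, t_a, t_b):
    """Classic two-terminal travelling-wave location:
    d_A = (L + v*(t_a - t_b)) / 2, where t_a and t_b are the first wavefront
    arrival times at terminals A and B (in the paper, extracted from wavelet
    coefficients of the recorded transients)."""
    return (line_length + wave_speed * (t_a - t_b)) / 2

# Illustrative 100 km line, wave speed ~2.9e5 km/s, arrivals at 120 us and 200 us
d = fault_distance(line_length=100.0, wave_speed=2.9e5, t_a=120e-6, t_b=200e-6)
```

Here the earlier arrival at terminal A places the fault 38.4 km from it; the decision routine described above would pick this travelling-wave estimate or the fundamental-frequency one depending on the case.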
Abstract:
Hybrid active-passive damping treatments combine the reliability, low cost and robustness of viscoelastic damping treatments with the high performance, modal selectivity and adaptability of piezoelectric active control. Numerous hybrid damping treatments have been reported in the literature. They differ mainly in the relative positions of the viscoelastic treatments, sensors and piezoelectric actuators. In this work we present an experimental analysis of three active-passive damping design configurations applied to a cantilever beam. In particular, two design configurations based on the extension mode of piezoelectric actuators combined with viscoelastic constrained-layer damping treatments, and one design configuration with shear piezoelectric actuators embedded in a sandwich beam with a viscoelastic core, are analyzed. For comparison purposes, a purely active design configuration with an extension piezoelectric actuator bonded to an elastic beam is also analyzed. The active-passive damping performance of the four design configurations is compared. Results show that the active-passive design configurations provide more reliable and wider-range damping performance than the purely active configuration.
Abstract:
The selection criteria for Euler-Bernoulli or Timoshenko beam theories are generally given by means of some deterministic rule involving beam dimensions. The Euler-Bernoulli beam theory is used to model the behavior of flexure-dominated (or "long") beams. The Timoshenko theory applies to shear-dominated (or "short") beams. In the mid-length range, both theories should be equivalent, and some agreement between them would be expected. Indeed, it is shown in the paper that, for some mid-length beams, the deterministic displacement responses of the two theories agree very well. However, the article points out that the behavior of the two beam models is radically different in terms of uncertainty propagation. In the paper, some beam parameters are modeled as parameterized stochastic processes. The two formulations are implemented and solved via a Monte Carlo-Galerkin scheme. It is shown that, for an uncertain elasticity modulus, propagation of uncertainty to the displacement response is much larger for Timoshenko beams than for Euler-Bernoulli beams. On the other hand, propagation of uncertainty for random beam height is much larger for Euler-Bernoulli beam displacements. Hence, any reliability or risk analysis becomes completely dependent on the beam theory employed. The authors believe this is not widely acknowledged by the structural safety or stochastic mechanics communities. (C) 2010 Elsevier Ltd. All rights reserved.
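The height-sensitivity point can be illustrated with a small Monte Carlo sketch: the Euler-Bernoulli tip deflection of a cantilever scales as h^-3, so a small coefficient of variation in the height is roughly tripled in the displacement. The load, dimensions and distributions below are assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
P, L, E, b = 1.0e3, 2.0, 2.1e11, 0.1        # load N, length m, modulus Pa, width m
h = rng.normal(0.2, 0.004, size=100_000)    # height m, 2% coefficient of variation
I = b * h**3 / 12.0                         # second moment of area
delta = P * L**3 / (3.0 * E * I)            # Euler-Bernoulli cantilever tip deflection
cov_h = 0.004 / 0.2
cov_delta = delta.std() / delta.mean()      # roughly 3x the input CoV
```

Running the same exercise with an uncertain E instead of h amplifies the uncertainty only by a factor of about one, which is the asymmetry between the two theories that the paper examines in detail.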
Abstract:
This paper addresses the time-variant reliability analysis of structures with random resistance or random system parameters. It deals with the problem of a random load process crossing a random barrier level. The implications of approximating the arrival rate of the first overload by an ensemble-crossing rate are studied. The error involved in this so-called "ensemble-crossing rate" approximation is described in terms of load process and barrier distribution parameters, and in terms of the number of load cycles. Existing results are reviewed, and significant improvements involving load process bandwidth, mean-crossing frequency and time are presented. The paper shows that the ensemble-crossing rate approximation can be accurate enough for problems where load process variance is large in comparison to barrier variance, but especially when the number of load cycles is small. This includes important practical applications like random vibration due to impact loadings and earthquake loading. Two application examples are presented, one involving earthquake loading and one involving a frame structure subject to wind and snow loadings. (C) 2007 Elsevier Ltd. All rights reserved.
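A small Monte Carlo sketch of the approximation being analyzed: the exact first-overload probability conditions on the random barrier, while the ensemble-crossing approximation treats every cycle as crossing with the marginal probability P(S > R). The parameters below (large load variance, small barrier variance, few cycles) are chosen to match the regime in which the paper finds the approximation accurate, and are otherwise arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cycles, n_sim = 5, 200_000
R = rng.normal(10.0, 0.5, size=n_sim)             # random barrier, small variance
S = rng.normal(7.0, 2.0, size=(n_sim, n_cycles))  # load cycles, large variance

# Exact: probability that at least one cycle exceeds the (fixed-per-history) barrier
p_exact = np.mean(S.max(axis=1) > R)

# Ensemble-crossing approximation: cycles treated as independent crossings
# with the marginal (ensemble) probability p = P(S > R)
p = np.mean(S > R[:, None])
p_approx = 1.0 - (1.0 - p) ** n_cycles
```

In this regime the two probabilities differ only by a few percent; shrinking the load variance or increasing the number of cycles widens the gap, which is the error behavior the paper characterizes.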
Abstract:
This paper presents results of research into the use of the Bellman-Zadeh approach to decision making in a fuzzy environment for solving multicriteria power engineering problems. The application of the approach conforms to the principle of guaranteed result and provides a constructive, computationally efficient way to obtain harmonious solutions by solving the associated maxmin problems. The presented results are universally applicable and are already being used to solve diverse classes of power engineering problems. This is illustrated by considering problems of power and energy shortage allocation, power system operation, optimization of network configuration in distribution systems, and energetically effective voltage control in distribution systems. (c) 2011 Elsevier Ltd. All rights reserved.
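In the Bellman-Zadeh approach, the fuzzy decision is the intersection (minimum) of the criteria membership functions, and the chosen alternative maximizes that minimum, which is the "guaranteed result" principle mentioned above. A minimal discrete sketch, with illustrative alternatives and membership values:

```python
# Three hypothetical alternatives scored by three fuzzy criteria (memberships in [0, 1])
alternatives = ["A", "B", "C"]
mu = {
    "cost":        {"A": 0.9, "B": 0.6, "C": 0.4},
    "reliability": {"A": 0.3, "B": 0.7, "C": 0.8},
    "losses":      {"A": 0.5, "B": 0.8, "C": 0.6},
}

# Decision membership = min over criteria; best alternative = maxmin
guaranteed = {a: min(crit[a] for crit in mu.values()) for a in alternatives}
best = max(guaranteed, key=guaranteed.get)
```

Alternative A scores highest on cost but its weak reliability drags its guaranteed level down, so the maxmin rule prefers the more balanced alternative B.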
Abstract:
This paper describes the development of an optimization model for the management and operation of a large-scale, multireservoir water supply distribution system with preemptive priorities. The model considers multiple objectives and hedging rules. During periods of drought, when water supply is insufficient to meet the planned demand, appropriate rationing factors are applied to reduce water supply. In this paper, a water distribution system is formulated as a network and solved by the GAMS modeling system for mathematical programming and optimization. A user-friendly interface is developed to facilitate the manipulation of data and to generate graphs and tables for decision makers. The optimization model and its interface form a decision support system (DSS), which can be used to configure a water distribution system to facilitate capacity expansion and reliability studies. Several examples are presented to demonstrate the utility and versatility of the developed DSS under different supply and demand scenarios, including applications to one of the largest water supply systems in the world, the Sao Paulo Metropolitan Area Water Supply Distribution System in Brazil.
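As a toy illustration of preemptive priorities under drought rationing (not the paper's GAMS network formulation): demands are served in priority order, and whichever class the remaining supply can only partly cover is cut short. Names, demands and priorities are hypothetical.

```python
def allocate(supply, demands):
    """Serve demands in preemptive-priority order (lower number = higher
    priority); a class is rationed when the remaining supply runs short.
    Purely illustrative sketch."""
    allocation = {}
    for name, demand, _priority in sorted(demands, key=lambda t: t[2]):
        served = min(demand, supply)
        allocation[name] = served
        supply -= served
    return allocation

# 80 units available against 120 units of total demand
alloc = allocate(80.0, [("city", 50, 1), ("industry", 30, 2), ("irrigation", 40, 3)])
```

Here the two highest-priority demands are met in full and the lowest-priority class absorbs the entire shortfall; a hedging rule would instead spread smaller cuts across classes earlier in the drought.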
Abstract:
Reconciliation can be divided into stages, each stage representing the performance of a mining operation, such as long-term estimation, short-term estimation, planning, mining and mineral processing. The gold industry includes a further stage, the budget, in which the company informs the financial market of its annual production forecast. Dividing reconciliation into stages increases the reliability of the annual budget reported by mining companies, while also allowing the critical stages responsible for the overall estimation error to be detected and corrected through optimization of sampling protocols and equipment. This paper develops and validates a new reconciliation model for the gold industry, based on correct sampling practices and the subdivision of reconciliation into stages, aiming for better grade estimates and more efficient control of the mining industry's processes, from resource estimation to final production.
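The staged view can be expressed numerically: each stage factor is the ratio of one estimate (or measurement) to the preceding one, and the product of the stage factors recovers the overall budget-to-production factor, so a poor overall result can be traced to the stage whose factor deviates most from 1. The gold quantities below are illustrative.

```python
import math

# Hypothetical gold quantities at each reconciliation stage (e.g. ounces)
stages = {"budget": 100_000, "long_term": 97_000, "short_term": 95_000,
          "mined": 93_000, "produced": 91_000}
names = list(stages)

# Stage factor = ratio of each stage's figure to the preceding stage's
factors = {f"{a}->{b}": stages[b] / stages[a] for a, b in zip(names, names[1:])}
overall = stages["produced"] / stages["budget"]   # overall reconciliation factor
chained = math.prod(factors.values())             # equals the overall factor
```

In this example the overall factor is 0.91, and the per-stage factors show the loss is spread fairly evenly rather than concentrated in one critical step.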
Abstract:
A methodology for rock-excavation structural-reliability analysis that uses Distinct Element Method numerical models is presented. The methodology addresses the shortcoming of conventional numerical models, which supply only single-point results and use fixed input parameters, without considering their statistical errors. The analysis of rock-excavation stability must consider uncertainties arising from geological variability, from the choice of mechanical-behaviour hypothesis, and from the parameters adopted in constructing the numerical model. These uncertainties can be analyzed with simple deterministic models, but a new methodology was developed for numerical models whose results are of several natures. The methodology is based on Monte Carlo simulations and uses principles of Paraconsistent Logic. It is demonstrated in the analysis of the final slope of a large surface mine.
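The paper wraps Distinct Element models in Monte Carlo simulation; as a stand-in for such a model, the sketch below propagates random strength parameters through a closed-form infinite-slope factor of safety and estimates a probability of failure P[FS < 1]. All distributions and slope parameters are illustrative assumptions, and the closed-form model is a deliberate simplification of the DEM runs.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
gamma, z, beta = 26.0, 20.0, np.radians(35.0)   # unit weight kN/m3, depth m, slope angle

# Random strength parameters (illustrative means and spreads)
c = rng.normal(60.0, 10.0, n)                   # cohesion, kPa
phi = np.radians(rng.normal(30.0, 3.0, n))      # friction angle

# Infinite-slope factor of safety for each realization
resisting = c + gamma * z * np.cos(beta) ** 2 * np.tan(phi)
driving = gamma * z * np.sin(beta) * np.cos(beta)
fs = resisting / driving
p_failure = np.mean(fs < 1.0)                   # Monte Carlo probability of failure
```

A deterministic run at the mean parameters would report FS of about 1.07 and call the slope stable; the Monte Carlo view shows a failure probability of roughly a quarter, which is the kind of information the reliability methodology is meant to expose.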
Abstract:
This work describes the development of an engineering approach based upon a toughness scaling methodology incorporating the effects of weld strength mismatch on crack-tip driving forces. The approach adopts a nondimensional Weibull stress, σ̄_w, as the near-tip driving force to correlate cleavage fracture across cracked weld configurations with different mismatch conditions, even though the loading parameter (measured by J) may vary widely due to mismatch and constraint variations. Application of the procedure to predict the failure strain for an overmatched girth weld made of an API X80 pipeline steel demonstrates the effectiveness of the micromechanics approach. Overall, the results lend strong support to the use of a Weibull-stress-based procedure in defect assessments of structural welds.
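Weibull-stress approaches typically convert the driving force into a cleavage failure probability through the standard two-parameter form P_f = 1 - exp(-(σ_w/σ_u)^m); the sketch below evaluates it with an illustrative modulus m and scale σ_u, not values calibrated for X80 welds.

```python
import math

def failure_probability(s_w, s_u=1.0, m=20.0):
    """Two-parameter Weibull-stress failure probability
    P_f = 1 - exp(-(s_w/s_u)^m). Modulus m and scale s_u are
    illustrative, not calibrated values."""
    return 1.0 - math.exp(-((s_w / s_u) ** m))

p_low = failure_probability(0.8)    # Weibull stress well below the scale value
p_high = failure_probability(1.1)   # Weibull stress above the scale value
```

The large modulus makes the probability switch sharply near σ_w = σ_u, which is why correlating configurations by equal Weibull stress (rather than equal J) is effective across mismatch conditions.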
Abstract:
In this paper, 2 different approaches for estimating the directional wave spectrum based on a vessel`s 1st-order motions are discussed, and their predictions are compared to those provided by a wave buoy. The real-scale data were obtained in an extensive monitoring campaign based on an FPSO unit operating at Campos Basin, Brazil. Data included vessel motions, heading and tank loadings. Wave field information was obtained by means of a heave-pitch-roll buoy installed in the vicinity of the unit. `two of the methods most widely used for this kind of analysis are considered, one based on Bayesian statistical inference, the other consisting of a parametrical representation of the wave spectrum. The performance of both methods is compared, and their sensitivity to input parameters is discussed. This analysis complements a set of previous validations based on numerical and towing-tank results and allows for a preliminary evaluation of reliability when applying the methodology at full scale.
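To give a flavor of the parametric route, the sketch below builds a directional spectrum as a Pierson-Moskowitz frequency spectrum times a cosine-power spreading function; in the estimation problem, its few parameters (significant height, peak frequency, mean direction, spreading exponent) would be fitted to the measured motions. The specific spectral form and all numbers are illustrative assumptions, not the paper's fit.

```python
import numpy as np

w = np.linspace(0.2, 2.0, 451)             # wave frequency grid, rad/s
theta = np.linspace(-np.pi, np.pi, 181)    # wave direction grid, rad
dw, dtheta = w[1] - w[0], theta[1] - theta[0]

def pm_spectrum(w, hs, wp):
    """Pierson-Moskowitz frequency spectrum parameterized by significant
    wave height hs and peak frequency wp."""
    return (5.0 / 16.0) * hs**2 * wp**4 / w**5 * np.exp(-1.25 * (wp / w) ** 4)

def spreading(theta, theta0, s=2.0):
    """cos^(2s)((theta - theta0)/2) spreading, normalized on the grid."""
    d = np.cos((theta - theta0) / 2.0) ** (2 * s)
    return d / (d.sum() * dtheta)

# Directional spectrum S(w, theta) for illustrative parameter values
S = np.outer(pm_spectrum(w, hs=2.5, wp=0.6), spreading(theta, 0.0))
m0 = S.sum() * dw * dtheta                 # zeroth moment, ~ hs^2 / 16
```

The recovered zeroth moment matches Hs^2/16, a basic consistency check one would also apply when comparing the fitted spectrum against the buoy measurements.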