992 results for Reliability simulation
Abstract:
Regulatory authorities in many countries, in order to maintain an acceptable balance between appropriate customer service quality and costs, are introducing performance-based regulation. These regulations impose penalties, and in some cases rewards, which introduce a component of financial risk to an electric power utility due to the uncertainty associated with preserving a specific level of system reliability. In Brazil, for instance, one of the reliability indices receiving special attention from the utilities is the Maximum Continuous Interruption Duration per customer (MCID). This paper describes a chronological Monte Carlo simulation approach to evaluate probability distributions of reliability indices, including the MCID, and the corresponding penalties. In order to achieve the desired efficiency, modern computational techniques are used for modeling (UML - Unified Modeling Language) as well as for programming (Object-Oriented Programming). Case studies on a simple distribution network and on real Brazilian distribution systems are presented and discussed. © Copyright KTH 2006.
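As a minimal sketch of the chronological (sequential) Monte Carlo idea, the following Python snippet simulates many years of random failure/repair cycles at a single load point and builds empirical distributions of the annual interruption duration and of an MCID-like maximum continuous interruption duration; the failure rate, repair time, and regulatory limit are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters (not from the paper) for one load point.
LAMBDA = 2.0      # failures per year
MTTR_H = 3.0      # mean repair time, hours
YEARS = 10_000    # simulated years

def simulate_year():
    """Chronological simulation of one year; returns the list of
    interruption durations (hours) seen by the load point."""
    t, durations = 0.0, []
    while True:
        t += rng.exponential(8760.0 / LAMBDA)   # time to next failure
        if t >= 8760.0:
            return durations
        r = rng.exponential(MTTR_H)             # continuous interruption
        durations.append(r)
        t += r

total_dur = np.empty(YEARS)   # annual interruption duration (SAIDI-like)
max_dur = np.empty(YEARS)     # maximum continuous interruption (MCID-like)
for i in range(YEARS):
    d = simulate_year()
    total_dur[i] = sum(d)
    max_dur[i] = max(d) if d else 0.0

MCID_LIMIT_H = 4.0            # hypothetical regulatory limit
print("mean annual interruption duration [h]:", total_dur.mean())
print("P(max continuous interruption > limit):", (max_dur > MCID_LIMIT_H).mean())
```

Penalty distributions follow directly from such per-year samples by applying the tariff rule to each year's indices.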
Abstract:
In this paper, a hybrid heuristic methodology that employs fuzzy logic for solving the AC transmission network expansion planning (AC-TEP) problem is presented. An enhanced constructive heuristic algorithm is proposed, aimed at obtaining high-quality solutions for this complicated problem while considering contingencies. In order to indicate the severity of a contingency, two performance indices, namely the line flow performance index and the voltage performance index, are calculated. An interior point method is applied as a nonlinear programming solver to handle the resulting nonconvex optimization problems, while the objective function includes the costs of the new transmission lines as well as the real power losses. The performance of the proposed method is examined by applying it to the well-known Garver system for different cases. The simulation studies and result analysis demonstrate that the proposed method provides a promising way to find an optimal plan, and the quality of the solutions obtained shows the capability and viability of the proposed algorithm for AC-TEP. © Tübitak.
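The two severity indices have standard textbook forms, sketched below for a post-contingency operating point; the exponent, weights, and limits are assumptions for illustration rather than the paper's settings.

```python
import numpy as np

def line_flow_pi(p_flow, p_max, n=1, w=None):
    """Line flow performance index: sum_l (w_l / 2n) * (P_l / P_l^max)^(2n).
    Larger values indicate a more severe post-contingency loading."""
    p_flow, p_max = np.asarray(p_flow, float), np.asarray(p_max, float)
    w = np.ones_like(p_flow) if w is None else np.asarray(w, float)
    return float(np.sum((w / (2 * n)) * (p_flow / p_max) ** (2 * n)))

def voltage_pi(v, v_ref=1.0, dv_max=0.05, n=1):
    """Voltage performance index from deviations of bus voltages (p.u.)."""
    v = np.asarray(v, float)
    return float(np.sum((np.abs(v - v_ref) / dv_max) ** (2 * n)))

# Post-contingency line flows (MW) against ratings, and bus voltages (p.u.)
print(line_flow_pi([80.0, 120.0, 95.0], [100.0, 100.0, 100.0]))
print(voltage_pi([1.01, 0.93, 1.04]))
```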
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
This paper addresses the probabilistic analysis of the corrosion initiation time in reinforced concrete structures exposed to chloride ion penetration. Structural durability is an important criterion that must be evaluated for every type of structure, especially for structures built in aggressive atmospheres. For reinforced concrete members, the chloride diffusion process is widely used to evaluate durability; by modelling this phenomenon, corrosion of the reinforcement can be better estimated and prevented. Corrosion begins when a threshold chloride concentration is reached at the steel reinforcement bars. Despite the robustness of several models proposed in the literature, deterministic approaches fail to predict the corrosion initiation time accurately due to the inherent randomness of the process. In this regard, durability can be represented more realistically using probabilistic approaches. A probabilistic analysis of chloride ion penetration is presented in this paper. The penetration is simulated using Fick's second law of diffusion, which represents the chloride diffusion process considering time-dependent effects. The probability of failure is calculated using Monte Carlo simulation and the First Order Reliability Method (FORM) with a direct coupling approach. Some examples are considered in order to study these phenomena, and a simplified method is proposed to determine optimal values for the concrete cover.
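A minimal sketch of that probabilistic flow, assuming the classical closed-form solution of Fick's second law for a semi-infinite medium with constant surface concentration, C(x,t) = Cs[1 - erf(x/(2√(Dt)))]; all random-variable parameters below are invented for illustration.

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(1)
N = 200_000

# Random variables (all values are illustrative assumptions):
cover = rng.normal(60.0, 5.0, N)                 # concrete cover, mm
D = rng.lognormal(np.log(5.0e-13), 0.3, N)       # diffusion coeff., m^2/s
Cs = rng.lognormal(np.log(0.60), 0.2, N)         # surface chloride, % wt.
Ccrit = rng.normal(0.10, 0.015, N)               # threshold content, % wt.

t = 50 * 365.25 * 24 * 3600.0                    # design life: 50 years, s
x = cover * 1e-3                                 # mm -> m

# Chloride content at the bar from Fick's second law:
#   C(x, t) = Cs * (1 - erf(x / (2 * sqrt(D * t))))
C_at_bar = Cs * (1.0 - erf(x / (2.0 * np.sqrt(D * t))))

pf = np.mean(C_at_bar >= Ccrit)  # P(corrosion initiated within design life)
print(f"estimated probability of corrosion initiation: {pf:.3f}")
```

Repeating the estimate for a range of cover depths gives the kind of cover-optimization curve the paper's simplified method targets.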
Abstract:
The central aim of this thesis is the application and further development of a hybrid quantum mechanics/molecular mechanics (QM/MM) approach to compute spectroscopic properties of molecules in complex chemical environments from electronic structure theory. In the framework of this thesis, an existing density functional theory implementation of the QM/MM approach is first used to calculate the nuclear magnetic resonance (NMR) solvent shifts of an adenine molecule in aqueous solution. The findings show that aqueous solvation, with its strongly fluctuating hydrogen bond network, leads to specific changes in the NMR resonance lines. Besides the absolute values, the ordering of the NMR lines also changes under the influence of the solvating water molecules; without the QM/MM scheme, a quantum chemical calculation could have led to an incorrect assignment of these lines. The second part of this thesis describes a methodological improvement of the QM/MM method designed for cases in which a covalent chemical bond crosses the QM/MM boundary. The development consists of an automated protocol to optimize a so-called capping potential that saturates the electronic subsystem in the QM region. The optimization scheme tunes the parameters such that the deviations of the electronic orbitals between the regular and the truncated (and "capped") molecule are minimized. This in turn results in a considerable improvement of the structural and spectroscopic parameters when computed with the new optimized capping potential within the QM/MM technique. The scheme is applied and benchmarked on truncated carbon-carbon bonds in a set of small test molecules. It turns out that the optimized capping potentials yield excellent agreement of NMR chemical shifts and protonation energies with respect to the corresponding full molecules. These results are very promising, so that the application to larger biological complexes should significantly improve the reliability of the predicted spectroscopic properties.
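Schematically, the optimization protocol can be viewed as minimizing an orbital-deviation penalty over the capping-potential parameters; in the sketch below, `orbital_deviation` is a hypothetical stand-in for a call to an electronic-structure code, so only the optimization pattern, not the physics, is shown.

```python
import numpy as np
from scipy.optimize import minimize

def orbital_deviation(params):
    """Hypothetical stand-in for an electronic-structure run: it should
    return a scalar penalty measuring how much the orbitals of the capped
    fragment deviate from those of the full molecule. A dummy quadratic
    surface keeps the sketch runnable."""
    target = np.array([1.2, 0.4, 2.0])  # pretend-optimal parameters
    return float(np.sum((np.asarray(params) - target) ** 2))

# Capping-potential parameters (names and start values are assumptions).
x0 = np.array([1.0, 0.5, 1.5])
res = minimize(orbital_deviation, x0, method="Nelder-Mead")
print("optimized capping parameters:", res.x)
```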
Abstract:
Background Many medical exams use 5 options for multiple choice questions (MCQs), although the literature suggests that 3 options are optimal. Previous studies on this topic have often been based on non-medical examinations, so we sought to analyse rarely selected, 'non-functional' distractors (NF-Ds) in high-stakes medical examinations, their detection by item authors, and the psychometric changes resulting from a reduction in the number of options. Methods Based on Swiss Federal MCQ examinations from 2005-2007, the frequency of NF-Ds (selected by <1% or <5% of the candidates) was calculated. Distractors that were chosen the least or second least were identified, and candidates who chose them were allocated to the remaining options using two extreme assumptions about their hypothetical behaviour: if rarely selected distractors were eliminated, candidates could either randomly choose another option or purposively choose the correct answer from which they had originally been distracted. In a second step, 37 experts were asked to mark the least plausible options. The consequences of a reduction from 4 to 3 or 2 distractors, based on item statistics or on the experts' ratings, were modelled with respect to difficulty, discrimination, and reliability. Results About 70% of the 5-option items had at least 1 NF-D selected by <1% of the candidates (97% for NF-Ds selected by <5%). Only a reduction to 2 distractors, combined with the assumption that candidates would switch to the correct answer in the absence of a 'non-functional' distractor, led to relevant differences in reliability and difficulty (and, to a lesser degree, discrimination). The experts' ratings resulted in slightly greater changes compared to the statistical approach. Conclusions Based on item statistics and/or an expert panel's recommendation, a varying number of 3-4 (or in some cases 2) plausible distractors can be chosen without marked deterioration in psychometric characteristics.
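The two extreme reallocation assumptions can be reproduced in a few lines: drop the rarest distractor, reassign its takers either randomly or to the key, and recompute difficulty and KR-20 reliability; the response matrix below is synthetic, not the Swiss Federal data.

```python
import numpy as np

rng = np.random.default_rng(7)

def kr20(scored):
    """KR-20 reliability for a 0/1 scored matrix (persons x items)."""
    k = scored.shape[1]
    p = scored.mean(0)
    return k / (k - 1) * (1 - (p * (1 - p)).sum() / scored.sum(1).var(ddof=1))

# Synthetic responses: 1000 candidates, 30 items, options 0..4 (0 = key).
resp = rng.choice(5, size=(1000, 30), p=[.55, .20, .12, .10, .03])
scored = (resp == 0).astype(int)
print("difficulty:", scored.mean().round(3), " KR-20:", kr20(scored).round(3))

# Eliminate the rarest distractor (option 4) under the two extreme
# assumptions: random switch vs. purposive switch to the correct answer.
hit = resp == 4
variants = {
    "random switch": np.where(hit, rng.choice(4, size=resp.shape), resp),
    "switch to key": np.where(hit, 0, resp),
}
for name, r in variants.items():
    s = (r == 0).astype(int)
    print(name, "-> difficulty:", s.mean().round(3), " KR-20:", kr20(s).round(3))
```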
Abstract:
OBJECTIVES The aim was to study the impact of the defect size of endodontically treated incisors, compared to dental implants, as abutments on the survival of zirconia two-unit anterior cantilever fixed partial dentures (2U-FPDs) during a simulated 10 years of service. MATERIALS AND METHODS Human maxillary central incisors were endodontically treated and divided into three groups (n = 24): I, access cavities rebuilt with a composite core; II, teeth decoronated and restored with composite; and III, as II but supported by fiber posts. In group IV, implants with individual zirconia abutments were used. Specimens were restored with zirconia 2U-FPDs and exposed to two sequences of thermal cycling and mechanical loading (TCML). Statistics: Kaplan-Meier survival analysis; log-rank tests. RESULTS During TCML, two tooth fractures and two debondings with chipping were found in group I. Only chippings occurred in groups II (2×), IV (2×), and III (1×). No significantly different survival was found for the different abutments (p = 0.085) or FPDs (p = 0.526). Load capability differed significantly between groups I (176 N) and III (670 N), and between III and IV (324 N) (p < 0.024). CONCLUSION Within the limitations of an in vitro study, it can be concluded that zirconia-framework 2U-FPDs on decoronated teeth with or without posts showed in vitro reliability comparable to restorations on implants. The results indicated that restorations on teeth with only an access cavity perform worse in survival and linear loading. CLINICAL RELEVANCE From a load-capability point of view, even severe defects do not per se justify the replacement of such a tooth by a dental implant.
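A minimal sketch of the survival analysis used here, with the lifelines package and synthetic cycles-to-failure standing in for the TCML outcomes of two groups; all durations and the censoring horizon are assumptions.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(3)

# Synthetic cycles-to-failure for two groups of n = 24 specimens; the
# scales and the censoring horizon (end of TCML) are assumptions.
END = 1.2e6
t_tooth = np.minimum(rng.exponential(1.0e6, 24), END)
t_implant = np.minimum(rng.exponential(1.5e6, 24), END)
obs_tooth = t_tooth < END        # False = survived the whole simulation
obs_implant = t_implant < END

kmf = KaplanMeierFitter()
kmf.fit(t_tooth, event_observed=obs_tooth, label="tooth abutment")
print("median cycles (tooth):", kmf.median_survival_time_)

res = logrank_test(t_tooth, t_implant,
                   event_observed_A=obs_tooth, event_observed_B=obs_implant)
print("log-rank p-value:", res.p_value)
```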
Abstract:
Direct Simulation Monte Carlo (DSMC) is a powerful numerical method for studying rarefied gas flows such as cometary comae and has been used by several authors over the past decade to study cometary outflow. However, exploring the parameter space of such simulations can be time consuming, since 3D DSMC is computationally highly intensive. For the target of ESA's Rosetta mission, comet 67P/Churyumov-Gerasimenko, we have identified the extent to which modifications of several parameters influence the 3D flow and gas temperature fields, and have attempted to establish the reliability of inferences about the initial conditions from in situ and remote sensing measurements. A large number of DSMC runs have been completed with varying input parameters. In this work, we present the simulation results and draw conclusions on the sensitivity of the solutions to certain inputs. We find that, among the water-outgassing cases, the surface production rate distribution is the most influential variable for the flow field.
Abstract:
In the framework of a global investigation of the Spanish natural analogues of CO2 storage and leakage, four selected sites from the Mazarrón-Gañuelas Tertiary Basin (Murcia, Spain) were studied to compute the diffuse soil CO2 flux using the accumulation chamber method. The Basin is characterized by the presence of a deep, saline, thermal (~47 °C) CO2-rich aquifer intersected by two deep geothermal exploration wells named "El Saladillo" (535 m) and "El Reventón" (710 m). The CO2 flux data were processed by means of a graphical-statistical method, kriging estimation, and sequential Gaussian simulation algorithms. The results lead to the conclusion that the Tertiary marly cap-rock of this CO2-rich aquifer acts as a very effective seal, preventing any CO2 leakage from this natural CO2 storage site, which is therefore an excellent scenario for guaranteeing, by analogy, the safety of a CO2 storage operation.
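As a sketch of the geostatistical step, a bare-bones ordinary kriging estimator with a spherical variogram can be written directly in NumPy; the variogram parameters and measurement points below are illustrative, not the Mazarrón-Gañuelas data.

```python
import numpy as np

def spherical(h, sill=1.0, rang=300.0, nugget=0.05):
    """Spherical variogram (range in metres; parameters are assumptions)."""
    h = np.asarray(h, float)
    g = nugget + sill * (1.5 * h / rang - 0.5 * (h / rang) ** 3)
    return np.where(h == 0.0, 0.0, np.where(h >= rang, nugget + sill, g))

def ordinary_kriging(xy, z, xy0):
    """Kriged estimate of the soil CO2 flux at location xy0."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical(d)
    A[-1, -1] = 0.0                          # Lagrange-multiplier row/col
    b = np.ones(n + 1)
    b[:n] = spherical(np.linalg.norm(xy - xy0, axis=1))
    w = np.linalg.solve(A, b)
    return float(w[:n] @ z)

xy = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [120.0, 90.0]])
flux = np.array([4.1, 5.3, 3.8, 6.0])        # synthetic fluxes, g m^-2 d^-1
print(ordinary_kriging(xy, flux, np.array([50.0, 50.0])))
```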
Abstract:
Electrical protection systems and Automatic Voltage Regulators (AVRs) are essential components of modern power plants. Their installation and setting are performed during commissioning and require extensive experience, since any failure in this process or in the settings may entail risks not only for the generator of the power plant but also for the reliability of the power grid. In this paper, a real-time power plant simulation platform is presented as a tool for improving training and learning on electrical protections and automatic voltage regulators. The commissioning activities that can be practiced are described, and the applicability of this tool for improving the comprehension of this important part of power plant operation is discussed. A commercial AVR and a multifunction protective relay have been tested with satisfactory results.
Abstract:
Pipeline transport is one of the most important means of moving oil derivatives to different locations: it is reliable and inexpensive, with small variable costs. Pipeline scheduling, however, is not a trivial task and demands considerable time from schedulers. Discussed here is a real-application case of a tool that helps schedulers simulate pipeline performance as a means of creating a feasible schedule for a particular time span.
Design and Simulation of Deep Nanometer SRAM Cells under Energy, Mismatch, and Radiation Constraints
Abstract:
Reliability is becoming the main concern in integrated circuits as technology scales below 22 nm. Small imperfections in device manufacturing now result in important random differences in the electrical characteristics of the devices, which must be dealt with during the design phase. The new processes and materials required to fabricate such extremely small devices are giving rise to new effects that ultimately result in increased static power consumption or higher vulnerability to radiation. SRAMs have become the most vulnerable part of electronic systems, not only because they account for more than half of the chip area of today's SoCs and microprocessors, but also because process variations affect them critically: the failure of a single cell makes the whole memory fail. This thesis addresses the different challenges of SRAM design in the smallest technology nodes. In a scenario of increasing variability, issues such as energy consumption, technology-aware design, and radiation hardening are considered. First, given the increasing magnitude of device variability in the smallest nodes, as well as the new sources of variability that appear with the introduction of new devices and the reduction of their dimensions, accurate modeling of this variability is crucial.
We propose to extend the injectors method, which models variability at circuit level by abstracting its physical sources, with two new injectors that better model the sub-threshold slope and drain-induced barrier lowering (DIBL), both of growing importance in FinFET technology. The two proposed injectors increase the accuracy of figures of merit at different abstraction levels of electronic design: transistor, gate, and circuit level. The mean square error in estimating performance and stability metrics of SRAM cells is reduced by a factor of at least 1.5 and up to 7.5, while the yield estimation is improved by orders of magnitude. Low-power design is a major constraint given the fast-growing market of battery-powered mobile devices. It is also relevant because of the high power densities of today's systems, in order to reduce thermal dissipation and its impact on aging. The traditional approach of reducing the supply voltage to lower the energy consumption is challenging in the case of SRAMs, given the increased impact of process variations at low supply voltages. We propose a cell design that makes use of a negative bit-line write assist to overcome write failures as the main supply voltage is lowered. Despite using a second power source for the negative bit-line voltage, the design achieves an energy reduction of up to 20% compared to a conventional cell. A new metric, the hold trip point, is introduced to deal with the new failure modes of cells using a negative bit-line voltage, together with an alternative method to estimate cell speed that requires fewer simulations. With the continuous reduction of device sizes, new mechanisms need to be included to ease the fabrication process and to meet the performance targets of successive nodes. As an example, consider the compressive or tensile strain applied to the fins in FinFET technology, which alters the mobility of the transistors built on those fins. The effects of these mechanisms are highly layout-dependent: transistors are affected by their neighbors, and different types of transistors are affected in different ways. We propose to use complementary SRAM cells with pMOS pass-gates in order to shorten the fins of the nMOS devices and achieve long uncut fins for the pMOS devices when the cell is placed in its array. Once shallow trench isolation (STI) and SiGe stressors are considered, the proposed design improves both kinds of transistors, boosting the performance of complementary SRAM cells by more than 10% for the same failure probability and static power consumption, with no area overhead. Finally, while radiation has been a traditional concern in space electronics, the small currents and voltages of the latest nodes are making them vulnerable to radiation-induced transient noise even at ground level. Even though SOI and FinFET technologies reduce the amount of energy transferred from a striking particle to the circuit, the large process variations of the smallest nodes will affect their radiation hardening capabilities. We demonstrate that process variations can increase the radiation-induced error rate by up to 40% in the 7 nm node compared to the nominal case. This increase is larger than the improvement achieved by radiation-hardened cell designs, suggesting that reducing process variability would bring a greater improvement.
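The Monte Carlo yield-estimation flow that injector-based variability modeling feeds into can be illustrated as follows: sample per-transistor threshold-voltage shifts for a 6T cell and count failures of a stability proxy. The sigma, nominal margin, and sensitivity coefficients are invented for the example; the thesis's actual injectors and SPICE-level models are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Per-transistor threshold-voltage shifts for a 6T cell; the sigma, the
# nominal margin, and the sensitivity vector are invented for the example.
SIGMA_VTH = 0.030                                    # V
SNM_NOMINAL = 0.120                                  # V
SENS = np.array([0.8, -0.8, 0.5, -0.5, 0.3, -0.3])   # toy sensitivities

dvth = rng.normal(0.0, SIGMA_VTH, size=(N, 6))
snm = SNM_NOMINAL + dvth @ SENS     # linearized noise-margin proxy
print(f"estimated cell failure probability: {np.mean(snm < 0.0):.2e}")
```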
Abstract:
Cronbach's alpha is the most widely used method for estimating internal consistency reliability. The procedure has proved very resistant to the passage of time, even though its limitations are well documented and better options exist, such as the omega coefficient or the different versions of the greatest lower bound (glb), with obvious advantages especially for applied research in which the items differ in quality or have skewed distributions. In this paper, using Monte Carlo simulation, the performance of these reliability coefficients under a one-dimensional model is evaluated with respect to skewness and the absence of tau-equivalence. The results show that the omega coefficient is always a better choice than alpha, and that in the presence of skewed items it is preferable to use the omega and glb coefficients, even in small samples.
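For reference, alpha has the closed form α = k/(k−1)·(1 − Σσᵢ²/σₓ²), and omega is (Σλ)²/((Σλ)² + Σθ) from a one-factor model. The sketch below computes alpha exactly and approximates the omega loadings with a single principal-factor step, which dedicated packages estimate more carefully; the data are simulated.

```python
import numpy as np

def cronbach_alpha(X):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    X = np.asarray(X, float)
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(0, ddof=1).sum() / X.sum(1).var(ddof=1))

def omega_onefactor(X):
    """McDonald's omega; loadings approximated by one principal-factor step."""
    S = np.cov(np.asarray(X, float), rowvar=False)
    vals, vecs = np.linalg.eigh(S)
    lam = np.sqrt(vals[-1]) * np.abs(vecs[:, -1])    # approximate loadings
    theta = np.diag(S) - lam ** 2                    # residual variances
    return lam.sum() ** 2 / (lam.sum() ** 2 + theta.sum())

rng = np.random.default_rng(5)
f = rng.normal(size=(500, 1))                        # common factor scores
X = f @ rng.uniform(0.5, 0.9, (1, 6)) + rng.normal(scale=0.6, size=(500, 6))
print("alpha:", round(cronbach_alpha(X), 3), "omega:", round(omega_onefactor(X), 3))
```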
Abstract:
Consider a network of unreliable links, modelling for example a communication network. Estimating the reliability of the network (expressed as the probability that certain nodes in the network are connected) is a computationally difficult task. In this paper we study how the Cross-Entropy method can be used to obtain more efficient network reliability estimation procedures. Three estimation techniques are considered: crude Monte Carlo and the more sophisticated Permutation Monte Carlo and Merge Process. We show that the Cross-Entropy method yields a speed-up over all three techniques.
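The crude Monte Carlo baseline that the Cross-Entropy method speeds up can be sketched directly: sample each link's up/down state from its reliability and check source-sink connectivity; the example graph, link probabilities, and terminals below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

# Example graph: edges as (u, v, p_up); all values are illustrative.
EDGES = [(0, 1, 0.9), (1, 3, 0.9), (0, 2, 0.8), (2, 3, 0.8), (1, 2, 0.95)]
SOURCE, SINK, NODES, N = 0, 3, 4, 100_000

def connected(up):
    """Depth-first search over surviving edges: can SINK be reached?"""
    adj = {i: [] for i in range(NODES)}
    for (u, v, _), ok in zip(EDGES, up):
        if ok:
            adj[u].append(v)
            adj[v].append(u)
    seen, stack = {SOURCE}, [SOURCE]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return SINK in seen

# Sample each link's state from its reliability and average the indicator.
states = rng.random((N, len(EDGES))) < np.array([p for _, _, p in EDGES])
rel = np.mean([connected(row) for row in states])
print(f"crude MC estimate of source-sink reliability: {rel:.4f}")
```

Crude sampling becomes inefficient for highly reliable networks, where failures are rare; that is precisely the regime in which importance-sampling schemes such as the Cross-Entropy method pay off.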
Abstract:
In recent years, many sorghum producers in the more marginal (