804 results for "Reliability constraints"
Abstract:
In this paper, a hybrid heuristic methodology that employs fuzzy logic for solving the AC transmission network expansion planning (AC-TEP) problem is presented. An enhanced constructive heuristic algorithm is proposed to obtain a high-quality solution to this complicated problem while considering contingencies. To indicate the severity of a contingency, two performance indices, namely the line flow performance index and the voltage performance index, are calculated. An interior point method is applied as a nonlinear programming solver to handle the resulting nonconvex optimization problems, while the objective function includes the costs of the new transmission lines as well as the real power losses. The performance of the proposed method is examined by applying it to the well-known Garver system for different cases. The simulation studies and result analysis demonstrate that the proposed method provides a promising way to find an optimal plan, and the quality of the solutions obtained shows the capability and viability of the proposed algorithm for AC-TEP. © TÜBİTAK.
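For reference, contingency-severity indices of this kind are commonly defined along the following lines (a generic textbook form, shown here for illustration; the paper's exact weights and exponents are not reproduced):

$$PI_{flow} = \sum_{l=1}^{N_L} \frac{w_l}{2n} \left( \frac{S_l}{S_l^{\max}} \right)^{2n}, \qquad PI_V = \sum_{i=1}^{N_B} \frac{w_i}{2n} \left( \frac{|V_i| - V_i^{sp}}{\Delta V_i^{\lim}} \right)^{2n},$$

where $S_l$ is the post-contingency flow on line $l$, $S_l^{\max}$ its rating, $V_i^{sp}$ the specified voltage at bus $i$, and $\Delta V_i^{\lim}$ the allowed deviation. Higher index values flag more severe contingencies.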
Abstract:
In this paper, a cross-layer solution for packet size optimization in wireless sensor networks (WSN) is introduced such that the effects of multi-hop routing, the broadcast nature of the physical wireless channel, and the effects of error control techniques are captured. A key result of this paper is that, contrary to conventional wireless networks, in wireless sensor networks longer packets reduce the collision probability. Consequently, an optimization problem is formalized using three different objective functions, i.e., packet throughput, energy consumption, and resource utilization. Furthermore, the effects of end-to-end latency and reliability constraints that may be required by a particular application are investigated. As a result, a generic, cross-layer optimization framework is developed to determine the optimal packet size in WSN. This framework is further extended to determine the optimal packet size in underwater and underground sensor networks. From this framework, the optimal packet sizes under various network parameters are determined.
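As a minimal illustration of the throughput objective, the sketch below brute-forces the payload size that maximizes single-link efficiency under independent bit errors (a simplified model assumed here; the paper's framework additionally captures collisions, multi-hop routing, and error control):

```python
# Minimal sketch: optimal payload size for a single link, assuming
# independent bit errors (BER p) and a fixed header of h bits.
# Throughput efficiency: eta(l) = l / (l + h) * (1 - p) ** (l + h)

def throughput_efficiency(payload_bits: int, header_bits: int, ber: float) -> float:
    packet_bits = payload_bits + header_bits
    return (payload_bits / packet_bits) * (1.0 - ber) ** packet_bits

def optimal_payload(header_bits: int = 64, ber: float = 1e-4,
                    max_payload: int = 4096) -> int:
    # Exhaustive search over candidate payload sizes (small enough to brute-force).
    return max(range(8, max_payload + 1),
               key=lambda l: throughput_efficiency(l, header_bits, ber))

if __name__ == "__main__":
    for ber in (1e-3, 1e-4, 1e-5):
        l_opt = optimal_payload(ber=ber)
        print(f"BER={ber:g}: optimal payload = {l_opt} bits, "
              f"efficiency = {throughput_efficiency(l_opt, 64, ber):.3f}")
```

The tradeoff is visible directly in the objective: longer packets amortize the header overhead but raise the loss probability, which is why the optimum shifts with channel quality.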
Abstract:
Cross-layer techniques represent efficient means to enhance throughput and increase the transmission reliability of wireless communication systems. In this paper, a cross-layer design of aggressive adaptive modulation and coding (A-AMC), truncated automatic repeat request (T-ARQ), and user scheduling is proposed for multiuser multiple-input-multiple-output (MIMO) maximal ratio combining (MRC) systems, where the impacts of feedback delay (FD) and limited feedback (LF) on channel state information (CSI) are also considered. The A-AMC and T-ARQ mechanism selects the appropriate modulation and coding schemes (MCSs) to achieve higher spectral efficiency while satisfying the service requirement on the packet loss rate (PLR), profiting from the possibility of using different MCSs to retransmit a packet. Each packet is destined to a scheduled user, selected so as to exploit multiuser diversity and enhance the system's performance in terms of both transmission efficiency and fairness. The system's performance is evaluated in terms of the average PLR, average spectral efficiency (ASE), outage probability, and average packet delay, which are derived in closed form, considering transmissions over Rayleigh-fading channels. Numerical results and comparisons show that A-AMC combined with T-ARQ yields higher spectral efficiency than the conventional scheme based on adaptive modulation and coding (AMC), while keeping the achieved PLR closer to the system's requirement and reducing delay. Furthermore, the effects of the number of ARQ retransmissions, the numbers of transmit and receive antennas, the normalized FD, and the cardinality of the beamforming weight vector codebook are studied and discussed.
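As a point of reference, conventional AMC mode selection (the baseline that A-AMC improves upon) can be sketched as picking the highest-rate MCS whose predicted PLR meets the target; the exponential PLR curve-fit and all parameters below are illustrative assumptions, not values from the paper:

```python
# Minimal sketch of conventional AMC mode selection: choose the highest-rate
# MCS whose predicted packet loss rate (PLR) at the current SNR satisfies the
# PLR target. The exponential fit plr ~ a*exp(-g*snr) is a common curve-fit
# form; the (rate, a, g) entries below are illustrative, not from the paper.
import math

MCS_TABLE = [
    (1.0, 67.79, 7.78),
    (2.0, 73.83, 3.79),
    (3.0, 58.73, 1.68),
    (4.0, 55.95, 0.76),
]

def select_mcs(snr_linear: float, plr_target: float = 1e-2):
    """Return (rate, plr) of the fastest MCS meeting the PLR target, or None."""
    best = None
    for rate, a, g in MCS_TABLE:
        plr = min(1.0, a * math.exp(-g * snr_linear))
        if plr <= plr_target and (best is None or rate > best[0]):
            best = (rate, plr)
    return best

print(select_mcs(snr_linear=3.0))   # -> the 2.0 bit/s/Hz mode at this SNR
```

The "aggressive" variant in the paper would instead allow a higher-rate (riskier) mode, relying on T-ARQ retransmissions, possibly with a different MCS, to recover the occasional losses.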
Abstract:
This thesis addresses two problems for Sayid Paper Mill (SPM) Ltd, Pakistan. Section one deals with a practical problem arising at SPM: cutting a given set of raw paper rolls of known length and width into product paper rolls of known length (equal to the length of the raw rolls) and width, subject to practical cutting constraints on a single cutting machine, according to the demand orders of all customers. Solving this problem requires determining an optimal cutting schedule that maximizes the overall profitability of the cutting process while satisfying all demands and cutting constraints. The aim of this part of the thesis is to develop a mathematical model that solves this problem. The second section deals with the problem of delivering the final product from the warehouse to different destinations by finding shortest paths. It is an operational routing problem: deciding the daily routes for sending trucks to different destinations to deliver the final product. This industrial problem is difficult and includes aspects such as delivery to a single destination and to multiple destinations with limited resources. The aim of this part of the thesis is to develop a process that helps find shortest paths.
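The first problem is an instance of the one-dimensional cutting-stock problem; a generic profit-maximizing pattern formulation (Gilmore–Gomory style, with notation assumed here for illustration; SPM's machine-specific constraints would be added on top) reads:

$$\max_{x} \sum_{j} c_j x_j \quad \text{s.t.} \quad \sum_{j} a_{ij} x_j \ge d_i \;\; \forall i, \qquad x_j \in \mathbb{Z}_{\ge 0},$$

where $x_j$ counts the uses of cutting pattern $j$, $a_{ij}$ is the number of product rolls of width $w_i$ that pattern $j$ yields, $d_i$ is the demand for width $i$, $c_j$ is the profit of pattern $j$, and every pattern must fit the raw roll: $\sum_i a_{ij} w_i \le W$. The second problem is a classic shortest-path computation, typically handled with an algorithm such as Dijkstra's.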
Abstract:
We present a new expert system: a constraints generator for structure determination of natural products. The constraints that the system furnishes are: the skeleton (reliability: 95%), large substructures (reliability: 98%), and their associated assignments (reliability: 90%). This system is intended for structure determination of carbon-rich compounds (sesqui-, di- and triterpenes, sterols, etc.) for which most structure generators are not very effective. We also present a new algorithm that can avoid the combinatorial explosion during subspectrum/substructure analysis.
Abstract:
This paper reports on a process to validate a revised version of a system for coding classroom discourse in foreign language lessons, a context in which the dual role of language (as content and means of communication) and the speakers' specific pedagogical aims lead to a certain degree of ambiguity in language analysis. The language used by teachers and students has been extensively studied, and a framework of concepts concerning classroom discourse is well established. Models for coding classroom language need, however, to be revised when they are applied to specific research contexts. The application and revision of an initial framework can lead to the development of earlier models and to the re-definition of previously established categories of analysis, which then have to be validated. The procedures followed to validate a coding system are reported here as guidelines for conducting research under similar circumstances. The advantages of using instruments that incorporate two types of data, that is, quantitative measures and qualitative information from raters' metadiscourse, are discussed, and it is suggested that such a procedure can contribute to the process of validation itself, towards attaining reliability of research results, as well as indicating some constraints of the adopted research methodology.
Abstract:
In this paper, the effects of uncertainty and expected costs of failure on optimum structural design are investigated by comparing three distinct formulations of structural optimization problems. Deterministic Design Optimization (DDO) allows one to find the shape or configuration of a structure that is optimum in terms of mechanics, but the formulation grossly neglects parameter uncertainty and its effects on structural safety. Reliability-based Design Optimization (RBDO) has emerged as an alternative to properly model the safety-under-uncertainty part of the problem. With RBDO, one can ensure that a minimum (and measurable) level of safety is achieved by the optimum structure. However, results are dependent on the failure probabilities used as constraints in the analysis. Risk Optimization (RO) increases the scope of the problem by addressing the competing goals of economy and safety. This is accomplished by quantifying the monetary consequences of failure, as well as the costs associated with construction, operation and maintenance. RO yields the optimum topology and the optimum point of balance between economy and safety. Results are compared for some example problems. The broader RO solution is found first, and optimum results are used as constraints in DDO and RBDO. Results show that even when optimum safety coefficients are used as constraints in DDO, the formulation leads to configurations which respect these design constraints and reduce manufacturing costs, but increase total expected costs (including expected costs of failure). When the (optimum) system failure probability is used as a constraint in RBDO, this solution also reduces manufacturing costs at the price of increasing total expected costs. This happens when the costs associated with different failure modes are distinct. Hence, a general equivalence between the formulations cannot be established. Optimum structural design considering expected costs of failure cannot be controlled solely by safety factors nor by failure probability constraints, but will depend on the actual structural configuration. © 2011 Elsevier Ltd. All rights reserved.
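Schematically, the three formulations compared in the paper can be written as follows (notation assumed here for illustration: $d$ are design variables, $C_{man}$ manufacturing cost, $P_f$ failure probability):

$$\text{DDO:}\quad \min_d \; C_{man}(d) \;\; \text{s.t.}\;\; \lambda_i(d) \ge \lambda_i^{\min} \;\;\text{(safety factors)}$$
$$\text{RBDO:}\quad \min_d \; C_{man}(d) \;\; \text{s.t.}\;\; P_f(d) \le P_f^{\text{target}}$$
$$\text{RO:}\quad \min_d \; C_{man}(d) + C_{oper}(d) + C_{maint}(d) + \sum_k P_{f,k}(d)\, C_{f,k}$$

The paper's central observation follows from the last line: when different failure modes $k$ carry different costs $C_{f,k}$, neither a set of safety factors nor a single system failure probability encodes the cost structure, so the RO optimum is in general not recoverable from DDO or RBDO.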
Abstract:
The solution of structural reliability problems by the First Order Reliability Method requires optimization algorithms to find the smallest distance between a limit state function and the origin of standard Gaussian space. The Hasofer-Lind-Rackwitz-Fiessler (HLRF) algorithm, developed specifically for this purpose, has been shown to be efficient but not robust, as it fails to converge for a significant number of problems. On the other hand, recent developments in general (augmented Lagrangian) optimization techniques have not been tested in application to structural reliability problems. In the present article, three new optimization algorithms for structural reliability analysis are presented. One algorithm is based on the HLRF, but uses a new differentiable merit function with Wolfe conditions to select the step length in the line search. It is shown in the article that, under certain assumptions, the proposed algorithm generates a sequence that converges to the local minimizer of the problem. Two new augmented Lagrangian methods are also presented, which use quadratic penalties to solve nonlinear problems with equality constraints. The performance and robustness of the new algorithms are compared to the classic augmented Lagrangian method, to HLRF and to the improved HLRF (iHLRF) algorithms, in the solution of 25 benchmark problems from the literature. The new proposed HLRF algorithm is shown to be more robust than HLRF or iHLRF, and as efficient as the iHLRF algorithm. The two augmented Lagrangian methods proposed herein are shown to be more robust and more efficient than the classical augmented Lagrangian method.
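For context, the classic HLRF recurrence that the article builds on can be sketched as follows (the basic iteration only, under assumed notation; the article's proposed variant adds a differentiable merit function and a Wolfe-condition line search on top):

```python
# Minimal sketch of the classic HLRF iteration in standard Gaussian space:
# find the design point y* that minimizes ||y|| subject to g(y) = 0, and
# report the reliability index beta = ||y*||.
import numpy as np

def hlrf(g, grad_g, y0, tol=1e-8, max_iter=100):
    y = np.asarray(y0, dtype=float)
    for _ in range(max_iter):
        gv, gr = g(y), grad_g(y)
        # HLRF update: project onto the linearized limit-state surface.
        y_new = (gr @ y - gv) / (gr @ gr) * gr
        if np.linalg.norm(y_new - y) < tol:
            y = y_new
            break
        y = y_new
    return y, np.linalg.norm(y)   # design point, reliability index

# Example: linear limit state g(y) = 3 - y1 - y2, whose exact beta is 3/sqrt(2).
g = lambda y: 3.0 - y[0] - y[1]
grad_g = lambda y: np.array([-1.0, -1.0])
y_star, beta = hlrf(g, grad_g, y0=[1.0, 1.0])
print(beta)   # ~2.1213
```

For linear limit states the iteration converges in one step; the robustness problems the article addresses arise for strongly nonlinear $g$, where this bare recurrence can oscillate or diverge.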
Abstract:
Next generation electronic devices have to guarantee high performance while being less power-consuming and highly reliable, for application domains ranging from entertainment to business. In this context, multicore platforms have proven to be the most efficient design choice, but new challenges have to be faced. The ever-increasing miniaturization of components produces unexpected variations in technological parameters as well as wear-out, characterized by soft and hard errors. Even though hardware techniques, which lend themselves to application at design time, have been studied with the objective of mitigating these effects, they are not sufficient; thus, adaptive software techniques are necessary. In this thesis we focus on multicore task allocation strategies that minimize energy consumption while meeting performance constraints. We first devise a technique based on an Integer Linear Programming formulation which provides the optimal solution but cannot be applied online, since the algorithm it requires is too time-demanding; we then propose a sub-optimal two-step technique which can be applied online. We demonstrate the effectiveness of the latter solution through an exhaustive comparison against the optimal solution, state-of-the-art policies, and variability-agnostic task allocations, by running multimedia applications on the virtual prototype of a next generation industrial multicore platform. We also address the problem of performance and lifetime degradation. We first focus on embedded multicore platforms and propose an idleness distribution policy that increases the expected lifetime of the cores by duty-cycling their activity; we then investigate the use of micro thermoelectric coolers in general-purpose multicore processors to control the temperature of the cores at runtime, with the objective of meeting lifetime constraints without performance loss.
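As a toy illustration of the first (offline) formulation, the sketch below exhaustively assigns tasks to cores to minimize energy under a per-core deadline; all task, core, and energy figures are invented for illustration, and the thesis's actual ILP additionally models per-core variability:

```python
# Toy version of the offline optimal step: exhaustively assign tasks to
# cores to minimize total energy subject to a per-core makespan constraint.
# All cycle counts, frequencies, and energy-per-cycle values are illustrative.
from itertools import product

tasks  = {"t0": 4e6, "t1": 3e6, "t2": 5e6, "t3": 2e6}   # workload in cycles
cores  = {"c0": (1.0e9, 0.9), "c1": (0.8e9, 0.6)}        # (frequency Hz, nJ/cycle)
deadline = 8e-3                                          # seconds per core

best = None
for assign in product(cores, repeat=len(tasks)):
    load = {c: 0.0 for c in cores}
    energy = 0.0
    for (t, cyc), c in zip(tasks.items(), assign):
        load[c] += cyc / cores[c][0]          # execution time on core c
        energy  += cyc * cores[c][1] * 1e-9   # energy in joules
    if all(l <= deadline for l in load.values()):
        if best is None or energy < best[0]:
            best = (energy, dict(zip(tasks, assign)))

print(best)   # minimal energy (J) and the corresponding task-to-core mapping
```

The exhaustive search makes the exponential cost of the exact approach obvious (|cores|^|tasks| assignments), which is precisely why the thesis resorts to a sub-optimal two-step technique for online use.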
Abstract:
Observation-based reconstructions of sea surface temperature from relatively stable periods in the past, such as the Last Glacial Maximum, represent an important means of constraining climate sensitivity and evaluating model simulations. The first quantitative global reconstruction of sea surface temperatures during the Last Glacial Maximum was developed by the Climate Long-Range Investigation, Mapping and Prediction (CLIMAP) project in the 1970s and 1980s. Since that time, several shortcomings of that earlier effort have become apparent. Here we present an updated synthesis of sea surface temperatures during the Last Glacial Maximum, rigorously defined as the period between 23 and 19 thousand years before present, from the Multiproxy Approach for the Reconstruction of the Glacial Ocean Surface (MARGO) project. We integrate microfossil and geochemical reconstructions of surface temperatures and include assessments of the reliability of individual records. Our reconstruction reveals the presence of large longitudinal gradients in sea surface temperature in all of the ocean basins, in contrast to the simulations of the Last Glacial Maximum climate available at present.
Design and Simulation of Deep Nanometer SRAM Cells under Energy, Mismatch, and Radiation Constraints
Abstract:
Reliability is becoming the main concern in integrated circuits as technology scales below 22nm. Small imperfections in device manufacturing now result in significant random differences in the devices' electrical characteristics, which must be dealt with during design. The new processes and materials required to fabricate such extremely short devices are giving rise to new effects that ultimately result in increased static power consumption or higher vulnerability to radiation. SRAMs have become the most vulnerable part of electronic systems: not only do they account for more than half of the chip area of today's SoCs and microprocessors, but they are also critically affected by process variations, since the failure of a single cell makes the whole memory fail. This thesis addresses the different challenges that SRAM design faces in the smallest technologies. In a common scenario of increasing variability, issues such as energy consumption, technology-aware design, and radiation hardening are considered.
First, given the increasing magnitude of device variability in the smallest nodes, as well as the new sources of variability introduced by new devices and shortened dimensions, accurate modeling of this variability is crucial. We propose to extend the injectors method, which models variability at circuit level by abstracting its physical sources, with two new injectors that model the sub-threshold slope and drain-induced barrier lowering (DIBL), both of growing importance in FinFET technology. The two new proposed injectors increase the accuracy of figures of merit at different abstraction levels of electronic design: transistor, gate, and circuit. The mean square error in estimating the performance and stability metrics of SRAM cells is reduced by a factor of at least 1.5 and up to 7.5, while the yield estimation is improved by orders of magnitude.
Low-power design is a major constraint given the fast-growing market of battery-powered mobile devices. It is also relevant because of the high power densities of today's systems, in order to reduce thermal dissipation and its impact on aging. The traditional approach of reducing the supply voltage to lower the energy consumption is challenging in the case of SRAMs, given the increased impact of process variations at low supply voltages. We propose a cell design that uses a negative bit-line write-assist to overcome write failures as the main supply voltage is lowered. Despite using a second power source for the negative bit-line voltage, the proposed design achieves an energy reduction of up to 20% compared to a conventional cell. A new metric, the hold trip point, is introduced to deal with the new failure modes of cells using a negative bit-line voltage, together with an alternative method to estimate cell speed that requires fewer simulations.
With the continuous reduction of device sizes, new mechanisms are included to ease the fabrication process and to meet the performance targets of the successive technology nodes. One example is the compressive or tensile strain applied to the fins in FinFET technology, which alters the mobility of the transistors built from those fins. The effects of these mechanisms are strongly layout-dependent: transistors are affected by their neighbors, and different types of transistors are affected in different ways. We propose a complementary SRAM cell that uses pMOS pass-gates, thereby shortening the fins of the nMOS devices and achieving long uncut fins for the pMOS devices, extended across neighboring cells up to the limits of the array. Once shallow trench isolation (STI) and SiGe stressors are considered, the proposed design improves both kinds of transistor, boosting the performance of the complementary SRAM cell by more than 10% for the same failure probability and static power consumption, with no area overhead.
Finally, while radiation has been a traditional concern in space electronics, the small currents and voltages of the latest nodes are making them vulnerable to radiation-induced transient noise, even at ground level. Although SOI and FinFET technologies reduce the amount of energy transferred from a striking particle to the circuit, the large process variations of the smallest nodes will affect their radiation hardening capabilities. We demonstrate that process variations can increase the radiation-induced error rate by up to 40% in the 7nm node compared to the nominal case. This increase is larger than the improvement achieved by radiation-hardened cell designs, suggesting that reducing process variations would bring a greater improvement.
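The flavor of the last result can be illustrated with a small Monte Carlo experiment: under the common empirical model $SER \propto \exp(-Q_{crit}/Q_s)$, a symmetric spread in critical charge raises the mean error rate, because cells on the low-$Q_{crit}$ tail dominate (all numbers below are illustrative, not the thesis's 7nm data):

```python
# Minimal Monte Carlo sketch: a Gaussian spread in critical charge Qcrit
# inflates the mean soft-error rate under SER ~ exp(-Qcrit/Qs), even though
# the spread is symmetric around the nominal value. Numbers are illustrative.
import random, math

def mean_ser(q_nominal=1.0, sigma_rel=0.0, q_s=0.35, n=200_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        q = max(1e-6, rng.gauss(q_nominal, sigma_rel * q_nominal))
        total += math.exp(-q / q_s)
    return total / n

nominal = mean_ser(sigma_rel=0.0)
varied  = mean_ser(sigma_rel=0.15)
print(f"SER increase with 15% Qcrit spread: {varied / nominal:.2f}x")
```

Because the rate is exponential in $Q_{crit}$, the low-charge tail contributes disproportionately, which is the same mechanism behind the variability-driven error-rate increase the thesis reports.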
Abstract:
Generating innovation in Brazil has become a high-level priority in the last two decades. Innovation, according to presidential speeches, is the right way to lead the nation towards the development of competitive technological capabilities through high technology-based products and services. Although the nation has come a long way, Brazil still faces the challenge of overcoming obstacles in the infrastructure conditions for innovation. This paper aims to describe the main conditions for managing innovation in Brazil, offering a quantitative analysis of the main factors that impact innovation. It is documentary research based on data collected from highly reliable international sources, complemented by field research applied to a sample of technology-based firms located in São José dos Campos, Brazil. The results indicate that entrepreneurs face difficulties in developing the managerial competences needed to manage business growth while developing new products and services. The lack of qualified human resources to manage businesses in a technological environment is also an issue.
Abstract:
Rework strategies that involve different checking points as well as rework times can be applied in a reconfigurable manufacturing system (RMS) under certain constraints, and an effective rework strategy can significantly improve the mission reliability of the manufacturing process. The mission reliability of the process is a measure of the production ability of the RMS, which serves as an integrated performance indicator of the production process under specified technical constraints, including time, cost and quality. To quantitatively characterize the mission reliability and basic reliability of an RMS under different rework strategies, a rework model of the RMS was established based on logistic regression. First, the functional relationship between the capability and work load of the manufacturing process was studied by statistically analyzing a large number of historical data obtained from actual machining processes. Second, the output, mission reliability and unit cost for different rework paths were calculated and taken as the decision variables, based on different input quantities and the rework model mentioned above. Third, optimal rework strategies for different input quantities were determined by calculating the weighted decision values and analyzing the advantages and disadvantages of each rework strategy. Finally, a case application was demonstrated to prove the efficiency of the proposed method.
Abstract:
Evaluating the reliability of the manufacturing process is a crucial task in the product development process. Process reliability is a measure of the production ability of a reconfigurable manufacturing system (RMS), which serves as an integrated performance indicator of the production process under specified technical constraints, including time, cost and quality. An integrated framework for evaluating manufacturing process reliability is presented together with the product development process. A mathematical model and an algorithm based on the universal generating function (UGF) are developed for calculating the reliability of the manufacturing process with respect to task intensity and process capacity, which are both independent random variables. The rework strategies of the RMS are analyzed under different task intensities based on process reliability, and the optimization of rework strategies based on process reliability is discussed afterwards.
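A minimal sketch of the UGF-style computation, assuming discrete distributions for process capacity $G$ and task intensity $W$ and taking process reliability as $P(G \ge W)$ (all numbers below are illustrative):

```python
# Minimal sketch: process reliability R = P(capacity G >= task intensity W)
# for independent discrete random variables, computed UGF-style by pairing
# every (capacity, probability) term with every (demand, probability) term.
from itertools import product

def ugf_reliability(capacity_pmf, demand_pmf):
    """capacity_pmf, demand_pmf: dicts mapping value -> probability."""
    return sum(pg * pw
               for (g, pg), (w, pw) in product(capacity_pmf.items(),
                                               demand_pmf.items())
               if g >= w)

capacity = {80: 0.1, 100: 0.6, 120: 0.3}   # parts/shift the RMS can produce
demand   = {90: 0.5, 110: 0.4, 130: 0.1}   # required task intensity
print(f"Process reliability: {ugf_reliability(capacity, demand):.3f}")  # 0.570
```

In UGF terms, each distribution is a polynomial $u(z)=\sum_i p_i z^{g_i}$, and the pairwise sum above is the standard composition operator restricted to the terms where capacity covers the demand.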
Abstract:
This thesis develops and validates the framework of a specialized maintenance decision support system for a discrete part manufacturing facility. Its construction utilizes a modular approach based on the fundamental philosophy of Reliability Centered Maintenance (RCM). The proposed architecture uniquely integrates System Decomposition, System Evaluation, Failure Analysis, Logic Tree Analysis, and Maintenance Planning modules. It presents an ideal solution to the unique maintenance inadequacies of modern discrete part manufacturing systems. Well-established techniques are incorporated as building blocks of the system's modules. These include Failure Mode, Effects and Criticality Analysis (FMECA), Logic Tree Analysis (LTA), Theory of Constraints (TOC), and an Expert System (ES). A Maintenance Information System (MIS) performs the system's support functions. Validation was performed by field testing the system at a Miami-based manufacturing facility. Such a maintenance support system potentially reduces downtime losses and contributes to higher product quality output. Ultimately, improved profitability is the final outcome.