963 results for Reliability level
Abstract:
The rate of sea level change has varied considerably over geological time, from rapid increases at the end of the last ice age (0.25 cm yr-1) to more modest increases over the last 4,000 years (0.04 cm yr-1; Hendry 1993). Due to anthropogenic contributions to climate change, however, the rate of sea level rise is expected to increase to between 0.10 and 0.25 cm yr-1 for many coastal areas (Warrick et al. 1996). Moreover, it has been predicted that over the next 100 years, sea levels along the northeastern coast of North Carolina may increase by an astonishing 0.8 m (0.8 cm yr-1) through a combination of sea-level rise and coastal subsidence (Titus and Richman 2001; Parham et al. 2006). As North Carolina ranks third in the United States in the amount of land at or just above sea level, any additional rise may promote further deterioration of vital coastal wetland systems.
Abstract:
The dissertation studies the general area of complex networked systems that consist of interconnected and active heterogeneous components and usually operate in uncertain environments and with incomplete information. Problems associated with those systems are typically large-scale and computationally intractable, yet they are also very well-structured and have features that can be exploited by appropriate modeling and computational methods. The goal of this thesis is to develop foundational theories and tools to exploit those structures that can lead to computationally-efficient and distributed solutions, and apply them to improve systems operations and architecture.
Specifically, the thesis focuses on two concrete areas. The first is the design of distributed rules to manage distributed energy resources in the power network. The power network is undergoing a fundamental transformation. The future smart grid, especially the distribution system, will be a large-scale network of distributed energy resources (DERs), each introducing random and rapid fluctuations in power supply, demand, voltage, and frequency. These DERs provide a tremendous opportunity for sustainability, efficiency, and power reliability. However, there are daunting technical challenges in managing these DERs and optimizing their operation. The focus of this dissertation is to develop scalable, distributed, and real-time control and optimization to achieve system-wide efficiency, reliability, and robustness for the future power grid. In particular, we will show how to exploit the power network structure to design efficient and distributed markets and algorithms for energy management. We will also show how to connect the algorithms with physical dynamics and existing control mechanisms for real-time control in power networks.
The second focus is to develop distributed optimization rules for general multi-agent engineering systems. A central goal in multiagent systems is to design local control laws for the individual agents to ensure that the emergent global behavior is desirable with respect to the given system-level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent’s control on the least amount of information possible. Our work focuses on achieving this goal using the framework of game theory. In particular, we derived a systematic methodology for designing local agent objective functions that guarantees (i) an equivalence between the resulting game-theoretic equilibria and the system-level design objective and (ii) that the resulting game possesses an inherent structure that can be exploited for distributed learning, e.g., potential games. The control design can then be completed by applying any distributed learning algorithm that guarantees convergence to the game-theoretic equilibrium. One main advantage of this game-theoretic approach is that it provides a hierarchical decomposition between the systemic objective (game design) and the specific local decision rules (distributed learning algorithms). This decomposition provides the system designer with tremendous flexibility to meet the design objectives and constraints inherent in a broad class of multiagent systems. Furthermore, in many settings the resulting controllers will be inherently robust to a host of uncertainties including asynchronous clock rates, delays in information, and component failures.
Abstract:
Storage systems are widely used and play a crucial role in both consumer and industrial products, for example, personal computers, data centers, and embedded systems. However, such systems suffer from issues of cost, restricted lifetime, and reliability with the emergence of new systems and devices, such as distributed storage and flash memory, respectively. Information theory, on the other hand, provides fundamental bounds and solutions to fully utilize resources such as data density, information I/O, and network bandwidth. This thesis bridges these two topics, and proposes to solve challenges in data storage using a variety of coding techniques, so that storage becomes faster, more affordable, and more reliable.
We consider the system level and study the integration of RAID schemes and distributed storage. Erasure-correcting codes are the basis of the ubiquitous RAID schemes for storage systems, where disks correspond to symbols in the code and are located in a (distributed) network. Specifically, RAID schemes are based on MDS (maximum distance separable) array codes that enable optimal storage and efficient encoding and decoding algorithms. With r redundancy symbols, an MDS code can sustain r erasures. For example, consider an MDS code that can correct two erasures. It is clear that when two symbols are erased, one needs to access and transmit all the remaining information to rebuild the erasures. However, an interesting and practical question is: What is the smallest fraction of information that one needs to access and transmit in order to correct a single erasure? In Part I we will show that the lower bound of 1/2 is achievable and that the result can be generalized to codes with an arbitrary number of parities and optimal rebuilding.
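As a point of contrast for the fractional-rebuilding question, a minimal single-parity (RAID-5-style) sketch, with byte-valued symbols and function names of our own choosing rather than the codes from Part I, shows why naive rebuilding must read every surviving symbol:

```python
# Minimal single-parity stripe: the parity disk stores the XOR of the
# data disks, so any single erased disk can be rebuilt -- but only by
# reading all surviving symbols, the baseline that fractional
# rebuilding (accessing only 1/2 of the data) improves on.

def make_stripe(data_disks):
    parity = 0
    for d in data_disks:
        parity ^= d
    return data_disks + [parity]

def rebuild(stripe, erased_index):
    # XOR every surviving symbol; this reads len(stripe) - 1 symbols.
    value = 0
    for i, d in enumerate(stripe):
        if i != erased_index:
            value ^= d
    return value

stripe = make_stripe([0x3A, 0x51, 0x7C])   # 3 data disks + 1 parity
assert rebuild(stripe, 1) == 0x51          # lost data disk recovered
assert rebuild(stripe, 3) == stripe[3]     # lost parity disk recovered
```

With two or more parities, the surprise of Part I is that a single erasure can be repaired while touching only a fraction of this data.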
We consider the device level and study coding and modulation techniques for emerging non-volatile memories such as flash memory. In particular, rank modulation is a novel data representation scheme proposed by Jiang et al. for multi-level flash memory cells, in which a set of n cells stores information in the permutation induced by the different charge levels of the individual cells. It eliminates the need for discrete cell levels, as well as overshoot errors, when programming cells. In order to decrease the decoding complexity, we propose two variations of this scheme in Part II: bounded rank modulation, where only small sliding windows of cells are sorted to generate permutations, and partial rank modulation, where only a part of the n cells is used to represent data. We study limits on the capacity of bounded rank modulation and propose encoding and decoding algorithms. We show that overlaps between windows increase capacity. We present Gray codes spanning all possible partial-rank states and using only ``push-to-the-top'' operations. These Gray codes turn out to solve an open combinatorial problem called the universal cycle problem: finding a sequence of integers generating all possible partial permutations.
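The core idea of rank modulation can be sketched in a few lines (an illustrative toy with hypothetical function names, not the actual encoders from Part II): the stored symbol is the ranking induced by the analog charge levels, and programming uses only ``push-to-the-top'' operations:

```python
# Rank-modulation sketch: n cells store a permutation induced by their
# relative charge levels; "push-to-the-top" reprograms one cell above
# all others, so no cell ever targets a fixed discrete level (which is
# what eliminates overshoot errors).

def induced_permutation(levels):
    # Permutation of cell indices, highest-charged cell first.
    return sorted(range(len(levels)), key=lambda i: -levels[i])

def push_to_the_top(levels, cell):
    # Raise one cell's charge just above the current maximum.
    levels = list(levels)
    levels[cell] = max(levels) + 1
    return levels

levels = [2.0, 5.0, 3.5]                      # analog charge levels
assert induced_permutation(levels) == [1, 2, 0]
levels = push_to_the_top(levels, 0)           # reprogram cell 0
assert induced_permutation(levels) == [0, 1, 2]
```

Bounded rank modulation would apply `induced_permutation` only within small sliding windows of cells; partial rank modulation would read the ranks of only a subset of the cells.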
Abstract:
The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.
First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
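For a symmetric allocation, the success probability has a simple closed form under an illustrative model (assuming an ideal MDS code, so recovery succeeds exactly when the accessed storage reaches the file size; the function name is ours, not from the thesis):

```python
import math

# Success probability of a symmetric allocation: a budget T (in units
# of the file size) is spread equally over m of the n storage nodes,
# and the data collector accesses a uniformly random r-subset of the
# nodes.  Assuming an ideal MDS code, recovery succeeds iff the
# accessed storage totals at least one file, i.e. the collector hits
# at least ceil(m / T) nonempty nodes (a hypergeometric tail).
def recovery_probability(n, r, m, T):
    need = math.ceil(m / T)              # nonempty nodes required
    total = math.comb(n, r)
    hits = sum(math.comb(m, k) * math.comb(n - m, r - k)
               for k in range(need, min(m, r) + 1)
               if 0 <= r - k <= n - m)
    return hits / total

# Large budget: spreading maximally beats concentrating.
assert recovery_probability(n=10, r=4, m=10, T=5) > \
       recovery_probability(n=10, r=4, m=2, T=5)
# Small budget: concentrating minimally beats spreading.
assert recovery_probability(n=10, r=4, m=1, T=1) > \
       recovery_probability(n=10, r=4, m=10, T=1)
```

The two contrasts at the end mirror the simple heuristic: spread a large budget over all nodes, and a small budget over only a few.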
Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.
Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models containing a limited number of erasures per coding window, per sliding window, and containing erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.
Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.
Abstract:
It is shown that in a closed equispaced three-level ladder system, by controlling the relative phase of two applied coherent fields, the conversion from absorption with inversion to lasing without inversion (LWI) can be realized; a large index of refraction with zero absorption can be obtained; and a considerable increase in the spectral region and value of the LWI gain can be achieved. Our study also reveals that incoherent pumping produces a remarkable effect on the phase-dependent properties of the system. Modifying the value of the incoherent pumping can change the behaviour of the system from absorption to amplification and significantly enhance the LWI gain. If the incoherent pumping is absent, no gain can be obtained for any value of the relative phase. (c) 2007 Elsevier GmbH. All rights reserved.
Abstract:
A diagnostic survey was conducted among the fishermen of six selected villages in Doko Local Government Area of Niger State. One hundred and fifty fishermen were randomly selected and interviewed to find out whether or not they had an interest in commercial fish farming aimed at improving their livelihood. The dwindling fish catches in the natural flood-plain ponds and ox-bow lakes continue to have a serious negative effect on the socio-economic well-being of the village communities in question. An interruption of the regular annual flooding of the plains has resulted in very low natural fish recruitment. Data analysis using simple descriptive statistics revealed that the land tenure system, educational status, inadequate infrastructural facilities, religious taboos, and existing fish species, among others, were favourable indices for commercial fish farming. However, serious conflicts among the fishermen concerning the ownership status of these natural fish ponds were found to be major obstacles to commercial fish farming, even though traditional ownership of the ponds is vested in individuals and village communities. Extensive and small-scale fish farming in the ponds and ox-bow lakes with improved management practices is considered a profitable venture. Although fish seed supply and extension efforts are still inadequate, the fishermen have indicated a willingness to adopt commercial fish farming in the ox-bow lakes and flood plains in order to restore abundant fish production, thereby providing for their food security and increasing their daily income.
Abstract:
The two most important digital-system design goals today are to reduce power consumption and to increase reliability. Reductions in power consumption improve battery life in the mobile space and reductions in energy lower operating costs in the datacenter. Increased robustness and reliability shorten down time, improve yield, and are invaluable in the context of safety-critical systems. While optimizing towards these two goals is important at all design levels, optimizations at the circuit level have the furthest reaching effects; they apply to all digital systems. This dissertation presents a study of robust minimum-energy digital circuit design and analysis. It introduces new device models, metrics, and methods of calculation—all necessary first steps towards building better systems—and demonstrates how to apply these techniques. It analyzes a fabricated chip (a full-custom QDI microcontroller designed at Caltech and taped-out in 40-nm silicon) by calculating the minimum energy operating point and quantifying the chip’s robustness in the face of both timing and functional failures.
Abstract:
Electric and magnetic responses of the medium to the probe field are analysed in a four-level loop atomic system by taking into account the relative phase of the applied fields. An interesting phenomenon is found: under suitable conditions, a change of the refractive index from positive to negative can occur by modulating the relative phase of the applied fields. Then the medium can be switched from a positive index material to a negative index material in our scheme. In addition, a negative index material can be realized in different frequency regions by adjusting the relative phase. It may give us a convenient way to obtain the desired material with positive or negative index.
Abstract:
Sideband manipulation of population inversion in a three-level Λ atomic configuration is investigated theoretically. Compared with the case of a nearly monochromatic field, a population inversion between an excited state and a ground state is found over a wide sideband intensity range when the frequency difference between the three components is increased. Furthermore, the population inversion can be controlled by the sum of the relative phases of the sideband components of the trichromatic pump field with respect to the phase of the central component. Changing the sum phase from 0 to pi, the population inversion between the excited state and the ground state can increase within nearly half of the sideband intensity range. At the same time, the sideband intensity range over which the system exhibits the inversion rho(00) > rho(11) also becomes evidently wider.
Abstract:
The spatiotemporal evolutions of ultrashort pulses in two dimensions are investigated numerically by solving the coupled Maxwell-Bloch equations without invoking the slowly varying envelope approximation and rotating-wave approximation. For an on-axis 2n pi sech pulse, local delay makes the temporal split 2 pi sech pulses crescent-shaped in the transverse distribution. Due to the transverse effect, the temporal split 2 pi sech pulses become unstable and experience reshaping during the propagation process. Then, interference occurs between the successive crescent-shaped pulses and multiple self-focusing can form.
Abstract:
Trichromatic manipulation of Kerr nonlinearity in a three-level Λ atomic configuration is investigated theoretically. It is shown that for a weak monochromatic probe field, enhanced Kerr nonlinearity can be achieved in multiple separate transparency windows due to the interference effect of multiple two-photon Raman channels. Furthermore, the Kerr nonlinearity can be controlled by the sum of the relative phases of the sideband components of the trichromatic pump field with respect to the central component.
Abstract:
We propose an atom localization scheme for a four-level alkaline earth atom via a classical standing-wave field, and give analytical expressions for the localization peak positions as well as the widths versus the parameters of the optical fields. We show that the probability of finding the atom at a particular position can be increased from 1/4 to 1/3 or 1/2 by adjusting the detuning of the probe field and the Rabi frequencies of the optical fields. Furthermore, the localization precision can be dramatically enhanced by increasing the intensity of the standing-wave field or decreasing the detuning of the probe field. The analytical results are in good agreement with the numerical solutions.