970 results for Upper bound method


Relevance:

90.00%

Publisher:

Abstract:

Flash memory is a leading storage medium with excellent features such as random access and high storage density. However, it also faces significant reliability and endurance challenges. In flash memory, the charge level in the cells can be easily increased, but removing charge requires an expensive erasure operation. In this thesis we study rewriting schemes that enable the data stored in a set of cells to be rewritten by only increasing the charge level in the cells. We consider two types of modulation scheme: a conventional modulation based on the absolute levels of the cells, and a recently proposed scheme based on the relative cell levels, called rank modulation. The contributions of this thesis to the study of rewriting schemes for rank modulation include the following: we

•propose a new method of rewriting in rank modulation, beyond the previously proposed method of “push-to-the-top”;

•study the limits of rewriting with the newly proposed method, and derive a tight upper bound of 1 bit per cell;

•extend the rank-modulation scheme to support rankings with repetitions, in order to improve the storage density;

•derive a tight upper bound of 2 bits per cell for rewriting in rank modulation with repetitions;

•construct an efficient rewriting scheme that asymptotically approaches the upper bound of 2 bits per cell.

The next part of this thesis studies rewriting schemes for conventional absolute-level modulation. The model considered is called “write-once memory” (WOM). We focus on WOM schemes that achieve the capacity of the model. In recent years several capacity-achieving WOM schemes have been proposed, based on polar codes and randomness extractors. The contributions of this thesis to the study of WOM schemes include the following: we

•propose a new capacity-achieving WOM scheme based on sparse-graph codes, and show its attractive properties for practical implementation;

•improve the design of polar WOM schemes to remove the reliance on shared randomness and include an error-correction capability.

The last part of the thesis studies the local rank-modulation (LRM) scheme, in which a sliding window going over a sequence of real-valued variables induces a sequence of permutations. The LRM scheme is used to simulate a single conventional multi-level flash cell. The simulated cell is realized by a Gray code traversing all the relative-value states where, physically, the transition between two adjacent states in the Gray code is achieved by using a single “push-to-the-top” operation. The main results of the last part of the thesis are two constructions of Gray codes with asymptotically-optimal rate.
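
The "push-to-the-top" primitive used throughout is easy to state: a rewrite raises the charge of one chosen cell above all others, so only the relative ranking changes and no erasure is needed. Below is a minimal sketch of the idea; the list-of-cells representation and the function name are my own illustration, not the thesis's construction.

```python
def push_to_top(ranking, cell):
    """Rank-modulation rewrite primitive: raise `cell` above all others.

    `ranking` lists cell indices from highest to lowest charge.  Pushing a
    cell to the top moves it to the front and shifts the rest down, which
    is achievable by only increasing that cell's charge."""
    return [cell] + [c for c in ranking if c != cell]

state = [2, 0, 3, 1]           # cell 2 currently holds the highest charge
state = push_to_top(state, 3)  # rewrite: cell 3 is now the highest
print(state)                   # [3, 2, 0, 1]
```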

Relevance:

90.00%

Publisher:

Abstract:

We propose an algorithm for solving optimization problems defined on a subset of the cone of symmetric positive semidefinite matrices. This algorithm relies on the factorization X = YY^T, where the number of columns of Y fixes an upper bound on the rank of the positive semidefinite matrix X. It is thus very effective for solving problems that have a low-rank solution. The factorization X = YY^T leads to a reformulation of the original problem as an optimization on a particular quotient manifold. The present paper discusses the geometry of that manifold and derives a second-order optimization method with guaranteed quadratic convergence. It furthermore provides some conditions on the rank of the factorization to ensure equivalence with the original problem. In contrast to existing methods, the proposed algorithm converges monotonically to the sought solution. Its numerical efficiency is evaluated on two applications: the maximal cut of a graph and the problem of sparse principal component analysis. © 2010 Society for Industrial and Applied Mathematics.
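
To make the role of the factorization concrete, the sketch below applies it to one of the two test applications, the max-cut SDP relaxation, using plain projected-gradient ascent with unit-norm rows of Y (so that diag(YY^T) = 1). This is only a first-order illustration under standard assumptions, not the paper's second-order quotient-manifold algorithm.

```python
import numpy as np

def maxcut_low_rank(W, rank, steps=500, lr=0.1, seed=0):
    """Sketch of optimising over X = Y @ Y.T for the max-cut relaxation:
    maximise (1/4) * sum_ij W_ij * (1 - y_i . y_j) subject to unit-norm
    rows of Y (diag(X) = 1).  Plain projected-gradient ascent, not the
    paper's second-order quotient-manifold method."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    Y = rng.standard_normal((n, rank))
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)
    for _ in range(steps):
        grad = -0.5 * W @ Y                                # gradient of the objective w.r.t. Y
        Y += lr * grad                                     # ascent step
        Y /= np.linalg.norm(Y, axis=1, keepdims=True)      # keep rows on the unit sphere
    X = Y @ Y.T
    value = 0.25 * np.sum(W * (1.0 - X))
    return X, value

# Hypothetical 4-node cycle graph (symmetric weights, zero diagonal)
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(maxcut_low_rank(W, rank=2)[1])   # typically close to the relaxation optimum of 4
```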

Relevance:

90.00%

Publisher:

Abstract:

An energy method for a linear-elastic perfectly plastic material utilising the von Mises yield criterion with associated flow, developed in 2013 by McMahon and co-workers, is used to compare the ellipsoidal cavity-expansion mechanism from the same work with the displacement fields of earlier work by Levin, in 1995, and by Osman and Bolton, in 2005, which utilise the Hill and Prandtl mechanisms respectively. The energy method was also applied to a mechanism produced by performing a linear-elastic finite-element analysis in Abaqus. At small values of settlement and soil rigidity the elastic mechanism provides the lowest upper-bound solution, and matches well with finite-element analysis results published in the literature. At typical footing working loads and settlements the cavity-expansion mechanism produces a better solution than the displacement fields of the Hill and Prandtl mechanisms, and also matches well with the published finite-element analysis results in this range. Beyond these loads, at greater footing settlements or soil rigidity, the Prandtl mechanism is shown to be the most appropriate.

Relevance:

90.00%

Publisher:

Abstract:

A method is proposed for on-line reconfiguration of the terminal constraint used to provide theoretical nominal stability guarantees in linear model predictive control (MPC). By parameterising the terminal constraint, its complete reconstruction is avoided when input constraints are modified to accommodate faults. To enlarge the region of feasibility of the terminal control law for a certain class of input faults with redundantly actuated plants, the linear terminal controller is defined in terms of virtual commands. A suitable terminal cost weighting for the reconfigurable MPC is obtained by means of an upper bound on the cost for all feasible realisations of the virtual commands from the terminal controller. Conditions are proposed that guarantee feasibility recovery for a defined subset of faults. The proposed method is demonstrated by means of a numerical example. © 2013 Elsevier B.V. All rights reserved.

Relevance:

90.00%

Publisher:

Abstract:

Qubit measurement by mesoscopic charge detectors has received great interest in the mesoscopic-transport and solid-state quantum computation communities, yet some controversial issues remain unresolved. In this work, we revisit the continuous weak measurement of a solid-state qubit by single-electron transistors (SETs) in the nonlinear-response regime. For two SET models typically used in the literature, we find that the signal-to-noise ratio can violate the universal upper bound of 4 that is imposed quantum mechanically on linear-response detectors. This different result can be understood through the cross correlation of the detector currents, viewing the two junctions of the single SET as two detectors. Possible limitations of the potential-scattering approach to this result are also discussed.

Relevance:

90.00%

Publisher:

Abstract:

The theory of limit analysis comprises the upper bound theorem and the lower bound theorem; applying limit analysis to slope stability means bracketing the true solution between an upper limit and a lower limit. The upper bound theorem is the more widely used of the two and is therefore often applied in slope engineering. Although the upper bound approach avoids uncertain constitutive relations and complex stress analyses, it still yields rigorous results.

Assuming a circular slip surface, two kinematically admissible velocity fields, for the perpendicular slice method and the radial slice method, are established according to the upper bound theorem of limit analysis. By means of the virtual work rate equation and the strength reduction method, the upper-bound solution for a homogeneous soil slope is obtained. A log-spiral rotational failure mechanism for a homogeneous slope is discussed for two conditions, in which the shear crack passes through the toe or below the toe. The dissertation also establishes a rotational failure mechanism that combines different logarithmic spiral arcs. Furthermore, the formula for the upper bound solution of the stability of an inhomogeneous soil slope is derived using the rigid-element upper bound approach. By calculating the external work rates contributed by soil nails, anti-slide piles, geogrids and retaining walls, the upper bound limit analysis method yields the safety factor of a soil-nailed slope, the slip resistance of an anti-slide pile, the critical height of a reinforced soil slope and the active earth pressure on a retaining wall.

Taking an accumulated-body slope as the subject of investigation, and building on the limit analysis method for the slope safety factor, a kinematically admissible velocity field of the perpendicular slice method is proposed for slopes with a broken slip surface. By calculating the energy dissipation rate on the broken slip surface and the vertical velocity discontinuities, together with the work rates of self-weight and external load, the upper bound solution for a slope with a broken slip surface is deduced. As a case study, the stability of the Sanmashan landslide in the Three Gorges reservoir area is analysed.

Based on the theory of limit analysis, the upper bound solution for a rock slope with a planar failure surface is also obtained. Using the virtual work rate equation, the energy dissipation caused by the dislocation of thin layers and strata can be calculated, and the safety factor formulas of the upper bound approach follow. Finally, a new computational model for the stability analysis of an anchored rock slope is presented that takes into account the supporting effect of rock bolts and the action of seismic force and fissure water pressure. With this model, the external work rates done by self-weight, seismic force, fissure water pressure and anchorage force, as well as the internal energy dissipation on the slip surface and structural planes, can all be calculated. From the virtual work rate equation in the limit state, the formula for the safety factor by upper bound limit analysis is deduced.
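
As an illustration of how the virtual-work balance combines with strength reduction, the sketch below computes an upper-bound safety factor for the simplest possible mechanism, a rigid planar wedge sliding through the toe, rather than the log-spiral or slice-based mechanisms developed in the dissertation; the slope geometry and soil parameters are hypothetical.

```python
import numpy as np

def excess_work_rate(F, theta, H, beta, gamma, c, phi):
    """Upper-bound energy balance for a rigid planar wedge through the toe.

    Returns the external work rate of gravity minus the internal dissipation
    on the slip plane (per unit thickness and unit virtual velocity), with
    strength reduced by the trial factor F (c -> c/F, tan(phi) -> tan(phi)/F).
    For associated flow the velocity jump is inclined at phi to the plane."""
    c_d = c / F
    phi_d = np.arctan(np.tan(phi) / F)
    L = H / np.sin(theta)                                                 # slip-plane length
    W = 0.5 * gamma * H**2 * (1.0 / np.tan(theta) - 1.0 / np.tan(beta))   # wedge weight
    return W * np.sin(theta - phi_d) - c_d * L * np.cos(phi_d)

def upper_bound_safety_factor(H, beta, gamma, c, phi, tol=1e-4):
    """Bisect on F until the most critical wedge angle just balances the
    external work rate against the internal dissipation (limit state)."""
    thetas = np.linspace(phi + 0.01, beta - 0.01, 400)
    worst = lambda F: max(excess_work_rate(F, t, H, beta, gamma, c, phi) for t in thetas)
    lo, hi = 0.1, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if worst(mid) > 0.0:   # some mechanism fails: strength was reduced too far
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical homogeneous slope: 10 m high, 60 degree face,
# unit weight 18 kN/m^3, c = 20 kPa, phi = 20 degrees.
print(upper_bound_safety_factor(H=10.0, beta=np.radians(60.0), gamma=18.0,
                                c=20.0, phi=np.radians(20.0)))
```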

Relevance:

90.00%

Publisher:

Abstract:

Evaluating the mechanical properties of rock masses is the basis of rock engineering design and construction, and it has a great influence on the safety and cost of rock projects. The need for such evaluation is an inevitable consequence of new engineering activities in rock, including high-rise buildings, super-long bridges, complex underground installations, hydraulic projects and so on. Many engineering accidents have occurred during construction and have caused great damage; according to investigation, many failures were due to the choice of improper mechanical properties. The inability to supply proper properties has become one of the big problems for theoretical analysis and numerical simulation, so selecting the properties reasonably and effectively is very significant for the planning, design and construction of rock engineering works. A combined approach based on site investigation, theoretical analysis, model tests, numerical tests and back analysis with artificial neural networks is used to determine and optimise the mechanical properties for engineering design. The following outcomes are obtained:

(1) Mapping of the rock mass structure. Detailed geological investigation is the core of fine-structure description. Based on statistical windows, geological sketching and digital photography, a new method for in-situ mapping of the rock mass fine structure is developed. It has already been put into practice and received good comments at the Baihetan Hydropower Station.

(2) Theoretical analysis of rock masses containing intermittent joints. The shear strength mechanisms of the joint and the rock bridge are analysed, and the multiple failure modes under different stress conditions are summarised and supplemented. By introducing a deformation compatibility equation in the normal direction, the direct shear strength and compression-shear strength formulations for coplanar intermittent joints, as well as the compression-shear strength formulation for stepped intermittent joints, are derived. To make the derived formulations convenient to apply in real projects, a relationship between them and the Mohr-Coulomb hypothesis is established.

(3) Model tests of rock masses containing intermittent joints. Model tests are used to study the mechanical effect of joints on rock masses. The failure modes of rock masses containing intermittent joints are summarised from the tests: six typical failure modes are found, with brittle failure dominant. The evolution of shear stress, shear displacement, normal stress and normal displacement is monitored with a rigid servo testing machine, and the deformation and failure behaviour during loading is analysed. The tests show that the failure modes depend strongly on the joint distribution, connectivity and stress state. Comparative analysis of the complete stress-strain curves reveals different stages of failure development in intact rock, through-jointed rock mass and intermittently jointed rock mass. Intact rock shows four typical stages: shear contraction, linear elasticity, failure and residual strength. The through-jointed rock mass shows three stages: linear elasticity, a transition zone and sliding failure. Correspondingly, five typical stages are found in the intermittently jointed rock mass: linear elasticity, sliding of joints, stable crack growth, joint coalescence failure and residual strength. Strength analysis shows that the failure envelopes of the intact rock and the through-jointed rock mass form the upper and lower bounds respectively, and the strength of the intermittently jointed rock mass can be evaluated by narrowing the bandwidth of the failure envelope with geo-mechanical analysis.

(4) Numerical tests of the rock mass. Two methods are developed and introduced: the distinct element method (DEM) based on in-situ geological mapping, and realistic failure process analysis (RFPA) based on high-definition digital imaging. The operation process and analysis results are demonstrated in detail through studies of rock mass parameters based on numerical tests at the Jinping First Stage Hydropower Station and the Baihetan Hydropower Station. The advantages and disadvantages of the two methods are compared and their respective fields of application are identified.

(5) Intelligent evaluation based on artificial neural networks (ANN). The characteristics of both ANN and rock mass parameter evaluation are discussed and summarised; the investigation indicates that ANN has a promising future in the field of rock mass parameter evaluation. The intelligent evaluation of mechanical parameters at the Jinping First Stage Hydropower Station is taken as an example to demonstrate the analysis process, and problems in five aspects, namely sample selection, network design, initial value selection, learning rate and expected error, are discussed in detail.

Relevance:

90.00%

Publisher:

Abstract:

We consider the problem of matching model and sensory data features in the presence of geometric uncertainty, for the purpose of object localization and identification. The problem is to construct sets of model feature and sensory data feature pairs that are geometrically consistent given that there is uncertainty in the geometry of the sensory data features. If there is no geometric uncertainty, polynomial-time algorithms are possible for feature matching, yet these approaches can fail when there is uncertainty in the geometry of data features. Existing matching and recognition techniques which account for the geometric uncertainty in features either cannot guarantee finding a correct solution, or can construct geometrically consistent sets of feature pairs yet have worst-case exponential complexity in the number of features. The major new contribution of this work is to demonstrate a polynomial-time algorithm for constructing sets of geometrically consistent feature pairs given uncertainty in the geometry of the data features. We show that under a certain model of geometric uncertainty the feature matching problem in the presence of uncertainty is of polynomial complexity. This has important theoretical implications by demonstrating an upper bound on the complexity of the matching problem, and by offering insight into the nature of the matching problem itself. These insights prove useful in the solution to the matching problem in higher-dimensional cases as well, such as matching three-dimensional models to either two- or three-dimensional sensory data. The approach is based on an analysis of the space of feasible transformation parameters. This paper outlines the mathematical basis for the method, and describes the implementation of an algorithm for the procedure. Experiments demonstrating the method are reported.

Relevance:

90.00%

Publisher:

Abstract:

This paper describes a method for limiting vibration in flexible systems by shaping the system inputs. Unlike most previous attempts at input shaping, this method does not require an extensive system model or lengthy numerical computation; only knowledge of the system natural frequency and damping ratio is required. The effectiveness of this method when there are errors in the system model is explored and quantified. An algorithm is presented which, given an upper bound on acceptable residual vibration amplitude, determines a shaping strategy that is insensitive to errors in the estimated natural frequency. A procedure for shaping inputs to systems with input constraints is outlined. The shaping method is evaluated by dynamic simulations and hardware experiments.
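
A concrete instance of such a shaper is the classic two-impulse zero-vibration (ZV) sequence, which needs exactly the two quantities named above, the natural frequency and the damping ratio. The sketch below builds it and convolves it with a sampled command; the sampling step and the example mode are hypothetical, and the frequency-insensitive strategy described in the paper is a refinement beyond this basic sequence.

```python
import numpy as np

def zv_shaper(wn, zeta):
    """Two-impulse Zero Vibration (ZV) shaper from the estimated natural
    frequency wn [rad/s] and damping ratio zeta (0 <= zeta < 1)."""
    wd = wn * np.sqrt(1.0 - zeta**2)              # damped natural frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
    amplitudes = np.array([1.0, K]) / (1.0 + K)   # sum to 1: setpoint unchanged
    times = np.array([0.0, np.pi / wd])           # second impulse half a damped period later
    return times, amplitudes

def shape_command(u, dt, wn, zeta):
    """Convolve a sampled command u with the ZV impulse sequence."""
    times, amps = zv_shaper(wn, zeta)
    kernel = np.zeros(int(round(times[-1] / dt)) + 1)
    for t, a in zip(times, amps):
        kernel[int(round(t / dt))] += a
    return np.convolve(u, kernel)[: len(u)]

# Hypothetical example: shape a unit step for a 2 Hz mode with 5% damping
dt = 0.001
step = np.ones(2000)
shaped = shape_command(step, dt, wn=2 * np.pi * 2.0, zeta=0.05)
```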

Relevance:

90.00%

Publisher:

Abstract:

In many real-world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of optimizing systematically and simultaneously over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions, and each decision has a set of alternatives. Each alternative depends on the state of the world and is evaluated with respect to a number of criteria. In this thesis, we consider decision-making problems in two scenarios. In the first scenario, the current state of the world, under which the decisions are to be made, is known in advance. In the second scenario, the current state of the world is unknown at the time of making decisions.

For decision making under certainty, we consider the framework of multi-objective constraint optimization and focus on extending the algorithms that solve these models to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-buckets algorithm for generating the upper bound at each node of the search tree (in the context of maximizing values of objectives). Since the size of the guiding upper-bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper-bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another, and we use such preferences to infer other preferences. The induced preference relation is then used to eliminate dominated utility vectors during the computation. For testing dominance between multi-objective utility vectors, we present three different approaches. The first is based on linear programming; the second uses a distance-based algorithm (which relies on a measure of the distance between a point and a convex cone); the third uses matrix multiplication, which results in much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference inference technique, can also be given an alternative semantics based on the well-known Multi-Attribute Utility Theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before.

For decision-making problems under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ϵ-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user trade-offs, which also greatly improves the efficiency.
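
In the simplest case, where no trade-offs have been elicited and the ordering is plain Pareto dominance, eliminating dominated utility vectors reduces to the filter sketched below (maximisation assumed; the data are hypothetical). The linear-programming, cone-distance and matrix-multiplication tests mentioned above generalise this check to the preference relation induced by the trade-offs.

```python
from typing import List, Tuple

def dominates(u: Tuple[float, ...], v: Tuple[float, ...]) -> bool:
    """Pareto dominance for maximisation: u dominates v if it is at least as
    good in every objective and strictly better in at least one."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_undominated(vectors: List[Tuple[float, ...]]) -> List[Tuple[float, ...]]:
    """Keep only the utility vectors not Pareto-dominated by any other."""
    return [u for u in vectors
            if not any(dominates(v, u) for v in vectors if v is not u)]

# Hypothetical three-objective utility vectors
points = [(3.0, 1.0, 2.0), (2.0, 2.0, 2.0), (3.0, 1.0, 1.0), (1.0, 3.0, 3.0)]
print(pareto_undominated(points))   # (3, 1, 1) is dominated by (3, 1, 2) and is removed
```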

Relevance:

90.00%

Publisher:

Abstract:

We obtain an upper bound on the time available for quantum computation for a given quantum computer and decohering environment with quantum error correction implemented. First, we derive an explicit quantum evolution operator for the logical qubits and show that it has the same form as that for the physical qubits but with a reduced coupling strength to the environment. Using this evolution operator, we find the trace distance between the real and ideal states of the logical qubits in two cases. For a super-Ohmic bath, the trace distance saturates, while for Ohmic or sub-Ohmic baths, there is a finite time before the trace distance exceeds a value set by the user. © 2010 The American Physical Society.
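
The figure of merit used here, the trace distance D(ρ, σ) = ½ Tr|ρ − σ| between the real and ideal logical states, can be computed directly from the eigenvalues of the Hermitian difference. A minimal numerical sketch follows; the example states are hypothetical and not taken from the paper.

```python
import numpy as np

def trace_distance(rho: np.ndarray, sigma: np.ndarray) -> float:
    """Trace distance D(rho, sigma) = (1/2) * Tr|rho - sigma| between two
    density matrices, from the eigenvalues of the Hermitian difference."""
    eigvals = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * float(np.sum(np.abs(eigvals)))

# Hypothetical single-qubit example: ideal |0><0| vs. a slightly decohered state
ideal = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)
real = np.array([[0.95, 0.02], [0.02, 0.05]], dtype=complex)
print(trace_distance(real, ideal))
```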

Relevance:

90.00%

Publisher:

Abstract:

Size-fractionated filtration (SFF) is a direct method for estimating pigment concentration in various size classes. It is also common practice to infer the size structure of phytoplankton communities from diagnostic pigments estimated by high-performance liquid chromatography (HPLC). In this paper, the three-component model of Brewin et al. (2010) was fitted to coincident data from HPLC and from SFF collected along Atlantic Meridional Transect cruises. The model accounted for the variability in each data set, but the fitted model parameters differed for the two data sets. Both HPLC and SFF data supported the conceptual framework of the three-component model, which assumes that the chlorophyll concentration in small cells increases to an asymptotic maximum, beyond which further increase in chlorophyll is achieved by the addition of larger-celled phytoplankton. The three-component model was extended to a multicomponent model of size structure using observed relationships between model parameters and assuming that the asymptotic concentration that can be reached by cells increases linearly with the upper bound on the cell size. The multicomponent model was verified using independent SFF data for a variety of size fractions and found to perform well (0.628 ≤ r ≤ 0.989), lending support to the underlying assumptions. An advantage of the multicomponent model over the three-component model is that, for the same number of parameters, it can be applied to any size range in a continuous fashion. The multicomponent model provides a useful tool for studying the distribution of phytoplankton size structure at large scales.
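
A sketch of the partition described above, assuming the saturating-exponential form commonly used for the three-component model, in which chlorophyll in the small-cell fractions rises to an asymptote and the largest cells take up the remainder; the parameter names and values below are illustrative, not the fitted values reported in the paper.

```python
import numpy as np

def three_component(chl_total, cm_picnano, d_picnano, cm_pico, d_pico):
    """Partition total chlorophyll (mg m^-3) into pico, nano and micro
    fractions using saturating-exponential curves for the small-cell
    fractions; the remainder is assigned to microphytoplankton."""
    c_picnano = cm_picnano * (1.0 - np.exp(-d_picnano * chl_total))
    c_pico = cm_pico * (1.0 - np.exp(-d_pico * chl_total))
    return c_pico, c_picnano - c_pico, chl_total - c_picnano

def asymptote_for_size(size_um, intercept=0.1, slope=0.08):
    """Multicomponent extension sketched in the abstract: let the asymptotic
    concentration grow linearly with the upper bound on cell size
    (intercept and slope are hypothetical)."""
    return intercept + slope * size_um

chl = np.array([0.05, 0.2, 1.0, 5.0])
print(three_component(chl, cm_picnano=0.77, d_picnano=0.94, cm_pico=0.13, d_pico=0.80))
```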

Relevance:

90.00%

Publisher:

Abstract:

The upper and lower bounds on the actual solution of any microwave structure are of general interest. The purpose of this letter is to compare some calculations using the mode-matching method (MMM) and the finite-element method (FEM) with some measurements on a 180-degree ridge waveguide insert between standard WR62 rectangular waveguides. The work suggests that the MMM produces an upper bound, while the FEM places a lower bound on the measurement. (C) 2001 John Wiley & Sons, Inc.

Relevance:

90.00%

Publisher:

Abstract:

We propose a mixed cost-function adaptive initialization algorithm for the time domain equalizer in a discrete multitone (DMT)-based asymmetric digital subscriber line. Using our approach, a higher convergence rate than that of the commonly used least-mean square algorithm is obtained, whilst attaining bit rates close to the optimum maximum shortening SNR and the upper bound SNR. Furthermore, our proposed method outperforms the minimum mean-squared error design for a range of time domain equalizer (TEQ) filter lengths. The improved performance outweighs the small increase in computational complexity required. A block variant of our proposed algorithm is also presented to overcome the increased latency imposed on the feedback path of the adaptive system.

Relevance:

90.00%

Publisher:

Abstract:

In this paper, we present a hybrid mixed cost-function adaptive initialization algorithm for the time domain equalizer in a discrete multitone (DMT)-based asymmetric digital subscriber loop. Using our approach, a higher convergence rate than that of the commonly used least-mean square algorithm is obtained, whilst attaining bit rates close to the optimum maximum shortening SNR and the upper bound SNR. Moreover, our proposed method outperforms the minimum mean-squared error design for a range of TEQ filter lengths.