969 results for performance constraints


Relevance:

30.00%

Publisher:

Abstract:

We address the problem of reconstructing a sparse signal from its DFT magnitude. We refer to this problem as the sparse phase retrieval (SPR) problem, which finds applications in tomography, digital holography, electron microscopy, etc. We develop a Fienup-type iterative algorithm, referred to as the Max-K algorithm, to enforce sparsity and successively refine the estimate of phase. We show that the Max-K algorithm possesses Cauchy convergence properties under certain conditions, that is, the MSE of reconstruction does not increase with iterations. We also formulate the problem of SPR as a feasibility problem, where the goal is to find a signal that is sparse in a known basis and whose Fourier transform magnitude is consistent with the measurement. Subsequently, we interpret the Max-K algorithm as alternating projections onto the object-domain and measurement-domain constraint sets and generalize it to a parameterized relaxation, known as the relaxed averaged alternating reflections (RAAR) algorithm. On the application front, we work with measurements acquired using a frequency-domain optical-coherence tomography (FDOCT) experimental setup. Experimental results on measured data show that the proposed algorithms exhibit good reconstruction performance compared with the direct inversion technique, homomorphic technique, and the classical Fienup algorithm without sparsity constraint; specifically, the autocorrelation artifacts and background noise are suppressed to a significant extent. We also demonstrate that the RAAR algorithm offers a broader framework for FDOCT reconstruction, of which the direct inversion technique and the proposed Max-K algorithm become special instances corresponding to specific values of the relaxation parameter.
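
As a rough illustration of the alternating-projections view described above, the following sketch alternates a measurement-domain projection (restore the measured DFT magnitude, keep the current phase estimate) with an object-domain projection that keeps only the K largest samples. It is a minimal sketch assuming NumPy and sparsity in the canonical basis; the paper's Max-K algorithm and its RAAR relaxation are more refined than this.

```python
import numpy as np

def project_measurement(x, mag):
    """Measurement-domain projection: keep the current Fourier phase,
    impose the measured DFT magnitude."""
    X = np.fft.fft(x)
    return np.real(np.fft.ifft(mag * np.exp(1j * np.angle(X))))

def project_sparsity(x, K):
    """Object-domain projection: keep the K largest-magnitude samples."""
    y = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-K:]
    y[idx] = x[idx]
    return y

def alternating_projections(mag, K, n_iter=500, seed=0):
    x = np.random.default_rng(seed).standard_normal(len(mag))
    for _ in range(n_iter):
        x = project_sparsity(project_measurement(x, mag), K)
    return x

# Toy run: a 3-sparse signal observed only through its DFT magnitude.
x_true = np.zeros(64)
x_true[[5, 17, 40]] = [1.0, -2.0, 1.5]
x_hat = alternating_projections(np.abs(np.fft.fft(x_true)), K=3)
```

In this toy setting recovery is only possible up to the trivial ambiguities of Fourier phase retrieval (circular shift, flip, and global sign).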

Relevance:

30.00%

Publisher:

Abstract:

Adapting the power of secondary users (SUs) while adhering to constraints on the interference caused to primary receivers (PRxs) is a critical issue in underlay cognitive radio (CR). This adaptation is driven by the interference and transmit power constraints imposed on the secondary transmitter (STx). Its performance also depends on the quality of channel state information (CSI) available at the STx of the links from the STx to the secondary receiver and to the PRxs. For a system in which an STx is subject to an average interference constraint or an interference outage probability constraint at each of the PRxs, we derive novel symbol error probability (SEP)-optimal, practically motivated binary transmit power control policies. As a reference, we also present the corresponding SEP-optimal continuous transmit power control policies for one PRx. We then analyze the robustness of the optimal policies when the STx knows noisy channel estimates of the links between the SU and the PRxs. Altogether, our work develops a holistic understanding of the critical role played by different transmit and interference constraints in driving power control in underlay CR and the impact of CSI on its performance.
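
To make the flavor of a binary power control policy concrete, here is a minimal Monte Carlo sketch. All specifics are illustrative assumptions, not the paper's SEP-optimal policies: two power levels, an exponential fading power gain on the STx-PRx link, and a threshold rule that transmits at high power only when the interfering channel is weak, with the threshold tuned by bisection so the average interference constraint is met.

```python
import numpy as np

rng = np.random.default_rng(1)
P_HI, P_LO = 1.0, 0.1     # the two allowed transmit power levels (illustrative)
I_AVG = 0.2               # average interference budget at the PRx (illustrative)

g = rng.exponential(1.0, 100_000)   # Rayleigh-fading power gain, STx -> PRx

def avg_interference(threshold):
    """Average interference when transmitting at P_HI only if g < threshold."""
    p = np.where(g < threshold, P_HI, P_LO)
    return np.mean(g * p)

# Bisect for the largest threshold that still meets the average constraint
# (average interference grows monotonically with the threshold).
lo, hi = 0.0, 20.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if avg_interference(mid) <= I_AVG:
        lo = mid
    else:
        hi = mid
print(f"threshold={lo:.3f}, E[interference]={avg_interference(lo):.3f}")
```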

Relevance:

30.00%

Publisher:

Abstract:

Vehicular Ad-hoc Networks (VANETs) are a type of wireless ad-hoc network that aims to provide communication among vehicles. A key characteristic of VANETs is the very high mobility of nodes, which results in a frequently changing topology along with frequent breakage and re-establishment of paths among the nodes involved. These characteristics make meeting Quality of Service (QoS) requirements in VANETs a challenging issue. In this paper, we characterize the performance available to applications in infrastructureless VANETs in terms of path holding time, path breakage probability, and per-session throughput as a function of vehicle density on the road, data traffic rate, and the number of connections formed among vehicles, using both table-driven and on-demand routing algorithms. The results reveal several QoS constraints on applications in infrastructureless VANETs.
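
The path-level metrics above have a simple operational form: a multi-hop path holds only while every one of its links holds. The sketch below is a toy Monte Carlo under an assumed exponential per-link lifetime, not the paper's simulation setup, estimating mean path holding time and the probability that a path breaks before a session ends.

```python
import numpy as np

rng = np.random.default_rng(2)

def path_metrics(n_paths=10_000, hops=4, mean_link_life=8.0, session=10.0):
    """Path holding time = minimum of the per-hop link lifetimes."""
    lifetimes = rng.exponential(mean_link_life, size=(n_paths, hops))
    holding = lifetimes.min(axis=1)
    return holding.mean(), np.mean(holding < session)

mean_hold, p_break = path_metrics()
print(f"mean holding time {mean_hold:.2f} s, breakage prob {p_break:.3f}")
```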

Relevance:

30.00%

Publisher:

Abstract:

We develop an approximate analytical technique for evaluating the performance of multi-hop networks based on beaconless IEEE 802.15.4 (the "ZigBee" PHY and MAC), a popular standard for wireless sensor networks. The network comprises sensor nodes, which generate measurement packets; relay nodes, which only forward packets; and a data sink (base station). We consider a detailed stochastic process at each node and analyse this process taking into account the interaction with neighbouring nodes via certain time-averaged unknown variables (e.g., channel sensing rates and collision probabilities). By coupling the analyses at the various nodes, we obtain fixed-point equations that can be solved numerically for the unknown variables, thereby yielding approximations of time-average performance measures such as packet discard probabilities and average queueing delays. The model incorporates packet generation at the sensor nodes and queues at the sensor and relay nodes. We demonstrate the accuracy of our model by an extensive comparison with simulations. As an additional assessment of the accuracy of the model, we use it in an algorithm for sensor network design with quality-of-service (QoS) objectives, and show that the resulting designs actually satisfy the QoS constraints (as validated by simulating the networks), with predictions accurate to well within 10% of the simulation results in the regime where the packet discard probability is low.
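
The coupling step lends itself to a standard numerical treatment: stack the per-node unknowns into a vector, express each as a function of the others, and iterate to a fixed point. The sketch below uses damped successive substitution with a made-up two-line coupling between attempt rates and collision probabilities standing in for the paper's 802.15.4 equations.

```python
import numpy as np

def fixed_point(update, x0, tol=1e-10, max_iter=10_000, damping=0.5):
    """Damped successive substitution: x <- (1 - d) * x + d * F(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = (1 - damping) * x + damping * update(x)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence")

# Toy coupling: each node's collision probability depends on its neighbours'
# attempt rates (an illustrative form, not the paper's 802.15.4 equations).
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # neighbour graph
def F(beta):
    coll = 1 - np.exp(-A @ beta)   # collision probability seen by each node
    return 0.3 * (1 - coll)        # attempt rate backs off under collisions

beta_star = fixed_point(F, x0=[0.1, 0.1, 0.1])
print(beta_star)
```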

Relevance:

30.00%

Publisher:

Abstract:

In this work we extend to the multistage case two recent risk-averse measures for two-stage stochastic programs based on first- and second-order stochastic dominance constraints induced by mixed-integer linear recourse. Additionally, we consider Time Stochastic Dominance (TSD) along a given horizon. Given the dimensions that medium-sized problems reach once they are augmented by the new variables and constraints required by these risk measures, it is unrealistic to solve the problem to optimality by plain use of MIP solvers in reasonable computing time; decomposition algorithms of some type should be used instead. We present an extension of our Branch-and-Fix Coordination algorithm, named BFC-TSD, in which special treatment is given to cross-scenario-group constraints that link variables from different scenario groups. A broad computational experience is reported, comparing the risk-neutral approach with the tested risk-averse strategies. The performance of the new version of the BFC algorithm versus the plain use of a state-of-the-art MIP solver is also reported.
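
For readers unfamiliar with the constraints involved, first- and second-order stochastic dominance between two discrete scenario distributions can be checked numerically as below. This is a self-contained NumPy sketch of the definitions (using the convention that larger outcomes are better); the paper's contribution is embedding such conditions as constraints in a multistage mixed-integer model, which this does not attempt.

```python
import numpy as np

def dominates_first_order(x, y):
    """x FSD-dominates y if F_x(t) <= F_y(t) at every t."""
    grid = np.union1d(x, y)
    Fx = np.mean(x[:, None] <= grid[None, :], axis=0)
    Fy = np.mean(y[:, None] <= grid[None, :], axis=0)
    return bool(np.all(Fx <= Fy + 1e-12))

def dominates_second_order(x, y):
    """x SSD-dominates y if E[(t - x)+] <= E[(t - y)+] at every t."""
    grid = np.union1d(x, y)
    sx = np.mean(np.maximum(grid[None, :] - x[:, None], 0), axis=0)
    sy = np.mean(np.maximum(grid[None, :] - y[:, None], 0), axis=0)
    return bool(np.all(sx <= sy + 1e-12))

plan      = np.array([3.0, 4.0, 5.0, 6.0])  # scenario outcomes of a plan
benchmark = np.array([2.0, 4.0, 4.0, 6.0])  # benchmark scenario outcomes
print(dominates_first_order(plan, benchmark),
      dominates_second_order(plan, benchmark))   # True True
```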

Relevance:

30.00%

Publisher:

Abstract:

Lean premixed prevaporized (LPP) technology has been widely used in the new generation of gas turbines, in which reduced emissions are a priority. However, such combustion systems are susceptible to damaging self-excited oscillations, and feedback control provides a way of preventing such instabilities. A flame dynamics assumption is proposed for a recently developed unsteady heat release model, the robust design technique of H∞ loop-shaping is applied to the controller design, and the performance of the controller is confirmed by simulations of the closed-loop system. The Integral Quadratic Constraints (IQC) method is employed to prove the stability of the closed-loop system.
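
As a frequency-domain illustration of the loop-shaping step, the sketch below shapes an open loop with a weight providing integral action and checks the resulting sensitivity peak. The lightly damped second-order plant and the weight are illustrative assumptions, not the paper's flame/acoustic model, and no H∞ synthesis is performed (that would require a robust-control toolbox).

```python
import numpy as np
from scipy import signal

# Illustrative lightly damped plant (not the paper's combustion model).
P = signal.TransferFunction([1.0], [1.0, 0.4, 4.0])
# Loop-shaping weight: integral action plus phase lead near crossover.
W = signal.TransferFunction(np.polymul([5.0], [1.0, 2.0]), [1.0, 0.0])

w = np.logspace(-2, 2, 500)
_, Pjw = signal.freqresp(P, w)
_, Wjw = signal.freqresp(W, w)
L = Wjw * Pjw                 # shaped open loop W(jw) * P(jw)
S = 1.0 / (1.0 + L)           # sensitivity of the shaped loop

print("peak |S| =", np.abs(S).max())   # a rough robustness indicator
```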

Relevance:

30.00%

Publisher:

Abstract:

Model predictive control allows systematic handling of physical and operational constraints through the use of constrained optimisation. It has also been shown to successfully exploit plant redundancy to maintain a level of control in scenarios when faults are present. Unfortunately, the computational complexity of each individual iteration of the algorithm to solve the optimisation problem scales cubically with the number of plant inputs, so the computational demands are high for large MIMO plants. Multiplexed MPC only calculates changes in a subset of the plant inputs at each sampling instant, thus reducing the complexity of the optimisation. This paper demonstrates the application of multiplexed model predictive control to a large transport airliner in a nominal and a contingency scenario. The performance is compared to that obtained with a conventional synchronous model predictive controller, designed using an equivalent cost function.
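
A minimal sketch of the multiplexed idea follows, using cvxpy with an illustrative two-input plant and cost. Only one input channel is optimised per sampling instant, so each QP is smaller than the full synchronous one; in this simplification the non-optimised channels are merely held at their previous values, whereas multiplexed MPC proper holds them to their previously planned trajectories.

```python
import numpy as np
import cvxpy as cp

# Toy 2-input plant (illustrative, not the airliner model from the paper).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0, 0.05], [0.1, 0.0]])
N, m = 8, B.shape[1]                  # horizon, number of inputs

def mpc_step(x0, k, u_prev):
    """Multiplexed MPC: at instant k, optimise only input channel k % m."""
    ch = k % m
    u = cp.Variable((N, m))
    cost, cons, x = 0, [cp.abs(u) <= 1.0], x0
    for t in range(N):
        x = A @ x + B @ u[t]
        cost += cp.sum_squares(x) + 0.1 * cp.sum_squares(u[t])
        # Channels other than `ch` are frozen at their previous values.
        cons += [u[t, j] == u_prev[j] for j in range(m) if j != ch]
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u.value[0]

x, u_prev = np.array([1.0, 0.0]), np.zeros(m)
for k in range(20):
    u_prev = mpc_step(x, k, u_prev)
    x = A @ x + B @ u_prev
print(x)
```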

Relevance:

30.00%

Publisher:

Abstract:

The mitochondrial 16S ribosomal RNA (rRNA) gene sequences from 93 cyprinid fishes were examined to reconstruct the phylogenetic relationships within the diverse and economically important subfamily Cyprininae. Within the subfamily, a biased nucleotide composition (A > T, C > G) was observed in the loop regions of the gene, while in stem regions the apparent selective pressure of base pairing produced a bias in favor of G over C and T over A; this bias may be associated with transition-transversion bias. Rates of nucleotide substitution were lower in stems than in loops. Analysis of compensatory substitutions across these taxa demonstrates 68% covariation in the gene, suggesting a weighting factor of 0.66 to account for the dependence between mutations in phylogenetic inference. Comparisons of various stem-loop weighting schemes indicate that down-weighting stem regions can improve the phylogenetic analysis, and that the non-independence of stem substitutions was not as important as expected. Bayesian inference under four models of nucleotide substitution indicated that likelihood-based phylogenetic analyses improved phylogenetic performance more effectively than weighted parsimony analysis. In the Bayesian analyses, phylogenies were better resolved under 16-state models for paired regions combined with GTR + G + I models for unpaired regions than under other models. The subfamily Cyprininae was resolved as a monophyletic group, as were the tribe Labein and several genera. However, the monophyly of other currently recognized tribes, such as the Schizothoracin, Barbin, and Cyprinion + Onychostoma lineages, and of some genera was rejected. Furthermore, comparisons of the parsimony and Bayesian analyses, together with variable-length bootstrap analysis, indicate that the mitochondrial 16S rRNA gene contains sufficient character variation to recover well-supported phylogenies of cyprinid taxa whose divergences occurred within the last 8 MY, but cannot resolve deeper phylogenies spanning 10-19 MYA.
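
The composition bias reported above is the kind of statistic obtained by splitting alignment columns by secondary structure. Below is a toy sketch with a hypothetical three-sequence fragment and dot-bracket structure mask; a real analysis would use the full alignment and a curated 16S structure model.

```python
from collections import Counter

# Hypothetical aligned rRNA fragment and its secondary-structure mask:
# '(' / ')' mark paired (stem) columns, '.' marks unpaired (loop) columns.
seqs = ["GGCUAAGCGUUAGCC",
        "GGCUAAACGUUAGCC",
        "GGCAAAGCGUUUGCC"]
mask = "(((((.....)))))"

stem, loop = Counter(), Counter()
for s in seqs:
    for base, m in zip(s, mask):
        (stem if m in "()" else loop)[base] += 1

for name, c in (("stem", stem), ("loop", loop)):
    total = sum(c.values())
    print(name, {b: round(c[b] / total, 2) for b in "ACGU"})
```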

Relevance:

30.00%

Publisher:

Abstract:

Visibility constraints can aid the segmentation of foreground objects observed with multiple range images. In our approach, points are defined as foreground if they can be determined to occlude some empty space in the scene. We present an efficient algorithm to estimate foreground points in each range view using explicit epipolar search. In cases where the background pattern is stationary, we show how visibility constraints from other views can generate virtual background values at points with no valid depth in the primary view. We demonstrate the performance of both algorithms for detecting people in indoor office environments.
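
The occlusion test takes a particularly simple form if one assumes the auxiliary view's depths have already been resampled into the primary view's frame (an assumption made here to avoid the calibration and explicit epipolar search the paper actually performs): a point is foreground when it lies measurably in front of the surface another camera saw along the same ray, i.e., it occludes space that camera observed to be empty.

```python
import numpy as np

def foreground_mask(depth_primary, depth_aux_in_primary, margin=0.05):
    """Flag pixels whose point lies in front of the surface the auxiliary
    view saw along (approximately) the same ray."""
    valid = np.isfinite(depth_primary) & np.isfinite(depth_aux_in_primary)
    return valid & (depth_primary + margin < depth_aux_in_primary)

# Toy 1-row example: the auxiliary view saw the back wall at 4.0 m; pixels
# whose primary depth is well in front of it are flagged as foreground.
d_prim = np.array([4.0, 4.0, 1.5, 1.6, 4.0])
d_aux  = np.array([4.0, 4.0, 4.0, 4.0, 4.0])
print(foreground_mask(d_prim, d_aux))   # [False False  True  True False]
```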

Relevance:

30.00%

Publisher:

Abstract:

Speculative Concurrency Control (SCC) [Best92a] is a new concurrency control approach especially suited for real-time database applications. It relies on the use of redundancy to ensure that serializable schedules are discovered and adopted as early as possible, thus increasing the likelihood of the timely commitment of transactions with strict timing constraints. In [Best92b], SCC-nS, a generic algorithm that characterizes a family of SCC-based algorithms, was described, and its correctness was established by showing that it admits only serializable histories. In this paper, we evaluate the performance of the Two-Shadow SCC algorithm (SCC-2S), a member of the SCC-nS family, which is notable for its minimal use of redundancy. In particular, we show that SCC-2S (as a representative of SCC-based algorithms) provides significant performance gains over the widely used Optimistic Concurrency Control with Broadcast Commit (OCC-BC) under a variety of operating conditions and workloads.
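
A greatly simplified sketch of the two-shadow idea follows, with a toy dict "database", a hand-placed fork point, and validation-based conflict detection (the actual SCC-2S forks shadows in response to detected conflicts within the SCC-nS framework). The point it illustrates: work done before the risky read is preserved in the shadow, so an invalidated primary need not restart from scratch.

```python
import copy

class Txn:
    """Toy optimistic transaction over a dict database."""
    def __init__(self, db):
        self.ws = dict(db)      # private workspace
        self.reads = {}         # key -> committed value observed at read time

    def read(self, db, k):
        self.ws[k] = self.reads[k] = db[k]
        return self.ws[k]

    def write(self, k, v):
        self.ws[k] = v

    def validate(self, db):     # are all reads still current at commit time?
        return all(db[k] == v for k, v in self.reads.items())

db = {"x": 1, "y": 2}
primary = Txn(db)
primary.read(db, "x")               # work done before the risky access
shadow = copy.deepcopy(primary)     # speculate: fork before reading "y"
primary.write("y", primary.read(db, "y") + 10)

db["y"] = 99                        # a concurrent transaction commits to "y"

if primary.validate(db):
    db.update(primary.ws)
else:                               # primary's read of "y" is now stale:
    shadow.write("y", shadow.read(db, "y") + 10)   # adopt the shadow instead
    assert shadow.validate(db)
    db.update(shadow.ws)
print(db)                           # -> {'x': 1, 'y': 109}
```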

Relevance:

30.00%

Publisher:

Abstract:

In many multi-camera vision systems the effect of camera locations on the task-specific quality of service is ignored. Researchers in Computational Geometry have proposed elegant solutions for some sensor location problem classes. Unfortunately, these solutions utilize unrealistic assumptions about the cameras' capabilities that make these algorithms unsuitable for many real-world computer vision applications: unlimited field of view, infinite depth of field, and/or infinite servo precision and speed. In this paper, the general camera placement problem is first defined with assumptions that are more consistent with the capabilities of real-world cameras. The region to be observed by cameras may be volumetric, static or dynamic, and may include holes that are caused, for instance, by columns or furniture in a room that can occlude potential camera views. A subclass of this general problem can be formulated in terms of planar regions that are typical of building floorplans. Given a floorplan to be observed, the problem is then to efficiently compute a camera layout such that certain task-specific constraints are met. A solution to this problem is obtained via binary optimization over a discrete problem space. In preliminary experiments the performance of the resulting system is demonstrated with different real floorplans.
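
At small scale, the 0/1 nature of the problem can be made concrete with an exhaustive search over camera subsets. The candidate positions and precomputed visibility sets below are hypothetical; the paper handles realistic field-of-view, depth-of-field, and occlusion constraints and solves far larger instances via binary optimization.

```python
import itertools

# Hypothetical corridor discretised into cells; visibility[c] is the set of
# cells candidate camera c covers (precomputed from FoV, range, occlusion).
cells = set(range(12))
visibility = {
    "cam_A": {0, 1, 2, 3},
    "cam_B": {2, 3, 4, 5, 6},
    "cam_C": {5, 6, 7, 8},
    "cam_D": {8, 9, 10, 11},
    "cam_E": {0, 1, 10, 11},
}

def min_cameras(cells, visibility):
    """Smallest camera subset covering every cell: exhaustive 0/1 search,
    fine at this scale; larger instances call for a MIP or greedy cover."""
    cams = sorted(visibility)
    for r in range(1, len(cams) + 1):
        for subset in itertools.combinations(cams, r):
            if set().union(*(visibility[c] for c in subset)) >= cells:
                return subset
    return None

print(min_cameras(cells, visibility))
```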

Relevance:

30.00%

Publisher:

Abstract:

One role for workload generation is as a means for understanding how servers and networks respond to variation in load. This enables management and capacity planning based on current and projected usage. This paper applies a number of observations of Web server usage to create a realistic Web workload generation tool which mimics a set of real users accessing a server. The tool, called Surge (Scalable URL Reference Generator), generates references matching empirical measurements of 1) server file size distribution; 2) request size distribution; 3) relative file popularity; 4) embedded file references; 5) temporal locality of reference; and 6) idle periods of individual users. This paper reviews the essential elements required in the generation of a representative Web workload. It also addresses the technical challenges to satisfying this large set of simultaneous constraints on the properties of the reference stream, the solutions we adopted, and their associated accuracy. Finally, we present evidence that Surge exercises servers in a manner significantly different from other Web server benchmarks.
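
A few of these target distributions are straightforward to mimic in isolation. The sketch below draws a reference stream with Zipf-like file popularity, heavy-tailed file sizes, and heavy-tailed idle periods; the parameters are illustrative only, and Surge additionally matches request sizes, embedded references, and temporal locality, all calibrated against empirical measurements.

```python
import numpy as np

rng = np.random.default_rng(7)
N_FILES, N_REQUESTS = 1000, 50_000

# Relative file popularity: Zipf-like, P(rank i) proportional to 1/i.
ranks = np.arange(1, N_FILES + 1)
pop = 1.0 / ranks
pop /= pop.sum()

# File sizes: heavy-tailed (Pareto), so a few files are very large.
file_sizes = (rng.pareto(1.2, N_FILES) + 1.0) * 1000.0   # bytes, illustrative

# Reference stream: which file each request asks for, plus idle ("OFF")
# periods between a user's requests.
requests = rng.choice(N_FILES, size=N_REQUESTS, p=pop)
think_times = rng.pareto(1.5, N_REQUESTS) + 0.1          # seconds

print("top-10 files receive", np.isin(requests, np.arange(10)).mean(),
      "of requests; mean transfer size:", file_sizes[requests].mean())
```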

Relevance:

30.00%

Publisher:

Abstract:

A foundational issue underlying many overlay network applications ranging from routing to P2P file sharing is that of connectivity management, i.e., folding new arrivals into the existing mesh and re-wiring to cope with changing network conditions. Previous work has considered the problem from two perspectives: devising practical heuristics for specific applications designed to work well in real deployments, and providing abstractions for the underlying problem that are tractable to address via theoretical analyses, especially game-theoretic analysis. Our work unifies these two thrusts first by distilling insights gleaned from clean theoretical models, notably that under natural resource constraints, selfish players can select neighbors so as to efficiently reach near-equilibria that also provide high global performance. Using Egoist, a prototype overlay routing system we implemented on PlanetLab, we demonstrate that our neighbor selection primitives significantly outperform existing heuristics on a variety of performance metrics; that Egoist is competitive with an optimal, but unscalable full-mesh approach; and that it remains highly effective under significant churn. We also describe variants of Egoist's current design that would enable it to scale to overlays of much larger scale and allow it to cater effectively to applications, such as P2P file sharing in unstructured overlays, based on the use of primitives such as scoped-flooding rather than routing.
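
The best-response flavor of selfish neighbor selection is easy to sketch: each node, taking the others as given, picks the k neighbors that minimize its own total routing cost. The toy below uses points in the plane and approximates each route as a two-hop path through a chosen neighbor, a simplification of Egoist's shortest-path objective over a real overlay.

```python
import itertools
import numpy as np

rng = np.random.default_rng(11)
n, k = 8, 2
pts = rng.random((n, 2))                              # overlay node "positions"
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

def best_response(i):
    """Neighbors minimising node i's total cost to all destinations,
    approximating each route as i -> neighbor -> destination."""
    others = [j for j in range(n) if j != i]
    best, best_cost = None, np.inf
    for nbrs in itertools.combinations(others, k):
        cost = sum(min(dist[i, j] + dist[j, d] for j in nbrs) for d in others)
        if cost < best_cost:
            best, best_cost = nbrs, cost
    return best

for i in range(n):
    print(i, "->", best_response(i))
```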