866 results for Lower bounds


Relevance:

60.00%

Publisher:

Abstract:

We propose a new formulation of Miller's regularization theory, which is particularly suitable for object restoration problems. By means of simple geometrical arguments, we obtain upper and lower bounds for the errors on regularized solutions. This leads us to distinguish between 'Hölder continuity', which is quite good for practical computations, and 'logarithmic continuity', which is very poor. However, in the latter case, one can reconstruct local weighted averages of the solution. This procedure allows for precise evaluations of the resolution attainable in a given problem. Numerical computations, made for object restoration beyond the diffraction limit in Fourier optics, show that, when logarithmic continuity holds, the resolution is practically independent of the data noise level. © 1980 Taylor & Francis Group, LLC.
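For context (these schematic estimates are standard in regularization theory and are not taken from the paper; the constant C and exponent μ are problem-dependent), the two continuity notions are usually expressed as stability estimates in terms of the data noise level ε:

```latex
% Hölder continuity: restoration error decays polynomially in the noise level
\| f_\varepsilon - f \| \le C\, \varepsilon^{\mu}, \qquad 0 < \mu \le 1,
% Logarithmic continuity: the error decays only logarithmically, hence "very poor"
\| f_\varepsilon - f \| \le C \left( \log \tfrac{1}{\varepsilon} \right)^{-\mu}.
```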

Relevance:

60.00%

Publisher:

Abstract:

A spectrally efficient cooperative protocol for uplink wireless transmission in a centralised communication system is proposed, in which each of the N users plays the relaying and source roles simultaneously by using superposition (SP) modulation. The probability density function of the mutual information between the SP-modulated transmitted and received signals of the cooperative uplink channels is derived. Using the high signal-to-noise ratio (SNR) approximation of this density function, the outage probability formula of the system, as well as its easily computable tight upper and lower bounds, are obtained, and these formulas are evaluated numerically. Numerical results show that the proposed strategy can achieve around 3 dB performance gain over comparable schemes. Furthermore, the multiplexing and diversity tradeoff formula is derived to illustrate the optimal performance of the proposed protocol, which also confirms that the SP relaying transmission does not cause any loss of data rate. Moreover, performance characterisation in terms of ergodic and outage capacities is studied, and numerical results show that the proposed scheme can achieve significantly larger outage capacity than direct transmission, similar to other cooperative schemes. The superiority of the proposed strategy is demonstrated by the fact that it can maintain almost the same ergodic capacity as direct transmission, whereas the ergodic capacity of other cooperative schemes is much worse.
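For reference, the outage probability and the multiplexing and diversity tradeoff mentioned above are defined in the usual way (standard definitions, not the paper's specific expressions); here I denotes the mutual information of the cooperative uplink channel, R the target rate, r the multiplexing gain and d(r) the corresponding diversity gain:

```latex
P_{\mathrm{out}}(R) = \Pr\!\left[ I(\mathrm{SNR}) < R \right],
\qquad
d(r) = -\lim_{\mathrm{SNR}\to\infty}
\frac{\log P_{\mathrm{out}}\!\left(r \log \mathrm{SNR}\right)}{\log \mathrm{SNR}}.
```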

Relevance:

60.00%

Publisher:

Abstract:

An appreciation of the quantity of streamflow derived from the main hydrological pathways involved in transporting diffuse contaminants is critical when addressing a wide range of water resource management issues. In order to assess hydrological pathway contributions to streams, it is necessary to provide feasible upper and lower bounds for flows in each pathway. An important first step in this process is to provide reliable estimates of the slower-responding groundwater pathways and subsequently the quicker overland and interflow pathways. This paper investigates the effectiveness of a multi-faceted approach that applies different hydrograph separation techniques, supplemented by lumped hydrological modelling, to calculating the Baseflow Index (BFI), with the aim of developing an integrated approach to hydrograph separation. A semi-distributed, lumped and deterministic rainfall-runoff model known as NAM has been applied to ten catchments (ranging from 5 to 699 km²). While this modelling approach is useful as a validation method, NAM itself is also an important tool for investigation. The separation techniques produce a large variation in BFI: a difference of 0.741 predicted for BFI in one catchment when the less reliable fixed-interval, sliding-interval and local-minimum turning-point methods are included. This variation is reduced to 0.167 when these methods are omitted. The Boughton and Eckhardt algorithms, while quite subjective in their use, provide quick and easily implemented approaches for obtaining physically realistic hydrograph separations. It is observed that, while the different separation techniques give varying BFI values for each of the catchments, a recharge coefficient approach developed in Ireland, when applied in conjunction with the master recession curve tabulation method, predicts estimates in agreement with those obtained using the NAM model, and these estimates are also consistent with the study catchments' geology. These two separation methods, in conjunction with the NAM model, were selected to form an integrated approach to assessing BFI in catchments.
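As an illustration of one of the separation techniques named above, the following is a minimal sketch of the Eckhardt two-parameter recursive digital filter for baseflow separation; the function name and the parameter values (recession constant a and BFImax) are illustrative only and would normally be calibrated per catchment, not taken from this study.

```python
def eckhardt_baseflow(q, a=0.98, bfi_max=0.80):
    """Eckhardt two-parameter recursive digital filter (illustrative sketch).

    q       : list of total streamflow values, one per time step
    a       : recession constant (illustrative value)
    bfi_max : maximum BFI the filter can produce (illustrative value)
    Returns the baseflow series and the resulting Baseflow Index (BFI).
    """
    b = [min(q[0], bfi_max * q[0])]          # initialise baseflow
    for k in range(1, len(q)):
        bk = ((1 - bfi_max) * a * b[-1] + (1 - a) * bfi_max * q[k]) / (1 - a * bfi_max)
        b.append(min(bk, q[k]))               # baseflow cannot exceed total flow
    return b, sum(b) / sum(q)

# Small synthetic hydrograph as a usage example
flow = [2.0, 5.0, 12.0, 9.0, 6.0, 4.0, 3.0, 2.5]
baseflow, bfi = eckhardt_baseflow(flow)
print(f"BFI = {bfi:.3f}")
```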

Relevance:

60.00%

Publisher:

Abstract:

Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know who the elected leader is. This paper focuses on studying the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. The "obvious" lower bounds of Ω(m) messages (where m is the number of edges in the network) and Ω(D) time (where D is the network diameter) are non-trivial to show for randomized (Monte Carlo) algorithms. (Recent results showing that even Ω(n), where n is the number of nodes in the network, is not a lower bound on the messages in complete networks make the above bounds somewhat less obvious.) To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms, except for the limited case of comparison algorithms, where it was also required that some nodes may not wake up spontaneously and that D and n were not known.

We establish these fundamental lower bounds in this paper for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (such algorithms should work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of the nodes' identities. To show that these bounds are tight, we present an O(m) messages algorithm. An O(D) time algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.

An interesting fundamental problem is whether both upper bounds (messages and time) can be reached simultaneously in the randomized setting for all graphs. (The answer is known to be negative in the deterministic setting.) We answer this problem partially by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade off messages versus time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.
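To make the message/time tension concrete, here is a minimal sketch (my own illustration, not the paper's algorithm) of the folklore flooding approach to implicit leader election: every node draws a random rank and the maximum is flooded for D rounds, after which the unique node holding the maximum rank knows it is the leader. This runs in O(D) rounds but can use many more than O(m) messages, which is the kind of gap the tradeoff results above address.

```python
import random

def flooding_leader_election(adj, diameter):
    """Synchronous flooding-based implicit leader election (illustrative only).

    adj      : dict mapping node -> list of neighbours
    diameter : known upper bound D on the network diameter
    Returns the elected leader and the total number of messages sent.
    """
    rank = {v: random.getrandbits(64) for v in adj}   # random ranks, ties unlikely
    best = dict(rank)                                 # best rank seen so far
    messages = 0
    for _ in range(diameter):
        outgoing = {v: best[v] for v in adj}          # snapshot for a synchronous round
        for v in adj:
            for u in adj[v]:
                messages += 1
                best[u] = max(best[u], outgoing[v])   # forward the current maximum
    leader = [v for v in adj if rank[v] == best[v] == max(best.values())][0]
    return leader, messages

# Usage example: a 4-cycle (diameter 2)
graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(flooding_leader_election(graph, diameter=2))
```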

Relevance:

60.00%

Publisher:

Abstract:

We consider the problem of self-healing in networks that are reconfigurable in the sense that they can change their topology during an attack. Our goal is to maintain connectivity in these networks, even in the presence of repeated adversarial node deletion, by carefully adding edges after each attack. We present a new algorithm, DASH, that provably ensures that: 1) the network stays connected even if an adversary deletes up to all nodes in the network; and 2) no node ever increases its degree by more than 2 log n, where n is the number of nodes initially in the network. DASH is fully distributed; adds new edges only among neighbors of deleted nodes; and has average latency and bandwidth costs that are at most logarithmic in n. DASH has these properties irrespective of the topology of the initial network, and is thus orthogonal and complementary to traditional topology-based approaches to defending against attack. We also prove lower bounds showing that DASH is asymptotically optimal in terms of minimizing maximum degree increase over multiple attacks. Finally, we present empirical results on power-law graphs showing that DASH performs well in practice and significantly outperforms naive algorithms in reducing maximum degree increase.
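To illustrate the self-healing setting, here is a naive baseline of the kind the abstract compares DASH against (this is not DASH itself, and the function name is mine): whenever the adversary deletes a node, the repair step adds edges only among that node's former neighbours, here by joining them into a simple cycle, so the component stays connected while each repair raises an affected node's degree by at most two.

```python
def heal_by_cycle(adj, deleted):
    """Naive self-healing step: reconnect the deleted node's neighbours in a cycle.

    adj     : dict node -> set of neighbours (modified in place)
    deleted : the node removed by the adversary
    """
    nbrs = sorted(adj.pop(deleted, set()))
    for v in nbrs:
        adj[v].discard(deleted)               # drop edges to the deleted node
    for i in range(len(nbrs)):                # join orphaned neighbours into a cycle
        a, b = nbrs[i], nbrs[(i + 1) % len(nbrs)]
        if a != b:
            adj[a].add(b)
            adj[b].add(a)

# Usage example: a star with centre 0; deleting 0 would otherwise disconnect the leaves.
g = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
heal_by_cycle(g, 0)
print(g)   # the four leaves are now connected in a cycle
```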

Relevance:

60.00%

Publisher:

Abstract:

Molecular communication is set to play an important role in the design of complex biological and chemical systems. An important class of molecular communication systems is based on the timing channel, where information is encoded in the delay of the transmitted molecule, a synchronous approach. At present, a widely used modeling assumption is perfect synchronization between the transmitter and the receiver. Unfortunately, this assumption is unlikely to hold in most practical molecular systems. To remedy this, we introduce a clock into the model, leading to the molecular timing channel with synchronization error. To quantify the behavior of this new system, we derive upper and lower bounds on the variance-constrained capacity, which we view as the step between the mean-delay and the peak-delay constrained capacity. By numerically evaluating our bounds, we obtain a key practical insight: the drift velocity of the clock links does not need to be significantly larger than the drift velocity of the information link in order to approach the variance-constrained capacity achievable with perfect synchronization.
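For background (this is the standard diffusion-with-drift timing model used in this line of work, not necessarily the paper's exact system model; the symbols are illustrative), the timing channel is usually written as an additive noise channel in the time domain:

```latex
Y = X + N,
% X : release time of the molecule (carries the information)
% N : random propagation delay; for 1-D diffusion with drift velocity v over
%     distance d and diffusion coefficient D (variance parameter 2D), the
%     first-arrival time is inverse Gaussian distributed:
N \sim \mathcal{IG}\!\left(\frac{d}{v},\; \frac{d^{2}}{2D}\right).
```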

Relevance:

60.00%

Publisher:

Abstract:

Hardware impairments in physical transceivers are known to have a deleterious effect on communication systems; however, very few contributions have investigated their impact on relaying. This paper quantifies the impact of transceiver impairments in a two-way amplify-and-forward configuration. More specifically, the effective signal-to-noise-and-distortion ratios at both transmitter nodes are obtained. These are used to deduce exact and asymptotic closed-form expressions for the outage probabilities (OPs), as well as tractable formulations for the symbol error rates (SERs). It is explicitly shown that non-zero lower bounds on the OP and SER exist in the high-power regime; this stands in contrast to the special case of ideal hardware, where the OP and SER go asymptotically to zero.
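As background on why such floors appear (this is the widely used aggregate-impairment model in generic form, not the paper's exact expressions; γ is the nominal link SNR and κ the aggregate error-vector-magnitude level of the impairments), the distortion power scales with the signal power, so the effective signal-to-noise-and-distortion ratio saturates:

```latex
\gamma_{\mathrm{eff}} = \frac{\gamma}{\kappa^{2}\,\gamma + 1}
\;\xrightarrow[\;\gamma \to \infty\;]{}\; \frac{1}{\kappa^{2}}.
```

Because the effective ratio is bounded, the outage probability and SER cannot decay to zero, which is the non-zero floor described above.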

Relevance:

60.00%

Publisher:

Abstract:

In this paper, we consider switch-and-stay combining (SSC) in two-way relay systems with two amplify-and-forward relays, one of which is activated to assist the information exchange between the two sources. The system operates in either the analog network coding (ANC) protocol, where the communication is achieved only with the help of the active relay, or the time-division broadcast (TDBC) protocol, where the direct link between the two sources can be utilized to exploit more diversity gain. In both cases, we study the outage probability and bit error rate (BER) for Rayleigh fading channels. In particular, we derive closed-form lower bounds for the outage probability and the average BER, which remain tight for different fading conditions. We also present asymptotic analysis for both the outage probability and the average BER at high signal-to-noise ratio. It is shown that SSC can achieve the full diversity order in two-way relay systems for both the ANC and TDBC protocols with proper switching thresholds. Copyright © 2014 John Wiley & Sons, Ltd.
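As an illustration of the switch-and-stay rule itself (a simplified one-way abstraction with made-up parameter values, not the paper's two-way ANC/TDBC analysis): the source keeps using the currently selected relay as long as its instantaneous SNR is above a switching threshold, and otherwise switches blindly to the other relay, without examining it first.

```python
import random

def ssc_outage(avg_snr=10.0, threshold=2.0, rate_snr=1.0, trials=100_000):
    """Monte Carlo estimate of outage probability under switch-and-stay combining.

    Two relays with i.i.d. Rayleigh fading (exponentially distributed SNR with
    mean avg_snr). If the current relay's SNR falls below the switching
    threshold, switch blindly to the other relay and use it in this slot.
    An outage occurs when the SNR of the relay actually used falls below rate_snr.
    """
    current, outages = 0, 0
    for _ in range(trials):
        snr = [random.expovariate(1.0 / avg_snr) for _ in range(2)]
        if snr[current] < threshold:     # switch-and-stay: switch without examining
            current = 1 - current
        if snr[current] < rate_snr:
            outages += 1
    return outages / trials

print(f"estimated outage probability = {ssc_outage():.4f}")
```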

Relevance:

60.00%

Publisher:

Abstract:

Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know who the elected leader is. This article focuses on studying the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. In particular, the seemingly obvious lower bounds of Ω(m) messages, where m is the number of edges in the network, and Ω(D) time, where D is the network diameter, are nontrivial to show for randomized (Monte Carlo) algorithms. (Recent results, showing that even Ω(n), where n is the number of nodes in the network, is not a lower bound on the messages in complete networks, make the above bounds somewhat less obvious.) To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms, except for the restricted case of comparison algorithms, where it was also required that nodes may not wake up spontaneously and that D and n were not known. We establish these fundamental lower bounds in this article for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (namely, algorithms that work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of the nodes' identities. To show that these bounds are tight, we present an O(m) messages algorithm. An O(D) time leader election algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.

An interesting fundamental problem is whether both upper bounds (messages and time) can be reached simultaneously in the randomized setting for all graphs. The answer is known to be negative in the deterministic setting. We answer this problem partially by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade off messages versus time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.

Relevance:

60.00%

Publisher:

Abstract:

In this paper, we study the achievable ergodic sum-rate of multiuser multiple-input multiple-output downlink systems in Rician fading channels. We first derive a lower bound on the average signal-to-leakage-and-noise ratio by using Mullen's inequality, and then use it to analyze the effect of channel mean information on the achievable ergodic sum-rate. A novel statistical-eigenmode space-division multiple-access (SE-SDMA) downlink transmission scheme is then proposed. For this scheme, we derive an exact analytical closed-form expression for the achievable ergodic rate and present tractable tight upper and lower bounds. Based on our analysis, we gain valuable insights into the effect of system parameters, such as the number of transmit antennas, the signal-to-noise ratio (SNR) and the Rician K-factor, on the system sum-rate. Results show that the sum-rate converges to a saturation value in the high-SNR regime and tends to a lower limit for the low Rician K-factor case. In addition, we compare the achievable ergodic sum-rate of SE-SDMA with that of zero-forcing beamforming with perfect channel state information at the base station. Our results reveal that the rate gap tends to zero in the high Rician K-factor regime. Finally, numerical results are presented to validate our analysis.
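For reference, the signal-to-leakage-and-noise ratio used as the design metric above is conventionally defined as follows (a standard definition; the notation is chosen here for illustration, with h_k denoting user k's channel, w_k the beamforming vector intended for user k, and σ_k² the noise power at user k):

```latex
\mathrm{SLNR}_k
= \frac{\left| \mathbf{h}_k^{H} \mathbf{w}_k \right|^{2}}
       {\sigma_k^{2} + \sum_{j \neq k} \left| \mathbf{h}_j^{H} \mathbf{w}_k \right|^{2}},
```

i.e. the power user k receives on its own beam divided by the noise plus the power that the same beam leaks to all other users.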

Relevance:

60.00%

Publisher:

Abstract:

The development of 5G enabling technologies brings new challenges to the design of power amplifiers (PAs). In particular, there is a strong demand for low-cost, nonlinear PAs, which, however, introduce nonlinear distortions. On the other hand, contemporary expensive PAs show great power efficiency in their nonlinear region. Inspired by this trade-off between nonlinear distortion and efficiency, finding an optimal operating point is highly desirable. Hence, it is first necessary to fully understand how, and by how much, the performance of multiple-input multiple-output (MIMO) systems deteriorates with PA nonlinearities. In this paper, we first reduce the ergodic achievable rate (EAR) optimization from a power allocation problem to a power control problem with only one optimization variable, i.e., the total input power. Then, we develop a closed-form expression for the EAR when this variable is fixed. Since this expression is intractable for further analysis, two simple lower bounds and one upper bound are proposed. These bounds enable us to find the best input power and approach the channel capacity. Finally, our simulation results evaluate the EAR of MIMO channels in the presence of nonlinearities. An important observation is that MIMO performance can be significantly degraded if we utilize the whole power budget.
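One common way to reason about such nonlinearities (a generic modelling device, not necessarily the exact model used in the paper) is the Bussgang decomposition, which splits the output of a memoryless PA nonlinearity g(·) driven by a Gaussian input x into a scaled copy of the input plus a distortion term uncorrelated with it:

```latex
y = g(x) = \alpha x + d, \qquad \mathbb{E}\{x^{*} d\} = 0,
\qquad \alpha = \frac{\mathbb{E}\{x^{*} g(x)\}}{\mathbb{E}\{|x|^{2}\}}.
```

Raising the input power increases both the useful term and the distortion power, which is the trade-off behind searching for the best total input power described above.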

Relevance:

60.00%

Publisher:

Abstract:

This paper investigates the achievable sum-rate of massive multiple-input multiple-output (MIMO) systems in the presence of channel aging. For the uplink, by assuming that the base station (BS) deploys maximum ratio combining (MRC) or zero-forcing (ZF) receivers, we present tight closed-form lower bounds on the achievable sum-rate for both receivers with aged channel state information (CSI). In addition, the benefit of implementing channel prediction methods on the sum-rate is examined, and closed-form sum-rate lower bounds are derived. Moreover, the impact of channel aging and channel prediction on the power scaling law is characterized. Extensions to the downlink and multi-cell scenarios are also considered. It is found that, for a system with or without channel prediction, the transmit power of each user can be scaled down at most by 1/√M (where M is the number of BS antennas), which indicates that aged CSI does not degrade the power scaling law and channel prediction does not enhance it; instead, these phenomena affect the achievable sum-rate by degrading or enhancing the effective signal-to-interference-and-noise ratio, respectively.
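The power-scaling statement above can be written compactly as follows (this is the generic form of the familiar massive MIMO scaling law under imperfect CSI; the symbols are illustrative rather than taken from the paper):

```latex
% Scale each user's transmit power with the number of BS antennas M:
p_u = \frac{E_u}{\sqrt{M}}, \qquad E_u \ \text{fixed},
% then, with MRC or ZF and imperfect (aged or predicted) CSI, the per-user rate
% approaches a non-zero limit as M grows; aging lowers and prediction raises the
% limiting effective SINR, but neither changes the 1/sqrt(M) exponent.
```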

Relevance:

60.00%

Publisher:

Abstract:

We analyze the performance of amplify-and-forward dual-hop relaying systems in the presence of in-phase and quadrature-phase imbalance (IQI) at the relay node. In particular, an exact analytical expression for, and tight lower bounds on, the outage probability are derived over independent, non-identically distributed Nakagami-m fading channels. Moreover, tractable upper and lower bounds on the ergodic capacity are presented at arbitrary signal-to-noise ratios (SNRs). Some special cases of practical interest (e.g., Rayleigh and Nakagami-0.5 fading) are also studied. An asymptotic analysis is performed in the high-SNR regime, where we observe that IQI results in a ceiling effect on the signal-to-interference-plus-noise ratio (SINR), which depends only on the level of I/Q impairments, i.e., the joint image rejection ratio. Finally, the optimal I/Q amplitude and phase mismatch parameters are provided for maximizing the SINR ceiling, thus improving the system performance. An interesting observation is that, under a fixed total phase mismatch constraint, it is optimal to have the same level of transmitter (TX) and receiver (RX) phase mismatch at the relay node, while the optimal values of the TX and RX amplitude mismatch should be inversely proportional to each other.
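For background, a common baseband parameterization of receive-side IQI (one of several equivalent conventions in the literature, not necessarily the one used in the paper; g and θ denote the amplitude and phase mismatch) writes the impaired signal as a weighted sum of the ideal signal r and its image r*:

```latex
y = K_1\, r + K_2\, r^{*}, \qquad
K_1 = \frac{1 + g\, e^{-j\theta}}{2}, \quad
K_2 = \frac{1 - g\, e^{\,j\theta}}{2}, \qquad
\mathrm{IRR} = \frac{|K_1|^{2}}{|K_2|^{2}}.
```

The image term K₂ r* scales with the signal power, which is why the SINR saturates at a ceiling governed by the image rejection ratio, as described above.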

Relevance:

60.00%

Publisher:

Abstract:

This thesis addresses several formulations and different methods for solving the Weight-constrained Minimum Spanning Tree Problem (WMST). This problem, with applications in the design of communication and telecommunication networks, is an NP-hard combinatorial optimization problem. The WMST problem consists of determining, in a network with costs and weights associated with the edges, a minimum-cost spanning tree whose total weight does not exceed a given specified limit. Several formulations for the problem are presented and compared. One of them is used to develop a cutting-plane procedure based on separation, which proved very useful for obtaining solutions to the problem. In order to strengthen the formulations presented, new classes of valid inequalities are introduced, adapted from the well-known cover inequalities, extended cover inequalities and lifted cover inequalities. The new inequalities incorporate information from two solution sets: the set of spanning trees and the knapsack set. Several heuristic separation algorithms are presented that allow the proposed valid inequalities to be used efficiently. Based on Lagrangian decomposition, simple yet efficient algorithms that can be used to compute lower and upper bounds on the optimal value of the WMST are presented and compared. Among them are two new algorithms: one based on the convexity of the Lagrangian function and another that makes use of the inclusion of valid inequalities. In order to obtain approximate solutions to the WMST problem, heuristic methods are used to find a feasible integer solution. The heuristic methods presented are based on the Feasibility Pump and Local Branching strategies. Computational results are reported for all the methods presented. The results show that the different methods are quite efficient at finding solutions to the WMST problem.
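A generic integer-programming statement of the WMST problem described above (written here with subtour-elimination constraints purely for concreteness; the thesis compares several alternative formulations), where c_e and w_e are the cost and weight of edge e, W is the weight budget, and E(S) denotes the edges with both endpoints in S:

```latex
\min \sum_{e \in E} c_e\, x_e
\quad \text{s.t.} \quad
\sum_{e \in E} x_e = n - 1, \qquad
\sum_{e \in E(S)} x_e \le |S| - 1 \;\; \forall\, S \subsetneq V,\ |S| \ge 2, \qquad
\sum_{e \in E} w_e\, x_e \le W, \qquad
x_e \in \{0,1\}.
```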