161 results for adaptive cost
Abstract:
Amplify-and-forward (AF) relay-based cooperation has been investigated in the literature given its simplicity and practicality. Two models for AF, namely fixed gain and fixed power relaying, have been extensively studied. In fixed gain relaying, the relay gain is fixed but its transmit power varies as a function of the source-relay (SR) channel gain. In fixed power relaying, the relay's instantaneous transmit power is fixed, but its gain varies. We propose a general AF cooperation model in which an average transmit power constrained relay jointly adapts its gain and transmit power as a function of the channel gains. We derive the optimal AF gain policy that minimizes the fading-averaged symbol error probability (SEP) of MPSK and present insightful and tractable lower and upper bounds for it. We then analyze the SEP of the optimal policy. Our results show that the optimal scheme is up to 39.7% and 47.5% more energy-efficient than fixed power relaying and fixed gain relaying, respectively. Further, the weaker the direct source-destination link, the greater the energy-efficiency gains.
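The fixed-power (variable-gain) baseline can be illustrated with a quick Monte Carlo sketch. The end-to-end SNR expression below is the standard variable-gain AF formula, not the paper's optimal adaptive policy, and the average link SNRs are assumed values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
gbar_sr, gbar_rd = 10.0, 10.0  # assumed average link SNRs

# Rayleigh fading: instantaneous link SNRs are exponentially distributed
g_sr = rng.exponential(gbar_sr, n)
g_rd = rng.exponential(gbar_rd, n)

# standard fixed-power (variable-gain) AF end-to-end SNR
g_eq = g_sr * g_rd / (g_sr + g_rd + 1.0)

print(f"mean end-to-end SNR: {g_eq.mean():.2f}")
```

The end-to-end SNR is always below the weaker of the two hops, which is why the relay's power/gain adaptation studied in the abstract matters.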
Abstract:
The optimal tradeoff between average service cost rate and average delay is addressed for an M/M/1 queueing model with queue-length-dependent service rates chosen from a finite set. We provide an asymptotic characterization of the minimum average delay when the average service cost rate is a small positive quantity V more than the minimum average service cost rate required for stability. We show that depending on the value of the arrival rate, the assumed service cost rate function, and the possible values of the service rates, the minimum average delay either (a) increases only to a finite value, (b) increases without bound as log(1/V), or (c) increases without bound as 1/V, as V ↓ 0. We apply the analysis to a flow-level resource allocation model for a wireless downlink. We also investigate the asymptotic tradeoff for a sequence of policies which are obtained from an approximate fluid model for the M/M/1 queue.
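The queueing model can be sketched with a small CTMC simulation of an M/M/1 queue whose service rate depends on the current queue length. The arrival rate and the rate set below are made-up values for illustration, not the paper's cost-optimal policy; Little's law converts the time-average queue length into average delay:

```python
import random

def simulate(lmbda, rates, T=50_000.0, seed=1):
    """M/M/1 with queue-length-dependent service rates: at queue length
    q >= 1 the server works at rates[min(q, len(rates) - 1)]."""
    random.seed(seed)
    t, q, area = 0.0, 0, 0.0
    while t < T:
        mu = rates[min(q, len(rates) - 1)] if q > 0 else 0.0
        total = lmbda + mu
        dt = random.expovariate(total)
        area += q * dt
        t += dt
        if random.random() < lmbda / total:
            q += 1  # arrival
        else:
            q -= 1  # service completion
    return area / t  # time-average queue length

avg_q = simulate(0.5, [0.0, 0.6, 1.0])  # slow service at q = 1, faster for q >= 2
print(avg_q, avg_q / 0.5)  # Little's law: average delay = avg_q / lambda
```

For these rates the birth-death stationary distribution gives a time-average queue length of about 1.25, which the simulation approaches.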
Abstract:
Many studies investigating the effect of human social connectivity structures (networks) and human behavioral adaptations on the spread of infectious diseases have assumed either a static connectivity structure or a network which adapts itself in response to the epidemic (adaptive networks). However, human social connections are inherently dynamic or time varying. Furthermore, the spread of many infectious diseases occurs on a time scale comparable to the time scale of the evolving network structure. Here we aim to quantify the effect of human behavioral adaptations on the spread of asymptomatic infectious diseases on time-varying networks. We perform a full stochastic analysis using a continuous time Markov chain approach for calculating the outbreak probability, mean epidemic duration, epidemic reemergence probability, etc. Additionally, we use mean-field theory for calculating epidemic thresholds. Theoretical predictions are verified using extensive simulations. Our studies have uncovered the existence of an ``adaptive threshold,'' i.e., when the ratio of susceptibility (or infectivity) rate to recovery rate is below the threshold value, adaptive behavior can prevent the epidemic. However, if it is above the threshold, no amount of behavioral adaptations can prevent the epidemic. Our analyses suggest that the interaction patterns of the infected population play a major role in sustaining the epidemic. Our results have implications for epidemic containment policies, as awareness campaigns and human behavioral responses can be effective only if the interaction levels of the infected populace are kept in check.
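As a baseline for the outbreak-probability calculation, here is a sketch of the embedded jump chain of a well-mixed CTMC SIS model without any behavioral adaptation (all parameters are illustrative). For small infected counts the outbreak probability approaches the branching-process survival value 1 - gamma/beta:

```python
import random

def outbreak_prob(beta, gamma, N=200, i0=1, trials=2000, thresh=50, seed=2):
    """Fraction of CTMC SIS runs (well-mixed, no adaptation) in which the
    infected count reaches `thresh` before the infection dies out."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        i = i0
        while 0 < i < thresh:
            inf_rate = beta * i * (N - i) / N  # new infections
            rec_rate = gamma * i               # recoveries
            if random.random() < inf_rate / (inf_rate + rec_rate):
                i += 1
            else:
                i -= 1
        hits += i >= thresh
    return hits / trials

print(outbreak_prob(beta=2.0, gamma=1.0))  # roughly 1 - gamma/beta = 0.5
```

The adaptive rewiring studied in the abstract effectively lowers the contact rate of infected nodes, which is what shifts this threshold.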
Abstract:
An exciting application of crowdsourcing is to use social networks in complex task execution. In this paper, we address the problem of a planner who needs to incentivize agents within a network in order to seek their help in executing an atomic task as well as in recruiting other agents to execute the task. We study this mechanism design problem under two natural resource optimization settings: (1) cost critical tasks, where the planner's goal is to minimize the total cost, and (2) time critical tasks, where the goal is to minimize the total time elapsed before the task is executed. We identify a set of desirable properties that should ideally be satisfied by a crowdsourcing mechanism. In particular, sybil-proofness and collapse-proofness are two complementary properties in our desiderata. We prove that no mechanism can satisfy all the desirable properties simultaneously. This leads us naturally to explore approximate versions of the critical properties. We focus our attention on approximate sybil-proofness, and our exploration leads to a parametrized family of payment mechanisms which satisfy collapse-proofness. We characterize the approximate versions of the desirable properties in the cost critical and time critical domains.
Abstract:
We consider the problem of Probably Approximately Correct (PAC) learning of a binary classifier from noisy labeled examples acquired from multiple annotators (each characterized by a respective classification noise rate). First, we consider the complete information scenario, where the learner knows the noise rates of all the annotators. For this scenario, we derive a sample complexity bound for the Minimum Disagreement Algorithm (MDA) on the number of labeled examples to be obtained from each annotator. Next, we consider the incomplete information scenario, where each annotator is strategic and holds the respective noise rate as private information. For this scenario, we design a cost optimal procurement auction mechanism along the lines of Myerson's optimal auction design framework in a non-trivial manner. This mechanism satisfies the incentive compatibility property, thereby facilitating the learner to elicit the true noise rates of all the annotators.
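The Minimum Disagreement step can be sketched on a toy one-dimensional problem. The threshold classifiers, the 0.2 noise rate, and the sample size below are illustrative assumptions, not the paper's setting:

```python
import random

def mda(hypotheses, samples):
    """Minimum Disagreement: return the hypothesis with the fewest
    label disagreements on the (possibly noisy) sample."""
    return min(hypotheses, key=lambda h: sum(h(x) != y for x, y in samples))

random.seed(3)
# toy 1-D threshold classifiers; true threshold is 0.5
hyps = [lambda x, t=t / 10: x >= t for t in range(11)]
data = []
for _ in range(500):
    x = random.random()
    y = (x >= 0.5) != (random.random() < 0.2)  # label flipped w.p. 0.2
    data.append((x, y))

best = mda(hyps, data)
print(best(0.7), best(0.3))
```

With enough samples, MDA recovers a threshold close to the true one despite the annotator noise, which is what the sample complexity bound quantifies.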
Abstract:
We propose a novel space-time descriptor for region-based tracking which is very concise and efficient. The regions, represented by covariance matrices within a temporal fragment, are used to estimate this space-time descriptor, which we call the Eigenprofiles (EP). The EP so obtained is used in estimating the covariance matrix of features over spatio-temporal fragments. The second-order statistics of spatio-temporal fragments form our target model, which can be adapted to variations across the video. The model being concise also allows the use of multiple spatially overlapping fragments to represent the target. We demonstrate good tracking results on very challenging datasets shot under insufficient illumination conditions.
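The region-covariance building block that the descriptor rests on can be sketched as follows; the five-feature set (position, intensity, gradient magnitudes) is a common illustrative choice, not necessarily the paper's:

```python
import numpy as np

def region_covariance(patch):
    """Covariance descriptor of a gray patch using per-pixel
    features (x, y, I, |Ix|, |Iy|)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(patch.astype(float))
    feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                      np.abs(gx).ravel(), np.abs(gy).ravel()], axis=0)
    return np.cov(feats)  # 5x5 symmetric PSD matrix

rng = np.random.default_rng(0)
C = region_covariance(rng.random((16, 16)))
print(C.shape)
```

Stacking such covariance matrices over a temporal fragment is the input from which the abstract's Eigenprofiles are estimated.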
Abstract:
The analysis of modulation schemes for the physical-layer network-coded two-way relaying scenario is presented, which employs two phases: the multiple access (MA) phase and the broadcast (BC) phase. Depending on the signal set used at the end nodes, the minimum distance of the effective constellation seen at the relay becomes zero for a finite number of channel fade states, referred to as the singular fade states. The singular fade states fall into two classes: (i) those caused by channel outage, whose harmful effect cannot be mitigated by adaptive network coding, called the non-removable singular fade states, and (ii) those which occur due to the choice of the signal set, whose harmful effects can be removed, called the removable singular fade states. In this paper, we derive an upper bound on the average end-to-end Symbol Error Rate (SER), with and without adaptive network coding at the relay, for a Rician fading scenario. It is shown that without adaptive network coding, at high Signal to Noise Ratio (SNR), the contribution to the end-to-end SER comes from the following error events, which fall off as SNR^-1: the error events associated with the removable and non-removable singular fade states and the error event during the BC phase. In contrast, for the adaptive network coding scheme, the error events associated with the removable singular fade states fall off as SNR^-2, thereby providing a coding gain over the case when adaptive network coding is not used. Also, it is shown that for a Rician fading channel, the error during the MA phase dominates over the error during the BC phase. Hence, adaptive network coding, which improves the performance during the MA phase, provides more gain in a Rician fading scenario than in a Rayleigh fading scenario. Furthermore, it is shown that for large Rician factors, among those removable singular fade states which have the same magnitude, those with the least absolute value of the phase angle alone contribute dominantly to the end-to-end SER, and it is sufficient to remove the effect of only such singular fade states.
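The singular fade states are easy to enumerate for a small signal set: a fade coefficient gamma is singular whenever two distinct symbol pairs with differing x_b collide in the effective MA-phase constellation x_a + gamma*x_b. The sketch below does this for BPSK, where the set works out to {0, +1, -1}, with gamma = 0 corresponding to channel outage (non-removable):

```python
def singular_fade_states(S):
    """All gamma with x_a + gamma*x_b == x_a' + gamma*x_b' for some
    symbol pairs with x_b != x_b' (zero minimum distance at the relay)."""
    states = set()
    for xa in S:
        for xa2 in S:
            for xb in S:
                for xb2 in S:
                    if xb != xb2:
                        g = -(xa - xa2) / (xb - xb2)
                        states.add(complex(round(g.real, 6), round(g.imag, 6)))
    return states

bpsk = [1 + 0j, -1 + 0j]
print(sorted(singular_fade_states(bpsk), key=lambda z: z.real))
```

The same brute-force enumeration applies to larger PSK signal sets, where the removable states form the richer geometry analyzed in the abstract.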
Abstract:
Compressive Sensing theory combines signal sampling and compression for sparse signals, resulting in a reduction in the sampling rate and computational complexity of the measurement system. In recent years, many recovery algorithms have been proposed to reconstruct the signal efficiently. Look Ahead OMP (LAOMP) is a recently proposed method that uses a look-ahead strategy and performs significantly better than other greedy methods. In this paper, we propose a modification to the LAOMP algorithm that chooses the look-ahead parameter L adaptively, thus reducing the complexity of the algorithm without compromising performance. The performance of the algorithm is evaluated through Monte Carlo simulations.
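For context, plain OMP is sketched below; LAOMP augments the atom-selection step with a look-ahead, which is omitted here. The dictionary sizes and sparsity level are illustrative:

```python
import numpy as np

def omp(A, y, k):
    """Plain Orthogonal Matching Pursuit: greedily pick the atom most
    correlated with the residual, then re-fit on the chosen support."""
    r, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))
        if j not in support:
            support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
A /= np.linalg.norm(A, axis=0)  # unit-norm dictionary atoms
x_true = np.zeros(100)
x_true[[5, 27, 63]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = omp(A, y, 3)
print(np.linalg.norm(y - A @ x_hat))  # near zero when the support is found
```

LAOMP replaces the argmax selection with a trial of several candidate atoms, keeping the one that leads to the smallest residual a few iterations ahead; the adaptive-L modification in the abstract tunes how many candidates are tried.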
Abstract:
CuIn1-xAlxSe2 (CIASe) thin films were grown by a simple sol-gel route followed by annealing under vacuum. Parameters related to the spin-orbit (Delta(SO)) and crystal field (Delta(CF)) splittings were determined using a quasi-cubic model. Highly oriented (002) aluminum-doped (2%) ZnO thin films, 100 nm thick, were co-sputtered for CuIn1-xAlxSe2/AZnO based solar cells. The barrier height and ideality factor varied from 0.63 eV to 0.51 eV and from 1.3186 to 2.095 in the dark and under 1.38-sun AM 1.5 solar illumination, respectively. Current-voltage characteristics measured at 300 K were confined to a triangle, exhibiting three limiting conduction mechanisms: Ohm's law, the trap-filled limit curve and SCLC, with 0.2 V being the cross-over voltage for a quadratic transition from Ohm's to Child's law. Visible photodetection was demonstrated with a CIASe/AZO photodiode configuration. The photocurrent was enhanced by one order of magnitude, from 3 x 10(-3) A in the dark at 1 V to 3 x 10(-2) A under 1.38-sun illumination. The optimized photodiode exhibits an external quantum efficiency ranging from over 32% down to 10% between 350 and 1100 nm at high-intensity (17.99 mW cm(-2)) solar illumination. A high responsivity R-lambda similar to 920 A W-1, sensitivity S similar to 9.0 and specific detectivity D* similar to 3 x 10(14) Jones make CIASe a potential absorber for enhancing forthcoming technological applications of photodetection.
Abstract:
As petrol prices rise in developing countries over the coming decades, low-cost electric cars will become increasingly popular in the developing world. One of the main deciding factors for the success of electric cars, especially in the developing world, will be their cost. This paper presents a cost-effective method to control the speed of a low-cost brushed D.C. motor by combining an IC 555 timer with a high boost converter. A high boost converter is used because electric cars need high voltage and current, which it can provide even from a low battery supply.
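The boost stage's role can be seen from the ideal continuous-conduction-mode relation V_out = V_in / (1 - D), where D is the duty cycle supplied by the 555 timer's PWM output; the component values below are not taken from the paper:

```python
def boost_vout(v_in, duty):
    """Ideal boost-converter output voltage for duty cycle D
    (lossless, continuous conduction mode)."""
    return v_in / (1.0 - duty)

print(boost_vout(12.0, 0.75))  # 48.0 V from a 12 V battery
```

Varying D via the 555 timer thus lets a low battery voltage be stepped up over a wide range, which is what makes the scheme usable for motor speed control.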
Abstract:
An important question in kernel regression is one of estimating the order and bandwidth parameters from available noisy data. We propose to solve the problem within a risk estimation framework. Considering an independent and identically distributed (i.i.d.) Gaussian observations model, we use Stein's unbiased risk estimator (SURE) to estimate a weighted mean-square error (MSE) risk, and optimize it with respect to the order and bandwidth parameters. The two parameters are thus spatially adapted in such a manner that noise smoothing and fine structure preservation are simultaneously achieved. On the application side, we consider the problem of image restoration from uniform/non-uniform data, and show that the SURE approach to spatially adaptive kernel regression results in better quality estimation compared with its spatially non-adaptive counterparts. The denoising results obtained are comparable to those obtained using other state-of-the-art techniques, and in some scenarios, superior.
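For a linear smoother x_hat = H y under i.i.d. Gaussian noise, SURE takes the closed form ||y - Hy||^2 + 2*sigma^2*tr(H) - n*sigma^2, which can be minimized over the bandwidth. The sketch below does this for a simple fixed-order, spatially non-adaptive Gaussian kernel smoother; the test signal and noise level are made-up values:

```python
import numpy as np

def sure_bandwidth(y, sigma, widths):
    """Pick a Gaussian-kernel smoother bandwidth by minimizing SURE
    for the linear estimator x_hat = H y (i.i.d. Gaussian noise)."""
    n = len(y)
    t = np.arange(n)
    best = None
    for h in widths:
        W = np.exp(-0.5 * ((t[:, None] - t[None, :]) / h) ** 2)
        H = W / W.sum(axis=1, keepdims=True)  # row-normalized weights
        sure = np.sum((y - H @ y) ** 2) + 2 * sigma**2 * np.trace(H) - n * sigma**2
        if best is None or sure < best[0]:
            best = (sure, h)
    return best[1]

rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * np.arange(256) / 64)   # smooth test signal
y = x + 0.3 * rng.standard_normal(256)
h_best = sure_bandwidth(y, 0.3, [0.5, 1, 2, 4, 8, 16])
print(h_best)
```

SURE rejects both the no-smoothing and over-smoothing extremes; the spatially adaptive version in the abstract additionally lets the bandwidth (and order) vary per location.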
Abstract:
Microorganisms exhibit varied regulatory strategies such as direct regulation, symmetric anticipatory regulation, asymmetric anticipatory regulation, etc. Current mathematical modeling frameworks for the growth of microorganisms either do not incorporate regulation or assume that the microorganisms utilize the direct regulation strategy. In the present study, we extend the cybernetic modeling framework to account for asymmetric anticipatory regulation strategy. The extended model accurately captures various experimental observations. We use the developed model to explore the fitness advantage provided by the asymmetric anticipatory regulation strategy and observe that the optimal extent of asymmetric regulation depends on the selective pressure that the microorganisms experience. We also explore the importance of timing the response in anticipatory regulation and find that there is an optimal time, dependent on the extent of asymmetric regulation, at which microorganisms should respond anticipatorily to maximize their fitness. We then discuss the advantages offered by the cybernetic modeling framework over other modeling frameworks in modeling the asymmetric anticipatory regulation strategy.
Abstract:
Elasticity in cloud systems provides the flexibility to acquire and relinquish computing resources on demand. However, in current virtualized systems resource allocation is mostly static. Resources are allocated during VM instantiation, and any change in workload leading to a significant increase or decrease in resources is handled by VM migration. Hence, cloud users tend to characterize their workloads at a coarse-grained level, which potentially leads to under-utilized VM resources or an under-performing application. A more flexible and adaptive resource allocation mechanism would benefit variable workloads, such as those characterized by web servers. In this paper, we present an elastic resources framework for the IaaS cloud layer that addresses this need. The framework provides an application workload forecasting engine that predicts the expected demand at run-time, which is input to the resource manager to modulate resource allocation based on the predicted demand. Depending on the prediction errors, resources can be over-allocated or under-allocated compared to the actual demand made by the application. Over-allocation leads to unused resources, and under-allocation can cause under-performance. To strike a good trade-off between over-allocation and under-performance, we derive an excess cost model. In this model, excess resources allocated are captured as an over-allocation cost, and under-allocation is captured as a penalty cost for violating the application's service level agreement (SLA). A confidence interval for the predicted workload is used to minimize this excess cost with minimal effect on SLA violations. An example case study for an academic institute's web server workload is presented. Using the confidence interval to minimize excess cost, we achieve a significant reduction in resource allocation requirements while restricting application SLA violations to below 2-3%.
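The excess-cost idea can be sketched numerically: over-allocation is billed per unused unit, each SLA violation carries a fixed penalty, and widening the forecast's confidence interval (the z multiplier below) trades one against the other. All constants and the synthetic demand/forecast traces are assumptions for illustration:

```python
import numpy as np

def excess_cost(alloc, demand, c_over=1.0, c_sla=20.0):
    """Illustrative excess cost: unused capacity at c_over per unit,
    plus a fixed c_sla penalty whenever demand exceeds allocation."""
    over = np.maximum(alloc - demand, 0).sum() * c_over
    viol = (demand > alloc).sum() * c_sla
    return over + viol

rng = np.random.default_rng(0)
demand = 100 + 10 * rng.standard_normal(1000)
pred = demand + 5 * rng.standard_normal(1000)  # imperfect forecast, error std 5
for z in [0.0, 1.0, 2.0]:                      # widen the confidence interval
    print(z, excess_cost(pred + z * 5, demand))
```

Increasing z buys fewer SLA violations at the price of more unused resources; minimizing the combined cost over z is the trade-off the framework exploits.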
Abstract:
Zinc oxide nanorods (ZnO NRs) have been synthesized on flexible substrates by adopting a novel three-step process. The as-grown ZnO NRs are vertically aligned and have excellent chemical stoichiometry between their constituents. Transmission electron microscopy studies show that these NR structures are single crystalline and grown along the <001> direction. Optical studies show that these nanostructures have a direct optical band gap of about 3.34 eV. Therefore, the proposed methodology for the synthesis of vertically aligned NRs on flexible sheets opens a new route in the development of low-cost flexible devices.
Abstract:
In this article, we prove convergence of weakly penalized adaptive discontinuous Galerkin methods. Unlike other works, we derive the contraction property for various discontinuous Galerkin methods assuming only that the stabilizing parameters are large enough to stabilize the method. A central idea in the analysis is to construct an auxiliary solution from the discontinuous Galerkin solution by simple post-processing. Based on the auxiliary solution, we define an adaptive algorithm that drives the convergence of the adaptive discontinuous Galerkin methods.