216 results for maximum ratio combining
Abstract:
Consider L independent and identically distributed exponential random variables (r.v.s) X_1, X_2, ..., X_L and positive scalars b_1, b_2, ..., b_L. In this letter, we present the probability density function (pdf), the cumulative distribution function, and the Laplace transform of the pdf of the composite r.v. Z = (Σ_{j=1}^L X_j)^2 / (Σ_{j=1}^L b_j X_j). We show that the r.v. Z appears in various communication systems, such as i) maximal ratio combining of signals received over multiple channels with mismatched noise variances, ii) M-ary phase-shift keying with spatial diversity and imperfect channel estimation, and iii) coded multi-carrier code-division multiple access reception affected by an unknown narrow-band interference; the statistics of Z derived here enable a closed-form performance analysis of such systems.
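As a quick numerical illustration of the composite r.v. (this is a Monte Carlo sketch, not the closed-form statistics derived in the letter; the unit-mean normalization of the X_j is our assumption), Z can be sampled directly from its definition:

```python
import numpy as np

def sample_Z(L, b, n_samples=100_000, rng=None):
    """Monte Carlo samples of Z = (sum_j X_j)^2 / (sum_j b_j X_j),
    with X_1, ..., X_L i.i.d. unit-mean exponential r.v.s."""
    rng = np.random.default_rng(rng)
    X = rng.exponential(1.0, size=(n_samples, L))   # i.i.d. Exp(1) draws
    S = X.sum(axis=1)                               # sum_j X_j per sample
    return S**2 / (X @ np.asarray(b, dtype=float))  # definition of Z

# Sanity check: when all b_j = 1, Z collapses to sum_j X_j ~ Gamma(L, 1),
# whose mean is L.
Z = sample_Z(L=4, b=[1, 1, 1, 1], rng=0)
```

An empirical pdf or CDF of Z then follows from a histogram of the samples, which is a convenient cross-check against any closed-form expression.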
Abstract:
We address the problem of allocating a single divisible good to a number of agents. The agents have concave valuation functions parameterized by a scalar type, and report only that type. The goal is to find allocatively efficient, strategy-proof, nearly budget-balanced mechanisms within the Groves class. Near budget balance is attained by returning as much of the received payments as possible to the agents as rebates. Two performance criteria are of interest within the class of linear rebate functions: the maximum ratio of budget surplus to efficient surplus, and the expected budget surplus; the goal is to minimize them. Assuming that the valuation functions are known, we show that both problems reduce to convex optimization problems whose convex constraint sets are characterized by a continuum of half-plane constraints parameterized by the vector of reported types. We then propose a randomized relaxation of these problems by sampling constraints; the relaxed problem is a linear program (LP). We identify the number of samples needed for "near-feasibility" of the relaxed constraint set, and, under some conditions on the valuation function, show that the value of the approximate LP is close to the optimal value. Simulation results show significant improvements of the proposed method over the Vickrey-Clarke-Groves (VCG) mechanism without rebates. In the special case of indivisible goods, the mechanisms in this paper fall back to those proposed by Moulin, by Guo and Conitzer, and by Gujar and Narahari, without any need for randomization. Extensions of the proposed mechanisms to situations where the valuation functions are not known to the central planner are also discussed.
Note to Practitioners: Our results will be useful in resource allocation problems that involve gathering information privately held by strategic users, where the utilities are any concave function of the allocations and the resource planner is interested not in maximizing revenue but in efficient sharing of the resource. Such situations arise quite often in fair sharing of internet resources, fair sharing of funds across departments within the same parent organization, auctioning of public goods, etc. We study methods to achieve near budget balance by first collecting payments according to the celebrated VCG mechanism, and then returning as much of the collected money as possible as rebates. Our focus on linear rebate functions allows for easy implementation. The resulting convex optimization problem is solved via relaxation to a randomized linear programming problem, for which several efficient solvers exist. This relaxation is enabled by constraint sampling. Keeping practitioners in mind, we identify the number of samples that assures a desired level of "near-feasibility" with the desired confidence level. Our methodology will occasionally require a subsidy from outside the system; however, we demonstrate via simulation that, if the mechanism is repeated several times over independent instances, past surplus can support the subsidy requirements. We also extend our results to situations where the strategic users' utility functions are not known to the allocating entity, a common situation in the context of internet users and other problems.
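The constraint-sampling idea can be illustrated on a toy semi-infinite program (a generic sketch only, not the paper's mechanism-design LP; the objective and constraint family below are invented for illustration): minimize x subject to x ≥ sin(t) for every t in [0, π], whose exact optimum is x* = 1. Sampling the continuum of constraints leaves a finite problem whose optimal value is simply the largest sampled bound, and it approaches x* as the sample size grows:

```python
import math
import random

def sampled_optimum(n_samples, rng):
    """Relaxed problem: minimize x s.t. x >= sin(t) for n sampled t.
    With only lower-bound constraints on a scalar, the LP optimum is
    the maximum sampled constraint bound."""
    ts = [rng.uniform(0.0, math.pi) for _ in range(n_samples)]
    return max(math.sin(t) for t in ts)

rng = random.Random(0)
v = sampled_optimum(2000, rng)   # close to, and never above, x* = 1
```

The gap 1 − v is the price of dropping all but finitely many constraints; sample-complexity bounds of the kind the abstract describes quantify how many samples make the violated constraint mass small with high confidence.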
Abstract:
This paper compares and analyzes the performance of distributed cophasing techniques for uplink transmission over wireless sensor networks. We focus on a time-division duplexing approach and exploit channel reciprocity to reduce the channel feedback requirement. We consider periodic broadcast of known pilot symbols by the fusion center (FC), and maximum likelihood estimation of the channel by the sensor nodes for the subsequent uplink cophasing transmission. We assume carrier and phase synchronization across the participating nodes for analytical tractability. We study binary signaling over frequency-flat fading channels, and quantify system performance measures such as the expected gain in received signal-to-noise ratio (SNR) and the average probability of error at the FC, as functions of the number of sensor nodes and the pilot overhead. Our results show that a modest amount of accumulated pilot SNR is sufficient to realize a large fraction of the maximum possible beamforming gain. We also investigate the performance gains obtained by censoring transmission at the sensors based on the estimated channel state, and the benefits of using maximum ratio transmission (MRT) and truncated channel inversion (TCI) at the sensors in addition to cophasing transmission. Simulation results corroborate the theoretical expressions and show the relative performance benefits offered by the various schemes.
Abstract:
The maximum independent set problem is NP-complete even when restricted to planar graphs, cubic planar graphs, or triangle-free graphs; the problem of finding an absolute approximation also remains NP-complete. Various polynomial-time approximation algorithms have been proposed for planar graphs that guarantee a fixed worst-case ratio between the independent set size obtained and the maximum independent set size. In this paper we present a simple and efficient O(|V|) algorithm that guarantees a ratio of 1/2 for planar triangle-free graphs. The algorithm differs completely from other approaches in that it collects groups of independent vertices at a time. Certain bounds we obtain in this paper relate to some interesting questions in the theory of extremal graphs.
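For contrast with the group-collecting approach described above, the most common baseline is a vertex-at-a-time greedy heuristic. The sketch below is NOT the paper's algorithm (which achieves the 1/2 ratio on planar triangle-free graphs); it is only a generic illustration of building an independent set, with a made-up adjacency-dict graph representation:

```python
def greedy_independent_set(adj):
    """Generic greedy heuristic: repeatedly pick a minimum-degree vertex,
    add it to the independent set, and delete it and its neighbours.
    `adj` maps each vertex to the set of its neighbours."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    independent = set()
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))   # minimum-degree vertex
        independent.add(v)
        removed = adj.pop(v) | {v}                # v and its neighbours
        for u in removed - {v}:
            adj.pop(u, None)
        for u in adj:
            adj[u] -= removed                     # drop deleted vertices
    return independent

# 4-cycle a-b-c-d: a maximum independent set has size 2.
iset = greedy_independent_set({'a': {'b', 'd'}, 'b': {'a', 'c'},
                               'c': {'b', 'd'}, 'd': {'a', 'c'}})
```

On triangle-free graphs such greedy schemes already give nontrivial guarantees, which is part of what makes the class a natural target for the stronger 1/2 ratio.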
Abstract:
Using bender and extender element tests, together with measurements of the travel times of shear (S) and primary (P) waves, the variation of Poisson's ratio (ν) was determined for dry sands with respect to changes in relative density and effective confining pressure (σ_3). The tests were performed for three different ranges of particle sizes. The magnitude of Poisson's ratio decreases invariably with an increase in both the relative density and the effective confining pressure. The effect of the confining pressure on Poisson's ratio was found to be relatively more significant for fine-grained sand than for coarse-grained sand. For a given material, at a particular value of σ_3, the magnitude of Poisson's ratio decreases almost linearly with an increase in the maximum shear modulus (G_max). Two correlations widely used in the literature, relating G_max, void ratio (e), and effective confining pressure (σ_3) for angular granular materials, were found to compare reasonably well with the present experimental data for the fine- and medium-grained sands. For the coarse-grained sand, however, these correlations tend to overestimate G_max.
Abstract:
This paper presents an experimental study on damage assessment of reinforced concrete (RC) beams subjected to incremental cyclic loading. Acoustic emissions (AE) were recorded during testing, and the released AE was analyzed using the relaxation ratio, load ratio, and calm ratio parameters. The digital image correlation (DIC) technique, with tracking by an available MATLAB program, was used to measure displacements and surface strains in the concrete. Earlier researchers classified damage in RC beams using the Kaiser effect and crack mouth opening displacement, and proposed a standard. In practical situations, however, multiple cracks occur in reinforced concrete beams. In the present study, damage in RC beams was assessed according to the limit states specified by the code of practice IS-456:2000 and the AE technique. Based on the load ratio and calm ratio, the RC beams were observed to be heavily damaged when the deflection reached approximately 85% of the maximum allowable deflection. The combination of AE and DIC techniques has the potential to indicate the state of damage in RC structures.
Abstract:
The ^1H NMR spectroscopic discrimination of enantiomers in the solution state, and the measurement of enantiomeric composition, is often hindered by either very small chemical shift differences between the discriminated peaks or severe overlap with transitions from other chemically non-equivalent protons. In addition, the use of chiral auxiliaries such as crown ethers and chiral lanthanide shift reagents may cause enormous line broadening, or give little discrimination beyond a certain crown ether to substrate ratio, hampering the analysis. To circumvent such problems, we propose utilizing the difference in the sums of all the chemical shifts of a scalar-coupled spin system. Excitation and detection of the appropriate highest-quantum coherence yields a measurable frequency difference between two transitions, one pertaining to each enantiomer, in the maximum-quantum dimension, permitting their discrimination; the F_2 cross-section at each of these frequencies yields an enantiopure spectrum. The advantage of the proposed method is demonstrated on several chiral compounds for which conventional one-dimensional ^1H NMR spectra fail to differentiate the enantiomers.
Abstract:
This paper considers a firm real-time M/M/1 system in which jobs have stochastic deadlines until the end of service. A method is presented for approximately specifying the loss ratio of the earliest-deadline-first scheduling policy with exit control through the early-discarding technique. The approximation uses the arrival rate and the mean relative deadline, normalized with respect to the mean service time, for exponential and uniform distributions of relative deadlines. Simulations show that the maximum approximation error is less than 4% and 2% for the two distributions, respectively, over a wide range of arrival rates and mean relative deadlines. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
Learning from Positive and Unlabelled examples (LPU) has emerged as an important problem in data mining and information retrieval applications. Existing techniques are not ideally suited to real-world scenarios where the datasets are linearly inseparable: they either build linear classifiers or use non-linear classifiers that fail to achieve the desired performance. In this work, we extend maximum margin clustering ideas and present an iterative procedure to design a non-linear classifier for LPU. In particular, we build a least-squares support vector classifier, which is suitable for this problem owing to the symmetry of its loss function. Further, we present techniques for appropriately initializing the labels of the unlabelled examples and for enforcing the ratio of positive to negative examples while obtaining these labels. Experiments on real-world datasets demonstrate that the non-linear classifier designed using the proposed approach gives significantly better generalization performance than existing approaches for LPU.
Abstract:
We present observations on the diurnal and seasonal variation of the mixing ratio and δ^13C of air CO2 from an urban station, Bangalore (BLR), India, monitored between October 2008 and December 2011. On the diurnal scale, a higher mixing ratio with depleted δ^13C of air CO2 was found for samples collected during the early morning compared to samples collected during the late afternoon. On the seasonal scale, the mixing ratio was found to be higher during the dry summer months (April-May) and lower during the southwest monsoon months (June-July). The maximum enrichment in δ^13C of air CO2 (-8.04 ± 0.02‰) was seen in October; δ^13C then depleted, reaching maximum depletion (-9.31 ± 0.07‰) during the dry summer months. Immediately afterwards, an increasing trend in δ^13C was observed, coinciding with the advance of the southwest monsoon, and maximum enrichment was seen again in October. Although a similar seasonal pattern was observed for the three consecutive years, the dry summer months of 2011 showed a distinctly lower amplitude in both the mixing ratio and δ^13C of air CO2 compared to the dry summer months of 2009 and 2010. This was attributed to reduced biomass burning and increased productivity associated with a prominent La Niña condition. Compared with observations from the nearest coastal and open-ocean stations, Cabo de Rama (CRI) and Seychelles (SEY), BLR, being located within an urban region, showed a higher amplitude of seasonal variation. The average δ^13C value of the end-member source CO2 was identified from both the diurnal- and seasonal-scale variation. The source δ^13C value determined from the diurnal variation (-24.9 ± 3‰) was found to differ drastically from the value identified from the seasonal-scale variation (-14.6 ± 0.7‰).
The source CO2 identified from the diurnal variation incorporated both early-morning and late-afternoon samples, whereas that identified from the seasonal variation included only afternoon samples. It is thus evident from the study that sampling time is an important factor when characterizing the composition of end-member source CO2 for a particular station. The difference between the source δ^13C values obtained from the diurnal and seasonal variation might be due to a possible contribution from the cement industry, alongside fossil fuel and biomass burning as the predominant sources for the station, together with the differing meteorological conditions that prevailed.
Abstract:
The main objective of this paper is to develop a new method to estimate the maximum magnitude (M_max) considering the regional rupture character. The proposed method is explained in detail and examined for both an intraplate and an active region. Seismotectonic data were collected for both regions, and seismic study area (SSA) maps were generated for radii of 150, 300, and 500 km. The regional rupture character was established by considering the percentage fault rupture (PFR), which is the ratio of the subsurface rupture length (RLD) to the total fault length (TFL). PFR is used to arrive at RLD, which is in turn used to estimate the maximum magnitude for each seismic source. The maximum magnitude for both regions was estimated and compared with existing methods for determining M_max. The proposed method gives similar M_max values irrespective of SSA radius and seismicity. Further, seismicity parameters such as the magnitude of completeness (M_c), the "a" and "b" parameters, and the maximum observed magnitude (M_max^obs) were determined for each SSA and used to estimate M_max with all the existing methods. It is observed from the study that existing deterministic and probabilistic M_max estimation methods are sensitive to the SSA radius, M_c, the a and b parameters, and M_max^obs, whereas M_max determined from the proposed method is a function of the rupture character instead of the seismicity parameters. It was also observed that the intraplate region has a lower PFR than the active seismic region.
Abstract:
The distribution of black leaf nodes at each level of a linear quadtree is of significant interest for estimating the time and space complexities of linear-quadtree-based algorithms. The maximum number of black nodes of a given level that can fit in a square grid of size 2^n × 2^n can readily be estimated from the ratio of areas. We show that the actual maximum number of nodes at a level is much less than this area-ratio estimate, because the number of nodes possible at a level k, 0 ≤ k ≤ n − 1, must account for the area occupied by the nodes actually present at levels k + 1, k + 2, ..., n − 1.
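The naive area-ratio estimate mentioned above is straightforward: a level-k node covers a 2^k × 2^k block, so at most (2^n / 2^k)^2 = 4^(n−k) of them fit in the grid. The sketch below computes only this naive bound; the paper's tighter maximum, which discounts the area taken by higher-level nodes, is not reproduced here:

```python
def area_ratio_bound(n, k):
    """Naive upper bound on the number of level-k black nodes in a
    2^n x 2^n grid: total grid area divided by the area of one
    level-k node (a 2^k x 2^k block).  Equals 4**(n - k)."""
    assert 0 <= k <= n
    return (2**n // 2**k) ** 2

bound = area_ratio_bound(n=3, k=1)   # 16 level-1 nodes fit by area alone
```

The abstract's point is that this bound is loose whenever nodes at levels k+1, ..., n−1 are also present, since their blocks consume grid area unavailable to level-k nodes.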
Abstract:
Photometric and spectral evolution of the Type Ic supernova SN 2007ru until around 210 days after maximum is presented. The spectra show broad spectral features due to very high expansion velocities, normally seen in hypernovae. The photospheric velocity is higher than that of other normal Type Ic supernovae (SNe Ic); it is lower than that of SN 1998bw at ~8 days after the explosion, but comparable at later epochs. The light curve (LC) evolution of SN 2007ru indicates a fast rise time of 8 ± 3 days to B-band maximum and a post-maximum decline more rapid than that of other broad-line SNe Ic. With an absolute V magnitude of -19.06, SN 2007ru is comparable in brightness to SN 1998bw and lies at the brighter end of the observed SNe Ic. The ejected mass of ^56Ni is estimated to be ~0.4 M_sun. The fast rise and decline of the LC and the high expansion velocity suggest that SN 2007ru is an explosion with a high kinetic energy to ejecta mass ratio (E_K/M_ej). This adds to the diversity of SNe Ic. Although the early-phase spectra are most similar to those of the broad-line SN 2003jd, the [O I] line in the nebular spectrum of SN 2007ru shows a singly peaked profile, in contrast to the doubly peaked profile of SN 2003jd. The singly peaked profile, together with the high luminosity and the high expansion velocity, may suggest that SN 2007ru could be an aspherical explosion viewed from the polar direction. The estimated oxygen abundance 12 + log(O/H) of ~8.8 indicates that SN 2007ru occurred in a region of nearly solar metallicity.
Abstract:
The charge at which the adsorption of organic compounds attains a maximum (σ_M^max) at an electrochemical interface is analysed using several multi-state models in a hierarchical manner. The analysis is based on statistical mechanical results for the following models: (A) two-state site parity, (B) two-state multi-site, and (C) three-state site parity. The coulombic interactions due to permanent and reduced dipole effects (using the mean field approximation), electrostatic field effects, and specific substrate interactions have been taken into account. The simplest model in the hierarchy (two-state site parity) yields the explicit dependence of σ_M^max on the permanent dipole moment, the polarizability of the solvent and the adsorbate, the lattice spacing, the effective coordination number, etc. The other models in the hierarchy bring to light the influence of the solvent structure and the role of substrate interactions. As a result of this approach, the "composition" of σ_M^max in terms of the fundamental molecular constants becomes clear. With a view to using these molecular results to maximum advantage, the derived results for σ_M^max have been converted into those involving experimentally observable parameters like C_0, C_1, E_N, etc. Wherever possible, some of the earlier phenomenological relations reported for σ_M^max, notably by Parsons, by Damaskin and Frumkin, and by Trasatti, are shown to have a certain molecular basis, viz. a simple two-state site parity model. As a corollary to the hierarchical modelling, σ_M^max and the potential corresponding to it (E_max) are shown to be constants independent of θ_max or c_org for all models. The implication of our analysis for σ_M^max with respect to that predicted by the generalized surface layer equation (which postulates variation of σ_M^max and E_max with θ) is discussed in detail. Finally, we discuss in passing σ_M^max and the electrosorption valency in this context.
Abstract:
An acyclic edge coloring of a graph is a proper edge coloring such that there are no bichromatic cycles. The acyclic chromatic index of a graph G, denoted a'(G), is the minimum number k such that there is an acyclic edge coloring using k colors. It was conjectured by Alon, Sudakov, and Zaks that for any simple finite graph G, a'(G) ≤ Δ + 2, where Δ = Δ(G) denotes the maximum degree of G. We prove the conjecture for connected graphs with Δ(G) ≤ 4, with the additional restriction that m ≤ 2n − 1, where n is the number of vertices and m is the number of edges in G. Note that for any graph G with Δ(G) ≤ 4, m ≤ 2n. It follows that for any graph G, if Δ(G) ≤ 4 then a'(G) ≤ 7.
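The definition can be checked mechanically: a coloring is acyclic iff it is proper and the union of every two color classes forms a forest. A small verifier sketch (the edge-list and color-dict representation is our own choice, not from the paper):

```python
from itertools import combinations

def is_acyclic_edge_coloring(edges, colour):
    """Return True iff `colour` (a dict edge -> colour) is a proper edge
    colouring of `edges` with no bichromatic cycle."""
    # Properness: edges sharing a vertex must receive distinct colours.
    incident = {}
    for e in edges:
        for x in e:
            if colour[e] in incident.setdefault(x, set()):
                return False
            incident[x].add(colour[e])
    # Acyclicity: each pair of colour classes must form a forest,
    # checked with a fresh union-find per pair.
    for c1, c2 in combinations(set(colour.values()), 2):
        parent = {}
        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x
        for e in edges:
            if colour[e] in (c1, c2):
                ru, rv = find(e[0]), find(e[1])
                if ru == rv:
                    return False                # bichromatic cycle found
                parent[ru] = rv
    return True

# A triangle needs 3 colours; with 3 distinct colours the colouring is
# trivially acyclic (no two colour classes contain a cycle).
tri = [(0, 1), (1, 2), (0, 2)]
ok = is_acyclic_edge_coloring(tri, {(0, 1): 0, (1, 2): 1, (0, 2): 2})
```

A 2-colored 4-cycle is the smallest counterexample the checker rejects: it is a proper coloring, but the two color classes together contain the whole cycle.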