192 results for Binary Coding

at Indian Institute of Science - Bangalore - India


Relevance:

100.00%

Publisher:

Abstract:

Construction of Huffman binary codes for WLN (Wiswesser Line Notation) symbols is described for the compression of a WLN file. Here, a parenthesized representation of the tree structure is used for computer encoding.
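The construction described above can be sketched in a few lines of Python; the symbol set and counts below are illustrative stand-ins, not the WLN symbol statistics from the paper, and the parenthesized output format is one plausible reading of the abstract's description.

```python
import heapq
from itertools import count

def huffman_tree(freqs):
    """Build a Huffman tree from a {symbol: frequency} map.
    Leaves are symbols; internal nodes are (left, right) pairs."""
    tie = count()  # tie-breaker so heapq never compares subtrees
    heap = [(f, next(tie), sym) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tie), (left, right)))
    return heap[0][2]

def codebook(tree, prefix=""):
    """Walk the tree, assigning 0 to left branches and 1 to right."""
    if not isinstance(tree, tuple):
        return {tree: prefix or "0"}
    book = {}
    book.update(codebook(tree[0], prefix + "0"))
    book.update(codebook(tree[1], prefix + "1"))
    return book

def parenthesized(tree):
    """Parenthesized encoding of the tree structure for storage."""
    if not isinstance(tree, tuple):
        return str(tree)
    return "(" + parenthesized(tree[0]) + parenthesized(tree[1]) + ")"

freqs = {"C": 40, "1": 25, "O": 20, "N": 15}  # illustrative symbol counts
tree = huffman_tree(freqs)
book = codebook(tree)
```

More frequent symbols receive shorter codewords, and the parenthesized string lets the decoder rebuild the tree without storing the code table explicitly.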

Relevance:

30.00%

Publisher:

Abstract:

Quantization formats of four digital holographic codes (Lohmann, Lee, Burckhardt and Hsueh-Sawchuk) are evaluated. A quantitative assessment is made from errors in both the Fourier transform and image domains. In general, small errors in the Fourier amplitude or phase alone do not guarantee high image fidelity. From quantization considerations, the Lee hologram is shown to be the best choice for randomly phase-coded objects. When phase coding is not feasible, the Lohmann hologram is preferable as it is easier to plot.

Relevance:

30.00%

Publisher:

Abstract:

We consider the problem of compression via homomorphic encoding of a source having a group alphabet. This is motivated by the problem of distributed function computation, where it is known that if one is interested only in computing a function of several sources, one can at times improve upon the compression rate required by the Slepian-Wolf bound. The functions of interest are those that can be represented by the binary operation of the group. We first consider the case when the source alphabet is the cyclic Abelian group Z_{p^r}. In this scenario, we show that the set of achievable rates provided by Krithivasan and Pradhan [1] is indeed the best possible. In addition, we provide a simpler proof of their achievability result. For a general Abelian group, an achievable rate region is presented that improves upon the one obtained by Krithivasan and Pradhan. We then consider the case when the source alphabet is a non-Abelian group. We show that if all the source symbols have non-zero probability and the center of the group is trivial, then it is impossible to compress such a source using a homomorphic encoder. Finally, we present certain non-homomorphic encoders that are also suitable in the context of function computation over non-Abelian group sources, and provide the rate regions achieved by these encoders.
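As a toy illustration of why homomorphic encoders interact well with the group operation (this is not the paper's construction), the quotient map from Z_{p^r} onto Z_p is a many-to-one, hence compressive, group homomorphism:

```python
# Toy sketch: E(x) = x mod p maps the source alphabet Z_{p^r} onto the
# smaller group Z_p while commuting with the group operation -- the
# structural property a homomorphic encoder must have.
p, r = 3, 2
M = p ** r  # source alphabet Z_{p^r} = Z_9

def encode(x):
    return x % p

# homomorphism property: E(x +_M y) = E(x) +_p E(y) for every pair
for x in range(M):
    for y in range(M):
        assert encode((x + y) % M) == (encode(x) + encode(y)) % p
```

Because the encodings add up consistently, a decoder that only needs the (coarsened) sum can work with the compressed values directly, never reconstructing the sources themselves.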

Relevance:

30.00%

Publisher:

Abstract:

In terabit-density magnetic recording, several bits of data can be replaced by the values of their neighbors in the storage medium. As a result, errors in the medium are dependent on each other and also on the data written. We consider a simple 1-D combinatorial model of this medium. In our model, we assume a setting where binary data is sequentially written on the medium and a bit can erroneously change to the immediately preceding value. We derive several properties of codes that correct this type of error, focusing on bounds on their cardinality. We also define a probabilistic finite-state channel model of the storage medium, and derive lower and upper estimates of its capacity. A lower bound is derived by evaluating the symmetric capacity of the channel, i.e., the maximum transmission rate under the assumption of a uniform input distribution. An upper bound is found by showing that the original channel is a stochastic degradation of another, related channel model whose capacity we can compute explicitly.
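A minimal simulation of the error model as described, under the assumption that an erroneous bit copies the immediately preceding stored value; the function name and error probability are illustrative, not from the paper.

```python
import random

def write_with_neighbor_errors(bits, p_err, rng):
    """Simulate the 1-D medium model sketched above: each written bit
    (after the first) may erroneously take the value of the immediately
    preceding stored bit. Note the data dependence: when the current
    bit already equals its predecessor, the 'error' changes nothing."""
    out = [bits[0]]
    for b in bits[1:]:
        out.append(out[-1] if rng.random() < p_err else b)
    return out

# illustrative use with a hypothetical 20% error rate
stored = write_with_neighbor_errors([1, 0, 1, 1, 0, 0, 1], 0.2, random.Random(0))
```

The data-dependence is visible in the two extremes: with error probability 0 the medium is noiseless, and with probability 1 every bit collapses to the first written value.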

Relevance:

30.00%

Publisher:

Abstract:

The input-constrained erasure channel with feedback is considered, where the binary input sequence contains no consecutive ones, i.e., it satisfies the (1, ∞)-RLL constraint. We derive the capacity for this setting, which can be expressed as

C_ε = max_{0 ≤ p ≤ 0.5} (1 − ε) H_b(p) / (1 + (1 − ε) p),

where ε is the erasure probability and H_b(·) is the binary entropy function. Moreover, we prove that a priori knowledge of the erasure at the encoder does not increase the feedback capacity. The feedback capacity was calculated using an equivalent dynamic programming (DP) formulation whose optimal average reward equals the capacity. Furthermore, we obtained an optimal encoding procedure from the solution of the DP, leading to a capacity-achieving, zero-error coding scheme for our setting. DP is thus shown to be a tool not only for solving optimization problems, such as capacity calculation, but also for constructing optimal coding schemes. The derived capacity expression also serves as the only non-trivial upper bound known on the capacity of the input-constrained erasure channel without feedback, a problem that is still open.
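The capacity expression can be evaluated numerically by a simple grid search over p (a sketch, not the paper's method); as a sanity check, at ε = 0 the expression reduces to log2 of the golden ratio, the well-known capacity of the noiseless (1, ∞)-RLL constraint.

```python
import math

def h_b(p):
    """Binary entropy function H_b(p), in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def capacity(eps, steps=100_000):
    """Grid-search maximization of (1-eps)*H_b(p) / (1+(1-eps)*p)
    over 0 <= p <= 0.5, per the capacity expression above."""
    best = 0.0
    for i in range(steps + 1):
        p = 0.5 * i / steps
        best = max(best, (1 - eps) * h_b(p) / (1 + (1 - eps) * p))
    return best
```

Here p is the probability of transmitting a 1 (each 1 is necessarily followed by a 0 under the constraint), which is why the normalizing term 1 + (1 − ε)p appears in the denominator.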

Relevance:

20.00%

Publisher:

Abstract:

There are essentially two different phenomenological models available to describe the interdiffusion process in binary systems in the solid state. The first of these, which is used more frequently, is based on the theory of flux partitioning. The second model, developed much more recently, uses the theory of dissociation and reaction. Although the theory of flux partitioning has been widely used, we found that this theory does not account for the mobility of both species and therefore is not suitable for use in most interdiffusion systems. We have first modified this theory to take into account the mobility of both species and then further extended it to develop relations for the integrated diffusion coefficient and the ratio of diffusivities of the species. The versatility of these two different models is examined in the Co-Si system with respect to different end-member compositions. From our analysis, we found that the applicability of the theory of flux partitioning is rather limited, but the theory of dissociation and reaction can be used in any binary system.

Relevance:

20.00%

Publisher:

Abstract:

Distributed space-time coding for wireless relay networks in which the source, the destination and the relays have multiple antennas has been studied by Jing and Hassibi. In this set-up, the transmit and receive signals at different antennas of the same relay are processed and designed independently, even though the antennas are colocated. In this paper, a wireless relay network with a single antenna at the source and the destination and two antennas at each of the R relays is considered. A new class of distributed space-time block codes called Co-ordinate Interleaved Distributed Space-Time Codes (CIDSTC) is introduced where, in the first phase, the source transmits a T-length complex vector to all the relays; and in the second phase, at each relay, the in-phase and quadrature component vectors of the complex vectors received at the two antennas are interleaved and processed before being forwarded to the destination. Compared to the scheme proposed by Jing-Hassibi, for T >= 4R, while providing the same asymptotic diversity order of 2R, the CIDSTC scheme is shown to provide an asymptotic coding gain at the cost of a negligible increase in processing complexity at the relays. However, for moderate and large values of P, the CIDSTC scheme is shown to provide more diversity than the scheme proposed by Jing-Hassibi. CIDSTCs are shown to be fully diverse provided the information symbols take values from an appropriate multidimensional signal set.
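The coordinate-interleaving idea can be sketched as follows, under the simplifying assumption (an illustration, not the paper's exact relay processing) that the relay swaps the quadrature components of the complex vectors received at its two antennas:

```python
# Illustrative sketch: the in-phase (real) parts stay with their own
# antenna while the quadrature (imaginary) parts are exchanged between
# the two antennas' vectors before further processing and forwarding.
def coordinate_interleave(r1, r2):
    s1 = [complex(a.real, b.imag) for a, b in zip(r1, r2)]
    s2 = [complex(b.real, a.imag) for a, b in zip(r1, r2)]
    return s1, s2

r1 = [1 + 2j, 3 + 4j]  # received at antenna 1 (illustrative values)
r2 = [5 + 6j, 7 + 8j]  # received at antenna 2
s1, s2 = coordinate_interleave(r1, r2)
```

Each forwarded vector thus carries coordinates from both antennas, which is what couples the two antennas' signals instead of designing them independently.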

Relevance:

20.00%

Publisher:

Abstract:

The LISA Parameter Estimation Taskforce was formed in September 2007 to provide the LISA Project with vetted codes, source distribution models and results related to parameter estimation. The Taskforce's goal is to be able to quickly calculate the impact of any mission design changes on LISA's science capabilities, based on reasonable estimates of the distribution of astrophysical sources in the universe. This paper describes our Taskforce's work on massive black-hole binaries (MBHBs). Given present uncertainties in the formation history of MBHBs, we adopt four different population models, based on (i) whether the initial black-hole seeds are small or large and (ii) whether accretion is efficient or inefficient at spinning up the holes. We compare four largely independent codes for calculating LISA's parameter-estimation capabilities. All codes are based on the Fisher-matrix approximation, but in the past they used somewhat different signal models, source parametrizations and noise curves. We show that once these differences are removed, the four codes give results in extremely close agreement with each other. Using a code that includes both spin precession and higher harmonics in the gravitational-wave signal, we carry out Monte Carlo simulations and determine the number of events that can be detected and accurately localized in our four population models.

Relevance:

20.00%

Publisher:

Abstract:

Measurements of the ratio of the diffusion coefficient to mobility (D/μ) of electrons in SF6-N2 and CCl2F2-N2 mixtures over the range 80

Relevance:

20.00%

Publisher:

Abstract:

A dual representation scheme for performing arithmetic modulo an arbitrary integer M is presented. The coding scheme maps each integer N in the range 0 <= N < M into one of two representations, each being identified by its most significant bit. The encoding of numbers is straightforward and the problem of checking for unused combinations is eliminated.
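The abstract does not spell out the mapping; one construction consistent with its description (an assumption, not necessarily the paper's exact scheme) stores a residue N either as N itself or as N + M in k bits, with 2^(k-1) <= M < 2^k, so that the two forms of the same residue differ in their most significant bit and every k-bit pattern decodes to a valid value.

```python
# Hedged sketch of a dual representation for arithmetic mod M.
def representations(N, M, k):
    """The one or two k-bit encodings of residue N (0 <= N < M)."""
    reps = [N]
    if N + M < 2 ** k:
        reps.append(N + M)  # second form, distinguished by its MSB
    return reps

def decode(x, M):
    """Every k-bit pattern maps back to a residue; nothing is unused."""
    return x if x < M else x - M

M, k = 11, 4  # illustrative modulus: arithmetic mod 11 in 4 bits
assert representations(3, M, k) == [3, 14]  # MSBs differ: 0011 vs 1110
assert representations(7, M, k) == [7]      # 7 + 11 = 18 needs 5 bits
```

Because decoding is total over all 2^k patterns, the hardware never has to detect and reject unused combinations, matching the property the abstract highlights.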

Relevance:

20.00%

Publisher:

Abstract:

Experimental results are presented for the ionisation (α) and electron attachment (η) coefficients evaluated from the steady-state Townsend current growth curves for SF6-N2 and CCl2F2-N2 mixtures over the range 60 <= E/P <= 240 (where E is the electric field in V cm^-1 and P is the pressure in Torr reduced to 20 °C). In both mixtures the attachment coefficients (η_mix) evaluated were found to follow the relationship η_mix = η(1 − exp(−βF/(100−F))), where η is the attachment coefficient of the pure electronegative gas, F is the fraction of the electronegative gas in the mixture and β is a constant. The ionisation coefficients (α_mix) generally obeyed a mixture relationship in terms of α_N2 and α_A, the ionisation coefficients of nitrogen and the attaching gas respectively. However, in the case of CCl2F2-N2 mixtures, there were maxima in the α_mix values for CCl2F2 concentrations between 10% and 30% at all values of E/P investigated. Effective ionisation coefficients (α − η)/P obtained in these binary mixtures show that the critical E/P (corresponding to (α − η)/P = 0) increases with increasing concentration of the electronegative gas up to 40%. Further increase in the electronegative gas content does not seem to alter the critical E/P.

Relevance:

20.00%

Publisher:

Abstract:

The leader protease (L-pro) and capsid-coding sequences (P1) constitute approximately 3 kb of the foot-and-mouth disease virus (FMDV) genome. We studied the phylogenetic relationship of 46 FMDV serotype A isolates of Indian origin collected during the period 1968-2005, and also eight vaccine strains, using the neighbour-joining tree and Bayesian tree methods. The viruses were categorized under three major groups - Asian, Euro-South American and European. The Indian isolates formed a distinct genetic group among the Asian isolates. The Indian isolates were further classified into different genetic subgroups (<5% divergence). Post-1995 isolates were divided into two subgroups, while a few isolates which originated in the year 2005 from Andhra Pradesh formed a separate group. These isolates were closely related to the isolates of the 1970s. The FMDV isolates seem to undergo reverse mutation or convergent evolution, wherein sequences identical to those of the ancestors are present in the isolates in circulation. The eight vaccine strains included in the study were not related to each other and belonged to different genetic groups. Recombination was detected in the L-pro region in one isolate (A IND 20/82) and in the VP1 coding 1D region in another isolate (A RAJ 21/96). Positive selection was identified at aa position 23 in the L-pro (P<0.05; 0.046*) and at aa position 171 in the capsid protein VP1 (P<0.01; 0.003**).

Relevance:

20.00%

Publisher:

Abstract:

A numerical study of columnar-to-equiaxed transition (CET) during directional solidification of binary alloys is presented using a macroscopic solidification model. The position of the CET is predicted numerically using a critical cooling rate criterion reported in the literature. The macroscopic solidification model takes into account movement of the solid phase due to buoyancy, and the drag effect on the moving solid phase because of fluid motion. The model is applied to simulate the solidification process for binary alloys (Sn-Pb) and to estimate solidification parameters such as the position of the liquidus, the velocity of the liquidus isotherm, the temperature gradient ahead of the liquidus, and the cooling rate at the liquidus. Solidification phenomena under two cooling configurations are studied: one without melt convection and the other involving thermosolutal convection. The numerically predicted positions of the CET compare well with those of experiments reported in the literature. Melt convection results in a higher cooling rate, higher liquidus isotherm velocities, and stimulation of the occurrence of CET in comparison with the nonconvecting case. The movement of the solid phase further aids the process of CET. With a fixed solid phase, the occurrence of CET based on the same critical cooling rate is delayed and occurs at a greater distance from the chill.

Relevance:

20.00%

Publisher:

Abstract:

The ratio of the electron attachment coefficient η to the gas pressure p (reduced to 0 °C) evaluated from the Townsend current growth curves in binary mixtures of electronegative gases (SF6, CCl2F2, CO2) and buffer gases (N2, Ar, air) clearly indicates that the η/p ratios do not scale with the partial pressure of the electronegative gas in the mixture. Extensive calculations carried out using the experimentally obtained data have shown that the attachment coefficient of the mixture, η_mix, can be expressed as η_mix = η(1 − exp(−βF/(100−F))), where η is the attachment coefficient of the 100% electronegative gas, F is the percentage of the electronegative gas in the mixture and β is a constant. The results of this analysis explain to a high degree of accuracy the data obtained in various mixtures and are in very good agreement with the data deduced by Itoh and co-workers (1980) using the Boltzmann equation method.
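The fitted relationship can be written as a small helper for evaluation; the variable names are illustrative, and no numerical values of β from the paper are assumed.

```python
import math

def eta_mix(eta_pure, F, beta):
    """Attachment coefficient of the mixture per the fitted relationship
    above: eta_pure is the coefficient of the 100% electronegative gas,
    F the percentage of electronegative gas (0 <= F < 100), and beta a
    fitted constant."""
    return eta_pure * (1.0 - math.exp(-beta * F / (100.0 - F)))
```

The limits behave as expected: at F = 0 the mixture coefficient vanishes (pure buffer gas), and as F approaches 100 the exponent diverges and η_mix approaches η, the pure-electronegative-gas value.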