402 results for Binary Cyclically Permutable Constant Weight Codes
at Indian Institute of Science - Bangalore - India
Abstract:
We present a construction of constant weight codes based on the prime ideals of a Noetherian commutative ring. The coding scheme is based on the uniqueness of the primary decomposition of ideals in Noetherian rings. The source alphabet consists of a set of radical ideals constructed from a chosen subset of the prime spectrum of the ring. The distance function between two radical ideals is taken to be the Hamming metric based on the symmetric distance between sets. As an application we construct codes for random networks employing SAF routing.
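The distance computation described above reduces to counting a symmetric difference once each radical ideal is represented by the set of primes in its (unique) primary decomposition. A minimal sketch of that metric; the frozenset representation and the prime labels are illustrative assumptions, not the paper's notation:

```python
def radical_distance(I, J):
    # Hamming-style distance between two radical ideals, each represented
    # as a frozenset of labels of the prime ideals in its decomposition
    # (this representation is an assumption made for illustration).
    return len(I ^ J)  # size of the symmetric difference of the two sets

# Constant weight: every codeword is built from the same number of primes.
c1 = frozenset({"p1", "p2", "p3"})
c2 = frozenset({"p2", "p3", "p4"})
```

Since all codewords draw the same number of primes from the chosen subset of the prime spectrum, the resulting code is constant weight by construction.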
Abstract:
The low-frequency (5–100 kHz) dielectric constant ε has been measured in the temperature range 7 × 10⁻⁵ < t = (T − Tc)/Tc < 8 × 10⁻². Near Tc an exponent ≈0.11 characterizes the power-law behaviour of dε/dt, consistent with the theoretically predicted t⁻α singularity. However, over the full range of t an exponent ≈0.35 is obtained.
Abstract:
Space-Time Block Codes (STBCs) from Complex Orthogonal Designs (CODs) are single-symbol decodable/symbol-by-symbol decodable (SSD); however, SSD codes are also obtainable from designs that are not CODs. Recently, two such classes of SSD codes have been studied: (i) Coordinate Interleaved Orthogonal Designs (CIODs) and (ii) Minimum-Decoding-Complexity (MDC) STBCs from Quasi-ODs (QODs). The class of CIODs has non-unitary weight matrices when written as a Linear Dispersion Code (LDC), as proposed by Hassibi and Hochwald, whereas the other classes of SSD codes, including CODs, have unitary weight matrices. In this paper, we construct a large class of SSD codes with non-unitary weight matrices. We also show that the class of CIODs is a special case of our construction.
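The unitary-weight-matrix property of CODs can be checked concretely on the Alamouti code, the simplest COD. A small numerical sketch using the standard LDC expansion X = Σᵢ (Re(sᵢ)Aᵢ + Im(sᵢ)Bᵢ); the example is a textbook illustration, not taken from the paper:

```python
import numpy as np

# Alamouti codeword X = [[s1, s2], [-conj(s2), conj(s1)]] written as a
# Linear Dispersion Code: X = Re(s1)*A1 + Im(s1)*B1 + Re(s2)*A2 + Im(s2)*B2.
A1 = np.array([[1, 0], [0, 1]], dtype=complex)     # weight matrix for Re(s1)
B1 = np.array([[1j, 0], [0, -1j]], dtype=complex)  # weight matrix for Im(s1)
A2 = np.array([[0, 1], [-1, 0]], dtype=complex)    # weight matrix for Re(s2)
B2 = np.array([[0, 1j], [1j, 0]], dtype=complex)   # weight matrix for Im(s2)

# Each weight matrix is unitary: W @ W^H = I.
for W in (A1, B1, A2, B2):
    assert np.allclose(W @ W.conj().T, np.eye(2))
```

The same check fails for CIOD weight matrices, which is exactly the non-unitary behaviour the abstract refers to.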
Abstract:
Convolutional network-error correcting codes (CNECCs) are known to provide error-correcting capability in acyclic instantaneous networks within the network coding paradigm under small field size conditions. In this work, we investigate the performance of CNECCs under an error model in which the edges of the network are assumed to be statistically independent binary symmetric channels, each with the same probability of error p_e (0 <= p_e < 0.5). We obtain bounds on the performance of such CNECCs based on a modified generating function (the transfer function) of the CNECCs. For a given network, we derive a mathematical condition on how small p_e should be so that only single-edge network-errors need to be accounted for, thus reducing the complexity of evaluating the probability of error of any CNECC. Simulations indicate that convolutional codes must possess different properties to achieve good performance in the low-p_e and high-p_e regimes. In the low-p_e regime, convolutional codes with good distance properties perform well, while in the high-p_e regime codes with a good slope (the minimum normalized cycle weight) perform well. We derive a lower bound on the slope of any rate b/c convolutional code with a certain degree.
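Requiring that only single-edge errors be accounted for amounts to requiring that the probability of two or more edge errors be negligible. A minimal sketch of that probability for a network with E independent BSC edges; this closed form is elementary binomial arithmetic, not the paper's transfer-function bound:

```python
def multi_edge_error_prob(E, pe):
    # Probability that more than one of E statistically independent BSC
    # edges (each flipping with probability pe) is in error.
    p_none = (1 - pe) ** E
    p_one = E * pe * (1 - pe) ** (E - 1)
    return 1 - p_none - p_one
```

For example, with E = 10 edges and pe = 0.01, multi-edge errors occur with probability well below 0.5%, so the single-edge analysis dominates; as pe grows, this probability grows rapidly and the single-edge approximation breaks down.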
Abstract:
Three codes that can solve three-dimensional linear elastostatic problems using constant boundary elements, while ignoring body forces, are provided here. The file 'bemconst.m' contains a MATLAB code for solving such problems. The file 'bemconst.f90' is a Fortran translation of the MATLAB code in 'bemconst.m', and 'bemconstp.f90' is a parallelized version of that Fortran code. The file 'inbem96.txt' is the input file for both Fortran codes. The author hereby declares that the present codes are his original work, and that none of them, in full or in part, is a translation or copy of any existing code written by someone else. The author's institution (Indian Institute of Science) has informed the author in writing that it is not interested in claiming any copyright on the present codes. The author is hereby distributing the codes under the MIT License; the full text of the license is included in each of the files that contain the codes.
Abstract:
The ultrasonic degradation of poly(acrylic acid), a water-soluble polymer, was studied in the presence of persulfates at different temperatures in binary solvent mixtures of methanol and water. The degraded samples were analyzed by gel permeation chromatography for the time evolution of the molecular weight distributions. A continuous distribution kinetics model based on midpoint chain scission was developed, and the degradation rate coefficients were determined. The decline in the rate of degradation of poly(acrylic acid) with increasing temperature, and with increasing methanol content in the binary solvent mixture, was attributed to the increased vapor pressure of the solutions. The experimental data showed an increase in the degradation rate of the polymer with increasing oxidizing agent (persulfate) concentration. Different concentrations of three persulfates (potassium, ammonium, and sodium persulfate) were used. It was found that the ratio of the polymer degradation rate coefficient to the dissociation rate constant of the persulfate was constant. This implies that the ultrasonic degradation rate of poly(acrylic acid) can be determined a priori in the presence of any initiator.
Abstract:
Let G = (V, E) be a finite, simple and undirected graph. For S ⊆ V, let δ(S, G) = {(u, v) ∈ E : u ∈ S and v ∈ V − S} be the edge boundary of S. Given an integer i, 1 ≤ i ≤ |V|, the edge isoperimetric value of G at i is defined as b_e(i, G) = min over all S ⊆ V with |S| = i of |δ(S, G)|. The edge isoperimetric peak of G is defined as b_e(G) = max over 1 ≤ j ≤ |V| of b_e(j, G). Let b_v(G) denote the vertex isoperimetric peak, defined in a corresponding way. The problem of determining a lower bound for the vertex isoperimetric peak in complete t-ary trees was recently considered in [Y. Otachi, K. Yamazaki, A lower bound for the vertex boundary-width of complete k-ary trees, Discrete Mathematics, in press (doi: 10.1016/j.disc.2007.05.014)]. In this paper we provide bounds which improve those in the above cited paper. Our results can be generalized to arbitrary (rooted) trees. The depth d of a tree is the number of nodes on the longest path starting from the root and ending at a leaf. In this paper we show that for a complete binary tree of depth d (denoted T_d^2), c_1·d ≤ b_e(T_d^2) ≤ d and c_2·d ≤ b_v(T_d^2) ≤ d, where c_1, c_2 are constants. For a complete t-ary tree of depth d (denoted T_d^t) with d ≥ c log t, where c is a constant, we show that c_1·√t·d ≤ b_e(T_d^t) ≤ t·d and c_2·d/√t ≤ b_v(T_d^t) ≤ d, where c_1, c_2 are constants. At the heart of our proof is the following theorem, which works for an arbitrary rooted tree and not just for a complete t-ary tree. Let T = (V, E, r) be a finite, connected and rooted tree, the root being the vertex r.
Define a weight function w : V → N, where the weight w(u) of a vertex u is the number of its successors (including itself), and let the weight index η(T) be defined as the number of distinct weights in the tree, i.e. η(T) = |{w(u) : u ∈ V}|. For a positive integer k, let ℓ(k) = |{i ∈ N : 1 ≤ i ≤ |V|, b_e(i, T) ≤ k}|. We show that ℓ(k) ≤ 2·C(2η + k, k), where C(·,·) denotes the binomial coefficient.
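The edge isoperimetric values and the peak can be computed by brute force on small trees, which gives a useful sanity check of the bounds. A sketch, exhaustive over all vertex subsets and therefore only feasible for tiny trees:

```python
from itertools import combinations

def complete_binary_tree_edges(depth):
    # Vertices are numbered 1..2^depth - 1; the children of v are 2v and 2v+1.
    n = 2 ** depth - 1
    edges = [(v, 2 * v + c) for v in range(1, n + 1)
             for c in (0, 1) if 2 * v + c <= n]
    return n, edges

def edge_isoperimetric_value(n, edges, i):
    # b_e(i): minimum edge-boundary size over all vertex subsets of size i.
    return min(sum((u in S) != (v in S) for u, v in edges)
               for S in map(set, combinations(range(1, n + 1), i)))

n, edges = complete_binary_tree_edges(3)  # complete binary tree of depth 3
values = [edge_isoperimetric_value(n, edges, i) for i in range(1, n + 1)]
peak = max(values)                        # the edge isoperimetric peak b_e(T)
```

For the depth-3 complete binary tree the peak is 2, consistent with the stated upper bound b_e(T_d^2) ≤ d = 3.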
Abstract:
The LISA Parameter Estimation Taskforce was formed in September 2007 to provide the LISA Project with vetted codes, source distribution models and results related to parameter estimation. The Taskforce's goal is to be able to quickly calculate the impact of any mission design changes on LISA's science capabilities, based on reasonable estimates of the distribution of astrophysical sources in the universe. This paper describes our Taskforce's work on massive black-hole binaries (MBHBs). Given present uncertainties in the formation history of MBHBs, we adopt four different population models, based on (i) whether the initial black-hole seeds are small or large and (ii) whether accretion is efficient or inefficient at spinning up the holes. We compare four largely independent codes for calculating LISA's parameter-estimation capabilities. All codes are based on the Fisher-matrix approximation, but in the past they used somewhat different signal models, source parametrizations and noise curves. We show that once these differences are removed, the four codes give results in extremely close agreement with each other. Using a code that includes both spin precession and higher harmonics in the gravitational-wave signal, we carry out Monte Carlo simulations and determine the number of events that can be detected and accurately localized in our four population models.
Abstract:
Measurements of the ratio of the diffusion coefficient to mobility (D/μ) of electrons in SF6-N2 and CCl2F2-N2 mixtures over the range 80
Abstract:
Experimental results are presented for the ionisation (α) and electron attachment (η) coefficients evaluated from the steady-state Townsend current growth curves for SF6-N2 and CCl2F2-N2 mixtures over the range 60 ≤ E/P ≤ 240 (where E is the electric field in V cm⁻¹ and P is the pressure in Torr reduced to 20 °C). In both mixtures the attachment coefficients (η_mix) were found to follow the relationship η_mix = η(1 − exp(−βF/(100 − F))), where η is the attachment coefficient of the pure electronegative gas, F is the percentage of the electronegative gas in the mixture and β is a constant. The ionisation coefficients (α_mix) generally obeyed a relationship in terms of α_N2 and α_A, the ionisation coefficients of nitrogen and the attaching gas respectively. However, in the case of CCl2F2-N2 mixtures, there were maxima in the α_mix values for CCl2F2 concentrations between 10% and 30% at all values of E/P investigated. Effective ionisation coefficients (α − η)/P obtained in these binary mixtures show that the critical E/P (corresponding to (α − η)/P = 0) increases with the concentration of the electronegative gas up to 40%. Further increase in the electronegative gas content does not seem to alter the critical E/P.
Abstract:
The ratio of the electron attachment coefficient η to the gas pressure p (reduced to 0 °C), evaluated from the Townsend current growth curves in binary mixtures of electronegative gases (SF6, CCl2F2, CO2) and buffer gases (N2, Ar, air), clearly indicates that the η/p ratios do not scale as the partial pressure of the electronegative gas in the mixture. Extensive calculations carried out using experimentally obtained data have shown that the attachment coefficient of the mixture, η_mix, can be expressed as η_mix = η(1 − exp(−βF/(100 − F))), where η is the attachment coefficient of the 100% electronegative gas, F is the percentage of the electronegative gas in the mixture and β is a constant. The results of this analysis explain to a high degree of accuracy the data obtained in various mixtures and are in very good agreement with the data deduced by Itoh and co-workers (1980) using the Boltzmann equation method.
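The fitted relationship lends itself to direct computation. A minimal sketch; the values η = 1.0 and β = 2.0 used in the example are arbitrary illustrative numbers, not fitted constants from the paper:

```python
import math

def eta_mix(eta, F, beta):
    # eta_mix = eta * (1 - exp(-beta * F / (100 - F)))
    # eta:  attachment coefficient of the 100% electronegative gas
    # F:    percentage of the electronegative gas in the mixture (0 <= F < 100)
    # beta: fitted constant for the particular gas pair
    return eta * (1 - math.exp(-beta * F / (100 - F)))
```

The form captures the observed non-linearity: η_mix vanishes at F = 0 and approaches η only as F nears 100%, rather than scaling with the partial pressure of the electronegative gas.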
Abstract:
Three different algorithms are described for the conversion of Hensel codes to Farey rationals. The first algorithm is based on trial-and-error factorization of the weight of a Hensel code, inversion and a range test. The second algorithm is deterministic and uses a pair of different p-adic systems for simultaneous computation; from the resulting weights of the two different Hensel codes of the same rational, two equivalence classes of rationals are generated using the respective primitive roots. The intersection of these two equivalence classes uniquely identifies the rational. Both of the above algorithms are exponential (in time and/or space).
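Although the abstract does not spell out the algorithms, the underlying task of recovering a Farey rational a/b from its residue c ≡ a·b⁻¹ (mod p^r) is classically solved with the extended Euclidean algorithm. A hedged sketch of that standard rational reconstruction, not necessarily any of the paper's three methods:

```python
import math

def rational_reconstruct(c, m):
    # Recover a/b with |a|, |b| <= sqrt(m/2) from c = a * b^(-1) mod m,
    # where m = p^r for a Hensel code. Run the extended Euclidean algorithm
    # on (m, c) and stop once the remainder drops below sqrt(m/2).
    bound = math.isqrt(m // 2)
    r0, r1 = m, c % m
    t0, t1 = 0, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    return r1, t1  # numerator, denominator (sign carried by t1)

# Example: the residue of 2/7 modulo 5^5 = 3125 is 2 * inverse(7) = 1786.
```

The stopping bound guarantees uniqueness of the recovered rational within the Farey range determined by m.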
Abstract:
Quantization formats of four digital holographic codes (Lohmann, Lee, Burckhardt and Hsueh-Sawchuk) are evaluated. A quantitative assessment is made from errors in both the Fourier transform and image domains. In general, small errors in the Fourier amplitude or phase alone do not guarantee high image fidelity. From quantization considerations, the Lee hologram is shown to be the best choice for randomly phase-coded objects. When phase coding is not feasible, the Lohmann hologram is preferable as it is easier to plot.
Abstract:
Construction of Huffman binary codes for WLN (Wiswesser Line Notation) symbols is described for the compression of a WLN file. Here, a parenthesized representation of the tree structure is used for computer encoding.
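The construction described can be sketched with a standard Huffman tree built from symbol frequencies. A minimal sketch; the symbols and counts below are made-up stand-ins for the statistics of a real WLN file:

```python
import heapq

def huffman_codes(freqs):
    # Build Huffman binary codes from a {symbol: frequency} map.
    # Each heap entry is (total frequency, tie-break index, {symbol: code}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # lightest subtree -> prefix '0'
        f2, i2, c2 = heapq.heappop(heap)  # next lightest    -> prefix '1'
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, i2, merged))
    return heap[0][2]

# Hypothetical WLN symbol counts: frequent symbols get shorter codes.
codes = huffman_codes({"C": 5, "1": 3, "O": 1, "N": 1})
```

The resulting code is prefix-free, so the compressed WLN file can be decoded unambiguously by walking the tree bit by bit.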