996 results for Binary Cyclically Permutable Constant Weight Codes
Abstract:
We present a theoretical method for a direct evaluation of the average error exponent in Gallager error-correcting codes using methods of statistical physics. Results for the binary symmetric channel (BSC) are presented for codes of both finite and infinite connectivity.
Abstract:
We obtain phase diagrams of regular and irregular finite-connectivity spin glasses. Contact is first established between properties of the phase diagram and the performance of low-density parity-check (LDPC) codes within the replica-symmetric (RS) ansatz. We then study the location of the dynamical and critical transition points of these systems within the one-step replica symmetry breaking (RSB) theory, extending similar calculations that have been performed in the past for the Bethe spin-glass problem. We observe that the location of the dynamical transition line does change within the RSB theory, in comparison with the results obtained in the RS case. For LDPC decoding of messages transmitted over the binary erasure channel we find, at zero temperature and rate R = 1/4, an RS critical transition point at p_c ≈ 0.67, while the critical RSB transition point is located at p_c = 0.7450 ± 0.0050, to be compared with the corresponding Shannon bound 1 − R. For the binary symmetric channel we show that the low-temperature reentrant behavior of the dynamical transition line, observed within the RS ansatz, changes its location when the RSB ansatz is employed; the dynamical transition point occurs at higher values of the channel noise. Possible practical implications for improving the performance of state-of-the-art error-correcting codes are discussed. © 2006 The American Physical Society.
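A quick sanity check on the numbers quoted above (standard information theory, not part of the original abstract): the binary erasure channel with erasure probability p has capacity C = 1 − p, so reliable decoding at rate R requires p ≤ 1 − R.

```latex
% Shannon bound for the binary erasure channel at rate R = 1/4:
\[
  p_{\mathrm{Shannon}} \;=\; 1 - R \;=\; 1 - \tfrac{1}{4} \;=\; 0.75,
\]
% against which the abstract's transition points compare as
\[
  p_c^{\mathrm{RS}} \approx 0.67
  \;<\;
  p_c^{\mathrm{RSB}} = 0.7450 \pm 0.0050
  \;<\;
  0.75 .
\]
```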
Abstract:
We present a theoretical method for a direct evaluation of the average and reliability error exponents in low-density parity-check error-correcting codes using methods of statistical physics. Results for the binary symmetric channel are presented for codes of both finite and infinite connectivity.
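For readers outside the coding-theory literature, the error exponent being evaluated is the exponential decay rate of the average block error probability with block length; a textbook statement (Gallager's random-coding bound), not a result specific to this paper:

```latex
% Error exponent: decay rate of the average block error probability
% \overline{P_B} at fixed rate R as the block length N grows:
\[
  E(R) \;=\; \lim_{N \to \infty} -\frac{1}{N} \ln \overline{P_B}(N, R),
  \qquad \overline{P_B} \sim e^{-N E(R)} .
\]
% Gallager's random-coding bound provides the classic lower bound
\[
  E(R) \;\ge\; \max_{0 \le \rho \le 1} \bigl[ E_0(\rho) - \rho R \bigr],
\]
% where E_0(rho) is Gallager's function of the channel transition
% probabilities and the input distribution.
```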
Abstract:
As a basis for the commercial separation of normal paraffins, a detailed study has been made of factors affecting the adsorption of binary liquid mixtures of high molecular weight normal paraffins (C12, C16, and C20) from isooctane on type 5A molecular sieves. The literature relating to molecular sieve properties and applications, and to liquid-phase adsorption of high molecular weight normal paraffin compounds by zeolites, was reviewed. Equilibrium isotherms were determined experimentally for the normal paraffins under investigation at temperatures of 303 K, 323 K and 343 K and showed a non-linear, favourable type of isotherm. A higher equilibrium amount was adsorbed with lower molecular weight normal paraffins. An increase in adsorption temperature resulted in a decrease in the adsorption value. Kinetics of adsorption were investigated for the three normal paraffins at different temperatures. The effective diffusivity and the rate of adsorption of each normal paraffin increased with an increase in temperature in the range 303 to 343 K. The value of the activation energy was between 2 and 4 kcal/mole. The dynamic properties of the three systems were investigated over a range of operating conditions (i.e. temperature, flow rate, feed concentration, and molecular sieve size in the range 0.032 × 10⁻³ to 2 × 10⁻³ m) with a packed column. The heights of adsorption zones calculated by two independent equations (one based on a constant-width, constant-velocity adsorption zone and the second on a solute material balance within the adsorption zone) agreed within 3%, which confirmed the validity of using the mass transfer zone concept to provide a simple design procedure for the systems under study. The dynamic capacity of type 5A sieves for n-eicosane was lower than for n-hexadecane and n-dodecane, corresponding to a lower equilibrium loading capacity and a lower overall mass transfer coefficient. The values of the individual external, internal, theoretical and experimental overall mass transfer coefficients were determined. The internal resistance was in all cases rate-controlling. A mathematical model for the prediction of dynamic breakthrough curves was developed analytically and solved from the equilibrium isotherm and the mass transfer rate equation. The experimental breakthrough curves were tested against both the proposed model and a graphical method developed by Treybal. The model produced the best fit, with mean relative percent deviations of 26, 22, and 13% for the n-dodecane, n-hexadecane, and n-eicosane systems respectively.
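The reported increase of the effective diffusivity with temperature follows the usual Arrhenius form; stating it explicitly (the functional form is textbook, the 2 to 4 kcal/mole range is the abstract's own figure):

```latex
% Arrhenius form of the effective diffusivity:
\[
  D_{\mathrm{eff}}(T) \;=\; D_0 \exp\!\left(-\frac{E_a}{R T}\right),
  \qquad E_a \approx 2\text{--}4\ \mathrm{kcal/mole},
  \quad T = 303\text{--}343\ \mathrm{K},
\]
% so a plot of ln(D_eff) against 1/T over the three experimental
% temperatures yields E_a from the slope.
```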
Abstract:
Oral liquid formulations are ideal dosage forms for paediatric and geriatric patients and for patients with dysphagia. Dysphagia is prominent among patients suffering from stroke, motor neurone disease, and advanced Alzheimer's and Parkinson's disease. However, oral liquid preparations are particularly difficult to formulate for hydrophobic and unstable drugs. Current methods employed in solving this issue therefore include the use of 'specials' or extemporaneous preparations. To address this, the government has encouraged research into the field of oral liquid formulations, with the EMEA and MHRA publishing lists of drugs of interest. The current work investigates strategic formulation development and characterisation of selected APIs (captopril, gliclazide, melatonin, L-arginine and lansoprazole), each with unique obstacles to overcome during solubilisation, stabilisation and the development of a palatable dosage form. By preparing a validated calibration protocol for each of the drug candidates, the oral liquid formulations were assessed for stability according to the ICH guidelines, along with thorough physicochemical characterisation. The results showed that the pH and polarity of the solvent had the greatest influence on the extent of drug solubilisation, with the inclusion of antioxidants and molecular steric hindrance influencing the extent of drug stability. Captopril, a hydrophilic ACE inhibitor (160 mg mL⁻¹), undergoes dimerisation with another captopril molecule. It was found that with the addition of EDTA and HP-β-CD, the drug molecule was stabilised and prevented from initiating a thiol-induced first-order free radical oxidation. The cyclodextrin provided further steric hindrance (1:1 molar ratio), resulting in complete reduction of the intensity of the sulphur-like smell associated with captopril. Palatability is a crucial factor in patient compliance, particularly when developing a dosage form targeted towards paediatrics. L-arginine is extremely bitter in solution (148.7 g L⁻¹). The addition of tartaric acid into the 100 mg mL⁻¹ formulation was sufficient to mask the bitterness associated with its guanidinium ions. The hydrophobicity of gliclazide (55 mg L⁻¹) was strategically challenged using a binary system of a co-solvent and a surfactant to reduce the polarity of the medium and ultimately increase the solubility of the drug. A second, simpler method was developed using pH modification with L-arginine. Melatonin has two major obstacles in formulation: solubility (100 μg mL⁻¹) and photosensitivity, which were both overcome by lowering the dielectric constant of the medium and by reversibly binding the drug within the cyclodextrin cup (1:1 ratio). The cyclodextrin acts by preventing UV rays from reaching the drug molecule and initiating the degradation pathway. Lansoprazole is an acid-labile drug that could only be delivered orally via a delivery vehicle; in oral liquid preparations this involved nanoparticulate vesicles. The extent of drug loading was found to be influenced by the type of polymer, the concentration of polymer, and the molecular weight. All of the formulations achieved relatively long shelf-lives with good preservative efficacy.
Abstract:
Typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. The relationship between the free energy in the statistical-mechanics approach and the mutual information used in the information-theory literature is established within a general framework; Gallager and MacKay-Neal codes are studied as specific examples of LDPC codes. It is shown that basic properties of these codes known for particular channels, including their potential to saturate Shannon's bound, hold for general symmetric channels. The binary-input additive white Gaussian noise channel and the binary-input Laplace channel are considered as specific channel models.
Abstract:
In 1965 Levenshtein introduced deletion-correcting codes and found an asymptotically optimal family of 1-deletion-correcting codes. Over the years there has been little or no research on t-deletion-correcting codes for larger values of t. In this paper, we consider the problem of finding the maximal cardinality L2(n, t) of a binary t-deletion-correcting code of length n. We construct an infinite family of binary t-deletion-correcting codes. By computer search, we construct t-deletion codes for t = 2, 3, 4, 5 with lengths n ≤ 30. Some of these codes improve on earlier results by Hirschberg-Ferreira and Swart-Ferreira. Finally, we prove a recursive upper bound on L2(n, t) which is asymptotically worse than the best known bounds, but gives better estimates for small values of n.
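The asymptotically optimal 1-deletion-correcting family Levenshtein found is the Varshamov-Tenengolts (VT) construction. A minimal, self-contained sketch (standard material, not this paper's new t-deletion construction) that builds a VT code and brute-force-checks the defining criterion that deletion balls of distinct codewords are disjoint:

```python
from itertools import combinations, product

def vt_code(n, a=0):
    """Varshamov-Tenengolts code VT_a(n): binary words x_1..x_n with
    sum(i * x_i) = a (mod n+1). Levenshtein proved these correct a
    single deletion."""
    return [x for x in product((0, 1), repeat=n)
            if sum(i * b for i, b in enumerate(x, start=1)) % (n + 1) == a]

def deletion_ball(x, t):
    """All distinct subsequences of x obtained by deleting t symbols."""
    return {tuple(x[i] for i in range(len(x)) if i not in drop)
            for drop in combinations(range(len(x)), t)}

def corrects_t_deletions(code, t):
    """A code corrects t deletions iff the t-deletion balls of distinct
    codewords are pairwise disjoint."""
    balls = [deletion_ball(c, t) for c in code]
    return all(balls[i].isdisjoint(balls[j])
               for i in range(len(balls)) for j in range(i + 1, len(balls)))

code = vt_code(8)                     # VT_0(8) has 30 codewords,
print(len(code))                      # matching the best known L2(8, 1)
print(corrects_t_deletions(code, 1))  # True: corrects one deletion
```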
Abstract:
Similar to classic Signal Detection Theory (SDT), the recent optimal Binary Signal Detection Theory (BSDT), and the Neural Network Assembly Memory Model (NNAMM) based on it, can successfully reproduce Receiver Operating Characteristic (ROC) curves, although the BSDT/NNAMM parameters (intensity of cue and neuron threshold) and the classic SDT parameters (perception distance and response bias) are essentially different. In the present work, BSDT/NNAMM optimal likelihood and posterior probabilities are analytically analysed and used to generate ROCs and modified (posterior) mROCs, the optimal overall likelihood, and the posterior. It is shown that, for the description of basic discrimination experiments in psychophysics within the BSDT, a 'neural space' can be introduced in which sensory stimuli are represented as neural codes and decision processes are defined; the BSDT's isobias curves can simultaneously be interpreted as universal psychometric functions satisfying the Neyman-Pearson objective; the just noticeable difference (jnd) can be defined and interpreted as an atom of experience; and near-neutral values of biases are observers' natural choice. The uniformity or no-priming hypothesis, concerning the 'in-mind' distribution of false-alarm probabilities during ROC or overall probability estimations, is introduced. The BSDT's and classic SDT's sensitivity, bias, and their ROC and decision spaces are compared.
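For comparison, here is how the classic SDT baseline mentioned above generates ROC curves; a sketch of the standard equal-variance Gaussian model (classic SDT only; the BSDT's cue-intensity and neuron-threshold parameterisation is not reproduced here):

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

def sdt_roc(d_prime, criteria):
    """Equal-variance Gaussian SDT: for sensitivity d' (perception
    distance) and response criterion c (bias), the hit and false-alarm
    rates are H = Phi(d'/2 - c) and F = Phi(-d'/2 - c); sweeping c
    traces out one ROC curve."""
    H = norm.cdf(d_prime / 2 - criteria)
    F = norm.cdf(-d_prime / 2 - criteria)
    return F, H

criteria = np.linspace(-3, 3, 121)      # liberal -> strict response bias
for d in (0.5, 1.0, 2.0):               # three sensitivity levels
    F, H = sdt_roc(d, criteria)
    auc = trapezoid(H[::-1], F[::-1])   # area under the ROC curve
    print(f"d' = {d:.1f}: AUC = {auc:.3f}")  # analytically Phi(d'/sqrt(2))
```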
Abstract:
Mathematics Subject Classification: 26D10.
Abstract:
Rising concentrations of atmospheric CO2 are changing the carbonate chemistry of the oceans, a process known as ocean acidification (OA). Absorption of this CO2 by the surface oceans is increasing the amount of total dissolved inorganic carbon (DIC) and bicarbonate ion (HCO3⁻) available for marine calcification, yet is simultaneously lowering the seawater pH and carbonate ion concentration ([CO3²⁻]), and thus the saturation state of seawater with respect to aragonite (Ω_arag). We investigated the relative importance of [HCO3⁻] versus [CO3²⁻] for early calcification by new recruits (primary polyps settled from zooxanthellate larvae) of two tropical coral species, Favia fragum and Porites astreoides. The polyps were reared over a range of Ω_arag values, which were manipulated both by acid addition at constant pCO2 (decreased total [HCO3⁻] and [CO3²⁻]) and by pCO2 elevation at constant alkalinity (increased [HCO3⁻], decreased [CO3²⁻]). Calcification after 2 weeks was quantified by weighing the complete skeleton (corallite) accreted by each polyp over the course of the experiment. Both species exhibited the same negative response to decreasing [CO3²⁻] whether Ω_arag was lowered by acid addition or by pCO2 elevation; calcification did not follow total DIC or [HCO3⁻]. Nevertheless, the calcification response to decreasing [CO3²⁻] was nonlinear. A statistically significant decrease in calcification was only detected between Ω_arag < 2.5 and Ω_arag = 1.1-1.5, where calcification of new recruits was reduced by 22-37% per 1.0 decrease in Ω_arag. Our results differ from many previous studies that report a linear coral calcification response to OA, and from those showing that calcification increases with increasing [HCO3⁻]. Clearly, the coral calcification response to OA is variable and complex. A deeper understanding of the biomineralization mechanisms and environmental conditions underlying these variable responses is needed to support informed predictions about future OA impacts on corals and coral reefs.
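For reference, the aragonite saturation state manipulated in the experiment is defined by standard carbonate chemistry (not specific to this study):

```latex
% Aragonite saturation state of seawater:
\[
  \Omega_{\mathrm{arag}}
  \;=\;
  \frac{[\mathrm{Ca}^{2+}]\,[\mathrm{CO}_3^{2-}]}{K^{*}_{\mathrm{sp}}},
\]
% where K*_sp is the stoichiometric solubility product of aragonite.
% Omega_arag > 1 favours precipitation; Omega_arag < 1 favours
% dissolution, which is why lowering [CO3^2-] by either manipulation
% lowers Omega_arag.
```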
Abstract:
Performing experiments on small-scale quantum computers is certainly a challenging endeavor. Many parameters need to be optimized to achieve high-fidelity operations. This can be done efficiently for operations acting on single qubits, as errors can be fully characterized. For multiqubit operations, though, this is no longer the case, as in the most general case, analyzing the effect of the operation on the system requires a full state tomography for which resources scale exponentially with the system size. Furthermore, in recent experiments, additional electronic levels beyond the two-level system encoding the qubit have been used to enhance the capabilities of quantum-information processors, which additionally increases the number of parameters that need to be controlled. For the optimization of the experimental system for a given task (e.g., a quantum algorithm), one has to find a satisfactory error model and also efficient observables to estimate the parameters of the model. In this manuscript, we demonstrate a method to optimize the encoding procedure for a small quantum error correction code in the presence of unknown but constant phase shifts. The method, which we implement here on a small-scale linear ion-trap quantum computer, is readily applicable to other AMO platforms for quantum-information processing.
Abstract:
The presence of high phase noise in addition to additive white Gaussian noise in coherent optical systems affects the performance of forward error correction (FEC) schemes. In this paper, we propose a simple scheme for such systems, using block interleavers and binary Bose–Chaudhuri–Hocquenghem (BCH) codes. The block interleavers are specifically optimized for differential quadrature phase shift keying modulation. We propose a method for selecting BCH codes that, together with the interleavers, achieve a target post-FEC bit error rate (BER). This combination of interleavers and BCH codes has very low implementation complexity. In addition, our approach is straightforward, requiring only short pre-FEC simulations to parameterize a model, based on which we select codes analytically. We aim to correct a pre-FEC BER of around (Formula presented.). We evaluate the accuracy of our approach using numerical simulations. For a target post-FEC BER of (Formula presented.), codes selected using our method result in BERs around 3× the (Formula presented.) target and achieve the target with around 0.2 dB extra signal-to-noise ratio.
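The code-selection step relies on predicting the post-FEC BER of a t-error-correcting BCH code from the pre-FEC BER. A minimal sketch of the standard hard-decision estimate under independent bit errors (the paper's model additionally captures the phase-noise burstiness that the interleavers are designed to break up; the n, t and BER values here are illustrative, not taken from the paper):

```python
from math import comb

def post_fec_ber(n, t, p):
    """Post-FEC BER estimate for a t-error-correcting block code of
    length n under i.i.d. bit errors with pre-FEC BER p: a block fails
    when more than t bits are in error, and a failed block with i wrong
    bits contributes roughly i/n residual errors per bit."""
    return sum(i / n * comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(t + 1, n + 1))

# Sweep the error-correcting capability t for a BCH-like length n = 1023
# at an illustrative pre-FEC BER of 1e-2:
for t in (20, 30, 40, 50):
    print(f"t = {t}: post-FEC BER ~ {post_fec_ber(1023, t, 1e-2):.2e}")
```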
Abstract:
Modelling of massive stars and supernovae (SNe) plays a crucial role in understanding galaxies. From this modelling we can derive fundamental constraints on stellar evolution, mass-loss processes, mixing, and the products of nucleosynthesis. Proper account must be taken of all important processes that populate and depopulate the levels (collisional excitation, de-excitation, ionization, recombination, photoionization, bound–bound processes). For the analysis of Type Ia SNe and core-collapse SNe (Types Ib, Ic and II), Fe-group elements are particularly important. Unfortunately, little data is currently available, and most noticeably absent are the photoionization cross-sections for the Fe-peak elements, which have high abundances in SNe. Important interactions for both photoionization and electron-impact excitation are calculated using the relativistic Dirac atomic R-matrix codes (DARC) for low-ionization stages of cobalt. All results are calculated up to photon energies of 45 eV and electron energies up to 20 eV. The wavefunction representation of Co III has been generated using GRASP0 by including the dominant 3d⁷, 3d⁶[4s, 4p], 3p⁴3d⁹ and 3p⁶3d⁹ configurations, resulting in 292 fine-structure levels. Electron-impact collision strengths and Maxwellian-averaged effective collision strengths across a wide range of astrophysically relevant temperatures are computed for Co III. In addition, statistically weighted level-resolved ground and metastable photoionization cross-sections are presented for Co II and compared directly with existing work.
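For reference, the Maxwellian-averaged effective collision strength computed for Co III is the standard thermal average of the dimensionless collision strength (textbook definition, as used throughout the R-matrix literature, not specific to this paper):

```latex
% Effective collision strength: Maxwellian average of the collision
% strength Omega_ij over the final electron energy E_f:
\[
  \Upsilon_{ij}(T_e)
  \;=\;
  \int_{0}^{\infty}
    \Omega_{ij}(E_f)\,
    \exp\!\left(-\frac{E_f}{k T_e}\right)
    \mathrm{d}\!\left(\frac{E_f}{k T_e}\right),
\]
% where T_e is the electron temperature and k is Boltzmann's constant.
```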
Abstract:
This paper revisits strongly-MDS convolutional codes with maximum distance profile (MDP). These are (non-binary) convolutional codes that have an optimum sequence of column distances and attain the generalized Singleton bound at the earliest possible time frame. These properties make such convolutional codes applicable over the erasure channel, since they are able to correct a large number of erasures per time interval. The existence of these codes has been shown only for some specific cases. This paper shows by construction the existence of convolutional codes that are both strongly-MDS and MDP for all choices of parameters.
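For context, the generalized Singleton bound that these codes attain is Rosenthal and Smarandache's bound on the free distance of a rate k/n convolutional code of degree δ (standard statement, not derived in this abstract):

```latex
% Generalized Singleton bound for an (n, k, delta) convolutional code:
\[
  d_{\mathrm{free}}
  \;\le\;
  (n - k)\left(\left\lfloor \frac{\delta}{k} \right\rfloor + 1\right)
  + \delta + 1 .
\]
% With delta = 0 this reduces to the classical Singleton bound
% d <= n - k + 1 for [n, k] block codes.
```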
Abstract:
Background: Body composition is affected by diseases and affects responses to medical treatments, dosages of medicines, etc., while an abnormal body composition contributes to the causation of many chronic diseases. While we have reliable biochemical tests for certain nutritional parameters of body composition, such as iron or iodine status, and we have harnessed nuclear physics to estimate the body's content of trace elements, the very basic quantification of body fat content and muscle mass remains highly problematic. Both body fat and muscle mass are vitally important, as they have opposing influences on chronic disease, but they have seldom been estimated as part of population health surveillance. Instead, most national surveys have merely reported BMI and waist circumference, or sometimes the waist/hip ratio; these indices are convenient but do not have any specific biological meaning. Anthropometry offers a practical and inexpensive method for muscle and fat estimation in clinical and epidemiological settings; however, its use is imperfect due to many limitations, such as a shortage of reference data, misuse of terminology, unclear assumptions, and the absence of properly validated anthropometric equations. To date, anthropometric methods are not sensitive enough to detect muscle and fat loss. Aims: The aim of this thesis is to estimate adipose/fat and muscle mass in health, disease and during weight loss by: 1. evaluating and critiquing the literature to identify the best published prediction equations for adipose/fat and muscle mass estimation; 2. deriving and validating adipose tissue and muscle mass prediction equations; and 3. evaluating the prediction equations, along with anthropometric indices and the best equations retrieved from the literature, in health, metabolic illness and during weight loss. Methods: A systematic review using the Cochrane Review method was carried out for muscle mass estimation papers that used MRI as the reference method. Fat mass estimation papers were critically reviewed. Data from subjects of mixed ethnicity, age and body mass who underwent whole-body magnetic resonance imaging to quantify adipose tissue and muscle mass (dependent variables) and anthropometry (independent variables) were used in the derivation/validation analysis. Multiple regression and Bland-Altman plots were applied to evaluate the prediction equations. To determine how well the equations identify metabolic illness, the English and Scottish health surveys were studied. Multiple regression and binary logistic regression were applied to assess model fit and associations. The populations were also divided into quintiles and relative risk was analysed. Finally, the prediction equations were evaluated by applying them to a pilot study of 10 subjects who underwent whole-body MRI, anthropometric measurements and muscle strength testing before and after weight loss, to determine how well the equations identify changes in adipose/fat mass and muscle mass. Results: The estimation of fat mass has serious problems. Despite advances in technology and science, prediction equations for the estimation of fat mass depend on limited historical reference data and remain dependent upon assumptions that have not yet been properly validated for different population groups. Muscle mass does not have the same conceptual problems; however, its measurement is still problematic and reference data are scarce.
The derivation and validation analysis in this thesis was satisfactory; compared to prediction equations in the literature, the derived equations performed similarly or even better. Applying the prediction equations to metabolic illness and weight-loss data showed how well the equations identify metabolic illness, with significant associations with diabetes, hypertension, HbA1c and blood pressure, and moderate to high correlations with MRI-measured adipose tissue and muscle mass before and after weight loss. Conclusion: Adipose tissue mass and, to an extent, muscle mass can now be estimated for many purposes as population or group means. However, these equations must not be used for assessing fatness or categorising individuals. Further exploration in different populations and health surveys would be valuable.
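To illustrate the agreement analysis used in the derivation/validation step, here is a minimal Bland-Altman sketch on synthetic data (the equation coefficients and data below are hypothetical placeholders, not the thesis's derived equations or subjects):

```python
import numpy as np

def bland_altman(reference, predicted):
    """Bland-Altman agreement between a reference measurement (e.g.
    MRI-derived muscle mass) and a prediction equation's output: mean
    bias and 95% limits of agreement (bias +/- 1.96 * SD of the
    differences)."""
    diff = np.asarray(predicted) - np.asarray(reference)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

rng = np.random.default_rng(0)
weight = rng.normal(75, 12, 200)        # body weight, kg
height = rng.normal(1.70, 0.09, 200)    # height, m
# Synthetic "MRI-measured" muscle mass with measurement noise (kg):
mri_muscle = 0.3 * weight + 10 * height - 10 + rng.normal(0, 1.5, 200)
# Hypothetical anthropometric prediction equation (illustrative only):
predicted = 0.29 * weight + 11 * height - 10.5

bias, (lo, hi) = bland_altman(mri_muscle, predicted)
print(f"bias = {bias:.2f} kg, "
      f"95% limits of agreement = [{lo:.2f}, {hi:.2f}] kg")
```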