996 results for Binary Cyclically Permutable Constant Weight Codes
Abstract:
In conjunction with an enhanced system for Agrobacterium-mediated plant transformation, a new binary bacterial artificial chromosome (BIBAC) vector has been developed that is capable of transferring at least 150 kb of foreign DNA into a plant nuclear genome. The transferred DNA appears to be intact in the majority of transformed tobacco plants analyzed and is faithfully inherited in the progeny. The ability to introduce high molecular weight DNA into plant chromosomes should accelerate gene identification and genetic engineering of plants and may lead to new approaches in studies of genome organization.
Abstract:
All immunoglobulins and T-cell receptors throughout phylogeny share regions of highly conserved amino acid sequence. To identify possible primitive immunoglobulins and immunoglobulin-like molecules, we utilized 3' RACE (rapid amplification of cDNA ends) and a highly conserved constant region consensus amino acid sequence to isolate a new immunoglobulin class from the sandbar shark Carcharhinus plumbeus. The immunoglobulin, termed IgW, in its secreted form consists of 782 amino acids and is expressed in both the thymus and the spleen. The molecule overall most closely resembles mu chains of the skate and human and a new putative antigen-binding molecule isolated from the nurse shark (NAR). The full-length IgW chain has a variable region resembling human and shark heavy-chain (VH) sequences and a novel joining segment containing the WGXGT motif characteristic of H chains. However, unlike any other H-chain-type molecule, it contains six constant (C) domains. The first C domain contains the cysteine residue characteristic of Cμ1 that would allow dimerization with a light (L) chain. The fourth and sixth domains also contain comparable cysteines that would enable dimerization with other H chains or homodimerization. Comparison of the sequences of IgW V and C domains shows homology greater than that found in comparisons among VH and Cμ or VL and CL, thereby suggesting that IgW may retain features of the primordial immunoglobulin in evolution.
Abstract:
"Retyped October, 1964"
Abstract:
As presently used, the immersed weight rate, I_l, is the volume rate, Q, of longshore transport, multiplied by a constant. For use in engineering problems, I_l must be converted back to the equivalent Q. The I_l formulation may be important where the unit weight of sand differs significantly from the unit weight of sand at the open-coast sites contributing data to the design curve. Increase in void ratio may result in a 10- to 20-percent increase in actual (as compared to predicted) shoaling volumes where sand accumulates in protected water. Void ratio should be measured in field studies of longshore transport.
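For engineering use, the conversion between the two quantities is commonly written in the Komar-Inman form; the coefficient below is a standard assumption on our part, not a value given in this abstract:

```latex
% rho_s, rho: densities of sand and water; g: gravity;
% a': volume fraction of solids correcting for pore space (~0.6),
% related to the void ratio e by a' = 1/(1 + e)
I_l = (\rho_s - \rho)\, g\, a'\, Q
\qquad\Longrightarrow\qquad
Q = \frac{I_l}{(\rho_s - \rho)\, g\, a'}
```

The dependence of a' on the void ratio e is what makes the measured void ratio matter when converting a predicted I_l back to shoaling volumes.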
Abstract:
Block copolymers have become an integral part of the preparation of complex architectures through self-assembly. The use of reversible addition-fragmentation chain transfer (RAFT) allows blocks ranging from functional to nonfunctional polymers to be made with predictable molecular weight distributions. This article models block formation by varying many of the kinetic parameters. The simulations provide insight into the overall polydispersities (PDIs) that will be obtained when the chain-transfer constants in the main equilibrium steps are varied from 100 to 0.5. When the first dormant block [polymer-S-C(Z)=S] has a PDI of 1 and the second propagating radical has a low reactivity to the RAFT moiety, the overall PDI will be greater than 1 and dependent on the weight fraction of each block. When the first block has a PDI of 2 and the second propagating radical has a low reactivity to the RAFT moiety, the PDI will decrease to around 1.5 because of random coupling of two broad distributions. It is also shown how we can in principle use only one RAFT agent to obtain block copolymers with any desired molecular weight distribution. We can accomplish this by maintaining the monomer concentration at a constant level in the reactor over the course of the reaction. (c) 2005 Wiley Periodicals, Inc.
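The weight-fraction dependence and the value of about 1.5 quoted above are consistent with the standard statistics of joining two independent blocks; the following derivation is a sketch of that textbook result, not a calculation taken from the article:

```latex
% Blocks with number-average masses M_A, M_B and dispersities D_A, D_B;
% for statistically independent blocks, means and variances add:
M_n = M_A + M_B, \qquad \sigma^2 = \sigma_A^2 + \sigma_B^2,
\qquad \sigma_i^2 = M_i^2\,(D_i - 1)
% With block fractions w_i = M_i / M_n this gives
D = 1 + w_A^2\,(D_A - 1) + w_B^2\,(D_B - 1)
% Example: D_A = D_B = 2 with w_A = w_B = 1/2 yields
% D = 1 + 0.25 + 0.25 = 1.5, matching the value quoted above.
```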
Abstract:
Living radical polymerization has allowed complex polymer architectures to be synthesized in bulk, solution, and water. The most versatile of these techniques is reversible addition-fragmentation chain transfer (RAFT), which allows a wide range of functional and nonfunctional polymers to be made with predictable molecular weight distributions (MWDs), ranging from very narrow to quite broad. The RAFT mechanism is complex, and how its kinetic parameters affect the rate of polymerization and the MWD is not obvious. Therefore, the aim of this article is to provide useful insights into the important kinetic parameters that control the rate of polymerization and the evolution of the MWD with conversion. We discuss how a change in the chain-transfer constant can affect the evolution of the MWD. It is shown how we can, in principle, use only one RAFT agent to obtain a polymer with any MWD. Retardation and inhibition are discussed in terms of (1) the leaving R group reactivity and (2) the intermediate radical termination model versus the slow fragmentation model. (c) 2005 Wiley Periodicals, Inc.
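One widely cited expression for how the chain-transfer (exchange) constant controls the attainable dispersity in degenerative-transfer systems is the Müller relation; quoting it here is our assumption about the relevant kinetics, not a formula reproduced from this article:

```latex
% x: fractional monomer conversion; DP_n: number-average degree of
% polymerization; C_ex: chain-transfer (exchange) constant
\frac{M_w}{M_n} \approx 1 + \frac{1}{DP_n}
  + \left(\frac{2 - x}{x}\right)\frac{1}{C_{ex}}
% At full conversion this tends to 1 + 1/DP_n + 1/C_ex, so a large
% C_ex gives a narrow MWD and a small C_ex a broad one.
```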
Abstract:
Dynamic binary translation is the process of translating, modifying and rewriting executable (binary) code from one machine to another at run-time. This process of low-level re-engineering consists of a reverse engineering phase followed by a forward engineering phase. UQDBT, the University of Queensland Dynamic Binary Translator, is a machine-adaptable translator. Adaptability is provided through the specification of properties of machines and their instruction sets, allowing the support of different pairs of source and target machines. Most binary translators are closely bound to a pair of machines, making analyses and code hard to reuse. Like most virtual machines, UQDBT performs generic optimizations that apply to a variety of machines. Frequently executed code is translated to native code by the use of edge weight instrumentation, which makes UQDBT converge more quickly than systems based on instruction speculation. In this paper, we describe the architecture and run-time feedback optimizations performed by the UQDBT system, and provide results obtained on the x86 and SPARC® platforms.
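As an illustration of the edge-weight idea, a minimal sketch follows; the data structures, threshold, and names are our own assumptions, not UQDBT's actual implementation:

```python
from collections import defaultdict

HOT_THRESHOLD = 50  # assumed tuning value

class EdgeProfilingTranslator:
    """Counts taken control-flow edges during interpretation and
    promotes a target block to native translation once an incoming
    edge becomes hot."""

    def __init__(self):
        self.edge_counts = defaultdict(int)  # (src, dst) -> executions
        self.native_cache = {}               # dst address -> native code

    def on_branch(self, src, dst):
        if dst in self.native_cache:
            return self.native_cache[dst]    # dispatch to translated code
        self.edge_counts[(src, dst)] += 1
        if self.edge_counts[(src, dst)] >= HOT_THRESHOLD:
            self.native_cache[dst] = self.translate_to_native(dst)
        return None                          # keep interpreting

    def translate_to_native(self, dst):
        # Placeholder for the forward-engineering phase: lift the
        # source-machine block, optimize, and emit target-machine code.
        return f"<native code for block at {dst:#x}>"

translator = EdgeProfilingTranslator()
for _ in range(60):                          # simulated hot loop back-edge
    translator.on_branch(0x400000, 0x400020)
print(translator.native_cache)
```

Counting edges rather than speculating on instruction traces means hot regions are identified from observed control flow, which is the property the abstract credits for faster convergence.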
Abstract:
Knowledge of the adsorption behavior of coal-bed gases, mainly under supercritical high-pressure conditions, is important for optimum design of production processes to recover coal-bed methane and to sequester CO2 in coal-beds. Here, we compare the two most rigorous adsorption methods based on the statistical mechanics approach, which are Density Functional Theory (DFT) and Grand Canonical Monte Carlo (GCMC) simulation, for single and binary mixtures of methane and carbon dioxide in slit-shaped pores ranging from around 0.75 to 7.5 nm in width, for pressures up to 300 bar and temperatures of 308-348 K, as a preliminary study for the CO2 sequestration problem. For single component adsorption, the isotherms generated by DFT, especially for CO2, do not match well with GCMC calculations, and simulation is subsequently pursued here to investigate the binary mixture adsorption. For binary adsorption, upon increase of pressure, the selectivity of carbon dioxide relative to methane in a binary mixture initially increases to a maximum value, and subsequently drops before attaining a constant value at pressures higher than 300 bar. While the selectivity increases with temperature in the initial pressure-sensitive region, the constant high-pressure value is also temperature independent. Optimum selectivity at any temperature is attained at a pressure of 90-100 bar at low bulk mole fraction of CO2, decreasing to approximately 35 bar at high bulk mole fractions. (c) 2005 American Institute of Chemical Engineers.
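The selectivity discussed here is conventionally defined as the ratio of adsorbed-phase to bulk-phase composition ratios; the abstract does not spell the definition out, so we state the standard form:

```latex
% x_i: adsorbed-phase mole fractions; y_i: bulk-gas mole fractions
S_{CO_2/CH_4} = \frac{x_{CO_2}/x_{CH_4}}{y_{CO_2}/y_{CH_4}}
% S > 1 indicates preferential uptake of CO2 over CH4 in the pore.
```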
Abstract:
The dynamics of drop formation and pinch-off have been investigated for a series of low viscosity elastic fluids possessing similar shear viscosities, but differing substantially in elastic properties. On initial approach to the pinch region, the viscoelastic fluids all exhibit the same global necking behavior that is observed for a Newtonian fluid of equivalent shear viscosity. For these low viscosity dilute polymer solutions, inertial and capillary forces form the dominant balance in this potential flow regime, with the viscous force being negligible. The approach to the pinch point, which corresponds to the point of rupture for a Newtonian fluid, is extremely rapid in such solutions, with the sudden increase in curvature producing very large extension rates at this location. In this region the polymer molecules are significantly extended, causing a localized increase in the elastic stresses, which grow to balance the capillary pressure. This prevents the necked fluid from breaking off, as would occur in the equivalent Newtonian fluid. Instead, a cylindrical filament forms in which elastic stresses and capillary pressure balance, and the radius decreases exponentially with time. A (0+1)-dimensional finitely extensible nonlinear elastic dumbbell theory incorporating inertial, capillary, and elastic stresses is able to capture the basic features of the experimental observations. Before the critical "pinch time" t_p, an inertial-capillary balance leads to the expected 2/3-power scaling of the minimum radius with time: R_min ∼ (t_p - t)^(2/3). However, the diverging deformation rate results in large molecular deformations and rapid crossover to an elastocapillary balance for times t > t_p. In this region, the filament radius decreases exponentially with time, R_min ∼ exp[(t_p - t)/λ_1], where λ_1 is the characteristic time constant of the polymer molecules. Relaxation times of polyethylene oxide solutions of varying concentrations and molecular weights, measured by high speed imaging of the rate of change of filament radius, are significantly higher than the relaxation times estimated from Rouse-Zimm theory, even though the solutions are within the dilute concentration region as determined using intrinsic viscosity measurements. The effective relaxation times exhibit the expected scaling with molecular weight but with an additional dependence on the concentration of the polymer in solution. This is consistent with the expectation that the polymer molecules are in fact highly extended during the approach to the pinch region (i.e., prior to the elastocapillary filament thinning regime) and subsequently, as the filament is formed, they are further extended by filament stretching at a constant rate until full extension of the polymer coil is achieved. In this highly extended state, intermolecular interactions become significant, producing relaxation times far above theoretical predictions for dilute polymer solutions under equilibrium conditions. (c) 2006 American Institute of Physics.
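The Rouse-Zimm estimate mentioned above is typically computed from the measured intrinsic viscosity; the exact prefactor varies across the literature, so the form below is an assumption for orientation only:

```latex
% eta_s: solvent viscosity; [eta]: intrinsic viscosity; M_w: molar mass
\lambda_{Zimm} \simeq F\,\frac{[\eta]\, M_w\, \eta_s}{N_A k_B T}
% F is an order-unity prefactor that depends on solvent quality.
```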
Abstract:
The dynamics of fibre slippage within general non-bonded fibrous assemblies is studied in the situation where the assembly is subjected to general small cyclic loads. Two models are proposed. The first is applicable when the general cyclic loading is complemented by an occasional tugging force on one end of a fibre, which causes it to gradually withdraw from the assembly, such as might occur during the pilling of a textile. The second considers the situation in which the cyclic perturbations act around a constant background load applied to the assembly. The dynamics is reminiscent of self-organized critical behaviour. This model is applied to predict the progressive elongation of a single yarn during weaving.
Abstract:
We investigate the performance of Gallager-type error-correcting codes for Binary Symmetric Channels, where the code word comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index. Shannon's bound for the channel capacity is recovered for large K and zero temperature when the code rate K/C is finite. Close to optimal error-correcting capability, with improved decoding properties, is obtained for finite K and C.
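The Shannon bound referred to here is the capacity of the binary symmetric channel; this is standard information theory rather than a result of the paper (we write C_chan to avoid a clash with the connectivity C above):

```latex
% BSC with flip probability p; H_2 is the binary entropy function
C_{chan} = 1 - H_2(p), \qquad
H_2(p) = -p\log_2 p - (1 - p)\log_2(1 - p)
% The construction above operates at code rate R = K/C.
```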
Abstract:
A variation of low-density parity check (LDPC) error-correcting codes defined over Galois fields (GF(q)) is investigated using statistical physics. A code of this type is characterised by a sparse random parity check matrix composed of C non-zero elements per column. We examine the dependence of the code performance on the value of q, for finite and infinite C values, both in terms of the thermodynamical transition point and the practical decoding phase characterised by the existence of a unique (ferromagnetic) solution. We find different q-dependence in the cases of C = 2 and C ≥ 3; the analytical solutions are in agreement with simulation results, providing a quantitative measure to the improvement in performance obtained using non-binary alphabets.
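A minimal sketch of the kind of matrix ensemble described, with exactly C non-zero GF(q) entries per column, follows; the construction details are illustrative assumptions, not the paper's exact ensemble:

```python
import numpy as np

def random_parity_check(n, m, C, q, seed=None):
    """Random m x n parity-check matrix over GF(q) with exactly C
    non-zero entries per column (toy ensemble for illustration)."""
    rng = np.random.default_rng(seed)
    H = np.zeros((m, n), dtype=int)
    for col in range(n):
        rows = rng.choice(m, size=C, replace=False)  # C checks per symbol
        H[rows, col] = rng.integers(1, q, size=C)    # non-zero GF(q) values
    return H

# Example: 12 symbols, 6 checks, C = 3, over GF(4)
print(random_parity_check(12, 6, 3, 4, seed=0))
```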
Abstract:
We study the performance of Low Density Parity Check (LDPC) error-correcting codes using the methods of statistical physics. LDPC codes are based on the generation of codewords using Boolean sums of the original message bits by employing two randomly constructed sparse matrices. These codes can be mapped onto Ising spin models and studied using common methods of statistical physics. We examine various regular constructions and obtain insight into their theoretical and practical limitations. We also briefly report on results obtained for irregular code constructions, for codes with non-binary alphabets, and on how a finite system size affects the error probability.
Abstract:
Modern digital communication systems achieve reliable transmission by employing error-correction techniques that introduce redundancy. Low-density parity-check codes work along the same principles as the Hamming code, but the parity-check matrix is very sparse and multiple errors can be corrected. The sparseness of the matrix allows the decoding process to be carried out by probability-propagation methods similar to those employed in Turbo codes. The relation between spin systems in statistical physics and digital error-correcting codes is based on the existence of a simple isomorphism between the additive Boolean group and the multiplicative binary group. Shannon proved general results on the natural limits of compression and error correction by setting up the framework known as information theory. Error-correction codes are based on mapping the original space of words onto a higher-dimensional space in such a way that the typical distance between encoded words increases.
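The isomorphism invoked here is the usual change of variables between bits and Ising spins; stating it explicitly:

```latex
% Map a bit x in {0,1} to a spin sigma in {+1,-1}:
\sigma = (-1)^x
% Addition mod 2 becomes multiplication,
\sigma_1 \sigma_2 = (-1)^{x_1 \oplus x_2},
% so parity checks on bits map to products of Ising spins.
```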
Abstract:
Typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. The binary-input additive-white-Gaussian-noise channel and the binary-input Laplace channel are considered as specific channel noise models.
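The two channel models named are conventionally defined as follows (standard definitions, not taken from the paper):

```latex
% Binary-input AWGN channel: x in {+1,-1} transmitted, y received
y = x + n, \qquad n \sim \mathcal{N}(0, \sigma^2)
% Binary-input Laplace channel with scale parameter b:
p(y \mid x) = \frac{1}{2b}\,\exp\!\left(-\frac{|y - x|}{b}\right)
```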