933 results for Low Autocorrelation Binary Sequence Problem
Abstract:
We completed the genome sequence of Lettuce necrotic yellows virus (LNYV) by determining the nucleotide sequences of the 4a (putative phosphoprotein), 4b, M (matrix protein), G (glycoprotein) and L (polymerase) genes. The genome consists of 12,807 nucleotides and encodes six genes in the order 3' leader-N-4a(P)-4b-M-G-L-5' trailer. Sequences were derived from clones of a cDNA library from LNYV genomic RNA and from fragments amplified using reverse transcription-polymerase chain reaction. The 4a protein has a low isoelectric point characteristic of rhabdovirus phosphoproteins. The 4b protein has significant sequence similarities with the movement proteins of capillo- and trichoviruses and may be involved in cell-to-cell movement. The putative G protein sequence contains a predicted 25-amino-acid signal peptide and endopeptidase cleavage site, three predicted glycosylation sites and a putative transmembrane domain. The deduced L protein sequence shows similarities with the L proteins of other plant rhabdoviruses and contains polymerase module motifs characteristic of RNA-dependent RNA polymerases of negative-strand RNA viruses. Phylogenetic analysis of this motif among rhabdoviruses placed LNYV in a group with other sequenced cytorhabdoviruses, most closely related to Strawberry crinkle virus. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
Dynamic binary translation is the process of translating, modifying and rewriting executable (binary) code from one machine to another at run-time. This process of low-level re-engineering consists of a reverse engineering phase followed by a forward engineering phase. UQDBT, the University of Queensland Dynamic Binary Translator, is a machine-adaptable translator. Adaptability is provided through the specification of properties of machines and their instruction sets, allowing the support of different pairs of source and target machines. Most binary translators are closely bound to a pair of machines, making analyses and code hard to reuse. Like most virtual machines, UQDBT performs generic optimizations that apply to a variety of machines. Frequently executed code is translated to native code using edge weight instrumentation, which makes UQDBT converge more quickly than systems based on instruction speculation. In this paper, we describe the architecture and run-time feedback optimizations performed by the UQDBT system, and present results obtained on the x86 and SPARC® platforms.
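As a rough illustration of the edge-weight idea described above (not UQDBT's actual mechanism; the block names, threshold and helper function are invented), the sketch below counts how often each control-flow edge fires and "retranslates" a block to native code once an incoming edge becomes hot:

```python
# Hypothetical sketch: edge-weight instrumentation driving hot-path retranslation.
from collections import defaultdict

HOT_THRESHOLD = 50                    # invented promotion threshold
edge_counts = defaultdict(int)        # (src_block, dst_block) -> execution count
native_cache = {}                     # block id -> "natively translated" code

def translate_to_native(block_id):
    """Stand-in for the forward-engineering phase that emits target-machine code."""
    return f"<native code for {block_id}>"

def execute(trace):
    """Walk a trace of basic-block ids, counting edges and promoting hot targets."""
    for src, dst in zip(trace, trace[1:]):
        edge_counts[(src, dst)] += 1
        if edge_counts[(src, dst)] >= HOT_THRESHOLD and dst not in native_cache:
            native_cache[dst] = translate_to_native(dst)

# A loop body (B2 -> B3 -> B2 ...) executed many times becomes hot and is retranslated.
execute(["B1"] + ["B2", "B3"] * 100 + ["B4"])
print(sorted(native_cache))           # ['B2', 'B3']
```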
Abstract:
The Burdekin River of northeastern Australia has constructed a substantial delta during the Holocene (delta plain area 1260 km²). The vertical succession through this delta comprises (1) a basal, coarse-grained transgressive lag overlying a continental omission surface, overlain by (2) a mud interval deposited as the coastal region was inundated by the postglacially rising sea, in turn overlain by (3) a generally sharp-based sand unit deposited principally in channel and mouth-bar environments with lesser volumes of floodplain and coastal facies. The Holocene Burdekin Delta was constructed as a series of at least thirteen discrete delta lobes, formed as the river avulsed. Each lobe consists of a composite sand body typically 5-8 m thick. The oldest lobes, formed during the latter stages of the postglacial sea-level rise (10-5.5 kyr BP), are larger than those formed during the highstand (5.5-3 kyr BP), which are in turn larger than those formed during the most recent slight sea-level lowering and stillstand (3-0 kyr BP). Radiocarbon ages and other stratigraphic data indicate that the inter-avulsion period has decreased through time, coincident with the decrease in delta lobe area. The primary control on Holocene delta architecture appears to have been a change from a pluvial climate known to characterize the region 12-4 kyr BP to the present drier, ENSO-dominated climate. In addition to decreasing the sediment supply via lower rates of chemical weathering, this change may have contributed to the shorter avulsion period by facilitating extreme variability of discharge. More frequent avulsion may also have been facilitated by the lengthening of the delta-plain channels as the system prograded seaward. Copyright © 2006, SEPM (Society for Sedimentary Geology).
Abstract:
Motivation: Conformational flexibility is essential to the function of many proteins, e.g. catalytic activity. To assist efforts in determining and exploring the functional properties of a protein, it is desirable to automatically identify regions that are prone to undergo conformational changes. It was recently shown that a probabilistic predictor of continuum secondary structure is more accurate than categorical predictors for structurally ambivalent sequence regions, suggesting that such models are suited to characterize protein flexibility. Results: We develop a computational method for identifying regions that are prone to conformational change directly from the amino acid sequence. The method uses the entropy of the probabilistic output of an 8-class continuum secondary structure predictor. Results for 171 unique amino acid sequences with well-characterized variable structure (identified in the 'Macromolecular movements database') indicate that the method is highly sensitive at identifying flexible protein regions, but false positives remain a problem. The method can be used to explore conformational flexibility of proteins (including hypothetical or synthetic ones) whose structure is yet to be determined experimentally.
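A minimal sketch of the entropy criterion described above; the per-residue probabilities and the flagging threshold below are invented, whereas a real run would take the 8-class distributions from the continuum secondary structure predictor:

```python
# Flag residues whose predicted 8-class secondary-structure distribution is high-entropy
# (structurally ambivalent). Probabilities and threshold are illustrative only.
import math

def shannon_entropy(probs):
    """Entropy (in bits) of one residue's distribution over the 8 DSSP classes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def flag_flexible(per_residue_probs, threshold=1.5):
    """Return indices of residues whose predicted distribution exceeds the entropy threshold."""
    return [i for i, probs in enumerate(per_residue_probs)
            if shannon_entropy(probs) > threshold]

# Two residues: one confidently assigned to a single class, one ambiguous between several.
example = [
    [0.90, 0.02, 0.02, 0.02, 0.01, 0.01, 0.01, 0.01],
    [0.25, 0.20, 0.15, 0.15, 0.10, 0.05, 0.05, 0.05],
]
print(flag_flexible(example))   # -> [1]
```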
Abstract:
Bang-bang phase-detector-based PLLs are simple to design, suffer no systematic phase error, and can run at the highest speed at which a process can make a working flip-flop. For these reasons designers are employing them in very high speed Clock Data Recovery (CDR) architectures. The major drawback of this class of PLL is the inherent jitter due to quantized phase and frequency corrections. Reducing the loop gain can proportionally improve jitter performance, but it also lengthens locking time and shrinks the pull-in range. This paper presents a novel PLL design that dynamically scales its gain in order to achieve fast lock times while improving jitter performance in lock. Under certain circumstances the design also demonstrates improved capture range. This paper also analyses the behaviour of a bang-bang type PLL when far from lock, and demonstrates that the pull-in range is proportional to the square root of the PLL loop gain.
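The gain-scaling idea can be illustrated with a toy discrete-time loop (the update rule and every constant below are invented for illustration, not taken from the paper): consecutive early/late decisions in the same direction enlarge the correction step for fast acquisition, while alternating decisions shrink it to reduce dither, i.e. jitter, in lock.

```python
# Toy sketch of adaptive gain scaling in a loop driven by a binary (bang-bang) phase detector.
def bang_bang_lock(initial_phase_err=0.4, steps=200,
                   gain=0.01, g_min=1e-5, g_max=0.2):
    phase_err, prev_sign = initial_phase_err, 0
    for _ in range(steps):
        sign = 1 if phase_err > 0 else -1      # binary early/late decision
        if sign == prev_sign:
            gain = min(gain * 1.5, g_max)      # far from lock: scale the gain up
        else:
            gain = max(gain * 0.5, g_min)      # dithering around lock: scale it down
        phase_err -= gain * sign               # quantized phase correction
        prev_sign = sign
    return phase_err, gain

# Phase error shrinks toward zero while the gain collapses toward g_min.
print(bang_bang_lock())
```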
Abstract:
The aim of this study was to identify a set of genetic polymorphisms that efficiently divides methicillin-resistant Staphylococcus aureus (MRSA) strains into groups consistent with the population structure. The rationale was that such polymorphisms could underpin rapid real-time PCR or low-density array-based methods for monitoring MRSA dissemination in a cost-effective manner. Previously, the authors devised a computerized method for identifying sets of single nucleotide polymorphisms (SNPs) with high resolving power that are defined by multilocus sequence typing (MLST) databases, and also developed a real-time PCR method for interrogating a seven-member SNP set for genotyping S. aureus. Here, it is shown that these seven SNPs efficiently resolve the major MRSA lineages and define 27 genotypes. The SNP-based genotypes are consistent with the MRSA population structure as defined by eBURST analysis. The capacity of binary markers to improve resolution was tested using 107 diverse MRSA isolates of Australian origin that encompass nine SNP-based genotypes. The addition of the virulence-associated genes cna, pvl and bbp/sdrE, and the integrated plasmids pT181, pI258 and pUB110, resolved the nine SNP-based genotypes into 21 combinatorial genotypes. Subtyping of the SCCmec locus revealed new SCCmec types and increased the number of combinatorial genotypes to 24. It was concluded that these polymorphisms provide a facile means of assigning MRSA isolates into well-recognized lineages.
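For illustration only (the SNP genotype label, marker calls and SCCmec subtype below are invented), combining the SNP-based assignment with presence/absence calls for the binary markers yields a single combinatorial genotype label:

```python
# Hypothetical example of building a combinatorial genotype from a SNP assignment,
# binary marker presence/absence calls and an SCCmec subtype. Values are made up.
snp_genotype = "ST93-like"                      # assigned from the seven-member SNP set
binary_markers = {"cna": 1, "pvl": 1, "bbp/sdrE": 0,
                  "pT181": 0, "pI258": 1, "pUB110": 0}
sccmec_subtype = "IVa"

profile = "".join(str(v) for v in binary_markers.values())
combinatorial = f"{snp_genotype}:{profile}:{sccmec_subtype}"
print(combinatorial)   # -> ST93-like:110010:IVa
```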
Abstract:
Hannenhalli and Pevzner developed the first polynomial-time algorithm for the combinatorial problem of sorting signed genomic data. Their algorithm computes the minimum number of reversals required to rearrange one genome into another when there is no gene duplication. In this paper, we show how to extend the Hannenhalli-Pevzner approach to genomes with multigene families. We propose a new heuristic algorithm to compute the reversal distance between two genomes with multigene families via the concept of binary integer programming without removing gene duplicates. The experimental results on simulated and real biological data demonstrate that the proposed algorithm is able to find the reversal distance accurately. ©2005 IEEE
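To make the notion of reversal distance concrete, the sketch below is neither the Hannenhalli-Pevzner algorithm nor the paper's integer-programming formulation; it is just an exhaustive search over a toy instance without duplicate genes:

```python
# Brute-force breadth-first search for the minimum number of signed reversals
# between two tiny genomes (only practical for very small examples).
from collections import deque

def reverse_segment(genome, i, j):
    """Reverse genome[i..j] inclusive, flipping the sign (orientation) of each gene."""
    return genome[:i] + tuple(-g for g in reversed(genome[i:j+1])) + genome[j+1:]

def reversal_distance(source, target):
    source, target = tuple(source), tuple(target)
    seen, frontier = {source}, deque([(source, 0)])
    while frontier:
        genome, d = frontier.popleft()
        if genome == target:
            return d
        for i in range(len(genome)):
            for j in range(i, len(genome)):
                nxt = reverse_segment(genome, i, j)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, d + 1))
    return None

print(reversal_distance([+3, -1, +2], [+1, +2, +3]))   # -> 3
```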
Abstract:
A formalism for describing the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics is applied to the problem of generalization in a perceptron with binary weights. The dynamics are solved for the case where a new batch of training patterns is presented to each population member each generation, which considerably simplifies the calculation. The theory is shown to agree closely with simulations of a real GA averaged over many runs, accurately predicting the mean best solution found. For weak selection and large problem size the difference equations describing the dynamics can be expressed analytically, and we find that the effects of noise due to the finite size of each training batch can be removed by increasing the population size appropriately. If this population resizing is used, one can deduce the most computationally efficient size of training batch each generation. For independent patterns this choice also gives the minimum total number of training patterns used. Although using independent patterns is a very inefficient use of training patterns in general, this work may also prove useful for determining the optimum batch size in the case where patterns are recycled.
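A rough simulation sketch of the setup analysed above, with invented parameters and deliberately simplified genetic operators (truncation selection, uniform crossover, single-site mutation): binary perceptron weights evolve against a teacher, and each generation's fitness is evaluated on a freshly drawn batch of patterns, so fitness is a noisy estimate of generalization.

```python
# GA over binary perceptron weights with a new training batch every generation.
import numpy as np

rng = np.random.default_rng(0)
N, POP, BATCH, GENS = 51, 60, 40, 200
teacher = rng.choice([-1, 1], size=N)                 # target binary weight vector

def fitness(population, patterns):
    """Fraction of the batch each member classifies like the teacher."""
    targets = np.sign(patterns @ teacher)
    outputs = np.sign(patterns @ population.T)        # shape (BATCH, POP)
    return (outputs == targets[:, None]).mean(axis=0)

population = rng.choice([-1, 1], size=(POP, N))
for _ in range(GENS):
    patterns = rng.standard_normal((BATCH, N))        # fresh batch each generation
    f = fitness(population, patterns)
    parents = population[np.argsort(f)[-POP // 2:]]   # truncation selection
    idx_a = rng.integers(0, len(parents), POP)        # uniform crossover
    idx_b = rng.integers(0, len(parents), POP)
    mask = rng.random((POP, N)) < 0.5
    population = np.where(mask, parents[idx_a], parents[idx_b])
    flip = rng.integers(0, N, POP)                    # single-site mutation
    population[np.arange(POP), flip] *= -1

overlap = (population @ teacher) / N                  # generalization proxy per member
print(f"best overlap with teacher: {overlap.max():.2f}")
```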
Abstract:
Visual impairment is a large and growing socioeconomic problem. Good evidence on rehabilitation outcomes is required to guide service development and improve the lives of people with sight loss. Of the 478 potentially relevant articles identified, only 58 studies met our liberal inclusion criteria, and of these only 7 were randomized controlled trials. Although the literature is sufficient to confirm that rehabilitation services result in improved clinical and functional ability outcomes, the effects on mood, vision-related quality of life (QoL) and health-related QoL are less clear. There are some good data on the performance of particular types of intervention, but almost no useful data about outcomes in children, those of working age, and other groups. There were no reports on cost effectiveness. Overall, the number of well-designed and adequately reported studies is pitifully small; visual rehabilitation needs higher-quality research. We highlight study design and reporting considerations and suggest a future research agenda.
Abstract:
A variation of low-density parity check (LDPC) error-correcting codes defined over Galois fields (GF(q)) is investigated using statistical physics. A code of this type is characterised by a sparse random parity check matrix composed of C non-zero elements per column. We examine the dependence of the code performance on the value of q, for finite and infinite C values, both in terms of the thermodynamical transition point and the practical decoding phase characterised by the existence of a unique (ferromagnetic) solution. We find different q-dependence in the cases of C = 2 and C ≥ 3; the analytical solutions are in agreement with simulation results, providing a quantitative measure to the improvement in performance obtained using non-binary alphabets.
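As a small illustration of the construction (sizes and densities invented; for prime q, arithmetic in GF(q) reduces to arithmetic mod q, which keeps the sketch short), a regular parity-check matrix with C non-zero entries per column and its syndrome computation might look as follows:

```python
# Illustrative column-regular parity-check matrix over GF(q), q prime.
import numpy as np

rng = np.random.default_rng(2)
q, C, n_cols, n_rows = 5, 3, 12, 6

H = np.zeros((n_rows, n_cols), dtype=int)
for col in range(n_cols):
    rows = rng.choice(n_rows, size=C, replace=False)   # C checks touch this symbol
    H[rows, col] = rng.integers(1, q, size=C)          # non-zero GF(q) coefficients

word = rng.integers(0, q, size=n_cols)                 # a candidate word over GF(q)
syndrome = H @ word % q                                # all-zero only for codewords
print((H != 0).sum(axis=0), syndrome)                  # every column has weight C
```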
Abstract:
We study the performance of Low Density Parity Check (LDPC) error-correcting codes using the methods of statistical physics. LDPC codes are based on the generation of codewords using Boolean sums of the original message bits by employing two randomly-constructed sparse matrices. These codes can be mapped onto Ising spin models and studied using common methods of statistical physics. We examine various regular constructions and obtain insight into their theoretical and practical limitations. We also briefly report on results obtained for irregular code constructions, for codes with non-binary alphabet, and on how a finite system size affects the error probability.
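One way to read "two randomly-constructed sparse matrices" is the MacKay-Neal-style construction often used in the statistical-physics literature, sketched below with invented sizes and densities: the transmission t solves B t = A s (mod 2), with A sparse and random and B sparse, lower-triangular and therefore invertible over GF(2).

```python
# Sketch of encoding with two sparse matrices over GF(2); parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
K, M = 4, 8                                   # message bits, transmitted bits

A = (rng.random((M, K)) < 0.4).astype(int)    # sparse random matrix
B = np.tril((rng.random((M, M)) < 0.3).astype(int), -1)
np.fill_diagonal(B, 1)                        # unit diagonal -> invertible mod 2

def encode(s):
    """Solve B t = A s over GF(2) by forward substitution."""
    rhs = (A @ s) % 2
    t = np.zeros(M, dtype=int)
    for i in range(M):
        t[i] = (rhs[i] - B[i, :i] @ t[:i]) % 2
    return t

s = np.array([1, 0, 1, 1])
t = encode(s)
print(t, ((B @ t) % 2 == (A @ s) % 2).all())  # codeword satisfies B t = A s (mod 2)
```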
Abstract:
Modern digital communication systems achieve reliable transmission by employing error-correction techniques that introduce redundancy. Low-density parity-check codes work along the principles of the Hamming code, but their parity-check matrix is very sparse, so multiple errors can be corrected. The sparseness of the matrix allows the decoding process to be carried out by probability propagation methods similar to those employed in Turbo codes. The relation between spin systems in statistical physics and digital error-correcting codes is based on the existence of a simple isomorphism between the additive Boolean group and the multiplicative binary group. Shannon proved general results on the natural limits of compression and error-correction by setting up the framework known as information theory. Error-correction codes are based on mapping the original space of words onto a higher-dimensional space in such a way that the typical distance between encoded words increases.
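The isomorphism referred to here is the map x ↦ (-1)^x, which turns XOR of bits into multiplication of ±1 spins; a two-line check:

```python
# Verify that (-1)^(x XOR y) equals (-1)^x * (-1)^y for all bit pairs,
# i.e. parity checks on bits translate into products of Ising spins.
for x in (0, 1):
    for y in (0, 1):
        assert (-1) ** (x ^ y) == ((-1) ** x) * ((-1) ** y)
print("XOR of bits corresponds to product of spins")
```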
Abstract:
Typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. The binary-input additive-white-Gaussian-noise channel and the binary-input Laplace channel are considered as specific channel noise models.
Abstract:
We present a theoretical method for a direct evaluation of the average error exponent in Gallager error-correcting codes using methods of statistical physics. Results for the binary symmetric channel (BSC) are presented for codes of both finite and infinite connectivity.
Abstract:
We present a theoretical method for a direct evaluation of the average and reliability error exponents in low-density parity-check error-correcting codes using methods of statistical physics. Results for the binary symmetric channel are presented for codes of both finite and infinite connectivity.