957 results for Binary Coded Decimal


Relevance:

10.00%

Publisher:

Abstract:

In a weighted spatial network, as specified by an exchange matrix, the variances of the spatial values are inversely proportional to the size of the regions. Spatial values are no longer exchangeable under independence, thus weakening the rationale for ordinary permutation and bootstrap tests of spatial autocorrelation. We propose an alternative permutation test for spatial autocorrelation, based upon exchangeable spatial modes, constructed as linear orthogonal combinations of spatial values. The coefficients are obtained as the eigenvectors of the standardised exchange matrix appearing in spectral clustering, and generalise to the weighted case the concept of spatial filtering for connectivity matrices. Also, two proposals aimed at transforming an accessibility matrix into an exchange matrix with a priori fixed margins are presented. Two examples (inter-regional migratory flows and binary adjacency networks) illustrate the formalism, rooted in the theory of spectral decomposition for reversible Markov chains.
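
A minimal sketch of how such a modes-based permutation test might be set up, assuming a symmetric non-negative exchange matrix E with grand total 1, regional weights given by its margins, and a Moran-like index as test statistic; the weighting conventions, the handling of the trivial mode and the statistic itself are illustrative choices, not the authors' exact procedure.

```python
import numpy as np

def modes_permutation_test(E, x, n_perm=9999, seed=0):
    """Permutation test for spatial autocorrelation on exchangeable spatial modes."""
    rng = np.random.default_rng(seed)
    f = E.sum(axis=1)                               # regional weights (margins of E)
    Es = E / np.sqrt(np.outer(f, f))                # standardised exchange matrix
    lam, U = np.linalg.eigh(Es)                     # spectral decomposition
    keep = np.argsort(lam)[:-1]                     # drop the trivial mode (eigenvalue 1)
    lam, U = lam[keep], U[:, keep]
    xc = np.sqrt(f) * (x - np.sum(f * x))           # weighted, centred field
    c2 = (U.T @ xc) ** 2                            # squared coordinates on the spatial modes
    stat = np.sum(lam * c2) / np.sum(c2)            # Moran-like autocorrelation index
    perm = np.array([np.sum(rng.permutation(lam) * c2) / np.sum(c2)
                     for _ in range(n_perm)])       # modes treated as exchangeable under H0
    p_value = (1 + np.sum(perm >= stat)) / (n_perm + 1)
    return stat, p_value
```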

Relevance:

10.00%

Publisher:

Abstract:

We consider systems that can be described in terms of two kinds of degree of freedom. The corresponding ordering modes may, under certain conditions, be coupled to each other. We may thus assume that the primary ordering mode gives rise to a diffusionless first-order phase transition. The change of its thermodynamic properties as a function of the secondary-ordering-mode state is then analyzed. Two specific examples are discussed. First, we study a three-state Potts model in a binary system. Using mean-field techniques, we obtain the phase diagram and different properties of the system as a function of the distribution of atoms on the different lattice sites. In the second case, the properties of a displacive structural phase transition of martensitic type in a binary alloy are studied as a function of atomic order. Because of the directional character of the martensitic-transition mechanism, we find only a very weak dependence of the entropy on atomic order. Experimental results are found to be in quite good agreement with theoretical predictions.
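
For orientation, the three-state Potts model invoked above is, in its generic form (the coupling to the atomic configuration of the binary system is model specific and not written out here),

$$ H = -J \sum_{\langle i,j\rangle} \delta_{\sigma_i \sigma_j}, \qquad \sigma_i \in \{1,2,3\}, $$

with the mean-field treatment replacing the local occupation probabilities of the three states by their averages.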

Relevance:

10.00%

Publisher:

Abstract:

Front and domain growth of a binary mixture in the presence of a gravitational field is studied. The interplay of bulk- and surface-diffusion mechanisms is analyzed. An equation for the evolution of interfaces is derived from a time-dependent Ginzburg-Landau equation with a concentration-dependent diffusion coefficient. Scaling arguments on this equation give the exponents of a power-law growth. Numerical integrations of the Ginzburg-Landau equation corroborate the theoretical analysis.
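
A generic form of such a conserved (model-B) evolution equation, written here only to fix ideas and not necessarily with the exact free energy, mobility or gravitational coupling used in the paper, is

$$ \frac{\partial\phi}{\partial t} = \nabla\cdot\Big[ M(\phi)\,\nabla\frac{\delta F[\phi]}{\delta\phi} \Big], \qquad F[\phi] = \int d\mathbf{r}\,\Big[ \tfrac{\varepsilon^2}{2}\,|\nabla\phi|^2 + V(\phi) + g\,z\,\phi \Big], $$

where the gravitational field enters through the $g\,z\,\phi$ term (sign set by which phase is denser) and a concentration-dependent mobility that vanishes in the bulk phases, e.g. $M(\phi)\propto 1-\phi^2$, favours surface diffusion over bulk diffusion.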

Relevance:

10.00%

Publisher:

Abstract:

We show how to decompose any density matrix of the simplest binary composite systems, whether separable or not, in terms of only product vectors. We determine for all cases the minimal number of product vectors needed for such a decomposition. Separable states correspond to mixing from one to four pure product states. Inseparable states can be described as pseudomixtures of four or five pure product states, and can be made separable by mixing them with one or two pure product states.
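
A hedged numerical illustration of the setting described above: build a two-qubit state as a mixture of pure product vectors and check separability with the partial transpose, which is necessary and sufficient in the 2x2 case. The particular product vectors and weights below are arbitrary examples, not the decomposition scheme of the paper.

```python
import numpy as np

def product_state(a, b):
    """Density matrix of the normalised product vector |a> (x) |b>."""
    v = np.kron(np.asarray(a, dtype=complex), np.asarray(b, dtype=complex))
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

# mix three pure product states (a separable two-qubit state needs at most four)
kets = [([1, 0], [1, 0]), ([0, 1], [1, 1]), ([1, 1], [0, 1])]
weights = [0.5, 0.3, 0.2]
rho = sum(w * product_state(a, b) for w, (a, b) in zip(weights, kets))

# partial transpose on the second qubit; a non-negative spectrum <=> separability (2x2 case)
rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
print(np.linalg.eigvalsh(rho_pt).min() >= -1e-12)   # True for this separable mixture
```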

Relevance:

10.00%

Publisher:

Abstract:

With the trend in molecular epidemiology towards both genome-wide association studies and complex modelling, the need for large sample sizes to detect small effects and to allow for the estimation of many parameters within a model continues to increase. Unfortunately, most methods of association analysis have been restricted to either a family-based or a case-control design, preventing the synthesis of data from multiple studies. Transmission disequilibrium-type methods for detecting linkage disequilibrium from family data were developed as an effective way of preventing the detection of association due to population stratification. Because these methods condition on parental genotype, however, they have precluded the joint analysis of family and case-control data, although methods for case-control data may not protect against population stratification and do not allow for familial correlations. We present here an extension of a family-based association analysis method for continuous traits that will simultaneously test for, and if necessary control for, population stratification. We further extend this method to analyse binary traits (and therefore family and case-control data together) and to accurately estimate genetic effects in the population, even when using an ascertained family sample. Finally, we present the power of this binary extension for both family-only and joint family and case-control data, and demonstrate the accuracy of the association parameter and variance components in an ascertained family sample.
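
One common way to implement the "test for, and if necessary control for, population stratification" idea is to split each genotype into a between-family component (a family mean) and a within-family deviation and compare their effects; the sketch below does this for a binary trait with a simple logistic model. The decomposition, variable names and model are illustrative assumptions, not the authors' exact likelihood or variance-components machinery.

```python
import numpy as np
import statsmodels.api as sm

def between_within_logit(genotype, trait, family_id):
    """Fit trait ~ between-family + within-family genotype components (illustrative)."""
    g = np.asarray(genotype, dtype=float)        # e.g. allele counts 0/1/2
    y = np.asarray(trait, dtype=float)           # binary phenotype 0/1
    fam = np.asarray(family_id)
    fam_mean = {f: g[fam == f].mean() for f in np.unique(fam)}
    b = np.array([fam_mean[f] for f in fam])     # between-family component
    w = g - b                                    # within-family deviation
    X = sm.add_constant(np.column_stack([b, w]))
    fit = sm.Logit(y, X).fit(disp=0)
    # equal between- and within-family coefficients is consistent with no
    # population stratification; the within-family coefficient is the protected test
    return fit
```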

Relevance:

10.00%

Publisher:

Abstract:

Phase diagrams for bulk nuclear matter at finite temperatures and variable proton concentrations are presented and discussed. This binary system exhibits a line of critical points, a line of equal concentrations, and a line of maximum temperatures. The phenomenon of retrograde condensation is also possible.

Relevance:

10.00%

Publisher:

Abstract:

Identification and relative quantification of hundreds to thousands of proteins within complex biological samples have become realistic with the emergence of stable isotope labeling in combination with high-throughput mass spectrometry. However, all current chemical approaches target a single amino acid functionality (most often lysine or cysteine) despite the fact that addressing two or more amino acid side chains would drastically increase quantifiable information as shown by in silico analysis in this study. Although the combination of existing approaches, e.g. ICAT with isotope-coded protein labeling, is analytically feasible, it implies high costs, and the combined application of two different chemistries (kits) may not be straightforward. Therefore, we describe here the development and validation of a new stable isotope-based quantitative proteomics approach, termed aniline benzoic acid labeling (ANIBAL), using a twin chemistry approach targeting two frequent amino acid functionalities, the carboxylic and amino groups. Two simple and inexpensive reagents, aniline and benzoic acid, in their (12)C and (13)C form with convenient mass peak spacing (6 Da) and without chromatographic discrimination or modification in fragmentation behavior, are used to modify carboxylic and amino groups at the protein level, resulting in an identical peptide bond-linked benzoyl modification for both reactions. The ANIBAL chemistry is simple and straightforward and is the first method that uses a (13)C-reagent for a general stable isotope labeling approach of carboxylic groups. In silico as well as in vitro analyses clearly revealed the increase in available quantifiable information using such a twin approach. ANIBAL was validated by means of model peptides and proteins with regard to the quality of the chemistry as well as the ionization behavior of the derivatized peptides. A milk fraction was used for dynamic range assessment of protein quantification, and a bacterial lysate was used for the evaluation of relative protein quantification in a complex sample in two different biological states.
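
As a back-of-the-envelope illustration of the quantification step: a peptide carrying n such labels differs between its (12)C and (13)C form by n x 6 x 1.003355, roughly n x 6.020 Da (n x 6.020/z in m/z), so light/heavy pairs can be picked out of a peak list by that spacing. The peak-list format, tolerance and one-label assumption below are hypothetical simplifications, not the ANIBAL data-processing pipeline.

```python
# Mass difference per label: six 13C-12C substitutions of 1.003355 Da each.
DELTA_PER_LABEL = 6 * 1.003355   # ~6.0201 Da

def pair_light_heavy(peaks, charge=1, n_labels=1, tol=0.01):
    """peaks: list of (mz, intensity). Returns (light_mz, heavy_mz, light/heavy ratio) tuples."""
    shift = n_labels * DELTA_PER_LABEL / charge
    pairs = []
    for mz_l, i_l in peaks:
        for mz_h, i_h in peaks:
            if abs((mz_h - mz_l) - shift) <= tol:
                pairs.append((mz_l, mz_h, i_l / i_h))
    return pairs

# example: a singly charged peptide observed in both label forms
print(pair_light_heavy([(650.35, 1.0e5), (656.37, 0.8e5)]))
```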

Relevance:

10.00%

Publisher:

Abstract:

The dynamics of an interface separating the two coexistent phases of a binary system in the presence of external fluctuations in temperature is studied. An interfacial instability is obtained for an interface that would be stable in the absence of fluctuations or in the presence of internal fluctuations. Analytical stability analysis and numerical simulations are in accordance with an explanation of these effects in terms of a quenchlike instability induced by fluctuations.

Relevance:

10.00%

Publisher:

Abstract:

We consider diffusion of a passive substance C in a phase-separating nonmiscible binary alloy under turbulent mixing. The substance is assumed to have different diffusion coefficients in the pure phases A and B, leading to a spatially and temporally dependent diffusion "coefficient" in the diffusion equation plus convective term. In this paper we consider especially the effects of a turbulent flow field coupled to both the Cahn-Hilliard type evolution equation of the medium and the diffusion equation (both, therefore, supplemented by a convective term). It is shown that the formerly observed prolonged anomalous diffusion [H. Lehr, F. Sagués, and J.M. Sancho, Phys. Rev. E 54, 5028 (1996)] is no longer seen if a flow of sufficient intensity is supplied.
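
Written out in a generic dimensionless form (not necessarily the exact equations of the paper), the coupled model consists of a convective Cahn-Hilliard equation for the alloy order parameter and an advection-diffusion equation for the passive substance with a phase-dependent diffusion coefficient:

$$ \frac{\partial\phi}{\partial t} + \mathbf{v}\cdot\nabla\phi = \nabla^2\!\left(-\phi + \phi^3 - \nabla^2\phi\right), \qquad \frac{\partial c}{\partial t} + \mathbf{v}\cdot\nabla c = \nabla\cdot\bigl[D(\phi)\,\nabla c\bigr], $$

with $\mathbf{v}$ the turbulent velocity field and $D(\phi)$ interpolating between the diffusion coefficients of C in the pure A and B phases.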

Relevance:

10.00%

Publisher:

Abstract:

Remarkable differences in the shape of the nematic-smectic-B interface in a quasi-two-dimensional geometry have been experimentally observed in three liquid crystals of very similar molecular structure, i.e., neighboring members of a homologous series. In the thermal equilibrium of the two mesophases a faceted rectanglelike shape was observed with considerably different shape anisotropies for the three homologs. Various morphologies such as dendritic, dendriticlike, and faceted shapes of the rapidly growing smectic-B germ were also observed for the three substances. Experimental results were compared with computer simulations based on the phase field model. The pattern forming behavior of a binary mixture of two homologs was also studied.

Relevance:

10.00%

Publisher:

Abstract:

An optical-model potential for systematic calculations of elastic scattering of electrons and positrons by atoms and positive ions is proposed. The electrostatic interaction is determined from the Dirac-Hartree-Fock self-consistent atomic electron density. In the case of electron projectiles, the exchange interaction is described by means of the local approximation of Furness and McCarthy. The correlation-polarization potential is obtained by combining the correlation potential derived from the local density approximation with a long-range polarization interaction, which is represented by means of a Buckingham potential with an empirical energy-dependent cutoff parameter. The absorption potential is obtained from the local-density approximation, using the Born-Ochkur approximation and the Lindhard dielectric function to describe the binary collisions with a free-electron gas. The strength of the absorption potential is adjusted by means of an empirical parameter, which has been determined by fitting available absolute elastic differential cross-section data for noble gases and mercury. The Dirac partial-wave analysis with this optical-model potential provides a realistic description of elastic scattering of electrons and positrons with energies in the range from ~100 eV up to ~5 keV. At higher energies, correlation-polarization and absorption corrections are small and the usual static-exchange approximation is sufficiently accurate for most practical purposes.
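
Schematically, the optical-model potential assembled above can be written as (with the exchange term present only for electron projectiles)

$$ V_{\mathrm{opt}}(r) = V_{\mathrm{st}}(r) + V_{\mathrm{ex}}(r) + V_{\mathrm{cp}}(r) - i\,W_{\mathrm{abs}}(r), $$

and the long-range part of the correlation-polarization term takes the Buckingham form

$$ V_{\mathrm{pol}}(r) = -\frac{\alpha_d\, e^2}{2\,(r^2 + d^2)^2}, $$

where $\alpha_d$ is the atomic dipole polarizability and $d$ the empirical energy-dependent cutoff parameter mentioned above.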

Relevance:

10.00%

Publisher:

Abstract:

Molar heat capacities of the binary compounds NiAl, NiIn, NiSi, NiGe, NiBi, NiSb, CoSb and FeSb were determined every 10 K by differential scanning calorimetry in the temperature range 310-1080 K. The experimental results have been fitted versus temperature according to Cp = a + b·T + c·T² + d·T⁻². Results are given, discussed and compared to estimations found in the literature. Two compounds, NiBi and FeSb, are subject to transformations between 460 and 500 K. (C) 1999 Elsevier Science Ltd. All rights reserved.
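
A minimal least-squares sketch of fitting data to that polynomial form, Cp = a + b·T + c·T² + d·T⁻²; the temperature grid matches the stated measurement range, but the Cp values below are placeholders, not the published data.

```python
import numpy as np

T = np.arange(310.0, 1090.0, 10.0)                    # K, every 10 K over 310-1080 K
Cp = 24.0 + 6e-3 * T + 1e-6 * T**2 - 1e5 / T**2       # placeholder "measurements"

# design matrix for Cp = a + b*T + c*T**2 + d*T**-2
A = np.column_stack([np.ones_like(T), T, T**2, T**-2])
(a, b, c, d), *_ = np.linalg.lstsq(A, Cp, rcond=None)
print(f"a = {a:.3f}, b = {b:.3e}, c = {c:.3e}, d = {d:.3e}")
```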

Relevance:

10.00%

Publisher:

Abstract:

With the advancement of high-throughput sequencing and the dramatic increase of available genetic data, statistical modeling has become an essential part of the field of molecular evolution. Statistical modeling has led to many interesting discoveries in the field, from the detection of highly conserved or diverse regions in a genome to the phylogenetic inference of species' evolutionary history. Among the different types of genome sequences, protein coding regions are particularly interesting due to their impact on proteins. The building blocks of proteins, i.e. amino acids, are coded by triplets of nucleotides, known as codons. Accordingly, studying the evolution of codons leads to a fundamental understanding of how proteins function and evolve. Current codon models can be classified into three principal groups: mechanistic codon models, empirical codon models and hybrid ones. The mechanistic models attract particular attention due to the clarity of their underlying biological assumptions and parameters. However, they suffer from simplified assumptions that are required to overcome the burden of computational complexity. The main assumptions applied to current mechanistic codon models are that (a) double and triple substitutions of nucleotides within codons are negligible, (b) there is no mutation variation among the nucleotides of a single codon, and (c) the HKY nucleotide model is sufficient to capture the essence of transition-transversion rates at the nucleotide level. In this thesis, I pursue two main objectives. The first objective is to develop a framework of mechanistic codon models, named the KCM-based model family framework, based on holding or relaxing the assumptions above. Accordingly, eight different models are proposed, one for each combination of held or relaxed assumptions, from the simplest model that holds all three assumptions to the most general one that relaxes all of them. The models derived from the proposed framework allow me to investigate the biological plausibility of the three simplifying assumptions on real data sets, as well as to find the model best aligned with the underlying characteristics of each data set. Our experiments show that holding all three assumptions is not realistic for any of the real data sets, which means that using simple models that hold them can be misleading and can result in inaccurate parameter estimates. The second objective is to develop a generalised mechanistic codon model that relaxes all three simplifying assumptions while remaining computationally efficient, by using a matrix operation called the Kronecker product. Our experiments show that, on randomly chosen data sets, the proposed generalised mechanistic codon model outperforms the other codon models with respect to the AICc metric in about half of the data sets. In addition, I show through several experiments that the proposed general model is biologically plausible.
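
A hedged numpy sketch of how Kronecker products let position-specific nucleotide rate matrices be assembled into a codon rate matrix. The Kronecker-sum term below generates single-nucleotide changes only; the generalised model described above adds further terms (double and triple substitutions, selection) that are omitted here, and the HKY-like nucleotide matrix is a toy example rather than fitted parameters.

```python
import numpy as np

def hky_like(kappa=2.0, pi=(0.25, 0.25, 0.25, 0.25)):
    """Toy HKY-style 4x4 nucleotide generator (order A, C, G, T)."""
    pi = np.asarray(pi, dtype=float)
    Q = np.ones((4, 4)) * pi                       # rate towards j proportional to pi_j
    for i, j in [(0, 2), (2, 0), (1, 3), (3, 1)]:  # transitions A<->G, C<->T
        Q[i, j] *= kappa
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))            # rows sum to zero
    return Q

I4 = np.eye(4)
Q1 = Q2 = Q3 = hky_like()                          # could differ by codon position
# Kronecker sum: nonzero off-diagonal rates only between codons differing at one site
Q_codon = (np.kron(np.kron(Q1, I4), I4)
           + np.kron(np.kron(I4, Q2), I4)
           + np.kron(np.kron(I4, I4), Q3))         # 64 x 64 codon generator
assert np.allclose(Q_codon.sum(axis=1), 0.0)
```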

Relevance:

10.00%

Publisher:

Abstract:

Notch proteins regulate a broad spectrum of cell fate decisions and differentiation processes during fetal and postnatal life. These proteins are involved in organogenesis during embryonic development as well as in the maintenance of homeostasis of self-renewing systems. The paradigms of Notch function, such as stem and progenitor cell maintenance, lineage specification mediated by binary cell fate decisions, and induction of terminal differentiation, were initially established in invertebrates and subsequently confirmed in mammals. Moreover, aberrant Notch signaling is linked to tumorigenesis. In this review, we discuss the origin of postulated Notch functions, give examples from different mammalian organ systems, and try to relate them to the hematopoietic system.

Relevance:

10.00%

Publisher:

Abstract:

A haplotype is an m-long binary vector. The XOR-genotype of two haplotypes is the m-vector of their coordinate-wise XOR. We study the following problem: Given a set of XOR-genotypes, reconstruct their haplotypes so that the set of resulting haplotypes can be mapped onto a perfect phylogeny (PP) tree. The question is motivated by studying population evolution in human genetics, and is a variant of the perfect phylogeny haplotyping problem that has received intensive attention recently. Unlike the latter problem, in which the input is "full" genotypes, here we assume less informative input, which may therefore be more economical to obtain experimentally. Building on ideas of Gusfield, we show how to solve the problem in polynomial time, by a reduction to the graph realization problem. The actual haplotypes are not uniquely determined by the tree they map onto, and the tree itself may or may not be unique. We show that tree uniqueness implies uniquely determined haplotypes, up to inherent degrees of freedom, and give a sufficient condition for uniqueness. To actually determine the haplotypes given the tree, additional information is necessary. We show that two or three full genotypes suffice to reconstruct all the haplotypes, and present a linear algorithm for identifying those genotypes.
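
A tiny illustration of the XOR-genotype notion, assuming haplotypes are 0/1 numpy vectors: the XOR-genotype records which sites are heterozygous but not which haplotype carries which allele, and given one resolved haplotype the mate follows by XOR again, which is exactly the kind of inherent degree of freedom mentioned above.

```python
import numpy as np

def xor_genotype(h1, h2):
    """Coordinate-wise XOR of two m-long binary haplotypes."""
    return np.bitwise_xor(np.asarray(h1), np.asarray(h2))

h1 = np.array([0, 1, 1, 0, 1])
h2 = np.array([0, 0, 1, 1, 1])
x = xor_genotype(h1, h2)                          # array([0, 1, 0, 1, 0])
assert np.array_equal(np.bitwise_xor(x, h1), h2)  # recover the mate haplotype from x and h1
```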