994 results for 019900 OTHER MATHEMATICAL SCIENCES
Abstract:
A group is termed parafree if it is residually nilpotent and has the same nilpotent quotients as a given free group. Since free groups are residually nilpotent, they are parafree. Nonfree parafree groups abound and they all have many properties in common with free groups. Finitely presented parafree groups have solvable word problems, but little is known about the conjugacy and isomorphism problems. The conjugacy problem plays an important part in determining whether an automorphism is inner, which we term the inner automorphism problem. We will attack these and other problems about parafree groups experimentally, in a series of papers, of which this is the first and which is concerned with the isomorphism problem. The approach that we take here is to distinguish some parafree groups by computing the number of epimorphisms onto selected finite groups. It turns out, rather unexpectedly, that an understanding of the quotients of certain groups leads to some new results about equations in free and relatively free groups. We touch on this only lightly here but will discuss this in more depth in a future paper.
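As a minimal illustration of the counting approach, the sketch below brute-forces the number of epimorphisms from a two-generator, one-relator group onto the symmetric group S3 by enumerating the images of the generators, checking the relator, and checking surjectivity. The presentation used is a hypothetical toy example, not one of the parafree groups studied in the paper, and S3 merely stands in for the "selected finite groups".

```python
# Brute-force count of epimorphisms from a finitely presented group
# <x, y | r> onto the symmetric group S3.  The relator below is a toy
# example (hypothetical), not one of the parafree groups in the paper.
from itertools import product

# S3 as permutations of (0, 1, 2); p[i] is the image of i.
S3 = [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]
IDENTITY = (0, 1, 2)

def compose(p, q):
    """Return the permutation p∘q (apply q first, then p)."""
    return tuple(p[q[i]] for i in range(3))

def invert(p):
    inv = [0, 0, 0]
    for i in range(3):
        inv[p[i]] = i
    return tuple(inv)

def evaluate(word, images):
    """Evaluate a word in generators x, y; uppercase letters mean inverses."""
    result = IDENTITY
    for letter in word:
        g = images[0] if letter in 'xX' else images[1]
        if letter.isupper():
            g = invert(g)
        result = compose(result, g)
    return result

def generated_subgroup(gens):
    """Closure under multiplication (enough for inverses in a finite group)."""
    seen = set(gens) | {IDENTITY}
    frontier = list(seen)
    while frontier:
        p = frontier.pop()
        for q in list(seen):
            for r in (compose(p, q), compose(q, p)):
                if r not in seen:
                    seen.add(r)
                    frontier.append(r)
    return seen

relator = 'xxyXYY'   # toy relator: x^2 y x^-1 y^-2 = 1 (hypothetical)
count = 0
for a, b in product(S3, repeat=2):
    # For a one-relator presentation, any assignment killing the relator
    # defines a homomorphism; surjectivity means the images generate S3.
    if evaluate(relator, (a, b)) == IDENTITY and len(generated_subgroup([a, b])) == 6:
        count += 1
print(f'epimorphisms <x,y|{relator}> ->> S3: {count}')
```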
Abstract:
An important and common problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. As this problem concerns the selection of significant genes from a large pool of candidate genes, it needs to be carried out within the framework of multiple hypothesis testing. In this paper, we focus on the use of mixture models to handle the multiplicity issue. With this approach, a measure of the local FDR (false discovery rate) is provided for each gene. An attractive feature of the mixture model approach is that it provides a framework for the estimation of the prior probability that a gene is not differentially expressed, and this probability can subsequently be used in forming a decision rule. The rule can also be formed to take the false negative rate into account. We apply this approach to a well-known publicly available data set on breast cancer, and discuss our findings with reference to other approaches.
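A minimal sketch of the local FDR computation, assuming a two-component normal mixture for gene-wise z-scores fitted by EM; the fixed N(0,1) null, the simulated data, and the 0.2 calling threshold are illustrative assumptions, not the exact model of the paper.

```python
# Mixture-model local FDR on simulated z-scores:
#   f(z) = pi0*f0(z) + (1 - pi0)*f1(z),   localFDR(z) = pi0*f0(z) / f(z).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
# Simulated gene-wise z-scores: 90% null N(0,1), 10% differentially expressed.
z = np.concatenate([rng.normal(0, 1, 9000), rng.normal(2.5, 1, 1000)])

pi0, mu1, sd1 = 0.5, 1.0, 1.0            # crude starting values
for _ in range(200):                      # EM iterations
    f0 = norm.pdf(z, 0.0, 1.0)            # null density fixed at N(0,1)
    f1 = norm.pdf(z, mu1, sd1)
    tau = pi0 * f0 / (pi0 * f0 + (1 - pi0) * f1)   # P(null | z)
    pi0 = tau.mean()                      # prior prob. of non-differential expression
    w = 1 - tau
    mu1 = np.sum(w * z) / np.sum(w)
    sd1 = np.sqrt(np.sum(w * (z - mu1) ** 2) / np.sum(w))

local_fdr = pi0 * norm.pdf(z, 0, 1) / (
    pi0 * norm.pdf(z, 0, 1) + (1 - pi0) * norm.pdf(z, mu1, sd1))
print(f'estimated pi0 = {pi0:.3f}; genes called at localFDR < 0.2: {(local_fdr < 0.2).sum()}')
```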
Abstract:
We consider a universal set of quantum gates encoded within a perturbed decoherence-free subspace of four physical qubits. Using second-order perturbation theory and a measuring device modelled by an infinite set of harmonic oscillators, simply coupled to the system, we show that continuous observation of the coupling agent induces inhibition of the decoherence due to spurious perturbations. We thus advance the idea of protecting or even creating a decoherence-free subspace for processing quantum information.
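The toy check below illustrates only the underlying decoherence-free-subspace idea, not the paper's four-qubit construction or its perturbative analysis: under collective dephasing the coupling operator annihilates any state in the span of |01> and |10>, so such states acquire no phase noise, whereas |00> does.

```python
# Toy decoherence-free-subspace check for collective dephasing on two qubits.
# The coupling operator is S_z = Z(x)I + I(x)Z; its null space span{|01>,|10>}
# is a DFS.  This two-qubit example only illustrates the idea; the paper's
# construction uses four physical qubits and a perturbed subspace.
import numpy as np

Z = np.diag([1.0, -1.0])
I2 = np.eye(2)
Sz = np.kron(Z, I2) + np.kron(I2, Z)       # collective dephasing generator

singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)  # (|01> - |10>)/sqrt(2)
ket00 = np.array([1.0, 0.0, 0.0, 0.0])                     # |00>, outside the DFS

for name, v in (('singlet (in DFS)', singlet), ('|00> (outside DFS)', ket00)):
    print(f'{name}: ||S_z v|| = {np.linalg.norm(Sz @ v):.3f}')
# The singlet gives 0: it is immune to collective dephasing; |00> is not.
```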
Abstract:
Proportionally balanced designs (πBDs) were introduced by Gray and Matters in response to a need for the allocation of markers of the Queensland Core Skills Test to have a certain property. Subsequent papers extended the theoretical results relating to such designs and provided further instances and general constructions. This work focused on designs comprising blocks of precisely two sizes, in which each variety occurs with one of precisely two possible frequencies. Two designs based on the set V of varieties are complementary if, whenever B is a block of one, its complement with respect to the set V is a block of the other. Here we present necessary conditions for the existence of complementary pairs of such πBDs and provide lists of some restricted parameter sets satisfying these necessary conditions. The lists are arranged according to the number of blocks. We demonstrate that not all of these parameter sets give rise to designs. However, we establish by construction of the sets of blocks that, for every feasible number of blocks less than or equal to 100, with the possible exception of 63, there exists at least one pair of complementary πBDs. We also investigate the conditions under which the complementary design can be isomorphic to the original design, and again provide a list of feasible parameters for pairs of such designs with at most 400 blocks.
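The complementation operation itself is straightforward to express in code. The sketch below forms the complementary block set over a variety set V and reports the block sizes and variety frequencies, the two quantities constrained to take precisely two values each; the small block list is a hypothetical illustration (it exhibits the two-size/two-frequency pattern but is not verified to be proportionally balanced), not one of the designs constructed in the paper.

```python
# Block complementation for a design on variety set V: each block B of one
# design corresponds to V \ B in the complementary design.
from collections import Counter

V = frozenset(range(1, 8))                  # varieties 1..7 (toy example)
blocks = [frozenset(b) for b in
          ([1, 2, 3], [3, 4, 5], [5, 6, 7], [1, 4, 7], [2, 5], [6, 1])]

complement = [V - B for B in blocks]        # the complementary block set

def profile(design):
    """Distinct block sizes and distinct variety frequencies of a design."""
    sizes = sorted({len(B) for B in design})
    freq = Counter(v for B in design for v in B)
    return sizes, sorted(set(freq.values()))

for name, d in (('original', blocks), ('complementary', complement)):
    sizes, freqs = profile(d)
    print(f'{name}: block sizes {sizes}, variety frequencies {freqs}')
```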
Abstract:
The properties of commercial directly and indirectly heated UHT milks, both after heating and during storage at room temperature for 24 weeks, were studied. Thermally induced changes were assessed through changes in lactulose, furosine and acid-soluble whey proteins. The results confirmed previous reports that directly heated UHT milks suffer less heat damage than indirectly heated milk. During storage, furosine increased and bovine serum albumin in directly heat-treated milks decreased significantly. The changes in lactulose, alpha-lactalbumin and beta-lactoglobulin were not statistically significant. The data suggest that heat treatment indicators should be measured as soon as possible after processing to avoid any misinterpretation of the intensity of the heat treatment.
Abstract:
Complex numbers appear in the Hilbert space formulation of quantum mechanics, but not in the formulation in phase space. Quantum symmetries are described by complex, unitary or antiunitary operators defining ray representations in Hilbert space, whereas in phase space they are described by real, true representations. Equivalence of the formulations requires that the former representations can be obtained from the latter and vice versa. Examples are given. Equivalence of the two formulations also requires that complex superpositions of state vectors can be described in the phase space formulation, and it is shown that this leads to a nonlinear superposition principle for orthogonal, pure-state Wigner functions. It is concluded that the use of complex numbers in quantum mechanics can be regarded as a computational device to simplify calculations, as in all other applications of mathematics to physical phenomena.
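The nonlinearity can be made explicit with the standard cross-Wigner identity; the following is the textbook decomposition for a two-term superposition, not the paper's full statement of the superposition principle.

```latex
% For \psi = c_1\psi_1 + c_2\psi_2, the Wigner function is not a convex
% combination of the component Wigner functions: an interference term appears.
\[
  W_\psi(x,p) \;=\; |c_1|^2\,W_{\psi_1}(x,p) \;+\; |c_2|^2\,W_{\psi_2}(x,p)
  \;+\; 2\,\mathrm{Re}\!\left[c_1 c_2^{*}\, W_{\psi_1,\psi_2}(x,p)\right],
\]
% where the cross-Wigner function is
\[
  W_{\psi_1,\psi_2}(x,p) \;=\; \frac{1}{\pi\hbar}\int_{-\infty}^{\infty}
  \psi_1(x+y)\,\psi_2^{*}(x-y)\,e^{-2ipy/\hbar}\,dy .
\]
```

The cross term depends on the wave functions themselves rather than on W_{ψ1} and W_{ψ2} alone, which is why any purely phase-space superposition rule must be nonlinear in the component Wigner functions.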
Abstract:
High-quality data about protein structures and their gene sequences are essential to understanding the relationship between protein folding and protein coding sequences. First, we constructed the EcoPDB database, a high-quality database of Escherichia coli genes and their corresponding PDB structures. Based on EcoPDB, we present a novel approach based on information theory to investigate the correlation between cysteine synonymous codon usages and local amino acids flanking cysteines, the correlation between cysteine synonymous codon usages and synonymous codon usages of local amino acids flanking cysteines, as well as the correlation between cysteine synonymous codon usages and the disulfide bonding states of cysteines in the E. coli genome. The results indicate that the nearest neighboring residues and their synonymous codons on the C-terminal side have the greatest influence on the synonymous codon usage of cysteines, and that synonymous codon usage correlates specifically with the disulfide bond formation of cysteines in proteins. The correlations may result from the regulation mechanism of protein structures at the gene sequence level and reflect the functional restriction that cysteines pair to form disulfide bonds. The results may also be helpful in identifying residues that are important for the synonymous codon selection of cysteines when introducing disulfide bridges in protein engineering and molecular biology. The approach presented in this paper can also be used as a complementary computational method to analyse synonymous codon usages in other model organisms.
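A minimal sketch of the information-theoretic ingredient: the mutual information between the cysteine codon (TGT vs TGC) and the identity of the residue immediately C-terminal to it. The pairs are randomly generated with a built-in bias and merely stand in for real EcoPDB data, which this sketch does not access.

```python
# Mutual information I(codon; next residue) from (codon, neighbour) pairs.
import math
import random
from collections import Counter

random.seed(1)
residues = 'ACDEFGHIKLMNPQRSTVWY'
pairs = []
for _ in range(5000):
    nxt = random.choice(residues)
    # hypothetical bias: small residues after the cysteine favour TGC
    p_tgc = 0.7 if nxt in 'AGS' else 0.45
    codon = 'TGC' if random.random() < p_tgc else 'TGT'
    pairs.append((codon, nxt))

n = len(pairs)
joint = Counter(pairs)                       # joint counts of (codon, residue)
p_codon = Counter(c for c, _ in pairs)       # marginal codon counts
p_next = Counter(r for _, r in pairs)        # marginal neighbour counts

mi = sum((cnt / n) * math.log2((cnt / n) / ((p_codon[c] / n) * (p_next[r] / n)))
         for (c, r), cnt in joint.items())
print(f'I(codon; next residue) = {mi:.4f} bits')
```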
Abstract:
Many populations have a negative impact on their habitat or upon other species in the environment if their numbers become too large. For this reason they are often subjected to some form of control. One common control regime is the reduction regime: when the population reaches a certain threshold it is controlled (for example culled) until it falls below a lower predefined level. The natural model for such a controlled population is a birth-death process with two phases, the phase determining which of two distinct sets of birth and death rates governs the process. We present formulae for the probability of extinction and the expected time to extinction, and discuss several applications.
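A sketch of the extinction-time computation by first-step analysis on the two-phase chain, assuming per-capita birth, death and culling rates, hysteresis thresholds L and U, and a truncation level; all of these are hypothetical illustration values rather than the paper's formulae, which avoid the truncation used here.

```python
# Expected time to extinction of a two-phase birth-death process.  Phase 0
# ("normal") switches to phase 1 ("control", extra cull rate) when the
# population reaches U, and back to normal when it falls to L.
import numpy as np

lam, mu, cull = 0.55, 0.50, 0.30     # per-capita birth, death, culling rates
L, U, N_MAX = 20, 60, 200            # lower/upper thresholds, truncation level

def idx(n, phase):
    return 2 * (n - 1) + phase       # states (n, phase), n = 1..N_MAX

m = 2 * N_MAX
A = np.zeros((m, m))
b = np.ones(m)                       # each equation reads R*T(s) - ... = 1

for n in range(1, N_MAX + 1):
    for phase in (0, 1):
        birth = lam * n if n < N_MAX else 0.0         # reflect at truncation
        death = (mu + (cull if phase else 0.0)) * n
        s = idx(n, phase)
        A[s, s] = birth + death
        if birth > 0:
            up_phase = 1 if (phase == 1 or n + 1 >= U) else 0
            A[s, idx(n + 1, up_phase)] -= birth
        if n > 1:                                      # n - 1 = 0 is absorbing
            down_phase = 0 if (phase == 0 or n - 1 <= L) else 1
            A[s, idx(n - 1, down_phase)] -= death

T = np.linalg.solve(A, b)            # first-step analysis: A T = 1
print(f'E[time to extinction] from n=30 (normal phase): {T[idx(30, 0)]:.1f}')
```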
Abstract:
The estimation of P(S_n > u) by simulation, where S_n is the sum of independent, identically distributed random variables Y_1, ..., Y_n, is of importance in many applications. We propose two simulation estimators based upon the identity P(S_n > u) = n P(S_n > u, M_n = Y_n), where M_n = max(Y_1, ..., Y_n). One estimator uses importance sampling (for Y_n only), and the other uses conditional Monte Carlo, conditioning upon Y_1, ..., Y_{n-1}. Properties of the relative error of the estimators are derived, and a numerical study is given in terms of the M/G/1 queue, in which n is replaced by an independent geometric random variable N. The conclusion is that the new estimators compare extremely favorably with previous ones. In particular, the conditional Monte Carlo estimator is the first example in the heavy-tailed setting of an estimator with bounded relative error. Further improvements are obtained in the random-N case by incorporating control variates and stratification techniques into the new estimation procedures.
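A sketch of the conditional Monte Carlo estimator: conditioning the identity on Y_1, ..., Y_{n-1} gives the single-replication estimator n * Fbar(max(M_{n-1}, u - S_{n-1})), where Fbar is the tail of the increment distribution. Pareto increments and all parameter values are illustrative choices, and the fixed-n case is shown rather than the random-N refinements.

```python
# Conditional Monte Carlo vs crude Monte Carlo for P(S_n > u) with
# heavy-tailed (Pareto) increments.
import numpy as np

rng = np.random.default_rng(42)
alpha, n, u, reps = 1.5, 10, 500.0, 100_000

def fbar(x):
    """Tail P(Y > x) of a Pareto(alpha) variable supported on [1, inf)."""
    return np.maximum(x, 1.0) ** -alpha

Y = rng.pareto(alpha, size=(reps, n - 1)) + 1.0      # Y_1, ..., Y_{n-1}
S = Y.sum(axis=1)                                     # S_{n-1}
M = Y.max(axis=1)                                     # M_{n-1}
cond_est = n * fbar(np.maximum(M, u - S))             # conditional MC estimator

Y_full = rng.pareto(alpha, size=(reps, n)) + 1.0      # crude Monte Carlo
crude_est = (Y_full.sum(axis=1) > u).astype(float)

for name, est in (('conditional MC', cond_est), ('crude MC', crude_est)):
    mean, se = est.mean(), est.std(ddof=1) / np.sqrt(reps)
    print(f'{name}: {mean:.3e} +/- {se:.1e} (rel. err {se / mean:.2%})')
```

The conditional estimator replaces the rare indicator by an exact tail probability, which is what drives its dramatically smaller relative error in this heavy-tailed setting.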
Abstract:
Background: The residue-wise contact order (RWCO) describes the sequence separations between a residue of interest and its contacting residues in a protein sequence. It is a recently proposed one-dimensional protein structural property that represents the extent of long-range contacts and can be considered a generalization of contact order. Together with secondary structure, accessible surface area, the B factor, and contact number, RWCO provides comprehensive and indispensable information for reconstructing a protein's three-dimensional structure from a set of one-dimensional structural properties. Accurately predicting RWCO values could have many important applications in protein three-dimensional structure prediction and protein folding rate prediction, and could give deep insights into protein sequence-structure relationships. Results: We developed a novel approach to predict residue-wise contact order values in proteins based on support vector regression (SVR), starting from primary amino acid sequences. We explored seven different sequence encoding schemes to examine their effects on the prediction performance: local sequence in the form of PSI-BLAST profiles; local sequence plus amino acid composition; local sequence plus molecular weight; local sequence plus secondary structure predicted by PSIPRED; local sequence plus molecular weight and amino acid composition; local sequence plus molecular weight and predicted secondary structure; and local sequence plus molecular weight, amino acid composition and predicted secondary structure. When using local sequences with multiple sequence alignments in the form of PSI-BLAST profiles, we could predict the RWCO distribution with a Pearson correlation coefficient (CC) between the predicted and observed RWCO values of 0.55, and a root mean square error (RMSE) of 0.82, based on a well-defined dataset with 680 protein sequences. Moreover, by incorporating global features such as molecular weight and amino acid composition we could further improve the prediction performance, raising the CC to 0.57 and lowering the RMSE to 0.79. In addition, incorporating the secondary structure predicted by PSIPRED was found to significantly improve the prediction performance and yielded the best accuracy, with a CC of 0.60 and an RMSE of 0.78, at least comparable to existing methods. Conclusion: The SVR method shows a prediction performance competitive with, or at least comparable to, the previously developed linear regression-based methods for predicting RWCO values. In contrast to support vector classification (SVC), SVR is very good at estimating the raw value profiles of the samples. The successful application of the SVR approach in this study reinforces the view that support vector regression is a powerful tool for extracting the protein sequence-structure relationship and for estimating protein structural profiles from amino acid sequences.
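A minimal sketch of the SVR setup, with one-hot sliding-window features standing in for the PSI-BLAST profiles and a synthetic, smoothly varying target standing in for true RWCO values; the window size and SVR hyperparameters are likewise illustrative.

```python
# Sliding-window SVR for a residue-wise real-valued target.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
AA = 'ACDEFGHIKLMNPQRSTVWY'
WIN = 7                                        # window size (illustrative)
score = dict(zip(AA, rng.normal(size=len(AA))))  # made-up per-residue scores

def encode(seq, i):
    """One-hot encode the WIN-residue window centred on position i."""
    half = WIN // 2
    vec = np.zeros(WIN * len(AA))
    for k in range(-half, half + 1):
        j = i + k
        if 0 <= j < len(seq):
            vec[(k + half) * len(AA) + AA.index(seq[j])] = 1.0
    return vec

X, y = [], []
for _ in range(100):                           # synthetic "proteins"
    seq = ''.join(rng.choice(list(AA), size=60))
    for i in range(len(seq)):
        X.append(encode(seq, i))
        # hypothetical RWCO-like target: smoothed window score plus noise
        lo, hi = max(0, i - 3), min(len(seq), i + 4)
        y.append(np.mean([score[a] for a in seq[lo:hi]]) + 0.05 * rng.normal())
X, y = np.array(X), np.array(y)

model = SVR(kernel='rbf', C=1.0, epsilon=0.05).fit(X[:4000], y[:4000])
pred = model.predict(X[4000:])
print(f'Pearson CC on held-out residues: {np.corrcoef(pred, y[4000:])[0, 1]:.2f}')
```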
Abstract:
This paper has three primary aims: to establish an effective means for modelling mainland-island metapopulations inhabiting a dynamic landscape; to investigate the effect of immigration and dynamic changes in habitat on metapopulation patch occupancy dynamics; and to illustrate the implications of our results for decision-making and population management. We first extend the mainland-island metapopulation model of Alonso and McKane [Bull. Math. Biol. 64:913-958, 2002] to incorporate a dynamic landscape. It is shown, for both the static and the dynamic landscape models, that a suitably scaled version of the process converges to a unique deterministic model as the size of the system becomes large. We also establish that, under quite general conditions, the density of occupied patches, and the densities of suitable and occupied patches, for the respective models, have approximate normal distributions. Our results not only provide us with estimates for the means and variances that are valid at all stages in the evolution of the population, but also provide a tool for fitting the models to real metapopulations. We discuss the effect of immigration and habitat dynamics on metapopulations, showing that mainland-like patches heavily influence metapopulation persistence, and we argue for adopting measures to increase connectivity between this large patch and the other island-like patches. We illustrate our results with specific reference to examples of butterfly populations and of the grasshopper Bryodema tuberculata.
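For the static-landscape case, the mainland-island dynamics can be sketched directly: a Gillespie simulation of the occupancy count alongside its deterministic limit dp/dt = (c*p + m)(1 - p) - e*p, where m is the mainland colonization term. All parameter values are hypothetical and the paper's dynamic-landscape extension is omitted.

```python
# Stochastic patch occupancy (Gillespie) vs the deterministic limit for a
# mainland-island metapopulation with J patches.
import numpy as np

rng = np.random.default_rng(3)
J, c, m, e = 200, 0.6, 0.1, 0.3        # number of patches and rates
T_END = 40.0

def gillespie(k0):
    t, k, ts, ks = 0.0, k0, [0.0], [k0]
    while t < T_END:
        up = (c * k / J + m) * (J - k)  # colonisation of an empty patch
        down = e * k                    # local extinction
        total = up + down
        if total == 0:
            break
        t += rng.exponential(1.0 / total)
        k += 1 if rng.random() < up / total else -1
        ts.append(t)
        ks.append(k)
    return np.array(ts), np.array(ks) / J

def deterministic(p0, dt=0.01):
    ts = np.arange(0.0, T_END, dt)
    ps = np.empty_like(ts)
    p = p0
    for i in range(len(ts)):            # forward Euler on the limit ODE
        ps[i] = p
        p += dt * ((c * p + m) * (1 - p) - e * p)
    return ts, ps

ts, ks = gillespie(k0=10)
td, pd = deterministic(p0=10 / J)
print(f'stochastic occupancy at T={T_END}: {ks[-1]:.3f}; deterministic: {pd[-1]:.3f}')
```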
Abstract:
Functionally-fitted methods are generalizations of collocation techniques that integrate an equation exactly if its solution is a linear combination of a chosen set of basis functions. When these basis functions are chosen as the power functions, we recover classical algebraic collocation methods. This paper shows that functionally-fitted methods can be derived under less restrictive conditions than previously stated in the literature, and that other related results can be derived in a much more elegant way. The novelty in our approach is to fully retain the collocation framework without reverting to derivations based on cumbersome Taylor series expansions.
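A sketch of the functional-fitting idea in its simplest setting: solve a small linear system so that a one-step rule with fixed nodes integrates every member of the fitting space {1, t, exp(w*t)} exactly; replacing the exponential by powers of t recovers a classical algebraic rule. The frequency, nodes and test problem are illustrative choices, and the paper's full collocation framework is not reproduced.

```python
# Functionally-fitted one-step quadrature: choose nodes c_i and solve for
# weights so that  integral_0^h g = h * sum_i w_i g(c_i*h)  holds exactly
# for every basis function g in {1, t, exp(w*t)}.
import numpy as np

w_fit, h = 3.0, 0.1
nodes = np.array([0.0, 0.5, 1.0])           # collocation abscissae c_i

basis = [lambda t: np.ones_like(t),
         lambda t: t,
         lambda t: np.exp(w_fit * t)]
exact = [h,                                  # integral of 1 over [0, h]
         h ** 2 / 2,                         # integral of t
         (np.exp(w_fit * h) - 1) / w_fit]    # integral of exp(w*t)

A = np.array([g(nodes * h) for g in basis])  # A[j, i] = g_j(c_i * h)
weights = np.linalg.solve(A, np.array(exact) / h)

# The rule integrates y' = f(t) exactly whenever f lies in the fitting space:
f = lambda t: 2.0 + 5.0 * np.exp(w_fit * t)
y, t = 0.0, 0.0
for _ in range(50):                          # integrate over [0, 5]
    y += h * np.dot(weights, f(t + nodes * h))
    t += h
true = 2.0 * t + 5.0 * (np.exp(w_fit * t) - 1.0) / w_fit
print(f'numerical {y:.6e}  vs exact {true:.6e}  (error {abs(y - true):.1e})')
```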
Abstract:
The robustness of mathematical models for biological systems is studied by sensitivity analysis and stochastic simulations. Using a neural network model with three genes as the test problem, we study the robustness properties of synthesis and degradation processes. For robustness to single-parameter variation, sensitivity analysis techniques are applied, while stochastic simulations are used to investigate the impact of external noise. The results of the sensitivity analysis are consistent with those obtained by stochastic simulations. Stochastic models with external noise can thus be used to study robustness not only to external noise but also to parameter variations. For external noise we also use stochastic models to study the robustness of the function of each gene and of the system as a whole.
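A toy version of the two analyses for a single synthesis-degradation gene, dx/dt = ks - kd*x: finite-difference sensitivities of the steady state with respect to each rate constant, and an Euler-Maruyama simulation with external noise on the synthesis term. The one-gene model and all constants are illustrative simplifications of the paper's three-gene network.

```python
# (i) normalised steady-state sensitivities; (ii) stochastic simulation
# with external noise on the synthesis rate.
import numpy as np

rng = np.random.default_rng(7)
ks, kd, T, dt = 10.0, 0.5, 50.0, 0.01

def steady_state(ks, kd):
    x = 0.0
    for _ in range(int(T / dt)):          # integrate the ODE to equilibrium
        x += dt * (ks - kd * x)
    return x

# (i) normalised sensitivities (p / x*) * d x* / d p via finite differences
base = steady_state(ks, kd)
for name, p in (('ks', ks), ('kd', kd)):
    eps = 1e-4 * p
    bumped = steady_state(ks + eps if name == 'ks' else ks,
                          kd + eps if name == 'kd' else kd)
    print(f'sensitivity wrt {name}: {(p / base) * (bumped - base) / eps:+.3f}')

# (ii) external noise: Euler-Maruyama with a perturbed synthesis term
sigma = 0.2
x, xs = base, []
for _ in range(int(T / dt)):
    noise = sigma * np.sqrt(dt) * rng.normal()
    x += dt * (ks - kd * x) + ks * noise  # noisy synthesis
    xs.append(x)
print(f'steady state {base:.2f}; noisy mean {np.mean(xs):.2f}, sd {np.std(xs):.2f}')
```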
Abstract:
Many populations have a negative impact on their habitat, or upon other species in the environment, if their numbers become too large. For this reason they are often managed using some form of control. The objective is to keep numbers at a sustainable level, while ensuring survival of the population. Here we present models that allow population management programs to be assessed. Two common control regimes are considered: reduction and suppression. Under the suppression regime the population is maintained close to a particular threshold through near-continuous control, while under the reduction regime control begins once the population reaches a certain threshold and continues until it falls below a lower predefined level. We discuss how best to choose the control parameters, and we provide tools that allow population managers to select reduction levels and control rates. Additional tools are provided to assess the effect of different control regimes in terms of population persistence and cost. In particular, we consider the effects of each regime on the probability of extinction and the expected time to extinction, and compare the control methods in terms of the expected total cost of each regime over the life of the population. The usefulness of our results is illustrated with reference to the control of a koala population inhabiting Kangaroo Island, Australia.
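A Monte Carlo sketch comparing the two regimes under simple cost accounting (a fixed cost per culled individual): suppression culls whenever the population exceeds a single threshold, while reduction culls from an upper threshold down to a lower level. All rates, thresholds and costs are hypothetical and are not calibrated to the Kangaroo Island koala population.

```python
# Simulated birth-death population under "suppression" vs "reduction" control,
# tracking extinction within the horizon and the total culling cost.
import numpy as np

rng = np.random.default_rng(11)
lam, mu, cull = 0.55, 0.50, 0.40     # per-capita birth, death, cull rates
COST_PER_CULL, T_MAX, REPS = 1.0, 200.0, 100

def run(regime, n0=30, S=40, L=20, U=60):
    t, n, cost, phase = 0.0, n0, 0.0, 0   # phase used by "reduction" only
    while 0 < n and t < T_MAX:
        if regime == 'suppression':
            culling = n > S
        else:                              # reduction, with hysteresis
            if phase == 0 and n >= U:
                phase = 1
            elif phase == 1 and n <= L:
                phase = 0
            culling = phase == 1
        birth, death = lam * n, mu * n
        extra = cull * n if culling else 0.0
        total = birth + death + extra
        t += rng.exponential(1.0 / total)
        r = rng.random() * total
        if r < birth:
            n += 1
        else:
            n -= 1
            if r >= birth + death:         # this death was a cull
                cost += COST_PER_CULL
    return t, cost, n == 0

for regime in ('suppression', 'reduction'):
    out = [run(regime) for _ in range(REPS)]
    times, costs, extinct = map(np.array, zip(*out))
    print(f'{regime}: extinct within horizon {extinct.mean():.0%}, '
          f'mean cost {costs.mean():.0f}, mean end time {times.mean():.0f}')
```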