849 results for RANDOM PERMUTATION MODEL
Abstract:
High pressure NMR spectroscopy has developed into an important tool for studying conformational equilibria of proteins in solution. We have studied the amide proton and nitrogen chemical shifts of the 20 canonical amino acids X in the random-coil model peptide Ac-Gly-Gly-X-Ala-NH2, in a pressure range from 0.1 to 200 MPa, at a proton resonance frequency of 800 MHz. The obtained data allowed the determination of first and second order pressure coefficients with high accuracy at 283 K and pH 6.7. The mean first and second order pressure coefficients ⟨B₁(¹⁵N)⟩ and ⟨B₂(¹⁵N)⟩ for nitrogen are 2.91 ppm/GPa and -2.32 ppm/GPa², respectively. The corresponding values ⟨B₁(¹H)⟩ and ⟨B₂(¹H)⟩ for the amide protons are 0.52 ppm/GPa and -0.41 ppm/GPa². Residue-dependent ¹J(¹H,¹⁵N) coupling constants are also shown.
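The coefficients above correspond to a simple quadratic pressure dependence of the chemical shift, δ(p) ≈ δ₀ + B₁p + B₂p². A minimal sketch of extracting B₁ and B₂ by least squares, using entirely hypothetical shift values, could look like:

```python
import numpy as np

# Hypothetical 15N chemical shifts (ppm) at pressures spanning roughly 0.1 to 200 MPa;
# the quadratic form delta(p) = delta0 + B1*p + B2*p^2 mirrors the first/second order
# pressure coefficients described in the abstract (B1 in ppm/GPa, B2 in ppm/GPa^2).
pressures_gpa = np.array([0.0001, 0.025, 0.050, 0.100, 0.150, 0.200])   # GPa
shifts_ppm = np.array([119.02, 119.09, 119.16, 119.29, 119.40, 119.51]) # example data

# Least-squares fit of a second-order polynomial in pressure (highest degree first).
B2, B1, delta0 = np.polyfit(pressures_gpa, shifts_ppm, deg=2)
print(f"delta0 = {delta0:.2f} ppm, B1 = {B1:.2f} ppm/GPa, B2 = {B2:.2f} ppm/GPa^2")
```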
Abstract:
Monte Carlo simulation was used to evaluate properties of a simple Bayesian MCMC analysis of the random effects model for single group Cormack-Jolly-Seber capture-recapture data. The MCMC method is applied to the model via a logit link, so the parameters p, S are on a logit scale, where logit(S) is assumed to have, and is generated from, a normal distribution with mean μ and variance σ². Marginal prior distributions on logit(p) and μ were independent normal with mean zero and standard deviation 1.75 for logit(p) and 100 for μ; hence minimally informative. The marginal prior on σ² was placed on τ² = 1/σ² as a gamma distribution with α = β = 0.001. The study design has 432 points spread over 5 factors: occasions (t), new releases per occasion (u), p, μ, and σ. At each design point 100 independent trials were completed (hence 43,200 trials in total), each with sample size n = 10,000 from the parameter posterior distribution. At 128 of these design points, comparisons are made to previously reported results from a method of moments procedure. We looked at properties of point and interval inference on μ and σ based on the posterior mean, median, and mode, and the equal-tailed 95% credibility interval. Bayesian inference did very well for the parameter μ, but under the conditions used here, MCMC inference performance for σ was mixed: poor for sparse data (i.e., only 7 occasions) or σ = 0, but good when there were sufficient data and σ was not small.
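As a rough sketch of the data-generating side of such a simulation study (one hypothetical design point; the priors and the MCMC sampler itself are not shown), single-group CJS capture histories under the logit-normal random effects model might be simulated as follows:

```python
import numpy as np

rng = np.random.default_rng(1)

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

# One hypothetical design point (values chosen for illustration only):
t, u = 7, 50                 # occasions and new releases per occasion (sparse-data case)
p, mu, sigma = 0.4, 1.0, 0.5 # capture probability and random-effects hyperparameters

# Random effects model: occasion-specific survival on the logit scale.
S = expit(rng.normal(mu, sigma, size=t - 1))

# Generate single-group CJS capture histories for animals first released on each occasion.
histories = []
for first in range(t - 1):                   # release occasions
    for _ in range(u):
        h = np.zeros(t, dtype=int)
        h[first] = 1                         # marked and released
        alive = True
        for occ in range(first + 1, t):
            alive = alive and rng.random() < S[occ - 1]  # survive the interval
            if alive and rng.random() < p:               # captured, given alive
                h[occ] = 1
        histories.append(h)

histories = np.array(histories)
print("simulated capture histories:", histories.shape)
```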
Abstract:
Probabilistic modeling is the defining characteristic of estimation of distribution algorithms (EDAs), which determines their behavior and performance in optimization. Regularization is a well-known statistical technique used for obtaining an improved model by reducing the generalization error of estimation, especially in high-dimensional problems. ℓ1-regularization is a type of this technique with the appealing variable selection property, which results in sparse model estimations. In this thesis, we study the use of regularization techniques for model learning in EDAs. Several methods for regularized model estimation in continuous domains based on a Gaussian distribution assumption are presented, and analyzed from different aspects when used for optimization in a high-dimensional setting, where the population size of the EDA has a logarithmic scale with respect to the number of variables. The optimization results obtained for a number of continuous problems with an increasing number of variables show that the proposed EDA based on regularized model estimation performs a more robust optimization, and is able to achieve significantly better results for larger dimensions than other Gaussian-based EDAs. We also propose a method for learning a marginally factorized Gaussian Markov random field model using regularization techniques and a clustering algorithm. The experimental results show notable optimization performance on continuous additively decomposable problems when using this model estimation method. Our study also covers multi-objective optimization, and we propose joint probabilistic modeling of variables and objectives in EDAs based on Bayesian networks, specifically models inspired by multi-dimensional Bayesian network classifiers. It is shown that with this approach to modeling, two new types of relationships are encoded in the estimated models in addition to the variable relationships captured in other EDAs: objective-variable and objective-objective relationships. An extensive experimental study shows the effectiveness of this approach for multi- and many-objective optimization. With the proposed joint variable-objective modeling, in addition to the Pareto set approximation, the algorithm is also able to obtain an estimation of the multi-objective problem structure. Finally, the study of multi-objective optimization based on joint probabilistic modeling is extended to noisy domains, where the noise in objective values is represented by intervals. A new version of the Pareto dominance relation for ordering the solutions in these problems, namely α-degree Pareto dominance, is introduced and its properties are analyzed. We show that the ranking methods based on this dominance relation can result in competitive performance of EDAs with respect to the quality of the approximated Pareto sets. This dominance relation is then used together with a method for joint probabilistic modeling based on ℓ1-regularization for multi-objective feature subset selection in classification, where six different measures of accuracy are considered as objectives with interval values. The individual assessment of the proposed joint probabilistic modeling and solution ranking methods on datasets with small to medium dimensionality, when using two different Bayesian classifiers, shows that comparable or better Pareto sets of feature subsets are approximated in comparison to standard methods.
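As a hedged illustration of the core idea (not the thesis's actual algorithm), one generation of a Gaussian EDA with an ℓ1-regularized model estimate can be sketched with scikit-learn's GraphicalLasso on a toy sphere function:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

def sphere(x):
    # Toy continuous objective (minimization); stands in for the benchmark problems.
    return np.sum(x ** 2, axis=1)

n_vars = 50
pop_size = int(20 * np.log(n_vars))        # population size logarithmic in dimension
population = rng.normal(0.0, 5.0, size=(pop_size, n_vars))

for generation in range(10):
    fitness = sphere(population)
    selected = population[np.argsort(fitness)[: pop_size // 2]]  # truncation selection

    # l1-regularized (sparse) estimation of the Gaussian model from the selected set.
    model = GraphicalLasso(alpha=0.1, max_iter=200).fit(selected)
    mean, cov = model.location_, model.covariance_

    # Sample the next population from the regularized Gaussian estimate
    # (small diagonal jitter guards against a near-singular covariance).
    population = rng.multivariate_normal(mean, cov + 1e-8 * np.eye(n_vars), size=pop_size)

print("best fitness:", sphere(population).min())
```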
Abstract:
Niche apportionment models have only been applied once to parasite communities. Only the random assortment model (RA), which indicates that species abundances are independent from each other and that interspecific competition is unimportant, provided a good fit to 3 out of 6 parasite communities investigated. The generality of this result needs to be validated, however. In this study we apply 5 niche apportionment models to the parasite communities of 14 fish species from the Great Barrier Reef. We determined which model fitted the data when using either numerical abundance or biomass as an estimate of parasite abundance, and whether the fit of niche apportionment models depends on how the parasite community is defined (e.g. ectoparasites, endoparasites, or all parasites considered together). The RA model provided a good fit for the whole community of parasites in 7 fish species when using biovolume (as a surrogate of biomass) as a measure of species abundance. The RA model also fitted observed data when ecto- and endoparasites were considered separately, using abundance or biovolume, but less frequently. Variation in fish sizes among species was not associated with the probability of a model fitting the data. Total numerical abundance and biovolume of parasites were not related across host species, suggesting that they capture different aspects of abundance. Biovolume is not only a better measurement to use with niche-orientated models; it should also be the preferred descriptor to analyse parasite community structure in other contexts. Most of the biological assumptions behind the RA model, i.e. randomness in apportioning niche space, lack of interspecific competition, independence of abundance among different species, and species with variable niches in changeable environments, are in accordance with some previous findings on parasite communities. Thus, parasite communities may generally be unsaturated with species, with empty niches, and interspecific interactions may generally be unimportant in determining parasite community structure.
Abstract:
In Statnote 9, we described a one-way analysis of variance (ANOVA) 'random effects' model in which the objective was to estimate the degree of variation of a particular measurement and to compare different sources of variation in space and time. The illustrative scenario involved the role of computer keyboards in a University communal computer laboratory as a possible source of microbial contamination of the hands. The study estimated the aerobic colony count of ten selected keyboards with samples taken from two keys per keyboard determined at 9am and 5pm. This type of design is often referred to as a 'nested' or 'hierarchical' design and the ANOVA estimated the degree of variation: (1) between keyboards, (2) between keys within a keyboard, and (3) between sample times within a key. An alternative to this design is a 'fixed effects' model in which the objective is not to measure sources of variation per se but to estimate differences between specific groups or treatments, which are regarded as 'fixed' or discrete effects. This Statnote describes two scenarios utilizing this type of analysis: (1) measuring the degree of bacterial contamination on 2p coins collected from three types of business property, viz., a butcher's shop, a sandwich shop, and a newsagent and (2) the effectiveness of drugs in the treatment of a fungal eye infection.
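For the fixed-effects scenario (1), a minimal sketch of the corresponding one-way ANOVA, using hypothetical colony counts, is:

```python
from scipy.stats import f_oneway

# Hypothetical aerobic colony counts from 2p coins collected at three business types;
# the shop types are the 'fixed' effects whose group means are compared.
butcher   = [112, 98, 135, 120, 101]
sandwich  = [87, 95, 78, 102, 91]
newsagent = [45, 60, 52, 49, 58]

f_stat, p_value = f_oneway(butcher, sandwich, newsagent)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```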
Abstract:
This paper provides the most fully comprehensive evidence to date on whether or not monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. We use non-linear artificial intelligence techniques, namely recurrent neural networks, evolution strategies, and kernel methods, in our forecasting experiment. In the experiment, these three methodologies compete to find the best fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were non-linear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation. There is evidence in the literature that evolutionary methods can be used to evolve kernels; hence, our future work should combine the evolutionary and kernel methods to get the benefits of both.
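A stripped-down version of such a comparison, with a synthetic series standing in for US inflation and kernel ridge regression standing in for the kernel methods used in the paper, might look like:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Hypothetical monthly inflation series; the real experiment used US inflation data.
inflation = np.cumsum(rng.normal(0.0, 0.2, size=200)) * 0.05 + 2.0

def lagged(series, n_lags):
    # Build a lagged design matrix X and one-step-ahead targets y.
    X = np.column_stack([series[i: len(series) - n_lags + i] for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

n_lags, split = 12, 150
X, y = lagged(inflation, n_lags)
X_train, y_train, X_test, y_test = X[:split], y[:split], X[split:], y[split:]

# Non-linear autoregressive model via a kernel method (RBF kernel ridge regression).
kr = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1).fit(X_train, y_train)
rmse_kernel = np.sqrt(np.mean((kr.predict(X_test) - y_test) ** 2))

# Naive random walk benchmark: the forecast equals the most recent observation.
rmse_rw = np.sqrt(np.mean((X_test[:, -1] - y_test) ** 2))
print(f"kernel AR RMSE = {rmse_kernel:.4f}, random walk RMSE = {rmse_rw:.4f}")
```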
On Multi-Dimensional Random Walk Models Approximating Symmetric Space-Fractional Diffusion Processes
Abstract:
Mathematics Subject Classification: 26A33, 47B06, 47G30, 60G50, 60G52, 60G60.
Abstract:
In non-linear random effects models, some attention has very recently been devoted to the analysis of suitable transformations of the response variables, either separately from (Taylor 1996) or together with (Oberg and Davidian 2000) transformations of the covariates, and, as far as we know, no investigation has been carried out on the choice of link function in such models. In our study we consider the use of a random effects model when a parameterized family of links (Aranda-Ordaz 1981, Prentice 1996, Pregibon 1980, Stukel 1988 and Czado 1997) is introduced. We point out the advantages and the drawbacks associated with the choice of this data-driven kind of modeling. Difficulties in the interpretation of regression parameters, and therefore in understanding the influence of covariates, as well as problems related to loss of efficiency of estimates and overfitting, are discussed. A case study on radiotherapy usage in breast cancer treatment is presented.
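For reference, one common parameterization of the Aranda-Ordaz (1981) asymmetric link family cited above (written here as a sketch; other parameterizations appear in the literature) is:

```python
import numpy as np

def aranda_ordaz(mu, lam):
    """Aranda-Ordaz asymmetric link: eta = log(((1 - mu)**(-lam) - 1) / lam).

    lam = 1 recovers the logit link; lam -> 0 approaches the complementary log-log link.
    """
    return np.log(((1.0 - mu) ** (-lam) - 1.0) / lam)

def aranda_ordaz_inverse(eta, lam):
    # Inverse link, mapping the linear predictor back to a probability in (0, 1).
    return 1.0 - (1.0 + lam * np.exp(eta)) ** (-1.0 / lam)

mu = np.array([0.1, 0.5, 0.9])
for lam in (1.0, 0.5, 0.1):
    eta = aranda_ordaz(mu, lam)
    print(lam, eta, aranda_ordaz_inverse(eta, lam))
```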
Abstract:
We propose a new password-based 3-party protocol with a formal security proof in the standard model. Under reasonable assumptions we show that our new protocol is more efficient than the recent protocol of Abdalla and Pointcheval (FC 2005), proven in the random oracle model. We also observe some limitations in the model due to Abdalla, Fouque and Pointcheval (PKC 2005) for proving security of such protocols.
Abstract:
We treat the security of group key exchange (GKE) in the universal composability (UC) framework. Analyzing GKE protocols in the UC framework naturally addresses attacks by malicious insiders. We define an ideal functionality for GKE that captures contributiveness in addition to other desired security goals. We show that an efficient two-round protocol securely realizes the proposed functionality in the random oracle model. As a result, we obtain the most efficient UC-secure contributory GKE protocol known.
Abstract:
We introduce a formal model for certificateless authenticated key exchange (CL-AKE) protocols. Contrary to what might be expected, we show that the natural combination of an ID-based AKE protocol with a public key based AKE protocol cannot provide strong security. We provide the first one-round CL-AKE scheme proven secure in the random oracle model. We introduce two variants of the Diffie-Hellman trapdoor test introduced by Cash, Kiltz and Shoup (EUROCRYPT 2008). The proposed key agreement scheme is secure as long as each party has at least one uncompromised secret. Thus, our scheme is secure even if the key generation centre learns the ephemeral secrets of both parties.
Abstract:
The aim of this paper is to provide a contemporary summary of statistical and non-statistical meta-analytic procedures that have relevance to the type of experimental designs often used by sport scientists when examining differences/change in dependent measure(s) as a result of one or more independent manipulation(s). Using worked examples from studies on observational learning in the motor behaviour literature, we adopt a random effects model and give a detailed explanation of the statistical procedures for the three types of raw score difference-based analyses applicable to between-participant, within-participant, and mixed-participant designs. Major merits and concerns associated with these quantitative procedures are identified and agreed methods are reported for minimizing biased outcomes, such as those for dealing with multiple dependent measures from single studies, design variation across studies, different metrics (i.e. raw scores and difference scores), and variations in sample size. To complement the worked examples, we summarize the general considerations required when conducting and reporting a meta-analysis, including how to deal with publication bias, what information to present regarding the primary studies, and approaches for dealing with outliers. By bringing together these statistical and non-statistical meta-analytic procedures, we provide the tools required to clarify understanding of key concepts and principles.
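The abstract does not name a specific between-study variance estimator; purely as an illustration, a standard random effects pooling of raw-score effects (DerSimonian-Laird, which may or may not match the procedure used in the paper) can be sketched as:

```python
import numpy as np

def random_effects_meta(effects, variances):
    """DerSimonian-Laird random effects pooling of per-study effect estimates.

    `effects` are per-study effect sizes and `variances` their sampling variances;
    returns the pooled effect, its standard error, and the between-study variance tau^2.
    """
    effects, variances = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / variances                                   # fixed-effect weights
    theta_fe = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - theta_fe) ** 2)             # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)         # between-study variance
    w_star = 1.0 / (variances + tau2)                     # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Hypothetical effect sizes and variances from five observational-learning studies.
pooled, se, tau2 = random_effects_meta([0.45, 0.30, 0.62, 0.15, 0.50],
                                       [0.02, 0.05, 0.03, 0.04, 0.02])
print(f"pooled = {pooled:.3f}, "
      f"95% CI = ({pooled - 1.96 * se:.3f}, {pooled + 1.96 * se:.3f}), tau^2 = {tau2:.3f}")
```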
Abstract:
A group key exchange (GKE) protocol allows a set of parties to agree upon a common secret session key over a public network. In this thesis, we focus on designing efficient GKE protocols using public key techniques and appropriately revising security models for GKE protocols. For the purpose of modelling and analysing the security of GKE protocols we apply the widely accepted computational complexity approach. The contributions of the thesis to the area of GKE protocols are manifold. We propose the first GKE protocol that requires only one round of communication and is proven secure in the standard model. Our protocol is generically constructed from a key encapsulation mechanism (KEM). We also suggest an efficient KEM from the literature, which satisfies the underlying security notion, to instantiate the generic protocol. We then concentrate on enhancing the security of one-round GKE protocols. A new model of security for forward secure GKE protocols is introduced and a generic one-round GKE protocol with forward security is then presented. The security of this protocol is also proven in the standard model. We also propose an efficient forward secure encryption scheme that can be used to instantiate the generic GKE protocol. Our next contributions are to the security models of GKE protocols. We observe that the analysis of GKE protocols has not been as extensive as that of two-party key exchange protocols. Particularly, the security attribute of key compromise impersonation (KCI) resilience has so far been ignored for GKE protocols. We model the security of GKE protocols addressing KCI attacks by both outsider and insider adversaries. We then show that a few existing protocols are not secure against KCI attacks. A new proof of security for an existing GKE protocol is given under the revised model assuming random oracles. Subsequently, we treat the security of GKE protocols in the universal composability (UC) framework. We present a new UC ideal functionality for GKE protocols capturing the security attribute of contributiveness. An existing protocol with minor revisions is then shown to realize our functionality in the random oracle model. Finally, we explore the possibility of constructing GKE protocols in the attribute-based setting. We introduce the concept of attribute-based group key exchange (AB-GKE). A security model for AB-GKE and a one-round AB-GKE protocol satisfying our security notion are presented. The protocol is generically constructed from a new cryptographic primitive called encapsulation policy attribute-based KEM (EP-AB-KEM), which we introduce in this thesis. We also present a new EP-AB-KEM with a proof of security assuming generic groups and random oracles. The EP-AB-KEM can be used to instantiate our generic AB-GKE protocol.
Abstract:
We study MCF-7 breast cancer cell movement in a transwell apparatus. Various experimental conditions lead to a variety of monotone and nonmonotone responses which are difficult to interpret. We anticipate that the experimental results could be caused by cell-to-cell adhesion or volume exclusion. Without any modeling, it is impossible to understand the relative roles played by these two mechanisms. A lattice-based exclusion process random-walk model incorporating agent-to-agent adhesion is applied to the experimental system. Our combined experimental and modeling approach shows that a low value of cell-to-cell adhesion strength provides the best explanation of the experimental data suggesting that volume exclusion plays a more important role than cell-to-cell adhesion. This combined experimental and modeling study gives insight into the cell-level details and design of transwell assays.
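A one-dimensional sketch of a lattice-based exclusion-process random walk with agent-to-agent adhesion is given below; the specific adhesion rule and parameter values are assumptions for illustration, not the paper's two-dimensional transwell model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed adhesion rule: a move succeeds with probability (1 - q)**n, where n is the
# number of occupied nearest-neighbour sites and q is the cell-to-cell adhesion strength.
L, n_agents, q, steps = 200, 60, 0.3, 1000   # lattice size, agents, adhesion, sweeps
lattice = np.zeros(L, dtype=bool)
lattice[rng.choice(L, size=n_agents, replace=False)] = True

for _ in range(steps):
    for site in rng.permutation(np.flatnonzero(lattice)):
        target = (site + rng.choice([-1, 1])) % L            # pick a neighbouring site
        if lattice[target]:
            continue                                         # volume exclusion: occupied
        n_neighbours = int(lattice[(site - 1) % L]) + int(lattice[(site + 1) % L])
        if rng.random() < (1.0 - q) ** n_neighbours:         # adhesion slows detachment
            lattice[site], lattice[target] = False, True

print("occupied sites after simulation:", np.flatnonzero(lattice)[:10], "...")
```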
Abstract:
The contributions of this thesis fall into three areas of certificateless cryptography. The first area is encryption, where we propose new constructions for both identity-based and certificateless cryptography. We construct an n-out-of-n group encryption scheme for identity-based cryptography that does not require any special means to generate the keys of the trusted authorities that are participating. We also introduce a new security definition for chosen ciphertext secure multi-key encryption. We prove that our construction is secure as long as at least one authority is uncompromised, and show that the existing constructions for chosen ciphertext security from identity-based encryption also hold in the group encryption case. We then consider certificateless encryption as the special case of 2-out-of-2 group encryption and give constructions for highly efficient certificateless schemes in the standard model. Among these is the first construction of a lattice-based certificateless encryption scheme. Our next contribution is a highly efficient certificateless key encapsulation mechanism (KEM), which we prove secure in the standard model. We introduce a new way of proving the security of certificateless schemes that are based on identity-based schemes. We leave the identity-based part of the proof intact, and just extend it to cover the part that is introduced by the certificateless scheme. We show that our construction is more efficient than any instantiation of the generic constructions for certificateless key encapsulation in the standard model. The third area where the thesis contributes to the advancement of certificateless cryptography is key agreement. Swanson showed that many certificateless key agreement schemes are insecure if considered in a reasonable security model. We propose the first provably secure certificateless key agreement schemes in the strongest model for certificateless key agreement. We extend Swanson's definition for certificateless key agreement and give more power to the adversary. Our new schemes are secure as long as each party has at least one uncompromised secret. Our first construction is in the random oracle model and gives the adversary slightly more capabilities than our second construction in the standard model. Interestingly, our standard model construction is as efficient as the random oracle model construction.