931 results for Mathematical methods
Abstract:
Using six lattice types (4×4, 5×5, and 6×6 square lattices; a 3×3×3 cubic lattice; and 2+3+4+3+2 and 4+5+6+5+4 triangular lattices), three alphabet sizes (HP, HNUP, and 20 letters), and two energy functions, the designability of protein structures is calculated based on random sampling of structures and common biased sampling (CBS) of protein sequence space. Three quantities defined to elucidate designability are then calculated: the stability (average energy gap), the foldability, and the partnum of the structure. The authors find that, whatever the lattice type, alphabet size, and energy function used, highly designable (preferred) structures emerge. In all cases considered, local interactions reduce degeneracy and raise the designability. The designability is sensitive to the lattice type, alphabet size, energy function, and the method used to sample sequence space. Compared with random sampling, both the CBS and the Metropolis Monte Carlo sampling methods yield higher designability. The correlation coefficients between the designability, the stability, and the foldability are mostly larger than 0.5, demonstrating a strong correlation among them. The correlation between the designability and the partnum is weaker because the partnum is independent of the energy. The results are useful for practical applications of the designability principle, such as predicting protein tertiary structure.
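A minimal sketch of the kind of lattice-model calculation this abstract describes, assuming a toy 3×3 square lattice with the HP alphabet and an energy of −1 per non-bonded H-H contact; sequences are enumerated exhaustively here rather than sampled, and symmetry-related folds are not merged, so this illustrates the designability count, not the authors' procedure:

```python
# Toy designability calculation (illustrative only, not the authors' code).
# Structures are compact self-avoiding walks on a 3x3 lattice (chain length 9);
# the energy of a sequence threaded onto a structure is -1 per non-bonded H-H contact.
# The designability of a structure is the number of sequences whose unique ground state it is.
from itertools import product

N = 3
SITES = [(x, y) for x in range(N) for y in range(N)]

def compact_walks():
    """Enumerate self-avoiding walks visiting every site of the N x N lattice."""
    walks = []
    def extend(path):
        if len(path) == N * N:
            walks.append(tuple(path))
            return
        x, y = path[-1]
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < N and 0 <= ny < N and (nx, ny) not in path:
                extend(path + [(nx, ny)])
    for start in SITES:
        extend([start])
    return walks

def contacts(walk):
    """Pairs (i, j), |j - i| > 1, that are lattice neighbours in the fold."""
    pos = {p: i for i, p in enumerate(walk)}
    cs = []
    for i, (x, y) in enumerate(walk):
        for q in ((x + 1, y), (x, y + 1)):
            j = pos.get(q)
            if j is not None and abs(j - i) > 1:
                cs.append((min(i, j), max(i, j)))
    return cs

structures = compact_walks()
contact_maps = [contacts(w) for w in structures]
designability = [0] * len(structures)

for seq in product("HP", repeat=N * N):      # random or biased sampling would replace this loop
    energies = [-sum(seq[i] == "H" and seq[j] == "H" for i, j in cm)
                for cm in contact_maps]
    e_min = min(energies)
    ground = [k for k, e in enumerate(energies) if e == e_min]
    if len(ground) == 1:                     # unique ground state counts toward designability
        designability[ground[0]] += 1

print("most designable structure folds", max(designability), "sequences")
```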
Abstract:
Summary This systematic review demonstrates that vitamin D supplementation does not have a significant effect on muscle strength in vitamin D-replete adults. However, a limited number of studies demonstrate an increase in proximal muscle strength in adults with vitamin D deficiency. Introduction The purpose of this study is to systematically review the evidence on the effect of vitamin D supplementation on muscle strength in adults. Methods A comprehensive systematic database search was performed. Inclusion criteria included randomised controlled trials (RCTs) involving adult human participants. All forms and doses of vitamin D supplementation, with or without calcium supplementation, were included and compared with placebo or standard care. Outcome measures included evaluation of strength. Outcomes were compared by calculating standardised mean differences (SMD) and 95% confidence intervals. Results Of 52 identified studies, 17 RCTs involving 5,072 participants met the inclusion criteria. Meta-analysis showed no significant effect of vitamin D supplementation on grip strength (SMD −0.02, 95% CI −0.15, 0.11) or proximal lower limb strength (SMD 0.1, 95% CI −0.01, 0.22) in adults with 25(OH)D levels >25 nmol/L. Pooled data from two studies in vitamin D-deficient participants (25(OH)D <25 nmol/L) demonstrated a large effect of vitamin D supplementation on hip muscle strength (SMD 3.52, 95% CI 2.18, 4.85). Conclusion Based on the studies included in this systematic review, vitamin D supplementation does not have a significant effect on muscle strength in adults with baseline 25(OH)D >25 nmol/L. However, a limited number of studies demonstrate an increase in proximal muscle strength in adults with vitamin D deficiency. Keywords: Muscle, Muscle fibre, Strength, Vitamin D
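For readers unfamiliar with the effect measure, the sketch below shows how a standardised mean difference (here Hedges' g) and a fixed-effect pooled estimate with a 95% CI are typically computed; the function names and the numbers at the bottom are hypothetical and are not data from this review:

```python
# Minimal sketch, not the review's analysis code: standardised mean difference
# (Cohen's d with Hedges' small-sample correction) and its variance for one trial,
# plus a fixed-effect (inverse-variance) pooled estimate across trials.
import math

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Return (Hedges' g, variance of g) for treatment vs. control arms."""
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled
    j = 1 - 3 / (4 * (n_t + n_c) - 9)            # small-sample correction factor
    g = j * d
    var_g = j**2 * ((n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c)))
    return g, var_g

def pooled(effects):
    """Fixed-effect pooling of (g, var) pairs; returns (pooled g, 95% CI)."""
    weights = [1 / v for _, v in effects]
    g_bar = sum(w * g for (g, _), w in zip(effects, weights)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return g_bar, (g_bar - 1.96 * se, g_bar + 1.96 * se)

# Hypothetical numbers purely to exercise the formulas (not data from the review):
trials = [smd(32.1, 6.0, 40, 30.5, 5.8, 42), smd(28.4, 7.1, 55, 28.0, 6.9, 53)]
print(pooled(trials))
```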
Abstract:
Expert elicitation is the process of determining what expert knowledge is relevant to support a quantitative analysis and then eliciting this information in a form that supports analysis or decision-making. The credibility of the overall analysis, therefore, relies on the credibility of the elicited knowledge. This, in turn, is determined by the rigor of the design and execution of the elicitation methodology, as well as by its clear communication to ensure transparency and repeatability. It is difficult to establish rigor when the elicitation methods are not documented, as often occurs in ecological research. In this chapter, we describe software that can be combined with a well-structured elicitation process to improve the rigor of expert elicitation and the documentation of the results.
Abstract:
This paper discusses the statistical analyses used to derive bridge live load models for Hong Kong from 10 years of weigh-in-motion (WIM) data. The statistical concepts required and the terminology adopted in the development of bridge live load models are introduced. This paper includes studies of representative vehicles drawn from the large amount of WIM data in Hong Kong. Different load-affecting parameters, such as gross vehicle weights, axle weights, axle spacings, and average daily number of trucks, are first analyzed by various stochastic processes in order to obtain the mathematical distributions of these parameters. As a prerequisite to determining accurate bridge design loadings in Hong Kong, this study not only takes advantage of code formulation methods used internationally but also presents a new method for modelling the collected WIM data using a statistical approach.
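As a hedged illustration of the distribution-fitting step described above, the sketch below fits a lognormal distribution to synthetic gross-vehicle-weight records with scipy and reads off a high quantile; the distribution family, quantile level, and data are placeholders rather than the paper's choices:

```python
# Illustrative sketch only (the paper's own methodology is not reproduced here):
# fit a candidate distribution to gross-vehicle-weight (GVW) records and read off
# a high quantile of the kind used when deriving characteristic live loads.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for WIM gross vehicle weights in tonnes (real records would be loaded here).
gvw = rng.lognormal(mean=np.log(18.0), sigma=0.35, size=5000)

shape, loc, scale = stats.lognorm.fit(gvw, floc=0)      # maximum-likelihood fit
q95 = stats.lognorm.ppf(0.95, shape, loc=loc, scale=scale)

# Goodness of fit: Kolmogorov-Smirnov test against the fitted distribution.
ks = stats.kstest(gvw, "lognorm", args=(shape, loc, scale))
print(f"95th-percentile GVW = {q95:.1f} t, KS p-value = {ks.pvalue:.3f}")
```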
Abstract:
A model for drug diffusion from a spherical polymeric drug delivery device is considered. The model contains two key features. The first is that solvent diffuses into the polymer, which then transitions from a glassy to a rubbery state. The interface between the two states of polymer is modelled as a moving boundary, whose speed is governed by a kinetic law; the same moving boundary problem arises in the one-phase limit of a Stefan problem with kinetic undercooling. The second feature is that drug diffuses only through the rubbery region, with a nonlinear diffusion coefficient that depends on the concentration of solvent. We analyse the model using both formal asymptotics and numerical computation, the latter by applying a front-fixing scheme with a finite volume method. Previous results are extended and comparisons are made with linear models that work well under certain parameter regimes. Finally, a model for a multi-layered drug delivery device is suggested, which allows for more flexible control of drug release.
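A minimal sketch of the front-fixing idea mentioned above, under simplifying assumptions that depart from the paper's model: planar rather than spherical geometry, explicit finite differences rather than a finite-volume scheme, a fixed front concentration in place of the kinetic law, and no drug phase. It only illustrates how mapping the moving domain 0 < x < s(t) to a fixed interval couples the diffusion equation to the front motion:

```python
# Toy front-fixing scheme for a one-phase Stefan-type moving-boundary problem (not the
# paper's equations). The region 0 < x < s(t) is mapped to 0 < xi < 1 by xi = x/s(t),
# which turns c_t = c_xx into u_t = u_xixi / s^2 + xi*(sdot/s)*u_xi on a fixed grid.
# Front conditions assumed here: u = 0 at the front and the Stefan condition
# sdot = -lam * c_x(s, t); the paper's kinetic law and nonlinear diffusivity are omitted.
import numpy as np

M, lam = 50, 1.0                    # grid intervals and Stefan constant (assumed values)
xi = np.linspace(0.0, 1.0, M + 1)
dxi = xi[1] - xi[0]

s = 0.1                             # initial front position (assumed)
u = 1.0 - xi                        # initial profile: 1 at the surface, 0 at the front
t, t_end = 0.0, 0.5

while t < t_end:
    dt = 0.25 * (s * dxi) ** 2      # explicit-stability time step on the mapped mesh
    sdot = lam * u[-2] / (s * dxi)  # Stefan condition via a one-sided difference (u[-1] = 0)
    u_new = u.copy()
    # interior update: diffusion in mapped coordinates plus the mesh-motion advection term
    u_new[1:-1] = u[1:-1] + dt * (
        (u[2:] - 2 * u[1:-1] + u[:-2]) / (s * dxi) ** 2
        + xi[1:-1] * (sdot / s) * (u[2:] - u[:-2]) / (2 * dxi)
    )
    u_new[0], u_new[-1] = 1.0, 0.0  # surface and front boundary conditions
    u, s, t = u_new, s + dt * sdot, t + dt

print(f"front position s({t_end}) = {s:.3f}")
```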
Abstract:
Many of the classification algorithms developed in the machine learning literature, including the support vector machine and boosting, can be viewed as minimum contrast methods that minimize a convex surrogate of the 0–1 loss function. The convexity makes these algorithms computationally efficient. The use of a surrogate, however, has statistical consequences that must be balanced against the computational virtues of convexity. To study these issues, we provide a general quantitative relationship between the risk as assessed using the 0–1 loss and the risk as assessed using any nonnegative surrogate loss function. We show that this relationship gives nontrivial upper bounds on excess risk under the weakest possible condition on the loss function—that it satisfies a pointwise form of Fisher consistency for classification. The relationship is based on a simple variational transformation of the loss function that is easy to compute in many applications. We also present a refined version of this result in the case of low noise, and show that in this case, strictly convex loss functions lead to faster rates of convergence of the risk than would be implied by standard uniform convergence arguments. Finally, we present applications of our results to the estimation of convergence rates in function classes that are scaled convex hulls of a finite-dimensional base class, with a variety of commonly used loss functions.
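The variational transform mentioned above can be evaluated numerically for common margin losses. The sketch below uses one standard formulation (conditional risk C_eta(alpha) = eta*phi(alpha) + (1-eta)*phi(-alpha), with the transform given by the gap between the sign-constrained and unconstrained infima); the notation and normalisation are assumptions on my part rather than quotes from the paper:

```python
# Numerical sketch of the loss transform relating surrogate risk to 0-1 risk.
# For a margin loss phi, compare the best conditional phi-risk achievable with a
# prediction of the "wrong" sign against the unconstrained optimum.
import numpy as np

alphas = np.linspace(-10, 10, 4001)          # crude grid standing in for the infimum over alpha

def psi(phi, theta):
    """Numerical transform psi(theta) for a margin loss phi, theta in [0, 1]."""
    eta = (1 + theta) / 2
    risk = eta * phi(alphas) + (1 - eta) * phi(-alphas)
    h_minus = risk[alphas * (2 * eta - 1) <= 0].min()   # restricted to sign-mismatched predictions
    h = risk.min()                                       # unconstrained optimal conditional risk
    return h_minus - h

hinge = lambda a: np.maximum(0.0, 1.0 - a)
expo = lambda a: np.exp(-a)

for theta in (0.2, 0.5, 0.8):
    # For the hinge loss psi(theta) = theta; for the exponential loss psi(theta) = 1 - sqrt(1 - theta^2).
    print(theta, psi(hinge, theta), psi(expo, theta))
```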
Abstract:
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
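The label-flipping equivalence in the last sentence can be made concrete. In the sketch below, a depth-bounded scikit-learn decision tree stands in (approximately) for exact empirical risk minimisation over each model class; labels are assumed to be a numpy array of -1/+1, the sample is assumed to be in random order, and the penalty constants of the paper are omitted:

```python
# Sketch of a maximal-discrepancy penalty (my notation, not the paper's code).
# For 0-1 loss, maximising [error on first half - error on second half] over the class
# equals 1 - 2 * (minimal training error after flipping the first half's labels).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def max_discrepancy(X, y, depth):
    """Approximate maximal discrepancy of depth-bounded trees on the sample (X, y)."""
    n = len(y)
    y_flipped = y.copy()
    y_flipped[: n // 2] *= -1                         # flip first-half labels (y in {-1, +1})
    clf = DecisionTreeClassifier(max_depth=depth).fit(X, y_flipped)
    err = np.mean(clf.predict(X) != y_flipped)        # approximate ERM training error
    return 1.0 - 2.0 * err

def select_depth(X, y, depths):
    """Pick the tree depth minimising training error + discrepancy penalty."""
    scores = {}
    for d in depths:
        clf = DecisionTreeClassifier(max_depth=d).fit(X, y)
        train_err = np.mean(clf.predict(X) != y)
        scores[d] = train_err + max_discrepancy(X, y, d)
    return min(scores, key=scores.get), scores
```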
Abstract:
Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems associated with policy degradation in value-function methods. In this paper we introduce GPOMDP, a simulation-based algorithm for generating a biased estimate of the gradient of the average reward in Partially Observable Markov Decision Processes (POMDPs) controlled by parameterized stochastic policies. A similar algorithm was proposed by Kimura, Yamamura, and Kobayashi (1995). The algorithm's chief advantages are that it requires storage of only twice the number of policy parameters, uses one free parameter β ∈ [0,1) (which has a natural interpretation in terms of bias-variance trade-off), and requires no knowledge of the underlying state. We prove convergence of GPOMDP, and show how the correct choice of the parameter β is related to the mixing time of the controlled POMDP. We briefly describe extensions of GPOMDP to controlled Markov chains, continuous state, observation, and control spaces, multiple agents, higher-order derivatives, and a version for training stochastic policies with internal states. In a companion paper (Baxter, Bartlett, & Weaver, 2001) we show how the gradient estimates generated by GPOMDP can be used in both a traditional stochastic gradient algorithm and a conjugate-gradient procedure to find local optima of the average reward. ©2001 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved.
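A sketch of the estimator as the abstract describes it: storage of only an eligibility trace and a running average (twice the number of policy parameters) and a single free parameter β. The environment and policy interfaces are assumptions for illustration, not the authors' code:

```python
# GPOMDP-style gradient estimate (illustrative sketch).
# Assumed minimal interfaces: env.reset() -> observation, env.step(action) -> (observation, reward),
# and policy(theta, obs) -> (action, grad_theta log prob of that action); theta is a numpy array.
import numpy as np

def gpomdp(env, theta, policy, beta=0.9, T=100_000):
    """Biased estimate of the gradient of the average reward with respect to theta."""
    z = np.zeros_like(theta)        # eligibility trace (discounted sum of score functions)
    delta = np.zeros_like(theta)    # running gradient estimate
    obs = env.reset()
    for t in range(T):
        action, grad_log = policy(theta, obs)
        obs, reward = env.step(action)
        z = beta * z + grad_log                      # beta trades bias against variance
        delta += (reward * z - delta) / (t + 1)      # running average of reward * trace
    return delta
```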
Abstract:
Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space: classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data, this gives a powerful transductive algorithm: using the labeled part of the data, one can learn an embedding also for the unlabeled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving an important open problem.
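A hedged sketch of the SDP formulation using cvxpy: the kernel over the combined training and test points is a linear combination of fixed base kernels, constrained to be positive semidefinite with fixed trace. For brevity the objective here maximises alignment of the training block with the label matrix y yᵀ rather than the 2-norm soft-margin measure discussed in the abstract; the function name and interface are my own:

```python
# Learning a kernel matrix by semidefinite programming (illustrative sketch, not the paper's code).
import numpy as np
import cvxpy as cp

def learn_kernel(K_list, y, n_train, c=1.0):
    """K_list: base kernel matrices over train+test points; y: +/-1 labels for the first n_train."""
    mu = cp.Variable(len(K_list))                       # mixing weights (may be negative)
    K = sum(mu[i] * K_list[i] for i in range(len(K_list)))
    K_train = K[:n_train, :n_train]
    align = cp.sum(cp.multiply(K_train, np.outer(y, y)))    # alignment with the label matrix
    problem = cp.Problem(cp.Maximize(align),
                         [K >> 0, cp.trace(K) == c])    # semidefinite and trace constraints
    problem.solve()                                     # requires an SDP-capable solver (e.g. SCS)
    return K.value                                      # learned kernel over train and test points
```

Because the learned matrix covers the unlabeled test points as well, it can be passed directly to a standard kernel classifier, which is the transductive use described in the abstract.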