989 results for parametric implicit vector equilibrium problems
Abstract:
A sample of 95 sib pairs affected with insulin-dependent diabetes and typed with their normal parents for 28 markers on chromosome 6 has been analyzed by several methods. When appropriate parameters are efficiently estimated, a parametric model is equivalent to the β model, which is superior to nonparametric alternatives both in single point tests (as found previously) and in multipoint tests. Theory is given for meta-analysis combined with allelic association, and problems that may be associated with errors of map location and/or marker typing are identified. Reducing the number of association tests in a dense map by multipoint analysis can give a 3-fold reduction in the critical lod, and therefore in the cost of positional cloning.
Abstract:
We introduce a method of functionally classifying genes by using gene expression data from DNA microarray hybridization experiments. The method is based on the theory of support vector machines (SVMs). SVMs are considered a supervised computer learning method because they exploit prior knowledge of gene function to identify unknown genes of similar function from expression data. SVMs avoid several problems associated with unsupervised clustering methods, such as hierarchical clustering and self-organizing maps. SVMs have many mathematical features that make them attractive for gene expression analysis, including their flexibility in choosing a similarity function, sparseness of solution when dealing with large data sets, the ability to handle large feature spaces, and the ability to identify outliers. We test several SVMs that use different similarity metrics, as well as some other supervised learning methods, and find that the SVMs best identify sets of genes with a common function using expression data. Finally, we use SVMs to predict functional roles for uncharacterized yeast ORFs based on their expression data.
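As a rough sketch of the kind of workflow this abstract describes (not the authors' code; the data, labels, and kernel choices below are placeholders), one could compare SVMs built on different similarity functions on an expression matrix as follows:

    # Illustrative only: synthetic stand-in for an expression matrix
    # (rows = genes, columns = hybridization experiments) and hypothetical
    # labels marking membership in one known functional class.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 79))      # 500 genes x 79 experiments (made up)
    y = rng.integers(0, 2, size=500)    # 1 = member of the functional class

    # Try SVMs with different similarity functions (kernels) and report
    # cross-validated accuracy, mirroring the comparison in the abstract.
    for kernel in ("linear", "poly", "rbf"):
        clf = SVC(kernel=kernel, class_weight="balanced")
        print(kernel, cross_val_score(clf, X, y, cv=3).mean())

Class weighting is included in the sketch because a functional class is typically much smaller than the set of remaining genes.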
Abstract:
Adenoviral vectors are widely used as highly efficient gene transfer vehicles in a variety of biological research strategies including human gene therapy. One of the limitations of the currently available adenoviral vector system is the presence of the majority of the viral genome in the vector, resulting in leaky expression of viral genes particularly at high multiplicity of infection and limited cloning capacity of exogenous sequences. As a first step to overcome this problem, we attempted to rescue a defective human adenovirus serotype 5 DNA, which had an essential region of the viral genome (L1, L2, VAI + II, pTP) deleted and replaced with an indicator gene. In the presence of wild-type adenovirus as a helper, this DNA was packaged and propagated as transducing viral particles. After several rounds of amplification, the titer of the recombinant virus reached at least 4 × 10^6 transducing particles per ml. The recombinant virus could be partially purified from the helper virus by CsCl equilibrium density-gradient centrifugation. The structure of the recombinant virus around the marker gene remained intact after serial propagation, while the pBR sequence inserted in the E1 region was deleted from the recombinant virus. Our results suggest that it should be possible to develop a helper-dependent adenoviral vector, which does not encode any viral proteins, as an alternative to the currently available adenoviral vector systems.
Abstract:
Presentation submitted to PSE Seminar, Chemical Engineering Department, Center for Advanced Process Decision-making (CAPD), Carnegie Mellon University, Pittsburgh (USA), October 2012.
Abstract:
We address discrete-continuous dynamic optimization problems using a disjunctive multistage modeling framework with implicit discontinuities, which increases the problem complexity since the number of continuous phases and discrete events is not known a priori. After setting a fixed alternative sequence of modes, we convert the infinite-dimensional continuous mixed-logic dynamic optimization (MLDO) problem into a finite-dimensional discretized GDP problem by orthogonal collocation on finite elements. We use the Logic-based Outer Approximation algorithm to fully exploit the structure of the GDP representation of the problem. This modeling framework is illustrated with an optimization problem with implicit discontinuities (the diver problem).
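For orientation, a generic discretized GDP of the kind referred to here can be written schematically (our notation, not necessarily the paper's) as

    \min_{x, Y} \; f(x) + \sum_{k \in K} c_k
    \quad \text{s.t.} \quad g(x) \le 0, \qquad
    \bigvee_{j \in J_k} \begin{bmatrix} Y_{jk} \\ r_{jk}(x) \le 0 \\ c_k = \gamma_{jk} \end{bmatrix} \;\; \forall k \in K, \qquad
    \Omega(Y) = \mathrm{True}, \quad x \in X, \; Y_{jk} \in \{\mathrm{True}, \mathrm{False}\},

where, after orthogonal collocation on finite elements, x collects the discretized state and control profiles, the Boolean variables Y_{jk} select the active mode in each stage, and Ω(Y) encodes the logic relating them; Logic-based Outer Approximation then alternates between NLP subproblems with fixed Boolean assignments and a master problem that proposes new ones.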
Abstract:
Convex vector (or multi-objective) semi-infinite optimization deals with the simultaneous minimization of finitely many convex scalar functions subject to infinitely many convex constraints. This paper provides characterizations of the weakly efficient, efficient and properly efficient points in terms of cones involving the data and Karush–Kuhn–Tucker conditions. The latter characterizations rely on different local and global constraint qualifications. The results in this paper generalize those obtained by the same authors on linear vector semi-infinite optimization problems.
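In schematic form (our notation), the problem class is

    \min_{x \in \mathbb{R}^n} \; \bigl(f_1(x), \ldots, f_p(x)\bigr)
    \quad \text{s.t.} \quad g_t(x) \le 0 \;\; \text{for all } t \in T,

where f_1, ..., f_p and every g_t are convex and the index set T is infinite; the minimization is understood in the vector (Pareto) sense, which is why weakly efficient, efficient and properly efficient points have to be distinguished.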
Abstract:
As the Greek debt drama reaches another supposedly decisive point, Daniel Gros urges creditors (and indeed all policy-makers) to think about the long term and poses one key question in this CEPS High-Level Brief: What can be gained by keeping Greece inside the euro area at “whatever it takes”? As he points out, the US, with its unified politics and its federal fiscal transfer system, is often taken as a model for the Eurozone, and it is thus instructive to consider the longer-term performance of an area of the US which has for years been kept afloat by massive transfers, and which is now experiencing a public debt crisis. The entity in question is Puerto Rico, which is an integral part of the US in all relevant economic dimensions (currency, economic policy, etc.). The dismal fiscal and economic performance of Puerto Rico carries two lessons: 1) Keeping Greece in the eurozone by increasing implicit subsidies in the form of debt forgiveness might create a low-growth equilibrium with increasing aid dependency. 2) It is wrong to assume that further integration, including a fiscal and political union, would be sufficient to foster convergence and prevent further problems of the type the EU is experiencing with Greece.
Abstract:
In economics, there are many situations in which an individual's propensity to take a given action is increasing in the number of other people that individual believes will take the same action. This kind of strategic complementarity generally leads to the existence of multiple equilibria. Moreover, the outcome reached through agents' decentralized decisions may be inefficient, leaving room for economic policy intervention. This thesis studies different environments in which coordination among individuals matters. The first chapter analyzes how information manipulation and information disclosure affect coordination among investors and welfare in a model of bank runs. In the model, a regulatory authority cannot commit to revealing the true condition of the banking sector. The regulator observes bank-specific information (through a stress test, for example) and chooses whether to reveal it to the public or to release only an aggregate report on the health of the financial system as a whole. The aggregate report can be distorted at a cost; a higher cost means greater regulator credibility. Investors are aware of the regulator's incentives to hide bad news from the market, but information manipulation can nonetheless be effective. If the regulator's credibility is not too low, the disclosure policy is state-contingent, and there is always a set of states in which information is manipulated in equilibrium. If credibility is sufficiently low, however, the regulator opts for full transparency of bank-specific results, in which case only the soundest banks survive; a policy of opacity would lead to a systemic banking crisis regardless of the state. The credibility level that maximizes ex ante aggregate welfare is interior. The second and third chapters study dynamic coordination problems. The second chapter analyzes welfare in an environment in which agents receive random opportunities to switch between two networks. The results show that whenever the network of lower (intrinsic) quality prevails, this outcome is efficient; in fact, a central planner would be even more inclined to choose the lower-quality network. In equilibrium there can be inefficient switching that expands the higher-quality network. When individuals choose between two standards or networks of different quality, if everyone chose simultaneously the efficient solution would be for all to adopt the higher-quality network; when there are frictions and agents make staggered decisions, however, the efficient solution departs from this common intuition. The third chapter analyzes a dynamic coordination problem with staggered decisions in which agents are ex ante heterogeneous. The model has a unique equilibrium, characterized by thresholds that determine the choices of each type of agent. Despite the heterogeneity in payoffs, there is considerable conformity in individual actions in equilibrium: the thresholds of different types partially coincide whenever some arbitrary set of beliefs justifies this conformity. The equilibrium strategies of different types never fully coincide, however. Moreover, conformity is not inefficient: the efficient solution would feature strategies even more similar across types than those of the decentralized equilibrium.
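A textbook illustration of the multiplicity described in the opening sentences (not a model taken from the thesis) is the symmetric two-player game in which each player chooses Invest or Wait, with payoffs

    u_i(\text{Invest}) =
    \begin{cases}
      1 & \text{if the other player invests},\\
      -c & \text{otherwise},
    \end{cases}
    \qquad u_i(\text{Wait}) = 0, \qquad c > 0.

Investing is more attractive the more likely the other player is to invest (strategic complementarity), and both (Invest, Invest) and (Wait, Wait) are equilibria, with the second Pareto-dominated by the first.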
Abstract:
Thesis (M.S.)--University of Illinois, 1970.
Abstract:
At head of title: Contributions to cosmogony and the fundamental problems of geology.
Abstract:
The objective of this review is to draw attention to potential pitfalls in attempts to glean mechanistic information from the magnitudes of standard enthalpies and entropies derived from the temperature dependence of equilibrium and rate constants for protein interactions. Problems arise because the minimalist model that suffices to describe the energy differences between initial and final states usually comprises a set of linked equilibria, each of which is characterized by its own energetics. For example, because the overall standard enthalpy is a composite of those individual values, a positive magnitude for ΔH° can still arise despite all reactions within the subset being characterized by negative enthalpy changes: designation of the reaction as being entropy driven is thus equivocal. An experimenter must always bear in mind the fact that any mechanistic interpretation of the magnitudes of thermodynamic parameters refers to the reaction model rather than the experimental system. For the same reason there is little point in subjecting the temperature dependence of rate constants for protein interactions to transition-state analysis. If comparisons with reported values of standard enthalpy and entropy of activation are needed, they are readily calculated from the empirical Arrhenius parameters. Copyright (c) 2006 John Wiley & Sons, Ltd.
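To make the point concrete with an invented two-step scheme (numbers chosen purely for illustration, not taken from the review): if the measured constant is the composite K_obs = K_1/(1 + K_2), then van't Hoff analysis of K_obs yields

    \Delta H^{\circ}_{\mathrm{obs}} \;=\; \Delta H^{\circ}_{1} \;-\; \frac{K_2}{1+K_2}\,\Delta H^{\circ}_{2},

so with, say, ΔH°_1 = −10 kJ/mol, ΔH°_2 = −30 kJ/mol and K_2 = 1, the observed value is −10 + 15 = +5 kJ/mol: a positive overall enthalpy even though both constituent steps are exothermic, which is exactly the ambiguity the review warns against.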
Abstract:
Using generalized collocation techniques based on fitting functions that are trigonometric (rather than algebraic as in classical integrators), we develop a new class of multistage, one-step, variable stepsize, and variable coefficients implicit Runge-Kutta methods to solve oscillatory ODE problems. The coefficients of the methods are functions of the frequency and the stepsize. We refer to this class as trigonometric implicit Runge-Kutta (TIRK) methods. They integrate an equation exactly if its solution is a trigonometric polynomial with a known frequency. We characterize the order and A-stability of the methods and establish results similar to those of classical algebraic collocation RK methods. (c) 2006 Elsevier B.V. All rights reserved.
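Written schematically in our notation, an s-stage method of this type for y' = f(t, y) keeps the usual one-step implicit Runge-Kutta form

    Y_i = y_n + h \sum_{j=1}^{s} a_{ij}(\omega h)\, f(t_n + c_j h,\, Y_j), \qquad
    y_{n+1} = y_n + h \sum_{i=1}^{s} b_i(\omega h)\, f(t_n + c_i h,\, Y_i),

with the difference that the coefficients a_{ij} and b_i depend on the product of the fitted frequency ω and the stepsize h, because the collocating function is drawn from a trigonometric space such as span{1, cos ωt, sin ωt, ...} rather than from algebraic polynomials.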
Abstract:
The thrust of this report concerns spline theory and some of its background, following the development in Wahba (1991). We also review methods for determining hyper-parameters, such as the smoothing parameter, by Generalised Cross Validation. Splines have an advantage over Gaussian Process based procedures in that we can readily impose atmospherically sensible smoothness constraints while maintaining computational efficiency. Vector splines enable us to penalise gradients of vorticity and divergence in wind fields. Two similar techniques are summarised, and improvements based on robust error functions and restricted numbers of basis functions are given. A final, brief discussion of the application of vector splines to the problem of scatterometer data assimilation highlights the problem of ambiguous solutions.
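As a schematic of the variational problem involved (our notation; the report's formulation may differ in detail): given wind observations y_i at locations x_i, a vector smoothing spline selects the field w = (u, v) minimizing

    \sum_{i} \lVert w(x_i) - y_i \rVert^2 \;+\; \lambda \int \bigl( \lvert \nabla \zeta \rvert^2 + \lvert \nabla \delta \rvert^2 \bigr)\, dx,
    \qquad \zeta = \partial_x v - \partial_y u, \quad \delta = \partial_x u + \partial_y v,

where ζ and δ are the vorticity and divergence of the wind field and the smoothing parameter λ is the hyper-parameter to be chosen by Generalised Cross Validation.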
Abstract:
In recent years there has been an increased interest in applying non-parametric methods to real-world problems. Significant research has been devoted to Gaussian processes (GPs) due to their increased flexibility when compared with parametric models. These methods use Bayesian learning, which generally leads to analytically intractable posteriors. This thesis proposes a two-step solution to construct a probabilistic approximation to the posterior. In the first step we adapt Bayesian online learning to GPs: the final approximation to the posterior is the result of propagating the first and second moments of intermediate posteriors obtained by combining a new example with the previous approximation. The propagation of functional forms is solved by showing the existence of a parametrisation of the posterior moments that uses combinations of the kernel function at the training points, transforming the Bayesian online learning of functions into a parametric formulation. The drawback is the prohibitive quadratic scaling of the number of parameters with the size of the data, making the method inapplicable to large datasets. The second step solves the problem of the exploding parameter size and makes GPs applicable to arbitrarily large datasets. The approximation is based on a measure of distance between two GPs, the KL-divergence between GPs. This second approximation uses a constrained GP in which only a small subset of the whole training dataset is used to represent the GP. This subset is called the Basis Vector (BV) set, and the resulting GP is a sparse approximation to the true posterior. As this sparsity is based on the KL-minimisation, it is probabilistic and independent of the way the posterior approximation from the first step is obtained. We combine the sparse approximation with an extension to the Bayesian online algorithm that allows multiple iterations for each input, thus approximating a batch solution. The resulting sparse learning algorithm is generic: for different problems we only change the likelihood. The algorithm is applied to a variety of problems, and we examine its performance both on classical regression and classification tasks and on data assimilation and a simple density estimation problem.
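The parametrisation referred to in the first step can be written, in one common form for online GP approximations (the thesis's exact notation may differ), as

    \mu_t(x) = \sum_{i} \alpha_i\, k(x, x_i), \qquad
    \Sigma_t(x, x') = k(x, x') + \sum_{i,j} k(x, x_i)\, C_{ij}\, k(x_j, x'),

so the online update only has to propagate the coefficient vector α and the matrix C built on the training inputs; the quadratic growth of C with the number of examples is what the sparse approximation of the second step removes by restricting the sums to the Basis Vector (BV) set.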