967 results for function approximation
Abstract:
In this work, computationally efficient approximate methods are developed for analyzing uncertain dynamical systems. Uncertainties in both the excitation and the modeling are considered and examples are presented illustrating the accuracy of the proposed approximations.
For nonlinear systems under uncertain excitation, methods are developed to approximate the stationary probability density function and statistical quantities of interest. The methods are based on approximating solutions to the Fokker-Planck equation for the system and differ from traditional methods, in which approximate solutions to stochastic differential equations are found. The new methods require little computational effort, and examples are presented for which the accuracy of the proposed approximations compares favorably with results obtained by existing methods. The most significant improvements are made in approximating quantities related to the extreme values of the response, such as expected outcrossing rates, which are crucial for evaluating the reliability of the system.
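As a minimal illustration of the Fokker-Planck route (not the specific approximations developed in this work), the stationary density of a scalar gradient system dX = -V'(X) dt + sigma dW is p(x) proportional to exp(-2V(x)/sigma^2). The sketch below, with an assumed Duffing-type double-well potential, compares this closed-form density against a histogram from a simulated sample path, i.e. against the "traditional" stochastic-differential-equation route.

```python
import numpy as np

# Hypothetical scalar system: dX = f(X) dt + sigma dW with f(x) = -V'(x)
# and a Duffing-type double-well potential (illustrative assumption).
sigma = 0.7
V = lambda x: 0.25 * x**4 - 0.5 * x**2
f = lambda x: -(x**3 - x)

# Stationary Fokker-Planck solution for a scalar gradient system:
#   p(x) proportional to exp(-2 V(x) / sigma^2)
x = np.linspace(-3.0, 3.0, 601)
p = np.exp(-2.0 * V(x) / sigma**2)
p /= np.trapz(p, x)

# Monte Carlo check via Euler-Maruyama simulation of the SDE.
rng = np.random.default_rng(0)
dt, n_steps = 1e-3, 200_000
X = np.zeros(n_steps)
for k in range(1, n_steps):
    X[k] = X[k-1] + f(X[k-1]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()

hist, edges = np.histogram(X[n_steps // 10:], bins=80, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print("max |p_FP - p_MC|:", np.round(np.max(np.abs(np.interp(centers, x, p) - hist)), 3))
```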
Laplace's method of asymptotic approximation is applied to approximate the probability integrals that arise when analyzing systems with modeling uncertainty. The asymptotic approximation reduces the problem of evaluating a multidimensional integral to solving a minimization problem, and the results become asymptotically exact as the uncertainty in the modeling goes to zero. The method is found to provide good approximations for the moments and outcrossing rates of systems with uncertain parameters under stochastic excitation, even when there is a large amount of uncertainty in the parameters. The method is also applied to classical reliability integrals, providing approximations in both the transformed (independent, normally distributed) variables and the original variables. In the transformed variables, the asymptotic approximation yields a very simple formula for approximating the value of SORM integrals. In many cases it may be computationally expensive to transform the variables, so an approximation is also developed in the original variables. Examples are presented illustrating the accuracy of the approximations, and results are compared with existing approximations.
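A one-dimensional sketch of the idea, with an assumed exponent g(x): the probability-type integral I(eps) = ∫ exp(-g(x)/eps) dx is approximated by minimizing g and applying the Gaussian curvature correction, exp(-g(x*)/eps) sqrt(2 pi eps / g''(x*)), which becomes exact as eps goes to zero.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Assumed exponent; in the reliability setting g(x) would come from the log of
# the parameter density and the limit-state / response functional.
g = lambda x: 0.5 * (x - 1.0)**2 + 0.1 * x**4

def laplace_approx(eps):
    """Approximate I(eps) = int exp(-g(x)/eps) dx by Laplace's method."""
    x_star = minimize_scalar(g).x            # reduce the integral to a minimization
    h = 1e-4
    g2 = (g(x_star + h) - 2.0 * g(x_star) + g(x_star - h)) / h**2   # g''(x*)
    return np.exp(-g(x_star) / eps) * np.sqrt(2.0 * np.pi * eps / g2)

# Brute-force reference on a fine grid; the approximation sharpens as eps -> 0.
xs = np.linspace(-6.0, 6.0, 400001)
for eps in (1.0, 0.1, 0.01):
    exact = np.trapz(np.exp(-g(xs) / eps), xs)
    print(f"eps={eps:5.2f}  Laplace={laplace_approx(eps):.4e}  reference={exact:.4e}")
```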
Abstract:
This paper studies the correlation properties of the speckles in the deep Fresnel diffraction region produced by scattering from rough self-affine fractal surfaces. The autocorrelation function of the speckle intensities is formulated by combining the Kirchhoff approximation of light-scattering theory with the principles of speckle statistics. We propose a method for extracting the three surface parameters, i.e. the roughness w, the lateral correlation length xi, and the roughness exponent alpha, from the autocorrelation functions of the speckles. This method is verified by simulating the speckle intensities and calculating the speckle autocorrelation function. We also find that for rough surfaces with alpha = 1 the structure of the speckles resembles that of the surface heights, which results from the peaks and valleys of the surface acting as micro-lenses that converge and diverge the light waves.
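A compact numerical sketch of the verification step, under simplifying assumptions (a Gaussian-correlated phase screen stands in for the self-affine surfaces of the paper; grid size, wavelength, roughness, and correlation length are illustrative): propagate a plane wave a short distance with the angular-spectrum method and compute the FFT-based autocorrelation of the resulting speckle intensity.

```python
import numpy as np

rng = np.random.default_rng(1)
N, dx = 512, 1e-6              # grid points and pixel size [m] (assumed)
lam, z = 0.633e-6, 2e-3        # wavelength and propagation distance [m] (assumed)
w, xi = 0.4e-6, 4e-6           # rms roughness and lateral correlation length (assumed)

# Gaussian-correlated random surface h(x, y) with rms w and correlation length xi.
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
noise = rng.standard_normal((N, N))
kernel = np.exp(-(X**2 + Y**2) / xi**2)
h = np.fft.ifft2(np.fft.fft2(noise) * np.fft.fft2(np.fft.ifftshift(kernel))).real
h *= w / h.std()

# Thin phase screen (index contrast folded into h) and angular-spectrum propagation.
k = 2 * np.pi / lam
U0 = np.exp(1j * k * h)
fx = np.fft.fftfreq(N, dx)
FX, FY = np.meshgrid(fx, fx)
arg = 1.0 - (lam * FX)**2 - (lam * FY)**2
H = np.where(arg > 0, np.exp(1j * k * z * np.sqrt(np.maximum(arg, 0.0))), 0.0)
I = np.abs(np.fft.ifft2(np.fft.fft2(U0) * H))**2

# Normalized intensity autocorrelation via the Wiener-Khinchin theorem.
dI = I - I.mean()
C = np.fft.fftshift(np.fft.ifft2(np.abs(np.fft.fft2(dI))**2).real)
C /= C.max()
print("autocorrelation half-width (pixels above half maximum, central row):",
      np.count_nonzero(C[N // 2] > 0.5))
```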
Abstract:
Based on the rigorous formulation of integral equations for the propagation of light waves at a medium interface, we carry out numerical solutions of the random light field scattered from self-affine fractal surface samples. The light intensities produced by the same surface samples are also calculated in the Kirchhoff approximation, and their comparison with the corresponding rigorous results directly shows the accuracy of the approximation. It is found that the Kirchhoff approximation is of good accuracy for random surfaces with small roughness w and large roughness exponent alpha. For random surfaces with larger w and smaller alpha, the approximation results in considerable errors, and detailed calculations show that the inaccuracy comes from the simplification that the transmitted light field is proportional to the incident field and from the neglect of the derivative of the light field at the interface.
Abstract:
The Edge Function method, formerly developed by Quinlan (25), is applied to the problem of thin elastic plates resting on spring-supported foundations and subjected to lateral loads. The method can be applied to plates of any convex polygonal shape; however, since most plates are rectangular, this specific class is investigated in this thesis. The method can also be applied easily to other kinds of foundation models (e.g. springs connected to each other by a membrane) as long as the resulting differential equation is linear. In Chapter VII, the solution of a specific problem is compared with a known solution from the literature. In Chapter VIII, further comparisons are given. The problems of a concentrated load on an edge and, later, on a corner of a plate, provided they are far from the other boundaries, are also treated in that chapter and generalized to other loading intensities and/or plate spring constants for a Poisson's ratio equal to 0.2.
Abstract:
A new approach based on the gated integration technique is proposed for the accurate measurement of the autocorrelation function of speckle intensities scattered from a random phase screen. The boxcar integrator used in this technique for acquiring the speckle intensity data integrates the photoelectric signal while its sampling gate is open, and it repeats the sampling a preset number of times, m. The averaged analog output of the m samplings from the boxcar enhances the signal-to-noise ratio by a factor of √m, because the repeated sampling and averaging stabilize the useful speckle signal, while the randomly varying photoelectric noise is suppressed by 1/√m. In the experiment, we use an analog-to-digital converter module to synchronize all the actions, such as the stepped movement of the phase screen, the repeated sampling, and the readout of the averaged output of the boxcar. The experimental results show that speckle signals are better recovered from contaminated signals and that the autocorrelation function with its secondary maximum is obtained, indicating that the accuracy of the measurement of the autocorrelation function is greatly improved by the gated integration technique.
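A toy numerical check of the √m argument (not the experimental apparatus itself), with an assumed stand-in signal and noise level: averaging m repeated samples of a fixed speckle signal corrupted by independent photoelectric noise reduces the residual noise standard deviation by 1/√m.

```python
import numpy as np

rng = np.random.default_rng(2)
n_points = 2000                                     # phase-screen positions (assumed)
signal = np.sin(np.linspace(0, 20, n_points))**2    # stand-in for the speckle signal
noise_std = 0.5                                     # photoelectric noise level (assumed)

for m in (1, 4, 16, 64):
    # m repeated gated samplings at each position, then the analog average
    samples = signal + noise_std * rng.standard_normal((m, n_points))
    averaged = samples.mean(axis=0)
    residual = averaged - signal
    print(f"m={m:3d}  residual std = {residual.std():.4f}  "
          f"expected ~ {noise_std / np.sqrt(m):.4f}")
```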
Abstract:
The Hamilton-Jacobi-Bellman (HJB) equation is central to stochastic optimal control (SOC) theory, yielding the optimal solution to general problems with known dynamics and a specified cost functional. Under the assumption of quadratic cost on the control input, it is well known that the HJB reduces to a particular partial differential equation (PDE). While powerful, this reduction is not commonly used, as the PDE is of second order, is nonlinear, and examples exist where the problem may not have a solution in a classical sense. Furthermore, each state of the system appears as another dimension of the PDE, giving rise to the curse of dimensionality. Since the number of degrees of freedom required to solve the optimal control problem grows exponentially with dimension, the problem becomes intractable for all but systems of modest dimension.
In the last decade researchers have found that under certain, fairly non-restrictive structural assumptions, the HJB may be transformed into a linear PDE, with an interesting analogue in the discretized domain of Markov Decision Processes (MDP). The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control.
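The discrete analogue referred to here is known in the literature as the linearly solvable MDP, where the exponentiated value function ("desirability") z = exp(-v) satisfies a linear fixed-point equation z(i) = exp(-q(i)) Σ_j P(i,j) z(j) under the passive dynamics P. The sketch below solves a small first-exit problem this way; the chain, costs, and passive dynamics are assumptions for illustration, not the systems studied in the thesis.

```python
import numpy as np

# Linearly solvable first-exit MDP on a 1-D chain (illustrative assumption):
# states 0..n-1, state n-1 is the goal (terminal), passive dynamics is an
# unbiased random walk, state cost q(i) = 0.05 everywhere, terminal cost 0.
n = 20
q = np.full(n, 0.05)
q[-1] = 0.0
P = np.zeros((n, n))
for i in range(1, n - 1):
    P[i, i - 1] = P[i, i + 1] = 0.5
P[0, 1] = 1.0                       # reflect at the left end

# Desirability z = exp(-v) satisfies the *linear* equation
#   z(i) = exp(-q(i)) * sum_j P(i, j) z(j)   for non-terminal i,
# with z fixed at the terminal state.
z = np.ones(n)
z[-1] = np.exp(-q[-1])
for _ in range(10_000):
    z_new = np.exp(-q) * (P @ z)
    z_new[-1] = np.exp(-q[-1])
    if np.max(np.abs(z_new - z)) < 1e-12:
        z = z_new
        break
    z = z_new

v = -np.log(z)                      # optimal cost-to-go
print("cost-to-go from the far end of the chain:", np.round(v[0], 3))
```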
This is done by crafting together previously disjoint lines of research in computation. The first of these is the use of Sum of Squares (SOS) techniques for synthesis of control policies. A candidate polynomial with variable coefficients is proposed as the solution to the stochastic optimal control problem. An SOS relaxation is then taken to the partial differential constraints, leading to a hierarchy of semidefinite relaxations with improving sub-optimality gap. The resulting approximate solutions are shown to be guaranteed over- and under-approximations for the optimal value function. It is shown that these results extend to arbitrary parabolic and elliptic PDEs, yielding a novel method for Uncertainty Quantification (UQ) of systems governed by partial differential constraints. Domain decomposition techniques are also made available, allowing for such problems to be solved via parallelization and low-order polynomials.
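As a self-contained illustration of the SOS machinery (a generic certificate, not the thesis's PDE-constrained formulation): deciding whether an assumed univariate polynomial is a sum of squares reduces to a small semidefinite program over a Gram matrix, posed here with cvxpy.

```python
import numpy as np
import cvxpy as cp

# Assumed target polynomial: p(x) = x^4 - 2x^3 + 3x^2 - 2x + 1, coefficients by degree.
coeffs = {0: 1.0, 1: -2.0, 2: 3.0, 3: -2.0, 4: 1.0}

# Gram-matrix formulation: p(x) = m(x)^T Q m(x) with m(x) = [1, x, x^2] and Q PSD.
Q = cp.Variable((3, 3), symmetric=True)
constraints = [Q >> 0]
# Match the coefficient of x^k for k = 0..4:  sum over i+j = k of Q[i, j].
for k in range(5):
    constraints.append(
        sum(Q[i, k - i] for i in range(3) if 0 <= k - i <= 2) == coeffs[k]
    )

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("SOS certificate found:", prob.status == cp.OPTIMAL)
print("Gram matrix:\n", np.round(Q.value, 3))
```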
The optimization-based SOS technique is then contrasted with the Separated Representation (SR) approach from the applied mathematics community. The technique allows for systems of equations to be solved through a low-rank decomposition that results in algorithms that scale linearly with dimensionality. Its application in stochastic optimal control allows for previously uncomputable problems to be solved quickly, scaling to such complex systems as the Quadcopter and VTOL aircraft. This technique may be combined with the SOS approach, yielding not only a numerical technique, but also an analytical one that allows for entirely new classes of systems to be studied and for stability properties to be guaranteed.
The analysis of the linear HJB is completed by the study of its implications in application. It is shown that the HJB and a popular technique in robotics, the use of navigation functions, sit on opposite ends of a spectrum of optimization problems, upon which tradeoffs may be made in problem complexity. Analytical solutions to the HJB in these settings are available in simplified domains, yielding guidance towards optimality for approximation schemes. Finally, the use of HJB equations in temporal multi-task planning problems is investigated. It is demonstrated that such problems are reducible to a sequence of SOC problems linked via boundary conditions. The linearity of the PDE allows us to pre-compute control policy primitives and then compose them, at essentially zero cost, to satisfy a complex temporal logic specification.
Abstract:
Understanding how transcriptional regulatory sequence maps to regulatory function remains a difficult problem in regulatory biology. Given a particular DNA sequence for a bacterial promoter region, we would like to be able to say which transcription factors bind there, how strongly they bind, and whether they interact with each other and/or RNA polymerase, with the ultimate objective of integrating knowledge of these parameters into a prediction of gene expression levels. Statistical thermodynamics provides a useful theoretical framework for doing so, enabling us to predict how gene expression levels depend on transcription factor binding energies and concentrations. We used thermodynamic models, coupled with models of the sequence-dependent binding energies of transcription factors and RNAP, to construct a genotype-to-phenotype map for the level of repression exhibited by the lac promoter, and tested it experimentally using a set of promoter variants from E. coli strains isolated from different natural environments. For this work, we sought to "reverse engineer" naturally occurring promoter sequences to understand how variations in promoter sequence affect gene expression. The natural inverse of this approach is to "forward engineer" promoter sequences to obtain targeted levels of gene expression. We used a high-precision model of RNAP-DNA sequence-dependent binding energy, coupled with a thermodynamic model relating binding energy to gene expression, to predictively design and verify a suite of synthetic E. coli promoters whose expression varied over nearly three orders of magnitude.
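A minimal sketch of the kind of thermodynamic genotype-to-phenotype map used for simple repression (the standard weak-promoter fold-change expression from the thermodynamic-model literature; the parameter values below are illustrative assumptions, and the published lac models include additional states and operators): fold-change in expression as a function of repressor copy number and repressor-DNA binding energy.

```python
import numpy as np

def fold_change(R, delta_eps_rd, N_NS=4.6e6):
    """Simple-repression thermodynamic model.

    R            : repressor copy number per cell
    delta_eps_rd : repressor-operator binding energy in units of k_B T
                   (more negative = stronger binding)
    N_NS         : number of non-specific genomic binding sites (~ E. coli genome size)
    """
    return 1.0 / (1.0 + (R / N_NS) * np.exp(-delta_eps_rd))

# Illustrative scan: weaker and stronger operators over a range of copy numbers (assumed).
for eps in (-9.7, -13.9, -15.3):
    fc = [fold_change(R, eps) for R in (11, 60, 260, 1220)]
    print(f"eps = {eps:6.1f} k_BT  fold-change:", np.round(fc, 4))
```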
However, although thermodynamic models enable predictions of mean levels of gene expression, it has become evident that cell-to-cell variability or "noise" in gene expression can also play a biologically important role. In order to address this aspect of gene regulation, we developed models based on the chemical master equation framework and used them to explore the noise properties of a number of common E. coli regulatory motifs; these properties included the dependence of the noise on parameters such as transcription factor binding strength and copy number. We then performed experiments in which these parameters were systematically varied and measured the level of variability using mRNA FISH. The results showed a clear dependence of the noise on these parameters, in accord with model predictions.
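One standard chemical-master-equation result that underlies such noise calculations, sketched below with assumed rates (this is the generic bursty-transcription model, not necessarily the specific motifs studied here): for mRNA produced in geometric bursts of mean size b and degraded in first order, the stationary Fano factor is 1 + b, which a small Gillespie simulation reproduces.

```python
import numpy as np

rng = np.random.default_rng(3)
k, gamma, b = 1.0, 0.5, 4.0   # burst arrival rate, mRNA decay rate, mean burst size (assumed)

def gillespie_bursty(t_end=2e4):
    """Stochastic simulation of bursty mRNA production and first-order decay."""
    t, n = 0.0, 0
    times, counts = [0.0], [0]
    while t < t_end:
        rates = np.array([k, gamma * n])            # burst arrival, single-molecule decay
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        if rng.random() < rates[0] / total:
            n += rng.geometric(1.0 / (1.0 + b)) - 1  # geometric burst with mean b
        else:
            n -= 1
        times.append(t); counts.append(n)
    return np.array(times), np.array(counts)

times, counts = gillespie_bursty()
# Time-averaged moments: weight each copy number by its holding time.
dt = np.diff(times)
x = counts[:-1]
mean = np.average(x, weights=dt)
var = np.average((x - mean)**2, weights=dt)
print(f"Fano factor: simulated {var / mean:.2f} vs. predicted 1 + b = {1 + b:.2f}")
```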
Finally, one shortcoming of the preceding modeling frameworks is that their applicability is largely limited to systems that are already well-characterized, such as the lac promoter. Motivated by this fact, we used a high throughput promoter mutagenesis assay called Sort-Seq to explore the completely uncharacterized transcriptional regulatory DNA of the E. coli mechanosensitive channel of large conductance (MscL). We identified several candidate transcription factor binding sites, and work is continuing to identify the associated proteins.
Abstract:
Close to equilibrium, a normal Bose or Fermi fluid can be described by an exact kinetic equation whose kernel is nonlocal in space and time. The general expression derived for the kernel is evaluated to second order in the interparticle potential. The result is a wavevector- and frequency-dependent generalization of the linear Uehling-Uhlenbeck kernel with the Born approximation cross section.
The theory is formulated in terms of second-quantized phase space operators whose equilibrium averages are the n-particle Wigner distribution functions. Convenient expressions for the commutators and anticommutators of the phase space operators are obtained. The two-particle equilibrium distribution function is analyzed in terms of momentum-dependent quantum generalizations of the classical pair distribution function h(k) and direct correlation function c(k). The kinetic equation is presented as the equation of motion of a two -particle correlation function, the phase space density-density anticommutator, and is derived by a formal closure of the quantum BBGKY hierarchy. An alternative derivation using a projection operator is also given. It is shown that the method used for approximating the kernel by a second order expansion preserves all the sum rules to the same order, and that the second-order kernel satisfies the appropriate positivity and symmetry conditions.
Abstract:
This thesis is a theoretical work on the space-time dynamic behavior of a nuclear reactor without feedback. Diffusion theory with G-energy groups is used.
In the first part, the accuracy of the point kinetics (lumped-parameter description) model is examined. The fundamental approximation of this model is the splitting of the neutron density into a product of a known function of space and an unknown function of time; the properties of the system can then be averaged in space through the use of appropriate weighting functions, and as a result a set of ordinary differential equations is obtained for the description of the time behavior. It is clear that changes in the shape of the neutron-density distribution due to space-dependent perturbations are neglected. This results in an error in the eigenvalues, and it is for this error that bounds are derived. This is done by using the method of weighted residuals to reduce the original eigenvalue problem to that of a real asymmetric matrix. Gershgorin-type theorems are then used to find discs in the complex plane in which the eigenvalues are contained. The radii of the discs depend on the perturbation in a simple manner.
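A small numerical illustration of the Gershgorin step, with an arbitrary asymmetric matrix standing in for the weighted-residual reduction of the perturbed diffusion operator (the matrix values are assumptions): every eigenvalue lies in at least one disc centered at a diagonal entry with radius equal to the corresponding off-diagonal row sum.

```python
import numpy as np

# Arbitrary real asymmetric matrix standing in for the reduced operator (assumed values).
A = np.array([[-4.0,  0.3,  0.1],
              [ 0.2, -6.0,  0.4],
              [ 0.1,  0.5, -9.0]])

centers = np.diag(A)
radii = np.sum(np.abs(A), axis=1) - np.abs(centers)   # off-diagonal row sums

for lam in np.linalg.eigvals(A):
    in_disc = np.abs(lam - centers) <= radii          # Gershgorin: at least one True
    print("eigenvalue", np.round(lam, 3),
          "lies in disc(s):", np.nonzero(in_disc)[0].tolist())
```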
In the second part, the effect of delayed neutrons on the eigenvalues of the group-diffusion operator is examined. The delayed neutrons cause a shifting of the prompt-neutron eigenvalues and the appearance of the delayed eigenvalues. Using a simple perturbation method, this shifting is calculated and the delayed eigenvalues are predicted with good accuracy.
Abstract:
This thesis aims to defend, from the perspective of civil-constitutional law and the promotional function of law, the interrelation among the rights of possession, property, and the environment, and the possibility of a harmonious balancing when these rights come into conflict. To this end, analytical, empirical, and normative dimensions are employed. The analytical dimension investigates the legal concepts involved in the research, especially property and its social-environmental function. The relationship among these concepts emerges through the analysis of the social-environmental function of property and of possession, with emphasis on aspects of environmental legislation. The fundamental right to the environment is studied as a right and duty of all, as provided in Article 225 of the 1988 Constitution, and, in this respect, as directly effective in private relations. The empirical and normative dimensions address essentially practical aspects, focusing on case law, especially that of the Supremo Tribunal Federal (STF) and the Superior Tribunal de Justiça (STJ). The harmonious balancing of property, possession, and the environment seeks equilibrium in the realization of these rights, including through the application of the principles of economic law. Through balancing, it is possible to reach, more efficiently than under the traditional model of subsumption, an adequate and well-grounded answer for hard cases, especially in realizing and restoring the equilibrium among possession, property, and the environment when these principles collide in a concrete case. Above all, the aim is to give effect to fundamental rights in accordance with the demands of post-positivism, by bringing Law and Ethics closer together, in order to achieve justice in the concrete case.
Abstract:
There is a growing interest in taking advantage of possible patterns and structures in data so as to extract the desired information and overcome the curse of dimensionality. In a wide range of applications, including computer vision, machine learning, medical imaging, and social networks, the signal that gives rise to the observations can be modeled to be approximately sparse and exploiting this fact can be very beneficial. This has led to an immense interest in the problem of efficiently reconstructing a sparse signal from limited linear observations. More recently, low-rank approximation techniques have become prominent tools to approach problems arising in machine learning, system identification and quantum tomography.
In sparse and low-rank estimation problems, the challenge is the inherent intractability of the objective function, and one needs efficient methods to capture the low-dimensionality of these models. Convex optimization is often a promising tool to attack such problems. An intractable problem with a combinatorial objective can often be "relaxed" to obtain a tractable but almost as powerful convex optimization problem. This dissertation studies convex optimization techniques that can take advantage of low-dimensional representations of the underlying high-dimensional data. We provide provable guarantees that ensure that the proposed algorithms will succeed under reasonable conditions, and answer questions of the following flavor:
- For a given number of measurements, can we reliably estimate the true signal?
- If so, how good is the reconstruction as a function of the model parameters?
More specifically:
- i) Focusing on linear inverse problems, we generalize the classical error bounds known for the least-squares technique to the lasso formulation, which incorporates the signal model.
- ii) We show that intuitive convex approaches do not perform as well as expected when it comes to signals that have multiple low-dimensional structures simultaneously.
- iii) Finally, we propose convex relaxations for the graph clustering problem and give sharp performance guarantees for a family of graphs arising from the so-called stochastic block model.
We pay particular attention to the following aspects. For i) and ii), we aim to provide a general geometric framework in which the results on sparse and low-rank estimation can be obtained as special cases. For i) and iii), we investigate the precise performance characterization, which yields the right constants in our bounds and the true dependence between the problem parameters.
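A small numerical sketch of the comparison in i), with assumed dimensions, sparsity, noise level, and regularization weight, and using scikit-learn's Lasso as the solver: for a sparse signal observed through a random Gaussian matrix with few measurements, the lasso exploits the signal model and recovers the signal far more accurately than a plain least-squares fit.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
n, p, s, sigma = 80, 200, 5, 0.1      # measurements, ambient dim, sparsity, noise (assumed)

x_true = np.zeros(p)
x_true[rng.choice(p, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((n, p)) / np.sqrt(n)
y = A @ x_true + sigma * rng.standard_normal(n)

# Minimum-norm least squares ignores sparsity; the lasso incorporates the signal model.
x_ls = np.linalg.lstsq(A, y, rcond=None)[0]
x_lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=50_000).fit(A, y).coef_

print("least-squares error:", np.round(np.linalg.norm(x_ls - x_true), 3))
print("lasso error        :", np.round(np.linalg.norm(x_lasso - x_true), 3))
```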