983 results for vectorial analytic solution
Abstract:
Generalized linear mixed models are flexible tools for modeling non-normal data and are useful for accommodating overdispersion in Poisson regression models with random effects. Their main difficulty lies in parameter estimation, because there is no analytic solution for the maximization of the marginal likelihood. Many methods have been proposed for this purpose, and many of them are implemented in software packages. The purpose of this study is to compare, via simulation studies, the performance of three different statistical principles: marginal likelihood, extended likelihood, and Bayesian analysis. Real data on contact wrestling are used for illustration.
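To make the estimation difficulty concrete, here is a minimal sketch, assuming a Poisson model with one Gaussian random intercept per cluster, of maximizing the marginal likelihood numerically; the intractable integral over the random effect is approximated by Gauss-Hermite quadrature. All names and values are illustrative, not taken from the study.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_marginal_loglik(params, y, x):
    """Poisson GLMM with one Gaussian random intercept per cluster:
    y_ij ~ Poisson(exp(b0 + b1*x_ij + u_i)), u_i ~ N(0, sigma^2).
    The integral over u_i has no closed form; approximate it with
    20-point Gauss-Hermite quadrature."""
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)          # keeps sigma positive
    nodes, weights = hermgauss(20)
    u = np.sqrt(2.0) * sigma * nodes   # change of variables for N(0, sigma^2)
    ll = 0.0
    for yi, xi in zip(y, x):           # one term per cluster
        eta = b0 + b1 * xi[None, :] + u[:, None]               # (nodes, obs)
        logf = (yi * eta - np.exp(eta) - gammaln(yi + 1)).sum(axis=1)
        ll += np.log((weights / np.sqrt(np.pi)) @ np.exp(logf))
    return -ll

# Simulated data: 50 clusters of 5 observations each.
rng = np.random.default_rng(0)
x = [rng.normal(size=5) for _ in range(50)]
u = rng.normal(scale=0.5, size=50)
y = [rng.poisson(np.exp(0.2 + 0.8 * xi + ui)) for xi, ui in zip(x, u)]

fit = minimize(neg_marginal_loglik, x0=[0.0, 0.0, 0.0], args=(y, x))
print(fit.x)  # estimates of b0, b1, log(sigma)
```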
Abstract:
This paper shows the insertion of the corona effect into a transmission line model based on lumped elements. The development is carried out considering a frequency-dependent line representation by a cascade of pi sections and state equations. Hence, the detailed profile of currents and voltages along the line, described by a non-homogeneous system of differential equations, can be obtained directly in the time domain by applying numerical or analytic integration methods. The corona discharge model is also based on lumped elements and is implemented from the well-known Skilling-Umoto model.
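As a rough illustration of the lumped-element, state-equation approach (not the paper's implementation), the sketch below integrates a cascade of pi sections directly in the time domain; the line is taken as frequency-independent, the corona branch is omitted, and all parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Per-unit-length line parameters (illustrative values).
R, L, C, G = 0.05, 1.0e-6, 11.1e-12, 0.0   # ohm/m, H/m, F/m, S/m
length, n = 10_000.0, 50                   # line length (m), number of pi sections
dx = length / n
Rs, Ls, Cs, Gs = R * dx, L * dx, C * dx, G * dx

def rhs(t, s):
    """State equations of the cascade: s = [i_1..i_n, v_1..v_n], where i_k is
    the series-branch current of section k and v_k the shunt-capacitor
    voltage at its receiving node."""
    i, v = s[:n], s[n:]
    vs = 1.0 if t > 0.0 else 0.0                       # unit-step voltage source
    v_in = np.concatenate(([vs], v[:-1]))              # sending voltage of each section
    di = (v_in - v - Rs * i) / Ls
    i_out = np.concatenate((i[1:], [v[-1] / 300.0]))   # 300-ohm resistive load
    dv = (i - i_out - Gs * v) / Cs
    return np.concatenate((di, dv))

sol = solve_ivp(rhs, (0.0, 2.0e-4), np.zeros(2 * n), max_step=1.0e-8)
print(sol.y[2 * n - 1, -1])  # receiving-end voltage at the final time
```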
Abstract:
A fully analytic solution of the infinite fault model for the magnetotelluric TE mode is presented, taking into account the presence of the air, building on the work of Sampaio published in 1985, which presents a partially analytic and partially numerical (hybrid) solution. In that solution eight boundary conditions were applied; in four of them mathematical inconsistencies were found, which were resolved by suitable changes to the solutions proposed by Sampaio. These changes made it possible to arrive at the fully analytic solution presented here. The solution obtained was compared with Weaver's solution, with Sampaio's, and with the result of the finite element numerical method for resistivity contrasts equal to 2, 10, and 50. For the normalized electric field, the comparison of the analytic solution with the finite element solution shows that the analytic solution yielded closer results than those provided by Weaver and by Sampaio. This is a very difficult problem, still open to a definitive analytic solution, and the solution presented here is a major step in that direction.
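For context, the governing equation of the TE (E-polarization) mode is the standard two-dimensional induction equation for the along-strike electric field (with an e^{i omega t} time dependence); the paper's contribution is an analytic solution of it for the infinite-fault geometry including the air layer:

```latex
\nabla^{2} E_{x}(y,z) = i\,\omega\,\mu_{0}\,\sigma(y,z)\,E_{x}(y,z),
```

where the conductivity sigma takes a different constant value on each side of the fault and vanishes in the air.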
Abstract:
Different mathematical methods have been applied to obtain the analytic result for the massless triangle Feynman diagram, yielding a sum of four linearly independent (LI) hypergeometric functions of two variables, F4. This result is not physically acceptable when it is embedded in higher loops, because all four hypergeometric functions in the triangle result have the same region of convergence, and further integration means going outside those regions of convergence. We could go outside those regions by using the well-known analytic continuation formulas obeyed by F4, but there are at least two ways we can do this. Which is the correct one? Whichever continuation one uses, it reduces the number of F4 functions from four to three. This reduction in the number of hypergeometric functions can be understood by taking into account the fundamental physical constraint imposed by the conservation of the momenta flowing along the three legs of the diagram. With this, the number of overall LI functions that enter the most general solution must reduce accordingly. It remains to determine which set of three LI solutions needs to be taken. To determine the exact structure and content of the analytic solution for the three-point function that can be embedded in higher loops, we use the analogy that exists between Feynman diagrams and electric circuit networks, in which the electric current flowing in the network plays the role of the momentum flowing in the lines of a Feynman diagram. This analogy is employed to define exactly which three out of the four hypergeometric functions are relevant to the analytic solution for the Feynman diagram. The analogy is built on the equivalence between electric resistance circuit networks of types Y and Delta in which a conserved current flows. The equivalence is established via the theorem of minimum energy dissipation within circuits having these structures.
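The circuit side of the analogy rests on the textbook Y-Delta equivalence: a delta of resistances R_ab, R_bc, R_ca carrying a conserved current is equivalent to a star with

```latex
R_{a}=\frac{R_{ab}R_{ca}}{R_{ab}+R_{bc}+R_{ca}},\qquad
R_{b}=\frac{R_{ab}R_{bc}}{R_{ab}+R_{bc}+R_{ca}},\qquad
R_{c}=\frac{R_{bc}R_{ca}}{R_{ab}+R_{bc}+R_{ca}},
```

and it is this equivalence, established in the paper via minimum energy dissipation, that singles out the three relevant hypergeometric functions.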
Abstract:
The numerical simulation of flows of highly elastic fluids has been the subject of intense research over the past decades, with important industrial applications. Therefore, many efforts have been made to improve the convergence capabilities of the numerical methods employed to simulate viscoelastic fluid flows. An important contribution to the solution of the High-Weissenberg Number Problem was presented by Fattal and Kupferman [J. Non-Newton. Fluid. Mech. 123 (2004) 281-285], who developed the matrix-logarithm of the conformation tensor technique, henceforth called the log-conformation tensor. Its advantage is a better approximation of the large growth of the stress tensor that occurs in some regions of the flow, and it is doubly beneficial in that it ensures physically correct stress fields, allowing converged computations in high Weissenberg number flows. In this work we investigate the application of the log-conformation tensor to three-dimensional unsteady free surface flows. The log-conformation tensor formulation was applied to solve the Upper-Convected Maxwell (UCM) constitutive equation, while the momentum equation was solved using a finite difference Marker-and-Cell type method. The resulting code is validated by comparing the log-conformation results with the analytic solution for fully developed pipe flows. To illustrate the stability of the log-conformation tensor approach in solving three-dimensional free surface flows, results from the simulation of the extrudate swell and jet buckling phenomena of UCM fluids at high Weissenberg numbers are presented. (C) 2012 Elsevier B.V. All rights reserved.
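A minimal sketch of the kernel of the log-conformation idea, assuming a symmetric positive-definite conformation tensor A: one evolves psi = log(A) and recovers A = exp(psi) by eigendecomposition, so the reconstructed tensor is positive-definite by construction. The full Fattal-Kupferman formulation also decomposes the velocity gradient, which is omitted here; names are illustrative.

```python
import numpy as np

def tensor_log(A):
    """Matrix logarithm of a symmetric positive-definite tensor:
    A = R diag(lam) R^T  ->  log(A) = R diag(log lam) R^T."""
    lam, R = np.linalg.eigh(A)
    return R @ np.diag(np.log(lam)) @ R.T

def tensor_exp(psi):
    """Inverse transform: exponentiate the eigenvalues of psi = log(A)."""
    lam, R = np.linalg.eigh(psi)
    return R @ np.diag(np.exp(lam)) @ R.T

# The exponential map guarantees a positive-definite conformation tensor,
# which is what keeps the stress field physically admissible at high
# Weissenberg numbers.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 2.0]])
psi = tensor_log(A)
print(np.allclose(tensor_exp(psi), A))                   # True
print(np.all(np.linalg.eigvalsh(tensor_exp(psi)) > 0))   # positive-definite
```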
Abstract:
The interplay of hydrodynamic and electrostatic forces is of great importance for the understanding of colloidal dispersions. Theoretical descriptions are often based on the so-called standard electrokinetic model. This mean-field approach combines the Stokes equation for the hydrodynamic flow field, the Poisson equation for electrostatics, and a continuity equation describing the evolution of the ion concentration fields. In the first part of this thesis a new lattice method is presented in order to efficiently solve the set of non-linear equations for a charge-stabilized colloidal dispersion in the presence of an external electric field. Within this framework, the research is mainly focused on the calculation of the electrophoretic mobility. Since this transport coefficient is independent of the electric field only for small driving, the algorithm is based upon a linearization of the governing equations. The zeroth order is the well-known Poisson-Boltzmann theory and the first order is a coupled set of linear equations. Furthermore, this set of equations is divided into several subproblems. A specialized solver for each subproblem is developed, and various tests and applications are discussed for every particular method. Finally, all solvers are combined in an iterative procedure and applied to several interesting questions, for example, the effect of the screening mechanism on the electrophoretic mobility or the charge dependence of the field-induced dipole moment and ion clouds surrounding a weakly charged sphere. In the second part a quantitative data analysis method is developed for a new experimental approach, known as "Total Internal Reflection Fluorescence Cross-Correlation Spectroscopy" (TIR-FCCS). The TIR-FCCS setup is an optical method using fluorescent colloidal particles to analyze the flow field close to a solid-fluid interface. The interpretation of the experimental results requires a theoretical model, which is usually the solution of a convection-diffusion equation. Since an analytic solution is not available due to the form of the flow field and the boundary conditions, an alternative numerical approach is presented. It is based on stochastic methods, i.e., a combination of a Brownian Dynamics algorithm and Monte Carlo techniques. Finally, experimental measurements for a hydrophilic surface are analyzed using this new numerical approach.
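A minimal sketch of the stochastic ingredient of the second part, assuming a simple shear flow u(z) = gamma_dot * z above a wall and an Euler-Maruyama update for the tracer positions; the flow profile, the reflecting wall, and all parameter values are illustrative simplifications, not the TIR-FCCS model itself.

```python
import numpy as np

rng = np.random.default_rng(1)

D = 1.0e-12        # tracer diffusion coefficient (m^2/s), illustrative
shear = 50.0       # wall shear rate gamma_dot (1/s), illustrative
dt = 1.0e-5        # time step (s)
n_steps, n_part = 10_000, 1_000

# Positions (x along the flow, z normal to the wall at z = 0).
pos = np.zeros((n_part, 2))
pos[:, 1] = rng.uniform(0.0, 2.0e-7, size=n_part)   # start near the interface

for _ in range(n_steps):
    drift = np.column_stack((shear * pos[:, 1], np.zeros(n_part)))  # u = (gamma_dot*z, 0)
    noise = rng.normal(size=(n_part, 2))
    pos += drift * dt + np.sqrt(2.0 * D * dt) * noise   # Euler-Maruyama step
    pos[:, 1] = np.abs(pos[:, 1])                       # reflecting wall at z = 0

print(pos[:, 0].mean())  # mean downstream displacement after n_steps
```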
Abstract:
We consider the inertially driven, time-dependent biaxial extensional motion of inviscid and viscous thinning liquid sheets. We present an analytic solution describing the base flow and examine its linear stability to varicose (symmetric) perturbations within the framework of a long-wave model where transient growth and long-time asymptotic stability are considered. The stability of the system is characterized in terms of the perturbation wavenumber, Weber number, and Reynolds number. We find that the isotropic nature of the base flow yields stability results that are identical for axisymmetric and general two-dimensional perturbations. Transient growth of short-wave perturbations at early to moderate times can have significant and lasting influence on the long-time sheet thickness. For finite Reynolds numbers, a radially expanding sheet is weakly unstable with bounded growth of all perturbations, whereas in the inviscid and Stokes flow limits sheets are unstable to perturbations in the short-wave limit.
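The kinematic core of such a base flow is standard and worth stating: for a spatially uniform biaxial extension rate E(t), with in-plane velocity u_r = E(t) r, mass conservation gives

```latex
\frac{dh}{dt} = -2\,E(t)\,h \quad\Longrightarrow\quad
h(t) = h_{0}\,\exp\!\Big(-2\int_{0}^{t} E(s)\,ds\Big);
```

the paper's analytic solution additionally determines E(t) from the momentum balance in the inviscid and viscous cases.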
Abstract:
The aim of this paper is to discuss the influence of the selection of the interpolation kernel on the accuracy of the modeling of the internal viscous dissipation in free surface flows. Simulations corresponding to a standing wave, for which an analytic solution is available, are presented. Wendland and renormalized Gaussian kernels are considered. The differences in the flow patterns and internal dissipation mechanisms are documented for a range of Reynolds numbers. It is shown that the simulations with Wendland kernels replicate the dissipation mechanisms more accurately than those with a renormalized Gaussian kernel. Although some explanations are hinted at, we have failed to clarify what the core structural reasons for such differences are.
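As a point of reference (not the paper's code), the sketch below defines a Wendland C2 kernel and a truncated, renormalized Gaussian kernel in two dimensions and checks that both integrate to one; the exact renormalization used in the paper may differ.

```python
import numpy as np

def wendland_c2_2d(r, h):
    """Wendland C2 kernel in 2D with support radius 2h:
    W(q) = 7/(4*pi*h^2) * (1 - q/2)^4 * (2q + 1),  q = r/h in [0, 2]."""
    q = np.asarray(r) / h
    w = 7.0 / (4.0 * np.pi * h**2) * (1.0 - 0.5 * q) ** 4 * (2.0 * q + 1.0)
    return np.where(q < 2.0, w, 0.0)

def gaussian_2d(r, h):
    """Gaussian kernel in 2D truncated at r = 3h and renormalized so that it
    still integrates to one over its finite support."""
    q = np.asarray(r) / h
    w = np.exp(-q**2) / (np.pi * h**2)
    return np.where(q < 3.0, w / (1.0 - np.exp(-9.0)), 0.0)

# Radial quadrature check: both kernels integrate to ~1 over the plane.
h = 0.1
r = np.linspace(0.0, 3.0 * h, 200_000)
dr = r[1] - r[0]
for W in (wendland_c2_2d, gaussian_2d):
    print(np.sum(W(r, h) * 2.0 * np.pi * r) * dr)   # ~1.0
```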
Abstract:
A numerical method to analyse the stability of transverse galloping based on experimental measurements, proposed as an alternative to polynomial fitting of the transverse force coefficient Cz, is presented in this paper. The Glauert–Den Hartog criterion is used to determine the region of angles of attack (pitch angles) prone to galloping. An analytic solution (based on a polynomial curve of Cz) is used to validate the method and to evaluate the discretization errors. Several bodies (of biconvex, D-shape and rhomboidal cross sections) have been tested in a wind tunnel, and the stability of the galloping region has been analysed with the new method. An algorithm to determine the pitch angle of the body that allows the maximum value of the kinetic energy of the flow to be extracted is presented.
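A minimal sketch of evaluating the criterion directly on tabulated wind-tunnel data rather than on a polynomial fit, assuming the classical Glauert-Den Hartog form H(alpha) = dC_L/dalpha + C_D with instability where H < 0; the data and names below are made up for illustration, and this is not the paper's algorithm.

```python
import numpy as np

def galloping_prone(alpha_deg, cl, cd):
    """Glauert-Den Hartog criterion, H = dC_L/dalpha + C_D, evaluated with
    centered finite differences on measured data (no polynomial fitting).
    The section is prone to transverse galloping where H < 0."""
    alpha = np.radians(np.asarray(alpha_deg))
    H = np.gradient(np.asarray(cl), alpha) + np.asarray(cd)
    return H, H < 0.0

# Illustrative (made-up) measurements around the pitch angles of interest.
alpha = np.arange(0.0, 21.0, 1.0)        # pitch angle (degrees)
cl = -0.08 * alpha + 0.002 * alpha**2    # transverse force coefficient
cd = np.full_like(alpha, 1.2)            # drag coefficient
H, prone = galloping_prone(alpha, cl, cd)
print(alpha[prone])   # pitch angles flagged as galloping-prone
```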
Abstract:
Fundamental principles of precaution are legal maxims that ask for preventive actions, perhaps as contingent interim measures while relevant information about causality and harm remains unavailable, to minimize the societal impact of potentially severe or irreversible outcomes. Such principles do not explain how to make choices or how to identify what is protective when incomplete and inconsistent scientific evidence of causation characterizes the potential hazards. Rather, they entrust lower jurisdictions, such as agencies or authorities, to make current decisions while recognizing that future information can contradict the scientific basis that supported the initial decision. After reviewing and synthesizing national and international legal aspects of precautionary principles, this paper addresses the key question: How can society manage potentially severe, irreversible or serious environmental outcomes when variability, uncertainty, and limited causal knowledge characterize its decision-making? A decision-analytic solution is outlined that focuses on risky decisions and accounts for prior states of information and scientific beliefs that can be updated as subsequent information becomes available. As a practical and established approach to causal reasoning and decision-making under risk, inherent to precautionary decision-making, these (Bayesian) methods help decision-makers and stakeholders because they formally account for probabilistic outcomes and new information, and are consistent and replicable. Rational choice of an action from among various alternatives, defined as a choice that makes preferred consequences more likely, requires accounting for costs, benefits and the change in risks associated with each candidate action. Decisions under any form of the precautionary principle reviewed must account for the contingent nature of scientific information, creating a link to the decision-analytic principle of expected value of information (VOI), to show the relevance of new information relative to the initial (and smaller) set of data on which the decision was based. We exemplify this seemingly simple situation using risk management of BSE. As an integral aspect of causal analysis under risk, the methods developed in this paper permit the addition of non-linear, hormetic dose-response models to the current set of regulatory defaults, such as the linear, non-threshold models. This increase in the number of defaults is an important improvement because most of the variants of the precautionary principle require cost-benefit balancing. Specifically, increasing the set of causal defaults accounts for beneficial effects at very low doses. We also show and conclude that quantitative risk assessment dominates qualitative risk assessment, supporting the extension of the set of default causal models.
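The value-of-information principle invoked here has a standard decision-analytic form: for a utility U(a, theta) over actions a and uncertain states theta, the expected value of perfect information is

```latex
\mathrm{EVPI} \;=\; \mathbb{E}_{\theta}\!\left[\max_{a} U(a,\theta)\right]
\;-\; \max_{a}\,\mathbb{E}_{\theta}\!\left[U(a,\theta)\right] \;\ge\; 0,
```

that is, the most a decision-maker should pay to resolve the uncertainty before acting; new data are worth gathering only while their expected value exceeds their cost.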
Abstract:
We consider the problem of on-line gradient descent learning for general two-layer neural networks. An analytic solution is presented and used to investigate the role of the learning rate in controlling the evolution and convergence of the learning process.
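A minimal sketch of the on-line setting, assuming a tanh student-teacher pair (the paper's analytic treatment covers general two-layer networks): one fresh example is consumed per gradient step, and the learning rate eta is the quantity whose role the analysis isolates. All names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 100, 3    # input dimension, hidden units
eta = 0.1        # learning rate, the quantity under study

# Teacher network defining the task; the student must learn it on-line.
W_teacher = rng.normal(size=(K, N)) / np.sqrt(N)
v_teacher = np.ones(K)
W = rng.normal(size=(K, N)) / np.sqrt(N)
v = rng.normal(size=K)

def net(W, v, x):
    """Two-layer network: y = sum_k v_k * tanh(w_k . x)."""
    return v @ np.tanh(W @ x)

for step in range(50_000):
    x = rng.normal(size=N)                             # one fresh example per step
    err = net(W, v, x) - net(W_teacher, v_teacher, x)  # instantaneous error
    h = np.tanh(W @ x)
    # Gradient of the instantaneous loss 0.5*err^2 w.r.t. v and W.
    grad_v = err * h
    grad_W = err * np.outer(v * (1.0 - h**2), x)
    v -= (eta / N) * grad_v
    W -= (eta / N) * grad_W
    if step % 10_000 == 0:
        print(step, 0.5 * err**2)
```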