985 results for "Theoretical justification"
Abstract:
Density functional calculations, using the B3LYP/6-31G(d) method, have been used to investigate the conformations and vibrational (Raman) spectra of a series of long-chain, saturated fatty acid methyl esters (FAMEs) with the formula CnH2nO2 (n = 5-21) and two series of unsaturated FAMEs. The calculations showed that the lowest energy conformer within the saturated FAMEs is the simple (all-trans) structure and, in general, it was possible to reproduce experimental data using calculations on only the all-trans conformer. The only exception was C6H12O2, where a second low-lying conformer had to be included in order to correctly simulate the experimental Raman spectrum. The objective of the work was to provide theoretical justification for the methods that are commonly used to determine the properties of fats and oils, such as chain length and degree of unsaturation, from experimental Raman data. Here it is shown that the calculations reproduce the trends and calibration curves that are found experimentally and also allow the reasons for the failure of what would appear to be rational measurements to be understood. This work shows that although the assumption that each FAME can simply be treated as a collection of functional groups can be justified in some cases, many of the vibrational modes are complex motions of large sections of the molecules and thus would not be expected to show simple linear trends with changes in structure, such as increasing chain length and/or unsaturation. Simple linear trends obtained from experimental data may thus arise from cancellation of opposing effects, rather than reflecting an underlying simplicity.
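To illustrate the kind of calibration curve such methods rely on, the sketch below fits a linear relation between the number of C=C bonds per molecule and a Raman band-intensity ratio. The choice of bands (the ~1655 cm-1 C=C stretch against the ~1445 cm-1 CH2 scissor) and the synthetic ratio values generated in the code are assumptions introduced purely for illustration; they are not data or results from the paper.

```python
# Minimal sketch (not from the paper): fitting a linear calibration between a
# Raman intensity ratio and the number of C=C bonds per FAME molecule.
# The band choice (I_1655 / I_1445) and the synthetic ratios below are
# illustrative assumptions, not experimental or computed data.
import numpy as np

# Hypothetical calibration set: number of C=C bonds per molecule and a
# synthetic intensity ratio generated from an assumed linear law plus noise,
# purely to demonstrate the fitting and inversion steps.
rng = np.random.default_rng(0)
n_cc = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)
ratio = 0.18 * n_cc + 0.02 + rng.normal(0.0, 0.01, n_cc.size)

# Ordinary least-squares fit of ratio = a * n_cc + b.
a, b = np.polyfit(n_cc, ratio, deg=1)

# Invert the calibration to predict the degree of unsaturation of an
# "unknown" sample from its measured ratio.
measured_ratio = 0.55
predicted_n_cc = (measured_ratio - b) / a
print(f"slope={a:.3f}, intercept={b:.3f}, predicted C=C count={predicted_n_cc:.2f}")
```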
Identification of biowaivers among Class II drugs: theoretical justification and practical examples.
Abstract:
Discusses the implications for the doctrine of common mistake of the Court of Appeal ruling in Great Peace Shipping Ltd v Tsavliris Salvage (International) Ltd on whether a contract for the hire of a ship was void on the ground of common mistake regarding the position of the ship. Reviews the origins of the doctrine of common mistake and the relationship between the doctrine and the implication of terms. Considers the determination of impossibility. Examines the role of equity in common mistake and remedial equitable intervention.
Abstract:
In Part I, theoretical derivations for Variational Monte Carlo calculations are compared with results from a numerical calculation of He; both indicate that minimization of the ratio estimate of E_var, denoted E_MC, provides different optimal variational parameters than does minimization of the variance of E_MC. Similar derivations for Diffusion Monte Carlo calculations provide a theoretical justification for empirical observations made by other workers. In Part II, importance sampling in prolate spheroidal coordinates allows Monte Carlo calculations to be made of E_var for the vdW molecule He2, using a simplifying partitioning of the Hamiltonian and both an HF-SCF and an explicitly correlated wavefunction. Improvements are suggested which would permit the extension of the computational precision to the point where an estimate of the interaction energy could be made.
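The contrast between minimizing the sampled energy estimate and minimizing its variance can be made concrete with a toy variational Monte Carlo calculation. The sketch below uses a 1D harmonic oscillator with a Gaussian trial function psi_alpha(x) = exp(-alpha x^2), for which the local energy is E_L(x) = alpha + x^2(1/2 - 2 alpha^2); the system, trial function and Metropolis parameters are assumptions chosen for illustration, not the He or He2 calculations of the thesis.

```python
# Toy VMC sketch (1D harmonic oscillator, hbar = m = omega = 1), illustrating
# that the alpha minimising the sampled energy estimate E_MC need not coincide
# with the alpha minimising the variance of the local energy on a finite sample.
import numpy as np

def local_energy(x, alpha):
    # E_L(x) = -psi''/(2 psi) + x^2/2 for psi = exp(-alpha x^2)
    return alpha + x * x * (0.5 - 2.0 * alpha * alpha)

def metropolis_sample(alpha, n_steps=20000, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = 0.0
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Acceptance ratio |psi(x_new)/psi(x)|^2
        if rng.random() < np.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        samples.append(x)
    return np.array(samples[n_steps // 5:])   # discard the first 20% as burn-in

for alpha in (0.3, 0.4, 0.5, 0.6, 0.7):
    xs = metropolis_sample(alpha)
    e_loc = local_energy(xs, alpha)
    print(f"alpha={alpha:.2f}  E_MC={e_loc.mean():+.4f}  var(E_L)={e_loc.var():.4f}")
# At alpha = 0.5 the trial function is exact: E_MC = 0.5 and the variance vanishes.
```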
Abstract:
This is a guide to developing a theoretical framework for any field of knowledge. It is a rational and organized way to put together everything that is known or has been written about an issue or a problem.
Abstract:
This thesis establishes performance properties for approximate filters and controllers that are designed on the basis of approximate dynamic system representations. These performance properties provide a theoretical justification for the widespread application of approximate filters and controllers in the common situation where system models are not known with complete certainty. This research also provides useful tools for approximate filter designs, which are applied to hybrid filtering of uncertain nonlinear systems. As a contribution towards applications, this thesis also investigates air traffic separation control in the presence of measurement uncertainties.
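A minimal way to see the issue is to design a filter on an approximate model and run it against a different "true" system. The sketch below does this for a scalar linear system with a steady-state Kalman filter; the system parameters and the size of the model mismatch are illustrative assumptions, and the example is far simpler than the hybrid nonlinear setting studied in the thesis.

```python
# Sketch: a steady-state Kalman filter designed for an approximate scalar model
# a_hat, applied to data generated by the true model a_true. The estimation
# error remains bounded even though the design model is wrong.
import numpy as np

a_true, a_hat = 0.9, 0.8          # true vs. assumed state dynamics
q, r = 0.1, 0.5                   # process and measurement noise variances

# Steady-state prediction covariance for the design model: iterate the scalar
# discrete Riccati recursion p <- a^2 p - a^2 p^2 / (p + r) + q to convergence.
p = 1.0
for _ in range(500):
    p = a_hat**2 * p - (a_hat**2 * p**2) / (p + r) + q
k_gain = p / (p + r)              # steady-state Kalman gain

rng = np.random.default_rng(1)
x = x_est = 0.0
errors = []
for _ in range(5000):
    x = a_true * x + rng.normal(0.0, np.sqrt(q))      # true state
    y = x + rng.normal(0.0, np.sqrt(r))               # measurement
    x_pred = a_hat * x_est                            # predict with the wrong model
    x_est = x_pred + k_gain * (y - x_pred)            # measurement update
    errors.append(x - x_est)

print(f"empirical RMS estimation error: {np.std(errors):.3f}")
```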
Abstract:
A correlation of the structural data on 15 hydrates obtained by x-ray diffraction, neutron diffraction, and proton magnetic resonance reveals that when a water molecule is hydrogen bonded into a crystal structure and the angle subtended at the donor water oxygen by the acceptor atoms deviates from the vapor H-O-H angle, bent hydrogen bonds are formed in preference to distortion of the H-O-H angle. Theoretical justification for this result is obtained from energy considerations by calculating the energy of formation of bent hydrogen bonds on the basis of the Lippincott-Schroeder potential function model for the hydrogen bond and the energy of deformation of the H-O-H angle from spectroscopic force constants.
Abstract:
A two-stage H∞-based design procedure is described which uses a normalized coprime factor approach to robust stabilization of linear systems. A loop-shaping procedure is incorporated to allow the specification of performance characteristics. Theoretical justification of this technique and an outline of the design methodology are given.
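For a strictly proper shaped plant with state-space data (A, B, C), the maximal normalized coprime factor stability margin can be computed from two algebraic Riccati equations. The sketch below, under the assumption D = 0, does this for a hypothetical shaped plant using SciPy; it computes only the robustness margin via the standard Glover-McFarlane formula, not the full controller, and is not a reconstruction of the paper's specific design procedure.

```python
# Sketch (assumed D = 0): normalized coprime factor robust stability margin of
# a shaped plant, following the standard Glover-McFarlane result
#   eps_max = (1 + lambda_max(X Z))^(-1/2),
# where X and Z solve the control and filter algebraic Riccati equations.
import numpy as np
from scipy.linalg import solve_continuous_are, eigvals

# Hypothetical shaped plant W2*G*W1, given directly in state-space form.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# Control ARE: A'X + XA - XBB'X + C'C = 0
X = solve_continuous_are(A, B, C.T @ C, np.eye(1))
# Filter ARE:  AZ + ZA' - ZC'CZ + BB' = 0  (dual problem)
Z = solve_continuous_are(A.T, C.T, B @ B.T, np.eye(1))

lam_max = max(eigvals(X @ Z).real)
eps_max = 1.0 / np.sqrt(1.0 + lam_max)
print(f"maximal coprime-factor uncertainty margin eps_max = {eps_max:.3f}")
# Margins above roughly 0.25 are commonly regarded as indicating a well-shaped
# loop; if eps_max is small, the loop-shaping weights are usually revisited.
```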
Abstract:
This paper is concerned with the probability density function of the energy of a random dynamical system subjected to harmonic excitation. It is shown that if the natural frequencies and mode shapes of the system conform to the Gaussian Orthogonal Ensemble, then under common types of loading the distribution of the energy of the response is approximately lognormal, providing the modal overlap factor is high (typically greater than two). In contrast, it is shown that the response of a system with Poisson natural frequencies is not approximately lognormal. Numerical simulations are conducted on a plate system to validate the theoretical findings and good agreement is obtained. Simulations are also conducted on a system made from two plates connected with rotational springs to demonstrate that the theoretical findings can be extended to a built-up system. The work provides a theoretical justification of the commonly used empirical practice of assuming that the energy response of a random system is lognormal.
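A simple numerical experiment along these lines draws natural frequencies either from the eigenvalues of a GOE matrix or from a Poisson point process, forms the harmonic energy response of a randomly excited modal sum, and inspects the distribution of the log-energy. The modal-sum "energy" expression, damping level and excitation below are illustrative assumptions and are much cruder than the plate simulations reported in the paper.

```python
# Sketch: distribution of harmonic response energy when natural frequencies
# follow GOE statistics vs. a Poisson point process. The modal-sum "energy"
# below is an illustrative proxy, not the exact quantity used in the paper.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(2)

def goe_frequencies(n_modes, mean_spacing):
    # Central eigenvalues of a GOE matrix, rescaled to the target mean spacing.
    n = 5 * n_modes
    h = rng.normal(size=(n, n))
    eig = np.sort(np.linalg.eigvalsh((h + h.T) / np.sqrt(2.0)))
    centre = eig[n // 2 - n_modes // 2 : n // 2 + n_modes // 2]
    centre = centre - centre.mean()
    return 100.0 + mean_spacing * centre / np.mean(np.diff(centre))

def poisson_frequencies(n_modes, mean_spacing):
    gaps = rng.exponential(mean_spacing, n_modes)
    return 100.0 + np.cumsum(gaps) - 0.5 * n_modes * mean_spacing

def response_energy(freqs, omega=100.0, eta=0.05):
    g = rng.normal(size=freqs.size)                  # random modal amplitudes
    denom = (freqs**2 - omega**2) ** 2 + (eta * freqs**2) ** 2
    return np.sum(g**2 / denom)

n_real, n_modes, spacing = 1000, 40, 1.0             # modal overlap > 2 here
for label, gen in (("GOE", goe_frequencies), ("Poisson", poisson_frequencies)):
    log_e = np.log([response_energy(gen(n_modes, spacing)) for _ in range(n_real)])
    print(f"{label}: skewness of log-energy = {skew(log_e):+.2f}")
# Near-zero skewness of the log-energy is consistent with an approximately
# lognormal energy distribution; the Poisson case typically departs further.
```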
Abstract:
The purpose of this thesis is to answer the question: why do riblets stop working above a certain size? Riblets are small surface grooves aligned in the mean direction of an overlying turbulent flow, designed specifically to reduce the friction between the flow and the surface. They were inspired by biological surfaces, like the oriented denticles in the skin of fast-swimming sharks, and were the focus of a significant amount of research in the late eighties and nineties. Although it was found that the drag reduction depends on the riblet size scaled in wall units, the physical mechanisms involved have not been completely understood until now. It has been explained how riblets of vanishing size interact with the turbulent flow, producing a change in the drag proportional to their size, but that is not the regime of practical interest. The optimum performance is achieved for larger sizes, once that linear behavior has broken down, but before riblets begin adopting the character of regular roughness and increasing drag. This regime, which is the most relevant from a technological perspective, was precisely the least understood, so we have focused on it. Our efforts have followed three basic directions. First, we have re-assessed the available experimental data, seeking to identify common characteristics in the optimum regime across the different existing riblet geometries. This study has led to the proposal of a new length scale, the square root of the groove cross-section, to replace the traditional peak-to-peak spacing. When the riblet dimension is scaled with this length, the size at which the linear behavior breaks down becomes roughly universal. This suggests that the onset of the breakdown is related to a certain, fixed value of the cross-section of the groove. Second, we have conducted a set of direct numerical simulations of the turbulent flow over riblets, for sizes spanning the full drag reduction range. We have thus been able to reproduce the gradual transition between the different regimes. The spectral analysis of the flows has proven particularly fruitful, since it has made it possible to identify spanwise rollers immediately above the riblets, which begin to appear when the riblet size is close to the optimum. This is quite a surprising feature of the flow, not because of the uniqueness of the phenomenon, which had been reported before for other types of complex and porous surfaces, but because most previous studies had focused on the detail of the flow above each riblet as a unit. Our novel approach has provided adequate tools to capture coherent structures with an extended spanwise support, which interact with the riblets not individually, but collectively. We have also proven that those spanwise structures are responsible for the increase in drag past the viscous breakdown. Finally, we have analyzed the stability of the flow with a simplified model that connects the appearance of rollers to a Kelvin-Helmholtz-like instability, as is also the case for the flow over plant canopies and porous surfaces. Although the model emulates the presence of riblets only in an averaged, general fashion, it succeeds in capturing the essential attributes of the breakdown and provides a theoretical justification for the scaling with the groove cross-section.
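The proposed scaling can be illustrated with a short calculation: for a few idealized riblet shapes, the groove cross-sectional area A_g follows from the spacing s and depth h, and the new length scale is l_g = sqrt(A_g). The geometries and the numerical s and h values below are illustrative assumptions, not the experimental data re-assessed in the thesis.

```python
# Sketch: the length scale l_g = sqrt(A_g) (square root of the groove
# cross-section) for a few idealized riblet geometries. Spacing s and depth h
# are in wall units; the numerical values are illustrative only.
import math

def groove_area(shape, s, h):
    """Cross-sectional area of one groove for simple idealized shapes."""
    if shape == "triangular":        # V-groove: triangle of base s, height h
        return 0.5 * s * h
    if shape == "blade":             # thin blades: nearly the full rectangle s*h
        return s * h
    if shape == "parabolic":         # scalloped groove bounded by a parabola
        return (2.0 / 3.0) * s * h
    raise ValueError(shape)

for shape, s_plus, h_plus in [("triangular", 16.0, 8.0),
                              ("blade", 10.0, 5.0),
                              ("parabolic", 14.0, 7.0)]:
    a_g = groove_area(shape, s_plus, h_plus)
    l_g = math.sqrt(a_g)
    print(f"{shape:11s}  s+={s_plus:5.1f}  h+={h_plus:4.1f}  l_g+={l_g:5.2f}")
# Published work on this scaling reports that the drag-reduction optimum
# collapses near a common value of l_g+ (of the order of 10) across geometries.
```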
Abstract:
Mixture of Gaussians (MoG) modelling [13] is a popular approach to background subtraction in video sequences. Although the algorithm shows good empirical performance, it lacks theoretical justification. In this paper, we give a justification for it from an online stochastic expectation maximization (EM) viewpoint and extend it to a general framework of regularized online classification EM for MoG with guaranteed convergence. By choosing a special regularization function, the l1 norm, we derive a new set of updating equations for l1 regularized online MoG. It is shown empirically that l1 regularized online MoG converges faster than the original online MoG.
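To make the setting concrete, the sketch below implements a single-pixel online MoG background model with Stauffer-Grimson style updates plus a crude l1-style shrinkage of the mixture weights. The shrinkage step and all parameter values are illustrative assumptions; they are not the regularized online classification EM updates derived in the paper.

```python
# Sketch: online MoG background model for one pixel, with a simple l1-style
# shrinkage of the mixture weights. Illustrates the general idea only.
import numpy as np

class OnlinePixelMoG:
    def __init__(self, k=3, lr=0.05, var0=30.0, shrink=1e-3):
        self.w = np.full(k, 1.0 / k)            # mixture weights
        self.mu = np.linspace(0.0, 255.0, k)    # component means
        self.var = np.full(k, var0)             # component variances
        self.lr, self.var0, self.shrink = lr, var0, shrink

    def update(self, x, match_thresh=2.5):
        d = np.abs(x - self.mu) / np.sqrt(self.var)
        matched = int(np.argmin(d)) if d.min() < match_thresh else None
        if matched is None:                     # replace the weakest component
            j = int(np.argmin(self.w))
            self.mu[j], self.var[j], self.w[j] = x, self.var0, self.lr
        else:
            j = matched
            onehot = (np.arange(self.w.size) == j).astype(float)
            self.w += self.lr * (onehot - self.w)
            self.mu[j] += self.lr * (x - self.mu[j])
            self.var[j] += self.lr * ((x - self.mu[j]) ** 2 - self.var[j])
        # l1-style shrinkage: soft-threshold the weights, then renormalize.
        self.w = np.maximum(self.w - self.shrink, 0.0)
        if self.w.sum() == 0.0:
            self.w[:] = 1.0 / self.w.size
        self.w /= self.w.sum()
        return matched is not None and self.w[matched] > 0.2   # background?

pixel = OnlinePixelMoG()
rng = np.random.default_rng(3)
stream = np.concatenate([rng.normal(120, 5, 500),    # static background
                         rng.normal(200, 5, 20),     # brief foreground object
                         rng.normal(120, 5, 100)])
labels = [pixel.update(x) for x in stream]
print("fraction labelled background:", np.mean(labels))
```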
Abstract:
This work addresses how economic analysis can contribute to the definition of a sustainable public water policy for Portugal. It begins by analysing the particular characteristics of the resource and its legislative and institutional framework, as well as their implications for the water management process. This analysis leads to the definition of what will be called the "New Water Management Model". Against the background of the Water Framework Directive (WFD), Directive 2000/60/EC of 23 October 2000, published in the Official Journal of the European Communities on 22 December of the same year, the application of the concepts and of the approach developed is illustrated in the definition of a political strategy of action for Portugal, so as to ensure compliance with the Directive in an effective, efficient and sustainable manner. The economic aspects and the theoretical justification for intervention in the markets, namely through the development of tariff systems, are discussed. The ways of financing the sector, in light of the cost-recovery principle, are analysed, and the so-called 4T view is proposed. Given that the water sector is analysed as a public policy, the various types of regulation and the various reforms proposed by the main researchers and international organisations are reviewed. Within this framework, governance and its attributes are addressed, and the main obstacles to efficient governance are set out. The various forms of private capital participation, as well as a description of some of their potential benefits, are highlighted. An analytical model is used to study the effects of using various economic instruments, namely on welfare. The Portuguese institutional model is analysed in its legislative and institutional dimensions, and the state of water resources and water services in Portugal is assessed from official data. Based on the identification of the restrictions of the current institutional model, a new model is proposed that responds flexibly and in a timely manner to the demands posed by the Directive. The creation of a financial institution, the "Banco da Água" (Water Bank), is proposed which, under market conditions, could finance the structural investments needed to improve the quality of water resources and of water-related services. It is intended to show that, given budgetary restrictions, the expected conclusion of the National Strategic Reference Framework (QREN) and the limitations of so-called project finance, this solution will be necessary for the success of public water policy. Creating the conditions for a greater role for private initiative, consumer-protection legislation, the application of water policy instruments, namely tariff systems and the creation of a Tariff Equalisation Fund, and the use of the Oikomatrix methodology in sectoral policies are further suggestions that complete the proposals put forward, so that the water sector may minimise some of the inefficiencies identified and attain the desired sustainability.
Abstract:
In this paper we propose a highly accurate approximation procedure for ruin probabilities in the classical collective risk model, which is based on a quadrature/rational approximation procedure proposed in [2]. For a certain class of claim size distributions (which contains the completely monotone distributions) we give a theoretical justification for the method. We also show that under weaker assumptions on the claim size distribution, the method may still perform reasonably well in some cases. This in particular provides an efficient alternative to a related method proposed in [3]. A number of numerical illustrations of the performance of this procedure are provided for both completely monotone and other types of random variables.
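For orientation, ruin probabilities in the classical (Cramér-Lundberg) model have a closed form when claims are exponential, which is convenient for checking any approximation scheme. The sketch below compares that closed form against a crude Monte Carlo estimate; it does not implement the quadrature/rational approximation of [2], and the parameter values are illustrative assumptions.

```python
# Sketch: ruin probability in the classical collective risk model with
# exponential claims, where the Pollaczek-Khinchine formula reduces to
#   psi(u) = (lam*mu/c) * exp(-(1/mu - lam/c) * u),   valid for c > lam*mu.
# The Monte Carlo check truncates paths at a finite horizon, so it slightly
# underestimates the infinite-horizon ruin probability.
import numpy as np

lam, mu, c, u0 = 1.0, 1.0, 1.25, 5.0   # claim rate, mean claim, premium rate, initial capital

def psi_exact(u):
    return (lam * mu / c) * np.exp(-(1.0 / mu - lam / c) * u)

def psi_mc(u, n_paths=20000, horizon=2000.0, seed=4):
    rng = np.random.default_rng(seed)
    ruined = 0
    for _ in range(n_paths):
        t, surplus = 0.0, u
        while t < horizon:
            dt = rng.exponential(1.0 / lam)            # time to next claim
            t += dt
            surplus += c * dt - rng.exponential(mu)    # premiums in, claim out
            if surplus < 0.0:                          # ruin at a claim instant
                ruined += 1
                break
    return ruined / n_paths

print(f"exact psi({u0}) = {psi_exact(u0):.4f}")
print(f"MC    psi({u0}) = {psi_mc(u0):.4f}")
```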
Abstract:
My thesis consists of three essays on bootstrap inference in panel data models and in models with many instrumental variables (IV), a large number of which may be weak. Since asymptotic theory is not always a good approximation to the sampling distribution of estimators and test statistics, I consider the bootstrap as an alternative. These essays study the asymptotic validity of existing bootstrap procedures and, where they are invalid, propose new valid bootstrap methods. The first chapter (co-written with Sílvia Gonçalves) studies the validity of the bootstrap for inference in a linear, dynamic and stationary panel data model with fixed effects. We consider three bootstrap methods: the recursive-design bootstrap, the fixed-design bootstrap and the pairs bootstrap. These methods are natural generalizations to the panel setting of the bootstrap methods considered by Gonçalves and Kilian (2004) for autoregressive time series models. We show that the OLS estimator obtained under the recursive-design bootstrap contains a built-in term that mimics the bias of the original estimator. This contrasts with the fixed-design bootstrap and the pairs bootstrap, whose distributions are incorrectly centred at zero. However, the recursive-design bootstrap and the pairs bootstrap are asymptotically valid when applied to the bias-corrected estimator, unlike the fixed-design bootstrap. In simulations, the recursive-design bootstrap is the method that produces the best results. The second chapter extends the pairs bootstrap results to nonlinear dynamic panel models with fixed effects. These models are often estimated by maximum likelihood (ML), which also suffers from a bias. Recently, Dhaene and Jochmans (2014) proposed the split-jackknife estimation method. Although these estimators have normal asymptotic approximations centred on the true parameter, serious finite-sample distortions remain. Dhaene and Jochmans (2014) proposed the pairs bootstrap as an alternative in this context without any theoretical justification. To fill this gap, I show that this method is asymptotically valid when used to estimate the distribution of the split-jackknife estimator, although it cannot estimate the distribution of the ML estimator. Monte Carlo simulations show that bootstrap confidence intervals based on the split-jackknife estimator greatly reduce the distortions associated with the normal approximation in finite samples. In addition, I apply this bootstrap method to a model of female labour-force participation to construct valid confidence intervals. In the last chapter (co-written with Wenjie Wang), we study the asymptotic validity of bootstrap procedures for models with many instrumental variables (IV), a large number of which may be weak. We show analytically that a standard residual-based bootstrap and the restricted efficient (RE) bootstrap of Davidson and MacKinnon (2008, 2010, 2014) cannot estimate the limiting distribution of the limited information maximum likelihood (LIML) estimator. The main reason is that they fail to adequately mimic the parameter that characterizes the strength of identification in the sample. Consequently, we propose a modified bootstrap method that consistently estimates this limiting distribution. Our simulations show that the modified bootstrap method considerably reduces the finite-sample distortions of asymptotic Wald-type (t) tests, particularly when the degree of endogeneity is high.
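The pairs bootstrap for panel data resamples entire cross-sectional units, keeping each unit's whole time series intact, and recomputes the estimator on each pseudo-panel. The sketch below does this for the within (fixed-effects) estimator of a linear dynamic panel; the data-generating process and the use of the plain FE estimator, rather than the split-jackknife or LIML estimators discussed in the thesis, are simplifying assumptions.

```python
# Sketch: pairs (cross-sectional) bootstrap for a dynamic panel
# y_it = rho * y_i,t-1 + alpha_i + eps_it, using the within estimator.
# Entire units are resampled with replacement, preserving time dependence.
import numpy as np

rng = np.random.default_rng(5)
N, T, rho = 200, 10, 0.5

# Simulate a panel with unit fixed effects (illustrative DGP, not the thesis').
alpha = rng.normal(0.0, 1.0, N)
y = np.zeros((N, T + 1))
for t in range(1, T + 1):
    y[:, t] = rho * y[:, t - 1] + alpha + rng.normal(0.0, 1.0, N)

def fe_estimator(panel):
    """Within (fixed-effects) estimator of rho from an (N, T+1) array of levels."""
    y_lag, y_cur = panel[:, :-1], panel[:, 1:]
    y_lag_d = y_lag - y_lag.mean(axis=1, keepdims=True)   # within transformation
    y_cur_d = y_cur - y_cur.mean(axis=1, keepdims=True)
    return np.sum(y_lag_d * y_cur_d) / np.sum(y_lag_d ** 2)

rho_hat = fe_estimator(y)

# Pairs bootstrap: resample unit indices i with replacement, keep the T dimension.
boot = []
for _ in range(999):
    idx = rng.integers(0, N, N)
    boot.append(fe_estimator(y[idx]))
boot = np.array(boot)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"FE estimate of rho: {rho_hat:.3f} (true {rho}); "
      f"95% pairs-bootstrap percentile interval: [{lo:.3f}, {hi:.3f}]")
# Note the well-known Nickell bias of the FE estimator for small T; the thesis
# explains why bootstrapping a bias-corrected estimator matters in such settings.
```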
Abstract:
We consider two new approaches to nonparametric estimation of the leverage effect. The first approach uses stock prices alone. The second approach uses the data on stock prices as well as a certain volatility instrument, such as the CBOE volatility index (VIX) or the Black-Scholes implied volatility. The theoretical justification for the instrument-based estimator relies on a certain invariance property, which can be exploited when high frequency data is available. The price-only estimator is more robust since it is valid under weaker assumptions. However, in the presence of a valid volatility instrument, the price-only estimator is inefficient as the instrument-based estimator has a faster rate of convergence. We consider two empirical applications, in which we study the relationship between the leverage effect and the debt-to-equity ratio, credit risk, and illiquidity.
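A simple price-only construction in the spirit of the abstract estimates spot variance over adjacent local windows from high-frequency returns and then correlates returns with subsequent changes in the estimated variance. The sketch below applies such an estimator to simulated Heston-type data; the model, window sizes and the particular estimator are illustrative assumptions rather than the estimators analysed in the paper.

```python
# Sketch: a naive price-only leverage-effect estimate from high-frequency data.
# Spot variance is proxied by realized variance over local windows; the
# "leverage effect" is taken as the correlation between window returns and the
# subsequent change in the local variance estimate. Data are simulated from a
# Heston-type model with negatively correlated price and volatility shocks.
import numpy as np

rng = np.random.default_rng(6)
n, dt = 23400 * 5, 1.0 / (23400 * 252)           # 5 days of 1-second steps
kappa, theta, xi, rho_true = 5.0, 0.04, 0.5, -0.7

v = np.empty(n + 1); v[0] = theta
r = np.empty(n)
for i in range(n):
    z1 = rng.normal()
    z2 = rho_true * z1 + np.sqrt(1.0 - rho_true**2) * rng.normal()
    r[i] = np.sqrt(v[i] * dt) * z1                           # log-return
    v[i + 1] = max(v[i] + kappa * (theta - v[i]) * dt
                   + xi * np.sqrt(v[i] * dt) * z2, 1e-10)    # variance path

# Local realized variance over non-overlapping windows of k returns.
k = 1000
m = n // k
rv = np.array([np.sum(r[j*k:(j+1)*k] ** 2) for j in range(m)])
ret = np.array([np.sum(r[j*k:(j+1)*k]) for j in range(m)])

# Correlate window returns with the change in local variance over the next window.
lev = np.corrcoef(ret[:-1], rv[1:] - rv[:-1])[0, 1]
print(f"estimated return/volatility-change correlation: {lev:+.2f} (true rho = {rho_true})")
# This naive estimator is noisy and attenuated by measurement error in the
# realized variance; the paper's estimators address this with more careful
# asymptotics and, optionally, a volatility instrument.
```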