945 results for Stochastic integrals


Relevance: 20.00%

Abstract:

A classical approach to two- and multi-stage optimization problems under uncertainty is scenario analysis. Here, the uncertainty in some of the problem data is modelled by random vectors with finite, stage-specific supports; each realization represents a scenario. Using scenarios, one can study simpler versions (subproblems) of the original problem. Among scenario decomposition techniques, the progressive hedging algorithm is one of the most popular methods for solving multistage stochastic programming problems. Despite the complete decomposition by scenario, the efficiency of progressive hedging is very sensitive to certain practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective. For the choice of the penalty parameter, we examine some of the popular methods and propose a new adaptive strategy that aims to track the progress of the algorithm more closely. Numerical experiments on instances of multistage stochastic linear problems suggest that most existing techniques either converge prematurely to a suboptimal solution or converge to the optimal solution at a very slow rate. In contrast, the new strategy appears robust and efficient: it converged to optimality in all our experiments and was the fastest in most cases. For the handling of the quadratic term, we review existing techniques and propose replacing the quadratic term with a linear one. Although our method still remains to be tested, we expect that it will alleviate some of the numerical and theoretical difficulties of progressive hedging.
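As an illustration of the scenario decomposition described above, here is a minimal progressive-hedging loop in Python. The `solve_subproblem` callback and the penalty-update rule are placeholders of our own; the abstract does not give the details of the thesis's adaptive strategy.

```python
# Minimal progressive-hedging sketch for a stochastic program.
import numpy as np

def progressive_hedging(scenarios, probs, solve_subproblem,
                        rho=1.0, tol=1e-6, max_iter=200):
    """solve_subproblem(scenario, w, xbar, rho) must return the first-stage
    decision minimizing the scenario objective plus w @ x
    + (rho/2) * ||x - xbar||^2 (a placeholder supplied by the user)."""
    # initial solves without penalty to obtain a starting consensus point
    x = [np.asarray(solve_subproblem(s, 0.0, 0.0, 0.0)) for s in scenarios]
    w = [np.zeros_like(x[0]) for _ in scenarios]
    xbar = sum(p * xi for p, xi in zip(probs, x))
    for _ in range(max_iter):
        x = [np.asarray(solve_subproblem(s, wi, xbar, rho))
             for s, wi in zip(scenarios, w)]
        xbar = sum(p * xi for p, xi in zip(probs, x))         # consensus point
        w = [wi + rho * (xi - xbar) for wi, xi in zip(w, x)]  # dual update
        gap = sum(p * np.linalg.norm(xi - xbar) for p, xi in zip(probs, x))
        if gap < tol:
            break
        # naive adaptive rule (illustrative only, not the thesis's strategy):
        # raise rho while scenarios disagree strongly, lower it otherwise
        rho *= 1.05 if gap > 100 * tol else 0.95
    return xbar
```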

Relevance: 20.00%

Abstract:

Metaheuristics are widely used in discrete optimization. They make it possible to obtain a good-quality solution in reasonable time for problems that are large, complex, and hard to solve. Metaheuristics often have many parameters that the user must tune manually for a given problem. The goal of an adaptive metaheuristic is to let the method automatically adjust some of its parameters based on the instance being solved. By exploiting prior knowledge of the problem together with notions from machine learning and related fields, an adaptive metaheuristic yields a more general and automatic way of solving problems. Global optimization of mining complexes aims to schedule material movements in the mines and the processing streams so as to maximize the economic value of the system. Because of the large number of integer variables in the model and the presence of complex and nonlinear constraints, solving these models with the optimizers available in industry is often prohibitive; metaheuristics are therefore commonly used for the optimization of mining complexes. This thesis improves a simulated annealing procedure developed by Goodfellow & Dimitrakopoulos (2016) for the stochastic optimization of mining complexes. The authors' method requires many parameters, one of which governs how the simulated annealing method searches the local neighbourhood of solutions. This thesis implements an adaptive neighbourhood search method to improve solution quality. Numerical results show an increase of up to 10% in the value of the economic objective function.
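To make the idea concrete, the following Python sketch shows one common way to adapt the choice of neighbourhood inside simulated annealing, rewarding neighbourhood types that have improved the incumbent. It is an illustrative scheme under our own assumptions, not the method implemented in the thesis.

```python
# Simulated annealing with adaptive neighbourhood selection (illustrative).
import math
import random

def adaptive_sa(x0, cost, moves, T0=1.0, alpha=0.995, iters=10000):
    """moves: list of functions, each mapping a solution to a random
    neighbour drawn from one neighbourhood type."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    scores = [1.0] * len(moves)         # one adaptive score per neighbourhood
    T = T0
    for _ in range(iters):
        k = random.choices(range(len(moves)), weights=scores)[0]
        y = moves[k](x)                 # candidate from neighbourhood k
        fy = cost(y)
        if fy < fx or random.random() < math.exp((fx - fy) / T):
            scores[k] += max(fx - fy, 0.0)  # reward improving neighbourhoods
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        T *= alpha                      # geometric cooling schedule
    return best, fbest
```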

Relevance: 20.00%

Abstract:

In this work we study Zipf's law from both an applied and a theoretical point of view. This empirical law states that the rank-frequency (RF) distribution of the words in a text follows a power law with exponent -1. On the theoretical side, we treat two classes of models capable of generating power laws in their probability distributions: generalizations of Pólya urns and Sample Space Reducing (SSR) processes. For the latter we give a formalization in terms of Markov chains. We also propose a population-dynamics model that unifies and reproduces the results of the three SSR processes found in the literature. We then turn to a quantitative analysis of the RF behaviour of the words in a text corpus. In this case the RF does not follow a pure power law but exhibits a twofold behaviour, which can be represented by a power law with a changing exponent. We investigated whether the analysis of the RF behaviour could be linked to the topological properties of a graph. In particular, starting from a text corpus we built an adjacency network in which each word is linked to the word that follows it. A topological analysis of the graph structure yielded results that seem to confirm the hypothesis that its structure is related to the change of slope in the RF. This result may lead to developments in the study of language and of the human mind. Moreover, since the graph structure appears to contain components that group words by meaning, a deeper study could lead to developments in automatic text understanding (text mining).
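For readers unfamiliar with SSR processes, the following short Python simulation (our illustration, not the thesis's code) shows the basic mechanism and why it produces Zipf's law.

```python
# Minimal simulation of an SSR process: start at state N, jump uniformly
# to a strictly lower state, restart at N after reaching state 1.
# The visiting frequencies of the states follow Zipf's law, p(k) ~ 1/k.
import random
from collections import Counter

def ssr_visits(N=1000, restarts=100_000):
    visits = Counter()
    for _ in range(restarts):
        state = N
        while state > 1:
            state = random.randint(1, state - 1)  # sample-space reduction
            visits[state] += 1
    return visits

counts = ssr_visits()
# if p(k) ~ C/k, then k * count(k) should be roughly constant in k
for k in (1, 10, 100):
    print(k, k * counts[k])
```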

Relevance: 20.00%

Abstract:

Hypoxia is one of the most important and fastest-spreading threats to marine life, and its occurrence has increased significantly over the last century. The effects of hypoxia on marine organisms and communities have mostly been studied in terms of the intensity of the disturbance, while little attention has been paid to its interaction with other stressors and to the timing of its onset. In this thesis I began to explore these topics through laboratory and manipulative field experiments. I studied the interactive effects of thermal stress and hypoxia on a European native bivalve species (Cerastoderma edule; Linnaeus, 1758) and a non-native one (Ruditapes philippinarum; Adams & Reeve, 1850) in a laboratory experiment performed in the Netherlands. The non-native species displayed a greater tolerance to oxygen depletion than the native one. The first field experiment was performed in an Italian brackish coastal lagoon (Pialassa Baiona) and tested the effects of different timing regimes of hypoxia on the benthic community; the main factor affecting the community turned out to be the duration of hypoxia. The ability of the communities to recover after repeated hypoxic periods was explored in the second manipulative field experiment: we imposed three different timing regimes of hypoxia on sediment patches in Pialassa Baiona and monitored the changes in both the benthic and the microbial communities after the disturbances. Preliminary analyses of the data from this last experiment suggest that the experimental manipulations caused limited detrimental effects on the communities. Overall, this thesis suggests that the duration of hypoxic events, their repetitive nature, and the associated thermal stress are key factors determining their effects on communities, and that management measures should aim at reducing the duration of individual hypoxic periods rather than their frequency.

Relevance: 20.00%

Abstract:

In the last two decades, authors have begun to extend classical stochastic frontier (SF) models to include spatial components. Indeed, firms tend to concentrate in clusters, taking advantage of positive agglomeration externalities due to cooperation, shared ideas, and emulation, resulting in increased productivity levels. Until now, scholars have introduced spatial dependence into SF models along two different paths: evaluating global and local spatial spillover effects related to the frontier, or considering spatial cross-sectional correlation in the inefficiency and/or in the error term. In this thesis, we extend the current literature on spatial SF models by introducing two novel specifications for panel data. First, besides considering productivity and input spillovers, we introduce the possibility of evaluating the specific spatial effects arising from each inefficiency determinant through their spatial lags, aiming to also capture knowledge spillovers. Second, we develop a comprehensive spatial SF model that includes both frontier and error-based spillovers, so as to account for four different sources of spatial dependence (productivity and input spillovers related to the frontier function, and behavioural and environmental correlation associated with the two error terms). Finally, we test the finite-sample properties of the two proposed spatial SF models through simulations, and we provide two empirical applications, to the Italian accommodation and agricultural sectors. From a practical perspective, policymakers can rely on the precise, detailed, and distinct insights these models provide into the spillover effects affecting the productive performance of neighbouring spatial units, and so obtain relevant suggestions for policy decisions.
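As a sketch of the first specification (our own notation and reading of the abstract, not necessarily the exact model of the thesis), a spatial SF frontier with spatially lagged inefficiency determinants could take the form:

```latex
% w_{ij}: spatial weights; x: inputs; z: inefficiency determinants.
y_{it} = \rho \sum_{j} w_{ij}\, y_{jt} + x_{it}'\beta
       + \sum_{j} w_{ij}\, x_{jt}'\theta + v_{it} - u_{it},
\qquad
u_{it} \sim \mathcal{N}^{+}\!\Big(z_{it}'\delta
       + \sum_{j} w_{ij}\, z_{jt}'\lambda,\; \sigma_u^2\Big),
```

so that the spatial lags of the z's are meant to capture knowledge spillovers.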

Relevance: 20.00%

Abstract:

In this thesis we study the heat kernel, a useful tool for analyzing various properties of different quantum field theories. In particular, we focus on the study of the one-loop effective action and on the application of worldline path integrals to derive perturbatively the heat kernel coefficients for the Proca theory of massive vector fields. It turns out that the worldline path integral method encounters difficulties when the differential operator of the heat kernel is of non-minimal kind: a direct recasting of the differential operator in terms of worldline path integrals produces a non-perturbative vertex in the classical action, and the path integral cannot be solved. In this work we look for ways to circumvent this issue and suggest how to address similar problems in other contexts.
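For orientation, the two standard relations underlying this program are the heat-kernel representation of the one-loop effective action and the small-t Seeley-DeWitt expansion of the heat kernel trace (Euclidean conventions assumed; signs and factors vary across references):

```latex
\Gamma_{\text{1-loop}} = -\frac{1}{2} \int_0^{\infty} \frac{dt}{t}\,
  \operatorname{Tr} e^{-tH} ,
\qquad
\operatorname{Tr} e^{-tH} \sim \frac{1}{(4\pi t)^{d/2}}
  \int d^d x \,\sqrt{g}\, \sum_{n \ge 0} a_n(x)\, t^{\,n} .
```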

Relevance: 20.00%

Abstract:

The El Niño-Southern Oscillation (ENSO) is the major climate phenomenon occurring in the tropical Pacific Ocean, with large-scale environmental, climatic, and socio-economic impacts. This thesis retraces the main steps taken towards understanding such a complex phenomenon. First, we study the mechanisms governing its dynamics, up to the formulation of the mathematical model known as the Delayed Oscillator (DO) model, proposed by Suarez and Schopf in 1988. Then, to account for the chaotic nature of the system, we introduce into the model the scheme called Stochastically Perturbed Parameterisation Tendencies (SPPT). Finally, we present two examples of numerical solution of the DO, both with and without the SPPT correction, and assess the extent to which SPPT brings real improvements to the model.
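A minimal numerical sketch of the DO model with an SPPT-style perturbation, under illustrative parameter values of our own choosing, could look as follows in Python:

```python
# Suarez-Schopf delayed oscillator, dT/dt = T - T^3 - alpha * T(t - delta),
# integrated with forward Euler; SPPT rescales the whole tendency by a
# random factor (1 + r). Parameter values are illustrative only.
import numpy as np

def delayed_oscillator(alpha=0.75, delta=6.0, dt=0.01, t_end=200.0,
                       sppt_sigma=0.0, seed=0):
    rng = np.random.default_rng(seed)
    n = int(t_end / dt)
    lag = int(delta / dt)
    T = np.empty(n)
    T[: lag + 1] = 0.1                   # constant history as initial data
    for i in range(lag, n - 1):
        tendency = T[i] - T[i] ** 3 - alpha * T[i - lag]
        if sppt_sigma > 0.0:
            tendency *= 1.0 + rng.normal(0.0, sppt_sigma)  # SPPT perturbation
        T[i + 1] = T[i] + dt * tendency
    return T

T_det = delayed_oscillator()                 # deterministic run
T_sppt = delayed_oscillator(sppt_sigma=0.3)  # stochastically perturbed run
```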

Relevance: 20.00%

Abstract:

The decomposition of Feynman integrals into a basis of independent master integrals is an essential ingredient of high-precision theoretical predictions, and it often represents a major bottleneck when processes with a high number of loops and legs are involved. In this thesis we present a new algorithm for the decomposition of Feynman integrals into master integrals within the formalism of intersection theory. Intersection theory is a novel approach that allows one to decompose Feynman integrals into master integrals via projections, based on a scalar product between Feynman integrals called the intersection number. We propose a new, purely rational algorithm for the calculation of intersection numbers of differential $n$-forms that avoids the appearance of algebraic extensions. We show how expansions around non-rational poles, which are a bottleneck of existing algorithms for intersection numbers, can be avoided by performing a series expansion around a rational polynomial irreducible over $\mathbb{Q}$, which we refer to as a $p(z)$-adic expansion. The algorithm has been implemented and tested on several diagrams, at one and two loops.
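For context, the projection mentioned above is usually expressed through the master decomposition formula of the intersection-theory literature: given a basis of $\nu$ forms $\langle e_i |$ and dual forms $| h_j \rangle$, the coefficients of a Feynman integral are obtained from the matrix of their intersection numbers,

```latex
\langle I | = \sum_{i=1}^{\nu} c_i \,\langle e_i | ,
\qquad
c_i = \sum_{j=1}^{\nu} \langle I | h_j \rangle
      \big(\mathbf{C}^{-1}\big)_{ji} ,
\qquad
\mathbf{C}_{ij} = \langle e_i | h_j \rangle .
```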

Relevance: 10.00%

Abstract:

Universidade Estadual de Campinas. Faculdade de Educação Física

Relevance: 10.00%

Abstract:

This article discusses a combined forecasting model for producing climate prognoses at the seasonal scale, in which point forecasts from stochastic models are aggregated to obtain the best projections over time. Autoregressive integrated moving average (ARIMA) stochastic models, exponential smoothing, and forecasts by canonical correlation analysis are used. Forecast quality control is performed through residual analysis and by evaluating the percentage reduction in unexplained variance of the combined model relative to the forecasts of the individual models. Examples of the application of these concepts in models developed at the Instituto Nacional de Meteorologia (INMET) show good results and illustrate that, in most cases, the combined model's forecasts outperform those of each component model when compared with observed data.
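A minimal illustration of forecast combination in this spirit, using inverse-error weights (the article's exact weighting scheme is not given in the abstract), could be written in Python with statsmodels:

```python
# Combine ARIMA and exponential-smoothing point forecasts with weights
# proportional to the inverse hold-out MSE of each model (illustrative).
# Assumes y is a 1-D numpy array holding the seasonal series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def combined_forecast(y, horizon=3):
    train, valid = y[:-horizon], y[-horizon:]
    fit_fns = [
        lambda s: ARIMA(s, order=(1, 1, 1)).fit(),
        lambda s: ExponentialSmoothing(s, trend="add").fit(),
    ]
    # weights from each model's error on a hold-out block
    errs = np.array([np.mean((valid - f(train).forecast(horizon)) ** 2)
                     for f in fit_fns])
    w = (1.0 / errs) / np.sum(1.0 / errs)
    # refit each model on the full series and combine the future forecasts
    forecasts = [f(y).forecast(horizon) for f in fit_fns]
    return sum(wi * fc for wi, fc in zip(w, forecasts))
```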

Relevance: 10.00%

Abstract:

In this article we present a Bayesian analysis of the stochastic volatility (SV) model and of a generalized form of it, with the objective of estimating the volatility of financial time series. Considering some special cases of SV models, we use Markov chain Monte Carlo algorithms and the WinBugs software to obtain posterior summaries for the different forms of SV models. We introduce some Bayesian discrimination techniques for choosing the best model for estimating volatilities and forecasting financial series. An empirical example applying the methodology to the IBOVESPA financial series is presented.
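For reference, the canonical SV specification usually taken as a starting point (one common parameterization; the article's generalized form may differ) is:

```latex
y_t = e^{h_t/2}\,\varepsilon_t, \qquad
h_t = \mu + \phi\,(h_{t-1}-\mu) + \eta_t, \qquad
\varepsilon_t \sim \mathcal{N}(0,1), \quad
\eta_t \sim \mathcal{N}(0,\sigma_\eta^2),
```

where $h_t$ is the unobserved log-volatility whose posterior is explored by MCMC.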

Relevance: 10.00%

Abstract:

In this paper, we present a fuzzy approach to the Reed-Frost model for epidemic spreading that takes into account uncertainties in the diagnosis of the infection. The heterogeneity of the infected group is based on the clinical signals of the individuals (symptoms, laboratory exams, medical findings, etc.), which are incorporated into the dynamics of the epidemic. The infectivity level is time-varying, and the classification of the individuals is performed through fuzzy relations. Simulations considering a real problem, with data from a viral epidemic in a children's daycare centre, are performed, and the results are compared with a stochastic Reed-Frost generalization.
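A sketch of one possible reading of this construction in Python, in which each infective carries a membership degree in [0, 1] and the escape probability is driven by the sum of degrees; the function names and the sampling of the degrees are our own illustrative choices, not the paper's formulation:

```python
# Fuzzy Reed-Frost chain (illustrative): the effective number of
# infectives is the sum of diagnostic membership degrees rather than
# a head count.
import numpy as np

def fuzzy_reed_frost(S0, degrees0, p, steps, seed=0):
    rng = np.random.default_rng(seed)
    S, degrees = S0, list(degrees0)
    history = [(S, sum(degrees))]
    for _ in range(steps):
        eff_I = sum(degrees)             # fuzzy "number" of infectives
        q = (1.0 - p) ** eff_I           # probability of escaping infection
        new_cases = rng.binomial(S, 1.0 - q)
        S -= new_cases
        # new infectives receive a membership degree (sampled arbitrarily
        # here; the paper derives it from clinical signals)
        degrees = list(rng.uniform(0.5, 1.0, size=new_cases))
        history.append((S, sum(degrees)))
    return history

# Example: 100 susceptibles, one certain index case, per-contact p = 0.02
print(fuzzy_reed_frost(100, [1.0], 0.02, 10))
```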

Relevance: 10.00%

Abstract:

The exact time-dependent solution for the stochastic equations governing the behavior of a binary self-regulating gene is presented. Using the generating function technique to rephrase the master equations in terms of partial differential equations, we show that the model is totally integrable and the analytical solutions are the celebrated confluent Heun functions. Self-regulation plays a major role in the control of gene expression, and it is remarkable that such a microscopic model is completely integrable in terms of well-known complex functions.
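For orientation, a master-equation system of the type studied here can be written in a generic notation of our own (production at rate k only in the active gene state, protein degradation at rate ρ, and switching rates h n and f); the generating functions g(z) = Σ_n α_n z^n, and the analogue for β_n, turn this system into coupled partial differential equations:

```latex
% Generic binary self-regulating gene (our notation, for illustration):
% alpha_n (beta_n) is the probability of n proteins with the gene
% active (inactive).
\frac{d\alpha_n}{dt} = k\,(\alpha_{n-1}-\alpha_n)
  + \rho\,\big[(n+1)\,\alpha_{n+1} - n\,\alpha_n\big]
  - h\,n\,\alpha_n + f\,\beta_n ,
\qquad
\frac{d\beta_n}{dt} = \rho\,\big[(n+1)\,\beta_{n+1} - n\,\beta_n\big]
  + h\,n\,\alpha_n - f\,\beta_n .
```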

Relevance: 10.00%

Abstract:

Background: Population antimicrobial use may influence resistance emergence. Resistance is an ecological phenomenon due to potential transmissibility. We investigated spatial and temporal patterns of ciprofloxacin (CIP) population consumption related to E. coli resistance emergence and dissemination in a major Brazilian city. A total of 4,372 urinary tract infection E. coli cases, 723 of them CIP resistant, were identified in 2002 from two outpatient centres. Cases were address-geocoded in a digital map. Raw CIP consumption data were transformed into usage density in DDDs (defined daily doses) by determining the influence zones of CIP selling points. A stochastic model coupled with a Geographical Information System was applied to relate resistance to usage density and to detect city areas of high/low resistance risk. Results: Emergence of an E. coli CIP-resistant cluster was detected and significantly related to a usage density of 5 to 9 CIP DDDs. There were clustered hot-spots and a significant global spatial variation in the residual resistance risk after allowing for usage density. Conclusions: A usage density of 5-9 CIP DDDs per 1,000 inhabitants within the same influence zone was the resistance-triggering level. This level led to E. coli resistance clustering, showing that individual resistance emergence and dissemination were affected by population antimicrobial consumption.

Relevance: 10.00%

Abstract:

Large-conductance Ca(2+)-activated K(+) (BK) channels play a fundamental role in modulating membrane potential in many cell types. The gating of BK channels and its modulation by Ca(2+) and voltage have been the subject of intensive research over almost three decades, yielding several of the most complicated kinetic mechanisms ever proposed. These mechanisms are characterized by a large number of open and closed states arranged in two planes, named tiers: transitions between states in the same plane are cooperative and modulated by Ca(2+), while transitions across planes are highly concerted and voltage-dependent. Here we reexamine the validity of the two-tiered hypothesis, restricting attention to the modulation by Ca(2+). Large single-channel data sets at five Ca(2+) concentrations were analyzed simultaneously from a Bayesian perspective using hidden Markov models and Markov-chain Monte Carlo stochastic integration techniques. Our results support a dramatic reduction in model complexity, favoring a simple mechanism, derived from the Monod-Wyman-Changeux allosteric model for homotetramers, that explains the Ca(2+) modulation of the gating process. This model differs from the standard Monod-Wyman-Changeux scheme in that it distinguishes whether two Ca(2+) ions are bound to adjacent or diagonal subunits of the tetramer.
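For comparison, the open probability of the standard Monod-Wyman-Changeux model for a homotetramer, from which the favored mechanism departs by distinguishing adjacent from diagonal occupancy, has the familiar form (L is the closed/open equilibrium constant at zero Ca(2+); K_O and K_C are the dissociation constants of the open and closed conformations):

```latex
P_{\mathrm{open}}([\mathrm{Ca}]) =
  \frac{\big(1 + [\mathrm{Ca}]/K_O\big)^{4}}
       {\big(1 + [\mathrm{Ca}]/K_O\big)^{4}
        + L\,\big(1 + [\mathrm{Ca}]/K_C\big)^{4}} .
```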