970 results for Generalized Pareto Distribution
Abstract:
The generalized Langevin equation (GLE) method, as developed previously [L. Stella et al., Phys. Rev. B 89, 134303 (2014)], is used to calculate the dissipative dynamics of systems described at the atomic level. The GLE scheme goes beyond the commonly used bilinear coupling between the central system and the bath, and permits us to have a realistic description of both the dissipative central system and its surrounding bath. We show how to obtain the vibrational properties of a realistic bath and how to convey such properties into an extended Langevin dynamics by the use of the mapping of the bath vibrational properties onto a set of auxiliary variables. Our calculations for a model of a Lennard-Jones solid show that our GLE scheme provides a stable dynamics, with the dissipative/relaxation processes properly described. The total kinetic energy of the central system always thermalizes toward the expected bath temperature, with appropriate fluctuations around the mean value. More importantly, we obtain a velocity distribution for the individual atoms in the central system which follows the expected canonical distribution at the corresponding temperature. This confirms that both our GLE scheme and our mapping procedure onto an extended Langevin dynamics provide the correct thermostat. We also examine the velocity autocorrelation functions and compare our results with those from more conventional Langevin dynamics.
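Not the GLE scheme of the paper itself, the sketch below is a minimal Markovian Langevin thermostat for a single harmonic degree of freedom (the memoryless, bilinear-coupling limit that a GLE with auxiliary bath variables generalizes); it only illustrates how the kinetic energy thermalizes toward the target temperature. All parameter values and the BAOAB-style integrator are illustrative choices, not taken from the paper.

```python
# Minimal Markovian Langevin thermostat for a 1D harmonic oscillator
# (the memoryless limit that a GLE with auxiliary bath variables generalizes).
# All parameters are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

kB_T = 1.0      # target bath temperature (k_B T), arbitrary units
gamma = 0.5     # friction coefficient
m = 1.0         # particle mass
k = 1.0         # harmonic spring constant
dt = 0.01
n_steps = 200_000

x, v = 1.0, 0.0
kinetic = np.empty(n_steps)

# BAOAB splitting: half kick, half drift, exact Ornstein-Uhlenbeck step,
# half drift, half kick
c1 = np.exp(-gamma * dt)
c2 = np.sqrt(kB_T / m * (1.0 - c1**2))

for i in range(n_steps):
    v += 0.5 * dt * (-k * x) / m
    x += 0.5 * dt * v
    v = c1 * v + c2 * rng.standard_normal()   # fluctuation-dissipation balance
    x += 0.5 * dt * v
    v += 0.5 * dt * (-k * x) / m
    kinetic[i] = 0.5 * m * v**2

# In equilibrium <E_kin> should approach kB_T / 2 per degree of freedom
print("mean kinetic energy:", kinetic[n_steps // 2:].mean(), "target:", 0.5 * kB_T)
```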
Abstract:
Master's dissertation, Marine Biology, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2015
Abstract:
We generalize the concept of systematic risk to a broad class of risk measures potentially accounting for high distribution moments, downside risk, rare disasters, as well as other risk attributes. We offer two different approaches. First is an equilibrium framework generalizing the Capital Asset Pricing Model, two-fund separation, and the security market line. Second is an axiomatic approach resulting in a systematic risk measure as the unique solution to a risk allocation problem. Both approaches lead to similar results extending the traditional beta to capture multiple dimensions of risk. The results lend themselves naturally to empirical investigation.
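For orientation, the classical CAPM quantities that the paper's systematic risk measures generalize are the standard beta and the security market line (textbook definitions, not the article's generalized formulation):

```latex
% Classical CAPM beta and security market line extended by the generalized
% systematic-risk measures (standard definitions, not the paper's formulation).
\[
  \beta_i \;=\; \frac{\operatorname{Cov}(r_i, r_m)}{\operatorname{Var}(r_m)},
  \qquad
  \mathbb{E}[r_i] - r_f \;=\; \beta_i \,\bigl(\mathbb{E}[r_m] - r_f\bigr),
\]
% where r_i is the return on asset i, r_m the market return and r_f the
% risk-free rate.
```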
Abstract:
This paper presents a new and efficient methodology for distribution network reconfiguration integrated with optimal power flow (OPF), based on a Benders decomposition approach. The objective is to minimize power losses and balance load among feeders, subject to the following constraints: branch capacity limits, minimum and maximum power limits of substations or distributed generators, minimum deviation of bus voltages, and radial operation of the network. The Generalized Benders decomposition algorithm is applied to solve the problem. The formulation is split into two stages. The first is the Master problem, formulated as a mixed-integer non-linear programming problem; this stage determines the radial topology of the distribution network. The second is the Slave problem, formulated as a non-linear programming problem; this stage checks the feasibility of the Master problem solution by means of an OPF and provides the information needed to formulate the linear Benders cuts that connect the two problems. The model is programmed in GAMS. The effectiveness of the proposal is demonstrated through two examples taken from the literature.
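The Master/Slave structure described above follows the generic Benders pattern. The sketch below runs that loop on a deliberately tiny, invented single-variable instance (one binary decision plus one continuous subproblem), not on the network-reconfiguration OPF model or the paper's GAMS code; it only illustrates how slave duals turn into master cuts.

```python
# Toy Benders loop: a binary master decision y and a continuous slave
#   min 2x  s.t.  x >= 4 - 3y,  x >= 0.
# The instance is invented purely to show the master / slave / cut pattern.

def solve_slave(y):
    """Return the slave cost and the dual of the coupling constraint."""
    rhs = 4 - 3 * y
    x = max(0.0, rhs)
    cost = 2.0 * x
    dual = 2.0 if rhs > 0 else 0.0   # shadow price of x >= 4 - 3y
    return cost, dual

cuts = []                            # each cut: eta >= dual * (4 - 3y)
upper = float("inf")

for iteration in range(10):
    # Master: enumerate the binary decision (a MINLP solver in a real problem)
    best = None
    for y in (0, 1):
        eta = max([0.0] + [d * (4 - 3 * y) for d in cuts])
        obj = 3 * y + eta            # fixed cost 3y plus estimated slave cost
        if best is None or obj < best[0]:
            best = (obj, y)
    lower, y = best

    slave_cost, dual = solve_slave(y)          # OPF-like feasibility/cost check
    upper = min(upper, 3 * y + slave_cost)
    if upper - lower < 1e-9:                   # bounds have met: optimal
        break
    cuts.append(dual)                          # Benders optimality cut

print("optimal y =", y, "total cost =", upper)
```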
Abstract:
This paper proposes a methodology to increase the probability of delivering power to any load point through the identification of new investments. The methodology uses a fuzzy set approach to model the uncertainty of outage parameters, load and generation. A DC fuzzy multicriteria optimization model, considering the Pareto front and based on mixed-integer non-linear programming, is developed in order to identify adequate investments in distribution network components that increase the probability of delivering power to all customers in the distribution network at the minimum possible cost for the system operator, while minimizing the cost of non-supplied energy. To illustrate the application of the proposed methodology, the paper includes a case study considering a 33-bus distribution network.
Abstract:
Considerable research effort has been devoted to improving treatment outcomes for lung cancers. The study of the deformation of the patient's anatomy caused by pulmonary ventilation is at the heart of the radiation-oncology treatment planning process. Using four-dimensional computed tomography (4DCT) images, a dosimetric simulation can be computed on the 10 image sets of the 4DCT. A method must be employed to recombine the radiation dose calculated on the 10 anatomies, each representing a phase of the respiratory cycle. Deformable image registration (DIR), a digital image processing method, generates nine deformation vector fields that map nine image sets onto a reference set, usually corresponding to the deep-exhalation phase of the respiratory cycle. The objective of this project is to establish a method for generating deformation fields using DIR, together with a method for validating their accuracy. To this end, an automatic segmentation method based on surface deformation was created. This algorithm yields a surface deformation field that describes the motion of the lung envelope. A volumetric interpolation is then applied within the lung volume to approximate the internal deformation of the lungs. Finally, a graph representation of the internal lung vasculature was developed to allow validation of the deformation field. In 15 patients, a volume overlap error of 7.6 ± 2.5 [%] / 6.8 ± 2.1 [%] and a relative volume difference of 6.8 ± 2.4 [%] / 5.9 ± 1.9 [%] were calculated for the left and right lung, respectively. A mean symmetric distance of 0.8 ± 0.2 [mm] / 0.8 ± 0.2 [mm], a root-mean-square symmetric distance of 1.2 ± 0.2 [mm] / 1.3 ± 0.3 [mm] and a maximum symmetric distance of 7.7 ± 2.4 [mm] / 10.2 ± 5.2 [mm] were also calculated for the left and right lung, respectively. Finally, 320 ± 51 bifurcations were detected in the right lung of a patient, namely 92 ± 10 and 228 ± 45 bifurcations in the upper and lower portions, respectively. We were able to obtain the deformation fields required for dose recombination in radiation-oncology treatment planning using the hierarchical surface deformation method, and we were able to detect vascular bifurcations for the validation of these deformation fields.
Abstract:
This thesis addresses two main themes. The first concerns the study of generalized Apollonian packings of circles and spheres. Generalizations of the classical Apollonian packings, whose study goes back to ancient Greece, these objects have emerged as particularly attractive in number theory. This thesis studies the set of curvatures (the reciprocals of the radii) of the circles or spheres in such packings. Under suitable conditions, these curvatures turn out to all be integers. We show that they satisfy a partial local-global principle, count the number of circles with curvature smaller than a given quantity, and also study prime curvatures. The second theme concerns the angular distribution of ideals (or rather, here, of ideal numbers) of imaginary quadratic number fields (which can be viewed as the distribution of integer-coordinate points on ellipses). We show that the discrepancy of the set of angles of integral ideal numbers of given norm is small, and we also study the problem of bounded gaps between primes of imaginary quadratic extensions lying in sectors.
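The integrality phenomenon mentioned above stems from Descartes' circle theorem, whose standard statement (not specific to this thesis) is:

```latex
% Descartes' circle theorem for four mutually tangent circles with
% curvatures k_1, ..., k_4 (standard statement, not specific to the thesis):
\[
  (k_1 + k_2 + k_3 + k_4)^2 \;=\; 2\,(k_1^2 + k_2^2 + k_3^2 + k_4^2),
\]
% which can be solved for the fourth curvature:
\[
  k_4 \;=\; k_1 + k_2 + k_3 \pm 2\sqrt{k_1 k_2 + k_2 k_3 + k_3 k_1}.
\]
% If the initial four curvatures are integers and the square root is an
% integer, every curvature generated by repeatedly swapping one circle for
% its mirror image is again an integer, which is how integral Apollonian
% packings arise.
```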
Abstract:
In this paper, we study some dynamic generalized information measures between a true distribution and an observed (weighted) distribution, which are useful in life-length studies. Further, some bounds and inequalities related to these measures are also studied.
Abstract:
Many emerging Internet applications, such as TV over the Internet, radio over the Internet and multi-point video streaming, among others, have resource requirements of the following kinds: consumed bandwidth, end-to-end delay, packet loss rate, etc. It is therefore necessary to formulate a proposal that specifies and provides, for this type of application, the resources needed for proper operation. In this thesis, we propose a multi-objective traffic-engineering scheme that uses different distribution trees for many multicast flows. In this case, we use a multipath approach for each egress node, thereby obtaining a multi-tree approach and in this way creating different multicast trees. Moreover, our proposal determines the fraction of the traffic split across the multiple trees. The proposal can be applied in MPLS networks by establishing explicit routes for multicast events. In the first instance, the objective is to combine the following weighted objectives into a single aggregated metric: maximum link utilization, hop count, total consumed bandwidth and total end-to-end delay. We have formulated this multi-objective function (the MHDB-S model), and the results obtained show that several weighted objectives are reduced and the maximum link utilization is minimized. The problem is NP-hard; therefore, an algorithm is proposed to optimize the different objectives. The behaviour obtained with this algorithm is similar to that obtained with the model. Normally, during multicast transmission, egress nodes can leave or join the tree, and for this reason we also propose in this thesis a multi-objective traffic-engineering scheme using different trees for dynamic multicast groups (in which the egress nodes can change during the lifetime of the connection). If a multicast tree is recomputed from scratch, considerable CPU time could be consumed and, in addition, all communications using the multicast tree would be temporarily interrupted. To alleviate these drawbacks, we propose an optimization model (the dynamic MHDB-D model) that reuses the multicast trees previously computed with the static MHDB-S model, adding new egress nodes. Using the weighted-sum method to solve the analytical model is not necessarily correct, because the solution space may be non-convex and, for this reason, some solutions may not be found. In addition, other types of objectives have been considered in different research works. For the reasons mentioned above, a new model called GMM is proposed, and to solve it a new algorithm based on multi-objective evolutionary algorithms (MOEA) is proposed. This algorithm is inspired by the Strength Pareto Evolutionary Algorithm (SPEA). To address the dynamic case with this generalized model, we have proposed a new dynamic model and a computational solution using probabilistic Breadth-First Search (BFS). Finally, to evaluate our proposed optimization scheme, we ran different tests and simulations.
The main contributions of this thesis are the taxonomy, the multi-objective optimization models for the static and dynamic cases of multicast transmission (MHDB-S and MHDB-D), and the algorithms that provide computational solutions to these models, together with the generalized models for the static and dynamic cases (GMM and dynamic GMM) and the computational proposals that solve them using MOEA and probabilistic BFS.
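As a small illustration of the Pareto machinery underlying an SPEA-style algorithm (not the thesis implementation), the sketch below keeps the nondominated candidates under the four minimized objectives named above: maximum link utilization, hop count, total consumed bandwidth and end-to-end delay. The candidate trees and their objective values are invented.

```python
# Nondominated filtering for minimized objectives, the basic building block of
# SPEA-style multi-objective evolutionary algorithms.  Candidate trees and
# their objective vectors (max link utilization, hop count, bandwidth, delay)
# are invented for illustration.
from typing import Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if a is at least as good as b everywhere and strictly better once."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points: list[Sequence[float]]) -> list[Sequence[float]]:
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

candidates = [
    (0.70, 12, 340.0, 48.0),   # tree A
    (0.55, 14, 360.0, 52.0),   # tree B
    (0.80, 11, 330.0, 45.0),   # tree C
    (0.90, 15, 400.0, 60.0),   # tree D (dominated by A)
]
print(pareto_front(candidates))   # keeps A, B and C
```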
Abstract:
This article introduces generalized beta-generated (GBG) distributions. Sub-models include all classical beta-generated, Kumaraswamy-generated and exponentiated distributions. They are maximum entropy distributions under three intuitive conditions, which show that the classical beta generator skewness parameters only control tail entropy and an additional shape parameter is needed to add entropy to the centre of the parent distribution. This parameter controls skewness without necessarily differentiating tail weights. The GBG class also has tractable properties: we present various expansions for moments, generating function and quantiles. The model parameters are estimated by maximum likelihood and the usefulness of the new class is illustrated by means of some real data sets.
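For orientation, the classical beta-generated construction that the GBG class extends can be written as follows; the third GBG shape parameter enters, up to notational differences with the article, as a power of the parent CDF.

```latex
% Classical beta-generated CDF built from a parent CDF G(x) with shape
% parameters a, b > 0 (the construction the GBG class extends):
\[
  F(x) \;=\; I_{G(x)}(a,b)
        \;=\; \frac{1}{B(a,b)} \int_0^{G(x)} t^{a-1} (1-t)^{b-1}\, dt .
\]
% The GBG family adds a third shape parameter c > 0 which, up to notational
% differences with the article, acts through the power G(x)^c and is the
% parameter said to add entropy to the centre of the parent distribution.
```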
Abstract:
Let θ denote the level of quality inherent in a food product that is delivered to some terminal market. In this paper, I characterize allocations over θ and provide an economic rationale for regulating safety and quality standards in the food system. Zusman and Bockstael investigate the theoretical foundations for imposing standards and stress the importance of providing a tractable conceptual foundation. Despite a wealth of contributions that are mainly empirical (for reviews of these works see, respectively, Caswell and Antle), there have been relatively few attempts to model formally the linkages between farm and food markets when food quality and consumer safety are at issue. Here, I attempt to provide such a framework, building on key contributions in the theoretical literature and linking them in a simple model of quality determination in a vertically related marketing channel. The food-marketing model is due to Gardner. Spence provides a foundation for Pareto-improving intervention in a deterministic model of quality provision, and Leland, building on the classic paper by Akerlof, investigates licensing and minimum standards when the information structure is incomplete. Linking these ideas in a satisfactory model of the food markets is the main objective of the paper.
Abstract:
This paper proposes a method for describing the distribution of observed temperatures on any day of the year such that the distribution and summary statistics of interest derived from the distribution vary smoothly through the year. The method removes the noise inherent in calculating summary statistics directly from the data thus easing comparisons of distributions and summary statistics between different periods. The method is demonstrated using daily effective temperatures (DET) derived from observations of temperature and wind speed at De Bilt, Holland. Distributions and summary statistics are obtained from 1985 to 2009 and compared to the period 1904–1984. A two-stage process first obtains parameters of a theoretical probability distribution, in this case the generalized extreme value (GEV) distribution, which describes the distribution of DET on any day of the year. Second, linear models describe seasonal variation in the parameters. Model predictions provide parameters of the GEV distribution, and therefore summary statistics, that vary smoothly through the year. There is evidence of an increasing mean temperature, a decrease in the variability in temperatures mainly in the winter and more positive skew, more warm days, in the summer. In the winter, the 2% point, the value below which 2% of observations are expected to fall, has risen by 1.2 °C; in the summer the 98% point has risen by 0.8 °C. Medians have risen by 1.1 and 0.9 °C in winter and summer, respectively. The method can be used to describe distributions of future climate projections and other climate variables. Further extensions to the methodology are suggested.
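A minimal sketch of the two-stage idea, on synthetic data rather than the De Bilt record: fit a GEV to the values observed on each calendar day across years, then smooth each fitted parameter through the year with a first-order harmonic regression. The parameter values and the harmonic order are illustrative assumptions, not the paper's choices.

```python
# Two-stage sketch: (1) fit a GEV to the values observed on each calendar day
# across all years, (2) smooth each fitted parameter through the year with a
# first-order harmonic (sin/cos) regression.  The data below are synthetic;
# the study itself used daily effective temperatures at De Bilt.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_years, n_days = 30, 365
doy = np.arange(n_days)

# Synthetic "daily effective temperatures": seasonal cycle plus GEV-like noise
seasonal = 10.0 - 8.0 * np.cos(2 * np.pi * doy / 365.0)
data = seasonal + stats.genextreme.rvs(c=0.1, scale=2.0,
                                       size=(n_years, n_days), random_state=rng)

# Stage 1: per-calendar-day GEV fits (shape, location, scale)
params = np.array([stats.genextreme.fit(data[:, d]) for d in range(n_days)])

# Stage 2: harmonic linear model for each parameter series
X = np.column_stack([np.ones(n_days),
                     np.sin(2 * np.pi * doy / 365.0),
                     np.cos(2 * np.pi * doy / 365.0)])
coefs, *_ = np.linalg.lstsq(X, params, rcond=None)
smooth_params = X @ coefs          # smoothly varying (shape, loc, scale) per day

# Any summary statistic now varies smoothly, e.g. the 98% point on each day
q98 = stats.genextreme.ppf(0.98, c=smooth_params[:, 0],
                           loc=smooth_params[:, 1], scale=smooth_params[:, 2])
print(q98[:5])
```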
Abstract:
Kumaraswamy [Generalized probability density-function for double-bounded random-processes, J. Hydrol. 46 (1980), pp. 79-88] introduced a distribution for double-bounded random processes with hydrological applications. For the first time, based on this distribution, we describe a new family of generalized distributions (denoted with the prefix `Kw`) to extend the normal, Weibull, gamma, Gumbel and inverse Gaussian distributions, among several well-known distributions. Some special distributions in the new family, such as the Kw-normal, Kw-Weibull, Kw-gamma, Kw-Gumbel and Kw-inverse Gaussian distributions, are discussed. We express the ordinary moments of any Kw generalized distribution as linear functions of probability weighted moments (PWMs) of the parent distribution. We also obtain the ordinary moments of order statistics as functions of PWMs of the baseline distribution. We use the method of maximum likelihood to fit the distributions in the new class and illustrate the potentiality of the new model with an application to real data.
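A minimal sketch of the Kw construction, assuming a Weibull parent and arbitrary parameter values: for a parent CDF G, the Kw-G CDF is F(x) = 1 - (1 - G(x)^a)^b, so sampling reduces to an inverse transform through the parent quantile function.

```python
# Kw-G construction with a Weibull parent: F(x) = 1 - (1 - G(x)**a)**b.
# Sampling by inverse transform; parameter values are arbitrary illustrations.
import numpy as np
from scipy import stats

a, b = 2.0, 3.0                     # Kumaraswamy generator shape parameters
parent = stats.weibull_min(c=1.5)   # parent distribution G (Weibull, shape 1.5)

def kw_cdf(x):
    return 1.0 - (1.0 - parent.cdf(x) ** a) ** b

def kw_rvs(size, rng):
    u = rng.uniform(size=size)
    g = (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)
    return parent.ppf(g)            # apply G^{-1} to the transformed uniform

rng = np.random.default_rng(0)
sample = kw_rvs(100_000, rng)

# Quick check: the empirical CDF at a point should match kw_cdf
x0 = 1.0
print((sample <= x0).mean(), kw_cdf(x0))
```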
Abstract:
In this paper, the generalized log-gamma regression model is modified to allow the possibility that long-term survivors may be present in the data. This modification leads to a generalized log-gamma regression model with a cure rate, encompassing, as special cases, the log-exponential, log-Weibull and log-normal regression models with a cure rate typically used to model such data. The models attempt to simultaneously estimate the effects of explanatory variables on the timing acceleration/deceleration of a given event and the surviving fraction, that is, the proportion of the population for which the event never occurs. The normal curvatures of local influence are derived under some usual perturbation schemes and two martingale-type residuals are proposed to assess departures from the generalized log-gamma error assumption as well as to detect outlying observations. Finally, a data set from the medical area is analyzed.
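The "surviving fraction" can be made concrete with the standard mixture cure-rate formulation, written generically below; the article embeds such a fraction in a generalized log-gamma regression with covariates, which is not reproduced here.

```latex
% Standard mixture cure-rate survival function: a fraction \pi of the
% population never experiences the event (generic formulation; the article
% couples it with generalized log-gamma errors and explanatory variables).
\[
  S_{\mathrm{pop}}(t) \;=\; \pi \;+\; (1 - \pi)\, S_0(t),
  \qquad 0 \le \pi \le 1,
\]
% where S_0(t) is the survival function of the susceptible individuals and
% \lim_{t\to\infty} S_{\mathrm{pop}}(t) = \pi is the cure fraction.
```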
Abstract:
We introduce in this paper a new class of discrete generalized nonlinear models to extend the binomial, Poisson and negative binomial models to cope with count data. This class of models includes some important models such as log-nonlinear models, logit, probit and negative binomial nonlinear models, generalized Poisson and generalized negative binomial regression models, among other models, which enables the fitting of a wide range of models to count data. We derive an iterative process for fitting these models by maximum likelihood and discuss inference on the parameters. The usefulness of the new class of models is illustrated with an application to a real data set.
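Not the authors' iterative fitting process, the sketch below fits one simple member of such a class, a Poisson model with a nonlinear mean, by direct maximum likelihood on synthetic counts; the model form and parameter values are assumptions made for illustration.

```python
# Minimal maximum-likelihood fit of a Poisson model with a nonlinear mean,
#   mu_i = exp(b0 + b1 * x_i**b2),
# on synthetic count data.  One simple member of the kind of discrete
# nonlinear model class described above, not the authors' iterative scheme.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(42)
x = rng.uniform(0.5, 3.0, size=500)
true_beta = np.array([0.2, 0.8, 1.3])
mu_true = np.exp(true_beta[0] + true_beta[1] * x ** true_beta[2])
y = rng.poisson(mu_true)

def neg_log_lik(beta):
    mu = np.exp(beta[0] + beta[1] * x ** beta[2])
    # Poisson log-likelihood: y log(mu) - mu - log(y!)
    return -np.sum(y * np.log(mu) - mu - gammaln(y + 1))

res = minimize(neg_log_lik, x0=np.array([0.0, 0.5, 1.0]), method="BFGS")
print("estimated parameters:", res.x)   # should be close to [0.2, 0.8, 1.3]
```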