973 results for Gaussian Probability Distribution


Relevance:

80.00%

Publisher:

Abstract:

Mobile and wireless networks have long exploited mobility predictions, focused on predicting the future location of a given user, to perform more efficient network resource management. In this paper, we present a new approach in which we provide predictions as a probability distribution of the likelihood of moving to a set of future locations. This approach gives wireless services a greater amount of knowledge and enables them to perform more effectively. We present a framework for the evaluation of this new type of predictor, and develop two new predictors, HEM and G-Stat. We evaluate our predictors' accuracy in predicting future cells for mobile users using two large geolocation data sets, from MDC [11], [12] and Crawdad [13]. We show that our predictors can predict successfully, with an average inaccuracy as low as 2.2% in certain scenarios.
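
The abstract does not detail HEM or G-Stat, but the core idea of returning a probability distribution over future cells can be sketched with a minimal empirical first-order Markov predictor; everything below (class name, data) is a hypothetical illustration, not the paper's method.

```python
from collections import Counter, defaultdict

class EmpiricalCellPredictor:
    """Hypothetical first-order Markov predictor: for each current cell,
    estimate a probability distribution over the next cell from history."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def fit(self, cell_sequence):
        # Count observed transitions between consecutive cells.
        for current, nxt in zip(cell_sequence, cell_sequence[1:]):
            self.transitions[current][nxt] += 1

    def predict_distribution(self, current_cell):
        # Normalize counts into a probability distribution over next cells.
        counts = self.transitions[current_cell]
        total = sum(counts.values())
        return {cell: n / total for cell, n in counts.items()} if total else {}

# Example: a user's observed cell trace.
predictor = EmpiricalCellPredictor()
predictor.fit(["A", "B", "A", "C", "A", "B"])
print(predictor.predict_distribution("A"))  # {'B': 0.666..., 'C': 0.333...}
```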

Relevance:

80.00%

Publisher:

Abstract:

This thesis builds a framework for evaluating downside risk from multivariate data via a special class of risk measures (RM). A distinctive feature of the analysis is that it avoids strong assumptions on the data distribution and is oriented towards the data most critical in risk management: those with asymmetries and heavy tails. At the same time, under typical assumptions, such as ellipticity of the data probability distribution, conformity with classical methods is shown. The constructed class of RM is a multivariate generalization of the coherent distortion RM and possesses properties valuable to a risk manager. The design of the framework is twofold. The first part contains new computational geometry methods for high-dimensional data. The developed algorithms demonstrate the computability of the geometric concepts used to construct the RM; these concepts add visual intuition and simplify the interpretation of the RM. The second part develops models for applying the framework to practical problems. The spectrum of applications ranges from robust portfolio selection to broader areas, such as stochastic conic optimization with risk constraints and supervised machine learning.
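
As a point of reference for the distortion RM mentioned above, the sketch below computes a classical univariate coherent distortion risk measure from an empirical sample as a weighted sum of order statistics, with weights taken from increments of a concave distortion function; it is not the thesis's multivariate, geometry-based generalization.

```python
import numpy as np

def distortion_rm(losses, g):
    """Empirical distortion risk measure: a weighted sum of order statistics,
    with weights given by increments of the distortion function g applied
    to the empirical survival probabilities."""
    x = np.sort(np.asarray(losses, dtype=float))   # ascending losses
    n = x.size
    i = np.arange(1, n + 1)
    w = g((n - i + 1) / n) - g((n - i) / n)        # weights sum to g(1) - g(0) = 1
    return float(np.sum(w * x))

# Concave distortion yielding Expected Shortfall at level alpha (coherent).
alpha = 0.95
es_distortion = lambda u: np.minimum(u / (1 - alpha), 1.0)

rng = np.random.default_rng(0)
sample = rng.standard_t(df=4, size=100_000)        # heavy-tailed losses
print(distortion_rm(sample, es_distortion))        # ES_0.95 estimate
print(distortion_rm(sample, lambda u: u))          # g(u) = u recovers the mean
```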

Relevance:

80.00%

Publisher:

Abstract:

The objectives of this study were to evaluate the combined effects of soil biotic and abiotic factors on the incidence of Fusarium corn stalk rot during four annual incorporations of two types of sewage sludge into soil, in a 5-year field assay under tropical conditions, and to predict the effects of these variables on the disease. For each type of sewage sludge, the following treatments were included: control with the mineral fertilization recommended for corn; control without fertilization; sewage sludge based on the nitrogen concentration that provided the same amount of nitrogen as the mineral fertilizer treatment; and sewage sludge that provided two, four, and eight times the nitrogen concentration recommended for corn. Increasing dosages of both types of sewage sludge incorporated into soil resulted in increased corn stalk rot incidence, which was negatively correlated with corn yield. A global analysis highlighted the effect of the year of the experiment, followed by the sewage sludge dosage. The type of sewage sludge did not affect disease incidence. A multiple logistic model was fitted using a stepwise procedure, selecting a model that included three explanatory parameters for disease incidence: electrical conductivity, magnesium, and Fusarium population. In the selected model, the probability of higher disease incidence increased with an increase in each of these three explanatory parameters. When the explanatory parameters were compared, electrical conductivity had a dominant effect and was the main variable for predicting the probability distribution curves of Fusarium corn stalk rot after sewage sludge application to the soil.
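
A minimal sketch of the kind of multiple logistic model described above, using statsmodels on synthetic data; the variable ranges, units, and coefficients are invented stand-ins for the study's electrical conductivity, magnesium, and Fusarium population predictors.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical stand-ins for the study's three selected predictors.
rng = np.random.default_rng(1)
n = 200
ec = rng.uniform(0.5, 3.0, n)        # electrical conductivity (assumed units)
mg = rng.uniform(5.0, 25.0, n)       # magnesium (assumed units)
fus = rng.uniform(1e3, 1e5, n)       # Fusarium population (assumed CFU/g)

# Simulate disease incidence whose odds increase with all three predictors.
logit = -6.0 + 1.5 * ec + 0.08 * mg + 0.00002 * fus
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([ec, mg, fus]))
model = sm.Logit(y, X).fit(disp=False)
print(model.params)  # positive fitted signs mirror the paper's finding
```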

Relevance:

80.00%

Publisher:

Abstract:

This thesis investigates, for the first time and in depth, the statistical distributions of atmospheric surface variables and heat fluxes over the Mediterranean Sea. After retrieving a 30-year atmospheric analysis dataset, we characterize the spatial patterns of the probability distributions of the atmospheric variables relevant to ocean forcing: the wind components (U, V), wind amplitude, air temperature (T2M), dewpoint temperature (D2M), and mean sea-level pressure (MSL-P). The study reveals that a two-parameter PDF is not a good fit for T2M, D2M, MSL-P, and the wind components (U, V); a three-parameter skew-normal PDF is better suited, as it properly captures the asymmetric tails (skewness) of the data. After removing the large seasonal cycle, we show the quality of the fit and the geographic structure of the PDF parameters. The PDF parameters vary between regions; in particular, the shape parameter (connected to the asymmetric tails) and the scale parameter (connected to the spread of the distribution) cluster around two or more values, probably reflecting the different dynamics that produce the surface atmospheric fields in the Mediterranean basin. Moreover, using the atmospheric variables, we compute the air-sea heat fluxes for a 20-year period and estimate the net heat budget over the Mediterranean Sea. Interestingly, the higher-resolution analysis dataset yields a negative heat budget of -3 W/m2, which is within the acceptable range for closure of the Mediterranean Sea heat budget. The lower-resolution atmospheric reanalysis dataset (ERA5) does not satisfy heat budget closure, indicating that a minimum resolution of the atmospheric forcing is crucial for Mediterranean Sea dynamics. The PDF framework developed in this thesis will be the basis for a future ensemble forecasting system that will use the statistical distributions to create perturbations of the atmospheric forcing of the ocean.
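
Fitting a three-parameter skew-normal PDF of the kind used in the thesis is straightforward with SciPy; the sample below is synthetic and merely stands in for a deseasoned surface variable such as T2M.

```python
import numpy as np
from scipy import stats

# Illustrative only: fit a three-parameter skew-normal to a synthetic sample
# standing in for a deseasoned surface variable (e.g. T2M).
rng = np.random.default_rng(42)
sample = stats.skewnorm.rvs(a=4.0, loc=15.0, scale=3.0, size=10_000,
                            random_state=rng)

a, loc, scale = stats.skewnorm.fit(sample)   # shape, location, scale
print(f"shape={a:.2f}, loc={loc:.2f}, scale={scale:.2f}")

# A two-parameter Gaussian fit ignores the asymmetric tail.
mu, sigma = stats.norm.fit(sample)
print(f"Gaussian: mu={mu:.2f}, sigma={sigma:.2f}")
print("skewness of data:", stats.skew(sample).round(3))
```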

Relevance:

50.00%

Publisher:

Abstract:

In this thesis, likelihood depths, introduced by Mizera and Müller (2004), are used to develop (outlier-)robust estimators and tests for the unknown parameter of a continuous density function. The developed methods are then applied to three different distributions. For one-dimensional parameters, the likelihood depth of a parameter in the data set is computed as the minimum of the fraction of data points for which the derivative of the log-likelihood function with respect to the parameter is non-negative and the fraction for which this derivative is non-positive. The parameter with the greatest depth is thus the one for which both fractions are equal. This parameter is initially chosen as the estimator, since the likelihood depth is intended to measure how well a parameter fits the data set. Asymptotically, the parameter with the greatest depth is the one for which the probability that the derivative of the log-likelihood function with respect to the parameter is non-negative for an observation equals one half. If this does not hold for the underlying parameter, the estimator based on the likelihood depth is biased. This thesis shows how this bias can be corrected so that the corrected estimators are consistent. To develop tests for the parameter, the simplicial likelihood depth introduced by Müller (2005), which is a U-statistic, is used. It turns out that for the same distributions for which the likelihood depth yields biased estimators, the simplicial likelihood depth is an unbiased U-statistic. In particular, its asymptotic distribution is therefore known, and tests for various hypotheses can be formulated. For some hypotheses, however, the shift in depth leads to poor power of the corresponding test. Corrected tests are therefore introduced, and conditions are given under which they are consistent. The thesis consists of two parts. The first part presents the general theory of the estimators and tests and proves their consistency. The second part applies the theory to three different distributions: the Weibull distribution and the Gaussian and Gumbel copulas. This demonstrates how the methods of the first part can be used to derive (robust) consistent estimators and tests for the unknown parameter of a distribution. Overall, robust estimators and tests can be found for all three distributions using likelihood depths. On uncontaminated data, existing standard methods are sometimes superior, but the advantage of the new methods becomes apparent on contaminated data and data with outliers.
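
For a one-dimensional parameter, the depth defined above is simple to compute. The sketch below does so for the rate of an exponential distribution, an illustrative choice rather than one of the three distributions treated in the thesis; the ln 2 bias correction shown is specific to this example.

```python
import numpy as np

def likelihood_depth_exp(lam, x):
    """Likelihood depth of rate lam for an exponential sample x: the score
    d/d(lam) log f(x; lam) = 1/lam - x, so the depth is the minimum of the
    fractions of points with non-negative and non-positive score."""
    score = 1.0 / lam - x
    return min(np.mean(score >= 0), np.mean(score <= 0))

rng = np.random.default_rng(7)
x = rng.exponential(scale=1.0, size=10_000)   # true rate lam = 1

grid = np.linspace(0.2, 3.0, 561)
depths = [likelihood_depth_exp(lam, x) for lam in grid]
lam_max = grid[int(np.argmax(depths))]
print("maximum-depth estimate:", lam_max)               # ~1/median ~ 1.44, biased
print("bias-corrected estimate:", lam_max * np.log(2))  # ~ 1.0
```

The maximum-depth rate equals the reciprocal of the sample median, which converges to lam/ln 2 rather than lam; multiplying by ln 2 is exactly the kind of correction the thesis develops in general.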

Relevance:

40.00%

Publisher:

Abstract:

In random matrix theory, the Tracy-Widom (TW) distribution describes the behavior of the largest eigenvalue. We consider here two models in which TW undergoes transformations. In the first, disorder is introduced into the Gaussian ensembles by superimposing an external source of randomness. A competition between TW and a normal (Gaussian) distribution results, depending on the spread of the disorder. The second model consists of removing, at random, a fraction of the (correlated) eigenvalues of a random matrix. The usual formalism of Fredholm determinants extends naturally. A continuous transition from TW to the Weibull distribution, characteristic of extreme values of an uncorrelated sequence, is obtained.
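
The Tracy-Widom behavior and the effect of superimposed disorder are easy to probe numerically. The sketch below samples the largest eigenvalue of GOE matrices and adds a random scalar shift as a crude, purely illustrative stand-in for the external source of randomness in the first model.

```python
import numpy as np

def largest_goe_eigenvalue(n, rng):
    """Largest eigenvalue of an n x n GOE matrix."""
    a = rng.standard_normal((n, n))
    h = (a + a.T) / np.sqrt(2.0)          # symmetric, Gaussian entries
    return np.linalg.eigvalsh(h)[-1]      # eigvalsh sorts ascending

n, trials, sigma_disorder = 200, 500, 5.0
rng = np.random.default_rng(3)

clean = np.array([largest_goe_eigenvalue(n, rng) for _ in range(trials)])
# Illustrative disorder: an independent random scalar added to each draw,
# standing in for an external source of randomness on the ensemble.
disordered = clean + sigma_disorder * rng.standard_normal(trials)

print(f"clean mean {clean.mean():.2f} (soft edge near 2*sqrt(n) = {2*np.sqrt(n):.2f})")
print(f"clean std {clean.std():.3f} vs disordered std {disordered.std():.3f}")
```

As the disorder widens, the convolved fluctuations are dominated by the Gaussian shift, consistent with the competition described above.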

Relevance:

40.00%

Publisher:

Abstract:

This paper proposes a PSO-based approach to increase the probability of delivering power to any load point by identifying new investments in distribution energy systems. The statistical failure and repair data of distribution components are the main basis of the proposed methodology, which uses fuzzy-probabilistic modeling of the components' outage parameters. The fuzzy membership functions of the outage parameters of each component are based on statistical records. A Modified Discrete PSO optimization model is developed to identify adequate investments in distribution energy system components that increase the probability of delivering power to any customer in the distribution system at the minimum possible cost for the system operator. To illustrate the application of the proposed methodology, the paper includes a case study that considers a 180-bus distribution network.
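
The paper's Modified Discrete PSO is not specified in the abstract, so the sketch below shows only a generic binary PSO over candidate investment vectors, with an invented cost-plus-reliability-penalty objective; every numerical detail is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(11)
n_components, n_particles, iters = 12, 30, 100
cost = rng.uniform(1.0, 5.0, n_components)        # investment cost per component
risk_cut = rng.uniform(0.01, 0.05, n_components)  # outage-probability reduction (toy)

def objective(x):
    # Placeholder: investment cost plus a penalty on residual outage risk.
    residual = 0.30 - x @ risk_cut                 # toy baseline outage probability
    return x @ cost + 100.0 * max(residual, 0.0)

x = rng.integers(0, 2, (n_particles, n_components)).astype(float)
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = (rng.random(x.shape) < 1 / (1 + np.exp(-v))).astype(float)  # sigmoid rule
    f = np.array([objective(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("selected investments:", gbest.astype(int), "objective:", objective(gbest))
```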

Relevance:

40.00%

Publisher:

Abstract:

This paper proposes a methodology to increase the probability of delivering power to any load point through the identification of new investments. The methodology uses a fuzzy set approach to model the uncertainty of outage parameters, load, and generation. A DC fuzzy multicriteria optimization model, considering the Pareto front and based on mixed-integer non-linear programming, is developed to identify adequate investments in distribution network components that increase the probability of delivering power to all customers in the distribution network at the minimum possible cost for the system operator, while minimizing the cost of non-supplied energy. To illustrate the application of the proposed methodology, the paper includes a case study that considers a 33-bus distribution network.
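
The fuzzy modeling of outage parameters can be illustrated with a triangular fuzzy number built from statistical records; the failure-rate values below are invented, and the triangular shape is an assumption rather than necessarily the membership form used in the paper.

```python
import numpy as np

def triangular_membership(x, a, m, b):
    """Membership of x in a triangular fuzzy number (a, m, b):
    0 outside [a, b], 1 at the mode m, linear in between."""
    x = np.asarray(x, dtype=float)
    left = np.clip((x - a) / (m - a), 0.0, 1.0)
    right = np.clip((b - x) / (b - m), 0.0, 1.0)
    return np.minimum(left, right)

def alpha_cut(alpha, a, m, b):
    """Interval of parameter values with membership >= alpha."""
    return (a + alpha * (m - a), b - alpha * (b - m))

# Hypothetical failure rate (failures/year) from statistical records:
# optimistic value, most likely value, pessimistic value.
a, m, b = 0.05, 0.12, 0.30
print(triangular_membership([0.05, 0.12, 0.20], a, m, b))  # [0. 1. 0.556]
print(alpha_cut(0.8, a, m, b))  # parameter range retained at alpha = 0.8
```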

Relevance:

40.00%

Publisher:

Abstract:

A methodology to increase the probability of delivering power to any load point through the identification of new investments in distribution network components is proposed in this paper. The method minimizes the investment cost as well as the cost of energy not supplied in the network. A DC optimization model based on mixed-integer non-linear programming is developed, considering the Pareto front technique, to identify adequate investments in distribution network components that increase the probability of delivering power to any customer in the distribution system at the minimum possible cost for the system operator, while minimizing the cost of energy not supplied. A multi-objective problem is thus formulated. To illustrate the application of the proposed methodology, the paper includes a case study that considers a 180-bus distribution network.
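
For a finite set of candidate plans, the Pareto front technique mentioned above reduces to filtering out dominated solutions. A minimal sketch over hypothetical (investment cost, energy-not-supplied cost) pairs:

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated subset of (investment cost, ENS cost) pairs:
    a point is kept if no other point is <= in both objectives and < in one."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return pts[keep]

# Hypothetical (investment cost, energy-not-supplied cost) of candidate plans.
candidates = [(10, 8.0), (12, 5.5), (15, 5.0), (11, 9.0), (20, 4.9), (12, 6.0)]
print(pareto_front(candidates))  # drops (11, 9.0) and (12, 6.0)
```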

Relevance:

40.00%

Publisher:

Abstract:

Exact closed-form expressions are obtained for the outage probability of maximal ratio combining in η-μ fading channels with antenna correlation and co-channel interference. The scenario considered in this work assumes the joint presence of background white Gaussian noise and independent Rayleigh-faded interferers with arbitrary powers. The outage probability results are obtained through an appropriate generalization of the moment-generating function of the η-μ fading distribution, for which new closed-form expressions are provided.
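
The paper's exact closed-form expressions are not reproduced here, but outage probability in such a scenario can always be cross-checked by Monte Carlo. The sketch below simulates a single-branch η-μ (Format 1, integer μ) desired signal with Rayleigh-faded interferers and Gaussian noise; it omits maximal ratio combining and antenna correlation, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def eta_mu_power(eta, mu, n, rng, mean_power=1.0):
    """Signal power samples under eta-mu fading (Format 1, integer mu):
    mu clusters of in-phase/quadrature Gaussians with variance ratio eta."""
    var_y = mean_power / (mu * (1 + eta))
    var_x = eta * var_y
    x = rng.normal(0, np.sqrt(var_x), (n, mu))
    y = rng.normal(0, np.sqrt(var_y), (n, mu))
    return (x**2 + y**2).sum(axis=1)

n = 1_000_000
s = eta_mu_power(eta=0.5, mu=2, n=n, rng=rng, mean_power=10.0)
# Rayleigh fading => exponentially distributed interferer powers (means below).
interf = sum(p * rng.exponential(1.0, n) for p in [0.8, 0.5, 0.3])
noise = 0.1                     # AWGN power
sinr = s / (interf + noise)
gamma_th = 3.0                  # SINR threshold
print("outage probability ~", np.mean(sinr < gamma_th))
```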

Relevance:

40.00%

Publisher:

Abstract:

The speed and width of front solutions to reaction-dispersal models are analyzed both analytically and numerically. We perform our analysis for Laplace and Gaussian distribution kernels, for both delayed and non-delayed models. The results are discussed in terms of the characteristic parameters of the models.
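
One standard route to the front speed, for a non-delayed discrete-time reaction-dispersal (integrodifference) model chosen here purely for illustration, is the linear spreading-speed formula c* = min_{s>0} (1/s) ln(R0 M(s)), where M is the dispersal kernel's moment-generating function; delayed models modify this expression. A numerical sketch comparing Gaussian and Laplace kernels of equal variance:

```python
import numpy as np
from scipy.optimize import minimize_scalar

R0 = 1.5                          # net reproductive rate (assumed)
sigma = 1.0                       # Gaussian kernel standard deviation
b = 1.0 / np.sqrt(2)              # Laplace scale chosen for equal variance

M_gauss = lambda s: np.exp(0.5 * (sigma * s) ** 2)
M_laplace = lambda s: 1.0 / (1.0 - (b * s) ** 2)   # valid for s < 1/b

def speed(M, s_max):
    # Minimize (1/s) * ln(R0 * M(s)) over the admissible range of s.
    res = minimize_scalar(lambda s: np.log(R0 * M(s)) / s,
                          bounds=(1e-6, s_max), method="bounded")
    return res.fun

print("Gaussian kernel front speed:", speed(M_gauss, 10.0))
print("Laplace  kernel front speed:", speed(M_laplace, 1.0 / b - 1e-6))
```

At equal variance, the fatter-tailed Laplace kernel yields a faster front, one way the kernel's characteristic parameters enter the results.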

Relevance:

40.00%

Publisher:

Abstract:

We study the motion of an unbound particle under the influence of a random force modeled as Gaussian colored noise with an arbitrary correlation function. We derive exact equations for the joint and marginal probability density functions and find the associated solutions. We analyze in detail anomalous diffusion behaviors, along with the fractal structure of the particle's trajectories, and explore possible connections between the dynamical exponents of the variance and the fractal dimension of the trajectories.
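
A special case of this setting, with an exponential correlation function (Ornstein-Uhlenbeck noise), can be simulated directly; the sketch below measures the crossover of the variance exponent from ballistic to diffusive growth and is an illustration, not the paper's exact analytical treatment.

```python
import numpy as np

# Free particle driven by exponentially correlated Gaussian noise:
# dx/dt = xi(t), with Var[x(t)] ~ t^2 for t << tau (ballistic) crossing
# over to ~t (normal diffusion) for t >> tau.
rng = np.random.default_rng(9)
n_paths, n_steps, dt, tau, D = 2000, 4000, 0.01, 1.0, 1.0

xi = np.sqrt(D / tau) * rng.standard_normal(n_paths)  # stationary noise start
x = np.zeros(n_paths)
var = np.empty(n_steps)
rho = np.exp(-dt / tau)
for k in range(n_steps):
    # Exact OU update for the noise, then an Euler step for the particle.
    xi = rho * xi + np.sqrt(D / tau * (1 - rho**2)) * rng.standard_normal(n_paths)
    x += xi * dt
    var[k] = x.var()

t = dt * np.arange(1, n_steps + 1)
for t0, t1 in [(0.05, 0.2), (10.0, 40.0)]:
    m = (t >= t0) & (t <= t1)
    slope = np.polyfit(np.log(t[m]), np.log(var[m]), 1)[0]
    print(f"variance exponent on [{t0}, {t1}]: {slope:.2f}")  # ~2, then ~1
```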

Relevance:

40.00%

Publisher:

Abstract:

Object recognition is complicated by clutter, occlusion, and sensor error. Since pose hypotheses are based on image feature locations, these effects can lead to both false negatives and false positives. In a typical recognition algorithm, pose hypotheses are tested against the image, and a score is assigned to each hypothesis. We use a statistical model to determine the score distributions associated with correct and incorrect pose hypotheses, and we use binary hypothesis testing techniques to distinguish between them. Using this approach, we can compare algorithms and noise models, and automatically choose values for internal system thresholds so as to minimize the probability of making a mistake.
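
If the two score distributions are known (here assumed Gaussian, with invented parameters and prior), the threshold minimizing the probability of a mistake can be found numerically, as a minimal sketch of the hypothesis-testing step:

```python
import numpy as np
from scipy import stats

p_correct = 0.1                              # prior prob. a hypothesis is correct
correct = stats.norm(loc=0.8, scale=0.08)    # score dist., correct hypotheses
incorrect = stats.norm(loc=0.5, scale=0.12)  # score dist., incorrect hypotheses

thresholds = np.linspace(0.0, 1.0, 1001)
# Accept hypotheses scoring above the threshold; the error probability is the
# prior-weighted sum of missed correct hypotheses and accepted incorrect ones.
p_err = (p_correct * correct.cdf(thresholds)
         + (1 - p_correct) * incorrect.sf(thresholds))
best = thresholds[np.argmin(p_err)]
print(f"threshold {best:.3f}, error probability {p_err.min():.4f}")
```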