945 results for Probability Distribution Function
Abstract:
This paper proposes two new approaches for the sensitivity analysis of multiobjective design optimization problems whose performance functions are highly susceptible to small variations in the design variables and/or design environment parameters. In both methods, the less sensitive design alternatives are preferred over others during the multiobjective optimization process. In the first approach, the designer chooses the design variables and/or parameters that cause uncertainties, associates a robustness index with each design alternative, and adds each index as an objective function in the optimization problem. In the second approach, the designer must know, a priori, the interval of variation in the design variables or in the design environment parameters, because the designer will then accept the resulting interval of variation in the objective functions. The second method does not require any probability law for the uncontrollable variations. Finally, the authors give two illustrative examples to highlight the contributions of the paper.
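As a rough illustration of the first approach, the sketch below computes a hypothetical robustness index for a design alternative as the worst-case objective change under prescribed perturbations; the function, variables and perturbation sizes are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def robustness_index(f, x, delta):
    """Hypothetical robustness index: worst-case change in the performance
    function f under +/- delta perturbations of each design variable."""
    base = f(x)
    worst = 0.0
    for i in range(x.size):
        for sign in (-1.0, 1.0):
            xp = x.copy()
            xp[i] += sign * delta[i]
            worst = max(worst, abs(f(xp) - base))
    return worst

# Example: a design with two variables; in the paper's scheme the index
# would be appended to the objective vector and minimized alongside the
# original objectives.
f = lambda x: (x[0] - 1.0)**2 + 10.0 * np.sin(x[1])
print(robustness_index(f, np.array([1.2, 0.3]), np.array([0.05, 0.05])))
```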
Abstract:
Amphiphilic peptides, Pro-Glu-(Phe-Glu)n-Pro, Pro-Asp-(Phe-Asp)n-Pro, and Phe-Glu-(Phe-Glu)n-Phe, can be constructed from n alternating sequences of hydrophobic and hydrophilic amino acids such that they assemble into monolayers at the air-water interface. In biological systems, structures at the organic-aqueous interface can serve as a matrix for the crystallization of hydroxyapatite, a process that can be exploited in the treatment of osteoporosis. In the present work, computer simulations were employed to investigate the structures and the underlying interactions that govern the aggregation of these peptides at the microscopic level. Atomistic molecular dynamics simulations of single peptide strands show that they readily arrange at the air-water interface and are able to fold into β-turns, even for relatively short peptide lengths (n = 2). Rare events such as these conformational changes require advanced sampling techniques; here, replica exchange molecular dynamics was used to investigate the influence of the peptide sequence. The simulation results showed that peptides with shorter acidic side chains (Asp vs. Glu) adopted more extended conformations than those with longer side chains, which were able to reach the proline termini. Furthermore, the proline termini (Pro vs. Phe) proved necessary to obtain two-dimensional order within the aggregates. The peptide Pro-Asp-(Phe-Asp)n-Pro, which combines both of these properties, shows the most ordered behaviour and a small backbone twist, and is able to stabilize the formed aggregates through hydrogen bonds between its acidic side chains; it is therefore best suited for aggregation. This was further supported by assessing the stability of peptide aggregates set up to match experiment, as well as the propensity of individual peptides to self-assemble from initially disordered configurations. Since atomistic simulations are limited to small system sizes and relatively short time scales, a coarse-grained model was developed so that self-assembly can be studied on larger scales. Because self-assembly at the interface is of interest, existing coarse-graining methods were extended to derive non-bonded potentials for inhomogeneous systems. The method developed is analogous to iterative Boltzmann inversion, but builds the update for the interaction potential from the radial distribution function in a slab geometry and from the widths of the slab and of the interface. A compromise can thus be reached between the local liquid structure and the thermodynamic properties of the interface. The new method was demonstrated for a water slab and a methanol slab in vacuum, as well as for a single benzene molecule at the vacuum-water interface, an application of particular relevance in biology, where the thermodynamic and interfacial behaviour of a system often has to be captured in addition to its structural properties. On this basis, a coarse-grained model was parametrized via a fragment approach and the affinity of the peptide to the vacuum-water interface was tested.
Although the individual fragments reproduced both the structure and the probability distributions at the interface, the peptide as a whole diffused away from the interface. However, reparametrizing the non-bonded interactions for one of the backbone fragments within a trimer led the peptide to remain at the interface. This suggests that chain connectivity plays an important role in the behaviour of the peptide at the interface.
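For orientation, a minimal sketch of the standard iterative Boltzmann inversion update that the slab-geometry variant described above extends; the slab/interface-width correction itself is not shown, and the thermal prefactor is assumed to be kB*T in kJ/mol at 300 K.

```python
import numpy as np

KB_T = 2.494  # kJ/mol at 300 K (assumed temperature)

def ibi_update(V, g_current, g_target, eps=1e-12):
    """One standard iterative Boltzmann inversion step for a non-bonded
    potential V(r) tabulated on a grid: V <- V + kT * ln(g_current/g_target).
    The slab variant described above would add a further correction derived
    from the slab and interface widths; that term is omitted here."""
    return V + KB_T * np.log((g_current + eps) / (g_target + eps))
```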
Abstract:
The way mass is distributed in galaxies plays a major role in shaping their evolution across cosmic time. A galaxy's total mass is usually determined by tracing the motion of stars in its potential, which can be probed observationally by measuring stellar spectra at different distances from the galactic centre; the inferred kinematics are used to constrain dynamical models. A class of such models, commonly used to accurately determine the distribution of luminous and dark matter in galaxies, is that of equilibrium models. In this Thesis, a novel approach to the design of equilibrium dynamical models, in which the distribution function is an analytic function of the action integrals, is presented. Axisymmetric and rotating models are used to explain observations of a sample of nearby early-type galaxies in the Calar Alto Legacy Integral Field Area survey. Photometric and spectroscopic data for round and flattened galaxies are well fitted by the models, which are then used to derive the galaxies' total mass distribution and orbital anisotropy. The time evolution of massive early-type galaxies is also investigated with numerical models. Their structural properties (mass, size, velocity dispersion) are observed to evolve, on average, with redshift. In particular, they appear to be significantly more compact at higher redshift, at fixed stellar mass, so it is interesting to investigate what drives such evolution. This Thesis focuses on the role played by dark-matter haloes: their mass-size and mass-velocity dispersion correlations evolve similarly to the analogous correlations of ellipticals; at fixed halo mass, the haloes are more compact at higher redshift, similarly to massive galaxies; and a simple model, in which all of the galaxy's size and velocity-dispersion evolution is due to the cosmological evolution of the underlying halo population, reproduces the observed sizes and velocity dispersions of massive compact early-type galaxies up to a redshift of about 2.
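As background for the action-based equilibrium models described above, the standard relation between an action distribution function and the density it generates (a textbook identity, not a formula quoted from the Thesis; J(x, v) denotes the action integrals evaluated in the model potential):

```latex
% Density generated by a DF that is an analytic function of the actions,
% obtained by integrating over velocities at fixed position x:
\[
  \rho(\mathbf{x}) \;=\; \int f\bigl(\mathbf{J}(\mathbf{x},\mathbf{v})\bigr)\,\mathrm{d}^{3}v ,
\]
% fitting photometry and spectroscopy then amounts to constraining f
% together with the gravitational potential.
```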
Abstract:
Despite the widespread popularity of linear models for correlated outcomes (e.g. linear mixed models and time-series models), distribution diagnostic methodology remains relatively underdeveloped in this context. In this paper we present an easy-to-implement approach that lends itself to graphical displays of model fit. Our approach involves multiplying the estimated marginal residual vector by the Cholesky decomposition of the inverse of the estimated marginal variance matrix. Linear functions of the resulting "rotated" residuals are used to construct an empirical cumulative distribution function (ECDF), whose stochastic limit is characterized. We describe a resampling technique that serves as a computationally efficient parametric bootstrap for generating representatives of the stochastic limit of the ECDF. Through functionals, such representatives are used to construct global tests for the hypothesis of normal marginal errors. In addition, we demonstrate that the ECDF of the predicted random effects, as described by Lange and Ryan (1989), can be formulated as a special case of our approach. Thus, our method supports both omnibus and directed tests. Our method works well in a variety of circumstances, including models having independent units of sampling (clustered data) and models for which all observations are correlated (e.g., a single time series).
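A minimal sketch of the rotation step on simulated clustered data, assuming the marginal covariance is known (in practice it would be estimated); the whitened residuals' ECDF is then compared with the standard normal CDF:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy clustered data: m clusters of size k with a random intercept, so each
# cluster's marginal covariance is V = sigma2*I + tau2*J (J = all-ones).
m, k, sigma2, tau2 = 200, 5, 1.0, 0.5
V = sigma2 * np.eye(k) + tau2 * np.ones((k, k))

# Marginal residuals with covariance V (the true V stands in for estimates).
L = np.linalg.cholesky(V)
resid = rng.standard_normal((m, k)) @ L.T

# Rotate: with W the Cholesky factor of V^{-1}, W.T @ r has identity
# covariance, so the rotated residuals should look iid N(0, 1).
W = np.linalg.cholesky(np.linalg.inv(V))
rotated = (resid @ W).ravel()

# ECDF vs. N(0, 1) CDF; the sup distance is a crude global discrepancy.
x = np.sort(rotated)
ecdf = np.arange(1, x.size + 1) / x.size
print(f"sup |ECDF - Phi| = {np.max(np.abs(ecdf - stats.norm.cdf(x))):.4f}")
```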
Abstract:
Marshall's (1970) lemma is an analytical result which implies root-n consistency of the distribution function corresponding to the Grenander (1956) estimator of a non-increasing probability density. The present paper derives analogous results for the setting of convex densities on [0, ∞).
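For reference, the classical statement of Marshall's lemma in the monotone case (a textbook formulation, not quoted from this paper):

```latex
% Marshall's (1970) lemma, monotone case: if F is concave on [0, \infty)
% and \hat{F}_n is the least concave majorant of the empirical CDF F_n, then
\[
  \sup_{x \ge 0} \bigl| \hat{F}_n(x) - F(x) \bigr|
  \;\le\;
  \sup_{x \ge 0} \bigl| F_n(x) - F(x) \bigr| ,
\]
% so \hat{F}_n inherits the n^{-1/2} rate of F_n.
```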
Abstract:
Free-space optical (FSO) communication links can experience extreme signal degradation due to atmospheric-turbulence-induced spatial and temporal irradiance fluctuations (scintillation) in the laser wavefront. In addition, turbulence can cause the laser beam centroid to wander, resulting in power fading and sometimes complete loss of the signal. Spreading of the laser beam and jitter are also artifacts of atmospheric turbulence. To accurately predict the signal fading that occurs in a laser communication system, and to get a true picture of how this affects crucial performance parameters like the bit error rate (BER), it is important to analyze the probability density function (PDF) of the integrated irradiance fluctuations at the receiver. In addition, it is desirable to find a theoretical distribution that accurately models these fluctuations under all propagation conditions. The PDF of integrated irradiance fluctuations is calculated from numerical wave-optics simulations of a laser propagating through atmospheric turbulence, in order to investigate the evolution of the distribution as the aperture diameter is increased. The simulated distribution is compared to theoretical gamma-gamma and lognormal PDF models under a variety of scintillation regimes, from weak to very strong. Our results show that the gamma-gamma PDF provides a good fit to the simulated distribution for all aperture sizes studied, from weak through moderate scintillation. In strong scintillation, the gamma-gamma PDF is a better fit to the distribution for point-like apertures, while the lognormal PDF is a better fit for apertures the size of the atmospheric spatial coherence radius ρ0 or larger. In addition, the PDF of received power from a Gaussian laser beam that has been adaptively compensated at the transmitter before propagation to the receiver of an FSO link in the moderate scintillation regime is investigated. The complexity of the adaptive optics (AO) system is increased in order to investigate the changes in the distribution of the received power and how these affect the BER. For the 10 km link, due to the non-reciprocal nature of the propagation path, the optimal beam to transmit is unknown. The results show that a low order of AO complexity provides a better estimate of the optimal beam to transmit than a higher order for non-reciprocal paths. For the 20 km link distance it was found that, although the improvement was minimal, all AO complexity levels provided an equivalent improvement in BER, and that no AO complexity level provided the correction needed for the optimal beam to transmit. Finally, the temporal power spectral density of received power from an FSO communication link is investigated. Simulated and experimental results for the coherence time calculated from the temporal correlation function are presented. Results for both simulated and experimental data show that the coherence time increases as the receiving aperture diameter increases. For finite apertures, the coherence time increases as the communication link distance is increased. We conjecture that this is due to the increasing speckle size within the pupil plane of the receiving aperture for an increasing link distance.
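For concreteness, a sketch of the gamma-gamma irradiance PDF referred to above, in its standard unit-mean form with large- and small-scale parameters α and β; the parameter values below are illustrative, not the paper's fitted ones:

```python
import numpy as np
from scipy.special import kv, gammaln
from scipy.integrate import trapezoid

def gamma_gamma_pdf(I, alpha, beta):
    """Gamma-gamma PDF of normalized (unit-mean) irradiance I > 0:
    p(I) = 2 (ab)^((a+b)/2) / (Gamma(a) Gamma(b))
           * I^((a+b)/2 - 1) * K_{a-b}(2 sqrt(a b I))."""
    log_c = np.log(2.0) + 0.5 * (alpha + beta) * np.log(alpha * beta) \
            - gammaln(alpha) - gammaln(beta)
    return np.exp(log_c + (0.5 * (alpha + beta) - 1.0) * np.log(I)) \
           * kv(alpha - beta, 2.0 * np.sqrt(alpha * beta * I))

# Illustrative parameters for moderate scintillation; a lognormal PDF with
# the same scintillation index could be overlaid for comparison.
I = np.linspace(0.01, 4.0, 400)
p = gamma_gamma_pdf(I, alpha=4.0, beta=2.0)
print(trapezoid(p, I))  # should be close to 1
```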
Abstract:
The acoustic emission (AE) technique, a non-intrusive and nondestructive evaluation technique, acquires and analyzes the signals emitted by deformation or fracture of materials/structures under service loading. The AE technique has been successfully applied to damage detection in various materials such as metals, alloys, concrete, polymers and other composite materials. In this study, the AE technique was used to detect crack behavior within concrete specimens under mechanical and environmental frost loadings. The instrumentation of the AE system used in this study, purchased from Mistras Group, includes a low-frequency AE sensor, a computer-based data acquisition device and a preamplifier linking the sensor and the data acquisition device. The AE technique was applied to detect damage in the following laboratory tests: the pencil lead test, the mechanical three-point single-edge notched beam bending (SEB) test, and the freeze-thaw damage test. First, the pencil lead test was conducted to verify and quantify the attenuation of AE signals through concrete materials; the obtained signals also indicated that the AE system was properly set up to detect damage in concrete. Second, the SEB test on a lab-prepared concrete beam was conducted employing a Mechanical Testing System (MTS) and the AE system. The cumulative AE events and the measured loading curves, both using the crack-tip opening displacement (CTOD) as the horizontal coordinate, were plotted; the detected AE events were found to be qualitatively correlated with the global force-displacement behavior of the specimen. The Weibull distribution was proposed to quantitatively describe the rupture probability density function, and a linear regression analysis was conducted to calibrate the Weibull distribution parameters with the detected AE signals and to predict the rupture probability as a function of CTOD for the specimen. Finally, controlled concrete freeze-thaw cyclic tests were designed, with the AE technique planned to investigate the internal frost damage process of the concrete specimens.
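A minimal sketch of the kind of linear-regression calibration of a two-parameter Weibull model described above; the CTOD values at rupture and the median-rank plotting positions are assumptions for illustration:

```python
import numpy as np

# Hypothetical CTOD values (mm) at detected AE rupture events, sorted.
ctod = np.sort(np.array([0.012, 0.018, 0.022, 0.027,
                         0.031, 0.038, 0.044, 0.052]))
n = ctod.size

# Median-rank estimate of the empirical rupture probability.
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)

# Linearized two-parameter Weibull CDF: ln(-ln(1 - F)) = m ln(x) - m ln(eta),
# so a straight-line fit gives shape m (slope) and scale eta (intercept).
slope, intercept = np.polyfit(np.log(ctod), np.log(-np.log(1.0 - F)), 1)
m, eta = slope, np.exp(-intercept / slope)
print(f"Weibull shape m = {m:.2f}, scale eta = {eta:.4f} mm")
```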
Abstract:
In this paper, we present statistical analyses of several types of traffic sources in a 3G network, namely voice, video and data sources. For each traffic source type, measurements were collected in order to, on the one hand, gain a better understanding of the statistical characteristics of the sources and, on the other hand, enable forecasting of traffic behaviour in the network. The latter can be used to estimate service times and quality-of-service parameters. The probability density function, mean, variance, mean square deviation, skewness and kurtosis of the interarrival times are estimated with the Wolfram Mathematica and Crystal Ball statistical tools. Based on the evaluation of packet interarrival times, we show how the gamma distribution can be used in network simulations and in the evaluation of available capacity in opportunistic systems. As a result of our analyses, shape and scale parameters of the gamma distribution are obtained. The data can also be applied in dynamic network configuration in order to avoid potential network congestion or overflows.
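As an illustration of the gamma-distribution step, a sketch fitting shape and scale parameters to hypothetical interarrival times; SciPy stands in for the Mathematica/Crystal Ball tools used in the paper, and all numbers are placeholders:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical packet interarrival times (ms); the paper's came from 3G traces.
iat = rng.gamma(shape=0.8, scale=12.0, size=5000)

# Fit shape (k) and scale (theta), fixing the location at zero as is usual
# for interarrival times.
k, loc, theta = stats.gamma.fit(iat, floc=0.0)
print(f"gamma shape k = {k:.3f}, scale theta = {theta:.3f} ms")
print(f"skewness = {stats.skew(iat):.3f}, kurtosis = {stats.kurtosis(iat):.3f}")
```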
Abstract:
We study the existence of random elements with partially specified distributions. The technique relies on the existence of a positive extension of linear functionals, accompanied by additional conditions that ensure the regularity of the extension needed for interpreting it as a probability measure. It is shown in which cases the extension can be chosen to possess some invariance properties. The results are applied to the existence of point processes with a given correlation measure and of random closed sets with a given two-point covering function or contact distribution function. It is shown that the regularity condition can be efficiently checked in many cases, ensuring that the obtained point processes are indeed locally finite and the random sets have closed realisations.
Abstract:
We analyse the variability of the probability distribution of daily wind speed in wintertime over Northern and Central Europe in a series of global and regional climate simulations covering the last centuries, and in reanalysis products covering approximately the last 60 years. The focus of the study lies on identifying the link between variations in the wind speed distribution and the regional near-surface temperature, the meridional temperature gradient and the North Atlantic Oscillation. Our main result is that the link between the daily wind distribution and the regional climate drivers is strongly model dependent. The global models tend to behave similarly, although they show some discrepancies. The two regional models also tend to behave similarly to each other, but surprisingly the results derived from each regional model deviate strongly from the results derived from its driving global model. In addition, considering multi-centennial timescales, we find in two global simulations a long-term tendency for the probability distribution of daily wind speed to widen through the last centuries, likely an effect of the deforestation prescribed in these simulations. We conclude that no clear systematic relationship between the mean temperature, the temperature gradient and/or the North Atlantic Oscillation and the daily wind speed statistics can be inferred from these simulations. The understanding of past and future changes in the distribution of wind speeds, and thus of wind speed extremes, will require a detailed analysis of the representation of the interaction between large-scale and small-scale dynamics.
Abstract:
In this paper, we extend the debate concerning Credit Default Swap valuation to include time-varying correlations and covariances. Traditional multivariate techniques treat the correlations between covariates as constant over time; however, this view is not supported by the data. Moreover, since financial data do not follow a normal distribution because of their heavy tails, modeling the data using a generalized linear model (GLM) incorporating copulas emerges as a more robust technique than traditional approaches. The paper also includes an empirical analysis of the regime-switching dynamics of credit risk in the presence of liquidity, following the general practice of assuming that credit and market risk follow a Markov process. The study was based on Credit Default Swap data obtained from Bloomberg spanning the period from January 1, 2004 to August 8, 2006. The empirical examination of the regime-switching tendencies provided quantitative support to the anecdotal view that liquidity decreases as credit quality deteriorates. The analysis also examined the joint probability distribution of the credit risk determinants across credit quality through the use of a copula function, which disaggregates the behavior embedded in the marginal gamma distributions so as to isolate the level of dependence captured in the copula. The results suggest that the time-varying joint correlation matrix performed far better than the constant correlation matrix, the centerpiece of linear regression models.
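A minimal sketch of the copula idea above: the dependence is generated by a copula while the marginals stay gamma. A Gaussian copula and all parameter values are assumptions here; the abstract does not specify the copula family:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical Gaussian copula with gamma marginals for two credit-risk
# determinants; rho is the copula (dependence) parameter.
rho = 0.6
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=5000)
u = stats.norm.cdf(z)                              # copula sample on [0,1]^2
x1 = stats.gamma.ppf(u[:, 0], a=2.0, scale=1.5)    # gamma marginal 1
x2 = stats.gamma.ppf(u[:, 1], a=3.0, scale=0.8)    # gamma marginal 2

# Rank dependence comes from the copula alone, not from the marginals.
print(f"Spearman rho = {stats.spearmanr(x1, x2)[0]:.3f}")
```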
Abstract:
Based on the map of landscapes and permafrost conditions in Yakutia (Merzlotno-landshaftnaya karta Yakutskoi ASSR, Gosgeodeziya SSSR, 1991), rasterized maps of permafrost temperature and active-layer thickness of Yakutia, East Siberia, were derived. The mean and standard deviation at 0.5-degree grid-cell size are estimated by assigning a probability density function at 0.001-degree spatial resolution. The spatial patterns of both variables are dominated by a climatic gradient from north to south, and by mountains and the soil-type distribution. Uncertainties are highest in the mountains and in the sporadic permafrost zone in the south. The maps are best suited as a benchmark for land surface models which include a permafrost module.
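A sketch of the aggregation step: per-cell means and standard deviations of a hypothetical fine-resolution field (0.001-degree cells) collected into 0.5-degree cells, i.e. 500 x 500 blocks; the field values are placeholders, not the map's:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 0.001-degree permafrost-temperature field covering a
# 1 x 1 degree tile; real values would come from the rasterized map.
fine = rng.normal(-6.0, 2.0, size=(1000, 1000))

# Aggregate 500 x 500 blocks (0.5 degree) into per-cell mean and std.
b = 500
blocks = fine.reshape(fine.shape[0] // b, b, fine.shape[1] // b, b)
cell_mean = blocks.mean(axis=(1, 3))   # shape (2, 2)
cell_std = blocks.std(axis=(1, 3))
print(cell_mean, cell_std, sep="\n")
```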
Abstract:
Greenhouse gas emission reduction is the pillar of the Kyoto Protocol and one of the main goals of European Union (EU) energy policy. National reduction targets for EU member states and an overall target for the EU-15 (8%) were set by the Kyoto Protocol. This reduction target is based on emissions in the reference year (1990) and must be reached by 2012. EU energy policy does not set any national targets, only an overall reduction target of 20% by 2020. This paper transfers the global greenhouse gas emission reduction targets in both these documents to the transport sector, and specifically to CO2 emissions. It proposes a nonlinear distribution method with objective, dynamic targets for reducing CO2 emissions in the transport sector, according to the context and characteristics of each geographical area. First, we analyse CO2 emissions from transport in the reference year (1990) and their evolution from 1990 to 2007. We then propose a nonlinear methodology for distributing dynamic CO2 emission reduction targets. We have applied the proposed distribution function for 2012 and 2020 at two territorial levels (EU member states and Spanish autonomous regions). The weighted distribution is based on per capita CO2 emissions and CO2 emissions per unit of gross domestic product. Finally, we show the weighted targets found for each EU member state and each Spanish autonomous region, compare them with the real achievements to date, and forecast the situation for the years in which the Kyoto and EU goals are to be met. The results underline the need for 'weighted' decentralised decisions to be made at different territorial levels with a view to achieving a common goal, so that relative convergence of all the geographical areas is reached over time.
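The paper's actual distribution function is not given in the abstract; purely to illustrate the idea of weighting an overall cut by per-capita and per-GDP emissions with a nonlinear exponent, a hypothetical sketch:

```python
import numpy as np

# Hypothetical regions: transport CO2 emissions (Mt), per-capita emissions
# (t/person) and emissions per GDP (kt per million euro); all placeholders.
emissions = np.array([120.0, 80.0, 45.0])
per_capita = np.array([9.0, 6.5, 4.0])
per_gdp = np.array([0.40, 0.30, 0.22])

# Nonlinear weighting: regions with higher per-capita and per-GDP emissions
# take a more-than-proportional share of the common target (gamma > 1).
gamma = 1.5
w = (per_capita / per_capita.mean()) * (per_gdp / per_gdp.mean())
share = w**gamma / np.sum(w**gamma)

overall_cut = 0.20 * emissions.sum()   # e.g. the EU's 20% target for 2020
cuts = overall_cut * share
print(cuts, cuts.sum())                # per-region cuts sum to the target
```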
Abstract:
Tropospheric scintillation can become a significant impairment in satellite communication systems, especially in those with a low fade margin. Moreover, fast amplitude fluctuations due to scintillation are even larger when rain is present on the propagation path. Few studies of scintillation during rain have been reported, and its statistical characterization is still not totally clear. This paper presents experimental results on the relationship between scintillation and rain attenuation obtained from slant-path attenuation measurements at 50 GHz. The study is focused on the probability density function (PDF) of various scintillation parameters. It is shown that scintillation intensity, measured as the standard deviation of the amplitude fluctuations, increases with rain attenuation; in the range 1-10 dB this relationship can be expressed by power-law or linear equations. The PDFs of scintillation intensity conditioned on a given rain attenuation level are lognormal, while the overall long-term PDF is well fitted by a generalized extreme value (GEV) distribution. The short-term PDFs of amplitude conditioned on a given intensity are normal, although skewness effects are observed for the strongest intensities. A procedure is given to derive numerically the overall PDF of scintillation amplitude using a combination of conditional PDFs and local statistics of rain attenuation.
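A sketch of the two fits named above on synthetic data; SciPy's genextreme plays the role of the GEV distribution, and all numbers are placeholders rather than the measured 50 GHz statistics:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder long-term record of scintillation intensity (dB): the paper
# finds its overall PDF is well fitted by a GEV distribution.
intensity = rng.lognormal(mean=-1.2, sigma=0.5, size=20000)
c, loc, scale = stats.genextreme.fit(intensity)
print(f"GEV: shape={c:.3f}, loc={loc:.3f} dB, scale={scale:.3f} dB")

# Short-term amplitude conditioned on one intensity level: a normal fit.
amplitude = rng.normal(0.0, 0.25, size=2000)
mu, sd = stats.norm.fit(amplitude)
print(f"conditional normal: mu={mu:.3f} dB, sigma={sd:.3f} dB")
```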
Abstract:
In the photovoltaic field, back-contact solar cell technology has appeared as an alternative to traditional silicon modules. This new type of cell places both positive and negative contacts on the back side of the cell, maximizing the surface exposed to the light and making the interconnection of the cells in the module easier. The Emitter Wrap-Through solar cell structure presents thousands of tiny holes that wrap the emitter from the front surface to the rear surface. These holes are made in a first step over the silicon wafers by means of a laser drilling process. This step is quite harmful from a mechanical point of view, since the holes act as stress concentrators, leading to a reduction in the strength of the wafers. This paper presents the results of the strength characterization of drilled wafers. The study is carried out by testing the samples with the ring-on-ring device. Finite element models are developed to simulate the tests, and the stress concentration factor of the drilled wafers under these loading conditions is determined from the FE analysis. Moreover, the material strength is characterized by fitting the fracture stress of the samples to a three-parameter Weibull cumulative distribution function. The parameters obtained are compared with those obtained in the analysis of a set of samples without holes, to validate the method employed for the study of the strength of drilled silicon wafers.
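A minimal sketch of the three-parameter Weibull fit step with hypothetical fracture stresses; SciPy's weibull_min location parameter plays the role of the threshold stress, and maximum likelihood replaces whatever fitting procedure the paper uses:

```python
import numpy as np
from scipy import stats

# Hypothetical fracture stresses from ring-on-ring tests (MPa).
sigma_f = np.array([118, 125, 131, 136, 142, 147, 151, 158, 164, 173], float)

# Three-parameter Weibull: shape m, threshold sigma_u (loc), scale sigma_0.
m, sigma_u, sigma_0 = stats.weibull_min.fit(sigma_f)
print(f"m = {m:.2f}, sigma_u = {sigma_u:.1f} MPa, sigma_0 = {sigma_0:.1f} MPa")

# Fitted failure probability at each measured stress.
F = stats.weibull_min.cdf(sigma_f, m, loc=sigma_u, scale=sigma_0)
print(np.round(F, 3))
```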