963 results for PAIR DISTRIBUTION FUNCTION


Relevance:

80.00%

Publisher:

Abstract:

This master's thesis deals with the simulation of simultaneous credible intervals in a Bayesian context. We first consider precipitation data and two functions based on these data: the empirical distribution function and the return period, a nonlinear function of the distribution function. We review several known methods for obtaining simultaneous confidence intervals on these functions using a polynomial basis, and we present a method for simulating simultaneous credible intervals. We then place ourselves in a Bayesian setting and explore several prior density models. For the most complex model, Monte Carlo simulation is needed to obtain the posterior simultaneous credible intervals. Finally, we use a nonlinear basis involving the angular transformation and monotone splines to obtain a valid simultaneous credible interval for the return period.
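
For context, the return period is the nonlinear transform T(x) = 1/(1 − F(x)) of the distribution function. The sketch below is not the thesis's procedure, only one common Monte Carlo construction of a simultaneous credible band: pointwise posterior bands are rescaled by the max-statistic of the standardized draws (all names and inputs are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)

def simultaneous_band(draws, level=0.95):
    """Simultaneous credible band from posterior draws of a curve.

    draws: (n_draws, n_grid) posterior samples of the function on a
    grid. The pointwise band (mean +/- sd) is widened by the quantile
    of the maximum standardized deviation, so the whole curve is
    covered jointly rather than point by point.
    """
    mean = draws.mean(axis=0)
    sd = draws.std(axis=0, ddof=1)
    z = np.abs(draws - mean) / sd          # standardized deviations
    m = np.quantile(z.max(axis=1), level)  # max-statistic quantile
    return mean - m * sd, mean + m * sd

# Illustrative input: posterior draws of a CDF-like monotone curve.
draws = np.cumsum(rng.dirichlet(np.ones(50), size=2000), axis=1)[:, :-1]
lo, hi = simultaneous_band(draws)
```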

Relevance:

80.00%

Publisher:

Abstract:

The goal of this thesis was to study the growth dynamics of zinc-oxide-based thin films deposited by RF magnetron sputtering plasmas, for advanced electronic, optoelectronic, and photonic applications. In this context, we developed several diagnostics to characterize the neutral and charged species in this type of plasma, notably the electrostatic probe, optical emission and absorption spectroscopy, and mass spectrometry. We then attempted to correlate certain physical characteristics of ZnO film growth, in particular the deposition rate, with the fundamental properties of the plasma. Our results showed that the ejection of Zn, In, and O atoms during RF magnetron sputtering of Zn, ZnO, and In2O3 targets has very little influence on the positive-ion density (and hence, assuming quasi-neutrality, on the electron density) or on the electron energy distribution function (low- and high-energy populations). However, the ratio of the density of metastable argon atoms (3P2) to the electron density decreases as the density of Zn atoms increases, an effect that can be attributed to Penning ionization of the Zn atoms. Moreover, under the operating conditions studied (low-pressure plasmas, < 100 mTorr), the thermalization of sputtered atoms through collisions with gas-phase atoms remains incomplete. We showed that one consequence of this result is the presence of suprathermal Zn+ ions near the substrate. Finally, we correlated the quantity of sputtered Zn atoms, determined by emission spectroscopy, with the deposition rate of a ZnO thin film measured by spectroscopic ellipsometry. This work showed that it is mainly the Zn atoms (and not the excited and/or ionic species) that govern the growth dynamics of ZnO thin films in RF magnetron sputtering.

Relevance:

80.00%

Publisher:

Abstract:

This thesis is devoted to the study of some stochastic models in inventories. An inventory system is a facility at which items of materials are stocked. In order to promote the smooth and efficient running of business, and to provide adequate service to customers, an inventory of materials is essential for any enterprise. When uncertainty is present, inventories are used as a protection against the risk of stock-out. It is advantageous to procure an item before it is needed, at a lower marginal cost, and bulk purchasing brings the further advantage of price discounts. All of these contribute to the formation of inventory. Maintaining inventories is a major expenditure for any organization. For each inventory, the fundamental questions are how much new stock should be ordered and when the orders should be placed. The present study considers several models for single- and two-commodity stochastic inventory problems. The thesis discusses two models. In the first model, we examine the case in which the times elapsed between two consecutive demand points are independent and identically distributed with common distribution function F(.) and finite mean, and in which the demand magnitude depends only on the time elapsed since the previous demand epoch; the time between disasters has an exponential distribution. In Model II, the interarrival times of disasters have a general distribution F(.), and the quantity destroyed depends on the time elapsed between disasters, while demands form a compound Poisson process. The thesis also deals with a linearly correlated bulk-demand two-commodity inventory problem, in which each arrival demands a random number of items of each commodity, C1 and C2, the maximum quantities demanded being a (< S1) and b (< S2), respectively.
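
A loose illustrative simulation, not the thesis's exact models: a single-commodity (s, S) inventory in which the demand magnitude grows with the time elapsed since the previous demand, and exponentially distributed disasters destroy stock (all parameter values are made up).

```python
import numpy as np

rng = np.random.default_rng(1)

s, S = 5, 20                       # reorder level / order-up-to level
level, t, horizon = 20, 0.0, 1000.0
demand_rate, disaster_rate = 1.0, 0.05

while t < horizon:
    gap = rng.exponential(1.0 / demand_rate)  # time to next demand epoch
    t += gap
    demand = 1 + rng.poisson(gap)             # magnitude depends on the gap
    level = max(level - demand, 0)
    # Disasters arrive at rate disaster_rate; at least one hit occurred
    # during the gap with probability 1 - exp(-rate * gap).
    if rng.random() < 1.0 - np.exp(-disaster_rate * gap):
        level = max(level - int(rng.integers(1, 4)), 0)
    if level <= s:                            # replenish up to S
        level = S
```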

Relevance:

80.00%

Publisher:

Abstract:

The average availability of a repairable system is the expected proportion of time that the system is operating in the interval [0, t]. The present article discusses the nonparametric estimation of the average availability when (i) data on 'n' complete cycles of system operation are available, (ii) the data are subject to right censorship, and (iii) the process is observed up to a specified time 'T'. In each case, a nonparametric confidence interval for the average availability is also constructed. Simulations are conducted to assess the performance of the estimators.
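
For flavor only, a sketch of case (i): the long-run availability estimator sum(uptime)/sum(cycle length) from n complete cycles, with a bootstrap interval. The article's estimators and intervals target the time-average over [0, t] and are constructed differently.

```python
import numpy as np

rng = np.random.default_rng(2)

def availability_estimate(up, down, level=0.95, n_boot=2000):
    """Point estimate and bootstrap CI for availability from n
    complete operating/repair cycles."""
    up, down = np.asarray(up), np.asarray(down)
    est = up.sum() / (up + down).sum()
    n = len(up)
    idx = rng.integers(0, n, size=(n_boot, n))   # resample whole cycles
    boot = up[idx].sum(axis=1) / (up[idx] + down[idx]).sum(axis=1)
    a = (1 - level) / 2
    return est, np.quantile(boot, [a, 1 - a])

up = rng.exponential(10.0, size=50)     # operating times (synthetic)
down = rng.exponential(2.0, size=50)    # repair times (synthetic)
est, (lo, hi) = availability_estimate(up, down)
```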

Relevance:

80.00%

Publisher:

Abstract:

Reliability analysis is a well-established branch of statistics that deals with the statistical study of different aspects of the lifetimes of a system of components. As pointed out earlier, the major part of the theory and applications connected with reliability analysis has been developed using measures expressed in terms of the distribution function. In the opening chapters of the thesis, we describe some attractive features of quantile functions and the relevance of their use in reliability analysis. Motivated by the works of Parzen (1979), Freimer et al. (1988) and Gilchrist (2000), who indicated the scope of quantile functions in reliability analysis, and as a follow-up to the systematic study of this connection by Nair and Sankaran (2009), in the present work we try to extend their ideas and develop the necessary theoretical framework for lifetime data analysis. In Chapter 1, we give the relevance and scope of the study and a brief outline of the work carried out. Chapter 2 is devoted to the presentation of various concepts and brief reviews of them, which are useful for the discussions in the subsequent chapters. In the introduction of Chapter 4, we point out the role of ageing concepts in reliability analysis and in identifying life distributions. In Chapter 6, we study the first two L-moments of residual life and their relevance in various applications of reliability analysis. We show that the first L-moment of the residual function is equivalent to the vitality function, which has been widely discussed in the literature. In Chapter 7, we define the percentile residual life in reversed time (RPRL) and derive its relationship with the reversed hazard rate (RHR). We discuss the characterization problem for the RPRL and demonstrate with an example that the RPRL for a given percentile does not determine the distribution uniquely.
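
One quantile-based tool from the line of work cited (Nair and Sankaran, 2009) is the hazard quantile function H(u) = 1/[(1 − u) q(u)], where q(u) = Q′(u) is the quantile density. A minimal numeric check for the exponential law, whose hazard is the constant rate lam:

```python
import numpy as np

# Exponential quantile function Q(u) = -ln(1 - u)/lam should give the
# constant hazard quantile function H(u) = lam.
lam = 2.0
u = np.linspace(0.01, 0.9, 900)
Q = -np.log1p(-u) / lam
q = np.gradient(Q, u)                  # numeric quantile density q = Q'
H = 1.0 / ((1.0 - u) * q)              # hazard quantile function
assert np.allclose(H, lam, rtol=1e-2)  # constant hazard, as expected
```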

Relevance:

80.00%

Publisher:

Abstract:

Electron transport in a self-consistent potential along a ballistic two-terminal conductor has been investigated. We have derived general formulas which describe the nonlinear current-voltage characteristics, differential conductance, and low-frequency current and voltage noise assuming an arbitrary distribution function and correlation properties of injected electrons. The analytical results have been obtained for a wide range of biases: from equilibrium to high values beyond the linear-response regime. The particular case of a three-dimensional Fermi-Dirac injection has been analyzed. We show that the Coulomb correlations are manifested in the negative excess voltage noise, i.e., the voltage fluctuations under high-field transport conditions can be less than in equilibrium.

Relevance:

80.00%

Publisher:

Abstract:

Observations in daily practice are sometimes registered as positive values larger than a given threshold α. The sample space is in this case the interval (α, +∞), α > 0, which can be structured as a real Euclidean space in different ways. This fact opens the door to alternative statistical models depending not only on the assumed distribution function, but also on the metric considered appropriate, i.e., the way differences, and thus variability, are measured.
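
One standard way to structure (α, +∞) as a real Euclidean space (the abstract does not fix a choice) is via the bijection x ↦ ln(x − α). A toy comparison of how the two metrics measure the same difference, with an illustrative threshold:

```python
import math

# Two ways to measure the difference between observations on (a, +inf),
# here with threshold a = 1.0 (illustrative choice).
a = 1.0
x, y = 1.5, 8.0

d_euclidean = abs(x - y)                          # ordinary metric
d_log = abs(math.log(x - a) - math.log(y - a))    # metric after x -> ln(x - a)

print(d_euclidean)   # 6.5
print(d_log)         # |ln(0.5) - ln(7.0)| ~ 2.64
```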

Relevance:

80.00%

Publisher:

Abstract:

This paper focuses on one of the methods for bandwidth allocation in an ATM network: the convolution approach. The convolution approach permits an accurate study of the system load in statistical terms by accumulated calculations, since probabilistic results of the bandwidth allocation can be obtained. Nevertheless, the convolution approach has a high cost in terms of calculation and storage requirements. This makes real-time calculation difficult, so many authors do not consider this approach. With the aim of reducing the cost, we propose to use the multinomial distribution function: the enhanced convolution approach (ECA). This permits direct computation of the associated probabilities of the instantaneous bandwidth requirements and makes a simple deconvolution process possible. The ECA is used in connection acceptance control, and some results are presented.
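
A minimal sketch of the plain convolution approach for on/off sources, with made-up parameters: convolving the per-source bandwidth distributions yields the distribution of the aggregate demand, from which overflow probabilities follow.

```python
import numpy as np

# Each on/off source i needs b_i bandwidth units with probability p_i.
sources = [(2, 0.3), (3, 0.5), (1, 0.8), (4, 0.2)]   # (b_i, p_i)

agg = np.array([1.0])                  # distribution of total demand
for b, p in sources:
    pmf = np.zeros(b + 1)
    pmf[0], pmf[b] = 1 - p, p          # source off / source on
    agg = np.convolve(agg, pmf)        # accumulate one source

capacity = 6
p_overflow = agg[capacity + 1:].sum()  # P(total demand > capacity)
```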

Relevance:

80.00%

Publisher:

Abstract:

This document provides a theoretical review of the Poisson probability distribution as the function that assigns, to each event defined on a discrete random variable, its probability of occurrence in a disjoint time interval or region of space. It also reviews the negative exponential distribution, used to model the time between consecutive Poisson events that occur independently; that is, those in which the probability of events occurring in one time interval does not depend on what happened in other intervals, which is why the distribution is said to be memoryless. The Poisson process links the Poisson function, which represents a set of independent events occurring in a time interval or region of space, with the times between occurrences of those events, which follow the negative exponential distribution. These concepts are used in queueing theory, the branch of operations research that describes, and offers solutions for, situations in which a set of individuals or items form queues while waiting to be served; application examples from the medical field are presented.
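
A sketch of the Poisson/exponential link the document reviews (parameter values are illustrative): events whose gaps are i.i.d. exponential with rate λ form a Poisson process, so the count of events in [0, t] is Poisson(λt), with mean equal to variance.

```python
import numpy as np

rng = np.random.default_rng(3)

rate, t, n_runs = 4.0, 2.0, 100_000

gaps = rng.exponential(1.0 / rate, size=(n_runs, 64))  # ample gaps per run
arrival_times = np.cumsum(gaps, axis=1)
counts = (arrival_times <= t).sum(axis=1)              # events in [0, t]

print(counts.mean())   # ~ rate * t = 8.0
print(counts.var())    # ~ rate * t (Poisson: mean equals variance)
```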

Relevance:

80.00%

Publisher:

Abstract:

The characteristics of service independence and flexibility of ATM networks make the control problems of such networks very critical. One of the main challenges in ATM networks is to design traffic control mechanisms that enable both economically efficient use of the network resources and the desired quality of service for higher-layer applications. Window flow-control mechanisms of traditional packet-switched networks are not well suited to real-time services at the speeds envisaged for future networks. In this work, the utilisation of the Probability of Congestion (PC) as a bandwidth decision parameter is presented. The validity of using the PC is compared with QoS parameters in bufferless environments, where only the cell loss ratio (CLR) parameter is relevant. The convolution algorithm is a good solution for CAC in ATM networks with small buffers. If the source characteristics are known, the actual CLR can be estimated very well. Furthermore, this estimation is always conservative, allowing the retention of the network performance guarantees. Several experiments have been carried out and investigated to explain the deviation between the proposed method and the simulation. Time parameters for burst length and different buffer sizes have been considered. Experiments to confine the limits of the burst length with respect to the buffer size conclude that a minimum buffer size is necessary to achieve adequate cell contention. Note that propagation delay cannot be dismissed for long-distance and interactive communications, so small buffers must be used in order to minimise delay. Under these premises, the convolution approach is the most accurate method used in bandwidth allocation. This method gives enough accuracy in both homogeneous and heterogeneous networks. However, the convolution approach has a considerable computation cost and a high number of accumulated calculations. To overcome these drawbacks, a new method of evaluation is analysed: the Enhanced Convolution Approach (ECA). In the ECA, traffic is grouped in classes of identical parameters. By using the multinomial distribution function instead of the formula-based convolution, a partial state corresponding to each class of traffic is obtained. Finally, the global state probabilities are evaluated by multi-convolution of the partial results. This method avoids accumulated calculations and saves storage requirements, especially in complex scenarios. Sorting is the dominant factor for the formula-based convolution, whereas cost evaluation is the dominant factor for the enhanced convolution. A set of cut-off mechanisms is introduced to reduce the complexity of the ECA evaluation. The ECA also computes the CLR for each j-class of traffic (CLRj); an expression for evaluating CLRj is also presented. We can conclude that, by combining the ECA method with cut-off mechanisms, utilisation of the ECA in real-time CAC environments as a single-level scheme is always possible.
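
A sketch of the ECA grouping idea with illustrative parameters: for a class of n identical on/off sources, the multinomial reduces to a binomial over the number of active sources, giving the partial state of the whole class in one shot; the global state is then the multi-convolution of the partial states.

```python
import numpy as np
from scipy.stats import binom

# Classes of identical on/off sources: (b, p, n_sources) per class.
classes = [(2, 0.3, 10), (5, 0.1, 4)]

global_state = np.array([1.0])
for b, p, n in classes:
    k = np.arange(n + 1)
    partial = np.zeros(b * n + 1)
    partial[k * b] = binom.pmf(k, n, p)   # demand k*b with binomial weight
    global_state = np.convolve(global_state, partial)  # multi-convolution

capacity = 25
clr_proxy = global_state[capacity + 1:].sum()   # P(demand > capacity)
```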

Relevance:

80.00%

Publisher:

Abstract:

A novel statistic for local wave amplitude of the 500-hPa geopotential height field is introduced. The statistic uses a Hilbert transform to define a longitudinal wave envelope and dynamical latitude weighting to define the latitudes of interest. Here it is used to detect the existence, or otherwise, of multimodality in its distribution function. The empirical distribution function for the 1960-2000 period is close to a Weibull distribution with shape parameters between 2 and 3. There is substantial interdecadal variability but no apparent local multimodality or bimodality. The zonally averaged wave amplitude, akin to the more usual wave amplitude index, is close to being normally distributed. This is consistent with the central limit theorem, which applies to the construction of the wave amplitude index. For the period 1960-70 it is found that there is apparent bimodality in this index. However, the different amplitudes are realized at different longitudes, so there is no bimodality at any single longitude. As a corollary, it is found that many commonly used statistics to detect multimodality in atmospheric fields potentially satisfy the assumptions underlying the central limit theorem and therefore can only show approximately normal distributions. The author concludes that these techniques may therefore be suboptimal to detect any multimodality.
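
A minimal sketch of the envelope construction on synthetic data (the field, grid, and wavenumber below are made up): the local wave amplitude is the modulus of the analytic signal obtained from a Hilbert transform along longitude.

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic wavetrain in longitude with a localized amplitude packet.
n_lon = 240
lon = np.linspace(0, 2 * np.pi, n_lon, endpoint=False)
envelope_true = 1.0 + 0.8 * np.exp(-((lon - np.pi) ** 2))
z500 = envelope_true * np.cos(8 * lon)        # zonal wavenumber 8

envelope = np.abs(hilbert(z500))              # local wave amplitude
assert np.allclose(envelope, envelope_true, atol=0.1)
```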

Relevance:

80.00%

Publisher:

Abstract:

A stochastic parameterization scheme for deep convection is described, suitable for use in both climate and NWP models. Theoretical arguments and the results of cloud-resolving models, are discussed in order to motivate the form of the scheme. In the deterministic limit, it tends to a spectrum of entraining/detraining plumes and is similar to other current parameterizations. The stochastic variability describes the local fluctuations about a large-scale equilibrium state. Plumes are drawn at random from a probability distribution function (pdf) that defines the chance of finding a plume of given cloud-base mass flux within each model grid box. The normalization of the pdf is given by the ensemble-mean mass flux, and this is computed with a CAPE closure method. The characteristics of each plume produced are determined using an adaptation of the plume model from the Kain-Fritsch parameterization. Initial tests in the single column version of the Unified Model verify that the scheme is effective in producing the desired distributions of convective variability without adversely affecting the mean state.
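
A toy draw of one grid box's plume ensemble, under stated assumptions: the pdf of cloud-base mass flux is taken to be exponential, and the CAPE-closure ensemble mean is replaced by a fixed placeholder value.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumptions for the sketch: exponential pdf of single-plume mass
# flux; <M> would come from a CAPE closure, here just a constant.
mean_total_flux = 0.05     # <M>, kg m^-2 s^-1 (placeholder value)
mean_plume_flux = 0.01     # <m>, mean flux of a single plume

n_plumes = rng.poisson(mean_total_flux / mean_plume_flux)    # plume count
plume_fluxes = rng.exponential(mean_plume_flux, size=n_plumes)
# Each drawn plume would then be handed to an entraining/detraining
# plume model (Kain-Fritsch-like) to compute its heating profile.
```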

Relevance:

80.00%

Publisher:

Abstract:

The orientational ordering of the nematic phase of a polyethylene glycol (PEG)-peptide block copolymer in aqueous solution is probed by small-angle neutron scattering (SANS), with the sample subjected to steady shear in a Couette cell. The PEG-peptide conjugate forms fibrils that behave as semiflexible rodlike chains. The orientational order parameters $\bar{P}_2$ and $\bar{P}_4$ are obtained by modeling the data using a series expansion approach to the form factor of uniform cylinders. The method used is independent of assumptions on the form of the singlet orientational distribution function. Good agreement with the anisotropic two-dimensional SANS patterns is obtained. The results show shear alignment starting at very low shear rates, and the orientational order parameters reach a plateau at higher shear rates with a pseudologarithmic dependence on shear rate. The most probable distribution functions correspond to fibrils parallel to the flow direction under shear, but a sample at rest shows a bimodal distribution with some of the rodlike peptide fibrils oriented perpendicular to the flow direction.
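
For reference (standard definitions, not specific to this paper), the order parameters are ensemble averages of the second and fourth Legendre polynomials in the angle β between the fibril axis and the director:

```latex
\bar{P}_2 = \left\langle \tfrac{1}{2}\bigl(3\cos^2\beta - 1\bigr) \right\rangle ,
\qquad
\bar{P}_4 = \left\langle \tfrac{1}{8}\bigl(35\cos^4\beta - 30\cos^2\beta + 3\bigr) \right\rangle
```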

Relevance:

80.00%

Publisher:

Abstract:

The alignment of model amyloid peptide YYKLVFFC is investigated in bulk and at a solid surface using a range of spectroscopic methods employing polarized radiation. The peptide is based on a core sequence of the amyloid beta (A beta) peptide, KLVFF. The attached tyrosine and cysteine units are exploited to yield information on alignment and possible formation of disulfide or dityrosine links. Polarized Raman spectroscopy on aligned stalks provides information on tyrosine orientation, which complements data from linear dichroism (LD) on aqueous solutions subjected to shear in a Couette cell. LD provides a detailed picture of alignment of peptide strands and aromatic residues and was also used to probe the kinetics of self-assembly. This suggests initial association of phenylalanine residues, followed by subsequent registry of strands and orientation of tyrosine residues. X-ray diffraction (XRD) data from aligned stalks is used to extract orientational order parameters from the 0.48 nm reflection in the cross-beta pattern, from which an orientational distribution function is obtained. X-ray diffraction on solutions subject to capillary flow confirmed orientation in situ at the level of the cross-beta pattern. The information on fibril and tyrosine orientation from polarized Raman spectroscopy is compared with results from NEXAFS experiments on samples prepared as films on silicon. This indicates fibrils are aligned parallel to the surface, with phenyl ring normals perpendicular to the surface. Possible disulfide bridging leading to peptide dimer formation was excluded by Raman spectroscopy, whereas dityrosine formation was probed by fluorescence experiments and was found not to occur except under alkaline conditions. Congo red binding was found not to influence the cross-beta XRD pattern.

Relevance:

80.00%

Publisher:

Abstract:

This paper compares a number of different extreme value models for determining the value at risk (VaR) of three LIFFE futures contracts. A semi-nonparametric approach is also proposed, where the tail events are modeled using the generalised Pareto distribution, and normal market conditions are captured by the empirical distribution function. The value at risk estimates from this approach are compared with those of standard nonparametric extreme value tail estimation approaches, with a small sample bias-corrected extreme value approach, and with those calculated from bootstrapping the unconditional density and bootstrapping from a GARCH(1,1) model. The results indicate that, for a holdout sample, the proposed semi-nonparametric extreme value approach yields superior results to other methods, but the small sample tail index technique is also accurate.
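
A minimal peaks-over-threshold sketch of the tail part (synthetic heavy-tailed data stand in for the LIFFE futures returns; the empirical distribution function would handle the body below the threshold):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(5)

losses = rng.standard_t(df=4, size=5000)   # synthetic heavy-tailed losses

u = np.quantile(losses, 0.95)              # tail threshold
exceedances = losses[losses > u] - u
xi, _, sigma = genpareto.fit(exceedances, floc=0)

# Standard POT formula, valid for levels p beyond the threshold:
# VaR_p = u + (sigma/xi) * (((n/N_u) * (1 - p))**(-xi) - 1)
p = 0.99
n, n_u = len(losses), len(exceedances)
var_p = u + (sigma / xi) * (((n / n_u) * (1 - p)) ** (-xi) - 1)
```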