965 results for Cumulative Distribution Function
Abstract:
The effect of the heat flux on the rate of chemical reaction in dilute gases is shown to be important for reactions characterized by high activation energies and in the presence of very large temperature gradients. This effect, obtained from the second-order terms in the distribution function (similar to those obtained in the Burnett approximation to the solution of the Boltzmann equation), is derived on the basis of information theory. It is shown that the analytical results describing the effect are simpler when the kinetic definition of the nonequilibrium temperature is used than when the thermodynamic definition is used. The numerical results are nearly the same for both definitions.
Abstract:
We propose a novel formulation to solve the problem of intra-voxel reconstruction of the fibre orientation distribution function (FOD) in each voxel of the white matter of the brain from diffusion MRI data. The majority of the state-of-the-art methods in the field perform the reconstruction on a voxel-by-voxel level, promoting sparsity of the orientation distribution. Recent methods have proposed a global denoising of the diffusion data using spatial information prior to reconstruction, while others promote spatial regularisation through an additional empirical prior on the diffusion image at each q-space point. Our approach reconciles voxelwise sparsity and spatial regularisation and defines a spatially structured FOD sparsity prior, where the structure originates from the spatial coherence of the fibre orientation between neighbouring voxels. The method is shown, on both simulated and real data, to enable accurate FOD reconstruction from far fewer q-space samples than the state of the art (typically 15 samples), even under quite adverse noise conditions.
Abstract:
Reinsurance is one of the tools that an insurer can use to mitigate underwriting risk and thereby control its solvency. In this paper we focus on proportional reinsurance arrangements and examine several optimization and decision problems of the insurer with respect to the reinsurance strategy. To this end, we use as decision tools not only the probability of ruin but also the deficit at ruin, given that ruin occurs. The discounted penalty function (Gerber & Shiu, 1998) is employed to obtain, as particular cases, the probability of ruin as well as the moments and the distribution function of the deficit at ruin.
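As an illustrative sketch (not the Gerber-Shiu machinery itself), the ruin probability and the deficit at ruin under proportional reinsurance can be estimated by simulating a compound-Poisson surplus process. All names and modelling choices below (`ruin_mc`, exponential claim sizes, the insurer retaining the fraction `b` of both claims and the premium rate `c`) are hypothetical assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def ruin_mc(u, c, lam, mean_claim, b, horizon=1000.0, n_paths=20000):
    """Monte Carlo sketch of a Cramer-Lundberg surplus process under
    proportional reinsurance: the insurer retains a fraction b of every
    claim and (a simplifying assumption) of the premium income c.
    Returns the estimated ruin probability and the deficits at ruin."""
    deficits = []
    for _ in range(n_paths):
        t, retained_claims = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / lam)        # Poisson claim arrivals
            if t > horizon:
                break
            retained_claims += b * rng.exponential(mean_claim)
            surplus = u + b * c * t - retained_claims
            if surplus < 0.0:
                deficits.append(-surplus)          # deficit at ruin
                break
    return len(deficits) / n_paths, np.asarray(deficits)
```

The empirical distribution of the returned deficits plays the role of the distribution of the deficit at ruin that the discounted penalty function delivers analytically.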
Abstract:
In this study we used market settlement prices of European call options on stock index futures to extract the implied probability distribution function (PDF). The method used produces a PDF of the returns of the underlying asset at the expiration date from the implied volatility smile. With this method, the assumption of lognormally distributed returns (the Black-Scholes model) is tested. The market's view of the asset price dynamics can then be used for various purposes (hedging, speculation). We used the so-called smoothing approach for implied PDF extraction presented by Shimko (1993). In our analysis we obtained implied volatility smiles from index futures markets (the S&P 500 and DAX indices) and standardized them. The method introduced by Breeden and Litzenberger (1978) was then used for PDF extraction. The results show significant deviations from the assumption of lognormal returns for S&P 500 options, while DAX options mostly fit the lognormal distribution. A subjective view of the PDF that deviates from the market's can be used to form a trading strategy, as discussed in the last section.
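The Breeden-Litzenberger step can be sketched in a few lines: the risk-neutral density is the discounted second derivative of the call price with respect to the strike, f(K) = exp(rT) d²C/dK². The sketch below smooths prices directly with a cubic spline, a simplified stand-in for Shimko-style smoothing of the volatility smile; the function name and arguments are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def implied_pdf(strikes, call_prices, r, T, grid_size=400):
    """Breeden-Litzenberger sketch: discounted second derivative of the
    (spline-smoothed) call price curve with respect to the strike."""
    spline = CubicSpline(strikes, call_prices)
    grid = np.linspace(strikes[0], strikes[-1], grid_size)
    return grid, np.exp(r * T) * spline(grid, 2)   # nu=2: second derivative
```

Comparing the resulting density with a lognormal fit is what reveals the deviations reported for the S&P 500 options.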
Abstract:
Problems related to fire hazard and fire management have become, in recent decades, among the most relevant issues in the Wildland-Urban Interface (WUI), that is, the area where human infrastructure meets or intermingles with natural vegetation. In this paper we develop a robust geospatial method for defining and mapping the WUI in the Alpine environment, where most interactions between infrastructure and wildland vegetation concern fire ignition through human activities, whereas no significant threats to infrastructure exist due to contact with burning vegetation. We used the three Alpine Swiss cantons of Ticino, Valais and Grisons as the study area. The features representing anthropogenic infrastructure (the urban or infrastructural components of the WUI) as well as forest-cover-related features (the wildland component of the WUI) were selected from the Swiss Topographic Landscape Model (TLM3D). Georeferenced forest fire occurrences derived from the WSL Swissfire database were used to define suitable WUI interface distances. The Random Forest algorithm was applied to estimate the importance of predictor variables for fire ignition occurrence. This revealed that buildings and drivable roads are the most relevant anthropogenic components with respect to fire ignition. We consequently defined the combination of drivable roads and easily accessible buildings (i.e. within 100 m of the nearest drivable road) as the WUI-relevant infrastructural component. For the definition of the interface (buffer) distance between the WUI infrastructural and wildland components, we computed the empirical cumulative distribution functions (ECDF) of the percentage of ignition points (observed and simulated) arising at increasing distances from the selected infrastructure. The ECDF gives both the distance at which a given percentage of ignition points occurred and, in turn, the amount of forest area covered at a given distance. Finally, we developed a GIS ModelBuilder routine to map the WUI for the selected buffer distance. The approach was found to be reproducible, robust (based on statistical analyses for evaluating parameters) and flexible (buffer distances depending on the targeted final area covered), so that fire managers may use it to delineate the WUI according to their specific priorities.
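The ECDF-based choice of buffer distance can be illustrated with a minimal sketch: sort the ignition-to-infrastructure distances and read off the smallest distance within which a target fraction of ignition points occurred. The function and parameter names are hypothetical, not from the paper:

```python
import numpy as np

def buffer_distance(ignition_distances, coverage=0.9):
    """Empirical CDF of ignition-to-infrastructure distances: returns the
    smallest buffer distance covering `coverage` of ignition points
    (coverage must lie in (0, 1])."""
    d = np.sort(np.asarray(ignition_distances))
    ecdf = np.arange(1, d.size + 1) / d.size
    return d[np.searchsorted(ecdf, coverage)]
```

Evaluating this for a range of coverage targets is what lets managers trade off buffer width against the forest area enclosed.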
Abstract:
Climate change affects the rate of insect invasions as well as the abundance, distribution and impacts of such invasions on a global scale. Among the principal analytical approaches to predicting and understanding the future impacts of biological invasions are Species Distribution Models (SDMs), typically in the form of correlative Ecological Niche Models (ENMs). An underlying assumption of ENMs is that species-environment relationships remain preserved during extrapolations in space and time, although this is widely criticised. The semi-mechanistic modelling platform CLIMEX employs a top-down approach using species' ecophysiological traits and is able to avoid some of the issues of extrapolation, making it highly applicable to investigating biological invasions in the context of climate change. The tephritid fruit flies (Diptera: Tephritidae) comprise some of the most successful invasive species and serious economic pests around the world. Here we project CLIMEX models for 12 tephritid species onto future climate scenarios to examine overall patterns of climate suitability and forecast potential distributional changes for this group. We further compare the aggregate response of the group against species-specific responses. We then consider additional drivers of biological invasions to examine how invasion potential is influenced by climate, fruit production and trade indices. For the group of tephritid species examined here, climate change is predicted to decrease global climate suitability and to shift the cumulative distribution poleward. However, at the species level the predominant direction of range shifts for 11 of the 12 species is eastward. Most notably, management will need to consider regional changes in fruit fly invasion potential where high fruit production, trade indices and the predicted distributions of these flies overlap.
Abstract:
Classical Monte Carlo simulations were carried out in the NPT ensemble at 25°C and 1 atm, aiming to investigate the ability of the TIP4P water model [Jorgensen, Chandrasekhar, Madura, Impey and Klein; J. Chem. Phys., 79 (1983) 926] to reproduce the newest structural picture of liquid water. The results were compared with recent neutron diffraction data [Soper, Bruni and Ricci; J. Chem. Phys., 106 (1997) 247]. The influence of the computational conditions on the thermodynamic and structural results obtained with this model was also analyzed. The findings were compared with the original ones from Jorgensen et al. [the above-cited reference plus Mol. Phys., 56 (1985) 1381]. It is noticed that the thermodynamic results depend on the boundary conditions used, whereas the usual radial distribution functions g_OO(r) and g_OH(r) do not.
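For reference, a minimal sketch of how a radial distribution function such as g_OO(r) is estimated from a single configuration, assuming a cubic box with periodic boundaries and r_max at most half the box length (names and signature are illustrative, not the authors' code):

```python
import numpy as np

def radial_distribution(positions, box, r_max, n_bins=100):
    """Histogram estimate of g(r) for one configuration of N particles
    in a cubic box of side `box`, using the minimum-image convention."""
    n = len(positions)
    diffs = positions[:, None, :] - positions[None, :, :]
    diffs -= box * np.round(diffs / box)                 # minimum image
    dist = np.sqrt((diffs ** 2).sum(-1))[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(dist, bins=n_bins, range=(0.0, r_max))
    r = 0.5 * (edges[1:] + edges[:-1])
    shell = 4.0 * np.pi * r ** 2 * (edges[1] - edges[0])  # shell volume
    rho = n / box ** 3                                    # number density
    # normalize unordered pair counts by the ideal-gas expectation
    return r, hist / (shell * rho * n / 2.0)
```

In practice such histograms are accumulated over many configurations before normalization.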
Abstract:
Electron transport in a self-consistent potential along a ballistic two-terminal conductor has been investigated. We have derived general formulas which describe the nonlinear current-voltage characteristics, differential conductance, and low-frequency current and voltage noise assuming an arbitrary distribution function and correlation properties of injected electrons. The analytical results have been obtained for a wide range of biases: from equilibrium to high values beyond the linear-response regime. The particular case of a three-dimensional Fermi-Dirac injection has been analyzed. We show that the Coulomb correlations are manifested in the negative excess voltage noise, i.e., the voltage fluctuations under high-field transport conditions can be less than in equilibrium.
Abstract:
It is a well-known phenomenon that the constant-amplitude fatigue limit of a large component is lower than the fatigue limit of a small specimen made of the same material. In notched components the opposite occurs: the fatigue limit, defined as the maximum stress at the notch, is higher than that achieved with smooth specimens. These two effects have been taken into account in most design handbooks with the help of empirical formulas or design curves. The basic idea of this study is that the size effect can mainly be explained by the statistical size effect. A component subjected to an alternating load can be assumed to form a sample of initiated cracks at the end of the crack initiation phase. The size of the sample depends on the size of the specimen in question. The main objective of this study is to develop a statistical model for the estimation of this kind of size effect. It is shown that the size of a sample of initiated cracks should be based on the stressed surface area of the specimen. In the case of a varying stress distribution, an effective stress area must be calculated; it is based on the decreasing probability of equally sized initiated cracks at lower stress levels. If the distribution function of the parent population of cracks is known, the distribution of the maximum crack size in a sample can be defined. This makes it possible to estimate the largest expected crack in any sample size. The estimate of the fatigue limit can then be calculated with the help of linear elastic fracture mechanics. In notched components another source of size effect has to be taken into account. Consider two specimens of similar shape but different size: the stress gradient in the smaller specimen is steeper, so if there is an initiated crack in both of them, the stress intensity factor at the crack in the larger specimen is higher. The second goal of this thesis is to create a calculation method for this factor, which is called the geometric size effect. The proposed method for calculating the geometric size effect is also based on linear elastic fracture mechanics. It is possible to calculate an accurate value of the stress intensity factor in a nonlinear stress field using weight functions. The calculated stress intensity factor values at the initiated crack can be compared to the corresponding stress intensity factor due to constant stress; the notch size effect is calculated as the ratio of these stress intensity factors. The presented methods were tested against experimental results taken from three German doctoral theses. Two candidates for the parent population of initiated cracks were found: the Weibull distribution and the lognormal distribution. Both can be used successfully to predict the statistical size effect for smooth specimens. In the case of notched components, the geometric size effect due to the stress gradient must be combined with the statistical size effect. The proposed method gives good results as long as the notch in question is blunt enough. For very sharp notches, with a stress concentration factor of about 5 or higher, the method does not give satisfactory results; it was shown that the plastic portion of the strain becomes quite high at the root of such notches, so the use of linear elastic fracture mechanics becomes questionable.
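The statistical size effect admits a compact sketch: if initiated crack depths are i.i.d. draws from a parent distribution F, the largest of n cracks has CDF F(x)^n, so any quantile of the extreme crack follows from the parent quantile function. A minimal illustration with a Weibull parent (function and parameter names are hypothetical, not from the thesis):

```python
from scipy import stats

def largest_crack_estimate(shape, scale, n_cracks, quantile=0.5):
    """If crack depths follow a Weibull parent F, the maximum of
    n_cracks i.i.d. cracks has CDF F(x)**n, so its `quantile` point is
    F^{-1}(quantile**(1/n)).  n_cracks scales with the stressed surface
    area of the specimen, which is the statistical size effect."""
    parent = stats.weibull_min(c=shape, scale=scale)
    return parent.ppf(quantile ** (1.0 / n_cracks))
```

Feeding the resulting extreme crack size into a linear elastic fracture mechanics threshold condition then yields the size-dependent fatigue limit estimate described above.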
Abstract:
An axisymmetric supersonic flow of rarefied gas past a finite cylinder was calculated by applying the direct simulation Monte Carlo method. The drag force; the coefficients of pressure, skin friction, and heat transfer; and the fields of density, temperature, and velocity were calculated as functions of the Reynolds number at a fixed Mach number. The variation of the Reynolds number is related to the variation of the Knudsen number, which characterizes the gas rarefaction. The present results show that all quantities in the transition regime (Knudsen number of order unity) differ significantly from those in the hydrodynamic regime, where the Knudsen number is small.
Abstract:
This master's thesis investigates how, by analysing the behaviour of an online store's visitor flow, well-founded decisions can be made about the relevant items and their parameters in a situation where more extensive historical data on realised sales are lacking. Based on a review of the theory, a solution model was constructed that rests on forming and testing potential demand drivers. The driver selected on the basis of the test series is used to estimate the demand for items, so that it can be used in place of realised sales in, for example, a Pareto analysis. In this way attention can be focused on a limited number of high-importance items and on their detailed parameters, which matter in customers' purchase-decision situations. In addition, it is possible to identify items whose problem is either poor online visibility or incompatibility with customer needs. The principle used for testing the drivers is a consistency examination of cumulative distribution functions, which consists of three consecutive stages: visual inspection, a two-sample two-sided Kolmogorov-Smirnov goodness-of-fit test, and a Pearson correlation test. The model, and the demand driver produced with it, were tested in an online store serving consumer customers in the boating sector, where it identified at the head of the Pareto distribution a large number of items whose parameters contained factors unfavourable to sales. At the other end of the distribution, hundreds of items were identified whose problem is apparently either poor online visibility or incompatibility with customer needs.
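Two of the three test stages lend themselves to a short sketch using standard library routines (the visual ECDF comparison remains a manual step). Treating the driver values and realised sales as paired per item is an assumption made here for illustration:

```python
from scipy import stats

def driver_consistency(driver_values, sales_values):
    """Two stages of the driver test: a two-sample, two-sided
    Kolmogorov-Smirnov test comparing the empirical CDFs, and a Pearson
    correlation test on the per-item paired values (equal lengths
    required for the latter)."""
    ks_stat, ks_p = stats.ks_2samp(driver_values, sales_values)
    r, r_p = stats.pearsonr(driver_values, sales_values)
    return {"ks_stat": ks_stat, "ks_p": ks_p,
            "pearson_r": r, "pearson_p": r_p}
```

A driver that passes both tests can then substitute for realised sales in the Pareto analysis described above.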
Abstract:
Solid state nuclear magnetic resonance (NMR) spectroscopy is a powerful technique for studying structural and dynamical properties of disordered and partially ordered materials, such as glasses, polymers, liquid crystals, and biological materials. In particular, two-dimensional (2D) NMR methods such as ¹³C-¹³C correlation spectroscopy under magic-angle-spinning (MAS) conditions have been used to measure structural constraints on the secondary structure of proteins and polypeptides. Amyloid fibrils implicated in a broad class of diseases such as Alzheimer's are known to contain a particular repeating structural motif, called a β-sheet. However, the details of such structures are poorly understood, primarily because the structural constraints extracted from the 2D NMR data in the form of the so-called Ramachandran (backbone torsion) angle distributions, g(φ,ψ), are strongly model-dependent. Inverse theory methods are used to extract Ramachandran angle distributions from a set of 2D MAS and constant-time double-quantum-filtered dipolar recoupling (CTDQFD) data. This is a vastly underdetermined problem, and the stability of the inverse mapping is problematic. Tikhonov regularization is a well-known method of improving the stability of the inverse; in this work it is extended to use a new regularization functional based on the Laplacian rather than on the norm of the function itself. In this way, one makes use of the inherently two-dimensional nature of the underlying Ramachandran maps. In addition, a modification of the existing numerical procedure is performed, as appropriate for an underdetermined inverse problem. Stability of the algorithm with respect to the signal-to-noise (S/N) ratio is examined using a simulated data set. The results show excellent convergence to the true angle distribution function g(φ,ψ) for S/N ratios above 100.
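A minimal sketch of Laplacian-based Tikhonov regularization for such an underdetermined inverse problem, assuming a linear forward model A mapping the flattened angle distribution to the data, and periodic boundaries in the torsion angles (all names are illustrative, not the authors' code):

```python
import numpy as np

def tikhonov_laplacian(A, b, lam, grid_shape):
    """Solve min ||A x - b||^2 + lam ||L x||^2 via the normal equations
    (A^T A + lam L^T L) x = A^T b, where L is a 2-D discrete Laplacian
    on the (phi, psi) grid holding the flattened distribution x."""
    ny, nx = grid_shape
    n = ny * nx
    L = -4.0 * np.eye(n)
    for i in range(ny):
        for j in range(nx):
            k = i * nx + j
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = (i + di) % ny, (j + dj) % nx   # periodic angles
                L[k, ii * nx + jj] += 1.0
    x = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
    return x.reshape(grid_shape)
```

Penalizing the Laplacian rather than the norm of x itself favours smooth Ramachandran maps without biasing their overall amplitude, which is the design choice the abstract highlights.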
Abstract:
A mixture of chlorhexidine digluconate (CHG) with the glycerophospholipid 1,2-dimyristoyl-d54-glycero-3-phosphocholine (DMPC-d54) was analyzed using ²H nuclear magnetic resonance. To analyze the powder spectra, the de-Pake-ing technique was used. The method is able to extract simultaneously both the orientation distribution function and the anisotropy distribution function. The spectral moments, average order parameter profiles, and longitudinal and transverse relaxation times were used to explore the structural phase behaviour of various DMPC/CHG mixtures in the temperature range 5-60°C.
Abstract:
This paper considers various asymptotic approximations in the near-integrated first-order autoregressive model with a non-zero initial condition. We first extend the work of Knight and Satchell (1993), who considered the random walk case with a zero initial condition, to derive the expansion of the relevant joint moment generating function in this more general framework. We also consider, as alternative approximations, the stochastic expansion of Phillips (1987c) and the continuous-time approximation of Perron (1991). We assess whether these alternative methods provide an adequate approximation to the finite-sample distribution of the least-squares estimator in a first-order autoregressive model. The results show that, when the initial condition is non-zero, Perron's (1991) continuous-time approximation performs very well, while the others offer improvements only when the initial condition is zero.
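The finite-sample distribution that these expansions approximate is easy to simulate directly. A minimal sketch for the near-integrated AR(1) model with local-to-unity parameter c and a non-zero initial value (names and defaults are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def ols_ar1_distribution(T=100, c=-5.0, y0=5.0, n_reps=10000):
    """Simulate the finite-sample distribution of the least-squares
    estimator of rho in y_t = rho*y_{t-1} + e_t, with rho = 1 + c/T
    (near-integrated) and initial condition y_0 = y0."""
    rho = 1.0 + c / T
    estimates = np.empty(n_reps)
    for r in range(n_reps):
        y = np.empty(T + 1)
        y[0] = y0
        e = rng.standard_normal(T)
        for t in range(T):
            y[t + 1] = rho * y[t] + e[t]
        ylag = y[:-1]
        estimates[r] = (ylag * y[1:]).sum() / (ylag ** 2).sum()
    return estimates
```

The histogram of these estimates is the benchmark against which the moment-generating-function expansion and the continuous-time approximation are judged.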
Abstract:
This note investigates the adequacy of the finite-sample approximation provided by the Functional Central Limit Theorem (FCLT) when the errors are allowed to be dependent. We compare the distribution of the scaled partial sums of some data with the distribution of the Wiener process to which they converge. Our setup is purposely very simple in that it considers data generated from an ARMA(1,1) process. Yet this is sufficient to bring out interesting conclusions about the particular elements that cause the approximations to be inadequate even in quite large sample sizes.
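A minimal sketch of the object being compared: one path of the scaled partial-sum process for ARMA(1,1) errors, standardized by the long-run standard deviation so that the FCLT limit is a standard Wiener process (names and defaults are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def scaled_partial_sums(T=200, phi=0.5, theta=0.3):
    """One path of S_[Tr] / (sigma * sqrt(T)) for u_t following an
    ARMA(1,1): u_t = phi*u_{t-1} + e_t + theta*e_{t-1}.  For unit
    innovation variance, the long-run std dev is (1+theta)/(1-phi)."""
    e = rng.standard_normal(T + 1)
    u = np.empty(T)
    u[0] = e[1] + theta * e[0]        # simple (non-stationary) start
    for t in range(1, T):
        u[t] = phi * u[t - 1] + e[t + 1] + theta * e[t]
    sigma = (1.0 + theta) / (1.0 - phi)
    return np.cumsum(u) / (sigma * np.sqrt(T))
```

Comparing the empirical distribution of such paths (e.g. their endpoints or suprema) with the corresponding Wiener functionals is exactly the adequacy check the note performs.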