945 results for Probability Distribution Function


Relevance:

80.00%

Publisher:

Abstract:

Extreme stock price movements are of great concern to both investors and the economy as a whole. For an investor, a single large negative return, or a combination of several smaller ones, can possibly wipe out so much capital that the firm or portfolio becomes illiquid or insolvent. If enough investors experience such losses, the shock can propagate through the entire economy, as in the stock market crash of 1987. There has also been considerable recent interest in the increasing volatility of stock prices. This study presents an analysis of extreme stock price movements. The data consisted of daily returns for the Standard & Poor's 500 index from January 3, 1978 to May 31, 2001. The research questions were analyzed using the statistical models provided by extreme value theory. One of the difficulties in examining stock price data is that there is no consensus regarding the correct shape of the distribution function generating the data. An advantage of extreme value theory is that no detailed knowledge of this distribution function is required to apply the asymptotic theory; we focus only on the tail of the distribution. Extreme value theory allows us to estimate a tail index, which we use to derive bounds on returns at very low exceedance probabilities. Such information is useful in evaluating the volatility of stock prices. There are three possible limit laws for the maximum: Gumbel (thin-tailed), Fréchet (thick-tailed), or Weibull (bounded tail). Results indicated that extreme returns during the period studied follow a Fréchet distribution. Thus, this study finds that extreme value analysis is a valuable tool for examining stock price movements and can be more efficient than the usual variance in measuring risk.
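Where the abstract mentions estimating a tail index from the returns, a standard tool is the Hill estimator applied to the largest observations. The sketch below (Python, with synthetic Pareto data and an arbitrary choice of k, neither taken from the study) shows the idea.

```python
# A minimal, self-contained sketch of tail-index estimation in the spirit of
# the extreme value analysis described above. The synthetic data and the
# choice k = 100 are illustrative assumptions, not taken from the study.
import numpy as np

def hill_estimator(losses, k):
    """Hill estimate of the tail index alpha from the k largest observations.

    losses : 1-D array of positive values (e.g. magnitudes of negative daily returns).
    k      : number of upper order statistics used.
    """
    x = np.sort(losses)[::-1]            # descending order statistics
    log_excess = np.log(x[:k]) - np.log(x[k])
    gamma_hat = log_excess.mean()        # extreme value index (gamma = 1/alpha)
    return 1.0 / gamma_hat

# Illustrative usage with synthetic heavy-tailed data (Pareto, alpha = 3):
rng = np.random.default_rng(0)
sample = rng.pareto(3.0, size=5000) + 1.0
print(hill_estimator(sample, k=100))     # should be close to 3
```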

Relevance:

80.00%

Publisher:

Abstract:

Prior research has established that the idiosyncratic volatility of security prices exhibits a positive trend. This trend and other factors have made the merits of investment diversification and portfolio construction more compelling. A new optimization technique, a greedy algorithm, is proposed to optimize the weights of assets in a portfolio. The main benefits of using this algorithm are to: (a) increase the efficiency of the portfolio optimization process, (b) enable large-scale optimizations, and (c) improve the resulting optimal weights. In addition, the technique utilizes a novel approach to the construction of a time-varying covariance matrix. This involves the application of a modified integrated dynamic conditional correlation GARCH (IDCC-GARCH) model to account for the dynamics of the conditional covariance matrices that are employed. The stochastic aspects of the expected returns of the securities are integrated into the technique through Monte Carlo simulation. Instead of representing the expected returns as deterministic values, they are assigned simulated values based on their historical measures. The time series of the securities are fitted to a probability distribution that matches their characteristics using the Anderson-Darling goodness-of-fit criterion. Simulated and actual data sets are used to further generalize the results. Using the S&P500 securities as the base, 2000 simulated data sets are created by Monte Carlo simulation; in addition, the Russell 1000 securities are used to generate 50 sample data sets. The results indicate an increase in risk-return performance. Taking Value-at-Risk (VaR) as the criterion and the Crystal Ball portfolio optimizer, a commercial product currently available on the market, as the benchmark, the new greedy technique clearly outperforms the alternatives on samples of the S&P500 and Russell 1000 securities. The resulting improvements in performance are consistent across five security selection methods (maximum, minimum, random, absolute minimum, and absolute maximum) and three covariance structures (unconditional, orthogonal GARCH, and integrated dynamic conditional GARCH).
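As a rough illustration of the kind of greedy weight allocation the abstract describes, the following sketch allocates weight in small increments to whichever asset most reduces a historical Value-at-Risk estimate. The step size, VaR level, and synthetic return matrix are assumptions for illustration; the study's IDCC-GARCH covariance construction and Monte Carlo return simulation are not reproduced here.

```python
# A minimal sketch of a greedy weight-allocation loop minimizing historical
# Value-at-Risk. Illustrative assumptions only; not the study's algorithm.
import numpy as np

def historical_var(portfolio_returns, level=0.95):
    """VaR taken as the (1 - level) quantile of the return distribution, sign-flipped."""
    return -np.quantile(portfolio_returns, 1.0 - level)

def greedy_weights(returns, step=0.01, level=0.95):
    """Allocate weight in increments of `step`, each time to the asset whose
    increment yields the lowest portfolio VaR.

    returns : (T, N) array of historical asset returns.
    """
    n_assets = returns.shape[1]
    weights = np.zeros(n_assets)
    n_steps = int(round(1.0 / step))
    for _ in range(n_steps):
        candidate_vars = []
        for j in range(n_assets):
            trial = weights.copy()
            trial[j] += step
            candidate_vars.append(historical_var(returns @ trial, level))
        weights[int(np.argmin(candidate_vars))] += step
    return weights

# Illustrative usage with synthetic returns for 5 assets over 500 days:
rng = np.random.default_rng(1)
rets = rng.normal(0.0005, 0.01, size=(500, 5))
print(greedy_weights(rets).round(2))
```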

Relevance:

80.00%

Publisher:

Abstract:

In the finance literature, many economic theories and models have been proposed to explain and estimate the relationship between risk and return. Assuming risk aversion and rational behavior on the part of the investor, models are developed that are supposed to help in forming efficient portfolios, which either maximize the expected rate of return for a given level of risk or minimize risk for a given expected rate of return. One of the most widely used models for forming these efficient portfolios is Sharpe's Capital Asset Pricing Model (CAPM). In the development of this model it is assumed that investors have homogeneous expectations about the future probability distribution of the rates of return; that is, every investor assumes the same values for the parameters of that distribution. Likewise, homogeneity of financial volatility is commonly assumed, where volatility is taken as investment risk and is usually measured by the variance of the rates of return. Typically the square root of the variance is used to define financial volatility, and it is also often assumed that the data generating process consists of independent and identically distributed random variables. This again implies that financial volatility is measured from homogeneous time series with stationary parameters. In this dissertation, we investigate these homogeneity assumptions about market agents. We provide evidence of heterogeneity in market participants' information, objectives, and expectations about the parameters of the probability distribution of prices, as shown by differences in the empirical distributions corresponding to different time scales, which in this study are associated with different classes of investors. We also demonstrate that the statistical properties of the underlying data generating processes, including the volatility of the rates of return, are quite heterogeneous. In other words, we provide empirical evidence against the traditional homogeneity assumptions using non-parametric wavelet analysis of trading data. The results show heterogeneity of financial volatility at different time scales, and time scale is one of the most important aspects in which trading behavior differs. In fact, we conclude that heterogeneity, as posited by the Heterogeneous Markets Hypothesis, is the norm and not the exception.
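A simple way to see scale-dependent volatility of the kind discussed above is a wavelet variance decomposition of a return series. The sketch below uses PyWavelets with an assumed wavelet ('db4'), level count, and synthetic data; it is not the dissertation's actual procedure.

```python
# A minimal sketch of a scale-by-scale (wavelet) variance decomposition of a
# return series. Wavelet choice, levels, and data are illustrative assumptions.
import numpy as np
import pywt

def wavelet_variance_by_scale(returns, wavelet="db4", level=5):
    """Return the sample variance of the detail coefficients at each level.

    Level 1 captures the finest (shortest) time scale and level `level` the
    coarsest, so strongly unequal values across levels indicate that volatility
    depends on the time scale at which the series is observed.
    """
    coeffs = pywt.wavedec(returns, wavelet, level=level)
    details = coeffs[1:]            # drop the approximation coefficients
    # wavedec orders details from coarsest to finest; reverse to report levels 1..level
    return [float(np.var(d)) for d in reversed(details)]

# Illustrative usage with synthetic returns:
rng = np.random.default_rng(2)
r = rng.normal(0.0, 0.01, size=4096)
print(wavelet_variance_by_scale(r))
```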

Relevance:

80.00%

Publisher:

Abstract:

In this work we investigate some aspects of the two-dimensional flow of a viscous Newtonian fluid through a disordered porous medium, modeled by a random fractal system similar to the Sierpinski carpet. This fractal is formed by obstacles of various sizes, whose size distribution follows a power law, placed at random in a rectangular channel. The velocity field and other details of the fluid dynamics are obtained by numerically solving the Navier-Stokes and continuity equations at the pore level, where the flow through a porous medium actually takes place. The results of the numerical simulations allowed us to analyze the distribution of shear stresses developed at the solid-fluid interfaces and to find algebraic relations between the viscous (friction) forces and the geometric parameters of the model, including its fractal dimension. Based on the numerical results, we propose scaling relations involving the relevant parameters of the phenomenon, allowing us to quantify the fractions of these forces associated with each size class of obstacles. Finally, it was also possible to make inferences about the fluctuations in the shape of the distribution of viscous stresses developed on the surfaces of the obstacles.
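As an illustration of obstacles whose size distribution follows a power law, the following sketch draws sizes from a bounded power law by inverse-transform sampling. The exponent and size bounds are assumptions for illustration; the study's actual parameters are not given in the abstract.

```python
# A minimal sketch of drawing obstacle sizes from a truncated power-law
# distribution by inverse-transform sampling. All parameters are illustrative.
import numpy as np

def power_law_sizes(n, s_min, s_max, alpha, rng=None):
    """Draw n sizes with density p(s) ~ s**(-alpha) on [s_min, s_max], alpha != 1."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random(n)
    a = 1.0 - alpha
    # Invert the CDF of the truncated power law
    return (s_min**a + u * (s_max**a - s_min**a)) ** (1.0 / a)

# Illustrative usage: 1000 obstacle sizes between 1 and 64 lattice units, alpha = 2
sizes = power_law_sizes(1000, 1.0, 64.0, 2.0, np.random.default_rng(3))
print(sizes.min(), sizes.max(), sizes.mean())
```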

Relevance:

80.00%

Publisher:

Abstract:

Various physical systems have dynamics that can be modeled by percolation processes. Percolation is used to study issues ranging from fluid diffusion through disordered media to the fragmentation of a computer network caused by hacker attacks. A common feature of all these systems is the presence of two non-coexistent regimes associated with certain properties of the system; for example, a disordered medium may or may not allow the fluid to flow, depending on its porosity. The change from one regime to the other characterizes the percolation phase transition. The standard way of analyzing this transition uses the order parameter, a variable related to some characteristic of the system that is zero in one of the regimes and nonzero in the other. The proposal introduced in this thesis is that this phase transition can be investigated without explicit use of the order parameter, but rather through the Shannon entropy, a measure of the degree of uncertainty in the information content of a probability distribution. The proposal is evaluated in the context of cluster formation in random graphs, and we apply the method to both classical (Erdős–Rényi) percolation and explosive percolation. It is based on computing the entropy of the cluster-size probability distribution, and the results show that the critical point of the transition is related to the derivatives of the entropy. Furthermore, the difference between the smooth and abrupt character of the classical and explosive percolation transitions, respectively, is reinforced by the observation that the entropy attains a maximum at the critical point of the classical transition, whereas no such correspondence occurs for explosive percolation.
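The quantity at the heart of the proposal can be illustrated directly: the Shannon entropy of the cluster-size distribution of an Erdős–Rényi graph as the mean degree is swept through the classical critical point at mean degree 1. The sketch below adopts the node-based convention for the cluster-size distribution and uses an assumed graph size; the thesis's exact definitions may differ.

```python
# A minimal sketch: Shannon entropy of the cluster-size distribution of an
# Erdős–Rényi random graph as a function of the mean degree. Graph size and
# the grid of mean degrees are illustrative assumptions.
import numpy as np
import networkx as nx

def cluster_size_entropy(graph):
    """Shannon entropy of the probability that a randomly chosen node
    belongs to a cluster of a given size (one common convention)."""
    sizes = np.array([len(c) for c in nx.connected_components(graph)])
    n = sizes.sum()
    probs = np.array([s * np.count_nonzero(sizes == s) / n for s in np.unique(sizes)])
    return float(-(probs * np.log(probs)).sum())

# Entropy as a function of the mean degree <k> = p * (n - 1); the classical
# transition occurs at <k> = 1.
n = 2000
for k in np.linspace(0.2, 2.0, 10):
    G = nx.gnp_random_graph(n, k / (n - 1), seed=4)
    print(round(k, 2), round(cluster_size_entropy(G), 3))
```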

Relevance:

80.00%

Publisher:

Abstract:

The population of naive T cells in the periphery is best described by determining both its T cell receptor diversity, or number of clonotypes, and the sizes of its clonal subsets. In this paper, we make use of a previously introduced mathematical model of naive T cell homeostasis to study the fate and potential of naive T cell clonotypes in the periphery. This is achieved by introducing several new stochastic descriptors for a given naive T cell clonotype, such as its maximum clonal size, the time to reach this maximum, the number of proliferation events required to reach this maximum, the rate of contraction of the clonotype on its way to extinction, and the time to a given number of proliferation events. Our results show that two fates can be identified for the dynamics of the clonotype: extinction in the short term if the clonotype experiences too hostile a peripheral environment, or establishment in the periphery in the long term. In the latter case the probability mass function of the maximum clonal size is bimodal, with one mode near one and the other far away from it. Our model also indicates that the fate of a recent thymic emigrant (RTE) during its journey in the periphery has a clear stochastic component, in which the probability of extinction cannot be neglected even in a friendly but competitive environment. On the other hand, more deterministic behaviour can be expected in the long-term size of the clonotype seeded by the RTE, once it escapes extinction.
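As a purely generic illustration of the stochastic descriptors listed above (maximum clonal size and the time at which it is reached), the following sketch simulates a linear birth-death process with the Gillespie algorithm. The rates and the birth-death form are assumptions for illustration and are not the paper's model of naive T cell homeostasis.

```python
# A generic birth-death (Gillespie) sketch for one clonotype, illustrating the
# kind of stochastic descriptors discussed above. Rates are illustrative only.
import numpy as np

def simulate_clonotype(birth=1.0, death=1.1, n0=1, t_max=50.0, rng=None):
    """Simulate one trajectory; return (maximum size, time at which it was reached)."""
    rng = np.random.default_rng() if rng is None else rng
    n, t = n0, 0.0
    n_max, t_at_max = n0, 0.0
    while n > 0 and t < t_max:
        total_rate = (birth + death) * n
        t += rng.exponential(1.0 / total_rate)
        if rng.random() < birth / (birth + death):
            n += 1
        else:
            n -= 1
        if n > n_max:
            n_max, t_at_max = n, t
    return n_max, t_at_max

# Empirical distribution of the maximum clonal size over many runs:
rng = np.random.default_rng(5)
maxima = [simulate_clonotype(rng=rng)[0] for _ in range(2000)]
print(np.bincount(maxima)[:10])
```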

Relevance:

80.00%

Publisher:

Abstract:

We report an investigation of the statistics of group delay for few-mode fibres operating in the weak and strong linear coupling regimes, as well as in the intermediate coupling regime. A single expression linking the standard deviation of the group delay spread to the fibre linear mode coupling is validated for any coupling regime, considering up to six linearly polarized guided modes. Furthermore, the study of the probability density function of the group delays allowed us to derive and validate an analytical estimate of the maximum group delay spread as a function of linear mode coupling.

Relevance:

80.00%

Publisher:

Abstract:

The main goal of this thesis is to show the versatility of glancing angle deposition (GLAD) thin films in applications. This research first studies the effect of selected deposition variables on GLAD thin films and then demonstrates the flexibility of GLAD films to be incorporated into two different applications: (1) as a reflective coating in low-level concentration photovoltaic systems, and (2) as an anode structure in dye-sensitized solar cells (DSSC). A particular type of microstructure composed of tilted micro-columns of titanium is fabricated by GLAD. The microstructures form elongated, fan-like tilted micro-columns that exhibit anisotropic scattering. The texture of the thin films changes from fiber texture to tilted fiber texture as the vapor incidence angle increases, and at very large deposition angles a biaxial texture forms. The morphology of the thin films deposited under extreme shadowing conditions and at high temperature (below the recrystallization zone) shows a porous and inclined micro-columnar structure, resulting from the dominance of shadowing over adatom surface diffusion. The anisotropic scattering behavior of the tilted Ti thin-film coatings is quantified by bidirectional reflectance distribution function (BRDF) measurements and is found to be consistent with reflectance from the microstructure acting as an array of inclined micro-mirrors that redirect the incident light in a non-specular reflection. The surface of the tilted Ti micro-columns is coated with silver to enhance the total reflectance of the Ti thin films while keeping the anisotropic scattering behavior. When such a coating is used as a booster reflector in a laboratory-scale low-level concentration photovoltaic system, the short-circuit current of the reference silicon solar cell increases by 25%. Finally, based on the scattering properties of the tilted micro-columnar microstructure, its scattering effect is studied as part of a titanium dioxide microstructure for the anode in DSSCs. GLAD-fabricated TiO2 anode microstructures consisting of vertical micro-columns alone, and of vertical micro-columns topped with tilted micro-columns, are compared. The solar cell with the two-part microstructure shows the highest monochromatic incident photon-to-current efficiency, a 20% improvement compared to the vertical microstructure, and the efficiency of the cell increases from 1.5% to 2% due to the scattering layer.

Relevance:

80.00%

Publisher:

Abstract:

We calculate the first two moments and full probability distribution of the work performed on a system of bosonic particles in a two-mode Bose-Hubbard Hamiltonian when the self-interaction term is varied instantaneously or with a finite-time ramp. In the instantaneous case, we show how the irreversible work scales differently depending on whether the system is driven to the Josephson or Fock regime of the bosonic Josephson junction. In the finite-time case, we use optimal control techniques to substantially decrease the irreversible work to negligible values. Our analysis can be implemented in present-day experiments with ultracold atoms and we show how to relate the work statistics to that of the population imbalance of the two modes.
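The abstract does not restate its definitions, but in this setting the work distribution is normally defined through the standard two-projective-measurement scheme. The LaTeX snippet below records that standard form (assumed here, not quoted from the paper), together with the irreversible work relative to the free-energy difference.

```latex
% Standard two-point-measurement work statistics (assumed convention):
% energies E_n^0 before the quench/ramp of duration \tau, E_m^\tau after it.
P(W) \;=\; \sum_{n,m} p_n^{0}\, p^{\tau}_{m|n}\,
           \delta\!\left(W - \bigl(E_m^{\tau} - E_n^{0}\bigr)\right),
\qquad
W_{\mathrm{irr}} \;=\; \langle W \rangle - \Delta F .
```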

Relevance:

80.00%

Publisher:

Abstract:

A regional offset (ΔR) from the marine radiocarbon calibration curve is widely used in calibration software (e.g. CALIB, OxCal) but is often not calculated correctly. While the calculation is relatively straightforward for known-age samples, such as mollusks from museum collections or banded corals, it is more difficult to calculate ΔR and its uncertainty for 14C dates on paired marine and terrestrial samples. Previous researchers have often utilized classical intercept methods (Reimer et al. 2002; Russell et al. 2011; Dewar et al. 2012), but these do not account for the full calibrated probability density function (PDF). We have developed an online application for performing these calculations for known-age samples, paired marine and terrestrial 14C dates, or U-Th dated corals, which is available at http://calib.qub.ac.uk/deltar
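To illustrate why using the full calibrated PDF matters, the following Monte Carlo sketch propagates a paired terrestrial/marine measurement through a toy calibration curve to obtain a distribution for ΔR rather than a single intercept value. The curves and all numbers are invented for illustration; this is not the algorithm of the deltar application cited above.

```python
# A minimal Monte Carlo sketch of a "full PDF" Delta-R calculation for a
# paired marine/terrestrial sample. All curves and numbers are illustrative.
import numpy as np

rng = np.random.default_rng(6)

# Toy calibration curves: calendar age (cal BP, used as index) -> 14C age (BP)
cal_bp = np.arange(0, 3000)
terrestrial_curve = cal_bp + 40.0 * np.sin(cal_bp / 150.0)   # hypothetical
marine_curve = terrestrial_curve + 400.0                      # hypothetical reservoir

def delta_r_samples(terr_age, terr_err, marine_age, marine_err, n=100_000):
    # 1. Calibrate the terrestrial date: weight each calendar year by the
    #    likelihood of the measured terrestrial 14C age.
    like = np.exp(-0.5 * ((terr_age - terrestrial_curve) / terr_err) ** 2)
    pdf = like / like.sum()
    cal_samples = rng.choice(cal_bp, size=n, p=pdf)
    # 2. Expected marine 14C age for each sampled calendar year.
    expected_marine = marine_curve[cal_samples]
    # 3. Delta-R = measured marine age (with its error) minus expected marine age.
    measured = rng.normal(marine_age, marine_err, size=n)
    return measured - expected_marine

dr = delta_r_samples(terr_age=950.0, terr_err=30.0, marine_age=1420.0, marine_err=35.0)
print(round(dr.mean(), 1), round(dr.std(), 1))
```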

Relevance:

80.00%

Publisher:

Abstract:

The aim of this work was to track and verify the delivery of respiratory-gated irradiations, performed with three versions of TrueBeam linac, using a novel phantom arrangement that combined the OCTAVIUS® SRS 1000 array with a moving platform. The platform was programmed to generate sinusoidal motion of the array. This motion was tracked using the real-time position management (RPM) system and four amplitude gating options were employed to interrupt MV beam delivery when the platform was not located within set limits. Time-resolved spatial information extracted from analysis of x-ray fluences measured by the array was compared to the programmed motion of the platform and to the trace recorded by the RPM system during the delivery of the x-ray field. Temporal data recorded by the phantom and the RPM system were validated against trajectory log files, recorded by the linac during the irradiation, as well as oscilloscope waveforms recorded from the linac target signal. Gamma analysis was employed to compare time-integrated 2D x-ray dose fluences with theoretical fluences derived from the probability density function for each of the gating settings applied, where gamma criteria of 2%/2 mm, 1%/1 mm and 0.5%/0.5 mm were used to evaluate the limitations of the RPM system. Excellent agreement was observed in the analysis of spatial information extracted from the SRS 1000 array measurements. Comparisons of the average platform position with the expected position indicated absolute deviations of <0.5 mm for all four gating settings. Differences were observed when comparing time-resolved beam-on data stored in the RPM files and trajectory logs to the true target signal waveforms. Trajectory log files underestimated the cycle time between consecutive beam-on windows by 10.0 ± 0.8 ms. All measured fluences achieved 100% pass-rates using gamma criteria of 2%/2 mm, and 50% of the fluences achieved pass-rates >90% when criteria of 0.5%/0.5 mm were used. Results using this novel phantom arrangement indicate that the RPM system is capable of accurately gating x-ray exposure during the delivery of a fixed-field treatment beam.
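For reference, the gamma comparison mentioned above combines a dose-difference criterion with a distance-to-agreement criterion (the Low et al. formulation). The sketch below is a simplified one-dimensional version with assumed criteria and synthetic profiles; the published analysis was performed on 2D fluences.

```python
# A minimal 1-D sketch of the gamma-index comparison: each reference point
# passes if some evaluated point lies within the combined dose-difference /
# distance-to-agreement ellipsoid. Criteria and profiles are illustrative.
import numpy as np

def gamma_1d(x, dose_ref, dose_eval, dose_crit=0.02, dist_crit=2.0):
    """Return the gamma index at each reference position.

    x         : positions in mm (same grid for both profiles, for simplicity)
    dose_crit : dose-difference criterion as a fraction of the maximum dose
    dist_crit : distance-to-agreement criterion in mm
    """
    dd_norm = dose_crit * dose_ref.max()
    gammas = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist_term = ((x - xi) / dist_crit) ** 2
        dose_term = ((dose_eval - di) / dd_norm) ** 2
        gammas[i] = np.sqrt((dist_term + dose_term).min())
    return gammas

# Illustrative usage: a Gaussian profile and a slightly shifted copy.
x = np.linspace(-50, 50, 201)
ref = np.exp(-x**2 / 400.0)
ev = np.exp(-(x - 0.5)**2 / 400.0)
g = gamma_1d(x, ref, ev, dose_crit=0.02, dist_crit=2.0)
print(f"pass rate: {100.0 * np.mean(g <= 1.0):.1f}%")
```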

Relevance:

80.00%

Publisher:

Abstract:

The structure of a turbulent non-premixed flame of a biogas fuel in a hot and diluted coflow mimicking moderate or intense low-oxygen dilution (MILD) combustion is studied numerically. The biogas fuel is obtained by diluting Dutch natural gas (DNG) with CO2. The results for biogas combustion are compared with those for DNG combustion in the Delft Jet-in-Hot-Coflow (DJHC) burner. New experimental measurements of lift-off height and of velocity and temperature statistics have been made to provide a database for evaluating the capability of numerical methods in predicting the flame structure. Compared to the lift-off height of the DNG flame, addition of 30% carbon dioxide to the fuel increases the lift-off height by less than 15%. Numerical simulations are conducted by solving the RANS equations using the Reynolds stress model (RSM) for turbulence in combination with the Eddy Dissipation Concept (EDC) and transported probability density function (PDF) approaches as turbulence-chemistry interaction models. The DRM19 reduced mechanism is used for the chemical kinetics with the EDC model. A tabulated chemistry model based on the Flamelet Generated Manifold (FGM) is adopted in the PDF method. The table describes a non-adiabatic three-stream mixing problem between fuel, coflow and ambient air based on igniting counterflow diffusion flamelets. The results show that the EDC/DRM19 and PDF/FGM models predict the experimentally observed decreasing trend of lift-off height with increasing coflow temperature. Although more detailed chemistry is used with EDC, the temperature fluctuations at the coflow inlet (approximately 100 K) cannot be included, resulting in a significant overprediction of the flame temperature. Only the PDF modeling results with temperature fluctuations predict the correct mean temperature profiles of the biogas case and compare well with the experimental temperature distributions.