31 results for distribution (probability theory)

in CentAUR: Central Archive, University of Reading - UK


Relevance: 80.00%

Abstract:

Despite the success of studies attempting to integrate remotely sensed data and flood modelling, and the need to provide near-real-time data routinely on a global scale as well as to set up online data archives, there is to date a lack of spatially and temporally distributed hydraulic parameters to support ongoing modelling efforts. The objective of this project is therefore to provide a global evaluation and benchmark data set of floodplain water stages, with uncertainties, and their assimilation in a large-scale flood model using space-borne radar imagery. An algorithm is developed for the automated retrieval of water stages with uncertainties from a sequence of radar images, and the data are assimilated in a flood model using the Tewkesbury 2007 flood event as a feasibility study. The retrieval method we employ is based on possibility theory, an extension of fuzzy set theory that encompasses probability theory. We first identify the main sources of uncertainty in the retrieval of water stages from radar imagery and define physically meaningful ranges of parameter values for each. Possibilities of values are then computed for each parameter using a triangular 'membership' function. This procedure allows the computation of possible values of water stages at maximum flood extent at many different locations along a river. At a later stage in the project these data are used in the assimilation, calibration or validation of a flood model. The application is subsequently extended to a global scale using wide-swath radar imagery and a simple global flood forecasting model, thereby providing improved river discharge estimates to update the latter.
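The triangular 'membership' function mentioned above can be sketched as follows; the parameter name and range are illustrative assumptions, not values from the study:

```python
def triangular_possibility(x, lo, mode, hi):
    """Possibility of value x under a triangular membership function
    peaking at `mode` and vanishing outside the range [lo, hi]."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= mode:
        return (x - lo) / (mode - lo)
    return (hi - x) / (hi - mode)

# Hypothetical parameter: a radar-derived water stage (m) with a
# physically meaningful range of 10-14 m and most plausible value 12 m.
stages = [10.5, 12.0, 13.5]
possibilities = [triangular_possibility(s, 10.0, 12.0, 14.0) for s in stages]
```

Evaluating the function over the parameter range yields the possibility of each candidate water-stage value, which is how possibilities at maximum flood extent would be assembled location by location.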

Relevance: 80.00%

Abstract:

Scale functions play a central role in the fluctuation theory of spectrally negative Lévy processes and often appear in the context of martingale relations. Handling these relations often requires excursion theory rather than Itô calculus, because standard Itô calculus is only applicable to functions with a sufficient degree of smoothness, and knowledge of the precise degree of smoothness of scale functions is seemingly incomplete. The aim of this article is to offer new results concerning properties of scale functions in relation to the smoothness of the underlying Lévy measure. We place particular emphasis on spectrally negative Lévy processes with a Gaussian component and on processes of bounded variation. An additional motivation is the very intimate relation of scale functions to renewal functions of subordinators. The results obtained for scale functions have direct implications, offering new results concerning the smoothness of such renewal functions, for which there seems to be very little existing literature.
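For reference, the scale functions in question are the standard ones from fluctuation theory: for a spectrally negative Lévy process with Laplace exponent $\psi(\beta) = \log \mathbb{E}[e^{\beta X_1}]$, the $q$-scale function $W^{(q)}$ is the unique increasing, right-continuous function on $[0,\infty)$ characterised by its Laplace transform:

```latex
\int_0^\infty e^{-\beta x}\, W^{(q)}(x)\, dx \;=\; \frac{1}{\psi(\beta) - q},
\qquad \beta > \Phi(q),
```

where $\Phi(q) = \sup\{\beta \ge 0 : \psi(\beta) = q\}$ is the right inverse of $\psi$. The smoothness questions studied in the article concern $W^{(q)}$ as a function of $x$.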

Relevance: 80.00%

Abstract:

We wish to characterize when a Lévy process X_t crosses boundaries b(t), in a two-sided sense, for small times t, where b(t) satisfies very mild conditions. An integral test is furnished for computing the value of limsup_{t→0} |X_t|/b(t) = c. In some cases, we also specify a function b(t) in terms of the Lévy triplet such that limsup_{t→0} |X_t|/b(t) = 1.

Relevance: 80.00%

Abstract:

The notions of resolution and discrimination of probability forecasts are revisited. It is argued that the common concept underlying both resolution and discrimination is the dependence (in the sense of probability theory) of forecasts and observations. More specifically, a forecast has no resolution if and only if it has no discrimination, which holds if and only if forecast and observation are stochastically independent. A statistical test for independence is thus also a test for no resolution and, at the same time, for no discrimination. The resolution term in the decomposition of the logarithmic scoring rule and the area under the Receiver Operating Characteristic curve are investigated in this light.
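As an illustration of the equivalence, testing for no resolution/no discrimination amounts to an ordinary Pearson chi-squared independence test on the forecast-observation contingency table. This is a sketch with made-up counts, not data from the paper:

```python
import numpy as np

def chi2_independence(table):
    """Pearson chi-squared statistic for independence of rows and columns."""
    table = np.asarray(table, dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row * col / table.sum()  # counts expected under independence
    return ((table - expected) ** 2 / expected).sum()

# Rows: binned forecast probabilities; columns: event did not occur / occurred.
table = [[90, 10],   # forecasts in [0.00, 0.33)
         [50, 50],   # forecasts in [0.33, 0.66)
         [15, 85]]   # forecasts in [0.66, 1.00]
stat = chi2_independence(table)
# dof = (3-1)*(2-1) = 2; the 5% critical value of chi2(2) is about 5.99.
# A statistic far above that rejects independence, i.e. the forecast has
# resolution (equivalently, discrimination).
```

If forecast and observation were independent, the row-conditional event frequencies would all match the climatological base rate and the statistic would be small.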

Relevance: 30.00%

Abstract:

Sediments play a fundamental role in the behaviour of contaminants in aquatic systems. Various processes in sediments, e.g. adsorption-desorption, oxidation-reduction, ion exchange or biological activity, can cause accumulation or release of metals and anions from the bottom of reservoirs, and have recently been studied in Polish waters [1-3]. Sediment samples from layer A (1-6 cm depth, in direct contact with bottom water), layer B (7-12 cm depth, moderate contact) and layer C (12+ cm depth, in theory an inactive layer) were collected in September 2007 from six sites representing different types of hydrological conditions along the Dobczyce Reservoir (Fig. 1). Water depths at the sampling points varied from 3.5 to 21 m. We have focused on studying the distribution and accumulation of several heavy metals (Cr, Pb, Cd, Cu and Zn) in the sediments. Surface, bottom and pore water (extracted from sediments by centrifugation) samples were also collected. Possible relationships between the heavy-metal distribution in the sediments and the sediment characteristics (mineralogy, organic matter), as well as the Fe, Mn and Ca content of the sediments, have been studied. The O2 concentrations in the water samples were also measured. The heavy metals in the sediments ranged from 19.0 to 226.3 mg/kg of dry mass (ppm). The results show considerable variations in heavy-metal concentrations between the six stations, but not between the individual layers (A, B, C). These variations are related to the mineralogy and chemical composition of the sediments and their pore waters.

Relevance: 30.00%

Abstract:

A large number of processes are involved in the pathogenesis of atherosclerosis but it is unclear which of them play a rate-limiting role. One way of resolving this problem is to investigate the highly non-uniform distribution of disease within the arterial system; critical steps in lesion development should be revealed by identifying arterial properties that differ between susceptible and protected sites. Although the localisation of atherosclerotic lesions has been investigated intensively over much of the 20th century, this review argues that the factor determining the distribution of human disease has only recently been identified. Recognition that the distribution changes with age has, for the first time, allowed it to be explained by variation in transport properties of the arterial wall; hitherto, this view could only be applied to experimental atherosclerosis in animals. The newly discovered transport variations which appear to play a critical role in the development of adult disease have underlying mechanisms that differ from those elucidated for the transport variations relevant to experimental atherosclerosis: they depend on endogenous NO synthesis and on blood flow. Manipulation of transport properties might have therapeutic potential. Copyright (C) 2004 S. Karger AG, Basel.

Relevance: 30.00%

Abstract:

Combined picosecond transient absorption and time-resolved infrared studies were performed, aimed at characterising the low-lying excited states of the cluster [Os3(CO)10(s-cis-L)] (L = cyclohexa-1,3-diene, 1) and monitoring the formation of its photoproducts. Theoretical (DFT and TD-DFT) calculations on the closely related cluster with L = buta-1,3-diene (2') have revealed that the low-lying electronic transitions of these [Os3(CO)10(s-cis-1,3-diene)] clusters have a predominant σ(core)→π*(CO) character. From the lowest σπ* excited state, cluster 1 undergoes fast Os-Os(1,3-diene) bond cleavage (τ = 3.3 ps), resulting in the formation of a coordinatively unsaturated primary photoproduct (1a) with a single CO bridge. A new insight into the structure of the transient has been obtained by DFT calculations. The cleaved Os-Os(1,3-diene) bond is bridged by the donor 1,3-diene ligand, compensating for the electron deficiency at the neighbouring Os centre. Because of the unequal distribution of the electron density in transient 1a, a second CO bridge is formed in 20 ps in the photoproduct [Os3(CO)8(μ-CO)2(cyclohexa-1,3-diene)] (1b). The latter compound, absorbing strongly around 630 nm, mainly regenerates the parent cluster with a lifetime of about 100 ns in hexane. Its structure, as suggested by the DFT calculations, again contains the 1,3-diene ligand coordinated in a bridging fashion. Photoproduct 1b can therefore be assigned as a high-energy coordination isomer of the parent cluster with all Os-Os bonds bridged.

Relevance: 30.00%

Abstract:

The spatial distribution of CO2 levels in a classroom measured in previous fieldwork has demonstrated that there is some evidence of variation in CO2 concentration across a classroom space. Significant fluctuations in CO2 concentration were found at different sampling points, depending on the ventilation strategies and environmental conditions prevailing in individual classrooms. However, how these variations are affected by the emitting sources and the room air movement remains unknown. Hence, it was concluded that detailed investigation of the CO2 distribution needed to be performed on a smaller scale. As a result, it was decided to use an environmental chamber with various methods and rates of ventilation, for the same internal temperature and heat loads, to study the effect of ventilation strategy and air movement on the distribution of CO2 concentration in a room. The role of human exhalation and its interaction with the plume induced by the body's convective flow, and with room air movement due to different ventilation strategies, was studied in a chamber at the University of Reading. These phenomena are considered important in understanding and predicting the flow patterns in a space and how these impact the distribution of contaminants. This paper studies CO2 dispersion and distribution in the exhalation zone of two people sitting in a chamber as well as throughout the occupied zone of the chamber. The horizontal and vertical distributions of CO2 were sampled at locations where CO2 variation was expected to be high. Although room size, source location, ventilation rate and the location of air supply and extract devices can all influence the CO2 distribution, this article gives general guidelines on the optimum positioning of a CO2 sensor in a room.

Relevance: 30.00%

Abstract:

This paper explores a new technique to calculate and plot the distribution of the instantaneous transmit envelope power of OFDMA and SC-FDMA signals from the equation of the probability density function (PDF), solved numerically. The complementary cumulative distribution function (CCDF) of the instantaneous power to average power ratio (IPAPR) is computed from the structure of the transmit system matrix. This helps in intuitively understanding the distribution of the output signal power when the structure of the transmit system matrix and the constellation used are known. The distribution obtained for the OFDMA signal matches the complex normal distribution. The results indicate why the CCDF of the IPAPR in the case of SC-FDMA is better than that of OFDMA for a given constellation. Finally, with this method it is shown again that a cyclic-prefixed DS-CDMA system is one case with optimum IPAPR. The insight that this technique provides may be useful in designing area-optimised digital and power-efficient analogue modules.
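The claim that the OFDMA output approaches a complex normal distribution (so instantaneous power is approximately exponential and the CCDF of IPAPR is roughly exp(-gamma)) can be checked by simulation. This is a hedged numerical sketch, not the paper's analytical PDF method; the QPSK constellation and 64 subcarriers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub, n_sym = 64, 2000

# Random QPSK symbols on each subcarrier; the IFFT gives the time-domain
# OFDMA-like transmit signal (scaled to unit average power).
bits = rng.integers(0, 2, size=(n_sym, n_sub, 2))
qpsk = ((2 * bits[..., 0] - 1) + 1j * (2 * bits[..., 1] - 1)) / np.sqrt(2)
signal = np.fft.ifft(qpsk, axis=1) * np.sqrt(n_sub)

inst_power = np.abs(signal) ** 2
ipapr = inst_power / inst_power.mean()

# Empirical CCDF of IPAPR: P(IPAPR > gamma). With many subcarriers the
# time samples approach a complex normal, so the CCDF is close to exp(-gamma).
gammas = np.linspace(0, 8, 33)
ccdf = np.array([(ipapr > g).mean() for g in gammas])
```

Comparing `ccdf` against `np.exp(-gammas)` shows the near-exponential tail that makes OFDMA's IPAPR worse than SC-FDMA's for the same constellation.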

Relevance: 30.00%

Abstract:

This paper compares a number of different extreme value models for determining the value at risk (VaR) of three LIFFE futures contracts. A semi-nonparametric approach is also proposed, where the tail events are modeled using the generalised Pareto distribution, and normal market conditions are captured by the empirical distribution function. The value at risk estimates from this approach are compared with those of standard nonparametric extreme value tail estimation approaches, with a small sample bias-corrected extreme value approach, and with those calculated from bootstrapping the unconditional density and bootstrapping from a GARCH(1,1) model. The results indicate that, for a holdout sample, the proposed semi-nonparametric extreme value approach yields superior results to other methods, but the small sample tail index technique is also accurate.
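The semi-nonparametric idea (empirical distribution for the body, generalised Pareto tail beyond a high threshold) can be sketched as follows. The simulated losses, the 95% threshold choice and the use of scipy's MLE fit are illustrative assumptions, not the paper's data or exact procedure:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
losses = rng.standard_t(df=4, size=5000)  # illustrative heavy-tailed daily losses

# Normal market conditions: empirical distribution of the body.
# Tail events: GPD fitted to exceedances over a high threshold u.
u = np.quantile(losses, 0.95)
excess = losses[losses > u] - u
xi, _, sigma = genpareto.fit(excess, floc=0)  # shape xi, scale sigma

def var_semi_nonparametric(p):
    """VaR at confidence p: empirical quantile below the threshold,
    peaks-over-threshold GPD formula above it."""
    if p <= 0.95:
        return np.quantile(losses, p)
    tail_frac = len(excess) / len(losses)
    return u + sigma / xi * (((1 - p) / tail_frac) ** (-xi) - 1)

var99 = var_semi_nonparametric(0.99)
```

The GPD branch extrapolates beyond the observed sample in a way a purely empirical quantile cannot, which is the motivation for mixing the two estimators.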

Relevance: 30.00%

Abstract:

Real estate development appraisal is a quantification of future expectations. The appraisal model relies upon the valuer/developer having an understanding of the future in terms of the future marketability of the completed development and the future cost of development. In some cases the developer has some degree of control over the possible variation in the variables, as with the cost of construction through the choice of specification. However, other variables, such as the sale price of the final product, are totally dependent upon the vagaries of the market at the completion date. To try to address the risk of a different outcome to the one expected (modelled) the developer will often carry out a sensitivity analysis on the development. However, traditional sensitivity analysis has generally only looked at the best and worst scenarios and has focused on the anticipated or expected outcomes. This does not take into account uncertainty and the range of outcomes that can happen. A fuller analysis should include examination of the uncertainties in each of the components of the appraisal and account for the appropriate distributions of the variables. Similarly, as many of the variables in the model are not independent, the variables need to be correlated. This requires a standardised approach and we suggest that the use of a generic forecasting software package, in this case Crystal Ball, allows the analyst to work with an existing development appraisal model set up in Excel (or other spreadsheet) and to work with a predetermined set of probability distributions. Without a full knowledge of risk, developers are unable to determine the anticipated level of return that should be sought to compensate for the risk. This model allows the user a better understanding of the possible outcomes for the development. 
Ultimately the final decision will be made relative to current expectations and current business constraints, but by assessing the upside and downside risks more appropriately, the decision maker should be better placed to make a more informed and "better" decision.
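The generic Monte Carlo approach described (Crystal Ball driving a spreadsheet appraisal model) can be sketched in Python. The residual-style profit model, the lognormal distributions and the 0.6 correlation between sale price and build cost below are illustrative assumptions, not figures from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Correlated standard normals drive the two uncertain inputs; a positive
# correlation reflects that strong markets tend to raise both values and costs.
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], corr, size=n)

sale_price = 3_000_000 * np.exp(0.10 * z[:, 0])  # lognormal around 3.0m
build_cost = 1_800_000 * np.exp(0.08 * z[:, 1])  # lognormal around 1.8m
land_and_fees = 900_000                          # treated as fixed here

profit = sale_price - build_cost - land_and_fees

# A range of outcomes rather than just best/worst scenarios:
downside = np.quantile(profit, 0.05)
upside = np.quantile(profit, 0.95)
prob_loss = (profit < 0).mean()
```

Reading off `downside`, `upside` and `prob_loss` gives the decision maker the fuller picture of risk that single best/worst sensitivity runs cannot.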

Relevance: 30.00%

Abstract:

This paper approaches the question of why entrepreneurial firms exist from a broad business historical perspective. It observes that the original development of the modern business enterprise was very strongly associated with entrepreneurial innovation rather than an extension of managerial routine. The widely-used theory of the entrepreneur as a specialist in judgmental decision making is applied to the particular point in time when entrepreneurs had to develop novel organizational designs in what Chandler described as the prelude to the ‘managerial revolution’. The paper illustrates how the theory of entrepreneurship then best explains the rise of the modern corporation by focusing on the case study of vertical integration par excellence, Singer.