65 results for Probability distribution functions
Abstract:
The orientational ordering of the nematic phase of a polyethylene glycol (PEG)-peptide block copolymer in aqueous solution is probed by small-angle neutron scattering (SANS), with the sample subjected to steady shear in a Couette cell. The PEG-peptide conjugate forms fibrils that behave as semiflexible rodlike chains. The orientational order parameters P̄₂ and P̄₄ are obtained by modeling the data using a series expansion approach to the form factor of uniform cylinders. The method used is independent of assumptions on the form of the singlet orientational distribution function. Good agreement with the anisotropic two-dimensional SANS patterns is obtained. The results show shear alignment starting at very low shear rates, and the orientational order parameters reach a plateau at higher shear rates with a pseudologarithmic dependence on shear rate. The most probable distribution functions correspond to fibrils parallel to the flow direction under shear, but a sample at rest shows a bimodal distribution with some of the rodlike peptide fibrils oriented perpendicular to the flow direction.
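For reference, the order parameters named above are, in the standard liquid crystal convention (notation assumed here, not quoted from the paper), the second and fourth Legendre moments of the singlet orientational distribution function f(θ):

```latex
% Standard definitions (assumed notation): f(\theta) is the singlet
% orientational distribution function of the fibril axis relative to the
% director; the integrands are the Legendre polynomials P_2 and P_4.
\bar{P}_2 = \int_0^{\pi} \tfrac{1}{2}\left(3\cos^2\theta - 1\right)
            f(\theta)\,\sin\theta\,\mathrm{d}\theta, \qquad
\bar{P}_4 = \int_0^{\pi} \tfrac{1}{8}\left(35\cos^4\theta - 30\cos^2\theta + 3\right)
            f(\theta)\,\sin\theta\,\mathrm{d}\theta
```

An isotropic sample gives P̄₂ = P̄₄ = 0, while perfect alignment along the director gives P̄₂ = P̄₄ = 1.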
Abstract:
The crystal structure of 4-phenylbenzaldehyde reveals the presence of a dimer linked by the C=O and C(9)-H groups of adjacent molecules. In the liquid phase, the presence of C-H⋯O bonded forms is revealed by both vibrational and NMR spectroscopy. A ΔH value of −8.2 ± 0.5 kJ mol⁻¹ for the dimerisation equilibrium is established from the temperature-dependent intensities of the bands assigned to the carbonyl-stretching modes. The NMR data suggest the preferential engagement of the C(2,6)-H and C(10/12)/C(11)-H groups as hydrogen bond donors, instead of the C(9)-H group. While ab initio calculations for the isolated dimers are unable to corroborate these NMR results, the radial distribution functions obtained from molecular dynamics simulations show a preference for C(2,6)-H⋯O and C(10/12)/C(11)-H⋯O contacts relative to the C(9)-H⋯O ones.
Abstract:
A means of assessing, monitoring and controlling aggregate emissions from multi-instrument Emissions Trading Schemes is proposed. The approach allows contributions from different instruments with different forms of emissions targets to be integrated. Where Emissions Trading Schemes are helping meet specific national targets, the approach allows the entry requirements of new participants to be calculated and set at a level that will achieve these targets. The approach is multi-levelled, and may be extended downwards to support pooling of participants within instruments, or upwards to embed Emissions Trading Schemes within a wider suite of policies and measures with hard and soft targets. Aggregate emissions from each instrument are treated stochastically. Emissions from the scheme as a whole are then described by the joint probability distribution formed by integrating the emissions from its instruments. Because a Bayesian approach is adopted, qualitative and semi-qualitative data from expert opinion can be used where quantitative data are not currently available or are incomplete. This approach helps government retain sufficient control over emissions trading scheme targets to allow them to meet their emissions reduction obligations, while minimising the need for retrospectively adjusting existing participants' conditions of entry. This maintains participant confidence, while providing the necessary policy levers for good governance.
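The stochastic treatment described can be sketched with a toy Monte Carlo combination of two hypothetical instruments; the distributions, parameters, and cap below are illustrative assumptions, not values from the paper.

```python
import random

random.seed(42)

# Hypothetical sketch (not the paper's actual model): treat the emissions of
# each instrument stochastically and form the scheme-wide distribution by
# sampling. The two instrument distributions and the cap are assumptions.
def sample_aggregate(n=100_000):
    """Draw samples of scheme-wide emissions as the sum of its instruments."""
    totals = []
    for _ in range(n):
        a = random.lognormvariate(3.0, 0.25)  # instrument A (assumed params)
        b = random.gauss(15.0, 2.0)           # instrument B (assumed params)
        totals.append(a + b)
    return totals

totals = sample_aggregate()
cap = 45.0  # hypothetical national target
p_exceed = sum(t > cap for t in totals) / len(totals)
```

Samples of the aggregate automatically carry the joint behaviour of the instruments, so any target shortfall probability can be read off the sampled distribution.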
Abstract:
In real-world environments it is usually difficult to specify the quality of a preventive maintenance (PM) action precisely. This uncertainty makes it problematic to optimise maintenance policy. This problem is tackled in this paper by assuming that the quality of a PM action is a random variable following a probability distribution. Two frequently studied PM models, a failure rate PM model and an age reduction PM model, are investigated. The optimal PM policies are presented. Numerical examples are also given.
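A minimal sketch of the flavour of such an optimisation, assuming an age reduction PM model with a Weibull baseline intensity, minimal repairs between PMs, and a uniformly distributed PM quality; none of these choices or parameters are taken from the paper.

```python
import random

random.seed(0)

def weibull_cum_hazard(t, beta=2.0, eta=100.0):
    """Cumulative hazard H(t) of a Weibull failure process (minimal repair)."""
    return (t / eta) ** beta

def expected_cost_rate(T, n_cycles=50, n_sims=2000,
                       c_pm=1.0, c_min_repair=5.0):
    """Monte Carlo estimate of long-run cost per unit time under an age
    reduction PM model where the PM quality b is random (uniform on [0, 1];
    b = 0 is a perfect repair, b = 1 leaves the effective age unchanged)."""
    total_cost = 0.0
    for _ in range(n_sims):
        age, cost = 0.0, 0.0
        for _ in range(n_cycles):
            # expected number of minimal repairs during this PM interval
            failures = weibull_cum_hazard(age + T) - weibull_cum_hazard(age)
            cost += c_min_repair * failures + c_pm
            b = random.random()          # random PM quality
            age = b * (age + T)          # age reduction PM action
        total_cost += cost
    return total_cost / (n_sims * n_cycles * T)

# scan candidate PM intervals for the cheapest policy (illustrative values)
best_T = min((expected_cost_rate(T), T) for T in (20, 40, 60, 80))[1]
```

With the PM quality random rather than fixed, the optimisation is over the expected cost rate, which is exactly what the sampling above estimates.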
Abstract:
A new Bayesian algorithm for retrieving surface rain rate from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) over the ocean is presented, along with validations against estimates from the TRMM Precipitation Radar (PR). The Bayesian approach offers a rigorous basis for optimally combining multichannel observations with prior knowledge. While other rain-rate algorithms have been published that are based at least partly on Bayesian reasoning, this is believed to be the first self-contained algorithm that fully exploits Bayes's theorem to yield not just a single rain rate, but rather a continuous posterior probability distribution of rain rate. To advance the understanding of the theoretical benefits of the Bayesian approach, sensitivity analyses have been conducted based on two synthetic datasets for which the “true” conditional and prior distributions are known. Results demonstrate that even when the prior and conditional likelihoods are specified perfectly, biased retrievals may occur at high rain rates. This bias is not the result of a defect of the Bayesian formalism, but rather represents the expected outcome when the physical constraint imposed by the radiometric observations is weak owing to saturation effects. It is also suggested that both the choice of the estimators and the prior information are crucial to the retrieval. In addition, the performance of the Bayesian algorithm herein is found to be comparable to that of other benchmark algorithms in real-world applications, while having the additional advantage of providing a complete continuous posterior probability distribution of surface rain rate.
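The core Bayesian step can be illustrated with a discretised toy retrieval; the exponential prior, the saturating forward model, and all parameters are assumptions for illustration only.

```python
import math

# Illustrative sketch, not the paper's retrieval: a discretised Bayes rule
# giving a full posterior over rain rate R from a single "brightness
# temperature" observation. The forward model and its parameters are assumed.
rates = [0.1 * k for k in range(1, 301)]            # rain-rate grid (mm/h)

def prior(r, scale=3.0):
    """Assumed exponential climatological prior on rain rate."""
    return math.exp(-r / scale) / scale

def forward(r):
    """Assumed forward model: brightness temperature saturates at high R."""
    return 180.0 + 100.0 * (1.0 - math.exp(-r / 10.0))

def likelihood(tb_obs, r, sigma=3.0):
    """Gaussian observation error in brightness-temperature space."""
    return math.exp(-0.5 * ((tb_obs - forward(r)) / sigma) ** 2)

def posterior(tb_obs):
    """Bayes' rule on the grid: posterior ∝ prior × likelihood."""
    w = [prior(r) * likelihood(tb_obs, r) for r in rates]
    z = sum(w)
    return [wi / z for wi in w]

post = posterior(240.0)
post_mean = sum(r * p for r, p in zip(rates, post))  # one possible estimator
```

Note how the saturating forward model flattens the likelihood at high rain rates; that is the weak-constraint regime in which the sensitivity analyses find biased retrievals.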
Abstract:
We present extensive molecular dynamics simulations of the dynamics of diluted long probe chains entangled with a matrix of shorter chains. The chain lengths of both components are above the entanglement strand length, and the ratio of their lengths is varied over a wide range to cover the crossover from the chain reptation regime to the tube Rouse motion regime of the long probe chains. Reducing the matrix chain length results in a faster decay of the dynamic structure factor of the probe chains, in good agreement with recent neutron spin echo experiments. The diffusion of the long chains, measured by the mean square displacements of the monomers and of the centers of mass of the chains, demonstrates a systematic speed-up relative to the pure reptation behavior expected for monodisperse melts of sufficiently long polymers. On the other hand, the diffusion of the matrix chains is only weakly perturbed by the diluted long probe chains. The simulation results are qualitatively consistent with the theoretical predictions based on the constraint release Rouse model, but a detailed comparison reveals the existence of a broad distribution of disentanglement rates, which is partly confirmed by an analysis of the packing and diffusion of the matrix chains in the tube region of the probe chains. A coarse-grained simulation model, based on the tube Rouse motion model with incorporation of the probability distribution of the tube segment jump rates, is developed and shows results qualitatively consistent with the fine-scale molecular dynamics simulations. However, we observe a breakdown of the tube Rouse model when the short chain length is decreased to around N_S = 80, which is roughly 3.5 times the entanglement spacing N_e^P = 23. The location of this transition may be sensitive to the chain bending potential used in our simulations.
Abstract:
Valuation is the process of estimating price. The methods used to determine value attempt to model the thought processes of the market and thus estimate price by reference to observed historic data. This can be done using either an explicit model, which models the worth calculation of the most likely bidder, or an implicit model, which uses historic data, suitably adjusted, as a short cut to determine value by reference to previous similar sales. The former is generally referred to as the Discounted Cash Flow (DCF) model and the latter as the capitalisation (or All Risk Yield) model. However, regardless of the technique used, the valuation will be affected by uncertainties: uncertainty in the comparable data available, in current and future market conditions, and in the specific inputs for the subject property. These input uncertainties will translate into uncertainty in the output figure, the estimate of price. In a previous paper, we considered the way in which uncertainty is allowed for in the capitalisation model in the UK. In this paper, we extend the analysis to look at the way in which uncertainty can be incorporated into the explicit DCF model. This is done by recognising that the input variables are uncertain and will each have a probability distribution pertaining to them. Thus, by utilising a probability-based valuation model (using Crystal Ball), it is possible to incorporate uncertainty into the analysis and address the shortcomings of the current model. Although the capitalisation model is discussed, the paper concentrates upon the application of Crystal Ball to the Discounted Cash Flow approach.
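In place of Crystal Ball, the same idea can be sketched with plain Monte Carlo sampling over uncertain DCF inputs; the cash-flow structure and all distributions below are illustrative assumptions, not the paper's model.

```python
import random

random.seed(1)

# Illustrative sketch of a probability-based DCF valuation (the paper uses
# Crystal Ball; here plain Monte Carlo). All input values are assumptions.
def dcf_value(rent, growth, exit_yield, disc_rate, years=5):
    """Discounted cash flow: rents for `years`, then sale at the exit yield."""
    pv = 0.0
    cash = rent
    for t in range(1, years + 1):
        pv += cash / (1 + disc_rate) ** t
        cash *= 1 + growth
    pv += (cash / exit_yield) / (1 + disc_rate) ** years  # terminal value
    return pv

def simulate(n=20_000):
    """Sample the uncertain inputs and collect the resulting value estimates."""
    values = []
    for _ in range(n):
        values.append(dcf_value(
            rent=random.gauss(100.0, 5.0),         # uncertain current rent
            growth=random.gauss(0.02, 0.01),       # uncertain rental growth
            exit_yield=random.gauss(0.06, 0.005),  # uncertain exit yield
            disc_rate=0.08))
    return values

vals = simulate()
vals.sort()
point_estimate = sum(vals) / len(vals)
interval_90 = (vals[len(vals) // 20], vals[-len(vals) // 20])  # 5th-95th pct
```

The output is not a single figure but a distribution, so the estimate of price can be reported with an explicit uncertainty interval.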
Abstract:
In this paper, sequential importance sampling is used to assess the impact of observations on an ensemble prediction for the decadal path transitions of the Kuroshio Extension (KE). This particle filtering approach gives access to the probability density of the state vector, which allows us to determine the predictive power, an entropy-based measure, of the ensemble prediction. The proposed set-up makes use of an ensemble that, at each time, samples the climatological probability distribution. Then, in a post-processing step, the impact of different sets of observations is measured by the increase in predictive power of the ensemble over the climatological signal during one year. The method is applied in an identical-twin experiment for the Kuroshio Extension using a reduced-gravity shallow water model. We investigate the impact of assimilating velocity observations from different locations during the elongated and the contracted meandering states of the KE. Optimal observation locations correspond to regions with strong potential vorticity gradients. For the elongated state, the optimal location is in the first meander of the KE. During the contracted state of the KE, it is located south of Japan, where the Kuroshio separates from the coast.
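A hedged sketch of an entropy-based predictive power for a weighted ensemble; the importance weighting, binning, and observation model are illustrative assumptions, not the paper's set-up.

```python
import math
import random

random.seed(7)

# Sketch: compare the Shannon entropy of the observation-weighted ensemble
# with that of the climatological ensemble for one scalar state variable.
def bin_probs(samples, weights, edges):
    """Weighted histogram probabilities over the given bin edges."""
    probs = [0.0] * (len(edges) - 1)
    for x, w in zip(samples, weights):
        for i in range(len(edges) - 1):
            if edges[i] <= x < edges[i + 1]:
                probs[i] += w
                break
    total = sum(probs)
    return [p / total for p in probs]

def entropy(probs):
    """Shannon entropy of a discrete distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# climatological ensemble of an assumed scalar state variable
states = [random.gauss(0.0, 1.0) for _ in range(5000)]
# importance weights after assimilating one observation y_obs = 0.8
y_obs, sigma = 0.8, 0.5
weights = [math.exp(-0.5 * ((y_obs - s) / sigma) ** 2) for s in states]

edges = [-4 + 0.5 * k for k in range(17)]
h_clim = entropy(bin_probs(states, [1.0] * len(states), edges))
h_post = entropy(bin_probs(states, weights, edges))
predictive_power = 1.0 - h_post / h_clim   # > 0: observation adds information
```

Ranking candidate observation locations then amounts to computing this gain for each observation set and picking the largest.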
Abstract:
In the present paper we characterize the statistical properties of non-precipitating tropical ice clouds (deep ice anvils resulting from deep convection and cirrus clouds) over Niamey, Niger, West Africa, and Darwin, northern Australia, using ground-based radar–lidar observations from the Atmospheric Radiation Measurement (ARM) programme. The ice cloud properties analysed in this paper are the frequency of ice cloud occurrence, cloud fraction, the morphological properties (cloud-top height, base height, and thickness), the microphysical and radiative properties (ice water content, visible extinction, effective radius, terminal fall speed, and concentration), and the internal cloud dynamics (in-cloud vertical air velocity). The main highlight of the paper is that it characterizes for the first time the probability density functions of the tropical ice cloud properties, their vertical variability and their diurnal variability at the same time. This is particularly important over West Africa, since the ARM deployment in Niamey provides the first vertically resolved observations of non-precipitating ice clouds in this crucial area in terms of redistribution of water and energy in the troposphere. The comparison between the two sites also provides an additional observational basis for the evaluation of the parametrization of clouds in large-scale models, which should be able to reproduce both the statistical properties at each site and the differences between the two sites. The frequency of ice cloud occurrence is found to be much larger over Darwin than over Niamey, with a much larger diurnal variability, which is well correlated with the diurnal cycle of deep convective activity. The diurnal cycle of the ice cloud occurrence over Niamey is also much less correlated with that of deep convective activity than over Darwin, probably owing to the fact that Niamey is further away from the deep convective sources of the region.
The frequency distributions of cloud fraction are strongly bimodal and broadly similar over the two sites, with a predominance of clouds characterized either by a very small cloud fraction (less than 0.3) or a very large cloud fraction (larger than 0.9). The ice clouds over Darwin are also much thicker (by 1 km or more statistically) and are characterized by a much larger diurnal variability than ice clouds over Niamey. Ice clouds over Niamey are also characterized by smaller particle sizes and fall speeds but in much larger concentrations, thereby carrying more ice water and producing more visible extinction than the ice clouds over Darwin. It is also found that there is a much larger occurrence of downward in-cloud air motions stronger than 1 m s−1 over Darwin, which together with the larger fall speeds retrieved over Darwin indicates that the life cycle of ice clouds is probably shorter over Darwin than over Niamey.
Abstract:
A direct method is presented for determining the uncertainty in reservoir pressure, flow, and net present value (NPV) using the time-dependent, one-phase, two- or three-dimensional equations of flow through a porous medium. The uncertainty in the solution is modelled as a probability distribution function and is computed from given statistical data for input parameters such as permeability. The method generates an expansion for the mean of the pressure about a deterministic solution to the system equations using a perturbation to the mean of the input parameters. Hierarchical equations that define approximations to the mean solution at each point and to the field covariance of the pressure are developed and solved numerically. The procedure is then used to find the statistics of the flow and the risked value of the field, defined by the NPV, for a given development scenario. This method involves only one (albeit complicated) solution of the equations and contrasts with the more usual Monte Carlo approach, where many such solutions are required. The procedure is applied easily to other physical systems modelled by linear or nonlinear partial differential equations with uncertain data.
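In generic operator notation (assumed here, since the abstract does not give symbols), the hierarchy behind such a perturbation method can be sketched as follows, with uncertain input k = k̄ + δk, E[δk] = 0, and L(k)p = f denoting the discretised flow equations:

```latex
% Generic sketch, not the paper's exact equations: expand the pressure about
% the deterministic solution p_0 driven by the mean input \bar{k}, where
% \delta L is the first-order change in the operator L due to \delta k.
p = p_0 + p_1 + p_2 + \cdots, \qquad
L(\bar{k})\, p_0 = f, \qquad
L(\bar{k})\, p_1 = -\,\delta L\, p_0,
\qquad
E[p] \approx p_0 + E[p_2], \qquad
\operatorname{Cov}\!\left(p(x), p(y)\right) \approx E\!\left[p_1(x)\, p_1(y)\right]
```

The mean correction and the field covariance are thus obtained from one hierarchical solve, in contrast with the many independent solves a Monte Carlo approach would require.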
Abstract:
The estimation of the long-term wind resource at a prospective site based on a relatively short on-site measurement campaign is an indispensable task in the development of a commercial wind farm. The typical industry approach is based on the measure-correlate-predict (MCP) method, where a relational model between the site wind velocity data and the data obtained from a suitable reference site is built from concurrent records. In a subsequent step, a long-term prediction for the prospective site is obtained from a combination of the relational model and the historic reference data. In the present paper, a systematic study is presented where three new MCP models, together with two published reference models (a simple linear regression and the variance ratio method), have been evaluated based on concurrent synthetic wind speed time series for two sites, simulating the prospective and the reference site. The synthetic method has the advantage of generating time series with the desired statistical properties, including Weibull scale and shape factors, required to evaluate the five methods under all plausible conditions. In this work, first a systematic discussion of the statistical fundamentals behind MCP methods is provided, and three new models, one based on a nonlinear regression and two (termed kernel methods) derived from the use of conditional probability density functions, are proposed. All models are evaluated by using five metrics under a wide range of values of the correlation coefficient, the Weibull scale, and the Weibull shape factor. Only one of the five models, a kernel method based on bivariate Weibull probability functions, is capable of accurately predicting all performance metrics studied.
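Of the two reference models, the variance ratio method admits a compact sketch: the site series is predicted from the reference series by matching means and standard deviations rather than by a least-squares slope. The synthetic concurrent records below are illustrative assumptions.

```python
import math
import random

random.seed(3)

def mean(xs):
    return sum(xs) / len(xs)

def std(xs):
    """Sample standard deviation."""
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def variance_ratio_fit(site, ref):
    """Variance ratio MCP: return (a, b) with predicted_site = a + b * ref,
    where b is the ratio of standard deviations (not a regression slope)."""
    b = std(site) / std(ref)
    a = mean(site) - b * mean(ref)
    return a, b

# synthetic concurrent records (assumed, in the spirit of the paper's
# synthetic evaluation approach)
ref = [max(0.0, random.gauss(7.0, 2.5)) for _ in range(2000)]
site = [max(0.0, 0.9 * r + random.gauss(0.0, 1.0)) for r in ref]

a, b = variance_ratio_fit(site, ref)
predicted = [a + b * r for r in ref]
```

By construction the prediction reproduces the site mean and standard deviation exactly, which is why the method preserves the Weibull-like spread that a least-squares fit would shrink.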
Abstract:
This paper proposes and demonstrates an approach, Skilloscopy, to the assessment of decision makers. In an increasingly sophisticated, connected and information-rich world, decision making is becoming both more important and more difficult. At the same time, modelling decision making on computers is becoming more feasible and of interest, partly because the information input to those decisions is increasingly on record. The aims of Skilloscopy are to rate and rank decision makers in a domain relative to each other: the aims do not include an analysis of why a decision is wrong or suboptimal, nor the modelling of the underlying cognitive process of making the decisions. In the proposed method, a decision maker is characterised by a probability distribution of their competence in choosing among quantifiable alternatives. This probability distribution is derived by classic Bayesian inference from a combination of prior belief and the evidence of the decisions. Thus, decision makers' skills may be better compared, rated and ranked. The proposed method is applied and evaluated in the game domain of Chess. A large set of games by players across a broad range of the World Chess Federation (FIDE) Elo ratings has been used to infer the distribution of players' ratings directly from the moves they play rather than from game outcomes. Demonstration applications address questions frequently asked by the Chess community regarding the stability of the Elo rating scale, the comparison of players of different eras and/or leagues, and controversial incidents possibly involving fraud. The method of Skilloscopy may be applied in any decision domain where the value of the decision options can be quantified.
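The essential inference can be sketched with a toy skill model: a softmax choice rule whose sharpness plays the role of competence, updated by Bayes' rule after each observed decision. The choice model, skill grid, and option values are assumptions, not Skilloscopy's actual chess model.

```python
import math
import random

random.seed(5)

skills = [0.5, 1.0, 2.0, 4.0]          # candidate "sharpness" levels
belief = [0.25, 0.25, 0.25, 0.25]      # uniform prior over skill

def choice_probs(values, sharpness):
    """Probability of choosing each option under a softmax choice model."""
    ws = [math.exp(sharpness * v) for v in values]
    z = sum(ws)
    return [w / z for w in ws]

def update(belief, values, chosen):
    """One Bayesian update of the skill distribution from an observed choice."""
    post = [b * choice_probs(values, s)[chosen]
            for b, s in zip(belief, skills)]
    z = sum(post)
    return [p / z for p in post]

# simulate decisions made by a fairly strong player (true sharpness = 2.0)
for _ in range(200):
    values = [random.uniform(-1, 1) for _ in range(4)]  # option qualities
    probs = choice_probs(values, 2.0)
    chosen = random.choices(range(4), weights=probs)[0]
    belief = update(belief, values, chosen)
```

Because the posterior is over skill itself, decision makers can be compared by their whole distributions rather than by a single point score.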
Abstract:
The evidence provided by modelled assessments of future climate impact on flooding is fundamental to water resources and flood risk decision making. Impact models usually rely on climate projections from global and regional climate models (GCM/RCMs). However, challenges in representing precipitation events at catchment-scale resolution mean that decisions must be made on how to appropriately pre-process the meteorological variables from GCM/RCMs. Here the impacts on projected high flows of differing ensemble approaches and application of Model Output Statistics to RCM precipitation are evaluated while assessing climate change impact on flood hazard in the Upper Severn catchment in the UK. Various ensemble projections are used together with the HBV hydrological model with direct forcing and also compared to a response surface technique. We consider an ensemble of single-model RCM projections from the current UK Climate Projections (UKCP09); multi-model ensemble RCM projections from the European Union's FP6 ‘ENSEMBLES’ project; and a joint probability distribution of precipitation and temperature from a GCM-based perturbed physics ensemble. The ensemble distribution of results shows that flood hazard in the Upper Severn is likely to increase compared to present conditions, but the study highlights the differences between the results from different ensemble methods and the strong assumptions made in using Model Output Statistics to produce the estimates of future river discharge. The results underline the challenges in using the current generation of RCMs for local climate impact studies on flooding. Copyright © 2012 Royal Meteorological Society
Abstract:
The formation of complexes in solutions containing positively charged polyions (polycations) and a variable amount of negatively charged polyions (polyanions) has been investigated by Monte Carlo simulations. The polyions were described as flexible chains of charged hard spheres interacting through a screened Coulomb potential. The systems were analyzed in terms of cluster compositions, structure factors, and radial distribution functions. At 50% charge equivalence or less, complexes involving two polycations and one polyanion were frequent, while closer to charge equivalence, larger clusters were formed. Small and neutral complexes dominated the solution at charge equivalence in a monodisperse system, while larger clusters again dominated the solution when the polyions were made polydisperse. The cluster composition and solution structure were also examined as functions of added salt by varying the electrostatic screening length. The observed formation of clusters could be rationalized by a few simple rules.
Abstract:
The formation of complexes appearing in solutions containing oppositely charged polyelectrolytes has been investigated by Monte Carlo simulations using two different models. The polyions are described as flexible chains of 20 connected charged hard spheres immersed in a homogeneous dielectric background representing water. The small ions are either explicitly included or their effect is described by using a screened Coulomb potential. The simulated solutions contained 10 positively charged polyions with 0, 2, or 5 negatively charged polyions and the respective counterions. Two different linear charge densities were considered, and structure factors, radial distribution functions, and polyion extensions were determined. A redistribution of positively charged polyions involving strong complexes formed between the oppositely charged polyions appeared as the number of negatively charged polyions was increased. The nature of the complexes was found to depend on the linear charge density of the chains. The simplified model involving the screened Coulomb potential gave results qualitatively similar to those of the model with explicit small ions. Finally, owing to the complex formation, the sampling of configurational space is nontrivial, and the efficiency of different trial moves was examined.
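The screened Coulomb description of the implicit small ions corresponds to a Yukawa-type pair potential between charged hard spheres; the parameter values in this sketch (the Bjerrum length of water at room temperature, an assumed screening constant and hard-sphere diameter) are illustrative, not the paper's.

```python
import math

# Minimal sketch of the screened Coulomb (Yukawa) pair potential used to
# represent the small ions implicitly; parameter values are assumptions.
def screened_coulomb(r, z1, z2, l_bjerrum=0.714, kappa=0.1, d_hs=0.4):
    """Pair energy in units of kT between two charged hard spheres.

    r: centre separation (nm); z1, z2: valencies; l_bjerrum: Bjerrum
    length (nm, water at about 298 K); kappa: inverse Debye screening
    length (1/nm); d_hs: hard-sphere diameter (nm).
    """
    if r < d_hs:
        return float("inf")            # hard-sphere overlap is forbidden
    return z1 * z2 * l_bjerrum * math.exp(-kappa * r) / r

# like charges repel, opposite charges attract, both damped by screening
u_rep = screened_coulomb(1.0, +1, +1)
u_att = screened_coulomb(1.0, +1, -1)
```

Increasing kappa (shorter screening length, i.e. added salt) weakens both the attraction driving complexation and the repulsion between like-charged polyions, which is the knob varied in the cluster-composition study.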