75 results for Dispersion parameter
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
The co-polar correlation coefficient (ρhv) has many applications, including hydrometeor classification, ground clutter and melting layer identification, interpretation of ice microphysics, and the retrieval of rain drop size distributions (DSDs). However, we currently lack the quantitative error estimates that are necessary if these applications are to be fully exploited. Previous error estimates of ρhv rely on knowledge of the unknown "true" ρhv and implicitly assume a Gaussian probability distribution function of ρhv samples. We show that frequency distributions of ρhv estimates are in fact highly negatively skewed. A new variable, L = -log10(1 - ρhv), is defined, which does have Gaussian error statistics and a standard deviation that depends only on the number of independent radar pulses. This is verified using observations of spherical drizzle drops, allowing, for the first time, the construction of rigorous confidence intervals for estimates of ρhv. In addition, we demonstrate how the imperfect co-location of the horizontal and vertical polarisation sample volumes may be accounted for. The possibility of using L to estimate the dispersion parameter (µ) of the gamma drop size distribution is investigated. We find that including drop oscillations is essential for this application; otherwise, retrieved µ can be biased by up to ~8. Preliminary results in rainfall are presented. In a convective rain case study, our estimates show µ to be substantially larger than 0 (an exponential DSD). In this particular rain event, the rain rate would be overestimated by up to 50% if a simple exponential DSD were assumed.
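The L transform described in this abstract lends itself to a compact confidence-interval recipe: work in L-space, where the errors are Gaussian, then map the bounds back to ρhv. The sketch below assumes a known standard deviation sigma_l (the abstract states it depends only on the number of independent pulses); the value 0.05 used in the example is purely illustrative.

```python
import numpy as np

def l_transform(rho_hv):
    """L = -log10(1 - rho_hv); per the abstract, errors in L are Gaussian."""
    return -np.log10(1.0 - rho_hv)

def rho_hv_confidence_interval(rho_hat, sigma_l, z=1.96):
    """Build a Gaussian interval in L-space and map it back to rho_hv.

    sigma_l is assumed known (the abstract says it depends only on the
    number of independent radar pulses); its value here is illustrative.
    """
    l_hat = l_transform(rho_hat)
    l_lo, l_hi = l_hat - z * sigma_l, l_hat + z * sigma_l
    # Invert L = -log10(1 - rho):  rho = 1 - 10**(-L)
    return 1.0 - 10.0 ** (-l_lo), 1.0 - 10.0 ** (-l_hi)

print(rho_hv_confidence_interval(0.995, sigma_l=0.05))
```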
Abstract:
A method to estimate the size and liquid water content of drizzle drops using lidar measurements at two wavelengths is described. The method exploits the differential absorption of infrared light by liquid water at 905 nm and 1.5 μm, which leads to a different backscatter cross section for water drops larger than ≈50 μm. The ratio of backscatter measured from drizzle samples below cloud base at these two wavelengths (the colour ratio) provides a measure of the median volume drop diameter D0. This is a strong effect: for D0 = 200 μm, a colour ratio of ≈6 dB is predicted. Once D0 is known, the measured backscatter at 905 nm can be used to calculate the liquid water content (LWC) and other moments of the drizzle drop distribution. The method is applied to observations of drizzle falling from stratocumulus and stratus clouds. High-resolution (32 s, 36 m) profiles of D0, LWC and precipitation rate R are derived. The main sources of error in the technique are the need to assume a value for the dispersion parameter μ in the drop size spectrum (leading to at most a 35% error in R) and the influence of aerosol returns on the retrieval (≈10% error in R for the cases considered here). Radar reflectivities are also computed from the lidar data and compared with measurements from a colocated cloud radar, providing independent validation of the derived drop size distributions.
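As a sketch of how such a retrieval might be structured, the snippet below inverts the colour ratio to D0 by table interpolation. Only the anchor point quoted in the abstract (D0 = 200 μm at ≈6 dB) is taken from the text; every other table entry, and the function name, is a placeholder, since the real mapping comes from scattering calculations at the two wavelengths.

```python
import numpy as np

# Placeholder lookup table: only the (6 dB, 200 um) anchor is quoted in
# the abstract; the remaining entries are illustrative stand-ins for the
# scattering-derived colour-ratio/D0 relationship.
COLOUR_RATIO_DB = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
D0_MICRONS = np.array([50.0, 110.0, 155.0, 200.0, 250.0])

def retrieve_d0(colour_ratio_db):
    """Invert the measured colour ratio (905 nm vs 1.5 um backscatter,
    in dB) to median volume drop diameter D0 by interpolation."""
    return np.interp(colour_ratio_db, COLOUR_RATIO_DB, D0_MICRONS)

print(retrieve_d0(6.0))  # -> 200.0, the anchor point from the abstract
```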
Abstract:
A wind-tunnel study was conducted to investigate the ventilation of scalars from urban-like geometries at the neighbourhood scale by exploring two different geometries: a uniform-height roughness and a non-uniform-height roughness, both with an equal plan and frontal density of λ_p = λ_f = 25%. In both configurations a sub-unit of the idealised urban surface was coated with a thin layer of naphthalene to represent area sources. The naphthalene sublimation method was used to measure directly the total area-averaged transport of scalars out of the complex geometries. At the same time, naphthalene vapour concentrations controlled by the turbulent fluxes were detected using a fast Flame Ionisation Detection (FID) technique. This paper describes the novel use of a naphthalene-coated surface as an area source in dispersion studies. Particular emphasis was also given to testing whether the concentration measurements were independent of Reynolds number. For low wind speeds, transfer from the naphthalene surface is determined by a combination of forced and natural convection. Compared with a propane point-source release, a 25% higher free-stream velocity was needed for the naphthalene area source to yield Reynolds-number-independent concentration fields. Ventilation transfer coefficients w_T/U derived from the naphthalene sublimation method showed that, whilst there was enhanced vertical momentum exchange due to obstacle height variability, advection was reduced and dispersion from the source area was not enhanced. Thus, the height variability of a canopy is an important parameter when generalising urban dispersion. Fine-resolution concentration measurements in the canopy showed the effect of height variability on dispersion at the street scale. Rapid vertical transport in the wake of individual high-rise obstacles was found to generate elevated point-like sources. A Gaussian plume model was used to analyse differences in the downstream plumes. Intensified lateral and vertical plume spread and plume dilution with height were found for the non-uniform-height roughness.
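For reference, the Gaussian plume model invoked in the final step has the standard textbook form; a minimal sketch follows, with generic parameter names rather than the study's fitted spreads.

```python
import numpy as np

def gaussian_plume(q, u, y, z, sigma_y, sigma_z, h_src=0.0):
    """Textbook Gaussian plume concentration with ground reflection.

    q: source strength, u: mean wind speed, h_src: effective source
    height; sigma_y, sigma_z: lateral and vertical plume spreads at the
    downstream distance of interest.
    """
    lateral = np.exp(-y ** 2 / (2.0 * sigma_y ** 2))
    vertical = (np.exp(-(z - h_src) ** 2 / (2.0 * sigma_z ** 2))
                + np.exp(-(z + h_src) ** 2 / (2.0 * sigma_z ** 2)))
    return q * lateral * vertical / (2.0 * np.pi * u * sigma_y * sigma_z)
```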
Abstract:
A new parameter-estimation algorithm, which minimises the cross-validated prediction error for linear-in-the-parameters models, is proposed based on stacked regression and an evolutionary algorithm. It is first shown, using a criterion called the mean dispersion error (MDE), that cross-validation is very important for prediction with linear-in-the-parameters models. Stacked regression, which can be regarded as a sophisticated form of cross-validation, is then combined with an evolutionary algorithm to produce the new parameter-estimation algorithm, which preserves the parsimony of a concise model structure determined using the forward orthogonal least-squares (OLS) algorithm. The PRESS prediction errors are used for cross-validation, and the sunspot and Canadian lynx time series are used to demonstrate the new algorithm.
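The PRESS errors mentioned here can be computed without refitting the model once per left-out point: for any linear-in-the-parameters model, the leave-one-out residual equals the ordinary residual divided by (1 - h_ii), where h_ii is the diagonal of the hat matrix. A minimal sketch:

```python
import numpy as np

def press(X, y):
    """PRESS (leave-one-out prediction error sum of squares) for a
    linear-in-the-parameters model y = X @ theta + e, using the
    hat-matrix shortcut e_i / (1 - h_ii); no refitting is required."""
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ theta
    h = np.diag(X @ np.linalg.pinv(X.T @ X) @ X.T)  # leverages h_ii
    return np.sum((residuals / (1.0 - h)) ** 2)
```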
Abstract:
We first propose a simple task for eliciting attitudes toward risky choice, the SGG lottery-panel task, which consists of a series of lotteries constructed to compensate riskier options with higher risk-return trade-offs. Using principal component analysis, we show that the SGG lottery-panel task is capable of capturing two dimensions of individual risky decision making, i.e. subjects' average risk taking and their sensitivity to variations in risk-return. From the results of a large experimental dataset, we confirm that the task systematically captures a number of regularities, such as: a tendency towards risk-averse behaviour (only around 10% of choices are compatible with risk neutrality); an attraction to certain payoffs compared with low-risk lotteries, compatible with the over- (under-) weighting of small (large) probabilities predicted by PT; and gender differences, i.e. males being consistently less risk averse than females, but both genders being similarly responsive to increases in the risk premium. Another interesting result is that in hypothetical choices most individuals increase their risk taking in response to an increase in the return to risk, as predicted by PT, while across panels with real rewards we see even more changes, but opposite to the expected pattern of riskier choices for higher risk-returns. We therefore conclude from our data that an "economic anomaly" emerges in the real-reward choices, opposite to the hypothetical choices. These findings are in line with Camerer's (1995) view that although in many domains paid subjects probably do exert extra mental effort, which improves their performance, choice over money gambles is not likely to be a domain in which effort will improve adherence to rational axioms (p. 635). Finally, we demonstrate that both dimensions of risk attitudes, average risk taking and sensitivity to variations in the return to risk, are desirable not only to describe behaviour under risk but also to explain behaviour in other contexts, as illustrated by an example and by the sketch after this abstract.

In the second study, we propose three additional treatments intended to elicit risk attitudes under high-stakes and mixed-outcome (gains and losses) lotteries. Using a dataset obtained from a hypothetical implementation of the tasks, we show that the new treatments are able to capture both dimensions of risk attitudes. This new dataset allows us to describe several regularities, both at the aggregate and at the within-subjects level. We find that in every treatment over 70% of choices show some degree of risk aversion, and only between 0.6% and 15.3% of individuals are consistently risk neutral within the same treatment. We also confirm the existence of gender differences in the degree of risk taking, that is, in all treatments females prefer safer lotteries compared with males. Regarding our second dimension of risk attitudes, we observe, in all treatments, an increase in risk taking in response to risk-premium increases. Treatment comparisons reveal other regularities, such as a lower degree of risk taking in large-stake treatments compared with low-stake treatments, and a lower degree of risk taking when losses are incorporated into the large-stake lotteries. These results are compatible with previous findings in the literature on stake-size effects (e.g., Binswanger, 1980; Bosch-Domènech & Silvestre, 1999; Hogarth & Einhorn, 1990; Holt & Laury, 2002; Kachelmeier & Shehata, 1992; Kühberger et al., 1999; Weber & Chapman, 2005; Wik et al., 2007) and domain effects (e.g., Brooks & Zank, 2005; Schoemaker, 1990; Wik et al., 2007). For small-stake treatments, however, we find that the effect of incorporating losses into the outcomes is not so clear: at the aggregate level an increase in risk taking is observed, but also more dispersion in the choices, whilst at the within-subjects level the effect weakens. Finally, regarding responses to the risk premium, we find that, compared with gains-only treatments, sensitivity is lower in the mixed-lottery treatments (SL and LL). In general, sensitivity to risk-return is more affected by the domain than by the stake size. Having described the properties of risk attitudes as captured by the SGG risk elicitation task and its three new versions, it is important to recall that the danger of using unidimensional descriptions of risk attitudes goes beyond their incompatibility with modern economic theories such as PT and CPT, all of which call for tests with multiple degrees of freedom. Faithful to this recommendation, the contribution of this essay is an empirically and endogenously determined bi-dimensional specification of risk attitudes, useful for describing behaviour under uncertainty and for explaining behaviour in other contexts. Hopefully, this will contribute to creating large datasets containing a multidimensional description of individual risk attitudes, while at the same time allowing for a robust context, compatible with present and even future, more complex descriptions of human attitudes towards risk.
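A hedged sketch of the two-dimensional extraction: with choices encoded as a subjects-by-panels matrix of chosen riskiness, principal component analysis yields subject scores on two components, read here (by assumption; the abstract does not spell out the encoding) as average risk taking and sensitivity to the risk-return trade-off.

```python
import numpy as np

def risk_attitude_scores(choices):
    """choices: (n_subjects, n_panels) matrix, each entry encoding the
    riskiness of the option a subject chose in that panel (the encoding
    is an assumption, not taken from the paper).  Returns each
    subject's scores on the first two principal components."""
    centred = choices - choices.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:2].T  # columns: PC1, PC2 scores per subject
```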
Abstract:
We develop a process-based model for the dispersion of a passive scalar in the turbulent flow around the buildings of a city centre. The street network model is based on dividing the airspace of the streets and intersections into boxes, within which the turbulence renders the air well mixed. Mean flow advection through the network of street and intersection boxes then mediates further lateral dispersion. At the same time, turbulent mixing in the vertical detrains scalar from the streets and intersections into the turbulent boundary layer above the buildings. When the geometry is regular, the street network model has an analytical solution that describes the variation in concentration in the near field downwind of a single source, where the majority of scalar lies below roof level. The power of the analytical solution is that it demonstrates how the concentration is determined by only three parameters. The plume direction parameter describes the branching of scalar at the street intersections and hence determines the direction of the plume centreline, which may be very different from the above-roof wind direction. The transmission parameter determines the distance travelled before the majority of scalar is detrained into the atmospheric boundary layer above roof level and conventional atmospheric turbulence takes over as the dominant mixing process. Finally, a normalised source strength multiplies this pattern of concentration. This analytical solution converges to a Gaussian plume after a large number of intersections have been traversed, providing theoretical justification for previous studies that have developed empirical fits to Gaussian plume models. The analytical solution is shown to compare well with very high-resolution simulations and with wind tunnel experiments, although re-entrainment of scalar previously detrained into the boundary layer above roofs, which is not accounted for in the analytical solution, is shown to become an important process further downwind from the source.
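A minimal sketch of the three-parameter structure just described: scalar branches at each intersection (plume direction parameter), decays along each street segment by detrainment (transmission parameter), and the resulting pattern scales with a normalised source strength. The grid recursion below is illustrative, not the paper's closed-form analytical solution.

```python
import numpy as np

def street_network_concentration(n_x, n_y, p_branch, p_transmit, q_src):
    """Below-roof concentration over a regular grid of intersections.

    p_branch   -- fraction carried straight on at each intersection
                  (plume direction parameter),
    p_transmit -- fraction surviving vertical detrainment per street
                  segment (transmission parameter),
    q_src      -- normalised source strength, released at box (0, 0).
    All names and the recursion itself are illustrative assumptions.
    """
    c = np.zeros((n_x, n_y))
    c[0, 0] = q_src
    for i in range(n_x):
        for j in range(n_y):
            if i == 0 and j == 0:
                continue
            from_x = c[i - 1, j] if i > 0 else 0.0
            from_y = c[i, j - 1] if j > 0 else 0.0
            c[i, j] = p_transmit * (p_branch * from_x
                                    + (1.0 - p_branch) * from_y)
    return c
```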
Abstract:
In the event of a release of toxic gas in the center of London, the emergency services would need to determine quickly the extent of the area contaminated. The transport of pollutants by turbulent flow within the complex street and building architecture of cities is not straightforward, and we might wonder whether it is at all possible to make a scientifically reasoned decision. Here we describe recent progress from a major UK project, ‘Dispersion of Air Pollution and its Penetration into the Local Environment’ (DAPPLE, www.dapple.org.uk). In DAPPLE, we focus on the movement of airborne pollutants in cities by developing a greater understanding of atmospheric flow and dispersion within urban street networks. In particular, we carried out full-scale dispersion experiments in central London (UK) during 2003, 2004, 2007, and 2008 to address the extent of the dispersion of tracers following their release at street level. These measurements complemented previous studies because (i) our focus was on dispersion within the first kilometer from the source, when most of the material was expected to remain within the street network rather than being mixed into the boundary layer aloft, (ii) measurements were made under a wide variety of meteorological conditions, and (iii) central London represents a European, rather than North American, city geometry. Interpretation of the results from the full-scale experiments was supported by extensive numerical and wind tunnel modeling, which allowed more detailed analysis under idealized and controlled conditions. In this article, we review the full-scale DAPPLE methodologies and show early results from the analysis of the 2007 field campaign data.
Abstract:
Data assimilation is a sophisticated mathematical technique for combining observational data with model predictions to produce state and parameter estimates that most accurately approximate the current and future states of the true system. The technique is commonly used in atmospheric and oceanic modelling, combining empirical observations with model predictions to produce more accurate and well-calibrated forecasts. Here, we consider a novel application within a coastal environment and describe how the method can also be used to deliver improved estimates of uncertain morphodynamic model parameters. This is achieved using a technique known as state augmentation. Earlier applications of state augmentation have typically employed the 4D-Var, Kalman filter or ensemble Kalman filter assimilation schemes. Our new method is based on a computationally inexpensive 3D-Var scheme, where the specification of the error covariance matrices is crucial for success. A simple 1D model of bed-form propagation is used to demonstrate the method. The scheme is capable of recovering near-perfect parameter values and, therefore, improves the capability of our model to predict future bathymetry. Such positive results suggest the potential for application to more complex morphodynamic models.
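To make the state-augmentation idea concrete, here is a minimal linear 3D-Var analysis step on an augmented state [model state; uncertain parameters]: the observations touch only the state part, and the parameters are updated through the cross covariances in the background error covariance matrix B, which is why its specification is crucial. This is a generic sketch under those assumptions, not the paper's bed-form propagation model.

```python
import numpy as np

def augmented_3dvar_step(xb, B, H, R, y):
    """One linear 3D-Var analysis on an augmented state vector
    xb = [state; parameters].  H maps the augmented state to
    observation space (zero columns for the parameter entries), so the
    parameter update comes entirely from the state-parameter cross
    covariances specified in B."""
    innovation = y - H @ xb
    S = H @ B @ H.T + R             # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)  # gain matrix
    return xb + K @ innovation
```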