959 results for idiosyncratic dispersion
Abstract:
During April-May 2010, volcanic ash clouds from the Icelandic Eyjafjallajökull volcano reached Europe, causing an unprecedented disruption of the EUR/NAT region airspace. Civil aviation authorities banned all flight operations because of the threat posed by volcanic ash to modern turbine aircraft. New quantitative airborne ash mass concentration thresholds, still under discussion, were adopted for discerning regions contaminated by ash. This has implications for the ash dispersal models routinely used to forecast the evolution of ash clouds. In this new context, quantitative model validation and assessment of the accuracy of current state-of-the-art models is of paramount importance. The passage of volcanic ash clouds over central Europe, a territory hosting a dense network of meteorological and air quality observatories, generated a quantity of observations unusual for volcanic clouds. From the ground, the cloud was observed by aerosol lidars, lidar ceilometers, sun photometers, other remote-sensing instruments and in-situ collectors. From the air, sondes and multiple aircraft missions also took extremely valuable in-situ and remote-sensing measurements. These measurements constitute an excellent database for model validation. Here we validate the FALL3D ash dispersal model by comparing model results with ground- and airplane-based measurements obtained during the initial 14-23 April 2010 Eyjafjallajökull explosive phase. We run the model at high spatial resolution, using as input hourly-averaged observed heights of the eruption column and the total grain size distribution reconstructed from field observations. Model results are then compared against remote ground-based and in-situ aircraft-based measurements, including lidar ceilometers from the German Meteorological Service, aerosol lidars and sun photometers from the EARLINET and AERONET networks, and flight missions of the German DLR Falcon aircraft. We find good quantitative agreement for both airborne and ground mass concentration, with an error similar to the spread in the observations (although the error depends on the method used to estimate the mass eruption rate). Such verification results help us understand and constrain the accuracy and reliability of ash transport models, and are of enormous relevance for designing future operational mitigation strategies at Volcanic Ash Advisory Centers.
Abstract:
Colloidal gas aphrons (CGA) have previously been defined as surfactant stabilized gas microbubbles and characterized for a number of surfactants in terms of stability, gas holdup and bubble size even though there is no conclusive evidence of their structure (that is, orientation of surfactant molecules at the gas–liquid interface, thickness of gas–liquid interface, and/or number of surfactant layers). Knowledge of the structure would enable us to use these dispersions more efficiently for their diverse applications (such as for removal of dyes, recovery of proteins, and enhancement of mass transfer in bioreactors). This study investigates dispersion and structural features of CGA utilizing a range of novel predictive (for prediction of aphron size and drainage rate) and experimental (electron microscopy and X-ray diffraction) methods. Results indicate structural differences between foams and CGA, which may have been caused by a multilayer structure of the latter as suggested by the electron and X-ray diffraction analysis.
Abstract:
The dispersion of a point-source release of a passive scalar in a regular array of cubical, urban-like obstacles is investigated by means of direct numerical simulations. The simulations are conducted under conditions of neutral stability and fully rough turbulent flow, at a roughness Reynolds number of Re_τ = 500. The Navier–Stokes and scalar equations are integrated assuming a constant-rate release from a point source close to the ground within the array. We focus on short-range dispersion, when most of the material is still within the building canopy. Mean and fluctuating concentrations are computed for three different pressure gradient directions (0°, 30°, 45°). The results agree well with available experimental data measured in a water channel for a flow angle of 0°. Profiles of mean concentration and the three-dimensional structure of the dispersion pattern are compared for the different forcing angles. A number of processes affecting the plume structure are identified and discussed, including: (i) advection or channelling of scalar down ‘streets’, (ii) lateral dispersion by turbulent fluctuations and topological dispersion induced by dividing streamlines around buildings, (iii) skewing of the plume due to flow turning with height, (iv) detrainment by turbulent dispersion or mean recirculation, (v) entrainment and release of scalar in building wakes, giving rise to ‘secondary sources’, (vi) plume meandering due to unsteady turbulent fluctuations. Finally, results on relative concentration fluctuations are presented and compared with the literature for point-source dispersion over flat terrain and urban arrays.
Keywords: Direct numerical simulation · Dispersion modelling · Urban array
Abstract:
Along the lines of the nonlinear response theory developed by Ruelle, in a previous paper we proved, under rather general conditions, that Kramers-Kronig dispersion relations and sum rules apply to a class of susceptibilities describing, at any order of perturbation, the response of Axiom A non-equilibrium steady-state systems to weak monochromatic forcings. We present here the first evidence of the validity of these integral relations for the linear and the second-harmonic response of the perturbed Lorenz 63 system, by showing that numerical simulations agree to a high degree of accuracy with the theoretical predictions. Some new theoretical results, showing how to derive asymptotic behaviors and how to obtain recursively the harmonic-generation susceptibilities for general observables, are also presented. Our findings confirm the conceptual validity of the nonlinear response theory, suggest that the theory can be extended to more general non-equilibrium steady-state systems, and shed new light on the applicability of very general tools, based only upon the principle of causality, for diagnosing the behavior of perturbed chaotic systems and reconstructing their output signals in situations where the fluctuation-dissipation relation is not of great help.
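Since the entry above turns on Kramers-Kronig relations for causal susceptibilities, a minimal numerical illustration of the linear case may help. The sketch below uses a damped-harmonic-oscillator susceptibility (an assumption for illustration, not the Lorenz 63 response studied in the paper) and reconstructs its real part from its imaginary part via the dispersion relation.

    import numpy as np

    # Causal model susceptibility: damped harmonic oscillator (illustrative only)
    w0, gamma = 1.0, 0.2
    w = np.linspace(1e-3, 20.0, 4000)
    chi = 1.0 / (w0**2 - w**2 - 1j * gamma * w)

    # Kramers-Kronig: Re chi(w) = (2/pi) P int_0^inf w' Im chi(w') / (w'^2 - w^2) dw'
    def kk_real_from_imag(w, im_chi):
        dw = w[1] - w[0]
        re = np.empty_like(w)
        for i, wi in enumerate(w):
            integrand = w * im_chi / (w**2 - wi**2)
            integrand[i] = 0.0                     # crude principal-value treatment
            re[i] = (2.0 / np.pi) * np.sum(integrand) * dw
        return re

    re_kk = kk_real_from_imag(w, chi.imag)
    print(np.max(np.abs(re_kk - chi.real)))        # small, except near the resonance
                                                   # and the truncated grid edges

Agreement to within discretisation error is a check analogous, in the linear case, to the verification reported above.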
Abstract:
This paper investigates the potential benefits and limitations of equal- and value-weighted diversification, using the UK institutional property market as an example. To achieve this it uses the largest sample (392) of actual property returns currently available, over the period 1981 to 1996. To evaluate these issues two approaches are adopted: first, an analysis of the correlations within the sectors and regions; and second, simulations of property portfolios of increasing size constructed both naively and with value-weighting. Using these methods it is shown that the extent of possible risk reduction is limited because of the high positive correlations between assets in any portfolio, even when naively diversified. It is also shown that portfolios exhibit high levels of variability around the average risk, suggesting that previous work seriously understates the number of properties needed to achieve a satisfactory level of diversification. The results have implications for the development and maintenance of a property portfolio because they indicate that the achievable level of risk reduction depends upon the availability of assets, the weighting system used and the investor's risk tolerance.
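The two findings above (a correlation-driven floor on risk reduction, and wide variability around the average risk) can be reproduced qualitatively with a small Monte Carlo sketch. The universe size matches the paper's sample of 392 properties, but the volatilities and the common correlation are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative universe: 392 assets with heterogeneous volatilities and a
    # common positive pairwise correlation (values are assumptions, not the data)
    n_assets, rho = 392, 0.5
    sig = rng.uniform(0.05, 0.25, n_assets)
    cov = rho * np.outer(sig, sig)
    np.fill_diagonal(cov, sig**2)

    def naive_portfolio_risk(idx):
        w = np.full(len(idx), 1.0 / len(idx))      # equal (naive) weighting
        return np.sqrt(w @ cov[np.ix_(idx, idx)] @ w)

    for n in (1, 5, 20, 100, 392):
        risks = [naive_portfolio_risk(rng.choice(n_assets, n, replace=False))
                 for _ in range(500)]
        print(n, round(np.mean(risks), 4), round(np.std(risks), 4))

    # Mean risk falls with n but levels off at a floor set by the common
    # correlation, while the spread across same-size portfolios illustrates the
    # variability around the average risk that the paper highlights.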
Abstract:
In cooperative communication networks, owing to the nodes' arbitrary geographical locations and individual oscillators, the system is fundamentally asynchronous. This damages some of the key properties of space-time codes and can lead to substantial performance degradation. In this paper, we study the design of linear dispersion codes (LDCs) for such asynchronous cooperative communication networks. First, the concept of conventional LDCs is extended to a delay-tolerant version and new design criteria are discussed. We then propose a new design method to yield delay-tolerant LDCs that reach the optimal Jensen's upper bound on ergodic capacity as well as the minimum average pairwise error probability. The proposed design employs a stochastic gradient algorithm to approach a local optimum, and is further improved by simulated-annealing-type optimization to increase the likelihood of reaching the global optimum. The method allows for a flexible number of nodes, receive antennas and modulated symbols, and a flexible codeword length. Simulation results confirm the performance of the newly proposed delay-tolerant LDCs.
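The annealing step of the search described above can be sketched generically: perturb the current set of dispersion matrices, accept improvements always, and accept deteriorations with a temperature-controlled probability. The objective below is a placeholder rewarding well-conditioned codewords, not the paper's Jensen-bound/pairwise-error criterion, and the dimensions are arbitrary.

    import numpy as np

    rng = np.random.default_rng(1)

    # Q complex dispersion matrices of size T x M (dimensions chosen arbitrarily)
    T, M, Q = 4, 2, 8
    A = [rng.standard_normal((T, M)) + 1j * rng.standard_normal((T, M)) for _ in range(Q)]
    A = [a / np.linalg.norm(a) for a in A]         # unit-power (Frobenius) constraint

    def objective(mats):
        # Placeholder quality measure (NOT the paper's criterion): reward
        # well-conditioned codeword matrices via log det(I + A A^H)
        return -sum(np.linalg.slogdet(np.eye(T) + a @ a.conj().T)[1] for a in mats)

    cost, temp = objective(A), 1.0
    for step in range(5000):                       # simulated-annealing loop
        B = [a + 0.05 * (rng.standard_normal(a.shape) + 1j * rng.standard_normal(a.shape))
             for a in A]
        B = [b / np.linalg.norm(b) for b in B]     # re-impose the power constraint
        new = objective(B)
        if new < cost or rng.random() < np.exp((cost - new) / temp):
            A, cost = B, new                       # downhill always, uphill sometimes
        temp *= 0.999                              # geometric cooling schedule
    print(cost)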
Abstract:
Useful probabilistic climate forecasts on decadal timescales should be reliable (i.e. forecast probabilities match the observed relative frequencies), but this is seldom examined. This paper assesses a necessary condition for reliability, namely that the ratio of ensemble spread to forecast error be close to one, for seasonal-to-decadal sea surface temperature retrospective forecasts from the Met Office Decadal Prediction System (DePreSys). Factors which may affect reliability are diagnosed by comparing this spread-error ratio for an initial-condition ensemble and two perturbed-physics ensembles, for initialized and uninitialized predictions. At lead times of less than 2 years, the initialized ensembles tend to be under-dispersed, and hence produce overconfident, unreliable forecasts. For longer lead times, all three ensembles are predominantly over-dispersed. Such over-dispersion is primarily related to excessive inter-annual variability in the climate model. These findings highlight the need to carefully evaluate simulated variability in seasonal and decadal prediction systems.
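The spread-error diagnostic itself is simple to compute. A minimal sketch follows (array shapes are assumptions; nothing here is specific to DePreSys):

    import numpy as np

    def spread_error_ratio(fcst, obs):
        """fcst: (n_starts, n_members) ensemble forecasts at one lead time;
        obs: (n_starts,) verifying observations."""
        spread = np.sqrt(fcst.var(axis=1, ddof=1).mean())        # mean ensemble variance
        rmse = np.sqrt(((fcst.mean(axis=1) - obs) ** 2).mean())  # error of ensemble mean
        return spread / rmse

    # A ratio below one indicates under-dispersion (overconfident forecasts, as
    # found above at short lead times); a ratio above one indicates
    # over-dispersion (as found at longer lead times).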
Abstract:
This paper seeks to synthesise the various contributions to the special issue of Long Range Planning on competence-creating subsidiaries (CCS), and identifies avenues for future research. Effective competence-creation through a network of subsidiaries requires an appropriate balance between internal and external embeddedness. There are multiple types of firm-specific advantages (FSAs) essential to achieve this. In addition, wide-bandwidth pathways are needed with collaborators, suppliers, customers as well as internally within the MNE. Paradoxically, there is a natural tendency for bandwidth to shrink as dispersion increases. As distances (technological, organisational, and physical) become greater, there may be decreasing returns to R&D spread. Greater resources for knowledge integration and coordination are needed as intra-MNE and inter-firm R&D cooperation becomes more intensive and extensive. MNEs need to invest in mechanisms to promote wide-bandwidth knowledge flows, without which widely dispersed and networked MNEs can suffer from internal market failures.
Abstract:
Dispersion in the near-field region of localised releases in urban areas is difficult to predict because of the strong influence of individual buildings. Effects include upstream dispersion, trapping of material into building wakes and enhanced concentration fluctuations. As a result, concentration patterns are highly variable in time and mean profiles in the near field are strongly non-Gaussian. These aspects of near-field dispersion are documented by analysing data from direct numerical simulations in arrays of building-like obstacles and are related to the underlying flow structure. The mean flow structure around the buildings is found to exert a strong influence over the dispersion of material in the near field. Diverging streamlines around buildings enhance lateral dispersion. Entrainment of material into building wakes in the very near field gives rise to secondary sources, which then affect the subsequent dispersion pattern. High levels of concentration fluctuations are also found in this very near field; the fluctuation intensity is of order 2 to 5.
Abstract:
This article examines the role of idiosyncratic volatility in explaining the cross-sectional variation of size- and value-sorted portfolio returns. We show that the premium for bearing idiosyncratic volatility varies inversely with the number of stocks included in the portfolios. This conclusion is robust within various multifactor models based on size, value, past performance, liquidity and total volatility and also holds within an ICAPM specification of the risk–return relationship. Our findings thus indicate that investors demand an additional return for bearing the idiosyncratic volatility of poorly-diversified portfolios.
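A standard way to estimate such a premium is a Fama-MacBeth two-pass regression with idiosyncratic volatility as an added characteristic; the sketch below shows the cross-sectional pass. The article does not spell out this exact estimator, so treat it as an illustration of the general approach, with all array shapes assumed.

    import numpy as np

    def idio_vol_premium(R, IV, B):
        """R: (T, N) portfolio excess returns; IV: (T, N) lagged idiosyncratic
        volatilities; B: (N, K) factor loadings. Returns the mean premium on
        idiosyncratic volatility and its Fama-MacBeth t-statistic."""
        T, N = R.shape
        premia = np.empty(T)
        for t in range(T):                  # one cross-sectional regression per period
            X = np.column_stack([np.ones(N), B, IV[t]])
            coef, *_ = np.linalg.lstsq(X, R[t], rcond=None)
            premia[t] = coef[-1]            # slope on idiosyncratic volatility
        t_stat = premia.mean() / (premia.std(ddof=1) / np.sqrt(T))
        return premia.mean(), t_stat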
Abstract:
We develop a process-based model for the dispersion of a passive scalar in the turbulent flow around the buildings of a city centre. The street network model is based on dividing the airspace of the streets and intersections into boxes, within which the turbulence renders the air well mixed. Mean flow advection through the network of street and intersection boxes then mediates further lateral dispersion. At the same time turbulent mixing in the vertical detrains scalar from the streets and intersections into the turbulent boundary layer above the buildings. When the geometry is regular, the street network model has an analytical solution that describes the variation in concentration in a near-field downwind of a single source, where the majority of scalar lies below roof level. The power of the analytical solution is that it demonstrates how the concentration is determined by only three parameters. The plume direction parameter describes the branching of scalar at the street intersections and hence determines the direction of the plume centreline, which may be very different from the above-roof wind direction. The transmission parameter determines the distance travelled before the majority of scalar is detrained into the atmospheric boundary layer above roof level and conventional atmospheric turbulence takes over as the dominant mixing process. Finally, a normalised source strength multiplies this pattern of concentration. This analytical solution converges to a Gaussian plume after a large number of intersections have been traversed, providing theoretical justification for previous studies that have developed empirical fits to Gaussian plume models. The analytical solution is shown to compare well with very high-resolution simulations and with wind tunnel experiments, although re-entrainment of scalar previously detrained into the boundary layer above roofs, which is not accounted for in the analytical solution, is shown to become an important process further downwind from the source.
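The role of the three parameters can be made concrete with a toy branching calculation on a regular grid: alpha stands in for the plume direction parameter (the fraction carried straight on at each intersection), tau for the transmission parameter (the fraction surviving detrainment per street segment), and the source strength scales the whole pattern. The formula is a schematic stand-in, not the paper's analytical solution.

    from math import comb

    def box_concentration(i, j, source=1.0, alpha=0.7, tau=0.8):
        """Relative below-roof concentration at the intersection reached after
        i streets along x and j streets along y (schematic stand-in)."""
        n = i + j                           # intersections traversed
        return source * comb(n, i) * alpha**i * (1 - alpha)**j * tau**n

    for i in range(4):
        print([round(box_concentration(i, j), 4) for j in range(4)])

    # The binomial branching tends to a Gaussian-like lateral profile after many
    # intersections, consistent with the far-field convergence described above.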
Abstract:
It is shown that the open magnetosphere model can reproduce both the down-going and the up-going magnetosheath ions seen in the cusp and mantle regions by the Polar satellite at middle altitudes. The pass studied shows a series of discontinuities in the ion dispersion, most of which are shown to arise from pulses of magnetopause reconnection rate. A total of 9 pulses are detected in an interval estimated to be about 30 min long, giving a mean repetition period of about 3 min; they vary in length between 0.5 min and 3.5 min and are separated by periods of much slower reconnection of duration 1-3 min. One step is not as predicted for reconnection rate pulses but is explained in terms of compressive motions caused by a pulse of solar wind dynamic pressure. The reconnection site is found to be 16 ± 3 R_E from the ionosphere along the separatrix field line, placing it at low latitudes on the dayside magnetopause.
Abstract:
We present an analysis of the accuracy of the method introduced by Lockwood et al. (1994) for the determination of the magnetopause reconnection rate from the dispersion of precipitating ions in the ionospheric cusp region. Tests are made by applying the method to synthesised data. The simulated cusp ion precipitation data are produced by an analytic model of the evolution of newly opened field lines, along which magnetosheath ions are first injected across the magnetopause and then dispersed as they propagate into the ionosphere. The rate at which these newly opened field lines are generated by reconnection can be varied. The derived reconnection rate estimates are then compared with the input variation to the model and the accuracy of the method is assessed. Results are presented for steady-state reconnection, for continuous reconnection showing a sine-wave variation in rate, and for reconnection which occurs only in square-wave pulses. It is found that the method always yields the total flux reconnected (per unit length of the open-closed field-line boundary) to an accuracy of better than 5%, but that pulses tend to be smoothed, so that the peak reconnection rate within a pulse is underestimated and the pulse length is overestimated. This smoothing is reduced if the separation between the energy channels of the instrument is reduced; however, this also increases the experimental uncertainty in the estimates, an effect which can be countered by improving the time resolution of the observations. The limited time resolution of the data is shown to set a minimum reconnection rate below which the method gives spurious short-period oscillations about the true value. Various examples of reconnection rate variations derived from cusp observations are discussed in the light of this analysis.