862 results for Bubble nucleation


Relevance: 10.00%

Publisher:

Abstract:

The flow patterns generated by a pulsating jet used to study hydrodynamic modulated voltammetry (HMV) are investigated. It is shown that the pronounced edge effect reported previously is the result of the generation of a vortex ring from the pulsating jet. This vortex behaviour of the pulsating jet system is imaged using a number of visualisation techniques. These include a dye system and an electrochemically generated bubble stream. In each case a toroidal vortex ring was observed. Image analysis revealed that the velocity of this motion was of the order of 250 mm s−1 with a corresponding Reynolds number of the order of 1200. This motion, in conjunction with the electrode structure, is used to explain the strong ‘ring and halo’ features detected by electrochemical mapping of the system reported previously.
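The quoted velocity and Reynolds number together imply a characteristic length scale of a few millimetres for the vortex ring. A minimal cross-check sketch, assuming water at room temperature (kinematic viscosity ≈ 1.0 × 10⁻⁶ m² s⁻¹ — an assumption, not a value stated in the abstract):

```python
# Cross-check of the vortex-ring Reynolds number Re = U * L / nu.
# Assumption (not stated in the abstract): water at ~20 C with
# kinematic viscosity nu ~ 1.0e-6 m^2/s.

U = 0.25        # ring velocity, m/s (250 mm/s from image analysis)
nu = 1.0e-6     # kinematic viscosity of water, m^2/s (assumed)

def reynolds(U, L, nu):
    """Reynolds number for velocity U, length scale L, viscosity nu."""
    return U * L / nu

# Length scale implied by the reported Re ~ 1200:
L_implied = 1200 * nu / U
print(f"implied length scale: {L_implied*1e3:.1f} mm")
print(f"Re at L = 5 mm: {reynolds(U, 5e-3, nu):.0f}")
```

The implied length scale of roughly 5 mm is consistent with a millimetre-scale toroidal ring generated at a jet orifice.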

Relevance: 10.00%

Publisher:

Abstract:

An extensive off-line evaluation of the Noah/Single Layer Urban Canopy Model (Noah/SLUCM) urban land-surface model is presented using data from 15 sites to assess (1) the ability of the scheme to reproduce the surface energy balance observed in a range of urban environments, including seasonal changes, and (2) the impact of increasing complexity of input parameter information. Model performance is found to be most dependent on representation of vegetated surface area cover; refinement of other parameter values leads to smaller improvements. Model biases in net all-wave radiation and trade-offs between turbulent heat fluxes are highlighted using an optimization algorithm. Here we use the Urban Zones to characterize Energy partitioning (UZE) as the basis to assign default SLUCM parameter values. A methodology (FRAISE) to assign sites (or areas) to one of these categories based on surface characteristics is evaluated. Using three urban sites from the Basel Urban Boundary Layer Experiment (BUBBLE) dataset, an independent evaluation of the model performance with the parameter values representative of each class is performed. The scheme copes well with both seasonal changes in the surface characteristics and intra-urban heterogeneities in energy flux partitioning, with RMSE performance comparable to similar state-of-the-art models for all fluxes, sites and seasons. The potential of the methodology for high-resolution atmospheric modelling applications using the Weather Research and Forecasting (WRF) model is highlighted. This analysis supports the recommendations that (1) three classes are appropriate to characterize the urban environment, and (2) the parameter values identified should be adopted as default values in WRF.
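The flux evaluation above is summarised with RMSE. A minimal sketch of that metric, using invented hourly sensible-heat-flux values purely for illustration:

```python
import math

def rmse(model, obs):
    """Root-mean-square error between paired model and observation lists."""
    if len(model) != len(obs):
        raise ValueError("model and obs must be the same length")
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

# Illustrative (invented) hourly sensible heat fluxes, W m^-2:
obs_H   = [20.0, 85.0, 140.0, 90.0, 30.0]
model_H = [25.0, 70.0, 150.0, 95.0, 20.0]
print(f"RMSE = {rmse(model_H, obs_H):.1f} W m^-2")
```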

Relevance: 10.00%

Publisher:

Abstract:

The Hadley Centre Global Environmental Model (HadGEM) includes two aerosol schemes: the Coupled Large-scale Aerosol Simulator for Studies in Climate (CLASSIC), and the new Global Model of Aerosol Processes (GLOMAP-mode). GLOMAP-mode is a modal aerosol microphysics scheme that simulates not only aerosol mass but also aerosol number, represents internally mixed particles, and includes aerosol microphysical processes such as nucleation. In this study, both schemes provide hindcast simulations of natural and anthropogenic aerosol species for the period 2000–2006. HadGEM simulations of the aerosol optical depth using GLOMAP-mode compare better than CLASSIC against a data-assimilated aerosol re-analysis and aerosol ground-based observations. Because of differences in wet deposition rates, the GLOMAP-mode sulphate aerosol residence time is two days longer than that of CLASSIC sulphate aerosols, whereas the black carbon residence time is much shorter. As a result, CLASSIC underestimates aerosol optical depths in continental regions of the Northern Hemisphere and likely overestimates absorption in remote regions. Aerosol direct and first indirect radiative forcings are computed from simulations of aerosols with emissions for the years 1850 and 2000. In 1850, GLOMAP-mode predicts lower aerosol optical depths and higher cloud droplet number concentrations than CLASSIC. Consequently, simulated clouds are much less susceptible to natural and anthropogenic aerosol changes when the microphysical scheme is used. In particular, the response of cloud condensation nuclei to an increase in dimethyl sulphide emissions becomes a factor of four smaller. The combined effect of different 1850 baselines, residence times, and abilities to affect cloud droplet number leads to substantial differences in the aerosol forcings simulated by the two schemes. GLOMAP-mode finds a present-day direct aerosol forcing of −0.49 W m−2 on a global average, 72% stronger than the corresponding forcing from CLASSIC. This difference is compensated by changes in first indirect aerosol forcing: the forcing of −1.17 W m−2 obtained with GLOMAP-mode is 20% weaker than with CLASSIC. Results suggest that mass-based schemes such as CLASSIC lack the necessary sophistication to provide realistic input to aerosol–cloud interaction schemes. Furthermore, the importance of the 1850 baseline highlights how model skill in predicting present-day aerosol does not guarantee reliable forcing estimates. These findings suggest that the more complex representation of aerosol processes in microphysical schemes improves the fidelity of simulated aerosol forcings.
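The quoted percentage differences imply corresponding CLASSIC forcing values. A small consistency sketch (the CLASSIC values themselves are not stated in the abstract; this only back-solves the arithmetic):

```python
# Back out the CLASSIC forcings implied by the quoted percentage
# differences. Only the GLOMAP-mode values and the relative
# differences are given; CLASSIC values are inferred here.

glomap_direct = -0.49    # W m^-2, present-day direct forcing
glomap_indirect = -1.17  # W m^-2, first indirect forcing

# "72% stronger" -> |GLOMAP| = 1.72 * |CLASSIC|
classic_direct = glomap_direct / 1.72
# "20% weaker"  -> |GLOMAP| = 0.80 * |CLASSIC|
classic_indirect = glomap_indirect / 0.80

print(f"implied CLASSIC direct forcing:   {classic_direct:.2f} W m^-2")
print(f"implied CLASSIC indirect forcing: {classic_indirect:.2f} W m^-2")
```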

Relevance: 10.00%

Publisher:

Abstract:

Aerosol indirect effects continue to constitute one of the most important uncertainties for anthropogenic climate perturbations. Within the international AEROCOM initiative, the representation of aerosol–cloud–radiation interactions in ten different general circulation models (GCMs) is evaluated using three satellite datasets. The focus is on stratiform liquid water clouds, since most GCMs do not include ice nucleation effects and none of the models explicitly parameterises aerosol effects on convective clouds. We compute statistical relationships between aerosol optical depth (τa) and various cloud and radiation quantities in a manner that is consistent between the models and the satellite data. It is found that the model-simulated influence of aerosols on cloud droplet number concentration (Nd) compares relatively well to the satellite data, at least over the ocean. The relationship between τa and liquid water path is simulated much too strongly by the models. This suggests that the implementation of the second aerosol indirect effect, mainly in terms of an autoconversion parameterisation, has to be revisited in the GCMs. A positive relationship between total cloud fraction (fcld) and τa, as found in the satellite data, is simulated by the majority of the models, albeit less strongly than in the satellite data in most of them. In a discussion of the hypotheses proposed in the literature to explain the satellite-derived strong fcld–τa relationship, our results indicate that none can be identified as a unique explanation. Relationships similar to the ones found in the satellite data between τa and cloud top temperature or outgoing long-wave radiation (OLR) are simulated by only a few GCMs. The GCMs that simulate a negative OLR–τa relationship show a strong positive correlation between τa and fcld. The short-wave total aerosol radiative forcing as simulated by the GCMs is strongly influenced by the simulated anthropogenic fraction of τa and by parameterisation assumptions such as a lower bound on Nd. Nevertheless, the strengths of the statistical relationships are good predictors for the aerosol forcings in the models. An estimate of the total short-wave aerosol forcing, inferred by combining these predictors for the modelled forcings with the satellite-derived statistical relationships, yields a global annual mean value of −1.5 ± 0.5 W m−2. In an alternative approach, the radiative flux perturbation due to anthropogenic aerosols can be broken down into a component over the cloud-free portion of the globe (approximately the aerosol direct effect) and a component over the cloudy portion of the globe (approximately the aerosol indirect effect). An estimate obtained by scaling these simulated clear- and cloudy-sky forcings with estimates of anthropogenic τa and satellite-retrieved Nd–τa regression slopes, respectively, yields a global, annual-mean aerosol direct effect estimate of −0.4 ± 0.2 W m−2 and a cloudy-sky (aerosol indirect effect) estimate of −0.7 ± 0.5 W m−2, with a total estimate of −1.2 ± 0.4 W m−2.
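Statistical relationships of the kind described above are commonly expressed as log–log regression slopes, e.g. d ln(Nd)/d ln(τa). A minimal sketch on synthetic, invented data (not the satellite or model datasets of the study):

```python
import math, random

def loglog_slope(tau, nd):
    """Least-squares slope of ln(Nd) against ln(tau_a)."""
    x = [math.log(t) for t in tau]
    y = [math.log(n) for n in nd]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy / sxx

# Synthetic optical depths and droplet numbers obeying
# Nd ~ tau_a^0.5 with multiplicative noise (invented for illustration):
random.seed(0)
tau = [0.05 * 1.3 ** i for i in range(20)]
nd = [100.0 * t ** 0.5 * math.exp(random.gauss(0, 0.1)) for t in tau]
print(f"fitted slope d ln(Nd)/d ln(tau_a) = {loglog_slope(tau, nd):.2f}")
```

Computing the same slope for models and satellite retrievals on matched samples is what makes the comparison "consistent between the models and the satellite data".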

Relevance: 10.00%

Publisher:

Abstract:

A mathematical model incorporating many of the important processes at work in the crystallization of emulsions is presented. The model describes nucleation within the discontinuous domain of an emulsion, precipitation in the continuous domain, transport of monomers between the two domains, and formation and subsequent growth of crystals in both domains. The model is formulated as an autonomous system of nonlinear, coupled ordinary differential equations. The description of nucleation and precipitation is based upon the Becker–Döring equations of classical nucleation theory. A particular feature of the model is that the number of particles of all species present is explicitly conserved; this differs from work that employs Arrhenius descriptions of nucleation rate. Since the model includes many physical effects, it is analyzed in stages so that the role of each process may be understood. When precipitation occurs in the continuous domain, the concentration of monomers falls below the equilibrium concentration at the surface of the drops of the discontinuous domain. This leads to a transport of monomers from the drops into the continuous domain that are then incorporated into crystals and nuclei. Since the formation of crystals is irreversible and their subsequent growth inevitable, crystals forming in the continuous domain effectively act as a sink for monomers “sucking” monomers from the drops. In this case, numerical calculations are presented which are consistent with experimental observations. In the case in which critical crystal formation does not occur, the stationary solution is found and a linear stability analysis is performed. Bifurcation diagrams describing the loci of stationary solutions, which may be multiple, are numerically calculated.
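The Becker–Döring structure described above can be sketched numerically. Below is a minimal, mass-conserving forward-Euler integration of a truncated Becker–Döring system in a single domain; the rate coefficients a, b and the size cut-off R are invented for illustration and are not the paper's full two-domain emulsion model:

```python
# Truncated Becker-Doring system: c[r] is the concentration of r-mers,
# and the flux between sizes r and r+1 is J_r = a*c[1]*c[r] - b*c[r+1].
# The total particle count sum(r * c[r]) is explicitly conserved, the
# feature the paper emphasises.

def bd_step(c, a, b, dt):
    """Advance concentrations c[1..R] by one Euler step (c[0] unused)."""
    R = len(c) - 1
    J = [0.0] * R  # J[r] is the flux from size r to r+1, r = 1..R-1
    for r in range(1, R):
        J[r] = a * c[1] * c[r] - b * c[r + 1]
    new = c[:]
    for r in range(2, R + 1):
        new[r] += dt * (J[r - 1] - (J[r] if r < R else 0.0))
    # Monomers are lost to every aggregation flux, plus one extra for
    # J_1 (forming a dimer consumes two monomers).
    new[1] -= dt * (sum(J[1:]) + J[1])
    return new

R, a, b, dt = 20, 1.0, 0.1, 1e-3  # invented parameters
c = [0.0] * (R + 1)
c[1] = 1.0  # start from pure monomer
mass0 = sum(r * c[r] for r in range(1, R + 1))
for _ in range(2000):
    c = bd_step(c, a, b, dt)
mass = sum(r * c[r] for r in range(1, R + 1))
print(f"mass conserved to {abs(mass - mass0):.2e}")
```

Because each flux J_r moves exactly one monomer between neighbouring sizes, the scheme conserves total particle number to rounding error, unlike Arrhenius-style nucleation-rate descriptions.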

Relevance: 10.00%

Publisher:

Abstract:

Abstract Preliminary results are presented from a modelling study directed at the spatial variation of frazil ice formation and its effects on flow underneath large ice shelves. The chosen plume and frazil models are briefly introduced, and results from two simplified cases are outlined. It is found that growth and melting dominate the frazil model in the short term. Secondary nucleation converts larger crystals into several nuclei due to crystal collisions (microattrition) and fluid shear and therefore governs the ice crystal dynamics after the initial supercooling has been quenched. Frazil formation is found to have a significant depth-dependence in an idealised study of an Ice Shelf Water plume. Finally, plans for more extensive and realistic studies are discussed.

Relevance: 10.00%

Publisher:

Abstract:

Speculative bubbles are generated when investors include the expectation of the future price in their information set. Under these conditions, the actual market price of the security, that is set according to demand and supply, will be a function of the future price and vice versa. In the presence of speculative bubbles, positive expected bubble returns will lead to increased demand and will thus force prices to diverge from their fundamental value. This paper investigates whether the prices of UK equity-traded property stocks over the past 15 years contain evidence of a speculative bubble. The analysis draws upon the methodologies adopted in various studies examining price bubbles in the general stock market. Fundamental values are generated using two models: the dividend discount and the Gordon growth. Variance bounds tests are then applied to test for bubbles in the UK property asset prices. Finally, cointegration analysis is conducted to provide further evidence on the presence of bubbles. Evidence of the existence of bubbles is found, although these appear to be transitory and concentrated in the mid-to-late 1990s.
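A minimal sketch of the Gordon growth model mentioned above, one of the two fundamental-value models; the dividend, discount rate and growth rate below are invented for illustration:

```python
# Gordon growth fundamental value: P = D0 * (1 + g) / (r - g),
# with next-period dividend D1 = D0 * (1 + g). All inputs invented.

def gordon_value(dividend, r, g):
    """Fundamental price given current dividend, discount rate r, growth g."""
    if r <= g:
        raise ValueError("requires discount rate r > growth rate g")
    return dividend * (1 + g) / (r - g)

price = 120.0                                # observed market price (invented)
fundamental = gordon_value(5.0, 0.08, 0.03)  # D0 = 5, r = 8%, g = 3%
bubble = price - fundamental                 # bubble component, if positive
print(f"fundamental = {fundamental:.1f}, bubble = {bubble:.1f}")
```

The deviation of the observed price from this fundamental value is the quantity the variance bounds and cointegration tests interrogate.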

Relevance: 10.00%

Publisher:

Abstract:

Steep orography can cause noisy solutions and instability in models of the atmosphere. A new technique for modelling flow over orography is introduced which guarantees curl-free gradients on arbitrary grids, implying that the pressure gradient term is not a spurious source of vorticity. This mimetic property leads to better hydrostatic balance and better energy conservation in test cases using terrain-following grids. Curl-free gradients are achieved by using the covariant components of velocity over orography rather than the usual horizontal and vertical components. In addition, gravity and acoustic waves are treated implicitly without the need for mean and perturbation variables or a hydrostatic reference profile. This enables a straightforward description of the implicit treatment of gravity waves. Results are presented for a resting atmosphere over orography, for which the curl-free pressure gradient formulation proves advantageous. Results for gravity waves over orography are insensitive to the placement of terrain-following layers. The model with implicit gravity waves is stable in strongly stratified conditions, with N∆t up to at least 10 (where N is the Brunt–Väisälä frequency). A warm bubble rising over orography is simulated, and the curl-free pressure gradient formulation gives much more accurate results for this test case than a model without this mimetic property.
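The stability limit N∆t up to at least 10 can be translated into a time step for a given stratification. A sketch using an invented potential-temperature gradient (3 K km⁻¹ from a 300 K reference — illustrative values, not from the paper):

```python
import math

# Brunt-Vaisala frequency N = sqrt((g / theta) * dtheta/dz) and the
# largest time step satisfying N * dt = 10. Profile values invented.

g = 9.81          # gravitational acceleration, m s^-2
theta = 300.0     # reference potential temperature, K (assumed)
dtheta_dz = 3e-3  # potential temperature gradient, K m^-1 (assumed)

N = math.sqrt(g / theta * dtheta_dz)  # Brunt-Vaisala frequency, s^-1
dt_max = 10.0 / N                     # largest dt with N*dt = 10

print(f"N = {N*1e3:.2f} x 10^-3 s^-1, dt_max ~ {dt_max:.0f} s")
```

For this stratification the quoted limit permits time steps of order a thousand seconds, far beyond what explicit gravity-wave treatment would allow.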

Relevance: 10.00%

Publisher:

Abstract:

Evidence suggests that rational, periodically collapsing speculative bubbles may be pervasive in stock markets globally, but there is no research that considers them at the individual stock level. In this study we develop and test an empirical asset pricing model that allows for speculative bubbles to affect stock returns. We show that stocks incorporating larger bubbles yield higher returns. The bubble deviation, at the stock level as opposed to the industry or market level, is a priced source of risk that is separate from the standard market risk, size and value factors. We demonstrate that much of the common variation in stock returns that can be attributable to market risk is due to the co-movement of bubbles rather than being driven by fundamentals.

Relevance: 10.00%

Publisher:

Abstract:

Many of the next generation of global climate models will include aerosol schemes which explicitly simulate the microphysical processes that determine the particle size distribution. These models enable aerosol optical properties and cloud condensation nuclei (CCN) concentrations to be determined by fundamental aerosol processes, which should lead to a more physically based simulation of aerosol direct and indirect radiative forcings. This study examines the global variation in particle size distribution simulated by 12 global aerosol microphysics models to quantify model diversity and to identify any common biases against observations. Evaluation against size distribution measurements from a new European network of aerosol supersites shows that the mean model agrees quite well with the observations at many sites on the annual mean, but there are some seasonal biases common to many sites. In particular, at many of these European sites, the accumulation mode number concentration is biased low during winter and Aitken mode concentrations tend to be overestimated in winter and underestimated in summer. At high northern latitudes, the models strongly underpredict Aitken and accumulation particle concentrations compared to the measurements, consistent with previous studies that have highlighted the poor performance of global aerosol models in the Arctic. In the marine boundary layer, the models capture the observed meridional variation in the size distribution, which is dominated by the Aitken mode at high latitudes, with an increasing concentration of accumulation particles with decreasing latitude. Considering vertical profiles, the models reproduce the observed peak in total particle concentrations in the upper troposphere due to new particle formation, although modelled peak concentrations tend to be biased high over Europe. 
Overall, the multi-model-mean data set simulates the global variation of the particle size distribution with a good degree of skill, suggesting that most of the individual global aerosol microphysics models are performing well, although the large model diversity indicates that some models are in poor agreement with the observations. Further work is required to better constrain size-resolved primary and secondary particle number sources, and an improved understanding of nucleation and growth (e.g. the role of nitrate and secondary organics) will improve the fidelity of simulated particle size distributions.
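Modal aerosol microphysics schemes of the kind evaluated above represent the number size distribution as a sum of lognormal modes. A sketch with illustrative (invented) Aitken and accumulation mode parameters:

```python
import math

# Number size distribution dN/dlnD as a sum of lognormal modes.
# Mode parameters below are invented illustrative values, not taken
# from any model in the study.

def dN_dlnD(D, N, Dg, sigma_g):
    """Lognormal mode: dN/dlnD at diameter D (same units as Dg)."""
    return (N / (math.sqrt(2 * math.pi) * math.log(sigma_g))
            * math.exp(-(math.log(D / Dg)) ** 2
                       / (2 * math.log(sigma_g) ** 2)))

modes = [  # (N per cm^3, geometric mean diameter nm, geometric sd)
    (3000.0, 40.0, 1.6),   # Aitken-like mode
    (800.0, 150.0, 1.5),   # accumulation-like mode
]

for D in (40.0, 150.0):
    total = sum(dN_dlnD(D, *m) for m in modes)
    print(f"dN/dlnD at {D:.0f} nm = {total:.0f} cm^-3")
```

The winter-low accumulation-mode and summer-low Aitken-mode biases reported above correspond to errors in the N and Dg of such modes.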

Relevance: 10.00%

Publisher:

Abstract:

We use a stratosphere–troposphere composition–climate model with interactive sulfur chemistry and aerosol microphysics to investigate the effect of the 1991 Mount Pinatubo eruption on stratospheric aerosol properties. Satellite measurements indicate that shortly after the eruption, between 14 and 23 Tg of SO2 (7 to 11.5 Tg of sulfur) was present in the tropical stratosphere. Best estimates of the peak global stratospheric aerosol burden are in the range 19 to 26 Tg, or 3.7 to 6.7 Tg of sulfur assuming a composition of between 59 and 77 % H2SO4. In light of this large uncertainty range, we performed two main simulations with 10 and 20 Tg of SO2 injected into the tropical lower stratosphere. Simulated stratospheric aerosol properties through the 1991 to 1995 period are compared against a range of available satellite and in situ measurements. Stratospheric aerosol optical depth (sAOD) and effective radius from both simulations show good qualitative agreement with the observations, with the timing of peak sAOD and decay timescale matching well with the observations in the tropics and mid-latitudes. However, injecting 20 Tg gives a stratospheric aerosol mass burden a factor of 2 too high compared to the satellite data, with consequent strong high biases in simulated sAOD and surface area density; the 10 Tg injection is in much better agreement. Our model cannot explain the large fraction of the injected sulfur that the satellite-derived SO2 and aerosol burdens indicate was removed within the first few months after the eruption. We suggest that either there is an additional alternative loss pathway for the SO2 not included in our model (e.g. via accommodation into ash or ice in the volcanic cloud) or that a larger proportion of the injected sulfur was removed via cross-tropopause transport than in our simulations.
We also critically evaluate the simulated evolution of the particle size distribution, comparing in detail to balloon-borne optical particle counter (OPC) measurements from Laramie, Wyoming, USA (41° N). Overall, the model captures remarkably well the complex variations in particle concentration profiles across the different OPC size channels. However, for the 19 to 27 km injection height-range used here, both runs have a modest high bias in the lowermost stratosphere for the finest particles (radii less than 250 nm), and the decay timescale is longer in the model for these particles, with a much later return to background conditions. Also, whereas the 10 Tg run compared best to the satellite measurements, a significant low bias is apparent in the coarser size channels in the volcanically perturbed lower stratosphere. Overall, our results suggest that, with appropriate calibration, aerosol microphysics models are capable of capturing the observed variation in particle size distribution in the stratosphere across both volcanically perturbed and quiescent conditions. Furthermore, additional sensitivity simulations suggest that predictions with the models are robust to uncertainties in sub-grid particle formation and nucleation rates in the stratosphere.
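The decay timescales discussed above are commonly summarised as e-folding times. A sketch fitting one from a synthetic (invented) burden time series, not the study's data:

```python
import math

# Fit an e-folding time tau from ln(burden) versus time, assuming
# B(t) = B0 * exp(-t / tau). The series below is synthetic.

def efold_time(times, burden):
    """Least-squares e-folding time (same units as times)."""
    x, y = times, [math.log(b) for b in burden]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return -1.0 / slope

months = list(range(0, 36, 3))
burden = [20.0 * math.exp(-t / 12.0) for t in months]  # tau = 12 months
print(f"fitted e-folding time: {efold_time(months, burden):.1f} months")
```

Comparing such fitted timescales per size channel is one way to quantify the slower modelled return to background conditions for the finest particles.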

Relevance: 10.00%

Publisher:

Abstract:

Stimulation protocols for medical devices should be rationally designed. For episodic migraine with aura we outline model-based design strategies toward preventive and acute therapies using stereotactic cortical neuromodulation. To this end, we regard a localized spreading depression (SD) wave segment as a central element in migraine pathophysiology. To describe nucleation and propagation features of the SD wave segment, we define the new concepts of cortical hot spots and labyrinths, respectively. In particular, we first focus exclusively on curvature-induced dynamical properties by studying a generic reaction-diffusion model of SD on the folded cortical surface. This surface is described with increasing level of detail, including finally personalized simulations using a patient's magnetic resonance imaging (MRI) scanner readings. At this stage, the only relevant factor that can modulate nucleation and propagation paths is the Gaussian curvature, which has the advantage of being rather readily accessible by MRI. We conclude by discussing further anatomical factors, such as areal, laminar, and cellular heterogeneity, that in addition to and in relation to Gaussian curvature determine the generalized concept of cortical hot spots and labyrinths as target structures for neuromodulation. Our numerical simulations suggest that these target structures are like fingerprints: they are individual features of each migraine sufferer. The goal in the future will be to provide individualized neural tissue simulations. These simulations should predict the clinical data and therefore can also serve as a test bed for exploring stereotactic cortical neuromodulation.
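As a toy stand-in for the generic reaction-diffusion model of SD discussed above, a 1-D bistable (Nagumo-type) front can be simulated in a few lines. All parameters are invented for illustration; the paper's simulations run on the folded 2-D cortical surface, where Gaussian curvature modulates nucleation and propagation:

```python
import math

# 1-D reaction-diffusion front, u_t = D u_xx + u(1-u)(u-a), integrated
# with explicit Euler. Parameters invented; dt*D/dx^2 = 0.2 keeps the
# scheme stable.

D, a = 1.0, 0.3            # diffusivity and excitation threshold
nx, dx, dt = 400, 0.5, 0.05
u = [1.0 if i < 20 else 0.0 for i in range(nx)]  # excited left edge

def front_position(u, dx):
    """Position of the first grid point where u drops below 0.5."""
    for i, ui in enumerate(u):
        if ui < 0.5:
            return i * dx
    return len(u) * dx

for _ in range(2000):
    lap = [0.0] * nx
    for i in range(1, nx - 1):
        lap[i] = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx ** 2
    lap[0], lap[-1] = lap[1], lap[-2]  # crude no-flux boundaries
    u = [ui + dt * (D * li + ui * (1 - ui) * (ui - a))
         for ui, li in zip(u, lap)]

# Analytical Nagumo front speed for comparison: c = sqrt(D/2) * (1 - 2a)
c_theory = math.sqrt(D / 2) * (1 - 2 * a)
print(f"front at x = {front_position(u, dx):.1f}, "
      f"theoretical speed = {c_theory:.3f}")
```

On a curved surface the effective excitability, and hence whether such a front nucleates or dies out, varies with local Gaussian curvature, which motivates the hot-spot and labyrinth concepts above.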

Relevance: 10.00%

Publisher:

Abstract:

Liquid–vapour homogenisation temperatures of fluid inclusions in stalagmites are used for quantitative temperature reconstructions in paleoclimate research. Specifically for this application, we have developed a novel heating/cooling stage that can be operated with large stalagmite sections of up to 17 × 35 mm² to simplify and improve the chronological reconstruction of paleotemperature time-series. The stage is designed for use of an oil immersion objective and a high-NA condenser front lens to obtain high-resolution images for bubble radius measurements. The temperature accuracy of the stage is better than ± 0.1 °C with a precision (reproducibility) of ± 0.02 °C.

Relevance: 10.00%

Publisher:

Abstract:

Observations have been obtained within an intense (precipitation rates > 50 mm h−1) narrow cold-frontal rainband (NCFR) embedded within a broader region of stratiform precipitation. In situ data were obtained from an aircraft which flew near a steerable dual-polarisation Doppler radar. The observations were obtained to characterise the microphysical properties of cold frontal clouds, with an emphasis on ice and precipitation formation and development. Primary ice nucleation near cloud top (−55 °C) appeared to be enhanced by convective features. However, ice multiplication led to the largest ice particle number concentrations being observed at relatively high temperatures (> −10 °C). The multiplication process (most likely rime splintering) occurs when stratiform precipitation interacts with supercooled water generated in the NCFR. Graupel was notably absent in the data obtained. Ice multiplication processes are known to have a strong impact in glaciating isolated convective clouds, but have rarely been studied within larger organised convective systems such as NCFRs. Secondary ice particles will impact on precipitation formation and cloud dynamics due to their relatively small size and high number density. Further modelling studies are required to quantify the effects of rime splintering on precipitation and dynamics in frontal rainbands. Available parametrizations used to diagnose the particle size distributions do not account for the influence of ice multiplication. This deficiency in parametrizations is likely to be important in some cases for modelling the evolution of cloud systems and the precipitation formation. Ice multiplication also has a significant impact on artefact removal from in situ particle imaging probes.

Relevance: 10.00%

Publisher:

Abstract:

Sponge cakes have traditionally been manufactured using multistage mixing methods to enhance potential foam formation by the eggs. Today, use of all-in (single-stage) mixing methods is superseding multistage methods for large-scale batter preparation to reduce costs and production time. In this study, multistage and all-in mixing procedures and three final high-speed mixing times (3, 5, and 15 min) for sponge cake production were tested to optimize a mixing method for pilot-scale research. Mixing for 3 min produced batters with higher relative density values than did longer mixing times. These batters generated well-aerated cakes with high volume and low hardness. In contrast, after 5 and 15 min of high-speed mixing, batters with lower relative density and higher viscosity values were produced. Although higher bubble incorporation and retention were observed, longer mixing times produced better developed gluten networks, which stiffened the batters and inhibited bubble expansion during mixing. As a result, these batters did not expand properly and produced cakes with low volume, dense crumb, and high hardness values. Results for all-in mixing were similar to those for the multistage mixing procedure in terms of the physical properties of batters and cakes (i.e., relative density, elastic moduli, volume, total cell area, hardness, etc.). These results suggest the all-in mixing procedure with a final high-speed mixing time of 3 min is an appropriate mixing method for pilot-scale sponge cake production. The advantages of this method are reduced energy costs and production time.