46 results for Size Distributions


Relevance:

60.00%

Abstract:

The Fennec climate program aims to improve understanding of the Saharan climate system through a synergy of observations and modelling. We present a description of the Fennec airborne observations during 2011 and 2012 over the remote Sahara (Mauritania and Mali) and the advances they have provided in the understanding of mineral dust and boundary layer processes. Aircraft instrumentation aboard the UK FAAM BAe146 and French SAFIRE Falcon 20 is described, with specific focus on instrumentation specially developed for, and relevant to, Saharan meteorology and dust. Flight locations, aims and associated meteorology are described. Examples and applications of aircraft measurements from the Fennec flights are presented, highlighting new scientific results delivered using a synergy of different instruments and aircraft. These include: (1) the first airborne measurement of dust particles sized up to 300 microns and associated dust fluxes in the Saharan atmospheric boundary layer (SABL); (2) observations of dust uplift from the breakdown of the nocturnal low-level jet before the dust becomes visible in SEVIRI satellite imagery; (3) profiles of the unique vertical structure of turbulent fluxes in the SABL; (4) in-situ observations of processes in SABL clouds showing dust acting as cloud condensation nuclei (CCN) and ice nuclei (IN) at −15 °C; (5) dual-aircraft observations of SABL dynamics, thermodynamics and composition in the Saharan heat low (SHL) region; (6) airborne observations of a dust storm associated with a cold pool (haboob) generated by deep convection over the Atlas Mountains; (7) the first airborne chemical composition measurements of dust in the SHL region, showing differing composition, sources (determined using Lagrangian backward trajectory calculations) and absorption properties between 2011 and 2012; (8) coincident ozone and dust surface area measurements suggesting that coarser particles provide a route for ozone depletion; and (9) discrepancies between airborne coarse-mode size distributions and AERONET sunphotometer retrievals under light dust loadings. These results provide insights into boundary layer and dust processes in the SHL region – a region of substantial global climatic importance.

Relevance:

60.00%

Abstract:

Observations have been obtained within an intense (precipitation rates > 50 mm h⁻¹) narrow cold-frontal rainband (NCFR) embedded within a broader region of stratiform precipitation. In situ data were obtained from an aircraft which flew near a steerable dual-polarisation Doppler radar. The observations were obtained to characterise the microphysical properties of cold-frontal clouds, with an emphasis on ice and precipitation formation and development. Primary ice nucleation near cloud top (−55 °C) appeared to be enhanced by convective features. However, ice multiplication led to the largest ice particle number concentrations being observed at relatively high temperatures (> −10 °C). The multiplication process (most likely rime splintering) occurs when stratiform precipitation interacts with supercooled water generated in the NCFR. Graupel was notably absent in the data obtained. Ice multiplication processes are known to have a strong impact in glaciating isolated convective clouds, but have rarely been studied within larger organised convective systems such as NCFRs. Secondary ice particles will impact precipitation formation and cloud dynamics because of their relatively small size and high number density. Further modelling studies are required to quantify the effects of rime splintering on precipitation and dynamics in frontal rainbands. Available parametrizations used to diagnose particle size distributions do not account for the influence of ice multiplication. This deficiency is likely to be important in some cases for modelling the evolution of cloud systems and precipitation formation. Ice multiplication also has a significant impact on artefact removal for in-situ particle imaging probes.

Relevance:

60.00%

Abstract:

The effects of several fat replacement levels (0%, 35%, 50%, 70% and 100%) by inulin on sponge cake microstructure and physicochemical properties were studied. Substituting inulin for oil significantly (P < 0.05) decreased batter viscosity, giving heterogeneous bubble size distributions, as observed by light microscopy. Using confocal laser scanning microscopy, the fat was observed to be located at the bubbles' interface, enabling optimum crumb structure development during baking. Cryo-SEM micrographs of cake crumbs showed a continuous matrix with embedded starch granules coated with oil; as fat replacement levels increased, starch granules appeared as detached structures. Cakes with fat replacement up to 70% had high crumb air-cell values; they were softer and rated as acceptable by an untrained sensory panel (n = 51). Thus, a standard sponge cake recipe was successfully reformulated to obtain a new product with additional health benefits that is accepted by consumers.

Relevance:

60.00%

Abstract:

Substantial changes in anthropogenic aerosol and precursor gas emissions have occurred over recent decades due to the implementation of air pollution control legislation and economic growth. The response of atmospheric aerosols to these changes and the impact on climate are poorly constrained, particularly in studies using detailed aerosol chemistry–climate models. Here we compare the HadGEM3-UKCA (Hadley Centre Global Environment Model–United Kingdom Chemistry and Aerosols) coupled chemistry–climate model for the period 1960–2009 against extensive ground-based observations of sulfate aerosol mass (1978–2009), total suspended particulate matter (SPM, 1978–1998), PM10 (1997–2009), aerosol optical depth (AOD, 2000–2009), aerosol size distributions (2008–2009) and surface solar radiation (SSR, 1960–2009) over Europe. The model underestimates the observed sulfate aerosol mass (normalised mean bias factor (NMBF) = −0.4), SPM (NMBF = −0.9), PM10 (NMBF = −0.2), aerosol number concentrations (N30 NMBF = −0.85; N50 NMBF = −0.65; N100 NMBF = −0.96) and AOD (NMBF = −0.01) but slightly overpredicts SSR (NMBF = 0.02). Trends in aerosol over the observational period are well simulated by the model, with observed (simulated) changes in sulfate of −68 % (−78 %), SPM of −42 % (−20 %), PM10 of −9 % (−8 %) and AOD of −11 % (−14 %). Discrepancies in the magnitude of simulated aerosol mass do not affect the ability of the model to reproduce the observed SSR trends. The positive change in observed European SSR (5 %) during 1990–2009 ("brightening") is better reproduced by the model when aerosol radiative effects (ARE) are included (3 %) than in simulations where ARE are excluded (0.2 %). The simulated top-of-the-atmosphere aerosol radiative forcing over Europe under all-sky conditions increased by > 3.0 W m⁻² during the period 1970–2009 in response to changes in anthropogenic emissions and aerosol concentrations.
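
The model–observation biases above are summarised with the normalised mean bias factor (NMBF). As a minimal sketch of how such a metric is computed, assuming the symmetric definition of Yu et al. (2006) and invented example data:

```python
import numpy as np

def nmbf(model, obs):
    """Normalised mean bias factor (assuming the symmetric definition
    of Yu et al., 2006): negative values mean the model underestimates
    the observations by a factor of 1 + |NMBF|, positive values that it
    overestimates by a factor of 1 + NMBF."""
    m, o = np.mean(model), np.mean(obs)
    return m / o - 1.0 if m >= o else 1.0 - o / m

# Hypothetical example: a model that is low by a factor of ~1.4
obs = np.array([2.0, 3.5, 1.8, 2.7])   # observed sulfate mass, ug m-3
model = 0.7 * obs                      # invented model values
print(round(nmbf(model, obs), 2))      # -0.43
```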

Relevance:

60.00%

Abstract:

The co-polar correlation coefficient (ρhv) has many applications, including hydrometeor classification, ground clutter and melting layer identification, interpretation of ice microphysics and the retrieval of rain drop size distributions (DSDs). However, we currently lack the quantitative error estimates that are necessary if these applications are to be fully exploited. Previous error estimates of ρhv rely on knowledge of the unknown "true" ρhv and implicitly assume a Gaussian probability distribution function of ρhv samples. We show that frequency distributions of ρhv estimates are in fact highly negatively skewed. A new variable, L = -log10(1 - ρhv), is defined, which does have Gaussian error statistics and a standard deviation that depends only on the number of independent radar pulses. This is verified using observations of spherical drizzle drops, allowing, for the first time, the construction of rigorous confidence intervals in estimates of ρhv. In addition, we demonstrate how the imperfect co-location of the horizontal and vertical polarisation sample volumes may be accounted for. The possibility of using L to estimate the dispersion parameter (µ) of the gamma drop size distribution is investigated. We find that including drop oscillations is essential for this application; otherwise retrieved µ could be biased by up to ~8. Preliminary results in rainfall are presented. In a convective rain case study, our estimates show µ to be substantially larger than 0 (an exponential DSD). In this particular rain event, the rain rate would be overestimated by up to 50% if a simple exponential DSD were assumed.
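
A minimal sketch of the transformation described above, with invented sample values; in the study the standard deviation of L follows from the number of independent radar pulses (that formula is not reproduced here, so the spread is estimated empirically):

```python
import numpy as np

def rho_to_L(rho_hv):
    """L = -log10(1 - rho_hv): approximately Gaussian error statistics,
    unlike rho_hv itself, whose sampling distribution is negatively skewed."""
    return -np.log10(1.0 - rho_hv)

def L_to_rho(L):
    """Inverse transform back to the correlation coefficient."""
    return 1.0 - 10.0 ** (-L)

# Invented independent rho_hv estimates from a single range gate
rho_samples = np.array([0.989, 0.992, 0.985, 0.991, 0.988])
L = rho_to_L(rho_samples)

# Gaussian 95% confidence interval in L space, back-transformed:
mean_L = L.mean()
se_L = L.std(ddof=1) / np.sqrt(L.size)
ci = L_to_rho(np.array([mean_L - 1.96 * se_L, mean_L + 1.96 * se_L]))
print(ci)  # symmetric in L, hence asymmetric in rho_hv
```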

Relevance:

60.00%

Abstract:

Field observations of new particle formation and the subsequent particle growth are typically only possible at a fixed measurement location, and hence do not follow the temporal evolution of an air parcel in a Lagrangian sense. Standard analysis for determining formation and growth rates requires that the time-dependent formation rate and growth rate of the particles be spatially invariant; air parcel advection means that the observed temporal evolution of the particle size distribution at a fixed measurement location may not represent the true evolution if there are spatial variations in the formation and growth rates. Here we present a zero-dimensional aerosol box model coupled with one-dimensional atmospheric flow to describe the impact of advection on the evolution of simulated new particle formation events. Wind speed, particle formation rates and growth rates are input parameters that can vary as a function of time and location, with wind speed connecting location to time. The output simulates measurements at a fixed location; formation and growth rates of the particle mode can then be calculated from the simulated observations at a stationary point for different scenarios and compared with the 'true' input parameters. Hence, we can investigate how spatial variations in the formation and growth rates of new particles would appear in observations of particle number size distributions at a fixed measurement site. We show that the particle size distribution and growth rate at a fixed location depend on the formation and growth parameters upwind, even if local conditions do not vary. We also show that different input parameters may result in very similar simulated measurements. Erroneous interpretation of observations in terms of particle formation and growth rates, and of the time span and areal extent of new particle formation, is possible if these spatial effects are not accounted for.
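
The core idea can be sketched in a few lines: a parcel arriving at the fixed site left the formation region upwind, so the observed particle size reflects the growth rates encountered along the advected trajectory rather than the local ones. Everything below (wind speed, growth-rate field, starting diameter) is an invented illustration, not the model's actual configuration:

```python
import numpy as np

u = 5.0        # constant wind speed, m/s (invented)
x_obs = 50e3   # fixed measurement site, m

def growth_rate(x, t):
    """Invented spatially varying growth rate, nm/h: enhanced
    in a region roughly 20-30 km upwind of the site."""
    return 3.0 + 2.0 * np.exp(-((x - 20e3) / 10e3) ** 2)

def observed_diameter(t, t0, dt=60.0):
    """Diameter (nm) observed at the site at time t for particles
    formed at time t0, grown along the back-trajectory
    x = x_obs - u * (t - tt)."""
    d = 1.5  # nucleation-mode starting diameter, nm (invented)
    for tt in np.arange(t0, t, dt):
        x = x_obs - u * (t - tt)          # parcel position at time tt
        d += growth_rate(x, tt) * dt / 3600.0
    return d

# The apparent growth at the fixed site mixes upwind conditions:
for t_hours in (2.0, 4.0, 6.0):
    print(t_hours, observed_diameter(t_hours * 3600.0, 0.0))
```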

Relevance:

60.00%

Abstract:

A single-habit parameterization for the shortwave optical properties of cirrus is presented. The parameterization utilizes a hollow particle geometry, with stepped internal cavities as identified in laboratory and field studies. This particular habit was chosen because both experimental and theoretical results show that the particle exhibits lower asymmetry parameters than solid crystals of the same aspect ratio. The aspect ratio of the particle was varied as a function of maximum dimension, D, in order to adhere to the same physical relationships assumed in the microphysical scheme of a configuration of the Met Office atmosphere-only global model, concerning particle mass, size and effective density. Single-scattering properties were then computed using the T-matrix method, Ray Tracing with Diffraction on Facets (RTDF) and ray tracing (RT) for small, medium and large size parameters, respectively. The scattering properties were integrated over 28 particle size distributions, as used in the microphysical scheme. The fits were then parameterized as simple functions of ice water content (IWC) for 6 shortwave bands. The parameterization was implemented in the GA6 configuration of the Met Office Unified Model along with the current operational long-wave parameterization. The GA6 configuration is used to simulate twenty-year annual-mean short-wave (SW) fluxes at the top of the atmosphere (TOA) as well as the temperature and humidity structure of the atmosphere. The parameterization presented here is compared against the current operational model and a more recent habit-mixture model.
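
The integration over size distributions uses the standard bulk weighting of single-scattering properties. A sketch, in which the gamma distribution and both single-particle functions are hypothetical stand-ins for the T-matrix/RTDF/RT results rather than the parameterization's actual fits:

```python
import numpy as np

D = np.linspace(2e-6, 2e-3, 2000)   # maximum dimension, m

def n_gamma(D, N0=1e10, mu=2.0, lam=5e3):
    """Gamma size distribution n(D) = N0 * D^mu * exp(-lam*D) (invented parameters)."""
    return N0 * D ** mu * np.exp(-lam * D)

def c_sca(D):
    """Hypothetical scattering cross-section, proportional to projected area."""
    return 0.5 * np.pi * (D / 2.0) ** 2

def g_single(D):
    """Hypothetical asymmetry parameter versus size for a hollow column."""
    return 0.74 + 0.05 * np.tanh((D - 2e-4) / 2e-4)

# Bulk asymmetry parameter: scattering-cross-section-weighted PSD average
w = c_sca(D) * n_gamma(D)
g_bulk = np.trapz(g_single(D) * w, D) / np.trapz(w, D)
print(round(g_bulk, 3))
```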

Relevance:

40.00%

Abstract:

Magmas in volcanic conduits commonly contain microlites in association with preexisting phenocrysts, as often indicated by volcanic rock textures. In this study, we present two different experiments that investigate the flow behavior of these bidisperse systems. In the first experiments, rotational rheometric methods are used to determine the rheology of monodisperse and polydisperse suspensions consisting of smaller, prolate particles (microlites) and larger, equant particles (phenocrysts) in a bubble-free Newtonian liquid (silicate melt). Our data show that increasing the relative proportion of prolate microlites to equant phenocrysts in a magma at constant total particle content can increase the relative viscosity by up to three orders of magnitude. Consequently, the rheological effect of particles in magmas cannot be modeled by assuming a monodisperse population of particles. We propose a new model that uses interpolated parameters based on the relative proportions of small and large particles and produces a considerably better fit to the data than earlier models. In a second series of experiments we investigate the textures produced by shearing bimodal suspensions in gradually solidifying epoxy resin in a concentric cylinder setup. The resulting textures show that the prolate particles are aligned with the flow lines and that the spherical particles are found in well-organized strings, with sphere-depleted shear bands in high-shear regions. These observations may explain the measured variation in shear-thinning and yield-stress behavior with increasing solid fraction and particle aspect ratio. The implications for magma flow are discussed, and the rheological results and textural observations are compared with observations on natural samples.
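
As an illustration of the interpolation idea (not the fitted model from the study), a Maron-Pierce relative viscosity with the maximum packing fraction interpolated between invented end-member values for the two particle populations reproduces the qualitative trend: at fixed total solid fraction, shifting toward prolate microlites raises the relative viscosity sharply:

```python
def relative_viscosity(phi, frac_small, phim_small=0.35, phim_large=0.60):
    """Maron-Pierce form eta_r = (1 - phi/phi_m)^-2 with phi_m
    interpolated linearly between end-member maximum packing fractions
    for prolate microlites (small) and equant phenocrysts (large).
    All parameter values are illustrative, not the study's fits."""
    phi_m = frac_small * phim_small + (1.0 - frac_small) * phim_large
    return (1.0 - phi / phi_m) ** -2

# Fixed total particle content, increasing microlite proportion:
for f in (0.0, 0.5, 1.0):
    print(f, relative_viscosity(0.30, f))   # eta_r rises from ~4 to ~49
```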

Relevance:

30.00%

Abstract:

Two simple and frequently used capture–recapture estimators of population size are compared: Chao's lower-bound estimator and Zelterman's estimator allowing for contaminated distributions. In the Poisson case it is shown that if there are only counts of ones and twos, Zelterman's estimator is always bounded above by Chao's estimator. If counts larger than two exist, Zelterman's estimator exceeds Chao's only if the ratio of the frequency of counts of twos to that of counts of ones is small enough. A similar analysis is provided for the binomial case. For a two-component mixture of Poisson distributions the asymptotic bias of both estimators is derived, and it is shown that the Zelterman estimator can suffer from a large overestimation bias. A modified Zelterman estimator is suggested, and the bias-corrected version of Chao's estimator is also considered. All four estimators are compared in a simulation study.
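
A minimal sketch of the two estimators in their standard closed forms, using invented frequency-of-frequencies data (f_k = number of units observed exactly k times; f1, f2 > 0 assumed):

```python
import numpy as np

def chao_lower_bound(freq):
    """Chao's lower bound: N_hat = n + f1^2 / (2 * f2)."""
    n = sum(freq.values())              # distinct units observed
    return n + freq[1] ** 2 / (2.0 * freq[2])

def zelterman(freq):
    """Zelterman's estimator: lambda_hat = 2 * f2 / f1,
    N_hat = n / (1 - exp(-lambda_hat))."""
    n = sum(freq.values())
    lam = 2.0 * freq[2] / freq[1]
    return n / (1.0 - np.exp(-lam))

freq = {1: 50, 2: 20, 3: 5}            # invented counts: 75 units observed
print(chao_lower_bound(freq))           # 75 + 2500/40 = 137.5
print(zelterman(freq))                  # lam = 0.8 -> ~136.2
```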

Relevance:

30.00%

Abstract:

We consider the comparison of two formulations in terms of average bioequivalence using the 2 × 2 cross-over design. In a bioequivalence study, the primary outcome is a pharmacokinetic measure, such as the area under the plasma concentration-time curve, which is usually assumed to have a lognormal distribution. The criterion typically used for claiming bioequivalence is that the 90% confidence interval for the ratio of the means should lie within the interval (0.80, 1.25), or equivalently that the 90% confidence interval for the difference in means on the natural log scale should be within the interval (−0.2231, 0.2231). We compare the gold standard method for calculating the sample size, based on the non-central t distribution, with methods based on the central t and normal distributions. In practice, the differences between the various approaches are likely to be small. Further approximations to the power function are sometimes used to simplify the calculations. These approximations should be used with caution, because the sample size required for a desired level of power might be under- or overestimated relative to the gold standard method. In some situations, however, the approximate methods produce very similar sample sizes to the gold standard method.
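
A sketch of the gold standard calculation via the non-central t distribution, using the common approximation to the power of the two one-sided tests (TOST) for a 2 × 2 cross-over; the CV, ratio and target power below are invented inputs:

```python
import numpy as np
from scipy.stats import nct, t as t_dist

def power_be(n, cv, ratio=0.95, alpha=0.05,
             theta_l=np.log(0.80), theta_u=np.log(1.25)):
    """Approximate TOST power for average bioequivalence in a 2x2
    cross-over with n total subjects, via the non-central t
    distribution (ignores the joint rejection constraint, which is
    slightly conservative)."""
    sigma_w = np.sqrt(np.log(1.0 + cv ** 2))  # within-subject SD, log scale
    se = sigma_w * np.sqrt(2.0 / n)           # SE of the mean log-ratio
    df = n - 2
    tcrit = t_dist.ppf(1.0 - alpha, df)
    delta = np.log(ratio)
    return (nct.cdf(-tcrit, df, (delta - theta_u) / se)
            - nct.cdf(tcrit, df, (delta - theta_l) / se))

def sample_size(cv, target=0.80, **kw):
    """Smallest even total sample size reaching the target power."""
    n = 4
    while power_be(n, cv, **kw) < target:
        n += 2
    return n

print(sample_size(cv=0.25))  # ~28 total subjects for CV = 25%
```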

Relevance:

30.00%

Abstract:

Estimation of population size with a missing zero-class is an important problem encountered in epidemiological assessment studies. Fitting a Poisson model to the observed data by maximum likelihood and estimating the population size from this fit is an approach that has been widely used for this purpose. In practice, however, the Poisson assumption is seldom satisfied. Zelterman (1988) proposed a robust estimator for unclustered data that works well for a wide class of count distributions. In the work presented here, we extend this estimator to clustered data. The estimator requires fitting a zero-truncated homogeneous Poisson model by maximum likelihood and then using a Horvitz-Thompson estimator of population size. This was found to work well when the data follow the hypothesized homogeneous Poisson model. However, when the true distribution deviates from the hypothesized model, the population size was found to be underestimated. In the search for a more robust estimator, we focused on three models that use clusters with exactly one case, clusters with exactly two cases and clusters with exactly three cases, respectively, to estimate the probability of the zero-class, and thereby use data collected on all the clusters in the Horvitz-Thompson estimator of population size. The loss in efficiency associated with the gain in robustness was examined in a simulation study. As a trade-off between gain in robustness and loss in efficiency, the model that uses data collected on clusters with at most three cases to estimate the probability of the zero-class was found to be preferable in general. In applications, we recommend obtaining estimates from all three models and making a choice that considers the estimates from the three models, robustness and the loss in efficiency.
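
A minimal sketch of the baseline approach described above: zero-truncated Poisson maximum likelihood combined with the Horvitz-Thompson correction (the count data are invented, and the clustered, more robust variants developed in the paper are not reproduced):

```python
import numpy as np
from scipy.optimize import brentq

def ztp_horvitz_thompson(counts):
    """Fit a zero-truncated Poisson by ML (solve lam/(1-exp(-lam)) =
    mean of the truncated counts) and return the Horvitz-Thompson
    population-size estimate n / (1 - exp(-lam_hat))."""
    counts = np.asarray(counts, dtype=float)
    xbar = counts.mean()                # mean of observed (>= 1) counts
    f = lambda lam: lam / (1.0 - np.exp(-lam)) - xbar
    lam_hat = brentq(f, 1e-6, 100.0)    # requires xbar > 1
    p_obs = 1.0 - np.exp(-lam_hat)      # P(count > 0) under the fit
    return counts.size / p_obs

# Invented cases-per-cluster counts (zero-class unobservable):
counts = [1, 1, 1, 2, 1, 3, 2, 1, 1, 2]
print(ztp_horvitz_thompson(counts))     # ~17 clusters in total
```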

Relevance:

30.00%

Abstract:

This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators associated with the two models agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it can readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly for the truncated Poisson family.
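
The stated equivalence is easy to verify numerically. For a two-component Poisson example, re-weighting the mixing distribution by the detection probabilities 1 - exp(-lam_j) maps the truncated mixture onto an identical mixture of truncated densities:

```python
import numpy as np
from scipy.stats import poisson

lams = np.array([0.5, 3.0])   # component means (invented)
w = np.array([0.6, 0.4])      # mixing weights of the untruncated mixture
x = np.arange(1, 15)          # zero-truncated support (first 14 points)

# Truncated mixture: mix first, then condition on x >= 1.
pmf_mix = (w[:, None] * poisson.pmf(x, lams[:, None])).sum(axis=0)
trunc_mix = pmf_mix / (w * (1.0 - np.exp(-lams))).sum()

# Mixture of truncated densities with the mapped weights
# w*_j proportional to w_j * (1 - exp(-lam_j)).
w_star = w * (1.0 - np.exp(-lams))
w_star /= w_star.sum()
trunc_comp = poisson.pmf(x, lams[:, None]) / (1.0 - np.exp(-lams[:, None]))
mix_trunc = (w_star[:, None] * trunc_comp).sum(axis=0)

print(np.allclose(trunc_mix, mix_trunc))  # True: the two models coincide
```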

Relevance:

30.00%

Abstract:

The distributions of times to first cell division were determined for populations of Escherichia coli stationary-phase cells inoculated onto agar media. This was accomplished by automated analysis of digital images of individual cells growing on agar and calculation of the "box area ratio." Using approximately 300 cells per experiment, the mean time to first division and its standard deviation for cells grown in liquid medium at 37 °C, inoculated on agar and incubated at 20 °C were determined as 3.0 h and 0.7 h, respectively. Distributions were observed to tail toward higher values, but no definitive model distribution was identified. Both preinoculation stress, by heating cultures at 50 °C, and postinoculation stress, by growth in the presence of higher concentrations of NaCl, increased the mean time to first division. Both stresses also increased the spread of the distributions in proportion to the mean division time, the coefficient of variation being constant at approximately 0.2 in all cases. The "relative division time," which is the time to first division for an individual cell expressed in terms of the cell size doubling time, was used as a measure of the "work to be done" to prepare for cell division. Relative division times were greater for heat-stressed cells than for cells growing under osmotic stress.