948 results for Composite particle models
Abstract:
Maize silage nutritive quality is routinely determined by near infrared reflectance spectroscopy (NIRS). However, little is known about the impact of sample preparation on the accuracy of the calibration to predict biological traits. A sample population of 48 maize silages representing a wide range of physiological maturities was used in a study to determine the impact of different sample preparation procedures (i.e., drying regimes; the presence or absence of residual moisture; the degree of particle comminution) on resultant NIR prediction statistics. All silages were scanned using a total of 12 combinations of sample pre-treatments. Each sample preparation combination was subjected to three multivariate regression techniques to give a total of 36 predictions per biological trait. Increasing the degree of sample preparation, relative to scanning the unprocessed whole-plant (WP) material, always resulted in a numerical reduction of the model error statistics. However, the treatments differed in their ability to reduce these statistics significantly. Particle comminution was the most important factor, oven-drying regime was intermediate, and residual moisture presence was the least important. Models to predict various biological parameters of maize silage will be improved if material is subjected to a high degree of particle comminution (i.e., having been passed through a 1 mm screen) and developed on plant material previously dried at 60 degrees C. The extra effort in terms of time and cost required to remove sample residual moisture cannot be justified.
Abstract:
A tunable radial basis function (RBF) network model is proposed for nonlinear system identification using particle swarm optimisation (PSO). At each stage of orthogonal forward regression (OFR) model construction, PSO optimises one RBF unit's centre vector and diagonal covariance matrix by minimising the leave-one-out (LOO) mean square error (MSE). This PSO aided OFR automatically determines how many tunable RBF nodes are sufficient for modelling. Compared with the state-of-the-art local regularisation assisted orthogonal least squares algorithm based on the LOO MSE criterion for constructing fixed-node RBF network models, the PSO tuned RBF model construction produces more parsimonious RBF models with better generalisation performance and is computationally more efficient.
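The abstract contains no code; the following is a minimal illustrative sketch (my own reconstruction in Python/NumPy, not the authors' implementation) of the core idea of one OFR stage in which a particle swarm tunes the centre and per-dimension widths of a single Gaussian RBF node against the closed-form leave-one-out MSE of a linear-in-the-weights model. All function names, swarm settings, the node cap and the toy data are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def design_matrix(X, centres, widths):
        # Gaussian RBF responses, one column per node, with the diagonal
        # "covariance" expressed as a per-dimension width vector.
        cols = [np.exp(-0.5 * np.sum(((X - c) / w) ** 2, axis=1))
                for c, w in zip(centres, widths)]
        return np.column_stack(cols)

    def loo_mse(Phi, y, ridge=1e-8):
        # Closed-form leave-one-out error of a (lightly regularised) least-squares
        # fit: e_loo_i = e_i / (1 - h_ii), where H is the hat matrix.
        A = Phi.T @ Phi + ridge * np.eye(Phi.shape[1])
        H = Phi @ np.linalg.solve(A, Phi.T)
        resid = y - H @ y
        return np.mean((resid / (1.0 - np.diag(H))) ** 2)

    def pso_tune_node(X, y, centres, widths, n_particles=20, n_iter=60):
        # Swarm position = [centre (d values), log-widths (d values)] of one node.
        d = X.shape[1]
        lo = np.concatenate([X.min(0), np.log(0.1 * X.std(0))])
        hi = np.concatenate([X.max(0), np.log(3.0 * X.std(0))])
        fit = lambda p: loo_mse(design_matrix(X, centres + [p[:d]],
                                              widths + [np.exp(p[d:])]), y)
        pos = rng.uniform(lo, hi, size=(n_particles, 2 * d))
        vel = np.zeros_like(pos)
        pbest, pbest_f = pos.copy(), np.array([fit(p) for p in pos])
        for _ in range(n_iter):
            gbest = pbest[np.argmin(pbest_f)]
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            f = np.array([fit(p) for p in pos])
            better = f < pbest_f
            pbest[better], pbest_f[better] = pos[better], f[better]
        best = pbest[np.argmin(pbest_f)]
        return best[:d], np.exp(best[d:]), pbest_f.min()

    # Toy usage: grow the network one tuned node at a time until the LOO MSE
    # stops improving (this is the "automatic" model-size determination).
    X = rng.uniform(-3, 3, size=(100, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(100)
    centres, widths, best_loo = [], [], np.inf
    while len(centres) < 20:
        c, w, loo = pso_tune_node(X, y, centres, widths)
        if loo >= best_loo:
            break
        centres.append(c); widths.append(w); best_loo = loo
    print(f"selected {len(centres)} RBF nodes, LOO MSE = {best_loo:.4f}")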
Abstract:
We propose a unified data modeling approach that is equally applicable to supervised regression and classification applications, as well as to unsupervised probability density function estimation. A particle swarm optimization (PSO) aided orthogonal forward regression (OFR) algorithm based on leave-one-out (LOO) criteria is developed to construct parsimonious radial basis function (RBF) networks with tunable nodes. Each stage of the construction process determines the center vector and diagonal covariance matrix of one RBF node by minimizing the LOO statistics. For regression applications, the LOO criterion is chosen to be the LOO mean square error, while the LOO misclassification rate is adopted in two-class classification applications. By adopting the Parzen window estimate as the desired response, the unsupervised density estimation problem is transformed into a constrained regression problem. This PSO aided OFR algorithm for tunable-node RBF networks is capable of constructing very parsimonious RBF models that generalize well, and our analysis and experimental results demonstrate that the algorithm is computationally even simpler than the efficient regularization assisted orthogonal least squares algorithm based on LOO criteria for selecting fixed-node RBF models. Another significant advantage of the proposed learning procedure is that it does not have learning hyperparameters that have to be tuned using costly cross validation. The effectiveness of the proposed PSO aided OFR construction procedure is illustrated using several examples taken from regression and classification, as well as density estimation applications.
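As a small illustration of how the density-estimation case can be cast as regression (a sketch of my own, not the paper's algorithm): the Parzen-window estimate evaluated at the training points serves as the desired response, and a kernel model with non-negative, normalised weights is fitted to those targets. The bandwidth rule, the crude clip-and-renormalise step and all numbers are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    X = np.concatenate([rng.normal(-2, 0.5, 150), rng.normal(1, 1.0, 150)])[:, None]
    n = len(X)
    h = 1.06 * X.std() * n ** (-0.2)            # Silverman-style bandwidth (assumed)

    # Parzen-window (kernel) density estimate at the training points: these values
    # become the "desired response" of a regression problem.
    diffs = (X - X.T) / h
    parzen = np.mean(np.exp(-0.5 * diffs ** 2), axis=1) / (h * np.sqrt(2 * np.pi))

    # Fit a small Gaussian-kernel model to the Parzen targets; clipping to
    # non-negative weights and renormalising keeps the result a valid density
    # (a crude stand-in for the constrained regression described above).
    centres = X[rng.choice(n, 10, replace=False)]
    Phi = np.exp(-0.5 * ((X - centres.T) / h) ** 2)
    w, *_ = np.linalg.lstsq(Phi, parzen, rcond=None)
    w = np.clip(w, 0.0, None)
    w /= w.sum() * h * np.sqrt(2.0 * np.pi)      # unit integral of the fitted density

    grid = np.linspace(-4.0, 4.0, 9)[:, None]
    fitted = np.exp(-0.5 * ((grid - centres.T) / h) ** 2) @ w
    print("fitted density on a coarse grid:", np.round(fitted, 3))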
Abstract:
Several previous studies have attempted to assess the sublimation depth-scales of ice particles from clouds into clear air. Upon examining the sublimation depth-scales in the Met Office Unified Model (MetUM), it was found that the MetUM has evaporation depth-scales 2–3 times larger than radar observations. Similar results can be seen in the European Centre for Medium-Range Weather Forecasts (ECMWF), Regional Atmospheric Climate Model (RACMO) and Météo-France models. In this study, we use radar simulation (converting model variables into radar observations) and one-dimensional explicit microphysics numerical modelling to test and diagnose the cause of the deep sublimation depth-scales in the forecast model. The MetUM data and parametrization scheme are used to predict terminal velocity, which can be compared with the observed Doppler velocity. This can then be used to test the hypotheses as to why the sublimation depth-scale is too large within the MetUM: turbulence could lead to dry-air entrainment and higher evaporation rates; the particle density may be wrong; the particle capacitance may be too high, leading to incorrect evaporation rates; or the humidity within the sublimating layer may be incorrectly represented. We show that the most likely cause of deep sublimation zones is an incorrect representation of model humidity in the layer. This is tested further by using a one-dimensional explicit microphysics model, which tests the sensitivity of ice sublimation to key atmospheric variables and is capable of including sonde and radar measurements to simulate real cases. Results suggest that the MetUM grid resolution at ice cloud altitudes is not fine enough to maintain the sharp drop in humidity that is observed in the sublimation zone.
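To make the humidity sensitivity concrete, here is a single-particle toy calculation (my own simplification, not the study's one-dimensional microphysics model): an ice particle falling at a fixed speed sublimates following the standard capacitance form dm/dt = 4*pi*C*(Si - 1)/(Fk + Fd), and the depth it survives below cloud base is computed for different ice saturation ratios Si. The bulk density, fall speed, capacitance factor and the combined heat/vapour-diffusion term are rough illustrative values only.

    import numpy as np

    RHO_ICE = 700.0        # bulk particle density (kg m-3), illustrative
    FK_PLUS_FD = 6.0e7     # combined heat-conduction + vapour-diffusion term
                           # near -20 C (m s kg-1), order of magnitude only

    def sublimation_depth(Si, r0=150e-6, fall_speed=0.8, cap_factor=1.0, dz=10.0):
        """Depth (m) over which a falling particle of initial radius r0 sublimates."""
        m = RHO_ICE * 4.0 / 3.0 * np.pi * r0 ** 3
        depth = 0.0
        while m > 1e-12 and depth < 5000.0:
            r = (3.0 * m / (4.0 * np.pi * RHO_ICE)) ** (1.0 / 3.0)
            C = cap_factor * r                      # capacitance; C = r for a sphere
            dmdt = 4.0 * np.pi * C * (Si - 1.0) / FK_PLUS_FD
            m = max(m + dmdt * dz / fall_speed, 0.0)
            depth += dz
        return depth

    for Si in (0.9, 0.7, 0.5):   # drier layer (lower Si) -> much shallower sublimation zone
        print(f"ice saturation ratio {Si:.1f}: sublimation depth ~ {sublimation_depth(Si):.0f} m")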
Abstract:
We examine whether a three-regime model that allows for dormant, explosive and collapsing speculative behaviour can explain the dynamics of the S&P 500. We extend existing models of speculative behaviour by including a third regime that allows a bubble to grow at a steady rate, and propose abnormal volume as an indicator of the probable time of bubble collapse. We also examine the financial usefulness of the three-regime model by studying a trading rule formed using inferences from it, whose use leads to higher Sharpe ratios and end-of-period wealth than those obtained from existing models or a buy-and-hold strategy.
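A compact sketch of the kind of evaluation described in the last sentence (entirely synthetic: the returns, collapse probabilities, threshold and period length are placeholders, not the paper's data or regime inferences): a rule that switches into cash when the model-implied probability of collapse is high, scored by Sharpe ratio and end-of-period wealth against buy-and-hold.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 1000                                    # number of periods (e.g. weeks)
    returns = rng.normal(0.0015, 0.02, n)       # synthetic stand-in for index returns
    p_collapse = rng.beta(1, 9, n)              # synthetic stand-in for collapse probabilities

    def sharpe(r, periods_per_year=52):
        # Annualised Sharpe ratio (risk-free rate taken as zero for simplicity).
        return np.sqrt(periods_per_year) * r.mean() / r.std()

    def end_wealth(r):
        # Terminal wealth of one unit invested at the start.
        return np.prod(1.0 + r)

    # Switching rule: hold cash (zero return) whenever collapse looks likely.
    rule_returns = np.where(p_collapse > 0.2, 0.0, returns)

    for name, r in [("buy-and-hold", returns), ("switching rule", rule_returns)]:
        print(f"{name:14s}  Sharpe = {sharpe(r):5.2f}   terminal wealth = {end_wealth(r):5.2f}")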
Abstract:
Almost all research fields in geosciences use numerical models and observations and combine these using data-assimilation techniques. With ever-increasing resolution and complexity, the numerical models tend to be highly nonlinear and also observations become more complicated and their relation to the models more nonlinear. Standard data-assimilation techniques like (ensemble) Kalman filters and variational methods like 4D-Var rely on linearizations and are likely to fail in one way or another. Nonlinear data-assimilation techniques are available, but are only efficient for small-dimensional problems, hampered by the so-called ‘curse of dimensionality’. Here we present a fully nonlinear particle filter that can be applied to higher dimensional problems by exploiting the freedom of the proposal density inherent in particle filtering. The method is illustrated for the three-dimensional Lorenz model using three particles and the much more complex 40-dimensional Lorenz model using 20 particles. By also applying the method to the 1000-dimensional Lorenz model, again using only 20 particles, we demonstrate the strong scale-invariance of the method, leading to the optimistic conjecture that the method is applicable to realistic geophysical problems.
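For readers unfamiliar with particle filtering, the sketch below shows the standard bootstrap (sampling-importance-resampling) filter on the three-variable Lorenz-63 system; it is deliberately the plain scheme, not the proposal-density construction of the paper, whose point is precisely to avoid the weight degeneracy the bootstrap filter suffers in high dimensions. Step sizes, noise levels and observation frequency are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)

    def lorenz63_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # One forward-Euler step of the Lorenz-63 equations (kept simple on purpose).
        dx = np.empty_like(x)
        dx[..., 0] = sigma * (x[..., 1] - x[..., 0])
        dx[..., 1] = x[..., 0] * (rho - x[..., 2]) - x[..., 1]
        dx[..., 2] = x[..., 0] * x[..., 1] - beta * x[..., 2]
        return x + dt * dx

    obs_err, n_steps, obs_every, n_part = 1.0, 500, 10, 100
    truth = np.array([1.0, 1.0, 1.0])
    particles = truth + rng.normal(0.0, 1.0, size=(n_part, 3))
    weights = np.full(n_part, 1.0 / n_part)

    for t in range(1, n_steps + 1):
        truth = lorenz63_step(truth)
        particles = lorenz63_step(particles) + rng.normal(0.0, 0.05, particles.shape)
        if t % obs_every == 0:
            # Weight by the likelihood of a noisy observation of the first component,
            # then resample systematically back to equal weights.
            y = truth[0] + rng.normal(0.0, obs_err)
            logw = -0.5 * ((y - particles[:, 0]) / obs_err) ** 2
            weights *= np.exp(logw - logw.max())
            weights /= weights.sum()
            cum = np.cumsum(weights)
            cum[-1] = 1.0
            idx = np.searchsorted(cum, (rng.random() + np.arange(n_part)) / n_part)
            particles = particles[idx]
            weights = np.full(n_part, 1.0 / n_part)

    print("truth:", np.round(truth, 2), " ensemble mean:", np.round(particles.mean(0), 2))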
Abstract:
Airborne dust affects the Earth's energy balance, an impact that is measured in terms of the implied change in net radiation (or radiative forcing, in W m-2) at the top of the atmosphere. There remains considerable uncertainty in the magnitude and sign of direct forcing by airborne dust under current climate. Much of this uncertainty stems from simplified assumptions about mineral dust-particle size, composition and shape, which are applied in remote sensing retrievals of dust characteristics and dust-cycle models. Improved estimates of direct radiative forcing by dust will require improved characterization of the spatial variability in particle characteristics to provide reliable information on dust optical properties. This includes constraints on: (1) particle-size distribution, including discrimination of particle subpopulations and quantification of the amount of dust in the sub-10 µm to <0.1 µm mass fraction; (2) particle composition, specifically the abundance of iron oxides, and whether particles consist of single or multi-mineral grains; (3) particle shape, including degree of sphericity and surface roughness, as a function of size and mineralogy; and (4) the degree to which dust particles are aggregated together. The use of techniques that measure the size, composition and shape of individual particles will provide a better basis for optical modelling.
Abstract:
The time discretization in weather and climate models introduces truncation errors that limit the accuracy of the simulations. Recent work has yielded a method for reducing the amplitude errors in leapfrog integrations from first-order to fifth-order. This improvement is achieved by replacing the Robert–Asselin filter with the RAW filter and using a linear combination of the unfiltered and filtered states to compute the tendency term. The purpose of the present paper is to apply the composite-tendency RAW-filtered leapfrog scheme to semi-implicit integrations. A theoretical analysis shows that the stability and accuracy are unaffected by the introduction of the implicitly treated mode. The scheme is tested in semi-implicit numerical integrations in both a simple nonlinear stiff system and a medium-complexity atmospheric general circulation model, and yields substantial improvements in both cases. We conclude that the composite-tendency RAW-filtered leapfrog scheme is suitable for use in semi-implicit integrations.
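For orientation, the sketch below integrates a simple oscillation equation with the filtered leapfrog scheme and compares the amplitude retained by the classical Robert-Asselin filter with that of the RAW filter; the composite-tendency and semi-implicit variants studied in the paper build on this same skeleton. The filter parameter nu = 0.2 and the RAW coefficient alpha = 0.53 are common illustrative choices from the RAW-filter literature, not values taken from this paper.

    import numpy as np

    def filtered_leapfrog(omega=1.0, dt=0.2, n_steps=300, nu=0.2, alpha=0.53):
        """Integrate dx/dt = i*omega*x with a (RA/RAW-)filtered leapfrog scheme."""
        f = lambda x: 1j * omega * x
        x_old = 1.0 + 0.0j                       # level n-1 (filtered)
        x_mid = np.exp(1j * omega * dt)          # level n, started from the exact value
        for _ in range(n_steps):
            x_new = x_old + 2.0 * dt * f(x_mid)              # leapfrog step
            d = 0.5 * nu * (x_old - 2.0 * x_mid + x_new)     # filter displacement
            x_old = x_mid + alpha * d                        # RAW: share d between ...
            x_mid = x_new + (alpha - 1.0) * d                # ... levels n and n+1
        return x_mid

    # The exact solution has unit amplitude at all times, so |x| measures the
    # spurious numerical damping introduced by the time scheme and filter.
    for name, alpha in [("Robert-Asselin (alpha = 1.0)", 1.0), ("RAW (alpha = 0.53)", 0.53)]:
        print(f"{name:28s} final amplitude = {abs(filtered_leapfrog(alpha=alpha)):.4f} (exact 1.0)")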
Abstract:
Many of the next generation of global climate models will include aerosol schemes which explicitly simulate the microphysical processes that determine the particle size distribution. These models enable aerosol optical properties and cloud condensation nuclei (CCN) concentrations to be determined by fundamental aerosol processes, which should lead to a more physically based simulation of aerosol direct and indirect radiative forcings. This study examines the global variation in particle size distribution simulated by 12 global aerosol microphysics models to quantify model diversity and to identify any common biases against observations. Evaluation against size distribution measurements from a new European network of aerosol supersites shows that the mean model agrees quite well with the observations at many sites on the annual mean, but there are some seasonal biases common to many sites. In particular, at many of these European sites, the accumulation mode number concentration is biased low during winter and Aitken mode concentrations tend to be overestimated in winter and underestimated in summer. At high northern latitudes, the models strongly underpredict Aitken and accumulation particle concentrations compared to the measurements, consistent with previous studies that have highlighted the poor performance of global aerosol models in the Arctic. In the marine boundary layer, the models capture the observed meridional variation in the size distribution, which is dominated by the Aitken mode at high latitudes, with an increasing concentration of accumulation particles with decreasing latitude. Considering vertical profiles, the models reproduce the observed peak in total particle concentrations in the upper troposphere due to new particle formation, although modelled peak concentrations tend to be biased high over Europe. Overall, the multi-model-mean data set simulates the global variation of the particle size distribution with a good degree of skill, suggesting that most of the individual global aerosol microphysics models are performing well, although the large model diversity indicates that some models are in poor agreement with the observations. Further work is required to better constrain size-resolved primary and secondary particle number sources, and an improved understanding of nucleation and growth (e.g. the role of nitrate and secondary organics) will improve the fidelity of simulated particle size distributions.
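As background on what the Aitken and accumulation modes mean quantitatively, the sketch below represents a size distribution as a sum of lognormal modes, the form used by most modal aerosol microphysics schemes, and integrates it over conventional size windows. The mode parameters and window boundaries are illustrative assumptions, not values from any of the twelve models or the measurement network.

    import math

    # (name, number concentration N [cm-3], median diameter Dg [nm], geometric std dev)
    modes = [
        ("nucleation",   1000.0,  10.0, 1.6),
        ("Aitken",       1500.0,  50.0, 1.6),
        ("accumulation",  800.0, 180.0, 1.5),
    ]

    def number_between(d1, d2, N, dg, sg):
        """Particles per cm3 of one lognormal mode with diameters between d1 and d2 (nm)."""
        z = lambda d: math.log(d / dg) / (math.sqrt(2.0) * math.log(sg))
        return 0.5 * N * (math.erf(z(d2)) - math.erf(z(d1)))

    for lo, hi, label in [(10.0, 100.0, "Aitken window (10-100 nm)"),
                          (100.0, 1000.0, "accumulation window (100-1000 nm)")]:
        total = sum(number_between(lo, hi, N, dg, sg) for _, N, dg, sg in modes)
        print(f"{label:33s}: {total:7.1f} cm-3")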
Abstract:
This paper evaluates the current status of global modeling of the organic aerosol (OA) in the troposphere and analyzes the differences between models as well as between models and observations. Thirty-one global chemistry transport models (CTMs) and general circulation models (GCMs) have participated in this intercomparison, in the framework of AeroCom phase II. The simulation of OA varies greatly between models in terms of the magnitude of primary emissions, secondary OA (SOA) formation, the number of OA species used (2 to 62), the complexity of OA parameterizations (gas-particle partitioning, chemical aging, multiphase chemistry, aerosol microphysics), and the OA physical, chemical and optical properties. The diversity of the global OA simulation results has increased since earlier AeroCom experiments, mainly due to the increasing complexity of the SOA parameterization in models, and the implementation of new, highly uncertain, OA sources. Diversity of over one order of magnitude exists in the modeled vertical distribution of OA concentrations that deserves a dedicated future study. Furthermore, although the OA / OC ratio depends on OA sources and atmospheric processing, and is important for model evaluation against OA and OC observations, it is resolved only by a few global models. The median global primary OA (POA) source strength is 56 Tg a−1 (range 34–144 Tg a−1) and the median SOA source strength (natural and anthropogenic) is 19 Tg a−1 (range 13–121 Tg a−1). Among the models that take into account the semi-volatile SOA nature, the median source is calculated to be 51 Tg a−1 (range 16–121 Tg a−1), much larger than the median value of the models that calculate SOA in a more simplistic way (19 Tg a−1; range 13–20 Tg a−1, with one model at 37 Tg a−1). The median atmospheric burden of OA is 1.4 Tg (24 models in the range of 0.6–2.0 Tg and 4 between 2.0 and 3.8 Tg), with a median OA lifetime of 5.4 days (range 3.8–9.6 days). In models that reported both OA and sulfate burdens, the median value of the OA/sulfate burden ratio is calculated to be 0.77; 13 models calculate a ratio lower than 1, and 9 models higher than 1. For 26 models that reported OA deposition fluxes, the median wet removal is 70 Tg a−1 (range 28–209 Tg a−1), which is on average 85% of the total OA deposition. Fine aerosol organic carbon (OC) and OA observations from continuous monitoring networks and individual field campaigns have been used for model evaluation. At urban locations, the model–observation comparison indicates missing knowledge on anthropogenic OA sources, both strength and seasonality. The combined model–measurements analysis suggests the existence of increased OA levels during summer due to biogenic SOA formation over large areas of the USA that can be of the same order of magnitude as the POA, even at urban locations, and contribute to the measured urban seasonal pattern. Global models are able to simulate the high secondary character of OA observed in the atmosphere as a result of SOA formation and POA aging, although the amount of OA present in the atmosphere remains largely underestimated, with a mean normalized bias (MNB) equal to −0.62 (−0.51) based on the comparison against OC (OA) urban data of all models at the surface, −0.15 (+0.51) when compared with remote measurements, and −0.30 for marine locations with OC data. 
The mean temporal correlations across all stations are low when compared with OC (OA) measurements: 0.47 (0.52) for urban stations, 0.39 (0.37) for remote stations, and 0.25 for marine stations with OC data. The combination of high (negative) MNB and higher correlation at urban stations, when compared with the low MNB and lower correlation at remote sites, suggests that knowledge about the processes that govern aerosol processing, transport and removal, in addition to their sources, is important at the remote stations. There is no clear change in model skill with increasing model complexity with regard to OC or OA mass concentration. However, the complexity is needed in models in order to distinguish between anthropogenic and natural OA as needed for climate mitigation, and to calculate the impact of OA on climate accurately.
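For reference, the two statistics quoted above are conventionally computed as in the short sketch below (the exact station weighting and averaging choices of the intercomparison may differ); the numbers are invented for illustration only.

    import numpy as np

    obs = np.array([2.1, 3.4, 1.8, 4.0, 2.9])   # e.g. observed monthly mean OC (ug m-3), invented
    mod = np.array([1.0, 1.9, 1.1, 2.2, 1.6])   # co-located model values, invented

    mnb = np.mean((mod - obs) / obs)            # mean normalized bias; negative = underestimate
    corr = np.corrcoef(mod, obs)[0, 1]          # temporal (Pearson) correlation
    print(f"MNB = {mnb:+.2f}, correlation = {corr:.2f}")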
Abstract:
Extratropical cyclones produce the majority of precipitation in many regions of the extratropics. This study evaluates the ability of a climate model, HiGEM, to reproduce the precipitation associated with extratropical cyclones. The model is evaluated using the ERA-Interim reanalysis and the GPCP dataset. The analysis employs a cyclone-centred compositing technique, evaluates composites across a range of geographical areas and cyclone intensities, and also investigates the ability of the model to reproduce the climatological distribution of cyclone-associated precipitation across the Northern Hemisphere. Using this phenomenon-centred approach makes it possible to identify the processes responsible for climatological biases in the model. Composite precipitation intensities are found to be comparable when all cyclones across the Northern Hemisphere are included. When the cyclones are filtered by region or intensity, differences are found; in particular, HiGEM produces too much precipitation in its most intense cyclones relative to ERA-Interim and GPCP. Biases in the climatological distribution of cyclone-associated precipitation are also found, with biases around the storm track regions associated with both the number of cyclones in HiGEM and also their average precipitation intensity. These results have implications for the reliability of future projections of extratropical precipitation from the model.
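The cyclone-centred compositing idea can be sketched in a few lines (an illustrative reconstruction with synthetic data, not the study's code or tracking algorithm): for each cyclone centre a fixed-size box of the precipitation field is extracted and the boxes are averaged, so that structure common to many cyclones survives while unrelated variability averages out.

    import numpy as np

    rng = np.random.default_rng(3)
    ny, nx, half = 200, 400, 20                          # grid size and box half-width (grid points)
    precip = rng.gamma(shape=0.5, scale=2.0, size=(ny, nx))          # stand-in precipitation field
    centres = [(rng.integers(half, ny - half), rng.integers(half, nx - half))
               for _ in range(150)]                      # stand-in cyclone centre positions

    composite = np.zeros((2 * half + 1, 2 * half + 1))
    for cy, cx in centres:
        composite += precip[cy - half:cy + half + 1, cx - half:cx + half + 1]
    composite /= len(centres)

    print("composite precipitation, cyclone centre vs box corner:",
          round(composite[half, half], 2), "vs", round(composite[0, 0], 2))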
Abstract:
Filter degeneracy is the main obstacle for the implementation of particle filters in non-linear, high-dimensional models. A new scheme, the implicit equal-weights particle filter (IEWPF), is introduced. In this scheme, samples are drawn implicitly from proposal densities with a different covariance for each particle, such that all particle weights are equal by construction. We test and explore the properties of the new scheme using a 1,000-dimensional simple linear model and the 1,000-dimensional non-linear Lorenz96 model, and compare the performance of the scheme to a local ensemble Kalman filter. The experiments show that the new scheme can easily be implemented in high-dimensional systems and is never degenerate, with good convergence properties in both systems.
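The degeneracy the scheme is designed to avoid is easy to demonstrate (a generic illustration, not the IEWPF itself): with standard importance weighting, the largest normalised weight tends to one as the number of independent observations grows, so effectively a single particle remains.

    import numpy as np

    rng = np.random.default_rng(7)
    n_particles = 100
    for n_obs in (1, 10, 100, 1000):
        # Particles from the prior, observations of a zero truth with unit noise.
        particles = rng.normal(0.0, 1.0, size=(n_particles, n_obs))
        y = rng.normal(0.0, 1.0, size=n_obs)
        logw = -0.5 * np.sum((y - particles) ** 2, axis=1)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        print(f"observation-space dimension {n_obs:5d}: largest weight = {w.max():.3f}")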
MAGNETOHYDRODYNAMIC SIMULATIONS OF RECONNECTION AND PARTICLE ACCELERATION: THREE-DIMENSIONAL EFFECTS
Abstract:
Magnetic fields can change their topology through a process known as magnetic reconnection. This process is not only important for understanding the origin and evolution of the large-scale magnetic field, but is also seen as a possibly efficient particle accelerator producing cosmic rays mainly through the first-order Fermi process. In this work we study the properties of particle acceleration in reconnection zones and show that, for test particles inserted in magnetohydrodynamic (MHD) domains of reconnection without kinetic effects such as pressure anisotropy, the Hall term, or anomalous effects, the velocity component parallel to the magnetic field increases exponentially. Also, the acceleration of the perpendicular component is always possible in such models. We find that within contracting magnetic islands or current sheets the particles accelerate predominantly through the first-order Fermi process, as previously described, while outside the current sheets and islands the particles experience mostly drift acceleration due to magnetic field gradients. Considering two-dimensional MHD models without a guide field, we find that the parallel acceleration stops at some level. This saturation effect is, however, removed in the presence of an out-of-plane guide field or in three-dimensional models. Therefore, we stress the importance of the guide field and fully three-dimensional studies for a complete understanding of the process of particle acceleration in astrophysical reconnection environments.
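A minimal test-particle sketch in the same spirit (an illustration of the methodology, not the paper's setup: the MHD snapshot fields are replaced by an analytic Harris-sheet-like field with a guide component, in arbitrary normalised units): a charged particle is advanced with the standard Boris integrator and gains energy from the prescribed out-of-plane electric field.

    import numpy as np

    def boris_push(x, v, e_field, b_field, dt, qm=1.0):
        """One non-relativistic Boris step for position x and velocity v."""
        v_minus = v + 0.5 * qm * dt * e_field(x)
        t = 0.5 * qm * dt * b_field(x)
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)
        v_new = v_plus + 0.5 * qm * dt * e_field(x)
        return x + dt * v_new, v_new

    # Harris-sheet-like magnetic field with a guide component Bg, plus a weak
    # uniform out-of-plane electric field Ez (all values illustrative).
    L, Bg, Ez = 1.0, 0.1, 0.01
    b_field = lambda x: np.array([np.tanh(x[1] / L), 0.0, Bg])
    e_field = lambda x: np.array([0.0, 0.0, Ez])

    x = np.array([0.0, 0.2, 0.0])
    v = np.array([0.1, 0.0, 0.0])
    samples = []
    for step in range(20001):
        if step % 5000 == 0:
            samples.append(0.5 * np.dot(v, v))
        x, v = boris_push(x, v, e_field, b_field, dt=0.05)
    print("kinetic energy every 5000 steps:", [f"{e:.2e}" for e in samples])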
Abstract:
The problem of cosmological particle creation in a spatially flat, homogeneous and isotropic universe is discussed in the context of f(R) theories of gravity. In contrast to cosmological models based on general relativity, it is found that a conformally invariant metric does not forbid the creation of massless particles during the early stages (radiation era) of the universe.
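For context (standard background, not a result of the paper): f(R) gravity generalises the Einstein-Hilbert action by replacing the Ricci scalar with a function of it,

    S = \frac{1}{2\kappa} \int \mathrm{d}^{4}x \, \sqrt{-g}\, f(R) + S_{\mathrm{matter}}, \qquad \kappa = 8\pi G,

so that f(R) = R recovers general relativity, while other choices alter the field equations governing the cosmological expansion on which the particle-creation rate depends.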
Abstract:
We obtained high signal-to-noise ratio long-slit spectra of the galaxy M32 with the Gemini Multi-Object Spectrograph at the Gemini-North telescope. We analysed the integrated spectra by means of full spectral fitting in order to extract the mixture of stellar populations that best represents its composite nature. Three different galactic radii were analysed, from the nuclear region out to 2 arcmin from the centre. This allows us to compare, for the first time, the results of integrated light spectroscopy with those of resolved colour-magnitude diagrams from the literature. As a main result we propose that an ancient and an intermediate-age population co-exist in M32, and that the balance between these two populations changes between the nucleus and outside one effective radius (1 r_eff), in the sense that the contribution from the intermediate-age population is larger in the nuclear region. We retrieve a smaller signal of a young population at all radii, whose origin is unclear and may be contamination from horizontal-branch stars, such as the ones identified by Brown et al. in the nuclear region. We compare our metallicity distribution function for a region 1 to 2 arcmin from the centre to the one obtained with photometric data by Grillmair et al. Both distributions are broad, but our spectroscopically derived distribution has a significant component with [Z/Z⊙] <= -1, which is not found by Grillmair et al.
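The essence of full spectral fitting can be illustrated in a few lines (a toy reconstruction, not the fitting pipeline used in the paper): the observed spectrum is modelled as a non-negative combination of simple-stellar-population templates, and the recovered weights are read as the light fractions of old, intermediate-age and young components. The templates below are fake smooth curves and all numbers are invented.

    import numpy as np
    from scipy.optimize import nnls

    wavelength = np.linspace(4000.0, 7000.0, 500)             # angstroms
    def fake_ssp(slope, bump):
        # Invented "template": a tilted continuum plus a feature near H-beta.
        return (1.0 + slope * (wavelength - 5500.0) / 1500.0
                + bump * np.exp(-0.5 * ((wavelength - 4861.0) / 40.0) ** 2))

    templates = np.column_stack([fake_ssp(0.3, -0.2),          # "old" population
                                 fake_ssp(-0.1, -0.05),        # "intermediate-age"
                                 fake_ssp(-0.5, 0.3)])         # "young"

    true_mix = np.array([0.6, 0.35, 0.05])
    observed = templates @ true_mix + 0.01 * np.random.default_rng(5).standard_normal(500)

    weights, _ = nnls(templates, observed)                     # non-negative least squares
    weights /= weights.sum()
    print("recovered light fractions (old, intermediate, young):", np.round(weights, 3))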