999 results for particle-particle Tamm-Dancoff approximation
Abstract:
The combination of the synthetic minority oversampling technique (SMOTE) and the radial basis function (RBF) classifier is proposed to address two-class imbalanced classification problems. In order to enhance the significance of the small and specific region belonging to the positive class in the decision region, SMOTE is applied to generate synthetic instances for the positive class and thereby balance the training data set. Based on the over-sampled training data, the RBF classifier is constructed by applying the orthogonal forward selection procedure, in which the classifier structure and the parameters of the RBF kernels are determined using a particle swarm optimization algorithm based on the criterion of minimizing the leave-one-out misclassification rate. Experimental results on both simulated and real imbalanced data sets are presented to demonstrate the effectiveness of the proposed algorithm.
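A minimal sketch of the SMOTE-then-RBF idea described above. It uses the imbalanced-learn implementation of SMOTE and swaps the paper's orthogonal-forward-selection RBF classifier tuned by particle swarm optimization for an off-the-shelf RBF-kernel SVM; the data set, class ratio, and parameters below are illustrative assumptions, not values from the paper.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE

# Imbalanced two-class data: roughly 5% positive class.
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Balance the training set only; the test set keeps its natural imbalance.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

# Stand-in RBF classifier trained on the over-sampled data.
clf = SVC(kernel="rbf", gamma="scale").fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te)))
```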
Abstract:
This study focuses on transfer in Brussels French, the variety of French spoken in Brussels. The methodology proposed in Jarvis (2000) and Jarvis and Pavlenko (2009) is followed to demonstrate that grammatical collocations such as chercher après "to look for" are the result of contact with the source language, Brussels Dutch.
Abstract:
We develop a database of 110 gradual solar energetic particle (SEP) events, over the period 1967–2006, providing estimates of event onset, duration, fluence, and peak flux for protons of energy E > 60 MeV. The database is established mainly from the energetic proton flux data distributed in the OMNI 2 data set; however, we also utilize the McMurdo neutron monitor and the energetic proton flux from the GOES missions. To aid the development of the gradual SEP database, we establish a method that improves the homogeneity of the energetic proton flux record. A comparison between other SEP databases and the database developed here is presented, discussing the different algorithms used to define an event. Furthermore, we investigate the variation of gradual SEP occurrence and fluence with solar cycle phase, sunspot number (SSN), and interplanetary magnetic field intensity (Bmag) over solar cycles 20–23. We find that the occurrence and fluence of SEP events vary with the solar cycle phase. Correspondingly, we find a positive correlation between SEP occurrence and solar activity as determined by SSN and Bmag, while the mean fluence of individual events decreases with the same measures of solar activity. Therefore, although the number of events decreases when solar activity is low, the events that do occur at such times have higher fluence. Thus, large events such as the “Carrington flare” may be more likely at lower levels of solar activity. These results are discussed in the context of other similar investigations.
Abstract:
We investigate the spatial characteristics of urban-like canopy flow by applying particle image velocimetry (PIV) to atmospheric turbulence. The study site was a Comprehensive Outdoor Scale MOdel (COSMO) experiment for urban climate in Japan. The PIV system captured the two-dimensional flow field within the canopy layer continuously for an hour with a sampling frequency of 30 Hz, thereby providing reliable outdoor turbulence statistics. PIV measurements in a wind-tunnel facility using similar roughness geometry, but with a lower sampling frequency of 4 Hz, were also performed for comparison. The turbulent momentum flux from COSMO and the wind tunnel showed similar values and distributions when scaled using friction velocity. Some differences between the outdoor and indoor flow fields were mainly caused by the larger fluctuations in wind direction in the atmospheric turbulence. The focus of the analysis is on a variety of instantaneous turbulent flow structures. One remarkable flow structure is termed 'flushing': a large-scale upward motion prevailing across the whole vertical cross-section of a building gap. It is observed intermittently, with tracer particles flushed vertically out of the canopy layer. Flushing phenomena are also observed in the wind tunnel, where there is neither thermal stratification nor outer-layer turbulence. It is suggested that flushing phenomena are correlated with the passing of large-scale low-momentum regions above the canopy.
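A minimal sketch of the flux scaling mentioned above: compute the turbulent momentum flux <u'w'> from a sequence of two-dimensional velocity fields and normalise it by the friction velocity. The synthetic fields and the friction velocity value are placeholders, not COSMO or wind-tunnel data; in practice u* would itself be estimated from measurements above the canopy.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames = 1800                                        # synthetic stand-in for a PIV record
u = 2.0 + rng.normal(0.0, 0.5, (n_frames, 32, 32))     # streamwise velocity [m/s]
w = -0.3 * (u - 2.0) + rng.normal(0.0, 0.2, u.shape)   # vertical velocity, anti-correlated with u

u_fluc = u - u.mean(axis=0)                            # u' = u - time mean
w_fluc = w - w.mean(axis=0)                            # w'
flux = (u_fluc * w_fluc).mean(axis=0)                  # <u'w'> at each grid point

u_star = 0.25                                          # assumed friction velocity [m/s]
print("domain-mean <u'w'>/u*^2:", (flux / u_star**2).mean())
```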
Abstract:
Almost all research fields in geosciences use numerical models and observations and combine these using data-assimilation techniques. With ever-increasing resolution and complexity, the numerical models tend to be highly nonlinear, and the observations too become more complicated and more nonlinearly related to the models. Standard data-assimilation techniques like (ensemble) Kalman filters and variational methods like 4D-Var rely on linearizations and are likely to fail in one way or another. Nonlinear data-assimilation techniques are available, but are only efficient for low-dimensional problems, hampered by the so-called ‘curse of dimensionality’. Here we present a fully nonlinear particle filter that can be applied to higher-dimensional problems by exploiting the freedom of the proposal density inherent in particle filtering. The method is illustrated for the three-dimensional Lorenz model using three particles and the much more complex 40-dimensional Lorenz model using 20 particles. By also applying the method to the 1000-dimensional Lorenz model, again using only 20 particles, we demonstrate the strong scale invariance of the method, leading to the optimistic conjecture that the method is applicable to realistic geophysical problems.
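As a baseline for the scheme described above, here is a compact sketch of a standard sequential importance resampling (SIR) particle filter applied to the three-variable Lorenz (1963) model in a twin experiment. The paper's method goes further by exploiting the freedom in the proposal density; this sketch only shows the plain filter it improves upon, and all noise levels and step sizes are illustrative assumptions.

```python
import numpy as np

def lorenz63_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) equations."""
    dx = np.empty_like(x)
    dx[..., 0] = sigma * (x[..., 1] - x[..., 0])
    dx[..., 1] = x[..., 0] * (rho - x[..., 2]) - x[..., 1]
    dx[..., 2] = x[..., 0] * x[..., 1] - beta * x[..., 2]
    return x + dt * dx

rng = np.random.default_rng(0)
n_particles, obs_err, model_err = 100, 1.0, 0.1

truth = np.array([1.0, 1.0, 25.0])
particles = truth + rng.normal(0.0, 2.0, (n_particles, 3))

for step in range(1, 1001):
    truth = lorenz63_step(truth)
    particles = lorenz63_step(particles) + rng.normal(0.0, model_err, particles.shape)
    if step % 50 == 0:                                   # observe every 50 steps
        y = truth + rng.normal(0.0, obs_err, 3)          # noisy observation of the full state
        log_w = -0.5 * np.sum((particles - y) ** 2, axis=1) / obs_err**2
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
        print(step, np.abs(particles.mean(axis=0) - truth).mean())
```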
Abstract:
A particle filter is a data assimilation scheme that employs a fully nonlinear, non-Gaussian analysis step. Unfortunately, as the size of the state grows, the number of ensemble members required for the particle filter to converge to the true solution increases exponentially. To overcome this, Vaswani [Vaswani N. 2008. IEEE Trans Signal Process 56:4583–97] proposed a new method known as mode tracking to improve the efficiency of the particle filter. When mode tracking, the state is split into two subspaces: one subspace is forecast using the particle filter, while the other is treated so that its values are set equal to the mode of the marginal pdf. There are many ways to split the state. One hypothesis is that the best results should be obtained from the particle filter with mode tracking when the maximum number of unimodal dimensions is mode tracked. The aim of this paper is to test this hypothesis using the three-dimensional stochastic Lorenz equations with direct observations. It is found that mode tracking the maximum number of unimodal dimensions does not always provide the best result. The best choice of states to mode track depends on the number of particles used and on the accuracy and frequency of the observations.
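A heavily simplified sketch of the state split described above, for a three-dimensional state: the dimensions in PF_DIMS are handled by a standard importance-weighting and resampling step, while the dimension in MT_DIMS is collapsed to an estimate of the mode of its marginal pdf, here approximated by a weighted histogram. Both the mode estimator and the way the observation enters are illustrative simplifications, not the formulation of Vaswani (2008) or of the paper.

```python
import numpy as np

PF_DIMS, MT_DIMS = [0, 1], [2]        # which state dimensions to particle filter / mode track

def marginal_mode(values, weights, bins=20):
    """Approximate the mode of a 1-D weighted sample with a histogram."""
    hist, edges = np.histogram(values, bins=bins, weights=weights)
    k = np.argmax(hist)
    return 0.5 * (edges[k] + edges[k + 1])

def analysis_step(particles, y, obs_err, rng):
    # Importance weights from the observations of the particle-filtered dimensions.
    d = particles[:, PF_DIMS] - y[PF_DIMS]
    log_w = -0.5 * np.sum(d**2, axis=1) / obs_err**2
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    out = particles[rng.choice(len(particles), len(particles), p=w)]
    # Mode tracking: replace the remaining dimension(s) by the marginal mode.
    for j in MT_DIMS:
        out[:, j] = marginal_mode(particles[:, j], w)
    return out

# Toy usage with a synthetic ensemble and one observation of the full state.
rng = np.random.default_rng(0)
ens = rng.normal(0.0, 1.0, (200, 3))
obs = np.array([0.5, -0.2, 0.1])
print(analysis_step(ens, obs, obs_err=0.5, rng=rng).mean(axis=0))
```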
Abstract:
In this paper a new system identification algorithm is introduced for Hammerstein systems based on observational input/output data. The nonlinear static function in the Hammerstein system is modelled using a non-uniform rational B-spline (NURB) neural network. The proposed system identification algorithm for this NURB-network-based Hammerstein system consists of two successive stages. First, the shaping parameters of the NURB network are estimated using a particle swarm optimization (PSO) procedure. Then the remaining parameters are estimated using singular value decomposition (SVD). Numerical examples, including a model-based controller, are used to demonstrate the efficacy of the proposed approach. The controller computes the inverse of the nonlinear static function approximated by the NURB network, followed by a linear pole-assignment controller.
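A minimal sketch of the two-stage idea: a Hammerstein system (a static nonlinearity followed by linear dynamics) is identified by (1) searching over the shape of the nonlinearity and (2) solving for the remaining linear parameters with an SVD-based least-squares fit. The NURB network and the particle swarm optimizer of the paper are replaced here by a tanh nonlinearity with a single shaping parameter and a coarse random search, so everything below is an illustrative stand-in rather than the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated Hammerstein data: v(k) = tanh(a*u(k)), y(k) = 0.6*y(k-1) + v(k-1) + noise.
u = rng.uniform(-2.0, 2.0, 500)
a_true = 1.5
v = np.tanh(a_true * u)
y = np.zeros_like(u)
for k in range(1, len(u)):
    y[k] = 0.6 * y[k - 1] + v[k - 1] + 0.01 * rng.normal()

def fit_linear_given_shape(a):
    """Stage 2: with the nonlinearity shape fixed, fit the linear parameters by
    least squares (np.linalg.lstsq solves this through an SVD)."""
    vk = np.tanh(a * u)
    X = np.column_stack([y[:-1], vk[:-1]])            # regressors: y(k-1), v(k-1)
    theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return theta, np.mean((y[1:] - X @ theta) ** 2)

# Stage 1: crude random search over the shaping parameter (stand-in for PSO).
candidates = rng.uniform(0.5, 3.0, 50)
best_a = min(candidates, key=lambda a: fit_linear_given_shape(a)[1])
print("estimated shaping parameter:", round(best_a, 3), "(true value 1.5)")
```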
Abstract:
We present a novel kinetic multi-layer model for gas-particle interactions in aerosols and clouds (KM-GAP) that treats explicitly all steps of mass transport and chemical reaction of semi-volatile species partitioning between gas phase, particle surface and particle bulk. KM-GAP is based on the PRA model framework (Pöschl-Rudich-Ammann, 2007), and it includes gas phase diffusion, reversible adsorption, surface reactions, bulk diffusion and reaction, as well as condensation, evaporation and heat transfer. The size change of atmospheric particles and the temporal evolution and spatial profile of the concentration of individual chemical species can be modeled along with gas uptake and accommodation coefficients. Depending on the complexity of the investigated system and the computational constraints, unlimited numbers of semi-volatile species, chemical reactions, and physical processes can be treated, and the model shall help to bridge gaps in the understanding and quantification of multiphase chemistry and microphysics in atmospheric aerosols and clouds. In this study we demonstrate how KM-GAP can be used to analyze, interpret and design experimental investigations of changes in particle size and chemical composition in response to condensation, evaporation, and chemical reaction. For the condensational growth of water droplets, our kinetic model results provide a direct link between laboratory observations and molecular dynamics simulations, confirming that the accommodation coefficient of water at 270 K is close to unity (Winkler et al., 2006). Literature data on the evaporation of dioctyl phthalate as a function of particle size and time can be reproduced, and the model results suggest that changes in the experimental conditions like aerosol particle concentration and chamber geometry may influence the evaporation kinetics and can be optimized for efficient probing of specific physical effects and parameters. With regard to oxidative aging of organic aerosol particles, we illustrate how the formation and evaporation of volatile reaction products like nonanal can cause a decrease in the size of oleic acid particles exposed to ozone.
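KM-GAP resolves mass transport layer by layer; as a much simpler point of reference, here is a single-compartment sketch of the condensational droplet growth the abstract discusses, using a diffusive flux with a Fuchs-Sutugin transition-regime correction that depends on the accommodation coefficient alpha. This is not the KM-GAP formulation, and all numerical values are round illustrative assumptions rather than KM-GAP inputs or results.

```python
import numpy as np

D_g   = 2.5e-5     # gas-phase diffusivity of water vapour [m^2/s]
v_th  = 565.0      # mean thermal speed of water molecules [m/s], roughly at 270 K
rho_w = 1000.0     # liquid water density [kg/m^3]
c_exc = 1.0e-3     # excess vapour mass concentration over the surface value [kg/m^3]
alpha = 1.0        # mass accommodation coefficient (close to unity per the study)

def growth_rate(r):
    """dr/dt [m/s] for a droplet of radius r [m] under condensational growth."""
    kn = 3.0 * D_g / (v_th * r)                            # Knudsen number
    beta = (1.0 + kn) / (1.0 + (4.0 / (3.0 * alpha) + 0.377) * kn
                         + 4.0 * kn**2 / (3.0 * alpha))    # Fuchs-Sutugin correction
    flux = 4.0 * np.pi * r * D_g * c_exc * beta            # condensing mass flux [kg/s]
    return flux / (4.0 * np.pi * r**2 * rho_w)

r, dt = 50e-9, 1e-5                                        # initial radius [m], time step [s]
for _ in range(20000):                                     # integrate for 0.2 s
    r += growth_rate(r) * dt
print("final radius [micrometres]:", round(r * 1e6, 2))
```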
Abstract:
The assumed relationship between ice particle mass and size is profoundly important in radar retrievals of ice clouds, but, for millimeter-wave radars, shape and preferred orientation are important as well. In this paper the authors first examine the consequences of the fact that the widely used "Brown and Francis" mass–size relationship has often been applied to the maximum particle dimension observed by aircraft, Dmax, rather than to the mean of the particle dimensions in two orthogonal directions, Dmean, which was originally used by Brown and Francis. Analysis of particle images reveals that Dmax ≈ 1.25 Dmean, and therefore, for clouds for which this mass–size relationship holds, the consequences are overestimates of ice water content by around 53% and of Rayleigh-scattering radar reflectivity factor by 3.7 dB. Simultaneous radar and aircraft measurements demonstrate that much better agreement in reflectivity factor is obtained by using this mass–size relationship with Dmean. The authors then examine the importance of particle shape and fall orientation for millimeter-wave radars. Simultaneous radar measurements and aircraft calculations of differential reflectivity and dual-wavelength ratio are presented to demonstrate that ice particles may usually be treated as horizontally aligned oblate spheroids with an axial ratio of 0.6, consistent with them being aggregates. An accurate formula is presented for the backscatter cross section apparent to a vertically pointing millimeter-wave radar on the basis of a modified version of Rayleigh–Gans theory. It is then shown that the consequence of treating ice particles as Mie-scattering spheres is to substantially underestimate the millimeter-wave reflectivity factor when millimeter-sized particles are present, which can lead to retrieved ice water content being overestimated by a factor of 4.
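A quick arithmetic check of the two overestimates quoted above, assuming the commonly used Brown and Francis exponent m ~ D^1.9 for the mass-size relation and that the Rayleigh-scattering reflectivity factor scales as mass squared (both assumptions come from the wider literature, not from numbers stated in this abstract).

```python
import numpy as np

ratio = 1.25          # Dmax is about 1.25 times Dmean, from the particle images
b = 1.9               # assumed Brown and Francis mass-size exponent: m ~ D**b

mass_factor = ratio**b                          # overestimate of mass / ice water content
z_factor_db = 10 * np.log10(ratio**(2 * b))     # reflectivity factor scales as mass**2

print(f"ice water content overestimate: {100 * (mass_factor - 1):.0f}%")   # ~53%
print(f"reflectivity factor overestimate: {z_factor_db:.1f} dB")           # ~3.7 dB
```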
Abstract:
A novel two-stage construction algorithm for linear-in-the-parameters classifiers is proposed, aimed at noisy two-class classification problems. The purpose of the first stage is to produce a prefiltered signal that is used as the desired output for the second stage, which constructs a sparse linear-in-the-parameters classifier. For the first-stage learning that generates the prefiltered signal, a two-level algorithm is introduced to maximise the model's generalisation capability: an elastic net model identification algorithm using singular value decomposition is employed at the lower level, while the two regularisation parameters are selected at the upper level by maximising the Bayesian evidence using a particle swarm optimization algorithm. Analysis is provided to demonstrate how “Occam's razor” is embodied in this approach. The second stage of sparse classifier construction is based on orthogonal forward regression with the D-optimality algorithm. Extensive experimental results demonstrate that the proposed approach is effective and yields competitive results for noisy data sets.
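A rough sketch of the two-stage construction for a noisy two-class problem. Stage 1 fits an elastic net to the class labels to produce a prefiltered target signal, with the two regularisation parameters chosen here by cross-validation (a stand-in for the Bayesian-evidence criterion searched by PSO in the paper). Stage 2 fits a sparse linear-in-the-parameters classifier to that prefiltered signal using orthogonal matching pursuit (a stand-in for orthogonal forward regression with D-optimality). Everything here is an illustrative substitution, not the paper's algorithm.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import ElasticNetCV, OrthogonalMatchingPursuit

# Noisy two-class data (15% of the labels flipped).
X, y = make_classification(n_samples=600, n_features=30, flip_y=0.15, random_state=0)
t = 2.0 * y - 1.0                                  # regression targets in {-1, +1}

# Stage 1: prefiltered signal from an elastic net regression on the labels,
# with the regularisation settings selected by cross-validation.
enet = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5).fit(X, t)
t_prefiltered = enet.predict(X)

# Stage 2: sparse classifier fitted to the prefiltered signal.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=8).fit(X, t_prefiltered)
y_hat = (omp.predict(X) > 0.0).astype(int)
print("training accuracy:", (y_hat == y).mean())
```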
Abstract:
The Eyjafjallajökull volcano in Iceland emitted a cloud of ash into the atmosphere during April and May 2010. Over the UK the ash cloud was observed by the FAAM BAe-146 Atmospheric Research Aircraft, which was equipped with in-situ probes measuring the concentration of volcanic ash carried by particles of varying sizes. The UK Met Office Numerical Atmospheric-dispersion Modelling Environment (NAME) has been used to simulate the evolution of the ash cloud emitted by the Eyjafjallajökull volcano during the period 4–18 May 2010. In the NAME simulations the processes controlling the evolution of the concentration and particle size distribution include sedimentation and deposition of particles, horizontal dispersion and vertical wind shear. For travel times between 24 and 72 h, a 1/t relationship describes the evolution of the concentration at the centre of the ash cloud, and the particle size distribution remains fairly constant. Although NAME does not represent the effects of microphysical processes, it can capture the observed decrease in concentration with travel time in this period. This suggests that, for this eruption, microphysical processes play a small role in determining the evolution of the distal ash cloud. Quantitative comparison with observations shows that NAME can simulate the observed column-integrated mass if around 4% of the total emitted mass is assumed to be transported as far as the UK by small particles (< 30 μm diameter). NAME can also simulate the observed particle size distribution if a distal particle size distribution that contains a large fraction of < 10 μm diameter particles is used, consistent with the idea that phreatomagmatic volcanoes, such as Eyjafjallajökull, emit very fine particles.
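One simple way to rationalise the 1/t behaviour noted above: a puff released into a vertically sheared horizontal wind is stretched so that its along-wind extent grows roughly linearly with travel time, so the concentration at the centre of a fixed mass of ash falls off roughly as 1/t. The shear, layer depth and initial extent below are arbitrary illustrative numbers, and this toy ignores sedimentation and deposition.

```python
import numpy as np

shear = 2.0e-3                               # vertical shear of the horizontal wind [1/s]
depth = 1000.0                               # vertical extent of the ash layer [m]
width0 = 1.0e4                               # initial along-wind extent of the puff [m]
mass = 1.0                                   # normalised total mass

t = np.array([24.0, 48.0, 72.0]) * 3600.0    # travel times [s]
width = width0 + shear * depth * t           # along-wind stretching by the shear
centre_conc = mass / width                   # concentration scale at the puff centre

print(centre_conc / centre_conc[0])          # ~[1.00, 0.51, 0.35], close to t0/t
```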
Abstract:
Particle filters are fully non-linear data assimilation techniques that aim to represent the probability distribution of the model state given the observations (the posterior) by a number of particles. In high-dimensional geophysical applications, the number of particles required by the sequential importance resampling (SIR) particle filter to capture the high-probability region of the posterior is too large for the filter to be usable. However, particle filters can be formulated using proposal densities, which gives greater freedom in how particles are sampled and allows for a much smaller number of particles. Here a particle filter is presented which uses the proposal density to ensure that all particles end up in the high-probability region of the posterior probability density function. This gives rise to the possibility of non-linear data assimilation in high-dimensional systems. The particle filter formulation is compared to the optimal proposal density particle filter and the implicit particle filter, both of which also utilise a proposal density. We show that, when observations are available at every time step, both schemes become degenerate when the number of independent observations is large, unlike the new scheme. The sensitivity of the new scheme to its parameter values is explored theoretically and demonstrated using the Lorenz (1963) model.
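A compact sketch of the ingredient the scheme above exploits: a proposal density that steers particles toward the observations. Each particle is propagated with a relaxation term pulling it toward the next observation, and the importance weights are corrected by the ratio of the model transition density to the proposal density. This shows only a basic relaxation proposal on the Lorenz (1963) model, not the full scheme of the paper; the noise levels and relaxation strength are illustrative assumptions.

```python
import numpy as np

def lorenz63(x, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    dx = np.stack([s * (x[..., 1] - x[..., 0]),
                   x[..., 0] * (r - x[..., 2]) - x[..., 1],
                   x[..., 0] * x[..., 1] - b * x[..., 2]], axis=-1)
    return x + dt * dx

rng = np.random.default_rng(1)
n, q, r_obs, tau = 50, 0.1, 1.0, 0.2          # particles, model noise, obs noise, relaxation

truth = np.array([1.0, 1.0, 25.0])
parts = truth + rng.normal(0.0, 2.0, (n, 3))
log_w = np.zeros(n)

for k in range(500):
    truth = lorenz63(truth)
    y = truth + rng.normal(0.0, r_obs, 3)      # observation of the full state, every step
    det = lorenz63(parts)                      # deterministic forecast of each particle
    prop_mean = det + tau * (y - det)          # proposal mean: relax toward the observation
    parts = prop_mean + rng.normal(0.0, q, parts.shape)
    # Weight update: likelihood * transition density / proposal density.
    log_w += (-0.5 * np.sum((y - parts) ** 2, axis=1) / r_obs**2
              - 0.5 * np.sum((parts - det) ** 2, axis=1) / q**2
              + 0.5 * np.sum((parts - prop_mean) ** 2, axis=1) / q**2)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    if 1.0 / np.sum(w**2) < n / 2:             # resample when the effective size is low
        parts = parts[rng.choice(n, n, p=w)]
        log_w = np.zeros(n)

w = np.exp(log_w - log_w.max())
w /= w.sum()
print("final analysis error:", np.abs(np.average(parts, axis=0, weights=w) - truth))
```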