899 results for particle trajectory computation
Abstract:
The combination of the synthetic minority oversampling technique (SMOTE) and the radial basis function (RBF) classifier is proposed to deal with classification for imbalanced two-class data. In order to enhance the significance of the small and specific region belonging to the positive class in the decision region, the SMOTE is applied to generate synthetic instances for the positive class to balance the training data set. Based on the over-sampled training data, the RBF classifier is constructed by applying the orthogonal forward selection procedure, in which the classifier structure and the parameters of RBF kernels are determined using a particle swarm optimization algorithm based on the criterion of minimizing the leave-one-out misclassification rate. The experimental results on both simulated and real imbalanced data sets are presented to demonstrate the effectiveness of our proposed algorithm.
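The oversampling step can be sketched as follows. This is a minimal, generic SMOTE in numpy, not the authors' implementation; the neighbour count `k` and the brute-force distance computation are illustrative choices:

```python
import numpy as np

def smote(X_min, n_synth, k=5, rng=None):
    """Generate n_synth synthetic minority-class samples by interpolating
    between each minority sample and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise distances within the minority class (brute force, for clarity)
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                # a point is not its own neighbour
    nn = np.argsort(d, axis=1)[:, :k]          # k nearest neighbours per sample
    out = np.empty((n_synth, X_min.shape[1]))
    for i in range(n_synth):
        j = rng.integers(n)                    # pick a minority sample at random
        nb = X_min[nn[j, rng.integers(k)]]     # one of its k neighbours
        lam = rng.random()                     # interpolation factor in [0, 1)
        out[i] = X_min[j] + lam * (nb - X_min[j])
    return out
```

Each synthetic point lies on the segment between two existing minority samples, so the positive-class region is filled in rather than merely replicated.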
Abstract:
In this study the focus is on transfer in Brussels French, the variety of French spoken in Brussels. The methodology proposed in Jarvis (2000) and Jarvis and Pavlenko (2009) is followed to demonstrate that grammatical collocations such as chercher après "to look for" are the result of contact with the source language, Brussels Dutch.
Abstract:
We develop a database of 110 gradual solar energetic particle (SEP) events, over the period 1967–2006, providing estimates of event onset, duration, fluence, and peak flux for protons of energy E > 60 MeV. The database is established mainly from the energetic proton flux data distributed in the OMNI 2 data set; however, we also utilize the McMurdo neutron monitor and the energetic proton flux from GOES missions. To aid the development of the gradual SEP database, we establish a method with which the homogeneity of the energetic proton flux record is improved. A comparison between other SEP databases and the database developed here is presented, discussing the different algorithms used to define an event. Furthermore, we investigate the variation of gradual SEP occurrence and fluence with solar cycle phase, sunspot number (SSN), and interplanetary magnetic field intensity (Bmag) over solar cycles 20–23. We find that the occurrence and fluence of SEP events vary with the solar cycle phase. Correspondingly, we find a positive correlation between SEP occurrence and solar activity as determined by SSN and Bmag, while the mean fluence in individual events decreases with the same measures of solar activity. Therefore, although the number of events decreases when solar activity is low, the events that do occur at such times have higher fluence. Thus, large events such as the “Carrington flare” may be more likely at lower levels of solar activity. These results are discussed in the context of other similar investigations.
Abstract:
The authors consider the problem of a robot manipulator operating in a noisy workspace. The manipulator is required to move from an initial position P(i) to a final position P(f). P(i) is assumed to be completely defined. However, P(f) is obtained by a sensing operation and is assumed to be fixed but unknown. The authors' approach to this problem involves the use of three learning algorithms: the discretized linear reward-penalty (DLR-P) automaton, the linear reward-penalty (LR-P) automaton, and a nonlinear reinforcement scheme. An automaton placed at each joint of the robot acts as a decision maker and plans the trajectory based on noisy measurements of P(f).
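A single update of the linear reward-penalty scheme the authors build on can be sketched as below; this is the generic LR-P probability update, with illustrative learning constants `a` and `b` rather than the authors' exact parameterization:

```python
import numpy as np

def lrp_update(p, action, rewarded, a=0.1, b=0.1):
    """One linear reward-penalty (LR-P) update of the automaton's
    action-probability vector p after taking `action`."""
    p = np.asarray(p, float).copy()
    r = len(p)
    others = np.arange(r) != action
    if rewarded:
        p[action] += a * (1.0 - p[action])     # reinforce the chosen action
        p[others] *= (1.0 - a)                 # shrink the rest proportionally
    else:
        p[action] *= (1.0 - b)                 # penalise the chosen action
        p[others] = b / (r - 1) + (1.0 - b) * p[others]
    return p                                   # still a probability vector
```

Repeated updates from noisy reward signals let the automaton at each joint concentrate probability on the action that best moves the joint toward P(f).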
Abstract:
We investigate the spatial characteristics of urban-like canopy flow by applying particle image velocimetry (PIV) to atmospheric turbulence. The study site was a Comprehensive Outdoor Scale MOdel (COSMO) experiment for urban climate in Japan. The PIV system captured the two-dimensional flow field within the canopy layer continuously for an hour with a sampling frequency of 30 Hz, thereby providing reliable outdoor turbulence statistics. PIV measurements in a wind-tunnel facility using similar roughness geometry, but with a lower sampling frequency of 4 Hz, were also performed for comparison. The turbulent momentum fluxes from COSMO and the wind tunnel showed similar values and distributions when scaled by friction velocity. The differences between the outdoor and indoor flow fields were mainly caused by the larger fluctuations in wind direction in the atmospheric turbulence. The focus of the analysis is on a variety of instantaneous turbulent flow structures. One remarkable flow structure is termed 'flushing': a large-scale upward motion prevailing across the whole vertical cross-section of a building gap. It is observed intermittently, whereby tracer particles are flushed vertically out of the canopy layer. Flushing phenomena are also observed in the wind tunnel, where there is neither thermal stratification nor outer-layer turbulence. It is suggested that flushing phenomena are correlated with the passage of large-scale low-momentum regions above the canopy.
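The core PIV operation, estimating displacement from the cross-correlation peak of two interrogation windows, can be sketched as below. This is a generic FFT-based correlator in numpy; the window sizes and any sub-pixel peak refinement used in the actual system are not reproduced:

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the mean particle displacement between two interrogation
    windows by locating the peak of their circular cross-correlation."""
    a = win_a - win_a.mean()                   # remove the mean intensity
    b = win_b - win_b.mean()
    # cross-correlation via the FFT: corr(m) = sum_n a(n) b(n + m)
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    # wrap FFT indices to signed displacements (dy, dx)
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)
```

Applying this to a grid of windows across consecutive frames yields the two-dimensional velocity field analysed in the study.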
Abstract:
Almost all research fields in geosciences use numerical models and observations and combine these using data-assimilation techniques. With ever-increasing resolution and complexity, the numerical models tend to be highly nonlinear and also observations become more complicated and their relation to the models more nonlinear. Standard data-assimilation techniques like (ensemble) Kalman filters and variational methods like 4D-Var rely on linearizations and are likely to fail in one way or another. Nonlinear data-assimilation techniques are available, but are only efficient for small-dimensional problems, hampered by the so-called ‘curse of dimensionality’. Here we present a fully nonlinear particle filter that can be applied to higher dimensional problems by exploiting the freedom of the proposal density inherent in particle filtering. The method is illustrated for the three-dimensional Lorenz model using three particles and the much more complex 40-dimensional Lorenz model using 20 particles. By also applying the method to the 1000-dimensional Lorenz model, again using only 20 particles, we demonstrate the strong scale-invariance of the method, leading to the optimistic conjecture that the method is applicable to realistic geophysical problems.
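For readers unfamiliar with particle filtering, a minimal bootstrap filter can be sketched as below. It uses the model transition itself as the proposal density; the paper's contribution is exploiting the freedom to choose a much better proposal. The scalar test model and all noise levels here are illustrative:

```python
import numpy as np

def bootstrap_pf(y_obs, n_part, f, h, q_std, r_std, x0, rng=None):
    """Bootstrap particle filter: propagate, weight by likelihood, resample.
    f is the model transition, h the observation operator (scalar state)."""
    rng = np.random.default_rng(rng)
    x = np.full(n_part, float(x0))
    means = []
    for y in y_obs:
        x = f(x) + rng.normal(0.0, q_std, n_part)       # propagate the ensemble
        w = np.exp(-0.5 * ((y - h(x)) / r_std) ** 2)    # Gaussian likelihood
        w /= w.sum()
        means.append(float(np.dot(w, x)))               # weighted posterior mean
        x = x[rng.choice(n_part, n_part, p=w)]          # multinomial resampling
    return np.array(means)
```

With this naive proposal the required ensemble size grows rapidly with the state dimension, which is exactly the degeneracy the paper's proposal-density construction avoids.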
Abstract:
A particle filter is a data assimilation scheme that employs a fully nonlinear, non-Gaussian analysis step. Unfortunately, as the size of the state grows, the number of ensemble members required for the particle filter to converge to the true solution increases exponentially. To overcome this, Vaswani [Vaswani N. 2008. IEEE Trans Signal Process 56:4583–97] proposed a new method known as mode tracking to improve the efficiency of the particle filter. When mode tracking, the state is split into two subspaces. One subspace is forecast using the particle filter, while the other is treated so that its values are set equal to the mode of the marginal pdf. There are many ways to split the state. One hypothesis is that the best results should be obtained from the particle filter with mode tracking when we mode track the maximum number of unimodal dimensions. The aim of this paper is to test this hypothesis using the three-dimensional stochastic Lorenz equations with direct observations. It is found that mode tracking the maximum number of unimodal dimensions does not always provide the best result. The best choice of states to mode track depends on the number of particles used and the accuracy and frequency of the observations.
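The test system, the stochastic Lorenz equations, can be integrated with a simple Euler–Maruyama step as sketched below. The deterministic parameters are the classical Lorenz-63 values; the noise amplitude and step size are illustrative, not those of the paper:

```python
import numpy as np

def lorenz63_step(state, dt=0.01, q_std=0.1,
                  sigma=10.0, rho=28.0, beta=8.0 / 3.0, rng=None):
    """One Euler–Maruyama step of the stochastic Lorenz-63 equations
    with additive model noise."""
    rng = np.random.default_rng(rng)
    x, y, z = state
    drift = np.array([sigma * (y - x),          # dx/dt
                      x * (rho - z) - y,        # dy/dt
                      x * y - beta * z])        # dz/dt
    return state + dt * drift + np.sqrt(dt) * q_std * rng.normal(size=3)
```

Running such a stochastic trajectory as truth and assimilating noisy direct observations of its components is the standard twin-experiment setup for comparing mode-tracking choices.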
Abstract:
Although the role of the academic head of department (HoD) has always been important to university management and performance, an increasing significance given to bureaucracy, academic performance and productivity, and government accountability has greatly elevated the importance of this position. Previous research and anecdotal evidence suggest that as academics move into HoD roles, usually with little or no training, they struggle to adequately manage key aspects of their role. It is this problem – and its manifestations – that forms the research focus of this study. Based on the research question, “What are the career trajectories of academics who become HoDs in a selected post-1992 university?” the study aimed to achieve greater understanding of why academics become HoDs, what it is like being a HoD, and how the experience influences their future career plans. The study adopts an interpretive approach, in line with social constructivism. Edited topical life history interviews were undertaken with 17 male and female HoDs, from a range of disciplines, in a post-1992 UK university. These data were analysed using coding, categorisation and theme formation techniques and developing profiles of each of the respondents. The findings from this study suggest that academics who become HoDs not only need the capacity to assume a range of personal and professional identities, but need to regularly adopt and switch between them. Whether individuals can successfully balance and manage these multiple identities, or whether they experience major conflicts and difficulties within or between them, greatly affects their experiences of being a HoD and may influence their subsequent career decisions. It is claimed that the focus, approach and analytical framework - based on the interrelationships between the concepts of socialisation, identity and career trajectory - provide a distinct and original contribution to knowledge in this area.
Although the results of this study cannot be generalised, the findings may help other individuals and institutions move towards a firmer understanding of the academic who becomes HoD - in relation to theory, practice and future research.
Abstract:
The real-time parallel computation of histograms using an array of pipelined cells is proposed and prototyped in this paper with application to consumer imaging products. The array operates in two modes: histogram computation and histogram reading. The proposed parallel computation method does not use any memory blocks. The resulting histogram bins can be stored into an external memory block in a pipelined fashion for subsequent reading or streaming of the results. The array of cells can be tuned to accommodate the required data path width in a VLSI image processing engine as present in many imaging consumer devices. Synthesis of the architectures presented in this paper on an FPGA shows that the real-time histogram of images streamed at over 36 megapixels at 30 frames/s can be computed by processing 1, 2 or 4 pixels per clock cycle in parallel.
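In software terms, the computation mode of the cell array behaves like the sketch below: one counter register per bin and a fixed number of pixel lanes consumed per "clock cycle". The hardware compares all lanes against all cells in parallel; this sequential model is only an illustration of the result it produces:

```python
def histogram_array(pixels, n_bins, lanes=4):
    """Software model of the pipelined cell array: one counter per bin,
    `lanes` pixels consumed per cycle (1, 2 or 4 in the paper)."""
    counts = [0] * n_bins                      # one register per cell, no RAM block
    for i in range(0, len(pixels), lanes):     # one iteration = one clock cycle
        for p in pixels[i:i + lanes]:          # the lanes handled that cycle
            counts[p] += 1                     # matching cell increments
    return counts
```

In reading mode the hardware then shifts these per-cell counts out to external memory in a pipelined fashion, which the model's returned list stands in for.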
Abstract:
We present a novel kinetic multi-layer model for gas-particle interactions in aerosols and clouds (KM-GAP) that treats explicitly all steps of mass transport and chemical reaction of semi-volatile species partitioning between gas phase, particle surface and particle bulk. KM-GAP is based on the PRA model framework (Pöschl-Rudich-Ammann, 2007), and it includes gas phase diffusion, reversible adsorption, surface reactions, bulk diffusion and reaction, as well as condensation, evaporation and heat transfer. The size change of atmospheric particles and the temporal evolution and spatial profile of the concentration of individual chemical species can be modelled along with gas uptake and accommodation coefficients. Depending on the complexity of the investigated system, unlimited numbers of semi-volatile species, chemical reactions, and physical processes can be treated, and the model shall help to bridge gaps in the understanding and quantification of multiphase chemistry and microphysics in atmospheric aerosols and clouds. In this study we demonstrate how KM-GAP can be used to analyze, interpret and design experimental investigations of changes in particle size and chemical composition in response to condensation, evaporation, and chemical reaction. For the condensational growth of water droplets, our kinetic model results provide a direct link between laboratory observations and molecular dynamic simulations, confirming that the accommodation coefficient of water at 270 K is close to unity. Literature data on the evaporation of dioctyl phthalate as a function of particle size and time can be reproduced, and the model results suggest that changes in the experimental conditions like aerosol particle concentration and chamber geometry may influence the evaporation kinetics and can be optimized for efficient probing of specific physical effects and parameters.
With regard to oxidative aging of organic aerosol particles, we illustrate how the formation and evaporation of volatile reaction products like nonanal can cause a decrease in the size of oleic acid particles exposed to ozone.
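A heavily simplified, gas-diffusion-limited version of the condensational growth and evaporation that KM-GAP resolves in full can be sketched as below. Only the continuum-regime flux is kept; accommodation, surface and bulk layers, and heat transfer are omitted, and all constants are illustrative SI values, not taken from the paper:

```python
import numpy as np

def grow_droplet(r0, c_gas, c_sat, D=2.5e-5, rho=1000.0, dt=1e-4, steps=1000):
    """Continuum-regime growth sketch: the mass flux to a droplet of radius r
    is 4*pi*r*D*(c_gas - c_sat), with vapour concentrations in kg/m^3."""
    r = r0
    for _ in range(steps):
        dmdt = 4.0 * np.pi * r * D * (c_gas - c_sat)      # mass flux, kg/s
        # update radius from the new droplet mass (density rho)
        r = (r**3 + 3.0 * dmdt * dt / (4.0 * np.pi * rho)) ** (1.0 / 3.0)
    return r
```

A supersaturated gas phase (c_gas > c_sat) grows the droplet and an undersaturated one shrinks it, the two regimes the model paper analyses for water uptake and dioctyl phthalate evaporation respectively.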
Abstract:
In this paper a new system identification algorithm is introduced for Hammerstein systems based on observational input/output data. The nonlinear static function in the Hammerstein system is modelled using a non-uniform rational B-spline (NURB) neural network. The proposed system identification algorithm for this NURB-network-based Hammerstein system consists of two successive stages. First, the shaping parameters of the NURB network are estimated using a particle swarm optimization (PSO) procedure. Then the remaining parameters are estimated by singular value decomposition (SVD). Numerical examples, including a model-based controller, are utilized to demonstrate the efficacy of the proposed approach. The controller computes the inverse of the nonlinear static function approximated by the NURB network, followed by a linear pole assignment controller.
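The first stage relies on particle swarm optimization. A minimal generic PSO in the standard inertia/cognitive/social form is sketched below; it stands in for the stage that tunes the NURB shaping parameters, but the cost function, constants, and variant used by the authors are not reproduced:

```python
import numpy as np

def pso(cost, bounds, n_part=30, iters=100, w=0.7, c1=1.5, c2=1.5, rng=None):
    """Minimal particle swarm optimizer minimizing `cost` over a box."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, (n_part, len(lo)))      # particle positions
    v = np.zeros_like(x)                            # particle velocities
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()].copy()                # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                  # keep particles in the box
        c = np.array([cost(p) for p in x])
        better = c < pcost                          # update personal bests
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()].copy()            # update global best
    return g, pcost.min()
```

In the paper's setting the cost would be a fit criterion for the NURB shaping parameters; the test below simply minimizes a sphere function.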
Abstract:
This document contains a report on the work done under the ESA/Ariadna study 06/4101 on the global optimization of space trajectories with multiple gravity assist (GA) and deep space manoeuvres (DSM). The study was performed by a joint team of scientists from the University of Reading and the University of Glasgow.
Abstract:
A direct method is presented for determining the uncertainty in reservoir pressure, flow, and net present value (NPV) using the time-dependent, one phase, two- or three-dimensional equations of flow through a porous medium. The uncertainty in the solution is modelled as a probability distribution function and is computed from given statistical data for input parameters such as permeability. The method generates an expansion for the mean of the pressure about a deterministic solution to the system equations using a perturbation to the mean of the input parameters. Hierarchical equations that define approximations to the mean solution at each point and to the field covariance of the pressure are developed and solved numerically. The procedure is then used to find the statistics of the flow and the risked value of the field, defined by the NPV, for a given development scenario. This method involves only one (albeit complicated) solution of the equations and contrasts with the more usual Monte-Carlo approach where many such solutions are required. The procedure is applied easily to other physical systems modelled by linear or nonlinear partial differential equations with uncertain data.
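The central idea, expanding the solution about the deterministic run so that moments come from one solve instead of many Monte-Carlo solves, can be sketched for a scalar response as below. This is a generic second-order expansion, not the paper's hierarchical field equations; the response function and parameters are illustrative:

```python
def mean_second_order(f, mu, sigma, h=1e-3):
    """Second-order perturbation estimate of E[f(K)] for K with mean mu and
    standard deviation sigma: E[f] ~ f(mu) + 0.5 * f''(mu) * sigma**2.
    One deterministic evaluation plus a finite-difference curvature."""
    f0 = f(mu)
    d2 = (f(mu + h) - 2.0 * f0 + f(mu - h)) / h**2   # central second difference
    return f0 + 0.5 * d2 * sigma**2
```

For f(k) = 1/k (a toy pressure-permeability relation) with mu = 2 and sigma = 0.1, the estimate is 0.5 + 0.5 * (2/8) * 0.01 = 0.50125, matching the second-order Taylor value a Monte-Carlo run would approach; in the paper the same expansion is carried out on the full porous-medium flow equations.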