899 results for particle trajectory computation
Abstract:
This paper extends the singular value decomposition to a path of matrices E(t). An analytic singular value decomposition of a path of matrices E(t) is an analytic path of factorizations E(t) = X(t)S(t)Y(t)^T where X(t) and Y(t) are orthogonal and S(t) is diagonal. To maintain differentiability, the diagonal entries of S(t) are allowed to be either positive or negative and to appear in any order. This paper investigates the existence and uniqueness of analytic SVDs and develops an algorithm for computing them. We show that a real analytic path E(t) always admits a real analytic SVD, and that a full-rank, smooth path E(t) with distinct singular values admits a smooth SVD. We derive a differential equation for the left factor, develop Euler-like and extrapolated Euler-like numerical methods for approximating an analytic SVD, and prove that the Euler-like method converges.
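A minimal sketch (Python/NumPy, not taken from the paper) of the basic idea: compute pointwise SVDs along a discretized path and flip signs so the left factor varies smoothly, which is only possible because the diagonal factor is allowed to carry negative entries. The paper's actual method instead integrates a differential equation for the left factor with Euler-like schemes.

import numpy as np

def svd_along_path(E_path):
    """Factor each matrix on a discretized path E(t_0), E(t_1), ... as
    E = X diag(s) Y^T, flipping signs so the left factor varies smoothly.
    A naive post-processing heuristic for illustration only; the paper derives
    a differential equation for X(t) and integrates it with Euler-like methods."""
    Xs, ss, Ys = [], [], []
    X_prev = None
    for E in E_path:
        X, s, Yt = np.linalg.svd(E)
        if X_prev is not None:
            # Align each left singular vector with its predecessor.  Moving the
            # sign flip into s keeps E = X diag(s) Y^T exact and relies on the
            # analytic SVD allowing negative diagonal entries.
            signs = np.sign(np.sum(X_prev * X, axis=0))
            signs[signs == 0] = 1.0
            X = X * signs
            s = s * signs[: len(s)]
        Xs.append(X); ss.append(s); Ys.append(Yt.T)
        X_prev = X
    return Xs, ss, Ys

# Example: a smooth 3x3 path E(t) = A + t*B sampled at 50 points.
rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
Xs, ss, Ys = svd_along_path([A + t * B for t in np.linspace(0.0, 1.0, 50)])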
Abstract:
We present a novel kinetic multi-layer model for gas-particle interactions in aerosols and clouds (KM-GAP) that treats explicitly all steps of mass transport and chemical reaction of semi-volatile species partitioning between gas phase, particle surface and particle bulk. KM-GAP is based on the PRA model framework (Pöschl-Rudich-Ammann, 2007), and it includes gas-phase diffusion, reversible adsorption, surface reactions, bulk diffusion and reaction, as well as condensation, evaporation and heat transfer. The size change of atmospheric particles and the temporal evolution and spatial profile of the concentration of individual chemical species can be modeled along with gas uptake and accommodation coefficients. Depending on the complexity of the investigated system and the computational constraints, unlimited numbers of semi-volatile species, chemical reactions, and physical processes can be treated, and the model should help to bridge gaps in the understanding and quantification of multiphase chemistry and microphysics in atmospheric aerosols and clouds. In this study we demonstrate how KM-GAP can be used to analyze, interpret and design experimental investigations of changes in particle size and chemical composition in response to condensation, evaporation, and chemical reaction. For the condensational growth of water droplets, our kinetic model results provide a direct link between laboratory observations and molecular dynamics simulations, confirming that the accommodation coefficient of water at 270 K is close to unity (Winkler et al., 2006). Literature data on the evaporation of dioctyl phthalate as a function of particle size and time can be reproduced, and the model results suggest that changes in experimental conditions such as aerosol particle concentration and chamber geometry may influence the evaporation kinetics and can be optimized for efficient probing of specific physical effects and parameters. With regard to oxidative aging of organic aerosol particles, we illustrate how the formation and evaporation of volatile reaction products like nonanal can cause a decrease in the size of oleic acid particles exposed to ozone.
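As a rough illustration of the gas-phase diffusion and accommodation steps that KM-GAP resolves explicitly, the sketch below grows a single water droplet with a textbook continuum flux and a Fuchs-Sutugin transition correction. The constants, the single-layer treatment and the neglect of Kelvin and solute effects are simplifying assumptions for illustration, not features of the paper's model.

import numpy as np

M_W   = 0.018       # kg mol-1, molar mass of water
RHO_W = 1000.0      # kg m-3, liquid water density
R_GAS = 8.314       # J mol-1 K-1
D_V   = 2.4e-5      # m2 s-1, approximate vapour diffusivity in air
T     = 270.0       # K
ALPHA = 1.0         # mass accommodation coefficient (paper: close to unity)

def fuchs_sutugin(kn, alpha):
    """Textbook transition-regime correction to the continuum diffusion flux."""
    return (1.0 + kn) / (1.0 + (4.0 / (3.0 * alpha) + 0.377) * kn
                         + 4.0 / (3.0 * alpha) * kn * kn)

def grow(r0, c_inf, c_surf, dt=1e-4, n_steps=20000):
    """Forward-Euler growth of droplet radius r0 [m] for a fixed vapour
    excess expressed as gas-phase concentrations [kg m-3]."""
    c_bar = np.sqrt(8.0 * R_GAS * T / (np.pi * M_W))   # mean molecular speed
    lam = 3.0 * D_V / c_bar                            # effective mean free path
    r = r0
    for _ in range(n_steps):
        beta = fuchs_sutugin(lam / r, ALPHA)
        dmdt = 4.0 * np.pi * r * D_V * beta * (c_inf - c_surf)
        r += dmdt / (4.0 * np.pi * RHO_W * r * r) * dt  # dm = rho * 4*pi*r^2 * dr
        # Kelvin and solute effects on c_surf are ignored in this sketch.
    return r

print(grow(r0=1e-7, c_inf=4.0e-3, c_surf=3.9e-3))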
Abstract:
Undirected graphical models are widely used in statistics, physics and machine vision. However, Bayesian parameter estimation for undirected models is extremely challenging, since evaluation of the posterior typically involves the calculation of an intractable normalising constant. This problem has received much attention, but very little of it has focussed on the important practical case where the data consist of noisy or incomplete observations of the underlying hidden structure. This paper specifically addresses this problem, comparing two alternative methodologies. In the first of these approaches, particle Markov chain Monte Carlo (Andrieu et al., 2010) is used to efficiently explore the parameter space, combined with the exchange algorithm (Murray et al., 2006) for avoiding the calculation of the intractable normalising constant (a proof showing that this combination targets the correct distribution is found in a supplementary appendix online). This approach is compared with approximate Bayesian computation (Pritchard et al., 1999). Applications to estimating the parameters of Ising models and exponential random graphs from noisy data are presented. Each algorithm used in the paper targets an approximation to the true posterior, owing to the use of MCMC to simulate from the latent graphical model, since this cannot in general be done exactly. The supplementary appendix also describes the nature of the resulting approximation.
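The sketch below illustrates the exchange-algorithm idea for a small Ising model with a fully observed lattice and a flat prior; both are simplifying assumptions, since the paper's setting involves noisy observations and therefore also requires particle MCMC. As the abstract notes, using Gibbs sweeps for the auxiliary draw (instead of an exact sampler) makes the target an approximation.

import numpy as np

rng = np.random.default_rng(1)

def suff_stat(x):
    """Ising sufficient statistic: sum of horizontal and vertical
    nearest-neighbour products on a lattice with free boundaries."""
    return np.sum(x[:, :-1] * x[:, 1:]) + np.sum(x[:-1, :] * x[1:, :])

def gibbs_ising(theta, shape=(12, 12), sweeps=50):
    """Approximate draw from an Ising model with coupling theta via
    single-site Gibbs sweeps (an approximation, as discussed in the paper)."""
    x = rng.choice(np.array([-1, 1]), size=shape)
    n, m = shape
    for _ in range(sweeps):
        for i in range(n):
            for j in range(m):
                s = 0
                if i > 0: s += x[i - 1, j]
                if i < n - 1: s += x[i + 1, j]
                if j > 0: s += x[i, j - 1]
                if j < m - 1: s += x[i, j + 1]
                p = 1.0 / (1.0 + np.exp(-2.0 * theta * s))
                x[i, j] = 1 if rng.random() < p else -1
    return x

def exchange_mcmc(y, n_iter=200, step=0.05):
    """Exchange-algorithm random-walk MCMC for theta, assuming the lattice y
    is observed without noise and the prior on theta is flat."""
    theta, s_y, samples = 0.2, suff_stat(y), []
    for _ in range(n_iter):
        theta_p = theta + step * rng.standard_normal()
        w = gibbs_ising(theta_p, shape=y.shape)                 # auxiliary draw
        log_a = (theta_p - theta) * s_y + (theta - theta_p) * suff_stat(w)
        if np.log(rng.random()) < log_a:
            theta = theta_p
        samples.append(theta)
    return np.array(samples)

y_obs = gibbs_ising(0.35, sweeps=100)
print(exchange_mcmc(y_obs).mean())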
Abstract:
This paper provides a selective review of literature on fair trade and introduces contributions to this Policy Arena. It focuses on policy practice as a dynamic process, highlighting the changing configurations of actors, policy spaces, knowledge, practices and commodities that are shaping the policy trajectory of fair trade. It highlights how recent literature has tackled questions of mainstreaming as part of this trajectory, bringing to the fore dimensions of change associated with the market, state and civil society.
Abstract:
Context: Emotion regulation is critically disrupted in depression, and paradigms tapping these processes may uncover essential changes in neurobiology during treatment. In addition, as neuroimaging outcome studies of depression commonly utilize only baseline and endpoint data, which are more prone to week-to-week noise in symptomatology, we sought to use all data points over the course of a six-month trial. Objective: To examine changes in neurobiology resulting from successful treatment. Design: Double-blind trial examining changes in the neural circuits involved in emotion regulation resulting from one of two antidepressant treatments over a six-month trial. Participants were scanned pretreatment, at 2 months and at 6 months posttreatment. Setting: University functional magnetic resonance imaging facility. Participants: 21 patients with Major Depressive Disorder and without other Axis I or Axis II diagnoses, and 14 healthy controls. Interventions: Venlafaxine XR (doses up to 300 mg) or Fluoxetine (doses up to 80 mg). Main Outcome Measure: Neural activity, as measured using functional magnetic resonance imaging during performance of an emotion regulation paradigm, as well as regular assessments of symptom severity with the Hamilton Rating Scale for Depression. To utilize all data points, slope trajectories were calculated for the rate of change in depression severity as well as the rate of change of neural engagement. Results: The depressed individuals showing the steepest decrease in depression severity over the six months were those showing the most rapid increases in BA10 and right DLPFC activity when regulating negative affect over the same time frame. This relationship was more robust than when using only the baseline and endpoint data. Conclusions: Changes in PFC engagement when regulating negative affect correlate with changes in depression severity over six months. These results are buttressed by the use of slope statistics, which are more reliable and robust to week-to-week variation than difference scores.
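A hedged sketch of the slope-trajectory idea, using placeholder arrays rather than study data: fit an ordinary least-squares slope to each participant's repeated measurements and correlate the two rates of change, instead of taking endpoint-minus-baseline differences.

import numpy as np

times = np.array([0.0, 2.0, 6.0])                  # assessment times, months
rng = np.random.default_rng(42)
hamd = rng.uniform(5, 30, size=(21, 3))            # hypothetical HAM-D scores
roi  = rng.standard_normal((21, 3))                # hypothetical BA10 activity

def slopes(y, t):
    """Ordinary least-squares slope of y against t for each row of y."""
    t_c = t - t.mean()
    return (y - y.mean(axis=1, keepdims=True)) @ t_c / (t_c @ t_c)

r = np.corrcoef(slopes(hamd, times), slopes(roi, times))[0, 1]
print("correlation of rate-of-change estimates:", r)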
Abstract:
A Lagrangian model of photochemistry and mixing is described (CiTTyCAT, stemming from the Cambridge Tropospheric Trajectory model of Chemistry And Transport), which is suitable for transport and chemistry studies throughout the troposphere. Over the last five years, the model has been developed in parallel at several different institutions and here those developments have been incorporated into one "community" model and documented for the first time. The key photochemical developments include a new scheme for biogenic volatile organic compounds and updated emissions schemes. The key physical development is to evolve composition following an ensemble of trajectories within neighbouring air masses, including a simple scheme for mixing between them via an evolving "background profile", both within the boundary layer and in the free troposphere. The model runs along trajectories pre-calculated using winds and temperature from meteorological analyses. In addition, boundary layer height and precipitation rates, output from the analysis model, are interpolated to trajectory points and used as inputs to the mixing and wet deposition schemes. The model is most suitable in regimes where the effects of small-scale turbulent mixing are slow relative to advection by the resolved winds, so that coherent air masses form with distinct composition and strong gradients between them. Such air masses can persist for many days while stretching, folding and thinning. Lagrangian models offer a useful framework for picking apart the processes of air-mass evolution over inter-continental distances, without being hindered by the numerical diffusion inherent to global Eulerian models. The model, including different box and trajectory modes, is described and some output for each of the modes is presented for evaluation. The model is available for download from a Subversion-controlled repository by contacting the corresponding authors.
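A minimal sketch of the Lagrangian box concept described above, with placeholder rates and a synthetic background series (none of the values are CiTTyCAT parameters): the parcel's mixing ratio evolves through net chemistry plus relaxation towards an evolving background profile.

import numpy as np

dt = 3600.0                              # s, trajectory output interval
n = 24 * 5                               # five days of hourly points
p_chem = 0.4 / 86400.0                   # ppbv s-1, net chemical production
k_loss = 1.0 / (20 * 86400.0)            # s-1, first-order chemical loss
k_mix = 1.0 / (2 * 86400.0)              # s-1, mixing (entrainment) rate
c_bg = 35.0 + 5.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, n))  # evolving background

c = np.empty(n)
c[0] = 60.0                              # ppbv, initial mixing ratio in the parcel
for k in range(1, n):
    chem = p_chem - k_loss * c[k - 1]                 # net chemical tendency
    mix = -k_mix * (c[k - 1] - c_bg[k - 1])           # relaxation to background
    c[k] = c[k - 1] + dt * (chem + mix)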
Abstract:
During long-range transport, many distinct processes (including photochemistry, deposition, emissions and mixing) contribute to the transformation of air mass composition. Partitioning the effects of different processes can be useful when considering the sensitivity of chemical transformation to, for example, a changing environment or anthropogenic influence. However, transformation is not observed directly, since mixing ratios are measured, and models must be used to relate changes to processes. Here, four cases from the ITCT-Lagrangian 2004 experiment are studied. In each case, aircraft intercepted a distinct air mass several times during transport over the North Atlantic, providing a unique dataset quantifying the net changes in composition from all processes. A new framework is presented to deconstruct the change in O3 mixing ratio (ΔO3) into its component processes, which were not measured directly, taking into account the uncertainty in measurements, initial air mass variability and its time evolution. The results show that the net chemical processing (ΔO3,chem) over the whole simulation is greater than the net physical processing (ΔO3,phys) in all cases. This is in part explained by cancellation effects associated with mixing. In contrast, each case is in a regime of either net photochemical destruction (lower tropospheric transport) or production (an upper tropospheric biomass burning case). However, physical processes influence O3 indirectly through addition or removal of precursor gases, so that changes to physical parameters in a model can have a larger effect on ΔO3,chem than ΔO3,phys. Despite its smaller magnitude, the physical processing distinguishes the lower tropospheric export cases, since the net photochemical O3 change is −5 ppbv per day in all three cases. Processing is quantified using a Lagrangian photochemical model with a novel method for simulating mixing through an ensemble of trajectories and a background profile that evolves with them. The model is able to simulate the magnitude and variability of the observations (of O3, CO, NOy and some hydrocarbons) and is consistent with the time-averaged OH following air masses inferred from hydrocarbon measurements alone (by Arnold et al., 2007). Therefore, it is a useful new method to simulate air mass evolution and variability, and its sensitivity to process parameters.
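The budget deconstruction can be illustrated with a toy calculation in which chemical and physical tendencies are integrated separately, so that the net change splits exactly into the two contributions; the tendency values below are placeholders, not results from the paper.

import numpy as np

dt = 3600.0                                      # s, one output step per hour
chem_tend = np.full(120, -5.0 / 86400.0)         # ppbv s-1, net photochemical destruction
phys_tend = np.full(120, 0.5 / 86400.0)          # ppbv s-1, mixing plus deposition

d_o3_chem = np.sum(chem_tend) * dt               # about -25 ppbv over 5 days
d_o3_phys = np.sum(phys_tend) * dt               # about +2.5 ppbv over 5 days
d_o3 = d_o3_chem + d_o3_phys                     # total change is exactly their sum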
Abstract:
Approximate Bayesian computation (ABC) methods make use of comparisons between simulated and observed summary statistics to overcome the problem of computationally intractable likelihood functions. As the practical implementation of ABC requires computations based on vectors of summary statistics rather than full data sets, a central question is how to derive low-dimensional summary statistics from the observed data with minimal loss of information. In this article we provide a comprehensive review and comparison of the performance of the principal methods of dimension reduction proposed in the ABC literature. The methods are split into three classes, which are not mutually exclusive: best subset selection methods, projection techniques and regularization. In addition, we introduce two new methods of dimension reduction. The first is a best subset selection method based on Akaike and Bayesian information criteria, and the second uses ridge regression as a regularization procedure. We illustrate the performance of these dimension reduction techniques through the analysis of three challenging models and data sets.
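As an illustration of the regularization class of methods reviewed here, the sketch below builds a ridge-regression projection from a redundant set of simulated summaries to one linear combination per parameter; the toy simulator and penalty value are assumptions, not the authors' procedure or data sets.

import numpy as np

rng = np.random.default_rng(0)
n_sim, n_summ, n_par = 5000, 50, 2

theta = rng.uniform(0, 1, size=(n_sim, n_par))           # simulated parameters
S = theta @ rng.standard_normal((n_par, n_summ)) \
    + rng.standard_normal((n_sim, n_summ))               # noisy candidate summaries

lam = 10.0                                               # ridge penalty
X = np.hstack([np.ones((n_sim, 1)), S])
B = np.linalg.solve(X.T @ X + lam * np.eye(n_summ + 1), X.T @ theta)

def project_summaries(summaries):
    """Map a full candidate summary vector to one projection per parameter."""
    return np.hstack([1.0, summaries]) @ B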
Abstract:
Many modern statistical applications involve inference for complex stochastic models, where it is easy to simulate from the models, but impossible to calculate likelihoods. Approximate Bayesian computation (ABC) is a method of inference for such models. It replaces calculation of the likelihood by a step which involves simulating artificial data for different parameter values, and comparing summary statistics of the simulated data with summary statistics of the observed data. Here we show how to construct appropriate summary statistics for ABC in a semi-automatic manner. We aim for summary statistics which will enable inference about certain parameters of interest to be as accurate as possible. Theoretical results show that optimal summary statistics are the posterior means of the parameters. Although these cannot be calculated analytically, we use an extra stage of simulation to estimate how the posterior means vary as a function of the data; and we then use these estimates of our summary statistics within ABC. Empirical results show that our approach is a robust method for choosing summary statistics that can result in substantially more accurate ABC analyses than the ad hoc choices of summary statistics that have been proposed in the literature. We also demonstrate advantages over two alternative methods of simulation-based inference.
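A minimal sketch of the semi-automatic construction for a toy Gaussian-mean model: pilot simulations are used to regress the parameter on data features, and the fitted regression (an estimate of the posterior mean) serves as the summary statistic inside plain rejection ABC. The model, prior, feature set and tolerance are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=100):
    """Toy model: i.i.d. Gaussian data with unknown mean theta."""
    return rng.normal(theta, 1.0, size=n)

def features(x):
    """A deliberately redundant feature vector of the raw data."""
    return np.array([x.mean(), np.median(x), x.std(), (x ** 2).mean()])

# Stage 1: pilot simulations and a linear regression of theta on data features.
theta_pilot = rng.uniform(-5, 5, size=2000)
F = np.vstack([features(simulate(t)) for t in theta_pilot])
X = np.hstack([np.ones((len(F), 1)), F])
beta, *_ = np.linalg.lstsq(X, theta_pilot, rcond=None)

def summary(x):
    """Estimated posterior mean of theta given data x (the learned summary)."""
    return np.hstack([1.0, features(x)]) @ beta

# Stage 2: rejection ABC using the learned summary statistic.
x_obs = simulate(1.5)
s_obs = summary(x_obs)
theta_prop = rng.uniform(-5, 5, size=20000)
dist = np.array([abs(summary(simulate(t)) - s_obs) for t in theta_prop])
accepted = theta_prop[dist < np.quantile(dist, 0.01)]
print(accepted.mean(), accepted.std())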
Abstract:
A novel two-stage construction algorithm for linear-in-the-parameters classifiers is proposed, aimed at noisy two-class classification problems. The purpose of the first stage is to produce a prefiltered signal that is used as the desired output for the second stage, which constructs a sparse linear-in-the-parameters classifier. In the first stage, which learns to generate the prefiltered signal, a two-level algorithm is introduced to maximise the model's generalisation capability: an elastic net model identification algorithm using singular value decomposition is employed at the lower level, while the two regularisation parameters are selected by maximising the Bayesian evidence using a particle swarm optimization algorithm. Analysis is provided to demonstrate how “Occam's razor” is embodied in this approach. The second stage of sparse classifier construction is based on orthogonal forward regression with the D-optimality algorithm. Extensive experimental results demonstrate that the proposed approach is effective and yields competitive results for noisy data sets.
Abstract:
The Eyjafjallajökull volcano in Iceland emitted a cloud of ash into the atmosphere during April and May 2010. Over the UK the ash cloud was observed by the FAAM BAe-146 Atmospheric Research Aircraft, which was equipped with in-situ probes measuring the concentration of volcanic ash carried by particles of varying sizes. The UK Met Office Numerical Atmospheric-dispersion Modelling Environment (NAME) has been used to simulate the evolution of the ash cloud emitted by the Eyjafjallajökull volcano during the period 4–18 May 2010. In the NAME simulations the processes controlling the evolution of the concentration and particle size distribution include sedimentation and deposition of particles, horizontal dispersion and vertical wind shear. For travel times between 24 and 72 h, a 1/t relationship describes the evolution of the concentration at the centre of the ash cloud and the particle size distribution remains fairly constant. Although NAME does not represent the effects of microphysical processes, it can capture the observed decrease in concentration with travel time in this period. This suggests that, for this eruption, microphysical processes play a small role in determining the evolution of the distal ash cloud. Quantitative comparison with observations shows that NAME can simulate the observed column-integrated mass if around 4% of the total emitted mass is assumed to be transported as far as the UK by small particles (< 30 μm diameter). NAME can also simulate the observed particle size distribution if a distal particle size distribution that contains a large fraction of < 10 μm diameter particles is used, consistent with the idea that phreatomagmatic volcanoes, such as Eyjafjallajökull, emit very fine particles.
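The 1/t behaviour quoted above can be illustrated with a toy least-squares fit of C = A/t to synthetic peak concentrations over 24-72 h travel times; the numbers are synthetic, not NAME output or FAAM observations.

import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(24.0, 72.0, 25)                    # travel time, hours
c = 6000.0 / t * rng.lognormal(0.0, 0.1, t.size)   # synthetic peak concentration

A = np.sum(c / t) / np.sum(1.0 / t ** 2)           # least-squares fit of C = A / t
print("fitted A:", A, "=> C(48 h) approx.", A / 48.0)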
Abstract:
Computational formalisms have been pushing the boundaries of the field of computing for the last 80 years, and much debate has surrounded what computing entails: what it is, and what it is not. This paper seeks to explore the boundaries of the ideas of computation and provide a framework for enabling a constructive discussion of computational ideas. First, a review of computing is given, ranging from Turing Machines to interactive computing. Then, a variety of natural physical systems are considered for their computational qualities. From this exploration, a framework is presented under which all dynamical systems can be considered as instances of the class of abstract computational platforms. An abstract computational platform is defined by both its intrinsic dynamics and how it allows computation that is meaningful to an external agent through the configuration of constraints upon those dynamics. It is asserted that a platform’s computational expressiveness is directly related to the freedom with which constraints can be placed. Finally, the requirements for a formal constraint description language are considered and it is proposed that Abstract State Machines may provide a reasonable basis for such a language.
Abstract:
Particle filters are fully non-linear data assimilation techniques that aim to represent the probability distribution of the model state given the observations (the posterior) by a number of particles. In high-dimensional geophysical applications the number of particles required by the sequential importance resampling (SIR) particle filter to capture the high-probability region of the posterior is too large to make them usable. However, particle filters can be formulated using proposal densities, which gives greater freedom in how particles are sampled and allows for a much smaller number of particles. Here a particle filter is presented which uses the proposal density to ensure that all particles end up in the high-probability region of the posterior probability density function. This gives rise to the possibility of non-linear data assimilation in high-dimensional systems. The particle filter formulation is compared to the optimal proposal density particle filter and the implicit particle filter, both of which also utilise a proposal density. We show that when observations are available at every time step, both of those schemes will be degenerate when the number of independent observations is large, unlike the new scheme. The sensitivity of the new scheme to its parameter values is explored theoretically and demonstrated using the Lorenz (1963) model.
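For context, the sketch below implements the standard SIR particle filter on the Lorenz (1963) model, i.e. the baseline scheme that the paper argues degenerates in high dimensions; the proposal-density filter itself is not reproduced, and the noise levels, time step and particle number are illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0
DT, Q, R, N_PART = 0.01, 0.05, 1.0, 500

def lorenz_step(x):
    """One forward-Euler step of the Lorenz 1963 equations for rows of x."""
    dx = np.empty_like(x)
    dx[:, 0] = SIGMA * (x[:, 1] - x[:, 0])
    dx[:, 1] = x[:, 0] * (RHO - x[:, 2]) - x[:, 1]
    dx[:, 2] = x[:, 0] * x[:, 1] - BETA * x[:, 2]
    return x + DT * dx

truth = np.array([[1.0, 1.0, 25.0]])
particles = truth + rng.standard_normal((N_PART, 3))
for _ in range(500):
    truth = lorenz_step(truth)
    y = truth[0, 0] + np.sqrt(R) * rng.standard_normal()         # observe x only
    particles = lorenz_step(particles) + np.sqrt(Q) * rng.standard_normal((N_PART, 3))
    logw = -0.5 * (y - particles[:, 0]) ** 2 / R                  # Gaussian likelihood
    w = np.exp(logw - logw.max()); w /= w.sum()
    idx = rng.choice(N_PART, size=N_PART, p=w)                    # resample
    particles = particles[idx]
print("filter mean:", particles.mean(axis=0), "truth:", truth[0])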