879 results for Particle-based Model
Abstract:
We present a novel kinetic multi-layer model for gas-particle interactions in aerosols and clouds (KM-GAP) that explicitly treats all steps of mass transport and chemical reaction of semi-volatile species partitioning between the gas phase, particle surface and particle bulk. KM-GAP is based on the PRA model framework (Pöschl-Rudich-Ammann, 2007) and includes gas-phase diffusion, reversible adsorption, surface reactions, bulk diffusion and reaction, as well as condensation, evaporation and heat transfer. The size change of atmospheric particles and the temporal evolution and spatial profile of the concentration of individual chemical species can be modelled along with gas uptake and accommodation coefficients. Depending on the complexity of the investigated system, unlimited numbers of semi-volatile species, chemical reactions and physical processes can be treated, and the model should help to bridge gaps in the understanding and quantification of multiphase chemistry and microphysics in atmospheric aerosols and clouds. In this study we demonstrate how KM-GAP can be used to analyze, interpret and design experimental investigations of changes in particle size and chemical composition in response to condensation, evaporation and chemical reaction. For the condensational growth of water droplets, our kinetic model results provide a direct link between laboratory observations and molecular dynamics simulations, confirming that the accommodation coefficient of water at 270 K is close to unity. Literature data on the evaporation of dioctyl phthalate as a function of particle size and time can be reproduced, and the model results suggest that changes in experimental conditions such as aerosol particle concentration and chamber geometry may influence the evaporation kinetics and can be optimized for efficient probing of specific physical effects and parameters.
With regard to oxidative aging of organic aerosol particles, we illustrate how the formation and evaporation of volatile reaction products like nonanal can cause a decrease in the size of oleic acid particles exposed to ozone.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
We study some perturbative and nonperturbative effects in the framework of the Standard Model of particle physics. In particular, we consider the time dependence of the Higgs vacuum expectation value given by the dynamics of the Standard Model and study the non-adiabatic production of both bosons and fermions, which is intrinsically non-perturbative. In the Hartree approximation, we analyze the general expressions that describe the dissipative dynamics due to the backreaction of the produced particles. We then solve numerically some cases relevant to Standard Model phenomenology in the regime of relatively small oscillations of the Higgs vacuum expectation value (vev). As perturbative effects, we consider the leading logarithmic resummation at small Bjorken x in QCD, concentrating on the Nc dependence of the Green functions associated with reggeized gluons. Here the eigenvalues of the BKP kernel for states of more than three reggeized gluons are unknown in general, contrary to the large-Nc (planar) limit, where the problem becomes integrable. In this context we consider a 4-gluon kernel for a finite number of colors and define some simple toy models for the configuration-space dynamics, which are directly solvable with group-theoretical methods. In particular, we study the dependence of the spectrum of these models on the number of colors and make comparisons with the planar limit case. In the final part we move on to the study of theories beyond the Standard Model, considering models built on AdS5 × S5/Γ orbifold compactifications of the type IIB superstring, where Γ is the abelian group Zn. We present an appealing three-family N = 0 SUSY model with n = 7 for the order of the orbifolding group. This results in a modified Pati-Salam model which reduces to the Standard Model after symmetry breaking and has interesting phenomenological consequences for the LHC.
Abstract:
One of the main goals of the CMS experiment is to search for the Standard Model Higgs boson. The 4-lepton channel (from the Higgs decay h->ZZ->4l, l = e,mu) is one of the most promising. The analysis is based on the identification of two opposite-sign, same-flavor lepton pairs: the leptons are required to be isolated and to come from the same primary vertex. The Higgs boson would be statistically revealed by the presence of a resonance peak in the 4-lepton invariant mass distribution. The 4-lepton analysis at CMS is presented, covering its most important aspects: lepton identification, isolation variables, impact parameter, kinematics, event selection, background control and statistical analysis of the results. The search leads to evidence for the presence of a signal with a statistical significance of more than four standard deviations. The excess of data with respect to the background-only predictions indicates the presence of a new boson, with a mass of about 126 GeV/c^2, decaying to two Z bosons, whose characteristics are compatible with those of the SM Higgs.
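The resonance search described above hinges on the 4-lepton invariant mass. As a minimal illustration of that quantity (not CMS analysis code; the four-vectors below are made up), the invariant mass of a lepton system can be computed from its summed four-momentum:

```python
import math

def invariant_mass(leptons):
    """Invariant mass of a system of particles given as (E, px, py, pz)
    four-vectors in GeV: m = sqrt(E^2 - |p|^2) of the summed four-vector."""
    E = sum(l[0] for l in leptons)
    px = sum(l[1] for l in leptons)
    py = sum(l[2] for l in leptons)
    pz = sum(l[3] for l in leptons)
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# Toy example: two back-to-back (effectively massless) lepton pairs merged
# into two legs of 63 GeV each give a 126 GeV resonance candidate.
m = invariant_mass([(63.0, 0.0, 0.0, 63.0), (63.0, 0.0, 0.0, -63.0)])
print(round(m, 1))  # -> 126.0
```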
Abstract:
Attractive business cases in various application fields contribute to the sustained long-term interest in indoor localization and tracking within the research community. Location tracking is generally treated as a dynamic state estimation problem consisting of two steps: (i) location estimation through measurement, and (ii) location prediction. For the estimation step, one of the most efficient and low-cost solutions is Received Signal Strength (RSS)-based ranging. However, various challenges - unrealistic propagation models, non-line-of-sight (NLOS) conditions, and multipath propagation - are yet to be addressed. Particle filters are a popular choice for dealing with the inherent non-linearities in both location measurements and motion dynamics. While such filters have been successfully applied to accurate, time-based ranging measurements, dealing with the more error-prone RSS-based ranging is still challenging. In this work, we address the above issues with a novel weighted-likelihood bootstrap particle filter for tracking via RSS-based ranging. Our filter weights the individual likelihoods from different anchor nodes exponentially, according to the ranging estimation. We also employ an improved propagation model, proposed in our recent work, for more accurate RSS-based ranging. We implemented and tested our algorithm in a passive localization system with IEEE 802.15.4 signals, showing that our proposed solution substantially outperforms a traditional bootstrap particle filter.
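A minimal sketch of a bootstrap particle filter with per-anchor likelihood exponents, as the abstract describes at a high level. This is not the authors' implementation: the random-walk motion model, Gaussian ranging-error likelihood, the 10 m x 10 m anchor layout and all numeric values are assumptions for illustration.

```python
import math
import random

# Assumed square anchor layout (10 m x 10 m); the paper's setup may differ.
ANCHORS = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]

def true_range(pos, anchor):
    """Euclidean distance between a position and an anchor."""
    return math.hypot(pos[0] - anchor[0], pos[1] - anchor[1])

def likelihood(particle, ranges, sigma=1.0, alphas=None):
    """Product of per-anchor Gaussian likelihoods, each raised to an
    exponent alpha_i expressing confidence in that anchor's range estimate
    (here all equal; the paper derives them from the ranging estimation)."""
    alphas = alphas or [1.0] * len(ANCHORS)
    w = 1.0
    for anchor, r, alpha in zip(ANCHORS, ranges, alphas):
        err = true_range(particle, anchor) - r
        w *= math.exp(-0.5 * (err / sigma) ** 2) ** alpha
    return w

def step(particles, ranges, motion_std=0.3):
    """Propagate (random-walk motion model), weight, and resample."""
    moved = [(x + random.gauss(0, motion_std), y + random.gauss(0, motion_std))
             for (x, y) in particles]
    weights = [likelihood(p, ranges) for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    return random.choices(moved, weights=weights, k=len(moved))  # resampling

random.seed(1)
particles = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(500)]
true_pos = (3.0, 4.0)  # stationary target for simplicity
for _ in range(20):
    # simulated noisy RSS-derived ranges (sd = 0.5 m, an assumption)
    ranges = [true_range(true_pos, a) + random.gauss(0, 0.5) for a in ANCHORS]
    particles = step(particles, ranges)
est = (sum(p[0] for p in particles) / len(particles),
       sum(p[1] for p in particles) / len(particles))
print(round(est[0], 1), round(est[1], 1))
```

With the fixed seed, the particle cloud concentrates near the true position (3, 4) after a few iterations.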
Abstract:
Passive positioning systems produce user location information for third-party providers of positioning services. Since the tracked wireless devices do not participate in the positioning process, passive positioning can only rely on simple, measurable radio signal parameters, such as timing or power information. In this work, we present a passive tracking system for WiFi signals with an enhanced particle filter using fine-grained power-based ranging. Our proposed particle filter provides an improved likelihood function on the observation parameters and is equipped with a modified coordinated turn model to address the challenges of a passive positioning system. The anchor nodes for WiFi signal sniffing and target positioning use software-defined radio techniques to extract channel state information and mitigate multipath effects. By combining the enhanced particle filter with a set of enhanced ranging methods, our system can track mobile targets with an accuracy of 1.5 m at the 50th percentile and 2.3 m at the 90th percentile in a complex indoor environment. Our proposed particle filter significantly outperforms the typical bootstrap particle filter, the extended Kalman filter and trilateration algorithms.
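The abstract does not specify the modification to the coordinated turn model; for reference, the standard coordinated turn state transition it builds on can be sketched as follows (the state layout and all parameter values here are illustrative, not the paper's):

```python
import math

def coordinated_turn(state, omega, dt):
    """One step of the standard coordinated turn motion model.
    state = (x, y, vx, vy); omega = turn rate [rad/s]; dt = time step [s]."""
    x, y, vx, vy = state
    if abs(omega) < 1e-9:  # straight-line (zero turn rate) limit
        return (x + vx * dt, y + vy * dt, vx, vy)
    s, c = math.sin(omega * dt), math.cos(omega * dt)
    return (x + (vx * s - vy * (1.0 - c)) / omega,
            y + (vx * (1.0 - c) + vy * s) / omega,
            vx * c - vy * s,   # velocity vector rotates by omega*dt
            vx * s + vy * c)

# A target moving at 1 m/s along +x, turning at pi/2 rad/s, ends one
# second later heading along +y, a quarter-circle of radius 2/pi away.
state = (0.0, 0.0, 1.0, 0.0)
state = coordinated_turn(state, math.pi / 2, 1.0)
print([round(v, 3) for v in state])
```

In a particle filter, each particle would carry its own (possibly perturbed) turn rate, which is one common way such a model is embedded in tracking.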
Abstract:
Mineral processing plants rely on two main processes: comminution and separation. The objective of comminution is to break complex particles consisting of numerous minerals into smaller, simpler particles in which each particle consists primarily of only one mineral. The process in which the mineral composition distribution of particles changes due to breakage is called 'liberation'. The purpose of separation is to separate particles consisting of valuable mineral from those containing non-valuable mineral. The energy required to break particles to fine sizes is expensive, and therefore the mineral processing engineer must design the circuit so that the breakage of liberated particles is reduced in favour of breaking composite particles. To effectively optimize a circuit through simulation, it is necessary to predict how the mineral composition distributions change due to comminution. Such a model is called a 'liberation model for comminution'. It was generally considered that such a model should incorporate information about the ore, such as its texture. However, the relationship between the feed and product particles can be estimated using a probability method, the probability being defined as the probability that a feed particle of a particular composition and size will form a particular product particle of a particular size and composition. The model is based on maximizing the entropy of this probability subject to mass and composition constraints. This methodology allows a liberation model to be developed not only for binary particles but also for particles consisting of many minerals. Results from applying the model to a real plant ore are presented. A laboratory ball mill was used to break the particles, and the results were used to estimate the kernel that represents the relationship between parent and progeny particles.
A second feed, consisting primarily of heavy particles subsampled from the main ore, was then ground through the same mill. The results from the first experiment were used to predict the product of the second experiment, and the agreement between the predicted and actual results is very good. More extensive validation is nevertheless recommended to fully evaluate the merits of the method. (C) 2003 Elsevier Ltd. All rights reserved.
Abstract:
This paper discusses the integrated design of parallel manipulators, which exhibit varying dynamics. This characteristic affects machine stability and performance. The design methodology consists of four main steps: (i) system modeling using a flexible multibody technique, (ii) synthesis of reduced-order models suitable for control design, (iii) systematic flexible-model-based input signal design, and (iv) evaluation of some possible machine designs. The novelty of this methodology is to take structural flexibilities into consideration during the input signal design, thereby enhancing the standard design process, which mainly considers rigid-body dynamics. The potential of the proposed strategy is demonstrated in the design evaluation of a two-degree-of-freedom high-speed parallel manipulator. The results are experimentally validated. (C) 2010 Elsevier Ltd. All rights reserved.
A hybrid Particle Swarm Optimization - Simplex algorithm (PSOS) for structural damage identification
Abstract:
This study proposes a new PSOS model-based damage identification procedure using frequency-domain data. The objective function for the minimization problem is formulated in terms of the Frequency Response Functions (FRFs) of the system. A novel strategy for the control of the Particle Swarm Optimization (PSO) parameters based on the Nelder-Mead algorithm (Simplex method) is presented; consequently, the convergence of the PSOS becomes independent of the heuristic constants, and its stability and confidence are enhanced. The hybrid method outperforms Simulated Annealing (SA) and the basic PSO (PSO(b)) on several benchmark functions. Two damage identification problems, taking into consideration the effects of noisy and incomplete data, were studied: first a 10-bar truss and second a cracked free-free beam, both modeled with finite elements. In both cases, the damage location and extent were successfully determined. Finally, a non-linear (Duffing) oscillator was identified by the PSOS with good results. (C) 2009 Elsevier Ltd. All rights reserved.
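For orientation, a basic PSO on a standard benchmark illustrates the velocity and position update rules whose heuristic constants (w, c1, c2) the hybrid PSOS controls adaptively via the Simplex method. The constants, swarm size and sphere benchmark below are conventional textbook choices, not the paper's settings:

```python
import random

def sphere(x):
    """Sphere benchmark: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return sum(v * v for v in x)

def pso(f, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic (global-best) PSO; w, c1, c2 are the heuristic constants."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]              # personal bests
    gbest = min(pbest, key=f)[:]             # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # inertia + cognitive pull (pbest) + social pull (gbest)
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso(sphere)
print(round(sphere(best), 8))
```

With these settings the swarm collapses onto the optimum; the PSOS replaces the fixed w, c1, c2 with Simplex-driven updates so that convergence no longer depends on hand-tuning them.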
Abstract:
The aim of this study is to quantify the mass transfer velocity using turbulence parameters obtained from simultaneous measurements of oxygen concentration fields and velocity fields. The surface divergence model was considered in more detail, using data obtained for the lower range of beta (surface divergence). It is shown that the existing models that use the divergence concept furnish good predictions of the transfer velocity even for the low values of beta covered in this study. Additionally, traditional conceptual models, such as the film model, the penetration-renewal model and the large eddy model, were tested using the simultaneous information on concentration and velocity fields. It is shown that the film and surface divergence models predicted the mass transfer velocity over the whole range of equipment Reynolds numbers used here. The velocity measurements showed viscosity effects close to the surface, which indicates that the surface was contaminated with some surfactant. Considering the results, this contamination can be considered slight for the purposes of mass transfer prediction. (C) 2009 American Institute of Chemical Engineers AIChE J, 56: 2005-2017, 2010
Abstract:
Corresponding to the updated flow pattern map presented in Part I of this study, an updated general flow-pattern-based flow boiling heat transfer model was developed for CO2, using the Cheng-Ribatski-Wojtan-Thome flow boiling heat transfer model [L. Cheng, G. Ribatski, L. Wojtan, J.R. Thome, New flow boiling heat transfer model and flow pattern map for carbon dioxide evaporating inside horizontal tubes, Int. J. Heat Mass Transfer 49 (2006) 4082-4094; L. Cheng, G. Ribatski, L. Wojtan, J.R. Thome, Erratum to: "New flow boiling heat transfer model and flow pattern map for carbon dioxide evaporating inside tubes" [Heat Mass Transfer 49 (21-22) (2006) 4082-4094], Int. J. Heat Mass Transfer 50 (2007) 391] as the starting basis. The flow boiling heat transfer correlation in the dryout region was updated. In addition, a new mist flow heat transfer correlation for CO2 was developed based on the CO2 data, and a heat transfer method for bubbly flow was proposed for the sake of completeness. The updated general flow boiling heat transfer model for CO2 covers all flow regimes and is applicable to a wider range of conditions for horizontal tubes: tube diameters from 0.6 to 10 mm, mass velocities from 50 to 1500 kg/m^2 s, heat fluxes from 1.8 to 46 kW/m^2 and saturation temperatures from -28 to 25 degrees C (reduced pressures from 0.21 to 0.87). The updated model was compared to a new experimental database containing 1124 data points (790 more than in the previous model [Cheng et al., 2006, 2007]). Good agreement between the predicted and experimental data was found in general, with 71.4% of the entire database, and 83.2% of the database without the dryout and mist flow data, predicted within +/-30%.
However, the predictions for the dryout and mist flow regions were less satisfactory due to the limited number of data points, the higher inaccuracy of such data, scatter of up to 40% in some data sets, significant discrepancies from one experimental study to another, and the difficulties associated with predicting the inception and completion of dryout around the perimeter of horizontal tubes. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
In this paper, the continuous Verhulst dynamic model is used to synthesize a new distributed power control algorithm (DPCA) for use in direct-sequence code division multiple access (DS-CDMA) systems. The Verhulst model was originally designed to describe the population growth of biological species under food and physical space restrictions. The discretization of the corresponding differential equation is accomplished via the Euler numeric integration (ENI) method. Analytical convergence conditions for the proposed DPCA are also established. Several properties of the proposed recursive algorithm, such as the Euclidean distance from the optimum vector after convergence, convergence speed, normalized mean squared error (NSE), average power consumption per user, performance under dynamic channels, and implementation complexity, are analyzed through simulations. The simulation results are compared with two other DPCAs: the classic algorithm derived by Foschini and Miljanic and the sigmoidal algorithm of Uykan and Koivo. Under estimation error conditions, the proposed DPCA exhibits a smaller discrepancy from the optimum power vector solution and better convergence (under both fixed and adaptive convergence factors) than the classic and sigmoidal DPCAs. (C) 2010 Elsevier GmbH. All rights reserved.
Abstract:
Isotretinoin is the drug of choice for the management of severe recalcitrant nodular acne. Nevertheless, some of its physicochemical properties are still poorly known. Hence, the aim of our study was to comparatively evaluate the particle size distribution (PSD) and characterize the thermal behavior of the three encapsulated isotretinoin products in oil suspension (one reference and two generics) commercialized in Brazil. We show that the PSD, estimated by laser diffraction and by polarized light microscopy, differed between the generics and the reference product. However, the thermal behavior of the three products, determined by thermogravimetry (TGA), differential thermal analysis (DTA) and differential scanning calorimetry (DSC), displayed no significant differences, and all three were more thermostable than the isotretinoin standard used as internal control. Our study thus suggests that PSD analyses of isotretinoin lipid-based formulations should be routinely performed in order to improve their quality and bioavailability. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
In this study, twenty hydroxylated and acetoxylated 3-phenylcoumarin derivatives were evaluated as inhibitors of immune complex-stimulated neutrophil oxidative metabolism and possible modulators of the inflammatory tissue damage found in type III hypersensitivity reactions. By using lucigenin- and luminol-enhanced chemiluminescence assays (CL-luc and CL-lum, respectively), we found that the 6,7-dihydroxylated and 6,7-diacetoxylated 3-phenylcoumarin derivatives were the most effective inhibitors. Different structural features of the other compounds determined CL-luc and/or CL-lum inhibition. The 2D-QSAR analysis suggested the importance of hydrophobic contributions to explain these effects. In addition, a statistically significant 3D-QSAR model built applying GRIND descriptors allowed us to propose a virtual receptor site considering pharmacophoric regions and mutual distances. Furthermore, the 3-phenylcoumarins studied were not toxic to neutrophils under the assessed conditions. (C) 2007 Elsevier Masson SAS. All rights reserved.