989 results for Meshfree particle methods
Abstract:
Recent studies have shown that the classroom environment is very important for students' health and performance, so the evaluation of indoor air quality (IAQ) in classrooms is necessary to ensure students' well-being. In this paper the emphasis is on the airborne concentration of particulate matter (PM) in adult education rooms. The mass concentration of PM10 particulates was measured in two classrooms under different ventilation methods at the University of Reading, UK, during the winter period of 2008. The particle measurements were accompanied by measurements of CO2 concentration in the same classrooms, but that study is the subject of another publication. Ambient PM10, temperature, relative humidity, wind speed and direction, and rainfall events were monitored as well. Overall, outdoor particle concentrations and outdoor meteorological parameters were identified as significant factors influencing indoor particle concentration levels. Ventilation methods had significant effects on the air change rate and on indoor/outdoor (I/O) concentration ratios. Higher levels of indoor particulates were seen during occupancy periods, and I/O ratios were significantly higher when classrooms were occupied than when they were unoccupied, indicating the combined effect of occupant presence and outdoor particle concentration levels. The concentrations of PM10 indoors and outdoors did not meet the WHO annual-average standard for PM10.
Abstract:
The 3D reconstruction of a Golgi-stained dendritic tree from a serial stack of images captured with a transmitted light bright-field microscope is investigated. Modifications to the bootstrap filter are discussed such that the tree structure may be estimated recursively as a series of connected segments. The tracking performance of the bootstrap particle filter is compared against Differential Evolution, an evolutionary global optimisation method, both in terms of robustness and accuracy. It is found that the particle filtering approach is significantly more robust and accurate for the data considered.
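For readers unfamiliar with the technique, here is a minimal sketch of a bootstrap particle filter on a toy one-dimensional random-walk model; it is not the authors' dendrite-tracking formulation, and all model parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_particle_filter(observations, n_particles=500,
                              process_std=1.0, obs_std=1.0):
    """Generic bootstrap particle filter for a 1-D random-walk state model.

    Illustrative only: the dendrite-tracking application in the abstract
    uses a structured state (connected tree segments), not this toy model.
    """
    particles = rng.normal(0.0, 1.0, n_particles)   # samples from the prior
    estimates = []
    for y in observations:
        # Propagate: sample from the transition prior (random walk).
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # Weight: likelihood of the observation under each particle.
        weights = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # Resample: multinomial resampling keeps the filter from degenerating.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)

# Usage: track a noisy random walk.
truth = np.cumsum(rng.normal(0.0, 1.0, 100))
obs = truth + rng.normal(0.0, 1.0, 100)
est = bootstrap_particle_filter(obs)
print(f"RMS error: {np.sqrt(np.mean((est - truth) ** 2)):.2f}")
```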
Abstract:
The Boltzmann equation in the presence of boundary and initial conditions, which describes the general case of carrier transport in microelectronic devices, is analysed in terms of Monte Carlo theory. The classical Ensemble Monte Carlo algorithm, which had been devised from merely phenomenological considerations of the initial and boundary carrier contributions, is now derived in a formal way. The approach allows us to suggest a set of event-biasing algorithms for statistical enhancement as an alternative to the population control technique, which is virtually the only algorithm currently used in particle simulators. The scheme for the self-consistent coupling of the Boltzmann and Poisson equations is considered for the case of weighted particles. It is shown that the particles survive the successive iteration steps.
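As an illustration of event biasing with weighted particles (the general idea only, not the carrier-transport algorithms derived in the paper), the sketch below estimates a rare transmission probability for exponential free flights; the biased mean free path and sample size are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def transmission_probability(L, lam, lam_biased=None, n=100_000):
    """Estimate P(free flight > L) for exponential free paths of mean `lam`.

    With `lam_biased` set, free flights are sampled from a stretched (biased)
    exponential and each particle carries a compensating weight equal to the
    ratio of the physical density to the biased sampling density -- the basic
    idea behind event biasing with weighted particles.
    """
    lam_s = lam_biased or lam
    flights = rng.exponential(lam_s, n)
    weights = (lam_s / lam) * np.exp(-flights / lam + flights / lam_s)
    transmitted = flights > L
    return np.mean(weights * transmitted)

# Analogue (unbiased) vs. event-biased estimate of a rare transmission event.
L, lam = 10.0, 1.0
print("exact   :", np.exp(-L / lam))
print("analogue:", transmission_probability(L, lam))
print("biased  :", transmission_probability(L, lam, lam_biased=5.0))
```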
Abstract:
Accurate estimates for the fall speed of natural hydrometeors are vital if their evolution in clouds is to be understood quantitatively. In this study, laboratory measurements of the terminal velocity vt for a variety of ice particle models settling in viscous fluids, along with wind-tunnel and field measurements of ice particles settling in air, have been analyzed and compared to common methods of computing vt from the literature. It is observed that while these methods work well for a number of particle types, they fail for particles with open geometries, specifically those particles for which the area ratio Ar is small (Ar is defined as the area of the particle projected normal to the flow divided by the area of a circumscribing disc). In particular, the fall speeds of stellar and dendritic crystals, needles, open bullet rosettes, and low-density aggregates are all overestimated. These particle types are important in many cloud types: aggregates in particular often dominate snow precipitation at the ground and vertically pointing Doppler radar measurements. Based on the laboratory data, a simple modification of previous computational methods, using the area ratio, is proposed. This new method collapses the available drag data onto an approximately universal curve, and the resulting errors in the computed fall speeds relative to the tank data are less than 25% in all cases. Comparison with the (much more scattered) measurements of ice particles falling in air shows strong support for this new method, with the area ratio bias apparently eliminated.
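A hedged sketch of the kind of area-ratio-corrected fall-speed computation described above: it uses a generic Best-number/Reynolds-number relation, and the sqrt(Ar) correction together with the constants delta0 and c0 are assumptions representative of this family of methods, not a verified transcription of the paper's final formula.

```python
import numpy as np

def fall_speed(mass, diameter, area_ratio, corrected=True,
               air_density=1.2, air_viscosity=1.8e-5,
               g=9.81, delta0=8.0, c0=0.35):
    """Terminal fall speed from a Best-number / Reynolds-number relation.

    With corrected=True an area-ratio correction of the kind described in the
    abstract is applied (drag based on sqrt(Ar) rather than Ar).  The exponent
    and the constants delta0, c0 are assumed values for illustration.
    """
    ar_term = np.sqrt(area_ratio) if corrected else area_ratio
    # Best (Davies) number: dimensionless weight term, independent of vt.
    best = 8.0 * mass * g * air_density / (np.pi * air_viscosity**2 * ar_term)
    # Reynolds number from a boundary-layer drag parameterisation.
    reynolds = (delta0**2 / 4.0) * (np.sqrt(1.0 + 4.0 * np.sqrt(best)
                / (delta0**2 * np.sqrt(c0))) - 1.0)**2
    return air_viscosity * reynolds / (air_density * diameter)

# A 1 mm, low-area-ratio (dendrite-like) crystal: the uncorrected formula
# gives a noticeably higher fall speed than the area-ratio-corrected one.
for corrected in (False, True):
    print(corrected, fall_speed(5e-9, 1e-3, 0.2, corrected=corrected))
```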
Abstract:
Almost all research fields in geosciences use numerical models and observations and combine these using data-assimilation techniques. With ever-increasing resolution and complexity, the numerical models tend to be highly nonlinear and also observations become more complicated and their relation to the models more nonlinear. Standard data-assimilation techniques like (ensemble) Kalman filters and variational methods like 4D-Var rely on linearizations and are likely to fail in one way or another. Nonlinear data-assimilation techniques are available, but are only efficient for small-dimensional problems, hampered by the so-called ‘curse of dimensionality’. Here we present a fully nonlinear particle filter that can be applied to higher dimensional problems by exploiting the freedom of the proposal density inherent in particle filtering. The method is illustrated for the three-dimensional Lorenz model using three particles and the much more complex 40-dimensional Lorenz model using 20 particles. By also applying the method to the 1000-dimensional Lorenz model, again using only 20 particles, we demonstrate the strong scale-invariance of the method, leading to the optimistic conjecture that the method is applicable to realistic geophysical problems. Copyright © 2010 Royal Meteorological Society
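A toy one-dimensional illustration of the proposal-density freedom this abstract refers to (not the paper's scheme): particles are drawn from a proposal nudged towards the observation, and the importance weights compensate with the ratio prior × likelihood / proposal; all densities and parameters here are assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def proposal_step(prev_particles, y, process_std=1.0, obs_std=1.0, nudge=0.5):
    """One particle-filter update using a non-trivial proposal density.

    Toy 1-D model: the proposal is the transition prior relaxed towards the
    observation, and the weights restore the correct posterior on average.
    """
    mean_q = prev_particles + nudge * (y - prev_particles)
    particles = rng.normal(mean_q, process_std)
    # Importance weights = p(y|x) * p(x|x_prev) / q(x|x_prev, y), in log space.
    log_w = (norm.logpdf(y, particles, obs_std)
             + norm.logpdf(particles, prev_particles, process_std)
             - norm.logpdf(particles, mean_q, process_std))
    w = np.exp(log_w - log_w.max())
    return particles, w / w.sum()

particles = rng.normal(0.0, 1.0, 20)            # only 20 particles
new_particles, weights = proposal_step(particles, y=3.0)
print("effective sample size:", 1.0 / np.sum(weights**2))
```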
Abstract:
The A-Train constellation of satellites provides a new capability to measure vertical cloud profiles that leads to more detailed information on ice-cloud microphysical properties than has been possible up to now. A variational radar–lidar ice-cloud retrieval algorithm (VarCloud) takes advantage of the complementary nature of the CloudSat radar and Cloud–Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) lidar to provide a seamless retrieval of ice water content, effective radius, and extinction coefficient from the thinnest cirrus (seen only by the lidar) to the thickest ice cloud (penetrated only by the radar). In this paper, several versions of the VarCloud retrieval are compared with the CloudSat standard ice-only retrieval of ice water content, two empirical formulas that derive ice water content from radar reflectivity and temperature, and retrievals of vertically integrated properties from the Moderate Resolution Imaging Spectroradiometer (MODIS) radiometer. The retrieved variables typically agree to within a factor of 2, on average, and most of the differences can be explained by the different microphysical assumptions. For example, the ice water content comparison illustrates the sensitivity of the retrievals to assumed ice particle shape. If ice particles are modeled as oblate spheroids rather than spheres for radar scattering then the retrieved ice water content is reduced by on average 50% in clouds with a reflectivity factor larger than 0 dBZ. VarCloud retrieves optical depths that are on average a factor-of-2 lower than those from MODIS, which can be explained by the different assumptions on particle mass and area; if VarCloud mimics the MODIS assumptions then better agreement is found in effective radius and optical depth is overestimated. MODIS predicts the mean vertically integrated ice water content to be around a factor-of-3 lower than that from VarCloud for the same retrievals, however, because the MODIS algorithm assumes that its retrieved effective radius (which is mostly representative of cloud top) is constant throughout the depth of the cloud. These comparisons highlight the need to refine microphysical assumptions in all retrieval algorithms and also for future studies to compare not only the mean values but also the full probability density function.
Abstract:
The tiger nut tuber of the Cyperus esculentus L. plant is an unusual storage system with similar amounts of starch and lipid. The extraction of its oil by both mechanical pressing and aqueous enzymatic extraction (AEE) was investigated and the resulting products were examined. The effects of the particle size and moisture content of the tuber on the yield of tiger nut oil by pressing were initially studied. Smaller particles were found to enhance oil yields, while a range of moisture contents was observed to favour higher oil yields. When samples were first subjected to high pressures of up to 700 MPa before pressing at 38 MPa, there was no increase in oil yield. Incubating ground samples with a mixture of α-amylase, Alcalase, and Viscozyme (a mixture of cell-wall-degrading enzymes) as a pre-treatment increased the oil yield obtained by pressing, and 90% of the oil was recovered as a result. When aqueous enzymatic extraction was carried out on ground samples, the use of α-amylase, Alcalase, and Celluclast independently improved extraction oil yields, compared to oil extraction without enzymes, by 34.5, 23.4 and 14.7%, respectively. A mixture of the three enzymes further augmented the oil yield, and different operational factors were individually studied for their effects on the process: incubation time, total mixed-enzyme concentration, linear agitation speed, and solid-liquid ratio. The largest oil yields were obtained with a solid-liquid ratio of 1:6, a mixed-enzyme concentration of 1% (w/w) and a 6 h incubation time, although the longer time allowed the formation of an emulsion. Using stationary samples during incubation surprisingly gave the highest oil yields, which was observed to be a result of gravity separation occurring during agitation. Furthermore, the use of high-pressure processing up to 300 MPa as a pre-treatment enhanced oil yields, but additional pressure increments had a detrimental effect. The quality of the oils recovered from both mechanical and aqueous enzymatic extraction, assessed by percentage free fatty acid (% FFA) and peroxide value (PV), reflected the good stability of the oils, with a highest % FFA of 1.8 and PV of 1.7. The fatty acid profiles of all oils also remained unchanged. The level of tocopherols in the oils was enhanced by both enzyme-aided pressing (EAP) and high-pressure processing before AEE. Analysis of the residual meals revealed DP3 and DP4 oligosaccharides present in the EAP samples, but these would require further assessment of their identity and quality.
MAGNETOHYDRODYNAMIC SIMULATIONS OF RECONNECTION AND PARTICLE ACCELERATION: THREE-DIMENSIONAL EFFECTS
Abstract:
Magnetic fields can change their topology through a process known as magnetic reconnection. This process is not only important for understanding the origin and evolution of the large-scale magnetic field, but is also seen as a possibly efficient particle accelerator producing cosmic rays, mainly through the first-order Fermi process. In this work we study the acceleration of test particles inserted in reconnection zones and show that the velocity component parallel to the magnetic field of test particles inserted in magnetohydrodynamic (MHD) domains of reconnection, without including kinetic effects such as pressure anisotropy, the Hall term, or anomalous effects, increases exponentially. Also, the acceleration of the perpendicular component is always possible in such models. We find that within contracting magnetic islands or current sheets the particles accelerate predominantly through the first-order Fermi process, as previously described, while outside the current sheets and islands the particles experience mostly drift acceleration due to magnetic field gradients. Considering two-dimensional MHD models without a guide field, we find that the parallel acceleration stops at some level. This saturation effect is, however, removed in the presence of an out-of-plane guide field or in three-dimensional models. Therefore, we stress the importance of the guide field and of fully three-dimensional studies for a complete understanding of particle acceleration in astrophysical reconnection environments.
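Test-particle studies of this kind typically integrate charged-particle trajectories in prescribed electromagnetic fields. Below is a minimal sketch using the standard Boris scheme with placeholder uniform fields, not the MHD reconnection fields of the paper; all parameters are assumptions for illustration.

```python
import numpy as np

def boris_push(x, v, e_field, b_field, q_over_m, dt, n_steps):
    """Integrate a charged test particle in prescribed E and B fields
    with the standard (non-relativistic) Boris scheme."""
    trajectory = [x.copy()]
    for _ in range(n_steps):
        e = e_field(x)
        b = b_field(x)
        # Half electric kick, magnetic rotation, half electric kick.
        v_minus = v + 0.5 * q_over_m * e * dt
        t = 0.5 * q_over_m * b * dt
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)
        v = v_plus + 0.5 * q_over_m * e * dt
        x = x + v * dt
        trajectory.append(x.copy())
    return np.array(trajectory), v

# Placeholder fields: uniform B along z plus a weak E along y (drift motion).
b_field = lambda x: np.array([0.0, 0.0, 1.0])
e_field = lambda x: np.array([0.0, 0.1, 0.0])
traj, v_final = boris_push(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                           e_field, b_field, q_over_m=1.0, dt=0.05,
                           n_steps=2000)
print("final speed:", np.linalg.norm(v_final))
```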
Abstract:
Knowing how much misalignment is tolerable in a particle accelerator is an important input for the design of these machines. In particle accelerators the beam must be guided and focused using bending magnets and magnetic lenses, respectively. The alignment of the lenses along a transport line aims to ensure that the beam passes through their optical axes and represents a critical point in the assembly of the machine. There are more and more accelerators in the world, many of which are very small machines, yet the existing literature and programs are mostly targeted at large machines; in this work we describe a method suitable for small machines. The method consists of statistically determining the alignment tolerance of a set of lenses. Differently from the methods used in standard simulation codes for particle accelerators, the statistical method we propose makes it possible to evaluate particle losses as a function of the alignment accuracy of the optical elements in a transport line. Results for 100 keV electrons on the 3.5-m-long conforming beam stage of the IFUSP Microtron are presented as an example of use. © 2010 Elsevier B.V. All rights reserved.
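A minimal sketch of the statistical idea described above, assuming a toy beamline of identical thin lenses and drifts with Gaussian transverse misalignments; the aperture, focal length, drift length, and beam parameters are illustrative assumptions, not those of the IFUSP Microtron.

```python
import numpy as np

rng = np.random.default_rng(3)

def loss_fraction(align_rms, n_particles=1000, n_lenses=10,
                  focal_length=0.5, drift=0.35, aperture=3e-3):
    """Monte Carlo estimate of particle loss versus lens alignment accuracy.

    Toy model: a beam is tracked through thin lenses separated by drifts;
    each lens centre is offset transversely by a Gaussian error of rms
    `align_rms`, and particles outside `aperture` are counted as lost.
    """
    x = rng.normal(0.0, 1e-3, n_particles)        # initial positions (m)
    xp = rng.normal(0.0, 1e-3, n_particles)       # initial slopes (rad)
    alive = np.ones(n_particles, dtype=bool)
    for _ in range(n_lenses):
        offset = rng.normal(0.0, align_rms)       # misalignment of this lens
        x = x + drift * xp                        # drift to the lens
        xp = xp - (x - offset) / focal_length     # thin-lens kick about its axis
        alive &= np.abs(x) < aperture             # lost if outside the aperture
    return 1.0 - alive.mean()

for rms in (0.0, 0.1e-3, 0.5e-3, 1e-3):
    print(f"alignment rms {rms*1e3:.1f} mm -> loss {loss_fraction(rms):.2%}")
```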
Abstract:
In a recent paper, the hydrodynamic code NEXSPheRIO was used in conjunction with STAR analysis methods to study two-particle correlations as a function of Δη and Δφ. The various structures observed in the data were reproduced. In this work, we discuss the origin of these structures as well as present new results.
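A bare-bones sketch of how a two-particle correlation in (Δη, Δφ) is constructed from a single toy event; real analyses such as the one above add mixed-event normalisation, efficiency corrections, and centrality selection, none of which is included here.

```python
import numpy as np

rng = np.random.default_rng(4)

def delta_eta_delta_phi(etas, phis, n_eta_bins=25, n_phi_bins=24):
    """Histogram all distinct particle pairs in (Delta eta, Delta phi)."""
    i, j = np.triu_indices(len(etas), k=1)                      # all pairs
    d_eta = etas[i] - etas[j]
    d_phi = (phis[i] - phis[j] + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)
    hist, eta_edges, phi_edges = np.histogram2d(
        d_eta, d_phi, bins=[n_eta_bins, n_phi_bins],
        range=[[-2.0, 2.0], [-np.pi, np.pi]])
    return hist, eta_edges, phi_edges

# Toy event: uncorrelated particles uniform in eta and phi.
etas = rng.uniform(-1.0, 1.0, 500)
phis = rng.uniform(-np.pi, np.pi, 500)
hist, _, _ = delta_eta_delta_phi(etas, phis)
print("pairs histogrammed:", int(hist.sum()))
```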
Abstract:
Two high-performance liquid chromatographic methods for the determination of residual monomer in dental acrylic resins are described. Monomers were detected by their UV absorbance at 230 nm on a Nucleosil® C-18 column (5 µm particle size, 100 Å pore size, 15 × 0.46 cm i.d.). The separation was performed using acetonitrile-water (55:45 v/v) containing 0.01% triethylamine (TEA) for methyl methacrylate and butyl methacrylate, and acetonitrile-water (60:40 v/v) containing 0.01% TEA for isobutyl methacrylate and 1,6-hexanediol dimethacrylate, as mobile phases at a flow rate of 0.8 mL/min. Good linear relationships were obtained in the concentration ranges 5.0-80.0 µg/mL for methyl methacrylate, 10.0-160.0 µg/mL for butyl methacrylate, 50.0-500.0 µg/mL for isobutyl methacrylate and 2.5-180.0 µg/mL for 1,6-hexanediol dimethacrylate. Adequate intra- and inter-day precision and accuracy were observed during the validation process. An extraction procedure to remove residual monomer from the acrylic resins was also established. Residual monomer was obtained from broken specimens of acrylic disks using methanol as the extraction solvent for 2 h in an ice bath. The developed methods and the extraction procedure were applied to dental acrylic resins, tested with or without post-polymerization treatments, and proved to be accurate and precise for the determination of the residual monomer content of the materials evaluated. Copyright © 2005 John Wiley & Sons, Ltd.
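A small sketch of quantification against a linear calibration curve, in the spirit of the linear ranges reported above; the standard concentrations and peak areas below are hypothetical numbers, not data from the paper.

```python
import numpy as np

def quantify(peak_areas_std, concentrations_std, sample_peak_area):
    """Quantify an analyte from a linear peak-area calibration curve."""
    slope, intercept = np.polyfit(concentrations_std, peak_areas_std, 1)
    return (sample_peak_area - intercept) / slope

# Hypothetical methyl methacrylate standards spanning 5-80 ug/mL.
conc = np.array([5.0, 20.0, 40.0, 60.0, 80.0])           # ug/mL
area = np.array([1.1e4, 4.3e4, 8.6e4, 1.29e5, 1.72e5])   # detector counts
print(f"sample concentration: {quantify(area, conc, 6.5e4):.1f} ug/mL")
```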
Abstract:
Purpose: To evaluate the effect of airborne-particle abrasion and mechanico-thermal cycling on the flexural strength of a ceramic fused to cobalt-chromium alloy or gold alloy. Materials and Methods: Metallic bars (n = 120) were made (25 mm × 3 mm × 0.5 mm): 60 with gold alloy and 60 with Co-Cr. At the central area of the bars (8 mm × 3 mm), a layer of opaque ceramic and then two layers of glass ceramic (Vita VM13, Vita Zahnfabrik) were fired onto it (thickness: 1 mm). Ten specimens from each alloy group were randomly allocated to a combination of surface treatment (tungsten bur, or airborne-particle abrasion (APA) with Al₂O₃ at 10 mm or 20 mm away) and mechanico-thermal cycling (no cycling, or mechanically loaded for 20,000 cycles at 10 N in distilled water at 37°C and then thermocycled for 3000 cycles, 5°C to 55°C, dwell time 30 seconds). Specimens that did not undergo mechanico-thermal cycling were stored in water (37°C) for 24 hours. Bond strength was measured using a three-point bend test, according to ISO 9693. After the flexural strength test, failure types were noted. The data were analyzed using three-factor ANOVA and Tukey's test (α = 0.05). Results: There were no significant differences between the flexural bond strengths of the gold and Co-Cr groups (42.64 ± 8.25 and 43.39 ± 10.89 MPa, respectively). The APA 10 mm and 20 mm surface treatments (45.86 ± 9.31 and 46.38 ± 8.89 MPa, respectively) had similar mean flexural strength values, and both had significantly higher bond strength than the tungsten bur treatment (36.81 ± 7.60 MPa). Mechanico-thermal cycling decreased the mean flexural strength values significantly for all six alloy-surface treatment combinations tested when compared to the control groups. The failure type was adhesive at the metal/ceramic interface for specimens surface treated only with the tungsten bur, and mixed for specimens surface treated with APA at 10 and 20 mm. Conclusions: Considering the levels adopted in this study, the alloy did not affect the bond strength; APA with Al₂O₃ at 10 and 20 mm improved the flexural bond strength between the ceramics and alloys used, and the mechanico-thermal cycling of metal-ceramic specimens resulted in a decrease of bond strength.
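A sketch of a three-factor ANOVA followed by Tukey's test on synthetic data laid out like the study's 2 × 3 × 2 design with 10 bars per cell; the strength values are fabricated for illustration only, and the statsmodels-based analysis is an assumption about tooling, not the authors' software.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(5)

# Synthetic flexural-strength data: 2 alloys x 3 surface treatments x
# 2 cycling conditions, 10 bars per cell (120 bars in total).
rows = []
for alloy in ("gold", "CoCr"):
    for treatment in ("bur", "APA10", "APA20"):
        for cycling in ("none", "cycled"):
            base = 44 + (4 if treatment.startswith("APA") else -6) \
                   - (5 if cycling == "cycled" else 0)
            for strength in rng.normal(base, 8, 10):
                rows.append({"alloy": alloy, "treatment": treatment,
                             "cycling": cycling, "strength": strength})
df = pd.DataFrame(rows)

# Three-factor ANOVA with all interactions, then Tukey HSD on treatment.
model = smf.ols("strength ~ C(alloy) * C(treatment) * C(cycling)", data=df).fit()
print(anova_lm(model, typ=2))
print(pairwise_tukeyhsd(df["strength"], df["treatment"], alpha=0.05))
```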