865 results for grid points
Abstract:
By using a combinatorial screening method based on self-consistent field theory, we investigate the equilibrium morphologies of linear ABCBA and H-shaped (AB)_2C(BA)_2 block copolymers in two dimensions. The triangle phase diagrams of both block copolymers are constructed by systematically varying the volume fractions of blocks A, B, and C. In this study, the interaction energies between species A, B, and C are set to be equal. Four different equilibrium morphologies are identified, i.e., the lamellar phase (LAM), the hexagonal lattice phase (HEX), the core-shell hexagonal lattice phase (CSH), and the two interpenetrating tetragonal lattice phase (TET2). For the linear ABCBA block copolymer, reflection symmetry is observed in the phase diagram except at some special grid points, and most of the grid points are occupied by the LAM morphology. However, for the H-shaped (AB)_2C(BA)_2 block copolymer, most of the grid points in the triangle phase diagram are occupied by the CSH morphology, which is ascribed to the different chain architectures of the two block copolymers. These results may help in the design of block copolymers with different microstructures.
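The triangle phase diagram is sampled at compositions (f_A, f_B, f_C) on the simplex f_A + f_B + f_C = 1, with each grid point corresponding to one SCFT calculation. A minimal sketch of enumerating such composition grid points (the step size of 0.1 is an illustrative assumption, not the authors' actual resolution):

```python
# Enumerate composition grid points on the triangle (simplex)
# f_A + f_B + f_C = 1; each point is the input to one SCFT run.
step = 0.1
n = round(1 / step)

grid_points = []
for i in range(n + 1):          # i counts units of f_A
    for j in range(n + 1 - i):  # j counts units of f_B
        fA = i * step
        fB = j * step
        fC = 1.0 - fA - fB      # the remainder fixes f_C
        grid_points.append((fA, fB, round(fC, 10)))

print(len(grid_points))  # 66 grid points for step = 0.1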
Abstract:
When used to determine the total electron content (TEC), which may be the most important ionospheric parameter, worldwide GPS observations have brought a revolutionary change to ionospheric science. There are three steps in the data processing to retrieve GPS TEC: (1) estimating slant TEC from the measurements of GPS signals; (2) mapping the slant TEC into vertical TEC; and (3) interpolating the vertical TEC onto grid points. In this dissertation we focus our attention on the second step, the theory and method of mapping slant TEC into vertical TEC. This is conventionally done by multiplying the slant TEC by a mapping function, which is usually determined from some model of the electron density profile. Study of the vertical TEC mapping function is of significance in GPS TEC measurement. This paper first briefly reviews the three steps in the GPS TEC mapping process. Then we compare the vertical TEC mapping functions calculated from the electron density profiles of ionospheric models with those retrieved from worldwide GPS TEC observations. We also perform a statistical analysis of the observational mapping functions. The main work and results are as follows: 1. We calculated the vertical TEC mapping functions for two simple models of the ionospheric electron density profile, the single-layer model (SLM) and the Chapman model, and discussed how the ionospheric height modulates the mapping functions. In the case of the SLM, we discuss the control of the mapping function by the ionospheric altitude, i.e., the layer height h_ipp. We find that the mapping function decreases rapidly as h_ipp increases. For the Chapman model we also study the control of the mapping function both by the ionospheric altitude, indicated by the peak electron density height hmF2, and by the scale height H, which represents the thickness of the ionosphere. It is likewise found that the mapping function decreases rapidly as hmF2 increases, and that it also decreases as H increases. 2. We then estimate the mapping functions from GPS observations and compare them with those calculated from the electron density models. We first propose a new method to estimate the mapping functions from GPS TEC data. This method is then used to retrieve the observational mapping function from both the slant TEC (TECS) provided by the International GPS Service (IGS) and the vertical TEC provided by the JPL Global Ionospheric Maps (GIMs). We then compare the observational mapping functions with those calculated from the SLM and Chapman electron density models. We find that the values of the observational mapping functions are much smaller than those of the model mapping functions when the zenith angle is large enough. We attribute this to the effect of the plasmasphere, which lies above about 1000 km. 3. We statistically analyze the observational mapping functions and reveal their climatological changes. Observational mapping functions during 1999-2007 are used in our statistics. The main results are as follows. (1) The observational mapping functions decrease markedly with decreasing solar activity, as represented by the F10.7 index. (2) In the annual variations of the observational mapping functions, a semiannual component is found at low latitudes, and remarkable seasonal variations at mid and high latitudes.
(3) The diurnal variation of the observational mapping functions is such that they are large in the daytime and small at night, becoming extremely small in the early morning before sunrise. (4) The observational mapping functions change with latitude, being smaller at lower latitudes and larger at higher latitudes. All of the above variations of the observational mapping functions are explained by the existence of the plasmasphere, which changes more slowly with time and more rapidly with latitude than the ionosphere does. In summary, our study of the vertical TEC mapping function implies that the ionospheric height has a modulating effect on the mapping function. We are the first to propose the concept of 'observational mapping functions', and we provide a new method to calculate them. This is important for improving TEC mapping. It may also be possible to retrieve plasmaspheric information from GPS observations.
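For reference, the single-layer model mapping function has the standard closed form M(z) = [1 − (R_E sin z / (R_E + h_ipp))²]^(−1/2), i.e. M(z) = 1/cos z′ with sin z′ = R_E sin z / (R_E + h_ipp). A minimal sketch illustrating the abstract's point that M decreases as the layer height increases (the zenith angle and heights below are illustrative values):

```python
import numpy as np

R_E = 6371.0  # mean Earth radius, km

def slm_mapping_function(z_deg, h_ipp_km):
    """Single-layer model mapping function M(z) = 1/cos(z'),
    where sin(z') = R_E*sin(z)/(R_E + h_ipp)."""
    z = np.radians(z_deg)
    sin_zp = R_E * np.sin(z) / (R_E + h_ipp_km)
    return 1.0 / np.sqrt(1.0 - sin_zp**2)

# M decreases as the assumed layer height increases (zenith angle 70 deg)
for h in (350.0, 450.0, 550.0):
    print(h, slm_mapping_function(70.0, h))
```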
Abstract:
A 3D model of a melt pool created by a moving arc-type heat source has been developed. The model solves the equations of turbulent fluid flow, heat transfer and electromagnetic field to demonstrate the flow behaviour and phase change in the pool. The coupled effects of buoyancy, capillary (Marangoni) and electromagnetic (Lorentz) forces are included within an unstructured finite-volume mesh environment. The movement of the welding arc along the workpiece is accomplished via a moving co-ordinate system. Additionally, a method enabling movement of the weld pool surface by fluid convection is presented, whereby the mesh in the liquid region is allowed to move along with the free surface. The surface grid lines move to restore equilibrium at the end of each computational time step, and the interior grid points then adjust following the solution of a Laplace equation.
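The interior-node adjustment via a Laplace equation can be pictured as elliptic mesh smoothing: each interior node relaxes toward the average of its neighbours while the surface nodes stay where the free-surface update put them. A minimal structured-grid sketch (a Jacobi iteration; the grid size, perturbation, and iteration count are illustrative assumptions, not the paper's unstructured scheme):

```python
import numpy as np

def laplace_smooth(x, y, iters=500):
    """Relax interior grid-node coordinates toward the solution of
    Laplace's equation; boundary rows/columns are held fixed."""
    for _ in range(iters):
        x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1] +
                                x[1:-1, 2:] + x[1:-1, :-2])
        y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1] +
                                y[1:-1, 2:] + y[1:-1, :-2])
    return x, y

# Illustrative use: perturb the top boundary (the "free surface"),
# then let the interior grid points re-equilibrate.
nx, ny = 21, 11
x, y = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny), indexing="ij")
y[:, -1] += 0.05 * np.sin(np.pi * x[:, -1])  # moved surface grid line
x, y = laplace_smooth(x, y)
```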
Abstract:
The results of a study aimed at determining the most important experimental parameters for automated, quantitative analysis of solid dosage form pharmaceuticals (seized and model 'ecstasy' tablets) are reported. Data obtained with a macro-Raman spectrometer were complemented by micro-Raman measurements, which gave information on particle size and provided excellent data for developing statistical models of the sampling errors associated with collecting data as a series of grid points on the tablets' surface. Spectra recorded at single points on the surface of seized MDMA-caffeine-lactose tablets with a Raman microscope (λ_ex = 785 nm, 3 µm diameter spot) were typically dominated by one or other of the three components, consistent with Raman mapping data which showed the drug and caffeine microcrystals were ca. 40 µm in diameter. Spectra collected with a microscope from eight points on a 200 µm grid were combined, and in the resultant spectra the average value of the Raman band intensity ratio used to quantify the MDMA:caffeine ratio, μ_r, was 1.19 with an unacceptably high standard deviation, σ_r, of 1.20. In contrast, with a conventional macro-Raman system (150 µm spot diameter), combined eight-grid-point data gave μ_r = 1.47 with σ_r = 0.16. A simple statistical model which could be used to predict σ_r under the various conditions used was developed. The model showed that the decrease in σ_r on moving to a 150 µm spot was too large to be due entirely to the increased spot diameter but was consistent with the increased sampling volume that arose from a combination of the larger spot size and depth of focus in the macroscopic system. With the macro-Raman system, combining 64 grid points (0.5 mm spacing and 1-2 s accumulation per point) to give a single averaged spectrum for a tablet was found to be a practical balance between minimizing sampling errors and keeping overhead times at an acceptable level. The effectiveness of this sampling strategy was also tested by quantitative analysis of a set of model ecstasy tablets prepared from MDEA-sorbitol (0-30% by mass MDEA). A simple univariate calibration model of averaged 64-point data had R² = 0.998 and an r.m.s. standard error of prediction of 1.1%, whereas data obtained by sampling just four points on the same tablet showed deviations from the calibration of up to 5%.
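The benefit of averaging many grid points can be illustrated with a toy Monte Carlo. This is an illustrative model only, not the statistical model developed in the paper: each spot is assumed to cover a handful of microcrystals, each either drug or diluent, and the band-intensity ratio is formed from the spectrum combined over N grid points.

```python
import numpy as np

rng = np.random.default_rng(0)

def ratio_sd(n_points, n_crystals_per_spot, n_tablets=5000, true_frac=0.55):
    """Toy model: each spot samples a few ~40 um crystals, each MDMA or
    caffeine; the intensity ratio is formed from the spectrum averaged
    over n_points grid points per tablet."""
    counts = rng.binomial(n_crystals_per_spot, true_frac,
                          size=(n_tablets, n_points))
    # fraction of MDMA in the combined spectrum (clipped to avoid 0/0)
    frac = np.clip(counts.mean(axis=1) / n_crystals_per_spot, 0.01, 0.99)
    ratio = frac / (1.0 - frac)  # stands in for the MDMA:caffeine band ratio
    return ratio.std()

# More grid points (or more crystals per spot, i.e. a larger sampling
# volume) shrink the spread of the measured ratio roughly as 1/sqrt(N).
for n in (8, 64):
    print(n, ratio_sd(n, n_crystals_per_spot=4))
```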
Abstract:
In this thesis, we present a new smoothed particle hydrodynamics (SPH) method for solving the incompressible Navier-Stokes equations, even in the presence of singular forces. Singular source terms are treated in a manner similar to that found in the Immersed Boundary (IB) method of Peskin (2002) or the method of regularized Stokeslets (Cortez, 2001). In our numerical scheme, we implement a second-order pressure-free projection method inspired by Kim and Moin (1985). This scheme completely avoids the difficulties that can be encountered when prescribing Neumann boundary conditions on the pressure. We present two variants of this approach: one Lagrangian, which is commonly used, and one Eulerian, in which we simply regard the SPH particles as quadrature points where the fluid properties are computed, so that these points can be kept fixed in time. Our SPH method is first tested on the two-dimensional Poiseuille flow problem between two infinite plates, and we carry out a detailed error analysis of the computations. For this problem, the results are similar whether the SPH particles are free to move or kept fixed. We also treat the problem of the dynamics of a membrane immersed in a viscous, incompressible fluid with our SPH method. The membrane is represented by a cubic spline along which the membrane tension is computed and transmitted to the surrounding fluid. The Navier-Stokes equations, with a singular force arising from the membrane, are then solved to determine the velocity of the fluid in which the membrane is immersed. The fluid velocity thus obtained is interpolated onto the interface in order to determine its displacement. We discuss the advantages of keeping the SPH particles fixed instead of letting them move freely. We then apply our SPH method to the simulation of confined flows of non-dilute polymer solutions with hydrodynamic interaction and excluded-volume forces. The starting point of the algorithm is the system of coupled Langevin equations for polymer and solvent (CLEPS) (see, for example, Oono and Freed (1981) and Öttinger and Rabin (1989)), describing, in the present case, the microscopic dynamics of a flowing polymer solution with a bead-spring representation of the macromolecules. Numerical tests on some two-dimensional channel flows reveal that the use of the second-order projection method coupled with fixed SPH quadrature points leads to second-order convergence of the velocity and to convergence of order approximately two for the pressure, provided the solution is sufficiently smooth. For large-scale computations with dumbbells and bead-spring chains, an appropriate choice of the number of SPH particles as a function of the number of beads N shows that, in the absence of excluded-volume forces, the cost of our algorithm is O(N). Finally, we initiate three-dimensional computations with our SPH model.
In this context, we solve the three-dimensional Poiseuille flow problem between two infinite parallel plates and the Poiseuille flow problem in an infinitely long duct of rectangular cross-section. In addition, we simulate three-dimensional confined flows of non-dilute polymer solutions between two infinite plates, with hydrodynamic interaction and excluded-volume forces.
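In the Eulerian variant, the SPH particles act purely as fixed quadrature points, and field values are reconstructed by kernel interpolation, f(x) ≈ Σ_j (m_j/ρ_j) f_j W(x − x_j, h). A minimal sketch with the standard 2D cubic-spline kernel (the kernel choice, smoothing length, and uniform quadrature weights are assumptions for illustration, not the thesis's exact setup):

```python
import numpy as np

def cubic_spline_w(r, h):
    """Standard 2D cubic-spline SPH kernel (Monaghan), support radius 2h."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h**2)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

# Fixed quadrature points on a uniform grid (the Eulerian variant):
h = 0.05
xs = np.linspace(0, 1, 21)
X, Y = np.meshgrid(xs, xs)
pts = np.column_stack([X.ravel(), Y.ravel()])
f_j = np.sin(np.pi * pts[:, 0])   # field sampled at the particles
V_j = (xs[1] - xs[0])**2          # uniform quadrature weight m_j/rho_j

x_eval = np.array([0.5, 0.5])
r = np.linalg.norm(pts - x_eval, axis=1)
f_interp = np.sum(V_j * f_j * cubic_spline_w(r, h))
print(f_interp)  # close to sin(pi * 0.5) = 1
```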
Accurate Hartree-Fock-Slater calculations on small diatomic molecules with the finite-element method
Abstract:
We report on the self-consistent field solution of the Hartree-Fock-Slater equations using the finite-element method for the three small diatomic molecules N_2, BH and CO as examples. Not only is the quality of the results better by two orders of magnitude than that of the fully numerical finite-difference method of Laaksonen et al., but the method also requires a smaller number of grid points.
Abstract:
We report on the solution of the Hartree-Fock equations for the ground state of the H_2 molecule using the finite element method. Both the Hartree-Fock and the Poisson equations are solved with this method to an accuracy of 10^-8 using only 26 x 11 grid points in two dimensions. A 41 x 16 grid gives a new Hartree-Fock benchmark to ten-figure accuracy.
Abstract:
We present the finite-element method (FEM) in its application to solving quantum mechanical problems for diatomic molecules. Results for Hartree-Fock calculations of H_2 and Hartree-Fock-Slater calculations of molecules like N_2 and CO have been obtained. The accuracy achieved with fewer than 5000 grid points for the total energies of these systems is 10^-8 a.u., which is demonstrated for N_2.
Abstract:
We present the finite-element method in its application to solving quantum-mechanical problems for diatomic molecules. Results for Hartree-Fock calculations of H_2 and Hartree-Fock-Slater calculations for molecules like N_2 and CO are presented. The accuracy achieved with fewer than 5000 grid points for the total energies of these systems is 10^-8 a.u., which is about two orders of magnitude better than the accuracy of any other available method.
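As a flavour of the finite-element approach used in these papers, here is a minimal 1D linear-element solve of the model boundary-value problem −u″ = f with u(0) = u(1) = 0. The molecular calculations above use far more elaborate two-dimensional, high-order elements; this sketch only shows the assemble-and-solve pattern:

```python
import numpy as np

# Linear finite elements for -u'' = f on [0, 1], u(0) = u(1) = 0.
n = 50                          # number of elements
x = np.linspace(0.0, 1.0, n + 1)
h = x[1] - x[0]

f = lambda t: np.pi**2 * np.sin(np.pi * t)   # exact solution: sin(pi x)

# Tridiagonal stiffness matrix over the interior nodes
main = 2.0 / h * np.ones(n - 1)
off = -1.0 / h * np.ones(n - 2)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Lumped load vector: integral of f against each hat function ~ h * f(x_i)
b = h * f(x[1:-1])

u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(A, b)
print(np.max(np.abs(u - np.sin(np.pi * x))))  # small discretization error
```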
Abstract:
In the earth sciences, data are commonly cast on complex grids in order to model irregular domains such as coastlines, or to evenly distribute grid points over the globe. It is common for a scientist to wish to re-cast such data onto a grid that is more amenable to manipulation, visualization, or comparison with other data sources. The complexity of the grids presents a significant technical difficulty to the regridding process. In particular, the regridding of complex grids may suffer from severe performance issues, in the worst case scaling with the product of the sizes of the source and destination grids. We present a mechanism for the fast regridding of such datasets, based upon the construction of a spatial index that allows fast searching of the source grid. We discover that the most efficient spatial index under test (in terms of memory usage and query time) is a simple look-up table. A kd-tree implementation was found to be faster to build and to give similar query performance at the expense of a larger memory footprint. Using our approach, we demonstrate that regridding of complex data may proceed at speeds sufficient to permit regridding on-the-fly in an interactive visualization application, or in a Web Map Service implementation. For large datasets with complex grids the new mechanism is shown to significantly outperform algorithms used in many scientific visualization packages.
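A minimal sketch of the kd-tree variant using scipy: nearest-neighbour regridding from a curvilinear source grid to a regular destination grid (the warped coordinates and field below are illustrative; a production implementation would also use a metric better suited to spherical lon/lat coordinates, but the pattern is the same):

```python
import numpy as np
from scipy.spatial import cKDTree

# Source: a curvilinear grid, flattened to (npoints, 2) lon/lat pairs.
ii, jj = np.meshgrid(np.arange(100), np.arange(80), indexing="ij")
src_lon = ii * 0.5 + 0.1 * jj           # illustrative warped coordinates
src_lat = jj * 0.5 + 0.05 * ii
src_data = np.sin(np.radians(src_lat))  # some field on the source grid

# Build the spatial index over the source grid once...
tree = cKDTree(np.column_stack([src_lon.ravel(), src_lat.ravel()]))

# ...then query it for every destination grid point.
dst_lon, dst_lat = np.meshgrid(np.linspace(5, 45, 200),
                               np.linspace(5, 40, 150))
_, idx = tree.query(np.column_stack([dst_lon.ravel(), dst_lat.ravel()]))
dst_data = src_data.ravel()[idx].reshape(dst_lon.shape)
```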
Abstract:
With the introduction of new observing systems based on asynoptic observations, the analysis problem has changed in character. In the near future we may expect that a considerable part of meteorological observations will be unevenly distributed in four dimensions, i.e. three dimensions in space and one in time. The term analysis, or objective analysis in meteorology, means the process of interpolating meteorological observations from unevenly distributed locations to a network of regularly spaced grid points. Necessitated by the requirement of numerical weather prediction models to solve the governing finite difference equations on such a grid lattice, objective analysis is a three-dimensional (or, mostly, two-dimensional) interpolation technique. As a consequence of the structure of the conventional synoptic network, with its separated data-sparse and data-dense areas, four-dimensional analysis has in fact been intensively used for many years. Weather services have thus based their analyses not only on synoptic data at the time of the analysis and on climatology, but also on the fields predicted from the previous observation hour and valid at the time of the analysis. The inclusion of the time dimension in objective analysis will be called four-dimensional data assimilation. From one point of view it seems possible to apply the conventional technique to the new data sources by simply reducing the time interval in the analysis-forecasting cycle. This could in fact be justified for the conventional observations as well. We have fairly good coverage of surface observations 8 times a day, and several upper-air stations make radiosonde and radiowind observations 4 times a day. If we use a 3-hour step in the analysis-forecasting cycle instead of the more usual 12 hours, we may without any difficulty treat all observations as synoptic. No observation would then be more than 90 minutes off time, and the observations even during strong transient motion would fall within a horizontal mesh of 500 km × 500 km.
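A classical instance of such an interpolation is Cressman-type successive correction, where each grid point receives a distance-weighted mean of nearby observation increments. A minimal single-pass sketch (the influence radius, observations, and background field are illustrative; for simplicity the background is taken at the grid point rather than interpolated to each observation):

```python
import numpy as np

def cressman_pass(obs_xy, obs_val, grid_x, grid_y, background, R=500.0):
    """One Cressman correction pass: weight w = (R^2 - r^2)/(R^2 + r^2)
    for observations within radius R (km) of each grid point."""
    analysis = background.copy()
    for i in range(grid_x.shape[0]):
        for j in range(grid_x.shape[1]):
            dx = obs_xy[:, 0] - grid_x[i, j]
            dy = obs_xy[:, 1] - grid_y[i, j]
            r2 = dx**2 + dy**2
            mask = r2 < R**2
            if mask.any():
                w = (R**2 - r2[mask]) / (R**2 + r2[mask])
                incr = obs_val[mask] - background[i, j]
                analysis[i, j] += np.sum(w * incr) / np.sum(w)
    return analysis

# Illustrative use: scattered observations analysed onto a 500 km mesh.
rng = np.random.default_rng(1)
obs_xy = rng.uniform(0, 3000, size=(40, 2))                     # km
obs_val = 280.0 + 0.01 * obs_xy[:, 0] + rng.normal(0, 0.5, 40)  # e.g. T (K)
gx, gy = np.meshgrid(np.arange(0, 3001, 500.0), np.arange(0, 3001, 500.0))
background = np.full_like(gx, 280.0)
analysis = cressman_pass(obs_xy, obs_val, gx, gy, background)
```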
Abstract:
A mesoscale meteorological model (FOOT3DK) is coupled with a gas exchange model to simulate surface fluxes of CO2 and H2O under field conditions. The gas exchange model consists of a C3 single-leaf photosynthesis sub-model and an extended big-leaf (sun/shade) sub-model that divides the canopy into sunlit and shaded fractions. Simulated CO2 fluxes of the stand-alone version of the gas exchange model correspond well to eddy-covariance measurements at a test site in a rural area in the west of Germany. The coupled FOOT3DK/gas exchange model is validated for the diurnal cycle at individual grid points, and delivers realistic fluxes with respect to their order of magnitude and general daily course. Compared to the Jarvis-based big-leaf scheme, simulations of latent heat fluxes with a photosynthesis-based scheme for stomatal conductance are more realistic. As expected, flux averages are strongly influenced by the underlying land cover. While the simulated net ecosystem exchange is highly correlated with leaf area index, this correlation is much weaker for the latent heat flux. Photosynthetic CO2 uptake is associated with transpirational water loss via the stomata, and the resulting opposing surface fluxes of CO2 and H2O are reproduced with the model approach. Over vegetated surfaces it is shown that coupling a photosynthesis-based gas exchange model with the land-surface scheme of a mesoscale model results in more realistic simulated latent heat fluxes.
Abstract:
The occurrence of wind storms in Central Europe is investigated with respect to large-scale atmospheric flow and local wind speeds in the investigation area. Two different methods of storm identification are applied for Central Europe as the target region: one based on characteristics of large-scale flow (circulation weather types, CWT) and the other on the occurrence of extreme wind speeds. The identified events are examined with respect to the NAO phases and CWTs under which they occur. Pressure patterns, wind speeds and cyclone tracks are investigated for storms assigned to different CWTs. Investigations are based on ERA40 reanalysis data. It is shown that about 80% of the storm days in Central Europe are connected with westerly flow and that Central European storm events primarily occur during a moderately positive NAO phase, while strongly positive NAO phases (6.4% of all days) account for more than 20% of the storms. A storm occurs over Central Europe during about 10% of the days with a strong positive NAO index. The most frequent pathway of cyclone systems associated with storms over Central Europe leads from the North Atlantic over the British Isles, North Sea and southern Scandinavia into the Baltic Sea. The mean intensity of the systems typically reaches its maximum near the British Isles. Differences between the characteristics for storms identified from the CWT identification procedure (gale days, based on MSLP fields) and those from extreme winds at Central European grid points are small, even though only 70% of the storm days agree. While most storms occur during westerly flow situations, specific characteristics of storms during the other CWTs are also considered. Copyright © 2009 Royal Meteorological Society
Abstract:
In addition to CO2, the climate impact of aviation is strongly influenced by non-CO2 emissions, such as nitrogen oxides, influencing ozone and methane, and water vapour, which can lead to the formation of persistent contrails in ice-supersaturated regions. Because these non-CO2 emission effects are characterised by a short lifetime, their climate impact largely depends on emission location and time; that is to say, emissions in certain locations (or times) can lead to a greater climate impact (even on the global average) than the same emission in other locations (or times). Avoiding these climate-sensitive regions might thus be beneficial to climate. Here, we describe a modelling chain for investigating this climate impact mitigation option. This modelling chain forms a multi-step modelling approach, starting with the simulation of the fate of emissions released at a certain location and time (time-region grid points). This is performed with the chemistry–climate model EMAC, extended via the two submodels AIRTRAC (V1.0) and CONTRAIL (V1.0), which describe the contribution of emissions to the composition of the atmosphere and to contrail formation, respectively. The impact of emissions from the large number of time-region grid points is efficiently calculated by applying a Lagrangian scheme. EMAC also includes the calculation of radiative impacts, which are, in a second step, the input to climate metric formulas describing the global climate impact of the emission at each time-region grid point. The result of the modelling chain comprises a four-dimensional data set in space and time, which we call climate cost functions and which describes the global climate impact of an emission at each grid point and each point in time. In a third step, these climate cost functions are used in an air traffic simulator (SAAM) coupled to an emission tool (AEM) to optimise aircraft trajectories for the North Atlantic region. Here, we describe the details of this new modelling approach and show some example results. A number of sensitivity analyses are performed to motivate the settings of individual parameters. A stepwise sanity check of the results of the modelling chain is undertaken to demonstrate the plausibility of the climate cost functions.
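Once the climate cost functions exist as a gridded four-dimensional data set, evaluating a candidate trajectory reduces to interpolating the cost function at each emission point and weighting by the emission there. A minimal sketch of that lookup step (the array names, axes, units, and random values are hypothetical, not the EMAC/SAAM/AEM interfaces):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical 4D climate cost function: impact per kg of fuel burned,
# gridded over (time, pressure level, latitude, longitude).
t_ax = np.arange(0, 24, 6.0)            # hours
p_ax = np.array([200.0, 250.0, 300.0])  # hPa
lat_ax = np.arange(30.0, 71.0, 10.0)
lon_ax = np.arange(-60.0, 1.0, 10.0)
ccf = np.random.default_rng(2).random((t_ax.size, p_ax.size,
                                       lat_ax.size, lon_ax.size))

ccf_interp = RegularGridInterpolator((t_ax, p_ax, lat_ax, lon_ax), ccf)

def trajectory_climate_impact(segments):
    """segments: iterable of (time, pressure, lat, lon, fuel_kg) tuples;
    returns the summed climate impact along the trajectory."""
    pts = np.array([s[:4] for s in segments])
    fuel = np.array([s[4] for s in segments])
    return float(np.sum(ccf_interp(pts) * fuel))

# Example: score one candidate North Atlantic route segment list.
route = [(6.0, 250.0, 50.0, -50.0, 900.0), (7.0, 250.0, 52.0, -40.0, 880.0)]
print(trajectory_climate_impact(route))
```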
Abstract:
Climate change due to anthropogenic greenhouse gas emissions is expected to increase the frequency and intensity of precipitation events, which is likely to affect the probability of flooding into the future. In this paper we use river flow simulations from nine global hydrology and land surface models to explore uncertainties in the potential impacts of climate change on flood hazard at global scale. As an indicator of flood hazard we looked at changes in the 30-y return level of 5-d average peak flows under representative concentration pathway RCP8.5 at the end of this century. Not everywhere does climate change result in an increase in flood hazard: decreases in the magnitude and frequency of the 30-y return level of river flow occur at roughly one-third (20-45%) of the global land grid points, particularly in areas where the hydrograph is dominated by the snowmelt flood peak in spring. In most model experiments, however, an increase in flooding frequency was found in more than half of the grid points. The current 30-y flood peak is projected to occur in more than 1 in 5 y across 5-30% of land grid points. The large-scale patterns of change are remarkably consistent among impact models and even the driving climate models, but at local scale and in individual river basins there can be disagreement even on the sign of change, indicating large modeling uncertainty which needs to be taken into account in local adaptation studies.
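In practice, a T-year return level at a grid point is commonly estimated by fitting an extreme-value distribution to the series of annual peak flows and reading off the (1 − 1/T) quantile. A minimal sketch with a GEV fit (the data are synthetic, and the GEV estimator is an illustrative assumption, not necessarily the paper's method):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)

# Synthetic annual maxima of 5-d average flow at one grid point (m^3/s)
annual_peaks = genextreme.rvs(c=-0.1, loc=1000.0, scale=200.0,
                              size=50, random_state=rng)

# Fit a GEV and read off the 30-y return level as the (1 - 1/30) quantile
c, loc, scale = genextreme.fit(annual_peaks)
T = 30.0
return_level_30y = genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
print(return_level_30y)
```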