902 results for Grid-spacings
Abstract:
There is increasing evidence to suggest that the presence of mesoscopic heterogeneities constitutes the predominant attenuation mechanism at seismic frequencies. As a consequence, centimeter-scale perturbations of the subsurface physical properties should be taken into account in seismic modeling whenever detailed and accurate responses of the target structures are desired. This is, however, computationally prohibitive, since extremely small grid spacings would be necessary. A convenient way to circumvent this problem is to use an upscaling procedure to replace the heterogeneous porous media by equivalent visco-elastic solids. In this work, we solve Biot's equations of motion to perform numerical simulations of seismic wave propagation through porous media containing mesoscopic heterogeneities. We then use an upscaling procedure to replace the heterogeneous poro-elastic regions by homogeneous equivalent visco-elastic solids and repeat the simulations using visco-elastic equations of motion. We find that, despite the equivalent attenuation behavior of the heterogeneous poro-elastic medium and the equivalent visco-elastic solid, the seismograms may differ because of differing boundary conditions at fluid-solid interfaces, where additional options exist for the poro-elastic case. In particular, we observe that the seismograms agree for closed-pore boundary conditions, but differ significantly for open-pore boundary conditions. This is an interesting result with potentially important implications for wave-equation-based algorithms in exploration geophysics that involve fluid-solid interfaces, such as wave-field decomposition.
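For reference, the two interface options contrasted above are commonly written in the classical Deresiewicz-Skalak form (the notation here is illustrative and need not match the paper's):

\begin{align*}
\text{open pore:}\quad & \tau_{ij} n_j = -P\,n_i, \qquad p = P, \qquad (\dot{u}_i + \dot{w}_i)\,n_i = \dot{u}^{f}_i n_i,\\
\text{closed pore:}\quad & \tau_{ij} n_j = -P\,n_i, \qquad \dot{u}_i n_i = \dot{u}^{f}_i n_i, \qquad \dot{w}_i n_i = 0,
\end{align*}

where $\tau_{ij}$ is the total stress and $p$ the pore pressure on the porous side, $P$ and $\dot{u}^{f}$ the pressure and particle velocity in the free fluid, $\dot{u}$ and $\dot{w}$ the solid and relative fluid velocities, and $n$ the interface normal. In the open-pore case fluid may flow across the interface (pressure continuity), whereas in the closed-pore case the relative flow is sealed off.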
Abstract:
The development of NWP models with grid spacing down to 1 km should produce more realistic forecasts of convective storms. However, greater realism does not necessarily mean more accurate precipitation forecasts. The rapid growth of errors on small scales in conjunction with preexisting errors on larger scales may limit the usefulness of such models. The purpose of this paper is to examine whether improved model resolution alone is able to produce more skillful precipitation forecasts on useful scales, and how the skill varies with spatial scale. A verification method will be described in which skill is determined from a comparison of rainfall forecasts with radar using fractional coverage over different sized areas. The Met Office Unified Model was run with grid spacings of 12, 4, and 1 km for 10 days in which convection occurred during the summers of 2003 and 2004. All forecasts were run from 12-km initial states for a clean comparison. The results show that the 1-km model was the most skillful over all but the smallest scales (approximately <10–15 km). A measure of acceptable skill was defined; this was attained by the 1-km model at scales around 40–70 km, some 10–20 km less than that of the 12-km model. The biggest improvement occurred for heavier, more localized rain, despite it being more difficult to predict. The 4-km model did not improve much on the 12-km model because of the difficulties of representing convection at that resolution, which was accentuated by the spinup from 12-km fields.
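As a rough illustration of the fractional-coverage comparison described above, the following minimal Python sketch computes a fractions-based skill score between a forecast and a radar field over a given neighbourhood size (illustrative only: the function name, threshold convention and exact score definition are assumptions, not taken from the paper).

import numpy as np
from scipy.ndimage import uniform_filter

def fractions_skill(forecast_rain, radar_rain, threshold, window):
    """Neighbourhood (fractional-coverage) skill of a rain forecast against radar.

    forecast_rain, radar_rain : 2-D rain-rate fields on the same grid
    threshold : rain rate defining a 'rain' pixel
    window    : neighbourhood size in grid points (the spatial scale examined)
    """
    # Binary rain / no-rain fields at the chosen threshold
    fcst = (forecast_rain >= threshold).astype(float)
    obs = (radar_rain >= threshold).astype(float)
    # Fraction of rainy pixels within each window-by-window neighbourhood
    f_fcst = uniform_filter(fcst, size=window, mode="constant")
    f_obs = uniform_filter(obs, size=window, mode="constant")
    # Score: 1 means the forecast and observed fractions agree everywhere
    mse = np.mean((f_fcst - f_obs) ** 2)
    mse_ref = np.mean(f_fcst ** 2) + np.mean(f_obs ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

Evaluating such a score over a range of window sizes shows how forecast skill varies with spatial scale, which is the comparison the abstract describes.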
Abstract:
Dynamical downscaling is frequently used to investigate the dynamical variables of extra-tropical cyclones, for example precipitation, using very high-resolution models nested within coarser-resolution models to understand the processes that lead to intense precipitation. It is also used in climate change studies, using long time series to investigate trends in precipitation, or to look at the small-scale dynamical processes for specific case studies. This study investigates some of the problems associated with dynamical downscaling and looks at the optimum configuration to obtain a distribution and intensity of the precipitation field that match observations. This study uses the Met Office Unified Model run in limited-area mode with grid spacings of 12, 4 and 1.5 km, driven by boundary conditions provided by the ECMWF Operational Analysis, to produce high-resolution simulations of the summer 2007 UK flooding events. The numerical weather prediction model is initialised at varying times before the peak precipitation is observed to test the importance of the initialisation and boundary conditions, and how long the simulation can be run for. The results are compared to rain gauge data as verification and show that the model intensities are most similar to observations when the model is initialised 12 hours before the peak precipitation is observed. It was also shown that using non-gridded datasets makes verification more difficult, with the density of observations also affecting the intensities observed. It is concluded that the simulations are able to produce realistic precipitation intensities when driven by the coarser-resolution data.
Abstract:
The ability of the HiGEM climate model to represent high-impact regional precipitation events is investigated in two ways. The first focusses on a case study of extreme regional accumulation of precipitation during the passage of a summer extra-tropical cyclone across southern England on 20 July 2007 that resulted in a national flooding emergency. The climate model is compared with a global Numerical Weather Prediction (NWP) model and higher resolution, nested limited area models. While the climate model does not simulate the timing and location of the cyclone and associated precipitation as accurately as the NWP simulations, the total accumulated precipitation in all models is similar to the rain gauge estimate across England and Wales. The regional accumulation over the event is insensitive to horizontal resolution for grid spacings ranging from 90 km to 4 km. Secondly, the free-running climate model reproduces the statistical distribution of daily precipitation accumulations observed in the England-Wales precipitation record. The model distribution diverges increasingly from the record for longer accumulation periods with a consistent under-representation of more intense multi-day accumulations. This may indicate a lack of low-frequency variability associated with weather regime persistence. Despite this, the overall seasonal and annual precipitation totals from the model are still comparable to those from ERA-Interim.
Abstract:
There are some long-established biases in atmospheric models that originate from the representation of tropical convection. Previously, it has been difficult to separate cause and effect because errors are often the result of a number of interacting biases. Recently, researchers have gained the ability to run multiyear global climate model simulations with grid spacings small enough to switch the convective parameterization off, which permits the convection to develop explicitly. There are clear improvements to the initiation of convective storms and the diurnal cycle of rainfall in the convection-permitting simulations, which enables a new process-study approach to model bias identification. In this study, multiyear global atmosphere-only climate simulations with and without convective parameterization are undertaken with the Met Office Unified Model and are analyzed over the Maritime Continent region, where convergence from sea-breeze circulations is key for convection initiation. The analysis shows that, although the simulation with parameterized convection is able to reproduce the key rain-forming sea-breeze circulation, the parameterization is not able to respond realistically to the circulation. A feedback of errors also occurs: the convective parameterization causes rain to fall in the early morning, which cools and wets the boundary layer, reducing the land–sea temperature contrast and weakening the sea breeze. This is, however, an effect of the convective bias, rather than a cause of it. Improvements to how and when convection schemes trigger convection will improve both the timing and location of tropical rainfall and representation of sea-breeze circulations.
Abstract:
A numerical study of the mass conservation of MAC-type methods for viscoelastic free-surface flows is presented. We use an implicit formulation which allows for larger time steps, and therefore the time-marching schemes used to advect the free-surface marker particles have to be accurate in order to preserve the good mass conservation properties of this methodology. We then present an improvement by using a Runge-Kutta scheme coupled with a local linear extrapolation on the free surface. A thorough study of the viscoelastic impacting-drop problem, for both Oldroyd-B and XPP fluid models, is presented, investigating the influence of time step, grid spacing and other model parameters on the overall mass conservation of the method. Furthermore, an unsteady fountain flow is also simulated to illustrate the low mass conservation error obtained.
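As a minimal sketch of the kind of time-marching step referred to above (a second-order midpoint Runge-Kutta update for the free-surface marker particles; the function names and the velocity interpolation/extrapolation details are placeholders, not the paper's implementation):

def advect_markers_rk2(x, y, dt, interp_velocity):
    """Advance free-surface marker particles one time step with a midpoint Runge-Kutta scheme.

    x, y            : arrays of marker coordinates
    dt              : time step
    interp_velocity : callable (x, y) -> (u, v), e.g. bilinear interpolation of the
                      MAC-grid velocities, extrapolated linearly near the free surface
    """
    u1, v1 = interp_velocity(x, y)            # velocity at the current positions
    xm = x + 0.5 * dt * u1                    # midpoint positions
    ym = y + 0.5 * dt * v1
    u2, v2 = interp_velocity(xm, ym)          # velocity at the midpoint
    return x + dt * u2, y + dt * v2           # second-order accurate update

Compared with a simple forward-Euler update of the markers, a second-order step of this type keeps the advected free surface consistent with the larger time steps allowed by the implicit flow solver, which is what preserves the mass conservation properties discussed in the abstract.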
Abstract:
Sub-grid scale (SGS) models are required in large-eddy simulations (LES) in order to model the influence of the unresolved small scales, i.e. the flow at the smallest scales of turbulence, on the resolved scales. In the following work two SGS models are presented and analyzed in depth in terms of accuracy through several LESs with different spatial resolutions, i.e. grid spacings. The first part of this thesis focuses on the basic theory of turbulence, the governing equations of fluid dynamics and their adaptation to LES. Furthermore, two important SGS models are presented: one is the dynamic eddy-viscosity model (DEVM), developed by Germano et al. (1991), while the other is the explicit algebraic SGS model (EASSM), by Marstorp et al. (2009). In addition, some details about the implementation of the EASSM in a pseudo-spectral Navier-Stokes code (Chevalier et al., 2007) are presented. The performance of the two aforementioned models will be investigated in the following chapters, by means of LES of a channel flow with friction Reynolds numbers from $Re_\tau=590$ up to $Re_\tau=5200$, at relatively coarse resolutions. Data from each simulation will be compared to baseline DNS data. Results have shown that, in contrast to the DEVM, the EASSM has promising potential for flow prediction at high friction Reynolds numbers: the higher the friction Reynolds number, the better the EASSM behaves and the worse the DEVM performs. The better performance of the EASSM is attributed to its ability to capture flow anisotropy at the small scales through a correct formulation of the SGS stresses. Moreover, a considerable reduction in the required computational resources can be achieved using the EASSM compared to the DEVM. Therefore, the EASSM combines accuracy and computational efficiency, implying that it has a clear potential for industrial CFD usage.
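For background, the eddy-viscosity closure underlying models of this kind expresses the deviatoric SGS stress in terms of the resolved strain rate (standard textbook form, not necessarily the exact formulation used in the thesis):

$$\tau_{ij} - \frac{\delta_{ij}}{3}\,\tau_{kk} = -2\,\nu_t\,\bar{S}_{ij}, \qquad \nu_t = (C\Delta)^2\,|\bar{S}|, \qquad |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},$$

where $\Delta$ is the filter width, of the order of the grid spacing. In the dynamic model the coefficient $C$ is computed from the resolved field via the Germano identity rather than prescribed, while the EASSM replaces the scalar eddy viscosity with an explicit algebraic, anisotropic expression for $\tau_{ij}$, which is why it can represent small-scale anisotropy that a purely eddy-viscosity closure cannot.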
Abstract:
Embedded sensitivity analysis has proven to be a useful tool in finding optimum positions of structure reinforcements. However, it was not clear how sensitivities obtained from the embedded sensitivity method were related to the normal mode, or operational mode, associated with the frequency of interest. In this work, this relationship is studied based on a finite element model of a slender sheet-metal piece with preponderant bending modes. It is shown that higher sensitivities always occur at nodes or antinodes of the vibrating system. [DOI: 10.1115/1.4002127]
Abstract:
Scheduling parallel and distributed applications efficiently onto grid environments is a difficult task and a great variety of scheduling heuristics has been developed aiming to address this issue. A successful grid resource allocation depends, among other things, on the quality of the available information about software artifacts and grid resources. In this article, we propose a semantic approach to integrate selection of equivalent resources and selection of equivalent software artifacts to improve the scheduling of resources suitable for a given set of application execution requirements. We also describe a prototype implementation of our approach based on the Integrade grid middleware and experimental results that illustrate its benefits.
Abstract:
Higher order (2,4) FDTD schemes used for numerical solutions of Maxwell's equations are focused on diminishing the truncation errors caused by the Taylor series expansion of the spatial derivatives. These schemes use a larger computational stencil, which generally makes use of two constant coefficients, $C_1$ and $C_2$, for the four-point central-difference operators. In this paper we propose a novel way to diminish these truncation errors, in order to obtain more accurate numerical solutions of Maxwell's equations. For such purpose, we present a method to individually optimize the pair of coefficients, $C_1$ and $C_2$, based on any desired grid size resolution and size of time step. In particular, we are interested in using coarser grid discretizations to be able to simulate electrically large domains. The results of our optimization algorithm show a significant reduction in dispersion error and numerical anisotropy for all modeled grid size resolutions. Numerical simulations of free-space propagation verify the very promising theoretical results. The model is also shown to perform well in more complex, realistic scenarios.
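For context, the four-point central-difference operator referred to above can be written, on a staggered grid with spacing $\Delta x$, as

$$\left.\frac{\partial u}{\partial x}\right|_i \approx \frac{1}{\Delta x}\Big[C_1\big(u_{i+1/2}-u_{i-1/2}\big) + C_2\big(u_{i+3/2}-u_{i-3/2}\big)\Big],$$

where the standard Taylor-based choice $C_1 = 9/8$, $C_2 = -1/24$ yields fourth-order spatial accuracy. The optimization described in the paper instead tunes $C_1$ and $C_2$ for a given grid resolution and time step to reduce dispersion error; the expression above is only the generic operator, not the paper's optimized coefficients.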
Abstract:
In 2007 Associate Professor Jay Hall retires from the University of Queensland after more than 30 years of service to the Australian archaeological community. Celebrated as a gifted teacher and a pioneer of Queensland archaeology, Jay leaves a rich legacy of scholarship and achievement across a wide range of archaeological endeavours. An Archæological Life brings together past and present students, colleagues and friends to celebrate Jay’s contributions, influences and interests.
Abstract:
Computer simulation was used to suggest potential selection strategies for beef cattle breeders with different mixes of clients between two potential markets. The traditional market paid on the basis of carcass weight (CWT), while a new market considered marbling grade in addition to CWT as a basis for payment. Both markets instituted discounts for CWT in excess of 340 kg and light carcasses below 300 kg. Herds were simulated for each price category on the carcass weight grid for the new market. This enabled the establishment of phenotypic relationships among the traits examined [CWT, percent intramuscular fat (IMF), carcass value in the traditional market, carcass value in the new market, and the expected proportion of progeny in elite price cells in the new market pricing grid]. The appropriateness of breeding goals was assessed on the basis of client satisfaction. Satisfaction was determined by the equitable distribution of available stock between markets combined with the assessment of the utility of the animal within the market to which it was assigned. The best goal for breeders with predominantly traditional clients was a CWT in excess of 330 kg, while that for breeders with predominantly new market clients was a CWT of between 310 and 329 kg and with a marbling grade of AAA in the Ontario carcass pricing system. For breeders who wished to satisfy both new and traditional clients, the optimal CWT was 310-329 kg and the optimal marbling grade was AA-AAA. This combination resulted in satisfaction levels of greater than 75% among clients, regardless of the distribution of the clients between the traditional and new marketplaces.
Abstract:
In the Sparse Point Representation (SPR) method the principle is to retain the function data indicated by significant interpolatory wavelet coefficients, which are defined as interpolation errors by means of an interpolating subdivision scheme. Typically, an SPR grid is coarse in smooth regions and refined close to irregularities. Furthermore, the computation of partial derivatives of a function from the information of its SPR content is performed in two steps. The first one is a refinement procedure to extend the SPR by the inclusion of new interpolated point values in a security zone. Then, for points in the refined grid, such derivatives are approximated by uniform finite differences, using a step size proportional to each point's local scale. If required neighboring stencil points are not present in the grid, the corresponding missing point values are approximated from coarser scales using the interpolating subdivision scheme. Using the cubic interpolating subdivision scheme, we demonstrate that such adaptive finite differences can be formulated in terms of a collocation scheme based on the wavelet expansion associated with the SPR. For this purpose, we prove some results concerning the local behavior of such wavelet reconstruction operators, which hold for SPR grids having appropriate structures. This statement implies that the adaptive finite difference scheme and the one using the step size of the finest level produce the same result at SPR grid points. Consequently, in addition to the refinement strategy, our analysis indicates that some care must be taken concerning the grid structure, in order to keep the truncation error under a certain accuracy limit. Illustrative results are presented for 2D Maxwell's equation numerical solutions.
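As a minimal sketch of how interpolatory wavelet coefficients of the kind used above can be obtained with the cubic (Deslauriers-Dubuc) interpolating subdivision scheme on a dyadic 1-D grid (illustrative only; the function name and the thresholding step are assumptions, not the paper's implementation):

import numpy as np

def detail_coefficients(f_fine):
    """Interpolatory wavelet (detail) coefficients on a dyadic 1-D grid.

    f_fine : samples on a uniform fine grid with an odd number of points 2*N + 1.
    Returns the interpolation errors at the interior odd-indexed (fine-only) points,
    i.e. the difference between each sample and its cubic Deslauriers-Dubuc
    prediction from the even-indexed (coarse) points.
    """
    coarse = f_fine[::2]                      # coarse-grid samples f_0 ... f_N
    # Cubic interpolating-subdivision prediction at the midpoints:
    #   f(x_{2k+1}) ~ (-f_{k-1} + 9 f_k + 9 f_{k+1} - f_{k+2}) / 16
    pred = (-coarse[:-3] + 9 * coarse[1:-2] + 9 * coarse[2:-1] - coarse[3:]) / 16.0
    return f_fine[3:-3:2] - pred              # interpolation errors (details)

A point would be retained in the SPR grid when the magnitude of its coefficient exceeds a prescribed threshold; values at discarded points can later be re-predicted by the same subdivision scheme, which is the mechanism the abstract relies on to fill missing stencil values from coarser scales.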