80 results for non-uniform scale perturbation finite difference scheme


Relevance: 100.00%

Abstract:

Techniques for modelling urban microclimates and urban block surface temperatures are sought by urban planners and architects for strategic decisions at the early design stages. This paper introduces a simplified mathematical model for urban simulations (UMsim), covering urban surface temperatures and microclimates. The nodal network model has been developed by integrating a coupled thermal and airflow model. Direct solar radiation, diffuse radiation, reflected radiation, long-wave radiation, heat convection in air, and heat transfer in the exterior walls and ground within the complex are taken into account. The relevant equations have been solved using the finite difference method in Matlab. Comparisons have been conducted between the simulation output and data from an urban experimental study carried out in a real architectural complex on the campus of Chongqing University, China, in July 2005 and January 2006. The results show satisfactory agreement between the two data sets. UMsim can be used to simulate microclimates, in particular the surface temperatures of urban blocks, and can therefore be used to assess the impact of urban surface properties on urban microclimates. UMsim will be able to produce robust data and images of urban environments for sustainable urban design.
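The heat-transfer part of such a nodal model reduces, in its simplest form, to a finite-difference solution of the conduction equation through a wall layer. The sketch below is a minimal illustration, not the UMsim model itself; the material properties, boundary temperatures and grid are all assumed values chosen for the example.

```python
import numpy as np

# Assumed material properties for one exterior wall layer (illustrative only).
alpha = 7e-7               # thermal diffusivity [m^2/s]
L = 0.2                    # wall thickness [m]
nx = 21                    # grid points across the wall
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha   # time step within the explicit stability limit (r = 0.4 < 0.5)

T = np.full(nx, 20.0)      # initial wall temperature [deg C]
T_out, T_in = 35.0, 24.0   # assumed outdoor / indoor surface temperatures [deg C]

for _ in range(2000):
    T[0], T[-1] = T_out, T_in   # Dirichlet boundary conditions
    # explicit central-difference update of the interior nodes
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

print(T.round(1))          # profile relaxes towards the linear steady state
```

After roughly two diffusion times the profile is close to the linear steady state between the two boundary temperatures, which is the behaviour an explicit scheme of this kind should reproduce.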

Relevance: 100.00%

Abstract:

Sea-level rise is an important aspect of climate change because of its impact on society and ecosystems. Here we present an intercomparison of results from ten coupled atmosphere-ocean general circulation models (AOGCMs) for sea-level changes simulated for the twentieth century and projected for the twenty-first century in experiments following scenario IS92a for greenhouse gases and sulphate aerosols. The model results suggest that the rate of sea-level rise due to thermal expansion of sea water increased during the twentieth century, but the small set of tide gauges with long records might not be adequate to detect this acceleration. The rate of sea-level rise due to thermal expansion continues to increase throughout the twenty-first century, and the projected total is consequently larger than in the twentieth century; for 1990-2090 it amounts to 0.20-0.37 m. This wide range results from systematic uncertainty in the modelling of climate change and of heat uptake by the ocean. The AOGCMs agree that sea-level rise is expected to be geographically non-uniform, with some regions experiencing as much as twice the global average and others practically zero, but they do not agree on the geographical pattern. This lack of agreement indicates that we cannot currently have confidence in projections of local sea-level changes, and reveals a need for detailed analysis and intercomparison to understand and reduce the disagreements.

Relevance: 100.00%

Abstract:

This paper presents a completely new design of a bogie frame made of glass-fibre-reinforced composites and its performance under various loading conditions as predicted by finite element analysis. The bogie consists of two frames, one placed on top of the other, and two axle ties connecting the axles. Each frame consists of two side arms with a transom between them. The top frame is thinner, more compliant, and of higher curvature than the bottom frame. Variable vertical stiffness can thus be achieved before and after contact between the two frames at the central section of the bogie, to cope with different load levels. Finite element analysis played a very important role in the design of this structure. The stiffness and stress levels of the full-scale bogie under various loading conditions have been predicted using Marc from MSC Software. To verify the finite element analysis (FEA) models, a fifth-scale prototype of the bogie was made and tested under quasi-static loading conditions. Results of the fifth-scale tests were used to fine-tune details such as contact and friction in the fifth-scale FEA models, and these conditions were then applied to the full-scale models. The FEA results show that the stress levels in all directions are low compared with the material strengths.

Relevance: 100.00%

Abstract:

In this paper, we study a model economy that examines the optimal intraday interest rate. Freeman (1996) shows that the efficient allocation can be implemented by a policy in which the intraday rate is zero. We modify the production set and show that such a model economy can account for the non-uniform distribution of settlements within a day. In addition, by modifying both the consumption set and the production set, we show that the central bank may be able to implement the planner's allocation with a positive intraday interest rate.

Relevance: 100.00%

Abstract:

A system identification algorithm is introduced for Hammerstein systems that are modelled using a non-uniform rational B-spline (NURB) neural network. The proposed algorithm consists of two successive stages. First, the shaping parameters of the NURB network are estimated using a particle swarm optimization (PSO) procedure. Then the remaining parameters are estimated by singular value decomposition (SVD). Numerical examples demonstrate the efficacy of the proposed approach.
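The separable structure being exploited can be illustrated with a much-simplified stand-in. The sketch below omits the PSO stage and replaces the NURB network with a fixed polynomial basis, keeping only the idea of the second stage: once the basis is fixed, the Hammerstein model becomes linear in its remaining parameters and can be estimated by least squares, which NumPy solves internally via the SVD. All system coefficients are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated Hammerstein system: static nonlinearity f(u) = u + 0.5*u^2
# followed by linear dynamics y(t) = 0.6*y(t-1) + f(u(t-1)).
N = 500
u = rng.uniform(-1, 1, N)
x = u + 0.5 * u**2
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.6 * y[t - 1] + x[t - 1]

# With the basis {u, u^2} fixed (stand-in for a tuned NURB network), the
# model is linear in the unknown parameters, so an SVD-based least-squares
# fit recovers the dynamic coefficient and the basis weights together.
phi = np.column_stack([y[:-1], u[:-1], u[:-1]**2])   # regressors at t-1
theta, *_ = np.linalg.lstsq(phi, y[1:], rcond=None)

print(theta.round(3))   # ≈ [0.6, 1.0, 0.5]
```

In the paper's actual algorithm the basis itself is adapted (its shaping parameters found by PSO) before this linear estimation step; the sketch only shows why the second stage reduces to an SVD-based solve.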

Relevance: 100.00%

Abstract:

We address the problem of automatically identifying and restoring damaged and contaminated images. We suggest a novel approach based on a semi-parametric model. This has two components, a parametric component describing known physical characteristics and a more flexible non-parametric component. The latter avoids the need for a detailed model for the sensor, which is often costly to produce and lacking in robustness. We assess our approach using an analysis of electroencephalographic images contaminated by eye-blink artefacts and highly damaged photographs contaminated by non-uniform lighting. These experiments show that our approach provides an effective solution to problems of this type.

Relevance: 100.00%

Abstract:

The continuous ranked probability score (CRPS) is a frequently used scoring rule. In contrast with many other scoring rules, the CRPS evaluates cumulative distribution functions. An ensemble of forecasts can easily be converted into a piecewise constant cumulative distribution function with steps at the ensemble members. This renders the CRPS a convenient scoring rule for the evaluation of 'raw' ensembles, obviating the need for sophisticated ensemble model output statistics or dressing methods prior to evaluation. In this article, a relation between the CRPS and the quantile score is established. The evaluation of 'raw' ensembles using the CRPS is discussed in this light. It is shown that latent in this evaluation is an interpretation of the ensemble as quantiles, but with non-uniform levels. This needs to be taken into account if the ensemble is evaluated further, for example with rank histograms.
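The CRPS of such a piecewise constant CDF can be computed directly from the well-known identity CRPS(F, y) = E|X - y| - 0.5 E|X - X'|, where X and X' are independent draws from the empirical distribution of the members. A minimal sketch:

```python
import numpy as np

def crps_ensemble(members, obs):
    """CRPS of the piecewise-constant CDF built from an ensemble.

    Uses the identity CRPS(F, y) = E|X - y| - 0.5*E|X - X'| with X, X'
    independent draws from the empirical distribution of the members.
    """
    x = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(x - obs))
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term1 - term2

# A one-member "ensemble" reduces the CRPS to the absolute error.
print(crps_ensemble([3.0], 1.0))   # → 2.0
```

The quadratic pairwise term is what distinguishes the CRPS from a plain quantile evaluation; dropping it recovers the quantile-score view of the members discussed above.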

Relevance: 100.00%

Abstract:

With the introduction of new observing systems based on asynoptic observations, the analysis problem has changed in character. In the near future we may expect that a considerable part of meteorological observations will be unevenly distributed in four dimensions, i.e. three dimensions in space and one in time. The term analysis, or objective analysis, in meteorology means the process of interpolating meteorological observations from unevenly distributed locations to a network of regularly spaced grid points. Necessitated by the requirement of numerical weather prediction models to solve the governing finite difference equations on such a grid lattice, objective analysis is a three-dimensional (or, mostly, two-dimensional) interpolation technique. As a consequence of the structure of the conventional synoptic network, with its separated data-sparse and data-dense areas, four-dimensional analysis has in fact been used intensively for many years: weather services have based their analyses not only on synoptic data at the time of the analysis and on climatology, but also on fields predicted from the previous observation hour and valid at the time of the analysis. The inclusion of the time dimension in objective analysis will be called four-dimensional data assimilation. From one point of view it seems possible to apply the conventional technique to the new data sources by simply reducing the time interval in the analysis-forecasting cycle. This could in fact be justified for the conventional observations as well: we have fairly good coverage of surface observations eight times a day, and several upper-air stations make radiosonde and radiowind observations four times a day. With a 3-hour step in the analysis-forecasting cycle, instead of the 12 hours applied most often, we could without difficulty treat all observations as synoptic: no observation would be more than 90 minutes off time, and even during strong transient motion the observations would fall within a horizontal mesh of 500 km x 500 km.

Relevance: 100.00%

Abstract:

The complexity of current and emerging architectures gives users options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating point units (as in the AMD Bulldozer), which means that access times depend on the mapping of application tasks and on each core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and non-standard task-to-core mappings can dramatically alter performance. Discovering this, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
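The final interpolation step can be sketched in a few lines: given kernel timings benchmarked at a handful of problem sizes, the model predicts the cost at an unmeasured size by interpolating between the nearest measurements. The numbers below are illustrative, not measurements from the paper.

```python
import numpy as np

# Hypothetical benchmark results: wall time [s] of an array-update kernel
# measured at a few problem sizes (all values invented for the example).
sizes = np.array([128, 256, 512, 1024])
times = np.array([0.02, 0.09, 0.41, 1.80])

def predict_kernel_time(n):
    """Linearly interpolate measured kernel times to an unbenchmarked size."""
    return np.interp(n, sizes, times)

print(predict_kernel_time(384))   # falls between the 256 and 512 measurements
```

In practice one such lookup would exist per deployment scenario (decomposition, core mapping, node population), with the model combining the array-update and halo-exchange predictions.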

Relevance: 100.00%

Abstract:

In this study we report detailed information on the internal structure of PNIPAM-b-PEG-b-PNIPAM nanoparticles formed by self-assembly in aqueous solution upon an increase in temperature. NMR spectroscopy, light scattering and small-angle neutron scattering (SANS) were used to monitor the different stages of nanoparticle formation as a function of temperature, providing insight into the fundamental processes involved. The presence of PEG in the copolymer structure significantly affects nanoparticle formation, causing the transition to occur over a broader temperature range. The crucial parameter controlling the transition is the PEG/PNIPAM ratio: for pure PNIPAM the transition is sharp, and the higher the PEG/PNIPAM ratio, the broader the transition. This behavior is explained by the different mechanisms of PNIPAM block incorporation during nanoparticle formation at different PEG/PNIPAM ratios. Contrast-variation SANS experiments show that the structure of PNIPAM-b-PEG-b-PNIPAM nanoparticles above the cloud point temperature is drastically different from that of PNIPAM mesoglobules. In contrast with pure PNIPAM mesoglobules, which contain solid-like particles and a chain network with a mesh size of 1-3 nm, nanoparticles formed from PNIPAM-b-PEG-b-PNIPAM copolymers have a non-uniform structure with "frozen" areas interconnected by single chains in Gaussian conformation. SANS data with deuterated "invisible" PEG blocks imply that PEG is uniformly distributed inside a nanoparticle. It is the kinetically flexible PEG blocks that affect nanoparticle formation, by preventing PNIPAM microphase separation.

Relevance: 60.00%

Abstract:

A scale-invariant moving finite element method is proposed for the adaptive solution of nonlinear partial differential equations. The mesh movement is based on a finite element discretisation of a scale-invariant conservation principle incorporating a monitor function, while the time discretisation of the resulting system of ordinary differential equations is carried out using a scale-invariant time-stepping which yields uniform local accuracy in time. The accuracy and reliability of the algorithm are successfully tested against exact self-similar solutions where available, and otherwise against a state-of-the-art h-refinement scheme for solutions of a two-dimensional porous medium equation problem with a moving boundary. The monitor functions used are the dependent variable and a monitor related to the surface area of the solution manifold.
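The role of the monitor function can be illustrated with the static one-dimensional analogue of such a conservation principle: equidistribution, which places the mesh nodes so that each cell carries an equal share of the monitor integral, clustering nodes where the monitor is large. The Gaussian monitor below is an assumed example, not one from the paper.

```python
import numpy as np

def equidistribute(x_fine, monitor, n_nodes):
    """Place n_nodes so each cell holds an equal share of the monitor integral.

    Static 1-D analogue of the principle driving moving-mesh methods:
    nodes cluster where the monitor function is large.
    """
    M = monitor(x_fine)
    # cumulative trapezoidal integral of the monitor on a fine background grid
    I = np.concatenate([[0.0],
                        np.cumsum(0.5 * (M[1:] + M[:-1]) * np.diff(x_fine))])
    levels = np.linspace(0.0, I[-1], n_nodes)   # equal monitor increments
    return np.interp(levels, I, x_fine)         # invert I(x) at those levels

xf = np.linspace(0.0, 1.0, 1001)
mesh = equidistribute(xf, lambda x: 1.0 + 50.0 * np.exp(-200 * (x - 0.5)**2), 11)
print(mesh.round(3))   # nodes concentrate near the monitor peak at x = 0.5
```

A moving-mesh method evolves essentially this map in time alongside the solution; the scale-invariant formulation in the paper ensures the discretisation respects the scaling symmetry of the underlying PDE.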

Relevance: 50.00%

Abstract:

An algorithm based on flux difference splitting is presented for the solution of two-dimensional, open channel flows. A transformation maps a non-rectangular, physical domain into a rectangular one. The governing equations are then the shallow water equations, including terms of slope and friction, in a generalized coordinate system. A regular mesh on a rectangular computational domain can then be employed. The resulting scheme has good jump capturing properties and the advantage of using boundary/body-fitted meshes. The scheme is applied to a problem of flow in a river whose geometry induces a region of supercritical flow.
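The conservative update at the heart of such schemes can be sketched for the one-dimensional shallow water equations. The example below uses a simple Rusanov (local Lax-Friedrichs) interface flux rather than the paper's flux difference splitting, works on a plain uniform grid, and omits the slope and friction terms, but it shows the same finite-volume form and its jump-capturing behaviour on a dam-break problem.

```python
import numpy as np

g = 9.81  # gravitational acceleration [m/s^2]

def flux(h, hu):
    """Physical flux of the 1-D shallow water equations."""
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

def rusanov_step(h, hu, dx, dt):
    """One conservative finite-volume step with a Rusanov interface flux
    (a simpler substitute for the Roe-type flux difference splitting)."""
    u = hu / h
    c = np.abs(u) + np.sqrt(g * h)
    a = np.maximum(c[:-1], c[1:])                 # local wave-speed bound
    FL, FR = flux(h[:-1], hu[:-1]), flux(h[1:], hu[1:])
    F = 0.5 * (FL + FR) - 0.5 * a * np.array([h[1:] - h[:-1],
                                              hu[1:] - hu[:-1]])
    h[1:-1]  -= dt / dx * (F[0][1:] - F[0][:-1])  # interior cells only;
    hu[1:-1] -= dt / dx * (F[1][1:] - F[1][:-1])  # boundary cells held fixed
    return h, hu

# Dam-break test: still water with a depth jump at the midpoint.
n, dx = 200, 0.05
h = np.where(np.arange(n) < n // 2, 2.0, 1.0)
hu = np.zeros(n)
for _ in range(100):
    h, hu = rusanov_step(h, hu, dx, dt=0.005)     # CFL number ~0.6
print(h[n // 2 - 1:n // 2 + 2].round(3))          # plateau between 1 and 2
```

The scheme resolves the rarefaction and bore without oscillation; a Roe-type flux-difference splitting sharpens the captured jumps further, and the paper's coordinate transformation supplies the boundary-fitted mesh.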

Relevance: 50.00%

Abstract:

We consider the numerical treatment of second-kind integral equations on the real line of the form

ϕ(s) = ψ(s) + ∫_{-∞}^{+∞} κ(s - t) z(t) ϕ(t) dt,  s ∈ R

(abbreviated ϕ = ψ + K_z ϕ), in which κ ∈ L_1(R), z ∈ L_∞(R) and ψ ∈ BC(R), the space of bounded continuous functions on R, are assumed known, and ϕ ∈ BC(R) is to be determined. We first derive sharp error estimates for the finite section approximation (reducing the range of integration to [-A, A]) via bounds on (1 - K_z)^{-1} as an operator on spaces of weighted continuous functions. Numerical solution by a simple discrete collocation method on a uniform grid on R is then analysed: in the case when z is compactly supported this leads to a coefficient matrix which allows a rapid matrix-vector multiply via the FFT. To exploit this possibility we propose a modified two-grid iteration, a feature of which is that the coarse-grid matrix is approximated by a banded matrix, and we analyse its convergence and computational cost. In cases where z is not compactly supported, a combined finite section and two-grid algorithm can be applied, and we extend the analysis to this case. As an application we consider acoustic scattering in the half-plane with a Robin or impedance boundary condition, which we formulate as a boundary integral equation of the class studied. Our final result is that if z (related to the boundary impedance in the application) takes values in an appropriate compact subset Q of the complex plane, then the difference between ϕ(s) and its finite section approximation computed numerically using the proposed iterative scheme is at most C_1 [kh log(1/(kh)) + (1 - Θ)^{-1/2} (kA)^{-1/2}] in the interval [-ΘA, ΘA] (Θ < 1), for kh sufficiently small, where k is the wavenumber and h the grid spacing. Moreover, this numerical approximation can be computed in at most C_2 N log N operations, where N = 2A/h is the number of degrees of freedom. The values of the constants C_1 and C_2 depend only on the set Q and not on the wavenumber k or the support of z.
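The rapid matrix-vector multiply rests on the fact that a convolution kernel κ(s - t) sampled on a uniform grid yields a Toeplitz coefficient matrix, which can be embedded in a circulant of twice the size and applied in O(N log N) operations with the FFT. A self-contained sketch with random data (not the scattering kernel of the paper):

```python
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    """Multiply a Toeplitz matrix by a vector in O(N log N).

    Embeds the Toeplitz matrix in a (2N)-circulant whose first column is
    [col, 0, reversed row tail], then applies it via the FFT: the first N
    entries of C @ [x; 0] equal T @ x.
    """
    n = len(x)
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

rng = np.random.default_rng(1)
n = 64
col, row = rng.standard_normal(n), rng.standard_normal(n)
row[0] = col[0]                      # Toeplitz consistency: shared diagonal
x = rng.standard_normal(n)

# Dense reference: T[i, j] = col[i-j] for i >= j, row[j-i] for i < j.
T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(n)]
              for i in range(n)])
print(np.allclose(T @ x, toeplitz_matvec(col, row, x)))   # → True
```

Inside an iterative solver this replaces the O(N^2) dense product, which together with the banded coarse-grid approximation gives the C_2 N log N total cost quoted above.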

Relevance: 40.00%

Abstract:

The constant-density Charney model describes the simplest unstable basic state with a planetary-vorticity gradient, which is uniform and positive, and baroclinicity that is manifest as a negative contribution to the potential-vorticity (PV) gradient at the ground and positive vertical wind shear. Together, these ingredients satisfy the necessary conditions for baroclinic instability. In Part I it was shown how baroclinic growth on a general zonal basic state can be viewed as the interaction of pairs of 'counter-propagating Rossby waves' (CRWs) that can be constructed from a growing normal mode and its decaying complex conjugate. In this paper the normal-mode solutions for the Charney model are studied from the CRW perspective. Clear parallels can be drawn between the most unstable modes of the Charney model and those of the Eady model, in which the CRWs can be derived independently of the normal modes. However, the dispersion curves for the two models are very different: the Eady model has a short-wave cut-off, while the Charney model is unstable at short wavelengths. Beyond its maximum growth rate the Charney model has a neutral point at finite wavelength (r = 1). Thereafter follows a succession of unstable branches, each with weaker growth than the last, separated by neutral points at integer r, the so-called 'Green branches'. A separate branch of westward-propagating neutral modes also originates from each neutral point. By approximating the lower CRW as a Rossby edge wave and the upper CRW as a single PV peak with a spread proportional to the Rossby scale height, the main features of the 'Charney branch' (0 < r < 1) can be explained in terms of the difference in the scale-dependence of PV inversion for boundary and interior PV anomalies, the Rossby-wave propagation mechanism and the CRW interaction. The behaviour of the Charney modes and the first neutral branch, which rely on tropospheric PV gradients, is arguably more applicable to the atmosphere than the modes of the Eady model, where the positive PV gradient exists only at the tropopause.