24 results for Numerical Algorithms and Problems
Abstract:
Airborne scanning laser altimetry (LiDAR) is an important new data source for river flood modelling. LiDAR can give dense and accurate DTMs of floodplains for use as model bathymetry. Spatial resolutions of 0.5m or less are possible, with a height accuracy of 0.15m. LiDAR gives a Digital Surface Model (DSM), so vegetation removal software (e.g. TERRASCAN) must be used to obtain a DTM. An example used to illustrate the current state of the art will be the LiDAR data provided by the EA, which has been processed by their in-house software to convert the raw data to a ground DTM and a separate vegetation height map. Their method distinguishes trees from buildings on the basis of object size. EA data products include the DTM with or without buildings removed, a vegetation height map, a DTM with bridges removed, etc. Most vegetation removal software ignores short vegetation less than, say, 1m high. We have attempted to extend vegetation height measurement to short vegetation using local height texture. Typically, most of a floodplain may be covered in such vegetation. The idea is to assign friction coefficients depending on local vegetation height, so that friction is spatially varying. This obviates the need to calibrate a global floodplain friction coefficient. It is not clear at present whether the method is useful, but it is worth testing further. The LiDAR DTM is usually determined by looking for local minima in the raw data, then interpolating between these to form a space-filling height surface. This is a low-pass filtering operation, in which objects of high spatial frequency such as buildings, river embankments and walls may be incorrectly classed as vegetation. The problem is particularly acute in urban areas. A solution may be to apply pattern recognition techniques to LiDAR height data fused with other data types such as LiDAR intensity or multispectral CASI data. We are attempting to use digital map data (Mastermap structured topography data) to help distinguish buildings from trees, and roads from areas of short vegetation. The problems involved in doing this will be discussed. A related problem of how best to merge historic river cross-section data with a LiDAR DTM will also be considered. LiDAR data may also be used to help generate a finite element mesh. In rural areas we have decomposed a floodplain mesh according to taller vegetation features such as hedges and trees, so that e.g. hedge elements can be assigned higher friction coefficients than those in adjacent fields. We are attempting to extend this approach to urban areas, so that the mesh is decomposed in the vicinity of buildings, roads, etc. as well as trees and hedges. A dominant points algorithm is used to identify points of high curvature on a building or road, which act as initial nodes in the meshing process. A difficulty is that the resulting mesh may contain a very large number of nodes. However, the mesh generated may be useful to allow a high-resolution FE model to act as a benchmark for a more practical lower-resolution model. A further problem discussed will be how best to exploit data redundancy due to the high resolution of the LiDAR compared to that of a typical flood model. Problems occur if features have dimensions smaller than the model cell size: for example, for a 5m-wide embankment within a raster grid model with a 15m cell size, the maximum local height of the embankment could be assigned to each cell covering the embankment. But how could a 5m-wide ditch be represented?
Again, this redundancy has been exploited to improve wetting/drying algorithms using the sub-grid-scale LiDAR heights within finite elements at the waterline.
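To make the sub-grid aggregation question above concrete, here is a minimal sketch (Python/NumPy, with hypothetical array and function names, not the software used in the work described) of reducing a fine LiDAR DTM to a coarser model grid, where the choice of reducer decides whether a narrow embankment crest (np.max) or a narrow ditch (np.min) survives in the coarse cell:

    import numpy as np

    def aggregate_dtm(fine_dtm, factor, reducer=np.mean):
        """Aggregate a fine-resolution LiDAR DTM to a coarser model grid.

        fine_dtm : 2-D array of ground heights (m) at LiDAR resolution
        factor   : integer ratio of model cell size to LiDAR cell size
        reducer  : np.max keeps a narrow embankment crest, np.min keeps a
                   narrow ditch, np.mean gives a plain block average
        """
        ny, nx = fine_dtm.shape
        ny, nx = ny - ny % factor, nx - nx % factor      # trim to multiples of factor
        blocks = fine_dtm[:ny, :nx].reshape(ny // factor, factor, nx // factor, factor)
        return reducer(blocks, axis=(1, 3))

In practice a different reducer would be applied only over cells flagged as containing the feature of interest; a ditch narrower than the cell size would otherwise vanish into the block average.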
Abstract:
We describe a model-data fusion (MDF) inter-comparison project (REFLEX), which compared various algorithms for estimating carbon (C) model parameters consistent with both measured carbon fluxes and states and a simple C model. Participants were provided with the model and with both synthetic net ecosystem exchange (NEE) of CO2 and leaf area index (LAI) data, generated from the model with added noise, and observed NEE and LAI data from two eddy covariance sites. Participants endeavoured to estimate model parameters and states consistent with the model for all cases over the two years for which data were provided, and generate predictions for one additional year without observations. Nine participants contributed results using Metropolis algorithms, Kalman filters and a genetic algorithm. For the synthetic data case, parameter estimates compared well with the true values. The results of the analyses indicated that parameters linked directly to gross primary production (GPP) and ecosystem respiration, such as those related to foliage allocation and turnover, or temperature sensitivity of heterotrophic respiration, were best constrained and characterised. Poorly estimated parameters were those related to the allocation to and turnover of fine root/wood pools. Estimates of confidence intervals varied among algorithms, but several algorithms successfully located the true values of annual fluxes from synthetic experiments within relatively narrow 90% confidence intervals, achieving >80% success rate and mean NEE confidence intervals <110 gC m−2 year−1 for the synthetic case. Annual C flux estimates generated by participants generally agreed with gap-filling approaches using half-hourly data. The estimation of ecosystem respiration and GPP through MDF agreed well with outputs from partitioning studies using half-hourly data. Confidence limits on annual NEE increased by an average of 88% in the prediction year compared to the previous year, when data were available. Confidence intervals on annual NEE increased by 30% when observed data were used instead of synthetic data, reflecting and quantifying the addition of model error. Finally, our analyses indicated that incorporating additional constraints, using data on C pools (wood, soil and fine roots) would help to reduce uncertainties for model parameters poorly served by eddy covariance data.
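As an illustration of one of the algorithm families mentioned (and only as a generic sketch, not the implementation of any REFLEX participant), a random-walk Metropolis sampler for the C-model parameters could look like the following; model_nee, observed_nee, drivers and sigma_obs are hypothetical names for the model, data and error terms:

    import numpy as np

    def metropolis(log_post, theta0, step, n_iter=10000, seed=0):
        """Random-walk Metropolis sampler for a parameter vector.

        log_post : callable returning the log posterior density of a parameter vector
        theta0   : starting parameter vector
        step     : proposal standard deviation for each parameter
        """
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float)
        lp = log_post(theta)
        chain = []
        for _ in range(n_iter):
            proposal = theta + rng.normal(0.0, step, size=theta.size)
            lp_new = log_post(proposal)
            if np.log(rng.uniform()) < lp_new - lp:   # accept with probability min(1, ratio)
                theta, lp = proposal, lp_new
            chain.append(theta.copy())
        return np.array(chain)

    # Hypothetical Gaussian likelihood of modelled vs. observed NEE:
    # def log_post(theta):
    #     resid = observed_nee - model_nee(theta, drivers)
    #     return -0.5 * np.sum((resid / sigma_obs) ** 2)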
Abstract:
The sea ice export from the Arctic is of global importance due to its fresh water, which influences the oceanic stratification and, thus, the global thermohaline circulation. This study deals with the effect of cyclones on sea ice, and on sea ice transport in particular, on the basis of observations from the two field experiments FRAMZY 1999 and FRAMZY 2002 in April 1999 and March 2002, as well as on the basis of simulations with a numerical sea ice model. The simulations, realised with a dynamic-thermodynamic sea ice model, are forced with 6-hourly atmospheric ECMWF analyses (European Centre for Medium-Range Weather Forecasts) and 6-hourly oceanic data from an MPI-OM simulation (Max-Planck-Institute Ocean Model). A comparison of the observed and simulated variability of the sea ice drift and of the position of the ice edge shows that the chosen configuration of the model is appropriate for the performed studies. The seven observed cyclones change the position of the ice edge by up to 100 km and cause an extensive decrease of sea ice coverage by 2% up to more than 10%. The decrease is only simulated by the model if the ocean current is strongly divergent in the centre of the cyclone. The impact of the ocean current on the divergence and shear deformation of the ice drift is remarkable. As shown by sensitivity studies, the ocean current at a depth of 6 m, with which the sea ice model is forced, is mainly responsible for the ascertained differences between simulation and observation. The simulated sea ice transport shows a strong variability on time scales from hours to days. Local minima occur in the time series of the ice transport during periods with Fram Strait cyclones. These minima are not caused by the local effect of the cyclone's wind field, but mainly by the large-scale pattern of surface pressure. A displacement of the areas of strongest cyclone activity in the Nordic Seas would considerably influence the ice transport.
Abstract:
The goal of the review is to provide a state-of-the-art survey on sampling and probe methods for the solution of inverse problems. Further, a configuration approach to some of the problems will be presented. We study the concepts and analytical results for several recent sampling and probe methods. We will give an introduction to the basic idea behind each method using a simple model problem and then provide a general formulation in terms of particular configurations to study the range of the arguments which are used to set up the method. This provides a novel way to present the algorithms and the analytic arguments for their investigation in a variety of different settings. In detail we investigate the probe method (Ikehata), the linear sampling method (Colton-Kirsch), the factorization method (Kirsch), the singular sources method (Potthast), the no response test (Luke-Potthast), the range test (Kusiak, Potthast and Sylvester) and the enclosure method (Ikehata) for the solution of inverse acoustic and electromagnetic scattering problems. The main ideas, approaches and convergence results of the methods are presented. For each method, we provide a historical survey of applications to different situations.
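As a schematic reminder of the flavour of these indicator-function methods (not the precise formulation of any one of the surveyed papers), the linear sampling method tests each sampling point $z$ by trying to solve the far field equation
\[ \int_{\mathbb{S}} u_\infty(\hat{x}, d)\, g_z(d)\, ds(d) = e^{-ik\,\hat{x}\cdot z}, \qquad \hat{x} \in \mathbb{S}, \]
where $u_\infty$ is the measured far field pattern, $\mathbb{S}$ is the unit circle or sphere of directions, and the right-hand side is, up to a dimension-dependent constant, the far field of a point source at $z$; the norm of a regularised solution $g_z$ remains moderate for $z$ inside the scatterer and blows up as $z$ leaves it, which provides the indicator used to reconstruct the support.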
Abstract:
Exact error estimates for evaluating multi-dimensional integrals are considered. An estimate is called exact if the rates of convergence of the lower- and upper-bound estimates coincide. An algorithm with such an exact rate is called optimal; its rate of convergence cannot be improved. The problem of the existence of exact estimates and optimal algorithms is discussed for some functional spaces that define the regularity of the integrand. Data classes that are important for practical computations are considered: classes of functions with bounded derivatives and functions satisfying Hölder-type conditions. The aim of the paper is to analyze the performance of two classes of optimal algorithms, deterministic and randomized, for computing multidimensional integrals. It is also shown how the smoothness of the integrand can be exploited to construct better randomized algorithms.
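For context, the baseline against which such optimal algorithms are measured is plain Monte Carlo integration, whose probabilistic error decays like $N^{-1/2}$ independently of the dimension. The hedged sketch below (Python/NumPy, with a hypothetical integrand) shows this baseline; the smoothness-exploiting randomized schemes discussed in the paper improve on this rate.

    import numpy as np

    def mc_integrate(f, dim, n_samples, seed=0):
        """Plain Monte Carlo estimate of the integral of f over the unit cube [0,1]^dim.

        The standard error decays like n_samples**-0.5 whatever the dimension;
        smoother integrands admit randomized algorithms with faster rates.
        """
        rng = np.random.default_rng(seed)
        x = rng.random((n_samples, dim))
        values = f(x)                      # f acts on the rows of x
        estimate = values.mean()
        std_error = values.std(ddof=1) / np.sqrt(n_samples)
        return estimate, std_error

    # Example: est, err = mc_integrate(lambda x: np.exp(-x.sum(axis=1)), dim=5, n_samples=100_000)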
Abstract:
We consider a class of boundary integral equations that arise in the study of strongly elliptic BVPs in unbounded domains of the form $D = \{(x, z)\in \mathbb{R}^{n+1} : x\in \mathbb{R}^n, z > f(x)\}$ where $f : \mathbb{R}^n \to\mathbb{R}$ is a sufficiently smooth bounded and continuous function. A number of specific problems of this type, for example acoustic scattering problems, problems involving elastic waves, and problems in potential theory, have been reformulated as second kind integral equations $u+Ku = v$ in the space $BC$ of bounded, continuous functions. Having recourse to the so-called limit operator method, we address two questions for the operator $A = I + K$ under consideration, with an emphasis on the function space setting $BC$. Firstly, under which conditions is $A$ a Fredholm operator, and, secondly, when is the finite section method applicable to $A$?
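For orientation, a schematic statement of the finite section method in this setting (with the precise convergence notion left to the limit operator literature): with $P_\tau$ denoting multiplication by the characteristic function of $\{x : |x| \le \tau\}$, the full equation $u + Ku = v$ is replaced by the truncated equations
\[ P_\tau (I + K) P_\tau u_\tau = P_\tau v , \]
and the method is called applicable to $A = I + K$ if these truncated equations are uniquely solvable for all sufficiently large $\tau$ and their solutions $u_\tau$ converge to the solution $u$ as $\tau \to \infty$.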
Abstract:
The sampling of a given solid angle is a fundamental operation in realistic image synthesis, where the rendering equation describing the light propagation in closed domains is solved. Monte Carlo methods for solving the rendering equation use sampling of the solid angle subtended by the unit hemisphere or unit sphere in order to perform the numerical integration of the rendering equation. In this work we consider the problem of generating uniformly distributed random samples over the hemisphere and sphere. Our aim is to construct and study a parallel sampling scheme for the hemisphere and sphere. First we apply the symmetry property to partition the hemisphere and sphere. The domain of solid angle subtended by a hemisphere is divided into a number of equal sub-domains. Each sub-domain represents the solid angle subtended by an orthogonal spherical triangle with fixed vertices and computable parameters. Then we introduce two new algorithms for sampling of orthogonal spherical triangles. Both algorithms are based on a transformation of the unit square. Similarly to Arvo's algorithm for sampling of an arbitrary spherical triangle, the suggested algorithms accommodate stratified sampling. We derive the necessary transformations for the algorithms. The first sampling algorithm generates a sample by mapping the unit square onto the orthogonal spherical triangle. The second algorithm directly computes the unit radius vector of a sampling point inside the orthogonal spherical triangle. The sampling of the total hemisphere and sphere is performed in parallel for all sub-domains simultaneously by using the symmetry property of the partitioning. The applicability of the corresponding parallel sampling scheme to Monte Carlo and quasi-Monte Carlo solution of the rendering equation is discussed.
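For comparison with the triangle-based schemes above, here is a minimal sketch (Python/NumPy; the classical full-hemisphere mapping, not the orthogonal-spherical-triangle algorithms of the paper) of mapping the unit square to directions uniformly distributed over the hemisphere; because the map is area-preserving, stratified points in the square remain stratified on the hemisphere:

    import numpy as np

    def sample_hemisphere(u, v):
        """Map (u, v) in the unit square to a direction uniformly distributed
        (with respect to solid angle) over the unit upper hemisphere.

        By Archimedes' theorem the height z is uniform on [0, 1] and the
        azimuth is uniform on [0, 2*pi), so the map preserves area.
        """
        z = u
        phi = 2.0 * np.pi * v
        r = np.sqrt(max(0.0, 1.0 - z * z))
        return np.array([r * np.cos(phi), r * np.sin(phi), z])

    # Stratified usage: jitter an 8x8 grid in the unit square, then map.
    # rng = np.random.default_rng(0)
    # pts = [sample_hemisphere((i + rng.random()) / 8, (j + rng.random()) / 8)
    #        for i in range(8) for j in range(8)]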
Abstract:
The dependence of much of Africa on rain-fed agriculture leads to a high vulnerability to fluctuations in rainfall amount. Hence, accurate monitoring of near-real-time rainfall is particularly useful, for example in forewarning possible crop shortfalls in drought-prone areas. Unfortunately, ground-based observations are often inadequate. Rainfall estimates from satellite-based algorithms and numerical model outputs can fill this data gap; however, rigorous assessment of such estimates is required. In this case, three satellite-based products (NOAA-RFE 2.0, GPCP-1DD and TAMSAT) and two numerical model outputs (ERA-40 and ERA-Interim) have been evaluated for Uganda in East Africa using a network of 27 rain gauges. The study focuses on the years 2001 to 2005 and considers the main rainy season (February to June). All data sets were converted to the same temporal and spatial scales. Kriging was used for the spatial interpolation of the gauge data. All three satellite products showed similar characteristics and had a high level of skill that exceeded both model outputs. ERA-Interim had a tendency to overestimate whilst ERA-40 consistently underestimated the Ugandan rainfall.
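For readers unfamiliar with the interpolation step, a minimal ordinary kriging point estimate (only an illustrative sketch under an assumed variogram, not the interpolation code used in the study) can be written as follows:

    import numpy as np

    def ordinary_kriging(xy, z, xy0, variogram):
        """Ordinary kriging estimate at location xy0 from gauges at xy with values z.

        xy        : (n, 2) array of gauge coordinates
        z         : (n,) array of observed rainfall
        xy0       : (2,) target grid point
        variogram : callable gamma(h) giving the semivariance at lag distance h
        """
        n = len(z)
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)   # pairwise distances
        A = np.empty((n + 1, n + 1))
        A[:n, :n] = variogram(d)
        A[n, :n] = A[:n, n] = 1.0        # unbiasedness (weights sum to one)
        A[n, n] = 0.0
        b = np.append(variogram(np.linalg.norm(xy - xy0, axis=1)), 1.0)
        w = np.linalg.solve(A, b)[:n]    # kriging weights
        return w @ z

    # Hypothetical spherical variogram with range a and sill s:
    # gamma = lambda h, a=50.0, s=1.0: np.where(h < a, s * (1.5*h/a - 0.5*(h/a)**3), s)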
Abstract:
Adaptive methods which “equidistribute” a given positive weight function are now used fairly widely for selecting discrete meshes. The disadvantage of such schemes is that the resulting mesh may not be smoothly varying. In this paper a technique is developed for equidistributing a function subject to constraints on the ratios of adjacent steps in the mesh. Given a weight function $f \geqq 0$ on an interval $[a,b]$ and constants $c$ and $K$, the method produces a mesh with points $x_0 = a$, $x_{j+1} = x_j + h_j$, $j = 0,1,\cdots,n-1$, and $x_n = b$ such that\[ \int_{x_j}^{x_{j+1}} f \leqq c \quad \text{and} \quad \frac{1}{K} \leqq \frac{h_{j+1}}{h_j} \leqq K \quad \text{for } j = 0,1,\cdots,n-1 . \] A theoretical analysis of the procedure is presented, and numerical algorithms for implementing the method are given. Examples show that the procedure is effective in practice. Other types of constraints on equidistributing meshes are also discussed. The principal application of the procedure is to the solution of boundary value problems, where the weight function is generally some error indicator, and accuracy and convergence properties may depend on the smoothness of the mesh. Other practical applications include the regrading of statistical data.
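A greatly simplified forward sweep conveying the idea (only a hedged sketch enforcing the integral bound and the upper ratio bound; enforcing the lower bound $h_{j+1}/h_j \geqq 1/K$ in general requires the smoothing procedure analysed in the paper) might look like this:

    import numpy as np
    from scipy.integrate import quad

    def graded_mesh(f, a, b, c, K, h0):
        """Forward sweep building a mesh on [a, b] whose cell integrals of f
        are at most c and whose steps grow by at most a factor K per cell.
        """
        x, h = [a], h0
        while x[-1] < b:
            h = min(K * h, b - x[-1])
            # shrink the step until the cell integral of f falls below c
            while quad(f, x[-1], x[-1] + h)[0] > c:
                h *= 0.5
            x.append(x[-1] + h)
        x[-1] = b
        return np.array(x)

    # Example: weight concentrated near x = 0 (a hypothetical error indicator)
    # mesh = graded_mesh(lambda t: 1.0 / np.sqrt(t + 1e-3), 0.0, 1.0, c=0.05, K=1.5, h0=1e-3)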
Abstract:
Active learning plays a strong role in mathematics and statistics, and formative problems are vital for developing key problem-solving skills. To keep students engaged and help them master the fundamentals before challenging themselves further, we have developed a system for delivering problems tailored to a student's current level of understanding. Specifically, by adapting simple methodology from clinical trials, a framework for delivering existing problems and other illustrative material has been developed, making use of macros in Excel. The problems are assigned a level of difficulty (a 'dose'), and problems are presented to the student in an order depending on their ability, i.e. based on their performance so far on other problems. We demonstrate and discuss the application of the approach with formative examples developed for a first year course on plane coordinate geometry, and also for problems centred on the topic of chi-square tests.
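The dose-escalation analogy can be conveyed by a very small staircase rule (a hypothetical Python illustration of the idea only; the system described in the paper is implemented as Excel macros):

    def next_level(level, correct, n_levels=5):
        """Up-and-down staircase rule borrowed from dose-finding designs:
        move up one difficulty level after a correct answer, down one after
        an incorrect answer, clipped to the range 1..n_levels."""
        step = 1 if correct else -1
        return min(n_levels, max(1, level + step))

    # level = 3
    # for correct in [True, True, False, True]:
    #     level = next_level(level, correct)    # 4, 5, 4, 5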
Abstract:
Purpose – This paper summarises the main research findings from a detailed, qualitative set of structured interviews and case studies of private finance initiative (PFI) schemes in the UK, which involve the construction of built facilities. The research, which was funded by the Foundation for the Built Environment, examines the emergence of PFI in the UK. Benefits and problems in the PFI process are investigated. Best practice, the key critical factors for success, and lessons for the future are also analysed. Design/methodology/approach – The research is based around 11 semi-structured interviews conducted with stakeholders in key PFI projects in the UK. Findings – The research demonstrates that value for money and risk transfer are key success criteria. High procurement and transaction costs are a feature of PFI projects, and the large-scale nature of PFI projects frequently acts as a barrier to entry. Research limitations/implications – The research is based on a limited number of in-depth case study interviews. The paper also shows that further research is needed to find better ways to measure these concepts empirically. Practical implications – The paper is important in highlighting four main areas of practical improvement in the PFI process: value for money assessment; establishing end-user needs; developing competitive markets and developing appropriate skills in the public sector. Originality/value – The paper examines the drivers, barriers and critical success factors for PFI in the UK for the first time in detail and will be of value to property investors, financiers, and others involved in the PFI process.
Abstract:
New representations and efficient calculation methods are derived for the problem of propagation from an infinite regularly spaced array of coherent line sources above a homogeneous impedance plane, and for the Green's function for sound propagation in the canyon formed by two infinitely high, parallel rigid or sound soft walls and an impedance ground surface. The infinite sum of source contributions is replaced by a finite sum and the remainder is expressed as a Laplace-type integral. A pole subtraction technique is used to remove poles in the integrand which lie near the path of integration, obtaining a smooth integrand, more suitable for numerical integration, and a specific numerical integration method is proposed. Numerical experiments show highly accurate results across the frequency spectrum for a range of ground surface types. It is expected that the methods proposed will prove useful in boundary element modeling of noise propagation in canyon streets and in ducts, and for problems of scattering by periodic surfaces.
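The pole subtraction idea used above can be stated schematically (this is the standard device, with the paper's Laplace-type integrals and specific contours omitted): if the integrand has a simple pole at $t_0$ close to the integration path $\Gamma$, write
\[ \int_\Gamma \frac{g(t)}{t - t_0}\, dt = \int_\Gamma \frac{g(t) - g(t_0)}{t - t_0}\, dt + g(t_0) \int_\Gamma \frac{dt}{t - t_0}, \]
so that the first integrand is smooth (the singularity is removable) and well suited to standard numerical quadrature, while the second integral is evaluated in closed form.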