910 results for "problem difficulty"


Abstract:

In this paper we show stability and convergence for a novel Galerkin boundary element method approach to the impedance boundary value problem for the Helmholtz equation in a half-plane with piecewise constant boundary data. This problem models, for example, outdoor sound propagation over inhomogeneous flat terrain. To achieve a good approximation with a relatively low number of degrees of freedom we employ a graded mesh with smaller elements adjacent to discontinuities in impedance, and a special set of basis functions for the Galerkin method so that, on each element, the approximation space consists of polynomials (of degree $\nu$) multiplied by traces of plane waves on the boundary. In the case where the impedance is constant outside an interval $[a,b]$, so that only $[a,b]$ needs to be discretized, we show theoretically and experimentally that the $L_2$ error in computing the acoustic field on $[a,b]$ is ${\cal O}(\log^{\nu+3/2}|k(b-a)| M^{-(\nu+1)})$, where $M$ is the number of degrees of freedom and $k$ is the wavenumber. This indicates that the proposed method is particularly well suited to large intervals and high wavenumbers. In a final section we sketch how the same methodology extends to more general scattering problems.
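To make the quoted error bound concrete, the small Python sketch below simply tabulates $C\,\log^{\nu+3/2}|k(b-a)|\,M^{-(\nu+1)}$ for a few values of $k$ and $M$. The constant $C = 1$, the interval length $b-a = 1$, and the sample values of $k$, $M$ and $\nu$ are assumptions chosen only to show how mildly the bound grows with the wavenumber.

```python
# Illustrative only: tabulate the error bound quoted in the abstract,
#   error ~ C * log^(nu + 3/2)|k(b - a)| * M^(-(nu + 1)),
# to show its mild (logarithmic) growth in the wavenumber k.
# C = 1, b - a = 1 and the sample values of k, M, nu are assumptions.
import math

def error_bound(k, b_minus_a, M, nu, C=1.0):
    """Predicted L2 error bound (up to the unknown constant C)."""
    return C * math.log(abs(k * b_minus_a)) ** (nu + 1.5) * M ** -(nu + 1)

for k in (10.0, 100.0, 1000.0):
    for M in (32, 64, 128):
        print(f"k = {k:6.0f}  M = {M:4d}  bound = {error_bound(k, 1.0, M, nu=1):.2e}")
```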

Abstract:

The combination of radar and lidar in space offers the unique potential to retrieve vertical profiles of ice water content and particle size globally, and two algorithms developed recently claim to have overcome the principal difficulty with this approach, namely correcting the lidar signal for extinction. In this paper "blind tests" of these algorithms are carried out, using realistic 94-GHz radar and 355-nm lidar backscatter profiles simulated from aircraft-measured size spectra, and including the effects of molecular scattering, multiple scattering, and instrument noise. Radiation calculations are performed on the true and retrieved microphysical profiles to estimate the accuracy with which radiative flux profiles could be inferred remotely. It is found that the visible extinction profile can be retrieved independently of assumptions about the nature of the size distribution, the habit of the particles, the mean extinction-to-backscatter ratio, or errors in instrument calibration. Local errors in retrieved extinction can occur in proportion to local fluctuations in the extinction-to-backscatter ratio, but down to 400 m above the height of the lowest lidar return, optical depth is typically retrieved to better than 0.2. Retrieval uncertainties are greater at the far end of the profile, and errors in total optical depth can exceed 1, which changes the shortwave radiative effect of the cloud by around 20%. Longwave fluxes are much less sensitive to errors in total optical depth, and may generally be calculated to better than 2 W m⁻² throughout the profile. It is important for retrieval algorithms to account for the effects of lidar multiple scattering, because if this is neglected, then optical depth is underestimated by approximately 35%, resulting in cloud radiative effects being underestimated by around 30% in the shortwave and 15% in the longwave. Unlike the extinction coefficient, the inferred ice water content and particle size can vary by 30%, depending on the assumed mass-size relationship (a problem common to all remote retrieval algorithms). However, radiative fluxes are almost completely determined by the extinction profile, and if this is correct, then errors in these other parameters have only a small effect in the shortwave (around 6%, compared to that of clear sky) and a negligible effect in the longwave.
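The roughly 35% figure for neglected multiple scattering can be reproduced with a back-of-envelope calculation. The sketch below is not either of the retrieval algorithms tested in the paper; it just uses Platt's multiple-scattering approximation, in which the two-way transmission is exp(-2ητ), and assumes η = 0.65 purely for illustration.

```python
# Back-of-envelope illustration (not the retrieval algorithms tested here):
# with Platt's multiple-scattering factor eta, the lidar signal attenuates as
# exp(-2 * eta * tau). A retrieval that assumes single scattering interprets
# that attenuation as exp(-2 * tau_ret), so tau_ret = eta * tau.
# eta = 0.65 and tau_true = 1.5 are assumed values chosen for illustration.
import math

eta = 0.65        # assumed multiple-scattering factor for 355-nm lidar in ice cloud
tau_true = 1.5    # assumed true cloud optical depth

two_way_transmission = math.exp(-2.0 * eta * tau_true)   # what the lidar "sees"
tau_retrieved = -0.5 * math.log(two_way_transmission)    # single-scattering inversion

print(f"true optical depth      : {tau_true:.2f}")
print(f"retrieved, MS neglected : {tau_retrieved:.2f}")
print(f"underestimate           : {100 * (1 - tau_retrieved / tau_true):.0f}%")
```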

Abstract:

Airborne scanning laser altimetry (LiDAR) is an important new data source for river flood modelling. LiDAR can give dense and accurate DTMs of floodplains for use as model bathymetry. Spatial resolutions of 0.5 m or less are possible, with a height accuracy of 0.15 m. LiDAR gives a Digital Surface Model (DSM), so vegetation removal software (e.g. TERRASCAN) must be used to obtain a DTM. The current state of the art is illustrated by the LiDAR data provided by the EA, which has been processed by their in-house software to convert the raw data to a ground DTM and a separate vegetation height map. Their method distinguishes trees from buildings on the basis of object size. EA data products include the DTM with or without buildings removed, a vegetation height map, a DTM with bridges removed, etc.

Most vegetation removal software ignores short vegetation, say less than 1 m high, yet such vegetation typically covers most of a floodplain. We have attempted to extend vegetation height measurement to short vegetation using local height texture. The idea is to assign friction coefficients depending on local vegetation height, so that friction is spatially varying; this obviates the need to calibrate a global floodplain friction coefficient. It is not yet clear whether the method is useful, but it merits further testing.

The LiDAR DTM is usually determined by looking for local minima in the raw data, then interpolating between these to form a space-filling height surface. This is a low-pass filtering operation, in which objects of high spatial frequency such as buildings, river embankments and walls may be incorrectly classed as vegetation. The problem is particularly acute in urban areas. A solution may be to apply pattern recognition techniques to LiDAR height data fused with other data types such as LiDAR intensity or multispectral CASI data. We are attempting to use digital map data (Mastermap structured topography data) to help distinguish buildings from trees, and roads from areas of short vegetation. The problems involved in doing this will be discussed, as will the related problem of how best to merge historic river cross-section data with a LiDAR DTM.

LiDAR data may also be used to help generate a finite element mesh. In rural areas we have decomposed a floodplain mesh according to taller vegetation features such as hedges and trees, so that, for example, hedge elements can be assigned higher friction coefficients than those in adjacent fields. We are attempting to extend this approach to urban areas, so that the mesh is decomposed in the vicinity of buildings, roads, etc. as well as trees and hedges. A dominant points algorithm is used to identify points of high curvature on a building or road, which act as initial nodes in the meshing process. A difficulty is that the resulting mesh may contain a very large number of nodes. However, the mesh generated may be useful in allowing a high-resolution finite element model to act as a benchmark for a more practical lower-resolution model.

A further problem discussed will be how best to exploit the data redundancy that arises because the LiDAR resolution is much higher than that of a typical flood model. Problems occur when features have dimensions smaller than the model cell size: for a 5 m wide embankment within a raster grid model with a 15 m cell size, the maximum height of the embankment could be assigned locally to each cell covering the embankment, but how could a 5 m wide ditch be represented? This redundancy has also been exploited to improve wetting/drying algorithms, using the sub-grid-scale LiDAR heights within finite elements at the waterline.
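As a concrete illustration of the spatially varying friction idea described in the abstract above, the sketch below maps a LiDAR-derived vegetation height raster to a per-cell Manning's n. The height classes and n values are assumptions for illustration, not values from the work described.

```python
# Hypothetical sketch: derive a spatially varying Manning's n from a
# LiDAR vegetation height raster. The height thresholds and n values
# below are assumed for illustration only.
import numpy as np

def manning_from_vegetation(height_m: np.ndarray) -> np.ndarray:
    """Assign Manning's n per raster cell from vegetation height (metres)."""
    n = np.full(height_m.shape, 0.03)   # bare ground / very short grass
    n[height_m >= 0.1] = 0.05           # short vegetation (grass, crops)
    n[height_m >= 1.0] = 0.08           # hedges, shrubs
    n[height_m >= 5.0] = 0.12           # trees
    return n

veg_height = np.array([[0.0, 0.3, 1.5],
                       [0.2, 6.0, 0.05]])   # toy 2 x 3 vegetation height map [m]
print(manning_from_vegetation(veg_height))
```

A per-cell friction field of this kind can then be supplied to the flood model in place of a single calibrated floodplain friction value.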

Abstract:

A new heuristic for the Steiner Minimal Tree problem is presented here. The method described is based on the detection of particular sets of nodes in networks, the “Hot Spot” sets, which are used to obtain better approximations of the optimal solutions. An algorithm is also proposed which is capable of improving the solutions obtained by classical heuristics, by means of a stirring process of the nodes in solution trees. Classical heuristics and an enumerative method are used as comparison terms in the experimental analysis, which demonstrates the effectiveness of the heuristic discussed in this paper.
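For context, the sketch below implements one classical comparison heuristic of the kind referred to above: the distance-network (metric-closure MST) heuristic, in which a minimum spanning tree is built over the terminals using shortest-path distances and then expanded back into paths of the original network. It is not the “Hot Spot” heuristic of the paper, the toy graph at the end is an assumption, and a full implementation would additionally re-compute an MST on the expanded subgraph and prune non-terminal leaves.

```python
# Sketch of a classical Steiner-tree heuristic (distance-network / metric-closure
# MST), of the kind used as a comparison term; NOT the "Hot Spot" method.
# Graphs are dicts of dicts: {u: {v: weight, ...}, ...}, assumed connected.
import heapq
import itertools

def dijkstra(graph, source):
    """Shortest-path distances and predecessors from `source`."""
    dist, prev = {source: 0.0}, {}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def steiner_heuristic(graph, terminals):
    """Approximate Steiner tree: MST of the metric closure over the terminals,
    with each closure edge expanded into a shortest path of the original graph."""
    closure = {t: dijkstra(graph, t) for t in terminals}
    edges = sorted((closure[a][0][b], a, b)
                   for a, b in itertools.combinations(terminals, 2))
    parent = {t: t for t in terminals}          # union-find over terminals
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree_edges = set()
    for w, a, b in edges:                       # Kruskal on the metric closure
        ra, rb = find(a), find(b)
        if ra == rb:
            continue
        parent[ra] = rb
        dist, prev = closure[a]                 # expand (a, b) into a real path
        node = b
        while node != a:
            tree_edges.add(tuple(sorted((node, prev[node]))))
            node = prev[node]
    return tree_edges

g = {"a": {"x": 1}, "b": {"x": 1}, "c": {"x": 1},
     "x": {"a": 1, "b": 1, "c": 1}}
print(steiner_heuristic(g, ["a", "b", "c"]))    # expect the star through "x"
```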

Abstract:

A new heuristic for the Steiner minimal tree problem is presented. The method described is based on the detection of particular sets of nodes in networks, the “hot spot” sets, which are used to obtain better approximations of the optimal solutions. An algorithm is also proposed which is capable of improving the solutions obtained by classical heuristics, by means of a stirring process of the nodes in solution trees. Classical heuristics and an enumerative method are used as comparison terms in the experimental analysis, which demonstrates the capability of the heuristic discussed in this paper.

Abstract:

In recent years, the unpredictable growth of the Internet has further highlighted the congestion problem, one of the problems that have historically affected the network. This paper deals with the design and the evaluation of a congestion control algorithm which adopts a Fuzzy Controller. The analogy between Proportional Integral (PI) regulators and Fuzzy controllers is discussed, and a method to determine the scaling factors of the Fuzzy controller is presented. It is shown that the Fuzzy controller outperforms the PI under traffic conditions which are different from those related to the operating point considered in the design.
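As a point of reference for the analogy discussed above, here is a minimal discrete-time PI regulator of the kind used as the comparison baseline; it is not the Fuzzy controller itself, and the gains, reference queue length and toy fluid queue model are all assumed values.

```python
# Minimal sketch of a discrete-time PI congestion controller (the comparison
# baseline, not the Fuzzy controller): adjust a source sending rate so that a
# bottleneck queue settles at a reference length. All values are assumed.
Kp, Ki = 0.2, 0.05              # proportional and integral gains (assumed)
q_ref, capacity = 50.0, 100.0   # target queue [pkts], link rate [pkts/step]
base_rate = 80.0                # nominal sending rate at the operating point

queue, integral, rate = 0.0, 0.0, base_rate
for step in range(300):
    error = q_ref - queue                                    # positive: queue too short
    integral += error
    rate = max(0.0, base_rate + Kp * error + Ki * integral)  # PI control action
    queue = max(0.0, queue + rate - capacity)                # toy fluid queue update

print(f"final rate  ~ {rate:.1f} pkts/step (link capacity {capacity:.0f})")
print(f"final queue ~ {queue:.1f} pkts (reference {q_ref:.0f})")
```

Fixed gains like these are tuned to one operating point, which is precisely the situation in which the paper finds the Fuzzy controller to be more robust.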

Abstract:

This paper presents a parallel genetic algorithm (GA) for the Steiner Problem in Networks (SPN). Several previous papers have proposed the adoption of GAs and other metaheuristics to solve the SPN, demonstrating the validity of their approaches. This work differs from them in two main respects: the size and characteristics of the networks adopted in the experiments, and the purpose for which it was undertaken. That purpose was to build a comparison term for validating deterministic and computationally inexpensive algorithms which can be used in practical engineering applications, such as multicast transmission in the Internet. The large dimensions of our sample networks, in turn, require the adoption of a parallel implementation of the Steiner GA, able to deal with such large problem instances.
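For readers unfamiliar with GA formulations of the SPN, the sketch below shows one common (serial) encoding, given only as an illustration and not as the encoding or the parallel scheme of this paper: a bitstring selects which non-terminal ("Steiner") nodes to include, and fitness is the weight of a spanning tree of the subgraph induced by the terminals plus the selected nodes, with an infinite penalty if that subgraph is disconnected. The toy graph and GA parameters are assumptions.

```python
# Hypothetical sketch of a common GA encoding for the Steiner Problem in
# Networks (serial, not the paper's parallel implementation). Graphs are
# dicts of dicts {u: {v: weight}}, assumed undirected and connected.
import random

def mst_weight(graph, nodes):
    """Prim's algorithm on the subgraph induced by `nodes`; None if disconnected."""
    nodes = set(nodes)
    start = next(iter(nodes))
    visited, total = {start}, 0.0
    while visited != nodes:
        best = min(((w, v) for u in visited for v, w in graph[u].items()
                    if v in nodes and v not in visited), default=None)
        if best is None:
            return None                      # induced subgraph is disconnected
        total += best[0]
        visited.add(best[1])
    return total

def fitness(graph, terminals, steiner_nodes, bits):
    chosen = [n for n, b in zip(steiner_nodes, bits) if b]
    w = mst_weight(graph, list(terminals) + chosen)
    return w if w is not None else float("inf")

def steiner_ga(graph, terminals, pop_size=30, generations=100):
    steiner_nodes = [n for n in graph if n not in terminals]
    pop = [[random.randint(0, 1) for _ in steiner_nodes] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(graph, terminals, steiner_nodes, c))
        survivors = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(a)) if len(a) > 1 else 0
            child = a[:cut] + b[cut:]                    # one-point crossover
            if child and random.random() < 0.1:          # bit-flip mutation
                i = random.randrange(len(child))
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda c: fitness(graph, terminals, steiner_nodes, c))
    return fitness(graph, terminals, steiner_nodes, best), best

g = {"a": {"x": 1}, "b": {"x": 1}, "c": {"x": 1},
     "x": {"a": 1, "b": 1, "c": 1}}
print(steiner_ga(g, {"a", "b", "c"}))   # expect cost 3.0 with node "x" selected
```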

Abstract:

Six parameters uniquely describe the orbit of a body about the Sun. Given these parameters, it is possible to make predictions of the body's position by solving its equation of motion. The parameters cannot be directly measured, so they must be inferred indirectly by an inversion method which uses measurements of other quantities in combination with the equation of motion. Inverse techniques are valuable tools in many applications where only noisy, incomplete, and indirect observations are available for estimating parameter values. The methodology of the approach is introduced and the Kepler problem is used as a real-world example. (C) 2003 American Association of Physics Teachers.
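A toy version of the inversion described above can be written in a few lines: recover two orbital parameters, the semi-major axis a and the eccentricity e, from noisy measurements of heliocentric distance r(t) by nonlinear least squares. This is only an illustration of the approach, not the article's worked example; the units (AU, years), the noise level and the "true" Mars-like parameters are assumptions.

```python
# Toy inverse problem (illustration only): estimate orbital parameters (a, e)
# from noisy, indirect observations of heliocentric distance r(t).
# Units are AU and years; the true parameters and noise level are assumed.
import numpy as np
from scipy.optimize import least_squares

MU = 4.0 * np.pi ** 2            # GM of the Sun in AU^3 / yr^2

def solve_kepler(M, e, iterations=20):
    """Solve Kepler's equation E - e*sin(E) = M by Newton's method."""
    E = M.copy()
    for _ in range(iterations):
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    return E

def radius(t, a, e):
    """Forward model: heliocentric distance at times t (perihelion at t = 0)."""
    M = np.sqrt(MU / a ** 3) * t           # mean anomaly
    E = solve_kepler(M, e)                 # eccentric anomaly
    return a * (1.0 - e * np.cos(E))

# Synthetic "observations": assumed Mars-like orbit plus Gaussian noise.
rng = np.random.default_rng(0)
t_obs = np.linspace(0.0, 1.5, 40)
r_obs = radius(t_obs, 1.524, 0.0934) + rng.normal(0.0, 0.005, t_obs.size)

# Inversion: minimise the misfit between the forward model and the data.
result = least_squares(lambda p: radius(t_obs, *p) - r_obs,
                       x0=[1.0, 0.05], bounds=([0.5, 0.0], [3.0, 0.5]))
print("estimated a, e:", result.x)
```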

Abstract:

This paper is concerned with solving numerically the Dirichlet boundary value problem for Laplace’s equation in a nonlocally perturbed half-plane. This problem arises in the simulation of classical unsteady water wave problems. The starting point for the numerical scheme is the boundary integral equation reformulation of this problem as an integral equation of the second kind on the real line in Preston et al. (2008, J. Int. Equ. Appl., 20, 121–152). We present a Nyström method for numerical solution of this integral equation and show stability and convergence, and we present and analyse a numerical scheme for computing the Dirichlet-to-Neumann map, i.e., for deducing the instantaneous fluid surface velocity from the velocity potential on the surface, a key computational step in unsteady water wave simulations. In particular, we show that our numerical schemes are superalgebraically convergent if the fluid surface is infinitely smooth. The theoretical results are illustrated by numerical experiments.
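To illustrate the Nyström idea in its simplest setting (a generic textbook sketch on a finite interval, not the paper's scheme for the half-plane problem), the code below discretizes a second-kind integral equation phi(x) - ∫_0^1 K(x,y) phi(y) dy = f(x) with Gauss-Legendre quadrature and solves the resulting linear system at the quadrature nodes. The kernel K(x,y) = xy and right-hand side f(x) = x are chosen so that the exact solution is phi(x) = 1.5x.

```python
# Generic Nystrom-method sketch for a second-kind integral equation on [0, 1]:
#   phi(x) - \int_0^1 K(x, y) phi(y) dy = f(x),
# discretized with Gauss-Legendre quadrature. Not the paper's half-plane scheme;
# K(x, y) = x*y and f(x) = x are chosen so the exact solution is phi(x) = 1.5*x.
import numpy as np

n = 8
nodes, weights = np.polynomial.legendre.leggauss(n)
x = 0.5 * (nodes + 1.0)        # quadrature nodes mapped from [-1, 1] to [0, 1]
w = 0.5 * weights              # corresponding quadrature weights

K = np.outer(x, x)             # kernel values K(x_i, y_j) = x_i * y_j
A = np.eye(n) - K * w          # Nystrom system: phi_i - sum_j w_j K_ij phi_j = f_i
phi = np.linalg.solve(A, x)    # right-hand side f(x_i) = x_i

print("max error vs exact phi = 1.5*x:", np.max(np.abs(phi - 1.5 * x)))
```

Because the quadrature rule is spectrally accurate for smooth integrands, the discrete solution here is accurate to near machine precision, echoing in a much simpler setting the superalgebraic convergence reported above for smooth surfaces.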