45 results for capacitated p-median problems
in CentAUR: Central Archive University of Reading - UK
Abstract:
As the calibration and evaluation of flood inundation models are a prerequisite for their successful application, there is a clear need to ensure that the performance measures that quantify how well models match the available observations are fit for purpose. This paper evaluates the binary pattern performance measures that are frequently used to compare flood inundation models with observations of flood extent. This evaluation considers whether these measures are able to calibrate and evaluate model predictions in a credible and consistent way, i.e. identifying the underlying model behaviour for a number of different purposes such as comparing models of floods of different magnitudes or on different catchments. Through theoretical examples, it is shown that the binary pattern measures are not consistent for floods of different sizes, such that for the same vertical error in water level, a model of a flood of large magnitude appears to perform better than a model of a smaller magnitude flood. Further, the commonly used Critical Success Index (usually referred to as F⟨2⟩) is biased in favour of overprediction of the flood extent, and is also biased towards correctly predicting areas of the domain with smaller topographic gradients. Consequently, it is recommended that future studies consider carefully the implications of reporting conclusions using these performance measures. Additionally, future research should consider whether a more robust and consistent analysis could be achieved by using elevation comparison methods instead.
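For reference, the binary pattern comparison reduces to a contingency table over wet/dry cells. A minimal Python sketch of the Critical Success Index follows; the array names and NumPy formulation are illustrative, not taken from the paper:

```python
import numpy as np

def critical_success_index(modelled_wet: np.ndarray, observed_wet: np.ndarray) -> float:
    """Critical Success Index (the F⟨2⟩ fit measure) for binary flood-extent maps.

    Both inputs are boolean arrays of the same shape; True marks a wet cell.
    """
    a = np.sum(modelled_wet & observed_wet)    # wet in both (hits)
    b = np.sum(modelled_wet & ~observed_wet)   # wet in model only (overprediction)
    c = np.sum(~modelled_wet & observed_wet)   # wet in observations only (misses)
    return float(a / (a + b + c))
```

The asymmetry criticised above is visible in the formula: a model that floods the entire domain can never miss a wet cell (c = 0) and still scores a/(a + b) > 0, whereas an equally wrong underprediction is driven towards zero.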
Abstract:
In this paper we consider the 2D Dirichlet boundary value problem for Laplace's equation in a non-locally perturbed half-plane, with data in the space of bounded and continuous functions. We show uniqueness of solution, using standard Phragmén-Lindelöf arguments. The main result is to propose a boundary integral equation formulation, to prove equivalence with the boundary value problem, and to show that the integral equation is well posed by applying a recent partial generalisation of the Fredholm alternative in Arens et al. [J. Int. Equ. Appl. 15 (2003), pp. 1-35]. This then leads to an existence proof for the boundary value problem.
Keywords: Boundary integral equation method, water waves, Laplace's equation.
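Schematically, the setup and its reformulation take the following form (a generic sketch under standard notation, not the paper's precise operators):

```latex
% Dirichlet problem in the perturbed half-plane D with boundary \Gamma:
\[
  \Delta u = 0 \ \text{in } D, \qquad u = f \ \text{on } \Gamma, \qquad f \in BC(\Gamma),
\]
% which a layer-potential ansatz reduces to an integral equation of the
% second kind on the real line for an unknown density \psi:
\[
  \psi(s) + \int_{-\infty}^{\infty} \kappa(s,t)\,\psi(t)\,\mathrm{d}t = g(s),
  \qquad s \in \mathbb{R},
\]
% whose well-posedness follows from the generalised Fredholm alternative.
```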
Abstract:
We consider the imposition of Dirichlet boundary conditions in the finite element modelling of moving boundary problems in one and two dimensions for which the total mass is prescribed. A modification of the standard linear finite element test space allows the boundary conditions to be imposed strongly whilst simultaneously conserving a discrete mass. The validity of the technique is assessed for a specific moving mesh finite element method, although the approach is more general. Numerical comparisons are carried out for mass-conserving solutions of the porous medium equation with Dirichlet boundary conditions and for a moving boundary problem with a source term and time-varying mass.
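For context, a sketch of the continuum statement that the discrete scheme is designed to mimic (the standard form of the porous medium equation; the notation here is generic, not the paper's):

```latex
\[
  \frac{\partial u}{\partial t} = \nabla \cdot \left( u^{m} \nabla u \right)
  \ \text{in } \Omega(t), \qquad u = 0 \ \text{on } \partial\Omega(t),
\]
% with total mass conserved as the boundary moves:
\[
  \theta = \int_{\Omega(t)} u \,\mathrm{d}x, \qquad
  \frac{\mathrm{d}\theta}{\mathrm{d}t} = 0.
\]
% The modified linear test space is chosen so that a discrete analogue of
% \theta is conserved exactly while the boundary condition is imposed strongly.
```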
Abstract:
Applications such as neuroscience, telecommunication, online social networking, transport and retail trading give rise to connectivity patterns that change over time. In this work, we address the resulting need for network models and computational algorithms that deal with dynamic links. We introduce a new class of evolving range-dependent random graphs that gives a tractable framework for modelling and simulation. We develop a spectral algorithm for calibrating a set of edge ranges from a sequence of network snapshots and give a proof-of-principle illustration on some neuroscience data. We also show how the model can be used computationally and analytically to investigate the scenario where an evolutionary process, such as an epidemic, takes place on an evolving network. This allows us to study the cumulative effect of two distinct types of dynamics.
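A static member of the range-dependent class is easy to simulate: nodes i and j are joined independently with a probability that decays with the range |i - j|. The sketch below assumes the commonly used geometric decay form; the parameter values are illustrative only:

```python
import numpy as np

def range_dependent_graph(n, alpha=0.9, lam=0.5, rng=None):
    """Sample one snapshot of a range-dependent random graph on n nodes.

    Nodes i and j (i != j) are joined independently with probability
    alpha * lam**(|i - j| - 1), so short-range links dominate.
    """
    if rng is None:
        rng = np.random.default_rng()
    idx = np.arange(n)
    dist = np.abs(idx[:, None] - idx[None, :])        # pairwise ranges |i - j|
    prob = np.where(dist > 0, alpha * lam ** (dist - 1.0), 0.0)
    upper = np.triu(rng.random((n, n)) < prob, 1)     # sample each pair once
    return (upper | upper.T).astype(int)              # symmetric adjacency matrix

A0 = range_dependent_graph(100)
```

An evolving sequence is then a chain of such snapshots in which edges are born and die at range-dependent rates; calibrating the edge ranges from observed snapshots is the task the spectral algorithm addresses.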
Abstract:
Airborne scanning laser altimetry (LiDAR) is an important new data source for river flood modelling. LiDAR can give dense and accurate DTMs of floodplains for use as model bathymetry. Spatial resolutions of 0.5m or less are possible, with a height accuracy of 0.15m. LiDAR gives a Digital Surface Model (DSM), so vegetation removal software (e.g. TERRASCAN) must be used to obtain a DTM. An example used to illustrate the current state of the art will be the LiDAR data provided by the EA, which has been processed by their in-house software to convert the raw data to a ground DTM and a separate vegetation height map. Their method distinguishes trees from buildings on the basis of object size. EA data products include the DTM with or without buildings removed, a vegetation height map, a DTM with bridges removed, etc.

Most vegetation removal software ignores short vegetation less than, say, 1m high, yet typically most of a floodplain may be covered in such vegetation. We have attempted to extend vegetation height measurement to short vegetation using local height texture. The idea is to assign friction coefficients depending on local vegetation height, so that friction is spatially varying; this obviates the need to calibrate a global floodplain friction coefficient. It is not yet clear whether the method is useful, but it is worth testing further (a minimal sketch of this idea is given after this abstract).

The LiDAR DTM is usually determined by looking for local minima in the raw data, then interpolating between these to form a space-filling height surface. This is a low-pass filtering operation, in which objects of high spatial frequency such as buildings, river embankments and walls may be incorrectly classed as vegetation. The problem is particularly acute in urban areas. A solution may be to apply pattern recognition techniques to LiDAR height data fused with other data types such as LiDAR intensity or multispectral CASI data. We are attempting to use digital map data (MasterMap structured topography data) to help to distinguish buildings from trees, and roads from areas of short vegetation. The problems involved in doing this will be discussed. A related problem of how best to merge historic river cross-section data with a LiDAR DTM will also be considered.

LiDAR data may also be used to help generate a finite element mesh. In rural areas we have decomposed a floodplain mesh according to taller vegetation features such as hedges and trees, so that e.g. hedge elements can be assigned higher friction coefficients than those in adjacent fields. We are attempting to extend this approach to urban areas, so that the mesh is decomposed in the vicinity of buildings, roads, etc. as well as trees and hedges. A dominant-points algorithm is used to identify points of high curvature on a building or road, which act as initial nodes in the meshing process. A difficulty is that the resulting mesh may contain a very large number of nodes. However, the mesh generated may be useful to allow a high-resolution FE model to act as a benchmark for a more practical lower-resolution model.

A further problem discussed will be how best to exploit data redundancy due to the high resolution of the LiDAR compared to that of a typical flood model. Problems occur if features have dimensions smaller than the model cell size: for a 5m-wide embankment within a raster grid model with a 15m cell size, the maximum height of the embankment locally could be assigned to each cell covering the embankment, but how could a 5m-wide ditch be represented? Again, this redundancy has been exploited to improve wetting/drying algorithms using the sub-grid-scale LiDAR heights within finite elements at the waterline.
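The texture-based friction idea described above can be sketched briefly; the window size, the texture statistic and the height-to-friction mapping below are all illustrative assumptions, not the values under test:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def vegetation_height_from_texture(dsm, window=5):
    """Estimate short-vegetation height from local height texture.

    dsm: 2D array of LiDAR surface heights (m). The local minimum in each
    window approximates bare ground, so the local height range serves as
    a crude proxy for vegetation height.
    """
    ground = minimum_filter(dsm, size=window)
    return dsm - ground

def manning_n_from_vegetation(veg_height):
    """Map vegetation height (m) to a spatially varying Manning's n.

    The linear coefficients are placeholder values for illustration only.
    """
    return 0.03 + 0.05 * np.clip(veg_height, 0.0, 1.0)
```

The point of the mapping is that every floodplain cell gets its own friction coefficient from the data, rather than a single global value fitted in calibration.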
Abstract:
This paper is concerned with solving numerically the Dirichlet boundary value problem for Laplace's equation in a nonlocally perturbed half-plane. This problem arises in the simulation of classical unsteady water wave problems. The starting point for the numerical scheme is the boundary integral equation reformulation of this problem as an integral equation of the second kind on the real line in Preston et al. (2008, J. Int. Equ. Appl., 20, 121-152). We present a Nyström method for numerical solution of this integral equation and show stability and convergence, and we present and analyse a numerical scheme for computing the Dirichlet-to-Neumann map, i.e., for deducing the instantaneous fluid surface velocity from the velocity potential on the surface, a key computational step in unsteady water wave simulations. In particular, we show that our numerical schemes are superalgebraically convergent if the fluid surface is infinitely smooth. The theoretical results are illustrated by numerical experiments.
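The Nyström idea itself is compact: replace the integral by a quadrature rule and collocate at the quadrature nodes. A minimal sketch on a finite interval with the trapezoidal rule follows (the kernel and right-hand side are toy choices; the paper works on the whole real line with quadrature tailored to the superalgebraic rates, whereas this sketch converges only at second order):

```python
import numpy as np

def nystrom_second_kind(kernel, f, a, b, n):
    """Solve psi(s) + int_a^b k(s,t) psi(t) dt = f(s) by the Nystrom method.

    The integral is replaced by trapezoidal quadrature at nodes t_j, giving
    the linear system psi_i + sum_j w_j k(t_i, t_j) psi_j = f(t_i).
    """
    t = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))   # trapezoidal weights
    w[0] *= 0.5
    w[-1] *= 0.5
    A = np.eye(n) + kernel(t[:, None], t[None, :]) * w[None, :]
    return t, np.linalg.solve(A, f(t))

# Toy example: smooth kernel and data on [0, 1].
t, psi = nystrom_second_kind(lambda s, u: 0.5 * np.exp(-(s - u) ** 2),
                             lambda s: np.cos(np.pi * s), 0.0, 1.0, 200)
```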
Abstract:
Aircraft OH and HO2 measurements made over West Africa during the AMMA field campaign in summer 2006 have been investigated using a box model constrained to observations of long-lived species and physical parameters. "Good" agreement was found for HO2 (modelled-to-observed gradient of 1.23 ± 0.11). However, the model significantly overpredicts OH concentrations. The reasons for this are not clear, but may reflect instrumental instabilities affecting the OH measurements. Within the model, HOx concentrations in West Africa are controlled by relatively simple photochemistry, with production dominated by ozone photolysis and reaction of O(1D) with water vapour, and loss processes dominated by HO2 + HO2 and HO2 + RO2. Isoprene chemistry was found to influence forested regions. In contrast to several recent field studies in very low NOx and high isoprene environments, we do not observe any dependence of model success for HO2 on isoprene, and attribute this to efficient recycling of HOx through RO2 + NO reactions under the moderate NOx concentrations (5-300 ppt NO in the boundary layer, median 76 ppt) encountered during AMMA. This suggests that some of the problems with understanding the impact of isoprene on atmospheric composition may be limited to the extreme low range of NOx concentrations.
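The dominant terms quoted above imply a simple schematic steady-state estimate for HO2 (a sketch assuming these are the only significant production and loss channels; the rate coefficients k1, k2 are generic):

```latex
% Production via ozone photolysis followed by O(1D) + H2O -> 2 OH:
\[
  P_{\mathrm{HO}_x} \approx 2\,k_{1}\,[\mathrm{O(^1D)}][\mathrm{H_2O}],
\]
% balanced against the self-reaction loss HO2 + HO2 (neglecting HO2 + RO2):
\[
  2\,k_{1}\,[\mathrm{O(^1D)}][\mathrm{H_2O}] \approx 2\,k_{2}\,[\mathrm{HO_2}]^2
  \quad\Longrightarrow\quad
  [\mathrm{HO_2}] \approx \sqrt{\tfrac{k_{1}}{k_{2}}\,[\mathrm{O(^1D)}][\mathrm{H_2O}]}.
\]
```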
Abstract:
Aim: Previous systematic reviews have found that drug-related morbidity accounts for 4.3% of preventable hospital admissions. None, however, has identified the drugs most commonly responsible for preventable hospital admissions. The aims of this study were to estimate the percentage of preventable drug-related hospital admissions, the most common drug causes of preventable hospital admissions and the most common underlying causes of preventable drug-related admissions. Methods: Data sources were bibliographic databases, reference lists from eligible articles and study authors. Seventeen prospective observational studies reporting the proportion of preventable drug-related hospital admissions, causative drugs and/or the underlying causes of hospital admissions were selected. Included studies used multiple reviewers and/or explicit criteria to assess causality and preventability of hospital admissions. Two investigators abstracted data from all included studies using a purpose-made data extraction form. Results: From 13 papers the median percentage of preventable drug-related admissions to hospital was 3.7% (range 1.4-15.4). From nine papers the majority (51%) of preventable drug-related admissions involved either antiplatelets (16%), diuretics (16%), nonsteroidal anti-inflammatory drugs (11%) or anticoagulants (8%). From five studies the median proportion of preventable drug-related admissions associated with prescribing problems was 30.6% (range 11.1-41.8), with adherence problems 33.3% (range 20.9-41.7) and with monitoring problems 22.2% (range 0-31.3). Conclusions: Four groups of drugs account for more than 50% of all preventable drug-related hospital admissions. Concentrating interventions on these drug groups could appreciably reduce the number of preventable drug-related admissions to hospital from primary care.
Abstract:
Background: Shifting gaze and attention ahead of the hand is a natural component in the performance of skilled manual actions. Very few studies have examined the precise co-ordination between the eye and hand in children with Developmental Coordination Disorder (DCD). Methods: This study directly assessed the maturity of eye-hand co-ordination in children with DCD. A double-step pointing task was used to investigate the coupling of the eye and hand in 7-year-old children with and without DCD. Sequential targets were presented on a computer screen, and eye and hand movements were recorded simultaneously. Results: There were no differences between typically developing (TD) and DCD groups when completing fast single-target tasks. There were very few differences in the completion of the first movement in the double-step tasks, but differences did occur during the second sequential movement. One factor appeared to be the propensity for the DCD children to delay their hand movement until some period after the eye had landed on the target. This resulted in a marked increase in eye-hand lead during the second movement, disrupting the close coupling and leading to a slower and less accurate hand movement among children with DCD. Conclusions: In contrast to skilled adults, both groups of children preferred to foveate the target prior to initiating a hand movement if time allowed. The TD children, however, were more able to reduce this foveation period and shift towards a feedforward mode of control for hand movements. The children with DCD persevered with a look-then-move strategy, which led to an increase in error. For the group of DCD children in this study, there was no evidence of a problem in speed or accuracy of simple movements, but there was a difficulty in concatenating the sequential shifts of gaze and hand required for the completion of everyday tasks or typical assessment items.
Abstract:
Inverse problems for dynamical system models of cognitive processes comprise the determination of synaptic weight matrices or kernel functions for neural networks or neural/dynamic field models, respectively. We introduce dynamic cognitive modeling as a three-tier top-down approach where cognitive processes are first described as algorithms that operate on complex symbolic data structures. Second, symbolic expressions and operations are represented by states and transformations in abstract vector spaces. Third, prescribed trajectories through representation space are implemented in neurodynamical systems. We discuss the Amari equation for a neural/dynamic field theory as a special case and show that the kernel construction problem is particularly ill-posed. We suggest a Tikhonov-Hebbian learning method as a regularization technique and demonstrate its validity and robustness for basic examples of cognitive computations.
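In discretised form, this kind of regularised kernel construction reduces to ridge regression for a weight matrix mapping prescribed field states to their successors. A minimal sketch (the variable names and the least-squares formulation are illustrative, not the paper's exact setup):

```python
import numpy as np

def tikhonov_hebbian(X, Y, lam=1e-2):
    """Tikhonov-regularised solution of W X = Y for the weight matrix W.

    X: (d, n) matrix whose columns are prescribed field states x_k.
    Y: (d, n) matrix whose columns are the desired successor states y_k.
    Minimises ||W X - Y||_F^2 + lam ||W||_F^2, giving
        W = Y X^T (X X^T + lam I)^{-1},
    a regularised form of the Hebbian outer-product rule Y X^T.
    """
    d = X.shape[0]
    return Y @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(d))

# Toy usage: learn a weight matrix that carries a short prescribed
# trajectory through a 50-dimensional representation space.
rng = np.random.default_rng(0)
states = rng.standard_normal((50, 5))
X, Y = states[:, :-1], states[:, 1:]   # map each state to its successor
W = tikhonov_hebbian(X, Y)
```

The regularisation term is what tames the ill-posedness noted in the abstract: without it, X X^T is rank-deficient whenever the number of prescribed states is smaller than the field dimension.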