76 results for LEVEL SET METHODS
in CentAUR: Central Archive University of Reading - UK
Abstract:
The level set method is commonly used for image noise removal. Existing studies concentrate mainly on determining the speed function of the evolution equation. Based on the idea of the Canny operator, this letter introduces a new method of controlling the level set evolution, in which the edge strength is taken into account when choosing curvature flows for the speed function, and the normal-to-edge direction is used to orient the diffusion of the moving interface. The addition of an energy term to penalize irregularity allows better preservation of local edge information. In contrast with previous Canny-based level set methods, which usually adopt a two-stage framework, the proposed algorithm executes all of the above operations in a single pass during noise removal.
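The edge-aware evolution described above can be illustrated with a minimal one-dimensional sketch: a diffusion coefficient derived from the local gradient magnitude (a simple stand-in for Canny edge strength) suppresses smoothing across strong edges while allowing it in flat regions. The function names and the parameter `k` are assumptions for illustration; this is not the authors' algorithm.

```python
import math

def edge_strength(u, i):
    # Central-difference gradient magnitude: a crude proxy for Canny edge strength.
    return abs(u[min(i + 1, len(u) - 1)] - u[max(i - 1, 0)]) / 2.0

def denoise(u, iters=50, dt=0.2, k=0.2):
    # Explicit diffusion whose coefficient c shrinks near strong edges,
    # so noise is smoothed in flat regions while the edge itself is preserved.
    u = list(u)
    for _ in range(iters):
        new = list(u)
        for i in range(1, len(u) - 1):
            g = edge_strength(u, i)
            c = math.exp(-(g / k) ** 2)  # ~1 in flat regions, ~0 across edges
            new[i] = u[i] + dt * c * (u[i + 1] - 2.0 * u[i] + u[i - 1])
        u = new
    return u

# A noisy step: the flat parts are smoothed, the step survives.
signal = [0.05 * (-1) ** i for i in range(10)] + [1 + 0.05 * (-1) ** i for i in range(10)]
restored = denoise(signal)
```

The exponential edge-stopping coefficient here plays the role that the edge-strength-weighted speed function plays in the letter: diffusion is oriented away from strong gradients rather than applied uniformly.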
Abstract:
This paper presents a two-stage image restoration framework tailored to a novel rectangular poor-pixels detector which, with its miniature size, light weight and low power consumption, has great value in micro vision systems. To meet the demand for fast processing, only a few measured images, shifted at the subpixel level, are needed for the fusion operation, fewer than traditional approaches require. Using maximum likelihood estimation with a least squares method, a preliminary restored image is linearly interpolated. After noise removal via Canny operator based level set evolution, the final high-quality restored image is obtained. Experimental results demonstrate the effectiveness of the proposed framework. It is a sensible step towards subsequent image understanding and object identification.
Abstract:
Objectives: To assess the potential source of variation that a surgeon may add to patient outcome in a clinical trial of surgical procedures. Methods: Two large (n = 1380) parallel multicentre randomized surgical trials, involving 43 surgeons, were undertaken to compare laparoscopically assisted hysterectomy with conventional methods of abdominal and vaginal hysterectomy. The primary end point of the trial was the occurrence of at least one major complication. Patients were nested within surgeons, giving the data set a hierarchical structure. A total of 10% of patients had at least one major complication, that is, a sparse binary outcome variable. A mixed-effects logistic regression model (with logit link function) was used to model the probability of a major complication, with surgeon fitted as a random effect. Models were fitted using the method of maximum likelihood in SAS®. Results: There were many convergence problems. These were resolved using a variety of approaches, including: treating all effects as fixed for the initial model building; modelling the variance of a parameter on a logarithmic scale; and centring of continuous covariates. The initial model building process indicated no significant 'type of operation' by surgeon interaction effect in either trial; the 'type of operation' term was highly significant in the abdominal trial; and the 'surgeon' term was not significant in either trial. Conclusions: The analysis did not find a surgeon effect, but it is difficult to conclude that there was no difference between surgeons: the statistical test may have lacked sufficient power, and the variance estimates were small with large standard errors, indicating that the precision of the variance estimates may be questionable.
Abstract:
The paper considers second kind integral equations of the form x = y + Kzx, where (Kzx)(s) = ∫ k(s − t)z(t)x(t) dt, in which the factor z is bounded but otherwise arbitrary, so that equations of Wiener-Hopf type are included as a special case. Conditions on a set W are obtained such that a generalized Fredholm alternative is valid: if W satisfies these conditions and I − Kz is injective for each z ∈ W, then I − Kz is invertible for each z ∈ W and the operators (I − Kz)−1 are uniformly bounded. As a special case some classical results relating to Wiener-Hopf operators are reproduced. A finite section version of the above equation (with the range of integration reduced to [−a, a]) is considered, as are projection and iterated projection methods for its solution. The operators (I − Kz,a)−1 (where Kz,a denotes the finite section version of Kz) are shown to be uniformly bounded (in z and a) for all a sufficiently large. Uniform stability and convergence results, for the projection and iterated projection methods, are obtained. The argument generalizes an idea in collectively compact operator theory. Some new results in this theory are obtained and applied to the analysis of projection methods for the above equation when z is compactly supported and k(s − t) is replaced by the general kernel k(s, t). A boundary integral equation of the above type, which models outdoor sound propagation over inhomogeneous level terrain, illustrates the application of the theoretical results developed.
Abstract:
We present an efficient graph-based algorithm for quantifying the similarity of household-level energy use profiles, using a notion of similarity that allows for small time-shifts when comparing profiles. Experimental results on a real smart meter data set demonstrate that, in cases of practical interest, our technique is far faster than the existing method for computing the same similarity measure. Having a fast algorithm for measuring profile similarity improves the efficiency of tasks such as clustering of customers and cross-validation of forecasting methods using historical data. Furthermore, we apply a generalisation of our algorithm to produce substantially better household-level energy use forecasts from historical smart meter data.
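The shift-tolerant notion of similarity can be sketched simply: each reading may be matched against the closest reading in the other profile within ±w time steps. This is an illustration of the general idea only, not the paper's graph-based algorithm; the function name and the window parameter `w` are assumptions.

```python
def shift_tolerant_distance(a, b, w=2):
    # Symmetric distance between two load profiles: each reading in one
    # profile matches its best counterpart in the other within +/- w steps,
    # so a small temporal shift costs nothing.
    def one_way(p, q):
        total = 0.0
        for t, v in enumerate(p):
            lo, hi = max(0, t - w), min(len(q), t + w + 1)
            total += min(abs(v - q[s]) for s in range(lo, hi))
        return total
    return one_way(a, b) + one_way(b, a)

# Two profiles differing only by a one-step shift of the evening peak
# are identical under this measure (with w >= 1), but not under a
# pointwise comparison (w = 0).
a = [0, 0, 5, 0, 0]
b = [0, 5, 0, 0, 0]
```

The naive implementation above costs O(n·w) per pair; the point of the paper's graph-based formulation is to compute such measures far faster in practice.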
Abstract:
This paper reports the current state of work to simplify our previous model-based methods for visual tracking of vehicles for use in a real-time system intended to provide continuous monitoring and classification of traffic from a fixed camera on a busy multi-lane motorway. The main constraints of the system design were: (i) all low level processing to be carried out by low-cost auxiliary hardware, (ii) all 3-D reasoning to be carried out automatically off-line, at set-up time. The system developed uses three main stages: (i) pose and model hypothesis using 1-D templates, (ii) hypothesis tracking, and (iii) hypothesis verification, using 2-D templates. Stages (i) & (iii) have radically different computing performance and computational costs, and need to be carefully balanced for efficiency. Together, they provide an effective way to locate, track and classify vehicles.
Abstract:
A new spectral-based approach is presented to find orthogonal patterns from gridded weather/climate data. The method is based on optimizing the interpolation error variance. The optimally interpolated patterns (OIPs) are then given by the eigenvectors of the interpolation error covariance matrix, obtained using the cross-spectral matrix. The formulation of the approach is presented, and the method is applied to low-dimensional stochastic toy models and to various reanalysis datasets. In particular, it is found that the lowest-frequency patterns correspond to the largest eigenvalues, that is, variances, of the interpolation error covariance matrix. The approach has been applied to Northern Hemispheric (NH) and tropical sea level pressure (SLP) and to Indian Ocean sea surface temperature (SST). Two main OIP patterns are found for the NH SLP, representing the North Atlantic Oscillation and the North Pacific pattern, respectively. The leading tropical SLP OIP represents the Southern Oscillation. For the Indian Ocean SST, the leading OIP pattern shows a tripole-like structure with one sign over the eastern, northwestern and southwestern parts of the basin and the opposite sign over the remaining parts. The pattern is also found to have a high lagged correlation with the Niño-3 index at a 6-month lag.
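The link between patterns and eigenvalues can be illustrated with plain power iteration: the leading eigenvector of a covariance matrix is the pattern carrying the largest variance. This is a generic sketch of the eigen-decomposition step only, not the cross-spectral OIP machinery of the paper; the function name is an assumption.

```python
def leading_pattern(cov, iters=200):
    # Power iteration: repeatedly apply the covariance matrix and normalise.
    # The iterate converges to the eigenvector with the largest eigenvalue,
    # i.e. the spatial pattern explaining the most variance.
    n = len(cov)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = sum(x * x for x in w) ** 0.5  # eigenvalue estimate once v converges
        v = [x / lam for x in w]
    return lam, v

# For cov = [[2, 1], [1, 2]] the leading eigenvalue is 3,
# with pattern proportional to (1, 1).
```

In the paper the matrix being decomposed is the interpolation error covariance matrix rather than the raw data covariance, which is what singles out the lowest-frequency patterns.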
Abstract:
Problem structuring methods (PSMs) are widely applied across a range of variable but generally small-scale organizational contexts. However, it has been argued that they are seen and experienced less often in areas of wide-ranging and highly complex human activity, specifically those relating to sustainability, environment, democracy and conflict (SEDC). In an attempt to plan, track and influence human activity in SEDC contexts, the authors of this paper make the theoretical case for a PSM derived from various existing approaches. They show how it could make a contribution in a specific practical context: sustainable coastal development projects around the Mediterranean which have utilized systemic and prospective sustainability analysis or, as it is now known, Imagine. The latter is itself a PSM, but one which is 'bounded' within the limits of the project to help deliver the required 'deliverables' set out in the project blueprint. The authors argue that sustainable development projects would benefit from a deconstruction of process by those engaged in the project, and suggest one approach that could be taken: a breakout from a project-bounded PSM to an analysis that embraces the project itself. The paper begins with an introduction to the sustainable development context and literature, and then grounds the debate within a set of projects facilitated by Blue Plan for Mediterranean coastal zones. The paper goes on to show how the analytical framework could be applied and what insights might be generated.
Abstract:
The goal of this review is to provide a state-of-the-art survey of sampling and probe methods for the solution of inverse problems. Further, a configuration approach to some of the problems is presented. We study the concepts and analytical results for several recent sampling and probe methods. We give an introduction to the basic idea behind each method using a simple model problem, and then provide a general formulation in terms of particular configurations to study the range of arguments used to set up the method. This provides a novel way to present the algorithms and the analytic arguments for their investigation in a variety of different settings. In detail, we investigate the probe method (Ikehata), the linear sampling method (Colton-Kirsch), the factorization method (Kirsch), the singular sources method (Potthast), the no response test (Luke-Potthast), the range test (Kusiak, Potthast and Sylvester) and the enclosure method (Ikehata) for the solution of inverse acoustic and electromagnetic scattering problems. The main ideas, approaches and convergence results of the methods are presented. For each method, we provide a historical survey of applications to different situations.
Abstract:
1. Habitat fragmentation can affect pollinator and plant population structure in terms of species composition, abundance, area covered and density of flowering plants. This, in turn, may affect pollinator visitation frequency, pollen deposition, seed set and plant fitness. 2. A reduction in the quantity of flower visits can be coupled with a reduction in the quality of pollination service and hence the plants' overall reproductive success and long-term survival. Understanding the relationship between plant population size and/or isolation and pollination limitation is of fundamental importance for plant conservation. 3. We examined flower visitation and seed set of 10 different plant species from five European countries to investigate the general effects of plant population size and density, both within (patch level) and between populations (population level), on seed set and pollination limitation. 4. We found evidence that the effects of area and density of flowering plant assemblages were generally more pronounced at the patch level than at the population level. We also found that patch and population level together influenced flower visitation and seed set, and the latter increased with increasing patch area and density, but this effect was only apparent in small populations. 5. Synthesis. By using an extensive pan-European data set on flower visitation and seed set, we have identified a general pattern in the interplay between the attractiveness of flowering plant patches for pollinators and the density dependence of flower visitation, as well as a strong plant species-specific response to habitat fragmentation. This can guide efforts to conserve plant–pollinator interactions, ecosystem functioning and plant fitness in fragmented habitats.
Abstract:
Satellite-observed data for flood events have been used to calibrate and validate flood inundation models, providing valuable information on the spatial extent of the flood. Improvements in the resolution of this satellite imagery have enabled indirect remote sensing of water levels by using an underlying LiDAR DEM to extract the water surface elevation at the flood margin. In addition to comparison of the spatial extent, this now allows direct comparison between modelled and observed water surface elevations. Using a 12.5 m ERS-1 image of a flood event in 2006 on the River Dee, North Wales, UK, both of these data types are extracted and each is assessed for its value in the calibration of flood inundation models. A LiDAR-guided snake algorithm is used to extract an outline of the flood from the satellite image. From the extracted outline, a binary grid of wet/dry cells is created at the same resolution as the model; using this, the spatial extent of the modelled and observed flood can be compared with a measure of fit between the two binary patterns of flooding. Water heights are extracted at points at intervals of approximately 100 m along the extracted outline, and Student's t-test is used to compare modelled and observed water surface elevations. A LISFLOOD-FP model of the catchment is set up using LiDAR topographic data resampled to the 12.5 m resolution of the satellite image, and calibration of the friction parameter in the model is undertaken using each of the two approaches. Comparison between the two approaches highlights the sensitivity of the spatial measure of fit to uncertainty in the observed data, and the potential drawbacks of using the spatial extent when parts of the flood are contained by the topography.
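A widely used spatial measure of fit between binary wet/dry flood patterns is the ratio of the intersection to the union of the modelled and observed wet areas. The sketch below illustrates that idea; the function name is an assumption, and published studies use several variants of this statistic.

```python
def fit_statistic(model_wet, obs_wet):
    # F = (cells wet in both model and observation) / (cells wet in either).
    # 1.0 means the flood extents agree perfectly; 0.0 means no overlap.
    both = sum(1 for m, o in zip(model_wet, obs_wet) if m and o)
    either = sum(1 for m, o in zip(model_wet, obs_wet) if m or o)
    return both / either if either else 1.0
```

Calibration of the friction parameter then amounts to choosing the value that maximises this statistic over the flattened binary grids, which is exactly where the abstract's noted sensitivity to observational uncertainty enters.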
Abstract:
Recent observations from the Argo dataset of temperature and salinity profiles are used to evaluate a series of 3-year data assimilation experiments in a global ice–ocean general circulation model. The experiments are designed to evaluate a new data assimilation system whereby salinity is assimilated along isotherms, S(T). In addition, the role of a balancing salinity increment to maintain water mass properties is investigated. This balancing increment is found to effectively prevent spurious mixing in tropical regions induced by univariate temperature assimilation, allowing the correction of isotherm geometries without adversely influencing temperature–salinity relationships. In addition, the balancing increment is able to correct a fresh bias associated with a weak subtropical gyre in the North Atlantic using only temperature observations. The S(T) assimilation method is found to provide an important improvement over conventional depth-level assimilation, with lower root-mean-squared forecast errors over the upper 500 m in the tropical Atlantic and Pacific Oceans. An additional set of experiments is performed whereby Argo data are withheld and used for independent evaluation. The most significant improvements from Argo assimilation are found in less well-observed regions (the Indian, South Atlantic and South Pacific Oceans). When Argo salinity data are assimilated in addition to temperature, improvements to modelled temperature fields are obtained owing to corrections to model density gradients and the resulting circulation. It is found that observations from the Argo array provide an invaluable tool both for correcting modelled water mass properties through data assimilation and for evaluating the assimilation methods themselves.
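The core of S(T) assimilation is looking up salinity at a target temperature rather than at a fixed depth. A minimal linear-interpolation sketch of that lookup is below; the function and variable names are assumptions, and a real system must also handle non-monotonic temperature profiles.

```python
def salinity_on_isotherm(temps, sals, t_target):
    # Walk down the profile and linearly interpolate salinity where the
    # target isotherm lies between two adjacent temperature levels.
    for i in range(len(temps) - 1):
        t0, t1 = temps[i], temps[i + 1]
        if (t0 - t_target) * (t1 - t_target) <= 0 and t0 != t1:
            frac = (t_target - t0) / (t1 - t0)
            return sals[i] + frac * (sals[i + 1] - sals[i])
    return None  # the isotherm does not occur in this profile

# e.g. temps = [20.0, 15.0, 10.0], sals = [35.0, 35.5, 34.5]:
# the 12.5 degree isotherm lies halfway between the last two levels.
```

Comparing model and observed salinity on isotherms in this way is what lets the assimilation correct isotherm geometry without disturbing the temperature–salinity relationship, as described above.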