955 results for Convex Polygon
Abstract:
The number of remote sensing platforms and sensors rises almost every year, yet much work on the interpretation of land cover is still carried out using either single images or images from the same source taken at different dates. This proliferation of images raises two questions: can the information contained in different scenes be used to improve classification accuracy, and what is the best way to combine the different imagery? Two such complementary image sources are MODIS on the Terra platform and ETM+ on board Landsat 7. Daily MODIS images provide 36 spectral bands at 250-1000 m spatial resolution, while ETM+ provides seven spectral bands at 30 m spatial resolution with a 16-day revisit period. In the UK, cloud cover may mean that only a few ETM+ scenes are available for any particular year, and these may not fall at the time of year of most interest. The MODIS data may provide information on land cover over the growing season, such as harvest dates, that is not present in the ETM+ data. Therefore, the primary objective of this work is to develop a methodology for the integration of medium spatial resolution Landsat ETM+ imagery with multi-temporal, multi-spectral, low-resolution MODIS/Terra images, with the aim of improving the classification of agricultural land. Other data, such as field boundaries from existing maps, may also be incorporated. When classifying agricultural land cover of the type seen in the UK, where crops are largely sown in homogeneous fields with clear and often mapped boundaries, classification is greatly improved by using the mapped polygons and taking the classification of the polygon as a whole as an a priori probability when classifying each individual pixel with a Bayesian approach. When dealing with multiple images from different platforms and dates, it is highly unlikely that the pixels will be exactly co-registered, and each pixel will contain a mixture of different real-world land covers. Similarly, the different atmospheric conditions prevailing on different days mean that the same emission from the ground will give rise to different signals at the sensor. Therefore, a method is presented that models the instantaneous field of view and atmospheric effects to enable the different remotely sensed data sources to be integrated.
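The abstract does not spell out the Bayesian update itself; the following minimal sketch (Python, with invented array names and numbers) illustrates how a whole-polygon classification can serve as the a priori probability when labelling each pixel, which is the essence of the approach described above.

```python
import numpy as np

def classify_pixels_with_polygon_prior(pixel_likelihoods, polygon_prior):
    """Combine per-pixel class likelihoods with a polygon-level prior.

    pixel_likelihoods : (n_pixels, n_classes) array of p(x_i | c)
    polygon_prior     : (n_classes,) array of p(c) from classifying the
                        polygon as a whole, used as the a priori probability.
    Returns the MAP class label for each pixel in the polygon.
    """
    posterior = pixel_likelihoods * polygon_prior        # Bayes: p(c|x) is proportional to p(x|c) p(c)
    posterior /= posterior.sum(axis=1, keepdims=True)    # normalise per pixel
    return posterior.argmax(axis=1)

# Illustrative values: 3 pixels, 2 classes; the polygon as a whole favours class 0.
likes = np.array([[0.4, 0.6], [0.7, 0.3], [0.45, 0.55]])
prior = np.array([0.8, 0.2])
print(classify_pixels_with_polygon_prior(likes, prior))  # the prior tips ambiguous pixels to class 0
```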
Abstract:
Urban regions present some of the most challenging areas for the remote sensing community. Many different types of land cover have similar spectral responses, making them difficult to distinguish from one another. Traditional per-pixel classification techniques suffer particularly badly because they use only these spectral properties to determine a class, ignoring other properties of the image such as context. This project presents the results of the classification of a densely urban area of Dudley, West Midlands, using four methods: Supervised Maximum Likelihood, SMAP, ECHO and Unsupervised Maximum Likelihood. An accuracy assessment method is then developed to allow a fair representation of each procedure and a direct comparison between them. Subsequently, a classification procedure is developed that makes use of the context in the image through a per-polygon classification. The imagery is broken up into a series of polygons extracted with the Marr-Hildreth zero-crossing edge detector. These polygons are then refined using a region-growing algorithm and classified according to the mean class of the refined polygons. The imagery produced by this technique is shown to be of better quality and higher accuracy than that of the other, conventional methods. Further refinements are suggested and examined to improve the aesthetic appearance of the imagery. Finally, a comparison is made with the results of a previous study of the James Bridge catchment in Darlaston, West Midlands, showing that the polygon-classified ATM imagery performs significantly better than the Maximum Likelihood classified videography used in the initial study, despite the presence of geometric correction errors.
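A minimal sketch of the Marr-Hildreth zero-crossing step described above, using SciPy's Laplacian-of-Gaussian filter; the sigma value and the sign-change test are illustrative choices, not the project's actual parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def marr_hildreth_edges(image, sigma=2.0):
    """Return a boolean edge map from zero-crossings of the LoG response."""
    log = gaussian_laplace(image.astype(float), sigma=sigma)
    edges = np.zeros_like(log, dtype=bool)
    # A zero-crossing occurs where the LoG response changes sign
    # between horizontally or vertically adjacent pixels.
    edges[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
    edges[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])
    return edges
```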
Abstract:
The present thesis evaluates various aspects of videokeratoscopes, which are becoming increasingly popular in the investigation of corneal topography. The accuracy and repeatability of these instruments have been assessed mainly using spherical surfaces; few studies have assessed the performance of videokeratoscopes in measuring convex aspheric surfaces. Using two videokeratoscopes, the accuracy and repeatability of measurements on twelve aspheric surfaces are determined. Overall, the accuracy and repeatability of both instruments were acceptable; however, progressively flatter surfaces introduced greater errors in measurement. The possible reasons for these errors are discussed. The corneal surface is a biological structure lubricated by the precorneal tear film. The effects of variations in the tear film on the repeatability of videokeratoscopes have not previously been determined for peripheral corneal measurements. The repeatability of two commercially available videokeratoscopes is assessed and is found to depend on the point of measurement on the corneal surface. Typically, the superior and nasal meridians exhibit the poorest repeatability. It is suggested that interference from the ocular adnexa is responsible for the reduced repeatability; this localised reduction in repeatability will occur for all videokeratoscopes. Further, comparison of the keratometers and videokeratoscopes used shows that measurements between these instruments are not interchangeable. The final stage of this thesis evaluates the performance of new algorithms. The characteristics of a new videokeratoscope are described, and this videokeratoscope is used to test the accuracy of the new algorithms on twelve aspheric surfaces. The new algorithms are accurate in determining the shape of aspheric surfaces, more so than currently proposed algorithms.
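The abstract does not reproduce the surface model; convex aspheric calibration surfaces of this kind are conventionally described by the conicoid sag equation, with apical radius R and asphericity Q (Q < 0 giving surfaces that flatten toward the periphery, the progressively flatter case noted above):

\[
z(r) = \frac{r^{2}/R}{1 + \sqrt{1 - (1+Q)\,r^{2}/R^{2}}}.
\]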
Studies on the luminance-related characteristics of the transient pattern reversal electroretinogram
Abstract:
The electroretinogram evoked by reversal pattern stimulation (rPERG) is known to contain both pattern-contrast and luminance-related components. The retinal mechanisms of the transient rPERG subserving these functional characteristics are the main concern of the present studies. Considerable attention has been paid to the luminance-related characteristics of the response. Using low-frequency attenuation analysis, the transient PERG was found to consist of two successive processes. The processes overlapped, and individual differences in the timing of each process were the major cause of the variations in the negative-potential waveform of the transient rPERG; particular attention was paid to waveforms showing a 'notch' type of variation. Across contrast levels, the amplitudes of the positive and negative potentials increased linearly with contrast, and the negative potential showed the higher sensitivity to contrast changes and the higher contrast gain. At lower contrast levels, the decreased amplitudes made the difference between the time courses of the positive and negative processes evident, explaining the appearance of the notch in some cases. Visual adaptation conditions for recording the transient rPERG are discussed. A further effort was to study the large variation of the transient rPERG (especially the positive potential, P50) in elderly subjects whose distance and near visual acuity were normal. It was found that reduction of retinal illumination contributed mostly to the P50 amplitude loss, and contrast loss mostly to the negative potential (N95) amplitude loss. Senile miosis was thought to have little effect on the reduction of retinal illumination; changes in the optics of the eye were probably its major cause, which explains the larger individual variation of the P50 amplitude in elderly PERGs. Convex defocus affected the transient rPERG more strongly than concave lenses, especially the N95 amplitude in the elderly. The loss of accommodation and the type and degree of a subject's ametropia should be taken into consideration when elderly rPERGs are analysed.
Abstract:
This paper investigates a cross-layer design approach for minimizing energy consumption and maximizing the network lifetime (NL) of a multiple-source and single-sink (MSSS) WSN with energy constraints. The optimization problem for the MSSS WSN can be formulated as a mixed-integer convex optimization problem with the adoption of time division multiple access (TDMA) in the medium access control (MAC) layer, and it becomes a convex problem by relaxing the integer constraint on time slots. The impacts of data rate, link access and routing are jointly taken into account in the problem formulation. Both linear and planar network topologies are considered for NL maximization (NLM). For linear MSSS and planar single-source and single-sink (SSSS) topologies, we use the Karush-Kuhn-Tucker (KKT) optimality conditions to derive analytical expressions of the optimal NL when all nodes are exhausted simultaneously. The problem for the planar MSSS topology is more complicated, and a decomposition and combination (D&C) approach is proposed to compute suboptimal solutions. An analytical expression of the suboptimal NL is derived for a small-scale planar network; to deal with larger-scale planar networks, an iterative algorithm is proposed for the D&C approach. Numerical results show that the upper bounds of the network lifetime obtained by our proposed optimization models are tight. Important insights into the NL and the benefits of cross-layer design for WSN NLM are obtained.
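The paper's exact model is not reproduced in the abstract; the sketch below (Python/CVXPY, with all rates, costs and energy budgets invented) shows the standard convexification trick for lifetime maximization on a linear chain: minimizing the inverse lifetime q = 1/T keeps the per-node energy constraints linear.

```python
import cvxpy as cp
import numpy as np

# Linear chain of N sensor nodes relaying toward a single sink (a sketch,
# not the paper's model). Node i receives flow f[i-1] and transmits f[i].
N = 5
s = np.full(N, 1.0)           # bits/s generated at each node (assumed)
E = np.full(N, 100.0)         # initial energy budgets (assumed)
e_tx, e_rx = 2.0, 1.0         # energy per bit to transmit / receive (assumed)

f = cp.Variable(N)            # f[i] = flow forwarded from node i toward the sink
q = cp.Variable(nonneg=True)  # inverse network lifetime

cons = [f[0] == s[0]]
cons += [f[i] == f[i - 1] + s[i] for i in range(1, N)]
# Each node's power draw must not exceed its energy budget spread over
# the lifetime 1/q; with q as a variable these constraints stay linear.
cons += [e_tx * f[0] <= q * E[0]]
cons += [e_rx * f[i - 1] + e_tx * f[i] <= q * E[i] for i in range(1, N)]

cp.Problem(cp.Minimize(q), cons).solve()
print("network lifetime estimate:", 1.0 / q.value)
```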
Abstract:
Removing noise from piecewise constant (PWC) signals is a challenging signal processing problem arising in many practical contexts. For example, in exploration geosciences, noisy drill hole records need to be separated into stratigraphic zones, and in biophysics, jumps between molecular dwell states have to be extracted from noisy fluorescence microscopy signals. Many PWC denoising methods exist, including total variation regularization, mean shift clustering, stepwise jump placement, running medians, convex clustering shrinkage and bilateral filtering; conventional linear signal processing methods are fundamentally unsuited. This paper (part I, the first of two) shows that most of these methods are associated with a special case of a generalized functional, minimized to achieve PWC denoising. The minimizer can be obtained by diverse solver algorithms, including stepwise jump placement, convex programming, finite differences, iterated running medians, least angle regression, regularization path following and coordinate descent. In the second paper, part II, we introduce novel PWC denoising methods and present comparisons between these methods on synthetic and real signals, showing that the new understanding of the problem gained in part I leads to new methods that have a useful role to play.
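As one concrete instance of the convex-programming route named above, the following sketch (Python/CVXPY, with a synthetic signal and an arbitrary regularization weight) poses 1-D total variation denoising as a convex program.

```python
import cvxpy as cp
import numpy as np

# Total variation regularization for 1-D piecewise constant denoising:
# minimize 0.5*||x - y||^2 + lam*||diff(x)||_1, a convex problem.
rng = np.random.default_rng(0)
truth = np.repeat([0.0, 2.0, -1.0, 1.0], 50)        # piecewise constant signal
y = truth + 0.3 * rng.standard_normal(truth.size)   # noisy observation

x = cp.Variable(y.size)
lam = 2.0                                            # made-up regularization weight
objective = 0.5 * cp.sum_squares(x - y) + lam * cp.norm1(cp.diff(x))
cp.Problem(cp.Minimize(objective)).solve()

print("max abs error after denoising:", np.abs(x.value - truth).max())
```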
Abstract:
To investigate investment behaviour, the present study applies panel data techniques, in particular the Arellano-Bond (1991) GMM estimator, to data on Estonian manufacturing firms from the period 1995-1999. We employ a model of optimal capital accumulation in the presence of convex adjustment costs. The main research findings are that domestic companies appear to be more financially constrained than those with foreign investors present, and that smaller firms are more constrained than their larger counterparts.
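The abstract does not state the model; a standard formulation of optimal capital accumulation with convex (here quadratic) adjustment costs, which may differ in detail from the paper's specification, is:

\[
\max_{\{I_t\}} \; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^{t}\left[\pi(K_t) - I_t - \frac{b}{2}\left(\frac{I_t}{K_t}\right)^{2} K_t\right],
\qquad K_{t+1} = (1-\delta)K_t + I_t ,
\]

where the quadratic term makes the marginal cost of investment rise with the investment rate, so adjustment is spread over time.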
Abstract:
Information extraction or knowledge discovery from large data sets should be linked to a data aggregation process. Data aggregation can result in a new data representation with a decreased number of objects in a given set. A deterministic approach to separable data aggregation yields a smaller number of objects without mixing objects from different categories. A statistical approach is less restrictive and allows for almost separable data aggregation with a low level of mixing of objects from different categories. Layers of formal neurons can be designed for the purpose of data aggregation in both the deterministic and the statistical approach. The proposed design method is based on minimization of convex and piecewise-linear (CPL) criterion functions.
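The abstract leaves the CPL criterion unspecified; the sketch below (Python, with a perceptron-style hinge sum standing in for the actual criterion) shows how such a convex piecewise-linear function can be minimized by subgradient descent.

```python
import numpy as np

def minimize_cpl(X, y, steps=500, lr=0.05):
    """Subgradient descent on a convex piecewise-linear (CPL) criterion.

    Here the criterion is the perceptron-style hinge sum
        Phi(w) = sum_i max(0, 1 - y_i * (w . x_i)),
    which reaches zero only when the two categories (y_i in {-1, +1})
    are separated. It is one concrete instance of the CPL family.
    """
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        margins = y * (X @ w)
        active = margins < 1.0                 # points still violating the margin
        if not active.any():
            break                              # categories separated: Phi(w) = 0
        # A subgradient of Phi at w: sum of -y_i * x_i over active points.
        grad = -(y[active, None] * X[active]).sum(axis=0)
        w -= lr * grad
    return w
```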
Abstract:
Using monotone bifunctions, we introduce a recession concept for general equilibrium problems relying on a notion of variational convergence. The purpose is to extend some results of P. L. Lions on variational problems. In the process we generalize results of H. Brezis and H. Attouch on the convergence of the resolvents associated with maximal monotone operators.
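For reference, in the standard setting the equilibrium problem associated with a bifunction F on a closed convex set K, and the monotonicity assumption used above, read:

\[
\text{find } \bar{x} \in K \text{ such that } F(\bar{x}, y) \ge 0 \quad \forall\, y \in K,
\]
\[
F \text{ monotone:} \quad F(x,y) + F(y,x) \le 0 \quad \forall\, x, y \in K.
\]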
Abstract:
We consider the question of whether the convexity assumption on the set involved in the Clarke-Ledyaev inequality can be relaxed. In the case when the point lies outside the convex hull of the set, we show that a Clarke-Ledyaev type inequality holds if and only if a certain geometric relation between the point and the set is satisfied.
Abstract:
In this paper an alternative characterization of the class of functions called k-uniformly convex is found. Various relations concerning connections with other classes of univalent functions are given. Moreover, a new class of univalent functions, analogous to the 'Mocanu class' of functions, is introduced. Some results concerning this class are derived.
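As commonly defined (following Kanas and Wisniowska; the paper's characterization may use an equivalent form), an analytic univalent f belongs to the class of k-uniformly convex functions on the unit disk when

\[
\operatorname{Re}\left(1 + \frac{z f''(z)}{f'(z)}\right) > k \left|\frac{z f''(z)}{f'(z)}\right|, \qquad z \in \mathbb{D},\; k \ge 0 ,
\]

which reduces to ordinary convexity for k = 0.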
Abstract:
For a Polish space M and a Banach space E, let B1(M, E) be the space of first Baire class functions from M to E, endowed with the pointwise weak topology. We study the compact subsets of B1(M, E) and show that the fundamental results proved by Rosenthal, Bourgain, Fremlin, Talagrand and Godefroy in the case E = R also hold in the general case. For instance: a subset of B1(M, E) is compact iff it is sequentially (resp. countably) compact; the convex hull of a compact bounded subset of B1(M, E) is relatively compact; etc. We also show that our class includes Gulko compact spaces. In the second part of the paper we examine under which conditions a bounded linear operator T : X* → Y, such that T|_{B_{X*}} : (B_{X*}, w*) → Y is a Baire-1 function, is the pointwise limit of a sequence (T_n) of operators with T_n|_{B_{X*}} : (B_{X*}, w*) → (Y, ‖·‖) continuous for all n ∈ N. Our results in this case are connected with classical results of Choquet, Odell and Rosenthal.
Abstract:
We prove that in several classes of optimization problems, such as lower semicontinuous functions bounded from below, lower semicontinuous or continuous functions bounded below by a coercive function, and quasi-convex continuous functions with the topology of uniform convergence, the complement of the set of well-posed problems is σ-porous. These results are obtained as realizations of a theorem extending a variational principle of Ioffe and Zaslavski.