19 results for Convex Polygon
in the Aston University Research Archive
Abstract:
This paper draws attention to the fact that traditional Data Envelopment Analysis (DEA) models do not provide the closest possible targets (or peers) for inefficient units, and presents a procedure to obtain such targets. It focuses on non-oriented efficiency measures (which assume that production units are able to control, and thus change, inputs and outputs simultaneously), measured both in relation to a Free Disposal Hull (FDH) technology and in relation to a convex technology. The approaches developed for finding close targets are applied to a sample of Portuguese bank branches.
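For the FDH case the underlying idea can be illustrated with a minimal sketch (Python, hypothetical data): among the observed units that dominate an inefficient unit, pick the one closest to it. This is only an illustration of "closest target", not the paper's exact non-oriented formulation.

    import numpy as np

    # Hypothetical data: three observed units, two inputs and one output each.
    X = np.array([[5.0, 3.0], [4.0, 2.5], [6.0, 2.0]])   # inputs
    Y = np.array([[9.0], [10.0], [8.0]])                 # outputs

    def closest_fdh_target(x0, y0, X, Y):
        """Among observed units that dominate (x0, y0), i.e. use no more of any
        input and produce no less of any output, return the one with the smallest
        total L1 slack. Returns None if (x0, y0) is FDH-efficient (undominated)."""
        candidates = [(np.sum(x0 - X[j]) + np.sum(Y[j] - y0), j)
                      for j in range(len(X))
                      if np.all(X[j] <= x0) and np.all(Y[j] >= y0)
                      and (np.any(X[j] < x0) or np.any(Y[j] > y0))]
        if not candidates:
            return None
        _, j_star = min(candidates)
        return X[j_star], Y[j_star]

    # Unit 0 is dominated by unit 1, which therefore becomes its closest target.
    print(closest_fdh_target(X[0], Y[0], X, Y))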
Abstract:
The title of Juana Castro's poetry book published in 1978, Cóncava mujer — Concave woman — expresses the hollow nature of the social female subject. From Juana Castro's point of view, this female social concavity is only allowed to transform itself into its opposite, the convex woman, which clearly represents the reproductive role of the female body. These two extreme roles assigned to women, hollowness or maternity, are the poetic paradigms of the two poetry books by Juana Castro analysed in this article. As if we were presented with the two sides of a coin, Cóncava mujer and Del dolor y las alas (On anguish and wings, 1982) reflect the author's conscious realisation of the above-mentioned female duality as a subject defined and perceived by male society. Each poetry book, however, responds to a different personal moment, and each results in a different way of conceiving poetic language. On the one hand, the poetic subject of Cóncava mujer emerges with all its force as a feminist voice whose goal is to attack all aspects of the patriarchal society as the cause of female concavity. On the other hand, in Del dolor y las alas the poetic voice unfolds her motherhood as both loss and creation: the death of Juana Castro's son makes the poetic subject incomplete, and therefore a concave one, whereas the poetic discourse appears as the perfect way to occupy the empty space left by the son's death.
Abstract:
This study tests the implications of tournament theory using data on 100 U.K. stock market companies, covering over 500 individual executives, in the late 1990s. Our results provide some evidence consistent with the operation of tournament mechanisms within the U.K. business context. First, we find a convex relationship between executive pay and organizational level; second, the gap between CEO pay and that of other board executives (i.e., the tournament prize) is positively related to the number of participants in the tournament. However, we also show that the variation in executive team pay plays little role in determining company performance.
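The convexity finding is the kind of pattern usually checked by adding a quadratic term to a pay-level regression; a minimal sketch on synthetic data (the variable names, levels and coefficients below are illustrative, not the study's):

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic illustration only: executive pay rising convexly with job level.
    level = rng.integers(1, 5, size=500).astype(float)     # 1 = junior exec, 4 = CEO
    log_pay = 10.0 + 0.1 * level + 0.05 * level**2 + rng.normal(0.0, 0.2, 500)

    # Regress log pay on level and level^2; a positive squared-term coefficient
    # is the usual indication of a convex pay/hierarchy relationship.
    X = np.column_stack([np.ones_like(level), level, level**2])
    beta, *_ = np.linalg.lstsq(X, log_pay, rcond=None)
    print(f"level coefficient: {beta[1]:.3f}, level^2 coefficient: {beta[2]:.3f}")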
Using interior point algorithms for the solution of linear programs with special structural features
Abstract:
Linear Programming (LP) is a powerful decision-making tool extensively used in various economic and engineering activities. In the early stages the success of LP was mainly due to the efficiency of the simplex method. After the appearance of Karmarkar's paper, the focus of most research shifted to the field of interior point methods. The present work is concerned with investigating and efficiently implementing the latest techniques in this field, taking sparsity into account. The performance of these implementations on different classes of LP problems is reported here. The preconditioned conjugate gradient method is one of the most powerful tools for the solution of the least squares problem present in every iteration of all interior point methods. The effect of using different preconditioners on a range of problems with various condition numbers is presented. Decomposition algorithms have been one of the main fields of research in linear programming over the last few years. After reviewing the latest decomposition techniques, three promising methods were chosen and implemented. Sparsity is again a consideration, and suggestions have been included to allow improvements when solving problems with these methods. Finally, experimental results on randomly generated data are reported and compared with an interior point method. The efficient implementation of the decomposition methods considered in this study requires the solution of quadratic subproblems. A review of recent work on algorithms for convex quadratic programming was performed. The most promising algorithms are discussed and implemented, taking sparsity into account. The relative performance of these algorithms on randomly generated separable and non-separable problems is also reported.
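As a concrete illustration of the normal-equations (least squares) solve that recurs inside each interior point iteration, here is a minimal Jacobi-preconditioned conjugate gradient sketch in Python on random dense data; real implementations exploit sparsity, as the thesis stresses, and use stronger preconditioners.

    import numpy as np

    def jacobi_pcg(M, b, tol=1e-10, max_iter=200):
        """Preconditioned conjugate gradients for an SPD system M x = b,
        using a simple Jacobi (diagonal) preconditioner."""
        d = np.diag(M).copy()
        x = np.zeros_like(b)
        r = b - M @ x
        z = r / d
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Mp = M @ p
            alpha = rz / (p @ Mp)
            x += alpha * p
            r -= alpha * Mp
            if np.linalg.norm(r) < tol:
                break
            z = r / d
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # Normal-equations system A D A^T dy = rhs of the kind arising at each
    # interior point iteration (A is the constraint matrix, D a positive diagonal).
    rng = np.random.default_rng(1)
    A = rng.normal(size=(20, 50))
    D = np.diag(rng.uniform(0.1, 10.0, size=50))
    M = A @ D @ A.T
    rhs = rng.normal(size=20)
    x = jacobi_pcg(M, rhs)
    print(np.linalg.norm(M @ x - rhs))   # residual should be near zero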
Abstract:
Advances in both computer technology and the necessary mathematical models capable of capturing the geometry of arbitrarily shaped objects have led to the development in this thesis of a surface generation package called 'IBSCURF', aimed at providing a more economically viable solution to free-form surface manufacture. A suite of computer programs written in FORTRAN 77 has been developed to provide computer aids for every aspect of work in designing and machining free-form surfaces. A vector-valued parametric method was used for shape description and a lofting technique employed for the construction of the surface. The development of the package 'IBSCURF' consists of two phases. The first deals with CAD. The design process commences with the definition of the cross-sections, which are represented by uniform B-spline curves as approximations to given polygons. The order of the curve and the position and number of the polygon vertices can be used as parameters for modification to achieve the required curves. When the definition of the sectional curves is complete, the surface is interpolated over them by cubic cardinal splines. To use the CAD function of the package to design a mould for a plastic handle, a mathematical model was developed. To facilitate the integration of design and machining using the mathematical representation of the surface, the second phase of the package is concerned with CAM: it enables the generation of tool offset positions for ball-nosed cutters, and a general post-processor has been developed which automatically generates NC tape programs for any CNC milling machine. The two phases of these programs have been successfully implemented as a CAD/CAM package for free-form surfaces on the VAX 11/750 super-minicomputer, with graphics facilities for displaying drawings interactively on the terminal screen. The development of this package has been beneficial in all aspects of design and machining of free-form surfaces.
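As a small illustration of the curve construction described (the IBSCURF code itself is FORTRAN 77 and is not reproduced here), a uniform cubic B-spline segment can be evaluated from four consecutive control-polygon vertices using the standard basis matrix; the control polygon below is hypothetical.

    import numpy as np

    # Uniform cubic B-spline basis matrix (per-segment form).
    B = (1.0 / 6.0) * np.array([[-1.0,  3.0, -3.0, 1.0],
                                [ 3.0, -6.0,  3.0, 0.0],
                                [-3.0,  0.0,  3.0, 0.0],
                                [ 1.0,  4.0,  1.0, 0.0]])

    def bspline_segment(P, t):
        """Evaluate one segment of a uniform cubic B-spline curve.
        P: (4, d) array of consecutive control-polygon vertices; t in [0, 1]."""
        T = np.array([t**3, t**2, t, 1.0])
        return T @ B @ P

    # Hypothetical control polygon for a planar cross-section curve.
    polygon = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.5], [4.0, 0.5], [5.0, 1.0]])
    for t in np.linspace(0.0, 1.0, 5):
        print(bspline_segment(polygon[0:4], t))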
Abstract:
Glass reinforced plastic (GRP) is now an established material for the fabrication of sonar windows. Its good mechanical strength, light weight, resistance to corrosion and acoustic transparency are all properties which fit it for this application. This thesis describes a study, undertaken at the Royal Naval Engineering College, Plymouth, into the mechanical behaviour of a circular cylindrical sonar panel. This particular type of panel would be used to cover a flank array sonar in a ship or submarine. The case considered is that of a panel with all of its edges mechanically clamped and subject to pressure loading on its convex surface. A comprehensive program of testing to determine the orthotropic elastic properties of the laminated composite panel material is described, together with a series of pressure tests on 1:5 scale sonar panels. These pressure tests were carried out in a purpose-designed test rig, using air pressure to provide simulated hydrostatic and hydrodynamic loading. Details of all instrumentation used in the experimental work are given in the thesis. The experimental results from the panel testing are compared with predictions of panel behaviour obtained from both the Galerkin solution of Flugge's cylindrical shell equations (orthotropic case) and finite element modelling of the panels using PAFEC. A variety of appropriate panel boundary conditions are considered in each case. A parametric study, intended to be of use as a preliminary design tool and based on the above Galerkin solution, is also presented. This parametric study considers cases of boundary conditions, material properties and panel geometry outside those investigated in the experimental work. Final conclusions are drawn and recommendations made regarding possible improvements to the procedures for design, manufacture and fixing of sonar panels in the Royal Navy.
Resumo:
The number of remote sensing platforms and sensors rises almost every year, yet much work on the interpretation of land cover is still carried out using either single images or images from the same source taken at different dates. Two questions could be asked of this proliferation of images: can the information contained in different scenes be used to improve the classification accuracy and, what is the best way to combine the different imagery? Two of these multiple image sources are MODIS on the Terra platform and ETM+ on board Landsat7, which are suitably complementary. Daily MODIS images with 36 spectral bands in 250-1000 m spatial resolution and seven spectral bands of ETM+ with 30m and 16 days spatial and temporal resolution respectively are available. In the UK, cloud cover may mean that only a few ETM+ scenes may be available for any particular year and these may not be at the time of year of most interest. The MODIS data may provide information on land cover over the growing season, such as harvest dates, that is not present in the ETM+ data. Therefore, the primary objective of this work is to develop a methodology for the integration of medium spatial resolution Landsat ETM+ image, with multi-temporal, multi-spectral, low-resolution MODIS \Terra images, with the aim of improving the classification of agricultural land. Additionally other data may also be incorporated such as field boundaries from existing maps. When classifying agricultural land cover of the type seen in the UK, where crops are largely sown in homogenous fields with clear and often mapped boundaries, the classification is greatly improved using the mapped polygons and utilising the classification of the polygon as a whole as an apriori probability in classifying each individual pixel using a Bayesian approach. When dealing with multiple images from different platforms and dates it is highly unlikely that the pixels will be exactly co-registered and these pixels will contain a mixture of different real world land covers. Similarly the different atmospheric conditions prevailing during the different days will mean that the same emission from the ground will give rise to different sensor reception. Therefore, a method is presented with a model of the instantaneous field of view and atmospheric effects to enable different remote sensed data sources to be integrated.
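Per pixel, the polygon-as-prior idea reduces to a simple Bayesian update; a minimal sketch with made-up class probabilities (the class names and numbers are purely illustrative, not taken from the work):

    import numpy as np

    # Illustrative Bayesian fusion: combine a per-pixel spectral likelihood with
    # an a priori probability taken from the classification of the enclosing
    # field polygon (hypothetical class order: wheat, barley, grass).
    pixel_likelihood = np.array([0.30, 0.45, 0.25])   # p(spectra | class) for one pixel
    polygon_prior    = np.array([0.70, 0.20, 0.10])   # p(class) from the whole polygon

    posterior = pixel_likelihood * polygon_prior
    posterior /= posterior.sum()
    print(posterior)   # the polygon prior pulls the pixel towards the first class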
Abstract:
Urban regions present some of the most challenging areas for the remote sensing community. Many different types of land cover have similar spectral responses, making them difficult to distinguish from one another. Traditional per-pixel classification techniques suffer particularly badly because they use only these spectral properties to determine a class, and no other properties of the image, such as context. This project presents the results of the classification of a deeply urban area of Dudley, West Midlands, using four methods: Supervised Maximum Likelihood, SMAP, ECHO and Unsupervised Maximum Likelihood. An accuracy assessment method is then developed to allow a fair representation of each procedure and a direct comparison between them. Subsequently, a classification procedure is developed that makes use of the context in the image through a per-polygon classification. The imagery is broken up into a series of polygons extracted using the Marr-Hildreth zero-crossing edge detector. These polygons are then refined using a region-growing algorithm, and then classified according to the mean class of the fine polygons. The imagery produced by this technique is shown to be of better quality and of a higher accuracy than that of the other, conventional methods. Further refinements are suggested and examined to improve the aesthetic appearance of the imagery. Finally, a comparison with the results produced from a previous study of the James Bridge catchment, in Darleston, West Midlands, is made, showing that the polygon-classified ATM imagery performs significantly better than the Maximum Likelihood-classified videography used in the initial study, despite the presence of geometric correction errors.
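The polygon extraction step rests on the Marr-Hildreth operator, i.e. the zero crossings of a Laplacian-of-Gaussian filtered image; a minimal SciPy sketch on a toy image (not the project's actual processing chain) is:

    import numpy as np
    from scipy import ndimage

    def marr_hildreth_edges(image, sigma=2.0):
        """Zero crossings of the Laplacian of Gaussian (Marr-Hildreth operator)."""
        log = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
        edges = np.zeros_like(log, dtype=bool)
        # A zero crossing occurs where the LoG changes sign between horizontally
        # or vertically adjacent pixels.
        edges[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
        edges[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])
        return edges

    # Hypothetical single-band image; the closed zero-crossing contours bound the
    # polygons that would then be region-grown and classified as whole units.
    img = np.zeros((64, 64))
    img[16:48, 16:48] = 1.0
    print(marr_hildreth_edges(img).sum(), "edge pixels")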
Abstract:
The present thesis evaluates various aspects of videokeratoscopes, which are now becoming increasingly popular in the investigation of corneal topography. The accuracy and repeatability of these instruments have been assessed mainly using spherical surfaces; however, few studies have assessed the performance of videokeratoscopes in measuring convex aspheric surfaces. Using two videokeratoscopes, the accuracy and repeatability of measurements on twelve aspheric surfaces are determined. Overall, the accuracy and repeatability of both instruments were acceptable; however, progressively flatter surfaces introduced greater errors in measurement. The possible reasons for these errors are discussed. The corneal surface is a biological structure lubricated by the precorneal tear film. The effects of variations in the tear film on the repeatability of videokeratoscopes have not been determined in terms of peripheral corneal measurements. The repeatability of two commercially available videokeratoscopes is assessed. The repeatability is found to be dependent on the point of measurement on the corneal surface. Typically, the superior and nasal meridians exhibit the poorest repeatability. It is suggested that interference from the ocular adnexa is responsible for the reduced repeatability. This localised reduction in repeatability will occur for all videokeratoscopes. Further, comparison with the keratometers and videokeratoscopes used shows that measurements between these instruments are not interchangeable. The final stage of this thesis evaluates the performance of new algorithms. The characteristics of a new videokeratoscope are described. This videokeratoscope is used to test the accuracy of the new algorithms on twelve aspheric surfaces. The new algorithms are accurate in determining the shape of aspheric surfaces, more so than the algorithms in present use.
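Convex aspheric test surfaces of this kind are commonly described as conicoids; a small sketch of the sag of such a surface, against which an instrument's readings could be compared, is given below (the apical radius and asphericity values are typical cornea-like numbers, not those of the thesis surfaces):

    import numpy as np

    def conic_sag(r, R0, Q):
        """Sag of a convex conicoid surface with apical radius R0 (mm) and
        asphericity Q. Q = 0 is a sphere; Q < 0 a prolate (flattening) surface
        like the cornea."""
        c = 1.0 / R0
        return c * r**2 / (1.0 + np.sqrt(1.0 - (1.0 + Q) * c**2 * r**2))

    # Hypothetical reference surface: the error of a videokeratoscope can be
    # expressed as the difference between its reported sag (or curvature) and
    # this analytic reference at each semi-chord r.
    r = np.linspace(0.0, 4.0, 9)          # semi-chord, mm
    print(conic_sag(r, R0=7.8, Q=-0.26))  # sag values, mm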
Studies on the luminance-related characteristics of the transient pattern reversal electroretinogram
Abstract:
The electroretinogram evoked by reversal pattern stimulation (rPERG) is known to contain both pattern-contrast and luminance-related components. The retinal mechanisms of the transient rPERGs subserving these functional characteristics are the main concern of the present studies. Considerable attention has been paid to the luminance-related characteristics of the response. Using low-frequency attenuation analysis, the transient PERGs were found to consist of two successive processes. The processes overlapped, and individual differences in the timing of each process were the major cause of the variations in the negative potential waveform of the transient rPERGs; particular attention has been paid to those showing a 'notch' type of variation. Across different contrast levels, the amplitudes of the positive and negative potentials increased linearly with contrast level, and the negative potential showed a higher sensitivity to contrast changes and a higher contrast gain. At lower contrast levels, the decreased amplitudes made the difference in the time courses of the positive and negative processes evident, explaining the appearance of the notch in some cases. Visual adaptation conditions for recording the transient rPERG are discussed. A further aim was to study the large variation of the transient rPERGs (especially the positive potential, P50) in elderly subjects whose distance and near visual acuity were normal. It was found that reduction of retinal illumination contributed mostly to the P50 amplitude loss, and contrast loss mostly to the negative potential (N95) amplitude loss. Senile miosis was thought to have little effect on the reduction of retinal illumination, while changes in the optics of the eye were probably its major cause, which explains the larger individual variation of the P50 amplitude in elderly PERGs. Convex defocus affected the transient rPERGs more strongly than concave lenses, especially the N95 amplitude in the elderly. The decline of accommodation and the type and degree of the subjects' ametropia should be taken into consideration when elderly rPERGs are analysed.
Abstract:
This paper investigates a cross-layer design approach for minimizing energy consumption and maximizing the network lifetime (NL) of a multiple-source and single-sink (MSSS) WSN with energy constraints. The optimization problem for the MSSS WSN can be formulated as a mixed-integer convex optimization problem with the adoption of time division multiple access (TDMA) in the medium access control (MAC) layer, and it becomes a convex problem by relaxing the integer constraint on time slots. The impacts of data rate, link access and routing are jointly taken into account in the optimization problem formulation. Both linear and planar network topologies are considered for NL maximization (NLM). For linear MSSS and planar single-source and single-sink (SSSS) topologies, we successfully use the Karush-Kuhn-Tucker (KKT) optimality conditions to derive analytical expressions for the optimal NL when all nodes are exhausted simultaneously. The problem for the planar MSSS topology is more complicated, and a decomposition and combination (D&C) approach is proposed to compute suboptimal solutions. An analytical expression for the suboptimal NL is derived for a small-scale planar network. To deal with larger-scale planar networks, an iterative algorithm is proposed for the D&C approach. Numerical results show that the upper bounds on the network lifetime obtained by our proposed optimization models are tight. Important insights into the NL and the benefits of cross-layer design for WSN NLM are obtained.
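For orientation, the flavour of such a lifetime-maximization programme can be sketched on a toy linear topology. This is a simplified single-route LP in the classic inverse-lifetime form, not the paper's TDMA/MSSS mixed-integer formulation, and all node counts, rates and energies below are hypothetical.

    import numpy as np
    import cvxpy as cp

    # Toy 3-node linear network: every node generates rate r and forwards all
    # traffic towards the sink. Variable q is the inverse of the network lifetime.
    n = 3
    r = 1.0                                   # bits/s generated at each node
    E = np.array([50.0, 50.0, 50.0])          # initial energies (J), hypothetical
    e_tx = np.array([1e-3, 1e-3, 1e-3])       # energy per bit to reach the next hop

    f = cp.Variable(n, nonneg=True)           # forwarded traffic on each node's out-link
    q = cp.Variable(nonneg=True)              # 1 / network lifetime

    constraints = []
    for i in range(n):
        inflow = f[i - 1] if i > 0 else 0.0   # node 0 is farthest from the sink
        constraints.append(f[i] == inflow + r)          # flow conservation
        constraints.append(e_tx[i] * f[i] <= q * E[i])  # energy drain within budget

    cp.Problem(cp.Minimize(q), constraints).solve()
    print("network lifetime:", 1.0 / q.value, "s")      # limited by the node nearest the sink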
Abstract:
Removing noise from piecewise constant (PWC) signals is a challenging signal processing problem arising in many practical contexts. For example, in exploration geosciences, noisy drill-hole records need to be separated into stratigraphic zones, and in biophysics, jumps between molecular dwell states have to be extracted from noisy fluorescence microscopy signals. Many PWC denoising methods exist, including total variation regularization, mean shift clustering, stepwise jump placement, running medians, convex clustering shrinkage and bilateral filtering; conventional linear signal processing methods are fundamentally unsuited. This paper (part I, the first of two) shows that most of these methods are associated with a special case of a generalized functional that is minimized to achieve PWC denoising. The minimizer can be obtained by diverse solver algorithms, including stepwise jump placement, convex programming, finite differences, iterated running medians, least angle regression, regularization path following and coordinate descent. In the second paper, part II, we introduce novel PWC denoising methods and present comparisons between these methods on synthetic and real signals, showing that the new understanding of the problem gained in part I leads to new methods that have a useful role to play.
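One member of that family, total variation regularization, fits in a few lines as an explicit convex programme; a minimal sketch on a synthetic step signal (illustrative only, not the paper's generalized functional):

    import numpy as np
    import cvxpy as cp

    # Total variation regularization written as the convex program
    #   min_x  0.5 * ||x - y||_2^2 + lam * ||diff(x)||_1
    rng = np.random.default_rng(0)
    steps = np.repeat([0.0, 2.0, -1.0, 3.0], 50)      # synthetic PWC signal
    y = steps + rng.normal(0.0, 0.3, steps.size)      # noisy observation

    x = cp.Variable(y.size)
    lam = 1.0
    objective = cp.Minimize(0.5 * cp.sum_squares(x - y) + lam * cp.norm1(cp.diff(x)))
    cp.Problem(objective).solve()
    print(np.round(x.value[::50], 2))                 # roughly recovers the step levels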
Abstract:
To investigate investment behaviour, the present study applies panel data techniques, in particular the Arellano-Bond (1991) GMM estimator, to data on Estonian manufacturing firms from the period 1995-1999. We employ the model of optimal capital accumulation in the presence of convex adjustment costs. The main research findings are that domestic companies appear to be more financially constrained than those in which foreign investors are present, and that smaller firms are more constrained than their larger counterparts.
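The convex adjustment cost assumption is usually given the standard quadratic form; a tiny sketch of that function (the parameter value and numbers are hypothetical, and this is the textbook specification rather than the study's exact estimating equation):

    import numpy as np

    def adjustment_cost(I, K, b=2.0):
        """Standard quadratic, hence convex, capital adjustment cost:
        C(I, K) = (b / 2) * (I / K)**2 * K. The parameter b is hypothetical."""
        return 0.5 * b * (I / K) ** 2 * K

    # Convexity in the investment rate I/K: doubling the rate more than doubles
    # the cost, which is what makes firms smooth investment over time.
    K = 100.0
    for I in (5.0, 10.0, 20.0):
        print(I / K, adjustment_cost(I, K))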
Abstract:
We propose and demonstrate a technique for monitoring the recovery deformation of shape-memory polymers (SMPs) using a surface-attached fiber Bragg grating (FBG) as a vector bending sensor. The proposed sensing scheme can monitor pure bending deformation of the SMP sample. When the SMP sample undergoes concave or convex bending, the resonance wavelength of the FBG red-shifts or blue-shifts according to the tensile or compressive stress gradient along the FBG. The results show a bending sensitivity of around 4.07 nm/cm⁻¹. The experimental results clearly indicate that the deformation of such an SMP sample can be effectively monitored by the attached FBG, not only in bending curvature but also in bending direction.
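Read as a sensor characteristic, the quoted sensitivity converts curvature readings into wavelength shifts directly; a tiny sketch of that conversion follows (which bending direction produces the red shift depends on which side of the SMP the fibre is bonded to, so the sign convention here is an assumption):

    # Vector (direction-resolved) reading: the Bragg wavelength shift is taken as
    # proportional to curvature, with its sign indicating the bending direction.
    SENSITIVITY_NM_PER_INV_CM = 4.07   # value quoted in the abstract

    def wavelength_shift_nm(curvature_inv_cm, convex=True):
        """Positive (red) shift for one bending direction, negative (blue) shift
        for the other; the mapping of direction to sign is assumed here."""
        sign = 1.0 if convex else -1.0
        return sign * SENSITIVITY_NM_PER_INV_CM * curvature_inv_cm

    print(wavelength_shift_nm(0.2, convex=True))    # red shift, nm
    print(wavelength_shift_nm(0.2, convex=False))   # blue shift, nm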
Abstract:
A generalized Drucker–Prager (GD–P) viscoplastic yield surface model was developed and validated for asphalt concrete. The GD–P model was formulated on the basis of fabric-tensor-modified stresses to account for the material's inherent anisotropy. A smooth and convex octahedral yield surface function was developed in the GD–P model to characterize the full range of internal friction angles from 0° to 90°. In contrast, the existing Extended Drucker–Prager (ED–P) model was shown to be applicable only to materials with an internal friction angle of less than 22°. Laboratory tests were performed to evaluate the anisotropic effect and to validate the GD–P model. Results indicated that (1) the yield stresses of an isotropic yield surface model are greater in compression and smaller in extension than those of an anisotropic model, which can result in an under-prediction of the viscoplastic deformation; and (2) the yield stresses predicted by the GD–P model matched well with the experimental results of the octahedral shear strength tests at different normal and confining stresses. By contrast, the ED–P model over-predicted the octahedral yield stresses, which can lead to an under-prediction of the permanent deformation. In summary, the rutting depth of an asphalt pavement would be underestimated without considering the anisotropy and convexity of the yield surface for asphalt concrete. The proposed GD–P model was demonstrated to be capable of overcoming these limitations of the existing yield surface models for asphalt concrete.
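For orientation, the isotropic ancestor that GD–P generalizes is the classic Drucker–Prager surface; a small sketch of that baseline yield function is given below (the fabric-tensor modification and the convex octahedral shape function of the paper are not reproduced here, and the parameter values and stress state are hypothetical):

    import numpy as np

    def drucker_prager_f(stress, alpha, k):
        """Classic Drucker-Prager yield function f = sqrt(J2) + alpha * I1 - k
        (tension-positive convention; yielding when f >= 0). The GD-P model of
        the paper generalizes this isotropic form."""
        I1 = np.trace(stress)
        s = stress - (I1 / 3.0) * np.eye(3)          # deviatoric stress
        J2 = 0.5 * np.tensordot(s, s)                # second deviatoric invariant
        return np.sqrt(J2) + alpha * I1 - k

    # Hypothetical triaxial compression state (compression negative), MPa.
    sigma = np.diag([-0.9, -0.3, -0.3])
    print(drucker_prager_f(sigma, alpha=0.2, k=0.4))  # negative: still elastic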