886 results for Optimal test set


Relevance: 30.00%

Publisher:

Abstract:

This paper presents advanced optimization techniques for Mission Path Planning (MPP) of a UAS fitted with a spore trap to detect and monitor spores and plant pathogens. The MPP aims to optimise the search-and-monitoring mission paths, which may allow the agricultural sector to become more competitive and more reliable. The UAV will be fitted with an air sampling device or spore trap to detect and monitor spores and plant pathogens in remote areas not accessible to current stationary monitoring methods. The optimal paths are computed using Multi-Objective Evolutionary Algorithms (MOEAs). Two types of multi-objective optimisers are compared: the Non-dominated Sorting Genetic Algorithm II (NSGA-II) and a Hybrid Game strategy, each implemented to produce a set of optimal collision-free trajectories in a three-dimensional environment. The trajectories over three-dimensional terrain, generated off-line, are collision-free and are represented using Bézier spline curves from the start position to the target and then from the target back to the start (or to a different position), subject to altitude constraints. The efficiency of the two optimization methods is compared in terms of computational cost and design quality. Numerical results show the benefits of coupling a Hybrid-Game strategy to a MOEA for MPP tasks. The reduction in computational cost is important: the faster the algorithm converges, the better suited it is to off-line design and to future on-line decisions of the UAV.
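As a rough illustration of the Bézier spline representation mentioned above (not the authors' implementation; the function names, control points and the 40 m altitude floor are assumptions), a candidate 3D trajectory can be evaluated with de Casteljau's algorithm and checked against an altitude constraint:

```python
import numpy as np

def bezier_point(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] via de Casteljau's algorithm."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def sample_trajectory(control_points, n_samples=100):
    """Sample a 3D Bezier trajectory defined by (x, y, altitude) control points."""
    ts = np.linspace(0.0, 1.0, n_samples)
    return np.array([bezier_point(control_points, t) for t in ts])

# Hypothetical control points: start, two intermediate waypoints chosen by the
# optimiser, and the target; the z component is altitude in metres.
ctrl = [(0.0, 0.0, 50.0), (200.0, 150.0, 80.0), (450.0, 300.0, 120.0), (600.0, 500.0, 60.0)]
path = sample_trajectory(ctrl)

# A simple altitude-constraint check (minimum clearance of 40 m, an assumed value).
print("Altitude constraint satisfied:", bool(np.all(path[:, 2] >= 40.0)))
```

In a MOEA setting of this kind, the intermediate control points would typically be the decision variables the optimiser adjusts, with collision and altitude checks feeding the objective or constraint handling.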

Relevance: 30.00%

Publisher:

Abstract:

Web service composition is an important problem in web service based systems. It is about how to build a new value-added web service using existing web services. A web service may have many implementations, all of which have the same functionality but may have different QoS values. Thus, a significant research problem in web service composition is how to select an implementation for each of the web services such that the composite web service gives the best overall performance. This is the so-called optimal web service selection problem. There may be mutual constraints between some web service implementations. Sometimes when an implementation is selected for one web service, a particular implementation for another web service must be selected; this is the so-called dependency constraint. Sometimes when an implementation is selected for one web service, a set of implementations for another web service must be excluded from the composition; this is the so-called conflict constraint. Thus, from a computational point of view, optimal web service selection is a typical constrained combinatorial optimization problem. This paper proposes a new hybrid genetic algorithm for the optimal web service selection problem. The hybrid genetic algorithm has been implemented and evaluated. The evaluation results show that the hybrid genetic algorithm outperforms two other existing genetic algorithms when the number of web services and the number of constraints are large.
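As a sketch of how such dependency and conflict constraints can be encoded and checked inside a genetic algorithm's fitness evaluation (the service names, encoding and data below are invented for illustration, not taken from the paper):

```python
# Hypothetical encoding: a candidate solution maps each abstract service to the
# index of the chosen implementation, e.g. {"payment": 2, "shipping": 0}.
def satisfies_constraints(selection, dependencies, conflicts):
    """Check dependency and conflict constraints on a candidate composition.

    dependencies: list of ((svc_a, impl_a), (svc_b, impl_b)) meaning: if impl_a is
        chosen for svc_a, then impl_b must be chosen for svc_b.
    conflicts: list of ((svc_a, impl_a), (svc_b, excluded_impls)) meaning: if impl_a is
        chosen for svc_a, none of excluded_impls may be chosen for svc_b.
    """
    for (svc_a, impl_a), (svc_b, impl_b) in dependencies:
        if selection[svc_a] == impl_a and selection[svc_b] != impl_b:
            return False
    for (svc_a, impl_a), (svc_b, excluded) in conflicts:
        if selection[svc_a] == impl_a and selection[svc_b] in excluded:
            return False
    return True

# Made-up example: implementation 1 of "payment" requires implementation 0 of
# "logging" and conflicts with implementations {2, 3} of "shipping".
deps = [(("payment", 1), ("logging", 0))]
confs = [(("payment", 1), ("shipping", {2, 3}))]
print(satisfies_constraints({"payment": 1, "logging": 0, "shipping": 0}, deps, confs))  # True
print(satisfies_constraints({"payment": 1, "logging": 2, "shipping": 0}, deps, confs))  # False
```

Inside a GA, candidates failing such a check would typically be penalised in the fitness function or repaired before selection.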

Relevance: 30.00%

Publisher:

Abstract:

The analysis of investment in the electric power industry has been the subject of intensive research for many years. The efficient generation and distribution of electrical energy is a difficult task involving the operation of a complex network of facilities, often located over very large geographical regions. Electric power utilities have made use of an enormous range of mathematical models. Some models address time spans which last for a fraction of a second, such as those that deal with lightning strikes on transmission lines, while at the other end of the scale there are models which address time horizons of ten or twenty years; these usually involve long-range planning issues. This thesis addresses the optimal long-term capacity expansion of an interconnected power system. The aim of this study has been to derive a new, long-term planning model which recognises the regional differences which exist in energy demand and which are present in the construction and operation of power plant and transmission line equipment. Perhaps the most innovative feature of the new model is the direct inclusion of regional energy demand curves in nonlinear form. This results in a nonlinear capacity expansion model. After a review of the relevant literature, the thesis first develops a model for the optimal operation of a power grid. This model directly incorporates regional demand curves. The model is a nonlinear programming problem containing both integer and continuous variables. A solution algorithm is developed which is based upon a resource decomposition scheme that separates the integer variables from the continuous ones. The decomposition of the operating problem leads to an iterative scheme which employs a mixed integer programming problem, known as the master, to generate trial operating configurations. The optimum operating conditions of each trial configuration are found using a smooth nonlinear programming model. The dual vector recovered from this model is subsequently used by the master to generate the next trial configuration. The solution algorithm progresses until lower and upper bounds converge. A range of numerical experiments are conducted and these experiments are included in the discussion. Using the operating model as a basis, a regional capacity expansion model is then developed. It determines the type, location and capacity of additional power plants and transmission lines which are required to meet predicted electricity demands. A generalised resource decomposition scheme, similar to that used to solve the operating problem, is employed. The solution algorithm is used to solve a range of test problems and the results of these numerical experiments are reported. Finally, the expansion problem is applied to the Queensland electricity grid in Australia.
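The decomposition described above is only echoed in spirit by the toy below: it enumerates trial commitment configurations and prices each one with a closed-form merit-order dispatch, standing in for the master MIP and the continuous subproblem. The generator data are invented, and the dual-vector feedback and upper/lower bound convergence of the thesis's algorithm are deliberately omitted.

```python
from itertools import product

# Toy data (hypothetical): two candidate generators, each with a commitment cost,
# a marginal cost and a capacity; demand must be met exactly.
GENERATORS = [
    {"name": "coal", "fixed": 100.0, "marginal": 20.0, "capacity": 60.0},
    {"name": "gas",  "fixed": 40.0,  "marginal": 45.0, "capacity": 80.0},
]
DEMAND = 90.0

def dispatch_cost(committed):
    """Continuous 'subproblem': cheapest dispatch of the committed units (merit order)."""
    units = sorted((g for g, on in zip(GENERATORS, committed) if on),
                   key=lambda g: g["marginal"])
    remaining, cost = DEMAND, 0.0
    for g in units:
        output = min(g["capacity"], remaining)
        cost += g["marginal"] * output
        remaining -= output
    return None if remaining > 1e-9 else cost  # infeasible if demand is unmet

best = None
for committed in product([0, 1], repeat=len(GENERATORS)):  # 'master': trial configurations
    variable = dispatch_cost(committed)
    if variable is None:
        continue
    total = variable + sum(g["fixed"] for g, on in zip(GENERATORS, committed) if on)
    if best is None or total < best[0]:
        best = (total, committed)

print("Best configuration (cost, commitment pattern):", best)
```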

Relevance: 30.00%

Publisher:

Abstract:

World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim the majority of power systems are becoming interconnected, with several power utilities supplying one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, which are generally characterised as decaying sinusoids. For a power system operating ideally, these transient responses would have a “ring-down” time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated by disturbances (such as equipment failure) must be quickly identified. The power utility must then apply appropriate damping control strategies.

In power system monitoring there are two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial changes to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13]. One of the key limitations of all existing parameter estimation methods is that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping: one simply cannot afford to wait long enough to collect the large amounts of data required by existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below.

The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than it is for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy, then, imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration). A threshold is then set based on the statistical model. The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode.

Optimal Individual Mode Detector (OIMD): As discussed above, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for all modes within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test.

The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set up to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at the frequency locations associated with the changes. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple Chi-Squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM discussed next can monitor frequency changes and so can provide some discrimination in this regard.

The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
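A toy sketch of the KID idea follows (not the thesis's implementation): a scalar Kalman filter tuned to a nominal model is run on simulated data, a lightly damped oscillation is injected half-way through, and the normalised innovation spectrum is tested against a chi-squared threshold. All models, parameters and the 99.9% threshold are assumptions made only for illustration.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Simulated measurement: noisy AR(1) state, with a lightly damped oscillation
# injected in the second half of the record (the "event" to be detected).
n = 2048
phi, q, r = 0.95, 0.05, 1.0          # AR coefficient, process noise var, measurement noise var
x = np.zeros(n); y = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal(scale=np.sqrt(q))
    y[t] = x[t] + rng.normal(scale=np.sqrt(r))
t_axis, half = np.arange(n), n // 2
y[half:] += 0.8 * np.exp(-0.001 * (t_axis[half:] - half)) * np.sin(2 * np.pi * 0.1 * t_axis[half:])

# Scalar Kalman filter matched to the *nominal* (event-free) model: while that
# model remains valid, the innovation sequence should be white.
x_hat, p = 0.0, 1.0
innov = np.zeros(n)
for t in range(n):
    x_pred = phi * x_hat
    p_pred = phi * phi * p + q
    innov[t] = y[t] - x_pred          # innovation
    s = p_pred + r                    # innovation variance
    k = p_pred / s                    # Kalman gain
    x_hat = x_pred + k * innov[t]
    p = (1.0 - k) * p_pred

# Whiteness check on the most recent window: for white noise, twice the
# variance-normalised periodogram ordinate is approximately chi-squared(2).
window = innov[-512:]
spec = np.abs(np.fft.rfft(window - window.mean())) ** 2 / (len(window) * window.var())
threshold = chi2.ppf(0.999, df=2) / 2.0
freqs = np.fft.rfftfreq(len(window))
print("Frequencies exceeding the whiteness threshold (cycles/sample):", freqs[spec > threshold])
```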

Relevance: 30.00%

Publisher:

Abstract:

The design of driven pile foundations involves an iterative process requiring an initial estimate of the refusal level to determine the depth of boreholes for subsequent analyses. Current procedures for determining borehole depths incorporate parameters that are typically unknown at the investigation stage. Thus, a quantifiable procedure more applicable at this preliminary stage would provide greater confidence in estimating the founding level of driven piles. This paper examines the effectiveness of the Standard Penetration Test (SPT) in directly estimating driven pile refusal levels. A number of significant correlations were obtained between SPT information and pile penetration records, demonstrating the potential application of the SPT. Results indicated that pile penetration was generally best described as a function of both the pile toe and cumulative shaft SPT values, and that the influence of the toe SPT increased when piles penetrated rock. A refusal criterion was established from the results to guide both the estimation of borehole depths and likely pile lengths during the design stage.
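The kind of correlation reported above could, purely for illustration, be fitted by ordinary least squares; the SPT values and penetrations below are entirely made up and carry no engineering meaning.

```python
import numpy as np

# Entirely hypothetical data: blow counts at the pile toe (N_toe), cumulative
# shaft SPT values (N_shaft) and observed pile penetrations (metres).
n_toe   = np.array([18.0, 25.0, 32.0, 40.0, 55.0, 60.0])
n_shaft = np.array([120.0, 180.0, 260.0, 310.0, 450.0, 520.0])
penetration = np.array([14.5, 13.0, 11.8, 10.5, 8.2, 7.5])

# Fit penetration = a*N_toe + b*N_shaft + c by ordinary least squares.
A = np.column_stack([n_toe, n_shaft, np.ones_like(n_toe)])
(a, b, c), *_ = np.linalg.lstsq(A, penetration, rcond=None)
print(f"penetration ≈ {a:.3f}*N_toe + {b:.4f}*N_shaft + {c:.2f}")
```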

Relevance: 30.00%

Publisher:

Abstract:

Given the current migration from traditional electrical supervisory control and data acquisition (SCADA) systems towards a smart grid based approach to critical infrastructure management, this project provides an evaluation of existing and proposed implementations for both traditional electrical SCADA and smart grid based architectures, and proposes a set of reference requirements which test bed implementations should satisfy. A high-level design for smart grid test beds is proposed and an initial implementation performed, based on the proposed design, using open source and freely available software tools. The project examines the move towards smart grid based critical infrastructure management and illustrates the increased security requirements. The implemented test bed provides a basic framework for testing network requirements in a smart grid environment, as well as a platform for further research and development, particularly for developing, implementing and testing responses to network security related disturbances, such as intrusion detection and network forensics. The project proposes and develops an architecture for emulating some smart grid functionality. The Common Open Research Emulator (CORE) platform was used to emulate the communication network of the smart grid; specifically, CORE was used to virtualise and emulate the TCP/IP networking stack. This is intended to be used for further evaluation and analysis, for example the analysis of application protocol messages. As a proof of concept, software libraries were designed, developed and documented to enable and support the design and development of further emulated smart grid components, such as reclosers, switches and smart meters. As part of the testing and evaluation, a Modbus based smart meter emulator was developed to provide the basic functionality of a smart meter, and further code was developed to send Modbus request messages to the emulated smart meter and receive Modbus responses from it. Although the functionality of the emulated components was limited, it provides a starting point for further research and development, and the design is extensible to enable the implementation of additional SCADA protocols. The project also defines evaluation criteria for the implemented test bed, and experiments are designed to evaluate the test bed according to the defined criteria. The results of the experiments are collated and presented, and conclusions are drawn from the results to facilitate discussion on the test bed implementation. The discussion also presents possible future work.
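Since the abstract does not give the emulator's internals, the following is only a generic illustration of the Modbus TCP request messages it mentions: building a raw "Read Holding Registers" frame by hand (the register layout of the emulated meter is hypothetical).

```python
import struct

def modbus_read_holding_registers(transaction_id, unit_id, start_address, quantity):
    """Build a raw Modbus TCP 'Read Holding Registers' (function 0x03) request frame.

    MBAP header: transaction id (2 bytes), protocol id (2 bytes, always 0),
    length of the remaining bytes (2 bytes), unit id (1 byte); then the PDU:
    function code (1 byte), starting address (2 bytes), register count (2 bytes).
    """
    pdu = struct.pack(">BHH", 0x03, start_address, quantity)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

# Example: ask unit 1 of a (hypothetical) emulated smart meter for 4 registers
# starting at address 0, e.g. voltage, current, power and an energy counter.
frame = modbus_read_holding_registers(transaction_id=1, unit_id=1, start_address=0, quantity=4)
print(frame.hex())
# In a test bed this frame would be sent over a TCP socket (conventionally port 502)
# to the emulated meter, which replies with function code 0x03 and the register values.
```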

Relevance: 30.00%

Publisher:

Abstract:

While in many travel situations consumers have an almost limitless range of destinations to choose from, their actual decision set will usually only comprise between two and six destinations. One of the greatest challenges facing destination marketers is positioning their destination, against the myriad of competing places that offer similar features, into consumer decision sets. Since positioning requires a narrow focus, marketing communications must present a succinct and meaningful proposition, the selection of which is often problematic for destination marketing organisations (DMO), which deal with a diverse and often eclectic range of attributes in addition to numerous self-interested and demanding stakeholders. This paper reports the application of two qualitative techniques used to explore the range of cognitive attributes, consequences and personal values that represent potential positioning opportunities in the context of short break holidays. The Repertory Test is an effective technique for understanding the salient attributes used by a traveller to differentiate destinations, while Laddering Analysis enables the researcher to explore the smaller set of personal values guiding such decision making. A key finding of the research was that while individuals might vary in their repertoire of salient attributes, there was a commonality of shared consequences and values.

Relevance: 30.00%

Publisher:

Abstract:

In this paper, we are concerned with the practical implementation of time-optimal numerical techniques on underwater vehicles. We briefly introduce the model of the underwater vehicle we consider and present the parameters for the test bed ODIN (Omni-Directional Intelligent Navigator). We then explain the numerical method used to obtain time-optimal trajectories with a structure suitable for implementation, and follow this with a discussion of the modifications to be made in view of the characteristics of ODIN. Finally, we illustrate our computations with some experimental results.
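The paper's vehicle model and numerical method are not reproduced here; as a flavour of time-optimal control, the sketch below solves the simplest possible case, a rest-to-rest manoeuvre of a 1-D double integrator, for which the optimal control is bang-bang with a single analytically computable switching time.

```python
import numpy as np

def min_time_bang_bang(distance, u_max):
    """Rest-to-rest minimum-time control of a double integrator with |u| <= u_max.

    The time-optimal policy is bang-bang: full acceleration for the first half of
    the manoeuvre, full deceleration for the second half.
    """
    t_switch = np.sqrt(distance / u_max)   # time at which the control switches sign
    total_time = 2.0 * t_switch
    def control(t):
        return u_max if t < t_switch else -u_max
    return total_time, control

# Toy example: move 10 m with a 2 m/s^2 acceleration limit.
T, u = min_time_bang_bang(10.0, 2.0)
print(f"minimum time: {T:.3f} s, control at t=1.0 s: {u(1.0):+.1f} m/s^2")
```

A real underwater vehicle has six-degree-of-freedom dynamics, drag and actuator limits, which is why numerical methods are needed in the paper rather than a closed-form switching law like this one.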

Relevance: 30.00%

Publisher:

Abstract:

The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome is one of the most commonly reported eye health problems. This syndrome is caused by abnormalities in the properties of the tear film. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence no “gold standard” test is currently available to assess the tear film integrity. Improving techniques for the assessment of tear film quality is therefore of clinical significance and is the main motivation for the work described in this thesis.

In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light is produced in the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth the reflected image presents a well-structured pattern. In contrast, when the tear film surface presents irregularities, the pattern also becomes irregular due to the scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film. However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing the tear film dynamics.

A set of novel routines was purposely developed to quantify the changes of the reflected pattern and to extract a time-series estimate of the TFSQ from the video recording. The routine extracts from each frame of the video recording a maximized area of analysis, in which a metric of the TFSQ is calculated. Initially, two metrics based on Gabor filter and Gaussian gradient-based techniques were used to quantify the consistency of the pattern's local orientation as a metric of TFSQ. These metrics have helped to demonstrate the applicability of HSV to assess the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear. It was also able to clearly show a difference between bare-eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on the TFSQ.

Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while LSI appeared to be the most sensitive method for analyzing the tear build-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV.

The main limitation of the HSV technique identified during the former clinical study was its lack of sensitivity to quantify the build-up/formation phase of the tear film cycle. For that reason an extra metric based on image transformation and block processing was proposed. In this metric, the area of analysis was transformed from Cartesian to polar coordinates, converting the concentric-ring pattern into a quasi-straight-line image from which a block statistics value was extracted. This metric showed better sensitivity under low pattern disturbance and improved the performance of the ROC curves. Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was proposed to fully comprehend the HSV measurement and the instrument's potential limitations. Of special interest was the assessment of the instrument's sensitivity to subtle topographic changes. The theoretical simulations helped to provide some understanding of the tear film dynamics; for instance, the model extracted for the build-up phase provided some insight into the dynamics of this initial phase.

Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model the time series and to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques for modeling tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria. Special attention was given to a commonly used fit, the polynomial function, and to considerations for selecting the appropriate model order to ensure that the true derivative of the signal is accurately represented.

The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects). As a result, this technique could become a useful clinical tool for assessing tear film surface quality in the future.
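A rough stand-in for the polar-transform/block-statistics idea described above (the interpolation grid, block size and the synthetic ring image are assumptions; the thesis's actual metric is not reproduced):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def polar_unwrap(image, center, n_radii=64, n_angles=256, r_max=None):
    """Resample a Placido-ring style image onto a polar grid, so the concentric
    rings become (approximately) straight horizontal lines."""
    cy, cx = center
    if r_max is None:
        r_max = min(cy, cx, image.shape[0] - cy, image.shape[1] - cx) - 1
    radii = np.linspace(1.0, r_max, n_radii)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    coords = np.vstack([(cy + rr * np.sin(aa)).ravel(), (cx + rr * np.cos(aa)).ravel()])
    return map_coordinates(image, coords, order=1).reshape(n_radii, n_angles)

def block_statistic_metric(polar_image, block=(8, 32)):
    """A simple TFSQ-style metric: mean per-block standard deviation of the
    unwrapped pattern (higher values indicate a more disturbed pattern)."""
    h, w = polar_image.shape
    bh, bw = block
    blocks = (polar_image[:h - h % bh, :w - w % bw]
              .reshape(h // bh, bh, w // bw, bw)
              .swapaxes(1, 2))
    return float(blocks.std(axis=(2, 3)).mean())

# Synthetic example: concentric rings with and without added noise.
yy, xx = np.mgrid[0:256, 0:256]
rings = 0.5 + 0.5 * np.sin(0.4 * np.hypot(yy - 128, xx - 128))
noisy = rings + 0.2 * np.random.default_rng(1).normal(size=rings.shape)
print(block_statistic_metric(polar_unwrap(rings, (128, 128))))
print(block_statistic_metric(polar_unwrap(noisy, (128, 128))))
```

The second value should come out larger than the first, mirroring the idea that a disturbed tear film yields a less regular reflected pattern.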

Relevance: 30.00%

Publisher:

Abstract:

We study the regret of optimal strategies for online convex optimization games. Using von Neumann's minimax theorem, we show that the optimal regret in this adversarial setting is closely related to the behavior of the empirical minimization algorithm in a stochastic process setting: it is equal to the maximum, over joint distributions of the adversary's action sequence, of the difference between a sum of minimal expected losses and the minimal empirical loss. We show that the optimal regret has a natural geometric interpretation, since it can be viewed as the gap in Jensen's inequality for a concave functional (the minimizer over the player's actions of expected loss) defined on a set of probability distributions. We use this expression to obtain upper and lower bounds on the regret of an optimal strategy for a variety of online learning problems. Our method provides upper bounds without the need to construct a learning algorithm; the lower bounds provide explicit optimal strategies for the adversary.
Peter L. Bartlett, Alexander Rakhlin
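In symbols, the identity described in words above can be written as follows (a hedged transcription: $\mathcal{X}$ denotes the player's convex action set and $f_1,\dots,f_n$ the adversary's losses drawn jointly from a distribution $P$; the notation is assumed here rather than quoted from the paper):

\[
R_n^{*} \;=\; \sup_{P}\; \mathbb{E}_{(f_1,\dots,f_n)\sim P}\!\left[\, \sum_{t=1}^{n} \inf_{x \in \mathcal{X}} \mathbb{E}\big[f_t(x) \mid f_1,\dots,f_{t-1}\big] \;-\; \inf_{x \in \mathcal{X}} \sum_{t=1}^{n} f_t(x) \right].
\]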

Relevance: 30.00%

Publisher:

Abstract:

In this paper, a comprehensive planning methodology is proposed that can minimize the line loss, maximize the reliability and improve the voltage profile in a distribution network. The injected active and reactive power of Distributed Generators (DGs) and the installed capacitor sizes at different buses and for different load levels are optimally controlled. The tap setting of the HV/MV transformer, along with line and transformer upgrading, is also included in the objective function. A hybrid optimization method, called Hybrid Discrete Particle Swarm Optimization (HDPSO), is introduced to solve this nonlinear and discrete optimization problem. The proposed HDPSO approach is an extended version of DPSO in which the diversity of the optimization variables is increased using genetic algorithm operators to avoid trapping in local minima. The objective function is composed of the investment cost of DGs, capacitors, distribution lines and the HV/MV transformer, the line loss, and the reliability, all of which are converted into dollar terms. Given this, a single-objective optimization method is sufficient. Bus voltage and line current constraints are satisfied throughout the optimization procedure. The IEEE 18-bus test system is modified and employed to evaluate the proposed algorithm. The results illustrate the unavoidable need for optimal control of the DG active and reactive power and of capacitors in distribution networks.
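A bare-bones discrete PSO loop in the spirit of the approach described above (not the paper's HDPSO: the genetic-algorithm diversity operators, the load flow and the cost model are omitted, and the capacitor sizes and toy objective are invented):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical discrete decision: a capacitor size index (0..4) at each of 3 buses.
SIZES = np.array([0.0, 150.0, 300.0, 450.0, 600.0])   # kvar options per bus
N_BUSES, N_PARTICLES, N_ITER = 3, 20, 100

def cost(indices):
    """Toy stand-in for the planning objective (loss plus investment); a real model
    would run a load flow. For this toy, 450 kvar per bus is optimal."""
    kvar = SIZES[indices]
    return np.sum((kvar - 400.0) ** 2) / 1e4 + 0.001 * np.sum(kvar)

# Discrete PSO skeleton: continuous velocities, positions rounded to the nearest
# feasible index after each velocity update.
pos = rng.integers(0, len(SIZES), size=(N_PARTICLES, N_BUSES)).astype(float)
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p.astype(int)) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

w, c1, c2 = 0.7, 1.5, 1.5
for _ in range(N_ITER):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(np.rint(pos + vel), 0, len(SIZES) - 1)
    costs = np.array([cost(p.astype(int)) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("Best capacitor sizes (kvar):", SIZES[gbest.astype(int)])  # typically 450 at each bus
```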

Relevance: 30.00%

Publisher:

Abstract:

A number of learning problems can be cast as an Online Convex Game: on each round, a learner makes a prediction x from a convex set, the environment plays a loss function f, and the learner’s long-term goal is to minimize regret. Algorithms have been proposed by Zinkevich, when f is assumed to be convex, and Hazan et al., when f is assumed to be strongly convex, that have provably low regret. We consider these two settings and analyze such games from a minimax perspective, proving minimax strategies and lower bounds in each case. These results prove that the existing algorithms are essentially optimal.
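For reference, the regret that both settings aim to minimize can be written in the standard form (notation assumed rather than quoted from the paper):

\[
\mathrm{Regret}_n \;=\; \sum_{t=1}^{n} f_t(x_t) \;-\; \min_{x \in \mathcal{X}} \sum_{t=1}^{n} f_t(x),
\]

where $x_t \in \mathcal{X}$ is the learner's prediction on round $t$ and $f_t$ is the loss function played by the environment. Zinkevich's gradient-based algorithm guarantees $O(\sqrt{n})$ regret for convex losses, and the algorithms of Hazan et al. guarantee $O(\log n)$ regret for strongly convex losses; the minimax analysis summarised above shows that these rates cannot be improved in general.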

Relevance: 30.00%

Publisher:

Abstract:

This work details the results of a face authentication test (FAT2004) (http://www.ee.surrey.ac.uk/banca/icpr2004) held in conjunction with the 17th International Conference on Pattern Recognition. The contest was held on the publicly available BANCA database (http://www.ee.surrey.ac.uk/banca) according to a defined protocol (E. Bailly-Bailliere et al., June 2003). The competition also had a sequestered part in which institutions had to submit their algorithms for independent testing. Thirteen different verification algorithms from 10 institutions were submitted. In addition, a standard set of face recognition software packages from the Internet (http://www.cs.colostate.edu/evalfacerec) was used to provide a baseline performance measure.

Relevance: 30.00%

Publisher:

Abstract:

While in many travel situations there is an almost limitless range of available destinations, travellers will usually only actively consider two to six in their decision set. One of the greatest challenges facing destination marketers is positioning their destination, against the myriad of competing places that offer similar features, into consumer decision sets. Since positioning requires a narrow focus, marketing communications must present a succinct and meaningful proposition, the selection of which is often problematic for destination marketing organisations (DMO), which deal with a diverse and often eclectic range of attributes in addition to self-interested and demanding stakeholders who have interests in different market segments. This paper reports the application of two qualitative techniques used to explore the range of cognitive attributes, consequences and personal values that represent potential positioning opportunities in the context of short break holidays. The Repertory Test is an effective technique for understanding the salient attributes used by a traveller to differentiate destinations, and Laddering Analysis enables the researcher to explore the smaller set of consequences and personal values guiding such decision making. A key finding of the research was that while individuals might vary in their repertoire of salient attributes, there was a commonality of shared consequences and values. This has important implications for DMOs, since a brand positioning theme that is based on a value will subsume multiple and diverse attributes. It is posited that such a theme will appeal to a broader range of travellers, as well as appease a greater number of destination stakeholders, than would an attribute based theme.