Abstract:
We propose a simple and computationally efficient construction algorithm for two-class linear-in-the-parameters classifiers. In order to optimize model generalization, an orthogonal forward selection (OFS) procedure is used to minimize the leave-one-out (LOO) misclassification rate directly. An analytic formula and a set of forward recursive updating formulas for the LOO misclassification rate are developed and applied in the proposed algorithm. Numerical examples demonstrate that the proposed algorithm is an excellent alternative for constructing sparse two-class classifiers in terms of performance and computational efficiency.
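As a concrete illustration of the analytic LOO formula at the heart of such algorithms, here is a minimal Python sketch (synthetic data, illustrative names) that computes the LOO misclassification rate of a least-squares-fitted linear-in-the-parameters classifier from the hat-matrix leverages; the paper obtains the same quantity through forward recursive updates inside the OFS loop rather than by the direct computation shown here.

```python
# Minimal sketch: analytic LOO misclassification rate for a
# linear-in-the-parameters classifier fitted by (lightly regularized)
# least squares. All data and names are invented for illustration.
import numpy as np

def loo_misclassification_rate(Phi, y, reg=1e-8):
    """Phi: (N, M) design matrix of basis responses; y: labels in {-1, +1}."""
    A = Phi.T @ Phi + reg * np.eye(Phi.shape[1])
    theta = np.linalg.solve(A, Phi.T @ y)
    y_hat = Phi @ theta
    # Leverages h_ii of the hat matrix Phi (Phi'Phi)^{-1} Phi'
    h = np.einsum("ij,ji->i", Phi, np.linalg.solve(A, Phi.T))
    # Classic identity: the LOO residual is e_i / (1 - h_ii), so the LOO
    # prediction for sample i is (y_hat_i - h_ii * y_i) / (1 - h_ii).
    y_loo = (y_hat - h * y) / (1.0 - h)
    return np.mean(np.sign(y_loo) != y)

# Toy usage on synthetic two-class data with a linear basis
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=100))
Phi = np.column_stack([np.ones(100), X])
print(loo_misclassification_rate(Phi, y))
```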
Abstract:
Traditional resource management has had as its main objective the optimization of throughput, based on parameters such as CPU, memory, and network bandwidth. With the appearance of Grid markets, new variables that determine economic expenditure, benefit, and opportunity must be taken into account. The Self-organizing ICT Resource Management (SORMA) project aims at allowing resource owners and consumers to exploit market mechanisms to sell and buy resources across the Grid. SORMA's motivation is to achieve efficient resource utilization by maximizing revenue for resource providers and minimizing the cost of resource consumption within a market environment. An overriding factor in Grid markets is the need to ensure that the desired quality-of-service levels meet the expectations of market participants. This paper explains the proposed use of an economically enhanced resource manager (EERM) for resource provisioning based on economic models. In particular, this paper describes techniques used by the EERM to support revenue maximization across multiple service-level agreements and provides an application scenario to demonstrate its usefulness and effectiveness.
Abstract:
Most active-contour methods are based either on maximizing the image contrast under the contour or on minimizing the sum of squared distances between contour and image 'features'. The Marginalized Likelihood Ratio (MLR) contour model uses a contrast-based measure of goodness-of-fit for the contour and thus falls into the first class. The point of departure from previous models lies in marginalizing this contrast measure over unmodelled shape variations. The MLR model naturally leads to the EM Contour algorithm, in which pose optimization is carried out by iterated least squares, as in feature-based contour methods. The difference with respect to other feature-based algorithms is that the EM Contour algorithm minimizes squared distances from Bayes least-squares (marginalized) estimates of contour locations, rather than from 'strongest features' in the neighborhood of the contour. Within the framework of the MLR model, alternatives to the EM algorithm can also be derived; one of these alternatives is the empirical-information method. Tracking experiments demonstrate the robustness of pose estimates given by the MLR model, and support the theoretical expectation that the EM Contour algorithm is more robust than either feature-based methods or the empirical-information method.
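The E-step/M-step structure described above can be sketched in a few lines. The toy Python below (all names and data invented, translation-only pose for brevity) replaces each contour point's 'strongest feature' with a posterior-weighted average of candidate features, then refits the pose by least squares, which is the essential difference between the EM Contour algorithm and classical feature-based fitting.

```python
# Schematic sketch of the EM Contour idea: the E-step forms a Bayes
# least-squares (posterior-mean) estimate over all candidate features
# near each contour point; the M-step fits the pose to those
# marginalized estimates by least squares. The real model also
# marginalizes over unmodelled shape variation, omitted here.
import numpy as np

def em_contour_translation(template, candidates, strengths, sigma=2.0, iters=10):
    """template: (K, 2) model points; candidates[k]: (n_k, 2) feature
    positions near point k; strengths[k]: (n_k,) feature contrasts.
    Estimates a 2-D translation by EM-style iterated least squares."""
    t = np.zeros(2)
    for _ in range(iters):
        targets = []
        for pts, s, m in zip(candidates, strengths, template):
            d2 = np.sum((pts - (m + t)) ** 2, axis=1)
            w = s * np.exp(-0.5 * d2 / sigma**2)   # candidate likelihoods
            w /= w.sum()
            targets.append(w @ pts)                # posterior-mean estimate
        targets = np.asarray(targets)
        t = np.mean(targets - template, axis=0)    # least-squares pose update
    return t

# Toy usage: a square contour displaced by (3, -2), with clutter features
rng = np.random.default_rng(1)
template = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], float)
true_t = np.array([3.0, -2.0])
cands = [np.vstack([m + true_t + 0.3 * rng.normal(size=2),
                    rng.uniform(-5, 15, size=(3, 2))]) for m in template]
strg = [np.array([1.0, 0.3, 0.3, 0.3]) for _ in template]
print(em_contour_translation(template, cands, strg))
```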
Abstract:
We propose a unified data modeling approach that is equally applicable to supervised regression and classification applications, as well as to unsupervised probability density function estimation. A particle swarm optimization (PSO) aided orthogonal forward regression (OFR) algorithm based on leave-one-out (LOO) criteria is developed to construct parsimonious radial basis function (RBF) networks with tunable nodes. Each stage of the construction process determines the center vector and diagonal covariance matrix of one RBF node by minimizing the LOO statistic. For regression applications, the LOO criterion is chosen to be the LOO mean square error, while the LOO misclassification rate is adopted in two-class classification applications. By adopting the Parzen window estimate as the desired response, the unsupervised density estimation problem is transformed into a constrained regression problem. This PSO-aided OFR algorithm for tunable-node RBF networks is capable of constructing very parsimonious RBF models that generalize well, and our analysis and experimental results demonstrate that the algorithm is computationally even simpler than the efficient regularization-assisted orthogonal least squares algorithm based on LOO criteria for selecting fixed-node RBF models. Another significant advantage of the proposed learning procedure is that it does not have learning hyperparameters that must be tuned using costly cross-validation. The effectiveness of the proposed PSO-aided OFR construction procedure is illustrated using several examples taken from regression and classification, as well as density estimation applications.
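A rough sketch of one construction stage, under simplifying assumptions (one-dimensional data, a single candidate node, a plain global-best PSO), is given below in Python; the full algorithm orthogonalizes the selected regressors and repeats this stage until the LOO criterion stops improving.

```python
# Sketch of one stage: PSO tunes the centre and width of one candidate
# Gaussian RBF node so as to minimize the analytic LOO mean square
# error of the current model plus that node. PSO settings and data are
# illustrative only.
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=80)

def loo_mse(Phi, y, reg=1e-6):
    A = Phi.T @ Phi + reg * np.eye(Phi.shape[1])
    theta = np.linalg.solve(A, Phi.T @ y)
    h = np.einsum("ij,ji->i", Phi, np.linalg.solve(A, Phi.T))
    e_loo = (y - Phi @ theta) / (1.0 - h)      # analytic LOO residuals
    return np.mean(e_loo ** 2)

def node_cost(p):
    centre, log_w = p
    phi = np.exp(-((X[:, 0] - centre) ** 2) / (2 * np.exp(log_w) ** 2))
    Phi = np.column_stack([np.ones(len(y)), phi])   # bias + candidate node
    return loo_mse(Phi, y)

# Plain global-best PSO over (centre, log-width)
n, dim = 20, 2
pos = rng.uniform([-3, -2], [3, 1], size=(n, dim))
vel = np.zeros((n, dim))
pbest, pcost = pos.copy(), np.array([node_cost(p) for p in pos])
gbest = pbest[np.argmin(pcost)]
for _ in range(50):
    r1, r2 = rng.uniform(size=(n, dim)), rng.uniform(size=(n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    cost = np.array([node_cost(p) for p in pos])
    improved = cost < pcost
    pbest[improved], pcost[improved] = pos[improved], cost[improved]
    gbest = pbest[np.argmin(pcost)]
print("best (centre, log-width):", gbest, "LOO MSE:", pcost.min())
```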
Abstract:
A generalized or tunable-kernel model is proposed for probability density function estimation based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to ensure the nonnegativity and unity constraints, and this weight-updating process additionally has the desired ability to further reduce the model size. The proposed tunable-kernel model has advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model that restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. On the other hand, it does not optimize all the model parameters together and thus avoids the problems of high-dimensional ill-conditioned nonlinear optimization associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed tunable-kernel model to construct accurate yet very compact density estimates.
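The final weight-update step can be illustrated as follows. This Python sketch substitutes a Lee-Seung-style multiplicative update plus renormalization for the paper's MNQP algorithm (which enforces the unity constraint through a Lagrange multiplier instead); data, kernel choices, and the pruning threshold are all invented.

```python
# Sketch of the weight-update step: with fixed kernels, the mixing
# weights solve a nonnegative quadratic programme. A multiplicative
# update keeps the weights nonnegative (the Gram matrix is entrywise
# nonnegative here), renormalization enforces the unity constraint,
# and pruning of negligible weights is what further shrinks the model.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=200)                       # training samples
centres = x[:20]                               # 20 candidate kernels
widths = np.full(20, 0.5)

def gauss(u, c, s):
    return np.exp(-0.5 * ((u - c) / s) ** 2) / (s * np.sqrt(2 * np.pi))

Phi = gauss(x[:, None], centres[None, :], widths[None, :])       # (N, M)
bw = 1.06 * x.std() * len(x) ** -0.2                             # Silverman rule
d = gauss(x[:, None], x[None, :], bw).mean(axis=1)               # Parzen target

B, v = Phi.T @ Phi, Phi.T @ d                  # quadratic-programme data
w = np.full(20, 1 / 20)
for _ in range(200):
    w *= v / (B @ w + 1e-12)                   # multiplicative update: w stays >= 0
    w /= w.sum()                               # unity constraint
w[w < 1e-4] = 0.0                              # prune negligible kernels
w /= w.sum()
print("surviving kernels:", np.count_nonzero(w))
```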
Abstract:
In recent years nonpolynomial finite element methods have received increasing attention for the efficient solution of wave problems. As with their close cousin, the method of particular solutions, high efficiency comes from using solutions to the Helmholtz equation as basis functions. We present and analyze such a method for the scattering of two-dimensional scalar waves from a polygonal domain that achieves exponential convergence purely by increasing the number of basis functions in each element. Key ingredients are the use of basis functions that capture the singularities at corners and the representation of the scattered field towards infinity by a combination of fundamental solutions. The solution is obtained by minimizing a least-squares functional, which we discretize in such a way that a matrix least-squares problem is obtained. We give computable exponential bounds on the rate of convergence of the least-squares functional that are in very good agreement with the observed numerical convergence. Challenging numerical examples, including a nonconvex polygon with several corner singularities and a cavity domain, are solved to around 10 digits of accuracy in a few seconds of CPU time. The examples are implemented concisely with MPSpack, a MATLAB toolbox for wave computations with nonpolynomial basis functions, developed by the authors. A code example is included.
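The least-squares structure of the fundamental-solutions part of the method can be conveyed by a toy example. The Python sketch below (a smooth disc rather than a polygon, so the corner-adapted basis functions are omitted, and plain NumPy/SciPy rather than MPSpack) fits a sound-soft boundary condition with point sources placed inside the scatterer.

```python
# Toy sketch of the outer ingredient: represent the scattered field by
# a combination of fundamental solutions of the Helmholtz equation
# (point sources inside the scatterer) and fit the sound-soft boundary
# condition u_scat = -u_inc in the least-squares sense.
import numpy as np
from scipy.special import hankel1

k = 10.0                                          # wavenumber
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
bdry = np.exp(1j * t)                             # unit-circle boundary points
src = 0.7 * np.exp(1j * np.linspace(0, 2 * np.pi, 60, endpoint=False))

# Fundamental solution Phi(x, y) = (i/4) H_0^(1)(k |x - y|)
A = 0.25j * hankel1(0, k * np.abs(bdry[:, None] - src[None, :]))
u_inc = np.exp(1j * k * bdry.real)                # plane wave travelling in +x
coeffs, *_ = np.linalg.lstsq(A, -u_inc, rcond=None)

print("boundary residual:", np.linalg.norm(A @ coeffs + u_inc))
# The scattered field at any exterior point x is the same kernel
# evaluated at x applied to coeffs; for smooth boundaries the residual
# decays exponentially in the number of sources.
```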
Abstract:
The problem of the appropriate distribution of forces among the fingers of a four-fingered robot hand is addressed. The finger-object interactions are modelled as point frictional contacts; hence the system is indeterminate and an optimal solution is required for controlling the forces acting on the object. A fast and efficient method for computing the grasping and manipulation forces is presented, in which the computation is based on the true model of the nonlinear friction cone of contact. Results are compared with previously employed methods that linearize the cone constraints and minimize the internal forces.
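The optimization being solved can be sketched directly with a general nonlinear programming solver. In the Python below, the exact quadratic cone constraint is imposed at each contact rather than a linearized polyhedral approximation; geometry, friction coefficient, and the external load are invented for illustration, and the paper's dedicated fast method is not reproduced.

```python
# Sketch of the force-distribution problem: find four fingertip forces
# that balance an external wrench while each force stays inside its
# true (quadratic) friction cone, minimizing the internal forces.
import numpy as np
from scipy.optimize import minimize

mu = 0.5
# Contact points around the object, with inward unit normals
P = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0]], float)
N = -P                                           # normals point at the object centre
w_ext = np.concatenate([[0, 0, -9.81], [0, 0, 0]])   # weight, no external torque

def unpack(x):
    return x.reshape(4, 3)

def objective(x):                                # internal-force magnitude
    return np.sum(x ** 2)

def wrench_residual(x):                          # force and torque balance
    F = unpack(x)
    net = np.concatenate([F.sum(0), np.cross(P, F).sum(0)])
    return net + w_ext

def cone_margin(x):                              # mu*f_n - ||f_t|| >= 0 per contact
    F = unpack(x)
    fn = np.einsum("ij,ij->i", F, N)
    ft = F - fn[:, None] * N
    return mu * fn - np.linalg.norm(ft, axis=1)

res = minimize(objective, x0=N.ravel(), method="SLSQP",
               constraints=[{"type": "eq", "fun": wrench_residual},
                            {"type": "ineq", "fun": cone_margin}])
print(res.success, unpack(res.x).round(3))
```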
Abstract:
Genetic algorithms (GAs) have been introduced into site layout planning, as reported in a number of studies. In these studies, the objective functions were defined so as to employ the GAs in searching for the optimal site layout. However, few studies have investigated the actual closeness of relationships between site facilities, and it is these relationships that ultimately govern the site layout. This study determined that the underlying factors of site layout planning for medium-size projects include work flow, personnel flow, safety and environment, and personal preferences. By finding the weightings on these factors and the corresponding closeness indices between each pair of facilities, a closeness relationship was deduced. Two contemporary mathematical approaches, fuzzy logic theory and an entropy measure, were adopted in deriving these results in order to minimize the uncertainty and vagueness of the collected data and improve the quality of the information. GAs were then applied to search for the optimal site layout in a medium-size government project using the GeneHunter software, with an objective function that minimizes the total travel distance. An optimal layout was obtained within a short time, which suggests that the application of GAs to site layout planning is highly promising and efficient.
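The search step reduces to a quadratic-assignment-style problem: permute facilities over candidate locations to minimize the closeness-weighted total travel distance. The Python sketch below uses a small self-contained GA with order crossover and swap mutation in place of the commercial GeneHunter software; the distance matrix and closeness indices are random stand-ins for the fuzzy-entropy values derived in the study.

```python
# Toy GA for facility layout: a permutation assigns facilities to
# locations; fitness is the total closeness-weighted travel distance.
import numpy as np

rng = np.random.default_rng(4)
n = 8                                          # facilities = candidate locations
loc = rng.uniform(0, 100, size=(n, 2))         # location coordinates
D = np.linalg.norm(loc[:, None] - loc[None, :], axis=2)
C = rng.uniform(0, 1, size=(n, n))
C = (C + C.T) / 2                              # symmetric closeness indices

def cost(perm):                                # closeness-weighted distance
    return np.sum(C * D[np.ix_(perm, perm)])

def order_crossover(a, b):
    i, j = sorted(rng.choice(n, 2, replace=False))
    child = -np.ones(n, int)
    child[i:j] = a[i:j]
    child[child < 0] = [g for g in b if g not in child]
    return child

pop = [rng.permutation(n) for _ in range(60)]
for gen in range(200):
    scores = np.array([cost(p) for p in pop])
    pop = [pop[i] for i in np.argsort(scores)[:20]]   # elitist selection
    while len(pop) < 60:
        a, b = pop[rng.integers(20)], pop[rng.integers(20)]
        child = order_crossover(a, b)
        if rng.random() < 0.2:                 # swap mutation
            i, j = rng.choice(n, 2, replace=False)
            child[i], child[j] = child[j], child[i]
        pop.append(child)
print("best layout cost:", min(cost(p) for p in pop))
```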
Abstract:
This paper presents a controller design scheme for a priori unknown non-linear dynamical processes that are identified via an operating-point neurofuzzy system from process data. Based on a neurofuzzy design and model construction algorithm (NeuDec) for a non-linear dynamical process, a neurofuzzy state-space model of controllable form is initially constructed. A control scheme based on closed-loop pole assignment is then utilized to ensure the time invariance and linearization of the state equations, so that system stability can be guaranteed under some mild assumptions, even in the presence of modelling error. The proposed approach requires a known state vector for the application of pole-assignment state feedback. For this purpose, a generalized Kalman filtering algorithm with coloured noise is developed on the basis of the neurofuzzy state-space model to obtain an optimal state vector estimate. The derived controller is applied to typical output-tracking problems by minimizing the tracking error. Simulation examples are included to demonstrate the operation and effectiveness of the new approach.
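The wiring of the control scheme, pole-assignment feedback acting on a filtered state estimate, can be sketched with a fixed linear model standing in for the identified neurofuzzy state-space model and white noise standing in for the coloured-noise case the paper treats.

```python
# Schematic sketch: a state-space model gets a pole-assignment
# state-feedback gain, and a standard Kalman filter supplies the state
# estimate the feedback law needs. The model and noise levels are
# illustrative substitutes for the paper's neurofuzzy identification.
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [-0.5, 1.2]])       # illustrative process model
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.05]])   # noise covariances

K = place_poles(A, B, [0.5, 0.6]).gain_matrix  # closed-loop poles in unit circle

rng = np.random.default_rng(5)
x, x_hat, P = np.array([1.0, 0.0]), np.zeros(2), np.eye(2)
for k in range(100):
    u = -(K @ x_hat)                           # feedback on the state estimate
    x = A @ x + (B @ u).ravel() + rng.multivariate_normal(np.zeros(2), Q)
    y = C @ x + rng.normal(0, np.sqrt(R[0, 0]), 1)
    # Kalman filter: predict, then correct with the new measurement
    x_hat = A @ x_hat + (B @ u).ravel()
    P = A @ P @ A.T + Q
    L = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    x_hat = x_hat + (L @ (y - C @ x_hat)).ravel()
    P = (np.eye(2) - L @ C) @ P
print("final state estimate:", x_hat)
```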
Abstract:
This paper is concerned with the use of a genetic algorithm to select financial ratios for corporate distress classification models. For this purpose, the fitness value associated with a set of ratios is made to reflect the requirements of maximizing the amount of information available to the model and minimizing the collinearity between the model inputs. A case study involving 60 failed and continuing British firms in the period 1997-2000 is used for illustration. The classification model based on ratios selected by the genetic algorithm compares favorably with a model employing ratios usually found in the financial distress literature.
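A plausible reading of the fitness function can be sketched as follows; the information term, the collinearity penalty, and the GA settings below are illustrative guesses rather than the study's exact definitions, and the data are synthetic.

```python
# Sketch: for a candidate subset of ratios, reward discriminatory
# information (standardized mean difference between failed and
# continuing firms) and penalize collinearity (mean absolute pairwise
# correlation among the chosen ratios).
import numpy as np

rng = np.random.default_rng(6)
n_firms, n_ratios = 60, 12
X = rng.normal(size=(n_firms, n_ratios))
y = np.repeat([0, 1], n_firms // 2)            # 0 = continuing, 1 = failed
X[y == 1, :4] += 1.0                           # first four ratios carry signal

def fitness(mask, penalty=0.5):
    idx = np.flatnonzero(mask)
    if len(idx) < 2:
        return -np.inf
    sub = X[:, idx]
    info = np.abs(sub[y == 1].mean(0) - sub[y == 0].mean(0)).sum()
    corr = np.corrcoef(sub, rowvar=False)
    collin = np.abs(corr[np.triu_indices(len(idx), k=1)]).mean()
    return info - penalty * collin * len(idx)

# Crude GA over bitstrings: elitist selection with bit-flip mutation
pop = rng.integers(0, 2, size=(30, n_ratios))
for _ in range(100):
    scores = np.array([fitness(m) for m in pop])
    elite = pop[np.argsort(scores)[-10:]]
    children = elite[rng.integers(10, size=20)].copy()
    flips = rng.random(children.shape) < 0.1
    children[flips] ^= 1
    pop = np.vstack([elite, children])
best = pop[np.argmax([fitness(m) for m in pop])]
print("selected ratios:", np.flatnonzero(best))
```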
Abstract:
The combination of the synthetic minority oversampling technique (SMOTE) and the radial basis function (RBF) classifier is proposed to deal with the classification of imbalanced two-class data. In order to enhance the significance of the small and specific region belonging to the positive class in the decision region, SMOTE is applied to generate synthetic instances for the positive class to balance the training data set. Based on the over-sampled training data, the RBF classifier is constructed by applying the orthogonal forward selection procedure, in which the classifier structure and the parameters of the RBF kernels are determined using a particle swarm optimization algorithm under the criterion of minimizing the leave-one-out misclassification rate. Experimental results on both simulated and real imbalanced data sets are presented to demonstrate the effectiveness of the proposed algorithm.
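The oversampling half of the pipeline is the standard SMOTE recipe: each synthetic positive instance is a random interpolation between a positive sample and one of its k nearest positive neighbours. A plain NumPy sketch follows (the RBF classifier construction itself, OFS with PSO-tuned kernels under the LOO criterion, is not shown).

```python
# Standard SMOTE in plain NumPy: synthetic positives are interpolations
# between a positive sample and one of its k nearest positive neighbours.
import numpy as np

def smote(X_pos, n_new, k=5, seed=0):
    """Generate n_new synthetic samples from positive-class data X_pos."""
    rng = np.random.default_rng(seed)
    d2 = np.sum((X_pos[:, None] - X_pos[None, :]) ** 2, axis=2)
    np.fill_diagonal(d2, np.inf)
    nn = np.argsort(d2, axis=1)[:, :k]          # k nearest positive neighbours
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_pos))
        j = nn[i, rng.integers(k)]
        lam = rng.random()                      # interpolation factor in [0, 1)
        out.append(X_pos[i] + lam * (X_pos[j] - X_pos[i]))
    return np.asarray(out)

# Usage: balance a 10-vs-90 imbalanced two-class set
rng = np.random.default_rng(7)
X_pos = rng.normal(loc=2.0, size=(10, 2))
X_neg = rng.normal(loc=0.0, size=(90, 2))
X_syn = smote(X_pos, n_new=len(X_neg) - len(X_pos))
print(X_syn.shape)   # (80, 2): positives now match negatives in count
```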
Abstract:
The misuse of personal protective equipment results in health risks among smallholders in developing countries, and education is often proposed to promote safer practices. However, evidence points to limited effects of education. This paper presents a System Dynamics model that allows the identification of risk-minimizing policies for behavioural change. The model is based on the IAC framework and survey data, and it represents farmers' decision-making from an agent-oriented standpoint. The most successful intervention strategy was one that intervened over the long term, targeted key stocks in the system, and was diversified. However, the results suggest that, under these conditions, no policy is able to trigger a self-sustaining behavioural change. Two implementation approaches were suggested by experts: one, based on constant social control, corresponds to a change of the current model's parameters; the other, based on participation, would lead farmers to new thinking, i.e. changes in their decision-making structure.
Abstract:
Daily weather patterns over the North Atlantic are classified into relevant types: typical weather patterns that may characterize the range of climate impacts from aviation in this region, for both summer and winter. The motivation is to provide a set of weather types to facilitate an investigation of climate-optimal aircraft routing of trans-Atlantic flights (minimizing the climate impact on a flight-by-flight basis). Using the New York to London route as an example, the time-optimal route times are shown to vary by over 60 min, to take advantage of strong tailwinds or avoid headwinds, and for eastbound routes the route latitude correlates well with the latitude of the jet stream. The weather patterns are classified by their similarity to the North Atlantic Oscillation and East Atlantic teleconnection patterns. For winter, five types are defined; in summer, when there is less variation in jet latitude, only three types are defined. The types can be characterized by the jet strength and position, and therefore the location of the time-optimal routes varies by type. Simple proxies for the climate impact of carbon dioxide, ozone, water vapour and contrails are defined, which depend on parameters such as the route time, latitude and season, the time spent flying in the stratosphere, and the distance over which the air is supersaturated with respect to ice. These proxies are then shown to vary between weather types and between eastbound and westbound routes.
Abstract:
In the first part of this article, we introduced a new urban surface scheme, the Met Office – Reading Urban Surface Exchange Scheme (MORUSES), into the Met Office Unified Model (MetUM) and compared its impact on the surface fluxes with respect to the current urban scheme. In this second part, we aim to analyze further the reasons behind the differences. This analysis is conducted by comparing the performance of the two schemes against observations and against a third model, the Single Column Reading Urban model (SCRUM). The key differences between the three models lie in how each model incorporates the heat stored in the urban fabric and how the surface-energy balance is coupled to the underlying substrate. The comparison of the models with observations from Mexico City reveals that the performance of MORUSES is improved if roof insulation is included by minimizing the roof thickness. A comparison of MORUSES and SCRUM reveals that, once insulation is included within MORUSES, these two models perform equally well against the observations overall, but that there are differences in the details of the simulations at the roof and canyon level. These differences are attributed to the different representations of the heat-storage term, specifically differences in the dominant frequencies captured by the urban canopy and substrate, between the models. These results strongly suggest a need for an urban model intercomparison exercise.
Abstract:
Optimal state estimation from given observations of a dynamical system by data assimilation is generally an ill-posed inverse problem. In order to solve the problem, a standard Tikhonov, or L2, regularization is used, based on certain statistical assumptions on the errors in the data. The regularization term constrains the estimate of the state to remain close to a prior estimate. In the presence of model error, this approach does not capture the initial state of the system accurately, as the initial state estimate is derived by minimizing the average error between the model predictions and the observations over a time window. Here we examine an alternative L1 regularization technique that has proved valuable in image processing. We show that for examples of flow with sharp fronts and shocks, the L1 regularization technique performs more accurately than standard L2 regularization.
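The contrast between the two penalties can be reproduced on a toy ill-posed problem. In the Python sketch below (an invented Gaussian blur standing in for the model-observation map), the L2 estimate is the closed-form Tikhonov solution, while the L1 penalty is placed on the state increments and solved by iterative soft thresholding (ISTA), which preserves the kind of sharp front that the L2 solution smears out.

```python
# Toy comparison: recover a state with a sharp front from blurred noisy
# observations, once with Tikhonov (L2) regularization and once with an
# L1 penalty on the state increments, solved by ISTA. Operators and
# weights are illustrative.
import numpy as np

rng = np.random.default_rng(8)
n = 100
x_true = np.zeros(n)
x_true[40:70] = 1.0                            # state with sharp fronts
# Forward model: Gaussian blur (stands in for the model-observation map)
G = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 3.0) ** 2)
G /= G.sum(axis=1, keepdims=True)
y = G @ x_true + 0.01 * rng.normal(size=n)

# Tikhonov / L2: minimise ||Gx - y||^2 + a ||x||^2 (closed form)
a = 1e-3
x_l2 = np.linalg.solve(G.T @ G + a * np.eye(n), G.T @ y)

# L1 on the increments z, with x = L z (cumulative-sum operator), so the
# penalty favours piecewise-constant states with sharp fronts; ISTA.
L = np.tril(np.ones((n, n)))
A_op = G @ L
lam, step = 5e-3, 1.0 / np.linalg.norm(A_op, 2) ** 2
z = np.zeros(n)
for _ in range(5000):
    g = z - step * (A_op.T @ (A_op @ z - y))   # gradient step on data term
    z = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
x_l1 = L @ z

print("L2 max reconstruction error:", np.round(np.abs(x_l2 - x_true).max(), 3))
print("L1 max reconstruction error:", np.round(np.abs(x_l1 - x_true).max(), 3))
```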