60 results for Well-Posed Optimization Problems
Abstract:
Attitudes to floristics have changed considerably during the past few decades as a result of increasing and often more focused consumer demands, heightened awareness of the threats to biodiversity, information flow and overload, and the application of electronic and web-based techniques to information handling and processing. This paper will examine these concerns in relation to our floristic knowledge and needs in the region of SW Asia. Particular reference will be made to the experience gained from the Euro+Med PlantBase project for the preparation of an electronic plant-information system for Europe and the Mediterranean, with a single core list of accepted plant names and synonyms, based on consensus taxonomy agreed by a specialist network. The many challenges it has presented (scientific, technical and organisational) will be discussed, as well as the problems of handling non-taxonomic information from fields such as conservation, karyology, biosystematics and mapping. The question of regional cooperation and the sharing of efforts and resources will also be raised, and attention drawn to the recent planning workshop held in Rabat (May 2002) for establishing a technical cooperation network for taxonomic capacity building in North Africa as a possible model for the SW Asia region.
Abstract:
In financial decision-making, a number of mathematical models have been developed for financial management in construction. However, the need to optimize both qualitative and quantitative factors, together with the semi-structured nature of construction finance optimization problems, makes construction finance decisions challenging to solve. Here, the selection of funding schemes by a modified construction loan acquisition model is solved by an adaptive genetic algorithm (AGA) approach. The basic objectives of the model are to optimize the loan and to minimize the interest payments across all projects. Multiple projects undertaken by a medium-size construction firm in Hong Kong were used as a real case study to demonstrate the application of the model to borrowing decision problems. A compromise monthly borrowing schedule was finally achieved. The results indicate that the Small and Medium Enterprise (SME) Loan Guarantee Scheme (SGS) was first identified as the source of external financing. The selection of sources of funding can then be made so as to avoid the possibility of financial problems in the firm, by classifying qualitative factors into external, interactive and internal types and taking additional qualitative factors, including sovereignty, credit ability and networking, into consideration. Thus a more accurate, objective and reliable borrowing decision can be provided for the decision-maker to analyse the financial options.
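As an illustration of the kind of optimizer involved, below is a minimal Python sketch of an adaptive genetic algorithm. The encoding (one funding scheme chosen per month), the cost function (total interest paid) and the adaptation rule (mutation rate driven by population diversity) are all hypothetical stand-ins, not the paper's model.

    import random

    def adaptive_ga(interest_rates, months, budget_needed, pop_size=50, generations=200):
        # Chromosome: one scheme index per month; fitness: total interest paid.
        # All modelling details here are illustrative, not the paper's.
        n_schemes = len(interest_rates)

        def cost(chrom):
            # Hypothetical cost: each month's borrowing at the chosen scheme's rate.
            return sum(budget_needed[m] * interest_rates[chrom[m]] for m in range(months))

        pop = [[random.randrange(n_schemes) for _ in range(months)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=cost)
            diversity = len({tuple(c) for c in pop}) / pop_size
            p_mut = 0.01 + 0.2 * (1.0 - diversity)     # adapt: mutate more as diversity drops
            survivors = pop[: pop_size // 2]
            children = []
            while len(children) < pop_size - len(survivors):
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, months)
                child = a[:cut] + b[cut:]               # one-point crossover
                child = [random.randrange(n_schemes) if random.random() < p_mut else g
                         for g in child]
                children.append(child)
            pop = survivors + children
        return min(pop, key=cost)

The adaptive element here is only the diversity-driven mutation rate; real AGA variants also adapt crossover probabilities or operator choices.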
Abstract:
We show that the four-dimensional variational data assimilation method (4DVar) can be interpreted as a form of Tikhonov regularization, a very familiar method for solving ill-posed inverse problems. It is known from image restoration problems that L1-norm penalty regularization recovers sharp edges in the image more accurately than Tikhonov, or L2-norm, penalty regularization. We apply this idea from stationary inverse problems to 4DVar, a dynamical inverse problem, and give examples for an L1-norm penalty approach and a mixed total variation (TV) L1–L2-norm penalty approach. For problems with model error where sharp fronts are present and the background and observation error covariances are known, the mixed TV L1–L2-norm penalty performs better than either the L1-norm method or the strong constraint 4DVar (L2-norm) method. A strength of the mixed TV L1–L2-norm regularization is that in the case where a simplified form of the background error covariance matrix is used it produces a much more accurate analysis than 4DVar. The method thus has the potential in numerical weather prediction to overcome operational problems with poorly tuned background error covariance matrices.
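To make the Tikhonov connection concrete, the strong-constraint 4DVar cost function can be written in its standard form (the notation is the conventional one, with x_b the background state, B and R_i the error covariances and H_i the observation operators; it is not necessarily the paper's):

    J(x_0) = \frac{1}{2}\,(x_0 - x_b)^{\mathsf{T}} B^{-1} (x_0 - x_b)
           + \frac{1}{2}\sum_{i=0}^{N} \bigl(y_i - \mathcal{H}_i(x_i)\bigr)^{\mathsf{T}}
             R_i^{-1} \bigl(y_i - \mathcal{H}_i(x_i)\bigr),

i.e. a data-misfit term plus a quadratic (L2, Tikhonov) penalty on the departure from the background. A mixed TV L1-L2 variant augments this with a total-variation term, sketched here with an illustrative weight lambda:

    J_{\mathrm{TV}}(x_0) = J(x_0) + \lambda \,\bigl\lVert D (x_0 - x_b) \bigr\rVert_1,

where D is a discrete first-difference operator, so that sharp fronts in the increment are penalized linearly rather than quadratically.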
Abstract:
The pipe sizing of water networks via evolutionary algorithms is of great interest because it allows the selection of alternative economical solutions that meet a set of design requirements. However, available evolutionary methods are numerous, and methodologies to compare the performance of these methods beyond obtaining a minimal solution for a given problem are currently lacking. A methodology to compare algorithms based on an efficiency rate (E) is presented here and applied to the pipe-sizing problem of four medium-sized benchmark networks (Hanoi, New York Tunnel, GoYang and R-9 Joao Pessoa). E numerically determines the performance of a given algorithm while also considering the quality of the obtained solution and the required computational effort. From the wide range of available evolutionary algorithms, four were selected to implement the methodology: a Pseudo-Genetic Algorithm (PGA), Particle Swarm Optimization (PSO), Harmony Search (HS) and a modified Shuffled Frog Leaping Algorithm (SFLA). After more than 500,000 simulations, a statistical analysis was performed based on the specific parameters each algorithm requires to operate, and finally, E was analyzed for each network and algorithm. The efficiency measure indicated that the PGA is the most efficient algorithm for problems of greater complexity and that HS is the most efficient algorithm for less complex problems. However, the main contribution of this work is that the proposed efficiency rate provides a neutral strategy to compare optimization algorithms and may be useful in the future to select the most appropriate algorithm for different types of optimization problems.
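Since the abstract does not reproduce the exact definition of E, the Python sketch below shows one plausible shape for such a measure: reward closeness to the best known cost, penalize computational effort. The formula, argument names and example numbers are hypothetical, not the paper's.

    def efficiency_rate(best_cost_found, best_known_cost, evaluations, evaluation_budget):
        # Hypothetical efficiency measure: solution quality per unit of effort.
        quality = best_known_cost / best_cost_found     # 1.0 when the optimum is hit
        effort = evaluations / evaluation_budget        # fraction of the budget spent
        return quality / max(effort, 1e-9)

    # Two illustrative runs on the same network (costs in arbitrary units)
    print(efficiency_rate(6.2e6, 6.08e6, 40_000, 100_000))   # fast, near-optimal
    print(efficiency_rate(6.1e6, 6.08e6, 95_000, 100_000))   # closer, but costly

Any measure of this shape makes the trade-off explicit: an algorithm that lands slightly further from the optimum but far sooner can still score higher.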
Abstract:
The Gauss–Newton algorithm is an iterative method regularly used for solving nonlinear least squares problems. It is particularly well suited to the treatment of very large scale variational data assimilation problems that arise in atmosphere and ocean forecasting. The procedure consists of a sequence of linear least squares approximations to the nonlinear problem, each of which is solved by an “inner” direct or iterative process. In comparison with Newton’s method and its variants, the algorithm is attractive because it does not require the evaluation of second-order derivatives in the Hessian of the objective function. In practice the exact Gauss–Newton method is too expensive to apply operationally in meteorological forecasting, and various approximations are made in order to reduce computational costs and to solve the problems in real time. Here we investigate the effects on the convergence of the Gauss–Newton method of two types of approximation used commonly in data assimilation. First, we examine “truncated” Gauss–Newton methods where the inner linear least squares problem is not solved exactly, and second, we examine “perturbed” Gauss–Newton methods where the true linearized inner problem is approximated by a simplified, or perturbed, linear least squares problem. We give conditions ensuring that the truncated and perturbed Gauss–Newton methods converge and also derive rates of convergence for the iterations. The results are illustrated by a simple numerical example. A practical application to the problem of data assimilation in a typical meteorological system is presented.
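A minimal Python sketch of a truncated Gauss–Newton iteration is given below: the inner linearized least-squares problem is solved only approximately by a handful of conjugate-gradient steps, which is the "truncated" approximation discussed above. The toy fitting problem is generic and is not the meteorological system from the paper.

    import numpy as np

    def conjugate_gradient(A, b, max_iters):
        # A few CG steps on A x = b; stopping early ("truncating") yields an
        # inexact inner solution.
        x = np.zeros_like(b)
        r = b.copy()
        p = r.copy()
        for _ in range(max_iters):
            Ap = A @ p
            alpha = (r @ r) / (p @ Ap)
            x = x + alpha * p
            r_new = r - alpha * Ap
            if np.linalg.norm(r_new) < 1e-12:
                break
            p = r_new + ((r_new @ r_new) / (r @ r)) * p
            r = r_new
        return x

    def truncated_gauss_newton(residual, jacobian, x0, outer_iters=20, inner_iters=5):
        # Gauss-Newton without second derivatives: linearize r(x), solve the
        # normal equations (J^T J) dx = -J^T r approximately, update, repeat.
        x = np.asarray(x0, dtype=float)
        for _ in range(outer_iters):
            r, J = residual(x), jacobian(x)
            g = J.T @ r
            if np.linalg.norm(g) < 1e-10:
                break
            x = x + conjugate_gradient(J.T @ J, -g, inner_iters)
        return x

    # Toy example: fit y = exp(a*t) to data
    t = np.linspace(0.0, 1.0, 30)
    y = np.exp(0.8 * t)
    residual = lambda x: np.exp(x[0] * t) - y
    jacobian = lambda x: (t * np.exp(x[0] * t)).reshape(-1, 1)
    print(truncated_gauss_newton(residual, jacobian, np.array([0.0])))   # -> approx. 0.8

A "perturbed" variant would instead replace the Jacobian J in the inner problem by a simplified approximation, keeping the inner solve exact.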
Abstract:
For the very large nonlinear dynamical systems that arise in a wide range of physical, biological and environmental problems, the data needed to initialize a numerical forecasting model are seldom available. To generate accurate estimates of the expected states of the system, both current and future, the technique of ‘data assimilation’ is used to combine the numerical model predictions with observations of the system measured over time. Assimilation of data is an inverse problem that for very large-scale systems is generally ill-posed. In four-dimensional variational assimilation schemes, the dynamical model equations provide constraints that act to spread information into data sparse regions, enabling the state of the system to be reconstructed accurately. The mechanism for this is not well understood. Singular value decomposition techniques are applied here to the observability matrix of the system in order to analyse the critical features in this process. Simplified models are used to demonstrate how information is propagated from observed regions into unobserved areas. The impact of the size of the observational noise and the temporal position of the observations is examined. The best signal-to-noise ratio needed to extract the most information from the observations is estimated using Tikhonov regularization theory.
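The core computation can be sketched in a few lines of Python: stack the observation operator over the assimilation window to form the observability matrix, take its SVD, and count the singular values standing above the noise level (a Tikhonov-style criterion for recoverable information). The propagator, observation pattern and noise level below are toy choices, not the paper's models.

    import numpy as np

    # Toy linear system: state advected by M, observed at 3 points by H.
    n, N = 20, 10                                   # state size, number of time steps
    M = 0.5 * np.eye(n) + 0.5 * np.eye(n, k=1)      # simple advection-like propagator
    H = np.zeros((3, n))
    H[0, 2] = H[1, 9] = H[2, 15] = 1.0              # three point observations

    # Observability matrix: observations of the initial state over the window
    blocks, Mk = [], np.eye(n)
    for _ in range(N):
        blocks.append(H @ Mk)
        Mk = M @ Mk
    G = np.vstack(blocks)                           # shape (3*N, n)

    # Singular values below the observation noise level carry no recoverable
    # information about the initial state.
    s = np.linalg.svd(G, compute_uv=False)
    sigma_noise = 0.01
    print(f"{np.sum(s > sigma_noise)} of {len(s)} modes recoverable at noise {sigma_noise}")

Moving the observation points or the observation times changes the singular spectrum of G, which is exactly how the temporal position of observations affects what can be reconstructed.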
Abstract:
The combination of the synthetic minority oversampling technique (SMOTE) and the radial basis function (RBF) classifier is proposed to deal with classification for imbalanced two-class data. In order to enhance the significance of the small and specific region belonging to the positive class in the decision region, SMOTE is applied to generate synthetic instances for the positive class to balance the training data set. Based on the over-sampled training data, the RBF classifier is constructed by applying the orthogonal forward selection procedure, in which the classifier structure and the parameters of the RBF kernels are determined using a particle swarm optimization algorithm based on the criterion of minimizing the leave-one-out misclassification rate. The experimental results on both simulated and real imbalanced data sets are presented to demonstrate the effectiveness of our proposed algorithm.
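A simplified stand-in pipeline in Python is sketched below, assuming the scikit-learn and imbalanced-learn libraries. Note the substitution: the paper constructs its RBF classifier by orthogonal forward selection with PSO, whereas here an off-the-shelf RBF-kernel SVM is used purely to illustrate the SMOTE-then-classify workflow.

    from imblearn.over_sampling import SMOTE
    from sklearn.datasets import make_classification
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Imbalanced two-class toy data: roughly 5% positives
    X, y = make_classification(n_samples=1000, weights=[0.95, 0.05], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # Oversample the positive class with synthetic instances, then train
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
    clf = SVC(kernel="rbf", gamma="scale").fit(X_bal, y_bal)
    print(classification_report(y_te, clf.predict(X_te)))

The key point the report will show is the improved recall on the minority class relative to training on the raw imbalanced data.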
Abstract:
In this paper we explore classification techniques for ill-posed problems. Two classes are linearly separable in some Hilbert space X if they can be separated by a hyperplane. We investigate stable separability, i.e. the case where we have a positive distance between two separating hyperplanes. When the data in the space Y is generated by a compact operator A applied to the system states x ∈ X, we will show that in general we do not obtain stable separability in Y even if the problem in X is stably separable. In particular, we show this for the case where a nonlinear classification is generated from a non-convergent family of linear classes in X. We apply our results to the problem of quality control of fuel cells, where we classify fuel cells according to their efficiency. We can potentially classify a fuel cell using either some external measured magnetic field or some internal current. However, we cannot measure the current directly since we cannot access the fuel cell in operation. The first possibility is to apply discrimination techniques directly to the measured magnetic fields. The second approach first reconstructs currents and then carries out the classification on the current distributions. We show that both approaches need regularization and that the regularized classifications are not equivalent in general. Finally, we investigate a widely used linear classification algorithm, Fisher's linear discriminant, with respect to its ill-posedness when applied to data generated via a compact integral operator. We show that the method cannot stay stable when the number of measurement points becomes large.
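The instability described in the last two sentences can be illustrated numerically in Python: apply a discretized smoothing (compact) operator to two Gaussian classes and watch the within-class scatter matrix of Fisher's discriminant become ill-conditioned as the number of measurement points grows. The kernel, class models and regularization weight below are toy assumptions, not the paper's setup.

    import numpy as np

    rng = np.random.default_rng(0)

    def fisher_direction(Y0, Y1, reg=0.0):
        # Fisher's linear discriminant: w = Sw^{-1} (m1 - m0); the optional
        # Tikhonov term reg*I stands in for the regularization the paper requires.
        m0, m1 = Y0.mean(axis=0), Y1.mean(axis=0)
        Sw = np.cov(Y0.T) + np.cov(Y1.T) + reg * np.eye(Y0.shape[1])
        return np.linalg.solve(Sw, m1 - m0)

    for m in (10, 50, 200):                         # number of measurement points
        t = np.linspace(0.0, 1.0, m)
        # Discretized Gaussian kernel: a compact smoothing operator A applied to states
        A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.02) / m
        Y0 = rng.normal(0.0, 1.0, (100, m)) @ A.T   # measured data, class 0
        Y1 = rng.normal(0.5, 1.0, (100, m)) @ A.T   # measured data, class 1
        Sw = np.cov(Y0.T) + np.cov(Y1.T)
        w = fisher_direction(Y0, Y1, reg=1e-8)      # still computable once regularized
        print(m, f"cond(Sw) = {np.linalg.cond(Sw):.2e}")

As m grows, the condition number of the within-class scatter matrix explodes, so the unregularized discriminant direction is dominated by noise.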
Abstract:
We extend extreme learning machine (ELM) classifiers to complex Reproducing Kernel Hilbert Spaces (RKHS) where the input/output variables as well as the optimization variables are complex-valued. A new family of classifiers, called complex-valued ELM (CELM) and suitable for complex-valued multiple-input–multiple-output processing, is introduced. In the proposed method, the associated Lagrangian is computed using induced RKHS kernels, adopting a Wirtinger calculus approach formulated as a constrained optimization problem similarly to the conventional ELM classifier formulation. When training the CELM, the Karush–Kuhn–Tucker (KKT) theorem is used to solve the dual optimization problem, which consists of simultaneously satisfying the smallest-training-error and smallest-norm-of-output-weights criteria. The proposed formulation also addresses aspects of quaternary classification within a Clifford algebra context. For 2D complex-valued inputs, user-defined complex-coupled hyper-planes divide the classifier input space into four partitions. For 3D complex-valued inputs, the formulation generates three pairs of complex-coupled hyper-planes through orthogonal projections. The six hyper-planes then divide the 3D space into eight partitions. It is shown that the CELM problem formulation is equivalent to solving six real-valued ELM tasks, which are induced by projecting the chosen complex kernel across the different user-defined coordinate planes. A classification example of powdered samples on the basis of their terahertz spectral signatures is used to demonstrate the advantages of the CELM classifiers compared to their SVM counterparts. The proposed classifiers retain the advantages of their ELM counterparts, in that they can perform multiclass classification with lower computational complexity than SVM classifiers. Furthermore, because of their ability to perform classification tasks fast, the proposed formulations are of interest to real-time applications.
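For readers unfamiliar with the baseline being extended, here is a minimal real-valued ELM in Python: a random, untrained hidden layer plus a least-squares solve for the output weights (the minimum-norm solution returned by lstsq mirrors the smallest-norm-of-output-weights criterion above). The complex-valued machinery (Wirtinger calculus, induced RKHS kernels) is not reproduced here.

    import numpy as np

    def elm_train(X, T, hidden=100, seed=0):
        # Basic ELM: random hidden weights are never trained; only the output
        # weights beta are fitted, via a minimum-norm least-squares solve.
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(X.shape[1], hidden))
        b = rng.normal(size=hidden)
        H = np.tanh(X @ W + b)                      # hidden-layer activations
        beta, *_ = np.linalg.lstsq(H, T, rcond=None)
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta

    # Toy usage: two classes coded as +/-1
    X = np.random.default_rng(1).normal(size=(200, 4))
    T = np.sign(X[:, 0] + X[:, 1])
    W, b, beta = elm_train(X, T)
    print(np.mean(np.sign(elm_predict(X, W, b, beta)) == T))   # training accuracy

Because training reduces to one linear solve, the low computational cost claimed relative to SVM training is immediate.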
Abstract:
Mediterranean ecosystems rival tropical ecosystems in terms of plant biodiversity. The Mediterranean Basin (MB) itself hosts 25 000 plant species, half of which are endemic. This rich biodiversity and the complex biogeographical and political issues make conservation a difficult task in the region. Species, habitat, ecosystem and landscape approaches have been used to identify conservation targets at various scales, i.e. European, national, regional and local. Conservation decisions require adequate information at the species, community and habitat level. Nevertheless, and despite recent improvements and efforts, this information is still incomplete and fragmented, and varies from one country to another. This paper reviews the biogeographic data, the problems arising from current conservation efforts, and methods for conservation assessment and prioritization using GIS. GIS has an important role to play in managing spatial and attribute information on the ecosystems of the MB and in facilitating interactions with existing databases. Where limited information is available, it can be used for prediction when directly or indirectly linked to externally built models. As well as being a predictive tool, GIS today incorporates spatial techniques that can improve the level of information, such as fuzzy logic and geostatistics, or provide insight into landscape changes, such as 3D visualization. Where resources are limited, it can assist with identifying sites of conservation priority or with the resolution of environmental conflicts (scenario building). Although not a panacea, GIS is an invaluable tool for improving the understanding of Mediterranean ecosystems and their dynamics and for practical management in a region that is under increasing pressure from human impact.
Abstract:
Forecasting atmospheric blocking is one of the main problems facing medium-range weather forecasters in the extratropics. The European Centre for Medium-Range Weather Forecasts (ECMWF) Ensemble Prediction System (EPS) provides an excellent basis for medium-range forecasting as it provides a number of different possible realizations of the meteorological future. This ensemble of forecasts attempts to account for uncertainties in both the initial conditions and the model formulation. Since 18 July 2000, routine output from the EPS has included the field of potential temperature on the potential vorticity (PV) = 2 PV units (PVU) surface, the dynamical tropopause. This has enabled the objective identification of blocking using an index based on the reversal of the meridional potential-temperature gradient. A year of EPS probability forecasts of Euro-Atlantic and Pacific blocking have been produced and are assessed in this paper, concentrating on the Euro-Atlantic sector. Standard verification techniques such as Brier scores, Relative Operating Characteristic (ROC) curves and reliability diagrams are used. It is shown that Euro-Atlantic sector-blocking forecasts are skilful relative to climatology out to 10 days, and are more skilful than the deterministic control forecast at all lead times. The EPS is also more skilful than a probabilistic version of this deterministic forecast, though the difference is smaller. In addition, it is shown that the onset of a sector-blocking episode is less well predicted than its decay. As the lead time increases, the probability forecasts tend towards a model climatology with slightly less blocking than is seen in the real atmosphere. This small under-forecasting bias in the blocking forecasts is possibly related to a westerly bias in the ECMWF model.
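Both ingredients of the verification are easy to sketch in Python. Below, a reversal-style index flags longitudes where potential temperature on the 2-PVU surface increases poleward, and a Brier score measures ensemble probability forecasts against observed binary outcomes. The latitudes, threshold and index form are illustrative choices, not the paper's exact definitions.

    import numpy as np

    def blocking_flags(theta, lats, lat0=60.0, dlat=15.0):
        # Flag blocking where theta on the 2-PVU surface increases poleward,
        # i.e. the meridional gradient reverses (illustrative definition).
        # theta has shape (n_longitudes, n_latitudes).
        i0 = np.argmin(np.abs(lats - lat0))
        i1 = np.argmin(np.abs(lats - (lat0 + dlat)))
        grad = (theta[:, i1] - theta[:, i0]) / dlat   # K per degree latitude
        return grad > 0.0                              # True = blocked at that longitude

    def brier_score(p_forecast, occurred):
        # Mean squared error of probability forecasts against 0/1 outcomes.
        return float(np.mean((np.asarray(p_forecast) - np.asarray(occurred)) ** 2))

    # Example: ensemble blocking probabilities vs. observed blocking on five days
    print(brier_score([0.9, 0.7, 0.2, 0.1, 0.6], [1, 1, 0, 0, 1]))

Applying the flag function to each ensemble member and averaging gives the probability forecast that the Brier score then verifies.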
Abstract:
Airborne scanning laser altimetry (LiDAR) is an important new data source for river flood modelling. LiDAR can give dense and accurate DTMs of floodplains for use as model bathymetry. Spatial resolutions of 0.5 m or less are possible, with a height accuracy of 0.15 m. LiDAR gives a Digital Surface Model (DSM), so vegetation-removal software (e.g. TERRASCAN) must be used to obtain a DTM. An example used to illustrate the current state of the art will be the LiDAR data provided by the EA, which has been processed by their in-house software to convert the raw data to a ground DTM and a separate vegetation-height map. Their method distinguishes trees from buildings on the basis of object size. EA data products include the DTM with or without buildings removed, a vegetation-height map, a DTM with bridges removed, etc.

Most vegetation-removal software ignores short vegetation less than, say, 1 m high. We have attempted to extend vegetation-height measurement to short vegetation using local height texture. Typically most of a floodplain may be covered in such vegetation. The idea is to assign friction coefficients depending on local vegetation height, so that friction is spatially varying (see the sketch after this abstract). This obviates the need to calibrate a global floodplain friction coefficient. It is not clear at present if the method is useful, but it is worth testing further.

The LiDAR DTM is usually determined by looking for local minima in the raw data, then interpolating between these to form a space-filling height surface. This is a low-pass filtering operation, in which objects of high spatial frequency such as buildings, river embankments and walls may be incorrectly classed as vegetation. The problem is particularly acute in urban areas. A solution may be to apply pattern-recognition techniques to LiDAR height data fused with other data types such as LiDAR intensity or multispectral CASI data. We are attempting to use digital map data (Mastermap structured topography data) to help to distinguish buildings from trees, and roads from areas of short vegetation. The problems involved in doing this will be discussed. A related problem of how best to merge historic river cross-section data with a LiDAR DTM will also be considered.

LiDAR data may also be used to help generate a finite element mesh. In rural areas we have decomposed a floodplain mesh according to taller vegetation features such as hedges and trees, so that e.g. hedge elements can be assigned higher friction coefficients than those in adjacent fields. We are attempting to extend this approach to urban areas, so that the mesh is decomposed in the vicinity of buildings, roads, etc. as well as trees and hedges. A dominant-points algorithm is used to identify points of high curvature on a building or road, which act as initial nodes in the meshing process. A difficulty is that the resulting mesh may contain a very large number of nodes. However, the mesh generated may be useful to allow a high-resolution FE model to act as a benchmark for a more practical lower-resolution model.

A further problem discussed will be how best to exploit data redundancy due to the high resolution of the LiDAR compared to that of a typical flood model. Problems occur if features have dimensions smaller than the model cell size: e.g. for a 5 m-wide embankment within a raster grid model with 15 m cell size, the maximum height of the embankment locally could be assigned to each cell covering the embankment. But how could a 5 m-wide ditch be represented? Again, this redundancy has been exploited to improve wetting/drying algorithms using the sub-grid-scale LiDAR heights within finite elements at the waterline.
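The spatially varying friction idea mentioned above reduces, in its simplest form, to a per-cell mapping from vegetation height to a roughness coefficient. The Python sketch below uses a linear mapping onto Manning's n; the functional form and constants are illustrative assumptions, not values from the abstract.

    import numpy as np

    def manning_n_from_vegetation(veg_height, n_bare=0.03, slope=0.02, cap=5.0):
        # Map a LiDAR vegetation-height grid (m) to a spatially varying Manning
        # friction coefficient: bare-earth value plus a height-dependent increment.
        # Linear form and constants are hypothetical.
        return n_bare + slope * np.clip(veg_height, 0.0, cap)

    veg = np.array([[0.0, 0.3, 1.2],
                    [0.8, 2.5, 4.0]])               # vegetation-height map (m)
    print(manning_n_from_vegetation(veg))           # one friction value per cell

Because every cell gets its own coefficient, no single global floodplain friction value needs to be calibrated, which is the point made in the abstract.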
Abstract:
We study certain boundary value problems for the one-dimensional wave equation posed in a time-dependent domain. The approach we propose is based on a general transform method for solving boundary value problems for integrable nonlinear PDEs in two variables, which has been applied extensively to the study of linear parabolic and elliptic equations. Here we analyse the wave equation as a simple illustrative example to discuss the particular features of this method in the context of linear hyperbolic PDEs, which have not been studied before in this framework.
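For orientation, the setting can be written out explicitly. The formulation below, with the domain boundary moving as x = l(t), is a generic statement of a time-dependent-domain problem and not necessarily the exact one treated in the paper:

    u_{tt}(x,t) = u_{xx}(x,t), \qquad 0 < x < l(t), \quad t > 0,

with initial conditions u(x,0) = u_0(x) and u_t(x,0) = v_0(x), and boundary conditions prescribed at x = 0 and on the moving boundary x = l(t). The difficulty the transform method addresses is precisely that the boundary location enters the problem as a function of time.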
Abstract:
We consider boundary value problems for the N-wave interaction equations in one and two space dimensions, posed for x ≥ 0 and x, y ≥ 0, respectively. Following the recent work of Fokas, we develop an inverse scattering formalism to solve these problems by considering the simultaneous spectral analysis of the two ordinary differential equations in the associated Lax pair. The solution of the boundary value problems is obtained through the solution of a local Riemann–Hilbert problem in the one-dimensional case, and a nonlocal Riemann–Hilbert problem in the two-dimensional case.