926 results for Pareto-optimal solutions
Abstract:
Analyzing statistical dependencies is a fundamental problem in all empirical science. Dependencies help us understand causes and effects, create new scientific theories, and devise cures for problems. Nowadays, large amounts of data are available, but efficient computational tools for analyzing the data are missing. In this research, we develop efficient algorithms for a commonly occurring search problem - searching for the statistically most significant dependency rules in binary data. We consider dependency rules of the form X->A or X->not A, where X is a set of positive-valued attributes and A is a single attribute. Such rules describe which factors either increase or decrease the probability of the consequent A. A classical example is genetic and environmental factors, which can either cause or prevent a disease. The emphasis in this research is that the discovered dependencies should be genuine - i.e., they should also hold in future data. This is an important distinction from traditional association rules, which - in spite of their name and a similar appearance to dependency rules - do not necessarily represent statistical dependencies at all, or represent only spurious connections that occur by chance. Therefore, the principal objective is to search for rules with statistical significance measures. Another important objective is to search for only non-redundant rules, which express the real causes of the dependence without any superfluous extra factors. The extra factors add no new information on the dependence; they can only blur it and make it less accurate in future data. The problem is computationally very demanding, because the number of all possible rules increases exponentially with the number of attributes. In addition, neither statistical dependency nor statistical significance is a monotonic property, which means that the traditional pruning techniques do not work. As a solution, we first derive the mathematical basis for pruning the search space with any well-behaved statistical significance measure. The mathematical theory is complemented by a new algorithmic invention, which enables an efficient search without any heuristic restrictions. The resulting algorithm can be used to search for both positive and negative dependencies with any commonly used statistical measure, such as Fisher's exact test, the chi-squared measure, mutual information, and z-scores. According to our experiments, the algorithm scales well, especially with Fisher's exact test. It can easily handle even the densest data sets with 10,000-20,000 attributes. Still, the results are globally optimal, which is a remarkable improvement over the existing solutions. In practice, this means that the user does not have to worry about whether the dependencies hold in future data or whether the data still contains better, but undiscovered, dependencies.
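As a concrete illustration of scoring a single rule with one of the measures named above, the sketch below applies Fisher's exact test to a 2x2 contingency table for a rule X->A; the counts and the use of scipy are illustrative assumptions, not part of the thesis.

```python
# A minimal sketch of scoring a candidate dependency rule X -> A with
# Fisher's exact test. The 2x2 counts below are hypothetical.
from scipy.stats import fisher_exact

# Contingency table over the data rows:
#                A present   A absent
# X present         n_XA       n_Xa
# X absent          n_xA       n_xa
table = [[40, 10],   # rows where all attributes of X are 1
         [15, 35]]   # rows where some attribute of X is 0

odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p_value:.2e}")
# A small p indicates a positive dependency X -> A that is unlikely to be
# a chance artefact; a rule X -> not A would use the opposite tail.
```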
Abstract:
Digital image
Abstract:
In this paper we consider the problem of computing an "optimal" popular matching. We assume that our input instance G admits a popular matching, and here we are asked to return not just any popular matching but an optimal popular matching, where the definition of optimality is given as a part of the problem statement; for instance, optimality could be fairness, in which case we are required to return a fair popular matching. We show an O(n^2 + m) algorithm for this problem, assuming that the preference lists are strict, where m is the number of edges in G and n is the number of applicants.
Abstract:
Optimal bang-coast maintenance policies for a machine subject to failure are considered. The approach utilizes a semi-Markov model of the system. A simplified model for modifying the probability of machine failure through maintenance is employed. A numerical example is presented to illustrate the procedure and results.
Similar solutions for the incompressible laminar boundary layer with pressure gradient in micropolar fluids
Abstract:
This paper presents the similarity solution for the steady incompressible laminar boundary layer flow of a micropolar fluid past an infinite wedge. The governing equations have been solved numerically using the fourth-order Runge-Kutta-Gill method. The results indicate the extent to which the velocity and microrotation profiles and the surface shear stress are influenced by the coupling, microrotation, and pressure gradient parameters. The important role played by the standard length of the micropolar fluid in determining the structure of the boundary layer is also discussed.
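As a rough illustration of this kind of integration, here is a minimal shooting-method sketch using classical fourth-order Runge-Kutta (the Gill variant is an equivalent low-storage reformulation) on the Blasius equation f''' + f f''/2 = 0, the flat-plate, non-micropolar special case of the wedge flow above; the step size and bracketing values are arbitrary choices.

```python
# Shooting method with classical RK4 for the Blasius boundary layer:
# state y = (f, f', f''), integrate from the wall to eta ~ 10.
import numpy as np

def rhs(y):
    f, fp, fpp = y
    return np.array([fp, fpp, -0.5 * f * fpp])

def rk4(y0, h, steps):
    y = np.array(y0, dtype=float)
    for _ in range(steps):
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * h * k1)
        k3 = rhs(y + 0.5 * h * k2)
        k4 = rhs(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

# Bisect on the wall shear f''(0) so that f'(eta) -> 1 far from the wall.
lo, hi = 0.1, 1.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    fp_inf = rk4([0.0, 0.0, mid], 0.01, 1000)[1]
    lo, hi = (mid, hi) if fp_inf < 1.0 else (lo, mid)
print(f"f''(0) = {mid:.4f}")   # converges to the classical value 0.3321
```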
Abstract:
Mobile RFID services for the Internet of Things can be created by using RFID as an enabling technology in mobile devices. Humans, devices, and things are the content providers and users of these services. Mobile RFID services can either be provided on mobile devices as stand-alone services or be combined with end-to-end systems. When different service solution scenarios are considered, there is more than one possible architectural solution in the network, mobile, and back-end server areas. By combining the solutions wisely and applying software architecture and engineering principles, a combined solution can be formulated for certain application-specific use cases. This thesis illustrates these ideas. It also shows how the solutions can generally be used in real-world use case scenarios. A case study is used to add further evidence.
Abstract:
This study reports an investigation of the ion exchange treatment of sodium chloride solutions in relation to the use of resin technology for applications such as desalination of brackish water. In particular, a strong acid cation (SAC) resin (DOW Marathon C) was studied to determine its capacity for sodium uptake and to evaluate the fundamentals of the ion exchange process involved. Key questions to answer included: the impact of resin identity; the best models to simulate the kinetic and equilibrium exchange behaviour of sodium ions; the difference between using linear least squares (LLS) and non-linear least squares (NLLS) methods for data interpretation; and the effect of changing the type of anion in solution which accompanied the sodium species. Kinetic studies suggested that the exchange process was best described by a pseudo first order rate expression based upon non-linear least squares analysis of the test data. Application of the Langmuir Vageler isotherm model was recommended, as it allowed confirmation that experimental conditions were sufficient for maximum loading of sodium ions to occur. The Freundlich expression best fitted the equilibrium data when analysing the information by a NLLS approach. In contrast, LLS methods suggested that the Langmuir model was optimal for describing the equilibrium process. The Competitive Langmuir model, which considered the stoichiometric nature of the ion exchange process, estimated the maximum loading of sodium ions to be 64.7 g Na/kg resin. This latter value was comparable to sodium ion capacities for SAC resin published previously. The inherent discrepancies involved when using linearized versions of kinetic and isotherm equations were illustrated, and despite their widespread use, the value of this latter approach was questionable. The equilibrium behaviour of sodium ions from sodium fluoride solution revealed that the sodium ions were now more preferred by the resin compared to the situation with sodium chloride. The solution chemistry of hydrofluoric acid was suggested as promoting the affinity of the sodium ions for the resin.
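The LLS-versus-NLLS point can be made concrete with a small sketch: fitting the pseudo first order model q(t) = qe(1 - exp(-k1 t)) to hypothetical uptake data; the numbers below are invented for illustration, not taken from the study.

```python
# Compare non-linear and linearized fits of a pseudo first order model.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([2, 5, 10, 20, 40, 60, 120], dtype=float)   # contact time, min
q = np.array([12, 25, 38, 50, 58, 61, 64], dtype=float)   # uptake, g Na/kg resin

def pfo(t, qe, k1):
    # pseudo first order: q(t) = qe * (1 - exp(-k1 * t))
    return qe * (1.0 - np.exp(-k1 * t))

# NLLS fit on the untransformed data (the approach recommended above).
(qe_nl, k1_nl), _ = curve_fit(pfo, t, q, p0=(60.0, 0.05))

# Linearized (Lagergren) fit: ln(qe - q) = ln(qe) - k1*t, which needs a
# prior guess of qe and distorts the error structure of the data.
qe_guess = 1.05 * q.max()
k1_ll = -np.polyfit(t, np.log(qe_guess - q), 1)[0]

print(f"NLLS: qe = {qe_nl:.1f} g/kg, k1 = {k1_nl:.3f} 1/min")
print(f"LLS : k1 = {k1_ll:.3f} 1/min (with qe fixed at {qe_guess:.1f})")
```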
Abstract:
A spatial sampling design that uses pair-copulas is presented, with the aim of reducing prediction uncertainty by selecting additional sampling locations based on both the spatial configuration of existing locations and the values of the observations at those locations. The novelty of the approach lies in the use of pair-copulas to estimate uncertainty at unsampled locations. Spatial pair-copulas are able to capture spatial dependence more accurately than other types of spatial copula models. Additionally, unlike the traditional kriging variance, uncertainty estimates from the pair-copula account for the influence of measurement values and not just the configuration of observations. This feature is beneficial, for example, for more accurate identification of soil contamination zones where high contamination measurements are located near measurements of varying contamination. The proposed design methodology is applied to a soil contamination example from the Swiss Jura region. A partial redesign of the original sampling configuration demonstrates the potential of the proposed methodology.
Abstract:
This paper presents an analysis of an optimal linear filter in the presence of constraints on the mean squared values of the estimates, from the viewpoint of singular optimal control. The singular arc has been shown to satisfy the generalized Legendre-Clebsch condition and Jacobson's condition. Both the cases of white measurement noise and coloured measurement noise are considered. The constrained estimate is shown to be a linear transformation of the unconstrained Kalman estimate.
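A schematic sketch of that closing statement (not the paper's singular-control derivation) might look as follows: run an ordinary scalar Kalman filter on white measurement noise, then apply a linear scaling to the estimate so that its mean squared value satisfies an assumed bound. All parameter values are illustrative.

```python
# Scalar Kalman filter followed by a linear scaling of the estimate.
import numpy as np

rng = np.random.default_rng(0)
a, q, r = 0.95, 0.1, 0.5       # state transition, process and measurement noise
x, xhat, p = 0.0, 0.0, 1.0
cap = 0.5                      # assumed bound on the mean squared estimate
est = []

for _ in range(200):
    x = a * x + rng.normal(scale=np.sqrt(q))      # true state
    z = x + rng.normal(scale=np.sqrt(r))          # white-noise measurement
    # standard Kalman predict/update
    xhat, p = a * xhat, a * a * p + q
    k = p / (p + r)
    xhat, p = xhat + k * (z - xhat), (1 - k) * p
    est.append(xhat)

est = np.array(est)
ms = np.mean(est**2)
scale = min(1.0, np.sqrt(cap / ms))   # linear transformation of the estimate
print(f"unconstrained MS = {ms:.3f}, scaled MS = {np.mean((scale*est)**2):.3f}")
```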
Abstract:
The stochastic version of Pontryagin's maximum principle is applied to determine an optimal maintenance policy for equipment subject to random deterioration. The deterioration of the equipment with age is modelled as a random process. Next, the model is generalized to include random catastrophic failure of the equipment. The optimal maintenance policy is derived for two special probability distributions of the time to failure of the equipment, namely, the exponential and Weibull distributions. Both the salvage value and the deterioration rate of the equipment are treated as state variables and the maintenance as a control variable. The result is illustrated by an example.
Abstract:
In this paper, we first describe a framework to model the sponsored search auction on the web as a mechanism design problem. Using this framework, we describe two well-known mechanisms for sponsored search auctions: Generalized Second Price (GSP) and Vickrey-Clarke-Groves (VCG). We then derive a new mechanism for the sponsored search auction, which we call the optimal (OPT) mechanism. The OPT mechanism maximizes the search engine's expected revenue while achieving Bayesian incentive compatibility and individual rationality of the advertisers. We then undertake a detailed comparative study of the mechanisms GSP, VCG, and OPT. We compute and compare the expected revenue earned by the search engine under the three mechanisms when the advertisers are symmetric and some special conditions are satisfied. We also compare the three mechanisms in terms of incentive compatibility, individual rationality, and computational complexity. Note to Practitioners - The advertiser-supported web site is one of the successful business models in the emerging web landscape. When an Internet user enters a keyword (i.e., a search phrase) into a search engine, the user gets back a page with results, containing the links most relevant to the query and also sponsored links (also called paid advertisement links). When a sponsored link is clicked, the user is directed to the corresponding advertiser's web page. The advertiser pays the search engine in some appropriate manner for sending the user to its web page. For every search performed by any user on any keyword, the search engine faces the problem of matching a set of advertisers to the sponsored slots. In addition, the search engine also needs to decide on a price to be charged to each advertiser. Due to increasing demand for Internet advertising space, most search engines currently use auction mechanisms for this purpose. These are called sponsored search auctions. A significant percentage of the revenue of Internet giants such as Google, Yahoo!, and MSN comes from sponsored search auctions. In this paper, we study two auction mechanisms, GSP and VCG, which are quite popular in the sponsored auction context, and pursue the objective of designing a mechanism that is superior to these two mechanisms. In particular, we propose a new mechanism which we call the OPT mechanism. This mechanism maximizes the search engine's expected revenue subject to achieving Bayesian incentive compatibility and individual rationality. Bayesian incentive compatibility guarantees that it is optimal for each advertiser to bid his/her true value provided that all other agents also bid their respective true values. Individual rationality ensures that the agents participate voluntarily in the auction, since they are assured of gaining a non-negative payoff by doing so.
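The pricing difference between GSP and VCG can be seen in a small sketch; the bids and click-through rates below are hypothetical, slots are assumed separable with decreasing CTRs, and bidders are already sorted by bid.

```python
# GSP vs. VCG per-click prices for k sponsored slots.
bids = [10.0, 8.0, 5.0, 2.0]   # hypothetical per-click bids, sorted descending
ctr = [0.30, 0.20, 0.10]       # expected clicks for slots 1..k
k = len(ctr)

# GSP: the winner of slot i pays the (i+1)-th highest bid per click.
gsp = [bids[i + 1] for i in range(k)]

# VCG: slot i's total payment equals the externality imposed on the
# bidders below; computed bottom-up, then expressed per click.
total = [0.0] * k
for i in range(k - 1, -1, -1):
    ctr_next = ctr[i + 1] if i + 1 < k else 0.0
    pay_next = total[i + 1] if i + 1 < k else 0.0
    total[i] = pay_next + bids[i + 1] * (ctr[i] - ctr_next)
vcg = [total[i] / ctr[i] for i in range(k)]

print("GSP per-click prices:", gsp)   # [8.0, 5.0, 2.0]
print("VCG per-click prices:", vcg)   # [5.0, 3.5, 2.0] - never above GSP
```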
Abstract:
This paper presents a Dubins model based strategy to determine the optimal path of a Miniature Air Vehicle (MAV), constrained by a bounded turning rate, that would enable it to fly along a given straight line, starting from an arbitrary initial position and orientation. The method is then extended to meet the same objective in the presence of wind whose magnitude is comparable to the speed of the MAV. We use a modification of the Dubins path method to obtain the complete optimal solution to this problem in all its generality.
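The kinematics behind such a strategy can be sketched as follows: a constant-airspeed vehicle with a bounded turn rate, an additive wind of comparable magnitude, and a simple saturated heading law with a crab-angle correction. This is illustrative only; the paper's optimal Dubins switching logic is not reproduced, and all values are assumed.

```python
# Line-following for a bounded-turn-rate vehicle in wind (target line y = 0).
import numpy as np

v, u_max = 20.0, 0.5                  # airspeed (m/s), max turn rate (rad/s)
wx, wy = 0.0, 12.0                    # wind with magnitude comparable to v
x, y, th = 0.0, -80.0, np.pi / 2      # arbitrary initial position and heading
dt = 0.05

crab = np.arcsin(np.clip(-wy / v, -1.0, 1.0))   # feed-forward wind correction

for _ in range(2000):
    th_c = crab - np.arctan(0.02 * y)           # command heading toward y = 0
    u = np.clip(2.0 * (th_c - th), -u_max, u_max)   # saturated turn rate
    x += (v * np.cos(th) + wx) * dt             # ground-frame kinematics
    y += (v * np.sin(th) + wy) * dt
    th += u * dt

print(f"cross-track error after {2000 * dt:.0f} s: {y:.2f} m")
```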
Abstract:
Estimates of flexural frequencies of clamped square plates are initially obtained by the modified Bolotin's method. The mode shapes in “each direction” are then determined and the product functions of these mode shapes are used as admissible functions in the Rayleigh-Ritz method. The data for the first twenty eigenvalues in each of the three (four) symmetric groups obtained by the (i) Bolotin, (ii) Rayleigh and (iii) Rayleigh-Ritz methods are reported here. The Rayleigh estimates are found to be much closer to the true eigenvalues than the Bolotin estimates. The present product functions are found to be much superior to the conventional beam eigenmodes as admissible functions in the Rayleigh-Ritz method of analysis.
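To make the Rayleigh-estimate idea concrete in the simplest related setting, the sketch below computes the Rayleigh quotient for the first frequency of a clamped-clamped beam with a single admissible polynomial (nondimensional EI = rho*A = L = 1); this illustrates the method only and is not the plate computation of the paper.

```python
# Rayleigh quotient estimate for a clamped-clamped beam's first frequency.
import numpy as np

x = np.linspace(0.0, 1.0, 200001)
dx = x[1] - x[0]
w = x**2 * (1 - x)**2            # admissible: w = w' = 0 at both clamped ends
wpp = 2 - 12 * x + 12 * x**2     # exact second derivative of w

# Rayleigh quotient: omega^2 = integral (w'')^2 dx / integral w^2 dx
omega2 = (np.sum(wpp**2) * dx) / (np.sum(w**2) * dx)
print(f"beta*L estimate = {omega2**0.25:.4f} (exact value 4.7300)")
# The estimate (about 4.738) bounds the true eigenvalue from above, as a
# Rayleigh quotient with an admissible trial function must.
```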
Abstract:
An optimal pitch steering programme of a solid-fuel satellite launch vehicle to maximize either (1) the injection velocity at a given altitude, or (2) the size of a circular orbit, for a given payload is presented. The two-dimensional model includes the rotation of the atmosphere with the Earth, the vehicle's lift and drag, the variation of thrust with time and altitude, an inverse-square gravitational field, and a specified initial vertical take-off. Inequality constraints on the aerodynamic load, control force, and turning rates are also imposed. Using the properties of central force motion, the terminal constraint conditions at coast apogee are transferred to the penultimate-stage burnout. Such a transformation converts a time-free problem into a time-fixed one, reduces the number of terminal constraints, and improves accuracy, besides demanding less computer memory and time. The adjoint equations are developed in a compact matrix form. The problem is solved on an IBM 360/44 computer using a steepest ascent algorithm. An illustrative analysis of a typical launch vehicle establishes the speed of convergence, accuracy, and applicability of the algorithm.
Abstract:
Among the iterative schemes for computing the Moore-Penrose inverse of a well-conditioned matrix, only those which have an order of convergence of three or two are computationally efficient. A Fortran programme for these schemes is provided.
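A sketch of the two efficient orders, written in Python rather than Fortran for brevity; the starting value and test matrix are illustrative assumptions.

```python
# Second- and third-order hyperpower iterations for the Moore-Penrose inverse.
import numpy as np

def pinv_iterative(A, order=2, tol=1e-12, max_iter=100):
    # Starting value X0 = A^T / (||A||_1 * ||A||_inf) guarantees convergence
    # to the Moore-Penrose inverse for a well-conditioned A.
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(max_iter):
        R = I - A @ X                    # residual of the left identity
        if np.linalg.norm(R) < tol:
            break
        if order == 2:
            X = X @ (I + R)              # Newton-Schulz: quadratic convergence
        else:
            X = X @ (I + R + R @ R)      # third-order hyperpower scheme
    return X

A = np.random.default_rng(1).normal(size=(4, 6))
X = pinv_iterative(A, order=3)
print(np.allclose(X, np.linalg.pinv(A)))   # True for well-conditioned A
```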