988 results for convexity theorem


Relevance: 10.00%

Publisher:

Abstract:

In contrast with the prediction of the Heckscher-Ohlin (HO) theorem, Leontief (1953) found that the capital-labor ratio embodied in US exports is smaller than the capital-labor ratio embodied in US competitive import replacements. In Leontief's analysis, the measured factor content of US imports is computed under the assumption that all countries use US factor intensity techniques. This paper relaxes the assumption of identical factor intensity techniques: it infers the factor intensity techniques of different countries from international relative factor price differences. With the inferred, country-specific factor intensity techniques, the Leontief paradox is re-examined and is found either to disappear or to be eased.
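
A tiny numerical sketch may make the factor-content comparison concrete; the two-sector technique coefficients, trade bundles, and the kl_ratio helper below are purely hypothetical and are not taken from the paper.

```python
import numpy as np

# Hypothetical two-sector illustration of the factor-content computation.
# aK[i], aL[i]: capital and labour required per unit of output in sector i.
us_aK, us_aL = np.array([3.0, 1.0]), np.array([1.0, 2.0])            # US techniques
foreign_aK, foreign_aL = np.array([1.5, 0.5]), np.array([2.0, 4.0])  # inferred foreign techniques

exports = np.array([10.0, 5.0])   # US export bundle by sector
imports = np.array([4.0, 12.0])   # US competitive import replacements

def kl_ratio(aK, aL, bundle):
    """Capital-labor ratio embodied in a trade bundle."""
    return (aK @ bundle) / (aL @ bundle)

# Leontief's original test: both bundles valued with US techniques.
print(kl_ratio(us_aK, us_aL, exports), kl_ratio(us_aK, us_aL, imports))

# Re-test with country-specific techniques for the import bundle, as inferred
# from relative factor price differences (values here are purely illustrative).
print(kl_ratio(us_aK, us_aL, exports), kl_ratio(foreign_aK, foreign_aL, imports))
```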

Relevance: 10.00%

Publisher:

Abstract:

Traditional approaches such as theorem proving and model checking have been used successfully to analyze security protocols. They typically assume that data communication is reliable and require the user to predetermine authentication goals. However, missing and inconsistent data have been largely ignored, and increasingly complicated security protocols make it difficult to predefine such goals. This paper presents a novel approach to analyzing security protocols using association rule mining. It can not only validate the reliability of transactions but also discover potential correlations between secure messages. The algorithm and experiments demonstrate that our approach is useful and promising.
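
A minimal sketch of the association-rule idea applied to protocol "transactions" is given below; the message names, support and confidence thresholds, and the support helper are hypothetical, and the paper's own mining algorithm may differ.

```python
from itertools import combinations

# Hypothetical transactions: sets of messages observed in protocol runs.
transactions = [
    {"msg_auth_request", "msg_nonce", "msg_session_key"},
    {"msg_auth_request", "msg_nonce", "msg_session_key"},
    {"msg_auth_request", "msg_session_key"},
    {"msg_auth_request", "msg_nonce"},
]

def support(itemset):
    """Fraction of transactions containing every message in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

# Frequent itemsets of size 1 and 2 (a full Apriori pass would grow levels iteratively).
items = sorted(set().union(*transactions))
frequent = [frozenset(c) for k in (1, 2)
            for c in combinations(items, k) if support(set(c)) >= 0.5]

# Rules X -> Y with confidence = support(X u Y) / support(X); low-confidence rules
# can flag missing or inconsistent message sequences.
for itemset in (s for s in frequent if len(s) == 2):
    for lhs in itemset:
        rhs = itemset - {lhs}
        conf = support(itemset) / support({lhs})
        if conf >= 0.7:
            print(f"{lhs} -> {set(rhs)} (conf {conf:.2f})")
```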

Relevance: 10.00%

Publisher:

Abstract:

This paper describes a technique for real-time modeling of deformable tissue. Specifically geared towards needle insertion simulation, the low computational requirements of the model enable highly accurate haptic feedback without introducing the noticeable time delay or buzzing generally associated with haptic surgery simulation. Using a spherical voxel array combined with aspects of computational geometry and agent communication and interaction principles, the model provides haptic update rates of over 1000 Hz with real-time visual feedback, iterating through over 1000 voxels per millisecond to determine collision and haptic response while making use of Vieta's theorem for extraneous force culling.
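
A rough sketch of the voxel-iteration idea is given below; the penalty force law, stiffness and radius values, and the haptic_force helper are illustrative assumptions, and the Vieta-based extraneous force culling is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
centres = rng.uniform(-1.0, 1.0, size=(1000, 3))   # spherical voxel centres
radius = 0.2                                        # common voxel radius (illustrative)
stiffness = 200.0                                   # N/m, illustrative

def haptic_force(tip):
    """Sum of penalty forces from every voxel the tool tip penetrates."""
    offsets = tip - centres
    dists = np.linalg.norm(offsets, axis=1)
    inside = dists < radius                          # collision test over all voxels
    depth = radius - dists[inside]                   # penetration depth per voxel
    directions = offsets[inside] / np.maximum(dists[inside][:, None], 1e-9)
    return (stiffness * depth[:, None] * directions).sum(axis=0)

print(haptic_force(np.array([0.0, 0.0, 0.0])))
```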

Relevance: 10.00%

Publisher:

Abstract:

Teachers in many introductory statistics courses demonstrate the Central Limit Theorem by using a computer to draw a large number of random samples of size n from a population distribution and plotting the resulting empirical sampling distribution of the sample mean. Many computer applications can be used for this (see, for example, the Rice Virtual Lab in Statistics: http://www.ruf.rice.edu/~lane/rvls.html). The effectiveness of such demonstrations has been questioned (see delMas et al. (1999)), but in the work presented in this paper we do not rely on sampling distributions to convey or teach statistical concepts beyond the fact that the sampling distribution of the mean is independent of the distribution of the population, provided the sample size is sufficiently large.

We describe a lesson that starts with a demonstration of the CLT but samples from a (finite) population for which actual census data are provided; doing this may help students relate to the concepts more easily – they can see the original data as a column of numbers, and if the samples are shown they can also see random samples being taken. We continue this theme of sampling from census data to teach the basic ideas of inference, and end with standard resampling/bootstrap procedures.
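
The demonstration can be sketched in a few lines; the synthetic "census" column, sample size, and number of samples below are placeholders rather than the lesson's actual data.

```python
import numpy as np

# Treat a column of values as the finite population, draw many random samples of
# size n, and inspect the empirical sampling distribution of the sample mean.
rng = np.random.default_rng(1)
population = rng.exponential(scale=40.0, size=20_000)   # skewed stand-in for a census column

n, n_samples = 30, 5_000
sample_means = np.array([rng.choice(population, size=n, replace=False).mean()
                         for _ in range(n_samples)])

print(population.mean(), population.std())
# The mean of the sample means is close to the population mean, their spread is
# roughly sigma / sqrt(n), and a histogram of sample_means looks approximately normal.
print(sample_means.mean(), sample_means.std())
```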

We also demonstrate how Excel can provide a tool for developing learning objects to support the program; a workbook called Sampling.xls is available from www.deakin.edu.au/~rodneyc/PS > Sampling.xls.

Relevance: 10.00%

Publisher:

Abstract:

Least squares polynomial splines are an effective tool for data fitting, but they may fail to preserve essential properties of the underlying function, such as monotonicity or convexity. The shape restrictions are translated into linear inequality conditions on the spline coefficients. The basis functions are selected in such a way that these conditions take a simple form, and the problem becomes a non-negative least squares problem, for which effective and robust methods of solution exist. Multidimensional monotone approximation is achieved by using tensor-product splines with the appropriate restrictions. Additional interpolation conditions can also be introduced. Conversion formulas to the traditional B-spline representation are provided.
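
A minimal sketch of the non-negative least squares formulation for a monotone fit is given below; the ramp basis and knot choice are illustrative stand-ins for the paper's B-spline construction.

```python
import numpy as np
from scipy.optimize import nnls

# Write the fit as an intercept plus non-negative combinations of ramp functions
# max(0, x - t_j): monotonicity (non-decreasing) then reduces to non-negativity of
# the coefficients, and the problem can be handed to an NNLS solver.
rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.log1p(5.0 * x) + rng.normal(scale=0.05, size=x.size)   # noisy increasing data

knots = np.linspace(0.0, 1.0, 15)
ramps = np.maximum(0.0, x[:, None] - knots[None, :])
# Two intercept columns (+1 and -1) let the intercept take either sign under NNLS.
A = np.column_stack([np.ones_like(x), -np.ones_like(x), ramps])

coef, _ = nnls(A, y)
fit = A @ coef          # guaranteed non-decreasing in x
```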

Relevance: 10.00%

Publisher:

Abstract:

The theory of abstract convexity provides the necessary tools for building accurate one-sided approximations of functions. Cutting angle methods have recently emerged as a tool for global optimization of families of abstract convex functions. Their applicability has subsequently been extended to other problems, such as scattered data interpolation. This paper reviews three different applications of cutting angle methods, namely global optimization, generation of non-uniform random variates, and multivariate interpolation.

Relevance: 10.00%

Publisher:

Abstract:

Methods of Lipschitz optimization allow one to find and confirm the global minimum of multivariate Lipschitz functions using a finite number of function evaluations. This paper extends the Cutting Angle method, in which the optimization problem is solved by building a sequence of piecewise linear underestimates of the objective function. We use a more flexible set of support functions, which yields a better underestimate of a Lipschitz objective function. An efficient algorithm for enumerating all local minima of the underestimate is presented, along with the results of numerical experiments. The one-dimensional Pijavski-Shubert method arises as a special case of the proposed approach.
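
As an illustration of the one-dimensional special case, here is a minimal sketch of the Pijavski-Shubert saw-tooth underestimate; the test function, Lipschitz constant, and iteration count are arbitrary choices, not taken from the paper.

```python
import numpy as np

def pijavski_shubert(f, a, b, lipschitz, n_iter=50):
    """Minimise a Lipschitz function on [a, b] by repeatedly evaluating it at the
    minimiser of the current piecewise linear (saw-tooth) underestimate."""
    xs, fs = [a, b], [f(a), f(b)]
    for _ in range(n_iter):
        order = np.argsort(xs)
        xs_s = [xs[i] for i in order]
        fs_s = [fs[i] for i in order]
        best_val, best_x = np.inf, None
        # Between neighbouring samples, the underestimate's minimum is where the
        # two cones of slope +/-L intersect.
        for (x1, f1), (x2, f2) in zip(zip(xs_s, fs_s), zip(xs_s[1:], fs_s[1:])):
            x_new = 0.5 * (x1 + x2) + (f1 - f2) / (2.0 * lipschitz)
            lower = 0.5 * (f1 + f2) - 0.5 * lipschitz * (x2 - x1)
            if lower < best_val:
                best_val, best_x = lower, x_new
        xs.append(best_x)
        fs.append(f(best_x))
    i = int(np.argmin(fs))
    return xs[i], fs[i]

# Example: minimise a wiggly Lipschitz function on [0, 10] (Lipschitz constant <= 1.1).
x_star, f_star = pijavski_shubert(lambda x: np.sin(x) + 0.1 * x, 0.0, 10.0, lipschitz=1.2)
```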

Relevance: 10.00%

Publisher:

Abstract:

There has been increasing interest in face recognition in recent years. Many recognition methods have been developed so far, some of them very encouraging. A key remaining issue is the existence of variations in the input face image. Today, methods exist that can handle specific image variations, but we are yet to see methods that can be used effectively in unconstrained situations. This paper presents a method that can handle partial translation, rotation, or scale variations in the input face image. The principle is to automatically identify objects within images using their partial self-similarities. The paper presents two recognition methods which can be used to recognise objects within images. A face recognition system is then presented that is insensitive to limited translation, rotation, or scale variations in the input face image. The performance of the system is evaluated through four experiments. The results show that the system achieves higher recognition rates than a number of existing approaches.

Relevance: 10.00%

Publisher:

Abstract:

This paper analyzes corruption as a collusive act which requires the participation of two willing partners. An agent intending to engage in a corrupt act must search for a like-minded partner. When many people in the economy are corrupt, such a search is more likely to be fruitful. Thus when an agent engages in a search, he raises the net benefit of searching for other similar agents in the economy, creating an externality. This introduces a non-convexity into the model, which consequently has multiple equilibria. The economy can be in stable equilibrium with either a high or a low level of corruption.

Starting from the high-corruption equilibrium, a sufficient increase in vigilance triggers a negative cascade, leading the economy to a new equilibrium in which no agent finds it profitable to search for corrupt partners. The no-corruption equilibrium continues to be stable if vigilance is then relaxed. This suggests that the correct way to deal with corruption is to launch a "big push" with large amounts of resources. Once the level of corruption declines, these resources can be withdrawn.
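
A toy simulation of the search externality may help fix ideas; the linear matching rule and the gain and penalty values in next_share below are hypothetical, chosen only to reproduce the two stable equilibria and the "big push" effect described above.

```python
import numpy as np

# Agents with search costs uniform on [0, 1] search for a corrupt partner when
# theta * gain - penalty exceeds their cost, so next period's corrupt share is
# clip(gain * theta - penalty, 0, 1). (Illustrative parametrisation only.)
def next_share(theta, gain=2.0, penalty=0.3):
    return float(np.clip(gain * theta - penalty, 0.0, 1.0))

def iterate(theta0, penalty, steps=200, gain=2.0):
    theta = theta0
    for _ in range(steps):
        theta = next_share(theta, gain, penalty)
    return theta

print(iterate(0.9, penalty=0.3))   # high-corruption equilibrium (theta -> 1)
print(iterate(0.2, penalty=0.3))   # low-corruption equilibrium (theta -> 0)

# "Big push": a temporary rise in vigilance (penalty) moves the economy to the
# no-corruption equilibrium, which remains stable once vigilance is relaxed.
theta_after_push = iterate(0.9, penalty=1.2)     # crackdown
print(iterate(theta_after_push, penalty=0.3))    # stays at 0 after relaxation
```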

Relevance: 10.00%

Publisher:

Abstract:

Some knowledge of what it means to construct a proof is an extremely important part of mathematics. All mathematics teachers and students should have some exposure to the ideas of proof and proving. This paper deals with the issue of creating proofs in mathematics problems.

Relevance: 10.00%

Publisher:

Abstract:

This paper is concerned with leader-follower finite-time consensus control of multi-agent networks with input disturbances. A terminal sliding mode control scheme is used to design the distributed control law. A new terminal sliding mode surface is proposed to guarantee finite-time consensus under a fixed topology, with the common assumption that the position and velocity of the active leader are known only to its neighbors. Using the finite-time Lyapunov stability theorem, it is shown that if the directed graph of the network has a directed spanning tree, then the terminal sliding mode control law can guarantee finite-time consensus even when the time-varying control input of the active leader is unknown to any follower.
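
For intuition only, the following is a simplified first-order leader-following consensus simulation on a graph with a directed spanning tree; it is not the paper's terminal sliding mode controller and does not exhibit finite-time convergence.

```python
import numpy as np

# Adjacency among the three followers (leader -> 0 -> 1 -> 2 forms a spanning tree).
A = np.array([[0, 0, 0],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)
b = np.array([1.0, 0.0, 0.0])           # only follower 0 observes the leader directly

dt, steps = 0.01, 2000
x = np.array([2.0, -1.0, 4.0])          # follower states
leader = 0.0                            # leader state, moving at constant speed
for _ in range(steps):
    leader += 0.5 * dt
    coupling = A @ x - A.sum(axis=1) * x        # sum_j a_ij * (x_j - x_i)
    x = x + dt * (coupling + b * (leader - x))  # consensus protocol with leader pinning

# Followers track the leader (with a residual lag, since this simple protocol
# converges only asymptotically, not in finite time).
print(leader, x)
```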

Relevance: 10.00%

Publisher:

Abstract:

The development and use of cocycles for the analysis of non-autonomous behaviour is a technique that has been known for several years. Initially developed as an extension to semi-group theory for studying non-autonomous behaviour, it was used extensively in analysing random dynamical systems [2, 9, 10, 12]. Many of the results regarding asymptotic behaviour developed for random dynamical systems, including the concept of cocycle attractors, were successfully transferred and reinterpreted for deterministic non-autonomous systems, primarily by P. Kloeden and B. Schmalfuss [20, 21, 28, 29]. The theory concerning cocycle attractors was later developed in various contexts specific to particular classes of dynamical systems [6, 7, 13], although a comprehensive understanding of cocycle attractors (redefined as pullback attractors within this thesis) and their role in the stability of non-autonomous dynamical systems was at this stage still incomplete.

It was this purpose that motivated Chapters 1-3, which define and formalise the concept of stability within non-autonomous dynamical systems. The approach taken incorporates the elements of classical asymptotic theory and refines the notion of pullback attraction, with further development towards a study of pullback stability and pullback asymptotic stability. In a comprehensive manner, it clearly establishes both pullback and forward (classical) stability theory as fundamentally unique and essential components of non-autonomous stability. Many of the introductory theorems and examples highlight the key properties and differences between pullback and forward stability. The theory also cohesively retains all the properties of classical asymptotic stability theory in an autonomous environment. These chapters are intended as a fundamental framework from which further research in the various fields of non-autonomous dynamical systems may be extended.

A preliminary version of a Lyapunov-like theory that characterises pullback attraction is created as a tool for examining non-autonomous behaviour in Chapter 5. Its usefulness, however, is at this stage restricted to the converse theorem of asymptotic stability. Chapter 7 introduces the theory of Loci Dynamics: a transformation is made to an alternative dynamical system in which forward asymptotic (classical asymptotic) behaviour characterises pullback attraction to a particular point in the original dynamical system. This has the advantage that certain conventional techniques for a forward analysis may be applied.

The remainder of the thesis, Chapters 4, 6 and Section 7.3, investigates the effects of perturbations and discretisations on non-autonomous dynamical systems known to possess structures that exhibit some form of stability or attraction. Chapter 4 investigates autonomous systems with semi-group attractors that have been non-autonomously perturbed, whilst Chapter 6 observes the effects of discretisation on non-autonomous dynamical systems that exhibit properties of forward asymptotic stability. Chapter 7 explores the same problem of discretisation, but for pullback asymptotically stable systems. The theory of Loci Dynamics is used to analyse the nature of the discretisation, but results directly analogous to those of Chapter 6 are shown to be unachievable. Instead, a case-by-case analysis is provided for specific classes of dynamical systems, for which the results yield a numerical approximation of the pullback attraction in the original continuous dynamical system. The results regarding discretisation provide a non-autonomous extension to the work initiated by A. Stuart and J. Humphries [34, 35] on the numerical approximation of semi-group attractors within autonomous systems. Of particular importance is the effect on the system's asymptotic behaviour over non-finite intervals of discretisation.

Relevance: 10.00%

Publisher:

Abstract:

This thesis is about using appropriate tools in functional analysis and classical analysis to tackle the problem of existence and uniqueness of nonlinear partial differential equations. Since there is no unified strategy for dealing with these equations, one approaches each equation with a method appropriate to its characteristics. The correct setting of the problem in appropriate function spaces is the first important part of the road to the solution; here, we choose the setting of Sobolev spaces. The second essential part is to choose the correct tool for each equation.

In the first part of this thesis (Chapters 3 and 4) we consider a variety of nonlinear hyperbolic partial differential equations with mixed boundary and initial conditions. The methods of compactness and monotonicity are used to prove existence and uniqueness of the solution (Chapter 3). Finding a priori estimates is the main task in this analysis. For some types of nonlinearity, these estimates cannot be easily obtained, and so the two methods cannot be applied directly; in this case, we first linearise the equation using linear recurrence (Chapter 4).

In the second part of the thesis (Chapter 5), by using an appropriate tool in functional analysis (the Sobolev Imbedding Theorem), we improve previous results on a posteriori error estimates for the finite element method of lines applied to nonlinear parabolic equations. These estimates are crucial in the design of adaptive algorithms for the method, and previous analysis relies on what we show to be unnecessary assumptions which limit the application of the algorithms. Our analysis does not require these assumptions.

In the last part of the thesis (Chapter 6), staying with the theme of choosing the most suitable tools, we show that using classical analysis in a proper way is in some cases sufficient to obtain considerable results. We study in this chapter the nonexistence of positive solutions to Laplace's equation with a nonlinear Neumann boundary condition. This problem arises when one wants to study the blow-up at finite time of the solution of the corresponding parabolic problem, which models the heating of a substance by radiation. We generalise known results which were obtained by using more abstract methods.

Relevance: 10.00%

Publisher:

Abstract:

This paper examines the practical construction of k-Lipschitz triangular norms and conorms from empirical data. We apply a characterization of such functions based on k-convex additive generators and translate k-convexity of piecewise linear strictly decreasing functions into a simple set of linear inequalities on their coefficients. This is the basis of a simple linear spline-fitting algorithm, which guarantees the k-Lipschitz property of the resulting triangular norms and conorms.
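
A minimal sketch of the constrained spline-fitting mechanics is given below; a generic "every slope is at most -eps" bound stands in for the paper's k-convexity inequalities, so the example only illustrates how such linear constraints on a strictly decreasing piecewise linear fit can be handed to a bounded least squares solver.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Noisy decreasing data standing in for an empirical additive generator.
rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0.0, 1.0, 100))
y = 1.0 - x**2 + rng.normal(scale=0.02, size=x.size)

knots = np.linspace(0.0, 1.0, 11)
widths = np.diff(knots)
# Value at x = intercept + sum over knot intervals of slope * covered width.
covered = np.clip(x[:, None] - knots[None, :-1], 0.0, widths[None, :])
A = np.column_stack([np.ones_like(x), covered])        # columns: intercept, slopes
lb = np.r_[-np.inf, [-np.inf] * widths.size]
ub = np.r_[np.inf, [-1e-3] * widths.size]              # every slope <= -eps => strictly decreasing

res = lsq_linear(A, y, bounds=(lb, ub))
intercept, slopes = res.x[0], res.x[1:]                # coefficients of the fitted generator
```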

Relevance: 10.00%

Publisher:

Abstract:

In this article we develop a global optimization algorithm for quasiconvex programming where the objective function is a Lipschitz function which may have "flat parts". We adapt the Extended Cutting Angle method to quasiconvex functions, which significantly reduces the number of iterations and objective function evaluations, and consequently the total computing time. Applications of such an algorithm to mathematical programming problems in which the objective function is derived from economic systems and to location problems are described. Computational results are presented.