387 results for resolvent convergence
Abstract:
A number of Game Strategies (GS) have been developed in past decades. They have been used in the fields of economics, engineering, computer science and biology due to their efficiency in solving design optimization problems. In addition, research in multi-objective (MO) and multidisciplinary design optimization (MDO) has focused on developing robust and efficient optimization methods that produce a set of high-quality solutions at low computational cost. In this paper, two optimization techniques are considered: the first uses multi-fidelity hierarchical Pareto optimality; the second combines two Game Strategies, Nash equilibrium and Pareto optimality. The paper shows how Game Strategies can be hybridised and coupled to Multi-Objective Evolutionary Algorithms (MOEA) to accelerate convergence and to produce a set of high-quality solutions. Numerical results obtained from both optimization methods are compared in terms of computational expense and model quality. The benefits of using Hybrid-Game Strategies are clearly demonstrated.
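Pareto optimality, one of the two Game Strategies combined above, reduces to a dominance test over objective vectors. A minimal sketch of that test (not taken from the paper; the function names are illustrative, and minimization of all objectives is assumed):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# toy 2-objective population: (3, 4) and (5, 5) are dominated
points = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
front = pareto_front(points)
```

In an MOEA this test typically drives selection pressure: non-dominated individuals are ranked ahead of dominated ones in each generation.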
Abstract:
We study sample-based estimates of the expectation of the function produced by the empirical minimization algorithm. We investigate the extent to which one can estimate the rate of convergence of the empirical minimizer in a data dependent manner. We establish three main results. First, we provide an algorithm that upper bounds the expectation of the empirical minimizer in a completely data-dependent manner. This bound is based on a structural result due to Bartlett and Mendelson, which relates expectations to sample averages. Second, we show that these structural upper bounds can be loose, compared to previous bounds. In particular, we demonstrate a class for which the expectation of the empirical minimizer decreases as O(1/n) for sample size n, although the upper bound based on structural properties is Ω(1). Third, we show that this looseness of the bound is inevitable: we present an example that shows that a sharp bound cannot be universally recovered from empirical data.
Abstract:
In semisupervised learning (SSL), a predictive model is learned from a collection of labeled data and a typically much larger collection of unlabeled data. This paper presents a framework called multi-view point cloud regularization (MVPCR), which unifies and generalizes several semisupervised kernel methods that are based on data-dependent regularization in reproducing kernel Hilbert spaces (RKHSs). Special cases of MVPCR include coregularized least squares (CoRLS), manifold regularization (MR), and graph-based SSL. An accompanying theorem shows how to reduce any MVPCR problem to standard supervised learning with a new multi-view kernel.
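Manifold regularization, one of the special cases listed above, admits a closed-form kernel solution. A minimal sketch of Laplacian-regularized least squares under simplifying assumptions (the same RBF doubles as kernel and graph affinity, the regularization constants are folded together, and `lap_rls` with its parameters is illustrative, not the paper's formulation):

```python
import numpy as np

def rbf(Xa, Xb, s=0.5):
    """Gaussian RBF similarities between two sets of 1-D points."""
    d2 = (Xa[:, None] - Xb[None, :]) ** 2
    return np.exp(-d2 / (2 * s * s))

def lap_rls(X, y_l, n_l, lam_a=1e-3, lam_i=1e-2):
    """Kernel Laplacian-regularized least squares; labeled points come first."""
    n = len(X)
    K = rbf(X, X)                       # kernel matrix over all points
    W = rbf(X, X)                       # graph affinities (same RBF here)
    L = np.diag(W.sum(axis=1)) - W      # unnormalized graph Laplacian
    J = np.zeros((n, n)); J[:n_l, :n_l] = np.eye(n_l)   # selects labeled rows
    y = np.zeros(n); y[:n_l] = y_l
    # closed-form dual coefficients for squared loss + RKHS + Laplacian penalty
    alpha = np.linalg.solve(J @ K + lam_a * n_l * np.eye(n) + lam_i * L @ K, y)
    return K @ alpha                    # f(x_i) evaluated at every point

# two 1-D clusters with one label each; the remaining points are unlabeled
X = np.array([0.0, 2.0, 0.1, 0.2, 2.1, 2.2])
f = lap_rls(X, y_l=[-1.0, 1.0], n_l=2)
```

The unlabeled points inherit the sign of the label in their cluster, which is exactly the data-dependent regularization effect the framework generalizes.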
Abstract:
Log-linear and maximum-margin models are two commonly-used methods in supervised machine learning, and are frequently used in structured prediction problems. Efficient learning of parameters in these models is therefore an important problem, and becomes a key factor when learning from very large data sets. This paper describes exponentiated gradient (EG) algorithms for training such models, where EG updates are applied to the convex dual of either the log-linear or max-margin objective function; the dual in both the log-linear and max-margin cases corresponds to minimizing a convex function with simplex constraints. We study both batch and online variants of the algorithm, and provide rates of convergence for both cases. In the max-margin case, O(1/ε) EG updates are required to reach a given accuracy ε in the dual; in contrast, for log-linear models only O(log(1/ε)) updates are required. For both the max-margin and log-linear cases, our bounds suggest that the online EG algorithm requires a factor of n less computation to reach a desired accuracy than the batch EG algorithm, where n is the number of training examples. Our experiments confirm that the online algorithms are much faster than the batch algorithms in practice. We describe how the EG updates factor in a convenient way for structured prediction problems, allowing the algorithms to be efficiently applied to problems such as sequence learning or natural language parsing. We perform extensive evaluation of the algorithms, comparing them to L-BFGS and stochastic gradient descent for log-linear models, and to SVM-Struct for max-margin models. The algorithms are applied to a multi-class problem as well as to a more complex large-scale parsing task. In all these settings, the EG algorithms presented here outperform the other methods.
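The basic EG update multiplies each dual coordinate by an exponential of its gradient and renormalizes back onto the simplex. A toy sketch of that update on an illustrative objective (not the log-linear or max-margin dual from the paper):

```python
import math

def eg_step(w, grad, eta):
    """One exponentiated-gradient update with simplex normalization."""
    w = [wi * math.exp(-eta * gi) for wi, gi in zip(w, grad)]
    z = sum(w)
    return [wi / z for wi in w]

# toy convex objective over the simplex: f(w) = 0.5 * ||w - p||^2,
# whose constrained minimizer is p itself (p already lies on the simplex)
p = [0.5, 0.3, 0.2]
w = [1 / 3, 1 / 3, 1 / 3]
for _ in range(500):
    grad = [wi - pi for wi, pi in zip(w, p)]
    w = eg_step(w, grad, eta=0.5)
```

Because the update is multiplicative, the iterates stay strictly inside the simplex, so the constraints in the dual never need explicit projection.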
Abstract:
Background: Achieving health equity has been identified as a major challenge, both internationally and within Australia. Inequalities in cancer outcomes are well documented, and must be quantified before they can be addressed. One method of portraying geographical variation in data uses maps. Recently we have produced thematic maps showing the geographical variation in cancer incidence and survival across Queensland, Australia. This article documents the decisions and rationale used in producing these maps, with the aim of assisting others in producing chronic disease atlases. Methods: Bayesian hierarchical models were used to produce the estimates. Justification for the cancers chosen, geographical areas used, modelling method, outcome measures mapped, production of the adjacency matrix, assessment of convergence, sensitivity analyses performed and determination of significant geographical variation is provided. Conclusions: Although careful consideration of many issues is required, chronic disease atlases are a useful tool for assessing and quantifying geographical inequalities. In addition, they help focus research efforts to investigate why the observed inequalities exist, which in turn informs advocacy, policy, support and education programs designed to reduce these inequalities.
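One ingredient listed in the Methods above, the adjacency matrix, simply records which geographical areas share a border; spatial smoothing priors in the Bayesian hierarchical model are built on it. A hypothetical sketch (the area names and the `neighbors` mapping are invented for illustration):

```python
def adjacency_matrix(neighbors):
    """Symmetric binary adjacency matrix from an area -> neighbours mapping."""
    areas = sorted(neighbors)
    idx = {a: i for i, a in enumerate(areas)}
    n = len(areas)
    A = [[0] * n for _ in range(n)]
    for a, nbrs in neighbors.items():
        for b in nbrs:
            A[idx[a]][idx[b]] = A[idx[b]][idx[a]] = 1  # enforce symmetry
    return areas, A

# three areas in a row: A-B-C (A and C do not touch)
areas, A = adjacency_matrix({"A": ["B"], "B": ["A", "C"], "C": ["B"]})
```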
Abstract:
The uniformization method (also known as randomization) is a numerically stable algorithm for computing transient distributions of a continuous time Markov chain. When the solution is needed after a long run or when the convergence is slow, the uniformization method involves a large number of matrix-vector products. Despite this, the method remains very popular due to its ease of implementation and its reliability in many practical circumstances. Because calculating the matrix-vector product is the most time-consuming part of the method, overall efficiency in solving large-scale problems can be significantly enhanced if the matrix-vector product is made more economical. In this paper, we incorporate a new relaxation strategy into the uniformization method to compute the matrix-vector products only approximately. We analyze the error introduced by these inexact matrix-vector products and discuss strategies for refining the accuracy of the relaxation while reducing the execution cost. Numerical experiments drawn from computer systems and biological systems are given to show that significant computational savings are achieved in practical applications.
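For reference, the basic (exact) uniformization iteration that the paper starts from can be sketched as follows; the relaxation strategy with inexact matrix-vector products is the paper's contribution and is not shown here. Variable names are illustrative:

```python
import math
import numpy as np

def uniformization(Q, pi0, t, tol=1e-10):
    """Transient distribution pi(t) of a CTMC with generator Q via
    uniformization: pi(t) = sum_k Poisson(lam*t; k) * pi0 @ P^k."""
    lam = max(-Q[i][i] for i in range(len(Q)))   # uniformization rate
    P = np.eye(len(Q)) + np.asarray(Q) / lam     # DTMC transition matrix
    v = np.asarray(pi0, float)
    poisson = math.exp(-lam * t)                 # Poisson pmf at k = 0
    result = poisson * v
    k, acc = 0, poisson
    while 1 - acc > tol:                         # truncate when tail < tol
        k += 1
        v = v @ P                                # the costly vector-matrix product
        poisson *= lam * t / k
        acc += poisson
        result = result + poisson * v
    return result

# 2-state chain; for large t the result approaches the stationary (2/3, 1/3)
pi_t = uniformization([[-1.0, 1.0], [2.0, -2.0]], [1.0, 0.0], t=10.0)
```

The loop makes one vector-matrix product per Poisson term, which is why making that product cheaper (the paper's relaxation) dominates the overall cost for large `lam * t`.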
Abstract:
Recently, many new applications in engineering and science have come to be governed by fractional partial differential equations (FPDEs). Unlike normal partial differential equations (PDEs), an FPDE has a fractional differential order, which leads to new challenges for numerical simulation, because most existing numerical techniques were developed for PDEs with an integer differential order. The current dominant numerical method for FPDEs is the Finite Difference Method (FDM), which usually has difficulty handling complex problem domains and irregular nodal distributions. This paper aims to develop an implicit meshless approach based on the moving least squares (MLS) approximation for numerical simulation of the fractional advection-diffusion equation (FADE), a typical FPDE. The discrete system of equations is obtained using the MLS meshless shape functions and the meshless strong forms. The stability and convergence of the time discretization of this approach are then discussed and theoretically proven. Several numerical examples with different problem domains and different nodal distributions are used to validate and investigate the accuracy and efficiency of the newly developed meshless formulation. It is concluded that the present meshless formulation is very effective for the modeling and simulation of the FADE.
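As context for the FDM baseline mentioned above: the Grünwald–Letnikov weights commonly used to discretize a fractional derivative of order α obey a simple recurrence, with the derivative at step n approximated as Δt^(-α) Σ_k g_k u^(n-k). A minimal sketch of the weights (standard textbook material, not the paper's MLS formulation):

```python
def gl_weights(alpha, n):
    """First n Grünwald-Letnikov weights g_k = (-1)^k * C(alpha, k),
    computed by the recurrence g_k = g_{k-1} * (1 - (alpha + 1) / k)."""
    g = [1.0]
    for k in range(1, n):
        g.append(g[-1] * (1.0 - (alpha + 1.0) / k))
    return g

g_half = gl_weights(0.5, 4)   # fractional order alpha = 0.5
g_one = gl_weights(1.0, 4)    # alpha = 1 recovers the first difference [1, -1]
```

For 0 < α < 1, all weights after g_0 = 1 are negative and decay slowly, which is why fractional schemes carry the full solution history and are costly compared with integer-order stencils.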
Abstract:
This paper aims to develop an implicit meshless approach based on radial basis functions (RBF) for numerical simulation of time fractional diffusion equations. The meshless RBF interpolation is first briefly reviewed. The discrete equations for the two-dimensional time fractional diffusion equation (FDE) are obtained using the meshless RBF shape functions and the strong forms of the time FDE. The stability and convergence of this meshless approach are discussed and theoretically proven. Numerical examples with different problem domains and different nodal distributions are studied to validate and investigate the accuracy and efficiency of the newly developed meshless approach. It is proven that the present meshless formulation is very effective for the modeling and simulation of fractional differential equations.
Abstract:
In this paper, a variable-order nonlinear cable equation is considered. A numerical method with first-order temporal accuracy and fourth-order spatial accuracy is proposed. The convergence and stability of the numerical method are analyzed by Fourier analysis. We also propose an improved numerical method with second-order temporal accuracy and fourth-order spatial accuracy. Finally, the results of a numerical example support the theoretical analysis.
Abstract:
In this research we examined, by means of case studies, the mechanisms by which relationships can be managed and by which communication and cooperation can be enhanced in developing sustainable supply chains. The research was predicated on the contention that the development of a sustainable supply chain depends, in part, on the transfer of knowledge and capabilities from the larger players in the supply chain. A sustainable supply chain requires proactive relationship management and the development of an appropriate organisational culture and trust. By legitimising individuals' expectations of the type of culture which is appropriate to their company and empowering employees to address mismatches that may occur, a situation can be created whereby the collaborating organisations develop their competences symbiotically and so facilitate a sustainable supply chain. Effective supply chain management enhances organisation performance and competitiveness through the management of operations across organisational boundaries. Relational contracting approaches facilitate the exchange of information and knowledge and build capacity in the supply chain, thus enhancing its sustainability. Relationship management also provides the conditions necessary for the development of collaborative and cooperative relationships. However, subcontractors and suppliers are often not empowered to attend project meetings or to have direct communication with project-based staff. With this being a common phenomenon in the construction industry, one might ask: what are the barriers to the implementation of relationship management through the supply chain? In other words, the problem addressed in this research is the engagement of the supply chain through relationship management.
Abstract:
We consider a stochastic regularization method for solving the backward Cauchy problem in Banach spaces. An order of convergence is obtained on sourcewise representative elements.
Abstract:
This thesis explores the proposition that growth and development in the screen and creative industries is not confined to the major capital cities. Lifestyle considerations, combined with advances in digital technology, convergence and greater access to broadband, are altering requirements for geographic location, and creative workers are being drawn away from the big metropolises to certain regional areas. Regional screen industry enclaves are emerging outside of London, in the Highlands and Islands of Scotland, in Nova Scotia in Canada and in New Zealand. In the Australian context, the proposition is tested in an area regarded as a 'special case' in creative industry expansion: the Northern Rivers region of NSW. A key feature of the 'specialness' of this region is the large number of experienced, credited producers who live and operate their businesses within the region. The development of screen and creative industries in the Northern Rivers over the decade 2000–2010 has implications for regional regeneration and offers new insights into the rapidly changing screen industry landscape. This development also has implications for creative industry discourse, especially the dominance of the urban in creative industries thought. The research is pioneering in a number of ways. Building on the work conducted for my Masters thesis in 2000, a second study was conducted during the research phase, adapting creative industries theory and mapping methods, which have been largely city- and nation-centric, and applying them to a regional context. The study adopted an action research approach as an industry development strategy for screen industries, while at the same time developing fine-grained, ground-up methods for collecting primary quantitative data on the size and scope of the creative industries. In accordance with the action research framework, the researcher also acted in the dual roles of industry activist and screen industry producer in the region.
The central focus of the research has been both to document and contribute to the growth and development of screen and creative industries over the past decade in the Northern Rivers region. These interventions, along with policy developments at both a local and national level, and broader global shifts, have had the effect of repositioning the sector from a marginal one to a priority area considered integral to the future economic and cultural life of the region. The research includes a detailed mapping study undertaken in 2005 with comparisons to an earlier 2000 study and to ABS data for 2001 and 2006 to reveal growth trends. It also includes two case studies of projects that developed from idea to production and completion in the region during the decade in question. The studies reveal the drivers, impediments and policy implications for sustaining the development of screen industries in a regional area. A major finding of the research was the large and increasing number of experienced producers who operate within the region and the leadership role they play in driving the development of the emerging local industry. The two case studies demonstrate the impact of policy decisions on local screen industry producers and their enterprises. A brief overview of research in other regional areas is presented, including two international examples, and what they reveal about regional regeneration. Implications are drawn for creative industries discourse and regional development policy challenges for the future.
Abstract:
Employees' inability to balance work and non-work related responsibilities has resulted in an increase in stress related illnesses. Historically, research into the relationship between work and non-work has primarily focused on work/family conflict, predominantly investigating the impact of this conflict on parents, usually mothers. To date, research has not sufficiently examined the management practices that enable all 'individuals' to achieve a 'balance' between work and life. This study explores the relationship between contemporary life friendly HR management policies and work/life balance for individuals, as well as the effect of managerial support for the policies. Self-report questionnaire data from 1,241 men and women is analysed and discussed to enable organizations to consider the use of life friendly policies and thus create a convergence between the well-being of employees and the effectiveness of the organization.
Abstract:
Damage detection in structures has become increasingly important in recent years. While a number of damage detection and localization methods have been proposed, few attempts have been made to explore structural damage with frequency response functions (FRFs). This paper illustrates the damage identification and condition assessment of a beam structure using a new FRF-based damage index and Artificial Neural Networks (ANNs). In practice, using all available FRF data as input to artificial neural networks makes training and convergence impossible. Therefore, a data reduction technique, Principal Component Analysis (PCA), is introduced into the algorithm. In the proposed procedure, a large set of FRFs is divided into sub-sets in order to find the damage indices for different frequency points of different damage scenarios. The basic idea of this method is to establish features of the damaged structure using FRFs from different measurement points of different sub-sets of the intact structure. Using these features, damage indices of different damage cases of the structure are identified after reconstructing the available FRF data using PCA. The obtained damage indices corresponding to different damage locations and severities are introduced as input variables to the developed artificial neural networks. Finally, the effectiveness of the proposed method is illustrated and validated using a finite element model of a beam structure. The results show that the PCA-based damage index is suitable and effective for structural damage detection and condition assessment of building structures.
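The PCA reduction step described above can be sketched via the SVD of centered FRF data (a generic illustration; the function and variable names are not from the paper):

```python
import numpy as np

def pca_reduce(frfs, n_components):
    """Project FRF magnitude vectors onto their leading principal components.
    frfs: one row per measurement, one column per frequency line."""
    X = np.asarray(frfs, float)
    Xc = X - X.mean(axis=0)                  # center each frequency line
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T          # reduced features, one row per FRF

# toy rank-1 "FRF" data: five measurements that are scalings of one shape,
# so after centering all variance lies along a single principal component
X = np.outer([1.0, 2.0, 3.0, 4.0, 5.0], [1.0, 0.5, 0.25, 0.125])
Z = pca_reduce(X, 2)
```

Compressing the FRFs this way is what makes the subsequent ANN input small enough for training to converge, per the procedure above.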