45 results for Lagrangean Heuristics


Relevance:

10.00%

Publisher:

Abstract:

This work is motivated by two important trends in consumer computing: (i) the growing pervasiveness of mobile computing devices, and (ii) users' desire for increasingly complex but readily acquired and manipulated information content. Specifically, we develop and describe a system for creating a 3D model of an object using only a standard mobile device such as a smartphone. Our approach applies the structured light projection methodology and exploits multiple image input, such as frames from a video sequence. In comparison with previous work, a significant further challenge addressed here is that of lower quality input data and limited hardware (processing power and memory, camera and projector quality). Novelties include: (i) a comparison of projection pattern detection approaches in the context of a mobile environment, in which a robust method combining colour detection and a phase congruency descriptor is evaluated, (ii) a model for single view reconstruction which exploits epipolar, coplanarity and topological constraints, (iii) the use of mobile device sensor data in the iterative closest point algorithm used to register multiple partial 3D reconstructions, and (iv) two heuristics for determining the order in which buffered single view based reconstructions are merged. Our experiments demonstrate that visually appealing results are obtained quickly and without requiring specialist knowledge or expertise from the user.
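
As an illustration of point (iv), the sketch below shows one plausible merge-ordering heuristic: greedily merging next the buffered single-view reconstruction whose sensor-estimated camera position is closest to the last merged view. This is an illustrative assumption only; the paper's two heuristics are not specified in the abstract, and all names and data here are hypothetical.

```python
import numpy as np

def merge_order_by_pose(poses):
    """Greedy merge-ordering heuristic: start from an arbitrary buffered
    reconstruction and repeatedly append the one whose sensor-estimated
    camera position is closest to the last merged view.

    poses : (n, 3) array of device-sensor position estimates, one per
            buffered single-view reconstruction.
    Returns a list of indices giving a merge order.
    """
    n = len(poses)
    remaining = set(range(1, n))
    order = [0]
    while remaining:
        last = poses[order[-1]]
        # pick the unmerged view with the smallest pose distance to the last one
        nxt = min(remaining, key=lambda i: np.linalg.norm(poses[i] - last))
        order.append(nxt)
        remaining.remove(nxt)
    return order

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    poses = rng.normal(size=(6, 3))          # hypothetical sensor pose estimates
    print(merge_order_by_pose(poses))
```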

Relevance:

10.00%

Publisher:

Abstract:

This paper proposes a practical and cost-effective approach to constructing a fully distributed roadside communication infrastructure that facilitates localized content dissemination to vehicles in urban areas. The proposed infrastructure is composed of distributed lightweight, low-cost devices called roadside buffers (RSBs), where each RSB has limited buffer storage and can wirelessly transmit cached contents to fast-moving vehicles. To enable the distributed RSBs to work toward globally optimal performance (e.g., minimal average file download delays), we propose a fully distributed algorithm to optimally determine the content replication strategy at RSBs. Specifically, we first develop a generic analytical model to evaluate the download delay of files, given the probability density of file distribution at RSBs. Then, we formulate the RSB content replication process as an optimization problem and devise a fully distributed content replication scheme accordingly, which enables vehicles to intelligently recommend desirable content files to RSBs. The proposed infrastructure is designed to optimize the global network utility, which accounts for the integrated download experience of users and the download demands of files. Using extensive simulations, we validate the effectiveness of the proposed infrastructure and show that the proposed distributed protocol approaches the optimal performance and significantly outperforms traditional heuristics.
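
To make the delay-modelling idea concrete, here is a minimal toy sketch, assuming vehicles pass RSBs sequentially and each RSB holds a requested file independently with a given caching probability, so the number of RSBs passed until a hit is geometric. The paper's analytical model is more general; all names and numbers below are hypothetical.

```python
def expected_download_delay(p_cache, inter_rsb_time):
    """Toy delay model: a vehicle passes RSBs sequentially and each RSB
    independently holds the requested file with probability p_cache.
    The number of RSBs visited until a hit is geometric, so the expected
    delay is inter_rsb_time / p_cache (illustrative only)."""
    if p_cache <= 0:
        return float("inf")
    return inter_rsb_time / p_cache

def average_delay(file_popularity, replication, inter_rsb_time=30.0):
    """Average delay over files, weighted by request popularity.
    file_popularity, replication: dicts keyed by file id."""
    return sum(pop * expected_download_delay(replication[f], inter_rsb_time)
               for f, pop in file_popularity.items())

if __name__ == "__main__":
    popularity  = {"a": 0.6, "b": 0.3, "c": 0.1}      # hypothetical request shares
    replication = {"a": 0.5, "b": 0.3, "c": 0.2}      # per-RSB caching probabilities
    print(round(average_delay(popularity, replication), 1), "seconds")
```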

Relevance:

10.00%

Publisher:

Abstract:

It is now widely accepted that firms should direct more effort into retaining existing customers than into attracting new ones. To achieve this, customers likely to defect need to be identified so that they can be approached with tailored incentives or other bespoke retention offers. Such strategies call for predictive models capable of identifying customers with higher probabilities of defecting in the relatively near future. A review of the extant literature on customer churn models reveals that although several predictive models have been developed to model churn in B2C contexts, the B2B context in general, and non-contractual settings in particular, have received less attention in this regard. Therefore, to address these gaps, this study proposes a data-mining approach to model non-contractual customer churn in B2B contexts. Several modeling techniques are compared in terms of their ability to predict true churners. The best performing data-mining technique (boosting) is then applied to develop a profit-maximizing retention campaign. Results confirm that the model-driven approach to churn prediction and to developing retention strategies outperforms commonly used managerial heuristics.
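
As a rough illustration of the model-comparison step described above, the sketch below fits a boosting classifier and a logistic regression baseline on synthetic churn-like data and compares them by AUC using scikit-learn. The feature set, class balance and evaluation protocol are assumptions, not the study's actual data or pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a B2B transaction-derived feature matrix; the label
# marks customers assumed inactive (churned) in a holdout period.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.85], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "boosting": GradientBoostingClassifier(random_state=1),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```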

Relevance:

10.00%

Publisher:

Abstract:

Cuckoo search (CS) is a relatively new meta-heuristic that has proven its strength in solving continuous optimization problems. This paper applies cuckoo search to the class of sequencing problems by hybridizing it with a variable neighborhood descent local search to enhance the quality of the obtained solutions. The Lévy flight operator proposed in the original CS is modified to address the discrete nature of scheduling problems. Two well-known problems are used to demonstrate the effectiveness of the proposed hybrid CS approach. The first is the NP-hard single objective problem of minimizing the total weighted tardiness on a single machine, and the second is the multiobjective problem of minimizing the mean flowtime and the maximum tardiness Tmax on a single machine. For the first problem, computational results show that the hybrid CS is able to find the optimal solutions for all benchmark test instances with 40, 50, and 100 jobs and for most instances with 150, 200, 250, and 300 jobs. For the second problem, the hybrid CS generated solutions on or very close to the exact Pareto fronts of test instances with 10, 20, 30, and 40 jobs. In general, the results reveal that the hybrid CS is an adequate and robust method for tackling single and multiobjective scheduling problems.
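
A minimal sketch of the hybrid idea for the first problem, assuming a heavy-tailed random swap count as the discrete stand-in for the Lévy flight and a swap-based variable neighbourhood descent; the paper's actual operators, neighbourhoods and parameters are not given in the abstract, and the instance data below are made up.

```python
import random

def weighted_tardiness(seq, p, d, w):
    """Total weighted tardiness of a job sequence on a single machine."""
    t, total = 0, 0
    for j in seq:
        t += p[j]
        total += w[j] * max(0, t - d[j])
    return total

def levy_step(seq, beta=1.5):
    """Discrete stand-in for the Levy flight: a heavy-tailed random number
    of pairwise swaps (assumption; the paper defines its own operator)."""
    s = seq[:]
    k = max(1, int(random.paretovariate(beta)))
    for _ in range(min(k, len(s))):
        i, j = random.sample(range(len(s)), 2)
        s[i], s[j] = s[j], s[i]
    return s

def vnd(seq, p, d, w):
    """Simple descent over the swap neighbourhood until no improvement."""
    best, best_val = seq[:], weighted_tardiness(seq, p, d, w)
    improved = True
    while improved:
        improved = False
        for i in range(len(best)):
            for j in range(i + 1, len(best)):
                cand = best[:]
                cand[i], cand[j] = cand[j], cand[i]      # swap move
                val = weighted_tardiness(cand, p, d, w)
                if val < best_val:
                    best, best_val, improved = cand, val, True
    return best, best_val

def hybrid_cs(p, d, w, iters=200):
    """Single-nest simplification: perturb the incumbent with a Levy-like
    step, refine with VND, and keep the result if it improves."""
    best, best_val = vnd(random.sample(range(len(p)), len(p)), p, d, w)
    for _ in range(iters):
        cand, cand_val = vnd(levy_step(best), p, d, w)
        if cand_val < best_val:
            best, best_val = cand, cand_val
    return best, best_val

if __name__ == "__main__":
    random.seed(0)
    p = [4, 2, 6, 3, 5]; d = [6, 4, 15, 8, 10]; w = [2, 1, 3, 1, 2]
    print(hybrid_cs(p, d, w))
```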

Relevance:

10.00%

Publisher:

Abstract:

Exploratory factor analysis (hereafter, factor analysis) is a complex statistical method that is integral to many fields of research. Using factor analysis requires researchers to make several decisions, each of which affects the solutions generated. In this paper, we focus on five major decisions that are made in conducting factor analysis: (i) establishing how large the sample needs to be, (ii) choosing between factor analysis and principal components analysis, (iii) determining the number of factors to retain, (iv) selecting a method of data extraction, and (v) deciding upon the methods of factor rotation. The purpose of this paper is threefold: (i) to review the literature with respect to these five decisions, (ii) to assess current practices in nursing research, and (iii) to offer recommendations for future use. The literature reviews illustrate that factor analysis remains a dynamic field of study, with recent research having practical implications for those who use this statistical method. The assessment was conducted on 54 factor analysis (and principal components analysis) solutions presented in the results sections of 28 papers published in the 2012 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. The main findings from the assessment were that researchers commonly used (a) participants-to-items ratios for determining sample sizes (used for 43% of solutions), (b) principal components analysis (61%) rather than factor analysis (39%), (c) the eigenvalues-greater-than-one rule and scree tests to decide upon the number of factors/components to retain (61% and 46%, respectively), (d) principal components analysis and unweighted least squares as methods of data extraction (61% and 19%, respectively), and (e) the Varimax method of rotation (44%). In general, well-established but outdated heuristics and practices informed decision making with respect to the performance of factor analysis in nursing studies. Based on the findings from factor analysis research, it seems likely that the use of such methods may have had a material, adverse effect on the solutions generated. We offer recommendations for future practice with respect to each of the five decisions discussed in this paper.
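
For concreteness, the sketch below applies the eigenvalues-greater-than-one retention heuristic mentioned in finding (c) to a synthetic two-factor data set; the data and all names are illustrative only and do not reflect the reviewed studies.

```python
import numpy as np

def kaiser_rule(data):
    """Count eigenvalues of the correlation matrix greater than one -- the
    'eigenvalues greater than one' retention heuristic discussed above.
    data: (n_observations, n_items) array."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)
    return int(np.sum(eigvals > 1.0)), np.sort(eigvals)[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    latent = rng.normal(size=(300, 2))                 # two hypothetical factors
    loadings = rng.normal(size=(2, 8))
    items = latent @ loadings + rng.normal(scale=1.0, size=(300, 8))
    k, ev = kaiser_rule(items)
    print("factors retained by the rule:", k)
    print("eigenvalues:", np.round(ev, 2))
```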

Relevance:

10.00%

Publisher:

Abstract:

The success of cloud computing has led an increasing number of real-time applications, such as signal processing and weather forecasting, to run in the cloud. Meanwhile, scheduling of real-time tasks plays an essential role in helping a cloud provider maintain its quality of service and enhance the system's performance. In this paper, we devise a novel agent-based scheduling mechanism in a cloud computing environment to allocate real-time tasks and dynamically provision resources. In contrast to traditional contract net protocols, we employ a bidirectional announcement-bidding mechanism in which the collaborative process consists of three phases, i.e., a basic matching phase, a forward announcement-bidding phase and a backward announcement-bidding phase. Moreover, elasticity is explicitly considered during scheduling by dynamically adding virtual machines to improve schedulability. Furthermore, we design calculation rules for the bidding values in both the forward and backward announcement-bidding phases, and two heuristics for selecting contractors. On the basis of the bidirectional announcement-bidding mechanism, we propose an agent-based dynamic scheduling algorithm named ANGEL for real-time, independent and aperiodic tasks in clouds. Extensive experiments are conducted on the CloudSim platform by injecting random synthetic workloads and workloads from the latest version of the Google cloud tracelogs to evaluate the performance of ANGEL. The experimental results indicate that ANGEL can efficiently solve the real-time task scheduling problem in virtualized clouds.
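
The sketch below illustrates one simplified forward announcement-bidding round in the spirit described above: tasks are announced, virtual machines bid their estimated finish times, and a contractor is selected by an earliest-feasible-finish heuristic. The classes, bid rule and contractor-selection rule are assumptions for illustration, not ANGEL's actual calculation rules.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VM:
    name: str
    speed: float                                        # work units per second
    queue: List[float] = field(default_factory=list)    # accepted task lengths

    def bid(self, length, deadline):
        """Return an estimated finish time, or None if the deadline is missed."""
        finish = (sum(self.queue) + length) / self.speed
        return finish if finish <= deadline else None

def announce_and_award(tasks, vms):
    """Toy forward announcement-bidding round: each task is announced, every
    VM bids its estimated finish time, and the task is awarded to the VM with
    the earliest feasible finish (one simple contractor-selection heuristic)."""
    schedule = {}
    for name, length, deadline in tasks:
        bids = [(vm.bid(length, deadline), vm) for vm in vms]
        feasible = [(b, vm) for b, vm in bids if b is not None]
        if not feasible:
            schedule[name] = None        # would trigger dynamic VM provisioning
            continue
        _, winner = min(feasible, key=lambda x: x[0])
        winner.queue.append(length)
        schedule[name] = winner.name
    return schedule

if __name__ == "__main__":
    vms = [VM("vm1", 2.0), VM("vm2", 1.0)]
    tasks = [("t1", 4.0, 3.0), ("t2", 2.0, 4.0), ("t3", 6.0, 2.0)]
    print(announce_and_award(tasks, vms))
```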

Relevance:

10.00%

Publisher:

Abstract:

In this paper, a discrete state transition algorithm is introduced to solve a multiobjective single machine job shop scheduling problem. In the proposed approach, a non-dominated sorting technique is used to select the best candidates from a candidate state set, and a Pareto archive strategy is adopted to keep all the non-dominated solutions. Compared with enumeration and other heuristics, experimental results demonstrate the effectiveness of the multiobjective state transition algorithm.
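
As a concrete reference for the selection step, the sketch below implements a plain non-dominated filter over candidate objective vectors (minimisation); the candidate values are hypothetical and the state transition operators themselves are omitted.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(candidates):
    """Return the candidates whose objective vectors are not dominated by
    any other candidate -- the selection step described above."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

if __name__ == "__main__":
    # hypothetical (total tardiness, makespan) pairs for candidate schedules
    cands = [(10, 7), (8, 9), (12, 6), (9, 8), (11, 9)]
    print(non_dominated(cands))
```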

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we propose and study a unified mixed-integer programming (MIP) model that simultaneously optimizes fluence weights and multi-leaf collimator (MLC) apertures in the treatment planning optimization of VMAT, Tomotherapy, and CyberKnife. The contribution of our model is threefold: (i) Our model optimizes the fluence and MLC apertures simultaneously for a given set of control points. (ii) Our model can incorporate all volume limits or dose upper bounds for organs at risk (OAR) and dose lower bound limits for planning target volumes (PTV) as hard constraints, but it can also relax either of these constraint sets in a Lagrangian fashion and keep the other set as hard constraints. (iii) For faster solutions, we propose several heuristic methods based on the MIP model, as well as a meta-heuristic approach. The meta-heuristic is very efficient in practice, being able to generate dose- and machinery-feasible solutions for problem instances of clinical scale, e.g., obtaining feasible treatment plans for cases with 180 control points, 6,750 sample voxels and 18,000 beamlets in 470 seconds, or cases with 72 control points, 8,000 sample voxels and 28,800 beamlets in 352 seconds. With discretization and down-sampling of voxels, our method is capable of tackling a treatment field of 8,000–64,000 cm³, depending on the ratio of critical structure versus unspecified tissues.
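
The toy sketch below illustrates contribution (ii) on a continuous stand-in problem: the PTV dose lower bounds are dualized with Lagrange multipliers updated by projected subgradient steps, while the OAR upper bound stays a hard constraint. The binary MLC aperture variables of the actual MIP are omitted, and all matrices and bounds are assumed for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Assumed dose-influence matrices mapping beamlet weights w to voxel doses.
D_ptv = np.array([[1.0, 0.4, 0.2],
                  [0.3, 1.0, 0.5]])
D_oar = np.array([[0.6, 0.2, 0.8]])
L = np.array([2.0, 2.0])        # PTV dose lower bounds (relaxed in Lagrangian fashion)
U = np.array([1.5])             # OAR dose upper bound (kept as a hard constraint)
c = np.ones(3)                  # proxy objective: total fluence

lam = np.zeros(len(L))
for it in range(50):
    # Inner LP: min c.w - lam.(D_ptv w - L)  s.t.  D_oar w <= U, 0 <= w <= 5
    res = linprog(c - D_ptv.T @ lam, A_ub=D_oar, b_ub=U,
                  bounds=[(0, 5)] * 3, method="highs")
    w = res.x
    subgrad = L - D_ptv @ w                  # violation of the relaxed lower bounds
    lam = np.maximum(0.0, lam + (1.0 / (it + 1)) * subgrad)

print("beamlet weights:", np.round(w, 3))
print("PTV dose:", np.round(D_ptv @ w, 3), "OAR dose:", np.round(D_oar @ w, 3))
```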

Relevance:

10.00%

Publisher:

Abstract:

Previous research has shown that front-of-pack labels (FoPLs) can assist people to make healthier food choices if they are easy to understand and people are motivated to use them. There is some evidence that FoPLs providing an assessment of a food's health value (evaluative FoPLs) are easier to use than those providing only numerical information on nutrients (reductive FoPLs). Recently, a new evaluative FoPL (the Health Star Rating (HSR)) has been introduced in Australia and New Zealand. The HSR features a summary indicator, differentiating it from many other FoPLs being used around the world. The aim of this study was to understand how consumers of all ages use and make sense of reductive FoPLs and evaluative FoPLs, including evaluative FoPLs with and without summary indicators. Ten focus groups were conducted in Perth, Western Australia, with adults (n = 50) and children aged 10–17 years (n = 35) to explore reactions to one reductive FoPL (the Daily Intake Guide), an existing evaluative FoPL (multiple traffic lights), and a new evaluative FoPL (the HSR). Participants preferred the evaluative FoPLs over the reductive FoPL, with the strongest preference being for the FoPL with the summary indicator (HSR). Discussions revealed the cognitive strategies used when interpreting each FoPL (e.g., using cut-offs, heuristics, and the process of elimination), which differed according to FoPL format. Most participants reported being motivated to use the evaluative FoPLs (particularly the HSR) to make choices about foods consumed as part of regular daily meals, but not for discretionary foods consumed as snacks or desserts. The findings provide further evidence of the potential utility of evaluative FoPLs in supporting healthy food choices and can assist policy makers in selecting between alternative FoPL formats.

Relevance:

10.00%

Publisher:

Abstract:

In recent years, there have been studies on cardinality constrained multi-cycle problems on directed graphs, some of which considered chains co-existing on the same digraph whilst others did not. These studies were inspired by the optimal matching of kidneys known as the Kidney Exchange Problem (KEP). In a KEP, a vertex on the digraph represents a related donor-patient pair in which the donor's kidney is incompatible with the patient. When there are multiple such incompatible pairs in the kidney exchange pool, the kidney of the donor of one incompatible pair may in fact be compatible with the patient of another incompatible pair. If Donor A's kidney is suitable for Patient B, and vice versa, then there will be arcs in both directions between Vertex A and Vertex B. Such an exchange forms a 2-cycle. There may also be cycles involving 3 or more vertices. As all exchanges in a kidney exchange cycle must take place simultaneously (otherwise a donor could drop out of the program once his/her partner has received a kidney from another donor), and as only a limited number of kidney exchanges can occur simultaneously for logistic and human resource reasons, the cardinality of these cycles is constrained. In recent years, kidney exchange programs around the world have included altruistic donors in the pool. A sequence of exchanges that starts from an altruistic donor forms a chain instead of a cycle. We therefore have two underlying combinatorial optimization problems: the Cardinality Constrained Multi-cycle Problem (CCMcP) and the Cardinality Constrained Cycles and Chains Problem (CCCCP). The objective of the KEP is either to maximize the number of kidney matches or to maximize a certain weighted function of kidney matches. In a CCMcP, a vertex can be in at most one cycle, whereas in a CCCCP, a vertex can be part of at most one cycle or one chain. The cardinality of the cycles is constrained in all studies. The cardinality of the chains, however, is considered unconstrained in some studies, and either constrained but larger than that of the cycles, or the same as that of the cycles, in others. Although the CCMcP has some similarities to the ATSP- and VRP-family of problems, there is a major difference: strong subtour elimination constraints are mostly invalid for the CCMcP, as we do allow smaller subtours as long as they do not exceed the size limit. The CCCCP has the distinctive feature of allowing chains as well as cycles on the same directed graph. Hence, both the CCMcP and the CCCCP are interesting and challenging combinatorial optimization problems in their own right. Most existing studies have focused on solution methodologies and, as far as we are aware, there are no polyhedral studies so far. In this paper, we study the polyhedral structure of the natural arc-based integer programming models of the CCMcP and the CCCCP, both of which contain exponentially many constraints. We do so to pave the way for studying the strong valid cuts we have found, which can be applied in a Lagrangean relaxation-based branch-and-bound framework in which, at each node of the branch-and-bound tree, we may be able to obtain a relaxation that can be solved in polynomial time, with strong valid cuts dualized into the objective function and the dual multipliers optimised by subgradient optimisation.
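
For orientation, a commonly used arc-based formulation for the cardinality constrained multi-cycle setting looks roughly as follows, with binary arc variables, flow conservation, a degree bound, and path constraints forbidding cycles longer than K. This is a generic sketch and may differ from the exact models whose polyhedra are studied in the paper.

```latex
\begin{align*}
\max\ & \sum_{(i,j)\in A} w_{ij}\,x_{ij} \\
\text{s.t.}\ & \sum_{j:(j,i)\in A} x_{ji} = \sum_{j:(i,j)\in A} x_{ij}
   && \forall i \in V \quad \text{(a pair donates only if it also receives)} \\
& \sum_{j:(i,j)\in A} x_{ij} \le 1
   && \forall i \in V \quad \text{(each pair is in at most one cycle)} \\
& \sum_{\ell=1}^{K} x_{i_\ell i_{\ell+1}} \le K-1
   && \text{for every directed path } (i_1,\dots,i_{K+1}) \quad \text{(cycle length at most } K\text{)} \\
& x_{ij} \in \{0,1\} && \forall (i,j)\in A
\end{align*}
```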

Relevance:

10.00%

Publisher:

Abstract:

Traditionally, diabetes education has relied on written materials, with limited resources available for children with diabetes. Mobile games can be effective and motivating tools for the promotion of children's health. In our earlier work, we proposed a novel approach for designing computer games aimed at educating children with diabetes. In this article, we apply our game design to a mobile Android game (Mario Brothers). We also introduce four heuristics that are specifically designed for evaluating the mobile game, created by adapting traditional usability heuristics. Results of a pilot study (n = 12) that evaluated gameplay over 1 week showed that the children found the game engaging and that it improved their knowledge of healthy diet and lifestyle.

Relevance:

10.00%

Publisher:

Abstract:

Conspiracy Theory (CT) endorsers believe in an omnipresent, malevolent, and highly coordinated group that wields secret influence for personal gain, and credit this group with the responsibility for many noteworthy events. Two explanations for the emergence of CTs are that they result from social marginalisation and a lack of agency, or that they are due to a need to explain the unexplained. Furthermore, representativeness heuristics may form reasoning biases that make such beliefs more likely. Two related studies (N = 107; N = 120) examined the relationships between social marginalisation, intolerance of uncertainty, these heuristics, and CT belief using a correlational design. Overall, intolerance of uncertainty did not link strongly to CT belief, but worldview variables did, particularly a sense of the world as (socially) threatening, non-random, and with no fixed morality. The use of both of the representativeness heuristics examined was heightened in those participants more likely to endorse CTs. These factors seem to contribute to the likelihood that an individual will endorse CTs generally, relating similarly to common CTs, to CTs generally accepted historically as "true", and to the endorsement of fictional CTs that the individual would find novel. Implications are discussed.

Relevance:

10.00%

Publisher:

Abstract:

Evolutionary algorithms (EAs) have recently been suggested as candidates for solving big data optimisation problems that involve a very large number of variables and need to be analysed in a short period of time. However, EAs face scalability issues when dealing with big data problems. Moreover, the performance of EAs critically hinges on the utilised parameter values and operator types, so it is impossible to design a single EA that outperforms all others on every problem instance. To address these challenges, we propose a heterogeneous framework that integrates a cooperative co-evolution method with various types of memetic algorithms. We use the cooperative co-evolution method to split the big problem into sub-problems in order to increase the efficiency of the solving process. The sub-problems are then solved using various heterogeneous memetic algorithms. The proposed heterogeneous framework adaptively assigns, for each solution, different operators, parameter values and a local search algorithm to efficiently explore and exploit the search space of the given problem instance. The performance of the proposed algorithm is assessed using the Big Data 2015 competition benchmark problems, which contain data with and without noise. Experimental results demonstrate that the proposed algorithm with the cooperative co-evolution method performs better than the same framework without it. Furthermore, it obtained very competitive results, if not better, for all tested instances when compared to other algorithms, while using lower computational times.
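
A minimal sketch of the cooperative co-evolution loop described above: the decision vector is split into subcomponents that are optimised in round-robin fashion against a shared context vector, with a simple hill climber standing in for the heterogeneous memetic subsolvers. The objective, decomposition and parameters are illustrative assumptions.

```python
import random

def sphere(x):
    """Toy objective to minimise (stand-in for a big-data objective)."""
    return sum(v * v for v in x)

def local_search(context, idx, f, iters=100, step=0.2):
    """Simple (1+1) hill climbing on one subcomponent, keeping the rest of
    the context vector fixed -- a stand-in for the memetic subsolvers."""
    best, best_val = context[:], f(context)
    for _ in range(iters):
        cand = best[:]
        for i in idx:
            cand[i] += random.uniform(-step, step)
        val = f(cand)
        if val < best_val:
            best, best_val = cand, val
    return best, best_val

def cooperative_coevolution(f, dim=20, groups=4, cycles=10):
    """Split the decision vector into subcomponents and optimise them in
    round-robin fashion against a shared context vector."""
    context = [random.uniform(-5, 5) for _ in range(dim)]
    subcomponents = [list(range(g, dim, groups)) for g in range(groups)]
    for _ in range(cycles):
        for idx in subcomponents:
            context, val = local_search(context, idx, f)
    return context, val

if __name__ == "__main__":
    random.seed(1)
    _, val = cooperative_coevolution(sphere)
    print(f"objective after CC optimisation: {val:.4f}")
```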

Relevance:

10.00%

Publisher:

Abstract:

Malware replicates itself and produces offspring with the same characteristics but different signatures by using code obfuscation techniques. Current generation anti-virus engines employ a signature-template type detection approach in which malware can easily evade existing signatures in the database. This reduces the capability of current anti-virus engines to detect malware. In this paper, we propose a stepwise binary logistic regression-based dimensionality reduction technique for malware detection using application program interface (API) call statistics. Finding the most significant malware features using traditional wrapper-based approaches takes exponential complexity in the dimension (m) of the dataset with a brute-force search strategy, and order (m-1) complexity with a backward elimination filter heuristic. The novelty of the proposed approach is that its worst-case computational complexity is less than order (m-1). The proposed approach uses multi-linear regression and the p-value of each individual API feature to select the most uncorrelated and significant features, in order to reduce the dimensionality of the large malware data and to ensure the absence of multi-collinearity. The stepwise logistic regression approach is then employed to test the significance of each individual malware feature based on its corresponding Wald statistic and to construct the binary decision model. When the selected most significant APIs are used in a decision rule generation system, this approach not only reduces the tree size but also improves classification performance. Exhaustive experiments on a large malware data set show that the proposed approach clearly exceeds the existing standard decision rule and support vector machine-based template approaches with complete data, and provides a better statistical fitness.
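
As an illustration of the stepwise selection idea, the sketch below performs forward stepwise logistic regression using Wald-test p-values from statsmodels on synthetic API-call-count features; the data, threshold and stopping rule are assumptions and do not reproduce the paper's exact procedure.

```python
import numpy as np
import statsmodels.api as sm

def forward_stepwise_logit(X, y, names, alpha=0.05):
    """Forward stepwise selection: at each step add the candidate feature
    whose Wald p-value in the enlarged model is smallest and below alpha.
    A simplified stand-in for the stepwise procedure described above."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining:
        pvals = {}
        for j in remaining:
            cols = selected + [j]
            model = sm.Logit(y, sm.add_constant(X[:, cols])).fit(disp=0)
            pvals[j] = model.pvalues[-1]          # p-value of the newly added feature
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        selected.append(best)
        remaining.remove(best)
    return [names[j] for j in selected]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 1000
    X = rng.poisson(3, size=(n, 6)).astype(float)     # hypothetical API-call counts
    logits = 0.8 * X[:, 0] - 0.6 * X[:, 2] + rng.normal(scale=2.0, size=n) - 0.5
    y = (logits > 0).astype(int)                      # synthetic malware label
    names = [f"api_{i}" for i in range(6)]
    print(forward_stepwise_logit(X, y, names))
```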

Relevance:

10.00%

Publisher:

Abstract:

Ecological theory often fails applied ecologists in three ways: (1) Theory has little predictive value but is nevertheless applied in conservation with a risk of perverse outcomes, (2) individual theories have limited heuristic value for planning and framing research because they are narrowly focused, and (3) theory can lead to poor communication among scientists and hinder scientific progress through inconsistent use of terms and widespread redundancy. New approaches are therefore needed that improve the distillation, communication, and application of ecological theory. We advocate three approaches to resolve these problems: (1) improve prediction by reviewing theory across case studies to develop contingent theory where possible, (2) plan new research using a checklist of phenomena to avoid the narrow heuristic value of individual theories, and (3) improve communication among scientists by rationalizing theory associated with particular phenomena to purge redundancy and by developing definitions for key terms. We explored the extent to which these problems and solutions have been featured in two case studies of long-term ecological research programs in forests and plantations of southeastern Australia. We found that our main contentions were supported regarding the prediction, planning, and communication limitations of ecological theory. We illustrate how inappropriate application of theory can be overcome or avoided by investment in boundary-spanning actions. The case studies also demonstrate how some of our proposed solutions could work, particularly the use of theory in secondary case studies after developing primary case studies without theory. When properly coordinated and implemented through a widely agreed upon and broadly respected international collaboration, the framework that we present will help to speed the progress of ecological research and lead to better conservation decisions.