892 results for Constructive heuristics
Abstract:
We explicitly construct a simple, piecewise minimizing geodesic, arbitrarily fine interpolation of simple and Jordan curves on a Riemannian manifold. In particular, a finite sequence of partition points can be specified in advance to be included in our construction. We then present two applications of our main results: the generalized Green's theorem and the uniqueness of signature for planar Jordan curves with finite p-variation for 1⩽p<2.
Abstract:
The study furthers our understanding of the persuasive and constructive aspects of accounting information. We consider it as a process of ‘interpretive framing’ in the quest for legitimacy - an attempt to justify decisions and excuse mistakes. We base our theoretical discussion on the premise that the picture reported by accounting information is an example of institutional reality and thus mediated by the social contexts in which it is constructed and interpreted. Accounting information is a matter of ‘the interpretation of interpretations’ - the provision of accounting information, which is already a result of a competitive interplay among prior interpretations of certain aspects of our economic phenomena, undergoes further interpretation by the recipients of that information. This notion applies equally to narratives and numbers. We challenge notions of rigor, accuracy and objectivity assigned to quantification in accounting and posit that numbers can be an even more powerful rhetorical device due to their image of being rational and ‘rhetoric free’. We illustrate our theoretical propositions by presenting explicit references to the constructive and rhetorical aspects of financial reporting from Pacioli and his times (late 15th century) to the recent regulatory developments of FASB/IASB in 2013, i.e. from the rhetoric of double entry book-keeping to the rhetoric of 'fair value’. Building on these theoretical foundations, we acknowledge the inherent subjectivity of accounting information (influenced by perceptions and interests) without, however, entirely denying its informative functions. We illustrate the practical implications of this in a situation where “shared and socially accepted” perceptions may be the nearest we can get to anything resembling a faithful representation of economic reality.
The paper contributes to a broader understanding of how accounting information can be viewed as a social and humanistic construction, and challenges taken-for-granted assumptions about impartiality, neutrality and rationality in regard to the process.
Abstract:
We explore the debates surrounding the constructive and discursive capabilities of accounting information focusing in particular on the reception volatility of numbers once they are produced and ‘exposed’ to various communities of minds. Drawing on Goffman’s (1974) frame analysis and Vollmer’s (2007) work on the three-dimensional character of numerical signs, we explore how numbers can go through gradual or instantaneous transformations, get caught up in public debates and become ‘agents’ or ‘captives’ in creating social order and in some cases social drama. In our analysis we also relate to the work of Durkheim (1993, 2002) on the sociology of morality to illustrate how numbers can become indicators of moral transgression. The study explores both historical and contemporary examples of controversies and recent accounting scandals to demonstrate how preparers (of financial information) can lose control over numbers which then acquire new meanings through social context and collective (re)framing. The main contribution of the study is to illustrate how the narratives attached to numbers are malleable and fluid across both time and space.
Abstract:
Managers face hard choices between process and outcome systems of accountability in evaluating employees, but little is known about how managers resolve them. Building on the premise that political ideologies serve as uncertainty-reducing heuristics, two studies of working managers show that: (1) conservatives prefer outcome accountability and liberals prefer process accountability in an unspecified policy domain; (2) this split becomes more pronounced in a controversial domain (public schools) in which the foreground value is educational efficiency but reverses direction in a controversial domain (affirmative action) in which the foreground value is demographic equality; (3) managers who discover employees have subverted their preferred system favor tinkering over switching to an alternative system; (4) but bipartisan consensus arises when managers have clear evidence about employee trustworthiness and the tightness of the causal links between employee effort and success. These findings shed light on ideological and contextual factors that shape preferences for accountability systems.
Abstract:
Bloom filters are a data structure for storing data in compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents the yes-no Bloom filter, a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, which is another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if the objects included in the no-filter are chosen so that the no-filter recognises as many false positives as possible but no true positives, producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognises no true positives. To this end, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Exploiting the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed, making use of a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best compared with a number of heuristics as well as the CPLEX built-in branch-and-bound solver, and it is therefore recommended for use in yes-no Bloom filters.
In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
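The two-filter query logic described in this abstract can be sketched in Python. This is a minimal illustrative sketch, not the paper's implementation: the class names, sizes, and the SHA-256 double-hashing scheme are assumptions for the example.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter using double hashing (illustrative sketch)."""
    def __init__(self, m, k):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, item):
        # derive k bit positions from one SHA-256 digest via double hashing
        h = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(h[:8], "big")
        h2 = int.from_bytes(h[8:16], "big")
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

class YesNoBloomFilter:
    """Yes-no Bloom filter: the no-filter stores known false positives of
    the yes-filter, so a yes-filter hit is rejected if the no-filter also
    reports it. Marking a true positive would wrongly reject a member."""
    def __init__(self, m_yes, m_no, k):
        self.yes = BloomFilter(m_yes, k)
        self.no = BloomFilter(m_no, k)

    def add(self, item):
        self.yes.add(item)

    def mark_false_positive(self, item):
        # item must be a known false positive of the yes-filter
        self.no.add(item)

    def __contains__(self, item):
        return item in self.yes and item not in self.no

# usage: add members, then mark an observed false positive of the yes-filter
f = YesNoBloomFilter(1024, 1024, 3)
f.add("apple")
f.mark_false_positive("zzz")  # pretend "zzz" was observed as a false positive
```

Querying `"zzz" in f` now returns False even if the yes-filter alone would have accepted it, which is exactly the accuracy gain the abstract describes; choosing *which* false positives to mark under a size budget is the ILP the paper studies.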
Abstract:
This paper argues that the intellectual contribution of Alan Rugman reflects his distinctive research methodology. Alan Rugman trained as an economist, and relied heavily on economic principles throughout his work. He believed that one good theory was sufficient for IB studies, and that theory, he maintained, was internalisation theory. He rejected theoretical pluralism, and believed that IB suffered from a surfeit of theories. Alan was a positivist. The test of a good theory was that it led to clear predictions which were corroborated by empirical evidence. Many IB theories, Alan believed, were weak; their proliferation sowed confusion and they needed to be refuted. Alan’s interpretation of internalisation was, however, unconventional in some respects. He played down the trade-offs presented in Coase’s original work, and substituted heuristics in their place. Instead of analysing internalisation as a context-specific choice between alternative contractual arrangements, he presented it as a strategic imperative for firms possessing strong knowledge advantages. His heuristics did not apply to every possible case, but in Alan’s view they applied in the great majority of cases and were therefore a basis for management action.
Abstract:
GPR (Ground Penetrating Radar) results are shown for perpendicular broadside and parallel broadside antenna orientations. Performance in the detection and localization of concrete tubes and steel tanks is compared as a function of acquisition configuration. The comparison is done using 100 MHz and 200 MHz center frequency antennas. All tubes and tanks are buried at the geophysical test site of IAG/USP in Sao Paulo city, Brazil. The results show that the long steel pipe with a 38-mm diameter was well detected with the perpendicular broadside configuration. The concrete tubes were better detected with the parallel broadside configuration, which clearly showed hyperbolic diffraction events from all targets down to 2-m depth. Steel tanks were detected with both configurations. However, the parallel broadside configuration generated, to a much lesser extent, an apparent hyperbolic reflection corresponding to constructive interference of the diffraction hyperbolas of adjacent targets placed at the same depth. Vertical concrete tubes and steel tanks were better delineated with parallel broadside antennas, where the apexes of the diffraction hyperbolas corresponded more closely to the horizontal location of the buried targets. The two configurations provide details about buried targets, emphasizing how GPR multi-component configurations have the potential to improve subsurface image quality as well as to discriminate different buried targets, and they hold applicability in geotechnical and geoscientific studies. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
This paper tackles the problem of showing that evolutionary algorithms for fuzzy clustering can be more efficient than systematic (i.e. repetitive) approaches when the number of clusters in a data set is unknown. To do so, a fuzzy version of an Evolutionary Algorithm for Clustering (EAC) is introduced. A fuzzy cluster validity criterion and a fuzzy local search algorithm are used instead of their hard counterparts employed by EAC. Theoretical complexity analyses for both the systematic and evolutionary algorithms under interest are provided. Examples with computational experiments and statistical analyses are also presented.
Abstract:
The constrained compartmentalized knapsack problem can be seen as an extension of the constrained knapsack problem in which the items are grouped into different classes, so that the overall knapsack has to be divided into compartments and each compartment is loaded with items from the same class. Moreover, building a compartment incurs a fixed cost and a fixed loss of capacity in the original knapsack, and the compartments are lower and upper bounded. The objective is to maximize the total value of the items loaded in the overall knapsack minus the cost of the compartments. This problem has been formulated as an integer non-linear program, and in this paper we reformulate the non-linear model as an integer linear master problem with a large number of variables. Some heuristics based on the solution of the restricted master problem are investigated. A new and more compact integer linear model is also presented, which can be solved by a commercial branch-and-bound solver that found most of the optimal solutions for the constrained compartmentalized knapsack problem. On the other hand, the heuristics provide good solutions with low computational effort. (C) 2011 Elsevier B.V. All rights reserved.
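The structure described in this abstract can be written down as a small model sketch. The notation below is hypothetical (one compartment per class for brevity; the paper allows several): classes $k$ with item sets $N_k$, item values $v_i$ and weights $w_i$, knapsack capacity $W$, compartment fixed cost $c$, capacity loss $s$ per compartment built, and compartment load bounds $l \le \cdot \le u$.

```latex
\max \;\; \sum_{k} \sum_{i \in N_k} v_i \, x_{ik} \;-\; \sum_{k} c \, y_k
\qquad \text{s.t.} \qquad
l \, y_k \;\le\; \sum_{i \in N_k} w_i \, x_{ik} \;\le\; u \, y_k \quad \forall k,
\qquad
\sum_{k} \Big( \sum_{i \in N_k} w_i \, x_{ik} + s \, y_k \Big) \;\le\; W,
```

with $x_{ik} \in \mathbb{Z}_{\ge 0}$ and $y_k \in \{0,1\}$ indicating whether a compartment for class $k$ is built. The bound constraints tie items to open compartments, and the capacity constraint charges both the loaded weight and the loss $s$ for each compartment, matching the trade-off the objective captures.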
Abstract:
We consider the two-level network design problem with intermediate facilities. This problem consists of designing a minimum cost network respecting some requirements, usually described in terms of the network topology or in terms of a desired flow of commodities between source and destination vertices. Each selected link must receive one of two types of edge facilities, and the connection of different edge facilities requires a costly and capacitated vertex facility. We propose a hybrid decomposition approach which heuristically obtains tentative solutions for the number and location of vertex facilities and uses these solutions to limit the computational burden of a branch-and-cut algorithm. We test our method on instances of the power system secondary distribution network design problem. The results show that the method is efficient both in terms of solution quality and computational times. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
In this article we propose a 0-1 optimization model to determine a crop rotation schedule for each plot in a cropping area. The rotations have the same duration in all the plots and the crops are selected to maximize plot occupation. The crops may have different production times and planting dates. The problem includes planting constraints for adjacent plots and also for sequences of crops in the rotations. Moreover, cultivating crops for green manuring and fallow periods are scheduled into each plot. As the model has, in general, a great number of constraints and variables, we propose a heuristic based on column generation. To evaluate the performance of the model and the method, computational experiments using real-world data were performed. The solutions obtained indicate that the method generates good results.
Abstract:
In this paper we present a genetic algorithm with new components to tackle capacitated lot sizing and scheduling problems with sequence dependent setups that appear in a wide range of industries, from soft drink bottling to food manufacturing. Finding a feasible solution to highly constrained problems is often a very difficult task. Various strategies have been applied to deal with infeasible solutions throughout the search. We propose a new scheme for classifying individuals based on nested domains, which ranks solutions according to their level of infeasibility, represented in our case by bands of additional production hours (overtime). Within each band, individuals are differentiated only by their fitness function. As iterations proceed, the widths of the bands are dynamically adjusted to improve the convergence of the individuals into the feasible domain. The numerical experiments on highly capacitated instances show the effectiveness of this computationally tractable approach in guiding the search toward the feasible domain. Our approach outperforms other state-of-the-art approaches and commercial solvers. (C) 2009 Elsevier Ltd. All rights reserved.
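The nested-band ranking idea in this abstract can be illustrated with a minimal Python sketch. The function names, the fixed band width, and the tuple representation of individuals are illustrative assumptions; the paper's scheme additionally adjusts the band widths dynamically during the search.

```python
def band_index(overtime, band_width):
    """Band 0 holds feasible individuals (no overtime); higher bands
    hold increasingly infeasible ones, in steps of band_width hours."""
    if overtime <= 0:
        return 0
    return 1 + int(overtime // band_width)

def rank_key(individual, band_width):
    # individual = (fitness_cost, overtime_hours); lower is better for both.
    # Sorting by (band, fitness) compares infeasibility level first and
    # breaks ties within a band by the fitness function alone.
    cost, overtime = individual
    return (band_index(overtime, band_width), cost)

population = [(100.0, 0.0), (90.0, 5.0), (95.0, 0.0), (80.0, 12.0)]
ranked = sorted(population, key=lambda ind: rank_key(ind, band_width=6.0))
# feasible individuals come first; within a band, ties broken by fitness
```

Here the feasible individuals `(95.0, 0.0)` and `(100.0, 0.0)` outrank the cheaper but infeasible `(80.0, 12.0)`; shrinking `band_width` over the iterations tightens the pressure toward the feasible domain.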
Abstract:
This paper addresses the independent multi-plant, multi-period, and multi-item capacitated lot sizing problem where transfers between the plants are allowed. This is an NP-hard combinatorial optimization problem and few solution methods have been proposed to solve it. We develop a GRASP (Greedy Randomized Adaptive Search Procedure) heuristic as well as a path-relinking intensification procedure to find cost-effective solutions for this problem. In addition, the proposed heuristics are used to solve some instances of the capacitated lot sizing problem with parallel machines. The results of the computational tests show that the proposed heuristics outperform other heuristics previously described in the literature. The results are confirmed by statistical tests. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
In this paper we consider the programming of job rotation in the assembly line worker assignment and balancing problem. The motivation for this study comes from the designing of assembly lines in sheltered work centers for the disabled, where workers have different task execution times. In this context, the well-known training aspects associated with job rotation are particularly desired. We propose a metric along with a mixed integer linear model and a heuristic decomposition method to solve this new job rotation problem. Computational results show the efficacy of the proposed heuristics. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
A lot sizing and scheduling problem prevalent in small market-driven foundries is studied. There are two related decision levels: (1) the furnace scheduling of metal alloy production, and (2) moulding machine planning, which specifies the type and size of production lots. A mixed integer programming (MIP) formulation of the problem is proposed, but it is impractical to solve in reasonable computing time for non-small instances. As a result, a faster relax-and-fix (RF) approach is developed that can also be used on a rolling horizon basis where only immediate-term schedules are implemented. As well as a MIP method to solve the basic RF approach, three variants of a local search method are also developed and tested using instances based on the literature. Finally, foundry-based tests with a real order book resulted in a very substantial reduction of delivery delays and finished inventory, better use of capacity, and much faster schedule definition compared to the foundry's own practice. (c) 2006 Elsevier Ltd. All rights reserved.