916 results for heuristic
Abstract:
The dissertation considers the birth of modernist and avant-gardist authorship as a reaction against mass society and mass culture. Radical avant-gardism is studied as figurative violence done against the human form. The main argument is that avant-gardist authorship is an act of masculine autogenesis. This act demands that the human form be worked down to an elementary state of disarticulation and then reformed on the model of the artist's own psychophysical, idiosyncratic vision and experience. This work is bound to concrete mass: the mass of pigment, charcoal, film, or flesh. The mass of the figure is worked so as to create a likeness in the nervous system of the spectator. The act of violence against the human figure is intended to shock the spectator, and this shock is also a state of emotional and perceptual massification. I use the theatrical image as a heuristic tool of performance analysis, connecting figure and spectator into a larger image constituted by relationships of mimesis, in which the figure presents the likeness of the spectator and the spectator the likeness of the figure. Likeness is considered both gestural (socially mimetic) and sensuous (kinaesthetically mimetic). Through this construction one can describe and contextualize the process of violent autogenesis using particular images as case studies. The avant-gardist author is the author of the theatrical image, not of a particular figure, and through the act of massification the nervous system of the spectator also becomes part of this image. This is the most radical form and ideology of avant-gardist and modernist authorship, an imagerial will to power. I construct a model of the gestural-mimetic performer to explicate the nature of the violence done to the human form in specific works: Mann's novella Death in Venice, Schiele's and Artaud's self-portraits, Francis Bacon's paintings, Beckett's short play Not I, Orlan's chirurgical performance Operation Omnipresence, Cindy Sherman's Film Stills, Diamanda Galás's recording Vena Cava, and Hitchcock's Psycho. Mass psychology constructed a phobic picture of the human form's plasticity and of its capacity to be constituted by influences coming from both inside and outside: childhood, atavistic organic memories, the urban field of nervous impulses, the unconscious, the capitalist (image) market, and democratic mass politics. The violence is therefore antimimetic and antitheatrical, a paradoxical situation, considering that mass media and mass audiences aroused in artistic elites an enormous fascination with the possibilities of theatrical and hypnotic influence. The problem was how to use the theatrical image without the author falling under its influence. This work offers one possible answer: by destroying the gestural-mimetic performer, by eliminating representations of mimetic body techniques from the performer of the human figure (a painted, photographed, filmed, or acted figure, audiovisual or vocal). I call this the chirurgical operation, which also indicates its co-option of medical portraiture and medico-cultural diagnoses of the human form. The destruction of the performer's autonomy was a process parallel to the construction of the new mass-media audience as passive, plastic, and feminine. The process created the image of a new kind of autotelic masculine author-hero, freed from the human form in its bourgeois, aristocratic, classical, and popular versions.
Abstract:
In this study I discuss G. W. Leibniz's (1646-1716) views on rational decision-making from the standpoint of both God and man. The divine decision takes place within creation, as God freely chooses the best from an infinite number of possible worlds. While God's choice is based on absolutely certain knowledge, human decisions on practical matters are mostly based on uncertain knowledge. In many respects, however, the two can be regarded as analogous, especially in more complicated situations. In addition to giving an overview of divine decision-making and critically discussing the criteria God favours in his choice, I provide an account of Leibniz's views on human deliberation, which includes some new ideas. One of these concerns the importance of estimating probabilities in decision-making: one estimates both the goodness of the act itself and that of its consequences with respect to the desired good. Another idea is related to the plurality of goods in complicated decisions and the competition this may provoke. Thirdly, heuristic models are used to sketch the situations under deliberation in order to help in making the decision. Combining the views of Marcelo Dascal, Jaakko Hintikka and Simo Knuuttila, I argue that Leibniz applied two kinds of models of rational decision-making to practical controversies, often without explicating the details. The simpler, traditional pair-of-scales model is best suited to cases in which one has to decide for or against some option, or to distribute goods among parties and strive for a compromise. What may be of more help in more complicated deliberations is the novel vectorial model, which is an instance of the general mathematical doctrine of the calculus of variations. To illustrate this distinction, I discuss some cases in which he apparently applied these models in different kinds of situations. These examples support the view that the models had a systematic value in his theory of practical rationality.
Abstract:
Secure communication channels are typically constructed from an authenticated key exchange (AKE) protocol, which authenticates the communicating parties and establishes shared secret keys, and a secure data transmission layer, which uses the secret keys to encrypt data. We address the partial leakage of communicating parties' long-term secret keys due to various side-channel attacks, and the partial leakage of plaintext due to data compression. Both issues can negatively affect the security of channel establishment and data transmission. In this work, we advance the modelling of security for AKE protocols by considering more granular partial leakage of parties' long-term secrets. We present generic and concrete constructions of two-pass leakage-resilient key exchange protocols that are secure in the proposed security models. We also examine two techniques--heuristic separation of secrets and fixed-dictionary compression--for enabling compression while protecting high-value secrets.
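As a rough illustration of the fixed-dictionary idea (not the construction analysed in this work), the sketch below compresses data against a fixed, preset dictionary using Python's zlib. The dictionary contents and the message are hypothetical, and DEFLATE still finds matches within the message itself, so this only approximates a scheme in which matches are restricted to the fixed dictionary.

```python
import zlib

# Hypothetical preset dictionary of strings expected in the plaintext. The core
# idea of fixed-dictionary compression is that the compressor does not adapt to
# secret-dependent content, which is what CRIME-style attacks exploit.
PRESET_DICT = b"GET POST HTTP/1.1 Host: Cookie: Content-Length: application/json"

def compress_fixed_dict(data: bytes) -> bytes:
    # zdict seeds the compression window with the preset dictionary.
    c = zlib.compressobj(level=6, wbits=15, zdict=PRESET_DICT)
    return c.compress(data) + c.flush()

def decompress_fixed_dict(blob: bytes) -> bytes:
    # The decompressor must be given the same dictionary.
    d = zlib.decompressobj(wbits=15, zdict=PRESET_DICT)
    return d.decompress(blob) + d.flush()

if __name__ == "__main__":
    msg = b"GET /index HTTP/1.1 Host: example.org Cookie: session=SECRETVALUE"
    packed = compress_fixed_dict(msg)
    assert decompress_fixed_dict(packed) == msg
    print(len(msg), "->", len(packed), "bytes")
```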
Abstract:
An assessment of the relative influences of management and environment on the composition of floodplain grasslands of north-western New South Wales was made using a regional vegetation survey sampling a range of land tenures (e.g. private property, travelling stock routes and nature reserves). A total of 364 taxa belonging to 55 different plant families was recorded. Partitioning of variance with redundancy analysis determined that environmental variables accounted for a greater proportion (61.3%) of the explained variance in species composition than disturbance-related variables (37.6%). Soil type (and fertility), sampling time and rainfall had a strong influence on species composition, and there were also east-west variations in composition across the region. Of the disturbance-related variables, cultivation, stocking rate and flooding frequency were all influential. Total, native, forb, shrub and subshrub richness were positively correlated with increasing time since cultivation. Flood frequency was positively correlated with graminoid species richness and negatively correlated with total and forb species richness. Site species richness was also influenced by environmental variables (e.g. soil type and rainfall). Despite the resilience of these grasslands, some forms of severe disturbance (e.g. several years of cultivation) can result in the removal of some dominant perennial grasses (e.g. Astrebla spp.) and an increase in disturbance specialists. A simple heuristic transitional model is proposed that has conceptual thresholds for plant biodiversity status. This knowledge representation may be used to assist in the management of these grasslands by defining four broad levels of community richness and the drivers that change this status.
Abstract:
The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general-purpose multicore architectures. StreamIt graphs describe task, data and pipeline parallelism, which can be exploited on accelerators such as Graphics Processing Units (GPUs) or the Cell BE, which support abundant parallelism in hardware. In this paper, we describe a novel method to orchestrate the execution of a StreamIt program on a multicore platform equipped with an accelerator. The proposed approach identifies, using profiling, the relative benefits of executing a task on the superscalar CPU cores and on the accelerator. We formulate the problem of partitioning the work between the CPU cores and the GPU, taking into account the latencies for data transfers and the required buffer-layout transformations associated with the partitioning, as an integrated Integer Linear Program (ILP), which can then be solved by an ILP solver. We also propose an efficient heuristic algorithm for the work-partitioning between the CPU and the GPU, which provides solutions that are within 9.05% of the optimal solution on average across the benchmark suite. The partitioned tasks are then software-pipelined to execute on the multiple CPU cores and the Streaming Multiprocessors (SMs) of the GPU. The software-pipelining algorithm orchestrates the execution between the CPU cores and the GPU by emitting the code for the CPU and the GPU, and the code for the required data transfers. Our experiments on a platform with 8 CPU cores and a GeForce 8800 GTS 512 GPU show a geometric mean speedup of 6.94X, with a maximum of 51.96X, over single-threaded CPU execution across the StreamIt benchmarks. This is an 18.9% improvement over a partitioning strategy that maps only the filters that cannot be executed on the GPU - the filters with state that is persistent across firings - onto the CPU.
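The paper's ILP and its near-optimal heuristic are not reproduced here; the following is only a minimal greedy sketch of the underlying trade-off, assigning each filter to the CPU or the GPU from hypothetical profiled times and a hypothetical per-edge transfer cost. All names and numbers are invented for illustration.

```python
# Minimal greedy sketch (not the paper's ILP or heuristic): place each filter of a
# stream graph on the CPU or the GPU using profiled execution times, charging a
# transfer cost whenever adjacent filters end up on different devices.
from dataclasses import dataclass

@dataclass
class Filter:
    name: str
    cpu_time: float   # hypothetical profiled time per iteration on a CPU core (ms)
    gpu_time: float   # hypothetical time on the GPU (ms); inf if the filter is stateful

def partition(filters, edges, transfer_cost=0.5):
    """edges: list of (producer, consumer) filter-name pairs."""
    placement = {}
    for f in filters:
        cpu_cost, gpu_cost = f.cpu_time, f.gpu_time
        # Add transfer penalties against neighbours that are already placed.
        for a, b in edges:
            if f.name in (a, b):
                other = b if a == f.name else a
                if other in placement:
                    cpu_cost += transfer_cost if placement[other] == "GPU" else 0.0
                    gpu_cost += transfer_cost if placement[other] == "CPU" else 0.0
        placement[f.name] = "CPU" if cpu_cost <= gpu_cost else "GPU"
    return placement

filters = [Filter("source", 0.2, 0.4), Filter("fir", 3.0, 0.3),
           Filter("stateful_sink", 0.5, float("inf"))]
edges = [("source", "fir"), ("fir", "stateful_sink")]
print(partition(filters, edges))  # stateful filter stays on the CPU
```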
Abstract:
We study the performance of greedy scheduling in multihop wireless networks where the objective is aggregate utility maximization. Following standard approaches, we consider the dual of the original optimization problem. Optimal scheduling requires selecting independent sets of maximum aggregate price, but this problem is known to be NP-hard. We propose and evaluate a simple greedy heuristic, and suggest how it can be implemented in a distributed manner. We evaluate an analytical bound in detail for the special case of a line graph, and also provide a loose bound on the greedy heuristic for the case of an arbitrary graph.
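A minimal sketch of the greedy step described above, assuming a conflict-graph representation of interference: repeatedly pick the remaining link with the highest price and discard every link that conflicts with it. The link names, prices and conflict sets are illustrative, not taken from the paper.

```python
# Greedy selection of an independent set of links by price: pick the highest-price
# remaining link, then remove it and everything that interferes with it.
def greedy_independent_set(prices, conflicts):
    """prices: {link: price}; conflicts: {link: set of conflicting links}."""
    remaining = set(prices)
    schedule = []
    while remaining:
        best = max(remaining, key=lambda link: prices[link])
        schedule.append(best)
        remaining -= {best} | conflicts.get(best, set())
    return schedule

# Toy line-graph example: consecutive links on a path interfere with each other.
prices = {"l1": 4.0, "l2": 5.0, "l3": 3.0, "l4": 2.5}
conflicts = {"l1": {"l2"}, "l2": {"l1", "l3"}, "l3": {"l2", "l4"}, "l4": {"l3"}}
print(greedy_independent_set(prices, conflicts))  # -> ['l2', 'l4']
```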
Abstract:
The Department of Forest Resource Management at the University of Helsinki carried out, in the years 2004-2007, the so-called SIMO project to develop a new-generation planning system for forest management. The project parties are the organisations that do most of the forest planning in Finland, in state, industry and privately owned forests. The aim of this study was to determine the needs and requirements for the new forest planning system and to clarify how the parties see the targets and processes of today's forest planning. Representatives responsible for forest planning in each organisation were interviewed one by one. According to the study, the stand-based system for managing and treating forests will continue in the future. Because of varied data acquisition methods with different accuracies and sources, and the development of single-tree interpretation, more and more forest data are collected without fieldwork. The benefits of using more specific forest data also call for the use of information units smaller than the tree stand. In Finland the traditional way to arrange the forest planning computation is divided into two elements. After the forest data have been updated to the present situation, the growth of every stand unit is simulated under different alternative treatment schedules. After simulation, optimisation selects one treatment schedule for every stand so that the management programme satisfies the owner's goals in the best possible way. This arrangement will be maintained in the future system. The parties' requirements to add multi-criteria problem solving, group decision support methods, and heuristic and spatial optimisation to the system make the programming work more challenging. In general, the new system is expected to be adjustable and transparent; strict documentation and free source code help to bring these expectations into effect. Variable growth models and treatment schedules with different source information, accuracy, methods and processing speeds are expected to work easily in the system. Possibilities to calibrate the models regionally and to set local parameters that change over time are also required. In the future, the forest planning system will be integrated into comprehensive data management systems together with geographic, economic and work supervision information. This requires a modular implementation of the system and a simple data transmission interface between the modules and with other systems. No major differences in the parties' views of the system requirements were found in this study; rather, the interviews completed the overall picture from slightly different angles. In the organisations, forest planning is considered quite inflexible and it only draws the strategic lines. It does not yet have a role in operational activity, although the need for and benefits of team-level forest planning are acknowledged. The demands and opportunities of varied forest data, new planning goals and the development of information technology are recognised, and the party organisations want to keep pace with this development. One example is their engagement in the extensive SIMO project, which connects the whole field of forest planning in Finland.
Abstract:
(i) In cis/trans pairs of cyclic 1,3-dicarboxylic acid ethyl esters the cis-forms exhibit higher O-methylene proton (HA, HB) anisochrony than the trans-forms; (ii) anisochrony, easily observed in certain decalin-10-carboxylic ethyl esters, 'disappears' when one of the rings attains the possibility of transforming into a 'twist' form; (iii) in certain pairs of chiral sec-ethyl esters and their tert-methylated analogues anisochrony is higher in the latter, contrary to expectation, while in certain others the reverse is observed. Attempted explanations are based on assessments of whether HA and HB are or are not in highly different magnetic environments in the conformers regarded as preferred. This subsumes the possibility that XYZC-CO2CHAHBMe chiral ethyl esters differ from XYZC-CHAHBMe ethanes because intervention by the carboxyl group insulates the prochiral centre and allows anisotropic effects to gain somewhat in importance among the mechanisms that discriminate between HA and HB, so long as rotamer-population inequalities persist. Background information is provided on why rotamer-population inequalities will always persist and on a heuristic that attempts to generalize the effects of XYZ in XYZC-CUAUBV. Possible effects when connectivity exists between a pair among X, Y, Z, or when specific interactions occur between V and X, Y or Z, are considered. An interpretation in terms of 'increasing conformational mobility' has been suggested for the observed increase in the rate of temperature dependence of O-methylene anisochrony down a series of chiral ethyl esters.
Abstract:
The velocity ratio algorithm developed from a heuristic study of transfer matrix multiplication has been employed to bring out the relative effects of the elements constituting a linear, one-dimensional acoustic filter, the overall dimensions of which are fixed, and synthesize a suitable straight-through configuration for a low-pass, wide-band, non-dissipative acoustic filter. The potential of the foregoing approach in applications to the rational design of practical acoustic filters such as automotive mufflers is indicated.
Abstract:
The possibility or impossibility of separating the particle and electrode interactions is discussed in the wider context of the effects of any two interaction potentials on the equation of state. The involved nature of the dependence of the pressure on two individually definable forces is illustrated through the Percus-Yevick results for adhesive hard spheres. An alternative form of the adsorption isotherm is given to bring home the intimate relationship between the actual equation of state and the free energy of adsorption. The thermodynamic consequences of congruence with respect to E (or q), as reflected in the linear plots of q (or E) vs. θ, are well known. The mathematical consequences of simultaneous congruence have been pointed out recently. In this paper, the physical nature of the congruence hypothesis is revealed. In passing, "pseudo-congruence" is also discussed. It is emphasised that the problem is no less ambiguous with regard to modelling the particle/particle interaction. The ad hoc nature of our dependence on the available equations of state is emphasised through a discussion of the HFL theory. Finally, a heuristic method for modelling ΔG mathematically, incorporating its behaviour at saturation coverages, is advanced. The more interesting aspects of this approach, which generalises almost all isotherms hitherto known, are sketched.
Abstract:
In an earlier paper [1], it has been shown that velocity ratio, defined with reference to the analogous circuit, is a basic parameter in the complete analysis of a linear one-dimensional dynamical system. In this paper it is shown that the terms constituting velocity ratio can be readily determined by means of an algebraic algorithm developed from a heuristic study of the process of transfer matrix multiplication. The algorithm permits the set of most significant terms at a particular frequency of interest to be identified from a knowledge of the relative magnitudes of the impedances of the constituent elements of a proposed configuration. This feature makes the algorithm a potential tool in a first approach to a rational design of a complex dynamical filter. This algorithm is particularly suited for the desk analysis of a medium size system with lumped as well as distributed elements.
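To make the underlying transfer-matrix chain concrete, the sketch below multiplies standard textbook four-pole matrices for a uniform pipe and a lumped shunt cavity. These element forms, the air constants and the dimensions are assumptions for illustration, not taken from the paper, and the velocity-ratio bookkeeping itself is not reproduced here.

```python
import numpy as np

RHO, C = 1.2, 343.0  # nominal air density (kg/m^3) and speed of sound (m/s)

def pipe(length, area, omega):
    """Four-pole transfer matrix of a uniform pipe (plane-wave, pressure/volume-velocity form)."""
    k = omega / C
    y0 = RHO * C / area  # characteristic impedance of the pipe
    return np.array([[np.cos(k * length),           1j * y0 * np.sin(k * length)],
                     [1j * np.sin(k * length) / y0, np.cos(k * length)]])

def shunt_cavity(volume, omega):
    """Lumped shunt compliance (side cavity) as a four-pole element."""
    z = RHO * C**2 / (1j * omega * volume)
    return np.array([[1.0, 0.0], [1.0 / z, 1.0]])

def overall_matrix(elements, omega):
    """Chain the element matrices in order, as in transfer matrix multiplication."""
    total = np.eye(2, dtype=complex)
    for element in elements:
        total = total @ element(omega)
    return total

# Hypothetical pipe-cavity-pipe configuration evaluated at 200 Hz.
omega = 2 * np.pi * 200.0
elements = [lambda w: pipe(0.3, 1e-3, w),
            lambda w: shunt_cavity(2e-3, w),
            lambda w: pipe(0.3, 1e-3, w)]
print(overall_matrix(elements, omega))
```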
Abstract:
Analyzing statistical dependencies is a fundamental problem in all empirical science. Dependencies help us understand causes and effects, create new scientific theories, and invent cures to problems. Nowadays, large amounts of data are available, but efficient computational tools for analyzing the data are missing. In this research, we develop efficient algorithms for a commonly occurring search problem - searching for the statistically most significant dependency rules in binary data. We consider dependency rules of the form X->A or X->not A, where X is a set of positive-valued attributes and A is a single attribute. Such rules describe which factors either increase or decrease the probability of the consequent A. Classic examples are genetic and environmental factors, which can either cause or prevent a disease. The emphasis in this research is that the discovered dependencies should be genuine - i.e. they should also hold in future data. This is an important distinction from traditional association rules, which - in spite of their name and a similar appearance to dependency rules - do not necessarily represent statistical dependencies at all, or represent only spurious connections that occur by chance. Therefore, the principal objective is to search for the rules with statistical significance measures. Another important objective is to search only for non-redundant rules, which express the real causes of the dependence without any occasional extra factors. The extra factors do not add any new information on the dependence, but can only blur it and make it less accurate in future data. The problem is computationally very demanding, because the number of all possible rules increases exponentially with the number of attributes. In addition, neither statistical dependency nor statistical significance is a monotonic property, which means that traditional pruning techniques do not work. As a solution, we first derive the mathematical basis for pruning the search space with any well-behaving statistical significance measure. The mathematical theory is complemented by a new algorithmic invention, which enables an efficient search without any heuristic restrictions. The resulting algorithm can be used to search for both positive and negative dependencies with any commonly used statistical measure, like Fisher's exact test, the chi-squared measure, mutual information, and z scores. According to our experiments, the algorithm scales well, especially with Fisher's exact test. It can easily handle even the densest data sets with 10000-20000 attributes. Still, the results are globally optimal, which is a remarkable improvement over existing solutions. In practice, this means that the user does not have to worry whether the dependencies hold in future data or whether the data still contains better, but undiscovered, dependencies.
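As a small illustration of scoring one candidate rule X -> A with one of the supported measures, the sketch below applies Fisher's exact test to a hypothetical 2x2 contingency table. The counts are invented, and the search algorithm itself is not reproduced here.

```python
from scipy.stats import fisher_exact

# Hypothetical counts for one candidate rule X -> A in a binary data set:
# rows are X present / X absent, columns are A present / A absent.
table = [[60, 15],
         [40, 85]]

# One-sided test for positive dependence between X and A.
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p_value:.3g}")
# A small p-value indicates that, in this toy sample, the rule X -> A is
# unlikely to be a chance artefact.
```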
Abstract:
This study presents a comprehensive mathematical formulation model for a short-term open-pit mine block sequencing problem, which considers nearly all relevant technical aspects in open-pit mining. The proposed model aims to obtain the optimum extraction sequences of the original-size (smallest) blocks over short time intervals and in the presence of real-life constraints, including precedence relationship, machine capacity, grade requirements, processing demands and stockpile management. A hybrid branch-and-bound and simulated annealing algorithm is developed to solve the problem. Computational experiments show that the proposed methodology is a promising way to provide quantitative recommendations for mine planning and scheduling engineers.
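The hybrid branch-and-bound and simulated annealing algorithm itself is not reproduced here; the sketch below is only a generic simulated-annealing skeleton over a toy block-extraction order, with a swap neighbourhood and an invented cost function, and it ignores precedence, capacity and stockpile constraints.

```python
import math, random

def anneal(sequence, cost, t0=100.0, cooling=0.99, steps=5000):
    """Generic simulated-annealing skeleton over an extraction order."""
    current, best = list(sequence), list(sequence)
    cur_cost = best_cost = cost(current)
    temp = t0
    for _ in range(steps):
        i, j = random.sample(range(len(current)), 2)
        candidate = current[:]
        candidate[i], candidate[j] = candidate[j], candidate[i]  # swap neighbourhood
        delta = cost(candidate) - cur_cost
        # Accept improvements always, worsenings with a temperature-dependent probability.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current, cur_cost = candidate, cur_cost + delta
            if cur_cost < best_cost:
                best, best_cost = current[:], cur_cost
        temp *= cooling
    return best, best_cost

# Toy cost: penalise extracting high-grade (hypothetical) blocks late in the sequence.
grades = dict(zip("ABCDEF", [0.9, 0.2, 0.7, 0.4, 0.8, 0.1]))
toy_cost = lambda seq: sum(pos * grades[b] for pos, b in enumerate(seq))
print(anneal(list("ABCDEF"), toy_cost))
```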
Abstract:
This paper proposes a new multi-stage mine production timetabling (MMPT) model to optimise open-pit mine production operations, including drilling, blasting and excavating, under real-time mining constraints. The MMPT problem is formulated as a mixed integer programming model and can be solved optimally for small MMPT instances by IBM ILOG CPLEX. Due to NP-hardness, an improved shifting-bottleneck-procedure algorithm based on an extended disjunctive graph is developed to solve large MMPT instances effectively and efficiently. Extensive computational experiments are presented to validate the proposed algorithm, which is able to efficiently obtain a near-optimal operational timetable for the mining equipment units. Its advantages are indicated by sensitivity analysis under various real-life scenarios. The proposed MMPT methodology is a promising tool for the mining industry, because it is straightforwardly modelled as a standard scheduling problem, efficiently solved by the heuristic algorithm, and flexibly extended by adopting additional industrial constraints.
Abstract:
Incursions of plant pests and diseases pose serious threats to food security, agricultural productivity and the natural environment. One of the challenges in confidently delimiting and eradicating incursions is how to choose from an arsenal of surveillance and quarantine approaches in order to best control multiple dispersal pathways. Anthropogenic spread (propagules carried on humans or transported on produce or equipment) can be controlled with quarantine measures, which in turn can vary in intensity. In contrast, environmental spread processes are more difficult to control, but often have a temporal signal (e.g. seasonality) which can introduce both challenges and opportunities for surveillance and control. This leads to complex decisions regarding when, where and how to search. Recent modelling investigations of surveillance performance have optimised the output of simulation models, and found that a risk-weighted randomised search can perform close to optimally. However, exactly how quarantine and surveillance strategies should change to reflect different dispersal modes remains largely unaddressed. Here we develop a spatial simulation model of a plant fungal-pathogen incursion into an agricultural region, and its subsequent surveillance and control. We include structural differences in dispersal via the interplay of biological, environmental and anthropogenic connectivity between host sites (farms). Our objective was to gain broad insights into the relative roles played by different spread modes in propagating an invasion, and how incorporating knowledge of these spread risks may improve approaches to quarantine restrictions and surveillance. We find that broad heuristic rules for quarantine restrictions fail to contain the pathogen due to residual connectivity between sites, but surveillance measures enable early detection and successfully lead to suppression of the pathogen in all farms. Alternative surveillance strategies attain similar levels of performance by incorporating environmental or anthropogenic dispersal risk in the prioritisation of sites. Our model provides the basis to develop essential insights into the effectiveness of different surveillance and quarantine decisions for fungal pathogen control. Parameterised for authentic settings it will aid our understanding of how the extent and resolution of interventions should suitably reflect the spatial structure of dispersal processes.
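A minimal sketch of the risk-weighted randomised surveillance idea mentioned above: farms are sampled for inspection with probability proportional to an assumed dispersal-risk score. The farm names, risk scores and number of surveyed sites are hypothetical, and the spatial spread model itself is not reproduced here.

```python
import random

def choose_survey_sites(risk, n_sites):
    """Sample sites to survey, without replacement, with probability proportional to risk."""
    sites, weights = list(risk), [risk[s] for s in risk]
    chosen = []
    for _ in range(min(n_sites, len(sites))):
        pick = random.choices(range(len(sites)), weights=weights, k=1)[0]
        chosen.append(sites.pop(pick))
        weights.pop(pick)
    return chosen

# Hypothetical per-farm risk scores combining environmental and anthropogenic connectivity.
risk = {"farm_a": 0.9, "farm_b": 0.2, "farm_c": 0.5, "farm_d": 0.05, "farm_e": 0.7}
print(choose_survey_sites(risk, n_sites=2))
```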