895 results for Anchoring heuristic
Abstract:
This paper describes a new strategy for blind equalization that allows the blind Constant Modulus Algorithm (CMA) to be switched smoothly to decision-directed (DD) equalization. First, we propose a combination approach that runs the CMA and DD equalization simultaneously to obtain a smooth switch between them. We then describe an "anchoring process" that eliminates the effect of the CMA at steady state to achieve low residual noise. The overall equalization can be regarded as DD equalization anchored by the combination approach. Numerical simulations are given to verify the proposed strategy.
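The blended update described above can be illustrated in outline. This is a minimal sketch, not the authors' exact scheme: the mixing weight `lam`, the QPSK decision slicer, and the step size `mu` are all illustrative assumptions.

```python
import numpy as np

def combined_equalizer_step(w, x, lam, R2=1.0, mu=1e-3):
    """One tap update blending CMA and decision-directed (DD) errors.

    `lam` is a hypothetical mixing weight: 0 gives pure CMA, 1 pure DD.
    """
    y = np.vdot(w, x)                                 # equalizer output w^H x
    e_cma = y * (np.abs(y) ** 2 - R2)                 # constant-modulus error
    d = (np.sign(y.real) + 1j * np.sign(y.imag)) / np.sqrt(2)  # QPSK decision
    e_dd = y - d                                      # decision-directed error
    e = (1.0 - lam) * e_cma + lam * e_dd              # smooth blend of the two
    return w - mu * np.conj(e) * x                    # LMS-style tap update
```

Ramping `lam` from 0 toward 1 as decisions become reliable gives a smooth CMA-to-DD hand-over; on one plausible reading, removing the residual CMA term entirely at steady state (i.e., `lam = 1`) plays the role of the anchoring step.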
Abstract:
We introduce and describe the Multiple Gravity Assist problem, a global optimisation problem that is of great interest in the design of spacecraft and their trajectories. We discuss its formalisation and we show, in one particular problem instance, the performance of selected state-of-the-art heuristic global optimisation algorithms. A deterministic search-space pruning algorithm is then developed and its polynomial time and space complexity derived. The algorithm is shown to achieve search-space reductions of more than six orders of magnitude, thus significantly reducing the complexity of the subsequent optimisation.
Abstract:
While search is normally modelled by economists purely in terms of decisions over whether to make observations, this paper models it as a process in which information is gained through feedback from innovatory product launches. The information gained can then be used to decide whether to exercise real options. In the model the initial decisions involve a product design and the scale of production capacity. There are then real options to change these factors based on what is learned. The case of launching product variants in parallel is also considered. Under ‘true’ uncertainty, the model can be seen in terms of heuristic decision-making based on subjective beliefs with limited foresight. Search costs, the values of the real options, beliefs, and the cost of capital are all shown to be significant in determining the search path.
Abstract:
Radial basis functions can be combined into a network structure that has several advantages over conventional neural network solutions. However, to operate effectively the number and positions of the basis function centres must be carefully selected. Although no rigorous algorithm exists for this purpose, several heuristic methods have been suggested. In this paper a new method is proposed in which radial basis function centres are selected by the mean-tracking clustering algorithm. The mean-tracking algorithm is compared with k-means clustering and it is shown that it achieves significantly better results in terms of radial basis function performance. As well as being computationally simpler, the mean-tracking algorithm in general selects better centre positions, thus providing the radial basis functions with better modelling accuracy.
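To make the setup concrete, here is a sketch of the standard baseline the abstract compares against: k-means centre selection followed by a least-squares fit of the RBF output weights. The mean-tracking algorithm itself is not reproduced here; the Gaussian width and iteration count are illustrative assumptions.

```python
import numpy as np

def kmeans_centres(X, k, iters=20, seed=0):
    """Plain k-means centre selection -- the baseline against which the
    mean-tracking algorithm is compared; not the paper's method."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - C[None], axis=2)  # point-centre distances
        labels = d.argmin(axis=1)                         # nearest centre
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)        # move centre to cluster mean
    return C

def rbf_fit(X, y, C, width=1.0):
    """Fit the output-layer weights of a Gaussian RBF network by least squares."""
    d = np.linalg.norm(X[:, None] - C[None], axis=2)
    Phi = np.exp(-(d / width) ** 2)                       # basis-function responses
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w
```

Because the output layer is linear in the weights, the fit reduces to one least-squares solve once the centres are fixed, which is why centre quality dominates modelling accuracy.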
Abstract:
The use of expert system techniques in power distribution system design is examined. The selection and siting of equipment on overhead line networks is chosen for investigation as the use of equipment such as auto-reclosers, etc., represents a substantial investment and has a significant effect on the reliability of the system. Through past experience with both equipment and network operations, most decisions in selection and siting of this equipment are made intuitively, following certain general guidelines or rules of thumb. This heuristic nature of the problem lends itself to solution using an expert system approach. A prototype has been developed and is currently under evaluation in the industry. Results so far have demonstrated both the feasibility and benefits of the expert system as a design aid.
Abstract:
We report the single-crystal X-ray structure for the complex of the bisacridine bis-(9-aminooctyl(2-(dimethylaminoethyl)acridine-4-carboxamide)) with the oligonucleotide d(CGTACG)₂ to a resolution of 2.4 Å. Solution studies with closed circular DNA show this compound to be a bisintercalating threading agent, but so far we have no crystallographic or NMR structural data conforming to the model of contiguous intercalation within the same duplex. Here, with the hexameric duplex d(CGTACG), the DNA is observed to undergo a terminal cytosine base exchange to yield an unusual guanine quadruplex intercalation site through which the bisacridine threads its octamethylene linker to fuse two DNA duplexes. The 4-carboxamide side-chains form anchoring hydrogen-bonding interactions with guanine O6 atoms on each side of the quadruplex. This higher-order DNA structure provides insight into an unexpected property of bisintercalating threading agents, and suggests the idea of targeting such compounds specifically at four-way DNA junctions.
Abstract:
In this contribution we aim at anchoring Agent-Based Modeling (ABM) simulations in actual models of human psychology. More specifically, we apply unidirectional ABM to social psychological models using low-level agents (i.e., intra-individual) to examine whether they generate better predictions, in comparison to standard statistical approaches, concerning intentions to perform a behavior and the behavior itself. Moreover, this contribution tests to what extent the predictive validity of models of attitude such as the Theory of Planned Behavior (TPB) or the Model of Goal-directed Behavior (MGB) depends on the assumption that people's decisions and actions are purely rational. Simulations were therefore run by considering different deviations from rationality of the agents with a trembling-hand method. Two data sets, concerning respectively the consumption of soft drinks and physical activity, were used. Three key findings emerged from the simulations. First, compared to the standard statistical approach, agent-based simulation generally improves the prediction of behavior from intention. Second, the improvement in prediction is inversely proportional to the complexity of the underlying theoretical model. Finally, introducing varying degrees of deviation from rationality in agents' behavior can improve the goodness of fit of the simulations. By demonstrating the potential of ABM as a complementary perspective for evaluating social psychological models, this contribution underlines the necessity of better defining agents in terms of psychological processes before examining higher levels such as interactions between individuals.
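The trembling-hand mechanism can be sketched in a few lines. The linear intention rule, its weights, and the 0.5 action threshold are illustrative assumptions, not the TPB/MGB parameterisation estimated from the paper's data.

```python
import random

def tpb_intention(attitude, norm, control, w=(0.4, 0.3, 0.3)):
    """Hypothetical linear TPB-style intention score in [0, 1];
    the weights are illustrative, not fitted values."""
    return w[0] * attitude + w[1] * norm + w[2] * control

def act(intention, epsilon, rng):
    """Trembling-hand choice: act on intention, except that with
    probability `epsilon` the agent deviates at random."""
    if rng.random() < epsilon:
        return rng.random() < 0.5          # random deviation from rationality
    return intention > 0.5                 # 'rational' threshold rule
```

Sweeping `epsilon` over a population of such agents and comparing simulated behavior to observed behavior is the kind of exercise the abstract describes: some non-zero deviation from rationality can fit the data better than a fully rational rule.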
Abstract:
Proponents of the “fast and frugal” approach to decision-making suggest that inferential judgments are best made on the basis of limited information. For example, if only one of two cities is recognized and the task is to judge which city has the larger population, the recognition heuristic states that the recognized city should be selected. In preference choices with more than two options, it is also standard to assume that a “consideration set”, based upon some simple criterion, is established to reduce the options available. A multinomial processing tree model is outlined which provides the basis for estimating the extent to which recognition is used as a criterion in establishing a consideration set for inferential judgments.
Abstract:
This project is concerned with the way that illustrations, photographs, diagrams and graphs, and typographic elements interact to convey ideas on the book page. A framework for graphic description is proposed to elucidate this graphic language of ‘complex texts’. The model is built up from three main areas of study, with reference to a corpus of contemporary children’s science books. First, a historical survey puts the subjects for study in context. Then a multidisciplinary discussion of graphic communication provides a theoretical underpinning for the model; this leads to various proposals, such as the central importance of ratios and relationships among parts in creating meaning in graphic communication. Lastly, a series of trials in description contributes to the structure of the model itself. At the heart of the framework is an organising principle that integrates descriptive models from the fields of design, literary criticism, art history, and linguistics, among others, as well as novel categories designed specifically for book design. Broadly, design features are described in terms of elemental component parts (micro-level), larger groupings of these (macro-level), and finally in terms of overarching, ‘whole book’ qualities (meta-level). Various features of book design emerge at different levels; for instance, the presence of nested discursive structures, a form of graphic recursion in editorial design, is proposed at the macro-level. Across these three levels are the intersecting categories of ‘rule’ and ‘context’, offering different perspectives with which to describe graphic characteristics. Context-based features are contingent on social and cultural environment, the reader’s previous knowledge, and the actual conditions of reading; rule-based features relate to the systematic or codified aspects of graphic language.
The model aims to be a frame of reference for graphic description, of use in different forms of qualitative or quantitative research and as a heuristic tool in practice and teaching.
Abstract:
Background: A whole-genome genotyping array has previously been developed for Malus using SNP data from 28 Malus genotypes. This array offers the prospect of high-throughput genotyping and linkage map development for any given Malus progeny. To test the applicability of the array for mapping in diverse Malus genotypes, we applied it to the construction of a SNP-based linkage map of an apple rootstock progeny.
Results: Of the 7,867 Malus SNP markers on the array, 1,823 (23.2 %) were heterozygous in one of the two parents of the progeny, 1,007 (12.8 %) were heterozygous in both parental genotypes, whilst just 2.8 % of the 921 Pyrus SNPs were heterozygous. A linkage map spanning 1,282.2 cM was produced, comprising 2,272 SNP markers, 306 SSR markers and the S-locus. The length of the M432 linkage map was increased by 52.7 cM with the addition of the SNP markers, whilst marker density increased from 3.8 cM/marker to 0.5 cM/marker. Just three regions in excess of 10 cM remain where no markers were mapped. We compared the positions of the mapped SNP markers on the M432 map with their predicted positions on the ‘Golden Delicious’ genome sequence. A total of 311 markers (13.7 % of all mapped markers) mapped to positions that conflicted with their predicted positions on the ‘Golden Delicious’ pseudo-chromosomes, indicating the presence of paralogous genomic regions or misassignments of genome sequence contigs during the assembly and anchoring of the genome sequence.
Conclusions: We incorporated data for the 2,272 SNP markers onto the map of the M432 progeny and have presented the most complete and saturated map of the full 17 linkage groups of M. pumila to date. The data were generated rapidly in a high-throughput semi-automated pipeline, permitting significant savings in time and cost over linkage map construction using microsatellites.
The application of the array will permit linkage maps to be developed for QTL analyses in a cost-effective manner, and the identification of SNPs that have been assigned erroneous positions on the ‘Golden Delicious’ reference sequence will assist in the continued improvement of the genome sequence assembly for that variety.
Abstract:
We review and structure some of the mathematical and statistical models that have been developed over the past half century to grapple with theoretical and experimental questions about the stochastic development of aging over the life course. We suggest that the mathematical models are in large part addressing the problem of partitioning the randomness in aging: How does aging vary between individuals, and within an individual over the life course? How much of the variation is inherently related to some qualities of the individual, and how much is entirely random? How much of the randomness is cumulative, and how much is merely short-term flutter? We propose that recent lines of statistical inquiry in survival analysis could usefully grapple with these questions, all the more so if they were more explicitly linked to the relevant mathematical and biological models of aging. To this end, we describe points of contact among the various lines of mathematical and statistical research. We suggest some directions for future work, including the exploration of information-theoretic measures for evaluating components of stochastic models as the basis for analyzing experiments and anchoring theoretical discussions of aging.
Abstract:
It has long been supposed that preference judgments between sets of to-be-considered possibilities are made by means of initially winnowing down the most promising-looking alternatives to form smaller “consideration sets” (Howard, 1963; Wright & Barbour, 1977). In preference choices with more than two options, it is standard to assume that a “consideration set”, based upon some simple criterion, is established to reduce the options available. Inferential judgments, in contrast, have more frequently been investigated in situations in which only two possibilities need to be considered (e.g., which of these two cities is the larger?). Proponents of the “fast and frugal” approach to decision-making suggest that such judgments are also made on the basis of limited, simple criteria. For example, if only one of two cities is recognized and the task is to judge which city has the larger population, the recognition heuristic states that the recognized city should be selected. A multinomial processing tree model is outlined which provides the basis for estimating the extent to which recognition is used as a criterion in establishing a consideration set for inferential judgments between three possible options.
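The recognition-based consideration set discussed above can be sketched in a few lines. This is not the multinomial processing tree model itself, only the recognition criterion it is designed to measure; the knowledge scores used to choose within the set are a hypothetical stand-in for further cues.

```python
def consideration_set(options, recognized):
    """Winnow options down to the recognized ones -- a minimal sketch of a
    recognition-based consideration-set criterion."""
    considered = [o for o in options if o in recognized]
    return considered or list(options)     # fall back to all if none recognized

def recognition_choice(options, recognized, knowledge):
    """Choose within the consideration set using further knowledge scores
    (a hypothetical dict of cue values) to discriminate."""
    cs = consideration_set(options, recognized)
    return max(cs, key=lambda o: knowledge.get(o, 0))
```

With three options, the split between "recognition forms the consideration set" and "recognition is just one cue among others" is exactly the kind of parameter a processing tree model can estimate from choice frequencies.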
Abstract:
A nonlinear symmetric stability theorem is derived in the context of the f-plane Boussinesq equations, recovering an earlier result of Xu within a more general framework. The theorem applies to symmetric disturbances to a baroclinic basic flow, the disturbances having arbitrary structure and magnitude. The criteria for nonlinear stability are virtually identical to those for linear stability. As in Xu, the nonlinear stability theorem can be used to obtain rigorous upper bounds on the saturation amplitude of symmetric instabilities. In a simple example, the bounds are found to compare favorably with heuristic parcel-based estimates in both the hydrostatic and non-hydrostatic limits.
Abstract:
This paper argues for the use of ‘fractals’ in theorising sociospatial relations. From a realist position, a nonmathematical but nonmetaphoric and descriptive view of ‘fractals’ is advanced. Insights from the natural sciences are combined with insights on the position of the observer from Luhmann and notions of assemblages and repetitions from Deleuze. It is argued that the notion of ‘fractals’ can augment current understanding of sociospatialities in three ways. First, it can pose questions about the scalar position of the observer or the grain of observation; second, as a signifier of particular attributes, it prompts observation and description of particular structuring processes; and third, the epistemic access afforded by the concept can open up possibilities for transformative interventions and thereby inform the same. The theoretical usefulness of the concept is demonstrated by discussing the territory, place, scale, and networks (TPSN) model for theorising sociospatial relations advanced by B Jessop, N Brenner, and M Jones in their 2008 paper “Theorizing sociospatial relations”, published in this journal (volume 26, pages 389–401). It is suggested that a heuristic arising from a ‘fractal’ ontology can contribute to a polymorphous, as opposed to polyvalent, understanding of sociospatial relations.
Abstract:
Rigorous upper bounds are derived on the saturation amplitude of baroclinic instability in the two-layer model. The bounds apply to the eddy energy and are obtained by appealing to a finite amplitude conservation law for the disturbance pseudoenergy. These bounds are to be distinguished from those derived in Part I of this study, which employed a pseudomomentum conservation law and provided bounds on the eddy potential enstrophy. The bounds apply to conservative (inviscid, unforced) flow, as well as to forced-dissipative flow when the dissipation is proportional to the potential vorticity. Bounds on the eddy energy are worked out for a general class of unstable westerly jets. In the special case of the Phillips model of baroclinic instability, and in the limit of infinitesimal initial eddy amplitude, the bound states that the eddy energy cannot exceed ϵβ²/6F, where ϵ = (U − Ucrit)/Ucrit is the relative supercriticality. This bound captures the essential dynamical scalings (i.e., the dependence on ϵ, β, and F) of the saturation amplitudes predicted by weakly nonlinear theory, as well as exhibiting remarkable quantitative agreement with those predictions, and is also consistent with heuristic baroclinic adjustment estimates.
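Restated in display form, with the abstract's own symbols and E denoting the eddy energy:

```latex
E \;\le\; \frac{\epsilon\,\beta^{2}}{6F},
\qquad
\epsilon \;=\; \frac{U - U_{\mathrm{crit}}}{U_{\mathrm{crit}}} .
```

Here β and F are the planetary vorticity gradient and rotational Froude number of the two-layer model, and the bound applies in the Phillips-model, infinitesimal-initial-amplitude limit described in the abstract.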