562 results for Heuristics


Relevance:

10.00%

Publisher:

Abstract:

We introduce a new algorithm to automatically identify the time and pixel location of foot contact events in high-speed video of sprinters. We use this information to autonomously synchronise and overlay multiple recorded performances to provide feedback to athletes and coaches during their training sessions. The algorithm exploits the variation in speed of different parts of the body during sprinting. We use an array of foreground accumulators to identify short-term static pixels and a temporal analysis of the associated static regions to identify foot contacts. We evaluated the technique using 13 videos of three sprinters. It successfully identified 55 of the 56 contacts, with a mean localisation error of 1.39±1.05 pixels. Some videos were also seen to produce additional, spurious contacts. We present heuristics to help identify the true contacts. © 2011 Springer-Verlag Berlin Heidelberg.
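
To make the accumulator idea concrete, here is a minimal sketch in Python, assuming grayscale frames as NumPy arrays; the function names, thresholds, and dwell window are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def update_accumulator(acc, prev_frame, frame, motion_thresh=10, max_count=255):
    """Increment the accumulator wherever a pixel barely changed between
    consecutive frames; reset it to zero wherever motion occurred."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    static = diff < motion_thresh
    return np.where(static, np.minimum(acc + 1, max_count), 0)

def candidate_contact_pixels(acc, min_dwell=5, max_dwell=30):
    """Short-term static pixels: stationary long enough to be a planted
    foot, but not so long that they belong to the static background."""
    return (acc >= min_dwell) & (acc <= max_dwell)
```

The dwell window is what separates a briefly planted foot from permanently static background, which is the distinction the abstract's temporal analysis exploits.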

Relevance:

10.00%

Publisher:

Abstract:

Many transductive inference algorithms assume that distributions over training and test estimates should be related, e.g., by providing a large margin of separation on both sets. We use this idea to design a transduction algorithm which can be used without modification for classification, regression, and structured estimation. At its heart we exploit the fact that, for a good learner, the distributions over the outputs on training and test sets should match. This is a classical two-sample problem, which can be solved efficiently in its most general form by using distance measures in Hilbert space. It turns out that a number of existing heuristics can be viewed as special cases of our approach.
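
A minimal sketch of the two-sample idea, assuming one-dimensional outputs and an RBF kernel: the squared maximum mean discrepancy, a standard Hilbert-space distance between distributions, compares outputs on the training and test sets. The kernel choice and bandwidth are my assumptions; the paper's formulation is more general.

```python
import numpy as np

def rbf_kernel(a, b, bandwidth=1.0):
    """Gaussian kernel matrix between two 1-D samples."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd2(train_out, test_out, bandwidth=1.0):
    """Squared maximum mean discrepancy between the two output samples;
    small values mean the output distributions match."""
    kxx = rbf_kernel(train_out, train_out, bandwidth)
    kyy = rbf_kernel(test_out, test_out, bandwidth)
    kxy = rbf_kernel(train_out, test_out, bandwidth)
    return kxx.mean() + kyy.mean() - 2 * kxy.mean()
```

A transductive learner in this spirit would penalise hypotheses whose test-set outputs make this statistic large.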

Relevance:

10.00%

Publisher:

Abstract:

We propose a Newton-like iteration that evolves on the set of fixed-dimensional subspaces of ℝⁿ and converges locally cubically to the invariant subspaces of a symmetric matrix. This iteration is compared, in terms of numerical cost and global behavior, with three other methods that display the same property of cubic convergence. Moreover, we consider heuristics that greatly improve the global behavior of the iterations.
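
As a sketch of what such an iteration can look like, here is one standard Newton step on the set of p-dimensional subspaces (the Grassmann manifold) for a symmetric matrix, written with NumPy; this is a textbook variant consistent with the abstract's setting, not necessarily the paper's exact iteration or its globalisation heuristics.

```python
import numpy as np

def newton_subspace_step(A, X):
    """One Newton-type step toward an invariant subspace of symmetric A;
    X is an n-by-p matrix with orthonormal columns spanning the current
    subspace estimate."""
    n, p = X.shape
    P = np.eye(n) - X @ X.T        # projector onto the complement of span(X)
    B = X.T @ A @ X                # small p-by-p Rayleigh-quotient block
    # Newton equation on the Grassmannian: solve P(A Z - Z B) = -P A X for Z,
    # here via a dense Kronecker formulation (fine for the small n of a sketch).
    M = np.kron(np.eye(p), P @ A) - np.kron(B.T, P)
    rhs = -(P @ A @ X).reshape(-1, order="F")
    Z = np.linalg.lstsq(M, rhs, rcond=None)[0].reshape((n, p), order="F")
    Q, _ = np.linalg.qr(X + Z)     # retract the update back to an orthonormal basis
    return Q
```

Iterating this step drives the residual P A X to zero, i.e. span(X) toward an invariant subspace of A.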

Relevance:

10.00%

Publisher:

Abstract:

Spatial normalisation is a key element of statistical parametric mapping and related techniques for analysing cohort statistics on voxel arrays and surfaces. The normalisation process involves aligning each individual specimen to a template using some sort of registration algorithm. Any misregistration will result in data being mapped onto the template at the wrong location. At best, this will introduce spatial imprecision into the subsequent statistical analysis. At worst, when the misregistration varies systematically with a covariate of interest, it may lead to false statistical inference. Since misregistration generally depends on the specimen's shape, we investigate here the effect of allowing for shape as a confound in the statistical analysis, with shape represented by the dominant modes of variation observed in the cohort. In a series of experiments on synthetic surface data, we demonstrate how allowing for shape can reveal true effects that were previously masked by systematic misregistration, and also guard against misinterpreting systematic misregistration as a true effect. We introduce some heuristics for disentangling misregistration effects from true effects, and demonstrate the approach's practical utility in a case study of the cortical bone distribution in 268 human femurs.
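
A minimal sketch of the "shape as confound" idea: the dominant modes of shape variation in the cohort (here obtained by PCA of per-subject shape vectors) enter a per-vertex linear model alongside the covariate of interest. The variable names, the plain least-squares fit, and the choice of three modes are illustrative assumptions.

```python
import numpy as np

def fit_with_shape_confound(values, covariate, shapes, n_modes=3):
    """values: (n_subjects, n_vertices) data mapped onto the template;
    covariate: (n_subjects,) covariate of interest;
    shapes: (n_subjects, shape_dim) registration-derived shape vectors."""
    centred = shapes - shapes.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    modes = centred @ vt[:n_modes].T     # scores on the dominant shape modes
    X = np.column_stack([np.ones(len(covariate)), covariate, modes])
    beta, *_ = np.linalg.lstsq(X, values, rcond=None)
    return beta[1]                       # per-vertex effect of the covariate
```

Comparing this fit with one that omits the shape columns is one way to probe whether an apparent effect is driven by systematic misregistration.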

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: Central venous catheterisation (CVC) has occasionally been associated with cases of retained guidewires in patients after surgery. In theory, this is a completely avoidable complication; however, as with any human procedure, operator error leading to guidewires occasionally being retained cannot be fully eliminated. OBJECTIVE: The work described here investigated the issue in an attempt to better understand it from both an operator and a systems perspective, and to ultimately recommend appropriate safe design solutions that reduce guidewire retention errors. METHODS: Nine distinct methods were used: observations of the procedure, a literature review, interviews with CVC end-users, task analysis construction, CVC procedural audits, two human reliability assessments, usability heuristics, and a comprehensive solution survey with CVC end-users. RESULTS: The three solutions that operators rated most highly, in terms of both practicality and effectiveness, were: making trainees better aware of potential guidewire complications and strongly emphasising guidewire removal in CVC training; actively checking that the guidewire is present in the waste tray for disposal; and standardising the purchase of central line sets so that differences that may affect the chances of guidewire loss are minimised. CONCLUSIONS: Further work to eliminate or engineer out the possibility of guidewires being retained is proposed.

Relevance:

10.00%

Publisher:

Abstract:

New robotics is an approach to robotics that, in contrast to traditional robotics, employs ideas and principles from biology. While in the traditional approach there are generally accepted methods (e.g., from control theory), designing agents in the new robotics approach is still largely considered an art. In recent years, we have been developing a set of heuristics, or design principles, that on the one hand capture theoretical insights about intelligent (adaptive) behavior, and on the other provide guidance in actually designing and building systems. In this article we provide an overview of all the principles, but focus on the principles of ecological balance, which concerns the relation between environment, morphology, materials, and control, and sensory-motor coordination, which concerns self-generated sensory stimulation as the agent interacts with the environment and which is a key to the development of high-level intelligence. As we argue, artificial evolution together with morphogenesis is not only "nice to have" but is in fact a necessary tool for designing embodied agents.

Relevance:

10.00%

Publisher:

Abstract:

Combinatorial testing is an important testing method. It requires the test cases to cover various combinations of the parameters of the system under test. The test generation problem for combinatorial testing can be modeled as constructing a matrix that has certain properties. This paper first discusses two combinatorial testing criteria, covering arrays and orthogonal arrays, and then proposes a backtracking search algorithm to construct matrices satisfying them. Several search heuristics and symmetry-breaking techniques are used to reduce the search time. This paper also introduces some techniques to generate large covering array instances from smaller ones. All the techniques have been implemented in a tool called EXACT (EXhaustive seArch of Combinatorial Test suites). A new optimal covering array is found by this tool.
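
For concreteness, here is a sketch of the strength-2 covering-array property that such a backtracking search must certify, together with a classical optimal instance; the checker is my own illustration, not code from EXACT.

```python
from itertools import combinations

def is_covering_array(matrix, levels, strength=2):
    """matrix: list of rows (test cases); levels: values per parameter.
    True iff every choice of `strength` columns shows every value tuple."""
    k = len(matrix[0])
    for cols in combinations(range(k), strength):
        seen = {tuple(row[c] for c in cols) for row in matrix}
        if len(seen) < levels ** strength:
            return False
    return True

# CA(4; 2, 3, 2): four tests cover all value pairs over three binary parameters.
ca = [[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]]
assert is_covering_array(ca, levels=2)
```

A backtracking generator fills the matrix cell by cell and prunes any partial assignment that can no longer satisfy this property.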

Relevance:

10.00%

Publisher:

Abstract:

For multi-product batch production, an integrated production planning and scheduling optimisation model with scheduling constraints is established. The objective function minimises the sum of total setup, inventory, and production costs; the constraints include inventory balance and production capacity constraints, together with scheduling constraints, namely operation precedence constraints and single-machine processing capacity constraints on jobs, which guarantee the feasibility of the plan. The model is a two-level mixed integer programming model, and a hybrid heuristic algorithm combining a genetic algorithm with heuristic rules is proposed to solve it. Finally, the approach is applied to a multi-product batch production shop of a machine tool plant: the shop's monthly part production plan is decomposed into weekly part plans for each work-section cell, determining the weekly production lot sizes and release sequence for the parts.
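
A standard capacitated lot-sizing objective of this form is sketched below in LaTeX, with setup, inventory, and production costs, inventory balance, and capacity constraints; the symbols are introduced here for illustration and are consistent with, but not taken from, the paper's model.

```latex
% Sketch: x = lot size, y = setup indicator, I = inventory, d = demand,
% a = unit capacity use, C = capacity per period, M = a big-M constant.
\min \sum_{i}\sum_{t}\bigl(s_i\,y_{i,t} + h_i\,I_{i,t} + c_i\,x_{i,t}\bigr)
\quad\text{s.t.}\quad
I_{i,t-1} + x_{i,t} - d_{i,t} = I_{i,t},\qquad
\sum_i a_i\,x_{i,t} \le C_t,\qquad
x_{i,t} \le M\,y_{i,t},\; y_{i,t}\in\{0,1\}.
```

In the paper's two-level scheme, a model of roughly this shape sets the lot sizes, while the scheduling level checks precedence and single-machine feasibility.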

Relevance:

10.00%

Publisher:

Abstract:

Problem solving is one of the basic processes of human cognition, and heuristic strategy is the key to human problem solving; hence the study of heuristic strategies is of great importance in cognitive psychology. Current studies on heuristics in problem solving may be summarized as follows: the nature and structure of heuristics, problem structure and representation, expert knowledge and expert intuition, the nature and role of imagery, and social cognition and social learning. The present study deals with the nature and structure of heuristics. The Solitaire problem was used in our experiments. Both traditional experimental methods and computer simulation were used to study the nature and structure of heuristics. Through a series of experiments, the knowledge involved in Solitaire problem solving was summarized, its metastrategy was worked out, and the metastrategy was then tested by computer simulation and experimental verification.

Relevance:

10.00%

Publisher:

Abstract:

Affine transformations are often used in recognition systems to approximate the effects of perspective projection. The underlying mathematics assumes exact feature data, with no positional uncertainty. In practice, heuristics are added to handle uncertainty. We provide a precise analysis of affine point matching, obtaining an expression for the range of affine-invariant values consistent with bounded uncertainty. This analysis reveals that the range of affine-invariant values depends on the actual x-y positions of the features; i.e., with uncertainty, affine representations are not invariant with respect to the Cartesian coordinate system. We analyze the effect of this on geometric hashing and alignment recognition methods.
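
The following sketch illustrates the phenomenon: it computes affine-invariant coordinates of a point relative to three basis features, then measures by Monte Carlo how the spread of invariant values under the same bounded noise changes when the whole configuration is scaled up. The noise model and numbers are mine, not the paper's.

```python
import numpy as np

def affine_coords(p, b0, b1, b2):
    """Coordinates (alpha, beta) with p = b0 + alpha*(b1-b0) + beta*(b2-b0)."""
    M = np.column_stack([b1 - b0, b2 - b0])
    return np.linalg.solve(M, p - b0)

rng = np.random.default_rng(0)

def invariant_spread(scale, eps=0.1, trials=2000):
    """Spread of the invariants when every feature is perturbed by the same
    bounded noise; the same relative configuration at two scales differs."""
    b0 = np.array([0.0, 0.0]) * scale
    b1 = np.array([1.0, 0.0]) * scale
    b2 = np.array([0.0, 1.0]) * scale
    p = np.array([0.3, 0.4]) * scale   # identical invariant coords at any scale
    noise = lambda: rng.uniform(-eps, eps, 2)
    samples = np.array([affine_coords(p + noise(), b0 + noise(),
                                      b1 + noise(), b2 + noise())
                        for _ in range(trials)])
    return np.ptp(samples, axis=0)

print(invariant_spread(1.0), invariant_spread(10.0))  # larger layout, tighter spread
```

Two configurations with identical invariant coordinates but different absolute layouts thus produce different uncertainty ranges, which is exactly what breaks naive geometric hashing.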

Relevance:

10.00%

Publisher:

Abstract:

I have invented "Internet Fish," a novel class of resource-discovery tools designed to help users extract useful information from the Internet. Internet Fish (IFish) are semi-autonomous, persistent information brokers; users deploy individual IFish to gather and refine information related to a particular topic. An IFish will initiate research, continue to discover new sources of information, and keep tabs on new developments in that topic. As part of the information-gathering process, the user interacts with his IFish to find out what it has learned, answer questions it has posed, and make suggestions for guidance. Internet Fish differ from other Internet resource-discovery systems in that they are persistent, personal, and dynamic. As part of the information-gathering process, IFish conduct extended, long-term conversations with users as they explore. They incorporate deep structural knowledge of the organization and services of the net, and are also capable of on-the-fly reconfiguration, modification, and expansion. Human users may dynamically change an IFish in response to changes in the environment, or an IFish may initiate such changes itself. An IFish maintains internal state, including models of its own structure, behavior, information environment, and user; these models permit it to perform meta-level reasoning about its own structure. To facilitate rapid assembly of particular IFish, I have created the Internet Fish Construction Kit. This system provides enabling technology for the entire class of Internet Fish tools; it facilitates both the creation of new IFish and the addition of new capabilities to existing ones. The Construction Kit includes a collection of encapsulated heuristic knowledge modules that may be combined in mix-and-match fashion to create a particular IFish; interfaces to new services written with the Construction Kit may be immediately added to "live" IFish. Using the Construction Kit, I have created a demonstration IFish specialized for finding World-Wide Web documents related to a given group of documents. This "Finder" IFish includes heuristics that describe how to interact with the Web in general, explain how to take advantage of various public indexes and classification schemes, and provide a method for discovering similarity relationships among documents.
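
A toy sketch of the Construction Kit's mix-and-match composition, with module names and interfaces invented for illustration (the kit's actual API is not documented in this abstract):

```python
# Registry of encapsulated heuristic modules; an "IFish" here is simply
# the composition of the heuristics chosen for it. All names are made up.
MODULES = {}

def module(name):
    def register(fn):
        MODULES[name] = fn
        return fn
    return register

@module("similar-documents")
def similar_documents(state, item):
    """Heuristic: propose documents related to one the user cared about."""
    return [f"related-to:{item}"]

def assemble_ifish(module_names):
    heuristics = [MODULES[n] for n in module_names]
    def run(state, item):
        leads = []
        for h in heuristics:           # each module contributes new leads
            leads.extend(h(state, item))
        return leads
    return run

finder = assemble_ifish(["similar-documents"])
print(finder({}, "doc-42"))            # ['related-to:doc-42']
```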

Relevance:

10.00%

Publisher:

Abstract:

This thesis investigates a new approach to lattice basis reduction suggested by M. Seysen. Seysen's algorithm attempts to globally reduce a lattice basis, whereas the Lenstra-Lenstra-Lovász (LLL) family of reduction algorithms concentrates on local reductions. We show that Seysen's algorithm is well suited for reducing certain classes of lattice bases, and often requires much less time in practice than the LLL algorithm. We also demonstrate how Seysen's algorithm for basis reduction may be applied to subset sum problems. Seysen's technique, used in combination with the LLL algorithm and other heuristics, enables us to solve a much larger class of subset sum problems than was previously possible.
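
The route from subset sum to basis reduction can be sketched in the standard Lagarias-Odlyzko style: encode the instance as a lattice basis, reduce it (with Seysen's algorithm, LLL, or both), and look for a short vector whose 0/1 pattern selects the solution subset. The encoding below is the textbook construction; the thesis's exact embedding may differ.

```python
import numpy as np

def subset_sum_basis(weights, target):
    """Rows are basis vectors: the identity part tags which weights are
    chosen, and the last column carries the subset-sum arithmetic."""
    n = len(weights)
    B = np.eye(n + 1, dtype=np.int64)
    B[:n, n] = weights      # each weight appears in the last column
    B[n, n] = -target       # the target row can cancel a correct sum
    return B

B = subset_sum_basis([3, 5, 8], 11)
# row0 + row2 + row3 = (1, 0, 1, 0): a short lattice vector whose 0/1
# pattern picks out the solution subset {3, 8}, since 3 + 8 = 11.
```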

Relevance:

10.00%

Publisher:

Abstract:

This report describes a knowledge-base system in which the information is stored in a network of small parallel processing elements (node and link units) which are controlled by an external serial computer. This network is similar to the semantic network system of Quillian, but is much more tightly controlled. Such a network can perform certain critical deductions and searches very quickly; it avoids many of the problems of current systems, which must use complex heuristics to limit and guide their searches. It is argued (with examples) that the key operation in a knowledge-base system is the intersection of large explicit and semi-explicit sets. The parallel network system does this in a small, essentially constant number of cycles; a serial machine takes time proportional to the size of the sets, except in special cases.
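
The contrast the report draws can be sketched in a few lines: a serial machine intersects sets element by element, while the parallel network marks all members at once; a machine-word bitset AND is the closest everyday analogue of that behaviour. The example universe is invented for illustration.

```python
def serial_intersection(a, b):
    """A serial machine: time proportional to the size of the sets."""
    return {x for x in a if x in b}

def parallel_style_intersection(mask_a, mask_b):
    """One AND over bit-masks: every element is intersected 'at once'."""
    return mask_a & mask_b

universe = ["mammal", "pet", "barks", "has-fur"]      # bit i <-> universe[i]
dogs = 0b1111
cats = 0b1011
print(bin(parallel_style_intersection(dogs, cats)))   # 0b1011: shared properties
```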

Relevance:

10.00%

Publisher:

Abstract:

A computer program, named ADEPT (A Distinctly Empirical Prover of Theorems), has been written which proves theorems taken from the abstract theory of groups. Its operation is basically heuristic, incorporating many of the techniques of the human mathematician in a "natural" way. This program has proved almost 100 theorems, as well as serving as a vehicle for testing and evaluating special-purpose heuristics. A detailed description of the program is supplemented by accounts of its performance on a number of theorems, thus providing many insights into the particular problems inherent in the design of a procedure capable of proving a variety of theorems from this domain. Suggestions have been formulated for further efforts along these lines, and comparisons with related work previously reported in the literature have been made.

Relevance:

10.00%

Publisher:

Abstract:

This thesis describes some aspects of a computer system for doing medical diagnosis in the specialized field of kidney disease. Because such a system faces the spectre of combinatorial explosion, this discussion concentrates on heuristics which control the number of concurrent hypotheses and on efficient "compiled" representations of medical knowledge. In particular, the differential diagnosis of hematuria (blood in the urine) is discussed in detail. A protocol of a simulated doctor/patient interaction is presented and analyzed to determine the crucial structures and processes involved in the diagnosis procedure. The data structure proposed for representing medical information revolves around elementary hypotheses which are activated when certain findings are present. The process of disposing of findings, activating hypotheses, evaluating hypotheses locally, and combining hypotheses globally is examined for its heuristic implications. The thesis attempts to fit the problem of medical diagnosis into the framework of other Artificial Intelligence problems and paradigms, and in particular explores the notions of pure search vs. heuristic methods, linearity and interaction, local vs. global knowledge, and the structure of hypotheses within the world of kidney disease.
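
A toy sketch of the activate/evaluate cycle just described, with all findings, hypotheses, and scores invented for illustration:

```python
# Findings trigger elementary hypotheses; each active hypothesis is then
# scored locally so that weak ones can be discarded, keeping the set of
# concurrent hypotheses small. Everything here is a made-up illustration.
TRIGGERS = {"hematuria": ["glomerulonephritis", "kidney stone", "infection"]}
EVIDENCE = {"glomerulonephritis": ["hematuria", "proteinuria", "hypertension"],
            "kidney stone": ["hematuria", "flank pain"],
            "infection": ["hematuria", "fever", "dysuria"]}

def activate(findings):
    return {h for f in findings for h in TRIGGERS.get(f, [])}

def local_score(hypothesis, findings):
    """Fraction of the hypothesis's expected findings actually observed."""
    expected = EVIDENCE[hypothesis]
    return sum(f in findings for f in expected) / len(expected)

findings = {"hematuria", "flank pain"}
scores = {h: local_score(h, findings) for h in activate(findings)}
print(scores)   # 'kidney stone' scores highest on these findings
```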