Abstract:
Most animals have significant behavioral expertise built in without having to explicitly learn it all from scratch. This expertise is a product of the evolution of the organism; it can be viewed as a very long-term form of learning which provides a structured system within which individuals might learn more specialized skills or abilities. This paper suggests one possible mechanism for analogous robot evolution by describing a carefully designed series of networks, each one being a strict augmentation of the previous one, which control a six-legged walking machine capable of walking over rough terrain and following a person passively sensed in the infrared spectrum. As the completely decentralized networks are augmented, the robot's performance and behavior repertoire demonstrably improve. The rationale for such demonstrations is that they may provide a hint as to the requirements for automatically building massive networks to carry out complex sensory-motor tasks. The experiments with an actual robot ensure that an essence of reality is maintained and that no critical problems have been ignored.
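The layered, strictly augmenting control scheme lends itself to a compact illustration. Below is a minimal sketch (hypothetical class and sensor names, not the paper's networks) of the core idea: each new layer wraps the layer beneath it and overrides its output only when its own sensory condition is met, so the lower layer keeps functioning unchanged if the higher one is removed.

```python
# Hypothetical sketch of strictly augmenting layered control; the class and
# sensor names are invented for illustration, not taken from the paper.

class AvoidLayer:
    """Lowest layer: steer away from nearby obstacles."""
    def command(self, sensors):
        if sensors["obstacle_distance"] < 0.5:
            return {"turn": 1.0, "speed": 0.1}   # veer away, slow down
        return {"turn": 0.0, "speed": 0.5}       # default: walk forward

class FollowLayer:
    """Higher layer: follow an infrared target; defers to avoidance."""
    def __init__(self, lower):
        self.lower = lower                        # strict augmentation of the layer below
    def command(self, sensors):
        base = self.lower.command(sensors)
        if sensors["obstacle_distance"] >= 0.5 and sensors["ir_bearing"] is not None:
            base["turn"] = 0.8 * sensors["ir_bearing"]  # steer toward the IR target
        return base

controller = FollowLayer(AvoidLayer())
print(controller.command({"obstacle_distance": 2.0, "ir_bearing": -0.3}))
```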
Abstract:
The performance of a randomized version of the subgraph-exclusion algorithm (called Ramsey) for CLIQUE by Boppana and Halldorsson is studied on very large graphs. We compare the performance of this algorithm with that of two common heuristic algorithms, the greedy heuristic and a version of simulated annealing. These algorithms are tested on graphs with up to 10,000 vertices on a workstation and graphs as large as 70,000 vertices on a Connection Machine. Our implementations establish the ability to run clique approximation algorithms on very large graphs, and we test them on a variety of different graphs. Our conclusions indicate that, on randomly generated graphs, minor changes to the distribution can cause dramatic changes in the performance of the heuristic algorithms. The Ramsey algorithm, while not as good as the others for the most common distributions, seems more robust and provides a more even overall performance. In general, and especially on deterministically generated graphs, a combination of simulated annealing with either the Ramsey algorithm or the greedy heuristic seems to perform best. This combined algorithm works particularly well on large Keller and Hamming graphs and has a competitive overall performance on the DIMACS benchmark graphs.
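For readers unfamiliar with the Boppana-Halldorsson approach, the core Ramsey recursion builds a clique and an independent set simultaneously by recursing on a pivot vertex's neighbours and non-neighbours. A minimal Python sketch follows; the random pivot reflects the randomized variant studied above, while the graph representation is our own assumption.

```python
import random

def ramsey(adj, candidates):
    """Return (clique, independent_set) in the subgraph induced by
    `candidates`; `adj` maps each vertex to its set of neighbours."""
    if not candidates:
        return set(), set()
    v = random.choice(sorted(candidates))            # randomized pivot
    c1, i1 = ramsey(adj, candidates & adj[v])        # recurse on neighbours
    c2, i2 = ramsey(adj, candidates - adj[v] - {v})  # recurse on non-neighbours
    clique = max(c1 | {v}, c2, key=len)              # v extends the neighbour-side clique
    indep = max(i1, i2 | {v}, key=len)               # v extends the non-neighbour-side set
    return clique, indep

# Tiny usage example: a triangle {0, 1, 2} with a pendant vertex 3.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(ramsey(adj, set(adj)))
```

The full CLIQUE approximation repeatedly calls this routine, excludes the independent set it finds from the graph, and keeps the largest clique seen across all calls (hence "subgraph exclusion").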
Abstract:
Timing-related defects are major contributors to test escapes and in-field reliability problems for very deep-submicrometer integrated circuits. Small delay variations induced by crosstalk, process variations, and power-supply noise, as well as by resistive opens and shorts, can potentially cause timing failures in a design, thereby leading to quality and reliability concerns. We present a test-grading technique that uses the method of output deviations for screening small-delay defects (SDDs). A new gate-delay defect probability measure is defined to model delay variations for nanometer technologies. The proposed technique intelligently selects the best set of patterns for SDD detection from an n-detect pattern set generated using timing-unaware automatic test-pattern generation (ATPG). It offers significantly lower computational complexity and excites a larger number of long paths than a current-generation commercial timing-aware ATPG tool. Our results also show that, for the same pattern count, the selected patterns provide more effective coverage ramp-up than timing-aware ATPG and a recent pattern-selection method for random SDDs potentially caused by resistive shorts, resistive opens, and process variations. © 2010 IEEE.
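The pattern-selection step can be pictured as a greedy ranking over per-output deviation scores. The sketch below is purely illustrative: the function, data layout, and numbers are invented, and the paper's actual deviation measure is derived from the gate-delay defect probabilities it defines rather than from the toy values used here.

```python
# Hypothetical illustration of deviation-based pattern grading; all names and
# values are invented, not the paper's measure or tool flow.

def select_patterns(deviations, budget):
    """Greedily pick `budget` patterns, rewarding patterns that raise the
    best deviation seen so far at each output (spreads coverage).
    `deviations[p][o]` = deviation of output o under pattern p."""
    best_seen = {}                                   # output -> max deviation covered so far
    selected = []
    remaining = set(deviations)
    for _ in range(min(budget, len(remaining))):
        def gain(p):
            return sum(max(0.0, d - best_seen.get(o, 0.0))
                       for o, d in deviations[p].items())
        p = max(remaining, key=gain)
        selected.append(p)
        remaining.discard(p)
        for o, d in deviations[p].items():
            best_seen[o] = max(best_seen.get(o, 0.0), d)
    return selected

# Toy example: three patterns observed at two outputs.
devs = {"p1": {"o1": 0.9, "o2": 0.2},
        "p2": {"o1": 0.2, "o2": 0.8},
        "p3": {"o1": 0.5, "o2": 0.4}}
print(select_patterns(devs, 2))   # -> ['p1', 'p2']
```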
Abstract:
In three related experiments, 250 participants rated properties of their autobiographical memory of a very negative event before and after writing about either their deepest thoughts and emotions about the event or a control topic. Levels of emotional intensity of the event, distress associated with the event, intrusive symptoms, and other phenomenological memory properties decreased over the course of the experiment, but did not differ by writing condition. We argue that the act of answering our extensive questions about a very negative event led to the decrease, thereby masking the effects of expressive writing. To show that the changes could not be explained by the mere passage of time, we replicated our findings in a fourth experiment in which all 208 participants nominated a very negative event, but only half rated properties of their memory in the first session. Implications for reducing the effects of negative autobiographical memories are discussed.
Abstract:
Very Large Transport Aircraft (VLTA) pose considerable challenges to designers, operators, and certification authorities. Questions concerning seating arrangement, the nature and design of recreational space, the number, design, and location of internal staircases, the number of cabin crew required, and the nature of cabin crew emergency procedures are just some of the issues that need to be addressed. More radical concepts, such as blended wing body (BWB) designs involving one or two decks and possibly four or more aisles, pose even greater challenges. Can the largest exits currently available cope with the passenger flow arising from four or five aisles? Do we need to consider new concepts in exit design? Should the main aisles be made wider to accommodate more passengers? In this paper we demonstrate how computer-based evacuation models can be used to investigate these issues through an examination of staircase evacuation procedures for VLTA and aisle/exit configurations for BWB cabin layouts.
Abstract:
Computer egress simulation has the potential to be used in large-scale incidents to provide live advice to incident commanders. While there are many considerations which must be taken into account when applying such models to live incidents, one of the first concerns the computational speed of the simulations. No matter how important the insight provided by the simulation, numerical hindsight will not prove useful to an incident commander. Thus, for this type of application to be useful, it is essential that the simulation can be run many times faster than real time. Parallel processing is a method of reducing run times for very large computational simulations by distributing the workload amongst a number of CPUs. In this paper we examine the development of a parallel version of the buildingEXODUS software. The parallel strategy implemented is based on a systematic partitioning of the problem domain onto an arbitrary number of sub-domains. Each sub-domain is computed on a separate processor and runs its own copy of the EXODUS code. The software has been designed to work on typical office-based networked PCs but will also function on a Windows-based cluster. Two evaluation scenarios using the parallel implementation of EXODUS are described: a large open area and a 50-story high-rise building scenario. Speed-ups of up to 3.7 are achieved using up to six computers, with the high-rise building evacuation simulation achieving run times 6.4 times faster than real time.
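The partitioning strategy can be caricatured in a few lines. The toy below (invented names and parameters, not buildingEXODUS code) splits a one-dimensional corridor into sub-domains, advances each on its own worker process, and hands agents across partition boundaries at every synchronized timestep.

```python
from multiprocessing import Pool

# Hypothetical toy illustrating domain decomposition for egress simulation:
# a 1-D corridor split into equal sub-domains, each advanced by a separate
# worker, with boundary-crossing agents handed to the next sub-domain.

CORRIDOR_LEN = 100.0
N_DOMAINS = 4
WIDTH = CORRIDOR_LEN / N_DOMAINS

def step_domain(args):
    """Advance every agent in one sub-domain by one timestep."""
    positions, speed = args
    return [x + speed for x in positions]            # all agents walk toward the exit

def redistribute(domains):
    """Hand agents that crossed the right-hand boundary to the next domain."""
    for i in range(N_DOMAINS - 1):
        boundary = (i + 1) * WIDTH
        crossed = [x for x in domains[i] if x >= boundary]
        domains[i] = [x for x in domains[i] if x < boundary]
        domains[i + 1].extend(crossed)
    return domains

if __name__ == "__main__":
    # 12 agents spread along the corridor, walking 1.2 m per timestep.
    domains = [[i * WIDTH + j * 5.0 for j in range(3)] for i in range(N_DOMAINS)]
    with Pool(N_DOMAINS) as pool:
        for _ in range(10):                          # 10 synchronized timesteps
            domains = pool.map(step_domain, [(d, 1.2) for d in domains])
            domains = redistribute(domains)
    print([len(d) for d in domains])                 # agents per sub-domain
```

Synchronizing once per timestep keeps the sub-domains consistent at the cost of communication overhead, which is one reason a speed-up such as 3.7 on six machines falls short of linear.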
Abstract:
While evidence for optimal random search patterns, known as Lévy walks, in empirical movement data is mounting for a growing list of taxa spanning motile cells to humans, there is still much debate concerning the theoretical generality of Lévy walk optimisation. Here, using a new and robust simulation environment, we investigate in the most detailed study to date (24×10⁶ simulations) the foraging and search efficiencies of 2-D Lévy walks with a range of exponents, target resource distributions, and several competing models. We find strong and comprehensive support for the predictions of the Lévy flight foraging hypothesis, and in particular for the optimality of inverse-square distributions of move step-lengths, across a much broader range of resource densities and distributions than previously realised. Further support for the evolutionary advantage of Lévy walk movement patterns is provided by an investigation into the 'feast and famine' effect, with Lévy foragers in heterogeneous environments experiencing fewer long 'famines' than other types of searchers. Therefore, overall, optimal Lévy foraging results in more predictable resources in unpredictable environments.
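The inverse-square optimum corresponds to sampling step lengths from a power law p(l) ∝ l^(−μ) with μ = 2, which has a simple closed-form inverse transform. Below is a minimal sketch of such a 2-D Lévy walk; the parameter names and values are ours, not the authors' simulation environment.

```python
import math
import random

def levy_step(mu=2.0, l_min=1.0):
    """Sample a step length from p(l) ∝ l^(-mu), l >= l_min, by inverse
    transform; mu = 2 is the inverse-square optimum discussed above."""
    u = random.random()
    return l_min * (1.0 - u) ** (-1.0 / (mu - 1.0))

def levy_walk(n_steps, mu=2.0):
    """2-D Lévy walk: power-law step lengths with uniformly random headings."""
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        theta = random.uniform(0.0, 2.0 * math.pi)   # isotropic turn
        l = levy_step(mu)
        x += l * math.cos(theta)
        y += l * math.sin(theta)
        path.append((x, y))
    return path

random.seed(42)
print(levy_walk(5, mu=2.0))
```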