90 results for random search algorithms
Abstract:
Algorithms for explicit integration of structural dynamics problems with multiple time steps (subcycling) are investigated. Only one such algorithm, due to Smolinski and Sleith, has proved to be stable in a classical sense. A simplified version of this algorithm that retains its stability is presented. However, as with the original version, it can be shown to sacrifice accuracy to achieve stability. Another algorithm in use is shown to be only statistically stable, in that a probability of stability can be assigned if appropriate time step limits are observed. This probability improves rapidly with the number of degrees of freedom in a finite element model. The stability problems are shown to be a property of the central difference method itself, which is modified to give the subcycling algorithm. A related problem is shown to arise when a constraint equation in time is introduced into a time-continuous space-time finite element model. (C) 1998 Elsevier Science S.A.
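The conditional stability of the central difference method, which underlies the subcycling issue discussed above, can be illustrated on a single undamped degree of freedom (a minimal sketch, not the subcycling algorithm itself): the explicit scheme remains bounded only while the time step satisfies dt < 2/omega.

```python
def central_difference(omega, dt, steps, u0=1.0, v0=0.0):
    # Explicit central-difference integration of u'' = -omega**2 * u.
    # Stable (bounded oscillation) only for dt < 2/omega.
    # Start-up value u_{-1} from a Taylor expansion of the initial state.
    u_prev = u0 - dt * v0 + 0.5 * dt * dt * (-omega * omega * u0)
    u = u0
    history = [u0]
    for _ in range(steps):
        # u_{n+1} = (2 - dt^2 omega^2) u_n - u_{n-1}
        u_next = 2.0 * u - u_prev - dt * dt * omega * omega * u
        u_prev, u = u, u_next
        history.append(u)
    return history
```

Running with omega = 1 and dt = 0.1 gives a bounded oscillation, while dt = 2.5 (beyond the 2/omega limit) diverges rapidly.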
Abstract:
Religious belief and practice play an important role in the lives of millions of people worldwide, and yet little is known of the spiritual lives of people with a disability. This review explores the realm of disability, religion and health, and draws together literature from a variety of sources to illustrate the diversity of the sparse research in the field. An historical, cross-cultural and religious textual overview of attitudes toward disability throughout the centuries is presented. Studies in religious orientation, health and well-being are reviewed, highlighting the potential of religion to affect the lives of people with a disability, their families and caregivers. Finally, the spiritual dimensions of disability are explored to gain some understanding of the spiritual lives and existential challenges of people with a disability, and a discussion ensues on the importance of further research into this new field of endeavour.
Abstract:
Extended gcd calculation has a long history and plays an important role in computational number theory and linear algebra. Recent results have shown that finding optimal multipliers in extended gcd calculations is difficult. We present an algorithm which uses lattice basis reduction to produce small integer multipliers x_1, ..., x_m for the equation s = gcd(s_1, ..., s_m) = x_1 s_1 + ... + x_m s_m, where s_1, ..., s_m are given integers. The method generalises to produce small unimodular transformation matrices for computing the Hermite normal form of an integer matrix.
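For reference, the multiplier equation itself can be satisfied by iterating the classical two-argument extended gcd (a minimal sketch; this does not perform the lattice basis reduction that makes the multipliers small, which is the paper's contribution):

```python
def ext_gcd(a, b):
    # Returns (g, x, y) with g = gcd(a, b) = a*x + b*y.
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def multi_ext_gcd(nums):
    # Returns (g, xs) with g = gcd(nums) = sum(x * s for x, s in zip(xs, nums)).
    g, xs = nums[0], [1]
    for s in nums[1:]:
        g2, u, v = ext_gcd(g, s)
        xs = [x * u for x in xs] + [v]  # rescale old multipliers, append new
        g = g2
    return g, xs
```

The multipliers produced this way satisfy the identity but can grow large; lattice reduction (as in the abstract) is what keeps them small.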
Abstract:
An order of magnitude sensitivity gain is described for using quasar spectra to investigate possible time or space variation in the fine structure constant alpha. Applied to a sample of 30 absorption systems, spanning redshifts 0.5 < z < 1.6, we derive limits on variations in alpha over a wide range of epochs. For the whole sample, Delta alpha/alpha = (-1.1 +/- 0.4) x 10^-5. This deviation is dominated by measurements at z > 1, where Delta alpha/alpha = (-1.9 +/- 0.5) x 10^-5. For z < 1, Delta alpha/alpha = (-0.2 +/- 0.4) x 10^-5. While this is consistent with a time-varying alpha, further work is required to explore possible systematic errors in the data, although careful searches have so far revealed none.
Abstract:
We tested the effects of four data characteristics on the results of reserve selection algorithms. The data characteristics were nestedness of features (land types in this case), rarity of features, size variation of sites (potential reserves) and size of data sets (numbers of sites and features). We manipulated data sets to produce three levels, with replication, of each of these data characteristics while holding the other three characteristics constant. We then used an optimizing algorithm and three heuristic algorithms to select sites to solve several reservation problems. We measured efficiency as the number or total area of selected sites, indicating the relative cost of a reserve system. Higher nestedness increased the efficiency of all algorithms (reduced the total cost of new reserves). Higher rarity reduced the efficiency of all algorithms (increased the total cost of new reserves). More variation in site size increased the efficiency of all algorithms expressed in terms of total area of selected sites. We measured the suboptimality of heuristic algorithms as the percentage increase of their results over optimal (minimum possible) results. Suboptimality is a measure of the reliability of heuristics as indicative costing analyses. Higher rarity reduced the suboptimality of heuristics (increased their reliability) and there is some evidence that more size variation did the same for the total area of selected sites. We discuss the implications of these results for the use of reserve selection algorithms as indicative and real-world planning tools.
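As a concrete illustration of the kind of heuristic the study evaluates, here is a minimal greedy "richness" heuristic for reserve selection (a common baseline in this literature; the abstract does not specify which heuristics were compared, and the input format here is a hypothetical simplification):

```python
def greedy_reserve(sites):
    # sites: dict mapping site name -> set of features it contains.
    # Greedily picks the site covering the most still-unrepresented
    # features until every feature appears in at least one chosen site.
    needed = set().union(*sites.values())
    chosen = []
    while needed:
        best = max(sites, key=lambda s: len(sites[s] & needed))
        chosen.append(best)
        needed -= sites[best]
    return chosen
```

Such heuristics are fast but can be suboptimal relative to an exact optimizing algorithm, which is precisely the gap the study measures.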
Abstract:
The aim of this study was to investigate the association between false belief comprehension, the exhibition of pretend play and the use of mental state terms in pre-school children. Forty children, aged between 36 and 54 months, were videotaped engaging in free play with each parent. The exhibition of six distinct acts of pretend play and the expression of 16 mental state terms were coded during play. Each child was also administered a pantomime task and three standard false belief tasks. Reliable associations were found between false belief performance and the pretence categories of object substitution and role assignment, and the exhibition of imaginary object pantomimes. Moreover, the use of mental state terms was positively correlated with false belief and the pretence categories of object substitution, imaginary play and role assignment, and negatively correlated with the exhibition of body part object pantomimes. These findings indicate that the development of a mental state lexicon and some, but not all, components of pretend play are dependent on the capacity for metarepresentational cognition.
Abstract:
Perceived depth was measured for three types of stereograms with the colour/texture of half-occluded (monocular) regions either similar to or dissimilar to that of binocular regions or background. In a two-panel random dot stereogram the monocular region was filled with texture either similar or different to the far panel, or left blank. In unpaired background stereograms the monocular region either matched the background or was different in colour or texture; in phantom stereograms the monocular region matched the partially occluded object or was a different colour or texture. In all three cases depth was considerably impaired when the monocular texture did not match either the background or the more distant surface. The content and context of monocular regions as well as their position are important in determining their role as occlusion cues and thus in three-dimensional layout. We compare coincidence and accidental view accounts of these effects. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
A two-component survival mixture model is proposed to analyse a set of ischaemic stroke-specific mortality data. The survival experience of stroke patients after index stroke may be described by a subpopulation of patients in the acute condition and another subpopulation of patients in the chronic phase. To adjust for the inherent correlation of observations due to random hospital effects, a mixture model of two survival functions with random effects is formulated. Assuming a Weibull hazard in both components, an EM algorithm is developed for the estimation of fixed effect parameters and variance components. A simulation study is conducted to assess the performance of the two-component survival mixture model estimators. Simulation results confirm the applicability of the proposed model in a small sample setting. Copyright (C) 2004 John Wiley & Sons, Ltd.
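The EM idea behind such a mixture can be sketched in a heavily simplified form: two Weibull components with known, fixed shapes, complete (uncensored) survival times, and no random hospital effects (all simplifying assumptions relative to the model above). The E-step computes each observation's posterior probability of belonging to the first component; the M-step has closed-form updates for the mixing weight and the two scale parameters.

```python
import math

def weibull_pdf(t, k, lam):
    # Weibull density with shape k and scale lam, for t > 0.
    return (k / lam) * (t / lam) ** (k - 1) * math.exp(-((t / lam) ** k))

def em_weibull_mixture(times, k1, k2, iters=200):
    # EM for a two-component Weibull mixture with fixed shapes k1, k2.
    # Returns (mixing weight pi, scale of component 1, scale of component 2).
    m = sum(times) / len(times)
    pi, l1, l2 = 0.5, 0.5 * m, 2.0 * m   # crude initial values (assumption)
    for _ in range(iters):
        # E-step: posterior probability of component 1 for each time.
        w = []
        for t in times:
            a = pi * weibull_pdf(t, k1, l1)
            b = (1.0 - pi) * weibull_pdf(t, k2, l2)
            w.append(a / (a + b))
        # M-step: closed-form MLE updates given fixed shapes.
        pi = sum(w) / len(w)
        l1 = (sum(wi * t ** k1 for wi, t in zip(w, times)) / sum(w)) ** (1.0 / k1)
        l2 = (sum((1 - wi) * t ** k2 for wi, t in zip(w, times))
              / sum(1 - wi for wi in w)) ** (1.0 / k2)
    return pi, l1, l2
```

The full model in the paper additionally estimates the shapes, handles censoring, and integrates over random hospital effects, none of which this sketch attempts.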
Abstract:
Öhman and colleagues provided evidence for preferential processing of pictures depicting fear-relevant animals by showing that pictures of snakes and spiders are found faster among pictures of flowers and mushrooms than vice versa and that the speed of detecting fear-relevant animals was not affected by set size whereas the speed of detecting flowers/mushrooms was. Experiment 1 replicated this finding. Experiment 2, however, found similar search advantages when pictures of cats and horses or of wolves and big cats were to be found among pictures of flowers and mushrooms. Moreover, Experiment 3, in a within-subject comparison, failed to find faster identification of snakes and spiders than of cats and horses among flowers and mushrooms. The present findings seem to indicate that previous reports of preferential processing of pictures of snakes and spiders in a visual search task may reflect a processing advantage for animal pictures in general rather than fear-relevance.
Abstract:
Minimal perfect hash functions are used for memory efficient storage and fast retrieval of items from static sets. We present an infinite family of efficient and practical algorithms for generating order preserving minimal perfect hash functions. We show that almost all members of the family construct space and time optimal order preserving minimal perfect hash functions, and we identify the one with minimum constants. Members of the family generate a hash function in two steps. First a special kind of function into an r-graph is computed probabilistically. Then this function is refined deterministically to a minimal perfect hash function. We give strong theoretical evidence that the first step runs in linear expected time. The second step runs in linear deterministic time. The family not only has theoretical importance, but also offers the fastest known method for generating perfect hash functions.
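The two-step construction can be sketched for the r = 2 case: each key is mapped to a random graph edge (h1(key), h2(key)); if the graph turns out acyclic, vertex values g are assigned by traversal so that mph(key) = (g[h1(key)] + g[h2(key)]) mod n returns the key's original index, making the function order preserving. The hash functions and the vertex-count factor below are illustrative assumptions, not the paper's exact choices.

```python
import random

def build_mphf(keys, tries=100):
    # Order-preserving minimal perfect hash via an acyclic random 2-graph.
    n = len(keys)
    m = 3 * n + 1                       # > 2n vertices make acyclicity likely
    for _ in range(tries):
        s1, s2 = random.random(), random.random()
        h1 = lambda k: hash((s1, k)) % m
        h2 = lambda k: hash((s2, k)) % m
        edges = [(h1(k), h2(k)) for k in keys]
        if any(a == b for a, b in edges):
            continue                    # self-loop: retry with new functions
        adj = {}
        for i, (a, b) in enumerate(edges):
            adj.setdefault(a, []).append((b, i))
            adj.setdefault(b, []).append((a, i))
        # Step 2: traverse each component, solving g[a] + g[b] = i (mod n).
        g, used, ok = {}, [False] * n, True
        for start in list(adj):
            if start in g:
                continue
            g[start] = 0
            stack = [start]
            while stack and ok:
                u = stack.pop()
                for v, i in adj[u]:
                    if used[i]:
                        continue
                    used[i] = True
                    if v in g:
                        ok = False      # cycle: these hash functions fail
                        break
                    g[v] = (i - g[u]) % n
                    stack.append(v)
            if not ok:
                break
        if ok:
            return lambda k: (g[h1(k)] + g[h2(k)]) % n
    raise RuntimeError("no acyclic graph found")
```

Because each edge carries its key's index, the resulting function maps the i-th key exactly to i, with no collisions and no empty slots.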