176 results for Kolmogorov complexity
Abstract:
We introduce a model of computation based on read only memory (ROM), which allows us to compare the space-efficiency of reversible, error-free classical computation with reversible, error-free quantum computation. We show that a ROM-based quantum computer with one writable qubit is universal, whilst two writable bits are required for a universal classical ROM-based computer. We also comment on the time-efficiency advantages of quantum computation within this model.
Abstract:
The choice of genotyping families vs unrelated individuals is a critical factor in any large-scale linkage disequilibrium (LD) study. The use of unrelated individuals for such studies is promising, but in contrast to family designs, unrelated samples do not facilitate detection of genotyping errors, which have been shown to be of great importance for LD and linkage studies and may be even more important in genotyping collaborations across laboratories. Here we employ some of the most commonly used analysis methods to examine the relative accuracy of haplotype estimation using families vs unrelateds in the presence of genotyping error. The results suggest that even slight amounts of genotyping error can significantly decrease the accuracy of haplotype frequency estimation and reconstruction, and that the ability to detect such errors in large families is essential when the number/complexity of haplotypes is high (low LD/common alleles). In contrast, in situations of low haplotype complexity (high LD and/or many rare alleles) unrelated individuals offer such a high degree of accuracy that there is little reason to prefer the less efficient family designs. Moreover, parent-child trios, which comprise the most popular family design and the most efficient in terms of the number of founder chromosomes per genotype, but which contain little information for error detection, offer little or no gain over unrelated samples in nearly all cases, and thus do not seem a useful sampling compromise between unrelated individuals and large families. The implications of these results are discussed in the context of large-scale LD mapping projects such as the proposed genome-wide haplotype map.
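As a rough, self-contained illustration of the error sensitivity this abstract describes, the sketch below (my own construction; the allele frequencies, error model, sample size and EM settings are arbitrary assumptions, and the paper's actual methods are more sophisticated) estimates two-SNP haplotype frequencies from unrelated individuals with a standard EM algorithm and shows how a small per-call genotyping error rate distorts the estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
HAPS = [(0, 0), (0, 1), (1, 0), (1, 1)]  # the four two-SNP haplotypes

def simulate_genotypes(n, freqs, err):
    """n unrelated individuals; with probability `err` a genotype
    call is replaced by a random value in {0, 1, 2}."""
    hap_idx = rng.choice(4, size=(n, 2), p=freqs)
    g = np.array([[HAPS[i][k] + HAPS[j][k] for k in range(2)]
                  for i, j in hap_idx])
    mask = rng.random(g.shape) < err
    g[mask] = rng.integers(0, 3, size=int(mask.sum()))
    return g

def em_hap_freqs(g, iters=50):
    """EM for haplotype frequencies; with two SNPs only the double
    heterozygote (1, 1) is phase-ambiguous."""
    # precompute the ordered haplotype pairs compatible with each genotype
    pairs_for = {(g1, g2): [(i, j) for i in range(4) for j in range(4)
                            if HAPS[i][0] + HAPS[j][0] == g1
                            and HAPS[i][1] + HAPS[j][1] == g2]
                 for g1 in range(3) for g2 in range(3)}
    p = np.full(4, 0.25)
    for _ in range(iters):
        counts = np.zeros(4)
        for g1, g2 in g:
            pairs = pairs_for[(g1, g2)]
            w = np.array([p[i] * p[j] for i, j in pairs])
            if w.sum() == 0:
                continue
            w /= w.sum()
            for (i, j), wk in zip(pairs, w):
                counts[i] += wk
                counts[j] += wk
        p = counts / counts.sum()
    return p

true_freqs = np.array([0.45, 0.05, 0.05, 0.45])  # strong-LD configuration
for err in (0.00, 0.01, 0.05):
    est = em_hap_freqs(simulate_genotypes(2000, true_freqs, err))
    print(f"error={err:.2f}  estimated={np.round(est, 3)}")
```

Running it shows the estimates drifting away from the true frequencies as the error rate grows, which is the qualitative effect the abstract reports; family data would allow many such errors to be flagged as Mendelian inconsistencies before estimation.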
Abstract:
Observational data collected in the Lake Tekapo hydro catchment of the Southern Alps in New Zealand are used to analyse the wind and temperature fields in the alpine lake basin during summertime fair weather conditions. Measurements from surface stations, pilot balloon and tethersonde soundings, Doppler sodar and an instrumented light aircraft provide evidence of multi-scale interacting wind systems, ranging from microscale slope winds to mesoscale coast-to-basin flows. Thermal forcing of the winds occurred due to differential heating as a consequence of orography and heterogeneous surface features, which is quantified by heat budget and pressure field analysis. The daytime vertical temperature structure was characterised by distinct layering. Features of particular interest are the formation of thermal internal boundary layers due to the lake-land discontinuity and the development of elevated mixed layers. The latter were generated by advective heating from the basin and valley sidewalls by slope winds and by a superimposed valley wind blowing from the basin over Lake Tekapo and up the tributary Godley Valley. Daytime heating in the basin and its tributary valleys caused the development of a strong horizontal temperature gradient between the basin atmosphere and that over the surrounding landscape, and hence the development of a mesoscale heat low over the basin. After noon, air from outside the basin started flowing over mountain saddles into the basin, causing cooling in the lowest layers, whereas at ridge top height the horizontal air temperature gradient between inside and outside the basin continued to increase. In the early evening, a more massive intrusion of cold air caused rapid cooling and a transition to a rather uniform, slightly stable stratification up to about 2000 m agl. The onset time of this rapid cooling varied by about 1-2 h between observation sites and was probably triggered by the decay of up-slope winds inside the basin, which had previously countered the intrusion of air over the surrounding ridges. The intrusion of air from outside the basin continued until about midnight, when a northerly mountain wind from the Godley Valley became dominant. The results illustrate the extreme complexity that can be caused by the operation of thermal forcing processes at a wide range of spatial scales.
Abstract:
The interaction of pragmatic and syntactic constraints is a central issue in models of sentence processing. It is well established that object relatives (1) are harder to process than subject relatives (2). Passivization, other things being equal, increases sentence complexity. However, one of the functions of the passive construction is to promote an NP into the role of subject so that it can be more easily bound to the head NP in a higher clause. Thus, (3) is predicted to be marginally preferred over (1). Passivization in this instance may be seen as a way of avoiding the object relative construction. 1. The pipe that the traveller smoked annoyed the passengers. 2. The traveller that smoked the pipe annoyed the passengers. 3. The pipe that was smoked by the traveller annoyed the passengers. 4. The traveller that the pipe was smoked by annoyed the passengers. 5. The traveller that the lady was assaulted by annoyed the passengers. In (4) we have relativization of an NP which has been demoted by passivization to the status of a by-phrase. Such relative clauses may only be obtained under quite restrictive pragmatic conditions. Many languages do not permit relativization of a constituent as low as a by-phrase on the NP accessibility hierarchy (Comrie, 1984). The factors which determine the acceptability of demoted NP relatives like (4-5) reflect the ease with which the NP promoted to subject position can be taken as a discourse topic. We explored the acceptability of sentences such as (1-5) using pair-wise judgements of same/different meaning, accompanied by ratings of ease of understanding. Results are discussed with reference to Gibson's DLT model of linguistic complexity and sentence processing (Gibson, 2000).
Abstract:
Recent research supports Locke's (1976) model of facet satisfaction, in which the range of affect of objectively defined facet descriptions is moderated by subjective evaluations of facet importance (McFarlin & Rice, 1992). This study examined the utility of Locke's moderated model of facet satisfaction for the prediction of organizationally important global measures of job satisfaction. A large dataset covering two groups of workers allowed testing over different time periods and across a broad range of satisfaction measures. The hypothesis derived from Locke's model, that global satisfaction would represent a linear function of facet satisfaction (i.e., facet description × facet importance), was not supported. Instead, a simple (have-want) discrepancy model (operationalized as facet description) provided the most consistent set of predictors. The results suggest that workers, when providing global measures of job satisfaction, may use cognitive heuristics to reduce the complexity of facet description × importance calculations. The implications of these data for Locke's model and directions for future research are outlined.
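In rough symbols (the notation and the linear form are my own shorthand, not equations taken from the paper), the contrast being tested is:

```latex
% Locke's moderated model: each facet description d_j is weighted
% by its rated importance m_j
S_{\mathrm{global}} \;\approx\; \beta_0 + \sum_{j=1}^{k} \beta_j \,\bigl(d_j \times m_j\bigr)

% Simple (have - want) discrepancy model: importance drops out
S_{\mathrm{global}} \;\approx\; \beta_0 + \sum_{j=1}^{k} \beta_j \, d_j,
\qquad d_j = \mathrm{have}_j - \mathrm{want}_j
```

The finding reported above is that the second, simpler form predicted global satisfaction more consistently than the importance-weighted one.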
Abstract:
Issues concerning the influence of attachment characteristics on own and partner's disclosure were addressed using a sample of 113 couples in medium-term dating relationships. Individual differences in attachment were assessed in terms of relationship anxiety and avoidance. Disposition to disclose was assessed using questionnaire measures of self-disclosure, relationship-focused disclosure, and the ability to elicit disclosure from the partner; in addition, structured diaries were used to assess aspects of disclosure (amount, intimacy, emotional tone, and satisfaction) in the context of couples' everyday interactions. Couple-level analyses showed that avoidance strongly predicted dispositional measures of disclosure, whereas anxiety (particularly partner's anxiety) was related to negative evaluations of everyday interactions. Interactive effects of attachment dimensions and gender were also obtained, highlighting the complexity of communication behavior. The results are discussed in terms of the goals and strategies associated with working models of attachment.
Abstract:
This paper tests the explanatory capacities of different versions of new institutionalism by examining the Australian case of a general transition in central banking practice and monetary politics: namely, the increased emphasis on low inflation and central bank independence. Standard versions of rational choice institutionalism largely dominate the literature on the politics of central banking, but this approach (here termed RC1) fails to account for the Australian empirics. RC1 has a tendency to establish actor preferences exogenously to the analysis; actors' motives are also assumed a priori; and actors' preferences are depicted in relatively static, ahistorical terms. There is also the tendency, even a methodological requirement, to assume relatively simple motives and preference sets among actors, in part because of the game theoretic nature of RC1 reasoning. It is possible to build a more accurate rational choice model by re-specifying and essentially updating the context, incentives and choice sets that have driven rational choice in this case. Enter RC2. However, this move subtly introduces methodological shifts and new theoretical challenges. By contrast, historical institutionalism uses an inductive methodology. Compared with deduction, it is arguably better able to deal with complexity and nuance. It also utilises a dynamic, historical approach, and specifies (dynamically) endogenous preference formation by interpretive actors. Historical institutionalism is also able to more easily incorporate a wider set of key explanatory variables and wider social aggregates. Hence, it is argued that historical institutionalism is the preferred explanatory theory and methodology in this case.
Abstract:
Supervision probably does have benefits both for the maintenance and improvement of clinical skills and for job satisfaction, but the data are very thin and almost non-existent in the area of alcohol and other drug services. Because of the potential complexity of objectives and roles in supervision, a structured agreement appears to be an important part of the effective supervision relationship. Because sessions can easily degenerate into unstructured socialization, agendas and session objectives may also be important. While a working alliance based on mutual respect and trust is an essential base for the supervision relationship, procedures for direct observation of clinical skills, demonstration of new procedures and skills practice with detailed feedback appear critical to supervision's impact on practice. To ensure effective supervision, there needs to be not only a minimum of personnel and resources, but also compatibility with the values and procedures of management and staff, access to supervision training and consultation, and sufficient incentives to ensure it continues.
Abstract:
A technique based on laser light diffraction is shown to be successful in collecting on-line experimental data. Time series of floc size distributions (FSD) under different shear rates (G) and calcium additions were collected. The steady-state mass mean diameter decreased with increasing shear rate G and increased when calcium additions exceeded 8 mg/l. A so-called population balance model (PBM) was used to describe the experimental data. This kind of model describes both aggregation and breakage through birth and death terms. A discretised PBM was used since analytical solutions of the integro-partial differential equations do not exist. Despite the complexity of the model, only two parameters need to be estimated: the aggregation rate and the breakage rate. The model seems, however, to lack flexibility, and its description of the evolution of the FSD in time is not accurate.
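For context, a generic continuous-form PBM with aggregation and breakage birth/death terms can be written as below; the kernel notation is standard in the population-balance literature but is an assumption here, and the paper itself works with a discretised version in which the two fitted parameters enter as scalar rates on the aggregation and breakage terms:

```latex
\frac{\partial n(v,t)}{\partial t} =
    \underbrace{\frac{1}{2}\int_{0}^{v} \beta(v-u,\,u)\, n(v-u,t)\, n(u,t)\, \mathrm{d}u}_{\text{birth by aggregation}}
  \;-\; \underbrace{n(v,t)\int_{0}^{\infty} \beta(v,u)\, n(u,t)\, \mathrm{d}u}_{\text{death by aggregation}}
  \;+\; \underbrace{\int_{v}^{\infty} b(v \mid w)\, S(w)\, n(w,t)\, \mathrm{d}w}_{\text{birth by breakage}}
  \;-\; \underbrace{S(v)\, n(v,t)}_{\text{death by breakage}}
```

Here n(v,t) is the number density of flocs of volume v at time t, β(v,u) is the aggregation kernel, S(v) is the breakage rate and b(v|w) is the daughter-size distribution of fragments produced when a floc of volume w breaks.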
Abstract:
Circular proteins are a recently discovered phenomenon. They presumably evolved to confer advantages over ancestral linear proteins while maintaining the intrinsic biological functions of those proteins. In general, these advantages include a reduced sensitivity to proteolytic cleavage and enhanced stability. In one remarkable family of circular proteins, the cyclotides, the cyclic backbone is additionally braced by a knotted arrangement of disulfide bonds that confers additional stability and topological complexity upon the family. This article describes the discovery, structure, function and biosynthesis of the currently known circular proteins. The discovery of naturally occurring circular proteins in the past few years has been complemented by new chemical and biochemical methods to make synthetic circular proteins; these are also briefly described.
Abstract:
Recent studies have revealed striking differences in pyramidal cell structure among cortical regions involved in the processing of different functional modalities. For example, cells involved in visual processing show systematic variation, increasing in morphological complexity with rostral progression from V1 through extrastriate areas. Differences have also been identified between pyramidal cells in somatosensory, motor and prefrontal cortex, but the extent to which the pyramidal cell phenotype may vary between these functionally related cortical regions remains unknown. In the present study we investigated the structure of layer III pyramidal cells in somatosensory and motor areas 3b, 4, 5, 6 and 7b of the macaque monkey. Cells were intracellularly injected in fixed, flat-mounted cortical slices and analysed for morphometric parameters. The size of the basal dendritic arbours, the number of their branches and their spine density were found to vary systematically between areas. Namely, we found a trend for increasing complexity in dendritic arbour structure through areas 3b, 5 and 7b. A similar trend occurred through areas 4 and 6. The differences in arbour structure may determine the number of inputs received by neurons and may thus be an important factor in determining function at the cellular and systems level.
Abstract:
The neuronal circuitry underlying the generation of direction selectivity in the retina has remained elusive for almost 40 years. Recent studies indicate that direction selectivity may be established within the radial dendrites of 'starburst' amacrine cells and that retinal ganglion cells may acquire their direction selectivity by the appropriate weighting of excitatory and inhibitory inputs from starburst dendrites pointing in different directions. If so, this would require unexpected complexity and subtlety in the synaptic connectivity of these CNS neurons.
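As a purely illustrative toy (my own construction, not the circuit reconstructed in the studies cited), the sketch below shows the weighting idea in the abstract: half-rectified inputs from dendrites pointing in different directions are counted as excitatory when aligned with the cell's preferred direction and inhibitory when aligned with the null direction, yielding a direction-selective output:

```python
import numpy as np

def dendrite_response(stim_dir, dendrite_dir):
    """Half-rectified cosine tuning of one 'starburst' dendrite:
    it responds best to motion along its own pointing direction."""
    return max(0.0, np.cos(stim_dir - dendrite_dir))

def ganglion_response(stim_dir, preferred, n_dendrites=12,
                      w_exc=1.0, w_inh=1.5):
    """Weighted sum over dendrites pointing in different directions;
    the weights, not the dendrites, carry the direction preference."""
    dirs = np.linspace(0, 2 * np.pi, n_dendrites, endpoint=False)
    total = 0.0
    for d in dirs:
        r = dendrite_response(stim_dir, d)
        # excitatory if the dendrite points toward the preferred
        # direction, inhibitory if it points toward the null side
        w = w_exc if np.cos(d - preferred) > 0 else -w_inh
        total += w * r
    return max(0.0, total)  # firing rates cannot be negative

preferred = 0.0  # rightward-preferring cell
for deg in range(0, 360, 45):
    stim = np.deg2rad(deg)
    print(f"{deg:3d} deg -> {ganglion_response(stim, preferred):.2f}")
```

The printout shows a strong response for motion near 0 degrees and complete suppression near 180 degrees, the signature of direction selectivity arising purely from the input weighting.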
Abstract:
We focus on mixtures of factor analyzers from the perspective of a method for model-based density estimation from high-dimensional data, and hence for the clustering of such data. This approach enables a normal mixture model to be fitted to a sample of n data points of dimension p, where p is large relative to n. The number of free parameters is controlled through the dimension of the latent factor space. Working in this reduced space allows a model for each component-covariance matrix with complexity lying between that of the isotropic and full covariance structure models. We shall illustrate the use of mixtures of factor analyzers in a practical example that considers the clustering of cell lines on the basis of gene expressions from microarray experiments.
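For reference, the component-covariance structure underlying a mixture of factor analyzers is commonly written as follows (standard notation from the mixture-modelling literature, not quoted from this paper):

```latex
\Sigma_i \;=\; \Lambda_i \Lambda_i^{\top} + \Psi_i, \qquad i = 1,\dots,g,
```

where Λ_i is a p × q matrix of factor loadings with q ≪ p and Ψ_i is a diagonal matrix of specific variances. Each component covariance then involves only pq + p − q(q−1)/2 free parameters, sitting between the single parameter of the isotropic model σ²I_p and the p(p+1)/2 parameters of the unrestricted full-covariance model, which is what makes the fit feasible when p is large relative to n.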
Abstract:
Most considerations of knowledge management focus on corporations and, until recently, considered knowledge to be objective, stable, and asocial. In this paper we wish to move the focus away from corporations, and examine knowledge and national innovation systems. We argue that the knowledge systems in which innovation takes place are phenomenologically turbulent, a state not made explicit in the change, innovation and socio-economic studies of knowledge literature, and that this omission poses a serious limitation to the successful analysis of innovation and knowledge systems. To address this omission, we suggest that three evolutionary processes must be considered: self-referencing, self-transformation and self-organisation. These processes, acting simultaneously, enable system cohesion, radical innovation and adaptation. More specifically, we argue that in knowledge-based economies the high levels of phenomenological turbulence drive these processes. Finally, we spell out important policy principles that derive from these processes.
Abstract:
The ability to recall the location of a predator and later avoid it was tested in nine populations of rainbowfish (Melanotaenia spp.), representing three species from a variety of environments. Following the introduction of a model predator into a particular microhabitat, the model was removed, the arena rotated and the distribution of the fish recorded again. In this manner it could be determined what cues the fish relied on in order to recall the previous location of the predator model. Fish from all populations but one (Dirran Creek) were capable of avoiding the predator by remembering either the location and/or the microhabitat in which the predator was recently observed. Reliance on different types of visual cues appears to vary between populations but the reason for this variation remains elusive. Of the ecological variables tested (flow variability, predator density and habitat complexity), only the level of predation appeared to be correlated with the orientation technique employed by each population. There was no effect of species identity, which suggests that the habitat that each population occupies plays a strong role in the development of both predator avoidance responses and the cues used to track predators in the wild.