765 results for sample complexity
Abstract:
This paper presents an analysis of disfluencies in two oral tellings of a familiar children's story by a young boy with autism. Thurber & Tager-Flusberg (1993) postulate a lower degree of cognitive and communicative investment to explain the lower frequency of non-grammatical pauses observed in elicited narratives of children with autism in comparison to typically developing and intellectually disabled controls. We also found a very low frequency of non-grammatical pauses in our data, but found indications of high engagement and cognitive and communicative investment. We point to a wider range of disfluencies as indicators of cognitive load, and show that the kind and location of the disfluencies produced may reveal which aspects of the narrative task create the greatest cognitive demand: here, mental state ascription, perspectivization, and adherence to story schema. This paper thus generates analytical options and hypotheses that can be explored further in a larger population of children with autism and typically developing controls.
Abstract:
This paper presents a new relative measure of signal complexity, referred to here as relative structural complexity, which is based on the matching pursuit (MP) decomposition. By relative, we refer to the fact that this new measure is highly dependent on the decomposition dictionary used by MP. The structural part of the definition points to the fact that this new measure is related to the structure, or composition, of the signal under analysis. After a formal definition, the proposed relative structural complexity measure is used in the analysis of newborn EEG. To do this, a time-frequency (TF) decomposition dictionary is first designed specifically to compactly represent the newborn EEG seizure state using MP. We then show, through the analysis of synthetic and real newborn EEG data, that the relative structural complexity measure can indicate changes in EEG structure as the signal transitions between the two EEG states, namely seizure and background (non-seizure).
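The abstract does not give the measure's exact formula, so the following is only an illustrative sketch of the underlying idea: greedy MP repeatedly subtracts the best-matching dictionary atom, and a signal that is "simple" relative to the dictionary needs few atoms to capture most of its energy. The generic unit-norm random dictionary, the 99% energy threshold, and the normalisation by dictionary size are all assumptions, not the paper's newborn-EEG TF dictionary or definition.

```python
import numpy as np

def matching_pursuit(signal, dictionary, energy_fraction=0.99, max_atoms=100):
    """Greedy MP: count atoms needed to capture `energy_fraction` of the
    signal's energy. `dictionary` holds unit-norm atoms as rows."""
    residual = signal.astype(float).copy()
    target = (1.0 - energy_fraction) * np.dot(signal, signal)
    n_atoms = 0
    while np.dot(residual, residual) > target and n_atoms < max_atoms:
        projections = dictionary @ residual          # correlation with each atom
        k = np.argmax(np.abs(projections))           # best-matching atom
        residual -= projections[k] * dictionary[k]   # subtract its contribution
        n_atoms += 1
    return n_atoms

def relative_structural_complexity(signal, dictionary, **kw):
    # Hypothetical normalisation: fewer atoms => lower complexity
    # relative to this particular dictionary.
    return matching_pursuit(signal, dictionary, **kw) / dictionary.shape[0]
```

A signal that coincides with a single atom is maximally simple relative to the dictionary (one atom suffices), while noise that the dictionary represents poorly drives the count up, which is the dictionary-dependence the "relative" in the name refers to.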
Abstract:
The generalized Gibbs sampler (GGS) is a recently developed Markov chain Monte Carlo (MCMC) technique that enables Gibbs-like sampling of state spaces that lack a convenient representation in terms of a fixed coordinate system. This paper describes a new sampler, called the tree sampler, which uses the GGS to sample from a state space consisting of phylogenetic trees. The tree sampler is useful for a wide range of phylogenetic applications, including Bayesian, maximum likelihood, and maximum parsimony methods. A fast new algorithm to search for a maximum parsimony phylogeny is presented, using the tree sampler in the context of simulated annealing. The mathematics underlying the algorithm is explained and its time complexity is analyzed. The method is tested on two large data sets consisting of 123 sequences and 500 sequences, respectively. The new algorithm is shown to compare very favorably in terms of speed and accuracy to the program DNAPARS from the PHYLIP package.
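The abstract names simulated annealing as the search framework but gives no implementation details; the sketch below is therefore only a generic simulated-annealing skeleton of the kind such a parsimony search builds on. In the paper's setting the proposal step would be a tree-sampler (GGS) move on tree space and the score the parsimony length; here both are abstract callables, and the integer quadratic used in the usage example is purely hypothetical.

```python
import math
import random

def simulated_annealing(initial, propose, score, t0=1.0, cooling=0.995,
                        steps=2000, seed=0):
    """Minimise `score` by always accepting improving moves and accepting
    worsening moves with probability exp(-delta / T), cooling T geometrically.
    Returns the best state seen and its score."""
    rng = random.Random(seed)
    state, s = initial, score(initial)
    best, s_best = state, s
    t = t0
    for _ in range(steps):
        cand = propose(state, rng)
        sc = score(cand)
        if sc <= s or rng.random() < math.exp((s - sc) / t):
            state, s = cand, sc          # accept the move
            if s < s_best:
                best, s_best = state, s  # track the incumbent optimum
        t *= cooling
    return best, s_best
```

For example, minimising the toy objective (x - 3)^2 over the integers with unit-step proposals converges to x = 3; in the phylogenetic case the same loop would walk over trees instead of integers.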
Abstract:
There are many factors which affect the L2 learner's performance at the levels of phonology, morphology and syntax. Consequently, when L2 learners attempt to communicate in the target language, their language production will show systematic variability across these linguistic domains. This variation can be attributed to factors such as interlocutors, topic familiarity, prior knowledge, task condition, planning time and task type. This paper reports the results of ongoing research investigating variability attributed to task type. It is hypothesized that the particular type of task learners are required to perform will result in variation in their performance. Statistical analyses of the performance of twenty L2 learners at the English department of Tabriz University provided evidence in support of the hypothesis that the performance of L2 learners shows systematic variability attributable to task type.
Abstract:
Objectives. To investigate the test-retest stability of a standardized version of Nelson's (1976) Modified Card Sorting Test (MCST) and its relationships with demographic variables in a sample of healthy older adults. Design. A standard card order and administration were devised for the MCST and administered to participants at an initial assessment, and again at a second session conducted a minimum of six months later in order to examine its test-retest stability. Participants were also administered the WAIS-R at initial assessment in order to provide a measure of psychometric intelligence. Methods. Thirty-six (24 female, 12 male) healthy older adults aged 52 to 77 years with mean education 12.42 years (SD = 3.53) completed the MCST on two occasions approximately 7.5 months (SD = 1.61) apart. Stability coefficients and test-retest differences were calculated for the range of scores. The effect of gender on MCST performance was examined. Correlations between MCST scores and age, education and WAIS-R IQs were also determined. Results. Stability coefficients ranged from .26 for the percent perseverative errors measure to .49 for the failure to maintain set measure. Several measures were significantly correlated with age, education and WAIS-R IQs, although no effect of gender on MCST performance was found. Conclusions. None of the stability coefficients reached the level required for clinical decision making. The results indicate that participants' age, education, and intelligence need to be considered when interpreting MCST performance. Normative studies of MCST performance as well as further studies with patients with executive dysfunction are needed.
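Test-retest stability coefficients of this kind are typically Pearson correlations between the scores obtained at the two sessions; the abstract does not spell out its computation, so the following is only an illustrative sketch of that standard formula.

```python
import math

def stability_coefficient(first, second):
    """Pearson correlation between paired scores from two test sessions.
    Values near 1 indicate stable rank ordering across sessions."""
    n = len(first)
    mx = sum(first) / n
    my = sum(second) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(first, second))
    sx = math.sqrt(sum((x - mx) ** 2 for x in first))
    sy = math.sqrt(sum((y - my) ** 2 for y in second))
    return cov / (sx * sy)
```

On this reading, the reported range of .26 to .49 means that at best about a quarter of the variance in a retest score is predictable from the initial score, which is why the authors judge the coefficients insufficient for clinical decision making.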
Abstract:
We investigate the X-ray properties of the Parkes sample of flat-spectrum radio sources using data from the ROSAT All-Sky Survey and archival pointed PSPC observations. In total, 163 of the 323 sources are detected. For the remaining 160 sources, 2 sigma upper limits to the X-ray flux are derived. We present power-law photon indices in the 0.1-2.4 keV energy band for 115 sources, determined either with a hardness ratio technique or from direct fits to pointed PSPC data when a sufficient number of photons was available. The average photon index is <Gamma> = 1.95(-0.12)(+0.13) for flat-spectrum radio-loud quasars, <Gamma> = 1.70(-0.24)(+0.23) for galaxies, and <Gamma> = 2.40(-0.31)(+0.12) for BL Lac objects. The soft X-ray photon index is correlated with redshift and with radio spectral index, in the sense that sources at high redshift and/or with flat (or inverted) radio spectra have flatter X-ray spectra on average. The results are in accord with orientation-dependent unification schemes for radio-loud active galactic nuclei. Webster et al. discovered many sources with unusually red optical continua among the quasars of this sample, and interpreted this result in terms of extinction by dust. Although the X-ray spectra in general do not show excess absorption, we find that low-redshift optically red quasars have significantly lower soft X-ray luminosities on average than objects with blue optical continua. The difference disappears at higher redshifts, as expected for intrinsic absorption by cold gas associated with the dust. In addition, the scatter in log(f(x)/f(o)) is consistent with the observed optical extinction, contrary to previous claims based on optically or X-ray selected samples. Although alternative explanations for the red optical continua cannot be excluded with the present X-ray data, we note that the observed X-ray properties are consistent with the idea that dust plays an important role in some of the radio-loud quasars with red optical continua.
Abstract:
Multiple sampling is widely used in vadose zone percolation experiments to investigate the extent to which soil structure heterogeneities influence the spatial and temporal distributions of water and solutes. In this note, a simple, robust mathematical model, based on the beta statistical distribution, is proposed as a method of quantifying the magnitude of heterogeneity in such experiments. The model relies on fitting two parameters, alpha and zeta, to the cumulative elution curves generated in multiple-sample percolation experiments, and does not require knowledge of the soil structure. A homogeneous or uniform distribution of a solute and/or soil-water is indicated by alpha = zeta = 1. Using these parameters, a heterogeneity index (HI) is defined as root 3 times the ratio of the standard deviation to the mean. Uniform or homogeneous flow of water or solutes is indicated by HI = 1, and heterogeneity by HI > 1. A large value of this index may indicate preferential flow. The heterogeneity index relies only on knowledge of the elution curves generated from multiple-sample percolation experiments and is therefore easily calculated. The index may also be used to describe and compare the differences in solute and soil-water percolation between different experiments. The use of this index is discussed for several different leaching experiments. (C) 1999 Elsevier Science B.V. All rights reserved.
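Since the standard beta distribution's mean and variance follow directly from its two shape parameters, the index defined above can be computed in a few lines once alpha and zeta have been fitted. A minimal sketch (the fitting of alpha and zeta to the elution curves is outside its scope):

```python
import math

def heterogeneity_index(alpha, zeta):
    """HI = sqrt(3) * (standard deviation / mean) of a Beta(alpha, zeta)
    distribution. HI = 1 indicates uniform flow (alpha = zeta = 1);
    HI > 1 indicates heterogeneity and possibly preferential flow."""
    mean = alpha / (alpha + zeta)
    var = alpha * zeta / ((alpha + zeta) ** 2 * (alpha + zeta + 1))
    return math.sqrt(3.0) * math.sqrt(var) / mean
```

The sqrt(3) factor is exactly what makes the uniform case come out to 1: for alpha = zeta = 1 the beta distribution is uniform on [0, 1], with mean 1/2 and standard deviation 1/(2 sqrt(3)).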
Abstract:
The Fornax Spectroscopic Survey will use the Two-degree Field spectrograph (2dF) of the Anglo-Australian Telescope to obtain spectra for a complete sample of all 14 000 objects with 16.5 less than or equal to b(j) less than or equal to 19.7 in a 12 square degree area centred on the Fornax Cluster. The aims of this project include the study of dwarf galaxies in the cluster (both known low surface brightness objects and putative normal surface brightness dwarfs) and a comparison sample of background field galaxies. We will also measure quasars and other active galaxies, any previously unrecognised compact galaxies, and a large sample of Galactic stars. By selecting all objects, both stars and galaxies, independent of morphology, we cover a much larger range of surface brightness and scale size than previous surveys. In this paper we first describe the design of the survey. Our targets are selected from UK Schmidt Telescope sky survey plates digitised by the Automated Plate Measuring (APM) facility. We then describe the photometric and astrometric calibration of these data and show that the APM astrometry is accurate enough for use with the 2dF. We also describe a general approach to object identification using cross-correlations which allows us to identify and classify both stellar and galaxy spectra. We present results from the first 2dF field. Redshift distributions and velocity structures are shown for all observed objects in the direction of Fornax, including Galactic stars, galaxies in and around the Fornax Cluster, and the background galaxy population. The velocity data for the stars show the contributions from the different Galactic components, plus a small tail to high velocities. We find no galaxies in the foreground to the cluster in our 2dF field. The Fornax Cluster is clearly defined kinematically. The mean velocity from the 26 cluster members having reliable redshifts is 1560 +/- 80 km s(-1), with a velocity dispersion of 380 +/- 50 km s(-1). Large-scale structure can be traced behind the cluster to a redshift beyond z = 0.3. Background compact galaxies and low surface brightness galaxies are found to follow the general galaxy distribution.
Abstract:
Rates of cell size increase are an important measure of success during the baculovirus infection process. Batch and fed-batch cultures sustain large fluctuations in osmolarity that can affect the measured cell volume if this parameter is not considered during the sizing protocol. Where osmolarity differences between the sizing diluent and the culture broth exist, biased measurements of size are obtained as a result of the cell's osmometer response. Spodoptera frugiperda (Sf9) cells are highly sensitive to volume change when subjected to a change in osmolarity. Use of a modified protocol with culture supernatants for sample dilution prior to sizing removed the observed measurement error.
Abstract:
The Fornax Cluster Spectroscopic Survey (FCSS) project utilizes the Two-degree Field (2dF) multi-object spectrograph on the Anglo-Australian Telescope (AAT). Its aim is to obtain spectra for a complete sample of all 14 000 objects with 16.5 less than or equal to b(j) less than or equal to 19.7, irrespective of their morphology, in a 12 deg(2) area centred on the Fornax cluster. A sample of 24 Fornax cluster members has been identified from the first 2dF field (3.1 deg(2) in area) to be completed. This is the first complete sample of cluster objects of known distance with well-defined selection limits. Nineteen of the galaxies (with -15.8 < M_B < -12.7) appear to be conventional dwarf elliptical (dE) or dwarf S0 (dS0) galaxies. The other five objects (with -13.6 < M_B < -11.3) are those galaxies which were described recently by Drinkwater et al. and labelled 'ultracompact dwarfs' (UCDs). A major result is that the conventional dwarfs all have scale sizes alpha greater than or similar to 3 arcsec (similar or equal to 300 pc). This apparent minimum scale size implies an equivalent minimum luminosity for a dwarf of a given surface brightness. This produces a limit on their distribution in the magnitude-surface brightness plane, such that we do not observe dEs with high surface brightnesses but faint absolute magnitudes. Above this observed minimum scale size of 3 arcsec, the dEs and dS0s fill the whole area of the magnitude-surface brightness plane sampled by our selection limits. The observed correlation between magnitude and surface brightness noted by several recent studies of brighter galaxies is not seen in our fainter cluster sample. A comparison of our results with the Fornax Cluster Catalog (FCC) of Ferguson illustrates that attempts to determine cluster membership solely on the basis of observed morphology can produce significant errors. The FCC identified 17 of the 24 FCSS sample (i.e. 71 per cent) as being cluster members, in particular missing all five of the UCDs. The FCC also suffers from significant contamination: within the FCSS's field and selection limits, 23 per cent of those objects described as cluster members by the FCC are shown by the FCSS to be background objects.
Abstract:
Overcommitment of development capacity and development resource deficiencies are important problems in new product development (NPD). Existing approaches to development resource planning have largely neglected the question of the magnitude of resources required for NPD. This research aims to fill the void by developing a simple higher-level aggregate model based on an intuitive idea: the number of new product families that a firm can effectively undertake is bounded by the complexity of its products or systems and the total amount of resources allocated to NPD. This study examines three manufacturing companies to verify the proposed model. The empirical results confirm the study's initial hypothesis: the more complex the product family, the smaller the number of product families that are launched per unit of revenue. Several suggestions and implications for managing NPD resources are discussed, such as how this study's model can establish an upper limit on the capacity to develop and launch new product families.
Abstract:
Aims: To estimate dementia prevalence and describe the etiology of dementia in a community sample from the city of Sao Paulo, Brazil. Methods: A sample of subjects older than 60 years was screened for dementia in the first phase. During the second phase, the diagnostic workup included a structured interview, physical and neurological examination, laboratory exams, a brain scan, and diagnosis by DSM-IV criteria. Results: Mean age was 71.5 years (n = 1,563), 68.7% of the sample was female, and 58.3% had up to 4 years of schooling. Dementia was diagnosed in 107 subjects, an observed prevalence of 6.8%. The estimated dementia prevalence was 12.9%, considering design effect, nonresponse during the community phase, and positive and negative predictive values. Alzheimer's disease was the most frequent cause of dementia (59.8%), followed by vascular dementia (15.9%). Older age and illiteracy were significantly associated with dementia. Conclusions: The estimated dementia prevalence was higher than previously reported in Brazil, with Alzheimer's disease and vascular dementia the most frequent causes. Dementia prevalence in Brazil and in other Latin American countries should be addressed by additional studies to confirm these higher rates, which might have a sizable impact on countries' health services. Copyright (C) 2008 S. Karger AG, Basel