99 results for sample complexity
Abstract:
This letter presents an effective approach for selecting an appropriate terrain modeling method when forming a digital elevation model (DEM). This approach achieves a balance between modeling accuracy and modeling speed. A terrain complexity index is defined to represent a terrain's complexity. A support vector machine (SVM) classifies terrain surfaces as either complex or moderate based on this index together with the terrain elevation range. The classification result recommends a terrain modeling method for a given data set in accordance with its required modeling accuracy. Sample terrain data from the lunar surface are used in constructing an experimental data set. The results show that the terrain complexity index properly reflects the terrain complexity, and that the SVM classifier derived from both the terrain complexity index and the terrain elevation range is more effective and generic than one designed from either the terrain complexity index or the terrain elevation range alone. The statistical results show that the average classification accuracy of the SVMs is about 84.3% ± 0.9% across terrain types (complex or moderate). For various ratios of complex to moderate terrain in a selected data set, the DEM modeling speed increases by up to 19.5% at a given DEM accuracy.
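As an illustration of this kind of classifier (not the authors' implementation), the sketch below trains an RBF-kernel SVM on two features per terrain patch, a complexity index and an elevation range. The data, labels and threshold are synthetic placeholders, and scikit-learn is assumed.

```python
# Minimal sketch: SVM classification of terrain patches as "complex" (1) or
# "moderate" (0) from a complexity index and the elevation range.
# All feature values and labels below are synthetic placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n = 200
complexity_index = rng.uniform(0.0, 1.0, n)        # hypothetical index values
elevation_range = rng.uniform(10.0, 500.0, n)      # hypothetical range in metres
X = np.column_stack([complexity_index, elevation_range])

# Placeholder labelling rule standing in for the ground truth.
y = (0.7 * complexity_index + 0.3 * (elevation_range / 500.0) > 0.5).astype(int)

# RBF-kernel SVM with feature scaling; accuracy estimated by cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```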
Abstract:
A series of imitation games involving 3-participant (simultaneous comparison of two hidden entities) and 2-participant (direct interrogation of a hidden entity) formats was conducted at Bletchley Park on the 100th anniversary of Alan Turing's birth: 23 June 2012. From the ongoing analysis of over 150 games involving judges (expert and non-expert, male and female, adult and child), machines and hidden humans (foils for the machines), we present six particular conversations that took place between human judges and a hidden entity and that produced unexpected results. From this sample we focus on a feature of Turing's machine intelligence test that the mathematician and code breaker did not consider in his examination of machine thinking: the subjective nature of attributing intelligence to another mind.
Abstract:
Developing models to predict the effects of social and economic change on agricultural landscapes is an important challenge. Model development often involves deciding which aspects of the system require detailed description and which are reasonably insensitive to the assumptions. However, important components of the system are often left out because parameter estimates are unavailable. In particular, the relative influence of different objectives, such as risk or environmental management, on farmer decision making has proven difficult to quantify. We describe a model that can predict land use on the basis of profit alone or with the inclusion of explicit additional objectives. Importantly, our model is specifically designed to use parameter estimates for additional objectives obtained via farmer interviews. By statistically comparing the outputs of this model with a large farm-level land-use data set, we show that cropping patterns in the United Kingdom contain a significant contribution from farmers' preferences for objectives other than profit. In particular, we found that risk aversion had an effect on the accuracy of model predictions, whereas preference for growing a particular number of crops was less important. While non-profit objectives have frequently been identified as factors in farmers' decision making, our results take this analysis further by demonstrating the relationship between these preferences and actual cropping patterns.
Abstract:
Mean field models (MFMs) of cortical tissue incorporate salient, average features of neural masses in order to model activity at the population level, thereby linking microscopic physiology to macroscopic observations, e.g., with the electroencephalogram (EEG). One of the common aspects of MFM descriptions is the presence of a high-dimensional parameter space capturing neurobiological attributes deemed relevant to the brain dynamics of interest. We study the physiological parameter space of a MFM of electrocortical activity and discover robust correlations between physiological attributes of the model cortex and its dynamical features. These correlations are revealed by the study of bifurcation plots, which show that the model responses to changes in inhibition belong to two archetypal categories or “families”. After investigating and characterizing them in depth, we discuss their essential differences in terms of four important aspects: power responses with respect to the modeled action of anesthetics, reaction to exogenous stimuli such as thalamic input, and distributions of model parameters and oscillatory repertoires when inhibition is enhanced. Furthermore, while the complexity of sustained periodic orbits differs significantly between families, we are able to show how metamorphoses between the families can be brought about by exogenous stimuli. We here unveil links between measurable physiological attributes of the brain and dynamical patterns that are not accessible by linear methods. They instead emerge when the nonlinear structure of parameter space is partitioned according to bifurcation responses. We call this general method “metabifurcation analysis”. The partitioning cannot be achieved by the investigation of only a small number of parameter sets and is instead the result of an automated bifurcation analysis of a representative sample of 73,454 physiologically admissible parameter sets. Our approach generalizes straightforwardly and is well suited to probing the dynamics of other models with large and complex parameter spaces.
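The partitioning step can be pictured with a toy sketch (not the authors' pipeline): assuming each admissible parameter set has already yielded a bifurcation-response curve, for example a response amplitude swept against an inhibition parameter, the curves are clustered into families. All curve shapes and data below are synthetic placeholders, and scikit-learn is assumed.

```python
# Minimal sketch: grouping parameter sets into "families" by clustering their
# bifurcation-response curves. Each curve is a synthetic stand-in for, e.g.,
# oscillation amplitude versus an inhibition parameter.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
inhibition = np.linspace(0.5, 1.5, 50)   # swept bifurcation parameter

def toy_curve(kind):
    """Two archetypal response shapes plus noise (placeholders only)."""
    if kind == 0:
        base = np.tanh(5 * (inhibition - 1.0))    # smooth transition
    else:
        base = (inhibition > 1.1).astype(float)   # abrupt onset
    return base + 0.05 * rng.standard_normal(inhibition.size)

curves = np.array([toy_curve(rng.integers(2)) for _ in range(1000)])

# Cluster the response curves into two families.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(curves)
print("family sizes:", np.bincount(labels))
```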
Abstract:
A recently proposed mean-field theory of mammalian cortex rhythmogenesis describes the salient features of electrical activity in the cerebral macrocolumn, with the use of inhibitory and excitatory neuronal populations (Liley et al 2002). This model is capable of producing a range of important human EEG (electroencephalogram) features such as the alpha rhythm, the 40 Hz activity thought to be associated with conscious awareness (Bojak & Liley 2007) and the changes in EEG spectral power associated with general anesthetic effect (Bojak & Liley 2005). From the point of view of nonlinear dynamics, the model entails a vast parameter space within which multistability, pseudoperiodic regimes, various routes to chaos, fat fractals and rich bifurcation scenarios occur for physiologically relevant parameter values (van Veen & Liley 2006). The origin and the character of this complex behaviour, and its relevance for EEG activity will be illustrated. The existence of short-lived unstable brain states will also be discussed in terms of the available theoretical and experimental results. A perspective on future analysis will conclude the presentation.
Abstract:
This article reviews the use of complexity theory in planning theory using the theory of metaphors for theory transfer and theory construction. The introduction to the article presents the author's positioning of planning theory. The first section thereafter provides a general background of the trajectory of development of complexity theory and discusses the rationale of using the theory of metaphors for evaluating the use of complexity theory in planning. The second section introduces the workings of metaphors in general and theory-constructing metaphors in particular, drawing out an understanding of how to proceed with an evaluative approach towards an analysis of the use of complexity theory in planning. The third section presents two case studies – reviews of two articles – to illustrate how the framework might be employed. It then discusses the implications of the evaluation for the question ‘can complexity theory contribute to planning?’ The concluding section discusses the employment of the ‘theory of metaphors’ for evaluating theory transfer and draws out normative suggestions for engaging in theory transfer using the metaphorical route.
Abstract:
Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate–carbon feedbacks are overestimated resulting in too much land carbon loss or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several one thousand year long, idealized, 2 × and 4 × CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities, and climate–carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the preindustrial portions of the last millennium simulations are used to assess historical model carbon-climate feedbacks. Given the specified forcing, there is a tendency for the EMICs to underestimate the drop in surface air temperature and CO2 between the Medieval Climate Anomaly and the Little Ice Age estimated from palaeoclimate reconstructions. This in turn could be a result of unforced variability within the climate system, uncertainty in the reconstructions of temperature and CO2, errors in the reconstructions of forcing used to drive the models, or the incomplete representation of certain processes within the models. Given the forcing datasets used in this study, the models calculate significant land-use emissions over the pre-industrial period. This implies that land-use emissions might need to be taken into account, when making estimates of climate–carbon feedbacks from palaeoclimate reconstructions.
Abstract:
Background: Expression microarrays are increasingly used to obtain large-scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, to design experiments and to analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples. Results: We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intra-array variability is small (only around 2% of the mean log signal), while inter-array variability from replicate array measurements has a standard deviation (SD) of around 0.5 log2 units (6% of the mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identify an underlying structure which reflects some of the key biological variables that define the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years and by a variety of operators. Conclusions: This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and inter-array variability, and a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
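The described pipeline can be approximated along the following lines (a minimal Python sketch, not the paper's R function): log2-transform the signal, apply a simple per-array normalisation, and use multidimensional scaling to look for structure among samples. The expression matrix is synthetic and scikit-learn is assumed.

```python
# Minimal sketch: log-transform, median-normalise and ordinate a synthetic
# expression matrix with multidimensional scaling (MDS).
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(2)

# Synthetic signal matrix: rows = genes, columns = arrays/samples.
signal = rng.lognormal(mean=6.0, sigma=1.0, size=(2000, 12))

log_signal = np.log2(signal)

# Simple per-array normalisation: subtract each array's median log signal.
normalised = log_signal - np.median(log_signal, axis=0, keepdims=True)

# Metric MDS on Euclidean distances between arrays (samples as rows).
coords = MDS(n_components=2, dissimilarity="euclidean",
             random_state=0).fit_transform(normalised.T)
print(coords.shape)   # (12, 2): one 2-D point per array
```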
Abstract:
The self-assembly of proteins and peptides into β-sheet-rich amyloid fibers is a process that has gained notoriety because of its association with human diseases and disorders. Spontaneous self-assembly of peptides into nonfibrillar supramolecular structures can also provide a versatile and convenient mechanism for the bottom-up design of biocompatible materials with functional properties favoring a wide range of practical applications.[1] One subset of these fascinating and potentially useful nanoscale constructions are the peptide nanotubes, elongated cylindrical structures with a hollow center bounded by a thin wall of peptide molecules.[2] A formidable challenge in optimizing and harnessing the properties of nanotube assemblies is to gain atomistic insight into their architecture, and to elucidate precisely how the tubular morphology is constructed from the peptide building blocks. Some of these fine details have been elucidated recently with the use of magic-angle-spinning (MAS) solid-state NMR (SSNMR) spectroscopy.[3] MAS SSNMR measurements of chemical shifts and through-space interatomic distances provide constraints on peptide conformation (e.g., β-strands and turns) and quaternary packing. We describe here a new application of a straightforward SSNMR technique which, when combined with FTIR spectroscopy, reports quantitatively on the orientation of the peptide molecules within the nanotube structure, thereby providing an additional structural constraint not accessible to MAS SSNMR.
Abstract:
Prism is a modular classification rule generation method based on the ‘separate and conquer’ approach, an alternative to the rule induction approach using decision trees, also known as ‘divide and conquer’. Prism often achieves a similar level of classification accuracy to decision trees, but tends to produce a more compact, noise-tolerant set of classification rules. As with other classification rule generation methods, a principal problem arising with Prism is overfitting due to over-specialised rules. In addition, over-specialised rules increase the associated computational complexity. These problems can be addressed by pruning methods. For the Prism method, two pruning algorithms have recently been introduced for reducing overfitting of classification rules: J-pruning and Jmax-pruning. Both algorithms are based on the J-measure, an information-theoretic means of quantifying the theoretical information content of a rule. Jmax-pruning attempts to exploit the J-measure to its full potential, because J-pruning does not actually achieve this and may even lead to underfitting. A series of experiments has shown that Jmax-pruning may outperform J-pruning in reducing overfitting. However, Jmax-pruning is computationally relatively expensive and may also lead to underfitting. This paper reviews the Prism method and the two existing pruning algorithms above. It also proposes a novel pruning algorithm called Jmid-pruning. The latter is based on the J-measure and reduces overfitting to a similar level as the other two algorithms, while better avoiding underfitting and unnecessary computational effort. The authors conduct an experimental study of the performance of the Jmid-pruning algorithm in terms of classification accuracy and computational efficiency. The algorithm is also evaluated comparatively with the J-pruning and Jmax-pruning algorithms.
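For context, the J-measure that all three pruning algorithms rely on can be computed as in the sketch below (a minimal illustration, not the authors' implementation); the probabilities in the example are hypothetical.

```python
# Minimal sketch: the J-measure of a rule "IF antecedent THEN class y",
# used by J-pruning-style methods to score the information content of a
# rule while it is being specialised.
import math

def j_measure(p_x, p_y, p_y_given_x):
    """J(Y;X) = p(x) * [ p(y|x)*log2(p(y|x)/p(y))
                         + (1-p(y|x))*log2((1-p(y|x))/(1-p(y))) ]."""
    def term(p, q):
        return 0.0 if p == 0.0 else p * math.log2(p / q)
    return p_x * (term(p_y_given_x, p_y) + term(1.0 - p_y_given_x, 1.0 - p_y))

# Hypothetical rule: antecedent covers 30% of the data, the class prior is
# 50%, and 90% of covered examples belong to the class.
print(round(j_measure(p_x=0.3, p_y=0.5, p_y_given_x=0.9), 4))
```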
Abstract:
An extensive off-line evaluation of the Noah/Single Layer Urban Canopy Model (Noah/SLUCM) urban land-surface model is presented using data from 15 sites to assess (1) the ability of the scheme to reproduce the surface energy balance observed in a range of urban environments, including seasonal changes, and (2) the impact of increasing complexity of input parameter information. Model performance is found to be most dependent on representation of vegetated surface area cover; refinement of other parameter values leads to smaller improvements. Model biases in net all-wave radiation and trade-offs between turbulent heat fluxes are highlighted using an optimization algorithm. Here we use the Urban Zones to characterize Energy partitioning (UZE) as the basis to assign default SLUCM parameter values. A methodology (FRAISE) to assign sites (or areas) to one of these categories based on surface characteristics is evaluated. Using three urban sites from the Basel Urban Boundary Layer Experiment (BUBBLE) dataset, an independent evaluation of the model performance with the parameter values representative of each class is performed. The scheme copes well with both seasonal changes in the surface characteristics and intra-urban heterogeneities in energy flux partitioning, with RMSE performance comparable to similar state-of-the-art models for all fluxes, sites and seasons. The potential of the methodology for high-resolution atmospheric modelling application using the Weather Research and Forecasting (WRF) model is highlighted. This analysis supports the recommendations that (1) three classes are appropriate to characterize the urban environment, and (2) that the parameter values identified should be adopted as default values in WRF.
Abstract:
We investigated selective impairments in the production of regular and irregular past tense by examining language performance and lesion sites in a sample of twelve stroke patients. A disadvantage in regular past tense production was observed in six patients when phonological complexity was greater for regular than irregular verbs, and in three patients when phonological complexity was closely matched across regularity. These deficits were not consistently related to grammatical difficulties or phonological errors but were consistently related to lesion site. All six patients with a regular past tense disadvantage had damage to the left ventral pars opercularis (in the inferior frontal cortex), an area associated with articulatory sequencing in prior functional imaging studies. In addition, those that maintained a disadvantage for regular verbs when phonological complexity was controlled had damage to the left ventral supramarginal gyrus (in the inferior parietal lobe), an area associated with phonological short-term memory. When these frontal and parietal regions were spared in patients who had damage to subcortical (n = 2) or posterior temporo-parietal regions (n = 3), past tense production was relatively unimpaired for both regular and irregular forms. The remaining (12th) patient was impaired in producing regular past tense but was significantly less accurate when producing irregular past tense. This patient had frontal, parietal, subcortical and posterior temporo-parietal damage, but was distinguished from the other patients by damage to the left anterior temporal cortex, an area associated with semantic processing. We consider how our lesion site and behavioral observations have implications for theoretical accounts of past tense production.
Abstract:
This paper considers the effect of using a GARCH filter on the properties of the BDS test statistic as well as a number of other issues relating to the application of the test. It is found that, for certain values of the user-adjustable parameters, the finite sample distribution of the test is far-removed from asymptotic normality. In particular, when data generated from some completely different model class are filtered through a GARCH model, the frequency of rejection of iid falls, often substantially. The implication of this result is that it might be inappropriate to use non-rejection of iid of the standardised residuals of a GARCH model as evidence that the GARCH model ‘fits’ the data.
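The general procedure, fitting a GARCH(1,1) filter, standardising the residuals and then applying the BDS test, might be sketched as follows. This is an illustration rather than the paper's experiments, and it assumes the third-party arch and statsmodels packages.

```python
# Minimal sketch: GARCH(1,1) filtering of a return series followed by a BDS
# test of the i.i.d. hypothesis on the standardised residuals.
# The return series is a synthetic placeholder.
import numpy as np
from arch import arch_model
from statsmodels.tsa.stattools import bds

rng = np.random.default_rng(3)
returns = 100 * rng.standard_normal(1000)   # placeholder (i.i.d. by construction)

# GARCH(1,1) filter with a constant mean.
res = arch_model(returns, mean="Constant", vol="Garch", p=1, q=1).fit(disp="off")
std_resid = res.resid / res.conditional_volatility

# BDS statistics and p-values for embedding dimensions 2 and 3.
stat, pvalue = bds(std_resid, max_dim=3)
print("BDS statistics:", np.round(stat, 3))
print("p-values:      ", np.round(pvalue, 3))
```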
Abstract:
Greater self-complexity has been suggested as a protective factor for people under stress (Linville, 1985). Two different measures have been proposed to assess individual self-complexity: Attneave’s H statistic (1959) and a composite index of two components of self-complexity (SC; Rafaeli-Mor et al., 1999). Using mood-incongruent recall, i.e., recalling positive events while in a negative mood, the present study compared the validity of the two measures through a reanalysis of Sakaki’s (2004) data. Results indicated that the H statistic did not predict performance in mood-incongruent recall. In contrast, greater SC was associated with better mood-incongruent recall even when the effect of the H statistic was controlled.
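One common formulation of the H statistic in the self-complexity literature is H = log2(n) − (Σ_i n_i log2 n_i)/n, where n is the number of traits sorted and n_i is the number of traits sharing the i-th distinct pattern of self-aspect membership. The sketch below computes it from a trait sort; it is not the study's scoring code, and the aspect and trait names are hypothetical.

```python
# Minimal sketch: a dimensionality (H) statistic computed from a card-sort of
# traits into self-aspects. Traits sharing the same membership pattern across
# aspects are grouped before the entropy-style sum is taken.
import math
from collections import Counter

def self_complexity_h(aspects, traits):
    """aspects: dict mapping self-aspect name -> set of traits placed in it."""
    patterns = Counter(
        frozenset(name for name, members in aspects.items() if t in members)
        for t in traits
    )
    n = len(traits)
    return math.log2(n) - sum(c * math.log2(c) for c in patterns.values()) / n

# Hypothetical sort of 6 traits into 3 self-aspects.
traits = ["kind", "organised", "anxious", "funny", "ambitious", "quiet"]
aspects = {
    "work":   {"organised", "ambitious", "anxious"},
    "family": {"kind", "funny"},
    "alone":  {"quiet", "anxious"},
}
print(round(self_complexity_h(aspects, traits), 3))
```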
Abstract:
The projected hand illusion (PHI) is a variant of the rubber hand illusion (RHI), and both are commonly used to study mechanisms of self-perception. A questionnaire was developed by Longo et al. (2008) to measure qualitative changes in the RHI. Such psychometric analyses have not yet been conducted on the questionnaire for the PHI. The present study attempts to validate minor modifications of the questionnaire of Longo et al. to assess the PHI in a community sample (n = 48) and to determine the association with selected demographic (age, sex, years of education), cognitive (Digit Span), and clinical (psychotic-like experiences) variables. Principal components analysis on the questionnaire data extracted four components in both synchronous and asynchronous PHI conditions: Embodiment of “Other” Hand, Disembodiment of Own Hand, Deafference, and Agency. Questions assessing “Embodiment” and “Agency” loaded onto orthogonal components. Greater illusion ratings were positively associated with being female, being younger, and having higher scores on psychotic-like experiences. There was no association with cognitive performance. Overall, this study confirmed that self-perception as measured with the PHI is a multicomponent construct, similar in many respects to the RHI. The main difference lies in the separation of Embodiment and Agency into separate constructs, which likely reflects the fact that the “live” image of the PHI presents a more realistic picture of the hand and of the experimenter’s stroking movements than the RHI does.
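As a rough illustration of the component-extraction step (not the study's analysis), the sketch below runs a principal components analysis on synthetic Likert-style ratings; the participant count mirrors the abstract, but the items and data are placeholders, and scikit-learn is assumed.

```python
# Minimal sketch: PCA of synthetic questionnaire ratings, standing in for the
# four-component extraction described above.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)

# 48 participants x 10 questionnaire items, rated on a -3..+3 scale.
ratings = rng.integers(-3, 4, size=(48, 10)).astype(float)

# Standardise items, then extract four components.
pca = PCA(n_components=4).fit(StandardScaler().fit_transform(ratings))
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 2))
print("loadings shape:", pca.components_.shape)   # (4 components, 10 items)
```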