99 results for sample complexity
Abstract:
Complexity is integral to planning today. Everyone and everything seem to be interconnected, causality appears ambiguous, unintended consequences are ubiquitous, and information overload is a constant challenge. The nature of complexity, the consequences of it for society, and the ways in which one might confront it, understand it and deal with it in order to allow for the possibility of planning, are issues increasingly demanding analytical attention. One theoretical framework that can potentially assist planners in this regard is Luhmann's theory of autopoiesis. This article uses insights from Luhmann's ideas to understand the nature of complexity and its reduction, thereby redefining issues in planning, and explores the ways in which management of these issues might be observed in actual planning practice via a reinterpreted case study of the People's Planning Campaign in Kerala, India. Overall, this reinterpretation leads to a different understanding of the scope of planning and planning practice, telling a story about complexity and systemic response. It allows the reinterpretation of otherwise familiar phenomena, both highlighting the empirical relevance of the theory and providing new and original insight into particular dynamics of the case study. This not only provides a greater understanding of the dynamics of complexity, but also produces advice to help planners implement structures and processes that can cope with complexity in practice.
Abstract:
The Stochastic Diffusion Search algorithm, an integral part of Stochastic Search Networks, is investigated. Stochastic Diffusion Search offers an alternative approach to invariant pattern recognition and focus of attention. It has been shown that the algorithm can be modelled as an ergodic, finite-state Markov chain under some non-restrictive assumptions. Sub-linear time complexity for some parameter settings has been formulated and proved. Some properties of the algorithm are then characterised, and numerical examples illustrating its features are presented.
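The paper itself offers a Markov-chain analysis rather than code, but a minimal Python sketch of the standard Stochastic Diffusion Search loop (a test phase followed by a diffusion phase) may help fix ideas. The string-search task, agent count and function names below are illustrative assumptions, not details taken from the paper.

```python
import random

def sds_search(text, pattern, n_agents=100, n_iterations=200, seed=0):
    """Minimal Stochastic Diffusion Search: locate `pattern` inside `text`.

    Each agent holds a hypothesis (a candidate start position). In the test
    phase an agent checks one randomly chosen character of the pattern against
    the text; in the diffusion phase inactive agents copy hypotheses from
    randomly polled active agents, or re-seed at random.
    """
    rng = random.Random(seed)
    positions = range(len(text) - len(pattern) + 1)
    hypotheses = [rng.choice(positions) for _ in range(n_agents)]
    active = [False] * n_agents

    for _ in range(n_iterations):
        # Test phase: partial evaluation of each hypothesis.
        for i, h in enumerate(hypotheses):
            j = rng.randrange(len(pattern))
            active[i] = (text[h + j] == pattern[j])

        # Diffusion phase: inactive agents poll a random agent.
        for i in range(n_agents):
            if not active[i]:
                k = rng.randrange(n_agents)
                if active[k]:
                    hypotheses[i] = hypotheses[k]          # adopt the polled hypothesis
                else:
                    hypotheses[i] = rng.choice(positions)  # re-seed at random

    # The largest cluster of agents indicates the best-supported position.
    return max(set(hypotheses), key=hypotheses.count)

print(sds_search("the quick brown fox jumps over the lazy dog", "lazy"))  # expected: 35
```

Because each test evaluates only one component of the pattern, the per-iteration cost stays low, which is the intuition behind the sub-linear time complexity results the paper discusses.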
Abstract:
Recent excavations at Pre-Pottery Neolithic A (PPNA) WF16 in southern Jordan have revealed remarkable evidence of architectural developments in the early Neolithic. This sheds light on both special purpose structures and “domestic” settlement, allowing fresh insights into the development of increasingly sedentary communities and the social systems they supported. The development of sedentary communities is a central part of the Neolithic process in Southwest Asia. Architecture and ideas of homes and households have been important to the debate, although there has also been considerable discussion on the role of communal buildings and the organization of early sedentarizing communities since the discovery of the tower at Jericho. Recently, the focus has been on either northern Levantine PPNA sites, such as Jerf el Ahmar, or the emergence of ritual buildings in the Pre-Pottery Neolithic B of the southern Levant. Much of the debate revolves around a division between what is interpreted as domestic space, contrasted with “special purpose” buildings. Our recent evidence allows a fresh examination of the nature of early Neolithic communities.
Abstract:
The aim of this study was, within a sensitivity analysis framework, to determine whether additional model complexity gives a better capability to model the hydrology and nitrogen dynamics of a small Mediterranean forested catchment, or whether the additional parameters cause over-fitting. Three nitrogen models of varying hydrological complexity were considered. For each model, general sensitivity analysis (GSA) and Generalized Likelihood Uncertainty Estimation (GLUE) were applied, each based on 100,000 Monte Carlo simulations. The results highlighted the most complex structure as the most appropriate, providing the best representation of the non-linear patterns observed in the flow and streamwater nitrate concentrations between 1999 and 2002. Its 5% and 95% GLUE bounds, obtained using a multi-objective approach, provide the narrowest band for streamwater nitrogen, which suggests increased model robustness, though all models exhibit periods in which the fit between simulated outcomes and observed data alternates between good and poor. The results confirm the importance of the riparian zone in controlling the short-term (daily) streamwater nitrogen dynamics in this catchment, but not the overall flux of nitrogen from the catchment. It was also shown that as the complexity of a hydrological model increases, over-parameterisation occurs; the converse is true for a water quality model, where additional process representation leads to additional acceptable model simulations. Water quality data help constrain the hydrological representation in process-based models, and the increased complexity was justifiable for modelling river-system hydrochemistry.
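The abstract describes GLUE bounds built from 100,000 Monte Carlo runs. A minimal Python sketch of a GLUE-style analysis is given below; the Nash-Sutcliffe likelihood, the behavioural threshold, the toy recession model and all function names are assumptions for illustration, not details of the study.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, a common informal likelihood measure in GLUE."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def glue_bounds(run_model, obs, param_ranges, n_runs=100_000, threshold=0.5, seed=42):
    """GLUE-style uncertainty analysis (sketch).

    run_model(params) returns a simulated series the same length as obs;
    param_ranges is a list of (low, high) tuples sampled uniformly. Runs
    scoring above `threshold` are retained as behavioural and used to build
    likelihood-weighted 5% and 95% bounds at each time step.
    """
    rng = np.random.default_rng(seed)
    behavioural, weights = [], []
    for _ in range(n_runs):
        params = [rng.uniform(lo, hi) for lo, hi in param_ranges]
        sim = run_model(params)
        score = nse(sim, obs)
        if score > threshold:
            behavioural.append(sim)
            weights.append(score)
    sims = np.asarray(behavioural)
    w = np.asarray(weights) / np.sum(weights)

    def weighted_quantile(values, q):
        order = np.argsort(values)
        cdf = np.cumsum(w[order])
        return values[order][min(np.searchsorted(cdf, q), len(values) - 1)]

    lower = np.array([weighted_quantile(sims[:, t], 0.05) for t in range(sims.shape[1])])
    upper = np.array([weighted_quantile(sims[:, t], 0.95) for t in range(sims.shape[1])])
    return lower, upper

# Toy usage with a two-parameter exponential-recession 'model' and synthetic data.
t = np.arange(100)
obs = 5.0 * np.exp(-0.05 * t) + np.random.default_rng(0).normal(scale=0.2, size=t.size)
toy_model = lambda p: p[0] * np.exp(-p[1] * t)
lo, hi = glue_bounds(toy_model, obs, [(1.0, 10.0), (0.01, 0.1)], n_runs=5000)
print(lo[:3], hi[:3])
```

The width of the resulting band is what the abstract uses to compare model structures: a narrower 5%/95% envelope that still encloses the observations suggests a more robust model.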
Abstract:
This paper examined the incidence of intrafirm causal ambiguity in management's perceptions of the critical drivers of their firms' performance. Building on insights from the resource-based view, we developed and tested hypotheses that examine (1) linkage ambiguity as a discrepancy between perceived and measured resource–performance linkages, (2) characteristic ambiguity for resources and capabilities with a high degree of complexity and tacitness, and (3) the negative association between linkage ambiguity and performance. The observations based on the explicit perceptions of 356 surveyed managers were contrasted with the empirical findings on the resource–performance relationship derived by structural equation modelling from the same data sample. The findings validate the presence of linkage ambiguity, particularly in the case of resources and capabilities with a higher degree of characteristic ambiguity. The findings also provide empirical evidence in support of a negative relationship between intrafirm causal ambiguity and performance. The paper discusses the potential reasons for the disparities between empirical findings and management's perceptions of the key determinants of export success and makes recommendations for future research.
Abstract:
Recent research in social neuroscience proposes a link between the mirror neuron system (MNS) and social cognition. The MNS has been proposed to be the neural mechanism underlying action recognition, intention understanding and, more broadly, social cognition. The pre-motor MNS has been suggested to modulate the motor cortex during action observation. This modulation results in enhanced cortico-motor excitability, reflected in increased motor evoked potentials (MEPs) at the muscle of interest during action observation. Anomalous MNS activity has been reported in the autistic population, whose social skills are notably impaired. It is still an open question whether traits of autism in the normal population are linked to MNS functioning. We measured TMS-induced MEPs in normal individuals with high and low traits of autism, as measured by the autistic quotient (AQ), while they observed videos of hand or mouth actions, static images of a hand or mouth, or a blank screen. No differences were observed between the two groups while they observed a blank screen. However, participants with low traits of autism showed significantly greater MEP amplitudes during observation of hand/mouth actions relative to static hand/mouth stimuli. In contrast, participants with high traits of autism did not show such a MEP amplitude difference between observation of actions and static stimuli. These results are discussed with reference to MNS functioning.
Abstract:
This paper presents practical approaches to the problem of sample size re-estimation in clinical trials with survival data when proportional hazards can be assumed. When data on the full range of survival experiences across the recruited patients are readily available at the time of the review, it is shown that, as expected, performing a blinded re-estimation procedure is straightforward and can help to maintain the trial's pre-specified error rates. Two alternative methods for dealing with the situation where limited survival experience is available at the time of the sample size review are then presented and compared. In this instance, extrapolation is required in order to undertake the sample size re-estimation. Worked examples, together with results from a simulation study, are described. It is concluded that, as in the standard case, use of either extrapolation approach successfully protects the trial error rates.
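The abstract does not spell out the re-estimation formulae, but under proportional hazards such procedures are typically driven by the required number of events. As background, here is a hedged Python sketch of Schoenfeld's event formula and its translation into a patient total via an assumed overall event probability (the quantity a blinded review would typically re-estimate); the function names and the numerical example are illustrative, not taken from the paper.

```python
from statistics import NormalDist
import math

def required_events(hazard_ratio, alpha=0.05, power=0.8, allocation=0.5):
    """Schoenfeld's formula for the number of events needed to detect
    `hazard_ratio` in a two-arm trial under proportional hazards
    (two-sided test at level `alpha`); `allocation` is the proportion
    of patients randomised to the experimental arm."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return (z_alpha + z_beta) ** 2 / (
        allocation * (1 - allocation) * math.log(hazard_ratio) ** 2
    )

def required_patients(n_events, event_probability):
    """Convert an event target into a patient total, given an assumed overall
    probability of observing an event by the end of follow-up."""
    return n_events / event_probability

events = required_events(hazard_ratio=0.75, alpha=0.05, power=0.9)
print(round(events))                                            # about 508 events
print(round(required_patients(events, event_probability=0.6)))  # about 846 patients
```

A blinded re-estimation leaves the event target untouched and revises only the pooled event probability (or accrual and follow-up assumptions), which is why it can preserve the pre-specified error rates.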
Abstract:
Data assimilation refers to the problem of finding trajectories of a prescribed dynamical model in such a way that the output of the model (usually some function of the model states) follows a given time series of observations. Typically, though, these two requirements cannot both be met at the same time: tracking the observations is not possible without the trajectory deviating from the proposed model equations, while adherence to the model requires deviations from the observations. Thus, data assimilation faces a trade-off. In this contribution, the sensitivity of the data assimilation with respect to perturbations in the observations is identified as the parameter which controls the trade-off. A relation between the sensitivity and the out-of-sample error is established, which allows the latter to be calculated under operational conditions. A minimum out-of-sample error is proposed as a criterion to set an appropriate sensitivity and to settle the discussed trade-off. Two approaches to data assimilation are considered, namely variational data assimilation and Newtonian nudging, also known as synchronization. Numerical examples demonstrate the feasibility of the approach.
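As an illustration of Newtonian nudging (synchronization), the sketch below adds a relaxation term towards the observations to one component of a simple chaotic model. The Lorenz-63 system, the gain value and the forward Euler integrator are assumptions chosen for brevity, not details from the paper; the gain plays the role of the sensitivity discussed above, with larger values tracking the observations more closely at the cost of larger departures from the model equations.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 model."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def nudged_run(observations, dt=0.01, gain=5.0, x0=None):
    """Newtonian nudging: integrate the model while relaxing the first
    component towards the observed time series with strength `gain`.
    gain = 0 recovers the free model run."""
    state = np.array([1.0, 1.0, 1.0]) if x0 is None else np.asarray(x0, float)
    trajectory = []
    for obs in observations:
        tendency = lorenz63(state)
        tendency[0] += gain * (obs - state[0])   # nudging term on the observed component
        state = state + dt * tendency            # forward Euler step (for brevity)
        trajectory.append(state.copy())
    return np.asarray(trajectory)

# Synthetic example: noisy observations of x taken from a 'truth' run.
rng = np.random.default_rng(0)
truth = np.array([5.0, 5.0, 25.0])
obs = []
for _ in range(2000):
    truth = truth + 0.01 * lorenz63(truth)
    obs.append(truth[0] + rng.normal(scale=0.5))
analysis = nudged_run(obs, gain=5.0)
print(np.mean(np.abs(analysis[:, 0] - np.asarray(obs))))  # mean tracking error in x
```

Sweeping the gain and monitoring an out-of-sample error (e.g. on withheld observations) is one simple way to realise the trade-off criterion the abstract describes.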
Abstract:
The detection of long-range dependence in time series analysis is an important task, to which this paper contributes by showing that, whilst the theoretical definition of a long-memory (or long-range dependent) process is based on the autocorrelation function, it is not possible for long memory to be identified using the sum of the sample autocorrelations as usually defined. The reason for this is that the sample sum is a predetermined constant for any stationary time series, a result that is independent of the sample size. Diagnostic or estimation procedures, such as those in the frequency domain, that embed this sum are equally open to this criticism. We develop this result in the context of long memory, extending it to the implications for the spectral density function and the variance of partial sums of a stationary stochastic process. The results are further extended to higher-order sample autocorrelations and the bispectral density. The corresponding result is that the sum of the third-order sample (auto)bicorrelations at lags h, k ≥ 1 is also a predetermined constant, different from that in the second-order case, for any stationary time series of arbitrary length.
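The abstract does not quote the value of the constant, but with the usual biased, mean-centred estimator the sum of the sample autocorrelations over lags 1 to n-1 equals -1/2 for any series, which is easy to verify numerically; the short Python check below is an illustration under that assumption (the estimator and the example series are not taken from the paper).

```python
import numpy as np

def sample_autocorrelations(x):
    """Sample autocorrelations rho_hat(1), ..., rho_hat(n-1) with the usual
    biased, mean-centred estimator (divisor n in the autocovariances)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = x - x.mean()
    denom = np.sum(d ** 2)
    return np.array([np.sum(d[:n - k] * d[k:]) / denom for k in range(1, n)])

rng = np.random.default_rng(1)
for series in (rng.normal(size=200),                 # white noise
               np.cumsum(rng.normal(size=200)),      # random walk (strong dependence)
               np.sin(np.linspace(0, 20, 200))):     # deterministic cycle
    print(round(sample_autocorrelations(series).sum(), 10))
# Each sum is exactly -0.5, whatever the dependence structure of the series.
```

Because the sum is fixed by construction, no amount of data can make it reflect the slowly decaying autocorrelations that define long memory, which is the core of the paper's argument.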
Abstract:
This article examines selected methodological insights that complexity theory might provide for planning. In particular, it focuses on the concept of fractals and, through this concept, how ways of organising policy domains across scales might have particular causal impacts. The aim of this article is therefore twofold: (a) to position complexity theory within social science through a ‘generalised discourse’, thereby orienting it to particular ontological and epistemological biases and (b) to reintroduce a comparatively new concept – fractals – from complexity theory in a way that is consistent with the ontological and epistemological biases argued for, and expand on the contribution that this might make to planning. Complexity theory is theoretically positioned as a neo-systems theory with reasons elaborated. Fractal systems from complexity theory are systems that exhibit self-similarity across scales. This concept (as previously introduced by the author in ‘Fractal spaces in planning and governance’) is further developed in this article to (a) illustrate the ontological and epistemological claims for complexity theory, and to (b) draw attention to ways of organising policy systems across scales to emphasise certain characteristics of the systems – certain distinctions. These distinctions when repeated across scales reinforce associated processes/values/end goals resulting in particular policy outcomes. Finally, empirical insights from two case studies in two different policy domains are presented and compared to illustrate the workings of fractals in planning practice.
Abstract:
This article argues that a native-speaker baseline is a neglected dimension of studies into second language (L2) performance. If we investigate how learners perform language tasks, we should distinguish which performance features are due to their processing an L2 and which are due to their performing a particular task. Having defined what we mean by “native speaker,” we present the background to a research study into task features on nonnative task performance, designed to include native-speaker data as a baseline for interpreting nonnative-speaker performance. The nonnative results, published in this journal (Tavakoli & Foster, 2008), are recapitulated and then the native-speaker results are presented and discussed in the light of them. The study is guided by the assumption that limited attentional resources impact on L2 performance and explores how narrative design features, namely complexity of storyline and tightness of narrative structure, affect complexity, fluency, accuracy, and lexical diversity in language. The results show that both native and nonnative speakers are prompted by storyline complexity to use more subordinated language, but narrative structure had different effects on native and nonnative fluency. The learners, who were based in either London or Tehran, did not differ in their performance when compared to each other, except in lexical diversity, where the learners in London were close to native-speaker levels. The implications of the results for the applicability of Levelt’s model of speaking to an L2 are discussed, as is the potential for further L2 research using native speakers as a baseline.
Abstract:
The recession of mountain glaciers around the world has been linked to anthropogenic climate change, and small glaciers (e.g. < 2 km2) are thought to be particularly vulnerable, with reports of their disappearance from several regions. However, the response of small glaciers to climate change can be modulated by non-climatic factors such as topography and debris cover, and there remain a number of regions where their recent change has evaded scrutiny. This paper presents results of the first multi-year remote sensing survey of glaciers in the Kodar Mountains, the only glaciers in SE Siberia, which we compare to previous glacier inventories from this continental setting that reported total glacier areas of 18.8 km2 in ca. 1963 (12.6 km2 of exposed ice) and 15.5 km2 in 1974 (12 km2 of exposed ice). Mapping their debris-covered termini is difficult, but delineation of debris-free ice on Landsat imagery reveals 34 glaciers with a total area of 11.72 ± 0.72 km2 in 1995, followed by a reduction to 9.53 ± 0.29 km2 in 2001 and 7.01 ± 0.23 km2 in 2010. This represents a ~44% decrease in exposed glacier ice between ca. 1963 and 2010, but with 40% lost since 1995 and with individual glaciers losing as much as 93% of their exposed ice. Thus, although continental glaciers are generally thought to be less sensitive than their maritime counterparts, a recent acceleration in the shrinkage of exposed ice has taken place, and we note its coincidence with a strong summer warming trend in the region initiated at the start of the 1980s. Whilst smaller and shorter glaciers have, proportionally, tended to shrink more rapidly, we find no statistically significant relationship between shrinkage and elevation characteristics, aspect or solar radiation. This is probably due to the small sample size, limited elevation range, and topographic setting of the glaciers in deep valley heads. Furthermore, many of the glaciers possess debris-covered termini and it is likely that the ablation of buried ice is lagging the shrinkage of exposed ice, such that a growth in the proportion of debris cover is occurring, as observed elsewhere. If recent trends continue, we hypothesise that glaciers could evolve into a type of rock glacier within the next few decades, introducing additional complexity in their response and delaying their potential demise.
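The abstract does not state how debris-free ice was delineated on the Landsat scenes. As a purely illustrative sketch, the snippet below uses a common red/SWIR band-ratio threshold to classify ice pixels and converts the pixel count to an area at the nominal 30 m ground resolution; the threshold, band names and synthetic reflectances are all assumptions rather than details of the survey.

```python
import numpy as np

def glacier_area_km2(red, swir, ratio_threshold=2.0, pixel_size_m=30.0):
    """Classify debris-free ice with a simple red/SWIR band-ratio threshold
    (a common approach for Landsat glacier mapping) and convert the resulting
    pixel count to an area in km^2 using the sensor's ground resolution."""
    red = np.asarray(red, dtype=float)
    swir = np.asarray(swir, dtype=float)
    ice_mask = (red / np.maximum(swir, 1e-6)) > ratio_threshold
    area_km2 = int(ice_mask.sum()) * (pixel_size_m ** 2) / 1e6
    return area_km2, ice_mask

# Toy example with synthetic reflectances (ice is bright in red, dark in SWIR).
rng = np.random.default_rng(0)
red = rng.uniform(0.1, 0.9, size=(500, 500))
swir = rng.uniform(0.05, 0.5, size=(500, 500))
area, mask = glacier_area_km2(red, swir)
print(f"mapped ice area: {area:.2f} km2 over {int(mask.sum())} pixels")
```

Debris-covered termini defeat such spectral thresholds, which is why the abstract restricts its change estimates to exposed ice.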