831 results for computational complexity
Abstract:
The ‘public interest’, even if viewed with ambiguity or scepticism, has been one of the primary means by which the various professional roles of planners have been justified. Many objections to the concept have been advanced by writers in planning academia. Notwithstanding these, the ‘public interest’ continues to be mobilised to justify, defend or argue for planning interventions and reforms. This has led to arguments that planning will have to adopt and recognise some form of public interest in practice in order to legitimise itself. This paper explores current debates around public interest and social justice and advances a vision of the public interest informed by complexity theory. The empirical context of the paper is a poverty alleviation programme, the Kudumbashree project in Kerala, India.
Abstract:
With the increase in e-commerce and the digitisation of design data and information, the construction sector has become reliant upon IT infrastructure and systems. The design and production process is more complex, more interconnected, and reliant upon greater information mobility, with seamless exchange of data and information in real time. Construction small and medium-sized enterprises (CSMEs), in particular the speciality contractors, can utilise cost-effective collaboration-enabling technologies, such as cloud computing, to help in the effective transfer of information and data to improve productivity. The system dynamics (SD) approach offers a perspective and tools to enable a better understanding of the dynamics of complex systems. This research focuses upon system dynamics methodology as a modelling and analysis tool in order to understand and identify the key drivers in the absorption of cloud computing by CSMEs. The aim of this paper is to determine how the use of SD can improve the management of information flow through collaborative technologies, leading to improved productivity. The data supporting the use of system dynamics were obtained through a pilot study consisting of questionnaires and interviews with five CSMEs in the UK house-building sector.
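As a generic illustration of the system dynamics approach mentioned in this abstract, the sketch below integrates a two-stock adoption model with simple Euler steps; the model structure (word-of-mouth plus external influence) and all parameter values are assumptions for the example and are not taken from the pilot-study model described above.

# Generic system dynamics sketch: two stocks (potential adopters, adopters) linked by one
# adoption flow, integrated with Euler steps. Structure and parameters are illustrative only.
def simulate_adoption(firms=100, adopters0=2, contact_effect=0.03,
                      external_effect=0.01, dt=0.25, years=10):
    adopters = adopters0
    potential = firms - adopters0
    history = []
    for step in range(int(years / dt)):
        # Adoption flow: external influence (vendors, clients) plus word-of-mouth among firms.
        adoption_rate = (external_effect * potential
                         + contact_effect * potential * adopters / firms)
        potential -= adoption_rate * dt
        adopters += adoption_rate * dt
        history.append((round(step * dt, 2), round(adopters, 2)))
    return history

for t, a in simulate_adoption()[::8]:
    print(f"year {t:>4}: {a} adopting firms")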
Abstract:
Let $\lambda_1,\dots,\lambda_n$ be real numbers in $(0,1)$ and $p_1,\dots,p_n$ be points in $\mathbb{R}^d$. Consider the collection of maps $f_j:\mathbb{R}^d\to\mathbb{R}^d$ given by $f_j(x)=\lambda_j x+(1-\lambda_j)p_j$. It is a well-known result that there exists a unique nonempty compact set $\Lambda\subset\mathbb{R}^d$ satisfying $\Lambda=\bigcup_{j=1}^{n}f_j(\Lambda)$. Each $x\in\Lambda$ has at least one coding, that is, a sequence $(\epsilon_i)_{i=1}^{\infty}\in\{1,\dots,n\}^{\mathbb{N}}$ that satisfies $\lim_{N\to\infty}f_{\epsilon_1}\cdots f_{\epsilon_N}(0)=x$. We study the size and complexity of the set of codings of a generic $x\in\Lambda$ when $\Lambda$ has positive Lebesgue measure. In particular, we show that under certain natural conditions almost every $x\in\Lambda$ has a continuum of codings. We also show that almost every $x\in\Lambda$ has a universal coding. Our work makes no assumptions on the existence of holes in $\Lambda$ and improves upon existing results in which it is assumed that $\Lambda$ contains no holes.
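A minimal numerical sketch of the coding construction described above: it evaluates the composition $f_{\epsilon_1}\cdots f_{\epsilon_N}(0)$ for a finite prefix of a coding. The contraction ratios, fixed points and coding used are illustrative choices, not taken from the paper.

# Approximate the point of Lambda encoded by a finite prefix of a coding.
import numpy as np

def point_from_coding(coding, lambdas, ps):
    """Evaluate f_{eps_1} o ... o f_{eps_N} applied to the origin."""
    x = np.zeros(ps.shape[1])
    # Compose from the innermost map outward: apply the last symbol's map first.
    for j in reversed(coding):
        x = lambdas[j] * x + (1 - lambdas[j]) * ps[j]
    return x

# Example: two maps on the line with ratio 2/3 and fixed points 0 and 1;
# here Lambda = [0, 1] has positive measure and points typically admit many codings.
lambdas = np.array([2 / 3, 2 / 3])
ps = np.array([[0.0], [1.0]])
coding = [0, 1, 1, 0, 1] * 10    # a finite prefix of an infinite coding
print(point_from_coding(coding, lambdas, ps))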
Abstract:
A causal explanation provides information about the causal history of whatever is being explained. However, most causal histories extend back almost infinitely and can be described in almost infinite detail. Causal explanations therefore involve choices about which elements of causal histories to pick out. These choices are pragmatic: they reflect our explanatory interests. When adjudicating between competing causal explanations, we must therefore consider not only questions of epistemic adequacy (whether we have good grounds for identifying certain factors as causes) but also questions of pragmatic adequacy (whether the aspects of the causal history picked out are salient to our explanatory interests). Recognizing that causal explanations differ pragmatically as well as epistemically is crucial for identifying what is at stake in competing explanations of the relative peacefulness of the nineteenth-century Concert system. It is also crucial for understanding how explanations of past events can inform policy prescription.
Abstract:
The pipe sizing of water networks via evolutionary algorithms is of great interest because it allows the selection of alternative economical solutions that meet a set of design requirements. However, available evolutionary methods are numerous, and methodologies to compare the performance of these methods beyond obtaining a minimal solution for a given problem are currently lacking. A methodology to compare algorithms based on an efficiency rate (E) is presented here and applied to the pipe-sizing problem of four medium-sized benchmark networks (Hanoi, New York Tunnel, GoYang and R-9 Joao Pessoa). E numerically determines the performance of a given algorithm while considering both the quality of the obtained solution and the required computational effort. From the wide range of available evolutionary algorithms, four were selected to implement the methodology: a Pseudo-Genetic Algorithm (PGA), Particle Swarm Optimization (PSO), a Harmony Search (HS) and a modified Shuffled Frog Leaping Algorithm (SFLA). After more than 500,000 simulations, a statistical analysis was performed based on the specific parameters each algorithm requires to operate, and finally, E was analyzed for each network and algorithm. The efficiency measure indicated that PGA is the most efficient algorithm for problems of greater complexity and that HS is the most efficient algorithm for less complex problems. However, the main contribution of this work is that the proposed efficiency rate provides a neutral strategy for comparing optimization algorithms and may be useful in the future to select the most appropriate algorithm for different types of optimization problems.
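The abstract does not give the formula for the efficiency rate E, so the sketch below only illustrates the general idea of combining solution quality with computational effort; the specific combination (success rate divided by the mean effort of successful runs) and all numbers are assumptions for the example, not the paper's definition.

# Hypothetical efficiency measure: rewards runs that reach a cost close to the best known
# solution while using few objective-function evaluations. Illustrative only.
def efficiency_rate(costs, evaluations, best_known_cost, tolerance=0.01):
    hits = [e for c, e in zip(costs, evaluations)
            if c <= best_known_cost * (1 + tolerance)]
    if not hits:
        return 0.0
    success_rate = len(hits) / len(costs)
    mean_effort_millions = sum(hits) / len(hits) / 1e6
    return success_rate / mean_effort_millions

# Example with made-up run results for one network and one algorithm:
costs = [6.081e6, 6.095e6, 6.200e6, 6.081e6]
evals = [45000, 52000, 60000, 48000]
print(efficiency_rate(costs, evals, best_known_cost=6.081e6))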
Abstract:
This study investigates effects of syntactic complexity, operationalised in terms of movement, intervention and (NP) feature similarity, in the development of A’ dependencies in 4-, 6-, and 8-year-old typically developing (TD) French children and children with Autism Spectrum Disorders (ASD). Children completed an off-line comprehension task testing eight syntactic structures classified in four levels of complexity: Level 0: No Movement; Level 1: Movement without (configurational) Intervention; Level 2: Movement with Intervention from an element which is maximally different or featurally ‘disjoint’ (mismatched in both lexical NP restriction and number); Level 3: Movement with Intervention from an element similar in one feature or featurally ‘intersecting’ (matched in lexical NP restriction, mismatched in number). The results show that syntactic complexity affects TD children across the three age groups, but also indicate developmental differences between these groups. Movement affected all three groups in a similar way, but intervention effects in intersection cases were stronger in younger than in older children, with NP feature similarity affecting only 4-year-olds. Complexity effects created by the similarity in lexical restriction of an intervener thus appear to be overcome early in development, arguably thanks to other differences of this intervener (which was mismatched in number). Children with ASD performed less well than the TD children although they were matched on non-verbal reasoning. Overall, syntactic complexity affected their performance in a similar way as in their TD controls, but their performance correlated with non-verbal abilities rather than age, suggesting that their grammatical development does not follow the smooth relation to age that is found in TD children.
Abstract:
Given the long-term negative outcomes associated with depression in adolescence, there is a pressing need to develop brief, evidence-based treatments that are accessible to more young people experiencing low mood. Behavioural Activation (BA) is an effective treatment for adult depression; however, little research has focused on the use of BA with depressed adolescents, particularly with briefer forms of BA. In this article we outline an adaptation of brief Behavioral Activation Treatment of Depression (BATD) designed for adolescents and delivered in eight sessions (Brief BA). This case example illustrates how a structured, brief intervention was useful for a depressed young person with a number of complicating and risk factors.
Abstract:
Subspace clustering groups a set of samples from a union of several linear subspaces into clusters, so that the samples in the same cluster are drawn from the same linear subspace. In the majority of the existing work on subspace clustering, clusters are built based on feature information, while sample correlations in their original spatial structure are simply ignored. Besides, the original high-dimensional feature vectors contain noisy/redundant information, and the time complexity grows exponentially with the number of dimensions. To address these issues, we propose a tensor low-rank representation (TLRR) and sparse coding-based subspace clustering method (TLRRSC) that simultaneously considers feature information and spatial structures. TLRR seeks the lowest-rank representation over the original spatial structures along all spatial directions. Sparse coding learns a dictionary along the feature spaces, so that each sample can be represented by a few atoms of the learned dictionary. The affinity matrix used for spectral clustering is built from the joint similarities in both the spatial and feature spaces. TLRRSC can well capture the global structure and inherent feature information of the data, and provides a robust subspace segmentation from corrupted data. Experimental results on both synthetic and real-world data sets show that TLRRSC outperforms several established state-of-the-art methods.
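The sketch below illustrates only the final step described above, spectral clustering on a precomputed affinity matrix; it is not TLRRSC itself, and the simple Gaussian-kernel affinity stands in for the joint spatial/feature similarities learned by TLRR and sparse coding.

# Simplified stand-in: spectral clustering on an affinity built from pairwise similarities.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
# Toy data: samples drawn from two 1-D subspaces (lines) embedded in R^3, plus small noise.
line1 = rng.normal(size=(50, 1)) * np.array([[1.0, 0.0, 0.0]])
line2 = rng.normal(size=(50, 1)) * np.array([[0.0, 1.0, 1.0]])
X = np.vstack([line1, line2]) + 0.01 * rng.normal(size=(100, 3))

# Gaussian-kernel affinity from feature distances (a stand-in for the learned similarities).
dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
affinity = np.exp(-dists ** 2 / (2 * 0.5 ** 2))

labels = SpectralClustering(n_clusters=2, affinity='precomputed',
                            random_state=0).fit_predict(affinity)
print(labels)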
Abstract:
This paper demonstrates the oscillatory characteristics of electrical signals acquired from two ornamental plant types (Epipremnum pinnatum and Philodendron scandens, Family Araceae) using a non-invasive acquisition system. The electrical signal was recorded using Ag/AgCl superficial electrodes inside a Faraday cage. The presence of an oscillatory electric generator was shown using a classical power spectral density estimate. The Lempel-Ziv complexity measure showed that the plant signal was not noise, despite its nonlinear behavior. The oscillatory characteristics of the signal were explained using a simulated electrical model, which establishes that in the frequency range from 5 to 15 Hz the oscillatory characteristic is stronger than in other frequency ranges. All results show that non-invasive electrical plant signals can be acquired with an improved signal-to-noise ratio using a Faraday cage, and that a simple electrical model is able to explain the electrical signal being generated.
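A minimal sketch of a Lempel-Ziv complexity computation of the kind referred to above, applied to a binarized signal; the phrase-counting variant, the synthetic 10 Hz signal and the median-threshold binarization are assumptions for the example, not the paper's protocol.

# Phrase-counting Lempel-Ziv complexity of a binarized signal. Illustrative only.
import numpy as np

def lempel_ziv_complexity(sequence):
    """Count phrases in a greedy parsing where each new phrase is the shortest
    string not previously registered as a phrase."""
    phrases = set()
    ind, inc, n = 0, 1, len(sequence)
    while ind + inc <= n:
        candidate = sequence[ind:ind + inc]
        if candidate in phrases:
            inc += 1
        else:
            phrases.add(candidate)
            ind += inc
            inc = 1
    return len(phrases)

rng = np.random.default_rng(1)
t = np.arange(0, 10, 1 / 250)                     # assumed 250 Hz sampling, 10 s
oscillation = np.sin(2 * np.pi * 10 * t)          # 10 Hz component, inside the 5-15 Hz band
signal = oscillation + 0.5 * rng.normal(size=t.size)
binary = ''.join('1' if v > np.median(signal) else '0' for v in signal)

print(lempel_ziv_complexity(binary))
print(lempel_ziv_complexity(''.join(rng.choice(['0', '1'], size=t.size))))  # pure-noise reference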
Abstract:
Familial idiopathic basal ganglia calcification, also known as "Fahr's disease" (FD), is a neuropsychiatric disorder with an autosomal dominant pattern of inheritance, characterized by symmetric calcifications of the basal ganglia and, occasionally, other brain regions. Currently, there are three loci linked to this devastating disease. The first one (IBGC1) is located at 14q11.2-21.3, and the other two have been identified at 2q37 (IBGC2) and 8p21.1-q11.13 (IBGC3). Further studies identified a heterozygous variation (rs36060072), consisting of a cytosine-to-guanine change in the MGEA6/CTAGE5 gene, present in all affected members of the large American family linked to IBGC1. This missense substitution, which produces a proline-to-alanine change at position 521 (P521A) in a proline-rich and highly conserved protein domain, was considered a rare variation, with a minor allele frequency (MAF) of 0.0058 in the US population. Considering that the population frequency of a given variation is an indirect indication of potential pathogenicity, we screened 200 chromosomes in a random control set of Brazilian samples and in two nuclear families, comparing with our previous analysis in a US population. In addition, we performed analyses with bioinformatics programs to predict the pathogenicity of this variation. Our genetic screen found no P521A carriers. Pooling these data together with the previous study in the USA, we now have a MAF of 0.0036, showing that this mutation is very rare. On the other hand, the bioinformatics analysis provided conflicting findings. There are currently various candidate genes and loci that could be involved in the underlying molecular basis of FD etiology, and other groups have suggested a possible role played by genes at 2q37, related to calcium metabolism, and on chromosome 8 (NRG1 and SNTG1). Additional mutagenesis and in vivo studies are necessary to confirm the pathogenicity of the P521A variation in MGEA6.
Abstract:
Several accounts put forth to explain the flash-lag effect (FLE) rely mainly on either spatial or temporal mechanisms. Here we investigated the relationship between these mechanisms by psychophysical and theoretical approaches. In a first experiment we assessed the magnitudes of the FLE and of temporal-order judgments performed under identical visual stimulation. The results were interpreted by means of simulations of an artificial neural network, which was also employed to make predictions concerning the FLE. The model predicted that a spatio-temporal mislocalisation would emerge from two moving stimuli, one continuous and one with an abrupt onset. Additionally, a straightforward prediction of the model revealed that the magnitude of this mislocalisation should be task-dependent, increasing when the use of the abrupt-onset moving stimulus switches from a temporal marker only to both a temporal and a spatial marker. Our findings confirmed the model's predictions and point to an indissoluble interplay between spatial facilitation and processing delays in the FLE.
Abstract:
One of the top ten most influential data mining algorithms, k-means, is known for being simple and scalable. However, it is sensitive to the initialization of prototypes and requires that the number of clusters be specified in advance. This paper shows that evolutionary techniques conceived to guide the application of k-means can be more computationally efficient than systematic (i.e., repetitive) approaches that try to get around the above-mentioned drawbacks by repeatedly running the algorithm from different configurations of the number of clusters and initial positions of prototypes. To do so, a modified version of a (k-means-based) fast evolutionary algorithm for clustering is employed. Theoretical complexity analyses for the systematic and evolutionary algorithms of interest are provided. Computational experiments and statistical analyses of the results are presented for artificial and text mining data sets.
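A hedged sketch of the "systematic (i.e., repetitive)" baseline described above: k-means is rerun over a grid of cluster counts and random initializations, and the best partition is kept. The silhouette criterion used to choose among runs is an illustrative assumption, not necessarily the criterion used in the paper.

# Systematic multi-run k-means baseline over candidate k values and random initializations.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

best = None
for k in range(2, 9):                  # candidate numbers of clusters
    for seed in range(10):             # different initial prototype positions
        km = KMeans(n_clusters=k, n_init=1, random_state=seed).fit(X)
        score = silhouette_score(X, km.labels_)
        if best is None or score > best[0]:
            best = (score, k, km)

print(f"best k = {best[1]}, silhouette = {best[0]:.3f}")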
Abstract:
In Information Visualization, adding and removing data elements can strongly impact the underlying visual space. We have developed an inherently incremental technique (incBoard) that maintains a coherent disposition of elements from a dynamic multidimensional data set on a 2D grid as the set changes. Here, we introduce a novel layout that uses the pairwise similarity from grid neighbors, as defined in incBoard, to reposition elements on the visual space, free from the constraints imposed by the grid. The board continues to be updated and can be displayed alongside the new space. As similar items are placed together, while dissimilar neighbors are moved apart, the layout supports users in the identification of clusters and subsets of related elements. Densely populated areas identified in the incSpace can be efficiently explored with the corresponding incBoard visualization, which is not susceptible to occlusion. The solution remains inherently incremental and maintains a coherent disposition of elements, even for fully renewed sets. The algorithm considers relative positions for the initial placement of elements, and raw dissimilarity to fine-tune the visualization. It has low computational cost, with complexity depending only on the size of the currently viewed subset, V. Thus, a data set of size N can be sequentially displayed in O(N) time, reaching O(N^2) only if the complete set is simultaneously displayed.
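A toy sketch, not the incBoard/incSpace algorithms: it only illustrates incremental, similarity-driven placement in which each new element starts beside its most similar placed element, and the currently displayed subset is then lightly refined so that layout distances track pairwise dissimilarities. All parameters and the force scheme are assumptions for the example.

# Incremental, similarity-driven 2D placement (illustrative stand-in only).
import numpy as np

def incremental_layout(features, refine_steps=30, lr=0.05):
    """Place elements one at a time; refinement touches only the placed subset."""
    rng = np.random.default_rng(0)
    n = len(features)
    pos = np.zeros((n, 2))
    for i in range(n):
        if i == 0:
            pos[0] = rng.normal(scale=0.1, size=2)
            continue
        # Start the new element next to its most similar already-placed element.
        dissim = np.linalg.norm(features[:i] - features[i], axis=1)
        pos[i] = pos[np.argmin(dissim)] + rng.normal(scale=0.1, size=2)
        for _ in range(refine_steps):
            for a in range(i + 1):
                diffs = pos[:i + 1] - pos[a]
                d_layout = np.linalg.norm(diffs, axis=1) + 1e-9
                d_feat = np.linalg.norm(features[:i + 1] - features[a], axis=1)
                # Pull together pairs placed farther apart than their dissimilarity,
                # push apart pairs placed closer than their dissimilarity.
                pos[a] += lr * ((d_layout - d_feat) / d_layout) @ diffs / (i + 1)
    return pos

features = np.random.default_rng(1).normal(size=(15, 4))
print(incremental_layout(features)[:3])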