35 results for Complexity analysis
Abstract:
This paper presents a conceptual framework for the examination of land redevelopment based on a complex systems/networks approach. As Alvin Toffler insightfully noted, modern scientific enquiry has become exceptionally good at splitting problems into pieces but has forgotten how to put the pieces back together. Twenty-five years after his remarks, governments and corporations faced with the requirements of sustainability are struggling to promote an ‘integrated’ or ‘holistic’ approach to tackling problems. Despite the talk, both practice and research provide few platforms that allow for ‘joined up’ thinking and action. With socio-economic phenomena such as land redevelopment, promising prospects open up when we assume that their constituents can make up complex systems whose emergent properties are more than the sum of the parts and whose behaviour is inherently difficult to predict. A review of previous research shows that it has mainly focused on idealised, ‘mechanical’ views of property development processes that fail to recognise in full the relationships between actors, the structures created and their emergent qualities. When reality failed to live up to the expectations of these theoretical constructs, somebody had to be blamed: planners, developers, politicians. However, from a ‘synthetic’ point of view, the agents and networks involved in property development can be seen as constituents of structures that perform complex processes. These structures interact, forming new, more complex structures and networks. Redevelopment can then be conceptualised as a process of transformation: a complex system, a ‘dissipative’ structure involving developers, planners, landowners, state agencies etc., unlocks the potential of previously used sites, transforms space towards a higher order of complexity and ‘consumes’ but also ‘creates’ different forms of capital in the process.
Analysis of network relations points toward the ‘dualism’ of structure and agency in these processes of system transformation and change. Insights from actor network theory can be conjoined with notions of complexity and chaos to build an understanding of the ways in which actors actively seek to shape these structures and systems, whilst at the same time being recursively shaped by them in their strategies and actions. This approach transcends the blame game and allows inter-disciplinary inputs to be placed within a broader explanatory framework that does away with many past dichotomies. Better understanding of the interactions between actors and the emergent qualities of the networks they form can improve our comprehension of the complex socio-spatial phenomena that redevelopment comprises. The insights that this framework provides when applied to UK institutional investment in redevelopment are considered significant.
Abstract:
This paper describes a novel use of cluster analysis in the field of industrial process control. The severe multivariable process problems encountered in manufacturing have often led to machine shutdowns, where corrective action is needed before operation can resume. Production faults caused by processes running in less efficient regions may be prevented or diagnosed using reasoning based on cluster analysis. Indeed, the internal complexity of production machinery may be depicted in clusters of multidimensional data points which characterise the manufacturing process. The application of a Mean-Tracking cluster algorithm (developed in Reading) to field data acquired from high-speed machinery will be discussed. The objective of this application is to illustrate how machine behaviour can be studied, in particular how regions of erroneous and stable running behaviour can be identified.
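The abstract does not give the Mean-Tracking algorithm itself, so the following is a generic mean-shift-style sketch of how multidimensional process samples can settle into clusters that separate stable from faulty operating regions; the two regimes and all figures are invented for illustration.

```python
import math
import random

def mean_shift(points, bandwidth=1.0, n_iter=50, tol=1e-4):
    """Mean-shift-style clustering sketch: each point repeatedly moves to
    the mean of all points within `bandwidth` of it, settling on a cluster
    centre. (A generic stand-in for the Mean-Tracking algorithm, whose
    details the abstract does not specify.)"""
    modes = [list(p) for p in points]
    for _ in range(n_iter):
        moved = 0.0
        for m in modes:
            # neighbourhood is taken over the original data points
            nbrs = [p for p in points if math.dist(p, m) <= bandwidth]
            new = [sum(c) / len(nbrs) for c in zip(*nbrs)]
            moved = max(moved, math.dist(new, m))
            m[:] = new
        if moved < tol:
            break
    return modes

# Two operating regimes of a hypothetical machine: stable vs faulty
rng = random.Random(0)
stable = [(rng.gauss(0.0, 0.1), rng.gauss(0.0, 0.1)) for _ in range(30)]
faulty = [(rng.gauss(5.0, 0.1), rng.gauss(5.0, 0.1)) for _ in range(30)]
modes = mean_shift(stable + faulty)
labels = [1 if math.dist(m, (5.0, 5.0)) < 1.0 else 0 for m in modes]
```

Each sample drifts to the mean of its neighbourhood; points from the same operating regime collapse onto one centre, so a new sample can be labelled by the centre it reaches.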
Abstract:
Proteomics approaches have made important contributions to the characterisation of platelet regulatory mechanisms. A common problem encountered with this method, however, is the masking of low-abundance (e.g. signalling) proteins in complex mixtures by highly abundant proteins. In this study, subcellular fractionation of washed human platelets, either inactivated or stimulated with the glycoprotein (GP) VI collagen receptor agonist collagen-related peptide, reduced the complexity of the platelet proteome. The majority of proteins identified by tandem mass spectrometry are involved in signalling. The effect of GPVI stimulation on levels of specific proteins in subcellular compartments was compared and analysed using in silico quantification, and protein associations were predicted using STRING (the Search Tool for the Retrieval of Interacting Genes/Proteins). Interestingly, we observed that some proteins previously unidentified in platelets, including teneurin-1 and Van Gogh-like protein 1, translocated to the membrane upon GPVI stimulation. Newly identified proteins may be involved in GPVI signalling nodes of importance for haemostasis and thrombosis.
Abstract:
In basic network transactions, a datagram is routed from source to destination through numerous routers and paths, depending on which paths are free and uncongested; the resulting routes can be long, incurring greater delay, jitter and congestion and reduced throughput. One of the major problems of packet-switched networks is cell delay variation, or jitter, which arises from queuing delay under the applied loading conditions. The accumulation of delay and jitter over the nodes along a transmission route, together with dropped packets, adds further complexity to multimedia traffic because there is no guarantee that each traffic stream will be delivered within its own jitter constraints; hence the need to analyze the effects of jitter. IP routing uses a single path for the transmission of all packets. Multi-Protocol Label Switching (MPLS), on the other hand, separates packet forwarding from routing, enabling packets to use appropriate routes and allowing the behavior of transmission paths to be optimized and controlled, thus correcting some of the shortfalls associated with IP routing. MPLS has therefore been utilized in the analysis of effective transmission through the various networks. This paper analyzes the effects of delay, congestion, interference, jitter and packet loss in the transmission of signals from source to destination. The impact of link failures and repair paths in the various physical topologies, namely bus, star, mesh and hybrid, is also analyzed under standard network conditions.
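One standard way to quantify the cell delay variation discussed above is the RFC 3550 interarrival jitter estimator; this sketch assumes that estimator (the paper's own metric may differ) and invents the timestamps.

```python
def interarrival_jitter(send_times, recv_times):
    """RFC 3550-style smoothed jitter: D is the change in one-way transit
    time between consecutive packets, and the running estimate J is a
    1/16 exponential average of |D|."""
    jitter = 0.0
    for i in range(1, len(send_times)):
        transit_prev = recv_times[i - 1] - send_times[i - 1]
        transit_curr = recv_times[i] - send_times[i]
        jitter += (abs(transit_curr - transit_prev) - jitter) / 16.0
    return jitter

# Packets sent every 10 ms; queuing delay alternates between 5 ms and 8 ms
send = [10 * i for i in range(6)]
recv = [s + (5 if i % 2 == 0 else 8) for i, s in enumerate(send)]
j = interarrival_jitter(send, recv)  # smoothed jitter ≈ 0.827 ms
```

The 1/16 gain smooths out isolated delay spikes, which is why the estimate sits well below the 3 ms swing in per-packet transit time.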
Abstract:
This report describes the analysis and development of novel tools for the global optimisation of relevant mission design problems. A taxonomy was created for mission design problems, and an empirical analysis of their optimisational complexity performed: it was demonstrated that the use of global optimisation was necessary on most classes, and this informed the selection of appropriate global algorithms. The selected algorithms were then applied to the different problem classes: Differential Evolution was found to be the most efficient. Considering the specific problem of multiple gravity assist trajectory design, a search space pruning algorithm was developed that displays both polynomial time and space complexity. Empirically, this was shown to typically achieve search space reductions of greater than six orders of magnitude, thus reducing significantly the complexity of the subsequent optimisation. The algorithm was fully implemented in a software package that allows simple visualisation of high-dimensional search spaces, and effective optimisation over the reduced search bounds.
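Differential Evolution, which the report found most efficient, can be sketched in its basic DE/rand/1/bin form. The shifted-sphere objective, bounds and control parameters (F, CR, population size) below are conventional illustrative choices, not the report's settings.

```python
import random

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                           gens=200, seed=1):
    """Minimal DE/rand/1/bin sketch for a box-constrained objective f."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # ensure at least one mutated gene
            trial = []
            for j, (lo, hi) in enumerate(bounds):
                if rng.random() < CR or j == j_rand:
                    # mutation: donor = a + F * (b - c), clipped to bounds
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial.append(min(max(v, lo), hi))
                else:
                    trial.append(pop[i][j])
            ft = f(trial)
            if ft <= fit[i]:          # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# Toy surrogate for a trajectory cost landscape: a shifted sphere
x, fx = differential_evolution(lambda x: sum((xi - 2.0) ** 2 for xi in x),
                               bounds=[(-5, 5)] * 3)
```

On search spaces pruned by orders of magnitude, as in the report, the same population-based search simply operates over much tighter bounds.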
Abstract:
The aim of this study was, within a sensitivity analysis framework, to determine if additional model complexity gives a better capability to model the hydrology and nitrogen dynamics of a small Mediterranean forested catchment or if the additional parameters cause over-fitting. Three nitrogen-models of varying hydrological complexity were considered. For each model, general sensitivity analysis (GSA) and Generalized Likelihood Uncertainty Estimation (GLUE) were applied, each based on 100,000 Monte Carlo simulations. The results highlighted the most complex structure as the most appropriate, providing the best representation of the non-linear patterns observed in the flow and streamwater nitrate concentrations between 1999 and 2002. Its 5% and 95% GLUE bounds, obtained considering a multi-objective approach, provide the narrowest band for streamwater nitrogen, which suggests increased model robustness, though all models exhibit periods of inconsistent good and poor fits between simulated outcomes and observed data. The results confirm the importance of the riparian zone in controlling the short-term (daily) streamwater nitrogen dynamics in this catchment but not the overall flux of nitrogen from the catchment. It was also shown that as the complexity of a hydrological model increases over-parameterisation occurs, but the converse is true for a water quality model where additional process representation leads to additional acceptable model simulations. Water quality data help constrain the hydrological representation in process-based models. Increased complexity was justifiable for modelling river-system hydrochemistry.
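The GLUE procedure used above can be sketched in miniature: Monte Carlo sampling from a prior, a behavioural cut-off on a Nash-Sutcliffe likelihood, and 5%/95% bounds from the behavioural runs. For brevity this sketch uses unweighted quantiles (GLUE proper weights simulations by likelihood) and a toy exponential-recession model rather than the catchment nitrogen models of the study.

```python
import math
import random

def glue(simulate, prior_sampler, observed, n_runs=2000, threshold=0.3, seed=0):
    """GLUE sketch: sample parameters, keep runs whose Nash-Sutcliffe
    efficiency exceeds a behavioural threshold, return 5%/95% bounds."""
    rng = random.Random(seed)
    mean_obs = sum(observed) / len(observed)
    var_obs = sum((o - mean_obs) ** 2 for o in observed)
    behavioural = []
    for _ in range(n_runs):
        sim = simulate(prior_sampler(rng))
        sse = sum((s - o) ** 2 for s, o in zip(sim, observed))
        if 1.0 - sse / var_obs > threshold:   # keep behavioural runs only
            behavioural.append(sim)
    bounds = []
    for t in range(len(observed)):
        vals = sorted(sim[t] for sim in behavioural)
        bounds.append((vals[int(0.05 * len(vals))],
                       vals[int(0.95 * len(vals)) - 1]))
    return bounds

# Toy recession model q(t) = q0 * exp(-k t) with uncertain recession rate k
obs = [10.0 * math.exp(-0.2 * t) for t in range(10)]
bounds = glue(lambda k: [10.0 * math.exp(-k * t) for t in range(10)],
              lambda rng: rng.uniform(0.05, 0.5), obs)
```

The width of the resulting band is the study's robustness measure: a more constrained model yields narrower behavioural bounds around the observations.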
Abstract:
Snakebites are a major neglected tropical disease responsible for as many as 95,000 deaths every year worldwide. Viper venom serine proteases disrupt haemostasis of prey and victims by affecting various stages of the blood coagulation system. A better understanding of their sequence, structure, function and phylogenetic relationships will improve the knowledge on the pathological conditions and aid in the development of novel therapeutics for treating snakebites. A large dataset of all available viper venom serine proteases was developed and analysed to study various features of these enzymes. Despite the large number of venom serine protease sequences available, only a small proportion of these have been functionally characterised. Although they share common features such as a C-terminal extension, the GWG motif and disulphide linkages, they vary widely in features such as isoelectric points, potential N-glycosylation sites and functional characteristics. Some of the serine proteases contain substitutions for one or more of the critical residues in the catalytic triad or primary specificity pocket. Phylogenetic analysis clustered all the sequences into three major groups. The sequences with substitutions in the catalytic triad or specificity pocket clustered together in separate groups. Our study provides the most complete information on viper venom serine proteases to date and improves the current knowledge on the sequence, structure, function and phylogenetic relationships of these enzymes. This collective analysis of venom serine proteases will help in understanding the complexity of envenomation and potential therapeutic avenues.
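Screening sequences for the features named above (catalytic triad, GWG motif, N-glycosylation sites) is straightforward to sketch. The triad positions used here are illustrative chymotrypsin-style numbering, not positions from the paper's alignment, and the sequence is synthetic.

```python
def screen_sequence(seq):
    """Screen a serine protease sequence for an intact His/Asp/Ser
    catalytic triad, the GWG motif, and potential N-glycosylation
    sequons (N-X-S/T, X != P). Triad positions 57/102/195 are
    illustrative, not taken from the paper's alignment."""
    triad = {57: "H", 102: "D", 195: "S"}
    intact = all(len(seq) >= p and seq[p - 1] == aa for p, aa in triad.items())
    sites = [i + 1 for i in range(len(seq) - 2)
             if seq[i] == "N" and seq[i + 1] != "P" and seq[i + 2] in "ST"]
    return {"triad_intact": intact, "gwg": "GWG" in seq, "n_glyc_sites": sites}

# A synthetic 200-residue sequence with the triad, one GWG and one N-X-S site
residues = ["A"] * 200
residues[56], residues[101], residues[194] = "H", "D", "S"
residues[9:12] = "GWG"
residues[19:22] = "NAS"
report = screen_sequence("".join(residues))
```

A sequence failing the triad check is the kind flagged in the study as carrying substitutions of critical catalytic residues.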
Abstract:
Developing models to predict the effects of social and economic change on agricultural landscapes is an important challenge. Model development often involves making decisions about which aspects of the system require detailed description and which are reasonably insensitive to the assumptions. However, important components of the system are often left out because parameter estimates are unavailable. In particular, the relative influence of different objectives, such as risk or environmental management, on farmer decision making has proven difficult to quantify. We describe a model that can make predictions of land use on the basis of profit alone or with the inclusion of explicit additional objectives. Importantly, our model is specifically designed to use parameter estimates for additional objectives obtained via farmer interviews. By statistically comparing the outputs of this model with a large farm-level land-use data set, we show that cropping patterns in the United Kingdom contain a significant contribution from farmers' preferences for objectives other than profit. In particular, we found that risk aversion had an effect on the accuracy of model predictions, whereas preference for a particular number of crops grown was less important. While nonprofit objectives have frequently been identified as factors in farmers' decision making, our results take this analysis further by demonstrating the relationship between these preferences and actual cropping patterns.
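A minimal illustration of how a non-profit objective such as risk aversion can shift predicted land use: a mean-variance utility over two crops. The utility form, crop figures and risk-aversion weight are invented for illustration; they are not the paper's model or its interview-derived parameters.

```python
def best_share(mu, sigma2, lam, steps=101):
    """Grid-search the share s of land in crop A (the rest in crop B) that
    maximises expected profit minus a risk-aversion penalty lam on the
    variance of returns (crop returns assumed independent)."""
    best_s, best_u = 0.0, -float("inf")
    for i in range(steps):
        s = i / (steps - 1)
        u = (s * mu[0] + (1 - s) * mu[1]
             - lam * (s ** 2 * sigma2[0] + (1 - s) ** 2 * sigma2[1]))
        if u > best_u:
            best_s, best_u = s, u
    return best_s

# Crop A: higher expected profit but riskier; crop B: safer
profit_only = best_share((120.0, 100.0), (900.0, 100.0), lam=0.0)   # -> 1.0
risk_averse = best_share((120.0, 100.0), (900.0, 100.0), lam=0.05)  # -> 0.3
```

With profit alone the model plants everything with the high-mean crop; adding risk aversion diversifies the allocation, which is the qualitative effect the statistical comparison above detects in real cropping patterns.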
Abstract:
Mean field models (MFMs) of cortical tissue incorporate salient, average features of neural masses in order to model activity at the population level, thereby linking microscopic physiology to macroscopic observations, e.g., with the electroencephalogram (EEG). One of the common aspects of MFM descriptions is the presence of a high-dimensional parameter space capturing neurobiological attributes deemed relevant to the brain dynamics of interest. We study the physiological parameter space of a MFM of electrocortical activity and discover robust correlations between physiological attributes of the model cortex and its dynamical features. These correlations are revealed by the study of bifurcation plots, which show that the model responses to changes in inhibition belong to two archetypal categories or “families”. After investigating and characterizing them in depth, we discuss their essential differences in terms of four important aspects: power responses with respect to the modeled action of anesthetics, reaction to exogenous stimuli such as thalamic input, and distributions of model parameters and oscillatory repertoires when inhibition is enhanced. Furthermore, while the complexity of sustained periodic orbits differs significantly between families, we are able to show how metamorphoses between the families can be brought about by exogenous stimuli. We here unveil links between measurable physiological attributes of the brain and dynamical patterns that are not accessible by linear methods. They instead emerge when the nonlinear structure of parameter space is partitioned according to bifurcation responses. We call this general method “metabifurcation analysis”. The partitioning cannot be achieved by the investigation of only a small number of parameter sets and is instead the result of an automated bifurcation analysis of a representative sample of 73,454 physiologically admissible parameter sets. 
Our approach generalizes straightforwardly and is well suited to probing the dynamics of other models with large and complex parameter spaces.
Abstract:
A recently proposed mean-field theory of mammalian cortex rhythmogenesis describes the salient features of electrical activity in the cerebral macrocolumn, with the use of inhibitory and excitatory neuronal populations (Liley et al 2002). This model is capable of producing a range of important human EEG (electroencephalogram) features such as the alpha rhythm, the 40 Hz activity thought to be associated with conscious awareness (Bojak & Liley 2007) and the changes in EEG spectral power associated with general anesthetic effect (Bojak & Liley 2005). From the point of view of nonlinear dynamics, the model entails a vast parameter space within which multistability, pseudoperiodic regimes, various routes to chaos, fat fractals and rich bifurcation scenarios occur for physiologically relevant parameter values (van Veen & Liley 2006). The origin and the character of this complex behaviour, and its relevance for EEG activity will be illustrated. The existence of short-lived unstable brain states will also be discussed in terms of the available theoretical and experimental results. A perspective on future analysis will conclude the presentation.
Abstract:
This article reviews the use of complexity theory in planning theory using the theory of metaphors for theory transfer and theory construction. The introduction to the article presents the author's positioning of planning theory. The first section thereafter provides a general background of the trajectory of development of complexity theory and discusses the rationale of using the theory of metaphors for evaluating the use of complexity theory in planning. The second section introduces the workings of metaphors in general and theory-constructing metaphors in particular, drawing out an understanding of how to proceed with an evaluative approach towards an analysis of the use of complexity theory in planning. The third section presents two case studies – reviews of two articles – to illustrate how the framework might be employed. It then discusses the implications of the evaluation for the question ‘can complexity theory contribute to planning?’ The concluding section discusses the employment of the ‘theory of metaphors’ for evaluating theory transfer and draws out normative suggestions for engaging in theory transfer using the metaphorical route.
First order k-th moment finite element analysis of nonlinear operator equations with stochastic data
Abstract:
We develop and analyze a class of efficient Galerkin approximation methods for uncertainty quantification of nonlinear operator equations. The algorithms are based on sparse Galerkin discretizations of tensorized linearizations at nominal parameters. Specifically, we consider abstract, nonlinear, parametric operator equations J(α, u) = 0 for random input α(ω) with almost sure realizations in a neighborhood of a nominal input parameter α₀. Under some structural assumptions on the parameter dependence, we prove existence and uniqueness of a random solution, u(ω) = S(α(ω)). We derive a multilinear, tensorized operator equation for the deterministic computation of k-th order statistical moments of the random solution's fluctuations u(ω) − S(α₀). We introduce and analyze sparse tensor Galerkin discretization schemes for the efficient, deterministic computation of the k-th statistical moment equation. We prove a shift theorem for the k-point correlation equation in anisotropic smoothness scales and deduce that sparse tensor Galerkin discretizations of this equation converge in accuracy vs. complexity which equals, up to logarithmic terms, that of the Galerkin discretization of a single instance of the mean field problem. We illustrate the abstract theory for nonstationary diffusion problems in random domains.
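The paper computes these moments deterministically via sparse tensor Galerkin discretization; purely to illustrate what the k-th moment of the fluctuation u(ω) − S(α₀) means, here is a Monte Carlo sketch for a scalar toy equation J(α, u) = u + αu³ − 1 = 0. The equation, the uniform input distribution and all figures are invented stand-ins.

```python
import random

def solve(alpha, u0=1.0, iters=30):
    """Newton iteration for the scalar toy equation u + alpha*u**3 - 1 = 0."""
    u = u0
    for _ in range(iters):
        u -= (u + alpha * u ** 3 - 1.0) / (1.0 + 3.0 * alpha * u ** 2)
    return u

def fluctuation_moment(k, alpha0=0.5, spread=0.05, n=5000, seed=0):
    """Monte Carlo estimate of E[(u(omega) - S(alpha0))**k] for
    alpha(omega) uniform on [alpha0 - spread, alpha0 + spread]."""
    rng = random.Random(seed)
    u_nominal = solve(alpha0)
    return sum((solve(rng.uniform(alpha0 - spread, alpha0 + spread))
                - u_nominal) ** k for _ in range(n)) / n

m2 = fluctuation_moment(2)  # second moment of the solution's fluctuation
```

The paper's point is that the tensorized moment equations recover such statistics deterministically, at a cost comparable (up to logarithmic terms) to one mean-field solve, instead of by sampling.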
Abstract:
To manage agroecosystems for multiple ecosystem services, we need to know whether the management of one service has positive, negative, or no effects on other services. We do not yet have data on the interactions between pollination and pest-control services. However, we do have data on the distributions of pollinators and natural enemies in agroecosystems. Therefore, we compared these two groups of ecosystem service providers, to see if the management of farms and agricultural landscapes might have similar effects on the abundance and richness of both. In a meta-analysis, we compared 46 studies that sampled bees, predatory beetles, parasitic wasps, and spiders in fields, orchards, or vineyards of food crops. These studies used the proximity or proportion of non-crop or natural habitats in the landscapes surrounding these crops (a measure of landscape complexity), or the proximity or diversity of non-crop plants in the margins of these crops (a measure of local complexity), to explain the abundance or richness of these beneficial arthropods. Compositional complexity at both landscape and local scales had positive effects on both pollinators and natural enemies, but different effects on different taxa. Effects on bees and spiders were significantly positive, but effects on parasitoids and predatory beetles (mostly Carabidae and Staphylinidae) were inconclusive. Landscape complexity had significantly stronger effects on bees than it did on predatory beetles and significantly stronger effects in non-woody rather than in woody crops. Effects on richness were significantly stronger than effects on abundance, but possibly only for spiders. This abundance-richness difference might be caused by differences between generalists and specialists, or between arthropods that depend on non-crop habitats (ecotone species and dispersers) and those that do not (cultural species). We call this the ‘specialist-generalist’ or ‘cultural difference’ mechanism. 
If complexity has stronger effects on richness than abundance, it might have stronger effects on the stability than the magnitude of these arthropod-mediated ecosystem services. We conclude that some pollinators and natural enemies seem to have compatible responses to complexity, and it might be possible to manage agroecosystems for the benefit of both. However, too few studies have compared the two, and so we cannot yet conclude that there are no negative interactions between pollinators and natural enemies, and no trade-offs between pollination and pest-control services. Therefore, we suggest a framework for future research to bridge these gaps in our knowledge.
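For readers unfamiliar with the machinery behind such a meta-analysis, here is a fixed-effect sketch that pools correlation-type effect sizes via Fisher's z-transform with inverse-variance weights. The three studies and their figures are invented, and the analysis above may well have used random-effects models instead.

```python
import math

def combined_effect(effects, ns):
    """Fixed-effect meta-analysis sketch for correlation coefficients:
    Fisher z-transform, inverse-variance weights (w = n - 3), 95% CI,
    back-transform. Inputs are illustrative, not the paper's data."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in effects]
    ws = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    se = 1.0 / math.sqrt(sum(ws))
    back = lambda z: (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)
    return back(z_bar), (back(z_bar - 1.96 * se), back(z_bar + 1.96 * se))

# Hypothetical complexity-abundance correlations from three studies
r_mean, (r_lo, r_hi) = combined_effect([0.30, 0.45, 0.20], [40, 25, 60])
```

A pooled interval excluding zero is the meta-analytic analogue of the "significantly positive" effects of complexity reported above.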
Abstract:
The nonlinearity of high-power amplifiers (HPAs) has a crucial effect on the performance of multiple-input-multiple-output (MIMO) systems. In this paper, we investigate the performance of MIMO orthogonal space-time block coding (OSTBC) systems in the presence of nonlinear HPAs. Specifically, we propose a constellation-based compensation method for HPA nonlinearity in the case with knowledge of the HPA parameters at the transmitter and receiver, where the constellation and decision regions of the distorted transmitted signal are derived in advance. Furthermore, in the scenario without knowledge of the HPA parameters, a sequential Monte Carlo (SMC)-based compensation method for the HPA nonlinearity is proposed, which first estimates the channel-gain matrix by means of the SMC method and then uses the SMC-based algorithm to detect the desired signal. The performance of the MIMO-OSTBC system under study is evaluated in terms of average symbol error probability (SEP), total degradation (TD) and system capacity, in uncorrelated Nakagami-m fading channels. Numerical and simulation results are provided and show the effects on performance of several system parameters, such as the parameters of the HPA model, output back-off (OBO) of nonlinear HPA, numbers of transmit and receive antennas, modulation order of quadrature amplitude modulation (QAM), and number of SMC samples. In particular, it is shown that the constellation-based compensation method can efficiently mitigate the effect of HPA nonlinearity with low complexity and that the SMC-based detection scheme is efficient to compensate for HPA nonlinearity in the case without knowledge of the HPA parameters.
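The paper's HPA model and parameters are not given here, so this sketch assumes the classic Saleh travelling-wave-tube model to show the idea behind constellation-based compensation: the receiver compares received symbols against the pre-distorted constellation points rather than the ideal QAM points.

```python
import cmath

def saleh_hpa(x, a_a=2.1587, b_a=1.1517, a_p=4.0033, b_p=9.1040):
    """Saleh AM/AM and AM/PM nonlinearity (coefficients are the classic
    Saleh (1981) values, assumed here for illustration)."""
    r, phi = abs(x), cmath.phase(x)
    gain = a_a * r / (1.0 + b_a * r ** 2)
    shift = a_p * r ** 2 / (1.0 + b_p * r ** 2)
    return gain * cmath.exp(1j * (phi + shift))

def detect(y, refs):
    """Nearest-neighbour detection against pre-distorted reference points."""
    return min(range(len(refs)), key=lambda k: abs(y - refs[k]))

# 4-QAM scaled down (output back-off) so the HPA is only mildly nonlinear
qam = [(i + 1j * q) * 0.25 for i in (-1, 1) for q in (-1, 1)]
refs = [saleh_hpa(s) for s in qam]
recovered = [detect(saleh_hpa(s), refs) for s in qam]  # noiseless: [0, 1, 2, 3]
```

Because the decision regions are derived from the distorted constellation in advance, the compensation adds essentially no run-time complexity, which is the low-complexity property claimed for the method above.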
Abstract:
A number of urban land-surface models have been developed in recent years to satisfy the growing requirements for urban weather and climate interactions and prediction. These models vary considerably in their complexity and the processes that they represent. Although the models have been evaluated, the observational datasets have typically been of short duration and so are not suitable to assess the performance over the seasonal cycle. The First International Urban Land-Surface Model comparison used an observational dataset that spanned a period greater than a year, which enables an analysis over the seasonal cycle, whilst the variety of models that took part in the comparison allows the analysis to include a full range of model complexity. The results show that, in general, urban models do capture the seasonal cycle for each of the surface fluxes, but have larger errors in the summer months than in the winter. The net all-wave radiation has the smallest errors at all times of the year but with a negative bias. The latent heat flux and the net storage heat flux are also underestimated, whereas the sensible heat flux generally has a positive bias throughout the seasonal cycle. A representation of vegetation is a necessary, but not sufficient, condition for modelling the latent heat flux and associated sensible heat flux at all times of the year. Models that include a temporal variation in anthropogenic heat flux show some increased skill in the sensible heat flux at night during the winter, although their daytime values are consistently overestimated at all times of the year. Models that use the net all-wave radiation to determine the net storage heat flux have the best agreement with observed values of this flux during the daytime in summer, but perform worse during the winter months. The latter could result from a bias of summer periods in the observational datasets used to derive the relations with net all-wave radiation. 
Apart from these models, all of the other model categories considered in the analysis result in a mean net storage heat flux that is close to zero throughout the seasonal cycle, which is not seen in the observations. Models with a simple treatment of the physical processes generally perform at least as well as models with greater complexity.
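The seasonal evaluation described above reduces to per-month error statistics of modelled versus observed fluxes; a minimal sketch with invented flux values follows.

```python
def monthly_stats(obs, sim, months):
    """Per-month mean bias and RMSE for a year of flux data, the kind of
    seasonal breakdown used to compare urban land-surface models.
    All inputs here are illustrative, not the comparison's dataset."""
    out = {}
    for m in sorted(set(months)):
        pairs = [(o, s) for o, s, mm in zip(obs, sim, months) if mm == m]
        bias = sum(s - o for o, s in pairs) / len(pairs)
        rmse = (sum((s - o) ** 2 for o, s in pairs) / len(pairs)) ** 0.5
        out[m] = (bias, rmse)
    return out

# Toy check: model 2 W/m^2 high in month 1, 5 W/m^2 low in month 7
obs = [100.0, 100.0, 300.0, 300.0]
sim = [102.0, 102.0, 295.0, 295.0]
stats = monthly_stats(obs, sim, [1, 1, 7, 7])
```

Separating bias from RMSE month by month is what exposes the seasonal patterns reported above, such as larger summer errors and a persistent negative radiation bias.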