28 results for Primary distribution networks
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
The use of expert system techniques in power distribution system design is examined. The selection and siting of equipment on overhead line networks is chosen for investigation, as equipment such as auto-reclosers represents a substantial investment and has a significant effect on the reliability of the system. Drawing on past experience with both equipment and network operations, most selection and siting decisions are made intuitively, following general guidelines or rules of thumb. This heuristic nature of the problem lends itself to solution using an expert system approach. A prototype has been developed and is currently under evaluation in the industry. Results so far have demonstrated both the feasibility and the benefits of the expert system as a design aid.
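A minimal sketch of the heuristic approach this abstract describes: siting decisions encoded as explicit if-then rules. The thresholds, attribute names, and rule set here are hypothetical illustrations, not the prototype's actual knowledge base.

```python
# Hypothetical rules of thumb for protection equipment siting on an
# overhead-line section; values are illustrative only.
def recommend_protection(section):
    """Apply simple siting heuristics to one overhead-line section."""
    rules = []
    if section["customers_downstream"] > 500:
        rules.append("auto-recloser")          # large outage exposure
    if section["fault_rate_per_km_yr"] > 0.1 and section["length_km"] > 5:
        rules.append("sectionaliser")          # isolate long, fault-prone spurs
    if not rules:
        rules.append("fuse only")              # default, low-cost option
    return rules

print(recommend_protection(
    {"customers_downstream": 800, "fault_rate_per_km_yr": 0.15, "length_km": 8}))
# → ['auto-recloser', 'sectionaliser']
```

A real expert system would hold many such rules in a knowledge base and chain them; the point is that each heuristic is stated explicitly rather than applied intuitively.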
Abstract:
In this paper we consider the structure of dynamically evolving networks modelling information and activity moving across a large set of vertices. We adopt the communicability concept, which generalizes the notion of centrality defined for static networks. We define the primary network structure within the whole as comprising the most influential vertices (both as senders and receivers of dynamically sequenced activity). We present a methodology based on successive vertex knockouts, up to a very small fraction of the whole primary network, that can characterize the nature of the primary network as either relatively robust and lattice-like (with redundancies built in) or relatively fragile and tree-like (with sensitivities and few redundancies). We apply these ideas to the analysis of evolving networks derived from fMRI scans of resting human brains. We show that the estimation of performance parameters via the structure tests of the corresponding primary networks is subject to less variability than that observed across a very large population of such scans. Hence the differences within the population are significant.
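A hedged sketch of the knockout methodology: rank vertices by total communicability (row sums of the matrix exponential of the adjacency matrix) and remove the top vertex, repeating on the reduced network. This is a static-network simplification of the dynamic communicability used in the paper, for small graphs only.

```python
import numpy as np

def expm_series(A, terms=30):
    """Matrix exponential via truncated Taylor series (small matrices only)."""
    E, P = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        P = P @ A / k
        E = E + P
    return E

def knockout_order(A, k):
    """Return the first k vertices removed by successive knockouts."""
    A = A.astype(float).copy()
    alive = list(range(len(A)))
    removed = []
    for _ in range(k):
        C = expm_series(A).sum(axis=1)           # total communicability
        i = int(np.argmax(C))
        removed.append(alive.pop(i))
        A = np.delete(np.delete(A, i, 0), i, 1)  # knock the vertex out
    return removed

# Star graph: the hub (vertex 0) is the most influential, so it goes first.
A = np.zeros((5, 5))
A[0, 1:] = A[1:, 0] = 1
print(knockout_order(A, 1))  # → [0]
```

On a lattice-like network the communicability profile stays flat as vertices are removed; on a tree-like one it collapses quickly, which is the robust-versus-fragile distinction the abstract draws.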
Abstract:
It has been known for decades that the metabolic rate of animals scales with body mass with an exponent that is almost always <1, >2/3, and often very close to 3/4. The 3/4 exponent emerges naturally from two models of resource distribution networks, radial explosion and hierarchically branched, which incorporate a minimum of specific details. Both models show that the exponent is 2/3 if velocity of flow remains constant, but can attain a maximum value of 3/4 if velocity scales with its maximum exponent, 1/12. Quarter-power scaling can arise even when there is no underlying fractality. The canonical "fourth dimension" in biological scaling relations can result from matching the velocity of flow through the network to the linear dimension of the terminal "service volume" where resources are consumed. These models have broad applicability for the optimal design of biological and engineered systems where energy, materials, or information are distributed from a single source.
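The exponent arithmetic stated in the abstract can be checked explicitly: the constant-velocity baseline of 2/3 plus the maximum velocity-scaling exponent of 1/12 gives the canonical 3/4.

```python
from fractions import Fraction

baseline = Fraction(2, 3)    # metabolic exponent with constant flow velocity
velocity = Fraction(1, 12)   # maximum exponent of flow velocity vs body mass
print(baseline + velocity)   # → 3/4
```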
Abstract:
The role of Distribution Network Operators (DNOs) is becoming more difficult as electric vehicles and electric heating penetrate the network, increasing demand. As a result, it becomes harder for the distribution network's infrastructure to remain within its operating constraints. Energy storage is a potential alternative to conventional network reinforcement such as upgrading cables and transformers. The research presented in this paper shows that, due to the volatile nature of the LV network, the control approach used for energy storage has a significant impact on performance. This paper presents and compares control methodologies for energy storage where the objective is to achieve the greatest possible peak demand reduction across the day from a pre-specified storage device. The results show the benefits and drawbacks of specific types of control on a storage device connected to a single phase of an LV network, using aggregated demand profiles based on real smart meter data from individual homes. The research demonstrates an important relationship between how predictable an aggregation is and the control methodology best suited to achieving the objective.
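A minimal sketch of one simple control methodology of the kind being compared: a fixed-threshold controller that discharges storage whenever demand exceeds a set-point, subject to power and energy limits. The demand values and device ratings are illustrative, not the paper's smart-meter data.

```python
def threshold_control(demand_kw, threshold_kw, energy_kwh, power_kw, dt_h=0.5):
    """Return net demand seen by the network after storage discharge."""
    net = []
    for d in demand_kw:
        # discharge limited by excess demand, power rating, and stored energy
        discharge = min(max(d - threshold_kw, 0.0), power_kw, energy_kwh / dt_h)
        energy_kwh -= discharge * dt_h
        net.append(d - discharge)
    return net

demand = [2.0, 3.5, 6.0, 7.5, 5.0, 3.0]   # half-hourly aggregate demand (kW)
print(threshold_control(demand, 4.0, 3.0, 3.0))
# → [2.0, 3.5, 4.0, 4.5, 4.0, 3.0]
```

The example shows the drawback of naive threshold control on a volatile profile: the device exhausts its energy before the true peak, so the 7.5 kW interval is only clipped to 4.5 kW. More predictable aggregations allow schedules that hold energy back for the real peak.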
Abstract:
The international appeal of Hollywood films through the twentieth century has been a subject of interest to economic and film historians alike. This paper employs some of the methods of the economic historian to evaluate key arguments within the film history literature explaining the global success of American films. Through careful analysis of both existing and newly constructed data sets, the paper examines the extent to which Hollywood's foreign earnings were affected by: film production costs; the extent of global distribution networks; and also the international orientation of the films themselves. The paper finds that these factors influenced foreign earnings in quite distinct ways, and that their relative importance changed over time. The evidence presented here suggests a degree of interaction between the production and distribution arms of the major US film companies in their pursuit of foreign markets that would benefit from further archival-based investigation.
Abstract:
Locality to other nodes on a peer-to-peer overlay network can be established by means of a set of landmarks shared among the participating nodes. Each node independently collects a set of latency measures to landmark nodes, which are used as a multi-dimensional feature vector. Each peer node uses the feature vector to generate a unique scalar index which is correlated to its topological locality. A popular dimensionality reduction technique is the space-filling Hilbert's curve, as it possesses good locality preserving properties. However, little comparative work exists on Hilbert's curve versus other dimensionality reduction techniques. This work carries out a quantitative analysis of their properties. Linear and non-linear techniques for scaling the landmark vectors to a single dimension are investigated. Hilbert's curve, Sammon's mapping and Principal Component Analysis have been used to generate a one-dimensional space with locality preserving properties. This work provides empirical evidence to support the use of Hilbert's curve in the context of locality preservation when generating peer identifiers by means of landmark vector analysis. A comparative analysis is carried out with an artificial two-dimensional network model and with a realistic network topology model with a typical power-law distribution of node connectivity in the Internet. Nearest neighbour analysis confirms Hilbert's curve to be very effective in both artificial and realistic network topologies. Nevertheless, the results in the realistic network model show that there is scope for improvement, and better techniques to preserve locality information are required.
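A hedged sketch of generating a scalar peer identifier from a landmark latency vector via Hilbert's curve, simplified to two landmarks so the feature vector maps onto a 2-D curve. The grid size and latency bound are illustrative; `xy2d` is the standard bitwise Hilbert-curve construction.

```python
def xy2d(n, x, y):
    """Map grid cell (x, y) on an n x n grid (n a power of 2) to its
    distance along the Hilbert curve."""
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                       # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

def peer_id(latencies_ms, max_ms=512.0, n=256):
    """Quantise a 2-landmark latency vector and return its Hilbert index."""
    x, y = (min(int(t / max_ms * n), n - 1) for t in latencies_ms)
    return xy2d(n, x, y)

# Nodes with similar latency vectors tend to receive nearby identifiers.
print(peer_id([20.0, 30.0]), peer_id([22.0, 30.0]), peer_id([400.0, 50.0]))
```

This is the locality-preserving property the paper tests against Sammon's mapping and PCA: close points in the latency space usually, though not always, map to close scalar indices.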
Abstract:
Background: We report an analysis of a protein network of functionally linked proteins, identified from a phylogenetic statistical analysis of complete eukaryotic genomes. Phylogenetic methods identify pairs of proteins that co-evolve on a phylogenetic tree, and have been shown to have a high probability of correctly identifying known functional links. Results: The eukaryotic correlated evolution network we derive displays the familiar power law scaling of connectivity. We introduce the use of explicit phylogenetic methods to reconstruct the ancestral presence or absence of proteins at the interior nodes of a phylogeny of eukaryote species. We find that the connectivity distribution of proteins at the point they arise on the tree and join the network follows a power law, as does the connectivity distribution of proteins at the time they are lost from the network. Proteins resident in the network acquire connections over time, but we find no evidence that 'preferential attachment' - the phenomenon of newly acquired connections in the network being more likely to be made to proteins with large numbers of connections - influences the network structure. We derive a 'variable rate of attachment' model in which proteins vary in their propensity to form network interactions independently of how many connections they have or of the total number of connections in the network, and show how this model can produce apparent power-law scaling without preferential attachment. Conclusion: A few simple rules can explain the topological structure and evolutionary changes to protein-interaction networks: most change is concentrated in satellite proteins of low connectivity and small phenotypic effect, and proteins differ in their propensity to form attachments. 
Given these rules of assembly, power law scaled networks naturally emerge from simple principles of selection, yielding protein interaction networks that retain a high degree of robustness on short time scales and evolvability on longer evolutionary time scales.
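A minimal simulation in the spirit of the 'variable rate of attachment' model described above: each protein draws a fixed attachment propensity on arrival, and new connections are made in proportion to propensity rather than to current degree, so there is no preferential attachment. The propensity distribution and parameters are illustrative, not the paper's fitted model.

```python
import random

def grow_network(n_proteins, links_per_arrival=2, seed=1):
    """Grow a network where partners are chosen by intrinsic propensity."""
    rng = random.Random(seed)
    propensity, degree = [], []
    for _ in range(n_proteins):
        propensity.append(rng.lognormvariate(0.0, 1.0))  # heavy-tailed propensity
        degree.append(0)
        if len(degree) > links_per_arrival:
            new = len(degree) - 1
            others = list(range(new))
            # weight by propensity, NOT by degree: no preferential attachment
            weights = [propensity[i] for i in others]
            for partner in rng.choices(others, weights=weights, k=links_per_arrival):
                degree[new] += 1
                degree[partner] += 1
    return degree

deg = grow_network(500)
print(max(deg), sorted(deg)[len(deg) // 2])   # heavy tail: max >> median degree
```

High-propensity proteins accumulate many connections over time, producing an apparently power-law-like degree distribution without any degree-dependent attachment rule, which is the abstract's central point.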
Abstract:
A 'mapping task' was used to explore the networks available to head teachers, school coordinators and local authority staff. Beginning from an ego-centred perspective on networks, we illustrate a number of key analytic categories, including brokerage, formality, and strength and weakness of links with reference to a single UK primary school. We describe how teachers differentiate between the strength of network links and their value, which is characteristically related to their potential impact on classroom practice.
Abstract:
Combined picosecond transient absorption and time-resolved infrared studies were performed, aimed at characterising low-lying excited states of the cluster [Os-3(CO)(10)(s-cis-L)] (L= cyclohexa-1,3-diene, 1) and monitoring the formation of its photoproducts. Theoretical (DFT and TD-DFT) calculations on the closely related cluster with L=buta-1,3-diene (2') have revealed that the low-lying electronic transitions of these [Os-3(CO)(10)(s-cis-1,3-diene)] clusters have a predominant sigma(core)pi*(CO) character. From the lowest sigmapi* excited state, cluster 1 undergoes fast Os-Os(1,3-diene) bond cleavage (tau=3.3 ps) resulting in the formation of a coordinatively unsaturated primary photoproduct (1a) with a single CO bridge. A new insight into the structure of the transient has been obtained by DFT calculations. The cleaved Os-Os(1,3-diene) bond is bridged by the donor 1,3-diene ligand, compensating for the electron deficiency at the neighbouring Os centre. Because of the unequal distribution of the electron density in transient 1a, a second CO bridge is formed in 20 ps in the photoproduct [Os-3(CO)(8)(mu-CO)(2)(cyclohexa-1,3-diene)] (1b). The latter compound, absorbing strongly around 630 nm, mainly regenerates the parent cluster with a lifetime of about 100 ns in hexane. Its structure, as suggested by the DFT calculations, again contains the 1,3-diene ligand coordinated in a bridging fashion. Photoproduct 1b can therefore be assigned as a high-energy coordination isomer of the parent cluster with all Os-Os bonds bridged.
Abstract:
Geographic distributions of pathogens are the outcome of dynamic processes involving host availability, susceptibility and abundance, suitability of climate conditions, and historical contingency including evolutionary change. Distributions have changed fast and are changing fast in response to many factors, including climatic change. The response time of arable agriculture is intrinsically fast, but perennial crops and especially forests are unlikely to adapt easily. Predictions of many of the variables needed to anticipate changes in pathogen range are still rather uncertain, and their effects will be profoundly modified by changes elsewhere in the agricultural system, including both economic changes affecting growing systems and hosts and evolutionary changes in pathogens and hosts. Tools to predict changes based on environmental correlations depend on good primary data, which are often absent, and need to be checked against the historical record, which remains very poor for almost all pathogens. We argue that at present the uncertainty in predictions of change is so great that the important adaptive response is to monitor changes and to retain the capacity to innovate, both by access to economic capital with reasonably long-term rates of return and by retaining wide scientific expertise, including currently less fashionable specialisms.
Abstract:
Research on the cortical sources of nociceptive laser-evoked brain potentials (LEPs) began almost two decades ago (Tarkka and Treede, 1993). Whereas there is a large consensus on the sources of the late part of the LEP waveform (N2 and P2 waves), the relative contribution of the primary somatosensory cortex (S1) to the early part of the LEP waveform (N1 wave) is still debated. To address this issue we recorded LEPs elicited by the stimulation of four limbs in a large population (n=35). Early LEP generators were estimated both at single-subject and group level, using three different approaches: distributed source analysis, dipolar source modeling, and probabilistic independent component analysis (ICA). We show that the scalp distribution of the earliest LEP response to hand stimulation was maximal over the central-parietal electrodes contralateral to the stimulated side, while that of the earliest LEP response to foot stimulation was maximal over the central-parietal midline electrodes. Crucially, all three approaches indicated hand and foot S1 areas as generators of the earliest LEP response. Altogether, these findings indicate that the earliest part of the scalp response elicited by a selective nociceptive stimulus is largely explained by activity in the contralateral S1, with negligible contribution from the secondary somatosensory cortex (S2).
Abstract:
Constrained principal component analysis (CPCA) with a finite impulse response (FIR) basis set was used to reveal functionally connected networks and their temporal progression over a multistage verbal working memory trial in which memory load was varied. Four components were extracted, and all showed statistically significant sensitivity to the memory load manipulation. Additionally, two of the four components sustained this peak activity, both for approximately 3 s (Components 1 and 4). The functional networks that showed sustained activity were characterized by increased activations in the dorsal anterior cingulate cortex, right dorsolateral prefrontal cortex, and left supramarginal gyrus, and decreased activations in the primary auditory cortex and "default network" regions. The functional networks that did not show sustained activity were instead dominated by increased activation in occipital cortex, dorsal anterior cingulate cortex, sensorimotor cortical regions, and superior parietal cortex. The response shapes suggest that although all four components appear to be invoked at encoding, the two sustained-peak components are likely to be additionally involved in the delay period. Our investigation provides a unique view of the contributions made by a network of brain regions over the course of a multiple-stage working memory trial.
Abstract:
Undirected graphical models are widely used in statistics, physics and machine vision. However, Bayesian parameter estimation for undirected models is extremely challenging, since evaluation of the posterior typically involves the calculation of an intractable normalising constant. This problem has received much attention, but very little of this has focussed on the important practical case where the data consist of noisy or incomplete observations of the underlying hidden structure. This paper specifically addresses this problem, comparing two alternative methodologies. In the first of these approaches particle Markov chain Monte Carlo (Andrieu et al., 2010) is used to efficiently explore the parameter space, combined with the exchange algorithm (Murray et al., 2006) for avoiding the calculation of the intractable normalising constant (a proof showing that this combination targets the correct distribution is found in a supplementary appendix online). This approach is compared with approximate Bayesian computation (Pritchard et al., 1999). Applications to estimating the parameters of Ising models and exponential random graphs from noisy data are presented. Each algorithm used in the paper targets an approximation to the true posterior due to the use of MCMC to simulate from the latent graphical model, in lieu of being able to do this exactly in general. The supplementary appendix also describes the nature of the resulting approximation.
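A hedged sketch of the exchange algorithm on a doubly-intractable posterior: a tiny 3x3 Ising model with a single interaction parameter, where the auxiliary variable can be drawn exactly by enumerating all 2^9 states. Real applications replace the exact draw with an MCMC run, which is the source of the approximation the abstract discusses. The flat prior, random-walk proposal, and parameter values are illustrative.

```python
import itertools
import math
import random

# 3x3 Ising grid: nearest-neighbour edges and full state enumeration
EDGES = ([(r * 3 + c, r * 3 + c + 1) for r in range(3) for c in range(2)]
         + [(r * 3 + c, (r + 1) * 3 + c) for r in range(2) for c in range(3)])
STATES = list(itertools.product([-1, 1], repeat=9))
# sufficient statistic of each state: sum of spin products over grid edges
SUFFS = [sum(s[i] * s[j] for i, j in EDGES) for s in STATES]

def exact_draw(theta, rng):
    """Draw a state index exactly from the Ising model at theta."""
    weights = [math.exp(theta * s) for s in SUFFS]
    return rng.choices(range(len(STATES)), weights=weights, k=1)[0]

def exchange_mcmc(x_obs, n_iter=200, step=0.3, seed=0):
    """Exchange algorithm with a flat prior and symmetric random walk."""
    rng, theta, chain = random.Random(seed), 0.0, []
    s_obs = SUFFS[x_obs]
    for _ in range(n_iter):
        theta_p = theta + rng.gauss(0.0, step)
        y = exact_draw(theta_p, rng)            # auxiliary variable at theta'
        # the unknown normalising constants cancel in this acceptance ratio
        log_a = (theta_p - theta) * (s_obs - SUFFS[y])
        if log_a >= 0 or rng.random() < math.exp(log_a):
            theta = theta_p
        chain.append(theta)
    return chain

x_obs = exact_draw(0.4, random.Random(42))      # synthetic data at theta = 0.4
chain = exchange_mcmc(x_obs)
print(sum(chain) / len(chain))                  # crude posterior mean estimate
```

The acceptance ratio uses only unnormalised likelihoods because the auxiliary draw at the proposed parameter makes the two normalising constants cancel; this exactness is lost once the auxiliary draw itself comes from a finite MCMC run, as in the Ising and random-graph applications described above.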