86 results for GENERATION MEANS ANALYSIS
Abstract:
Bloom-forming and toxin-producing cyanobacteria remain a persistent nuisance across the world. Modelling cyanobacterial behaviour in freshwaters is an important tool for understanding their population dynamics and predicting the location and timing of bloom events in lakes, reservoirs and rivers. A new deterministic mathematical model was developed that simulates the growth and movement of cyanobacterial blooms in river systems. The model focuses on the mathematical description of bloom formation, vertical migration and lateral transport of colonies within river environments, taking into account the major factors that affect cyanobacterial bloom formation in rivers, including light, nutrients and temperature. A parameter sensitivity analysis using a one-at-a-time approach was carried out. The sensitivity analysis presented in this paper had two objectives: to identify the key parameters controlling the growth and movement patterns of cyanobacteria, and to provide a means for model validation. The results of the analysis suggested that maximum growth rate and day length were the most significant parameters in determining population growth and colony depth, respectively.
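The abstract does not reproduce the model equations, so the sketch below illustrates only the one-at-a-time (OAT) sensitivity procedure it describes, using a hypothetical logistic-growth stand-in for the cyanobacteria model; all parameter names and values are assumptions for illustration.

```python
import numpy as np

# Hypothetical stand-in for the cyanobacteria model: logistic growth whose rate
# is modulated by light (day length), temperature and nutrients. The real model
# in the paper is not reproduced here; parameter names are illustrative only.
def simulate_population(mu_max=1.2, day_length=12.0, temperature=20.0,
                        nutrient=1.0, days=30):
    pop = 1.0
    for _ in range(days):
        growth = mu_max * (day_length / 24.0) * (temperature / 25.0) * nutrient
        pop += growth * pop * (1.0 - pop / 100.0)   # logistic limit at 100 units
    return pop

# One-at-a-time (OAT) sensitivity: perturb each parameter by +/-10% around its
# baseline while holding the others fixed, and record the change in the output.
baseline = dict(mu_max=1.2, day_length=12.0, temperature=20.0, nutrient=1.0)
base_out = simulate_population(**baseline)

for name, value in baseline.items():
    outs = []
    for factor in (0.9, 1.1):
        params = dict(baseline)
        params[name] = value * factor
        outs.append(simulate_population(**params))
    sensitivity = (max(outs) - min(outs)) / base_out
    print(f"{name:12s} relative output range: {sensitivity:.3f}")
```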
Abstract:
It is generally assumed that the variability of neuronal morphology has an important effect on both the connectivity and the activity of the nervous system, but this effect has not been thoroughly investigated. Neuroanatomical archives represent a crucial tool to explore structure–function relationships in the brain. We are developing computational tools to describe, generate, store and render large sets of three–dimensional neuronal structures in a format that is compact, quantitative, accurate and readily accessible to the neuroscientist. Single–cell neuroanatomy can be characterized quantitatively at several levels. In computer–aided neuronal tracing files, a dendritic tree is described as a series of cylinders, each represented by diameter, spatial coordinates and the connectivity to other cylinders in the tree. This ‘Cartesian’ description constitutes a completely accurate mapping of dendritic morphology but it bears little intuitive information for the neuroscientist. In contrast, a classical neuroanatomical analysis characterizes neuronal dendrites on the basis of the statistical distributions of morphological parameters, e.g. maximum branching order or bifurcation asymmetry. This description is intuitively more accessible, but it only yields information on the collective anatomy of a group of dendrites, i.e. it is not complete enough to provide a precise ‘blueprint’ of the original data. We are adopting a third, intermediate level of description, which consists of the algorithmic generation of neuronal structures within a certain morphological class based on a set of ‘fundamental’, measured parameters. This description is as intuitive as a classical neuroanatomical analysis (parameters have an intuitive interpretation), and as complete as a Cartesian file (the algorithms generate and display complete neurons). The advantages of the algorithmic description of neuronal structure are immense. If an algorithm can measure the values of a handful of parameters from an experimental database and generate virtual neurons whose anatomy is statistically indistinguishable from that of their real counterparts, a great deal of data compression and amplification can be achieved. Data compression results from the quantitative and complete description of thousands of neurons with a handful of statistical distributions of parameters. Data amplification is possible because, from a set of experimental neurons, many more virtual analogues can be generated. This approach could allow one, in principle, to create and store a neuroanatomical database containing data for an entire human brain in a personal computer. We are using two programs, L–NEURON and ARBORVITAE, to investigate systematically the potential of several different algorithms for the generation of virtual neurons. Using these programs, we have generated anatomically plausible virtual neurons for several morphological classes, including guinea pig cerebellar Purkinje cells and cat spinal cord motor neurons. These virtual neurons are stored in an online electronic archive of dendritic morphology. This process highlights the potential and the limitations of the ‘computational neuroanatomy’ strategy for neuroscience databases.
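As a rough illustration of the two levels of description discussed here, the sketch below encodes the 'Cartesian' cylinder representation and a toy algorithmic generator that grows a tree from a handful of sampled parameters; the data structure, parameter names and distributions are assumptions for illustration and are not taken from L-NEURON or ARBORVITAE.

```python
import random
from dataclasses import dataclass, field

# 'Cartesian' description: one cylinder per compartment, linked to its parent.
@dataclass
class Cylinder:
    x: float
    y: float
    z: float
    diameter: float
    parent: int          # index of the parent cylinder, -1 for the soma
    children: list = field(default_factory=list)

# Toy algorithmic generator in the spirit described above: grow a tree by
# sampling a few 'fundamental' parameters (taper, bifurcation probability,
# segment length). The values below are made up for illustration, not measured.
def grow_tree(max_order=4, bifurcate_p=0.5, taper=0.85, seg_len=10.0):
    tree = [Cylinder(0.0, 0.0, 0.0, diameter=2.0, parent=-1)]

    def grow(parent_idx, order):
        if order > max_order:
            return
        parent = tree[parent_idx]
        n_branches = 2 if random.random() < bifurcate_p else 1
        for _ in range(n_branches):
            child = Cylinder(parent.x + random.uniform(-seg_len, seg_len),
                             parent.y + random.uniform(-seg_len, seg_len),
                             parent.z + seg_len,
                             diameter=parent.diameter * taper,
                             parent=parent_idx)
            tree.append(child)
            child_idx = len(tree) - 1
            parent.children.append(child_idx)
            grow(child_idx, order + 1)

    grow(0, 1)
    return tree

virtual_dendrite = grow_tree()
print(f"generated {len(virtual_dendrite)} cylinders")
```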
Abstract:
This paper deals with the selection of centres for radial basis function (RBF) networks. A novel mean-tracking clustering algorithm is described as a way in which centres can be chosen from a batch of collected data. A direct comparison is made between the mean-tracking algorithm and k-means clustering, and it is shown that mean-tracking clustering is significantly better at producing an RBF network that performs accurate function modelling.
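The mean-tracking algorithm itself is not specified in the abstract, so the sketch below shows only the k-means baseline it is compared against: choosing RBF centres by clustering a batch of data and fitting the linear output layer by least squares. The toy data, RBF width and layer sizes are assumptions.

```python
import numpy as np

# Baseline centre selection for an RBF network using k-means (the comparison
# method in the paper); the mean-tracking algorithm itself is not reproduced.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(x).ravel() + 0.1 * rng.standard_normal(200)    # toy target function

def kmeans(data, k, iters=50):
    centres = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(data[:, None] - centres[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = data[labels == j].mean(axis=0)
    return centres

centres = kmeans(x, k=10)
width = 0.5                                                # assumed common RBF width
phi = np.exp(-np.linalg.norm(x[:, None] - centres[None], axis=2) ** 2 / (2 * width ** 2))
weights, *_ = np.linalg.lstsq(phi, y, rcond=None)          # linear output layer
print("training RMSE:", np.sqrt(np.mean((phi @ weights - y) ** 2)))
```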
Abstract:
A statistical technique for fault analysis in industrial printing is reported. The method specifically deals with binary data, for which the results of the production process fall into two categories, rejected or accepted. The method is referred to as logistic regression, and is capable of predicting future fault occurrences by the analysis of current measurements from machine parts sensors. Individual analysis of each type of fault can determine which parts of the plant have a significant influence on the occurrence of such faults; it is also possible to infer which measurable process parameters have no significant influence on the generation of these faults. Information derived from the analysis can be helpful in the operator's interpretation of the current state of the plant. Appropriate actions may then be taken to prevent potential faults from occurring. The algorithm is being implemented as part of an applied self-learning expert system.
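A minimal sketch of the reported approach, assuming scikit-learn and synthetic sensor data in place of the plant measurements: a logistic regression maps machine-part sensor readings to the probability of a rejected print, and near-zero coefficients flag process parameters with no significant influence on that fault type.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the plant data: three sensor readings per print run,
# with only two of them actually influencing the reject probability.
rng = np.random.default_rng(1)
n = 500
sensors = rng.standard_normal((n, 3))               # e.g. speed, tension, temperature (assumed names)
logit = 1.5 * sensors[:, 0] - 0.8 * sensors[:, 2]
rejected = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(sensors, rejected)

# Coefficients near zero suggest the corresponding process parameter has no
# significant influence on this fault type; large ones point to the plant
# parts that matter. The fitted model can then score future measurements.
print("coefficients:", model.coef_.round(2))
print("P(reject) for a new reading:", model.predict_proba(sensors[:1])[0, 1].round(3))
```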
Abstract:
PV generates electricity only during daylight hours, and primarily in summer. In the UK, the carbon intensity of grid electricity is higher during the daytime and over winter. This work uses carbon factors at higher temporal resolutions (half-hourly and daily) to investigate whether the grid electricity displaced by PV is high or low carbon compared with the annual mean carbon intensity. UK policy for carbon reporting requires savings to be calculated using the annual mean carbon intensity of grid electricity; this work offers an insight into whether that approach is appropriate. Using half-hourly data on the generating plant supplying the grid from November 2008 to May 2010, carbon factors for grid electricity at half-hourly and daily resolution have been derived using technology-specific generation emission factors. Applying these factors to generation data from PV systems installed on schools, it is possible to assess the variation in the carbon savings from displacing grid electricity with PV generation using carbon factors with different time resolutions. The data has been analyzed for a period of 363 to 370 days and so cannot account for inter-year variations in the relationship between PV generation and carbon intensity of the electricity grid. This analysis suggests that PV displaces more carbon-intensive electricity using half-hourly carbon factors than using daily factors, but less compared with annual ones. A similar methodology could provide useful insights on other variable renewable and demand-side technologies and in other countries where PV performance and grid behavior are different.
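A minimal sketch of the savings comparison, using synthetic PV and carbon-intensity profiles in place of the metered data: the same PV generation is weighted either by half-hourly carbon factors or by a single annual mean factor.

```python
import numpy as np

# Synthetic half-hourly profiles standing in for the metered data; the shapes
# and magnitudes below are invented purely to show the calculation.
half_hours = 48 * 365
t = np.arange(half_hours)
pv_kwh = np.maximum(0, np.sin((t % 48) / 48 * np.pi) - 0.3)        # daytime-only generation
grid_ci = 0.45 + 0.05 * np.sin((t % 48) / 48 * 2 * np.pi)           # kgCO2/kWh, varies over the day

savings_half_hourly = np.sum(pv_kwh * grid_ci)            # displaced carbon, half-hourly factors
savings_annual_mean = np.sum(pv_kwh) * grid_ci.mean()     # same generation, annual mean factor

print(f"half-hourly factors: {savings_half_hourly:.0f} kgCO2")
print(f"annual mean factor:  {savings_annual_mean:.0f} kgCO2")
```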
Abstract:
The transition to a low-carbon economy urgently demands better information on the drivers of energy consumption. UK government policy has prioritized energy efficiency in the built stock as a means of carbon reduction, but the sector is historically information poor, particularly the non-domestic building stock. This paper presents the results of a pilot study that investigated whether and how property and energy consumption data might be combined for non-domestic energy analysis. These data were combined in a ‘Non-Domestic Energy Efficiency Database’ to describe the location and physical attributes of each property and its energy consumption. The aim was to support the generation of a range of energy-efficiency statistics for the industrial, commercial and institutional sectors of the non-domestic building stock, and to provide robust evidence for national energy-efficiency and carbon-reduction policy development and monitoring. The work has brought together non-domestic energy data, property data and mapping in a ‘data framework’ for the first time. The results show what is possible when these data are integrated and the associated difficulties. A data framework offers the potential to inform energy-efficiency policy formation and to support its monitoring at a level of detail not previously possible.
Abstract:
Atmosphere–ocean general circulation models (AOGCMs) predict a weakening of the Atlantic meridional overturning circulation (AMOC) in response to anthropogenic forcing of climate, but there is a large model uncertainty in the magnitude of the predicted change. The weakening of the AMOC is generally understood to be the result of increased buoyancy input to the North Atlantic in a warmer climate, leading to reduced convection and deep water formation. Consistent with this idea, model analyses have shown empirical relationships between the AMOC and the meridional density gradient, but this link is not direct because the large-scale ocean circulation is essentially geostrophic, making currents and pressure gradients orthogonal. Analysis of the budget of kinetic energy (KE) instead of momentum has the advantage of excluding the dominant geostrophic balance. Diagnosis of the KE balance of the HadCM3 AOGCM and its low-resolution version FAMOUS shows that KE is supplied to the ocean by the wind and dissipated by viscous forces in the global mean of the steady-state control climate, and the circulation does work against the pressure-gradient force, mainly in the Southern Ocean. In the Atlantic Ocean, however, the pressure-gradient force does work on the circulation, especially in the high-latitude regions of deep water formation. During CO2-forced climate change, we demonstrate a very good temporal correlation between the AMOC strength and the rate of KE generation by the pressure-gradient force over 50–70°N in the Atlantic Ocean in each of nine contemporary AOGCMs, supporting a buoyancy-driven interpretation of AMOC changes. To account for this, we describe a conceptual model, which offers an explanation of why AOGCMs with stronger overturning in the control climate tend to have a larger weakening under CO2 increase.
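The diagnostic reduces to correlating two model time series; the sketch below shows the calculation with placeholder arrays standing in for the AOGCM output (AMOC strength and the rate of KE generation by the pressure-gradient force over 50–70°N of the Atlantic).

```python
import numpy as np

# Placeholder time series standing in for diagnostics from a CO2-forced AOGCM
# run; the lengths, units and noise levels are invented for illustration.
years = 140
rng = np.random.default_rng(2)
trend = -np.linspace(0, 5, years)                                        # weakening under forcing
amoc_sv = 18 + trend + rng.standard_normal(years)                        # AMOC strength (Sv)
ke_generation = 0.4 + 0.05 * trend + 0.05 * rng.standard_normal(years)   # work done by pressure-gradient force

r = np.corrcoef(amoc_sv, ke_generation)[0, 1]
print(f"temporal correlation between AMOC strength and KE generation: r = {r:.2f}")
```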
Abstract:
Wind generation’s contribution to meeting extreme peaks in electricity demand is a key concern for the integration of wind power. In Great Britain (GB), robustly assessing this contribution directly from power system data (i.e. metered wind-supply and electricity demand) is difficult as extreme peaks occur infrequently (by definition) and measurement records are both short and inhomogeneous. Atmospheric circulation-typing combined with meteorological reanalysis data is proposed as a means to address some of these difficulties, motivated by a case study of the extreme peak demand events in January 2010. A preliminary investigation of the physical and statistical properties of these circulation types suggests that they can be used to identify the conditions that are most likely to be associated with extreme peak demand events. Three broad cases are highlighted as requiring further investigation. The high-over-Britain anticyclone is found to be generally associated with very low winds but relatively moderate temperatures (and therefore moderate peak demands, somewhat in contrast to the classic low-wind cold snap that is sometimes apparent in the literature). In contrast, both longitudinally extended blocking over Scotland/Scandinavia and latitudinally extended troughs over western Europe appear to be more closely linked to the very cold GB temperatures (usually associated with extreme peak demands). In both of these latter situations, wind resource averaged across GB appears to be more moderate.
Abstract:
This paper proposes a practical approach to the enhancement of Quality of Service (QoS) routing by means of providing alternative or repair paths in the event of a breakage of a working path. The proposed scheme guarantees that every Protected Node (PN) is connected to a multi-repair path such that no further failure or breakage of single or double repair paths can cause any simultaneous loss of connectivity between an ingress node and an egress node. Links to be protected in an MPLS network are predefined, and a Label Switched Path (LSP) request involves the establishment of a working path. The use of multi-protection paths permits the formation of numerous protection paths, allowing greater flexibility. Our analysis examined several methods, including single, double and multi-repair routes and the prioritization of signals along the protected paths, to improve QoS and throughput and to reduce the cost of protection path placement, delay, congestion and collisions. Results indicated that creating multi-repair paths and prioritizing packets reduces delay and increases throughput: the delays at the ingress/egress LSPs were low compared with the case in which the signals had not been classified. The proposed scheme therefore provides a means to improve QoS in MPLS path restoration using available network resources. Prioritizing packets in the data plane revealed that the amount of traffic transmitted on medium- and low-priority Label Switched Paths (LSPs) has no impact on the explicit rate of the high-priority LSP, so the problem of a knock-on effect is eliminated.
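As a rough illustration of the prioritization result, the sketch below simulates a strict-priority scheduler for three LSP traffic classes; the link rate, arrival rates and priority mapping are invented and are not taken from the paper.

```python
import random

# Toy strict-priority scheduler for three LSP classes, illustrating how
# medium/low priority load leaves the high-priority class largely unaffected.
random.seed(0)
SERVICE_TIME = 0.001                                 # seconds per packet on the link
PRIORITY = {"high": 0, "medium": 1, "low": 2}
RATES = {"high": 100, "medium": 300, "low": 500}     # packets over one second, per class

arrivals = [(random.uniform(0, 1.0), cls)
            for cls, rate in RATES.items() for _ in range(rate)]
arrivals.sort()

pending, delays = [], {cls: [] for cls in RATES}
clock, i = 0.0, 0
while i < len(arrivals) or pending:
    # Admit everything that has arrived by the current clock time.
    while i < len(arrivals) and arrivals[i][0] <= clock:
        pending.append(arrivals[i]); i += 1
    if not pending:                                  # idle until the next arrival
        clock = arrivals[i][0]
        continue
    # Strict priority: always serve the highest-priority waiting packet first.
    pending.sort(key=lambda p: PRIORITY[p[1]])
    t_arr, cls = pending.pop(0)
    clock += SERVICE_TIME
    delays[cls].append(clock - t_arr)

for cls in RATES:
    print(cls, f"mean delay {1000 * sum(delays[cls]) / len(delays[cls]):.2f} ms")
```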
Abstract:
The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited to distributed-memory systems with reliable interconnection networks. However, in large-scale geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or high latency in communication paths. This work proposes a fully decentralised algorithm (Epidemic K-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against state-of-the-art distributed K-Means algorithms based on sampling methods. The experimental analysis confirms that the proposed algorithm is a practical and accurate distributed K-Means implementation for networked systems of very large and extreme scale.
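A minimal sketch of the epidemic (gossip) averaging idea behind a decentralised K-Means step, with invented data and network size: each node computes local per-cluster sums and counts, and repeated pairwise averaging drives every node toward the same global centroid estimate without any central coordinator. This illustrates the general technique, not the paper's exact protocol.

```python
import numpy as np

# Invented network and data: 20 nodes, each holding 50 local points in 2-D.
rng = np.random.default_rng(3)
n_nodes, k, dim = 20, 3, 2
local_data = [rng.standard_normal((50, dim)) + rng.integers(0, 5, dim) for _ in range(n_nodes)]
centroids = rng.standard_normal((k, dim))                 # shared initial centroids

for _ in range(5):                                        # a few K-Means iterations
    # Local step: each node computes per-cluster sums and counts from its own data.
    sums = np.zeros((n_nodes, k, dim))
    counts = np.zeros((n_nodes, k))
    for n, data in enumerate(local_data):
        labels = np.argmin(np.linalg.norm(data[:, None] - centroids[None], axis=2), axis=1)
        for j in range(k):
            sums[n, j] = data[labels == j].sum(axis=0)
            counts[n, j] = (labels == j).sum()
    # Gossip step: repeated pairwise averaging approximates the global mean,
    # so sum/count at any node converges to the global centroid.
    for _ in range(200):
        a, b = rng.choice(n_nodes, 2, replace=False)
        sums[[a, b]] = (sums[a] + sums[b]) / 2
        counts[[a, b]] = (counts[a] + counts[b]) / 2
    centroids = sums[0] / np.maximum(counts[0][:, None], 1e-9)

print("estimated centroids:\n", centroids.round(2))
```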
Abstract:
Modern Portfolio Theory (MPT) has been advocated as a more rational approach to the construction of real estate portfolios. The application of MPT can now be achieved with relative ease using the powerful facilities of modern spreadsheets, and does not necessarily require specialist software. This capability comes from an add-in tool, called an Optimiser or Solver, now included in several spreadsheets. The value of using this kind of more sophisticated analysis feature of spreadsheets is increasingly difficult to ignore. This paper examines the use of the spreadsheet Optimiser in handling asset allocation problems. Using the Markowitz Mean-Variance approach, the paper introduces the necessary calculations, and shows, by means of an elementary example implemented in Microsoft Excel, how the Optimiser may be used. Emphasis is placed on understanding the inputs and outputs of the portfolio optimisation process, and the danger of treating the Optimiser as a black box is discussed.
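The paper works the example in Excel's Optimiser/Solver; the sketch below is a Python analogue of the same Markowitz set-up (minimise portfolio variance subject to a target return, full investment and no short selling), with invented expected returns and covariances.

```python
import numpy as np
from scipy.optimize import minimize

# Invented inputs for three assets; in the paper these would be estimated from
# real estate return data and entered into the spreadsheet.
mu = np.array([0.08, 0.10, 0.06])                     # expected returns
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.03]])                  # covariance matrix
target = 0.08

def variance(w):
    return w @ cov @ w

constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},   # fully invested
               {"type": "eq", "fun": lambda w: w @ mu - target}] # hit target return
bounds = [(0.0, 1.0)] * len(mu)                        # long-only weights

result = minimize(variance, x0=np.full(len(mu), 1 / 3),
                  bounds=bounds, constraints=constraints)
print("optimal weights:", result.x.round(3))
print("portfolio risk (std dev):", np.sqrt(result.fun).round(4))
```

Inspecting the weights, constraints and resulting risk in this way mirrors the emphasis in the paper on understanding the Optimiser's inputs and outputs rather than treating it as a black box.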
Abstract:
This paper presents the development of an export coefficient model to characterise the rates and sources of P export from land to water in four reservoir systems located in a semi-arid rural region of southern Portugal. The model was developed to enable effective management of these important water resource systems under the EU Water Framework Directive. This is the first time such an approach has been fully adapted for the semi-arid systems typical of Mediterranean Europe. The sources of P loading delivered to each reservoir from its catchment were determined, and scenario analysis was undertaken to predict the likely impact of catchment management strategies on the scale and rate of P loading delivered to each water body. The results indicate the importance of farming and sewage treatment works/collective septic tank discharges as the main contributors to the total diffuse and point-source P loading delivered to the reservoirs, respectively. A reduction in the total P loading for all study areas would require control of farming practices and more efficient removal of P from human wastes prior to discharge to surface waters. The scenario analysis indicates that a strategy based solely on reducing the agricultural P surplus may result in only a slow improvement in water quality, which would be unlikely to support the generation of good ecological status in reservoirs. The model application indicates that a reduction of P inputs to the reservoirs should first focus on reducing P loading from sewage effluent discharges through the introduction of tertiary treatment (P-stripping) in all major residential areas. The fully calibrated export coefficient modelling approach transferred well to semi-arid regions, with the only significant limitation being the availability of suitable input data to drive the model. Further studies using this approach in semi-arid catchments are now needed to increase the knowledge of nutrient export behaviours in semi-arid regions.
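A minimal sketch of an export coefficient calculation, with invented land-use areas, coefficients and population figures: the diffuse P load is the sum over land uses of area times export coefficient, point-source loads come from sewage discharges, and a P-stripping scenario changes only the removal fraction.

```python
# Illustrative export coefficient calculation; all numbers below are invented
# and are not the calibrated values used for the Portuguese catchments.
export_coeff_kg_per_ha = {"arable": 0.65, "pasture": 0.30, "forest": 0.02}   # kg P/ha/yr
area_ha = {"arable": 12000, "pasture": 8000, "forest": 5000}
population = 15000
per_capita_kg = 0.4          # kg P per person per year reaching the sewage works
removal_fraction = 0.3       # P removed before discharge (no tertiary treatment)

diffuse = sum(export_coeff_kg_per_ha[lu] * area_ha[lu] for lu in area_ha)
point = population * per_capita_kg * (1 - removal_fraction)
total = diffuse + point

print(f"diffuse P load:  {diffuse:,.0f} kg/yr")
print(f"point-source P:  {point:,.0f} kg/yr")
print(f"total P load:    {total:,.0f} kg/yr  ({100 * point / total:.0f}% from sewage)")

# Scenario: tertiary treatment (P-stripping) raising removal to 90%.
point_stripped = population * per_capita_kg * (1 - 0.9)
print(f"with P-stripping: {diffuse + point_stripped:,.0f} kg/yr")
```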
Abstract:
Data analysis based on station observations reveals that many meteorological variables averaged over the Tibetan Plateau (TP) are closely correlated, and their trends during the past decades are well correlated with the rainfall trend of the Asian summer monsoon. However, such correlation does not necessarily imply causality. Further diagnosis confirms the existence of a weakening trend in TP thermal forcing, characterized by weakened surface sensible heat flux in spring and summer during the past decades. This weakening trend is associated with decreasing summer precipitation over northern South Asia and North China and increasing precipitation over northwestern China, South China, and Korea. An atmospheric general circulation model, the HadAM3, is employed to elucidate the causality between the weakening TP forcing and the change in the Asian summer monsoon rainfall. Results demonstrate that a weakening in surface sensible heating over the TP results in reduced summer precipitation in the plateau region and a reduction in the associated latent heat release in summer. These changes in turn result in the weakening of the near-surface cyclonic circulation surrounding the plateau and the subtropical anticyclone over the subtropical western North Pacific, similar to the results obtained from the idealized TP experiment in Part I of this study. The southerly that normally dominates East Asia, ranging from the South China Sea to North China, weakens, resulting in a weaker equilibrated Sverdrup balance between positive vorticity generation and latent heat release. Consequently, the convergence of water vapor transport is confined to South China, forming a unique anomaly pattern in monsoon rainfall, the so-called “south wet and north dry.” Because the weakening trend in TP thermal forcing is associated with global warming, the present results provide an effective means for assessing projections of regional climate over Asia in the context of global warming.
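For reference, the Sverdrup vorticity balance invoked here is commonly written in pressure coordinates as below; this is the standard textbook form, given as a hedged aid to the reader rather than an equation quoted from the paper, with latent heat release sustaining the vertical motion that the meridional (southerly) flow balances.

```latex
% Sverdrup vorticity balance (pressure coordinates):
% meridional advection of planetary vorticity ~ vortex stretching by heating-driven ascent
\beta v \;\approx\; f\,\frac{\partial \omega}{\partial p}
```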