18 results for Supply network mapping
in CentAUR: Central Archive, University of Reading, UK
Abstract:
A neural network enhanced self-tuning controller is presented, which combines the attributes of neural network mapping with a generalised minimum variance self-tuning control (STC) strategy. In this way the controller can deal with nonlinear plants that exhibit features such as uncertainties, nonminimum-phase behaviour, coupling effects and unmodelled dynamics, and whose nonlinearities are assumed to be globally bounded. The unknown nonlinear plant to be controlled is approximated by an equivalent model composed of a simple linear submodel plus a nonlinear submodel. A generalised recursive least squares algorithm is used to identify the linear submodel, and a layered neural network is used to model the unknown nonlinear submodel, with the weights updated based on the error between the plant output and the output of the linear submodel. The controller design procedure is based on the equivalent model, so the nonlinear submodel is naturally accommodated within the control law. Two simulation studies are provided to demonstrate the effectiveness of the control algorithm.
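A minimal numerical sketch of the identification scheme described above, assuming a single-input single-output plant, a first-order ARX-style linear submodel identified by recursive least squares, and a one-hidden-layer network trained by gradient descent on the residual between the plant output and the linear submodel; the plant, model orders and learning rates are illustrative rather than those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative nonlinear plant (not from the paper): linear core + bounded nonlinearity.
def plant(y1, u1):
    return 0.6 * y1 + 0.4 * u1 + 0.2 * np.sin(2.0 * y1)

# Linear submodel y_lin[k] = theta . [y[k-1], u[k-1]], identified by RLS.
theta = np.zeros(2)
P = np.eye(2) * 1e3
lam = 0.99                      # forgetting factor

# Nonlinear submodel: one hidden layer, trained on the residual y - y_lin.
W1 = rng.normal(scale=0.5, size=(8, 2)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=8);      b2 = 0.0
eta = 0.02                      # learning rate

def nn(x):
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2, h

y_prev, u_prev = 0.0, 0.0
for k in range(2000):
    u = np.sin(0.05 * k) + 0.3 * rng.standard_normal()   # excitation signal
    y = plant(y_prev, u_prev)
    phi = np.array([y_prev, u_prev])

    # RLS update of the linear submodel
    Pphi = P @ phi
    gain = Pphi / (lam + phi @ Pphi)
    theta = theta + gain * (y - theta @ phi)
    P = (P - np.outer(gain, Pphi)) / lam

    # Gradient step for the network on the residual between plant and linear submodel
    y_nl, h = nn(phi)
    e_nl = (y - theta @ phi) - y_nl
    W2 += eta * e_nl * h
    b2 += eta * e_nl
    dh = eta * e_nl * W2 * (1.0 - h ** 2)
    W1 += np.outer(dh, phi)
    b1 += dh

    y_prev, u_prev = y, u

print("linear submodel parameters:", theta)
print("final one-step residual:", e_nl)
```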
Abstract:
Climate change is one of the major challenges facing economic systems at the start of the 21st century. Reducing greenhouse gas emissions will require both restructuring the energy supply system (production) and addressing the efficiency and sufficiency of the social uses of energy (consumption). The energy production system is a complicated supply network of interlinked sectors with 'knock-on' effects throughout the economy. End-use energy consumption is governed by complex sets of interdependent cultural, social, psychological and economic variables driven by shifts in consumer preference and technological development trajectories. To date, few models have been developed for exploring alternative joint energy production-consumption systems. The aim of this work is to propose one such model. This is achieved in a methodologically coherent manner through the integration of qualitative input-output models of production with Bayesian belief network models of consumption at the point of final demand. The resulting integrated framework can be applied either (relatively) quickly and qualitatively to explore alternative energy scenarios, or as a fully developed quantitative model to derive or assess specific energy policy options. The qualitative applications are explored here.
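A small sketch of the production side of such a coupled framework: a toy three-sector Leontief input-output calculation whose final demand is scaled by consumption-scenario multipliers standing in for the Bayesian belief network outputs. All coefficients below are invented for illustration.

```python
import numpy as np

# Toy technical-coefficients matrix A for three sectors (illustrative values only):
# rows/columns: electricity, gas, services.
A = np.array([
    [0.10, 0.20, 0.05],
    [0.25, 0.05, 0.02],
    [0.05, 0.10, 0.15],
])

baseline_demand = np.array([100.0, 80.0, 200.0])   # final demand at the point of use

# Consumption-side scenario, e.g. efficiency gains reducing household energy demand.
# In the paper these shifts come from Bayesian belief network models of consumption;
# here they are simply assumed multipliers.
scenario = np.array([0.85, 0.70, 1.00])

I = np.eye(3)
for name, d in [("baseline", baseline_demand),
                ("efficiency scenario", baseline_demand * scenario)]:
    gross_output = np.linalg.solve(I - A, d)        # x = (I - A)^-1 d
    print(f"{name}: gross output by sector =", np.round(gross_output, 1))
```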
Abstract:
This paper describes the hydrochemistry of a lowland, urbanised river system, The Cut in England, using in situ sub-daily sampling. The Cut receives effluent discharges from four major sewage treatment works serving around 190,000 people. These discharges consist largely of treated water, originally abstracted from the River Thames and returned via the water supply network, substantially increasing the natural flow. The hourly water quality data were supplemented by weekly manual sampling with laboratory analysis to check the hourly data and measure further determinands. Mean phosphorus and nitrate concentrations were very high, breaching standards set by EU legislation. Though 56% of the catchment area is agricultural, the hydrochemical dynamics were significantly impacted by effluent discharges, which accounted for approximately 50% of the annual catchment phosphorus input loads and, on average, 59% of river flow at the monitoring point. Diurnal dissolved oxygen data demonstrated high in-stream productivity. From a comparison of high-frequency and conventional monitoring data, it is inferred that much of the primary production was dominated by benthic algae, largely diatoms. Despite the high productivity and nutrient concentrations, the river water did not become anoxic and major phytoplankton blooms were not observed. The strong diurnal and annual variation observed showed that assessments of water quality made under the Water Framework Directive (WFD) are sensitive to the time and season of sampling. It is recommended that specific sampling time windows be specified for each determinand, and that WFD targets be applied in combination to help identify periods of greatest ecological risk.
Abstract:
The resource-based view of strategy suggests that competitiveness derives in part from a firm's ability to collaborate with a subset of its supply network to co-create highly valued products and services. This relational capability relies on a foundational intra- and inter-organisational architecture: the manifestation of strategic, people and process decisions facilitating the interface between the firm and its strategic suppliers. Using covariance-based structural equation modelling on data collected by mail survey, we examine the relationships between internal and external features of relational architecture and their relationship with relational capability and relational quality. We find significant relationships between both internal and external relational architecture and relational capability, and between relational capability and relational quality. Novel constructs for the internal and external elements of relational architecture are specified to demonstrate their positive influence on relational capability and relationship quality.
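The study uses covariance-based structural equation modelling; as a rough, simplified stand-in, the sketch below estimates the two hypothesised paths by ordinary least squares on synthetic composite scores. The data and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 250

# Synthetic composite scores standing in for survey constructs (hypothetical data).
internal_arch = rng.normal(size=n)                      # internal relational architecture
external_arch = rng.normal(size=n)                      # external relational architecture
capability = 0.5 * internal_arch + 0.4 * external_arch + rng.normal(scale=0.6, size=n)
quality = 0.7 * capability + rng.normal(scale=0.6, size=n)

def ols(y, *xs):
    X = np.column_stack([np.ones_like(y), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]                                     # drop the intercept

# Path 1: relational capability ~ internal + external architecture
# Path 2: relational quality ~ relational capability
print("capability paths (internal, external):",
      np.round(ols(capability, internal_arch, external_arch), 2))
print("quality path (capability):", np.round(ols(quality, capability), 2))
```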
Abstract:
Stakeholder analysis plays a critical role in business analysis. However, the majority of stakeholder identification and analysis methods focus on activities and processes and ignore the artefacts being processed by human beings. By focusing on the outputs of the organisation, an artefact-centric view helps create a network of artefacts and a component-based structure of the organisation and its supply chain participants. Because the relationships are based on these components, the interdependency between stakeholders and the focal organisation can be measured once the stakeholders have been identified. Each stakeholder is associated with two types of dependency, namely the stakeholder's dependency on the focal organisation and the focal organisation's dependency on the stakeholder. We identify three factors for each type of dependency and propose equations that calculate the dependency indexes. Once both types of dependency index are calculated, each stakeholder can be placed and categorised into one of four groups: critical stakeholder, mutual benefits stakeholder, replaceable stakeholder, and easy care stakeholder. The mutual dependency grid and the dependency gap analysis, which further investigates the priority of each stakeholder by calculating the weighted dependency gap between the focal organisation and the stakeholder, subsequently help the focal organisation to better understand its stakeholders and manage its stakeholder relationships.
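A short sketch of the dependency indexes and mutual dependency grid, assuming each of the three factors is scored on a 1-5 scale, aggregated as a simple mean, and that the gap is the unweighted absolute difference of the two indexes. The mapping of quadrants to the four group labels is also an assumption, since the paper's equations are not reproduced in the abstract.

```python
# Hypothetical factor scores on a 1-5 scale; the aggregation (simple mean), the quadrant
# threshold (scale midpoint) and the quadrant-to-label mapping are illustrative assumptions.
stakeholders = {
    # name: (factors for the stakeholder's dependency on the focal organisation,
    #        factors for the focal organisation's dependency on the stakeholder)
    "Key supplier":   ((4, 5, 4), (5, 4, 4)),
    "Local reseller": ((5, 4, 4), (2, 2, 1)),
    "Logistics firm": ((2, 1, 2), (4, 5, 4)),
    "Ad-hoc vendor":  ((2, 2, 1), (1, 2, 2)),
}

THRESHOLD = 3.0   # midpoint of the 1-5 scale

def index(factors):
    return sum(factors) / len(factors)

def category(stk_on_focal, focal_on_stk):
    # Assumed quadrant labels, not the paper's definition.
    if stk_on_focal >= THRESHOLD and focal_on_stk >= THRESHOLD:
        return "mutual benefits stakeholder"
    if focal_on_stk >= THRESHOLD:
        return "critical stakeholder"
    if stk_on_focal >= THRESHOLD:
        return "replaceable stakeholder"
    return "easy care stakeholder"

for name, (on_focal, on_stk) in stakeholders.items():
    d_sf, d_fs = index(on_focal), index(on_stk)
    gap = abs(d_fs - d_sf)          # assumed (unweighted) dependency gap
    print(f"{name}: stakeholder->focal {d_sf:.1f}, focal->stakeholder {d_fs:.1f}, "
          f"gap {gap:.1f}, {category(d_sf, d_fs)}")
```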
Abstract:
The European research project TIDE (Tidal Inlets Dynamics and Environment) is developing and validating coupled models describing the morphological, biological and ecological evolution of tidal environments. The interactions between the physical and biological processes occurring in these regions require that the system be studied as a whole rather than as separate parts. Extensive use of remote sensing, including LiDAR, is being made to provide validation data for the modelling. This paper describes the different uses of LiDAR within the project and their relevance to the TIDE science objectives. LiDAR data have been acquired from three different environments: the Venice Lagoon in Italy, Morecambe Bay in England, and the Eden estuary in Scotland. LiDAR accuracy at each site has been evaluated using ground reference data acquired with differential GPS. A semi-automatic technique has been developed to extract tidal channel networks from LiDAR data, either used alone or fused with aerial photography. While the resulting networks may require some correction, the procedure does allow network extraction over large areas using objective criteria and reduces fieldwork requirements. The extracted networks may subsequently be used in geomorphological analyses, for example to describe the drainage patterns they induce and to examine their rate of change. Estimation of the heights of the low and sparse vegetation on marshes is being investigated by analysing the statistical distribution of the measured LiDAR heights; species with different mean heights may be separated using the first-order moments of the height distribution.
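A brief sketch of the vegetation-height idea in the final sentence, assuming two marsh patches whose LiDAR height returns have different means; the species mean heights and the separation threshold are illustrative, not values from the project.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic LiDAR height returns (metres above the marsh surface) for two patches.
# Assumed species mean heights: ~0.15 m and ~0.45 m (illustrative values only).
patch_a = rng.normal(loc=0.15, scale=0.05, size=500)
patch_b = rng.normal(loc=0.45, scale=0.08, size=500)

for name, heights in [("patch A", patch_a), ("patch B", patch_b)]:
    mean = heights.mean()                    # first-order moment of the height distribution
    std = heights.std()
    label = "short-canopy species" if mean < 0.30 else "tall-canopy species"
    print(f"{name}: mean height {mean:.2f} m, std {std:.2f} m -> {label}")
```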
Abstract:
The construction industry incurs a considerable amount of waste as a result of poor logistics and supply chain network management, so managing logistics in the construction industry is critical. An effective logistics system ensures delivery of the right products and services to the right players at the right time while minimising costs and rewarding all sectors based on the value they add to the supply chain. This paper reports on an ongoing research study on the concept of context-aware service delivery in construction project supply chain logistics. As part of the emerging wireless technologies, an Intelligent Wireless Web (IWW) with context-aware computing capability represents the next-generation ICT application for construction logistics management. Such an intelligent system has the potential to serve and improve construction logistics through access to context-specific data, information and services. Existing mobile communication deployments in the construction industry rely on static modes of information delivery and do not take into account the worker's changing context and dynamic project conditions. The major problems in these applications are the lack of context-specificity in the distribution of information, services and other project resources, and the lack of cohesion with the existing desktop-based ICT infrastructure. The research focuses on identifying the context dimensions (user context, environmental context and project context), selecting technologies to capture context parameters (such as wireless sensors and RFID), and selecting supporting technologies such as wireless communication, the Semantic Web, Web Services and agents. The integration of context-aware computing and Web Services to create an intelligent collaboration environment for managing construction logistics will take into account the necessary critical parameters, such as storage, transportation, distribution and assembly, both off-site and on-site.
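An illustrative sketch (not the project's implementation) of how the three context dimensions named above might drive context-specific information delivery to a site worker; the classes, rules and example data are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    role: str            # e.g. from a login or RFID badge read
    location: str        # e.g. from a wireless sensor / RFID zone detection

@dataclass
class EnvironmentContext:
    weather: str

@dataclass
class ProjectContext:
    active_tasks: dict   # zone -> task currently scheduled there

def deliver(user: UserContext, env: EnvironmentContext, project: ProjectContext) -> list[str]:
    """Return context-specific messages instead of a static broadcast."""
    messages = []
    task = project.active_tasks.get(user.location)
    if task:
        messages.append(f"{user.role} in {user.location}: materials for '{task}' staged at gate 2.")
    if env.weather == "rain" and task == "concrete pour":
        messages.append("Weather alert: consider rescheduling the pour.")
    return messages

print(deliver(UserContext("site engineer", "zone B"),
              EnvironmentContext("rain"),
              ProjectContext({"zone B": "concrete pour"})))
```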
Abstract:
We model the large-scale fading of wireless THz communication links deployed in a metropolitan area, taking into account reception through direct line of sight, ground or wall reflection, and diffraction. The movement of the receiver in three dimensions is modelled by an autonomous dynamic linear system in state-space form, whereas the geometric relations involved in the attenuation and multi-path propagation of the electric field are described by a static nonlinear mapping. A subspace algorithm in conjunction with polynomial regression is used to identify a Wiener model from time-domain measurements of the field intensity.
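A simplified two-stage sketch of Wiener-model identification on synthetic data: a least-squares FIR approximation of the linear dynamics, followed by polynomial regression onto the measured intensity. The paper's subspace state-space algorithm is not reproduced; the system, model orders and noise levels below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic Wiener system (illustrative, not the measured THz link):
# linear dynamics g followed by a static nonlinearity f.
g = np.array([0.5, 0.8, 0.3, 0.1])                 # "true" impulse response
f = lambda x: np.tanh(1.5 * x)                     # "true" static nonlinearity

u = rng.standard_normal(5000)                      # Gaussian excitation
x = np.convolve(u, g)[: len(u)]                    # unobserved intermediate signal
y = f(x) + 0.02 * rng.standard_normal(len(u))      # measured field intensity

# Stage 1: best linear (FIR) approximation of the dynamics by least squares.
order = 8
Phi = np.column_stack([np.roll(u, k) for k in range(order)])
Phi[:order] = 0.0                                  # discard wrapped-around samples
g_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
x_hat = Phi @ g_hat                                # estimated linear-block output

# Stage 2: polynomial regression from the linear output to the measurement,
# recovering the static nonlinearity up to a scaling ambiguity.
coeffs = np.polyfit(x_hat, y, deg=5)
y_model = np.polyval(coeffs, x_hat)

print("estimated FIR taps (scaled):", np.round(g_hat[:4], 2))
print("fit RMSE:", np.round(np.sqrt(np.mean((y - y_model) ** 2)), 4))
```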
Abstract:
The identification and visualization of clusters formed by motor unit action potentials (MUAPs) is an essential step in investigations seeking to explain the control of the neuromuscular system. This work introduces the generative topographic mapping (GTM), a novel machine learning tool, for clustering of MUAPs, and extends the GTM technique to provide a way of visualizing MUAPs. The performance of GTM was compared to that of three other clustering methods: the self-organizing map (SOM), a Gaussian mixture model (GMM), and the neural-gas network (NGN). The results, based on the study of experimental MUAPs, showed that the success rate of both GTM and SOM exceeded that of GMM and NGN, and that GTM may in practice be used as a principled alternative to the SOM in the study of MUAPs. A visualization tool, which we call the GTM grid, was devised for visualizing MUAPs lying in a high-dimensional space. The visualization provided by the GTM grid was compared to that obtained from principal component analysis (PCA).
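A short scikit-learn sketch of two of the comparison methods mentioned above on simulated MUAP-like waveforms. GTM, SOM and the neural-gas network are not available in scikit-learn, so only the GMM clustering and PCA visualisation baselines are shown; the waveform templates and noise levels are invented.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)

# Simulate MUAP-like waveforms: three template shapes plus noise (illustrative data).
t = np.linspace(0, 1, 64)
templates = [np.exp(-((t - c) ** 2) / 0.005) * a
             for c, a in [(0.3, 1.0), (0.5, -0.8), (0.7, 0.6)]]
X = np.vstack([tpl + 0.05 * rng.standard_normal((50, t.size)) for tpl in templates])

# Cluster with a Gaussian mixture model (one of the paper's comparison methods).
labels = GaussianMixture(n_components=3, random_state=0).fit_predict(X)

# Visualise the high-dimensional waveforms in 2-D with PCA (the comparison to the GTM grid).
coords = PCA(n_components=2).fit_transform(X)
print("cluster sizes:", np.bincount(labels))
print("first waveform's 2-D PCA coordinates:", np.round(coords[0], 2))
```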
Abstract:
This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-based matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by deriving an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. The new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, where it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse-of-dimensionality problem. Numerical examples are included to demonstrate the effectiveness of the proposed algorithm.
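A compressed sketch of the rule-ranking idea, assuming Gaussian membership functions, a weighting matrix taken as the diagonal matrix of a rule's memberships over the data, and an A-optimality-style identifiability score trace((P^T P)^{-1}) for the weighted regression matrix P; the extended Gram-Schmidt decomposition itself and the paper's exact criterion are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative single-input data set and a candidate rule base of Gaussian memberships.
x = rng.uniform(-2.0, 2.0, size=200)
y = 0.8 * x + 0.3 * x ** 2 + 0.05 * rng.standard_normal(x.size)

centres = [-1.5, 0.0, 1.5]                     # assumed rule centres
width = 0.8                                    # assumed membership width

X = np.column_stack([np.ones_like(x), x])      # local linear (T-S) regressors

scores = []
for c in centres:
    w = np.exp(-((x - c) ** 2) / (2 * width ** 2))   # fuzzy memberships for this rule
    P = X * w[:, None]                                # weighted regression matrix W X
    crit = np.trace(np.linalg.inv(P.T @ P))           # A-optimality-style identifiability score
    scores.append(crit)

order = np.argsort(scores)                            # smaller score = better-identified rule
print("rule ranking by identifiability (centre values):", [centres[i] for i in order])
```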
Abstract:
A neural network was used to map three PID operating regions for a two-input two-output steam generator system. After being trained on the PID controllers corresponding to each control region, the network was used in stand-alone feedforward operation to control the whole operating range of the process. The network inputs are the plant error signals, their integral, their derivative and a 4-error delay train.
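A toy sketch of training a feedforward network to reproduce region-dependent PID actions, using scikit-learn's MLPRegressor on a single-loop stand-in for the steam generator; the gain sets, the way the operating region is inferred and the signal statistics are all invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)

# Three illustrative PID gain sets (Kp, Ki, Kd), one per assumed operating region.
gains = {"low": (2.0, 0.5, 0.10), "mid": (1.2, 0.3, 0.05), "high": (0.8, 0.2, 0.02)}

def region(i_term):
    # The operating region is inferred here from the integral term, purely as an
    # illustrative proxy for the plant's operating point.
    return "low" if i_term < -0.5 else ("mid" if i_term < 0.5 else "high")

n = 5000
e = rng.normal(scale=0.5, size=n)              # error signals
ie = rng.normal(scale=1.0, size=n)             # integral of error
de = rng.normal(scale=0.2, size=n)             # derivative of error
delays = rng.normal(scale=0.5, size=(n, 4))    # 4-error delay train

# Target control action: the output of whichever PID controller owns the region.
u = np.array([np.dot(gains[region(i)], (ek, i, dk)) for ek, i, dk in zip(e, ie, de)])

X = np.column_stack([e, ie, de, delays])
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0).fit(X, u)
print("R^2 of the network against its PID teachers:", round(net.score(X, u), 3))
```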
Abstract:
Almost all the electricity currently produced in the UK is generated as part of a centralised power system designed around large fossil fuel or nuclear power stations. This power system is robust and reliable, but the efficiency of power generation is low, resulting in large quantities of waste heat. The principal aim of this paper is to investigate an alternative concept: energy production by small-scale generators in close proximity to the energy users, integrated into microgrids. Microgrids (decentralised electricity generation combined with on-site production of heat) promise substantial environmental benefits, brought about by higher energy efficiency and by facilitating the integration of renewable sources such as photovoltaic arrays or wind turbines. By virtue of a good match between generation and load, microgrids have a low impact on the electricity network, despite a potentially significant level of generation by intermittent energy sources. The paper discusses the technical and economic issues associated with this novel concept, giving an overview of the generator technologies, the current regulatory framework in the UK, and the barriers that have to be overcome if microgrids are to make a major contribution to the UK energy supply. The focus of this study is a microgrid of domestic users powered by small combined heat and power (CHP) generators and photovoltaics. Focusing on the energy balance between generation and load, it is found that the optimum combination of generators in the microgrid (around 1.4 kWp of PV array per household and 45% household ownership of micro-CHP generators) will maintain energy balance on a yearly basis if supplemented by energy storage of 2.7 kWh per household. We find that there is no fundamental technological reason why microgrids cannot contribute an appreciable part of the UK energy demand. Indeed, an estimate of cost indicates that the microgrids considered in this study would supply electricity at a cost comparable with the present electricity supply if the current support mechanisms for photovoltaics were maintained. Combining photovoltaics, micro-CHP and a small battery requirement gives a microgrid that is independent of the national electricity network. In the short term, this has particular benefits for remote communities, but more wide-ranging possibilities open up in the medium to long term. Microgrids could meet the need to replace the current generation of nuclear and coal-fired power stations, greatly reducing the demand on the transmission and distribution network.
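A back-of-envelope check of the yearly balance arithmetic behind the quoted mix. The PV yield and household demand figures are typical-order assumptions rather than values from the paper, so the script solves for the micro-CHP output those assumptions would imply instead of asserting the paper's result.

```python
# Reverse check of the yearly energy-balance arithmetic for the quoted mix.
pv_per_household_kwp = 1.4        # from the abstract
chp_ownership = 0.45              # from the abstract
storage_kwh_per_household = 2.7   # from the abstract (diurnal buffering, not used below)

# Placeholder assumptions, not values taken from the paper:
pv_yield_kwh_per_kwp = 850        # assumed typical UK annual PV yield per kWp
demand_kwh = 3300                 # assumed annual household electricity demand

pv_generation = pv_per_household_kwp * pv_yield_kwh_per_kwp
required_chp_kwh = (demand_kwh - pv_generation) / chp_ownership  # per CHP-owning household

print(f"PV supplies about {pv_generation:.0f} kWh of the assumed {demand_kwh} kWh annual demand")
print(f"each micro-CHP unit would then need to generate about {required_chp_kwh:.0f} kWh/year "
      f"(roughly {required_chp_kwh / 8760:.2f} kW average output) for yearly balance")
```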
Abstract:
This paper uses long-term regional construction data and time series modelling to investigate whether increases in infrastructure investment in the English regions lead to subsequent rises in housebuilding and new commercial property. Both physical (roads and harbours) and social (education and health) infrastructure impacts are investigated across nine regions in England. Significant effects for physical infrastructure are found across most regions, along with some evidence of a social infrastructure effect. The results are not consistent across regions, which may be due to geographical differences and to network and diversionary effects. However, the results do suggest that infrastructure has some impact, although it follows differential lag structures. These results provide a test of the hypothesis of the economic benefits of infrastructure investment using an approach that has not been applied before.
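A minimal sketch of the kind of lagged time-series regression described above, using a simulated annual series for one region and a simple distributed-lag ordinary least squares fit; the lag length, effect size and data are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(7)
years = 40

# Simulated annual series for one region (illustrative only).
infrastructure = rng.normal(loc=100, scale=10, size=years)
housebuilding = (0.4 * np.roll(infrastructure, 2)      # effect arriving after a 2-year lag
                 + rng.normal(scale=3, size=years) + 20)
housebuilding[:2] = np.nan                              # lags undefined for the first years

max_lag = 4
rows = [[1.0] + [infrastructure[t - k] for k in range(1, max_lag + 1)]
        for t in range(max_lag, years)]
X = np.array(rows)
y = housebuilding[max_lag:]

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
for k, b in enumerate(beta[1:], start=1):
    print(f"lag {k} coefficient: {b:+.2f}")
```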
Abstract:
Gene expression is a quantitative trait that can be mapped genetically in structured populations to identify expression quantitative trait loci (eQTL). Genes and regulatory networks underlying complex traits can subsequently be inferred. Using a recently released genome sequence, we have defined cis- and trans-eQTL and their environmental response to low phosphorus (P) availability within a complex plant genome and found hotspots of trans-eQTL within the genome. Interval mapping, using P supply as a covariate, revealed 18,876 eQTL. trans-eQTL hotspots occurred on chromosomes A06 and A01 of Brassica rapa; these were enriched with P-metabolism-related Gene Ontology terms (A06) as well as chloroplast- and photosynthesis-related terms (A01). We have also attributed heritability components to measures of gene expression across environments, allowing the identification of novel gene expression markers and of gene expression changes associated with low P availability. Informative gene expression markers were used to map eQTL and P use efficiency-related QTL. Genes responsive to P supply had large environmental and heritable variance components. Regulatory loci and genes associated with P use efficiency identified through eQTL analysis are potential targets for further characterization and may have potential for crop improvement.
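A condensed sketch of single-marker eQTL scanning with an environmental covariate, in the spirit of (but much simpler than) interval mapping with P supply as a covariate; the genotypes, effect sizes and LOD-style model comparison below are simulated and assumed, and dedicated QTL software is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200

genotype = rng.integers(0, 2, size=(n, 50))             # 50 biallelic markers, coded 0/1
p_supply = rng.integers(0, 2, size=n)                    # 0 = low P, 1 = control P

# Simulated expression of one gene: marker 10 acts as an eQTL with a
# P-supply interaction (all effect sizes are invented).
expr = (0.8 * genotype[:, 10] + 0.5 * p_supply
        + 0.6 * genotype[:, 10] * p_supply + rng.normal(scale=0.5, size=n))

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

ones = np.ones(n)
scores = []
for m in range(genotype.shape[1]):
    g = genotype[:, m]
    null = np.column_stack([ones, p_supply])                      # covariate-only model
    full = np.column_stack([ones, p_supply, g, g * p_supply])     # adds marker + interaction
    # LOD-style score: (n/2) * log10(RSS_null / RSS_full)
    scores.append(0.5 * n * np.log10(rss(null, expr) / rss(full, expr)))

print("best marker:", int(np.argmax(scores)), "score:", round(max(scores), 1))
```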