964 results for CHEMORECEPTOR INPUTS


Relevance: 10.00%

Abstract:

The idea of collective unintelligence is examined in this paper to highlight some of the conceptual and practical problems faced in modeling groups. Examples drawn from international crises and economics illustrate collective failures to act in intelligent ways, despite the inputs and efforts of many skilled and intelligent parties. Choices of “appropriate” perceptions, analyses and evaluations are examined, along with how these might be combined. A simple vector representation illustrates some of the issues and creative possibilities in multi-party actions. The resolutions of the various problems and potentials that arise in dealing with the “each and all” of a group (wherein items are necessarily non-parallel and of unequal valency) are revealed as manifest (un-)intelligence. Such issues challenge those seeking to model collective intelligence, but much may be learned.
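As a rough illustration of the "simple vector representation" mentioned above, the sketch below combines each party's intended action as a vector and compares the magnitude of the resultant with the total individual effort. The party names and numbers are invented for illustration and are not drawn from the paper.

```python
# Illustrative sketch (not from the paper): each party's intended action is a vector.
import numpy as np

parties = {
    "A": np.array([3.0, 0.5]),   # strong push in one direction
    "B": np.array([1.0, 2.5]),   # skilled effort, but aimed differently
    "C": np.array([-1.5, 2.0]),  # partly opposed to A
}

resultant = sum(parties.values())
individual_effort = sum(np.linalg.norm(v) for v in parties.values())
collective_effect = np.linalg.norm(resultant)

print(f"Sum of individual efforts: {individual_effort:.2f}")
print(f"Magnitude of combined action: {collective_effect:.2f}")
# Because the vectors are non-parallel and of unequal size, the combined action is
# much weaker than the total effort expended - one reading of collective
# unintelligence despite intelligent individual inputs.
```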

Relevance: 10.00%

Abstract:

Background: The quality of stormwater runoff from ports is significant as it can be an important source of pollution to the marine environment. This is a particular issue for the Port of Brisbane, which is located in an area of high environmental value. It is therefore imperative to develop an in-depth understanding of stormwater runoff quality to ensure that appropriate strategies are in place for quality improvement.

The Port currently has a network of stormwater sample collection points where event-based samples, together with grab samples, are tested for a range of water quality parameters. Whilst this information provides a 'snapshot' of the pollutants being washed from the catchments, it does not allow a quantifiable assessment of the total contaminant loads discharged to the waters of Moreton Bay. Nor does it represent pollutant build-up and wash-off from the different land uses across the broader range of rainfall events that might be expected. As such, it is difficult to relate stormwater quality to different pollutant sources within the Port environment. This makes the source tracking of pollutants to receiving waters extremely difficult and, in turn, hampers the ability to implement appropriate mitigation measures. Also, without this detailed understanding, the efficacy of the various stormwater quality mitigation measures implemented cannot be determined with certainty.

Current knowledge on port stormwater runoff quality: Little knowledge currently exists regarding the pollutant generation capacity of port-specific land uses, as these do not necessarily compare well with conventional urban industrial or commercial land uses due to the specific nature of port activities such as inter-modal operations and cargo management. Furthermore, traffic characteristics in a port area differ from those of a conventional urban area. Consequently, the use of data inputs based on industrial and commercial land uses for modelling purposes is questionable. A comprehensive review of published research failed to locate any investigations of pollutant build-up and wash-off for port-specific land uses. Furthermore, very limited information is made available by ports worldwide about the pollution generation potential of their facilities. Published work in this area has essentially focussed on the water quality or environmental values of the receiving waters, such as the downstream bay or estuary.

The Project: The research project is an outcome of the collaborative Partnership between the Port of Brisbane Corporation (POBC) and Queensland University of Technology (QUT). A key feature of this Partnership is the undertaking of 'cutting edge' research to strengthen the environmental custodianship of the Port area. This project aims to develop a port-specific stormwater quality model to allow informed decision making in relation to stormwater quality improvement in the context of the continued growth of the Port. Stage 1 of the research project focussed on the assessment of pollutant build-up and wash-off, using rainfall simulation, from the current Port of Brisbane facilities, with the longer-term objective of contributing to the development of ecological risk mitigation strategies for future expansion scenarios. Investigation of complex processes such as pollutant wash-off using naturally occurring rainfall events has inherent difficulties; these can be overcome by using simulated rainfall for the investigations.

The deliverables for Stage 1 included the following:
* Pollutant build-up and wash-off profiles for six primary land uses within the Port of Brisbane, to be used for water quality model development.
* Recommendations regarding future stormwater quality monitoring and pollution mitigation measures.

The outcomes are expected to deliver the following benefits to the Port of Brisbane:
* The availability of Port-specific pollutant build-up and wash-off data will enable the implementation of customised stormwater pollution mitigation strategies.
* The water quality data collected will form the baseline data for a Port-specific water quality model for mitigation and predictive purposes.
* The Port will be at the cutting edge of water quality management and environmental best practice in the context of port infrastructure.

Conclusions: The important conclusions from the study are:
* The study confirmed that the Port environment is unique in terms of pollutant characteristics and is not comparable to typical urban land uses.
* For most pollutant types, the Port land uses exhibited lower pollutant concentrations than typical urban land uses.
* Pollutant characteristics varied across the different land uses and were not consistent in terms of land use. Hence, the implementation of stereotypical structural water quality improvement devices could be of limited value.
* The <150 μm particle size range was predominant in suspended solids for pollutant build-up as well as wash-off. Therefore, if suspended solids are targeted as the surrogate parameter for water quality improvement, this specific particle size range needs to be removed.

Recommendations: Based on the study results, the following preliminary recommendations are made:
* Due to the appreciable variation in pollutant characteristics across port land uses, water quality monitoring stations should preferably be located so that source areas can be easily identified.
* The significant pollutants identified for the different land uses should enable the development of a more customised water quality monitoring and testing regime targeting the critical pollutants.
* A 'one size fits all' approach may not be appropriate for the different port land uses due to their varying pollutant characteristics; pollution mitigation will need to be tailored to suit the specific land use.
* To be effective, any structural measures implemented for pollution mitigation should have the capability to remove suspended solids smaller than 150 μm.
* Given the results presented, and particularly the fact that Port land uses cannot be compared to conventional urban land uses in relation to pollutant generation, consideration should be given to the development of a port-specific water quality model.
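For readers unfamiliar with build-up/wash-off modelling, the sketch below shows the generic exponential formulation commonly used in stormwater quality models. The functional form and every coefficient are illustrative placeholders, not values calibrated in this study.

```python
# Illustrative sketch only: a common exponential build-up/wash-off formulation.
# The coefficients below are invented placeholders, not Port of Brisbane values.
import numpy as np

def buildup(days_dry, b_max=5.0, k_b=0.4):
    """Pollutant build-up (kg/ha) after a dry period, approaching a maximum."""
    return b_max * (1.0 - np.exp(-k_b * days_dry))

def washoff(available_load, rainfall_intensity_mm_h, duration_h, k_w=0.05):
    """Portion of the built-up load washed off by a rainfall event."""
    fraction = 1.0 - np.exp(-k_w * rainfall_intensity_mm_h * duration_h)
    return available_load * fraction

load = buildup(days_dry=7)  # load accumulated over a dry week
removed = washoff(load, rainfall_intensity_mm_h=20.0, duration_h=1.5)
print(f"Built-up load: {load:.2f} kg/ha, washed off: {removed:.2f} kg/ha")
```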

Relevance: 10.00%

Abstract:

This paper examines the role of powerful entities and coalitions in shaping international accounting standards. Specifically, the focus is on the process by which the International Accounting Standards Board (IASB) developed IFRS 6, Exploration for and Evaluation of Mineral Resources. In its Issues Paper, the IASB recommended that the successful efforts method be mandated for pre-production costs, eliminating the choice previously available between the full cost and successful efforts methods. In spite of the endorsement of this view by a majority of the constituents who responded to the Issues Paper, the final outcome changed nothing, with the choice being retained. A compelling explanation of this disparity between the visible inputs and outputs of the standard-setting process is the existence of a “black box”, in which powerful extractive industries entities and coalitions covertly influenced the IASB to secure their own ends and ensure that the status quo was maintained.

Relevance: 10.00%

Abstract:

The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm, including a modular structure, easily guaranteed stability, lower sensitivity to the eigenvalue spread of the input autocorrelation matrix, and simple quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals by computing the average tracking model of these coefficients for the stochastic gradient lattice algorithm. The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using these analytical results, we show a new property of adaptive lattice filters: the polynomial-order-reducing property, which may be used to reduce the order of the polynomial phase of input frequency modulated signals. Two examples show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that this technique yields a better probability of detection for the reduced-order phase signals than the traditional energy detector. It is also empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficient approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that this technique achieves a lower mean square error for the estimated frequencies at high signal-to-noise ratios than the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions.

The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which performs well for finite-variance input signals (such as frequency modulated signals in noise), does not converge quickly for infinite-variance stable processes, due to its use of the minimum mean-square error criterion. To deal with such problems, the minimum dispersion criterion, fractional lower order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean P-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower order moments. Simulation results show that the proposed algorithms achieve faster convergence for parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness than many other algorithms. We also discuss the effect of the impulsiveness of stable processes in generating misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated only through extensive computer simulations.
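To make the lattice recursion concrete, here is a minimal sketch of a real-valued gradient adaptive lattice predictor. The update rule is the standard stochastic-gradient form; the order, step size, coefficient clipping and synthetic test signal are illustrative assumptions rather than the thesis's exact algorithm or parameters.

```python
# Minimal gradient adaptive lattice (GAL) sketch for real-valued signals.
import numpy as np

def gal_reflection_coefficients(x, order=4, mu=0.005):
    """Adapt lattice reflection coefficients with a stochastic-gradient update."""
    k = np.zeros(order)
    b_prev = np.zeros(order + 1)          # backward errors from the previous sample
    for sample in x:
        f = np.zeros(order + 1)
        b = np.zeros(order + 1)
        f[0] = b[0] = sample
        for m in range(1, order + 1):
            f[m] = f[m - 1] + k[m - 1] * b_prev[m - 1]   # forward prediction error
            b[m] = b_prev[m - 1] + k[m - 1] * f[m - 1]   # backward prediction error
            grad = f[m] * b_prev[m - 1] + b[m] * f[m - 1]
            k[m - 1] = np.clip(k[m - 1] - mu * grad, -0.99, 0.99)  # keep stages stable
        b_prev = b
    return k

# Synthetic test: a stationary AR(2) process driven by white Gaussian noise.
rng = np.random.default_rng(0)
x = np.zeros(5000)
for n in range(2, len(x)):
    x[n] = 1.2 * x[n - 1] - 0.6 * x[n - 2] + rng.standard_normal()
print(gal_reflection_coefficients(x))     # adapted reflection coefficients
```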

Relevance: 10.00%

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on an analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image, according to the mean square error criterion as well as human visual sensitivities. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed, based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without needing to project them onto the lattice outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.

For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification, so enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
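As a rough illustration of modelling wavelet coefficients with a generalized Gaussian distribution, the sketch below estimates the shape parameter by moment matching. Note that this is a standard textbook estimator, not the least-squares formulation developed in the thesis, and the test data are synthetic.

```python
# Hedged sketch: fit a generalized Gaussian (GGD) shape parameter by moment
# matching on the ratio E|x| / sqrt(E[x^2]).
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def ggd_shape_from_moments(coeffs):
    """Estimate the GGD shape parameter beta from sample moments."""
    c = np.asarray(coeffs, dtype=float)
    r = np.mean(np.abs(c)) / np.sqrt(np.mean(c ** 2))
    def ratio_gap(b):
        return gamma(2.0 / b) / np.sqrt(gamma(1.0 / b) * gamma(3.0 / b)) - r
    return brentq(ratio_gap, 0.1, 10.0)   # beta = 2 recovers the Gaussian case

# Example on synthetic Laplacian-like data (true shape parameter = 1).
rng = np.random.default_rng(1)
coeffs = rng.laplace(scale=1.0, size=50_000)
print(f"Estimated shape parameter: {ggd_shape_from_moments(coeffs):.2f}")
```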

Relevance: 10.00%

Abstract:

Physical infrastructure assets are important components of our society and our economy. They are usually designed to last for many years, are expected to be heavily used during their lifetime, carry considerable load, and are exposed to the natural environment. They are also normally major structures, and therefore represent a heavy investment, requiring constant management over their life cycle to ensure that they perform as required by their owners and users. Given a complex and varied infrastructure life cycle, constraints on available resources, and continuing requirements for effectiveness and efficiency, good management of infrastructure is important. While there is often no single best management approach, the choice of options is improved by better identification and analysis of the issues, by the ability to prioritise objectives, and by a scientific approach to the analysis process. The abilities to better understand the effect of inputs in the infrastructure life cycle on results, to minimise uncertainty, and to better evaluate the effect of decisions in a complex environment are important in allocating scarce resources and making sound decisions. Through the development of an infrastructure management modelling and analysis methodology, this thesis provides a process that assists the infrastructure manager in the analysis, prioritisation and decision making process. This is achieved through the use of practical, relatively simple tools, integrated in a modular, flexible framework that aims to provide an understanding of the interactions and issues in the infrastructure management process. The methodology uses a combination of flowcharting and analysis techniques. It first charts the infrastructure management process and its underlying infrastructure life cycle through the time interaction diagram, a graphical flowcharting methodology that is an extension of methodologies for modelling data flows in information systems. This process divides the infrastructure management process over time into self-contained modules based on particular sets of activities, with the information flows between them defined by their interfaces and relationships. The modular approach also permits more detailed analysis, or aggregation, as the case may be. It also forms the basis of extending the infrastructure modelling and analysis process to infrastructure networks, by using individual infrastructure assets and their related projects as the basis of the network analysis process. It is recognised that the infrastructure manager is required to meet, and balance, a number of different objectives, and therefore a number of high-level outcome goals for the infrastructure management process have been developed, based on common purpose or measurement scales. These goals form the basis of classifying the larger set of multiple objectives for analysis purposes. A two-stage approach that rationalises and then weights objectives, using a paired comparison process, ensures that the objectives to be met are kept to the minimum number required and are fairly weighted. Qualitative variables are incorporated into the weighting and scoring process, with utility functions proposed where risk or a trade-off situation applies. Variability is considered important in the infrastructure life cycle; the approach used is based on analytical principles but incorporates randomness in variables where required.

The modular design of the process permits alternative processes to be used within particular modules, if this is considered a more appropriate way of analysis, provided the boundary conditions and requirements for linkages to other modules are met. Development and use of the methodology has highlighted a number of infrastructure life cycle issues, including data and information aspects, the consequences of change over the life cycle, and variability and the other matters discussed above. It has also highlighted the requirement to use judgment where required, and for organisations that own and manage infrastructure to retain intellectual knowledge regarding that infrastructure. It is considered that the methodology discussed in this thesis, which to the author's knowledge has not been developed elsewhere, may be used for the analysis of alternatives, planning, prioritisation of a number of projects, and identification of the principal issues in the infrastructure life cycle.
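A paired comparison process of the kind mentioned above can be sketched very simply; the objectives, judgments and row-sum weighting below are invented placeholders and not the thesis's two-stage procedure.

```python
# Illustrative sketch: derive objective weights from a paired-comparison matrix.
import numpy as np

objectives = ["service level", "whole-of-life cost", "safety", "environment"]

# comparison[i, j] = 1 if objective i is preferred to j, 0.5 if equal, 0 otherwise.
comparison = np.array([
    [0.5, 1.0, 0.0, 1.0],
    [0.0, 0.5, 0.0, 0.5],
    [1.0, 1.0, 0.5, 1.0],
    [0.0, 0.5, 0.0, 0.5],
])

scores = comparison.sum(axis=1)      # total preference score per objective
weights = scores / scores.sum()      # normalise to weights summing to 1
for name, w in zip(objectives, weights):
    print(f"{name:>20}: {w:.2f}")
```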

Relevance: 10.00%

Abstract:

Although current assessments of the effects of agricultural management practices on soil organic C (SOC) dynamics are usually conducted without any explicit consideration of limits to soil C storage, it has been hypothesized that the SOC pool has an upper, or saturation, limit with respect to C input levels at steady state. Agricultural management practices that increase C input levels over time produce a new equilibrium soil C content. However, multiple C input level treatments that produce no increase in SOC stocks at equilibrium show that soils have become saturated with respect to C inputs. SOC storage of added C input is a function of how far a soil is from its saturation level (the saturation deficit) as well as the C input level. We tested experimentally whether C saturation deficit and varying C input levels influenced soil C stabilization of added 13C in soils varying in SOC content and physicochemical characteristics. We incubated soil samples from seven agricultural sites for 2.5 years; the samples were closer to (i.e., A-horizon) or further from (i.e., C-horizon) their C saturation limit. At the initiation of the incubations, samples received low or high C input levels of 13C-labeled wheat straw. We also tested the effect of Ca addition and residue quality on a subset of these soils. We hypothesized that the proportion of C stabilized would be greater in samples with larger C saturation deficits (i.e., the C- versus A-horizon samples) and that the relative stabilization efficiency (i.e., ΔSOC/ΔC input) would decrease as C input level increased. We found that C saturation deficit influenced the stabilization of added residue at six of the seven sites and that C addition level affected the stabilization of added residue at four sites, corroborating both hypotheses. Increasing Ca availability or decreasing residue quality had no effect on the stabilization of added residue. The amount of new C stabilized was significantly related to C saturation deficit, supporting the hypothesis that C saturation influenced C stabilization at all our sites. Our results suggest that soils with low C contents and degraded lands may have the greatest potential and efficiency to store added C because they are further from their saturation level.
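A minimal one-pool sketch of the saturation idea, in which the stabilization efficiency of new inputs scales with the saturation deficit, may help fix the concept; all parameter values below are invented for illustration and do not come from this study.

```python
# One-pool sketch of saturation-limited soil C accumulation (illustrative only).
def soil_c_trajectory(c0, c_max, input_rate, efficiency, decay, years):
    """Yearly soil C stocks when stabilization scales with the deficit (1 - C/C_max)."""
    c, stocks = c0, [c0]
    for _ in range(years):
        stabilized = input_rate * efficiency * (1.0 - c / c_max)  # deficit-scaled gain
        c = c + stabilized - decay * c                            # first-order loss
        stocks.append(c)
    return stocks

# A-horizon-like soil (close to saturation) vs C-horizon-like soil (far from it).
near = soil_c_trajectory(c0=45.0, c_max=50.0, input_rate=4.0, efficiency=0.3, decay=0.001, years=20)
far = soil_c_trajectory(c0=5.0, c_max=50.0, input_rate=4.0, efficiency=0.3, decay=0.001, years=20)
print(f"Gain near saturation: {near[-1] - near[0]:.1f}, far from saturation: {far[-1] - far[0]:.1f}")
```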

Relevance: 10.00%

Abstract:

Agricultural management affects soil organic matter, which is important for sustainable crop production and as a greenhouse gas sink. Our objective was to determine how tillage, residue management and N fertilization affect organic C in unprotected, and physically, chemically and biochemically protected, soil C pools. Samples from Breton, Alberta, were fractionated and analysed for organic C content. As in previous reports, N fertilization had a positive effect, tillage had a minimal effect, and straw management had no effect on whole-soil organic C. Tillage and straw management did not alter organic C concentrations in the isolated C pools, while N fertilization increased C concentrations in all pools. Compared with a woodlot soil, the cultivated plots had lower total organic C, and the C was redistributed among the isolated pools. The free light fraction and coarse particulate organic matter responded positively to C inputs, suggesting that much of the accumulated organic C occurred in an unprotected pool. The easily dispersed silt-sized fraction was the mineral-associated pool most responsive to changes in C inputs, whereas the microaggregate-derived silt-sized fraction best preserved C upon cultivation. These findings suggest that the silt-sized fraction is important for the long-term stabilization of organic matter, through both physical occlusion in microaggregates and chemical protection by mineral association.

Relevance: 10.00%

Abstract:

Previous research on the protection of soil organic C from decomposition suggests that soil texture affects soil C stocks. However, different pools of soil organic matter (SOM) might be differently related to soil texture. Our objective was to examine how soil texture differentially alters the distribution of organic C within physically and chemically defined pools of unprotected and protected SOM. We collected samples from two soil texture gradients where other variables influencing soil organic C content were held constant. One texture gradient (16-60% clay) was located near Stewart Valley, Saskatchewan, Canada, and the other (25-50% clay) near Cygnet, Ohio. Soils were physically fractionated into coarse- and fine-particulate organic matter (POM), silt- and clay-sized particles within microaggregates, and easily dispersed silt- and clay-sized particles outside of microaggregates. Whole-soil organic C concentration was positively related to silt plus clay content at both sites. We found no relationship between soil texture and unprotected C (coarse- and fine-POM C). Biochemically protected C (nonhydrolyzable C) increased with increasing clay content in whole-soil samples, but the proportion of nonhydrolyzable C within the silt- and clay-sized fractions was unchanged. As the amount of silt or clay increased, the amount of C stabilized within the easily dispersed and microaggregate-associated silt or clay fractions decreased. Our results suggest that, for a given level of C inputs, the relationship between mineral surface area and soil organic matter varies with soil texture for physically and biochemically protected C fractions. Because soil texture acts directly and indirectly on various protection mechanisms, it may not be a universal predictor of whole-soil C content.

Relevance: 10.00%

Abstract:

The relationship between soil structure and the ability of soil to stabilize soil organic matter (SOM) is a key element in soil C dynamics that has either been overlooked or treated in a cursory fashion when developing SOM models. The purpose of this paper is to review current knowledge of SOM dynamics within the framework of a newly proposed soil C saturation concept. Initially, we distinguish SOM that is protected against decomposition by various mechanisms from that which is not protected from decomposition. Methods of quantification and characteristics of three SOM pools defined as protected are discussed. Soil organic matter can be: (1) physically stabilized, or protected from decomposition, through microaggregation; (2) protected through intimate association with silt and clay particles; or (3) biochemically stabilized through the formation of recalcitrant SOM compounds. In addition to the behavior of each SOM pool, we discuss the implications of changes in land management for the processes by which SOM compounds undergo protection and release. The characteristics and responses to changes in land use or land management are described for the light fraction (LF) and particulate organic matter (POM). We define the LF and the POM not occluded within microaggregates (53-250 μm aggregates) as unprotected. Our conclusions are illustrated in a new conceptual SOM model that differs from most SOM models in that its state variables are measurable SOM pools. We suggest that physicochemical characteristics inherent to soils define the maximum protective capacity of these pools, which limits increases in SOM (i.e. C sequestration) with increased organic residue inputs.

Relevance: 10.00%

Abstract:

Previous research suggests that soil organic C pools may be a feature of semiarid regions that is particularly sensitive to climatic changes. We instituted an 18-mo experiment along an elevation gradient in northern Arizona to evaluate the influence of temperature, moisture, and soil C pool size on soil respiration. Soils from underneath different tree canopy types and interspaces of three semiarid ecosystems were moved upslope and/or downslope to modify soil climate. Soils moved downslope experienced increased temperature and decreased precipitation, resulting in decreased soil moisture and soil respiration (by as much as 23 and 20%, respectively). Soils moved upslope to more mesic, cooler sites had greater soil water content and increased rates of soil respiration (by as much as 40%), despite decreased temperature. Soil respiration rates normalized for total C were not significantly different within any of the three incubation sites, indicating that under identical climatic conditions, soil respiration is directly related to soil C pool size for the incubated soils. Normalized soil respiration rates between sites differed significantly for all soil types and were always greater for soils incubated under more mesic, but cooler, conditions. Total soil C did not change significantly during the experiment, but estimates suggest that significant portions of the rapidly cycling C pool were lost. While long-term decreases in aboveground and belowground detrital inputs may ultimately be greater than the decrease in soil respiration, the initial response to increased temperature and decreased precipitation in these systems is a decrease in annual soil C efflux.

Relevance: 10.00%

Abstract:

Power system stabilizers (PSS) work well at the particular network configuration and steady-state conditions for which they were designed; once conditions change, their performance degrades. This can be overcome by an intelligent nonlinear PSS based on fuzzy logic. Such a fuzzy logic power system stabilizer (FLPSS) is developed, using speed and power deviation as inputs, and provides an auxiliary signal for the excitation system of a synchronous motor in a multimachine power system environment. The FLPSS's effect on system damping is then compared with that of a conventional power system stabilizer (CPSS). The results demonstrate improved system performance with the FLPSS and also show that the FLPSS is robust.
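The sketch below illustrates, in broad strokes, the kind of fuzzy inference an FLPSS performs: fuzzify speed and power deviations, evaluate a small rule base, and defuzzify to a crisp auxiliary signal. The membership functions, rule base and output values are invented placeholders, not those used in the paper.

```python
# Hedged FLPSS-style fuzzy inference sketch with invented rules and memberships.
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(0.0, min((x - a) / (b - a) if x <= b else (c - x) / (c - b), 1.0))

LABELS = {"neg": (-1.0, -0.5, 0.0), "zero": (-0.5, 0.0, 0.5), "pos": (0.0, 0.5, 1.0)}
OUTPUT = {"neg": -0.1, "zero": 0.0, "pos": 0.1}   # singleton output values (p.u.)

# Rule base: (speed-deviation label, power-deviation label) -> output label.
RULES = {
    ("neg", "neg"): "neg", ("neg", "zero"): "neg", ("neg", "pos"): "zero",
    ("zero", "neg"): "neg", ("zero", "zero"): "zero", ("zero", "pos"): "pos",
    ("pos", "neg"): "zero", ("pos", "zero"): "pos", ("pos", "pos"): "pos",
}

def flpss(d_speed, d_power):
    """Weighted-average (Sugeno-style) defuzzification of the fired rules."""
    num = den = 0.0
    for (s_lab, p_lab), out_lab in RULES.items():
        w = min(tri(d_speed, *LABELS[s_lab]), tri(d_power, *LABELS[p_lab]))
        num += w * OUTPUT[out_lab]
        den += w
    return num / den if den else 0.0

print(flpss(d_speed=0.3, d_power=-0.2))  # auxiliary signal for the excitation system
```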

Relevance: 10.00%

Abstract:

China has made great progress in constructing comprehensive legislative and judicial infrastructures to protect intellectual property rights (IPRs). But levels of enforcement remain low. Estimates suggest that 90% of film and music products consumed in China are ‘pirated’, and in 2009, 81% of the infringing goods seized at the US border originated from China. Despite heavy criticism over its failure to enforce IPRs, key areas of China’s creative industries, including film, mobile music, fashion and animation, are developing rapidly. This paper explores how the rapid expansion of China’s creative economy might be reconciled with conceptual approaches that view the creative industries in terms of creativity inputs and IP outputs. It argues that an evolutionary understanding of copyright’s role in creative innovation might better explain China’s experiences and provide more general insights into the nature of the creative industries and the policies most likely to promote growth in this sector of the economy.

Relevance: 10.00%

Abstract:

The economiser is a critical component for efficient operation of coal-fired power stations. It consists of a large system of water-filled tubes which extract heat from the exhaust gases. When it fails, usually due to erosion causing a leak, the entire power station must be shut down to effect repairs. Not only are such repairs highly expensive, but the overall repair costs are significantly affected by fluctuations in electricity market prices, due to revenue lost during the outage. As a result, decisions about when to repair an economiser can alter the repair costs by millions of dollars. Therefore, economiser repair decisions are critical and must be optimised. However, making optimal repair decisions is difficult because economiser leaks are a type of interactive failure. If left unfixed, a leak in a tube can cause additional leaks in adjacent tubes which will need more time to repair. In addition, when choosing repair times, one also needs to consider a number of other uncertain inputs such as future electricity market prices and demands. Although many different decision models and methodologies have been developed, an effective decision-making method specifically for economiser repairs has yet to be defined. In this paper, we describe a Decision Tree based method to meet this need. An industrial case study is presented to demonstrate the application of our method.
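A toy expected-cost comparison in the spirit of the decision-tree method described above is sketched below; the probabilities, prices, durations and repair costs are invented placeholders, not figures from the case study.

```python
# Toy decision-tree sketch: repair an economiser leak now versus deferring it.
def expected_cost(outage_hours, lost_margin_per_hour, repair_cost):
    """Total cost of an outage: direct repair cost plus lost generation revenue."""
    return repair_cost + outage_hours * lost_margin_per_hour

# Option 1: repair now, during a period of high electricity prices.
repair_now = expected_cost(outage_hours=48, lost_margin_per_hour=30_000, repair_cost=200_000)

# Option 2: defer repair to a cheaper price period, but risk the leak spreading
# to adjacent tubes (longer outage, higher repair cost) with some probability.
p_spread = 0.4
defer_ok = expected_cost(outage_hours=48, lost_margin_per_hour=15_000, repair_cost=200_000)
defer_spread = expected_cost(outage_hours=96, lost_margin_per_hour=15_000, repair_cost=350_000)
repair_later = (1 - p_spread) * defer_ok + p_spread * defer_spread

print(f"Repair now:   ${repair_now:,.0f}")
print(f"Defer repair: ${repair_later:,.0f} (expected)")
```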

Relevance: 10.00%

Abstract:

This tutorial is designed to help new users become familiar with using the Spartan-3E board. The tutorial steps through the following: writing a small program in VHDL which carries out simple combinational logic; connecting the program inputs and outputs to the switches, buttons and LEDs on the Spartan-3E board; and downloading the program to the Spartan-3E board using the Project Navigator software.