937 results for Operational Data Stores


Relevance: 30.00%

Abstract:

The problem of denoising damage indicator signals for improved operational health monitoring of systems is addressed by applying soft computing methods to design filters. Since measured data in operational settings are contaminated with noise and outliers, pattern recognition algorithms for fault detection and isolation can give false alarms. A direct approach to improving fault detection and isolation is to remove noise and outliers from the time series of measured data or damage indicators before performing fault detection and isolation. Many popular signal-processing approaches do not work well with damage indicator signals, which can contain sudden changes due to abrupt faults as well as non-Gaussian outliers. Signal-processing algorithms based on radial basis function (RBF) neural networks and weighted recursive median (WRM) filters are explored for denoising simulated time series. The RBF neural network filter is developed using a K-means clustering algorithm and is much less computationally expensive to develop than feedforward neural networks trained using backpropagation. The nonlinear, multimodal integer-programming problem of selecting optimal integer weights for the WRM filter is solved using a genetic algorithm. Numerical results are obtained for helicopter rotor structural damage indicators based on simulated frequencies. The test signals use low-order polynomial growth of the damage indicators with time to simulate gradual or incipient faults, and step changes in the signal to simulate abrupt faults; noise and outliers are added to these test signals. The WRM and RBF filters achieve noise reductions of 54-71% and 59-73%, respectively, for the test signals considered in this study. Their performance is much better than that of the moving-average FIR filter, which causes significant feature distortion and has poor outlier-removal capabilities. The results show the potential of soft computing methods for specific signal-processing applications. © 2005 Elsevier B.V. All rights reserved.
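
The WRM filter named above is essentially a running median in which each window sample is replicated according to an integer weight, with already-filtered outputs reused on the trailing side. A minimal sketch follows, with fixed example weights standing in for the genetic-algorithm-optimized ones in the paper:

```python
import numpy as np

def weighted_recursive_median(x, weights):
    """Weighted recursive median (WRM) filter sketch.

    weights: odd-length integer weights; each window sample is replicated
    by its weight before the median is taken. 'Recursive' means already
    filtered outputs feed the trailing half of the window."""
    x = np.asarray(x, dtype=float)
    half = len(weights) // 2
    y = x.copy()
    for n in range(half, len(x) - half):
        window = np.concatenate([y[n - half:n], [x[n]], x[n + 1:n + half + 1]])
        y[n] = np.median(np.repeat(window, weights))
    return y

# Example: a step change (abrupt fault) plus non-Gaussian outliers; the step
# survives filtering while the outliers are suppressed.
rng = np.random.default_rng(0)
sig = np.r_[np.zeros(20), np.ones(20)] + rng.normal(0, 0.05, 40)
sig[[5, 25]] = 3.0
clean = weighted_recursive_median(sig, weights=[1, 2, 3, 2, 1])
```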

Relevance: 30.00%

Abstract:

Evaporative fraction (EF) is a measure of the amount of available energy at the Earth's surface that is partitioned into latent heat flux. Currently operational thermal sensors on satellite platforms, such as the Moderate Resolution Imaging Spectroradiometer (MODIS), provide data only at 1000 m resolution, which constrains the spatial resolution of EF estimates. A simple model, disaggregation of evaporative fraction (DEFrac), based on the observed relationship between EF and the normalized difference vegetation index (NDVI), is proposed to spatially disaggregate EF. The DEFrac model was tested with EF estimated from the triangle method using 113 clear-sky data sets from the MODIS sensors aboard the Terra and Aqua satellites. Validation was done with the Bowen ratio energy balance method using data from four micrometeorological tower sites across varied agro-climatic zones with different land cover conditions in India. The root-mean-square error (RMSE) of EF estimated at 1000 m resolution using the triangle method was 0.09 for all four sites put together, and the RMSE of DEFrac-disaggregated EF was 0.09 at 250 m resolution. Two input-disaggregation models were also tried, with thermal data sharpened using the two thermal sharpening models DisTrad and TsHARP; the RMSE of disaggregated EF was 0.14 for both at 250 m resolution. Moreover, a spatial analysis of the disaggregation was performed using Landsat-7 Enhanced Thematic Mapper Plus (ETM+) data over four grids in India for contrasting seasons. The DEFrac model performed better than the input-disaggregation models under cropped conditions, while they were marginally similar under non-cropped conditions.
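
As described, DEFrac amounts to fitting the EF-NDVI relationship at the coarse resolution and evaluating it at the finer NDVI resolution. The sketch below assumes, purely for illustration, a linear form of that relationship rather than the paper's exact model:

```python
import numpy as np

def defrac_disaggregate(ef_1km, ndvi_1km, ndvi_250m):
    """Disaggregate evaporative fraction (EF) via its NDVI relationship:
    fit EF ~ a*NDVI + b at 1 km, then evaluate at the 250 m NDVI grid.
    The linear form is an illustrative assumption."""
    a, b = np.polyfit(np.ravel(ndvi_1km), np.ravel(ef_1km), 1)
    return np.clip(a * np.asarray(ndvi_250m) + b, 0.0, 1.0)  # EF bounded in [0, 1]
```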

Relevance: 30.00%

Abstract:

Vernacular dwellings are climate-responsive designs that adopt local materials and skills to support comfortable indoor environments under local climatic conditions. These naturally ventilated passive dwellings have enabled communities to endure even extreme climates, and the design and the physiological resilience of the inhabitants have co-evolved to be attuned to local climatic and environmental conditions. Such adaptations have perplexed modern theories of human thermal comfort, which evolved in the era of electricity and air-conditioned buildings. Vernacular building elements such as rubble walls and mud roofs have given way to burnt-brick walls and reinforced cement concrete or tin roofs. Over 60% of the Indian population is rural, so the implications of such transitions for thermal comfort and building energy are crucial to understand.

The energy use associated with a building's life cycle includes its embodied energy, operational and maintenance energy, and demolition and disposal energy. Embodied energy (EE) represents the total energy consumed in constructing the building, i.e., the embodied energy of the building materials, material transportation energy and building construction energy, with the embodied energy of building materials being the major contributor. Operational energy (OE), contributed mainly by space conditioning and lighting, depends on the climatic conditions of the region and the comfort requirements of the occupants. Traditional buildings use less energy-intensive natural materials, so their EE is low, and the transition in materials significantly affects the embodied energy of vernacular dwellings: manufactured, energy-intensive materials such as brick, cement, steel and glass contribute to high embodied energy. This paper studies the increase in EE of a dwelling attributable to a change in wall materials.

Climatic location significantly influences operational energy: buildings in regions with extreme climates require more operational energy to meet heating and cooling demands throughout the year. Traditional buildings adopt passive, non-mechanical space-conditioning techniques to overcome extreme climatic variations and hence need less operational energy. This study assesses operational energy in a traditional dwelling with regard to change in wall material and climatic location; OE has been assessed for hot-dry, warm-humid and moderate climatic zones. The choice of thermal comfort model is yet another factor that greatly influences operational energy assessment in buildings. The paper adopts two popular thermal comfort models, viz., the ASHRAE comfort standards and the TSI of Sharma and Ali, to investigate thermal comfort and the impact of these comfort models on OE assessment in traditional dwellings.

A naturally ventilated vernacular dwelling in Sugganahalli, a village close to Bangalore (India) in a warm-humid climate, is considered for the present investigation of the impact of transitions in building materials, climatic location and choice of thermal comfort model on energy in buildings. The study includes rigorous real-time monitoring of the thermal performance of the dwelling. Dynamic simulation models validated against the measured data have also been used to determine the impact of the transition from vernacular to modern material configurations. The results of the study and an appraisal of appropriate thermal comfort standards for computing operational energy are presented and discussed. © 2014 K.I. Praseeda. Published by Elsevier Ltd.
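
The embodied-energy comparison described here reduces to summing material masses times energy coefficients. The sketch below uses invented coefficients and quantities purely to show the shape of the calculation; none of the numbers come from the study:

```python
# Embodied energy (EE) of alternative wall constructions: a sketch.
# All coefficients (MJ/kg) and quantities (kg) are illustrative assumptions.
EE_COEFF_MJ_PER_KG = {"rubble": 0.1, "mud": 0.05, "burnt_brick": 3.0,
                      "cement": 5.9, "steel": 32.0}

def wall_embodied_energy(bill_of_materials):
    """bill_of_materials: dict mapping material -> mass (kg); returns MJ."""
    return sum(EE_COEFF_MJ_PER_KG[m] * kg for m, kg in bill_of_materials.items())

vernacular_wall = {"rubble": 60000.0, "mud": 8000.0}
modern_wall = {"burnt_brick": 25000.0, "cement": 3000.0, "steel": 150.0}

print(wall_embodied_energy(vernacular_wall))  # low EE: natural materials
print(wall_embodied_energy(modern_wall))      # higher EE: manufactured materials
```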

Relevance: 30.00%

Abstract:

The report describes the results of preliminary analyses of data obtained from a series of water temperature loggers sited at various distances (0.8 to 21.8 km) downstream of Kielder dam on the River North Tyne and in two natural tributaries. The report deals with three aspects of the water temperature records: an analysis of an operational aspect of the data sets for selected stations; a simple examination of the effects of impoundment upon water temperature at or close to the point of release, relative to natural river temperatures; and an examination of the rate of change of monthly means of daily mean, maximum, minimum and range (maximum minus minimum) with distance downstream of the point of release during 1983.
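
A minimal sketch of the third analysis (monthly means of daily statistics for one logger), assuming the temperature record is available as a timestamp-indexed pandas Series; the data layout is hypothetical:

```python
import pandas as pd

def monthly_daily_stats(temp: pd.Series) -> pd.DataFrame:
    """Monthly means of the daily mean, maximum, minimum and range for one
    water-temperature logger (temp: series indexed by timestamp)."""
    daily = temp.resample("D").agg(["mean", "max", "min"])
    daily["range"] = daily["max"] - daily["min"]  # daily range = max - min
    return daily.resample("MS").mean()            # monthly means of daily stats

# Repeating this per logger and plotting each statistic against distance
# downstream gives the longitudinal picture the report describes.
```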

Relevance: 30.00%

Abstract:

This thesis is an investigation into the nature of data analysis and computer software systems which support this activity.

The first chapter develops the notion of data analysis as an experimental science which has two major components: data-gathering and theory-building. The basic role of language in determining the meaningfulness of theory is stressed, and the informativeness of a language and data base pair is studied. The static and dynamic aspects of data analysis are then considered from this conceptual vantage point. The second chapter surveys the available types of computer systems which may be useful for data analysis. Particular attention is paid to the questions raised in the first chapter about the language restrictions imposed by the computer system and its dynamic properties.

The third chapter discusses the REL data analysis system, which was designed to satisfy the needs of the data analyzer in an operational relational data system. The major limitation on the use of such systems is the amount of access to data stored on a relatively slow secondary memory. This problem of the paging of data is investigated and two classes of data structure representations are found, each of which has desirable paging characteristics for certain types of queries. One representation is used by most of the generalized data base management systems in existence today, but the other is clearly preferred in the data analysis environment, as conceptualized in Chapter I.

This data representation has strong implications for a fundamental process of data analysis -- the quantification of variables. Since quantification is one of the few means of summarizing and abstracting, data analysis systems are under strong pressure to facilitate the process. Two implementations of quantification are studied: one analogous to the form of the lower predicate calculus and another more closely attuned to the data representation. A comparison of these indicates that the use of the "label class" method results in an orders-of-magnitude improvement over the lower predicate calculus technique.

Relevance: 30.00%

Abstract:

Large numbers of fishing vessels operating from ports in Latin America participate in surface longline fisheries in the eastern Pacific Ocean (EPO), and several species of sea turtles inhabit the grounds where these fleets operate. The endangered status of several sea turtle species, and the success of circle hooks (‘treatment’ hooks) in reducing turtle hookings in other ocean areas compared with J-hooks and Japanese-style tuna hooks (‘control’ hooks), prompted the initiation of a hook exchange program on the west coast of Latin America, the Eastern Pacific Regional Sea Turtle Program (EPRSTP). One of the goals of the EPRSTP is to determine whether circle hooks would be effective at reducing turtle bycatch in artisanal fisheries of the EPO without significantly reducing the catch of marketable fish species. Participating fishers were provided with circle hooks at no cost and asked to replace the J/Japanese-style tuna hooks on their longlines with circle hooks in an alternating manner. Data collected by the EPRSTP show differences in longline gear and operational characteristics within and among countries. These aspects of the data, in addition to difficulties encountered with implementation of the alternating-hook design, pose challenges for the analysis of these data.

Relevance: 30.00%

Abstract:

Thus far, most studies of the operational energy use of buildings fail to take a longitudinal view; in other words, they do not take into account how operational energy use changes during the lifetime of a building. Such a view is important, however, when predicting the impact of climate change, or for long-term energy accounting purposes. This article presents an approach to delivering a longitudinal prediction of operational energy use. The work is based on a review of deterioration in thermal performance, building maintenance effects, and future climate change. The key issues are to estimate the service life expectancy and thermal performance degradation of building components while accounting for building maintenance and changing weather conditions at the same time. Two examples are presented to demonstrate the application of the deterministic and stochastic approaches, respectively. The work concludes that longitudinal prediction of operational energy use is feasible, but the prediction will depend largely on the availability of extensive and reliable monitoring data, a premise not met in most current buildings. © 2011 Elsevier Ltd.
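
As a rough illustration of the deterministic variant, annual operational energy can be scaled by a thermal-performance degradation factor that maintenance periodically resets, with an optional climate trend on top. Every number below is an invented placeholder, not a figure from the article:

```python
# Longitudinal operational-energy prediction: deterministic sketch.
# Baseline demand, degradation rate and service life are assumed placeholders.
BASELINE_KWH = 12000.0        # annual operational energy when new
DEGRADATION_PER_YEAR = 0.01   # fractional performance loss per year of component age
SERVICE_LIFE_YEARS = 20       # maintenance/replacement interval

def lifetime_energy(years, climate_factor=lambda y: 1.0):
    """Annual energy use over `years`; components degrade linearly between
    maintenance events, and climate_factor(y) scales demand in year y."""
    profile = []
    for y in range(years):
        age = y % SERVICE_LIFE_YEARS          # maintenance resets degradation
        penalty = 1.0 + DEGRADATION_PER_YEAR * age
        profile.append(BASELINE_KWH * penalty * climate_factor(y))
    return profile

# Example: a warming trend that raises demand by 0.2% per year.
energy = lifetime_energy(60, climate_factor=lambda y: 1.0 + 0.002 * y)
```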

Relevance: 30.00%

Abstract:

In the Climate Change Act of 2008 the UK Government pledged to reduce carbon emissions by 80% by 2050. As one step towards this, regulations are being introduced requiring all new buildings to be ‘zero carbon’ by 2019, defined as buildings which emit net zero carbon during their operational lifetime. However, in order to meet the 80% target it is necessary to reduce the carbon emitted during the whole life cycle of buildings, including that emitted during the processes of construction; these elements make up the ‘embodied carbon’ of the building. While there are no regulations yet in place to restrict embodied carbon, a number of different approaches have been taken. There are several existing databases of embodied carbon and embodied energy, most of which provide data for material extraction and manufacturing only, the ‘cradle to factory gate’ phase. In addition to the databases, various software tools have been developed to calculate the embodied energy and carbon of individual buildings. A third source of data is the research literature, in which individual life-cycle analyses of buildings are reported. This paper provides a comprehensive review, comparing and assessing data sources, boundaries and methodologies. The paper concludes that the wide variations in these aspects produce incomparable results. It highlights the areas where existing data are reliable, and where new data and more precise methods are needed. This comprehensive review will guide the future development of a consistent and transparent database and software tool to calculate the embodied energy and carbon of buildings.

Relevance: 30.00%

Abstract:

Reducing energy consumption is a major challenge for energy-intensive industries such as papermaking. A commercially viable energy-saving solution is to employ data-based optimization techniques to obtain a set of optimized operational settings that satisfy certain performance indices. The difficulties are that: 1) problems of this type are inherently multicriteria, in the sense that improving one performance index might compromise other important measures; 2) practical systems often exhibit unknown complex dynamics and several interconnections, which make the modeling task difficult; and 3) as the models are acquired from existing historical data, they are valid only locally, and extrapolation incorporates the risk of increasing process variability. To overcome these difficulties, this paper presents a new decision support system for robust multiobjective optimization of interconnected processes. The plant is first divided into serially connected units to model the process, product quality, energy consumption, and corresponding uncertainty measures. A multiobjective gradient descent algorithm is then used to solve the problem in line with the user's preference information. Finally, the optimization results are visualized for analysis and decision making. In practice, if further iterations of the optimization algorithm are considered, the validity of the local models must be checked before proceeding. The method is implemented in DataExplorer, a MATLAB-based interactive tool supporting a range of data analysis, modeling, and multiobjective optimization techniques. The proposed approach was tested in two U.K.-based commercial paper mills, where the aim was to reduce steam consumption and increase productivity while maintaining product quality by optimizing vacuum pressures in the forming and press sections. The experimental results demonstrate the effectiveness of the method. © 2006 IEEE.
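
The core optimization step, preference-weighted multiobjective gradient descent, can be sketched as below. The toy objectives standing in for steam consumption and quality deviation, the preference weights, and the step size are illustrative assumptions, not the paper's models:

```python
import numpy as np

def multiobjective_descent(grads, x0, prefs, lr=0.05, steps=200):
    """Descend a preference-weighted sum of objective gradients.
    grads: list of callables, each returning one objective's gradient
    prefs: user preference weights, one per objective (summing to 1)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = sum(w * grad(x) for w, grad in zip(prefs, grads))
        x -= lr * g
    return x

# Toy stand-ins for 'steam consumption' and 'quality deviation' objectives
# as functions of two vacuum pressures (purely illustrative).
steam_grad   = lambda x: 2 * (x - np.array([3.0, 5.0]))
quality_grad = lambda x: 2 * (x - np.array([4.0, 4.5]))

x_opt = multiobjective_descent([steam_grad, quality_grad],
                               x0=[0.0, 0.0], prefs=[0.7, 0.3])
```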

Relevance: 30.00%

Abstract:

This paper proposes a method for analysing the operational complexity in supply chains using an entropic measure based on information theory. The proposed approach estimates the operational complexity at each stage of the supply chain and analyses the changes between stages; here a stage is identified by an exchange of data and/or material. Through this analysis the method identifies the stages where operational complexity is generated and propagated (exported, imported, generated or absorbed). Central to the method is the identification of a reference point within the supply chain, where the operational complexity is at a local minimum along the data transfer stages. Such a point can be thought of as a 'sink' for turbulence generated in the supply chain; where it exists, it has the merit of stabilising the supply chain by attenuating uncertainty. However, the location of the reference point is also a matter of choice: if the preferred location is other than the current one, this is a trigger for management action, and the analysis can help decide appropriate remedial action. More generally, the approach can assist logistics management by highlighting problem areas. An industrial application is presented to demonstrate the applicability of the method. © 2013 Operational Research Society Ltd. All rights reserved.
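
An entropic measure of this kind is, in spirit, the Shannon entropy of the states observed at each stage. A minimal sketch, in which the state definitions and data are illustrative rather than taken from the paper:

```python
import math
from collections import Counter

def operational_complexity(observations):
    """Shannon entropy (bits) of the observed states at one supply-chain stage.
    observations: sequence of discrete states, e.g. ('on_time', 'late', ...)."""
    counts = Counter(observations)
    n = len(observations)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Complexity change between consecutive stages: a positive delta suggests the
# stage generates or imports complexity, a negative delta that it absorbs it.
supplier = ["on_time"] * 18 + ["late"] * 2
assembly = ["on_time"] * 12 + ["late"] * 6 + ["defect"] * 2
delta = operational_complexity(assembly) - operational_complexity(supplier)
```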

Relevance: 30.00%

Abstract:

Most research on technology roadmapping has focused on its practical applications and on developing methods to enhance its operational process. Thus, despite a demand for well-supported, systematic information, little attention has been paid to how, and which, information can be utilised in technology roadmapping. This paper therefore proposes a methodology for structuring technological information in order to facilitate the process. To this end, eight methods are suggested to provide useful information for technology roadmapping: summary, information extraction, clustering, mapping, navigation, linking, indicators and comparison. The research identifies the characteristics of significant data that can potentially be used in roadmapping, and presents an approach to extracting important information from such raw data through various data mining techniques, including text mining, multi-dimensional scaling and K-means clustering. In addition, the paper explains how this approach can be applied in each step of roadmapping. The proposed approach is applied to develop a roadmap of radio-frequency identification (RFID) technology to illustrate the process in practice. © 2013 Taylor & Francis.
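
For the text mining and clustering steps, a typical pipeline vectorizes document text and groups it. A minimal sketch with scikit-learn, in which the corpus and cluster count are placeholders rather than the paper's data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "RFID tag antenna design for UHF readers",
    "anti-collision protocols for RFID tag populations",
    "supply chain visibility with RFID item tracking",
    "RFID middleware and event filtering architectures",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)  # text mining step
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tfidf)
# Each cluster groups related documents, a candidate 'theme' for the roadmap.
```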

Relevance: 30.00%

Abstract:

The importance of properly exploiting a classifier's inherent geometric characteristics when developing a classification methodology is emphasized as a prerequisite to achieving near-optimal performance in thematic mapping. When used properly, it is argued, the long-standing maximum likelihood approach and the more recent support vector machine perform comparably: both are flexible enough to segment the spectral domain so as to match the inherent class separations in the data, as are most reasonable classifiers. The choice of classifier in practice is therefore determined largely by preference and related considerations, such as ease of training, multiclass capability, and classification cost. © 1980-2012 IEEE.
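
The claimed comparability is straightforward to probe empirically: Gaussian maximum likelihood classification corresponds to quadratic discriminant analysis, and both it and an SVM can be fitted to the same spectral samples. A minimal sketch on synthetic data (not the article's imagery):

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic stand-in for multispectral pixel samples with class structure.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=4,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

ml = QuadraticDiscriminantAnalysis().fit(Xtr, ytr)  # Gaussian maximum likelihood
svm = SVC(kernel="rbf").fit(Xtr, ytr)               # support vector machine

print("ML :", ml.score(Xte, yte))
print("SVM:", svm.score(Xte, yte))
```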

Relevance: 30.00%

Abstract:

The VEGETATION (VGT) sensor aboard SPOT 4 has four spectral bands equivalent to Landsat Thematic Mapper (TM) bands (blue, red, near-infrared and mid-infrared) and provides daily images of the global land surface at a 1-km spatial resolution. We propose a new index for identifying and mapping snow/ice cover, the Normalized Difference Snow/Ice Index (NDSII), which uses the reflectance values of the red and mid-infrared spectral bands of Landsat TM and VGT. For Landsat TM data, NDSII is calculated as NDSII_TM = (TM3 - TM5)/(TM3 + TM5); for VGT data, it is calculated as NDSII_VGT = (B2 - MIR)/(B2 + MIR). As a case study we used a Landsat TM image covering the eastern part of the Qilian mountain range in the Qinghai-Xizang (Tibetan) plateau of China. NDSII_TM gave estimates of the area and spatial distribution of snow/ice cover similar to those of the Normalized Difference Snow Index (NDSI = (TM2 - TM5)/(TM2 + TM5)) proposed by Hall et al. The results indicate that the VGT sensor may have the potential for operational monitoring and mapping of snow/ice cover from regional to global scales when NDSII_VGT is used.
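
The index is a single band ratio and is direct to compute. A minimal sketch, assuming reflectance arrays for the red and mid-infrared bands:

```python
import numpy as np

def ndsii(red, mir):
    """Normalized Difference Snow/Ice Index: (red - mir) / (red + mir).
    For Landsat TM pass bands TM3 and TM5; for VGT pass B2 and MIR."""
    red = np.asarray(red, dtype=float)
    mir = np.asarray(mir, dtype=float)
    return (red - mir) / (red + mir + 1e-12)  # epsilon guards divide-by-zero

# Snow/ice is bright in the red band and dark in the mid-infrared, so it
# scores high; the 0.4 threshold here is an assumption, not from the paper.
snow_mask = ndsii([[0.55, 0.20]], [[0.08, 0.30]]) > 0.4
```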