3 results for Data handling

in CORA - Cork Open Research Archive - University College Cork - Ireland


Relevance: 30.00%

Abstract:

Buildings account for between 20% and 40% of total energy consumption in developed countries. Heating, Ventilation and Air Conditioning (HVAC), and more specifically Air Handling Unit (AHU), energy consumption accounts on average for 40% of a typical medical device manufacturing or pharmaceutical facility’s energy consumption. Studies have indicated that energy savings of 20–30% are achievable by recommissioning HVAC systems, and more specifically AHU operations, to rectify faulty operation. Automated Fault Detection and Diagnosis (AFDD) is a process concerned with partially or fully automating this commissioning process through the detection of faults. An expert system is a knowledge-based system that employs Artificial Intelligence (AI) methods to replicate the knowledge of a human subject matter expert in a particular field, such as engineering, medicine, finance or marketing. This thesis details the research and development work undertaken in developing and testing a new AFDD expert system for AHUs that can be installed with minimal set-up time on a large cross-section of AHU types, in a manner that is neutral with respect to the building management system vendor. Both simulated testing and extensive field testing were undertaken against a widely available, industry-known expert ruleset, the Air Handling Unit Performance Assessment Rules (APAR), and a later, more developed version known as APAR_extended, in order to prove the new system’s effectiveness. Specifically, in tests against a dataset of 52 simulated faults, the new AFDD expert system identified all 52 derived issues, whereas the APAR ruleset identified just 10. In tests using actual field data from 5 operating AHUs in 4 manufacturing facilities, the newly developed AFDD expert system was shown to identify four individual fault case categories that the APAR method did not, as well as demonstrating improvements in fault diagnosis.
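
As an illustration of the kind of rule-based check a ruleset such as APAR embodies, the Python sketch below evaluates one heating-mode condition of the same general form (supply air should leave the unit warmer than the mixed air entering the coil, allowing for sensor error). All names, fields and thresholds here are illustrative assumptions, not the thesis's expert system or APAR's actual rules.

```python
# Illustrative sketch of an APAR-style rule check (hypothetical names and
# thresholds): in heating mode, supply air should leave the AHU warmer than
# the mixed air entering the heating coil, allowing for sensor error.

from dataclasses import dataclass

@dataclass
class AhuSample:
    mode: str                 # operating mode, e.g. "heating", "cooling"
    t_mixed_air: float        # mixed air temperature (deg C)
    t_supply_air: float       # supply air temperature (deg C)
    t_supply_fan_rise: float  # assumed temperature rise across supply fan (deg C)

SENSOR_TOLERANCE = 1.0  # assumed sensor uncertainty (deg C)

def heating_coil_rule(sample: AhuSample) -> bool:
    """Return True if the sample violates the heating-mode rule (supply air
    not warmer than mixed air plus fan heat), flagging a possible heating
    coil or valve fault."""
    if sample.mode != "heating":
        return False
    expected_min = sample.t_mixed_air + sample.t_supply_fan_rise - SENSOR_TOLERANCE
    return sample.t_supply_air < expected_min

# Example: supply air colder than mixed air while heating -> fault flagged.
sample = AhuSample(mode="heating", t_mixed_air=18.0,
                   t_supply_air=16.5, t_supply_fan_rise=1.0)
print(heating_coil_rule(sample))  # True
```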

Relevance: 30.00%

Abstract:

It is estimated that the quantity of digital data being transferred, processed or stored at any one time currently stands at 4.4 zettabytes (4.4 × 2⁷⁰ bytes), and this figure is expected to grow by a factor of 10, to 44 zettabytes, by 2020. Exploiting this data is, and will remain, a significant challenge. At present there is the capacity to store 33% of the digital data in existence at any one time; by 2020 this capacity is expected to fall to 15%. These statistics suggest that, in the era of Big Data, the identification of important, exploitable data will need to be done in a timely manner. Systems for the monitoring and analysis of data, e.g. stock markets, smart grids and sensor networks, can be made up of massive numbers of individual components. These components can be geographically distributed yet may interact with one another via continuous data streams, which in turn may affect the state of the sender or receiver. This introduces a dynamic causality, which further complicates the overall system by introducing a temporal constraint that is difficult to accommodate. Practical approaches to realising such systems have led to a multiplicity of analysis techniques, each of which concentrates on specific characteristics of the system being analysed and treats those characteristics as the dominant component affecting the results being sought. This multiplicity of analysis techniques introduces another layer of heterogeneity, that is, heterogeneity of approach, partitioning the field to the extent that results from one domain are difficult to exploit in another. The question asked is whether a generic solution for the monitoring and analysis of data can be identified that accommodates temporal constraints, bridges the gap between expert knowledge and raw data, and enables data to be effectively interpreted and exploited in a transparent manner. The approach proposed in this dissertation acquires, analyses and processes data in a manner that is free of the constraints of any particular analysis technique, while at the same time facilitating these techniques where appropriate. Constraints are applied by defining a workflow based on the production, interpretation and consumption of data. This supports the application of different analysis techniques to the same raw data without the danger of incorporating hidden bias. To illustrate and realise this approach, a software platform was created that allows for the transparent analysis of data, combining analysis techniques with a maintainable record of provenance so that independent third-party analysis can be applied to verify any derived conclusions. To demonstrate these concepts, a complex real-world example involving the near-real-time capture and analysis of neurophysiological data from a neonatal intensive care unit (NICU) was chosen. A system was engineered to gather raw data, analyse that data using different analysis techniques, uncover information, incorporate that information into the system and curate the evolution of the discovered knowledge. The application domain was chosen for three reasons: firstly, because it is complex and no comprehensive solution exists; secondly, because it requires tight interaction with domain experts, and thus the handling of subjective knowledge and inference; and thirdly, because, given the dearth of neurophysiologists, there is a real-world need for a solution in this domain.
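
A minimal sketch of the production/interpretation/consumption workflow with a maintainable provenance record is given below. The Record structure, step names and data are hypothetical illustrations of the idea that every derived value carries an auditable trail back to the raw data; this is not the actual platform built in the dissertation.

```python
# Illustrative sketch: each derived value records the inputs and analysis
# step that produced it, so a third party can audit or re-run the chain.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Record:
    value: object
    provenance: List[str] = field(default_factory=list)

def apply_step(name: str, func: Callable, *inputs: Record) -> Record:
    """Run one analysis step and append it to the provenance trail."""
    combined = [p for r in inputs for p in r.provenance]
    result = func(*(r.value for r in inputs))
    return Record(result, combined + [name])

# Raw data enters the workflow with its source recorded.
raw = Record([3.1, 2.9, 3.4, 8.7], ["source:sensor-42"])

# Two independent techniques applied to the same raw record,
# neither biasing the other.
mean = apply_step("mean", lambda xs: sum(xs) / len(xs), raw)
peak = apply_step("max", max, raw)

print(mean.value, mean.provenance)  # 4.525 ['source:sensor-42', 'mean']
print(peak.value, peak.provenance)  # 8.7 ['source:sensor-42', 'max']
```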

Relevance: 30.00%

Abstract:

A certain type of bacterial inclusion, known as a bacterial microcompartment, was recently identified and imaged through cryo-electron tomography. A 3D object reconstructed from single-axis, limited-angle tilt-series cryo-electron tomography contains missing regions, a problem known as the missing wedge problem. Because of these missing regions, analyzing the 3D structures of the reconstructed images is challenging. Existing methods overcome this problem by aligning and averaging several similarly shaped objects. These schemes work well if the objects are symmetric and several objects of nearly identical shape and size are available. Since the bacterial inclusions studied here are not symmetric, are deformed, and show a wide range of shapes and sizes, the existing approaches are not appropriate. This research develops new statistical methods for analyzing geometric properties, such as volume, symmetry, aspect ratio and polyhedral structure, of these bacterial inclusions in the presence of missing data. These methods work with deformed, non-symmetric objects of varied shape and do not require multiple objects to handle the missing wedge problem. The developed methods and contributions include: (a) an improved method for manual image segmentation; (b) a new approach to 'complete' the segmented and reconstructed incomplete 3D images; (c) a polyhedral structural distance model to predict the polyhedral shapes of these microstructures; (d) a new shape descriptor for polyhedral shapes, named the polyhedron profile statistic; and (e) Bayes, linear discriminant analysis and support vector machine classifiers for supervised classification of incomplete polyhedral shapes. Finally, the predicted 3D shapes of these bacterial microstructures belong to the Johnson solids family, and these shapes, along with their other geometric properties, are important for a better understanding of the inclusions' chemical and biological characteristics.
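
A minimal sketch, assuming scikit-learn and a synthetic descriptor matrix, of how the kinds of classifiers named in (e) might be trained and evaluated on per-object shape descriptors. The feature values and labels are hypothetical placeholders, not the thesis's data or its polyhedron profile statistic.

```python
# Illustrative sketch only: train LDA and SVM classifiers on a synthetic
# matrix of shape descriptors (e.g. volume, aspect ratio, profile entries)
# and score them with 5-fold cross-validation.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical descriptors: 30 objects per shape class, 5 descriptors each,
# with the two classes separated in feature space.
X = np.vstack([rng.normal(0.0, 1.0, size=(30, 5)),
               rng.normal(1.5, 1.0, size=(30, 5))])
y = np.array([0] * 30 + [1] * 30)

for model in (LinearDiscriminantAnalysis(), SVC(kernel="rbf")):
    scores = cross_val_score(model, X, y, cv=5)
    print(type(model).__name__, round(scores.mean(), 3))
```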