991 results for process parameter data


Relevance: 100.00%

Abstract:

Supercritical fluid extraction (SFE) from solids has proven technically feasible for almost any system; nonetheless, its economic viability has been demonstrated for only a restricted number of systems. A common practice is to compare the manufacturing costs of vegetable extracts obtained by a variety of techniques without properly considering the large differences in composition and functional properties among the various types of extract; under these circumstances, the cost of manufacturing does not favor SFE. Additionally, the influence of external parameters such as agronomic conditions and SFE system geometry is usually not considered. In the present work, these factors were studied for the fennel seeds + CO2 system. The effects of harvesting season and degree of maturation on the global yield were analyzed at 300 bar and 40 °C, and the effect of pressure on the global yield was determined at 30 and 40 °C. Kinetic experiments were performed for various ratios of bed height to bed diameter. Fennel extracts were also obtained by hydrodistillation and low-pressure solvent extraction, and the chemical composition of the extracts was determined by gas chromatography. The maximum SFE global yield (12.5%, dry basis) was obtained with dry harvested fennel seeds. Anethole and fenchone were the major constituents of the extract; the fatty acids palmitic (C16H32O2), palmitoleic (C16H30O2), stearic (C18H36O2), oleic (C18H34O2), linoleic (C18H32O2), and linolenic (C18H30O2) were also detected. A relation between the amounts of feed and solvent, bed height and diameter, and solvent flow rate was proposed. The models of Sovová, Goto et al., and Tan and Liou were all capable of describing the mass transfer kinetics.
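
For illustration, a minimal sketch of a first-order desorption model in the spirit of the Tan and Liou model cited above; the rate constant and time grid are assumed values, and only the 12.5% maximum global yield comes from the abstract.

# Minimal sketch of a first-order desorption model for SFE kinetics,
# in the spirit of the Tan and Liou model cited above. All parameter
# values are illustrative assumptions, not data from the study.
import numpy as np

def extraction_curve(t, x0=0.125, kd=0.05):
    """Cumulative extraction yield (dry basis) vs. time [min].

    x0 : initial extractable mass fraction (the 12.5% maximum global
         yield reported above)
    kd : first-order desorption rate constant [1/min] (assumed)
    """
    return x0 * (1.0 - np.exp(-kd * t))

t = np.linspace(0, 120, 25)            # extraction time grid [min]
yield_curve = extraction_curve(t)
for ti, yi in zip(t[::6], yield_curve[::6]):
    print(f"t = {ti:6.1f} min  ->  yield = {100 * yi:5.2f} %")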

Relevance: 100.00%

Abstract:

Queuing is one of the most important criteria for assessing the performance and efficiency of any service industry, including healthcare. Data Envelopment Analysis (DEA) is one of the most widely used techniques for performance measurement in healthcare; however, no queue management application has been reported in the health-related DEA literature. Most studies of patient flow systems have aimed at improving an already existing appointment system. The current study presents a novel application of DEA for assessing the queuing process at the outpatients’ department of a large public hospital in a developing country where appointment systems do not exist. Its main aim is to demonstrate the usefulness of DEA modelling in the evaluation of a queue system. The patient flow pathway considered consists of two stages: consultation with a doctor and the pharmacy. The DEA results indicated that waiting times and the other related queuing variables included need considerable minimisation at both stages.
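
As an illustration of the technique, a minimal sketch of an input-oriented CCR DEA model solved as a linear programme with scipy; the inputs (stage waiting times) and output (patients served) are hypothetical stand-ins for the study's variables.

# Minimal sketch of an input-oriented CCR DEA model as a linear
# programme. The data are hypothetical queue statistics, not figures
# from the study.
import numpy as np
from scipy.optimize import linprog

# inputs per DMU (rows): mean wait at consultation [min], at pharmacy [min]
X = np.array([[45.0, 30.0],
              [60.0, 25.0],
              [50.0, 40.0]])
# outputs per DMU (rows): patients served per day
Y = np.array([[120.0],
              [100.0],
              [140.0]])

def ccr_efficiency(o, X, Y):
    """Input-oriented CCR efficiency of DMU o (1.0 = efficient)."""
    n, m = X.shape                      # n DMUs, m inputs
    s = Y.shape[1]                      # s outputs
    c = np.zeros(1 + n)
    c[0] = 1.0                          # minimise theta
    A_ub, b_ub = [], []
    for i in range(m):                  # sum_j lam_j x_ij <= theta * x_io
        A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(s):                  # sum_j lam_j y_rj >= y_ro
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (1 + n))
    return res.fun

for o in range(X.shape[0]):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o, X, Y):.3f}")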

Relevance: 100.00%

Abstract:

Key Performance Indicators (KPIs) and their predictions are widely used by enterprises for informed decision making. Nevertheless, a very important factor, generally overlooked, is that top-level strategic KPIs are actually driven by operational-level business processes. These two domains are, however, mostly segregated and analysed in silos with different Business Intelligence solutions. In this paper, we propose an approach for advanced Business Simulations that converges the two domains by utilising process execution and business data, together with concepts from Business Dynamics (BD) and Business Ontologies, to promote better system understanding and detailed KPI predictions. Our approach incorporates the automated creation of Causal Loop Diagrams, empowering the analyst to critically examine the complex dependencies hidden in the massive amounts of available enterprise data. We have evaluated the proposed approach in the context of a retail use case that involved verification of the automatically generated causal models by a domain expert.
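
As a rough illustration only: a sketch that derives a causal-loop-style signed graph from pairwise correlations in synthetic retail data. The paper's actual approach builds on Business Dynamics and ontologies, and correlation is not causation, which is precisely why the generated models were verified by a domain expert; all variable names here are hypothetical.

# Deliberately simplified sketch: link candidates for a causal loop
# diagram are drawn between variables whose correlation exceeds a
# threshold, with the sign taken as the link polarity.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200
marketing = rng.normal(size=n)                               # synthetic
footfall = 0.8 * marketing + rng.normal(scale=0.5, size=n)   # synthetic
stockouts = -0.6 * footfall + rng.normal(scale=0.5, size=n)  # synthetic
df = pd.DataFrame({"marketing_spend": marketing,
                   "store_footfall": footfall,
                   "stockout_rate": stockouts})

corr = df.corr()
THRESHOLD = 0.4                        # assumed cut-off for a candidate link
for a in corr.columns:
    for b in corr.columns:
        if a < b and abs(corr.loc[a, b]) > THRESHOLD:
            polarity = "+" if corr.loc[a, b] > 0 else "-"
            print(f"{a} --({polarity})--> {b}")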

Relevance: 100.00%

Abstract:

This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate the hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizeable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multi-cylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a large difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations have been made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flow rates, while the second is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while uneven EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and the associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
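
As an illustration of the transport-delay processing mentioned above, a minimal sketch that estimates the delay between two transient signals by cross-correlation and re-aligns them; the signal names, sample rate, and delay are synthetic assumptions, not the study's data.

# Minimal sketch: estimate the transport delay between two transient
# signals by cross-correlation, then re-align them.
import numpy as np

def estimate_delay(reference, delayed):
    """Return the lag (in samples) that best aligns `delayed` to `reference`."""
    ref = reference - reference.mean()
    dly = delayed - delayed.mean()
    xcorr = np.correlate(dly, ref, mode="full")
    return np.argmax(xcorr) - (len(ref) - 1)

fs = 100                                   # sample rate [Hz], assumed
t = np.arange(0, 5, 1 / fs)
cmd = np.sin(2 * np.pi * 0.5 * t)          # e.g., a commanded actuator position
meas = np.roll(cmd, 30) + 0.05 * np.random.default_rng(1).normal(size=t.size)

lag = estimate_delay(cmd, meas)
print(f"estimated transport delay: {lag} samples = {lag / fs:.2f} s")
aligned = np.roll(meas, -lag)              # shift the measurement back in time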

Relevance: 100.00%

Abstract:

In this work we present a simulation of a recognition process for simple plant leaves that uses perimeter characterization as the sole discriminating parameter. Data coding that makes the description independent of leaf size and orientation may penalize recognition performance for some varieties. Border description sequences are therefore used, and Principal Component Analysis (PCA) is applied in order to determine the best number of components for the classification task, which is implemented by means of a Support Vector Machine (SVM). The results obtained are satisfactory and, compared with [4], our system improves recognition success while diminishing the variance.
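
A minimal sketch of the PCA + SVM pipeline described above, assuming scikit-learn; the border-description features and variety labels are synthetic placeholders, and the component count is swept as in the study.

# Minimal sketch: PCA on border-description feature vectors followed
# by an SVM classifier, with the number of components swept.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=120)                    # 3 leaf varieties (synthetic)
X = rng.normal(size=(120, 64)) + 0.5 * y[:, None]   # 64-point border codes (synthetic)

for n_components in (5, 10, 20):
    clf = make_pipeline(StandardScaler(), PCA(n_components=n_components),
                        SVC(kernel="rbf"))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{n_components:2d} components: accuracy = {scores.mean():.2f}")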

Relevance: 100.00%

Abstract:

This thesis was done as part of the FuncMama project, a collaboration between the Technical Research Centre of Finland (VTT), the University of Oulu (OY), Lappeenranta University of Technology (LUT) and Finnish industrial partners. The main goal of the project is to manufacture electrical and mechanical components from mixed materials using laser sintering. The aim of this study was to produce laser-sintered pieces from ceramic material and to monitor the sintering event with a spectrometer, a device capable of recording the intensity of different wavelengths as a function of time. The laser sintering process was monitored with equipment consisting of an Ocean Optics spectrometer, an optical fibre and an optical lens (detector head). Light from the sintering process first hits the lens system, which guides it into the optical fibre; the fibre transmits the light to the spectrometer, where the intensity of each wavelength is detected. The optical lens was rigidly mounted and did not move along with the laser beam. The data collected from the process were exported to an Excel spreadsheet for evaluation. The laser used was an IPG Photonics pulsed fibre laser. The laser parameters were kept mostly constant during the experimental part; only the sintering speed was changed, so that differences in the monitoring results could be identified without too many parameters varying together and confounding the conclusions. The sintered parts had one layer and a size of 5 x 5 mm. The material was CT2000 tape manufactured by Heraeus, post-processed into powder. Monitoring at different sintering speeds was tested using the CT2000 reference powder. In addition, the effect of different materials on process monitoring was tested by adding a foreign powder, Du Pont 951, which had degraded during re-grinding and was more reactive than CT2000. Adding a foreign material simulates a situation in which two materials are accidentally mixed together, and it was studied whether this can be detected with the spectrometer. It was concluded that the spectrometer can detect changes between different laser sintering speeds: when the sintering speed is lowered, the intensity of the light from the process is higher. This results from a higher temperature at the sintering spot, which the spectrometer can observe, indicating that the spectrometer could serve as a tool for process observation and could support a system that helps to set up the process parameter window. Another important conclusion was how clearly the addition of foreign material could be seen: when the second material was added, a significant rise in intensity was observed in the part where the foreign material was mixed. This indicates that it is possible to see whether there are variations in the material or whether several materials are mixed together. Spectrometric monitoring of laser sintering could thus be a useful tool for process-window observation and temperature control of the sintering process, for example once the process window for a specific material has been experimentally determined to obtain the desired properties at a satisfactory sintering speed. If the data are recorded continuously, the results can also reveal faults in the part texture between layers. Changes between the monitoring data and the experimentally determined values can then indicate changes in the material caused by material faults or by wrong process parameters. The results of this study show that a spectrometer could be one possible monitoring tool, but much more research is needed before all of this becomes possible.
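
A minimal sketch of one way such monitoring could be automated: flagging samples whose recorded intensity deviates strongly from a running baseline, as observed where the foreign powder was mixed in. The signal and the alarm threshold are assumptions, not the thesis's data.

# Minimal sketch: flag spectral-intensity deviations from a running
# baseline as possible material or process-parameter changes.
import numpy as np

rng = np.random.default_rng(2)
intensity = rng.normal(loc=1000.0, scale=20.0, size=500)  # counts vs. time (synthetic)
intensity[300:340] += 250.0        # simulated foreign-material intensity rise

window = 50                        # baseline window length, assumed
for i in range(window, intensity.size):
    baseline = intensity[i - window:i]
    z = (intensity[i] - baseline.mean()) / baseline.std()
    if z > 4.0:                    # assumed alarm threshold
        print(f"sample {i}: intensity {intensity[i]:.0f} deviates "
              f"{z:.1f} sigma from baseline -> possible material change")
        break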

Relevance: 100.00%

Abstract:

Advances in hardware and software over the past decade have made it possible to capture, record and process fast data streams at a large scale. The research area of data stream mining has emerged as a consequence of these advances, in order to cope with the real-time analysis of potentially large and changing data streams. Examples of data streams include Google searches, credit card transactions, telemetric data and data from continuous chemical production processes. In some cases the data can be processed in batches by traditional data mining approaches; however, some applications require the data to be analysed in real time, as soon as they are captured, for example if the data stream is infinite, fast changing, or simply too large to be stored. One of the most important data mining techniques on data streams is classification, which involves training the classifier on the data stream in real time and adapting it to concept drift. Most data stream classifiers are based on decision trees; however, it is well known in the data mining community that there is no single optimal algorithm: an algorithm may work well on one or several datasets but badly on others. This paper introduces eRules, a new rule-based adaptive classifier for data streams based on an evolving set of rules. eRules induces a set of rules that is constantly evaluated and adapted to changes in the data stream by adding new rules and removing old ones. It differs from the more popular decision-tree-based classifiers in that it tends to leave data instances unclassified rather than force a classification that could be wrong. The ongoing development of eRules aims to improve its accuracy further through dynamic parameter setting, which will also address the problem of changing feature domain values.
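
The following skeleton is not the eRules algorithm itself, only an illustration of the behaviour described above: an evolving rule set that abstains when no rule covers an instance and prunes rules whose accuracy decays under concept drift. All class and parameter names are hypothetical.

# Skeleton of a rule-based adaptive stream classifier that abstains
# rather than guess, mirroring the behaviour described above.
class Rule:
    def __init__(self, conditions, label):
        self.conditions = conditions      # dict: feature -> (low, high)
        self.label = label
        self.hits = self.correct = 0

    def covers(self, x):
        return all(lo <= x[f] <= hi for f, (lo, hi) in self.conditions.items())

class StreamRuleClassifier:
    def __init__(self, min_accuracy=0.7):
        self.rules = []
        self.min_accuracy = min_accuracy  # assumed pruning threshold

    def predict(self, x):
        for rule in self.rules:
            if rule.covers(x):
                return rule.label
        return None                       # abstain rather than guess

    def update(self, x, y):
        for rule in self.rules:
            if rule.covers(x):
                rule.hits += 1
                rule.correct += int(rule.label == y)
        # prune rules whose accuracy has decayed (concept drift)
        self.rules = [r for r in self.rules
                      if r.hits < 10 or r.correct / r.hits >= self.min_accuracy]

clf = StreamRuleClassifier()
clf.rules.append(Rule({"temp": (0.0, 50.0)}, label="normal"))
print(clf.predict({"temp": 25.0}))        # -> 'normal'
print(clf.predict({"temp": 80.0}))        # -> None (left unclassified)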

Relevance: 100.00%

Abstract:

ASTM A529 carbon-manganese steel angle specimens were joined by flash butt welding and the effects of varying process parameter settings on the resulting welds were investigated. The weld metal and heat affected zones were examined and tested using tensile testing, ultrasonic scanning, Rockwell hardness testing, optical microscopy, and scanning electron microscopy with energy dispersive spectroscopy in order to quantify the effect of process variables on weld quality. Statistical analysis of experimental tensile and ultrasonic scanning data highlighted the sensitivity of weld strength and the presence of weld zone inclusions and interfacial defects to the process factors of upset current, flashing time duration, and upset dimension. Subsequent microstructural analysis revealed various phases within the weld and heat affected zone, including acicular ferrite, Widmanstätten or side-plate ferrite, and grain boundary ferrite. Inspection of the fracture surfaces of multiple tensile specimens, with scanning electron microscopy, displayed evidence of brittle cleavage fracture within the weld zone for certain factor combinations. Test results also indicated that hardness was increased in the weld zone for all specimens, which can be attributed to the extensive deformation of the upset operation. The significance of weld process factor levels on microstructure, fracture characteristics, and weld zone strength was analyzed. The relationships between significant flash welding process variables and weld quality metrics as applied to ASTM A529-Grade 50 steel angle were formalized in empirical process models.
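
As an illustration of such an empirical process model, a minimal least-squares sketch regressing weld strength on upset current, flashing time, and upset dimension; all values and ranges are synthetic placeholders, not the experimental data.

# Minimal sketch: fit an empirical process model of the kind described
# above by ordinary least squares. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n = 30
current = rng.uniform(10, 20, n)      # upset current [kA], assumed range
flash_t = rng.uniform(2, 6, n)        # flashing time [s], assumed range
upset_d = rng.uniform(1, 3, n)        # upset dimension [mm], assumed range
strength = (300 + 8 * current + 5 * flash_t - 12 * upset_d
            + rng.normal(scale=5, size=n))        # synthetic response [MPa]

A = np.column_stack([np.ones(n), current, flash_t, upset_d])
coef, *_ = np.linalg.lstsq(A, strength, rcond=None)
names = ["intercept", "upset current", "flashing time", "upset dimension"]
for name, c in zip(names, coef):
    print(f"{name:16s}: {c:+.2f}")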

Relevance: 100.00%

Abstract:

In this paper, the authors introduce a novel mechanism for data management in a middleware for smart home control, in which a relational database and semantic ontology storage are used at the same time in a Data Warehouse. An annotation system has been designed to specify the storage format and location, to register new ontology concepts and, most importantly, to guarantee data consistency between the two storage methods. To ease the data persistence process, the Data Access Object (DAO) pattern is applied and optimized to enhance the data consistency assurance. This mechanism also provides an easy way to develop applications and integrate them with BATMP. Finally, an application named "Parameter Monitoring Service" is given as an example to assess the feasibility of the system.
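
A minimal sketch of the write-through DAO idea, with hypothetical class names (the actual middleware API for BATMP is not reproduced here): one DAO persists each parameter to both a relational store and an ontology store so the two stay consistent.

# Minimal sketch of a DAO that writes through to two storage backends.
class RelationalStore:
    def __init__(self):
        self.rows = {}
    def save(self, key, value):
        self.rows[key] = value

class OntologyStore:
    def __init__(self):
        self.triples = []
    def save(self, key, value):
        self.triples.append((key, "hasValue", value))

class ParameterDAO:
    """Write-through DAO keeping both stores consistent."""
    def __init__(self, relational, ontology):
        self.relational = relational
        self.ontology = ontology
    def save(self, key, value):
        # naive two-phase write; a real middleware would need
        # transactions/rollback to truly guarantee consistency
        self.relational.save(key, value)
        self.ontology.save(key, value)

dao = ParameterDAO(RelationalStore(), OntologyStore())
dao.save("living_room.temperature", 21.5)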

Relevance: 100.00%

Abstract:

Thesis (Master's)--University of Washington, 2016-06

Relevance: 90.00%

Abstract:

Data mining is the process of identifying valid, implicit, previously unknown, potentially useful and understandable information in large databases; it is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, the input data can be structured, semi-structured, or unstructured, and may consist of text, categorical or numerical values. One of the important characteristics of data mining is its ability to deal with data that are large in volume, distributed, time-variant, noisy, and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rule mining is useful for market basket problems, clustering algorithms can be used to discover trends in unsupervised learning problems, classification algorithms can be applied to decision-making problems, and sequential and time series mining algorithms can be used for predicting events, fault detection, and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for data mining applications in engineering fields. Together with regression, classification is mainly used for predictive modelling. A number of classification algorithms are in practical use. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision tree and rule-based approaches such as C4.5 (Quinlan, 1996); probabilistic methods such as the Bayesian classifier (Lewis, 1998); online methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); example-based methods such as k-nearest neighbours (Duda & Hart, 1973); and SVM (Cortes & Vapnik, 1995). Other important techniques for classification tasks include Associative Classification (Liu et al., 1998) and Ensemble Classification (Tumer, 1996).
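
For illustration, a minimal sketch of one of the surveyed algorithms, k-nearest neighbours, on a synthetic two-class problem, assuming scikit-learn.

# Minimal sketch: k-nearest neighbours classification on synthetic data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (50, 2)),    # class 0 cluster
               rng.normal(3, 1, (50, 2))])   # class 1 cluster
y = np.array([0] * 50 + [1] * 50)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(f"test accuracy: {knn.score(X_te, y_te):.2f}")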

Relevance: 90.00%

Abstract:

Lossless compression algorithms of the Lempel-Ziv (LZ) family are widely used nowadays. Regarding time and memory requirements, LZ encoding is much more demanding than decoding. In order to speed up the encoding process, efficient data structures, like suffix trees, have been used. In this paper, we explore the use of suffix arrays to hold the dictionary of the LZ encoder, and propose an algorithm to search over it. We show that the resulting encoder attains roughly the same compression ratios as those based on suffix trees. However, the amount of memory required by the suffix array is fixed, and much lower than the variable amount of memory used by encoders based on suffix trees (which depends on the text to encode). We conclude that suffix arrays, when compared to suffix trees in terms of the trade-off among time, memory, and compression ratio, may be preferable in scenarios (e.g., embedded systems) where memory is at a premium and high speed is not critical.
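
A minimal sketch of the core idea, assuming Python 3.10+ (for bisect's key parameter): the dictionary is held in a suffix array and the longest match for the lookahead buffer is found by binary search over the sorted suffixes; a real LZ encoder adds windowing, offsets, and output coding on top of this.

# Minimal sketch: longest-match search over a suffix array of the
# LZ dictionary. Requires Python 3.10+ for bisect's key parameter.
import bisect

def build_suffix_array(text):
    # O(n^2 log n) construction; fine for a sketch, though real
    # encoders would use a linear-time construction algorithm
    return sorted(range(len(text)), key=lambda i: text[i:])

def longest_match(dictionary, sa, pattern):
    """Position and length of the longest prefix of `pattern`
    occurring in `dictionary`."""
    idx = bisect.bisect_left(sa, pattern, key=lambda i: dictionary[i:])
    best_pos, best_len = -1, 0
    for j in (idx - 1, idx):          # the best match is adjacent to idx
        if 0 <= j < len(sa):
            suffix = dictionary[sa[j]:]
            k = 0
            while k < min(len(suffix), len(pattern)) and suffix[k] == pattern[k]:
                k += 1
            if k > best_len:
                best_pos, best_len = sa[j], k
    return best_pos, best_len

dictionary = "abracadabra"
sa = build_suffix_array(dictionary)
print(longest_match(dictionary, sa, "abrax"))   # -> (0, 4)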

Relevance: 90.00%

Abstract:

This paper deals with establishing a methodology for characterizing the electric power profiles of medium voltage (MV) consumers. The characterization is supported by the knowledge discovery in databases (KDD) process. Data mining techniques are used with the purpose of obtaining typical load profiles of MV customers and specific knowledge of their consumption habits. In order to form the different customer classes and to find a set of representative consumption patterns, a hierarchical clustering algorithm and a clustering ensemble combination approach (WEACS) are used. Taking into account the typical consumption profile of the class to which each customer belongs, new tariff options were defined and new energy price coefficients were proposed. Finally, based on the results obtained, the consequences for the interaction between customers and electric power suppliers are analyzed.
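
A minimal sketch of the clustering step, assuming scipy: hierarchical (Ward) clustering of synthetic daily load profiles into typical classes; the WEACS ensemble combination used in the paper is not reproduced here.

# Minimal sketch: hierarchical clustering of daily load profiles into
# typical consumption classes. The profiles are synthetic.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(5)
hours = np.arange(24)
day_peak = 1.0 + 0.8 * np.exp(-((hours - 13) ** 2) / 8.0)    # midday-peaking shape
night_peak = 1.0 + 0.8 * np.exp(-((hours - 21) ** 2) / 8.0)  # evening-peaking shape
profiles = np.vstack(
    [day_peak + rng.normal(scale=0.05, size=24) for _ in range(10)] +
    [night_peak + rng.normal(scale=0.05, size=24) for _ in range(10)])

Z = linkage(profiles, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
print("cluster assignments:", labels)
for c in np.unique(labels):
    centroid = profiles[labels == c].mean(axis=0)
    print(f"class {c}: peak hour = {centroid.argmax()}h")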

Relevance: 90.00%

Abstract:

Project work submitted to obtain the Master's degree in Informatics and Computer Engineering.