973 results for processing engineering
Abstract:
Multiresolution Triangular Mesh (MTM) models are widely used to improve the performance of large-terrain visualization by replacing the original model with a simplified one. MTM models, which consist of both original and simplified data, are commonly stored in spatial database systems due to their size. The relatively slow access speed of disks makes data retrieval the bottleneck of such terrain visualization systems. Existing spatial access methods proposed to address this problem rely on main-memory MTM models, which leads to significant overhead during query processing. In this paper, we approach the problem from a new perspective and propose a novel MTM, called direct mesh, that is designed specifically for secondary storage. It natively supports available indexing methods and requires no modification to the MTM structure. Experimental results, based on two real-world data sets, show an average performance improvement of 5-10 times over existing methods.
Abstract:
Quantile computation has many applications, including data mining and financial data analysis. It has been shown that an ε-approximate summary can be maintained so that, given a quantile query (φ, ε), the data item at rank ⌈φN⌉ may be approximately obtained within rank error εN over all N data items in a data stream or in a sliding window. However, scalable online processing of massive continuous quantile queries with different φ and ε poses a new challenge because the summary is continuously updated as new data items arrive. In this paper, we first aim to dramatically reduce the number of distinct query results by grouping different queries into clusters so that each cluster can be processed virtually as a single query while users' precision requirements are retained. Second, we aim to minimize the total query processing cost. Efficient algorithms are developed to minimize the total number of cluster reprocessings and to produce the minimum number of clusters, respectively. The techniques are extended to maintain near-optimal clustering when queries are registered and removed arbitrarily, against whole data streams or sliding windows. In addition to theoretical analysis, our performance study indicates that the proposed techniques are indeed scalable with respect to the number of input queries as well as the number of items and the item arrival rate in a data stream.
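The rank guarantee described above can be illustrated with a minimal static sketch (not the paper's streaming summary or clustering algorithms): keep roughly every 2εN-th item of a sorted data set, and any query (φ, ε) can then be answered within rank error εN. All function names and the sampling scheme here are illustrative assumptions.

```python
import math

def build_summary(sorted_items, eps):
    """Keep every step-th item of the sorted data, step = floor(2*eps*N).
    Consecutive kept ranks are at most 2*eps*N apart, so the nearest
    kept rank to any target rank is within eps*N of it."""
    n = len(sorted_items)
    step = max(1, math.floor(2 * eps * n))
    ranks = list(range(0, n, step))
    if ranks[-1] != n - 1:
        ranks.append(n - 1)          # always keep the maximum item
    return [(r, sorted_items[r]) for r in ranks]

def query(summary, phi, n):
    """Return an item whose rank is within eps*n of ceil(phi*n)."""
    target = min(n - 1, math.ceil(phi * n))
    # pick the stored (rank, item) pair whose rank is closest to target
    _, item = min(summary, key=lambda p: abs(p[0] - target))
    return item
```

For N = 100 and ε = 0.1, the summary keeps only 6 of the 100 items, yet any quantile query is answered within 10 rank positions of the exact answer.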
Abstract:
The dairy industry is a global industry that provides significant nutritional benefit to many cultures. In Australia the industry is especially important economically, being a large export earner as well as a vital domestic sector. In recent years the sector has come under increased competitive pressure and has restructured to cope with the changes. The industry recently undertook an eco-efficiency project to investigate where business and environmental improvements might be found. The project involved collecting and collating previous project data and surveying 38 companies across different dairy operations, from market milk to dried products. After the survey, 10 sites in two states were visited to discuss eco-efficiency issues in detail with key players. From the surveys, visits and data compilation, a comprehensive manual was prepared to help interested companies find relevant eco-efficiency data easily and assist them in the implementation process. Ten fact sheets were also produced, covering water management, water recycling and re-use, refrigeration optimisation, boiler optimisation, biogas, the use of treated wastewater, yield optimisation and product recovery, optimisation of CIP systems, chemical use, and membranes. The project highlighted the large amount of technical and engineering expertise within the sector that could deliver eco-efficiency outcomes, and also identified the opportunities that exist in some operations to save energy, input raw materials and water.
Abstract:
Recombinant proteins over-expressed in host cells, such as Escherichia coli, are often found as insoluble, inactive protein aggregates known as inclusion bodies (IBs). Recently, a novel process for IB extraction and solubilisation, based on chemical extraction, has been reported. While this method has the potential to radically intensify traditional IB processing, the process economics of the new technique have yet to be reported. This study evaluates the process economics of several IB processing schemes based on chemical extraction and/or traditional techniques. Simulations and economic analysis were conducted at various processing conditions using granulocyte macrophage-colony stimulating factor, expressed as IBs in E. coli, as a model protein. In most cases, IB processing schemes based on chemical extraction, which have a shorter downstream cascade, demonstrated a competitive economic edge over the conventional route, validating the new process as an economically more viable alternative for IB processing.
Abstract:
In many advanced applications, data are described by multiple high-dimensional features. Moreover, different queries may weight these features differently; some may not even specify all the features. In this paper, we propose a solution to support efficient query processing in these applications. We devise a novel representation that compactly captures f features in two components: the first component is a 2D vector that reflects the distance range (minimum and maximum values) of the f features with respect to a reference point (the center of the space) in a metric space, and the second component is a bit signature, with two bits per dimension, obtained by analyzing each feature's descending energy histogram. This representation enables two levels of filtering: the first component prunes away points that do not share similar distance ranges, while the bit signature filters away points based on the dimensions of the relevant features. Moreover, the representation facilitates the use of a single index structure to further speed up processing. We employ the classical B+-tree for this purpose. We also propose a KNN search algorithm that exploits the access orders of critical dimensions of highly selective features and partial distances to prune the search space more effectively. Our extensive experiments on both real-life and synthetic data sets show that the proposed solution offers significant performance advantages over sequential scan and over retrieval methods using single and multiple VA-files.
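The two-level filtering idea can be sketched minimally as follows. This is an illustrative simplification under stated assumptions, not the paper's exact encoding: the signature here uses one bit per feature rather than the two-bit energy-histogram signature, the B+-tree index is replaced by a plain dictionary, and all names are hypothetical.

```python
import math

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def encode(features, center):
    """First component: the (min, max) range of the f feature-to-center
    distances.  Second component: a simplified signature with one bit per
    feature, marking whether it lies in the outer half of that range."""
    ds = [dist(f, center) for f in features]
    lo, hi = min(ds), max(ds)
    sig = tuple(int(d > (lo + hi) / 2) for d in ds)
    return (lo, hi), sig

def candidates(index, q_range, q_sig):
    """Level 1: prune points whose distance range does not overlap the
    query's range.  Level 2: prune points whose signature differs.
    (Exact signature match is a simplification; a real KNN filter would
    relax this to avoid false dismissals.)"""
    lo_q, hi_q = q_range
    out = []
    for pid, ((lo, hi), sig) in index.items():
        if hi < lo_q or lo > hi_q:   # no range overlap -> prune
            continue
        if sig != q_sig:             # signature mismatch -> prune
            continue
        out.append(pid)
    return out
```

A point whose features all lie far from the query's distance band is rejected by the cheap range test alone, before any per-dimension work, which is the intent of the first-level filter.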