19 results for engine performance


Relevance:

30.00%

Publisher:

Abstract:

Digestate from the anaerobic digestion conversion process is widely used as a farmland fertiliser. This study proposes an alternative use as a source of energy. Dried digestate was pyrolysed and the resulting oil was blended with waste cooking oil and butanol (10, 20 and 30 vol.%). The physical and chemical properties of the pyrolysis oil blends were measured and compared with those of pure fossil diesel and waste cooking oil. The blends were tested in a multi-cylinder indirect-injection compression ignition engine. Engine combustion, exhaust gas emissions and performance parameters were measured and compared with pure fossil diesel operation. The ASTM copper corrosion values for the 20% and 30% pyrolysis blends were 2c, compared to 1b for fossil diesel. The kinematic viscosities of the blends at 40 °C were 5–7 times higher than that of fossil diesel. The digestate pyrolysis oil blends produced lower in-cylinder peak pressures than fossil diesel and waste cooking oil operation. The maximum heat release rates of the blends were approximately 8% higher than with fossil diesel. The ignition delay periods of the blends were longer: the pyrolysis oil blends started to combust late, but once combustion started they burned more quickly than fossil diesel. The total burning durations of the 20% and 30% blends were 12% and 3% shorter than that of fossil diesel. At full engine load, the brake thermal efficiencies of the blends were about 3–7% lower than with fossil diesel. The pyrolysis blends gave lower smoke levels; at full engine load, the smoke level of the 20% blend was 44% lower than that of fossil diesel. In comparison to fossil diesel at full load, the brake specific fuel consumptions (wt.) of the 30% and 20% blends were approximately 32% and 15% higher. At full engine load, the CO emissions of the 20% and 30% blends were 39% and 66% lower than those of fossil diesel. The blends' CO2 emissions were similar to those of fossil diesel; at full engine load, the 30% blend produced approximately 5% higher CO2 emission than fossil diesel. The study concludes that, on the basis of these short-term engine experiments, blends of up to 30% pyrolysis oil from the digestate of arable crops can be used in a compression ignition engine.
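For reference, the two performance metrics compared above have standard textbook definitions; the formulation below is not taken from the study itself. Here P_b is the brake power, \dot{m}_f the fuel mass flow rate, and LHV the fuel's lower heating value:

```latex
% Standard definitions (not from the study): brake thermal efficiency
% and brake specific fuel consumption.
\eta_{\mathrm{bt}} = \frac{P_b}{\dot{m}_f \,\mathrm{LHV}},
\qquad
\mathrm{BSFC} = \frac{\dot{m}_f}{P_b}
```

Since \eta_{\mathrm{bt}} = 1/(\mathrm{BSFC}\cdot\mathrm{LHV}), the blends' higher BSFC is consistent with their lower brake thermal efficiency, particularly given the typically lower heating value of pyrolysis oil and butanol relative to fossil diesel.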

Relevance:

30.00%

Publisher:

Abstract:

GraphChi is the first reported disk-based graph engine that can handle billion-scale graphs efficiently on a single PC. GraphChi is able to execute several advanced data mining, graph mining and machine learning algorithms on very large graphs. With its novel parallel sliding windows (PSW) technique for loading subgraphs from disk into memory to update vertices and edges, it can achieve data-processing performance close to, and sometimes better than, that of mainstream distributed graph engines. The GraphChi authors noted, however, that memory is not effectively utilized on large datasets, which leads to suboptimal computational performance. In this paper, motivated by the concepts of 'pin' from TurboGraph and 'ghost' from GraphLab, we propose a new memory utilization mode for GraphChi, called Part-in-memory mode, to improve the performance of GraphChi algorithms. The main idea is to pin a fixed part of the data in memory for the whole computing process. Part-in-memory mode was implemented with only about 40 additional lines of code on top of the original GraphChi engine. Extensive experiments were performed with large real datasets (including a Twitter graph with 1.4 billion edges). The preliminary results show that the Part-in-memory memory-management approach reduces GraphChi's running time by up to 60% for the PageRank algorithm. Interestingly, pinning a larger portion of the data in memory does not always lead to better performance when the whole dataset cannot fit in memory: there exists an optimal portion of data to keep in memory that achieves the best computational performance.
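A minimal sketch of the pinning idea, assuming a simplified shard-based PSW loop; all names here (Shard, load_shard, update_vertices, PINNED_FRACTION) are hypothetical stand-ins, not the actual GraphChi (C++) API:

```python
# Part-in-memory sketch: pin a fixed prefix of the graph shards in RAM for
# the entire computation and stream only the remaining shards from disk on
# each pass, as GraphChi's parallel sliding windows do.
from dataclasses import dataclass

PINNED_FRACTION = 0.4  # tunable: portion of shards kept resident in memory

@dataclass
class Shard:
    path: str
    edges: list  # (src, dst) pairs; a real shard is a compressed adjacency file

def load_shard(path: str) -> Shard:
    # Stand-in for reading a shard file from disk.
    return Shard(path=path, edges=[])

def update_vertices(shard: Shard) -> None:
    # Stand-in for one PSW update pass (e.g., a PageRank step) over the shard.
    pass

def run(shard_paths: list, num_iterations: int) -> None:
    n_pinned = int(len(shard_paths) * PINNED_FRACTION)
    pinned = [load_shard(p) for p in shard_paths[:n_pinned]]  # loaded once, never evicted

    for _ in range(num_iterations):
        for i, path in enumerate(shard_paths):
            # Disk I/O is paid only for the unpinned tail of the shard list.
            shard = pinned[i] if i < n_pinned else load_shard(path)
            update_vertices(shard)
```

The experiment reported above amounts to tuning PINNED_FRACTION: once pinning starves the sliding-window buffers of memory, a larger pinned portion hurts rather than helps, which matches the observation that an optimal portion exists.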

Relevance:

30.00%

Publisher:

Abstract:

This work contributes to the development of search engines that self-adapt their size in response to fluctuations in workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computational resources to or from the engine. In this paper, we focus on the problem of regrouping the metric-space search index when the number of virtual machines used to run the search engine is modified to reflect changes in workload. We propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. We tested its performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud, calibrating the results to compensate for the performance fluctuations of the platform. Our experiments show that, compared with computing the index from scratch, the incremental algorithm speeds up index computation 2–10 times while maintaining similar search performance.
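The incremental idea can be illustrated with the sketch below, under the simplifying assumption that the index is a flat set of groups of roughly equal cost assigned to machines; the paper's metric-space details are omitted and the function name is hypothetical:

```python
# Incremental index regrouping sketch: when the machine count changes, move
# only enough index groups to rebalance, instead of recomputing the whole
# assignment from scratch.
import math

def regroup(old, new_machines):
    """Rebalance {machine: [group_id, ...]} onto new_machines incrementally."""
    keep = {m: list(old.get(m, [])) for m in new_machines}
    # Groups whose machine was deallocated must move in any case.
    pool = [g for m, gs in old.items() if m not in keep for g in gs]

    total = sum(len(gs) for gs in keep.values()) + len(pool)
    target = math.ceil(total / len(new_machines))

    # Trim overloaded machines into the pool...
    for gs in keep.values():
        while len(gs) > target:
            pool.append(gs.pop())
    # ...and fill underloaded machines from it.
    for gs in keep.values():
        while len(gs) < target and pool:
            gs.append(pool.pop())
    return keep

old = {"vm1": ["g1", "g2", "g3"], "vm2": ["g4", "g5", "g6"]}
print(regroup(old, ["vm1", "vm2", "vm3"]))
# Only two of six groups move; vm1 and vm2 keep most of their index,
# whereas rebuilding from scratch would touch every group.
```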

Relevance:

30.00%

Publisher:

Abstract:

This research focuses on automatically adapting a search engine's size in response to fluctuations in query workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computational resources to or from the engine. Our contribution is an adaptive search engine that repeatedly re-evaluates its load and, when appropriate, switches over to a different number of active processors. We focus on three sub-problems: Continually determining the Number of Processors (CNP), the New Grouping Problem (NGP) and the Regrouping Order Problem (ROP). CNP is the problem of determining, as the query workload changes, the ideal number of processors p to keep active in the search engine at any given time. NGP arises once a change in the number of processors has been decided: it must then be determined how the groups of search data are distributed across the processors. ROP is the problem of redistributing this data onto the processors while keeping the engine responsive and minimising both the switchover time and the incurred network load. We propose solutions for these sub-problems. For NGP, we propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. For ROP, we present an efficient method for redistributing data among processors while keeping the search engine responsive. For CNP, we propose an algorithm that determines the new size of the search engine by re-evaluating its load. We tested the solution's performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud. Our experiments show that, when our NGP solution is compared with computing the index from scratch, the incremental algorithm speeds up index computation 2–10 times while maintaining similar search performance. The chosen redistribution method is 25% to 50% faster than other methods and reduces the network load by around 30%. For CNP, we present a deterministic algorithm that shows a good ability to determine a new search engine size. When combined, these algorithms yield an adaptive algorithm that is able to adjust the search engine's size under a variable workload.
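As one concrete illustration of the CNP component, a deterministic load-re-evaluation controller of the kind described could look like the sketch below; the thresholds, names and scaling rule are illustrative assumptions, not the thesis's actual algorithm or parameters:

```python
# CNP-style controller sketch: periodically re-evaluate observed query load
# and deterministically pick a new processor count. The chosen size would
# then be handed to NGP/ROP for regrouping and data migration.
import math

HIGH_UTIL = 0.8   # scale up above this per-processor utilisation
LOW_UTIL = 0.3    # scale down below it
MIN_P, MAX_P = 1, 64

def choose_processors(current_p: int,
                      queries_per_sec: float,
                      capacity_per_proc: float) -> int:
    util = queries_per_sec / (current_p * capacity_per_proc)
    if util > HIGH_UTIL or util < LOW_UTIL:
        # Re-size so the engine sits at a mid-range target utilisation.
        target_util = (HIGH_UTIL + LOW_UTIL) / 2
        new_p = math.ceil(queries_per_sec / (target_util * capacity_per_proc))
        return max(MIN_P, min(MAX_P, new_p))
    return current_p  # within the dead band: avoid oscillating
```

The dead band between the two thresholds is one simple way to keep such a controller deterministic while preventing it from thrashing between sizes under a fluctuating workload.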