34 results for Network scale-up method

in CentAUR: Central Archive University of Reading - UK


Relevance: 100.00%

Publisher:

Abstract:

Purpose – Progress in retrofitting the UK's commercial properties continues to be slow and fragmented. New research from the UK and USA suggests that radical changes are needed to drive large-scale retrofitting, and that new and innovative models of financing can create new opportunities. The purpose of this paper is to offer insights into the terminology of retrofit and the changes in UK policy and practice that are needed to scale up activity in the sector. Design/methodology/approach – The paper reviews and synthesises key published research into commercial property retrofitting in the UK and USA, and also draws on policy and practice from the EU and Australia. Findings – The paper provides a definition of “retrofit” and compares and contrasts it with “refurbishment” and “renovation” in an international context. The paper summarises key findings from recent research and suggests that a number of policy and practice measures need to be implemented in the UK for commercial retrofitting to succeed at scale. These include improved funding vehicles for retrofit; better transparency in actual energy performance; and consistency in measurement, verification and assessment standards. Practical implications – Policy and practice in the UK need to change if large-scale commercial property retrofit is to be rolled out successfully. This requires mandatory legislation underpinned by incentives and penalties for non-compliance. Originality/value – This paper synthesises recent research to provide a set of policy and practice recommendations which draw on international experience and can assist with implementation in the UK.

Relevance: 100.00%

Publisher:

Abstract:

Co-combustion performance trials of Meat and Bone Meal (MBM) and peat were conducted using a bubbling fluidized bed (BFB) reactor. In the combustion performance trials, the effects of co-combusting MBM and peat on flue gas emissions, bed fluidization, the tendency of ash to agglomerate in the bed, and the composition and quality of the ash were studied. MBM was mixed with peat at six levels between 15% and 100%. Emissions were predominantly below regulatory limits. CO concentrations in the flue gas exceeded the 100 mg/m³ limit only upon combustion of pure MBM. SO2 emissions were found to be over the limit of 50 mg/m³, while in all trials NOx emissions were below the limit of 300 mg/m³. The HCl content of the flue gases was found to vary near the limit of 30 mg/m³. VOCs, however, were within their limits. Bed agglomeration was avoided when the bed temperature was about 850 °C and only 20% MBM was co-combusted. This study indicates that a pilot-scale BFB reactor can, under optimum conditions, be operated within emission limits when MBM is used as a co-fuel with peat. This can provide a basis for further scale-up development work in industrial-scale BFB applications.
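Since the abstract quotes concrete regulatory limits, a tiny Python sketch can make the comparison explicit. The limit values below are taken from the abstract; the measured values and function name are hypothetical placeholders, not data from the trials.

```python
# Illustrative check of flue-gas measurements (mg/m³) against the
# regulatory limits quoted in the abstract. Measured values below are
# hypothetical, chosen to resemble the reported behaviour.
LIMITS_MG_M3 = {"CO": 100, "SO2": 50, "NOx": 300, "HCl": 30}

def within_limits(measured: dict) -> dict:
    """Return, per species, whether the measurement is within its limit."""
    return {gas: measured[gas] <= limit for gas, limit in LIMITS_MG_M3.items()}

print(within_limits({"CO": 85.0, "SO2": 62.0, "NOx": 210.0, "HCl": 28.0}))
# -> {'CO': True, 'SO2': False, 'NOx': True, 'HCl': True}
```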

Relevance: 100.00%

Publisher:

Abstract:

In a world where data is captured on a large scale, the major challenge for data mining algorithms is to be able to scale up to large datasets. There are two main approaches to inducing classification rules: one is the divide and conquer approach, also known as the top-down induction of decision trees; the other is the separate and conquer approach. A considerable amount of work has been done on scaling up the divide and conquer approach. However, very little work has been conducted on scaling up the separate and conquer approach. In this work we describe a parallel framework that allows the parallelisation of a certain family of separate and conquer algorithms, the Prism family. Parallelisation helps the Prism family of algorithms to harness additional computing resources in a network of computers in order to make the induction of classification rules scale better on large datasets. Our framework also incorporates a pre-pruning facility for parallel Prism algorithms.
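For readers unfamiliar with separate and conquer induction, the following minimal sketch (in Python, with illustrative data structures and function names that are not the paper's API) shows the basic loop a Prism-style learner follows: specialise a rule one condition at a time until it covers only the target class, then separate out the covered examples and repeat.

```python
# Minimal sketch of separate-and-conquer rule induction in the style of
# Prism (simplified: categorical attributes only, no pruning, and no
# handling of ties or unseparable data).

def class_purity(covered, condition, target_class):
    """Fraction of instances matching (attribute, value) that have target_class."""
    attr, val = condition
    match = [(a, l) for a, l in covered if a.get(attr) == val]
    return (sum(l == target_class for _, l in match) / len(match)) if match else 0.0

def induce_rules_for_class(instances, target_class):
    """instances: list of (attribute_dict, label) pairs.
    Returns a list of rules; each rule is a list of (attribute, value) conditions."""
    remaining = list(instances)
    rules = []
    while any(label == target_class for _, label in remaining):
        rule, covered = [], remaining
        # Conquer: specialise until the rule covers only the target class.
        while any(label != target_class for _, label in covered):
            best = max(
                ((a, v) for attrs, _ in covered for a, v in attrs.items()
                 if (a, v) not in rule),
                key=lambda cond: class_purity(covered, cond, target_class),
            )
            rule.append(best)
            covered = [(attrs, l) for attrs, l in covered
                       if attrs.get(best[0]) == best[1]]
        rules.append(rule)
        # Separate: remove the instances the new rule covers.
        remaining = [x for x in remaining if x not in covered]
    return rules
```

A parallel framework can then distribute the evaluation of candidate conditions and the filtering of covered data across machines, which is the kind of scaling the work above addresses.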

Relevance: 100.00%

Publisher:

Abstract:

The rapid increase in the size and number of databases demands data mining approaches that are scalable to large amounts of data. This has led to the exploration of parallel computing technologies in order to perform data mining tasks concurrently using several processors. Parallelization seems to be a natural and cost-effective way to scale up data mining technologies. One of the most important of these data mining technologies is the classification of newly recorded data. This paper surveys advances in parallelization in the field of classification rule induction.

Relevance: 100.00%

Publisher:

Abstract:

Advances in hardware and software technology enable us to collect, store and distribute large quantities of data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example, medical scientists can use patterns extracted from historic patient data to determine whether a new patient is likely to respond positively to a particular treatment; marketing analysts can use patterns extracted from customer data for future advertising campaigns; and finance experts have an interest in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
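As a concrete illustration of the data-parallel pattern such approaches rely on, the sketch below (Python; the partitioning scheme and the toy "mining" step are assumptions for illustration, not from the chapter) partitions a dataset across worker processes, mines each partition locally, and merges the results.

```python
# Minimal sketch of data-parallel mining: partition the data, mine each
# partition in a separate process, then merge the local results. The
# "mining" step here is a toy item-frequency count standing in for a
# real pattern-mining task.
from collections import Counter
from multiprocessing import Pool

def mine_partition(partition):
    """Local step: count item occurrences in one data partition."""
    counts = Counter()
    for transaction in partition:
        counts.update(transaction)
    return counts

def mine_parallel(transactions, n_workers=4):
    """Split the data, mine partitions concurrently, merge the counts."""
    chunk = max(1, len(transactions) // n_workers)
    partitions = [transactions[i:i + chunk]
                  for i in range(0, len(transactions), chunk)]
    with Pool(n_workers) as pool:
        local_counts = pool.map(mine_partition, partitions)
    return sum(local_counts, Counter())

if __name__ == "__main__":
    data = [["a", "b"], ["b", "c"], ["a", "b", "c"]] * 1000
    print(mine_parallel(data).most_common(3))
```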

Relevance: 100.00%

Publisher:

Abstract:

The scale-up of Spark Plasma Sintering (SPS) for the consolidation of large square monoliths (50 × 50 × 3 mm³) of thermoelectric material is demonstrated, and the properties of the fabricated samples are compared with those from laboratory-scale SPS. The SPS processing of n-type TiS2 and p-type Cu10.4Ni1.6Sb4S13 produces highly dense compacts of phase-pure material. Electrical and thermal transport property measurements reveal that the thermoelectric performance of the consolidated n- and p-type materials is comparable with that of material processed using laboratory-scale SPS, with ZT values that approach 0.8 and 0.35 at 700 K for Cu10.4Ni1.6Sb4S13 and TiS2, respectively. Mechanical characterisation of the consolidated materials shows that large-scale SPS processing produces highly homogeneous materials with hardness and elastic moduli that deviate little from values obtained on materials processed at the laboratory scale.
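For context, the ZT values quoted here refer to the standard dimensionless thermoelectric figure of merit, conventionally defined as:

```latex
% Dimensionless thermoelectric figure of merit (standard definition):
% S is the Seebeck coefficient, \sigma the electrical conductivity,
% \kappa the total thermal conductivity, and T the absolute temperature.
ZT = \frac{S^{2}\,\sigma\,T}{\kappa}
```

Higher ZT indicates better thermoelectric performance, which is why comparable ZT between laboratory-scale and large-scale SPS samples is the key result here.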

Relevance: 100.00%

Publisher:

Abstract:

As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. These large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.
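For readers unfamiliar with the WTG method, a schematic form of the commonly used relaxed-WTG diagnostic is given below; this is a standard formulation from the literature, not necessarily the exact configuration used by every model in the intercomparison.

```latex
% Relaxed weak temperature gradient (WTG) diagnostic, schematic form:
% the large-scale vertical velocity w_WTG is chosen so that adiabatic
% cooling relaxes the simulated potential temperature \theta toward the
% reference profile \theta_ref over a prescribed timescale \tau.
w_{\mathrm{WTG}}\,\frac{\partial \overline{\theta}}{\partial z}
  = \frac{\theta - \theta_{\mathrm{ref}}}{\tau}
```

The DGW method instead diagnoses the large-scale circulation from a damped gravity wave equation, which is consistent with the smoother, less top-heavy velocity profiles noted above.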

Relevance: 100.00%

Publisher:

Abstract:

Land cover data derived from satellites are commonly used to prescribe inputs to models of the land surface. Since such data inevitably contain errors, quantifying how uncertainties in the data affect a model’s output is important. To do so, a spatial distribution of possible land cover values is required to propagate through the model’s simulation. However, at large scales, such as those required for climate models, such spatial modelling can be difficult. Also, computer models often require land cover proportions at sites larger than the original map scale as inputs, and it is the uncertainty in these proportions that this article discusses. This paper describes a Monte Carlo sampling scheme that generates realisations of land cover proportions from the posterior distribution as implied by a Bayesian analysis that combines spatial information in the land cover map and its associated confusion matrix. The technique is computationally simple and has been applied previously to the Land Cover Map 2000 for the region of England and Wales. This article demonstrates the ability of the technique to scale up to large (global) satellite-derived land cover maps and reports its application to the GlobCover 2009 data product. The results show that, in general, the GlobCover data possess only small biases, with the largest belonging to non-vegetated surfaces. Among vegetated surfaces, the most prominent area of uncertainty is Southern Africa, which represents a complex heterogeneous landscape. It is also clear from this study that greater resources need to be devoted to the construction of comprehensive confusion matrices.
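The sketch below (Python; the variable names and toy numbers are illustrative, and unlike the paper's technique it ignores the spatial information in the map) conveys the core Monte Carlo idea: draw each pixel's true class from the posterior implied by the confusion matrix, then aggregate to site-level proportions.

```python
# Simplified sketch (assumed, not the authors' code): sample "true"
# classes per pixel from the posterior implied by a confusion matrix,
# then aggregate to site-level proportions. Repeating this yields a
# Monte Carlo sample of the proportion distribution.
import numpy as np

rng = np.random.default_rng(0)

def sample_proportions(mapped, confusion, n_draws=1000):
    """mapped: 1-D array of mapped class indices for the pixels in one site.
    confusion[i, j]: count of pixels mapped as class i whose true class is j.
    Returns an (n_draws, n_classes) array of sampled true-class proportions."""
    # Posterior over true class given mapped class (row-normalised counts).
    posterior = confusion / confusion.sum(axis=1, keepdims=True)
    n_classes = confusion.shape[0]
    draws = np.empty((n_draws, n_classes))
    for k in range(n_draws):
        true = np.array([rng.choice(n_classes, p=posterior[m]) for m in mapped])
        draws[k] = np.bincount(true, minlength=n_classes) / len(true)
    return draws

# Toy example: 2 classes, a site of 100 mapped pixels.
conf = np.array([[90, 10], [20, 80]], dtype=float)
site = np.array([0] * 70 + [1] * 30)
samples = sample_proportions(site, conf, n_draws=200)
print(samples.mean(axis=0), samples.std(axis=0))
```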

Relevance: 100.00%

Publisher:

Abstract:

The deterioration of river water quality in the Agueda hydrographic basin, mostly in its western part, partly reflects the high rate of housing and industrial development in this area in recent years. The streams have acted as a sink for organic and inorganic loads from several origins: domestic and industrial sewage and agricultural waste. The contents of the heavy metals Cr, Cd, Ni, Cu, Pb, and Zn were studied by sequential chemical extraction of the principal geochemical phases of streambed sediments, in the <63 μm fraction, in order to assess their potential availability to the environment and to investigate the metal concentrations, assemblages, and trends. The granulometric and mineralogical characteristics of this sediment fraction were also studied. This study revealed clear pollution by Cr, Cd, Ni, Cu, Zn, and Pb, resulting from both natural and anthropogenic sources. The chemical transport of metals appears to occur essentially through the following geochemical phases, in decreasing order of significance: (exchangeable + carbonates) ≫ (organics) ≫ (Mn and Fe oxides and hydroxides). The (exchangeable + carbonate) phase plays an important part in the fixation of Cu, Ni, Zn, and Cd. The organic phase is important in the fixation of Cr, Pb, and also Cu and Ni. Analyzing the metal contents in the residual fraction, we conclude that Zn and Cd are the most mobile, and that Cr and Pb are less mobile than Cu and Ni. The proximity of the pollutant sources and the timing of the influx of contaminated material control the distribution of the contaminant-related sediments both locally and at the network scale.

Relevance: 100.00%

Publisher:

Abstract:

This paper presents a multicriteria decision-making model for lifespan energy efficiency assessment of intelligent buildings (IBs). The decision-making model, called IBAssessor, is developed using an analytic network process (ANP) method and a set of lifespan performance indicators for IBs selected by a new quantitative approach called the energy-time consumption index (ETI). In order to improve the quality of decision-making, the authors make use of previous research achievements including a lifespan sustainable business model, the Asian IB Index, and a number of relevant publications. Practitioners can use the IBAssessor ANP model at different stages of an IB lifespan for either engineering- or business-oriented assessments. Finally, this paper presents an experimental case study to demonstrate how to use the IBAssessor ANP model to solve real-world design tasks.
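One step that a numerical example can clarify is the ANP computation of limiting priorities. The sketch below (Python; the toy supermatrix is illustrative, not data from IBAssessor) raises a column-stochastic weighted supermatrix to successive powers until it converges, after which any column gives the global priority vector.

```python
# Minimal sketch of computing ANP limiting priorities: raise the
# column-stochastic weighted supermatrix to successive powers until it
# converges. The 3x3 matrix below is a toy example, not IBAssessor data.
import numpy as np

def limit_supermatrix(W, tol=1e-9, max_iter=10_000):
    """W: column-stochastic weighted supermatrix. Returns its limit matrix."""
    M = W.copy()
    for _ in range(max_iter):
        M_next = M @ W
        if np.max(np.abs(M_next - M)) < tol:
            return M_next
        M = M_next
    return M

W = np.array([[0.2, 0.5, 0.3],
              [0.5, 0.3, 0.4],
              [0.3, 0.2, 0.3]])   # each column sums to 1
print(limit_supermatrix(W)[:, 0])  # limiting (global) priority vector
```

In a full ANP application, the supermatrix entries come from pairwise comparisons of the network's clusters and elements.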

Relevance: 100.00%

Publisher:

Abstract:

We summarise the work of an interdisciplinary network set up to explore the impacts of climate change in the British Uplands. In this CR Special, the contributors present the state of knowledge; this introduction synthesises that knowledge and derives implications for decision makers. The Uplands are valued semi-natural habitats, providing ecosystem services that have historically been taken for granted. For example, peat soils, which are mostly found in the Uplands, contain around 50% of the terrestrial carbon in the UK. Land management continues to be a driver of ecosystem service delivery. Degraded and managed peatlands are subject to erosion and carbon loss, with negative impacts on biodiversity, carbon storage and water quality. Climate change is already being experienced in the British Uplands and is likely to exacerbate these pressures. Climate envelope models suggest that as much as 50% of British Uplands and peatlands will be exposed to climate stress by the end of the 21st century under both low and high emissions scenarios. However, process-based models of the response of organic soils to this climate stress do not give a consistent indication of what this will mean for soil carbon: results range from a very slight increase in uptake, through a clear decline, to a net carbon loss. Preserving existing peat stocks is an important climate mitigation strategy, even if new peat stops forming. Preserving upland vegetation cover is a key win–win management strategy that will reduce erosion and loss of soil carbon, and protect a variety of services such as the continued delivery of a high quality water resource.

Relevance: 100.00%

Publisher:

Abstract:

The aim of this study is to investigate the separation of astaxanthin from the cells of Phaffia rhodozyma using colloidal gas aphrons (CGA), which are surfactant-stabilized microbubbles, in a flotation column. Previous studies reported that optimum recoveries are achieved at conditions that favor electrostatic interactions. Therefore, in this study, CGA generated from the cationic surfactant hexadecyl trimethyl ammonium bromide (CTAB) were applied to suspensions of cells pretreated with NaOH. The different operating modes (batch and continuous) and the effects of the volumetric ratio of CGA to feed, the initial concentration of the feed, the operating height, and the flow rate of CGA on the separation of astaxanthin were investigated. The volumetric ratio was found to have a significant effect on the separation of astaxanthin in both batch and continuous experiments. Additionally, the effect of homogenization of the cells on the purity of the recovered fractions was investigated, showing that homogenization resulted in increased purity. Moreover, different concentrations of surfactant were used for the generation of CGA for the recovery of astaxanthin in batch mode; it was found that recoveries of up to 98% could be achieved using CGA generated from a 0.8 mM CTAB solution, which is below the CTAB critical micellar concentration (CMC). These results offer important information for the scale-up of the separation of astaxanthin from the cells of P. rhodozyma using CGA.

Relevance: 100.00%

Publisher:

Abstract:

In a previous study we demonstrated that gallic acid (GA) in its anionic form can be recovered from aqueous solutions using colloidal gas aphrons (CGA) generated from the cationic surfactant cetyltrimethylammonium bromide (CTAB). The aim of the present work is to gain a better understanding of the separation mechanism in order to determine the optimum operating conditions that maximise the recovery of GA while preserving its antioxidant properties. Zeta potential measurements were carried out to characterise the surface charge of GA, CTAB and their mixtures at three different pH conditions (both in buffers and in aqueous solutions). GA interacted strongly with CTAB at pH above its pKa of 3.14, where it is ionised and negatively charged. However, at pH above 7, GA becomes oxidised and loses its antioxidant power. GA recovery was mainly affected by pH, ionic strength, surfactant/GA molar ratio, mixing conditions and contact time. Scale-up of the separation using a flotation column resulted in both higher recovery and better reproducibility. Preliminary experiments with grape marc extracts confirmed the potential application of this separation for the recovery of polyphenols from complex feedstocks.

Relevance: 100.00%

Publisher:

Abstract:

Forest canopies are important components of the terrestrial carbon budget, which has motivated a worldwide effort, FLUXNET, to measure CO₂ exchange between forests and the atmosphere. These measurements are difficult to interpret and to scale up to estimate exchange across a landscape. Here we review the effects of complex terrain on the mean flow, turbulence, and scalar exchange in canopy flows, as exemplified by adjustment to forest edges and hills, including the effects of stable stratification. We focus on the fundamental fluid mechanics, in which developments in theory, measurements, and modeling, particularly through large-eddy simulation, are identifying important processes and providing scaling arguments. These developments set the stage for the development of predictive models that can be used in combination with measurements to estimate exchange at the landscape scale.

Relevance: 100.00%

Publisher:

Abstract:

In a world where massive amounts of data are recorded on a large scale, we need data mining technologies to gain knowledge from the data in a reasonable time. The Top Down Induction of Decision Trees (TDIDT) algorithm is a very widely used technology for predicting the classification of newly recorded data. However, alternative technologies have been derived that often produce better rules but do not scale well to large datasets. One such alternative to TDIDT is the PrismTCS algorithm. PrismTCS performs particularly well on noisy data but does not scale well to large datasets. In this paper we introduce Prism and investigate its scaling behaviour. We describe how we improved the scalability of the serial version of Prism and investigate its limitations. We then describe our work to overcome these limitations by developing a framework to parallelise algorithms of the Prism family and similar algorithms. We also present the scale-up results of a first prototype implementation.
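As an illustration of one way the inner step of a Prism-style learner can be parallelised, the sketch below (Python; a shared-memory toy with assumed data structures, whereas the framework described in the paper is a distributed one) lets each worker evaluate candidate conditions on its subset of the attributes while the master keeps the globally best condition.

```python
# Toy parallelisation of one Prism-style induction step: workers score
# candidate (attribute, value) conditions on disjoint attribute subsets;
# the master picks the globally best one. Data format is illustrative:
# each instance is an (attribute_dict, label) pair.
from multiprocessing import Pool

def best_local_condition(args):
    """Return the best (attribute, value) condition, with its purity score,
    among the candidate conditions for one subset of the attributes."""
    covered, attributes, target = args
    best, best_score = None, -1.0
    for attr in attributes:
        for val in {inst[0].get(attr) for inst in covered} - {None}:
            match = [inst for inst in covered if inst[0].get(attr) == val]
            score = sum(l == target for _, l in match) / len(match)
            if score > best_score:
                best, best_score = (attr, val), score
    return best, best_score

def best_global_condition(covered, all_attrs, target, n_workers=4):
    """Split the attributes across workers and keep the overall best condition."""
    chunks = [all_attrs[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        results = pool.map(best_local_condition,
                           [(covered, chunk, target) for chunk in chunks])
    return max(results, key=lambda r: r[1])[0]
```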